OpenAI presents a new tool that concerns election administrators.

"OpenAI" emphasized the importance of referring to the development of voices by artificial intelligence when using the "Voice Engine"


According to Agence France-Presse, OpenAI, the creator of the well-known generative-AI chatbot ChatGPT, has introduced a voice cloning tool whose use will be restricted to guard against fraud and other abuse.
According to an OpenAI statement, the tool, known as "Voice Engine," can reproduce a person's speech from a 15-second audio sample. The statement also described the results of a small-scale test.

 

According to the press release, "We realize that the ability to generate human-like voices is a step that entails great risks, especially in this election year."


The statement goes on to say, "We work with American and international partners from governments, media, entertainment, education, civil society, and other sectors, and we take their feedback into account as we build the tool."


This year, several nations will hold elections, and disinformation experts are concerned about the misuse of generative artificial intelligence applications, particularly voice cloning tools, which are inexpensive, simple to use, and difficult to track.


OpenAI confirmed that it had adopted a “cautious approach” before deploying the new tool more widely “due to the potential for misuse of synthetic voices.”

The tool's unveiling comes after a consultant working for the presidential campaign of a Democratic rival to Joe Biden created a robocall impersonating the US president, who is running for a new term.

A voice resembling Joe Biden's urged voters to abstain from voting in the New Hampshire primary.

The United States has since banned robocalls that use AI-cloned voices, in order to combat political or commercial fraud.

 

OpenAI explained that the partners testing Voice Engine have agreed to rules requiring, for example, explicit consent from any person before their voice is used, and clear disclosure to listeners that the voices were generated by artificial intelligence.

The company continued, "We have adopted a set of security measures, including a watermark, so that we can trace the origin of every voice created by the new tool, in addition to proactively monitoring its use."
