OpenAI addresses election meddling concerns with AI technology
OpenAI addresses election meddling concerns, collaborating with election authorities and enhancing transparency in AI technologies to ensure responsible use globally.
OpenAI, a US-based AI research and deployment company, released a blog post on Monday (15 Jan) addressing concerns about the potential misuse of its technology in elections.
With over a third of the world gearing up for polls this year, the company aims to reassure the public about the responsible use of its AI products amid fears of election interference.
The unease stems from OpenAI’s development of two groundbreaking products, ChatGPT and DALL-E.
ChatGPT is capable of convincingly mimicking human writing, while DALL-E can generate realistic-looking images, raising concerns about the creation of deceptive content or “deepfakes” that could compromise the integrity of elections.
Even OpenAI’s CEO, Sam Altman, has voiced apprehension: testifying before Congress in May last year, he said he was nervous about generative AI’s potential to spread one-on-one interactive disinformation and thereby threaten election integrity.
In response to these concerns, OpenAI announced a collaboration with the National Association of Secretaries of State in the United States, where a presidential election is scheduled for this year.
This partnership aims to promote effective democratic processes and ensure that AI technologies are used responsibly.
Notably, ChatGPT will redirect users to CanIVote.org when presented with specific election-related queries.
OpenAI is actively working on enhancing transparency in the use of its AI technologies.
For DALL-E-generated images, the company plans to introduce a “cr” (Content Credentials) icon, following guidelines set by the Coalition for Content Provenance and Authenticity (C2PA). This icon will indicate that an image was AI-generated.
Moreover, OpenAI is developing methods to identify DALL-E-generated content even after the images have been modified, part of its stated effort to prevent the spread of deceptive content that could sway public opinion.
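The “cr” Content Credentials mark is backed by C2PA provenance metadata embedded in the image file itself. As a rough illustration of how such a mark could be checked, the Python sketch below scans a JPEG’s APP11 (JUMBF) segments for the “c2pa” label. This is a simplified heuristic built on assumptions about the file layout described in the C2PA specification, not OpenAI’s own detection method, and the file name used is a placeholder.

```python
# Heuristic check for embedded C2PA ("Content Credentials") provenance data in a JPEG.
# Sketch only: it walks the JPEG segment structure and looks for APP11 (JUMBF)
# segments containing the "c2pa" label, where C2PA manifests are normally stored.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":          # not a JPEG (missing SOI marker)
        return False

    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:            # lost sync with the segment structure
            return False
        marker = data[pos + 1]
        if marker in (0xD9, 0xDA):       # EOI or SOS: no more metadata segments
            return False
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        payload = data[pos + 4:pos + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 segment with a C2PA label
            return True
        pos += 2 + length
    return False

if __name__ == "__main__":
    # "generated.jpg" is a hypothetical file name used for illustration only.
    print(has_c2pa_manifest("generated.jpg"))
```

A production check would instead parse the full JUMBF box structure and validate the manifest’s signatures, which dedicated C2PA tooling handles; the sketch above only answers whether a manifest appears to be present.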
OpenAI reiterated its commitment to ethical AI use in its blog post, emphasizing policies that prohibit potentially abusive applications, including chatbots that impersonate real individuals and uses that discourage voting.
Additionally, DALL-E is restricted from generating images of real people, including political candidates.
Reuters reported encountering these restrictions when attempting to generate images of Donald Trump and Joe Biden, receiving a content policy violation message.
However, OpenAI faces challenges in enforcing these policies consistently on its platform.