OpenAI, Meta and Microsoft pledge to fight fake content created with AI
About twenty technology companies announced that they will work together this year to combat misleading content generated by artificial intelligence. The group, led by OpenAI, has specifically set out to fight misinformation during the elections being held in 2024 in more than 50 countries, votes that together involve nearly half of the world's population.
The agreement was announced this Friday, on the first day of the Munich Security Conference. The list of participants includes both AI developers and social media platforms, among them Google, Amazon, Meta, Microsoft, OpenAI, X (formerly Twitter) and TikTok.
The voluntary agreement recognizes that the rapid development of artificial intelligence is “creating new opportunities as well as challenges for the democratic process,” and that the spread of misleading content could “jeopardize the integrity of electoral processes.”
The companies will scrutinize the increasingly realistic images, audio and video generated by new artificial intelligence tools: any content “that falsifies or deceptively alters the appearance, voice or actions of political candidates, election officials and other interested parties.”
The companies, however, have not committed to banning or removing deepfakes. Instead, they propose methods to detect and label this type of content when it is created or distributed on their platforms, such as watermarking or embedding provenance metadata.
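To illustrate, here is a minimal sketch of what metadata-based labeling could look like in practice. It is written in Python with the Pillow library; the field names ("ai_generated", "generator") and function names are invented for this example and are not the scheme any signatory actually uses (real deployments tend to rely on standards such as C2PA content credentials).

```python
# A minimal sketch of metadata-based labeling for PNG images, using Pillow.
# The keys "ai_generated" and "generator" are illustrative, not any
# company's actual labeling scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG while embedding provenance text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical flag
    meta.add_text("generator", generator)   # e.g. the model that made the image
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return the PNG's text chunks, where a platform could check for the flag."""
    return dict(Image.open(path).text)

# Usage: label_as_ai_generated("image.png", "image_labeled.png", "example-model")
```

A key limitation, and one reason the accord also mentions watermarking, is that metadata of this kind is easily stripped when a file is re-encoded or screenshotted.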
A symbolic commitment on artificial intelligence and elections
“I think the usefulness of this agreement is the breadth of the companies that sign it,” said Nick Clegg, president of global affairs at Meta Platforms. “Everyone recognizes that no technology company, no government, no civil society organization is capable of dealing alone with the arrival of this technology and its possible harmful use,” Clegg added.
The agreement stipulates that the companies will share best practices with each other, and they say they will provide “rapid and proportionate responses” when misleading content generated with artificial intelligence begins to spread. However, they did not give further details on how they would meet these commitments, nor did they propose a timeline for action.
For this reason, some organizations and activists have already described the agreement as vague and merely symbolic. “The language is not as strong as one might have expected,” Rachel Orey, of the Elections Project at the Bipartisan Policy Center, told the AP. “It is voluntary and we will be watching to see whether they comply,” she added.
Among the votes scheduled for this year is the presidential election in the United States. Last year, the Republican Party released AI-generated material attacking President Joe Biden, who is running for re-election. And in January, voters in New Hampshire received a robocall imitating Biden's voice that urged them not to vote in the primary.
Several of the companies that signed the agreement had already announced their own measures ahead of the elections. OpenAI, creator of ChatGPT, announced that it would prohibit politicians from using its tools for their campaigns. And TikTok said this week that it would launch election centers to monitor posts on its platform during the upcoming European Parliament elections.