The rise of artificial intelligence could create a perfect storm of misinformation in 2024, according to a new report.

Ahead of next year's US presidential race, social media giants face a new threat: AI-generated content that could confuse and mislead voters. The proliferation of AI-powered tools will make it easier than ever for bad actors to spread false claims about the election, and the platforms' existing policies may not keep up with the pace of AI-generated content.

"The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast," said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. "We've already seen evidence of the impact that AI can have."

The generative AI tools now available to the public are "going to be used to create a lot of content," West said. "It's going to be very hard to tell what is real and what is not."

AI-generated content could be used to spread misinformation about everything from voting to crisis situations. "We know that we're going into a very scary situation where it's going to be very unclear what has happened and what has not actually happened," said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face. "It completely destroys the foundation of reality when it's a question whether or not the content you're seeing is real."

Experts say social media companies bear significant responsibility for addressing such risks, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But those companies now face a perfect storm of factors that could make it harder than ever to keep misinformation and AI-generated content from spreading on their sites ahead of the 2024 US presidential election.

"I'm not confident in even their ability to deal with the old types of threats," said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. "And now there are new threats."

The major social networks have said they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting "synthetic" or computer-generated content, like a video posted by the campaign of Florida Gov. Ron DeSantis. That video included apparently AI-generated images shown alongside real images of the pair, with a text overlay reading "real life Trump." As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. Still, AI-generated content is likely to become even harder to spot as the technology improves, and some platforms may be less equipped to respond: Twitter, for one, has slashed much of its staff in recent months under new ownership.
By khawajausman