AI Chatbots Trigger Fears of the Next Misinformation Nightmare

New generative artificial intelligence chatbots have stoked fears of the next misinformation nightmare.

AI tools like OpenAI’s ChatGPT, Microsoft’s Bing chatbot, and Google’s Bard have set off a tech-industry frenzy. But they are also capable of spreading misinformation online.

Regulators and technologists were slow to address the dangers of misinformation spread on social media, and they are still playing catch-up with imperfect, incomplete policy and product solutions.

But a new alarm bell is ringing, and experts are sounding it even faster as real-life examples of inaccurate, erratic, and flat-out wrong responses from the AI bots pile up.

Experts said the problem is getting worse, and fast.

Generative AI programs like ChatGPT do not have a clear sense of the boundary between fact and fiction. The bots are also prone to making things up as they try to satisfy human users’ inquiries.
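To see why, consider a minimal sketch, assuming the Hugging Face transformers library and the small open gpt2 model: a bare language model predicts plausible next words, so it will complete a false premise just as fluently as a true one. Commercial chatbots layer safety training on top, but the underlying next-token mechanism is the same.

```python
# Minimal sketch of why generative models blur fact and fiction:
# a plain language model fluently continues any premise, true or false.
# gpt2 is used here only as a small, freely available illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A historically false premise; the model has no fact-checking step,
# so it simply produces a plausible-sounding continuation.
false_premise = "Napoleon won the Battle of Waterloo because"
result = generator(false_premise, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```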

Soon after ChatGPT debuted last year, researchers tested what the artificial intelligence chatbot would write after it was asked questions peppered with conspiracy theories and false narratives.

The results — in writings formatted as news articles, essays, and television scripts — were so troubling that the researchers minced no words.

For now, experts say the biggest generative AI misinformation threat is bad actors leveraging the tools to spread false narratives quickly and at scale.

Disinformation is difficult to wrangle even when humans create it manually. Researchers predict that generative technology could make it cheaper and easier to produce for an even larger pool of conspiracy theorists and other spreaders.

Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say, smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste jobs. And they say that no available mitigation tactics can effectively combat it.

Tech firms are trying to get ahead of the possible regulatory and industry concerns around AI-generated misinformation by developing their own tools to detect falsehoods and using feedback to train the algorithms in real time.
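To make the feedback idea concrete, here is a minimal sketch of one common pattern, an online text classifier that is updated incrementally as human judgments arrive. It assumes scikit-learn and invented seed examples; it illustrates the general technique, not any company’s actual detection system.

```python
# Illustrative only: a toy feedback loop for flagging likely falsehoods.
# HashingVectorizer is stateless (no fitting needed), and SGDClassifier
# supports incremental updates via partial_fit, so each new human
# judgment can be folded into the model as it comes in.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)
classifier = SGDClassifier(loss="log_loss")  # log loss enables predict_proba

# Hypothetical seed data: texts paired with labels (1 = misinformation).
seed_texts = ["the moon landing was staged", "water boils at 100 C at sea level"]
seed_labels = [1, 0]
classifier.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=[0, 1])

def score(text: str) -> float:
    """Estimated probability that `text` is misinformation."""
    return classifier.predict_proba(vectorizer.transform([text]))[0][1]

def incorporate_feedback(text: str, is_misinformation: bool) -> None:
    """Fold a single reviewer judgment back into the model in real time."""
    classifier.partial_fit(vectorizer.transform([text]), [int(is_misinformation)])
```

Real systems are far more elaborate, but the core loop is the same: score incoming text, collect human corrections, and retrain continuously rather than in large offline batches.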
