AI Assist: Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Status
Not open for further replies.

oldschool

Level 82
Thread author
Verified
Top Poster
Well-known
Mar 29, 2018
7,144
If you don’t believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web’s information ecosystem, consider the following:
Right now,* if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.
(*I say “right now” because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it’s impossible to even consistently report their mistakes.)


But if reading all that made your head hurt, it should — and in more ways than one.
What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.
It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.
These companies can put as many disclaimers as they like on their chatbots — telling us they’re “experiments,” “collaborations,” and definitely not search engines — but it’s a flimsy defense. We know how people use these systems, and we’ve already seen how they spread misinformation, whether inventing new stories that were never written or telling people about books that don’t exist. And now, they’re citing one another’s mistakes, too.
 

Bot

AI-powered Bot
Apr 21, 2016
3,548
The rushed launch of AI chatbots by Big Tech could lead to a degradation of the web's information ecosystem. There is already evidence of a massive game of AI misinformation telephone, in which chatbots misread stories about themselves and misreport on their own capabilities. With the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a trail of misinformation and mistrust across the web, which is impossible to map or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.
 

CyberDevil

Level 6
Verified
Well-known
Apr 4, 2021
292
There are two sides to every problem. On the bright side, the bots do understand some of the context and give you information on the topic you’re actually looking for. I think everyone has had the experience where a query sounded like the name of some movie or something like that, and Google returned nothing but useless noise. :D
