Hot Take: You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code

Parkinsond
Thread author
Dec 6, 2023
Popular AI chatbots powered by large language models (LLMs) often fail to provide accurate information, and researchers expect threat actors to ramp up their efforts to get them to spew out information that benefits attackers, such as phishing URLs and fake download pages.

SEO poisoning and malvertising have made searching for login pages and software via Google or other search engines a minefield: if you don’t know how to spot fake/spoofed sites, you’ll get your credentials stolen and your devices infected.

Partly because of this and partly because search engines have become bad at surfacing relevant information, users have slowly begun asking AI chatbots for information instead.

For the time being, their results may be more to the point and delivered more quickly, but the information provided can often be inaccurate, whether because the LLM got it wrong / was fooled, or because it outright “hallucinates” (i.e., invents) the answer.

Case in point: Netcraft researchers recently asked chatbots powered by the GPT-4.1 family of models to surface login pages for 50 different brands across industries like finance, retail, tech, and utilities, and the chatbots got it right in only 66% of cases.

But 5% of the returned domains belonged to unrelated but legitimate businesses, and a whopping 29% (28 domains) were unregistered, parked, or had no active content.

“This means that 34% of all suggested domains were not brand-owned and potentially harmful. Worse, many of the unregistered domains could easily be claimed and weaponized by attackers. This opens the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools,” they noted.
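One cheap sanity check against the unregistered-domain failure mode described above is to see whether a suggested hostname even resolves in DNS before visiting it. This is only a minimal sketch: the URLs below are hypothetical placeholders (not domains from the Netcraft study), and a name that *does* resolve can still be a phishing site, so this catches only the unregistered/parked case.

```python
# Sketch: flag chatbot-suggested URLs whose hostnames do not resolve in DNS.
# A non-resolving name hints the domain may be unregistered or parked --
# exactly the kind an attacker could later register and weaponize.
import socket
from urllib.parse import urlparse


def hostname_of(url: str) -> str:
    """Extract the hostname from a URL (empty string if none)."""
    return urlparse(url).hostname or ""


def resolves(hostname: str) -> bool:
    """Return True if the hostname currently has a DNS record."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False


# Hypothetical example suggestions, for illustration only.
suggested = [
    "https://login.example.com/signin",
    "https://secure-examplebank-login.example/",
]

for url in suggested:
    host = hostname_of(url)
    verdict = "resolves" if resolves(host) else "NO DNS RECORD - treat as suspect"
    print(f"{host}: {verdict}")
```

Note that DNS resolution is a necessary but nowhere near sufficient check: registered phishing domains resolve just fine, so this should only ever be one filter among several.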

While AI chatbots can be helpful, it's crucial to be aware of their limitations: they sometimes return inaccurate information that can lead users to phishing pages or malicious downloads, and in the Netcraft test 34% of suggested domains were not brand-owned and potentially harmful. Users should therefore verify information from chatbots independently and be cautious about following any link or download they provide.
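The "always verify" advice above can be made concrete by checking a suggested login URL against a hand-maintained allowlist of official domains before following it. This is a hedged sketch: the brand/domain pairs are hypothetical placeholders, and a real allowlist would come from bookmarks or a vetted source, not from the chatbot itself.

```python
# Sketch: accept a chatbot-suggested login URL only if its host is the
# brand's official domain or a subdomain of it. The mapping below is a
# hypothetical placeholder, not a real allowlist.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {
    "examplebank": "examplebank.com",   # hypothetical brand -> official domain
    "exampleshop": "exampleshop.com",
}


def is_official(brand: str, url: str) -> bool:
    """True only if the URL's host is the official domain or a subdomain of it."""
    official = OFFICIAL_DOMAINS.get(brand)
    if not official:
        return False
    host = urlparse(url).hostname or ""
    # Exact match or dot-anchored subdomain match; this rejects lookalikes
    # such as "examplebank-login.com", which merely contains the brand name.
    return host == official or host.endswith("." + official)


print(is_official("examplebank", "https://login.examplebank.com/"))  # True
print(is_official("examplebank", "https://examplebank-login.com/"))  # False
```

The dot-anchored suffix check matters: a plain substring test would wave through typosquats and hyphenated lookalikes, which are exactly the domains attackers register.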