Popular AI chatbots powered by large language models (LLMs) often provide inaccurate information, and researchers expect threat actors to ramp up their efforts to get them to spew out content that benefits attackers, such as phishing URLs and fake download pages.
SEO poisoning and malvertising have made searching for login pages and software via Google or other search engines a minefield: if you don’t know how to spot fake or spoofed sites, you risk having your credentials stolen and your devices infected.
Partly because of this, and partly because search engines have become worse at surfacing relevant information, users have slowly begun asking AI chatbots for information instead.
For the time being, their answers may be more to the point and delivered more quickly, but the information provided can often be inaccurate, whether because the LLM got it wrong or was fooled, or because it outright “hallucinates” (i.e., invents) the answer.
Case in point: Netcraft researchers recently asked chatbots powered by the GPT-4.1 family of models to surface login pages for 50 different brands across industries like finance, retail, tech, and utilities, and the chatbots got it right in 66% of cases.
But 5% of the returned domains belonged to unrelated but legitimate businesses, and a whopping 29% (28 domains) were unregistered, parked, or had no active content.
“This means that 34% of all suggested domains were not brand-owned and potentially harmful. Worse, many of the unregistered domains could easily be claimed and weaponized by attackers. This opens the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools,” they noted.
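To illustrate the risk, here is a minimal sketch of how a defender might triage chatbot-suggested login domains by checking whether they currently resolve in DNS. This is not Netcraft’s methodology; the domain names and the triage logic are hypothetical, and a name with no DNS footprint is only a hint that it may be unregistered and claimable:

```python
import socket

# Hypothetical examples of chatbot-suggested login domains;
# placeholder names, not actual Netcraft findings.
suggested = [
    "login.example-bank.com",
    "portal.example-retailer.net",
]

def resolves(domain: str) -> bool:
    """Return True if the domain currently has a DNS record."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

for domain in suggested:
    # A name with no DNS footprint may be unregistered or parked,
    # exactly the kind of domain an attacker could later claim.
    if resolves(domain):
        print(f"{domain}: resolves (still verify brand ownership)")
    else:
        print(f"{domain}: no DNS record; could be claimed and weaponized")
```

DNS resolution alone does not prove a domain is brand-owned: a resolving name could still be a lookalike, so WHOIS and certificate-transparency lookups would be natural follow-up checks.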