The rise of large language models (LLMs) that power chatbots is creating new challenges: not only are these systems increasingly used as tools in scam attempts, but research shows the models themselves can be misled by scammers.
Researchers at JP Morgan AI Research studied several prominent chatbot models, including OpenAI's GPT-3.5 and GPT-4 and Meta's Llama 2, exposing them to 37 different scam scenarios to evaluate their susceptibility.
One scenario presented the chatbots with an email pitching an investment opportunity in a new cryptocurrency, a tactic commonly used by real-world scammers.