AI-generated images and videos related to Hurricane Milton are raising concerns, as they may be linked to scams and disinformation campaigns.
Following the catastrophic Category 3 hurricane that struck parts of Florida, social media has been flooded with AI-created content. While some AI-generated images are obviously fabricated, such as humorous depictions of a girl hugging an alligator in torrential rain, others are more deceptive. AI-rendered images of a submerged Disney World, for instance, misled many viewers and were even amplified by malicious outlets.
The real danger lies not only in the spread of false information but also in the potential for these images to be utilized in scams targeting vulnerable individuals.
AI-Generated Content Surges Amid Hurricane Milton
Numerous fake images of a flooded Disney World have circulated online, alongside AI-generated videos portraying Hurricane Milton’s destruction. Even when such clips are labeled as AI content, the risk remains that bad actors will strip the labels or present the footage as real. Experts warn that the public’s limited understanding of AI capabilities makes misleading narratives especially potent. “With less than 30 percent of adults aware of what AI can truly do, misinformation can incite widespread fear,” noted an expert.
It’s crucial to distinguish genuine damage from Hurricane Milton from the fabricated content that undermines public trust and fuels conspiracy theories. In one recent example, a user falsely claimed that footage captured by a NASA astronaut was fake, even though the material has been confirmed authentic. Some satellite images of Hurricane Milton have likewise been dismissed as misleading or AI-generated.
Stay Alert for Scams Linked to AI Imagery
In addition to misinformation challenges following Hurricane Milton, experts are advising caution against potential scams. The Federal Trade Commission issued a warning prior to the hurricane, urging consumers to be aware of fraud and price gouging that often occur in the wake of natural disasters.
Some scams, including fake charities, are becoming increasingly insidious by leveraging generative AI to enhance their credibility. “In every disaster, scammers attempt to create phishing websites for fundraising that are not legitimate,” cautioned an expert. “The rise of AI-generated imagery adds a layer of deception, as these visuals can evoke emotional responses even when fabricated.”
The viral AI-generated image of a distressed child holding a puppy post-disaster exemplifies how such imagery can manipulate emotions and motivate donations, regardless of its authenticity. Although discerning internet users may recognize these as fakes, many remain vulnerable and may fall prey to the scams.
While some AI-generated content may appear benign, it often serves to lower defenses and build trust ahead of a financial solicitation. “It’s a calculated approach to draw people in, and once trust is established, there’s often a financial request,” added an expert.
Natural disasters like Hurricane Milton heighten emotional vulnerability, making it essential for individuals to apply critical thinking and caution when encountering online content. The FTC advises that payments made in response to scams commonly utilize wire transfers, gift cards, cryptocurrency, or cash and encourages individuals to consult resources on avoiding scams following weather emergencies.