FBI says criminals are exploiting GenAI to scale up fraud schemes

Facepalm: Generative AI services are gaining immense popularity among both internet users and cybercriminals. According to the FBI, “synthetic” content is increasingly being exploited to carry out various types of fraud. However, with the right precautions, individuals can still effectively protect themselves online.

The FBI has issued an alert about the criminal misuse of generative AI technology. In a recently published public service announcement, the bureau warns Americans that fraudsters are exploiting AI-generated content to make their illegal schemes more convincing and effective.

According to the FBI, generative AI lets criminals cut the time and effort needed to deceive their targets. These tools “synthesize” entirely new content from user prompts, and they can even smooth over the human errors, such as awkward phrasing or grammatical slips, that might otherwise raise suspicion about fraudulent text.

Creating content with AI isn’t inherently illegal, but it becomes a crime when that content is used in fraud or extortion attempts. The FBI’s alert outlines several examples of how generative AI can be misused and offers practical advice to help users protect themselves online.

AI-generated text can appear highly convincing in social engineering and spear-phishing campaigns. Fraudsters are leveraging generative AI to mass-produce fake content, populate fake social media profiles, compose persuasive messages, and translate text into other languages with greater accuracy and fewer grammatical errors. Entire fraudulent websites can now be built in record time, and chatbots are being used to trick victims into clicking malicious links.

AI-generated images are, unsurprisingly, at the forefront of this trend, and cybercriminals are taking full advantage of them. Fraudsters use such visuals to flesh out fake social media profiles and to produce counterfeit ID documents that support fraudulent activity. According to the FBI, AI algorithms can generate “realistic” images that are being exploited in social engineering campaigns, spear-phishing attempts, scams, and even “romance schemes.”

AI-generated audio and video content poses similar risks. Criminals can now impersonate public figures, or even people personally known to their targets, to request financial assistance or access to sensitive information such as bank account details.

The FBI advises users to establish a “secret word” or phrase with trusted family and friends as a quick way to verify identities. Additional tips to guard against generative AI-enabled crimes include carefully inspecting images and videos for irregularities or inconsistencies, as well as minimizing the online availability of personal images or voice recordings.

When dealing with financial requests, the FBI stresses the importance of verifying their legitimacy through direct phone calls rather than relying on text or email. Sensitive information should never be shared with individuals met exclusively online. While it may seem obvious, the FBI also reiterates that sending money, gift cards, or cryptocurrency to strangers online is highly risky and often leads to fraud.
