Types of AI Scams

AI has significantly expanded the methods available to scammers. Understanding how these scams work makes them easier to recognize. The examples on this page are not theoretical. They are actively used and affect people across all age groups, professions, and levels of technical familiarity. While the methods vary, the objective is consistent: to make fabricated content appear credible and prompt a quick response.

AI is used in different ways across scam types, but each relies on making fraudulent contact appear legitimate by imitating trusted people, organizations, or situations. This page outlines the major categories of AI-enabled scams, explains how each works, and highlights warning signs. Recognizing these patterns is a key step in preventing harm.

Voice Cloning Scams

Voice cloning is one of the most alarming applications of AI in fraud. It allows a scammer to create a realistic reproduction of a person's voice using a short audio sample, sometimes as little as a few seconds, gathered from a voicemail, a social media video, or a phone call. The cloned voice can then be used to make calls that sound convincingly like a family member, a colleague, or a company executive.

  • Voice cloning uses AI to analyze the patterns, tone, pitch, and rhythm of a person's speech and generate new audio that mimics those characteristics. The technology has become widely accessible through commercially available tools, some of which require no technical expertise to operate. Once a voice has been cloned, a scammer can generate speech saying anything they choose.

    The quality of cloned voices has improved significantly in recent years. In many cases, the reproduction is convincing enough to deceive family members, colleagues, and even people who interact regularly with the person being impersonated.

  • The grandparent scam is one of the most emotionally manipulative forms of AI-enabled fraud. In a typical version of this scam, a victim receives a call that appears to use the voice of a grandchild or other family member claiming to be in an emergency, such as a car accident, an arrest, or a medical situation. The caller asks for immediate financial help, often in the form of cash, wire transfer, or gift cards, and urges the victim not to contact other family members.

    AI voice cloning has made this scam significantly more convincing. Where earlier versions relied on the victim not recognizing an unfamiliar voice, cloned voices can replicate the actual voice of a known family member.

    The Federal Trade Commission has documented this scam type and notes that older adults are disproportionately targeted.

    Source: Federal Trade Commission, "Grandkid and Family Scams": https://consumer.ftc.gov/features/pass-it-on/impersonator-scams/grandkid-scams

  • Voice cloning is also used to impersonate business executives in what is sometimes called CEO fraud, a variant of business email compromise. In these scams, an employee receives a call that appears to come from a senior leader, such as a chief executive or chief financial officer, instructing them to make an urgent wire transfer, share login credentials, or authorize a payment outside of normal procedures.

    The use of a cloned voice adds a layer of credibility that written impersonation alone does not provide. Employees who would be skeptical of an unusual email request may find it harder to question what sounds like a direct call from a familiar voice.

    The FBI reported that business email compromise and related executive impersonation schemes resulted in losses exceeding $2.9 billion in 2023.

    Source: FBI Internet Crime Complaint Center 2023 Annual Report: https://www.ic3.gov/AnnualReport/Reports/2023_IC3Report.pdf

Image of a person talking into an iPhone

Deepfake Scams

Deepfakes are AI-generated images, videos, or audio recordings that depict real people saying or doing things they never said or did. The technology has advanced rapidly and is now accessible through widely available tools. In the context of fraud, deepfakes are used to impersonate trusted individuals, fabricate endorsements, and create false evidence to support fraudulent schemes.

  • Fraudsters use deepfake technology in several ways. They create fake videos of public figures endorsing investment products or financial schemes. They generate realistic video calls impersonating executives, colleagues, or family members. They produce fabricated news segments or interviews to lend credibility to fraudulent claims. In some cases they create compromising material to use as leverage in extortion schemes.

    The convincing nature of deepfake video makes it particularly effective in situations where a victim is making a quick judgment about whether something is real. A brief video clip or a short call does not provide much opportunity to scrutinize subtle inconsistencies.

  • One of the most serious uses of deepfake technology in fraud involves real-time or recorded video impersonation. In documented cases, scammers have used deepfake video during live video calls to impersonate company executives, tricking employees into authorizing large financial transfers.

    In one widely reported case in 2024, a finance employee at a multinational company was deceived into transferring approximately $25 million after participating in a video call in which all other participants, including a person appearing to be the company's chief financial officer, were deepfake representations.

    Source: CNN reporting on the Hong Kong deepfake fraud case, February 2024: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

  • Deepfake technology is frequently used to generate fake video endorsements from celebrities, politicians, and business figures to promote fraudulent investment schemes. These videos are distributed through social media platforms and appear to show well-known individuals recommending specific products, cryptocurrencies, or investment opportunities.

    Victims who trust the apparent endorser may invest significant sums before discovering the endorsement was fabricated. Because the videos are designed to be shared widely, a single piece of fabricated content can reach a large number of potential victims quickly.

    The Federal Trade Commission has warned consumers specifically about celebrity deepfake investment scams.

    Source: Federal Trade Commission, Spot and Avoid Scams: https://consumer.ftc.gov/scams

Image of a blurred person representing a deepfake

AI Phishing and Smishing

Phishing refers to fraudulent communications designed to trick recipients into revealing personal information, clicking malicious links, or transferring money. Smishing is the same tactic delivered by text message. AI has significantly increased the sophistication and volume of both.

  • Traditional phishing emails were often identifiable by poor grammar, generic greetings, and implausible scenarios. AI-generated phishing eliminates most of these signals. Large language models can produce well-written, contextually appropriate messages that reference real details about the recipient and mimic the tone and style of legitimate communications from known organizations.

    AI also allows scammers to generate large numbers of personalized phishing messages quickly, targeting many individuals simultaneously with content tailored to each one. This combination of quality and scale represents a significant shift in the threat posed by phishing attacks.

  • AI-generated phishing emails can convincingly impersonate banks, government agencies, employers, delivery services, and technology companies. They often include the recipient's name, reference a recent transaction or account activity, and create a sense of urgency designed to prompt immediate action without careful scrutiny.

    Common requests in phishing emails include clicking a link to verify account information, downloading an attachment, or confirming login credentials. The linked pages are typically fake versions of legitimate websites designed to capture whatever information the victim enters.

    Before clicking any link in an unexpected email, verify the sender's actual email address, navigate directly to the organization's website rather than using the link provided, and contact the organization through a verified phone number if you are uncertain.

  • Smishing attacks follow the same pattern as email phishing but are delivered by text message. Common versions include fake package delivery notifications, bank fraud alerts, government benefit messages, and prize notifications. Text messages can feel more immediate and personal than emails, which can make recipients more likely to respond quickly without pausing to evaluate the message.

    AI enables scammers to generate large volumes of personalized smishing messages and to adapt their content rapidly in response to current events, making smishing campaigns increasingly difficult to distinguish from legitimate text communications.

    The FBI and FTC both maintain resources on recognizing and reporting smishing attacks.

    Source: FTC guidance on text message scams: https://consumer.ftc.gov/articles/how-recognize-and-report-spam-text-messages

Image of Scrabble tiles spelling "Phishing"

Fake Job Offers and Recruitment Scams

Fake job scams have grown significantly with the rise of remote work and AI tools. Scammers create fraudulent job listings, conduct fake interviews, and impersonate legitimate companies to steal money and personal information from job seekers.

  • AI enables scammers to create convincing fake job postings, generate professional-sounding recruiter communications, and conduct automated interview processes that appear legitimate. AI chatbots can simulate recruiter conversations, respond to candidate questions, and advance applicants through a fake hiring process designed to build trust before requesting money or sensitive personal information.

    Fake job listings may appear on legitimate job platforms, making them harder to identify. The postings often describe attractive positions with competitive compensation and flexible or remote work arrangements.

  • Work from home scams typically promise easy income for simple tasks such as product reviews, data entry, or package reshipping. After an initial period of apparent legitimacy, victims are asked to purchase equipment, pay for training, or advance funds that are promised to be reimbursed. The reimbursement never arrives and contact with the scammer ends once money has been sent.

    AI enables these scams to appear more professional through polished communications, fake company websites, and automated follow-up messages that mimic legitimate onboarding processes.

  • Some fake job scams are specifically designed to collect personal information rather than money directly. Applicants who progress through a fake hiring process may be asked to submit a resume, provide references, complete a background check form, or supply identification documents. This information can be used for identity theft, sold to other fraudsters, or used to open fraudulent accounts.

    Job seekers should be cautious about providing sensitive personal information, including Social Security numbers and copies of identification documents, until they have independently verified that a job offer and employer are legitimate.

    Source: FTC guidance on job scams: https://consumer.ftc.gov/articles/job-scams

Image of a person working from home

Romance Scams and AI-Generated Personas

Romance scams involve the creation of fake relationships designed to build emotional trust before requesting money. AI has made these scams significantly more scalable and convincing by enabling the creation and maintenance of realistic fake personas across multiple targets simultaneously.

  • AI tools can generate realistic profile photographs of people who do not exist, produce written communication that is fluent, emotionally engaging, and personalized, and maintain consistent fake backstories across extended conversations. A scammer using AI assistance can manage relationships with multiple targets at once, generating responses that feel personal and attentive without the limitations of a single person managing all communications manually.

    The fake personas used in romance scams are often designed to appeal to the specific circumstances of the target. Common fabricated identities include military personnel stationed overseas, professionals working abroad, and widowed individuals seeking companionship.

  • Romance scams typically unfold over weeks or months. The scammer invests time in building emotional connection before introducing a financial request, often framed as a temporary emergency or an investment opportunity the target is invited to participate in.

    By the time a financial request is made, many victims have developed genuine emotional attachment and are reluctant to question the relationship. Requests often escalate over time, with each successful request followed by another.

    The FTC reported that romance scams resulted in losses of $1.3 billion in 2022, with a median individual loss of $4,400. Cryptocurrency and bank wire transfers accounted for the largest share of reported losses.

    Source: Federal Trade Commission, "Romance Scammers' Favorite Lies Exposed," February 2023: https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2023/02/romance-scammers-favorite-lies-exposed

Image representing a romance scam

Impersonation Scams

Impersonation scams involve a scammer pretending to represent a trusted institution or authority, such as a government agency, a technology company, or a financial institution. AI tools make impersonation more convincing by enabling personalized communications, realistic fake websites, and in some cases cloned voices.

  • Government impersonation scams involve fraudsters posing as representatives of agencies such as the Social Security Administration, the IRS, Medicare, or law enforcement. Victims are typically told they owe money, that their benefits are at risk, or that they are under investigation. The communication creates urgency and fear designed to prompt immediate payment.

    AI enables these scams to be delivered through realistic-sounding phone calls, well-written emails, and fake official documents. Government agencies do not demand immediate payment by gift card, wire transfer, or cryptocurrency. Any communication making such a request is fraudulent regardless of how official it appears.

    Source: IRS guidance on tax scams and identity theft: https://www.irs.gov/newsroom/tax-scams-consumer-alerts

  • Tech support scams involve fraudsters posing as representatives of technology companies such as Microsoft, Apple, or antivirus software providers. Victims are typically told their device has been compromised and that immediate action is required. The scammer may request remote access to the device, payment for fake services, or login credentials.

    AI makes tech support scams more convincing by generating realistic-looking security alerts, professional-sounding support communications, and fake company websites. Legitimate technology companies do not initiate unsolicited contact to inform you of a security problem on your device.

    Source: Microsoft guidance on tech support scams: https://support.microsoft.com/en-us/windows/protect-yourself-from-tech-support-scams-2ebf91bd-f94c-2a8a-e541-f5c800d18435

  • Bank impersonation scams involve fraudsters posing as representatives of a victim's financial institution, typically claiming that suspicious activity has been detected on the account. The victim is asked to verify their identity, transfer funds to a safe account, or provide login credentials to prevent unauthorized access.

    AI enables these scams to be delivered through realistic-sounding automated phone systems, well-crafted emails, and fake banking websites that closely resemble legitimate ones. Legitimate banks will never ask you to transfer money to a new account for security purposes or request your full login credentials over the phone or by email.

    Source: FDIC Consumer News, "Scammers and Fake Banks": https://www.fdic.gov/consumer-resource-center/2023-10/scammers-and-fake-banks

Image of a government building

AI-Generated Misinformation Used in Fraud

AI is increasingly used to generate false information that supports or amplifies fraudulent schemes. Fabricated news articles, fake testimonials, and synthetic media are used to make fraudulent investment opportunities, fake products, and impersonation scams appear more credible.

Fake News and False Urgency

AI can generate realistic-looking news articles, social media posts, and official-seeming announcements that do not reflect real events. These fabricated pieces of content are used to create a false sense of urgency or legitimacy around a fraudulent scheme. A fake news article claiming that a celebrity has endorsed a particular investment, that a government agency is offering a limited-time benefit, or that a company has issued an emergency security alert can prompt victims to act quickly without verifying the information.

Before acting on alarming or urgent information encountered online, verify it through at least two independent and established news sources. Fact-checking resources including Snopes at https://www.snopes.com and FactCheck.org at https://www.factcheck.org can help evaluate suspicious claims.

Fabricated Endorsements and Testimonials

AI is used to generate fake customer reviews, fake testimonials, and fake endorsements from public figures to lend credibility to fraudulent products, services, and investment schemes. These endorsements may appear on fake websites, in social media advertisements, and in fraudulent email campaigns.

AI-generated testimonials can be produced at scale, creating the appearance of widespread positive experience with a product or service that does not exist or does not function as claimed. Fabricated endorsements from celebrities are a particularly common feature of fraudulent investment and cryptocurrency schemes.

When evaluating testimonials or endorsements, look for verifiable sources, independently search for reviews on platforms not controlled by the seller, and be skeptical of any endorsement that cannot be confirmed through an independent source.

For guidance on how to respond if you encounter a scam, see the How to Spot AI Scams and What to Do if Scammed page. For real-world examples and how different groups are targeted, see the Scams by Target Group page.

Last Reviewed: March 2026