Disclaimer: Information found on CryptoreNews is those of writers quoted. It does not represent the opinions of CryptoreNews on whether to sell, buy or hold any investments. You are advised to conduct your own research before making any investment decisions. Use provided information at your own risk.
CryptoreNews covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.
The Hidden Aspects of AI: How Technology Assists Fraudsters, 2026/02/18 12:30:50

The Threat of AI
In recent years, the activity of cryptocurrency scammers utilizing artificial intelligence tools has significantly increased. The Chainabuse platform from TRM Labs recorded a 456% rise in such incidents from May 2024 to April 2025 compared to the previous year.
According to a more recent study by Chainalysis, a record $17 billion was stolen from the crypto industry in 2025, including through the use of AI tools.
Analysts anticipate that this trend will continue to grow. The Vectra platform predicts that by 2027, losses from AI-related fraud schemes could reach $40 billion.
What makes artificial intelligence such an effective tool for criminals?
Characteristics of AI-Driven Fraud
The primary advantages of AI for criminals include the speed of attack preparation, scalability, and ease of organization. Previously, crafting a convincing fraudulent scheme required time and technical expertise; today, criminals can quickly launch mass phishing campaigns, create fake websites and deepfake videos, mimic the activity of well-known individuals, and test numerous attack variations.
Cryptocurrency holders remain particularly vulnerable. The irreversibility of blockchain transactions means that user errors almost always result in irreversible financial losses.
AI does not necessarily create fundamentally new fraud schemes but elevates existing methods to a qualitatively new level. Generative models enable the creation of phishing messages in various languages and stylistically mimic official notifications from crypto exchanges, wallets, and DeFi platforms.
Additionally, it has become easier for criminals to gather and analyze data about potential victims—such as their interests, involvement in crypto projects, and, if public addresses are available, asset structures through blockchain explorers. This allows for the development of personalized interaction scenarios and more precise targeted attacks.
What AI-based schemes are currently the most prevalent, and how can users protect their assets?
AI-Enhanced Phishing
Phishing attacks remain one of the most common forms of crypto fraud, but the quality of the forgeries has noticeably improved. Generative models allow for the rapid creation of websites that are nearly indistinguishable from the original interfaces of popular services.
Typically, scammers send a message to the victim via email, messenger, or social media, posing as a service the victim actually uses. These messages often contain malicious links or forms where users may inadvertently provide personal information, including usernames and passwords.
A notable example is a mass mailing to users of the MetaMask wallet. The emails warned of “security issues” and requested users to enter a phrase supposedly for additional protection. However, MetaMask, like other non-custodial wallets, warns users in advance that the seed phrase should never be re-entered and is never requested by the service.
A significant portion of the increase in AI-related fraud recorded by TRM Labs is linked specifically to phishing and its combinations with other methods. Experts note that recognizing fake messages is becoming more challenging: they contain fewer errors and increasingly accurately mimic the official notifications of the services that scammers impersonate.
Deepfakes as a Fraud Tool
The use of deepfakes has emerged as one of the fastest-growing tools for cybercriminals. They create audio and video materials that imitate the speech and behavior of bloggers, politicians, entrepreneurs, and representatives of well-known companies. These videos are often used to promote fraudulent projects: the user sees a video in which a familiar public figure supposedly urges them to urgently invest in a new cryptocurrency or platform.
A notable incident occurred in 2024 in Hong Kong. An employee of the finance department of an international company received a video call from someone appearing to be the financial director, demanding urgent transactions. It was later revealed that the scammers used deepfake technology, and the damage from their actions exceeded $25 million.
More frequently, criminals conduct mass streams and broadcasts, targeting a wide audience. If their goal is cryptocurrency, they may impersonate well-known industry figures, such as Michael Saylor—the chairman of Strategy and a Bitcoin supporter—or Vitalik Buterin, the co-founder of Ethereum.
Saylor has previously noted that his team works daily to remove dozens of fraudulent videos featuring his likeness. Another popular figure for such schemes is Elon Musk, whose name is often used to promote cryptocurrency scams.
Fake Trading Platforms and AI Bots
Scammers are increasingly creating fake trading platforms that supposedly utilize “special” AI tools for automatic profit generation. Users are promised guaranteed returns, but they are required to transfer cryptocurrency to the service’s account to begin. The scheme then involves AI bots or automated algorithms that supposedly ensure “smart” trading in the investor’s interest.
After transferring funds, users typically lose their cryptocurrency irretrievably. These platforms often appear convincing: they may display real-time market charts and include demo modes simulating successful trades. In some cases, funds disappear immediately after connecting a crypto wallet to the service or trading bot.
Such schemes are primarily spread through social media and video platforms. Analysts from SentinelLABS estimate that the damage from one such campaign, promoted through descriptions of thematic videos on YouTube, exceeded $900,000.
“Pig Butchering” and AI
The “pig butchering” scheme represents a long-term strategy employed by scammers to gain the victim’s trust and subsequently persuade them to invest in a fraudulent project. Criminals may engage in casual conversations on social media or messaging apps for days, weeks, or even months.
There have been instances where criminals built romantic relationships with victims before defrauding them. For example, a 62-year-old resident of Florida lost $300,000 by investing in a fake crypto platform.
According to the FBI’s Internet Crime Complaint Center (IC3), most reports come from victims over 60 years old. The total number of affected individuals and the volume of losses continue to rise annually, making the “pig butchering” scheme one of the most prevalent and dangerous in the industry.
AI assists scammers in scaling such schemes. Chatbots can maintain lengthy conversations, simulate live interactions, and remain constantly available, which enhances trust and makes the interaction more believable for potential victims.
“Prompt Injection” and AI
“Prompt injection” is a relatively new threat compared to classic phishing or “pig butchering” schemes. It refers not so much to a standalone fraudulent scheme but to vulnerabilities in systems and services that utilize artificial intelligence. A malicious actor can inject harmful instructions into requests made to an AI assistant that interacts with a browser, email, or crypto wallet. As a result, such an assistant may disclose confidential information or perform actions that harm the user.
Cybersecurity experts consider “prompt injection” to be one of the most serious and rapidly evolving threats. This assessment is shared by representatives of OpenAI.
An example is a vulnerability discovered in 2025 by Brave Software in the Comet browser AI agent: through manipulation of instructions, criminals were able to access user data.
How to Protect Yourself
The use of artificial intelligence enables scammers to combine various methods and make attacks more convincing and scalable. This means that users must remain vigilant at every stage of their online activities. Below are basic recommendations that can help mitigate risks.
-
Enable two-factor authentication (2FA). This is a fundamental security measure against unauthorized access to accounts that many still overlook. Even if a scammer learns the password, they will not be able to log in without the second factor. Hardware 2FA keys are considered the most secure option.
-
Verify URLs and links. Before clicking on a link, ensure its authenticity. It is advisable to avoid links received via social media, messaging apps, or email from unknown senders. If phishing is suspected, check the sender’s address and access services only through saved or manually entered addresses.
-
Do not share confidential information with outsiders, especially seed phrases and private keys. Modern non-custodial wallets, such as Trust Wallet or MetaMask, warn users that this information should not be shared with third parties, including supposed representatives of the service.
-
Be aware of deepfakes. Fake voices and videos can be used in both mass attacks and personal schemes. In a fraudulent scenario, a colleague, supervisor, or business partner may be involved. If you are urged to urgently transfer funds, click on a link, or take other actions, it is advisable to contact that person through a verified channel to confirm details.
-
Approach offers of “quick money,” “guaranteed profits,” and other overly enticing conditions with skepticism. Offers of guaranteed returns and risk-free investments are often used to promote fraudulent services. Even the most advanced AI bots cannot guarantee profit.
-
Store cryptocurrency separately. Do not keep all funds in one place, especially on centralized exchange accounts. For long-term investments, consider cold storage and the use of hardware wallets.
Conclusion
Artificial intelligence enables scammers to conduct attacks more quickly, cheaply, and on a larger scale, combining different methods and enhancing their credibility. In this context, security still relies on the user: attentiveness, caution, and adherence to basic rules—protecting seed phrases and keys, verifying links and sources, and maintaining a critical attitude towards unfamiliar services and promises of guaranteed profits. Digital discipline is essential for maintaining control over assets and reducing the risk of becoming a victim of modern fraud schemes.
This material and the information contained herein do not constitute individual or any other investment advice. The views of the editorial team may not align with those of analytical portals and experts.