The U.S. Securities and Exchange Commission (SEC) has issued a risk alert regarding the weaponization of artificial intelligence in investment communities. The agency's Office of Investor Education and Advocacy (OIEA) warned that fraudsters are using AI to automate trust-building in group chats on platforms such as Telegram and Discord, scaling social engineering attacks that previously required manual effort.
Automated Social Engineering
The warning addresses a structural shift in retail phishing. Scammers previously relied on human-operated "boiler rooms" to cultivate victims, a method limited by available manpower. AI now allows bad actors to deploy bots that mimic human cadence, empathy, and localized slang at scale. These bots infiltrate established crypto communities, engaging targets with personalized scripts before pivoting to fraudulent investment schemes.
"Fraudsters can use AI technology to clone voices, alter images, and even create fake videos… AI technology allows fraudsters to create personalized interactions that may seem genuine."
The OIEA noted that these AI agents often promote fake trading platforms or high-yield arbitrage bots. Once trust is established, the conversation typically moves to private channels where the extraction phase, often mimicking "pig butchering" tactics, begins.
Market Context
This alert follows a series of enforcement actions targeting social media manipulation. While traditional pump-and-dump schemes rely on hype, AI-driven fraud relies on fabricated intimacy. The regulator advised investors to independently verify the identity of any unsolicited contact, noting that AI tools can now generate deepfake video calls capable of bypassing traditional "proof of life" verification checks.