AI Washing – Why Regulators Demand Proof, Not Promises from Advisers and Broker-Dealers

Overview

Artificial intelligence has become the financial industry’s favorite buzzword, but it is also a growing regulatory risk. The Securities and Exchange Commission (SEC) has made clear that exaggerating or misrepresenting how a financial firm uses AI is not simply aggressive marketing; it is a potential securities law violation. Since March 2024, the SEC has brought enforcement actions against investment advisers, public issuers, and even an individual startup founder, all tied to false or misleading AI claims. For financial firms and their compliance professionals, the lesson is that any reference to AI must be treated with the same rigor as performance disclosures or risk factors.

What the SEC Means by “AI Washing”

“AI washing” occurs when firms inflate the sophistication of their tools, label routine automation as “artificial intelligence,” or imply proprietary, innovative models without substantiation. Statements such as “fully automated” or “first AI-driven advisor” can easily cross into misleading territory if humans remain involved in the process or if third-party technology is actually driving the results. SEC leadership has explicitly warned the industry that AI claims must be balanced, accurate, and supported by evidence.

Recent Enforcement Lessons

The SEC’s recent actions provide a roadmap for where compliance risk lies:

  • Investment Advisers (Delphia & Global Predictions – March 2024): Both firms overstated their use of AI in marketing materials, claiming unique predictive capabilities that did not exist. The cases resulted in a combined $400,000 in penalties and highlighted that the Advisers Act Marketing Rule (Rule 206(4)-1) applies squarely to AI references in websites, social media, and investor materials.
  • Public Issuers (Presto Automation – January 2025): A public company touted its “voice AI” product but failed to disclose heavy reliance on a third-party vendor and significant human intervention. While the company avoided penalties by cooperating, the case underscores that AI claims in SEC filings and public statements must accurately describe functionality and dependencies.
  • Private Companies and Individuals (SEC v. Saniger – April 2025): A founder raised $42 million by claiming an app used AI to process purchases, when contractors were doing the work manually. The SEC pursued fraud charges directly against the individual. This demonstrates that private companies and their executives are not shielded from liability.

Emerging Enforcement Themes

Several important themes emerge from these developments. First, substantiation is not optional: firms must be able to demonstrate precisely how artificial intelligence is being used and support those claims with clear documentation. Second, public disclosures must accurately reflect reality, giving investors and clients truthful descriptions of automation levels, the extent of human oversight, and any reliance on third-party systems. Finally, compliance with the SEC’s Marketing Rule remains critical: advisers cannot make exaggerated or unsupported claims about AI in their advertising or client communications, and broker-dealers face parallel obligations under FINRA’s public communications rules. Together, these themes underscore that regulatory expectations around AI use are grounded in accuracy, transparency, and accountability.

Compliance Imperatives

For compliance teams, AI claims must be integrated into existing governance and review frameworks. Key steps include the following, with an illustrative sketch of a pre-clearance check after the list:

  • Governance & Accountability: Designate ownership of AI-related claims and require pre-clearance before any external reference is published.
  • Substantiation & Documentation: Maintain dossiers that detail where and how AI is used, including reliance on third-party providers.
  • Marketing Rule Controls: Apply the same standard to AI claims as to performance results to ensure a reasonable basis exists.
  • Vendor Oversight: Confirm that vendor claims about AI are accurate; require warranties and audit rights in contracts.
  • Monitoring & Training: Build AI-specific review checkpoints into workflows and train staff on red flags and recent enforcement actions.
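To make these controls concrete, here is a minimal, hypothetical Python sketch of an automated pre-clearance check run over an AI-claims inventory. Everything in it (the AIClaimRecord fields, the flag rules, and the example entry) is an illustrative assumption, not a regulatory standard or any firm’s actual system; a real program would layer legal and compliance review on top of any such automation.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIClaimRecord:
        """One externally published AI-related statement and its compliance trail."""
        claim_text: str                     # exact external statement
        channel: str                        # e.g. "website", "social", "pitch_deck"
        owner: str                          # accountable reviewer for this claim
        substantiation_docs: list[str] = field(default_factory=list)  # dossier refs
        third_party_vendors: list[str] = field(default_factory=list)  # disclosed reliance
        human_in_the_loop: bool = False     # do humans intervene in the "AI" process?
        precleared: bool = False            # compliance sign-off before publication
        last_reviewed: date | None = None

    def flag_claim(record: AIClaimRecord) -> list[str]:
        """Return review flags for one claim; an empty list means clear to publish."""
        flags = []
        if not record.precleared:
            flags.append("no pre-clearance sign-off")
        if not record.substantiation_docs:
            flags.append("no substantiation dossier on file")
        if record.last_reviewed is None:
            flags.append("claim has never been reviewed")
        # Absolute language is high risk when humans actually remain in the loop.
        if record.human_in_the_loop and "fully automated" in record.claim_text.lower():
            flags.append("says 'fully automated' but humans are in the loop")
        return flags

    # Usage: run the check across the full inventory before each marketing release.
    inventory = [
        AIClaimRecord(
            claim_text="Our platform is fully automated by proprietary AI.",
            channel="website",
            owner="compliance@example.com",
            third_party_vendors=["Acme NLP Inc."],
            human_in_the_loop=True,
        ),
    ]
    for rec in inventory:
        for flag in flag_claim(rec):
            print(f"[{rec.channel}] {flag}")

In practice, an inventory like this would feed the monitoring checkpoints described above, and the pre-clearance flag would gate publication in the firm’s marketing workflow.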

Key Takeaways

The SEC’s enforcement actions show that advisers, broker-dealers, and issuers can face scrutiny and liability for overstating AI capabilities. Financial firms must ensure AI references are documented, substantiated, and vetted through the same controls applied to financial performance and risk disclosures. To help prioritize action, the following takeaways highlight practical steps every firm that uses AI should implement:

  • AI references equal high-risk disclosures. Treat them with the same rigor as performance, risk, or fee disclosures.
  • Substantiate everything. Maintain evidence of how, where, and to what extent AI is used, including third-party dependencies.
  • Marketing and public communications rules apply. Websites, social posts, and sales decks are advertisements and must meet the standards of Advisers Act Rule 206(4)-1 or, for broker-dealers, FINRA’s communications rules (Rule 2210).
  • Governance is critical. Assign ownership of AI claims, implement checkpoints, and require pre-clearance before any external statement is released.

Firms should move quickly to inventory, validate, and monitor all AI-related claims. The SEC has made clear that “AI washing” is an enforcement priority, and financial firms that fail to align marketing with reality face significant regulatory and reputational risk.