A-Team Insight Blogs

Key Takeaways from FINRA’s 2025 Oversight Report

FINRA released its 2025 Regulatory Oversight Report (the Report) in January, highlighting several rapidly evolving challenges confronting its member firms. From the rise of “deepfake” AI that empowers criminals to carry out convincing cyberattacks, to newly highlighted dangers of synthetic identity fraud, the Report details how malicious actors continue to adapt their methods. Alongside these warnings, the Report outlines a range of practical measures, such as multi-factor authentication, ongoing tabletop exercises, and staff training, that can help bolster firms’ security posture against these emerging threats.

The Report also explores rising risks in areas such as identity theft, small-cap IPO manipulation, and unvetted third-party vendors. It emphasizes that firms remain accountable for AI-powered tools, underscoring how longstanding supervisory rules still apply when automated communications or trade execution platforms are involved.

With clear guidance on updating supervisory procedures, conducting due diligence on vendors, and watching for manipulative trading patterns, the Report offers insights for member firms to address a continually evolving threat landscape.

Cyber-Enabled Threats

Cybersecurity incidents, including ransomware, account takeovers, and network intrusions, continue to be a major source of operational risk for firms, and threat actors employ increasingly sophisticated methods to compromise firm systems and data. The Report notes that cyberattacks now leverage generative artificial intelligence (Gen AI) to create “deepfake” media and polymorphic malware, a type of malware that rewrites its own code to evade signature-based antivirus detection. In addition, the Report highlights that criminals have shifted to more advanced tactics such as ransomware-as-a-service and “Quasi-Advanced Persistent Threats,” highly resourceful groups that are not necessarily state-sponsored.

Firms are advised to adopt appropriate controls—such as multi-factor authentication, strong data governance frameworks, and intrusion-detection tools—to ward off these evolving attacks. As outlined in the Report’s cybersecurity guidance, FINRA also advises frequent tabletop exercises, early and proactive incident reporting, and regular training of firm personnel. These measures help reduce the likelihood of successful account intrusions or unauthorized access to sensitive data, including any customer information protected under Regulation P.
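
Multi-factor authentication, the first control listed above, is commonly implemented with time-based one-time passwords. As an illustration only (the implementation below is not drawn from the Report), a TOTP check of the kind most authenticator apps use can be sketched from the RFC 6238 standard using only the Python standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 default).

    The Unix time is bucketed into `step`-second counters; the server and
    the user's authenticator derive the same short-lived code from a
    shared secret, so a stolen password alone is not enough to log in.
    """
    counter = for_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

In practice firms would rely on a vetted MFA product rather than hand-rolled code; the sketch is only meant to show that the second factor is a short-lived, secret-derived value rather than a static credential.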

Identity Theft and Fraud

The Report notes a rise in identity theft schemes, including “synthetic identity fraud,” where bad actors create and use fictitious identities built from randomly assembled personal data. This risk is heightened by AI-driven technologies that facilitate highly convincing identification documents and impersonations. FINRA emphasizes that firms’ customer identification programs (CIP) must remain vigilant for red flags, such as unverified Social Security numbers, mismatched identifying information, or multiple accounts sharing the same suspicious email or physical address.
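
One of the red flags above, multiple accounts sharing the same suspicious email or physical address, lends itself to a simple cross-account check. A minimal sketch (the field names and threshold are illustrative assumptions, not FINRA specifications):

```python
from collections import defaultdict

def flag_shared_identifiers(accounts: list[dict], min_shared: int = 3) -> dict:
    """Group account IDs by normalized email and address, and report any
    identifier reused across at least `min_shared` accounts for review."""
    by_identifier: dict[tuple, set] = defaultdict(set)
    for acct in accounts:
        for field in ("email", "address"):
            value = acct.get(field, "").strip().lower()  # normalize before comparing
            if value:
                by_identifier[(field, value)].add(acct["id"])
    return {key: ids for key, ids in by_identifier.items() if len(ids) >= min_shared}
```

A real customer identification program would normalize addresses far more aggressively (abbreviations, unit numbers) and feed matches into an analyst queue rather than acting on them automatically.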

Additionally, FINRA highlights scenarios where victims are induced to withdraw large sums from their brokerage accounts and transfer funds to illegitimate investment schemes. The Report underscores the value of staff training on detecting possible financial exploitation, especially among senior or vulnerable investors, and using tools such as temporary holds or outreach to a trusted contact person. Strengthening account-opening procedures, verifying unusual or significant withdrawals, and regularly reviewing automated approval processes are among the recommended steps to minimize exposure to identity theft and fraudulent activities.
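
Verifying unusual or significant withdrawals, as recommended above, often starts from a simple baseline comparison. A hypothetical rule (the thresholds are placeholders, not guidance from the Report) might flag a withdrawal for manual review when it is both large in absolute terms and far above the account's recent pattern:

```python
from statistics import mean

def is_unusual_withdrawal(
    amount: float,
    recent_withdrawals: list[float],
    multiple: float = 5.0,
    floor: float = 10_000.0,
) -> bool:
    """Flag a withdrawal for manual review if it exceeds an absolute floor
    and a multiple of the account's recent average withdrawal.
    Accounts with no withdrawal history are flagged above the floor alone."""
    if amount < floor:
        return False
    if not recent_withdrawals:
        return True
    return amount > multiple * mean(recent_withdrawals)
```

A flagged withdrawal would then trigger the human steps the Report describes, such as a temporary hold or outreach to the customer's trusted contact person.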

Manipulative Trading

The Report notes that small-capitalization IPOs and thinly traded securities remain prime targets for fraud. Manipulators use coordinated schemes to artificially “ramp” a stock’s price through social media hype, often using AI-enhanced imposter accounts or fabricated endorsements, before “dumping” their holdings at a profit. It also warns that such manipulative trading patterns can be difficult to detect if a firm’s supervisory systems are not calibrated to identify potential layering, spoofing, or wash trades.

FINRA encourages firms to implement tailored surveillance thresholds and cross-platform monitoring to spot unusual market activity. Enhanced due diligence of accounts associated with small-cap IPOs is essential, particularly when the accounts might be nominee or linked accounts. The Report points out that thorough reviews of suspicious order flow and prompt escalation of red flags can help firms stay ahead of manipulative practices designed to exploit newly listed or low-priced securities.
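
Of the patterns named above, wash trading has perhaps the simplest starting rule: the same beneficial owner on both sides of a matched execution. A minimal illustration (the record fields and two-second matching window are assumptions for the sketch, not FINRA-calibrated thresholds):

```python
def find_wash_trades(executions: list[dict], window_seconds: int = 2) -> list[tuple]:
    """Pair opposite-side executions in the same symbol by the same
    beneficial owner, at the same price, within a short time window."""
    suspects = []
    for i, a in enumerate(executions):
        for b in executions[i + 1:]:
            if (
                a["owner"] == b["owner"]          # same beneficial owner
                and a["symbol"] == b["symbol"]
                and a["side"] != b["side"]        # one buy, one sell
                and a["price"] == b["price"]      # no change in ownership economics
                and abs(a["ts"] - b["ts"]) <= window_seconds
            ):
                suspects.append((a, b))
    return suspects
```

Production surveillance would resolve nominee and linked accounts to a common owner first, which is exactly why the Report stresses enhanced due diligence on those account relationships.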

Governance and Supervision

Under longstanding rules like FINRA Rule 3110, firms must ensure their supervisory system is reasonably designed to achieve compliance with all applicable regulations, including those relating to AI-powered tools. The Report emphasizes that member firms cannot rely on AI-based systems—for example, automated communications or trade execution platforms—without confirming those systems meet the existing standards of supervision, recordkeeping, and investor protection. AI does not alter regulatory obligations; firms remain accountable for the outputs of any system they use.

FINRA highlights the importance of updating written supervisory procedures (WSPs) to reflect current practices involving AI or machine learning algorithms. The Report indicates that if AI-driven technologies are used in customer-facing contexts—such as personalized recommendations or account reviews—firms must oversee the communications and outputs just as they would any traditional supervisory channel. In particular, potential biases or inaccuracies stemming from algorithmic decision-making need to be addressed to ensure compliance with principles of fair dealing and data protection.
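
One way to picture the supervisory principle above is a gate that treats AI-drafted customer messages like any other correspondence: everything is archived for recordkeeping, and anything matching a prohibited-claim pattern is routed to principal review before use. The patterns and routing labels below are purely illustrative, not FINRA requirements:

```python
import re

# Illustrative only: phrases a firm's WSPs might route to pre-use review.
REVIEW_PATTERNS = [r"\bguarantee(d)?\b", r"\brisk[- ]free\b", r"\bcan'?t lose\b"]

def route_ai_message(message: str, archive: list[str]) -> str:
    """Archive every AI-drafted customer message, then either release it
    or queue it for principal review if it matches a flagged pattern."""
    archive.append(message)  # retained regardless of outcome (recordkeeping)
    for pattern in REVIEW_PATTERNS:
        if re.search(pattern, message, flags=re.IGNORECASE):
            return "needs_review"
    return "released"
```

Keyword screens are a blunt instrument; the point of the sketch is only that AI output enters the same archive-and-review pipeline as traditional communications.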

Third-Party Risk

The Report introduces a new topic on the third-party risk landscape, emphasizing that vendor-related outages or breaches can severely affect multiple firms simultaneously. As many vendors now incorporate AI into their services, assessing those providers’ data security controls is more critical than ever. The Report advises firms to maintain a current inventory of all third-party systems and services, verify that third-party contracts specify data protection expectations (including the return or destruction of data upon contract termination), and incorporate vendors into the firm’s broader incident response planning.

FINRA also underscores that thorough due diligence of vendors’ cybersecurity programs can uncover potentially weak controls before they cause system vulnerabilities. By periodically validating a vendor’s safeguards—particularly around AI-based solutions—firms can be more confident that any shared or entrusted customer data is not inadvertently exposed through unauthorized usage of GenAI models or other evolving technologies. Enhanced third-party oversight thus remains key to preventing large-scale disruptions and maintaining business continuity.
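
The vendor-inventory practices above can be captured in a simple record per vendor, with a periodic sweep for gaps. A sketch under assumed field names (the one-year review cadence is a placeholder, not a FINRA requirement):

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Vendor:
    name: str
    uses_ai: bool                      # does the service embed AI/GenAI?
    data_destruction_clause: bool      # contract covers return/destruction of data
    last_security_review: Optional[date]

def vendors_needing_attention(
    vendors: list[Vendor],
    max_review_age_days: int = 365,
    today: Optional[date] = None,
) -> list[Vendor]:
    """Return vendors missing a data-destruction clause or whose last
    security review is absent or older than the allowed age."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_review_age_days)
    return [
        v for v in vendors
        if not v.data_destruction_clause
        or v.last_security_review is None
        or v.last_security_review < cutoff
    ]
```

Flagged vendors, especially those with `uses_ai` set, would then be candidates for the deeper cybersecurity due diligence the Report describes.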

Regulatory Notices and Guidance

The Report references multiple FINRA and SEC notices that outline regulatory expectations around AI, cybersecurity, and fraud prevention. In particular, FINRA’s Regulatory Notice 24-09 provides targeted guidance for firms contemplating or already using Gen AI systems. This notice, along with other resources spotlighted in the Report, urges firms to engage with their risk monitoring analysts or take advantage of interpretive guidance channels to clarify any obligations or uncertainties.

FINRA encourages member firms to stay current on updates from multiple regulatory bodies to ensure their supervisory systems adapt to emerging AI and cybersecurity risks. By actively reviewing new alerts, risk notices, and best practices documents, firms can remain well-informed of tactics employed by threat actors and effectively guard against both customer-facing and operational vulnerabilities.
