Earlier this month, early-stage RegTech startup Castellum.AI closed an oversubscribed US$8.5 million Series A round led by Curql alongside other bank-backed investors. Management says the cash will turbocharge its platform, which fuses first-party risk data, a high-speed screening engine and “agentic” AI assistants.
“Compliance teams are drowning in false positives while financial crime slips through the cracks. We’ve eliminated the trade-off between accuracy and speed,” says Peter Piatetsky, CEO of Castellum.AI.
False-positive fatigue
Across financial services, roughly 99 percent of AML/KYC alerts prove to be false leads, yet institutions must still investigate each one. That reality drains analyst hours, delays onboarding and, paradoxically, allows sophisticated crime to hide in the noise. The promise underpinning Castellum’s raise is simple: replace repetitive alert triage with AI agents that can reason, cite sources and leave a defensible audit trail in seconds.

Castellum.AI’s platform is underpinned by a proprietary global risk dataset that goes beyond off-the-shelf watchlists. Rather than aggregating third-party feeds, Castellum ingests sanctions, politically exposed persons (PEP), adverse media mentions and corporate ownership records directly from issuing authorities. That information flows through a patented enrichment pipeline that standardizes and validates each record before it ever reaches an analyst’s dashboard, ensuring that every alert is backed by the original source documents rather than an intermediary’s interpretation.
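To make the idea concrete, here is a minimal sketch of what one standardize-and-validate stage of such an enrichment pipeline could look like. Everything below – the schema, field names and rules – is a hypothetical illustration, not Castellum’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """A screening record normalized to one schema, with provenance kept."""
    entity_name: str
    record_type: str          # e.g. "sanction", "pep", "adverse_media"
    issuing_authority: str    # e.g. "OFAC", "EU Council"
    source_url: str           # link back to the original source document
    aliases: list = field(default_factory=list)

def standardize(raw: dict, authority: str, source_url: str) -> RiskRecord:
    """Validate and normalize a raw record pulled from an issuing authority."""
    name = raw.get("name", "").strip()
    if not name:
        raise ValueError(f"record from {authority} is missing an entity name")
    return RiskRecord(
        entity_name=name.upper(),   # canonical casing for matching
        record_type=raw.get("type", "sanction"),
        issuing_authority=authority,
        source_url=source_url,
        aliases=[a.strip().upper() for a in raw.get("aka", [])],
    )
```

The key property is that the original source reference travels with the record, so any downstream alert can cite the issuing authority directly.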
At the heart of the system lies a native screening engine engineered for real-time responsiveness. Where many vendors schedule list updates daily or hourly, Castellum’s engine refreshes its entire database of risk indicators every five minutes. This design choice addresses the reality that sanctions and PEP designations can change on very short notice, and that even a few hours’ delay can expose institutions to inadvertent compliance breaches or enforcement actions.
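In outline, that cadence amounts to a simple polling loop. The sketch below is a generic illustration – `fetch_lists` and `rebuild_index` are stand-ins for real ingestion and indexing steps, not Castellum’s API:

```python
import time

REFRESH_INTERVAL_SECONDS = 5 * 60  # the five-minute cadence described above

def refresh_cycle(fetch_lists, rebuild_index):
    """Poll issuing authorities and rebuild the screening index on a fixed cadence.

    A production system would also diff the lists, alert on fetch failures
    and re-screen open cases against any newly added designations.
    """
    while True:
        started = time.monotonic()
        records = fetch_lists()    # pull sanctions/PEP data at the source
        rebuild_index(records)     # swap in the fresh index atomically
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, REFRESH_INTERVAL_SECONDS - elapsed))
```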
Above this data and engine layer sit the company’s signature agentic AI assistants. These agents do more than simply run prompts against a language model; they orchestrate multistep workflows, identifying which screening rules to apply, invoking the right enrichment tools and ultimately drafting initial narratives for suspicious activity reports (SARs).
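A heavily simplified sketch of that orchestration pattern follows; each callable stands in for a specialised agent or tool, and all names are invented for illustration:

```python
def triage_alert(alert, screen, enrich, draft_narrative):
    """Run one alert through screening, enrichment and narrative drafting,
    recording what was done and why at every step."""
    steps = []

    matches = screen(alert)                       # first-pass name screening
    steps.append(("screen", f"{len(matches)} candidate matches"))
    if not matches:
        return {"decision": "clear", "steps": steps}

    evidence = [enrich(m) for m in matches]       # pull source documents
    steps.append(("enrich", f"{len(evidence)} records enriched"))

    narrative = draft_narrative(alert, evidence)  # draft, never file, a SAR
    steps.append(("draft", "narrative ready for human review"))
    return {"decision": "escalate", "narrative": narrative, "steps": steps}
```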
Curql, BTech Consortium and Framework Venture Partners are investors backed by the kinds of institutions that match Castellum’s target customer profile. The funding will expand the engineering, customer success and partnership teams while “layering agents over legacy vendor stacks” to lower switching costs for risk-averse institutions.
Inside the agentic playbook
The company recommends treating each AI agent like a new team member: feed it the organisation’s standard operating procedures and strictly limit its access to data sources that have been quality-vetted, so that agents start on solid, compliant ground.
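That onboarding advice translates naturally into configuration. A hypothetical example – every name and field here is invented to illustrate the principle:

```python
# Onboard a new screening agent the way you would a new hire: hand it the
# firm's SOPs and whitelist only quality-vetted data sources and actions.
SANCTIONS_AGENT_CONFIG = {
    "name": "sanctions_screening_agent",
    "standard_operating_procedures": "docs/sanctions_screening_sop.md",
    "allowed_sources": ["ofac_sdn", "eu_consolidated", "un_security_council"],
    "denied_actions": ["file_sar", "close_alert"],   # reserved for humans
    "review_checkpoint": "level_1_analyst",
}
```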
Rather than building one monolithic model to tackle every task, Castellum.AI advises deploying a constellation of specialised “micro-agents,” each responsible for a single, discrete function – whether that’s first-pass name screening or drafting a suspicious activity report narrative. This modular structure not only simplifies testing and validation but also makes troubleshooting more straightforward when an individual agent’s performance drifts.

Crucially, every automated action is designed to be interruptible: human reviewers sit at predefined checkpoints, and every decision is recorded in a version-controlled log. That combination of gate-controlled workflows and immutable audit trails is built to satisfy even the most demanding regulators, giving compliance teams confidence that they can scale AI-driven efficiency without sacrificing oversight.
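One common way to realise that pairing of human gates and tamper-evident logs is an append-only record in which each entry hashes the one before it. The sketch below illustrates the general pattern only; it is not Castellum’s implementation:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log; each entry chains to the previous entry's hash,
    so any after-the-fact edit is detectable."""
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, detail: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"ts": time.time(), "actor": actor,
                 "action": action, "detail": detail, "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

def gated_close(alert_id: str, agent_verdict: dict,
                reviewer_approves, log: AuditLog) -> str:
    """An agent may propose closing an alert; a human checkpoint decides."""
    log.record("agent", "propose_close", {"alert": alert_id, **agent_verdict})
    if reviewer_approves(alert_id, agent_verdict):   # the human gate
        log.record("human", "approve_close", {"alert": alert_id})
        return "closed"
    log.record("human", "reject_close", {"alert": alert_id})
    return "escalated"
```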
Taken together, the architecture mirrors emerging regulator rhetoric that AI must remain explainable, testable and under ultimate human control.
Explainability as a compliance edge
Castellum dedicated an entire April insight piece to the “compliance paradox”: institutions need advanced analytics, yet regulators refuse black-box outputs. Its answer is explainable AI that pairs deterministic rules with narrative justifications for every match or clearance, mapping each decision back to source data. By aligning with OFAC’s sanctions compliance framework, FATF recommendations and OCC/NCUA model risk guidance, the company pitches transparency as a competitive moat rather than a box-ticking exercise.
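In practice, “explainable” output of this kind reduces to a decision record that carries its own reasoning and citations. A hypothetical example of what such a record might contain:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One screening decision, explainable end to end."""
    alert_id: str
    outcome: str          # "match" or "clear"
    rule_id: str          # the deterministic rule that fired
    justification: str    # plain-language reasoning a regulator can read
    source_refs: list     # citations back to issuing-authority documents

decision = Decision(
    alert_id="ALT-1042",
    outcome="clear",
    rule_id="DOB_MISMATCH_V3",
    justification=(
        "Candidate shares a surname with a listed party, but date of birth "
        "and nationality differ from the cited sanctions list entry."
    ),
    source_refs=["<url of the issuing authority's list entry>"],
)
```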
A Banking-as-a-Service sponsor bank recently reported that deploying Castellum.AI’s agentic platform cut its alert review costs by 88 percent – a dramatic reduction that slashed operational overhead and freed staff for higher-value work.
Similarly, a Fortune 50 corporation saw its level-one KYC review cycle time shrink by 83 percent, accelerating customer onboarding and tightening time-to-revenue without compromising compliance rigor.
Meanwhile, a mid-tier regional bank achieved a 94 percent reduction in false-positive alerts on its screening workflows, according to a Business Wire announcement – illustrating how more precise detection can sharply lower wasted investigation effort.
Equally eye-catching is the claim that Castellum’s agent passed a CAMS practice exam on the first try, signalling that the model can internalise formal regulatory curricula, not just technical heuristics.
RegTech budgets remain under pressure, yet supervisory scrutiny is intensifying. Castellum’s thesis is that agentic, explainable AI sitting transparently atop proprietary data offers a way to square that circle. Early evidence suggests institutions can slash operational drag without sacrificing auditability, and investors are betting that those gains will prove sticky.
Still, prudence is warranted. Even Castellum’s own experts emphasise that AI agents demand continuous QA, lineage documentation and robust human intervention points to stay regulator-ready. Compliance leaders evaluating the technology should focus due diligence on the questions below (a short code sketch of such checks follows the list):
- Decision log depth: Does every auto-closed alert carry a machine-readable justification?
- Model version controls: Can the institution roll back an agent if behaviour drifts?
- Data stewardship: How frequently are sanctions and PEP lists refreshed and reconciled?
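For instance, a due-diligence harness might assert those properties on a sample of auto-closed alerts. The field names below are purely illustrative:

```python
def audit_gaps(entry: dict) -> list:
    """Check one auto-closed alert's log entry against the three questions above."""
    problems = []
    if not entry.get("justification"):
        problems.append("no machine-readable justification")
    if not entry.get("model_version"):
        problems.append("no model version recorded, so rollback cannot be traced")
    if not entry.get("list_snapshot_ts"):
        problems.append("no timestamp for the sanctions/PEP lists screened against")
    return problems
```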
Those questions will determine whether agentic systems become a true force multiplier or simply another layer of complexity.
Castellum.AI’s capital raise and development path paint a consistent picture: the firm is betting that explainable AI agents, armed with its own risk datasets and screening engine, can turn today’s alert glut into actionable intelligence. With clients-turned-investors at the table and early ROI metrics on the board, the next 12 months will show whether the agent model can replicate those early wins at scale and under the watchful eye of global regulators.