The Saifr-sponsored whitepaper, “From Caution to Action: How Advisory Firms are Integrating AI in Compliance,” published in November, explored a number of key themes surrounding the adoption of generative AI (GenAI) enabled technologies for compliance by advisors and wealth management companies. We recently covered the theme of in-house versus vendor-supplied solutions in an interview with Saifr CEO and co-founder Vall Herard.
Another topic examined survey participants’ attitudes toward, and enthusiasm for, AI in compliance. To dig deeper into this aspect of the survey, RegTech Insight interviewed Jon Elvin, Strategic Risk Advisor at Saifr. With over 30 years of experience in compliance, risk management, AML, BSA, and sanctions, Jon is uniquely positioned to share insights on this topic.
RegTech Insight: Having looked at the report, I noticed one bar chart that stood out: several of the seventeen survey participants responded with a wait-and-see attitude towards adoption. Perhaps we could start there?
Jon Elvin: I was surprised to see so many people hesitant to move forward and really taking a wait-and-see approach. In financial services, particularly where there are heavy regulatory compliance implications, like anti-money laundering, risk, and reputation, there are two schools of thought.
On the business side, the questions are about business generation: how can you use AI to understand more about customers? How can you bring more products and services, more actionable insights, to the forefront?
There’s definitely a camp in financial services that completely embraces AI. Every chief technology officer wants to show they’re taking advantage of efficiency gains. If they can build stronger relationships with clients, generate more revenue and reduce their regulatory risk, that’s their direction.
At the same time, there’s the question of how to bring AI into exploration or experimentation responsibly, because AI can have very positive effects, but also negative ones, like disparate impact on customers.
My view, from being a practitioner and policymaker at a large US financial institution, is that the decision to explore AI carries personal liability as well as regulatory risk. As a Chief BSA Compliance Risk Officer, if you step outside the pack and do something new, you might get asked, “How do you know it’s not having a disparate impact?” There are efficiency gains with robotics and so forth, but the real questions are how AI might make the program more effective, reduce cycle times, or potentially conflict with the historical, rules-based version of AML.
Transaction monitoring in every AML program deals with false positives. That’s a huge frustration. When you explore AI, you have to think about the person with the most personal liability for the good, bad, and ugly: someone in my old role, the BSA officer.
I’ll walk you through that dichotomy. Regulators do promote innovation. They have office hours where vendors or financial institutions can share ideas and early returns. But they also talk about “responsible innovation,” meaning if you’re going to change an existing process, whether transaction monitoring, KYC attributes, or adverse media, they expect you to run things in parallel for a while to compare apples to apples. The challenge is that, from a day-to-day standpoint, I might not be staffed to run two processes at once, yet that is exactly what embarking on that course requires.
Another risk: if the AI experiment shows significant value in efficiency, effectiveness, or precision, great. But what if it shows my current process is completely broken? That invites additional scrutiny of deficiencies in my program.
RegTech Insight: If AI exposes weakness in the existing process, and you show that to the regulator, and it reveals you’ve been wrong for 10 years, that’s not a good signal; it becomes a liability.
Jon Elvin: Exactly. Think of a cartoon with someone on one side saying, “Do it,” and someone else saying “Danger, danger.” That’s what it feels like for someone with those responsibilities. As soon as I start such an experiment and put it on paper, it goes on a project sheet and becomes visible to management. Then business and audit might say, “Why are we doing this? The regulators never had an issue with our current process.” Regulators say they support you, but then they watch closely, and that can bring risk.
There’s also human nature for managers and analysts who worry AI might eliminate operational jobs. Some may not fully embrace innovation, or they challenge early results because they fear losing their teams or positions. I’ve seen that repeatedly.
When I saw that many survey respondents preferred the safety of the pack, it aligned with the idea that if you’re not well-positioned or well-tenured, or if you don’t have total buy-in from management, you might not push it. You might not want to ask for additional budget or invite scrutiny, especially if you’re just finishing a regulatory exam.
I’ve heard compliance officers say, “Regulators haven’t had a problem with how we’ve been doing it, so I’ll accept the status quo,” even though they know their current process isn’t thorough. They’re hesitant for various reasons.
RegTech Insight: I think you’re talking about “If it ain’t broke, don’t fix it,” alongside “Don’t rock the boat,” because rocking the boat means you’re going to come under scrutiny.
Jon Elvin: Exactly. Also, depending on the maturity of and stakeholders in that firm, it’s about the personality, intent, and vision of the BSA officer. I always supported innovation, but I understand why some don’t. They view it as a risk or a source of undue scrutiny, so they become complacent. Longer-tenured ones might just wait for someone else to do it first, but I think that is short-sighted. The risks aren’t going away; they’re only getting more complex and the bad actors smarter. So, I think you should always be exploring ways to improve.
RegTech Insight: That concern came up at our RegTech Summit in New York last November. One school of thought encourages interaction with regulators, but the counter is: if you show them what you’re doing, you might expose deficiencies. If your process is fundamentally broken, you’re sitting on a time bomb. Eventually, something breaks, the regulator comes in, and they know how to dig and where to look. We see that with enforcement actions around surveillance and record keeping failures.
Jon Elvin: Yes, those are anchor points for several tier-1 firms. There was a case where a BSA officer decided to tackle human trafficking, doing great work for society, but they took their eye off the ball in other areas and got hit with an enforcement order. In another case, a small bank in the Midwest had a BSA officer with no experience and no specialized training, who didn’t recognize the basics.
This goes to the intent, vision, and tenure of the BSA officer. If they don’t realize what they need to do or keep blinders on, the same problems happen. Even large institutions can have the wrong focus.
Firms have to adapt as conditions change: shifting customer demographics, evolving bad-actor typologies, and new technology to embrace.
AI can be a force multiplier here. We shouldn’t expect a magic button for everything. You don’t have to solve everything overnight, but you can target some areas. If I come in with a plan, maybe five main areas I’m focusing on, like adverse media or transaction monitoring, and say, “I’m going to use AI to explore and experiment for effectiveness on what sits at the core of AML: providing quality referrals to law enforcement, being efficient, and protecting customers and employees,” and I clearly lay out what I’m going after, it reduces anxiety.
Another point: some board members, senior managers, and AML program leaders don’t truly understand what’s in that broad AI category. They’re embarrassed or hesitant to ask for help, or don’t know what questions to ask. You need courage to bring in insight.
Finally, whatever you experiment with, you either quickly find out it doesn’t work, or it does work and you figure out how to deploy it. But you must monitor it in case it degrades or shows disparate impact. With AI, you can make a mistake consistently and fast, so you need a recovery model. You need a way to undo it.
Understand that you won’t solve everything right away, but you have to have courage and candour to try, and each time you get closer to that impactful difference-maker.
RegTech Insight: This lack of understanding was apparent from the survey when we focused on generative AI. Some compliance officers really knew their jobs, but they’d been given bad information and assumed everything was open source or ChatGPT. They were worried about data security, privacy, and all kinds of issues, without even discussing specific use cases. It’s important to ask, “What are we trying to do?”, as in your example of picking five things.
Jon Elvin: A parallel is virus scanning. Years ago, you might run it a couple of times a year, then monthly, and now we run it continuously. In adverse media, many organizations still just screen customers at onboarding. Then they risk-rate them as low, medium, or high, and the regulators say to do it on a risk-based schedule. But if you’re at low- or medium-risk, you might only get screened every few years. That’s like doing virus detection every few years.
We have technology now that can continuously screen millions of customers. For example, for one client, SaifrScreen uses AI to monitor around 50 million customers in real time, every day.
A model that only checks a client every two or three years is no longer sufficient. A customer might be fine today, but not next month. Yet many institutions accept that older approach because it became the norm. Some will embrace new technology to improve coverage, reduce costs, and gain efficiency. Others will wait for a big problem or a regulatory nudge, or until the solution is so mainstream they can’t ignore it. I would advise being on the side of trying new technology.
RegTech Insight: A couple of survey participants were genuinely fearful of AI—fearful for their jobs and teams. When you’re in that mindset, innovation can feel like a threat.
Jon Elvin: Sometimes, if you want to explore something, it helps to pull a business-process expert, an experienced AML compliance person, and a skilled technologist out of their day-to-day roles to form an accelerated solution team for a few weeks. That’s like the tech sprints or contests where people are locked in a room for 72 hours to see what they come up with on the product side. Why not do that on the AML side?
You’ll need a nimble, open-minded BSA officer who can adapt quickly. All the business folks want to run full speed. You have to do it in a risk-responsible way: be able to explain it, measure it, monitor it, and undo it if needed. That’s the framework to move forward successfully.