Financial supervisors are increasingly convinced that Suspicious Activity Reports (SARs) drafted with the help of large language models (LLMs) are delivering higher-quality disclosures. At a recent webinar hosted by IMTF and AML Intelligence, compliance experts said that banks using LLMs to assist with report writing have seen improvements in the accuracy, context and readability of their SARs.
Elevating the narrative
Avalon Ingram, APAC Head of Payments at SWIFT, explained that where institutions have deployed LLM-driven workflows in SAR preparation, regulators have highlighted the improved “narrative quality” and data completeness. “We’re seeing that the information being delivered is far more accurate, and the context around each case is significantly better,” she said. The implication: machine-assisted drafting may reduce false positives and enrich the story behind each investigation.
Not a replacement, but a force multiplier
While enthusiasm is rising, the experts emphasised that human oversight remains critical. Gion‑Andri Büsser, Co-CEO of IMTF, argued that financial institutions should adopt a “hybrid model” that combines legacy systems, expert analysts and targeted AI modules for specific use cases such as alert triage and pattern detection. “You don’t need to discard everything you’ve built so far,” Büsser remarked. “Focus on introducing AI into a distinct problem-space, such as investigative narrative writing or alert qualification.”
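The hybrid pattern Büsser describes can be pictured as a thin orchestration layer: an AI module drafts, a human analyst decides. The sketch below is purely illustrative and assumes a pluggable llm callable and hypothetical case fields; it does not describe IMTF’s product or any specific bank’s workflow.

```python
# Illustrative sketch only: a human-in-the-loop drafting step for a SAR narrative.
# The `llm` argument is any callable mapping a prompt string to draft text;
# the case fields and review step are hypothetical, not any vendor's actual API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CaseFacts:
    customer_id: str
    alert_type: str
    summary_of_activity: str


def draft_sar_narrative(case: CaseFacts, llm: Callable[[str], str]) -> str:
    """Ask the model for a first-draft narrative grounded only in the supplied facts."""
    prompt = (
        "Draft a concise SAR narrative using only the facts below. "
        "Do not speculate beyond them.\n"
        f"Customer: {case.customer_id}\n"
        f"Alert type: {case.alert_type}\n"
        f"Activity: {case.summary_of_activity}\n"
    )
    return llm(prompt)


def analyst_review(draft: str) -> str:
    """Placeholder for the mandatory human step: the analyst edits and approves the draft."""
    print("--- Draft for analyst review ---")
    print(draft)
    edited = input("Edit or approve the narrative before filing: ")
    return edited or draft


if __name__ == "__main__":
    # A dummy model stands in for a real LLM so the sketch runs end to end.
    dummy_llm = lambda prompt: "Between 1 March and 15 April, the customer ... [draft text]"
    case = CaseFacts("C-104392", "structuring",
                     "Repeated cash deposits just under reporting thresholds.")
    final_narrative = analyst_review(draft_sar_narrative(case, dummy_llm))
```

The structural point is the second function: the model’s output never reaches the filing without an analyst decision, mirroring the augment-not-replace position taken throughout the webinar.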
Why this matters
- Efficiency boost: Where manual reviews struggle with millions of records, AI-powered tools can pinpoint relevant suspicious behaviours in milliseconds.
- Quality lift: Enhanced narratives and better structured reports mean supervisory bodies are getting richer, more coherent disclosures.
- Strategic risk tool: Institutions are shifting from firefighting alerts to proactively managing typologies, aided by AI-enhanced analytic engines.
What’s next
Compliance leaders will need to address data integrity, model governance and ethics. For maximum benefit, organisations must align their AI deployments with both regulatory expectations and internal controls. The best results come from using LLMs to augment, rather than replace, analyst judgement and regulatory expertise.
In short: when it comes to writing SARs, banks and regulators alike are increasingly saying that human plus machine is the winning combination.
