
Synthetic Compliance Scenarios: A Preview

Building test scenarios by hand for transaction monitoring, sanctions screening, or reserve rules is slow and hard to repeat. We're previewing a faster way — AI-generated synthetic scenarios you can run against your rule pack, with a built-in audit trail. Here's what it does today and what it doesn't yet do.

Status: Preview (capability lifecycle stage)
Shipped: catalogue, dossier export, audit ledger
Deferred: engine scoring, pilot-data integration

If you've ever had to build a test scenario to check whether your transaction monitoring rules actually catch a money-laundering pattern, you know the drill. You sketch a counterparty graph on a whiteboard. You cook up plausible transactions in a spreadsheet. You write down what the rule should do. You email it to a colleague for review. Three months later, someone asks if your rules still behave the same way — and nobody can quite reproduce what you built.

This post is a preview of something we're building to change that. It's a pre-production capability today — visible in the platform, usable by early reviewers, not yet something we'd hand to a regulator on its own.


The Problem

Compliance scenario analysis sounds fancy, but it's usually just this: you invent realistic-looking bad behaviour and check whether your controls notice. Smurfing. Sanctions evasion through shell counterparties. A reserve composition that drifts past its policy limit.

The awkward truth is that most teams build these scenarios by hand, one at a time. Each one is a mini-project. Once it's done, it's hard to reproduce, hard to version, and hard to re-run when your rule pack changes. Your scenario library ends up being a folder of one-offs rather than something you can routinely exercise.

Every junior compliance officer we've talked to has the same war story: they built a test scenario, it lived in a spreadsheet, and six months later nobody could prove it was the same scenario anymore.

What Synthetic Scenarios Do

A synthetic scenario is just a named, versioned test case — counterparties, wallets, transactions, sanctions profiles, and the outcome you'd expect a good rule pack to produce. The "synthetic" part means none of it is real customer data. The "scenario" part means someone has already written down what a sensible rule should do with it.
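One way to picture the shape of a scenario as data (the field names here are illustrative, not our actual schema):

```python
from dataclasses import dataclass

# Illustrative sketch only -- these field names are hypothetical,
# not the platform's real schema.
@dataclass(frozen=True)
class SyntheticScenario:
    name: str                  # e.g. "layering-via-shells"
    version: str               # scenario definitions are versioned, like code
    counterparties: tuple      # synthetic entities, no real customer data
    transactions: tuple        # ordered synthetic transfers
    sanctions_profiles: tuple  # which counterparties look sanctioned
    expected_outcome: str      # what a well-built rule pack should do
```

The point of the `expected_outcome` field is that the judgment call is written down up front, alongside the data, rather than living in someone's head.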

Two things make this better than the spreadsheet version:

The same inputs always produce the same dataset. Click "generate" today, click "generate" in six months, and if the code hasn't changed, you get an identical test case. This sounds boring until an auditor asks, "is this the same scenario you ran last quarter?" With our version, the answer is a short string match.
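The mechanics behind that string match are simple: seed the generator, then hash a canonical serialisation of the output. A minimal sketch of the idea (function names and fields are hypothetical, not our implementation):

```python
import hashlib
import json
import random

def generate_scenario(template_id: str, seed: int) -> dict:
    """Deterministic generation: same template + seed -> identical dataset."""
    rng = random.Random(seed)  # seeded RNG, so the output never drifts
    return {
        "template": template_id,
        "counterparties": [f"cp-{rng.randrange(10**6):06d}" for _ in range(3)],
        "amounts": [round(rng.uniform(1_000, 9_999), 2) for _ in range(5)],
    }

def fingerprint(scenario: dict) -> str:
    """Stable hash of canonical JSON -- the 'short string match'."""
    canonical = json.dumps(scenario, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]
```

Run it today and in six months: same seed, same fingerprint. Change anything about the scenario and the fingerprint changes, which is exactly what an auditor wants to see.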

AI does the boilerplate. Instead of a consultant hand-authoring every typology, a template — "layered transfers through a few intermediaries, ending near a sanctioned address" — generates the counterparty names, the timings, and the narrative automatically. Your senior reviewers spend their time on the judgment calls, not on inventing plausible-sounding wallet addresses.
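To make the template idea concrete, here is a toy expansion of the layering typology mentioned above. Everything here is hypothetical scaffolding, not the production generator:

```python
import random

def expand_layering_template(seed: int, n_hops: int = 3) -> list:
    """Hypothetical template expansion: a chain of transfers through
    intermediaries, ending at an address flagged as sanctioned."""
    rng = random.Random(seed)
    parties = [f"wallet-{rng.randrange(16**8):08x}" for _ in range(n_hops + 2)]
    transfers = []
    amount = rng.uniform(50_000, 100_000)
    for src, dst in zip(parties, parties[1:]):
        amount *= rng.uniform(0.97, 0.995)  # small skims at each hop
        transfers.append({"from": src, "to": dst, "amount": round(amount, 2)})
    transfers[-1]["to_sanctioned"] = True   # final hop nears a listed address
    return transfers
```

One template, many seeds, many distinct but structurally identical test cases. The human review effort goes into whether the template captures the typology, not into inventing wallet addresses.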

Why the Audit Trail Matters

A test scenario that nobody can prove was generated is a demo, not a compliance artefact. So every time someone on your team exports a scenario dossier, we save a record: who clicked the button, when, against which version of your rule pack, and a fingerprint of the scenario itself. The record is append-only — nobody can quietly edit it later.

That record is what turns a test case into something you can walk a reviewer through. "Yes, Priya on my team ran scenario X on April 12 against rule pack v3.1. Here's the fingerprint. Here's the same one today — same fingerprint, so you know the test hasn't drifted."
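"Append-only" is easy to claim and easy to check if each record hashes the one before it: editing any entry breaks every hash after it. A sketch of that idea, not our production ledger:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLedger:
    """Hash-chained append-only log: each entry commits to the previous
    one, so a silent edit anywhere breaks the chain. Illustrative only."""

    def __init__(self):
        self.entries = []

    def record_export(self, user: str, rule_pack: str, fingerprint: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        entry = {
            "user": user,
            "at": datetime.now(timezone.utc).isoformat(),
            "rule_pack": rule_pack,
            "scenario_fingerprint": fingerprint,
            "prev_hash": prev,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True
```

Anyone handed the ledger can re-verify the whole chain without trusting the person who handed it over.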

The scenarios are the flashy part. The audit trail is the part that makes them defensible.

Where This Approach Comes From

The technique underneath this capability is not ours. It is an adaptation of "Reasoning-Driven Synthetic Data Generation and Evaluation" by Tim R. Davidson, Benoit Seguin, Enrico Bacis, Cesar Ilharco, and Hamza Harkous (TMLR, March 2026). The paper introduces a framework called Simula — a reasoning-first approach that maps a target domain into explicit taxonomies and then uses an agentic generator-and-critic loop to produce diverse, complex, reproducible examples traceable back to those taxonomies.

We took that structure and applied it to compliance scenario analysis. The money-laundering typologies, sanctions-evasion patterns, and reserve-drift cases sit where Simula's taxonomy nodes sit. The generator-critic loop is what produces the synthetic counterparty graphs and transaction sequences. The scenario fingerprint — the thing that lets you prove a test has not drifted — is a direct extension of Simula's emphasis on explainable, controllable, reproducible generation. The compliance-specific parts (regulator-grade evidence, append-only audit trails, rule-pack versioning) are ours. The data-generation spine belongs to Davidson et al.
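The generator-critic loop from Davidson et al. reduces, in skeleton form, to a draft-critique-redraft cycle. The sketch below uses placeholder callables where the paper (and our system) would call models; it shows the control flow, not either implementation:

```python
def generate_with_critic(template, generate, critique, max_rounds=5):
    """Skeleton of a generator-and-critic loop: the generator drafts a
    scenario from a taxonomy node, the critic checks it against that
    node's requirements, and its feedback drives the next draft.
    `generate` and `critique` stand in for model calls."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(template, feedback)
        accepted, feedback = critique(template, draft)
        if accepted:
            return draft
    raise RuntimeError("critic never accepted a draft")
```

The bound on rounds matters in practice: a critic that never accepts should fail loudly rather than loop forever.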

What This Is Not (Yet)

We're being deliberate about what we won't claim, because overclaimed compliance tooling gets you in trouble:

These scenarios carry expected outcomes written by the scenario author — what a well-built rule pack should do. We're not yet automatically scoring your live rules against them. That piece is coming next.

Synthetic data is a companion to real pilot data, not a replacement. For a regulator submission, you'll still want both — synthetic coverage for breadth, real-data testing for fidelity.

Nothing synthetic gets mixed into your real operational data. Every record is flagged as synthetic at the database level, and your organisation's scenarios are visible only to your organisation.


If You're Early in Your Compliance Career

This is the kind of capability you should get hands-on with before it becomes table stakes. Regulators in the EU, Hong Kong, Singapore, and the UK are all moving toward wanting evidence that you've stress-tested your controls on a regular cadence. The teams that do that well won't be the ones with bigger spreadsheets. They'll be the ones who treat scenario analysis like a workflow — something you run often, version properly, and hand to an auditor without panicking.

You can look at the preview catalogue at /compliance/scenarios. Browse the scenarios, export a dossier, see what the audit trail looks like when you do. If something feels off or missing, that's exactly the feedback we want while this is still a preview.

In one sentence

Synthetic compliance scenarios let you test your rules against realistic-but-fake bad behaviour, with a built-in record of who tested what and when — shipped today as a preview, with automated rule scoring as the next step.

Written by

Stablecoin Roadmap Team

Compliance Engineering

Covering stablecoin infrastructure, regulation, and the evolution of programmable money.
