Tags: audit, methodology, fsm, engagement, audit-firm, big4, caseware, teammate

Active audit methodology: from catalog to execution

VynFi's audit-methodology layer used to be read-only — Wave 1 catalog. Wave 4 makes it executable: drive engagements through the Big-4-spine FSM with audit-trailed transitions and per-state workpaper templates that export to Caseware, TeamMate, PDF, or JSON.

VynFi Team · Engineering · May 10, 2026 · 13 min read

Wave 1 of the VynFi audit-methodology library shipped a catalog: 19 endpoints, ~50 portal pages, public Apache-2.0 reference content covering the Big-4-spine plus 7 jurisdictional overlays, 6 KYC blueprints, 7 banking forms, 15 deterministic engagement scenarios, the 12-factor RMM taxonomy, the L4 graph schema, and the ISA 230 working-paper Merkle bundle structure. It was thorough, it was free, and it answered a real problem — when engagement teams or methodology leads needed an authoritative reference for ISA 600 ¶22-49 or Wolfsberg CBDDQ field semantics, the catalog gave them one URL to point at instead of hunting through PDFs.

**TL;DR** — Wave 4 promotes the catalog from a reference layer to an execution layer. Engagements drive through a 7-state FSM (NotStarted → Planning → Risk Assessment → Fieldwork → Evaluation → Reporting → Sealed) with audit-trailed transitions. Each state emits the workpaper templates your firm expects, exportable to Caseware Cloud XML, TeamMate JSON, PDF, or generic JSON. We aren't replacing your authoring tool — we're the methodology spine that sits beneath it.

The catalog vs. execution distinction

A reference catalog answers the question 'what does this standard say.' An execution layer answers the question 'where am I in this engagement, and what should happen next.' Those are different problems with different shapes. The catalog is a static-ish lookup, indexed by standard / blueprint / form / scenario. The execution layer is a dynamic state machine, indexed by engagement-id, with per-state guards, allowed transitions, and downstream artefact emission. We knew from day one of Wave 1 that catalog-only would be a starting point, not the finish — but we wanted to prove the catalog was right before promoting it to execution.

By the time Wave 1 had been live for two months, three patterns had emerged from customer conversations. First, methodology leads wanted to encode their firm's interpretation of ISA 600 ¶36-40 (meaningful involvement of the group auditor) as a runtime check, not just a paragraph in a PDF. Second, engagement-quality reviewers wanted a tamper-evident audit trail of state transitions — when did this engagement move from Risk Assessment to Fieldwork, and who approved the transition. Third, partners wanted per-state workpaper templates that emitted directly to Caseware Cloud or TeamMate without manual re-entry. All three of those are execution-layer concerns. The catalog couldn't answer any of them.

Wave 4's promotion of the catalog to an execution layer is therefore not 'we changed our minds about what to build' — it's 'we kept the catalog and added the runtime that consumes it.' The Big4Spine + JurisdictionalOverlay + MethodologyBlueprint records that powered the Wave 1 catalog now also power the Wave 4 FSM. A Big-4 firm that's been using the catalog can adopt the FSM without re-encoding their methodology — the FSM reads the same records the catalog already exposes.

Big4Spine + JurisdictionalOverlay + MethodologyBlueprint: how the FSM is built

Three record types compose to produce a per-engagement FSM instance, with a firm overlay layered on top. The Big4Spine is the canonical audit lifecycle, derived from the ISAs and shared across all four firms (the spine is what they all have in common — the firm overlays are the deltas). The JurisdictionalOverlay is the regulator-specific layer (PCAOB US, EU CSRD, UK FRC, ASIC AU, JFSA JP, ACRA SG, HKICPA HK) that adds or refines procedures and reporting requirements. The MethodologyBlueprint is the engagement-shape selector (ISA 600 group audit, CSRD limited assurance, KYC private-banking onboarding, etc.) that picks which spine procedures are in scope and how they sequence.

When an engagement is created in VynFi, the user picks a blueprint (e.g. 'ISA 600 group audit'), a jurisdictional overlay (e.g. 'PCAOB US'), and a firm overlay (e.g. 'EY GAM'). The system composes a concrete FSM instance from those three records: the Big4Spine provides the 7 canonical states, the firm overlay refines the per-state procedures (EY GAM's risk-assessment procedures differ from PwC Aura's), and the jurisdictional overlay adds the regulator-specific reporting requirements (PCAOB AS 3101 critical-audit-matter handling, for instance). The composed FSM is what the engagement runs against.

TypeScript
// Pseudocode: FSM instantiation from the catalog records plus the firm overlay.
const fsm = composeEngagementFsm({
  spine: getBig4Spine(),                                // Wave 1 record
  firmOverlay: getFirmOverlay("ey-gam"),                // Wave 1 record
  jurisdictionalOverlay: getJurisdictional("pcaob-us"), // Wave 1 record
  blueprint: getBlueprint("isa-600"),                   // Wave 1 record
});
// fsm now has:
//   .states        — 7 canonical states (NotStarted ... Sealed)
//   .procedures(s) — composed procedures per state
//   .transitions   — allowed transitions with guards
//   .templates(s)  — workpaper templates per state
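To make the composition step above concrete, here's a minimal sketch of how overlay merging could work: the spine supplies base procedures per state and each overlay contributes deltas on top. Every name here (`composeProcedures`, the state and procedure identifiers) is illustrative, not the actual VynFi API.

```typescript
// Hypothetical overlay composition: spine procedures plus overlay deltas.
type State = "Planning" | "RiskAssessment" | "Fieldwork";
type ProcedureSet = Record<State, string[]>;

function composeProcedures(
  spine: ProcedureSet,
  ...overlays: Partial<ProcedureSet>[]
): ProcedureSet {
  // Start from a copy of the spine's base procedures.
  const out: ProcedureSet = {
    Planning: [...spine.Planning],
    RiskAssessment: [...spine.RiskAssessment],
    Fieldwork: [...spine.Fieldwork],
  };
  // Each overlay appends its delta procedures to the matching state.
  for (const overlay of overlays) {
    for (const [state, procs] of Object.entries(overlay)) {
      out[state as State].push(...(procs ?? []));
    }
  }
  return out;
}

const spine: ProcedureSet = {
  Planning: ["scope-memo", "materiality"],
  RiskAssessment: ["rmm-scoring"],
  Fieldwork: ["substantive-testing"],
};
const firmOverlay = { RiskAssessment: ["ey-gam-fraud-brainstorm"] };
const jurisdiction = { Fieldwork: ["pcaob-as-3101-cam"] };

const composed = composeProcedures(spine, firmOverlay, jurisdiction);
// composed.RiskAssessment → ["rmm-scoring", "ey-gam-fraud-brainstorm"]
```

The real composition also handles procedure refinement and re-sequencing, not just appends; this sketch only shows the layering order.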

The 7-state engagement FSM

The Big4Spine canonicalises the engagement lifecycle into 7 states. Each state has a defined entry condition, a set of in-scope procedures, an exit condition, and a list of allowed transitions. State transitions are audit-trailed: every transition records the actor (which user under which firm), the reason (free-text + structured guard-result code), and the timestamp. The audit trail is append-only and tamper-evident (it threads into the Merkle WP bundle when the engagement seals — see the Merkle WP post for that).

  • **NotStarted** — engagement created, scoping not begun. Allowed: → Planning.
  • **Planning** — scope, materiality, team allocation, preliminary RMM. Allowed: → RiskAssessment, ↩ NotStarted (retraction with reason).
  • **RiskAssessment** — full RMM scoring, control-environment understanding, fraud-risk identification, group-engagement scoping (ISA 600). Allowed: → Fieldwork.
  • **Fieldwork** — substantive procedures, controls testing, component-auditor coordination, evidence collection. Allowed: → Evaluation, ↩ RiskAssessment (re-open with a documented reason when new evidence surfaces a fresh risk). Many engagements iterate Fieldwork ↔ Evaluation; the FSM permits backward transitions with a documented reason.
  • **Evaluation** — misstatement aggregation, materiality re-assessment, going-concern evaluation. Allowed: → Reporting, ↩ Fieldwork (re-open if new evidence emerges).
  • **Reporting** — opinion drafting, KAM finalisation, group-engagement letter, regulator filings. Allowed: → Sealed.
  • **Sealed** — engagement closed, all artefacts captured into a Merkle WP bundle, no further mutations allowed. Terminal state; no allowed transitions.
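The transition rules above can be sketched as a small table plus a guarded transition function that appends to the trail. The state names match the post; the types and function names are illustrative, not VynFi's actual schema.

```typescript
// Illustrative 7-state transition table with an append-only audit trail.
type EngState =
  | "NotStarted" | "Planning" | "RiskAssessment"
  | "Fieldwork" | "Evaluation" | "Reporting" | "Sealed";

const allowed: Record<EngState, EngState[]> = {
  NotStarted: ["Planning"],
  Planning: ["RiskAssessment", "NotStarted"],
  RiskAssessment: ["Fieldwork"],
  Fieldwork: ["Evaluation", "RiskAssessment"],
  Evaluation: ["Reporting", "Fieldwork"],
  Reporting: ["Sealed"],
  Sealed: [], // terminal: no transitions out
};

interface TrailEntry {
  from: EngState;
  to: EngState;
  actor: string;   // which user, under which firm
  reason: string;  // free-text + structured guard-result in the real system
  at: string;      // ISO timestamp
}

function transition(
  current: EngState, to: EngState,
  actor: string, reason: string, trail: TrailEntry[],
): EngState {
  if (!allowed[current].includes(to)) {
    throw new Error(`Illegal transition ${current} -> ${to}`);
  }
  // Append-only: entries are pushed, never edited or removed.
  trail.push({ from: current, to, actor, reason, at: new Date().toISOString() });
  return to;
}

const trail: TrailEntry[] = [];
let s: EngState = "NotStarted";
s = transition(s, "Planning", "manager", "Engagement created", trail);
s = transition(s, "RiskAssessment", "manager", "Planning approved", trail);
```

In the real system the guard check is richer than a lookup (entry conditions, sign-off requirements), and the trail threads into the Merkle bundle at seal time.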

Each state is the unit of audit-trail commitment. When the engagement transitions from RiskAssessment → Fieldwork, the system snapshots the RMM scores, the auditor-prior overrides, the fraud-risk classifications, and the entry-condition guards into the audit trail. The snapshot is immutable from that point on. If the engagement later transitions back from Fieldwork → RiskAssessment (because new evidence surfaced a previously-unconsidered risk), the new RiskAssessment cycle starts a new snapshot — the old one stays in the trail with its 'superseded' marker.
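The snapshot semantics can be sketched as follows: each forward transition freezes a snapshot, and a re-entered state starts a new cycle while the old snapshot stays in the trail with a superseded marker. This is an illustrative shape, not the real schema.

```typescript
// Illustrative snapshot trail: old snapshots are superseded, never deleted.
interface Snapshot {
  state: string;
  cycle: number;
  data: Readonly<Record<string, unknown>>; // RMM scores, overrides, guards...
  superseded: boolean;
}

class SnapshotTrail {
  private snapshots: Snapshot[] = [];

  commit(state: string, data: Record<string, unknown>): void {
    // Any earlier snapshot of the same state is marked superseded in place...
    const prior = this.snapshots.filter(s => s.state === state);
    prior.forEach(s => { s.superseded = true; });
    // ...and the new cycle's snapshot is frozen and appended.
    this.snapshots.push({
      state,
      cycle: prior.length + 1,
      data: Object.freeze({ ...data }),
      superseded: false,
    });
  }

  all(): readonly Snapshot[] { return this.snapshots; }
}

const snapshots = new SnapshotTrail();
snapshots.commit("RiskAssessment", { revenueRecognition: "medium" });
// ...Fieldwork surfaces a new risk; the engagement re-enters RiskAssessment...
snapshots.commit("RiskAssessment", { revenueRecognition: "high" });
// Both snapshots remain in the trail; only the first is marked superseded.
```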

Per-state workpaper templates: emit to Caseware, TeamMate, PDF, or JSON

Every FSM state emits a set of workpaper templates: the actual documents the engagement team needs to author. Templates are state-keyed and parameterised on the engagement context (entity tree, materiality, RMM scores, etc.). The template emitter supports four output targets, picked by the firm's authoring tool of choice:

  • **Caseware Cloud XML** — the firm exports templates straight into Caseware as XML; the engagement team authors there. VynFi remains the methodology spine; Caseware is the editor.
  • **TeamMate JSON** — equivalent flow for TeamMate users; the JSON shape matches TeamMate's import schema.
  • **PDF** — for firms not standardised on Caseware or TeamMate, a styled PDF emit (with the firm's letterhead and methodology overlay) goes straight to the workpaper folder.
  • **Generic JSON** — the raw template structure, for firms with proprietary authoring tools or custom integrations.

The choice of output target is per-firm (set in the firm-organization tenancy config) and overridable per-engagement. Most Big 4 firms run Caseware Cloud as their authoring layer; many mid-tier firms use TeamMate; a handful of firms (especially regional shops or specialist boutiques) have proprietary tooling. We're not in the workpaper-authoring business; we're in the methodology + risk + synthetic-data business, and we export to whichever editor the firm has standardised on.

TypeScript
// Pseudocode: emit a state's templates in the firm's chosen format.
const templates = fsm.templates(EngagementState.RiskAssessment);
const targetFormat = engagement.firm.authoringTool;
// ^ "caseware-cloud" | "teammate" | "pdf" | "json"
const emitted = templates.map(t =>
  emitTemplate(t, engagement.context, targetFormat),
);
// emitTemplate returns:
//   { format: "caseware-cloud", xmlContent: "...", filename: "..." }
//   or { format: "teammate", jsonContent: {...}, filename: "..." }
//   etc.
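The per-firm default with per-engagement override described above resolves in one line. The field names here are illustrative, not VynFi's actual tenancy schema.

```typescript
// Hypothetical resolution of the output target: engagement override wins,
// otherwise the firm's tenancy-level default applies.
type AuthoringTool = "caseware-cloud" | "teammate" | "pdf" | "json";

interface FirmConfig { authoringTool: AuthoringTool }
interface EngagementCfg {
  firm: FirmConfig;
  authoringToolOverride?: AuthoringTool; // per-engagement override
}

function resolveTarget(e: EngagementCfg): AuthoringTool {
  return e.authoringToolOverride ?? e.firm.authoringTool;
}

const firm: FirmConfig = { authoringTool: "caseware-cloud" };
resolveTarget({ firm });                                    // "caseware-cloud"
resolveTarget({ firm, authoringToolOverride: "teammate" }); // "teammate"
```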

Walking through a US GAAP retail engagement

Concrete walkthrough. A US-listed retailer with $500M revenue runs a full-scope external audit for the year ended 31 December 2025. The engagement team (4 people: partner, manager, senior, associate) is at a Big 4 firm using EY GAM. The firm has standardised on Caseware Cloud. Here's how the engagement flows through the FSM:

  • **Day 1, NotStarted → Planning** — engagement created with 'US GAAP audit / EY GAM / PCAOB US / Audit blueprint'. The FSM composes from those four selections. The Planning state emits scope memo template, materiality determination template, team allocation template — all as Caseware Cloud XML, dropped into the firm's Caseware folder. The team authors there.
  • **Day 12, Planning → RiskAssessment** — manager reviews the planning workpapers, signs off, transitions the FSM. Audit trail captures: actor=manager, reason='Planning workpapers reviewed and approved', timestamp. The new RiskAssessment state emits ~40 procedure templates (control-environment understanding, journal-entry analytics, going-concern preliminary, fraud-risk identification, ICFR walkthroughs).
  • **Day 30, mid-RiskAssessment, partner override** — partner reviews the Bayesian RMM scoring, decides to override the prior on revenue-recognition complexity (the model said 'medium', the partner says 'high' based on a new contract structure that landed in Q4). Override captured in the audit trail (actor, rationale, timestamp). RMM re-scores in real time; affected procedures get re-flagged.
  • **Day 45, RiskAssessment → Fieldwork** — substantive testing begins. Each in-scope procedure emits a Caseware template; testing populates the templates; evidence is collected and hashed into the L4 graph (every Evidence node references the underlying file's SHA-256).
  • **Day 90, Fieldwork → Evaluation** — testing complete. The Evaluation state emits the misstatement aggregation template, materiality re-assessment template, and going-concern final-evaluation template.
  • **Day 100, Evaluation → Reporting** — clean opinion. Reporting state emits the opinion drafting template, the KAM template, and the group-engagement letter.
  • **Day 105, Reporting → Sealed** — engagement files. The Sealed transition triggers the Merkle WP bundle gen: every workpaper, every audit-trail entry, every L4-graph edge from this engagement gets hashed into a binary Merkle tree. The root hash + manifest.json are persisted. The engagement is now immutable.

Total elapsed: 105 days, four people. The engagement team spent its time authoring workpapers in their existing Caseware environment. VynFi sat behind the scenes as the methodology spine: state transitions, RMM re-scoring, audit-trail capture, template emission, and the final Merkle bundle. They never had to leave Caseware.
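The Sealed-state bundle generation at Day 105 can be sketched minimally: hash every artefact, then fold the leaf hashes into a binary Merkle tree whose root pins the whole engagement. This is illustrative only; the real bundle layout (manifest, proof paths) is covered in the Merkle WP post.

```typescript
// Minimal binary Merkle root over artefact hashes (odd tails duplicated).
import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("empty bundle");
  let level = leaves;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate an odd tail
      next.push(sha256(left + right));
    }
    level = next;
  }
  return level[0];
}

// Leaves are the SHA-256 of each workpaper, audit-trail entry, and graph edge.
const artefacts = ["scope-memo.xml", "rmm-snapshot.json", "opinion.pdf"];
const root = merkleRoot(artefacts.map(a => sha256(a)));
// Any later mutation of any artefact changes `root`, which is what makes
// the sealed engagement tamper-evident.
```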

Why we don't compete with Caseware/TeamMate

This is a feature, not a limitation. Caseware Cloud and TeamMate have decades of investment in workpaper authoring UX: keyboard shortcuts, comment threading, sign-off workflows, file-server replication, Office integration, PDF rendering. They're very good at it. We're not going to out-Caseware Caseware on Caseware's home turf, and we shouldn't try. The audit-tech stack has too many layers for any one vendor to own end-to-end credibly.

What VynFi does is sit beneath those tools. The methodology spine + jurisdictional overlay + firm overlay composition. The Bayesian RMM. The L4 audit graph. The Merkle WP integrity primitives. The ISA 600 cross-firm coordination. The synthetic-data generation that lets methodology teams test new procedures against realistic-looking ledgers before they hit a real engagement. None of those are workpaper-authoring concerns; all of them are methodology / risk / integrity / data concerns. The two layers compose: VynFi emits templates, Caseware authors them, VynFi re-ingests the completed workpapers as evidence in the L4 graph and the Merkle bundle.

This is also why our pricing is explicitly seat-based annual, not per-workpaper or per-engagement. We bill by who's licensed to drive engagements through the methodology spine, not by how much they author. That model is procurement-friendly for Big 4 firms (predictable annual fees, enterprise discounts, multi-year MSAs) and aligns the incentives — we want every partner / manager / senior / associate in the methodology layer, regardless of which authoring tool they prefer.

What's next: per-procedure FSM, richer PDF layouts, methodology overlay editor

Wave 4 ships the engagement-level FSM. The next iteration (call it Wave 4.1, on the 6-month horizon) extends the FSM down to per-procedure granularity: every procedure in every state has its own micro-FSM (Drafted → Reviewed → Approved → Filed) with auditor sign-off and audit-trail capture. That maps cleanly to how engagement teams actually work — partners sign off on individual procedures, not on whole states.
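The planned per-procedure micro-FSM could look something like the sketch below: a linear Drafted → Reviewed → Approved → Filed chain with a sign-off captured at each step. This feature is on the Wave 4.1 roadmap, not shipped; all names here are illustrative.

```typescript
// Hypothetical per-procedure micro-FSM with sign-off capture.
type ProcState = "Drafted" | "Reviewed" | "Approved" | "Filed";

const nextState: Record<ProcState, ProcState | null> = {
  Drafted: "Reviewed",
  Reviewed: "Approved",
  Approved: "Filed",
  Filed: null, // terminal
};

interface SignOff { state: ProcState; by: string; at: string }

function advance(state: ProcState, by: string, signOffs: SignOff[]): ProcState {
  const to = nextState[state];
  if (to === null) throw new Error("Procedure already filed");
  // Each advance is itself an audit-trailed sign-off.
  signOffs.push({ state: to, by, at: new Date().toISOString() });
  return to;
}

const signOffs: SignOff[] = [];
let p: ProcState = "Drafted";
p = advance(p, "senior", signOffs);  // Reviewed
p = advance(p, "manager", signOffs); // Approved
p = advance(p, "partner", signOffs); // Filed
```

The appeal of the micro-FSM is exactly the mapping the post describes: partners sign off on individual procedures, so each procedure carries its own trail rather than inheriting the state-level one.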

Other in-flight items: (1) richer PDF layouts for firms not on Caseware / TeamMate (the current PDF emit is functional but not visually polished); (2) a methodology-overlay editor that lets methodology leads author and version-control their firm's interpretation of the spine without engineering involvement; (3) integration with Big-4-specific audit data analytic platforms (EY Helix, KPMG Clara Analytics, PwC Halo, Deloitte Argus) so the L4 graph can ingest analytics findings as evidence; (4) eventually, regulator submission flows that take a sealed Merkle bundle and submit it directly to the relevant authority's filing endpoint.

If you're a Big 4 firm running a methodology transformation programme — or a mid-tier firm that wants Big 4-grade methodology execution at sub-Big-4 contract terms — schedule a design partner call. We're onboarding firms in cohorts; design partners help shape the per-procedure FSM and the methodology-overlay editor against real engagement workflow.

Background reading: the Audit Firm landing page covers the full v3.0 surface (FSM execution + Bayesian RMM + L4 graph + Merkle WP bundles). The Bayesian RMM calibration post walks through the 12-factor model and how prior overrides shape the posterior in real time. The Merkle WP bundles post covers the cryptographic integrity primitives behind the Sealed-state bundle generation.

Ready to try VynFi?

Start generating synthetic financial data with 10,000 free credits. No credit card required.