Affirmark · AI Intake
From a 6-question interview
to your SPRS baseline.
Most CMMC L1 self-attestation projects spend the first month staring at 59 blank implementation narratives. Affirmark's AI Intake closes that gap in about a minute — grounded in your actual stack, reviewable row by row, anchored to the same audit chain the assessor verifies.
What the SPRS baseline actually requires
To self-attest CMMC Level 1, your organization signs an SPRS submission affirming — under 32 CFR § 170.22 — that the assessment accurately represents your compliance posture across all 15 FAR requirements and 59 assessment objectives. Each objective needs a recorded implementation: how your organization meets it, which tool or process is responsible, and who owns it.
Most L1 projects stall here. Writing 59 implementation narratives from scratch — each tied to a specific framework citation, naming the right tool from your stack, calibrated to the right level of detail — is what consultants charge $20-40K for. Without a consultant, it's the work that gets started, postponed, and started again.
AI Intake compresses that month-long stall into a 60-second baseline. You spend the time you saved reviewing instead of writing.
What's actually grounding the model
The intake passes Claude four ground-truth inputs:
- Your organization metadata — legal name, NAICS, location. Used so narratives reference your actual organization rather than a placeholder.
- Your enabled tool inventory — every tool you've toggled on in Settings → Tooling, with its FedRAMP status, MFA posture, and logging coverage. The model picks responsible tools from this list, not from its training data. If you've enabled Microsoft Entra ID, Defender, and Intune, those are what show up in your access-control narratives — not Okta, not CrowdStrike, not Azure AD (the deprecated name).
- The full 59-objective catalog — verbatim CMMC L1 v2.13 text. The model returns one row per objective in the same order; no objective is skipped or invented.
- Your six narrative answers — operator-written prose covering authentication, access control, endpoint protection, vulnerability management, network boundaries, and physical media handling. The model is instructed to cite the answer index that grounded each row, so you can trace any generated implementation back to the source sentence.
What's not grounding the model: anything from the public internet, anything from another customer's tenant, anything inferred about your stack. If your inventory is empty, the responsible-tool dropdown stays generic — the model says [TOOL] and you fill it in.
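For the technically curious, here's roughly what that grounding payload looks like. This is an illustrative sketch; the field names and types are assumptions for this example, not Affirmark's actual schema.

```typescript
// Illustrative sketch of the intake's grounding payload. Field names and
// types are assumptions for this example, not Affirmark's actual schema.
interface IntakeGrounding {
  organization: {
    legalName: string;          // the name your SPRS submission uses
    naicsCode: string;
    location: string;
  };
  enabledTools: Array<{
    name: string;               // only tools toggled on in Settings → Tooling
    fedrampStatus: string;      // e.g. "authorized", "in process", "none"
    mfaPosture: string;
    loggingCoverage: string;
  }>;
  objectives: Array<{
    id: string;                 // all 59 entries, verbatim CMMC L1 v2.13 text
    text: string;
  }>;
  narrativeAnswers: string[];   // the six operator-written answers, cited by index
}
```

An empty enabledTools array is why the responsible-tool field falls back to the generic [TOOL] placeholder: the model has nothing in scope to name.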
The audit story, in plain English
Every accepted row gets two markers:
- llm_assisted = true on the control_implementation row, with the llm_model id (claude-sonnet-4-6 today) recorded.
- needs_review_flag = true so the cycle dashboard shows AI-assisted rows distinctly from rows your team wrote from scratch.
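In table terms, the provenance fields on an accepted row look roughly like this. The three flagged fields are the columns named above; the surrounding shape is assumed for the sketch.

```typescript
// Sketch of the provenance markers on an accepted control_implementation row.
// The three flagged fields are described above; the rest is assumed.
interface ControlImplementationRow {
  objectiveId: string;
  implementation: string;      // the accepted narrative, including any operator edits
  llm_assisted: boolean;       // true for every row that started as an AI draft
  llm_model: string | null;    // e.g. "claude-sonnet-4-6" today; null for hand-written rows
  needs_review_flag: boolean;  // keeps AI-assisted rows visually distinct on the dashboard
}
```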
When the cycle freezes, every row's full history is in the hash-chained audit log: the AI-generated draft, every operator edit, every status change, signed in sequence with previous-hash linkage. An assessor can verify the chain end-to-end with a single command; the closed cycle snapshots the chain head into the SPRS package.
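Previous-hash linkage is simpler than it sounds. Here's a minimal verifier, assuming a sha256-over-(previous hash + payload) scheme; the entry shape is illustrative, not Affirmark's actual log format.

```typescript
import { createHash } from "node:crypto";

// Minimal chain verifier. Entry shape and hashing scheme are assumptions for
// this sketch; Affirmark's actual audit-log format may differ.
interface AuditEntry {
  payload: string;       // serialized event: AI draft, operator edit, status change
  previousHash: string;  // hash of the prior entry ("" for the first entry)
  hash: string;          // sha256(previousHash + payload)
}

function verifyChain(entries: AuditEntry[]): boolean {
  let expectedPrevious = "";
  for (const entry of entries) {
    if (entry.previousHash !== expectedPrevious) return false;
    const recomputed = createHash("sha256")
      .update(entry.previousHash + entry.payload)
      .digest("hex");
    if (recomputed !== entry.hash) return false; // any rewrite breaks every later link
    expectedPrevious = entry.hash;
  }
  return true;
}
```

Because each hash folds in the one before it, editing any historical entry invalidates everything after it, which is what makes the frozen chain head a meaningful anchor for the SPRS package.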
The result the assessor sees: not "AI wrote this," but "this team accepted these specific drafts after review, anchored to a verifiable history." That's a defensible posture. Hidden AI use that an assessor catches later isn't.
You're still in control
The intake doesn't write to your control_implementation table. It produces 59 drafts on a review screen, each tagged with a confidence flag (high / medium / low) and the question indices it cited as grounding. Your team accepts rows individually, edits inline, regenerates the whole batch, or discards entirely.
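Each draft on the review screen carries roughly this shape; an illustrative sketch, with names assumed rather than taken from the actual API.

```typescript
// Illustrative shape of one draft on the review screen; names are assumptions.
interface DraftRow {
  objectiveId: string;
  draftImplementation: string;
  responsibleTool: string | null;  // picked from your enabled inventory, or null
  confidence: "high" | "medium" | "low";
  groundedOnAnswers: number[];     // indices of the intake answers the model cited
  decision: "pending" | "accepted" | "edited" | "discarded";
}
```

Nothing lands in control_implementation until a human moves a row's decision off pending.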
A typical first cycle: an operator runs the intake, accepts 35-45 high-confidence rows unchanged, edits another 10-15 to add specifics the model couldn't know (license counts, internal owner names, exact policy filenames), and writes the remaining handful from scratch where the prompt didn't pick up enough context.
That's still 80%+ of the implementation work compressed into review-and-confirm rather than write-from-scratch. The hour you spent on intake-plus-review beats the month consultants schedule for the same scope.
Ready to skip the consultant bill?
Affirmark is in design-partner mode. We're working with a small group of SMB DoD subcontractors to refine the workflow against real CMMC L1 cycles.
Talk to us