Your Security Scans Already Contain Compliance Evidence. You Cannot See It.
Scan Results to Compliance Evidence
DevSecOps teams run SAST, DAST, SCA, container scans, and STIG checks. Compliance teams maintain spreadsheets mapping controls to evidence. Both teams work from the same underlying security data. Neither team can see the other's work. Five mapping strategies connect scan findings to controls across CMMC, NIST 800-53, FedRAMP, SOC 2, ISO 27001, and every derived framework simultaneously. One scan. Every framework. Zero manual translation.
Scan to Compliance
Security posture generates compliance proofs. Not the other way around.
Every organization that runs security scans already possesses compliance evidence. The problem is not a lack of data. The problem is that scan results live in SARIF files, JSON exports, and proprietary scanner formats while compliance evidence lives in spreadsheets, shared drives, and GRC platforms. The gap between these two worlds is not technical. It is organizational. Bridging it requires understanding how scan findings map to framework controls, which mapping strategies produce auditable results, and how a single scan can advance compliance posture across every framework in the catalog simultaneously.
DevSecOps teams operate in a world of scanners, pipelines, and findings. They run static application security testing against source code repositories, dynamic application security testing against running applications, software composition analysis against dependency manifests, container image scanning against registry artifacts, and STIG compliance checks against operating system and application configurations. These scans produce structured output: SARIF files with rule IDs and locations, JSON reports with severity scores and remediation guidance, proprietary formats with scanner-specific metadata. The DevSecOps team triages these findings, assigns them to developers, tracks remediation through ticketing systems, and measures progress through vulnerability density metrics. Their world is measured in findings per scan, mean time to remediation, and critical vulnerability counts. They care about whether the code is secure. They do not think in terms of controls, practices, or framework requirements.
Compliance teams operate in a parallel universe. They maintain spreadsheets or GRC platforms that list every control required by every framework the organization must satisfy. For each control, they need evidence: a document, a screenshot, a configuration export, or a narrative that describes how the organization satisfies the requirement. They schedule quarterly evidence collection cycles where engineers take screenshots of dashboards, export access control lists, and capture configuration snapshots. They write implementation narratives that describe the organization's approach to each control. They chase engineers for proof that specific security measures are operating. Their world is measured in controls satisfied, evidence freshness, and assessment readiness percentage. They care about whether compliance can be demonstrated. They do not think in terms of SARIF findings, scanner rule IDs, or vulnerability severity scores.
Both teams are working from the same underlying security reality. A SAST finding that identifies an input validation vulnerability is evidence relevant to NIST 800-53 SI-10 (Information Input Validation). A container scan that confirms no known vulnerabilities in a base image is evidence relevant to SI-2 (Flaw Remediation) and CM-2 (Baseline Configuration). A STIG check that verifies password complexity settings is evidence relevant to IA-5 (Authenticator Management). The scan results that DevSecOps teams produce every day contain compliance evidence that compliance teams spend weeks collecting manually. But the data sits in different formats, different systems, and different organizational silos. The DevSecOps team does not know which compliance controls their scan results satisfy. The compliance team does not know that the evidence they need already exists in the CI/CD pipeline output from last night's build. The bridge between these worlds does not exist in most organizations. Building it requires understanding how scan findings map to framework controls at a structural level.
The core problem is format translation. Scan results are structured data optimized for developer consumption: rule IDs, file paths, line numbers, severity scores, CWE classifications, and remediation suggestions. Compliance evidence is structured around control satisfaction: which requirement is addressed, what defense satisfies it, what proof demonstrates that the defense is operating, and when that proof was collected. These are fundamentally different data models. A SARIF file contains everything an assessor needs to evaluate whether flaw remediation controls are operating, but no assessor reads SARIF files. A compliance spreadsheet contains every control the organization must satisfy, but no scanner outputs compliance spreadsheets. The translation between these formats happens manually, if it happens at all. An engineer runs a scan, reviews the results, identifies which findings are relevant to compliance, exports a summary, reformats it into the evidence template the compliance team requires, and uploads it to the evidence repository. This process takes hours per scan, assumes the engineer understands the compliance framework well enough to identify relevance, and produces a static artifact that begins decaying the moment it is created.
The manual translation problem compounds across assessment cycles. Most organizations operate on quarterly evidence collection cadences. During the collection window, engineers stop their normal work to produce compliance artifacts. They re-run scans specifically for evidence purposes, even though the same scans ran automatically in CI/CD pipelines throughout the quarter. They take screenshots of dashboards that display information already captured in structured scan output. They write narrative descriptions of findings that the scanner already described in machine-readable format. They copy data from one system into another, reformatting it to match the compliance team's template. Conservative estimates place this translation effort at 20 to 40 hours per engineer per quarterly cycle for organizations managing multiple frameworks. Multiply that across the engineering team, multiply again across four quarterly cycles per year, and the labor cost of manual scan-to-compliance translation exceeds six figures annually for a mid-sized organization. That labor produces nothing new. It translates existing data from one format to another.
Evidence decay makes the translation problem worse. A scan result collected in January describes the security posture as it existed in January. By March, the codebase has changed, dependencies have been updated, configurations have been modified, and new services have been deployed. The January evidence is technically accurate for January, but an assessor arriving in April is evaluating the organization's current security posture, not its historical posture. Stale evidence forces re-collection, which means re-running the translation process, which means more engineering time diverted from security work. The fundamental problem is not that scan results cannot serve as compliance evidence. They can. The problem is that the translation from scan output to compliance artifact is manual, expensive, and produces static snapshots that decay immediately. The scan-to-compliance bridge needs to be structural and continuous, not manual and periodic.
Security scanners produce output in formats optimized for their specific function. SAST scanners output SARIF (Static Analysis Results Interchange Format) with rule IDs, file paths, line numbers, code snippets, and severity classifications. DAST scanners output JSON or XML reports with HTTP request/response pairs, vulnerability classifications, and exploitation evidence. SCA scanners output dependency trees with CVE identifiers, affected package versions, fixed versions, and severity scores. Container image scanners output layer-by-layer vulnerability assessments with package manifests and base image provenance. STIG scanners output check results with V-numbers, CCIs, check status (open, not a finding, not applicable), and target system identifiers. Each format contains the information needed for its primary audience. None of them are structured for compliance consumption. The data that an assessor needs exists within these outputs, but it is encoded in scanner-specific schemas that compliance teams cannot parse without translation.
Different formats prevent correlation across scanner types. A SAST finding about insecure deserialization and a DAST finding about the same vulnerability exploited through an API endpoint are two observations of the same security issue, but they share no common identifier, no common format, and no common data model. Without normalization, these findings remain isolated in their respective scanner outputs. The compliance team cannot determine that two scan results address the same control from different perspectives. The DevSecOps team cannot correlate a code-level vulnerability with its runtime manifestation. Each finding exists in isolation, mapped to nothing, correlated with nothing, and contributing to compliance posture only if someone manually identifies the relationship and translates it into the evidence format.
Vanguard normalizes findings across 14 scanner types into a unified format. Each finding, regardless of source scanner, carries a consistent metadata structure: scanner type, finding type, severity, CWE classification (when applicable), CVE identifier (when applicable), CCI reference (when applicable), V-number (when applicable), affected resource (file path, container image, host, URL), remediation guidance, and collection timestamp. The normalization preserves the original scanner output as a linked artifact while extracting the fields needed for compliance mapping into a consistent schema. Rampart ingests normalized findings through a unified intake pipeline, where each finding is evaluated against every active framework's control mappings. The normalization layer is the foundation that makes everything downstream possible: without a common data model, cross-scanner correlation, multi-framework mapping, and continuous evidence collection all require manual intervention at every step.
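A normalized finding of this kind can be sketched as a single record type where scanner-specific identifiers are optional fields, present only when the source scanner emits them. This is an illustrative shape, not Vanguard's actual schema; every field name here is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class NormalizedFinding:
    """Scanner-agnostic finding record (field names are illustrative).
    Optional identifiers are None when the source scanner lacks them."""
    scanner_type: str                 # e.g. "sast", "sca", "stig"
    finding_type: str                 # scanner rule or check identifier
    severity: str                     # normalized: "low"/"medium"/"high"/"critical"
    affected_resource: str            # file path, image ref, host, or URL
    collected_at: datetime
    cwe: Optional[str] = None         # e.g. "CWE-79"
    cve: Optional[str] = None         # e.g. "CVE-2024-12345"
    cci: Optional[str] = None         # e.g. "CCI-000015"
    v_number: Optional[str] = None    # e.g. "V-257844"
    raw_artifact_ref: Optional[str] = None  # link back to original scanner output

# A STIG check result and a SAST finding share one shape:
stig = NormalizedFinding("stig", "password-minlen", "high",
                         "host:web-01", datetime.now(timezone.utc),
                         cci="CCI-000015", v_number="V-257844")
sast = NormalizedFinding("sast", "reflected-xss", "high",
                         "src/app/views.py:88", datetime.now(timezone.utc),
                         cwe="CWE-79")
```

Because both records share one shape, downstream mapping code can dispatch on whichever identifier is populated rather than on the scanner that produced it.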
The first two strategies produce the highest-confidence mappings because they follow published, deterministic relationships. AUTHORITATIVE mapping connects findings directly to controls using identifiers published by the framework authority. STIG V-numbers map to CCIs published by DISA. CCIs map to NIST 800-53 controls through the CCI Reference List, also published by DISA. This chain is authoritative: V-257844 maps to CCI-000015 maps to NIST 800-53 AC-2. There is no interpretation involved. The relationship is defined by the organizations that publish and maintain both the STIG and the CCI reference. Similarly, CWE classifications map to NIST 800-53 controls through mappings maintained by MITRE and NIST. CWE-79 (Cross-Site Scripting) maps to SI-10 (Information Input Validation) and SC-18 (Mobile Code). These mappings are versioned, publicly available, and maintained by the authoritative bodies. PUBLISHED mapping extends coverage using official cross-walks between frameworks maintained by their respective publishers: the AICPA's SOC 2 to NIST 800-53 mapping, NIST's CSF 2.0 to 800-53 mapping, HITRUST's CSF-to-800-53 alignment, and PCI SSC's mapping guidance between PCI-DSS and NIST frameworks.
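Mechanically, an authoritative chain is nothing more than published lookup tables composed in sequence. A minimal sketch, where the table contents are only the examples from the text rather than the complete DISA and MITRE data, and the function name is hypothetical:

```python
# Illustrative excerpts of published tables (not complete data).
V_TO_CCI = {"V-257844": ["CCI-000015"]}           # published in the STIG itself
CCI_TO_800_53 = {"CCI-000015": ["AC-2"]}          # DISA CCI Reference List
CWE_TO_800_53 = {"CWE-79": ["SI-10", "SC-18"]}    # MITRE/NIST CWE mapping

def authoritative_controls(finding: dict):
    """Resolve a finding to 800-53 controls using published chains only.
    Returns (controls, mapping_path) so provenance is preserved."""
    if finding.get("v_number"):
        controls, path = [], [finding["v_number"]]
        for cci in V_TO_CCI.get(finding["v_number"], []):
            path.append(cci)
            controls += CCI_TO_800_53.get(cci, [])
        return controls, path + controls
    if finding.get("cwe"):
        controls = CWE_TO_800_53.get(finding["cwe"], [])
        return controls, [finding["cwe"]] + controls
    return [], []

controls, path = authoritative_controls({"v_number": "V-257844"})
# controls == ["AC-2"]; path == ["V-257844", "CCI-000015", "AC-2"]
```

No interpretation happens anywhere in this resolution: every hop is a key lookup in a table someone else publishes and versions, which is exactly what makes the result independently verifiable.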
Manual mapping at scale is not merely difficult. It is infeasible at any realistic staffing level. An organization maintaining certifications across CMMC Level 2, FedRAMP Moderate, SOC 2, and ISO 27001 operates under approximately 700 unique controls. A comprehensive scan program producing 500 findings per cycle against those 700 controls creates a mapping matrix of 350,000 potential relationships. Each relationship must be evaluated for relevance, accuracy, and evidentiary sufficiency. At five minutes per evaluation, the mapping exercise alone requires roughly 29,000 hours, about 14 person-years. No organization has that capacity. The result is incomplete mapping: teams map the obvious relationships (STIG checks to NIST controls) and leave the rest unmapped. Findings that could serve as evidence for dozens of controls across multiple frameworks are filed against one control or none.
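The arithmetic behind that estimate is straightforward to check:

```python
controls = 700        # unique controls across the four frameworks
findings = 500        # findings per scan cycle
minutes_per_eval = 5  # per-relationship evaluation time

relationships = controls * findings          # potential mapping relationships
hours = relationships * minutes_per_eval / 60
print(relationships, round(hours))           # 350000 29167
```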
Rampart implements five fidelity-ranked strategies with automatic selection based on the available metadata in each finding. Every mapping records the full mapping_path: the complete chain from finding identifier through every intermediate reference to the target control. For a STIG finding: V-257844 maps to CCI-000015 maps to AC-2 maps to CMMC AC.L2-3.1.1. Each mapping carries a mapping_fidelity classification: AUTHORITATIVE for mappings through published authoritative chains (DISA CCI, MITRE CWE), PUBLISHED for mappings through official cross-walks, DERIVED for mappings computed through the NIST 800-53 derivation chain (800-53 to 800-171 to CMMC, 800-53 to FedRAMP baselines), and AI_SUGGESTED for mappings proposed by semantic analysis with a confidence score and human confirmation requirement. The fifth strategy, CSF bridging, uses NIST CSF 2.0 as an intermediary between framework families that do not share a common NIST 800-53 ancestor. Rampart selects the highest-fidelity strategy available for each finding-to-control relationship automatically. Assessors see both the mapping result and the complete provenance chain that produced it, allowing independent verification of every link.
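Automatic selection reduces to ranking the candidate mappings each strategy produced and keeping the highest-fidelity chain. A sketch under assumptions: the enum values and the placement of CSF bridging between AI_SUGGESTED and DERIVED are illustrative choices of mine, not an ordering the platform documents.

```python
from enum import IntEnum

class Fidelity(IntEnum):
    """Higher value = higher fidelity. The rank of CSF_BRIDGED relative
    to the other strategies is an assumption for this sketch."""
    AI_SUGGESTED = 1
    CSF_BRIDGED = 2
    DERIVED = 3
    PUBLISHED = 4
    AUTHORITATIVE = 5

def best_mapping(candidates):
    """candidates: (fidelity, mapping_path) pairs, one per strategy that
    resolved this finding-to-control relationship. Keep the best chain."""
    return max(candidates, key=lambda c: c[0])

best = best_mapping([
    (Fidelity.DERIVED, ["AC-2", "3.1.1"]),
    (Fidelity.AUTHORITATIVE, ["V-257844", "CCI-000015", "AC-2"]),
])
# best[0] == Fidelity.AUTHORITATIVE
```

Keeping the losing candidates around (rather than discarding them) would also let an assessor see that two independent strategies agreed on the same control, which strengthens the mapping further.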
The derivation chains between compliance frameworks mean that scan-to-compliance mapping is not a linear relationship. It is exponential. Consider a concrete example. A STIG scan evaluates a Linux server against the DISA RHEL 9 STIG. One check, V-257844, verifies that the system enforces a minimum password length of 15 characters. The check returns "Not a Finding" because the server's pam_pwquality configuration sets minlen=15. That single check result carries CCI-000015, which maps to NIST 800-53 AC-2 (Account Management). From AC-2, the derivation chain resolves: CMMC Level 2 practice AC.L2-3.1.1 (Authorized Access Control), NIST 800-171 Rev. 2 requirement 3.1.1, FedRAMP Moderate control AC-2, SOC 2 CC6.1 (Logical and Physical Access Controls), and ISO 27001 A.5.15 (Access Control). One STIG check. One evidence artifact. Five frameworks advanced. The mapping is deterministic at every link: the V-number to CCI relationship is published by DISA, the CCI to 800-53 relationship is published by DISA, and the 800-53 to downstream framework relationships are published by the respective framework authorities.
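The fan-out from one 800-53 control to every derived framework can be sketched as a projection over a derivation table. The table excerpt below contains only the AC-2 chain described above; the function and framework labels are illustrative, not the platform's API.

```python
# 800-53 control -> derived-framework controls (illustrative excerpt).
DERIVATION = {
    "AC-2": {
        "CMMC L2": ["AC.L2-3.1.1"],
        "NIST 800-171": ["3.1.1"],
        "FedRAMP Moderate": ["AC-2"],
        "SOC 2": ["CC6.1"],
        "ISO 27001": ["A.5.15"],
    },
}

def project(control_800_53: str, active_frameworks: list) -> dict:
    """File one 800-53-mapped finding against every active framework."""
    derived = DERIVATION.get(control_800_53, {})
    return {fw: derived[fw] for fw in active_frameworks if fw in derived}

filings = project("AC-2", ["CMMC L2", "SOC 2", "ISO 27001"])
# {"CMMC L2": ["AC.L2-3.1.1"], "SOC 2": ["CC6.1"], "ISO 27001": ["A.5.15"]}
```

Activating a new framework in this model is just adding its name to the active list: the projection picks up every already-mapped finding on the next resolution pass.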
Without structural mapping automation, one scan equals one framework. The DevSecOps team runs a STIG scan and files the results in the CMMC evidence package. The same findings are relevant to FedRAMP, SOC 2, ISO 27001, and NIST 800-53, but no one maps them there because the mapping exercise is manual and the team's current assessment priority is CMMC. The SOC 2 compliance team collects separate evidence for the same controls three months later, re-running scans that already produced the relevant data, re-formatting results into the SOC 2 evidence template, and uploading them to a different evidence repository. The organization pays for the same evidence twice: once in the DevSecOps pipeline and once in the compliance collection cycle. Multiply this redundancy across every framework the organization maintains, and the waste compounds from inefficiency into a material cost center.
Rampart projects each finding through the derivation chain to every active framework simultaneously. When a STIG check completes, the finding is normalized, the control resolution runs through all five mapping strategies, and the evidence is filed against every applicable control in every active framework within seconds. Sentinel ensures that evidence from one scan advances CMMC, FedRAMP, SOC 2, ISO 27001, and every other active framework without separate collection, separate mapping, or separate formatting. The marginal cost of adding a new framework to the organization's compliance portfolio is the configuration effort required to activate it. The evidence already exists. The mappings already resolve. The only new work is framework-specific parameter validation: confirming that the evidence meets the target framework's specific requirements for collection frequency, retention period, and artifact format. An organization that has maintained CMMC certification for two years and now needs SOC 2 does not start from scratch. Every finding already mapped through the 800-53 derivation chain is already filed against the corresponding SOC 2 Trust Service Criteria.
Point-in-time evidence collection is the root cause of the assessment scramble that every compliance team dreads. The quarterly cycle produces a burst of evidence that is maximally fresh on collection day and progressively stale every day after. By the time an assessor reviews the evidence, weeks or months have passed. The organization cannot prove that the security posture described by the evidence still reflects the running environment. The assessor must either accept stale evidence with reduced confidence or request fresh collection, which delays the assessment. Continuous evidence collection eliminates this problem by treating scan results as a stream rather than a snapshot. Every scan that runs, whether triggered by a CI/CD pipeline, a scheduled job, or a manual execution, produces evidence that enters the compliance mapping pipeline immediately. The evidence is timestamped, integrity-hashed, and filed against every applicable control.
Periodic scans leave evidentiary gaps that are invisible until assessment time. An organization that runs quarterly STIG scans has evidence for four days out of 365. The other 361 days are a gap in the evidentiary record. An assessor reviewing that evidence must either accept the implicit assumption that the control operated continuously between scans or flag the control as insufficiently evidenced. For frameworks that require continuous monitoring (NIST 800-53 CA-7, CMMC SI.L2-3.14.6), quarterly scans are structurally insufficient. The framework requires continuous evidence. The organization provides intermittent snapshots. The gap between what the framework demands and what the evidence demonstrates is not a documentation problem. It is an architectural problem with the evidence collection cadence itself.
Sentinel schedules Vanguard scans continuously across the managed estate. SAST scans run on every pull request and every merge to the main branch. Container scans run on every image build and on a scheduled basis against deployed images. STIG checks run daily against infrastructure configurations. DAST scans run against staging environments on every deployment and against production on a configurable schedule. Results feed Rampart scoring in real time: as findings arrive, the control satisfaction status updates for every affected control in every active framework. The evidence freshness dimension of the three-dimensional scoring model (defense_effectiveness, evidence_coverage, evidence_freshness) ensures that scan evidence does not age past usefulness. Controls whose evidence approaches a configurable staleness threshold trigger automatic re-scan. Pipeline gates in the CI/CD integration block non-compliant code from merging: if a scan produces findings that would degrade a control below its required threshold, the merge is blocked until the finding is resolved or an exception is granted through the approval workflow.
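The staleness trigger at the heart of that loop is a simple freshness comparison. A minimal sketch, assuming a per-control threshold expressed as a duration (the 30-day value and function name are placeholders, not Sentinel's configuration):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

STALENESS_THRESHOLD = timedelta(days=30)   # configurable per control

def needs_rescan(last_evidence_at: datetime,
                 now: Optional[datetime] = None) -> bool:
    """True once a control's newest evidence crosses the staleness
    threshold, signaling the scheduler to queue a re-scan."""
    now = now or datetime.now(timezone.utc)
    return now - last_evidence_at >= STALENESS_THRESHOLD

jan = datetime(2025, 1, 1, tzinfo=timezone.utc)
# Evidence from January 1 is stale by March, fresh on January 15:
assert needs_rescan(jan, now=datetime(2025, 3, 1, tzinfo=timezone.utc))
assert not needs_rescan(jan, now=datetime(2025, 1, 15, tzinfo=timezone.utc))
```

A pipeline gate inverts the same comparison: instead of scheduling a scan when evidence ages out, it blocks a merge when an incoming finding would push a control's satisfaction below its threshold.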
The distinction between a scan finding and a compliance evidence artifact is the metadata that establishes provenance, integrity, and control relevance. A raw scan finding is a data point: this scanner observed this condition on this target at this time. A compliance evidence artifact is that same data point wrapped in provenance metadata that answers the questions an assessor will ask. Who collected it? What scanner configuration was used? What target was scanned? When was the scan executed? Can the result be independently verified? Has the artifact been modified since collection? Which controls does this finding address? What is the mapping provenance for each control relationship? Without this metadata, the finding is security data. With it, the finding is compliance evidence. The transformation from one to the other is the difference between a scan result that sits in a pipeline log and a scan result that satisfies a control requirement during an assessment.
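That wrapping step can be sketched as a function that takes a raw finding and returns it enclosed in the provenance fields an assessor will ask about. The field names and structure are illustrative assumptions, not the platform's artifact format; only the SHA-256 integrity hash mirrors a mechanism the text describes.

```python
import hashlib
import json
from datetime import datetime, timezone

def to_evidence_artifact(finding: dict, scanner_config: dict,
                         mapped_controls: list) -> dict:
    """Wrap a raw scan finding in provenance metadata: integrity hash,
    scanner configuration, control mappings, and collection timestamp."""
    payload = json.dumps(finding, sort_keys=True).encode()
    return {
        "finding": finding,
        "integrity_sha256": hashlib.sha256(payload).hexdigest(),
        "scanner_config": scanner_config,
        "mapped_controls": mapped_controls,   # each with its mapping provenance
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

artifact = to_evidence_artifact(
    {"v_number": "V-257844", "status": "not_a_finding"},
    {"scanner": "stig", "benchmark": "RHEL 9 STIG"},
    [{"control": "AC-2", "fidelity": "AUTHORITATIVE",
      "path": ["V-257844", "CCI-000015", "AC-2"]}],
)
```

Anyone holding the artifact can re-serialize the finding and recompute the hash; a mismatch proves the finding was altered after collection.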
Findings without mapping are wasted data from a compliance perspective. An organization that runs comprehensive security scans but does not map findings to controls possesses the raw material for compliance evidence without possessing the evidence itself. The DevSecOps team knows the codebase has no critical vulnerabilities. The compliance team needs evidence that RA-5 (Vulnerability Monitoring and Scanning) and SI-2 (Flaw Remediation) are operating effectively. Both statements describe the same reality, but the first is expressed in DevSecOps terms and the second in compliance terms. Without the mapping bridge, the compliance team must independently collect evidence for RA-5 and SI-2, even though the DevSecOps team already possesses it in a different format. The finding is not wasted from a security perspective. It is wasted from a compliance perspective because it cannot be consumed by the compliance process without manual translation.
Each Vanguard finding is stored as an immutable event with a SHA-256 integrity hash and an OpenTelemetry trace ID that links the finding to the scan execution, the scanner configuration, the target specification, and the pipeline that orchestrated the scan. The integrity hash ensures the finding has not been modified after collection. The trace ID enables end-to-end verification of the evidence chain from scan initiation through finding production to evidence filing. Rampart maps each normalized finding to every applicable control through the five mapping strategies, recording the full mapping_path and mapping_fidelity for each relationship. Artificer generates assessment narratives from scan evidence using nine context blocks (system description, control requirements, evidence inventory, prior assessment results, scan findings, configuration observations, remediation history, policy documents, and framework crosswalk metadata) and three specialized tools (evidence correlation, gap analysis, and narrative composition). The narrative composition produces implementation descriptions that cite specific scan evidence, reference specific finding identifiers, and describe the security behavior observed rather than the security behavior intended. The assessor receives not a generic statement that "vulnerability scanning is performed regularly" but a specific narrative: "Vanguard executed 847 SAST scans, 12 container image scans, and 23 STIG evaluations against this system's authorization boundary between January 1 and March 31, producing 1,247 findings, 1,198 of which confirm compliant configurations."
The gap between DevSecOps and compliance is not a people problem. Both teams are competent, motivated, and doing their jobs. The gap is a data model problem. DevSecOps teams produce structured security data in scanner-native formats. Compliance teams consume evidence in control-mapped formats. The translation between these formats has historically been manual, expensive, and lossy. Information is lost at every translation step: the SARIF file contains line-level detail that the compliance screenshot does not capture. The scanner configuration metadata is discarded during the manual export. The temporal precision of a continuous scan pipeline is reduced to a quarterly snapshot. Each translation step degrades the evidence from machine-verifiable structured data into a static document that proves less than the original data contained.
The bridge is not a tool that sits between two workflows. It is the elimination of the second workflow entirely. When scan findings are normalized, mapped to controls through deterministic derivation chains, packaged with provenance metadata, and filed against every applicable framework automatically, the compliance team does not need a separate evidence collection process. The evidence arrives through the scan pipeline. The mapping is structural and continuous. The provenance is cryptographic and verifiable. The compliance team's role shifts from evidence collection (a labor-intensive, low-value activity) to evidence validation and posture management (a high-value activity that requires compliance expertise). The DevSecOps team's role does not change at all. They continue running scans, triaging findings, and remediating vulnerabilities. The compliance evidence is a byproduct of their existing workflow, not an additional burden.
Citadel provides the unified view that both teams have lacked. DevSecOps sees scan findings with their compliance impact: this finding affects these controls across these frameworks, and remediating it will improve posture by this amount. Compliance sees control satisfaction with its security evidence: this control is satisfied by these scan findings collected at these timestamps with these integrity hashes. Both views are projections of the same underlying data model. Vanguard produces the findings. Rampart maps them to controls and scores posture across three dimensions (defense_effectiveness, evidence_coverage, evidence_freshness). Sentinel ensures continuous collection and monitors for posture changes between scans. Citadel surfaces the unified result: one dashboard where both teams see the same security reality expressed in their respective vocabularies. The two-team problem dissolves because the two teams are finally working from one data model, one evidence pipeline, and one source of truth.
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.