The Infrastructure IS the Evidence. Not a Screenshot of It.

Infrastructure as Evidence

When infrastructure is deployed from hardened IaC modules with STIG parameters and CIS benchmark configurations built in, the deployment itself satisfies controls. No manual evidence collection. No screenshots. Continuous monitoring creates an unbroken evidence chain from deployment to compliance proof. The running system proves its own posture.

Deploy compliant infrastructure. The evidence generates itself.

Most compliance programs treat evidence collection as a separate workflow from infrastructure operations. Engineers build systems. Compliance teams collect screenshots. The two activities share a subject but not a process, not a timeline, and not a data source. That separation is the root cause of evidence decay, assessment scrambles, and compliance failures. When infrastructure is deployed from hardened code modules that encode control requirements as configuration parameters, the deployment event itself becomes the evidence. Continuous monitoring confirms the state persists. The gap between "what was deployed" and "what the assessor sees" collapses to zero.

01
IaC Evidence
Infrastructure Deployed from Code IS the Proof.

Infrastructure as Code inverts the evidence model. Instead of deploying infrastructure manually and then collecting evidence of its configuration, the infrastructure is defined in code before deployment. The code specifies every configuration parameter: encryption settings, access control rules, logging destinations, network boundaries, retention policies, key management configurations. When that code is applied, the resulting infrastructure matches the specification exactly. The deployment event itself is evidence that the specified configuration exists. The code repository maintains the complete history of every configuration change, who made it, when, and why. The gap between "what was intended" and "what was deployed" collapses because they are the same artifact. The Terraform state file records the deployed resource identifiers, configuration values, and dependency relationships. The plan output shows exactly what will change before it changes. The apply log records exactly what did change and when.

When infrastructure is defined as code, compliance requirements can be encoded as configuration parameters. An encryption control does not require a screenshot of a console page showing encryption enabled. It requires a Terraform resource definition with server_side_encryption_configuration set to AES256 or aws:kms with a specific key ARN. That configuration is version-controlled, peer-reviewed, and applied through an automated pipeline. The evidence is the code itself, the deployment log, and the state file confirming the resource exists with the specified configuration. This evidence is machine-readable, timestamped, attributable to a specific commit and author, and verifiable against the running infrastructure at any time. It does not decay because the code repository preserves every version indefinitely. It does not become disconnected from the running system because the deployment pipeline enforces the code as the single source of truth. If someone modifies the infrastructure outside the code pipeline (console click, manual API call), the drift is detectable because the running state no longer matches the declared state.
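A control check of this kind can be run directly against the state document rather than a console page. The sketch below, in Python, verifies an encryption-at-rest requirement against a simplified, hypothetical state layout; real Terraform state files nest resource attributes more deeply, and the field names here are illustrative.

```python
import json

# Accepted values for S3 server-side encryption in Terraform's AWS provider.
REQUIRED_ALGORITHMS = {"AES256", "aws:kms"}

def check_encryption_at_rest(state_json: str) -> list[dict]:
    """Emit one SC-28 finding per bucket resource found in the state."""
    state = json.loads(state_json)
    findings = []
    for resource in state.get("resources", []):
        if resource["type"] != "aws_s3_bucket":
            continue
        algo = resource["attributes"].get("sse_algorithm")
        findings.append({
            "resource": resource["name"],
            "control": "SC-28",
            "compliant": algo in REQUIRED_ALGORITHMS,
            "observed": algo,
        })
    return findings

# Simplified stand-in for a Terraform state file.
state = json.dumps({"resources": [
    {"type": "aws_s3_bucket", "name": "audit_logs",
     "attributes": {"sse_algorithm": "aws:kms"}},
    {"type": "aws_s3_bucket", "name": "scratch",
     "attributes": {"sse_algorithm": None}},
]})

for f in check_encryption_at_rest(state):
    print(f["resource"], "compliant" if f["compliant"] else "non-compliant")
```

Because the input is the state file itself, the same check can run in CI after every apply, turning the deployment artifact into the evidence artifact.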

The compliance implications are structural, not incremental. Infrastructure as Code does not make evidence collection faster. It eliminates the need for a separate evidence collection process entirely. The deployment pipeline IS the evidence collection process. Every terraform apply that provisions or modifies infrastructure simultaneously creates the evidence that the infrastructure meets its specified configuration. Every pull request that modifies infrastructure code creates a reviewable, auditable record of the change, the justification, the approvals, and the resulting state. The compliance team does not need to ask engineers to take screenshots because the deployment artifacts contain more information, with higher fidelity, than any screenshot could capture. The assessor does not need to trust that a screenshot represents the current state because the infrastructure state can be verified against the code in real time. This is not an optimization of the screenshot model. It is a replacement of it. The evidence model shifts from "capture and file" to "deploy and verify."

02
The Problem
Screenshots Lie. Configs Change. Evidence Decays.

The dominant evidence collection model in compliance programs is manual capture. An engineer opens a console, navigates to a configuration page, takes a screenshot, and uploads it to a shared drive or GRC platform. A compliance analyst labels the screenshot, cross-references it to a control, and files it in the evidence package. This process repeats for every control that requires technical evidence: encryption settings, access control configurations, logging pipelines, network segmentation rules, vulnerability scan results, patch management records. For a framework like NIST 800-53 Moderate with over 300 controls, the evidence collection burden runs into thousands of individual artifacts. Each artifact is a static image of a dynamic system, captured at a single point in time, disconnected from the infrastructure it claims to represent the moment the screenshot is saved. The screenshot is not evidence of a control. It is evidence that someone looked at a configuration page on a specific date. It proves nothing about what happened before that moment or after it.

Evidence decay is not a risk to manage. It is a certainty to plan for. Every piece of static evidence begins decaying the moment it is captured. Infrastructure is not static. Security groups are modified to accommodate new application requirements. IAM roles are created, expanded, and reassigned. Encryption keys rotate. Logging configurations are adjusted. Network ACLs are updated. Storage buckets are created for ad hoc projects without compliance review. Services are deployed to new regions. Each of these changes invalidates some portion of the existing evidence package, but the evidence package does not know it. The screenshot of the security group configuration from three weeks ago still sits in the evidence library, cross-referenced to the boundary protection control, presenting a configuration that no longer matches the running environment. The compliance team discovers this during the next collection cycle, or worse, the assessor discovers it during the assessment.

Configuration drift compounds evidence decay into active misrepresentation. When an engineer modifies a security group rule to troubleshoot a connectivity issue and forgets to revert the change, the evidence package now contains a screenshot showing a compliant configuration while the running system is non-compliant. This is not intentional misrepresentation. It is the mechanical consequence of maintaining evidence separately from infrastructure. The evidence and the infrastructure are two independent systems with no synchronization mechanism. They diverge naturally and silently. The compliance team does not know the configuration changed because they are not monitoring the infrastructure; they are managing a document library. The engineering team does not know the change has compliance implications because they are not tracking which configurations map to which controls. Both teams are doing their jobs. Neither team has visibility into the other's domain. The result is an evidence package that actively contradicts the running environment, discovered only when an assessor compares the documented state to the observed state during an assessment.

03
Deploy and Comply
IaC Modules That Satisfy Controls from First Deployment.

Writing compliant infrastructure code from scratch requires deep knowledge of both the target framework's control requirements and the cloud provider's configuration options. An engineer implementing SC-28 (Protection of Information at Rest) must know that the control requires encryption of all data at rest, that the encryption must use FIPS 140-2 validated cryptographic modules, that the key management must include rotation policies and access restrictions, and that the specific Terraform resource attributes that implement these requirements vary by service. Multiply that by every control in the framework, and the engineering burden becomes prohibitive. Most teams either implement a subset of controls (leaving gaps) or implement controls inconsistently across services (creating partial coverage that is difficult to assess). The problem is not willingness. It is the cognitive load of mapping framework requirements to infrastructure configuration across dozens of services simultaneously.

Manual infrastructure requires a separate evidence collection process because no structural link exists between the deployment action and the compliance requirement it satisfies. An engineer who provisions an encrypted storage bucket through the console has satisfied a control, but the only record is the console action itself. No metadata connects that action to SC-28. No provenance chain links the configuration to the framework requirement. The compliance team must independently discover that the bucket exists, verify its encryption configuration, capture evidence of that configuration, and map the evidence to the control. This is redundant work: the engineer already knows the configuration because they set it. The compliance team must rediscover what engineering already did, in a different format, on a different timeline, through a different process. Every manual deployment creates this gap. Every gap requires manual bridging. The labor cost compounds across every resource, every control, and every assessment cycle.

Armory provides hardened Terraform modules that encode control requirements as configuration parameters. Each module is built with STIG parameters and CIS benchmark configurations as native inputs, not bolted-on documentation. A storage encryption module exposes parameters for encryption algorithm selection (AES256, aws:kms), key ARN specification, key rotation period, and access logging. The default values satisfy the strictest applicable baseline. A logging module deploys centralized log collection with tamper-evident storage, configurable retention periods that meet framework minimums, alerting on audit processing failures, and cross-account log aggregation. A network segmentation module establishes VPC boundaries with deny-by-default ingress rules, explicit segment separation, traffic inspection via flow logs, and network ACLs that enforce boundary protection requirements. Armory modules ship through a Chainguard distroless supply chain with SLSA provenance attestations and cosign-signed artifacts. Every module's provenance is verifiable from source commit through build pipeline to published artifact. Deploy the module, and the control is satisfied by design. The infrastructure IS the evidence. Sentinel reads the module metadata to automatically map the deployed resource to the controls it satisfies in Rampart. The infrastructure declares its own compliance posture through its configuration. No manual mapping required.

04
The Evidence Chain
From Code to Deployed Resource to Proof.

The evidence chain begins at the code repository. A hardened Terraform module is committed with a specific version tag. The module's metadata declares which controls it satisfies: SC-28 for encryption at rest, AU-3 for audit content, SC-7 for boundary protection. The pull request records who authored the change, who reviewed it, what approvals were granted, and when the merge occurred. The CI/CD pipeline runs terraform plan, which produces a machine-readable execution plan showing exactly what resources will be created or modified and with what configuration values. The plan is stored as an artifact with a SHA-256 integrity hash. When the plan is approved and applied, terraform apply executes the changes and produces a state file recording the deployed resource identifiers, their configuration attributes, and the dependency graph. At this point, a complete, immutable, machine-verifiable record exists of what was intended (the code), what was planned (the plan), and what was deployed (the state file and apply log).
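The hash-linked chain described above can be made concrete in a few lines. This Python sketch links each artifact (code, plan, state) by carrying the previous link's SHA-256 digest forward, so modifying any artifact after the fact breaks verification; the field names and artifact contents are illustrative, not the platform's actual schema.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_chain(artifacts: list[tuple[str, bytes]]) -> list[dict]:
    """Each link records its own hash plus the hash of the previous link."""
    chain, prev = [], None
    for stage, blob in artifacts:
        link = {"stage": stage, "hash": sha256(blob), "prev": prev}
        prev = link["hash"]
        chain.append(link)
    return chain

def verify_chain(chain: list[dict], artifacts: list[tuple[str, bytes]]) -> bool:
    """Re-hash every artifact and confirm both the hash and the linkage."""
    prev = None
    for link, (stage, blob) in zip(chain, artifacts):
        if link["hash"] != sha256(blob) or link["prev"] != prev:
            return False
        prev = link["hash"]
    return True

artifacts = [("code", b"module v1.4.2"),
             ("plan", b"plan output"),
             ("state", b"state file")]
chain = build_chain(artifacts)
print(verify_chain(chain, artifacts))   # intact chain verifies

artifacts[2] = ("state", b"tampered state")
print(verify_chain(chain, artifacts))   # post-hoc modification is detected
```

An assessor holding only the chain and the artifacts can re-run verification independently, which is the property the paragraph above claims for every link.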

Traditional evidence chains have gaps at every handoff point. The engineer deploys infrastructure. Weeks later, the compliance team captures a screenshot. Between deployment and capture, the configuration may have changed. Between capture and assessment, the configuration may change again. Each handoff introduces a temporal gap where the evidence and the infrastructure are unsynchronized. The chain is not continuous; it is a series of disconnected snapshots stitched together with assumptions. The assessor must trust that the screenshot taken on January 15 still reflects the configuration on April 1. That trust is an assertion, not evidence. The longer the chain of assumptions, the weaker the evidentiary value. A chain that requires "we believe this screenshot still reflects reality" at three separate points is structurally weaker than a chain where every link is independently verifiable at the moment of assessment.

Sentinel detects deployed resources and maps them to controls in Rampart through a unified collection pipeline. When Sentinel observes a resource whose running configuration matches its declared state, it generates a compliance event with a SHA-256 integrity hash of the observed configuration and an OpenTelemetry trace ID linking the observation to the monitoring pipeline that produced it. These compliance events accumulate over time, creating a continuous evidence stream rather than a point-in-time snapshot. The evidence for SC-28 is not a screenshot showing encryption enabled on January 15. It is a continuous series of observations confirming encryption remained enabled on January 15, 16, 17, and every subsequent day through the present. The full provenance chain runs from IaC module (source commit with Git SHA) through Terraform state (SHA-256 hash of the state file) to running resource (SHA-256 hash of observed configuration at each collection interval). Every link in the chain carries cryptographic integrity verification. The assessor can verify any link independently: does the deployed resource match the code? Does the monitoring observation match the deployed resource? Has any artifact been modified after creation?

05
Terraform Baseline
Terraform State as Compliance Baseline for CM-2, CM-6, and CM-8.

Three NIST 800-53 controls define the configuration management triad that underpins infrastructure compliance. CM-2 (Baseline Configuration) requires organizations to develop, document, and maintain a current baseline configuration of the information system. CM-6 (Configuration Settings) requires establishing and enforcing security configuration settings for information technology products employed in the system. CM-8 (System Component Inventory) requires developing and documenting an inventory of system components that accurately reflects the current system. Terraform state satisfies all three simultaneously. The state file IS the baseline configuration: a machine-readable document that records every resource, its configuration attributes, and its relationships. The Terraform code that produces the state IS the configuration settings: the declared values that define how each resource must be configured. The resource inventory within the state IS the component inventory: every deployed resource with its type, identifier, and current attributes. When these controls are satisfied through Terraform, they are satisfied structurally rather than through separate documentation artifacts.
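Treating the state file as the CM-8 inventory means the inventory is derived, never hand-maintained. The sketch below extracts a component listing from a simplified, hypothetical state layout; real state files carry additional nesting (instances, providers, dependency references) that a production extractor would walk.

```python
import json

def component_inventory(state_json: str) -> list[dict]:
    """Derive a CM-8 style component inventory from Terraform state."""
    state = json.loads(state_json)
    return [
        {"type": r["type"], "name": r["name"], "id": r["attributes"].get("id")}
        for r in state.get("resources", [])
    ]

# Simplified stand-in for a state file covering two managed resources.
state = json.dumps({"resources": [
    {"type": "aws_s3_bucket", "name": "audit_logs",
     "attributes": {"id": "audit-logs-prod"}},
    {"type": "aws_vpc", "name": "main",
     "attributes": {"id": "vpc-0abc"}},
]})

inventory = component_inventory(state)
for item in inventory:
    print(item["type"], item["id"])
```

Because the inventory is regenerated from state on every run, it cannot drift from the deployed estate the way a quarterly-updated spreadsheet does.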

Without Terraform, satisfying CM-2, CM-6, and CM-8 requires maintaining three separate document sets. The baseline configuration document describes the approved system architecture and configuration parameters. The configuration settings document lists the security-relevant parameters for each component and their approved values. The component inventory lists every hardware and software element in the system with attributes like version, location, and owner. These documents are authored manually, updated periodically (often quarterly), and reviewed during assessments. Between updates, the documents drift from reality. New components are deployed without updating the inventory. Configuration parameters change without updating the settings document. The baseline document describes an architecture that existed at authorship time but has evolved since. Drift from Terraform state IS a compliance violation of these controls, because drift means the running system no longer matches its documented baseline, its documented settings, or its documented inventory.

Sentinel uses Terraform state combined with Config Rules to establish the compliance baseline and detect deviations. The Terraform state file provides the declared configuration for every managed resource. Config Rules evaluate the running configuration of those resources against the declared state. When a resource's running configuration matches its Terraform declaration, the Config Rule returns compliant. When drift occurs, the rule returns non-compliant with the specific attribute and value that diverged. Sentinel correlates these evaluations to CM-2, CM-6, and CM-8 in Rampart. Remediation-as-code generates Terraform pull requests that restore the drifted resource to its declared state: the PR contains the exact terraform apply command needed, scoped to the affected resource, with the before and after configuration visible in the diff. Rampart enforces policy-gated auto-apply: pre-approved drift categories (encryption restoration, logging re-enablement, security group reversion) execute within defined change windows, while structural changes require explicit human approval before the PR is merged and applied.
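The declared-versus-observed evaluation at the heart of this comparison is a straightforward diff. This Python sketch shows the shape of that check, returning the specific attribute and values that diverged, as the paragraph describes; the attribute names are illustrative.

```python
def detect_drift(declared: dict, observed: dict) -> list[dict]:
    """Compare declared (Terraform) attributes to observed (running) ones.

    Returns one record per diverged attribute, naming both values so a
    remediation PR can show the before/after in its diff.
    """
    drift = []
    for attr, want in declared.items():
        have = observed.get(attr)
        if have != want:
            drift.append({"attribute": attr, "declared": want, "observed": have})
    return drift

declared = {"sse_algorithm": "aws:kms", "versioning": True, "logging_enabled": True}
observed = {"sse_algorithm": "aws:kms", "versioning": True, "logging_enabled": False}

for d in detect_drift(declared, observed):
    print(f"non-compliant: {d['attribute']} "
          f"declared={d['declared']} observed={d['observed']}")
```

A matching resource yields an empty list (compliant); any non-empty result is exactly the payload a drift event needs to carry.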

06
Drift Detection
What Configuration Drift Means for Compliance Posture.

Configuration drift is the primary threat to infrastructure-as-evidence. If the running state diverges from the declared state, the evidence chain breaks. The code says encryption is enabled with a specific key. The running resource has encryption disabled because someone modified it through the console. The monitoring system observes the discrepancy. At this point, the evidence stream for the affected control transitions from "satisfied" to "degraded." The speed of detection determines the severity of the gap. A drift detected within minutes produces a brief compliance event gap that can be explained and remediated. A drift undetected for weeks produces a sustained evidence gap that may constitute a control failure during an assessment. The difference between a minor annotation and a material finding depends entirely on how quickly the drift is detected and how quickly the compliant state is restored.

Between assessments, drift accumulates silently in organizations that lack continuous monitoring. A security group rule is modified to troubleshoot a connectivity issue. An IAM policy is broadened to accommodate a deadline. A logging configuration is adjusted to reduce costs. A storage bucket is created without encryption for a temporary project that becomes permanent. Each change is individually rational, operationally justified, and compliance-invisible. No alert fires. No ticket is created. No evidence artifact is updated. When the next assessment arrives, the assessor compares the documented baseline to the running environment and discovers dozens of discrepancies. Each discrepancy requires investigation, explanation, and remediation. The organization enters a scramble phase that could have been avoided entirely if drift had been detected and addressed continuously.

Sentinel evaluates over 400 Config Rules continuously to detect drift across the managed estate. When a discrepancy is detected, Sentinel generates a drift event that identifies the affected resource, the specific attribute that changed, the declared value, the observed value, the timestamp of detection, and the controls affected. The drift event triggers the AutomationPolicy engine, which determines the response based on the organization's configured gates: auto-apply for pre-approved categories within defined change windows, approval for changes that require human review before execution, and manual-only for structural changes that require engineering intervention. Rampart re-scores every affected control the moment drift is detected, updating the three-dimensional score (defense_effectiveness, evidence_coverage, evidence_freshness) for each control impacted by the change. Citadel surfaces the drift event with full context: what changed, when, which controls are affected, what the posture impact is, and what remediation action the AutomationPolicy has initiated or is awaiting approval to execute.
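The three-way gate routing described above (auto-apply, approval, manual-only) reduces to a small decision function. The categories and change-window handling in this Python sketch are illustrative stand-ins, not the platform's actual policy vocabulary.

```python
from enum import Enum

class Gate(Enum):
    AUTO_APPLY = "auto-apply"
    APPROVAL = "approval"
    MANUAL = "manual-only"

# Illustrative category sets; a real policy would load these from config.
PRE_APPROVED = {"encryption_restoration", "logging_reenable", "security_group_revert"}
STRUCTURAL = {"vpc_change", "iam_trust_policy"}

def route(category: str, in_change_window: bool) -> Gate:
    """Route a drift event to the gate its category and timing allow."""
    if category in STRUCTURAL:
        return Gate.MANUAL          # structural change: human engineering required
    if category in PRE_APPROVED and in_change_window:
        return Gate.AUTO_APPLY      # pre-approved fix inside its change window
    return Gate.APPROVAL            # everything else waits for human sign-off

print(route("logging_reenable", in_change_window=True))   # auto-apply
print(route("logging_reenable", in_change_window=False))  # approval
print(route("vpc_change", in_change_window=True))         # manual-only
```

Keeping the gate logic this explicit makes the policy itself reviewable code, subject to the same pull-request discipline as the infrastructure it governs.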

07
The Convergence Loop
Scan. Monitor. Score. Remediate. Converge.

The convergence loop is the operational model that ties infrastructure-as-evidence into a continuous compliance posture. It is not a linear process with a beginning and end. It is a closed loop that runs continuously, converging the actual state of infrastructure toward the declared desired state. The loop has five phases. Scan: security tools evaluate the codebase and deployed infrastructure for vulnerabilities, misconfigurations, and policy violations. Monitor: the monitoring layer ingests security service data, observes resource configurations against declared state, and detects drift in real time. Score: each control is evaluated across three dimensions (defense_effectiveness, evidence_coverage, evidence_freshness) and an aggregate posture score is computed for every active framework. Remediate: when scoring identifies a degraded control, remediation actions are generated. Converge: after remediation executes, the loop cycles back to scan and monitor, verifying the fix took effect and the control has returned to full satisfaction.
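One iteration of the five-phase loop can be sketched as a pipeline of stubs, each standing in for the real subsystem. All names and return shapes here are illustrative, not the platform's API; the point is the closed-loop structure, where remediation feeds back into the next scan-and-monitor pass.

```python
def scan(system):                       # phase 1: vuln/misconfig scan (stubbed)
    return {"findings": []}

def monitor(system):                    # phase 2: declared vs observed (stubbed)
    return {"drift": ["logging_enabled"]}

def score(scan_r, mon_r):               # phase 3: degraded if anything diverged
    degraded = bool(scan_r["findings"] or mon_r["drift"])
    return {"converged": not degraded, "drift": mon_r["drift"]}

def remediate(score_r):                 # phase 4: generate scoped fix actions
    return [f"terraform PR: restore {attr}" for attr in score_r["drift"]]

def converge_once(system: str) -> dict:
    """Run one loop iteration; the caller re-invokes this continuously."""
    s, m = scan(system), monitor(system)
    sc = score(s, m)
    actions = [] if sc["converged"] else remediate(sc)
    return {"system": system, "converged": sc["converged"], "actions": actions}

print(converge_once("payments-prod"))
```

Phase 5 (converge) is the loop itself: the next iteration re-runs scan and monitor and verifies that the generated remediation actually restored the declared state.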

Without convergence, posture degrades mechanically. Infrastructure changes introduce new drift. New vulnerabilities are disclosed against existing dependencies. Evidence ages past its usefulness. Personnel changes alter access patterns. Each of these events affects compliance posture, but without a continuous loop to detect and correct them, the degradation is invisible until the next assessment. Organizations that treat compliance as a periodic project experience this degradation as an assessment surprise: the posture that was 95% converged six months ago is 78% converged today, and the remediation effort to restore it consumes weeks of engineering time under deadline pressure. The convergence loop prevents this degradation from accumulating by detecting deviations at the speed they occur and initiating correction before the gap widens.

The convergence loop is formalized as PostureFunction(system, framework, timestamp): a deterministic computation that takes a system identifier, a framework identifier, and a point in time, and returns the precise compliance posture at that moment. Vanguard feeds scan results into Rampart, which evaluates each finding against the controls it affects. Sentinel monitors the deployed estate continuously, detecting drift and feeding configuration observations into the same scoring engine. Citadel displays the convergence dashboard: how many controls are converged (running state matches declared state with current evidence), how many are diverged (drift detected, remediation pending), and how many are converging (remediation in progress, awaiting re-verification). Intelligent remediation PR generation produces Terraform pull requests scoped to the specific resource and attribute that drifted, with the before and after configuration visible in the diff, linked to the control it affects and the posture impact of the change. The trajectory matters as much as the current score. A system that was 85% converged last week and is 92% converged today demonstrates active posture improvement. A system that was 95% converged last month and is 88% converged today signals degradation that requires investigation.
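A deterministic posture computation over the three dimensions named above might look like the following Python sketch. The equal weighting, linear freshness decay, and 30-day horizon are assumptions chosen for illustration, not the platform's actual scoring model.

```python
from datetime import datetime, timedelta, timezone

def freshness(last_evidence_at: datetime, now: datetime,
              horizon_days: int = 30) -> float:
    """Linear decay: evidence older than the horizon scores zero."""
    age_days = (now - last_evidence_at).days
    return max(0.0, 1.0 - age_days / horizon_days)

def posture(controls: list[dict], now: datetime) -> float:
    """Average each control's three dimensions, then average across controls."""
    per_control = [
        (c["defense_effectiveness"]
         + c["evidence_coverage"]
         + freshness(c["last_evidence_at"], now)) / 3
        for c in controls
    ]
    return round(sum(per_control) / len(per_control), 3)

now = datetime(2025, 4, 1, tzinfo=timezone.utc)
controls = [
    {"defense_effectiveness": 1.0, "evidence_coverage": 1.0,
     "last_evidence_at": now - timedelta(days=1)},   # fully covered, fresh
    {"defense_effectiveness": 0.8, "evidence_coverage": 0.5,
     "last_evidence_at": now - timedelta(days=20)},  # partial, aging evidence
]
print(posture(controls, now))  # → 0.767
```

Because the function is pure in (system state, framework, timestamp), calling it for two timestamps yields the trajectory the paragraph describes: the same inputs always reproduce the same score, so a falling score is attributable to changed inputs, not scoring noise.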

08
The New Model
One Hardened Deployment. Living Evidence. Continuous Convergence.

The traditional compliance model treats infrastructure and evidence as separate concerns managed by separate teams on separate timelines. Engineers build and maintain systems. Compliance teams collect and organize proof that those systems satisfy framework requirements. The two workflows share a subject (the infrastructure) but nothing else: not the data sources, not the collection mechanisms, not the temporal cadence, not the storage systems. This separation is not a process inefficiency. It is an architectural flaw that guarantees evidence decay, assessment scrambles, and compliance findings. The evidence and the infrastructure are two independent representations of the same reality, maintained by different people with different tools, synchronized manually at quarterly intervals. Every organization that operates this model experiences the same failure modes: stale evidence, scrambled re-collection, assessor surprises, and engineering time consumed by documentation rather than security improvement.

The new model eliminates the separation. Infrastructure deployed from hardened IaC modules satisfies controls from first deployment. The deployment event is the evidence creation event. Continuous monitoring confirms the configuration persists. Drift detection identifies deviations within minutes. Remediation restores the compliant state through the same code pipeline that created it. The convergence loop runs continuously, keeping the actual state aligned with the declared state and accumulating evidence of both compliance and resilience with every cycle. There is no separate evidence collection process because the infrastructure operations ARE the evidence collection process. There is no assessment scramble because the evidence is always current. There is no gap between what the documentation says and what the infrastructure does because they are the same artifact.

One hardened deployment through Armory satisfies controls across every framework the organization maintains. Living evidence through Sentinel replaces static snapshots with a continuous stream of cryptographically verified observations, each carrying a SHA-256 integrity hash and OpenTelemetry trace ID. Continuous convergence through the PostureFunction keeps the actual posture aligned with the declared posture, with every deviation detected, scored, and remediated through the same infrastructure-as-code pipeline that created the baseline. Rampart scores every control across three dimensions (defense_effectiveness, evidence_coverage, evidence_freshness), projecting posture across every active framework simultaneously. Citadel surfaces the unified view: one dashboard showing every system, every framework, every control, and every piece of evidence, with full provenance from code commit through deployment through continuous operation to compliance proof. The infrastructure is the evidence. The convergence loop is the proof that the evidence persists.

Something is being forged.

The full platform is under active development. Reach out to learn more or get early access.