Your Compliance Evidence Is Already Stale.
Evidence Decay in Compliance Programs
Point-in-time evidence collection is not a risk management trade-off. It is a mechanical certainty of failure. Every compliance artifact begins decaying the moment it is captured. Infrastructure drifts. Configurations change. Personnel rotate. The evidence that earned your last assessment describes a system that no longer exists. Continuous, event-sourced evidence with immutable integrity records is the only architecture that keeps pace with the environments it describes.
Evidence Decay
Evidence ages. Infrastructure does not wait.
Every compliance program depends on evidence: proof that controls are implemented, operating effectively, and producing the outcomes they claim. That evidence has a shelf life. A configuration snapshot taken today describes today's environment. Tomorrow, the environment changes. The snapshot does not. This gap between captured evidence and running reality is evidence decay, and it undermines every framework, every assessment, and every compliance posture that depends on point-in-time collection.
Compliance evidence is any artifact that demonstrates a control is implemented and operating effectively. A configuration snapshot proving encryption is enabled on a storage volume. An access review log showing quarterly privilege validation. A vulnerability scan report demonstrating that critical findings are remediated within the required timeframe. A network diagram depicting the authorization boundary and data flow paths. Each of these artifacts captures the state of the environment at a specific moment. That moment passes. The environment continues to change. The artifact does not update itself. This divergence between the captured state and the current state is evidence decay. It is not a failure of process discipline. It is a property of time applied to static records in dynamic environments.
The rate of decay varies by evidence type. Infrastructure configuration evidence decays fastest because cloud environments change continuously: security groups are modified, IAM policies are updated, new resources are provisioned, encryption settings are adjusted. A configuration snapshot of an environment taken on Monday may not reflect Tuesday's reality. Access control evidence decays at a medium rate, governed by personnel changes, role modifications, and privilege escalations that accumulate between formal access reviews. Policy evidence decays slowest because organizational policies change infrequently, but even policy evidence becomes stale when the infrastructure it describes is modified without a corresponding policy update. The System Security Plan that describes your network architecture is valid until the network architecture changes. After that, it describes a system that no longer exists.
Assessors understand evidence decay intuitively, even when organizations do not. A C3PAO evaluating CMMC Level 2 practices will ask when each evidence artifact was collected. A SOC 2 Type II auditor evaluating the operating effectiveness of controls over a twelve-month period will look for evidence distributed across that entire period, not a batch collected in the final week. A FedRAMP assessor reviewing continuous monitoring artifacts expects monthly or quarterly evidence demonstrating sustained control operation, not annual snapshots. The assessment is not asking whether the control was ever implemented. It is asking whether the control was operating effectively throughout the assessment period. Evidence that is six months old answers the first question. It does not answer the second. The distinction between "was this control ever working" and "is this control working now" is the distinction between evidence that supports certification and evidence that does not.
Most organizations collect compliance evidence on a periodic cycle. Quarterly is common. Monthly is aspirational. Annually is more frequent than many organizations achieve in practice. The collection process follows a predictable pattern: a compliance manager sends a spreadsheet to control owners requesting updated evidence for their assigned controls. Engineers take screenshots of console configurations, export access control lists, generate vulnerability scan reports, and upload them to a shared drive or GRC platform. The compliance manager reviews submissions for completeness, follows up on missing items, and compiles the evidence package. This cycle takes two to six weeks depending on organization size and the number of controls under assessment. By the time the collection cycle completes, the earliest artifacts are already weeks old. The environment they describe has already changed.
The cost of evidence decay compounds silently until assessment time. Between collection cycles, nobody tracks whether the evidence remains valid. The configuration snapshot from last quarter still sits in the evidence repository, marked as "current" because the next collection cycle has not replaced it. Meanwhile, an engineer modified the security group rules to accommodate a new application. The IAM team created a new service role with broader permissions than the access control policy permits. A storage bucket was created without the encryption configuration that the evidence artifact claims is enforced on all storage. None of these changes trigger an evidence update because the evidence collection process is periodic, not continuous. The evidence repository describes a compliant environment. The running environment is not compliant. The organization does not know this until the next collection cycle, or worse, until the assessor discovers it during the assessment.
The assessment scramble begins four to eight weeks before the assessor arrives. The compliance manager reviews the evidence repository and discovers that most artifacts are months old. Configuration snapshots no longer match the running environment. Access reviews have not been completed since the last collection cycle. Network diagrams depict an architecture that has been modified multiple times without documentation updates. The scramble is the organization's attempt to rebuild its evidence package from scratch under time pressure while simultaneously maintaining normal operations. Under that pressure, control owners take shortcuts. Screenshots are captured without verifying correctness. Policy documents are signed without substantive review. Narrative descriptions are written from memory rather than verified against running infrastructure. The resulting package is complete in the sense that every control has an artifact attached. It is not reliable in the sense that every artifact accurately reflects the current state of the control it describes.
The traditional evidence collection cycle begins with a request and ends with an artifact repository that describes a system frozen at multiple points in time. A compliance manager identifies which controls require updated evidence, generates a task list, and distributes it to control owners across the organization. Each control owner is responsible for producing an artifact that demonstrates their assigned control is implemented and operating. For infrastructure controls, this typically means logging into a management console, navigating to the relevant configuration page, taking a screenshot, and uploading it to the designated repository. For process controls, it means locating the most recent policy document and confirming it has a current approval signature. For operational controls, it means exporting log data, generating reports, or providing attestation statements. Each task is treated as a discrete work item, disconnected from the other evidence tasks and from the running environment.
This process creates three categories of false compliance. First, evidence of configuration is not evidence of operation. A screenshot showing that encryption is enabled on a storage volume proves the setting exists at the time of capture. It does not prove that every object written to that volume was encrypted, that the encryption key is managed according to policy, or that the setting was not disabled and re-enabled specifically for the screenshot. Second, evidence of policy is not evidence of enforcement. A signed access control policy proves that management approved a document. It does not prove that the policy is technically enforced, that exceptions are tracked, or that violations are detected and remediated. Third, evidence of existence is not evidence of effectiveness. A vulnerability scan report proves that a scan was executed. It does not prove that critical findings were remediated within the required timeframe, that the scan covered the full authorization boundary, or that the scanner was configured to detect the vulnerability categories relevant to the compliance framework. The collection cycle produces artifacts that satisfy the letter of the evidence requirement while leaving the spirit of the requirement unaddressed.
Sentinel replaces periodic collection with continuous evidence streams. Instead of requesting artifacts on a schedule, Sentinel declares evidence requirements as structured specifications: what artifact type is needed, from which resource, at what minimum freshness, mapped to which controls. These declarative requirements are resolved against the infrastructure that Garrison discovers and inventories. When Sentinel detects that a requirement can be satisfied by observing a running resource, it establishes a persistent evidence stream. Configuration state is captured as it changes, not when someone remembers to take a screenshot. Access events are recorded as they occur, not reconstructed from logs weeks later. Scan results flow into the evidence chain at completion, not when a control owner uploads them manually. Each evidence event is stored with a SHA-256 integrity hash computed at creation and an OpenTelemetry W3C trace ID linking the evidence record to the originating infrastructure event. The result is an evidence architecture where collection is a continuous property of the system, not a periodic project imposed on it.
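A minimal sketch of what such a declarative evidence requirement might look like. The field names and control identifiers here are illustrative assumptions, not Sentinel's actual schema:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class EvidenceRequirement:
    """Declarative spec: what artifact, from which resources, how fresh, for which controls."""
    artifact_type: str       # e.g. "config_snapshot", "access_log", "scan_report"
    resource_selector: str   # which discovered resources can satisfy this requirement
    max_age: timedelta       # minimum freshness: evidence older than this is stale
    control_ids: tuple       # controls this evidence supports

# Hypothetical example: storage encryption config must be no older than one hour.
req = EvidenceRequirement(
    artifact_type="config_snapshot",
    resource_selector="storage:*",
    max_age=timedelta(hours=1),
    control_ids=("SC-28", "SC-13"),
)

def is_stale(age: timedelta, req: EvidenceRequirement) -> bool:
    """A requirement is unmet when the newest matching artifact exceeds max_age."""
    return age > req.max_age

print(is_stale(timedelta(minutes=30), req))  # False: still fresh
print(is_stale(timedelta(hours=2), req))     # True: past the freshness bound
```

Because the requirement is data rather than a task on a spreadsheet, it can be resolved automatically against the discovered inventory and re-evaluated every time the clock or the environment moves.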
Infrastructure drift is the divergence between the declared state of an environment and its actual running state. In compliance terms, drift means the environment no longer matches the evidence that describes it. Drift is not caused by negligence. It is caused by operations. Engineers modify configurations to resolve incidents. New services are deployed to meet business requirements. Auto-scaling creates and destroys resources continuously. Each change is individually reasonable. Collectively, they move the environment away from the state captured in the last evidence collection. The rate of drift correlates with the pace of operational activity. High-velocity environments that deploy multiple times per day drift faster than static environments. But even the most stable environments drift: certificate renewals, patching cycles, and infrastructure provider changes all modify the environment without requiring a deployment.
Drift affects compliance evidence in two directions. Positive drift occurs when the environment improves beyond what the evidence describes: a team implements stronger encryption or tighter access controls without updating the evidence repository. The organization is more secure than its evidence suggests, but the evidence is still stale. Negative drift is more dangerous: configurations weaken, permissions widen, monitoring coverage decreases, and the evidence repository still describes the stronger previous state. A security group modification that opens a port for troubleshooting and is never closed. An IAM policy exception granted for a migration project that becomes permanent. A logging pipeline that stops collecting events from a new resource type because the collector was not updated. Each of these changes degrades the security posture without triggering an evidence update. The gap between documented posture and running posture is the drift gap, and it is invisible between collection cycles.
Sentinel detects drift in real time by continuously comparing the observed state of every resource against its declared baseline. When a configuration diverges from the baseline, Sentinel records the drift event with full before-and-after state, maps it to every affected control, and triggers immediate re-evaluation. Rampart re-computes the per-control score across three dimensions: defense_effectiveness (is the control still technically enforced), evidence_coverage (does sufficient evidence exist to prove it), and evidence_freshness (how recently was the evidence produced). The resulting confidence score between 0.0 and 1.0 reflects the compound impact of drift on each control. For drift scenarios with known remediation paths, Sentinel executes auto-remediation governed by an AutomationPolicy per rule: auto-apply for low-risk corrections such as restoring a logging configuration, approval-gated for changes that require human review such as modifying access policies, and manual-only for high-impact changes that demand engineering judgment. Each remediation generates its own evidence event, closing the loop from detection through correction to proof.
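The core of drift detection is a structural diff between declared baseline and observed state, emitting an event with full before-and-after values for each divergence. A simplified sketch (the event shape is an assumption for illustration):

```python
def detect_drift(baseline: dict, observed: dict) -> list:
    """Compare observed resource state against its declared baseline and
    emit one drift event per diverged field, with full before/after state."""
    events = []
    for key in baseline.keys() | observed.keys():
        before, after = baseline.get(key), observed.get(key)
        if before != after:
            events.append({"field": key, "before": before, "after": after})
    return events

# A port opened for troubleshooting and never closed: classic negative drift.
baseline = {"encryption": "aes256", "port_22_open": False, "logging": True}
observed = {"encryption": "aes256", "port_22_open": True, "logging": True}

for event in detect_drift(baseline, observed):
    print(event)  # {'field': 'port_22_open', 'before': False, 'after': True}
```

In the real pipeline each such event would be mapped to the affected controls and fed into re-scoring; the sketch shows only the comparison step.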
Evidence freshness is a compliance dimension that most organizations treat as binary: evidence either exists or it does not. In reality, freshness operates on a continuous scale. A configuration snapshot taken five minutes ago is more valuable than one taken five days ago, which is more valuable than one taken five months ago. The value degrades at a rate determined by the volatility of the underlying resource and the sensitivity of the control it supports. A snapshot of a firewall rule set in a production environment with daily deployments loses relevance within hours. A snapshot of an encryption configuration on an archival storage volume may remain valid for weeks. A signed policy document may remain valid for months. Treating all evidence as equally valid regardless of age is the foundational error that makes periodic collection feel adequate when it is not.
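One way to model this continuous scale is exponential decay with a per-resource half-life, so volatile resources lose evidentiary value fastest. This is an illustrative model, not a published Sentinel formula:

```python
def freshness(age_hours: float, half_life_hours: float) -> float:
    """Freshness in [0, 1]: halves every half_life_hours, so a snapshot of a
    volatile resource (short half-life) loses value far faster than one of a
    stable resource."""
    return 0.5 ** (age_hours / half_life_hours)

# A firewall rule set in a daily-deploy environment vs. archival storage config
# (half-lives are assumed values for illustration).
firewall = freshness(age_hours=24, half_life_hours=6)        # decays within hours
archive = freshness(age_hours=24, half_life_hours=24 * 14)   # holds for weeks

print(round(firewall, 4))  # 0.0625
print(round(archive, 2))   # 0.95
```

The same one-day-old artifact is nearly worthless for the firewall and nearly full-value for the archive, which is exactly why a single binary "exists / does not exist" test misprices both.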
Staleness compounds across controls. An organization with 200 controls does not have 200 independent freshness problems. It has a system-wide posture that degrades as individual controls lose evidence currency. When 10% of controls have stale evidence, the organization can reasonably claim strong posture. When 40% of controls have stale evidence, the posture is uncertain. When 70% of controls have stale evidence, the organization cannot demonstrate compliance regardless of how strong its actual defenses are. The compounding effect accelerates between collection cycles because evidence does not decay at a uniform rate. Fast-changing infrastructure controls go stale within days. Process controls may hold for weeks. By the midpoint of a quarterly collection cycle, the controls with the fastest decay rates have already lost their evidentiary support. By the end of the cycle, the majority of infrastructure-related evidence is no longer current.
Rampart treats evidence freshness as a first-class scoring dimension. Each control receives a three-dimensional score: defense_effectiveness measures whether the control is technically enforced, evidence_coverage measures whether sufficient artifacts exist to demonstrate it, and evidence_freshness measures how recently those artifacts were produced. The three dimensions multiply to produce a confidence score between 0.0 and 1.0. A control with strong defenses (0.95), full evidence coverage (1.0), but aging evidence (0.3) scores 0.285, reflecting the reality that the organization cannot prove what it cannot demonstrate with current evidence. Sentinel manages evidence expiration through a configurable lifecycle with three escalation stages: WARN_ONLY alerts the control owner that evidence is approaching staleness, DEGRADE_CONTROL reduces the evidence_freshness score to reflect the increased uncertainty, and CREATE_FINDING generates a formal compliance finding that enters the remediation workflow. Sentinel re-collects evidence automatically for controls where the collection source supports automation, refreshing the artifact before it crosses the staleness threshold. For controls that require human-generated evidence such as access reviews or policy attestations, the expiration lifecycle routes the refresh task to the responsible owner with sufficient lead time to complete it before degradation begins.
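The scoring and escalation logic can be sketched as follows. The multiplication matches the example above; the escalation thresholds are illustrative assumptions, not Sentinel's defaults:

```python
from enum import Enum

class ExpirationStage(Enum):
    WARN_ONLY = 1        # alert the control owner: evidence approaching staleness
    DEGRADE_CONTROL = 2  # reduce evidence_freshness to reflect uncertainty
    CREATE_FINDING = 3   # formal finding enters the remediation workflow

def confidence(defense_effectiveness: float, evidence_coverage: float,
               evidence_freshness: float) -> float:
    """Compound per-control score in [0.0, 1.0]: the three dimensions multiply."""
    return defense_effectiveness * evidence_coverage * evidence_freshness

# Strong defenses, full coverage, but aging evidence: the example from the text.
print(round(confidence(0.95, 1.0, 0.3), 3))  # 0.285

def stage_for(age_ratio: float):
    """Escalation stage for evidence at age_ratio = age / max_age.
    Thresholds here are assumed for illustration."""
    if age_ratio >= 1.0:
        return ExpirationStage.CREATE_FINDING
    if age_ratio >= 0.9:
        return ExpirationStage.DEGRADE_CONTROL
    if age_ratio >= 0.75:
        return ExpirationStage.WARN_ONLY
    return None  # still comfortably fresh

print(stage_for(1.2))  # ExpirationStage.CREATE_FINDING
```

The multiplicative form is the important design choice: a zero or near-zero score on any one dimension collapses the whole score, so a perfectly enforced control with no current evidence cannot masquerade as healthy.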
Immutability means an evidence record cannot be modified after creation without detection. In compliance, immutability addresses a specific trust problem: when an organization presents evidence to an assessor, the assessor must determine whether the evidence reflects the actual state of the environment at the time of capture or whether it has been modified to present a more favorable picture. Traditional evidence repositories offer no mechanism for this determination. A screenshot in a shared drive can be replaced. A log export can be edited. A scan report can be regenerated with different parameters. The assessor evaluates the artifacts at face value because there is no technical mechanism to verify their integrity. Trust in the evidence depends entirely on trust in the organization that produced it.
The tamper-evidence problem extends beyond intentional manipulation. Evidence records are also vulnerable to accidental corruption: file system errors that alter stored artifacts, migration processes that modify timestamps or metadata, organizational restructuring that moves evidence between repositories and inadvertently changes file properties. Even well-intentioned corrections create integrity questions. When a compliance analyst notices an error in an evidence description and updates the metadata, the modification is indistinguishable from deliberate falsification without an integrity verification mechanism. The assessor cannot determine whether the record was corrected or fabricated. In regulated environments where evidence integrity carries legal weight, the inability to distinguish between correction and fabrication is a material liability.
Redoubt Forge stores every evidence event with a SHA-256 integrity hash computed at the moment of creation. The hash covers the full evidence payload: the artifact content, the timestamp, the source system identifier, the control mapping, and the provenance metadata. Each subsequent evidence event includes the hash of the previous event in its computation, forming a hash chain where any modification to any record invalidates every subsequent hash. Every evidence event also carries an OpenTelemetry W3C trace ID that connects it to the originating infrastructure event, establishing a verifiable chain from the infrastructure change to the compliance record. Rampart supports temporal posture queries: given any timestamp, it reconstructs the exact posture state that existed at that moment by replaying the event-sourced evidence stream up to that point. An assessor can query posture at any date during the assessment period and receive a cryptographically verifiable answer. Sentinel produces the evidence events continuously, and the integrity chain grows with each event. The result is not a claim that the evidence is trustworthy. It is a cryptographic proof that the evidence has not been altered since creation, verifiable by any party with access to the hash chain.
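The hash-chain construction is standard and easy to demonstrate. A minimal sketch, assuming a JSON-serializable payload (the record shape and genesis value are illustrative, not Redoubt Forge's internal format):

```python
import hashlib
import json

def event_hash(payload: dict, prev_hash: str) -> str:
    """SHA-256 over the canonical payload plus the previous event's hash,
    so modifying any record invalidates every subsequent hash."""
    canonical = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(canonical.encode()).hexdigest()

def build_chain(payloads: list) -> list:
    chain, prev = [], "0" * 64  # assumed genesis value
    for payload in payloads:
        h = event_hash(payload, prev)
        chain.append({"payload": payload, "hash": h, "prev": prev})
        prev = h
    return chain

def verify(chain: list) -> bool:
    """Recompute every link; a tampered payload breaks the chain from that point on."""
    prev = "0" * 64
    for record in chain:
        if record["prev"] != prev or event_hash(record["payload"], prev) != record["hash"]:
            return False
        prev = record["hash"]
    return True

chain = build_chain([
    {"control": "SC-28", "artifact": "config_snapshot", "ts": "2025-01-01T00:00:00Z"},
    {"control": "AC-2", "artifact": "access_review", "ts": "2025-01-02T00:00:00Z"},
])
print(verify(chain))                                 # True
chain[0]["payload"]["ts"] = "2024-12-31T00:00:00Z"   # tamper with the first record
print(verify(chain))                                 # False: detected
```

Any party holding the chain can run the same verification independently, which is what turns "trust us" into "check for yourself."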
Continuous assessment is the operational model where compliance posture is computed at every change, not evaluated at periodic intervals. In a continuous model, every infrastructure modification triggers a re-evaluation of every affected control. Every evidence event updates the freshness scores of every control it supports. Every drift detection triggers a posture recalculation. The compliance state of the system is always current because it is a function of the current evidence stream, not a function of the last time someone checked. This model eliminates the fundamental weakness of periodic assessment: the gap between assessments where the organization operates without knowing whether it remains compliant.
Periodic assessment always fails because it optimizes for the assessment event rather than for continuous compliance. Organizations prepare for assessments. They collect evidence in the weeks before the assessor arrives. They remediate gaps discovered during preparation. They present a polished evidence package that represents their best compliance state during a narrow window. Between assessments, the incentive structure shifts: operational priorities take precedence, evidence collection stops, drift accumulates, and the compliance posture degrades until the next assessment cycle forces another round of preparation. This sawtooth pattern, where compliance peaks at assessment time and decays between assessments, is the predictable outcome of periodic verification applied to continuously changing environments. The pattern cannot be corrected by more frequent assessments. It can only be corrected by eliminating the gap between assessments entirely.
PostureFunction(system, framework, timestamp) computes the exact compliance state of any system against any framework at any point in time. The function consumes the event-sourced evidence stream, applies the framework's control requirements and parameter thresholds, and produces a per-control posture snapshot that is fully reproducible. Citadel surfaces the current posture in real time and maintains a pre-computed action queue ranked by (score_impact x urgency) / estimated_effort, ensuring that the highest-value remediation tasks are always visible and prioritized. Sentinel operates as a convergence controller: it continuously compares the current posture against the target posture and executes corrective actions through its AutomationPolicy framework. When a control's score drops below its target threshold, Sentinel determines whether the degradation is caused by evidence staleness, defense failure, or coverage gap, and routes the appropriate corrective action. Evidence staleness triggers re-collection. Defense failure triggers remediation. Coverage gap triggers discovery. The system converges toward the target posture continuously, not in periodic bursts. Every corrective action generates its own evidence event, and the cycle repeats.
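Two of these mechanics lend themselves to a compact sketch: temporal posture reconstruction by event replay, and the action-queue ranking formula quoted above. Event and action shapes are assumptions for illustration:

```python
def posture_at(events: list, timestamp: str) -> dict:
    """Reconstruct per-control posture at a point in time by replaying the
    event-sourced stream up to that timestamp. ISO-8601 strings compare
    correctly in lexical order, so string comparison suffices here."""
    posture = {}
    for event in sorted(events, key=lambda e: e["ts"]):
        if event["ts"] > timestamp:
            break
        posture[event["control"]] = event["score"]
    return posture

def rank_actions(actions: list) -> list:
    """Action queue ranked by (score_impact * urgency) / estimated_effort."""
    return sorted(
        actions,
        key=lambda a: (a["score_impact"] * a["urgency"]) / a["estimated_effort"],
        reverse=True,
    )

events = [
    {"ts": "2025-01-01T00:00Z", "control": "SC-28", "score": 0.9},
    {"ts": "2025-02-01T00:00Z", "control": "SC-28", "score": 0.4},  # drift degraded it
]
print(posture_at(events, "2025-01-15T00:00Z"))  # {'SC-28': 0.9}
print(posture_at(events, "2025-02-15T00:00Z"))  # {'SC-28': 0.4}

queue = rank_actions([
    {"name": "refresh_evidence", "score_impact": 0.5, "urgency": 0.9, "estimated_effort": 1.0},
    {"name": "fix_drift", "score_impact": 0.6, "urgency": 0.8, "estimated_effort": 4.0},
])
print(queue[0]["name"])  # refresh_evidence: 0.45 outranks 0.12
```

The replay property is what makes the answer reproducible: two parties replaying the same immutable stream to the same timestamp must arrive at the same posture.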
Evidence decay is not a process problem that can be solved with better discipline or more frequent collection cycles. It is an architectural limitation of static artifacts in dynamic environments. No amount of diligence changes the fundamental property: a point-in-time snapshot begins aging the instant it is captured. The only architecture that eliminates decay is one where evidence is produced continuously, stored immutably, and scored against the running state of the environment it describes. This is not an incremental improvement to periodic collection. It is a different model entirely.
The consequences of the old model are predictable and severe. Organizations operate in a perpetual cycle of collection, decay, scramble, and assessment. Each cycle consumes engineering time that should be spent improving security posture. Each cycle produces evidence packages that describe systems frozen at various points in the past. Each cycle ends with an assessment that evaluates a composite image assembled from temporal fragments rather than a coherent view of a running system. The scramble before each assessment is not a failure of preparation. It is the inevitable outcome of an evidence architecture that cannot keep pace with the environment it describes.
The transformation from stale snapshots to living evidence eliminates the decay cycle at its root. Sentinel produces evidence continuously from connected infrastructure, detects drift in real time, and executes corrective actions through policy-governed automation. Rampart scores every control across three dimensions and manages evidence expiration through configurable lifecycle stages. Citadel surfaces the current posture state and prioritizes the action queue by compound impact. Every evidence event carries a SHA-256 integrity hash and OpenTelemetry trace ID, making the entire evidence chain cryptographically verifiable. PostureFunction computes compliance state at any timestamp from the event-sourced stream. The scramble disappears because evidence is always current. The drift gap closes because drift is detected and corrected continuously. The assessment transforms from a high-pressure verification event into a routine confirmation of a posture that has been continuously demonstrated. Your evidence is not stale. It was forged minutes ago. And the next event is already being recorded.
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.