EU AI Act. AI Regulation Mapped to Governance Controls.
EU AI Act Overlay (Future)
The EU AI Act is the first comprehensive AI regulation enacted by any jurisdiction. It classifies AI systems by risk tier and imposes binding obligations on providers and deployers of high-risk systems, including risk management, data governance, transparency, human oversight, and conformity assessment. This overlay will map EU AI Act requirements to NIST 800-53 controls and NIST AI RMF functions through ADD and MODIFY operations. Organizations building AI governance posture today through AI RMF and AI 600-1 are accumulating controls and evidence that directly prepare them for EU AI Act compliance when this overlay is finalized.
The EU AI Act establishes a risk-based regulatory framework with binding legal obligations that extend beyond any single nation's borders. Organizations that deploy AI systems affecting EU residents face classification requirements, conformity assessments, and penalties that rival GDPR in scope and financial consequence. This overlay translates those regulatory obligations into the control language your compliance program already speaks: NIST 800-53 baselines, AI RMF functions, and structured evidence collection from running systems.
The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework governing artificial intelligence. The European Parliament adopted the Act in March 2024, and it entered into force on August 1, 2024. Unlike voluntary frameworks such as the NIST AI RMF, the EU AI Act carries binding legal obligations and enforcement mechanisms with substantial financial penalties. The regulation applies to providers that place AI systems on the EU market, deployers that use AI systems within the EU, and any organization whose AI system's output is used within the EU, regardless of where the provider or deployer is established. This extraterritorial reach means that a U.S.-based defense contractor deploying an AI-assisted threat detection system that processes data from EU partners falls within scope. A healthcare organization using AI diagnostic tools that serve EU patients falls within scope. The jurisdictional trigger is effect within the EU, not physical presence.
The Act takes a risk-based approach to classification. Rather than regulating all AI systems uniformly, it establishes four tiers of risk with corresponding levels of obligation. Unacceptable risk systems are prohibited entirely. High-risk systems face the most extensive requirements across risk management, data governance, documentation, transparency, and human oversight. Limited risk systems carry specific transparency obligations. Minimal risk systems face no mandatory requirements beyond voluntary codes of practice. This tiered structure recognizes that a spam filter and a criminal sentencing algorithm pose fundamentally different risks and warrant fundamentally different regulatory treatment. The classification determines which articles apply, what evidence must be maintained, and what assessment process governs market access.
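The tier structure lends itself to a direct encoding. The following minimal Python sketch is illustrative only: the tier names and the articles attached to them come from the Act, but the data layout and obligation labels are editorial assumptions, not the overlay's actual representation.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers (Article 5; Article 6/Annex III; Article 50)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # full Chapter III, Section 2 obligations
    LIMITED = "limited"            # transparency duties (Article 50)
    MINIMAL = "minimal"            # voluntary codes of practice only

# Illustrative mapping from tier to the obligations it triggers.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibition (Art. 5)"],
    RiskTier.HIGH: [
        "risk management (Art. 9)", "data governance (Art. 10)",
        "technical documentation (Art. 11)", "record-keeping (Art. 12)",
        "transparency (Art. 13)", "human oversight (Art. 14)",
        "accuracy, robustness, cybersecurity (Art. 15)",
    ],
    RiskTier.LIMITED: ["disclosure and content labeling (Art. 50)"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}
```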
Enforcement follows a phased timeline. Provisions prohibiting unacceptable-risk AI practices applied from February 2, 2025. General-purpose AI model obligations and governance structure requirements apply from August 2, 2025. The full set of high-risk system obligations, conformity assessment requirements, and penalty provisions applies from August 2, 2026. Obligations for high-risk AI systems that are safety components of products covered by existing EU harmonization legislation (Article 6(1) and Annex I) extend to August 2, 2027. This phased schedule provides organizations with a structured planning horizon. The EU AI Act overlay in Redoubt Forge is under development to align with these enforcement milestones, ensuring that organizations can map their existing AI governance controls to EU obligations before the compliance deadlines arrive.
The EU AI Act overlay will function through the same structural mechanism used by every overlay in the platform: ADD and MODIFY operations on NIST 800-53 controls and AI RMF actions. Article 9 (Risk Management System) maps to modifications of the Risk Assessment (RA) and Program Management (PM) control families, extending their scope to require AI-specific risk identification, analysis, and mitigation throughout the system lifecycle. Article 10 (Data and Data Governance) maps to modifications of the Media Protection (MP), System and Information Integrity (SI), and System and Communications Protection (SC) families, requiring training data quality criteria, bias assessment, and data lineage documentation that traditional baselines do not address. Where no existing 800-53 control adequately covers an EU AI Act obligation, the overlay ADDs new controls. Conformity assessment procedures, CE marking requirements, and AI-specific incident reporting obligations have no direct analog in the current control catalog.
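As a rough sketch of how such operations might be expressed, the schema below is hypothetical: ADD and MODIFY are the platform's operation names, the article-to-control pairings follow the paragraph above, and the `EUAI-*` identifiers are invented placeholders for new controls with no 800-53 analog.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OverlayOp:
    """One overlay operation against the 800-53 catalog (illustrative schema)."""
    op: str         # "ADD" or "MODIFY"
    control: str    # existing 800-53 control, or a new control identifier
    article: str    # EU AI Act article the operation implements
    rationale: str

EU_AI_ACT_OPS = [
    OverlayOp("MODIFY", "RA-3", "Art. 9",
              "Extend risk assessment to lifecycle-long AI risk identification."),
    OverlayOp("MODIFY", "SI-4", "Art. 10",
              "Add training-data quality, bias assessment, and lineage checks."),
    OverlayOp("ADD", "EUAI-43", "Art. 43",
              "Conformity assessment procedure: no existing 800-53 analog."),
    OverlayOp("ADD", "EUAI-48", "Art. 48",
              "CE marking obligations: no existing 800-53 analog."),
]
```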
This overlay is currently in development. The EU AI Act introduces regulatory concepts that do not map cleanly to existing NIST control structures, and the mapping must be precise to be useful. "Fundamental rights impact assessment" (Article 27) is a concept that exists in European law but has no direct counterpart in U.S. federal security frameworks. "Conformity assessment" (Article 43) is a European market access procedure with specific documentation and procedural requirements that differ from U.S. authorization processes like FedRAMP. The overlay development process involves mapping each article's specific obligations to the most appropriate 800-53 controls, identifying where modifications suffice and where new controls must be added, and validating that the resulting control set faithfully represents the regulation's requirements without distortion. This work must be thorough. An overlay that oversimplifies EU AI Act obligations would create a false sense of compliance.
Organizations that have already activated the AI governance overlay pack containing NIST AI RMF (AI 100-1) and NIST AI 600-1 are building the foundation that the EU AI Act overlay will extend. The AI RMF's Govern function maps to the EU AI Act's governance and accountability requirements. The Map function maps to risk classification and impact assessment obligations. The Measure function maps to performance monitoring and bias testing requirements. The Manage function maps to incident response and post-market monitoring obligations. When the EU AI Act overlay is finalized, Rampart will resolve the cross-references between AI RMF actions you have already assessed and the EU AI Act articles they satisfy. Evidence collected under your current AI governance posture will carry forward to the new overlay without re-collection. The preparation is structural, not speculative. Every AI governance control you implement today reduces the gap that the EU AI Act overlay will reveal.
Unacceptable risk AI systems are prohibited under Article 5. These include AI systems that deploy subliminal techniques to manipulate behavior in ways that cause harm, systems that exploit vulnerabilities related to age, disability, or social or economic situation, social scoring systems operated by or on behalf of public authorities, and real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with narrow exceptions for specific serious crimes). The prohibition is absolute for most categories. Organizations must assess whether any of their AI deployments fall within these definitions and cease operation of any that do. The classification is based on the system's purpose and method, not its underlying technology. A large language model used for general-purpose text generation is not inherently prohibited, but the same model deployed specifically to manipulate vulnerable populations through subliminal techniques would be.
High-risk AI systems are defined in Article 6 and Annex III. Two paths lead to high-risk classification. First, AI systems that are safety components of products covered by existing EU harmonization legislation (medical devices, machinery, vehicles, aviation) are automatically classified as high-risk. Second, stand-alone AI systems used in specific domains listed in Annex III are high-risk: biometric identification, critical infrastructure management, education and vocational training (admissions, assessment), employment (recruitment, performance evaluation, task allocation), essential services access (credit scoring, emergency dispatch), law enforcement (risk assessment, evidence analysis), migration and border control, and administration of justice. The classification triggers the full set of obligations under Chapter III, Section 2: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity requirements. Organizations must determine whether their AI systems fall within these categories before the August 2026 enforcement date.
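A simplified sketch of the two classification paths follows, under stated assumptions: the domain set abbreviates Annex III, and the Article 6(3) derogation for systems that pose no significant risk is deliberately omitted.

```python
def classify_high_risk(is_safety_component: bool,
                       covered_by_annex_i: bool,
                       annex_iii_domain: str | None) -> bool:
    """Illustrative two-path test from Article 6 (simplified; ignores the
    Article 6(3) derogation for systems posing no significant risk)."""
    # Path 1: safety component of a product under EU harmonization law.
    if is_safety_component and covered_by_annex_i:
        return True
    # Path 2: stand-alone system in an Annex III domain (abbreviated list).
    ANNEX_III_DOMAINS = {
        "biometric_identification", "critical_infrastructure",
        "education", "employment", "essential_services",
        "law_enforcement", "migration_border", "justice",
    }
    return annex_iii_domain in ANNEX_III_DOMAINS

# A stand-alone recruitment screening tool is high-risk via path 2.
assert classify_high_risk(False, False, "employment") is True
```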
Limited risk AI systems carry transparency obligations under Article 50. Systems that interact directly with natural persons must disclose their AI nature unless the context makes it obvious. AI systems that generate synthetic audio, image, video, or text content must label that content as artificially generated or manipulated. Deepfake content must be disclosed. Emotion recognition systems and biometric categorization systems must inform the individuals subject to them. These transparency requirements apply regardless of whether the system is otherwise classified as high-risk. Minimal risk AI systems face no mandatory requirements. The regulation encourages voluntary codes of conduct for minimal-risk systems but does not impose binding obligations. The majority of AI systems in commercial use fall into this category: recommendation engines, spam filters, search optimization, and general-purpose productivity tools. The classification boundary between limited and minimal risk depends on whether the system interacts with individuals or generates content that could be mistaken for human-created material.
Article 9 requires a risk management system that operates throughout the entire lifecycle of the high-risk AI system. This is not a one-time risk assessment. It is a continuous, iterative process that identifies and analyzes known and reasonably foreseeable risks, estimates and evaluates those risks based on testing under real-world conditions, and implements risk mitigation measures. The risk management system must account for risks to health, safety, and fundamental rights, including risks that emerge from the system's intended use and from reasonably foreseeable misuse. Article 10 establishes data governance requirements for training, validation, and testing datasets. Data must be relevant, sufficiently representative, and free from errors to the extent reasonably achievable. Organizations must examine datasets for potential biases, particularly those that could lead to discrimination against protected groups. Data governance extends to the selection of data sources, the preparation and labeling processes, and the ongoing monitoring of data quality throughout the system's operational life.
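One concrete way to probe a dataset for the disparities Article 10 asks providers to examine is a selection-rate ratio across groups. The sketch below is illustrative: the Act prescribes no numeric threshold, and this ratio is only one of many bias measures an organization might apply.

```python
def selection_rate_disparity(outcomes: dict[str, tuple[int, int]]) -> float:
    """Illustrative Article 10 bias probe: ratio of the lowest to the highest
    favorable-outcome rate across groups. The metric and any threshold applied
    to it are assumptions, not requirements the Act prescribes.

    outcomes maps group -> (favorable_count, total_count).
    """
    rates = {g: fav / tot for g, (fav, tot) in outcomes.items() if tot > 0}
    return min(rates.values()) / max(rates.values())

# A ratio well below 1.0 flags the dataset or model for closer review.
ratio = selection_rate_disparity({"group_a": (80, 100), "group_b": (60, 100)})
print(f"{ratio:.2f}")  # 0.75
```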
Article 11 mandates technical documentation that demonstrates conformity with the regulation before the system is placed on the market or put into service. The documentation must describe the system's intended purpose, design specifications, development methodology, training and testing procedures, performance metrics, and known limitations. Article 12 requires automatic logging (record-keeping) of events that enable tracing of the system's functioning throughout its lifecycle. For the remote biometric identification systems listed in Annex III, logs must capture at minimum the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identification of the natural persons involved in verifying the results. These logging requirements map directly to the Audit and Accountability (AU) control family in NIST 800-53, though the AI-specific event categories extend well beyond traditional system audit logging.
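Rendered as a log schema, those minimum fields might look like the following sketch. The field names and JSON serialization are assumptions; only the event categories themselves come from Article 12.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Article12LogRecord:
    """Minimum event fields named in Article 12 for biometric identification
    systems; the schema itself is illustrative."""
    use_period_start: str         # start of the period of each use
    use_period_end: str           # end of the period of each use
    reference_database: str       # database input data was checked against
    matched_input_ref: str        # input data for which the search led to a match
    verifying_persons: list[str]  # natural persons who verified the results

def emit_log(record: Article12LogRecord) -> str:
    """Serialize one record with a collection timestamp, append-only style."""
    entry = {"collected_at": datetime.now(timezone.utc).isoformat(),
             **asdict(record)}
    return json.dumps(entry, sort_keys=True)
```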
Article 13 requires transparency: high-risk AI systems must be designed and developed to ensure their operation is sufficiently transparent for deployers to interpret and use the system's output appropriately. Article 14 establishes human oversight obligations. High-risk systems must include measures that allow human operators to fully understand the system's capacities and limitations, correctly interpret its output, decide not to use or override the system in any particular situation, and interrupt the system's operation through a stop mechanism. Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Cybersecurity measures must address AI-specific vulnerabilities: training data poisoning, adversarial inputs designed to cause model errors, model inversion attacks, and confidentiality attacks against training data. These seven obligation domains form the core of what the EU AI Act overlay must map to existing control structures.
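A minimal sketch of what an Article 14 oversight wrapper could look like in code, assuming a callable model: the class and method names are hypothetical, but the two capabilities shown (override any single output, halt the system through a stop mechanism) are the ones the article requires.

```python
class HumanOversightGate:
    """Illustrative Article 14 wrapper: a human operator can override any
    single output or halt the system entirely via a stop mechanism."""

    def __init__(self, model):
        self.model = model  # assumed to be a callable inference function
        self.stopped = False

    def stop(self):
        """Interrupt the system's operation (Article 14 stop mechanism)."""
        self.stopped = True

    def infer(self, inputs, operator_override=None):
        if self.stopped:
            raise RuntimeError("System halted by human operator")
        # The operator may decide not to use, or to override, the system's
        # output in any particular situation (Article 14).
        if operator_override is not None:
            return operator_override
        return self.model(inputs)
```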
Before a high-risk AI system can be placed on the EU market, it must undergo a conformity assessment under Article 43. The assessment procedure depends on the system's classification path. Systems classified as high-risk because they are safety components of products covered by existing EU harmonization legislation follow the conformity assessment procedures already established in that legislation. Stand-alone high-risk AI systems listed in Annex III generally follow an internal conformity assessment procedure (self-assessment) based on Annex VI, with one critical exception: AI systems used for biometric identification require third-party conformity assessment by a notified body. The internal procedure requires the provider to verify that the quality management system complies with Article 17 requirements, examine the technical documentation, and verify that the design and development process and post-market monitoring comply with the regulation.
Successful conformity assessment results in an EU declaration of conformity under Article 47 and the affixing of the CE marking under Article 48. The declaration of conformity states that the provider has demonstrated that the high-risk AI system complies with all applicable requirements. The CE marking signals conformity to authorities and purchasers and enables free movement within the single market. Providers must maintain the technical documentation, the declaration of conformity, and the quality management system documentation for ten years after the AI system is placed on the market. The documentation must be made available to national supervisory authorities upon request. For organizations accustomed to U.S. federal authorization processes (FedRAMP ATOs, CMMC certifications), the conformity assessment shares a structural similarity: demonstrate compliance with defined requirements, produce documentation as evidence, and submit to assessment by an authorized body.
The overlay will map conformity assessment requirements to the assessment and authorization workflow that Rampart already manages. Technical documentation requirements under Annex IV map to the System Security Plan structure and control narratives that Rampart maintains for each system. Quality management system requirements under Article 17 map to the Program Management (PM) control family and organizational governance documentation. Post-market monitoring requirements under Article 72 map to the continuous monitoring capabilities that Sentinel provides. The structural parallel between EU conformity assessment and U.S. federal authorization means that organizations already managing authorization packages through the platform will recognize the workflow. The specific documentation formats and procedural requirements differ, but the underlying discipline of demonstrating compliance through structured evidence and independent assessment is the same.
The EU AI Act establishes a three-tier penalty structure under Article 99 that scales with the severity of the violation. The highest tier applies to violations of the prohibited AI practices defined in Article 5: fines up to 35 million EUR or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For large enterprises, the turnover-based calculation typically produces the larger figure. The second tier applies to non-compliance with high-risk AI system requirements (Chapter III, Section 2) and obligations for providers of general-purpose AI models: fines up to 15 million EUR or 3% of global turnover. The third tier applies to supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities: fines up to 7.5 million EUR or 1% of global turnover. For SMEs and startups, the regulation specifies that the lower of the two amounts (fixed or turnover-based) applies, providing some proportionality for smaller organizations.
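The ceiling arithmetic is mechanical enough to express directly. The calculator below illustrates the Article 99 structure and is not legal advice; the tier numbering is the editor's, not the regulation's.

```python
def max_fine(tier: int, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Article 99 fine ceilings (illustrative calculator, not legal advice)."""
    CAPS = {1: (35_000_000, 0.07),   # prohibited practices (Art. 5)
            2: (15_000_000, 0.03),   # high-risk / GPAI obligations
            3: (7_500_000, 0.01)}    # misleading information to authorities
    fixed, pct = CAPS[tier]
    turnover_based = pct * worldwide_turnover_eur
    # SMEs and startups: the *lower* of the two amounts applies.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Example: a provider with 2 billion EUR turnover violating Article 5
# faces up to max(35M, 0.07 * 2B) = 140 million EUR.
print(f"{max_fine(1, 2_000_000_000):,.0f}")  # 140,000,000
```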
Enforcement operates through national supervisory authorities designated by each EU member state. Each member state must establish or designate at least one notifying authority, responsible for the notified bodies that perform conformity assessment, and at least one market surveillance authority responsible for monitoring compliance and investigating complaints. At the EU level, the AI Office within the European Commission oversees the implementation of the regulation, particularly for general-purpose AI models and cross-border enforcement coordination. The European Artificial Intelligence Board, composed of representatives from each member state, provides advice and coordination. The enforcement architecture mirrors the GDPR model: national authorities handle most enforcement actions, the EU body coordinates and handles systemic issues, and the penalty structure is designed to be meaningful even for the largest global technology companies.
The financial exposure under the EU AI Act is comparable to GDPR penalties, which have already demonstrated that EU regulators will impose substantial fines. GDPR enforcement has produced penalties exceeding 1 billion EUR against individual organizations. The EU AI Act's penalty structure signals that the European Commission views AI governance with the same seriousness as data protection. For organizations operating in both U.S. federal and EU markets, the convergence of U.S. AI governance frameworks (AI RMF, AI 600-1) with EU regulatory obligations creates a dual compliance requirement that compounds existing framework management complexity. The structural approach to overlay composition addresses this directly. Rather than maintaining separate compliance programs for U.S. and EU AI requirements, the overlay model maps both sets of obligations onto a shared control baseline, identifying where they overlap and where each introduces unique requirements. One security posture assessed continuously, with evidence flowing to every applicable regulatory obligation.
EU AI Act compliance requires scanning across evidence categories that traditional security tools do not cover. Model documentation must be verified against Annex IV requirements: system description, design specifications, development methodology, training procedures, validation results, and performance metrics. Code analysis must assess whether AI system implementations include the required human oversight mechanisms (stop functions, override capabilities, output interpretation aids) and whether logging implementations capture the event categories that Article 12 mandates. Bias testing must evaluate training datasets and model outputs against the non-discrimination requirements embedded in the risk management obligations. Container and infrastructure scans must verify that AI system deployments maintain the cybersecurity posture required by Article 15, covering AI-specific attack vectors including adversarial input resistance, training data integrity protection, and model extraction prevention. Each of these assessment categories produces evidence artifacts that must trace to specific articles and recitals in the regulation.
The post-market monitoring obligations under Article 72 create evidence collection challenges that differ from pre-deployment assessment. Providers of high-risk AI systems must establish and document a monitoring system that actively and systematically collects, documents, and analyzes relevant data on the system's performance throughout its lifetime. This means capturing model performance metrics over time, detecting accuracy degradation and behavioral drift that could indicate the system no longer meets its assessed performance baselines. Access patterns to AI system components must be monitored for unauthorized modifications to model weights, training pipelines, or inference configurations that could compromise the assessed conformity. Point-in-time assessments do not satisfy these obligations; the regulation expects continuous compliance demonstrated through ongoing evidence collection. Organizations that lack automated post-market monitoring infrastructure struggle to detect changes that affect conformity status until the next scheduled review, by which time the system may have operated in a non-conforming state for weeks or months without detection or documentation.
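A minimal sketch of the kind of drift check post-market monitoring implies, assuming accuracy as the tracked metric; the tolerance, windowing, and metric choice are all assumptions that an actual monitoring program would need to justify against its assessed baselines.

```python
def check_drift(baseline_accuracy: float,
                recent_accuracies: list[float],
                tolerance: float = 0.05) -> bool:
    """Illustrative post-market check (Article 72): flag when observed
    performance falls below the conformity-assessed baseline by more than
    a tolerance. Threshold and windowing here are editorial assumptions."""
    if not recent_accuracies:
        return False
    observed = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - observed) > tolerance

# A system assessed at 0.92 accuracy that now averages 0.84 is flagged
# for conformity review rather than waiting for the next scheduled audit.
assert check_drift(0.92, [0.85, 0.83, 0.84]) is True
```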
The evidence chain for EU AI Act controls will follow the same immutable provenance model used across all overlays. Every evidence artifact carries a SHA-256 integrity hash, collection timestamp, source system identifier, and the specific regulatory article it satisfies. Rampart will map collected evidence to EU AI Act articles and their corresponding NIST 800-53 control modifications, creating dual traceability for organizations that maintain both U.S. and EU compliance obligations. Garrison will maintain the passive inventory of AI system components within the system boundary: model registries, training infrastructure, inference endpoints, and datasets. This inventory establishes the scope of what the EU AI Act overlay governs, ensuring that every AI component subject to the regulation is covered by the evidence collection program. Citadel will display EU AI Act readiness alongside your existing framework posture, surfacing gaps introduced by the overlay and prioritizing remediation actions based on enforcement timeline urgency and cross-framework benefit.
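In schematic form, such an artifact might look like the sketch below. The field names are hypothetical, but the provenance elements (hash, timestamp, source, regulatory reference) are those described above, extended with the 800-53 control for dual traceability.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_evidence_artifact(payload: bytes, source_system: str,
                           article: str, control: str) -> dict:
    """Illustrative provenance record: SHA-256 integrity hash, collection
    timestamp, source identifier, and dual traceability to an EU AI Act
    article and a NIST 800-53 control."""
    return {
        "sha256": hashlib.sha256(payload).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source_system": source_system,
        "eu_ai_act_article": article,    # e.g. "Art. 12"
        "nist_800_53_control": control,  # e.g. "AU-2"
    }

artifact = make_evidence_artifact(b"inference-log-batch", "inference-gateway",
                                  "Art. 12", "AU-2")
print(json.dumps(artifact, indent=2))
```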
The EU AI Act does not exist in regulatory isolation. Its requirements map structurally to the NIST AI RMF (AI 100-1) functions, the NIST AI 600-1 Generative AI Profile risk categories, and the NIST 800-53 control catalog that underpins CMMC, FedRAMP, and other derived frameworks. Article 9's risk management system obligation parallels the AI RMF's Govern and Map functions. Article 10's data governance requirements parallel the Map and Measure functions. Article 15's accuracy, robustness, and cybersecurity requirements parallel the Measure and Manage functions. The EU AI Act's emphasis on continuous post-market monitoring reflects the same principle that drives the AI RMF's iterative risk management approach: AI governance is not a one-time certification event but an ongoing operational discipline. Organizations that assess their AI posture against the AI RMF are performing work that translates directly to EU AI Act compliance.
The connection extends through the 800-53 control catalog to every framework derived from it. When the EU AI Act overlay MODIFYs AU-2 (Event Logging) to require logging of AI inference decisions, confidence scores, and input data characteristics, that modification strengthens your base AU-2 evidence, which satisfies CMMC AU.L2-3.3.1, FedRAMP AU-2, and SOC 2 CC7.1 through established cross-walks. When the overlay MODIFYs RA-3 (Risk Assessment) to require AI-specific risk identification including adversarial attack vectors and training data integrity threats, that modification deepens your risk assessment practice across all derived frameworks. The overlay model prevents the EU AI Act from becoming a separate compliance silo. Instead, it integrates European regulatory obligations into the same control structure your organization already manages, assessed through the same evidence collection infrastructure, and displayed through the same dashboard views.
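The propagation logic reduces to a cross-walk lookup. The sketch below hard-codes the single mapping named above; a real cross-walk table would cover the full catalog, and its storage format is an assumption.

```python
# Illustrative cross-walk: one MODIFY to a base control propagates to the
# derived-framework requirements named in the paragraph above.
CROSSWALK = {
    "AU-2": ["CMMC AU.L2-3.3.1", "FedRAMP AU-2", "SOC 2 CC7.1"],
}

def impacted_requirements(modified_control: str) -> list[str]:
    """Every derived requirement strengthened by one overlay MODIFY."""
    return CROSSWALK.get(modified_control, [])

print(impacted_requirements("AU-2"))
```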
Rampart will resolve the complete cross-regulatory impact of every EU AI Act requirement when the overlay is finalized. Each article's obligations will trace through the overlay's ADD and MODIFY operations to the underlying 800-53 controls, and from those controls to every derived framework that benefits. Artificer will generate narratives that reference both the EU AI Act article and the corresponding 800-53 control context, providing assessors and regulators with complete traceability from the European obligation through the control structure to the evidence that satisfies it. Organizations that begin AI governance work now through the AI RMF and AI 600-1 overlays are not performing throwaway preparation. They are building the control implementations, evidence chains, and operational practices that the EU AI Act overlay will reference. When the overlay activates, Rampart will show which EU AI Act requirements your existing AI governance posture already satisfies, and Citadel will surface only the incremental gaps that remain. The structural investment compounds. Start now.
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.