eu-ai-act-mapping
Type: External · Status: Published · Created: Mar 25, 2026 · Updated: Mar 30, 2026 by Dosu Bot

EU AI Act Compliance Mapping - Pipelock#

How Pipelock's runtime security controls map to the EU AI Act (Regulation 2024/1689) requirements for high-risk AI systems, with a NIST AI RMF 1.0 crosswalk.

Scope: Pipelock is an application-layer firewall for AI agent deployments. It covers network egress filtering, content inspection, audit logging, and human oversight. It doesn't cover model training, data governance, or full-lifecycle AI management. Coverage gaps are documented below.

Disclaimer: This document maps Pipelock's security features to EU AI Act requirements for informational purposes. It does not constitute legal advice or guarantee regulatory compliance. Organizations should consult qualified legal counsel for compliance obligations specific to their AI systems.

Last updated: March 2026


Coverage Summary#

Coverage levels: Full = a Pipelock feature directly implements the requirement with automated enforcement. Partial = a single feature contributes to the requirement but doesn't satisfy it alone. Moderate = multiple features together address parts of the requirement, with gaps remaining.

| Article | Topic | Coverage |
|---|---|---|
| Art. 9 | Risk Management System | Partial (runtime controls only) |
| Art. 12 | Record-Keeping | Full (logging requirements) |
| Art. 13 | Transparency | Moderate |
| Art. 14 | Human Oversight | Moderate (terminal-only HITL) |
| Art. 15 | Accuracy, Robustness, Cybersecurity | Moderate |
| Art. 26 | Deployer Obligations | Moderate |

EU AI Act Article Mapping#

Article 9 - Risk Management System#

Article 9 requires a continuous, iterative risk management process throughout the AI system lifecycle. This includes risk identification, mitigation through design, prior-defined testing metrics, and post-market monitoring.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| Identify and analyze known risks (Art. 9(2)(a)) | 11-layer scanner pipeline classifies network-level risks: scheme validation, CRLF injection, path traversal, domain blocklist, DLP (incl. env leak), path entropy, subdomain entropy, SSRF, rate limiting, URL length, data budget | Partial |
| Evaluate risks under foreseeable misuse (Art. 9(2)(b)) | Adversarial testing of bypass attempts (encoded secrets, DNS exfiltration, zero-width injection, split-key attacks) | Partial |
| Post-market monitoring data (Art. 9(2)(c)) | Prometheus metrics (/metrics), JSON stats (/stats), structured audit logs | Partial |
| Eliminate risks through design (Art. 9(5)(a)) | Capability separation reduces network-based credential exfiltration risk: agent holds secrets with no network; proxy has network with no agent secrets. Deployment enforces the boundary | Partial |
| Mitigation and control measures (Art. 9(5)(b)) | Multi-layer scanning, domain blocklist, rate limiting, DLP patterns, HITL approval | Full |
| Residual risk information to deployers (Art. 9(5)(c)) | Audit logs document every scan decision; /stats endpoint surfaces top threats | Partial |
| Prior defined metrics and thresholds (Art. 9(8)) | Configurable thresholds per scanner layer; Prometheus counters for block rates by category | Full |
| Continuous lifecycle process (Art. 9(2)) | Hot-reload config (fsnotify + SIGHUP) for live policy updates without restart | Partial |

Gap: Art. 9 covers the full AI system lifecycle. Pipelock provides runtime network-level risk management. Lifecycle-wide risk identification (health, safety, fundamental rights impacts) and systematic misuse analysis require organizational processes beyond runtime controls.


Article 12 - Record-Keeping#

Article 12 requires automatic event logging in high-risk AI systems for risk identification, post-market monitoring, and deployer oversight.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| Automatic recording of events (Art. 12(1)) | Structured JSON audit logging (zerolog) for every request: URL, domain, agent name, scan result, scanner reason, timestamp, duration | Full |
| Identify risk situations (Art. 12(2)(a)) | Categorized threat events: SSRF, DLP match, prompt injection, env leak, entropy anomaly, rate limit, redirect chain | Full |
| Support post-market monitoring (Art. 12(2)(b)) | Prometheus metrics with counters, histograms, and alerting integration; Grafana dashboard (configs/grafana-dashboard.json) | Full |
| Enable deployer monitoring (Art. 12(2)(c)) | Per-agent profiles with named identity, independent config, and per-agent budgets; agent name in every log entry | Full |

Gap: None for the logging requirements Pipelock addresses. Full Art. 12 includes biometric system requirements (Art. 12(3)) that don't apply.


Article 13 - Transparency#

Article 13 requires sufficient operational transparency for deployers to understand system behavior, including documentation of capabilities, limitations, logging mechanisms, and human oversight measures.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| System characteristics and capabilities documented | README, OWASP mapping docs, Claude Code integration guide, comparison doc | Full |
| Limitations documented | Each OWASP mapping doc includes explicit coverage gaps and "out of scope" sections | Full |
| Logging mechanism description | Audit log format, event types, and fields documented in CLAUDE.md and guides | Full |
| Human oversight measures described (Art. 14 ref) | HITL documentation in guides and config presets | Partial |
| Computational/hardware requirements | Single static binary (~18MB), documented in README | Full |

Gap: Full Art. 13 compliance requires system-level documentation that depends on the deployer's AI system, not just the security layer.


Article 14 - Human Oversight#

Article 14 requires AI systems to be designed for effective human oversight, including the ability to understand system operation, detect anomalies, override or reverse outputs, and intervene or interrupt via a "stop" mechanism.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| Understand system operation (Art. 14(4)(a)) | Audit logs, Prometheus metrics, /stats endpoint, Grafana dashboard | Full |
| Detect anomalies and dysfunctions (Art. 14(4)(a)) | Real-time threat detection via pattern matching and entropy threshold analysis | Partial |
| Override or reverse output (Art. 14(4)(d)) | HITL action: ask lets the operator approve, deny, or strip on each flagged request | Full |
| Intervene or interrupt via "stop button" (Art. 14(4)(e)) | Fail-closed design: HITL timeout defaults to block; context cancellation stops operation | Full |
| Awareness of automation bias (Art. 14(4)(b)) | Configurable modes (audit/balanced/strict) force explicit enforcement decisions | Partial |
| Commensurate with risk level (Art. 14(3)) | Three preset modes map to different risk tolerances; per-scanner thresholds configurable | Full |
| Built into system by provider (Art. 14(3)) | HITL module compiled into binary; fail-closed defaults are structural, not configurable | Full |

Gap: HITL approval is terminal-only; there is no UI for non-terminal environments.


Article 15 - Accuracy, Robustness, and Cybersecurity#

Article 15 requires resilience against unauthorized alteration, with specific protections against data poisoning, model poisoning, adversarial examples (model evasion), confidentiality attacks, and model flaws (Art. 15(5)). It also requires technical redundancy and fail-safe plans (Art. 15(4)).

Note: Art. 15(5) uses "adversarial examples" and "model evasion," not "prompt injection." Pipelock's injection detection addresses a subset of the adversarial examples category through pattern-based content scanning, but doesn't cover model-level evasion attacks.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| Adversarial examples / model evasion (Art. 15(5)) | Content scanning on responses and MCP tool results; zero-width char stripping; NFKC normalization; case-insensitive matching; null byte stripping. Covers text-based injection patterns, not model-level evasion | Partial |
| Confidentiality attacks (Art. 15(5)) | DLP scanning (46 built-in credential patterns, extensible via config), env leak detection (raw + base64 + hex), Shannon entropy analysis, DNS subdomain exfiltration detection, split-key concatenation scanning | Full |
| Data poisoning (Art. 15(5)) | File integrity monitoring (SHA256 manifests), Ed25519 signing and verification, response scanning on fetched content | Partial |
| Resilient against unauthorized alteration (Art. 15(5)) | Capability separation prevents the agent from being manipulated into exfiltrating data; SSRF protection blocks access to internal infrastructure | Full |
| Technical redundancy / fail-safe (Art. 15(4)) | Fail-closed architecture: scan error, HITL timeout, parse failure, DNS error, and context cancellation all default to block | Full |
| Resilient to errors and faults (Art. 15(4)) | DNS rebinding protection (resolve-validate-dial); IPv4-mapped IPv6 normalization; CRLF normalization in diff parsing | Full |
| Accuracy metrics declared (Art. 15(1)-(3)) | Prometheus counters per scanner layer; false-positive tuning via audit mode | Partial |

Gap: Data poisoning in Art. 15(5) refers to training data manipulation. Pipelock's integrity monitoring protects workspace files, not training datasets. Model poisoning and model flaws are out of scope.


Article 26 - Deployer Obligations#

Article 26 requires deployers to monitor AI system operation, keep automatically generated logs for at least 6 months, and use the system per instructions.

| Requirement | Pipelock Feature | Coverage |
|---|---|---|
| Monitor operation per instructions (Art. 26(1)) | Prometheus metrics, /health endpoint for K8s liveness probes, structured audit logs | Full |
| Keep logs for 6+ months (Art. 26(6)) | Persistent audit logs with configurable output (file, stdout, or both) | Partial |
| Use system per instructions of use (Art. 26(1)) | Config presets provide instructions for different deployment contexts | Full |

Gap: Pipelock writes persistent audit logs but doesn't enforce retention periods. Whether logs are kept for 6 months depends on the deployer's log infrastructure (rotation, storage, forwarding).


What Pipelock Does Not Cover#

These require other tools or organizational processes:

| EU AI Act Requirement | Article | Why Not Covered |
|---|---|---|
| Training data governance | Art. 10 | Pipelock operates at runtime, not training time |
| Conformity assessment | Art. 43 | Organizational process, not a tool feature |
| CE marking | Art. 48 | Regulatory formality |
| Technical documentation (full system) | Art. 11 | Pipelock documents itself; full-system docs are the deployer's responsibility |
| Fundamental rights impact assessment | Art. 27 | Requires organizational assessment beyond runtime controls |
| EU database registration | Art. 71 | Administrative requirement |
| Incident reporting timelines | Art. 73 | Audit logs provide incident data; the reporting process is organizational |
| Bias and fairness evaluation | Art. 10(2) | Pipelock applies rules uniformly but doesn't evaluate model fairness |
| Code execution sandboxing | Art. 15(4) | Pipelock controls egress, not process isolation. See srt or agentsh |

NIST AI RMF 1.0 Crosswalk#

How Pipelock maps to NIST AI Risk Management Framework functions, with EU AI Act cross-references.

GOVERN - Policies, Processes, and Accountability#

| NIST Subcategory | Description | Pipelock Feature | EU AI Act |
|---|---|---|---|
| GOVERN 1.2 | Trustworthy AI characteristics integrated into organizational policies | Capability separation architecture; fail-closed design philosophy | Art. 9 |
| GOVERN 1.4 | Ongoing monitoring plans documented | Prometheus metrics, audit logging, Grafana dashboard | Art. 12 |
| GOVERN 2.1 | Roles and responsibilities for AI risk management | Per-agent profiles with listener binding (spoof-proof) or header-based identification; HITL assigns human approval responsibility | Art. 14 |
| GOVERN 4.2 | Organizational teams document AI risks and impacts | Structured audit logs, config files, OWASP mapping docs | Art. 11, 13 |
| GOVERN 6.1 | Third-party AI risks addressed in policy | MCP bidirectional scanning treats all MCP servers as untrusted; domain blocklists control external access | Art. 9, 15 |
| GOVERN 6.2 | Contingency processes for third-party risk | Fail-closed: scanning failure blocks traffic; HITL timeout blocks; MCP parse errors block | Art. 15 |

MAP - Context, Risk Identification#

| NIST Subcategory | Description | Pipelock Feature | EU AI Act |
|---|---|---|---|
| MAP 1.1 | Intended purposes and contexts documented | Capability separation documented; deployment guides per agent type | Art. 9, 13 |
| MAP 1.5 | Organizational risk tolerance defined | Config presets: audit (log only), balanced (default), strict (aggressive blocking) | Art. 9 |
| MAP 2.1 | System and potential harms classified | Scanner pipeline classifies: SSRF, DLP, injection, env leak, entropy, rate abuse | Art. 9 |
| MAP 4.1 | Risks prioritized by impact and likelihood | Pipeline ordering reflects priority: blocklist/DLP (critical) before DNS, SSRF before rate limit | Art. 9 |

MEASURE - Metrics, Monitoring, Assessment#

| NIST Subcategory | Description | Pipelock Feature | EU AI Act |
|---|---|---|---|
| MEASURE 1.1 | Metrics selected and documented | Prometheus: pipelock_requests_total, pipelock_scanner_hits_total, pipelock_request_duration_seconds | Art. 12 |
| MEASURE 2.5 | System demonstrated valid and reliable | CI: 6 required checks (test, lint, build, govulncheck, CodeQL, pipelock self-scan); full test suite with race detector (see README) | Art. 15 |
| MEASURE 2.6 | Evaluated for misuse and abuse | Scanning layers target misuse: DLP catches exfiltration, SSRF catches internal probing, injection detection catches hijacking | Art. 9, 15 |
| MEASURE 2.7 | Security and resilience evaluated | Security audit completed (26 of 32 items fixed); DNS rebinding protection; fail-closed architecture | Art. 15 |
| MEASURE 3.1 | Risks tracked on an ongoing basis | Prometheus real-time tracking; zerolog persistent timeline; both queryable and alertable | Art. 12 |
| MEASURE 3.3 | Feedback mechanisms for improvement | HITL ask action: human decisions logged for policy refinement; audit mode measures before enforcing | Art. 14 |

MANAGE - Risk Mitigation Controls#

| NIST Subcategory | Description | Pipelock Feature | EU AI Act |
|---|---|---|---|
| MANAGE 1.1 | Risks mitigated, transferred, or accepted | Each scanning layer configurable: enabled (mitigate), audit-only (accept with monitoring), disabled (accept) | Art. 9 |
| MANAGE 1.3 | Risk responses documented | Every block/allow decision logged with timestamp, category, action, URL, reason | Art. 12 |
| MANAGE 2.2 | Mechanisms to disengage or deactivate | HITL override; config hot-reload to tighten controls; fail-closed timeout = safe default | Art. 14 |
| MANAGE 2.3 | Procedures for appeal and human review | HITL terminal approval: agent paused, human reviews with context, decides approve/deny/strip | Art. 14 |
| MANAGE 3.1 | Third-party AI risks managed | MCP bidirectional scanning: server responses scanned for injection, client requests scanned for DLP/injection | Art. 9, 15 |
| MANAGE 4.1 | Post-deployment monitoring with incident response | Audit logs, HITL override, hot-reload for change management, Prometheus alerts, /health for K8s liveness | Art. 12 |

Control-Level Mapping#

Mapping from individual Pipelock controls to both frameworks.

| Control | Description | EU AI Act | NIST AI RMF |
|---|---|---|---|
| Capability separation | Agent has secrets, no network; proxy has network, no agent secrets. Deployment enforces the boundary | Art. 15(5) | GOVERN 1.2, MAP 1.1 |
| SSRF protection | Private IP blocking, DNS rebinding prevention, metadata endpoint blocking | Art. 15(4)-(5) | MAP 2.1, MEASURE 2.7 |
| Domain blocklist | Configurable deny/allow lists with wildcard support | Art. 9(5) | GOVERN 1.2, MANAGE 1.1 |
| Rate limiting | Per-domain sliding window, base domain normalization | Art. 15(4) | MANAGE 1.1 |
| DLP scanning | 46 built-in credential patterns, custom regex, severity classification | Art. 15(5) | GOVERN 1.2, MEASURE 2.6 |
| Env leak detection | Raw + base64 + hex, Shannon entropy > 3.0 | Art. 15(5) | MEASURE 2.6 |
| Entropy analysis | Shannon entropy on URL path segments and query parameters | Art. 15(5) | MAP 2.1 |
| Content scanning | Response scanning with zero-width stripping, NFKC, case-insensitive matching | Art. 15(5) | MAP 2.1, MEASURE 2.6 |
| MCP bidirectional scanning | Request DLP/injection + response injection scanning | Art. 9(5), 15(5) | GOVERN 6.1, MANAGE 3.1 |
| HITL terminal approval | Ask action, fail-closed timeout, approve/deny/strip | Art. 14(4)(d)-(e) | GOVERN 2.1, MANAGE 2.2, 2.3 |
| Structured audit logging | Zerolog JSON, event classification, agent attribution, log sanitization | Art. 12, 13, 26(6) | GOVERN 4.2, MEASURE 3.1, MANAGE 4.1 |
| Prometheus metrics | Custom registry, /metrics, /stats, Grafana dashboard | Art. 12, 26(1) | MEASURE 1.1, 3.1 |
| File integrity monitoring | SHA256 manifests, check/diff, glob exclusions | Art. 15(5) | MEASURE 2.7 |
| Ed25519 signing | Key management, file signing, verification, trust store | Art. 15(5) | GOVERN 1.2 |
| Config validation | Pre-load validation, mode enforcement | Art. 9(5) | GOVERN 6.2 |
| Hot-reload | fsnotify + SIGHUP, atomic config swap | Art. 9(2) | MANAGE 4.1 |
| Fail-closed defaults | Timeout/error/parse/DNS failure all block | Art. 15(4) | GOVERN 6.2 |
| Git diff scanning | Pre-push secret detection in unified diffs | Art. 15(5) | MAP 2.1 |

High-Risk Classification Context#

Are AI coding agents high-risk under the EU AI Act?#

AI coding agents aren't explicitly listed in Annex III. The eight high-risk categories cover biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice administration.

Classification is context-dependent:

  • An AI coding agent writing software for medical devices or critical infrastructure (Annex III, Category 2) could be classified as a safety component of a high-risk system.
  • An agent used to evaluate developer performance or allocate tasks (Annex III, Category 4) could fall under employment-related high-risk classification.
  • The underlying LLM likely qualifies as a general-purpose AI model under Articles 51-55, with additional obligations if it has systemic risk (>10^25 FLOPs training compute).
  • Article 7 allows the Commission to expand Annex III categories via delegated acts. Agentic AI with autonomous action capabilities is actively discussed.

AI coding agents aren't formally high-risk in most cases. But organizations in regulated sectors may still choose to comply with Articles 9, 12-15 as a risk management best practice and to demonstrate due diligence.


Enforcement Timeline#

| Date | Milestone |
|---|---|
| August 1, 2024 | EU AI Act enters into force |
| February 2, 2025 | Prohibited AI practices (Art. 5) and AI literacy (Art. 4) take effect |
| August 2, 2025 | GPAI model obligations (Art. 51-55) take effect |
| February 2, 2026 | Commission publishes high-risk classification guidelines (Art. 6) |
| August 2, 2026 | High-risk AI system requirements take effect (Art. 9, 12-15, 26) |
| August 2, 2027 | Extended transition for safety-component AI under Annex I harmonization legislation |

Penalties: up to EUR 35M or 7% global turnover (prohibited practices), EUR 15M or 3% (high-risk system violations), EUR 7.5M or 1% (misleading information to authorities). SME/startup fines capped at the lower of percentage or absolute amount.


This document complements Pipelock's existing security framework mappings:

OWASP → EU AI Act Compliance Chain#

OWASP has an official liaison partnership with CEN/CENELEC and ISO. The OWASP AI Exchange contributed 70 pages to ISO/IEC 27090 (the global AI security standard) and 40 pages to prEN 18282, the European cybersecurity standard for AI systems being developed under the EU AI Act. When prEN 18282 is published as a harmonized standard, compliance with it will provide a "presumption of conformity" with the relevant AI Act provisions.

Pipelock's existing OWASP mapping documents demonstrate alignment with frameworks that are being written into the EU's harmonized standards. Pipelock is one component of a defense-in-depth approach, not a complete compliance solution.

NIST AI 600-1 - Generative AI Risk Profile#

Six of twelve GAI-specific risks in NIST AI 600-1 map to Pipelock controls:

| GAI Risk | Pipelock Feature |
|---|---|
| Data Privacy | DLP scanning, env leak detection, entropy analysis |
| Information Security | SSRF protection, rate limiting, capability separation |
| Information Integrity | Content scanning, MCP response scanning |
| Human-AI Configuration | HITL approval, fail-closed defaults |
| Confabulation (tangential) | Response scanning catches manipulated content; doesn't detect hallucinations |
| CBRN Information (partial) | Domain blocklist restricts access to dangerous content sources |

NIST CAISI - AI Agent Security#

In January 2026, NIST's Center for AI Standards and Innovation published a Request for Information on security considerations for AI agents. The RFI topics (agent hijacking, backdoor attacks, exploits of autonomous agents) align directly with Pipelock's scanner pipeline and OWASP threat mappings. Comment deadline: March 9, 2026.

