
EU AI Act Compliance

Inkog helps organizations meet the technical requirements of the EU AI Act, particularly for high-risk AI systems.

This guide provides technical implementation guidance; consult legal experts for a complete compliance assessment.

Article 15: Accuracy, Robustness, and Cybersecurity

Article 15 of the EU AI Act requires high-risk AI systems to achieve appropriate levels of:

  1. Accuracy - Consistent, reliable outputs
  2. Robustness - Resilience against errors and attacks
  3. Cybersecurity - Protection against manipulation

How Inkog Helps

| Article 15 Requirement | Inkog Capability |
|---|---|
| 15.1 - Appropriate accuracy levels | Detects data quality issues affecting reliability |
| 15.3 - Resilience to manipulation | Identifies prompt injection vulnerabilities |
| 15.4 - Cyber resilience | Scans for security vulnerabilities in AI code |
| 15.5 - Robustness against third-party attacks | Detects RAG poisoning and memory attacks |

Mapping Inkog Rules to Article 15

15.1 - Accuracy

“High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity.”

Relevant Inkog Rules:

| Rule ID | Description | Compliance Mapping |
|---|---|---|
| INKOG-006 | RAG context manipulation | Prevents inaccurate retrieval affecting outputs |
| INKOG-007 | Chain-of-thought leakage | Ensures reasoning integrity |
| INKOG-020 | Context window overflow | Prevents truncation errors |
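To illustrate the accuracy concern behind context-window overflow: when retrieved context silently exceeds the model's limit, the model answers from a truncated document without signalling the loss. A minimal guard might look like the following sketch — the helper names, token counter, and limit here are hypothetical illustrations, not part of Inkog:

```python
# Minimal sketch of a context-window guard (hypothetical helpers, not an Inkog API).
MAX_CONTEXT_TOKENS = 8192  # assumed model limit


def count_tokens(text: str) -> int:
    # Crude whitespace approximation; a real system would use the
    # model's own tokenizer.
    return len(text.split())


def build_prompt(system: str, retrieved_chunks: list[str], question: str) -> str:
    parts = [system]
    used = count_tokens(system) + count_tokens(question)
    for chunk in retrieved_chunks:
        cost = count_tokens(chunk)
        if used + cost > MAX_CONTEXT_TOKENS:
            # Fail loudly instead of truncating silently.
            raise ValueError(
                f"Context overflow: {used + cost} tokens exceeds {MAX_CONTEXT_TOKENS}"
            )
        parts.append(chunk)
        used += cost
    parts.append(question)
    return "\n\n".join(parts)
```

The key design point is failing loudly: a rejected request is auditable, while silent truncation produces confidently wrong outputs.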

15.3 - Robustness Against Manipulation

“High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use or performance.”

Relevant Inkog Rules:

| Rule ID | Description | Attack Vector |
|---|---|---|
| INKOG-001 | Prompt injection via user input | Direct prompt manipulation |
| INKOG-002 | Tool use without validation | Tool-based attacks |
| INKOG-003 | Memory poisoning | Long-term manipulation |
| INKOG-006 | RAG context manipulation | Indirect injection |
Vulnerable — no protection against manipulation attempts:

```python
# Non-compliant: vulnerable to manipulation
def process_user_request(user_input):
    response = agent.run(user_input)
    return response
```

Secure — multi-layer defense meeting Article 15.3 requirements:

```python
# Article 15 compliant: resilient design
def process_user_request(user_input):
    # Input validation
    validated = validate_input(user_input)

    # Anomaly detection
    if detect_injection(validated):
        log_security_event(user_input)
        raise SecurityError("Potential manipulation detected")

    # Sandboxed execution
    response = agent.run(validated, sandbox=True)

    # Output validation
    return validate_output(response)
```

15.4 - Cybersecurity

“High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.”

Relevant Inkog Rules:

| Rule ID | Description | Cybersecurity Control |
|---|---|---|
| INKOG-004 | Sensitive data in prompts | Data protection |
| INKOG-005 | Unrestricted code execution | Execution controls |
| INKOG-010 | Insecure tool definitions | Access controls |
| INKOG-015 | Insufficient output filtering | Output sanitization |
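The output-sanitization control behind INKOG-015 can be approximated with a redaction pass over model responses before they leave the system. The patterns below are illustrative toy regexes, not Inkog's actual filter — a real deployment would use a vetted PII-detection library:

```python
import re

# Illustrative output filter: redact common PII patterns in model output.
# These regexes are deliberately simple and will miss edge cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def filter_output(text: str) -> str:
    """Replace each PII match with a labeled redaction marker."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

Labeled markers (rather than blank removal) keep the redaction auditable, which also supports the logging obligations elsewhere in the Act.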

Compliance Workflow

1. Initial Assessment

Terminal

```
$ inkog scan . --compliance eu-ai-act --format report

EU AI Act Compliance Assessment
================================

Article 15 - Accuracy, Robustness, Cybersecurity
------------------------------------------------
15.1 Accuracy:       3 findings
15.3 Robustness:     5 findings (2 critical)
15.4 Cybersecurity:  4 findings (1 critical)

Overall Compliance Score: 62/100

Detailed report saved to: compliance-report.html
```

2. Risk Classification

.inkog.yaml

```yaml
compliance:
  eu_ai_act:
    system_classification: high_risk
    domains:
      - ai_in_employment
      - credit_scoring
    article_15:
      accuracy_threshold: 0.95
      require_anomaly_detection: true
      mandatory_logging: true
```

3. Remediation

Inkog generates remediation tickets:

{ "finding_id": "F-2024-001", "rule_id": "INKOG-001", "article": "15.3", "severity": "critical", "file": "src/agent.py", "line": 42, "remediation": { "description": "Add input validation to prevent prompt injection", "code_suggestion": "validated_input = sanitize_for_llm(user_input)", "references": [ "https://docs.inkog.io/vulnerabilities/prompt-injection" ] }, "compliance_impact": "Required for Article 15.3 robustness" }

4. Evidence Generation

Generate compliance documentation:

Terminal

```
$ inkog compliance eu-ai-act --generate-evidence

Generating compliance evidence package...

Created:
  - technical-documentation.pdf
  - vulnerability-scan-results.json
  - remediation-log.csv
  - test-coverage-report.html
  - architecture-diagrams/

Evidence package: eu-ai-act-evidence-2024-01-15.zip
```

Technical Documentation Requirements

Article 11 requires technical documentation. Inkog generates:

Architecture Documentation

```markdown
## AI System Architecture

### Components
- LLM Provider: OpenAI GPT-4
- Framework: LangChain v0.1.0
- Vector Store: Pinecone
- Memory: Redis

### Data Flow
[Auto-generated diagram from Inkog scan]

### Security Controls
- Input validation: sanitize_for_llm()
- Output filtering: filter_pii()
- Access controls: role-based tool permissions
```

Risk Assessment

```markdown
## Cybersecurity Risk Assessment

### Identified Risks

| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| Prompt Injection | High | Critical | Input validation |
| Data Leakage | Medium | High | Output filtering |
| Tool Misuse | Low | Critical | Permission controls |

### Residual Risk: LOW
After implementing all mitigations.
```
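The likelihood/impact table above is an ordinary risk matrix, so the residual-risk verdict can be reproduced programmatically. The ordinal scales and thresholds below are assumptions for illustration, not Inkog's actual scoring:

```python
# Toy risk-matrix scoring (assumed ordinal scales, not an Inkog API).
LIKELIHOOD = {"Low": 1, "Medium": 2, "High": 3}
IMPACT = {"Medium": 1, "High": 2, "Critical": 3}


def risk_score(likelihood: str, impact: str, mitigated: bool) -> int:
    """Score one risk; a mitigated risk drops to the floor score of 1."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    return 1 if mitigated else score


def residual_risk(risks: list[tuple[str, str, bool]]) -> str:
    """Overall residual risk is driven by the worst remaining individual risk."""
    worst = max(risk_score(l, i, m) for l, i, m in risks)
    if worst <= 2:
        return "LOW"
    if worst <= 5:
        return "MEDIUM"
    return "HIGH"
```

Under this toy model, the "Residual Risk: LOW" verdict above corresponds to every identified risk having its mitigation in place.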

Continuous Compliance

CI/CD Integration

.github/workflows/compliance.yml

```yaml
name: EU AI Act Compliance Check

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 0 * * 1'  # Weekly

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Inkog Compliance Check
        uses: inkog-io/inkog-action@v1
        with:
          compliance: eu-ai-act
          fail-on-critical: true

      - name: Upload Evidence
        uses: actions/upload-artifact@v4
        with:
          name: compliance-evidence
          path: compliance-report/
```

Compliance Dashboard

Track compliance metrics over time:

```
┌────────────────────────────────────────────────────────────┐
│ EU AI Act Compliance Dashboard                             │
├────────────────────────────────────────────────────────────┤
│                                                            │
│ Overall Score: 94/100  ████████████████████░░  (+8)        │
│                                                            │
│ Article 15.1 (Accuracy):       98%  ███████████████████░   │
│ Article 15.3 (Robustness):     92%  ██████████████████░░   │
│ Article 15.4 (Cybersecurity):  91%  ██████████████████░░   │
│                                                            │
│ Open Findings: 3 (0 critical, 2 high, 1 medium)            │
│ Last Scan: 2024-01-15 14:32 UTC                            │
│                                                            │
└────────────────────────────────────────────────────────────┘
```

Additional Resources

Need help with compliance? Contact us at compliance@inkog.io for enterprise support.
