# EU AI Act Compliance
Inkog helps organizations meet the technical requirements of the EU AI Act, particularly for high-risk AI systems.
> **Note:** This guide provides technical implementation guidance. Consult legal experts for a complete compliance assessment.
## Article 15: Accuracy, Robustness, and Cybersecurity
Article 15 of the EU AI Act requires high-risk AI systems to achieve appropriate levels of:
- **Accuracy**: consistent, reliable outputs
- **Robustness**: resilience against errors and attacks
- **Cybersecurity**: protection against manipulation
### How Inkog Helps
| Article 15 Requirement | Inkog Capability |
|---|---|
| 15.1 - Appropriate accuracy levels | Detects data quality issues affecting reliability |
| 15.3 - Resilience to manipulation | Identifies prompt injection vulnerabilities |
| 15.4 - Cyber resilience | Scans for security vulnerabilities in AI code |
| 15.5 - Robustness against third-party attacks | Detects RAG poisoning, memory attacks |
### Mapping Inkog Rules to Article 15
#### 15.1 - Accuracy
> “High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity.”
**Relevant Inkog Rules:**
| Rule ID | Description | Compliance Mapping |
|---|---|---|
| INKOG-006 | RAG context manipulation | Prevents inaccurate retrieval affecting outputs |
| INKOG-007 | Chain-of-thought leakage | Ensures reasoning integrity |
| INKOG-020 | Context window overflow | Prevents truncation errors |
#### 15.3 - Robustness Against Manipulation
> “High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance.”
**Relevant Inkog Rules:**
| Rule ID | Description | Attack Vector |
|---|---|---|
| INKOG-001 | Prompt injection via user input | Direct prompt manipulation |
| INKOG-002 | Tool use without validation | Tool-based attacks |
| INKOG-003 | Memory poisoning | Long-term manipulation |
| INKOG-006 | RAG context manipulation | Indirect injection |
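The `detect_injection` guard used in the compliant example below has to be supplied by the application; a minimal pattern-based sketch is shown here (the pattern list and function are illustrative assumptions, and lists like this are easy to bypass, so real deployments typically layer a classifier on top):

```python
import re

# Sketch: naive pattern-based detector for direct prompt-injection attempts
# (INKOG-001). This is a first filter, not a complete defence.
_INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all |any )?(previous|prior|above) instructions",
        r"disregard (the )?system prompt",
        r"you are now\b",
        r"reveal (your|the) (system )?prompt",
    )
]


def detect_injection(text: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    return any(p.search(text) for p in _INJECTION_PATTERNS)
```

With a guard like this available, the contrast between an unguarded and a hardened request path looks like: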
```python
# Non-compliant: Vulnerable to manipulation
def process_user_request(user_input):
    response = agent.run(user_input)
    return response
```

```python
# Article 15 compliant: Resilient design
def process_user_request(user_input):
    # Input validation
    validated = validate_input(user_input)

    # Anomaly detection
    if detect_injection(validated):
        log_security_event(user_input)
        raise SecurityError("Potential manipulation detected")

    # Sandboxed execution
    response = agent.run(validated, sandbox=True)

    # Output validation
    return validate_output(response)
```

#### 15.4 - Cybersecurity
> “High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, outputs or performance by exploiting the system vulnerabilities.”
**Relevant Inkog Rules:**
| Rule ID | Description | Cybersecurity Control |
|---|---|---|
| INKOG-004 | Sensitive data in prompts | Data protection |
| INKOG-005 | Unrestricted code execution | Execution controls |
| INKOG-010 | Insecure tool definitions | Access controls |
| INKOG-015 | Insufficient output filtering | Output sanitization |
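Output sanitization (INKOG-015) and data protection (INKOG-004) often meet in a PII filter applied to model output before it reaches users or downstream tools. A minimal regex-based sketch follows; the pattern set and the `filter_pii` name are illustrative assumptions, and regex redaction should be paired with a dedicated PII-detection service in production:

```python
import re

# Sketch: redact common PII patterns from model output before it leaves
# the system (INKOG-015). These patterns are illustrative and incomplete.
_PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def filter_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in _PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

Typed placeholders (rather than plain deletion) keep redactions auditable, which helps when the same output logs double as compliance evidence.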
## Compliance Workflow

### 1. Initial Assessment

### 2. Risk Classification
```yaml
compliance:
  eu_ai_act:
    system_classification: high_risk
    domains:
      - ai_in_employment
      - credit_scoring
    article_15:
      accuracy_threshold: 0.95
      require_anomaly_detection: true
      mandatory_logging: true
```

### 3. Remediation
Inkog generates remediation tickets:
```json
{
  "finding_id": "F-2024-001",
  "rule_id": "INKOG-001",
  "article": "15.3",
  "severity": "critical",
  "file": "src/agent.py",
  "line": 42,
  "remediation": {
    "description": "Add input validation to prevent prompt injection",
    "code_suggestion": "validated_input = sanitize_for_llm(user_input)",
    "references": [
      "https://docs.inkog.io/vulnerabilities/prompt-injection"
    ]
  },
  "compliance_impact": "Required for Article 15.3 robustness"
}
```

### 4. Evidence Generation
Generate compliance documentation:
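For example, a batch of findings in the ticket format from step 3 can be rolled up into a per-article evidence summary. The script below is an illustrative sketch, not an Inkog tool; only the field names (`article`, `severity`) are taken from the JSON shown above:

```python
from collections import Counter

# Sketch: summarise findings into a per-article evidence table.
# Field names follow the remediation-ticket JSON shown in step 3.
def summarize_findings(findings: list[dict]) -> str:
    by_article = Counter(f["article"] for f in findings)
    by_severity = Counter(f["severity"] for f in findings)

    lines = ["# Article 15 Evidence Summary", ""]
    lines.append("| Article | Open findings |")
    lines.append("|---|---|")
    for article, count in sorted(by_article.items()):
        lines.append(f"| {article} | {count} |")
    lines.append("")
    lines.append(
        "Severity breakdown: "
        + ", ".join(f"{sev}: {n}" for sev, n in sorted(by_severity.items()))
    )
    return "\n".join(lines)
```

Feeding the summary into the evidence artifact produced by the CI job (see Continuous Compliance below) keeps the audit trail in one place.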
## Technical Documentation Requirements
Article 11 requires technical documentation. Inkog generates:
### Architecture Documentation
```markdown
## AI System Architecture

### Components
- LLM Provider: OpenAI GPT-4
- Framework: LangChain v0.1.0
- Vector Store: Pinecone
- Memory: Redis

### Data Flow
[Auto-generated diagram from Inkog scan]

### Security Controls
- Input validation: sanitize_for_llm()
- Output filtering: filter_pii()
- Access controls: role-based tool permissions
```

### Risk Assessment
```markdown
## Cybersecurity Risk Assessment

### Identified Risks
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| Prompt Injection | High | Critical | Input validation |
| Data Leakage | Medium | High | Output filtering |
| Tool Misuse | Low | Critical | Permission controls |

### Residual Risk: LOW
After implementing all mitigations.
```

## Continuous Compliance
### CI/CD Integration
```yaml
name: EU AI Act Compliance Check

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 0 * * 1'  # Weekly

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Inkog Compliance Check
        uses: inkog-io/inkog-action@v1
        with:
          compliance: eu-ai-act
          fail-on-critical: true

      - name: Upload Evidence
        uses: actions/upload-artifact@v4
        with:
          name: compliance-evidence
          path: compliance-report/
```

### Compliance Dashboard
Track compliance metrics over time:
```text
┌────────────────────────────────────────────────────────────┐
│  EU AI Act Compliance Dashboard                            │
│                                                            │
│  Overall Score: 94/100  ████████████████████░░  (+8)       │
│                                                            │
│  Article 15.1 (Accuracy):      98%  ███████████████████░   │
│  Article 15.3 (Robustness):    92%  ██████████████████░░   │
│  Article 15.4 (Cybersecurity): 91%  ██████████████████░░   │
│                                                            │
│  Open Findings: 3 (0 critical, 2 high, 1 medium)           │
│  Last Scan: 2024-01-15 14:32 UTC                           │
└────────────────────────────────────────────────────────────┘
```

## Additional Resources
Need help with compliance? Contact us at compliance@inkog.io for enterprise support.