Core Concepts
Understanding how Inkog analyzes AI agent code and reports vulnerabilities.
How Inkog Works
Inkog is a security scanner built specifically for AI agents. It analyzes your code to find vulnerabilities before they reach production.
Your Code → Inkog Analysis → Security Report
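For example, an agent tool like the one below (a hypothetical helper; the name, key, and logic are illustrative, and the credential is fake) contains two issues Inkog is designed to surface before deployment:

```python
# Hypothetical agent tool with two flaws Inkog is designed to surface.
# Names, key, and logic are illustrative, not from a real project.

OPENAI_API_KEY = "sk-fake-key-for-illustration"  # Data Exposure: hardcoded credential

def calculator_tool(user_input: str) -> str:
    """Evaluate a math expression supplied by the end user."""
    # Code Injection: untrusted input flows into eval() unchanged
    return str(eval(user_input))
```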
Key Features
Cross-Framework Analysis
Inkog works across all major AI agent frameworks:
- Code-first: LangChain, LangGraph, CrewAI, AutoGen
- No-code: n8n, Flowise, Dify
- RAG: LlamaIndex, Haystack
The same vulnerabilities are detected regardless of which framework you use.
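As a sketch, here is the same tainted SQL flaw written two ways: a code-first tool function and an exported no-code workflow step. Plain Python stands in for framework code, and the node shape is illustrative, not a real n8n/Flowise export format.

```python
import sqlite3

# Style 1: a code-first tool function, as you might register with
# LangChain or CrewAI (decorators omitted to keep this self-contained).
def lookup_order(order_id: str) -> list:
    conn = sqlite3.connect("orders.db")
    # Flaw: untrusted order_id is concatenated straight into the query
    return conn.execute(
        "SELECT * FROM orders WHERE id = '" + order_id + "'"
    ).fetchall()

# Style 2: the equivalent step as it might appear in a no-code
# workflow export (shape is illustrative, not a real node schema).
database_node = {
    "type": "database",
    "query": "SELECT * FROM orders WHERE id = '{{ user_input }}'",  # same flaw
}
```

In both cases the untrusted value reaches the query unsanitized, which is what the analysis tracks.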
Learn more about Cross-Framework Analysis →
Hybrid Privacy Model
Your code security is paramount. Inkog’s privacy model ensures:
- Secrets detected locally - Credentials found and redacted on your machine
- Redacted code sent for analysis - Only sanitized code leaves your machine
- Results merged - Local and server findings combined
Your actual credentials never leave your machine.
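A minimal sketch of the local redaction step, assuming simple regex-based detection (the patterns below are illustrative, not Inkog's actual rule set):

```python
import re

# Illustrative secret patterns; the real detectors are more extensive.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def redact(source: str) -> str:
    """Replace detected credentials locally, before any upload."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("<REDACTED>", source)
    return source

code = 'client = OpenAI(api_key="sk-abcdefghijklmnopqrstuv")'
print(redact(code))  # client = OpenAI(api_key="<REDACTED>")
```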
Learn more about Hybrid Privacy →
Security Scoring
Findings are scored by severity (Critical, High, Medium, Low) and aggregated into a security grade (A-F).
The grade determines whether your CI/CD pipeline passes or fails.
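For intuition, a toy version of severity-weighted grading might look like this. The weights and cutoffs are invented for illustration; Inkog's actual scoring model may differ.

```python
# Toy severity weights and grade cutoffs -- illustrative values only.
WEIGHTS = {"critical": 40, "high": 15, "medium": 5, "low": 1}

def grade(severities: list[str]) -> str:
    """Aggregate finding severities into an A-F grade."""
    score = sum(WEIGHTS[s] for s in severities)
    for threshold, letter in [(0, "A"), (10, "B"), (25, "C"), (50, "D")]:
        if score <= threshold:
            return letter
    return "F"

result = grade(["high", "low", "low"])  # score 17 -> "C"

# CI gate: fail the pipeline when the grade drops below your minimum.
if result in ("D", "F"):
    raise SystemExit("security grade below threshold")
```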
Learn more about Security Scoring →
What Inkog Detects
Inkog detects vulnerabilities across seven categories:
| Category | Examples |
|---|---|
| Resource Exhaustion | Infinite loops, token bombing, context overflow |
| Code Injection | Tainted eval/exec, unsafe deserialization |
| Prompt Injection | User input in prompts, SQL injection via LLM |
| Data Exposure | Hardcoded credentials, logging PII |
| Output Handling | Unvalidated LLM output |
| Access Control | Missing auth, path traversal |
| Deserialization | Pickle, YAML code execution |
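Two of these categories, sketched as deliberately unsafe code (hypothetical functions, not taken from a real agent):

```python
import pickle
import subprocess

def load_agent_memory(blob: bytes):
    # Deserialization: pickle.loads executes code embedded in the blob
    return pickle.loads(blob)

def apply_llm_suggestion(llm_output: str):
    # Output Handling: unvalidated LLM output reaches a shell
    subprocess.run(llm_output, shell=True)
```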
Compliance Mapping
Every finding maps to industry standards:
- OWASP LLM Top 10 - AI-specific vulnerability taxonomy
- EU AI Act - European AI regulation
- NIST AI RMF - AI risk management framework
- CWE - Common Weakness Enumeration
This enables automated compliance reporting for audits.
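As an illustration, a single finding might carry mappings along these lines. The field names, shape, and values are hypothetical, not Inkog's actual report schema:

```python
# Hypothetical shape of one finding's compliance block; field names
# and values are illustrative, not the real report schema.
finding = {
    "rule": "tainted-eval",
    "severity": "critical",
    "compliance": {
        "owasp_llm_top_10": "LLM02: Insecure Output Handling",
        "eu_ai_act": "Article 15 (accuracy, robustness and cybersecurity)",
        "nist_ai_rmf": "MEASURE",
        "cwe": "CWE-95 (eval injection)",
    },
}
```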
Community & Contributing
Inkog’s detection engine uses open YAML rules maintained by the community. Security researchers can contribute new detection patterns for emerging AI agent vulnerabilities. Want to help? See our Contributing Guide or visit the Open Source page to learn more.