Skip to Content
VulnerabilitiesOverview

Vulnerability Database

Browse Inkog’s comprehensive database of AI agent security vulnerabilities.

Each vulnerability includes detection rules, affected frameworks, and remediation guidance.

Vulnerability Categories

Injection Attacks

Vulnerabilities where untrusted input manipulates AI behavior:

  • Prompt Injection: User input alters LLM instructions
  • Indirect Prompt Injection: Malicious content in retrieved data
  • Template Injection: Dynamic template construction with user data

Authorization & Access Control

Improper access controls in agent systems:

  • Tool Use Without Validation: Agents execute tools without input checks
  • Privilege Escalation: Agents perform actions beyond intended scope
  • Missing Authentication: Unprotected agent endpoints

Memory & State

Vulnerabilities in agent memory and state management:

  • Memory Poisoning: Malicious data persisted to long-term memory
  • State Manipulation: Session state altered by untrusted input
  • Context Window Attacks: Exceeding context limits to truncate instructions

Information Disclosure

Unintended exposure of sensitive information:

  • Chain-of-Thought Leakage: Internal reasoning exposed to users
  • System Prompt Extraction: Attackers extract hidden instructions
  • Sensitive Data in Prompts: Credentials or PII in prompt text

Code Execution

Risks from dynamic code execution:

  • Unrestricted Code Interpreters: Sandbox escapes in code tools
  • Unsafe Deserialization: Arbitrary object instantiation
  • Command Injection: Shell commands with user input

Severity Levels

LevelDescriptionResponse
CriticalImmediate exploitation possible, severe impactFix immediately, block deployment
HighLikely exploitable with significant impactFix before next release
MediumExploitable under specific conditionsFix in normal development cycle
LowLimited exploitability or impactTrack and address as time permits

OWASP LLM Top 10 Mapping

Inkog’s rules map to the OWASP Top 10 for LLM Applications :

OWASP LLMInkog Rules
LLM01: Prompt InjectionINKOG-001, INKOG-002, INKOG-006
LLM02: Insecure Output HandlingINKOG-008, INKOG-015
LLM03: Training Data PoisoningOut of scope (runtime analysis)
LLM04: Model Denial of ServiceINKOG-020, INKOG-021
LLM05: Supply ChainINKOG-030, INKOG-031
LLM06: Sensitive Info DisclosureINKOG-004, INKOG-007
LLM07: Insecure Plugin DesignINKOG-002, INKOG-010
LLM08: Excessive AgencyINKOG-011, INKOG-012
LLM09: OverrelianceAdvisory only
LLM10: Model TheftOut of scope (infrastructure)

Adding Custom Rules

Define organization-specific vulnerability patterns:

.inkog.yaml
custom_rules: - id: "CUSTOM-001" title: "Internal API key in prompt" description: "Internal API keys should never appear in LLM prompts" severity: critical pattern: type: "string_in_sink" sink: "llm_prompt" match: "INTERNAL_API_KEY_\\w+" - id: "CUSTOM-002" title: "Unapproved LLM provider" description: "Only approved LLM providers should be used" severity: high pattern: type: "function_call" disallowed: - "openai.*" - "anthropic.*" allowed: - "internal_llm.*"

API Access

Query the vulnerability database programmatically:

# List all rules curl https://api.inkog.io/v1/rules # Get specific rule curl https://api.inkog.io/v1/rules/INKOG-001 # Search rules curl "https://api.inkog.io/v1/rules?framework=langchain&severity=critical"

Contributing Rules

We welcome community contributions to the vulnerability database:

  1. Fork the rules repository 
  2. Add your rule in YAML format
  3. Include test cases (vulnerable and secure examples)
  4. Submit a pull request
rules/community/my-rule.yaml
id: "COMMUNITY-001" title: "Custom framework vulnerability" author: "your-github-username" description: | Detailed description of the vulnerability. severity: high frameworks: - custom-framework cwe: "CWE-94" references: - "https://example.com/advisory" patterns: - type: "taint_flow" source: "http_request" sink: "custom_framework.execute" tests: vulnerable: - | from custom_framework import execute execute(request.data) secure: - | from custom_framework import execute execute(validate(request.data))
Last updated on