Vulnerability Database
Browse Inkog’s comprehensive database of AI agent security vulnerabilities.
Each vulnerability includes detection rules, affected frameworks, and remediation guidance.
Prompt Injection
Unsanitized user input embedded in prompts allowing attackers to override system instructions.
Tainted Eval/Exec
LLM-generated or user-controlled code executed via eval(), exec(), or equivalent functions.
Hardcoded Credentials
API keys, passwords, or tokens hardcoded in source files instead of using environment variables.
Infinite Loop
Loop condition depends on LLM output without deterministic termination guarantee.
SQL Injection via LLM
LLM-generated SQL queries concatenated without parameterization allowing database attacks.
Output Validation Failures
LLM output used in dangerous sinks (eval, HTML, SQL, commands) without validation.
Cross-Tenant Data Leakage
Data from one tenant accessible to another due to improper isolation in multi-tenant systems.
Unsafe Deserialization
Untrusted data deserialized via pickle or YAML allowing arbitrary code execution.
Vulnerability Categories
Injection Attacks
Vulnerabilities where untrusted input manipulates AI behavior:
- Prompt Injection: User input alters LLM instructions
- Indirect Prompt Injection: Malicious content in retrieved data
- Template Injection: Dynamic template construction with user data
Authorization & Access Control
Improper access controls in agent systems:
- Tool Use Without Validation: Agents execute tools without input checks
- Privilege Escalation: Agents perform actions beyond intended scope
- Missing Authentication: Unprotected agent endpoints
Memory & State
Vulnerabilities in agent memory and state management:
- Memory Poisoning: Malicious data persisted to long-term memory
- State Manipulation: Session state altered by untrusted input
- Context Window Attacks: Exceeding context limits to truncate instructions
Information Disclosure
Unintended exposure of sensitive information:
- Chain-of-Thought Leakage: Internal reasoning exposed to users
- System Prompt Extraction: Attackers extract hidden instructions
- Sensitive Data in Prompts: Credentials or PII in prompt text
Code Execution
Risks from dynamic code execution:
- Unrestricted Code Interpreters: Sandbox escapes in code tools
- Unsafe Deserialization: Arbitrary object instantiation
- Command Injection: Shell commands with user input
Severity Levels
| Level | Description | Response |
|---|---|---|
| Critical | Immediate exploitation possible, severe impact | Fix immediately, block deployment |
| High | Likely exploitable with significant impact | Fix before next release |
| Medium | Exploitable under specific conditions | Fix in normal development cycle |
| Low | Limited exploitability or impact | Track and address as time permits |
OWASP LLM Top 10 Mapping
Inkog’s rules map to the OWASP Top 10 for LLM Applications :
| OWASP LLM | Inkog Rules |
|---|---|
| LLM01: Prompt Injection | INKOG-001, INKOG-002, INKOG-006 |
| LLM02: Insecure Output Handling | INKOG-008, INKOG-015 |
| LLM03: Training Data Poisoning | Out of scope (runtime analysis) |
| LLM04: Model Denial of Service | INKOG-020, INKOG-021 |
| LLM05: Supply Chain | INKOG-030, INKOG-031 |
| LLM06: Sensitive Info Disclosure | INKOG-004, INKOG-007 |
| LLM07: Insecure Plugin Design | INKOG-002, INKOG-010 |
| LLM08: Excessive Agency | INKOG-011, INKOG-012 |
| LLM09: Overreliance | Advisory only |
| LLM10: Model Theft | Out of scope (infrastructure) |
Adding Custom Rules
Define organization-specific vulnerability patterns:
custom_rules:
- id: "CUSTOM-001"
title: "Internal API key in prompt"
description: "Internal API keys should never appear in LLM prompts"
severity: critical
pattern:
type: "string_in_sink"
sink: "llm_prompt"
match: "INTERNAL_API_KEY_\\w+"
- id: "CUSTOM-002"
title: "Unapproved LLM provider"
description: "Only approved LLM providers should be used"
severity: high
pattern:
type: "function_call"
disallowed:
- "openai.*"
- "anthropic.*"
allowed:
- "internal_llm.*"API Access
Query the vulnerability database programmatically:
# List all rules
curl https://api.inkog.io/v1/rules
# Get specific rule
curl https://api.inkog.io/v1/rules/INKOG-001
# Search rules
curl "https://api.inkog.io/v1/rules?framework=langchain&severity=critical"Contributing Rules
We welcome community contributions to the vulnerability database:
- Fork the rules repository
- Add your rule in YAML format
- Include test cases (vulnerable and secure examples)
- Submit a pull request
id: "COMMUNITY-001"
title: "Custom framework vulnerability"
author: "your-github-username"
description: |
Detailed description of the vulnerability.
severity: high
frameworks:
- custom-framework
cwe: "CWE-94"
references:
- "https://example.com/advisory"
patterns:
- type: "taint_flow"
source: "http_request"
sink: "custom_framework.execute"
tests:
vulnerable:
- |
from custom_framework import execute
execute(request.data)
secure:
- |
from custom_framework import execute
execute(validate(request.data))