Introduction

Inkog Documentation

AI Agent Security Platform

Find vulnerabilities in AI agent logic

Prompt injection, tool misuse, infinite loops, missing oversight — found and mapped to compliance frameworks before your agents hit production. Audited on 500+ open-source agents with a 0% false-positive rate.

Quick Start

Scan your agent code in seconds — no install needed:

npx -y @inkog-io/cli scan .

Or install permanently and configure your API key:

curl -fsSL https://inkog.io/install.sh | sh
export INKOG_API_KEY=sk_live_your_key_here
inkog scan .

Terminal
$ inkog scan .
Scanning ./...
Found 3 issues:
  CRITICAL: Prompt injection via user input (src/agent.py:42)
  HIGH: Tool use without validation (src/tools.py:18)
  MEDIUM: Chain-of-thought leakage (src/chain.py:156)
Scan completed in 2.3s

Why Inkog?

Supported Frameworks


LangChain: LLM orchestration
LangGraph: Stateful agents
CrewAI: Multi-agent teams
AutoGen: Microsoft AG2
n8n: Workflow automation
Flowise: Visual LLM builder
LlamaIndex: RAG framework
Dify: LLMOps platform

What Inkog Detects

Vulnerable
User input directly concatenated into prompts

# Prompt injection vulnerability
def process_query(user_input):
    prompt = f"Answer this: {user_input}"
    return llm.generate(prompt)

Secure
Input sanitized and validated before use

# Sanitized input with validation
def process_query(user_input):
    sanitized = sanitize_input(user_input)
    validate_input(sanitized)
    prompt = PROMPT_TEMPLATE.format(query=sanitized)
    return llm.generate(prompt)
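The `sanitize_input` and `validate_input` helpers are not defined in the snippet above. A minimal sketch of what they might look like, assuming a simple length cap and an illustrative (not exhaustive) deny-list of common injection phrases:

```python
import re

MAX_QUERY_LENGTH = 2000  # hypothetical limit for this sketch

# Illustrative patterns seen in prompt-injection attempts; real scanners
# use far richer detection than a regex deny-list.
_INJECTION_PATTERNS = re.compile(
    r"(ignore (all )?previous instructions|system prompt|you are now)",
    re.IGNORECASE,
)

def sanitize_input(user_input: str) -> str:
    """Drop non-printable characters and collapse runs of whitespace."""
    cleaned = "".join(ch for ch in user_input if ch.isprintable())
    return " ".join(cleaned.split())

def validate_input(text: str) -> None:
    """Reject oversized or suspicious input before it reaches the prompt."""
    if len(text) > MAX_QUERY_LENGTH:
        raise ValueError("query too long")
    if _INJECTION_PATTERNS.search(text):
        raise ValueError("possible prompt-injection attempt")
```

This is a sketch of the defensive pattern, not Inkog's detection logic: the point is that input is normalized and checked before it is ever interpolated into a prompt template.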

Explore the Docs
