
Inkog Verify Documentation

Static Analysis for AI Agents

Prevent Infinite Loops, Token Bombing & Prompt Injection

Inkog Verify is the static analysis engine for the Inkog Platform. It scans agent code and configs to catch logic flaws before they reach production.

Looking for Runtime Protection? Inkog Runtime provides real-time monitoring and protection for deployed agents. Currently in private beta. Contact Sales for early access.

Quick Start

Get up and running with Inkog in seconds:

Terminal
$ docker run -v $(pwd):/scan ghcr.io/inkog-io/inkog scan /scan
Scanning /scan...
Found 3 vulnerabilities:
  CRITICAL: Prompt injection via user input (src/agent.py:42)
  HIGH:     Tool use without validation (src/tools.py:18)
  MEDIUM:   Chain-of-thought leakage (src/chain.py:156)
Scan completed in 2.3s

Why Inkog?

Supported Frameworks

LangChain: LLM orchestration
LangGraph: Stateful agents
CrewAI: Multi-agent teams
AutoGen: Microsoft AG2
n8n: Workflow automation
Flowise: Visual LLM builder
LlamaIndex: RAG framework
Dify: LLMOps platform

What Inkog Detects

Traditional security scanners miss agentic vulnerabilities. Inkog catches infinite loops, token bombing, prompt injection, unsafe code execution, and data leakage.
Vulnerable
User input directly concatenated into prompts
# Prompt injection vulnerability
def process_query(user_input):
    prompt = f"Answer this: {user_input}"
    return llm.generate(prompt)
Secure
Input sanitized and validated before use
# Sanitized input with validation
def process_query(user_input):
    sanitized = sanitize_input(user_input)
    validate_input(sanitized)
    prompt = PROMPT_TEMPLATE.format(query=sanitized)
    return llm.generate(prompt)
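Prompt injection is only one of these classes. The loop-related findings follow the same vulnerable/secure contrast: an unbounded agent loop can burn tokens indefinitely, while a capped loop fails closed. The sketch below is illustrative rather than actual Inkog output; the llm.generate client and the task_complete, refine, and count_tokens helpers are assumed names for this example.
Vulnerable
Agent loop with no iteration or token budget
# Unbounded loop: if task_complete never returns True,
# this spins and bills tokens forever (infinite loop / token bombing)
def run_agent(task):
    result = llm.generate(task)
    while not task_complete(result):
        result = llm.generate(refine(task, result))
    return result
Secure
Hard iteration cap and token budget make the loop fail closed
# Bounded loop with explicit limits on iterations and spend
MAX_ITERATIONS = 10
MAX_TOTAL_TOKENS = 50_000

def run_agent(task):
    tokens_used = 0
    result = llm.generate(task)
    for _ in range(MAX_ITERATIONS):
        if task_complete(result):
            return result
        tokens_used += count_tokens(result)
        if tokens_used > MAX_TOTAL_TOKENS:
            raise RuntimeError("Token budget exceeded")
        result = llm.generate(refine(task, result))
    raise RuntimeError("Agent did not converge within iteration cap")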

Explore the Docs
