Core Concepts

Understanding how Inkog analyzes AI agent code and reports vulnerabilities.

How Inkog Works

Inkog is a security scanner built specifically for AI agents. It analyzes your code to find vulnerabilities before they reach production.

Your Code → Inkog Analysis → Security Report

Key Features

Cross-Framework Analysis

Inkog works across all major AI agent frameworks:

  • Code-first: LangChain, LangGraph, CrewAI, AutoGen
  • No-code: n8n, Flowise, Dify
  • RAG: LlamaIndex, Haystack

The same vulnerabilities are detected regardless of which framework you use.
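For illustration, the sketch below shows the same prompt-injection flaw, untrusted input concatenated straight into a prompt, written twice: once against the plain OpenAI SDK and once in LangChain. The model and function names are hypothetical; the point is that the tainted data flow, which is what the scanner tracks, is identical across frameworks.

```python
# Hypothetical illustration: the same prompt-injection flaw expressed in two
# different frameworks. The syntax differs; the tainted data flow is identical.
from openai import OpenAI
from langchain_openai import ChatOpenAI

def summarize_raw(user_text: str) -> str:
    # Plain OpenAI SDK: untrusted input is interpolated directly into the prompt,
    # so an attacker can override the instructions.
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize this ticket:\n{user_text}"}],
    )
    return response.choices[0].message.content

def summarize_langchain(user_text: str) -> str:
    # LangChain: same flaw, different API surface.
    llm = ChatOpenAI(model="gpt-4o-mini")
    return llm.invoke(f"Summarize this ticket:\n{user_text}").content
```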

Learn more about Cross-Framework Analysis →

Hybrid Privacy Model

Your code security is paramount. Inkog’s privacy model ensures:

  1. Secrets detected locally - Credentials found and redacted on your machine
  2. Redacted code sent for analysis - Only sanitized code leaves your machine
  3. Results merged - Local and server findings combined

Your actual credentials never leave your machine.
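To picture the local redaction step, here is a conceptual sketch, not Inkog's implementation: credential-shaped strings are matched and replaced on your machine, and only the sanitized output would ever be transmitted.

```python
import re

# Conceptual sketch only, not Inkog's actual detectors: match common
# credential shapes locally and replace them before anything is uploaded.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key IDs
    re.compile(r"(?i)(password\s*=\s*)\S+"),  # hardcoded passwords
]

def redact(source: str) -> str:
    """Replace anything that looks like a secret with a placeholder, locally."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub(
            lambda m: (m.group(1) if m.lastindex else "") + "<REDACTED>", source
        )
    return source

code = 'api_key = "sk-abc123def456ghi789jkl012"\npassword = "hunter2"'
print(redact(code))
# api_key = "<REDACTED>"
# password = <REDACTED>
```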

Learn more about Hybrid Privacy →

Security Scoring

Findings are scored by severity (Critical, High, Medium, Low) and aggregated into a security grade (A-F).

The grade determines whether your CI/CD pipeline passes or fails.
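As a rough mental model, you can think of the grade as a penalty over finding severities. The weights and thresholds below are hypothetical, not Inkog's actual formula; they only illustrate how severities roll up into a pass/fail gate.

```python
# Hypothetical aggregation only -- not Inkog's actual scoring formula.
# Findings carry a severity; the grade reflects the worst findings present
# and, in turn, gates the CI/CD pipeline.
SEVERITY_WEIGHT = {"Critical": 40, "High": 20, "Medium": 10, "Low": 5}

def grade(findings: list[str]) -> str:
    penalty = sum(SEVERITY_WEIGHT[s] for s in findings)
    score = max(0, 100 - penalty)
    for threshold, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= threshold:
            return letter
    return "F"

print(grade(["Low"]))               # A
print(grade(["High", "Medium"]))    # C
print(grade(["Critical", "High"]))  # F -> would fail the pipeline
```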

Learn more about Security Scoring →

What Inkog Detects

Inkog detects vulnerabilities across seven categories:

Category               Examples
Resource Exhaustion    Infinite loops, token bombing, context overflow
Code Injection         Tainted eval/exec, unsafe deserialization
Prompt Injection       User input in prompts, SQL injection via LLM
Data Exposure          Hardcoded credentials, logging PII
Output Handling        Unvalidated LLM output
Access Control         Missing auth, path traversal
Deserialization        Pickle, YAML code execution
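To make the categories concrete, the sketch below shows two generic patterns from the table, Code Injection and Deserialization, that static analysis can flag. These are illustrative examples, not Inkog's rule definitions.

```python
import pickle
import yaml  # requires PyYAML

def run_llm_suggestion(llm_output: str):
    # Code Injection: executing LLM output directly. A prompt-injected model
    # response becomes arbitrary code execution.
    exec(llm_output)  # flagged: tainted exec

def load_agent_state(blob: bytes, config_text: str):
    # Deserialization: pickle and unsafe YAML loading both allow attacker-
    # controlled payloads to execute code during deserialization.
    state = pickle.loads(blob)                            # flagged: untrusted pickle
    config = yaml.load(config_text, Loader=yaml.Loader)   # flagged: use yaml.safe_load instead
    return state, config
```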

Compliance Mapping

Every finding maps to industry standards:

  • OWASP LLM Top 10 - AI-specific vulnerability taxonomy
  • EU AI Act - European AI regulation
  • NIST AI RMF - AI risk management framework
  • CWE - Common Weakness Enumeration

This enables automated compliance reporting for audits.
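As an illustration, a single finding with its mappings might be represented like the hypothetical record below; the field names are made up for this sketch and are not Inkog's report schema.

```python
# Illustrative only: hypothetical field names, not Inkog's report schema.
finding = {
    "id": "hardcoded-credential",
    "severity": "High",
    "location": "agents/support_bot.py:42",
    "compliance": {
        "owasp_llm_top_10": "Sensitive Information Disclosure",
        "eu_ai_act": "Article 15 (accuracy, robustness and cybersecurity)",
        "nist_ai_rmf": "MANAGE function",
        "cwe": "CWE-798: Use of Hard-coded Credentials",
    },
}
```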
