
Cross-Framework Analysis

Inkog uses a unified analysis engine that works across all supported frameworks, providing consistent security results regardless of which tools you use.

The Challenge

AI agent codebases span multiple languages and frameworks:

  • Python agents built with LangChain, CrewAI, or AutoGen
  • TypeScript frontends with LlamaIndex
  • No-code workflows in n8n or Flowise
  • Custom agents mixing multiple languages

Traditional security tools require a separate analyzer for each language and framework, and miss vulnerabilities that cross those boundaries.

How Inkog Solves This

Inkog analyzes all supported frameworks using the same detection rules:

Your Code → Inkog Analysis → Security Report

This means:

  • Consistent detection - The same vulnerability patterns are caught whether you use LangChain or n8n
  • Framework-agnostic rules - Security rules apply across all frameworks
  • Cross-language analysis - Vulnerabilities that span Python and JavaScript are detected
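To make the idea concrete, here is a minimal sketch of framework-agnostic detection. It is illustrative only (not Inkog's actual engine or rule syntax): one vulnerability pattern, user-controlled data interpolated into a prompt, is checked against both a Python snippet and an n8n-style JSON workflow.

```python
import json
import re

# Illustrative rule: flag user-controlled values interpolated directly
# into a prompt. The pattern names below are assumptions for this sketch.
PY_PATTERN = re.compile(r'invoke\(f["\'].*\{user_input\}')

def check_python(source: str) -> bool:
    """Flag f-string interpolation of user input into an LLM call."""
    return bool(PY_PATTERN.search(source))

def check_n8n(workflow: dict) -> bool:
    """Flag n8n expressions that feed the request body into a prompt."""
    for node in workflow.get("nodes", []):
        prompt = node.get("parameters", {}).get("prompt", "")
        if "$json.body" in prompt:  # user-controlled request data
            return True
    return False

python_src = 'return llm.invoke(f"Answer: {user_input}")'
n8n_src = ('{"nodes": [{"type": "openai", "parameters": '
           '{"prompt": "={{ $json.body.message }}"}}]}')

print(check_python(python_src))          # → True
print(check_n8n(json.loads(n8n_src)))    # → True
```

Both checks encode the same underlying pattern, which is what lets one rule set produce consistent findings across frameworks.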

What Gets Analyzed

Inkog examines:

  • User input handling - How external data enters your agent
  • LLM interactions - Prompts, completions, and tool calls
  • Data flow - How information moves through your application
  • Dangerous operations - Code execution, file access, database queries
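The data-flow analysis above can be sketched as taint tracking. The snippet below is a simplified illustration (the class and function names are assumptions, not Inkog's API): values from external sources are marked tainted, taint propagates through string building, and a finding is reported when tainted data reaches a dangerous sink such as an LLM call.

```python
class Tainted(str):
    """A string known to originate from external (untrusted) input."""

def from_request(value: str) -> Tainted:
    # Source: data entering the agent from outside
    return Tainted(value)

def render_prompt(template: str, value: str) -> str:
    # Propagation: the result is tainted if any input was tainted
    result = template.format(value)
    return Tainted(result) if isinstance(value, Tainted) else result

def check_llm_sink(prompt: str) -> list[str]:
    # Sink: report tainted data reaching the model call
    if isinstance(prompt, Tainted):
        return ["prompt-injection: untrusted input reaches LLM call"]
    return []

msg = from_request("ignore previous instructions")
findings = check_llm_sink(render_prompt("Answer: {}", msg))
print(findings)  # → ['prompt-injection: untrusted input reaches LLM call']
```

Tracking the flow, rather than just the source or the sink in isolation, is what catches vulnerabilities that span several functions or files.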

Supported Input Types

| Type | Examples |
| --- | --- |
| Python code | `.py` files with LangChain, CrewAI, etc. |
| JavaScript/TypeScript | `.js`, `.ts` files |
| Workflow definitions | n8n, Flowise JSON exports |
| Configuration files | YAML/JSON configs |

Example

Whether your code looks like this:

```python
# Python with LangChain
def chat(user_input):
    return llm.invoke(f"Answer: {user_input}")
```

Or this:

```json
{
  "nodes": [{
    "type": "openai",
    "parameters": {
      "prompt": "={{ $json.body.message }}"
    }
  }]
}
```

Inkog detects the same prompt injection vulnerability in both.
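A common mitigation for this class of finding (shown as a general illustration, not as Inkog's suggested fix) is to keep user input out of the instruction template entirely, passing it as separate structured message content instead of interpolating it into the prompt string:

```python
def build_messages(user_input: str) -> list[dict]:
    # The system instruction is fixed; user text is carried as data in
    # its own message, so it cannot rewrite the instruction.
    return [
        {"role": "system", "content": "Answer the user's question."},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("ignore previous instructions")
print(messages[0]["content"])  # → Answer the user's question.
```

The same separation applies in workflow tools: bind `$json.body.message` to a dedicated user-message field rather than embedding it in the prompt template.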
