# Langflow
Static analysis for Langflow flow exports to detect agent cycles, component vulnerabilities, and data leakage.
## Quick Start

```bash
# Export flow from Langflow UI, then scan
inkog scan ./flows
```

## What Inkog Detects
| Finding | Severity | Description |
|---|---|---|
| Agent Cycle | CRITICAL | AgentComponent circular connections |
| Flow Loop | HIGH | Edges creating infinite loops |
| Config Exposure | CRITICAL | API keys in flow JSON |
| Data Leakage | HIGH | Sensitive data to untrusted outputs |
| Python Risk | CRITICAL | Dangerous code in custom Python components |
## AgentComponent Cycles
Agent components with circular edges loop indefinitely.
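Inkog's exact algorithm is not documented here, but detecting such cycles in a flow's edge list can be sketched with a simple depth-first search (an illustrative sketch, not Inkog's implementation):

```python
# Illustrative sketch: find a cycle in an exported flow's edge list.
# Not Inkog's actual implementation.
from collections import defaultdict

def find_cycle(flow):
    """Return one cycle as a list of node ids, or None if the flow is acyclic."""
    graph = defaultdict(list)
    for edge in flow.get("edges", []):
        graph[edge["source"]].append(edge["target"])

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / finished
    color = defaultdict(int)

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for nxt in graph[node]:
            if color[nxt] == GRAY:           # back edge: cycle found
                return path[path.index(nxt):] + [nxt]
            if color[nxt] == WHITE:
                cycle = dfs(nxt, path)
                if cycle:
                    return cycle
        color[node] = BLACK
        path.pop()
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

flow = {"edges": [{"source": "agent-1", "target": "tool-1"},
                  {"source": "tool-1", "target": "agent-1"}]}
print(find_cycle(flow))  # ['agent-1', 'tool-1', 'agent-1']
```

Run against the vulnerable export below, this reports the `agent-1 → tool-1 → agent-1` loop; against the secure variant it returns `None`.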
### Vulnerable

Circular agent-tool connection:

```json
{
  "nodes": [
    {
      "id": "agent-1",
      "type": "AgentComponent",
      "data": {
        "type": "ReActAgent"
      }
    },
    {
      "id": "tool-1",
      "type": "ToolComponent"
    }
  ],
  "edges": [
    {"source": "agent-1", "target": "tool-1"},
    {"source": "tool-1", "target": "agent-1"}
  ]
}
```

### Secure
Linear flow with iteration limit:

```json
{
  "nodes": [
    {
      "id": "agent-1",
      "type": "AgentComponent",
      "data": {
        "type": "ReActAgent",
        "max_iterations": 10,
        "handle_parsing_errors": true
      }
    },
    {
      "id": "tool-1",
      "type": "ToolComponent"
    },
    {
      "id": "output-1",
      "type": "OutputComponent"
    }
  ],
  "edges": [
    {"source": "agent-1", "target": "tool-1"},
    {"source": "tool-1", "target": "output-1"}
  ]
}
```

## Flow Connection Loops
Edges between non-agent components can also create cycles.
### Vulnerable

Validator loops back to prompt:

```json
{
  "edges": [
    {"source": "prompt-1", "target": "llm-1"},
    {"source": "llm-1", "target": "parser-1"},
    {"source": "parser-1", "target": "validator-1"},
    {"source": "validator-1", "target": "prompt-1"}
  ]
}
```

### Secure
Retry counter breaks the cycle:

```json
{
  "nodes": [
    {
      "id": "retry-counter",
      "type": "CustomComponent",
      "data": {
        "code": "self.count += 1\nif self.count >= 3: return output"
      }
    }
  ],
  "edges": [
    {"source": "prompt-1", "target": "llm-1"},
    {"source": "llm-1", "target": "parser-1"},
    {"source": "parser-1", "target": "validator-1"},
    {"source": "validator-1", "target": "retry-counter"},
    {"source": "retry-counter", "target": "output-1"}
  ]
}
```

## Configuration Exposure
Flow exports may contain sensitive configuration.
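A minimal sketch of what a secret scan over node data could look like (the key-name and value patterns here are assumptions for illustration, not Inkog's actual rule set):

```python
# Illustrative sketch: flag hardcoded secrets in an exported flow.
# The field-name and value-prefix patterns are assumptions.
import re

SECRET_FIELD = re.compile(r"(api_key|secret|token|password)", re.I)
SECRET_VALUE = re.compile(r"^(sk-|ghp_|AKIA)")  # common credential prefixes

def find_exposed_secrets(flow):
    """Return (node_id, field_name) pairs that look like hardcoded credentials."""
    findings = []
    for node in flow.get("nodes", []):
        for field, value in node.get("data", {}).items():
            if not isinstance(value, str):
                continue
            if SECRET_FIELD.search(field) and SECRET_VALUE.match(value):
                findings.append((node["id"], field))
    return findings

flow = {"nodes": [{"id": "openai-1", "type": "OpenAIModel",
                   "data": {"api_key": "sk-abc123", "model_name": "gpt-4"}}]}
print(find_exposed_secrets(flow))  # [('openai-1', 'api_key')]
```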
### Vulnerable

API key hardcoded in node:

```json
{
  "nodes": [
    {
      "id": "openai-1",
      "type": "OpenAIModel",
      "data": {
        "api_key": "sk-abc123...",
        "model_name": "gpt-4"
      }
    }
  ]
}
```

### Secure
Environment variable reference:

```json
{
  "nodes": [
    {
      "id": "openai-1",
      "type": "OpenAIModel",
      "data": {
        "model_name": "gpt-4"
      }
    }
  ],
  "global_variables": {
    "openai_api_key": {
      "type": "credential",
      "value": "{{OPENAI_API_KEY}}"
    }
  }
}
```

## Data Flow to Untrusted Outputs
Sensitive data can leak to external systems.
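This class of finding is essentially a reachability check: can a sensitive source reach an untrusted sink without passing through a sanitizer? A hedged sketch, where the source/sink/sanitizer type sets are assumptions for illustration:

```python
# Illustrative sketch: does data from a sensitive source reach an untrusted
# sink without a sanitizer on the path? Type sets below are assumptions.
from collections import defaultdict, deque

SOURCES = {"DatabaseComponent"}
SINKS = {"WebhookOutput"}
SANITIZERS = {"CustomComponent"}  # assume custom components act as sanitizers

def unsanitized_path_exists(flow):
    types = {n["id"]: n["type"] for n in flow["nodes"]}
    graph = defaultdict(list)
    for e in flow["edges"]:
        graph[e["source"]].append(e["target"])

    # BFS from each source, refusing to walk through sanitizer nodes
    for src in (n for n, t in types.items() if t in SOURCES):
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if types.get(node) in SINKS:
                return True  # source reaches sink with no sanitizer in between
            for nxt in graph[node]:
                if nxt not in seen and types.get(nxt) not in SANITIZERS:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

vulnerable = {
    "nodes": [{"id": "db-1", "type": "DatabaseComponent"},
              {"id": "llm-1", "type": "OpenAIModel"},
              {"id": "webhook-1", "type": "WebhookOutput"}],
    "edges": [{"source": "db-1", "target": "llm-1"},
              {"source": "llm-1", "target": "webhook-1"}],
}
print(unsanitized_path_exists(vulnerable))  # True
```

On the secure flow below, the sanitizer node blocks every source-to-sink path, so the check returns `False`.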
### Vulnerable

DB data → external webhook without filtering:

```json
{
  "nodes": [
    {"id": "db-1", "type": "DatabaseComponent"},
    {"id": "llm-1", "type": "OpenAIModel"},
    {"id": "webhook-1", "type": "WebhookOutput", "data": {"url": "https://external.api"}}
  ],
  "edges": [
    {"source": "db-1", "target": "llm-1"},
    {"source": "llm-1", "target": "webhook-1"}
  ]
}
```

### Secure
Sanitizer removes PII before output:

```json
{
  "nodes": [
    {"id": "db-1", "type": "DatabaseComponent"},
    {"id": "sanitizer-1", "type": "CustomComponent", "data": {"code": "return redact_pii(input)"}},
    {"id": "llm-1", "type": "OpenAIModel"},
    {"id": "webhook-1", "type": "WebhookOutput", "data": {"url": "https://internal.api"}}
  ],
  "edges": [
    {"source": "db-1", "target": "sanitizer-1"},
    {"source": "sanitizer-1", "target": "llm-1"},
    {"source": "llm-1", "target": "webhook-1"}
  ]
}
```

## Custom Python Components
Custom components can embed arbitrary Python, including dangerous code patterns.
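Because the component code ships as a string inside the flow JSON, a first-pass check can be as simple as pattern matching on that string (a hedged sketch; the pattern list is an assumption, not Inkog's actual rule set):

```python
# Illustrative sketch: flag dangerous calls in custom component code strings.
# The pattern list is an assumption for illustration.
import re

DANGEROUS_PATTERNS = [
    r"\bos\.system\b", r"\bos\.popen\b", r"\bsubprocess\b",
    r"\beval\(", r"\bexec\(", r"\b__import__\b",
]

def risky_components(flow):
    """Return ids of nodes whose embedded code matches a dangerous pattern."""
    hits = []
    for node in flow.get("nodes", []):
        code = node.get("data", {}).get("code", "")
        if any(re.search(p, code) for p in DANGEROUS_PATTERNS):
            hits.append(node["id"])
    return hits

flow = {"nodes": [{"id": "custom-1", "type": "CustomComponent",
                   "data": {"code": "import os; os.system(self.input)"}}]}
print(risky_components(flow))  # ['custom-1']
```

Pattern matching like this is easy to evade, so treat it as triage rather than proof of safety.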
### Vulnerable

Shell command execution:

```json
{
  "nodes": [
    {
      "id": "custom-1",
      "type": "CustomComponent",
      "data": {
        "code": "import os; os.system(self.input)"
      }
    }
  ]
}
```

### Secure
Safe string processing only:

```json
{
  "nodes": [
    {
      "id": "custom-1",
      "type": "CustomComponent",
      "data": {
        "code": "import re; result = re.sub(r'[^a-zA-Z0-9\\s]', '', self.input); return result[:1000]"
      }
    }
  ]
}
```

## How to Export Flows
To scan Langflow flows:

- From the UI: Flow menu → Export → JSON
- Via the API:

```bash
# Export flow
curl -X GET "http://localhost:7860/api/v1/flows/{flow_id}" \
  -o flow.json
```

Then scan:

```bash
inkog scan ./flow.json
```

## Vector Store Risks
Vector store components that load collections or paths from untrusted input are a risk.
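One mitigation is to validate vector store configuration against an allowlist before deploying a flow. A minimal sketch, assuming the `={{input...}}` template syntax shown in the examples below and a hypothetical allowlist:

```python
# Illustrative sketch: reject vector store configs whose collection or path
# is user-controlled. The allowlist and template syntax are assumptions.
APPROVED_COLLECTIONS = {"approved_collection"}

def validate_vector_store(node):
    """Return a list of problems found in a vector store node's config."""
    problems = []
    data = node.get("data", {})
    for field in ("collection_name", "persist_directory"):
        value = str(data.get(field, ""))
        if "{{input" in value:  # templated from user input
            problems.append(f"{field} is user-controlled: {value}")
    if data.get("collection_name") not in APPROVED_COLLECTIONS:
        problems.append("collection_name not in approved allowlist")
    return problems

node = {"id": "vectorstore-1", "type": "VectorStoreComponent",
        "data": {"collection_name": "approved_collection",
                 "persist_directory": "./data/vectors"}}
print(validate_vector_store(node))  # []
```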
### Vulnerable

User controls collection and path:

```json
{
  "nodes": [
    {
      "id": "vectorstore-1",
      "type": "VectorStoreComponent",
      "data": {
        "collection_name": "={{input.collection}}",
        "persist_directory": "={{input.path}}"
      }
    }
  ]
}
```

### Secure
Fixed approved collection:

```json
{
  "nodes": [
    {
      "id": "vectorstore-1",
      "type": "VectorStoreComponent",
      "data": {
        "collection_name": "approved_collection",
        "persist_directory": "./data/vectors"
      }
    }
  ]
}
```

## Best Practices
- Avoid circular edges in flow connections
- Set `max_iterations` on Agent components
- Use environment variables for API keys
- Add sanitizers before external outputs
- Restrict Custom Component code to safe operations
- Fix vector store paths to approved directories
## CLI Examples

```bash
# Scan Langflow exports
inkog scan ./flows

# Check for data leakage
inkog scan . -severity high

# Verbose output
inkog scan . -verbose
```

## Related
- Flowise - Similar visual builder
- LangChain - Underlying framework
- Data Exposure