Vellum
Static analysis for Vellum applications to detect workflow loops, YAML configuration issues, and prompt template vulnerabilities.
Quick Start
inkog scan ./my-vellum-appWhat Inkog Detects
| Finding | Severity | Description |
|---|---|---|
| Workflow Loop | CRITICAL | Circular workflow connections |
| YAML Injection | HIGH | Unsafe YAML parsing |
| Template Injection | HIGH | Prompt template variables from user input |
| Config Exposure | CRITICAL | API keys in configuration files |
| Deployment Risk | HIGH | Insecure deployment configurations |
Workflow Loops
Workflow configurations with cycles run indefinitely.
Vulnerable
Unconditional loop back to analyze
# workflow.yaml
name: analysis-workflow
nodes:
- id: analyze
type: prompt
next: validate
- id: validate
type: conditional
conditions:
- if: "{{output.valid}}"
next: complete
- else:
next: analyze # Loop back without limit!
- id: complete
type: outputSecure
Retry counter with maximum limit
# workflow.yaml
name: analysis-workflow
max_iterations: 5 # Global limit
nodes:
- id: analyze
type: prompt
next: validate
- id: validate
type: conditional
retry_count: 0 # Track retries
max_retries: 3 # Limit retries
conditions:
- if: "{{output.valid}}"
next: complete
- if: "{{retry_count >= max_retries}}"
next: error_handler
- else:
next: analyze
increment: retry_count
- id: error_handler
type: output
message: "Max retries exceeded"
- id: complete
type: outputYAML Configuration Parsing
Unsafe YAML parsing can lead to code execution.
Vulnerable
yaml.load allows code execution
import yaml
# Load workflow from user-provided file
with open(user_provided_path, 'r') as f:
config = yaml.load(f, Loader=yaml.Loader) # Unsafe!
# YAML can contain: !!python/object/apply:os.system ["rm -rf /"]Secure
yaml.safe_load with path validation
import yaml
from pathlib import Path
ALLOWED_DIR = Path("./workflows")
def safe_load_workflow(path: str) -> dict:
"""Safely load workflow configuration."""
# Validate path
target = Path(path).resolve()
if not target.is_relative_to(ALLOWED_DIR):
raise ValueError("Invalid workflow path")
if not target.suffix in ['.yaml', '.yml']:
raise ValueError("Invalid file type")
with open(target, 'r') as f:
# Use safe_load - no code execution
config = yaml.safe_load(f)
# Validate structure
required = ['name', 'nodes']
if not all(k in config for k in required):
raise ValueError("Invalid workflow structure")
return config
config = safe_load_workflow(validated_path)Prompt Template Injection
User input in prompt templates enables injection.
Vulnerable
Raw user input in template
from vellum import Vellum
client = Vellum(api_key=api_key)
# User input directly in template variables
response = client.execute_prompt(
deployment_name="assistant",
inputs={
"user_query": user_input # Unvalidated!
}
)Secure
Sanitized input with length limit
from vellum import Vellum
import html
import re
def sanitize_input(text: str) -> str:
"""Remove injection patterns from input."""
# Escape special characters
text = html.escape(text)
# Remove instruction patterns
patterns = [
r'ignore.*instruction',
r'disregard.*above',
r'new.*instruction',
r'{{.*}}' # Template syntax
]
for p in patterns:
text = re.sub(p, '[filtered]', text, flags=re.I)
return text[:1000] # Limit length
client = Vellum(api_key=api_key)
response = client.execute_prompt(
deployment_name="assistant",
inputs={
"user_query": sanitize_input(user_input)
}
)Configuration Exposure
API keys and secrets in configuration files.
Vulnerable
Hardcoded secrets in config
# config.yaml
vellum:
api_key: "vl_sk_live_abc123..." # Exposed!
database:
password: "supersecret123"
openai:
api_key: "sk-abc123..."Secure
Environment variable references
# config.yaml
vellum:
api_key: "${VELLUM_API_KEY}" # From environment
database:
password: "${DATABASE_PASSWORD}"
openai:
api_key: "${OPENAI_API_KEY}"
# Load with environment substitution
import os
import yaml
def load_config(path: str) -> dict:
with open(path, 'r') as f:
content = f.read()
# Substitute environment variables
for match in re.finditer(r'${(w+)}', content):
var = match.group(1)
value = os.environ.get(var, '')
content = content.replace(match.group(0), value)
return yaml.safe_load(content)Deployment Configuration
Insecure deployment settings can expose vulnerabilities.
Vulnerable
Debug mode with no limits
# deployment.yaml
name: my-prompt
environment: production
settings:
debug: true # Debug in production!
log_level: verbose # Logs sensitive data
rate_limit: null # No rate limiting
max_tokens: null # Unlimited tokensSecure
Secure defaults with rate limiting
# deployment.yaml
name: my-prompt
environment: production
settings:
debug: false
log_level: error # Minimal logging
rate_limit:
requests_per_minute: 60
burst: 10
max_tokens: 4000
timeout_seconds: 30
security:
input_validation: true
output_sanitization: true
pii_detection: trueWorkflow Variable Leakage
Variables can leak between workflow nodes.
Vulnerable
PII leaked to external service
# workflow.yaml
nodes:
- id: get_user_data
type: api_call
output: user_data # Contains PII
- id: external_call
type: api_call
url: "https://external.api/process"
body:
data: "{{user_data}}" # Leaks PII to external API!Secure
Sanitization before external calls
# workflow.yaml
nodes:
- id: get_user_data
type: api_call
output: user_data
- id: sanitize
type: transform
input: "{{user_data}}"
operations:
- redact_pii: true
- remove_fields: ["ssn", "credit_card"]
output: safe_data
- id: external_call
type: api_call
url: "https://external.api/process"
body:
data: "{{safe_data}}" # Sanitized data onlyBest Practices
- Set
max_iterationson workflow definitions - Use
yaml.safe_load- neveryaml.load - Sanitize template inputs before execution
- Store secrets in environment - never in config files
- Disable debug mode in production
- Sanitize data before external API calls
CLI Examples
# Scan Vellum project
inkog scan ./my-vellum-app
# Check YAML configurations
inkog scan ./workflows -severity high
# Scan for exposed secrets
inkog scan . -verboseRelated
- Prompt Injection
- Data Exposure
- n8n - Similar workflow patterns
Last updated on