# Securing LangChain

Scan, fix, and verify a LangChain agent in 10 minutes.
## 1. Install

```shell
go install github.com/inkog-io/inkog/cmd/inkog@latest
```

## 2. Scan
```shell
inkog scan ./my-langchain-app
```

Example output:
```
agent.py:15:1: CRITICAL [infinite_loop]
  AgentExecutor without max_iterations
     │
  14 │ agent = AgentExecutor(
  15 │     agent=react_agent,
     │     ^^^^^^^^^^^^^^^^^
  16 │     tools=tools,
     │
  EU AI Act Article 15 | OWASP LLM08

agent.py:23:5: HIGH [prompt_injection]
  User input directly in prompt template
     │
  22 │ prompt = f"""
  23 │ You are helpful. User: {user_input}
     │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  24 │ """
     │
  OWASP LLM01

─────────────────────────────────────────
2 findings (1 critical, 1 high)
```

## 3. Fix
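Why the critical finding matters: a ReAct-style executor re-plans after every tool call, so a model that never produces a final answer will loop (and bill) indefinitely. A minimal sketch of the guard that `max_iterations` provides, in plain Python with `step` as a stand-in for one plan/act cycle (no LangChain required):

```python
def run_agent(step, max_iterations=10):
    """Run plan/act cycles until `step` returns a final answer
    or the iteration budget is exhausted."""
    for i in range(max_iterations):
        result = step(i)
        if result is not None:  # agent produced a final answer
            return result
    # Stop with a fallback answer instead of spinning forever
    return "Agent stopped due to iteration limit."

# A "model" that never finishes would otherwise loop forever:
print(run_agent(lambda i: None))
```

The same idea applies to `max_execution_time`: a wall-clock budget alongside the step budget, so either limit can halt a runaway agent.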
### Fix 1: Add iteration limit
```python
# Before
agent = AgentExecutor(agent=react_agent, tools=tools)

# After
agent = AgentExecutor(
    agent=react_agent,
    tools=tools,
    max_iterations=10,
    max_execution_time=60,
)
```

### Fix 2: Sanitize user input
```python
# Before
prompt = f"You are helpful. User: {user_input}"

# After
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are helpful. Only answer factual questions."),
    ("human", "{user_input}"),
])
chain = prompt | llm
```

## 4. Verify
```shell
inkog scan ./my-langchain-app
```

Expected:
```
─────────────────────────────────────────
0 findings
Security Gate: PASSED
```

## 5. Add to CI
```yaml
# .github/workflows/security.yml
name: Security
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: inkog-io/inkog-action@v1
        with:
          path: .
          fail-on: critical,high
```

## Common Fixes
| Finding | Fix |
|---|---|
| `infinite_loop` | Add `max_iterations=10` |
| `prompt_injection` | Use `ChatPromptTemplate` |
| `hardcoded_credentials` | Use `os.environ["API_KEY"]` |
| `unsafe_tool` | Add human approval for dangerous tools |
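The `unsafe_tool` fix is the vaguest of the four, so here is a sketch: wrap any destructive tool so it only executes after explicit confirmation. This is plain Python, not a LangChain API; `require_approval` and its `confirm` callback are illustrative names (`confirm` is injectable so the wrapper can be tested without a terminal). In LangChain you would apply the same wrapper to a tool's function before registering it:

```python
def require_approval(tool_fn, confirm=None):
    """Wrap a destructive tool so it only runs after explicit
    human confirmation. `confirm` defaults to a terminal prompt."""
    if confirm is None:
        confirm = lambda desc: input(f"Allow {desc}? [y/N] ").lower() == "y"

    def guarded(*args, **kwargs):
        desc = f"{tool_fn.__name__}{args}"
        if not confirm(desc):
            # Surfaced to the agent as the tool's result
            return "Denied by operator."
        return tool_fn(*args, **kwargs)
    return guarded

# Auto-deny for the demo; interactively, drop the confirm= argument
delete_file = require_approval(lambda path: f"deleted {path}",
                               confirm=lambda desc: False)
```

Returning a denial message (rather than raising) lets the agent see that the action was refused and plan around it.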