
Securing LangChain

Scan, fix, and verify a LangChain agent in 10 minutes.

1. Install

```shell
go install github.com/inkog-io/inkog/cmd/inkog@latest
```

2. Scan

```shell
inkog scan ./my-langchain-app
```

Example output:

```
agent.py:15:1: CRITICAL [infinite_loop] AgentExecutor without max_iterations
  14 │ agent = AgentExecutor(
  15 │     agent=react_agent,
     │     ^^^^^^^^^^^^^^^^^
  16 │     tools=tools,
  EU AI Act Article 15 | OWASP LLM08

agent.py:23:5: HIGH [prompt_injection] User input directly in prompt template
  22 │ prompt = f"""
  23 │ You are helpful. User: {user_input}
     │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  24 │ """
  OWASP LLM01

─────────────────────────────────────────
2 findings (1 critical, 1 high)
```

3. Fix

Fix 1: Add iteration limit

```python
# Before
agent = AgentExecutor(agent=react_agent, tools=tools)

# After
agent = AgentExecutor(
    agent=react_agent,
    tools=tools,
    max_iterations=10,
    max_execution_time=60,
)
```
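To see why the cap matters, here is a minimal sketch of a ReAct-style loop in plain Python (illustrative only, not LangChain internals): the executor keeps invoking the model until it emits a final answer, so a confused model can spin forever unless something enforces a ceiling.

```python
def run_agent(step, max_iterations=10):
    """Run `step` until it returns a final answer or the cap is hit.

    `step` is a stand-in for one think/act/observe round; it returns
    ("final", answer) when done, or ("continue", state) otherwise.
    """
    state = None
    for _ in range(max_iterations):
        kind, value = step(state)
        if kind == "final":
            return value
        state = value
    # Stop cleanly instead of spinning, as AgentExecutor does at its limit.
    return "Agent stopped due to iteration limit."

# A model that never finishes: without the cap this would loop forever.
answer = run_agent(lambda s: ("continue", s), max_iterations=3)
print(answer)  # Agent stopped due to iteration limit.
```

`max_execution_time` plays the same role on the wall-clock axis; together they bound both the number of model calls and the total runtime.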

Fix 2: Sanitize user input

```python
# Before
prompt = f"You are helpful. User: {user_input}"

# After
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are helpful. Only answer factual questions."),
    ("human", "{user_input}"),
])
chain = prompt | llm
```
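A template alone does not block every injection attempt, so some teams add a lightweight pre-filter in front of it. The `sanitize_input` helper and its patterns below are illustrative assumptions, not part of LangChain or inkog; treat this as one extra layer, not a complete defense.

```python
import re

# Phrases that commonly signal an attempt to override the system prompt.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now",
    r"reveal .*system prompt",
]

def sanitize_input(user_input: str, max_len: int = 2000) -> str:
    """Reject oversized or obviously adversarial input before templating."""
    if len(user_input) > max_len:
        raise ValueError("input too long")
    lowered = user_input.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("input matches a known injection pattern")
    return user_input

sanitize_input("What is the capital of France?")  # passes through unchanged
```

Calling `sanitize_input` on user text before it reaches the template rejects the crudest attacks early; the template still keeps whatever gets through confined to the human message.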

4. Verify

```shell
inkog scan ./my-langchain-app
```

Expected:

```
─────────────────────────────────────────
0 findings
Security Gate: PASSED
```

5. Add to CI

```yaml
# .github/workflows/security.yml
name: Security
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: inkog-io/inkog-action@v1
        with:
          path: .
          fail-on: critical,high
```

Common Fixes

| Finding | Fix |
| --- | --- |
| `infinite_loop` | Add `max_iterations=10` |
| `prompt_injection` | Use `ChatPromptTemplate` |
| `hardcoded_credentials` | Use `os.environ["API_KEY"]` |
| `unsafe_tool` | Add human approval for dangerous tools |
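The `unsafe_tool` fix can be sketched as a wrapper that pauses before each dangerous call. The `require_approval` helper below is a hypothetical example, not an inkog or LangChain API (LangChain ships its own `HumanApprovalCallbackHandler` for the same purpose).

```python
def require_approval(tool_fn, ask=input):
    """Wrap a tool so each call must be confirmed by a human before it runs."""
    def wrapped(arg):
        answer = ask(f"Allow tool call {tool_fn.__name__}({arg!r})? [y/N] ")
        if answer.strip().lower() != "y":
            return "Tool call rejected by operator."
        return tool_fn(arg)
    return wrapped

def delete_file(path):
    # Stand-in for a genuinely dangerous tool.
    return f"deleted {path}"

# In tests the prompt is stubbed out; in production `ask` defaults to input().
guarded = require_approval(delete_file, ask=lambda prompt: "n")
print(guarded("/tmp/report.txt"))  # Tool call rejected by operator.
```

Wrapping only the dangerous tools keeps the approval friction off read-only ones like search or retrieval.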
