AI Agent Security in CI/CD
Add automated security gates to catch AI agent vulnerabilities before they reach production.
Why CI/CD Gates for AI Agents
Traditional SAST tools miss AI-specific risks like infinite loops in agent executors, prompt injection via f-strings, and missing human oversight. Inkog’s CI/CD integration catches these in pull requests before merge.
1. GitHub Actions Setup
name: AI Security
on: [push, pull_request]
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: inkog-io/inkog-action@v1
with:
path: .
fail-on: critical,highAdd your API key as a repository secret:
- Go to Settings > Secrets and variables > Actions
- Add
INKOG_API_KEYwith your key from app.inkog.io
2. Choose a Policy Preset
Inkog has 5 policy presets that control which findings block your pipeline:
| Preset | Blocks On | Best For |
|---|---|---|
low-noise | Critical + High only | Production CI gates |
balanced | Medium and above | Default — security scanning |
comprehensive | Everything | Full security audits |
governance | Governance findings | EU AI Act Article 14/12 |
eu-ai-act | Compliance findings | Regulatory compliance |
Set the policy in your workflow:
- uses: inkog-io/inkog-action@v1
with:
path: .
policy: low-noise
fail-on: critical,highOr via the CLI directly:
npx -y @inkog-io/cli scan . -policy low-noise -output sarif3. SARIF Output for GitHub Security Tab
Generate SARIF output to see findings directly in GitHub’s Security tab:
name: AI Security
on: [push, pull_request]
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run Inkog scan
uses: inkog-io/inkog-action@v1
with:
path: .
output: sarif
output-file: results.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: results.sarifFindings appear as code annotations on pull requests, inline with the affected lines.
4. Diff Mode for Regressions
Scan only changes since a baseline to catch new vulnerabilities without noise from existing ones:
# Create a baseline on main branch
inkog scan . -output json > baseline.json
# On PR branch, scan for new findings only
inkog scan . -diff baseline.jsonIn CI:
- name: Baseline scan
run: npx -y @inkog-io/cli scan . -output json > baseline.json
- name: Diff scan
run: npx -y @inkog-io/cli scan . -diff baseline.json -fail-on critical,high5. Reading Scan Results
A typical CI output looks like:
agent.py:15:1: CRITICAL [infinite_loop]
AgentExecutor without max_iterations
EU AI Act Article 15 | OWASP LLM08
agent.py:23:5: HIGH [prompt_injection]
User input directly in prompt template
OWASP LLM01
---------------------------------------------
2 findings (1 critical, 1 high)
Security Gate: FAILEDEach finding includes:
- File and line — exact location in code
- Severity — CRITICAL, HIGH, MEDIUM, LOW
- Rule ID — e.g.,
infinite_loop,prompt_injection - Compliance mapping — EU AI Act articles, OWASP references
6. Blocking on Specific Rules
Fine-tune which findings block your pipeline:
# Block only on specific critical rules
- uses: inkog-io/inkog-action@v1
with:
path: .
fail-on: criticalOr use a config file for more control:
policy: balanced
ignore:
- hardcoded_credentials # Handled by separate secret scanner
- missing_rate_limit # Rate limiting at infrastructure levelCommon Fixes
| CI Failure | Fix |
|---|---|
infinite_loop | Add max_iterations to AgentExecutor |
prompt_injection | Use ChatPromptTemplate with role separation |
hardcoded_credentials | Move to environment variables or secret manager |
missing_human_oversight | Add human approval for sensitive tool calls |