# Security Gates
Security gates determine whether a CI/CD pipeline should pass or fail based on scan results.
## Gate Status
Inkog uses a simple gate model:
| Status | Condition | Exit Code |
|---|---|---|
| PASSED | 0 Critical + 0 High findings | 0 |
| BLOCKED | Any Critical or High finding | 1 |
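In script form, the gate model above is just a check on the two counts; a minimal sketch, using placeholder counts in place of values read from a real inkog report:

```shell
#!/bin/sh
# Gate decision from summary counts: PASSED only when both the Critical
# and High counts are zero. The counts below are placeholders for values
# pulled from a real inkog report.
CRITICAL=0
HIGH=2

if [ "$CRITICAL" -eq 0 ] && [ "$HIGH" -eq 0 ]; then
  STATUS=PASSED
else
  STATUS=BLOCKED
fi
echo "gate: $STATUS"   # a single Critical or High finding is enough to block
```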
## Exit Codes
| Code | Meaning | Pipeline Action |
|---|---|---|
| 0 | No findings or only Medium/Low | Continue |
| 1 | Critical or High findings detected | Stop |
| 2 | Scan error (network, parse failure) | Stop |
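A pipeline step can dispatch on these codes directly. In the sketch below, `run_scan` is a hypothetical stand-in for `inkog -severity high .` so the logic is runnable on its own; `SCAN_EXIT` simulates each outcome:

```shell
#!/bin/sh
# Dispatch on the scanner's exit code. `run_scan` is a stand-in for
# `inkog -severity high .`; set SCAN_EXIT to simulate each case.
run_scan() { return "${SCAN_EXIT:-0}"; }

run_scan
case $? in
  0) ACTION="continue" ;;
  1) ACTION="stop (findings)" ;;
  2) ACTION="stop (scan error, do not treat as clean)" ;;
  *) ACTION="stop (unexpected exit code)" ;;
esac
echo "pipeline: $ACTION"
```

Note that exit code 2 still stops the pipeline: a failed scan is not evidence of a clean codebase.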
## Basic Usage
```bash
# Scan and fail on high/critical
inkog -severity high .

# Check exit code
echo $?  # 0 = passed, 1 = findings, 2 = error
```

## Severity Filtering
Control which severities trigger a failure:
```bash
# Only fail on critical (permissive)
inkog -severity critical .

# Fail on high and above (recommended)
inkog -severity high .

# Fail on medium and above (strict)
inkog -severity medium .

# Fail on any finding (strictest)
inkog -severity low .
```

## Gate Strategies
### 1. Block on Critical Only
Use during early development or for legacy codebases:
```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity critical .
```

**Pros:** Low friction, catches only showstoppers
**Cons:** High-severity issues may slip through
### 2. Block on High and Above (Recommended)
Standard for production deployments:
```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity high .
```

**Pros:** Balanced security/velocity tradeoff
**Cons:** May block on new patterns
### 3. Block on All Findings
Use for high-security environments:
```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity low .
```

**Pros:** Maximum security
**Cons:** High maintenance, frequent blocks
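All three strategies are the same comparison with a different threshold: the gate blocks when the worst finding's severity is at or above the `-severity` value. A minimal sketch of that ordering (the numeric ranks are illustrative assumptions, not inkog internals):

```shell
#!/bin/sh
# Map severity names to ranks so thresholds compare numerically.
# The ranks are illustrative; they are not inkog's internal values.
rank() {
  case "$1" in
    critical) echo 4 ;;
    high)     echo 3 ;;
    medium)   echo 2 ;;
    low)      echo 1 ;;
  esac
}

THRESHOLD=$(rank high)   # strategy 2: -severity high
WORST=$(rank medium)     # worst finding in this hypothetical report

if [ "$WORST" -ge "$THRESHOLD" ]; then
  GATE=BLOCKED
else
  GATE=PASSED
fi
echo "gate: $GATE"       # medium is below the high threshold, so the gate passes
```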
## Soft Gates
Allow pipeline to continue but record findings:
```yaml
# GitHub Actions - soft gate
- name: Security Scan (Soft)
  run: inkog -output json . > report.json
  continue-on-error: true

- name: Upload Report
  uses: actions/upload-artifact@v4
  with:
    name: security-report
    path: report.json
```

## Environment-Based Gates
Different thresholds for different environments:
```yaml
# GitHub Actions
- name: Security Scan
  run: |
    if [ "${{ github.ref }}" = "refs/heads/main" ]; then
      # Strict on main branch
      inkog -severity high .
    else
      # Permissive on feature branches
      inkog -severity critical .
    fi
```

## JSON Gate Parsing
Programmatic gate decisions:
```bash
#!/bin/bash
# custom-gate.sh
REPORT=$(inkog -output json .)
CRITICAL=$(echo "$REPORT" | jq '.summary.critical')
HIGH=$(echo "$REPORT" | jq '.summary.high')

if [ "$CRITICAL" -gt 0 ]; then
  echo "BLOCKED: $CRITICAL critical findings"
  exit 1
fi

if [ "$HIGH" -gt 2 ]; then
  echo "BLOCKED: Too many high findings ($HIGH)"
  exit 1
fi

echo "PASSED: Within acceptable thresholds"
exit 0
```

## Gate Exceptions
For known false positives or accepted risks, use inline comments (coming soon):
```python
# inkog-ignore: hardcoded_credentials - test fixture
API_KEY = "sk-test-fixture-only"
```

Or maintain an allowlist file:
```yaml
# .inkog-allowlist.yaml
exceptions:
  - file: tests/fixtures/credentials.py
    rule: hardcoded_credentials
    reason: Test fixtures only
    expires: 2024-12-31
```

## Monitoring Gate Trends
Track gate pass rates over time:
```bash
# Record in CI
RESULT=$(inkog -output json .)
STATUS=$(echo "$RESULT" | jq -r '.security_gate.status')
COUNT=$(echo "$RESULT" | jq '.summary.total')

curl -X POST https://metrics.company.com/security \
  -d "repo=$REPO&status=$STATUS&findings=$COUNT"
```

## Best Practices
- **Start permissive, tighten over time** - Begin with critical-only, then add high
- **Different gates for different branches** - Stricter on main, relaxed on feature branches
- **Always save reports** - Upload artifacts for an audit trail
- **Review blocked PRs quickly** - Don't let security become a bottleneck
- **Document exceptions** - Track why specific findings are accepted
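The `expires` field in the allowlist example above only helps if something enforces it. A minimal expiry check, runnable in any POSIX shell (ISO-8601 dates compare correctly once the dashes are stripped):

```shell
#!/bin/sh
# Flag allowlist exceptions whose `expires` date has passed.
is_expired() {
  # $1 = expires date (YYYY-MM-DD), $2 = today's date (YYYY-MM-DD).
  # Strip the dashes so 2024-12-31 becomes 20241231, then compare numerically.
  [ "$(echo "$1" | tr -d '-')" -lt "$(echo "$2" | tr -d '-')" ]
}

TODAY=2025-06-01   # in CI, use: TODAY=$(date +%Y-%m-%d)
if is_expired "2024-12-31" "$TODAY"; then
  echo "exception expired: re-triage the finding"
fi
```

Running a check like this in CI keeps documented exceptions honest instead of letting them accumulate indefinitely.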