
Security Gates

Security gates determine whether a CI/CD pipeline should pass or fail based on scan results.

Gate Status

Inkog uses a simple gate model:

| Status  | Condition                    | Exit Code |
|---------|------------------------------|-----------|
| PASSED  | 0 Critical + 0 High findings | 0         |
| BLOCKED | Any Critical or High finding | 1         |
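The gate decision above can be sketched as a small shell function. This is an illustration of the model, not inkog code; `gate_status` is a hypothetical helper name:

```bash
# Sketch of the gate model: any Critical or High finding
# flips the gate to BLOCKED and exits nonzero.
gate_status() {
  local critical=$1 high=$2
  if [ "$critical" -gt 0 ] || [ "$high" -gt 0 ]; then
    echo "BLOCKED"
    return 1
  fi
  echo "PASSED"
  return 0
}
```

Medium and Low findings are reported but do not affect the gate under the default model.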

Exit Codes

| Code | Meaning                            | Pipeline Action |
|------|------------------------------------|-----------------|
| 0    | No findings or only Medium/Low     | Continue        |
| 1    | Critical or High findings detected | Stop            |
| 2    | Scan error (network, parse failure)| Stop            |
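In a custom pipeline step, the exit-code table can be mapped to actions explicitly. A minimal sketch; `pipeline_action` is a hypothetical helper, not part of inkog:

```bash
# Translate an inkog exit code into the pipeline action
# from the table above.
pipeline_action() {
  case "$1" in
    0) echo "Continue" ;;
    1) echo "Stop: findings detected" ;;
    2) echo "Stop: scan error" ;;
    *) echo "Stop: unknown exit code $1" ;;
  esac
}

# Typical wiring: run the scan, then act on $?
# inkog -severity high .
# pipeline_action $?
```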

Basic Usage

```bash
# Scan and fail on high/critical
inkog -severity high .

# Check exit code
echo $?  # 0 = passed, 1 = findings, 2 = error
```

Severity Filtering

Control which severities trigger a failure:

```bash
# Only fail on critical (permissive)
inkog -severity critical .

# Fail on high and above (recommended)
inkog -severity high .

# Fail on medium and above (strict)
inkog -severity medium .

# Fail on any finding (strictest)
inkog -severity low .
```

Gate Strategies

1. Block on Critical Only

Use during early development or for legacy codebases:

```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity critical .
```

- Pros: Low friction, catches only showstoppers
- Cons: High-severity issues may slip through

2. Block on High and Above

Standard for production deployments:

```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity high .
```

- Pros: Balanced security/velocity tradeoff
- Cons: May block on new patterns

3. Block on All Findings

Use for high-security environments:

```yaml
# GitHub Actions
- name: Security Scan
  run: inkog -severity low .
```

- Pros: Maximum security
- Cons: High maintenance, frequent blocks

Soft Gates

Allow the pipeline to continue but record findings:

```yaml
# GitHub Actions - soft gate
- name: Security Scan (Soft)
  run: inkog -output json . > report.json
  continue-on-error: true

- name: Upload Report
  uses: actions/upload-artifact@v4
  with:
    name: security-report
    path: report.json
```

Environment-Based Gates

Different thresholds for different environments:

```yaml
# GitHub Actions
- name: Security Scan
  run: |
    if [ "${{ github.ref }}" = "refs/heads/main" ]; then
      # Strict on main branch
      inkog -severity high .
    else
      # Permissive on feature branches
      inkog -severity critical .
    fi
```

JSON Gate Parsing

Programmatic gate decisions:

```bash
#!/bin/bash
# custom-gate.sh
REPORT=$(inkog -output json .)
CRITICAL=$(echo "$REPORT" | jq '.summary.critical')
HIGH=$(echo "$REPORT" | jq '.summary.high')

if [ "$CRITICAL" -gt 0 ]; then
  echo "BLOCKED: $CRITICAL critical findings"
  exit 1
fi

if [ "$HIGH" -gt 2 ]; then
  echo "BLOCKED: Too many high findings ($HIGH)"
  exit 1
fi

echo "PASSED: Within acceptable thresholds"
exit 0
```

Gate Exceptions

For known false positives or accepted risks, use inline comments (coming soon):

```python
# inkog-ignore: hardcoded_credentials - test fixture
API_KEY = "sk-test-fixture-only"
```

Or maintain an allowlist file:

```yaml
# .inkog-allowlist.yaml
exceptions:
  - file: tests/fixtures/credentials.py
    rule: hardcoded_credentials
    reason: Test fixtures only
    expires: 2024-12-31
```
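Since each exception carries an `expires` date, a CI step can flag stale entries so exceptions don't outlive their justification. A sketch under the assumption that dates are ISO-formatted (which compare correctly as plain strings); `is_expired` is a hypothetical helper, not an inkog feature:

```bash
# Return success (0) if the given ISO expiry date is in the past.
is_expired() {
  local expires=$1
  local today
  today=$(date +%F)  # ISO date, e.g. 2024-06-01
  [ "$expires" \< "$today" ]
}
```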

Gate Metrics

Track gate pass rates over time:

```bash
# Record in CI
RESULT=$(inkog -output json .)
STATUS=$(echo "$RESULT" | jq -r '.security_gate.status')
COUNT=$(echo "$RESULT" | jq '.summary.total')
curl -X POST https://metrics.company.com/security \
  -d "repo=$REPO&status=$STATUS&findings=$COUNT"
```

Best Practices

  1. Start permissive, tighten over time - Begin with critical-only, then add high
  2. Different gates for different branches - Stricter on main, relaxed on feature
  3. Always save reports - Upload artifacts for audit trail
  4. Review blocked PRs quickly - Don’t let security become a bottleneck
  5. Document exceptions - Track why specific findings are accepted