# Security Scoring
Inkog uses a consistent scoring system to help you understand the severity of findings and prioritize remediation.
## Severity Levels
Each finding is assigned one of four severity levels:
| Severity | Description |
|---|---|
| Critical | Immediate exploitation possible, full system compromise |
| High | Significant risk, exploitation likely |
| Medium | Moderate risk, exploitation requires specific conditions |
| Low | Minor risk, limited impact |
## Security Grades
Your findings determine your security grade:
| Grade | Status |
|---|---|
| A | Excellent - No findings |
| B | Good - Minor issues only |
| C | Moderate - Address soon |
| D | Needs Work - High priority |
| F | Critical - Immediate action required |
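As a rough illustration, the grade table can be read as a mapping from the worst severity present to a letter grade. The thresholds below are an assumption inferred from the table's descriptions, not Inkog's exact grading algorithm:

```python
from collections import Counter

def grade(severities):
    """Derive a letter grade from a list of finding severities.

    Illustrative only: assumes the grade is driven by the worst
    severity present, per the table's descriptions.
    """
    counts = Counter(severities)
    if counts["critical"]:
        return "F"   # Critical - immediate action required
    if counts["high"]:
        return "D"   # Needs work - high priority
    if counts["medium"]:
        return "C"   # Moderate - address soon
    if counts["low"]:
        return "B"   # Good - minor issues only
    return "A"       # Excellent - no findings
```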
## HTML Report Grades

The HTML output includes a visual grade badge:

```text
┌─────────────────────────────┐
│       SECURITY GRADE        │
│                             │
│            ┌───┐            │
│            │ B │            │
│            └───┘            │
│                             │
│       Status: PASSED        │
└─────────────────────────────┘
```

## Interpreting Results
### Clean Scan (Grade A)

```text
✓ Security scan complete
Grade: A
Findings: 0
```

Action: No immediate action needed. Consider periodic rescanning.
### Minor Issues (Grade B)

```text
⚠ Security scan complete
Grade: B
Findings: 3 low
```

Action: Address low-severity findings during regular maintenance.
### Moderate Issues (Grade C)

```text
⚠ Security scan complete
Grade: C
Findings: 1 high, 2 medium
```

Action: Prioritize high findings, schedule medium findings.
### Serious Issues (Grade D/F)

```text
✗ Security scan complete
Grade: F
Findings: 2 critical, 3 high
```

Action: Stop deployment. Address critical findings immediately.
## Severity Filtering

Control which severities you want to see:

```shell
# Only show critical
inkog -severity critical .

# Show high and above (recommended for CI/CD)
inkog -severity high .

# Show all findings
inkog -severity low .
```

See Security Gates for CI/CD configuration.
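The `-severity` flag is a threshold: each level includes everything above it. The same "at or above" semantics can be applied to exported findings programmatically. A minimal sketch, assuming findings are dictionaries with a `severity` field (the field name is an assumption about the output format):

```python
# Rank severities so a threshold check mirrors `-severity <level>`.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def at_or_above(findings, threshold):
    """Keep findings whose severity is at or above the threshold."""
    cutoff = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= cutoff]
```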
## Confidence Calibration
Every finding includes a confidence score (0-1) indicating how likely it is to be a true positive. Inkog uses self-learning Bayesian calibration to improve these scores over time.
### How It Works
- Base Confidence: Each detection rule has an initial confidence from our testing
- User Feedback: When you mark findings as true/false positives via the Feedback API, the system learns
- Calibrated Confidence: The adjusted score reflects real-world accuracy
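The three steps above can be sketched as a Beta-Bernoulli update: treat the base confidence as a prior and feedback as observations. This is a generic Bayesian calibration sketch, not Inkog's exact formula; the prior strength (a pseudo-count of 10) is an assumption:

```python
def calibrated_confidence(base, true_positives, false_positives):
    """Adjust a rule's base confidence using user feedback.

    Beta-Bernoulli update: the base confidence acts as a prior with an
    assumed strength of 10 pseudo-observations; confirmed true/false
    positives shift the estimate toward the observed accuracy.
    """
    prior_strength = 10
    alpha = base * prior_strength + true_positives
    beta = (1 - base) * prior_strength + false_positives
    return alpha / (alpha + beta)
```

With no feedback the score stays at the base confidence; a run of confirmed false positives pulls it down, and confirmed true positives push it up.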
### Interpreting Confidence
| Confidence | Interpretation |
|---|---|
| 0.90+ | Very high confidence, almost certainly a real issue |
| 0.70-0.90 | High confidence, likely a real issue |
| 0.50-0.70 | Moderate confidence, requires review |
| < 0.50 | Low confidence, may be a false positive |
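In code, these bands reduce to a simple threshold check. The handling of the exact boundary values (0.50, 0.70, 0.90) is an assumption, since the table leaves it ambiguous:

```python
def interpret(confidence):
    """Map a confidence score (0-1) to its interpretation band."""
    if confidence >= 0.90:
        return "very high"   # almost certainly a real issue
    if confidence >= 0.70:
        return "high"        # likely a real issue
    if confidence >= 0.50:
        return "moderate"    # requires review
    return "low"             # may be a false positive
```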
### Calibration Reliability
The calibration reliability indicates how trustworthy the calibrated score is:
| Reliability | Sample Count | Meaning |
|---|---|---|
| insufficient | < 5 | Use base confidence instead |
| low | 5-10 | Calibration is preliminary |
| moderate | 11-30 | Calibration is reasonably stable |
| high | 31-100 | Calibration is reliable |
| very_high | > 100 | Calibration is very reliable |
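A sketch of how a consumer might apply this table: label the reliability tier from the sample count, and fall back to the base confidence when calibration is `insufficient`. The fallback rule comes from the table; everything else here is illustrative:

```python
def reliability(sample_count):
    """Label calibration reliability per the tiers in the table."""
    if sample_count < 5:
        return "insufficient"
    if sample_count <= 10:
        return "low"
    if sample_count <= 30:
        return "moderate"
    if sample_count <= 100:
        return "high"
    return "very_high"

def effective_confidence(base, calibrated, sample_count):
    """Use the calibrated score unless calibration is insufficient."""
    if reliability(sample_count) == "insufficient":
        return base
    return calibrated
```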
Help improve accuracy: submitting feedback on findings refines calibration for everyone. See the Feedback API for details.