# Feedback API
The Feedback API enables self-learning confidence calibration. When you submit feedback on findings, the system uses Bayesian updating to improve its confidence scores over time.
## Submit Feedback
Report whether a finding was a true positive, false positive, or uncertain.
```
POST /v1/feedback
```

### Request Body

```json
{
  "finding_id": "abc123",
  "rule_id": "universal_prompt_injection",
  "pattern_id": "instruction_override",
  "type": "false_positive",
  "original_confidence": 0.85,
  "notes": "This is a legitimate system prompt, not injection",
  "file_path": "agent.py",
  "line_number": 42,
  "framework": "langchain",
  "severity": "HIGH"
}
```

### Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| `finding_id` | string | No | Unique ID of the finding |
| `rule_id` | string | Yes | Rule that generated the finding |
| `pattern_id` | string | No | Specific pattern within the rule |
| `type` | string | Yes | One of `true_positive`, `false_positive`, or `uncertain` |
| `original_confidence` | number | No | Original confidence (0–1) |
| `notes` | string | No | Optional explanation |
| `file_path` | string | No | Path to the scanned file |
| `line_number` | integer | No | Line number of the finding |
| `framework` | string | No | Framework (`langchain`, `crewai`, `n8n`) |
| `severity` | string | No | Severity level |
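
Since only `rule_id` and `type` are required, a minimal request can be as small as:

```json
{
  "rule_id": "universal_prompt_injection",
  "type": "true_positive"
}
```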
### Response

```json
{
  "success": true,
  "result": {
    "rule_id": "universal_prompt_injection",
    "old_confidence": 0.85,
    "new_confidence": 0.78,
    "samples_used": 15,
    "reliability": "moderate"
  }
}
```

### Example

```bash
curl -X POST https://api.inkog.io/v1/feedback \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "rule_id": "universal_prompt_injection",
    "type": "false_positive",
    "original_confidence": 0.85
  }'
```

## Get Calibrations

Retrieve calibration data for all rules or a specific rule.
```
GET /v1/feedback
GET /v1/feedback?rule_id=universal_prompt_injection
```
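
For example, to fetch the calibration for a single rule:

```bash
curl "https://api.inkog.io/v1/feedback?rule_id=universal_prompt_injection" \
  -H "Authorization: Bearer YOUR_API_KEY"
```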

### Response (All Rules)

```json
{
  "success": true,
  "calibrations": [
    {
      "rule_id": "universal_prompt_injection",
      "base_confidence": 0.85,
      "calibrated_confidence": 0.78,
      "total_samples": 25,
      "true_positive_count": 18,
      "false_positive_count": 5,
      "uncertain_count": 2,
      "reliability": "moderate",
      "last_updated": "2024-12-25T10:30:00Z"
    }
  ],
  "recommendations": [
    {
      "rule_id": "universal_sql_injection",
      "priority": "high",
      "reason": "Low sample count (3), high variance"
    }
  ],
  "total_rules": 15
}
```

### Response (Specific Rule)
```json
{
  "success": true,
  "calibration": {
    "rule_id": "universal_prompt_injection",
    "base_confidence": 0.85,
    "calibrated_confidence": 0.78,
    "total_samples": 25,
    "credible_interval": [0.72, 0.84],
    "reliability": "moderate"
  },
  "summary": {
    "true_positive_count": 18,
    "false_positive_count": 5,
    "uncertain_count": 2,
    "accuracy_rate": 0.72
  }
}
```

## How Calibration Works
Inkog uses Bayesian confidence updating to refine scores over time:

- Prior: each rule starts with a base confidence from its YAML configuration
- Likelihood: user feedback provides the observed true/false positive rate
- Posterior: the calibrated confidence is a sample-weighted blend of the two, computed as shown below:

```
calibrated = (prior × weight + true_rate × samples) / (weight + samples)
```
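
As a worked example, take the numbers from the specific-rule response above (18 true positives in 25 samples, so `true_rate = 0.72`) and a hypothetical prior weight of 10; the actual weight is an internal tuning parameter:

```
calibrated = (0.85 × 10 + 0.72 × 25) / (10 + 25)
           = (8.5 + 18.0) / 35
           ≈ 0.76
```

With few samples the prior dominates; as feedback accumulates, the observed rate takes over.
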
## Reliability Levels

| Level | Sample Count | Description |
|---|---|---|
| `insufficient` | < 5 | Too few samples for reliable calibration |
| `low` | 5–10 | Preliminary calibration, may change significantly |
| `moderate` | 11–30 | Reasonably stable, calibration is useful |
| `high` | 31–100 | Stable calibration, high confidence in adjustment |
| `very_high` | > 100 | Very stable, unlikely to change significantly |
## Finding Response with Calibration

When you scan code, findings now include calibration data:

```json
{
  "findings": [
    {
      "pattern_id": "universal_prompt_injection",
      "severity": "HIGH",
      "confidence": 0.85,
      "calibrated_confidence": 0.78,
      "calibration_reliability": "moderate",
      "calibration_samples": 25,
      "message": "Prompt injection vulnerability detected"
    }
  ]
}
```

Use `calibrated_confidence` for decisions: if `calibration_reliability` is "moderate" or higher, the calibrated score reflects real-world accuracy better than the raw `confidence`.
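
As a sketch of that advice in practice, assuming a scan response saved to a local `scan.json` file, you can filter findings down to those with trustworthy calibration and rank them by calibrated score:

```bash
# Keep findings whose calibration is "moderate" or better, then sort by
# calibrated confidence (highest first). The 0.7 threshold is an
# arbitrary example, not an API-recommended value.
jq '[.findings[]
     | select(.calibration_reliability == "moderate"
              or .calibration_reliability == "high"
              or .calibration_reliability == "very_high")
     | select(.calibrated_confidence >= 0.7)]
    | sort_by(.calibrated_confidence) | reverse' scan.json
```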