
Feedback API

The Feedback API enables self-learning confidence calibration. When you submit feedback on findings, the system uses Bayesian updating to refine its confidence scores over time.

Submit Feedback

Report whether a finding was a true positive, false positive, or uncertain.

POST /v1/feedback

Request Body

{ "finding_id": "abc123", "rule_id": "universal_prompt_injection", "pattern_id": "instruction_override", "type": "false_positive", "original_confidence": 0.85, "notes": "This is a legitimate system prompt, not injection", "file_path": "agent.py", "line_number": 42, "framework": "langchain", "severity": "HIGH" }

Parameters

Field                Type     Required  Description
finding_id           string   No        Unique ID of the finding
rule_id              string   Yes       Rule that generated the finding
pattern_id           string   No        Specific pattern within the rule
type                 string   Yes       true_positive, false_positive, or uncertain
original_confidence  number   No        Original confidence (0-1)
notes                string   No        Optional explanation
file_path            string   No        Path to the scanned file
line_number          integer  No        Line number of the finding
framework            string   No        Framework (langchain, crewai, n8n)
severity             string   No        Severity level

Response

{ "success": true, "result": { "rule_id": "universal_prompt_injection", "old_confidence": 0.85, "new_confidence": 0.78, "samples_used": 15, "reliability": "moderate" } }

Example

curl -X POST https://api.inkog.io/v1/feedback \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "rule_id": "universal_prompt_injection",
    "type": "false_positive",
    "original_confidence": 0.85
  }'
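
Equivalently, here is a minimal Python sketch using the requests library. The endpoint, headers, and payload mirror the curl call above; the variable names are illustrative and error handling is omitted:

import requests

API_KEY = "YOUR_API_KEY"  # replace with a real key

# Same endpoint and bearer-token auth as the curl example above.
resp = requests.post(
    "https://api.inkog.io/v1/feedback",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "rule_id": "universal_prompt_injection",
        "type": "false_positive",
        "original_confidence": 0.85,
    },
)
resp.raise_for_status()
print(resp.json()["result"]["new_confidence"])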

Get Calibrations

Retrieve calibration data for all rules or a specific rule.

GET /v1/feedback
GET /v1/feedback?rule_id=universal_prompt_injection

Response (All Rules)

{ "success": true, "calibrations": [ { "rule_id": "universal_prompt_injection", "base_confidence": 0.85, "calibrated_confidence": 0.78, "total_samples": 25, "true_positive_count": 18, "false_positive_count": 5, "uncertain_count": 2, "reliability": "moderate", "last_updated": "2024-12-25T10:30:00Z" } ], "recommendations": [ { "rule_id": "universal_sql_injection", "priority": "high", "reason": "Low sample count (3), high variance" } ], "total_rules": 15 }

Response (Specific Rule)

{ "success": true, "calibration": { "rule_id": "universal_prompt_injection", "base_confidence": 0.85, "calibrated_confidence": 0.78, "total_samples": 25, "credible_interval": [0.72, 0.84], "reliability": "moderate" }, "summary": { "true_positive_count": 18, "false_positive_count": 5, "uncertain_count": 2, "accuracy_rate": 0.72 } }

How Calibration Works

Inkog uses Bayesian confidence updating to improve over time:

  1. Prior: Each rule has a base confidence from YAML configuration
  2. Likelihood: User feedback indicates true/false positive rate
  3. Posterior: The calibrated confidence is computed as (see the sketch below):

     calibrated = (prior × weight + true_rate × samples) / (weight + samples)
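
A minimal Python sketch of this update. The formula is the one above; the default weight of 10 (a pseudo-count for the prior) is an assumption for illustration, not a documented Inkog value:

def calibrate(prior: float, true_rate: float, samples: int, weight: float = 10.0) -> float:
    """Weighted average of the YAML prior and the observed true-positive rate.

    `weight` acts as a pseudo-count for the prior; its actual value in
    Inkog is an assumption here.
    """
    return (prior * weight + true_rate * samples) / (weight + samples)

# Example: prior 0.85, 18 true positives out of 25 feedback samples
print(calibrate(0.85, 18 / 25, 25))  # ≈ 0.757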

Reliability Levels

Level         Sample Count  Description
insufficient  < 5           Too few samples for reliable calibration
low           5-10          Preliminary calibration, may change significantly
moderate      11-30         Reasonably stable, calibration is useful
high          31-100        Stable calibration, high confidence in adjustment
very_high     > 100         Very stable, unlikely to change significantly

Finding Response with Calibration

When you scan code, findings include calibration data:

{ "findings": [ { "pattern_id": "universal_prompt_injection", "severity": "HIGH", "confidence": 0.85, "calibrated_confidence": 0.78, "calibration_reliability": "moderate", "calibration_samples": 25, "message": "Prompt injection vulnerability detected" } ] }

Use calibrated_confidence for decisions. When calibration_reliability is "moderate" or higher, the calibrated score reflects real-world accuracy better than the base confidence.
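
For example, a minimal sketch that prefers the calibrated score once reliability is moderate or better. The field names come from the finding payload above; the 0.7 threshold is an illustrative choice, not a recommended default:

RELIABLE = {"moderate", "high", "very_high"}

def effective_confidence(finding: dict) -> float:
    # Prefer the calibrated score once enough feedback has accumulated.
    if finding.get("calibration_reliability") in RELIABLE:
        return finding["calibrated_confidence"]
    return finding["confidence"]

findings = [
    {
        "pattern_id": "universal_prompt_injection",
        "confidence": 0.85,
        "calibrated_confidence": 0.78,
        "calibration_reliability": "moderate",
    }
]
actionable = [f for f in findings if effective_confidence(f) >= 0.7]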
