NIST AI RMF
The NIST AI Risk Management Framework provides guidance for managing risks in AI systems. Inkog maps findings to relevant NIST AI RMF categories.
Framework Overview
NIST AI RMF is organized into four core functions:
| Function | Focus | Inkog Coverage |
|---|---|---|
| GOVERN | Organizational AI governance | Reporting only |
| MAP | Understanding AI system context | Full detection |
| MEASURE | Assessing AI risks | Full detection |
| MANAGE | Prioritizing and acting on risks | Reporting only |
Coverage Matrix
| NIST Category | Description | Inkog Rules |
|---|---|---|
| MAP 1.1 | Input/Output Validation | prompt_injection, output_validation_failures, sql_injection_via_llm |
| MAP 1.2 | Data Governance | logging_sensitive_data, unsafe_env_access, cross_tenant_data_leakage |
| MAP 1.3 | System Reliability | infinite_loop_semantic, context_exhaustion_semantic, token_bombing, recursive_tool_calling |
| MEASURE 2.2 | Security Risk Assessment | hardcoded_credentials, unsafe_deserialization, missing_authentication_check |
| MEASURE 2.4 | AI System Risks | tainted_eval, unvalidated_exec_eval, output_validation_failures |
MAP 1.1: Input/Output Validation
Ensuring proper validation of AI system inputs and outputs.
Inkog Rules:
- prompt_injection - User input in prompts without sanitization
- output_validation_failures - LLM output used unsafely
- sql_injection_via_llm - LLM-generated SQL without parameterization
Requirements:
- Validate all inputs before processing
- Sanitize outputs before use in downstream systems
- Implement type checking and schema validation
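The requirements above can be sketched in Python. This is a minimal illustration, not an Inkog API: the `MAX_INPUT_LEN` limit, the control-character filter, and the `render_prompt` helper are all assumptions made for the example.

```python
import re

MAX_INPUT_LEN = 2000  # illustrative limit, not an Inkog setting


def sanitize_user_input(text: str) -> str:
    """Validate user input before it is embedded in a prompt template."""
    if len(text) > MAX_INPUT_LEN:
        raise ValueError("input exceeds length limit")
    # Strip non-printable control characters that could smuggle instructions
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)


def render_prompt(user_input: str) -> str:
    # Keep user text clearly delimited instead of splicing it into instructions
    safe = sanitize_user_input(user_input)
    return f"Answer the question below.\n<user_input>\n{safe}\n</user_input>"


print(render_prompt("What is NIST AI RMF?"))
```

Delimiting the user text is not a complete defense against prompt injection, but combined with length and character validation it addresses the pattern the `prompt_injection` rule flags: raw input spliced directly into the instruction text.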
Example Finding:
```
api.py:45:1: HIGH [prompt_injection]
User input directly embedded in prompt template
NIST: MAP 1.1
```

MAP 1.2: Data Governance
Protecting data privacy and ensuring proper data handling.
Inkog Rules:
- logging_sensitive_data - PII or secrets in logs
- unsafe_env_access - Environment variable exposure
- cross_tenant_data_leakage - Multi-tenant data exposure
Requirements:
- Redact sensitive data from logs
- Implement proper access controls
- Ensure tenant data isolation
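One way to meet the log-redaction requirement is a logging filter that scrubs records before any handler sees them. This is a sketch under stated assumptions: the regex patterns cover only two example formats (an `sk-`-prefixed API key and an email address), and the `RedactingFilter` name is invented for this example.

```python
import logging
import re

# Illustrative patterns only; a real deployment would cover more PII/secret formats
REDACTION_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{16,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]


class RedactingFilter(logging.Filter):
    """Redact sensitive substrings before a record reaches any handler."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, replacement in REDACTION_PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg, record.args = msg, None
        return True


logger = logging.getLogger("llm")
logger.addFilter(RedactingFilter())
logger.warning("LLM response mentioned sk-abcdef1234567890abcd")
```

Attaching the filter at the logger level ensures redaction happens once, regardless of how many handlers are configured downstream.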
Example Finding:
```
logging.py:23:1: MEDIUM [logging_sensitive_data]
LLM responses logged without sanitization
NIST: MAP 1.2
```

MAP 1.3: System Reliability
Ensuring AI systems operate reliably and predictably.
Inkog Rules:
- infinite_loop_semantic - Unbounded processing loops
- context_exhaustion_semantic - Resource exhaustion via context overflow
- token_bombing - Excessive token consumption
- recursive_tool_calling - Infinite tool recursion
Requirements:
- Implement termination guarantees
- Set resource limits and timeouts
- Monitor system behavior for anomalies
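A termination guarantee for an agent loop can be sketched as below. The iteration cap, time budget, and `run_agent_loop` helper are illustrative assumptions for this example; the point is that the loop's exit does not depend solely on LLM output, which is what `infinite_loop_semantic` flags.

```python
import time

MAX_ITERATIONS = 10       # illustrative caps, not Inkog configuration
MAX_WALL_SECONDS = 30.0


def run_agent_loop(step) -> str:
    """Drive an LLM-style step function with hard termination guarantees."""
    deadline = time.monotonic() + MAX_WALL_SECONDS
    for i in range(MAX_ITERATIONS):         # iteration cap
        if time.monotonic() > deadline:     # wall-clock timeout
            raise TimeoutError("agent loop exceeded time budget")
        result = step(i)
        if result is not None:              # model signalled completion
            return result
    raise RuntimeError("agent loop hit iteration limit without finishing")


# A stand-in for an LLM call: finishes on the third step
print(run_agent_loop(lambda i: "done" if i == 2 else None))
```

Both limits raise rather than silently stopping, so a stuck agent surfaces as a monitorable error instead of an unbounded process.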
Example Finding:
```
agent.py:78:1: CRITICAL [infinite_loop_semantic]
Loop depends on LLM output without termination guarantee
NIST: MAP 1.3
```

MEASURE 2.2: Security Risk Assessment
Evaluating security risks in AI systems.
Inkog Rules:
- hardcoded_credentials - API keys and secrets in code
- unsafe_deserialization - Pickle/YAML code execution
- missing_authentication_check - Unauthenticated endpoints
Requirements:
- Regular security scanning
- Credential rotation and management
- Authentication and authorization controls
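The fix for `hardcoded_credentials` is to resolve secrets at runtime rather than embedding them in source. A minimal sketch, assuming the environment variable name `OPENAI_API_KEY` purely as an example:

```python
import os


def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read a credential from the environment instead of source code."""
    key = os.environ.get(var)
    if not key:
        # Fail loudly at startup rather than falling back to a baked-in value
        raise RuntimeError(f"{var} is not set; provide it via your secret manager")
    return key
```

In production the environment variable would itself be injected by a secret manager, which also makes rotation possible without a code change.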
Example Finding:
```
config.py:12:1: CRITICAL [hardcoded_credentials]
API key hardcoded in source file
NIST: MEASURE 2.2
```

MEASURE 2.4: AI System Risks
Assessing risks specific to AI and ML systems.
Inkog Rules:
- tainted_eval - LLM output in code execution
- unvalidated_exec_eval - Unsafe command execution
- output_validation_failures - Unvalidated LLM outputs
Requirements:
- Sandbox code execution
- Validate all AI-generated content
- Implement human oversight for sensitive operations
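Instead of passing LLM output to `eval()` (the pattern `tainted_eval` flags), model output can be parsed as literal data and checked against an expected shape. This sketch assumes the caller wants a list of numbers; the function name and schema are invented for the example.

```python
import ast


def parse_llm_number_list(llm_output: str) -> list[float]:
    """Validate LLM output against an expected shape instead of eval()."""
    try:
        # literal_eval accepts only Python literals; it cannot execute code
        value = ast.literal_eval(llm_output)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"LLM output is not a literal: {exc}") from exc
    if not (isinstance(value, list)
            and all(isinstance(x, (int, float)) for x in value)):
        raise ValueError("LLM output is not a list of numbers")
    return [float(x) for x in value]


print(parse_llm_number_list("[1, 2.5, 3]"))
```

Anything that genuinely must run generated code belongs in a sandbox with human review, per the requirements above; literal parsing covers the common case where the model is only supposed to return data.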
Example Finding:
```
tools.py:56:1: CRITICAL [tainted_eval]
LLM-generated code executed without validation
NIST: MEASURE 2.4
```

Compliance Report
Inkog generates NIST AI RMF compliance reports:
```json
{
  "nist_ai_rmf": {
    "MAP_1_1": 3,
    "MAP_1_2": 1,
    "MAP_1_3": 2,
    "MEASURE_2_2": 1,
    "MEASURE_2_4": 2,
    "total_violations": 9
  }
}
```

Audit Integration
For NIST compliance audits:
```bash
# Generate timestamped report
inkog -output json . > "nist-audit-$(date +%Y%m%d).json"

# Extract NIST violations
jq '.compliance_report.nist_ai_rmf' nist-audit-*.json
```

Resources