
NIST AI RMF

The NIST AI Risk Management Framework (AI RMF) provides guidance for identifying, assessing, and managing risks in AI systems. Inkog maps each finding to the relevant NIST AI RMF category.

Framework Overview

NIST AI RMF is organized into four core functions:

Function   Focus                              Inkog Coverage
GOVERN     Organizational AI governance       Reporting only
MAP        Understanding AI system context    Full detection
MEASURE    Assessing AI risks                 Full detection
MANAGE     Prioritizing and acting on risks   Reporting only

Coverage Matrix

NIST Category   Description                Inkog Rules
MAP 1.1         Input/Output Validation    prompt_injection, output_validation_failures, sql_injection_via_llm
MAP 1.2         Data Governance            logging_sensitive_data, unsafe_env_access, cross_tenant_data_leakage
MAP 1.3         System Reliability         infinite_loop_semantic, context_exhaustion_semantic, token_bombing, recursive_tool_calling
MEASURE 2.2     Security Risk Assessment   hardcoded_credentials, unsafe_deserialization, missing_authentication_check
MEASURE 2.4     AI System Risks            tainted_eval, unvalidated_exec_eval, output_validation_failures

MAP 1.1: Input/Output Validation

Ensuring proper validation of AI system inputs and outputs.

Inkog Rules:

  • prompt_injection - User input in prompts without sanitization
  • output_validation_failures - LLM output used unsafely
  • sql_injection_via_llm - LLM-generated SQL without parameterization

Requirements:

  1. Validate all inputs before processing
  2. Sanitize outputs before use in downstream systems
  3. Implement type checking and schema validation

Example Finding:

api.py:45:1: HIGH [prompt_injection] User input directly embedded in prompt template NIST: MAP 1.1
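
Remediation sketch: a minimal example of the validate-before-embed pattern this category asks for. The helper names, length cap, and marker patterns are illustrative assumptions, not Inkog APIs or NIST requirements.

import re

MAX_INPUT_LEN = 2000  # illustrative limit
INJECTION_MARKERS = re.compile(
    r"ignore (all|previous) instructions|system prompt", re.IGNORECASE
)

def sanitize_user_input(text: str) -> str:
    # Reject oversized or obviously adversarial input before templating.
    if len(text) > MAX_INPUT_LEN:
        raise ValueError("input exceeds length limit")
    if INJECTION_MARKERS.search(text):
        raise ValueError("input contains prompt-injection markers")
    return text

def build_prompt(user_text: str) -> str:
    # Only sanitized input ever reaches the prompt template.
    return f"Answer the customer question:\n{sanitize_user_input(user_text)}"

Marker matching alone is not a complete defense; the point is the structure: no user text reaches a template until it has passed validation.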

MAP 1.2: Data Governance

Protecting data privacy and ensuring proper data handling.

Inkog Rules:

  • logging_sensitive_data - PII or secrets in logs
  • unsafe_env_access - Environment variable exposure
  • cross_tenant_data_leakage - Multi-tenant data exposure

Requirements:

  1. Redact sensitive data from logs
  2. Implement proper access controls
  3. Ensure tenant data isolation

Example Finding:

logging.py:23:1: MEDIUM [logging_sensitive_data] LLM responses logged without sanitization NIST: MAP 1.2
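
A minimal redaction sketch for this category, using Python's standard logging module. The two patterns are illustrative; real deployments need broader PII coverage.

import logging
import re

log = logging.getLogger("app")

REDACTIONS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.\w+"), "[REDACTED_EMAIL]"),
]

def redact(text: str) -> str:
    # Apply every pattern before the text reaches any log handler.
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def log_llm_response(response: str) -> None:
    log.info("llm response: %s", redact(response))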

MAP 1.3: System Reliability

Ensuring AI systems operate reliably and predictably.

Inkog Rules:

  • infinite_loop_semantic - Unbounded processing loops
  • context_exhaustion_semantic - Resource exhaustion via context overflow
  • token_bombing - Excessive token consumption
  • recursive_tool_calling - Infinite tool recursion

Requirements:

  1. Implement termination guarantees
  2. Set resource limits and timeouts
  3. Monitor system behavior for anomalies

Example Finding:

agent.py:78:1: CRITICAL [infinite_loop_semantic] Loop depends on LLM output without termination guarantee NIST: MAP 1.3
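
A sketch of the termination guarantees MAP 1.3 asks for: a hard step cap plus a wall-clock deadline, so the loop cannot run forever even if the model never signals completion. call_llm is a hypothetical callable returning a dict; the budgets are illustrative.

import time

MAX_STEPS = 10          # illustrative step budget
DEADLINE_SECONDS = 30.0

def run_agent(task, call_llm):
    deadline = time.monotonic() + DEADLINE_SECONDS
    for _ in range(MAX_STEPS):  # step cap: the loop is bounded by construction
        if time.monotonic() > deadline:
            raise TimeoutError("agent exceeded wall-clock budget")
        result = call_llm(task)
        if result.get("done"):  # model output alone never controls termination
            return result
        task = result.get("next_task", task)
    raise RuntimeError("agent hit step limit without finishing")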

MEASURE 2.2: Security Risk Assessment

Evaluating security risks in AI systems.

Inkog Rules:

  • hardcoded_credentials - API keys and secrets in code
  • unsafe_deserialization - Pickle/YAML code execution
  • missing_authentication_check - Unauthenticated endpoints

Requirements:

  1. Regular security scanning
  2. Credential rotation and management
  3. Authentication and authorization controls

Example Finding:

config.py:12:1: CRITICAL [hardcoded_credentials] API key hardcoded in source file NIST: MEASURE 2.2
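
A sketch of the corresponding fixes: read secrets from the environment at runtime rather than from source, and use safe deserialization. The variable name SERVICE_API_KEY is illustrative, and the YAML example assumes PyYAML is installed.

import os
import yaml

def get_api_key() -> str:
    # Never a literal in source; the key lives in the deployment environment.
    key = os.environ.get("SERVICE_API_KEY")
    if not key:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key

def load_config(path: str) -> dict:
    with open(path) as fh:
        # safe_load builds only plain data types, never arbitrary objects.
        return yaml.safe_load(fh)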

MEASURE 2.4: AI System Risks

Assessing risks specific to AI and ML systems.

Inkog Rules:

  • tainted_eval - LLM output in code execution
  • unvalidated_exec_eval - Unsafe command execution
  • output_validation_failures - Unvalidated LLM outputs

Requirements:

  1. Sandbox code execution
  2. Validate all AI-generated content
  3. Implement human oversight for sensitive operations

Example Finding:

tools.py:56:1: CRITICAL [tainted_eval] LLM-generated code executed without validation NIST: MEASURE 2.4
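
One way to avoid the tainted_eval pattern is to treat model output as data, not code. A minimal sketch: ast.literal_eval from the standard library parses literals only and rejects calls, imports, and attribute access; anything richer belongs in a real sandbox with human review.

import ast

def run_model_expression(expr: str):
    try:
        # Accepts numbers, strings, tuples, lists, dicts, sets, booleans, None.
        return ast.literal_eval(expr)
    except (ValueError, SyntaxError) as err:
        raise ValueError(f"rejected non-literal model output: {err}") from err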

Compliance Report

Inkog generates NIST AI RMF compliance reports:

{ "nist_ai_rmf": { "MAP_1_1": 3, "MAP_1_2": 1, "MAP_1_3": 2, "MEASURE_2_2": 1, "MEASURE_2_4": 2, "total_violations": 9 } }

Audit Integration

For NIST compliance audits:

# Generate timestamped report
inkog -output json . > "nist-audit-$(date +%Y%m%d).json"

# Extract NIST violations
jq '.compliance_report.nist_ai_rmf' nist-audit-*.json
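
The same report can gate CI. A sketch, assuming the full report nests the section under a compliance_report key as the jq filter above implies; the zero-violation threshold is illustrative.

import json
import sys

report_path = sys.argv[1]  # e.g. the nist-audit-*.json file generated above
with open(report_path) as fh:
    report = json.load(fh)

nist = report["compliance_report"]["nist_ai_rmf"]
if nist["total_violations"] > 0:
    print(f"NIST AI RMF violations: {nist['total_violations']}", file=sys.stderr)
    sys.exit(1)
print("no NIST AI RMF violations")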

Resources

  • NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
