
Azure AI Foundry

Static analysis for Azure AI Foundry agents, Azure OpenAI applications, and Azure AI Agent Service projects to detect unsafe tool use, missing oversight, and compliance gaps.

Quick Start

inkog scan ./my-azure-ai-project

What Inkog Detects

| Finding | Severity | Description |
| --- | --- | --- |
| Unsafe Code Interpreter | CRITICAL | Code interpreter tool without sandboxing |
| SSRF via Function Calling | CRITICAL | Function tools making unvalidated HTTP requests |
| Missing Human Oversight | HIGH | Agent actions without approval workflow |
| Unbounded Agent Loop | HIGH | Agent run loop without max_turns or timeout |
| Hardcoded API Keys | CRITICAL | Azure OpenAI keys embedded in source code |

Unsafe Function Tools

Azure AI Agent Service supports function calling. Function tools that make external requests without input validation are exploitable, for example via SSRF.

Vulnerable
Function tool fetches any URL without validation
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool

def fetch_data(url: str) -> str:
  """Fetch data from any URL."""
  import requests
  return requests.get(url).text  # SSRF risk

functions = FunctionTool(functions=[fetch_data])
agent = project_client.agents.create_agent(
  model="gpt-4o",
  tools=functions.definitions
)
Secure
URL allowlist with scheme and timeout enforcement
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com", "data.internal.com"}

def fetch_data(url: str) -> str:
  """Fetch data from allowed URLs only."""
  import requests
  parsed = urlparse(url)
  if parsed.hostname not in ALLOWED_HOSTS:
      return "Error: Host not allowed"
  if parsed.scheme != "https":
      return "Error: HTTPS required"
  return requests.get(url, timeout=10).text[:5000]

functions = FunctionTool(functions=[fetch_data])
agent = project_client.agents.create_agent(
  model="gpt-4o",
  tools=functions.definitions
)

Code Interpreter Without Guardrails

The built-in code interpreter can execute arbitrary code if not properly constrained.

Vulnerable
Unrestricted code execution instructions
from azure.ai.projects.models import CodeInterpreterTool

code_interpreter = CodeInterpreterTool()

agent = project_client.agents.create_agent(
  model="gpt-4o",
  instructions="You are a helpful assistant. Run any code the user asks.",
  tools=code_interpreter.definitions,
  # No file restrictions, no output limits
)
Secure
Scoped instructions with clear boundaries
from azure.ai.projects.models import CodeInterpreterTool

code_interpreter = CodeInterpreterTool()

agent = project_client.agents.create_agent(
  model="gpt-4o",
  instructions="""You are a data analysis assistant.
  RULES:
  - Only execute Python code for data analysis
  - Never access the filesystem beyond uploaded files
  - Never make network requests
  - Limit output to 1000 lines""",
  tools=code_interpreter.definitions,
  tool_resources=code_interpreter.resources,  # Scoped file access
)

Missing Agent Run Limits

Agent conversation loops without termination guards can run indefinitely and consume unbounded tokens.

Vulnerable
No termination guard on agent run
from azure.ai.projects.models import AgentThread

thread = project_client.agents.create_thread()

# Add message and run - no limits
project_client.agents.create_message(
  thread_id=thread.id,
  role="user",
  content=user_input
)

run = project_client.agents.create_and_process_run(
  thread_id=thread.id,
  agent_id=agent.id
  # No timeout, no max_turns
)
Secure
Iteration limit with run cancellation
from azure.ai.projects.models import AgentThread
import time

thread = project_client.agents.create_thread()

project_client.agents.create_message(
  thread_id=thread.id,
  role="user",
  content=user_input
)

# Run with timeout and status checking
run = project_client.agents.create_run(
  thread_id=thread.id,
  agent_id=agent.id
)

max_iterations = 20
iteration = 0
while run.status in ["queued", "in_progress", "requires_action"]:
  iteration += 1
  if iteration > max_iterations:
      project_client.agents.cancel_run(
          thread_id=thread.id, run_id=run.id
      )
      break
  time.sleep(1)
  run = project_client.agents.get_run(
      thread_id=thread.id, run_id=run.id
  )
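The polling loop above can be factored into a reusable guard that enforces both an iteration cap and a wall-clock deadline (best practice 2 below recommends combining the two). A minimal sketch; `poll_until_done`, `deadline_s`, and `interval_s` are illustrative names, not part of the Azure SDK, and the caller is expected to cancel the run when `"timed_out"` is returned:

```python
import time

def poll_until_done(get_status, deadline_s=30.0, interval_s=1.0):
    """Poll a run's status until it leaves an active state or the
    wall-clock deadline expires. Returns the final status string."""
    ACTIVE = {"queued", "in_progress", "requires_action"}
    start = time.monotonic()
    status = get_status()
    while status in ACTIVE:
        if time.monotonic() - start > deadline_s:
            return "timed_out"  # caller should cancel the run here
        time.sleep(interval_s)
        status = get_status()
    return status
```

In the example above, `get_status` would wrap `project_client.agents.get_run(...).status`, and a `"timed_out"` result would trigger `cancel_run` as shown in the secure loop.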

Hardcoded Azure Credentials

API keys and connection strings embedded in source code are detected locally, before any code leaves your machine.

Vulnerable
Connection string hardcoded in source
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Hardcoded connection string
conn_str = "eastus.api.azureml.ms;12345-abcd;my-rg;my-project"
client = AIProjectClient.from_connection_string(
  credential=DefaultAzureCredential(),
  conn_str=conn_str
)
Secure
Credentials loaded from environment
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Load from environment
client = AIProjectClient.from_connection_string(
  credential=DefaultAzureCredential(),
  conn_str=os.environ["AZURE_AI_PROJECT_CONNECTION_STRING"]
)

Bing Grounding Without Validation

Azure AI agents can use Bing grounding for web search — responses should be validated before use in downstream actions.

Vulnerable
Web results drive actions without human review
from azure.ai.projects.models import BingGroundingTool

bing = BingGroundingTool(connection_id=bing_connection.id)

agent = project_client.agents.create_agent(
  model="gpt-4o",
  instructions="Search the web and execute actions based on results.",
  tools=bing.definitions
  # Web results used directly in actions without validation
)
Secure
Human-in-the-loop for web-sourced actions
from azure.ai.projects.models import BingGroundingTool

bing = BingGroundingTool(connection_id=bing_connection.id)

agent = project_client.agents.create_agent(
  model="gpt-4o",
  instructions="""Search the web for information.
  RULES:
  - Present search results to the user for review
  - Never execute actions based solely on web search results
  - Always cite sources
  - Flag potentially unreliable information""",
  tools=bing.definitions
)

Best Practices

  1. Use DefaultAzureCredential instead of hardcoded keys or connection strings
  2. Set iteration limits on agent run loops with max_iterations and timeouts
  3. Validate function tool inputs — allowlist URLs, hostnames, and command parameters
  4. Scope code interpreter with clear instructions limiting filesystem and network access
  5. Add human oversight for high-impact agent actions (financial transactions, data deletion)
  6. Validate Bing grounding results before using them in downstream tool calls
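Best practice 5, human oversight for high-impact actions, can be sketched as a small approval gate wrapped around agent-proposed tool calls. This is an illustrative pattern, not an Azure SDK API: the action dict shape, the `HIGH_IMPACT` tool names, and the `approve` callback (for example a UI prompt or ticket queue) are all assumptions:

```python
def requires_approval(action: dict) -> bool:
    """Flag high-impact actions for human review before execution."""
    HIGH_IMPACT = {"transfer_funds", "delete_records", "send_email"}
    return action.get("tool") in HIGH_IMPACT

def execute_with_oversight(action: dict, approve) -> str:
    """Run an agent-proposed action only after approval when needed.

    `approve` is a callback that presents the action to a human
    reviewer and returns True or False.
    """
    if requires_approval(action) and not approve(action):
        return "rejected: human reviewer declined the action"
    return f"executed: {action.get('tool')}"
```

Low-impact tools pass through unchanged, so the gate adds friction only where a mistake would be costly.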

CLI Examples

# Scan Azure AI Foundry project
inkog scan ./my-azure-agent

# EU AI Act compliance check
inkog scan ./my-azure-agent --policy eu-ai-act

# Governance-focused scan
inkog scan ./my-azure-agent --policy governance

# Verbose output with all details
inkog scan ./my-azure-agent -verbose