## Problem
When production errors fire in Sentry, the typical workflow is: alert triggers notification, developer context-switches, triages the issue, reproduces it, writes a fix, opens a PR, and waits for review. This cycle can take hours to days. Meanwhile, users keep hitting the same error.
## Solution
Wire Sentry webhooks to an AI agent (ClawdBot, Claude Code in a VM, or a custom handler) that has codebase context and can auto-generate a fix PR for human review.
### 1. Configure Sentry webhook
```python
# webhook_handler.py
import os
import subprocess

from flask import Flask, request

app = Flask(__name__)


@app.route("/sentry-webhook", methods=["POST"])
def handle_sentry_event():
    payload = request.json
    event = payload.get("data", {}).get("event", {})

    error_title = event.get("title", "Unknown error")
    issue_id = str(event.get("issue_id", "unknown"))
    stacktrace = extract_stacktrace(event)
    error_tags = event.get("tags", [])

    # Create a structured prompt for the AI agent
    prompt = build_fix_prompt(error_title, stacktrace, error_tags, issue_id)

    # Dispatch to the AI agent asynchronously so the webhook returns fast
    dispatch_to_agent(prompt, issue_id)
    return "", 200


def extract_stacktrace(event: dict) -> str:
    frames = []
    for entry in event.get("exception", {}).get("values", []):
        for frame in entry.get("stacktrace", {}).get("frames", []):
            frames.append(
                f"  {frame.get('filename')}:{frame.get('lineno')} in {frame.get('function')}"
            )
    return "\n".join(frames)


def dispatch_to_agent(prompt: str, issue_id: str) -> None:
    # Hand off to the isolation script (step 3) without blocking the request
    subprocess.Popen(
        ["./dispatch_fix.sh"],
        env={**os.environ, "PROMPT": prompt, "SENTRY_ISSUE_ID": issue_id},
    )
```

### 2. Build the agent prompt with codebase context

```python
def build_fix_prompt(title: str, stacktrace: str, tags: list, issue_id: str) -> str:
    # Sentry tags arrive as [key, value] pairs
    tag_lines = "\n".join(f"- {key}: {value}" for key, value in tags)
    return f"""A production error was reported in Sentry.

## Error
{title}

## Stacktrace
{stacktrace}

## Tags
{tag_lines}

## Instructions
1. Read the files referenced in the stacktrace
2. Identify the root cause
3. Write a minimal fix (do not refactor surrounding code)
4. Add or update a test that would catch this regression
5. Commit the changes to the current branch (fix/sentry-{issue_id}, created by the dispatch script) and push
6. Open a draft PR with the Sentry issue link in the description
"""
```
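Before dispatching anything, the handler should confirm the payload actually came from Sentry. A minimal sketch, assuming Sentry signs the raw request body with HMAC-SHA256 using the integration's client secret and sends the hex digest in the `sentry-hook-signature` header (verify the header name against your integration type's docs); wire it in with `request.get_data()` and `request.headers.get("sentry-hook-signature", "")` before parsing JSON:

```python
import hashlib
import hmac


def verify_sentry_signature(raw_body: bytes, signature: str, client_secret: str) -> bool:
    """Compare the sentry-hook-signature header against an HMAC of the raw body."""
    expected = hmac.new(
        client_secret.encode("utf-8"), raw_body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid leaking the digest via timing
    return hmac.compare_digest(expected, signature)
```

Reject the request with a 401 when this returns `False`, so arbitrary POSTs cannot spin up agents.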
### 3. Dispatch to agent running in isolated environment
```bash
#!/bin/bash
# dispatch_fix.sh - Run Claude Code in an isolated environment
set -euo pipefail

BRANCH="fix/sentry-${SENTRY_ISSUE_ID}"
git checkout -b "$BRANCH" main

claude --model claude-opus-4-6 \
  --print \
  --allowedTools "Read,Edit,Write,Bash(git:*),Bash(npm test)" \
  "$PROMPT"

# printf expands the \n escapes; plain double quotes would pass them literally
gh pr create \
  --draft \
  --title "fix: ${ERROR_TITLE}" \
  --body "$(printf 'Auto-generated fix for %s\n\nRequires human review before merge.' "$SENTRY_URL")"
```
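If a full Firecracker VM is overkill, a throwaway container is a cheaper approximation of the isolation the script assumes. A sketch that assembles the `docker run` invocation in Python (the image name `agent-sandbox` and the `/workspace` mount are hypothetical placeholders for your own setup):

```python
import os


def sandbox_command(prompt: str, issue_id: str, image: str = "agent-sandbox") -> list[str]:
    """Assemble a docker invocation that runs dispatch_fix.sh in a throwaway container."""
    return [
        "docker", "run", "--rm",
        # Pass only the variables the script needs, not the whole host environment
        "-e", f"PROMPT={prompt}",
        "-e", f"SENTRY_ISSUE_ID={issue_id}",
        "-v", f"{os.getcwd()}:/workspace",
        "-w", "/workspace",
        image,
        "./dispatch_fix.sh",
    ]
```

Run it with `subprocess.run(sandbox_command(prompt, issue_id), check=True)`; the container would also need credentials (API key, a scoped GitHub token) injected the same way.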
## Why It Works
The AI agent receives a focused context: the exact error, stacktrace with file paths and line numbers, and a clear instruction set. With codebase access, it can read the relevant files, understand the failure, and produce a targeted fix. Creating draft PRs ensures a human reviews every change before it reaches production. The pipeline turns a multi-hour triage cycle into a minutes-long automated response with human approval as the final gate.
## Context
- ClawdBot and similar tools (Claude in Firecracker VMs) can run this autonomously with cron-based polling
- Sentry's letter to engineering teams emphasized AI-driven error resolution as a key strategic direction
- Restrict the agent's permissions to read, edit, test, and git -- no deploy access
- Start with high-frequency, low-severity errors to build confidence before handling critical issues
- Rate-limit the webhook handler to avoid spinning up agents for error storms
- Pair with a Slack notification so the team knows a fix PR is ready for review
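The rate-limiting advice above can be sketched as a per-issue cooldown. This in-memory version is illustrative only; a real deployment would back it with Redis or similar so the limit survives handler restarts:

```python
import time


class IssueCooldown:
    """Skip dispatching another agent for an issue seen within the cooldown window."""

    def __init__(self, window_seconds: float = 3600.0):
        self.window = window_seconds
        self.last_seen: dict[str, float] = {}

    def should_dispatch(self, issue_id: str) -> bool:
        now = time.monotonic()
        last = self.last_seen.get(issue_id)
        if last is not None and now - last < self.window:
            return False  # error storm: an agent was already dispatched recently
        self.last_seen[issue_id] = now
        return True
```

In the webhook handler, check `should_dispatch(issue_id)` before spawning the agent, so a thousand occurrences of the same error produce one fix attempt, not a thousand.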