
Automated browser UI testing with Playwright MCP and Claude Code

pattern

Testing UI changes requires manual browser interaction; AI agents cannot see the browser

Tags: claude-code, browser, mcp, playwright, testing

Problem

When an AI coding agent makes UI changes, it has no way to verify the result visually. You end up alt-tabbing to the browser, clicking through flows, and reporting back what you see. This manual loop slows down development and makes it impossible for the agent to iterate autonomously on visual bugs or broken interactions.

Solution

Step 1: Install the Playwright MCP server

Add the Playwright MCP server to your Claude Code configuration (for example, in a .mcp.json file at the project root):

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
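
Equivalently, the server can be registered from the command line. This sketch assumes the Claude Code CLI is on your PATH; exact flags may vary by version:

```shell
# Register the Playwright MCP server with Claude Code
claude mcp add playwright -- npx @playwright/mcp@latest

# Confirm it is registered
claude mcp list
```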

Step 2: Let Claude Code drive the browser

The agent can now navigate pages, click elements, fill forms, and take screenshots:

# Example Claude Code prompt:
"Open http://localhost:3000/dashboard, click the 'Add User' button,
fill in the form with test data, submit it, and take a screenshot
to verify the new user appears in the table."

The MCP server exposes tools like navigate, click, fill, screenshot, and evaluate that the agent calls directly.

Step 3: Use screenshots for visual verification

# Claude Code can take and analyze screenshots:
"Take a screenshot of the current page. Does the modal overlay
have the correct z-index? Is the form validation error visible?"

Step 4: Build automated test flows

Combine Playwright MCP with test generation:

// Generated by Claude Code after verifying the flow via MCP
import { test, expect } from "@playwright/test";

test("add user flow", async ({ page }) => {
  await page.goto("/dashboard");
  await page.click('button:text("Add User")');
  await page.fill('[name="email"]', "test@example.com");
  await page.fill('[name="name"]', "Test User");
  await page.click('button:text("Submit")');
  await expect(page.locator("table")).toContainText("test@example.com");
});
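
The relative `page.goto("/dashboard")` above only resolves if a `baseURL` is configured. A minimal `playwright.config.ts` sketch, assuming the dev server runs on port 3000 and starts with `npm run dev` (both assumptions; adjust for your project):

```typescript
// playwright.config.ts — minimal sketch, not a complete config.
// Port and dev-server command are assumptions.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // Makes relative URLs like page.goto("/dashboard") work
    baseURL: "http://localhost:3000",
  },
  // Optionally start the dev server before the test run
  webServer: {
    command: "npm run dev",
    url: "http://localhost:3000",
    reuseExistingServer: true,
  },
});
```

With this in place, `npx playwright test` will launch (or reuse) the dev server and run the generated flow against it.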

Why It Works

Playwright MCP gives AI agents the same browser automation capabilities that Playwright provides to human developers. The agent can navigate, interact, and take screenshots without any manual intervention. By closing the feedback loop between code changes and visual verification, the agent can catch UI regressions, verify styling, and test interactive flows autonomously. Screenshots provide visual grounding that text-only tools cannot achieve.

Context

  • The Playwright MCP server runs a headed or headless Chromium browser locally
  • Claude Code analyzes screenshots as images to verify visual correctness
  • Works with any local dev server (Next.js, Vite, plain HTML)
  • Combine with @playwright/test to persist verified flows as permanent regression tests
  • Emergent.sh uses a similar pattern with Playwright testing loops built into their vibe coding platform
  • The MCP server supports element selectors, keyboard input, and JavaScript evaluation in the page context
About this share

  • Contributor: mblode
  • Repository: mblode/shares
  • Created: Feb 10, 2026