Problem
AI coding agents working on React Native/Expo projects have no way to see what the app looks like, inspect network requests, or read simulator console output. Conductor handles web projects well, but mobile development needs a custom harness. Developers report running the bundler outside their coding environment and manually switching between workspaces to test features.
Solution
Build a Tauri-based development interface that streams simulator output, captures screenshots, and exposes debugging data to AI agents.
1. Stream simulator logs to a file agents can read
```bash
#!/bin/bash
# scripts/stream-simulator.sh
# Capture iOS simulator logs for AI agent consumption
xcrun simctl spawn booted log stream \
  --level debug \
  --predicate 'subsystem == "com.your.app"' \
  2>&1 | tee /tmp/simulator-logs.txt &

# Expo bundler output
npx expo start --no-dev --port 8081 2>&1 | tee /tmp/bundler-logs.txt &

echo "Logs streaming to /tmp/simulator-logs.txt and /tmp/bundler-logs.txt"
```
2. Screenshot capture for visual state
```typescript
// scripts/capture-screenshot.ts
import { execSync } from "child_process";
import { watch } from "chokidar";

export function captureSimulatorScreenshot(outputPath: string): string {
  execSync(`xcrun simctl io booted screenshot "${outputPath}"`);
  return outputPath;
}

// Capture on file change for hot-reload feedback
watch("./src/**/*.{tsx,ts}").on("change", () => {
  setTimeout(() => {
    captureSimulatorScreenshot("/tmp/app-screenshot.png");
  }, 2000); // Wait for hot reload to settle
});
```
3. Tauri shell for unified view
```rust
// src-tauri/src/main.rs
#[tauri::command]
fn get_simulator_logs() -> String {
    // Return the last 100 lines of the simulator log
    let contents = std::fs::read_to_string("/tmp/simulator-logs.txt").unwrap_or_default();
    let mut lines: Vec<&str> = contents.lines().rev().take(100).collect();
    lines.reverse();
    lines.join("\n")
}

#[tauri::command]
fn capture_screenshot() -> Result<String, String> {
    let output = std::process::Command::new("xcrun")
        .args(["simctl", "io", "booted", "screenshot", "/tmp/app-screenshot.png"])
        .output()
        .map_err(|e| e.to_string())?;
    if output.status.success() {
        Ok("/tmp/app-screenshot.png".to_string())
    } else {
        Err(String::from_utf8_lossy(&output.stderr).to_string())
    }
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![get_simulator_logs, capture_screenshot])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```
4. Claude Code skill for mobile debugging
```markdown
<!-- skills/mobile-debug.md -->
---
name: mobile-debug
description: Debug the mobile app using simulator tools
---

When debugging the mobile app:

1. Read /tmp/simulator-logs.txt for recent simulator output
2. Read /tmp/bundler-logs.txt for Metro bundler errors
3. Run `xcrun simctl io booted screenshot /tmp/screenshot.png` to capture current state
4. Use the screenshot to verify visual changes
```
Why It Works
AI agents need observable state to debug effectively. By streaming simulator logs to files, capturing screenshots on code changes, and wrapping it in a Tauri shell, you give agents the same visibility a human developer gets from Xcode or Chrome DevTools. The file-based approach means any agent (Claude Code, Codex, or a custom harness) can read the state without special integrations.
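The file-based contract can be made explicit. This sketch (the paths match the scripts above, but the snapshot format itself is an assumption, not something the harness defines) bundles the tail of each log plus the latest screenshot path into one JSON document an agent can read in a single tool call:

```typescript
// scripts/debug-snapshot.ts
// Sketch: collect the last N lines of each log plus the screenshot path
// into one JSON snapshot for an agent to read with a single file access.
import { existsSync, readFileSync, writeFileSync } from "fs";

function tail(path: string, lines: number): string[] {
  if (!existsSync(path)) return [];
  return readFileSync(path, "utf8").split("\n").slice(-lines);
}

export function writeDebugSnapshot(outputPath: string): void {
  const snapshot = {
    capturedAt: new Date().toISOString(),
    simulatorLogs: tail("/tmp/simulator-logs.txt", 100),
    bundlerLogs: tail("/tmp/bundler-logs.txt", 100),
    screenshot: existsSync("/tmp/app-screenshot.png")
      ? "/tmp/app-screenshot.png"
      : null,
  };
  writeFileSync(outputPath, JSON.stringify(snapshot, null, 2));
}
```

Regenerating this on a timer or on file change gives every agent, regardless of tooling, one well-known path to poll.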
Context
- Christian Mitchell raised this problem for Expo apps -- wanting deeper tool calling into network inspectors and bundler logs
- Aaron Vanston's team uses Conductor for Expo but has to customize the setup and run the bundler externally
- An undocumented keyboard shortcut in Conductor interferes with Expo's interactive terminal (zen mode toggling)
- For teams not needing native code, switching branches in Conductor rebundles JS without a full native rebuild
- The screenshot capture approach also works with Android emulators via `adb shell screencap`
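The Android flow can mirror the iOS helper in step 2. A sketch (device-serial handling is omitted and the helper names are illustrative; `adb exec-out` is used instead of `adb shell` to avoid line-ending mangling of binary output):

```typescript
// scripts/capture-android-screenshot.ts
// Sketch: capture an Android emulator screenshot via adb.
import { execSync } from "child_process";

export function androidScreenshotCommand(outputPath: string): string {
  // exec-out streams raw PNG bytes to stdout, redirected to a host file
  return `adb exec-out screencap -p > "${outputPath}"`;
}

export function captureAndroidScreenshot(outputPath: string): string {
  execSync(androidScreenshotCommand(outputPath));
  return outputPath;
}
```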