
Use Google AI Studio 1M context for comprehensive project planning


Limited context windows in coding tools constrain complex project planning with full codebase awareness

Tags: gemini, context-window, google-studio, planning, codebase

Problem

IDE-based coding agents like Cursor and Claude Code work file by file or use RAG to retrieve snippets. When planning a complex feature that spans many services, you need the LLM to see the entire codebase and API spec simultaneously. A 128k token context window cannot hold a 50k line codebase plus a 20k line API specification, leading to plans that miss cross-cutting concerns or propose changes that conflict with existing code.

Solution

Step 1: Dump your codebase into a single file

Concatenate your project files into a format the LLM can consume:

# Concatenate all source files with file path headers
# (parentheses group the -o test; IFS= read -r handles odd filenames)
find src \( -name "*.ts" -o -name "*.py" \) | sort | while IFS= read -r f; do
  echo "===== $f ====="
  cat "$f"
done > codebase-dump.txt

# Check token count (rough estimate: 1 token ~ 4 chars)
wc -c codebase-dump.txt | awk '{printf "~%.0f tokens\n", $1/4}'

Step 2: Load everything into Google AI Studio

1. Open https://aistudio.google.com
2. Select Gemini 2.5 Pro (1M token context)
3. Paste or upload codebase-dump.txt
4. Add your API spec (OpenAPI, GraphQL schema, etc.)
5. Add your feature requirements or ticket description
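Before uploading, it is worth checking that the combined inputs actually fit. A minimal sketch using the same 1-token-per-4-characters rule of thumb from Step 1 (the filenames passed in are whatever you produced; nothing here is a fixed convention):

```shell
# estimate_tokens: sum characters across the given files and divide by 4
# (rough rule of thumb: 1 token ~ 4 characters of code/prose)
estimate_tokens() {
  cat "$@" 2>/dev/null | wc -c | awk '{printf "%d", $1/4}'
}

# fits_in_window: true if the estimate is under Gemini's 1M-token window
fits_in_window() {
  [ "$(estimate_tokens "$@")" -lt 1000000 ]
}
```

Usage: `fits_in_window codebase-dump.txt api-spec.yaml requirements.md && echo OK`.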

Step 3: Generate a comprehensive implementation plan

# Example prompt:
"I need to implement [feature description]. Given the full codebase
and API spec above, create a detailed plan broken into atomic tasks.
Each task should be a single Jira/Linear ticket with:
- Clear title
- Files that need to change
- Specific changes required
- Dependencies on other tasks
- Estimated complexity (S/M/L)"
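If you prefer a single paste over three separate uploads, the prompt above can be assembled into one file. A sketch, with assumed filenames and section headers (adjust both to your project):

```shell
# build_prompt DUMP SPEC BRIEF OUT: concatenate the codebase dump, API
# spec, and feature brief into one paste-ready prompt file
build_prompt() {
  dump="$1"; spec="$2"; brief="$3"; out="$4"
  {
    echo "===== CODEBASE ====="
    cat "$dump"
    echo "===== API SPEC ====="
    cat "$spec"
    echo "===== FEATURE ====="
    cat "$brief"
    echo "Create a detailed plan broken into atomic tasks, each with a"
    echo "clear title, files to change, specific changes, dependencies"
    echo "on other tasks, and an S/M/L complexity estimate."
  } > "$out"
}
```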

Step 4: Execute tasks one by one in your IDE agent

# Take each generated task and feed it to Cursor/Claude Code:
"Implement task 3: Add the user permissions check to the
dashboard API resolver. See the plan context for details."

# The IDE agent handles implementation with its local tooling
# while the plan from Gemini provides the architectural guidance
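To feed tasks to the IDE agent one at a time, the plan can be split into per-task files. A sketch that assumes you asked Gemini to use `## Task N` headings (the heading pattern is an assumption; match it to whatever format your plan uses):

```shell
# split_plan PLAN: write each "## Task N" section of the plan to its
# own file (task-1.txt, task-2.txt, ...) in the current directory,
# skipping any preamble before the first task heading
split_plan() {
  awk '/^## Task [0-9]+/ { n++ } n { print > ("task-" n ".txt") }' "$1"
}
```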

Why It Works

Google AI Studio provides free access to Gemini 2.5 Pro with a 1M token context window. This is enough to hold an entire medium-sized codebase plus API specs plus requirements in a single prompt. The LLM sees all cross-cutting concerns, existing patterns, and architectural decisions simultaneously, producing plans that are grounded in reality rather than based on assumptions about unseen code. The IDE agent then executes each task with full local tooling support but with much better guidance.
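The arithmetic from the Problem section checks out: assuming an average of ~40 characters per line (an assumption, not a measured figure), the 50k-line codebase plus 20k-line spec lands well over a 128k window but comfortably under 1M:

```shell
# (50k + 20k) lines * ~40 chars/line / ~4 chars per token
echo $(( (50000 + 20000) * 40 / 4 ))   # ~700000 tokens: over 128k, under 1M
```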

Context

  • Google AI Studio's free tier stops using your data for training once you attach a billing account to the project; the service itself remains free
  • Refresh the chat and start new sessions to avoid stale context accumulation
  • This is a planning tool, not an implementation tool; use your IDE agent for actual code changes
  • Works best with codebases under 500k tokens; larger projects need selective file inclusion
  • The same approach works with any large-context model but Google AI Studio is currently the best free option
  • Combine with the Opus-for-planning, Sonnet-for-implementation pattern for maximum cost efficiency
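For projects over the ~500k-token comfort zone, selective inclusion can be scripted: add files in order until a token budget is exhausted. A sketch (budget, output path, and file ordering are all assumptions to tune):

```shell
# pack_files BUDGET_TOKENS OUT FILE...: append files with path headers
# to OUT until adding the next file would exceed the token budget
pack_files() {
  budget_chars=$(( $1 * 4 )); out="$2"; shift 2
  used=0
  : > "$out"
  for f in "$@"; do
    size=$(wc -c < "$f")
    # skip files that would blow the budget; keep trying smaller ones
    [ $(( used + size )) -gt "$budget_chars" ] && continue
    used=$(( used + size ))
    { echo "===== $f ====="; cat "$f"; } >> "$out"
  done
}
```

Usage: `pack_files 500000 codebase-dump.txt $(find src -name "*.ts" | sort)`. Pass the most plan-relevant files first so they win the budget.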
About this share
Contributor: mblode
Repository: mblode/shares
Created: Feb 10, 2026