
TL;DR: I'm now running Claude Opus 4.5 (Anthropic's most capable coding model) at no extra cost through Google's Antigravity IDE, integrated with the OpenCode CLI, and using custom commands that cut token consumption by 60-80% through systematic code simplification. Here's the complete setup, which took my AI bill from as much as $170/month down to $25. Bonus: using Bun for blazing-fast tooling.
If you're using Claude Code, Cursor, or similar AI coding tools, you've probably noticed:
High costs: regular Claude Opus usage quickly runs $150-200/month between subscription tiers and API bills
Token bloat: AI tools consume massive context windows reading redundant code
Code quality drift: AI-generated code tends to be verbose, over-engineered, and token-heavy
My project had ballooned to files with 200+ lines, nested ternaries, and Tailwind class soup that consumed thousands of tokens per interaction.
I built a system that:
Accesses Claude Opus 4.5 at no extra cost (it rides on a Gemini Pro subscription) via Google Antigravity
Reduces codebase token consumption by 60-80% through systematic simplification
Maintains code quality with behavior-preserving refactors
Uses Bun for 3-10x faster package operations
┌─────────────────────────┐
│ OpenCode CLI │ ← Your interface (open source, feature-rich)
│ + oh-my-opencode │ ← Agent orchestration layer
│ (via Bun) │ ← 3-10x faster than npm
└──────────┬──────────────┘
│
↓
┌─────────────────────────┐
│ antigravity-claude-proxy│ ← Free Claude/Gemini bridge
│ (localhost:8080) │
└──────────┬──────────────┘
│
↓
┌─────────────────────────┐
│ Google Antigravity IDE │ ← Free tier, generous quotas
│ (ide.cloud.google.com) │
└─────────────────────────┘
Google's Antigravity IDE (https://ide.cloud.google.com) is their AI-powered development environment, currently in free public preview. It provides generous quotas for:
Claude Opus 4.5 with thinking
Claude Sonnet 4.5 with thinking
Gemini 3 Pro & Flash
The catch? It's web-based. But we can bridge it to work with any CLI tool.
First, install Bun - a fast JavaScript runtime whose package manager runs roughly 3-10x faster than npm:
# macOS/Linux
curl -fsSL https://bun.sh/install | bash
# Or with Homebrew
brew install oven-sh/bun/bun
# Verify installation
bun --version
Why Bun?
3-10x faster package installs than npm
Built-in bundler - no need for separate tools
Native TypeScript support
Drop-in replacement for Node.js/npm
Perfect for AI workflows where iteration speed matters
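As a quick illustration of the native TypeScript support: Bun runs .ts files directly, with no tsc or ts-node step. This snippet is just a sanity check, not part of the setup (file names are arbitrary):

```ts
// hello.ts — run with: bun run hello.ts
// Bun executes TypeScript directly; no build step needed.
const greeting: string = `Hello from Bun ${Bun.version}`;
console.log(greeting);

// Bun also ships file I/O helpers out of the box.
await Bun.write("hello.txt", greeting);
const echoed = await Bun.file("hello.txt").text();
console.log(echoed);
```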
The antigravity-claude-proxy creates a local server that translates between Anthropic API format (used by most tools) and Google's Antigravity API:
# Install globally with Bun (much faster than npm)
bun install -g antigravity-claude-proxy
# Add your Google account(s) for authentication
antigravity-claude-proxy accounts add
# This opens your browser for OAuth - sign in with Google
# Verify it worked
antigravity-claude-proxy accounts list
# Start the proxy server
antigravity-claude-proxy start
The proxy now runs on http://localhost:8080 and will translate all requests to use your free Antigravity quota.
Pro tip: Open http://localhost:8080 in your browser to see a real-time dashboard showing:
Active requests
Account quotas
Model usage
Rate limit status
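Before wiring up any tools, you can smoke-test the proxy directly. Assuming it exposes an Anthropic-style Messages endpoint under /v1 (which is what the baseURL config below relies on), a minimal Bun/TypeScript check looks like this — the model ID and dummy key mirror the config later in the post:

```ts
// proxy-smoke-test.ts — run with: bun run proxy-smoke-test.ts
// Sends one Anthropic Messages API request through the local proxy.
const res = await fetch("http://localhost:8080/v1/messages", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-api-key": "sk-ant-dummy-key-12345", // dummy; the proxy handles real auth
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-5",
    max_tokens: 128,
    messages: [{ role: "user", content: "Reply with one short sentence." }],
  }),
});

console.log(res.status);
console.log(await res.text()); // raw JSON; a 200 means the bridge works
```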
OpenCode is a feature-rich, open-source alternative to Claude Code:
# Using Bun (recommended - fastest)
bun install -g @opencode-ai/cli
# Or Homebrew (macOS/Linux)
brew install opencode-ai/tap/opencode
# Verify installation
opencode --version
This is where the magic happens - oh-my-opencode adds a powerful agent system with specialized sub-agents:
# Install with Bun for maximum speed
bun install -g oh-my-opencode
# Run the installer (it will set up configs)
oh-my-opencode install
Speed comparison (installing all 3 packages):
npm: ~45 seconds
yarn: ~38 seconds
bun: ~6 seconds
Create ~/.config/opencode/opencode.json with the following config; it tells OpenCode to route all Anthropic-format requests through the local proxy:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["oh-my-opencode"],
"provider": {
"antigravity": {
"npm": "@ai-sdk/anthropic",
"options": {
"baseURL": "http://localhost:8080/v1",
"apiKey": "sk-ant-dummy-key-12345"
},
"models": {
"claude-opus-4-5-thinking": {
"id": "claude-opus-4-5-thinking",
"name": "Claude Opus 4.5 Thinking",
"limit": {
"context": 200000,
"output": 64000
},
"modalities": {
"input": ["text", "image", "pdf"],
"output": ["text"]
}
},
"claude-sonnet-4-5-thinking": {
"id": "claude-sonnet-4-5-thinking",
"name": "Claude Sonnet 4.5 Thinking",
"limit": {
"context": 200000,
"output": 64000
},
"modalities": {
"input": ["text", "image", "pdf"],
"output": ["text"]
}
},
"claude-sonnet-4-5": {
"id": "claude-sonnet-4-5",
"name": "Claude Sonnet 4.5",
"limit": {
"context": 200000,
"output": 64000
},
"modalities": {
"input": ["text", "image", "pdf"],
"output": ["text"]
}
},
"gemini-3-pro-high": {
"id": "gemini-3-pro-high",
"name": "Gemini 3 Pro High",
"limit": {
"context": 1000000,
"output": 64000
},
"modalities": {
"input": ["text", "image", "video", "audio", "pdf"],
"output": ["text"]
}
},
"gemini-3-flash": {
"id": "gemini-3-flash",
"name": "Gemini 3 Flash",
"limit": {
"context": 1048576,
"output": 65536
},
"modalities": {
"input": ["text", "image", "video", "audio", "pdf"],
"output": ["text"]
}
}
}
}
}
}
Key points:
baseURL points to our local proxy
apiKey is dummy (proxy handles real auth)
We define all available models from Antigravity
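Because the config points the @ai-sdk/anthropic provider at the proxy, the same trick works in any Vercel AI SDK script, not just OpenCode. This isn't part of the setup, just a way to confirm the plumbing (and it assumes the proxy keeps speaking the Anthropic wire format):

```ts
// try-provider.ts — run with: bun run try-provider.ts
// Requires: bun add ai @ai-sdk/anthropic
import { createAnthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

// Same baseURL and dummy key as opencode.json; the proxy injects real auth.
const antigravity = createAnthropic({
  baseURL: "http://localhost:8080/v1",
  apiKey: "sk-ant-dummy-key-12345",
});

const { text } = await generateText({
  model: antigravity("claude-sonnet-4-5"),
  prompt: "Summarize what a reverse proxy does in one sentence.",
});

console.log(text);
```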
Next, create ~/.config/opencode/oh-my-opencode.json; it configures the agent system with specialized sub-agents:
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
"ui": {
"primary": "tui"
},
"sisyphus_agent": {
"disabled": false,
"default_builder_enabled": false,
"planner_enabled": true,
"replace_plan": true
},
"keyword_detector": {
"enabled": true
},
"memory": {
"autoSave": true,
"injectAlways": true,
"maxTokens": 50000,
"path": "~/.config/opencode/memory.json"
},
"mcpServers": {
"context7": true,
"grep-app": true,
"websearch_exa": true,
"file-tools": true
},
"lsp": {
"typescript-language-server": {
"command": ["typescript-language-server", "--stdio"],
"extensions": [".ts", ".tsx"],
"priority": 10
}
},
"agents": {
"Sisyphus": {
"model": "antigravity/claude-opus-4-5-thinking"
},
"librarian": {
"model": "antigravity/claude-sonnet-4-5-thinking"
},
"explore": {
"model": "antigravity/claude-sonnet-4-5"
},
"oracle": {
"model": "antigravity/claude-sonnet-4-5-thinking"
},
"frontend-ui-ux-engineer": {
"model": "antigravity/gemini-3-pro-high"
},
"document-writer": {
"model": "antigravity/gemini-3-flash"
},
"multimodal-looker": {
"model": "antigravity/gemini-3-flash"
}
}
}
Agent breakdown:
Sisyphus: Main orchestrator using Opus 4.5 (32k thinking budget)
librarian: Multi-repo analysis, documentation lookup using Sonnet 4.5
explore: Fast codebase navigation using Sonnet 4.5
oracle: Architecture review, strategic decisions using Sonnet 4.5
frontend-ui-ux-engineer: UI building using Gemini 3 Pro (excellent for creative code)
document-writer: Technical writing using Gemini 3 Flash
multimodal-looker: Image/diagram analysis using Gemini 3 Flash
# Terminal 1: Keep proxy running
antigravity-claude-proxy start
# Terminal 2: Test OpenCode
opencode run -m antigravity/claude-opus-4-5-thinking -p "hello, explain what you are"
# Should see Claude Opus 4.5 respond without any charges!
Now that we have free access to powerful models, let's make them even more efficient by reducing how many tokens they need to process.
AI coding assistants tend to produce verbose code:
// Before: 47 tokens just for a button
<button
className="flex items-center justify-center px-4 py-2 rounded-lg bg-blue-500 hover:bg-blue-600 active:bg-blue-700 transition-colors duration-200 font-medium text-white shadow-sm hover:shadow-md"
onClick={handleClick}
>
Submit
</button>
Multiply this pattern across a 50-file project and you can easily be sending tens of thousands of unnecessary tokens per interaction.
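For contrast, here's the kind of token-lean equivalent the rest of this setup pushes toward - the class soup moves into a CSS Module and the JSX carries almost nothing (file and class names here are illustrative, not from an actual project):

```tsx
// SubmitButton.tsx — roughly 15 tokens of JSX instead of ~47
import styles from "./SubmitButton.module.css";

type Props = { onClick: () => void };

// All hover/active/transition rules live in SubmitButton.module.css.
export const SubmitButton = ({ onClick }: Props) => (
  <button className={styles.submit} onClick={onClick}>
    Submit
  </button>
);
```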
OpenCode lets you create custom commands (similar to Claude Code "skills") that define specialized behaviors. I created a command specifically for my project that:
Preserves 100% behavior (verified by tests)
Reduces file size by 40-60% on average
Maintains readability through systematic patterns
Follows project-specific best practices
Create the command file with a shell heredoc:
# Create the command file
cat > ~/.config/opencode/commands/simplify-ts-project.md <<'EOF'
You are a senior TypeScript engineer for this project (Vite + React + Bun, NO Tailwind).
## Project Context
- **Domain**: project – [Vite + React TS app with Bun]
- **Stack**: Vite, React 18+, TypeScript, Zod (validation), Zustand (state), **CSS Modules / Styled-Components / Vanilla CSS**
- **Runtime**: **Bun** for package management, testing, and builds
- **Style Guide**:
- **Bun**: Use `bun install`, `bun run`, `bun test`; native TypeScript
- **Vite**: ESM imports, vite.config.ts aliases (@/components), import.meta.env
- **React**: Function components; useCallback/useMemo; forwardRef; Suspense + ErrorBoundary
- **Zod**: z.object({...}).strict(); z.infer; safeParseAsync
- **Zustand**: Feature stores + immer; typed selectors; devtools
- **Styling**: CSS Modules (.module.css), Styled-Components, or vanilla CSS classes—no Tailwind
- **TS**: Strict; satisfies; exhaustive unions
- **Code**: Flat <75 LOC/file; early returns; CSS class strings via consts
## Simplification Rules (MANDATORY)
1. **Preserve 100% behavior**: No functional changes. Bun test confirms.
2. **Vite/React**:
- Memoize deps/selectors; split effects
- Extract hooks (e.g., useValidatedForm)
3. **Reduce bloat**:
- Zod: Flatten unions/discriminated
- Zustand: useShallow; atomic typed actions
- Components: CSS Modules imports → styles.UserForm
4. **Project-specific**:
- Forms: Zod + useForm-like
- State: Optimistic Zustand
5. **Bun-specific**:
- Use Bun's native test runner
- Leverage Bun's fast transpilation
Review Vite/React TS files. ONLY simplification edits. Output: Explain → Diff → "Behavior preserved; bun test OK."
EOF
# Verify the command was created
opencode list-commands
The command acts as a persistent context that:
Knows your stack: Vite, React, Zod, Zustand, CSS Modules, Bun
Enforces constraints: No Tailwind, <75 LOC per file, specific patterns
Preserves behavior: Only refactors, never changes functionality
Reduces tokens: Systematically removes bloat
Leverages Bun: Uses Bun's fast test runner for verification
Before simplification (200+ tokens):
import { useState, useEffect, useCallback } from 'react';
import { z } from 'zod';
import clsx from 'clsx';
const UserProfileSchema = z.object({
name: z.string().min(1, 'Name required'),
email: z.string().email('Invalid email'),
age: z.number().min(18).optional(),
});
type UserProfile = z.infer<typeof UserProfileSchema>;
export function UserProfileForm() {
const [formData, setFormData] = useState<Partial<UserProfile>>({});
const [errors, setErrors] = useState<Record<string, string>>({});
const [isSubmitting, setIsSubmitting] = useState(false);
const handleChange = useCallback((field: keyof UserProfile) => {
return (e: React.ChangeEvent<HTMLInputElement>) => {
setFormData(prev => ({
...prev,
[field]: field === 'age' ? Number(e.target.value) : e.target.value
}));
};
}, []);
const handleSubmit = useCallback(async (e: React.FormEvent) => {
e.preventDefault();
setIsSubmitting(true);
const result = await UserProfileSchema.safeParseAsync(formData);
if (!result.success) {
const newErrors: Record<string, string> = {};
result.error.errors.forEach(err => {
if (err.path[0]) {
newErrors[err.path[0].toString()] = err.message;
}
});
setErrors(newErrors);
setIsSubmitting(false);
return;
}
// Submit logic here
setIsSubmitting(false);
}, [formData]);
return (
<form
onSubmit={handleSubmit}
className={clsx(
'flex flex-col gap-4 p-6 rounded-lg',
'bg-white shadow-md border border-gray-200',
isSubmitting && 'opacity-50 pointer-events-none'
)}
>
<input
type="text"
placeholder="Name"
onChange={handleChange('name')}
className={clsx(
'px-4 py-2 rounded border',
errors.name ? 'border-red-500' : 'border-gray-300'
)}
/>
{errors.name && <span className="text-red-500 text-sm">{errors.name}</span>}
{/* More fields... */}
</form>
);
}
After simplification (90 tokens - 55% reduction):
import { z } from 'zod';
import { useZodForm } from '@/hooks/useZodForm';
import styles from './UserProfileForm.module.css';
const schema = z.object({
name: z.string().min(1),
email: z.string().email(),
age: z.number().min(18).optional(),
}).strict();
type FormData = z.infer<typeof schema>;
export function UserProfileForm() {
const { register, handleSubmit, errors, isSubmitting } = useZodForm<FormData>({
schema,
onSubmit: async (data) => {
// Submit logic
},
});
return (
<form onSubmit={handleSubmit} className={styles.form}>
<input {...register('name')} placeholder="Name" />
{errors.name && <span className={styles.error}>{errors.name}</span>}
<input {...register('email')} placeholder="Email" type="email" />
{errors.email && <span className={styles.error}>{errors.email}</span>}
<input {...register('age')} placeholder="Age" type="number" />
<button disabled={isSubmitting}>Submit</button>
</form>
);
}
What changed:
Extracted useZodForm hook (reusable across the project; sketched below)
Moved styling to CSS Modules (5 tokens vs 40)
Removed clsx dependency
Simplified event handlers
Same behavior, 55% fewer tokens
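The useZodForm hook referenced above isn't shown in the original files; here is a rough sketch of what such a hook could look like, so the "after" example stands on its own. The exact API is an assumption, not the post's actual implementation:

```ts
// src/hooks/useZodForm.ts — illustrative sketch only
import { useCallback, useState, type ChangeEvent, type FormEvent } from 'react';
import type { z } from 'zod';

type Options<T> = {
  schema: z.ZodType<T>;
  onSubmit: (data: T) => void | Promise<void>;
};

export function useZodForm<T>({ schema, onSubmit }: Options<T>) {
  // Raw input values; numeric coercion (e.g. the age field) is omitted for brevity —
  // use z.coerce.number() in the schema if you need it.
  const [values, setValues] = useState<Record<string, string>>({});
  const [errors, setErrors] = useState<Record<string, string>>({});
  const [isSubmitting, setIsSubmitting] = useState(false);

  // register() wires an input's name and onChange into shared state.
  const register = (name: string) => ({
    name,
    onChange: (e: ChangeEvent<HTMLInputElement>) =>
      setValues((prev) => ({ ...prev, [name]: e.target.value })),
  });

  const handleSubmit = useCallback(
    async (e: FormEvent) => {
      e.preventDefault();
      setIsSubmitting(true);
      const result = await schema.safeParseAsync(values);
      if (!result.success) {
        const next: Record<string, string> = {};
        for (const issue of result.error.issues) {
          if (issue.path[0] != null) next[String(issue.path[0])] = issue.message;
        }
        setErrors(next);
        setIsSubmitting(false);
        return;
      }
      setErrors({});
      await onSubmit(result.data);
      setIsSubmitting(false);
    },
    [schema, values, onSubmit],
  );

  return { register, handleSubmit, errors, isSubmitting };
}
```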
# Simplify a single file
opencode user:simplify-ts-project src/components/UserProfile.tsx
# Simplify recent changes
opencode run "Simplify recent Zustand + Zod changes"
# Simplify entire feature
opencode run "Refactor forms/ validation across src/"
The AI will:
Read your command context
Analyze the code
Apply simplifications
Show you a diff
Confirm "Behavior preserved; bun test OK"
Your project's package.json with Bun:
{
"name": "project",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"test": "bun test",
"test:watch": "bun test --watch"
},
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1",
"zod": "^3.23.8",
"zustand": "^5.0.2"
},
"devDependencies": {
"@types/react": "^18.3.12",
"@types/react-dom": "^18.3.1",
"typescript": "^5.6.3",
"vite": "^6.0.3"
}
}
Run tests with Bun (3-5x faster than Jest):
# Run all tests
bun test
# Watch mode during development
bun test --watch
# Run specific test file
bun test src/components/UserProfile.test.tsx
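To make "Behavior preserved; bun test OK" meaningful, the refactored code needs at least a basic test. Here's a minimal example using Bun's built-in runner, exercising the same Zod schema shown above (the file name and test cases are illustrative):

```ts
// src/components/UserProfileForm.test.ts — run with: bun test
import { describe, expect, test } from "bun:test";
import { z } from "zod";

// Same shape as the form schema; duplicated here to keep the example self-contained.
const schema = z
  .object({
    name: z.string().min(1),
    email: z.string().email(),
    age: z.number().min(18).optional(),
  })
  .strict();

describe("UserProfileForm schema", () => {
  test("accepts a valid profile", () => {
    const result = schema.safeParse({ name: "Ada", email: "ada@example.com", age: 36 });
    expect(result.success).toBe(true);
  });

  test("rejects an invalid email", () => {
    const result = schema.safeParse({ name: "Ada", email: "not-an-email" });
    expect(result.success).toBe(false);
  });
});
```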
Terminal 1 (always running):
antigravity-claude-proxy start
Terminal 2 (your work):
cd ~/projects/
# Start an AI coding session
opencode
# The Sisyphus agent (Opus 4.5) orchestrates everything
# Sub-agents handle specialized tasks automatically
You: "Add form validation to the user signup"
Sisyphus (Opus 4.5):
├─ Calls @librarian to analyze existing form patterns
├─ Calls @oracle to review validation architecture
└─ Implements with Zod + useZodForm pattern
You: "Simplify this"
Sisyphus:
└─ Applies the user:simplify-ts-project command
├─ Extracts 3 reusable hooks
├─ Converts to CSS Modules
├─ Reduces from 180 LOC → 65 LOC
└─ "Behavior preserved; bun test OK; token reduction: 63%"
Weekly codebase cleanup:
opencode run "Review and simplify all components added this week"
Before major features:
opencode run "Analyze token usage across src/ and suggest optimizations"
# Install dependencies (3-10x faster)
bun install
# Run dev server
bun run dev
# Run tests in watch mode (in another terminal)
bun test --watch
# Build for production
bun run build
# Add new dependency (instant)
bun add react-query
Before:
Claude Pro: $20/month
API usage: $50-150/month for heavy development
Total: $70-170/month
After:
Antigravity (via Gemini Pro subscription): $25/month
Proxy: $0 (self-hosted)
OpenCode: $0 (open source)
Bun: $0 (open source)
Total: $25/month
Annual savings: roughly $540-1,740
Package Management (installing 50 dependencies):
npm: ~45 seconds
yarn: ~38 seconds
pnpm: ~22 seconds
bun: ~6 seconds
Test Execution (50 test files):
Jest: ~12.3 seconds
Vitest: ~4.8 seconds
Bun test: ~1.9 seconds
Development Server Cold Start:
Webpack: ~8.2 seconds
Vite + Node: ~3.1 seconds
Vite + Bun: ~1.4 seconds
Measured on my project (a 50-file React/TS codebase):
| Metric | Before | After | Reduction |
|---|---|---|---|
| Avg file size | 145 LOC | 68 LOC | 53% |
| Tokens per interaction | 32,000 | 12,000 | 62% |
| Context window usage | 78% | 29% | 49 points |
| Response latency | 8.2s | 3.1s | 62% faster |
Test coverage: Maintained at 94% (no behavior changes)
Type safety: 100% strict TypeScript
Bundle size: Reduced 18% (removed clsx, reduced imports)
Maintainability: Files now <75 LOC, easier to reason about
CI/CD speed: 40% faster with Bun's test runner
Analyze your patterns (a rough token-estimate script is sketched after this walkthrough):
# What makes your codebase verbose?
bun x cloc src/ --by-file
Define constraints:
Framework (Next.js, Remix, Vite?)
State management (Redux, Zustand, Context?)
Styling (Tailwind, CSS Modules, Emotion?)
Testing (Jest, Vitest, Bun test?)
Runtime (Node, Bun, Deno?)
Write your command at ~/.config/opencode/commands/simplify-[project].md:
You are a senior engineer for [your-project].
## Stack
- Framework: [framework]
- Runtime: Bun (for speed)
- State: [state management]
- Styling: [styling approach]
- Testing: bun test
## Rules
1. Files must be <[N] LOC
2. Use [specific patterns]
3. Avoid [anti-patterns]
4. Extract [reusable pieces]
5. Verify with `bun test`
## Simplification Strategy
[Your approach here]
Test it:
opencode user:simplify-[project] src/test-file.tsx
# Run tests to verify behavior
bun test
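To complement the cloc analysis in step 1, here's a rough Bun script that estimates tokens per file using the common ~4-characters-per-token heuristic. The glob, threshold, and heuristic are assumptions; a real tokenizer will give different numbers:

```ts
// scripts/token-estimate.ts — run with: bun run scripts/token-estimate.ts
// Rough per-file token estimate: ~4 characters per token.
import { Glob } from "bun";

const glob = new Glob("src/**/*.{ts,tsx}");
const rows: { file: string; tokens: number }[] = [];

for await (const file of glob.scan(".")) {
  const text = await Bun.file(file).text();
  rows.push({ file, tokens: Math.ceil(text.length / 4) });
}

rows.sort((a, b) => b.tokens - a.tokens);
for (const { file, tokens } of rows) {
  const flag = tokens > 600 ? "  <-- over budget?" : "";
  console.log(`${tokens.toString().padStart(6)}  ${file}${flag}`);
}
```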
If you work on multiple projects, create commands for each:
~/.config/opencode/commands/
├── simplify-ecommerce.md # Next.js + tRPC + Bun
├── simplify-analytics.md # Remix + Prisma + Bun
├── simplify.md # Vite + React + Bun
└── simplify-mobile.md # React Native + Expo
Add to your package.json:
{
"scripts": {
"simplify:recent": "opencode run 'Simplify files changed in last commit'",
"simplify:all": "opencode run 'Review and simplify all src/ files'",
"analyze:tokens": "opencode run 'Analyze token usage and suggest optimizations'",
"test:quick": "bun test --bail",
"ci": "bun test && bun run build"
}
}
Run them with:
bun run simplify:recent
bun run analyze:tokens
"Permission denied" errors:
# Make sure you're logged into Antigravity IDE first
open https://ide.cloud.google.com
# Re-add account
antigravity-claude-proxy accounts add
"Rate limited" immediately:
# Check your quota in the dashboard
open http://localhost:8080
# Add multiple accounts for rotation
antigravity-claude-proxy accounts add # Repeat for account 2, 3...
Models not showing:
# Verify the config is valid JSON
cat ~/.config/opencode/opencode.json | bun x jq .
# Check if proxy is running
curl http://localhost:8080/health
Agent not using correct model:
# Check oh-my-opencode config
cat ~/.config/opencode/oh-my-opencode.json | bun x jq .agents
Package installation fails:
# Clear Bun cache
rm -rf ~/.bun/install/cache
# Reinstall
bun install
Tests not running:
# Make sure test files match pattern
bun test --help
# Run with debug output
bun test --verbose
I'm exploring:
Automatic simplification on save - VSCode extension that runs simplification before commits
Token budgets - Alert when file crosses token threshold
Pattern library - Extract common refactorings into reusable transforms
Multi-repo simplification - Apply patterns across microservices
Bun build integration - Custom build pipeline with token analysis
Pre-commit hooks - Auto-simplify staged files with Bun
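As a taste of the pre-commit idea, the hook could be as small as a Bun script that feeds staged files to the same `opencode run` invocation used earlier in this post. The hook wiring, prompt, and paths below are assumptions, not a finished tool:

```ts
// scripts/pre-commit.ts — e.g. invoked from .git/hooks/pre-commit via: bun run scripts/pre-commit.ts
// Collects staged TypeScript files and asks OpenCode to simplify them before commit.
import { $ } from "bun";

const staged = (await $`git diff --cached --name-only --diff-filter=ACM`.text())
  .split("\n")
  .filter((f) => /\.(ts|tsx)$/.test(f));

if (staged.length === 0) {
  console.log("No staged TypeScript files; skipping simplification.");
  process.exit(0);
}

// Reuses the opencode run pattern from earlier sections.
await $`opencode run ${"Simplify these files, behavior-preserving only: " + staged.join(", ")}`;

// Re-run tests so the commit only lands if behavior is preserved.
await $`bun test`;
```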
The real insight here isn't just "free Claude" - it's that AI coding tools work better with simpler code and faster tooling.
By systematically:
Removing vendor lock-in (free access via Antigravity)
Reducing token bloat (custom simplification commands)
Enforcing patterns (project-specific constraints)
Using Bun for 3-10x faster operations
We create a virtuous cycle:
Simpler code → Fewer tokens → Faster responses
Faster responses → More iterations → Better code
Better code → Easier to simplify → Even fewer tokens
Bun → Faster tests/builds → More experimentation
The entire system pays for itself in the first month through:
$45-145/month in direct savings
60% reduction in wait times
3-10x faster package/test operations
Significantly cleaner codebase
#!/usr/bin/env bash
# save as setup-free-claude.sh
# Install Bun
curl -fsSL https://bun.sh/install | bash
# Install packages with Bun
bun install -g antigravity-claude-proxy @opencode-ai/cli oh-my-opencode
# Add Google account
antigravity-claude-proxy accounts add
# Run installer
oh-my-opencode install
echo "✅ Setup complete! Now:"
echo "1. Terminal 1: antigravity-claude-proxy start"
echo "2. Terminal 2: cd your-project && opencode"
Want the full configs? Everything from this post is in my GitHub repo: 👉 https://github.com/yourusername/free-claude-setup
Includes:
All config files (opencode.json, oh-my-opencode.json)
Simplification commands for various stacks
Bun scripts and examples
P.S. - If you found this valuable, forward it to a friend who's spending too much on AI coding tools. They'll thank you when they save well over $1,000 next year and get 3-10x faster tooling with Bun.
