See where your AI budget goes.

Track every LLM call across OpenAI, Anthropic, and Google Gemini. One line of code. Full visibility.

Python:

import llmtracer
llmtracer.init()

# All OpenAI, Anthropic & Gemini calls are now tracked.

TypeScript:

import llmtracer from '@llmtracer/sdk';
llmtracer.init();

// All OpenAI, Anthropic & Gemini calls are now tracked.
Start free →

pip install llmtracer-sdk
npm install @llmtracer/sdk
Cost Explorer (last 7 days): $847.23 total

Model               Cost
gpt-4o              $412.50
claude-sonnet-4-5   $203.18
gpt-4o-mini         $142.30
gemini-2.5-pro      $89.25

Source file     Cost      Calls
planning.py     $312.40   2,400
execution.py    $245.18   1,800
classifier.py   $89.20    4,200

All providers, one SDK

OpenAI, Anthropic, and Google Gemini. Auto-patches your SDK clients. Cost breakdown by model, source file, conversation, and user — automatically.
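The auto-patching idea can be sketched in a few lines. This is only an illustration of the technique, not the SDK's actual internals: `FakeClient`, the `CALLS` buffer, and the `patch` helper are all hypothetical stand-ins.

```python
import functools

# Hypothetical stand-in for a provider client method (e.g. a chat completion call).
class FakeClient:
    def create(self, model, messages):
        return {"model": model, "usage": {"input_tokens": 12, "output_tokens": 34}}

CALLS = []  # a real tracer would buffer these and export them to a backend

def patch(client):
    """Wrap client.create so every call is recorded transparently."""
    original = client.create

    @functools.wraps(original)
    def traced(*args, **kwargs):
        response = original(*args, **kwargs)
        CALLS.append({"model": response["model"], "usage": response["usage"]})
        return response

    client.create = traced  # callers keep using client.create unchanged
    return client

client = patch(FakeClient())
client.create(model="gpt-4o", messages=[{"role": "user", "content": "hi"}])
print(len(CALLS), CALLS[0]["model"])  # 1 gpt-4o
```

Because the wrapper replaces the method in place, existing call sites need no changes, which is what makes per-model and per-file breakdowns possible without manual tagging.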

Full cost capture

Thinking tokens, cache tokens, tool tokens, reasoning tokens. Every billing dimension your provider charges for — captured automatically. Match your bill exactly.
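Concretely, "every billing dimension" means a cost formula that sums each token type at its own rate. The rates and field names below are illustrative only, not actual provider pricing:

```python
# Illustrative per-million-token rates; real rates vary by provider and model.
RATES = {
    "input": 2.50,
    "output": 10.00,
    "cache_read": 0.25,
    "thinking": 10.00,
}

def call_cost(usage: dict) -> float:
    """Sum every billed token dimension for one call, in USD."""
    return sum(usage.get(dim, 0) / 1_000_000 * rate for dim, rate in RATES.items())

usage = {"input": 4_000, "output": 1_200, "cache_read": 20_000, "thinking": 800}
print(round(call_cost(usage), 6))  # 0.035
```

Pricing by input and output tokens alone would miss the cache and thinking lines here, which is why totals computed that way drift from the invoice.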

Zero config

One line of code. Auto-flushes on exit. Captures caller file and function out of the box. No wrappers, no callbacks, no code changes.

The old way

  • × Install framework-specific adapter
  • × Create callback handler class
  • × Wire callbacks to every LLM call site
  • × Tag each call manually
  • × 8 files modified
  • × Hope you didn't miss one

With LLM Tracer

  • pip install llmtracer-sdk
  • import llmtracer; llmtracer.init()
  • Done. Every call captured.
  • 0 files modified
  • 0 call-site changes
  • Works with any framework