Track every LLM call across OpenAI, Anthropic, and Google Gemini. One line of code. Full visibility.
```python
import llmtracer

llmtracer.init()
# All OpenAI, Anthropic & Gemini calls are now tracked.
```
```typescript
import llmtracer from '@llmtracer/sdk';

llmtracer.init();
// All OpenAI, Anthropic & Gemini calls are now tracked.
```
| Source file | Cost | Calls |
|---|---|---|
| planning.py | $312.40 | 2,400 |
| execution.py | $245.18 | 1,800 |
| classifier.py | $89.20 | 4,200 |
OpenAI, Anthropic, and Google Gemini. Auto-patches your SDK clients. Cost breakdown by model, source file, conversation, and user — automatically.
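Auto-patching of this kind is typically done by wrapping the provider SDK's request method at init time, so every call is recorded without any change to user code. A minimal sketch of the technique, using a stand-in client class rather than a real provider SDK (not llmtracer's actual internals):

```python
import functools

# Stand-in for a provider SDK client (e.g. an OpenAI chat client) -- illustrative only.
class FakeChatClient:
    def create(self, model, messages):
        return {"model": model, "content": "hello"}

RECORDED_CALLS = []

def patch_method(cls, name):
    """Replace cls.<name> with a wrapper that records each call, then delegates."""
    original = getattr(cls, name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        result = original(self, *args, **kwargs)
        RECORDED_CALLS.append({"method": name, "kwargs": kwargs})
        return result

    setattr(cls, name, wrapper)

patch_method(FakeChatClient, "create")

client = FakeChatClient()
client.create(model="gpt-4o", messages=[{"role": "user", "content": "hi"}])
print(len(RECORDED_CALLS))  # → 1
```

Because the patch is applied to the class, every client instance created afterwards is tracked automatically.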
Thinking tokens, cache tokens, tool tokens, reasoning tokens. Every billing dimension your provider charges for — captured automatically. Match your bill exactly.
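Matching the bill means pricing each billing dimension separately rather than just input and output. The sketch below illustrates the idea; the rate table, model name, and usage field names are hypothetical placeholders, not real provider pricing:

```python
# Hypothetical per-million-token rates -- NOT real provider pricing.
RATES = {
    "example-model": {
        "input": 3.00,
        "output": 15.00,
        "cached_input": 0.30,
        "reasoning": 15.00,
    }
}

def call_cost(model, usage):
    """Sum cost over every billing dimension the provider charges for."""
    rates = RATES[model]
    return sum(
        usage.get(dim, 0) / 1_000_000 * rate
        for dim, rate in rates.items()
    )

usage = {
    "input": 200_000,
    "output": 10_000,
    "cached_input": 1_000_000,
    "reasoning": 40_000,
}
print(round(call_cost("example-model", usage), 4))  # → 1.65
```

Dimensions absent from a usage record simply cost nothing, so one function covers providers with different sets of billing dimensions.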
One line of code. Auto-flushes on exit. Captures caller file and function out of the box. No wrappers, no callbacks, no code changes.
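Capturing the caller's file and function without wrappers is usually done by walking the interpreter stack at call time, and exit-time flushing with `atexit`. A sketch of both techniques under those assumptions (not the SDK's actual internals):

```python
import atexit
import inspect

BUFFER = []

def record_call(event):
    """Attach the caller's file and function by inspecting the stack."""
    caller = inspect.stack()[1]  # index 0 is this frame; 1 is the caller
    BUFFER.append({
        "event": event,
        "file": caller.filename,
        "function": caller.function,
    })

@atexit.register
def flush():
    """Runs automatically on interpreter exit -- no manual flush needed."""
    # A real tracer would POST the buffered spans to a backend here.
    print(f"flushing {len(BUFFER)} span(s)")

def my_agent_step():
    record_call("llm_call")

my_agent_step()
print(BUFFER[0]["function"])  # → my_agent_step
```

This is what makes per-source-file cost tables possible: each recorded span already knows which file and function produced it.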