We don't stream pixels. We stream understanding.
As humans we naturally focus on what matters in the moment — our brains actively discard what we already know, so our attention lands only on what's new. AI doesn't have this. Every step, it processes everything: signal and noise together, from scratch. JD Codec gives AI that same selective focus. Strip away the known. Surface what matters.
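The core idea is a diff over successive page snapshots: unchanged content is suppressed, and only the delta reaches the model. A minimal sketch of that idea (not the JD Codec implementation; the simplified DOM lines here are invented for illustration) using Python's `difflib`:

```python
import difflib

# Two successive page snapshots (hypothetical, simplified DOM text).
before = [
    '<nav>Products | Customers | Orders</nav>',
    '<h1>Products</h1>',
    '<li>Widget A - active</li>',
    '<li>Widget B - inactive</li>',
]
after = [
    '<nav>Products | Customers | Orders</nav>',
    '<h1>Products</h1>',
    '<li>Widget A - active</li>',
    '<li>Widget B - active</li>',  # the only line that changed
]

# n=0 drops all unchanged context lines: the agent sees only the delta.
delta = list(difflib.unified_diff(before, after, lineterm='', n=0))
for line in delta:
    print(line)
```

Here four snapshot lines collapse to a two-line change record; on real pages with hundreds of thousands of tokens of stable chrome, that suppression is where the savings come from.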
Same task. Same model. Same tools.
Example task (filter products by status): 99% fewer tokens. $2.23 saved per task.
Setup
Pick your flavour. One command.
```shell
npx @jdcodec/cli
```

```shell
pip install jdcodec
```
Drop-in layer between your agent and the browser. No changes to your agent, model, or tools.
```shell
jdcodec start
```
86% fewer input tokens. Higher task success rates. Faster execution. Your agent sees only what changed, and performs better because of it.
Benchmarked
Same model (Claude Sonnet 4). Same tasks. Same environment. The only difference: standard snapshots vs. JD Codec.
| Task | Baseline result | JD Codec result | Baseline tokens | JD Codec tokens | Reduction |
|---|---|---|---|---|---|
| Navigate to Products | PASS | PASS | 108.8k | 10.2k | -91% |
| Navigate to Customers | PASS | PASS | 30.6k | 8.6k | -72% |
| Navigate to Orders | PASS | PASS | 54.3k | 9.9k | -82% |
| Filter products by status | PASS | PASS | 748.9k | 5.2k | -99% |
| Change customer group | FAIL | FAIL | 203.2k | 50.8k | -75% |
| Toggle product status | FAIL | PASS | 543.5k | 145.3k | -73% |
| Total (6 tasks) | 4/6 | 5/6 | 1,689k | 230k | -86.4% |
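The headline figure follows directly from the table totals; a quick check of the arithmetic:

```python
# Totals from the benchmark table above (input tokens across all 6 tasks).
baseline_total = 1_689_000
jdcodec_total = 230_000

reduction = 1 - jdcodec_total / baseline_total
print(f"{reduction:.1%}")  # → 86.4%
```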
Methodology: 6 real-world tasks on a production web app. Input tokens counted with tiktoken. Wall-clock times include all latency.
Alpha access is limited to high-velocity teams. Join the waitlist to cut agent costs by 86% and convert failures into completions.