The Book Is Live
After several months of work, two APIs built from scratch, four documentation formats, four AI models, and 21,000+ integration tests, Tokens Not Jokin' is here.
What this book is about
Every time a developer points an AI coding tool at your API documentation, tokens get consumed. The format of those docs determines how many tokens, what kind of code gets generated, and whether the code works at all.
Nobody had tested this with clean data. Every popular API is all over the training data, so any test you run with Stripe's docs or GitHub's docs is contaminated. You can't tell if the AI is reading your documentation or drawing on patterns it already learned.
I built two control APIs from scratch. No public footprint. No training data exposure. I documented each one in four formats and ran over 21,000 integration tests across four AI models, from models you can run on a laptop to frontier cloud APIs.
The book contains the full methodology, the complete results, and a framework for testing your own documentation.
What I found
The findings are in the book, not in this post. But I'll say this much: the industry conversation about which AI model produces the best code might be missing a bigger variable. One that documentation teams can influence.
If you write or maintain API documentation, this research will change how you think about format choices. If you manage a team that does, it gives you a testing methodology you can set up in an afternoon.
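To give a rough sense of what an afternoon setup could look like, here is a minimal sketch of a format-by-model test matrix. The file names, model labels, and the generate_client and run_integration_tests stubs are placeholders invented for illustration; this is not the harness used in the book, just the general shape of the loop.

```python
import json
from pathlib import Path

# Hypothetical documentation formats and model labels; swap in your own.
DOC_FORMATS = ["openapi.yaml", "reference.md", "guide.html", "llms.txt"]
MODELS = ["local-small", "local-large", "cloud-a", "cloud-b"]

def generate_client(model: str, docs: str) -> str:
    # Replace with a real call to the model of your choice; this stub just
    # returns a placeholder module so the loop runs end to end.
    return "# client code generated by " + model

def run_integration_tests(client_code: str) -> dict:
    # Replace with code that executes the generated client against your
    # control API; this stub reports nothing passed or failed.
    return {"passed": 0, "failed": 0}

def main() -> None:
    results = []
    for doc_path in DOC_FORMATS:
        path = Path(doc_path)
        if not path.exists():
            continue  # skip formats you haven't written yet
        docs = path.read_text()
        for model in MODELS:
            client_code = generate_client(model, docs)
            outcome = run_integration_tests(client_code)
            results.append({"format": doc_path, "model": model, **outcome})
    # One row per format/model pair; compare pass rates across formats.
    Path("results.json").write_text(json.dumps(results, indent=2))

if __name__ == "__main__":
    main()
```

The point of the structure is that the documentation format, not the model, is the variable under test: hold the API and the prompt constant, vary the docs, and let the integration tests score the output.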
How to get it
The book is available now on Leanpub. You can also read the research blog posts that led up to it, starting with The Problem Nobody's Measuring.
If you've been following along since the first LinkedIn post in February, thank you. The engagement, comments, and conversations shaped how I framed the research and what I chose to emphasize. This book is better because of that.