Why We Built clawfeeder.ai
The problem with per-token pricing
Every developer building with AI APIs has experienced the same anxiety: you write a feature, deploy it, and then wait nervously to see what the bill looks like at the end of the month. Per-token pricing creates unpredictable costs that scale in non-obvious ways.
A single Claude Opus 4.6 call with a long system prompt and a rich response can cost anywhere from $0.02 to $0.40 depending on the exact input/output split. Multiply that by thousands of requests and you have a forecasting nightmare.
What we wanted
When we started building internal AI tooling, we wanted three things:
Predictability. We should be able to look at the number of API calls and immediately know what it will cost. No token counting. No mental math about 1M input tokens vs 1M output tokens at different rates.
Access to the best models. We didn't want to compromise on model quality to save money. Claude Sonnet 4.6, DeepSeek V3.2, Gemini 2.0 Flash — we wanted them all under one API key.
OpenAI SDK compatibility. We've already written client code against the OpenAI SDK. We didn't want to rewrite it every time we switched models.
The clawfeeder.ai approach
We charge per request, not per token, at a rate of 1 credit = ¥0.01. A Claude Sonnet 4.6 request costs 15 credits (¥0.15); a DeepSeek V3.2 request costs 1 credit (¥0.01). You always know what a call will cost before you make it.
The credit prices are set at roughly 30% of the official upstream provider prices — so you get the model quality you need at a predictable, lower cost.
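Forecasting under this model is plain multiplication. A sketch using only the two prices quoted above (the traffic numbers are made up for illustration):

```python
# Per-request credit prices from this post; 1 credit = ¥0.01.
CREDIT_VALUE_CNY = 0.01
CREDITS_PER_CALL = {
    "claude-sonnet-4.6": 15,  # ¥0.15 per request
    "deepseek-v3.2": 1,       # ¥0.01 per request
}

def monthly_credits(calls):
    """Total credits for a month, given {model: request_count}."""
    return sum(CREDITS_PER_CALL[model] * n for model, n in calls.items())

# Example month: 10,000 Sonnet calls + 50,000 DeepSeek calls
credits = monthly_credits({"claude-sonnet-4.6": 10_000, "deepseek-v3.2": 50_000})
cost_cny = credits * CREDIT_VALUE_CNY  # 200,000 credits -> ¥2,000
```

No token counting appears anywhere in the forecast, which is the point: request volume alone determines the bill.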
What's next
We're adding smart routing: a model="auto" parameter that picks the cheapest model capable of handling your task. More on that in a future post.
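To make the idea concrete, here is a toy sketch of what cheapest-capable routing could look like. This is not clawfeeder.ai's actual routing logic (which hasn't shipped yet); the heuristic and thresholds are invented for illustration:

```python
# Hypothetical client-side illustration of "pick the cheapest capable model".
CREDITS_PER_CALL = {"deepseek-v3.2": 1, "claude-sonnet-4.6": 15}  # prices from this post

def route(prompt, needs_strong_reasoning=False):
    """Return a model ID: the cheap model by default, the stronger
    (more expensive) one only when the task seems to demand it."""
    if needs_strong_reasoning or len(prompt) > 4_000:  # made-up threshold
        return "claude-sonnet-4.6"
    return "deepseek-v3.2"
```

The real version would presumably run server-side behind model="auto", so callers get the cost savings without maintaining heuristics like this themselves.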
Try clawfeeder.ai for free
7-day free trial · 300 credits · No card required