What you can get via API:
- o3-mini (reasoning-optimized, extremely cheap)
- GPT-4o
- GPT-4o-mini
- GPT-4.1 / 4.1-mini
- GPT-5
- GPT-OSS (20B free and 120B paid)
- Plus dozens of third-party models if you go the OpenRouter route (Claude, Qwen, Llama, Mistral, etc)
How it works:
Using the API bypasses the ChatGPT web UI pricing. You’re billed only for tokens – so small tasks cost fractions of a cent. Most normal requests land somewhere between $0.0002–$0.01 depending on model and length.
As an example (using GPT-4.1 mini): assume each message you send is about 500 tokens (roughly 400 words) and each response is about the same, so a full exchange is ~1,000 tokens. At that rate, $10 of credit nets you roughly ten thousand exchanges. Dunno how much you use GPT, but that should last you a lot longer than $20 USD/month on the paid tier.
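If you want to sanity-check that math yourself, here's a quick back-of-the-envelope calculation in Python. The per-million-token rates are assumptions for GPT-4.1 mini at time of writing; check your provider's current pricing before trusting the exact numbers.

```python
# Back-of-the-envelope cost check. Rates below are illustrative assumptions
# for GPT-4.1 mini; substitute whatever your provider currently charges.
INPUT_RATE = 0.40 / 1_000_000   # $ per input token (assumed)
OUTPUT_RATE = 1.60 / 1_000_000  # $ per output token (assumed)

tokens_in = 500    # your message
tokens_out = 500   # the model's reply

cost_per_exchange = tokens_in * INPUT_RATE + tokens_out * OUTPUT_RATE
print(f"cost per exchange: ${cost_per_exchange:.4f}")        # ~$0.0010
print(f"exchanges per $10: {10 / cost_per_exchange:,.0f}")   # ~10,000
```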
PS: there are no limits on how many chats you can have per day (other than your token/credit balance), and the service is as fast as (if not faster than) the OpenAI web UI, though it may lack access to the most bleeding-edge GPT models.
Two easy ways to do it:
1. Direct OpenAI API purchase
- Create an API key at platform.openai.com
- Point your app/client to https://api.openai.com/v1/chat/completions (see the Python sketch after this list)
- You pay only for tokens you actually use
2. OpenRouter (the better way IMHO; multi-model gateway)
- Instead, make an account at openrouter.ai
- Get an API key from there and plug that into your app/client
- Pay $10 for credits (once off; top up as needed)
- Point your client to https://openrouter.ai/api/v1/chat/completions
- Or use it directly via the web UI at https://openrouter.ai/models
- Select from many models, including OpenAI’s, Anthropic’s, Meta’s, Qwen, Mistral, etc.
- Paying $10 once gives you 1000 free chats/day on OpenRouter's free-tier models (GPT-OSS 20B etc), even after you blow thru your $10. Free models can still be rate-limited, though (e.g. Mistral 7B multi-modal gets spanked pretty hard at all times, while GPT-OSS 20B is generally easy to get access to).
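If you're curious what your client is doing under the hood, here's a minimal Python sketch (using the `requests` library) of a single chat completion call. The key, model name, and prompt are placeholders; the only real difference between option 1 and option 2 is the base URL and which key you use.

```python
import requests

# Pick one endpoint: OpenRouter (option 2) or OpenAI direct (option 1).
# The key and model name below are placeholders; substitute your own.
URL = "https://openrouter.ai/api/v1/chat/completions"  # or https://api.openai.com/v1/chat/completions
API_KEY = "sk-..."                                     # key from openrouter.ai or platform.openai.com
MODEL = "openai/gpt-4.1-mini"                          # OpenRouter naming; plain "gpt-4.1-mini" on OpenAI

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Explain tokens in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

print(data["choices"][0]["message"]["content"])  # the assistant's reply
print(data.get("usage"))                         # token counts, handy for tracking cost
```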
Why people do this:
- Significantly cheaper than ChatGPT Plus for heavy users
- Works with local clients and apps
- You choose GPT-4.1, you get GPT-4.1; no silent routing to other models. You get what you pay for.
- Good for automation, coding, RAG, testing prompts, or bulk tasks
If you’ve only used ChatGPT through the web UI, this is the cheapest upgrade you can make.
Client recommendations for non-techy users
Download one of these and follow the setup instructions they provide:
https://www.jan.ai/ <— Windows, Mac, Linux. Easy, simple setup if you just use your computer
https://github.com/open-webui/open-webui <— more complex setup but generally a touch faster. Once set up, it can "serve" your models both on your computer (via URL) and to a phone app.
Apps:
https://github.com/cogwheel0/conduit <— iPhone/Android
https://f-droid.org/packages/io.github.stardomains3.oxproxion/ <— Android (F-Droid)
Hope this helps someone that needs it.
EDIT: See the discussion below regarding persistent memory and a workaround for it.