
I'm 14, and for the past six months I've been working hard to build a service that improves the quality of raw prompts.
The Tech Stack:
I built the frontend in React, and the backend calls the Gemini 3 Pro Preview API (falling back to Gemini 2.5 Pro/Flash). I wanted to see how the new model handles complex context injection compared to older versions.
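To give a rough idea of what the backend does, here is a minimal Python sketch of how a raw prompt could be wrapped in an improvement instruction and packaged for the Gemini `generateContent` REST endpoint. The `IMPROVE_TEMPLATE` wording is a hypothetical stand-in (my actual instructions aren't shown here), and the sketch only builds the request body rather than sending it:

```python
import json

# Hypothetical meta-prompt wrapper: the service's real instructions are
# not shown in this post, so this template is an illustrative stand-in.
IMPROVE_TEMPLATE = (
    "Rewrite the following raw prompt so it is specific, well-structured, "
    "and gives the model clear context and constraints.\n\n"
    "Raw prompt:\n{raw}"
)

def build_generate_content_payload(raw_prompt: str) -> dict:
    """Build the JSON body for the Gemini generateContent REST endpoint."""
    return {
        "contents": [
            {"parts": [{"text": IMPROVE_TEMPLATE.format(raw=raw_prompt)}]}
        ]
    }

payload = build_generate_content_payload("write me a story about a robot")
print(json.dumps(payload, indent=2))

# This body would then be POSTed (with an API key) to, e.g.:
# https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
```

The same shape works for any model name in the URL, which is how the fallback between model versions can be kept to a one-line change.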
I'm not sure I've fully succeeded yet, but I'd be happy if you followed along with the journey.
Below is a test case.
The Input (My Raw Request):
The Results (Generated by my tool + Gemini 3):
- Standard Generation: Link to GitHub Gist
- Improved via Extra Feature: Link to GitHub Gist
I'd love to hear your feedback on the logic! 🙂
