Google’s Gemini 3 and early signs of Generative UI – curious what people think.


Google released Gemini 3 today, and with it we’re already seeing glimpses of Generative UI rolling out in Google Search and the Gemini app.

This feels like a meaningful shift. Generative UI is starting to go mainstream.

I’m personally excited because it backs up something I’ve been saying for a while: human–machine interaction won’t stay text-only.

A lot of tasks make more sense with adaptive, context-aware UI that the model can generate on the fly.
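For anyone who hasn’t seen the pattern, here’s a rough sketch of what “generate UI on the fly” usually means in practice: the model returns a structured UI spec (instead of, or alongside, prose), and the client maps that spec to real components. Everything below is hypothetical (the `UISpec` shape, `callModel`, the HTML renderer), just to illustrate the idea, not any particular vendor’s API:

```ts
// Hypothetical sketch of the Generative UI pattern: the model emits a
// structured UI spec (JSON) and the client renders it. None of these
// names come from a real API.

type UISpec =
  | { type: "text"; content: string }
  | { type: "button"; label: string; action: string }
  | { type: "form"; fields: { name: string; label: string }[] }
  | { type: "stack"; children: UISpec[] };

// Stand-in for an LLM call that has been prompted (or constrained via a
// JSON schema) to answer with a UISpec instead of free-form text.
async function callModel(prompt: string): Promise<UISpec> {
  return {
    type: "stack",
    children: [
      { type: "text", content: `Results for: ${prompt}` },
      { type: "form", fields: [{ name: "dates", label: "Travel dates" }] },
      { type: "button", label: "Search flights", action: "search" },
    ],
  };
}

// Client-side renderer: walks the spec and produces markup. A real app
// would map this to React/native components rather than an HTML string.
function render(spec: UISpec): string {
  switch (spec.type) {
    case "text":
      return `<p>${spec.content}</p>`;
    case "button":
      return `<button data-action="${spec.action}">${spec.label}</button>`;
    case "form":
      return `<form>${spec.fields
        .map((f) => `<label>${f.label}<input name="${f.name}"></label>`)
        .join("")}</form>`;
    case "stack":
      return `<div>${spec.children.map(render).join("")}</div>`;
  }
}

callModel("flights to Tokyo").then((spec) => console.log(render(spec)));
```

The interesting part is that the spec is generated per request, so the same prompt surface can come back as a chart, a form, or plain text depending on context.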

I work at Thesys, and we’ve been building infrastructure around this idea for some time, so it’s interesting to finally see a big player push this direction into real products.

Curious how others here view this shift. Is Generative UI something you see becoming common? Or is it still too early for most AI apps?

If you’re experimenting with Generative UI yourself and want to integrate it into your app, our API is free to try: https://thesys.dev/

Would love feedback from folks here.
