4 days ago OpenAI released ChatGPT apps, which let you build apps that are hosted inside of ChatGPT. 2 days later, we released Magic Cloud “widgets”. A widget is a micro app that can be hosted inside an AINIRO AI chatbot. This changes your chatbot from a static thing into something that can spawn apps on demand to solve the problem at hand.
OpenAI has about 700 million unique users per week, which I believe makes them the second or third largest website on earth. Since no existing apps can be consumed as ChatGPT apps, this implies that 100% of the software we have ever built needs to be built again if you want to take advantage of the distribution OpenAI gives you!
Widgets
Two days after OpenAI released ChatGPT apps, we released “widgets” at AINIRO. A widget is similar to a ChatGPT app, in that it contains a frontend, a backend, maybe a database somewhere, and it can be dynamically injected into the chatbot stream. However, a widget cannot be hosted inside of ChatGPT.
A widget can, however, be injected into your AINIRO chatbot on demand, when the LLM decides it should display one. This makes it semantically similar to a ChatGPT “app”, even though technically it’s not!
Below is a screenshot of such a widget.
The above simply displays a Chuck Norris joke when the button is clicked, but there is no ceiling on the complexity of such apps. Anything you can do with a classic web app can just as easily be done with an AINIRO “widget”.
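To make the idea concrete, here is a minimal sketch of what such a joke widget could look like as plain browser JavaScript. It uses the public chucknorris.io API; the AINIRO-specific wiring is omitted, and the structure (a separate `getJoke` function plus DOM glue) is my own illustration, not AINIRO’s generated code.

```javascript
// Fetch a random joke. The fetch implementation is injectable to keep
// the function easy to test without network access.
async function getJoke(fetchImpl = fetch) {
  const response = await fetchImpl("https://api.chucknorris.io/jokes/random");
  const data = await response.json();
  return data.value; // the API returns the joke text in its `value` field
}

// DOM wiring: only runs in a browser, where `document` exists.
if (typeof document !== "undefined") {
  const button = document.createElement("button");
  button.textContent = "Tell me a joke";
  const output = document.createElement("p");
  button.addEventListener("click", async () => {
    output.textContent = await getJoke();
  });
  document.body.append(button, output);
}
```

The point is that the widget is just an ordinary web frontend, which is why anything a classic web app can do, a widget can do too.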
With AINIRO though, the above app was created using natural English, and no code had to be written manually. Everything was 100% generated by the AI.
Micro Apps
This completely changes how we create and deliver software. First of all, purely logically, we have to break down our monolithic frontends into a host of “micro apps.”
Nobody wants to deliver monoliths, but previously we were forced to create them, because we needed the ability to “orchestrate” our forms, widgets, and components. With ChatGPT apps and AINIRO widgets, the orchestration is done by the LLM, allowing us to create micro apps that each solve one single thing, and then prompt engineer them into the LLM so that it dynamically displays them when needed.
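One way to picture that orchestration step: each micro app ships with a short natural-language description that is prompt engineered into the LLM, which then decides when to surface the widget. The descriptor below is purely hypothetical — the field names are mine for illustration, not AINIRO’s actual schema.

```json
{
  "name": "chuck-norris-joke",
  "description": "Display this widget when the user asks for a joke or wants to be entertained.",
  "frontend": "/widgets/chuck-norris-joke.html",
  "backend": "/api/jokes/random"
}
```

The LLM never needs to know how the widget works internally; it only matches the user’s intent against the description, exactly as it would match against a tool or function declaration.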
This way of thinking is surprisingly similar to what we used to call “micro services”, where we split our monolithic backends into multiple services, each solving only one problem.
To understand how you can use Magic to create such apps, you can watch the following video. But basically, with AINIRO you can create “ChatGPT apps” without having to code: 100% no-code and gen-AI based!
If you want to read more about how to start creating such micro apps, and look at some code, you can read my article about it below.