Exploring the ChatGPT Apps SDK with a Simple “Hello Widget”

Recently, OpenAI introduced the ChatGPT Apps SDK, allowing developers to build interactive apps that run inside ChatGPT itself.

https://openai.com/index/introducing-apps-in-chatgpt/

While the official docs include several examples, I wanted to try something even simpler — a “Hello Widget” that demonstrates the basic flow of creating and displaying a custom widget within ChatGPT.

Creating the Server

Let’s start by creating a basic MCP (Model Context Protocol) server that serves a simple widget and greets the user.

import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";

const server = new McpServer({ name: "hello-server", version: "1.0.0" });

server.registerResource(
  "hello-widget",
  "ui://widget/hello.html",
  {},
  async () => ({
    contents: [
      {
        uri: "ui://widget/hello.html",
        mimeType: "text/html+skybridge",
        text: `
<div style="font-family:system-ui,sans-serif;padding:12px;border:1px solid #e5e7eb;border-radius:8px;">
  <h3 style="margin:0 0 8px 0;">👋 Hello Widget</h3>
  <div id="out" style="font-size:13px;color:#374151;">Loading...</div>
  <script type="module">
    const out = document.getElementById('out');
    // Display initial state
    const render = (data) => {
      if (data?.name) out.textContent = \`Hello, \${data.name}!\`;
      else out.textContent = 'No structuredContent received.';
    };
    // Render initial data
    render(window.openai?.toolOutput ?? {});
    // Re-render when new data arrives from the tool
    if (window.openai?.onToolOutput) {
      window.openai.onToolOutput((newData) => render(newData));
    }
  </script>
</div>
        `.trim(),
      },
    ],
  })
);

server.registerTool(
  "show_hello",
  {
    title: "Show Hello Widget",
    description: "Displays a simple Hello message inside a widget.",
    inputSchema: {
      name: z.string().describe("Name of the person to greet"),
    },
    _meta: {
      "openai/outputTemplate": "ui://widget/hello.html",
      "openai/toolInvocation/invoking": "Loading widget…",
      "openai/toolInvocation/invoked": "Widget displayed!",
    },
  },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }],
    structuredContent: { name },
  })
);

Using registerResource, we register an HTML resource at the path ui://widget/hello.html.
This resource defines the UI that will be displayed inside ChatGPT.

If you look at the text property in registerResource, it contains a simple <div> element that displays a greeting message.
Through onToolOutput(), the widget updates dynamically whenever the structuredContent changes.
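That update flow is easiest to see as a pure function: the widget takes whatever structuredContent it receives and maps it to a line of text. A minimal sketch (the `ToolOutput` type and `greetingText` name are mine for illustration, not part of the Apps SDK):

```typescript
// Sketch of the widget's render logic as a pure function.
// `ToolOutput` mirrors the `structuredContent` shape returned by the tool.
type ToolOutput = { name?: string };

function greetingText(data: ToolOutput): string {
  return data.name ? `Hello, ${data.name}!` : "No structuredContent received.";
}

console.log(greetingText({ name: "Ada" })); // Hello, Ada!
console.log(greetingText({}));              // No structuredContent received.
```

Keeping the rendering logic this small makes it easy to see that the widget is just a view over whatever data the tool last produced.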

The registerTool section is where we register the tool itself — essentially an endpoint that ChatGPT can call.
Using the _meta field and the openai/outputTemplate property, we specify which widget should appear when this tool is invoked.
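Notice that the handler returns two channels at once. To make that concrete, here is its return shape in isolation, with simplified types (the SDK's actual types are richer; `ToolResult` and `showHello` here are illustrative names):

```typescript
// Sketch of the tool result's two channels, decoupled from the SDK:
// `content` is the text the model reads; `structuredContent` is the data
// the widget receives as its tool output.
type ToolResult = {
  content: { type: "text"; text: string }[];
  structuredContent: { name: string };
};

function showHello(name: string): ToolResult {
  return {
    content: [{ type: "text", text: `Hello, ${name}!` }],
    structuredContent: { name },
  };
}

const result = showHello("Ada");
console.log(result.content[0].text);        // Hello, Ada!
console.log(result.structuredContent.name); // Ada
```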

Finally, we’ll use Express to serve this code and make it accessible.

// Continuing from the code above: serve the MCP server over HTTP
const app = express();
app.use(express.json({ type: "*/*" }));

app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true,
  });
  res.on("close", () => transport.close());
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

const port = parseInt(process.env.PORT || "3000", 10);
app
  .listen(port, () => {
    console.log(`MCP server running: http://localhost:${port}/mcp`);
  })
  .on("error", (err) => {
    console.error("Server error:", err);
    process.exit(1);
  });

Checking with the Inspector

To verify the MCP server works correctly, we can use the Model Context Protocol Inspector:

npx @modelcontextprotocol/inspector

Set the Transport Type to “Streamable HTTP” and the URL to http://localhost:3000/mcp.

⚠️ Be sure to launch your Express server first!

Also, make sure to connect using the pre-filled link printed in the Inspector's terminal output, which includes your PROXY_AUTH_TOKEN.

Once connected, clicking List Resources should display your hello-widget.
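Under the hood, when the Inspector calls our tool it sends a JSON-RPC 2.0 `tools/call` request to the endpoint. A sketch of that envelope (the field names follow the MCP specification; `buildToolCall` is an illustrative helper, not an SDK function):

```typescript
// Sketch of the JSON-RPC envelope an MCP client sends to call our tool.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const payload = buildToolCall(1, "show_hello", { name: "Ada" });
console.log(JSON.stringify(payload));
```

Seeing the wire format helps when debugging: if the Inspector works but ChatGPT does not, you can POST a request like this to the endpoint yourself and inspect the raw response.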

Connecting to ChatGPT

Now let’s connect the app to ChatGPT.

You can expose your local server using a tunneling tool like ngrok:

ngrok http 3000

Note that this feature only works in the web version of ChatGPT — it is not yet supported in the standalone desktop or mobile apps.

First, you need to enable Development Mode.
You can do this by navigating to: Settings → Apps & Connectors → Advanced.

Now we’ll register the app we created earlier.

Go back to Settings → Apps & Connectors, then click Create at the top.

Enter the Name and the MCP Server URL generated by ngrok, then set Auth to No.

Click Create, and you’ll see your newly created app appear in the list.

Click the “+” button inside ChatGPT → More, and you should see your app listed.

Even without invoking the tool explicitly, simply asking ChatGPT to say hello was enough for it to call the Hello App automatically.

After clicking Confirm, the widget we created earlier appeared perfectly.

During my testing, I noticed that onToolOutput() was triggered intermittently — this seems to be something that needs a bit more investigation.

Wrapping Up

And that’s it! You’ve built and deployed a minimal ChatGPT Apps SDK example — a simple “Hello Widget”.

OpenAI’s documentation also shows examples of widgets that display videos, draw images, and more.

The possibilities are vast — this SDK opens the door to building rich, interactive experiences inside ChatGPT.
