How do I send 1 prompt to multiple LLM APIs (ChatGPT, Gemini, Perplexity) and auto-merge their answers into a unified output?

Hey everyone — I’m trying to build a workflow where:
1. I type one prompt.
2. It automatically sends that prompt to:
• ChatGPT API
• Gemini 3 API
• Perplexity Pro API (if possible — unsure if they provide one?)
3. It receives all three responses.
4. It combines them into a single, cohesive answer.

Basically: a “Meta-LLM orchestrator” that compares and synthesizes multiple model outputs.
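For the fan-out part (steps 1–3), plain `asyncio` gets you surprisingly far. Here's a minimal sketch: the `fan_out` helper and the stub providers are my own illustrative names, and in a real build each stub would be replaced by an async call to the corresponding API client with your keys.

```python
import asyncio

async def fan_out(prompt, providers):
    """Send one prompt to every provider concurrently.

    `providers` maps a model name to an async callable that takes the
    prompt string and returns that model's text response.
    """
    names = list(providers)
    results = await asyncio.gather(
        *(providers[name](prompt) for name in names),
        return_exceptions=True,  # one failing API shouldn't sink the rest
    )
    return dict(zip(names, results))

# Stub providers standing in for real ChatGPT/Gemini/Perplexity calls.
async def stub_chatgpt(prompt):
    return f"chatgpt: {prompt}"

async def stub_gemini(prompt):
    return f"gemini: {prompt}"

answers = asyncio.run(
    fan_out("hello", {"chatgpt": stub_chatgpt, "gemini": stub_gemini})
)
print(answers)
```

Because `return_exceptions=True` is set, a timeout or auth error from one provider shows up as an exception object in the result dict instead of killing the whole batch, so you can still merge whatever came back.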

I can use either:
• Python (open to FastAPI, LangChain, or just raw requests)
• No-code/low-code tools (Make.com, Zapier, Replit, etc.)

Questions:
1. What’s the simplest way to orchestrate multiple LLM API calls?
2. Is there a known open-source framework already doing this?
3. Does Perplexity currently offer a public API you can send prompts to programmatically, or is Pro consumer-only?
4. Any tips on merging responses intelligently? (rank, summarize, majority consensus?)
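On question 4: one common pattern is to hand all candidate answers back to a single "judge" model and ask it to synthesize them (sometimes called LLM-as-judge). The prompt-building step is the part worth getting right; here's a sketch, where `build_merge_prompt` is a hypothetical helper name and the instruction wording is just one option:

```python
def build_merge_prompt(question, answers):
    """Build a synthesis prompt that feeds every model's answer back
    to one judge model, asking it to merge them into a single reply.
    """
    parts = [f"Question: {question}", "",
             "Candidate answers from different models:"]
    for name, text in answers.items():
        parts.append(f"--- {name} ---")
        parts.append(text)
    parts += ["",
              "Merge these into one cohesive answer. Note where the models "
              "agree, and flag any contradictions explicitly."]
    return "\n".join(parts)

merge_prompt = build_merge_prompt(
    "What is the capital of Australia?",
    {"chatgpt": "Canberra.", "gemini": "The capital is Canberra."},
)
print(merge_prompt)
```

You'd then send `merge_prompt` through any one of the three APIs as a final call. Asking the judge to flag contradictions (rather than silently averaging) tends to matter more than ranking or majority voting when the models disagree on facts.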

Happy to share progress or open-source whatever I build.
Thanks!
