ChatPilot
AI receptionist that handles WhatsApp, Instagram, Telegram, and Messenger simultaneously
What I learned
Channel Provider Pattern: the only way to scale multi-channel
Each messaging platform has a completely different webhook format, authentication scheme, and message structure. Without a Provider Pattern normalizing everything into a common `MessageEvent` type, adding a fifth channel would mean rewriting the AI pipeline. With it, a new channel is ~200 lines: implement the interface, register the provider, done.
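A minimal sketch of what the pattern can look like in TypeScript. `ChannelProvider`, the `MessageEvent` fields, and the registry are illustrative names and shapes, not the project's actual code; the Telegram payload handling follows Telegram's real update format.

```typescript
// One normalized shape the AI pipeline sees, regardless of platform.
interface MessageEvent {
  channel: "whatsapp" | "instagram" | "telegram" | "messenger";
  senderId: string;
  text: string;
  timestamp: number; // ms since epoch
}

interface ChannelProvider {
  name: string;
  // Normalize a raw, platform-specific webhook payload into a MessageEvent.
  parseWebhook(payload: unknown): MessageEvent | null;
  // Deliver the AI's reply using the platform's own API.
  sendReply(to: string, text: string): Promise<void>;
}

// Registry: the pipeline looks providers up by name, nothing else.
const providers = new Map<string, ChannelProvider>();

function registerProvider(p: ChannelProvider): void {
  providers.set(p.name, p);
}

// Example provider: Telegram updates carry message.from.id / text / date.
const telegramProvider: ChannelProvider = {
  name: "telegram",
  parseWebhook(payload: unknown): MessageEvent | null {
    const update = payload as {
      message?: { from?: { id: number }; text?: string; date?: number };
    };
    if (!update.message?.text || !update.message.from) return null;
    return {
      channel: "telegram",
      senderId: String(update.message.from.id),
      text: update.message.text,
      timestamp: (update.message.date ?? 0) * 1000, // Telegram dates are in seconds
    };
  },
  async sendReply(to, text) {
    // Would call Telegram's sendMessage API here.
  },
};

registerProvider(telegramProvider);
```

Adding a new channel means writing one more object like `telegramProvider` and registering it; the AI pipeline never changes.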
Making Telegram work in demo mode
Telegram requires a public HTTPS URL for webhooks — a problem during local development. The solution was a dual-mode setup: webhook mode for production (Vercel) and polling mode for development (local). A single env variable switches between them. Zero code changes when deploying.
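A sketch of the single-variable switch, under the assumption of an env variable named `TELEGRAM_MODE`; the variable name and `startTelegram` are illustrative, and the Telegram API calls are shown only as comments.

```typescript
type TelegramMode = "webhook" | "polling";

// Default to webhook (production); opt into polling for local dev.
function resolveTelegramMode(env: Record<string, string | undefined>): TelegramMode {
  return env.TELEGRAM_MODE === "polling" ? "polling" : "webhook";
}

async function startTelegram(env: Record<string, string | undefined>): Promise<TelegramMode> {
  const mode = resolveTelegramMode(env);
  if (mode === "webhook") {
    // Production (Vercel): register the public HTTPS endpoint once, e.g.
    // POST https://api.telegram.org/bot<token>/setWebhook?url=<public-url>
  } else {
    // Development: long-poll getUpdates in a loop; no public URL needed, e.g.
    // GET https://api.telegram.org/bot<token>/getUpdates?offset=<last+1>
  }
  return mode;
}
```

Both modes feed the same parsed-update handler, which is what keeps the deployment switch code-free.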
AI response quality: context is everything
Out of the box, GPT-4o responses are generic. The quality jump came from injecting: (1) the business name and description, (2) the last 10 messages as conversation history, and (3) the user's intent, detected by a preliminary classification call. This three-pass prompting added ~2 seconds of latency but raised the share of relevant responses from 60% to 94%.
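The context-injection step can be sketched as assembling the chat messages before the final completion call; `Business`, `buildMessages`, and the system-prompt wording are illustrative assumptions, not the project's actual prompt.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface Business {
  name: string;
  description: string;
}

function buildMessages(
  business: Business,
  history: ChatMessage[], // prior conversation turns
  intent: string,         // output of the preliminary classification call
  userText: string,
): ChatMessage[] {
  // (1) business identity + (3) detected intent go into the system prompt.
  const system =
    `You are the receptionist for ${business.name}. ${business.description}\n` +
    `The customer's detected intent is: ${intent}. Answer accordingly.`;
  return [
    { role: "system", content: system },
    ...history.slice(-10), // (2) only the last 10 messages of history
    { role: "user", content: userText },
  ];
}
```

The resulting array is what gets sent as the final GPT-4o request, after the separate classification pass has produced `intent`.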
What I'd do differently
I'd use Inngest for the AI pipeline instead of synchronous Next.js API routes. The 8-second average response time is at the edge of acceptability — a proper job queue would allow streaming responses back to the user as GPT generates them, dropping perceived latency to under 2 seconds.
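The latency argument is illustrated below with a plain async-generator sketch (a stand-in for a real model stream, not Inngest's or OpenAI's API): the caller sees the first token almost immediately instead of waiting for the full completion.

```typescript
// Stand-in for a token stream from the model; the chunks are fake.
async function* fakeModelStream(chunks: string[]): AsyncGenerator<string> {
  for (const c of chunks) yield c;
}

// Forward each token to the user as it arrives, then return the full text.
async function streamReply(
  chunks: string[],
  onToken: (t: string) => void,
): Promise<string> {
  let full = "";
  for await (const token of fakeModelStream(chunks)) {
    full += token;
    onToken(token); // perceived latency = time to the FIRST token, not the last
  }
  return full;
}
```

With a synchronous API route, the user waits for the whole `full` string; with streaming, `onToken` fires as generation proceeds, which is where the sub-2-second perceived latency would come from.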
Want me to build something similar for you?
Hire me for your project →