AI Chatbot iMessage Integration: Connect Any LLM to Text Messaging
Most AI chatbots live on web widgets that nobody clicks. The ones that actually get used? They meet people where they already are: iMessage. Here is how to connect any LLM — OpenAI, Claude, Gemini, or open-source models — to iMessage using the Sendblue API, so your AI chatbot sends and receives native blue-bubble messages.
Why iMessage for AI Chatbots?
Web-based chatbots have an engagement problem. Users close the tab and forget about them. SMS gets filtered by carriers under A2P 10DLC rules. Email sits unread. iMessage is different — and the numbers show it:
98% Open Rate
iMessages get opened and read almost immediately. Compare that to 20% for email or the increasingly filtered SMS channel.
Native Experience
Your AI chatbot appears as a regular iMessage conversation. No app to download, no link to click, no widget to find. Blue bubbles from a real phone number.
Typing Indicators
Show the typing bubble while your LLM generates a response. This makes the conversation feel human and natural — users wait patiently instead of abandoning the conversation.
No A2P Registration
iMessage bypasses carrier filtering entirely. No 10DLC registration, no campaign approvals, no throughput limits. Send immediately after signup.
The core insight: an AI chatbot iMessage integration turns your LLM from a demo into a product. When your chatbot lives in the same thread as messages from friends and family, engagement rates are 5-10x higher than any web interface.
Architecture: How AI Chatbot iMessage Integration Works
The architecture for connecting an LLM to iMessage through Sendblue is straightforward. Here is the complete message flow:
User sends iMessage
|
v
Sendblue receives message on your dedicated number
|
v
Sendblue POSTs webhook to your server (JSON payload)
|
v
Your server extracts message content + sender phone number
|
v
Your server calls LLM API (OpenAI / Claude / Gemini / etc.)
|
v
LLM generates response
|
v
Your server calls Sendblue send API with response text
|
v
User receives reply as native iMessage (blue bubble)
Every component is stateless except your conversation store. Sendblue handles all the iMessage infrastructure — you never need a Mac server, an Apple Developer account, or any Apple hardware. Your server is a simple HTTP endpoint that brokers between Sendblue webhooks and your LLM of choice.
The LLM text messaging API pattern works with any provider. The Sendblue side is always the same: receive a webhook, send a reply. The only thing that changes is which LLM you call in between.
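Before the full implementations, it helps to see the payload shape the handlers work with. The sketch below shows an illustrative inbound payload and the extraction step; the `content`, `number`, and `media_url` field names match the handlers in this guide, but treat the exact payload shape as an assumption and check Sendblue's webhook docs for the full field list.

```python
# Illustrative inbound webhook payload (field names assumed from the
# handlers in this guide; the real payload may carry extra fields)
payload = {
    "content": "What can you do?",
    "number": "+15551234567",
    "media_url": None,
}

def extract_message(payload: dict):
    """Return (content, number), or (None, None) if either is missing."""
    content = payload.get("content")
    number = payload.get("number")
    if not content or not number:
        return None, None
    return content, number

content, number = extract_message(payload)
```

Whatever else the payload contains, these two fields are all a text-only chatbot needs: the message to answer and the number to reply to.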
Step-by-Step: Build an iMessage AI Chatbot (Node.js)
This is a complete, working implementation. You can copy this, set your environment variables, deploy it, and have a working AI chatbot on iMessage in under 10 minutes.
1. Install dependencies
mkdir imessage-ai-chatbot && cd imessage-ai-chatbot
npm init -y
npm install express sendblue openai @anthropic-ai/sdk
2. Set environment variables
# .env
SENDBLUE_API_KEY=your_sendblue_api_key
SENDBLUE_API_SECRET=your_sendblue_api_secret
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
PORT=3000
3. Complete server with OpenAI integration
import express from 'express';
import Sendblue from 'sendblue';
import OpenAI from 'openai';
const app = express();
app.use(express.json());
// Initialize clients
const sendblue = new Sendblue(
  process.env.SENDBLUE_API_KEY,
  process.env.SENDBLUE_API_SECRET
);
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
// In-memory conversation store (use Redis or a DB in production)
const conversations = new Map();
// System prompt for your AI chatbot
const SYSTEM_PROMPT = `You are a helpful assistant that communicates
over iMessage. Keep responses concise (under 300 characters when
possible) since this is a text conversation. Be friendly and natural.
Do not use markdown formatting — plain text only.`;
// Webhook endpoint — Sendblue POSTs here when a message arrives
app.post('/webhook/inbound', async (req, res) => {
  // Acknowledge immediately so Sendblue doesn't retry
  res.status(200).json({ status: 'received' });
  const { content, number, media_url } = req.body;
  if (!content || !number) return;
  try {
    // Send typing indicator while LLM processes
    await sendblue.sendTypingIndicator({ number });
    // Retrieve or initialize conversation history
    if (!conversations.has(number)) {
      conversations.set(number, []);
    }
    const history = conversations.get(number);
    // Add the user's message
    history.push({ role: 'user', content });
    // Keep last 20 messages to manage token usage
    if (history.length > 20) {
      history.splice(0, history.length - 20);
    }
    // Call OpenAI
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: SYSTEM_PROMPT },
        ...history,
      ],
      max_tokens: 500,
    });
    const reply = completion.choices[0].message.content;
    // Store assistant response in history
    history.push({ role: 'assistant', content: reply });
    // Send reply via iMessage through Sendblue
    await sendblue.sendMessage({
      number,
      content: reply,
    });
    console.log(`Replied to ${number}: ${reply.substring(0, 50)}...`);
  } catch (error) {
    console.error('Error processing message:', error);
    // Send a fallback message so the user isn't left hanging;
    // guard it so a second failure doesn't throw unhandled
    try {
      await sendblue.sendMessage({
        number,
        content: "Sorry, I hit a snag. Try again in a moment.",
      });
    } catch (sendError) {
      console.error('Fallback message also failed:', sendError);
    }
  }
});
// Health check
app.get('/health', (req, res) => {
  res.json({ status: 'ok', uptime: process.uptime() });
});
app.listen(process.env.PORT || 3000, () => {
  console.log(`AI chatbot server running on port ${process.env.PORT || 3000}`);
});
4. Swap to Claude (Anthropic) — drop-in replacement
Want to use Claude instead of GPT-4? Replace the LLM call. Everything else stays the same:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
// Replace the OpenAI call with this:
async function generateReply(history) {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 500,
    system: SYSTEM_PROMPT,
    messages: history, // Same format: { role, content }
  });
  return response.content[0].text;
}
5. Configure your Sendblue webhook
In your Sendblue dashboard, set your inbound message webhook URL to your server's public URL:
https://your-server.com/webhook/inbound
For local development, use ngrok to expose your local server:
ngrok http 3000
# Then set the webhook URL to: https://abc123.ngrok.io/webhook/inbound
Python Implementation
Here is the same AI chatbot iMessage integration in Python using Flask. This is a complete working server:
Install dependencies
pip install flask requests openai anthropic
Complete Flask server
import os
import requests
from flask import Flask, request, jsonify
from openai import OpenAI
app = Flask(__name__)
# Sendblue credentials
SB_API_KEY = os.environ["SENDBLUE_API_KEY"]
SB_API_SECRET = os.environ["SENDBLUE_API_SECRET"]
SB_BASE_URL = "https://api.sendblue.co/api"
# OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Conversation store
conversations: dict[str, list] = {}
SYSTEM_PROMPT = (
    "You are a helpful assistant communicating over iMessage. "
    "Keep responses concise. Use plain text, no markdown."
)
def sendblue_headers():
    return {
        "sb-api-key-id": SB_API_KEY,
        "sb-api-secret-key": SB_API_SECRET,
        "Content-Type": "application/json",
    }
def send_imessage(number: str, content: str):
    """Send an iMessage via Sendblue API."""
    resp = requests.post(
        f"{SB_BASE_URL}/send-message",
        headers=sendblue_headers(),
        json={"number": number, "content": content},
    )
    resp.raise_for_status()
    return resp.json()
def send_typing_indicator(number: str):
    """Show typing indicator to the user."""
    requests.post(
        f"{SB_BASE_URL}/send-typing-indicator",
        headers=sendblue_headers(),
        json={"number": number},
    )
@app.route("/webhook/inbound", methods=["POST"])
def inbound_webhook():
    # silent=True returns None instead of raising on a non-JSON body
    data = request.get_json(silent=True) or {}
    content = data.get("content")
    number = data.get("number")
    if not content or not number:
        return jsonify({"status": "ignored"}), 200
    # Show typing indicator
    send_typing_indicator(number)
    # Manage conversation history
    if number not in conversations:
        conversations[number] = []
    history = conversations[number]
    history.append({"role": "user", "content": content})
    # Trim to last 20 messages
    if len(history) > 20:
        conversations[number] = history[-20:]
        history = conversations[number]
    try:
        # Call OpenAI
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                *history,
            ],
            max_tokens=500,
        )
        reply = completion.choices[0].message.content
        # Store and send
        history.append({"role": "assistant", "content": reply})
        send_imessage(number, reply)
    except Exception as e:
        print(f"Error: {e}")
        try:
            send_imessage(number, "Sorry, something went wrong. Try again.")
        except requests.RequestException:
            pass  # don't let the fallback crash the webhook response
    return jsonify({"status": "ok"}), 200
@app.route("/health")
def health():
    return jsonify({"status": "ok"})
if __name__ == "__main__":
    app.run(port=int(os.environ.get("PORT", 3000)))
Using Claude in Python
import anthropic
claude = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
def generate_reply(history: list) -> str:
    response = claude.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=500,
        system=SYSTEM_PROMPT,
        messages=history,
    )
    return response.content[0].text
Advanced Features
Once your basic AI chatbot iMessage integration is working, here are the features that make it production-ready:
Typing Indicators
Send a typing indicator immediately when you receive a message. This tells the user the chatbot is "thinking" and dramatically reduces abandonment during LLM generation time:
// Send typing indicator before calling your LLM
await sendblue.sendTypingIndicator({ number });
// For long-running generations, you can refresh it
const typingInterval = setInterval(async () => {
  await sendblue.sendTypingIndicator({ number });
}, 10000);
const reply = await generateReply(history);
clearInterval(typingInterval);
Read Receipts
Sendblue delivers read receipt callbacks to your webhook. Use them to track engagement and trigger follow-ups:
app.post('/webhook/status', async (req, res) => {
  res.status(200).json({ status: 'received' });
  const { number, status, message_handle } = req.body;
  if (status === 'read') {
    console.log(`Message to ${number} was read`);
    // Trigger follow-up logic, update analytics, etc.
  }
});
Media Handling (Images, Videos)
Your AI chatbot can receive and send images. Handle inbound media and optionally pass it to a vision model:
app.post('/webhook/inbound', async (req, res) => {
  res.status(200).json({ status: 'received' });
  const { content, number, media_url } = req.body;
  let messages = [];
  if (media_url) {
    // Pass image to GPT-4o vision
    messages.push({
      role: 'user',
      content: [
        { type: 'text', text: content || 'What is in this image?' },
        { type: 'image_url', image_url: { url: media_url } },
      ],
    });
  } else {
    messages.push({ role: 'user', content });
  }
  // ... continue with LLM call and Sendblue reply
});
// Sending media back to the user
await sendblue.sendMessage({
  number,
  content: 'Here is the image you requested:',
  media_url: 'https://your-server.com/generated-image.png',
});
Persistent Conversation Memory
For production, replace the in-memory Map with Redis or a database. Here is a Redis example:
import { createClient } from 'redis';
const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();
async function getHistory(number) {
  const raw = await redis.get(`conv:${number}`);
  return raw ? JSON.parse(raw) : [];
}
async function saveHistory(number, history) {
  // Keep last 50 messages, expire after 24 hours
  const trimmed = history.slice(-50);
  await redis.set(`conv:${number}`, JSON.stringify(trimmed), {
    EX: 86400,
  });
}
// In your webhook handler:
const history = await getHistory(number);
history.push({ role: 'user', content });
const reply = await generateReply(history);
history.push({ role: 'assistant', content: reply });
await saveHistory(number, history);
LLM Providers That Work with Sendblue
Any LLM with an API works as the brain of your iMessage chatbot. Here is a comparison of the most popular options for an LLM text messaging API workflow:
| Provider | Best Model | Strengths | Latency |
|---|---|---|---|
| OpenAI | GPT-4o | Fastest responses, vision support, function calling | ~1-2s |
| Anthropic | Claude Sonnet | Longest context window, best instruction-following, MCP support | ~1-3s |
| Google | Gemini 2.5 Pro | Multimodal native, competitive pricing | ~1-3s |
| Open-source | Llama 3.1 / Mistral | Self-hosted, no per-token cost, full data control | Varies |
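Since only the LLM call differs between providers, a practical pattern is to hide it behind one dispatch function and pick the provider from config. A minimal sketch, with hypothetical stub functions standing in for the real OpenAI and Anthropic calls shown elsewhere in this guide:

```python
# Each provider maps to a function (system_prompt, history) -> reply text.
# The stubs below are placeholders for the real API calls.
def call_openai(system_prompt: str, history: list) -> str:
    return "(reply from GPT-4o)"  # stand-in for openai.chat.completions.create

def call_claude(system_prompt: str, history: list) -> str:
    return "(reply from Claude)"  # stand-in for anthropic.messages.create

PROVIDERS = {
    "openai": call_openai,
    "anthropic": call_claude,
}

def generate_reply(provider: str, system_prompt: str, history: list) -> str:
    """Dispatch to the configured provider; the Sendblue side never changes."""
    if provider not in PROVIDERS:
        raise ValueError(f"Unknown provider: {provider}")
    return PROVIDERS[provider](system_prompt, history)

reply = generate_reply("openai", "Be concise.", [{"role": "user", "content": "hi"}])
```

Swapping your chatbot's brain then becomes a one-line config change rather than a code rewrite.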
Claude with MCP Server
Anthropic's Claude supports Model Context Protocol (MCP), and Sendblue offers an official MCP server. This means Claude can send and receive iMessages as a native tool call — no webhook code required for agent-to-agent or human-in-the-loop workflows:
# Install the Sendblue MCP server
npx @anthropic-ai/claude-code mcp add sendblue
# Claude can now use send_message and get_messages as tools
# Example: "Send an iMessage to +15551234567 saying hello"
The MCP server is ideal for Claude Desktop and Claude Code integrations where the AI agent needs to initiate conversations, not just respond to inbound messages.
Using open-source models (Llama, Mistral)
Self-hosted models work with the same pattern. Point your LLM call at your own inference server instead of a cloud API:
// Using an OpenAI-compatible API (vLLM, Ollama, etc.)
const openai = new OpenAI({
  apiKey: 'not-needed',
  baseURL: 'http://localhost:8000/v1', // Your local inference server
});
const completion = await openai.chat.completions.create({
  model: 'meta-llama/Llama-3.1-70B-Instruct',
  messages: [
    { role: 'system', content: SYSTEM_PROMPT },
    ...history,
  ],
});
// The rest of your Sendblue integration stays identical
Frequently Asked Questions
Can I connect ChatGPT or Claude to iMessage?
Yes. Using the Sendblue API, you can connect any LLM — including OpenAI GPT-4, Anthropic Claude, Google Gemini, or open-source models like Llama and Mistral — to iMessage. Your server receives inbound messages via webhook, calls the LLM API, and sends the response back through Sendblue as a native iMessage.
Do I need A2P 10DLC registration for an AI chatbot on iMessage?
No. iMessage is not subject to A2P 10DLC registration requirements that apply to SMS. Sendblue sends native iMessages through Apple's infrastructure, so your AI chatbot avoids carrier filtering and registration delays entirely.
What happens if the recipient does not have an iPhone?
Sendblue automatically falls back to SMS for non-iMessage recipients. Your webhook handler and LLM integration work the same way regardless of the delivery channel — the only difference is the transport layer.
Can the AI chatbot send images and media over iMessage?
Yes. Sendblue supports sending images, videos, and other media attachments via iMessage. You can have your LLM generate image descriptions, use DALL-E or similar APIs to create images, and send them through the media_url parameter in the Sendblue send API.
How do I maintain conversation context across multiple messages?
Store conversation history keyed by phone number in a database (Redis, PostgreSQL, etc.). On each inbound message, retrieve the history, append the new message, send the full conversation to your LLM, and store the response. This gives your AI chatbot memory across the entire conversation.
What is the Sendblue MCP server for Claude?
Sendblue offers a Model Context Protocol (MCP) server that integrates directly with Claude Desktop and Claude Code. It allows Claude to send and receive iMessages as a native tool call, enabling AI agents to communicate over iMessage without writing custom webhook code.
Connect your AI chatbot to iMessage today
Free sandbox. No credit card. No A2P registration required.