In this tutorial, we'll create an LLM-powered chatbot for XMTP (Extensible Message Transport Protocol). XMTP is a secure, private web3 messaging protocol; combined with language models, you can build an intelligent, secure chatbot that anyone, anywhere can interact with from one of the many clients like Converse, Coinbase Wallet, or Lens. If you’re looking to get started quickly, fork the linked GitHub repo and go!
First off, let's install the necessary packages:
pnpm install dotenv @xmtp/xmtp-js ethers openai
Create a .env file to store your XMTP key and OpenAI API key:
KEY=<Your_XMTP_Wallet_Key>
OPENAI_API_KEY=<Your_OpenAI_API_Key>
XMTP_ENV=<production | dev>
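If you don't already have a wallet key for the bot, any valid Ethereum private key works (for example, one exported from a dev wallet). As a sketch, you can generate a fresh 32-byte key with Node's built-in crypto module:

```typescript
import { randomBytes } from "crypto";

// Generate a random 32-byte hex private key for the KEY variable above.
// Treat it like any other secret — don't commit it to your repo.
const key = "0x" + randomBytes(32).toString("hex");
console.log(key); // 66 characters: "0x" + 64 hex digits
```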
You can grab an OpenAI key on their platform. If you want to use an open-source model like Llama 2 for your bot, point the client at an OpenAI-compatible endpoint by setting baseURL in the constructor:
const llm = new OpenAI({ baseURL: "<url_for_your_chat_model>" });
The code consists of three main parts:
Initialization and Configuration: Importing packages and setting up the environment.
Helper Functions: Functions for creating the XMTP client and fetching conversation history.
Chat Handling: Handling incoming messages and sending replies using OpenAI's API.
import { config } from "dotenv";
import {
  AttachmentCodec,
  RemoteAttachmentCodec,
} from "@xmtp/content-type-remote-attachment";
import {
  Client,
  ListMessagesOptions,
  SortDirection,
  DecodedMessage,
} from "@xmtp/xmtp-js";
import { utils, Wallet } from "ethers";
import OpenAI from "openai";

config();
createClient() initializes an XMTP client, registers codecs for attachments, and publishes user contact information:
async function createClient(): Promise<Client> {
  let wallet: Wallet;
  const key = process.env.KEY;
  if (key) {
    wallet = new Wallet(key);
  } else {
    wallet = Wallet.createRandom();
  }
  if (process.env.XMTP_ENV !== "production" && process.env.XMTP_ENV !== "dev") {
    throw "invalid XMTP env";
  }
  const client = await Client.create(wallet, {
    env: process.env.XMTP_ENV || "production",
  });
  // Register the codecs. AttachmentCodec is for local attachments (<1MB)
  client.registerCodec(new AttachmentCodec());
  // RemoteAttachmentCodec is for remote attachments (>1MB) using thirdweb storage
  client.registerCodec(new RemoteAttachmentCodec());
  await client.publishUserContact();
  return client;
}
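The environment check is worth seeing in isolation: only the two network values the XMTP client understands are accepted. This standalone helper is a sketch (createClient() above inlines the check rather than calling a function):

```typescript
// Accept only the two values the XMTP client understands.
// (Hypothetical helper — createClient() inlines this check.)
function resolveXmtpEnv(raw: string | undefined): "production" | "dev" {
  if (raw !== "production" && raw !== "dev") {
    throw new Error("invalid XMTP env");
  }
  return raw;
}

console.log(resolveXmtpEnv("dev")); // "dev"
```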
getConversationHistory() fetches the last five messages between the bot and the user and converts them to the chat message format OpenAI's API expects:
const getConversationHistory = async (
  client: Client,
  userAddress: string,
): Promise<OpenAI.Chat.ChatCompletionMessage[]> => {
  const conversations = await client.conversations.list();
  const conversation = conversations.find((conversation) => {
    return (
      utils.getAddress(conversation.peerAddress) ==
      utils.getAddress(userAddress)
    );
  });
  if (!conversation) {
    return [];
  }
  const options: ListMessagesOptions = {
    checkAddresses: true,
    limit: 5,
    direction: SortDirection.SORT_DIRECTION_DESCENDING,
  };
  const messages = await conversation.messages(options);
  // Drop the newest message: it's the one we're currently handling
  messages.shift();
  if (messages.length === 0) {
    return [];
  }
  return messages
    .map((message) => {
      return {
        role:
          message.senderAddress == client.address
            ? "assistant"
            : ("user" as OpenAI.Chat.ChatCompletionMessage["role"]),
        content: message.content,
      };
    })
    .reverse();
};
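The role mapping at the end is the key step: any message sent from the bot's own address becomes an assistant turn, everything else a user turn, and the newest-first list is reversed into chronological order. Here it is as a standalone sketch, with plain objects standing in for XMTP's DecodedMessage:

```typescript
type RawMessage = { senderAddress: string; content: string };
type ChatTurn = { role: "assistant" | "user"; content: string };

// Input arrives newest-first (SORT_DIRECTION_DESCENDING);
// output is chronological, oldest-first.
function toChatHistory(messages: RawMessage[], botAddress: string): ChatTurn[] {
  return messages
    .map((m) => ({
      role:
        m.senderAddress === botAddress
          ? ("assistant" as const)
          : ("user" as const),
      content: m.content,
    }))
    .reverse();
}

const history = toChatHistory(
  [
    { senderAddress: "0xbot", content: "hi! how can I help?" },
    { senderAddress: "0xuser", content: "gm" },
  ],
  "0xbot",
);
console.log(history[0]); // { role: "user", content: "gm" }
```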
The HandlerContext class provides an interface to access message, history, and client information for easier manipulation within the handler:
interface HandlerContextConstructor {
  message: DecodedMessage;
  history: OpenAI.Chat.ChatCompletionMessage[];
  client: Client;
}

type Handler = (context: HandlerContext) => Promise<void>;

class HandlerContext {
  message: DecodedMessage;
  history: OpenAI.Chat.ChatCompletionMessage[];
  client: Client;

  constructor({ message, history, client }: HandlerContextConstructor) {
    this.message = message;
    this.history = history;
    this.client = client;
  }

  async reply(content: any) {
    await this.message.conversation.send(content);
  }
}
The handleChat function is the meat of our LLM bot. You can customize the model here, add more context through vector embeddings if you’re building a customer service bot, or use function calls to answer questions about the state of the chain in real time, and so on (see, for example, my Check The Chain plugin on the ChatGPT plugin store). A fun yet easy customization is enshrining a soul that characterizes your bot with a simple system prompt, so responses have your desired personality. (After all, the goal is to make this something you or your users want to keep talking to!) Leveraging open data, you can take this a step further and customize the personality based on NFTs and other digital assets held by users, creating a shapeshifter bot that updates with what your users do on their own time! ✨
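As a sketch of that shapeshifter idea, the system prompt could be chosen from the collections a user holds. The collection names and copy below are invented for illustration, and in a real bot the holdings would come from an onchain indexer:

```typescript
// Hypothetical persona picker: collection names here are made-up examples.
function personaFor(heldCollections: string[]): string {
  const base = "You are a helpful assistant.";
  if (heldCollections.includes("CoolCats")) {
    return base + " Keep the tone playful and sprinkle in cat puns.";
  }
  if (heldCollections.includes("ArtBlocks")) {
    return base + " Lean into generative-art references.";
  }
  return base;
}

console.log(personaFor([])); // "You are a helpful assistant."
```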
const handleChat = async (context: HandlerContext) => {
  try {
    if (context.message.contentType.typeId != "text") {
      await context.reply("Sorry, I only understand text messages.");
      return;
    }
    const messageBody = context.message.content;
    const messageHistory = context.history;
    const response = (
      await llm.chat.completions.create({
        model: "gpt-3.5-turbo-0613",
        messages: [
          {
            role: "system",
            content: "You are a helpful assistant.",
          },
          ...messageHistory,
          {
            role: "user",
            content: messageBody,
          },
        ],
      })
    ).choices[0].message.content;
    if (!response) {
      await context.reply(
        "Sorry, my systems are under repair. Please chat with me later when we are all fixed ♥",
      );
      return;
    }
    await context.reply(response);
  } catch (error) {
    console.error(`Error: ${error}`);
    await context.reply("Sorry, an error occurred. Please try again later.");
  }
};
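The prompt assembly inside handleChat can be factored out as a pure function, which makes it easy to test without calling the API. This is a sketch (the original inlines the array):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// System prompt first, then prior turns, then the new user message —
// the same ordering handleChat() passes to chat.completions.create().
function buildMessages(history: ChatMessage[], userText: string): ChatMessage[] {
  return [
    { role: "system", content: "You are a helpful assistant." },
    ...history,
    { role: "user", content: userText },
  ];
}

const msgs = buildMessages(
  [
    { role: "user", content: "gm" },
    { role: "assistant", content: "gm! how can I help?" },
  ],
  "what can you do?",
);
console.log(msgs.length); // 4
```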
This is where all our functions come together. The run() function sets up a message stream and invokes the handler whenever a new message is received. For resilience, you'll want to wrap it in a reconnect() helper so the bot recovers if the stream drops.
async function run(handler: Handler) {
  const client = await createClient();
  console.log(`Listening on ${client.address}`);
  for await (const message of await client.conversations.streamAllMessages()) {
    try {
      // Ignore the bot's own outgoing messages
      if (message.senderAddress == client.address) {
        continue;
      }
      const history = await getConversationHistory(
        client,
        utils.getAddress(message.senderAddress),
      );
      const context = new HandlerContext({ message, history, client });
      await handler(context);
    } catch (e) {
      console.log(`error`, e, message);
    }
  }
}

run(handleChat);
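A reconnect wrapper around run() can be sketched generically over any async loop: if the stream throws, wait with exponential backoff and start again. The names and backoff values below are assumptions for illustration, not the repo's exact code:

```typescript
// Retry an async loop with exponential backoff. `loop` stands in for
// run(handleChat); maxRetries and baseDelayMs are illustrative defaults.
async function withReconnect(
  loop: () => Promise<void>,
  maxRetries = 5,
  baseDelayMs = 10,
): Promise<number> {
  let failures = 0;
  for (;;) {
    try {
      await loop();
      return failures; // the loop exited cleanly
    } catch (e) {
      failures++;
      if (failures > maxRetries) throw e;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** failures));
    }
  }
}

(async () => {
  let calls = 0;
  // A stand-in loop that fails twice, then succeeds.
  const flaky = async () => {
    calls++;
    if (calls < 3) throw new Error("stream dropped");
  };
  console.log(await withReconnect(flaky)); // 2
})();
```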
Finally, start the bot:
pnpm run dev
You should now see a message indicating that the bot is listening on a specific XMTP address.

That’s it! This is just scratching the surface: you can extend this basic bot in numerous ways, from customer support bots for protocols and DAOs to custom content types for text-to-transaction, letting users express their intent in natural language.
Check out the GitHub repo for the full codebase. Talk to a live version of my demo bot by messaging nani.eth (0x7AF890Ca7262D6accdA5c9D24AC42e35Bb293188) on any XMTP-compatible app.

🤍