# Build ChatGPT for XMTP

By [nerderlyne](https://paragraph.com/@nerderlyne) · 2023-09-24

---

In this tutorial, we'll create an LLM-powered chatbot for [XMTP (Extensible Message Transport Protocol)](https://xmtp.org/). XMTP is a secure, private web3 messaging protocol—combined with language models, you can build an intelligent and secure chatbot that anyone, anywhere can interact with from one of the many clients like [Converse](https://getconverse.app), [Coinbase Wallet](https://www.coinbase.com/wallet), [Lens](https://lenster.xyz/), _etc_. If you’re looking to get started quickly, fork the linked [GitHub repo](https://github.com/nerderlyne/xmtp-llm-bot) and go!

Prerequisites
-------------

*   [_Node.js_](https://nodejs.org/)
    
*   [_OpenAI API Key_](https://platform.openai.com/docs/api-reference/authentication)
    

Install Dependencies
--------------------

First off, let's install the necessary packages:

```shell
pnpm install dotenv @xmtp/xmtp-js @xmtp/content-type-remote-attachment ethers openai
```

Environment Setup
-----------------

Create a `.env` file to store your XMTP key and OpenAI API key:

```
KEY=<Your_XMTP_Wallet_Key>
OPENAI_API_KEY=<Your_OpenAI_API_Key>
XMTP_ENV=<production | dev>
```

You can grab an OpenAI key on their [platform](https://platform.openai.com/). If you want to use an OSS model like Llama 2 for your bot, you can update the `baseURL` in the constructor:

```typescript
const llm = new OpenAI({ baseURL: <url_for_your_chat_model> });
```
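One way to keep the hosted API and an OSS endpoint behind the same code path is to resolve the configuration from the environment before constructing the client. This is a sketch; the helper name and the `LLM_BASE_URL` variable are assumptions, not part of the repo:

```typescript
// Hypothetical helper: validate the API key and pick an optional OSS
// endpoint. baseURL stays undefined for the hosted OpenAI API, in which
// case the client uses its default endpoint.
function resolveLLMConfig(env: Record<string, string | undefined>): {
  apiKey: string;
  baseURL?: string;
} {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is required");
  }
  return { apiKey, baseURL: env.LLM_BASE_URL };
}
```

You would then construct the client with `const llm = new OpenAI(resolveLLMConfig(process.env));`.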

Overview of Code Structure
--------------------------

The code consists of three main parts:

1.  **Initialization and Configuration**: Importing packages and setting up the environment.
    
2.  **Helper Functions**: Functions for creating the XMTP client and fetching conversation history.
    
3.  **Chat Handling**: Handling incoming messages and sending replies using OpenAI's API.
    

```typescript
import { config } from "dotenv";
import {
  AttachmentCodec,
  RemoteAttachmentCodec,
} from "@xmtp/content-type-remote-attachment";
import {
  Client,
  ListMessagesOptions,
  SortDirection,
  DecodedMessage,
} from "@xmtp/xmtp-js";
import { utils, Wallet } from "ethers";
import OpenAI from "openai";

config();
```

Key Components
--------------

### XMTP Client Setup

`createClient()` initializes an XMTP client, registers codecs for attachments, and publishes user contact information:

```typescript
async function createClient(): Promise<Client> {
  let wallet: Wallet;
  const key = process.env.KEY;
  if (key) {
    wallet = new Wallet(key);
  } else {
    wallet = Wallet.createRandom();
  }
  if (process.env.XMTP_ENV !== "production" && process.env.XMTP_ENV !== "dev") {
    throw "invalid XMTP env";
  }
  const client = await Client.create(wallet, {
    env: process.env.XMTP_ENV || "production",
  });
  // Register the codecs. AttachmentCodec is for local attachments (<1MB)
  client.registerCodec(new AttachmentCodec());
  // RemoteAttachmentCodec is for remote attachments (>1MB) using thirdweb storage
  client.registerCodec(new RemoteAttachmentCodec());
  await client.publishUserContact();
  return client;
}
```
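The environment check above can be factored into a small standalone helper, which makes the accepted values easy to test. This is a sketch mirroring the check inside `createClient`; the helper name is an assumption, not from the repo:

```typescript
// Narrow XMTP_ENV to the two values the client accepts; anything else
// (including an unset variable) is rejected, matching the check in
// createClient above.
type XmtpEnv = "production" | "dev";

function resolveXmtpEnv(raw: string | undefined): XmtpEnv {
  if (raw !== "production" && raw !== "dev") {
    throw new Error(`invalid XMTP env: ${raw}`);
  }
  return raw;
}
```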

### Conversation History

`getConversationHistory()` fetches the last five messages between the bot and the user and converts them to the chat message format expected by OpenAI's API:

```typescript
const getConversationHistory = async (
  client: Client,
  userAddress: string,
): Promise<OpenAI.Chat.ChatCompletionMessage[]> => {
  const conversations = await client.conversations.list();
  const conversation = conversations.find((conversation) => {
    return (
      utils.getAddress(conversation.peerAddress) ==
      utils.getAddress(userAddress)
    );
  });
  if (!conversation) {
    return [];
  }
  const options: ListMessagesOptions = {
    checkAddresses: true,
    limit: 5,
    direction: SortDirection.SORT_DIRECTION_DESCENDING,
  };
  const messages = await conversation.messages(options);
  // Drop the newest message: it is the one currently being handled, and the
  // handler appends it to the prompt separately.
  messages.shift();
  if (messages.length === 0) {
    return [];
  }
  return messages
    .map((message) => {
      return {
        role:
          message.senderAddress == client.address
            ? "assistant"
            : ("user" as OpenAI.Chat.ChatCompletionMessage["role"]),
        content: message.content,
      };
    })
    .reverse();
};
```
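The role-mapping step at the end is worth isolating: messages sent from the bot's own address become `assistant` turns, everything else becomes `user` turns, and the newest-first list is reversed into chronological order. Here is a standalone sketch of just that step (the types and function name are illustrative; the real code also checksums addresses via `utils.getAddress`):

```typescript
type ChatRole = "assistant" | "user";

interface SimpleMessage {
  senderAddress: string;
  content: string;
}

// Map XMTP messages (newest first) to OpenAI-style chat messages
// (oldest first), assigning roles by sender address.
function toChatMessages(
  messages: SimpleMessage[],
  botAddress: string,
): { role: ChatRole; content: string }[] {
  return messages
    .map((m) => ({
      role: (m.senderAddress === botAddress ? "assistant" : "user") as ChatRole,
      content: m.content,
    }))
    .reverse();
}
```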

### Handler Context

The `HandlerContext` class provides an interface to access message, history, and client information for easier manipulation within the handler:

```typescript
class HandlerContext {
  message: DecodedMessage;
  history: OpenAI.Chat.ChatCompletionMessage[];
  client: Client;

  constructor({ message, history, client }: HandlerContextConstructor) {
    this.message = message;
    this.history = history;
    this.client = client;
  }

  async reply(content: any) {
    await this.message.conversation.send(content);
  }
}
```
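The `HandlerContextConstructor` type isn't shown above. Based on how the constructor destructures its argument, a plausible shape (an assumption, not copied from the repo) would be:

```typescript
import type { Client, DecodedMessage } from "@xmtp/xmtp-js";
import type OpenAI from "openai";

// Inferred shape of the constructor argument: the same three fields the
// constructor assigns onto the instance.
interface HandlerContextConstructor {
  message: DecodedMessage;
  history: OpenAI.Chat.ChatCompletionMessage[];
  client: Client;
}
```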

### Message Handling

The `handleChat` function is the meat of our LLM bot. You can customise the model here, add more context through vector embeddings if you’re building a customer service bot, or use function calls to answer questions about the state of the chain in realtime, and so on (see, for example, my _Check The Chain_ plugin on the ChatGPT plugin store). A fun yet easy customisation would be enshrining a _soul_ that really characterizes your bot using a simple system prompt, so responses have your desired personality. (After all, the goal is to make this something you or your users want to keep talking to!) Leveraging open data, you can even take this a step further and customise the personality based on NFTs and other digital assets held by users—creating a shapeshifter bot that updates with what your users do on their own time! ✨
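As a concrete sketch of the _soul_ idea, you could keep a table of persona prompts and pick one per user. The keys, prompt text, and function name below are illustrative assumptions, not code from the repo:

```typescript
// Illustrative persona table; in practice the key could be derived from
// NFTs or other assets the user holds.
const personas: Record<string, string> = {
  pirate: "You are a salty pirate who answers every question in pirate-speak.",
  default: "You are a helpful assistant.",
};

// Build the system message that handleChat puts first in the prompt,
// falling back to the default persona for unknown keys.
function buildSystemMessage(personaKey: string): {
  role: "system";
  content: string;
} {
  return {
    role: "system",
    content: personas[personaKey] ?? personas.default,
  };
}
```

You would then replace the hard-coded `"You are a helpful assistant."` system message in `handleChat` with the result of this lookup.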

```typescript
const handleChat = async (context: HandlerContext) => {
  try {
    if (context.message.contentType.typeId != "text") {
      await context.reply("Sorry, I only understand text messages.");
      return;
    }
    let messageBody = context.message.content;
    const messageHistory = context.history;
    const response = (
      await llm.chat.completions.create({
        model: "gpt-3.5-turbo-0613",
        messages: [
          {
            role: "system",
            content: "You are a helpful assistant.",
          },
          ...messageHistory,
          {
            role: "user",
            content: messageBody,
          },
        ],
      })
    ).choices[0].message.content;
    if (!response) {
      await context.reply(
        "Sorry, my systems are under repair. Please chat with me later when we are all fixed ♥",
      );
      return;
    }
    await context.reply(response);
  } catch (error) {
    console.error(`Error: ${error}`);
    await context.reply("Sorry, an error occurred. Please try again later.");
  }
};
```

### Running the Bot

This is where all our functions come together. The `run()` function sets up a message stream and invokes the handler whenever a new message is received. It's wrapped inside a `reconnect()` for improved error handling.

```typescript
async function run(handler: Handler) {
  const client = await createClient();
  console.log(`Listening on ${client.address}`);
  for await (const message of await client.conversations.streamAllMessages()) {
    try {
      if (message.senderAddress == client.address) {
        continue;
      }
      const history = await getConversationHistory(
        client,
        utils.getAddress(message.senderAddress),
      );
      const context = new HandlerContext({ message, history, client });
      await handler(context);
    } catch (e) {
      console.log(`error`, e, message);
    }
  }
}
```
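The `reconnect()` wrapper itself isn't shown above. One plausible shape (a sketch under assumptions; the signature and delay are not from the repo) is to re-invoke the runner after a short pause whenever the message stream throws:

```typescript
// Retry the runner whenever it throws, pausing between attempts; return
// once it exits cleanly or the retry budget is spent.
async function reconnect(
  runner: () => Promise<void>,
  delayMs = 5_000,
  maxRetries = Infinity,
): Promise<void> {
  let attempts = 0;
  while (attempts < maxRetries) {
    try {
      await runner();
      return; // runner exited cleanly
    } catch (e) {
      console.error("stream crashed, reconnecting...", e);
      attempts++;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

The entry point would then look something like `reconnect(() => run(handleChat));`.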

Running the Bot
---------------

```shell
pnpm run dev
```

You should now see a message indicating that the bot is listening on a specific XMTP address.

![](https://storage.googleapis.com/papyrus_images/ae6e20a60a2fea69773140d7eb09817d.png)

Conclusion
----------

That’s it! This is just scratching the surface, and you can extend this basic bot in numerous ways from customer support bots for protocols and DAOs to leveraging custom content-types for text-to-transaction allowing users to express their intent in natural language.

Check out the [GitHub repo](https://github.com/nerderlyne/xmtp-llm-bot) for the full codebase. Talk to a live version of my demo bot by messaging `nani.eth` (`0x7AF890Ca7262D6accdA5c9D24AC42e35Bb293188`) on any XMTP-compatible app.

![chatting away with nani.eth on Converse!](https://images.mirror-media.xyz/publication-images/0Evcl2W4EBhhDWOxonXwu.jpeg?height=2436&width=1125)

🤍

---

*Originally published on [nerderlyne](https://paragraph.com/@nerderlyne/build-chatgpt-for-xmtp)*
