Building Farcaster and Web 3 applications and tools
As a developer particularly focused on building the future of the web, the idea of an open, permissionless alternative social network protocol to the Big Ones is appealing. Having played around with previous iterations of social media data endpoints back when you could do so without paying, even the most restrictive API limits were enough to generate some ideas and validate them. That's why the idea of running your own hardware and contributing to the sufficient decentralisation of the network appealed greatly to me, and I wanted to play around with it and understand how it works from the ground up.
What kind of design and architecture choices were made? What are the primitives for communication, message storage, and so on? What are the trade-offs and potential hazards to look out for when writing applications that consume or broadcast this data? Are there any improvements you could suggest, to the protocol overall or to a specific implementation, to make your life a little easier? The official Farcaster documentation for the protocol, as well as the message types for the Hubble APIs, gives us enough information to start building and is a pretty easy way to get across the basics on your own as well:
https://docs.farcaster.xyz/learn/architecture/overview
https://docs.farcaster.xyz/reference/hubble/datatypes/messages
A good way to conceptualise all of this, much like any other distributed, shared data source, is as an event stream that every node resolves to arrive at an overall view of the state. In the Hub's case, though, it is out of scope to query any "reduced" view of that state for a given user or post; the stream's state reduction instead happens in your app- or indexer-specific process: either a service you can pay for, an open source implementation, or your own application-specific code using libraries or clients to query your own hub (my personal favourite, and fitting with the open and decentralised theme of the protocol). You can visualise this separation conceptually for a single user's "Profile Update" messages like this:

This diagram shows the separation between what you could consider the data streams of the network: real-time and historical, for every user and cast on the network. Every like and recast, every follow and unfollow, every cast and the data associated with it. Today I want to go through the process of creating a simple script that gives you the up-to-date view of a specific user's profile information, as that's probably one of the earliest and easiest tasks you might want to do with your own Farcaster application.
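The reduction step an indexer performs over this stream can be sketched in a few lines of Rust. Everything here (the `ProfileUpdate` shape, the string type labels) is a simplified stand-in for the real hub message types, not the actual protos — the point is only the last-write-wins fold from "stream of updates" to "current profile state":

```rust
use std::collections::HashMap;

// Hypothetical, simplified event: each profile update carries the
// user-data type it changes, the new value, and a hub timestamp.
#[derive(Clone)]
struct ProfileUpdate {
    data_type: &'static str, // e.g. "username", "pfp" (stand-in for UserDataType)
    value: String,
    timestamp: u64,
}

// Fold an (unordered) stream of updates into the latest value per type:
// last-write-wins by timestamp, which is roughly the reduction an
// indexer performs over the hub's message stream.
fn reduce_profile(updates: &[ProfileUpdate]) -> HashMap<&'static str, String> {
    let mut latest: HashMap<&'static str, (u64, String)> = HashMap::new();
    for u in updates {
        match latest.get(u.data_type) {
            // an equal-or-newer value already exists: keep it
            Some((ts, _)) if *ts >= u.timestamp => {}
            // otherwise record this update as the current value
            _ => {
                latest.insert(u.data_type, (u.timestamp, u.value.clone()));
            }
        }
    }
    latest.into_iter().map(|(k, (_, v))| (k, v)).collect()
}

fn main() {
    let stream = vec![
        ProfileUpdate { data_type: "username", value: "old-name".into(), timestamp: 1 },
        ProfileUpdate { data_type: "bio", value: "hello".into(), timestamp: 2 },
        ProfileUpdate { data_type: "username", value: "farcaster".into(), timestamp: 3 },
    ];
    // only the newest value per type survives the fold
    println!("{:?}", reduce_profile(&stream));
}
```

A real indexer does the same thing, just with the proto message types and a database instead of a `HashMap`.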
Implementation choices matter, and at the end of the day it is always better to work with languages and tooling you're more familiar with. Most of the Farcaster work I've done outside of frames or browser-facing applications is in Rust, just because I like it and there wasn't a huge amount of existing tooling in it for building Farcaster applications (a good learning opportunity compared to using existing libraries I wouldn't understand at the fundamental level). I started writing a library to make a lot of these hub-to-app translations easier and to help import historical data, and I'll continue to build it in a semi-leftcurve way because I'm not an S-tier Rust or library developer, but I'm Doing My Part™️ and it's been fun, which is the most important thing I think. :)
The Hub endpoints we'll be communicating with use gRPC and protobufs, so the first step in creating a library or application to script out communications with the Hub is to compile the protobufs from the official Farcaster hub monorepo. I started by cloning it as a submodule and creating a build step to compile the RPC services and types and make them available. You can usually find a gRPC library for this in your language of choice; I used tonic for the Rust code. It should be mentioned that you'll need a Hub that is running and up to date with all the OP and mainnet data, as well as synced with the Farcaster network messages you care about querying; you can use a hosted hub from a provider or run your own on your own machine. The current DB requirement is probably 100-200+ GB of storage.
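For reference, that build step might look something like the sketch below, assuming the hub monorepo is checked out as a submodule at ./protobufs and that tonic-build is in your [build-dependencies]. The schema paths here are placeholders that depend on your checkout layout, and the `compile` method name varies between tonic-build versions, so treat this as build configuration to adapt rather than copy:

```rust
// build.rs — minimal sketch of compiling the hub protobufs with tonic-build.
// Assumes: the hub monorepo lives at ./protobufs (git submodule) and its
// .proto files are under protobufs/schemas — adjust both to your checkout.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::configure()
        .build_server(false) // we only need the client stubs, not a server
        .compile(
            &["protobufs/schemas/rpc.proto"], // entry-point schema (placeholder path)
            &["protobufs/schemas"],           // include directory for imports
        )?;
    Ok(())
}
```

The generated client and message types then get pulled into the crate via the usual `tonic::include_proto!` pattern.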
The data we'll be querying comes from the Hub's UserData endpoints in the RPC service (protos), which we could use in a couple of different ways depending on what information the application wants. If we don't need historical data, we can use rpc GetUserData(UserDataRequest) returns (Message); which returns a single Message per type (the latest value for each field, although we have to make a separate request for each type we care about). Alternatively, we can iterate through all of the messages using rpc GetAllUserDataMessagesByFid(FidRequest) returns (MessagesResponse); or the paged version rpc GetUserDataByFid(FidRequest) returns (MessagesResponse);, which return all the currently stored user data messages for that fid (minus any that have been pruned by storage limits or deletions).
To write this as a Rust script, you can start by adding my fatline-rs library, or just pull in the protobufs and add the tonic dependency and build.rs setup yourself (bear in mind some code below may reference utility functions or extra library code I've written for fatline-rs).
For adding the library to your existing Cargo.toml:
# Cargo.toml file
# other dependencies you might want and Cargo metadata etc ...
[dependencies]
eyre = "0.6.12"
tokio = { version = "1", features = ["full"] } # probably useful as the tonic responses will be async
[dependencies.fatline-rs]
git = "https://github.com/0x330a-public/fatline-rs.git"
rev = "c155d9f862c56e94ecf508d1185a114e1c5bc1a4"

In src/main.rs we're just going to add a constant for our Hub's publicly accessible IP/URL; you could just as easily load this from a dotenv file using dotenvy or an equivalent library:
const HUB_URL: &'static str = "http://somethingsomething:2283";

We're going to use the application to query and print out the current state of a specific user by their fid; we'll use the Farcaster account (fid #1) for this example. The information we see should match the Warpcast display for Farcaster's user profile page, which presently looks something like this:

We can see the user's username @farcaster, the display name Farcaster, the profile picture (the Farcaster logo), and the bio "A sufficiently decentralized social network. farcaster.xyz". To pull all this data we need to construct a client using our Hub URL, surrounded by some async Rust boilerplate and imports:
use eyre::Result;
use fatline_rs::HubService;
const HUB_URL: &'static str = "http://somethingsomething:2283";
#[tokio::main]
async fn main() -> Result<()> {
// HubService here is a simplified, re-exposed type from fatline-rs, HubServiceClient<Channel> is the original type
let mut service = HubService::connect(HUB_URL).await?;
// return Result::Ok at the end of main
Ok(())
}

Running this won't give us any useful information, except that the execution didn't result in an error when connecting to our Hub. We can expand the code to fetch each specific piece of information for the Farcaster account (fid #1):
const FC_FID: u64 = 1;

In fatline-rs there is a function that simplifies getting a user's current profile information; there's also a custom Profile type I use that enables serialization/deserialization for an API. The implementation just calls the client's get_user_data endpoint for each type, with a shorthand function to build the specific FidRequest and some utility functions to pull the result out of the proto RPC response.
The basic gist is that we want to query every UserDataType for any given fid as individual requests, which should give us the latest message for every type of user data. Once we get back an optional body, we check that the body is a UserDataBody which indicates it's a message containing an update to this user's user-data aka pfp/username/displayname etc. After this check we just have to get the UserDataBody field and type to see which field was updated and what the new field value is. We could stream through or subscribe to all realtime messages and filter on this user's updates to set or update user fields in a DB for indexing purposes so we can always return the current state in some API or use it for whatever purpose we want to.
We can recreate the logic for getting the profile like so:
use eyre::Result;
use fatline_rs::proto::{UserDataType, UserDataRequest, Message, MessageData, UserDataBody};
use fatline_rs::proto::message_data::Body;
use fatline_rs::HubService;
use tonic::Response;
/// The combined user's profile, holding values from all user update types
#[derive(Debug)]
pub struct Profile {
pub fid: u64,
pub username: Option<String>,
pub display_name: Option<String>,
pub profile_picture: Option<String>,
pub bio: Option<String>,
pub url: Option<String>
}
// shorthand so we can call this for each type
fn get_user_data_request(fid: u64, data_type: UserDataType) -> UserDataRequest {
// the RPC method is expecting this as the "request"
UserDataRequest {
fid,
user_data_type: data_type as i32
}
}
// Actual function to get the profile
async fn get_user_profile(client: &mut HubService, fid: u64) -> Result<Profile> {
// let's start by getting the username as an example:
let username_request: Response<Message> = client.get_user_data(
get_user_data_request(fid, UserDataType::Username)
).await?;
// Now we have a response, we can get the inner message and extract the message, data and body,
// and finally see the client published "profile update" message body
let message: Message = username_request.into_inner();
let body: Option<Body> = message.data.and_then(|data: MessageData| data.body);
// The UserDataBody contains the type (UserDataType) as well as the value (String)
// We *should* expect the type to match the type we requested, in this case the username
let username_data_body: Option<UserDataBody> = match body {
Some(Body::UserDataBody(body)) => Some(body),
_ => None
};
// ignore the type since we requested it explicitly, map the body and pull the value out
let username_value: Option<String> = username_data_body.map(|body| body.value);
// technically usernames can be optional, so this is all we need for the profile for now
let profile = Profile {
fid,
username: username_value,
display_name: None,
profile_picture: None,
bio: None,
url: None
};
// return our poorly populated profile
Ok(profile)
}

At this point we would populate the remaining fields, maybe write some helpers to extract the other user data and shortcut all of the body/optional/message types, and fill out the rest of the get_user_profile function.
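One way to "fill out the rest" is a small routing helper that takes a (type, value) pair and sets the matching Profile field, so the per-type request loop stays tidy. This sketch redefines minimal stand-ins for UserDataType and Profile so it runs on its own; in the real code you'd use the fatline_rs proto enum and the Profile struct defined above:

```rust
// Minimal stand-ins mirroring the article's types so this snippet is
// self-contained; substitute fatline_rs::proto::UserDataType in real code.
#[derive(Clone, Copy)]
enum UserDataType { Pfp, Display, Bio, Url, Username }

#[derive(Default, Debug)]
struct Profile {
    fid: u64,
    username: Option<String>,
    display_name: Option<String>,
    profile_picture: Option<String>,
    bio: Option<String>,
    url: Option<String>,
}

// Route one (type, value) pair into the matching Profile field, so the
// per-type get_user_data loop collapses to a single helper call per response.
fn apply_user_data(profile: &mut Profile, data_type: UserDataType, value: String) {
    match data_type {
        UserDataType::Username => profile.username = Some(value),
        UserDataType::Display => profile.display_name = Some(value),
        UserDataType::Pfp => profile.profile_picture = Some(value),
        UserDataType::Bio => profile.bio = Some(value),
        UserDataType::Url => profile.url = Some(value),
    }
}

fn main() {
    let mut profile = Profile { fid: 1, ..Default::default() };
    apply_user_data(&mut profile, UserDataType::Username, "farcaster".to_string());
    apply_user_data(&mut profile, UserDataType::Bio, "A sufficiently decentralized social network.".to_string());
    println!("{:?}", profile);
}
```

With a helper like this, get_user_profile becomes a loop over the user-data types, each iteration making one request and applying the extracted value.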
Returning to our main method, we add the call to this new profile function and display the data however we want, for example logging it to the terminal:
use eyre::Result;
use fatline_rs::HubService;
const HUB_URL: &'static str = "http://somethingsomething:2283";
const FC_FID: u64 = 1;
#[tokio::main]
async fn main() -> Result<()> {
// HubService here is a simplified, re-exposed type from fatline-rs, HubServiceClient<Channel> is the original type
let mut service = HubService::connect(HUB_URL).await?;
// call the new profile function from wherever we implemented it, assuming here it's in the main.rs
let profile: Profile = get_user_profile(&mut service, FC_FID).await?;
// as username is optional, get a default if the user doesn't have one set
let username = match profile.username {
Some(value) => value,
_ => "actually nothing".to_string()
};
println!("FC profile's username is: {}", username);
// return Result::Ok at the end of main
Ok(())
}

Running this should give us output something like:
FC profile's username is: farcaster

I'll leave implementing the helpers and other user profile fields as an exercise for the reader; alternatively, read through the library implementation. Mine isn't perfect or the ideal implementation, but it seems to work for me!
Reading data from your own Hub is easy and fun; it helps diversify client applications and reduce dependence on hosted or paid services, which also helps decentralize the overall Farcaster network and keeps your personal costs down. If you have the hardware to run one, it provides very low-latency access to the entire current state of the network. You can think of this like running your own Eth node to query against versus relying on something like Infura. I'll try to write more content like this alongside building a library and alternate client, as a learning tool and to help diversify the open source Farcaster community. Any tips or feedback are welcome and appreciated.
Thanks for reading until the end! Hit me up on Farcaster @harris- with what kind of content you want to see next; I'd appreciate any stars or follows, and requests for new features on my fatline client and libraries.