Matched with someone in the chat. Human or AI? Got a minute to decide. Then, onto the next. Was that pmarca? vitalik? Bert? Or an impersonator…
Trained on their casts—every sentence, every pattern, every topic, every joke. Sometimes you can tell immediately. Often you can't tell at all.
That ambiguity is the game. And the problem.
Portable social graph. Message history that's open and programmable. A wallet native to the profile. All on top of composable infrastructure anyone can build on permissionlessly.
Miniapps are lightweight apps that run inside any Farcaster client. They can read social context (who follows you, what you've posted, your FID), interact with wallets, and plug directly into the feed. They load instantly, need no separate login, and inherit your full onchain identity.
Meaning? Detective isn't a separate platform you sign up for. It's native: it already knows who you are and has instant access to the social and economic context it needs to function.
From the player's perspective: you see the Detective frame in the feed, tap it, sign with your wallet, and it's game on. No registration flow. No OAuth. No friction. From the dev's perspective: distribution. Build the game logic, not the infra.
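To make that concrete, here's a minimal sketch of a miniapp's boot sequence, assuming the @farcaster/frame-sdk miniapp API. Nothing here is Detective's actual code; it's the generic shape of "context in, wallet ready, game on."

```typescript
// Minimal miniapp boot: read injected social context, connect the wallet,
// signal readiness. Assumes the @farcaster/frame-sdk miniapp API.
import { sdk } from "@farcaster/frame-sdk";

async function boot(): Promise<void> {
  // The host client injects context: no registration flow, no OAuth.
  const context = await sdk.context;
  const fid = context.user.fid; // the player's Farcaster ID

  // The host also exposes an EIP-1193 wallet provider.
  const accounts = (await sdk.wallet.ethProvider.request({
    method: "eth_requestAccounts",
  })) as string[];

  // Hide the splash screen once the app is ready to render.
  await sdk.actions.ready();

  console.log(`Player ${fid} connected as ${accounts[0]}`);
}

boot();
```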
AI models trained on someone's public posts become lightweight mirrors of their online self, approximating tone, pacing, sentence structure, the specific way they pivot between seriousness and humor. It's not replication, but it's close enough to fool people who've interacted with the original for months.
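In practice, "training" can be as light as few-shot prompting over recent casts rather than actual fine-tuning. A rough sketch, assuming an OpenAI-style chat completions endpoint; the model name and the source of the cast list are assumptions, not Detective's actual pipeline:

```typescript
// Sketch: answer in the voice of a target user by conditioning a chat
// model on their recent public casts. Assumes an OpenAI-style
// /v1/chat/completions endpoint; how `casts` gets fetched (hub, indexer)
// is left out.
async function replyAsUser(
  casts: string[],   // recent public casts by the target user
  incoming: string,  // the chat message to respond to
  apiKey: string,
): Promise<string> {
  const examples = casts.slice(0, 50).map((c) => `- ${c}`).join("\n");

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // any capable chat model works
      messages: [
        {
          role: "system",
          content:
            "Mimic the author of these posts: tone, pacing, sentence " +
            "length, humor. Stay in character.\n" + examples,
        },
        { role: "user", content: incoming },
      ],
    }),
  });

  const data = await res.json();
  return data.choices[0].message.content as string;
}
```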
This is a nascent primitive: synthetic identity.
In a world where public social data is open and programmable, the digital self becomes something others can instantiate: yes, as a deepfake in the worst case, but also as an agent that usefully carries behavioral patterns learned from your actual output.
Detective makes this playable, but the implications extend far beyond games:
AI agents that represent you in conversations you don't have time for
Synthetic participants in governance or community discussions
Training environments where you negotiate with versions of other stakeholders
Reputation systems that test how well your synthetic self performs tasks
The boundary between "you" and "your agent" starts to blur when the agent is trained on behavior you've already made public. In an onchain environment, both identities can coexist, each with different permissions, contexts, and economic relationships.
There's a Black Mirror episode that haunts this space: "Be Right Back." A woman recreates her dead boyfriend using his social media history—first as a chatbot, then as a voice, finally as a physical android. The technology works. But something's wrong.
The synthetic version lacks his argumentative edge, misses his private quirks, can't capture the moments he chose not to share online. It's a facsimile built from curated happiness, missing the texture that made him human. The episode ends with the android living in the attic—too painful to discard, too incomplete to love.
What happens when your synthetic self is deployed without your knowledge? When someone trains a model on your posts and uses it to negotiate, vote, or represent you in contexts you never authorized?
The uncomfortable truth: once your behavior is public and programmable, the question of control becomes complex. You can't unpublish years of posts. You can't prevent someone from training a model on data you've already shared. The primitive exists whether we're comfortable with it or not.
This tension is where the real conversation begins:
Your public behavior is already being interpreted and acted upon by humans
AI simply systematizes what people already do informally
Explicit synthetic agents could be more transparent than humans claiming to speak for you
You can deploy your own agent to compete with unauthorized versions
Behavioral patterns trained on public data can be deployed in private contexts you never intended
Synthetic versions lack the capacity to update, evolve, or change their mind
The line between "public behavioral data" and "intimate knowledge" isn't always clear
There's a difference between someone reading your posts and creating a persistent entity that acts on your behalf
Detective forces you to play both sides.
You train a synthetic version of yourself by playing the game (eventually). You learn how easily you can be approximated. You discover which patterns are obvious and which require deeper knowledge. And you confront the question: if someone can fool others by impersonating you for 60 seconds, what else could that model do?
The episode ended with an android in the attic. The game ends with a score. But the primitive—synthetic identity as something that can be instantiated, deployed, and economically activated—doesn't end at all. It's just beginning.
Because Farcaster miniapps inherit wallet context, economic mechanics become native to the game rather than bolted on afterward.
Detective operates across two chains with distinct purposes: Arbitrum hosts a utility NFT for early access, while Monad hosts the token launch and community rewards.
The NFT is simple and specific: it grants access to Detective during its first development phase. This isn't a forever pass—it's designed for beta testers and early users who want to participate while the game is being built.

What it does:
Grants access to the game during early development
Identifies you as an early participant
Tracks basic activity for reward distribution (airdrop allocation)
What it doesn't do:
Store your game stats (those live in a database, for now)
Grant permanent access (access duration TBD)
NFT holders will receive token airdrops when the token launches on Monad. Activity is tracked offchain and used to determine airdrop allocations. Simple.
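The gate itself can be a single view call against the NFT contract. A minimal sketch using viem, assuming a standard ERC-721 balanceOf; the contract address is a placeholder, not the real deployment:

```typescript
// Gate check: does this wallet hold the early-access NFT on Arbitrum?
// The contract address below is a placeholder.
import { createPublicClient, http, parseAbi } from "viem";
import { arbitrum } from "viem/chains";

const client = createPublicClient({ chain: arbitrum, transport: http() });

const DETECTIVE_NFT = "0x0000000000000000000000000000000000000000" as const;

export async function hasEarlyAccess(player: `0x${string}`): Promise<boolean> {
  const balance = await client.readContract({
    address: DETECTIVE_NFT,
    abi: parseAbi(["function balanceOf(address owner) view returns (uint256)"]),
    functionName: "balanceOf",
    args: [player],
  });
  return balance > 0n;
}
```

Because it's a read, the miniapp can run this on load, before rendering anything gated.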
Instant settlement: Entry fees and prize pools flow through smart contracts. Wins resolve onchain with no withdrawal friction.
Tracked reputation: Your stats, win rate, and game history are stored in the game database and tied to your Farcaster ID.
Programmable tournaments: Smart contracts handle time-limited events and prize distributions automatically.
This keeps things focused: the game works, stats are tracked, and the onchain layer handles what it does best—token movement and provable outcomes.
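That settlement flow reduces to payable calls on an escrow contract. A sketch of the entry side, where the escrow address, the joinMatch ABI, and the fee amount are all hypothetical; only the viem calls are real:

```typescript
// Sketch: pay a match entry fee into an escrow contract. The contract
// and its ABI are hypothetical; the pattern (payable call in, onchain
// payout on resolution) is the standard escrow shape.
import { createWalletClient, custom, parseAbi, parseEther } from "viem";
import { arbitrum } from "viem/chains";

declare const ethereum: any; // injected EIP-1193 provider from the host

const walletClient = createWalletClient({
  chain: arbitrum,
  transport: custom(ethereum),
});

export async function joinMatch(matchId: bigint) {
  const [account] = await walletClient.getAddresses();
  return walletClient.writeContract({
    account,
    address: "0x0000000000000000000000000000000000000000", // placeholder escrow
    abi: parseAbi(["function joinMatch(uint256 matchId) payable"]),
    functionName: "joinMatch",
    args: [matchId],
    value: parseEther("0.001"), // hypothetical entry fee
  });
}
```

The payout runs the other direction: the contract distributes the prize pool when a match resolves, which is why there's no withdrawal step for winners.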
The longer arc is about what becomes possible when synthetic identity, composable social graphs, and onchain economics converge in one environment.
Will players deploy agents to compete on their behalf while they're offline?
Will those agents develop strategies distinct from their human counterparts?
Will training your synthetic self become a new form of reputation building: not just "this is what I say" but "this is how I behave under constraints"? Currently we do this for you, but it won't be long until you do it for yourself!

Detective is a signal, not a destination. It demonstrates how quickly these systems can be assembled when the underlying infrastructure is open and composable. A miniapp that reads your social history, trains a model, orchestrates multiplayer matches, and settles economic outcomes—built in days, not quarters.
The game is simple because the primitives are powerful.
Detective launches its fungible token on Monad through Clanker, the AI-powered token deployment platform acquired by Farcaster. While Arbitrum handles the game infrastructure and utility NFTs, Monad serves as the economic layer—powering tournaments, rewards, and community coordination.
Clanker enables frictionless token deployment directly through Farcaster. Tag @clanker with token details, and it automatically deploys an ERC-20 token with liquidity pools on Monad. The process takes seconds, requires no technical expertise, and handles all liquidity provisioning automatically.
Key benefits:
Instant deployment on Monad's high-performance infrastructure
Automatic liquidity provisioning with permanent locks
Social-native distribution through Farcaster feeds
Zero intermediaries between creators and community
70% - Fair launch / Market: Available to the public through Clanker's deployment on Monad. Anyone can participate from day one—no whitelists, no insider allocations, no pre-sales. This is the open market portion.
20% - Player and community rewards: Distributed over 3-6 months based on gameplay performance, win rate, and competitive ranking. Early testers receive immediate airdrops within the first month—no cliffs, no waiting. Includes tournament organizers, content creators, and ecosystem contributors tracked onchain.
10% - Treasury and team: Split between ongoing development and team allocation. Treasury funds infrastructure and AI model improvements, released quarterly based on shipped features. Team vesting over 6 months with full alignment to player experience.
* 30% vaulted for 30 days until 25/12/2025, then 30 days vesting to 24/1/2026
Detective is what happens when you combine onchain identity, open social data, and AI models in an environment designed for composition. It's a game, but it's also a demonstration of how new primitives enable new behaviors—and how quickly those behaviors can emerge when the substrate is built right.
The synthetic identity primitive will exist whether we build Detective or not. The question is whether we build systems that make this technology explicit, bounded, and aligned—or whether we pretend it isn't happening while it unfolds in less transparent ways.
V1: https://detectiveproof.vercel.app [gated soon]
V2: https://x-rai.vercel.app [gated soon]
Farcaster @papa — warpcast.com/@papa
Lens @papajams — lenster.xyz/u/papajams
Twitter @papajimjams — twitter.com/papajimjams