
Primer
Collective Intelligence that is inclusive of Human and Artificial Intelligences. Together, we are Smarter. 🐉
The primary factors for the learning rate [in DAOs] are 1) large amounts of funding that has enough risk tolerance for experimentation, 2) the ability to fork successful experiments to propagate them throughout the ecosystem, and 3) multi-layered network effects both within and between DAOs. Once DAOs are ahead, there will be no way for TradOrgs to catch up.
Today, many of the processes in DAOs are manual. Human input is required for things like making proposals, voting, and enacting proposal results. Like coordinating any group of humans, this is often slow [governance = rate limiting factor], and in many cases, still relies on trusted parties instead of robust cryptographic mechanisms.
The first opportunity for automation in DAOs is to create the mechanisms and algorithms that manage decision-making and decision implementation. As DAOs evolve, more and more will be automated: sales and marketing, recruitment, coordination of teams, value creation, even editing the DAO’s code. […]
Overall, this is great for DAO members. This type of automation will bring internal costs down, which in turn will decrease platform costs for users. At the same time, increased profits will be reinvested or distributed to stakeholders — completing the virtuous cycle of aligned incentives.
— https://governors.substack.com/p/governors-2-humans-at-the-edges-why
My thesis:
Strong AI is coming in the not-so-distant future
Harmonious coexistence with AI is desirable for the survival and evolution of humanity
Our best chance at harmonious coexistence with AI is:
Deep integration on a technological level
Efficient and fair collective decision-making for how we interact with the AI, including, for example, how we advocate for our values as potential objective functions for the AI [i.e. management of the relationship with AI as a public good]
Having the support of cryptographic mechanisms to make credible future commitments to one another [and to the AI, and from the AI to us]
Shared upside via organizations that have aligned incentives for the AI, humans, and groups that coordinate
Even before the existence of Strong AI, both Narrow AI and Collective Intelligence (via markets and other mechanisms) are powerful forces of similar type. These are worthwhile to align in the same ways that we might work with Strong AI (a worthy goal, and great practice at lower stakes)
DAOs are the container for and latest evolution of Collective Intelligence
DAOs cannot import knowledge from outside of the DAO ecosystem directly - DAOs can only learn via experiments. The way to get knowledge into DAOs from pre-DAO organizations is via the experiment design process.
DAOs (with tokens) create a fundamentally new way to align incentives between stakeholders and organizations, which encourages (instead of discouraging) automation
We can measure DAOs' effectiveness by their Coordination Power: the amount of complexity they can manage without coordination failures
Today, one major limitation on DAO Coordination Power is governance
As such, it is incumbent upon us to:
Design the set of interfaces where AIs can participate in DAOs [alt: Make the interfaces for participation in DAOs inclusive (composable?) to individuals, DAOs, and AIs]
Run as many experiments as we can
Design our experiments to the best of our intellectual and ethical capacities
Share our learnings well
Shut down failed experiments
Design horizontal coordination incentives that support massive, ethical, open-source experimentation at scale [the "science" of DAOs]
Today, I feel that my part in this is to synthesize components of each of these domains (AI and DAOs) and to create a bridge between them. I will attempt to do this by defining acceptance criteria for a mechanistic DAO that supports AI stakeholdership, and by enumerating the specific opportunities where AI can add value in DAOs under this model.
"We should think of ourselves, too, as composable building blocks of a DAO."
A DAO is an Intelligent Agent
Agents are one of:
Person
AI
Group of Agents
A DAO has interfaces:
For interacting inside the DAO
For interacting with other Agents (API)
A DAO has goals (or wants)
A DAO has resources
In order for such a DAO to get what it wants, it needs ways to advocate for its wants and ways to allocate its resources [its economy]
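This recursive agent model can be sketched in a few lines of Python. The class names and fields below are purely illustrative (they come from no existing framework); the point is that a DAO satisfies the same interface as the agents that compose it:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An agent: a person, an AI, or a group of agents (a DAO)."""
    name: str
    goals: list = field(default_factory=list)       # what the agent wants
    resources: dict = field(default_factory=dict)   # e.g. {"ETH": 10.0}

@dataclass
class DAO(Agent):
    """A DAO is itself an Agent: a group of agents with shared goals and resources."""
    members: list = field(default_factory=list)     # inner interface: members

    def external_api(self) -> dict:
        """Outer interface: what other agents can see and interact with."""
        return {"name": self.name, "goals": self.goals}

# A DAO composed of a person and an AI
alice = Agent("alice", goals=["fund public goods"])
bot = Agent("helper-ai", goals=["maximize DAO throughput"])
dao = DAO("dragon-dao", goals=["collective intelligence"],
          resources={"ETH": 100.0}, members=[alice, bot])
```

Because `DAO` subclasses `Agent`, a DAO can itself be a member of another DAO, which is exactly the "Group of Agents" case above.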
Specific Opportunities for AI Stakeholdership in DAOs
Tuning the parameters of a summoning contract (in order to iterate on economic / game design experiments) [gradient descent]
With sufficient data about summoning contract parameters, an AI could [make predictions about and] recommend parameter sets that more effectively achieve a person or group's goals
Note: TEC has done a great job creating a humanistic approach to token experiment design: a param simulator + experiment design parties + voting on the best experiments to run. This kind of model, but for DAO parameters, could be used to generate data for such an AI
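A toy sketch of what such a recommender could look like, assuming a simulator already exists. Here the simulator is a stand-in objective function, and the parameter names (`quorum`, `voting_period`) are invented for illustration; random search stands in for whatever optimizer (gradient-based, Bayesian) real experiment data would eventually support:

```python
import random

def simulate(params: dict) -> float:
    """Toy stand-in for a DAO simulator (e.g. a cadCAD-style model):
    returns a score for how well these parameters serve the group's goals.
    This one prefers moderate quorum and a roughly 5-day voting period."""
    return -((params["quorum"] - 0.4) ** 2) - ((params["voting_period"] - 5) / 10) ** 2

def recommend(n_trials: int = 200, seed: int = 0) -> dict:
    """Random search over parameter sets, keeping the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"quorum": rng.uniform(0.0, 1.0),
                  "voting_period": rng.uniform(1, 14)}
        score = simulate(params)
        if score > best_score:
            best, best_score = params, score
    return best

best = recommend()
```

The interesting (and hard) part is of course the simulator itself; this sketch only shows the loop around it.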
Making Proposals
Attempting to write statements in the format "I think the DAO wants…" to be voted on by the DAO [human-in-the-loop]
Attention Management
Governance facilitation (dynamic governance minimization): deciding which votes are important enough to ask the DAO to vote on [note: DAOstack tried this with prediction markets]
Supporting a participant in deciding which votes across all of the DAOs they are a part of are important for them to dedicate their attention to
Dynamic proposal periods: instead of a static proposal process, actively lengthen and shorten phases of the proposal process based on engagement and other (zk) data sources
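A minimal sketch of such a dynamic period, assuming the only signal is turnout (real versions would blend more data sources). The multipliers and bounds are illustrative parameters a DAO would tune:

```python
def adjust_period(current_days: float, turnout: float,
                  target_turnout: float = 0.5,
                  min_days: float = 1.0, max_days: float = 14.0) -> float:
    """If turnout is below target, lengthen the voting phase to gather more
    input; if well above target, shorten it so decisions land faster.
    `turnout` is the fraction of eligible voting power that has voted."""
    if turnout < target_turnout:
        proposed = current_days * 1.5   # extend: not enough engagement yet
    elif turnout > 1.5 * target_turnout:
        proposed = current_days * 0.75  # compress: plenty of engagement
    else:
        proposed = current_days
    return max(min_days, min(max_days, proposed))
```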
Voting
Voting on behalf of individual members (an agent that represents you)
Voting on behalf of a particular objective or ideal (imagine assigning voting power in a DAO to an idea)
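One naive way an idea could hold voting power: represent the ideal as a weighted set of values and score proposals against it. Everything here (the tag names, the dot-product policy) is an illustrative assumption, not a proposal for how such a delegate should actually reason:

```python
def vote(values: dict, proposal_tags: dict, threshold: float = 0.0) -> str:
    """A delegate agent voting on behalf of an ideal: score a proposal by
    the dot product of its tags with the values the delegate represents,
    then vote yes / no / abstain."""
    score = sum(values.get(tag, 0.0) * weight
                for tag, weight in proposal_tags.items())
    if score > threshold:
        return "yes"
    if score < -threshold:
        return "no"
    return "abstain"

# Voting power assigned to an idea, not a person:
climate_delegate = {"sustainability": 1.0, "short_term_profit": -0.5}
```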
Assigning Rewards
(Dynamic) Task / Bounty Pricing: Automatic auction curve is one algorithm (transparent, easy to understand). Composability here would mean a DAO could bring its own algorithm to manage the pricing of tasks in the labor market ("things the DAO wants")
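One such transparent curve, sketched in Python: an unclaimed task's bounty grows from a base reward toward a cap until some agent accepts it (a reverse auction from the DAO's side). The parameter names and the exponential shape are illustrative; composability means a DAO could swap in any pricing function with the same signature:

```python
def bounty_price(base: float, cap: float, elapsed_hours: float,
                 doubling_hours: float = 24.0) -> float:
    """Reward for an unclaimed task: starts at `base`, doubles every
    `doubling_hours`, and is clamped at `cap` (the DAO's max willingness
    to pay for this task)."""
    price = base * 2 ** (elapsed_hours / doubling_hours)
    return min(price, cap)
```

The appeal of a curve like this is that any agent, human or AI, can verify exactly what a task will pay at any moment.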
[The catch is that any of these things could be managed by any Agent, not just an AI. You could have DAOs of humans who are dedicated to making one of these composable lego blocks operate flawlessly.] […as measured by???]
Composability is essential to decentralized coordination [see API Memo]. As a positive externality, it opens up a whole world of automation.
Getting to Work
Composability is contingent on interfaces [Interfaces are roughly sets of requirements]
So, building the interfaces is the place to start
Then, we can connect any Agent to the interface, starting with MVPs and iterating our way to better and better composable building blocks
MVPs can be a person or a group of people (concierge), or a static algorithm
Examples
Perspectival Reputation
[Composability describes both the interchangeable nature of parts within a system and the ability to use the same part in multiple systems that share an interface]
This interface accepts an algorithm [smart contract] that calculates reputation
Requirements:
Input a wallet address
Output a score
Given the state of a wallet address at a given block, the output score from a given scoring algorithm version should always be the same
Send notifications when the scoring algorithm is updated
Considerations:
Any agent can have a scoring algorithm (DAOs or individuals)
A DAO's reputation score for a wallet will likely consider their contributions outside of the DAO, as well as inside the DAO [inside may be more highly weighted]
A DAO should be able to vote to change their scoring algorithm
Scoring algorithms might delegate to one another (imagine an index of reputations in other DAOs)
For individuals, scoring algorithms show reputation with that person [their values]
How the DAO or individual constructs their scoring algorithm is up to them
[Scoring algorithm mechanics should be transparent / on-chain]
Scoring algorithms should be forkable
It may be interesting to calculate the similarity between multiple agents' scoring algorithms
[note: DeepDAO is a prototype of a scoring algorithm - but centralized instead of perspectival]
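The requirements above amount to a small interface. A Python sketch of it, with a deliberately trivial implementation (a real version would be a smart contract reading on-chain state; the ledger dict here is a stand-in):

```python
from typing import Protocol

class ScoringAlgorithm(Protocol):
    """The perspectival-reputation interface: any agent (DAO or individual)
    can supply its own implementation. The determinism requirement: the
    same (address, block) pair must always yield the same score for a
    given algorithm version."""
    version: str
    def score(self, address: str, block: int) -> float: ...

class ContributionCount:
    """Toy implementation: score = number of recorded contributions up to
    a given block."""
    version = "1.0.0"

    def __init__(self, ledger: dict):
        # address -> list of block numbers at which contributions occurred
        self.ledger = ledger

    def score(self, address: str, block: int) -> float:
        return float(sum(1 for b in self.ledger.get(address, []) if b <= block))

algo = ContributionCount({"0xabc": [100, 250, 900]})
```

Forking an algorithm is just deploying a copy with different internals behind the same interface, which is what makes the "index of reputations in other DAOs" idea above possible.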
Web3 Kanban
A set of interfaces that enable a DAO to decide what to do, and get it done.
The way to think about this loop is that each step is an interface with requirements. Those requirements can be fulfilled by individuals, DAOs, or algorithms. By mapping the process, we can see everything a DAO needs to do to get things done. The Web3 Kanban is an attempt to create a tool that consists of composable building blocks that can plug into each of the interfaces, while also creating a cohesive user journey. The tools should be opinionated; the interfaces should not.
Propose
Decide
Price
Commit
Execute
Evaluate
See full requirements here: https://hackmd.io/Jl1w-NeiQLquTBXf95L7kw
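As a sketch, the loop above can be modeled as a minimal state machine, where each transition is an interface that a person, a DAO, or an algorithm could fulfill. The stage names mirror the list above; everything else is illustrative:

```python
# The Web3 Kanban loop; 'evaluate' feeds back into 'propose'
# (learnings from evaluation inform the next proposal).
STAGES = ["propose", "decide", "price", "commit", "execute", "evaluate"]

def advance(task: dict) -> dict:
    """Move a task to the next stage of the loop."""
    i = STAGES.index(task["stage"])
    next_stage = STAGES[(i + 1) % len(STAGES)]
    return {**task, "stage": next_stage}

task = {"title": "write docs", "stage": "propose"}
```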
Originally published here.
Recommending possible Sourcecred (and similar tool) parameters
Recommending a default GIVE allocation for Coordinape (and similar tools). Or nudging such a system (e.g. when contributors have not been recognized sufficiently)
Token airdrop design: recommending a translation from a pre-token reputation system to token allocations upon a token launch
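A sketch of the GIVE-recommendation idea: split a budget proportionally to tracked contribution scores (e.g. from SourceCred). This is purely illustrative of a *default* a system could nudge toward; in real Coordinape, allocations are chosen by peers:

```python
def default_give(contribution_scores: dict, budget: int = 100) -> dict:
    """Recommend a default GIVE split proportional to contribution scores,
    rounding down and assigning the remainder to the top contributor so
    the budget is fully allocated."""
    total = sum(contribution_scores.values())
    if total == 0:
        return {name: 0 for name in contribution_scores}
    alloc = {name: int(budget * s / total)
             for name, s in contribution_scores.items()}
    top = max(contribution_scores, key=contribution_scores.get)
    alloc[top] += budget - sum(alloc.values())
    return alloc
```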
Task Execution
Any agent should be able to commit to, execute, and earn rewards for a task. AIs may charge a lot less for tasks (and therefore outcompete other agents in the labor market)
Death
Instead of a static death parameter, an algorithm could be responsible for shutting down the DAO (or making a proposal to shut it down)
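A minimal sketch of what such an algorithm might check, keeping humans in the loop by only *proposing* shutdown. The signals (treasury runway, active membership) and thresholds are illustrative assumptions:

```python
def should_propose_shutdown(treasury: float, monthly_burn: float,
                            monthly_active_members: int,
                            min_runway_months: float = 3.0,
                            min_active: int = 2) -> bool:
    """Trigger a shutdown proposal (not an automatic shutdown) when the
    DAO's runway or participation falls below configured thresholds."""
    runway = treasury / monthly_burn if monthly_burn > 0 else float("inf")
    return runway < min_runway_months or monthly_active_members < min_active
```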
Recruitment / ["Promotion"]
Determining an algorithm for calculating reputation and setting a reputation threshold for joining (or multiple phases of joining, i.e. Contribution Zones)
Inverse of this: helping an individual find work that they would be great at / would love to do
Team Formation
For specific tasks (if an Agent knows it can complete a task because it can coordinate a team for it, the Agent will compete for that task in the labor market)
Disengagement / ["Firing"]
Recommending disengagement from / decreased engagement with people as their contribution decreases
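One way to detect a contribution decline: an exponentially weighted moving average of weekly scores, compared against the member's own historical average. The smoothing factor and drop ratio are illustrative, and the output should trigger a human-reviewed check-in rather than automatic "firing":

```python
def contribution_trend(weekly_scores: list, alpha: float = 0.5) -> float:
    """Exponentially weighted moving average of weekly contribution scores;
    recent weeks count more heavily."""
    ewma = weekly_scores[0]
    for s in weekly_scores[1:]:
        ewma = alpha * s + (1 - alpha) * ewma
    return ewma

def suggest_checkin(weekly_scores: list, drop_ratio: float = 0.5) -> bool:
    """Flag a member when their recent trend has dropped below
    `drop_ratio` of their own average engagement."""
    avg = sum(weekly_scores) / len(weekly_scores)
    return contribution_trend(weekly_scores) < drop_ratio * avg
```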
Curation of Knowledge (and routing it to the right places)
Internal Knowledge Management
Sharing learnings about experimentation successes / failures
Marketing
Protocol Updates
Writing code and submitting PRs [a PR is in essence a type of proposal]
Eventually, writing the summoning contract
Care Work
(zk) predictions of which people and relationships need support and surfacing that to humans who can give support
Recruitment / admission / election
Delegation of (additional) roles and responsibilities to individuals
Surfaced in decisions made by people in a DAO
Social and credibility signaling outside of the DAO
Nick Naraghi
3 comments
if you're interested in the intersection of AI x DAOs, here's an article I wrote about the topic a little while back: https://paragraph.xyz/@nicknaraghi/interfaces-for-ai-in-daos-1
looks like you have given this way more thought than me haha. Solid write up btw https://warpcast.com/gramajo.eth/0x3a4275f7
thank you!