Our research aims to enhance the transparency and effectiveness of Web3 grant programs by providing a robust tool for self-assessment and benchmarking. The Grant Maturity Index (GMI) will enable grant operators to better understand their program's maturity, identify areas for improvement, and enhance their overall impact. The insights from our research will feed directly into a tool designed to demonstrate and improve the effectiveness of grant programs.
Our research methodology tracks the maturity of Web3 grant programs over time and compares their fitness for purpose within specific periods. We utilized both qualitative methods, such as practitioner surveys and expert assessments, and quantitative methods, such as data collection from grant platforms. This mixed-method approach ensures a holistic evaluation of grant programs.
We began by creating a detailed research plan and establishing a theoretical framework. Then, we developed a scoring matrix to construct the GMI and collected data from various grant programs. This data was processed into indicators, which were used to compute the GMI. We refined this preliminary analysis through stakeholder feedback to ensure accuracy and relevance.
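The step from raw platform data to comparable indicators can be illustrated with a minimal sketch. The metric names, observed ranges, and min-max normalization below are our illustrative assumptions, not the published GMI method:

```python
def normalize(value, lo, hi):
    """Min-max normalize a raw metric into the [0, 1] range."""
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo)

# Hypothetical raw metrics collected from a grant platform.
raw = {"funded_projects": 26, "avg_review_days": 14, "reports_published": 8}

# Hypothetical observed ranges across the compared programs.
ranges = {
    "funded_projects": (0, 50),
    "avg_review_days": (0, 60),
    "reports_published": (0, 10),
}

indicators = {k: normalize(v, *ranges[k]) for k, v in raw.items()}
# A shorter review time signals higher maturity, so invert that indicator.
indicators["avg_review_days"] = 1.0 - indicators["avg_review_days"]
```

Normalizing each metric onto a common scale is what makes indicators from very different programs comparable before they are combined into the index.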
By dividing the research responsibilities between two researchers, we leveraged our combined expertise in mixed-methods research. This collaboration allowed us to cross-verify results and ensure a balanced, objective analysis of the data.
Because effective scoring requires a threshold level of maturity, as reflected in the GTMI, the GMI only covers Web3 grant programs operated by organizations that combine a corporate entity, such as a foundation, with potentially more programmatic entities, such as DAOs.
The diversity among Web3 grant programs requires a differentiated analysis that cannot adopt a one-size-fits-all approach, especially where funding targets different verticals and overall program goals. Our focus was on a small group of Layer 2 protocol grants that concentrate on ecosystem growth, adoption, liquidity, and development (dApps). Working with these pools allowed us to create a baseline framework to test against and assess.
Ecosystem Growth:
Optimism (Growth Experiments Grant Program, Season 5): Managed by the Growth Experiments Sub Committee, this program's goal is to maximize the number of users interacting with applications that further a specified intent defined by the Optimism Collective.
Arbitrum (Short Term Incentive Program): An experimental program to distribute 50 million ARB to active Arbitrum protocols, focusing on enhancing network growth and liquidity. Following this, more funding was provided in the form of Arbitrum STIP backfund, totaling 21.1 million ARB tokens ($23.54 million) to support 26 additional projects. These projects include Gains Network, Stargate Finance, Synapse, and Wormhole.
Taiko Grants: Designed to support the development of the Taiko ecosystem by funding projects that enhance scalability, reduce Ethereum transaction costs, and promote network adoption. They also serve as a means to generate liquidity and distribute the anticipated project token.
Mantle Grants: Distributed by a third-party service provider with significant organizational overlap with Mantle entities. This program funds consumer applications in SocialFi, Gaming, and DeFi, focusing on increasing user engagement and network activity.
Optimism (Season 5): Selected for its comprehensive data availability, the grant program emphasizes long-term incentives to drive user interaction and application engagement.
Arbitrum: Focuses on liquidity enhancement and network growth through a substantial allocation of ARB tokens to active protocols. The backfunding initiative further underscores its commitment to supporting impactful projects that initially missed out on funding.
Taiko: Aims to bolster technology development, community engagement, and research initiatives despite not yet having launched its token or mainnet.
Mantle: Prioritizes solutions that enhance layer two capabilities and drive ecosystem growth through targeted funding of applications and infrastructure projects.
We understood that to ensure the validity of not only the data but also the output, we needed a small test case with commonalities in goals, processes, and networks. We therefore focused on ecosystem growth, ensuring the cases shared a common basis to build upon. This allowed us to create a baseline framework to test against and assess, leading to more accurate and relevant insights.
We focused on several critical areas contributing to the maturity of Web3 grant programs, including:
Clarity of objectives
Organizational structure
Governance processes
Effectiveness and impact
Transparency and accountability
Community engagement
By examining these dimensions, we aimed to identify strengths, weaknesses, and opportunities for improvement in grant programs.
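One way to picture how these dimensions could roll up into a single index is a weighted average. The dimension names come from the list above, but the weights and the equal-weight leaning shown here are purely illustrative assumptions, not the published GMI weighting:

```python
# The six maturity dimensions from the article; weights are an
# illustrative assumption, not the actual GMI weighting.
WEIGHTS = {
    "clarity_of_objectives": 0.20,
    "organizational_structure": 0.15,
    "governance_processes": 0.15,
    "effectiveness_and_impact": 0.20,
    "transparency_and_accountability": 0.15,
    "community_engagement": 0.15,
}

def gmi_score(dimension_scores):
    """Aggregate per-dimension scores (each in [0, 1]) into one weighted index."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

# A hypothetical program scoring 0.5 on every dimension lands at 0.5 overall.
example = {d: 0.5 for d in WEIGHTS}
print(round(gmi_score(example), 2))
```

A weighted composite like this makes the trade-offs explicit: raising the weight on, say, transparency changes which programs rank as more mature.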
Integrating the GMI into grant programs offers numerous benefits:
Efficiency: Identifies and eliminates redundant processes.
Transparency: Enhances transparency through clear metrics and standardized reporting.
Strategic Decision-Making: Supports better strategic decision-making with data-driven insights.
Impact Showcasing: Demonstrates the tangible impact of grant programs, fostering accountability and promoting best practices across the ecosystem.
Our research methodology follows a structured process:
Creation of Research Plan: Establishing the scope and objectives of the study.
Theoretical Framework: Building a foundation based on existing literature and concepts.
Scoring Matrix Development: Designing a tool to evaluate grant programs.
GMI Construction: Combining qualitative and quantitative data to create the index.
Data Collection and Analysis: Gathering and processing data from various grant platforms.
Stakeholder Feedback: Refining the GMI through expert and practitioner input.
Dissemination: Sharing our findings and the benefits of the GMI with the broader community.
Stay tuned for our next article, where we will delve deeper into the design behind the GMI.
Feems and benedictvs.eth