*Imagine a funding system made up of multiple concurrent Impact Evaluators (IEs)—each scoped to a specific strategic outcome and operating as a self-contained cell within a broader funder strategy. These IEs enable round-level reasoning, distributed experimentation, and coordinated return analysis.*
*The Generalized Impact Evaluator (IE) framework* {r, e, m, S} provides an abstraction of evaluators—capturing reward, evaluation, measurement, and scope. This model lays the groundwork for composing multiple concurrent evaluators, each tailored to different strategic goals within a broader funding architecture.
Pairing this model with actor-based diagrams—like the one developed by Adam Spiers during a recent research retreat—makes it easier to visualize how evaluators are composed and how they interact with operational agents.
Aligning measurement algorithms with evaluator goals allows funders to surface and reward the kinds of behaviors they actually want to incentivize—turning IEs into strategic instruments rather than passive mechanisms.
Calculating ROI at the round level and introducing a Capital Efficiency Index (CEI) could help funders compare how efficiently grantees turn capital into measurable impact.
A cost-mapping calculator can identify operational bottlenecks, helping funders evaluate the feasibility of running concurrent rounds at scale.
Crucially, this approach doesn’t exclude hard-to-measure work—it simply makes the risks and unknowns more legible, allowing for more intentional funding design.
Next steps include: backtesting impact signals, analyzing round costs, and exploring how plural IEs can operate in coordination under a shared strategy.
Imagine a world where grant rounds aren’t isolated funding events but strategic instruments—running recurrently and concurrently, each contributing to a broader organizational objective.
In this world, funding is not only reactive or opportunistic. Instead, each round is thoughtfully designed, with every grant acting as a cell within a larger system. These cells, or Impact Evaluators (IEs), differ in their scopes and intended outcomes, but all serve the same overarching mission. Each has its own expectations of return, allowing funders—whether DAOs, foundations, or ecosystem actors—to allocate capital using principles of risk management and sustainability.

Over the past month, I’ve been studying retroactive funding rounds that took place between late 2023 and August 2025. During the two-week research retreat hosted by Protocol Labs, I had the chance to explore this topic more deeply with others working on similar questions around the impact and sustainability of public goods funding. From these conversations and this short but intense research sprint, one recurring thought kept resurfacing: we are missing some key links that could help us transition from isolated or experimental rounds to strategic, recurrent, and concurrent rounds—rounds that generate returns not only for the broader community, but also for the funding actor, ultimately enabling a sustainable feedback loop.
This post is an attempt to unpack these links and explore what it would take to make strategic, measurable, and scalable grants systems a reality. I don’t claim to have the answers—this is simply the perspective I’ve built through research and prototyping. I’ll reference existing models, share tools I’ve developed, and propose next steps toward testing whether this framework can deliver the kind of planning and coordination funders are seeking.
To ground this vision, I’ve been using the model of the Impact Evaluator (IE) introduced in the Generalized Impact Evaluators paper, where an IE is defined as a set:
IE = {r, e, m, S}
Here:
- r is the reward function—how rewards are distributed,
- e is the evaluation function—how outcomes are judged,
- m is the measurement function—what data is used to support evaluation,
- and S is the scope—what kinds of impact are within the evaluator’s focus.
This abstraction is not only useful for defining what an evaluator is, but also for clarifying how each of its components—reward, evaluation, measurement, and scope—is internally composed.
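To make the tuple concrete, here is a minimal sketch of {r, e, m, S} as composable Python functions. The encoding, names, and reward rule are my own illustration of the abstraction, not an API from the paper:

```python
# A minimal sketch of IE = {r, e, m, S} as a Python structure (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ImpactEvaluator:
    scope: str                                                # S: what impact is in focus
    measure: Callable[[str], dict]                            # m: project -> measured data
    evaluate: Callable[[dict], float]                         # e: measurements -> score
    reward: Callable[[dict[str, float]], dict[str, float]]    # r: scores -> payouts

def run_round(ie: ImpactEvaluator, projects: list[str]) -> dict[str, float]:
    """Run one round: measure each project, evaluate it, then distribute rewards."""
    scores = {p: ie.evaluate(ie.measure(p)) for p in projects}
    return ie.reward(scores)

# Example: a retention-scoped IE paying out a fixed budget proportionally to scores.
budget = 100_000
retention_ie = ImpactEvaluator(
    scope="user_retention",
    measure=lambda p: {"retained_share": 0.4},   # placeholder measurement
    evaluate=lambda m: m["retained_share"],
    reward=lambda s: {p: budget * v / sum(s.values()) for p, v in s.items()},
)
print(run_round(retention_ie, ["project-x", "project-y"]))
```

Framing each round this way also makes “multiple concurrent IEs” literal: a funder’s strategy becomes a list of such evaluators, each with its own scope and budget.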
During the retreat, I had the chance to collaborate with Adam Spiers, who developed a compelling visual framework that maps out the different agents and their roles in impact evaluators. His work provides a powerful lens for understanding how each component of an IE—{r, e, m, S}—is shaped by different actors, from protocol designers to algorithm developers to evaluators themselves.

This framework helped me see IEs not as isolated mechanisms, but as cells in a modular, multi-agent architecture—each scoped to a specific objective, but collectively contributing to a strategic funding ecosystem.
If we want Impact Evaluators to contribute meaningfully to broader strategy, we need to ensure their measurement components (m) are built to detect what matters most to their specific objectives (S). In other words, the algorithms we use to evaluate outcomes should be intentionally aligned with the outcomes we want to reward.
This is the first missing link I see in many retro funding rounds today. The potential of these evaluators lies not just in distributing rewards, but in shaping incentives—guiding project teams toward behaviors that are more likely to create the kind of impact the evaluator is designed to prioritize.
There are already promising experiments moving in this direction. For example, during Optimism Season 7, Open Source Observer developed and deployed measurement algorithms tailored to specific outcomes like ecosystem growth, user retention, and adoption. These models combined onchain metrics to surface projects whose behaviors aligned with those goals—making the evaluation function not just reactive, but strategic.
Building on this idea, we could imagine a broader library of measurement algorithms, each corresponding to a different scope or strategic objective. This would allow funders to design rounds that are outcome-aware from the start, enabling project teams to align their efforts with clearer expectations of what will be rewarded. As the Generalized IE paper notes,
"IEs work on the entities’ expectation that valuable work (as defined by scope S) towards objectives O will be rewarded in the future."
By making the evaluators more transparent and purposeful, we give projects a real chance to internalize the logic of the system—not as a black box, but as something they can understand and optimize toward.
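As a sketch of what such a library might look like: a registry keyed by strategic scope, so round designers can pick a measurement function that matches their objective. The registry pattern, scope names, and metrics below are my own illustrations, not an existing API:

```python
# A hypothetical "library of measurement algorithms" keyed by scope (illustrative).
from typing import Callable

MeasurementFn = Callable[[dict], float]  # project data -> measured signal
MEASUREMENT_LIBRARY: dict[str, MeasurementFn] = {}

def register(scope: str):
    """Decorator that files a measurement function under a strategic scope."""
    def wrap(fn: MeasurementFn) -> MeasurementFn:
        MEASUREMENT_LIBRARY[scope] = fn
        return fn
    return wrap

@register("user_retention")
def returning_user_share(project: dict) -> float:
    return project["returning_users"] / max(project["total_users"], 1)

@register("ecosystem_growth")
def new_contract_deployments(project: dict) -> float:
    return float(project["new_contracts"])

# A round designer would then select MEASUREMENT_LIBRARY[round_scope] when composing m.
```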
While measurement algorithms help evaluators reward outcomes aligned with their scopes, that alone doesn’t close the loop. To move toward a truly strategic grant system, we need to understand the return on investment (ROI) of each funding round—not just at the project level, but also in aggregate.
This brings us to the second missing link: the ability to calculate the expected return of a round over time. If each Impact Evaluator is a cell contributing to a broader strategy, then the sum of the returns across all projects in a round could help us approximate the ROI of that evaluator—and, with iteration over time, the ROI of that type of round more generally.
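One possible formalization of this (my notation, not the paper’s): if $R_p$ is the estimated return attributable to project $p$ and $C$ is the total cost of the round, grants plus operations, then

$$\mathrm{ROI}_{\text{round}} = \frac{\sum_{p \in \text{round}} R_p - C}{C}$$

Averaging this across repeated instances of the same round design would give an estimate of the expected ROI for that type of evaluator.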
The question then becomes: what would we need to reliably calculate that ROI?
One approach I’ve started exploring is time series decomposition. In particular, applying Seasonal and Trend decomposition using Loess (STL) to isolate and reduce market noise, and then X-13ARIMA-SEATS, a well-established statistical method for seasonal adjustment, to extract more meaningful patterns in the data. These techniques could help us distinguish between general ecosystem growth and the specific effects of grants.
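As a concrete starting point, here is a minimal sketch of the STL step using statsmodels. The series is synthetic and purely illustrative; X-13ARIMA-SEATS is noted only in a comment because it requires the external X-13 binary:

```python
# A minimal sketch of the STL step, assuming a monthly onchain activity metric
# for one project. The data here is synthetic, not from any real round.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2023-10-01", periods=36, freq="MS")
rng = np.random.default_rng(42)
series = pd.Series(
    100 + np.linspace(0, 50, 36)                     # underlying growth
    + 10 * np.sin(2 * np.pi * np.arange(36) / 12)    # yearly seasonality
    + rng.normal(0, 5, 36),                          # noise
    index=idx,
)

# STL splits the series into trend, seasonal, and residual components.
res = STL(series, period=12, robust=True).fit()
deseasonalized = series - res.seasonal  # signal with seasonal cycles removed

# X-13ARIMA-SEATS (statsmodels.tsa.x13.x13_arima_analysis) could refine this,
# but it needs the external X-13 binary installed and on the path.
print(deseasonalized.tail())
```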
Another complementary method would be logging how much funding a project receives, when it was used, and what it was spent on. This type of data, especially when combined with time series decomposition to remove the effects of market volatility or other external factors (like receiving additional grants), could bring us closer to an accurate estimate of a project’s true return.
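One hypothetical shape for such a funding-usage log, with all field names chosen for illustration:

```python
# A possible record structure for logging grant receipt and spending (assumed schema).
from dataclasses import dataclass
from datetime import date

@dataclass
class FundingEvent:
    project: str
    amount_usd: float
    received_on: date
    spent_on: date | None   # when the funds were actually used, if known
    category: str           # e.g. "audits", "dev salaries", "marketing"

log = [
    FundingEvent("project-x", 50_000, date(2024, 3, 1), date(2024, 5, 15), "dev salaries"),
]
```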
With this, we could begin defining a new evaluation metric: the Capital Efficiency Index (CEI). This index would relate the outcomes a project produces to the capital it received, enabling us to identify which grantees were most efficient in turning funding into measurable impact.
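A hypothetical formulation of the CEI, under the assumption that the decomposition step yields an impact delta attributable to the grant:

```python
# One possible CEI: grant-attributable impact per unit of capital received.
# The attribution (impact_delta) would come from the decomposition analysis above;
# this formulation is my own sketch, not a standard metric.
def capital_efficiency_index(impact_delta: float, capital_received: float) -> float:
    """Impact attributable to the grant per unit of capital received."""
    if capital_received <= 0:
        raise ValueError("capital_received must be positive")
    return impact_delta / capital_received

# Example: deseasonalized activity rose by 1,500 units after a 50,000 USD grant.
print(capital_efficiency_index(1_500, 50_000))  # 0.03 units of impact per USD
```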
Such a metric would not only allow for better ROI calculations, but could also inform future compensation, amplify the funding of high-efficiency projects, and improve the overall return of a grant program. It shifts the system from simply distributing capital based on retroactive claims to learning from past funding behavior and strategically reinforcing what works.
Even with well-aligned evaluators and meaningful metrics, a grant system can’t scale if its operations don’t. One of the most persistent bottlenecks I’ve observed is the heavy coordination cost required to run each round. As it stands, increasing the number of rounds often means increasing the size of the grants team—a model that doesn't scale well.
To investigate this, I developed a cost-mapping calculator that breaks down the operational workload of retro funding rounds into core components: Governance, Measurement, Impact, Evaluation, and Rewarding.

The goal is to quickly visualize which parts of the system are most resource-intensive—whether that’s designing governance processes, implementing measurement systems, tracking and interpreting impact, running evaluations, or distributing rewards. By comparing past rounds using this calculator, we can begin to identify recurring patterns: Which types of rounds are more expensive to run? What themes or evaluator designs introduce more friction? Where are we overspending?
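To illustrate the shape of that comparison, here is a minimal sketch of a round’s cost breakdown across the five components. The hours are invented placeholders, not data from any real round, and this is not the calculator itself:

```python
# A toy model of the cost-mapping idea: person-hours per operational component.
from dataclasses import dataclass

COMPONENTS = ("Governance", "Measurement", "Impact", "Evaluation", "Rewarding")

@dataclass
class RoundCost:
    name: str
    hours: dict[str, float]  # person-hours per component

    def __post_init__(self):
        assert set(self.hours) == set(COMPONENTS), "one entry per component"

    def total(self) -> float:
        return sum(self.hours.values())

    def bottleneck(self) -> str:
        return max(self.hours, key=self.hours.get)

round_a = RoundCost("Retro Round A", {
    "Governance": 120, "Measurement": 300, "Impact": 80,
    "Evaluation": 200, "Rewarding": 40,
})
print(round_a.total(), round_a.bottleneck())  # 740 Measurement
```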
This kind of analysis can help round operators make more informed decisions:
What grant size makes sense for a high-cost round?
How many rounds can a team realistically run in parallel?
What types of evaluators are operationally lightweight but still strategically impactful?
By combining insights from operational cost, round design, and expected return, we can move toward a much more precise and efficient system design. One where recurrent and concurrent rounds don’t strain human capacity, but instead become predictable, repeatable processes embedded in the funder’s larger strategy.
At this point, it might be tempting to think that this kind of system—built around strategic alignment, measurable outcomes, and operational efficiency—would inherently favor projects with easily quantifiable impact. But that’s not the intention.
The goal isn’t to stop funding unmeasurable work, but to explicitly recognize and account for uncertainty. By distinguishing between what we can measure and what we can’t, we give funders the tools to make more informed decisions about how much risk they’re taking on, and how those risks fit into their overall strategy.
We’re not eliminating the unknown—we’re contextualizing it.
A system like this doesn’t require perfect information. It simply provides a way to reason about what is known, what can be estimated, and what remains speculative. This enables funders to balance their portfolios more deliberately: investing in both high-confidence, high-efficiency projects and more exploratory, emergent work that might not show measurable outcomes until much later.
In doing so, we move toward a grants system that is not just reactive or aspirational—but strategic, pluralistic, and better equipped to plan for the long term.
These ideas are still evolving, and the best way to test their value is through experimentation. Here are a few concrete next steps that could help validate—or challenge—this emerging framework:
**Backtest the impact of a retro funding round.** Work with a small set of projects and apply time series decomposition techniques (such as STL and X-13ARIMA-SEATS) to analyze whether it's possible to isolate the effect of grant funding from broader market activity. What signals emerge? What’s missing? What kinds of metadata (e.g., fund usage logs) would improve the analysis?
**Use the cost-mapping calculator to analyze previous rounds.** Compare rounds using the five operational components—Governance, Measurement, Impact, Evaluation, and Rewarding. Identify which themes or evaluator types introduce the most operational complexity, and explore which round structures might scale better under limited team capacity.
These steps won’t confirm a full theory, but they can help us understand whether this direction is practically useful—and what’s still needed to make it viable.
This post reflects the early contours of an idea, shaped by a short but intense research retreat, two years of retro funding data, and conversations with peers navigating similar challenges. I don’t see this as a final answer—but as a framework for thinking, testing, and building better funding systems.
I’m sharing these thoughts in the spirit of open exploration and would love to hear from others experimenting with strategy, impact, and evaluation in grants. Whether you're running rounds, building evaluators, or designing metrics—I’m curious what you see, what you’re trying, and where this might resonate (or not).
Let’s keep learning together.