Cover photo

Impact Measurement and Program Evaluations

*This is part three in a three-part series. Parts one and two, in case you missed them.*

Tl;dr

Measuring impact is hard

3. Impact Measurement and Program Evaluations

One of the focal points of this phase of research that Mashal and I did was to explore the nature of impact measurement in the grant programs that we spoke to. This made me think back to a class I took in grad school called ‘Creating Results Oriented Programs’, taught by Prof. Matt Hannigan. Coming off the back of trying to build eduDAO (not the hackathon one) and being interested in thinking through funding and how to assess its impact, I found it a great opportunity to learn from someone who had run a nonprofit that issued grants to local nonprofits for over a decade.

One of the resources that we covered in that class was the W.K. Kellogg Foundation Evaluation Handbook (KFEH or Evaluation Handbook), which is the main work that will inform this section.

Let’s step back and look at the definition of the word impact:

  • “Verb - to have an influence on something”, or

  • “Noun - a powerful effect that something, especially something new, has on a situation or person.”

A danger of thinking of impact purely by the above definition is a potential implication that it’s possible to understand the influence or effect of something as soon as that thing happens. In reality, impact might need time to surface.

To get a better sense of what kind of timelines impact might operate on, it was useful to look back to the Evaluation Handbook:

     “Effective evaluation is not an “event” that occurs at the end of a project, but is an ongoing process which helps decision makers better understand the project; how it is impacting participants, partner agencies and the community; and how it is being influenced/impacted by both internal and external factors. Thinking of evaluation tools in this way allows you to collect and analyze important data for decision making throughout the life of a project: from assessing community needs prior to designing a project, to making connections between project activities and intended outcomes, to making mid-course changes in program design, to providing evidence to funders that yours is an effort worth supporting.” (page 3)

In order to truly understand impact, systems of evaluation need to be generated in an ongoing fashion. The idea of a singular metric that can be assessed at a finite point in time might be a bit of a fool's errand, even if it is a necessary first step in developing more robust systems of evaluation.

     “Demonstrating effectiveness and measuring impact are important and valuable; yet we believe that it is equally important to focus on gathering and analyzing data which will help us improve our social initiatives. In fact, when the balance is shifted too far to a focus on measuring statistically significant changes in quantifiable outcomes, we miss important parts of the picture. This ultimately hinders our ability to understand the richness and complexity of contemporary human-services programs.” (page 6)

The framing of the KFEH is around community-service-based programs (as in services on the ground supporting an existing community made up mostly of IRL humans), something quite different from the focus of web3 grant programs (where communities are digital collectives of people who care about the future of the project or protocol, or at least about the coin price going up and the hope of future airdrops).

In speaking to people in grant programs, it is clear that an investment assessment framework is not enough to model the intended impact of grant programs: financially oriented impact models usually focus on more mature organizations with a clearer domain of impact (e.g. growing a product with a clear potential market size and well-defined potential users).

Grant programs can come in at a much earlier stage in the maturity of projects, and many of the things being developed are open-source public goods that are not meant to capture market share per se so much as to enable others to come in and build. Take Peter Thiel’s Zero to One as an example: the book talks about the importance of finding new innovations in verticals where near monopolies can be generated based on a) being the first to come up with that idea and b) doing it better than those who come in and imitate.* This kind of thinking doesn’t really apply to open-source public goods.

Grant programs need to think about impact evaluation in different ways, and it’s helpful to consider the issues that can arise from poor evaluation setups. Below are four consequences of operating within a limited evaluation framework, as outlined in the handbook.

  1. “We begin to believe that there is only one way to do evaluation.” (page 7)

  2. “We do not ask and examine equally important questions.” (page 8)

  3. “We come up short when attempting to evaluate complex system change and comprehensive community initiatives.” (page 8)

  4. “We lose sight of the fact that all evaluation work is political and value laden.” (page 9)

Recognizing the limitations is a good first step, but where do we go from here? Let’s think about the dimensions of evaluation: the levels at which metrics are meant to be assessed (or, as the Evaluation Handbook puts it, the levels at which evaluation happens). The handbook describes three separate levels:

  1. “Project-Level Evaluation

  2. Cluster Evaluation

  3. Programming and Policymaking Evaluation” (page 14)

The point of mentioning these three levels is to signal that metrics should not be treated as a singular, absolute structure. Rather, it is key to break things down: think about project-specific metrics, then metrics across similar types of projects, and then zoom out to the systems around those metrics and their generation, including any other influences on them. That is how you come to understand the system of feedback loops that needs to be developed to effectively issue capital and assess the impact of the activities that capital went on to fund.
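As a toy illustration of how these levels relate, project-level metrics might roll up into a cluster-level view, which in turn informs program-level questions. The metric names and numbers below are invented for the sketch, not drawn from the handbook or from any real program:

```python
from statistics import median

# Hypothetical project-level metrics for grants in a "developer tooling" cluster
project_metrics = {
    "sdk-grant": {"monthly_active_devs": 40, "grant_usd": 50_000},
    "docs-grant": {"monthly_active_devs": 15, "grant_usd": 20_000},
    "infra-grant": {"monthly_active_devs": 60, "grant_usd": 80_000},
}

# Cluster-level evaluation: look across similar projects rather than one at a time
cluster_median_devs = median(m["monthly_active_devs"] for m in project_metrics.values())
cluster_total_funding = sum(m["grant_usd"] for m in project_metrics.values())

print(f"Cluster median active devs: {cluster_median_devs}")
print(f"Cluster funding issued: ${cluster_total_funding:,}")

# Program-level evaluation would then ask how these cluster results should feed
# back into how the program issues capital -- a judgment call, not a formula.
```

The numbers themselves matter less than the habit of asking questions at each level rather than fixating on a single project-level figure.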

I’m not going to be doing a thorough breakdown of the Evaluation Handbook, though I do strongly recommend that anyone structuring or managing a grant program read it.

The last element I want to touch on here is the idea of a theory of change, which “is essentially a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.” The Kellogg Handbook refers to one style of charting out a theory of change as a Program Logic Model and describes it as follows:

     “Theory-based evaluation starts with the premise that every social program is based on a theory—some thought process about how and why it will work. This theory can be either explicit or implicit. The key to understanding what really matters about the program is through identifying this theory (Weiss, 1995). This process is also known as developing a program logic model—or picture—describing how the program works.” (page 11)

     “A program logic model links outcomes (both short- and long-term) with program activities/processes and the theoretical assumptions/principles of the program. This model provides a roadmap of your program, highlighting how it is expected to work, what activities need to come before others, and how desired outcomes are achieved.” (page 35)

They outline three types of logic models:

  • Outcomes model

  • Activities model

  • Theory model

While there are differences between these three logic models, at their essence are the following:

  1. Inputs and resources (what goes into the activities)

  2. Clearly defined activities (the direct work that gets funded)

  3. Short-term outcomes, which emerge directly following the activities themselves

  4. Medium-term outcomes, realized anywhere from a year or two up to roughly five to seven years, depending on the nature of the project and whether or not there are ongoing activities

  5. Impact, realized on a longer-term timescale, ranging from roughly three years up to seven to ten years (pages 37-42)
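To make the structure concrete, the five components above can be sketched as a simple data structure. This is just an illustrative sketch: the field names and the example grant are hypothetical, not taken from the handbook:

```python
from dataclasses import dataclass

@dataclass
class ProgramLogicModel:
    """Minimal sketch of a program logic model: inputs feed activities,
    which lead to outcomes on increasing timescales, and finally impact."""
    inputs: list[str]                 # resources that go into the activities
    activities: list[str]             # the direct work that gets funded
    short_term_outcomes: list[str]    # emerge right after the activities
    medium_term_outcomes: list[str]   # roughly 1-7 years out
    impact: list[str]                 # longer term, roughly 3-10+ years

    def chain(self) -> str:
        """Render the causal chain as a string for a quick sanity check."""
        stages = [
            ("Inputs", self.inputs),
            ("Activities", self.activities),
            ("Short-term outcomes", self.short_term_outcomes),
            ("Medium-term outcomes", self.medium_term_outcomes),
            ("Impact", self.impact),
        ]
        return " -> ".join(f"{name}: {', '.join(items)}" for name, items in stages)

# Hypothetical example: a grant funding open-source developer tooling
model = ProgramLogicModel(
    inputs=["grant capital", "core-team mentorship"],
    activities=["build an open-source SDK"],
    short_term_outcomes=["SDK released", "first external contributors"],
    medium_term_outcomes=["more apps built on the protocol"],
    impact=["a durable open developer ecosystem"],
)
print(model.chain())
```

Even a rough exercise like filling in these five fields forces a program to state the assumed causal links between what it funds and the impact it hopes for.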

When applying a theory of change or program logic model lens, it seems that all grant programs in web3 are currently falling short of defining their desired impact. It might be more useful to think of the growth of your ecosystem (which, as things currently stand, is the most frequently cited desired impact of grant programs) as an outcome that itself leads to a broader impact.

What will happen in the world if Ethereum or Solana or any other L1 grows? Why is the world better if Aave or Uniswap or any other DeFi project flourishes?

These are not straightforward questions to answer but starting to dedicate more time and attention to coming up with theories, and processes to keep asking questions and revising the answers, could help create more robust systems of intention setting, evaluation, and accountability that lead to some powerful long-term impacts.

Conclusion

It’s great to see an increase in attention on how to better understand and improve granting and capital issuance in web3 more broadly. Both my rambles here and the research Mashal and I are doing are part of this growing momentum. I am very excited to see where all of this heads in the coming months, to see the kind of experiments that will be thought up and run, and to see how we can grow as a result.

To provide some kind of specificity, here are some high-level next steps I’d suggest when thinking through a grant program:

  • Make sure the overall org has a clearly stated mission

    • Ideally in the context of a greater theory of change

    • Impact =/= growth of a project or protocol; that’s an outcome. Impact is what happens as a result of that outcome playing out over time

  • It’s helpful to make sure that the grant program itself has a mission, or at least clear goals for understanding what success looks like

  • Clarify how much capital you have to issue and over what time in order to accomplish that mission or goal

  • Clarify the type of grant program you have capacity to run: prospective grants vs. RFPs, retroactive grants, QF-based rounds with a matching pool to surface ideas the community values, etc.

Still quite vague, I know, but it’s a starting point. More to come in future pieces. If you want to stay in the loop on the forthcoming report, just follow Mashal or me on twitter as we’ll be announcing it there. Feel free to reach out if you want to chat about grants.

* Thiel’s book is unsurprisingly paywalled, so you can reference these articles to get a sense: 1) https://www.forbes.com/sites/gregsatell/2014/10/03/peter-thiels-4-rules-for-creating-a-great-business/?sh=7a48b97c54df; 2) https://www.wsj.com/articles/peter-thiel-competition-is-for-losers-1410535536; 3) https://journal.c2er.org/2015/01/no-interrupting-peter-thiel-zero-to-one-a-philosophy-of-technology-entrepreneurialism/

Thanks for checking out this piece! You can go back to parts one and two if you missed them.

You can follow me on twitter for more grants related discussion.

Photo by Nils: https://www.pexels.com/photo/gray-pencil-and-triangular-ruler-on-brown-wooden-surface-376689/