
Post Round Analysis 101


Just like any other experimental process, there are recommended steps one may take to analyze grant round results. Following these steps helps improve the process over time.

Grants Stack is a tool built from an objective standpoint; how it is implemented depends on the community. Although these steps are recommended based on our previous experience, they are not mandatory, because each community is different.

The purpose of this type of grant is to build from the bottom up, where the community itself knows what holds value and impact within that specific community.

It is assumed that the reader is familiar with Gitcoin Grants Stack and has run a round up to this post-round analysis stage.

Take the following recommended steps for data cleaning, tailored to your community:

Greenpill Network GG19 stats
  • On the round manager page, you will see two important tabs in the left-hand section: “Round Stats” and “Round Results”.

  • Round Stats: This tab is live during the round and allows the manager to check on its status, with stats such as number of donors, unique donors, and matching percentage.

  • Round Results: This section is useful after the round has concluded. This is the analysis stage that ensures the quality of the round.

Take the following steps to retrieve the round data for analysis after connecting your wallet.

  • On the left-hand tab, click on “Round Results”.

    On this page, you can export the round results by clicking the CSV download button.

    After downloading, identify and handle missing or incomplete data. Do a wide search to check for discrepancies.

  • Check for outliers, anomalies, or patterns that may skew results (these may be the work of sybil attackers).

    If your round gathered a substantial amount of data, you can now start looking for outliers or anomalies. Any data point that is significantly different from, or suspiciously similar to, the others is worth noting.

  • Standardize or normalize data if necessary.

    If you can identify outliers that may be the result of sybil attacks, the round manager can adjust this data to fit the community's regulations.

  • Sybil attacks can come in many forms. Each community has guidelines that it uses to measure the matching amount on a quadratic level.

Sybils exploit these guidelines in order to extract an excess amount of funding from the pool.

The word “Sybil” refers to a single actor possessing many faces or identities. Identifying what sybil attackers do, or fail to do, is essential to round success.

An example of a sybil would be a group of accounts owned by one person or entity, created to capture a larger share of the matching pool.

  • Some of the important signals to look for in the metrics when identifying sybils (these can vary between communities and are not limited to the following) are:

  1. Are their donors donating only to them?

    (What is each donor's donation pattern?)

  2. The timing of their donors' donations.

    (Donations made exactly at the same time to the same projects should be considered a red flag.)

  3. How much do the projects overlap with respect to the same donors?
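The red flags above can be checked mechanically once the round export is loaded. Below is a minimal sketch; the field names (`donor`, `project`, `timestamp`) and the sample records are hypothetical and will differ from the actual CSV export:

```python
from collections import defaultdict

# Hypothetical donation records; a real analysis would load these from
# the downloaded CSV (e.g., with csv.DictReader) and map the columns.
donations = [
    {"donor": "0xA", "project": "P1", "timestamp": 1000},
    {"donor": "0xB", "project": "P1", "timestamp": 1000},
    {"donor": "0xC", "project": "P1", "timestamp": 1000},
    {"donor": "0xD", "project": "P2", "timestamp": 1500},
    {"donor": "0xD", "project": "P3", "timestamp": 1600},
]

# Red flag 1: donors who only ever donate to a single project.
projects_per_donor = defaultdict(set)
for d in donations:
    projects_per_donor[d["donor"]].add(d["project"])
single_project_donors = {
    donor for donor, projs in projects_per_donor.items() if len(projs) == 1
}

# Red flag 2: several donations to the same project at exactly the same time.
by_time = defaultdict(list)
for d in donations:
    by_time[(d["project"], d["timestamp"])].append(d["donor"])
simultaneous = {key: donors for key, donors in by_time.items() if len(donors) > 1}

print(sorted(single_project_donors))  # donors worth a closer look
print(simultaneous)                   # clusters of same-time donations
```

Flagged entries are starting points for review, not proof of a sybil; each community applies its own guidelines before adjusting anything.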

Grants Stack utilizes Passport to determine matching levels and identify potential sybils.

Passport is not mandatory but is highly recommended. It is up to the round operators to determine what matching level they deem applicable to their round.

Descriptive Statistics:

Take detailed notes of what was observed and what was adjusted.

  • Detailed notes will allow for better adjustments in subsequent rounds.

  • Document each project's contact information when adjusting any outliers.

  • If any adjustment to a project's score or matching amount is made, it is suggested to send a follow-up email if time permits. This puts the project on notice in advance, even though the decision has been made based on the community guidelines.

  • Summarize any information discovered after data abstraction.

Visualizing the Exploratory Results:

  • Visualize relationships between variables using scatter plots, correlation matrices, or other graphical representations.

  • Use the available tools to map out the data. With the abstracted data, round operators can use any graphing or data visualization tool they prefer.

An example of a visual representation of the round can be viewed on a round's report card and the program explorer page.

  • Take time to scrutinize the information as a double check, to see whether the graph or map reveals any other details.

  • Identify patterns, trends, and potential outliers.

  • Identifying these things will allow the operator to make sense of the data.

  • It will equip them with the knowledge to make a sound decision that aligns with their community’s values.

This is also important because it provides a foundation from which to improve in future rounds.

Hypothesis Testing:

  • Formulate null and alternative hypotheses based on the community’s desired goals.

  • Choose a significance level that is tailored to the community (Cluster Matching) and conduct statistical tests to determine if there are significant differences between groups.

  • Depending on a project's donor activity, the round manager uses that as the basis for Cluster Matching (CM).

  • CM is a process in which data points are grouped together based on their similarities relative to the entire pool.

In this case, CM identifies projects that have an above-average number of unique donors and boosts their matching amount. This is not only a catalyst for wide distribution of resources but also a barrier to sybil attackers.

Using CM, sybil attackers would have to donate to multiple projects across the round in order to have their donations matched.
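The unique-donor boost described above can be sketched in a few lines. This is an illustration only, not the actual cluster-matching formula used by Grants Stack; the project names, base amounts, donor counts, and boost factor are all hypothetical:

```python
# Hypothetical base matching amounts and unique-donor counts per project.
base_matching = {"P1": 100.0, "P2": 100.0, "P3": 100.0}
unique_donors = {"P1": 40, "P2": 10, "P3": 10}

# Round-wide average number of unique donors.
avg_donors = sum(unique_donors.values()) / len(unique_donors)

BOOST = 1.25  # hypothetical boost factor for above-average projects

# Boost only the projects whose unique-donor count exceeds the average.
adjusted = {
    project: amount * (BOOST if unique_donors[project] > avg_donors else 1.0)
    for project, amount in base_matching.items()
}
print(adjusted)
```

With these numbers, only P1 (40 unique donors against an average of 20) receives the boost, which mirrors why a sybil would need donations spread across many projects to benefit.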

Example Cluster Matching visual

Statistical Significance:

  • Determine if the observed effects are statistically significant to the resource distribution.

  • Consider practical significance along with statistical significance.

  • When considering significance, account for potential issues that may arise and look into possible solutions.
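One distribution-free way to check whether an observed difference between two groups is statistically significant is a permutation test. A minimal sketch with made-up samples (say, average donation per donor for two groups of projects); the numbers are purely illustrative:

```python
import random

# Hypothetical samples: average donation per donor for two project groups.
group_a = [12.0, 15.5, 9.0, 20.0, 14.0]
group_b = [5.0, 7.5, 6.0, 8.0, 4.5]

# Observed difference in group means.
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

rng = random.Random(0)  # fixed seed so the sketch is reproducible
pooled = group_a + group_b
n_a = len(group_a)

# Count how often a random relabeling produces a difference at least
# as extreme as the one observed.
TRIALS = 10_000
extreme = 0
for _ in range(TRIALS):
    rng.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / TRIALS
print(round(observed, 2), p_value)
```

A small p-value suggests the difference is unlikely to be due to chance alone, though the significance threshold itself should be tailored to the community.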

    Effect Size Calculation:

  • Calculate effect sizes to quantify the magnitude of observed differences (depending on the size of the round).

  • Effect sizes provide a measure of the practical importance of the results.
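A common effect-size measure for comparing two groups is Cohen's d, which expresses the difference in means in units of pooled standard deviation. A sketch with hypothetical matching amounts for two project groups:

```python
import statistics

# Hypothetical matching amounts (in tokens) for two project groups.
group_a = [120.0, 135.0, 150.0, 140.0, 130.0]
group_b = [100.0, 110.0, 105.0, 95.0, 115.0]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation across both groups, then Cohen's d.
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
cohens_d = (mean_a - mean_b) / pooled_sd
print(round(cohens_d, 2))
```

By one common rule of thumb, values around 0.2 are small, 0.5 medium, and 0.8 or more large, which is one way to judge practical importance alongside statistical significance.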

    Multiple Comparisons Correction:

  • Adjust for multiple comparisons.

  • Ideally, the grant round would have more than one operator so that the results can be reviewed by more than one individual.

  • Having a team member review your work will mitigate human errors that may have slipped through the initial review.

Data Visualization:

  • Create visualizations (charts, graphs) to effectively communicate the results.

  • Use visual aids to highlight key findings and trends.

  • As mentioned before, the report card integration creates a finalized visual of the results.

Visual data of a grant round

Interpretation and Conclusion:

  • Draw conclusions based on the analysis and compare the final data with the estimated goals and data points.

  • Consider limitations and potential sources of bias.

    Documentation and Reporting:

  • As stated before, it is important to document the entire analysis process, including any decisions made and the rationale behind them.

  • At this point, look through the entire recorded data for a final submission. Understand the full metrics of the round and how they can be interpreted.

  • It is also good practice for a round manager to prepare a comprehensive report or presentation summarizing what occurred throughout the round.

    Finalizing the round:

  • After third-party reviews, upload the final corrected CSV file back into Grants Stack.

  • To upload files, scroll to the bottom of the page and click on “upload file” under the “revised results” section.

  • After uploading the updated files, click on “Finalize Results” to officially end the round on the back end.

    Peer Review:

  • At this point, you would have the report read and analyzed by knowledgeable but objective third parties who are not the operators.

    Feedback and Iteration:

  • Gather feedback from team members, contributors and the participating community.

  • Use the insights gained to refine the next round while educating others on your findings.

The steps stated in this module are recommendations based on experience; they are not mandatory in order to run a successful round.

Each community or manager has different goals. These managers, and their respective communities, will know which regulations to use in order to achieve those goals.

As a round manager or operator, it is highly suggested that you understand the needs of the community that you are assisting. It requires an in-depth understanding of the community goals and values in order to manage a round effectively.

Grants Stack is built as an objective tool that can be used to achieve various democratic outcomes depending on its implementation.