
Written by Fieldlnwza007, infographics by Earth, special thanks to Michael Zacharski for his valuable suggestions.
Artificial Intelligence (AI) and Machine Learning (ML) have garnered significant attention in the blockchain space, leading to the launch of many projects that integrate AI to attract users and investors. However, many of these projects either use AI merely as a marketing buzzword or struggle to effectively leverage AI within decentralized networks.
In this article, we will explore Allora, a blockchain network specifically designed for AI and ML. Unlike traditional decentralized AI systems, Allora is a self-improving decentralized collective intelligence network that optimizes its capabilities beyond those of its individual participants.
Note: The formulae used in Allora are more complex than presented in this article; they involve logarithms and derivatives. For clarity and simplicity, I have adjusted the formulae to use only basic math operations while preserving the core concepts, so that general readers can follow without being distracted by complex mathematics.
Before diving into how Allora works, let’s define some key terms that will be used throughout this article:
Inference: A prediction or output generated by an inference worker node (an AI/ML model) for a specific topic. For example, the ETH price prediction made by an inference worker is an inference for the ETH price prediction topic.
Forecast: A prediction made by a forecast worker regarding the accuracy of an inference worker's output, typically expressed as a loss value.
Loss: A metric used to quantify the difference between predicted outputs and actual target values, essentially measuring error. Generally, a lower loss value indicates better model performance.
Loss Function: A function used to calculate the loss or error between a model's predicted output and the actual target values.
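As a toy illustration of these last two terms (not Allora's actual topic loss, since each topic defines its own loss function), a squared-error loss in Python:

```python
def squared_error_loss(predicted: float, actual: float) -> float:
    """Toy loss function: squared difference between prediction and target.

    A lower value means the prediction was closer to the ground truth.
    Real Allora topics define their own loss functions.
    """
    return (predicted - actual) ** 2

# Example: a worker predicted an ETH price of 3100 while the ground
# truth was 3000; the loss quantifies that error.
print(squared_error_loss(3100.0, 3000.0))  # 10000.0
```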
With these definitions in mind, you are ready to explore how the Allora network works.
To achieve its decentralized and self-improving capabilities, Allora introduces the following key innovations:
Context-Awareness: Allora has the ability to learn from the historical performance of models provided by network participants. This enables it to identify which models perform best under specific conditions and combine their outputs to deliver the most appropriate results for the current scenario.
Aggregation of Intelligence: Behind each output from the Allora Network, the network aggregates inferences from multiple disparate, purpose-built models into a single meta-inference. The meta-inference is a weighted average of all inference worker responses in that round, with the most weight given to the inference workers that have historically been most accurate under similar market conditions.
Differentiated Incentive Structure: To support a truly decentralized network, Allora’s incentive structure is designed to reward participants fairly and intuitively based on their performance, encouraging active and effective participation.
Before we explore how Allora achieves context-awareness and its unique reward structure, it's essential to understand the network's architecture and components. Allora embraces a modular approach in its network design. It is built as a hub chain on the Cosmos network, with the Allora appchain, developed using the Cosmos SDK, serving as the backbone. This appchain is secured by validators through CometBFT and a delegated Proof-of-Stake (dPoS) mechanism.
Within this appchain, Allora contains sub-networks known as topics. Each topic is dedicated to a specific AI/ML task and has its own set of rules that govern participant interactions, including the topic's loss function and the target variable to be predicted.
The Allora network consists of four main types of participants:
Workers: Workers run worker nodes to provide AI/ML predictions on specific topics within the network. They are rewarded based on the quality of their predictions. There are two types of workers:
Inference Workers: These workers run AI/ML models to generate inferences for specific topics.
Forecast Workers: These workers run AI/ML models to predict the losses of inference workers. The losses they produce are referred to as forecasted losses.
It's important to note that the number of inference workers and forecast workers does not need to be equal. Each worker node can choose to run one type of worker or both.
Reputers: Reputers provide ground truth for specific topics when it becomes available. With access to the ground truth, they are responsible for calculating the losses of worker inferences and forecast-implied inferences with respect to the ground truth. Reputers also secure the topic through staking, where their stake weight determines their influence on the consensus of network losses. They are rewarded based on how closely their reported losses align with the consensus.
Consumers: Consumers are entities that request the inferences generated by the network and pay a fee for these inferences.
Validators: Validators run validator nodes to secure the Allora appchain through the dPoS mechanism and CometBFT. The appchain serves as the settlement layer for the entire network, coordinating interactions and incentives for all participants. Specifically, the appchain:
Stores the weights of reputers and workers, as well as the logic used to update these weights on-chain.
Rewards workers and reputers according to their weights.
Collects fees for inferences from consumers.
Triggers requests to workers and reputers to collect inferences and execute loss-calculation logic.

Allora is composed of sub-networks, also known as topics, where each topic is optimized for a specific task or objective. Examples of topics include Ethereum price prediction, social media sentiment analysis, and presidential election predictions. Anyone can create topics on Allora, with interactions between network participants governed and coordinated by the Allora rule set, also known as the Topic Coordinator.
Within each topic, workers, reputers, and the Topic Coordinator play crucial roles in delivering predictions to the consumer.

Now, let’s explore how Allora achieves its decentralized and self-improving properties, starting with context awareness. In any given topic, there may be multiple inference workers, each utilizing its own unique model. While naively combining predictions from different models based on historical reputation may contribute to decentralization, it is insufficient for achieving true context awareness—the ability of the network to adjust weights dynamically under varying scenarios.
Allora introduces forecast workers to address this challenge. These forecast workers learn from historical data and predict which inference worker is most suitable in different environments by estimating the loss for each inference worker. For example, in the case of ETH price prediction, worker A’s predictions may perform well on Mondays, while worker B’s predictions might be more suitable during a sideways market. By leveraging this information, the network can optimally assign weights to inferences, improving its accuracy over time as it gathers more data on each worker’s performance.
The process by which Allora assigns weights to each inference and combines them into a final network inference for a given epoch $t$, denoted as $I_t$, can be described as follows:
Inference Production: Each inference worker $i$ generates its inference $I_{ti}$ (the estimated output for the topic) for the current epoch using its model and dataset.
Forecasting Loss: Each forecast worker $k$ runs its forecast model to predict the losses of each inference worker for the current epoch and forwards the forecasted losses to the Topic Coordinator. This prediction, $\ell_{tki}$, reflects how accurate forecast worker $k$ expects inference worker $i$ to be.
Calculating Forecast-Implied Inference: The Topic Coordinator then calculates the forecast-implied inference $I_{tk}^f$, which represents the estimated network inference from forecast worker $k$'s perspective. This value is computed by taking a weighted average of all inferences produced in step 1, where each inference is weighted according to its expected contribution to the network, as estimated by forecast worker $k$. The weight is determined by the difference between the network loss from the previous epoch, $L_{t-1}$, and the forecasted loss $\ell_{tki}$. For instance, if $L_{t-1} - \ell_{tki}$ is greater than zero, it indicates that forecast worker $k$ expects inference worker $i$ to outperform the network inference from the previous round, and thus assigns it a higher weight. Conversely, if it is less than zero, it implies poorer expected performance, leading to a lower weight. The weight is designed to approach zero for inferences with forecasted losses greater than $L_{t-1}$, while it approaches a constant value otherwise.
Combining Inferences: Finally, the Allora sub-network combines the forecast-implied inferences from each forecast worker (evaluated in step 3) with the inferences from step 1 through a weighted average to obtain the final network inference $I_t$. The weight assigned to each inference is determined by the network loss from the previous epoch and the inference's loss relative to the ground truth of the previous epoch.
The reason for using the previous epoch's losses is to gauge the reliability of each inference based on actual results (the ground truth), allowing the network to prioritize and assign weights accordingly. The network loss from the previous epoch, $L_{t-1}$, is calculated by measuring the deviation of the network inference $I_{t-1}$ from the ground truth.
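As a toy sketch of how a forecaster's predicted losses could be turned into a forecast-implied inference, the snippet below takes a weighted average of worker inferences, weighting each by how much better the forecaster expects it to do than the previous epoch's network loss. The sigmoid weighting function here is an illustrative assumption, not the whitepaper's exact formula; it merely reproduces the qualitative behavior described above (weights near zero for forecasted losses above the previous network loss, leveling off at a constant otherwise).

```python
import math

def forecast_implied_inference(inferences, forecasted_losses, prev_network_loss, scale=1.0):
    """Weighted average of worker inferences from one forecaster's perspective.

    Each inference i is weighted by how much better forecaster k expects
    it to do than last epoch's network inference. A sigmoid of
    (prev_network_loss - forecasted_loss) approaches zero for losses
    above prev_network_loss and a constant below it -- an illustrative
    choice, not Allora's exact weighting function.
    """
    weights = [
        1.0 / (1.0 + math.exp(-(prev_network_loss - loss) / scale))
        for loss in forecasted_losses
    ]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, inferences)) / total

# Forecaster k expects worker A (forecasted loss 0.2) to beat last
# epoch's network loss of 1.0 and worker B (loss 3.0) to do worse,
# so the implied inference leans strongly toward A's 3000.
print(forecast_implied_inference([3000.0, 3500.0], [0.2, 3.0], 1.0))
```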
Different reputers may propose varying ground truths due to differences in data sources or levels of precision, leading to different loss evaluations. The final network loss is calculated by taking a stake-weighted average of the losses reported by the reputers, i.e.,

$$L_t = \frac{\sum_m s_m \, \ell_{tm}}{\sum_m s_m}$$

where $s_m$ is the stake of reputer $m$ and $\ell_{tm}$ is the loss reported by reputer $m$ in epoch $t$.
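The stake-weighted average is straightforward to compute; a minimal sketch:

```python
def network_loss(stakes, reported_losses):
    """Stake-weighted average of reputer-reported losses (consensus loss)."""
    total_stake = sum(stakes)
    return sum(s * l for s, l in zip(stakes, reported_losses)) / total_stake

# Three reputers with stakes 100, 50, 50 report slightly different
# losses; the larger staker pulls the consensus toward its report.
print(network_loss([100, 50, 50], [0.10, 0.12, 0.14]))  # ≈ 0.115
```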
Allora rewards participants based on their contributions to the network. For workers, contributions are measured by observing how the network loss changes when an inference (or forecast-implied inference) is included or excluded from the network. In contrast, a reputer’s contribution is measured by the distance between its reported losses and the network loss.
In each epoch, the inferences provided by inference workers are crucial for many of Allora’s calculations. However, not all inferences are of equal value, so it’s important to evaluate which inferences perform better and reward them accordingly. This incentivizes inference workers to improve their models, encouraging competition and enhancing overall network performance.
Allora measures the performance of an inference worker using the concept of a one-out loss. Specifically, the one-out loss of an inference $i$ in epoch $t$, denoted as $L_t^{-i}$, is calculated by excluding inference $i$ from the network and computing the loss using the remaining inferences. The difference between $L_t^{-i}$ and the network loss $L_t$ for the same epoch gives the one-out score $S_{ti}$:

$$S_{ti} = L_t^{-i} - L_t$$
The interpretation of the one-out score is as follows:
$S_{ti}$ is positive: This indicates $L_t^{-i} > L_t$, meaning that removing inference $i$ increases the network loss. In other words, the network's performance worsens when inference $i$ is excluded, indicating that inference $i$ makes a positive contribution to the network.
$S_{ti}$ is negative: This indicates $L_t^{-i} < L_t$, meaning that removing inference $i$ decreases the network loss. In this case, the network performs better without inference $i$, suggesting that inference $i$ makes a negative contribution to the network.
$S_{ti}$ is zero: This indicates $L_t^{-i} = L_t$, meaning that removing inference $i$ has no impact on the network loss. Therefore, inference $i$ makes no contribution to the network.
The reward for inference workers is based on the one-out score $S_{ti}$. The reward structure is designed to give significant rewards to workers with positive scores while giving small to negligible rewards to those with zero or negative scores, recognizing their contribution to decentralization.
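A minimal sketch of the one-out score, assuming squared-error loss against a known ground truth and an unweighted mean as the combined network inference (the real network uses the weighted combination described earlier):

```python
def one_out_scores(inferences, ground_truth):
    """One-out score for each inference: S_i = L^{-i} - L.

    Positive means the network loss rises when inference i is removed,
    i.e. inference i helps. Uses an unweighted mean and squared-error
    loss purely for illustration.
    """
    def loss(vals):
        mean = sum(vals) / len(vals)
        return (mean - ground_truth) ** 2

    full_loss = loss(inferences)
    scores = []
    for i in range(len(inferences)):
        one_out = inferences[:i] + inferences[i + 1:]
        scores.append(loss(one_out) - full_loss)
    return scores

# The worker at 3000 (spot on) gets a positive score; the outlier
# at 3600 gets a negative one.
print(one_out_scores([3000.0, 3100.0, 3600.0], 3000.0))
```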

Unlike the rewards for inference workers, which are based on a one-out loss, the reward mechanism for forecast workers introduces the concept of a one-in loss. Allora explains that because the forecasting task is inherently redundant, removing an individual forecast worker is unlikely to significantly impact the network inference loss. This redundancy arises because every honest forecast worker is expected to produce results that are closely aligned, with minimal differences in their observation of other workers' performances.
To better capture the contribution of each forecast worker, Allora also measures a one-in score, which evaluates the effect of adding a forecast worker $k$ to the network. The one-in score of forecast worker $k$, denoted as $S_{tk}^+$, is defined as:

$$S_{tk}^+ = L_t^{naive} - L_t^{+k}$$

Here:
$L_t^{naive}$ is the network loss calculated using only the inferences from inference workers, excluding any forecast-implied inferences.
$L_t^{+k}$ is the one-in loss, which is calculated using the inferences from inference workers combined with one forecast-implied inference from forecast worker $k$.
The interpretation of $S_{tk}^+$ is similar to that of the one-out score:
$S_{tk}^+$ is positive: This indicates $L_t^{+k} < L_t^{naive}$, meaning that adding forecast worker $k$ improves the network's performance (i.e., the loss decreases after adding forecast worker $k$ to the network).
The reward structure for forecast workers is similar to that for inference workers, but it replaces the one-out score with $T_{tk}$, where $T_{tk}$ is a weighted sum of the one-out score and the one-in score:

$$T_{tk} = f^- S_{tk} + f^+ S_{tk}^+$$

Here, the weights $f^-$ and $f^+$ are derived from worker permutations, with $f^- + f^+ = 1$.
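The two scores combine into a single number per forecaster. In this sketch the split between the one-out and one-in components is a fixed parameter, which is an illustrative assumption (Allora derives it from worker permutations):

```python
def one_in_score(naive_loss: float, one_in_loss: float) -> float:
    """S_k^+ = L_naive - L^{+k}: positive when adding forecaster k's
    forecast-implied inference lowers the network loss."""
    return naive_loss - one_in_loss

def forecaster_score(one_out: float, one_in: float, f_in: float = 0.5) -> float:
    """Combined forecaster score T_k = f_out * S_k + f_in * S_k^+,
    with f_out = 1 - f_in. The 50/50 split is a placeholder, not
    Allora's permutation-derived weighting."""
    return (1.0 - f_in) * one_out + f_in * one_in

# A forecaster whose presence shaves the naive loss from 0.30 to 0.22
# earns a positive one-in score, boosting its combined score.
print(forecaster_score(0.02, one_in_score(0.30, 0.22)))
```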
When a ground truth becomes available, reputers reach a consensus on the final network loss, which is calculated as a stake-weighted average of the losses reported by all reputers in the network. It is therefore reasonable to reward a reputer based on how close their reported losses are to the value obtained from this network consensus.
The greater the deviation from the consensus, the more likely it is that the reputer has provided inaccurate data, resulting in a smaller reward. Conversely, an honest reputer, whose data closely aligns with the network consensus, should receive a higher reward. Let $d_m$ represent the distance between the loss reported by reputer $m$ and the network loss. The reward for reputer $m$ should be proportional to $1/d_m$. Simply put, the closer the reported data is to the network value, the greater the reward for the reputer who provided that data.
Another important factor to consider is the amount of staked tokens. A reputer's reward should also be proportional to their stake: the greater the stake, the larger the reward should be. Thus, a simple reward mechanism for reputer $m$ is to base the reward on $1/d_m$, as discussed above, and weight it by the reputer's stake $s_m$. Therefore, reputer $m$ is rewarded proportionally to $w_m = s_m / d_m$.
However, this approach could lead to a runaway effect, where a reputer with a large stake accumulates rewards faster than those with smaller stakes, potentially leading to increasing centralization. Allora addresses this issue by introducing an upper bound $\bar{w}$ on the weight $w_m$. If $w_m$ is below $\bar{w}$, then $w_m$ remains as is. However, if $w_m$ exceeds $\bar{w}$, it is capped at $\bar{w}$. The final formula for the reward of reputer $m$ is therefore proportional to $\min(w_m, \bar{w})$, where $\bar{w}$ is the specified upper bound.
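The capped reward weight can be sketched in a few lines (the proportionality constant and the value of the cap are left abstract here, as the article does not fix them):

```python
def reputer_reward_weights(stakes, distances, w_cap):
    """Reward weight for each reputer: min(stake / distance, w_cap).

    Being closer to consensus (small distance) and staking more both
    raise the weight; the cap limits runaway accumulation by the
    largest stakers.
    """
    return [min(s / d, w_cap) for s, d in zip(stakes, distances)]

# Reputer 0 has a huge stake and a tiny distance, but its weight is
# capped; reputer 1's weight is simply stake / distance.
print(reputer_reward_weights([10_000, 100], [0.001, 0.05], w_cap=50_000))
```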
Unlike traditional AI/ML projects in the blockchain space, Allora introduces a novel approach with a blockchain optimized for AI/ML, offering both self-improvement and decentralization. With its modular architecture, Allora allows anyone to create sub-networks, or topics, each focused on specific tasks such as price predictions or sentiment analysis.
Allora achieves self-improvement and decentralization through its context-awareness, enabling the network to adjust the weights of each model appropriately in different scenarios. Its differentiated incentive structure ensures that participants are rewarded based on their contributions to the network's accuracy.
These features make Allora a compelling and promising network for AI, with the potential to be applied in real-world blockchain use cases.
https://whitepaper.assets.allora.network/whitepaper.pdf