BrightID's Aura protocol helps apps decentralize decision-making trust among many participants, yet the network of nodes that run Aura is not itself decentralized. Until now, we've asked apps to either:
Host their own BrightID (Aura) node so they can trust their own results
Choose which existing BrightID (Aura) nodes they trust to give accurate results
Both options lead to centralized reliance: either on the app providing the service or on a handful of well-known nodes.
An app client can try to query several nodes and compare answers, but it still has to discover which nodes to query. There is no easy, decentralized way to find trustworthy BrightID (Aura) nodes. The system outlined in this document is meant to change that.
The Aura protocol is a decentralized way for a group of peers to authorize each other to evaluate subjects in a domain. Many domains are possible, including "BrightID" (the domain of unique humans).
Aura nodes are responsible for running the Aura scoring algorithm on the same set of shared protocol data: evaluations, which are currently posted to the IDChain blockchain (though we would like to move to Status L2 when it's ready).
As stated before, any person or app that wants the results of running the protocol can run their own Aura node or find one or more trusted Aura nodes. Our goal is to decentralize the process of finding a trusted Aura node, so that not everyone needs to run one.
Participants in Aura produce expert evaluations in many domains, such as human uniqueness, insurance claims, and work credentials.
In the new system for decentralization, we will create a corresponding "node domain" for each regular domain. For example, the "BrightID" domain will have a corresponding "BrightID node" domain. The goal of each "node domain" is to decide which nodes can be trusted to serve answers for its corresponding domain.
Each node will be required to make a minimum number of evaluations of other nodes.
The evaluation asks the question "Is this node trustworthy to help people find answers in the corresponding domain?" This implies that the evaluated node is serving trustworthy results in the domain. It also implies that the evaluated node is itself helping to identify other trustworthy nodes through its evaluations.
As is common with Aura, the evaluator chooses "positive" or "negative" and then a confidence rating (1-4).
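As a rough illustration (the field names here are hypothetical, not the actual on-chain format), a node-domain evaluation might look like this:

```typescript
// Hypothetical shape of a node-domain evaluation; the actual format
// posted to IDChain may differ.
interface NodeEvaluation {
  evaluator: string;   // ID of the node operator making the evaluation
  evaluated: string;   // ID of the node operator being evaluated
  domain: string;      // node domain, e.g. "BrightID node"
  verdict: "positive" | "negative";
  confidence: 1 | 2 | 3 | 4;  // evaluator's confidence rating
  timestamp: number;   // when the evaluation was made
}

const example: NodeEvaluation = {
  evaluator: "node-operator-A",
  evaluated: "node-operator-B",
  domain: "BrightID node",
  verdict: "positive",
  confidence: 3,
  timestamp: Date.now(),
};
```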
Each node runs the Aura scoring algorithm (weighted SybilRank) on each node domain it serves. The results will show which node operators the network considers trustworthy.
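To give a feel for the scoring step, here's a minimal sketch of weighted-SybilRank-style trust propagation: seed trust at a set of trusted nodes, run roughly log10(N) rounds of weighted power iteration, then rank by degree-normalized trust. The seed set, iteration count, and normalization details are simplifying assumptions, not the production algorithm.

```typescript
type Edge = { from: string; to: string; weight: number };

// Sketch of weighted SybilRank-style trust propagation over the
// evaluation graph of a node domain.
function weightedSybilRank(
  nodes: string[],
  edges: Edge[],
  seeds: string[],
): Map<string, number> {
  // Total outgoing evaluation weight per node, used to split trust.
  const outWeight = new Map<string, number>();
  for (const n of nodes) outWeight.set(n, 0);
  for (const e of edges) outWeight.set(e.from, (outWeight.get(e.from) ?? 0) + e.weight);

  // Seed trust (energy) equally among the trusted seed nodes.
  let trust = new Map<string, number>();
  for (const n of nodes) trust.set(n, 0);
  for (const s of seeds) trust.set(s, 1 / seeds.length);

  // Early-terminated power iteration: roughly log10(N) rounds.
  const rounds = Math.max(1, Math.ceil(Math.log10(nodes.length)));
  for (let i = 0; i < rounds; i++) {
    const next = new Map<string, number>();
    for (const n of nodes) next.set(n, 0);
    for (const e of edges) {
      const share = (trust.get(e.from) ?? 0) * (e.weight / (outWeight.get(e.from) || 1));
      next.set(e.to, (next.get(e.to) ?? 0) + share);
    }
    trust = next;
  }

  // Normalize by weighted degree so well-connected nodes don't win by default.
  const scores = new Map<string, number>();
  for (const n of nodes) scores.set(n, (trust.get(n) ?? 0) / (outWeight.get(n) || 1));
  return scores;
}
```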
A node can use the following procedure to audit another node's trustworthiness.
1. Find the (most recent) evaluations the node has made. This is easy to do by searching the local database.
2. Randomly follow a positive evaluation to another node.
3. Repeat steps 1-2 a predefined number of times (equal to the number of HOPS in the Aura algorithm, which is the log base 10 of the number of nodes in the network).
4. Query the landed node (over the network) for a subject score in the associated domain (e.g. the BrightID domain) and check that it matches the local score.
Because steps 1-3 can be done locally using DB queries, the procedure can be run efficiently many times.
This procedure can help a node decide how to evaluate other nodes. If a downstream node fails an audit, upstream nodes' evaluations should be downgraded in confidence or changed from positive to negative.
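Here's a minimal sketch of that audit walk. The helpers (getPositiveEvaluations, localScore, queryRemoteScore) are hypothetical stand-ins for the local database and network APIs:

```typescript
// Hypothetical helpers, not an actual API: a local DB of evaluations,
// a local score table, and a network query against a remote node.
declare function getPositiveEvaluations(
  nodeId: string,
  nodeDomain: string,
): { evaluated: string }[];
declare function localScore(subjectId: string, domain: string): number;
declare function queryRemoteScore(
  nodeId: string,
  subjectId: string,
  domain: string,
): Promise<number>;

// Walk `hops` random positive evaluations starting from `startNode`,
// then compare the landed node's served score against the local score.
async function auditWalk(
  startNode: string,
  subjectId: string,
  domain: string,      // e.g. "BrightID"
  nodeDomain: string,  // e.g. "BrightID node"
  hops: number,        // roughly log10 of the number of nodes in the network
): Promise<{ landedOn: string; passed: boolean }> {
  let current = startNode;
  for (let i = 0; i < hops; i++) {
    const positives = getPositiveEvaluations(current, nodeDomain);
    if (positives.length === 0) break; // dead end: nothing to follow
    current = positives[Math.floor(Math.random() * positives.length)].evaluated;
  }
  // Only this final step goes over the network; the walk itself is local DB work.
  const remote = await queryRemoteScore(current, subjectId, domain);
  return { landedOn: current, passed: remote === localScore(subjectId, domain) };
}
```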
Apps and their users still need to be able to find trustworthy nodes in a decentralized way. The results of Aura scoring provide the basis, but node operators need a way to surface those results to the public.
I propose using Updraft to let the public vote on which node operators are trustworthy, with each domain having its own "Campaign" and each operator being an "Idea." In this way, the public acts as an oracle for answers that anyone running the Aura scoring algorithm can readily verify. Early answers (or even accurate predictions) will be rewarded, while false answers will lose money.
With Updraft voting, anyone can simply look at the ranked list of ideas in a campaign to know the best nodes to use for that domain. Clients should randomly choose a node near the top of the list.
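For example, a client might choose uniformly at random from the top few ranked ideas rather than always hitting the #1 node. The RankedIdea shape and the cutoff of five are illustrative assumptions:

```typescript
// Illustrative: a ranked Updraft campaign entry for a node operator.
interface RankedIdea {
  nodeUrl: string;   // endpoint the operator serves
  rank: number;      // position in the campaign's ranked list (1 = top)
}

// Choose a node near the top of the list at random, spreading load and
// avoiding reliance on any single operator.
function pickNode(rankedIdeas: RankedIdea[], topN = 5): RankedIdea {
  const top = [...rankedIdeas]
    .sort((a, b) => a.rank - b.rank)
    .slice(0, topN);
  if (top.length === 0) throw new Error("no ranked ideas in this campaign");
  return top[Math.floor(Math.random() * top.length)];
}
```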

Team owners in Aura are the starting points of the flow of trust, but that doesn't mean they will receive the highest scores in the Aura algorithm. Trust must flow, and the edges (and their weights) are determined by evaluations. This is the method for decentralizing trust.
Each team has its own "energy" (trust) that flows independently of other teams. Having multiple teams provides resilience: if one team becomes corrupt or dysfunctional such that the public decides to ignore its energy, the system continues to provide scores to participants using the remaining teams.
New teams register through a smart contract, which makes Aura nodes aware of their existence. The smart contract can collect a fee to pay Aura nodes for the cost of computing the energy flows for the team, and to pay Aura participants for making evaluations.
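As a sketch of what registration could look like from a team owner's side (the contract address, ABI, method name, and fee amount are assumptions, not a finalized interface):

```typescript
import { ethers } from "ethers";

// Illustrative only: the registry ABI and registerTeam method are
// hypothetical, not the actual Aura registration contract.
const REGISTRY_ABI = ["function registerTeam(string name) payable"];

async function registerTeam(
  rpcUrl: string,
  registryAddress: string,
  privateKey: string,
  teamName: string,
  feeEth: string,
) {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const wallet = new ethers.Wallet(privateKey, provider);
  const registry = new ethers.Contract(registryAddress, REGISTRY_ABI, wallet);
  // The fee paid here could fund nodes computing the team's energy flows
  // and participants making evaluations, as described above.
  const tx = await registry.registerTeam(teamName, { value: ethers.parseEther(feeEth) });
  await tx.wait();
}
```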
To find an appropriate mix of team scores to compute an overall Aura score, I again propose using Updraft. Each team produces its own scores using the Weighted SybilRank algorithm and its own energy flows. An overall score for each subject (or node) can be computed by combining team scores proportionally to their interest (🔥) as voted in Updraft. The teams most favorable to the public will have the highest interest.
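Concretely, the overall score could be an interest-weighted average of the per-team scores. This is just a sketch; the exact aggregation is open to tuning:

```typescript
// One team's result for a given subject (or node).
interface TeamResult {
  interest: number;  // interest (🔥) the team has accrued in Updraft
  score: number;     // the team's Weighted SybilRank score for this subject
}

// Overall score = sum of team scores weighted by each team's share of
// total Updraft interest.
function overallScore(teams: TeamResult[]): number {
  const totalInterest = teams.reduce((sum, t) => sum + t.interest, 0);
  if (totalInterest === 0) return 0; // no public interest recorded yet
  return teams.reduce((sum, t) => sum + t.score * (t.interest / totalInterest), 0);
}
```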
Every social protocol implicitly relies on public acceptance. A major part of Bitcoin, Ethereum, DNS, etc. is social trust.
We can help to decentralize social protocols by explicitly asking the crowd for their ratification on the parts that require social trust.
Which codebase should be the official one? Who should maintain it? Which are the bootstrap nodes? Which are the roots of authority or trust in our network?
It helps to have an open, decentralized, expert network like Aura to make recommendations where expert judgment is needed, but we also need ratification from the public on the judgments they must make, like which Aura teams to trust. This is why both Aura and Updraft need to exist and why they complement each other so well.
Updraft seeks public opinion by paying people to give it. Similar to a prediction market, early, correct opinion providers are rewarded financially. Anyone seeking to distort the truth (in order to influence Aura node operators, for example) will end up lining the pockets of honest and accurate predictors in Updraft as social reality is inevitably revealed.