Captain NFTGo.io ⛴
OpenSea has just launched a new rarity standard – OpenRarity. I spent 30 minutes digging into the basic mechanism, and it turned out to be pretty clear and concise, so I'd like to share some notes on my understanding.
https://twitter.com/opensea/status/1570179078485082113?s=20&t=rK33ezoWAbsuIjNpljl8TQ
First of all, there are two main rarity score solutions on the market:
"Summation Methodology": Invented by Rarity Sniper, Rarity Tools and other early rarity tools, which calculate the probability of each trait, and then sum up the corresponding scores to get the rarity score;
"Difference Methodology": Released by NFTGo last December, which calculates the "difference" of each NFT in the group based on the Jaccard distance (currently widely used by our API customers, as well as a plug-in NFTInspect). This algorithm can be simply understood as: if you look different from everyone, the rarer you are. We use the Jaccard Distance algorithm to calculate how different you are from other individuals in the group. Read this article for more details: https://nftgo.mirror.xyz/kHWaMtNY6ZOvDzr7PR99D03--VNu6-ZOjYuf6E9-QH0
So what does OpenRarity do? From my perspective, OpenRarity still belongs to the "Summation Methodology" in essence. But instead of summing scores derived directly from trait probabilities, it converts each trait's probability into its "Shannon Information Content" and sums that instead.
The academic definition of information content is "the amount of information required to eliminate uncertainty". Take a random event, phrased as a question: "Will the national football team make it to the World Cup?" How many matches do you need to watch to analyze the team's performance and answer it? If you only need to watch one game, or perhaps none at all, to know whether the team will qualify, then the question "whether the national football team can enter the World Cup" carries very little information.
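In formula form (the standard information-theoretic definition, which is what OpenRarity builds on), the information content of an outcome x with probability P(x) is:

```latex
I(x) = -\log_2 P(x)
```

A certain outcome (P(x) = 1) carries 0 bits; a 1-in-1024 trait carries 10 bits. The rarer the outcome, the more surprising it is, and the higher its information content.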
OpenRarity sums the "Information Content" of each trait in a single NFT to obtain the "Information Content" of that NFT. This answers the question: "how much information is needed to establish that this NFT carries all of these traits at once?" The greater the amount of information, the more surprising, and therefore the rarer, the NFT.
At the same time, the method calculates the "Information Entropy" of the entire collection: the probability-weighted average of the "Information Content" of all traits in the collection, which gives the average information level of the group.
Finally, the rarity score of a given NFT = the NFT's "Information Content" / the "Information Entropy" of the entire collection. This normalization removes the group-level differences caused by collections having different total numbers of traits, so the rarity of NFTs from different collections (even with different trait counts) can be compared directly. A sketch of the whole pipeline follows below.
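Putting the three steps together, here is a simplified sketch of OpenRarity-style scoring. This is not the official OpenRarity library, and the collection data is hypothetical; it just mirrors the description above (per-trait probabilities, summed information content, entropy normalization).

```python
import math
from collections import Counter

# Hypothetical toy collection: each NFT maps trait type -> trait value.
collection = [
    {"hat": "crown", "eyes": "laser"},
    {"hat": "cap",   "eyes": "plain"},
    {"hat": "cap",   "eyes": "plain"},
    {"hat": "cap",   "eyes": "laser"},
]
n = len(collection)

# Step 1: probability of each (trait_type, value) pair across the collection.
counts = Counter(pair for nft in collection for pair in nft.items())
prob = {pair: c / n for pair, c in counts.items()}

def information_content(nft):
    """Step 2: sum of -log2(p) over the NFT's traits (its total 'surprise')."""
    return sum(-math.log2(prob[pair]) for pair in nft.items())

# Step 3: collection entropy, the probability-weighted average information
# content over all trait values, used to normalize across collections.
entropy = sum(p * -math.log2(p) for p in prob.values())

for nft in collection:
    score = information_content(nft) / entropy
    print(nft, round(score, 3))
```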
For more details about OpenRarity, please see: https://openrarity.gitbook.io/developers/fundamentals/methodology
Compared to the "Summation methodology", OpenRarity provides a solution to the scenario where with rarity score is higher due to a large number of traits, and the summation of information is more suitable for the concept of overall rarity. The reason is that the probability of multiple independent events occurring at the same time is not the sum of the probabilities of each event, but the product of them. However, the amount of information content required for multiple independent events to occur simultaneously is the sum of the amount of information content for each event to occur individually. Therefore, summing the "information content" is more mathematically reasonable and can better represent the rarity of NFTs that are composed of different independent traits. But for NFTs with 1/1 traits, this method will give them lower ranks than communities expect,which is very unreasonable.
Compared to the "Difference methodology", OpenRarity evaluates rarity from the perspective of "surprise, but not "difference", even though the two concepts seem to be similar - when you find individuals who are very different, you will be pleasantly surprised. But these two algorithms have been theoretically based on different angles and may draw different conclusions, which is also interesting.
Of course, rarity itself is a relatively subjective concept. Having more rarity algorithms in the NFT ecosystem is a good thing, but I don't think a single rarity standard is necessary at all.
NFTGo will also support the OpenRarity model on our product side. You will then be able to check FOUR rarity rankings in parallel on NFTGo: NFTGo Rarity, rarity from @RaritySniperNFT, rarity from @trait_sniper, and OpenRarity. 😉