Poorbirds DAO is an organization founded to acquire Moonbirds. The writings published to Mirror relate to DAO philosophies.
Statistical rarity and visual rarity oftentimes do not correlate. Here’s a quick Twitter thread detailing the discrepancy. The rarity discussed in this Twitter thread is OpenRarity, an open standard that is trying to become the de facto standard of NFT ranking. Here I am going to propose that the developers make the standard dynamic.
I am going to keep this section short. Go read the documentation. The key point is the ranking equation:

This equation compares a logarithmic assessment of rarity against the expected rarity. The logarithmic component lets you arithmetically add the rarity of each trait of an NFT to determine the NFT’s overall rarity. This rarity is a static value.
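Since the equation image does not survive here, here is a minimal Python sketch of the information-content style of scoring the documentation describes. The function names and toy numbers are my own illustrations, and the full standard also normalizes against an expected value, which is omitted here:

```python
import math

def trait_information(p: float) -> float:
    """Information content of a trait: rarer traits (smaller p) score higher."""
    return -math.log2(p)

def nft_rarity(trait_probs: list[float]) -> float:
    """The logarithmic form lets us arithmetically add per-trait rarity."""
    return sum(trait_information(p) for p in trait_probs)

# Toy NFT with one 1% trait and one 50% trait.
score = nft_rarity([0.01, 0.5])   # -log2(0.01) + -log2(0.5)
```

The key property is the additivity: an NFT’s score is just the sum of its traits’ information contents.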
We will keep this section short. What do we mean by rarity? Arran from the Divergence team prefers the term scarcity. The definition of scarcity can be taken as:
the state of being scarce or in short supply; shortage.
When adjusting the ranking system, we will want to be measuring the shortage of a given trait. Scarcity can have market driven elements.
When you are looking at a complex system, you want to decide which inputs are useful and which inputs make others obsolete. Inputs can be NFT price, probability that an NFT will sell, probability that an NFT will be listed, volume of a collection, volume of a trait, and many other variables. Once you have your variables, you create an equation which remains robust to market conditions.
Shortage immediately brings to mind the number of NFTs listed with a specific trait. If we considered how many of an item are listed, or the probability that an NFT with trait “x” is listed, then we could start building a scarcity measure from this variable. However, this is not an effective variable to use: if someone lists an NFT at an exorbitant price, it is still effectively “off the table” for all buyers.
Instead, we would want to use the probability that an NFT gets sold. A sale takes multiple variables into account. The NFT market is a dynamic system: if something is becoming more scarce, it will be bought until it reaches a price equilibrium. We would expect more scarce NFTs to naturally be higher priced. However, we don’t want to use price directly, because it can be easily manipulated by listing a trait at an exorbitant price; such an NFT is effectively unavailable (scarce) but still positively influences a rarity score. Conversely, even if a trait is statistically more rare, if it is more accessible through sales then it is effectively less scarce. For scarcity purposes, what we care about is the attainability of the NFT.



What do we see across the last two months of various NFT collections? The majority of sales happen on the floor. There are some sales of more valuable, more scarce items, but most of the volume happens on or near the floor across collections. Floor NFTs are considered less rare by the market. The odds that an item sells for a fair price in an established collection have to do with its market rarity.
Most volume is created by floor finders and flippers. They arbitrage the expected future price and pull value from that spread. To do this, they use floor items as proxies for trading project trends. Floor items are the undesirable, less scarce items. If an item is trading on the floor, then it is considered non-scarce by market conditions.
Rare items are priced higher, so they will sell less often. They will also be put up for sale less often. On the Proof podcast revealing OpenRarity, the hoodie punks were used as an example: they effectively became more rare, and their trait floor price increased due to demand. This would remove the hoodie punks from floor considerations and would lower total sales volume for that trait. Thus total sales for scarce traits decrease.
Knowing these market attributes, I propose we can use the probability of a trait selling in our formula. We expect less scarce traits to have a higher probability of being sold:

When the trait is likely to be the next item sold, I(x) will have a smaller value; when the trait is unlikely to be the next item sold, I(x) will be larger. The probability of being sold is based on historical sales data and is the likelihood that the next sale will have trait “x”.
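As a sketch, I(x) could be estimated from a trailing window of sales. The data shape and function name here are hypothetical, not part of the proposal:

```python
import math
from collections import Counter

def sale_information(sales_traits: list[str], trait: str) -> float:
    """I(x) = -log2(P(next sale has trait x)), with P estimated as the
    trait's share of recent historical sales."""
    counts = Counter(sales_traits)
    p = counts[trait] / len(sales_traits)
    return -math.log2(p)

# Toy history: 10% of recent sales carried the "hoodie" trait.
history = ["floor"] * 90 + ["hoodie"] * 10
```

A floor trait that sells constantly gets a small I(x); a trait that rarely appears in sales gets a large one.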
To make this system robust, it should respond to the dynamic market and prevent abuse. The most obvious method of abuse prevention is weighting the function so that market-determined rarity only affects the scarcity score once the market has matured. This maturation can be represented by a collection volume threshold, which stands in for a statistically robust sample size. We want the statistical rarity to dominate until market forces are abuse resistant enough to determine rarity. So we want a formula that looks like:

The simplest weighting function would be a set volume minimum but this can be adjusted.

This weighting formula would have the second term (probability of selling next) begin to dominate as the Collection Volume (CV) approaches the Predetermined Volume Minimum (PVM). We could even add an exponent so that the market term does not become meaningful until the collection volume becomes larger than the PVM.
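A minimal sketch of such a weighting, assuming a capped ratio of CV to PVM and an illustrative exponent k (both the functional form and k are my assumptions, not the proposal’s exact formula):

```python
def blended_score(stat_info: float, market_info: float,
                  cv: float, pvm: float, k: int = 2) -> float:
    """Blend static statistical rarity with market-derived rarity.
    The weight w stays near zero until Collection Volume (cv) nears the
    Predetermined Volume Minimum (pvm); the exponent k delays the market
    term's influence even further."""
    w = min(cv / pvm, 1.0) ** k
    return (1 - w) * stat_info + w * market_info

# Young collection: statistical rarity dominates.
early = blended_score(5.0, 3.0, cv=500, pvm=5000)
# Mature collection: the market term takes over fully.
mature = blended_score(5.0, 3.0, cv=5000, pvm=5000)
```

With k = 2, a collection at 10% of the volume threshold gives the market term only a 1% weight.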

There are other methods to try, but they bring added complexity. For instance, traits could have a minimum trait volume instead of a collection volume. This could be used if individuals were worried about wash trading for rarity manipulation. If people were very concerned about wash trading, the weighting function could be a function of “Trait Royalties.” This would require a significant payment to the project in order to wash trade scarcity scores. If the project is itself wash trading, then there is not much you can do, but let’s look at wash trading extremes.
There are two obvious scenarios: we sell low scarcity items to influence scarcity values, or we sell high scarcity items to do the same. Let’s consider the low scarcity (common) items first.
In this situation, we will assume our system is dominated by market scarcity, total collection volume is 5k ETH, we have 10 floor traits all equal in value and traded volume, and selling below the collection floor will result in our item getting sniped. If we want one trait to be worth more, then we cannot wash trade that trait: wash trading it would decrease its rarity. Instead, we would need to wash trade all 9 other traits to influence only our desired trait. To get a 5% rarity premium, we would need to wash trade a total volume of 225 ETH, or 4.5% of the entire collection volume. At 5% royalties and 2.5% platform fees, we are looking at almost 17 ETH lost.
This 5% is on a linear scale: the target trait drops from 10% to 9.5% of traded volume, which yields a 2.25% change on a log base 2 scale (-3.322 → -3.395). This only considers a collection with 10 traits. You dilute the effectiveness for every trait you leave out of the wash trade: if you only wash traded 8 of 10 traits with the same volume of ETH, you would get a rarity premium of around 4.3%, a 14% decrease in effectiveness. So the maximally effective strategy for increasing rarity is wash trading EVERY OTHER TRAIT. It is not economical to wash trade a non-scarce trait into scarcity.
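The back-of-envelope numbers above can be checked in a few lines (same assumptions: 5k ETH collection volume, 10 equal floor traits, 7.5% combined royalties and platform fees; this yields roughly 9.57%, which the article rounds to 9.5%):

```python
def wash_common(total_volume: float = 5000.0, n_traits: int = 10,
                wash: float = 225.0, fee_rate: float = 0.075):
    """Wash trade the other nine floor traits to shrink the target
    trait's share of sales (its implied sale probability)."""
    per_trait = total_volume / n_traits               # 500 ETH per trait
    target_share = per_trait / (total_volume + wash)  # 10% -> ~9.57%
    fees = wash * fee_rate                            # royalties + platform fees
    return target_share, fees

share, fees = wash_common()   # fees = 16.875 ETH
```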
As discussed, if you wash trade a trait then it becomes less scarce. The economics of increasing the score of a high scarcity item are even worse than a low scarcity item. Play with the numbers and it becomes obvious you need to be a considerable market mover to even noticeably increase high scarcity values. It’s not economical.
Let’s assume the motivation is to decrease the rarity. Assume similar parameters to the low scarcity example, except that 9 traits are equally common and our target trait is 1% of the collection. In order to double the score to 2% (which is still considerably more rare than 10.8%), we would need to trade 50 ETH (assuming volume is split in proportion to rarity). This only costs 3.75 ETH in fees, and the rarity score decreases by 15%. This is much more cost effective. However, you have decreased the rarity of your piece with minimal increase in the other traits’ rarities. If you are using this as a way to decrease the price of your desired trait, then you have reinforced other social buying signals, like a desire for high rarity items.
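The rare-trait case works out as follows (same fee assumptions; the score drop is measured on the log base 2 rarity used throughout):

```python
import math

def wash_rare(total_volume: float = 5000.0, trait_share: float = 0.01,
              wash: float = 50.0, fee_rate: float = 0.075):
    """Wash trade a 1% trait to dilute its own log-scale rarity score."""
    trait_vol = trait_share * total_volume                  # 50 ETH of past sales
    new_share = (trait_vol + wash) / (total_volume + wash)  # ~2%
    fees = wash * fee_rate                                  # 3.75 ETH
    # Relative drop in the -log2 rarity score: ~15%.
    drop = 1 - math.log2(new_share) / math.log2(trait_share)
    return new_share, fees, drop

new_share, fees, drop = wash_rare()
```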
I can’t think of any situation in which this is economical. These figures all assume instantaneous trades, with no one else trading. There were 31 projects with greater than 50 ETH in volume in the 24 hours before this article was written, and the majority have a trading volume larger than 5,000 ETH. To take a 1% scarcity to 2% on Moonbirds would take 1,730 ETH (a loss of 130 ETH in fees). At that scale, the value as a social signal becomes greater than the decrease in rarity, or it is obvious wash trading.
Even when we approximate the real world better and consider the glitch moonbirds, which have been sold on average 0.667 times each, you would need to sell a glitch bird 4 times to double the rarity. To be safe that no one snipes your bird, you would spend between 7.5 and 15 ETH in fees. This doubles the “scarcity score,” but it is obvious, burns ETH, and does not increase your chances of getting a glitch bird. The new glitch rarity is 8/10,000 as opposed to a statistical rarity of 6/10,000.
Please prove me wrong. How do you wash trade this system? Why is this worse than a static rarity system? The easiest argument is against my assumptions. However, I believe this system works to accurately rate NFTs within a dynamic market context.

