My kids often say: "But, Why?"
If you ask this question about 3-4 times you either uncover irrational assumptions or quantum physics (or both!).
Why do central banks target 2%? The answer may shock you...
Central banks target 2% because the Reserve Bank of New Zealand arbitrarily picked 2%, literally because it was higher than 1%... and everyone copied them!
There is no real fundamental analysis underpinning the original decision that every major central bank then copied.
Maybe they got it right? Boris Johnson had a famous phrase^ for civil servants who are paid lots of money to take on no risk, with no accountability or engagement:
"great supine protoplasmic invertebrate jellies"
I prefer his assessment.
I was looking at how to set LTV limits: basically everyone seems to have copied each other +/- a bit. I haven't yet found any analysis backing most platforms' LTVs... at least not publicly, and nothing quantitative.
This got me thinking: shall we just take 80/85% for $wstETH
and 70/75% for $wAVAX
... or is this just sloppy?
In traditional finance risk modelling, this is where you would go back to first principles.
Let us first define a separation between two types of assets:

- native assets (e.g. UNI, ETH, BTC): these have no off-chain credit risk
- non-native assets: assets that are deposited and taken off-chain
Even this basic classification puts us in a conundrum: Tether and Circle literally take assets off-chain and deposit them into non-cash-like assets.
For now, however, we will treat $USDC
and $USDT
as special cases of "native assets" (eeek!) until they get a banking license. This remains such a systemic risk that it will have to be accepted as a fundamental tenet of risk management on-chain. Either accept it or don't play.
For the non-native assets, VaR-based models with credit risk add-ons are probably the correct approach, given the nature of their operations, the credit components and the redemption windows.
This approach isn't valid for native assets: VaR is not the right measure of risk in a context where no (acknowledged) credit exposure exists, particularly in liquidations, where a zero holding period is usually assumed. Generally the "inventory risk" is pushed to JIT liquidity providers, who will increasingly look at statistical arbitrages (which require holding periods) as the instant "free lunch" dries up and highly sophisticated actors like Jump Crypto re-enter.
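To make the holding-period point concrete, here is a minimal historical-VaR sketch on synthetic returns (the volatility and confidence level are illustrative, not taken from any real asset). Note what happens at a zero holding period: the measure collapses to zero, which is exactly why VaR is the wrong tool for instant liquidations.

```python
import numpy as np

def historical_var(returns, confidence=0.99, holding_days=1):
    """One-day historical VaR scaled to a holding period via sqrt-of-time.

    With holding_days=0 (the zero-holding-period liquidation convention)
    the measure collapses to zero, illustrating why VaR is degenerate
    when no inventory is actually held.
    """
    one_day_var = -np.quantile(returns, 1 - confidence)  # tail loss, as a positive number
    return one_day_var * np.sqrt(holding_days)

# synthetic 2%-daily-vol asset; purely illustrative
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.02, 1000)

print(historical_var(daily_returns, holding_days=5))  # a meaningful loss number
print(historical_var(daily_returns, holding_days=0))  # 0.0, degenerate
```

The sqrt-of-time scaling is itself a modelling convention (it assumes i.i.d. returns), but it is enough to show that the risk measure only exists once a holding period does.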
So what is the correct model for liquidators? A model that measures the likelihood that these arbitrageurs will come and backstop the trade. This cannot be judged from the LP pools, as most of the liquidity is "dark".
We have to measure Execution Risk (Slippage).
As mentioned, the AMM model is somewhat irrelevant, with most liquidity these days off-exchange. The reality is that we should inherit directly from quant risk modelling, as most of the liquidity will indeed come from Limit Order Books (LOBs).
When creating any model in quant finance, the dumbest thing to do is jump straight to machine learning (unless you just finished a PhD on the topic). Instead, start with something simple and build on it.
It is with this in mind that we look to basic price impact models such as Kyle's Lambda: The foundation of market microstructure.
Kyle (1985) formulated a framework including a parameter known as Kyle's lambda. This parameter comes from modelling the optimal behaviour of informed traders amongst the uninformed: assume you are an insider, hence "informed"; what is the optimal strategy to move your position? Kyle considers this in the context of a single static trade. For more complex sequential trading you must consider optimal inventory and additional constraints (see below).
For a DeFi interpretation, Kyle's lambda can be read as the cost of demanding a certain market depth in a given period of time. It is established by relating the price change during a transaction to the volume executed, subject to a noise process assumed to be independently and identically distributed.
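A toy sketch of the single-trade relation, with made-up numbers for the mid price and lambda: execution price moves linearly in signed volume, so (assuming the fill walks the book linearly) the slippage cost of demanding depth q in one transaction grows quadratically in size.

```python
def execution_price(mid, lam, q):
    """Kyle-style linear impact: the price moves with signed volume q.

    lam is Kyle's lambda (price impact per unit of net order flow);
    q > 0 is a buy, q < 0 is a sell.
    """
    return mid + lam * q

def execution_cost(lam, q):
    """Slippage cost vs. the pre-trade mid of demanding depth q in a
    single transaction. If the fill walks the book linearly, the
    average fill price is mid + lam*q/2, so the cost is 0.5*lam*q**2:
    quadratic in size, and the same for buys and sells."""
    return 0.5 * lam * q**2

mid, lam = 2000.0, 0.05        # illustrative mid and lambda, not real data
print(execution_price(mid, lam, -100))  # a large sell walks the price down to 1995.0
print(execution_cost(lam, -100))        # 250.0, identical for +100 or -100
```

The quadratic cost is what makes lambda a useful summary: double the liquidation size and, all else equal, you pay four times the slippage.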
Like everything this simple, it's a bit of a pain to derive but super easy to explain conceptually.
In this approach I decided to drop the "signed volume" construction of Hasbrouck (2006), since we can precisely obtain all the orders on-chain and separate them ourselves. This means we can calculate Kyle's lambda in two directions: buys and sells.
However, we only care about sells for liquidations.
Now we have a regression model to run across all the data on the AMMs.
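A sketch of what that regression could look like, with synthetic data standing in for the on-chain sell events (the true lambda, volume range and noise level here are all invented for illustration). Since sells are separated out, it is an OLS fit through the origin of per-trade price change on unsigned sell volume.

```python
import numpy as np

def sell_side_lambda(price_changes, sell_volumes):
    """Estimate Kyle's lambda for the sell direction only.

    OLS through the origin of per-trade price change on (unsigned)
    sell volume: delta_p = -lambda_sell * v + noise. Because buys and
    sells are observed separately on-chain, no signed-volume
    construction (a la Hasbrouck) is needed.
    """
    v = np.asarray(sell_volumes, dtype=float)
    dp = np.asarray(price_changes, dtype=float)
    # least-squares slope through the origin; sells push the price down,
    # so negate to report lambda as a positive impact coefficient
    return -float(v @ dp / (v @ v))

# synthetic sell events: true lambda = 0.002 per unit volume, plus noise
rng = np.random.default_rng(1)
vols = rng.uniform(1, 100, 500)
dps = -0.002 * vols + rng.normal(0, 0.01, 500)

print(sell_side_lambda(dps, vols))  # recovers roughly the true 0.002
```

In practice the inputs would be per-liquidation price changes and volumes pulled from on-chain events rather than simulated draws, and you would fit buys and sells as two separate regressions.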
Modern "solvers" like Odos (below) route single transactions through a large number of liquidity venues, which may make it challenging to correctly identify all the events from all venues.
There are so many extremely talented quants working in this space it's hard to know where to start but I'll just pick two of my personal favourites.
On the single-trade horizon, Bouchaud in "Trades, Quotes and Prices" looks at correlator models, which I found fascinating from a physics perspective. He demonstrates how to calculate correlators that accurately fit CEX execution data in crypto, and I think the same approach could probably be used on interest rate models to identify the reaction speed.
For multi-trade inventory management, it becomes the optimal inventory problem that market makers face; this is tackled quite comprehensively by Cartea et al. in "Algorithmic and High-Frequency Trading".
ISDA is a global co-operative involving all the major banks; effectively, ISDA is a DAO. I hope curators will look to create something similar to ISDA for risk modelling, so that users have transparency against a simple benchmark.
^Johnson was expected to face a two-hour grilling over his proposed £16.5 billion budget, which included cuts to services like the fire and police departments. The insult was a response to the assembly's decision to bypass the question session, which Johnson perceived as a lack of engagement or accountability.
Alex McFarlane