
GaussianMove uses the AAA algorithm to generate near-optimal rational approximations offline, then evaluates them on-chain via Horner's method—achieving CDF error of 3.35×10⁻⁹ and PPF error of 3.11×10⁻¹³ with predictable gas costs. Sui's native sui::random then makes Gaussian sampling operationally simple inside a single transaction. This article describes the constraints, the approximation methods, and several applications once these functions are available as ordinary library calls.
On-chain, Gaussian machinery is usually pushed off to oracles, hidden inside off-chain engines, or avoided entirely because gas and fixed-point arithmetic make it challenging. GaussianMove asks a simple question:
What does it actually take to compute Φ and Φ⁻¹ on Sui when all you have are integers, fixed-point scaling, and strict gas limits?
We’ll start from the mathematical constraints, choose an approximation strategy that is compatible with those constraints, and then plug the resulting primitives into familiar objects like Black–Scholes, VaR, and Gaussian-shaped AMMs.
GaussianMove and its companion Black-Scholes package are deployed on Sui testnet:
Package | Version | Package ID |
|---|---|---|
gaussian | v0.9.0 | 0x66f9087a3d9ae3fe07a5f3c1475d503f1b0ea508d3b83b73b0b8637b57629f7f |
black_scholes | v0.2.0 | 0x1637ddc0495a8833ebd580224dad7154dfb33477f73d2c7fb41e2b350efa55b3 |
Repositories:
move-gaussian — Core Gaussian library (399 tests)
move-black-scholes — European options pricing (83 tests)
Why is Gaussian math hard on-chain? The difficulty is not just gas: you only get integers and fixed-point scaling, and inverse functions amplify small errors, which together constrain the choice of approximation method.
Gaussian-based models are standard tools in traditional finance.
The Black–Scholes–Merton model for European options pricing literally contains Φ(d₁) and Φ(d₂), where Φ is the standard normal CDF.
Risk measures such as Value at Risk and Expected Shortfall often assume (log)normal returns and use the inverse CDF Φ⁻¹(α) to convert a confidence level α into a loss threshold.
Many factor and term-structure models linearize around Gaussian assumptions even when reality is heavier-tailed.
In banks and brokerages, these functions are evaluated using high-quality numerical libraries (BLAS/LAPACK, Boost, SciPy, etc.), and their error properties are well understood. The question here is what it takes to bring that level of Gaussian math on-chain, where the only primitives are integer arithmetic and fixed-point representations.
DeFi derivatives protocols already lean heavily on Gaussian-style thinking, but most of the heavy numerical work happens off-chain:
Lyra (~$58M TVL, one of the largest on-chain options protocols) prices options with Black–Scholes-style formulas but relies on Chainlink price feeds and off-chain Greeks; it does not evaluate Φ/Φ⁻¹ directly on-chain.
Hegic, Dopex, Rysk, and Valorem use a mix of RFQ mechanisms, oracles, and simplified pricing rules rather than full on-chain Gaussian math.
Panoptic is explicitly oracle-free but achieves this by deriving option payoffs from Uniswap v3 LP mechanics instead of computing Gaussian functions.
Primitive RMM-01 together with solstat is the clearest EVM example of explicit on-chain Gaussian CDF usage, but the protocol has since closed down.
In practice, most production DeFi options protocols avoid full on-chain Gaussian computation. They either use oracles to import off-chain pricing, derive prices from AMM mechanics, or accept simplified models. The few that attempted full on-chain Black–Scholes (like early Primitive) faced gas costs and complexity that limited adoption.
So why build this?
Three reasons:
The landscape is changing. Sui’s native randomness and lower compute costs make on-chain statistical computation more practical than on EVM.
Research infrastructure matters. Even if production protocols use hybrid approaches, having audited, well-documented Gaussian primitives enables experimentation. The Paradigm pm-AMM paper (2024) and Distribution Markets work show continued research interest in Gaussian-based mechanisms.
The mathematical problem is nontrivial. Computing Φ⁻¹(p) without floating-point is a substantive applied mathematics problem, independent of any particular application.
The fundamental question is deceptively simple: how do you compute Φ⁻¹(p) when your only arithmetic primitives are integer addition, subtraction, multiplication, and division? This is a timeless numerical analysis challenge from the 1960s—blockchain merely adds gas costs and determinism constraints.
Fixed-point arithmetic constraints. On-chain environments (Solidity, Move, etc.) operate with integers and implicit scaling (typically WAD = 10¹⁸). This introduces overflow risk for (a * b) / SCALE, truncation error in division, and a hard floor on the smallest representable probabilities. Practical accuracy is limited by cumulative rounding, even though u256 offers ~77 decimal digits of intermediate precision.
Randomness source. Gaussian sampling via inverse transform requires a high-quality uniform variate U ∈ (0,1). On EVM/Solana/Aptos this usually comes from external VRF providers such as Chainlink VRF or Switchboard VRF, adding multi-transaction flows, callbacks, and gas. On Sui, sui::random exposes a Random object that can be consumed inside a single transaction.
Inverse functions amplify errors. Even if Φ(x) is approximated accurately, inverting it to obtain Φ⁻¹(p) amplifies small forward errors, especially in the tails where d/dp Φ⁻¹(p) grows large. Classical work such as Algorithm AS 241 (Wichura) and Acklam's normal quantile function emphasizes careful piecewise design and tail handling; GaussianMove follows the same philosophy with AAA-based piecewise approximations.
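The amplification is easy to quantify: since d/dp Φ⁻¹(p) = 1/φ(Φ⁻¹(p)), a small forward error in p is magnified by the reciprocal of the density at the corresponding quantile. A short Python check, using the stdlib `statistics.NormalDist` as the reference distribution, makes the center-versus-tail contrast concrete:

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal reference

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def amplification(p):
    """|d/dp inv_cdf(p)| = 1 / phi(inv_cdf(p)):
    how much a small error in p grows in the output quantile."""
    return 1.0 / phi(nd.inv_cdf(p))

center = amplification(0.5)  # sqrt(2*pi) ~ 2.5: errors barely grow
tail = amplification(1e-6)   # errors grow by a factor of ~10^5
print(f"center: {center:.2f}x, tail: {tail:.3g}x")
```

This is exactly why the piecewise tail design below matters: the same forward error that is harmless at p = 0.5 becomes five orders of magnitude larger near p = 10⁻⁶.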
Before we talk about algorithms, we need to make peace with a simple fact: on-chain, there are no floats—only integers with a fixed scale. That single choice drives almost every design decision in GaussianMove.
DeFi universally uses WAD scaling: 1.0 is represented as 10¹⁸. This gives 18 decimal places of precision, sufficient for most financial calculations (basis points are 10⁻⁴), but it also means all transcendental functions (exp, ln, sqrt) must be realized as integer-based polynomial or rational approximations.
const SCALE: u256 = 1_000_000_000_000_000_000; // 10^18
/// Fixed-point multiplication: (a × b) / SCALE
public fun mul_wad(a: u256, b: u256): u256 {
(a * b) / SCALE // u256 width leaves ample headroom for WAD-scaled operands
}
/// Fixed-point division: (a × SCALE) / b
public fun div_wad(a: u256, b: u256): u256 {
(a * SCALE) / b
}
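The same WAD semantics can be mirrored off-chain with Python's arbitrary-precision integers, which is useful for unit-testing rounding behavior before touching Move. This is a sketch: `//` reproduces Move's truncating integer division.

```python
SCALE = 10**18  # WAD: 1.0 is represented as 10^18

def mul_wad(a: int, b: int) -> int:
    """Fixed-point multiply: (a * b) / SCALE, truncating like Move."""
    return a * b // SCALE

def div_wad(a: int, b: int) -> int:
    """Fixed-point divide: (a * SCALE) / b, truncating like Move."""
    return a * SCALE // b

one_and_half = 15 * SCALE // 10      # 1.5 in WAD
two = 2 * SCALE                      # 2.0 in WAD
print(mul_wad(one_and_half, two))    # 3.0 in WAD = 3 * 10^18
print(div_wad(SCALE, 3 * SCALE))     # 1/3 in WAD, truncated
```

Note that `div_wad(SCALE, 3 * SCALE)` truncates rather than rounds; that half-ULP-per-operation loss is exactly the rounding budget tracked in the precision table below.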
Understanding where precision is lost is crucial for error analysis:
Layer Precision Notes
─────────────────────────────────────────────────────
1. Off-chain AAA fitting ~10⁻¹⁴ Dominated by algorithm tolerance
2. Coefficient quantization ~10⁻¹⁸ Negligible (WAD has 18 digits)
3. On-chain Horner rounding ~10⁻¹⁵ ~0.5 ULP per operation
4. WAD representation 10⁻¹⁸ Hard floor
─────────────────────────────────────────────────────
Current achieved: 3.35×10⁻⁹ CDF
Theoretical floor: ~10⁻¹⁵
The precision floor (~10⁻¹⁵) comes from accumulated rounding in Horner evaluation: each of the ~11 iterations loses approximately 0.5 ULP (unit in last place), totaling 5-10 ULP. In practical terms, the approximation method is not the bottleneck here—the fixed-point evaluation is.
Once you accept fixed-point arithmetic and u256 as your playing field, the next question is how to approximate Φ and Φ⁻¹ themselves.
Several approaches exist for approximating Φ and Φ⁻¹:
Method | Era | Pros | Cons |
|---|---|---|---|
Taylor series | 1700s | Simple, well-understood | Slow convergence, many terms |
Padé approximation | 1890s | Better than Taylor for same degree | Non-trivial to compute |
Abramowitz-Stegun | 1964 | Battle-tested, industry standard | Requires exp(), fixed formulas |
Chebyshev polynomials | 1960s | Near-optimal for polynomials | Limited to polynomial (not rational) |
AAA (Adaptive Antoulas-Anderson) | 2018 | Near-optimal rational, automatic | Requires offline toolchain |
GaussianMove uses AAA because it produces near-optimal rational approximations automatically, avoiding the need to hand-tune coefficients for each function.
AAA (Nakatsukasa et al., 2018) is a greedy algorithm that iteratively builds a barycentric rational approximation:
r(x) = Σⱼ (wⱼ × fⱼ) / (x - zⱼ)
─────────────────────────
Σⱼ wⱼ / (x - zⱼ)
Where:
zⱼ (nodes): Sample points chosen adaptively
wⱼ (weights): Barycentric weights computed by the algorithm
fⱼ (values): Function values at nodes (from high-precision baseline)
The algorithm's key property: it produces approximations close to the theoretical best rational function of a given degree, without requiring manual coefficient tuning. In practice, this lets us treat “pick a good rational approximation” as a design-time task handled by Python, not an on-chain concern.
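The barycentric form itself can be exercised in a few lines of Python. The sketch below interpolates Φ at Chebyshev nodes using classical polynomial barycentric weights, a hand-rolled stand-in for AAA's adaptively chosen nodes and weights, with `statistics.NormalDist` as the high-precision baseline:

```python
import math
from statistics import NormalDist

cdf = NormalDist().cdf
n = 12
# Chebyshev nodes on [0, 3] (AAA would choose nodes adaptively instead)
z = [1.5 + 1.5 * math.cos(math.pi * (2 * j + 1) / (2 * n)) for j in range(n)]
f = [cdf(zj) for zj in z]
# Polynomial barycentric weights: w_j = 1 / prod_{k != j} (z_j - z_k)
w = []
for j in range(n):
    prod = 1.0
    for k in range(n):
        if k != j:
            prod *= z[j] - z[k]
    w.append(1.0 / prod)

def r(x):
    """Evaluate r(x) = sum(w f / (x - z)) / sum(w / (x - z))."""
    num = den = 0.0
    for zj, wj, fj in zip(z, w, f):
        if abs(x - zj) < 1e-13:
            return fj  # at a node the barycentric form interpolates exactly
        t = wj / (x - zj)
        num += t * fj
        den += t
    return num / den

max_err = max(abs(r(0.1 * i) - cdf(0.1 * i)) for i in range(31))
print(f"max interpolation error on [0, 3]: {max_err:.2e}")
```

Even this non-adaptive stand-in gets well below 10⁻⁶ with 12 nodes; AAA's greedy node selection and rational (rather than polynomial) form is what pushes the fit toward near-optimality.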
For on-chain evaluation, we convert the barycentric form to explicit polynomials:
r(x) = P(x) / Q(x) = (p₀ + p₁x + p₂x² + ... + pₙxⁿ) / (q₀ + q₁x + ... + qₘxᵐ)
This conversion happens offline. The Move code only evaluates P(x) and Q(x) using Horner's method, then computes their ratio.
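Clearing denominators gives P(x) = Σⱼ wⱼ fⱼ ℓⱼ(x) and Q(x) = Σⱼ wⱼ ℓⱼ(x), where ℓⱼ(x) = Πₖ≠ⱼ (x − zₖ). A stdlib-only sketch of that conversion, checked against the barycentric form on toy data (the nodes, values, and weights here are arbitrary illustrations, not AAA output):

```python
def poly_mul(a, b):
    """Multiply coefficient lists (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def bary_to_pq(z, f, w):
    """Convert barycentric nodes/values/weights to P, Q coefficients."""
    n = len(z)
    P = [0.0] * n
    Q = [0.0] * n
    for j in range(n):
        ell = [1.0]                       # l_j(x) = prod_{k != j} (x - z_k)
        for k in range(n):
            if k != j:
                ell = poly_mul(ell, [-z[k], 1.0])
        for d in range(n):
            P[d] += w[j] * f[j] * ell[d]
            Q[d] += w[j] * ell[d]
    return P, Q

def horner(coeffs, x):
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Toy data (NOT real AAA output), purely to check the algebra
z = [0.0, 1.0, 2.0, 3.0]
f = [0.5, 0.84, 0.98, 0.999]
w = [1.0, -3.0, 3.0, -1.0]
P, Q = bary_to_pq(z, f, w)

def bary(x):
    num = sum(wj * fj / (x - zj) for zj, fj, wj in zip(z, f, w))
    den = sum(wj / (x - zj) for zj, wj in zip(z, w))
    return num / den

x = 1.37
diff = abs(horner(P, x) / horner(Q, x) - bary(x))
print(f"|P/Q - barycentric| at x = {x}: {diff:.2e}")
```

The two forms agree to floating-point precision; the production pipeline then quantizes the P and Q coefficients to WAD integers before export.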
The Morpheus PM-AMM on Aptos uses a different approach: Abramowitz-Stegun (1964) for CDF and Acklam (2000) for inverse CDF, with Newton-Raphson refinement.
Aspect | Morpheus (Aptos) | GaussianMove (Sui) |
|---|---|---|
CDF algorithm | Abramowitz-Stegun polynomial | AAA rational approximation |
CDF raw error | ~7.5×10⁻⁸ | ~3.35×10⁻⁹ (10× better) |
PPF algorithm | Acklam + Newton refinement | AAA piecewise + optional Newton |
Requires exp()? | Yes (for PDF, Newton) | Only for Newton refinement |
Code complexity | High (piecewise, multi-algorithm) | Medium (unified pipeline) |
Both approaches are valid. Morpheus prioritizes maximum precision with proven classical formulas; GaussianMove prioritizes a unified, auditable pipeline with modern approximation theory.
The inverse CDF presents special challenges because its derivative approaches infinity as p → 0 or p → 1. A single rational approximation cannot maintain accuracy across the full domain.
GaussianMove uses three regions:
Region 1: Central (0.02 ≤ p ≤ 0.98)
Direct AAA approximation of Φ⁻¹(p)
Expected error: ~3.11×10⁻¹³
Region 2: Lower tail (10⁻¹⁰ ≤ p < 0.02)
Transform: t = √(-2 ln(p))
Approximate Φ⁻¹ as function of t
Expected error: ~2.03×10⁻¹³
Region 3: Upper tail (0.98 < p ≤ 1 - 10⁻¹⁰)
Symmetry: Φ⁻¹(p) = -Φ⁻¹(1-p)
Reuses lower tail approximation
This piecewise approach mirrors classical algorithms (Wichura AS 241, Moro/Jäckel) but fits coefficients via AAA rather than reusing floating-point polynomials.
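The region dispatch can be sketched in Python, with `statistics.NormalDist.inv_cdf` standing in for the fitted AAA pieces (in GaussianMove, the `central` and `lower_tail` bodies would be Horner-evaluated rationals in WAD arithmetic):

```python
import math
from statistics import NormalDist

inv = NormalDist().inv_cdf
EPS = 1e-10  # domain clamp: p must lie within (EPS, 1 - EPS)

def central(p):
    # Stand-in for the direct AAA fit on [0.02, 0.98]
    return inv(p)

def lower_tail(p):
    t = math.sqrt(-2.0 * math.log(p))  # tail variable the real fit uses
    # Stand-in: the library evaluates a rational approximation in t here
    return inv(p)

def ppf(p):
    if not (EPS <= p <= 1.0 - EPS):
        raise ValueError("p outside (EPS, 1 - EPS)")  # mirrors the Move abort
    if p < 0.02:
        return lower_tail(p)             # Region 2
    if p > 0.98:
        return -lower_tail(1.0 - p)      # Region 3 via symmetry
    return central(p)                    # Region 1

print(ppf(0.5), ppf(0.001), ppf(0.999))
```

Note that Region 3 never needs its own coefficient set: the symmetry Φ⁻¹(p) = −Φ⁻¹(1−p) reuses the lower-tail fit exactly.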
At this point, we have a clear picture of the constraints (fixed-point, randomness, error amplification) and a modern approximation strategy (AAA + piecewise design). Next we need to quantify how much error is actually left and how it propagates into financial models.
Error bounds link approximation theory to practical finance. If the approximation error is many orders of magnitude smaller than model uncertainty and oracle noise, then the approximation itself contributes negligibly to the overall error budget of a DeFi application.
From GaussianMove v0.9.0 coefficient generation:
Function | Max Absolute Error | Notes |
|---|---|---|
Φ (CDF) | 3.35×10⁻⁹ | WAD output ≤ 3.35×10⁹ raw units |
φ (PDF) | 7.61×10⁻¹⁵ | Negligible vs WAD quantization |
Φ⁻¹ central | 3.11×10⁻¹³ | Applies on [0.02, 0.98] |
Φ⁻¹ tail | 2.03×10⁻¹³ | Inputs clamped to (10⁻¹⁰, 1-10⁻¹⁰) |
These errors are validated against mpmath with 100-digit precision baselines.
For VaR calculations:
VaR(α) = μ + σ × Φ⁻¹(α)
With |Φ⁻¹ error| ≤ 3.11×10⁻¹³, the VaR error is:
|VaR error| ≤ σ × 3.11×10⁻¹³
For σ = $10,000 (typical portfolio volatility), this is $3.11×10⁻⁹—far below any practical threshold.
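As a concrete check in Python (with `statistics.NormalDist` as the reference quantile, and μ = 0, σ = $10,000 as in the example above):

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf
mu, sigma = 0.0, 10_000.0   # dollars, matching the example in the text
PPF_ERR = 3.11e-13          # GaussianMove's stated max PPF error

def var(alpha):
    """Parametric Gaussian VaR: mu + sigma * inv_cdf(alpha)."""
    return mu + sigma * inv(alpha)

var99 = var(0.99)              # 99% confidence loss threshold, ~$23,263
worst_case = sigma * PPF_ERR   # max drift an on-chain ppf could introduce
print(f"99% VaR: ${var99:,.2f}, approximation slack: ${worst_case:.2e}")
```

The approximation slack is nano-dollars on a $23k threshold, which is the sense in which the numerical error "disappears" under model uncertainty.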
For Black-Scholes: The Greeks (Delta, Gamma, Vega) involve Φ(d₁) and φ(d₁). With CDF error 3.35×10⁻⁹:
Delta error ≤ 3.35×10⁻⁹ (direct)
Gamma involves φ, error ≤ 7.61×10⁻¹⁵
Vega scales by √T, error remains negligible
Comparison with model uncertainty:
Volatility estimation error: typically 5-20% (10⁻¹ to 10⁰)
Oracle price staleness: seconds to minutes of drift
Approximation error: 10⁻⁹ to 10⁻¹³
Conclusion: Approximation error is 6-10 orders of magnitude smaller than model/oracle uncertainty. For DeFi applications, even 10⁻⁸ precision (solgauss level) is overkill.
Probability inputs are clamped to (ε, 1-ε) with ε = 10⁻¹⁰ × WAD. This corresponds to approximately ±6.3σ—beyond which fixed-point arithmetic cannot meaningfully distinguish probabilities.
Applications requiring fatter tails (e.g., extreme risk modeling) should document this limitation and consider alternative approaches.
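The ±6.3σ figure can be verified off-chain: Φ⁻¹(10⁻¹⁰) ≈ −6.36, so any input clamped at ε = 10⁻¹⁰ lands just inside that bound. A stdlib sketch of the clamp (illustrative; the on-chain `ppf` aborts on out-of-domain input rather than clamping, while sampling paths use the clamped domain):

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf
EPS = 1e-10  # probability-space clamp described above

def clamp_probability(p: float) -> float:
    """Restrict p to the representable domain (EPS, 1 - EPS)."""
    return min(max(p, EPS), 1.0 - EPS)

z_lo = inv(clamp_probability(0.0))  # most extreme reachable quantile
z_hi = inv(clamp_probability(1.0))
print(f"reachable z-range: [{z_lo:.3f}, {z_hi:.3f}]")
```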
With constraints, approximation strategy, and error budgets in place, we can now switch from “math and theory” to “engineering”: how to turn all of this into a reproducible Python→Move pipeline.
Part II describes how GaussianMove turns the previous section’s numerical choices into concrete code. The design-time Python pipeline finds and validates rational approximations; the runtime Move code evaluates them cheaply and deterministically on-chain.
GaussianMove follows a "design-time vs runtime" separation:
┌─────────────────────────────────────────────────────────┐
│ PYTHON (Design-Time / Offline) │
│ ┌─────────────────────────────────────────────────┐ │
│ │ 1. Sample function with mpmath (50+ digits) │ │
│ │ 2. Run AAA algorithm (SciPy) │ │
│ │ 3. Convert barycentric → polynomial coeffs │ │
│ │ 4. Quantize to WAD-scaled integers │ │
│ │ 5. Validate accuracy, export to Move │ │
│ └─────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ [coefficient arrays] │
└───────────────────────────┼─────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ MOVE (Runtime / On-Chain) │
│ ┌─────────────────────────────────────────────────┐ │
│ │ 1. Load pre-computed coefficients (constants) │ │
│ │ 2. Evaluate P(x), Q(x) using Horner's method │ │
│ │ 3. Compute P(x) / Q(x) │ │
│ │ 4. Apply domain clamping and output bounds │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Why this matters for auditability: All numerically delicate work (node selection, coefficient fitting, convergence analysis) happens in a Python toolchain that can be inspected, re-run, and validated. The on-chain Move code is a small, predictable evaluation kernel.
Evaluating P(x) = c₀ + c₁x + c₂x² + ... + cₙxⁿ naively requires O(n²) multiplications (computing x², x³, etc.). Horner's method restructures as:
P(x) = c₀ + x(c₁ + x(c₂ + ... + x(cₙ)))
This is O(n)—one multiplication and one addition per term.
/// Horner's method for polynomial evaluation
public fun horner_eval(x: u256, coeffs: &vector<u256>): u256 {
let n = vector::length(coeffs);
if (n == 0) { return 0 };
// Start with highest-degree coefficient
let mut result = *vector::borrow(coeffs, n - 1);
// Work backwards: result = result × x + coeffs[i]
let mut i = n - 1;
while (i > 0) {
i = i - 1;
result = mul_wad(result, x);
result = result + *vector::borrow(coeffs, i);
};
result
}
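An off-chain mirror of this kernel in Python (unbounded integers emulate u256; the example polynomial 1 + x + x²/2 is illustrative, not a library coefficient set):

```python
SCALE = 10**18  # WAD

def horner_wad(x: int, coeffs: list[int]) -> int:
    """Evaluate sum(coeffs[i] * x^i) in WAD fixed point, like the Move kernel.
    coeffs are lowest-degree first, WAD-scaled integers."""
    if not coeffs:
        return 0
    result = coeffs[-1]                    # highest-degree coefficient
    for c in reversed(coeffs[:-1]):
        result = result * x // SCALE + c   # mul_wad, then add
    return result

# 1 + x + x^2/2 as WAD coefficients
coeffs = [SCALE, SCALE, SCALE // 2]
value = horner_wad(SCALE, coeffs)  # evaluate at x = 1.0
print(value)                       # 2.5 in WAD
```

Mirrors like this let the Python pipeline replay the exact on-chain rounding when validating coefficient sets, so the reported error bounds account for the fixed-point evaluation, not just the floating-point fit.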
For degree-11 polynomials (typical for GaussianMove), this is 11 iterations—predictable gas cost, numerical stability, and minimal intermediate overflow risk.
The standard normal distribution is symmetric around zero, so Φ⁻¹(p) can be negative. Move lacks native signed integers, so GaussianMove uses a SignedWad type:
public struct SignedWad has copy, drop, store {
magnitude: u256,
negative: bool,
}
Operations track sign explicitly:
public fun add(a: SignedWad, b: SignedWad): SignedWad {
if (a.negative == b.negative) {
// Same sign: add magnitudes
SignedWad { magnitude: a.magnitude + b.magnitude, negative: a.negative }
} else {
// Different signs: subtract smaller from larger
if (a.magnitude >= b.magnitude) {
SignedWad { magnitude: a.magnitude - b.magnitude, negative: a.negative }
} else {
SignedWad { magnitude: b.magnitude - a.magnitude, negative: b.negative }
}
}
}
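The signed-magnitude logic is easy to property-test off-chain; a minimal Python mirror, checked exhaustively on a small grid against plain integer addition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedWad:
    magnitude: int
    negative: bool

def add(a: SignedWad, b: SignedWad) -> SignedWad:
    """Signed-magnitude addition, mirroring the Move logic above."""
    if a.negative == b.negative:
        return SignedWad(a.magnitude + b.magnitude, a.negative)
    if a.magnitude >= b.magnitude:
        return SignedWad(a.magnitude - b.magnitude, a.negative)
    return SignedWad(b.magnitude - a.magnitude, b.negative)

def to_int(s: SignedWad) -> int:
    return -s.magnitude if s.negative else s.magnitude

# Exhaustive check on [-5, 5] x [-5, 5]
for x in range(-5, 6):
    for y in range(-5, 6):
        sx = SignedWad(abs(x), x < 0)
        sy = SignedWad(abs(y), y < 0)
        assert to_int(add(sx, sy)) == x + y
print("signed-magnitude add matches integer add")
```

One subtlety the grid catches: when magnitudes are equal with opposite signs, the result is zero with the sign of the first operand, which `to_int` correctly collapses to 0.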
Strict PPF Domain Validation
The ppf(p) function enforces strict domain validation:
// Aborts with EProbOutOfDomain (302) if p is outside valid domain
let z = core::ppf(0); // Aborts! p=0 is outside (EPS, 1-EPS)
// For sampling, use ppf_from_u64 which maps any u64 to valid domain
let z = core::ppf_from_u64(random_seed); // Always succeeds
Rationale: Explicit failure surfaces invalid inputs immediately rather than silently clamping. For sampling use cases, ppf_from_u64(seed) provides a safe alternative that maps any u64 into the valid domain.
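One way such a seed-to-domain mapping can work is sketched below. This is a hypothetical scheme, not the library's actual `ppf_from_u64` implementation: reduce the u64 seed into the raw WAD interval [EPS, WAD − EPS] by modular arithmetic.

```python
WAD = 10**18
EPS_WAD = 10**8        # 1e-10 expressed in raw WAD units (per the domain clamp)
U64_MAX = 2**64 - 1

def seed_to_p_wad(seed: int) -> int:
    """Map any u64 seed into [EPS_WAD, WAD - EPS_WAD] (hypothetical scheme)."""
    assert 0 <= seed <= U64_MAX
    span = WAD - 2 * EPS_WAD + 1
    return EPS_WAD + seed % span

for s in (0, 1, U64_MAX // 2, U64_MAX):
    p = seed_to_p_wad(s)
    assert EPS_WAD <= p <= WAD - EPS_WAD
print("all seeds map into the valid ppf domain")
```

A plain modulo like this carries a small bias because the span does not divide 2⁶⁴ evenly; a production mapping would need to account for that (e.g. via rejection sampling), which is part of why a dedicated `ppf_from_u64` entry point exists.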
SignedWad Fields
The SignedWad struct uses short field names for ergonomics:
public struct SignedWad has copy, drop, store {
mag: u256, // magnitude (absolute value)
neg: bool, // true if negative
}
Accessor methods (abs(), is_negative()) provide a stable API.
With the mathematical machinery in place, Sui's sui::random completes the picture for sampling:
use sui::random::{Self, Random};
use gaussian::core;
use gaussian::signed_wad::SignedWad;
// Random-consuming functions should be private `entry` (not `public`) per Sui guidelines
entry fun sample_standard_normal(r: &Random, ctx: &mut TxContext): SignedWad {
let mut gen = random::new_generator(r, ctx);
// Generate uniform in (0, 1), avoiding exact 0 or 1
let u = random::generate_u256_in_range(&mut gen, 1, WAD - 1);
// Transform via inverse CDF
core::ppf(u)
}
Single transaction: No VRF callback or second transaction.
Validator consensus security: Randomness derived from distributed validator set, not manipulable by any single party.
Native integration: &Random is a first-class Sui object, passed directly to functions that need it.
This is the one place where Sui provides a genuine platform advantage—but it only matters because the mathematical foundation (accurate Φ⁻¹ approximation) is already in place.
Taken together, the Python pipeline, Horner evaluation, SignedWad representation, and sui::random give us a complete implementation story. The next natural question is: what do you actually do with these primitives?
Part III sketches how Φ and Φ⁻¹ plug into familiar financial formulas. These are not full protocol designs; they are case studies meant to show that once GaussianMove exists, Black–Scholes, VaR, and Gaussian-shaped AMMs can be expressed directly in terms of its API.
The Black-Scholes formula for European call options is:
C = S₀Φ(d₁) - Ke^(-rT)Φ(d₂)
where:
d₁ = [ln(S₀/K) + (r + σ²/2)T] / (σ√T)
d₂ = d₁ - σ√T
Computing d₁ requires:
1. ln(S₀/K) — natural logarithm (transcendental)
2. σ√T — square root (algebraic but irrational)
3. Division and addition — straightforward in fixed-point
Computing the option price additionally requires:
4. Φ(d₁), Φ(d₂) — normal CDF (the hard part)
5. e^(-rT) — exponential (transcendental)
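An off-chain reference for the full formula fits in a few lines of Python, with Φ supplied by the stdlib `statistics.NormalDist`; this is the same parameter set (S=100, K=100, T=1, r=5%, σ=20%) used in the testnet demo later in this section:

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    sqrt_T = math.sqrt(T)
    d1 = (math.log(S / K) + (r + sigma * sigma / 2) * T) / (sigma * sqrt_T)
    d2 = d1 - sigma * sqrt_T
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

call = bs_call(100, 100, 1, 0.05, 0.2)
put = call - 100 + 100 * math.exp(-0.05)   # put from put-call parity
print(f"call = {call:.4f}, put = {put:.4f}")  # ~10.4506 and ~5.5735
```

Reference implementations like this are exactly what the on-chain results are validated against.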
GaussianMove provides the CDF. The companion move-black-scholes package implements the transcendental helpers:
// From black_scholes::d_values
public fun compute_d1(
spot: u256, // S₀ in WAD
strike: u256, // K in WAD
time: u256, // T in WAD (years)
rate: u256, // r in WAD
vol: u256 // σ in WAD
): SignedWad {
// ln(S/K)
let log_moneyness = transcendental::ln_wad(div_wad(spot, strike));
// (r + σ²/2)T
let drift = mul_wad(rate + mul_wad(vol, vol) / 2, time);
// σ√T
let vol_sqrt_t = mul_wad(vol, transcendental::sqrt_wad(time));
// d₁ = (ln(S/K) + drift) / (σ√T)
signed_wad::div(
signed_wad::add(log_moneyness, drift),
vol_sqrt_t
)
}
The transcendental functions (ln_wad, sqrt_wad, exp_wad) are also implemented via polynomial approximation, following the same AAA pipeline philosophy. In practice, this means Black–Scholes-style pricing on Sui becomes a matter of wiring together a few math primitives.
The following transactions demonstrate GaussianMove on testnet, pricing ATM European options (S=$100, K=$100, T=1yr, r=5%, σ=20%):
Transaction 1: Option Pricing
TX: CdAxPyw1T7tF4xMPpfVqVhJMDL4Xy6zeyC24YeQxpjJt
Results: Call=$10.45, Put=$5.57, Put-Call Parity=✓
Transaction 2: Greeks Calculation
TX: 48TFYV87TXRJMUuCzoMZ4T5CLVsFgQoT1fptR2w7NXPv
Results: Δ=0.637, Γ=0.019, ν=37.52, θ=-6.41, ρ=53.23
Comparison with scipy reference:
Metric | On-Chain Result | scipy Reference | Error |
|---|---|---|---|
Call Price | $10.4506 | $10.4506 | <0.01% |
Put Price | $5.5735 | $5.5735 | <0.01% |
Delta | 0.6368 | 0.6368 | <0.01% |
Gamma | 0.0188 | 0.0188 | <0.01% |
These results validate that GaussianMove's ~10⁻⁹ CDF error propagates to <0.01% pricing error in Black-Scholes applications.
Part IV places GaussianMove alongside existing Gaussian and options libraries in EVM and other ecosystems. The goal is not to declare a winner, but to show where a Sui-native, AAA-based library fits on the accuracy/gas/complexity frontier.
The EVM ecosystem has seen two waves of Gaussian implementations:
First wave (2014-2022): Production-driven
errcw/gaussian (2014): JavaScript reference
primitivefinance/solstat (2022): First DeFi library, ~10⁻⁷ error, ~5,000 gas
Second wave (2024): Research-driven
GA006/gaussian-cdf: Zelen-Severo polynomial, ~10⁻⁸ error
0xknxwledge/DegeGauss: ABDK 128-bit float, ~10⁻¹⁶ error, ~53,000 gas
cairoeth/solgauss: Rational Chebyshev, ~10⁻⁸ error, ~600 gas, includes PPF
Key insight: solgauss achieves the best gas/accuracy Pareto frontier via rational approximation—the same approach GaussianMove uses, but with AAA instead of hand-tuned Chebyshev coefficients.
Library | Algorithm | CDF Error | PPF? | Notes |
|---|---|---|---|---|
solstat | Abramowitz-Stegun | 10⁻⁷ | No | Uses exp(), high gas |
solgauss | Rational Chebyshev | 10⁻⁸ | Yes | No exp() for CDF |
DegeGauss | ABDK 128-bit float | 10⁻¹⁶ | No | Extreme precision, extreme gas |
Morpheus | A-S + Acklam + Newton | 10⁻¹⁵ | Yes | Most complete, most complex |
GaussianMove | AAA rational | 3.35×10⁻⁹ | Yes | v0.9.0, 399 tests, sui::random |
GaussianMove occupies a distinct position: better accuracy than solgauss, lower complexity than Morpheus, and native randomness integration that no EVM library can match.
We started with a simple question: how do you compute Φ and Φ⁻¹ on a chain that only speaks integers? GaussianMove's answer is to treat this as an approximation-theory problem first, and an engineering problem second.
Concretely, GaussianMove:
Uses modern AAA rational approximation to generate near-optimal fits for Φ and Φ⁻¹ offline.
Enforces explicit error budgets (CDF 3.35×10⁻⁹, PPF 3.11×10⁻¹³) validated against high-precision baselines.
Wraps everything in an auditable Python→Move pipeline and a small, deterministic on-chain evaluation kernel.
Leverages Sui's native randomness so Gaussian sampling fits cleanly into single-transaction flows.
Whether or not DeFi fully embraces on-chain Gaussian math, it is useful to have a transparent implementation available.
Nakatsukasa, Y., Sète, O., & Trefethen, L. N. (2018). The AAA algorithm for rational approximation. SIAM J. Sci. Comput., 40(3), A1494-A1522. arXiv:1612.00337
Nakatsukasa, Y., & Trefethen, L. N. (2023). The first five years of the AAA algorithm. arXiv:2312.03565.
Nakatsukasa, Y., & Trefethen, L. N. (2025). Applications of AAA rational approximation. Acta Numerica. arXiv:2510.16237.
Driscoll, T. A., Nakatsukasa, Y., & Trefethen, L. N. (2024). AAA Rational Approximation on a Continuum. SIAM J. Sci. Comput., 46(2), A929-A952.
Abramowitz, M., & Stegun, I. A. (1964). Handbook of Mathematical Functions. Ch. 26.2.17.
Wichura, M. J. (1988). Algorithm AS 241: The Percentage Points of the Normal Distribution. Applied Statistics, 37(3), 477-484.
Cody, W. J. (1969). Rational Chebyshev Approximations for the Error Function. Mathematics of Computation, 23(107).
Acklam, P. J. (2000). An algorithm for computing the inverse normal cumulative distribution function. (See also John D. Cook's literate program)
Koopman, R. (2025). Some Simple Full-Range Inverse-Normal Approximations. Journal of Numerical Analysis and Approximation Theory, 54(1).
Graillat, S., et al. (2024). Accurate Horner methods in real and complex floating-point arithmetic. BIT Numerical Mathematics, 64, article 17.
Graillat, S., Langlois, P., & Louvet, N. (2009). Algorithms for accurate, validated and fast polynomial evaluation. Japan Journal of Industrial and Applied Mathematics, 26(2), 191-214.
RareSkills. (2024). Fixed Point Arithmetic in Solidity. (Comprehensive tutorial on WAD/RAY standards)
PRBMath: Solidity library for advanced fixed-point math.
ds-math: Original DappHub WAD/RAY implementation.
brine-fp: Fixed-point math library with logarithmic and exponential functions for blockchain.
primitivefinance/solstat: First production DeFi Gaussian library (Primitive RMM-01).
cairoeth/solgauss: Rational Chebyshev approximation, most complete API.
0xknxwledge/DegeGauss: ABDK 128-bit floating-point approach.
simontianx/OnChainRNG/GaussianRNG: CLT-based Gaussian approximation.
araghava/cairo-black-scholes: Black-Scholes on StarkNet.
opynfinance/BlackScholes: Opyn's Black-Scholes implementation.
GaussianMove (move-gaussian) — v0.9.0, Package: 0x66f9087a3d9ae3fe07a5f3c1475d503f1b0ea508d3b83b73b0b8637b57629f7f
BlackScholes (move-black-scholes) — v0.2.0, Package: 0x1637ddc0495a8833ebd580224dad7154dfb33477f73d2c7fb41e2b350efa55b3
Evans, A., Angeris, G., & Chitra, T. (2021). Introducing Primitive RMM-01. Primitive Finance.
Sterrett, E., & Jepsen, W. (2022). Replicating Portfolios: Constructing Permissionless Derivatives. arXiv:2205.09890.
RMM Primer: Friendly guide to Primitive.
Moallemi, C., & Robinson, D. (2024). pm-AMM: A Uniform AMM for Prediction Markets. Paradigm.
White, D. (2024). Distribution Markets. Paradigm.
Chainrisk. (2024). VaR Methodology for DeFi.
Gauntlet. (2023). Improved VaR Methodology.
GARP. (2024). Digital-Asset Risk Management: VaR Meets Cryptocurrencies.
Aufiero, S., et al. (2025). Mapping Microscopic and Systemic Risks in TradFi and DeFi. arXiv:2508.12007.
KernelDAO. (2024). Traditional vs DeFi Risk Management: A Quantitative Comparison.
Polygon. (2022). Black Scholes Merton Model to Price DeFi Options.
Chainlink. (2020). Build a DeFi Call Option Exchange With Chainlink Price Feeds.
Auctus. (2020). ACO Black-Scholes: A Pooled Liquidity Model for Options Powered by Chainlink.
Panoptic. (2024). How to Price Perpetual Options: Five Models Compared.