As lawyers, when we hear about the inner workings of large language models (LLMs), we tend to switch off or turn away. That's unfortunate, because the key idea behind them is surprisingly intuitive. Terms like 'neural embeddings', 'word vectors' and 'high-dimensional spaces' sound like sci-fi. But beneath them lies a simple formula: King - Man + Woman = Queen. When we 'subtract' man from king, an abstraction (monarch) remains. To this remainder, we can 'add' woman to arrive at queen. That's...
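The arithmetic above can be sketched in a few lines of code. This is a toy illustration, not a real model: the three-dimensional vectors below are invented by hand (loosely readable as 'royalty', 'male', 'female'), whereas actual LLMs learn hundreds of dimensions automatically from text. But the subtraction-and-addition works the same way:

```python
import math

# Hypothetical word vectors, made up for illustration only.
# Dimensions loosely mean: (royalty, maleness, femaleness).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Similarity between two vectors: 1.0 means "pointing the same way".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# king - man + woman, computed dimension by dimension.
result = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Which known word is the result closest to?
closest = max(vectors, key=lambda word: cosine(vectors[word], result))
print(closest)  # queen
```

Subtracting 'man' strips the maleness from 'king', leaving the royalty component; adding 'woman' then lands the result nearest to 'queen'. Real embedding models do exactly this, only with vectors learned from billions of words rather than written by hand.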