There is a growing industry forming around a comforting idea: that the central risk of generative AI in education is bias; that if we can make AI outputs more inclusive, more representative, more "fair," then we will have made these systems safe for children. This framing is not malicious. It is simply incomplete. Bias is a surface phenomenon. Sovereignty is the substrate. What is arriving in education right now is not just a new content engine. It is a new interpretive layer between a child ...