OpenAI’s recent policy note — that conversations which indicate imminent risk of physical harm to others can be routed to human review and, in rare cases, to law enforcement — is not primarily a technical announcement. It’s a political one. It announces, in practice, that private symbolic exchange inside an AI becomes legible to institutional systems when those systems decide the charge is too high. This is not about fear-mongering. It’s about the redistribution of symbolic mass.

What actually ju...