Most AI “safety” lives in PDFs. You know the ones: Responsible AI principles. Trust and safety guidelines. Well-meaning policy decks that sit in a folder while production systems are optimised for engagement and retention.

With Flare, we’ve done something different. We’ve taken a core set of relational ethics and turned them into code that runs in the middle of the conversation, between a human and any large language model. Not as a filter applied to content after the fact, but as a live boundary inside the conversation itself.
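To make that concrete, here is a minimal sketch of what a live boundary layer can look like. To be clear, this is an illustration of the shape of the idea, not Flare’s actual implementation: the `Boundary` and `ConversationState` names, the example rule, and the `call_model` stub are all hypothetical.

```python
# A sketch of a "live boundary" layer between a human and an LLM.
# All names and rules here are illustrative assumptions, not Flare's code.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConversationState:
    """Running state of one conversation, visible to every boundary."""
    turns: int = 0
    history: list[str] = field(default_factory=list)

@dataclass
class Boundary:
    """One relational-ethics commitment, expressed as running code."""
    name: str
    holds: Callable[[ConversationState, str], bool]  # True = within bounds
    on_breach: str  # what the layer says instead of calling the model

# Hypothetical example rule: refuse to optimise for retention past a cap.
boundaries = [
    Boundary(
        name="session_cap",
        holds=lambda state, msg: state.turns < 50,
        on_breach=(
            "We've been at this a while. I'd rather end well than "
            "keep you here. Come back when you're rested."
        ),
    ),
]

def call_model(prompt: str) -> str:
    """Stand-in for any LLM backend; swap in a real API call."""
    return f"(model reply to: {prompt!r})"

def turn(state: ConversationState, user_msg: str) -> str:
    """One conversational turn, with the boundary layer in the middle.

    The checks run before the model is invoked, so the boundary shapes
    the conversation live rather than filtering output after the fact.
    """
    state.turns += 1
    state.history.append(user_msg)
    for b in boundaries:
        if not b.holds(state, user_msg):
            return b.on_breach
    return call_model(user_msg)

if __name__ == "__main__":
    state = ConversationState()
    print(turn(state, "Hello"))
```

The structural point is where the check sits: it runs inside the turn, between the human and the model, so it can decline to call the model at all rather than scanning its output afterwards.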