<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Invari</title>
        <link>https://paragraph.com/@invari</link>
        <description>Invari publishes essays under a pseudonymous byline and takes on a small number of private engagements.</description>
        <lastBuildDate>Tue, 28 Apr 2026 22:33:16 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Invari</title>
            <url>https://storage.googleapis.com/papyrus_images/abbbb70f4520d35d97177dad232efc6bb610a1955a10c5cf264f5712e7dee844.jpg</url>
            <link>https://paragraph.com/@invari</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[AI Governance is a Fiduciary Problem]]></title>
            <link>https://paragraph.com/@invari/ai-governance-is-a-fiduciary-problem</link>
            <guid isPermaLink="false">FOeD90sz0Eo8mDDN19SW</guid>
            <pubDate>Mon, 27 Apr 2026 21:23:35 GMT</pubDate>
            <description><![CDATA[AI governance is being built in the wrong language.

Compliance lawyers treat it as a regulatory problem. Safety researchers treat it as a technical one. Both miss the category that already exists for power exercised over another's interests: fiduciary law.

invari.xyz]]></description>
            <content:encoded><![CDATA[<p>AI governance is being built in the wrong language. The dominant vocabulary is the vocabulary of compliance: classification, documentation, conformity assessment, risk management, transparency, oversight, monitoring. The second vocabulary is the vocabulary of technical safety: alignment, red-teaming, evaluations, capability thresholds, safeguards, system cards. Both are necessary. Neither is enough.</p><p>Compliance lawyers ask: what does the law require? That is a sensible question inside a regulated institution, but it is not the first question of governance. The EU AI Act is the clearest example of the compliance frame. It sorts AI systems through a risk-based structure, prohibits certain practices, imposes requirements on high-risk systems, allocates obligations across providers and deployers, and requires machinery such as risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, cyber security and post-market monitoring (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689"><strong>EU AI Act</strong></a>). That is an impressive legislative achievement. It is also an answer to a narrower question than the one consequential AI deployments actually pose.</p><p>The NIST AI Risk Management Framework makes the same move in softer form: it is voluntary, non-sector-specific and organised around Govern, Map, Measure and Manage, with trustworthiness expressed through characteristics such as validity, safety, accountability, transparency, explainability, privacy and fairness (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf"><strong>NIST AI RMF 1.0</strong></a>). It is serious, practical and careful. It still says, in effect, that the problem is risk: identify it, measure it where possible, manage it, document what remains.</p><p>That frame has value. It forces organisations to inventory systems, assign roles, test performance, monitor drift and write things down. It gives regulators a handle and procurement teams a checklist. It gives boards and compliance departments something administrative. But it also risks reducing governance to institutional hygiene. The danger is not that compliance frameworks are empty. The danger is that they are too full of process and too thin on obligation.</p><p>The compliance frame is especially weak where the legal question is not simply whether a system met a pre-market standard, but whether someone with power over another person’s interests exercised that power properly. A high-risk classification does not tell us whether an investment adviser using an AI allocation engine has acted loyally towards a client. A technical documentation file does not tell us whether a trustee who relied on an opaque model has exercised independent judgment. A human oversight control does not tell us whether the human is competent, conflicted, informed or merely decorative. Compliance can specify procedures. It cannot, on its own, supply the relational grammar of duty.</p><p>The safety community has different instincts. It treats AI governance as a problem of model behaviour, system capability and failure mode. 
OpenAI describes external red-teaming as a way to discover novel risks, stress-test mitigations, enrich safety metrics, create evaluations and strengthen risk assessments, while also acknowledging limits such as cost, information hazards, participant harm and the fact that red-team results may not reflect production behaviour over time (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://cdn.openai.com/papers/openais-approach-to-external-red-teaming.pdf"><strong>OpenAI</strong></a>). Anthropic’s Responsible Scaling Policy is a voluntary framework for managing catastrophic risks, built around capability thresholds, AI Safety Levels, safeguards, risk reports and external review in defined circumstances (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.anthropic.com/news/responsible-scaling-policy-v3"><strong>Anthropic</strong></a>). These are not unserious efforts. They are among the more disciplined attempts to govern frontier systems before disaster supplies the doctrine.</p><p>Yet the safety frame has its own narrowing effect. It is most fluent when the problem can be expressed as a capability, a benchmark, a threat model, a misuse pathway or an alignment failure. It is less fluent when the question is who may properly exercise judgment for whom. A model may pass an evaluation and still be embedded in a disloyal institutional design. A system may refuse bioweapon prompts and still steer a pension beneficiary into products that serve the sponsor. A triage model may be accurate in aggregate and still be used by an insurer in a way that quietly converts clinical judgment into cost containment. Safety asks whether the machine will do the dangerous thing. Governance must also ask whether the person deploying the machine was entitled to ask it to decide at all.</p><hr><p>This is the missing category. Consequential AI is not only a regulatory problem and not only a technical problem. It is a fiduciary problem.</p><p>The fiduciary question begins where both dominant frames tend to become awkward: who owes duties to whom when decision-making power is mediated by a system that is owned by one party, designed by another, deployed by a third and experienced by a fourth.</p><p>Investment management shows the point cleanly. Registered investment advisers using AI do not step outside fiduciary law because a model enters the workflow. Under the adviser frame, the duties of loyalty and care still require the adviser not to put its own interests ahead of the client’s, to disclose material facts and conflicts, to understand the client’s objectives, to monitor suitability and to remain responsible for decisions made by its AI-based programme (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://corpgov.law.harvard.edu/2020/06/11/investment-advisers-fiduciary-duties-the-use-of-artificial-intelligence/"><strong>Harvard Law School Forum on Corporate Governance</strong></a>). The AI-specific questions are not exotic. What exactly is the system doing? What risks does it introduce? When will the adviser override it? How much human involvement is there?
Does the system’s inference from behavioural data conflict with the client’s stated objectives? Is the adviser periodically testing whether the system remains within expected parameters? Those are fiduciary questions before they are AI questions.</p><p>The SEC’s first AI-washing enforcement actions against investment advisers make the same point in enforcement language. Delphia and Global Predictions settled charges that they made false and misleading statements about their claimed use of AI in investment processes, and the SEC ordered total civil penalties of $400,000 (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.sec.gov/newsroom/press-releases/2024-36"><strong>SEC</strong></a>). Gary Gensler’s statement was framed as investor protection: advisers should not mislead the public by saying they use an AI model when they do not (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.sec.gov/newsroom/press-releases/2024-36"><strong>SEC</strong></a>). That is not merely a truth-in-advertising concern. It is a loyalty concern. If an adviser sells the aura of machine intelligence to attract trust, while the actual process is something else, the client’s dependence has been exploited.</p><p>Legal advice is the familiar warning label. In Mata v Avianca, lawyers submitted filings containing fabricated authorities generated by ChatGPT, and the court sanctioned them for failing to verify the cases before relying on them (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.acc.com/resource-library/practical-lessons-attorney-ai-missteps-mata-v-avianca"><strong>Association of Corporate Counsel</strong></a>). The point is not that lawyers must avoid AI. The point is that professional judgment cannot be outsourced to a system that cannot owe candour to the court, loyalty to the client or competence in the professional sense. The lawyer may use tools. The lawyer remains the fiduciary and officer of the court.</p><p>Medical triage and utilisation review expose the same structure under higher stakes. California’s Physicians Make Decisions Act, effective 1 January 2025, requires any denial, delay or modification of care based on medical necessity to be reviewed and decided by a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://sd13.senate.ca.gov/news/press-release/december-9-2024/landmark-law-prohibits-health-insurance-companies-using-ai-to"><strong>California State Senate</strong></a>). That is usually described as a healthcare AI rule. More deeply, it is an anti-abdication rule. It says that where an institutional decision affects access to care, the responsible professional decision cannot be collapsed into an algorithmic output. It is not enough that there is a human somewhere in the loop. The human must be the right kind of human, exercising the right kind of judgment, over the right kind of decision.</p><p>Director and trustee decision-making bring the point back to institutional governance. 
Directors often rely on experts, officers, employees and advisers, and Delaware law recognises protected reliance where directors act in good faith on information or advice from persons reasonably believed to be within their professional or expert competence and selected with reasonable care (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.rlf.com/reinterpreting-section-141e-of-delawares-general-corporation-law-why-interested-directors-should-be-fully-protected-in-relying-on-expert-advice/"><strong>Richards, Layton &amp; Finger</strong></a>). But reliance is not abdication. In Smith v Van Gorkom, the Delaware Supreme Court stated that whether a business judgment is informed turns on whether directors informed themselves of all material information reasonably available before making the decision (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://law.justia.com/cases/delaware/supreme-court/1985/488-a-2d-858-4.html"><strong>Justia</strong></a>). The rule is old, but the AI version is immediate: a board that accepts a model’s acquisition analysis, credit forecast, litigation strategy or capital allocation recommendation without understanding its basis has not discovered a new form of judgment. It has discovered a new way to fail at judgment.</p><p>These examples are not edge cases. They are the ordinary settings in which AI is already being absorbed: money, law, medicine, corporate administration, trust administration, insurance, credit and public services. They all share a common architecture. One person or institution has discretionary power. Another person is exposed to the consequences. The exposed person cannot realistically monitor the decision process. The intermediary claims expertise, or at least control. That is fiduciary territory.</p><hr><p>Fiduciary doctrine is useful here because it has spent centuries refusing to be impressed by the glamour of delegation.</p><p>The starting point is not “AI ethics”. It is care, loyalty, prudence, conflict, disclosure, delegation, supervision and accountability. These are not decorative concepts. They are operational design principles for systems in which one party is entrusted with power over another’s interests.</p><p>The duty of care asks whether the decision-maker acted with the care, skill and diligence appropriate to the role and circumstances. In the trustee context, the Trustee Act 2000 created a statutory duty of care requiring trustees to exercise such care and skill as is reasonable in the circumstances, taking account of special knowledge, experience or professional status (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.legislation.gov.uk/ukpga/2000/29/notes/division/5/1"><strong>Trustee Act 2000 Explanatory Notes</strong></a>).
The explanatory notes make the old point in plain terms: in investment matters, the duty reflects the standard of the ordinary prudent man of business, adjusted for the trustee’s actual expertise (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://www.legislation.gov.uk/ukpga/2000/29/notes/division/5/1"><strong>Trustee Act 2000 Explanatory Notes</strong></a>). The professional trustee is not judged as a beekeeper. The investment banker trustee is not judged as an amateur.</p><p>That matters for AI. The standard should not be flattened into a universal instruction to “maintain human oversight”. A sophisticated asset manager, corporate trustee, insurer, hospital, law firm or board cannot say that AI was too complex to understand. Complexity may justify delegation, but it also raises the standard of selection, instruction and monitoring. The more the institution holds itself out as expert, the less patience fiduciary law should have for vague statements that the model was proprietary, probabilistic or difficult to explain.</p><p>The duty of loyalty supplies the sharper edge. Care asks whether the fiduciary was competent. Loyalty asks whether the fiduciary was faithful. AI systems make loyalty problems easier to hide because conflicts can be embedded in optimisation targets, training data, interface design, ranking logic and commercial incentives. A recommendation engine may not intend self-dealing. It does not need to intend anything. If the system is built to prefer products, pathways or decisions that benefit the deployer at the expense of the person relying on the service, the disloyalty sits in the architecture.</p><p>This is where technical safety language often misfires. A system can be accurate and disloyal. It can be explainable and conflicted. It can be well documented and still optimised against the person whose interests the fiduciary is meant to protect. Fiduciary law knows this pattern. The oldest loyalty cases are not about whether the fiduciary made a clever decision. They are about whether the fiduciary was permitted to be in that position of conflict at all.</p><p>The prudent person standard is also more adaptable than AI exceptionalism assumes. Trust investment law did not freeze in the age of consols and landed security. It absorbed professional investment management, portfolio theory, diversification, custodians, nominees and discretionary managers. The law moved from suspicion of delegation towards a more practical principle: delegation may be prudent, and sometimes failure to delegate may itself be imprudent. John Langbein’s account of the shift from the Restatement Second’s nondelegation rule to the Restatement Third’s prudent investor approach captures the change: the older rule said a trustee could not properly delegate the selection of investments, while the newer approach approves delegation and imposes a duty to consider whether and how to delegate investment functions (<a target="_blank" rel="nofollow noopener" class="dont-break-out reset interactable cursor-pointer decoration-1 underline-offset-1 text-super hover:underline" href="https://openyls.law.yale.edu/entities/publication/36cda63c-52ad-4061-8821-ceb1edb04598"><strong>Yale Law School</strong></a>).</p><p>That is the strongest analogy for AI governance. Not because an AI system is just another investment manager. It is not. 
The analogy matters because trust law has already confronted the central governance problem: a fiduciary with ultimate responsibility must act through imperfect agents in a world too complex for personal execution of every task.</p><p>The answer was not to ban delegation. Nor was it to let trustees shrug and point at the delegate. The answer was disciplined delegation. Select the agent with care. Define the scope and terms of the delegation. Give the agent the information and constraints needed to perform the function. Monitor performance. Review whether the delegation remains suitable. Retain the core responsibility that cannot be handed away.</p><p>That structure maps cleanly onto consequential AI use. Select the system with care. Define the permitted domain of use. State the objective in fiduciary terms, not merely performance terms. Identify conflicts in the provider’s incentives and the deployer’s incentives. Test the system against the actual population and context in which it will operate. Monitor drift and error. Require escalation where outputs affect protected interests, trust assets, legal rights, medical access, creditworthiness or corporate decisions. Record enough to permit explanation and challenge. Decide in advance what cannot be delegated.</p><p>The last point matters most. Trust law has always distinguished assistance from surrender. Trustees may employ agents, nominees, custodians and advisers, but the administration of the trust is not simply handed over. Directors may rely on experts, but the board must still make the decision. Lawyers may use research tools, but they must still verify authorities and exercise professional judgment. Doctors may use diagnostic systems, but they must still practise medicine. The doctrine does not romanticise unaided human judgment. It disciplines mediated judgment.</p><p>AI governance needs that discipline. Current frameworks often speak of accountability as if it can be installed through roles, logs and audit trails. Those things help, but they are not accountability. Accountability is a theory of answerability. Fiduciary law supplies one: the person entrusted with power must answer to the person whose interests are subject to that power. It also supplies a theory of non-excuse: the fiduciary does not escape because the task was difficult, because an agent was used, because a market practice existed, or because the tool behaved unexpectedly. The question is whether the fiduciary acted prudently and loyally in using it.</p><p>That is why fiduciary doctrine is a better starting point than general regulation or technical safety for high-consequence deployments. The EU AI Act can tell us what obligations attach to a high-risk system. The NIST framework can tell us how to structure an internal risk programme. Red-teaming can tell us how a model behaves under adversarial pressure. Fiduciary analysis asks the more basic governance question: should this person have been allowed to put that system between themselves and their duty?</p><hr><p>The analogy has limits. They should be stated plainly.</p><p>AI is not a legal agent in the ordinary sense. It cannot owe loyalty. It cannot understand a beneficiary’s interests. It cannot be examined as a witness in any meaningful moral sense. It cannot form intentions, accept office, exercise conscience or be shamed by breach.
Calling it an “agent” is often useful operational shorthand, but it can mislead if imported too literally into law.</p><p>Fiduciary doctrine also assumes a human or institutional duty-bearer capable of judgment. AI unsettles that assumption. A trustee who delegates to a discretionary manager can ask who the manager is, what authority they have, what incentives govern them and what professional standards apply. A deployer using a general-purpose model may face a chain of strangers: foundation model provider, fine-tuning vendor, application layer, data supplier, integration consultant, internal business owner, human reviewer and end user. Accountability can fragment before the harmed person even knows which system acted.</p><p>There is also a difference between imperfect human agents and systems that generate behaviour rather than merely execute instructions. A human delegate can misunderstand, act dishonestly, make a mistake or exceed authority. An AI system can produce outputs that emerge from training data, prompts, weights, retrieval systems, tool integrations and user interactions in ways no participant can fully trace. That does not defeat fiduciary analysis, but it does mean old delegation doctrine cannot simply be pasted over the top.</p><p>Some fiduciary concepts will need to be rebuilt. Loyalty without machine intention must focus on design, incentives and institutional purpose. Care must include model selection, validation, monitoring and the competence to interrogate technical claims. Disclosure must move beyond saying “AI was used” towards explaining what role the system played and what kind of human judgment remained. Prudence must account for scale: a single inadequate human review may be negligence, but a thousand nominal human reviews per hour may be a governance fiction.</p><p>The harder questions begin where the analogy runs out. Who is the fiduciary when an AI system gives direct financial guidance through a consumer app? What duties should fall on a foundation model provider whose system is foreseeably used inside professional services? When does a deployer become responsible for model behaviour it cannot inspect? When should the law impose fiduciary duties on the architecture itself, through mandatory loyalty constraints, conflict prohibitions or design obligations? How should remedies work where harm comes from thousands of small decisions rather than one visible breach?</p><p>Those questions are new. They deserve new doctrine. But new doctrine should begin from the right inheritance.</p><hr><p>The present discourse too often treats fiduciary law as a sectoral afterthought, relevant only if the AI happens to be used by a lawyer, doctor, director, trustee or investment adviser. That gets the order wrong. Fiduciary law is not a niche overlay on AI governance. It is the legal tradition most concerned with mediated power, vulnerable reliance, divided interests and delegated judgment. Those are the defining conditions of consequential AI.</p><p>The regulatory frame will keep growing. It should. The safety frame will keep improving. It must. But neither will tell us, by itself, how to govern systems that act inside relationships of trust. For that, the better starting point is the old law of duty: care, loyalty, prudence, delegation and answerability. AI governance does not need to invent that wheel. It needs to remember why it was built.</p>]]></content:encoded>
            <author>invari@newsletter.paragraph.com (Invari)</author>
        </item>
    </channel>
</rss>