Lex et Ratio Ltd
AI Governance Phase Two: Delegated authority, mandate, and liability architecture

AI Governance Phase Two

AI Governance Phase Two shifts the inquiry upstream. The central question is not whether AI systems qualify as ‘agents’, but whether delegated discretion was lawfully specified, bounded, and supervised at mandate design. Once decision authority migrates into infrastructure — through identity scoping, tool invocation, and workflow integration — governance becomes a problem of allocatable responsibility. If foreseeable unlawful or rights-violating outcomes remain reachable within the authorised operational space, liability does not dissipate into technical opacity. It remains traceable to those who defined scope, constraints, and oversight architecture.

1. Delegation, Mandate Design and Allocatable Responsibility

AI governance is entering its second phase.

The first phase concentrated on alignment, safety engineering, and the ontological question of whether artificial systems qualify as ‘agents’. That debate remains analytically useful. It clarifies optimisation, statistical reasoning, and behavioural risk. It does not, however, resolve the central legal question: how responsibility attaches once decision authority becomes infrastructural.

The relevant inquiry is not whether AI is an agent. It is what changes when discretion is embedded into operational systems rather than exercised episodically by identifiable officers.

When systems become persistent, embedded, and non-optional, organisations are no longer merely deploying tools. They are structuring authority through architecture. Governance therefore shifts from downstream risk mitigation to upstream duty.

This structural reallocation of discretion marks the beginning of Phase Two.

2. From Tools to Entrusted Infrastructure

In Phase One, AI systems were treated as advisory or discretionary tools. Governance focused on output integrity — bias mitigation, validation, monitoring, and post-hoc correction.

In Phase Two, systems operate as infrastructure. They shape admissible evidence, prioritise options, initiate workflows, invoke tools, and execute actions across live environments. They are embedded in underwriting, diagnostics, compliance, recruitment, credit allocation, and public administration. Exit becomes constrained. Review becomes procedural. Decisions become path-dependent.

This is often described operationally as a move from output risk to execution risk. But execution risk is derivative. The underlying shift concerns the scope of delegated authority at runtime.

Once authority is compiled into architecture, governance can no longer be confined to behavioural oversight. The inquiry becomes one of mandate design, admissibility constraints, and structural boundedness.

The issue is no longer behavioural failure. It is architectural allocation.

3. The Delegation Threshold

The organising concept is the Delegation Threshold: the point at which operational authority migrates into technical systems without a corresponding re-specification of responsibility.

This threshold is crossed when system outputs materially structure binding outcomes; when human review becomes formal rather than deliberative; when execution authority is granted through identity scoping and tool invocation; and when responsibility remains nominally human yet functionally mediated by infrastructure.

Metaphysical autonomy is irrelevant. What matters is effective discretion — the capacity to initiate or constrain real-world consequences under conditions where intervention is exceptional rather than routine.

Once the threshold is crossed, optimisation may intensify while accountability attenuates. Decisions continue to be generated, yet the locus of authorship becomes opaque and the chain of delegation increasingly difficult to reconstruct.

Phase Two governance begins precisely at this migration point.

4. Upstream Liability: Architecture Rather Than Outcome

Conventional legal analysis examines discrete harms. A problematic output is framed as misuse, malfunction, or insufficient safeguard.

Phase Two reframes the inquiry upstream.

If a system’s reachable operational or semantic space includes foreseeable unlawful or rights-violating outcomes, the decisive question is not whether runtime filters failed. It is whether the mandate was defensibly specified at the point of authorisation.

‘Misuse’ presupposes a boundary.

Where enforceable boundaries were not architecturally embedded, responsibility does not dissipate into user conduct. It remains allocatable to those who defined operational scope.

Liability therefore attaches to mandate definition, constraint compilation, integration architecture, and oversight design — not merely to isolated outputs.

Governance becomes a question of admissibility and bounded authority, not reactive policing.

5. Operational Corollary: Runtime Control Is Necessary but Not Sufficient

Practically, Phase Two governance requires scoped identities, least-privilege access boundaries, runtime constraint enforcement, comprehensive logging, intervention thresholds, and pre-assigned incident ownership.

However, these mechanisms regulate the exercise of authority; they do not legitimise its scope. The primary governance act occurs at mandate design — when authority is defined, prohibited classes are excluded, and execution boundaries are compiled into the system before deployment.

Runtime enforcement without bounded mandate design risks entrenching structurally excessive discretion. Controls may function perfectly while the underlying delegation remains under-specified.
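The distinction between mandate design and runtime enforcement can be made concrete. The sketch below, in Python, is purely illustrative: the `Mandate` structure, the principal name, and the tool names are hypothetical, not drawn from any particular framework. The point is that scope, prohibited classes, and least-privilege boundaries are fixed before deployment, and the runtime guard merely enforces what was compiled in.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass(frozen=True)
class Mandate:
    """Authority specified at design time, before deployment."""
    principal: str                      # who authorised the scope
    allowed_tools: frozenset[str]       # least-privilege tool boundary
    prohibited_classes: frozenset[str]  # outcome classes excluded ex ante

def invoke(mandate: Mandate, tool: str, action_class: str) -> bool:
    """Runtime enforcement: permits only what the mandate compiled in,
    and logs every decision for later reconstruction."""
    if tool not in mandate.allowed_tools:
        logging.warning("Denied: tool %r outside mandate scope", tool)
        return False
    if action_class in mandate.prohibited_classes:
        logging.warning("Denied: %r is a prohibited class", action_class)
        return False
    logging.info("Executed %r (%s) under mandate of %s",
                 tool, action_class, mandate.principal)
    return True

# Hypothetical mandate: reporting tools only, no funds movement.
m = Mandate(
    principal="chief-risk-officer",
    allowed_tools=frozenset({"read_ledger", "draft_report"}),
    prohibited_classes=frozenset({"funds_transfer"}),
)
assert invoke(m, "read_ledger", "reporting") is True
assert invoke(m, "execute_payment", "funds_transfer") is False
```

Note that the guard can operate flawlessly and still enforce an over-broad mandate: if `allowed_tools` had been left unbounded at design time, every invocation would pass. That is the sense in which runtime control is necessary but not sufficient.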

6. Institutions as Fiduciaries of Decision Environments

Where individuals must organise their affairs around system outputs, deploying institutions assume responsibility for the epistemic environment they have constructed.

This resembles fiduciary delegation: entrusted discretion exercised under conditions of dependency and constrained exit.

Authority over admissibility — what counts as evidence, what is escalated, what is filtered — becomes infrastructural rather than discretionary.

Under such conditions, governance entails maintaining traceability of delegation, structural contestability of outcomes, and revisability of operational authority without destabilising institutional continuity.

These are emerging legality conditions for legitimate authority.

7. The Auditable Chain of Delegation

The decisive test in Phase Two is not generic safety.

It is whether the chain of delegation remains auditable.

This requires reconstructable identity scoping, documented permission inheritance, visible tool invocation pathways, escalation mechanisms, and defined shutdown authority. An auditable chain is not rhetorical accountability; it is traceable structural delegation.
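What "reconstructable" means in practice can be sketched in a few lines. The example below is an illustrative Python sketch, not a reference to any deployed system: each delegation record (who granted what scope, inheriting from which prior grant) is linked to its predecessor by a hash, so the chain of authority can be replayed and any retrospective alteration detected.

```python
import hashlib
import json

def append_delegation(chain: list[dict], record: dict) -> list[dict]:
    """Append a delegation record linked to its predecessor by hash,
    so the full chain of authority can be reconstructed and verified."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"record": record, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute each link; any tampering breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

# Hypothetical grants: a board-level scope, then an inherited permission.
chain: list[dict] = []
append_delegation(chain, {"who": "board", "scope": "underwriting",
                          "tools": ["score_model"]})
append_delegation(chain, {"who": "ops-lead", "inherits": 0,
                          "tools": ["score_model"]})
assert verify(chain)

chain[0]["record"]["scope"] = "all"   # tamper with an earlier grant
assert not verify(chain)
```

The mechanism is deliberately minimal; the legal point it illustrates is that auditability is a structural property of how delegation is recorded, not an after-the-fact narrative.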

Courts and regulators will ask who authorised scope, what constraints were embedded at compilation, how oversight was preserved after integration, and whether responsibility can be reconstructed across time.

This inquiry echoes longstanding corporate law questions: who may bind whom, within what authority, and subject to which constraints.

The novelty is not doctrinal but locational. Discretion once exercised through identifiable officers is now embedded in technical architecture. Delegation is executed by design rather than by signature.

Delegation that removes practical reviewability without preserving allocatable responsibility approaches abdication.

8. Separating Intelligence from Authority

Contemporary discourse often conflates optimisation, agency, and authority.

Phase Two separates them.

A system may optimise probabilistically while lacking normative standing. Authority arises not from intelligence, but from institutional reliance that structures binding outcomes.

The governance question is therefore whether authorship, admissibility, and liability remain intelligible once automation becomes infrastructural.

If responsibility cannot be clearly located, authority has migrated beyond its justificatory foundation.

9. Structural Failure Modes

Embedded socio-technical systems exhibit recurrent structural risks:

  • Opacity in reasoning or execution pathways;
  • Diffusion of responsibility across organisational layers;
  • Misrecognition of architectural decisions as technological inevitability;
  • Constrained exit for affected parties;
  • Incremental expansion of runtime authority through drift or integration creep.

These conditions stabilise authority while weakening corrective challenge. They represent misaligned delegation rather than isolated technical defects.

10. Judicial Signal: Cognitive Delegation Is Legally Visible

In United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 10, 2026), Judge Jed S. Rakoff held that a criminal defendant’s communications with the generative AI platform Claude were not protected by attorney–client privilege or the work-product doctrine.

The reasoning was orthodox. There was no attorney–client relationship with the platform. The defendant’s use of the system was not undertaken at counsel’s direction. The platform’s published privacy terms permitted data retention and third-party disclosure. On these facts, no reasonable expectation of confidentiality existed, and the work-product doctrine did not apply.

The significance of the ruling lies not in its novelty, but in its classification choice. The court treated the AI platform as a legally external infrastructure provider, not as an extension of private deliberation. Once reasoning is mediated through a system that reserves independent data rights, cognitive activity becomes legally visible as disclosure.

This is a Phase Two moment.

The question is no longer whether AI systems ‘reason’. The question is how authority is delegated, how confidentiality is architected, and whether the chain of delegation remains auditable. Privilege does not attach to subjective experience of thinking. It attaches to relationships, mandates, and legally cognisable boundaries.

Where AI is embedded without mandate design, institutional actors remain fully responsible for the consequences of delegation. Infrastructure choices determine liability exposure.

11. The Governing Question

The defining governance problem of this decade is not intelligence.

It is governability.

Intelligent infrastructures must remain subject to bounded, traceable, and reviewable delegation. Where authority migrates into architecture without preserved allocatability, governance fails irrespective of technical performance.

AI Governance Phase Two marks the transition from supervising tools to structuring legitimate authority.

And that is a fiduciary question.

Authorities

  1. Companies Act 2006, ss 171–175.
  2. Armitage v Nurse [1998] Ch 241 (CA).
  3. Re Barings plc (No 5) [2000] 1 BCLC 523 (Ch).
  4. Item Software (UK) Ltd v Fassihi [2004] EWCA Civ 1244 (CA).
  5. In re Caremark International Inc Derivative Litigation 698 A 2d 959 (Del Ch 1996).
  6. Palkon v Maffei, No 125, 2024 (Del Sup Ct, 4 February 2025).
  7. Peña v MacArthur Group, Inc., C.A. No 2023-0412-MTZ (Del Ch, 1 October 2025).
  8. Deborah A DeMott, ‘Beyond Metaphor: An Analysis of Fiduciary Obligation’ (1988) 37 Duke Law Journal 879.
  9. Tamar Frankel, Fiduciary Law (OUP 2011).
  10. Paul B Miller and Andrew S Gold (eds), Philosophical Foundations of Fiduciary Law (OUP 2014).
  11. NIST, AI Risk Management Framework (AI RMF 1.0) (2023).
  12. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L, 2024/1689.
  13. United States v Heppner, No 1:25-cr-00503 (SDNY, 10 February 2026).
  14. Peter Kahl, ‘Authority without Authorship: Delegation Thresholds in Agentic AI Systems’ (2026) Lex et Ratio Ltd Preprint.
  15. Peter Kahl, ‘Epistemic Humility as Fiduciary Obligation: Entrusted Discretion and Responsibility for Belief’ (2026) Lex et Ratio Ltd Preprint.
