Agentic Security at Trent: From Judgment to Time-Bounded Delegation
Abstract
Building on our first Trent session, this short offsite talk focuses on one practical question: how do we scale agentic systems without ceding the institutional judgement layer that keeps decisions safe?
We frame the challenge through three ideas: (1) Data-Oriented Agents (DOAgents) for networks of specialised agents, (2) the Consistent Reasoning Paradox and why robust systems need explicit “I don’t know” behavior, and (3) agentic debt as the operational cost of delegation without bounded time, authority, and recovery paths.
The proposal is pragmatic: each subtask in an agent graph receives a time budget and explicit termination policy. Agents either complete with evidence, escalate with “I don’t know,” or trigger human involvement. These budgets can be tuned empirically by balancing human interruption cost against compute waste and risk exposure.
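The three termination outcomes above can be sketched as a small wrapper. This is a minimal illustration only; the names (`run_with_budget`, `DontKnow`, `Outcome`) are hypothetical and not part of any existing framework:

```python
import time
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Outcome(Enum):
    COMPLETE = "complete"      # finished, with evidence attached
    DONT_KNOW = "dont_know"    # explicit "I don't know" escalation
    HUMAN = "human_required"   # budget exhausted: trigger human involvement

class DontKnow(Exception):
    """Raised by an agent that recognises it is out of its depth."""

@dataclass
class Result:
    outcome: Outcome
    evidence: Optional[str] = None

def run_with_budget(task: Callable[[], Optional[str]], budget_s: float) -> Result:
    """Retry `task` until it returns evidence, signals uncertainty,
    or the time budget expires (a real implementation would back off
    between attempts rather than spin)."""
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        try:
            evidence = task()
        except DontKnow:
            return Result(Outcome.DONT_KNOW)
        if evidence is not None:
            return Result(Outcome.COMPLETE, evidence)
    return Result(Outcome.HUMAN)
```

Tuning then reduces to choosing `budget_s` per subtask so that the expected cost of a human interruption is balanced against compute waste and risk exposure.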
Context and objective
Institutional tacit knowledge
Institutional tacit knowledge motivates an emulsion metaphor: organisations are a stable mixture of automatable routines and irreducible human context. In current institutions, routine decisions and the judgement that surrounds them are finely intermixed rather than cleanly separated, like an emulsion. When we replace these decisions with orchestrated agents without understanding where judgement intervenes, we accumulate agentic debt. Paying down agentic debt means extracting the tacit judgement layer into explicit policies, evidence requirements, and reversible action boundaries.
Architecture: DOAgents
This section draws on Christian Cabrera and collaborators’ data-oriented architecture perspective: in production, robustness comes from making data and boundaries first-class. Here we apply that principle to networks of agents, where each node has scoped authority and each edge carries explicit evidence and constraints.
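To make "scoped authority on nodes, explicit evidence on edges" concrete, here is a minimal sketch under assumed names (`AgentNode`, `Scope`, `Message` are illustrative, not an existing DOA API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scope:
    """Explicit authority boundary for one agent node."""
    allowed_actions: frozenset

@dataclass(frozen=True)
class Message:
    """Edge payload: a claim travels with its evidence and constraints."""
    payload: str
    evidence: tuple     # provenance chain, one entry per hop
    constraints: dict   # e.g. {"reversible": True}

@dataclass
class AgentNode:
    name: str
    scope: Scope

    def act(self, action: str, msg: Message) -> Message:
        # Boundaries are first-class: authority and evidence are checked
        # at the interface, not inside the agent's reasoning.
        if action not in self.scope.allowed_actions:
            raise PermissionError(f"{self.name} has no authority for {action!r}")
        if not msg.evidence:
            raise ValueError("edges must carry explicit evidence")
        return Message(payload=f"{action}({msg.payload})",
                       evidence=msg.evidence + (self.name,),
                       constraints=msg.constraints)
```

The design choice is that a node can only refuse or act within its scope; the graph, not the model, holds the boundary.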
Reasoning limits and trust
Following Bastounis et al. (2024), the practical lesson is not philosophical pessimism; it is engineering discipline. If a system cannot reliably recognise when it is out of its depth, an "always answer" policy becomes a liability.
Paying down agentic debt
Agentic Debt
Agentic AI could pay down the technical debt and intellectual debt that plague our deployment of complex systems. But in doing so it could create a new form of debt: agentic debt.
Agentic debt is the “new debt” introduced by systems that can act: the accrued risk and cost of operating delegated workflows without crisp boundaries. Technical debt emerges from engineering shortcuts, and intellectual debt from (well-engineered) complex systems we deploy without understanding; agentic debt, by contrast, is about unsafe or illegible delegation. Who (or what) can cause which action, on what evidence, with what recovery path?
This is the key proposal: convert hidden judgement debt into explicit runtime policy. Every delegated decision has a clock, an evidence threshold, and a recovery route.
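A minimal sketch of what such a runtime policy record might look like (Python; `DelegationPolicy` and `decide` are hypothetical names for illustration, not an existing API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class DelegationPolicy:
    """One delegated decision: a clock, an evidence threshold, a recovery route."""
    budget_s: float                   # the clock
    min_evidence: int                 # evidence threshold
    recovery: Callable[[str], None]   # recovery route (rollback, escalate, ...)

def decide(policy: DelegationPolicy, elapsed_s: float,
           evidence: List[str]) -> bool:
    """Allow the autonomous action only inside the policy's boundaries;
    otherwise fire the recovery route and refuse."""
    if elapsed_s > policy.budget_s:
        policy.recovery("clock expired")
        return False
    if len(evidence) < policy.min_evidence:
        policy.recovery("insufficient evidence")
        return False
    return True
```

The point of the record is legibility: every delegated decision carries its own boundaries, so audit and tuning operate on explicit policy rather than hidden judgement.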
Thanks!
For more information on these subjects, you might want to check the following resources.
- company: Trent AI
- book: The Atomic Human
- twitter: @lawrennd
- podcast: The Talking Machines
- newspaper: Guardian Profile Page
- blog: http://inverseprobability.com