The Rewrite

Classical software
executed instructions.
Agents generate them.

For three decades, the enterprise software stack optimized around a single assumption: intelligence lives in the human, execution lives in the machine. A user read a screen, interpreted context, decided what mattered, clicked a button. Databases stored records. APIs moved events. The runtime did exactly what it was told.

AI agents invert that contract. The next step is no longer authored ahead of time by a programmer — it is inferred at runtime by a probabilistic system that assembles context, chooses among tools, revises plans mid-flight, and acts on partial information. That is not a feature addition. It is a change in the ontology of software.

Every primitive beneath the agent was built for a different species of software. Codicera governs the new one.

The Image

Six primitives, rewritten

The old column still runs every bank, every ERP, every system of record. It is not dead. But it is no longer where work happens. Work moves to the right column — and the right column needs a governance fabric the left column never required.

Old software

Deterministic · Explicit · Static

  • Users / service accounts
  • Databases
  • Message queues
  • Sessions / caches
  • Request / response
  • Logs / traces

Predictable · Reliable · Repeatable

Agent-native

Probabilistic · Generative · Dynamic

  • Delegation chains
  • Context compilation
  • Intent coordination
  • Cognitive memory stack
  • Durable trajectories
  • Behavioral forensics

Adaptive · Autonomous · Accountable

A corollary

The unit of work is the trajectory,
not the click

For twenty-five years, the browser was the membrane through which human judgment flowed into systems of record. Every SaaS vendor optimized for what the human sees. Every security vendor optimized for securing the human's session. Session-centric controls — DLP, RBAC, session recording — were the right answer because the session was the unit of work.

Agents introduce a new layer above the session. The browser does not disappear; it becomes one tool among many — shell execution, database queries, HTTP fetches — orchestrated inside a durable trajectory. Humans shift upward into design, delegation, and review. Agents do the clicking — sometimes literally, in a headless Chromium sandbox, when the long-tail SaaS tool has no API.

This new unit of work needs its own governance: provenance for every action, attenuated capabilities scoped to intent, and durable execution that survives failure. Session-level controls still protect the session. Trajectory-level controls protect the work.

That is the layer Codicera defines.

Six Primitives

Six layers to rewrite

Each primitive has a rewrite argument — the reason the old layer is wrong — and a Codicera surface that ships the replacement.

01 / 06

Identity → Delegated Capability

Old
Classical identity maps cleanly to actors: a user, a service account, a workload identity. Permissions attach to the actor and are checked at request time.

New
Agents shatter this. A single user request spawns a planner, a researcher, a browser, a code executor, a memory consolidator, and a reviewer — each with distinct risk profiles, tool access, and failure modes. The question is no longer who you are but what you are allowed to do right now, on whose behalf, for what reason, with what evidence, under what budget, and with what expiration. Identity becomes dynamic delegation.

  • Attenuated capability tokens for every agent-to-agent handoff
  • Delegation chain: every tool call carries initiator, sub-agent, policy, budget, expiry
  • OPA policy decisions logged as first-class telemetry
  • Provenance-bound tool invocations — every action traceable to the goal that spawned it
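The attenuation idea above can be sketched in a few lines. This is an illustrative model, not Codicera's implementation: the token fields, the HMAC signing, and names like `mint` and `authorize` are assumptions chosen to show the invariant — a child token can only narrow its parent's tools, budget, and lifetime, and every token carries its delegation chain.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; a real deployment would use a KMS


def mint(parent, *, actor, tools, budget_usd, ttl_s, reason):
    """Mint a capability token; a child may only narrow what its parent granted."""
    if parent is not None:
        tools = [t for t in tools if t in parent["tools"]]   # attenuate tools
        budget_usd = min(budget_usd, parent["budget_usd"])   # attenuate budget
        expiry = min(time.time() + ttl_s, parent["expiry"])  # attenuate lifetime
        chain = parent["chain"] + [actor]                    # delegation chain
    else:
        expiry = time.time() + ttl_s
        chain = [actor]
    token = {"actor": actor, "tools": tools, "budget_usd": budget_usd,
             "expiry": expiry, "reason": reason, "chain": chain}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return token


def authorize(token, tool):
    """Per-action check: is this tool in scope, and is the token still live?"""
    return tool in token["tools"] and time.time() < token["expiry"]


root = mint(None, actor="user:alice", tools=["browser", "db", "shell"],
            budget_usd=5.0, ttl_s=3600, reason="investigate incident")
child = mint(root, actor="agent:researcher", tools=["db", "email"],
             budget_usd=10.0, ttl_s=7200, reason="query logs")
```

Even though the sub-agent asked for `email` access and a larger budget, the minted child token holds only `db`, the parent's 5.0 budget, and the full delegation chain back to the initiating user.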

02 / 06

Storage → Context Compilation

Old
Databases were built for applications that knew what they wanted. The programmer defined the schema, wrote the query, understood the join path.

New
Agents begin with goals, not queries — "investigate the incident," "prepare me for this meeting," "find why the customer is upset." They don't know which table matters. They need context assembled on demand, under constraints of relevance, freshness, permissioning, and cost. Long context windows are a brute-force trick that works until it doesn't. Storage stops being persistence and becomes context compilation.

  • Behavior graph as the queryable context layer
  • Blueprint-declared context sources with permissioning, freshness, and cost metadata
  • PII redaction at the L7 boundary so context compilation never leaks sensitive data
  • Compliance-as-context: "no agent in team X used a model not on the approved list" becomes a query, not a log grep
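A minimal sketch of what "compilation under constraints" means, assuming a hypothetical source registry: each source declares relevance, freshness, cost, and permission metadata, and the compiler greedily selects the most relevant sources that fit the budget. The class and function names are illustrative, not a Codicera API.

```python
from dataclasses import dataclass


@dataclass
class ContextSource:
    name: str
    relevance: float    # 0..1, scored against the goal (scorer is out of scope here)
    freshness_s: int    # age of the newest record in seconds
    cost_tokens: int    # what including this source costs in the prompt
    allowed_roles: set  # who may see it


def compile_context(sources, role, *, token_budget, max_age_s):
    """Greedy compilation: most relevant first, filtered by permission and freshness."""
    eligible = [s for s in sources
                if role in s.allowed_roles and s.freshness_s <= max_age_s]
    eligible.sort(key=lambda s: s.relevance, reverse=True)
    chosen, spent = [], 0
    for s in eligible:
        if spent + s.cost_tokens <= token_budget:
            chosen.append(s.name)
            spent += s.cost_tokens
    return chosen


sources = [
    ContextSource("incident_log", 0.9, 60, 3000, {"sre"}),
    ContextSource("hr_records", 0.8, 60, 1000, {"hr"}),        # wrong role
    ContextSource("runbook", 0.7, 86400, 2000, {"sre"}),
    ContextSource("stale_dump", 0.95, 1_000_000, 500, {"sre"}),  # too old
]
result = compile_context(sources, "sre", token_budget=5000, max_age_s=100_000)
```

The highest-relevance source (`stale_dump`) is excluded for staleness and `hr_records` for permissioning — relevance alone never overrides policy.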

03 / 06

Messaging → Intent Coordination

Old
Distributed systems moved facts between services — queues, logs, topics, RPC. Beautiful abstractions for explicit coordination.

New
But agents don't pass messages; they pass work: a goal, a plan fragment, a confidence estimate, a rejection with critique, a handoff, a partial artifact. Agent-native messaging needs propose, accept, reject, revise, checkpoint, escalate, resume. And critically: cognition must be separable from commitment. Agents can explore branches internally; they cannot leak five versions of an invoice into the world because the model was thinking out loud.

  • Four-tier HITL: READ_ONLY · REVERSIBLE · SIDE_EFFECT · DESTRUCTIVE
  • Propose / approve / reject / revise task semantics
  • Idempotent tool boundary: internal deliberation is fluid; external side effects are durable and signed
  • ReAct loop as the internal deliberation surface, with trajectory export for audit
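The tier and idempotency semantics above can be made concrete with a small sketch. This is an assumed model, not the shipped API: side-effecting tiers escalate to a human unless approved, and an idempotency ledger keyed by proposal id guarantees that a retried or replayed proposal produces exactly one external effect.

```python
from enum import Enum


class Tier(Enum):
    READ_ONLY = 0
    REVERSIBLE = 1
    SIDE_EFFECT = 2
    DESTRUCTIVE = 3


APPROVAL_REQUIRED = {Tier.SIDE_EFFECT, Tier.DESTRUCTIVE}

_executed = {}  # idempotency ledger: proposal id -> recorded result


def execute(proposal_id, tier, action, approved=False):
    """Run an action once; gate side effects on human approval."""
    if proposal_id in _executed:
        return _executed[proposal_id]  # replay returns the recorded result
    if tier in APPROVAL_REQUIRED and not approved:
        return "ESCALATE"              # surface to a human reviewer
    result = action()
    _executed[proposal_id] = result
    return result


def send_invoice():
    return "invoice sent"
```

An agent can revise its plan and re-propose freely; only the approved proposal crosses the tool boundary, and crossing it twice is a no-op.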

04 / 06

Memory → Cognitive Stack

Old
Most applications have thin memory: a session store, a user profile, a conversation transcript. Enough when the program logic already knows how to behave.

New
Agents need much deeper memory because different kinds of memory play different computational roles. Working memory: the active state of the task. Episodic: trajectories across time. Semantic: durable facts about users, systems, domains. Procedural: how this class of task should generally be approached. Resource: where things live. Collapse these into a single transcript or vector store and the system gets confused. Agent-native memory needs consolidation rules, forgetting policies, trust boundaries, and selective recall — more cognitive architecture than database feature.

  • Structured memory types exposed through sandbox APIs
  • Behavior graph as episodic and semantic memory, versioned and queryable
  • Blueprints as procedural memory — declarative policy for how a class of task runs
  • Consolidation passes and summarization inside the behavior queue
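The separation of memory types, plus consolidation and forgetting, can be sketched as a toy class. Everything here — the class name, the fixed-size working buffer, the `summarize` callback — is an illustrative assumption meant to show the roles, not the product's memory subsystem.

```python
from collections import deque


class CognitiveMemory:
    def __init__(self, working_capacity=4):
        self.working = deque(maxlen=working_capacity)  # active task state
        self.episodic = []                             # summarized past trajectories
        self.semantic = {}                             # durable facts
        self.procedural = {}                           # task-class playbooks

    def observe(self, event):
        """Working memory is bounded; the oldest item falls out at capacity."""
        self.working.append(event)

    def consolidate(self, summarize):
        """End-of-task pass: compress working memory into one episodic record."""
        if self.working:
            self.episodic.append(summarize(list(self.working)))
            self.working.clear()

    def forget(self, keep_last_n):
        """Forgetting policy: retain only the newest episodes."""
        self.episodic = self.episodic[-keep_last_n:]


mem = CognitiveMemory(working_capacity=3)
for event in ["read log", "ran query", "found root cause"]:
    mem.observe(event)
mem.consolidate(lambda events: " | ".join(events))
mem.semantic["customer:acme"] = "prefers email over tickets"
```

The key property: a query against episodic memory sees a consolidated record, not the raw working transcript, and the forgetting policy is explicit rather than an accident of context-window size.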

05 / 06

Execution → Durable Trajectory

Old
Traditional runtimes are built around request-response: a request arrives, code runs, a result returns.

New
Agents are loops. An agent observes, plans, acts, inspects, revises, repeats — over seconds, minutes, hours, or days. The first answer may be a rough sketch; the third may be correct. The system does not execute a path; it discovers one. This requires durable execution: checkpointed state, crash recovery, deterministic side-effect wrappers, replay for debugging, and mid-flight human intervention. The primary object is no longer the request — it is the trajectory.

  • ReAct loop with state-machine enforcement (Thought → Action → Observation)
  • Sandbox lifecycle: pause, resume, inspect, approve, branch, rollback — all first-class APIs
  • Trajectory replay for post-hoc debugging and audit
  • Kernel-level isolation (Landlock, seccomp, network namespaces) per trajectory
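The state-machine enforcement and checkpoint/resume ideas above can be sketched as follows, assuming a simple transition table and JSON serialization; this is a model of the pattern, not the runtime itself.

```python
import json


class Trajectory:
    # Enforced ReAct ordering: Thought -> Action -> Observation -> Thought | Done
    TRANSITIONS = {"START": {"THOUGHT"},
                   "THOUGHT": {"ACTION", "DONE"},
                   "ACTION": {"OBSERVATION"},
                   "OBSERVATION": {"THOUGHT"},
                   "DONE": set()}

    def __init__(self):
        self.state = "START"
        self.steps = []

    def step(self, next_state, payload):
        """Advance the loop; illegal transitions are rejected, not logged and ignored."""
        if next_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {next_state}")
        self.steps.append({"state": next_state, "payload": payload})
        self.state = next_state

    def checkpoint(self):
        """Durable snapshot: enough to resume after a crash or a pause."""
        return json.dumps({"state": self.state, "steps": self.steps})

    @classmethod
    def resume(cls, blob):
        data = json.loads(blob)
        t = cls()
        t.state, t.steps = data["state"], data["steps"]
        return t


t = Trajectory()
t.step("THOUGHT", "need the error logs")
t.step("ACTION", "db.query(errors)")
t.step("OBSERVATION", "3 timeout errors")
t.step("THOUGHT", "enough evidence")
t.step("DONE", "report drafted")
```

Because every step is recorded, the same checkpoint serves three purposes: crash recovery, post-hoc replay, and a place for a human to inspect and intervene mid-flight.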

06 / 06

Observability → Behavioral Forensics

Old
Classical observability asks what happened: traces, logs, metrics, errors. Enough when the application's behavior is already encoded in code.

New
Agents demand why: why did the agent choose that tool, what context did it retrieve, what alternatives did it consider, what evidence did it find persuasive, what policy boundaries were in effect? The debugging unit is no longer the request — it is the decision trajectory. Traces must preserve goals, context selections, reasoning artifacts, tool choices, confidence levels, revisions, and verifier outputs. The postmortem reads like a behavioral forensic analysis, not an infrastructure incident report.

  • Behavior graph with semantic depth (goals, context, tool choices, confidence, revisions)
  • Drift detection (KL divergence, PSI) and hallucination indices via LLM-as-judge
  • Compliance reports mapped to NIST AI RMF, EU AI Act, ISO 42001
  • Telemetry export API for parametric AI insurance oracles
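The drift metrics named above are standard and easy to sketch. Here KL divergence and PSI are computed over a tool-choice distribution; the baseline numbers and the 0.2 alert threshold are illustrative (thresholds are policy, not math).

```python
import math


def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) over a shared categorical support, e.g. tool-choice frequencies."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))


def psi(expected, actual, eps=1e-9):
    """Population Stability Index: a common drift score in model monitoring."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))


# Hypothetical tool-choice mix over [browser, db, shell]
baseline = [0.70, 0.20, 0.10]   # historical behavior
today    = [0.40, 0.30, 0.30]   # the agent suddenly favors shell execution

DRIFT_THRESHOLD = 0.2  # illustrative alert line
alert = psi(baseline, today) > DRIFT_THRESHOLD
```

A shift toward shell execution like this one trips the alert well before any individual action looks anomalous — the distribution drifts even when each call passes its policy check.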

Governance

Governance lives in the control loop

Classical software treated governance as a compliance layer bolted on after the fact — quarterly audits, annual reports, periodic access reviews. This worked when software was deterministic. Agents break it. A policy that only checks at design time cannot constrain a runtime that generates its own workflows. Agent-native governance is embedded in the control loop.

Primitive · Runtime governance

Delegated capability · Attenuated tokens, per-action provenance, revocability
Context compilation · PII redaction, permission-aware retrieval, residency enforcement
Intent coordination · HITL tiers, approval thresholds, side-effect idempotency
Cognitive memory · Source attribution, forgetting policies, trust boundaries
Durable trajectory · Policy-as-code state machines, loop limits, forbidden-tool lists
Behavioral forensics · Drift thresholds, hallucination indices, parametric insurance triggers
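What "embedded in the control loop" means can be shown with a small sketch of policy-as-code: a check that runs before every tool call rather than in a quarterly review. The policy keys, limits, and tool names are invented for illustration.

```python
POLICY = {
    "max_loop_iterations": 25,            # loop limit per trajectory
    "forbidden_tools": {"shell:rm", "db:drop"},
}


def enforce(step_count, tool, policy=POLICY):
    """Runtime check executed inside the control loop, not after the fact."""
    if step_count > policy["max_loop_iterations"]:
        return "HALT: loop limit exceeded"
    if tool in policy["forbidden_tools"]:
        return "DENY: forbidden tool"
    return "ALLOW"
```

Because the check sits on the hot path, a runaway loop or a forbidden tool is stopped at the step where it occurs, and the decision itself becomes telemetry.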

Governance becomes tangible in two directions. Upward to the CIO, as enablement wrapped in safety: the buyer purchases the ability to deploy agents confidently. Outward to the AI insurance and compliance ecosystem, as the telemetry oracle that parametric contracts consume and the evidence source that regulatory audits accept.

The old stack wasn't built for agents.
Codicera is.

Early access is open. Join the waitlist and we'll get you running on your infrastructure in days, not quarters.