On May 4, 2026, AWS open-sourced a piece of infrastructure that promises to reshape how security teams architect agentic AI deployments. The project, Trusted Remote Execution—or Rex for short—introduces runtime guardrails that gate every system operation an AI-generated script attempts. By evaluating each action against a Cedar policy defined by the host owner, Rex shifts trust away from the agent and toward the infrastructure owner. While this is a genuine advancement for runtime security, it leaves a critical gap that compliance and data security teams cannot ignore: the data layer.
What AWS Solved
The mechanics of Rex are straightforward. Scripts run in Rhai, a lightweight embedded language with no built-in access to the operating system. Every read, write, or open operation is intercepted by a Rex SDK call, which evaluates a Cedar policy before permitting the underlying system call. If the policy denies the action, the script receives an ACCESS_DENIED_EXCEPTION and the operation never reaches the kernel. Critically, the script and the policy are versioned separately. The host owner—not the developer who wrote the script, not the agent that may have generated it—defines what is allowed.
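AWS’s SDK surface is not reproduced here. The following is a minimal Python sketch of the gating pattern, offered under stated assumptions: evaluate_policy, AccessDeniedException, and HOST_POLICY are hypothetical stand-ins for the real Rex SDK and Cedar engine, not their actual interfaces.

```python
# Illustrative sketch of the Rex gating pattern, not the Rex SDK itself.
# evaluate_policy, AccessDeniedException, and HOST_POLICY are hypothetical.

class AccessDeniedException(Exception):
    """Raised when the host-owner policy denies the requested operation."""

# Defined by the host owner and versioned separately from any script.
HOST_POLICY = {
    ("file_system", "read"): {"/var/app/data/report.txt"},
    ("file_system", "write"): set(),  # no write is ever permitted
}

def evaluate_policy(namespace: str, action: str, resource: str) -> bool:
    """Stand-in for a Cedar evaluation: permit only what the policy lists."""
    return resource in HOST_POLICY.get((namespace, action), set())

def gated_read(path: str) -> str:
    """Every operation passes the policy gate before the system call."""
    if not evaluate_policy("file_system", "read", path):
        raise AccessDeniedException(f"read denied for {path}")
    with open(path) as f:  # reached only after an explicit permit
        return f.read()
```

The structural property is the point: a denial is raised before the kernel is ever touched, and the policy belongs to the host owner rather than to whoever, or whatever, wrote the script.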
Rex targets three specific failure modes in agentic AI: hallucinated code, prompt injection, and overly eager task interpretation. None of these are hypothetical. OpenAI stated in late 2025 that prompt injection “is unlikely to ever be fully ‘solved.’” Anthropic acknowledged in research that “prompt injection is far from a solved problem, particularly as models take more real-world actions.” Rex inverts the traditional sandbox approach: instead of bounding what the agent generates, it bounds what any host operation the agent invokes can actually accomplish. That is a shift in where trust lives, and it is now encoded in production code.
The architectural inversion is real. Most agentic sandboxes try to constrain the agent’s behavior; Rex constrains the impact of that behavior. The pattern treats prompts as instructions rather than access controls, and treats the agent’s claimed identity as something to be verified rather than trusted. Vendor security questionnaires, internal architecture reviews, and audit evidence packages can now reference a working open-source implementation of this pattern. It is a runtime layer worth adopting.
What AWS Did Not Solve
Rex governs system calls. It does not govern data security. That distinction is not a footnote. It is the difference between protecting the host from the agent and protecting the data from misuse, and it is the difference between passing a runtime audit and passing a regulatory one. A Cedar policy can permit file_system::Action::"read" on a customer-records file. At the kernel layer that is correct. At the data layer it is inadequate.
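To make the contrast concrete, here is a policy of roughly that shape, embedded as a string in a Python sketch. The entity names are hypothetical; only the permit structure follows Cedar’s documented syntax.

```python
# Illustrative only: a kernel-layer permit of roughly this shape allows
# the read. The entity names are hypothetical.
KERNEL_LAYER_POLICY = """
permit (
    principal,
    action == file_system::Action::"read",
    resource == file_system::File::"customer-records"
);
"""
# Nothing in this policy can express which human the agent acts for,
# what purpose authorizes the read, or whether records inside the file
# are under a deletion request, legal hold, or consent restriction.
```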
The data layer must answer a different set of questions: Is this read happening on behalf of a specific human user with the right authorization, or is the agent acting on its own claimed identity? Is the requester operating within the scope of the engagement that authorized access to this data in the first place? Are the records returned minimum-necessary for the task, or is the agent pulling more context than the prompt actually requires? Are any of the records subject to a deletion request, a legal hold, or a jurisdictional restriction that has not yet propagated to the file system? Is the access being logged in a tamper-evident form, with sufficient detail to reconstruct who authorized what—three years from now, when the model that generated the request has been retired and replaced twice?
Rex does not answer those questions. Cedar policies on system calls cannot answer them. They live one layer below the runtime, where the data lives, and that layer is where data security has to be enforced. Without it, an organization can run every agentic workload through Rex, prove that no script ever exceeded its host permissions, and still be unable to demonstrate to a regulator that the right person authorized the right access to the right data for the right purpose.
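What would answering them look like? Below is a minimal sketch, with hypothetical request and record models, of the attribute-based evaluation a data layer has to perform. None of these attributes is visible to a system-call policy, because none of them is a property of the file.

```python
from dataclasses import dataclass

PERMITTED_JURISDICTIONS = {"US", "EU"}  # hypothetical residency policy

@dataclass
class DataRequest:
    agent_id: str             # recorded for the audit trail
    on_behalf_of: str | None  # the verified human user, not a claimed identity
    purpose: str              # the engagement that authorizes the access

@dataclass
class Record:
    jurisdiction: str
    legal_hold: bool
    deletion_requested: bool

def authorize(req: DataRequest, record: Record, grants: dict) -> bool:
    """Hypothetical data-layer decision: grants maps each human user to
    the set of purposes they are authorized for."""
    if req.on_behalf_of is None:
        return False  # agent acting on its own claimed identity
    if req.purpose not in grants.get(req.on_behalf_of, set()):
        return False  # outside the scope of the authorizing engagement
    if record.jurisdiction not in PERMITTED_JURISDICTIONS:
        return False  # jurisdictional restriction on the record
    if record.deletion_requested or record.legal_hold:
        return False  # restriction not yet propagated to the file system
    return True
```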
This matters operationally and legally. GDPR Article 5 demands purpose limitation, data minimization, storage limitation, and accountability. HIPAA’s minimum-necessary standard requires controls on which data the agent is permitted to access, not just which system calls the agent’s script is allowed to make. CMMC Level 2 access control families assume enforced authorization for AI access to controlled unclassified information. None of those frameworks is satisfied by runtime gating alone.
The Numbers Make the Gap Concrete
The Kiteworks Data Security and Compliance Risk: 2026 Forecast Report found that 63% of organizations cannot enforce purpose limitations on AI agents. Sixty percent cannot quickly terminate a misbehaving agent, 55% cannot isolate AI systems from broader network access, and 54% cannot validate AI inputs. Some of these gaps are exactly what Rex closes at the runtime layer: termination, isolation, input validation. Others are not. Purpose limitation is a data-semantics control. It cannot be enforced on a system call. It must be enforced on the data.
Only 43% of organizations have a centralized AI data gateway. The remaining 57% are running agentic AI through fragmented or partial data-layer controls. Adding Rex to that 57% closes the runtime gap and leaves the data gap where it was. The audit-defensible layer is not the kernel. It is the data.
The Five Eyes joint advisory on agentic AI, released on April 30 and May 1, names five risk categories: privilege, design and configuration, behavior, structural, and accountability. Rex addresses parts of two of them. It does not address structural risks across multi-agent systems. It does not address the accountability category, the one auditors and regulators will care about most, because accountability is evidence about who accessed what data, on whose behalf, for what purpose. A system call audit log does not produce that evidence. A data-layer audit log does.
The Architecture Data Security Actually Requires
The architecture that holds up under regulatory enforcement is layered, and the layers are not interchangeable. Runtime controls like Rex enforce what the host will permit. Identity controls enforce who the agent is acting on behalf of. Data-layer controls—attribute-based access control evaluated against classification, jurisdiction, consent, and purpose—enforce what data the agent is allowed to touch. Each layer addresses a different failure mode. None of them substitutes for the others.
The data layer is where data security lives. It is the layer where every access is authenticated against the human user the agent is acting for, where every authorization decision is evaluated against attribute-based policies that respect classification, jurisdiction, and consent, and where every operation produces a tamper-evident audit record that outlives the model that initiated it. AWS does not provide that layer in the Rex release. It is the architect’s responsibility, and it has to be built explicitly.
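What tamper-evident means in practice can be as simple as a hash chain, sketched below as one possible construction rather than a description of any particular product: each audit record commits to its predecessor, so editing or deleting an earlier entry breaks every hash after it.

```python
import hashlib
import json
import time

def append_audit_record(chain: list, *, user: str, agent: str,
                        purpose: str, resource: str, decision: str) -> dict:
    """Append a record that commits to the previous one. Field names are
    illustrative; the chaining is what makes the log tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "ts": time.time(), "user": user, "agent": agent,
        "purpose": purpose, "resource": resource,
        "decision": decision, "prev": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body
```

Because each entry records the human user, the purpose, and the decision, the chain can still answer who authorized what long after the model that made the request has been retired.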
Agentic AI systems are increasingly deployed in high-stakes environments such as healthcare, hiring, and supply chain management, and AWS’s own push into agentic AI now extends into these verticals, making robust data-layer controls even more pressing. The threat landscape includes not only injection attacks but also unintended data exfiltration via model inference. Runtime guards can prevent a script from writing to an external socket, but they cannot prevent an agent from returning sensitive data in its response if the underlying vector database lacks fine-grained access controls.
Compliance teams must also contend with evolving regulations. The European Union’s AI Act, expected to be fully enforced by 2027, will require risk management systems for high-risk AI systems that process personal data. The act’s record-keeping obligations demand granular logging of data access for those systems. The US National Institute of Standards and Technology (NIST) AI Risk Management Framework similarly emphasizes traceability and accountability. These requirements cannot be met by runtime gating alone.
Furthermore, the principle of data minimization in privacy regulations requires that an agent only access the minimum data necessary for its task. A customer service agent that retrieves a full customer profile when it only needs a name and order ID violates this principle. Rex cannot enforce data minimization because it operates at the file system level, not the content level. Only a data-layer gateway that inspects the actual fields requested can enforce such policies.
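A sketch of what field-level enforcement looks like, with a hypothetical per-purpose allowlist standing in for a real policy store:

```python
# Hypothetical minimum-necessary allowlists, keyed by authorized purpose.
FIELDS_BY_PURPOSE = {
    "order_status_inquiry": {"name", "order_id"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Project the record down to the fields the purpose permits,
    regardless of how much context the agent asked for."""
    allowed = FIELDS_BY_PURPOSE.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

full_profile = {"name": "A. Customer", "order_id": "98-1204",
                "ssn": "000-00-0000", "payment_token": "tok_demo"}
print(minimize(full_profile, "order_status_inquiry"))
# -> {'name': 'A. Customer', 'order_id': '98-1204'}
```

The file-system permit is identical in both cases; only a gateway that sees the fields can tell the difference between a compliant read and an over-broad one.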
One disclosure is in order: the author, Tim Freestone, is chief strategy officer at Kiteworks, a company specializing in content governance and compliance, and Kiteworks’ own platform provides attribute-based access control and tamper-evident logging for sensitive content. That vantage point shapes the argument here: runtime innovation is welcome, but it needs a comprehensive data security strategy alongside it.
What This Means for Security and Compliance Leaders
The right operational response to the AWS announcement has three parts. First, adopt the runtime pattern. Rex is open-source under Apache 2.0, hosted at github.com/trusted-remote-execution, and runs on Linux and macOS. There is no procurement obstacle. Second, do not treat runtime gating as the whole answer. Map current controls against the Five Eyes advisory’s five risk categories and identify where the architecture stops at the kernel and where the data layer is still ungoverned. Third, build the audit trail at the layer that survives model lifecycle changes. The model can be retired. The runtime can be replaced. The data layer is the only place where the evidence outlasts the agent that produced it.
AWS solved part of the problem. Data security—the part that actually shows up in audits, regulatory inquiries, breach notifications, and litigation discovery—requires governance at the data layer, and AWS did not address it. The runtime layer just got easier. The data layer is still the architect’s responsibility, and it is the layer that decides whether the next agentic AI audit succeeds or fails.
Source: TechRepublic News