The agent decides.
The envelope enforces.
Envellum sits between your AI agents and the systems they touch. Allowed actions pass through. Nothing else does.
AI agents went from chat to action.
They read databases, write to ledgers, and call APIs on production systems. The security model that worked for chat assumed the language model would follow its instructions. It does not, reliably.
The same handful of failure classes keeps showing up. Patches fix bugs one at a time as they are found; new bugs in the same classes keep arriving.
Today's approaches all share one property: hope.
A system prompt hopes the agent follows it.
A monitor hopes to catch a bad call before it lands.
A filter hopes to match the right strings.
Envellum is not another guardrail.
The envelope sits between your agent and everything it touches. The decision does not depend on what the agent says, or on what a watcher model thinks it sees.
What is allowed is defined explicitly; anything else is denied at the boundary, before it executes.
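The shape of that boundary can be sketched in a few lines. This is a hypothetical illustration, not Envellum's actual API: every name here (`ALLOWED`, `execute`) is invented for the example. The point is the default: an action either appears on an explicit allowlist, or it never runs.

```python
# Hypothetical sketch of a deny-by-default boundary; not Envellum's API.
ALLOWED = {"read_invoice", "send_receipt"}  # the explicit allowlist

def execute(action: str, run):
    """Run `run` only if `action` is explicitly allowed.

    The decision looks at the requested action itself, never at what
    the agent said about it. Anything unlisted is denied before it
    executes; there is no path around the check.
    """
    if action not in ALLOWED:
        raise PermissionError(f"denied at the boundary: {action}")
    return run()

execute("read_invoice", lambda: "ok")  # listed: passes through
# execute("drop_table", ...) raises PermissionError before anything runs
```

Note what is absent: no prompt inspection, no output scanning. The allowlist is the whole decision.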
The envelope enforces.
Configured for your environment, the envelope holds rules like:
Your agent cannot read another customer's data.
Your agent cannot escalate its own permissions.
Your agent cannot call a tool you have not approved.
Your agent cannot tamper with the audit log.
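The rules above can be read as checks against a single envelope object. The sketch below is illustrative only, with hypothetical names throughout (`Envelope`, `check`, the tenant strings): tenant isolation and tool approval are enforced on every call, and every decision is recorded.

```python
# Hypothetical sketch of envelope rules; not Envellum's actual API.
from dataclasses import dataclass, field

@dataclass
class Envelope:
    tenant: str                # the one customer this agent acts for
    approved_tools: frozenset  # tools you have explicitly approved
    audit: list = field(default_factory=list)  # every decision is logged

    def check(self, tool: str, target_tenant: str) -> bool:
        """Allow a call only if the tool is approved and the data
        belongs to this agent's own tenant; log the decision either way."""
        ok = tool in self.approved_tools and target_tenant == self.tenant
        self.audit.append((tool, target_tenant, "allow" if ok else "deny"))
        return ok

env = Envelope("acme", frozenset({"read_account"}))
env.check("read_account", "acme")    # own tenant, approved tool: allowed
env.check("read_account", "globex")  # another customer's data: denied
env.check("grant_admin", "acme")     # unapproved tool: denied
```

Self-escalation fails the same way: granting permissions is just another tool the envelope never approved. The audit log sits inside the envelope, not inside the agent, so the agent has nothing to tamper with.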
Where boundaries are not optional.
Healthcare
For agents that handle protected health data, where access flows have to hold up under audit.
Financial services
For agents that read customer accounts, where the controls have to hold up under external audit.
Government and enterprise
For agents under strict access controls, where every decision needs a record.
The threat model has changed.
Agents are being deployed faster than the security model can keep up. Adversaries have new tools too: defenses built for slow attackers do not hold against fast ones.
Stay in touch.
If you are building agents, securing them, or thinking about the same problem, drop us a note.