Fail closed by default
Unmatched actions require approval unless you explicitly opt into fail-open behavior.
AI tool-call approval gate
Block dangerous actions, require human approval for risky tool calls, and mint executable commands only after the policy approves them.
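The gate's three outcomes can be sketched as a rule-driven evaluator. This is an illustrative sketch, not the actual `@pallattu/aeg-intent-gate` API: the `Decision`, `ToolCall`, and `Rule` names and the example rules are assumptions made up for this demo.

```typescript
// Hypothetical sketch of a three-outcome policy gate; names and rules are
// illustrative, not the package's real API.
type Decision = "approved" | "requires_approval" | "blocked";

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

interface Rule {
  matches: (call: ToolCall) => boolean;
  decision: Decision;
}

// First matching rule wins; unmatched calls fail closed to human approval.
function evaluate(call: ToolCall, rules: Rule[]): Decision {
  for (const rule of rules) {
    if (rule.matches(call)) return rule.decision;
  }
  return "requires_approval";
}

const rules: Rule[] = [
  { matches: c => c.tool === "send_email", decision: "approved" },
  {
    // Hypothetical threshold: large refunds need a human in the loop.
    matches: c => c.tool === "refund" && Number(c.args.amount) > 1000,
    decision: "requires_approval",
  },
  { matches: c => c.tool === "shell", decision: "blocked" },
];

console.log(evaluate({ tool: "send_email", args: {} }, rules));            // approved
console.log(evaluate({ tool: "shell", args: { cmd: "rm -rf /" } }, rules)); // blocked
```

Failing closed means the last line of `evaluate` is `requires_approval`, not `approved`: a tool call no rule anticipated is escalated rather than silently allowed.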
$ npx @pallattu/aeg-intent-gate
safe email: approved
large refund: requires_approval
dangerous shell: blocked
Live browser demo
Adapters
OpenAI function calls, Anthropic tool-use blocks, and MCP tool calls can all pass through the same policy lifecycle.
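One way to share a single policy lifecycle is to normalize each provider's payload into one shape before evaluation. The sketch below assumes the public payload shapes (OpenAI function calls carry `arguments` as a JSON string; Anthropic tool-use blocks carry `input` as an object); the `NormalizedCall` type and adapter function names are hypothetical, not the package's real exports.

```typescript
// Hypothetical adapter sketch: reduce provider-specific tool calls to one
// shape so a single policy lifecycle can evaluate all of them.
interface NormalizedCall {
  tool: string;
  args: Record<string, unknown>;
}

// OpenAI function calls serialize arguments as a JSON string.
function fromOpenAI(call: { name: string; arguments: string }): NormalizedCall {
  return { tool: call.name, args: JSON.parse(call.arguments) };
}

// Anthropic tool-use blocks carry input as an already-parsed object.
function fromAnthropic(block: { name: string; input: Record<string, unknown> }): NormalizedCall {
  return { tool: block.name, args: block.input };
}

// Either source feeds the same downstream policy check.
const a = fromOpenAI({ name: "send_email", arguments: '{"to":"a@example.com"}' });
const b = fromAnthropic({ name: "refund", input: { amount: 5000 } });
```

Because the policy only ever sees `NormalizedCall`, adding a new provider (e.g. an MCP client) is one adapter function, not a new policy path.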
Install
npm install @pallattu/aeg-intent-gate
npx @pallattu/aeg-intent-gate
Approved commands use the evaluated payload snapshot, not mutable intent metadata.
Side-effecting code should accept only approved command objects, never raw model output.
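The two guarantees above can be sketched as a type-level contract: the executor accepts only an `ApprovedCommand`, and approval freezes a snapshot of the payload so later mutation of the intent cannot change what runs. The `ApprovedCommand` type and `approve`/`execute` names here are illustrative assumptions, not the package's real API.

```typescript
// Hypothetical sketch of the "approved commands only" pattern.
interface ApprovedCommand {
  readonly tool: string;
  readonly payload: Readonly<Record<string, unknown>>;
  readonly approvedAt: Date;
}

function approve(tool: string, payload: Record<string, unknown>): ApprovedCommand {
  // Snapshot and freeze at evaluation time, so mutating the original intent
  // afterward has no effect on the command that executes.
  return Object.freeze({
    tool,
    payload: Object.freeze({ ...payload }),
    approvedAt: new Date(),
  });
}

// Side-effecting code takes ApprovedCommand, never raw model output.
function execute(cmd: ApprovedCommand): string {
  return `executing ${cmd.tool}`;
}

const intent = { to: "a@example.com" };
const cmd = approve("send_email", intent);
intent.to = "attacker@example.com"; // mutation after approval is ignored
console.log(cmd.payload.to);        // a@example.com
```

The point of the signature is that raw model output cannot reach `execute` without passing through `approve` first: the type system enforces the gate, not caller discipline.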