AI tool-call approval gate

Do not execute AI tool calls directly. Gate them first.

Block dangerous actions, require human approval for risky tool calls, and create executable commands only after the policy approves them.

$ npx @pallattu/aeg-intent-gate

safe email: approved
large refund: requires_approval
dangerous shell: blocked
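
The demo output above reflects a three-outcome policy decision. A minimal sketch of that lifecycle, with illustrative names (`Intent`, `evaluate`) that are assumptions, not the package's actual API:

```typescript
// Hypothetical sketch of the three-outcome gate; all names are illustrative.
type Verdict = "approved" | "requires_approval" | "blocked";

interface Intent {
  tool: string;
  payload: Record<string, unknown>;
}

// Example policy: block shell access, escalate large refunds, allow the rest.
function evaluate(intent: Intent): Verdict {
  if (intent.tool === "shell") return "blocked";
  if (intent.tool === "refund" && Number(intent.payload.amount) > 100) {
    return "requires_approval";
  }
  return "approved";
}
```

The point is that the model never produces a verdict; the policy does, and execution happens only on `approved`.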

Live browser demo

Model-proposed actions enter the approval queue before execution.

Adapters

Wrap the tool-call shapes developers already use.

OpenAI function calls, Anthropic tool-use blocks, and MCP tool calls can all pass through the same policy lifecycle.
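
A sketch of what such an adapter layer could look like. The converter names and the `Intent` shape are assumptions for illustration; only the vendor tool-call shapes (OpenAI function calls with JSON-string arguments, Anthropic tool-use blocks with an `input` object, MCP calls with an `arguments` object) come from the respective APIs:

```typescript
// Illustrative adapter layer: normalize three vendor tool-call shapes
// into one Intent that the policy can evaluate uniformly.
interface Intent {
  tool: string;
  payload: unknown;
}

// OpenAI-style function call: { name, arguments } where arguments is a JSON string.
function fromOpenAI(call: { name: string; arguments: string }): Intent {
  return { tool: call.name, payload: JSON.parse(call.arguments) };
}

// Anthropic-style tool_use block: { name, input } where input is an object.
function fromAnthropic(block: { name: string; input: unknown }): Intent {
  return { tool: block.name, payload: block.input };
}

// MCP-style tool call: { name, arguments } where arguments is an object.
function fromMCP(call: { name: string; arguments: unknown }): Intent {
  return { tool: call.name, payload: call.arguments };
}
```

Once normalized, all three flow through the same blocked / requires_approval / approved lifecycle.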


Install

Small core. Zero runtime dependencies.

npm install @pallattu/aeg-intent-gate
npx @pallattu/aeg-intent-gate

Fail closed by default

Unmatched actions require approval unless you explicitly opt into fail-open behavior.
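
The fail-closed semantics can be sketched as follows. The `Rule` shape and the `failOpen` option name are assumptions for illustration, not the package's configuration surface:

```typescript
// Sketch of fail-closed matching: an intent with no matching rule
// requires approval unless fail-open behavior is explicitly opted into.
type Verdict = "approved" | "requires_approval" | "blocked";
type Rule = { tool: string; verdict: Verdict };

function gate(
  tool: string,
  rules: Rule[],
  opts: { failOpen?: boolean } = {}
): Verdict {
  const rule = rules.find((r) => r.tool === tool);
  if (rule) return rule.verdict;
  // No rule matched: default to the safe outcome.
  return opts.failOpen ? "approved" : "requires_approval";
}
```

Failing closed means a forgotten rule degrades to a human review, not to an unreviewed side effect.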

Payload snapshots

Approved commands use the evaluated payload snapshot, not mutable intent metadata.
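
The snapshot idea can be illustrated in a few lines. This is a sketch of the concept, not the package's implementation; note that `Object.freeze` is shallow, so a production version would freeze recursively:

```typescript
// Take a deep copy of the payload at evaluation time and freeze it, so
// later mutations to the original intent cannot change what an approved
// command executes.
function snapshot<T>(payload: T): Readonly<T> {
  return Object.freeze(structuredClone(payload));
}
```

For example, mutating the intent after approval leaves the snapshot untouched, closing a time-of-check/time-of-use gap.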

Executor boundary

Side-effecting code should accept only approved command objects, never raw model output.
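
One way to enforce this boundary in TypeScript is a branded type that only the gate can mint. The names here (`ApprovedCommand`, `approve`, `execute`) are hypothetical, a sketch of the pattern rather than the library's API:

```typescript
// A branded command type: the __approved marker exists only on objects
// produced by the gate, so the executor cannot be handed raw model output.
type ApprovedCommand = {
  readonly __approved: true;
  tool: string;
  payload: unknown;
};

// In a real gate this would run the policy first; here it only brands.
function approve(tool: string, payload: unknown): ApprovedCommand {
  return { __approved: true, tool, payload };
}

// Side effects live behind this boundary and accept only branded commands.
function execute(cmd: ApprovedCommand): string {
  return `executed ${cmd.tool}`;
}

// execute({ tool: "shell", payload: {} }) — rejected by the type checker,
// because raw tool-call objects lack the __approved brand.
```

The compiler then enforces what the prose above states: raw model output has no path into side-effecting code.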