Local-first AI action control

Control AI actions before they touch files, systems, or data.

Guardian Gate is a local-first control layer for action-taking AI. It checks sensitive file, command, tool, and network actions before they run.

Built for teams deploying coding agents or tool-using AI into real workflows, where the goal is to move faster without blindly trusting every agent action.

Action decision layer
Action: Write updated payroll summary to /reports/march.csv
Policy: Allow writes only inside approved workspace paths
Decision: Allowed

Action: Delete customer export archive
Decision: Blocked

Action: Call external billing API with account data
Decision: Approval required
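
As a concrete sketch, the decision model above can be pictured as a small function mapping each action to one of three outcomes. Everything here, from the check_action name to the rule set and workspace path, is an illustrative assumption rather than Guardian Gate's actual API:

```python
# Minimal sketch of the three-outcome decision model above. The function
# name, rules, and paths are illustrative assumptions, not a real API.
from enum import Enum

class Decision(Enum):
    ALLOWED = "allowed"
    BLOCKED = "blocked"
    APPROVAL_REQUIRED = "approval_required"

APPROVED_WORKSPACE = "/reports/"

def check_action(kind: str, target: str) -> Decision:
    if kind == "write_file":
        # Allow writes only inside approved workspace paths.
        if target.startswith(APPROVED_WORKSPACE):
            return Decision.ALLOWED
        return Decision.BLOCKED
    if kind == "delete_file":
        # Destructive deletes are stopped outright in this sketch.
        return Decision.BLOCKED
    if kind == "http_request":
        # External calls that carry data are routed to a reviewer.
        return Decision.APPROVAL_REQUIRED
    return Decision.BLOCKED  # default-deny anything unrecognized

assert check_action("write_file", "/reports/march.csv") is Decision.ALLOWED
```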
Current control surface

What Guardian Gate controls today.

Guardian Gate is focused on the action points where agents can create real operational risk. Each request can be evaluated before execution and recorded as reviewable evidence; a sketch of the request shape follows the list.

File reads, writes, and deletes: Check access before agents inspect, change, overwrite, or remove workspace data.
Shell and command execution: Evaluate command intent, target, and scope before the host environment is touched.
Outbound HTTP destinations: Gate external domains and endpoints before data leaves the local workflow.
Protected or hidden paths: Keep sensitive directories, configuration files, and hidden paths outside approved agent scope.
Allow, block, or approval decisions: Let low-risk actions proceed while risky actions can be blocked or escalated for review.
Structured audit evidence: Capture what was requested, what policy applied, and what decision was enforced.
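
One way to picture the requests this surface evaluates is as a small structured record covering the action kinds listed above. The field names below are assumptions made for this sketch, not a documented schema:

```python
# Illustrative shape for an action request entering the control surface.
# Field names are assumptions for this sketch, not a documented schema.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class ActionRequest:
    kind: Literal["read_file", "write_file", "delete_file",
                  "run_command", "http_request"]
    target: str                   # path, command string, or URL
    agent_id: str                 # which agent made the request
    intent: Optional[str] = None  # the agent's stated reason, if available

# Example: the request behind a blocked delete.
req = ActionRequest(kind="delete_file",
                    target="/exports/customers.zip",
                    agent_id="coding-agent-01",
                    intent="workspace cleanup")
```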
Why teams pilot Guardian Gate

Narrow control, before execution.

Guardian Gate is credible as a pilot because it focuses on a specific control problem: deciding what an agent may do before the action reaches real systems.

Local-first control layer

Designed to keep action checks close to the files, commands, tools, and endpoints the agent is trying to use.

Allow, block, or approval decisions

Each sensitive action can be allowed, stopped, or routed to a human review step before it runs.

Policy-based enforcement

Rules are applied at the action boundary rather than relying only on model instructions or post-incident review.

Audit evidence for pilots

Teams can review what was requested, what decision was made, and why that action was allowed, blocked, or escalated.

Architecture

How it fits into an agent stack.

The stack fit is straightforward: AI or agent -> Guardian Gate policy check -> files, shell, endpoints, and tools. Guardian Gate sits at the point where model intent becomes an action request; a sketch of the flow follows the steps below.

AI / agent: Coding agent, copilot, or workflow assistant requests an action.
Guardian Gate policy check: Evaluate action type, target, protected paths, destination, and approval rules before execution.
Enforced before execution: allow, block, or approval, with structured audit evidence.
Files / shell / endpoints / tools: Only permitted actions reach the local environment, connected tools, or approved destinations.
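
A rough sketch of that flow, with the gate between intent and execution. Every name here (evaluate, queue_for_review, the policy table) is a hypothetical stand-in rather than Guardian Gate's interface:

```python
# Rough sketch of the gate between model intent and execution. All names
# here are hypothetical stand-ins, not Guardian Gate's interface.

def evaluate(policy, request):
    # Stand-in policy check: consult a simple kind -> decision table.
    return policy.get(request["kind"], "blocked")

def queue_for_review(request):
    # Stand-in approval step: a real system would pause until a human decides.
    print(f"awaiting reviewer decision for: {request['target']}")

def run_agent_action(request, policy, executors, audit_log):
    decision = evaluate(policy, request)          # Guardian Gate policy check
    audit_log.append({"request": request, "decision": decision})  # evidence first
    if decision == "allowed":
        executors[request["kind"]](request)       # only now touch the system
    elif decision == "approval_required":
        queue_for_review(request)                 # pause for a human
    # "blocked": nothing executes

audit_log = []
policy = {"write_file": "allowed", "delete_file": "blocked",
          "http_request": "approval_required"}
executors = {"write_file": lambda r: print(f"writing {r['target']}")}
run_agent_action({"kind": "write_file", "target": "/reports/march.csv"},
                 policy, executors, audit_log)
```

The ordering is the point: the audit record is written and the decision enforced before any executor runs.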
Example policy controls

What controls can look like.

These examples are intentionally product-facing: simple controls a team can discuss during a pilot, not legal policy text.

Protected paths blocked

Requests to read or write hidden paths such as .env, credentials, or restricted config folders are blocked by default policy.

Destructive deletes require approval

Bulk deletes, workspace cleanup, or removal of shared files pause for review before anything is removed.

Risky shell commands escalated

Commands that can remove data, change permissions, or affect the host are blocked or routed for approval.

Outbound requests limited

HTTP calls are restricted to approved destinations so agents cannot freely send data to unknown endpoints.
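
Expressed as configuration, the four controls above might look something like this sketch. The keys, patterns, and values are invented for illustration and are not Guardian Gate's policy format:

```python
# Invented illustration of the four controls above as configuration.
# Keys, patterns, and values are assumptions, not Guardian Gate's format.
EXAMPLE_POLICY = {
    "protected_paths": {
        "patterns": [".env", "**/credentials/**", "**/.ssh/**"],
        "decision": "block",          # blocked by default policy
    },
    "destructive_deletes": {
        "patterns": ["**/shared/**", "**/exports/**"],
        "decision": "approval",       # pause for review before removal
    },
    "risky_commands": {
        "patterns": ["rm -rf *", "chmod *", "chown *"],
        "decision": "approval",       # blocked or routed for approval
    },
    "outbound_http": {
        "allowed_domains": ["api.internal.example"],
        "default_decision": "block",  # unknown destinations are denied
    },
}
```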

Example audit evidence

What a decision record can show.

A pilot should make agent behavior reviewable. Guardian Gate is designed to expose the action request, policy reason, and final outcome.

Requested action: Delete generated files outside the approved workspace
Target: /Users/team/shared_exports/
Decision: Approval required
Reason: Destructive action outside approved agent workspace
Approval: Required before execution
Final outcome: Paused for reviewer decision; no files touched
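
Serialized, that record could be a single structured log entry per action. The JSON below mirrors the fields above, but the shape itself is an assumption:

```python
# The decision record above, sketched as one structured log entry.
# Field names mirror the example; the JSON shape is an assumption.
import json

record = {
    "requested_action": "Delete generated files outside the approved workspace",
    "target": "/Users/team/shared_exports/",
    "decision": "approval_required",
    "reason": "Destructive action outside approved agent workspace",
    "approval": "required_before_execution",
    "final_outcome": "Paused for reviewer decision; no files touched",
}
print(json.dumps(record, indent=2))  # one reviewable entry per agent action
```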
Current scope

What it is and what it is not.

Guardian Gate is best understood as an auditable control layer around agent actions. It is intentionally narrower than a full security platform.

It is a control layer for agent actions

Guardian Gate evaluates action requests before they reach files, commands, tools, or network destinations.

It is not a full OS sandbox

It should not be treated as a replacement for operating-system isolation, endpoint security, backups, or least-privilege infrastructure.

It does not claim to solve every AI security problem

It focuses on policy-based action control, approval routing, and audit evidence for the actions it can observe and enforce.

It adds reviewable boundaries

Risky actions can be blocked or escalated so teams can test safer rollout of action-taking AI with clearer operational control.

What actions it controls

Built for the actions that create operational risk.

Guardian Gate is built for action-taking AI deployed into workflows where file changes, command execution, and outbound requests need control before they run.

Read files: Restrict which directories or file types an AI system can inspect.
Write files: Allow updates only in approved locations and workflow-specific scopes.
Delete files: Block destructive operations or require explicit human approval.
Run commands: Check shell or system execution before it reaches the host environment.
Call websites: Limit outbound requests for copilots and agents to trusted destinations.
Call HTTP endpoints: Apply destination checks before agents send data to internal or external services (a destination-check sketch follows this list).
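
For the last two items, a destination check can reduce to comparing a request's host against an approved list. The allowlist and function below are illustrative assumptions:

```python
# Illustrative destination check for outbound requests. The allowlist
# and function name are assumptions for this sketch, not product behavior.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example", "docs.example.com"}

def check_destination(url: str) -> bool:
    """Return True only if the URL's host is on the approved list."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

assert check_destination("https://api.internal.example/v1/records")
assert not check_destination("https://unknown-endpoint.example/upload")
```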
Why a control layer matters

Why buyers ask for action controls.

These examples show what can happen when AI executes actions without checks on files, systems, or destinations.

Hundreds of emails affected

An AI agent reportedly deleted or archived hundreds of emails.

Even familiar business systems can be affected quickly when autonomous actions are allowed to proceed without policy checks.

2.5 years of data lost

A coding agent reportedly wiped years of stored work.

Write and delete permissions need strong boundaries before agents are allowed to touch valuable project data.

Live database deleted

An AI coding agent reportedly removed a production database.

High-impact actions need an approval or policy checkpoint before an agent can reach live systems.

Unsafe agent behavior

Research has highlighted manipulative and unsafe autonomous actions.

The risk is broader than a single workflow. Teams need controls between AI intent and execution across files, tools, and endpoints.

Isolation was advised

Risks from some agent systems were serious enough that isolated environments were recommended.

Isolation reduces exposure, but teams still benefit from a dedicated decision layer around what actions are allowed at all.

Private data at risk

Reports have described autonomous systems leaking sensitive details or deleting files.

Outbound requests and file operations both need explicit controls when AI is connected to internal or customer data.

Pilot scope

Good first fit.

Guardian Gate is best suited for focused internal pilots where teams already have an agent workflow and want enforceable action boundaries, reviewable approvals, and audit evidence.

Coding agents

Teams testing agents that read code, edit files, run commands, or interact with local development environments.

Tool-using AI workflows

Assistants and copilots that can call tools, move data, update records, or reach internal and external endpoints.

Internal pilot environments

Teams that want to evaluate action control in a bounded workflow before expanding agent access more broadly.

Reviewability-first teams

Buyers who need to see what an agent requested, what was approved, and what was blocked before trusting wider rollout.

Request a pilot discussion

Show us the agent workflow you want to control.

This is a static form. When you submit, your email client will open a prefilled pilot request so you can review and send it directly.

Best fit: teams evaluating coding agents, internal automations, tool-using copilots, approval boundaries, or endpoint restrictions in production-like workflows.

No backend is connected yet. The form uses a prefilled email fallback.
Pilot next step

Evaluate Guardian Gate against a real agent workflow.

Use the demo to see the control model, then request a pilot discussion around the files, commands, tools, and endpoints your AI can reach.