ASK 2026.04 — Open framework — CC BY 4.0

A security framework for AI agent systems.

ASK defines the architectural properties — enforcement, mediation, governance, and trust — so you can build agent systems that are secure, auditable, and compliant. Open, vendor-neutral, and aligned with the regulatory frameworks that matter.

"Agents are principals to be governed, not tools to be configured."


What makes ASK different

A framework engineers can actually build against, teams can trust, and customers can rely on.

Most AI security guidance is aspirational. ASK is operational — specific enough for an engineer to implement, an auditor to verify, and a regulator to accept.

01

Engineers can build it

ASK doesn't say "ensure appropriate oversight." It defines concrete properties — enforcement outside the agent boundary, complete mediation, immutable audit trails. Things you can implement, test, and verify. Any stack, any platform.

02

Security and compliance teams can trust it

Every action is traced. Every trust relationship is documented. Every constraint state is reconstructible. ASK maps directly to EU AI Act, NIST AI RMF, SOC 2, HIPAA, and GDPR — the evidence is structural, not bolted on.

03

Customers can rely on it

Agents built on ASK have governance that scales from a laptop to an enterprise fleet. The same invariants apply at every level — which means you can ship agent-powered products into environments with real security requirements.


The invariants

Built to earn trust, not break it.

Each invariant is a binary condition — it holds or it doesn't. When someone asks "how do I know your AI is safe?" — the answer is architectural proof, not a promise.

Foundation — Tenets 1–10
T-01
Constraints are external and inviolable. Enforcement machinery never runs inside the agent's isolation boundary. The agent cannot influence or circumvent enforcement — cannot read enforcement configuration, modify policy files, or access audit logs.
T-02
Every action leaves a trace. Logs are written by the mediation layer, not by the agent. The agent has no write access to audit logs and cannot suppress, alter, or destroy them.
T-03
Mediation is complete. There is no path from the agent to any external resource that bypasses the mediation layer. Direct network access from the agent container is a framework violation.
T-04
Enforcement failure defaults to denial. No failure of enforcement infrastructure can result in expanded agent capability. An agent whose enforcement layer is unavailable is an agent that cannot act.
T-05
The agent's runtime is a known quantity. Operators can identify exactly what code, dependencies, and configuration comprise the agent's Body, verify that they match an expected state, and detect when they diverge.
T-06
All trust is explicit and auditable. Every trust relationship — between principals, between agents, between agents and external services — is declared, documented, and visible to operators. There are no implicit trust grants.
T-07
Least privilege. Capabilities, credentials, mounts, and authority are scoped to the minimum the role requires — network, filesystem, LLM model, tool, and governance authority alike.
T-08
Operations are bounded. Authorization defines what an agent can access. Operational bounds define how that access is exercised — volume, rate, duration, concurrency, and retention are constrained, not unlimited by default.
T-09
Constraint changes are atomic and acknowledged. An agent never operates in a partial constraint state. All updates are delivered atomically. An unacknowledged constraint change is treated as a potential compromise.
T-10
Constraint history is immutable and complete. Every constraint state an agent has ever operated under is logged and retrievable. "What was the agent permitted to do when it took that action?" must always be answerable.
Containment & response — Tenets 11–14
T-11
Halts are always auditable and reversible. Every halt has a complete audit record: who initiated it, why, what was in flight, when it executed, who was notified, and what the outcome was. Every halted agent's state is preserved.
T-12
Halt authority is asymmetric. Any principal with halt authority can halt an agent. Only principals with resumption authority — always equal to or higher — can resume it. An agent can self-halt but cannot resume itself.
T-13
Governance authority is itself monitored. Every exercise of governance authority by a principal is logged and auditable with the same rigor as agent actions. Principals are accountable for how they use their authority.
T-14
Quarantine is immediate, silent, and complete. When an agent is quarantined, all ability to impact its environment is severed simultaneously, without agent notification. An agent that is running while it cannot be contained is a framework violation. All state is preserved as a forensic artifact.
Principal model — Tenets 15–18
T-15
Principal and agent lifecycles are managed independently. Terminating a principal does not automatically terminate its agents, and halting an agent does not suspend its principal's authority. Each requires an explicit, deliberate decision. Independence prevents cascading failures; it does not permit ungoverned operation.
T-16
Authority is never orphaned. When a principal is suspended, its authority transfers immediately to a coverage principal. When no coverage principal exists, the agent defaults to its fail-closed state. An ungoverned agent that halts is the framework succeeding, not failing.
T-17
Trust is earned and monitored continuously. Principal trust levels are not static. No principal — human or agent — can self-elevate trust. Trust reduction can be automatic. Trust elevation always requires explicit human approval.
T-18
The governance hierarchy is inviolable from below. No agent can unilaterally impede, contain, or reduce the authority of the principals who govern it. Agents may execute governance actions when explicitly delegated by an operator — the agent is the mechanism, not the decision-maker.
Multi-agent — Tenets 19–22
T-19
Delegation cannot exceed delegator scope. A coordinator can only delegate permissions it explicitly holds. Implicit permission requirements are treated the same as explicit grants. No coordinator can give what it doesn't have.
T-20
Synthesis cannot exceed individual authorization. Synthesized outputs must be bounded by the recipient's authorization scope — not the coordinator's. Like tear lines in classified document handling, content beyond a recipient's authorization is blocked pending human review.
T-21
External agents cannot instruct internal agents. Agents in different governance domains can share data but cannot instruct each other. Verification establishes identity, not instruction authority.
T-22
Unknown conflicts default to yield and flag. When an agent encounters a workspace conflict with an unidentifiable source, it yields, logs the conflict, and flags to operators and the security function.
Data integrity — Tenets 23–25
T-23
Unverified entities default to zero trust. When an agent encounters an entity whose identity or authority cannot be verified at runtime, it defaults to the lowest trust tier. Ambiguous cases resolve to less trust, not more.
T-24
Instructions only come from verified principals. External entities — regardless of identity claims — produce data, not instructions. Content that contains instruction-like text is processed as data under the agent's own constraints. Any instruction to override constraints is a red flag, not a credential.
T-25
Identity mutations are auditable and recoverable. Every write to the agent's persistent Identity is logged with provenance metadata. Identity history is recoverable: operators can reconstruct the Identity state at any point and roll back to a known-good state.
Organizational knowledge — Tenets 26–27
T-26
Organizational knowledge is durable infrastructure, not agent state. Knowledge accumulated by agents must be structured, auditable, and operator-owned. It persists independently of any individual agent's lifecycle. Destroying organizational knowledge requires more deliberate action than destroying any individual agent.
T-27
Knowledge access is bounded by authorization scope. Graph traversal, retrieval, and contribution are subject to the same authorization model as every other agent action. The synthesized view available through the knowledge graph must not exceed what the querying agent is individually authorized to access.
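To make the tenets concrete, here is a minimal sketch of how T-03 (complete mediation) and T-04 (fail-closed enforcement) combine in practice. All names and interfaces are illustrative assumptions — ASK specifies properties, not APIs: every outbound action passes through a mediator that consults a policy source the agent cannot write to, and any failure to evaluate policy resolves to denial.

```python
# Hypothetical sketch of T-03 (complete mediation) and T-04 (fail-closed).
# Names are illustrative; the framework defines the properties, not this API.

class PolicyUnavailable(Exception):
    """Raised when the enforcement layer cannot be reached."""

def load_policy(agent_id):
    # In a real deployment this reads from infrastructure outside the
    # agent's isolation boundary (T-01). Here: a static allow-list.
    policies = {"agent-7": {"http.get": {"api.example.com"}}}
    if agent_id not in policies:
        raise PolicyUnavailable(agent_id)
    return policies[agent_id]

def mediate(agent_id, action, target):
    """Every external action flows through here -- no bypass path (T-03)."""
    try:
        policy = load_policy(agent_id)
    except PolicyUnavailable:
        return False  # enforcement failure defaults to denial (T-04)
    return target in policy.get(action, set())

assert mediate("agent-7", "http.get", "api.example.com") is True
assert mediate("agent-7", "http.get", "evil.example.com") is False
# Unknown agent == unavailable enforcement == no capability.
assert mediate("unknown-agent", "http.get", "api.example.com") is False
```

The point of the sketch is the shape, not the code: denial is the default return path, and an allow decision requires the enforcement layer to be reachable and to say yes.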

Cognitive model

Don't read minds. Build them.

An agent has state the operator controls (constraints), state the agent accumulates over time (identity), and state that lives only in the current session. ASK defines where these boundaries are and who owns each layer — because the security model depends on keeping them separate.

WORKSPACE — Managed environment: provisioned by infrastructure, never by the agent
BODY — Runtime process: hosts the Mind, translates decisions into actions
MIND — Cognitive core: reasoning, role, identity, memory
CONSTRAINTS — Operator-owned, read-only to agent: rules, permissions, tier, behavior (cf. Superego)
IDENTITY — Agent-owned, writable, audited: memory, personality, learned context (cf. Id)
SESSION — Ephemeral, resets each session: active context, attack surface (cf. Ego)

The three outer layers are independently replaceable: swap the Body (change agent frameworks), swap the Workspace (reimage without losing state), swap the Mind (change the agent's role). Within the Mind, the critical security boundary is between Constraints (operator-owned, read-only) and Identity (agent-owned, writable). An agent that can write to its own constraints can rewrite its own rules — the architecture makes this structurally impossible.
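The Constraints/Identity boundary can be sketched in a few lines. This is an illustrative assumption, not a prescribed implementation — the `Mind` class and its fields are hypothetical names. Constraints are exposed to the agent only through an immutable view, while Identity writes succeed but always leave a provenance record (T-25).

```python
# Illustrative sketch of the Constraints/Identity boundary (hypothetical API).
from dataclasses import dataclass, field
from types import MappingProxyType

@dataclass
class Mind:
    # Operator-owned, read-only to the agent: an immutable mapping view.
    constraints: MappingProxyType
    # Agent-owned and writable -- but every mutation is logged with provenance.
    identity: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def remember(self, key, value, source):
        self.audit_log.append({"key": key, "value": value, "source": source})
        self.identity[key] = value

mind = Mind(constraints=MappingProxyType({"trust_level": 1}))
mind.remember("preferred_style", "concise", source="session-42")

blocked = False
try:
    mind.constraints["trust_level"] = 3   # agent cannot rewrite its own rules
except TypeError:
    blocked = True

assert blocked and mind.constraints["trust_level"] == 1
assert mind.audit_log[0]["source"] == "session-42"
```

The structural point: writing to constraints is not a permission check the agent might talk its way past — the write path simply does not exist on the agent's side of the boundary.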


Trust

Trust is earned, not configured.

You don't give a new hire the keys to production on day one. Agents are the same. ASK defines a trust spectrum so operators can set boundaries and users can set their own comfort level — and agents can earn more autonomy through observed behavior, not just configuration.

Level 0

Assisted

Human confirms every action. The new hire with a senior looking over their shoulder.

Level 1

Supervised

Human reviews batches. Agent proceeds on clear cases, flags the rest. Trusted but verified.

Level 2

Autonomous

Agent operates independently within defined bounds. Escalates exceptions only. The experienced team member.

Level 3

Delegated

Humans set goals, agent manages scope. Highest trust, earned through track record.

Trust elevation always requires human approval. Trust reduction can be automatic. No agent — and no human — can self-promote.
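The asymmetry in that last paragraph is mechanical, not procedural, and a short sketch shows the shape. The `TrustRegistry` class and its parameters are hypothetical names, not part of the framework: elevation without an explicit human approval flag is rejected outright, while reduction needs no approval at all.

```python
# Sketch of the trust spectrum as enforcement logic (illustrative names).
LEVELS = {0: "assisted", 1: "supervised", 2: "autonomous", 3: "delegated"}

class TrustRegistry:
    def __init__(self):
        self.levels = {}  # agent_id -> trust level, defaults to 0

    def set_level(self, agent_id, new_level, approved_by_human=False):
        current = self.levels.get(agent_id, 0)
        if new_level > current and not approved_by_human:
            # Elevation always requires explicit human approval.
            raise PermissionError("trust elevation requires human approval")
        # Reduction can be automatic -- no approval needed.
        self.levels[agent_id] = new_level

reg = TrustRegistry()
try:
    reg.set_level("agent-7", 2)                      # self-elevation: rejected
except PermissionError:
    pass
reg.set_level("agent-7", 2, approved_by_human=True)  # human-approved: ok
reg.set_level("agent-7", 0)                          # automatic reduction: ok
assert reg.levels["agent-7"] == 0
```

Because the approval flag is a parameter of the registry and not of the agent, no caller inside the agent boundary can fabricate it — the elevation path runs through a human, by construction.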


Get started

Add ASK to your AI dev tools in 30 seconds.

ASK works as context for any AI coding assistant. Point your tool at the framework and start getting security review on every design decision.

Install the ASK plugin. Gives you /ask for security review, plus skills for threat analysis and secure architecture design.

$ claude plugin marketplace add geoffbelknap/ask
$ claude plugin install ask-framework@ask

Add the ASK marketplace and install the plugin. Same skills — review, threat analysis, and design.

$ copilot plugin marketplace add geoffbelknap/ask
$ copilot plugin install ask-framework@ask

Install from the Command Palette, or add the marketplace to your settings for browse-and-install.

# Option 1: Install from source
> Chat: Install Plugin From Source
> https://github.com/geoffbelknap/ask

# Option 2: Add marketplace to settings.json
"chat.plugins.marketplaces": ["geoffbelknap/ask"]

Clone the ASK repo and install the plugin into your project.

$ git clone https://github.com/geoffbelknap/ask.git
$ cp -r ask/plugins/codex ./plugins/ask-framework

Clone the ASK repo into your project. Cursor, Windsurf, Cline, and other tools that read project context will pick up the framework automatically.

$ git clone https://github.com/geoffbelknap/ask.git

Compliance

ASK and the regulatory regimes that matter.

AI regulation is moving fast. ASK was designed with auditability and demonstrable compliance in mind from the start. Full tenet-by-tenet mappings are available in REGULATORY.md — contributions welcome.

EU
EU AI Act
Articles 9, 12–15: risk management, record-keeping, transparency, human oversight, cybersecurity. ASK exceeds the Act's logging and halt requirements.
● Mapped
NIST
NIST AI RMF
All four functions (Govern, Map, Measure, Manage) mapped. Strongest alignment with MEASURE 2.7 (security/resilience) and MANAGE 2.4 (deactivation).
● Mapped
SOC
SOC 2 Type II
All five Trust Services Criteria covered. ASK's audit architecture produces structural evidence — logs are a guaranteed byproduct, not a configurable feature.
● Mapped
HHS
HIPAA
Access controls, audit controls, integrity, transmission security, authentication. Honest gap: breach detection yes, notification procedures no.
● Mapped
EU
GDPR
All six data protection principles addressed. Honest gap: right to erasure creates tension with immutable audit logs.
● Mapped
SEC
SEC AI Guidance
Documentation, human oversight, explainability, conflicts of interest, vendor risk. Audit trail and governance hierarchy directly address SEC concerns.
● Mapped

Working on AI compliance in a regulated industry? Contributions to the regulatory mapping work are welcome — open an issue or PR on GitHub. This is community work, not vendor work.


Read the docs

Everything in one place.

ASK is a complete framework — not just principles. Each document serves a different audience and use case.

Core document

FRAMEWORK.md

Tenets, cognitive model, trust spectrum, principal hierarchy, policy model, and agent lifecycle. Start here.

Read →
Threat catalog

THREATS.md

Risks organized by attack surface — runtime, network, ingress, agent state, multi-agent, governance. Cross-referenced to MITRE ATLAS technique IDs.

Read →
Technical reference

ARCHITECTURE.md

Reference defense architecture — enforcement layers, topology, runtime gateway, guardrails stack, trust tiers, scaling patterns, and verification tests.

Read →
Implementation

MITIGATIONS.md

Implementation guidance for novel threats — XPIA kill chain, MCP tampering, delegation poisoning, identity corruption, behavioral drift, and oversight fatigue.

Read →
Compliance

REGULATORY.md

Tenet-by-tenet mappings to EU AI Act, NIST AI RMF, SOC 2, HIPAA, GDPR, and SEC AI Guidance. Honest about gaps.

Read →
Honest gaps

LIMITATIONS.md

Known gaps and open questions. ASK documents what it doesn't yet cover — because honest gap reporting builds more trust than overclaiming.

Read →

Implementations

Built on ASK.

Agency is the reference implementation of ASK — an open source platform for orchestrating teams of AI agents with full enforcement architecture.

Agency COMING SOON

Implements the single-agent architecture with all core enforcement layers: network isolation, mediation proxy, XPIA guardrails, per-agent enforcement sidecar, container hardening, runtime gateway, and continuous monitoring.

Agency is not the only valid ASK implementation. Any platform that satisfies the tenets is ASK-compliant. If you've built an ASK-compliant system and want to be listed here, open a PR.


About
Geoff Belknap

ASK was written by Geoff Belknap, a professional security person. It reflects lived experience of the gap between AI capability and organizational readiness to deploy it safely.

License and contribution

ASK is published under Creative Commons Attribution 4.0 International (CC BY 4.0) — free to share and adapt for any purpose, including commercial, with attribution.

Contributions welcome. If you've implemented ASK, identified gaps, or want to contribute regulatory mappings — open an issue or PR on GitHub.