ASK defines the architectural properties — enforcement, mediation, governance, and trust — so you can build agent systems that are secure, auditable, and compliant. Open, vendor-neutral, and aligned with the regulatory frameworks that matter.
"Agents are principals to be governed, not tools to be configured."
Most AI security guidance is aspirational. ASK is operational — specific enough for an engineer to implement, an auditor to verify, and a regulator to accept.
ASK doesn't say "ensure appropriate oversight." It defines concrete properties — enforcement outside the agent boundary, complete mediation, immutable audit trails. Things you can implement, test, and verify. Any stack, any platform.
Every action is traced. Every trust relationship is documented. Every constraint state is reconstructible. ASK maps directly to EU AI Act, NIST AI RMF, SOC 2, HIPAA, and GDPR — the evidence is structural, not bolted on.
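The "structural, not bolted on" claim can be made concrete with a toy sketch of an append-only, hash-chained audit log, where tampering with any recorded action breaks every later hash. This is an illustration of the property, not ASK's specified log format; the class and method names are invented for the example.

```python
# Hypothetical sketch: an append-only audit log in which each entry
# commits to the hash of the previous entry, so the trace is
# reconstructible and tampering is detectable by recomputation.
import hashlib
import json


class AuditLog:
    """Append-only log; each entry chains to the previous entry's hash."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, detail: dict) -> str:
        entry = {
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append((digest, entry))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any altered entry invalidates the log."""
        prev = "0" * 64
        for digest, entry in self._entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True


log = AuditLog()
log.record("agent-7", "tool_call", {"tool": "search", "query": "q1"})
log.record("agent-7", "file_write", {"path": "/tmp/out"})
assert log.verify()
```

In a real deployment the log would live outside the agent boundary (the agent can append but never rewrite), which is what makes the evidence structural rather than bolted on.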
Agents built on ASK have governance that scales from a laptop to an enterprise fleet. The same invariants apply at every level — which means you can ship agent-powered products into environments with real security requirements.
Each invariant is a binary condition — it holds or it doesn't. When someone asks "how do I know your AI is safe?" — the answer is architectural proof, not a promise.
An agent has state the operator controls (constraints), state the agent accumulates over time (identity), and state that lives only in the current session. ASK defines where these boundaries are and who owns each layer — because the security model depends on keeping them separate.
The three outer layers are independently replaceable: swap the Body (change agent frameworks), swap the Workspace (reimage without losing state), swap the Mind (change the agent's role). Within the Mind, the critical security boundary is between Constraints (operator-owned, read-only) and Identity (agent-owned, writable). An agent that can write to its own constraints can rewrite its own rules — the architecture makes this structurally impossible.
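The Constraints/Identity split can be sketched in a few lines: the operator-owned layer is frozen before the agent runs and exposed only through a read-only view, while the agent-owned layer stays writable. The class and field names below are illustrative, not defined by ASK.

```python
# Hypothetical sketch of the Mind's security boundary:
# Constraints are operator-owned and read-only to the agent;
# Identity is agent-owned and freely writable.
from types import MappingProxyType


class MindState:
    def __init__(self, constraints: dict):
        # Operator-owned layer: a read-only view over a private copy.
        self._constraints = MappingProxyType(dict(constraints))
        # Agent-owned layer: accumulates over time, agent may write.
        self.identity = {}

    @property
    def constraints(self):
        return self._constraints


mind = MindState({"may_write_files": False})
mind.identity["lesson"] = "prefer dry runs"      # allowed: agent-owned state

try:
    mind.constraints["may_write_files"] = True   # agent rewriting its rules
except TypeError:
    print("constraint write rejected")
```

An in-process guard like this only illustrates the ownership split; to make self-modification *structurally* impossible, the read-only property has to be enforced outside the agent boundary, which is exactly what the architecture requires.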
You don't give a new hire the keys to production on day one. Agents are the same. ASK defines a trust spectrum so operators can set boundaries and users can set their own comfort level — and agents can earn more autonomy through observed behavior, not just configuration.
Human confirms every action. The new hire with a senior looking over their shoulder.
Human reviews batches. Agent proceeds on clear cases, flags the rest. Trusted but verified.
Agent operates independently within defined bounds, escalating only exceptions. The experienced team member.
Humans set goals, agent manages scope. Highest trust, earned through track record.
Trust elevation always requires human approval. Trust reduction can be automatic. No agent — and no human — can self-promote.
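The asymmetry between elevation and reduction is simple enough to state as code. A minimal sketch, assuming four tiers matching the descriptions above; the level names and function signature are illustrative, not from the ASK spec.

```python
# Hypothetical sketch of the trust spectrum's one-way promotion rule:
# moving down is always allowed (and can be automatic), moving up
# requires explicit human approval -- no self-promotion.
from enum import IntEnum


class TrustLevel(IntEnum):
    SUPERVISED = 0   # human confirms every action
    VERIFIED = 1     # human reviews batches, agent proceeds on clear cases
    BOUNDED = 2      # independent within bounds, escalates exceptions
    DELEGATED = 3    # humans set goals, agent manages scope


def change_trust(current: TrustLevel, proposed: TrustLevel,
                 human_approved: bool) -> TrustLevel:
    """Elevation requires human approval; reduction is unconditional."""
    if proposed <= current:
        return proposed          # demotion: always allowed, can be automatic
    if human_approved:
        return proposed          # promotion: only with human sign-off
    return current               # request denied: no self-promotion


# An agent asking for more autonomy gets nothing without a human.
assert change_trust(TrustLevel.VERIFIED, TrustLevel.BOUNDED,
                    human_approved=False) == TrustLevel.VERIFIED
# A monitor demoting a misbehaving agent needs no approval at all.
assert change_trust(TrustLevel.BOUNDED, TrustLevel.SUPERVISED,
                    human_approved=False) == TrustLevel.SUPERVISED
```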
ASK works as context for any AI coding assistant. Point your tool at the framework and start getting security review on every design decision.
Install the ASK plugin. Gives you /ask for security review, plus skills for threat analysis and secure architecture design.
Add the ASK marketplace and install the plugin. Same skills — review, threat analysis, and design.
Install from the Command Palette, or add the marketplace to your settings for browse-and-install.
Clone the ASK repo and install the plugin into your project.
Clone the ASK repo into your project. Cursor, Windsurf, Cline, and other tools that read project context will pick up the framework automatically.
AI regulation is moving fast. ASK was designed with auditability and demonstrable compliance in mind from the start. Full tenet-by-tenet mappings are available in REGULATORY.md — contributions welcome.
Working on AI compliance in a regulated industry? Contributions to the regulatory mapping work are welcome — open an issue or PR on GitHub. This is community work, not vendor work.
ASK is a complete framework — not just principles. Each document serves a different audience and use case.
Tenets, cognitive model, trust spectrum, principal hierarchy, policy model, and agent lifecycle. Start here.
Risks organized by attack surface — runtime, network, ingress, agent state, multi-agent, governance. Cross-referenced to MITRE ATLAS technique IDs.
Reference defense architecture — enforcement layers, topology, runtime gateway, guardrails stack, trust tiers, scaling patterns, and verification tests.
Implementation guidance for novel threats — XPIA kill chain, MCP tampering, delegation poisoning, identity corruption, behavioral drift, and oversight fatigue.
Tenet-by-tenet mappings to EU AI Act, NIST AI RMF, SOC 2, HIPAA, GDPR, and SEC AI Guidance. Honest about gaps.
Known gaps and open questions. ASK documents what it doesn't yet cover — because honest gap reporting builds more trust than overclaiming.
Agency is the reference implementation of ASK — an open source platform for orchestrating teams of AI agents with full enforcement architecture.
Implements the single-agent architecture with all core enforcement layers: network isolation, mediation proxy, XPIA guardrails, per-agent enforcement sidecar, container hardening, runtime gateway, and continuous monitoring.
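The mediation-proxy idea — every tool call crosses a checkpoint the agent does not control, and every decision is logged — can be illustrated with a toy gateway. This is not Agency's actual gateway; the names and allow-list mechanism are invented for the sketch.

```python
# Hypothetical sketch of complete mediation: the agent never calls a
# tool directly -- every invocation passes through a gateway that
# checks an operator-owned allow-list and records an audit event.
class MediationGateway:
    def __init__(self, allowed_tools: set, audit):
        self._allowed = frozenset(allowed_tools)  # operator-owned
        self._audit = audit                       # callable(event: dict)

    def invoke(self, agent_id: str, tool: str, args: dict):
        decision = "allow" if tool in self._allowed else "deny"
        self._audit({"agent": agent_id, "tool": tool, "decision": decision})
        if decision == "deny":
            raise PermissionError(f"{tool} not permitted for {agent_id}")
        return TOOLS[tool](**args)


# A trivial tool registry for the example.
TOOLS = {"echo": lambda text: text}

events = []
gw = MediationGateway({"echo"}, events.append)
print(gw.invoke("agent-7", "echo", {"text": "hi"}))
```

The point of the pattern is that both the allow-list and the audit sink sit outside the agent's writable state, so denial and logging cannot be bypassed from inside the agent.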
Agency is not the only valid ASK implementation. Any platform that satisfies the tenets is ASK-compliant. If you've built an ASK-compliant system and want to be listed here, open a PR.