Last updated: March 24, 2026
AIPMO exists to help organizations govern AI responsibly. We believe that starts with governing our own. This page describes how we use AI across our platform, what data we collect and how we handle it, what safeguards we apply, and the governance frameworks that guide our decisions.
We publish this page not because we are required to, but because transparency is a foundational principle of trustworthy AI — and we intend to demonstrate it.
Our AI Systems
What Powers AIPMO
AIPMO uses Anthropic's Claude API across three distinct AI systems, each with a defined scope and purpose.
AIPMO Advisor — The AI governance chat interface at app.aipmo.co. Users ask questions about AI governance, risk management, regulatory compliance, and project delivery. The Advisor responds with guidance grounded in a curated knowledge base of 25+ established frameworks including NIST AI RMF 1.0 and Playbook, EU AI Act, ISO 42001, OECD AI Principles, UNESCO AI Ethics Recommendation, UK Pro-Innovation AI Regulation, UN AI Governance, Singapore IMDA frameworks, US Executive Order on Safe/Secure/Trustworthy AI, Stanford HAI, and ISACA COBIT AI Governance. Every response includes numbered citations tied to content actually retrieved from this knowledge base; the citation mechanism is designed so that responses cannot reference framework passages that were not retrieved.
AIPMO Document Customizer — A document generation system at app.aipmo.co. Users generate customized AI governance documents tailored to their organization's context, industry, maturity level, headquarters location, and operating regions. Available document types include AI Impact Assessments, NIST AI RMF-aligned plans, ISO 42001 implementation documents, OECD alignment documents, and more.
AIPMO Assistant — A scoped chat interface embedded on aipmo.co. This is a separate, lighter AI system powered by Claude Haiku. Its sole purpose is to answer questions about the AIPMO platform itself: what it is, how it works, plans and pricing, and how to navigate the site. It is explicitly prohibited from answering general AI governance questions and redirects those to the Advisor instead. It operates under strict rate limits by member tier.
What the AI Does Not Do
- The AI does not make autonomous decisions on behalf of users.
- The AI does not access external systems or databases. It processes only the information users provide within the platform.
- The AI does not learn or retain information from one user session to inform responses for other users.
- The AI does not replace professional legal, regulatory, or compliance advice.
How Content Is Generated
Advisor Chat Responses
When a user sends a message through the AIPMO Advisor, the system constructs a prompt that includes:
- System instructions — A carefully designed prompt that establishes the AI's role as an AI governance advisor, defines its knowledge boundaries, and sets response guidelines.
- Retrieved framework content — A RAG (Retrieval-Augmented Generation) pipeline retrieves the most relevant chunks from the ingested framework knowledge base using vector similarity search. Only actually retrieved content is available for citation — the AI cannot reference frameworks not present in the retrieved set.
- Numbered citations — Every response cites specific retrieved content by number. Users can verify which framework passages informed the response.
- Organizational context — If the user has set up an organization profile, this context is injected into the prompt so responses are tailored to the user's specific situation.
- Project context — If the user is working within a specific project, the project's details further refine the guidance.
- Uploaded documents — If the user has uploaded documents at the project level, these are made available as additional context for the AI's responses.
- Conversation history — The current conversation thread is included so the AI can maintain continuity within a session.
Each response is generated fresh — there is no persistent model fine-tuning or training on user data.
Document Generation
When a user generates a governance document:
- The user's organizational profile — including industry, size, maturity level, headquarters location, operating regions, and regulatory environment — is combined with project context to create a generation prompt.
- The AI generates a customized document adapted to the user's specific governance context, with regulatory framework emphasis shaped by the user's operating regions.
- The generated document includes a disclaimer indicating it was AI-assisted and should be reviewed by a qualified professional before use in governance decisions.
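One way the "regulatory framework emphasis shaped by the user's operating regions" step could work is a simple region-to-framework mapping. The mapping below is a hypothetical sketch for illustration; AIPMO's actual region logic and framework list may differ.

```python
# Hypothetical mapping from operating regions to frameworks to emphasize.
REGION_FRAMEWORKS = {
    "EU": ["EU AI Act"],
    "US": ["NIST AI RMF 1.0", "US Executive Order on Safe/Secure/Trustworthy AI"],
    "UK": ["UK Pro-Innovation AI Regulation"],
    "SG": ["Singapore IMDA frameworks"],
}

def frameworks_for(regions: list[str]) -> list[str]:
    """Return frameworks to emphasize for a profile's operating regions,
    de-duplicated and in a stable order."""
    seen: set[str] = set()
    out: list[str] = []
    for region in regions:
        for fw in REGION_FRAMEWORKS.get(region, []):
            if fw not in seen:
                seen.add(fw)
                out.append(fw)
    return out
```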
Site Assistant
When a visitor uses the Assistant on aipmo.co, only the user's message and recent conversation history are included in the prompt. No organizational or project data is used. If the visitor is a logged-in member, their membership tier is used to apply appropriate rate limits — no other member data is sent to the API.
Human Oversight by Design
Every AI-generated output on AIPMO is designed with the expectation of human review:
- All chat responses are presented as informational guidance, not directives.
- All generated documents are delivered as starting points, not final deliverables.
- Disclaimers appear on every page where AI-generated content is presented.
- Users are encouraged to have qualified professionals review any AI-generated material before applying it to governance decisions.
Data Collection and Handling
What We Collect
| Data Type | Purpose | Storage Location |
|---|---|---|
| Name and email | Account creation and authentication | Ghost CMS (aipmo.co) |
| Organization profile | Contextualizing AI responses | Supabase (encrypted) |
| Project details | Tailoring guidance to specific initiatives | Supabase (encrypted) |
| Uploaded documents | Providing project-level context for AI responses | Supabase (encrypted) |
| Conversation messages | Maintaining chat continuity within sessions | Supabase (encrypted) |
| Generated documents | Providing users access to their customized content | Supabase (encrypted) |
| Payment information | Processing subscriptions | Stripe (not stored by AIPMO) |
What We Send to the AI
When you interact with the AIPMO Advisor, the following information may be included in the API request sent to Anthropic's Claude:
- Your chat message
- Your organization profile (industry, size, maturity level, headquarters location, operating regions, regulatory environment)
- Your project context (type, deployment stage, risk level, data scope)
- Relevant content retrieved from the framework knowledge base
- Uploaded document content if present at the project level
- The current conversation history
We do not send your name, email address, payment information, or any personal identifiers to the AI API.
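A common way to enforce this kind of exclusion is an allow-list filter applied before the API request is built: fields not on the list can never reach the payload. The field names below are hypothetical illustrations of the categories listed above, not AIPMO's actual schema.

```python
# Hypothetical allow-list: only these fields may appear in an API request.
API_ALLOWED_FIELDS = {
    "message", "org_profile", "project_context",
    "retrieved_chunks", "uploaded_docs", "history",
}

def minimize(record: dict) -> dict:
    """Keep only allow-listed keys, so name, email, and payment
    data are structurally excluded from the request payload."""
    return {k: v for k, v in record.items() if k in API_ALLOWED_FIELDS}
```

An allow-list is safer than a deny-list here: a newly added field is excluded by default until someone deliberately adds it.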
What Anthropic Does With Your Data
AIPMO uses Anthropic's commercial API, which operates under Anthropic's Commercial Terms of Service. Under these terms:
- No model training. API inputs and outputs are not used to train or improve AI models. This is the default for all commercial API customers and requires no configuration by AIPMO or its users.
- 30-day retention. Anthropic automatically deletes API inputs and outputs from their backend within 30 days of receipt or generation.
- Safety exception. If content is flagged by Anthropic's trust and safety classifiers as potentially violating their Usage Policy, inputs and outputs may be retained for up to two years for enforcement purposes.
These protections are distinct from — and stronger than — the policies that apply to Anthropic's consumer products (Claude Free, Pro, and Max). AIPMO's use of the commercial API ensures that your governance conversations and document generation data receive enterprise-grade privacy protections.
Data Retention
- Conversation data is retained as long as your account is active or until you delete it.
- Organization and project profiles are retained as long as your account is active.
- Uploaded documents are retained as long as your account is active or until you delete them.
- Generated documents are retained as long as your account is active or until you delete them.
- Account deletion removes all associated data from our systems. Organization deletion cascades to remove all linked projects, conversations, and documents.
Safeguards and Risk Management
Guardrails We Apply
- Prompt engineering — System prompts are designed to keep each AI system focused on its defined scope, prevent harmful outputs, and maintain professional boundaries.
- RAG-based citation grounding — Advisor responses are anchored in actually retrieved framework content with numbered citations. Because citation numbers map directly to retrieved chunks, the system cannot cite framework passages absent from the retrieved set.
- Framework knowledge base curation — The 25+ frameworks in the knowledge base are ingested from authoritative published sources and versioned. Source diversity is enforced at retrieval time to prevent any single framework family from dominating responses.
- Context boundaries — Each AI system operates within defined knowledge domains and is instructed to acknowledge when questions fall outside its scope.
- Usage controls — Tier-based limits on message volume and API costs prevent runaway usage and ensure service stability.
- Disclaimer integration — Every surface where AI-generated content appears includes a disclaimer reminding users to seek qualified professional review.
- No autonomous actions — The AI cannot take actions, access external systems, or make decisions on behalf of users. It provides information and recommendations only.
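The source-diversity control mentioned above can be sketched as a cap on how many top-ranked chunks any one framework family may contribute. The function name, the cap of 2, and the `(family, score)` representation are assumptions for illustration, not AIPMO's actual retrieval code.

```python
def diversify(ranked_chunks: list[tuple[str, float]],
              k: int = 5, max_per_family: int = 2) -> list[tuple[str, float]]:
    """Select up to k chunks from a best-first ranking, capping the
    number taken from any single framework family."""
    counts: dict[str, int] = {}
    out: list[tuple[str, float]] = []
    for family, score in ranked_chunks:
        if counts.get(family, 0) < max_per_family:
            out.append((family, score))
            counts[family] = counts.get(family, 0) + 1
        if len(out) == k:
            break
    return out
```

Without the cap, a query that matches one framework very strongly could fill every citation slot from that single source; the cap forces later slots to other families.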
Known Limitations
We believe transparency includes acknowledging what our system cannot do:
- The AI may occasionally produce inaccurate, incomplete, or outdated information, particularly on rapidly evolving regulatory topics.
- Framework knowledge is based on specific ingested versions of published documents. The knowledge base is updated periodically but may not reflect the most recent amendments, guidance updates, or regulatory interpretations at any given time.
- Organizational context provided by users is taken at face value — the AI cannot independently verify the accuracy of user-supplied information.
- Generated documents are starting points that require professional review and organizational adaptation before implementation.
- The AI does not have access to an organization's internal documents, existing policies, or proprietary data unless explicitly uploaded by the user within a project.
- The RAG retrieval system uses vector similarity to identify relevant content. Highly specialized or niche queries may not always surface the most relevant framework passages.
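The vector-similarity limitation in the last point can be made concrete with a minimal cosine-similarity ranker. This is a generic sketch of the technique, not AIPMO's retrieval pipeline; real systems use learned embeddings and approximate nearest-neighbor indexes rather than raw lists of floats.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query: list[float],
          chunks: list[tuple[str, list[float]]], k: int = 3):
    """Rank chunk embeddings by similarity to the query embedding.
    A niche query whose embedding sits far from every chunk still
    returns the k nearest passages; they just may not be relevant."""
    return sorted(chunks, key=lambda c: cosine(query, c[1]), reverse=True)[:k]
```

This illustrates why highly specialized queries may not surface the best passages: nearest-neighbor retrieval always returns *something*, and relevance depends entirely on how well the embedding space captures the query's meaning.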
Governance Frameworks Applied
AIPMO applies the same governance principles it teaches. Our approach to managing this AI system is informed by the following frameworks:
NIST AI Risk Management Framework (AI RMF 1.0)
The NIST AI RMF identifies seven characteristics of trustworthy AI systems. Here is how AIPMO addresses each:
| Characteristic | How AIPMO Addresses It |
|---|---|
| Valid and Reliable | Advisor responses are grounded in retrieved framework content with numbered citations. Document templates are developed by credentialed practitioners and reviewed before publication. |
| Safe | All systems are advisory only — they cannot take autonomous actions or make decisions that directly affect users or third parties. |
| Secure and Resilient | Data is encrypted in transit and at rest. Authentication uses secure session management with encrypted JWT tokens. Payment processing is handled by PCI-compliant Stripe. Row-level security is enforced on all application database tables. |
| Accountable and Transparent | This page. We disclose what each AI system does, how it works, what data it uses, and what its limitations are. |
| Explainable and Interpretable | Users can see their organizational and project context in the interface, understanding what inputs shape the AI's responses. Advisor responses include numbered citations identifying which retrieved framework content informed each answer. |
| Privacy-Enhanced | Personal identifiers are not sent to the AI API. Data minimization principles guide what we collect. Users control their data and can delete it at any time. |
| Fair — with Harmful Bias Managed | System prompts instruct the AI to provide balanced, framework-grounded guidance without bias toward specific vendors, tools, or approaches. Guidance scales to organizational maturity rather than assuming a one-size-fits-all approach. |
Additional Framework Alignment
- ISO/IEC 42001 — Our approach to AI system management follows the structure of an AI management system, including defined scope, risk assessment, and operational controls.
- EU AI Act — AIPMO's advisory systems would be classified as limited-risk or minimal-risk under the EU AI Act's risk-based approach. We apply transparency obligations voluntarily, including clear disclosure that users are interacting with AI systems.
- OECD AI Principles — We align with the OECD principles of transparency, explainability, accountability, and human-centered values in our design and operation.
- UNESCO Recommendation on the Ethics of AI — Our emphasis on human oversight, proportionality, and user agency reflects the UNESCO recommendation's values of transparency and responsibility.
Continuous Improvement
AI governance is not static, and neither is this page. As our platform evolves, we commit to:
- Updating this page when we introduce new AI capabilities or change how data is processed.
- Conducting periodic reviews of our AI systems' performance, risks, and safeguards.
- Incorporating user feedback into our governance practices.
- Tracking document staleness to alert users when generated content may need updating due to profile or framework changes.
Questions or Concerns
If you have questions about how AIPMO uses AI, how your data is handled, or any aspect of our governance practices, contact us at:
- Support: support@aipmo.co
- General inquiries: info@aipmo.co
- Website: aipmo.co
AIPMO is founded and operated by an IAPP Certified AI Governance Professional (AIGP), Project Management Professional (PMP), PMI Agile Certified Practitioner (PMI-ACP), Certified Project Manager in AI (CPMAI), and Google Cloud Certified Generative AI Leader. We apply the same rigor to governing our own AI systems that we help others achieve.