THE INSTITUTIONAL ARTIFICIAL INTELLIGENCE COMPANY
  • HOME
  • PLATFORM
    • ARCHITECTURE
    • INSTITUTIONAL AI STACK™
    • OLTAIX™
    • SOVEREIGN AI™
  • SOLUTIONS
    • SOLUTIONS OVERVIEW
    • RETIREMENT PROVIDERS
    • ASSET OWNERS
    • PRIVATE EQUITY FIRMS
    • ASSET MANAGERS
    • ASSET SERVICERS
    • WEALTH MANAGERS
    • SOVEREIGNTY IN ACTION
    • WHITE PAPERS
  • INSTITUTIONS
    • PRIVATE EQUITY
    • ASSET MANAGEMENT
    • ASSET SERVICING
    • WEALTH MANAGEMENT
    • PENSION FUNDS
    • SOVEREIGN WEALTH FUNDS
    • INSURANCE
    • RETIREMENT
    • ENDOWMENTS
    • FAMILY OFFICES
  • ADVISORY
    • AI ASSESSMENT
    • AI STRATEGY
    • AI IMPLEMENTATION
  • COMPANY
    • ABOUT
    • INSIGHTS
    • NEWSROOM
    • CONTACT

THE AI SOVEREIGNTY ASSESSMENT - Should Your Institution Build, Rent, or Compose AI Infrastructure? A Strategic Decision Framework for Leaders.

How the AI Assessment Works — Three Steps, One Program.

Most governance frameworks start by asking where you want to go. This assessment starts by asking where you actually are. The distinction is deliberate and important.

Strategy without an honest baseline is aspiration. An honest baseline without benchmarking lacks urgency. And both without a strategic direction lack purpose. The three steps work together — and they work in this order.

THE GAP BETWEEN STEP 1 AND STEP 3 IS YOUR PROGRAM.

  

Your matrix score tells you where you are standing. Your 0–160 strategic score tells you where you need to get to. Your benchmark score tells you how urgently. The distance between them — cell by cell — is the governance program.
 


STEP 1


The 5×5 Control Matrix


Where is our governance today?


The AI Sovereignty Assessment measures your institution's verified ability to own, govern, and audit the AI systems that drive decisions, manage risk, and serve clients. It applies five governance control dimensions — Jurisdictional, Logical, Technical, Operational, and Contractual — independently to each of five AI infrastructure layers: Power, Compute, Data Centers, Models, and Agentic Applications.
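As a rough sketch, the 5×5 structure can be modeled as 25 dimension-by-layer cells, each holding a maturity level. This is not the assessment's actual scoring engine: the dimension and layer names come from the text above, while the 1–4 maturity scale and sum-to-100 scoring are assumptions inferred from the 0–100 matrix scores quoted later.

```python
# A minimal sketch of the 5x5 Control Matrix described above -- not the
# published scoring engine. Names come from the text; the 1-4 maturity
# scale and sum-to-100 scoring are assumptions.

DIMENSIONS = ["Jurisdictional", "Logical", "Technical", "Operational", "Contractual"]
LAYERS = ["Power", "Compute", "Data Centers", "Models", "Agentic Applications"]

def matrix_score(cells):
    """Sum the assumed 1-4 maturity level of each of the 25 cells (max 100)."""
    assert set(cells) == {(d, l) for d in DIMENSIONS for l in LAYERS}
    assert all(1 <= level <= 4 for level in cells.values())
    return sum(cells.values())

def level_one_cells(cells):
    """Level 1 cells are the remediation priorities named in the text."""
    return sorted(cell for cell, level in cells.items() if level == 1)
```

Under these assumptions, an institution with most cells at Level 1 and a handful at Level 2 lands near the 38/100 posture described in the examples that follow.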



The most urgent finding

 The most acute sovereignty gap for most institutions is not a foreign government legal demand. It is that external AI model providers — Anthropic, OpenAI, Microsoft, Google — have ongoing access to your queries, decision logic, and outputs by design. Every day. In every API call. 


Anthropic CEO Dario Amodei stated publicly: "I think the next tier of risk is actually AI companies themselves. The governance of AI companies deserves a lot of scrutiny."



The Models column of this matrix is where most institutions score at Level 1. That is the finding that should shape your board conversation.

Five questions for the board

  

1. Do we own our AI — or do we rent access to someone else's?

2. Can management prove — with technical evidence, not contracts — where every AI workload executes?

3. If our primary AI provider restricted or revoked access tomorrow, what would happen operationally?

4. Could we produce a complete AI decision audit trail from 18 months ago within 24 hours?

5. Do we control what our AI providers can see — or are we trusting their promises?


A CONCRETE EXAMPLE


An institution completes the 0–160 assessment and scores 95. That falls in the 81–120 band — Compose is the right strategy. They need hybrid sovereign architecture with a protected core for sensitive data and proprietary systems.

They then complete the 5×5 Control Matrix and score 38 out of 100. That says their current governance posture is essentially Reactive — most of the 25 cells are at Level 1.


The gap between 'you need to Compose' and 'you currently govern at Reactive level' is the entire work program. The matrix identifies which of the 25 cells are at Level 1 — those are your priorities. In most institutions, the Models and Agents columns score lowest, meaning sensitive client and institutional data is being processed without adequate technical controls. Those Level 1 cells are the program — not theoretical risk, but specific governance gaps with specific remediation steps.
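The 81–120 Compose band above implies a simple score-to-strategy mapping. Note that only that band is stated in the text: the Rent and Build cutoffs in this sketch are illustrative assumptions, not the published framework.

```python
def strategic_band(score):
    """Map a 0-160 strategic assessment score to a recommendation.
    Only the 81-120 Compose band is stated in the text; the Rent and
    Build cutoffs below are illustrative assumptions."""
    if not 0 <= score <= 160:
        raise ValueError("score must be in the 0-160 range")
    if score <= 80:
        return "Rent"      # assumed cutoff
    if score <= 120:
        return "Compose"   # per the 81-120 band stated in the text
    return "Build"         # assumed cutoff
```

On these assumptions, the institution scoring 95 lands in the Compose band, as in the example.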

Why Most Institutions Score Poorly on Models and Agents

Across every institution type, the Models column and Agents column consistently score the lowest. This is structural, not incidental. External model providers — Anthropic, OpenAI, Microsoft, Google — have ongoing access to your queries, decision logic, fine-tuning data, and inference outputs by design. Every day. In every API call. This is not a legal demand scenario. It is the operational reality of how AI models are served.

"It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. The governance of AI companies deserves a lot of scrutiny."


Anthropic CEO Dario Amodei, January 2026.

A CONCRETE EXAMPLE (LARGE INSTITUTIONAL RETIREMENT PLAN PROVIDER)

Where do we need to go?

  

Step 1 — A large TPA completes the 5×5 matrix and scores 38 out of 100. Most of their 25 cells are at Level 1 — particularly in the Models and Agents columns, where participant Social Security numbers are being processed without the technical controls ERISA's prudent expert standard requires.


Step 2 — They benchmark against peers. The typical range for large TPAs is 51–72. At 38, they are materially behind their peer group — before the DOL has asked a single question.


Step 3 — They complete the 0–160 strategic assessment and score 95, placing them in the Compose band. They need hybrid sovereign architecture with a protected core for participant data.


The program — The gap between 'we are at 38 on the matrix' and 'we need to Compose' is the entire work. The matrix has already identified the priorities: the specific Level 1 cells in the Models and Agents columns are where they start. The benchmark has established the urgency: they are behind their peers. The strategic assessment has set the destination.
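The three steps above can be assembled into a single summary. This is a sketch only: the function name and the urgency labels are assumptions, and the destination band (e.g. 'Compose') is taken as an input from Step 3 rather than recomputed.

```python
def program_summary(level_one_cells, matrix_score, peer_range, destination):
    """Combine the three steps into one picture (illustrative sketch).
    level_one_cells: priorities from the Step 1 matrix.
    matrix_score / peer_range: the Step 2 benchmark (0-100 scale).
    destination: the Step 3 strategy band, e.g. 'Compose'."""
    peer_low, peer_high = peer_range
    if matrix_score < peer_low:
        urgency = "behind peers"
    elif matrix_score <= peer_high:
        urgency = "within peer range"
    else:
        urgency = "ahead of peers"
    return {
        "priorities": sorted(level_one_cells),  # where to start
        "urgency": urgency,                     # how fast to move
        "destination": destination,             # where to get to
    }
```

For the TPA above: a matrix score of 38 against a 51–72 peer range yields "behind peers", with the Level 1 Models and Agents cells as the starting priorities and Compose as the destination.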

FINAL GUIDANCE

Key Principle 1 — Sovereignty Is a Journey, Not a Destination

  

Most institutions will progress through maturity stages over time. Each stage builds the capability required for the next. Do not attempt to jump directly to full sovereignty without building foundational expertise.

  

Learning phase
  Strategy: Rent with standard governance
  Outcome: Build institutional AI governance knowledge and contractual discipline

Compliance phase
  Strategy: Rent with enhanced governance and BYOK
  Outcome: Achieve regulatory examination readiness; sensitive data technically protected at rest

Strategic phase
  Strategy: Compose — hybrid architecture
  Outcome: Sovereign core for mission-critical and highest-sensitivity workloads

Autonomy phase
  Strategy: Build — full sovereign infrastructure
  Outcome: Complete institutional command of AI infrastructure across all five ecosystems

Key Principle 2 — Control Is Non-Negotiable for Mission-Critical AI

   

The fundamental question every institution must answer: 


When AI becomes mission-critical, who will control the infrastructure that produces it? 


If the answer is 'we will', you have an AI strategy. 


If the answer is 'someone else', you have an AI dependency.

Key Principle 3 — Start Now, Even If You Are Not Ready to Build

  

Every institution — regardless of current score — should take the following steps immediately:

1. Complete this assessment and the 5×5 matrix honestly — establish your baseline

2. Engage legal counsel to review all AI vendor agreements against your regulatory and contractual requirements

3. Classify workloads by sensitivity and criticality — not all AI workloads require the same governance

4. Negotiate enhanced contractual terms with AI model and agent providers — even if not yet building

5. Build internal team capability in AI governance progressively — the framework cannot be governed without people who understand it

6. Document your architecture for portability — never allow vendor lock-in without understanding its cost

7. Monitor for triggers requiring reassessment — regulatory changes, competitive events, client requirements, spend thresholds

THE AI SOVEREIGNTY ASSESSMENT

Should Your Institution Build, Rent, or Compose AI Infrastructure?

Understanding your institution’s AI sovereignty posture is no longer optional — it’s strategic. 

The Institutional AI Assessment delivers an actionable snapshot of where you stand today and how to close the gap before the next budget cycle or regulatory review. 

BEGIN YOUR AI SOVEREIGNTY ASSESSMENT


AI IS A GIVEN. CONTROL IS NOT.


© 2026 Institutional AI. All rights reserved.

