Virtually every firm your institution is evaluating for AI governance fits one of five archetypes.
Each archetype is structurally unable to give an unconflicted recommendation on the four choices that actually matter: Build. Rent. Compose. Rent + Govern.

AI governance advisors do not compete on methodology. They compete on relationship. Most institutions engage the firm they already use for audit, strategy, or technology — and accept whatever framing that firm brings.
Framing is not neutral.
A consulting firm aligned with a hyperscaler cannot credibly tell the institution to build its own infrastructure. A firm whose internal operations run on a partner model vendor cannot credibly recommend a different vendor. A software company that sells AI faces an inherent conflict when it also offers governance over that AI.
The conflict is not ethical. It is structural.

The framework consultancy.
What they sell. Board roadmaps, ethics principles, responsible-AI taxonomies. Deliverables are policies, committees, and guiding questions.
Structural conflict. A framework is not an architecture. It defines what the institution should do — not what the infrastructure enforces.
Failure mode. Governance philosophy without technical reality.

The hyperscaler-aligned strategist.
What they sell. Enterprise strategy paired with delivery capability built on a single cloud platform. Joint teams, co-funded assessments, shared commercial models.
Structural conflict. The firm's economics are tied to one infrastructure outcome. The strategic recommendation and the delivery revenue cannot be separated.
Failure mode. A compromised answer to Build, Rent, or Compose.

The model-vendor-fused advisor.
What they sell. AI transformation programs built on a partner foundation model. Internal operations and client deliveries run on the same stack.
Structural conflict. The firm's own productivity depends on the model it recommends. The governance layer assumes the vendor it cannot replace.
Failure mode. The recommendation is the product.

The audit-anchored advisory.
What they sell. Inventory, risk-tiering, testing, monitoring, accountability. Traditional risk-function methodology extended to AI.
Structural conflict. Knowing which AI systems exist, and testing them, is not the same as controlling the infrastructure underneath them.
Failure mode. Audit dressed as control.

The governance software vendor.
What they sell. A product that monitors, logs, and reports on AI use — typically operating within the vendor's own AI and data platform.
Structural conflict. The entity offering governance is the same entity providing the models, data platform, or infrastructure.
Failure mode. A vendor cannot govern itself.

Institutional AI does not fit any of the five archetypes. By design.
The firm's only output is the control architecture the institution owns permanently — and the proprietary methodology that scores it.
That methodology is the 5×5 Control Matrix and the AI Sovereignty Assessment. Every completed assessment compounds the benchmark dataset that makes the next one sharper. The methodology is the product. The institution owns the architecture it produces.
The closest analogy in financial services is a rating agency combined with an architect. The rating agency owns a proprietary methodology the market treats as authoritative. The architect designs the infrastructure the institution owns. Institutional AI does both — for AI control.

The archetypes are not wrong. They are useful — for what they are.
A framework consultancy can help a board structure its AI committee. A hyperscaler-aligned strategist can help scale workloads on that hyperscaler. A model-vendor-fused advisor can accelerate deployment on that vendor's stack. An audit-anchored advisory can inventory and test a portfolio of existing use cases. A governance software vendor can monitor the AI running inside its own platform.
Each archetype has a structural economic interest in the outcome of the Build / Rent / Compose question — which is why an unconflicted answer requires a firm without that exposure.
Institutional AI is engaged when the institution needs the unconflicted answer. After that, the archetypes may still have a role — executing the direction the institution has committed to, under a control architecture the institution now owns.

If there is uncertainty about which archetype a current advisor fits, the test is simple: ask the advisor four questions. The answers separate advisors from architects.
Is this an argument against engaging the five archetypes at all?
No. The archetypes are useful for the work they are designed to do. The point is that the control question — who commands the infrastructure — requires a firm whose economics are not tied to the answer.
Does every firm in this market really fit one of the five archetypes?
In our experience, every major firm operating at scale in the AI governance market fits one of the five. The archetypes are defined by economic structure, not marketing language. The test in Section 5 separates positioning from structure.
What does Institutional AI itself sell?
The methodology is the product. The institution owns the architecture. No models. No platforms. No software licenses. No infrastructure resale. The economics are intentionally simple: the firm is paid to produce the methodology and the design, not to capture ongoing revenue from the institution's AI operations.
Can existing advisors remain engaged alongside Institutional AI?
Yes. The work is complementary. Institutional AI produces the control architecture and the methodology. Existing advisors frequently execute against it. The conflict the archetypes create is at the recommendation layer, not the execution layer.
This page reflects Institutional AI's analysis of the AI governance advisory market as of April 2026.
Most institutions have AI. Few have control.
© 2026 Institutional AI. All rights reserved.