Perspectives/13 February 2026/8 min read

AI, Governance and Organisational Coherence

A position paper arguing that AI governance in regulated organisations is fundamentally a question of distributed cognition, cultural alignment and architectural coherence.

Drawing on organisational theory from Edgar Schein (culture), Karl Weick (sensemaking), and institutional perspectives on harmonisation and institutionalisation.


Introduction

In regulated organisations, AI is often framed as either a technical implementation challenge or a legal compliance issue. While both dimensions matter, they obscure a deeper structural shift: AI introduces distributed cognition into the organisation.

This shift has implications not only for efficiency or automation, but for how knowledge is produced, validated and institutionalised over time.

The central argument of this paper is that AI governance is fundamentally a question of organisational coherence. When cognition becomes partially machine-mediated, governance must address not only risk and control, but the interpretative structures that hold the organisation together.


1. AI Introduces Distributed Cognition

When AI systems are used to draft analyses, generate strategy material or structure board-level briefs, cognition is no longer confined to individuals or teams. It becomes distributed across humans and models.

AI systems participate in:

  • framing problems
  • generating alternatives
  • structuring arguments
  • categorising risks
  • shaping the language of decision-making

In this sense, AI becomes part of the organisation's sensemaking apparatus (Weick). It influences not only outputs, but how reality itself is interpreted.


2. Distributed Cognition Without Shared Frameworks Creates Fragmentation

If different parts of an organisation rely on different models, prompts and analytical structures, the result is not collective intelligence but epistemic divergence.

Over time, this leads to:

  • inconsistent risk interpretation
  • divergent prioritisation logics
  • incompatible decision artefacts
  • blurred accountability

From a cultural perspective (Schein), shared assumptions and interpretative schemas are what stabilise organisations. When AI-mediated cognition is uncoordinated, those shared assumptions erode.

The practical implication is straightforward: organisations may already be producing decision material with external LLMs that operate outside formal governance structures. Without a shared interpretative architecture, this introduces structural incoherence.
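One way to make the idea of a "shared interpretative architecture" concrete is a common schema for AI-assisted decision material. The sketch below is purely illustrative (all names, fields, and the risk scale are hypothetical, not drawn from any real governance standard): if every team emits artefacts in the same structure, with the same vocabulary for risk, outputs from different models remain comparable.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical shared vocabulary: every team's AI-assisted analysis
# classifies risk on the same scale, so artefacts stay comparable.
class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class DecisionArtefact:
    """A minimal shared schema for AI-assisted decision material."""
    title: str
    risk_rating: RiskRating
    model_used: str    # which model produced the draft
    reviewed_by: str   # the human accountable for the content
    assumptions: list[str] = field(default_factory=list)

# Two teams using different models still emit structurally identical artefacts.
a = DecisionArtefact("Market entry brief", RiskRating.MEDIUM, "model-A", "analyst-1")
b = DecisionArtefact("Vendor risk memo", RiskRating.HIGH, "model-B", "analyst-2")
```

The point is not the specific fields but the convergence: a shared schema turns divergent AI outputs into a single class of governable artefact.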


3. The EU AI Act Reinforces the Need for Harmonisation

The EU AI Act should not be viewed solely as a compliance burden. Approached thoughtfully, it can function as a harmonising mechanism.

By clarifying roles (provider, deployer), risk classifications and documentation requirements, the Act implicitly raises a deeper question:

Does the organisation possess a shared decision architecture for AI?

Regulation, in this sense, acts as an institutionalising force. It encourages convergence around common standards, responsibilities and accountability structures.
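The Act's role and risk distinctions can themselves be read as a shared vocabulary. The sketch below encodes a simplified version of them; the tier names reflect the Act's broad risk categories, but the documentation rule at the end is a deliberately hypothetical illustration, not a statement of actual legal obligations.

```python
from enum import Enum

# Simplified rendering of the EU AI Act's risk tiers: prohibited practices,
# high-risk systems, transparency-obligation ("limited") systems, minimal risk.
class AIActRiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations
    MINIMAL = "minimal"

# The two roles the paper mentions.
class Role(Enum):
    PROVIDER = "provider"   # develops the system / places it on the market
    DEPLOYER = "deployer"   # uses the system under its own authority

def documentation_required(role: Role, tier: AIActRiskTier) -> bool:
    """Illustrative rule only; real obligations depend on the specific use case."""
    if role is Role.PROVIDER:
        return tier in {AIActRiskTier.HIGH, AIActRiskTier.LIMITED}
    return tier is AIActRiskTier.HIGH
```

Even in this toy form, the structure forces the deeper question the paper raises: an organisation can only answer "which role are we, in which tier?" if it has a shared decision architecture for AI.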


4. AI Governance as Cultural Standardisation

Governance is often interpreted as a system of controls, audits and documentation. However, governance in AI-intensive environments must also address interpretative alignment.

A viable AI governance model requires shared frameworks for:

  • risk classification
  • capability maturity
  • responsibility boundaries
  • deployment constraints
  • documentation logic

Without such harmonisation, AI becomes culturally incoherent.

And culturally incoherent systems do not scale.

AI governance is therefore not primarily a legal discipline. It is a cultural and architectural one.
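One way to make such harmonisation operational is to express the shared frameworks as data rather than prose. The sketch below is a hypothetical governance policy (the risk classes, maturity levels, and constraint rules are invented for illustration): a single policy object that every deployment is checked against.

```python
# Hypothetical governance policy as data: one shared definition of risk
# classes, capability maturity, and deployment constraints for all teams.
GOVERNANCE_POLICY = {
    "risk_classes": ["low", "medium", "high"],
    "maturity_levels": ["experimental", "piloted", "production"],
    "deployment_constraints": {
        # Illustrative rule: higher risk demands higher maturity and review.
        "high":   {"min_maturity": "production",   "human_review": True},
        "medium": {"min_maturity": "piloted",      "human_review": True},
        "low":    {"min_maturity": "experimental", "human_review": False},
    },
}

def deployment_allowed(risk: str, maturity: str) -> bool:
    """Check a proposed deployment against the shared policy."""
    levels = GOVERNANCE_POLICY["maturity_levels"]
    required = GOVERNANCE_POLICY["deployment_constraints"][risk]["min_maturity"]
    return levels.index(maturity) >= levels.index(required)
```

Whether the artefact is code, a policy document, or a register matters less than its singularity: one policy, interpreted the same way everywhere, is what cultural standardisation looks like in practice.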


5. Tools and Models as Cultural Artefacts

AI tools and models are not neutral utilities. They embed categories, assumptions and decision logics.

When adopted at scale, they become cultural artefacts, shaping how problems are defined and how authority is exercised.

Standardising AI use is therefore less a procurement issue and more an exercise in preserving organisational coherence over time.


Implications

For regulated organisations, the key challenge is not merely whether AI systems comply with regulation, but whether their integration preserves epistemic alignment.

The success or failure of AI will depend on the architectural conditions under which it is embedded:

  • clarity of responsibility
  • durable context
  • shared interpretative structures
  • governance models aligned with regulatory constraints

This perspective informs the ongoing work at MokanTech, where the focus is on designing structural conditions that allow AI systems to function coherently, governably and sustainably within regulated environments.

Related Work

Kanita: Strategic decision support for regulated AI systems.