Building an AI Governance Culture in Your AEC Organisation

By Responsible with AI Team | Last updated: 14 Apr 2026 | 5 min read

Most AEC firms now have someone using AI. Not everyone has a plan for governing it. That gap is where the risk lives.

The RICS professional standard on the responsible use of AI came into effect on 9 March 2026. It requires RICS-regulated firms to have written AI policies, maintain a risk register for AI systems that materially impact service delivery, and document professional judgement on every AI output they rely upon. That is not a light administrative lift. It is a governance programme.

Alongside that, ISO 42001, the international AI management system standard, gives firms a certifiable framework to demonstrate responsible AI use to clients, insurers, and regulators. BSI is accredited to certify against it in the UK. You do not have to pursue certification to benefit from the framework. But knowing it exists matters.

Why Culture Comes Before Policy

You can write a policy in an afternoon. Changing how 40 engineers think about AI outputs takes months. That is the harder problem.

In AEC, the stakes are high. A poorly governed AI output in a structural calculation, a valuation, or a ground investigation interpretation can lead to professional liability claims, project failures, or worse. The CECA report on AI in UK construction is clear that AI must not displace or dilute human responsibility, especially in safety-critical contexts.

Culture change starts with leadership. If your directors treat AI governance as a compliance checkbox, so will everyone else. If they model sceptical, documented, professional use of AI, teams follow.

The AI Champions Model

The most effective adoption pattern in engineering and construction firms right now is the AI champions model. It works like this.

Select three to five people across different disciplines, not necessarily the most senior, but the most respected and curious. They attend a structured programme covering what AI can and cannot do, your firm's specific tools, and the governance requirements attached to each. They then embed that knowledge in their teams through peer demonstrations and informal support.

Champions are not IT staff. They are the structural engineer who figured out how to use AI to check specification clauses, or the QS who built a prompt library for cost report drafting. Their credibility comes from doing it themselves first.

A twelve-week programme structure works well. Weeks one to four: intensive training on tools, governance, and use case identification. Weeks five to eight: guided practice in live projects with documented outcomes. Weeks nine to twelve: peer teaching, where each champion leads a short session with their immediate team.

AI Governance: The Numbers

9 Mar 2026: RICS AI standard comes into force for regulated firms

72%: employers who cite skills gaps as the main barrier to AI adoption (CIPD)

42%: shortfall between anticipated and actual AI deployments in organisations (ModelOp)

What Small and Large Firms Need to Do Differently

A ten-person surveying practice and a 500-person engineering consultancy face different challenges.

Small firms often have one or two people driving AI adoption informally. The risk is that governance is entirely person-dependent. If that person leaves, nothing is written down. The priority for small firms is documentation: a simple AI use policy, a short risk register covering the tools in active use, and a one-page decision template for documenting professional judgement on AI outputs. The RICS standard requires all of this. Doing it lightly is better than not doing it at all.
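The RICS standard requires a risk register but does not prescribe a format, so a small firm can start very simply. The sketch below shows one possible minimal register written to CSV so it survives staff turnover; the column names and the sample entry are illustrative assumptions, not part of the standard.

```python
from dataclasses import dataclass, asdict
import csv

# Hypothetical columns for a lightweight AI risk register. The RICS
# standard requires a register for AI systems that materially impact
# service delivery; these fields are one reasonable starting point.
@dataclass
class AIToolEntry:
    tool: str              # the AI product in use
    use_case: str          # what it is actually used for
    material_impact: bool  # does it materially affect service delivery?
    owner: str             # named person accountable for the tool
    risks: str             # brief description of key risks
    mitigations: str       # controls in place (review steps, sign-off)
    last_reviewed: str     # ISO date of last review

def write_register(entries, path="ai_risk_register.csv"):
    """Persist the register to CSV so governance is not person-dependent."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(AIToolEntry.__dataclass_fields__))
        writer.writeheader()
        for entry in entries:
            writer.writerow(asdict(entry))

# Illustrative example entry only.
register = [
    AIToolEntry(
        tool="LLM drafting assistant",
        use_case="First drafts of cost report narrative",
        material_impact=True,
        owner="J. Smith",
        risks="Fabricated figures; outdated rates",
        mitigations="QS checks every figure against the cost plan before issue",
        last_reviewed="2026-04-01",
    ),
]
write_register(register)
```

A spreadsheet does the same job; the point is that the register is written down, has a named owner per tool, and records the professional-judgement controls alongside the risks.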

Larger firms have the opposite problem. Multiple teams using different tools, no central visibility, and governance that exists on paper but is not embedded in daily practice. The priority here is a central AI register that captures which tools are in use across the firm, who approved them, and what the documented risk assessment says. The Construction Leadership Council's Construct AI initiative is building collaborative frameworks for exactly this.
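For a larger firm, the central register is essentially a consolidation exercise: gather each team's tool list, merge it into one view, and flag tools that nobody has approved. The sketch below assumes per-team lists with a simple approval field; the team names, tools, and approval logic are illustrative assumptions.

```python
# Hypothetical per-team tool lists. In practice these would come from a
# survey or from each team's own register, not be hard-coded.
teams = {
    "structures": [
        {"tool": "LLM drafting assistant", "approved_by": "AI lead"},
        {"tool": "Spec clause checker", "approved_by": None},
    ],
    "qs": [
        {"tool": "LLM drafting assistant", "approved_by": None},
        {"tool": "Cost prediction model", "approved_by": "AI lead"},
    ],
}

# Merge into one central view: which teams use each tool, and whether
# anyone has formally approved it anywhere in the firm.
central = {}
for team, tools in teams.items():
    for t in tools:
        entry = central.setdefault(t["tool"], {"teams": [], "approved": False})
        entry["teams"].append(team)
        entry["approved"] = entry["approved"] or bool(t["approved_by"])

# Tools in live use with no documented approval are the immediate gap.
unapproved = [name for name, e in central.items() if not e["approved"]]
```

Even this crude merge surfaces the two things a central register is for: shadow use of the same tool across teams, and tools in service with no documented sign-off.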

Measuring Governance Maturity

A simple maturity model helps firms understand where they are and what to work on next. Three stages cover most AEC contexts.

Reactive: AI is being used but governance is ad hoc. No central policy. No risk register. Professional judgement on AI outputs is undocumented.

Proactive: Written policy exists. A risk register covers material AI tools. Champions are in place. Training is structured. Clients are notified when AI has a material impact.

Transformative: Governance is embedded in practice management. ISO 42001 or equivalent framework adopted. AI use is auditable. Governance is a competitive differentiator in bids.

"The firms that govern AI well will be the ones clients trust with high-value work. That is not a soft benefit. It is a commercial one."

Most AEC firms are somewhere between reactive and proactive right now. The RICS standard has accelerated the timeline, but the real driver is commercial: clients will entrust their high-value work to the firms that govern AI well.

Start with the risk register. Document which AI tools have a material impact. Write the policy. Train the champions. The culture follows the structure, not the other way round.

Build Your AI Governance Framework

Responsible with AI offers practical tools, templates, and training to help AEC firms meet the RICS AI standard and build lasting governance culture. Explore the resources at responsiblewithai.com.

