How to Learn Responsible AI Governance for Ethical and Scalable Systems

By Responsible with AI Team | Last updated: 26 Apr 2026 | 5 min read

There's a pattern playing out across the built environment right now. Firms are deploying AI tools. Professionals are using them daily. But when you ask who actually owns responsibility for how those tools are governed, the room goes quiet.

That silence has a cost. The RICS AI in Construction report 2025 found that 46% of construction organisations cite lack of skilled personnel as the single biggest barrier to responsible AI adoption. It's not a technology problem. It's a competence problem.

The good news: structured learning pathways now exist. Here's how to navigate them.

Start with the standard: ISO 42001

ISO/IEC 42001 is the first international standard for AI management systems. Think of it as the ISO 27001 equivalent for AI governance. It gives organisations a structured framework to define policies, assign accountability, manage risk and audit performance across their AI systems.

Certification is achievable. One company's experience shows what it takes: four months of preparation, 29 new policies and procedures, and 10 dedicated AI governance registers. That's not trivial, but it's manageable if you approach it systematically. Early adopters report the process improved their governance as much as it validated it.

ISO 42001 is also increasingly recognised as a practical compliance route under the EU AI Act, making certification a dual-purpose investment.

The EU AI Act Article 4 Obligation

If you operate in the EU, or serve clients who do, AI literacy is already a legal requirement. Article 4 of the EU AI Act places a duty on both providers and deployers to ensure staff have sufficient AI literacy. That obligation took effect on 2 February 2025.

The regulation doesn't mandate specific certifications or tests. What it does require is documented, proportionate training that covers how AI works, the risks it presents in your context, and how staff should apply professional judgement. Doing nothing is not a defensible position.
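One way to keep "documented, proportionate training" honest is simply to log every training event per staff member and check for gaps. Below is a minimal illustrative sketch in Python; the record fields (`staff`, `topic`, `completed_on`) and the `untrained` helper are assumptions for the example, not wording from Article 4 or any official template.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative AI-literacy training record (field names are assumptions,
# not wording from the EU AI Act).
@dataclass(frozen=True)
class TrainingRecord:
    staff: str
    topic: str          # e.g. "How the tool works", "Risks in our context"
    completed_on: date

def untrained(staff: set[str], log: list[TrainingRecord]) -> set[str]:
    """Staff with no documented training at all -- the indefensible gap."""
    return staff - {rec.staff for rec in log}

log = [TrainingRecord("A. Jones", "AI risk basics", date(2025, 2, 10))]
print(untrained({"A. Jones", "B. Patel"}, log))  # → {'B. Patel'}
```

Even a spreadsheet serves the same purpose; the point is that "who has been trained, on what, and when" is answerable on demand.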

RICS Training Requirements

For surveyors and built environment professionals, the RICS Global Standard on Responsible Use of AI has been mandatory since 9 March 2026. It requires firms to maintain AI risk registers, establish written governance policies, conduct due diligence on AI suppliers, and train staff appropriately.

RICS has published a five-part training course on Harnessing AI and Data in the Built Environment. It's a foundation-level programme covering governance, data concepts and practical surveying applications. For firms needing to evidence competence, this is a sensible starting point.

The AI Competence Gap in Numbers

46% of construction firms cite skills shortage as top AI barrier (RICS, 2025)

74% of construction companies have limited or no AI preparation (RICS, 2025)

4 months typical preparation time for ISO 42001 certification

The BRIEF Framework and AI4QS

For built environment professionals specifically, one of the most useful tools to emerge this year is the Built Environment Responsible AI Competence Framework (BRIEF), developed by the AI4QS initiative at the University of Westminster.

BRIEF v1.0, led by Dr Abdullahi Saka, is designed to translate governance principles into practical capability. It maps the skills, behaviours and knowledge that professionals need to adopt AI responsibly, and it's built around the realities of surveying, construction and FM practice. It's not generic AI literacy. It's sector-specific competence.

"Through AI4QS and the BRIEF framework, we are laying the foundations to support practitioners, educators and organisations in adopting AI in ways that strengthen, rather than replace, professional values." - Dr Abdullahi Saka, University of Westminster

The next phase of AI4QS will include a Responsible AI Digital Hub, an open-access platform with research outputs, guidance tools and learning resources. Worth bookmarking.

Building an internal governance team

Certifications and frameworks matter, but governance needs people. A functional internal AI governance team doesn't need to be large. It needs clear roles: someone who owns the AI risk register, someone who reviews new tool procurement, someone who ensures client-facing staff understand disclosure requirements.

For smaller firms, that's often one or two people wearing multiple hats. The key is documentation. Every governance decision should be recorded. Every training session undertaken should be evidenced. When something goes wrong (and in construction, things do go wrong), your paper trail is your protection.

Start with a one-page AI policy. Add an AI systems register. Build the risk register. Review it quarterly. That's a governance programme. It doesn't need to be complex to be defensible.
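The minimal programme above (systems register, risk register, quarterly review) can be sketched as a simple data model. This is an illustrative Python sketch, not an ISO 42001 or RICS template; the class and field names (`AISystem`, `RiskEntry`, `next_review_due`) and the 91-day cadence are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative entry in an AI systems register (field names are
# assumptions, not an ISO 42001 or RICS template).
@dataclass
class AISystem:
    name: str
    supplier: str
    owner: str        # the person accountable for this tool
    use_case: str

# Illustrative entry in an AI risk register.
@dataclass
class RiskEntry:
    system: str
    description: str
    mitigation: str
    last_reviewed: date

    def next_review_due(self, cadence_days: int = 91) -> date:
        # Quarterly review, as suggested in the text (~91 days).
        return self.last_reviewed + timedelta(days=cadence_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.next_review_due()

# Example: one hypothetical tool, one risk, reviewed in early January.
register = [AISystem("CostEstimatorX", "ExampleVendor", "J. Smith",
                     "early-stage cost estimates")]
risk = RiskEntry("CostEstimatorX",
                 "Model may underestimate costs on atypical projects",
                 "Human review of all estimates before client issue",
                 last_reviewed=date(2026, 1, 5))
print(risk.is_overdue(date(2026, 6, 1)))  # past the ~91-day window → True
```

The structure matters more than the tooling: the same four fields in a shared spreadsheet, reviewed on a calendar reminder, is an equally defensible implementation.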

Practical steps to take today

If you're a professional wondering where to begin, here's a straightforward sequence. Read the RICS AI standard if you haven't. Take the RICS foundation training on AI and data. Download and start mapping against BRIEF v1.0. For firms seeking formal recognition, explore ISO 42001 with your leadership team. And document every step you take.

The competence gap in this sector is real. But the frameworks to close it are now in place. The question is whether you act before a regulator, a client or a court makes that decision for you.

Building AI Competence in the Built Environment

The Responsible with AI programme gives built environment professionals the tools, frameworks and practical guidance to govern AI confidently, from RICS standard compliance to ISO 42001 readiness.

Explore the Responsible with AI Programme
