Whether you're a QS, a building surveyor, an architect, or an FM professional, responsible AI is now part of your professional landscape. The RICS standard is mandatory. The EU AI Act is live. And the BRIEF framework gives the sector a practical map for building competence.
The question isn't whether to engage with AI governance. It's how to do it in a way that fits your role, your practice and your CPD obligations. This is that guide.
What your role actually requires
Different professional roles carry different AI exposure and different obligations. Here's a quick mapping.
Quantity surveyors using AI for cost estimation, procurement analysis or contract risk must document how AI outputs were validated. The RICS standard requires a named qualified surveyor to own every AI-assisted determination. If you're that named professional, your judgement and your documentation are what protect both you and your client.
Building surveyors and architects using AI for condition assessment, specification support or design optioneering need governance at the output stage. Can you explain the AI's recommendation? Can you evidence why you accepted or overrode it? These are not hypothetical questions under RICS rules. They are compliance requirements.
FM professionals deploying AI for energy management, predictive maintenance or occupancy analytics are often the furthest from formal governance frameworks, yet arguably the most exposed to recurring operational risk. If your AI tool flags a maintenance issue incorrectly and an incident follows, your records of how the system was monitored and validated are central to your liability position.
The BRIEF framework: your competence map
The Built Environment Responsible AI Competence Framework (BRIEF), developed through the University of Westminster's AI4QS initiative, is designed specifically for this sector. Version 1.0 was launched in early 2026 and is available via the AI4QS research outputs.
BRIEF maps competence across three dimensions. Technical understanding: how AI systems function, their limitations and failure modes. Ethical and regulatory awareness: the obligations under RICS, the EU AI Act, and sector-specific legislation. And practical governance: how to apply professional judgement to AI outputs in real project contexts.
It's a framework, not a test. Use it as a self-assessment tool. Where are the gaps in your current knowledge? Where are the gaps in your firm's governance?
AI Competence Across the Built Environment
46% of construction firms cite skills shortages as their top AI barrier (RICS, 2025)
9 Mar 2026: the RICS AI standard became mandatory for all members and regulated firms
94% of leaders globally face AI-critical skills shortages today (WEF, 2025)
Self-assessment: where to start
Before investing in formal training, do a quick, honest audit. Can you name every AI tool your firm or team currently uses? Do you have a written AI policy? Do you have an AI risk register? Do you have records showing that staff have received proportionate AI literacy training? Does your client engagement process include disclosure of where AI is used?
If the answer to any of these is no, you have your starting priority. The RICS standard requires written policies, AI registers, and staff training. These are not aspirational. They are compliance obligations for every RICS-regulated firm from March 2026.
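The standard requires an AI register but does not prescribe a format. As a thought experiment, even a minimal structured record can capture what matters: what the tool does, who owns its outputs, when it was last validated, and what its known limits are. The sketch below is illustrative only; the tool names, fields and the one-year review interval are assumptions, not RICS requirements.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class AIToolRecord:
    """One entry in a firm's AI register (illustrative fields, not a RICS template)."""
    name: str
    purpose: str
    named_owner: str        # the qualified professional accountable for outputs
    last_validated: date
    known_limitations: str

def overdue_for_review(register: List[AIToolRecord],
                       as_of: Optional[date] = None,
                       max_age_days: int = 365) -> List[str]:
    """Flag tools whose last validation is older than the review interval."""
    as_of = as_of or date.today()
    cutoff = as_of - timedelta(days=max_age_days)
    return [r.name for r in register if r.last_validated < cutoff]

# Hypothetical register entries for a small practice.
register = [
    AIToolRecord("CostModelX", "cost estimation", "A. Surveyor MRICS",
                 date(2026, 1, 15), "trained on pre-2024 tender data"),
    AIToolRecord("FaultFinder", "predictive maintenance", "B. Manager",
                 date(2024, 6, 1), "high false-positive rate on HVAC sensors"),
]

print(overdue_for_review(register, as_of=date(2026, 3, 9)))  # → ['FaultFinder']
```

The point of keeping the register as structured data rather than a memo is that review deadlines, ownership gaps and disclosure obligations can then be checked mechanically rather than by rereading policy documents.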
"AI is already impacting the built environment. Its value depends on how well digital capability is combined with professional judgement and ethical leadership." - AI4QS Report Launch, University of Westminster
CPD pathways worth your time
There are several structured learning routes now available. The RICS five-part course on Harnessing AI and Data in the Built Environment is the most directly relevant for RICS members. It covers AI fundamentals, data governance, practical applications in surveying, and governance frameworks. It's foundation-level, accessible without a technical background, and aligned with the RICS standard.
For those wanting formal certification, ISO/IEC 42001 certification schemes are now available through multiple accredited bodies. This is most relevant for firms with structured AI deployment programmes rather than individual practitioners.
The EU AI Act's Article 4 literacy obligation doesn't mandate specific certifications, but it does require documented, proportionate training. Any CPD you undertake on AI governance should be recorded and retained. Evidence of training is evidence of compliance.
A practical governance checklist for individuals
Use this as a starting point, not a ceiling. Know which AI tools you use and their limitations. Understand how each tool was trained and what data it uses. Document the rationale for accepting or overriding AI outputs on every significant decision. Disclose AI use to clients in advance, in writing. Keep records of your AI-related CPD. Review your firm's AI policy at least annually.
That's not a heavy compliance burden. It's professional due diligence applied to a new category of tool. The professionals who build these habits now will find AI governance genuinely manageable; those who defer will find it becomes an emergency.
The built environment is a sector defined by professional judgement. AI doesn't change that. It raises the stakes for making judgement well, and for documenting it clearly.
Your AI Competence Development Starts Here
The Responsible with AI programme is built for built environment professionals. Role-specific guidance, governance frameworks, RICS-aligned training and practical tools for QS, surveying, architecture and FM practitioners.
Explore the Programme → Responsible with AI