Responsible AI Governance in Practice: From Policy to Site

By Responsible with AI Team | Last updated: 6 May 2026 | 5 min read

Most firms now accept that they need AI governance. Fewer have worked out what that looks like on a real project, in a real firm, with real professionals who have other things to do.

This is the practical version. Not theory. Not high-level principles. Actual steps, tools and formats you can adapt today.

Your AI risk register: what to include

The RICS standard requires regulated firms to maintain an AI risk register, reviewed at least quarterly. Here's what a functional register entry looks like.

For each AI system in use: a unique risk ID; the name and purpose of the tool; the risk category (technical failure, data quality, bias, liability, confidentiality); a likelihood and impact score; the mitigation steps in place; the name of the person responsible for monitoring it; and the review date.
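As an illustration only, here is one way those fields might be captured in a structured form. This is a minimal sketch in Python, assuming a script- or spreadsheet-backed register; the field names, scoring scale and the example entry are illustrative, not prescribed by the RICS standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str        # unique risk ID, e.g. "AI-001"
    tool_name: str      # name of the tool
    purpose: str        # what the tool is used for
    risk_category: str  # technical failure, data quality, bias, liability, confidentiality
    likelihood: int     # e.g. 1 (rare) to 5 (almost certain)
    impact: int         # e.g. 1 (negligible) to 5 (severe)
    mitigations: str    # mitigation steps in place
    owner: str          # named person responsible for monitoring
    review_date: date   # next quarterly review date

# Illustrative entry; names, scores and wording are placeholders, not guidance.
entry = RiskRegisterEntry(
    risk_id="AI-001",
    tool_name="Cost estimation platform",
    purpose="AI-enhanced cost analytics for early-stage estimates",
    risk_category="data quality",
    likelihood=3,
    impact=4,
    mitigations="Outputs benchmarked against historic project data before use",
    owner="J. Smith MRICS",
    review_date=date(2026, 9, 1),
)
```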

Start with the tools your team uses most. Cost estimation platforms with AI components. Document analysis tools. BIM-integrated AI outputs. Procurement recommendation engines. Each one gets a row. Each row gets an owner. That owner is responsible for flagging when the tool's outputs look wrong or when circumstances have changed.

You don't need specialist software for this. A spreadsheet with version control and quarterly review dates works. What matters is that it exists, it's current and the right person is named on it.
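If the register lives in a spreadsheet, the same structure translates directly into rows and columns. A minimal sketch of that approach, assuming a plain CSV file kept under version control; the column names, tool rows and file name are illustrative.

```python
import csv
from datetime import date

# Illustrative register rows; tools, owners and scores are placeholders.
rows = [
    {"risk_id": "AI-001", "tool": "Cost estimation platform (AI analytics)",
     "category": "data quality", "likelihood": 3, "impact": 4,
     "mitigation": "Benchmark outputs against historic project data",
     "owner": "J. Smith MRICS", "review_date": date(2026, 9, 1).isoformat()},
    {"risk_id": "AI-002", "tool": "Document analysis tool",
     "category": "confidentiality", "likelihood": 2, "impact": 4,
     "mitigation": "No client-identifiable data uploaded; supplier agreement in place",
     "owner": "A. Patel MRICS", "review_date": date(2026, 9, 1).isoformat()},
]

# Write the register; keep the file versioned and revisit it each quarter.
with open("ai_risk_register_v1.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```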

Tool evaluation workflow

Before adopting any new AI tool in surveying or construction work, the RICS standard requires written due diligence. Here's a practical workflow that takes under two hours.

Step one: define the use case. What decision will this tool support? What professional judgement will it inform?

Step two: identify the risk level. Does the output affect safety-critical decisions, client financial advice or regulated outputs?

Step three: evaluate the supplier. Can they answer questions about training data, bias testing, data handling, liability under their terms? Document their answers.

Step four: determine validation requirements. What does the named qualified professional need to check before accepting an output? Write it down.

Step five: log it on your AI systems register.

This process is not onerous. A two-page written record per tool is sufficient for most purposes. The discipline is the point, not the length.
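For teams that prefer a structured record over free text, the five steps could be captured along these lines. A minimal sketch, assuming a simple dictionary-based record; the questions and field names are taken from the steps above, and the example values are placeholders.

```python
from datetime import date

def evaluate_tool(use_case, risk_level, supplier_answers, validation_steps):
    """Assemble a written due-diligence record for a proposed AI tool.

    use_case          -- the decision or judgement the tool will support (step one)
    risk_level        -- e.g. "safety-critical", "client financial advice", "low" (step two)
    supplier_answers  -- supplier responses on training data, bias testing,
                         data handling and liability terms (step three)
    validation_steps  -- what the named qualified professional must check before
                         accepting an output (step four)
    """
    record = {
        "date": date.today().isoformat(),
        "use_case": use_case,
        "risk_level": risk_level,
        "supplier_due_diligence": supplier_answers,
        "validation_requirements": validation_steps,
    }
    # Step five: log it on the AI systems register (here, simply return the record).
    return record

# Illustrative example only.
record = evaluate_tool(
    use_case="Lease review summaries feeding client reporting",
    risk_level="client financial advice",
    supplier_answers={
        "training_data": "Described in supplier documentation",
        "bias_testing": "Supplier confirms periodic testing; reports available",
        "data_handling": "UK/EU hosting; no client data used for training",
        "liability": "Capped under standard terms; reviewed with insurer",
    },
    validation_steps="Qualified surveyor checks clause extraction against the source lease",
)
```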

RICS AI Governance Requirements at a Glance

Written: AI risk register required, reviewed at least quarterly

Named: Qualified surveyor must own every AI-assisted determination

Disclosed: Clients must be informed in writing before AI is used

Client disclosure: what to say

The RICS standard requires that clients be informed in writing, in advance, where AI will be used in delivering services, and told how they can challenge or opt out. This doesn't need to be lengthy or alarming. It needs to be clear and on record.

A simple paragraph in your letter of engagement is sufficient for most work: "In delivering this service, we may use AI-assisted tools to support analysis and recommendations. All outputs are reviewed and validated by a qualified [surveyor/engineer/consultant]. If you would like more information about specific tools used or wish to discuss any concerns, please contact [name]."

Adjust the wording to the level of AI involvement in the specific engagement. Higher-risk work, such as safety assessments or regulated valuations, may warrant a more detailed disclosure. The key is proportionality and documentation.
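If you issue many engagement letters, the standard clause can be kept as a reusable template and adjusted per engagement. A minimal sketch, using the example wording from this section; the role and contact values are placeholders you would fill in yourself.

```python
DISCLOSURE_TEMPLATE = (
    "In delivering this service, we may use AI-assisted tools to support analysis "
    "and recommendations. All outputs are reviewed and validated by a qualified "
    "{role}. If you would like more information about specific tools used or wish "
    "to discuss any concerns, please contact {contact}."
)

def disclosure_clause(role, contact):
    """Fill the standard engagement-letter clause; expand it for higher-risk work."""
    return DISCLOSURE_TEMPLATE.format(role=role, contact=contact)

# Illustrative use: generate the clause for a surveying engagement.
print(disclosure_clause(role="surveyor", contact="J. Smith MRICS"))
```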

"AI outputs cannot be accepted at face value. Reliability decisions must be documented and overseen by a qualified named surveyor. Clients must be informed in writing in advance."

Quarterly review process

Your AI governance doesn't stay current by itself. A quarterly review process keeps it live without consuming significant time. Block two hours per quarter for this. Work through four questions:

Have we adopted any new AI tools since the last review?

Have there been any near-misses, errors or unusual outputs from existing tools?

Have any tools we use changed their functionality or terms of service?

Have relevant regulations or professional standards been updated?

For each yes, update the risk register and policy accordingly. For each no, document that the review took place and found nothing requiring change. That documentation is your compliance evidence if the question ever arises.
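One way to make that evidence automatic is to log the four questions and answers each quarter. A minimal sketch, assuming a simple append-only log file; the questions come straight from the list above, while the file name, reviewer and answers are illustrative.

```python
import json
from datetime import date

QUESTIONS = [
    "Have we adopted any new AI tools since the last review?",
    "Have there been any near-misses, errors or unusual outputs from existing tools?",
    "Have any tools we use changed their functionality or terms of service?",
    "Have relevant regulations or professional standards been updated?",
]

def log_quarterly_review(answers, reviewer, path="ai_governance_reviews.jsonl"):
    """Record a quarterly review: answers to the four questions plus whether action is needed."""
    entry = {
        "date": date.today().isoformat(),
        "reviewer": reviewer,
        "answers": dict(zip(QUESTIONS, answers)),
        # Any "yes" answer means the risk register and policy need updating.
        "register_update_required": any(a.lower().startswith("yes") for a in answers),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Illustrative example: a quarter where nothing required change.
log_quarterly_review(["No", "No", "No", "No"], reviewer="J. Smith MRICS")
```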

Real governance examples from the field

One commercial surveying firm implemented a simple approach after the RICS standard took effect. Two AI tools on their register: a cost database with AI-enhanced analytics, and a lease review tool. Each tool has a named owner. Each output is validated by a qualified surveyor and that validation is recorded on the project file. Client engagement letters were updated with a standard disclosure clause. The whole governance programme took three days to set up and about four hours per quarter to maintain.

That's what proportionate governance looks like for a small professional services firm. It's not a compliance department. It's professional discipline applied systematically.

The firms that treat AI governance as a separate overhead are the ones who struggle. The firms that build it into their normal project and practice management, as another column on the risk register, another clause in the engagement letter, find it becomes routine within a quarter.

Governance doesn't protect you from everything. But it does protect you from the claim that you weren't paying attention. In the built environment, that's a significant distinction.

Turn Governance Policy into Practice

The Responsible with AI programme provides ready-to-use AI risk register templates, client disclosure frameworks, tool evaluation workflows and quarterly review processes, built for the built environment.

Explore the Responsible with AI Programme →
