Every industry claims AI governance matters. But most sectors have the luxury of iterating. If an AI tool produces a bad product recommendation, you fix it. If an AI hiring tool discriminates, you can catch it before lasting harm is done.
Construction doesn't work like that. The built environment is different in ways that make AI failures uniquely dangerous. Here's why AI governance in construction is not optional background infrastructure. It's a matter of life, liability and legacy.
Decisions that can't be undone
AI in construction increasingly influences decisions that are literally built into the physical world. A structural calculation error embedded in a design. A cost estimate that misses a material hazard. A safety system that approves a substandard fixings specification.
Unlike a marketing campaign or a software update, a building can't be easily patched. If an AI-assisted decision in the design phase turns out to be wrong, you may not discover it for decades, and by then the building is occupied, the original team has dispersed and liability is fiendishly difficult to assign.
The RICS AI in Construction report notes that AI is being explored for design optioneering, cost management, safety monitoring and procurement. All of these touch areas where errors have physical and legal consequences. Yet 74% of construction firms report limited or no AI governance preparation.
Asset lifecycles that stretch 40 to 60 years
Most AI governance thinking focuses on the here and now. But built assets outlast the technology, the firm and the regulatory regime that existed when they were constructed. A building designed with AI assistance in 2026 will still be occupied in 2066.
That creates a governance challenge no other industry faces at the same scale. The data used to train a construction AI tool may become outdated. The outputs embedded in project records may be inscrutable without the original model. The professional who validated them may be retired or unreachable.
The RICS AI standard addresses this through requirements for documented AI decision records and transparent procurement due diligence. These aren't bureaucratic boxes. They're the audit trail that protects the profession in year 40, not just year one.
Construction AI Risk in Numbers
45%: of construction firms have no AI implementation at all (RICS, 2025)
40-60 yrs: typical operational lifespan of a major built asset
6%: of firms see AI value in improving safety and wellbeing (RICS, 2025)
Multi-party liability chains
In most industries, there's a relatively short line between the AI system and the accountable party. In construction, accountability is distributed across architects, structural engineers, quantity surveyors, principal contractors, subcontractors, clients and building owners. When AI is layered in, that chain gets more complex, not less.
Who is liable if an AI-generated cost estimate proves materially wrong? The QS who validated it? The firm that licensed the tool? The developer who relied on it to sign off a contract? The RICS standard is clear: a named qualified surveyor must own every AI-assisted output. But that only resolves liability within the surveying firm. The wider chain remains contested territory.
"AI tools cannot replace the surveyor's professional judgement, skill and scepticism. A qualified named surveyor must own every AI-assisted determination."
The Grenfell lesson for digital governance
The Grenfell Tower fire of 2017 exposed what happens when accountability gaps exist in complex systems. Multiple parties, fragmented oversight, unclear responsibility for safety-critical decisions. The resulting Building Safety Act 2022 introduced the concept of the golden thread: a continuous, accessible, accurate record of safety information throughout a building's lifecycle.
The parallel for AI governance is direct. When AI systems are used in safety-relevant decisions, there must be a clear, auditable record of what the system did, who validated it and what the basis was. The Building Safety Act doesn't mention AI by name. But its principles of traceability, accountability and competence apply to AI-assisted decisions just as they apply to design and construction choices.
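To make this concrete, here is a minimal sketch of what one entry in such an auditable record might capture. Neither the RICS standard nor the Building Safety Act prescribes a schema; every field name, the tool name and the named surveyor below are illustrative assumptions only.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class AIDecisionRecord:
    """One auditable entry in a project's golden thread of AI-assisted decisions.

    Illustrative only: no schema is prescribed by the RICS standard
    or the Building Safety Act.
    """
    tool_name: str        # which AI system produced the output
    tool_version: str     # exact version, so the output can be traced decades later
    decision: str         # what the system recommended or calculated
    basis: str            # the inputs and data the recommendation relied on
    validated_by: str     # the named qualified professional who owns the output
    validation_date: str  # ISO date of human sign-off
    accepted: bool        # whether the recommendation was adopted

    def to_json(self) -> str:
        """Serialise to a plain, model-independent format for long-term storage."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example: a cost-estimation output validated by a named surveyor.
record = AIDecisionRecord(
    tool_name="cost-estimator",
    tool_version="2.3.1",
    decision="Approved materials specification rev C",
    basis="Supplier price data to Q2 2026; loads from structural design model",
    validated_by="J. Smith MRICS",
    validation_date="2026-03-14",
    accepted=True,
)
print(record.to_json())
```

The point of the sketch is durability: a record like this can be read in year 40 without access to the original model, which is exactly the gap the golden thread principle is meant to close.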
Any firm using AI in higher-risk building work should be asking whether their AI decision records could withstand the kind of scrutiny that Grenfell has placed on construction documentation. If the answer is no, that's a governance failure waiting to surface.
Where failures cascade
Think through a single scenario. An AI-assisted procurement tool recommends a materials specification based on training data that predates a supply chain disruption. A subcontractor accepts the recommendation without review. The materials are installed. An inspection later flags a compliance failure. Who bears the cost? Who bears the liability? Who even has the records to prove what the AI recommended and what was validated?
This is not a hypothetical. Supply chain AI is being deployed across construction right now. The tools are moving faster than the governance. That gap is where the risk lives.
The sector has survived decades of poor documentation culture. The Building Safety Act is already forcing change on that front. AI governance needs to move at the same pace. Because the next Grenfell-level inquiry may not be about cladding. It could be about an AI recommendation that nobody questioned and nobody documented.
Govern AI Before It Governs You
The Responsible with AI programme helps built environment firms build the governance structures, risk registers and training programmes that the RICS standard and Building Safety Act now require.