Three Countries, Three Approaches: How the World Is Regulating Construction AI

By Responsible with AI Team | Last updated: 29 Apr 2026 | 5 min read

Three major jurisdictions are each trying to get AI regulation right. None of them is doing it the same way. And UK construction firms operating internationally need to understand what that means for them.

The UK is betting on flexibility. The EU is betting on rules. Singapore is betting on infrastructure. Each approach has something to teach. And each has gaps.

The UK: Trust the Sector, Trust the Regulator

The UK's position is principles-based and deliberately non-prescriptive. The 2023 DSIT white paper established five cross-sectoral principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These are not laws. They are guidance for existing regulators to apply within their own domains.

For construction and surveying, the main regulatory action has come from the professions themselves. RICS published its global professional standard on the responsible use of AI in surveying in September 2025, mandatory from 9 March 2026. It requires members to address data governance, procurement due diligence, output reliability, and client transparency wherever AI has a material impact on their work. It is the sharpest regulatory intervention the UK built environment has seen to date.

The advantage of the UK's approach is speed and adaptability. Regulators can respond to specific sector problems without waiting for primary legislation. The disadvantage is fragmentation. There is no single AI authority. Oversight is spread across the ICO, the Building Safety Regulator, RICS, and others. Firms working across multiple sectors face multiple, sometimes conflicting, expectations.

Three Regulatory Timelines

Aug 2024 EU AI Act enters into force

Mar 2026 RICS AI standard becomes mandatory for members

Oct 2026 Singapore CORENET X mandatory for all new building projects

The EU: Rules First, Sector Second

The EU AI Act entered into force on 1 August 2024, and it takes a fundamentally different approach. It classifies AI systems by risk level and applies mandatory requirements to high-risk applications regardless of the sector they operate in.

For construction, the relevant classification sits primarily in safety-critical systems. AI used as a safety component in critical infrastructure, including building management and fire safety, is classified as high-risk. Article 6 of the AI Act is specific: if an AI system is used as a safety component of a regulated product, it attracts strict requirements including human oversight, high-quality training data, and documented risk management.

That has direct implications. An AI fire detection system sold into the EU market likely falls into the high-risk category. The vendor must demonstrate conformity before placing it on the market. The deployer must maintain logs and implement human oversight protocols. Penalties under the Act run as high as €35 million or 7% of global annual turnover for the most serious breaches, with fines of up to €15 million or 3% for non-compliance with high-risk obligations.

The EU model is comprehensive but slow. Firms working across UK and EU jurisdictions now face two separate compliance regimes with different requirements and different timelines.

Singapore: Build the Infrastructure, Set the Standards

Singapore's approach through the Building and Construction Authority (BCA) is the most integrated of the three. Rather than regulating AI as a standalone technology, Singapore has embedded digital delivery requirements into the regulatory approval process itself.

CORENET X, the BCA's integrated digital platform, became mandatory for all new projects over 30,000 sq m in October 2025, with coverage of all new building projects expected by October 2026. It requires BIM submissions in standardised formats, consolidates approvals across seven regulatory agencies, and is expected to reduce approval times by 20%. AI is already processing inspection reports 20% faster within the platform.

The BCA approach treats digital tools and AI not as something to regulate after the fact, but as something to build into the process from the start. Data standards, interoperability requirements, and digital delivery accreditation schemes are already in place.

"Singapore did not wait for AI to cause problems before acting. It built the digital infrastructure that makes responsible AI use the default."

What UK Firms Should Take From Each

From the EU: take the risk classification framework seriously. Even if you are UK-based, EU standards set the bar for what high-quality AI deployment looks like in safety-critical contexts. If your AI fire safety system would not meet EU high-risk requirements, ask yourself whether it should be trusted in a UK building.

From Singapore: data standards matter. The reason CORENET X works is that BCA mandated interoperable data formats before asking firms to submit digitally. UK firms should be pushing vendors for the same level of data transparency.

From the UK itself: the RICS standard is the most practically useful regulatory intervention in the built environment right now. If you are a surveying firm and you have not read it, that is where to start. It is not aspirational guidance. It is a mandatory professional standard with compliance consequences.

The three approaches are converging more than they diverge. All three require human oversight of high-stakes AI. All three demand transparency about how AI-generated outputs were produced. The firms that build those capabilities now will be in the best position regardless of which regulatory framework they are operating under.

Building AI Competence in the Built Environment

The Responsible with AI programme equips built environment professionals to evaluate, procure, and govern AI tools. Explore how to build the right frameworks for your firm.

Explore the Programme →
