ISO 42001 Just Got Its First Construction Certification. Here’s Why That Matters

By Micah Stennett | Last updated: 2 Mar 2026 | 5 min read

When you think of AI governance pioneers, you probably picture big tech. Google. Microsoft. Anthropic. You probably do not picture a construction progress-tracking platform based in Austin, Texas.

But that is exactly the story. AI Clearing became the world’s first company to achieve ISO 42001 certification — the international standard for AI Management Systems — certified by SGS, the world’s leading testing and inspection body. A construction technology firm. First in the world.

That is not a footnote. That is a signal worth paying attention to.

What Is ISO 42001 and What Does It Actually Require?

ISO/IEC 42001:2023 is the first certifiable international standard for an Artificial Intelligence Management System (AIMS). Published in December 2023, it gives organisations a structured framework for governing AI responsibly across its full lifecycle — from development through deployment to retirement.

It follows the same Plan-Do-Check-Act logic as ISO 27001 (information security) and ISO 9001 (quality management), so if your firm already holds those, you have a head start. But it introduces controls that are specific to AI: algorithmic bias mitigation, explainability requirements, human oversight mechanisms, and impact assessments that consider effects on individuals and communities.

To get certified, an organisation must pass an audit against 38 distinct controls organised into 9 objectives, covering risk and impact assessments, AI system lifecycle management, data governance, and supplier oversight. It is not a paper exercise. The audit is detailed and the certification is valid for three years, with annual surveillance audits in between.

AI Clearing, whose platform uses AI to automate construction progress tracking and real-time quality control on large infrastructure projects such as solar farms, railways, and road schemes, completed the full certification process in just six months.

Why This Connects to RICS, the EU AI Act, and Your Firm

This is not just an interesting fact about a Texas startup. It directly connects to two frameworks that will affect every built environment professional in the UK.

The RICS professional standard on the responsible use of AI came into force on 9 March 2026. It is mandatory for all RICS members and regulated firms worldwide. It requires governance policies, AI risk registers, documented due diligence on AI tools, and transparency with clients about AI use. The overlap with ISO 42001 is substantial — both demand structured risk management, clear accountability, and evidence that humans remain in the loop.

Meanwhile, the EU AI Act and ISO 42001 are increasingly treated as a pairing: the Act defines what must be achieved; the standard provides the operational framework for demonstrating it. As ISACA puts it, the Act is the rulebook and ISO 42001 is the operating system that makes compliance repeatable and auditable. For UK firms with any European clients or projects, that matters now.

“ISO 42001 certification proves that our AI system is managed according to the newest and most complex standards, and that our AI models are trustworthy and thoroughly verified before release.”

The Quiet Competitive Advantage

Here is the practical angle for QS practices, FM firms, and built environment consultancies thinking about AI tools right now.

When you deploy AI, whether that is a cost estimating tool, a contract review assistant, or a progress monitoring platform, you take on responsibility for how it performs. If a quantity surveyor uses an AI tool that produces a biased estimate, or a facilities manager deploys a predictive maintenance system that fails to flag a critical defect, the professional accountability sits with the firm. Not the vendor.

ISO 42001 provides the framework for managing that accountability systematically. It forces the right questions: Is this AI tool fit for purpose? What are the failure modes? Who reviews the outputs? What is the escalation path? Where is the audit trail?
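The questions above can be captured as a structured risk-register entry rather than left to ad hoc judgement. Here is a minimal, illustrative sketch in Python — the class, field names, and the tool name are assumptions for illustration, not prescribed by ISO 42001 or the RICS standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRiskEntry:
    """One row in a firm's AI risk register (illustrative structure only)."""
    tool_name: str
    intended_use: str                                  # Is this AI tool fit for purpose?
    failure_modes: list = field(default_factory=list)  # What are the failure modes?
    output_reviewer: str = ""                          # Who reviews the outputs?
    escalation_path: str = ""                          # What is the escalation path?
    audit_log: list = field(default_factory=list)      # Where is the audit trail?

    def open_questions(self):
        """Return the governance questions still unanswered for this tool."""
        gaps = []
        if not self.failure_modes:
            gaps.append("failure modes not assessed")
        if not self.output_reviewer:
            gaps.append("no named output reviewer")
        if not self.escalation_path:
            gaps.append("no escalation path")
        return gaps

# Hypothetical entry for a cost-estimating tool:
entry = AIToolRiskEntry(
    tool_name="CostEstimatorX",  # hypothetical tool name
    intended_use="early-stage cost estimates, human-checked before issue",
    failure_modes=["biased estimate on atypical project types"],
    output_reviewer="senior QS",
)
print(entry.open_questions())  # ['no escalation path']
```

Even a structure this simple makes the gaps visible: any entry with open questions is a tool the firm is accountable for but cannot yet evidence control over.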

AI Clearing’s CEO described the process bluntly: “With the certification, we stand out from the crowd of AI companies.” That is true — but the same logic applies to the firms procuring AI. In a market where 37% of employees are already using generative AI without their employer’s knowledge, having a structured AI management system is quickly moving from differentiator to baseline expectation.

Construction got there first. The rest of the built environment needs to catch up.

The Responsible with AI programme is built for exactly this moment — helping built environment professionals understand what ISO 42001, the RICS standard, and the EU AI Act actually require in practice, and how to implement responsible AI governance without it becoming a compliance burden.

Related Blog Post


The Responsible with AI Training Platform offers accessible training on responsible AI principles, enabling professionals to build knowledge in ethical AI practices and governance.