Resources & FAQs

Find answers to common questions and access helpful resources for your Responsible with AI journey

Frequently Asked Questions

What is CAIG Educate?
CAIG Educate is a role-based learning pathway that helps built environment professionals use AI safely, responsibly, and defensibly. It focuses on practical judgement: what's safe to do, what's risky, and what you should be able to evidence if challenged.

What do the five levels cover?
Each level matches how AI shows up in real organisations:
• Level 1 Foundation (Awareness): safe everyday use and core risks
• Level 2 Practitioner (Operational): using AI day-to-day for work outputs
• Level 3 Reviewer: checking AI-assisted work before it goes out
• Level 4 Steward: governance-minded oversight (controls, evidence, escalation)
• Level 5 Sponsor: leadership decision-making, risk appetite, accountability and adoption strategy

Do I need to complete every level?
No. Most people take the level that matches their role today. Organisations often assign levels by responsibility (e.g., practitioners take Practitioner; line managers take Reviewer; governance leads take Steward).

How long does the pathway take?
It depends on your starting level. Levels are designed to be completed at your own pace, with an assessment at the end. Courses are provided on a lifetime access basis, allowing for updates, reassessments and refreshers.

What will I learn?
CAIG Educate is built around defensible professional practice, not tech hype. You learn:
• How to spot common AI failure modes (confident errors, bias, missing context)
• How to stay on the right side of confidentiality and data risk
• How to verify, disclose, and quality-check AI-assisted outputs
• How oversight works (who reviews what, what evidence matters)

Which skills does the pathway build?
Across the levels, you'll build capability in:
• AI fundamentals (what it is/isn't)
• Risk judgement (what's safe vs risky)
• Data and confidentiality
• Verification and quality control
• Disclosure and accountability
• Review workflows and sign-off
• Governance and evidence

Is the content tailored to the built environment?
Yes. Examples and scenarios are framed around typical built environment workflows (commercial management, QS tasks, reporting, tendering, change, FM/asset, client deliverables) so the learning transfers straight into work.

Does the pathway cover safe use or governance?
Both, but in the right order. Early levels teach safe use and good habits. Higher levels focus on review, controls, and governance responsibilities.

Are there assessments?
Yes. Each level includes short checks and a final assessment focused on workplace judgement (not trivia). The aim is to confirm you can apply safe habits under realistic pressure.

What does certification at a level mean?
It means you've demonstrated competence at that level's scope (e.g., "I can safely use AI for X tasks" or "I can review AI-assisted work for Y risks"). It's a capability signal, not a legal guarantee of compliance.

Is the training CPD accredited?
Yes. Responsible with AI is CPD accredited.

Which standards and frameworks does the pathway align with?
The pathway is designed to be standards-aware and evidence-minded, including:
• RICS professional standard on responsible AI use in surveying practice (effective 9 March 2026)
• ISO/IEC 42001 concepts (AI management system themes: governance, transparency, competence, continual improvement)
• UK Government AI Playbook principles (safe, responsible, effective use)
• EU AI Act themes (risk-based approach and AI literacy expectations, phased in over time)

Does completing the training make my organisation compliant?
Training supports compliance, but it's not the whole story. Compliance depends on your tools, policies, approvals, oversight, and record-keeping. This pathway strengthens the human capability part: safe use, review discipline, and defensible habits.

Why does AI training matter now?
Because regulators and governance bodies increasingly expect organisations to ensure that people using AI understand the risks and follow safe practice. The EU AI Act, for example, introduces AI literacy requirements as part of its phased application.

Will I need to share real project data during training?
No. We teach using safe examples. As a rule: don't paste anything sensitive, identifiable, or contractually restricted into AI tools unless your organisation has explicitly approved it.

Can I trust AI outputs?
AI can be useful, but it can be confidently wrong. The course teaches "check before you ship" habits so outputs are verified, assumptions are clear, and accountability stays human.

Should I disclose when AI has been used?
Yes, particularly where it affects judgement, risk, or client expectations. We provide simple disclosure patterns that are professional and proportionate (not alarmist).

Can the pathway be mapped to different roles?
Yes, that's the point of the pathway. You can map learning to responsibility: practitioners, reviewers, stewards, and sponsors each have a distinct competency target.

Can we tailor the training to our organisation?
Yes. Organisations can add local rules (approved tools, red lines, escalation routes) so the learning matches "how we do it here".

Is reporting available for organisations?
Reporting is planned (completion status by cohort/level, certificates, and renewal reminders).

How long do I keep access?
You retain access to the level you purchased, including updates, for as long as the platform remains available and your account is active.

Will the content be updated over time?
Yes, the pathway is designed to evolve as standards and expectations mature, so learners can return and refresh without starting from scratch.

How do I contact support?
Email support is available at support@responsiblewithai.com. Our typical response window is 24 hours.

Still Have Questions?

Can't find what you're looking for? Our team is here to help. Get in touch and we'll respond as soon as possible.

Contact Us

The Responsible with AI Training Platform offers accessible training on responsible AI principles, enabling professionals to build knowledge in ethical AI practices and governance.