AI Curriculum Framework for Clinical Technology Education in South Africa:
Expert Consensus Study

A multi-round eDelphi study to establish expert consensus on integrating artificial intelligence into South African Clinical Technology education. The instrument comprises 66 statements across eight domains, rated by invited expert panellists.

8 Curriculum Domains · 66 Consensus Statements · ≥75% Agreement Threshold · 5-point Likert Scale

About the Study

This instrument forms part of a modified electronic Delphi (eDelphi) study designed to achieve expert consensus on a curriculum framework for integrating artificial intelligence into Clinical Technology education in South Africa. The study targets all seven Clinical Technology specialisations registered with the Health Professions Council of South Africa (HPCSA): Cardiology, Nephrology, Critical Care, Neurophysiology, Pulmonology, Cardiovascular Perfusion, and Reproductive Biology. The instrument comprises 66 statements across 8 domains, each rated on a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree).

Before beginning, participants rank the eight domains by their own perceived expertise — the survey is then presented in that order. For each domain, panellists rate every statement, provide domain-level ratings across four dimensions (agreement, confidence, feasibility, and priority), and may contribute an open qualitative comment. A panellist profile covering demographics and AI experience is completed once at Round 1 entry.

Between rounds, aggregated results — panel distribution, median, interquartile range, and consensus status — are shared with panellists to support informed deliberation in subsequent rounds. The study follows established reporting guidelines and all responses are anonymous.
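To illustrate how these between-round metrics might be computed, the sketch below summarises one statement's panel ratings. It assumes consensus is operationalised as at least 75% of panellists rating 4 (Agree) or 5 (Strongly Agree); that definition, the function name, and the quartile method are illustrative assumptions, not the study's published analysis code.

```python
from statistics import median, quantiles

# Illustrative sketch only: summarises one statement's 5-point Likert
# ratings into the between-round feedback described above. Assumes
# "agreement" means a rating of 4 or 5 and consensus requires >= 75%
# agreement; the study's operational definitions may differ.
AGREEMENT_THRESHOLD = 0.75

def statement_feedback(ratings: list[int]) -> dict:
    # Quartiles for the IQR (the exact quartile method may differ
    # from the study's analysis plan).
    q1, _, q3 = quantiles(ratings, n=4)
    agreement = sum(r >= 4 for r in ratings) / len(ratings)
    return {
        "distribution": {point: ratings.count(point) for point in range(1, 6)},
        "median": median(ratings),
        "iqr": q3 - q1,
        "agreement": round(agreement, 2),
        "consensus": agreement >= AGREEMENT_THRESHOLD,
    }

# Ten hypothetical panellists rating one statement:
print(statement_feedback([5, 4, 4, 5, 3, 4, 5, 4, 2, 4]))
# {'distribution': {1: 0, 2: 1, 3: 1, 4: 5, 5: 3}, 'median': 4.0,
#  'iqr': 1.25, 'agreement': 0.8, 'consensus': True}
```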

Instrument Development

The consensus statements were developed through a structured, evidence-based process drawing on four categories of source material:

Regulatory and professional guidance

HPCSA Ethical Guidelines on the Use of Artificial Intelligence (Booklet 20, 2025); SAHPRA regulatory requirements for AI/ML-enabled medical devices (2025); Health Professions Act 56 of 1974; Protection of Personal Information Act (POPIA) No. 4 of 2013; Cybercrimes Act 19 of 2020; DCDT National AI Policy Framework (Draft, 2024).

International and continental frameworks

UNESCO AI Competency Frameworks for Students (2024a) and Teachers (2024b); UNESCO Guidance for Generative AI in Education and Research (2023); UNESCO Recommendation on the Ethics of AI (2021); WHO Ethics and Governance of AI for Health (2021); WHO Regulatory Considerations on AI for Health (2023); WHO Guidance on Large Multi-Modal Models (2024); African Union Continental AI Strategy (2024); OECD AI Recommendation (2019); EU AI Act (2024); AAMC Principles for Responsible AI Use in Medical Education (2025).

Peer-reviewed literature and educational frameworks

Clark (2025) AI-Era Bloom's Taxonomy; Perkins, Roe and Furze (2025) AI Assessment Scale; Al-Eyd et al. (2018) curriculum mapping; Russell et al. (2023) AI competencies for health professionals; Masters (2023) AMEE Guide No. 158 on AI ethics in health professions education; Meyers, Durlak and Wandersman (2012) Quality Implementation Framework; Wang (2025) AI in healthcare; National Academy of Medicine (2025) report on generative AI in health and medicine.

Practitioner survey data

Healthcare AI Questionnaire (N = 54 HPCSA-registered Clinical Technologists across all seven specialisations). Survey findings validated the existing statements and prompted four new statements and revisions to four existing ones.

How the Process Works

  1. Receive Your Invitation

    You receive a secure, personalised token link from the research team. No account or password is required — click the link, enter your token, and you are taken directly to the study.

  2. Complete Each Round

    At Round 1 entry you complete a brief panellist profile (once only). You then rank the eight domains by expertise — the survey is presented in your chosen order. For each domain, rate statements on a 5-point scale, provide domain-level ratings (agreement, confidence, feasibility, priority), and optionally add a comment. Progress is saved automatically.

  3. Review Feedback and Return

    After the facilitator closes a round and publishes results, you receive a feedback report comparing your responses with the aggregated panel distribution. In subsequent rounds, prior-round consensus is shown alongside each statement to support your deliberation.
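For a concrete picture of what each round captures, below is a hypothetical data model for the per-domain record described in step 2. All field names are illustrative assumptions; the platform's actual schema is not published here.

```python
from dataclasses import dataclass

# Hypothetical model of what one panellist submits per domain in a round.
# Names are illustrative; the real platform schema is not published here.
@dataclass
class DomainResponse:
    participant_code: str              # random code, e.g. "DEL-XXXXX" (never an email)
    round_number: int                  # eDelphi round this response belongs to
    domain: int                        # 1-8, presented in the panellist's ranked order
    statement_ratings: dict[int, int]  # statement id -> Likert rating (1-5)
    agreement: int                     # the four domain-level dimensions,
    confidence: int                    # each rated on the same 1-5 scale
    feasibility: int
    priority: int
    comment: str | None = None         # optional qualitative comment

# Auto-save would persist partial drafts of this record as ratings arrive,
# allowing an interrupted session to be recovered before submission.
```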

Eight Curriculum Domains

The instrument spans eight domains covering the full scope of AI integration in Clinical Technology education.

  1. Curriculum Philosophy and Foundational Principles
  2. Competency Framework and Learning Outcomes
  3. Specialisation-Specific AI Integration
  4. Pedagogical Strategies and Teaching Approaches
  5. Assessment of AI Competencies
  6. Faculty Development and Institutional Readiness
  7. Regulatory, Ethical, and South African Contextual Considerations
  8. Implementation and Sustainability

Privacy and Anonymity

  • Token-based access. Invitation links are cryptographically signed and expire after 30 days. No login, account, or password is ever required. (A minimal sketch of one possible signing scheme appears after this list.)
  • Participant codes. On first access the system assigns you a random identifier (e.g. DEL-XXXXX). This code — not your email address — is used in all response records and analytical outputs.
  • Email isolation. Your email address is stored only in a dedicated invitation table used solely for sending your invitation and result notifications. It is never written to response, rating, comment, or analytics records.
  • Demographics stored separately. Your panellist profile is held in its own table, isolated from your statement responses and domain ratings.
  • Anonymous analytics. Consensus metrics, exports, and facilitator dashboards reference only participant codes. No personally identifiable information appears in any analytical output.
  • Audit trail. All system actions — round openings and closings, invitation sends, feedback publication, and data exports — are recorded in an append-only audit log for research governance.
  • Auto-save and draft recovery. Responses are saved automatically as you rate. A draft recovery payload preserves your progress if your session is interrupted before submission.
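As a concrete illustration of the token-based access and participant-code points above, here is a minimal sketch using an HMAC-signed payload with a 30-day expiry. The token format, key management, and function names are assumptions for illustration, not the platform's actual implementation.

```python
import base64, hashlib, hmac, json, secrets, string, time

# Minimal sketch of a signed, expiring invitation token (see the
# "Token-based access" point above). Format and key handling are
# assumptions, not the study platform's real implementation.
SECRET_KEY = b"server-side-secret"   # hypothetical signing key
TOKEN_TTL = 30 * 24 * 3600           # links expire after 30 days

def issue_token(invitation_id: str) -> str:
    payload = json.dumps({"inv": invitation_id, "exp": int(time.time()) + TOKEN_TTL})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> str | None:
    """Return the invitation id if the signature is valid and the link unexpired."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                  # tampered or malformed link
    payload = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > payload["exp"]:
        return None                  # expired: token links lapse after 30 days
    return payload["inv"]

def assign_participant_code() -> str:
    # Random identifier used in place of an email in all response records.
    alphabet = string.ascii_uppercase + string.digits
    return "DEL-" + "".join(secrets.choice(alphabet) for _ in range(5))
```

In this arrangement the server can validate a clicked link statelessly: the URL carries only the signed invitation reference, and the participant code, not the email address, is what flows into response records.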

All data is used solely for academic research purposes.