The evolution of healthcare systems globally, and particularly within the African context, has seen a shift in how health professionals are trained and assessed. Traditional models prioritising rote memorisation are no longer sufficient for preparing practitioners for the complex challenges of modern clinical practice. In South Africa, this transition is crystallised in the move toward competency-based education, a model demanding rigorous, evidence-based assessment strategies to ensure graduates are truly "fit for purpose". Central to this transformation is the need for clinical assessment to be valid, reliable and fair.
The Foundation for Professional Development (FPD) offers a Postgraduate Diploma in Health Professions Education and Leadership designed to equip health professionals with the expertise to lead these reforms. Through core modules like Assessment in Health Professions Education, the programme provides the theoretical frameworks and practical tools needed to design assessments that build competence rather than merely measuring recall.
To understand effective clinical assessment, one must engage with the underlying theories of learning. In Health Professions Education, Miller’s Pyramid remains the foundational framework for categorising clinical achievement.
George Miller’s 1990 model proposed that clinical competence is hierarchical. At the base lies "Knows" (foundational knowledge), followed by "Knows How" (application). These are typically assessed via written exams. However, clinical education must reach "Shows How" (demonstration in simulation) and "Does" (workplace performance).
| Name | Detailed Description |
| --- | --- |
| Knows | Foundational knowledge and factual recall of biomedical theory and principles. |
| Knows How | The ability to apply knowledge and theoretical understanding to solve clinical problems or formulate reasoning. |
| Shows How | Demonstration of clinical skills and competence in a controlled, simulated environment, such as an OSCE. |
| Does | Authentic action and independent clinical performance within the real-world workplace context. |
The FPD programme emphasises this hierarchy, training educators to align methodologies with the specific tier intended for measurement; for instance, assessing surgical skill through an essay (Knows How) rather than observation (Does) creates a mismatch that undermines validity.
Effective design is predicated on "constructive alignment", which requires a seamless marriage between intended learning outcomes, teaching methodologies and assessment tasks. In South Africa, where resource constraints often dictate choices, this alignment ensures critical competencies are prioritised.
The Objective Structured Clinical Examination (OSCE) is used to assess the "Shows How" level of Miller’s Pyramid. It provides a standardised environment in which multiple students are assessed on identical tasks, yet its reliability is highly sensitive to station design. Assessments are conducted through OSCE stations: timed, simulated clinical scenarios in which healthcare students or professionals are assessed on specific skills.
Blueprinting maps OSCE stations to the curriculum to ensure representative sampling. A robust blueprint prevents "cool case bias", where educators design stations based on rare pathologies rather than those that are more commonly encountered. In a South African context, blueprinting must consider the local burden of disease, for example, by ensuring respiratory modules test the management of tuberculosis or HIV-related pneumonia rather than obscure conditions.
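In practice, a blueprint can be represented as a simple matrix of stations against the competencies and clinical domains they sample. The sketch below (in Python, with hypothetical station names, competencies, and domains) shows how such a matrix can flag domains a circuit fails to sample or over-represents:

```python
# Hypothetical OSCE blueprint: each station is mapped to the competency
# and clinical domain it samples. All names here are illustrative only.
stations = {
    "Station 1": {"competency": "History taking", "domain": "Respiratory"},
    "Station 2": {"competency": "Examination",    "domain": "Cardiovascular"},
    "Station 3": {"competency": "Counselling",    "domain": "Respiratory"},
    "Station 4": {"competency": "Procedure",      "domain": "Respiratory"},
}

# Curriculum domains the circuit is required to sample, chosen to reflect
# the local burden of disease (e.g. TB/HIV-related respiratory illness).
required_domains = {"Respiratory", "Cardiovascular", "Infectious disease"}

sampled = {s["domain"] for s in stations.values()}
missing = required_domains - sampled
over_represented = [d for d in sampled
                    if sum(s["domain"] == d for s in stations.values()) > 2]

print("Unsampled domains:", missing)
print("Over-represented:", over_represented)
```

Run against this toy blueprint, the check reveals that no station samples infectious disease while three of four stations test respiratory skills, exactly the kind of imbalance blueprinting is meant to catch before the examination runs.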
A valid OSCE station requires three components:
The Station Stem: Clear instructions to the student. Long vignettes can cause "construct-irrelevant variance", where a student fails due to cognitive overload rather than a lack of skill.
Standardised Patients (SPs): SPs must portray scenarios consistently across all candidates. In diverse societies like South Africa, portrayals must account for cultural and linguistic variations to remain socially accountable.
The Scoring Tool: Educators must choose between binary checklists (Done/Not Done) and global rating scales. While checklists offer strong inter-rater reliability, global rating scales often better capture the nuances of a student’s clinical expertise, such as fluency and judgement.
Consistency between different examiners is a major challenge. The FPD programme addresses this by training educators in "calibration" sessions, where examiners discuss discrepancies to reach a shared understanding of criteria.
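The effect of calibration can also be monitored quantitatively. A common measure of inter-rater agreement is Cohen's kappa, sketched below for two examiners scoring the same performance on ten binary checklist items (the scores are illustrative, not from any real examination):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring binary (Done=1 / Not Done=0) items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both mark 1, plus probability both mark 0.
    p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Two examiners scoring the same videotaped performance during calibration.
examiner_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.52: moderate agreement
```

A kappa near 0.5 despite 80% raw agreement shows why chance-corrected statistics matter: repeating the exercise after a calibration discussion lets a programme verify that examiner training is actually narrowing the Hawk/Dove gap.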
The Health Professions Council of South Africa (HPCSA) has adopted the AfriMEDS framework to guide training. Adapted from the Canadian CanMEDS model, AfriMEDS defines seven critical roles that health practitioners need to fulfil:
Healthcare Practitioner: The central role of integrating knowledge and skill.
Communicator: Building relationships and information exchange.
Collaborator: Multi-disciplinary teamwork.
Leader and Manager: Resource management and leadership.
Health Advocate: Commitment to social responsibility.
Scholar: Lifelong learning and evidence-based practice.
Professional: Ethical practice and integrity.
FPD supports educators in this pedagogical conceptual change, moving from discipline-centred teaching toward service-centred ideologies that produce practitioners capable of advocating for equity in a resource-constrained system.
Unconscious bias can unfairly disadvantage students. Documented biases include the Hawk/Dove Effect (strict vs. lenient examiners), the Halo/Horn Effect (one trait clouding total judgment), and the Contrast Effect (comparing a student to the previous high-performer rather than the standard).
Mitigation requires perspective-taking and individuation to focus on specific performance data rather than stereotypes. Systemically, institutions should use competency-based language rather than subjective adjectives. FPD provides the skills to audit institutional practices, identify trends in bias, and implement fairer systems.
Rubrics explicitly describe performance levels and serve both summative and formative purposes. When shared in advance, they demystify the process, helping students identify their own strengths and weaknesses.
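This dual summative/formative purpose can be illustrated with a small scoring sketch (the criteria, levels, and descriptors below are hypothetical): the same analytic rubric yields both a mark and descriptive feedback a student can act on.

```python
# Hypothetical analytic rubric: each criterion maps performance levels
# (0-3) to descriptors the student can act on. Content is illustrative.
rubric = {
    "Hand hygiene":       ["Not performed", "Attempted, incomplete",
                           "Correct technique", "Correct and explained to patient"],
    "History taking":     ["No structure", "Partial structure",
                           "Systematic", "Systematic and patient-centred"],
    "Clinical reasoning": ["No differential", "Single diagnosis only",
                           "Reasonable differential", "Prioritised differential"],
}

def score_and_feedback(levels):
    """Return a summative percentage and formative, descriptive feedback."""
    max_level = 3
    total = sum(levels.values()) / (max_level * len(levels)) * 100
    feedback = {c: rubric[c][lvl] for c, lvl in levels.items()}
    return round(total, 1), feedback

mark, feedback = score_and_feedback(
    {"Hand hygiene": 3, "History taking": 2, "Clinical reasoning": 1})
print(mark)                            # 66.7
print(feedback["Clinical reasoning"])  # "Single diagnosis only" -> remediation target
```

Unlike a bare percentage, the feedback dictionary tells the student precisely which behaviour to remediate, which is what the source text means by promoting feedback literacy.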
The transition to electronic rubrics offers timeliness and data analytics, allowing institutions to track performance trends. However, implementation in Africa faces systemic barriers, such as limited internet connectivity. FPD trains educators on the strategic, contextually appropriate use of technology to ensure these tools are sustainable.
Much clinical training in South Africa occurs in rural districts. This decentralised model is essential but faces challenges, such as high clinical workloads, which can lead to "failure to fail", where overburdened supervisors pass underperforming candidates. FPD prepares educational leaders to manage these platforms effectively, ensuring quality assurance and accreditation standards are met regardless of geography.
The complexities of clinical assessment, from Miller’s Pyramid to bias mitigation, show that being an excellent clinician does not automatically make one an excellent educator. Health professions education is a distinct discipline requiring specialised leadership.
The FPD Postgraduate Diploma in Health Professions Education and Leadership provides the pathway for health professionals to acquire these skills. By mastering assessment principles, educators move beyond the theatre of examinations to build systems that foster true competence and excellence, contributing to a more robust, equitable healthcare system for South Africa.
Blueprinting ensures the examination is a representative sample of the curriculum. By mapping stations to specific competencies and clinical systems, it prevents over-representation of certain topics and ensures students are assessed on critical skills required for safe practice. This reduces the "luck of the draw" and ensures the exam measures what it is intended to measure.
The "Hawk/Dove" effect is the tendency of some examiners to be consistently stricter (hawks) or more lenient (doves) than colleagues. It is addressed through mandatory examiner training and calibration exercises, in which raters score the same performance and discuss their reasoning to reach consensus on standards.
A rubric provides descriptive feedback that identifies exactly where a student performed well and where they fell short. Unlike a simple percentage, a rubric clarifies which specific behaviours need remediation, promoting feedback literacy and helping students become self-regulated learners.
The primary challenges are systemic, including the "digital divide", where institutions or students lack reliable internet access. There is also a pedagogical gap, as many educators lack training in using digital tools effectively. Successful implementation requires a system-wide approach that includes technological support and faculty development.
AfriMEDS lists "Communicator" and "Collaborator" as core roles, requiring them to be assessed with the same rigour as technical skills. This shift demands authentic methods, such as OSCE stations dedicated specifically to patient counselling or team-based peer assessments that measure professional integrity.