Year: 2004 | Volume: 11 | Issue: 2 | Page: 75-78
Feasibility and acceptability of objective structured clinical examination (OSCE) for a large number of candidates: Experience at a university hospital
Gamal A Khairy
College of Medicine and King Khalid University Hospital, King Saud University, Riyadh, Saudi Arabia
Date of Web Publication: 30-Jun-2012
Correspondence Address:
Gamal A Khairy, Assistant Professor & Consultant Surgeon, Department of Surgery, College of Medicine and King Khalid University Hospital, P.O. Box 7805, Riyadh 11472
Source of Support: None, Conflict of Interest: None
Abstract
Objective: To assess the feasibility and acceptability of using objective structured clinical examination (OSCE) for a large number of medical students.
Methods: All 291 medical students who had completed the basic surgical course were examined by objective structured clinical examination (OSCE) at the College of Medicine, Riyadh, for the first time. A 5-point scale questionnaire was completed by the examiners at the end of each day of the examination, and the students completed a separate feedback questionnaire.
Results: All students agreed that the organizational aspect of the examination was smooth and the time for each station was adequate. Eighty-six percent of the students agreed that the stations were within the content of the course, 82% agreed that the examination was fair and objective, and 93% wanted this method to replace the traditional examination (written and single long case) in the assessment of third-year medical students. Similar responses were received from the examiners involved in the examination.
Conclusion: OSCE is a practical and acceptable method of assessing medical students' basic surgical skills, even for a large number of candidates, provided the necessary facilities are available at the examination center. Whether OSCE can replace written examinations depends on designing stations that test knowledge adequately in scope and depth, probably at the problem-solving level.
Keywords: OSCE, medical students, surgery exams
How to cite this article:
Khairy GA. Feasibility and acceptability of objective structured clinical examination (OSCE) for a large number of candidates: Experience at a university hospital. J Fam Community Med 2004;11:75-8
Introduction
In recent years, there has been growing dissatisfaction in medical schools with the traditional methods of student assessment based on written examinations and faculty ratings of performance in clinical training. This is because of the limited range of skills assessed through written tests and the psychometric problems associated with the rating of performance. Clinical competence refers to a complex set of skills that includes the abilities to interview, perform a physical examination, make diagnostic and treatment decisions, and communicate with a patient and his or her family while demonstrating good interpersonal skills. The importance of assessing these skills, which is usually not done systematically in medical schools, has been identified by many associations, addressed in several conferences and reports, and such assessment has already been applied in some medical schools in many parts of the world.
The purpose of this prospective cross-sectional study was to examine the feasibility and acceptability of an objective structured clinical examination (OSCE) used, for the first time at our institution, to examine a large number of medical students.
Material and Methods
The total population of 291 medical students who had completed the basic surgical course (mainly history-taking and physical examination) at the College of Medicine, Riyadh, was examined by OSCE.
Forty consultant surgeons (examiners) were involved in the two-day examination, held in four surgical wards. In addition, 8 surgical registrars organized the examination. Students rotated through ten stations in each surgical ward simultaneously, spending 4-5 minutes at each station; on a bell signal, each student moved to the next station. The time assigned was the same for all stations. A further 30 seconds was allowed for the student to move to the next station and for the examiner to finalize comments on the previous student's performance. Where the distance between two stations was long, a rest station was placed between them.
The assessment at each station was limited to the techniques of history-taking, physical examination and differential diagnosis. Stations were either manned, with real or simulated cases, or unmanned, with pictures of clinical cases. The examiners used checklists to record the performance of the students at the manned stations. A 5-point scale questionnaire comprising 12 questions (Appendix I) was completed by all examiners each day immediately after the examination. Another feedback questionnaire was constructed for the students examined. Both questionnaires were in English.
Results
A total of 291 students (208 males and 83 females) were examined. Almost all of the students agreed that the organization of the examination was smooth and that the time allotted to each station was adequate. The examination was described as fair and objective by 86% of the students, and 93% wanted this to be the method of assessment for the third-year surgical course.
All of the examiners agreed that the organization was smooth and the stations were within the scope of the course. The vast majority of the examiners (90%) agreed that the examinations were fair and objective and 78% preferred this method of assessment to the traditional method (written and single long case examination).
Forty clinical cases (real and simulated) were used over the two days of the examination, with a different group of cases used on Day 2. Patients were cooperative despite being examined by a large number of students on the same day.
Discussion
In our institution, the methods used to assess third-year medical students who have completed the basic surgical course are the traditional written and long case clinical examinations. These methods have many shortcomings. First, there is no guarantee that students would be able to use their knowledge in the care of patients or apply their clinical skills in the appropriate situations; several studies have shown that students' clinical performance is rarely observed directly by faculty. Second, with regard to the long case clinical examination, there is great variability both in the patients assigned to students and in the criteria used by individual faculty members in rating students' performance. The current methods, therefore, tend to be largely subjective and not standardized. These negative effects of the traditional methods of assessment have often been reported.
In 1979, Harden described the first objective structured clinical examination (OSCE). This method has dramatically changed the assessment of clinical competence and has had a significant impact on future doctors' training and practice. The OSCE fulfills most of the criteria needed to assess clinical competence, especially because of its greater objectivity and the fact that the areas tested are applied uniformly by the examiners. It was envisaged that students' skills in history-taking and physical examination, the essential content of the basic surgical course for third-year medical students, would be tested, leaving investigations, diagnosis and treatment to be included in the final-year surgical course. In the OSCE, the various components of clinical competence, such as history-taking, examining the abdomen, or commenting on a picture of a patient, are tested in phases; each component is assessed in turn at one of the stations in the examination.
All the examiners who participated in the study agreed that the stations were within the purview of the course, and for our students this method of assessment was highly acceptable. The faculty could decide in advance the items to be examined (history-taking, physical examination and differential diagnosis) and design the stations accordingly. Furthermore, the content, structure and complexity of the examination (e.g. more straightforward cases for junior students) are easily controlled. The use of checklists by the examiners resulted in a more objective assessment, and with ten stations a larger sample of the students' skills was tested. For these reasons, 78% of the teaching staff who responded to the questionnaire recommended this method for examining third-year medical students, even though OSCEs are time-consuming and labor-intensive.
The OSCE is considered a significant contribution to the improvement of methods of testing students' clinical skills in medicine, and is known to be more valid in the assessment of clinical skills in both undergraduate and postgraduate training. OSCEs combine the reality of live clinical interactions with the standardization of problems and the use of multiple observations of each student. Consequently, the OSCE is rapidly replacing other forms of assessment at all levels of medical and health professional education, licensure and certification.
The organization of an OSCE is complex and time-consuming, especially when many stations are used. These and other logistic limitations have restricted its application to smaller groups. Although this method of assessment was used for the first time at our institution, it was successful for a large number of candidates (291 medical students).
Simulated cases, well-controlled by checklists, were used in a few stations (e.g. asking the student to examine a normal abdomen or to take a history from the examiner himself). Standardized patients (SPs) have been used before in the OSCE format. SPs are individuals with or without actual disease who have been trained to portray a medical condition in a consistent manner. SPs can also evaluate skills in interviewing, interpersonal relationships, and communication. With proper training, SPs are known to provide consistent and accurate simulations and recordings of performance by medical students and professionals, and they are considered the gold standard for measuring the competence of students and the quality of physicians' practice. Since this was the first use of the OSCE in our surgical course, no SPs were used, but there are plans to use them in coming examinations.
In conclusion, the OSCE is a practical and acceptable method of assessing medical students' basic surgical skills. If facilities are available (manpower, surgical wards, clinical cases and an enthusiastic organizing committee), a large number of candidates can be accommodated. In our institution, the OSCE is now the method of assessment for third-year medical students who have completed the basic surgical course.
Acknowledgment
The author would like to thank the 3rd-year medical students, the examiners, and both Ms. Cora Rivera and Ms. Arlene Dasco for their expert secretarial assistance.
References
1. Van der Vleuten CPM, Swanson DB. Assessment of clinical skills with standardized patients: State of the art. Teach Learn Med 1990; 2:58-76.
2. Vu NV, Barrows HS, Marcy ML, Verhulst SJ, Colliver JA, Travis T. Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Acad Med 1992; 67:42-50.
3. Muller S (Chairman). Physicians for the Twenty-First Century: Report of the project panel on the general professional education of the physician and college preparation for medicine. J Med Educ 1984; 59: Part 2.
4. Gastel B, Rogers DE, eds. Clinical education and the doctor of tomorrow. New York: New York Academy of Medicine; 1989.
5. Marini CJM. Evaluating the competence of health professionals. JAMA 1988; 260:1057-8.
6. Sibbald D, Regehr G. Impact on the psychometric properties of a pharmacy OSCE: Using first-year students as standardized patients. Teach Learn Med 2003; 15:180-5.
7. Hart J, Harden R, eds. Further development in assessing clinical competence. Montreal: Can Heal; 1987.
8. Davis MH. OSCE: The Dundee experience. Med Teach 2003; 25:255-61.
9. Engel GL. Are medical schools neglecting clinical skills? JAMA 1976; 236:861-3.
10. Stillman P, Regan MB, Swanson DA. Diagnostic fourth-year performance assessment. Arch Intern Med 1987; 19:1981-5.
11. Sternburg JK, Brokway BS. Evaluation of clinical skills: An asset-oriented approach. J Fam Pract 1979; 8:1243-5.
12. Shakun EN. The clinical skills assessment form: A preliminary examination in paediatric examinations. Eval Health Professions 1981; 4:330-7.
13. Largerkvist B, Samuelsson B, Sjolin S. Evaluation of the clinical performance and skill in paediatrics of medical students. Med Educ 1976; 10:176-8.
14. Harden RM, Gleeson FA. Assessment of clinical competence using an objective, structured clinical examination (OSCE). Med Educ 1979; 13:41-54.
15. Hodges B. OSCE! Variations on a theme by Harden. Med Educ 2003; 37:1134-40.
16. Zantman RR, McWhorter AG, Seale NS, Boone WJ. Using OSCE-based evaluation: Curricular impact over time. J Dent Educ 2002; 66:1323-30.
17. Murto SH, MacFadyen JC. Standard setting: A comparison of case-author and modified borderline-group methods in a small-scale OSCE. Acad Med 2002; 77:729-32.
18. Karmer AW, Jansen JM, Zuithoff P, Dusman H, Tan LH, Van der Vleuten CP. Predictive validity of a written knowledge test of skills for an OSCE in postgraduate training for general practice. Med Educ 2002; 36:812-9.
19. Mellroy JH, Hodges B, McNaughton N, Regehr G. The effect of candidates' perceptions of the evaluation method on reliability of checklist and global rating scores in an objective structured clinical examination. Acad Med 2002; 77:725-8.
20. Carpenter JL. Cost analysis of objective structured clinical examination. Acad Med 1995; 70:828-33.
21. Consensus statement of the researchers of clinical skills assessment (RCSA) on the use of standardized patients to evaluate clinical skills. Acad Med 1993; 68:475-7.
22. Stillman PL. Session three: Technical issues: Logistics. Acad Med 1993; 68:464-70.
23. Vu NV, Barrows HS. Use of standardized patients in clinical assessments: Recent developments and measurement findings. Edu Res 1994; 23:23-30.
24. Peabody JW, Luck J, Glassman P. Comparison of vignettes, standardized patients and chart abstraction: A prospective validation study of 3 methods for measuring quality. JAMA 2000; 283:1715-22.