Journal of Family & Community Medicine
MEDICAL EDUCATION
Year: 2005  |  Volume: 12  |  Issue: 2  |  Page: 101-105

An audit of assessment tools in a medical school in eastern Saudi Arabia


1 Department of Internal Medicine, College of Medicine, King Faisal University, Dammam, Saudi Arabia
2 Department of Pediatrics, College of Medicine, King Faisal University, Dammam, Saudi Arabia
3 Department of Surgery, College of Medicine, King Faisal University, Dammam, Saudi Arabia

Date of Web Publication: 30-Jun-2012

Correspondence Address:
Khalid U Al-Umran
P.O. Box 40140, Al-Khobar 31952
Saudi Arabia

Source of Support: None, Conflict of Interest: None


PMID: 23012084

   Abstract 

Background: Assessment has a powerful influence on curriculum delivery. Medical instructors must use tools which conform to educational principles, and audit them as part of curriculum review.
Aim: To generate information to support recommendations for improving curriculum delivery.
Setting: Pre-clinical and clinical departments in a College of Medicine, Saudi Arabia.
Method: A self-administered questionnaire was used in a cross-sectional survey to determine whether the assessment tools in use met basic standards of validity, reliability and currency, and whether feedback to students was adequate. Cost, feasibility and tool combinations were excluded.
Results: Thirty-one of the 34 courses were evaluated. All 31 respondents used MCQs, especially the one-best (28/31) and true/false (13/31) subtypes. Test questions were mostly selected by groups of teachers. Pre-clinical departments sourced equally from "new" (10/14) and "used" (10/14) MCQs; clinical departments relied on 'banked' MCQs (16/17). Departments decided pass marks (28/31) and chose the College-set 60%; the timing was pre-examination in 13/17 clinical but post-examination in 5/14 pre-clinical departments. Of six essay users, five used model answers but only one used double marking. OSCE was used by 7/17 clinical departments; five provided checklists. Only 3/31 used an optical reader. Post-marking review was done by 13/14 pre-clinical but only 10/17 clinical departments. Difficulty and discriminating indices were determined by only 4/31 departments. Feedback was provided by 12/14 pre-clinical and 7/17 clinical departments. Only 10/31 course coordinators had copies of the examination regulations.
Recommendations: The MCQ with a single best answer, if properly constructed and adequately critiqued, is the preferred tool for assessing the theory domain. However, there should be fresh questions, item analyses, comparisons with previous results, optical reader systems and double marking. Departments should use the OSCE or OSPE more often. Long essays, true/false, fill-in-the-blank-spaces and more-than-one-correct-answer questions can be safely abolished. Departments or teams should set test papers and take decisions collectively. Feedback rates should be improved. A Center of Medical Education, including an Examination Center, is required. Fruitful future studies could include a repeat audit, the use of "negative questions" and the number of MCQs per test paper. A comparative audit involving other regional medical schools may be of general interest.

Keywords: Assessment technique, curriculum review, MCQ


How to cite this article:
Al-Rubaish AM, Al-Umran KU, Wosornu L. An audit of assessment tools in a medical school in eastern Saudi Arabia. J Fam Community Med 2005;12:101-5

How to cite this URL:
Al-Rubaish AM, Al-Umran KU, Wosornu L. An audit of assessment tools in a medical school in eastern Saudi Arabia. J Fam Community Med [serial online] 2005 [cited 2020 Oct 21];12:101-5. Available from: https://www.jfcmonline.com/text.asp?2005/12/2/101/97645


Introduction


Assessment is a critical component of instruction. Properly used, it can help institutions attain key curricular objectives. The aims of assessment include communicating what institutions see as important, and helping them monitor the program by providing feedback on the extent to which teaching results in learning. It can also disclose instructional gaps and encourage learners to read broadly and participate actively as and when educational opportunities become available. [1]

The impact of decisions on how and when to evaluate learners' knowledge and competence cannot be overstated. Tests are a powerful motivator, and learners tend to study what they believe instructors value. ("Assessment drives learning.") Thomas Huxley was quoted as saying: "Students work to pass, not to know. They do pass but they do not know." [2]

Because assessment has a powerful influence on all key aspects of learning and curriculum delivery, tools that reinforce educational goals should be used. Medical instructors must "ensure that assessment tools and their use conform to principles and procedures of educational science, and seek to improve the tools they intend to use." [2]

Thus, an audit of assessment tools should be an integral part of formal curriculum review. Furthermore, the last decade has seen an evolution of assessment tools in medical education from the traditional ones to more sophisticated tools such as the Objective Structured Clinical Examination (OSCE), the portfolio approach and hi-tech simulations. It is necessary to be methodical in program evaluation, which includes re-examining existing tools. [3]

We have performed one aspect of formative program evaluation, namely an audit of the assessment tools being used. It was essentially a quality assessment exercise: an audit of structure, defined as a survey to count, observe and show whether acceptable standards are being met. No other value judgment was required.

The desirable attributes of assessment tools audited in this survey were validity, reliability, effect on students and currency. Excluded were the cost and feasibility of administering the tool, as well as strategies of employing two or more tools to assess the same course. Our primary aim was for the findings to provide valid information and support recommendations to improve the delivery of our curriculum. The secondary aim was an eventual improvement in the performance of learners and of the program.


Subjects and Methods


A prospective survey was conducted in the College of Medicine, King Faisal University, Eastern Saudi Arabia, in May 2004. A self-administered questionnaire was the survey tool. Copies of the questionnaire were distributed to all course coordinators in the College, with the exception of English Language, Islamic Studies and Physical Education. Respondents were allowed to complete the questionnaire without prompting. Data were analyzed with the SPSS software package to determine the basic frequency distribution of the assessment tools currently in use.

Although it covered both Basic Sciences and Clinical streams in the College, the survey was not designed to yield data for statistical comparison. Hence, no statistical analysis was indicated and none was attempted.


Results


A total of 31 courses were evaluated. The type and subtype of written questions used were explored in questions 2 & 3 [Table 1]. More than one response was allowed. MCQs emerged as the commonest type, used by all 31 respondents. Long essays, short essay questions and fill-in-the-blank-spaces were infrequently used, but short notes were employed by 9/14 pre-clinical departments. The "one-best" response was the most frequently used subtype, and the least frequently used were "extended matching items" and "more than one correct answer". However, the "true/false" response was used by 7/14 pre-clinical and 6/17 clinical departments.
Table 1: Distribution of responses to types and sub-types of questions used



Questions 4 & 5 dealt with the source of the selected questions (where and who). The group of teachers responsible for the course was the body which selected test questions most often, with the whole department a close second in clinical departments (10/17), but not in pre-clinical ones, where individual teachers were relied on just as often (3/14). Whereas pre-clinical departments sourced equally from "new" (10/14) and "used" (10/14) MCQs, clinical departments relied heavily on 'banked' ones (16/17).

Questions 6-8 dealt with decisions about pass marks: who, how and when. Departments mostly took the decision: 12/14 pre-clinical and 16/17 clinical. The 60% pre-set by the College was the most frequently chosen pass mark: 10/14 and 15/17 for pre-clinical and clinical departments respectively. Whereas the decision was taken before examinations by most clinical departments (13/17), 5/14 pre-clinical ones did so after the examination.

How test questions were marked was addressed in questions 9-11 [Table 2]. Of the six essay users in pre-clinical departments, five provided model answers but only one used double marking. The OSCE was used by only seven of the 17 clinical departments, five of which provided a checklist. An optical reader was used in only 1/14 pre-clinical and 2/17 clinical departments.
Table 2: Distribution of responses on how questions were marked



Question 12 asked whether examinations were reviewed after marking, and if so, which review activities were used. Of 14 pre-clinical departments, 13 reviewed examinations post-marking. However, only 10 of 17 clinical departments did so. The review activities were as follows: difficulty and discriminating indices were determined by only four departments; seven pre-clinical departments compared scores with previous years and seven with those in the same year, but only five clinical departments performed this review activity.

The form and timing of feedback during in-course assessment were explored in question 13. Of the 17 clinical departments, only seven provided feedback, as against 12 of the 14 pre-clinical ones, where the commonest form was "exams discussed without exam papers" (5/12). The most frequent timing was within one week: 7/12 pre-clinical and 5/7 clinical departments respectively.

Questions 14-16 explored three aspects of the conduct of examinations. Asked if students had prior access to examination papers, the responses were as follows: pre-clinical departments, 13/14 "no" and one "yes"; clinical departments, 16/17 "no" and one "don't know". As to whether students were appropriately informed about the mechanics of the conduct of the examination, all departments replied "yes". However, only 3/14 pre-clinical and 7/17 clinical course coordinators had copies of the booklet on examination regulations.


Discussion


The population studied was representative of the College faculty: all but three course coordinators in the basic sciences and clinical departments were surveyed. Rating forms, questionnaires and performance audits are the methods of measurement used to evaluate educational programs. [3] A self-administered questionnaire was the method used here. It was considered adequate since the study was a basic formative program assessment, not a summative one which would seek to judge performance.

It was gratifying that MCQs were used by all respondents and that the one-best subtype was the most frequent. However, the observed deficiencies included a lack of item analyses and of regular updates of the MCQ bank. Some departments failed to provide answer keys or to use double marking. The use of optical reader systems was negligible.

It was also encouraging that fill-in-the-blank-spaces, extended matching items and more-than-one-correct-answer were among the infrequently used question types. The OSCE was used by seven clinical departments, five of which provided checklists.

However, it was disturbing that in this study long essays were still being used, especially by some pre-clinical departments. This deficiency was aggravated by the non-use of double marking. Medical educationists agree that, as an assessment tool, the long essay is out of date. [4] Paul observed: "Long essay questions have limited reliability and poor validity. They are not an objective measure of learning outcome. They have little role in medical education." [4]

The observed frequent use of the true/false subtype was equally disturbing because of its known flaws. [1] It may be easier to construct than the one-best format, but it is more problematic. The student is guaranteed a 50% chance of guessing the correct answer. Though the original item writer had a particular fact in mind when writing the question, the item can be ambiguous, or the distinction between "true" and "false" blurred and obscure. Thus, subsequent reviewers alter the answer key, rewrite or discard the question more frequently than items written in other MCQ formats. Whereas some ambiguities can be clarified, others cannot. One way to avoid ambiguity is to test for simple recall of isolated facts, although educationists discourage this practice. [1]

It was appropriate and commendable that test questions were most often selected by the group of teachers responsible for the course, or by the department as a whole. The use of MCQs implies that the team responsible for the course should be involved, since it is unlikely that one individual can develop a bank of well-evaluated MCQs. [5] Whereas pre-clinical departments correctly sourced in equal measure from new and used MCQs, the observed heavy reliance on banked MCQs by clinical departments was inappropriate.

Departments took decisions on pass marks and chose the 60% pre-set by the College; this was appropriate. However, while the decision was taken before examinations by most clinical departments, it was disturbing that many pre-clinical departments did so after the examination.

The frequency of post-examination review was satisfactory in pre-clinical but not in clinical departments. However, throughout the College, basic item analyses such as calculating difficulty and discriminating indices were grossly deficient. Similarly, comparing scores with those of previous years or of the same year was infrequently practised, especially in clinical departments.

The low feedback rate observed in clinical departments was unsatisfactory. Learners require regular feedback on what they know or do not know in order to learn from their mistakes. [1],[2],[3] Assessment also affects students' self-esteem, career aspirations and accomplishments. [5] The provision of feedback within one week was a pragmatic approach and is to be encouraged. Course coordinators are urged to obtain copies of the booklet on examination regulations.


Conclusions and Recommendations


No single assessment tool is perfect by itself, and the "pivotal role of assessment in the educational process" [7] cannot be over-emphasized. After a careful review of the literature, the MCQ with a single best answer, if well constructed and adequately critiqued, emerged as the preferred tool for assessing the theory domain. Higher taxonomies such as application of knowledge, integration, synthesis and judgment can also be assessed with it if questions are based on patient management problems. However, it should be enhanced by the following means, among others.

Faculty members in general and course coordinators in particular should become familiar with item analyses and their correct application, as well as appropriate comparisons with previous results. The College should provide optical reader systems to all departments. Until departments have adequate banks of MCQs, each test paper should contain at least 50% new questions. All departments should provide answer keys, and practise double marking.
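As an aid to the recommended familiarity with item analyses, the following is a minimal, illustrative sketch (in Python; it is not part of the audit itself) of how the two item statistics named earlier, the difficulty and discriminating indices, can be computed from marked MCQ responses. It assumes each response has already been coded as 1 (correct) or 0 (incorrect) and uses the conventional upper/lower 27% split for the discrimination index; the data shown are hypothetical.

# Illustrative sketch only: difficulty index (proportion of students answering
# an item correctly) and upper-lower discrimination index for each MCQ item.
# Responses are assumed to be coded 1 = correct, 0 = incorrect.

def item_analysis(scores, group_fraction=0.27):
    """scores: one list per student, each containing a 0/1 mark per item."""
    n_students = len(scores)
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]

    # Rank students by total score and form upper and lower groups (~27% each).
    ranked = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
    k = max(1, round(group_fraction * n_students))
    upper, lower = ranked[:k], ranked[-k:]

    results = []
    for item in range(n_items):
        difficulty = sum(row[item] for row in scores) / n_students  # 0.0 (hard) to 1.0 (easy)
        p_upper = sum(scores[i][item] for i in upper) / k
        p_lower = sum(scores[i][item] for i in lower) / k
        discrimination = p_upper - p_lower  # values near 1.0 discriminate well
        results.append((item + 1, round(difficulty, 2), round(discrimination, 2)))
    return results

# Hypothetical example: six students answering three items.
scores = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]]
for item, difficulty, discrimination in item_analysis(scores):
    print(f"Item {item}: difficulty = {difficulty}, discrimination = {discrimination}")

Items that nearly everyone answers correctly (difficulty close to 1.0) or that high and low scorers answer equally often (discrimination close to 0) are the ones reviewers would normally rewrite or retire; an Examination Center of the kind recommended below could run such analyses routinely after each paper.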

Departments should use the OSCE or OSPE more often, along with checklists. Four question types can be abolished without educational loss to the curriculum: long essays, true/false, fill-in-the-blank-spaces and more-than-one-correct-answer.

At all times, the department as a whole or the team of instructors responsible for the course should set the test paper and collectively take all decisions, including the pass mark. On no account should such matters be left to one individual to finalize. Clinical departments should improve their feedback rates at in-course assessments. A Center of Medical Education, including an Examination Center, needs to be established as soon as possible. This will, among other benefits, permit the College to play an informed role in "BEME" - Best Evidence Medical Education. [6],[7]

The extent to which these recommendations will be implemented remains to be seen. In order to complete the audit cycle, a repeat audit is mandatory. Furthermore, aspects of assessment tools that could be explored in other studies include "negative questions" and the number of MCQs per test paper. Finally, a comparative audit involving other regional medical schools may be of general interest.

 
References

1. Al Umran K. "Format for Written Examinations." Personal communication, July 2003.
2. Abeykoon P. Foreword. K. L. Wig Centre for Medical Education and Technology, All India Institute of Medical Sciences. SEARO, WHO Project I-R/IND HRH 001/LGS; 1995.
3. Wojtczak A. Evaluation of Learning Outcomes. Available from: http://www.iime.org/documents/elo.htm. Accessed 7-8-2003.
4. Paul VK. Essay Questions. In: Assessment in Medical Education: Trends and Tools. K. L. Wig Centre for Medical Education and Technology, All India Institute of Medical Sciences. SEARO, WHO Project I-R/IND HRH 001/LGS; 1995.
5. DAMMCQ: Designing and Managing MCQs. Available from: http://web.uct.ac.za/projects/cbe/mcqman/mcqch2.html.
6. Best Evidence Medical Education. Report of a meeting. Med Teacher 2000;22:242-5.
7. Harden RN, Davis MH, Friedman BD. UK recommendations on undergraduate medical education and the Flying Wallendas. Med Teacher 2002;24:5-8.



 
 