Journal of Family & Community Medicine
MEDICAL EDUCATION
Year : 2010  |  Volume : 17  |  Issue : 2  |  Page : 91-95

Developing questionnaires for students' evaluation of individual faculty's teaching skills: A Saudi Arabian pilot study


1 College of Medicine, University of Dammam, Dammam, Kingdom of Saudi Arabia
2 Prince Mohamed Research Center, University of Dammam, Dammam, Kingdom of Saudi Arabia
3 Colleges of Medicine & Nursing, University of Dammam, Dammam, Kingdom of Saudi Arabia
4 Quality Management Unit, University of Dammam, Dammam, Kingdom of Saudi Arabia

Date of Web Publication: 23-Oct-2010

Correspondence Address:
Ammar Hassan
Department of Family and Community Medicine, P O Box 40187, Al-Khobar 31952
Kingdom of Saudi Arabia

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/1319-1683.71991

   Abstract 

Background: The National Commission for Academic Accreditation and Assessment is responsible for the academic accreditation of universities in the Kingdom of Saudi Arabia (KSA). Requirements for this include evaluation of teaching effectiveness, evidence-based conclusions, and external benchmarks. Aims: To develop a questionnaire for students' evaluation of the teaching skills of individual instructors and to provide a tool for benchmarking. Setting: College of Nursing, University of Dammam (UoD), May-June 2009. Materials and Methods: The original questionnaire was the "Monash Questionnaire Series on Teaching (MonQueST) - Clinical Nursing". The UoD modification retained its four areas and seven response options, but reduced the items from 26 to 20. Outcome measures were factor analysis and Cronbach's alpha coefficient. Results: Seven nursing courses were studied, viz.: Fundamentals, Medical, Surgical, Psychiatric and Mental Health, Obstetrics and Gynecology, Pediatrics, and Family and Community Health. The total number of students was 74; missing data ranged from 5% to 27%. The explained variance ranged from 66.9% to 78.7%. The observed Cronbach's α coefficients ranged from 0.78 to 0.93, indicating high reliability. The students in the study were found to be fair and frank in their evaluation.

Keywords: Student evaluation of teaching effectiveness, student evaluation of faculty teaching skills, academic accreditation, faculty personal portfolio, Saudi Arabia


How to cite this article:
Al-Rubaish AM, Abdel Rahim S, Hassan A, Al Ali A, Mokabel F, Hegazy M, Wosornu L. Developing questionnaires for students' evaluation of individual faculty's teaching skills: A Saudi Arabian pilot study. J Fam Community Med 2010;17:91-5

How to cite this URL:
Al-Rubaish AM, Abdel Rahim S, Hassan A, Al Ali A, Mokabel F, Hegazy M, Wosornu L. Developing questionnaires for students' evaluation of individual faculty's teaching skills: A Saudi Arabian pilot study. J Fam Community Med [serial online] 2010 [cited 2019 Dec 6];17:91-5. Available from: http://www.jfcmonline.com/text.asp?2010/17/2/91/71991


Introduction


The National Commission for Academic Accreditation and Assessment (NCAAA) is the body recently charged with the academic accreditation of universities in the Kingdom of Saudi Arabia (KSA). The University of Dammam (UoD) was one of the first to be involved in the process. [1]

Of the 11 areas identified by NCAAA for evaluation according to internationally accepted standards of good practice, "Students' Learning and Teaching" is considered of primary importance. [2] Requirements include: "A comprehensive system for evaluation of teaching effectiveness, including but not limited to student surveys." [3]

The NCAAA "Course Evaluation Survey" (CES) evaluates the effectiveness of teaching in each course as a unit. However, there are other NCAAA requirements. First, "Faculty maintain portfolio of evidence of evaluation, and, of strategies for improvement." [3] Second, "analyses and conclusions should be based on valid evidence rather than subjective impressions." [4] Third, benchmarks should include external comparison. [5]

Informative and important as they are, these directives are not sufficient for a comprehensive evaluation of an instructor's individual areas of professional strength and weakness in general, and teaching skills in particular. Valid and reliable questionnaires, completed anonymously by students on each instructor separately, are an indispensable tool for an authentic judgment of an individual teacher's potential and aptitudes. Each such instrument should preferably focus on a single area of teaching skills.

Student Evaluation of Teaching Effectiveness (SETE) has been criticized on several grounds. [6] Traditionally, it is regarded as sensitive. The controversy begins with questioning the validity of students' evaluation of their professors' teaching skills. [7],[8],[9] Teaching in universities is a complex and multi-dimensional task. [10] Another potential bias against SETE is that it might, among other factors, induce leniency in the grades assigned to students. [11],[12]

Aim

The primary aim of this study was to develop a valid and reliable instrument for students' evaluation of the teaching skills of individual instructors. A secondary aim was to provide a potential tool with which to benchmark teaching skills among different institutional settings. This paper reports initial results on the teaching skills of clinical nursing instructors.


Materials and Methods


Study population

The study was carried out in the College of Nursing, UoD, in the 2008/09 academic year. The focus of the study was students' evaluation of each instructor's teaching skills in clinical nursing courses. Students were assembled in their respective classes and the questionnaires were distributed to them. They were given sufficient time to respond to the questionnaire without prompting. Each group was supervised by an independent faculty member (i.e., one who was not being evaluated in that session).

Throughout the study, care was taken to protect the anonymity of the evaluators (i.e., the students), but not of those evaluated (i.e., the instructors).

The questionnaire

The original questionnaire was the "Monash Questionnaire Series on Teaching (MonQueST) - Clinical Nursing". [13] It consists of four areas, 26 items and seven response options. These were: (1) All or almost all, (2) Most, (3) About half, (4) Only some and (5) Very few, as well as (6) Entirely inappropriate and (7) Attended too few.

In the modification by UoD, the four areas and seven response options were retained, but the number of items was reduced from 26 to 20 [Table 1]. Response options 6 and 7 were put in a separate category because all students in the study were full-time and their attendance at clinical instruction was mandatory. Accordingly, statistical analysis of the modified MonQueST was based on a 5-point scale comprising the first five response options.
Table 1: The Monash Questionnaire Series on Teaching (MonQueST) as modified by the University of Dammam


The outcome measures were factor analysis and Cronbach's alpha coefficient.

Statistical analysis

Data entry and analyses were performed with SPSS version 13. Factor analysis was performed to assess how well the items related to the constructs they were intended to measure. In this first step, the inter-item correlations were explored, creating a matrix of correlations among all items. Eigenvalues and the amount of variance explained were calculated for each item and for the different modules in the study.

At this stage, the risk of "singularity" had to be borne in mind (i.e., pairs of items that are almost perfectly correlated, with r > 0.9). Therefore, two sub-types of items were identified: (a) those that failed to correlate with the others, and (b) those that demonstrated singularity. This was a prerequisite for the second step (i.e., the reliability test), since such items, if any, had to be excluded. A check for the normal distribution of the scores was also done.
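The screening described above — building the inter-item correlation matrix, then flagging items that fail to correlate with the others and pairs exhibiting singularity (r > 0.9) — can be sketched as follows. This is an illustrative reimplementation, not the SPSS procedure used in the study; the 0.2 floor for "failure to correlate" is an assumed threshold, as the paper does not state one.

```python
import numpy as np

def screen_items(scores, singularity_r=0.9, min_r=0.2):
    """Screen a students x items score matrix before factor analysis.

    Returns (a) item pairs that are nearly collinear ("singularity",
    |r| > singularity_r) and (b) items whose largest correlation with
    any other item falls below min_r (an assumed threshold).
    """
    corr = np.corrcoef(scores, rowvar=False)  # inter-item correlation matrix
    n_items = corr.shape[0]
    singular_pairs, isolated = [], []
    for i in range(n_items):
        off_diag = np.abs(np.delete(corr[i], i))  # correlations with other items
        if off_diag.max() < min_r:
            isolated.append(i)
        for j in range(i + 1, n_items):
            if abs(corr[i, j]) > singularity_r:
                singular_pairs.append((i, j))
    return singular_pairs, isolated
```

Both lists would then be excluded before the reliability step, as the text requires.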

The internal consistency reliability test was then performed by administering the same instrument to the same group of students for the different instructors of each course. Internal reliability estimates were calculated using Cronbach's alpha coefficient, [14] which provides a conservative estimate of reliability and generally represents the lower bound of the reliability of a scale. A Cronbach's alpha coefficient of at least 0.70 was taken as the criterion for acceptable reliability of the scale. [15]
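Cronbach's alpha has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score), for k items. A minimal sketch of the calculation on a students × items rating matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a students x items matrix of ratings.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When all items move together (high inter-item correlation), the total-score variance dominates and alpha approaches 1; uncorrelated items drive it toward 0.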


Results


At present, all the students and staff of the College of Nursing are female. Seven courses from the Nursing Program were studied, namely: Fundamentals of Nursing, Medical Nursing, Surgical Nursing, Psychiatric and Mental Health Nursing, Obstetric and Gynecologic Nursing, Pediatric Nursing, and Family and Community Health Nursing. There was one course from Level II and three each from Levels III and IV.

Response options 6 ("Entirely inappropriate") and 7 ("Attended too few") were dealt with as a separate category. The proportions counted were as follows: 0.20, 0.26, 0.30, 0.39, 0.49, 0.68 and 0.95% (mean 0.65%). Thus, the selection of either option was numerically negligible.

Based on the 5-point scale, the total number of students was 74; missing data ranged from 5% to 27%.

Factor analysis

All 20 items of the questionnaire were entered into a factor analysis for each module, with a minimum eigenvalue of one for factor extraction and a minimum of 0.4 for item-to-factor loading. The procedure generated four areas in which all 20 items were included. The explained variance ranged from 66.9% to 78.7%, depending on the module, except for "Fundamentals of Nursing". In this module (sample size = 74), inter-item correlations failed to emerge in 23% of paired items, and the explained variance was less than 54%. As a result, this module had to be excluded from further analysis. [16]
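The extraction rule applied here (retain factors with eigenvalue ≥ 1; assign items with absolute loadings ≥ 0.4) can be illustrated with a principal-components sketch on the correlation matrix. This is a stand-in for the SPSS factor analysis the authors ran, intended only to show the mechanics of the two thresholds and how an explained-variance figure arises:

```python
import numpy as np

def extract_factors(scores, min_eigenvalue=1.0, min_loading=0.4):
    """Principal-component sketch of the extraction rule: retain
    components with eigenvalue >= min_eigenvalue, then report which
    items load >= min_loading (absolute value) on each.
    """
    corr = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals >= min_eigenvalue
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # component loadings
    explained = eigvals[keep].sum() / eigvals.sum()       # variance explained
    assignments = [np.where(np.abs(loadings[:, f]) >= min_loading)[0].tolist()
                   for f in range(loadings.shape[1])]
    return explained, assignments
```

An item that loads on no retained factor, or a module whose retained factors explain little variance, would be flagged for exclusion, as happened with "Fundamentals of Nursing".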

Reliability


The internal consistency reliability was tested by Cronbach's α coefficient for each of the four areas in each of the six modules, with the individual student as the unit of analysis. The observed α coefficients ranged from 0.78 to 0.93, indicating high reliability. By convention, a lenient cut-off of 0.60 is common in exploratory research; alpha should be at least 0.70 to retain an item in an "adequate" scale, and many researchers require a cut-off of 0.80 for a "good" scale. [15]
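The conventional cut-offs cited here reduce to a simple classification. The labels below follow the 0.60 / 0.70 / 0.80 convention described in the text and are illustrative only:

```python
def interpret_alpha(alpha):
    """Map Cronbach's alpha to the conventional labels cited in the
    text: 0.60 (lenient, exploratory), 0.70 (adequate), 0.80 (good).
    """
    if alpha >= 0.80:
        return "good"
    if alpha >= 0.70:
        return "adequate"
    if alpha >= 0.60:
        return "exploratory only"
    return "unacceptable"
```

On this convention, the study's range of 0.78 to 0.93 spans "adequate" to "good".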


Discussion


All student evaluations are based on the hypothesis that students are the best experts to assess their teachers. [17],[18] Nevertheless, Student Evaluation of Teaching Effectiveness (SETE) is controversial. [7],[8],[9],[10],[11],[12],[19],[20],[21],[22],[23],[24] With the advent of the NCAAA, institutions seeking academic accreditation in KSA will be required to apply SETE in the medium term. Writing from King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, KSA, Siddiqi (2002) observed: "Proper questionnaire design has been cited as one of the key factors in the qualitative outcome of the exercise." [18]

Questionnaires seeking students' opinions should be reliable, valid and consistent, but also concise and adequate [Table 2] and [Table 3]. This is especially so if the area studied is traditionally regarded as sensitive, such as students' evaluation of their individual professors' teaching skills. The exclusion of six items was informed by a logical and pragmatic approach, which demanded that all the key components of the original questionnaire be retained. Furthermore, the remaining 20 items, which covered the major aspects of teaching clinical nursing, were phrased more simply and clearly for the students.
Table 2: A summary of results from factor analysis on the modified MonQueST questionnaire for six modules in the nursing program

Table 3: Cronbach reliability, item means, standard deviations, and ability to distinguish between classes for each scale of the modified MonQueST for six modules in the nursing program


Hence, it was gratifying to note that the reduction of the items from 26 in the original instrument to 20 in the present version did not significantly reduce the reliability, validity or consistency of the instrument. It rendered the modified version more concise and suitable for use in our local socio-cultural setting. It was, therefore, fit for the intended purpose: that of readily providing valid, objective data.

Another issue for discussion is the minimum number of students required for an assessment of teaching to be valid. [25] In a recent publication, Chenot, Kochen and Himmel used a cut-off point of five students. [26] Thus, the number of students in this study was considered adequate, especially for a pilot study.

The modified MonQueST demonstrated another useful attribute: the exclusion of one module, "Fundamentals of Nursing", as a result of statistical scrutiny. This outcome was subsequently validated by the Course Supervisor, who pointed out that in actual delivery the course was more theoretical than practical. This observation also confirmed that the students in the study were mature, fair and frank in their evaluation.

The final issue for discussion is the intended use of the results of such studies. Siddiqi raised a veiled objection: "It gets too much weight for contractual/job evaluation." [18] Salsali concluded from an Iranian perspective that: "Systemic and continuous evaluation as well as staff development should be the primary goal." [27]

It was clear from the beginning that the results could be used for the three stated aims of the study. First, to help satisfy the NCAAA requirements that faculty maintain evidence of evaluation and that analyses and conclusions be based on valid evidence. [3],[4] Second, the results could be used formatively; this includes needs assessment for the teaching-skills component of the professional development of individual faculty. Third, they could form a link for external institutional benchmarking. [5]

The University of Dammam is in a transitional phase of academic accreditation, which demands that we refine and customize various tools, including questionnaires. These results remain to be confirmed. It is hoped that field-testing will widen the application of the instrument by refining it for use in the other colleges of the University of Dammam, the Eastern Province, and the rest of KSA and the Gulf States.


Conclusions


The qualitative aspects of the study have not been determined. In other words, the students' opinions, as well as those of the peers of the faculty evaluated, have to be authenticated by the Dean of the College; this will be the subject of a separate study. Pending authentication, two tentative conclusions can be drawn. The modified MonQueST for Clinical Nursing has been found to be efficient, adequate, reliable and consistent. It can be used formatively as stated above. However, it remains subject to ongoing review and optimization, and may only be used as part of the range of faculty evaluation tools required by the NCAAA.


Acknowledgments


The authors sincerely thank Monash University, Centre for Higher Education Quality for the MonQueST as our benchmark and original questionnaire. They also express their gratitude to His Excellency Prof Yussuf Al Jindan, President, King Faisal University, the faculty and teaching staff, as well as students of the College of Nursing without whose support and cooperation this research could not have been completed. Finally, we thank Ms. Margilyn Ungson and Mr. Jess Asilo for secretarial assistance.

 
References

1. King Faisal University, Al-Hassa and Dammam. Unified Self-Study Document (1 of 3); 2008. p. 2.
2. National Commission for Academic Accreditation and Assessment (NCAAA). Quality Standards for Post Secondary Institutions; 2005. p. 6.
3. NCAAA. Standards for Quality Assurance and Accreditation of Higher Education Institutions; 2007. p. 16.
4. NCAAA. Handbook of Quality Assurance and Accreditation in Saudi Arabia, Part 2: Internal Quality Assurance Arrangements; 2007. p. 19.
5. NCAAA. Strategic Planning for Quality; 2007. p. 2.
6. Harrison PD, Douglas DK, Burdsal CA. The relative merits of different types of overall evaluations of teaching effectiveness. Res High Educ 2004;45:311-23.
7. Greenwald AG. Validity concerns and usefulness of student ratings of instruction. Am Psychol 1997;52:1182-6.
8. McKeachie WJ. Student ratings: The validity of use. Am Psychol 1997;52:1218-25.
9. Marsh HW, Roche LA. Making students' evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. Am Psychol 1997;52:1187-97.
10. Greenwald AG, Gillmore GM. Grading leniency is a removable contaminant of student ratings. Am Psychol 1997;52:1209-17.
11. Gillmore GM, Greenwald AG. Using statistical adjustment to reduce bias in student ratings. Am Psychol 1999;54:518-9.
12. Marsh HW, Roche LA. Rely upon SET research. Am Psychol 1999;54:517-8.
13. Monash University, Melbourne, Australia, Centre for Higher Education Quality. Monash Questionnaire Series on Teaching (MonQueST) for Clinical Nursing.
14. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika 1951;16:297-334.
15. Nunnally JC. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.
16. Gorsuch RL. Factor analysis. Hillsdale, NJ: Erlbaum; 1983.
17. Elzubier M, Rizk D. Evaluating the quality of teaching in medical education: Are we using the evidence for both formative and summative purposes? Med Teach 2002;24:313-9.
18. Siddiqi AS. Student evaluation of faculty: A mirror for self analysis. The 6th Saudi Engineering Conference, KFUPM, Dhahran 2002;1:433-46.
19. Abrami PC. How should we use student ratings to evaluate teaching? Res High Educ 1989;3:221-7.
20. Abrami PC, d'Apollonia S, Cohen PA. Validity of student ratings of instruction: What we know and what we do not know. J Educ Psychol 1990;82:219-31.
21. DeBerg CL, Wilson JR. An empirical investigation of the potential confounding variables in student evaluation of teaching. J Account Educ 1990:37-62.
22. Seldin P. The use and abuse of student ratings of instruction. Chron High Educ 1993;39:40.
23. Seldin P. When students rate professors. The Chronicle of Higher Education Opinion. 1993.
24. Sproule R. Student evaluation of teaching: A methodological critique of conventional practices. Education Policy Analysis Archives 2000;8.
25. Mazor K, Clauser B, Cohen A, Alper E, Pugnaire M. The dependability of students' ratings of preceptors. Acad Med 1999;74:19-21.
26. Chenot JF, Kochen MM, Himmel W. Student evaluation of a primary care clerkship: Quality assurance and identification of potential for improvement. BMC Med Educ 2009;9:17.
27. Salsali M. Evaluating teaching effectiveness in nursing education: An Iranian perspective. BMC Med Educ 2005;5:29.



 
 
© Journal of Family and Community Medicine | Published by Wolters Kluwer - Medknow