Saraco, Anthony N., Danielle Brewer-Deluce, Noori Akhtar-Danesh, & Bruce C. Wainman (2020, April 18). The Big Q: Is Q‐methodology valid for evaluating a large‐scale, cross‐disciplinary anatomy and physiology course? The FASEB [Federation of American Societies for Experimental Biology] Journal, 34(S1), 1. (doi: 10.1096/fasebj.2020.34.s1.06686) (Link: https://doi.org/10.1096/fasebj.2020.34.s1.06686) (Access: https://faseb.onlinelibrary.wiley.com/doi/abs/10.1096/fasebj.2020.34.s1.06686)
Abstract: Introduction: Course evaluations are an important tool to gather feedback on the structure of a course, instructor effectiveness, and the overall learning experience. Critically, the Likert scale approach used by most institutions lacks course specificity, and the differences between adjacent response options cannot be assumed equal (e.g., “strongly agree – agree” vs. “agree – neutral”). This makes it difficult to evaluate the effectiveness of a course and to identify areas that need improvement. Q‐methodology is a technique that mitigates these issues by polling students for qualitative feedback statements that represent prevalent opinions of the course, then asking them to rank the statements relative to each other. Students are then clustered by shared opinions, values, and preferences. Methods: This study uses Q‐methodology to assess student opinions on an undergraduate anatomy and physiology course (850 students). Specifically, students across five disciplines (midwifery, bachelor of health sciences, engineering, nursing, and integrated biomedical sciences) enrolled in the same second‐year undergraduate anatomy and physiology course were recruited into the study. All students experienced the same lecture and laboratory components as well as discipline‐specific tutorials. Students were asked to rank 37 statements relative to each other using an online platform. A by‐person factor analysis was completed using the qfactor program in Stata. Overall, the goal of this study was to validate Q‐methodology as an assessment modality across different populations experiencing the same course. Results: 143 students participated in the study (70.6% female, 25.2% male, 4.2% rather not specify; median age: 19, range: 18 – 38).
The by‐person factor analysis classified students into three significantly different groups (22 students unassigned), representing 1) students who greatly appreciated the use of cadaveric specimens (n = 55), 2) students who were extremely dissatisfied with the means of evaluation (n = 40), and 3) students who despised the virtual reality (VR) supplementary resource (n = 26). Group 1 expected a significantly higher grade than the other two groups (p < 0.05). No demographic variable correlated with group membership, nor did discipline. All three groups agreed upon six consensus statements. Conclusion: This study uncovered three distinct opinion patterns spanning all five academic disciplines within the course. The study provided guidance for course reform and suggested that discipline does not predict course evaluations. The study supports the use of Q‐methodology analyses for assessing student opinions on a large scale. Future work will re‐assess student course evaluations in the same course to determine how Q‐methodology outcomes change in response to “Q”‐directed course reform.
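The core move in the abstract above is "by-person" factor analysis: rather than factoring statements, the respondents themselves are correlated and factored, so each extracted factor represents a shared viewpoint. The sketch below illustrates that idea in minimal form using NumPy. It is not the authors' Stata `qfactor` pipeline: the toy Q-sort data, the two simulated viewpoints, and the choice of two factors are all invented here purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-sort data (hypothetical): 12 respondents each rank 10 statements.
# Two underlying "viewpoints" plus noise, so a group structure exists.
n_statements = 10
view_a = rng.normal(size=n_statements)
view_b = rng.normal(size=n_statements)
sorts = np.vstack(
    [view_a + 0.3 * rng.normal(size=n_statements) for _ in range(6)]
    + [view_b + 0.3 * rng.normal(size=n_statements) for _ in range(6)]
)  # shape (12 persons, 10 statements)

# By-person step: correlate PERSONS with each other (rows are persons,
# which is np.corrcoef's default orientation).
corr = np.corrcoef(sorts)  # shape (n_persons, n_persons)

# Extract principal factors of the person-by-person correlation matrix
# (a principal-components stand-in for the factor extraction).
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]         # sort descending
top = order[:2]                           # keep two factors for this toy data
loadings = eigvecs[:, top] * np.sqrt(np.abs(eigvals[top]))

# Assign each person to the factor they load on most strongly,
# mirroring how respondents end up clustered by shared opinion.
groups = np.argmax(np.abs(loadings), axis=1)
print(groups)
```

In a real Q study the assignment step also applies a loading-significance threshold, which is why the abstract reports 22 students left unassigned; the `argmax` here is a simplification.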
Danielle Brewer-Deluce <email@example.com> is in the Education Program in Anatomy and Department of Kinesiology, McMaster University, Hamilton, Ontario, Canada.