Mumford, Steven W. (2018). Ways of knowing in participatory program evaluation. Doctoral dissertation (Public Policy & Administration), George Washington University.
Abstract: The study investigated the potential role of individual “ways of knowing” in participatory program evaluation. Ways of knowing refer to individual styles and preferences for creating and testing knowledge in a group setting. These implicit preferences were hypothesized to influence perceptions of credible research methods, appropriate meeting discourse approaches, and prioritized learning outcomes of evaluation. Researchers have identified three ways of knowing most directly relevant to the study: “separate knowing,” or playing “devil’s advocate”; “connected knowing,” or playing the “believing game”; and “constructed knowing,” or combining both approaches according to context. To identify participants’ preferred ways of knowing, the study applied Q methodology, guiding participants to rank a series of statements according to which were most descriptive of them. These rankings were analyzed through by-person factor analysis to group participants by shared preferences. The application of Q methodology took place early in a broader action research case study, in which the researcher facilitated a participatory program evaluation with a team of five stakeholders from a non-profit organization. Results of the case study were compared with the Q findings to explore the Q tool’s usefulness for understanding participants’ actual behaviors and perceptions of the evaluation process.
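The by-person factor analysis at the heart of Q methodology can be illustrated with a minimal sketch: unlike conventional factor analysis, the correlation matrix is computed across persons rather than across items, so that participants who sorted the statements similarly load on the same factor. The data below are invented for illustration (the study's actual Q-sorts are not reproduced here), and the unrotated principal-factor extraction is a simplification of the rotated solutions typically reported in Q studies.

```python
import numpy as np

# Hypothetical Q-sort data: 5 participants each rank 12 statements
# on a forced scale from -3 (least like me) to +3 (most like me).
rng = np.random.default_rng(0)
q_sorts = rng.integers(-3, 4, size=(5, 12)).astype(float)

# By-person factor analysis: correlate participants with one another
# (rows = persons), rather than correlating the statements.
person_corr = np.corrcoef(q_sorts)  # 5 x 5 person correlation matrix

# Extract unrotated principal factors via eigendecomposition,
# ordered from largest to smallest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each participant on the first two factors; participants
# loading highly on the same factor share a preference profile
# (in this study's terms, a candidate "way of knowing").
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
print(loadings)
```

In practice Q analysts then rotate the retained factors and interpret each factor's idealized sort; the sketch stops at the loading matrix, which is the grouping step the abstract describes.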
The Q tool developed and refined for use in the study served to differentiate the three theoretical ways of knowing among participants in a more nuanced fashion than extant Likert-scale surveys. The results of the tool were useful for understanding case study participants’ discursive preferences, particularly the distinction between argumentative and narrative styles. Hypothesized relationships between ways of knowing and evaluation design and learning outcomes were not supported in this study; rather, the evaluation context was paramount in shaping these decisions. The Q tool represents the primary practical contribution of the study, and it may be adapted and applied in future studies and in the practice of participatory evaluation. The study also revealed potential relationships between ways of knowing and other phenomena of interest that might be investigated further. The conceptual distinction among the three ways of knowing can inform our understanding of group dialogue and how best to promote it among diverse participants.