Michelle, Carolyn, Charles H. Davis, Ann Hardy, & Craig Hight (2018, November). Response to Martin Barker’s ‘Rise of the Qualiquants’. Participations, 15(2), 376-399. (Link: http://www.participations.org/Volume%2015/Issue%202/23.pdf)

(Participations apparently does not provide abstracts of its articles. The following summary was composed by Steven Brown and emphasizes those parts of the article that refer to Q methodology.)

Summary: Michelle, Davis, Hardy, and Hight begin by linking their use of Q methodology to Stephenson’s theory of subjective communicability, which, citing Watts and Stenner (2012), they see as “consistent with a social constructionist ontology,” albeit from a “moderate constructionist perspective” (p. 377), by which they mean (now citing Höijer [2008, p. 278]) that “people bring basic perspectives, interpretations, cognitive schemas or social and cultural frames of reference with them to an interpretive situation, such as the viewing of a television programme, or an interview” (p. 377). They assume the existence of “internal states of mind” (p. 378), an assumption that raises what they regard as a perpetual dilemma within audience studies: how to access these internal states. They assert that “it is not possible to ever access human consciousness completely or in an entirely ‘unfiltered’ way,” but believe that “Q potentially comes closer to the ‘truth’ of what people really think because it observes their actual communicative behaviour as they actively consider each statement and its meaning and relative importance for them in constructing a representation of their individual point of view” (p. 379). They then state “the essence of Stephenson’s core concept of operant subjectivity” as follows: “… he saw subjectivity as the sum of behavioural activity that constituted a person’s current viewpoint as operationalised at a particular moment and with respect to a specific question or issue, and he readily acknowledged that such viewpoints could change over time or in response to different conditions (Watts and Stenner, 2012)” (p. 379). This and other features of the methodology lead them to the belief that “the incorporation of Q methodology within a mixed method research design potentially offers many advantages over reliance on traditional survey or interview questions alone” (p. 379).

Michelle et al. utilize an unusually large number of Q sorters in their study (in excess of 800), a number they speculate “will raise eyebrows within the wider community of Q researchers” (p. 381). They nonetheless feel compelled to respond to Barker’s view of “the selection of respondents, not the selection of stimuli, as posing the key sampling issue” (p. 381), and they assure him that they are on safe ground in this regard. They are clear, however, that it is the variety of distinct viewpoints and not their population distribution that is central to Q. Indeed, jumping ahead, the authors return to and emphasize this general point in their concluding statement:

But we have also achieved much more than most traditional Q studies, because the scale of our project allowed us to explore the possible relationships between the shared perspectives of different sub-groups of respondents and a wide range of theoretically and empirically significant variables, such as gender, age, class, nationality, fandom, religion, and political belief, which have been long-standing areas of interest in audience studies. (p. 394)

But first, the authors launch their own critique of Barker’s suggestion “that Q methodology introduces a crypto ‘ontology of the self’ imported from a possibly tainted psychological paradigm” and assert instead that “Q is generally considered an anti-essentialist methodology that neither makes nor requires assumptions about the structure of the Self or about the nature of consciousness, … only that human beings can create and communicate meaning within a relevant symbolic or discursive field, and that these meanings can be made operant and observable using the technique that Stephenson developed” (p. 382).

Barker had raised the issue of what constitutes a quality Q study, in response to which Michelle et al. emphasize that P-set size is not the main criterion and offer the following:

In our view (and other Q practitioners will have their own thoughts on this), overall quality in Q methodology research is shaped by the following: the depth, range, and overall appropriateness of the apprehended concourse; the quality of the Q-sample and of its design, including its representativeness of that concourse and its congruence with theory; the quality and appropriateness of the P-set, with the inclusion of a suitable range of relevant respondents being more important than the raw numbers of respondents or their numerical representativeness of a wider population; and the quality of the interpretation offered, which often depends on the researcher’s ability to carefully evaluate and synthesise the identified factor arrays in conjunction with qualitative feedback from respondents (whether in written or verbal form) to produce a concise summary and interpretation of a holistic point of view (see Watts and Stenner, 2012). (p. 382)

The authors go on to distinguish their use of Q from what they label “weak Q studies,” concluding that a good Q study would provide answers to the following questions:

… does the interpretation of viewpoints ‘make sense’ – does it weave together qualitative and quantitative insights to ‘tell the story’ of the data, and does that interpretation seem sound, given the available evidence? Would those who loaded highly on that factor recognise themselves as broadly sharing that point of view? Respondents’ comments explaining their rankings of items in the Q-sample, whether collected via face-to-face interviews or online, are absolutely essential to forming an interpretation of each factor array. (p. 383)

Other studies are considered “weak” in part because too few Q sorts define the factors. In the Michelle et al. analysis, virtually all factors have 15 or more defining Q sorts, the smallest being defined by 7. Other criteria for good studies are also advanced: the fewest factors that account for the most variance, at least two defining Q sorts per factor, and the smallest possible number of cross-loaded and non-significant Q sorts.
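To make these loading-based criteria concrete, the following is a minimal sketch (not drawn from the article) of how defining, cross-loaded, and non-significant Q sorts might be tallied from a factor loading matrix. The function name and the illustrative data are hypothetical; the 2.58/√N cut-off is the conventional p < .01 criterion for a significant loading, where N is the number of statements in the Q sample.

    import numpy as np

    def classify_q_sorts(loadings, n_statements, z=2.58):
        """Classify each Q sort as defining, cross-loaded, or non-significant.

        loadings     : (n_sorts, n_factors) array of factor loadings
        n_statements : number of statements in the Q sample
        z            : 2.58 corresponds to the conventional p < .01 criterion
        """
        threshold = z / np.sqrt(n_statements)          # cut-off for a significant loading
        significant = np.abs(loadings) > threshold     # which loadings exceed the cut-off
        hits = significant.sum(axis=1)                 # significant factors per Q sort

        defining = hits == 1         # significant on exactly one factor
        cross_loaded = hits > 1      # significant on two or more factors
        non_significant = hits == 0  # significant on none

        # defining Q sorts per factor (the count used above to judge factor strength)
        per_factor = (significant & defining[:, None]).sum(axis=0)
        return per_factor, defining, cross_loaded, non_significant

    # Illustration only: 20 hypothetical Q sorts, 3 factors, a 40-statement Q sample
    rng = np.random.default_rng(0)
    per_factor, defining, cross, nonsig = classify_q_sorts(
        rng.uniform(-1, 1, size=(20, 3)), n_statements=40)
    print(per_factor, defining.sum(), cross.sum(), nonsig.sum())

In practice the loadings would come from the factor analysis of the correlations among Q sorts, but the tallying logic is the same: a factor is stronger the more Q sorts define it purely, and weaker the more of its loadings are cross-loaded or non-significant.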

Michelle et al. take on Barker’s concern with generalizability, contending that in the past there was no straightforward way to join Q methodology with sample surveys, a limitation that is now overcome (as they show in their book) through the use of online surveys via FlashQ, which they used in gathering their 800+ respondents. They sought such large numbers “because we wished to make inferences about the social locations of those who expressed the detected viewpoints…. These findings corroborate the position long maintained in Q methodology that most viewpoints in the conventional P-set are likely to be found in a larger comparable population. In other words, P-sets on the upper end of the conventional size in Q-methodology reliably capture the major viewpoints” (p. 384). The larger the number of Q sorters, the greater the likelihood that marginal viewpoints will be incorporated.

The authors assert that “Qualitative materials lie at the very heart of Q methodology” (p. 385), by which they mean interview responses obtained as a supplement to the Q sorting itself, including the 20 open-ended questions used in their Hobbit study, which they analyzed using a process of “inductive content analysis.” This raises the question of which interviews to give prominence, and in this regard Michelle et al. cast their lot with Q sorts that have pure rather than mixed loadings, the latter regarded as “statistical outliers who express idiosyncratic points of view” (p. 387). They elevate this to a general methodological principle: “… the ability to reliably and systematically characterise shared viewpoints is a necessary first step before one can meaningfully explore contradictions, complexities, ambiguities, and variations in outlying or statistically non-significant viewpoints” (p. 388).

Finally, the authors provide a spirited defense of Michelle’s Composite Multi-dimensional Model of Modes of Audience Reception, comprising four modes: transparent, referential, mediated, and discursive (Michelle et al., 2017, pp. 34-37). They respond to Barker’s assertion that the Composite Model is incompatible with what Barker claims is Stephenson’s conception of a “coherent, conscious self” (Barker, 2018, p. 445), an assertion they attribute in part to Barker’s reliance on the article by Good (2010), “published, … ironically, [in] the journal Psychoanalysis and History” (p. 445). Michelle et al. argue that, far from running contrary to Freud, Stephenson relied to a large extent on Freud, especially vis-à-vis the pleasure and reality principles, pointing to one of Stephenson’s (1993-1994) posthumous articles. It therefore seems unlikely, according to the authors, “that there is any fundamental ontological incompatibility between Q methodology and the notion that human behaviour might be at times influenced by unconscious forces” (p. 445).

References

Barker, M. (2018). Review essay: The rise of the Qualiquants: On methodological advances and ontological issues in audience research. Participations, 15(1), 439-452.

Good, J. M. M. (2010). Introduction to William Stephenson’s quest for a science of subjectivity. Psychoanalysis and History, 12, 211-243.

Höijer, B. (2008). Ontological assumptions and generalizations in qualitative (audience) research. European Journal of Communication, 23, 275-294.

Michelle, C., Davis, C. H., Hardy, A. L., & Hight, C. (2017). Fans, blockbusterisation, and the transformation of cinematic desire: Global reception of the Hobbit film trilogy. London: Palgrave.

Stephenson, W. (1993-1994). Introduction to Q-methodology. Operant Subjectivity, 17, 1-13.

Watts, S., & Stenner, P. (2012). Doing Q methodological research. London: Sage.

Carolyn Michelle <c.michelle@waikato.ac.nz> and Ann L. Hardy are affiliated with the School of Social Sciences, University of Waikato, Hamilton, New Zealand; Charles H. Davis <c5davis@ryerson.ca> is in the School of Radio and Television Arts, Ryerson University, Toronto, Canada; and Craig Hight is at the Department of Creative Industries, University of Newcastle, Newcastle, Australia.