
Developing assessment criteria of trustworthiness for the Critical Interpretive Synthesis

By Joke Depraetere, Christophe Vandeviver, Ines Keygnaert & Tom Vander Beken

Reviewing qualitative and quantitative research? Or aiming to develop a new theory based on literature readings? The relatively new review type, the Critical Interpretive Synthesis (CIS), allows for both. Emphasizing flexibility and a critical orientation in its approach, the CIS aims to develop a new, coherent theoretical framework from both qualitative and quantitative research. Recognized as one of the best review types, the CIS provides a fresh interpretation of the data rather than a summary of results, as is often the case with other review types. However, the CIS's greatest advantage, flexibility, is also one of its greatest disadvantages, since it hampers implementation, introduces ambiguity in execution and reporting, and therefore exacerbates concerns about trustworthiness.

In our work published in the International Journal of Social Research Methodology, we developed evaluation criteria for the CIS and applied them to 77 published CIS reviews. By developing these criteria and assessing existing CIS reviews, we aimed to evaluate the trustworthiness of these reviews and to provide guidelines for future authors, journal editors and reviewers in implementing and evaluating the CIS.

The paper outlines two important concepts of trustworthiness in scientific research: transparency and systematicity. While transparency focuses on the reproducibility of the review process, systematicity emphasizes that fit-for-purpose methods need to be implemented and well executed. Previous scholars (Templier & Paré, 2017; Paré et al., 2016) have already developed various guidelines regarding transparency and systematicity in review types. These guidelines, however, remained broad and lacked a focus on the specificities that accompany the various review types. Each review type is characterized by different key features that distinguish it from other types. These features should be transparently reported and soundly executed (i.e. systematicity). Some features can be considered more central and important than other, more peripheral features. This makes it possible to identify a hierarchy of features and to evaluate the extent to which the central features of a review type have been consistently implemented and clearly reported in research.

Overall, seven key features are formulated and presented in a hierarchy based on the main goals of the CIS as emphasized by previous scholars (Dixon-Woods et al., 2006b; Entwistle et al., 2012). Both aspects of trustworthiness were evaluated, allowing us to distinguish between the transparency and the systematicity of the various key features. During our evaluation of the CIS reviews, we identified six groups of papers based on the scoring of these key features. While only 28 papers transparently reported and soundly executed the four highest-ranked features in the hierarchy, the majority of the papers (N = 47) did well on the two most important features of the CIS. These most important features represent the main goal of the CIS, namely the development of a theoretical framework using the methods described by the original authors of the CIS (Dixon-Woods et al., 2006). This, however, means that the remaining 30 papers – over 38% of the 77 reviewed – cannot be considered trustworthy in terms of transparently reporting and soundly executing the two highest-ranked features of the CIS.

The paper details which key features of the CIS were soundly executed and transparently reported and which performed rather poorly. We conclude by offering various recommendations to future scholars, reviewers and journal editors on how the trustworthiness of CIS papers could be improved in the implementation and evaluation of CIS reviews. While this paper focuses on only one review type, we hope it may serve as a starting point for developing similar evaluation criteria for methodological reporting in other review genres.

Read the full IJSRM article here.


Radical critique of interviews – an exchange of views on form and content

By Rosalind Edwards (IJSRM Co-editor)

The ‘radical critique’ of interviews is a broad term encompassing a range of differing positions, but a shared element is an argument that interviews are not a method of grasping the unmediated experiences of research participants – that is, the content of the interview data.  Rather, the enactment of the method, of interviewer and interviewee exchanges, is data – that is, the form.  The critique has been the subject of a scholarly exchange of views in the Journal, drawing attention to agreements and distinctions in debates about radical critiques of interview data in social research.

In a themed section of the Journal on ‘Making the case for qualitative interviews’, Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright contributed an article arguing that the focus on interviews as narrative performance (form) leaves in place a seemingly unbridgeable divide between the experienced and the expressed, and a related conflation of what can be said in interviews with what interviews can be used to say.  They call for attention to the ways that interview data may be used to discuss the social world beyond the interview encounter (content).

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Beyond performative talk: critical observations on the radical critique of reading interview data’.

Emilie Whitaker and Paul Atkinson responded to these observations, arguing that while their work (cited in Hughes et al.) urges methodologically-informed, reflexive analytic attention to interviews as speech events and social encounters (form), this is not at the expense of attention to content. Indeed, they say, there cannot be content without form.

Emilie Whitaker and Paul Atkinson – ‘Response to Hughes, Hughes, Sykes and Wright’.

In reply, Hughes and colleagues state their intention to urge a synthesis that prioritises a focus on the content of interviews, and the possibilities for what researchers can do with it, just as much as critical attention to its form.

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Response to Whitaker and Atkinson’.

These renditions of the constructive exchanges are my own, and may not (entirely) reflect the views of the authors.


I Say, They Say: Effects of Providing Examples in a Survey Question

By Eva Aizpurua, Ki H. Park, E. O. Heiden & Mary E. Losch

One of the first things that survey researchers learn is that questionnaire design decisions are anything but trivial. The order of the questions, the number of response options, and the labels used to describe them can all influence survey responses. In this Research Note, we turn our attention to the use of examples, a common component of survey questions. Examples are intended to help respondents, providing them with information about the type of answers expected and reminding them of responses that might otherwise go unnoticed. For instance, the 2020 U.S. National Health Interview Survey asked about the use of over-the-counter medication, and included “aspirin, Tylenol, Advil, or Aleve” in the question stem. There are many other examples in both national and international surveys. Despite the potential benefits of using examples, there is a risk that respondents will focus too much on them and overlook cases not listed as examples. This phenomenon, known as the “focusing hypothesis”, is what we test in our study.

Using an experimental design, we examined the effects of providing examples in a question about multitasking (“During the time we have been on the phone, in what other activities, if any, were you engaged [random group statement here]?”). In this experiment, respondents were randomly assigned to one of three conditions: the first group received one set of examples (watching TV or watching kids), the second group received a different set of examples (walking or talking with someone else), while the final group received no examples. Our goal was to determine whether respondents were more likely to report an activity (e.g., watching TV or walking) when it was listed as an example. We also wanted to understand whether providing examples resulted in respondents listing more activities beyond the examples.
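To make the design concrete, below is a minimal sketch of how such a three-way random assignment could be implemented. This is illustrative Python only: the condition labels, the exact wording of the inserted statements, and the function names are hypothetical assumptions for the sketch, not the survey system actually used in the study.

    import random

    # Three experimental conditions for the multitasking question; each one
    # fills the "[random group statement here]" slot in the question stem.
    # The "such as ..." phrasing is illustrative, not the verbatim instrument.
    CONDITIONS = {
        "examples_a": "such as watching TV or watching kids",
        "examples_b": "such as walking or talking with someone else",
        "control": "",  # no examples offered
    }

    STEM = ("During the time we have been on the phone, in what other "
            "activities, if any, were you engaged")

    def assign_condition(rng):
        # Every respondent has an equal chance of landing in each condition.
        return rng.choice(sorted(CONDITIONS))

    def build_question(condition):
        # Insert the condition's statement (possibly empty) into the stem.
        statement = CONDITIONS[condition]
        return f"{STEM} {statement}?" if statement else f"{STEM}?"

    rng = random.Random(42)  # fixed seed so the assignment is reproducible
    for respondent_id in range(5):
        condition = assign_condition(rng)
        print(respondent_id, condition, build_question(condition))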

We embedded this experiment in a telephone survey conducted in a Midwestern U.S. state and found support for the focusing hypothesis. As anticipated, respondents were more likely to mention an activity if it was provided to them as an example. However, the effect sizes were generally small, and examples had no effect on the percentage of respondents who identified themselves as multitaskers, nor on the number of activities they reported. This is because respondents in the experimental conditions were more likely to list the examples presented to them (i.e., watching TV, watching kids, walking, talking with someone else), while those in the control group more frequently reported activities outside this range (cooking, doing housework…), yielding no differences in the frequency of multitasking or in the number of multitasking activities. Although examples can help respondents understand the scope of the question and remind them of certain responses, the results from this study indicate that they can also restrict the memory search to the examples provided. This has implications for survey practice, suggesting that the inclusion of examples in questions should be carefully considered and limited to certain situations, such as questions where recall errors are anticipated or where the scope of the question might be unclear.

To learn more, see the full IJSRM article here.


Are novel research projects ethical during a global pandemic?

By Emily-Marie Pacheco and Mustafa Zaimağaoğlu

The global pandemic has inspired a plethora of new research projects in the social sciences; scholars are eager to identify and document the many challenges the COVID-19 situation has introduced into our daily lives, and explore the ways in which our societies have been able to thrive during these ‘unprecedented times’. Given the wide acknowledgement that life during a global pandemic is often more difficult than it was in pre-pandemic circumstances, researchers must consider whether asking those in our communities to donate their time and energy to participating in our research is acceptable. Does recruitment for research which seeks to explore the psychological wellbeing and adjustment of those living through uniquely challenging circumstances during COVID-19 really reflect research integrity?

There is no simple answer to whether asking people to share their stories and experiences of COVID-19 is ethical or improper. Many would argue that social research has the potential to contribute many vital insights about life during a global pandemic which are unique to the humanistic lens and approach often reserved for the social sciences; such investigations could propel scholarly dialogue and manifest practically in recommendations for building resilient societies. However, social scientists have a responsibility to protect their participants from any undue harm they may experience as a result of their participation in a study. Thus, while social research may be especially important during a global pandemic, traditional study designs need to adapt to the circumstances of the pandemic and be held to higher ethical expectations by governing bodies and institutions.

Ethical social research during a global pandemic is reflected in research methods which demonstrate an awareness that we are asking more of our participants than ever before. Simple adaptations to existing projects can go a long way in bettering the experience of participants, such as providing prospective participants with additional information on what is expected of them if they choose to take part in a study – whether it be an online survey or an interview. Projects which aim to collect data using qualitative or interpersonal methods should be especially open to adaptation. These studies may be more ethically conducted by offering socially distant options, such as online focus groups or telephone interviews. Adopting multimethod approaches that give participants the opportunity to contribute in the medium most suitable for them may also be ideal – for example, offering the option of taking part in an online interview or of submitting audio-diaries recorded at their own discretion.

Attention should also be given to the various details of the research design which pertain to participant involvement more specifically. Does that online survey really need to include fifteen scales, and does it really need to ask all those demographic questions? Do online interviews really need to exceed thirty minutes, and is it really necessary to require participants to turn their cameras on (essentially inviting you into their homes)? The ‘standard procedures’ for collecting data should be critically re-evaluated by researchers in light of the real-world context of those from whom they wish to collect data, with the aim of upholding their commitment to responsible research practices. Ethics boards should also aid researchers in identifying areas of their research designs which may be adapted to protect participants. This additional critical perspective can highlight participation conditions that are arduous for participants but may have been overlooked as part of a traditional research design.

Research during unprecedented times should also aim to provide a benefit to participants who generously donate their time and energy despite experiencing various transitions and changes in their own personal lives. While some researchers may need to devise creative solutions to meet this aim, many research methods in the social sciences have the inherent potential to serve as an activity which provides a benefit to those who engage in their process. For example, researchers may opt to collect data through methods which have a documented potential for promoting psychological wellbeing, or which are also considered therapeutic mechanisms. Such approaches include methods which ask participants to reflect on their own experiences (e.g., audio-diaries, reflective entries, interviews with photo-elicitation) and those which focus on positive thoughts or emotions (e.g., topics related to hope, resilience, progress). Beyond these recommendations, researchers should also consider whether they really need participants at all. There are many options for conducting valuable research with minimal or no contact with participants, such as observational methods, content analyses, meta-analyses, or secondary analyses. Some may argue that research during a global pandemic should only be conducted with either previously acquired or secondary data; others may argue that primary data collected voluntarily from willing participants is entirely ethical. Either way, respecting participants and their role in our research is always necessary. Beyond the requirement to uphold institutional research integrity expectations, it is our individual responsibility to ensure that we, as researchers, protect those who make our work possible by assessing the vulnerability, minimizing the risks, and enhancing the benefits of participation – to the full extent of our capabilities.


Online qualitative surveys?!?

By Virginia Braun and Victoria Clarke

A qualitative survey? What about face-to-face interaction? All the non-verbal cues? Probing and following up? Depth of data? These are the types of sceptical questions we hear a lot when we talk about our research using qualitative surveys. Our doctoral students have even been told that they must supplement their qualitative survey data with another data source such as interviews, otherwise they will not have the depth of data they need. Sceptical questions like these are partly what motivated us to write about our experience of using online qualitative surveys for the International Journal of Social Research Methodology. We were also motivated by our enthusiasm for this method and wanted to share with other social researchers why we think it is a valuable addition to their methodological toolkit. We’ve used qualitative survey data over the last decade or so to explore everything from students’ responses to a gay pride T-shirt (Clarke, 2016, 2019) to male body hair removal discourse (Terry & Braun, 2016). We have also supervised numerous students using surveys – including Elicia Boulton, Louise Davey and Charlotte McEvoy, our three co-authors on this paper.

Examples of exclusively, or predominantly, qualitative surveys are relatively rare, but predominantly quantitative surveys with a few ‘open-ended’ questions are common. So how did we come to develop an enthusiasm for surveys as a qualitative method? Here we must credit our inspirational PhD supervisors – Celia Kitzinger and Sue Wilkinson – both great methodological innovators and ‘early adopters’, who encourage their PhD students to ‘experiment’ with research methods. Indeed, the small body of empirical research based on qualitative survey data mostly comes from Celia and Sue’s PhD students (e.g. Frith & Gleeson, 2004; Peel, 2010; Toerien & Wilkinson, 2004), and their students in turn (e.g. Hayfield, 2013; Jowett & Peel, 2009; Terry & Braun, 2017).

What is a qualitative survey then? Usually a series of questions focused on the topic of interest that participants answer in their own words. But qualitative surveys are not limited to questions and written responses; other possibilities include drawing tasks (see Braun, Tricklebank & Clarke, 2013) and responses to stimulus materials such as audio and video clips. Qualitative surveys are necessarily self-administered – if they were administered by a researcher, they would essentially be rather structured qualitative interviews that would fail to reap the benefits of ‘messy’, participant-centred qualitative interviewing. Qualitative surveys can be delivered in a variety of formats (hardcopy by post or in person, email attachment), but delivery via online survey software is pretty much the norm now, and that delivery mode is the focus of our discussion in our IJSRM paper.

When we think of (quantitative) surveys – as the sceptical questions we opened with illustrate – we typically think of breadth and, more prosaically, larger samples, whereas qualitative research is typically associated with depth and small, situated samples. How then can a method typically associated with breadth, and with quantitative research, have anything to offer qualitative researchers? To appreciate the possibilities of qualitative surveys, we first need to recalibrate how we think of depth – shifting from associating it with individual data items, as is typically the case, to assessing depth and richness in terms of the dataset as a whole. This is not to say that individual survey responses can’t be rich – they can, and we include a powerful example in our paper from Elicia Boulton’s survey of experiences of sex and sexuality for women with obsessive compulsive disorder. Not all responses will be like this though – well, certainly not in our experience of using qualitative surveys so far. But an entire dataset of 60, 80 or a hundred responses will provide a rich resource for qualitative analysis. Survey data also have their own unique character; they are not simply reduced interview data. They are very focused and dense with information – to the extent that a dataset that runs to the same number of pages as a small number of interview transcripts can feel like a lot of data! Our students typically cycle from an initial panic at the start of data collection or piloting – the responses aren’t very detailed! – to feeling delighted, or even overwhelmed, by the amount of information in the final dataset.

Okay, so survey data can be rich, but why would I use a qualitative survey rather than do some interviews over Zoom or Skype, with all the advantages of virtual interviewing? Let’s start with some of the practical and pragmatic benefits of qualitative surveys – for us as researchers. There are no bleary-eyed video calls at 6am or 11pm. Data collection can be relatively quick – and there’s no transcription! – leaving plenty of time for data analysis, which is particularly useful if working to a tight or fixed deadline. We are not advocating quick (and dirty) research as inherently good, however; good quality qualitative research takes time, and using a qualitative survey can allow time for the slow wheel of interpretation to turn when we do not have all the time we would ideally want and need to complete our research. In research with no funding, there are few or no costs associated with data collection (especially if you have access via your institution to online survey software). When it comes to student research, we think qualitative surveys can open up research possibilities – because there is no direct interaction with participants, there are likely fewer ethical concerns around inexperienced researchers addressing sensitive topics. For example, one of our undergraduate students researched young adults’ experiences of orgasm using a qualitative survey – it’s highly unlikely they would have received ethical approval to research this using interviews (see Opperman, Braun, Clarke & Rogers, 2013).

For participants, there are even more practical benefits – not least that they can participate when it is most convenient for them. Louise Davey noted that her participants often completed her survey on experiences of living with alopecia early in the morning or late in the evening – unlikely times for an interview. Online survey software will also usually allow completion over multiple sessions, so participants can complete the survey in several short bursts, fitting participation around their schedule, commitments, and indeed energy. This is one of the ways in which online qualitative surveys can give participants a greater sense of control over their participation. Surveys also typically ask less of participants – they do not have to spend an hour or two talking to a researcher at a particular time, and they do not have to travel to meet a researcher in person. Surveys also offer a strong sense of felt anonymity (in practice, online qualitative surveys are not completely anonymous) – this can be vital for some topics. In Charlotte McEvoy’s research on therapists’ views on class and therapy, for instance, some participants commented that they were glad of the anonymity of the survey; they would not have shared what they did – and, we can speculate, perhaps would not have participated at all – had they been invited to take part in an interview. This connects to another advantage of qualitative surveys: their potential to open up participation for groups for whom face-to-face participation is challenging in various ways. This includes some disabled people, people with caring responsibilities, people with visible differences – such as alopecia – who may feel anxious about being visible to and open to scrutiny by the researcher, and people for whom social interaction with strangers can be profoundly anxiety-inducing (such as people with OCD).

This is just a taster of some of the benefits and possibilities of qualitative surveys. We hope we have enticed you to read further into the qualitative survey literature and discover the joys, and challenges, of this method for yourself!

See the full IJSRM article here.