Critical reflections on the ‘new normal’: Synchronous teaching of CAQDAS-packages online during COVID-19

By Christina Silver, Sarah L. Bulloch, & Michelle Salmona

Our contribution discusses synchronous online teaching of digital tools for qualitative and mixed-methods analysis, known as Computer Assisted Qualitative Data AnalysiS (CAQDAS) packages, during the COVID-19 pandemic. Teachers must take responsibility for, and be sensitive to, the additional challenges and pressures currently facing learners, and attend to them effectively. Learners are never homogeneous but, in these contexts, their heterogeneity and personal situations bring our responsibilities as teachers into sharper focus.

Challenges of teaching CAQDAS-packages

Teaching CAQDAS-packages is challenging because research methods and technology are taught together, and researchers often need support in overcoming the hurdles associated with integrating technology into research practice. Although CAQDAS use can support critical reflection on methods-driven research, novice researchers have trouble connecting method and software (Salmona & Kaczynski, 2016; Schmieder, 2020).

Traditionally, CAQDAS is taught in person, but even before COVID-19 there was a gradual move to online courses, which can be cost-effective and reach wider groups. However, teaching CAQDAS online has its own challenges, including possible technical problems, catering to different learning styles, and interactional issues (Kalpokaite & Radivojevic, 2020). Learning CAQDAS-packages online also heightens the challenge of overcoming barriers to successful technological adoption, because the support normally present in person is lacking (Salmona & Kaczynski, 2016). Teaching CAQDAS-packages online during COVID-19 poses additional challenges related to learner availability, real life bleeding into the classroom, and resultant interactional issues.

Learner availability in the COVID-context 

Pre-COVID-19, both in-person and online, certain assumptions were often made concerning the ‘availability’ of learners: 

  • They would be present for the duration, unless specific exceptions were negotiated in advance (e.g. warning that they might have to take a call or leave early).
  • Only registered learners would be present – not family members, carers, or dependents.
  • Learners would be in a state of mental and physical health suited to learning.

Teachers could generally assume to be engaging not with whole individuals, but with focused “learners”: the mentally present and mentally well, physically present and physically well, the not-distracted, the captive from start to finish, solo individuals.

Real-life bleeding into the classroom

During COVID-19 these assumptions no longer hold true. We cannot expect learners to focus for the whole allotted time, because they cannot necessarily remove themselves, physically or emotionally, from their home-life contexts. New distractions and stresses include interruptions from household members, reduced capacity to concentrate during lengthy periods of screen-time, and mental-health issues associated with being more isolated. However, because in-person interactions have largely vanished, learners are keen to participate in online sessions despite these distractions and stresses. Online sessions also provide learning opportunities for those previously unable to access in-person events.

As we teach and learn from our homes, real lives bleed into the classroom. Sharing our images via video-stream lets others into our lives, which is potentially risky. We’ve found that more learners choose not to share their video-stream than do, especially in larger groups and when they don’t know each other.

What we miss by not ‘seeing’

Those used to teaching in person can find this tricky, as the non-verbal cues used to judge learners’ progress are absent. CAQDAS teachers can no longer ‘walk-the-room’, observing learners’ computer-screens to identify those needing additional support. Screen-sharing can be a solution, but it is more time-consuming, ethically difficult when working with confidential data, and impossible when a learner uses two devices (one to access the meeting, the other to operate the CAQDAS-package). We miss a lot by not seeing in these ways.

One risk is that those who can actively participate inadvertently soak up attention at the cost of those who cannot. It’s our responsibility as teachers to be aware of this and to design creative solutions that enable every learner to participate as much as they are willing and able, whilst still benefiting from the session.

Adjusting tactics for the ‘new normal’

We’re therefore continually adjusting how we teach CAQDAS-packages online during COVID-19. Current uncertainties land responsibilities on us as teachers, not on our course participants: we must find out what they need, reflect on our practice, and refine our pedagogies. 

Moving from in-person to online always requires a redesign (Silver & Bulloch, 2020), but during COVID-19 we are also:

  • Educating ourselves about accessibility to ensure we sensitively and effectively open our events to every type of learner
  • Engaging learners more before sessions to understand personal/research needs and provide pre-course familiarisation materials
  • Reducing class-sizes. It’s often assumed that class-sizes can be larger online, but we find the opposite, especially during COVID-19. Although we’ve recently experienced pressure to increase group sizes, we resist it because of the increased need to balance the requirements of every learner and to provide individual support
  • Co-teaching to provide additional support in synchronous online events during COVID-19. Learners can be split according to their needs and two groups supported simultaneously
  • Providing more post-course resources to support learners’ continued use of CAQDAS-packages and hosting platforms for them to communicate with one another afterwards (e.g. VLE platforms)
  • Diversifying teaching tactics to provide as many opportunities as possible for learners to engage and participate. Awareness of the different ways people learn has always been central to our pedagogies (Silver & Woolf, 2015), but our sensitivities and reflections have increased. We’ve found that mixing up tactics (see image) within shorter sessions is more effective.

Where do we go from here?

Teachers continually critique and reflect on practice, but COVID-19 requires a re-evaluation of learners’ differences and reflection about their more challenging situations. We are all learning and must continue to do so.

COVID-19 brings ethical issues even more to the forefront, including the appropriateness of requiring or encouraging learners to share their image via video. We must think about disabilities, access to technology, and socio-economic issues in a context where learning is only available online. Positives have also emerged, as sessions can be followed from a range of devices and locations.

COVID-19 forces us to explicitly consider the well-being of learners. Despite coming at this difficult time, we welcome this focus. All our situations have changed, so we need to think about the issues differently. What are the additional ethical issues we must now address? How do we keep this conversation going?

About the authors

Together we have 50+ years’ experience teaching CAQDAS-packages and 30+ years’ experience teaching online. Dr Michelle Salmona is President of the Institute for Mixed Methods Research and an international consultant in program evaluation, research design, and mixed-methods and qualitative data analysis using data applications. Michelle is also an Adjunct Professor at the University of Canberra, Australia, specializing in qualitative and mixed methods research. Dr Sarah L Bulloch is a social researcher passionate about methods, with expertise in qualitative and quantitative analysis, as well as mixing the two. She has worked in the academic, government, voluntary and private sectors. Sarah teaches introductory and advanced workshops in several CAQDAS packages as a Teaching Fellow for the CAQDAS Networking Project at the University of Surrey, as well as teaching quantitative analysis using SPSS. Dr Christina Silver is Director of Qualitative Data Analysis Services, providing training and consultancy for qualitative and mixed-methods analysis. She also manages the CAQDAS Networking Project (CNP), leading its capacity-building activities. She has trained thousands of researchers in the powerful use of CAQDAS-packages, including NVivo, and developed the Five-Level QDA® method with Nick Woolf.

References

  • Kalpokaite, N. & Radivojevic, I. (2020). Teaching qualitative data analysis software online: a comparison of face-to-face and e-learning ATLAS.ti courses, International Journal of Research & Method in Education, 43(3), pp. 296-310, DOI:10.1080/1743727X.2019.1687666.
  • Salmona, M. & Kaczynski, D. (2016). Don’t Blame the Software: Using Qualitative Data Analysis Software Successfully in Doctoral Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 17(3), Art 11, http://nbn-resolving.de/urn:nbn:de:0114-fqs1603117.
  • Schmieder, C. (2020). Qualitative data analysis software as a tool for teaching analytic practice: Towards a theoretical framework for integrating QDAS into methods pedagogy. Qualitative Research, 20(5), pp. 684-702. 
  • Silver, C. & Woolf, N. (2015). From guided instruction to facilitation of learning: The development of Five-Level QDA as a CAQDAS pedagogy that explicates the practices of expert users. International Journal of Social Research Methodology, 18(5), pp. 527-543.
  • Silver, C. & Bulloch, S.L. (2020). Teaching NVivo using the Five-Level QDA® Method: Adaptations for Synchronous Online Learning. Paper presented at the QSR International Virtual Conference, Qualitative Research in a Changing World, September 24th 2020.

Exploring unplanned data sites for observational research during the pandemic lockdown

By Areej Jamal (UCL Social Research Institute)

I was completing my PhD upgrade and preparing for fieldwork due to start soon after, when the Covid-19 outbreak struck. Because of border closures I was stuck in the country where I was undertaking my PhD, while my fieldwork was due to take place overseas, in my home country. I could not travel back home for months.

Unsure of when the borders would open again and anxious about starting my fieldwork, I had to adapt to the new circumstances and utilise my research time more efficiently. Since my research employs a mixed-methods design using online surveys and qualitative interviews, I did not have to pivot much of the original design. However, my proposed design also included an observational element. Being an insider to the community I am researching, I had been looking forward to attending social meetings during my fieldwork to observe and gather less apparent insights. But since the social-distancing protocols of Covid-19 seemed long term, I had to reconsider the observation element. In this blog, I reflect on how I took the changing circumstances in my stride and tried to gather insights from unplanned data collection sites.

Reframing the methods design

My research investigates the lived experiences of long-term migrants and how they make sense of their identity and belonging in a country where they have no pathway to citizenship. Due to the Covid-19 crisis I had to reframe some of my ways of collecting data while keeping the essence and relevance of my research questions. Though I could carry out the online surveys as originally planned, I had to switch the in-depth interviews to online interviews. The ethical and methodological amendments arising from the move to online interviews were submitted to the ethics committee for review.

Stranded abroad in lockdown, I started exploring and observing online content as sites for data collection. Unobtrusive observation of online content became a very significant part of my revised research design. The two main online sources where I found salient insights were videos (vlogs) on YouTube, where some migrants had been documenting almost every aspect of their lives, and online support groups, where mostly distressed migrants were interacting. The latter was a chance discovery, as I myself was part of these groups, seeking guidance and information to return home.

Unobtrusive observation implies that a researcher observes and collects data from online sources such as websites, social media sites or discussion forums without necessarily interacting with the participants. I was passively observing the interactions taking place on these online platforms.

Making sense of the data and meanings emerging from the observation sites

Robinson (2016) and Seale et al. (2010) posit that unsolicited narratives provide a richness of new knowledge that is often lacking in solicited accounts. Since the researched community is unaware of the ongoing observation, they tend to offer genuine interpretations of life under specific circumstances, and the narrator controls the content without the researcher’s interference.

The migrant stories I had been observing on selected YouTube accounts and in the online support groups I had joined on WhatsApp offered me a particularly significant source of information and insights. The self-reflections and opinions the observed groups expressed about their temporary migrant status, and about the impact it has had on some of their life decisions (an impact further exacerbated by the pandemic), gave me context and ideas relevant to my research questions. These narrator-driven stories uncovered aspects of life that probably would not have surfaced through any of my other methods. Seale et al. (2010) suggest that since online interactions occur in real time, they offer an immediacy that is often lacking in methods where participants mostly reflect on and reconstruct past occurrences.

The new knowledge emerging out of these online sources helped me draft some of the initial themes and areas to further investigate through the other methods.

Methodological and Ethical considerations

The existing literature contains various ethical debates surrounding unobtrusive research and the ways of examining personal narratives that individuals produce online. Some researchers (Eysenbach & Wyatt, 2002; Seale et al., 2010) argue that since most online content is publicly available and aimed at a general audience, the need for informed consent is ambiguous. In the case of online support groups, however, Barker (2008) and O’Brien and Clark (2011) discuss the limits of treating content as public, given the smaller number of group members. The support groups I had been a member of had admins and certain privacy protocols that all group members had to abide by.

The positionality of the researcher is a very important aspect throughout the research process. Salmons’ (2012) E-Interview Research Framework offers useful prompts for reflecting on some crucial questions of self-reflexivity when undertaking unobtrusive observation of online content.

Matters of confidentiality and of seeking consent for data generated from online sources need much deliberation, and largely depend on the objectives of the researcher and the ways they aim to present and report the findings. Although I am still exploring ways of representing information from these valuable sites, I find the fabrication approach of Annette Markham (2012) quite useful, which implies ‘involving creative, bricolage-style transfiguration of original data into composite accounts or representational interactions’ (2012, p. 334) without divulging any specific details of the researched community.

Conclusions

Sometimes the most obvious data sites do not become apparent until unexpected circumstances constrain our methodological choices. Agility is an intrinsic characteristic of most social research. At present, data collection is in constant flux, responding to an unprecedented crisis. Now more than ever, the relentless pandemic situation offers a critical window for researchers to explore creative and novel approaches to data collection and innovative ways to tap the potential of existing methods.

Developing assessment criteria of trustworthiness for the Critical Interpretive Synthesis

By Joke Depraetere, Christophe Vandeviver, Ines Keygnaert & Tom Vander Beken

Reviewing qualitative and quantitative research? Or aiming to develop a new theory based on readings of the literature? The relatively new review type, the Critical Interpretive Synthesis (CIS), allows for both. Emphasizing flexibility and a critical orientation in its approach, the CIS aims to develop a new, coherent theoretical framework from both qualitative and quantitative research. Recognized as one of the best review types, the CIS provides a fresh interpretation of the data rather than a summary of results, as is often the case with other review types. However, the CIS’s greatest advantage, flexibility, is also one of its greatest disadvantages, since it hampers implementation, introduces ambiguity into execution and reporting, and therefore exacerbates concerns about trustworthiness.

In our work published in the International Journal of Social Research Methodology, we developed evaluation criteria for the CIS and applied them to 77 published CIS reviews. By developing these criteria and assessing existing CIS reviews, we aimed to evaluate the trustworthiness of these reviews and to provide guidelines to future authors, journal editors and reviewers for their implementation and evaluation of the CIS.

The paper outlines two important components of trustworthiness in scientific research: transparency and systematicity. While transparency focuses on the reproducibility of the review process, systematicity emphasizes that fit-for-purpose methods need to be implemented and well executed. Previous scholars (Templier & Paré, 2017; Paré et al., 2016) have already developed various guidelines regarding transparency and systematicity in review types. These guidelines, however, remained broad and lacked a focus on the specificities that accompany the various review types. Each review type is characterized by different key features that distinguish it from other types. These features should be transparently reported and soundly executed (i.e. systematicity). Some features can be considered more central and important than other, more peripheral, features. This makes it possible to identify a hierarchy of features and to evaluate the extent to which the central features of a review type have been consistently implemented and clearly reported in research.

Overall, seven key features are formulated and presented in a hierarchy based on the main goals of the CIS as emphasized by previous scholars (Dixon-Woods et al., 2006b; Entwistle et al., 2012). Both aspects of trustworthiness were evaluated, allowing us to distinguish between the transparency and the systematicity of the various key features. During our evaluation of the CIS reviews, we identified six groups of papers based on their scores on these key features. While only 28 papers transparently reported and soundly executed the four highest-ranked features in the hierarchy, the majority of papers (N = 47) did well on the two most important features of the CIS. These two features represent the main goal of the CIS, namely the development of a theoretical framework using the methods described by the original authors of the CIS (Dixon-Woods et al., 2006). This also means, however, that over 38% of the papers cannot be considered trustworthy in terms of transparently reporting and soundly executing the two highest-ranked features of the CIS.

The paper details which key features of the CIS were soundly executed and transparently reported and which performed rather poorly. We conclude with various recommendations for future scholars, reviewers and journal editors on how the trustworthiness of CIS papers could be improved in the implementation and evaluation of CIS reviews. While the paper focuses on only one review type, we hope it may serve as a starting point for developing similar evaluation criteria for methodological reporting in other review genres.

Read the full IJSRM article here.

Radical critique of interviews – an exchange of views on form and content

By Rosalind Edwards (IJSRM Co-editor)

The ‘radical critique’ of interviews is a broad term encompassing a range of differing positions, but a shared element is an argument that interviews are not a method of grasping the unmediated experiences of research participants – that is, the content of the interview data.  Rather, the enactment of the method, of interviewer and interviewee exchanges, is data – that is, the form.  The critique has been the subject of a scholarly exchange of views in the Journal, drawing attention to agreements and distinctions in debates about radical critiques of interview data in social research.

In a themed section of the Journal on ‘Making the case for qualitative interviews’, Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright contributed an article arguing that the focus on interviews as narrative performance (form) leaves in place a seemingly unbridgeable divide between the experienced and the expressed, and a related conflation of what can be said in interviews with what interviews can be used to say.  They call for attention to the ways that interview data may be used to discuss the social world beyond the interview encounter (content).

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Beyond performative talk: critical observations on the radical critique of reading interview data’.

Emilie Whitaker and Paul Atkinson responded to these observations, arguing that while their work (cited in Hughes et al.) urges methodologically informed, reflexive analytic attention to interviews as speech events and social encounters (form), this is not at the expense of attention to content. Indeed, they say, there cannot be content without form.

Emilie Whitaker and Paul Atkinson – ‘Response to Hughes, Hughes, Sykes and Wright’.

In reply, Hughes and colleagues state their intention to urge a synthesis that prioritises a focus on the content of interviews and the possibilities for what researchers can do with it, just as much as a critical attention to its form.

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Response to Whitaker and Atkinson’.

The renditions of these constructive exchanges are my own, and may not (entirely) reflect those of the authors.

I Say, They Say: Effects of Providing Examples in a Survey Question

By Eva Aizpurua, Ki H. Park, E. O. Heiden & Mary E. Losch

One of the first things that survey researchers learn is that questionnaire design decisions are anything but trivial. The order of the questions, the number of response options, and the labels used to describe them can all influence survey responses. In this Research Note, we turn our attention to the use of examples, a common component of survey questions. Examples are intended to help respondents, providing them with information about the type of answers expected and reminding them of responses that might otherwise go unnoticed. For instance, the 2020 U.S. National Health Interview Survey asked about the use of over-the-counter medication, and included “aspirin, Tylenol, Advil, or Aleve” in the question stem. There are many other examples in both national and international surveys. Despite the potential benefits of using examples, there is a risk that respondents will focus too much on them, at the expense of overlooking cases not listed as examples. This phenomenon, called the “focusing hypothesis”, is what we test in our study.

Using an experimental design, we examined the effects of providing examples in a question about multitasking (“During the time we have been on the phone, in what other activities, if any, were you engaged [random group statement here]?”). In this experiment, respondents were randomly assigned to one of three conditions: the first group received one set of examples (watching TV or watching kids), the second group received a different set of examples (walking or talking with someone else), while the final group received no examples. Our goal was to determine whether respondents were more likely to report an activity (e.g., watching TV or walking) when it was listed as an example. We also wanted to understand whether providing examples resulted in respondents listing more activities beyond the examples.

We embedded this experiment in a telephone survey conducted in a Midwestern U.S. state and found support for the focusing hypothesis. As anticipated, respondents were more likely to mention an activity if it was provided to them as an example. However, the effect sizes were generally small, and examples did not have an effect on the percentage of respondents who identified themselves as multitaskers, nor on the number of activities they reported. This is because people in the experimental conditions were more likely to list the examples presented to them (i.e., watching TV, watching kids, walking, talking with someone else), while those in the control group more frequently reported activities outside this range (cooking, doing housework…), yielding no differences in the frequency of multitasking or in the number of multitasking activities. Although examples can help respondents understand the scope of the question and remind them of certain responses, the results from this study indicate that they can also restrict the memory search to the examples provided. This has implications for survey practice, suggesting that the inclusion of examples in questions should be carefully considered and limited to certain situations, such as questions in which recall errors are anticipated or when the scope of the question might be unclear.

To learn more, see the full IJSRM article here.