
Exploring unplanned data sites for observational research during the pandemic lockdown

By Areej Jamal (UCL Social Research Institute)

I had just completed my PhD upgrade and was preparing for my fieldwork when the Covid-19 outbreak struck. My fieldwork was due to start overseas, in my home country, but border closures left me stranded in the country where I was undertaking my PhD, and I could not travel home for months.

Unsure of when the borders would reopen and anxious about starting my fieldwork, I had to adapt to the new circumstances and use my research time more efficiently. Since my research employs a mixed-methods design using online surveys and qualitative interviews, I did not have to pivot much of the original design. However, my proposed design also had an observational element. Being an insider to the community I am researching, I had been looking forward to attending social meetings during fieldwork to observe and gather less apparent insights. But since the social distancing protocols of Covid-19 looked set to be long term, I had to reconsider the observation element. In this blog, I reflect on how I took the changing circumstances in my stride and tried to gather insights from unplanned data collection sites.

Reframing the methods design

My research investigates the lived experiences of long-term migrants and how they make sense of their identity and belonging in a country where they have no pathway to citizenship. Due to the Covid-19 crisis, I had to reframe some of my ways of collecting data while keeping the essence and relevance of my research questions intact. Though I could carry out the online surveys as originally planned, I had to move the in-depth interviews online. The ethical and methodological amendments arising from the online interviews were submitted to the ethics committee for review.

Stranded abroad in lockdown, I started exploring online content as sites for data collection. Unobtrusive observation of online content became a very significant part of my revised research design. The two main online sources of salient insights were videos (vlogs) on YouTube, where some migrants had been documenting almost every aspect of their lives, and online support groups, where mostly distressed migrants were interacting. The latter was a chance discovery, as I was myself a member of these groups, seeking guidance and information on returning home.

Unobtrusive observation means that a researcher observes and collects data from online sources such as websites, social media sites, or discussion forums without necessarily interacting with the participants. I was passively observing the interactions taking place on these online platforms.

Making sense of the data and meanings emerging from the observation sites

Robinson (2016) and Seale et al. (2010) posit that unsolicited narratives provide a richness of new knowledge that is often lacking in solicited accounts. Because the researched community is unaware of the ongoing observation, they tend to offer genuine, unguarded interpretations of life under specific circumstances; the narrator controls the content without the researcher’s interference.

The migrant stories I had been observing on selected YouTube accounts and in the online support groups I had joined on WhatsApp offered a particularly significant source of information and insight. The self-reflections and opinions the observed groups expressed about their temporary migrant status, and the impact it has had on some of their life decisions (further exacerbated by the pandemic), gave me context and ideas relevant to my research questions. These narrator-driven stories uncovered aspects of life that would probably not have surfaced through any of my other methods. Seale et al. (2010) suggest that because online interactions occur in real time, they offer an immediacy that is often lacking in methods where participants mostly reflect on and reconstruct past occurrences.

The new knowledge emerging out of these online sources helped me draft some of the initial themes and areas to further investigate through the other methods.

Methodological and Ethical considerations

The existing literature contains various ethical debates about unobtrusive research and about ways of examining personal narratives that individuals produce through online media. Some researchers (Eysenbach and Wyatt, 2002; Seale et al., 2010) argue that since most online content is publicly available and aimed at a general audience, the need for informed consent is ambiguous. In the case of online support groups, however, Barker (2008) and O’Brien and Clark (2011) discuss the limits of treating such content as public, given the small number of group members. The support groups I had been a member of had admins and certain privacy protocols that all group members had to abide by.

The positionality of the researcher is a very important aspect throughout the research process. Salmons’ (2012) E-interviews Research Framework offers useful prompts for reflecting on crucial questions of self-reflexivity when undertaking unobtrusive observation of online content.

Matters of confidentiality and of seeking consent for data generated from online sources need much deliberation, and largely depend on the objectives of the researcher and the ways they aim to present and report the findings. Although I am still exploring ways of representing information from these valuable sites, I find Annette Markham’s (2012) idea of a fabrication approach quite useful; it involves ‘creative, bricolage-style transfiguration of original data into composite accounts or representational interactions’ (2012, p. 334) without divulging any specific details of the researched community.

Conclusions

Sometimes the most obvious data sites do not become apparent until unexpected circumstances constrain our methodological choices. Agility is an intrinsic characteristic of most social research. At present, data collection is in constant flux in response to an unprecedented crisis. Now more than ever, the relentless pandemic offers a critical window for researchers to explore creative and novel approaches to data collection and innovative ways of tapping the potential of existing methods.


Call for Contributions: Adapting Conventional Research Methods for the New Normal

In search of Novel Adaptations in Social Science Research Methods for the World of COVID

With a second wave of the global COVID-19 pandemic now sweeping across the West, it is becoming clear that a return to ‘normal’ may be a long way off yet. The need for social research, however, is now more pressing than ever. The International Journal of Social Research Methodology is inviting researchers, academics, and doctoral students to share blog contributions reflecting on their experiences of adapting existing research methods to meet the needs of our new research paradigm. The goal is to help the social science research community find inspiration and learn from one another as we continue to adapt our methods to meet contemporary needs. Contributions reflecting all aspects of social science research, including research ethics as well as quantitative and qualitative methods, are welcome. Preference will be given to submissions that demonstrate novel adaptation, creativity, and/or innovation.

Your contributions should be emailed to tsrm-editor@tandf.co.uk in MS Word format. Accompanying images may be included. Contributions should not exceed 1,000 words.

Deadline for submissions is 30 November 2020.


Developing assessment criteria of trustworthiness for the Critical Interpretive Synthesis

By Joke Depraetere, Christophe Vandeviver, Ines Keygnaert & Tom Vander Beken

Reviewing qualitative and quantitative research? Or aiming to develop a new theory from the literature? The relatively new Critical Interpretive Synthesis (CIS) review type allows for both. Emphasizing flexibility and a critical orientation, the CIS aims to develop a new, coherent theoretical framework from both qualitative and quantitative research. Recognized as one of the best review types, the CIS provides a fresh interpretation of the data rather than a summary of results, as is often the case with other review types. However, the CIS’s greatest advantage, its flexibility, is also one of its greatest disadvantages: it hampers implementation, introduces ambiguity in execution and reporting, and thereby exacerbates concerns about trustworthiness.

In our published work in the International Journal of Social Research Methodology, we developed evaluation criteria for the CIS and applied them to 77 published CIS reviews. By developing these criteria and assessing existing CIS reviews, we aimed to evaluate the trustworthiness of those reviews and to provide guidance to future authors, journal editors, and reviewers in implementing and evaluating the CIS.

The paper outlines two important components of trustworthiness in scientific research: transparency and systematicity. While transparency concerns the reproducibility of the review process, systematicity emphasizes that fit-for-purpose methods must be implemented and well executed. Previous scholars (Templier & Paré, 2017; Paré et al., 2016) have developed various guidelines regarding transparency and systematicity in review types. These guidelines, however, remained broad and lacked a focus on the specificities of the various review types. Each review type is characterized by different key features that distinguish it from other types. These features should be transparently reported and soundly executed (i.e., systematicity). Some features can be considered more central and important than other, more peripheral ones. This makes it possible to identify a hierarchy of features and to evaluate the extent to which the central features of a review type have been consistently implemented and clearly reported.

Overall, seven key features are formulated and presented in a hierarchy based on the main goals of the CIS as emphasized by previous scholars (Dixon-Woods et al., 2006b; Entwistle et al., 2012). Both aspects of trustworthiness were evaluated, allowing us to distinguish between the transparency and the systematicity of the various key features. In our evaluation of the CIS reviews, we identified six groups of papers based on their scores on these key features. While only 28 papers transparently reported and soundly executed the four highest-ranked features in the hierarchy, the majority of the papers (N = 47) did well on the two most important features of the CIS. These represent the main goal of the CIS, namely the development of a theoretical framework using methods as described by the original authors of the CIS (Dixon-Woods et al., 2006). This means, however, that the remaining 30 of the 77 papers (over 38%) cannot be considered trustworthy in terms of transparently reporting and soundly executing the two highest-ranked features of the CIS.

The paper details which key features of the CIS were soundly executed and transparently reported, and which performed rather poorly. We conclude with various recommendations for future scholars, reviewers, and journal editors on how the trustworthiness of CIS papers could be improved through better implementation and evaluation of CIS reviews. While this paper focuses on only one review type, we hope it may serve as a starting point for developing similar evaluation criteria for methodological reporting in other review genres.

Read the full IJSRM article here.


Radical critique of interviews – an exchange of views on form and content

By Rosalind Edwards (IJSRM Co-editor)

The ‘radical critique’ of interviews is a broad term encompassing a range of differing positions, but a shared element is an argument that interviews are not a method of grasping the unmediated experiences of research participants – that is, the content of the interview data.  Rather, the enactment of the method, of interviewer and interviewee exchanges, is data – that is, the form.  The critique has been the subject of a scholarly exchange of views in the Journal, drawing attention to agreements and distinctions in debates about radical critiques of interview data in social research.

In a themed section of the Journal on ‘Making the case for qualitative interviews’, Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright contributed an article arguing that the focus on interviews as narrative performance (form) leaves in place a seemingly unbridgeable divide between the experienced and the expressed, and a related conflation of what can be said in interviews with what interviews can be used to say.  They call for attention to the ways that interview data may be used to discuss the social world beyond the interview encounter (content).

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Beyond performative talk: critical observations on the radical critique of reading interview data’.

Emilie Whitaker and Paul Atkinson responded to these observations, arguing that while their work (cited in Hughes et al.) urges methodologically informed, reflexive analytic attention to interviews as speech events and social encounters (form), this is not at the expense of attention to content. Indeed, they say, there cannot be content without form.

Emilie Whitaker and Paul Atkinson – ‘Response to Hughes, Hughes, Sykes and Wright’.

In reply, Hughes and colleagues state their intention to urge a synthesis that prioritises a focus on the content of interviews and the possibilities for what researchers can do with it, just as much as a critical attention to its form.

Jason Hughes, Kahryn Hughes, Grace Sykes and Katy Wright – ‘Response to Whitaker and Atkinson’.

The renditions of these constructive exchanges are my own, and may not (entirely) reflect those of the authors.


I Say, They Say: Effects of Providing Examples in a Survey Question

By Eva Aizpurua, Ki H. Park, E. O. Heiden & Mary E. Losch

One of the first things that survey researchers learn is that questionnaire design decisions are anything but trivial. The order of the questions, the number of response options, and the labels used to describe them can all influence survey responses. In this Research Note, we turn our attention to the use of examples, a common component of survey questions. Examples are intended to help respondents, providing them with information about the type of answers expected and reminding them of responses that might otherwise go unnoticed. For instance, the 2020 U.S. National Health Interview Survey asked about the use of over-the-counter medication, and included “aspirin, Tylenol, Advil, or Aleve” in the question stem. There are many other examples in both national and international surveys. Despite the potential benefits of using examples, there is a risk that respondents will focus too much on them, at the expense of overlooking cases not listed as examples. This phenomenon, called the “focusing hypothesis”, is what we test in our study.

Using an experimental design, we examined the effects of providing examples in a question about multitasking (“During the time we have been on the phone, in what other activities, if any, were you engaged [random group statement here]?”). In this experiment, respondents were randomly assigned to one of three conditions: the first group received one set of examples (watching TV or watching kids), the second group received a different set of examples (walking or talking with someone else), while the final group received no examples. Our goal was to determine whether respondents were more likely to report an activity (e.g., watching TV or walking) when it was listed as an example. We also wanted to understand whether providing examples resulted in respondents listing more activities beyond the examples.

We embedded this experiment in a telephone survey conducted in a Midwestern U.S. state and found support for the focusing hypothesis. As anticipated, respondents were more likely to mention an activity if it was provided to them as an example. However, the effect sizes were generally small, and examples had no effect on the percentage of respondents who identified themselves as multitaskers, nor on the number of activities they reported. This is because respondents in the example conditions were more likely to list the examples presented to them (i.e., watching TV, watching kids, walking, talking with someone else), while those in the control group more frequently reported activities outside this range (cooking, doing housework…), yielding no differences in the frequency of multitasking or in the number of multitasking activities. Although examples can help respondents understand the scope of a question and remind them of certain responses, the results from this study indicate that they can also restrict the memory search to the examples provided. This has implications for survey practice, suggesting that the inclusion of examples should be carefully considered and limited to certain situations, such as questions where recall errors are anticipated or where the scope of the question might be unclear.
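For readers who want to probe this kind of design themselves, the sketch below simulates a simplified version of the experiment and tests the focusing hypothesis with a chi-square test of independence. It is a minimal illustration only: the mention probabilities, sample sizes, and condition labels are assumptions for demonstration, not the authors’ data or analysis.

    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(2020)
    n = 300  # assumed number of respondents per condition

    # Assumed probabilities that a respondent mentions "watching TV":
    # higher when it is cued as an example (the focusing hypothesis).
    p_mention = {
        "tv_cued_as_example": 0.30,
        "other_examples_cued": 0.15,
        "no_examples": 0.15,
    }

    # Simulate mention counts and build a conditions x (mentioned, not mentioned) table.
    table = []
    for condition, p in p_mention.items():
        mentioned = rng.binomial(n, p)  # one binomial draw per condition
        table.append([mentioned, n - mentioned])

    chi2, pval, dof, expected = chi2_contingency(np.array(table))
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {pval:.4f}")

A significant result on this mention-rate table, alongside a null result for the total number of activities reported, would mirror the pattern the authors describe: examples shift which activities respondents name rather than how many.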

To learn more, see the full IJSRM article here.