
Testing key underlying assumptions of respondent driven sampling within a real-world network of people who inject drugs

By Ryan Buchanan, Charlotte Cook, Julie Parkes, & Salim I Khakoo

The World Health Organization has recently set a target for the global elimination of Hepatitis C. To monitor progress, however, accurate methods are needed to track the changing prevalence of Hepatitis C in the populations most affected by the virus. People who inject drugs are a marginalized and often hidden population with a high prevalence of Hepatitis C, so tracking Hepatitis C infections in these populations can be difficult. One method for doing this is Respondent Driven Sampling, or RDS. However, prevalence estimates made using RDS rely on several assumptions, and it is difficult to test whether these assumptions have been met.

Our recently published article in the International Journal of Social Research Methodology describes a novel way to do just this. This blog shares some of the challenges faced in doing this work and how, by using novel social network data collection techniques, we were able to test some of the assumptions of RDS in an isolated population of people who inject drugs on the Isle of Wight in the United Kingdom. Before delving into how we did this, a brief introduction to the RDS method is necessary.

RDS requires that researchers start with a carefully selected sample within a target population. These individuals (called seeds) are asked to refer two or three friends or acquaintances, who are also eligible to take part, to the researchers. These new participants are asked to do the same, and recruitment continues in this way through ‘waves’ until the desired sample size is achieved. Then, using appropriate software, the data collected during the survey about each person’s social network allows for the estimation of population prevalence (e.g., how common Hepatitis C is in the population of people who inject drugs).
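
The ‘appropriate software’ referred to above typically applies a degree-weighted estimator to correct for the fact that well-connected people are more likely to be recruited. As an illustration only (not necessarily the estimator or software used in our study), the sketch below computes the widely used RDS-II (Volz-Heckathorn) prevalence estimate from hypothetical data in Python:

```python
# Illustrative sketch of the RDS-II (Volz-Heckathorn) estimator: each respondent
# is weighted by the inverse of their reported network size (degree), which
# adjusts for well-connected people being more likely to be recruited.
# The data below are invented for illustration.

def rds_ii_prevalence(degrees, infected):
    """Estimate prevalence from reported degrees and infection status.

    degrees  -- each respondent's reported network size (> 0)
    infected -- booleans, True if the respondent tested positive
    """
    weights = [1.0 / d for d in degrees]
    weighted_positive = sum(w for w, pos in zip(weights, infected) if pos)
    return weighted_positive / sum(weights)

degrees = [3, 5, 2, 8, 4, 6]                       # hypothetical network sizes
infected = [True, False, True, False, True, False]  # hypothetical test results
print(f"Estimated prevalence: {rds_ii_prevalence(degrees, infected):.2f}")
```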

We used RDS to estimate the prevalence of Hepatitis C among people who inject drugs on the Isle of Wight, hypothesizing that the local treatment programme was closer to achieving the elimination of the virus than the available data suggested.

However, concerns remained about the potential flaws of RDS, and we were interested in how methods could be developed to assess them. Here our study on the Isle of Wight presented a unique opportunity. The small island population made it possible to map the social network connecting people who inject drugs, through which the sampling process passes. With this network ‘map’ it would then be possible to test whether some of the assumptions underlying the method had been met.

To achieve this, a mixed methods social network study was run alongside the main survey. Interviews were conducted with people who inject drugs on the Island, as well as with the service providers who worked with them. These interviews explored how they were all interconnected. Survey participants were also asked about their social networks, which aided the construction of a representation of the network through which the ‘waves’ of the sampling process passed.

Unsurprisingly, many survey participants were unenthusiastic about identifying friends and acquaintances who also inject drugs. Instead, unique codes were used for each individual described, comprising their initials, age, hair colour, gender, and the village or town where they lived. Participants were asked about each individual they described (e.g., how frequently they injected and whether they used needle exchange services). In this way a picture of the social network of people who inject drugs on the Isle of Wight was gradually built up, providing insights into this population even though some of the target population had not come forward to participate directly in the survey.
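
As a purely illustrative sketch (the exact coding scheme, field order and separators here are assumptions, not the study’s actual instrument), such a pseudonymous code can be generated consistently from the described attributes:

```python
# Hypothetical sketch: build a repeatable, non-identifying code for a described
# contact from the attributes mentioned in the text. Field names are illustrative.

def contact_code(initials, age, hair_colour, gender, town):
    """Combine attributes into a pseudonymous identifier for network mapping."""
    return "-".join([
        initials.upper(),
        str(age),
        hair_colour.lower(),
        gender.upper()[0],
        town.lower().replace(" ", "_"),
    ])

print(contact_code("JB", 42, "Brown", "Male", "Newport"))  # JB-42-brown-M-newport
```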

With this ‘map’ in hand, together with the personal information collected, it was possible to test some of the assumptions of RDS, such as (1) whether the target population for the survey is fully socially interconnected and (2) whether members of the population are equally likely to participate in the survey.
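
To make this concrete, the sketch below uses the networkx library on an invented miniature network to show how each assumption could be checked: assumption (1) by testing whether the mapped network forms a single connected component, and assumption (2) by comparing a simple measure of network position (degree) between people who did and did not take part. All node codes, attributes and edges are hypothetical.

```python
# Hypothetical sketch: checking two RDS assumptions on a mapped network.
import networkx as nx

G = nx.Graph()
# Nodes are pseudonymous codes; 'participated' marks whether the person
# actually came forward for the survey.
G.add_nodes_from([
    ("JB-42-brown-M-newport", {"participated": True}),
    ("KL-35-blonde-F-ryde",   {"participated": True}),
    ("TR-29-black-M-cowes",   {"participated": False}),
    ("AM-51-grey-F-sandown",  {"participated": False}),
])
G.add_edges_from([
    ("JB-42-brown-M-newport", "KL-35-blonde-F-ryde"),
    ("KL-35-blonde-F-ryde",   "TR-29-black-M-cowes"),
    ("TR-29-black-M-cowes",   "AM-51-grey-F-sandown"),
])

# Assumption 1: the target population forms a single connected component.
print("Connected:", nx.is_connected(G))
print("Components:", nx.number_connected_components(G))

# Assumption 2: participation should not depend on network position;
# here we simply compare the average degree of participants and others.
def mean_degree(nodes):
    return sum(G.degree(n) for n in nodes) / len(nodes)

participants = [n for n, d in G.nodes(data=True) if d["participated"]]
others = [n for n, d in G.nodes(data=True) if not d["participated"]]
print("Mean degree (participants):", mean_degree(participants))
print("Mean degree (non-participants):", mean_degree(others))
```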

Read the full article in the IJSRM here.

The researchers would like to thank the individuals who came forward and took part in this study, the community pharmacists who provided research venues in towns and villages across the Island, and the study funders (NIHR CLAHRC Wessex).


Analysing complexity: Developing a modified phenomenological hermeneutical method of data analysis for multiple contexts

By Debra Morgan

In recent years, qualitative data analysis has been criticised for lacking credibility when the reporting of how findings are reached is vague. In response, a growing body of literature has emphasised the need to detail methods of qualitative data analysis. As a seasoned academic in nurse education, comfortable with the concept of evidence-based practice, I was concerned as a PhD researcher (exploring student nurse experiences of learning whilst studying abroad) to select a sound approach to data analysis that would ensure transparency at each stage of the analysis process, so avoiding this criticism being levelled against my research.

The analytical journey began well, with the selection of the ‘phenomenological hermeneutical method for interpreting interview texts’ (Lindseth and Norberg, 2004). This method appeared ideally suited to my research methodology (hermeneutic phenomenology) and the process is well described by the authors, so offering the level of transparency desired. Briefly, this analysis method comprises three stages: naïve reading, in which the text is read many times in an open-minded manner so that a first ‘naïve understanding’ is arrived at; structural analysis, which commences with the identification of ‘meaning units’ from the text that are condensed until sub-themes and themes emerge; and comprehensive understanding, in which themes are further considered in relation to the research question and wider texts and a comprehensive understanding emerges (Lindseth and Norberg, 2004).

Analysis of each individual research participant’s data progressed well following these stages. However, I found it difficult to understand how I could then combine individual data to develop core phenomenon themes and yet not lose the individual student voice and learning experiences which were embedded in diverse contexts. This was concerning to me as my research comprised gathering student experiences of their learning in multiple contexts. For example, student experiences ranged from short placements in low and middle income countries in Africa and Asia, to longer placements in high income countries in Europe. Whilst the phenomenon of learning may exist in each diverse context, each experience is unique; therefore, illuminating and preserving experiences in context was important to me. I was concerned to ensure that each individual ‘lived experience’, within each of these different contexts, was explored so that an understanding of learning in each type of placement could be revealed prior to developing a comprehensive understanding of the phenomenon more generically.

Whilst Lindseth and Norberg suggest reflecting on the emergent themes in relation to the context of the research (such as different study abroad types) at the final ‘comprehensive understanding’ stage, I felt it was important to preserve experiences specific to each study abroad type throughout each stage of data analysis so that they were not ‘lost’ during this process. To capture such contextual elements, Bazeley (2009) recommends describing, comparing and relating the characteristics or situation of the participants during analysis. I therefore incorporated these aspects into Lindseth and Norberg’s approach so that the varied study abroad types could be explored individually before being combined with the other types. In order to achieve this, I developed a modified approach to analysis. In particular, I further differentiated at the stage of structural analysis. Accordingly, I introduced two sub-stages: ‘individual structural analysis’ and an additional ‘combined structural analysis’. This development represents a refinement in relation to moving from the individual participant experience (which I have termed ‘the individual horizonal perspective’ or ‘individual horizon’) to combined experiences of the phenomenon (respectively termed ‘the combined horizonal perspective’ or ‘combined horizon’).

This modified qualitative data analysis method ensures that the subjective origins of student, or research participant, experience and contexts remain identifiable throughout each stage of the data analysis process. Consequently, I feel this modification to an existing qualitative data analysis method permits greater transparency when dealing with data that relates to multiple contexts. I have called this approach the ‘Modified Phenomenological Hermeneutical Method of Data Analysis For Multiple Contexts’ and I have also developed a visual model to further illuminate this modified approach.

The ‘modified phenomenological hermeneutical method of data analysis for multiple contexts’ holds utility, and whilst my research is focused upon student nurse education, it is transferable to other subject and research areas that involve multiple research contexts. To this effect, I have shared a reflexive review, including data extracts, of the development and application of this approach in my article ‘Analysing complexity: Developing a modified phenomenological hermeneutical method of data analysis for multiple contexts’, published in The International Journal of Social Research Methodology. This paper also presents the developed visual model. Additionally, an exploration of the theoretical basis underpinning this data analysis method and its modification is provided, so adding to the expanding body of evidence and reflecting the ethos of transparency in qualitative data analysis.

Read the full IJSRM article here.

References:

Bazeley, P. (2009). Analysing qualitative data: More than ‘identifying themes’. Malaysian Journal of Qualitative Research, 2, 6-22.

Lindseth, A. & Norberg, A. (2004). A phenomenological hermeneutical method for researching lived experience. Scandinavian Journal of Caring Sciences, 18(2), 145-153.


Decentering methodological nationalism to study precarious legal status trajectories

By Patricia Landolt, Luin Goldring & Paul Pritchard

Our paper tackles the knowledge gap between people’s experiences of immigration status and quantitative research on the matter, and proposes a survey design solution to this gap. We consider how this knowledge gap is produced and perpetuated in Canada, a traditional country of permanent immigration in which temporary migration has become a core feature of immigration management. We showcase the most important survey and administrative data sources used in Canada to study the relationship between immigration and social inequality. We identify gaps and omissions in these data sources and show why the loyal application of state legal status categories to the empirical study of the relationship between changes in immigration status and social inequality is flawed. In the second half of the paper, we present the design lessons learned by the Citizenship and Employment Precarity project. We discuss our community consultation process, the survey questions we developed to capture respondents’ precarious legal status trajectories, and some of the limitations of our approach.

Research shows that precarious legal status situations are proliferating globally and have profound long-term impacts on migrant life chances and social inequality. Precarious legal status is characterized by a lack of permanent authorized residence, partial and temporary authorized access to social citizenship and employment, dependence on a third party for authorized presence or employment, and deportability (Goldring et al., 2009). Across the world, migrants move between immigration status categories, often in unpredictable ways that do not fit easily within state categories or the design of international migration management or state policy. Migrant precarious legal status trajectories crisscross jurisdictions and programs. There is movement between authorized and unauthorized legal status situations, failed transitions, denied and repeat applications. Yet the bulk of quantitative research on immigration status is organized around state legal status categories and programs. As a result, analysis cannot take into account potentially life altering moments in a person’s precarious legal status trajectory that fly under or around the radar of the state.

The research design process is a crucial moment with the potential to generate methodological autonomy from state classification and counting systems. In our work, the community social service agency-centred knowledge network generated conceptual and analytical autonomy from state practice. Active consultation with stakeholders to incorporate their insights and priorities into the survey design was essential throughout the research process.

The target population for the survey included employed, working-age residents of the Greater Toronto Area who had entered Canada with precarious legal status. Respondents’ current legal status was not a sampling criterion, which means that the sample ranges from illegalized, unauthorized migrants to permanent residents and naturalized citizens. In addition to questions on precarious legal status trajectories, the survey includes demographic data and modules on pre-arrival planning and financing of migration, education pre- and post-arrival, early settlement and work, current work, income and financial security, and a self-rated health score and questions about access to healthcare. The twenty-minute, self-administered online survey includes a mixture of closed-response and multiple-choice items and a limited number of open-response items.

The survey has three sets of questions that together capture a respondent’s precarious legal status trajectory in Canada. Respondents are asked for their immigration status at entry, the first time they entered Canada. Current status is captured with three questions. One establishes the respondent’s immigration status category. Another asks if and when the respondent transitioned to permanent residence. A third asks if they currently have a valid work permit, with a probe for work permit type (open or closed). Work permit type is associated with varying degrees of labour market access, rights and entitlements. The three-part question is a check on the potential for mismatches between authorization to be present and authorization to work that are built into the immigration system.

A separate set of questions captures movement within and between legal status categories, including successful and failed attempts to change status. The respondent is asked to identify all immigration statuses and work permits ever held, including any deportation order, and the total number of attempts to gain permanent resident status via humanitarian applications. They are also asked if they ever left and returned to Canada under a different or renewed immigration status. The survey also asks about the costs incurred by the family unit to finance migration and efforts to regularize their legal status.
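
As an illustration of how responses to these question sets might be stored for analysis, the sketch below defines a simple record per respondent. The field names, types and example categories are hypothetical and are not the Citizenship and Employment Precarity project’s actual variables or answer options.

```python
# Hypothetical schema for a respondent's precarious legal status trajectory,
# mirroring the three sets of questions described above. Illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LegalStatusTrajectory:
    entry_status: str                     # status the first time they entered Canada
    current_status: str                   # current immigration status category
    year_became_permanent: Optional[int]  # None if never transitioned to PR
    has_valid_work_permit: bool
    work_permit_type: Optional[str]       # "open", "closed", or None
    statuses_ever_held: List[str] = field(default_factory=list)
    deportation_order: bool = False
    humanitarian_applications: int = 0    # attempts at PR via humanitarian routes
    left_and_returned: bool = False       # re-entered under a different/renewed status

# Example record for one (invented) respondent.
respondent = LegalStatusTrajectory(
    entry_status="visitor",
    current_status="temporary resident",
    year_became_permanent=None,
    has_valid_work_permit=True,
    work_permit_type="closed",
    statuses_ever_held=["visitor", "student", "temporary worker"],
    humanitarian_applications=1,
)
print(respondent)
```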

The survey has limitations that truncate the temporal and directional complexity of precarious legal status trajectories. These limitations result from balancing the practical requirement of manageable survey length with comprehensiveness. The survey does not capture sequential data on legal status trajectories, only which statuses a respondent applied for and held. It is also not possible to establish whether the total period without status or a work permit was continuous or discontinuous. We also note that survey pilot testing confirmed the inherent difficulties of collecting sequential data given lacunae in respondent recall and gaps in knowledge, particularly when applications are handled by a third party such as an immigration lawyer, consultant or family member. A second limitation involves legal status trajectories outside of Canada, before first arriving in the country or between arrival and the survey date. The survey does not address precarious transit zones.

You can read the full article in IJSRM here.


Remote qualitative data collection: Lessons from a multi-country qualitative evaluation

By Mehjabeen Jagmag *

Like most researchers who had planned to begin their research projects earlier this year, our research team found our data collection plans upended by the pandemic. We had designed our research guides, received ethical clearance and completed training our research teams for a multi-country endline evaluation of an education programme in Ghana, Kenya and Nigeria well before we heard of the COVID-19 pandemic.

A few days before our teams travelled to their respective data collection sites, phone calls started pouring in – schools were closed indefinitely, travel between cities was restricted, and we were beginning to understand how much the COVID-19 pandemic would change our lives. After a few weeks of waiting and watching, it became apparent that we could not continue in-person data collection.

We revised our research guides and prepared ourselves for conducting remote phone-interviews with our research participants. Given that this was the third and last round of data collection in our multi-year panel research, we had previously collected phone numbers of our research participants and acquired permission to be able to contact them on the phone for further research. We set up remote research desks for the team and began preparation for data collection.

What we were unsure about was whether our research plans would be successful. Accounts of fraudulent callers promising medical remedies and peddling fake health insurance packages had made people wary of responding to unknown phone numbers. We were not sure how many of the phone numbers we had collected in the previous year would still be working, and most importantly, we were not sure how our research participants were faring under the lockdown and whether they would want to speak with us. Finally, our research participants included primary school students, who were an essential voice in our study. We were keen to conduct interviews but were not sure if this would be feasible – would parents trust us enough to speak to us and consent to their children speaking to us? Once we secured consent from parents, would children provide assent? As trust was the key element to completing our research successfully, we devised a data collection plan that included the following elements, which are likely to be necessary for future remote data collection.

Training and retraining for remote data collection

We spent time discussing as a team what the potential challenges might be and how we planned to respond to them. We drew up a collective list of answers that we could draw on to communicate clearly and effectively about the evaluation, respond to any queries and alleviate any concerns that our participants had. This list and knowledge grew as we collected data, and debrief meetings with the teams at the end of each data collection session helped ensure this was a live document.

Seek feedback from key informants

We contacted community leaders and headteachers to enquire about how we should approach data collection with school and community participants. They provided important contextual information that was occasionally specific to each community. We used this information to improve our introductory messages, the time and dates we called and how we approached research participants.

Seek introductions from trusted leaders

We also asked community leaders and headteachers to support our recruitment process by sending messages to the participants about our research before it began. Doing so helped minimise any uncertainty about the veracity of our calls. Where relevant, we compensated them for their airtime.

Give participants time to prepare for the interview

We shared information about our organisation and the research objective over text messages or calls, which gave research participants enough time to decide whether they wanted to participate. It also helped them plan to speak at a time that would suit them best for a discussion, and to consult with their family and children about whether they wanted to participate in the research.

Ensure continuity of research teams

As this was an endline evaluation, we had research team members who participated in previous rounds of data collection calling the participants they were likely to have visited in the past. Where this was possible, it increased trust and facilitated easy conversations.

Prepare case-history notes

We prepared short case history notes for each school about the programme and what we had learned from previous research rounds, to build confidence that our intentions and credentials were genuine. These notes helped remind research participants of our last conversation and helped us focus on what had changed since then, which in turn kept conversations short; in general they proved to be a useful conversation starter.

Save time at the beginning and end for questions

We ensured research participants had enough time to ask us about the programme and our motivations, to go over the consent form, to understand why we wanted to speak with the children, and for children to ask parents for their permission before we began our interviews. To ensure that the conversation did not feel rushed, we designed shorter research guides.

Plan for breaks or changes when interviewing with young participants

When speaking with students, we anticipated breaks and distractions during the call, which helped maintain a relaxed pace during the interview. If students were uncomfortable with phone interviews, we eased the conversation to a close to minimise any distress caused to the participant.

Summary and Conclusion

We completed data collection in all three countries, albeit with a less ambitious research plan than we had originally intended for an in-person study. The key objective of our research was to collect the optimal amount of data to inform the programme evaluation while making the interview process convenient and comfortable for the research participants involved. To do so, we have learned, it is vital for participants to have confidence in the researchers and the motive for collecting data. Planning before we began data collection and updating our knowledge as the research progressed proved invaluable.

* Mehjabeen Jagmag is a Senior Consultant with Oxford Policy Management.


Critical reflections on the ‘new normal’: Synchronous teaching of CAQDAS-packages online during COVID-19

By Christina Silver, Sarah L. Bulloch, & Michelle Salmona

Our contribution discusses synchronous online teaching of digital tools for qualitative and mixed-methods analysis, known as Computer Assisted Qualitative Data AnalysiS (CAQDAS) packages, during the COVID-19 pandemic. Teachers must take responsibility for, and be sensitive to, the current additional challenges and pressures upon learners and attend to them effectively. Learners are never homogenous but, in these contexts, their heterogeneity and personal situations bring our responsibilities as teachers into sharper focus.

Challenges of teaching CAQDAS-packages

Teaching CAQDAS-packages is challenging as research methods and technology are taught together, and researchers often need support overcoming hurdles associated with integrating technology into research practice. Although it can support critical reflection on methods-driven research, novice researchers have trouble connecting method and software (Salmona & Kaczynski, 2016; Schmieder, 2020).

Traditionally CAQDAS is taught in-person but even before COVID-19, there was a gradual move to online courses, which can be cost-effective and reach wider groups. However, teaching CAQDAS online has its own challenges, including possible technical problems, catering to different learning styles, and interactional issues (Kalpokaite & Radivojevic, 2020). Learning CAQDAS-packages online also heightens challenges in overcoming barriers to successful technological adoption due to the lack of support normally present in-person (Salmona & Kaczynski, 2016). Teaching CAQDAS-packages online during COVID-19 poses additional challenges related to learner availability, real-life bleeding into the classroom, and resultant interactional issues. 

Learner availability in the COVID-context 

Pre-COVID-19, both in-person and online, certain assumptions were often made concerning the ‘availability’ of learners: 

  • They would be present for the duration, unless specific exceptions were brokered; e.g. warning they may have to take a call or leave early.
  • Only registered learners would be present – not family-members, carers, or dependents as well. 
  • Learners would be in a state of mental and physical health suited to learning.

Teachers could generally assume they were engaging not with whole individuals, but with focused “learners”: the mentally present and mentally well, the physically present and physically well, the not-distracted, the captive from start to finish, solo individuals.

Real-life bleeding into the classroom

During COVID-19 these assumptions no longer hold true. We cannot expect learners to focus for the whole allotted time because they cannot necessarily physically or emotionally remove themselves from their home-life contexts. New distractions and stresses include interruptions from household members, reduced capacity to concentrate during lengthy periods of screen-time, and mental-health issues associated with being more isolated. However, because in-person interactions have largely vanished, learners are keen to participate in online sessions despite the distractions and stresses. Online sessions also provide learning opportunities for those previously unable to access in-person events.

As we teach and learn from our homes, real-lives bleed into the classroom. Sharing our images via video-stream allows others into our lives, which is potentially risky. We’ve found more learners choose not to share their video-stream than do, especially in larger groups and when they don’t know each other. 

What we miss by not ‘seeing’

Those used to teaching in-person can find this tricky, as the non-verbal cues used to judge learners’ progress are absent. CAQDAS teachers can no longer ‘walk-the-room’ observing learners’ computer-screens to identify those needing additional support. Screen-sharing can be a solution; but is more time-consuming and ethically difficult when working with confidential data, and impossible if using two devices (one to access the meeting, the other to operate the CAQDAS-package). We miss a lot by not seeing in these ways.  

One risk is that those who can actively participate inadvertently soak-up attention at the cost of those who cannot. It’s our responsibility as teachers to be aware of this and design creative solutions to enable every learner to participate as much as they are willing and able, whilst still benefiting from the session.

Adjusting tactics for the ‘new normal’

We’re therefore continually adjusting how we teach CAQDAS-packages online during COVID-19. Current uncertainties land responsibilities on us as teachers, not on our course participants: we must find out what they need, reflect on our practice, and refine our pedagogies. 

Moving from in-person to online always requires a redesign (Silver & Bulloch, 2020), but during COVID-19 we are also:

  • Educating ourselves about accessibility to ensure we sensitively and effectively open our events to every type of learner
  • Engaging learners more before sessions to understand personal/research needs and provide pre-course familiarisation materials
  • Reducing class-sizes. It’s often assumed class-sizes can be larger online, but we find the opposite, especially during COVID-19. Although we’ve recently experienced pressure to increase group size, we’re resistant because of the increased need to balance the requirements of every learner, and provide individual support 
  • Co-teaching, so that we can provide additional support in synchronous online events during COVID-19. Learners can be split according to their needs and two groups supported simultaneously
  • Providing more post-course resources to support learners’ continued use of CAQDAS-packages and hosting platforms for them to communicate with one another afterwards (e.g. VLE platforms)
  • Diversifying teaching tactics to provide as many opportunities as possible for learners to engage and participate. Awareness of different ways people learn has always been central to our pedagogies (Silver & Woolf 2015), but our sensitivities and reflections have increased. We’ve found mixing up tactics (see image) in shorter sessions more effective.

Where do we go from here?

Teachers continually critique and reflect on practice, but COVID-19 requires a re-evaluation of learners’ differences and reflection about their more challenging situations. We are all learning and must continue to do so.

COVID-19 brings ethical issues even more to the forefront, including the appropriateness of requiring or encouraging learners to share their image via video. We must think about disabilities, access to technology, and socio-economic issues in a context where learning is only available online. Positives have also emerged, as sessions can be followed from a range of devices and locations.

COVID-19 forces us to explicitly consider the well-being of learners. Despite coming at this difficult time, we welcome this focus. All our situations have changed, so we need to think about the issues differently. What are the additional ethical issues we must now address? How do we keep this conversation going?

About the authors

Together we have 50+ years’ experience teaching CAQDAS-packages and 30+ years’ experience teaching online. Dr Michelle Salmona is President of the Institute for Mixed Methods Research and an international consultant in program evaluation, research design, and mixed-methods and qualitative data analysis using data applications. Michelle is also an Adjunct Professor at the University of Canberra, Australia, specializing in qualitative and mixed methods research. Dr Sarah L Bulloch is a social researcher passionate about methods, with expertise in qualitative and quantitative analysis, as well as mixing the two. She has worked in the academic, government, voluntary and private sectors. Sarah teaches introductory and advanced workshops in several CAQDAS packages as a Teaching Fellow for the CAQDAS Networking Project at the University of Surrey, as well as teaching quantitative analysis using SPSS. Dr Christina Silver is Director of Qualitative Data Analysis Services, providing training and consultancy for qualitative and mixed-methods analysis. She also manages the CAQDAS Networking Project (CNP), leading its capacity-building activities. She has trained thousands of researchers in the powerful use of CAQDAS-packages, including NVivo, and developed the Five-Level QDA® method with Nick Woolf.

References

  • Kalpokaite, N. & Radivojevic, I. (2020). Teaching qualitative data analysis software online: A comparison of face-to-face and e-learning ATLAS.ti courses. International Journal of Research & Method in Education, 43(3), 296-310. DOI: 10.1080/1743727X.2019.1687666.
  • Salmona, M. & Kaczynski, D. (2016). Don’t blame the software: Using qualitative data analysis software successfully in doctoral research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 17(3), Art. 11. http://nbn-resolving.de/urn:nbn:de:0114-fqs1603117.
  • Schmieder, C. (2020). Qualitative data analysis software as a tool for teaching analytic practice: Towards a theoretical framework for integrating QDAS into methods pedagogy. Qualitative Research, 20(5), 684-702.
  • Silver, C. & Woolf, N. (2015). From guided instruction to facilitation of learning: The development of Five-Level QDA as a CAQDAS pedagogy that explicates the practices of expert users. International Journal of Social Research Methodology, 18(5), 527-543.
  • Silver, C. & Bulloch, S.L. (2020). Teaching NVivo using the Five-Level QDA® method: Adaptations for synchronous online learning. Paper presented at the QSR International Virtual Conference, Qualitative Research in a Changing World, September 24th 2020.