
Analysing complexity: Developing a modified phenomenological hermeneutical method of data analysis for multiple contexts

By Debra Morgan

Qualitative data analysis has been criticised in recent years for a lack of credibility where the reporting of how findings are attained is vague. In response, a growing body of literature emphasises the need to detail methods of qualitative data analysis. As a seasoned academic in nurse education, comfortable with the concept of evidence-based practice, I was concerned as a PhD researcher (exploring student nurse experiences of learning whilst studying abroad) to select a sound approach to data analysis that would offer transparency at each stage of the analysis process, so preventing this criticism from being levelled against my research.

The analytical journey began well, with the selection of the ‘phenomenological hermeneutical method for interpreting interview texts’ (Lindseth and Norberg, 2004). This method appeared ideally suited to my research methodology (hermeneutic phenomenology), and the process is well described by the authors, so offering the level of transparency desired. Briefly, this analysis method comprises three stages: naïve reading, in which the text is read many times in an open-minded manner so that a first ‘naïve understanding’ is arrived at; structural analysis, which commences with the identification of ‘meaning units’ from the text, which are then condensed so that sub-themes and themes emerge; and comprehensive understanding, in which themes are further considered in relation to the research question and wider texts until a comprehensive understanding emerges (Lindseth and Norberg, 2004).

Analysis of each individual research participant’s data progressed well following these stages. However, I found it difficult to see how I could then combine individual data to develop core phenomenon themes without losing the individual student voice and learning experiences, which were embedded in diverse contexts. This mattered because my research gathered student experiences of learning in multiple contexts: experiences ranged from short placements in low- and middle-income countries in Africa and Asia to longer placements in high-income countries in Europe. Whilst the phenomenon of learning may exist in each diverse context, each experience is unique, so illuminating and preserving experiences in context was important to me. I wanted to ensure that each individual ‘lived experience’, within each of these different contexts, was explored so that an understanding of learning in each type of placement could be revealed before developing a comprehensive understanding of the phenomenon more generically.

Whilst Lindseth and Norberg suggest reflecting on the emergent themes in relation to the context of the research (such as different study abroad types) at the final ‘comprehensive understanding’ stage, I felt it was important to preserve experiences specific to each study abroad type throughout each stage of data analysis so that they were not ‘lost’ along the way. To capture such contextual elements, Bazeley (2009) recommends describing, comparing and relating the characteristics or situation of the participants during analysis. I therefore incorporated these aspects into Lindseth and Norberg’s approach so that the varied study abroad types could be explored individually before being combined with the other types. To achieve this, I developed a modified approach to analysis, differentiating further at the stage of structural analysis by introducing two sub-stages: an ‘individual structural analysis’ and an additional ‘combined structural analysis’. This development represents a refinement in moving from the individual participant experience (which I have termed ‘the individual horizonal perspective’ or ‘individual horizon’) to combined experiences of the phenomenon (respectively termed ‘the combined horizonal perspective’ or ‘combined horizon’), as the sketch below illustrates.
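
To make this flow concrete, here is a minimal Python sketch of the bookkeeping the modified method implies. It is our illustration only, not the author’s actual procedure (the analysis itself is interpretive, not automated), and the participants, contexts and theme labels are invented: condensed meaning units are first grouped by study abroad type (individual structural analysis), and only then merged into cross-context themes, carrying their context labels with them so that no individual voice is lost.

```python
from collections import defaultdict

# Invented condensed meaning units, one per participant, each tagged
# with the study-abroad context (placement type) it arose from.
meaning_units = [
    {"participant": "P1", "context": "short placement, LMIC", "sub_theme": "adapting to scarce resources"},
    {"participant": "P2", "context": "short placement, LMIC", "sub_theme": "adapting to scarce resources"},
    {"participant": "P3", "context": "long placement, Europe", "sub_theme": "adapting to scarce resources"},
    {"participant": "P3", "context": "long placement, Europe", "sub_theme": "comparing health systems"},
]

# Individual structural analysis: explore each context on its own terms first.
individual_horizons = defaultdict(list)
for unit in meaning_units:
    individual_horizons[unit["context"]].append(unit)

# Combined structural analysis: merge contexts into shared themes,
# retaining the (participant, context) provenance of every unit.
combined_horizon = defaultdict(list)
for context, units in individual_horizons.items():
    for unit in units:
        combined_horizon[unit["sub_theme"]].append((unit["participant"], context))

for theme, sources in combined_horizon.items():
    print(theme, "->", sources)
```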

This modified qualitative data analysis method ensures that the subjective origins of each student’s, or research participant’s, experience and its context remain identifiable throughout each stage of the data analysis process. Consequently, I feel this modification to an existing qualitative data analysis method permits greater transparency when dealing with data that relate to multiple contexts. I have called this approach the ‘Modified Phenomenological Hermeneutical Method of Data Analysis for Multiple Contexts’ and I have also developed a visual model to further illuminate this modified approach.

The ‘modified phenomenological hermeneutical method of data analysis for multiple contexts’ holds utility beyond my own research: whilst my research focuses upon student nurse education, the method is transferable to other subject and research areas that involve multiple research contexts. To this effect, I have shared a reflexive review, including data extracts, of the development and application of this approach in my article ‘Analysing complexity: Developing a modified phenomenological hermeneutical method of data analysis for multiple contexts’, published in The International Journal of Social Research Methodology. This paper also presents the visual model. Additionally, an exploration of the underpinning theoretical basis of this data analysis method and its modification is provided, so adding to the expanding body of evidence and reflecting the ethos of transparency in qualitative data analysis.

Read the full IJSRM article here.

References:

Bazeley, P. (2009). Analysing qualitative data: More than ‘identifying themes’. Malaysian Journal of Qualitative Research, 2, 6-22.

Lindseth, A. & Norberg, A. (2004). A phenomenological hermeneutical method for researching lived experience. Scandinavian Journal of Caring Sciences, 18(2), 145-153.


Decentering methodological nationalism to study precarious legal status trajectories

By Patricia Landolt, Luin Goldring & Paul Pritchard

Our paper tackles the knowledge gap between people’s experiences of immigration status and quantitative research on the matter, and proposes a survey design solution. We consider how this knowledge gap is produced and perpetuated in Canada, a traditional country of permanent immigration in which temporary migration has become a core feature of immigration management. We showcase the most important survey and administrative data sources used in Canada to study the relationship between immigration and social inequality. We capture gaps and omissions in data sources and show why the loyal application of state legal status categories to the empirical study of the relationship between changes in immigration status and social inequality is flawed. In the second half of the paper, we present the design lessons learned by the Citizenship and Employment Precarity project. We discuss our community consultation process and the survey questions we developed to capture respondents’ precarious legal status trajectories, and some of the limitations of our approach.

Research shows that precarious legal status situations are proliferating globally and have profound long-term impacts on migrant life chances and social inequality. Precarious legal status is characterized by a lack of permanent authorized residence, partial and temporary authorized access to social citizenship and employment, dependence on a third party for authorized presence or employment, and deportability (Goldring et al., 2009). Across the world, migrants move between immigration status categories, often in unpredictable ways that do not fit easily within state categories or the design of international migration management or state policy. Migrant precarious legal status trajectories crisscross jurisdictions and programs. There is movement between authorized and unauthorized legal status situations, as well as failed transitions and denied and repeat applications. Yet the bulk of quantitative research on immigration status is organized around state legal status categories and programs. As a result, analysis cannot take into account potentially life-altering moments in a person’s precarious legal status trajectory that fly under or around the radar of the state.

The research design process is a crucial moment with the potential to generate methodological autonomy from state classification and counting systems. In our work, the community social service agency-centred knowledge network generated conceptual and analytical autonomy from state practice. Active consultation with stakeholders to incorporate their insights and priorities into the survey design was essential throughout the research process.

The target population for the survey included employed, working-age residents of the Greater Toronto Area who had entered Canada with precarious legal status. Respondents’ current legal status was not a sampling criterion, which means that the sample ranges from illegalized, unauthorized migrants to permanent residents and naturalized citizens. In addition to questions on precarious legal status trajectories, the survey includes demographic data and modules on pre-arrival planning and financing of migration, education pre- and post-arrival, early settlement and work, current work, income and financial security, and a self-rated health score and questions about access to healthcare. The twenty-minute, self-administered online survey includes a mixture of closed-response and multiple-choice items and a limited number of open-response items.

The survey has three sets of questions that together capture a respondent’s precarious legal status trajectory in Canada. Respondents are asked for their immigration status at entrance, that is, the first time they entered Canada. Current status is captured with three questions. One establishes the respondent’s immigration status category. Another asks if and when the respondent transitioned to permanent residence. Third, the respondent is asked if they currently have a valid work permit, with a probe for work permit type (open or closed). Work permit type is associated with varying degrees of labour market access, rights and entitlements. The three-part question is a check on the potential for mismatches between authorization to be present and authorization to work that are built into the immigration system.

A separate set of questions captures movement within and between legal status categories, including successful and failed attempts to change status. The respondent is asked to identify all immigration statuses and work permits ever held, including any deportation order, and to report the total number of attempts to gain permanent resident status via humanitarian applications. They are also asked if they ever left and returned to Canada under a different or renewed immigration status. The survey also asks for costs incurred by a family unit to finance migration and efforts to regularize their legal status.
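
As a rough illustration of the kind of record these questions yield, here is a minimal Python sketch. The field names and example values are ours, not the project’s actual survey variables.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StatusTrajectory:
    # Field names are illustrative, not the survey's actual variable names.
    status_at_entry: str                      # status the first time the respondent entered Canada
    current_status: str                       # current immigration status category
    pr_transition_year: Optional[int] = None  # if/when they transitioned to permanent residence
    valid_work_permit: bool = False           # do they currently hold a valid work permit?
    work_permit_type: Optional[str] = None    # probe: "open" or "closed"
    statuses_ever_held: List[str] = field(default_factory=list)
    deportation_order: bool = False           # ever subject to a deportation order
    humanitarian_pr_attempts: int = 0         # attempts at permanent residence via humanitarian applications
    left_and_reentered: bool = False          # left and returned under a different or renewed status

respondent = StatusTrajectory(
    status_at_entry="temporary foreign worker",
    current_status="temporary resident",
    valid_work_permit=False,
    statuses_ever_held=["temporary foreign worker", "visitor record"],
)

# The three-part current-status question allows a consistency check:
# a respondent can be authorized to be present yet not authorized to work.
present_but_cannot_work = (respondent.current_status != "permanent resident"
                           and not respondent.valid_work_permit)
print(present_but_cannot_work)
```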

The survey has limitations that truncate the temporal and directional complexity of precarious legal status trajectories. These limitations result from balancing the practical requirement of a manageable survey length with comprehensiveness. The survey does not capture sequential data on legal status trajectories, only which statuses a respondent applied for and held. It is also not possible to establish whether the total period without status or a work permit was continuous or discontinuous. Survey pilot testing confirmed the inherent difficulties of collecting sequential data, given lacunae in respondent recall and gaps in knowledge, particularly when applications are handled by a third party such as an immigration lawyer, consultant or family member. A second limitation involves legal status trajectories outside of Canada, before first arriving in the country, or between arrival and the survey date: the survey does not address precarious transit zones.

You can read the full article in IJSRM here.


Remote qualitative data collection: Lessons from a multi-country qualitative evaluation

By Mehjabeen Jagmag *

Like most researchers who had planned to begin their research projects earlier this year, our research team found our data collection plans upended by the pandemic. We had designed our research guides, received ethical clearance and finished training our research teams for a multi-country endline evaluation of an education programme in Ghana, Kenya and Nigeria well before we heard of the COVID-19 pandemic.

A few days before our teams travelled to their respective data collection sites, phone calls started pouring in – schools were closed indefinitely, travel between cities was restricted, and we were beginning to understand how much the COVID-19 pandemic would change our lives. After a few weeks of waiting and watching, it became apparent that we could not continue in-person data collection.

We revised our research guides and prepared ourselves to conduct remote phone interviews with our research participants. Given that this was the third and last round of data collection in our multi-year panel research, we had previously collected the phone numbers of our research participants and acquired permission to contact them by phone for further research. We set up remote research desks for the team and began preparing for data collection.

What we were unsure about was whether our research plans would succeed. Accounts of fraudulent callers promising medical remedies and peddling fake health insurance packages had made people wary of responding to unknown phone numbers. We were not sure how many of the phone numbers we had collected in the previous year would still be working, and most importantly, we were not sure how our research participants were faring under the lockdown and whether they would want to speak with us. Finally, our research participants included primary school students, who were an essential voice in our study. We were keen to conduct interviews but were not sure if this would be feasible: would parents trust us enough to speak to us and consent to their children speaking to us? Once we secured consent from parents, would children provide assent? As trust was the key element in completing our research successfully, we devised a data collection plan that included the following elements, which are likely to be necessary for future remote data collection.

Training and retraining for remote data collection

We spent time discussing as a team what the potential challenges might be and how we would respond to them. We drew up a collective list of answers that we could draw on to communicate clearly and effectively about the evaluation, respond to any queries and alleviate any concerns that our participants had. This list and knowledge grew as we collected data, and debrief meetings with the teams at the end of each day helped ensure this was a live document.

Seek feedback from key informants

We contacted community leaders and headteachers to enquire about how we should approach data collection with school and community participants. They provided important contextual information that was occasionally specific to each community. We used this information to improve our introductory messages, the time and dates we called and how we approached research participants.

Seek introductions from trusted leaders

We also asked community leaders and headteachers to support our recruitment process by sending messages to the participants about our research before it began. Doing so helped minimise any uncertainty about the veracity of our calls. Where relevant, we compensated them for their airtime.

Give participants time to prepare for the interview

We shared information about our organisation and the research objective over text messages or calls, which gave research participants enough time to decide whether they wanted to participate. It also helped them plan a time that would suit them best for a discussion, and to consult with their family and children about whether they wanted to participate in the research.

Ensure continuity of research teams

As this was an endline evaluation, we had research team members who participated in previous rounds of data collection calling the participants they were likely to have visited in the past. Where this was possible, it increased trust and facilitated easy conversations.

Prepare case-history notes

We prepared short case history notes for each school about the programme and what we had learned from previous research rounds, to build confidence that our intentions and credentials were genuine. These notes helped remind research participants of our last conversation and helped us focus on what had changed since then, which in turn kept conversations short and in general proved to be a useful conversation starter.

Save time at the beginning and end for questions

We ensured research participants had enough time to ask us about the programme and our motivations, to go over the consent form, to understand why we wanted to speak with the children, and for children to ask parents for their permission before we began our interviews. To ensure that the conversation did not feel rushed, we designed shorter research guides.

Plan for breaks or changes when interviewing with young participants

When speaking with students, we anticipated breaks and distractions during the call, which helped maintain a relaxed pace during the interview. If students were uncomfortable with phone interviews, we eased the conversation to a close to minimise any distress caused to the participant.

Summary and Conclusion

We completed data collection in all three countries, albeit with a less ambitious research plan than we had originally intended for an in-person study. The key objective of our research was to collect the optimal amount of data to inform the programme evaluation while making the interview process convenient and comfortable for the research participants involved. Along the way, we learned that it is vital for participants to have confidence in the researchers and the motive for collecting data. Planning before we began data collection and updating our knowledge as the research progressed proved invaluable to our experience.

* Mehjabeen Jagmag is a Senior Consultant with Oxford Policy Management.


Teaching online research methods online with asynchronous international distance learning students during Covid-19

By Elizabeth Hidson and Vikki Wynn

Challenges in asynchronous international distance learning pre-Covid

Working on an international distance learning teacher training programme brings multiple challenges, the biggest of which had previously been the asynchronous pattern of teaching and learning for the academic elements. Teaching is based on a systematic instructional design approach adopted by our university and broken down into weekly thematic units to support acquisition, discussion, investigation, collaboration, practice and production to meet learning outcomes. Recorded micro-lectures, learning activities and discussion boards are accessed asynchronously, with face-to-face online group sessions for further consolidation. The assessed teaching practice element of the programme had always been carried out in the host international schools, facilitated by school-based mentors and in-country professional practice tutors.

Developing research-informed practitioners

The importance of developing research capacity in trainee teachers stems from the expectation that they will become research-informed practitioners who can use evidence to inform decision-making (Siddiqui and Wardle, 2020). Being consumers of research is not enough, however: teachers need to also develop the tools to carry out their own research in school settings. The first MA-level module that our trainees encounter requires a case study approach to explore specific interventions that their schools implement to address targeted pupils’ learning needs. Typically, our trainee teachers undertake observations, conduct interviews and collect a range of data in their settings to understand how and why this additional support is provided and discuss it in relation to ‘what works’ in education, using initial sources such as the Education Endowment Foundation and the What Works Clearinghouse portals.

Establishing the heritage of research methods and methodology

Good teaching is good teaching, and it follows therefore that good research practice is still good research practice, irrespective of a global pandemic. Early rapid evidence assessments concluded that teaching quality was more important for remote teaching and learning than how it was delivered (Education Endowment Foundation, 2020), which had also been our starting point when considering our own research methods pedagogy. On our programme, the teaching of research methods starts with key concepts and expectations: conceptualisation, literature, developing research questions, justification of research methods and consideration of ethics, all designed to ensure that the student teacher can apply theory to practice. We start with a formative proposal assignment to ensure early engagement with methodology and methods.

Our face-to-face online group sessions, themed as weekly ‘coffee shop’ meetings, provide a collaborative forum for knowledge exchange and trouble-shooting. Trainee teachers join to listen, to share ideas, to pose questions and problems and the module leaders respond with a dialogic teaching approach, helping to contextualise research methods in school settings and develop knowledge and understanding in a supportive online space.

Elizabeth Hidson promoting the weekly ‘coffee shop’ meeting

The ‘hybrid’ assignment and hybrid research methods

As teaching practice became hybrid for trainee teachers, so did research and assessment. Schooling around the world moved in and out of face-to-face, hybrid and fully online modes over the course of 2019-20, with the realities of the pandemic hitting earliest in the Far East, where half of our students are based. As physical access to schools and participants fluctuated with local restrictions and impacted on students’ research plans, our alternative assignment pathways opened out to include hybrid and hypothetical assignments designed to act as a safety net for completion.

A key feature of the hybrid assignment was the shift to online and alternative research methods, building on the core research methods pedagogy we had established. Where face-to-face interviews were not an option, we promoted video calling and desktop-sharing (Hidson, 2020), while maintaining the spirit of semi-structured or artefact-based interviewing. Where classroom observations were no longer possible, we promoted fieldnotes captured from hybrid or online teaching sessions, urging a re-think of ethics and the collection of additional secondary data in various forms to attempt triangulation.

The outcomes in terms of the final case studies produced have been pleasing: creative and thoughtful academic discussions that responded to the unique challenges of each setting. We regularly quoted Hamilton and Corbett-Whittier (2013) to our trainees, who advise thinking of a case study as a living thing and ensuring that it makes “as much sense to the reader as it did to the researcher” (p.179). The act of thinking in detail about research methods seemed to deepen trainees’ understanding of both research methods and real-world research.

Developing resilient research capability as a factor of resilient teaching

Although our programme continues to respond to the global challenges of Covid-19, we are keen to retain what has worked into the future. Trainee teachers’ ability to embrace the need for resilience in teaching as well as in research is a benefit. Developing their capacity to see research as a live and responsive part of their practice has always been our intention; we believe that the response to research during Covid will itself be a case study for future cohorts.

References

Education Endowment Foundation (2020). Remote Learning, Rapid Evidence Assessment. London: Education Endowment Foundation.

Hamilton, L., and Corbett-Whittier, C. (2013). Using Case Study in Education Research. London: Sage.

Hidson, E. (2020). Internet Video Calling and Desktop Sharing (VCDS) as an Emerging Research Method for Exploring Pedagogical Reasoning in Lesson Planning. Video Journal of Education and Pedagogy, 5(1), 1-14. https://doi.org/10.1163/23644583-00501001.

Siddiqui, N. and Wardle, L. (2020). Can users judge what is ‘promising’ evidence in education? Research Intelligence, 144 (Autumn 2020). London: BERA.


“Who says what” in multiple choice questions: A comprehensive exploratory analysis protocol

By M. Landaluce-Calvo, Ignacio García-Lautre, Vidal Díaz de Rada, & Elena Abascal

The aim of much sociological research is to assess public opinion, and the data are often collected by the survey method. This enables the detection of different response, behaviour or opinion profiles and the characterization of groups of respondents with similar views on a certain topic or set of questions. As is widely known, however, different types of question not only yield different qualities of response, but also require different methods of analysis.

Any attempt to classify survey question types requires consideration of five criteria: 1) degree of freedom in the response; 2) type of content; 3) level of sensitivity/threat; 4) level of measurement; and 5) number of response options per question. The last criterion (the number of responses) first differentiates between single-response and multiple-response questions. Herein lies the main objective of our article in IJSRM: how to extract maximum information from multiple-response questions.

There are two broad types of multiple-response questions. One is the categorical response question, where the respondent is instructed to “check all that apply” (the categories are exhaustive, but not mutually exclusive). The other is the binary response question, where the respondent is required to check yes or no for each response option. Respondents find “check-all-that-apply” questions more difficult to answer because the multiple options require more use of memory. Under the binary-response format the respondent considers the options one by one, checking either yes or no in each case. Because each yes/no pair requires an answer, only a minimal demand is placed on memory. This procedure yields more responses in both telephone and online surveys and requires less effort on the part of the respondent, although it may lengthen the questionnaire.
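
A small sketch may help show the difference between the two formats as data, using a hypothetical three-option question: under check-all-that-apply only selections are recorded, whereas the binary format records an explicit yes/no for every option.

```python
options = ["reading", "sport", "travel"]

# Check-all-that-apply: only the checked options are recorded, so an
# unchecked option is ambiguous (rejected? overlooked? fatigue?).
cata_response = {"reading", "travel"}

# Forced-choice binary: every option receives an explicit yes (1) or no (0),
# so non-selection is a real answer rather than silence.
binary_response = {"reading": 1, "sport": 0, "travel": 1}

# Both formats can be coded to the same presence/absence indicator row,
# which is the starting point of the analysis protocol described below.
indicator_row = {opt: int(opt in cata_response) for opt in options}
print(indicator_row)   # {'reading': 1, 'sport': 0, 'travel': 1}
print(binary_response)
```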

Questions admitting various response options can be further classified into grid or check-all-that-apply questions. In the case of the latter, the categories are exhaustive, but not mutually exclusive. This multiple-response question format is in widespread use both in the field of opinion polling and in sociological and marketing research. International research projects such as the European Social Survey and the World Values Survey, for example, contain large numbers of multiple-response questions.

All the above considerations relate to the stages of data collection and participant opinion retrieval, but what about the analysis? A review of the specialist literature reveals a lack of attention to the specific data-processing treatment, and a failure to use a multidimensional exploratory approach that would enable the maximum amount of information to be extracted from the response options. The analysis is limited mainly to calculating one-dimensional frequencies (the frequency with which a given response occurs over the total number of respondents or total number of responses) or two-dimensional frequencies resulting from crossing the chosen response option with other socio-demographic or socio-economic characteristics, etc.; in other words, a partial approach in either case.

Our article in IJSRM presents a multidimensional analysis protocol that provides the researcher with tools to identify more and better profiles of “who says what”. The underlying philosophy of this approach is to “let the data speak for themselves” and to learn from them. The strategy begins by coding the response options as a set of metric binary variables (presence/absence). The ideal methodological duo for exploring the resulting data is Principal Component Analysis coupled with an Ascending Hierarchical Cluster Analysis, incorporating, in addition, supplementary variables (gender, age, marital status, educational attainment, etc.).
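
As an illustrative sketch of such a protocol (the article does not prescribe particular software; the data here are randomly generated, and scikit-learn and SciPy stand in for whatever tools the authors used):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Presence/absence coding: 200 hypothetical respondents x 17 response options.
X = pd.DataFrame(rng.integers(0, 2, size=(200, 17)),
                 columns=[f"option_{i}" for i in range(1, 18)])

# Supplementary variables: excluded from the components and clusters,
# used only afterwards to characterise the groups.
supp = pd.DataFrame({"gender": rng.choice(["F", "M"], 200),
                     "age_group": rng.choice(["18-34", "35-54", "55+"], 200)})

# Step 1: Principal Component Analysis on the binary indicator matrix.
scores = PCA(n_components=5).fit_transform(X)

# Step 2: Ascending (agglomerative) hierarchical cluster analysis,
# Ward criterion, on the retained component scores; cut into four clusters.
clusters = fcluster(linkage(scores, method="ward"), t=4, criterion="maxclust")

# Step 3: cross the clusters with the supplementary variables
# to profile "who says what".
print(pd.crosstab(clusters, supp["gender"]))
print(pd.crosstab(clusters, supp["age_group"]))
```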

The article applies this protocol to the analysis of three different multiple-response questions included in a Spanish national sociological survey (CIS, Centro de Investigaciones Sociológicas):

  1. “How do you usually spend your free time?”: the respondent has 17 options and can select as many as desired; no order of preference is required and the categories are not mutually exclusive.
  2. “During 2017, how have you spent or do you intend to spend your leisure periods?”: there are 10 options and no limit on the number that can be checked, but two automatically exclude the rest: “I haven’t thought about it yet” and “I have no leisure periods”.
  3. “When deciding how to spend your days off, what are your top three priorities?”: there is a limit of three options, out of 10 possible, with no order of preference required.

This empirical analysis provides evidence not only of the interpretive potential of the coding/analysis protocol, but also of the limitations of some multiple-response question formats. Specifically, it shows that multiple response with limited options is not a suitable format for detecting response patterns or overall tendencies leading to the identification of global respondent profiles. In addition, this study corroborates that in both the “forced choice” and “check-all-that-apply” formats, respondents are more likely to choose the options presented at the beginning of a list (the primacy effect); early theories attribute this phenomenon to such questions requiring deeper cognitive processing.

Read the full article in IJSRM here.