
Decentering methodological nationalism to study precarious legal status trajectories

By Patricia Landolt, Luin Goldring & Paul Pritchard

Our paper tackles the knowledge gap between people’s experiences of immigration status and quantitative research on the matter, and proposes a survey design solution for that gap. We consider how this knowledge gap is produced and perpetuated in Canada, a traditional country of permanent immigration in which temporary migration has become a core feature of immigration management. We review the most important survey and administrative data sources used in Canada to study the relationship between immigration and social inequality. We identify gaps and omissions in these data sources and show why the faithful application of state legal status categories to the empirical study of the relationship between changes in immigration status and social inequality is flawed. In the second half of the paper, we present the design lessons learned by the Citizenship and Employment Precarity project. We discuss our community consultation process, the survey questions we developed to capture respondents’ precarious legal status trajectories, and some of the limitations of our approach.

Research shows that precarious legal status situations are proliferating globally and have profound long-term impacts on migrant life chances and social inequality. Precarious legal status is characterized by a lack of permanent authorized residence, partial and temporary authorized access to social citizenship and employment, dependence on a third party for authorized presence or employment, and deportability (Goldring et al., 2009). Across the world, migrants move between immigration status categories, often in unpredictable ways that do not fit easily within state categories, international migration management frameworks or state policy. Migrant precarious legal status trajectories crisscross jurisdictions and programs. There is movement between authorized and unauthorized legal status situations, failed transitions, denied applications and repeat applications. Yet the bulk of quantitative research on immigration status is organized around state legal status categories and programs. As a result, analysis cannot take into account potentially life-altering moments in a person’s precarious legal status trajectory that fly under or around the radar of the state.

The research design process is a crucial moment with the potential to generate methodological autonomy from state classification and counting systems. In our work, the community social service agency-centred knowledge network generated conceptual and analytical autonomy from state practice. Active consultation with stakeholders to incorporate their insights and priorities into the survey design was essential throughout the research process.

The target population for the survey included employed, working-age residents of the Greater Toronto Area who had entered Canada with precarious legal status. Respondents’ current legal status was not a sampling criterion, which means that the sample ranges from illegalized, unauthorized migrants to permanent residents and naturalized citizens. In addition to questions on precarious legal status trajectories, the survey includes demographic questions and modules on pre-arrival planning and financing of migration, education pre- and post-arrival, early settlement and work, current work, income and financial security, and a self-rated health score and questions about access to healthcare. The twenty-minute, self-administered online survey includes a mixture of closed-response and multiple-choice items and a limited number of open-response items.

The survey has three sets of questions that together capture a respondent’s precarious legal status trajectory in Canada. Respondents are asked for their immigration status at entry, that is, the first time they entered Canada. Current status is captured with three questions. One establishes the respondent’s immigration status category. Another asks if and when the respondent transitioned to permanent residence. A third asks whether the respondent currently has a valid work permit, with a probe for work permit type (open or closed). Work permit type is associated with varying degrees of labour market access, rights and entitlements. The three-part question is a check on the potential for mismatches between authorization to be present and authorization to work that are built into the immigration system.
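
A minimal sketch of how such a three-part check might be operationalised at the analysis stage is shown below. The field names and status categories are hypothetical simplifications introduced for illustration; this is not the project’s actual instrument or code.

```python
# Hypothetical sketch: flagging respondents whose answers imply authorized presence
# but no authorization to work, using the three current-status questions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CurrentStatus:
    status_category: str                      # e.g. "temporary resident", "refugee claimant", "no status"
    permanent_resident_since: Optional[int]   # year of transition to permanent residence, if any
    has_valid_work_permit: bool               # does the respondent report a valid work permit?
    work_permit_type: Optional[str]           # "open", "closed", or None


def flag_status_work_mismatch(s: CurrentStatus) -> bool:
    """Flag respondents who report authorized presence but no authorization to work."""
    if s.permanent_resident_since is not None:
        return False  # permanent residents do not need a separate work permit
    authorized_presence = s.status_category != "no status"
    return authorized_presence and not s.has_valid_work_permit


# Example: a temporary resident with no valid work permit is flagged for follow-up analysis
respondent = CurrentStatus("temporary resident", None, False, None)
print(flag_status_work_mismatch(respondent))  # True
```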

A separate set of questions captures movement within and between legal status categories, including successful and failed attempts to change status. The respondent is asked to identify all immigration statuses and work permits they have held, including any deportation order, and the total number of attempts to gain permanent resident status via humanitarian applications. They are also asked if they ever left and returned to Canada under a different or renewed immigration status. The survey also asks about the costs incurred by the family unit to finance migration and efforts to regularize legal status.

The survey has limitations that truncate the temporal and directional complexity of precarious legal status trajectories. These limitations result from balancing the practical requirement of manageable survey length with comprehensiveness. The survey does not capture sequential data on legal status trajectories, only which statuses a respondent applied for and held. It is also not possible to establish whether the total period without status or a work permit was continuous or discontinuous. Survey pilot testing confirmed the inherent difficulties of collecting sequential data, given lacunae in respondent recall and gaps in knowledge, particularly when immigration matters are handled by a third party such as an immigration lawyer, consultant or family member. A second limitation involves legal status trajectories outside of Canada, before first arriving in the country or between arrival and the survey date. The survey does not address precarious transit zones.

You can read the full article in IJSRM here.


Remote qualitative data collection: Lessons from a multi-country qualitative evaluation

By Mehjabeen Jagmag *

Like most researchers who had planned to begin their research projects earlier this year, our research team found our data collection plans upended by the pandemic. We had designed our research guides, received ethical clearance and finished training our research teams for a multi-country endline evaluation of an education programme in Ghana, Kenya and Nigeria well before we heard of the COVID-19 pandemic.

A few days before our teams travelled to their respective data collection sites, phone calls started pouring in – schools were closed indefinitely, travel between cities was restricted, and we were beginning to understand how much the COVID-19 pandemic would change our lives. After a few weeks of waiting and watching, it became apparent that we could not continue in-person data collection.

We revised our research guides and prepared ourselves to conduct remote phone interviews with our research participants. Given that this was the third and last round of data collection in our multi-year panel research, we had previously collected our research participants’ phone numbers and obtained permission to contact them by phone for further research. We set up remote research desks for the team and began preparation for data collection.

What we were unsure about was whether our research plans would be successful. Accounts of fraudulent callers promising medical remedies and peddling fake health insurance packages had made people wary of responding to unknown phone numbers. We were not sure how many of the phone numbers we had collected in the previous year would still be working, and most importantly, we were not sure how our research participants were faring under the lockdown and whether they would want to speak with us. Finally, our research participants included primary school students, who were an essential voice in our study. We were keen to conduct interviews but were not sure if this would be feasible – would parents trust us enough to speak to us and consent to their children speaking to us? Once we secured consent from parents, would children provide assent? As trust was the key element to completing our research successfully, we devised a data collection plan that included the following elements, which are likely to be necessary for future remote data collection.

Training and retraining for remote data collection

We spent time discussing as a team what the potential challenges might be and how we would respond to them. We drew up a collective list of answers that we could draw on to communicate clearly and effectively about the evaluation, respond to any queries and alleviate any concerns that our participants had. This list and the knowledge behind it grew as we collected data, and debrief meetings with the teams at the end of each day of data collection helped ensure this was a live document.

Seek feedback from key informants

We contacted community leaders and headteachers to enquire about how we should approach data collection with school and community participants. They provided important contextual information that was occasionally specific to each community. We used this information to improve our introductory messages, the times and dates we called, and how we approached research participants.

Seek introductions from trusted leaders

We also asked community leaders and headteachers to support our recruitment process by sending messages to the participants about our research before it began. Doing so helped minimise any uncertainty about the veracity of our calls. Where relevant, we compensated them for their airtime.

Give participants time to prepare for the interview

We shared information about our organisation and the research objective over text messages or calls, which gave research participants enough time to decide whether they wanted to participate. It also helped them plan to speak at a time that suited them best for a discussion, and to consult with their family and children about whether they wanted to take part in the research.

Ensure continuity of research teams

As this was an endline evaluation, we had research team members who participated in previous rounds of data collection calling the participants they were likely to have visited in the past. Where this was possible, it increased trust and facilitated easy conversations.

Prepare case-history notes

For each school, we prepared short case-history notes about the programme and what we had learned from previous research rounds, to build confidence that our intentions and credentials were genuine. These notes helped remind research participants of our last conversation and helped us focus on what had changed since then, which in turn kept conversations short and in general proved to be a useful conversation starter.

Save time at the beginning and end for questions

We ensured research participants had enough time to ask us about the programme and our motivations, go over the consent form, understand why we wanted to speak with the children, or for children to ask parents for their permission before we began our interviews. To ensure that the conversation did not feel rushed, we designed shorter research guides.

Plan for breaks or changes when interviewing with young participants

When speaking with students, we anticipated breaks and distractions during the call, which helped maintain a relaxed pace during the interview. If students were uncomfortable with phone interviews, we eased the conversation to a close to minimise any distress caused to the participant.

Summary and Conclusion

We completed data collection in all three countries, albeit with a less ambitious research plan than we had originally intended for an in-person study. The key objective of our research was to collect the optimal amount of data to inform the programme evaluation while making the interview process convenient and comfortable for the research participants involved. To do so, we learned, it is vital for participants to have confidence in the researchers and in the motive for collecting data. Planning before we began data collection and updating our knowledge as the research progressed proved invaluable to our experience.

* Mehjabeen Jagmag is a Senior Consultant with Oxford Policy Management.


Teaching online research methods online with asynchronous international distance learning students during Covid-19

By Elizabeth Hidson and Vikki Wynn

Challenges in asynchronous international distance learning pre-Covid

Working on an international distance learning teacher training programme brings multiple challenges, the biggest of which had previously been the asynchronous pattern of teaching and learning for the academic elements. Teaching is based on a systematic instructional design approach adopted by our university and broken down into weekly thematic units to support acquisition, discussion, investigation, collaboration, practice and production to meet learning outcomes. Recorded micro-lectures, learning activities and discussion boards are accessed asynchronously, with face-to-face online group sessions for further consolidation. The assessed teaching practice element of the programme had always been carried out in the host international schools, facilitated by school-based mentors and in-country professional practice tutors.

Developing research-informed practitioners

The importance of developing research capacity in trainee teachers stems from the expectation that they will become research-informed practitioners who can use evidence to inform decision-making (Siddiqui and Wardle, 2020). Being consumers of research is not enough, however: teachers need to also develop the tools to carry out their own research in school settings. The first MA-level module that our trainees encounter requires a case study approach to explore specific interventions that their schools implement to address targeted pupils’ learning needs. Typically, our trainee teachers undertake observations, conduct interviews and collect a range of data in their settings to understand how and why this additional support is provided and discuss it in relation to ‘what works’ in education, using initial sources such as the Education Endowment Foundation and the What Works Clearinghouse portals.

Establishing the heritage of research methods and methodology

Good teaching is good teaching, and it follows therefore that good research practice is still good research practice, irrespective of a global pandemic. Early rapid evidence assessments concluded that teaching quality was more important for remote teaching and learning than how it was delivered (Education Endowment Foundation, 2020), which had also been our starting point when considering our own research methods pedagogy. On our programme, the initial teaching of research methods starts with key concepts and expectations: conceptualisation, literature, developing research questions, justification of research methods and consideration of ethics, all designed to ensure that the student teacher can apply theory to practice. We start with a formative proposal assignment to ensure early engagement with methodology and methods.

Our face-to-face online group sessions, themed as weekly ‘coffee shop’ meetings, provide a collaborative forum for knowledge exchange and trouble-shooting. Trainee teachers join to listen, to share ideas, to pose questions and problems and the module leaders respond with a dialogic teaching approach, helping to contextualise research methods in school settings and develop knowledge and understanding in a supportive online space.

[Image: Elizabeth Hidson promoting the weekly ‘coffee shop’ meeting]

The ‘hybrid’ assignment and hybrid research methods

As teaching practice became hybrid for trainee teachers, so did research and assessment. Schooling around the world moved in and out of face-to-face, hybrid and fully online modes over the course of 2020, with the realities of the pandemic hitting earliest in the Far East, where half of our students are based. As physical access to schools and participants fluctuated with local restrictions and impacted on students’ research plans, our alternative assignment pathways expanded to include hybrid and hypothetical assignments designed to act as a safety net for completion.

A key feature of the hybrid assignment was the shift to online and alternative research methods, building on the core research methods pedagogy we had established. Where face-to-face interviews were not an option, we promoted video calling and desktop-sharing (Hidson, 2020), while maintaining the spirit of semi-structured or artefact-based interviewing. Where classroom observations were no longer possible, we promoted fieldnotes captured from hybrid or online teaching sessions, urging a re-think of ethics and the collection of additional secondary data in various forms to attempt triangulation.

The outcomes, in terms of the final case studies produced, have been pleasing: creative and thoughtful academic discussions that responded to the unique challenges of each setting. We regularly quoted Hamilton and Corbett-Whittier (2013) to our trainees, who advised thinking of a case study as a living thing and ensuring that it made “as much sense to the reader as it did to the researcher” (p. 179). The act of thinking in detail about research methods seemed to have been beneficial to trainees’ understanding of research methods and real-world research.

Developing resilient research capability as a factor of resilient teaching

Although our programme continues to respond to the global challenges of Covid-19, we are keen to retain what has worked into the future. The ability of trainee teachers to embrace the need for resilience in teaching as well as in research is a benefit. Developing their capacity to see research as a live and responsive part of their practice has always been our intention; we believe that the response to research during Covid will itself be a case study for future cohorts.

References

Education Endowment Foundation (2020). Remote Learning, Rapid Evidence Assessment. London: Education Endowment Foundation.

Hamilton, L., and Corbett-Whittier, C. (2013). Using Case Study in Education Research. London: Sage.

Hidson, E. (2020). Internet Video Calling and Desktop Sharing (VCDS) as an Emerging Research Method for Exploring Pedagogical Reasoning in Lesson Planning. Video Journal of Education and Pedagogy, 5(1), pp. 1-14. https://doi.org/10.1163/23644583-00501001.

Siddiqui, N. and Wardle, L. (2020). Can users judge what is ‘promising’ evidence in education? Research Intelligence, 144 (Autumn 2020). London: BERA.


“Who says what” in multiple choice questions. A comprehensive exploratory analysis protocol

By M. Landaluce-Calvo, Ignacio García-Lautre, Vidal Díaz de Rada, & Elena Abascal

The aim of much sociological research is to assess public opinion, and the data are often collected by the survey method. This enables the detection of different response, behaviour or opinion profiles and the characterization of groups of respondents with similar views on a certain topic or set of questions. As is widely known, however, different types of question not only yield different qualities of response, but also require different methods of analysis.

Any attempt to classify survey question types requires consideration of five criteria: 1) degree of freedom in the response; 2) type of content; 3) level of sensitivity/threat; 4) level of measurement; and 5) number of response options per question. The last criterion, the number of response options, first differentiates between single-response and multiple-response questions. This brings us to the main objective of our article in IJSRM: how to extract maximum information from multiple-response questions.

There are two broad types of multiple-response question. One is the categorical response question, where the respondent is instructed to “check all that apply” (the categories are exhaustive, but not mutually exclusive). The other is the binary response question, where the respondent is required to check yes or no for each response option. Respondents find “check-all-that-apply” questions more difficult to answer because the multiple options require more use of memory. Under the binary-response format the respondent considers each option in turn and checks yes or no. Each option requires an answer, so only a minimal demand is placed on memory. This procedure yields more responses in both telephone and online surveys and requires less effort on the part of the respondent, although it may lengthen the questionnaire.

Questions admitting multiple response options can be further classified into grid or check-all-that-apply formats. In the case of the latter, the categories are exhaustive but not mutually exclusive. This multiple-response question format is in widespread use both in the field of opinion polling and in sociological and marketing research. International research projects such as the European Social Survey and the World Values Survey, for example, contain large numbers of multiple-response questions.

All the above considerations relate to the stages of data collection and participant opinion retrieval, but what about the analysis? A review of the specialist literature reveals a lack of attention to the specific data-processing treatment these questions require, and a failure to use a multidimensional exploratory approach that would enable the maximum amount of information to be extracted from the response options. Analysis is limited mainly to calculating one-dimensional frequencies (the frequency with which a given response occurs over the total number of respondents or total number of responses) or two-dimensional frequencies resulting from crossing the chosen response option with other socio-demographic or socio-economic characteristics; in other words, a partial approach in either case.

Our article in IJSRM presents a multidimensional analysis protocol that provides the researcher with tools to identify more and better profiles of “who says what”. The underlying philosophy of this approach is to “let the data speak for themselves”, and to learn from them. The strategy begins by coding the response options as a set of metric binary variables (presence/absence). The ideal methodological duo for exploring the resulting data is Principal Component Analysis coupled with Ascending Hierarchical Cluster Analysis, incorporating, in addition, supplementary variables (gender, age, marital status, educational attainment, etc.).
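
As a rough illustration of this kind of protocol, the sketch below codes hypothetical check-all-that-apply responses as 0/1 indicators, reduces them with principal component analysis, clusters the retained component scores with an ascending (agglomerative) hierarchical method, and then characterises the clusters with supplementary variables. The data, variable names and software choices are invented for illustration and are not the analysis reported in the article.

```python
# Minimal sketch (assumed, illustrative data): binary coding + PCA + hierarchical clustering.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n = 300

# Presence/absence (0/1) coding of a hypothetical check-all-that-apply question
options = ["reading", "sport", "travel", "tv", "family"]
X = pd.DataFrame(rng.integers(0, 2, size=(n, len(options))), columns=options)

# Supplementary socio-demographic variables (illustrative), not used in the PCA itself
supp = pd.DataFrame({"gender": rng.choice(["woman", "man"], n),
                     "age_group": rng.choice(["18-34", "35-54", "55+"], n)})

# Step 1: principal component analysis on the binary indicators
scores = PCA(n_components=3).fit_transform(X)

# Step 2: ascending hierarchical clustering (Ward linkage) on the component scores
clusters = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(scores)

# Step 3: characterise clusters with the response options and supplementary variables
profile = X.assign(cluster=clusters).groupby("cluster").mean()       # option frequencies per cluster
crosstab = pd.crosstab(clusters, supp["gender"], normalize="index")  # "who says what"
print(profile.round(2))
print(crosstab.round(2))
```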

We apply this protocol to the analysis of three different multiple-response questions included in a Spanish national sociological survey conducted by the CIS (Centro de Investigaciones Sociológicas):

  1. “How do you usually spend your free time?”: the respondent has 17 options and can select as many as desired; no order of preference is required and the categories are not mutually exclusive.
  2. “During 2017, how have you spent or do you intend spending your leisure periods?”: there are 10 options and no limit on the number that can be checked, but two automatically exclude the rest: “I haven’t thought about it yet” and “I have no leisure periods”.
  3. “When deciding how to spend your days off, what are your top three priorities?”: there is a limit of three options, out of 10 possible, with no order of preference required.

This empirical analysis provides evidence not only of the interpretation potential of the coding/analysis protocol, but also of the limitations of some multiple-response question formats. Specifically, it is shown that multiple response with a limited number of options is not a suitable format for detecting response patterns or overall tendencies leading to the identification of global respondent profiles. In addition, this study corroborates that in both the “forced choice” and “check all that apply” formats, respondents are more likely to choose from the options presented at the beginning of a list (the primacy effect). Early theories attributed this phenomenon to such questions requiring deeper cognitive processing.

Read the full article in IJSRM here.


The feasibility and challenge of using administrative data: a case study of historical prisoner surveys

By Anthony Quinn, David Denney, Nick Hardwick, Rahul Jalil, & Rosie Meek

This research note arose from a collaboration between researchers at HM Inspectorate of Prisons and Royal Holloway University of London. Within the last two decades, HM Inspectorate of Prisons [HMIP] has collected a vast array of survey data from detainees in England and Wales. As part of HMIP inspections, the detainee voice is captured via a survey and it is triangulated with other evidence. The survey data inform the inspection report of each establishment as well as annual reports and thematic studies. 

These survey data are important because they provide detainees with a rare opportunity to voice their experiences of incarceration. There are questions about a range of aspects of prison life. For instance, participants are asked about the amount of time that they spend outside of their cells, the quality of the prison food and how often they are able to receive visits from their family and friends. Currently there are 159 question items in total.

It goes without saying that in twenty years a dedicated research team has amassed a huge volume of data from detainees within the 117 prisons in England and Wales. HMIP have retained these data for inspectorate purposes and digital records of the paper surveys have been stored in an electronic archive. So, to analyse these historical survey data is it just a case of logging in to the archive and inputting the data into a statistical computing program?

Well, no… far from it. There are a number of complexities that must be addressed to combine these data into one or a few larger datasets. A major sticking point with these prisoner survey data is the number of iterations of the questionnaire since the year 2000: there have been several different versions of the questionnaire, and so questions and their response options have varied. This establishes the need to create metadata, such as inventories and timelines, so that the available data can be easily identified.
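
To illustrate the kind of metadata this implies, the sketch below builds a hypothetical inventory linking questionnaire versions to the years they were in use and the items they contain, so that comparable items can be identified before pooling. The version labels, years and item names are invented; they do not describe the actual HMIP questionnaires.

```python
# Hypothetical metadata inventory: which questionnaire version was in use when,
# and which question items each version contains.
import pandas as pd

versions = pd.DataFrame([
    {"version": "v1", "in_use_from": 2000, "in_use_to": 2007,
     "items": {"time_out_of_cell", "food_quality"}},
    {"version": "v2", "in_use_from": 2008, "in_use_to": 2014,
     "items": {"time_out_of_cell", "food_quality", "family_visits"}},
    {"version": "v3", "in_use_from": 2015, "in_use_to": 2020,
     "items": {"time_out_of_cell", "family_visits", "healthcare_access"}},
])

# Items that appear in every version are comparable across the whole period
common = set.intersection(*versions["items"])
print("Items comparable across all versions:", common)

# For a given item, which versions (and therefore which years) does it come from?
item = "family_visits"
available = versions.loc[versions["items"].apply(lambda s: item in s),
                         ["version", "in_use_from", "in_use_to"]]
print(available)
```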

A wealth of literature explains the challenges of opening up data to secondary users. First, curating and maintaining datasets is costly and time-consuming; this is not an undertaking that should be taken lightly or, where it can be avoided, alone. Second, survey data can contain personal information, so it must be ensured that data are sufficiently anonymised. Third, data can easily be misused or misinterpreted, so it is vital to document and explain the data and their limitations for secondary users.

Any research involving places of detention and detainees raises significant ethical considerations. In this case, detainees had not explicitly agreed that data from the surveys they returned to the inspectorate could be made more widely accessible. So, we conducted some focus groups with current long-serving prisoners (we would have conducted more if the pandemic had not halted our efforts) to ask what they thought – they were emphatic that the data should be shared if doing so would help improve prison conditions. Indeed, some said they would not have taken part in the survey if they had thought the data were not going to be used in this way. Further qualitative research with data subjects to ascertain their perspectives is certainly an endeavour to be pursued.

Within our Research Note we have put a spotlight on the intricacies involved in identifying administrative data, aggregating them and fully understanding the context within which they were collected. To achieve the latter aim it is vital that, where possible, those who have collected data play a prominent role in the collation of administrative data. This is not a task that should simply be outsourced. Rather, to do justice to such a potentially valuable resource, the expertise of a diverse collaboration of professionals is vital.

Oh, and there’s COVID-19 as well… this has prevented researchers from gaining access to prisons to talk directly to detainees. It has also highlighted the importance of making better use of existing operational and administrative data sources.

Read the full IJSRM article here.