featured

Cultivating citizen science for all

By Stephanie Chesser, Michelle M. Porter & Anthony G. Tuckett

Cultivating citizen science for all: Ethical considerations for research projects involving diverse and marginalized populations is a critical appraisal that offers practical advice on the ethics of citizen science projects involving diverse and marginalized populations. Our take-home message is two-fold: (1) we explain how the citizen science community can conduct research in ways that value inclusivity, adaptability, sensitivity, safety, and reciprocity; and (2) we explain why researchers designing citizen science projects ought to scaffold every aspect of their research according to the Golden Rule. With this in mind, we argue that citizen scientist volunteers are then better positioned to be treated authentically and never as a mere means to an end. Ultimately, we contend that by implementing these recommendations, citizen science projects will be well placed, from an ethical perspective, to achieve meaningful community engagement.

We put forward an argument for several ethical research considerations that we feel are necessary for citizen science projects wanting to involve individuals from traditionally marginalized groups (e.g. older people). To do this, we first describe the notion of ethics in the context of citizen science research and some of the approaches professional researchers may choose to incorporate into projects to help ensure that citizen scientists are able to participate in meaningful and non-harmful ways. Finally, weaving together examples from published citizen science research in the human and health sciences along with current recommended standards for conducting citizen science, we suggest that projects with marginalized populations attend to five specific research elements: inclusivity, adaptability, sensitivity, safety, and reciprocity.

Our work is an international collaboration that has spanned four years, and the manuscript itself required 18 months of dedicated thinking, writing and unwavering perseverance to bring into print. We would say, though, that when you know you have a manuscript with a useful message, it is worth pushing on and never giving up. It is a paper of two integrated parts: the backbone of the work is the five elements, around which we have wrapped the muscular core, an applied ethics approach encapsulated by the Golden Rule.

MP is the link between AT and SC, though AT and SC have never met. MP and AT met four years ago at a large, multidisciplinary, international meeting exploring the application of an approach to citizen science specifically in the health context. All three of us work with and are committed to the well-being of older people. SC and MP work closely together on a daily basis and co-created the ‘backbone’, whilst AT brought the applied ethics ‘muscularity’. The writing and redrafting were very much an iterative process of questioning and challenging ideas and the clarity of meaning. SC and MP nicely kept AT on task and his frustrations tempered. We were never writing a philosophical thesis. In the beginning and for the duration SC pushed the work on, whilst in the latter and final strides, AT got the manuscript over the line! In the thinking-writing nexus, we were also able to capitalise on the academic workload variations between the northern and southern hemispheres, so that as one of us got overwhelmed, fed up or fatigued, the other stepped up to the plate. The writing team was a perfect fit.

featured

Using creative methods to research across difference

By Rachel Brooks

Although there is now a substantial literature on the use of creative methods within the social sciences, relatively little work has explored the value of such approaches to researching across difference, specifically. Our interest in this topic came about during a project we were working on (Eurostudents), which involved conducting a plasticine modelling exercise during focus groups with undergraduate students in six European countries. We were concerned that differences in academic culture might have a bearing on participants’ willingness to engage in the modelling, and perhaps also on the type of models produced. While our fears were, in the end, unfounded, we began to think more about the ways in which differences by nationality – but also by other social markers – may affect the use of creative methods. To this end, we organised a seminar, held in June 2018 at the University of Surrey and kindly funded by the IJSRM, to bring together scholars who were using such methods to research various aspects of difference, and reflect on the associated challenges and benefits.

The special issue of IJSRM, published recently, is based primarily on the papers given at this event. A number of authors tease out various practical and ethical issues that they encountered, which were brought into sharp relief because of the cross-national context (see the articles by Burningham et al. and Harman et al.). In other respects, however, contributors suggest that the use of creative methods can help to overcome some of the challenges of working across different countries. Chawla-Duggan et al., for example, contend that their use of filming helped to alleviate some of the linguistic barriers that emerged from working across four different countries. Moving on to examine other aspects of difference, Donnelly et al.’s contribution explores the extent to which a sense of intra-national geographical difference (here, discussed primarily with reference to the UK) affected educational decision-making, while Bernardi’s research (conducted cross-nationally) focuses on a group of children who are often positioned as different (by virtue of their autism), and Rainford’s contribution foregrounds institutional differences instead. Lažetić’s critical appraisal of website analyses focuses on both institutional and national differences, and outlines an agenda for further developing work in this area.

Together, the papers demonstrate how a wide range of creative methods (including filming of participants; analysis of visual material on public websites; photo elicitation; facilitation of art workshops and activities; Lego modelling; and geographical mapping) can bring new insights to researching across difference with respect to various substantive areas of enquiry including education, family, violence, youth studies, childhood studies and disability.

Notebook

Confessions of a Muslim Researcher – Considering Identity in Research

By Maisha Islam

In this Research Note, I discuss some of the contentions I have faced when conducting research with Muslim students. As a Muslim myself, I initially believed my insider status to be advantageous in conducting research with a community I also belong to. However, I was not prepared to question fundamental parts of my identity throughout the research process.

Although my piece for the International Journal of Social Research Methodology has been reworked for publication, this paper originally emerged from a module looking at theory and ethics in Educational Practice that I was taking as part of my professional doctorate in Education programme. This module gave me the opportunity to unpack the tensions I had faced in a safe and structured way, where I could delve into the literature exploring researcher identity whilst situating my own experiences within it. Whilst other authors have explored the double-edged sword of conducting insider research, I myself was not prepared for some of the experiences I encountered (and the emotions they brought).

The paper outlines three main issues I’ve considered when interviewing Muslim students about their sense of belonging, the provisions made for their religious needs, and whether they believe they have been settling for less in terms of their university experience. These issues included: whether or not I was using inappropriate means to recruit Muslim student research participants; whether I was simply over-representing my own experiences as an undergraduate student and applying them to a wider Muslim student population; and how I began to question not only my beliefs but also my sense of religiosity when meeting and interviewing a wide array of Muslim students.

Within the paper, I exemplify where and how these issues manifested. In doing so, I at times felt vulnerable in having to take myself back to uncomfortable situations. For example, one particular interview with a student not only made me feel that my own views were ‘too liberal’ but also led me to question why I was undertaking research when it was opening me up to conflictual encounters. Additionally, why was I undertaking research, with the core aim of better understanding and improving the Muslim student experience, when my participants (notably, only one) could not appreciate this?

However, the paper is able to detail how I, as a researcher, have been able to reconcile these ‘critical incidents’, and to show that the research process and journey is bound to be one which brings uncomfortable situations. I conclude that, as an early career researcher, it is imperative not only to be reflexive in acknowledging such situations but also to be confident in confronting them. It is hoped that researchers who are embarking on their research journeys (particularly Muslim researchers) are able to take away lessons from this research note and be more prepared when going into the field.

Calls

Call for special issue proposals

The IJSRM editors welcome outline proposals for Special Issues or Themed Sections, to be submitted to the Journal editors by May 1st 2020. Outline proposals should be submitted via email to tsrm-editor@tandf.co.uk.

PLEASE NOTE: Submission for special issues or themed sections is a two-stage process. The first is an informal inquiry which, if tentatively approved, requires, in the second stage, a more developed proposal, including significant ‘buy-in’ from the authors involved in the project.

STAGE 1: INITIAL INQUIRY

Special Issues consist of a ‘state of the art’ review by the guest issue editors and 7 or 8 articles.

Themed Sections consist of a brief introduction by guest section editors and 4 or 5 articles.

Outline proposals of no more than one A4 page should provide:

  • Indication of whether the proposal is for a Special Issue or Themed Section
  • Title/topic of proposed Special Issue or Themed Section
  • Name/s, affiliation/s and contact details of guest editors
  • Brief rationale for the timeliness, importance and international interest of the methodological topic and methods to be addressed

Proposal topics should fit with the Journal’s remit.  See the statement of aims and scope: https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=tsrm20

Outline proposals will be assessed by the Journal Editors and Board members.

STAGE 2: FORMAL SUBMISSION AND APPROVAL

Once the initial proposal is tentatively approved, the guest editors are required to put together a more formal summary of their project, including:

  • Abstracts (at minimum) for each of the papers.
  • Author buy-in, demonstrating in some way the authors’ agreement to complete the project. (This can be emails from authors, etc.)
  • A delivery timeline, including key milestones for completing the project and a final delivery date. (We recognize that timelines change due to different circumstances, but overall, our approach is similar to a book contract insomuch as the agreed delivery date is expected to be honoured.)

Brian Castellani, Rosalind Edwards, Malcolm Williams, Co-editors
International Journal of Social Research Methodology


Announcements

The Early Career Researcher Article Prize 2019/20

The International Journal of Social Research Methodology is the leading European journal in social research methods and methodology with a five-year impact factor of 2.099.

We are pleased to announce the 2019/20 competition for papers written by early career researchers (ECRs), who are either current doctoral students or within their first three years of post-doctoral employment following their doctoral graduation.

We particularly welcome sole-authored ECR articles but will also consider jointly authored articles where the ECR is the main/lead author and is responsible for 70% or more of the paper.

A prize of £500 will be awarded to the best paper, and this, along with the runner-up papers, will be published in the Journal. These will be free to access until the end of 2020.

The journal aims to encourage high quality, rigorous papers that provide an original contribution to current and emerging methodological debates and methodological practice across a range of approaches: qualitative, quantitative, hybrid and mixed methods. The prize has been established to encourage and recognise research and contributions from new scholars in these debates and practices.

Potential contributors should carefully read the Aims and Scope of the Journal at http://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=tsrm20  and Instructions for Authors at  http://www.tandfonline.com/action/authorSubmission?journalCode=tsrm20&page=instructions. 

Papers submitted between October 1st 2019 and June 30th 2020 will be considered as entries in the competition.

All papers will be subject to the journal’s normal refereeing process, and the best paper will be selected by the editors and representatives of the editorial board. Papers will be expected to reach the normal publishing standard of the journal and, in the unlikely event that none do, the journal reserves the right to publish none and not award the prize.

Questions concerning the competition should be sent to Malcolm Williams (WilliamsMD4@cardiff.ac.uk) and papers for consideration to tsrm-editor@tandf.co.uk. Your covering letter should indicate that you would like your paper to be considered for the competition and include a statement of your eligibility as an ECR.

Notebook

The Methodological Chaos of Adverse Childhood Experiences

By Rosalind Edwards

The methodologies behind evidence that policymakers and service providers can adopt as ‘magic bullets’ to solve social ills rarely get attention. One such bullet is the notion of Adverse Childhood Experiences (ACEs), which has been gathering speed as a basis for family policy and decision-making.  However, there are telling methodological and evidential drawbacks to ACEs, which seem to be left aside, particularly in terms of understanding and addressing the very real adversities that parents and children may face. Also, the varying definitions of what constitutes ACEs and different study designs for researching them mean that there’s no cohesive body of knowledge.

In terms of a definition, ACEs are an attempt to identify a set of traumatic conditions experienced before the age of 18, and to trace the combined ‘score’ (baldly, the number) of events in a simple causal manner through to the long-term damaged physical and mental health that these early experiences are said to create. Findings from studies using ACEs are regarded as rigorous ‘hard’ data for policy and decision-making; because they are quantitative they appear concrete and exact.

While statistical methods and evidence certainly have an important role to play in policy-making, the provisional and uncertain nature of quantitative social science in a complex and dynamic social world gets obscured in the rush to certainty. Weak measures, measurement error, missing data, and statistical significance cautions get swept aside. And yet they are evident in ACEs studies.

The need for caveats concerning ACEs evidence is compounded when the range of ‘inputs’ identified as adverse experiences is so ambiguous. For rigorous tracing of causal inputs through to effects, ACEs need to be a clearly defined set of experiences. And yet they lack cohesion in nature and extent. They encompass a shifting ragbag of possible abuses and dysfunctions, varying in severity, timing and duration.

In standard ACEs inventories the boundaries between quite common family circumstances and abnormal experiences become blurred. For example, a ‘yes’ answer to ‘were your parents ever separated or divorced?’ is considered an ACE no matter whether the separation was amicable or adversarial, or occurred before the respondent was born, when they were a toddler, or at age 17. Similarly, the ACE criterion ‘living with anyone who was depressed, mentally ill or suicidal’ takes no account of who this is or for how long, and does not distinguish between a person feeling dejected and miserable and one suffering clinical depression.

As the idea of ACEs gains popularity in policy and practice, the net is being cast more widely to add further situations to the standard inventories. The motley experiences reflect the agendas of the various agencies and researchers putting them forward. They include parental disability, mothers’ health, lack of childrearing routine, ‘inter-parental’ conflict, moving home, and violence involving a sibling or peer. The implication is that these different issues and the variety of combinations of them are comparable, underpinned by a common mechanism, rather than considering that different adversities may have different effects dependent on context.

There’s further chaos in the methodologies adopted by ACEs studies. As is typical of much survey research into subjective wellbeing, there are retrospective studies subject to people’s recollections, and prospective longitudinal designs subject to the specificities of the temporal period they start from. There are also different sources of information and assessment: the subject of the ACEs themselves, a parent, or a professional.

Whatever their methodology, though, what the vast majority of putative ACEs have in common is their narrow remit of consideration. They focus on and isolate the ‘household’ and in particular mother and child.

There’s no attention to the influence of subsequent experiences later in life in ameliorating or exacerbating the effects of stressful life events in childhood. And the concept and measurement of ACEs doesn’t capture confounding contextual issues that are beyond parental control or that can harm people emotionally and physically, such as being subject to racism/Islamophobia and misogyny. They don’t extend to contextual factors beyond the parent-child relationship that may be harmful, or that may be mediating and supportive in the face of adversity.

In all, the methodological chaos of ACEs provides no indication of how best to intervene, can’t point to whether an intervention works, of what type and when, and can’t be used to predict which individuals are at risk. Yet it’s being implemented to drive policy and practice interventions.

Researchers investigating ACEs need to take care about claiming a body of knowledge in the face of chaotic definitions, and about the claims for certainty made in their findings about cause and effect. We have a duty to point out caveats clearly to policymakers and service providers. In turn, however tempting it may be to seize on a ‘magic bullet’ solution to social ills, policymakers and service providers need to be more cautious and questioning. And all need to widen their focus and concerns, to look outside narrow parent-child relations and address the adversities that poverty and prejudice pose for people’s mental and physical health.

Notebook

Basic Data Sharing and Our Journal: A Dialogue Between Our Editors Brian Castellani and Ros Edwards

In this blog, IJSRM editors Ros Edwards and Brian Castellani share views on the issue of basic data sharing and potential implications for the International Journal of Social Research Methodology.

ROS: Our publisher has adopted a Basic Data Sharing Policy stating that the journals they publish will encourage authors to share or make open the data underlying their article where this doesn’t cut across the privacy and security of human subjects. This is a requirement that’s coming to the fore across disciplines, research funders and the academic publishing world. The data set to be shared or made open by the author/s should be the data needed for independent verification of the research results in the article. The idea is that authors sharing data supports research transparency, reproducibility and replicability, and enables researchers to build on others’ work.

While I generally support ideas about transparency and data sharing for research reuse, I wonder about its applicability and implications for our journal given our focus on methodology.  This is especially the case from the perspective of a qualitative methods article. 

I’ll start us off with my first concern.  The idea of data sharing for replication and verification is based on the notion that data is free-standing and that anyone analysing the same data set will come to the same results – or different ones if the analysis has not been carried out correctly.  I’m not sure this is the case for quantitative data in the social sciences, and I’m certain that it isn’t the case for qualitative research.  For example, given a single qualitative data set, a researcher with a psychosocial analytic approach may come up with quite different research findings to one who approaches the data from a critical realist or interpretive perspective.  Neither will be ‘wrong’; they are providing different angles on the data.  So, the researcher and their epistemological standpoint is part of the method of analysis.  That isn’t necessarily an argument against data sharing of course – as I say, I can see the benefit of making data available for other researchers to use.  It’s just that some of the rationale for publishers requiring data sharing for articles that are published in their journals needs to start from a different basis.

BRIAN:  I agree with much of what you say, Ros.  I do not pretend to be an expert in all aspects of the discussion, but the problem for me is what the motivation is behind it all, which seems to have a lot to do with the recent debates within psychology and, more widely, social science regarding the ‘reproducibility crisis.’

For a good summary of the debate and its historical errors, I recommend reading Daniele Fanelli’s “Is science really facing a reproducibility crisis, and do we need it to?” At the end of his article, he states:

To summarize, an expanding metaresearch literature suggests that science—while undoubtedly facing old and new challenges—cannot be said to be undergoing a “reproducibility crisis,” at least not in the sense that it is no longer reliable due to a pervasive and growing problem with findings that are fabricated, falsified, biased, underpowered, selected, and irreproducible. While these problems certainly exist and need to be tackled, evidence does not suggest that they undermine the scientific enterprise as a whole. Science always was and always will be a struggle to produce knowledge for the benefit of all of humanity against the cognitive and moral limitations of individual human beings, including the limitations of scientists themselves.

For me, it is not so much whether the crisis exists as what it says, methodologically speaking, about where most conventional social science presently remains: caught up in some epistemological variant of a positivist, quantitative-methods-based view of science and an emphasis on causality. And this is despite all the significant methodological critiques and advances made in such fields as feminist methods, intersectionality theory, complexity science methodology, narrative inquiry, qualitative methods, mixed methods, and the past several decades of the sociology and philosophy of science.

It also reminds me of Mike Savage’s (2009) article, Contemporary sociology and the challenge of descriptive assemblage (European Journal of Social Theory) in which he argues for the importance of description as a legitimate methodological and substantive goal of social science, and not just the development of models of causality.

Still, I do agree that sharing data is a great thing. Mainly because I also agree that, regarding issues such as policy evaluation and decisions on such things as health and mental health, we need to be methodologically rigorous, because people’s lives depend upon us doing good research. I also agree that there is pressure to produce increasingly novel results, and that scholars are often less motivated to examine further the results of their colleagues’ work. And I agree that one must always be worried about the influence that industry, politics, business and funding have on the results at which scientists arrive. But I also agree that one study does not a new insight make. Instead, insight comes from a field of study and its key debates. I am also very aware that science advances, albeit in a very messy way, through debate and conflict.

In other words, while any given study might be brilliantly insightful, the world is too complex to be contained within the results of a single study or frame of thinking, and so all research will be underdetermined by its evidence. And, for me, why this periodically surprises people, I am not sure.

As such, and to repeat the point, I worry as you do that the push to share data is more about a certain view of what, methodologically speaking, social science is able to do, and how we respond in light of its inability to live up to that perception, and less about increasing the capacity of researchers and citizen scientists to access the data upon which a study is based.

ROS: Turning, then, to my second and, I think, more crucial point in terms of implications for whether or not IJSRM implements a data sharing policy. In a methodology journal like ours, the articles that we publish are often researchers reflecting on their methods. For example, they may write about their negotiations around access to a setting, or about shifting power relations in designing participatory research, or about non-verbal communication in interviews. In a sense then, in these discussions the researcher is the data, or at least is a strong feature of it. The researcher is making the argument transparent through the article; arguably, publishing the account is itself the transparency and sharing element. To require authors of such articles to publish their fieldnotes or interviews or whatever would feed my first point above, about data as free-standing, in an entirely inappropriate fashion.

BRIAN: This is an excellent point, Ros, and one that may actually function as a useful counterpoint to the first. In other words, for the sake of argument, let’s assume there is a legitimate crisis in causality and reproducibility in science and a need to share data. It seems, then, that a useful medicine is a journal such as ours, where scholars are given the chance to reflect on and think through their own work, the work of others, and the fields to which they belong, in order to decide how best, methodologically speaking, to proceed. In such instances, the actual data upon which their studies are based may sometimes be less important, given that, as you say, the goal of the article is transparency. Again, I don’t think a blanket statement of doing things one way or another is useful. But, then, that is the point, isn’t it?