Announcements

The winners of our ECR paper competition for 2018

In 2018 IJSRM ran a competition for papers written by early career researchers (ECRs) who were either current doctoral students or in their first three years of post-doctoral employment.  Our aim was to encourage and recognise research and contributions from new scholars in current and emerging methodological debates and practice.

All entries were subject to the Journal’s usual refereeing processes and had to reach our normal publishing standard.  The winners were selected by a sub-panel of members of the IJSRM Editorial Board and the Journal Editors. The panel identified two articles as joint winners of the ‘Best ECR Article’.

Fabio Hirschhorn’s article reflected on the application of the Delphi method. Fabio, who is a PhD candidate at Delft University of Technology, Netherlands, explains:

I reflect on the use of the Delphi method in the context of research on the governance of public transport services. The Delphi has become a tool to address varied research questions, producing and employing both quantitative and qualitative information, in a multiplicity of scientific fields, and helping achieve several types of outcomes beyond consensus alone. This flexibility for researchers to tailor a survey according to specific needs, while still keeping core features that ensure the scientific robustness of the method, is, in my view, the greatest merit of the Delphi.

Nicole Brown’s article reflected on using identity boxes to elicit experiences. Nicole, who is a PhD candidate at the University of Kent, and a Lecturer in Education and Academic Head of Learning and Teaching based at the UCL Institute of Education, UK, explains:

I consider the identity boxes and their contents as data in themselves, analysing them using not only a more conventional form of qualitative analysis but also a less traditional arts-based method that led to an artistic installation and an illustrated poem. I found that these representations spoke to audiences in a way that the written word could not. What is most intriguing for me about this material approach is the depth of data that emerges through asking participants to engage with research questions intuitively.

Nicole and Fabio remarked on what winning the IJSRM ECR prize means to them:

Nicole: Being told that my work was awarded IJSRM’s “Best ECR Article” prize was particularly special. Not only has my work been deemed worthy of publication, the editorial board and journal editors, who are all eminent academics in the field, have given my work their seal of approval. I have always enjoyed learning about, experimenting with and teaching research methods and to me this prize now confirms that my work contributes to the advancement of the field. 

Fabio: It is a great honor to win the ‘Best Early Career Researcher Article’ prize from a prestigious outlet such as the International Journal of Social Research Methodology. This prize is a great incentive to early career researchers, whose work is seldom recognized. In my case, this achievement not only makes me very proud of my past effort, but also motivates me to strive further to produce high-quality and high-impact research in coming years.

Many congratulations to Fabio and Nicole!

Notebook

The Methodological Chaos of Adverse Childhood Experiences

By Rosalind Edwards

The methodologies behind evidence that policymakers and service providers can adopt as ‘magic bullets’ to solve social ills rarely get attention. One such bullet is the notion of Adverse Childhood Experiences (ACEs), which has been gathering speed as a basis for family policy and decision-making.  However, there are telling methodological and evidential drawbacks to ACEs, which seem to be left aside, particularly in terms of understanding and addressing the very real adversities that parents and children may face. Also, the varying definitions of what constitutes ACEs and different study designs for researching them mean that there’s no cohesive body of knowledge.

In terms of a definition, ACEs are an attempt to identify a set of traumatic conditions experienced before the age of 18, and to trace the combined ‘score’ (baldly, the number) of events in a simple causal manner through to the long-term damage to physical and mental health that these early experiences are said to create. Findings from studies using ACEs are regarded as rigorous ‘hard’ data for policy and decision-making; because they are quantitative they appear concrete and exact.
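
To make the point about the bald ‘score’ concrete: in the standard inventories, the ACE score is simply a count of ‘yes’ answers across the listed items. The sketch below is illustrative only – the item wordings and the helper function are abbreviations for this post, not the actual survey instrument – but it shows how the arithmetic treats every item as interchangeable, so that very different childhoods collapse into the same number.

```python
# Illustrative sketch only: the ACE score is a bare count of 'yes' answers,
# with no weighting for severity, timing, duration or context.
# Item wordings here are abbreviated, not the exact survey instrument.
ACE_ITEMS = [
    "parents separated or divorced",               # amicable or adversarial, at any age
    "household member depressed or mentally ill",  # brief low mood or clinical illness
    "physical abuse",
    "emotional abuse",
    # ... remaining inventory items
]

def ace_score(answers: dict) -> int:
    """Count the items answered 'yes'; every item carries equal weight."""
    return sum(1 for item in ACE_ITEMS if answers.get(item, False))

# Two very different childhoods yield the same score of 2:
child_a = {"parents separated or divorced": True,
           "household member depressed or mentally ill": True}
child_b = {"physical abuse": True, "emotional abuse": True}
print(ace_score(child_a), ace_score(child_b))  # -> 2 2
```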

While statistical methods and evidence certainly have an important role to play in policy-making, the provisional and uncertain nature of quantitative social science in a complex and dynamic social world gets obscured in the rush to certainty. Weak measures, measurement error, missing data, and statistical significance cautions get swept aside. And yet they are evident in ACEs studies.

The need for caveats concerning ACEs evidence is compounded when the range of ‘inputs’ identified as adverse experiences is so ambiguous. For rigorous tracing of causal inputs through to effects, ACEs need to be a clearly defined set of experiences. And yet they lack cohesion in nature and extent. They encompass a shifting ragbag of possible abuses and dysfunctions, varying in severity, timing and duration.

In standard ACEs inventories the boundaries between quite common family circumstances and abnormal experiences become blurred. For example, a ‘yes’ answer to ‘were your parents ever separated or divorced?’ is counted as an ACE regardless of whether the separation was amicable or adversarial, or whether it occurred before the respondent was born, when they were a toddler, or at age 17. Similarly, the ACE criterion ‘living with anyone who was depressed, mentally ill or suicidal’ takes no account of who this was or for how long, and does not distinguish between a person feeling dejected and miserable and one suffering clinical depression.

As the idea of ACEs gains popularity in policy and practice, the net is being cast more widely to add further situations to the standard inventories. The motley experiences reflect the agendas of the various agencies and researchers putting them forward. They include parental disability, mothers’ health, lack of childrearing routine, ‘inter-parental’ conflict, moving home, and violence involving a sibling or peer. The implication is that these different issues and the variety of combinations of them are comparable, underpinned by a common mechanism, rather than considering that different adversities may have different effects dependent on context.

There’s further chaos in the methodologies adopted by ACEs studies. As is typical of much survey research into subjective wellbeing, there are retrospective studies subject to people’s recollections, and prospective longitudinal designs subject to the specificities of the temporal period in which they begin. There are also different sources of information and assessment: the subject of the ACEs themselves, a parent, or a professional.

Whatever their methodology, though, what the vast majority of putative ACEs have in common is their narrow remit of consideration. They focus on and isolate the ‘household’ and in particular mother and child.

There’s no attention to the influence of subsequent experiences later in life in ameliorating or exacerbating the effects of stressful life events in childhood. And the concept and measurement of ACEs doesn’t capture confounding contextual issues that are beyond parental control or that can harm people emotionally and physically, such as being subject to racism/Islamophobia and misogyny. They don’t extend to contextual factors beyond the parent-child relationship that may be harmful, or that may be mediating and supportive in the face of adversity.

In all, the methodological chaos of ACEs provides no indication of how best to intervene, can’t point to whether an intervention works, of what type, or when, and can’t be used to predict which individuals are at risk. Yet it’s being implemented to drive policy and practice interventions.

Researchers investigating ACEs need to take care about claiming a body of knowledge in the face of chaotic definitions, and about the claims for certainty made in their findings about cause and effect. We have a duty to point out caveats clearly to policymakers and service providers. In turn, however tempting it may be to seize on a ‘magic bullet’ solution to social ills, policymakers and service providers need to be more cautious and questioning. And all need to widen their focus and concerns, to look outside narrow parent-child relations and address the adversities that poverty and prejudice pose for people’s mental and physical health.

Notebook

Basic Data Sharing and Our Journal: A Dialogue Between Our Editors Brian Castellani and Ros Edwards

In this blog, IJSRM editors Ros Edwards and Brian Castellani share views on the issue of basic data sharing and potential implications for the International Journal of Social Research Methodology.

ROS: Our publisher has adopted a Basic Data Sharing Policy where they state that journals that they publish will encourage authors to share or make open the data underlying their article publication where this doesn’t cut across privacy and security of human subjects.  This is a requirement that’s coming to the fore across disciplines, research funders and the academic publishing world.  The data set to be shared or made open by the author/s should be the data needed for independent verification of research results in the article.  The idea is that authors sharing data supports research transparency, reproducibility and replicability, and enables researchers to build on others’ work. 

While I generally support ideas about transparency and data sharing for research reuse, I wonder about its applicability and implications for our journal given our focus on methodology.  This is especially the case from the perspective of a qualitative methods article. 

I’ll start us off with my first concern.  The idea of data sharing for replication and verification is based on the notion that data is free-standing and that anyone analysing the same data set will come to the same results – or different ones if the analysis has not been carried out correctly.  I’m not sure this is the case for quantitative data in the social sciences, and I’m certain that it isn’t the case for qualitative research.  For example, given a single qualitative data set, a researcher with a psychosocial analytic approach may come up with quite different research findings to one who approaches the data from a critical realist or interpretive perspective.  Neither will be ‘wrong’; they are providing different angles on the data.  So, the researcher and their epistemological standpoint is part of the method of analysis.  That isn’t necessarily an argument against data sharing of course – as I say, I can see the benefit of making data available for other researchers to use.  It’s just that some of the rationale for publishers requiring data sharing for articles that are published in their journals needs to start from a different basis.

BRIAN:  I agree with much of what you say, Ros.  I do not pretend to be an expert in all aspects of the discussion, but the problem for me is what the motivation is behind it all, which seems to have a lot to do with the recent debates within psychology and, more widely, social science regarding the ‘reproducibility crisis.’

For a good summary of the debate and its historical errors, I recommend reading Daniele Fanelli’s “Is science really facing a reproducibility crisis, and do we need it to?” At the end of his article, he states:

To summarize, an expanding metaresearch literature suggests that science—while undoubtedly facing old and new challenges—cannot be said to be undergoing a “reproducibility crisis,” at least not in the sense that it is no longer reliable due to a pervasive and growing problem with findings that are fabricated, falsified, biased, underpowered, selected, and irreproducible. While these problems certainly exist and need to be tackled, evidence does not suggest that they undermine the scientific enterprise as a whole. Science always was and always will be a struggle to produce knowledge for the benefit of all of humanity against the cognitive and moral limitations of individual human beings, including the limitations of scientists themselves.

For me, it is not so much whether the crisis exists, as what it says, methodologically speaking, about where most conventional social science presently remains: caught up in some epistemological variant of a positivist, quantitative-methods-based view of science and an emphasis on causality. And this is despite all the significant methodological critiques and advances made in such fields as feminist methods, intersectionality theory, complexity science methodology, narrative inquiry, qualitative methods, mixed methods, and the past several decades of the sociology and philosophy of science.

It also reminds me of Mike Savage’s (2009) article, Contemporary sociology and the challenge of descriptive assemblage (European Journal of Social Theory), in which he argues for the importance of description as a legitimate methodological and substantive goal of social science, and not just the development of models of causality.

Still, I do agree that sharing data is a great thing. Mainly because I also agree that, regarding issues such as policy evaluation and decisions on things like health and mental health, we need to be methodologically rigorous because people’s lives depend upon us doing good research. I also agree that there is pressure to produce increasingly novel results, and that scholars are often less motivated to examine further the results of their colleagues’ work. And I agree that one must always be worried about the influence that industry, politics, business and funding have on the results at which scientists arrive. But I also agree that one study does not a new insight make. Instead, insight comes from a field of study and its key debates. I am also very aware that science advances, albeit in a very messy way, through debate and conflict.

In other words, while any given study might be brilliantly insightful, the world is too complex to be contained within the results of a single study or frame of thinking, and so all research will be underdetermined by its evidence. Why this periodically surprises people, I am not sure.

As such, and to repeat the point, I worry as you do that the push to share data is more about a certain view of what, methodologically speaking, social science is able to do, and how we respond in light of its inability to live up to that perception, and less about increasing the capacity of researchers and citizen scientists to access the data upon which a study is based.

ROS:  Turning to my second, and I think more crucial, point in terms of implications for whether or not IJSRM implements a data sharing policy. In a methodology journal like ours, the articles that we publish are often researchers reflecting on their methods. For example, they may write about their negotiations around access to a setting, or about shifting power relations in designing participatory research, or about non-verbal communication in interviews. In a sense then, in these discussions the researcher is the data, or at least is a strong feature of it. The researcher is making the argument transparent through the article – arguably publishing the account is the transparent and sharing element. To require authors of such articles to publish their fieldnotes or interviews or whatever would feed my first point above, about data as free-standing, in an entirely inappropriate fashion.

BRIAN:  This is an excellent point, Ros, and one that may actually function as a useful counterpoint to the first. In other words, for the sake of argument, let’s assume there is a legitimate crisis in causality and reproducibility in science and a need to share data. It seems, then, that a useful medicine is a journal such as ours, where scholars are given the chance to reflect on and think through their own work, the work of others, and the fields to which they belong, in order to consider how best, methodologically speaking, to proceed. In such instances, the actual data upon which their studies are based may sometimes be less important, given that, as you say, the goal of the article is transparency. Again, I don’t think a blanket statement of doing things one way or another is useful. But, then, that is the point, isn’t it?

Notebook

Health inequalities in England: A complex case of sticking plasters and category errors

By Jonathan Wistow

In this blog post I will briefly develop three interrelated strands of analysis – political economy, policy, and methodology – that I argue are central to understanding and reducing health inequalities. Focusing on England, I will argue that within each of these strands the dominant approaches to framing and responding to health inequalities are, more or less, complementary and provide a relatively ‘neat’ and unified understanding of and response to this social problem. In short, the political economy, policy, and methodological components complement each other in their tendencies towards individualism, treatment and behaviour, and linearity respectively. Unfortunately, the ontological and epistemological coherence that cuts across these is negatively reinforcing and goes a long way towards explaining the persistence of health inequalities in England. The purpose of this blog post is to make the case for challenging and redefining each of these strands, and I do so through making three propositions at the end of each section below.

Political Economy

The economic recession of the mid-to-late 1970s in the UK led to a crisis in Keynesianism and enabled a shift to neoliberal ideology, as the then Labour government submitted to budgetary restraint and austerity mandated by the International Monetary Fund (Crouch, 2011; Harvey, 2005). Successive governments, commencing with Thatcher’s tenure as Prime Minister in 1979, have viewed the market as the optimal or default form of economic organisation (Sayer, 2016), while key instruments of the welfare state, including local government, were seen as placing constraints on the central state’s marketisation project (Crouch, 2011).

These changes also marked (and arguably responded to) a shift from collectivism to individualism in society. One consequence of this was that inequality was no longer seen as necessarily problematic, which allowed for less progressive taxation, privatisation of public assets and runaway salaries at the top of the labour market. As a result the UK witnessed rising income inequality from the late 1970s onwards. Since then health inequalities have remained remarkably persistent and, as Scambler and Scambler (2015: 343) note, there have been, ‘endlessly repeated statistical associations linking socio-economic classification to health’. Much less common, but still important within the academic literature, are calls to question the structural and class forces that are key causal factors in producing health inequalities (see, for example, Coburn, 2009; Lynch, 2017; Scambler and Scambler, 2015; Schrecker, 2017; Wistow et al., 2015).

The apparent disconnect between the understanding of the pattern and distribution of health inequalities and the reluctance amongst some in academia to ask the deeply sociological questions (essentially the next logical step) about the structural forces producing these is our first hint at how political economy, macro-economic policy arising from this, and methodology are aligning to undermine the potential to tackle health inequalities.  

Without addressing and understanding the causal processes produced through the political economy, policy designed to tackle health inequalities is likely to be no more than a ‘sticking plaster’ for the deep cut that these represent within society.

Policy

The extent to which the state is seen to have a role in modifying the inequalities that are (re)produced across society should be a key debate within social and economic policy. However, as we have seen, this has too long been either neglected or, more conspiratorially, intentionally marginalised in mainstream debates about the political economy. Within the more focused domain of health and health-related policy, it is important to question the extent to which governance structures enable the social determinants of health to be effectively identified and addressed.

Gerald Wistow (in Wistow et al., 2015) described the NHS as being based on a ‘category error’ that tended to mirror the medical model’s concern to treat symptoms more than causes of ill health. The relative power of medicine within the NHS remains largely secure, despite prominent challenges from McKeown in 1976 and the development of a longstanding evidence base about health inequalities dating back to the Black Report in 1980, subsequently revisited and refreshed through the 1998 Acheson Report and the 2010 Marmot Review. In short, each of these pointed to the material and structural causes of health inequalities, and to what should be interpreted as the complex causal relationships that intersect with social standing (class) and place in particular.

The fact that only four per cent of NHS spending in the first decade of the 21st century went on health promotion and preventing illness (NAO, 2013) demonstrates just how skewed resourcing is towards treating the symptoms rather than the underlying causes of ill health. It is also significant that this distribution of resourcing occurred at the same time as New Labour’s Tackling Health Inequalities policy agenda.

Even within the field of health promotion and prevention the balance is wrong. A recent speech by the Secretary of State for Health and Social Care, Matt Hancock, on prevention nicely illustrates this point. He started by saying, ‘I want to talk about how preventing ill health can transform lives, and transform society for the better too.  That might sound radical. It is intended to.’ To many this is far from radical; it is nonetheless revealing that Hancock both thinks it is and has gone so far as to emphasise the point.

Hancock’s focus then moves on to the significance of people’s behaviours for their health. The argument here is not that behaviours are unimportant (they are extremely significant for health outcomes), but that they are complex and nested in a wider social structure within a neoliberal political economy. Hancock’s response, in this respect, is both predictable and disappointing: ‘I want to see people take greater responsibility for managing their own health…how can we empower people to take more care of their own health?  By giving people the knowledge, skills and confidence.’ He does not, however, consider the nature of the political economy, nor the social determinants arising from it, in his analysis; instead he essentially isolates behaviours within debates about individual agency, without due consideration of the social and structural contexts in which these are situated.

The NHS compounds the category error made at the level of the political economy – the use of a sticking plaster when stitches are required – and, to extend the metaphor, does so by producing a plaster that is both too small and the wrong shape for the cut, with little hope of covering it, let alone stemming the flow. Under these circumstances methodology should help us to understand and frame the problem better, but as we will see the dominant approaches here are lacking.

Methodology

Back in 1979 Lesley Doyal, with Imogen Pennell, wrote The political economy of health, arguing that the ‘emphasis on the individual origin of disease is of considerable social significance, since it effectively obscures the social and economic causes of ill health’. They continued that the medical emphasis on individual causation is one means of defusing the political significance of the ‘destruction of health’.

As we have already seen, the evidence base linking social and economic inequalities to health inequalities has developed considerably since Doyal and Pennell were writing. However, despite public health researchers and professionals generally understanding that the causes of ill health are complex and structural, Salway and Green (2017: 523) conclude that both health promotion campaigns and journal evidence ‘suggest we remain deeply wedded to linear models and individualistic interventions.’ Here the dominance of Randomised Controlled Trials (RCTs) and, to a lesser extent, epidemiology has significant limitations for understanding an inherently social problem such as health inequalities.

RCTs are often considered to be the gold standard in medical research and are increasingly influential in public health and social science research. For example, the National Institute for Health Research has recently funded a £1.5 million evaluation (in the form of an RCT) of ‘Strengthening Families, Strengthening Communities’, an evidence-based parenting programme designed to promote protective factors associated with good parenting and better outcomes for children. However, questions have to be asked about the extent to which an RCT is capable of understanding the dynamic and relational factors that operate at levels above and beyond individuals (see, for example, Kelly, 2010) in a research problem such as this. Instead, RCTs fit very well with medicine’s reductionist approach of simplifying and dividing problems into sub-components, thereby losing sight of the embeddedness of entities in their interconnections (Chapman, 2004).

The influence of epidemiology on public health policy can be seen in Hancock’s speech about prevention. Here public health policy is reduced to identifying factors and markers for disease and using knowledge and empowerment to encourage people to take responsibility for these. The political significance of health is lost, and questions are not asked about the ‘causes of the causes’ of ill health, with implications for how the state, through public servants, is likely to encourage individual responsibility for health across social classes. For example, is someone who works as a professional and lives in a healthy area more or less likely to be receptive to being empowered through knowledge than someone who lives in an unhealthy neighbourhood and whose experience of and interactions with the state are largely negative, through the excessive monitoring and surveillance that comes with living on incapacity benefits (or having previously lived on them but no longer doing so due to changes in allocation criteria)? The point here is not that epidemiological data is not useful, but that it should be only one part of the inquiry.

Complexity theory helps to respond to these limitations: firstly, through framing complex problems like health inequalities; and secondly, through advocating mixed methods and perspectives. Let’s focus on premature mortality rates resulting from cancer as an example of health inequalities. Many of the causes of cancers for individuals (as ‘cases’) relate to a complex combination of lifestyle behaviours such as smoking, diet, exercise, bodyweight, and exposure to sunlight. However, it is very difficult to isolate these behaviours and attribute causation to them as individual variables. Indeed, we can question whether this is a desirable strategy, given that in practice people do not live their lives in neat and separate component parts: diet, frequency of exercise, social and work activities, and alcohol and nicotine consumption are all parts of the complex whole that makes up individuals’ lifestyles.

But this is not the whole picture (and an example of where Hancock’s analysis is fundamentally flawed), because lifestyle, in turn, relates to (but is not wholly determined by) the contexts in which people live their lives, and different people react to these contexts differently. The social gradient linking health status to social status is an important factor here, and cuts across a multi-scalar approach to context that includes family, workplace, neighbourhood settings, towns, cities and regions – all important contextual characteristics within which people lead their lives. Here we are talking about multiple and non-linear causation in which individuals are embedded, and the importance of interactions between these multiple causes. In responding to these issues, Castellani (2014) argues that for complex systems both social reality and data are best seen as ‘self-organising, emergent, non-linear, evolving, dynamic, network-based, interdependent, qualitative and non-reductive’. It follows, therefore, that no one method (especially statistics) ‘can effectively identify, model, capture, control, manage or explain them’.

Health inequalities are complex and no one methodological approach should take precedence in understanding these.  By opening up the methodological toolbox and using a variety of methods we can develop a much fuller understanding of how these exist within complex systems, which, in turn, directs our attention much more fully to the constraints in the political economy and the limitations of policy as presently configured.

Jonathan Wistow is an Assistant Professor in the Department of Sociology at Durham University. This blog is based on research for a chapter in a forthcoming book commissioned by Policy Press, Social policy, political economy and the social contract, due for publication in 2020.

References

Chapman, J. (2004) System failure: Why governments must learn to think differently, London: Demos.
Coburn, D. (2009) Income inequality and health. In Panitch, L. and Leys, C. (eds) Morbid Symptoms: Health under capitalism. Pontypool: The Merlin Press.
Crouch, C. (2011) The strange non-death of neoliberalism, Cambridge: Polity.
Department of Health (1998) Independent Inquiry into Inequalities in Health – Report. London: HMSO.
Doyal, L. with Pennell, I. (1979) The political economy of health, London: Pluto Press.
Harvey, D. (2005) A brief history of neoliberalism, Oxford: Oxford University Press.
Kelly, M. (2010) The axes of social differentiation and the evidence base on health equity, Journal of the Royal Society of Medicine, 103(7), 266-272.
Lynch, J. (2017) Reframing inequality? The health inequalities turn as a dangerous frame shift, Journal of Public Health, 39(4), 653-660.
Marmot Review (2010) Fair Society, Healthy Lives: Strategic review of health inequalities in England post-2010, London: The Marmot Review.
McKeown, T. (1976) The role of medicine: Dream, mirage or nemesis? London: Nuffield Provincial Hospitals Trust.
NAO (2013) Early action: Landscape review: https://www.nao.org.uk/report/early-action-landscape-review/​
Salway, S. and Green, J. (2017) Towards a critical complex systems approach to public health, Critical Public Health, 27(5), 523-524.
Sayer, A. (2016) Why we can’t afford the rich, Bristol: Policy Press.
Scambler, G. and Scambler, S. (2015) Theorizing health inequalities: The untapped potential of dialectical critical realism, Social Theory and Health, 13(3/4), 340-354.
Schrecker, T. (2017) Was Mackenbach right? Towards a practical political science of redistribution and health inequalities, Health and Place, 46, 293-299.
Wistow, J., Blackman, T., Byrne, D. and Wistow, G. (2015) Studying health inequalities: An applied approach, Bristol: Policy Press.
Competitions

Using Creative and Visual Methods to Research Across Difference

By Rachel Brooks

I am currently working on a research project that is exploring the different ways in which the higher education student is conceptualised across and within six European nations (see here for further details about the study). We are collecting data from a variety of sources including university websites, newspaper articles, policy texts, interviews with policymakers and higher education staff, and focus groups with students. To help stimulate discussion in the focus groups, we have asked students to use plasticine to make models of how they see themselves and how they think others see them. We have found this an effective means of making abstract concepts rather more tangible – but were initially concerned that such methods might be viewed rather differently in the countries in which we are conducting research (for example, would UK students, many of whom have become used to more participatory approaches within university classrooms, be more favourably disposed to plasticine modelling than their peers in Germany who may have had less exposure to such pedagogies?).

Our reading of the wider literature, when grappling with such issues, indicated that while increasing use is made of both creative and visual methods in social research, to date there has been very little discussion of the extent to which such methods can be used in comparative research. For this reason, we ran a seminar in June 2018 – kindly funded by the International Journal of Social Research Methodology – to explore some of the challenges of using these methods cross-nationally. In particular, we were keen to examine the different cultural associations that may be brought to bear in different national contexts, and how these are accounted for in research design, data collection and analysis. Indeed, a key aim of the seminar was to draw on the experiences of researchers working in these areas, to explore how such challenges can most effectively be addressed. We also wanted to look at approaches that used creative and visual methods to research across difference more generally – for example, across different social class groups within a single nation.

Overall, the day brought together many fascinating accounts of using creative and visual methods in these ways, and provided a forum for both academic staff and postgraduate students, with an interest in these approaches, to share ideas and experiences. Keynote talks were given by Agata Lisiak from Bard College in Berlin and Rita Chawla-Duggan from the University of Bath. Agata’s talk, entitled ‘A Tale of Two Cities: Notes on Creative Methods in Research on Migrant Mothering’ provided a fascinating account of her use of drawings with mothers who had migrated from Poland to Birmingham in the UK and Munich in Germany. She argued, convincingly, that creative methods can help to facilitate what Jennifer Robinson calls ‘a comparative imagination’ and open up new kinds of narrations about migrants’ everyday urban experiences, sense of belonging, and negotiations of motherhood ideologies. A film of Agata’s talk can be found here. Rita drew on her recent experience of conducting research in four different national contexts in her talk on ‘Using Visual Technology in Comparative Studies: Researching Young Children’s Perspectives on Fathers’. She maintained that the use of films (and some other types of visual technology) can help explicate young children’s perspectives of learning as it occurs through interactions with their fathers. However, she also raised a series of interesting questions about this particular methodological approach, such as how we define the boundaries of a visual case study, and how we make comparisons within and between such cases. An audio recording of Rita’s talk can be found here.

Alongside the two keynote addresses, the day included eleven other presentations – from researchers at a variety of different career stages – about how they had employed some form of creative or visual method to research across difference. These included the use of Lego figures (in Jon Rainford’s research on widening participation practices across different higher education institutions), a new visual mapping tool (in Michael Donnelly’s study of the geographic and social (im)mobilities of university students in the UK), art workshops (in Susana Campos and Vicki Harman’s work with female survivors of domestic violence in Portugal and England) and photo-elicitation techniques (in Kate Burningham and colleagues’ research with young people in seven different national contexts). Slides, films and audio-recordings from all the talks given during the day can be found here.

The seminar was successful in bringing together a community of researchers working on similar methodological issues, in different national contexts and at different career stages, and providing a forum for methodological dilemmas in this area to be discussed, and ways forward proposed. We also hope that it will make a contribution – through the special issue that we are in the process of putting together on the basis of the seminar contributions – to both advancing debates internationally about the use of creative and visual methods in comparative research and enhancing the profile of the use of creative and visual methods in such work.

Rachel Brooks, University of Surrey

Notebook

Democratisation vs big data? A dialogue between our editors Ros Edwards and Malcolm Williams

Ros Edwards

Democratisation and big data have established themselves as key developments in social research processes. But I’m wondering, are they pulling in opposite directions?  Democratised methodologies immerse researchers within communities, undertaking relational work up close. Big data, on the other hand, has been described as a gaze from 30,000 feet.

Voices advocating radical challenges to traditional research practice through democratisation have questioned the model of research that positions the people who are the focus of study as subjects, and those who research them as experts who can analyse and evaluate.  Under the democratisation paradigm, research seeks to serve the needs of those who’ve traditionally been excluded from positions of power, to highlight the voices of those who are disenfranchised on the basis of their gender, race/ethnicity, disability or other characteristics, and to further human rights.  Democratised methodologies are concerned with ensuring that people who experience marginalisation influence research at every level of the process, to identify what it is that’s important to research, and how the community may benefit from involvement.  

The other development, the increasing availability and use of big data, potentially creates critical tensions for democratising research methodologies and knowledge production.  The potential (and seduction) of big data is the scale and availability of large sets of data that may be analysed; it promises easy access to massive amounts of data.

On the one hand, this may make access to data more democratic, with marginalised groups able to obtain material relevant to topics they have identified as important to them, and to engage in analyses with transformative potential.

On the other hand, while big data gives the illusion of providing unmediated and direct access to people’s beliefs and experiences, in fact it’s just as socially mediated and constrained as any other form of data. Indeed, the background frameworks structuring what knowledge gets collected in the form of big data and how it’s analysed may hide the transparent interpretation of human experience that’s central to democratising methodologies.  

But can democratisation of the research process and big data fit together, and if so, how? As big data enables new ways of knowing and defining social life, these new ways of knowing need to be critiqued for their limitations as they emerge. Perhaps it’s here that democratising methodology can step in and enable marginalised groups to have an input and make a difference.

Malcolm Williams

I’m not sure that I would entirely agree with you because I would pose a different question – who is research for?

I don’t think democratising methodology is one size fits all. At the level of community research, action research and the like, where the methods and choice of methods are relatively accessible, I would agree with you. But when we get to complicated, highly technical, statistically sophisticated methods and methodology, then this is a bit like democratising high energy physics.

But it is at the level of what we ask and for whom we are asking it that some level of democracy is appropriate. What do we research and why? Can the results be made more open and accessible? Even then, I worry that democracy can itself be destructive of wellbeing when people vote or choose in a knowledge vacuum. I think we have a home-grown example in Brexit.

The problem with big data is not method or methodology (though it might be at a technical level), but who owns it, who interrogates it, and who gets to choose the questions.

Ros Edwards

I can see the strength of the points that you are making, and I wouldn’t disagree with all of them all of the time! I agree that the research questions asked of big data and who chooses them are central issues, but most democratising approaches wouldn’t draw a distinction between that and methodology – where methodology is an understanding of how we can go about gaining knowledge about how the world works, and thus how research should be carried out.

I also want to stress that the background to this is inequality and power and how it relates to methodology. There’s a long history of marginalised communities who have been problematised and stigmatised through the damaging assumptions embedded in the methodological approach and the methods used by social researchers. There is the potential that democratised research (just like democracy generally) may produce knowledge that’s problematic or skewed, although probably in different ways to traditional approaches, and that knowledge is open to challenge. But there’s just as much, maybe more, potential that it produces greater illumination.

There’s a tendency to see uses of ‘big data’ in social research as somehow antithetical to democratised research but I’m asking whether and how the two may be brought together. Attention to methodology seems to me to be an entry point.

Malcolm Williams

I absolutely agree with your second point, but how to get there? Firstly, I’d reiterate that we are at one in respect of participatory research, action research and the like: methods and methodology can and should be democratised. ‘Big data’ and (say) complex administrative datasets do present a bigger challenge. In respect of the former, a current methodological question might be whether the general linear model is still an appropriate methodological framework. The debate on this is technical and statistical, and one I find challenging, but it makes a profound political difference to what is asked and how it is asked. So, how can we improve the links of accountability and decision-making between data scientists and the public, in order to avoid dangerous assumptions, stigmatisation and so on?

The answer, for me, is threefold. Firstly, we can and should democratise data in society, through more informed critical reflection in the media and by promoting a better understanding of data in government and the third sector. School students must study English (or French, Spanish, Chinese, etc.) and mathematics, but they should also study data in society, to give them a critical understanding of how data are used to persuade or govern.

Secondly, and inevitably, at the technical and operational level methodological and methods choices must be made by a trained cadre of professionals. To deny this begs the question of why we have social science degrees, PhDs and so on. However, we should resist any shift towards training in social science that is wholly about methods and methodology, at the expense of scholarship and critical reflection. It seems obvious, but all social scientists should have a critical understanding of the ideological and political context of data, methods and methodology. In the UK, at least, the danger is that universities and the ESRC are often dangerously focussed on measuring impact within a very narrow normative ideological agenda. So what we do and how we do it is so often decided by those who fund it!

Finally, we should resist the colonisation of the analysis of big data by those without a social science background, for at best their questions may be trivial, and at worst informed by whatever is the current ideological fashion or project of government or big business.