Overton Blog

On knowledge diversity: reflections from ESSS

In this blog post Overton analyst Kat Hart talks about her experience of attending the European Summer School for Scientometrics (ESSS), reflecting on what she learned at the conference.

This year, I travelled to Sapienza University, Rome for the annual European Summer School for Scientometrics (try saying that 10 times quickly) with fellow analyst Ceire Wincott. As the resident Policy Data Nerds* (with a background in academic-policy engagement and research development), Ceire and I are interested in emerging schools of thought on how public policies can better utilise research-based evidence, as well as how Overton’s policy data can be used effectively to support this. 

The ESSS was a golden opportunity to learn more about the fundamentals of scientometrics - how the field is changing and the different taxonomies, tools and approaches which are emerging. For Ceire and me, this is a really important aspect of our work in determining how our data can be used for different purposes (and the issues and difficulties of doing so) and, more broadly, in contributing to the wider field of policy research in a useful and meaningful way. It also helps us support our users in doing the same.

Over the course of the week, we heard about journal impact indicators, altmetrics, research assessment, evaluation, open access, taxonomies, the different tools available for bibliometric analyses and so on. It was a really valuable opportunity to learn from one another, to exchange thoughts and ideas and to hear from experts in the field. Pasta, gelato and a beautiful city were also a bonus…but mostly the learning! 

For the purposes of this blog, we wanted to reflect a little on what we heard, and perhaps discuss a few different things we found particularly interesting now that we’ve had a chance to ‘digest’.

Knowledge diversity

This year’s theme at ESSS was diversity. There were so many interesting presentations (and spoiler alert: there will be more blog posts based around these) but for now I wanted to reflect on this issue in particular.

Diversity and inclusion have, rightfully, been a huge topic for quite a while amongst research communities. While there is obviously a structural or human aspect to this - the way that certain voices are privileged within scholarship, for example - research methodologies have also been a focus. The need for knowledge diversity has been discussed often. This refers to the need for a variety and breadth of information sources, perspectives, and methodologies within a research dataset or field. It means encapsulating diverse viewpoints, ensuring that a wide range of voices, topics, and sources are represented. This diversity is pivotal because it brings richness to analyses, offering a more holistic view of the subject matter - and it is also vital for delivering informed, robust public policy.

A particular highlight at ESSS was a talk from Professor Andy Stirling of the University of Sussex. He discussed the importance of generating diverse, comprehensive and inclusive data sets, but also the approach we take when creating a 'measure' of knowledge diversity in bibliometric assessment. Reflecting on the difficulty of doing this adequately, he raised some important questions - how do we define our categories? At what point is something large enough to start counting? How do we define the parameters for assessment? If we approach it too broadly we may come across issues like ‘dilution’, increased bureaucracy, lead times and so on. If we’re too narrow we lose robustness and the benefits we know diversity brings. Equally, we know that we need interdisciplinarity - people from different backgrounds with different expertise to be a part of research and policy - but how do we assess that objectively? It was really helpful to see different approaches to ‘measuring’ diversity in a more meaningful way and the latest thinking on how bibliometric approaches could be used responsibly to support this. 

Using our learning

Ceire and I deliberated on how these concepts could be applied to the Overton data. We know that knowledge diversity - or the lack of it - is a live issue in the field of policy. For example, the arts are typically underrepresented, even though we know that disciplines like sociolinguistics are incredibly important for policy consultations. 

We recognised the potential to measure knowledge diversity by analysing the diverse institutions, funders, topics, and journals that inform public policy. However, a deeper dive into the 'engagement mechanics' - the underlying interactions, dynamics or ‘causality’ within the data - might be more challenging. This is something we’re really keen to explore further, so please feel free to get in touch if you’d like to chat! 

This poses an interesting challenge in itself. Whilst work has been done to improve accessibility and make it easier for researchers from all groups and demographics to engage with policy, there are still problems relating to motivation. Why should a researcher engage with policy, even if they do come across the opportunity in the first place? We know that - at the moment - demonstrating the impact of doing this work can be challenging (we’re working on it!). Given this, the way that policy impact and influence is ‘measured’ is flawed. It’s extremely difficult to standardise how researchers are engaged and subsequently acknowledged, which makes it inadvisable to adopt a policy impact metric as part of research assessment. This can, understandably, make it difficult to incentivise engagement - there is currently no ‘reward’ for doing so. This can also cause tensions with equity, and potentially pose an additional, inequitable barrier to those historically underrepresented in research and policy. 

Not only this - in encouraging certain communities to engage with policy and, in some cases, ‘put themselves out there’, the risk or ‘cost’ associated with the engagement is higher. Research has found, for example, that women are more likely to be the target of online hate. Amnesty International have reported that Black and Asian MPs are more likely to be abused online. The Council of Europe have also found that Black women are 84% more likely to receive abusive tweets on Twitter. Generally speaking, we know that public engagement via online platforms can have serious negative consequences for researchers, with few preventative measures, protections or avenues for recourse available. 

With this in mind - how should we approach ‘diversity’, in all its forms, within bibliometric studies? How can we harness policy data in a ‘useful’ way that accounts for these issues - both within the data itself, and in the underlying ‘causality’? 

My colleague Ceire will be following up on this in a future post on the Overton blog, where she’ll explore this with respect to scientometric approaches. So watch this space! 

For me, it’s time to bid adieu in preparation for the Nordic Workshop on Bibliometrics and Research Policy (NWB2023) next week! If you’re also joining us in Gothenburg and would like to grab a coffee, please feel free to drop us a line. Our presentation will explore the research-policy interface in Sweden, and how we can ‘go beyond the number’ to explore this in more depth. We look forward to meeting you - either in person or virtually. 


Author notes

*Not our official job title, unfortunately. 



What is Overton

We help universities, think tanks and publishers understand the reach and influence of their research.

The Overton platform is the world’s largest searchable policy database, with almost 5 million documents from 29k organisations.

We track everything from white papers to think tank policy briefs to national clinical guidelines, and automatically find the references to scholarly research, academics and other outputs.