Published on Mar 29, 2024 by Arcadia Science

How can we measure and communicate the impact of science?

How can we measure the true impact of science? We're seeking feedback on indicators of the utility and rigor of publications beyond traditional journal metrics. Your input will help shape the future of our publishing experiment.

Purpose

Traditional signals of scientific quality — journal titles, closed peer review, and impact factors — don’t fully reflect the utility and rigor of scientific work. Since our publishing platform exists outside of traditional systems, these signals wouldn’t be available to us or to others running open science initiatives even if they were reliable. Scientists publishing both inside and outside of traditional systems also face further challenges, including discoverability, tracking reuse, and determining how to re-evaluate quality over time when sharing living documents.

We need new ways to evaluate science that better capture its true value and can be displayed directly on a scientific output so researchers can more easily utilize and expand on it.

The questions we’ve laid out at the bottom of this pub serve as conversation starters to creatively reimagine how we measure scientific efforts, especially forays into open science. We hope this dialogue will inspire us and others to develop open resources and tools that support science sharing for all collaborators in this space. Stay tuned for future publications where we'll share insights from our experiments with different reuse metrics.

Read on for background on what we’ve tried so far, or jump straight to the questions and start a dialogue.

  • This pub is part of the model creation effort, “Reimagining scientific publishing.” Visit the project narrative for more background and context on our approach to publishing.

Share your thoughts!

Feel free to provide feedback by commenting in the box at the bottom of this page or by posting about this work on social media. Please make all feedback public so other readers can benefit from the discussion.

Motivation

Research is most impactful when it’s findable, accessible, and useful. Thus, a major goal of our publishing experiment is to release rigorous work that we and others can replicate and build upon. This is why we publish our science openly — complete with all the data, code, methods, and other information necessary to reuse and evaluate it.

Since we began iterating on our publishing framework [1], we’ve seen some early signs of success within and beyond Arcadia: community-driven GitHub contributions, reuse of our strains/reagents, alterations to preprints based on our modular reviews, and open feedback beginning to shape the way we think about our science.

Despite that, we are still working to identify all the indicators that will let us understand whether we’re meeting the goals of our publishing experiment.

Aims for our publishing model

As described in our “Reimagining scientific publishing” narrative, we’ve identified three key qualities to maximize in our publishing experiment.

Speed: Sharing smaller, more modular pieces of research as we go will let people learn about and use our findings more quickly and will accelerate scientific progress as a whole.

Utility: By breaking from rigid journal formatting, we can maximize usability and explore interactivity. Our data will be easy to find, access, use, and repurpose in ways we can’t predict.

Rigor: We want public comments from anyone because expertise lives everywhere, not just where you look for it. With diverse feedback and iterative engagement, we can improve our work and meet community needs. A key signal of rigor that we’re focusing on is reuse: are others able to replicate and build upon the work we release?

What are we measuring so far?

Strong metrics can inform our internal strategy and, when shared publicly, give the people encountering our work a way to quickly and effectively evaluate its usefulness. While we don’t yet communicate any of this data to readers, we currently gather and analyze a variety of quantitative metrics, listed below (with a brief sketch, after the lists, of how a few of them could be pulled programmatically):

Metrics about individual pubs

  • PubPub:
    • Pageviews
    • Unique visitors
    • Country of visitors
    • PDF downloads
    • Number of public comments
    • Traffic sources
  • Citations (via Google Scholar)

Metrics about linked resources

  • Protocols.io metrics:
    • Views
    • Runs
    • Exports
    • Comments
  • GitHub metrics:
    • Unique visitors
    • Unique clones
    • Number of pull requests (forthcoming)
    • Number of issues (forthcoming)
  • Zenodo metrics:
    • Views
    • Downloads

We also gather qualitative metrics that could indicate utility and rigor, such as responses to the survey that you'll find at the bottom of every pub and public comments on our platform.

Tracking this data helps researchers determine who their work reaches, gauge its quality, and understand how it’s used. Still, it doesn’t help readers judge whether the work is rigorous or useful to them. We’re developing ways to display metrics on our publications that reflect utility and rigor, but we’re still figuring out the best form for that to take. If you have thoughts on what would be useful for you to see, please leave a comment here or on question one!

What else do we want to measure?

While useful, many of the metrics above simply indicate reach (e.g., pageviews) or move at a pace that doesn’t match ours (e.g., citations). Reach can be a useful marketing metric, but on its own, it doesn’t reveal much about our science or its impact. We need new ways to assess the utility of our work, ensure the feedback loop is fast enough to improve it, show readers scientific value so they can quickly assess whether a pub will be useful to them, and indicate how public feedback influenced our science.

What could we measure that would be more informative, and how would we collect that data efficiently? What parts of a pub is a given researcher using (code, protocols, data, etc.), and are they usable? How can we tell if our tools directly or indirectly inspire future work?

Many organizations and individuals are innovating in this realm; we aren’t alone in this struggle. PLOS developed a set of “Open Science Indicators” to better understand the uptake of open science practices throughout the scientific ecosystem [2]. Recognizing the limitations of journal metrics, researchers in various fields have also proposed alternative frameworks. For example, the “Scientific Impact Framework” seeks to evaluate the influence of a piece of research using quantitative and qualitative metrics across multiple domains, from dissemination to implementation in public health policy [3]. And, with the rapidly expanding role of social media in facilitating scientific discussion, a variety of groups are working to gain new insights into who specific outputs are reaching and the dialogue surrounding them [4].

How might we continue to innovate together, share resources to document these efforts, and evaluate their outcomes?

Our goal is not to create a different impact factor — we recognize that scientific value cannot be boiled down to a single number and believe it should be conveyed through an array of different indicators. With rapid advances in AI and language processing, we as a science community are well-positioned to build nuanced, useful, and easy-to-parse methods to measure this.

Let’s have a public conversation about how to identify and communicate qualitative and quantitative signs of rigor, utility, and reuse. We hope this forum will spark ideas for us and others to develop open tools or projects that will make it easier to evaluate scientific impact.

Weigh in!

While we’d love any thoughts or feedback you have, we’ve decided to focus on a small set of specific questions to provoke discussion:

  1. In the absence of editorial decisions, what data, tags, summaries, or other information would help you quickly determine if a piece of research is relevant to your interests and use cases?
  2. What existing or novel measures could indicate that research…
    • is verifiable (i.e., can someone verify that the work is rigorous and replicable)?
    • has been verified?
    • has been expanded or built on?
  3. How might we effectively track the ways a given piece of research is reused (i.e., others following up on a finding, applying the knowledge provided, using a tool, etc.)? Are there existing tools that do this well?
  4. What shared benchmarks should the open science community consider to evaluate the success of different publishing models?

If you like the idea of providing open feedback, consider weighing in on the questions above and signing up for our pub digest to get notified when we release new work! Remember, you don’t need to write an entire review — we encourage in-line, modular feedback. Even a quick comment is appreciated!

How can I join the discussion?

We hope you’ll respond publicly to our questions above by selecting/highlighting the question you’d like to answer, clicking the comment icon, and typing in your thoughts (as shown in the GIF below)! You’ll need a PubPub account to do this, but it’s free and quick to make one. Here’s a quick tutorial on how to comment.

Watch our follow-up discussion

On May 29, 2024, we held a live, interactive discussion with ASAPbio to discuss the topics in this pub. Some comments from the discussion have been posted in the “Weigh in!” section, and you can view the entire recording below. We’re still looking for feedback — feel free to add your own thoughts based on our discussion here!

Methods

We used ChatGPT to provide feedback on draft text and to suggest wording ideas, then used its responses as inspiration to improve the draft without directly using any of its phrasing.



Contributors

Prachee Avasthi: Critical Feedback
Megan L. Hochstrasser: Editing, Supervision
Robert Roth: Conceptualization, Writing