
How can we measure and communicate the impact of science?

How can we measure the true impact of science? We're seeking feedback on indicators of the utility and rigor of publications beyond traditional journal metrics. Your input will help shape the future of our publishing experiment.
Published on Mar 29, 2024

Purpose

Traditional signals of scientific quality — journal titles, closed peer review, and impact factors — don’t fully reflect the utility and rigor of scientific work. Since our publishing platform exists outside of traditional systems, these signals wouldn’t be available to us or those running other open science initiatives even if they were reliable. Scientists publishing both inside and outside of traditional systems face plenty of other challenges as well, including discoverability, tracking reuse, and finding ways to re-evaluate quality over time when sharing living documents.

We need new ways to evaluate science that better capture its true value and can be displayed directly on a scientific output so researchers can more easily utilize and expand on it.

The questions we’ve laid out at the bottom of this pub serve as conversation starters to creatively reimagine how we measure scientific efforts, especially forays into open science. We hope this dialogue will inspire us and others to develop open resources and tools that support science sharing for all collaborators in this space. Stay tuned for future publications where we'll share insights from our experiments with different reuse metrics.

Read on for background on what we’ve tried so far, or jump straight to the questions and start a dialogue.

  • This pub is part of the model creation effort, “Reimagining scientific publishing.” Visit the project narrative for more background and context on our approach to publishing.

Share your thoughts!

Watch a video tutorial on making a PubPub account and commenting. Please feel free to add line-by-line comments anywhere within this text, provide overall feedback by commenting in the box at the bottom of the page, or use the URL for this page in a tweet about this work. Please make all feedback public so other readers can benefit from the discussion.

Motivation

Research is most impactful when it’s findable, accessible, and useful. Thus, a major goal of our publishing experiment is to release rigorous work that we and others can replicate and build upon. This is why we publish our science openly — complete with all the data, code, methods, and other information necessary to reuse and evaluate it.

Since we began iterating on our publishing framework [1], we’ve seen some early signs of success within and beyond Arcadia: community-driven GitHub contributions, reuse of our strains/reagents, alterations to preprints based on our modular reviews, and open feedback beginning to shape the way we think about our science.

Despite that, we are still working to identify all the indicators that will let us understand if we’re meeting the goals of our publishing experiment.

Aims for our publishing model

As described in our “Reimagining scientific publishing” narrative, we’ve identified three key qualities to maximize in our publishing experiment.

Speed: Sharing smaller, more modular pieces of research as we go will let people learn about and use our findings more quickly and will accelerate scientific progress as a whole.

Utility: By breaking from rigid journal formatting, we can maximize usability and explore interactivity. Our data will be easy to find, access, use, and repurpose in ways we can’t predict.

Rigor: We welcome public comments from anyone; expertise lives everywhere, not just where you look for it. Diverse feedback and iterative engagement will improve our work and help us meet community needs. A key signal of rigor that we’re focusing on is reuse: are others able to replicate and build upon the work we release?

What do we measure so far?

Strong metrics can inform our internal strategy and, when shared publicly, provide the people encountering our work with a means to quickly and effectively evaluate its usefulness. While we don’t yet communicate any of this data to readers, we currently gather and analyze a variety of quantitative metrics, including:

Metrics about individual pubs

  • PubPub:

    • Pageviews

    • Unique visitors

    • Country of visitors

    • PDF downloads

    • Number of public comments

    • Traffic sources

  • Citations (via Google Scholar)

Metrics about linked resources

  • Protocols.io metrics:

    • Views

    • Runs

    • Exports

    • Comments

  • GitHub metrics:

    • Unique visitors

    • Unique clones

    • Number of pull requests (forthcoming)

    • Number of issues (forthcoming)

  • Zenodo metrics: 

    • Views

    • Downloads

We also gather qualitative metrics that could indicate utility and rigor, such as responses to the survey that you'll find at the bottom of every pub and public comments on our platform.

Tracking this data helps researchers see who their work reaches, gauge its quality, and understand how it’s used. Still, it doesn’t help readers determine whether the work is rigorous or useful to them. We’re developing ways to display metrics on our publications that reflect utility and rigor, but we’re still figuring out the best form for that to take. If you have thoughts on what would be useful for you to see, please leave a comment here or on question number one!
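For readers curious about the mechanics, here is a minimal sketch of how a few of the platform metrics listed above might be pulled programmatically. This is not our internal pipeline: the token, repository name, and record ID are placeholders, and the exact response fields may differ across API versions.

```python
# Hedged sketch: pull a few repository and archive metrics via public APIs.
# GitHub's traffic endpoints require a token with push access to the repo;
# the token, repo, and Zenodo record ID below are placeholders.
import requests

GITHUB_TOKEN = "ghp_..."           # hypothetical token
REPO = "example-org/example-repo"  # placeholder repository

def github_traffic(repo: str, token: str) -> dict:
    """Fetch 14-day unique visitors and unique clones from GitHub's traffic API."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    views = requests.get(f"https://api.github.com/repos/{repo}/traffic/views",
                         headers=headers).json()
    clones = requests.get(f"https://api.github.com/repos/{repo}/traffic/clones",
                          headers=headers).json()
    return {"unique_visitors": views.get("uniques"),
            "unique_clones": clones.get("uniques")}

def zenodo_stats(record_id: str) -> dict:
    """Fetch view and download counts for a public Zenodo record."""
    record = requests.get(f"https://zenodo.org/api/records/{record_id}").json()
    stats = record.get("stats", {})
    return {"views": stats.get("views"), "downloads": stats.get("downloads")}

if __name__ == "__main__":
    print(github_traffic(REPO, GITHUB_TOKEN))
    print(zenodo_stats("1234567"))  # placeholder record ID
```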

What else do we want to measure?

While useful, many of the metrics above simply indicate reach (e.g., pageviews) or move at a pace that doesn’t match ours (e.g., citations). Reach can be a useful marketing metric, but on its own, it doesn’t reveal much about our science or its impact. We need new ways to assess the utility of our work, ensure the feedback loop is fast enough to improve it, show scientific value to readers so they can quickly assess whether a pub will be useful to them, and indicate how public feedback influenced our science.

What could we measure that would be more informative, and how would we collect that data efficiently? What parts of a pub is a given researcher using (code, protocols, data, etc.), and are they usable? How can we tell if our tools directly or indirectly inspire future work?

Many organizations and individuals are innovating in this realm; we aren’t alone in this struggle. PLOS developed a set of “Open Science Indicators” to better understand the uptake of open science practices throughout the scientific ecosystem [2]. Recognizing the limitations of journal metrics, researchers in various fields have also proposed alternative frameworks. For example, the “Scientific Impact Framework” seeks to evaluate the influence of a piece of research using quantitative and qualitative metrics across multiple domains, from dissemination to implementation in public health policy [3]. And, with the rapidly expanding role of social media in facilitating scientific discussion, a variety of groups are working to gain new insights into who specific outputs are reaching and the dialogue surrounding them [4].

How might we continue to innovate together, share resources to document these efforts, and evaluate their outcomes?

Our goal is not to create a different impact factor — we recognize that scientific value cannot be boiled down to a single number and believe it should be conveyed through an array of different indicators. With rapid advances in AI and language processing, we as a science community are well-positioned to build nuanced, useful, and easy-to-parse methods to measure this.

Let’s have a public conversation about how to identify and communicate qualitative and quantitative signs of rigor, utility, and reuse. We hope this forum will spark ideas for us and others to develop open tools or projects that will make it easier to evaluate scientific impact.

Weigh in!

While we’d love any thoughts or feedback you have, we’ve decided to focus on a small set of specific questions to provoke discussion:

  1. In the absence of editorial decisions, what data, tags, summaries, or other information would help you quickly determine if a piece of research is relevant to your interests and use cases?

  2. What existing or novel measures could indicate that research is or isn’t rigorous and replicable?

  3. How might we effectively track the reuse of a given piece of research (i.e., others following up on a finding, applying the knowledge provided, using a tool, etc.)? Are there existing tools that do this well?

  4. What shared benchmarks should the open science community consider to evaluate the success of different publishing models?

If you like the idea of providing open feedback, consider weighing in on the questions above and signing up for our pub digest to get notified when we release new work! Remember, you don’t need to write an entire review — we encourage in-line, modular feedback. Even a quick comment is appreciated!

How can I join the discussion?

We hope you’ll respond publicly to our questions above by selecting/highlighting the question you’d like to answer, clicking the comment icon, and typing in your thoughts (as shown in the GIF below)! You’ll need a PubPub account to do this, but it’s free and quick to make one. Here’s a quick tutorial on how to comment.

Methods

We used ChatGPT to provide feedback on draft text and to suggest wording ideas, then used its responses as inspiration to improve the draft without directly using any of its phrasing.




Contributors (A–Z)

  • Prachee Avasthi: Critical Feedback

  • Megan L. Hochstrasser: Editing, Supervision

  • Jasmine Neal: Writing

  • Robert Roth: Conceptualization, Writing

Comments (4)
Jasmine Neal:

It may be interesting to display who and how the work is being used in other contexts. Similar to the “works for me” button found on protocols.io, researchers could indicate if the tool or result is “in use” and perhaps elaborate on their use case. This could lead to a dropdown list or other feature that shows all the places the work is currently in use and what it is being used for. Listing the examples of reuse could also drive traffic to the work of other scientists in the community so it may incentivize others to indicate that they’re using a tool, result, etc. Just an idea! :)

Pavithran Narayanan:

Before attempting to answer any of these questions, I think it is necessary to look at (and probably measure, if possible) the reach and penetration of Arcadia Science in the research community. Only if a significant fraction (used here as a loose term) of researchers are aware of the company and its work will it make sense to measure impact. Please note that this reach is different from the reach mentioned in this pub, which is a measure of the views, downloads, comments, etc.

Active Outreach:

The reach of the company could, in many ways, depend on the kind of outreach strategies the company employs. I think what the company currently needs is what I call an “active outreach” strategy. This essentially involves directly reaching out to researchers in a given field through email, in person at conferences, or other relevant means. For example, if the company works on a project on Actin structure, the company needs to actively reach out to the researchers (PIs, postdocs) working in the same field and let them know about the work that it has recently published. This would allow them to know about the work and engage with it as per their preference.

After an initial round of active outreach in a given field, this circle would be expected to grow and the work of the company could be expected to penetrate the community with time. This would potentially help the pubs receive more visibility and feedback, and probably get cited in other publications. The company would then have something on the slate to start with, upon which it can build useful metrics and approaches to track utility, rigour, and reuse.

Such an active approach may ultimately pave the way for either other researchers to directly engage with the research group at Arcadia Science (so that reuse could be directly tracked through personal communication and ultimately as citations in published works) or the company to develop much more robust measures for the reuse of its work.

Jasmine Neal:

Thanks for your feedback Pavithran, you bring up really great points! I’m actively thinking about this aspect of our publishing experiment at Arcadia so I wanted to respond to your comment with some more context and questions. 🤗

Active Outreach

We definitely agree that an active outreach strategy is necessary and I’m glad to say that Arcadia does execute active outreach via social media and email, at conferences, by co-hosting virtual and IRL events and more. We also track our interactions with people in our community to monitor its growth and better understand how people interact with us. Reach is a bit easier to measure for conferences since we can easily compare the number of scientists who joined our newsletter/community with total conference attendance. But how do we measure our reach more broadly? What number can we use to get a sense of the scientific community at large? (I don’t have an answer for this yet but I welcome all suggestions!)

As you suggested, monitoring active outreach and the growth of our community does give us a glimpse into reuse and we can even anticipate citations based on how other researchers respond to our outreach, ask questions etc. However, much of the valuable discussion remains behind closed doors if done via email or buried in threads on X. Oftentimes, a question or suggestion that one researcher may have is shared by another researcher.

One difficulty, especially pertaining to email outreach, is “converting” any given feedback into a comment so that the entire community can benefit from it. At times there is a reluctance to post feedback publicly even when asked. Is this because making a PubPub account is a big lift? Or because there is hesitation with publicly criticizing someone else’s work, even if it’s welcomed and done constructively? Or perhaps comments aren’t the best way to make feedback visible to others? (I’d love to hear your perspective on any of these questions!)

I have many more thoughts and ideas but perhaps this merits an open question pub specifically about engagement…? 🙃

Daven Northroup-Kuder:

I think it would be really helpful to communicate the scope and impact of publications. This could help policymakers and the general public understand the focus and utility of different publications. These are some possible impact and scope questions with associated scales:

‘What is the scope of study?’ [peer reviewers would link related topics covered in this paper (e.g., electrochemistry, protein engineering, etc.)]

‘What is the scale of impact for this paper?’ [a 1-5 rating scale from niche to universal]

‘How accessible is this paper?’ [a 1-5 scale from very niche to easily understood]

In addition, a publication’s impact on various sectors (such as policy, technology, education, medicine, etc) could be assessed.

These benchmarks could be determined by a weighted mixed voting system (similar to Rotten Tomatoes’ rating system). Peer reviewers and approved readers would give each publication an initial score, and then every reader would have the chance to score the paper’s impact and scope. The scores of approved users and peer reviewers would carry more weight than those of the general reader, but the general readers would get to report on how they found the article.

Assessing a paper's scope and impact via weighted crowdsourcing would help assess the subjective response to publications. This impact and scope rating system could easily be added to the existing peer-review process and journal/publication platforms.
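As a purely illustrative sketch of the weighted scoring idea above, the snippet below computes a weight-adjusted mean rating. The roles, weights, and 1–5 scale are hypothetical choices, not a proposed standard.

```python
# Hypothetical weighted scoring for a pub's impact or scope rating.
# Peer reviewers and approved readers carry more weight than general readers;
# the 3x/2x/1x weights are illustrative only.
from dataclasses import dataclass

WEIGHTS = {"peer_reviewer": 3.0, "approved_reader": 2.0, "general_reader": 1.0}

@dataclass
class Rating:
    role: str   # one of the keys in WEIGHTS
    score: int  # 1-5 scale, e.g., impact from niche to universal

def weighted_score(ratings: list[Rating]) -> float:
    """Return the weight-adjusted mean score, or 0.0 if there are no ratings."""
    total_weight = sum(WEIGHTS[r.role] for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(WEIGHTS[r.role] * r.score for r in ratings) / total_weight

# Example: two peer reviewers rate impact 4, ten general readers rate it 2.
ratings = [Rating("peer_reviewer", 4)] * 2 + [Rating("general_reader", 2)] * 10
print(round(weighted_score(ratings), 2))  # 2.75
```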

Jasmine Neal:

Thanks for your feedback, Daven! We definitely agree that it would be helpful to communicate the scope and impact to the reader, and this is something we’re actively thinking about! For our other pubs, we currently ask the reader about clarity, utility, replicability, and rigor, but I wonder if we should consider expanding or modifying these questions to help measure impact or scope? For example, we could also ask the reader which sector this work would be useful for, if they indicate that it’s useful. We’d love to make these (or other) questions more prominent and to display results as the PubPub platform evolves to allow us more control.


I’m curious: would you also find it helpful to see how other scientists or labs are using the work, or would the results of a weighted mixed voting system be more helpful for you? Or perhaps a combination of both would be better?

Daniela Liebsch:

When it comes to relevance, for me, I’d simply say it is topic-based, so a very clear, honest summary, keywords, and limitations would be most helpful. Traditional abstracts and summaries are often a bit vague, and tend to advertise rather than state limitations. For tools, something that outlines possible uses, citations showing how it was used (maybe some kind of summary of uses), and something setting it apart from other similar tools, or even a quick comparison with other tools, plus again a specific strength/limitation summary, could help.

Robert Roth:

Thank you for your thoughts on this, Daniela! I’m curious — do you find yourself using filtering tools (such as by topic) to list and then find articles, or do you generally do more of a keyword-based search to find specific tools/publications that could be useful? I’m guessing this depends on why you’re looking for publications, but I’d be interested to hear your thoughts.
