
How can we measure and communicate the impact of science?

How can we measure the true impact of science? We're seeking feedback on indicators of the utility and rigor of publications beyond traditional journal metrics. Your input will help shape the future of our publishing experiment.
Published on Mar 29, 2024


Purpose

Traditional signals of scientific quality — journal titles, closed peer review, and impact factors — don’t fully reflect the utility and rigor of scientific work. Since our publishing platform exists outside of traditional systems, these signals wouldn’t be available to us or to those running other open science initiatives even if they were reliable. Scientists publishing both inside and outside of traditional systems face plenty of other challenges as well, including discoverability, tracking reuse, and determining how to re-evaluate quality over time when sharing living documents.

We need new ways to evaluate science that better capture its true value and can be displayed directly on a scientific output so researchers can more easily utilize and expand on it.

The questions we’ve laid out at the bottom of this pub serve as conversation starters to creatively reimagine how we measure scientific efforts, especially forays into open science. We hope this dialogue will inspire us and others to develop open resources and tools that support science sharing for all collaborators in this space. Stay tuned for future publications where we'll share insights from our experiments with different reuse metrics.

Read on for background on what we’ve tried so far, or jump straight to the questions and start a dialogue.

  • This pub is part of the model creation effort, “Reimagining scientific publishing.” Visit the project narrative for more background and context on our approach to publishing.

Share your thoughts!

Watch a video tutorial on making a PubPub account and commenting. Please feel free to add line-by-line comments anywhere within this text, provide overall feedback by commenting in the box at the bottom of the page, or use the URL for this page in a tweet about this work. Please make all feedback public so other readers can benefit from the discussion.

Motivation

Research is most impactful when it’s findable, accessible, and useful. Thus, a major goal of our publishing experiment is to release rigorous work that we and others can replicate and build upon. This is why we publish our science openly — complete with all the data, code, methods, and other information necessary to reuse and evaluate it.

Since we began iterating on our publishing framework [1], we’ve seen some early signs of success within and beyond Arcadia: community-driven GitHub contributions, reuse of our strains/reagents, alterations to preprints based on our modular reviews, and open feedback beginning to shape the way we think about our science.

Despite that, we are still working to identify all the indicators that will let us understand if we’re meeting the goals of our publishing experiment.

Aims for our publishing model

As described in our “Reimagining scientific publishing” narrative, we’ve identified three key qualities to maximize in our publishing experiment.

Speed: Sharing smaller, more modular pieces of research as we go will let people learn about and use our findings more quickly and will accelerate scientific progress as a whole.

Utility: By breaking from rigid journal formatting, we can maximize usability and explore interactivity. Our data will be easy to find, access, use, and repurpose in ways we can’t predict.

Rigor: We want public comments from anyone. Expertise lives everywhere, not just where you look for it. With diverse feedback and iterative engagement, we can improve our work and meet community needs. A key signal of rigor that we’re focusing on is reuse: are others able to replicate and build upon the work we release?

What do we measure so far?

Strong metrics can inform our internal strategy and, when shared publicly, provide the people encountering our work with a means to quickly and effectively evaluate its usefulness. While we don’t yet communicate any of this data to readers, we currently gather and analyze a variety of quantitative metrics, including:

Metrics about individual pubs

  • PubPub:

    • Pageviews

    • Unique visitors

    • Country of visitors

    • PDF downloads

    • Number of public comments

    • Traffic sources

  • Citations (via Google Scholar)

Metrics about linked resources

  • Protocols.io metrics:

    • Views

    • Runs

    • Exports

    • Comments

  • GitHub metrics:

    • Unique visitors

    • Unique clones

    • Number of pull requests (forthcoming)

    • Number of issues (forthcoming)

  • Zenodo metrics: 

    • Views

    • Downloads

We also gather qualitative metrics that could indicate utility and rigor, such as responses to the survey that you'll find at the bottom of every pub and public comments on our platform.
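For readers curious about how numbers like these can be pulled together, below is a minimal sketch (not a description of our internal tooling) that queries the public GitHub traffic and Zenodo records APIs. The repository and record identifiers are placeholders, the GitHub traffic endpoints require a token with push access to the repository, and the exact fields in Zenodo’s stats block may differ from what’s assumed here.

```python
# Minimal sketch: pull a few of the metrics listed above from public APIs.
# All identifiers below are placeholders, not real Arcadia resources.
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # needs push access for traffic data

def github_traffic(owner: str, repo: str) -> dict:
    """Fetch 14-day unique-visitor and unique-clone counts for a repository."""
    headers = {
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    }
    base = f"https://api.github.com/repos/{owner}/{repo}/traffic"
    views = requests.get(f"{base}/views", headers=headers).json()
    clones = requests.get(f"{base}/clones", headers=headers).json()
    return {
        "unique_visitors": views.get("uniques"),
        "unique_clones": clones.get("uniques"),
    }

def zenodo_stats(record_id: str) -> dict:
    """Fetch view and download counts for a public Zenodo record."""
    r = requests.get(f"https://zenodo.org/api/records/{record_id}")
    r.raise_for_status()
    stats = r.json().get("stats", {})  # assumes the record exposes a stats block
    return {"views": stats.get("views"), "downloads": stats.get("downloads")}

if __name__ == "__main__":
    print(github_traffic("example-org", "example-repo"))  # hypothetical repo
    print(zenodo_stats("1234567"))                        # hypothetical record
```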

Tracking this data helps researchers determine whom their work reaches, how it’s used, and something about its quality. Still, it doesn’t help readers understand whether the work is rigorous or useful to them. We’re developing ways to display metrics on our publications that reflect utility and rigor, but we’re still figuring out the best form for that to take. If you have thoughts on what would be useful for you to see, please leave a comment here or on question number one!

What else do we want to measure?

While useful, many of the metrics above simply indicate reach (e.g. pageviews) or move at a pace that doesn’t match ours (e.g. citations). Reach can be a useful marketing metric, but it doesn’t reveal much about our science or its impact on its own. We need new ways to assess the utility of our work, ensure the feedback loop is fast enough to improve it, show scientific value to readers so they can quickly assess if a pub will be useful to them, and indicate how public feedback influenced our science.

What could we measure that would be more informative, and how would we collect that data efficiently? What parts of a pub is a given researcher using (code, protocols, data, etc.), and are they usable? How can we tell if our tools directly or indirectly inspire future work?

Many organizations and individuals are innovating in this realm; we aren’t alone in this struggle. PLOS developed a set of “Open Science Indicators” to better understand the uptake of open science practices throughout the scientific ecosystem [2]. Recognizing the limitations of journal metrics, researchers in various fields have also proposed alternative frameworks. For example, the “Scientific Impact Framework” seeks to evaluate the influence of a piece of research using quantitative and qualitative metrics across multiple domains, from dissemination to implementation in public health policy [3]. And, with the rapidly expanding role of social media in facilitating scientific discussion, a variety of groups are working to gain new insights into who specific outputs are reaching and the dialogue surrounding them [4].

How might we continue to innovate together, share resources to document these efforts, and evaluate their outcomes?

Our goal is not to create a different impact factor — we recognize that scientific value cannot be boiled down to a single number and believe it should be conveyed through an array of different indicators. With rapid advances in AI and language processing, we as a science community are well-positioned to build nuanced, useful, and easy-to-parse methods to measure this.

Let’s have a public conversation about how to identify and communicate qualitative and quantitative signs of rigor, utility, and reuse. We hope this forum will spark ideas for us and others to develop open tools or projects that will make it easier to evaluate scientific impact.

Weigh in!

While we’d love any thoughts or feedback you have, we’ve decided to focus on a small set of specific questions to provoke discussion:

  1. In the absence of editorial decisions, what data, tags, summaries, or other information would help you quickly determine if a piece of research is relevant to your interests and use cases?

  2. What existing or novel measures could indicate that research is or isn’t rigorous and replicable?

  3. How might we effectively track the reuse of a given piece of research (e.g., others following up on a finding, applying the knowledge provided, using a tool, etc.)? Are there existing tools that do this well?

  4. What shared benchmarks should the open science community consider to evaluate the success of different publishing models?

If you like the idea of providing open feedback, consider weighing in on the questions above and signing up for our pub digest to get notified when we release new work! Remember, you don’t need to write an entire review — we encourage in-line, modular feedback. Even a quick comment is appreciated!

How can I join the discussion?

We hope you’ll respond publicly to our questions below by selecting/highlighting the question you’d like to answer, clicking the comment icon, and typing in your thoughts (as shown in the GIF below)! You’ll need a PubPub account to do this, but it’s free and quick to make one. Here’s a quick tutorial on how to comment.

Methods

We used ChatGPT to provide feedback on draft text and to suggest wording ideas and then used its responses as inspiration to improve the draft without directly using any of its phrasings.




  • Contributors
    (A–Z)

    • Prachee Avasthi

      • Critical Feedback

    • Megan L. Hochstrasser

      • Editing, Supervision

    • Jasmine Neal

      • Writing

    • Robert Roth

      • Conceptualization, Writing

Comments (24)
Victor Holmes:

I admire deeply how you’re setting up a new system for sharing science - I hope the community can help you craft a model that works. I’m worried about the incentives: Why would other scientists give you the time it takes to comment, review, offer feedback? Cynically I think willingness drops off with seniority.

Consider moving to a licensing model for your information? A no-money “paywall” that gives access to pubs only to registered users. Access to the next pub would require feedback about the last, or spending ‘community credit’, which is earned through comments on other pubs, feedback on how pubs were used, etc. This could even be filling out a quick survey.

This is still basically free and accessible to anyone, but the data you might gather through (now somewhat mandatory) feedback will tell you what is impactful. Willingness to spend community credit to, say, read past the abstract will give you information about the interest level in a pub. Later, the ratio of citations to the number of readers who accessed a pub tells you something about its impact.

Also cynically, I believe people treat something proportionally to what it cost to get it. Asking for just a little bit of community engagement in exchange for your pubs would both increase users’ perception of their value and get you much more of the feedback you’re seeking.

Robert Roth:

Thank you for your comment and thoughtful response. The challenges of incentives and engagement are ones we're actively exploring, and your suggestion is an interesting one.

As outlined in 'Publishing v2,' we're experimenting with new ways to empower our scientists to engage with the community and foster open dialogue. This process used to be top-down, so we’re in the early stages of building out what those tools will look like. 

Briefly, our aim is to release high-quality, useful science such that providing feedback becomes a mutually beneficial exercise. We believe that when scientists find our work valuable and readily applicable to their own research, they'll be more inclined to contribute their expertise and help us improve the work that they’re utilizing.

We also actively try to model this behavior by providing feedback on other researchers' preprints (over 2,000 comments across more than 500 preprints so far!). We believe that demonstrating the value of public comments can create a positive feedback loop that accelerates scientific progress, and we’re already seeing evidence of this.

While a licensing model could be an effective incentive for some, I do have a few reservations that I’d be curious to hear your thoughts on:

  • Accessibility: We're committed to making our science accessible to everyone, regardless of their ability to contribute feedback. Introducing a "paywall," even one based on community credit, could create a new barrier for scientists with limited time or resources.

  • Machine readability: As part of being open, we want to move toward making sure our work is machine-readable to better take advantage of the rapidly evolving AI/ML space. A licensing system could make it more difficult for others to use our work in this way.

  • Diverse perspectives: We believe valuable feedback can come from anyone, regardless of field or seniority. A system that prioritizes feedback from those who have the time/resources to contribute in a way that gets them enough credit could inadvertently skew the perspectives and could lock out some of the ‘one-off’ visitors to our science.

There are many avenues to explore in this area, and I agree that it would make measuring the impact of our work much simpler, but I think the downsides may not be worth it. I’d be really interested in keeping this conversation going if you or anyone else reading this has additional thoughts.

Jennifer Ramirez:

As a visual learner, one thing I enjoy is when publications have a graphical abstract. These can increase initial engagement with an article for interdisciplinary readers who might otherwise see a wall of text and be drawn away.

Robert Roth:

Thank you for your comment! I’m curious — do you tend to look at the graphical abstract (when one is available) before or after reading the abstract/introduction?

Bethany McCarty-Kirkman:

Have you considered partnering with a vendor to track how its sales change following a publication? I’m thinking that if someone is trying to replicate the same experiment, they may opt to use your exact vendor for reagents.

Have you considered adding an “endorsement” feature? Once a scientist has used and validated your catalyst/method/technique/code, they can “endorse” the publication.

Robert Roth:

This is a really interesting idea! I could see this being a good signal of reuse, provided that we’d be able to detect a (potentially small) spike against the background volume of sales/requests/purchases/etc. and have the confidence to correlate it with the publication. I imagine it would be easier with lesser-used materials versus more common ones.

As for the endorsement feature — it’s something we’ve considered, but maybe with slightly different language. Something like a ‘worked for me’ (à la protocols.io) or ‘this was useful to me’ button could be a great way to immediately show utility. I’d welcome any thoughts that people have about what would be most useful to see on a pub, or what kinds of ‘ratings’ might be most effective.

Unain Ansari:

I think an important need in science publishing, to make it more transparent and accessible, is the publication of “negative” results. Research is often bogged down by people repeating things that take extensive time and resources and don’t end up working out in the end. The open science community needs to be more forgiving and transparent in sharing not only important discoveries but also things that don’t work out, so people can build from them rather than start from scratch.

Robert Roth:

Thank you for adding to the discussion! I agree that increased publication of negative results is a major need (and something that Arcadia strives for in our publishing experiment). Another point about negative results that I think about is how we can make them more findable and accessible so that a given researcher may find them before they start their work. I'm definitely curious about new ways of approaching that as well.

Ethan Beswick:

There is always a moment (or years) when research would be logically applied within its own microcosm, and I think that may be more true in a world where there are fewer structural barriers to publication/writing.

  1. A metric/visualization mechanism that captures the velocity and depth of the research, with differentiation (as others have suggested) between validation vs. expansion, would appear extremely useful. If we’ve moved past the construct of “only experts” being able to validate an idea before publication, then the overall impact needs to be more than sheer ‘number of comments,’ as this would be easily manipulable.

    The nature of “expanding” on the research has value both within the field of intended research and outside of it. Separating those two is often necessary in my own experience as research goals and expectations of outcomes are often very different. This could enable separate scoring mechanisms on “impact” and separate the noise of the discussion of progress in very different arenas.

  2. Language/specificity: If you are looking for the broadest possible audience, then some mechanism for allowing translation of the research for different fields of research and languages would likely be useful.

    From a pure language perspective, translation would seem simple and easy to incentivize given how widespread geographically the open science community is.

  3. Purely commenting on how this paper can be improved, or on its faults, without response/validation of the logic in those comments can be an equally slippery slope of manipulation. If commentary on research will be public, then, to an extent, the validation and citation of those ideas could be incorporated into future research to help substantiate the thought process. That way users have an incentive to give great feedback that is intelligible, constructive, and usable in future research. This is then usable both through a search mechanism and as user-curated “this was useful for my research in [field X].”

Robert Roth:

Thank you for taking the time to comment — you bring up a lot of interesting points here. I’m especially curious about your second point. When you say ‘translation of the research for different fields,’ how do you imagine that looking? Would it be translating specific terminology that would be more easily understood by someone in a different field, or a larger overhaul of the work?

L. Robert Hollingsworth:

Engagement in the new publishing models is key. (1) Importantly, how many **collaborative** studies have been published in these new forums? It’s much easier to make radical publishing decisions by oneself than to inspire colleagues to take risks. How do we incentivize groups to build in public release of their study through these models at the design phase of a project? The open science community is large but spread out, so it might be important to get folks together who are experts in different disciplines/methods to foster collaboration aimed explicitly at publication experimentation.

(2) Are people reviewing, and getting reviews? What is the quality of those reviews? Are they leading to meaningful changes in the research studies? While traditional peer review has its downsides, the upside is a level of deep reading/scrutiny that is unusual when reading literature daily (I’ve spent upwards of 10–15 hours reviewing papers!). How do we get a mix of the things readers can catch quickly, but also incentivize deep review? Is there a way to build in incentives for peer review, such as the ability to request verified reviewers in exchange for needing to review (e.g., a token system where you must review 2–3× more than you request? The tokens could somehow scale with the depth of review?)

(3) Ultimately, are papers being read and informing new studies? Citation and other quantitative metrics give some hint at this but take a long time to measure and can be biased.

L. Robert Hollingsworth:

This could take the form of badging as proposed here (and I’m sure other sources, too): https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002234

Astera or other nonprofit orgs like eLife could organize badging principles/practices and deploy technical experts for rigor, or others for open science audits. ASAP (Aligning Science Against Parkinson’s) has strict open-science requirements and a template for materials sharing audits that could be helpful.

Also relates to “is verifiable” in the discussion, here: https://research.arcadiascience.com/pub/open-question-measuring-reuse#nfd6bck2cia

Rodolfo Aramayo:

### How can we measure and communicate the impact of science?

The discussion elaborates on measuring and communicating the impact of science through different classes of publications, focusing on Reviews, Research Papers, and Methods Papers.

1. **Classes of Publications**

- Reviews

- Research Papers

- Methods Papers

2. **Evaluating Reviews**

- Comprehensive coverage

- Recency and up-to-date information

- Use of extensive research publications

- Risk of "Review of Reviews" and error propagation

- Can we create a metric for evaluating reviews based on the ratio of original research publications cited versus the number of reviews cited?

- **Summary for Reviews**: Their impact is gauged by comprehensiveness, recency, and the extent of original research citations, avoiding the pitfalls of relying too much on other reviews, which can propagate errors.

3. **Evaluating Methods Papers**

- Wet-Lab Methods Papers

- Accessibility and availability of reagents and materials

- Reproducibility

- Challenges in replicating experiments

- Using the same reagents and conditions as the original method

- Getting the same results

- Importance of internal controls

- Computational Methods Papers

- Computational Environment and Software Availability

- Can we use the same computational environment?

- Can we use the same software?

- Is the software used accessible and available?

- Challenges in replicating experiments

- Importance of internal controls

- Issues with computational reproducibility

- Researchers are more likely to test a protocol or a pipeline using their own data. Researchers are unlikely to use the data described/used in the original publication. Therefore, it is very hard to evaluate to what extent we were able to reproduce and replicate a given described experiment protocol or computational pipeline.

- In addition, in the case of computational pipelines, researchers are likely to use the latest version of software or scripts and/or genome files, thus making a clean and unbiased judgment regarding the reproducibility of a given computational pipeline very suspect and potentially biased.

- **Summary for Methods Papers**: Judged by reproducibility, including the accessibility of reagents and materials and the ability to replicate procedures in both wet lab and computational contexts. Challenges include not using original conditions and lacking proper internal controls.

4. **Evaluating Research Papers**

- The evaluation of research papers is inherently complex.

- Rarely do reviewers have expertise in all experimental aspects of the paper.

- Research papers can potentially be evaluated at different levels.

- Deconvolution of reproducibility:

- Figures

- Is the primary data used to generate the figures readily available?

- Has a Python/R/Shell script that can regenerate the figure been provided? Scripts in R or Python (pandas, seaborn, matplotlib) can be generated to reproduce most figures.

- If the answer to the above questions is yes, then the figure should be eligible to be assigned a DOI.

- Tables

- Is the primary data used to generate the tables readily available?

- Has a Python/R/Shell script that can regenerate the table been provided? Scripts in R or Python (pandas, seaborn, matplotlib) can be generated to reproduce most tables.

- If the answer to the above questions is yes, then the table should be eligible to be assigned a DOI.

- Materials

- Have the drugs, reagents, and other materials been reported in detail?

- Can these drugs and reagents be obtained? Are they available?

- How likely are these drugs and reagents to be affected by batch production effects?

- Are the animals used in the research available?

- How robust is the phenotype of the animals used in the experiment? What is the probability that a variable genetic background will affect a given phenotype used in the experiments?

- Methods

- How detailed is the description of the methods?

- Are the reagents, antibodies, oligonucleotides, and dyes used readily available?

- Can the experiment be reproduced using the same materials?

- How dependent on specific hardware and/or specific laboratory equipment are the methods?

- Data availability and integrity

- Is the primary data used in the manuscript readily available?

- Primary data from repositories like NCBI SRA

- Is the data processing affected by issues related to metadata stripping in sequencing files during submission?

- Scripts and commands documentation

- Availability and documentation of processing scripts

- Documentation of pipeline and virtual environments

- Availability of scripts in R or Python (pandas, seaborn, matplotlib)

- **Summary for Research Papers**: Unique due to their combination of new observations and methodologies. Evaluating their reproducibility involves ensuring primary data availability, processing it according to the original study, and producing equivalent figures. Challenges include data integrity from repositories like NCBI SRA and proper documentation of processing methods. Reproducibility also hinges on the availability of experimental materials (reagents, antibodies) and proper documentation of data processing methods. Sharing primary data and scripts (preferably in Python) for figure generation enhances understanding and reproducibility, potentially leading to granular micro-publications with unique identifiers.

5. **Evaluating Supplementary Data**

- Problems with supplementary data storage and accessibility

- Issues with broken links and deleted data

- Solutions for better data preservation

- Using repositories like Zenodo or Figshare

- Dissociating supplementary data from main publications

- Exploring different publication models (GigaDB, GigaScience)

- **Summary for Supplementary Data**: The storage and accessibility of supplementary data present significant challenges. Issues with broken links and deleted data necessitate better preservation solutions, such as using repositories like Zenodo or Figshare and dissociating supplementary data from main publications. Different publication models (e.g., GigaDB, GigaScience) offer various approaches to addressing these challenges.

6. **Evaluating Data Integrity After Publication - The FASTQ File Headers Issue**

- To save disk space, NCBI-SRA started removing metadata present in headers of FASTQ files containing link number, tile number, X and Y coordinates.

- How important are FASTQ file headers for quality control?

- What information is being lost by NCBI-SRA redefining the FASTQ headers?

- Is it worth preserving this information in another format to be able to reattach to the downloaded files if necessary?

- See [GitHub discussion for details](https://github.com/ncbi/sra-tools/issues/130)

- **Summary for FASTQ Files**: A specific issue is raised about FASTQ file headers, especially in RNA-seq datasets. A recent GitHub discussion highlighted the importance of headers containing metadata like lane number, tile number, and X and Y coordinates, which are stripped upon submission to NCBI SRA. This loss of information hampers the identification of PCR artifacts and read quality assessment. The need for alternative ways to preserve this metadata before submission is emphasized to ensure the replicability of original results. The removal of header information from 10x Genomics files at some point further complicated data reanalysis, underscoring the necessity of methods to preserve and reattach this metadata if necessary (although this might not be a current issue for 10x Genomics data).

#### **Note** The content of this comment was developed by first recording the ideas. ChatGPT-4 then transcribed the recordings and used these transcripts to generate an outline. The outline was subsequently manually edited and refined.

Rodolfo Aramayo:

An important question to include in the survey is: Has this publication changed the way you think about biology or the specific problem in question? Alternatively, has this publication introduced you to a new method for performing an experiment?

Robert Roth:

This is a really cool idea! We did something like this on our ‘The experiment continues’ pub, where we adapted the question to ask if it’s changed the way the reader thinks about publication/will approach their own publications. I like the idea of having a more direct indicator of a change in thought process. It’s probably worth thinking about having a ‘base’ survey so that one can have a consistent set of data to compare across publications, but also introduce more specific questions that pertain directly to the work (like the ones you’re suggesting). Thank you for adding to the discussion!

Rodolfo Aramayo:

My question is: Can we apply the same metrics to all publications, given that they can belong to different categories? Publications may introduce significant new biological concepts or bring novelty and clarity to a particular topic. They can also report important new technological developments and open new avenues for investigation.

Claire Duvallet:

It might be interesting to have different types of citations — some citations are “this is a finding I’m referencing” and others are “this is a method/dataset from this work I’m using”

Robert Roth:

This is a really cool thought (and I know some other folks are actively working on this, so hopefully we start seeing more of it soon). But I’m curious what the most effective way to display that would be. For instance, we and others use hover-over citations right now and there are a lot of different formats out there.


Do you think it would be most useful displayed directly next to the citation (or in the hover-over text)? Or would that be distracting? Or maybe encouraging writers to incorporate how they used a citation in the sentence/paragraph where it’s included? I could think of some more creative ways to display/communicate this info, but I wonder what would be the easiest to analyze without becoming overbearing.


Thank you for contributing to the discussion!

James Boyd:

(A “better explanation” being one that is simpler, has fewer exceptions, covers more cases, enjoys better consistency with other explanations, etc.) To put it simply: metrics are a poor proxy for what could eventually be a standardized metascientific evaluation of work.

James Boyd:

Well, I think the best solution to this problem is a somewhat “longer-term” prospect, but I’ll raise it here nonetheless. I think the success of a publishing model pertains to its ability to directly facilitate advancement of science itself, as opposed to, say, “network-based” indicators that only approximate social adoption/discussion/sharing. Fundamentally, scientific theories/hypotheses/models are “algorithms” (defined as loosely as you like) that eat data and produce explanations/predictions, and any publishing model that helps gather better input and/or deliver better output should garner prestige for doing so.

As a highly simplified toy model, consider “research” that is quite directly related to algorithms/data, such as quantitative finance: CrunchDAO (which is a kind of decentralized hedge fund…) actually has a leaderboard in place that ranks participants by the performance of their models. Of course, theories and models in many scientific fields are often not actual algorithms, and assessing their ability to best explain data would require a qualitative (though, nevertheless, standardizable) metascientific criterion. Fortunately for humankind, the realm of biology is more sophisticated than that of hedge funds :D And metascience will require criteria more sophisticated than “market returns”. Nevertheless, I wonder to what extent publishing platforms could ultimately be “metascientifically ranked” by their ability to gather the best data and develop the best explanations.

James Boyd:

As a general issue in scientific publishing, I believe that literature search friction and duration can be improved if publications are tagged both by subject and by common, predictable use cases. I’m often looking for particular perspectives/angles on a given subject, and often have to infer the kind of perspective/angle that a paper provides only after reading it in some depth. Here are some examples of what I might personally seek during a literature search:

  • New results/data that corroborate/support a given theory or investigation

  • New theories/models with better explanatory power

  • New results/data that challenge a given theory or investigation

  • Strategic and/or historical commentary

  • Summaries and expository work

  • Reproduced/replicated studies

  • Methodological innovations

  • Reviews or criticism

I was happy to see the “Negative Data” and “Open Question” tags on Arcadia publications; they’re a step in the right direction. I personally favor more tags, though I imagine that avoidance of over-tagging is an important curation issue.

When I see “Negative Data”, I immediately wonder – what is negated? Is it a hypothesis previously formed within Arcadia? Is it a major assumption that the entire community holds? (That is, what is its “metascientific scope”?) Does the negative result present a quandary, or is a new explanation offered instead? (What is the “deliverable scope”?) In summary, it would be helpful to know, when browsing, what kind of scientific proposition the data is negating, and what I’ll stand to gain by reading the publication (e.g. a new explanation, a scientific dilemma to mull over, a study replication prospect, a new methodology to consider, etc.)

I recognize that tags won’t be able to capture much of the above information, but I can imagine 1-3 tags (e.g. subject, use-case, scope) being quite informative.

Shaurya Chanana:

I think having an NLP-generated large repository of technical nouns and word-phrases would be a great starting point. Some kind of word-cloud-like structure that is weighted by field-relevance and how frequently they occur in various topics would be helpful. Then, for any new paper, it could be auto-tagged based on this large corpus of tags.
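One way to picture this idea: a minimal sketch of corpus-based auto-tagging, assuming a tiny, hypothetical set of field-labeled text and TF-IDF term weights. The field names, example text, and similarity threshold below are all placeholders, not a proposal for a specific implementation.

```python
# Minimal sketch of the auto-tagging idea above: weight terms by how strongly
# they associate with each field, then tag new text by the fields whose term
# profiles it most resembles. Everything here is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {  # hypothetical field-labeled reference text
    "cell-biology": "actin cytoskeleton microtubule organelle membrane trafficking",
    "genomics": "rna-seq fastq alignment variant calling transcriptome assembly",
    "computational-methods": "pipeline clustering algorithm benchmark software workflow",
}

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
field_vectors = vectorizer.fit_transform(corpus.values())

def suggest_tags(text: str, threshold: float = 0.05) -> list[str]:
    """Return candidate field tags whose term profile the text resembles."""
    scores = cosine_similarity(vectorizer.transform([text]), field_vectors)[0]
    return [field for field, score in zip(corpus, scores) if score >= threshold]

print(suggest_tags("We benchmarked a clustering algorithm on RNA-seq data."))
```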

Luis Goicouria:

Generally, the tracking of citations of your publication is used to determine the extent to which your work is being verified or built on. This, however, isn’t sufficiently comprehensive (as not all pertinent research may find your publication, especially if there are barriers to finding or accessing your work). I would imagine that a supplemental measure would be to track publications that cite the same publications that you cited in your publication. Other publications that use a certain threshold of shared citations are more likely to be pertinent to your research, even in the event that they are not citing your work specifically.

Robert Roth:

Thank you for contributing to the discussion and for your note on barriers to finding or accessing the work — discoverability/accessibility is a key part of this puzzle, especially outside of more traditional channels.

I’m curious if you or anyone who sees this comment has tried out any tools or seen other projects that try to get at this concept and what you thought of them/if you found it helpful. I suppose this is similar to the concept of bibliographic coupling? Correct me if I’m wrong.

It also makes me wonder what else we could add to that analysis to more easily get at how pertinent the shared citations might be to the original publication (and maybe even uncover some ‘hidden’ evidence of reuse/influence of the original publication). Could be an interesting use case for LLMs, or combining some of the work being done on contextual citations.
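For anyone who wants to experiment with the shared-references idea in this thread (essentially bibliographic coupling), here is a minimal sketch that pulls reference lists from the public OpenAlex API and counts the overlap between two works. The DOIs are placeholders, and what threshold of shared references counts as “pertinent” would be a judgment call.

```python
# Minimal sketch of bibliographic coupling via the public OpenAlex API.
# The DOIs below are placeholders, not real publications.
import requests

def referenced_works(doi: str) -> set[str]:
    """Return the set of OpenAlex work IDs that the work with this DOI cites."""
    r = requests.get(f"https://api.openalex.org/works/https://doi.org/{doi}")
    r.raise_for_status()
    return set(r.json().get("referenced_works", []))

def coupling_strength(doi_a: str, doi_b: str) -> int:
    """Count references shared by two works (their bibliographic coupling)."""
    return len(referenced_works(doi_a) & referenced_works(doi_b))

# Hypothetical usage: works sharing many references may be topically related
# even if neither cites the other.
print(coupling_strength("10.1000/example.one", "10.1000/example.two"))
```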

Luis Goicouria:

I love commentaries because they often effectively communicate technically niche or complex data to a broader audience, provide more unbiased discussion of the significance of the findings, and occasionally provide criticism of the design and implementation of the study. I would imagine that finding a third party, uninvolved in the production of the publication in question, to write a commentary would provide a valuable tool in interpreting the use cases and limitations of the findings and determining how relevant the findings are to my interests.

Daniela Saderi:

Methods section… is the protocol detailed enough to suggest how to replicate an experiment?

Robert Roth:

Definitely something to look for! I’m curious if you’ve found any sort of indication of this to be helpful (like the ‘Works for me’ button on Protocols.io, for instance) or any kind of public commenting (on Twitter, bioRxiv, etc.) that has indicated it’s detailed enough? Or does it generally require more of a personal, manual review for you to feel confident in its level of detail?

Anna Hatch:

I look at figures and methods to see if results match conclusions.

Shaurya Chanana:

At the surface level, reading the abstract _should_ suffice, but it often doesn’t because abstracts are usually very technical. Alternatively, reading the results and skimming the discussion helps, but that’s a lot of reading and no one has time for that.

One way could be to have an AI-based summary of the article. The downside is that the summary could be partially hallucinated and probably unreliable. We could add some conditions like making sure words in the summary appear in the paper etc.

Another way is to write a “narrative” at the beginning of the paper. So, if the paper is about a novel clustering algorithm that claims to make it easier to see unrelated points in a space better, the paper-writer could try writing a story around what kinds of problems a reader could use this method for.

A third way is to force paper-writers to simplify their language and add a graphical abstract.

Hui Xin Ng:

Before we can evaluate success, we need a clear definition of “success.” However, I believe there is no single definition of success when it comes to measuring the impact of science, because that definition evolves depending on the desired outcome. The desired outcomes differ depending on the role of the person or the organization.

For instance, a government official tasked with creating a new public health program might want to learn about the outcomes across multiple interventions from an existing case study, or potential outcomes from a health economics study. Downloads and views are not sufficient to reflect the impact of a publication - because what follows from reading the publication might be more nebulous - e.g., the government official then contacts the lead authors of the study for engagement/consultation. The publication may facilitate the initial point of contact, but the impact goes beyond a single metric we can measure - I imagine one of the ways to measure such impact (if we narrow the scope to comparing publishing models) is to ask how effectively X publishing platform/model itself resonates with non-scientists? If we are only measuring metrics like downloads, it implies a single direction of information flow. A successful model, I reckon, will be bidirectional, and we need to find ways to perhaps quickly prototype a model for publishing/sharing science and see how it resonates with people who are not directly involved in scientific knowledge production.

That may be beyond the scope of the current discussion. Going back to measuring success across publishing models, I think we need to consider the ease of adoption of a new publication/research-sharing model. This might be a chicken and egg problem where we don't have enough people trying out a new model and hence we can't evaluate it. But I suspect there might be subgroups of people who face similar obstacles to adopting a new publishing model. 

My thoughts on what to measure now are quite nebulous and I will expand them in the coming weeks. But to close it off, for now, here's one question I'd like to pose for discussion: Who uses science/research findings, and for what purpose? Answering this question will help us identify different metrics to measure other than metrics like number of downloads.


Melissa Steele-Ogus:

It seems like this is already being implemented here, but having a comments or discussion section where people can compare notes seems really key, especially for those of us who work in really niche fields. Negative results may not be publishable, but knowing about them may save your colleagues or future researchers a ton of work.

Melissa Steele-Ogus:

Graphical abstracts are always really valuable to me–I can quickly scan to see if the subject and findings of the paper are something that applies to my interests.

Jasmine Neal:

It may be interesting to display who and how the work is being used in other contexts. Similar to the “works for me” button found on protocols.io, researchers could indicate if the tool or result is “in use” and perhaps elaborate on their use case. This could lead to a dropdown list or other feature that shows all the places the work is currently in use and what it is being used for. Listing the examples of reuse could also drive traffic to the work of other scientists in the community so it may incentivize others to indicate that they’re using a tool, result, etc. Just an idea! :)

Pavithran Narayanan:

Before attempting to answer any of these questions, I think it is necessary to look at (and probably measure, if possible) the reach and penetration of Arcadia Science in the research community. Only if a significant fraction (used here as a loose term) of researchers are aware of the company and its work will it make sense to measure the impact. Please note that this reach is different from the reach mentioned in this pub, which is a measure of views, downloads, comments, etc.

Active Outreach:

The reach of the company could, in many ways, depend on the kind of outreach strategies the company employs. I think what the company currently needs is what I call an “active outreach” strategy. This essentially involves directly reaching out to researchers in a given field through email, in person at conferences, or other relevant means. For example, if the company works on a project on Actin structure, the company needs to actively reach out to the researchers (PIs, postdocs) working in the same field and let them know about the work that it has recently published. This would allow them to know about the work and engage with it as per their preference.

After an initial round of active outreach in a given field, this circle would be expected to grow, and the work of the company could be expected to penetrate the community over time. This would potentially help the pubs receive more visibility and feedback, and probably get cited in other publications. This would then give the company something to start with, upon which it can build useful metrics and approaches to track utility, rigour, and reuse.

Such an active approach may ultimately pave the way either for other researchers to directly engage with the research group at Arcadia Science (so that reuse could be directly tracked through personal communication and ultimately as citations in published works) or for the company to develop much more robust measures for the reuse of its work.

Jasmine Neal:

Thanks for your feedback Pavithran, you bring up really great points! I’m actively thinking about this aspect of our publishing experiment at Arcadia so I wanted to respond to your comment with some more context and questions. 🤗

Active Outreach

We definitely agree that an active outreach strategy is necessary and I’m glad to say that Arcadia does execute active outreach via social media and email, at conferences, by co-hosting virtual and IRL events and more. We also track our interactions with people in our community to monitor its growth and better understand how people interact with us. Reach is a bit easier to measure for conferences since we can easily compare the number of scientists who joined our newsletter/community with total conference attendance. But how do we measure our reach more broadly? What number can we use to get a sense of the scientific community at large? (I don’t have an answer for this yet but I welcome all suggestions!)

As you suggested, monitoring active outreach and the growth of our community does give us a glimpse into reuse and we can even anticipate citations based on how other researchers respond to our outreach, ask questions etc. However, much of the valuable discussion remains behind closed doors if done via email or buried in threads on X. Oftentimes, a question or suggestion that one researcher may have is shared by another researcher.

One difficulty, especially pertaining to email outreach, is “converting” any given feedback into a comment so that the entire community can benefit from it. At times there is a reluctance to post feedback publicly even when asked. Is this because making a PubPub account is a big lift? Or because there is hesitation with publicly criticizing someone else’s work, even if it’s welcomed and done constructively? Or perhaps comments aren’t the best way to make feedback visible to others? (I’d love to hear your perspective on any of these questions!)

I have many more thoughts and ideas but perhaps this merits an open question pub specifically about engagement…? 🙃

Daven Northroup-Kuder:

I think it would be really helpful to communicate the scope and impact of publications. This could help policymakers and the general public understand the focus and utility of different publications. These are some possible impact and scope questions with associated scale:

‘What is the scope of study?’ [peer reviewers would link related topics covered in this paper (e.g., electrochemistry, protein engineering, etc.)]

‘What is the scale of impact for this paper?’ [a 1-5 rating scale from niche to universal]

‘How accessible is this paper?’ [a 1-5 scale from very niche to easily understood]

In addition, a publication’s impact on various sectors (such as policy, technology, education, medicine, etc) could be assessed.

These benchmarks could be determined by a weighted mixed voting system (similar to Rotten Tomatoes’ rating system). Peer reviewers and approved readers would give each publication an initial score, and then every reader would have the chance to score the paper’s impact and scope. The scores of approved users and peer reviewers would carry more weight than those of the general reader, but the general readers would get to report on how they found the article.

Assessing a paper's scope and impact via weighted crowdsourcing would help assess the subjective response to publications. This impact and scope rating system could easily be added to the existing peer-review process and journal/publication platforms.

Jasmine Neal:

Thanks for your feedback Daven! We definitely agree that it would be helpful to communicate the scope and impact to the reader and this is something we’re actively thinking about! For our other pubs, we currently ask the reader about clarity, utility, replicability, and rigor, but I wonder if we should consider expanding or modifying these questions to help measure impact or scope? E.g. We could also ask the reader which sector this work would be useful for, if they indicate that it’s useful. We’d love to make these (or other) questions more prominent and to display results as the PubPub platform evolves to allow us more control.


I’m curious  – would you also find it helpful to see how other scientists or labs are using the work or would the results of a weighted-mixed voting system be more helpful for you? Or perhaps a combination of both would be better?

Daniela Liebsch:

When it comes to relevance, for me, I’d simply say it is topic-based, so a very clear, honest summary, keywords, and limitations would be most helpful. Traditional abstracts and summaries are often a bit vague, and tend to advertise rather than state limitations. For tools, something that outlines possible uses, citations showing how it was used (maybe some kind of summary of uses), and something setting it apart from other similar tools, or even a quick comparison with other tools plus a specific strength/limitation summary, could help.

Robert Roth:

Thank you for your thoughts on this, Daniela! I’m curious — do you find yourself using filtering tools (such as by topic) to list and then find articles, or do you generally do more of a keyword-based search to find specific tools/publications that could be useful? I’m guessing this depends on why you’re looking for publications, but I’d be interested to hear your thoughts.
