As I read the pub alongside the in-line comments, I assumed that I had covered all the comments, so I didn’t bother looking at the ‘Comments’ section at the bottom. I didn’t realise there were also general comments in this section that apply to the pub as a whole, but they’re buried under the in-line comments.
If many others also don’t realise this, it may be worthwhile updating the video to mention how to easily view general (non-anchored) comments via the filter option (disable ‘Show anchored comments’).
Some bigger, long-term questions:
If the future of publishing is no longer PDFs and journals (akin to New York Times —> Substack), how do you curate your own library when the native format of PubPub is an online document? Will PubPub allow you to keep a private copy? Some may want a copy in their private library so that they can add highlights/comments not meant for the public. Although you can download a PDF copy of the pub, it doesn’t make sense to revert to the PDF-and-EndNote workflow.
Will all this info be indexed? Can it be easily retrieved by new AI tools like Elicit.org that can draw insights? This could even help extract insights within Arcadia as the institute scales with more projects.
Combined with open access, this means permissionless contribution to science! It could come from anyone (no degree required), perhaps even people outside of science. If they understand the science, they could provide new creative insights, as they are likely to have a ‘beginner’s mind’ or see parallels with their own discipline.
I wonder if this will also mean a new career path in academia - a ‘consultant’ role for those who enjoy scientific discussion but not lab experiments, hired to assess the integrity of the science, contribute new directions, etc.
Reviewers of course already exist, but it’s not exactly clear what their contribution was (somewhat better now with transparent peer review). Since Arcadia is crediting public feedback, this also creates a potential metric for assessing the value a ‘consultant’ provides.
Regarding the reading audience: I don’t represent everyone, but I don’t go to a journal for papers of interest. I use PubCrawler to give me a weekly feed of articles that match my criteria of interest. If the pubs are in PubMed (as recently happened with preprints), this shouldn’t be a major issue.
I only occasionally go to a journal to read broadly (e.g. I’m in immunology but I’ll read synthetic biology or physics).
(Potential bug: clicking on a comment that is anchored to hyperlinked text seems to go straight to the hyperlink, not the comment. On Chrome, macOS.)
Since this is online (not PDF), it could benefit from adopting something similar to Microsoft Word’s track changes, or minimally an automatic highlighting feature to flag areas where changes occurred. (Similar to sending back a manuscript for revision so reviewers can easily identify the changes.)
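Even without full track changes, an automated diff between releases could flag changed sections for returning readers. As a rough illustration (this is my own sketch, not an existing PubPub feature), Python’s standard `difflib` can find which paragraphs changed between two versions of a pub’s text:

```python
# Sketch only: flag paragraphs of a new release that differ from the old one.
# The paragraph-level granularity is an assumption for illustration.
import difflib

def changed_blocks(old: str, new: str) -> list[str]:
    """Return the paragraphs of `new` that were added or changed vs `old`."""
    old_paras = old.split("\n\n")
    new_paras = new.split("\n\n")
    sm = difflib.SequenceMatcher(a=old_paras, b=new_paras)
    flagged = []
    for tag, _i1, _i2, j1, j2 in sm.get_opcodes():
        if tag in ("replace", "insert"):  # new or rewritten paragraphs
            flagged.extend(new_paras[j1:j2])
    return flagged
```

A renderer could then highlight exactly these blocks, similar to how revised manuscripts mark changes for reviewers.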
+1. This would require uploading source data (not just the figure). It would be especially helpful for bar plots, as many plots don’t show individual biological replicates, which can reveal a lot about the spread of the data that would otherwise be hidden if you just present the mean value.
Easy to implement for bar plots; it may be more difficult for non-quantitative data, e.g. flow cytometry plots (in immunology, some will choose contour plots over dot plots to obscure the fact that there may be fewer than 10 events in an acquisition).
+1 on this idea; it’ll make reading a lot easier compared to scrolling up and down or jumping around via hyperlinks. When reading papers as PDFs on a computer, I would open two copies: one for the text and the other for the figures. This is even more necessary when journals place the figure legend on a separate page from the figure (e.g. Cell).
Journal PDFs have their own fixed structure. The advantage of PubPub should be flexibility, i.e. you can choose two columns if you prefer, change how references are displayed (numbered vs. last author/year format, etc.).
Hi! For citing your pubs, I see you provide the BibTeX code, which is great; it would be extra nice to be able to export a .ris file for citation managers like EndNote!
Added to my list! Thanks so much for the feedback!
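For anyone curious what such an export involves: RIS is a simple tagged, line-based format, and a BibTeX-style record maps onto it fairly directly. A minimal sketch of the conversion (the field names and the type mapping here are my own assumptions, not PubPub’s actual exporter):

```python
# Sketch: map a parsed BibTeX-like entry (as a dict) to an RIS record.
# Field names and the TY mapping are illustrative assumptions.

def bibtex_to_ris(entry: dict) -> str:
    """Convert a dict of BibTeX-like fields to an RIS record string."""
    type_map = {"article": "JOUR", "misc": "GEN"}  # RIS reference types
    lines = ["TY  - " + type_map.get(entry.get("type", "misc"), "GEN")]
    for author in entry.get("author", "").split(" and "):
        if author:
            lines.append("AU  - " + author)  # one AU tag per author
    if "title" in entry:
        lines.append("TI  - " + entry["title"])
    if "year" in entry:
        lines.append("PY  - " + entry["year"])
    if "doi" in entry:
        lines.append("DO  - " + entry["doi"])
    if "url" in entry:
        lines.append("UR  - " + entry["url"])
    lines.append("ER  - ")  # end-of-record tag
    return "\n".join(lines)
```

EndNote, Zotero, and most other reference managers can import a `.ris` file built this way.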
I really love this. It’s an aspiration, not a decision, but I hope that we can all move peer review towards this new type of workflow.
one thought that I keep coming back to after taking this first step away from deriving “truth” from a single article: does this open up a new opportunity for more robust mechanisms for curation and for drawing attention to ground truth?
one example would be content state changes when findings are replicated. another example could be meta-analyses that then establish higher levels of confidence (or lack thereof) in a finding.
for me this is a really important first step towards:
1) decoupling peer review / journal brand as proxy for “truth” or “consensus”
2) deriving consensus from subsequent research outputs (vs. commentary and endorsements), i.e. data-driven peer review with replications, contradictions, and extensions.
Similar arguments can be made for the structure of the article itself. There’s a lot of really cool meta-analysis that could be done if papers were structured a bit better. For me this is more important than structuring the underlying datasets, which may be hard to make machine-usable given the high degree of variability between datasets.
<3. I think this can be the new methods section. just don’t even try to summarize it, post all the protocols and then reference in context.
what about retraction? or, in the positive direction, replication? Lots of really cool ideas to experiment with here.
I think I saw comment about this somewhere else, but visibility and organization of comments / feedback at different states (in review, final, etc) is worth thinking about. No right answers but can have an effect on how users engage with content.
all open science endeavors should be empowering researchers to lean into their curiosity again.
are pubs different from modular pubs?
I love this approach and have thought about this a lot (mostly in circles). Curious to see how this goes and happy to share more detailed thoughts if helpful.
For me, a few questions I’ve always had about this are:
1) what is the best governance structure? Should this be at the org/journal level, like special features whose contextual content editors keep updated? Or a content type that anyone can create?
2) If coming from authors / users, how does this compete with (or not) other workflows like preprints, blogs, etc?
3) do you treat this as a scholarly object or as a stepping stone / facilitator between micropubs (modular pubs) and longer form content / meta-analyses?
This is a great testing ground for new workflows and finding what will work for the community. Really excited to see how things shake out and what patterns and trends emerge in user behavior, and what that could mean for workflow refinements.
I have found product-design tension between building workflows specifically designed for small, modular, atomized content (micropubs) and exactly how and when this relates to longer-form content. I don’t have any strong answers or intuitions on this yet, but I often think about what Twitter would be like if it allowed blog posts. would that be a good thing or a bad thing? how would that change user behavior and product focus?
One observation to note: if you don’t implement specific creative constraints like length and types of content, user behavior will probably drift towards longer content and more entropy. which may be a good thing! or it could make it difficult for certain types of content to have the correct incentive and cultural structure. Could Twitter ever have evolved from an email or blog platform, or would it only have emerged and gotten traction by allowing certain types and formats of content?
One more observation is I think it’s important to know how many products and workflows you are trying to support. By not choosing, sometimes that means you are choosing to support a wide variety of products and workflows (ie all the things the users choose to do on the platform). For example, it’s possible this approach could lead to competing user needs for 1) modular content, 2) preprint / blog, 3) more formal publishing and visibility needs for career advancement.
With my work on micropubs, I made a deliberate decision to NOT support longer form content, and am working towards integrations with external platforms that do that better (preprints, journals). I haven’t gotten very far with this, so maybe wait until I have actual data / experience before taking this advice, but it’s something I’ve found useful to at least think through.
I could be completely wrong about all of this so will be curious to see what actually happens in this case!!
I have a strong bias / idiosyncrasy towards constrained design serving narrowly focused workflows, which works well sometimes and other times is a terrible decision haha.
Perhaps an intermediate approach, informed by an initial round of loose experimentation, could be to further constrain (or loosen) format and workflow depending on what authors are doing and finding most helpful!
A small thing: when I click on a comment which is tied to a hyperlink in the text, it takes me to the linked page instead of opening the comment. I’m encountering this issue with Jacob Bumgarner’s comment in the Citation Style section. If I try to add a comment near there, it also automatically directs me to the linked page. My browser is Chrome Version 103.0.5060.11.
In relation to this, should comments about technical issues with the webpage be separated somehow from comments about the content of the pub itself?
Thank you for pointing this out! We’ve relayed this bug to the PubPub team and it’s in their queue. Please do point out technical issues on this pub—I’d like to be aware of them myself and can make sure they get attention.
An invitation to take a look at the Open Scientist Handbook, here on PubPub: https://openscientist.pubpub.org/
What about doing overlay journals?
Overlay journals are fantastic and a welcome addition to the post-publication review and curation ecosystem. As our primary focus is as a research organization rather than a review entity, we aren’t formally organizing review or rendering decisions the way a journal might but rather want to make sure that the comments for work we read/review are captured publicly so that readers can benefit from a richer set of available information.
This type of broad, but focused public feedback inside the article is of much greater use than any current closed review system.
A general website aesthetic comment - the table of contents dropdown covers the text.
I’m viewing the publications on Safari 15.4.
Oh, thank you! I actually don’t see it that way on Safari on my end, so this is great to know. I’m definitely seeing issues with the ToC in general across browsers too. We’ll work on this!
This is a general comment that applies to all of the links in this pub, but it would be nice if links automatically opened in new tabs so that the readers aren’t directed away from the publication page.
Agreed! We can’t control this right now, but it’s on our list of requests.
Will there be stated guidelines that indicate the level of commenter contribution needed to be listed as a contributor?
- Those who proof the publications for grammatical issues won’t be included as contributors.
- Those who provide ideas for new critical experiments/alternative analytical approaches will be listed as contributors.
This is a great question. We’re developing a list of definitions for all contributor roles, including “Critical Feedback,” which can apply to public commenters or colleagues who weigh in early in the research process. Since we’re building on the casrai.org “CRediT” system but diverging in a few key ways, we plan to share this list once it’s built out. It could potentially be a standalone pub to support versioning, since we may make changes over time; otherwise, we’ll add it as a page.
Based on your feedback here, I think eventually we may want to add a page just intended for would-be commenters that explains how to comment, how comments may be used, in what circumstances someone may be credited as a contributor, etc.
A general proofing comment - it seems like this question was meant to be addressed/filled out in the drafting stage.
Good catch! Not sure how that made it through.
It would be fun for there to be space for community contributions on the project page. In an open-source model, other researchers could help address a given research question, provide hypotheses, or help out with a particular analysis. And then get credit if the contribution becomes useful.
Yes! We definitely intend to add community members as “contributors“ to pubs if they provide critical feedback. I think it’ll be important to show how we’ve incorporated comments into later versions of the pubs to 1) show people that we value their feedback, 2) be fair about how people are credited/rewarded, and 3) demonstrate that our post-publication review approach is working.
Major contributions can be highlighted on the project narratives too, via written shout-outs. We’ll see how this evolves if we eventually end up designing those pages to function closer to pubs (which I suspect we may end up doing later on).
I am really curious about whether pubs will naturally sort into a specific set of “pub types.” Like, will each novel finding be most easily communicated as an individual Observation, and later on, an Integration cites a few Observations to draw a new conclusion? Or will the pub types become really heterogeneous? If the former, then I think it will be really powerful to be able to query based on pub type; e.g. find Observations that are referenced by a particular Integration. Example: https://roamjs.com/extensions/discourse-graph/synthesis-query
I’m curious about this too. So far, I’ve outlined a set of about a dozen pub types that I think can work for most use cases and I’m trying to have us stick to these (I’ll share the list in a future version of this pub, or perhaps as a page that explains key features of each type). We’ll see how straightforward it is to use this standard set as we try to categorize future findings.
I like that feature of Roam! I’ll add it to my list of things to think about as we develop our framework and think about how best to connect and query for individual pubs. PubPub has a “Connections” feature that lets us connect pubs to each other and to external sites (e.g. deposited data) that may be the right structure for this — I’ll have to look into how searchable it is.
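As a sketch of the kind of query this could enable, assume a hypothetical data model in which each pub record carries a type and a list of referenced pub IDs (this is invented for illustration and is not PubPub’s actual “Connections” schema):

```python
# Hypothetical sketch: find Observations cited by a given Integration.
# The record format ({"id", "type", "references"}) is invented for illustration.

def observations_cited_by(integration_id: str, pubs: list[dict]) -> list[dict]:
    """Return the Observation pubs referenced by one Integration pub."""
    by_id = {p["id"]: p for p in pubs}
    integration = by_id[integration_id]
    return [
        by_id[ref]
        for ref in integration.get("references", [])
        if by_id[ref]["type"] == "Observation"
    ]
```

If pub types stay reasonably standardized, queries like this (or their graph-database equivalents) become straightforward to support.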
Just like in using hypothes.is, it might become overwhelming for readers to parse which comments are useful. I wonder if it will become necessary to filter or prioritize comments? Having upvotes might also help gauge how much engagement a particular article is getting, and how pressing a comment might be.
I also like the idea of subjective filtering, where you can rate comments as well as other raters. If I highly rate e.g. Prachee’s opinion, then comments that she rates highly will appear first for me. This “subjective review” is implemented at https://braid.news/ .
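A minimal sketch of how such trust-weighted (“subjective”) ranking could work; the data shapes, names, and numbers here are invented for illustration:

```python
# Sketch of subjective comment ranking: a reader's trust in each rater
# weights that rater's scores. All inputs are hypothetical examples.

def rank_comments(comments: list[str], ratings: dict, trust: dict) -> list[str]:
    """
    comments: list of comment ids
    ratings:  {rater: {comment_id: score}} -- each rater's scores
    trust:    {rater: weight} -- the current reader's trust in each rater
    Returns comment ids sorted by trust-weighted score, highest first.
    """
    def score(cid: str) -> float:
        return sum(
            trust.get(rater, 0) * scores.get(cid, 0)
            for rater, scores in ratings.items()
        )
    return sorted(comments, key=score, reverse=True)
```

So if I place high trust in one rater, the comments they score highly float to the top of my view, which is essentially the “subjective review” idea.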
+1 to comment rating.
Along this vein of thought, will there be comment moderation?
It would be nice to be able to query based on these roles. For example, I could imagine having a contributor page that describes the nature of my contributions to various projects. It should be possible to start with an autofill of my contributions; i.e. “conceptualization role in pubs x and y for project z.” A systematic treatment like this might make it more likely for these taxonomic roles to be considered, both for contributors and evaluators.
This is a cool idea! For reasons I won’t get into, the “Contributors” section you see right now is added manually, with the same information separately added to the metadata behind the scenes. This should change in the future, and once it does, you should be able to click on a contributor’s name to see their profile, which lists all pubs to which they’ve contributed. I like the idea of being able to additionally sort by contribution type!
this link is broken
Thank you! It will be fixed in the next release. Here is the intended link: https://research.arcadiascience.com/pub/method-mass-spec-proteomics-transcriptomics
will the code be documented or just shared?
As much as possible, code will be documented. We want to balance reproducibility and utility with how much time it takes our scientists to add documentation, so the extent of documentation may differ from case to case.
Wow! So cool!
what if the reader could choose which figures to simultaneously view? what if they could view more than one?
Love these ideas! 💡
Will this introduce a tendency to editorialize and revise the biology to fit a story even when it doesn’t?
Narrative should live separately from data and methods (in pubs and projects)
will dependencies between projects be tracked? i.e. one project motivated experiments that were inconclusive in the former context, but that led to the initiation of a new project. Will that new project cite the old project somehow? Is the experiment part of the new project or the old project?
Perhaps projects and pubs could benefit from a model like git: i.e. projects can be revised and changed over time, but forks (citations) are pinned to a particular revision.
Partially addressed this in a previous reply, but will summarize here! I agree that we may end up wanting some sort of versioning/forking with project narratives in the future.
Right now, we’re hoping to avoid editing the intro/goals substantially and to verbally describe shifts in the project’s direction within the “Progress” section. If we do end up drastically changing direction/goals, we’ll likely create a new project narrative page or otherwise attempt to clearly explain and document the shift.
We can easily hyperlink between projects, but it could also be cool to eventually develop a way to embed projects like we do with pubs or add a higher-level page type that tracks links between projects themselves as they evolve, inspire new projects, etc.
Thanks for all this food for thought!
in terms of measuring scientific progress, it seems like projects are a very important piece to track and that they should be considered together with pubs.
if projects remain unquantified, while pubs are counted and cited, I am concerned that incentives may continue supporting maximizing/focusing on pubs rather than projects.
The best solution might not be including a DOI, but I am concerned that in the current system, projects will remain in the periphery of scientific documentation and communication because they are not associated with measures of progress.
A last thought: perhaps including a DOI for projects could be accompanied with norms that the project DOI is cited for motivations/context building, while specific results, numbers, or methods could be cited from the individual pubs.
Good points, and we’ll keep these considerations in mind. Given that project narratives may change substantially, we’ve chosen not to use DOIs for now, but they will be citable via their URL. I do think some sort of versioning could be useful in the future, but I think we’ll have to see what happens to these over time and decide on a strategy based on the real examples that come up.
I love this “open arms” approach - this is a fantastic incentive for encouraging involvement from the public!
This is great - it directly gets at the frustration of bouncing between text and figures in traditional journal publications! To improve legibility while “following” throughout the section, could figures expand/contract based on reader selection? That way you can always get to the figure you’re looking for, but its impact on reading the text is minimal.
will major revisions have a new DOI? Or will there be a way for citing publications to refer to a specific snapshot of the publication?
what happens if there is a revision that contradicts or significantly changes the original interpretation of the pub?
Great questions! We are currently planning to keep a single DOI for each overall pub. For all revisions, users can cite specific versions by including the URL to the specific release. Note that PubPub saves each version (aka “release”) and provides a unique URL for each.
Regarding contradictory or otherwise major changes to the interpretation, we definitely do not want to hide the original data/interpretation, nor do we want people to stumble upon it and miss the critical update. Thus, at a minimum, we will add a hard-to-miss note to the top of the pub and make edits within, and may add an entirely new pub with its own DOI to relay the new data, discuss how the interpretation changed, why this has happened, etc. Any major (non-contradictory) additions to our data or understanding will be added as new pubs.
So far, we’ve learned a ton each time we’ve encountered a new scenario and ended up rethinking our plans, but this is our initial outlook.
I like this idea of having it pre-expanded. It also seems like it could be useful if, in this table of contents, you could directly link to other related pubs under the same umbrella. For instance, the table of contents for the project page could include links not only to subsections of that document, but also to the “downstream” pubs - data, methods/protocols, results, etc. This could help more seamlessly integrate the different pubs.
I like this idea! When you’re viewing a pub, you can currently click the project title tag at the top of the header to see a dropdown of all the pubs included within that project. This functionality could be useful on the project narratives themselves as well. As the project narratives grow, this sort of organization/navigability will become more important.
Noted, thank you! Replicated on my end as well, we’ll address this.
minor: the banner at the top of the page with the title does not appear to adjust to my window size on Firefox
Will there be a sort of “version control” for citation? Given that these pubs are continually being revised, it seems that it would be important that when citing these works, the citation should link to a specific version, since findings may change/be modified in later iterations.
Seems like the three-stage tag system could be used/integrated to facilitate this?
Thank you for these questions and ideas! PubPub supports versioning and lets users view all previous “releases” of a pub, each with its own URL and with the option to add a “release note” explaining what has changed. We are currently planning to keep a single DOI for each overall pub. For all revisions, users can cite specific versions by including the URL to the specific release.
For contradictory or otherwise major changes to the interpretation, we definitely do not want to hide the original data/interpretation, nor do we want people to stumble upon it and miss the critical update. Thus, at a minimum, we will add a hard-to-miss note to the top of the pub and make edits within, and may add an entirely new pub with its own DOI to relay the new data, discuss how the interpretation changed, why this has happened, etc. Any major (non-contradictory) additions to our data or understanding will be added as new pubs.
Applying the tagging system here is an interesting idea as well. We could consider adding a “Major Revision” or “Critical Update” tag, or otherwise noting this somewhere.
I don’t know if there is any existing infrastructure for such a thing, but I wonder if there might be value in having not only an executable code block, but one that’s interactive and modifiable - modifications could be shared by the community.
I think this sort of interactivity could facilitate data exploration, and provide a framework for scientific engagement by the community that’s currently lacking in traditional scientific communication. But again, this is pretty “pie in the sky” since that infrastructure might not even exist yet!
I have a similarly structured PubPub website built on modular, updatable articles with a synthesis overlay, and I’ve had a lot of luck with an email newsletter that functions more or less like a changelog for the website. Subscribers receive emails for each new pub, and once a month I also send out an email bundling up all the updates I’ve made to the site. Each email contains most of the new text, so people don’t have to leave their inbox to read it, but I include a disclaimer that the underlying linked article is always updated, and that people should follow the link to see the latest version.
The tradeoff is that people mostly engage with the email newsletter rather than the main site. But it’s a relatively frictionless way to tell people what’s new, and Substack has built a very shareable product that attracts interested subscribers quite well.
Oh this is an interesting approach! We’re just setting up our subscription options now, and it never would have occurred to me to include the new text directly within the email. Currently using Mailchimp but might look into Substack!
This is the key challenge, and I look forward to seeing your ideas for incentivizing people to engage or for creating new norms around it. In my field, it’s normal to circulate drafts for comments among people in your network; if similar norms exist in your field, you might be able to co-opt them and simply encourage authors of new pubs to circulate them to interested parties, noting that any feedback is welcome, especially on the platform.
Agreed! We have been trying this with our initial pubs; the jury’s still out on how well it’s working since we’re still in the middle of the process. We’re also hoping to hire someone to think about this full-time!
As an early career researcher, will my publication in this format count towards hiring, appointment, rank, and tenure? That is my biggest concern. I am all for this format, but I do not know the thoughts of those sitting on the other side of the table, deciding whether my scholarship efforts were enough and/or meritorious when published on this platform. I do not have a solution at this point. Just thinking through things. I am sure this is not the first time such a question has been brought up.
Hi there. This is a great question and one we are running the experiment to answer. All evaluators are likely to make their own decisions on what “counts,” but it’s our feeling that the materials with the most transparency (public materials with public feedback) give evaluators the most information possible upon which to make their decisions. It is also our hope that by virtue of the existence of other avenues of research sharing that are attractive to scientists, evaluators might expand what they consider, as has happened in some circumstances with preprints. This shift won’t happen overnight, but needs to start somewhere, so we hope to be able to chart the path that others might feel comfortable trying once they see it in action.
I was looking forward to a graphic to understand the process. There is a lot of information, and many “docking points” that address specific topics. I was getting lost on how each of those docking points connects. A figure might help readers here.
Thanks for this feedback! We will consider making a graphic or figure.
super interesting, have not read all of it yet, but it would be really nice to discuss:
- extension of CRediT: how to make it usable by all; should this go into the CRO ontology? Would you have some funding to revive CRO development?
- author metadata in PubPub, and its reuse: you may want to look at/participate in the jams initiative: see https://github.com/jam-schema/jams/issues/7
As a general remark, the links to Twitter and Google Scholar are not aligned with the open science objective; they should at least also be schema.org-compatible (also used by Google) and fediverse-compatible (the open-source alternative to Twitter). Maybe a link to Wikidata would be feasible?
Thank you for all these ideas! Right now, we’re not planning to fund external efforts, but we will look into the resources you’ve pointed out.
Regarding your third point, as we test other models of sharing and engagement, we’re doing our best to balance pure open science principles with the need to interact with scientists where they are in the existing ecosystem. For example, we’d love for our work and discussion to be maximally replicable via open mechanisms, but we’re also cognizant that Twitter is a major hub for scientific discussion, and we don’t want to miss that. We’re definitely interested in capturing conversations wherever they are, and will need to think about how to track comments on different sites, especially as social media channels change over time.
It is valuable for each unit (Method, Data, etc.) to exist in its respective Context — this offers a massive amount of approachability to the concepts in the pub and allows them to stand alone. However, the redundancy might be high-effort for the author and annoying to navigate for a reader familiar with the field. I think there must be another strategy that enables the best of both worlds: approachability without redundancy.
I had similar thoughts. When dealing with multiple units of research output (e.g. a new set of results from the same study), how would one add the information, and how will it be connected to a previous unit of the same research? Does one leave out the big-picture intro/background and methods, and instead provide a link to the older unit?
When I started reading I immediately felt the value of a narrative story in the plural first person. The mention of specific hurdles and confusions that you overcame as scientists adds soooo much implicit value to understanding the research story. This stuff is missing from traditionally published papers.
@Kat - I totally agree. So much of science is hurdles/confusion - we are doing ourselves and the process a disservice if we write it out of the narrative. I also worry about how we can properly teach students about the process without this.
The data I get when I click the “Cite” link uses a “journal article” entry type with “Arcadia Science” in the journal field. Personally, I have had endless issues with bibliography-management software and journal production systems when citing electronic journals that don’t use traditional volume and page/article-number systems. So I wish you would either use some other entry type or add article numbers.
thanks for the note, Michael. Which BibTeX type do you think should be used for content like this? It does feel like @misc kind of undersells it, but I agree that journal article isn’t quite the right fit, either.
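One hedged possibility (all fields below are placeholders, not real pub metadata): stick with @misc but populate howpublished, doi, url, and a version note, so reference managers have enough to build a clean citation without a volume/page; biblatex users could alternatively map this to its @online type, which is designed for DOI/URL-first content.

```bibtex
% Hypothetical entry for illustration only; every field is a placeholder.
@misc{arcadia_example,
  author       = {Doe, Jane and Roe, Rick},
  title        = {An Example Pub},
  year         = {2022},
  howpublished = {Arcadia Science},
  doi          = {10.xxxx/example-doi},
  url          = {https://research.arcadiascience.com/pub/example},
  note         = {Version 1}
}
```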
Why gists? Gists can be created instantly but are relatively anonymous and unstructured. I would think having a repo for each Pub would be a better match, and would be more easily discoverable through GitHub rather than mainly through the Pub itself.