Last Friday and Saturday the 6th SpotOn London conference (http://www.nature.com/spoton/event/spoton-london2013/) took place at the British Library. I had a great time, with many interesting sessions and good conversations both in and between sessions. But I might be biased, since I helped organize the event, and in particular helped put together the sessions for the Tools strand (http://www.nature.com/spoton/?cat=11).
This blog post summarizes some of my thoughts from before, during, and after the conference. I want to focus on innovation in scholarly publishing, or rather: what is holding us back?
The #solo13alt (http://www.nature.com/spoton/event/spoton-london-2013-whats-your-number-altmetrics-session/) session on Saturday looked at the role of altmetrics in the evaluation of scientific research. I was one of the panelists and had summarized my ideas prior to the session in a blog post (http://blogs.plos.org/tech/evaluating-impact-whats-your-number/) written together with Jennifer Lin. It was an interesting session, although a bit too controversial for my taste. But it became obvious to me in this and a few other sessions that our obsession with the quantitative assessment of science is increasingly dangerous. Other people have said this more eloquently:
My job title is Technical Lead Article-Level Metrics, so it might sound surprising that I say this. But we have to differentiate between what we do now and in the next few years - which is mainly to get away from the Journal Impact Factor to more reasonable metrics that look at individual articles and include other metrics besides citations - and where we want to be in 10 or more years. For the latter it is essential that journal articles and other research outputs are valued for the research they contain, rather than serving as a currency for merit that can be exchanged for grants and academic advancement. This is a very difficult problem to solve, and I have no answers yet. Going back to how science was conducted until about 50 years ago - as a small elite club that worked through closed personal networks - is definitely not the answer.
In his keynote (http://www.nature.com/spoton/event/spoton-london-2013-keynote-1-boson-50-years-50003-scientists-understanding-our-universe-through-global-scientific-collaboration-and-open-access/) Salvatore Mele from CERN explained to us that Open Access in High Energy Physics is 50 years old, and that the culture of sharing preprints preceded the arXiv (http://arxiv.org/) e-print service - scientists were mailing their manuscripts to each other at least 20 years before arXiv launched in 1991. A similar culture doesn’t exist in the life sciences, and therefore the preprint services for biologists launched this year (e.g. PeerJ Preprints (https://peerj.com/preprints/) and bioRxiv (http://biorxiv.org/)) will have a hard time gaining traction.
Email is one of those services that every researcher uses, and we should think much more about how we can create innovative services around email, rather than only considering new tools and services that are still used only by early adopters. AJ Cann had coordinated a workshop around email at SpotOn London that he called the dark art of dark social: email, the antisocial medium which will not die (http://www.nature.com/spoton/event/spoton-london-2013-the-dark-art-of-dark-social-email-the-antisocial-medium-which-will-not-die-workshop/). I am still puzzled why most researchers prefer to receive tables of contents by email rather than as an RSS feed, but we shouldn’t confuse what we get excited about as software developers and early adopters of online tools with what the mainstream scientist would be likely to use.
Another good example is data sharing (http://royalsociety.org/policy/projects/science-public-enterprise/report/), a topic that was discussed in at least three SpotOn sessions. Even though most attendees at SpotOn London agreed that sharing of research data is important, it is obvious that this is currently not common practice in most scientific disciplines. Funders have created data sharing policies (e.g. NSF (http://www.nsf.gov/bfa/dias/policy/dmp.jsp) or the Wellcome Trust (http://www.wellcome.ac.uk/About-us/Policy/Spotlight-issues/Data-sharing/)), as have publishers (http://dx.doi.org/10.1371/journal.pone.0067111), and many organizations are thinking about incentives for data sharing, including data journals such as Scientific Data (http://www.nature.com/scientificdata/), which will launch in 2014 and was presented by Ruth Wilson in the session on motivations for data sharing (http://www.nature.com/spoton/event/spoton-london-2013-how-can-we-encourage-data-sharing-discussion/). Even though incentives can help promote change, I am pessimistic that something as central to the conduct of science as data sharing can change without more scientists being intrinsically motivated to share. This is a much slower process that should start as early as possible during training, as pointed out by Kaitlin Thaney in the #solo13carrot (http://www.nature.com/spoton/event/spoton-london-2013-how-can-we-encourage-data-sharing-discussion/) session.
In terms of the technology that is holding us back, I increasingly think that publisher manuscript submission systems may be the single most important factor slowing down innovation. I participated in the first Beyond the PDF (https://sites.google.com/site/beyondthepdf/) workshop in 2011, and I now think that Beyond the MTS (or manuscript tracking system) might have been a better motto than Beyond the PDF, as many of the problems we discussed relate to the typical editorial workflows we use today. These systems need to implement many of the ideas discussed at SpotOn London and elsewhere, from opening up peer review (#solo13peer (http://www.nature.com/spoton/event/spoton-london-2013-how-should-peer-review-evolve/)) to making it easier to integrate research data into manuscripts (#solo13carrot (http://www.nature.com/spoton/event/spoton-london-2013-how-can-we-encourage-data-sharing-discussion/)) to ideas of what the scientific record should look like in the digital age (#solo13digital (http://www.nature.com/spoton/event/spoton-london-2013-what-should-the-scientific-record-look-like-in-the-digital-age-discussion/)). In the latter panel we discussed both new authoring tools such as WriteLaTeX (https://www.writelatex.com/) and new ideas of what a research object should look like and how its different parts are linked to each other. A major theme here was reproducibility, highlighted both by Carole Goble (also see her ISMB/ECCB 2013 keynote (http://www.slideshare.net/carolegoble/ismb2013-keynotecleangoble)) and Peter Kraker (see also his Open Knowledge Foundation blog post (http://science.okfn.org/2013/10/18/its-not-only-peer-reviewed-its-reproducible/)).
The problem with today’s manuscript submission systems is that they have grown so big and complex that any change is slow and cumbersome, rather than iterative and part of an ongoing dialogue. I don’t want to blame any single vendor of these systems, but rather suggest that we carefully re-evaluate the workflow from the manuscript written by one or more authors to the accepted manuscript. My personal interest is mainly in authoring tools, and I have recently written about and experimented with Markdown (http://localhost:4000/tags.html#markdown-ref). This process of re-evaluating manuscript tracking systems is not simply about technology, but about how we approach this problem as authors, publishers, tool vendors, and as a community.