James Evans, a sociologist at the University of Chicago, reports (https://web.archive.org/web/20120611091345/http://dx.doi.org/10.1126/science.1150473?) in the latest issue of Science on his research into the kind and frequency of citations over the last 60 years. He found that citation behavior changed as more and more journals became available electronically: fewer journals and articles were cited, and the cited articles were more recent.
These findings seem to contradict our expectations (and research by other groups (https://web.archive.org/web/20120611091345/http://www.sciencemag.org/cgi/doi/10.1126/science.321.5887.329a?)). The greater availability of research papers in recent years, thanks to electronic publication (and open access), should broaden, not narrow, the range of papers that we read and ultimately cite in our own publications. But looking at my own behavior when reading papers or writing a publication, and thinking about the many discussions we have had on related topics, these findings make perfect sense.
Today's technology makes the distribution of scientific papers in electronic form very efficient, and thanks to it we have new business models (author-pays) and an ever-increasing number of journals. Access to research articles is now easier, cheaper and open to a broader audience than it ever was before. This is of course a wonderful development, but it unfortunately creates a new problem: information overload, and the question of how to filter out the relevant information.
Twenty years ago the typical researcher used a personal or institutional journal subscription to regularly follow the important papers in his field. Index Medicus and Current Contents were used to find additional articles, but they were cumbersome to use. Today few researchers regularly read printed journals. Most papers are found through searches of online databases and through subscriptions to tables of contents by email or RSS. There are many clever tools to facilitate this, but most people are probably overwhelmed by the information and stick to a few very specific research interests and high-profile journals.
This is where the filtering of information becomes critical. Technology can help a great deal in finding the most relevant research papers, but I would argue that human intervention is still far more important. For most people, myself included, peer review is the first step in that filtering process. Connected to peer review is the editorial decision that something is not only scientifically sound but also interesting. This editorial decision is sometimes debatable, but it is a very effective filter. Post-publication filtering by human intervention, in the form of comments, voting or paid services (e.g. Faculty of 1000 Biology (https://web.archive.org/web/20120611091345/http://www.f1000biology.com/?)), is still in its infancy.
I am hoping for better filtering tools in the future, both pre- and post-publication. I'm confident that technology can be a big help (especially when full-text searching takes off), but it will never replace human editing. Until then, maybe we should keep at least some important print subscriptions so that we don't miss that fascinating research paper that for some reason wasn't picked up by that fancy electronic tool.
David Crotty (in his highly recommended blog Bench Marks) also blogged about this topic (https://web.archive.org/web/20120611091345/http://www.cshblogs.org/cshprotocols/2008/07/18/scientific-citations-and-the-alleged-death-of-the-long-tail/?). Philip Davis also wrote about (https://web.archive.org/web/20120611091345/http://scholarlykitchen.sspnet.org/2008/07/18/online-journal-paradox/?) the Science article on the Scholarly Kitchen blog. And Thomas Lemberger blogged about the article (https://web.archive.org/web/20120611091345/http://blog-msb.embo.org/blog/2008/07/impact_of_online_publishing.html?) on The Seven Stones.