Sharing journals freely online

What are all the research journals that anyone can read freely online?  The answer is harder to determine than you might think.  Most research library catalogs can be searched for online serials (here’s what Penn Libraries gives access to, for instance), but it’s often hard for unaffiliated readers to determine what they can get access to, and what will throw up a paywall when they try to follow a link.

Current research

The best-known listing of current free research journals has been the Directory of Open Access Journals (DOAJ), a comprehensive listing of free-to-read research journals in all areas of scholarship. Given the ease with which anyone can throw up a web site and call it a “journal” regardless of its quality or its viability, some have worried that the directory might be a little too comprehensive to be useful.  A couple of years ago, though, DOAJ instituted more stringent criteria for what it accepts, and it recently weeded its listings of journals that did not reapply under its new criteria, or did not meet its requirements.   This week I am pleased to welcome over 8,000 of its journals to the extended-shelves listings of The Online Books Page.  The catalog entries are automatically derived from the data DOAJ provides; I’m also happy to create curated entries with more detailed cataloging on readers’ request.

Historic research

Scholarly journals go back centuries.  Many of these journals (and other periodicals) remain of interest to current scholars, whether they’re interested in the history of science and culture, the state of the natural world prior to recent environmental changes, or analyses and source documents that remain directly relevant to current scholarship.  Many older serials are also included in The Online Books Page’s extended shelves courtesy of HathiTrust, which currently offers over 130,000 serial records with at least some free-to-read content.  Many of these records are not for research journals, of course, and those that are can sometimes be fragmentary or hard to navigate.  I’m also happy to create organized, curated records for journals offered by HathiTrust and others at readers’ request.

It’s important to organize and publicize these records, because many long-running journals don’t make their content freely available in the first place a reader might look.  Recently I indexed five journals founded over a century ago that are still used enough to be included in Harvard’s 250 most popular works: Isis, The Journal of Comparative Neurology, The Journal of Infectious Diseases, The Journal of Roman Studies, and The Philosophical Review.  All five had public domain content that sat behind paywalls at their official journal sites or at JSTOR (with fees for access ranging from $10 to $42 per article), even though the same content was available for free elsewhere online.  I’d much rather have readers find the free content than be stymied by a paywall.  So I’m compiling free links for these and other journals with public domain runs, whether they can be found at HathiTrust, JSTOR (which does make some early journal content, including from some of these journals, freely available), or other sites.

For many of these journals, the public domain extends as late as the 1960s due to non-renewal of copyright, so I’m also tracking when copyright renewals actually start for these journals.  I’ve done a complete inventory of serials published through 1950 that renewed their own copyrights up to 1977.  Some scholarly journals are in this list, but most are not, and many of those that are did not renew copyrights until many years after 1922.  (For the five journals mentioned above, for instance, the first copyright-renewed issues were published in 1941, 1964, 1959, 1964, and 1964 respectively, 1964 being the first year for which renewals were automatic.)

Even so, major projects like HathiTrust and JSTOR have generally stopped opening journal content at 1922, partly out of a concern for the complexity of serial copyright research.  In particular, contributions to serials could have their own copyright renewals separate from renewals for the serials themselves.  Could this keep some unrenewed serials out of the public domain?  To answer this question, I’ve also started surveying information on contribution renewals, and adding information on those renewals to my inventory.  Having recently completed this survey for all 1920s serials, I can report that so far individual contributions to scholarly journals were almost never copyright-renewed on their own.  (Individual short stories, and articles for general-interest popular magazines, often were, but not articles intended for scientific or scholarly audiences.)  I’ll post an update if the situation changes in the 1930s or later. So far, though, it’s looking like, at least for research journals, serial digitization projects can start opening issues past 1922 with little risk.  There are some review requirements, but they’re comparable in complexity to the Copyright Review Management System that HathiTrust has used to successfully open access to hundreds of thousands of post-1922 public domain book volumes.
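The renewal-based reasoning in the last two paragraphs can be sketched as a small decision function.  This is only an illustration of the rules as described above (pre-1923 public domain cutoff, renewal requirement for 1923–1963 publications, automatic renewal from 1964 on); the function name and interface are my own, and it deliberately ignores complications like foreign publication, copyright restoration, and separately renewed contributions.

```python
def us_serial_issue_pd(pub_year, first_renewed_year):
    """Rough public-domain check for a US serial issue.

    pub_year: year the issue was published.
    first_renewed_year: first publication year of that serial whose
        issues were copyright-renewed (None if no renewals were found).

    Simplifying assumptions (see the surrounding text): issues published
    before 1923 are public domain; issues from 1923-1963 entered the
    public domain if their copyright was not renewed; from 1964 on,
    renewal was automatic.  Not legal advice.
    """
    if pub_year < 1923:
        return True                       # before the public domain cutoff
    if pub_year >= 1964:
        return False                      # renewals automatic from 1964 on
    if first_renewed_year is None:
        return True                       # 1923-1963, never renewed
    return pub_year < first_renewed_year  # free up to the first renewed year

# Using the first-renewal year reported above for Isis (1941):
print(us_serial_issue_pd(1935, 1941))  # True
print(us_serial_issue_pd(1950, 1941))  # False
```

The point of the inventory is precisely to supply the `first_renewed_year` figure for each serial; once that is known, the rest of the determination is mechanical for journals whose individual contributions were not separately renewed.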

Recent research

Let’s not forget that a lot more recent research is also available freely online, often from journal publishers themselves.  DOAJ only tracks journals that make their content open access immediately, but there are also many journals that make their content freely readable online a few months or years after initial publication.  This content can then be found in repositories like PubMedCentral (see the journals noted as “Full” in the “participation” column), publishing platforms like Highwire Press (see the journals with entries in the “free back issues” column), or individual publishers’ programs such as Elsevier’s Open Archives.

Why are publishers leaving money on the table by making old but copyrighted content freely available instead of charging for it?  Often it’s because that’s what makes their supporters– scholars and their funders– happy.  NIH, which runs PubMedCentral, already mandates open access to research it funds, and many of the journals that fully participate in PubMedCentral’s free issue program are largely filled with NIH-backed research.  Similarly, I suspect that the high proportion of math journals in Elsevier’s Open Archives selection has something to do with the high proportion of mathematicians in the Cost of Knowledge protest against Elsevier.  When researchers, and their affiliated organizations, make their voices heard, publishers listen.

I’m happy to include listings for  significant free runs of significant research journals on The Online Books Page as well, whether they’re open access from the get-go or after a delay.  I won’t list journals that only make the occasional paid-for article available through a “hybrid” program, or those that only have sporadic “free sample” issues.  But if a journal you value has at least a continuous year’s worth of full-sized, complete issues permanently freely available, please let me know about it and I’ll be glad to check it out.

Sharing journal information

I’m not simply trying to build up my own website, though– I want to spread this information around, so that people can easily find free research journal content wherever they go.  Right now, I have a Dublin Core OAI feed for all curated Online Books Page listings as well as a monthly dump of my raw data file, both CC0-licensed.  But I think I could do more to get free journal information to libraries and other interested parties.  I don’t have MARC records for my listings at the moment, but I suspect that holdings information– what issues of which journals are freely available, and from whom– is more useful for me to provide than bibliographic descriptions of the journals (which can already be obtained from various other sources).  Would a KBART file, published online or made available to initiatives like the Global Open Knowledgebase, be useful?  Or would something else work better to get this free journal information more widely known and used?
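To make the KBART idea concrete, here is a minimal sketch of what publishing such a holdings file might involve.  The column headings are the standard KBART fields; the sample journal row uses Isis (with the free run ending before its first renewed issue, as noted earlier), but the URL and coverage details shown are illustrative placeholders rather than actual holdings records.

```python
import csv
import io

# Standard KBART column headings (NISO RP-9-2014, Phase II).
KBART_FIELDS = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "num_first_vol_online", "num_first_issue_online",
    "date_last_issue_online", "num_last_vol_online", "num_last_issue_online",
    "title_url", "first_author", "title_id", "embargo_info",
    "coverage_depth", "notes", "publisher_name",
]

def kbart_rows(holdings):
    """Serialize a list of holdings dicts as KBART tab-separated lines."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=KBART_FIELDS, delimiter="\t",
                            lineterminator="\n", restval="",
                            extrasaction="ignore")
    writer.writeheader()
    for holding in holdings:
        writer.writerow(holding)
    return out.getvalue()

# Illustrative sample row: a public domain run of a journal.
# The title_url here is a made-up placeholder, not a real link.
sample = [{
    "publication_title": "Isis",
    "print_identifier": "0021-1753",
    "date_first_issue_online": "1913",
    "num_first_vol_online": "1",
    "date_last_issue_online": "1940",
    "num_last_vol_online": "32",
    "title_url": "https://example.org/serial?id=isis",
    "coverage_depth": "fulltext",
    "notes": "Free public domain run",
}]
print(kbart_rows(sample))
```

What makes this format attractive for the purpose described above is that it carries exactly the holdings-level detail (coverage dates, volumes, and access URLs) that knowledge bases and link resolvers consume, without requiring full bibliographic records.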

Issues and volumes vs. articles

Of course, many articles are made available online individually as well, as many journal publishers allow.  I don’t have the resources at this point to track articles at an individual level, but there are a growing number of other efforts that do, whether they’re proprietary but comprehensive search platforms like Google Scholar and Web of Science, disciplinary repositories like arXiv and SSRN, institutional repositories and their aggregators like SHARE and BASE, or outright bootleg sites like Sci-Hub.  We know from them that it’s possible to index and provide access to scholarly knowledge at a global scale, but doing it accurately, openly, comprehensively, sustainably, and ethically is a bigger challenge.  I think it’s a challenge that the academic community can solve if we make it a priority.  We created the research; let’s also make it easy for the world to access it, learn from it, and put it to work.  Let’s make open access to research articles the norm, not the exception.

And as part of that, if you’d like to help me highlight and share information on free, authorized sources for online journal content, please alert me to relevant journals, make suggestions in the comments here, or get in touch with me offline.

From Wikipedia to our libraries

I’ve heard the lament in more than one library discussion over the years.  “People aren’t coming to our library like they should,” librarians have told me.  “We’ve got a rich collection, and we’ve expended lots of resources on an online presence, but lots of our patrons just go to Google and Wikipedia without checking to see what we have.”  The pattern of quick online information-finding using search engines and Wikipedia is well-known enough that it has its own acronym: GWR, for Google -> Wikipedia -> References.  (David White gives a good description of that pattern in the linked article.)

Some people I’ve talked to think we should break this pattern.  With the right search tool or marketing plan, some say, we can get patrons to start with us first, instead of Google or Wikipedia.  This idea seems to me both futile and beside the point.  Between them, Google and Wikipedia cover a vast array of online information, more than librarians could hope to replicate or index ourselves in that medium.  Also, if we truly have better resources available in our libraries than can be found on the open Web, it’s less important that our researchers start from our libraries’ websites than that they end up finding the knowledge resources our libraries make available to them.

Looked at the right way, Wikipedia can be a big help in making online readers aware of their library’s offerings.  One of the things we spend a lot of time on in libraries is organizing information into distinct, conceptual categories.  That’s what Wikipedia does too: so far, its English edition has over 4 million concepts identified, described, and often populated with reference links.  And Wikipedia has encouraged people to add links to relevant digital library collections on various topics, through efforts like Wikipedia Loves Libraries and Wikipedian in Residence programs.  But while these programs help bring some library resources online, and direct people to those selected resources, there’s still a lot of other relevant library material that users can’t get to via Wikipedia, but can via the libraries that are near them.

So how do we get people from Wikipedia articles to the related offerings of our local libraries?  Essentially we need three things: First, we need ways to embed links in Wikipedia to the libraries that readers use.  (We can’t reasonably add individual links from an article to each library out there, because there are too many of them– there has to be a way that each Wikipedia reader can get to their own favored libraries via the same links.)  Second, we need ways to derive appropriate library concepts and local searches from the subjects of Wikipedia articles, so the links go somewhere useful.  Finally, we need good summaries of the resources a reader’s library makes available on those concepts, so the links end up showing something useful.  With all of these in place, it should be possible for researchers to get from a Wikipedia article on a topic straight to a guide to their local library’s offerings on that topic in a single click.

I’ve developed some tools to enable these one-click Wikipedia -> library transitions.  For the first thing we need, I’ve created a set of Wikipedia templates for adding library links. The documentation for the Library resources box template, for instance, describes how to use it to create a sidebar box with links to resources about (or by) the topic of  a Wikipedia article in a reader’s library, or in another library a reader might want to consult.  (There’s also an option for direct links to my Online Books Page, if there are relevant books online; it may be easier in some cases for readers to access those than to access their local library’s books.)

For the links to work, we need to know about the reader’s preferred library.  Users can register their preferred library (which will set a cookie in their browser recording that choice), or select it for each individual search.  We know how to link to several dozen libraries so far, and can add more libraries on request; linking to a union catalog that includes holdings of thousands of libraries worldwide is also an option.  Besides the “Library resources box” template, I’ve also provided templates for in-text links to library resources, if those work better in a given article.  Links to these templates can be found at the end of the “Library resources box” documentation.

For the second thing we need, I’ve created a library forwarding service (“Forward to Libraries”, or FTL– catchier name suggestions welcome) that transforms links from Wikipedia into searches for appropriate  headings or keywords in local libraries.  This is the same service I describe in my “From my library to yours” blog post from last month, but it now supports links from Wikipedia as well as to Wikipedia.

Thanks to information included in the Library of Congress’ Authorities and Vocabularies datasets, OCLC’s VIAF data feeds, Wikipedia’s database downloads, and my own metadata compiled at The Online Books Page, FTL already knows how to link directly to over 240,000 distinct authority-controlled headings known to the Library of Congress from their corresponding Wikipedia articles.   (Library of Congress headings are used in most sizable US libraries, and many English-language libraries outside the US also use similar headings.)

For other articles, FTL by default will try a general keyword search based on the Wikipedia article’s title, which will often turn up useful results at the destination library.  Alternatively, my templates allow Wikipedia editors to determine a specific Library of Congress heading to use in library links, if appropriate.  I’m hoping to incorporate suggested headings into FTL’s own knowledge base as I detect them showing up in Wikipedia articles.  I also plan to publish FTL’s data sets under open access terms, so that others can use and improve on them as well.
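The lookup-then-fallback behavior just described can be sketched in a few lines.  This is a toy illustration only: the mapping table, the catalog URL patterns, and the function name are all invented for the example, and the real FTL knowledge base and per-library link syntax are far richer.

```python
from urllib.parse import quote_plus

# Toy stand-in for FTL's knowledge base: Wikipedia article title ->
# Library of Congress authority-controlled heading.  (The real service
# knows over 240,000 such mappings.)
WIKI_TO_LC = {
    "Elizabeth Cady Stanton": "Stanton, Elizabeth Cady, 1815-1902",
}

# Hypothetical catalog search URL patterns; each destination library
# would supply its own in practice.
LIBRARY_SEARCH = {
    "subject": "https://example-library.org/catalog?su={}",
    "keyword": "https://example-library.org/catalog?q={}",
}

def forward(article_title):
    """Turn a Wikipedia article title into a local-library search URL.

    Prefer an exact subject search on the LC heading when the knowledge
    base has a mapping; otherwise fall back to a plain keyword search
    on the article title.
    """
    heading = WIKI_TO_LC.get(article_title)
    if heading:
        return LIBRARY_SEARCH["subject"].format(quote_plus(heading))
    return LIBRARY_SEARCH["keyword"].format(quote_plus(article_title))

print(forward("Elizabeth Cady Stanton"))  # heading-based subject search
print(forward("Women's history"))         # keyword fallback
```

The useful property of this design is that a miss degrades gracefully: even for articles with no known heading, the reader still lands on a plausible search at their own library rather than on an error page.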

The third part of this solution– displaying relevant resources at the destination library– can be implemented differently at each library.  For most of the libraries in FTL’s current knowledge base, links go to searches in the library’s regular online catalog.  But with some libraries, I’ve linked to another discovery system, if it seems to be the main search promoted at that library, and it seems to produce useful results.  The Online Books Page’s subject map displays also have features that I think will be useful to Wikipedia subject researchers arriving at my site, such as showing related subjects and books filed under those subjects.  I hope in future posts to talk more about other useful guideposts and contextual information we could be providing to readers arriving from Wikipedia.

But if you’ve read this far, you probably want to see how this all works in practice.  So I’ve added some example library resources boxes in a few Wikipedia articles that seemed particularly relevant this month, including those for Women’s history, Elizabeth Cady Stanton, and Flannery O’Connor.  Look down in the “External links” or “Further reading” sections of those articles for the boxes, and view the page source of the articles to see how those boxes are constructed.

As with most things related to Wikipedia, this service is experimental, and subject to change (and, hopefully, improvement) over time.  I’d love to hear thoughts and suggestions from users and maintainers of Wikipedia and libraries.  And if you find these sorts of links from Wikipedia useful, and need help getting started, I’d be happy to help you bring them to your favorite Wikipedia topics and local libraries, as time permits.

Building on a full complement of copyright records

Thanks to recent efforts of the US Copyright Office, we now have a complete digitization of summary copyright registration and renewal records back to the late 19th century.  As Mike Burke and others at the Copyright Office have been reporting on their blog, Copyright Matters: Digitization and Public Access, the Copyright Office has now digitized nearly every volume of the Catalog of Copyright Entries, and its predecessor publication, the Catalogue of Title Entries of Books and Other Articles, to the start of that serial in 1891.  Combined with the current online Copyright Catalog database, and some independent scans that fill in gaps in the Copyright Office set, records for every copyright registration and renewal still in force in the US can now be found online, free of charge.

This is a great benefit for people wanting to make better use of copyrighted works and the public domain.  With the information now online, we can quickly verify copyright and public domain status for lots of works, and also get useful leads on current owners of copyrights, in ways that were not possible when the only copies of the Catalog were in closed reserve at certain federal depository libraries.  Various people in the Copyright Office  have been hoping for a while to get approval and funding for this digitization, and I’m very thankful for their persistence in seeing the work through.

Not all the work is done, though.  Although the Catalog is now online, its records are not as easy to search, navigate through, and interpret as they could be.  There’s no one-stop search box, for instance, that will reliably bring you to any copyright record with your query terms, regardless of date or type of record.  And the Copyright Office also has more information about its copyright registrations– some of it on catalog cards, and more of it on original registration certificates like the one I found when researching the status of my mother’s book— that could be useful to people researching copyright status and looking for rightsholders.

For now, the Copyright Office is scanning the cards that were used to look up volumes of registration certificates, and that also formed the basis of the printed Catalog of Copyright Entries volumes.  From my (limited) experience with these cards, they don’t seem to add much information to what’s in the printed Catalog, but it’s easier to automatically create a searchable, structured database of copyright records from the cards, with their fairly regular typefaces and formats, than it would be to create one from the Catalog scans.  According to their latest blog post, the Copyright Office is now creating digital images of the relevant cards, and hopes to be done by the end of Fiscal Year 2014, or a little over 26 months from now.  They’re also hoping to work with various partners– including “crowdsourcing” partnerships– to reliably convert the information on the cards into machine-readable form.

There are also lots of ways to make the existing online records more useful.   On my own copyright records site, for instance, I’ve now made a comprehensive index to all the Catalog volumes, and created a table to make it easier to look up records in digitized Catalog volumes, based on the year and type of copyright registration.  I’m still working on further refinements, and would be very happy to hear suggestions.  (I’ve also been unable to find one 12-month stretch of records for copyrights from 1895 and 1896.  Fortunately, all the copyrights from those years have long since expired, but I’d still be grateful to anyone who can help me fill this last gap.)

At the same time, I’ve been using the comprehensive record set to help me research and publicize copyright status for listings on The Online Books Page.  For instance, if I’m listing public domain issues of a journal, magazine, or other serial, I’ll also look to see whether additional issues might also be in the public domain if their copyrights were not renewed.  Then I’ll place a note about this on my cover page for the serial, if applicable.

As for the Copyright Office, I’m hoping that they can soon start digitizing their volumes of registration certificates, which contain a lot of useful additional information about copyrights and copyright holders, and which no one else has.  Digitizing all of them wouldn’t be cheap– there are a lot of pages potentially to digitize, usually two for each registration.  But perhaps they could start digitizing incrementally, either on a prioritized systematic basis (e.g., starting with the most recent volumes), or on demand (e.g., digitizing a volume when someone wants to obtain a copy of one of its certificates).

These are only a few of the things that could be done with the records now online, by people anywhere with suitable motivation.  I’d love to hear what others are doing or thinking of doing.

A digital public library we still need, and could build now

It’s been more than half a year since the Digital Public Library of America project was formally launched, and I’m still trying to figure out what the project organizers really want it to be.  The idea of “a digital library in service of the American public” is a good one, and many existing digital libraries already play that role in a variety of ways.  As I said when I christened this blog, I’m all for creating a multitude of libraries to serve a diversity of audiences and information needs.

At a certain point after an enthusiastic band of performers says “Let’s put on a show!”, though, someone has to decide what their show’s going to be about, and start focusing effort there.  So far, the DPLA seems to be taking an opportunistic approach.  Instead of promulgating a particular blueprint for what they’ll do, they’re asking the community for suggestions, in a “beta sprint” that ends today.   Whether this results in a clear distinctive direction for the project, or a mishmash of ideas from other digitization, aggregation, preservation, and public service initiatives, remains to be seen.

Just about every digital project I’ve seen is opportunistic to some extent.   In particular, most of the big ones are opportunistic when it comes to collection development.  We go after the books, documents, and other knowledge resources that are close to hand in our physical collections, or that we find people putting on the open web, or that our users suggest, or volunteer to provide on their own.

There are a number of good reasons for this sort of opportunism.  It lets us reuse work that we don’t have to redo ourselves.  It can inform us of audience interests and needs (at least as far as the interests of the producers we find align with the interests of the consumers we serve).  And it’s cheap, and that’s nothing to sneer at when budgets are tight.

But the public libraries that my family prefers to use don’t, on the whole, have opportunistically built collections.  Rather, they have collections shaped primarily by the needs of their patrons, and not primarily by the types of materials they can easily acquire.   The “opportunistic” community and school library collections I’ve seen tend to be the underfunded ones, where books in which we have yet to land on the Moon, the Soviet Union is still around, or Alaska is not yet a state may be more visible than books that reflect current knowledge or world events.  The better libraries may still have older titles in their research stacks, but they lead with books that have current relevance to their community, and they go out of their way to acquire reliable, readable resources for whatever information needs their users have.  In other words, their collections and services are driven by  demand, not supply.

In the digital realm, we have yet to see a library that freely provides such a digital collection at large scale for American public library users.   Which is not to say we don’t have large digital book collections– the one I maintain, for instance, has over a million freely readable titles, and Google Books and lots of other smaller digital projects have millions more.  But they function more as research or special-purpose collections than as collections for general public reference, education, or enjoyment.

The big reason for this, of course, is copyright.  In the US, anyone can freely digitize books and other resources published before 1923, but providing anything published after that requires copyright research and, usually, licensing, that tends to be both complex and expensive.  So the tendency of a lot of digital library projects is to focus on the older, obviously free material, and have little current material.  But a generally useful digital public library needs to be different.

And it can be, with the right motivation, strategy, and support.  The key insight is that while a strong digital public library needs to have high-quality, current knowledge resources, it doesn’t need to have all such resources, or even the most popular or commercially successful ones.  It just needs to acquire and maintain a few high-quality resources for each of the significant needs and aptitudes of its audience. Mind you, that’s still a lot of ground to cover, especially when you consider all the ages, education levels, languages, physical and mental abilities, vocational needs, interests, and demographic backgrounds that even a midsized town’s public library serves.  But it’s still a substantially smaller problem, and involves a smaller cost, than the enticing but elusive idea of providing instant free online access to everything for everyone.

There are various ways public digital libraries could acquire suitable materials proactively.  One interesting example comes from the US State Department, which wanted to create a library of easy-to-read books on civics and American culture and history for an international audience.  Some of these books were created in-house by government staff.  Others were commissioned from outside authors.  Still others were adapted from previously published works, for which the State Department acquired rights.

A public digital library could similarly create, commission, solicit, or acquire rights to books that meet unfilled information needs of its patrons.  Ideally it would aim to acquire rights not just to distribute a work as-is, but also to adapt and remix into new works, as many Creative Commons licenses allow.  This can potentially greatly increase the impact of any given work.  For instance, a compellingly written,  beautifully illustrated book on dinosaurs might be originally written for 9-12 year old English speakers, and be noticeably obsolete due to new discoveries after 5 or 10 years.  But if a library’s community has reuse and adaptation rights, library members can translate, adapt, and update the book, so it becomes useful to a larger audience over a longer period of time.

This sort of collection building can potentially be expensive; indeed, it’s sobering that the State Department collection described above has now ceased being updated, due to budget cuts.  But there’s a lot that can be produced relatively inexpensively.  Khan Academy, for example, contains thousands of short, simple educational videos, exercises, and assessments created largely by one person, with the eventual goal of systematically covering the entire standard K-12 curriculum.  While I think a good educational library will require the involvement of many more people, the Khan example shows how much one person can accomplish with a small budget, and projects like Wikipedia show that there’s plenty of cognitive surplus to go around that a public library effort might usefully tap into.

Moreover, the markets for rights to previously authored content can potentially be made much more efficient than they are now.  Most books, for instance, go out of print relatively quickly, with little or no commercial exploitation thereafter.  And as others have noted, just trying to get permission to use  a work digitally, even apart from any royalties, can be very expensive and time-consuming.  But new initiatives like Gluejar aim to make it easier to match up people who would be happy to share their book rights with people who want to reuse them. Authors can collect a small fee (which could easily be higher than the residual royalties on an out-of-print book); readers get to share and adapt books that are useful to them.   And that can potentially be much cheaper than acquiring the rights to a new work, or creating one from scratch.

As I’ve described above, then, a digital public library could proactively build an accessible collection of high-quality, up-to-date online books and other knowledge resources, by finding, soliciting, acquiring, creating, and adapting works in response to the information needs of its users.  It would build up its collection proactively and systematically, while still being opportunistic enough to spot and pursue fruitful new collection possibilities.  Such a digital library could be a very useful supplement to local public libraries, would be open any time anywhere online, and could provide more resources and accessibility options than a local public library could provide on its own.  It would require a lot of people working together to make it work, including bibliographers, public service liaisons, authors, technical developers, and volunteers, both inside and outside existing libraries.  And it would require ongoing support, as other public libraries do, though a library that successfully serves a wide audience could also potentially tap into a wide base of funds and in-kind contributions.

Whether or not the DPLA plans to do it, I think a large-scale digital free public library with a proactively-built, high-quality, broad-audience general collection is something that a civilized society can and should build.  I’d be interested in hearing if others feel the same, or have suggestions, critiques, or alternatives to offer.

Book People postscript

This past Friday I closed down the Book People mailing list, a forum for people making and reading free online books that Mary and I started in 1997. Much of the activity of folks on the list would be early examples of the sort of citizen librarianship that I referred to in the first post to this blog. I announced the list’s closing about three weeks ago, giving my reasons in a later post.

In the last three weeks of the list’s activity, various listmembers wound up conversations, planned or announced various new forums, and said their goodbyes. You can read all this, and the rest of the list’s history, in the archives, which are remaining online. The most direct successor to the list is Book Futures, a Yahoo Groups mailing list maintained by Kent Larsen, and there were some other lists announced as well.

I closed the list with my own retrospection and thanks. But I continued to get some more listmember reflections even after my last post (and for all I know some more may have come in after the list’s email address was decommissioned.) Here’s one of them, a message I got from Michael Stutz (posted here with his permission):

When you started Book People back in 1997, I began a list for the discussion of what has now become known as “open content,” in an attempt to prove a concept I’d been working on in obscurity for years.

My list, Linart, shut down years ago, and that goes so far back that a whole lifetime is packed in the interim. But I do know firsthand what it’s like to administer and moderate a list like this and I know that to do it justice takes more time and work than most people would believe. I’ve never known a list with closer and more careful moderation than Book People. Absolutely every time a BP post came into my inbox, I thought of this and how keeping a good list running takes a massive amount of work.

It’s always sad to see an end, but looking back I do think that Book People had a good run and, like Linart, it reached the end of its course—a decade ago, the idea of publishing an online or electronic edition of a book was a novelty, there weren’t so many of them and they weren’t always easy to find. Not so anymore—at the very instant your announcement came into my inbox, I was downloading several gigabytes of rare old books, dozens of volumes among hundreds that I’d found through a full-text keyword search.

Just the same, Linart was a great idea because at the time no one was publishing copylefted work online—and even more importantly, _no one thought it was possible._ My main interests were books and art, but I wanted to see every kind of copyrighted work digitized online with “copyleft” licensing. And it might seem crazy now, but the reactions from open source and free software figures to my dream went from complete disinterest to overt hostility: “Copyleft is for software! You can’t do that with books, music, art”—replies like that were typical. Few people in the world were copylefting non-software works, but Linart is best left in the 20th century and the world as it was before Wikipedia and Creative Commons. In fact, after seeing the results of several years of online “open content” and having tested it extensively firsthand, I’m now critical of the method—I know its weaknesses and errors and have come to see that it isn’t the right solution for the age.

But what remains important today is the greater question of online publishing in general—and, of course, the future of the book. As a reader I’m nearly exclusively online for newly-published material, and as a writer that’s also where I want to find my audience, but how to do it and how it will all work out, how new writing and new books will be published and read and sold, remains entirely unclear—I’m still looking for the answer, and so I think the new Book Futures list is very aptly named and hope it takes off on its quest from this place we’ve come to after over a decade’s worth of Book People.

If anyone else from the list would like to add any postscripts or other comments here, feel free to add a comment to this post.

What’s this all about, Part 2: Everybody’s Libraries

In my previous post, I discussed “citizen librarianship” and the rise of online library services that go beyond the established library organizations and practices. And I claimed that the most promising future of libraries involved understanding and building up “everybody’s libraries”, as a collective group and as a concept.

The collective group is easy enough to understand. It’s just the sum of all the library content and services usable by the global community. The bigger this is, the more we can benefit.

But what do I mean by “everybody’s libraries” as a concept? I mean a group of characteristics that I think will describe and build up the best libraries of the future. “Everybody’s libraries”, as I see it, includes:

  • Libraries everybody can use. We’ve been sharing information with the online world at large almost since the day we set up computer networks. (The work of Project Gutenberg, for instance, started over 37 years ago.) Openly accessible information can be used by anyone it reaches, enlightening the world, making it easier to build on old work to create new knowledge, and enabling new kinds of production and commerce. Open-access libraries become even more usable when they make their information easy to find and repurpose, and when they accommodate varying languages, abilities, and education levels. For various reasons, not everything can be used by everyone all the time, but many of the barriers to access today can and should be removed.
  • Libraries everybody can put their work in. Libraries need to accommodate whatever information is important to their communities, from whatever source, and in whatever form, whether that be books, serials, images, multimedia, ephemera, or any of the forms of electronic information introduced in the Internet age. Many libraries are rightly selective about what they acquire, but we shouldn’t limit what they are able to select to benefit their users.
  • Libraries everybody can build. This includes the “citizen’s” libraries people build themselves and the established libraries that people contribute to. I started a kind of library 14 years ago as a computer science graduate student. It serves the Internet as a whole, and I continue to grow it. I also now work for another library that serves a smaller, university-based community with a broader range of collections and services (including some that are enhanced by our users’ contributions). The work I do with one library often enhances the work I do with the other. Many other people are now also building their own libraries, with the help of various tools for collecting, describing, organizing, preserving, and providing access to the information their communities need.
  • Libraries everybody can share. This is a crucial characteristic, distinct from but dependent on the characteristics above. In the past, if my library bought a new book or introduced a new service, it improved the lot of my library’s constituents, but did little or nothing for anyone else’s library. That no longer has to be true. My library, if it’s willing and able, can now share its content, its metadata, and even much of its services and technical infrastructure with any number of other libraries. The costs of turning local resources into shared resources can be very small; the benefits to the users of all these libraries can be very large. In this kind of environment, the improvements that I make in my library can also be turned into improvements in your library, and in someone else’s library– ultimately, in everybody’s libraries.

Most of these characteristics assume lots of libraries, large and small, independently managed but sharing whatever collections, services, knowledge, and other resources they see fit. People sometimes imagine that one day everyone will just use one big “universal library”, containing all knowledge and run by some overarching organization, government, or corporation. I don’t think that’s going to happen, and I hope it doesn’t: people want to collect and use information in too many different ways. The library landscape of the future should support the construction, cooperation, and use of many kinds of libraries– physical, virtual, and hybrid– serving many kinds of communities and needs.

Everybody’s libraries, then, include libraries for everybody, by everybody, shared with everybody, and about everything. No one library is all things to all people, but collectively, they can be much greater than any single library can be. And if we understand and support everybody’s libraries (as I hope to encourage with this blog), we can make each of our own libraries better serve their users.