Everybody's Libraries

November 11, 2010

You do the math

Filed under: open access,publishing,serials,sharing — John Mark Ockerbloom @ 6:02 pm

I recently heard from Peter Murray-Rust that the Central European Journal of Mathematics (CEJM) is looking for graduate students to edit the language of papers they publish.  CEJM is co-published by Versita and Springer Science+Business Media.

Would-be editors are promised their name on the masthead, and references and recommendations from the folks who run the journal.  These perks are tempting to a student (or postdoc) hoping for stable employment, but you can get such benefits working with just about any scholarly journal.  There’s no mention of actual pay for any of this editing work.  (Nor is there any pay for the associate editors they also seek, though those editors are also promised access to the journal’s content.)

The reader’s side of things looks rather different, when it comes to paying. If we look at Springer’s price lists for 2011, for instance, we see that the list price for a 1-year institutional subscription to CEJM is $1401 US for “print and free access or e-only”, or $1681 US for “enhanced access”.  An additional $42 is assessed for postage and handling, presumably waived if you only get the electronic version, but charged otherwise.

This is a high subscription rate even by the standards of commercial math journals.  At universities like mine, scholars don’t pay for the journal directly, but the money the library uses for the subscription is money that can’t be used to buy monographs, or to buy non-Springer journals, or to improve library service to our mathematics scholars.  Mind you, many universities get this journal as part of a larger package deal with Springer.  This typically lowers the price for each journal, but the package often includes a number of lower-interest journals that wouldn’t otherwise be bought.  Large amounts of money are tied up in these “big deals” with large for-profit publishers such as Springer.

Readers who can’t, or won’t, lay out the money for a subscription or a larger package can pay for articles one at a time.  When I tried to look at a recent CEJM article from home, for instance, I was asked to pay $34 before I could read it.  Another option is author-paid open access: CEJM authors who want to make their papers available through the journal without a paywall can do so through Springer’s Open Choice program, at a cost to the author of $3000 US.

So there’s plenty of money involved in this journal.  It’s just that none of it goes to the editors they’re seeking.  Or to the authors of the papers, who submit them for free (or with a $3000 payment).  Or to the peer reviewers of the papers, if this journal works like most other scholarly journals and uses volunteer scholars as referees.  A scholar might justifiably wonder where all this money is going, or what value they get in return for it.

As the editor job ads imply, much of what scholars get out of editing and publishing in journals like these is recognition and prestige.  That, indeed, has value, but it can be had at far lower cost than this journal demands.  CEJM’s website mentions that it’s tracked by major citation services, and has a 0.361 impact factor (a number often used, despite some notable problems, to give a general sense of a journal’s prestige).  Looking through the mathematics section of the Directory of Open Access Journals, I find a number of scholarly journals that are also tracked by citation services, but don’t charge anything to readers, and as far as I can tell don’t charge anything to authors either.  Here are some of them:

Central Europe, besides being the home of CEJM, is also the home of several open access math journals such as Documenta Mathematica (Germany), the Balkan Journal of Geometry and its Applications (Romania), and the Electronic Journal of Qualitative Theory of Differential Equations (Hungary).  For what it’s worth, all of these journals, and all the other open access journals mentioned in this post, currently show higher impact factors in Journal Citation Reports than CEJM does.

Free math journals aren’t limited to central Europe.  Here in the US, the American Mathematical Society makes the Bulletin of the American Mathematical Society free to read online, through the generosity of its members.  And on the campus where I work, Penn’s math department sponsors the Electronic Journal of Combinatorics.

A number of other universities also sponsor open-access journals, promoting their programs, and the findings of scholars worldwide, with low overhead.  For instance, there are two relatively high-impact math journals from Japanese universities: the Kyushu Journal of Mathematics and the Osaka Journal of Mathematics.  The latter journal’s online presence is provided by Project Euclid, a US-based initiative to support low-cost, non-profit mathematics publishing.

Ad-hoc groups of scholars can also organize their own open access journals in their favored specialty.  For instance, Homology, Homotopy and Applications was founded, and is entirely run, by working mathematicians.  Some journals, such as the open access Discrete Mathematics and Theoretical Computer Science, use Open Journal Systems, a free open source publishing software package, to produce high-quality journal websites with little expenditure.

The Proceedings of the Indian Academy of Sciences: Mathematical Sciences is an interesting case.  Like many scholarly societies, the Indian Academy has recently made a deal with a for-profit publisher (Springer, as it turns out) to distribute their journals in print and electronic form.  Unlike many such societies, though, the Academy committed to continuing a free online version of this journal on their own website.

This is a fortunate decision for readers, because libraries that acquire the commercially published version will have to pay Springer $280 per year for basic access and $336 for “enhanced access”, according to their 2011 price list.  True, libraries get a print copy with this more expensive access (if they’re willing to pay Springer another $35 in postage and handling charges).  But the Academy sends out print editions within India for a total subscription price (postage included) of 320 rupees per year.   At today’s exchange rates, that’s less than $8 US.
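
To do that last bit of math explicitly (the exchange rate here is my own approximation; the rupee traded at roughly 45 to the US dollar in late 2010):

$$ 320 \text{ INR} \div 45 \text{ INR/USD} \approx \$7 \qquad\text{versus}\qquad \$336 + \$35 = \$371 \text{ for Springer’s enhanced access with print postage.} $$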

Virtually all journals, whether in mathematics or other scholarly fields, depend heavily on unpaid academic labor for the authorship, refereeing, and in some cases editing of their content.  But, as you can see with CEJM and the no-fee open access journals mentioned above, journals vary widely in the amount of money they also extract from the academic community.  In between these two poles, there are also lots of other high-impact math journals with lower subscription prices, as well as commercial open access math journals with much lower author fees than Springer’s Open Choice.  These journals further diversify the channels of communication among mathematicians, without draining as much of  their funds.

I certainly hope mathematicians and other scholars will continue to volunteer their time and talents to the publication process, both for their benefit and for ours.  But if we optimize where and how we give our time and talent (and our institutional support), both scholars and the public will be better off.  As I’ve shown above, with a little bit of information and attention, there’s no shortage of low-cost, high-quality publishing venues that scholars can use as alternatives to overpriced journals.

October 15, 2010

Journal liberation: A community enterprise

Filed under: copyright,discovery,open access,publishing,serials,sharing — John Mark Ockerbloom @ 2:53 pm

The fourth annual Open Access Week begins on Monday.  If you follow the official OAW website, you’ll be seeing a lot of information about the benefits of free access to scholarly research.  The amount of open-access material grows every day, but much of the research published in scholarly journals through the years is still practically inaccessible to many, due to prohibitive cost or lack of an online copy.

That situation can change, though, sometimes more dramatically than one might expect.  A post I made back in June, “Journal liberation: A Primer”, discussed the various ways in which people can open access to journal content, past and present,  one article or scanned volume at a time.  But things can go much faster if you have a large group of interested liberators working towards a common goal.

Consider the New England Journal of Medicine (NEJM), for example.  It’s one of the most prominent journals in the world, valued both for its reports on groundbreaking new research, and for its documentation, in its back issues, of nearly 200 years of American medical history.  Many far less valuable journals still cannot be read without paying for a subscription, or visiting a research library that has paid for one.  But you can find and read most of NEJM’s content freely online, both past and present.  Several groups of people made this possible.  Here are some of them.

The journal’s publisher has for a number of years provided open access to all research articles more than 6 months old, from 1993 onward.  (Articles less than 6 months old are also freely available to readers in certain developing countries, and in some cases for readers elsewhere as well.)  A registration requirement was dropped in 2007.

Funders of medical research, such as the National Institutes of Health, the Wellcome Trust, and the Howard Hughes Medical Institute, have encouraged publishers in the medical field to maintain or adopt such open access policies, by requiring their grantees (who publish many of the articles in journals like the NEJM) to make their articles openly accessible within months of publication.  Some of these funders also maintain their own repositories of scholarly articles that have appeared in NEJM and similar journals.

Google Books has digitized most of the back run of the NEJM and its predecessor publications as part of its Google Books database.  Many of these volumes are freely accessible to the public.  This is not the only digital archive of this material; there’s also one on NEJM’s own website, but access there requires either a subscription or a $15 payment per article.   Google’s scans, unlike the ones on the NEJM website, include the advertisements that appeared along with the articles.  These ads document important aspects of medical history that are not as easily seen in the articles, on subjects ranging from the evolving requirements and curricula of 19th-century medical schools to the early 20th-century marketing of heroin for patients as young as 3 years old.

It’s one thing to scan journal volumes, though; it’s another to make them easy to find and use– which is why NEJM’s for-pay archive got a fair bit of publicity when it was released this summer, while Google’s scans went largely unnoticed.  As I’ve noted before, it can be extremely difficult to find all of the volumes of a multi-volume work in Google Books; and it’s even more difficult in the case of NEJM, since issues prior to 1928 were published under different journal titles.  Fortunately, many of the libraries that supplied volumes for Google’s scanners have also organized links to the scanned volumes, making it easier to track down specific volumes.  The Harvard Libraries, for instance, have a chronologically ordered list of links to most of the volumes of the journal from 1828 to 1922, a period when it was known as the Boston Medical and Surgical Journal.

For many digitized journals, open access stops after 1922, because of uncertainty about copyright.  However, most scholarly journals have public domain content after that date, so it’s possible to go further if you research journal copyrights.  Thanks to records provided by the US Copyright Office and volunteers for The Online Books Page, we can determine that issues and articles of the NEJM prior to the 1950s did not have their copyrights renewed.  With this knowledge, Hathi Trust has been able and willing to open access to many volumes from the 1930s and 1940s.

We at The Online Books Page can then pull together these volumes and articles from various sources, and create a cover page that allows people to easily get to free versions of this journal and its predecessors all the way back to 1812.

Most of the content of the New England Journal of Medicine has thus been liberated by the combined efforts of several different organizations (and other interested people).  There’s still more that can be done, both in liberating more of the content, and in making the free content easier to find and use.  But I hope this shows how widespread journal liberation efforts of various sorts can free lots of scholarly research.  And I hope we’ll hear about many more free scholarly articles and journals being made available, or made more accessible and usable, during Open Access Week and beyond.

I’ve also had another liberation project in the works for a while, related to books, but I’ll wait until Open Access Week itself to announce it.  Watch this blog for more open access-related news, after the weekend.

July 31, 2010

Keeping subjects up to date with open data

Filed under: data,discovery,online books,open access,sharing,subjects — John Mark Ockerbloom @ 11:51 pm

In an earlier post, I discussed how I was using the open data from the Library of Congress’ Authorities and Vocabularies service to enhance subject browsing on The Online Books Page.  More recently, I’ve used the same data to make my subjects more consistent and up to date.  In this post, I’ll describe why I need to do this, and why doing it isn’t as hard as I feared that it might be.

The Library of Congress Subject Headings (LCSH) is a standard set of subject names, descriptions, and relationships, begun in 1898, and periodically updated ever since. The names of its subjects have shifted over time, particularly in recent years.  For instance, recently subject terms mentioning “Cookery”, a word more common in the 1800s than now, were changed to use the word “Cooking“, a term that today’s library patrons are much more likely to use.

It’s good for local library catalogs that use LCSH to keep in sync with the most up to date version, not only to better match modern usage, but also to keep catalog records consistent with each other.  Especially as libraries share their online books and associated catalog records, it’s particularly important that books on the same subject use the same, up-to-date terms.  No one wants to have to search under lots of different headings, especially obsolete ones, when they’re looking for books on a particular topic.

Libraries with large, long-standing catalogs often have a hard time staying current, however.  The catalog of the university library where I work, for instance, still has some books on airplanes filed under “Aeroplanes”, a term that recalls the long-gone days when open-cockpit daredevils dominated the air.  With new items arriving every day to be cataloged, though, keeping millions of legacy records up to date can be seen as more trouble than it’s worth.

But your catalog doesn’t have to be big or old to fall out of sync.  It happens faster than you might think.   The Online Books Page currently has just over 40,000 records in its catalog, about 1% of the size of my university’s.   I only started adding LC subject headings in 2006.  I tried to make sure I was adding valid subject headings, and made changes when I heard about major term renamings (such as “Cookery” to “Cooking”).  Still, I was startled to find out that only 4 years after I’d started, hundreds of subject headings I’d assigned were already out of date, or otherwise replaced by other standardized headings.  Fortunately, I was able to find this out, and bring the records up to date, in a matter of hours, thanks to automated analysis of the open data from the Library of Congress.  Furthermore, as I updated my records manually, I became confident I could automate most of the updates, making the job faster still.

Here’s how I did it.  After downloading a fresh set of LC subject headings records in RDF, I ran a script over the data that compiled an index of authorized headings (the proper ones to use), alternate headings (the obsolete or otherwise discouraged headings), and lists of which authorized headings were used for which alternate headings. The RDF file currently contains about 390,000 authorized subject headings, and about 330,000 alternate headings.
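
To give a flavor of the indexing step described above, here is a minimal sketch of my own (not the actual script), assuming the LC download is an N-Triples file that uses SKOS labels, with skos:prefLabel marking authorized headings and skos:altLabel marking alternate ones, and assuming the rdflib library is available; the real data also includes other vocabularies and formats.

```python
from collections import defaultdict
from rdflib import Graph, Namespace

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

def build_heading_index(path):
    """Index authorized and alternate headings from an LCSH RDF dump.

    Sketch only: assumes an N-Triples file in which skos:prefLabel marks
    authorized headings and skos:altLabel marks alternate (discouraged) ones.
    """
    g = Graph()
    g.parse(path, format="nt")

    authorized = set()                # headings that are proper to use
    alternates = defaultdict(set)     # alternate heading -> authorized heading(s)

    for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
        authorized.add(str(label))

    for concept, _, label in g.triples((None, SKOS.altLabel, None)):
        # Map each alternate label to the authorized label(s) of its concept.
        for pref in g.objects(concept, SKOS.prefLabel):
            alternates[str(label)].add(str(pref))

    return authorized, alternates

# Example use (hypothetical file name):
# authorized, alternates = build_heading_index("lcsh.nt")
# alternates.get("Cookery")  ->  {"Cooking"}, if that mapping is in the data
```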

Then I extracted all the subjects from my catalog.  (I currently have about 38,000 unique subjects.)  Next, I had a script check each subject to see if it was listed as an authorized heading in the RDF file.  If not, I checked to see if it was an alternate heading.  If neither was the case, and the subject had subdivisions (e.g. “Airplanes — History”), I removed a subdivision from the end and repeated the checks until a term was found in either the authorized or alternate category, or I ran out of subdivisions.
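
That checking loop might look something like the sketch below (my own reconstruction, not the script actually used), working with the authorized and alternate indexes from the previous sketch; the “ -- ” subdivision separator is an assumption, since catalogs punctuate subdivisions in different ways.

```python
def classify_subject(subject, authorized, alternates, sep=" -- "):
    """Check a heading, stripping trailing subdivisions until something matches.

    Returns ("ok", heading), ("replace", heading, candidates), or
    ("unknown", subject).  The " -- " separator is an assumption; adjust it
    to match how your catalog punctuates subdivisions.
    """
    parts = subject.split(sep)
    while parts:
        head = sep.join(parts)
        if head in authorized:
            return ("ok", head)
        if head in alternates:
            return ("replace", head, sorted(alternates[head]))
        parts.pop()                  # drop the last subdivision and try again
    return ("unknown", subject)

# classify_subject("Aeroplanes -- History", authorized, alternates)
# might return ("replace", "Aeroplanes", ["Airplanes"]).
```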

This turned up 286 unique subjects that needed replacement– over 3/4 of 1% of my headings, in less than 4 years.  (My script originally identified even more, until I realized I had to ignore the simple geographic or personal names.  Those aren’t yet in LC’s RDF file, but a few of them show up as alternate headings for other subjects.)  These 286 headings (some of them the same except for subdivisions) represented 225 distinct substitutions.  The bad headings were used in hundreds of bibliographic records, the most popular full heading being used 27 times. The vast majority of the full headings, though, were used in only one record.

What was I to replace these headings with?  Some of the headings had multiple possibilities. “Royalty” was an alternate heading for 5 different authorized headings: “Royal houses”, “Kings and rulers”, “Queens”, “Princes” and “Princesses”.   But that was the exception rather than the rule.  All but 10 of my bad headings were alternates for only one authorized heading.  After “Royalty”, the remaining 9 alternate headings presented a choice between two authorized forms.

When there’s only 1 authorized heading to go to, it’s pretty simple to have a script do the substitution automatically.  As I verified while doing the substitutions manually, nearly all the time the automatable substitution made sense.  (There were a few that didn’t: for instance, when “Mind and body — Early works to 1850” is replaced by “Mind and body — Early works to 1800”, works first published between 1800 and 1850 get misfiled.  But few substitutions were problematic like this– and those involving dates, like this one, can be flagged by a clever script.)

If I were doing the update over again, I’d feel more comfortable letting a script automatically reassign, and not just identify, most of my obsolete headings.  I’d still want to manually inspect changes that affect more than one or two records, to make sure I wasn’t messing up lots of records in the same way; and I’d also want to manually handle cases where more than one term could be substituted.  The rest– the vast majority of the edits– could be done fully automatically.  The occasional erroneous reassignment of a single record would be more than made up for by the repair of many more obsolete and erroneous old records.  (And if my script logs changes properly, I can roll back problematic ones later on if need be.)
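
Here is a rough sketch of that policy (hypothetical code; the record-count threshold and the crude year-detection regular expression are my own assumptions): a substitution goes through automatically only when there is a single authorized candidate, no dates are involved, and only a record or two is affected; everything else is set aside for manual review, and every change is logged so it can be rolled back later.

```python
import csv
import re

YEAR = re.compile(r"\b1[5-9]\d\d\b|\b20\d\d\b")   # crude check for date-bearing headings

def plan_substitutions(bad_headings, alternates, usage_counts, max_auto_records=2):
    """Split obsolete headings into automatic substitutions and manual reviews.

    bad_headings  -- headings that turned out to be alternates, not authorized
    alternates    -- alternate heading -> set of authorized replacements
    usage_counts  -- heading -> number of bibliographic records using it
    """
    automatic, manual = [], []
    for old in bad_headings:
        candidates = sorted(alternates.get(old, ()))
        if (len(candidates) == 1
                and not YEAR.search(old + " " + candidates[0])
                and usage_counts.get(old, 0) <= max_auto_records):
            automatic.append((old, candidates[0]))
        else:
            manual.append((old, candidates))
    return automatic, manual

def log_changes(changes, path="subject_changes.csv"):
    # Keep a log of (old, new) pairs so problematic changes can be rolled back.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerows(changes)
```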

Mind you, now that I’ve brought my headings up to date once, I expect that further updates will be quicker anyway.  The Library of Congress releases new LCSH RDF files about every 1-2 months.  There should be many fewer changes in most such incremental updates than there would be when doing years’ worth of updates all at once.

Looking at the evolution of the Library of Congress catalog over time, I suspect that they do a lot of this sort of automatic updating already.  But many other libraries don’t, or don’t do it thoroughly or systematically.  With frequent downloads of updated LCSH data, and good automated procedures, I suspect that many more could.  I have plans to analyze some significantly larger, older, and more diverse collections of records to find out whether my suspicions are justified, and hope to report on my results in a future post.  For now, I’d like to thank the Library of Congress once again for publishing the open data that makes these sorts of catalog investigations and improvements feasible.

June 11, 2010

Journal liberation: A primer

Filed under: copyright,libraries,open access,publishing,sharing — John Mark Ockerbloom @ 10:07 am

As Dorothea Salo recently noted, the problem of limited access to high-priced scholarly journals may be reaching a crisis point.  Researchers that are not at a university, or are at a not-so-wealthy one, have long been frustrated by journals that are too expensive for them to read (except via slow and cumbersome inter-library loan, or distant library visits).  Now, major universities are feeling the pain as well, as bad economic news has forced budget cuts in many research libraries, even as further price increases are expected for scholarly journals.  This has forced many libraries to consider dropping even the most prestigious journals, when their prices have risen too high to afford.

Recently, for instance, the University of California, which has been subject to significant budget cuts and furloughs, sent out a letter protesting Nature Publishing Group’s proposal to raise their subscription fees by 400%.  The letter raised the possibility of cancelling all university subscriptions to NPG, and having scholars boycott the publisher.

Given that Nature is one of the most prestigious academic journals now publishing, one that has both groundbreaking current articles and a rich history of older articles, these are strong words.  But dropping subscriptions to journals like Nature might not be as much of a hardship for readers as it once might have been.  Increasingly, it’s possible to liberate the research content of academic journals, both new and old, for the world.  And, as I’ll explain below, now may be an especially opportune time to do that.

Liberating new content

While some of the content of journals like Nature is produced by the journal’s editorial staff or other writers for hire, the research papers are typically written by outside researchers, employed by universities and other research institutions.  These researchers hold the original copyright to their articles, and even if they sign an agreement with a journal to hand over rights to them (as they commonly do), they retain whatever rights they don’t sign over.  For many journals, including the ones published by Nature Publishing Group, researchers retain the right to post the submitted version of their paper (known as a “preprint”) in local repositories.  (According to the Romeo database, they can also eventually post the “postprint”– the final draft resulting after peer review, but before actual publication in the journal– under certain conditions.)  These drafts aren’t necessarily identical to the version of record published in the journal itself, but they usually contain the same essential information.

So if you, as a reader, find a reference to a Nature paper that you can’t access, you can search to see if the authors have placed a free copy in an open access repository.  If they haven’t, you can contact one of them to encourage them to do so.  To find out more about providing open access to research papers, see this guide.

If a journal’s normal policies don’t allow authors to share their work freely in an open access repository, authors may still be able to retain their rights with a contract addendum or negotiation.  When that hasn’t worked, some academics have decided to publish in, or review for, other journals, as the California letter suggests.  (When pushed too far, some professors have even resigned en masse from editorial boards to start new journals that are friendlier to authors and readers.)

If nothing else, scholarly and copyright conventions generally respect the right of authors to send individual copies of their papers to colleagues that request them.  Some repository software includes features that make such copies extremely easy to request and send out.  So even if you can’t find a free copy of a paper online already, you can often get one if you ask an author for it.

Liberating historic content

Many journals, including Nature, are important not only for their current papers, but for the historic record of past research contained in their back issues.  Those issues may be difficult to get a hold of, especially as many libraries drop print subscriptions, deaccession old journal volumes, or place them in remote storage.  And electronic access to old content, when it’s available at all, can be surprisingly expensive.  For instance, if I want to read this 3-paragraph letter to the editor from 1872 on Nature‘s web site, and I’m not signed in at a subscribing institution, the publisher asks me to pay them $32 to read it in full.

Fortunately, sufficiently old journals are in the public domain, and digitization projects are increasingly making them available for free.  At this point, nearly all volumes of Nature published before 1922 can now be read freely online, thanks to scans made available to the public by the University of Wisconsin, Google, and Hathi Trust.  I can therefore read the letters from that 1872 issue, on this page, without having to pay $32.

Mass digitization projects typically stop providing public access to content published after 1922, because copyright renewals after that year might still be in force.  However, most scholarly journals– including, as it turns out, Nature — did not file copyright renewals.  Because of this, Nature issues are actually in the public domain in the US all the way through 1963 (after which copyright renewal became automatic).  By researching copyrights for journals, we can potentially liberate lots of scholarly content that would otherwise be inaccessible to many. You can read more about journal non-renewal in this presentation, and research copyright renewals via this site.

Those knowledgeable about copyright renewal requirements may worry that the renewal requirement doesn’t apply to Nature, since it originates in the UK, and renewal requirements currently only apply to material that was published in the US before, or around the same time as, it was published abroad.  However, offering to distribute copies in the US counts as US publication for the purposes of copyright law.  Nature did just that when they offered foreign subscriptions to journal issues and sent them to the US; and as one can see from the stamp of receipt on this page, American universities were receiving copies within 30 days of the issue date, which is soon enough to retain the US renewal requirement.  Using similar evidence, one can establish US renewal requirements for many other journals originating in other countries.

Minding the gap

This still leaves a potential gap between the end of the public domain period and the present.  That gap is only going to grow wider over time, as copyright extensions continue to freeze the growth of the public domain in the US.

But the gap is not yet insurmountable, particularly for journals that are public domain into the 1960s.  If a paper published in 1964 included an author who was a graduate student or a young researcher, that author may well be still alive (and maybe even be still working) today, 46 years later.  It’s not too late to try to track authors down (or their immediate heirs), and encourage and help them to liberate their old work.

Moreover, even if those authors signed away all their rights to journal publishers long ago, or don’t remember if they still have any rights over their own work, they (or their heirs) may have an opportunity to reclaim their rights.  For some journal contributions between 1964 and 1977, copyright may have reverted to authors (or their heirs) at the time of copyright renewal, 28 years after initial publication.  In other cases, authors or heirs can reclaim rights assigned to others, using a termination of transfer.  Once authors regain their rights over their articles, they are free to do whatever they like with them, including making them freely available.

The rules for reversion of author’s rights are rather arcane, and I won’t attempt to explain them all here.  Terminations of transfer, though, involve various time windows when authors have the chance to give notice of termination, and reclaim their rights.  Some of the relevant windows are open right now.   In particular, if I’ve done the math correctly, 2010 marks the first year one can give notice to terminate the transfer of a paper copyrighted in 1964, the earliest year in which most journal papers are still under US copyright.  (The actual termination of a 1964 copyright’s transfer won’t take effect for another 10 years, though.)  There’s another window open now for copyright transfers from 1978 to 1985; some of those terminations can take effect as early as 2013.  In the future, additional years will become available for author recovery of copyrights assigned to someone else.  To find out more about taking back rights you, or researchers you know, may have signed away decades ago, see this tool from Creative Commons.
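
For those who want to check the arithmetic, here is a small sketch of how the dates above fall out under my reading of the statute: pre-1978 copyrights can have transfers terminated during a five-year window beginning 56 years after the copyright date, post-1977 grants during a five-year window beginning 35 years after the grant, and in both cases notice can be served between two and ten years before the chosen effective date.  Treat the parameters as assumptions to verify against the law itself, not as legal advice.

```python
def termination_windows(year, pre1978_copyright=True):
    """Rough timing of US termination-of-transfer windows (not legal advice).

    Assumes: for pre-1978 copyrights, termination may take effect during a
    5-year window starting 56 years after the copyright year (17 USC 304(c));
    for grants made in 1978 or later, during a 5-year window starting 35 years
    after the grant (17 USC 203); notice may be served 2 to 10 years before
    the chosen effective date.
    """
    offset = 56 if pre1978_copyright else 35
    earliest_effective = year + offset
    latest_effective = earliest_effective + 5
    earliest_notice = earliest_effective - 10
    return earliest_notice, earliest_effective, latest_effective

print(termination_windows(1964))         # (2010, 2020, 2025): notice now, effect in 2020
print(termination_windows(1978, False))  # (2003, 2013, 2018): effect as early as 2013
```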

Recognizing opportunity

To sum up, we have opportunities now to liberate scholarly research over the full course of scholarly history, if we act quickly and decisively.  New research can be made freely available through open access repositories and journals.  Older research can be made freely available by establishing its public domain status, and making digitizations freely available.  And much of the research in the not-so-distant past, still subject to copyright, can be made freely available by looking back through publication lists, tracking down researchers and rights information, and where appropriate reclaiming rights previously assigned to journals.

Journal publishing plays an important role in the certification, dissemination, and preservation of scholarly information.  The research content of journals, however, is ultimately the product of scholars themselves, for the benefit of scholars and other knowledge seekers everywhere.   However the current dispute is ultimately resolved between Nature Publishing Group and the University of California, we would do well to remember the opportunities we have to liberate journal content for all.

May 6, 2010

Making discovery smarter with open data

Filed under: architecture,discovery,online books,open access,sharing,subjects — John Mark Ockerbloom @ 9:06 am

I’ve just made a significant data enhancement to subject browsing on The Online Books Page.  It improves the concept-oriented browsing of my catalog of online books via subject maps, where users explore a subject along multiple dimensions from a starting point of interest.

Say you’d like to read some books about logic, for instance.  You’d rather not have to go find and troll all the appropriate shelf sections within math, philosophy, psychology, computing, and wherever else logic books might be found in a physical library.  And you’d rather not have to think of all the different keywords used to identify different logic-related topics in a typical online catalog. In my subject map for logic, you can see lots of suggestions of books filed both under “Logic” itself, and under related concepts.  You can go straight to a book that looks interesting, select a related subject and explore that further, or select the “i” icon next to a particular book to find more books like it.

As I’ve noted previously, the relationships and explanations that enable this sort of exploration depend on a lot of data, which has to come from somewhere.  In previous versions of my catalog, most of it came from a somewhat incomplete and not-fully-up-to-date set of authority records in our local catalog at Penn.  But the Library of Congress (LC) has recently made authoritative subject cataloging data freely available on a new website.  There, you can query it through standard interfaces, or simply download it all for analysis.

I recently downloaded their full data set (38 MB of zipped RDF), processed it, and used it to build new subject maps for The Online Books Page.   The resulting maps are substantially richer than what I had before.  My collection is fairly small by the standards of mass digitization– just shy of 40,000 items– but still, the new data, after processing, yielded over 20,000 new subject relationships, and over 600 new notes and explanations, for the subjects represented in the collection.

That’s particularly impressive when you consider that, in some ways, the RDF data is cruder than what I used before.  The RDF schemas that LC uses omit many of the details and structural cues that are in the MARC subject authority records at the Library of Congress (and at Penn).  And LC’s RDF file is also missing many subjects that I use in my catalog; in particular, at present it omits many records for geographic, personal, and organizational names.

Even so, I lost few relationships that were in my prior maps, and I gained many more.  There were two reasons for this:  First of all, LC’s file includes a lot of data records (many times more than my previous data source), and they’re more recent as well.  Second, a variety of automated inference rules– lexical, structural, geographic, and bibliographic– let me create additional links between concepts with little or no explicit authority data.  So even though LC’s RDF file includes no record for Ontario, for instance, its subject map in my collection still covers a lot of ground.
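
To give a flavor of what such rules can look like (these are simplified illustrations of my own, not the actual rules behind The Online Books Page), here is a structural rule that links a subdivided heading to its broader parents, and a geographic rule that uses a small local gazetteer record of the kind described later in this post:

```python
# Simplified illustrations only; the real inference rules are more elaborate.

def structural_links(headings, sep=" -- "):
    """Link each subdivided heading to its broader parent headings."""
    links = set()
    for h in headings:
        parts = h.split(sep)
        for i in range(1, len(parts)):
            links.add((h, sep.join(parts[:i])))
    return links

# A tiny local gazetteer record of the kind described later in this post:
GEO = {"Ontario": {"within": "Canada", "abbrev": "Ont."}}

def geographic_links(headings, geo=GEO, sep=" -- "):
    """Relate a place-based heading to its containing region via local data."""
    links = set()
    for h in headings:
        place = h.split(sep)[0]
        if place in geo:
            links.add((h, geo[place]["within"]))
    return links

# structural_links({"Ontario -- History"}) -> {("Ontario -- History", "Ontario")}
# geographic_links({"Ontario -- History"}) -> {("Ontario -- History", "Canada")}
```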

A few important things make these subject maps possible, and will help them get better in the future:

  • A large, shared, open knowledge base: The Library of Congress Subject Headings have been built up by dedicated librarians at many institutions over more than a century.  As a shared, evolving resource, the data set supports unified searching and browsing over numerous collections, including mine.  The work of keeping it up to date, and in sync with the terms that patrons use to search, can potentially be spread out among many participants.  As an open resource, the data set can be put to a variety of uses that both increase the value of our libraries and encourage the further development of the knowledge base.
  • Making the most of automation: LC’s website and standards make it easy for me to download and process their data automatically. Once I’ve loaded their data, and my own records, I then invoke a set of automated rules to infer additional subject relationships.  None of the rules is especially complex; but put together, they do a lot to enhance the subject maps. Since the underlying data is open, anyone else is also free to develop new rules or analyses (or adapt mine, once I release them).  If a community of analyzers develops, we can learn from each other as we go.  And perhaps some of the relationships we infer through automation can be incorporated directly into later revisions of LC’s own subject data.
  • Judicious use of special-purpose data: It is sometimes useful to add to or change data obtained from external sources.  For example, I maintain a small supplementary data file on major geographic areas.  A single data record saying that Ontario is a region within Canada, and is abbreviated “Ont.”, generates much of my subject map for Ontario.  Soon, I should also be able to re-incorporate local subject records, as well as arbitrary additional overlays, to fill in conceptual gaps in LC’s file.  Since local customizations can take  a lot of effort to maintain, however, it’s best to try to incorporate local data into shared knowledge bases when feasible.  That way, others can benefit from, and add on to, your own work.

Recently, there’s been a fair bit of debate about whether to treat cataloging data as an open public good, or to keep it more restricted.  The Library of Congress’ catalog data has been publicly accessible online for years, though until recently you could only get a little at a time via manual searches, or pay a large sum for a one-time data dump.  By creating APIs, using standard semantic XML formats, and providing free, unrestricted data downloads for their subject authority data, LC has made their data much easier for others to use in a variety of ways.  It’s improved my online book catalog significantly, and can also improve many other catalogs and discovery applications.  Those of us who use this data, in turn, have incentives to work to improve and sustain it.

Making the LC Subject Headings ontology open data makes it both more useful and more viable as libraries evolve.  I thank the folks at the Library of Congress for their openness with their data, and I hope to do my part in improving and contributing to their work as well.

January 1, 2010

Public domain day 2010: Drawing up the lines

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 12:01 am

As we celebrate the beginning of the New Year, we also mark Public Domain Day (a holiday I’ve been regularly celebrating on this blog.)  This is the day when a year’s worth of copyrights expire in many countries around the world, and the works they cover become free for anyone to use and adapt for any purpose.

In many countries, this is a bittersweet time for fans of the public domain.  For instance, this site notes the many authors whose works enter the public domain today in Europe, now that they’ve been dead for at least 70 years.  But for many European countries, this just represents reclaimed ground that had been previously lost.  Europe retroactively extended and revived copyrights from life+50 to life+70 years in 1993, so it’s still three more years before Europe’s public domain is back to what it was then.  Many other countries, including the United States, Australia, Russia, and Mexico, are in the midst of public domain freezes.  Here in the US, for instance, thanks to a 1998 copyright extension, no copyrights of published works will expire due to age for at least another 9 years.

In the past, many people have had only a vague idea of what’s in the public domain and what isn’t.  But thanks to mass book digitization projects, the dividing line is becoming clearer.  Millions of books published before 1923 (the year of the oldest US copyrights) are now digitized, and can be found with a simple Google search and read in full online.  At the same time, millions more digitized books from 1923 and later can also be found with searches, but are not freely readable online.

Many of those works not freely readable online have languished in obscurity for a long time.   Some of them can be shown to be in the public domain after research, and groups like Hathi Trust are starting to clear and rescue many such works.  Some of them are still under copyright, but long out of print, and may have unknown or unreachable rightsholders.  The current debate over Google Books has raised the profile of these  works, so much so that the New York Times cited “orphan books”, a term used to describe such unclearable works, as one of the buzzwords of 2009.

The dividing line between the public domain and the world of copyright could well have been different.   In 1953, for instance, US copyrights ran for a maximum of 56 years, and the last of that year’s copyrights would have expired today, were it not for extensions.  Duke’s Center for the Study of the Public Domain has a page showing what could have been entering the public domain today– everything up to the close of the Korean War.  In contrast, if the current 95-year US terms had been in effect all of last century, the copyrights of 1914 would have only expired today.  Only now would we be able to start freely digitizing the first set of books from the start of World War I.

With the dividing line better known nowadays, do we have hope of protecting the public domain against more expansions of copyright?  Many countries, including Canada and New Zealand, still stick to the life+50 years term of the Berne Convention.  In those countries, works from authors who died in 1959 enter the public domain for the first time.  There’s pressure on some of these countries to increase their terms, so far resisted.  Efforts to extend copyrights on sound recordings continue in Europe, and recently succeeded in Argentina.  And secret ACTA treaty negotiations are also aimed at increasing the power of copyright holders over Internet and computer users.

But resistance to these expansions of copyright is on the rise, and public awareness of copyright extensions and their deleterious effects is quite a bit higher now than when Europe and the US extended their copyrights in the 1990s.  And with concerns expressed by a number of parties over a possible Google monopoly on orphan books, one can envision building up a critical mass of interest in freeing more of these books for all to use.

So today I celebrate the incremental expansion of the public domain, and hope to help increase it further. To that end, I have a few gifts of my own.  As in previous years, I’m freeing all the copyrights I control for publications (including public online postings) that are more than 14 years old today, so any such works published in 1995 and before are now dedicated to the public domain.  Unfortunately, I don’t control the copyright of the 1995 paper that is my most widely cited work, but at least there’s an early version openly accessible online.

I can also announce the completion of a full set of digitized active copyright renewal records for drama and works prepared for oral delivery, available from this page.  This should make it easier for people to verify the public domain status of plays, sermons, lectures, radio programs, and similar works from the mid-20th century that to date have not been clearable using online resources.  We’ve also put online many copyright renewal records for images, and hope to have a complete set of active records not too far into 2010.  Among other things, this will help enable the full digitization of book illustrations, newspaper photographs, and other important parts of the historical record that might be otherwise omitted or skipped by some mass digitization projects.

Happy Public Domain Day!  May we have much to enjoy this day, and on many more Public Domain Days to come.

(Edited later in the day January 1 to fix an inaccurately worded sentence.)

October 26, 2009

Promoting access to the best literature of the past

Filed under: online books,open access — John Mark Ockerbloom @ 3:24 pm

Last week saw widespread observance of Open Access Week 2009.  The week primarily focused on opening access to current research and scholarship (though there’s also been a growing community working on opening access to teaching and learning content).  You can find lots of open access resources at the Open Access Directory.

Current scholarship is not spontaneously generated from the brain or lab of the writer.  Useful scholarship must understand and interpret past work, to be effective in the present.  In many fields, and not just the classical humanities, the relevant past work may stretch back hundreds or even thousands of years.  Current scholarship and study will be more effective if its source material is also made openly accessible, and if proper attention is drawn to the most useful sources.  And now is an especially opportune time for scholars of all sorts, professional and amateur, to get involved in the process.

This may seem a strange thing to say at a time when the digitization of old books and other historic materials is increasingly dominated by large-scale projects like Google and the Internet Archive.  With mass digitizers putting millions of public domain book and journal volumes online, and with a near-term possibility of millions more copyrighted volumes going online as well, how much of a role is left for individual scholars and readers?

A very important role, as it turns out.  Mass digitization projects can quickly produce large-scale aggregations of past content, but as many have pointed out, aggregation is not the same as curation, and as aggregations grow larger, being able to find the right items in a growing collection becomes increasingly important.  That’s what curation helps us do, and the large-scale digitizers are not doing a very effective job of it themselves.  Google’s PageRank algorithm may take advantage of implicit curation of web pages (through authors’ choices of page links), but Google and other aggregators have had a much harder time drawing attention to the most useful books, scholarly articles, or other works created without built-in hyperlinks.

Sometimes this is because they haven’t digitized them, even as they’ve digitized inferior substitutes.  Over three years after Paul Duguid lamented the republication of a bowdlerized translation of Knut Hamsun’s Pan by Project Gutenberg, that version remains the only freely available one of this book there, or at Google Books, or anywhere else online that I’ve found.  Even though an unexpurgated version of this translation was published before the bowdlerized version, no digitizer that I know of has gotten around to finding and digitizing it; and countless readers may have used the existing online copies without even knowing that they’ve been censored.  Extra bibliographic and copyright research may be necessary to determine whether a better resource is available for digitization, as it is in this case.

Sometimes the content is digitized, but can’t be found easily.  Geoff Nunberg’s post on Google Books’ “metadata train wreck” shows plenty of examples of how difficult it can be to find and properly identify a particular edition in Google Books, much less figure out which edition is the best one to use.  I’ve commented in the past about the challenges of finding multi-volume works in that corpus.  And Peter Jacso has pointed out Google’s problems indexing current scholarship.  If you can’t find the paper or book you need for your research, your work will be no better than it would be if the source had never existed.

This is where scholars can potentially play a useful role.  We don’t individually digitize books by the thousands, but we do individually find, cite, and recommend useful sources, down to the particular edition, as we find them and use them in our own writings and teaching.  These citations and recommendations now often go online, in various locations.  It would be very useful to have these recommendations made more visible, and tied to freely available online copies of the sources cited, whenever legally possible. Sometimes, we also create or digitize our own editions of past works, with useful annotations, for our classes or our own work.  It would be very useful to have these made visible and persistent as well, whenever appropriate.

I hope that large resource aggregations will make it easier for scholars and others to curate the collections to make them more useful to their readers.  In the meantime, we can start with resources we have.  For example, on The Online Books Page, my catalog entry for Hamsun’s Pan notes its limitations.  My public requests page includes information on a better edition that could be digitized, by someone who has access to the edition and has some time to spare.  And my suggestion form is ready to accept links to better editions of this book, or to other online books that merit special attention.  Indeed, most of the books that I now add to my catalog derive from submissions made by various readers on this form, and I invite scholars to suggest the freely accessible books and serials that they find most useful for my catalog.

As the Little Professor notes in a recent post, the sort of bibliographic work I’ve described can be time-consuming but vitally important for making effective use of old sources, and that work has often not been done by anyone for many books outside the usual classical canons.  Yet it’s the sort of thing that scholars do, bit by bit, as part of their everyday work.  The aggregate effect of their curation and digitization, appropriately harnessed in open-access form, could greatly improve our ability to build upon the work of the past.

September 17, 2009

Google Book settlement: Alternatives and alterations

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 1:35 pm

In my previous post, I worried that the Google Books settlement might fall apart in the face of opposition from influential parties like the Copyright Office, and that such a collapse might deprive the public of meaningful access to millions of out of print books.

Not everyone sees it that way.  I’ve seen various suggestions of alternatives to the settlement for making these books available.  In this post, I’ll describe some of the suggested alternatives, explain why they don’t seem to me as likely to succeed on their own, and discuss how some of them could still go forward under a settlement.

Compulsory licenses

Both the Open Book Alliance’s court filings and the Copyright Office’s testimony mention the possibility of compulsory licensing, which essentially lets people use a copyrighted work without getting permission, provided that they meet standard conditions determined by the government.  Compulsory licenses already exist in certain areas, such as musical performances and broadcasts.  If I want to cover a Beatles song on my new record, I can, as long as I meet some basic conditions, including paying a standard royalty.  The (remaining) Beatles can’t hold out for a higher rate, or say that no one else is allowed to cover the recordings they’ve released.

The Google Books settlement has some similarities to a compulsory license, but with some important differences, including:

  1. Book rightsholders can choose to deny public uses of their work, or hold out for higher compensation, which they generally can’t do under a compulsory license regime. (They have to explicitly request this, though.  So it’s really what one might call a “default” license.)
  2. The license has been negotiated through a court settlement rather than Congressional action. (This was one of the main complaints of the Copyright Office.)
  3. The license given in the settlement is granted only to Google, not to other digitizers. (This has justifiably raised monopoly concerns.)

I do have a problem with the last difference as it stands.  I’d like to see the license widened so that anyone, not just Google, could digitize and make available out of print books under the same terms as Google. But there are various ways we can get to that point from the settlement.  The Book Rights Registry created by the settlement could extend Google-like rights to anyone else under the same terms, as the settlement permits them to do.  The Justice Department could require them to do so as part of an antitrust supervision.  Or Congress could decide to codify the license to apply generally.  (They’ve done this sort of thing before with fair use and the first sale doctrine, both of which originated in the courts.)

If the settlement falls apart, though, negotiation over an appropriate license has to start over from scratch, and has to persuade Congress to loosen copyrights for benefits they might not clearly see. As I suggested in my previous post, Congress’ recent tendencies have heavily favored tightening, rather than loosening, copyright control.   And I haven’t yet seen a strong coalition pushing for laws granting compulsory (or default) licenses that are as broad as would be needed.

For instance, the Open Book Alliance’s amicus brief mentions a compulsory license as “but one approach”, and that suggestion seems as much aimed at getting hold of Google’s scans as at licensing the book copyrights themselves.  Their front page at present shows no explicit advocacy of compulsory copyright licenses.  Perhaps they will unite behind a workable Google Books-style compulsory license proposal in the future, but I’m not counting on that.  (Update: Just after I posted this, I saw this statement of principles go up on the OBA site.  We’ll see what develops from that.)

The Copyright Office’s congressional brief also mentions the idea, but tries to damp it down.  It repeatedly characterizes compulsory licensing as something that Congress does only “reluctantly” and “in the face of marketplace failure”.  But despite its strong words on other subjects, it does not appear concerned over whether we in fact have a marketplace failure around broad access to out-of-print books.

Orphan works legislation

The Copyright Office filing also suggests passing orphan works legislation (as have various other parties, including Google).  An orphan works limitation on copyrights would be nice, but it’s not going to enable the sort of large, comprehensive historical corpus that the Google Books settlement would allow.

As Denise Troll Covey has pointed out, the orphan works certification requirements recommended in last year’s bill, like many other case-by-case copyright clearance procedures, are labor-intensive and slow, and may be legally risky.  (In particular, the overhead for copyright clearance, not including license payment, can be several times the cost of digitization.)  Hence, these methods are not likely to scale well.  And they would not cover the many out-of-print books that aren’t, strictly speaking, orphans.  I don’t consider it likely that a near-comprehensive library  of millions of out-of-print 20th century books will come about by this route alone any time soon.

Even so, despite its limited reach, last year’s orphan works legislation was stopped in Congress after some creator organizations objected to it.  Some of the objectors, including the  National Writers Union and the American Society of Journalists and Authors, are now members of  the Open Book Alliance, which makes me wonder how effectively that group would act as a united coalition for copyright reform.

Private negotiation

Some critics suggest that Google and other digitizers simply negotiate with each rightsholder, or a mediator designated by each rightsholder.  It’s possible that this actually might work for many future books, if authors and publishers set up comprehensive clearinghouses (the way ASCAP and Harry Fox mediate music licensing).  If new books get registered with agents like these going forward, with simple, streamlined digital rights clearing, private arrangement could work well for future books both in-print and out-of-print.  Indeed, Google’s default settlement license privileges don’t apply to new books from 2009 onward.

But it’s much less likely that this will be a practical solution to build a comprehensive collection of past out of print books from the 20th and early 21st century, because of the sheer difficulty and cost of determining and locating all the current rightsholders of books long out of print.   The friction involved in such negotiation (involving high average cost for low average compensation) is too great.  Without the settlement and/or legal reform, we risk having what James Boyle called a “20th century black hole” for books.

Copyright law reform

As James Boyle points out, it would solve a lot of the problems that keep old books  in obscurity if books didn’t get exceedingly long copyrights purely by default.  It would also help if fair use and public domain determination weren’t as risky as they are now. I’d love to see all that come to pass, but no one I know that’s knowledgeable on copyright issues is holding their breath waiting for it to happen any time soon.

Moving forward

As I’ve previously mentioned, the settlement is imperfect.  It may well need antitrust supervision, and future elaboration and extension.  (And I’ve suggested some ways that libraries and others can work to improve on it.)  It’s still the most promising starting point I’ve seen for making comprehensive, widely usable, historic digital book collections possible.  I hope that we get the chance to build on it, instead of throwing away the opportunity.  In any case, I’d be happy to hear people’s thoughts and comments about the best way to move forward.

September 15, 2009

Google Books, and missing the opportunities you don’t see

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 9:12 pm

The Google Books settlement fairness hearing is still a few weeks away, but in the last few weeks the deal has been talked and shouted about with ever-higher volume.  Still, it wasn’t until the other day, in a House Judiciary Committee hearing where US Copyright Register Marybeth Peters came loaded for bear, that I started thinking there was a significant likelihood that the settlement might fall apart.

There are a number of people in different communities, including libraries, who hope this  happens.   I’m not one of them.  I’m not a lawyer, so I can’t comment with authority on whether the settlement is sound law.  But I’m quite confident that it advances good policy.  In particular, it’s one of the best feasible opportunities to bring a near-comprehensive view of the knowledge and culture of the 20th and early 21st centuries into widespread use.  And I worry that, should the settlement break down, we will not have another opportunity like it any time soon.  The settlement has flaws, like the Google Books Project itself has, but at the same time, like Google Books itself, the deal the settlement offers is incredibly useful to readers, while also giving writers new opportunities to revive, and be paid for, their out of print work.

The potential

Under the status quo, millions of books are greatly under-utilized.  It isn’t just that people don’t have easy access to them; it’s that people don’t know that particular books useful to them exist in the first place.  I work in a library that has collected millions of volumes, many of which are hardly ever checked out. Not only would Google’s expanded books offerings give our users access to millions more books, but it would also make millions of books that we already own easier for our users to find and use effectively.

Want to know what books make mention of a particular event, ancestor, or idea?  With existing libraries, and good search skills, you might be able to find books, if any, that are written primarily about those things. But you’ll probably miss much other information on those same topics, information in works that are primarily about something else.  With expanded search, and the ability to preview old book content, it could be much easier to get a more comprehensive view on a topic, and find out which books are worth obtaining for learning more.

And if that’s a big advance for people in big universities like ours, it’s an even bigger step forward for people who have not had easy access to big research libraries.  Once a search turns up a book of interest, Google Books would offer a searcher various ways of getting that book: buying online access; reading it at their library’s computer (either via a paid subscription, or via a free public access terminal); buying a print copy; or getting an inter-library loan.  These options all involve various trade-offs of cost and convenience, as is the case with libraries today.  While one could wish for better tradeoff terms, the ones proposed still represent big advances from what one can easily do today.

And as with other large online collections like Wikipedia or WorldCat, or the Web as a whole, the advantages to large book corpuses like Google’s aren’t just in the individual items, but in what can be done with the aggregation.  I don’t know exactly what new kinds of things people will find to do with a near-comprehensive collection of  20th century books, but having seen all that people have done with other information aggregated on the Internet, I’m confident that there would be many great uses found, large and small.

The peril

If the Google settlement does fall apart, are we likely to see any collection like the one it envisions any time soon?  I’m not at all confident we will.  The basic problem is that, without some sort of blanket license, it’s impractical (and in the case of true orphan works, currently impossible) to clear all the copyrights that would be required to build such a collection.  This represents a failure in copyright law.  Instead of “promot[ing] the progress of science and useful arts”, as the Constitution requires, current US copyright law effectively keeps millions of out-of-print books in obscurity, not producing significant benefits either to their creators or to their potential users.

The current proposed Google Books settlement is, among other things, an attempt to get around this failure.  If the settlement fails, would the parties make a new agreement that would allow a readable collection of millions of post-1922 online books?  The divergence in the complaints I’ve seen (for instance, on one hand that the collection would cost readers too much, and on another hand that it would pay writers too little) suggest the difficulty of coming to a new consensus that satisfies all the parties, if negotiations have to start again from scratch.  And, if the arguments of the Copyright Office and some of the other parties carry the day, even if such an agreement were reached, the agreement could not be ratified by a court anyway.  Instead, it would require acts of Congress, and maybe even re-negotiations of international treaties.

Based on past history, there are two things that would make the government likely to reform copyright law to permit mass reuse of out-of-print books.  Either there needs to be a clear example of the benefits of such a reform, or there needs to be a strong coalition pushing for it.  Clear examples have usually come from businesses that are actually in operation; for example, the player piano roll industry that successfully persuaded Congress to streamline music copyright clearance in the previous century (or the Betamax that persuaded a slender majority of the Supreme Court to declare the VCR legal).

If the proposed Google Books library service goes online, even under a flawed initial settlement, it too could provide a compelling example to encourage general copyright reform.  But without such an example, it can be hard to move Congress to act.   It’s easy to undervalue the opportunities you don’t clearly see.

What about a strong coalition pushing for a reform in the law that would let anyone create the comprehensive online collections of out of print books I’d described?  I’d like to see one, but I haven’t yet.  (Yes, there’s the Open Book Alliance, but its members don’t seem to be united on much in particular beyond objecting to the settlement.)  In my next post, I’ll discuss reforms that might do the job, and the reasons I believe they would be difficult to enact without the settlement.
