Everybody's Libraries

May 6, 2010

Making discovery smarter with open data

Filed under: architecture,discovery,online books,open access,sharing,subjects — John Mark Ockerbloom @ 9:06 am

I’ve just made a significant data enhancement to subject browsing on The Online Books Page.  It improves the concept-oriented browsing of my catalog of online books via subject maps, where users explore a subject along multiple dimensions from a starting point of interest.

Say you’d like to read some books about logic, for instance.  You’d rather not have to go find and troll all the appropriate shelf sections within math, philosophy, psychology, computing, and wherever else logic books might be found in a physical library.  And you’d rather not have to think of all the different keywords used to identify different logic-related topics in a typical online catalog. In my subject map for logic, you can see lots of suggestions of books filed both under “Logic” itself, and under related concepts.  You can go straight to a book that looks interesting, select a related subject and explore that further, or select the “i” icon next to a particular book to find more books like it.

As I’ve noted previously, the relationships and explanations that enable this sort of exploration depend on a lot of data, which has to come from somewhere.  In previous versions of my catalog, most of it came from a somewhat incomplete and not-fully-up-to-date set of authority records in our local catalog at Penn.  But the Library of Congress (LC) has recently made authoritative subject cataloging data freely available on a new website.  There, you can query it through standard interfaces, or simply download it all for analysis.

I recently downloaded their full data set (38 MB of zipped RDF), processed it, and used it to build new subject maps for The Online Books Page.   The resulting maps are substantially richer than what I had before.  My collection is fairly small by the standards of mass digitization– just shy of 40,000 items– but still, the new data, after processing, yielded over 20,000 new subject relationships, and over 600 new notes and explanations, for the subjects represented in the collection.
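For readers who want to experiment with the same download, here is a minimal sketch, in Python with rdflib, of pulling relationships and notes out of the file.  It is an illustration, not the production code behind my subject maps, and it assumes the SKOS serialization of the data (skos:prefLabel, skos:broader, skos:related, skos:note).

    # A minimal sketch (not the production code behind my subject maps) of
    # harvesting subject relationships from the LCSH download, assuming the
    # SKOS serialization of the data.
    from rdflib import Graph, Namespace

    SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

    def harvest(rdf_path, headings_in_collection):
        """Collect relationship and note records for headings a catalog uses."""
        g = Graph()
        g.parse(rdf_path)          # RDF/XML; a file this size takes a while to load
        wanted = set(headings_in_collection)

        labels = {}                # concept URI -> preferred label
        for concept, _, lit in g.triples((None, SKOS.prefLabel, None)):
            labels[concept] = str(lit)

        links, notes = [], []
        for concept, name in labels.items():
            if name not in wanted:
                continue
            for _, _, broader in g.triples((concept, SKOS.broader, None)):
                if broader in labels:
                    links.append((name, "broader", labels[broader]))
            for _, _, related in g.triples((concept, SKOS.related, None)):
                if related in labels:
                    links.append((name, "related", labels[related]))
            for _, _, note in g.triples((concept, SKOS.note, None)):
                notes.append((name, str(note)))
        return links, notes

    # e.g. harvest("lcsh.rdf", ["Logic", "Logic, Symbolic and mathematical"])

A pass like this only captures the links that are explicit in the file; headings the file omits need extra handling, which is where the inference rules described below come in.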

That’s particularly impressive when you consider that, in some ways, the RDF data is cruder than what I used before.  The RDF schemas that LC uses omit many of the details and structural cues that are in the MARC subject authority records at the Library of Congress (and at Penn).  And LC’s RDF file is also missing many subjects that I use in my catalog; in particular, at present it omits many records for geographic, personal, and organizational names.

Even so, I lost few relationships that were in my prior maps, and I gained many more.  There were two reasons for this:  First of all, LC’s file includes a lot of data records (many times more than my previous data source), and they’re more recent as well.  Second, a variety of automated inference rules– lexical, structural, geographic, and bibliographic– let me create additional links between concepts with little or no explicit authority data.  So even though LC’s RDF file includes no record for Ontario, for instance, its subject map in my collection still covers a lot of ground.
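To give a flavor of what one of these rules can look like, here is a sketch of a purely structural rule (an illustration of the general idea, not my actual rule set): a subdivided heading implies links to each of its parent strings, whether or not an authority record covers the full heading.

    # Illustrative structural rule (a sketch of the idea, not the actual rule
    # set): a subdivided LCSH string implies browsable links to each of its
    # parent headings, whether or not an authority record covers the full string.
    def structural_links(heading):
        parts = heading.split(" -- ")
        return [" -- ".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

    structural_links("Ontario -- History -- 1791-1841")
    # -> ['Ontario -- History', 'Ontario']

Rules like this, combined with the geographic and bibliographic ones, are how a subject such as Ontario can get a usable map without any LC record behind it.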

A few important things make these subject maps possible, and will help them get better in the future:

  • A large, shared, open knowledge base: The Library of Congress Subject Headings have been built up by dedicated librarians at many institutions over more than a century.  As a shared, evolving resource, the data set supports unified searching and browsing over numerous collections, including mine.  The work of keeping it up to date, and in sync with the terms that patrons use to search, can potentially be spread out among many participants.  As an open resource, the data set can be put to a variety of uses that both increase the value of our libraries and encourage the further development of the knowledge base.
  • Making the most of automation: LC’s website and standards make it easy for me to download and process their data automatically. Once I’ve loaded their data, and my own records, I then invoke a set of automated rules to infer additional subject relationships.  None of the rules is especially complex; but put together, they do a lot to enhance the subject maps. Since the underlying data is open, anyone else is also free to develop new rules or analyses (or adapt mine, once I release them).  If a community of analyzers develops, we can learn from each other as we go.  And perhaps some of the relationships we infer through automation can be incorporated directly into later revisions of LC’s own subject data.
  • Judicious use of special-purpose data: It is sometimes useful to add to or change data obtained from external sources.  For example, I maintain a small supplementary data file on major geographic areas.  A single data record saying that Ontario is a region within Canada, and is abbreviated “Ont.”, generates much of my subject map for Ontario.  Soon, I should also be able to re-incorporate local subject records, as well as arbitrary additional overlays, to fill in conceptual gaps in LC’s file.  Since local customizations can take a lot of effort to maintain, however, it’s best to try to incorporate local data into shared knowledge bases when feasible.  That way, others can benefit from, and add on to, your own work.  (A small sketch of how one such record can expand into map links follows this list.)
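To make that last point concrete, here is a sketch of how a single supplementary record might expand into subject map links.  The record format, function, and link labels here are illustrative assumptions, not my actual data file or code.

    # Hypothetical sketch: the record format and rule below are illustrative
    # assumptions, not the actual supplementary data file or map-building code.
    GEO_OVERLAY = {
        "Ontario": {"within": "Canada", "abbrev": "Ont."},
    }

    def geographic_links(heading, overlay=GEO_OVERLAY):
        """Infer subject-map links for a heading from supplementary geographic data."""
        links = []
        for place, rec in overlay.items():
            if heading == place:
                links.append((heading, "part of", rec["within"]))
            elif heading.startswith(place + " -- "):
                links.append((heading, "filed under", place))
            elif "(" + rec["abbrev"] + ")" in heading:
                # e.g. "Toronto (Ont.)" groups under the Ontario map
                links.append((heading, "located in", place))
        return links

    geographic_links("Toronto (Ont.)")      # [('Toronto (Ont.)', 'located in', 'Ontario')]
    geographic_links("Ontario -- History")  # [('Ontario -- History', 'filed under', 'Ontario')]

One small record, combined with headings already in the catalog, can thus account for a large share of a regional subject map.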

Recently, there’s been a fair bit of debate about whether to treat cataloging data as an open public good, or to keep it more restricted.  The Library of Congress’ catalog data has been publicly accessible online for years, though until recently you could only get a little at a time via manual searches, or pay a large sum for a one-time data dump.  By creating APIs, using standard semantic XML formats, and providing free, unrestricted downloads of their subject authority data, LC has made their data much easier for others to use in a variety of ways.  It’s improved my online book catalog significantly, and can also improve many other catalogs and discovery applications.  Those of us who use this data, in turn, have incentives to work to improve and sustain it.

Making the LC Subject Headings ontology open data makes it both more useful and more viable as libraries evolve.  I thank the folks at the Library of Congress for their openness with their data, and I hope to do my part in improving and contributing to their work as well.

January 1, 2010

Public domain day 2010: Drawing up the lines

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 12:01 am

As we celebrate the beginning of the New Year, we also mark Public Domain Day (a holiday I’ve been regularly celebrating on this blog.)  This is the day when a year’s worth of copyrights expire in many countries around the world, and the works they cover become free for anyone to use and adapt for any purpose.

In many countries, this is a bittersweet time for fans of the public domain.  For instance, this site notes the many authors whose works enter the public domain today in Europe, now that they’ve been dead for at least 70 years.  But for many European countries, this just represents reclaimed ground that had previously been lost.   Europe retroactively extended and revived copyrights from life+50 to life+70 years in 1993, so it’s still three more years before Europe’s public domain is back to what it was then.  Many other countries, including the United States, Australia, Russia, and Mexico, are in the midst of public domain freezes.  For instance, because of a 1998 copyright extension, no copyrights of published works will expire here in the US due to age for another 9 years, at least.

In the past, many people have had only a vague idea of what’s in the public domain and what isn’t.  But thanks to mass book digitization projects, the dividing line is becoming clearer.  Millions of books published before 1923 (the year of the oldest copyrights still in force in the US) are now digitized, and can be found with a simple Google search and read in full online.  At the same time, millions more digitized books from 1923 and later can also be found with searches, but are not freely readable online.

Many of those works not freely readable online have languished in obscurity for a long time.   Some of them can be shown to be in the public domain after research, and groups like Hathi Trust are starting to clear and rescue many such works.  Some of them are still under copyright, but long out of print, and may have unknown or unreachable rightsholders.  The current debate over Google Books has raised the profile of these  works, so much so that the New York Times cited “orphan books”, a term used to describe such unclearable works, as one of the buzzwords of 2009.

The dividing line between the public domain and the world of copyright could well have been different.   In 1953, for instance, US copyrights ran for a maximum of 56 years, and the last of that year’s copyrights would have expired today, were it not for extensions.  Duke’s Center for the Study of the Public Domain has a page showing what could have been entering the public domain today– everything up to the close of the Korean War.  In contrast, if the current 95-year US terms had been in effect all of last century, the copyrights of 1914 would have only expired today.  Only now would we be able to start freely digitizing the first set of books from the start of World War I.

With the dividing line better known nowadays, do we have hope of protecting the public domain against more expansions of copyright?  Many countries still stick to the life+50 years term of the Berne Convention, including Canada and New Zealand.  In those countries, works from authors who died in 1959 enter the public domain today for the first time.  There’s pressure on some of these countries to increase their terms, so far resisted.  Efforts to extend copyrights on sound recordings continue in Europe, and recently succeeded in Argentina.  And secret ACTA treaty negotiations are also aimed at increasing the power of copyright holders over Internet and computer users.

But resistance to these expansions of copyright is on the rise, and public awareness of copyright extensions and their deleterious effects is quite a bit higher now than when Europe and the US extended their copyrights in the 1990s.  And with concerns expressed by a number of parties over a possible Google monopoly on orphan books, one can envision building up a critical mass of interest in freeing more of these books for all to use.

So today I celebrate the incremental expansion of the public domain, and hope to help increase it further. To that end, I have a few gifts of my own.  As in previous years, I’m freeing all the copyrights I control for publications (including public online postings) that are more than 14 years old today, so any such works published in 1995 and before are now dedicated to the public domain.  Unfortunately, I don’t control the copyright of the 1995 paper that is my most widely cited work, but at least there’s an early version openly accessible online.

I can also announce the completion of a full set of digitized active copyright renewal records for drama and works prepared for oral delivery, available from this page.  This should make it easier for people to verify the public domain status of plays, sermons, lectures, radio programs, and similar works from the mid-20th century that to date have not been clearable using online resources.  We’ve also put online many copyright renewal records for images, and hope to have a complete set of active records not too far into 2010.  Among other things, this will help enable the full digitization of book illustrations, newspaper photographs, and other important parts of the historical record that might be otherwise omitted or skipped by some mass digitization projects.

Happy Public Domain Day!  May we have much to enjoy this day, and on many more Public Domain Days to come.

(Edited later in the day January 1 to fix an inaccurately worded sentence.)

October 26, 2009

Promoting access to the best literature of the past

Filed under: online books,open access — John Mark Ockerbloom @ 3:24 pm

Last week saw widespread observance of Open Access Week 2009.  The week primarily focused on opening access to current research and scholarship (though there’s also been a growing community working on opening access to teaching and learning content).  You can find lots of open access resources at the Open Access Directory.

Current scholarship is not spontaneously generated from the brain or lab of the writer.  Useful scholarship must understand and interpret past work, to be effective in the present.  In many fields, and not just the classical humanities, the relevant past work may stretch back hundreds or even thousands of years.  Current scholarship and study will be more effective if its source material is also made openly accessible, and if proper attention is drawn to the most useful sources.  And now is an especially opportune time for scholars of all sorts, professional and amateur, to get involved in the process.

This may seem a strange thing to say at a time when the digitization of old books and other historic materials is increasingly dominated by large-scale projects like Google and the Internet Archive.  With mass digitizers putting millions of public domain book and journal volumes online, and with a near-term possibility of millions more copyrighted volumes going online as well, how much of a role is left for individual scholars and readers?

A very important role, as it turns out.  Mass digitization projects can quickly produce large-scale aggregations of past content, but as many have pointed out, aggregation is not the same as curation, and as aggregations grow larger, being able to find the right items in a growing collection becomes increasingly important.  That’s what curation helps us do, and the large-scale digitizers are not doing a very effective job of it themselves.  Google’s PageRank algorithm may take advantage of implicit curation of web pages (through the choices of authors’ page links), but Google and other aggregators have had a much harder time drawing attention to the most useful books, scholarly articles, or other works created without built-in hyperlinks.

Sometimes this is because they haven’t digitized them, even as they’ve digitized inferior substitutes.  Over three years after Paul Duguid lamented the republication of a bowdlerized translation of Knut Hamsun’s Pan by Project Gutenberg, that version remains the only one of this book freely available there, at Google Books, or anywhere else online that I’ve found.   Even though an unexpurgated version of this translation was published before the bowdlerized version, no digitizer that I know of has gotten around to finding and digitizing it; and countless readers may have used the existing online copies without even knowing that they’ve been censored.  Extra bibliographic and copyright research may be necessary to determine whether a better resource is available for digitization, as it is in this case.

Sometimes the content is digitized, but can’t be found easily.  Geoff Nunberg’s post on Google Books’ “metadata train wreck” shows plenty of examples of how difficult it can be to find and properly identify a particular edition in Google Books, much less figure out which edition is the best one to use.  I’ve commented in the past about the challenges of finding multi-volume works in that corpus.  And Peter Jacso has pointed out Google’s problems indexing current scholarship.  If you can’t find the paper or book you need for your research, your work will be no better than it would be if the source had never existed.

This is where scholars can potentially play a useful role.  We don’t individually digitize books by the thousands, but we do individually find, cite, and recommend useful sources, down to the particular edition, as we find them and use them in our own writings and teaching.  These citations and recommendations now often go online, in various locations.  It would be very useful to have these recommendations made more visible, and tied to freely available online copies of the sources cited, whenever legally possible. Sometimes, we also create or digitize our own editions of past works, with useful annotations, for our classes or our own work.  It would be very useful to have these made visible and persistent as well, whenever appropriate.

I hope that large resource aggregations will make it easier for scholars and others to curate the collections to make them more useful to their readers.  In the meantime, we can start with resources we have.  For example, on The Online Books Page, my catalog entry for Hamsun’s Pan notes its limitations.  My public requests page includes information on a better edition that could be digitized, by someone who has access to the edition and has some time to spare.  And my suggestion form is ready to accept links to better editions of this book, or to other online books that merit special attention.  Indeed, most of the books that I now add to my catalog derive from submissions made by various readers on this form, and I invite scholars to suggest the freely accessible books and serials that they find most useful for my catalog.

As the Little Professor notes in a recent post, the sort of bibliographic work I’ve described can be time-consuming but vitally important for making effective use of old sources, and that work has often not been done by anyone for many books outside the usual classical canons.  Yet it’s the sort of thing that scholars do, bit by bit, as part of their everyday work.  The aggregate effect of their curation and digitization, appropriately harnessed in open-access form, could greatly improve our ability to build upon the work of the past.

September 17, 2009

Google Book settlement: Alternatives and alterations

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 1:35 pm

In my previous post, I worried that the Google Books settlement might fall apart in the face of opposition from influential parties like the Copyright Office, and that such a collapse might deprive the public of meaningful access to millions of out of print books.

Not everyone sees it that way.  I’ve seen various suggestions of alternatives to the settlement for making these books available.  In this post, I’ll describe some of the suggested alternatives, explain why they don’t seem to me as likely to succeed on their own, and discuss how some of them could still go forward under a settlement.

Compulsory licenses

Both the Open Book Alliance’s court filings and the Copyright Office’s testimony mention the possibility of compulsory licensing, which essentially lets people use a copyrighted work without getting permission, provided that they meet standard conditions determined by the government.  Compulsory licenses already exist in certain areas, such as musical performances and broadcasts.  If I want to cover a Beatles song on my new record, I can, as long as I meet some basic conditions, including paying a standard royalty.  The (remaining) Beatles can’t hold out for a higher rate, or say that no one else is allowed to cover the songs they’ve released.

The Google Books settlement has some similarities to a compulsory license, but with some important differences, including:

  1. Book rightsholders can choose to deny public uses of their work, or hold out for higher compensation, which they generally can’t do under a compulsory license regime. (They have to explicitly request this, though.  So it’s really what one might call a “default” license.)
  2. The license has been negotiated through a court settlement rather than Congressional action. (This was one of the main complaints of the Copyright Office.)
  3. The license given in the settlement is granted only to Google, not to other digitizers. (This has justifiably raised monopoly concerns.)

I do have a problem with the last difference as it stands.  I’d like to see the license widened so that anyone, not just Google, could digitize and make available out of print books under the same terms as Google. But there are various ways we can get to that point from the settlement.  The Book Rights Registry created by the settlement could extend Google-like rights to anyone else under the same terms, as the settlement permits them to do.  The Justice Department could require them to do so as part of an antitrust supervision.  Or Congress could decide to codify the license to apply generally.  (They’ve done this sort of thing before with fair use and the first sale doctrine, both of which originated in the courts.)

If the settlement falls apart, though, negotiation over an appropriate license has to start over from scratch, and has to persuade Congress to loosen copyrights for benefits they might not clearly see. As I suggested in my previous post, Congress’ recent tendencies have heavily favored tightening, rather than loosening, copyright control.   And I haven’t yet seen a strong coalition pushing for laws granting compulsory (or default) licenses that are as broad as would be needed.

For instance, the Open Book Alliance’s amicus brief suggests the possibility of a compulsory license, though only as “but one approach”, and that suggestion seems aimed as much at getting hold of Google’s scans as at licensing the book copyrights themselves.  Their front page at present shows no explicit advocacy of compulsory copyright licenses.  Perhaps they will unite behind a workable Google Books-style compulsory license proposal in the future, but I’m not counting on that.  (Update: Just after I posted this, I saw this statement of principles go up on the OBA site.  We’ll see what develops from that.)

The Copyright Office’s congressional brief also mentions the idea, but tries to damp it down.  It repeatedly characterizes compulsory licensing as something that Congress only does “reluctantly” and “in the face of marketplace failure”.  But despite its strong words on other subjects, it does not appear concerned over whether we do in fact have a marketplace failure around broad access to out-of-print books.

Orphan works legislation

The Copyright Office filing also suggests passing orphan works legislation (as have various other parties, including Google).  An orphan works limitation on copyrights would be nice, but it’s not going to enable the sort of large, comprehensive historical corpus that the Google Books settlement would allow.

As Denise Troll Covey has pointed out, the orphan works certification requirements recommended in last year’s bill, like many other case-by-case copyright clearance procedures, are labor-intensive and slow, and may be legally risky.  (In particular, the overhead for copyright clearance, not including license payment, can be several times the cost of digitization.)  Hence, these methods are not likely to scale well.  And they would not cover the many out-of-print books that aren’t, strictly speaking, orphans.  I don’t consider it likely that a near-comprehensive library  of millions of out-of-print 20th century books will come about by this route alone any time soon.

Even so, despite its limited reach, last year’s orphan works legislation was stopped in Congress after some creator organizations objected to it.  Some of the objectors, including the  National Writers Union and the American Society of Journalists and Authors, are now members of  the Open Book Alliance, which makes me wonder how effectively that group would act as a united coalition for copyright reform.

Private negotiation

Some critics suggest that Google and other digitizers simply negotiate with each rightsholder, or with a mediator designated by each rightsholder.   It’s possible that this might actually work for many future books, if authors and publishers set up comprehensive clearinghouses (the way ASCAP and Harry Fox mediate music licensing).  If new books get registered with agents like these going forward, with simple, streamlined digital rights clearing, private arrangements could work well for future books both in print and out of print.  Indeed, Google’s default settlement license privileges don’t apply to new books from 2009 onward.

But it’s much less likely that this will be a practical solution to build a comprehensive collection of past out of print books from the 20th and early 21st century, because of the sheer difficulty and cost of determining and locating all the current rightsholders of books long out of print.   The friction involved in such negotiation (involving high average cost for low average compensation) is too great.  Without the settlement and/or legal reform, we risk having what James Boyle called a “20th century black hole” for books.

Copyright law reform

As James Boyle points out, it would solve a lot of the problems that keep old books  in obscurity if books didn’t get exceedingly long copyrights purely by default.  It would also help if fair use and public domain determination weren’t as risky as they are now. I’d love to see all that come to pass, but no one I know that’s knowledgeable on copyright issues is holding their breath waiting for it to happen any time soon.

Moving forward

As I’ve previously mentioned, the settlement is imperfect.  It may well need antitrust supervision, and future elaboration and extension.  (And I’ve suggested some ways that libraries and others can work to improve on it.)  It’s still the most promising starting point I’ve seen for making comprehensive, widely usable, historic digital book collections possible.  I hope that we get the chance to build on it, instead of throwing away the opportunity.  In any case, I’d be happy to hear people’s thoughts and comments about the best way to move forward.

September 15, 2009

Google Books, and missing the opportunities you don’t see

Filed under: copyright,online books,open access — John Mark Ockerbloom @ 9:12 pm

The Google Books settlement fairness hearing is still a few weeks away, but in the last few weeks the deal has been talked and shouted about with ever-higher volume.  Still, it wasn’t until the other day, in a House Judiciary Committee hearing where US Copyright Register Marybeth Peters came loaded for bear, that I started thinking there was a significant likelihood that the settlement might fall apart.

There are a number of people in different communities, including libraries, who hope this happens.   I’m not one of them.  I’m not a lawyer, so I can’t comment with authority on whether the settlement is sound law.  But I’m quite confident that it advances good policy.  In particular, it’s one of the best feasible opportunities to bring a near-comprehensive view of the knowledge and culture of the 20th and early 21st centuries into widespread use.  And I worry that, should the settlement break down, we will not have another opportunity like it any time soon.  The settlement has flaws, as the Google Books Project itself does; but like Google Books itself, the deal the settlement offers is incredibly useful to readers, while also giving writers new opportunities to revive, and be paid for, their out of print work.

The potential

Under the status quo, millions of books are greatly under-utilized.  It isn’t just that people don’t have easy access to them; it’s that people don’t know that particular books useful to them exist in the first place.  I work in a library that has collected millions of volumes, many of which are hardly ever checked out. Not only would Google’s expanded books offerings give our users access to millions more books, but it would also make millions of books that we already own easier for our users to find and use effectively.

Want to know what books make mention of a particular event, ancestor, or idea?  With existing libraries, and good search skills, you might be able to find books, if any, that are written primarily about those things. But you’ll probably miss much other information on those same topics, information in works that are primarily about something else.  With expanded search, and the ability to preview old book content, it could be much easier to get a more comprehensive view on a topic, and find out which books are worth obtaining for learning more.

And if that’s a big advance for people in big universities like ours, it’s an even bigger step forward for people who have not had easy access to big research libraries.  Once a search turns up a book of interest, Google Books would offer a searcher various ways of getting that book: buying online access; reading it at their library’s computer (either via a paid subscription, or via a free public access terminal); buying a print copy; or getting an inter-library loan.  These options all involve various trade-offs of cost and convenience, as is the case with libraries today.  While one could wish for better tradeoff terms, the ones proposed still represent big advances from what one can easily do today.

And as with other large online collections like Wikipedia or WorldCat, or the Web as a whole, the advantages to large book corpuses like Google’s aren’t just in the individual items, but in what can be done with the aggregation.  I don’t know exactly what new kinds of things people will find to do with a near-comprehensive collection of  20th century books, but having seen all that people have done with other information aggregated on the Internet, I’m confident that there would be many great uses found, large and small.

The peril

If the Google settlement does fall apart, are we likely to see any collection like the one it envisions any time soon?  I’m not at all confident we will.  The basic problem is that, without some sort of blanket license, it’s impractical (and in the case of true orphan works, currently impossible) to clear all the copyrights that would be required to build such a collection.  This represents a failure in copyright law.  Instead of “promot[ing] the progress of science and useful arts”, as the Constitution requires, current US copyright law effectively keeps millions of out-of-print books in obscurity, not producing significant benefits either to their creators or to their potential users.

The current proposed Google Books settlement is, among other things, an attempt to get around this failure.  If the settlement fails, would the parties make a new agreement that would allow a readable collection of millions of post-1922 online books?  The divergence in the complaints I’ve seen (for instance, on the one hand that the collection would cost readers too much, and on the other that it would pay writers too little) suggests the difficulty of coming to a new consensus that satisfies all the parties, if negotiations have to start again from scratch.  And, if the arguments of the Copyright Office and some of the other parties carry the day, even if such an agreement were reached, the agreement could not be ratified by a court anyway.  Instead, it would require acts of Congress, and maybe even re-negotiations of international treaties.

Based on past history, there are two things that would make the government likely to reform copyright law to permit mass reuse of out-of-print books.  Either there needs to be a clear example of the benefits of such a reform, or there needs to be a strong coalition pushing for it.  Clear examples have usually come from businesses actually in operation; for example, the player piano roll industry that successfully persuaded Congress to streamline music copyright clearance in the previous century (or the Betamax that persuaded a slender majority of the Supreme Court to declare the VCR legal).

If the proposed Google Books library service goes online, even under a flawed initial settlement, it too could provide a compelling example to encourage general copyright reform.  But without such an example, it can be hard to move Congress to act.   It’s easy to undervalue the opportunities you don’t clearly see.

What about a strong coalition pushing for a reform in the law that would let anyone create the comprehensive online collections of out of print books I’d described?  I’d like to see one, but I haven’t yet.  (Yes, there’s the Open Book Alliance, but its members don’t seem to be united on much in particular other than objecting to the settlement.)  In my next post, I’ll discuss reforms that might do the job, and the reasons I believe they would be difficult to enact without the settlement.

April 23, 2009

David Reed: Some extracts from his life and letters

Filed under: online books,people — John Mark Ockerbloom @ 11:36 pm

Last summer I was looking for a particular book. I couldn’t find it in any library in my State. Went interlibrary loans and found one copy at the library of Congress. Only one copy in the whole country. One of the best stories I ever [heard] about this is one when one of my professors was working on a trash pile of papyrus sheets and came across one that said [it] was the works of Meander. He went through that pile of papyrus with a fine tooth comb. He didn’t find anything but that single piece. He said that it felt as though he was looking across the centuries and saying, “Somewhere out there are the works of Meander.” [Friends,] this is how things get lost forever.

David Reed, 1997

Today, there are thousands of important books that will likely never share that fate as long as civilization lasts, because they were digitized and sent all over the world.  Many of these books were first put online by Project Gutenberg.  And many of the Project Gutenberg texts are online thanks to the work of David Reed.

I scanned and released Gibbon’s Decline and Fall of the Roman Empire and hardly a day goes by when I don’t get an email from someone thanking me for releasing it on the web. At one site I know that it has been downloaded 1800+ times in all six volumes.

David Reed, 2001

In the mid-1990s, Project Gutenberg had an outlandish-sounding goal: to make 10,000 books freely available online by the start of the 21st century.  At that point, they’d only managed to put a couple hundred online.  Authors like Clifford Stoll were skeptical that they, or anyone else, would ever reach such a goal.

But Gutenberg was soon publishing more and more texts every month, at an ever-increasing pace.   Lots of those texts had David Reed’s name on them.  Working persistently with his own scanner, well before the era of well-funded mass digitization, he digitized and proofread long works that few other people at the time would have taken on: Gibbon’s Decline and Fall; Shakespeare’s First Folio;  Josephus’ Antiquities of the Jews; Frazer’s Golden Bough; Tocqueville’s Democracy in America.  He also scanned numerous works, weighty and light, from authors like Rudyard Kipling, Louisa May Alcott, Robert Frost, James Joyce, and the US government.

Some critics in academia complained that the books David and others put up for Gutenberg were not up to the standards of scholarly editions.  David didn’t begrudge the work of scholars, but he wanted to put up more works, more quickly, to reach a broader audience.  As he put it in 1999:

[I] think that [it's] important to remember that we do all this work because we like to read and we like to share our discoveries with others…. I see no reason why the text specialists can’t have the specialist collections and the general people (like myself) have the general collections. There is room enough on the web for all of us. The real enemy are those who want to lock up all the books in the world. The real enemy are those who don’t read a single book.

David was fighting another enemy besides illiteracy, one closer to home. He had diabetes, and in the last few years of his life his health slowly worsened from complications of that disease. He didn’t mention it in this post (nor, as far as I can remember, in any of the posts he made to the Book People mailing list, from which these quotations are taken). But even while his health was failing, he continued to put books online, like this emergency childbirth manual that was posted this past October.  He was working to fulfill a dream that he described back in his 1999 post:

I dream of the day when we have 50,000 and 100,000 etext libraries on the web. Where there are 100 new etexts being released a week or every couple of days. When I can’t keep up with reading every etext that pops up on the Online Book Page or that Project Gutenberg releases. I appreciate all the work that you are all doing. I love reading the work that you are all doing.

David died on April 21, 2009, according to the email his son Chris sent to David’s contacts list.  By then, Google Books and the Internet Archive’s book collection had made over 1 million books freely available online, the various Gutenberg projects had posted just over 30,000 books, and many smaller projects had posted numerous unique titles as well.  He lived long enough to see his dream come true, thanks in part to his own pioneering work and dedication.

I have dedicated etexts in honor of my daughter, my sons, my wife, parents and in honor of my companies I work for, even in honor of myself.

David Reed, 2001

Out there all over the Net, in millions of replicas, are the works of David Reed: transcriptions of many of the great authors who have also passed on.  In some sense, all of those works are dedicated to him.  Through them, I hope his name lives on for generations to come.

March 30, 2009

How to find complete multi-volume works in Google Books

Filed under: online books — John Mark Ockerbloom @ 10:15 pm

While Google’s agreement on copyrighted books has been the subject of much discussion lately, they’ve also been continuing to add public domain titles at a brisk pace.  For instance, they announced in February that they now had 1.5 million public domain volumes formatted for mobile devices.  And last week, they noted that they had completed their scans of hundreds of thousands of volumes of 19th century public domain books from Oxford’s Bodleian library.

If you look at the three example book links in their Oxford post, you’ll notice that each of them goes to a volume of a multi-volume edition.   Works from the nineteenth century and before were often originally published in multiple volumes, such as the “three-decker” format common for Victorian novels.  When such books are reprinted today, they’re usually printed as a single volume, but to read many Google titles in full, you’ll have to range over multiple volumes.

Unfortunately, as various readers have noted, it can be quite difficult to find readable copies of all of the volumes in a multi-volume edition.  For various reasons, they often don’t all come up when you do a search for a particular title.  This can make readers think there are no complete digital editions of a work they’re seeking, even when there are.

In working with people who have helped me fill requests for public domain books, I’ve compiled a series of techniques for finding complete multi-volume sets in Google Books.  I’d be happy to hear additional tips from readers.

  • First, do a search for full-view volumes of the work you’re looking for.  One good way to do this is to go to Google’s advanced book search page, select the “full view only” option, and enter author and title words in the appropriate blanks.  (A scripted sketch of this step appears after this list.)
  • If you get a hit, check the start and the end of the scan, to verify which volumes are actually present. Sometimes you’ll find more than one volume in the scan, either because multiple volumes were bound together, or because Google combined volumes in its scan.
  • Go to the “about this book” page for the scan, and look in the lower regions to see if there is an “Other editions” section. This often includes links to other volumes, not just other editions. If there’s a “See more” at the bottom of such a section, click on it to see more volumes or editions.  (Sometimes Google will have multiple editions as well as multiple volumes for the same work.  It’s best when possible to compile volumes from the same edition.  You can do this by matching publishers and dates between volumes, though keep in mind that some multivolume editions came out over the course of multiple years.  Editions from different publishers, or from different times, may have inconsistent content, and might not divide into volumes at the same points.)
  • If the book is from the University of Michigan (as reported either in the “about this book” page or in the scanned front pages) check the Mirlyn catalog for the book. Sometimes this will turn up volumes scanned by Google that have been put in the Hathi Trust repository, or in Google Book Search itself, but that for some reason don’t show up in an ordinary Google books search. Some other Hathi Trust libraries also have links to digitizations of their content; see this page for details.
  • If this didn’t turn up all the volumes you’re looking for, repeat the process above for the other volumes in your initial hit list. Sometimes those will have “Other editions” links to additional volumes that didn’t appear with the earlier hits.
  • If you manage to complete a set this way, consider sharing your success with other readers.  If you fill in my book suggestion form with the volumes you find,  I can list a neatly consolidated edition of all the volumes on The Online Books Page, and help other people avoid going through all the trouble you just did.  (Give the book’s title, URL for the first volume, and other information in the appropriate blanks, and then add URLs for subsequent volumes in the “Anything else we should know?” section of the form.)
  • Even if you only partially succeeded, if it’s a work you’re particularly interested in you can use my suggestion form to let me know what you’ve been able to find.  If I can’t easily find the other volumes myself, I can at least list what was found on my works-in-progress page. With luck, someone coming along later will find or digitize the remaining volumes, and I can list the set.
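For those who prefer scripting the first step above, here is a rough sketch using Google’s public Books API, a later interface than the advanced-search page this post describes; treat the parameter choices (for instance, filter=free-ebooks as an approximation of “full view only”) as assumptions rather than an official recipe.

    # Rough sketch using the public Google Books API (a later interface than
    # the manual advanced-search workflow described above); parameter choices
    # are assumptions, not an official recipe.
    import json
    import urllib.parse
    import urllib.request

    def find_full_view(title, author):
        query = urllib.parse.urlencode({
            "q": "intitle:%s inauthor:%s" % (title, author),
            "filter": "free-ebooks",   # roughly the "full view only" restriction
            "maxResults": 40,
        })
        url = "https://www.googleapis.com/books/v1/volumes?" + query
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        for item in data.get("items", []):
            info = item.get("volumeInfo", {})
            print(info.get("title"), info.get("publishedDate"),
                  info.get("publisher"), item.get("id"))

    # find_full_view("The Decline and Fall of the Roman Empire", "Gibbon")

Matching publishers and dates across the results, to make sure the volumes come from the same edition, remains a manual judgment call.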

Similar techniques can be used for compiling runs of historic serials, which are also present in Google, and can be of great interest to readers.

If you find these suggestions useful, I hope you’ll help me compile sets of your favorite public domain works, so we can take advantage of all this wonderful old material that Google and others are digitizing.

January 27, 2009

Neil Gaiman wins Newbery medal; more Newbery honorees go online

Filed under: awards,copyright,online books — John Mark Ockerbloom @ 7:46 pm

I just got back from a whirlwind trip to Denver for ALA Midwinter.    While I was there, they announced the winner of this year’s Newbery medal: Neil Gaiman‘s Graveyard Book.  I’ve been hoping to get around to this book– but if it’s anywhere near as well-written as Gaiman’s other juvenile titles (like Coraline), the Newbery committee chose well.  (You can hear an interview with Neil, and an excerpt from the book, on NPR’s website.)  Congratulations to Neil, and to the other authors who won Newbery honors and ALA’s other awards for children’s books this year.

When I blogged about last year’s Newbery awards, I noted that most of the 1922 medalists and honorees were online, and that about a dozen later Newbery honorees were also out of copyright and could go online.  Since then, my partner-in-bookery Mary has found and digitized many of those later books for a special Celebration of Women Writers exhibit, “Newbery Honor Books and Medal Winners by Women, 1922-1964“.  You can also find these and other online prize-winners from my Prize Winning Books Online page.

(As I mentioned in a previous post, I went out to ALA Midwinter to give a couple of talks on the future of library catalog interfaces and data.  I’ll have a post with pointers to slides and notes for those talks shortly.)
