Everybody's Libraries

December 30, 2012

Persistence

Filed under: failure,online books,people,preservation — John Mark Ockerbloom @ 6:02 pm

The week between Christmas and New Year’s is mostly time off for me– I’ve added no new listings to The Online Books Page this past week, for instance– but even on vacation, as long as I have a working Internet connection I still tend to fix bad links as I hear about them from readers’ reports.  I try to draw from a variety of free online book sources, instead of just a few big ones; that’s worthwhile to me because it increases the diversity of titles and editions on the site.  But the tradeoff is that many of these sites disappear, reorganize, or otherwise have links go bad over time.  I’m grateful to my readers for reporting bad links to me, and I can often fix other bad links to the same site when I fix the one reported to me.

The links and sites that persist, and those that don’t, often aren’t the ones you might expect.  Who’d have thought, for instance, that a shoestring-budget project that didn’t even maintain its own website until fairly recently would have the longest-lived (and still one of the largest) electronic book collections in common use, outlasting many better-funded or more systematically planned projects (as well as its own doggedly persistent original champion)?  Although the links to Project Gutenberg’s ebooks have changed over the years, the persistence of their etext numbers, and the proliferation of Gutenberg sites and mirrors, have made it relatively easy for me to keep links working for their more than 40,000 ebooks.

Some library-sponsored sites use persistent link redirection technologies, such as PURLs, to keep their links working.  But technology alone isn’t sufficient for persistence.  I recently had to update all of my links going to a PURL-based library consortium site.  I’m sure the people who worked at the organization hosting the site would have kept the links working if they could, but the organization itself was defunded by the state, and its functions were taken over by a new agency that didn’t preserve the links.

Fortunately, the failure had a couple of graceful aspects that eased recovery.  First of all, the old links didn’t stop working altogether, but redirected to the front page of a digital repository in which people could search for the titles they were looking for.  Second, the libraries in the consortium still maintained their own websites, and the old links included a serial number unique to each text (similar to Gutenberg’s etext numbers) that was also used by member libraries.  I found that in most cases I could automatically rewrite my links, using that serial number, so that they would point to a copy at a contributing library’s website.  Even though the rewritten links went to entirely new sites, updating them this way was often easier than fixing links to sites that persist but reorganize.  (For instance, when sites have switched to new content management systems with completely different URLs from their old designs, I’ve had to manually relocate and verify each link one at a time.)
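
To give a concrete sense of what that automatic rewriting can look like, here’s a minimal sketch in Python.  The URL patterns are invented for the example (the real consortium and library addresses were different); the point is just that a stable serial number shared across sites makes mechanical link repair possible.

    import re

    # Hypothetical URL patterns: the real consortium and library URLs
    # differed, but both embedded the same serial number for each text.
    OLD_LINK = re.compile(r"https?://purl\.example\.org/texts/(?P<serial>\d+)")
    NEW_LINK = "https://library.example.edu/digital/text/{serial}"

    def rewrite_link(url: str) -> str | None:
        """Rewrite an old consortium link to a member-library link,
        reusing the serial number embedded in the old URL.  Returns
        None if the URL doesn't match (those need manual review)."""
        m = OLD_LINK.fullmatch(url)
        return NEW_LINK.format(serial=m.group("serial")) if m else None

    for url in ["http://purl.example.org/texts/12345",
                "http://elsewhere.example.com/other/page"]:
        print(url, "->", rewrite_link(url) or "needs manual review")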

Sometimes I have to replace links that still “work”, technically.  I used to have thousands of links to a Canadian consortium that provided free access to scanned public domain books and pamphlets from that country’s history.  Not long ago, I discovered that while my links still worked, the site had moved to a subscription model where readers have to pay for access beyond the first dozen or so pages of each text.  Given the precarious state of Canadian library funding, I’m sure the people running the site were simply doing what they thought necessary to ensure the persistence of the sponsoring organization (which continues to provide new electronic texts and services).  Personally, however, I was more concerned about the persistence of free access to the digitized texts I’d pointed to.  Fortunately, a number of the consortium’s member libraries had also uploaded copies of their scans to the Internet Archive, using the same serial numbers used on the Canadian consortium’s website.  As a result, I was able to quickly update most of my links to point to the Internet Archive’s copies.  I intend to track down working alternative links to the 200 or so remaining texts, or post requests seeking other copies of them, when time permits.  (I’ve also sent along a donation to the Internet Archive, in part to thank them for continuing to provide access to texts like these.)

It’s been said in digital library literature that persistence of identifiers is more a matter of policy than technology.  Based on the experiences I’ve related above, the practical persistence of links is even more a matter of will than of policy: the will (and ability) to keep maintaining access through changing conditions; the willingness to consider alternatives to specific organizational structures or policies if the original ones turn out not to be tenable; the willingness to pick things up again, or let others pick them up, after a failure.

It’s also clear from my experience that, practically speaking, failure is not the main enemy of persistence.  More of a threat is not recovering from failure, or being so worried about failure that one doesn’t even begin to sustain the thing or the purpose that should persist.  To riff off a famous G. K. Chesterton quote: if something is worth doing, it’s worth being willing and ready to fail at doing it.  And then to be willing to pick up again where you left off, or to make it easy for someone else to pick it up, and try something new.

That’s persistence.  That’s what’s ultimately gotten the dissertation rewritten, the estates settled, the blog picked up again, the books put and kept online for the world to read, and many other things I’ve found worthwhile, despite difficulties, anxieties, and setbacks.  I value that persistence, and I hope you value it as well, for the things you find worthwhile. I look forward to seeing where it takes us in the year to come.

October 7, 2011

My mother’s orphan

Filed under: copyright,findingada,online books,open access,people,preservation,sharing,teaching — John Mark Ockerbloom @ 5:06 pm

Before my mother was pregnant with me, she was working on a book.

The book had begun its gestation at least a year before. She had been teaching math in Massachusetts, and was involved with the Madison Project, one of the initiatives that arose from the “new math” movement of the 1960s.  What excited her, and what I caught from her not long after I was born, was the sense of discovery and play that was encouraged in the Madison teaching style.  The primary focus wasn’t so much on imparting and drilling facts and rules, or on mundane applications, but on finding patterns, solving puzzles, and figuring out the secrets of numbers and geometry and the other mathematical constructs that underlie our world. Some project participants planned a series of books that would help bring out this sense of discovery and exploration in math classes.

Two small children in the house may have delayed my mother’s ambitions, but we didn’t stop her.  When I was in kindergarten, the piles of papers in my parents’ bedroom went away, and my mother proudly showed me her new book.  The book, Discoveries in Essential Mathematics, was co-written with Ramon Steinen, and published by Charles E. Merrill. Though the textbook was written for middle schoolers, I remember reading through the book after my mother showed it to me, solving the simpler problems, and smiling when I saw my name or my sister’s in an example.

She got small royalty checks for a few years, but the book was out of print by the late 1970s, never reaching a second edition.  We kept some copies in our basement, but I didn’t know of any library that held it.  When I visited the Library of Congress as a middle schooler, wrongly convinced that they had every book ever published, I remember my disappointment when I couldn’t find Mom’s book in their card catalog.

My mother eventually retired from teaching, and the enthusiasm and talent for math I’d gotten from Mom shifted into computing, and then into digital libraries.  And when my kids reached school age, I decided to try putting her book online.  In an era of large classes, detailed state standards, and high-stakes standardized tests, it might not be a viable standard textbook any more, but I think it’s still great for curious kids who show an interest in math.

Mom thought that was a great idea.  But she didn’t know if she could grant permission on her own.  Although the book was long out of print, its copyright had automatically renewed in 2000 under US copyright law, and she wasn’t sure if she had to get the consent of her publisher or co-author before she could give me the go-ahead.  She didn’t know how to reach her co-author, and her old imprint was long gone.  Even its acquirer had itself been acquired by a large conglomerate some time ago.  So I let the idea drop, thinking I’d come back to it later when I had a little time to research the copyright.

But not long after, she started a long slide into dementia, and was soon in no position to give permission to anyone.  If her book had been practically an “orphan work” before, due to uncertainty over rights, it was even more so now.  There was no trouble locating the author, but there was no way of getting valid permission from someone definitely known to hold the rights.

Mom died this past winter, four years after my Dad had reluctantly moved her into the nursing home for good, and four weeks after he’d made his usual daily visit, gone back home, and had a fatal heart attack.  After we paid the last of the bills, and threw out the contents of the basement (where a burst pipe ruined all the books, papers, and other things they kept down there), what remained of what they had would now go to me and my siblings.

I still had a copy at home of the teacher’s edition of Mom’s book that she had once given to Grandma.  And between my mother’s funeral and the burst pipe, I’d taken a student edition out of their basement for my kids to read.  But any faint hope of finding publishing contracts or rights assignment documents was obliterated after the pipe burst.  The basic questions were: had Mom signed her rights to the book away, as many academic authors do? If so, had she gotten them back at some point?  Or had she never had the rights in the first place, as sometimes happens with textbook authors under “work for hire” contracts?

The copyright page of the book, and the record in the 1972 Catalog of Copyright Entries, show the publisher as the copyright claimant, so I couldn’t assume she had the rights.   But I also doubted whether I could get a clear answer, or reasonable licensing terms, from the company that had eventually acquired the assets of Mom’s original publisher.

I eventually found what I needed to know on a trip to Washington, DC.  While attending a meeting on digital format registries, I realized that I was in the same building as the Copyright Office.   So after the meeting, I got a reader’s card, went upstairs, and consulted the librarians there.  We confirmed that, under the automatic renewal laws of the time, the copyright to Mom’s book would have reverted in 2000 to whoever had been declared the “author” in the book in the original registration record.   Moreover, in the absence of any contrary arrangement, any co-owner of a copyright can authorize publication, as long as they split any proceeds with the other copyright owners.

Since I was planning just to put the book online for free, the only question remaining was who had been listed as the author on the original registration: the publisher who claimed the copyright, or my mother and Dr. Steinen?  It’s not clear from the Catalog of Copyright Entries, but the original registration certificate would state it.  And the one copy known to exist of that certificate was in the archives of the Copyright Office where I was sitting.

Twenty minutes later, I had the certificate in front of me.  The name on the “claimant” line was indeed the publisher’s, but the names on the “author” line were Steinen and Ockerbloom.  My mother’s orphan was mine to claim.

There are a lot more books out there like hers.  Since I added records for Hathi Trust‘s public domain books to The Online Books Page, I’ve gotten requests to curate hundreds of out-of-print, largely forgotten books that are still meaningful to readers online.  Many of the people who opt to leave contact information live in places where books tend to be hard to obtain or afford.  Many others, judging from their names, seem to be related to the authors of the books they suggest.  These readers have found the books after Hathi, or Google, or the Internet Archive, has resurfaced them online, and the readers want these books to live on.  If there were an easy, inexpensive, uncontroversially legal way to also bring back books that are still in copyright, but no longer commercially exploited, I’m sure I could fulfill a lot of requests for those books too.

For now, though, I’ll bring back the one orphan book I’ve been given. And I thank my mother for writing it, and the other women and men who have poured so much of their energy and teaching into their books, and the librarians of all kinds who help ensure those books stay accessible to readers who value them.  I’ll try my best to keep your legacies alive.

March 23, 2010

Lots of conversation keeps stuff sustainable

Filed under: libraries,people,preservation,sharing — John Mark Ockerbloom @ 10:12 pm

Among the hats I wear at my place of work is that of LOCKSS cache administrator. LOCKSS is a useful distributed preservation system built around the principle “Lots of copies keep stuff safe” (whose initials give the system its name).  The idea is that, with the cooperation of publishers, a bunch of libraries each harvest copies of selected online content, and keep backups on our own LOCKSS caches, which are hooked up to local library proxy services.  Then, if the material ever becomes inaccessible from the publisher, our users will automatically be routed to our local copies.  Each LOCKSS cache also periodically checks with other LOCKSS caches to ensure that our copies are still in good shape, and to repair or replace copies that have been lost or damaged.  (Various security features protect against leaks of restricted content, or unauthorized revisions of content.)
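
For a concrete flavor of the “lots of copies” principle, here’s a toy model in Python of that polling-and-repair cycle.  The real LOCKSS polling protocol is considerably more sophisticated (its polls are designed to resist tampering, among other things); this sketch only illustrates the basic majority-vote repair idea, and all the names in it are invented.

    import hashlib
    from collections import Counter

    def digest(copy: bytes) -> str:
        """Fingerprint a cache's copy of some content."""
        return hashlib.sha256(copy).hexdigest()

    def poll_and_repair(caches: dict[str, bytes]) -> dict[str, bytes]:
        """Toy LOCKSS-style poll: caches compare fingerprints of their
        copies, and any cache that disagrees with the majority repairs
        itself from a copy held by the majority."""
        votes = Counter(digest(copy) for copy in caches.values())
        majority_hash = votes.most_common(1)[0][0]
        good = next(c for c in caches.values() if digest(c) == majority_hash)
        return {name: (copy if digest(copy) == majority_hash else good)
                for name, copy in caches.items()}

    caches = {"penn": b"the full text", "stanford": b"the full text",
              "damaged": b"the fXll text"}  # one copy has quietly rotted
    repaired = poll_and_repair(caches)
    assert repaired["damaged"] == b"the full text"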

LOCKSS is open source software that runs on commodity hardware.  It was originally envisioned to run virtually automatically.  As Chris Dobson described the ideal in a 2003 Searcher article, “Take a computer a generation past its prime…. Hook it up to the Internet and put it in a closet. Stick in the LOCKSS CD-ROM and boot it up. Close the closet door.”  And then presumably walk away and forget about it.

Of course, it’s not that simple in practice, particularly if your library is proactive about its preservation strategy.  The thing about preservation at scale is there’s always something that needs attention.  It might be something technical, or content-related, or planning-related, but preserving a growing collection requires ongoing thought.  And if you want to think as clearly and sensibly as you can, you’ll want to collaborate.

Right now, for instance, I’m trying to get my cache to harvest the full run of a journal that’s just been made available for LOCKSS harvesting, where we hope to provide post-cancellation access through LOCKSS.  Someone at Stanford just gave me a useful tip on how to give this journal priority over the other volumes I’ve got queued up for harvest.  Unfortunately, I can’t try it out until I get my cache back up after it failed to reboot cleanly following a power failure.  While I wait for instructions on how best to remedy this, I wonder whether switching to a new Linux-based version of LOCKSS might make such operating-system-level problems easier to deal with.  But it would be useful to hear from folks who are running that version to see what their experience has been.

Meanwhile, we’re wondering how best to approach new publishers who have content that our bibliographers would like to preserve via LOCKSS. Our special collections folks wonder whether we should preserve some of our own home-grown content via a private LOCKSS network.  I’m also doing some ongoing monitoring and testing of our LOCKSS cache’s behavior (some of which I’ve reported on earlier), and would be interested in knowing if others are seeing some of the same kinds of things that I see on the cache I administer.

In short, there are a lot of things to think about, when LOCKSS plays a significant role in a preservation plan.  And a lot of the issues I’ve mentioned above are ones that others may be thinking about as well.  So let’s talk about them.  As the LOCKSS group has said, “A vibrant, active, and engaged user community is key to the success of Open-Source efforts like LOCKSS.”

One thing you need for such an engaged community is a forum for them to talk to each other.  As it turns out, the LOCKSS group at Stanford tell me they created a LOCKSS Forum mailing list a while back, but I haven’t yet seen it publicized.   Its information page is at https://mailman.stanford.edu/mailman/listinfo/lockss-forum .  (Currently, archived email messages are not visible on the open web, though this may change in the future.)  If you’re interested in talking with others about how you use or might use LOCKSS to preserve access to digital content, I invite you to sign up and help get the conversation going.

January 28, 2010

Every book its libraries: or, Taking care in withdrawal

Filed under: preservation,sharing — John Mark Ockerbloom @ 1:42 pm

The question of when to withdraw materials from libraries has gotten heightened attention lately.  Everyday readers may not always realize it, but most libraries get rid of books and other materials on a regular basis.  Libraries typically have limited space, but keep acquiring new materials to serve their audience’s needs.   As they acquire new materials, they typically make room by getting rid of materials that no longer serve their audience as well; this is variously known as “withdrawing”, “deaccessioning”, or “weeding”.

Some libraries weed more aggressively than others.  School and public libraries tend to turn over their collections more quickly than academic research libraries.  There’s not much value a middle-schooler can get out of an outdated science book, for instance, compared to a current one.  And a public library user looking for a book on how to use their new Windows 7 computer shouldn’t have to wade through stacks clogged with TRS-80 programming guides and the like.  You can find amusing anecdotes about books that have outlived their usefulness in these kinds of collections in the blog Awful Library Books, one of the blogs on LISNews’ 10 Librarian Blogs to Read in 2010.

Academic libraries typically don’t weed as aggressively.  The larger research libraries aim to have a broad selection of thought on subjects from various points in history, as well as whatever happens to be of current interest.  A book on science that no longer reflects current scientific understanding may still be useful for researchers who want to look at the history of science, or at how science interacted with culture at the time.  Even the peripheral details can be of interest; for instance, the photographs in an obsolete computer guide can tell us what the computers looked like, and how they were expected to be used.  The most interesting aspect of many old periodicals nowadays is often the advertisements, rather than the editorial content.

Especially when they’re digitized, large corpuses can also be of major interest even when the individual items might not be particularly noteworthy.  They can help you track the use and evolution of language, for instance, or quash unwarranted patents.  I’ve talked before about the great potential of Google Books and similarly comprehensive corpuses.

Even so, research libraries still get rid of materials, or move them to offsite warehouses, when space is short.  As more users access materials online instead of in print, we often ship out print volumes that have online surrogates.  Recently Ithaka published a report called What To Withdraw that gives guidelines for withdrawing materials that are online in sustainable archives (such as Ithaka’s own JSTOR), and that have a few physical copies in print archives somewhere.  Doing this responsibly may help many research libraries grow their collections, or repurpose their spaces, in useful ways.  Selling particularly valuable items to more appropriate libraries can also help fund additional library acquisition and activity.

Carefully considered, then, withdrawal can greatly benefit libraries and their users.  But libraries need to think not only about their own collection’s purposes, but about the systemic risks of individual library collection decisions.  For instance, many of the “Awful Library Books” justifiably withdrawn from public libraries might still be of historical research interest to someone.  Even if academic research libraries would keep them, many of the books intended for popular or specialized non-academic audiences were not collected by academic libraries in the first place.  If all the public libraries with these books simply throw them out, and no copy gets transferred to a library or archive with a longer-term interest, the materials may disappear forever.

Online access, as an alternative to retaining print copies, may not be as reliable as one expects.  Recently, the archives of many popular magazines that were available through various subscription databases became part of an exclusive deal with one database vendor.  This is likely to raise the cost of access for many libraries, both because they may have to subscribe to a new database to keep providing these magazines, and because the price of the new exclusive bundle is likely to increase.  But even if vendors keep prices reasonable, libraries’ own situations may change.  Here in Pennsylvania, funding to libraries has been cut severely enough that many now have to cancel subscriptions to heavily-used databases.  The linked story has a heartbreaking quote from one of the public librarians whose library has had to drop its formerly free Power Library subscription: “I got rid of [our old magazines] because everything was in the database.”

How can we insure against these sorts of cultural losses, even as we withdraw items?  A key principle is replication.  In the words of one well-known digital preservation program, “Lots of Copies Keep Stuff Safe”.  When we consider withdrawing something, we stop to think whether some other library or institution might find it of value.  If we’re considering dropping print originals for digital surrogates, we check to see if other institutions we trust are keeping the originals safe, or would be willing to do so.  We also make digital copies of print materials that may be at risk, and we try to spread these copies around as widely as practicality and copyright law allow.  And we develop and support efficient inter-library transfer networks so that we can quickly move locally deaccessioned materials to where they’re needed or valued.

Many librarians have a philosophy of public service that draws on Ranganathan’s famous set of Five Laws of Library Science, which includes principles like “every reader his book” and “every book its reader”.   As we try to preserve our broad cultural heritage in the midst of withdrawal, loss, and replication, a related principle, “Every book its libraries”, is a useful one to keep in mind.

[Edited slightly 4:12pm Jan 28, in response to a comment below: deleted struck-through text, and added italicized text]

October 5, 2009

Remember this

Filed under: people,preservation,sharing — John Mark Ockerbloom @ 12:40 am

I am eating a sandwich at the end of Pier 14 in San Francisco.  The sun has set behind the downtown skyscrapers, and the colors in the sky are slowly fading to grey.  I’m not the only diner out here.  Pelicans soar close off the pier, about 100 feet above the water, and one by one dive straight down with a loud splash, resurfacing in a moment, ruffling their feathers and jerking their beaks to get down the fish they’ve caught.  Other splashes in the water come from seals surfacing for air.  As an orange-tinted full moon comes up over the East Bay hills and under the span of the Bay Bridge, I see a pair of seals surface side by side, with their mouths meeting as they float at the water’s surface for a few seconds.  I am delighted to see all this, so different from what I usually see at home, and at the same time I wish I could be back there with the people I love instead of alone here.

I don’t have a camera right now, or anything to draw with, so I can only record this scene in words and in memory.   When there was still sun shining low on Yerba Buena Island and the coastline to the east, there were several people out here with tripods and light umbrellas, photographing human couples standing against the pier railings, in each other’s arms.  Judging from the clothing and the poses, I suspect these shots are for wedding or engagement albums.  And I can understand the motivation.   When Mary and I were married, 14 years ago this month, we too had pictures taken of us against a striking background, in our case the bright orange and yellow trees of a Pennsylvania fall.  I see one of those pictures every time I return home. Remember this, the picture says, and it brings back memories of the vows we made to each other that day.  The words we said, and the way we looked when we said them, were not recorded in fixed form, but, God willing, will stay in our hearts as long as we live.

There are more memories recorded out on the pier.  Plaques along the rails quote lines of poetry by Lawrence Ferlinghetti and Thomas Lovell Beddoes about the bay I’m looking out on.  Ceramic tile art depicts boats that have plied its waters, from the early days of European exploration to the present.  A display on the sidewalk in front relates the history of the pier, the ferries that ran (and still run, in smaller numbers) from the terminal nearby, the freeway that was built and then removed again from the water’s edge, and some of the people who played a part in all of these developments.  Remember this, they say, and I bring bits back with me to record in words.

It’s a basic need that we have, as intelligent, reflective, and social creatures, to remember the things we’ve experienced, seen, and learned about.  We make records of these things in various forms, to help us remember, and to prompt others to remember as well.  They help us go beyond and above what’s immediately in front of us, telling us things we need to know, people we can relate to, pasts that were different, futures that can be better.

Technology can make it easier for us to record these things– and sometimes easier to lose them.  We took many pictures of our kids on digital cameras as they grew up, and kept hundreds of them on my laptop, which let me easily recall them and show them to friends and family when I traveled.  Then one day I was robbed of my laptop, without having backed up my photo collection, and most of those pictures were lost.  I’ve also seen many other personal and family memoirs get posted on the Web, stay for a few years, and then vanish with the demise of the web sites they were on.  I kept paper tapes of early BASIC programs I wrote in middle school for years after I’d lost access to any device that could read them.  They’re gone now; I presume they were thrown out when my parents cleaned house sometime after I left home.

I know better now how to keep what remains.  Apple’s Time Machine makes it easy for me to incrementally back up my laptop every time I come home from work and plug a cheap external drive into my USB port.  The pictures of my kids that survived the laptop theft were mostly the ones that I had shared with others (either by copying them onto prints, or by putting them up on the Web). And the older family pictures that are most meaningful to us are ones where we know what the pictures represent, either because we are in them, or because others have told us, in person or in writing, who is in the pictures and the context in which they were taken.

I am here in San Francisco for iPRES 2009, a conference promoting the preservation of digital content.  There are a lot of smart, dedicated people scheduled to speak, and I hope to learn about new technologies and methods to help us preserve the content we want our libraries and their users to remember.

While some of these techniques may be complex, many of them are essentially elaborations on basic principles I’ve touched on in what I’ve related above: Help people record what’s important to them.  Make it easy for them to preserve these records in their everyday activity.  Encourage them to copy and share what they record, and allow others to build on them.  Make what they record easy to interpret, through informative description and straightforward formats.  And finally, try to understand and appreciate the connection between the record and the people for whom the record is important.

Which is why I sit now with my laptop in my hotel room, looking out on a bay that is now as dark as the night sky overhead, and trying to connect my experiences with the preservation challenges and proposals to come. Remember this, I mean to say.  It’s important.

October 13, 2008

What repositories do: The OAIS model

Filed under: preservation,repositories — John Mark Ockerbloom @ 11:23 pm

(Another post in an ongoing series on repositories.)

In my previous post, I mentioned the OAIS reference model as an influential framework for thinking about and planning repositories intended for long-term preservation. If you’re familiar with some of the literature or marketing for digital repositories, you may well have seen OAIS mentioned, or seen a particular system marketed as “OAIS compliant”. You may have also noticed remarks that it’s not always clear in practice what OAIS compliance means. The JISC Standards Catalogue notes “The [OAIS] documentation is quite long and complex and this may prove to be a barrier to smaller repositories or archives.” A common impression I’ve heard of OAIS is that it’s a nice idea that one should really try to pay more attention to, but complex enough that one will have to wait for some less busy time to think about it. Perhaps, one might think, if we just pick a repository system whose marketing says it’s OAIS compliant, we can be spared thinking about it ourselves.

I think we can do better than that, even in smaller projects. The basics of the OAIS model can be understood without having to be conversant with all 148 pages of the reference document. Those basics can help you think about what you need to be doing if you’re planning on preserving information for a long term (as most libraries do). The basics of OAIS also make it clear that following the model isn’t just a matter of installing the right product, but of having the right processes. It’s made very explicit that repository curators need to work with the people who produce and use the information in the repository, and make sure that the repository acquires all the information necessary for its primary audience to use and understand this information far into the future.

To help folks get oriented, here’s a quick introduction to OAIS. It won’t tell you everything about the model, but it should let you see why it’s useful, how you can use it, and what else you might need to consider in your repository planning.

What OAIS is and isn’t

First, let’s start with some basics: OAIS is a reference model for Open Archival Information Systems (whose initials make up the name); it is now an ISO standard, but is also freely available. It was developed by the NASA-led Consultative Committee for Space Data Systems, whose members have had to deal with large volumes of data and other records generated by decades of space missions and observations, and have therefore had to think hard about how to manage and preserve them. To develop OAIS, they had open discussions with lots of other people and groups (like the National Archives) who were also interested in long-term preservation. OAIS is called “Open” because of the open process that went into creating it. It does not require that the archives be open access, or have an open architecture, and it has no direct relation to the similarly-acronymed Open Archives Initiative (OAI). (Though all of these things are also useful to know about in their own right.) An “archival information system” or “archive” can simply be thought of as a repository that’s responsible for long-term preservation of the information it manages.

Unlike many standards, OAIS specifies no particular implementation, API, data format, or protocol. Instead, it’s an abstract model that provides four basic things:

  • A vocabulary for talking about common operations, services, and information structures of a repository. (This alone can provide very useful common ground for different people who use and produce repositories to talk to each other.) A glossary of this vocabulary can be found in section 1 of the reference model.
  • A simple data model for the information that a repository takes in (or “ingests”, to use the OAIS vocabulary), manages internally, and provides to others. This information is assumed to be in distinct, discrete packages known as Submission Information Packages (SIPs) for ingestion, Archival Information Packages (AIPs) for internal management, and Dissemination Information Packages (DIPs) for providing the information to consumers (or to other repositories). These packages include not just raw content, but also metadata and other information necessary for interpreting, preserving, and packaging this content. They have different names because the information they contain can take different forms as it goes into, through, and out of the archive. They are described in more detail in sections 2 and 4 of the reference model. (A small code sketch of this package flow appears after this list.)
  • A set of required responsibilities of the archive. In brief, the archive (or its curators) must negotiate with producers of information to get appropriate content and contextual information, work with a designated community of consumers to make sure they can independently understand this information, and follow well-defined and well-documented procedures for obtaining, preserving, authenticating, and providing this information. Section 3 of the model goes into more detail about these responsibilities, and section 5 discusses some of the basic methodologies involved in preservation.
  • A set of recommended functions for carrying out the archive’s required responsibilities. These are broken up into 6 functional modules: ingest, data management, archival storage, access, administration, and preservation planning. The model describes about half a dozen functions in each module (ingest, for example, includes things like “receive submission”, “quality assurance”, and “generate AIP”) and data flows and dependencies that might exist between the functions. Some of these functions are automated, some (like “monitor technology”) are carried out by humans, and some may involve a combination of human oversight and automated assistance. The functions are described in more detail in section 4 of the model (with issues of multi-archive interoperability discussed in Section 6.)
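
Since the information-package vocabulary is the part of OAIS people most often encounter, here is a minimal sketch in Python of the SIP-to-AIP-to-DIP flow. OAIS deliberately specifies no implementation, so every class and field name below is invented for illustration; the sketch just shows that each package couples raw content with the representation information and preservation metadata needed to interpret it, and that the packages can change form as they move through the archive.

    from dataclasses import dataclass, field

    @dataclass
    class InformationPackage:
        content: bytes                   # the raw content data object
        representation_info: str         # how to interpret the content
        preservation_metadata: dict = field(default_factory=dict)

    @dataclass
    class SIP(InformationPackage):       # what a producer submits
        pass

    @dataclass
    class AIP(InformationPackage):       # what the archive manages internally
        archival_id: str = ""

    @dataclass
    class DIP(InformationPackage):       # what a consumer receives
        pass

    def ingest(sip: SIP, new_id: str) -> AIP:
        """'Generate AIP': the archive may normalize content and enrich
        metadata (provenance, fixity) when it accepts a submission."""
        return AIP(content=sip.content,
                   representation_info=sip.representation_info,
                   preservation_metadata={**sip.preservation_metadata,
                                          "ingested": True},
                   archival_id=new_id)

    def disseminate(aip: AIP) -> DIP:
        """Package an archived object for delivery to a consumer."""
        return DIP(content=aip.content,
                   representation_info=aip.representation_info,
                   preservation_metadata=aip.preservation_metadata)

    sip = SIP(content=b"<scanned pages>", representation_info="TIFF 6.0 images")
    dip = disseminate(ingest(sip, new_id="text-0001"))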

OAIS conformance and usage

It is important to note that OAIS compliance simply requires fulfilling the required responsibilities, and supporting the basic OAIS data model of information packages. A repository is not required to implement all the functions recommended in the OAIS model, or replicate the detailed internal data flows, to be OAIS compliant. But it can be very useful to look through the functions in any case, both to make sure that your repository is doing everything it needs to do, and to see how the big problem of reliable data preservation can be broken down into smaller, more manageable operations and workflows.

You may also find the functions a useful reference point for detailed descriptions of the exact formats and protocols your repository uses for ingesting and storing information, providing content to users, and migrating it to other repositories. Although the OAIS model does not itself provide specific formats or protocols to use, it makes it clear that a repository provider needs to specify these so it can receive information from producers and make it clearly understandable to consumers.

The OAIS model has been used to help construct more detailed criteria for trusted repositories, as well as checklists for repository audit and certification. In most cases, repositories will operate perfectly well without satisfying every last criterion or checklist item. At the Partnerships in Innovation symposium I attended last week, Don Sawyer, one of the main people behind OAIS, remarked that the archives where he worked satisfied about 80% of the trusted repository checklist items. But he still found it useful to go through the whole list to verify that certain functions were not relevant or required for their repository needs, as well as to spot aspects of the repositories (like disaster recovery or provenance tracking) that might need more attention. Similarly, you can go through the recommended OAIS functions and data-model breakdowns to evaluate what’s important to have in your repository, what can be safely omitted, and what might need more careful attention or documentation.

What else you need to think about

Although the OAIS model includes examples of various kinds of repositories that might use it, it’s at its heart a fairly generic, domain-independent model, largely concerned with preservation needs. It doesn’t say a whole lot about how a repository needs to interact with specific communities to fulfill its purposes. For instance, in the talk I gave last week, I stressed the importance of designing the architecture of repositories to support rich discovery mechanisms. As Ken Thibodeau noted in later conversation, the access model of OAIS is more primitive than the architectures I described. OAIS is not incompatible with those architectures, but designing the right kinds of discovery architectures requires going beyond the criteria of OAIS itself.

You’ll also need to think carefully about the needs of the communities you’re collecting from and serving. The OAIS model notes this requirement, but doesn’t pursue it in depth. I can understand why it doesn’t, since those needs are highly dependent on the domain you’re working in. A repository intended to preserve static, published text documents for possible use in legal deposition will need to interact with its community very differently from, say, a repository intended to manage, capture, and ultimately preserve works in progress used in ongoing research and teaching. They both have preservation requirements that OAIS may well address effectively, but designing effective repositories for these disparate needs may require going well beyond OAIS, doing detailed requirements analyses, and assessing benefits and costs of various options.

I’ll talk more about requirements for particular kinds of repositories in later posts. But I hope I’ve made it clear how the OAIS model can be useful for general thinking and planning what a repository needs to do to manage and preserve its content. If it sounds promising, you can download the full OAIS model as a PDF. A revised document that will clarify some of the terminology and recommendations, but will not substantially change the model, is expected to be released in early 2009.

October 9, 2008

Surpassing all records

Filed under: architecture,discovery,preservation,repositories — John Mark Ockerbloom @ 10:47 pm

What will happen to all the White House emails after George W. Bush leaves office in January? Who will take charge of all the other electronic records of the government, after they’re no longer in everyday use? How can you archive 1 million journal articles a month from dozens of different publishers? Can the virtual world handle the Large Hadron Collider’s generation of 15 petabytes of data per year without being swallowed by a singularity? And how can we find what we need in all these bits, anyway?

These were some of the digital archiving challenges discussed this week at the Partnerships in Innovation II symposium in College Park, Maryland. Co-sponsored by the National Archives and Records Administration and the University of Maryland, the symposium brought together experts and practitioners in digital preservation for a day and a half of talks, panels, and demonstrations. It looked to me like over 200 people attended.

This conference was a sequel to an earlier symposium that was held in 2004. Many of the ideas and plans presented at the earlier forum have now come to fruition. The symposium opened with an overview of NARA’s Electronic Records Archives (ERA), a long-awaited system for preserving massive amounts of records from all federal government agencies, that went live this summer. It’s still in pilot mode with a limited number of agencies, but will be importing lots of electronic records soon, including the Bush administration files after the next president is inaugurated.

The symposium also reviewed progress with older systems and concepts. The OAIS reference model, a framework for thinking about and planning long-term preservation repositories, influences not only NARA’s ERA, but many other initiatives and repositories, including familiar open source systems like Fedora and DSpace. Some of the developers of OAIS, including NASA’s Don Sawyer, reviewed their experiences with the model, and the upcoming revision of the standard. Fedora and DSpace themselves have been around long enough to be subjects of a “lessons learned” panel featuring speakers who have built ambitious institutional repositories around them.

The same panel also featured Evan Owens of Portico discussing the extensive testing and redesign they had to do to scale up their repository to handle the million articles per month mentioned at the top of this post. Heavily automated workflows were a big part of this scaling up, a strategy echoed by the ERA developers and a number of the other repository practitioners, some of whom showed some interesting tools for automatically validating content, and for creating audit trails for certification and rollback of repository content.

Networks of interoperating repositories may allow digital preservation to scale up further still. That theme arose in a couple of the other panels, including the last one, dedicated to a new massive digital archiving initiative: the National Science Foundation‘s Datanet. NSF envisions large interoperating global networks of scientific data that could handle many Large Hadron Colliders worth of data, and would make the collection, sharing, reuse, and long-term preservation of scientific data an integral part of scientific research and education. The requirements and sizes of the grants are both prodigious– $20 million each to four or five multi-year projects that have to address a wide range of problems and disciplines– but NSF expects that the grants will go to wide-ranging partnerships. (This forum is one place interested parties can find partners.)

I gave a talk as part of the Tools and Technologies panel, where I stressed the importance of discovery as part of effective preservation, and discussed the design of architectures (and example tools and interfaces) that can promote discovery and use of repository content. My talk echoed in part a talk I gave earlier this year at a Palinet symposium, but focused on repository access rather than cataloging.

I’m told that all the presentations were captured on video, and hopefully those videos, and the slides from the presentations, will all be placed online by the conference organizers. In the meantime, my selected works site has a PDF of the slides and a draft of the script I used for my presentation. I scripted it to make sure I’d stay within the fairly short time slot while still speaking clearly. The talk as delivered was a bit different from (and hopefully more polished than) this draft script, but I hope this file will let folks contemplate at leisure the various points I went through rather quickly.

I’d like to thank the folks at the National Archives and UMD (especially Ken Thibodeau, Robert Chadduck, and Joseph Jaja) for putting on such an interesting and well-run symposium, and giving me the opportunity to participate. I hope to see more forums bringing together large-scale digital preservation researchers and practitioners in the years to come.
