Blowing the lid off the ILS (and the providers’ chance to have a say)

It’s now hardly a secret that many large research libraries are increasingly chafing at their traditional integrated library systems (ILSs). Duke University recently announced that they were planning to design an all-new, open source ILS, presumably to replace their vendor-supplied ILS. This was just the latest in a litany of impatient remarks I have heard from various folks involved with organizations like the Digital Library Federation (DLF), which includes many larger North American research libraries doing innovative work in the digital realm. Library polls like the recent Perceptions 2007 survey give the lowest system satisfaction scores to systems like Voyager and Aleph, which are commonly used at large research libraries. Those low scores may have more to do with the heightened expectations of their users than with any relative inadequacies of those systems. But heightened expectations have a way of spreading to the general population over time.

The responses to a survey conducted by a DLF task force that I’m chairing cited numerous inadequacies with the public access catalog component of the ILS. The vast majority of respondents are using (and in many cases have developed themselves) a wide variety of discovery tools that go beyond what the ILS itself offers. A number of libraries are building their own overlay or alternative catalogs, such as VuFind, Extensible Catalog (XC), and NCSU Endeca, to provide better information discovery in different ways. At the library where I work, we’ve hacked into our ILS to provide various enhanced discovery services, like a video catalog, a social tagging system, and the subject map browsers I’ve discussed in earlier posts.

What’s become increasingly clear to those of us trying to move information discovery forward is that we can no longer expect a single “integrated library system” to satisfy our current and emerging collection discovery needs by itself. And it’s inefficient and frustrating for each of us to hack custom interfaces into each ILS we have in order to build on top of it. Instead, we need a set of standard machine interfaces that allow us to build and provide new discovery systems on top of whatever ILS we have, using its data and services in whatever ways best help our users make the most of our extensive library collections and resources. (The various Web 2.0 initiatives suggest plenty of possibilities for developing and integrating such systems, and at the same time raise our own users’ discovery expectations.) There’s already been some work in the library world on developing protocols for interoperability, such as Z39.50, SRU, OAI-PMH, and NCIP. What’s needed now are some standard profiles for a complete suite of functions that support catalog alternatives and supplements, and that can be widely implemented and supported.
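To make the idea of standard machine interfaces concrete, here is a minimal sketch of how an external discovery application might harvest catalog records from an ILS that exposes an OAI-PMH endpoint, one of the protocols mentioned above. The base URL (`catalog.example.edu`) is hypothetical, and real endpoints vary by vendor and installation; the sample response stands in for a live HTTP call.

```python
# Sketch: talking to a hypothetical ILS over OAI-PMH. The endpoint URL
# and sample response below are illustrative assumptions, not any
# particular vendor's interface.
import urllib.parse
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def build_list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Construct an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urllib.parse.urlencode(params)

def record_identifiers(response_xml):
    """Pull the record identifiers out of a ListRecords response."""
    root = ET.fromstring(response_xml)
    return [
        header.findtext(OAI_NS + "identifier")
        for header in root.iter(OAI_NS + "header")
    ]

# A trimmed sample response, used here in place of a live HTTP call:
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:catalog.example.edu:12345</identifier></header>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(build_list_records_url("http://catalog.example.edu/oai"))
# -> http://catalog.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc
print(record_identifiers(SAMPLE))
# -> ['oai:catalog.example.edu:12345']
```

The point of a standard profile is that code like this, written once, would work against any conforming ILS; today, each system tends to require its own custom glue.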

The DLF convened a group to recommend such technical profiles last year. In the fall, we compiled a set of functional requirements and presented and discussed them with library folks at the last DLF Forum. Now we’re detailing and refining the technical requirements, and preparing to discuss them with developers, vendors, and other service providers, to see what they are able and willing to implement, and to solicit their suggestions. I made a general invitation to ILS vendor representatives at the recent ILS “president’s seminar” at ALA Midwinter. Others in our group are going to the upcoming Code4lib conference to talk with open source and library-based development projects. And Peter Brantley, executive director of the DLF, recently sent out invitations to selected ILS vendors and developers for a workshop in March to discuss and help shape our recommendations.

We’ve heard back from some of our invitees, but there are a number we haven’t heard from yet. I don’t know if this is because the recipients just haven’t gotten around to replying, or they’re not sure how serious we are about these standards, or they’re worried that more interoperable and more interchangeable ILS’s might threaten their markets. I’ve heard one respondent wonder whether it would be smarter for some companies not to participate, in hopes of killing off the initiative and protecting vendor lock-in with their customer base.

Well, that’s not going to happen. Whether or not particular vendors are on board, we’re going forward with this initiative. There’s already significant interest in the library community for more open, more flexible interfaces to our acquisitions, catalog, and circulation services. And there’s substantial existing work and interest in developing overlays to existing ILS’s, and in building new ones. (Besides the Duke proposal, and the various ILS overlays and catalogs mentioned above, there are already two complete open source ILS’s, Koha and Evergreen, in production in some libraries.) We can move forward just with the existing library and open-interfaces development community if we need to. That said, having more ILS vendors participating will make it easier and quicker for suitable interfaces to emerge, and make them available to a wider set of libraries. ILS vendors who participate can help shape the recommendations, gain understanding that can help them get a jump on development, highlight competitive advantages for libraries concerned about discovery API support in their next ILS, and tap new markets for discovery applications compatible with the standard interfaces.

So I encourage those developers and vendors that Peter and I have invited to get in touch with us soon. And I would also like to invite folks who haven’t heard from us, but are developing in the ILS and information-discovery domain, to contact me as well, to see how you can get involved. There may be room for you at the table as well, if you respond soon.

Author: John Mark Ockerbloom

I'm a digital library architect and planner at the University of Pennsylvania.

3 thoughts on “Blowing the lid off the ILS (and the providers’ chance to have a say)”

  1. VTLS has a discovery tool called Visualizer. We are also working with a lot of open source software (VTLS’s VITAL product is based on Fedora and is being used in places like ARROW and Duke Medical and Yale; Visualizer is based on the Lucene and SOLR search engines). I am the CEO of VTLS and do not recall being invited to participate. I am the only CEO/President invited as a panel member at all 18 “President’s seminars” mentioned in your article. Perhaps it was an oversight in not inviting VTLS, but if invited, we will be happy to participate.

  2. This emergence of open source products to replace the classic ILS is something very interesting. But I think that it requires a strong technical team for implementing, maintaining, and advancing the design.
    At Académie Louvain (a consortium of 4 universities), we do not have such human resources.
    We have been working for a while with VTLS (Virtua and VITAL). We can say that we are globally very satisfied. I think they have taken the “right” direction for years: open systems, integration of open source products, OAI compliance, and so on…
    We are testing their discovery tool, where data from our ILS and data from our institutional repository are pulled together.

  3. I’m very glad to hear from you both. We did in fact invite VTLS, but it’s possible the email was misdirected. Vinod, if you haven’t already heard from me or Peter, please send me your mail address and we’ll send you more details right away.


