Everybody's Libraries

June 15, 2011

A digital public library we still need, and could build now

Filed under: citizen librarians,copyright,libraries,people,sharing — John Mark Ockerbloom @ 12:39 pm

It’s been more than half a year since the Digital Public Library of America project was formally launched, and I’m still trying to figure out what the project organizers really want it to be.  The idea of “a digital library in service of the American public” is a good one, and many existing digital libraries already play that role in a variety of ways.  As I said when I christened this blog, I’m all for creating a multitude of libraries to serve a diversity of audiences and information needs.

At a certain point after an enthusiastic band of performers says “Let’s put on a show!”, though, someone has to decide what their show’s going to be about, and start focusing effort there.  So far, the DPLA seems to be taking an opportunistic approach.  Instead of promulgating a particular blueprint for what they’ll do, they’re asking the community for suggestions, in a “beta sprint” that ends today.   Whether this results in a clear distinctive direction for the project, or a mishmash of ideas from other digitization, aggregation, preservation, and public service initiatives, remains to be seen.

Just about every digital project I’ve seen is opportunistic to some extent.   In particular, most of the big ones are opportunistic when it comes to collection development.  We go after the books, documents, and other knowledge resources that are close to hand in our physical collections, or that we find people putting on the open web, or that our users suggest, or volunteer to provide on their own.

There are a number of good reasons for this sort of opportunism.  It lets us reuse work that we don’t have to redo ourselves.  It can inform us of audience interests and needs (at least as far as the interests of the producers we find align with the interests of the consumers we serve).  And it’s cheap, and that’s nothing to sneer at when budgets are tight.

But the public libraries that my family prefers to use don’t, on the whole, have opportunistically built collections.  Rather, they have collections shaped primarily by the needs of their patrons, and not primarily by the types of materials they can easily acquire.   The “opportunistic” community and school library collections I’ve seen tend to be the underfunded ones, where books in which we have yet to land on the Moon, the Soviet Union is still around, or Alaska is not yet a state may be more visible than books that reflect current knowledge or world events.  The better libraries may still have older titles in their research stacks, but they lead with books that have current relevance to their community, and they go out of their way to acquire reliable, readable resources for whatever information needs their users have.  In other words, their collections and services are driven by  demand, not supply.

In the digital realm, we have yet to see a library that freely provides such a digital collection at large scale for American public library users.   Which is not to say we don’t have large digital book collections– the one I maintain, for instance, has over a million freely readable titles, and Google Books and lots of other smaller digital projects have millions more.  But they function more as research or special-purpose collections than as collections for general public reference, education, or enjoyment.

The big reason for this, of course, is copyright.  In the US, anyone can freely digitize books and other resources published before 1923, but providing anything published after that requires copyright research and, usually, licensing, both of which tend to be complex and expensive.  So a lot of digital library projects tend to focus on the older, obviously free material, and offer little current material.  But a generally useful digital public library needs to be different.

And it can be, with the right motivation, strategy, and support.  The key insight is that while a strong digital public library needs to have high-quality, current knowledge resources, it doesn’t need to have all such resources, or even the most popular or commercially successful ones.  It just needs to acquire and maintain a few high-quality resources for each of the significant needs and aptitudes of its audience. Mind you, that’s still a lot of ground to cover, especially when you consider all the ages, education levels, languages, physical and mental abilities, vocational needs, interests, and demographic backgrounds that even a midsized town’s public library serves.  But it’s still a substantially smaller problem, and involves a smaller cost, than the enticing but elusive idea of providing instant free online access to everything for everyone.

There are various ways public digital libraries could acquire suitable materials proactively.  The America.gov books collection provides one interesting example.  The US State Department wanted to create a library of easy-to-read books on civics and American culture and history for an international audience.  Some of these books were created in-house by government staff.  Others were commissioned to outside authors.  Still others were adapted from previously published works, for which the State Department acquired rights.

A public digital library could similarly create, commission, solicit, or acquire rights to books that meet unfilled information needs of its patrons.  Ideally it would aim to acquire rights not just to distribute a work as-is, but also to adapt and remix it into new works, as many Creative Commons licenses allow.  This can potentially greatly increase the impact of any given work.  For instance, a compellingly written, beautifully illustrated book on dinosaurs might originally be written for 9- to 12-year-old English speakers, and become noticeably obsolete due to new discoveries after 5 or 10 years.  But if a library’s community has reuse and adaptation rights, library members can translate, adapt, and update the book, so it becomes useful to a larger audience over a longer period of time.

This sort of collection building can potentially be expensive; indeed, it’s sobering that America.gov has now ceased being updated due to budget cuts.  But there’s a lot that can be produced relatively inexpensively.  Khan Academy, for example, contains thousands of short, simple educational videos, exercises, and assessments created largely by one person, with the eventual goal of systematically covering the entire standard K-12 curriculum.  While I think a good educational library will require the involvement of many more people, the Khan example shows how much one person can get accomplished with a small budget, and projects like Wikipedia show that there’s plenty of cognitive surplus to go around that a public library effort might usefully tap into.

Moreover, the markets for rights to previously authored content can potentially be made much more efficient than they are now.  Most books, for instance, go out of print relatively quickly, with little or no commercial exploitation thereafter.  And as others have noted, just trying to get permission to use  a work digitally, even apart from any royalties, can be very expensive and time-consuming.  But new initiatives like Gluejar aim to make it easier to match up people who would be happy to share their book rights with people who want to reuse them. Authors can collect a small fee (which could easily be higher than the residual royalties on an out-of-print book); readers get to share and adapt books that are useful to them.   And that can potentially be much cheaper than acquiring the rights to a new work, or creating one from scratch.

As I’ve described above, then, a digital public library could proactively build an accessible collection of high-quality, up-to-date online books and other knowledge resources, by finding, soliciting, acquiring, creating, and adapting works in response to the information needs of its users.  It would build up its collection proactively and systematically, while still being opportunistic enough to spot and pursue fruitful new collection possibilities.  Such a digital library could be a very useful supplement to local public libraries, would be open any time, anywhere online, and could provide more resources and accessibility options than a local public library could on its own.  It would require a lot of people working together, including bibliographers, public service liaisons, authors, technical developers, and volunteers, both inside and outside existing libraries.  And it would require ongoing support, as other public libraries do, though a library that successfully serves a wide audience could also potentially tap into a wide base of funds and in-kind contributions.

Whether or not the DPLA plans to do it, I think a large-scale digital free public library with a proactively-built, high-quality, broad-audience general collection is something that a civilized society can and should build.  I’d be interested in hearing if others feel the same, or have suggestions, critiques, or alternatives to offer.

April 9, 2011

Opt in for open access

Filed under: copyright,libraries,online books,open access — John Mark Ockerbloom @ 8:40 am

There’s been much discussion online about Judge Chin’s long-awaited decision to reject the settlement proposed by Google and authors and publishers’ organizations over the Google Books service. Settlement discussions continue (and the court has ordered a status conference for April 25).  But it’s clear that it will be a while before this case is fully settled or decided.

Don’t count on a settlement to produce a comprehensive library

When the suit is finally resolved, it will not enable the comprehensive retrospective digital library I had been hoping for.  That, Chin clearly indicated, was an over-reach.  The  proposed settlement would have allowed Google to sell access to most pre-2009 books published in the English-speaking world whose rightsholders had not opted out.   But, as Chin wrote, “the case was about the use of an indexing and searching tool, not the sale of complete copyrighted works.”  The changes in the American copyright regime that the proposed settlement entailed, he wrote, were too sweeping for a court to approve.

Unless Congress makes changes in copyright law, then, a rightsholder has to opt in for a copyrighted book to be made readable on Google (or on another book site).  Chin’s opinion ends with a strong recommendation for the parties to craft a settlement that would largely be based on “opt-in”.  Of course, an “opt in” requirement necessarily excludes orphan works, where one cannot find a rightsholder to opt in.  And as John Wilkin recently pointed out, it’s likely that a lot of the books held by research libraries are orphan works.

Don’t count on authors to step up spontaneously

Chin expects that many authors will naturally want to opt in to make their works widely available, perhaps even without payment.  “Academic authors, almost by definition, are committed to maximizing access to knowledge,” he writes.  Indeed, one of the reasons he gives for rejecting the settlement is the argument, advanced by Pamela Samuelson and some other objectors, that the interests of academic and other non-commercially motivated authors are different from those of the commercial organizations that largely drove the settlement negotiations.

I think that Chin is right that many authors, particularly academics, care more about having their work appreciated by readers than about making money off of it.  And even those who want to maximize their earnings on new releases may prefer freely sharing their out of print books to keeping them locked away, or making a pittance on paywall-mediated access.  But that doesn’t necessarily mean that we’ll see all, or even most, of these works “opted in” to a universally accessible library.  We’ve had plenty of experience with institutional repositories showing us that even when authors are fine in principle with making their work freely available, most will not go out of their way to put their work in open-access repositories, unless there are strong forces mandating or proactively encouraging it.

Don’t count on Congress to solve the problem

The closest analogue to a “mandate” for making older books generally available would be orphan works legislation.    If well crafted, such a law could make a lot of books available to the public that now have no claimants, revenue, or current audience, and I hope that a coalition can come together to get a good law passed. But an orphan works law could take years to adopt (indeed, it’s already been debated for years). There’s no guarantee on how useful or fair the law that eventually gets passed would be, after all the committees and interest groups are done with it.  And even the best law would not cover many books that could go into a universal digital library.

Libraries have what it takes, if they’re proactive

On the other hand, we have an unprecedented opportunity right now to proactively encourage authors (academic or otherwise) to make their works freely available online.  As Google and various other projects continue to scan books from library collections, we now have millions of these authors’ books deposited in “dark” digital archives.  All an interested author has to do is say the word, and the dark  copy can be lit up for open access.  And libraries are uniquely positioned to find and encourage the authors in their communities to do this.

It’s now pretty easy to do, in many cases.  Hathi Trust, a coalition of a growing number of research institutions, currently has over 8 million volumes digitized from member libraries.  Most of the books are currently inaccessible due to copyright.  But they’ve published a permission agreement form that an author or other rightsholder can fill out and send in if they want to make their book freely readable online.  The form could be made a bit clearer and more visible, but it’s workable as it is.  As editor of The Online Books Page, I not infrequently hear from people who want to share their out of print books, or those of their ancestors, with the world.  Previously, I had to worry about how the books would get online.  Now I usually can just verify it’s in Hathi’s collection, and then refer them to the form.

Google Books also lets authors grant access rights through their partner program.  Joining the program is more complicated than sending in the Hathi form, and it’s more oriented towards selling books than sharing them.  But Google Books partners can declare their books freely readable in full if they wish, and can give them Creative Commons licenses (as they can with Hathi).  Google has even more digitized books in its archives than Hathi does.

So, all those who would love to see a wide-ranging (if not entirely comprehensive), globally accessible digital library now have a real opportunity to make it happen.  We don’t have to wait for Congress to act, or for some new utopian digital library to arise.  Thanks to mass digitization, library coalitions like Hathi’s, and the development of simplified, streamlined rights and permissions processes, it’s easier than ever for interested authors (and heirs, and publishers) to make their work freely available online.  If those of us involved in libraries, scholarship, and the open access movement work to open up our own books, and those of our colleagues, we can light up access to the large, universal digital library that’s now waiting for us online.

January 2, 2011

Public Domain Day 2011: Will the tide be turned?

Filed under: copyright — John Mark Ockerbloom @ 12:40 am

This year’s Public Domain Day, the day on which a year’s worth of copyrights expire in many countries, is getting particular attention in Europe, where events in various European cities commemorate authors who died in 1940, and whose works are now in the public domain there.

Or, to be more precise, they’ve returned to the public domain there.  Although the reigning international copyright standard, the Berne Convention, requires copyrights to run at least for the lifetime of the author plus 50 years, the European Union in 1993 mandated a retroactive copyright extension to life plus 70 years, to match the longest term in any of its member countries at the time.  Twenty years of the public domain were buried by this extension.  For at least the next 3 years, all we’ll be seeing in Europe is the old public domain re-emerging.

The public domain has seen losses and freezes in much of the rest of the world since.  In 1998, after years of lobbying by the entertainment industry, the US enacted its own 20-year copyright extension.  Thankfully, this extension only froze the public domain instead of rolling it back, but we will have to wait another 8 years before more publications enter the public domain here due to age.  The 1998 extension was just the latest of a series of copyright extensions in the United States.  In 1954, US copyrights ran a maximum of 56 years, so all of the works published before 1955 would now be in the public domain here were it not for later extensions.  (Instead, we still have copyrights in force as far back as 1923.)
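
To make that arithmetic concrete, here is a minimal sketch (a back-of-the-envelope illustration, not a rights-determination tool) comparing when a renewed US work published in a given year would enter the public domain under the old 56-year maximum and under the current 95-year term for pre-1978 publications:

```python
# Back-of-the-envelope illustration of the copyright term arithmetic above;
# it glosses over exact expiration dates and many special cases.

def public_domain_year(pub_year, term_years):
    """Approximate Public Domain Day: the copyright runs through roughly
    pub_year + term_years, so the work is free the following year."""
    return pub_year + term_years + 1

OLD_MAX_TERM = 56    # 28-year term plus one 28-year renewal (pre-1976 law)
CURRENT_TERM = 95    # renewed pre-1978 publications under the 1976 Act plus the 1998 extension

for year in (1923, 1954):
    print(year,
          public_domain_year(year, OLD_MAX_TERM),
          public_domain_year(year, CURRENT_TERM))

# 1954: free in 2011 under the old 56-year maximum (hence "works published
#       before 1955 would now be in the public domain"), but not until 2050
#       under the current term.
# 1923: free in 2019 under the current term, i.e. another 8 years from 2011.
```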

There’s no clear end in sight to further extensions.  Since 1998 I’ve steadily been seeing country after country extend its terms, often pushed by trade negotiations with Europe or the United States.  “Life+50” may still be the global standard, but bilateral and region-specific trade agreements have pushed terms up to “life+70” in many countries around the world.  Some countries have gone even longer (Mexico, for instance, is now “life+100”), making them convenient targets for further rounds of copyright extensions in the name of international “harmony”.

There are some bright spots, though.  Many countries continue to hold the line at life+50 years, including Canada (despite years of pressure from its southern neighbor).  As of today, residents of “life+50” countries are now free to republish, adapt, reuse, and build upon works by authors who died in 1960 or before, in whatever way they see fit.  I hope to show some of what this means as I introduce listings from projects like Gutenberg Canada to The Online Books Page this year.

In the US, where many copyrights prior to 1964 didn’t run for their full length unless renewed, a number of digitization projects (most notably Hathi Trust) have been finding post-1922 works with unrenewed copyrights, and making them freely readable online.  These works tend not to be the best-sellers or popular backlist titles, but collectively they embody much of the knowledge and culture of the mid-20th century.  I’ve also been very happy to list many of these works over the past year.

At the same time, there’s been a growing awareness that copyright need not be “one size fits all”, particularly for works that no longer have much commercial value.  This insight helped lead various authors’ and publishers’ groups to negotiate a blanket license to Google to make out of print works generally available online.  The license, part of the Google Books Settlement, is not without its controversy or problems, and might or might not eventually get court approval.  But it suggests political feasibility for similar efforts to free older, more obscure cultural and scholarly works now languishing under exclusive copyright control.

We’ve even seen at least one entertainment industry spokesman speculate out loud that re-introducing simple formalities to maintain copyright might not be such a bad idea.  Such formalities are forbidden by the Berne Convention, so they could not be introduced across the board without re-negotiating that treaty.  That would be no easy task.

But the recent round of copyright extensions may at least provide an opening for international experimentation.  Now that copyright terms go past the Berne minimum in many countries, the post-Berne portion of the copyright term could potentially be made subject to requirements that Berne doesn’t allow (such as the renewal of copyrights in some suitable international registry system).  That could not only free many older “orphan works” for reuse, but if it works well it could also lead to negotiating a farther-reaching international registry system. Such a system could make it easier both to contact copyright holders for permissions, and to release into the public domain works whose owners no longer care (or never wanted) to maintain exclusive rights.

I’ve been practicing a self-imposed system of “formalities” myself over the last few years.  On every Public Domain Day, I’ve been freeing published works of mine more than 14 years old, except for works where I explicitly opt to reserve copyright.  (Copyrights in the US originally ran for 14 years unless renewed for another 14.)  So: All works of mine published in 1996 for which I control the copyright are hereby released to the public domain.  (Legally, you can consider them all to be declared CC0.)  Much of the publication I did that year online can now be found through sites like the Internet Archive, which started crawling my web sites in late 1996.

I’d be very happy to hear about other gifts people are making to the public domain, as well as successes in bringing more of the public domain to light online, and in expanding the scope of the public domain as a whole.  Happy Public Domain Day to all!

November 11, 2010

You do the math

Filed under: open access,publishing,serials,sharing — John Mark Ockerbloom @ 6:02 pm

I recently heard from Peter Murray-Rust that the Central European Journal of Mathematics (CEJM) is looking for graduate students to edit the language of papers they publish.  CEJM is co-published by Versita and Springer Science+Business Media.

Would-be editors are promised their name on the masthead, and references and recommendations from the folks who run the journal.  These perks are tempting to a student (or postdoc) hoping for stable employment, but you can get such benefits working with just about any scholarly journal.  There’s no mention of actual pay for any of this editing work.  (Nor is there any pay for the associate editors they also seek, though those editors are also promised access to the journal’s content.)

The reader’s side of things looks rather different, when it comes to paying. If we look at Springer’s price lists for 2011, for instance, we see that the list price for a 1-year institutional subscription to CEJM is $1401 US for “print and free access or e-only”, or $1681 US for “enhanced access”.  An additional $42 is assessed for postage and handling, presumably waived if you only get the electronic version, but charged otherwise.

This is a high subscription rate even by the standards of commercial math journals.  At universities like mine, scholars don’t pay for the journal directly, but the money the library uses for the subscription is money that can’t be used to buy monographs, or to buy non-Springer journals, or to improve library service to our mathematics scholars.  Mind you, many universities get this journal as part of a larger package deal with Springer.  This typically lowers the price for each journal, but the package often includes a number of lower-interest journals that wouldn’t otherwise be bought.  Large amounts of money are tied up in these “big deals” with large for-profit publishers such as Springer.

Readers who can’t, or won’t, lay out the money for a subscription or a larger package can pay for articles one at a time.  When I tried to look at a recent CEJM article from home, for instance, I was asked to pay $34 before I could read it.  Another option is author-paid open access.  CEJM authors who want to make their papers available through the journal without a paywall can do so through Springer’s Open Choice program.  This will cost the author $3000 US.

So there’s plenty of money involved in this journal.  It’s just that none of it goes to the editors they’re seeking.  Or to the authors of the papers, who submit them for free (or with a $3000 payment).  Or to the peer reviewers of the papers, if this journal works like most other scholarly journals and uses volunteer scholars as referees.  A scholar might justifiably wonder where all this money is going, or what value they get in return for it.

As the editor job ads imply, much of what scholars get out of editing and publishing in journals like these is recognition and prestige.  That, indeed, has value, but the cost-value function can be optimized much better than in this case.  CEJM’s website mentions that it’s tracked by major citation services, and has a 0.361 impact factor (a number often used, despite some notable problems, to give a general sense of a journal’s prestige).  Looking through the mathematics section of the Directory of Open Access Journals, I find a number of scholarly journals that are also tracked by citation services, but don’t charge anything to readers, and as far as I can tell don’t charge anything to authors either.   Here are some of them:

Central Europe, besides being the home of CEJM, is also the home of several open access math journals such as Documenta Mathematica (Germany), the Balkan Journal of Geometry and its Applications (Romania), and the Electronic Journal of Qualitative Theory of Differential Equations (Hungary).  For what it’s worth, all of these journals, and all the other open access journals mentioned in this post, currently show higher impact factors in Journal Citation Reports than CEJM does.

Free math journals aren’t limited to central Europe.  Here in the US, the American Mathematical Society makes the Bulletin of the American Mathematical Society free to read online, through the generosity of its members.  And on the campus where I work, Penn’s math department sponsors the Electronic Journal of Combinatorics.

A number of other universities also sponsor open-access journals, promoting their programs, and the findings of scholars worldwide, with low overhead.  For instance, there are two relatively high-impact math journals from Japanese universities: the Kyushu Journal of Mathematics and the Osaka Journal of Mathematics.  The latter journal’s online presence is provided by Project Euclid, a US-based initiative to support low-cost, non-profit mathematics publishing.

Ad-hoc groups of scholars can also organize their own open access journals in their favored specialty.  For instance, Homology, Homotopy and Applications was founded, and is entirely run, by working mathematicians.  Some journals, such as the open access Discrete Mathematics and Theoretical Computer Science, use Open Journal Systems, a free open source publishing software package, to produce high-quality journal websites with little expenditure.

The Proceedings of the Indian Academy of Sciences: Mathematical Sciences is an interesting case.  Like many scholarly societies, the Indian Academy has recently made a deal with a for-profit publisher (Springer, as it turns out) to distribute their journals in print and electronic form.  Unlike many such societies, though, the Academy committed to continuing a free online version of this journal on their own website.

This is a fortunate decision for readers, because libraries that acquire the commercially published version will have to pay Springer $280 per year for basic access and $336 for “enhanced access”, according to their 2011 price list.  True, libraries get a print copy with this more expensive access (if they’re willing to pay Springer another $35 in postage and handling charges).  But the Academy sends out print editions within India for a total subscription price (postage included) of 320 rupees per year.   At today’s exchange rates, that’s less than $8 US.
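
To do that math explicitly, here is a quick sketch of the comparison; the prices are the ones quoted above, and the exchange rate (roughly 45 rupees to the US dollar as of late 2010) is an assumption used only for illustration:

```python
# Quick back-of-the-envelope comparison of the prices quoted above.
# The exchange rate is an assumption (roughly 45 rupees per US dollar,
# late 2010); the prices are from Springer's and the Academy's lists.

springer_enhanced_usd = 336.0    # Springer 2011 "enhanced access" list price
springer_postage_usd = 35.0      # postage and handling for the print copy
academy_direct_inr = 320.0       # Academy's own print subscription, postage included
inr_per_usd = 45.0               # assumed exchange rate

academy_direct_usd = academy_direct_inr / inr_per_usd
springer_total_usd = springer_enhanced_usd + springer_postage_usd

print(f"Academy direct:        ${academy_direct_usd:6.2f} per year")   # about $7
print(f"Springer print bundle: ${springer_total_usd:6.2f} per year")
print(f"Ratio: roughly {springer_total_usd / academy_direct_usd:.0f} to 1")
```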

Virtually all journals, whether in mathematics or other scholarly fields, depend heavily on unpaid academic labor for the authorship, refereeing, and in some cases editing of their content.  But, as you can see with CEJM and the no-fee open access journals mentioned above, journals vary widely in the amount of money they also extract from the academic community.  In between these two poles, there are also lots of other high-impact math journals with lower subscription prices, as well as commercial open access math journals with much lower author fees than Springer’s Open Choice.  These journals further diversify the channels of communication among mathematicians, without draining as much of  their funds.

I certainly hope mathematicians and other scholars will continue to volunteer their time and talents to the publication process, both for their benefit and for ours.  But if we optimize where and how we give our time and talent (and our institutional support), both scholars and the public will be better off.  As I’ve shown above, with a little bit of information and attention, there’s no shortage of low-cost, high-quality publishing venues that scholars can use as alternatives to overpriced journals.

October 15, 2010

Journal liberation: A community enterprise

Filed under: copyright,discovery,open access,publishing,serials,sharing — John Mark Ockerbloom @ 2:53 pm

The fourth annual Open Access Week begins on Monday.  If you follow the official OAW website, you’ll be seeing a lot of information about the benefits of free access to scholarly research.  The amount of open-access material grows every day, but much of the research published in scholarly journals through the years is still practically inaccessible to many, due to prohibitive cost or lack of an online copy.

That situation can change, though, sometimes more dramatically than one might expect.  A post I made back in June, “Journal liberation: A Primer”, discussed the various ways in which people can open access to journal content, past and present,  one article or scanned volume at a time.  But things can go much faster if you have a large group of interested liberators working towards a common goal.

Consider the New England Journal of Medicine (NEJM), for example.  It’s one of the most prominent journals in the world, valued both for its reports on groundbreaking new research, and for its documentation, in its back issues, of nearly 200 years of American medical history.  Many other journals with lesser value still cannot be read without paying for a subscription, or visiting a research library that has paid for a subscription.  But you can find and read most of NEJM’s content freely online, both past and present. Several groups of people made this possible.  Here are some of them.

The journal’s publisher has for a number of years provided open access to all research articles more than 6 months old, from 1993 onward.  (Articles less than 6 months old are also freely available to readers in certain developing countries, and in some cases for readers elsewhere as well.)  A registration requirement was dropped in 2007.

Funders of medical research, such as the National Institutes of Health, the Wellcome Trust, and the Howard Hughes Medical Institute, have encouraged publishers in the medical field to maintain or adopt such open access policies, by requiring their grantees (who publish many of the articles in journals like the NEJM) to make their articles openly accessible within months of publication.  Some of these funders also maintain their own repositories of scholarly articles that have appeared in NEJM and similar journals.

Google Books has digitized most of the back run of the NEJM and its predecessor publications as part of its Google Books database.  Many of these volumes are freely accessible to the public.  This is not the only digital archive of this material; there’s also one on NEJM’s own website, but access there requires either a subscription or a $15 payment per article.   Google’s scans, unlike the ones on the NEJM website, include the advertisements that appeared along with the articles.  These ads document important aspects of medical history that are not as easily seen in the articles, on subjects ranging from the evolving requirements and curricula of 19th-century medical schools to the early 20th-century marketing of heroin for patients as young as 3 years old.

It’s one thing to scan journal volumes, though; it’s another to make them easy to find and use– which is why NEJM’s for-pay archive got a fair bit of publicity when it was released this summer, while Google’s scans went largely unnoticed.  As I’ve noted before, it can be extremely difficult to find all of the volumes of a multi-volume work in Google Books; and it’s even more difficult in the case of NEJM, since issues prior to 1928 were published under different journal titles.  Fortunately, many of the libraries that supplied volumes for Google’s scanners have also organized links to the scanned volumes, making it easier to track down specific volumes.  The Harvard Libraries, for instance, have a chronologically ordered list of links to most of the volumes of the journal from 1828 to 1922, a period when it was known as the Boston Medical and Surgical Journal.

For many digitized journals, open access stops after 1922, because of uncertainty about copyright.  However, most scholarly journals have public domain content after that date, so it’s possible to go further if you research journal copyrights.  Thanks to records provided by the US Copyright Office and volunteers for The Online Books Page, we can determine that issues and articles of the NEJM prior to the 1950s did not have their copyrights renewed.  With this knowledge, Hathi Trust has been able and willing to open access to many volumes from the 1930s and 1940s.
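
For orientation, here is a rough sketch of the renewal rule this kind of research relies on; it approximates the general 1923-1963 renewal requirement, and is not a substitute for actually checking the Copyright Office records and the volunteer-compiled renewal listings:

```python
# A rough illustration of the US renewal rule used in this kind of
# copyright research; an orientation aid, not a rights-clearance tool.

def renewal_window(pub_year):
    """Approximate calendar years in which a renewal had to be filed
    (during the 28th year of the original term)."""
    return (pub_year + 27, pub_year + 28)

def likely_public_domain(pub_year, renewal_found):
    """True if the work is (as of 2011) in the US public domain under these rules."""
    if pub_year < 1923:
        return True                  # published before 1923: public domain
    if 1923 <= pub_year <= 1963:
        return not renewal_found     # unrenewed first term: copyright lapsed
    return False                     # 1964 on: renewal became automatic

# Example: a 1935 journal issue with no renewal record on file.
print(renewal_window(1935))                              # (1962, 1963)
print(likely_public_domain(1935, renewal_found=False))   # True
```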

We at The Online Books Page can then pull together these volumes and articles from various sources, and create a cover page that allows people to easily get to free versions of this journal and its predecessors all the way back to 1812.

Most of the content of the New England Journal of Medicine has thus been liberated by the combined efforts of several different organizations (and other interested people).  There’s still more than can be done, both in liberating more of the content, and in making the free content easier to find and use.  But I hope this shows how widespread  journal liberation efforts of various sorts can free lots of scholarly research.  And I hope we’ll hear about many more  free scholarly articles and journals being made available, or more accessible and usable, during Open Access Week and beyond.

I’ve also had another liberation project in the works for a while, related to books, but I’ll wait until Open Access Week itself to announce it.  Watch this blog for more open access-related news, after the weekend.
