Mid-20th century newspapers: Minding the copyrights

I was pleased to read last week that the National Digital Newspaper Program, which has sponsored the digitization of over 1 million historically significant newspaper pages, has announced that it is expanding its scope to include content published up to 1963, as long as public domain status can be established.  I’m excited about this initiative, which will surface content of historic interest that’s in many readers’ living memory.  I’ve advocated opening access to serials up to 1963 for a long time, and have worked on various efforts to surface information about serial copyright renewals (like this one), to make it easier to find public domain serial content that can be made freely readable online.  (In the US, renewal became automatic for copyrights secured after 1963, making it difficult to republish most newspapers after that date.  Up till then, though, there’s a lot that can be put online.)

Copyright in contributions

Clearing copyright for newspapers after 1922 can be challenging, however.  Relatively few newspapers renewed copyrights for entire issues– as I noted 10 years ago, none outside of New York City did before the end of World War II. But newspapers often aggregate lots of content from lots of sources, and determining the copyright status of those various pieces of content is necessary as well, as far as I can tell.  While section 201(c) of copyright law normally gives copyright holders of a collective work, such as a magazine or newspaper, the right to republish contributions as part of that work, people digitizing a newspaper that didn’t renew its own copyright aren’t usually copyright holders for that newspaper.  (I’m not a lawyer, though– if any legal experts want to argue that digitizing libraries get similar republication rights as the newspaper copyright holders, feel free to comment.)

Text contributions

As I mentioned in my last post, we at Penn are currently going through the Catalog of Copyright Entries to survey which periodicals have contributions with copyright renewals, and when those renewals started.  (My previous post discussed this in the context of journals, but the survey covers newspapers as well.)  Most of the contributions in the section we’re surveying are text, and we’ve now comprehensively surveyed up to 1932.  In the process, we’ve found a number of newspapers that had copyright-renewed text contributions, even when they did not have copyright-renewed issues.  The renewed contributions are most commonly serialized fiction (which was more commonly run in newspapers decades ago than it is now).  Occasionally we’ll see a special nonfiction feature by a well-known author renewed.  I have not yet seen any contribution renewals for straight news stories, though, and most newspapers published in the 1920s and early 1930s have not made any appearance in our renewal survey to date.  I’ll post an update if I see this pattern changing; but right now, if digitizers are uncertain about the status of a particular story or feature article in a newspaper, searching for its title and author in the Catalog of Copyright Entries should suffice to clear it.

Photographs and advertisements

Newspapers contain more than text, though.  They also include photos, as well as other graphical elements, which often appear in advertisements.  It turns out, however, that the renewal rate for images is very low, and the renewal rate for “commercial prints”, which include advertisements, is even lower.  There isn’t yet a searchable text file or database for these types of copyright renewals (though I’m hoping one can come online before long, with help from Distributed Proofreaders), and in any case, images typically don’t have unambiguous titles one can use for searching.  However, most news photographs were published just after they were taken, and therefore they have a known copyright year and specific years in which a renewal, if any, should have been filed.  It’s possible to go through the complete artwork and commercial prints listings of any given year, get an overview of all the renewed photos and ads that exist, and look for matches.  (It’s a little cumbersome, but doable, with page images of the Catalog of Copyright Entries; it will be easier once there are searchable, classified transcriptions of these pages.)
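Since renewal timing is mechanical, the Catalog volumes to check can be computed from the publication year.  Under the 1909 Act, a renewal had to be registered during the 28th year of the first copyright term, and that year can straddle two calendar years depending on the publication date.  Here’s a minimal sketch of the arithmetic in Python (a helper of my own devising, not any project’s actual code):

```python
def renewal_window(pub_year: int) -> tuple[int, int]:
    """Calendar years in which a renewal for a work published in
    pub_year would have been filed.  The 28th year of the first
    copyright term runs from the work's anniversary date, so it can
    span two calendar years."""
    return (pub_year + 27, pub_year + 28)

# A photo published in 1930 would show up, if renewed at all, in the
# Catalog of Copyright Entries volumes covering 1957 or 1958.
print(renewal_window(1930))
```

So for any dated newspaper photo or ad, only two years’ worth of artwork and commercial print listings need to be checked.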

Fair use arguments may also be relevant.  Even in the rare case where an advertisement was copyright-renewed, or includes copyright-renewed elements (like a copyrighted character), an ad in the context of an old newspaper largely serves an informative purpose, and presenting it there online doesn’t typically take away from the market for that advertisement.  As far as I can tell, what market exists for ads mostly involves relicensing them for new purposes such as nostalgia merchandise.  For that matter, most licensed reuses of photographs I’m aware of involve the use of high-resolution original prints and negatives, not the lower-quality copies that appear on newsprint (and that could be made even lower-grade for purposes of free display in a noncommercial research collection, if necessary).  I don’t know if NDNP is planning to accommodate fair use arguments along with public domain documentation, but they’re worth considering.

Syndicated and reprinted content: A thornier problem

Many newspapers contain not only original content, but also content that originated elsewhere.  This type of content comes in many forms: wire-service stories and photos, ads, and syndicated cartoons and columns.  I don’t yet see much cause for concern about wire news stories; typically they originate in a specific newspaper, and would normally need to be renewed with reference to that newspaper.  And at least through 1932, I haven’t yet seen any straight news stories renewed.  Likewise, I suspect wire photos and national ads can be cleared much like single-newspaper photos and ads can be.

But I think syndicated content may be more of a sticky issue.  Syndicated comics and features grew increasingly popular in newspapers in the 20th century, and there’s still a market for some content that goes back a long way.  For instance, the first contribution renewal for the Elizabethan Star, dated September 8, 1930, is the very first Blondie comic strip.  That strip soon became wildly popular, carried by thousands of newspapers across the country.  It still enjoys a robust market, with its official website noting it runs in over 2000 newspapers today.  Moreover, its syndicator, King Features, also published weekly periodicals of its own, with issues as far back as 1933 renewed.  (As far as I can tell, it published these for copyright purposes, as very few libraries have them, but according to WorldCat an issue “binds together one copy of each comic, puzzle, or column distributed by the syndicate in a given week”.  Renew that, and you renew everything in it.)  King Features remains one of the largest syndicators in the world.  Most major newspapers, then, include at least some copyrighted (and possibly still marketable) material at least as far back as the early 1930s.

Selective presentation of serial content

The most problematic content of these old newspapers from a copyright point of view, though, is probably the least interesting content from a researcher’s point of view.  Most people who want to look at a particular locale’s newspaper want to see the local content: the news its journalists reported, the editorials it ran, the ads local businesses and readers bought.  The material that came from elsewhere, and ran identically in hundreds of other newspapers, is of less research interest.  Why not omit that, then, while still showing all the local content?

This should be feasible given current law and technology.  We know from the Google and HathiTrust cases that fair use allows completely copyrighted volumes to be digitized and used for certain purposes like search, as long as users aren’t generally shown the full text.  And while projects like HathiTrust and Chronicling America now typically show all the pages they scan, commonly used newspaper digitization software can either highlight or blank out not only specific pages but even the specific sections of a page in which a particular article or image appears.

This gives us a path forward for providing access to newspapers up to 1963 (or whatever date the paper started being renewed in its entirety).  Specifically, a library digitization project can digitize and index all the pages, but then only expose the portions of the issues it’s comfortable showing given its copyright knowledge.  It can summarize the parts it’s omitting, so that other libraries (or other trusted collaborators) can research the parts it wasn’t able to clear on its own.  Sections could then be opened up as researchers across the Internet found evidence to clear up their status.   Taken as a whole, it’s a big job, but projects like the Copyright Review Management System show how distributed copyright clearance can be feasibly done at scale.
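To make the idea concrete, here’s a minimal sketch of what per-region clearance metadata might look like.  The field names and statuses are my own invention for illustration, not the schema of any actual newspaper digitization system: each scanned article or image region records its page coordinates and clearance status, and only cleared regions are exposed to readers, while withheld ones keep a summary that collaborators can use for further research.

```python
from dataclasses import dataclass

@dataclass
class Region:
    page: int
    bbox: tuple[int, int, int, int]  # x, y, width, height on the page scan
    description: str                 # summary shown for withheld content
    status: str                      # "cleared", "unknown", or "renewed"

def visible_regions(regions: list[Region]) -> list[Region]:
    """Expose only regions whose copyright status has been cleared;
    everything else stays indexed for search but blanked for display."""
    return [r for r in regions if r.status == "cleared"]

# A hypothetical issue: local reporting cleared, syndicated and
# serialized content withheld pending further copyright research.
issue = [
    Region(1, (0, 0, 600, 400), "Local flood coverage", "cleared"),
    Region(1, (0, 400, 600, 200), "Syndicated comic strip", "renewed"),
    Region(2, (0, 0, 600, 300), "Serialized novel chapter", "unknown"),
]
shown = visible_regions(issue)
```

As researchers clear the status of withheld regions, flipping a record to "cleared" opens that section without re-scanning anything.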

Moreover, if we can establish a workable clearance and selective display process for US newspapers, it will probably also work for most other serials published in the US.  Most of them, whether magazines, scholarly journals, conference proceedings, newsletters, or trade publications, are no more complicated in their sources and structures than newspapers are, and they’re often much simpler.  So I look forward to seeing how this expansion in scope up to 1963 works out for the National Digital Newspaper Program.  And I hope we can use their example and experience to open access to a wider variety of serials as well.

Sharing journals freely online

What are all the research journals that anyone can read freely online?  The answer is harder to determine than you might think.  Most research library catalogs can be searched for online serials (here’s what Penn Libraries gives access to, for instance), but it’s often hard for unaffiliated readers to determine what they can get access to, and what will throw up a paywall when they try following a link.

Current research

The best-known listing of current free research journals has been the Directory of Open Access Journals (DOAJ), a comprehensive listing of free-to-read research journals in all areas of scholarship. Given the ease with which anyone can throw up a web site and call it a “journal” regardless of its quality or its viability, some have worried that the directory might be a little too comprehensive to be useful.  A couple of years ago, though, DOAJ instituted more stringent criteria for what it accepts, and it recently weeded its listings of journals that did not reapply under its new criteria, or did not meet its requirements.   This week I am pleased to welcome over 8,000 of its journals to the extended-shelves listings of The Online Books Page.  The catalog entries are automatically derived from the data DOAJ provides; I’m also happy to create curated entries with more detailed cataloging on readers’ request.

Historic research

Scholarly journals go back centuries.  Many of these journals (and other periodicals) remain of interest to current scholars, whether they’re interested in the history of science and culture, the state of the natural world prior to recent environmental changes, or analyses and source documents that remain directly relevant to current scholarship.  Many older serials are also included in The Online Books Page’s extended shelves courtesy of HathiTrust, which currently offers over 130,000 serial records with at least some free-to-read content.  Many of these records are not for research journals, of course, and those that are can sometimes be fragmentary or hard to navigate.  I’m also happy to create organized, curated records for journals offered by HathiTrust and others at readers’ request.

It’s important work to organize and publicize these records, because many of these journals that go back a long way don’t make their content freely available in the first place one might look.  Recently I indexed five journals founded over a century ago that are still used enough to be included in Harvard’s 250 most popular works: Isis, The Journal of Comparative Neurology, The Journal of Infectious Diseases, The Journal of Roman Studies, and The Philosophical Review.  All five had public domain content that sat behind paywalls (with fees for access ranging from $10 to $42 per article) at their official journal sites or JSTOR, but that was available for free elsewhere online.  I’d much rather have readers find the free content than be stymied by a paywall.  So I’m compiling free links for these and other journals with public domain runs, whether they can be found at HathiTrust, JSTOR (which does make some early journal content, including from some of these journals, freely available), or other sites.

For many of these journals, the public domain extends as late as the 1960s due to non-renewal of copyright, so I’m also tracking when copyright renewals actually start for these journals.  I’ve done a complete inventory of serials published until 1950 that renewed their own copyrights up to 1977.  Some scholarly journals are in this list, but most are not, and many that are did not renew copyrights for many years beyond 1922.  (For the five journals mentioned above, for instance, the first copyright-renewed issues were published in 1941, 1964, 1959, 1964, and 1964 respectively– 1964 being the first year for which renewals were automatic.)

Even so, major projects like HathiTrust and JSTOR have generally stopped opening journal content at 1922, partly out of a concern for the complexity of serial copyright research.  In particular, contributions to serials could have their own copyright renewals separate from renewals for the serials themselves.  Could this keep some unrenewed serials out of the public domain?  To answer this question, I’ve also started surveying information on contribution renewals, and adding information on those renewals to my inventory.  Having recently completed this survey for all 1920s serials, I can report that so far individual contributions to scholarly journals were almost never copyright-renewed on their own.  (Individual short stories, and articles for general-interest popular magazines, often were, but not articles intended for scientific or scholarly audiences.)  I’ll post an update if the situation changes in the 1930s or later. So far, though, it’s looking like, at least for research journals, serial digitization projects can start opening issues past 1922 with little risk.  There are some review requirements, but they’re comparable in complexity to the Copyright Review Management System that HathiTrust has used to successfully open access to hundreds of thousands of post-1922 public domain book volumes.

Recent research

Let’s not forget that a lot more recent research is also available freely online, often from journal publishers themselves.  DOAJ only tracks journals that make their content open access immediately, but there are also many journals that make their content freely readable online a few months or years after initial publication.  This content can then be found in repositories like PubMedCentral (see the journals noted as “Full” in the “participation” column), publishing platforms like Highwire Press (see the journals with entries in the “free back issues” column), or individual publishers’ programs such as Elsevier’s Open Archives.

Why are publishers leaving money on the table by making old but copyrighted content freely available instead of charging for it?  Often it’s because it’s what makes their supporters– scholars and their funders– happy.  NIH, which runs PubMedCentral, already mandates open access to research it funds, and many of the journals that fully participate in PubMedCentral’s free issue program are largely filled with NIH-backed research.  Similarly, I suspect that the high proportion of math journals in Elsevier’s Open Archives selection has something to do with the high proportion of mathematicians in the Cost of Knowledge protest against Elsevier.  When researchers, and their affiliated organizations, make their voices heard, publishers listen.

I’m happy to include listings for significant free runs of significant research journals on The Online Books Page as well, whether they’re open access from the get-go or after a delay.  I won’t list journals that only make the occasional paid-for article available through a “hybrid” program, or those that only have sporadic “free sample” issues.  But if a journal you value has at least a continuous year’s worth of full-sized, complete issues permanently freely available, please let me know about it and I’ll be glad to check it out.

Sharing journal information

I’m not simply trying to build up my own website, though– I want to spread this information around, so that people can easily find free research journal content wherever they go.  Right now, I have a Dublin Core OAI feed for all curated Online Books Page listings as well as a monthly dump of my raw data file, both CC0-licensed.  But I think I could do more to get free journal information to libraries and other interested parties.  I don’t have MARC records for my listings at the moment, but I suspect that holdings information– what issues of which journals are freely available, and from whom– is more useful for me to provide than bibliographic descriptions of the journals (which can already be obtained from various other sources).  Would a KBART file, published online or made available to initiatives like the Global Open Knowledgebase, be useful?  Or would something else work better to get this free journal information more widely known and used?
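For what it’s worth, a KBART title list is just a tab-separated file with standardized column names, so generating one from holdings data would be straightforward.  Here’s a minimal sketch; the journal entry is made up for illustration, and I’ve included only a few of the columns the KBART recommended practice defines (the full set adds identifiers, volume/issue numbers, and more):

```python
import csv
import io

# A few of the standard KBART column names.
FIELDS = [
    "publication_title",
    "date_first_issue_online",
    "date_last_issue_online",
    "title_url",
    "coverage_depth",
]

# Hypothetical free-run holdings data for one journal.
holdings = [
    {
        "publication_title": "Example Journal of Science",
        "date_first_issue_online": "1895",
        "date_last_issue_online": "1940",
        "title_url": "https://example.org/ejs",
        "coverage_depth": "fulltext",
    },
]

# KBART files are tab-delimited with a header row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter="\t")
writer.writeheader()
writer.writerows(holdings)
kbart_tsv = buf.getvalue()
```

A file like this, published at a stable URL or submitted to a knowledgebase, would let link resolvers and library catalogs pick up free journal runs automatically.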

Issues and volumes vs. articles

Of course, many articles are made available online individually as well, as many journal publishers allow.  I don’t have the resources at this point to track articles at an individual level, but there are a growing number of other efforts that do, whether they’re proprietary but comprehensive search platforms like Google Scholar and Web of Science, disciplinary repositories like arXiv and SSRN, institutional repositories and their aggregators like SHARE and BASE, or outright bootleg sites like Sci-Hub.  We know from them that it’s possible to index and provide access to the scholarly knowledge exchange at a global scale, but doing it accurately, openly, comprehensively, sustainably, and ethically is a bigger challenge.  I think it’s a challenge that the academic community can solve if we make it a priority.  We created the research; let’s also make it easy for the world to access it, learn from it, and put it to work.  Let’s make open access to research articles the norm, not the exception.

And as part of that, if you’d like to help me highlight and share information on free, authorized sources for online journal content, please alert me to relevant journals, make suggestions in the comments here, or get in touch with me offline.

Public Domain Day 2016: Freezes and thaws

For most of the past 55 years, the public domain in the United States has gone through a series of partial or complete freezes.  We’ve gotten used to them by now.  A thaw is coming soon, though, if there are no further changes in US copyright terms.  But right now, our government is trying to export freezes abroad, and is on the brink of succeeding.  And our own thaw is not yet a sure thing.

The freezes began in 1962, when Congress extended the length of copyright renewal terms in anticipation of an overhaul of copyright law.  Copyrights from 1906 that had been expiring over the course of that year stopped expiring.  The first extension was for a little over 3 years, but Congress kept passing new extensions before the old extensions ran out, until the 1976 Copyright Act established new, longer terms for copyright.  The 1906 copyrights that were frozen in 1962 would not enter the public domain until the start of 1982.

The freeze of the public domain in the 1960s and 1970s wasn’t complete.  Unrenewed copyrights continued to expire after 28 years, and works published without a copyright notice entered the public domain right away.  In 1982, all the traditional routes to the public domain were open again: age, non-renewal, publication without notice, and so on.  But that would only last about 7 years.   In 1989, the non-notice route was frozen out: from then on, anything published, or even written down, was automatically copyrighted, whether the author intended that or not.  In 1992, the non-renewal route was frozen out: copyrights would automatically run a full term whether or not the author or their heirs applied for a renewal.  In 1996, many non-US works were removed from the public domain, and returned to copyright, as if they had always been published with notice and renewals.  And finally in 1998, copyright expiration due to sheer age was also frozen out.  Due to a copyright extension passed that year, no more old published works would enter the public domain for another 20 years.  The freeze of the public domain became virtually complete at that point, with the trailing edge of copyrights stuck at the end of 1922.  It’s still there today.

But a thaw is in sight.  Just 3 years from now, in 2019, works published in 1923 that are still under copyright are scheduled to finally enter the public domain in the US.  Assuming we manage to stop any further copyright extensions, we’ll see another year’s worth of copyrights enter the public domain every January 1 from then on– just as happens in many other countries around the world.  Today, in most of Europe, and other countries that follow life+70 years terms, works by authors who died in 1945 (including everyone who died in World War II) finally enter the public domain.  In Canada, and other countries that follow the life+50 years terms of the Berne Convention, works by authors who died in 1965 enter the public domain.  The Public Domain Review shows some of the more famous people in these groups, and there are many more documented at Wikipedia.

But this may be the last year for a long while that people in Canada, and some other countries, see new works enter the public domain.  This past year, trade representatives from Canada, the US, and various other countries approved the Trans-Pacific Partnership, an agreement that includes a requirement pushed by the US to extend copyrights to life+70 years.  Those extensions would take place as soon as the TPP is ratified by a sufficient number of governments. In Canada, New Zealand, Japan, Malaysia, Brunei, and Vietnam, that would mean a 20-year freeze in the public domain, potentially coming into effect just before the US’s 20-year near-total freeze is scheduled to end.

Supporters of the public domain should not take either the pending freezes or the pending thaws for granted.  When the TPP was agreed on this past October, the leaders of the US and Canadian governments  were strong TPP supporters.  But the government of Canada has changed since then, and it looks like the US government might not put TPP to a vote until after the 2016 elections.  Canada’s new government, and some of the leading US candidates, seem to be more on the fence about TPP than their predecessors.  Organized public action could well shift their stance, in either direction.

While we’re awaiting a thaw in the US, we can still map out and digitize more of the public domain we have.  HathiTrust has been doing a wonderful job opening access to hundreds of thousands of post-1922 public domain books via its copyright review activities.   But other categories of unrenewed copyrights are not yet as well lit up.  For instance, Duke’s summary of the 1959 copyrights that could have been expiring today mentions 3 scholarly journals– Nature, Science, and JAMA, whose 1959 articles are behind paywalls at their publishers’ sites.  But it turns out that none of those journals renewed copyrights for their 1959 issues — the first issue to be renewed of any of them was the January 9, 1960 issue of JAMA — so we can digitize and open access to much of that content without waiting for the publishers to do so.

In the next three years, I’d love to see digital projects in the US make the post-1922 public domain as visible and comprehensive online as the pre-1923 public domain is now.  And then, if we ensure the thaw comes on schedule in the US, and we stave off freezes elsewhere, I hope we can quickly make another full year’s worth of public domain available every New Year’s Day.  Maybe once we get used to that happening in the US, we’ll be less likely to allow the public domain to freeze up again.

Happy Public Domain Day!  May we all soon have ample reason to celebrate it every year, all around the world.

Public Domain Day 2015: Ending our own enclosures

It’s the start of the new year, which, as many of my readers know, marks another Public Domain Day, when a year’s worth of creative work becomes free for anyone to use in many countries.

In countries where copyrights have been extended to life plus 70 years, works by people like Piet Mondrian, Edith Durham, Glenn Miller, and Ethel Lina White enter the public domain.  In countries that have resisted ongoing efforts to extend copyrights past life + 50 years, 2015 sees works by people like Flannery O’Connor, E. J. Pratt, Ian Fleming, Rachel Carson, and T. H. White enter the public domain. And in the US, once again no published works enter the public domain due to an ongoing freeze in copyright expirations (though some well-known works might have if we still had the copyright laws in effect when they were created.)

But we’re actually getting something new worth noting this year.  Today we’re seeing scholarship-quality transcriptions of tens of thousands of early English books — the EEBO Text Creation Partnership Phase I texts — become available free of charge to the general public for the first time.  (As I write this, the books aren’t accessible yet, but I expect they will be once the folks in the project come back to work from the holiday.)  (Update: It looks like files and links are now on GitHub; hopefully more user-friendly access points are in the works as well.)

This isn’t a new addition to the public domain; the books being transcribed have been in the public domain for some time.  But it’s the first time many of them are generally available in a form that’s easily searchable and isn’t riddled with OCR errors.  For the rarer works, it’s the first time they’re available freely across the world in any form.  It’s important to recognize this milestone as well, because taking advantage of the public domain requires not just copyrights expiring or being waived, but also people dedicated to making the public domain available to the public.

And that is where we who work in institutions dedicated to learning, knowledge, and memory have unique opportunities and responsibilities.   Libraries, galleries, archives, and museums have collected and preserved much of the cultural heritage that is now in the public domain, and that is often not findable– and generally not shareable– anywhere else.  That heritage becomes much more useful and valuable when we share it freely with the whole world online than when we only give access to people who can get to our physical collections, or who can pay the fees and tolerate the usage restrictions of restricted digitized collections.

So whether or not we’re getting new works in the public domain this year, we have a lot of work to do this year, and the years to follow, in making that work available to the world.  Wherever and whenever possible, those of us whose mission focuses more on knowledge than commerce should commit to having that work be as openly accessible as possible, as soon as possible.

That doesn’t mean we shouldn’t work with the commercial sector, or respect their interests as well.  After all, we wouldn’t have seen nearly so many books become readable online in the early years of this century if it weren’t for companies like Google, Microsoft, and ProQuest digitizing them at much larger scale than libraries had previously done on their own.  As commercial firms, they’re naturally looking to make some money by doing so.  But they need us as much as we need them to digitize the materials we hold, so we have the power and duty to ensure that when we work with them, our agreements fulfill our missions to spread knowledge widely as well as their missions to earn a profit.

We’ve done better at this in some cases than in others.   I’m happy that many of the libraries who partnered with Google in their book scanning program retained the rights to preserve those scans themselves and make them available to the world in HathiTrust.   (Though it’d be nice if the Google-imposed restrictions on full-book downloads from there eventually expired.)  I’m happy that libraries who made deals with ProQuest in the 1990s to digitize old English books that no one else was then digitizing had the foresight to secure the right to make transcriptions of those books freely available to the world today.  I’m less happy that there’s no definite release date yet for some of the other books in the collection (the ones in Phase II, where the 5-year timer for public release doesn’t count down until that phase’s as-yet-unclear completion date), and that there appears to be no plan to make the page images freely available.

Working together, we in knowledge institutions can get around the more onerous commercial restrictions put on the public domain.  I have no issue with firms that make a reasonable profit by adding value– if, for instance, Melville House can quickly sell lots of printed and digitally transcribed copies of the US Senate Torture report for under $20, more power to them.  People who want to pay for the convenience of those editions can do so, and free public domain copies from the Senate remain available for those who want to read and repurpose them.

But when I hear about firms like Taylor and Francis charging as much as $48 to nonsubscribers to download a 19th century public domain article from their website for the Philosophical Magazine, I’m going to be much more inclined to take the time to promote free alternatives scanned by others.  And we can make similar bypasses of not-for-profit gatekeepers when necessary.  I sympathize with Canadian institutions having to deal with drastic funding cuts, which seem to have prompted Early Canadiana Online to put many of their previously freely available digitized books behind paywalls– but I still switched my links as soon as I could to free copies of most of the same books posted at the Internet Archive.  (I expect that increasing numbers of free page scans of the titles represented in Early English Books Online will show up there and elsewhere over time as well, from independent scanning projects if not from ProQuest.)

Assuming we can hold off further extensions to copyright (which, as I noted last year, is a battle we need to show up for now), four years from now we’ll finally have more publication copyrights expiring into the public domain in the US.  But there’s a lot of work we in learning and memory institutions can do now in making our public domain works available to the world.  For that matter, there’s a lot we can do in making the many copyrighted works we create available to the world in free and open forms.  We saw a lot of progress in that respect in 2014: Scholars and funders are increasingly shifting from closed-access to open-access publication strategies.  A coalition of libraries has successfully crowdfunded open-access academic monographs for less cost to them than for similar closed-access print books.  And a growing number of academic authors and nonprofit publishers are making open access versions of their works, particularly older works, freely available to the world while still sustaining themselves.  Today, for instance, I’ll be starting to list on The Online Books Page free copies of books that Ohio State University Press published in 2009, now that a 5-year-limited paywall has expired on those titles.  And, as usual, I’m also dedicating a year’s worth of 15-year-old copyrights I control (in this case, for work I made public in 2000) to the public domain today, since the 14-year initial copyright term that the founders of the United States first established is plenty long for most of what I do.

As we celebrate Public Domain Day today, let’s look to the works that we ourselves oversee, and resolve to bring down enclosures and provide access to as much of that work as we can.

Public Domain Day 2014: The fight for the public domain is on now

New Year’s Day is upon us again, and with it, the return of Public Domain Day, which I’m happy to see has become a regular celebration in many places over the last few years.  (I’ve observed it here since 2008.)  In Europe, the Open Knowledge Foundation gives us a “class picture” of authors who died in 1943, and whose works are now entering the public domain there and in other “life+70 years” countries.  Meanwhile, countries that still hold to the Berne Convention’s “life+50 years” copyright term, including Canada, Japan, New Zealand, and many others, get the works of authors who died in 1963.  (The Open Knowledge Foundation also has highlights for those countries, where Narnia/Brave-New-World/purloined-plums crossover fanfic is now completely legal.)  And Duke’s Center for the Study of the Public Domain laments that, for the 16th straight year, the US gets no more published works entering the public domain, and highlights the works that would have gone into the public domain here were it not for later copyright extensions.
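For readers who like to check these dates themselves, the arithmetic is simple: copyright in a “life+N years” country runs through the end of the calendar year N years after the author’s death, so the works enter the public domain on the following January 1.  Here’s a minimal sketch of that calculation (the helper name is mine, and real copyright determinations involve complications this ignores, such as joint authorship and wartime term extensions):

```python
def pd_entry_year(death_year, term_years):
    """Year an author's works enter the public domain in a
    'life + term_years' country.  Copyright runs through the end
    of the calendar year in which it expires, so works become
    free on January 1 of the following year."""
    return death_year + term_years + 1

# Authors who died in 1943 enter the public domain in life+70
# countries in 2014; authors who died in 1963 enter it in
# life+50 countries on the same day.
print(pd_entry_year(1943, 70))  # 2014
print(pd_entry_year(1963, 50))  # 2014
```

The same one-liner explains why Canada’s class of authors is always exactly twenty years younger than Europe’s on any given Public Domain Day.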

It all starts to look a bit familiar after a few years, and while we may lament the delays in works entering the public domain, it may seem like there’s not much to do about it right now.  After all, most of the world is getting another year’s worth of public domain again on schedule, and many commentators on the US’s frozen public domain don’t see much changing until we approach 2019, when remaining copyrights on works published in 1923 are scheduled to finally expire.  By then, writers like Timothy Lee speculate, public domain supporters will be ready to fight the passage of another copyright term extension bill in Congress like the one that froze the public domain here back in 1998.

We can’t afford that sense of complacency.  In fact, the fight to further extend copyright is raging now, and the most significant campaigns aren’t happening in Congress or other now-closely-watched legislative chambers.  Instead, they’re happening in the more secretive world of international trade negotiations, where major intellectual property hoarders have better access than the general public, and where treaties can be used to later force extensions of the length and impact of copyright laws at the national level, in the name of “harmonization”.   Here’s what we currently have to deal with:

Remaining Berne holdouts are being pushed to add 20 more years of copyright.  Remember how I said that Canada, Japan, and New Zealand were all enjoying another year of “life+50 years” copyright expirations?  Quite possibly not for long.  All of those countries are also involved in the Trans-Pacific Partnership (TPP) negotiations, which include a strong push for more extensive copyright control.  The exact terms are being kept secret, but a leaked draft of the intellectual property chapter from August 2013 shows agreement by many of the countries’ trade negotiators to mandate “life+70 years” terms across the partnership.  That would mean a loss of 20 years of public domain for many TPP countries, and ultimately increased pressure on other countries to match the longer terms of major trade partners.  Public pressure from citizens of those countries can prevent this from happening– indeed, a leak from December hints that some countries that had favored extensions back in August are reconsidering.  So now is an excellent time to do as Gutenberg Canada suggests and let legislators and trade representatives know that you value the public domain and oppose further extensions of copyright.

Life+70 years countries still get further copyright extensions.   The push to extend copyrights further doesn’t end when a country abandons the “life+50 years” standard.  Indeed, just this past year the European Union saw another 20 years added on to the terms of sound recordings (which previously had a 50-year term of their own, in addition to the underlying life+70 years copyrights on the material being recorded).  This extension is actually less than the 95 years that US lobbyists had pushed for, and are still pushing for in the Trans-Pacific Partnership, to match terms in the US.

(Why does the US have a 95-year term in the first place that it wants Europe to harmonize with?  Because of the 20-year copyright extension that was enacted in 1998 in the name of harmonizing with Europe.  As with climbers going from handhold to handhold and foothold to foothold higher up a cliff, you can always find a way to “harmonize” copyright ever upward if you’re determined to do so.)

The next major plateau for international copyright terms, life+100 years, is now in sight.  The leaked TPP draft from August also includes a proposal from Mexico to add yet another 30 years onto copyright terms, to life+100 years, which that country adopted not many years ago.  It doesn’t have much chance of passage in the TPP negotiations, where to my knowledge only Mexico has favored the measure.   But it makes “life+70” seem reasonable in comparison, and sets a precedent for future, smaller-scale trade deals that could eventually establish longer terms.  It’s worth remembering, for instance, that Europe’s “life+70” terms started out in only a couple of countries, spread to the rest of Europe in European Union trade deals, and then to the US and much of the rest of the world.  Likewise, Mexico’s “life+100” proposal might be more influential in smaller-scale Latin American trade deals, and once established there, spread to the US and other countries.  With 5 years to go before US copyrights are scheduled to expire again in significant numbers, there’s time for copyright maximalists to get momentum going for more international “harmonization”.

What’s in the public domain now isn’t guaranteed to stay there.  That’s been the case for a while in Europe, where the public domain is only now getting back to where it was 20 years ago.  (The European Union’s 1990s extension directive rolled back the public domain in many European countries, so in places like the United Kingdom, where the new terms went into effect in 1996, the public domain is only now getting to where it was in 1994.)  But now in the US as well, where “what enters the public domain stays in the public domain” has been a long-standing custom, the Supreme Court has ruled that Congress can in fact remove works from the public domain in certain circumstances.   The circumstances at issue in the case they ruled on?  An international trade agreement— which as we’ve seen above is now the prevailing way of getting copyrights extended in the first place.   Even an agreement that just establishes life+70 years as a universal requirement, but doesn’t include the usual grandfathered exception for older works, could put the public domain status of works going back as far back as the 1870s into question, as we’ve seen with HathiTrust international copyright determinations.

But we can help turn the tide.  It’s also possible to cooperate internationally to improve access to creative works, and not just lock them up further.  We saw that start to happen this past year, for instance, with the signing of the Marrakesh Treaty on copyright exceptions and limitations, intended to ensure that those with disabilities that make it hard to read books normally can access the wealth of literature and learning available to the rest of the world.  The treaty still needs to be ratified before it can go into effect, so we need to make sure ratification goes through in our various countries.  It’s a hopeful first step in international cooperation increasing access instead of raising barriers to access.

Another improvement now being discussed is to require rightsholders to register ongoing interest in a work if they want to keep it under copyright past a certain point.  That idea, which reintroduces the concept of “formalities”, has been floated by some prominent figures like US Copyright Register Maria Pallante.  Such formalities would alleviate the problem of “orphan works” that are no longer being exploited by their owners but are not available for free use.   (And a sensible, uniform formalities system could be simpler and more straightforward than the old country-by-country formalities that Berne got rid of, or the formalities people already accept for property like motor vehicles and real estate.)  Pallante’s initial proposal represents a fairly small step; for compatibility with the Berne Convention, formalities would not be required until the last 20 years of a full copyright term.  But with enough public support, it could help move copyright away from a “one size fits all” approach to one that more sensibly balances the interests of various kinds of creators and readers.

We can also make our own work more freely available.  For the last several years, I’ve been applying my own personal “formalities” program, in which I release into the public domain works I’ve created that I don’t need to further limit.  So in keeping with the original 14-year renewable terms of US copyright law, I now declare that all work that I published in 1999, and that I have sole control of rights over, is hereby dedicated to the public domain via a CC0 grant.  (They join other works from the 1990s that I’ve also dedicated to the public domain in previous years.)  For 1999, this mostly consists of material I put online, including all versions of Catholic Resources on the Net, one of the first websites of its kind, which I edited from 1993 to 1999.  It also includes another year’s history of The Online Books Page.

Not that you have to wait 14 years to free your work.  Earlier this year, I released much of the catalog data from the Online Books Page into the public domain.  The metadata in that site’s “curated collection” continues to be released as open data under a CC0 grant as soon as it is published, so other library catalogs, aggregators, and other sites can freely reuse, analyze, and republish it as they see fit.

We can do more with work that’s under copyright, or that seems to be.  Sometimes we let worries about copyright keep us from taking full advantage of what copyright law actually allows us to do with works.  In the past couple of years, we saw court rulings supporting the rights of Google and HathiTrust to use digitized, but not publicly readable, copies of in-copyright books for indexing, search, and preservation purposes.   (Both cases are currently being appealed by the Authors Guild.)  HathiTrust has also researched hundreds of thousands of book copyrights, and as of a month ago they’d enabled access to nearly 200,000 volumes that were classified as in-copyright under simple precautionary guidelines, but determined to be actually in the public domain after closer examination.

In the coming year, I’d like to see if we can do similar work to open up access to historical journals and other serials as well.  For instance, Duke’s survey of the lost public domain mentions that articles from 1957 issues of major science journals like Nature, Science, and JAMA are behind paywalls, but as far as I’ve been able to tell, none of those three journals renewed copyrights for their 1957 issues.  Scientists are also increasingly making current work openly available through open access journals, open access repositories, and even discipline-wide initiatives like SCOAP3, which also debuts today.

There are also some potentially useful copyright exemptions for libraries in Section 108 of US copyright law that we could use to provide more access to brittle materials, materials nearing the end of their copyright term, and materials used by print-impaired users.

Supporters of the public domain who sit around and wait for the next copyright extension to get introduced into their legislatures are like generals expecting victory by fighting the last war.  There’s a lot that public domain supporters can do, and need to do, now.  That includes countering the ongoing extension of copyright through international trade agreements, promoting initiatives to restore a proper balance of interest between rightsholders and readers, improving access to copyrighted work where allowed, making work available that’s new to the public domain (or that we haven’t yet figured out is out of copyright), and looking for opportunities to share our own work more widely with the world.

So enjoy the New Year and the Public Domain Day holiday.  And then let’s get to work.

Public Domain Day 2013: or, There and Back Again

The first day of the new year is Public Domain Day, when many countries celebrate a year’s worth of copyrights expiring, and the associated works become freely available for anyone to share and adapt.  As the Public Domain Day page at Duke’s Center for the Study of the Public Domain notes, the United States once again does not have much to celebrate.  Except for unpublished works by authors who died in 1942, no copyrights expire in the US today.  Under current law, Americans still have to wait 6 more years before any more copyrights of published works will expire.  (Subsisting copyrights from 1923 are scheduled to finally enter the public domain at the start of 2019.)

The start of 2013 is more significant in Europe, where the Open Knowledge Foundation has a more upbeat Public Domain Day site featuring authors who died in 1942, and whose published works enter the public domain today in most of the European Union. But that isn’t actually breaking new ground in most of Europe, because 2013 is also the 20th anniversary of the 1993 European Union Copyright Duration Directive, which required European countries to retroactively extend their copyright terms from the Berne Convention‘s “life of the author plus 50 years” to “life of the author plus 70 years”, and put 20 years’ worth of public domain works back into copyright in those countries.

For countries that used the Berne Convention’s term and implemented the directive right away, today marks the day that the public domain finally returns to its maximum extent of 20 years ago.  Only next year will Europe start seeing truly new public domain works.  (And since many European countries took a couple of years or more to implement the directive– the UK implemented it at the start of 1996, for instance– it may still be a few years yet before their public domain is back again to what it once was.)

At least the last US copyright extension, in 1998, only froze the public domain, without rolling it back.  If the US had not passed that extension, we would be seeing works published in 1937, such as the first edition of J.R.R. Tolkien’s The Hobbit, now entering the public domain.  (If the US hadn’t made any post-publication extensions, we’d also have the more familiar revision of The Hobbit, in which Gollum does not voluntarily give Bilbo the Ring, in the public domain now as well, along with all three volumes of The Lord of the Rings.)   Folks in Canada and other “life+50 years” countries, now celebrating the public domain status of works by authors who died in 1962, may be able to freely share and adapt Tolkien’s works in another 11 years.  Folks in Europe and the US who’d like to see a variety of visual adaptations, though, will have to content themselves with the estate-licensed Peter Jackson and Rankin/Bass adaptations for a while to come.
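The US dates above follow the same kind of year arithmetic as the “life+N” calculations, just measured from publication rather than from the author’s death.  A minimal sketch (the function name is mine, and this ignores the renewal and notice requirements that put many older US works into the public domain much sooner):

```python
def us_fixed_term_pd_year(pub_year, term_years=95):
    """Year a pre-1978 US publication whose copyright was secured
    and renewed enters the public domain, under a fixed term
    measured from the year of publication (95 years under
    current law; a maximum of 75 years before the 1998 extension)."""
    return pub_year + term_years + 1

# Under current law, subsisting 1923 copyrights run through the
# end of 2018, so those works enter the public domain in 2019.
print(us_fixed_term_pd_year(1923))      # 2019
# Under the pre-1998 maximum of 75 years, works published in 1937,
# like The Hobbit, would be entering the US public domain now, in 2013.
print(us_fixed_term_pd_year(1937, 75))  # 2013
```

That six-year gap between 2013 and 2019 is exactly the “wait 6 more years” mentioned at the start of this post.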

But there are still things Americans can do to make today meaningful.  For the last few years, I’ve been releasing copyrights I control into the public domain after 14 years (the original term of copyright set by the country’s founders, with an option to renew for another 14).  So today, I dedicate all such copyrights for works I published in 1998 to the public domain.  This includes my computer science doctoral dissertation, Mediating Among Diverse Data Formats.  If I believed a recent fearmongering statement from certain British journal editors, I should be worried about plagiarism resulting from this dedication, which doesn’t even have the legal attribution requirement of the CC-BY license they decry.  But as I’ve explained in a previous post on plagiarism, plagiarism is fundamentally an ethical rather than a legal matter, and scholars can no more get away with plagiarizing public domain material than they can with copyrighted material.   Both are and should be a career-killer in academia.

I’ll also continue to feature “new” public domain works from around the world on The Online Books Page.  Starting today, for instance, I’ll be listing works featured in The Public Domain Review, a wonderful ongoing showcase of public domain works inaugurated by the Open Knowledge Foundation on Public Domain Day 2011.  I’ll also be continuing to add listings from Project Gutenberg Canada and other sites in “life+50 years” countries, as well as other titles suggested by my readers.

Finally, I’ll be keeping a close eye on Congress’s actions on copyright.  In this past year, the Supreme Court ruled that Congress could take works out of the public domain, meaning that the public domain in the US is now under threat of shrinking, and not just freezing.  And the power of the copyright lobby was evident this year when a Republican Study Committee memo recommending copyright reform (including shorter terms) was yanked within 24 hours of its posting, and its author then fired.  On the other hand, 2012 also saw one of the largest online protests in history stop a copyright lobby-backed Internet censorship bill in its tracks.  If the public shows that it cares as much about the public domain as about bills like SOPA, we could have a growing public domain back again before long, instead of works going back again into copyright.