The System that Does or Does Not Prevent Online Posting of Copyrighted Moving Images
The “notice and takedown system” that can be used to challenge online posting of copyrighted material — on YouTube, for example — is “under strain.” While no easy fixes are in sight, regulatory reform is needed, argues a helpful study.
By Peter Monaghan
How is it, you might wonder, that so much material is freely available online that seems certain to be protected by copyright?
Have copyright holders agreed to their “intellectual property” being on sites like YouTube?
Very often, they have not. The explanation of how uncountable hours of it nonetheless are posted there requires a meander through the thickets of copyright law — through its rationale, formation, and enforcement.
The key issue, for drafters of copyright law, is an abstract one: How free should the circulation of information and ideas be? Some observers – and many copyright holders — argue that everything should be done to enforce copyright stringently, and to recompense copyright holders maximally; at the other end of the scale are advocates of an almost completely unshackled, ideas-must-fly-free Internet.
How well current copyright law is faring, when it comes to Internet posting of moving-image material, is a complicated muddle of an issue. So clarification is always welcome. Some has come in the form of a research paper titled “Notice and Takedown in Everyday Practice” that was published at the end of March by Jennifer M. Urban and Brianna L. Schofield, of the University of California at Berkeley School of Law, and Joe Karaganis, of The American Assembly, a public policy institute at Columbia University. (The publication is available for free download at the Social Science Research Network.)
The paper warns that key elements of modern copyright law “may be unraveling” in part because disputes “largely arise from forces the legislators could not have foreseen” when they created the law, before the massive growth of the Internet — before the advent of Google, Facebook, YouTube, and other Internet giants. The authors focus on U.S. copyright law, specifically the Digital Millennium Copyright Act (passed in 1998, in force since 2000); they note, however, that the measures at the core of that legislation have been adopted in whole or in large part in many other countries, although with varying favoritism towards rightsholders versus Internet service providers and their users. Those measures also underpin the European Union’s E-Commerce Directive of 2000, as well as various international trade agreements.
Notice and takedown
The paper’s title, “Notice and Takedown in Everyday Practice,” refers to a measure inscribed in copyright law as it applies to works created or stored in digital form. A key facet of such creation and publication is that when copyrighted works are placed on Internet host sites like YouTube, users who post items are deemed to be doing the publishing — they, and not the companies that operate the sites.
That’s a key issue: posters publish; sites merely host.
The posters have material that they value and that they wish to allow other users to see; but the posters may or may not own the material, and may or may not have the right to post it.
Who makes money from host sites, and how, is a murky area, and is beyond the scope of this article. Suffice it to say that hosting platforms that know their users are posting huge amounts of copyrighted material can still make money from selling ad space, usually to large ad brokers that increasingly use complex, automated techniques of market data collection and analysis to select which ads to send to which platforms, and how often. All parties then have some degree of remove and legal protection from copyright claims against them.
Not all advertisers presume to have free rein: in 2013, companies associated with the Interactive Advertising Bureau, a trade group for online advertisers, committed to a set of best practices designed to dry up the flow of advertising revenue to operators of Internet pirate sites. At least, that was the claimed goal of major online ad networks, including those of Google, Yahoo, Microsoft, and AOL. The White House applauded them.
Unsurprisingly, observers doubted that dodgy advertising networks that funnel ads to sites would adhere to the standards; even major brands often advertise on sites that accommodate copyright infringement, and ad brokers and networks haul in payments.
Notice and takedown within the safe harbor
Essential to understanding the posting of copyrighted materials is to understand a key element of modern copyright law. Enshrined at the core of the U.S. Digital Millennium Copyright Act (passed in 1998, in force since 2000) is section 512, the “safe harbor” provision.
It makes creators of content responsible for flagging infringements of their copyrights. Once copyright owners report infringements to companies that host them online – your Facebook, your YouTube (owned by Google), and so on — those companies are obligated to tell the posters of the improperly posted material to remove the offending material, or to make a case for not removing it.
Owners of host sites may bar posters who offend repeatedly.
For copyright owners, that process can mean a lot of work, and for sites that host purloined material, too: Owners must report breaches of copyright; hosts must tell posters to take them down, or the hosts must remove them.
For hosts, the safe-harbor provision is a safeguard against being held liable for copyright breaches; conversely, the provision gives them a loophole that they can exploit. It allows companies to purport to discourage posting of copyrighted material at the same time as they really don’t.
That’s to say that even while discouraging illegal posting, they may in reality be contentedly tolerating it: they say it is not they, but users, “third parties,” who are posting the material, independent of the hosting sites. Those sites complete their disavowal of responsibility by groaning that they are themselves beleaguered: they strain under the weight of responding to a gazillion notice-and-takedown demands from annoyed copyright owners.
It may seem odd, at first blush, that lawmakers constructed a system that hinges upon a notice-and-takedown stipulation, rather than one that plainly states that illegal posting is theft and will attract fines or other deterrents. The reasons for the safe-harbor provision relate to such interests as maintaining the Internet as a location in which ideas may be exchanged with a minimum of obstruction.
Who does or should define “minimum of obstruction” is, of course, a contentious area of policy and history.
Some Internet users – let’s, for argument’s sake, call them Internet libertarians — claim that a founding and prevailing ethos of the Internet is that it should be a space for unencumbered sharing of ideas. That view ignores that the earliest online channels of communication were designed to ensure resilient transfer of military communications. But that’s another story.
To recap, so far…
- Whether specifically intended or not, the effect of the takedown provision of the Digital Millennium Copyright Act has been to permit at least temporary posting of copyrighted materials.
- If copyright owners don’t like their material being posted, the onus is on them to be organized enough to find it and request that it be taken down.
- The law hardly discourages unauthorized postings: it allows posters to pretend they didn’t realize they lacked clearance to post, or needed permission at all; it also allows hosts to garner advertising revenue from copyrighted material until they are asked to order its removal.
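The takedown mechanics recapped above can be caricatured in a few lines of code. This is a hypothetical sketch, not any host’s real system; every class, method, and threshold here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Posting:
    poster: str
    url: str
    status: str = "live"          # live -> taken_down -> restored

@dataclass
class HostSite:
    postings: dict = field(default_factory=dict)
    strikes: dict = field(default_factory=dict)   # repeat-infringer tally

    def receive_notice(self, url: str) -> str:
        """A rightsholder reports an infringement; the host must act to keep safe harbor."""
        post = self.postings.get(url)
        if post is None or post.status != "live":
            return "no_action"
        post.status = "taken_down"
        self.strikes[post.poster] = self.strikes.get(post.poster, 0) + 1
        return "taken_down"

    def receive_counter_notice(self, url: str) -> str:
        """The poster asserts a right to the material (fair use, say); the host may restore it."""
        post = self.postings.get(url)
        if post is not None and post.status == "taken_down":
            post.status = "restored"
            return "restored"
        return "no_action"

    def is_repeat_infringer(self, poster: str, threshold: int = 3) -> bool:
        """Hosts may bar posters who offend repeatedly; the threshold is invented."""
        return self.strikes.get(poster, 0) >= threshold
```

Note that the burden falls where the article says it does: nothing happens until the copyright owner finds the posting and sends the notice.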
Fair use, a slippery concept
Then, there’s “fair use.”
Copyright law makes provision for “fair use” of copyrighted material for such purposes as to illustrate points made in film criticism, or to advance scholarly research. But there’s no hard-and-fast boundary around the concept — even advocates for fair use have been surprised by some rulings in favor of it.
Posters of copyrighted material often allude to the fair-use doctrine in suspect ways – when, for example, they litter YouTube and the like with such statements as “I do not own the copyright of this film but am posting it just ’cause I’m swell and wish to share it under the terms of fair use.”
At issue is freedom of expression — and by that, the framers of copyright law presumably never intended to encourage utterance of silly statements such as that. But what exactly they did mean is a complex and contentious area of law and public policy, and one with significant ramifications for what happens out on the Information Super Highway.
the notice and takedown process became one of the principal mechanisms for regulating freedom of expression
Robots? More on that, below.
The authors write that “anecdotal reports of abusive takedowns” and “complaints by large rightsholders that determined online pirates continue to dent their profits” lead them to conclude that “the system exhibits important failures and definite strain.” They ask whether the measure is, in fact, “practically obsolete.”
Not quite, they conclude: “In some of its most basic features, the notice and takedown system is functioning, and meets the goals it was intended to address,” even amid those abusive takedowns and complaints about determined pirates.
We are in the midst, they conclude, of “an ever-escalating arms race fought with millions of automated notices and revolving offshore domains.”
Freedom of expression in “everyday practice”
Before trying to make sense of what the authors find and report, it is perhaps best to review some of the issues that give rise to the need for an analysis such as theirs. You can quickly learn how complicated the situation is — and how blatant, some of the pirating — by surveying Internet-related news articles about the implementation and policing of copyright laws.
Several news and industry-group sites monitor and aggregate such reports. Among those that do so even-handedly is ETCentric, the news-and-analysis site of the Entertainment Technology Center at the University of Southern California, a think tank and research center for the entertainment, consumer-electronics, technology, and information-services industries. The center characterizes ETCentric as “The Media Professional’s Inside Perspective.” Read through its postings about online copyright infringement during the last few years, and you get a sense of how disputed the principles and measures of the Digital Millennium Copyright Act are.
Courts have largely stuck with the notice-and-takedown and safe-harbor stipulations in ruling, for example, that online music services that host tracks aren’t liable for their users’ illegal postings.
But just what role companies can take has been the subject of fine slicing by the courts. For example, in 2013 the Motion Picture Association of America won a lawsuit against the “cyberlocker” Hotfile for copyright violations and forced the company to shut down. Hotfile tried to argue that it should enjoy “safe harbor” status, but a Florida federal judge ruled against its practice of providing a way for users to share files of copyrighted material.
Confusingly, YouTube is effectively allowed to share such files; the only real difference is that on YouTube files aren’t hidden away, or at least camouflaged, within the “cyberlockers” that Hotfile deployed. That difference — one that could be viewed as a mere figment of computer coding — affords YouTube the “safe harbor” that Hotfile could not avail itself of.
Viacom v YouTube, the big one
Viacom, the television giant, brought a $1-billion lawsuit against Google and its subsidiary YouTube in 2007, claiming widespread copyright infringement; courts ruled in YouTube’s favor, citing the “safe-harbor provision” of the DMCA, and its complementary obligation to issue appropriate takedown notices. In April 2013, a U.S. District (i.e. federal) judge dismissed Viacom’s suit, but in July of that year, Viacom appealed to the U.S. Second Circuit Court of Appeals (one step away from the U.S. Supreme Court). Viacom continued to insist that YouTube “willfully infringed” its rights, and that the proof was simply demonstrated: postings of Viacom productions, such as “The Daily Show,” were easily located on YouTube, which Viacom characterized as the result of YouTube/Google’s “deliberate effort to avoid guilty knowledge.”
(The court history was complicated: after the District Court ruled in favor of YouTube in 2010, the Second Circuit Court of Appeals remanded the case to the District Court in 2012 for further exploration; the District Court affirmed its ruling, and Viacom appealed to the Circuit Court again in 2013.)
YouTube Profited Directly From Infringing Activity They Had The Right And Ability To Control; YouTube Had The Ability To Control Infringing Activity That Pervaded Its Website; YouTube Received Enormous Financial Benefits Directly Attributable To The Infringing Activity…
Viacom claimed that YouTube/Google had failed to show it didn’t know it was hosting Viacom property, and in fact was “willfully blind” to the infringement: that it had deliberately avoided learning the location of infringing clips, which in many cases are of substantial length — often, whole films.
In sum, Viacom claimed, YouTube could not “satisfy either of two required elements for application of the DMCA safe harbor.”
Of course, this was rather a Viacom-centric analysis, by Viacom. YouTube receives millions of takedown requests every week, and presumably doesn’t have any legally enforceable obligation to follow up preferentially on Viacom’s.
Still, you can see Viacom’s point.
The American global mass-media giant reminded the Court that the DMCA “safe harbor” provision came with “right-and-ability-to-control” and “knowledge” exceptions, and that the Circuit Court had directed the lower court to consider those, too. And the U.S. Supreme Court, in the 2005 case of Metro-Goldwyn-Mayer Studios v. Grokster, had condemned “inducement of infringement over the Internet.”
Viacom began its 2013 filing by noting: “YouTube went from a start-up to a multi-billion-dollar business between 2005 and 2008 by intentionally enabling and profiting from the posting of infringing clips of copyrighted shows and movies, including several highly popular shows owned by plaintiff Viacom like South Park and The Daily Show. It was only in 2008 (more than a year after being acquired by Google and after succeeding in becoming the clearly dominant video site) that YouTube began using readily-available filtering software to screen out the copyrighted works of major content companies like Viacom that had not licensed their content to YouTube.”
Viacom’s 2013 filing offered a stinging historical account of YouTube’s founders’ profit-motivated attitudes about copyright infringement, and about tactics they used in seeking to absolve themselves of inviting users to commit infringements. Indeed, the Second Circuit Court of Appeals had noted that service providers can be found “willfully blind,” and lose safe-harbor protection, if they make a “deliberate effort to avoid guilty knowledge.” The Ninth Circuit Court of Appeals has, however, made clear (in Universal Music Group v Veoh) that the standard for judging that breach should weigh very much in favor of the service provider: that’s to say, service providers should not have to actively police their services to the extent rightsholders have persisted, unsuccessfully, in insisting they should.
No Supreme Court determination, for now
But the U.S. Supreme Court won’t, yet, be taking up the issues that Viacom raised, because in 2014 Viacom and Google reached an accord on their dispute: it seems the two megacompanies agreed to work together on such fronts as how Viacom, its harm miraculously healed, might most profitably host advertising on its programming when it offers it through its own sites or such intermediaries as Hulu, the online content portal.
After reading around YouTube’s early history and attitudes towards infringement, a reasonable person might view the company’s responses to copyright breaches as window-dressing, and belated.
YouTube/Google adopted its search-and-impede measures in response not only to claims like Viacom’s, but also long-running pressure from the Motion Picture Association of America and the Recording Industry Association of America. Google has for years worked on its search algorithm — the one most Internet users deploy to find things online — to make copyright-infringing postings harder to locate. Copyright-holder groups, particularly the Recording Industry Association of America, have long pressed for improvements, and have complained that they have been slow coming, and inadequate.
As ETCentric notes, however, part of Google’s response was to tell copyright holders, in 2014, that “piracy often arises when consumer demand goes unmet by legitimate supply” and that “the best way to combat piracy is with better and more convenient legitimate services,” suitably priced, convenient, and extensive. That, Google claimed, will “do far more to reduce piracy than enforcement can.”
That, too, may seem a self-exonerating stance, not to mention a presumptuous one, as it seems to tell copyright holders how they should reconfigure their businesses. It sounds a bit like asking householders who have just been robbed whether they’ve considered installing a coin-operated turnstile.
While YouTube has claimed that it seeks to discourage improper uploading to its platform, some large Internet service providers boast of penalizing illegal downloaders. They have instituted “copyright alert systems” that detect and pursue downloaders of copyrighted movies, television shows, and music.
The Electronic Frontier Foundation was not impressed by the systems (aka “Six Strikes”), which in their first year or so sent 1.3 million accusations of copyright infringement to 722,000 Internet subscribers, 60,000 of whom were penalized by their Internet service provider. The EFF in 2014 characterized the systems as exercises in “copyright maximalism” with scant consideration of the doctrine of fair use, and saw in them a troubling drift toward heavier-handed enforcement.
Large service providers contented themselves with having systems that nominally or effectively acquitted any responsibility they might have to police themselves. In 2011, five large providers worked with large rightsholder groups to produce a standardized, automated Copyright Alert System. Its goal has been to issue “strike” notices to apparently offending users, and to impose sanctions leading towards termination of the accounts of offending users, if providers have wished to take that step. The benefit for large service providers who sign on to the system has been that large rightsholder organizations like the Motion Picture Association of America then bother them far less with takedown requests.
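The graduated escalation the Copyright Alert System employed can be pictured as a simple tiered lookup. The tiers below are invented for illustration; the real system’s sanctions varied by provider.

```python
# Illustrative tiers for a graduated-response ("six strikes") alert system.
# These labels and the termination rule are assumptions, not the actual CAS terms.
SANCTIONS = [
    "educational_notice",        # strikes 1-2: informational alerts
    "educational_notice",
    "acknowledgement_required",  # strikes 3-4: user must acknowledge receipt
    "acknowledgement_required",
    "mitigation",                # strikes 5-6: e.g., temporary throttling
    "mitigation",
]

def sanction_for(strike_count: int) -> str:
    """Map a subscriber's cumulative strike count to an escalating sanction.
    Beyond the final tier, the provider may choose to terminate the account."""
    if strike_count <= 0:
        return "none"
    if strike_count > len(SANCTIONS):
        return "termination_eligible"
    return SANCTIONS[strike_count - 1]
```

The design point is the escalation itself: early strikes are meant to educate, later ones to deter, and termination stays at the provider’s discretion.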
You can see what sort of see-sawing of competing interests is in play. You can also see a fair degree of self-deception among people who like merely to help themselves to copyrighted material, and to trade in it. In 2014, a user of Dropbox, another “cyberlocker,” discovered that he could not store files on Dropbox completely privately. The company had deployed a system for identifying copyrighted material, and preventing sharing of it. Users were volubly aggrieved to find out about the measure, and sounded a little like petty burglars taking offense at being told they can’t hoard stolen stuff in their homes.
Staggering volume of takedown requests
It’s not surprising that Google would want to tamp down its responsibility for copyright infringement: since August 2014, it has been processing more than one million takedown requests every day. By the first three months of 2016, that had grown to more than 213 million requests, well over two million a day.
In recent years, Facebook has increasingly been hosting video content, and millions of Facebook users have been freely copying video content — or, more commonly, links to it — from all over the place to post on their own Facebook “pages.” Their goal is to draw traffic to their own pages. That practice is known as “freebooting,” and people who create content specifically for sites like YouTube don’t like it, at all. (A related phenomenon, rapidly growing, is “stream ripping,” in which music fans go to sites that help them collect mp3 music files “ripped” from YouTube or other sites; those sites effectively become free-of-charge providers that circumvent fans’ need to use paid-subscription music-listening sites like Spotify.)
Creators of YouTube original content (as distinct from content copied from, for example, purchased DVDs of movies) can garner up to millions of dollars from advertising attached to their YouTube postings (some, for the greatest load of old rubbish you could fear being made to sit through). Freebooting annoys them because they get ad revenue only from YouTube or some other original intended platform. After disgruntled creators began sending millions of takedown requests daily to Facebook, clamoring for it to do something about the freebooting, Facebook released Rights Manager earlier this year, a tool that, like YouTube’s Content ID, helps video producers to detect unapproved posting of their content.
Creators had complained that takedown requests were inadequate because Facebook responded too slowly to them. Facebook has claimed all along that it has been making good-faith efforts, which now include Rights Manager; but Facebook has to be seen to be trying to curb misappropriation, so it would take that stance. Otherwise, it might appear to be indulging in the above-mentioned “willful blindness” to copyright infringement.
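How might a tool in the spirit of Content ID or Rights Manager flag an upload? A heavily simplified sketch: real systems compute robust audio and video fingerprints that survive re-encoding and cropping; plain chunk hashes stand in for them here, and all class and parameter names are invented.

```python
import hashlib

def fingerprints(data: bytes, chunk: int = 16) -> set:
    """Split content into fixed-size chunks and hash each one.
    A stand-in for real perceptual audio/video fingerprinting."""
    return {
        hashlib.sha256(data[i:i + chunk]).hexdigest()
        for i in range(0, len(data), chunk)
    }

class ReferenceIndex:
    """Rightsholders register reference copies; uploads are checked against them."""

    def __init__(self):
        self.index = {}   # fingerprint -> reference work id

    def register(self, work_id: str, data: bytes):
        for fp in fingerprints(data):
            self.index[fp] = work_id

    def match(self, upload: bytes, threshold: float = 0.5):
        """Return the matched work id if enough chunks overlap, else None."""
        fps = fingerprints(upload)
        hits = [self.index[fp] for fp in fps if fp in self.index]
        if fps and len(hits) / len(fps) >= threshold:
            return max(set(hits), key=hits.count)
        return None
```

On a match, the real systems let the rightsholder choose among blocking, tracking, or monetizing the upload; this sketch only shows the detection step.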
Rights holders have turned to a variety of other approaches to try to discourage what they see as “piracy,” pure and simple. Most controversial is the tactic of completely blocking sites that appear to exist only to provide access to copyright-breached material. (Sites exist, for example, that provide nothing but feeds to satellite or cable sports telecasts.) That tactic does have its opponents, who consider blocking to be draconian — those opponents include users, Internet-service providers, and even some rightsholder groups.
In another response to rightsholder pressure, Google has altered algorithms in its search engine so that sites with many takedown notices appear lower in results rankings. It has also rigged the “autocomplete” functions of its search engine to exclude terms associated with piracy, and has developed advertisements that promote “copyright-friendliness.”
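The ranking demotion just described can be imagined as a penalty applied to a site’s relevance score. Google’s actual formula is unpublished; the scoring below is pure invention, meant only to show the shape of the idea.

```python
import math

def adjusted_score(relevance: float, takedown_notices: int) -> float:
    """Reduce a site's ranking score as its valid takedown-notice count grows.
    The log penalty is an assumption: it grows slowly, so a handful of
    notices barely matters while thousands push a site far down."""
    penalty = math.log1p(takedown_notices)
    return relevance / (1.0 + penalty)

def rank(sites: dict) -> list:
    """sites maps name -> (relevance, notice_count); returns best-first order."""
    return sorted(
        sites,
        key=lambda s: adjusted_score(*sites[s]),
        reverse=True,
    )
```

Under a scheme like this, a heavily-noticed site can still appear in results; it just sinks below less-complained-about alternatives.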
Copyright holders are, in addition, increasingly turning to “copyright-prevention services” that launch and police takedown-notice demands. In November and December 2015, that approach got a seeming major boost from a Virginia court ruling, in BMG v Cox Communications.
BMG vs Cox Communications
BMG Rights Management, a giant music publisher, hired Rightscorp, a company that detects online copyright infringements and then sends letters to people whom it accuses of wrongdoing. The letters typically demand that supposed wrongdoers pay for their misdeeds, or face litigation.
Cox Communications told the Virginia judge that in its opinion, “Rightscorp sells shady services,” so it was refusing to give customer IP addresses to Rightscorp, in contrast to what some other Internet service providers had dutifully, fearfully, or…uhhh, collegially done. The compliant companies had included Comcast and Time Warner Cable, through joint programs like the “Copyright Alert System” or in response to court subpoenas.
Cox insisted to the court that Rightscorp “shakes down ISP customers for money without regard to actual liability, and it tries to enlist ISPs in its scheme.” By “without regard to actual liability,” it meant: without first discovering for certain that the users actually were infringing copyright. (Accounts of what Rightscorp did to Cox are fascinating — essential reading, if you aspire to a career as an Internet-era standover agent.)
As a matter of law, allowing known, repeat, flagrant infringers to continue to use the network does not satisfy the DMCA’s requirement of an appropriate repeat infringer termination policy.
The judge agreed, and ruled that Cox did not qualify for the safe-harbor protection accorded by the Digital Millennium Copyright Act. The case went to trial, and a jury in December 2015 ruled that Cox would have to pay BMG $25 million for copyright violations.
It is hardly surprising that rightsholders and rightsholder organizations would increase legal pressure on people and companies they view as pirates. They may well consider themselves to be forced to act like sheriffs of the Wild West, undergunned as they combat renegade activities. Indeed, some outlaw companies do seem to set up shop using suspect, copyright-testing strategies that they hope will survive and prosper, bolstered by the popularity of the services built with them. In May 2015, Twitter’s app, Periscope, which allows users to stream video recordings, permitted some untold number of Twitter users to watch for free and in real time a prize fight that HBO was offering as a $100 pay-per-view event. HBO had earlier asked Twitter not to allow Periscope to be used to stream episodes of Game of Thrones.
Companies like Periscope — another is Meerkat — have a delicate balance to maintain: they may initially thrive on such copyright-testing or -infringing postings, but to survive in the longer term, they must reach agreements with broadcasters that guarantee legal posting of materials in return for supplies of materials worth watching.
Unfair actions, and fair use
Advocates of minimizing unnecessary obstacles to the sharing of ideas online abhor lawsuits like BMG’s and services like Rightscorp; they consider them part of a broad campaign to sway the balance too far in favor of copyright holders.
Creators start losing money on advertising very quickly, even if they used a copyrighted clip legally.
All this may make the Digital Millennium Copyright Act and its safe-harbor provision look rather tenuous. Indeed, Congress has for years been holding hearings designed to see whether the Act needs changing. You can imagine how complicated that would be, and how fraught with factional interests. Not least among the complications is that online piracy is an act that crosses international borders. (More on Congress’s deliberations, in a later article.)
Copyright Takedown Notices, in Everyday Practice
As noted, an important feature of enforcement has been the practice of entrusting, or empowering, Internet companies to monitor their subscribers’ and users’ browsing, as a sort of private copyright surveillance force — one that must answer to any and every copyright-holding complainant’s objection.
Obviously, that is inherently problematic. Both copyright owners, on the one hand, and users and “creators,” on the other, end up sorely aggrieved.
Many copyright owners complain that takedown notices are fairly useless in controlling many posters, who simply return under new names.
Users and creators complain that Internet service providers often: i) take down material before — or, rather than — properly adjudicating the merits of copyright holders’ complaints; and ii) make their deliberations based on spurious claims by copyright owners, or by complainants who merely claim they are wronged copyright owners.
In 2014, for instance, the Writers Guild of America West warned that the “notice and takedown” system was potentially harmful if more was not done to prevent “mistaken or abusive notices that target fair use of copyright works.”
How so? Well, first it’s telling to learn from their report that, despite the centrality of the complex, contentious notice-and-takedown emphasis of the Digital Millennium Copyright Act, little research has been done on how effective the process has been at actually curbing copyright infringement. (Among key players, they note, only Google has substantially publicly reported — since 2002, in its case — such information as the number and origin of requests to review and take down postings.)
The researchers set out to gauge the process’s performance, not only in protecting the interests of creators and providers, but also in “providing due process for posters who have been served with takedown notices.”
What they found is quite startling.
Takedown requests: often faulty, or bogus
The researchers used surveys and interviews to find out how well the process works for the rightsholders who issue notices and for the online service providers who must act on them.
They found that online infringement of copyright is certainly extensive. But invalid claims of copyright infringement are also rife.
Many complaint notices had not properly and specifically identified any infringed material, as the notice-and-takedown regulations in the DMCA require. Others ignored fair-use grounds for using copyrighted material.
It’s not all bad news.
The authors found that, in the view of the parties in online posting of copyrighted materials, the safe-harbor provision of the DMCA, its Section 512, remains fundamental to the online ecosystem.
Perhaps surprisingly, rightsholders concede that the safe-harbor provision is not such a bad way to go. Presumably that is because it at least provides a gauge of what they can demand be taken down, even if, the study found, it has frustrating limitations when it comes to combating large-scale infringement, particularly postings from countries outside the reach of U.S. law.
(One can’t help but wonder, too, how much the posting of large rightsholders’ copyrighted material, legal or not, serves to publicize their holdings, past and future; after all, movie attendance went up after movie moguls lost their battle in the early 1980s to hobble the spread of home video recording.)
Service providers like the system; for them, the benefits are obvious. While enjoying the protection it provides, they can also publicize their noble desire to preserve users’ rights of free expression that might entail fair use of copyrighted material.
Sure, they may lament the costs of championing those rights, including that they are hit with vast waves of notices that overwhelm their methods of passing on word to users that they must stop breaching copyrights. But even then, they may sing their own praises while publicizing their efforts to curb abuses by instituting efficient automated takedown-demand systems, or by setting up “content protection teams” on their staffs, or by using third-party rights enforcement companies like Rightscorp, as in the BMG v Cox Communications dispute described above.
Service providers would of course prefer not to have to bear such costs, but the bottom line is that they are all merely costs of doing roaring business.
The law continues to provide rightsholders with a copyright enforcement alternative that is cheaper and easier to use than lawsuits.
The authors note, however, that the system has some glaring shortcomings. Many relate to those automated systems with which rightsholders issue notices objecting to posts they claim breach copyrights.
Those vast volumes of notices go to service providers. Most providers, the authors found, receive few notices — so few that they opt to process them by hand — but a few large service providers receive huge numbers of notices generated by automated systems and, under the burden of millions of notices, also resort to automated systems to issue takedown notices to offending, or possibly offending, posters.
So, for example, a few complainants send the vast majority of the millions of takedown requests that Google Web Search receives each week. Those few complainants are almost all large entertainment companies with large investments to protect — large corporations in music, “adult entertainment,” film and television, software, games, and more; by and large, they target large-scale infringers, such as file-sharing sites.
Some large providers, because they receive so many notices, have come to agreements with some rightsholders and rightsholder groups that set out additional protections and obligations. For example, they may institute “staydown” systems that track improper postings and ensure they aren’t reposted under a different user name. They may even provide ways for trusted rightsholders essentially to issue takedowns themselves.
Identifying spurious and malicious takedown requests has created a clear need for better mechanisms to check the accuracy of those automated systems.
Also lacking in the current system, the authors found, were adequate procedures for users to object to demands that they take down material, or to its removal by providers. A “counter notice” procedure is available, but few users avail themselves of it because they find it impractical, the authors report. “Due process safeguards for targets have largely failed,” they say.
Posters of material are not able to prevent improper removal of their postings; no unbiased adjudicator oversees takedowns; and providers that remove postings are not obliged to provide any public accounting of those actions. That “lack of transparency,” the authors write, means that service providers, Internet users, rightsholders, and policymakers have little sense of each other’s practices and reasoning.
How the law should change
So, what to do?
The authors have no doubt that something needs to be done, but rather than press for wholesale changes, they advocate tweaking or further testing the current regime.
A necessary step to learning more about the way the system is operating, they say, is to introduce a general requirement that notice and counter-notice senders submit their documentation to a centralized repository, for analysis — because, write the authors, “knowledge-sharing and best practices can fill gaps and improve operation.”
Among other steps, they recommend that regulators make it harder for complainants to issue bogus notices, and easier for targets to challenge them. That, they suggest, could entail requiring senders “to declare under penalty of perjury that their substantive claims in a takedown notice are accurate,” removing the mandatory ten-day waiting period before challenged material goes back up, lowering the standard for targets to recover damages from senders who make bogus claims, and raising the penalty for doing so.
The authors provide various other suggestions for fine-tuning the current system, which they say is biased in favor of takedown.
They say service providers that use automated notice-processing systems should develop filters and procedures to flag questionable notices for human review and to reduce overbroad takedowns.
The authors also recommend adding some teeth to the watchdog. Regulators, they say, should make it easier for targets of bogus copyright-infringement takedown requests to recover damages from senders, and require senders of notices to declare, under penalty of perjury, that their notices are accurate and justified.
They also caution against changing section 512 to fit the practices of large service providers with automated searching and takedown in ways that harm smaller providers. Specifically, they note that harm would occur if changes forced smaller service providers to adopt the larger companies’ expensive automation. That could force smaller providers out of the market, and discourage or prevent other small providers from entering it.
Certainly, the authors conclude, “relying on machines to make decisions about sometimes-nuanced copyright law raises questions about the effect on expression.”
The report by Urban, Karaganis, and Schofield is part of the Takedown Project, a global network of researchers studying the role of notice and takedown procedures in addressing conflicts involving copyright, privacy, and freedom of expression.