The Internet gives every appearance of being wide-open, public, available to all. You can email anyone in the world who is connected to the Internet. You can visit any public website. This amazing network — a “universal space” in the words of Tim Berners-Lee — seems as accessible as a public park.
But can you put legal locks on Internet doors? Can public sites (not just password-protected sites) create legally enforceable selective admissions policies? Can email systems and websites use law to exclude unwanted messages and unwanted visitors?
Yes, such locks are possible, and a recent decision from the U.S. District Court for the Northern District of California has even recognized a potential new locking mechanism: the Computer Fraud and Abuse Act. Still, the decision faces strong objections from those who believe these legal locks “implode the basic functioning of the Internet itself.”
Several twists of Internet law allow legal locks on otherwise public Internet places. The cases are developing, and the decisions haven’t yet settled on any all-embracing principle for the circumstances in which a public place on the Internet can or cannot exclude particular unwelcome visitors, or particular kinds of unwelcome visitors.
Blocking online ‘scrapers’
The concept was first tested when automated computer programs began scraping public Internet sites for data, which would then be reposted or otherwise used by the party that conducted the scraping. In one case, Bidder’s Edge, a website operator, sought to create a portal that would compile offerings on multiple auction sites, including eBay. eBay sued, claiming that the compiler’s automated program (its “robot,” “scraper” or “spider”) overused eBay’s resources and slowed down its operation. The court sided with eBay in 2000, ruling that by continuing to scrape eBay’s site after it had been warned not to, the compiler had committed the tort of “cyber-trespass,” an electronic version of the centuries-old tort of physical trespass. Several other decisions around this time period took similar approaches.
The cyber-trespass tort has practical and free-speech limits. When a dissident former Intel employee directed thousands of emails to Intel employees, Intel sued, characterizing his conduct as cyber-trespass. But the California Supreme Court disagreed, finding an insufficient physical effect on Intel’s computer system from the emails, and holding that disruption attributable to the content of the emails could not support a cyber-trespass claim.
Robots keep out
One of the key lessons from the initial round of cyber-trespass cases is that a party that wishes to shield its otherwise public Internet facilities from certain parties, communications, or uses must give clear notice. Just as visitors are usually presumed to have the right to approach your front door and ring your doorbell unless you have posted “no trespassing” or “keep out” signs, public Internet sites are presumed open to all until that electronic “keep out” sign is posted.
But what is an adequate “keep out” sign? There are a variety of ways to post “keep out” notices, both general and selective, on the Internet. One of the simplest and most basic is a robots exclusion file, commonly known by its standard file name, “robots.txt.” Automated scraping programs (robots) are supposed to look for each website’s robots.txt file, which gives instructions as to whether all or certain robots are excluded.
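To make the mechanism concrete, here is a minimal sketch of a hypothetical robots.txt file and of how a well-behaved robot would check it before crawling, using Python’s standard `urllib.robotparser` module. The robot names (“GoodBot,” “BadBot”), site, and paths are invented for illustration:

```python
from urllib import robotparser

# A hypothetical robots.txt: every robot is barred from /private/,
# and one robot ("BadBot") is barred from the entire site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
"""

# A well-behaved scraper parses the file and consults it
# before fetching any page.
rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GoodBot", "https://example.com/listings"))      # True
print(rp.can_fetch("GoodBot", "https://example.com/private/data"))  # False
print(rp.can_fetch("BadBot", "https://example.com/listings"))       # False
```

Note that nothing in the file technically prevents access; the robots.txt convention is exactly the electronic “keep out” sign the cases describe, and it depends on the scraper choosing (or being legally required) to honor it.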
In an unusual case in 2006, Field v. Google, a website operator sued Google, claiming that Google, in regularly surveying the Internet to update its search database, made unauthorized copies of the operator’s site. The court, however, noted that Google honored robots.txt files and found that robots exclusion files were a well-known and well-accepted technique. The court ruled that the operator had no claim because he could so easily have signaled to Google that his site was off limits. At least in this court’s view, robots exclusion files work as “keep out” signs, and if you wish to keep robots off your site, you may even have an obligation to use them.
Cyber-trespass claims are viable, and robots.txt files serve as effective “keep out” signs. One might, therefore, expect many cyber-trespass cases. But there’s a catch. Cyber-trespass claims generally require a significant physical effect on a computer system. With the enormous improvements in the speed and capacity of computer systems, the likelihood of such physical effects has diminished. The faster and more robust your computer system, the less likely it is that you will suffer the physical effects from a robot that are essential to a cyber-trespass claim.
Craigslist v. 3Taps
That may explain why, in a recent case reminiscent of the eBay situation of a decade ago, the federal computer anti-hacking law, the Computer Fraud and Abuse Act, was asserted against a scraper. In this case, Craigslist objected to a scraping service, 3Taps, which scraped, aggregated, and republished its ads. Craigslist notified 3Taps that it had no permission to scrape the site. And when 3Taps continued to scrape, Craigslist sued under various theories, including the CFAA, which prohibits “unauthorized access” to covered computer systems. (The act’s coverage of computer systems is quite broad, and includes Craigslist’s servers.)
The court found that Craigslist had clearly barred 3Taps from its website, no differently than a store owner who decides to prohibit a bothersome guest from entering his store. “The law of trespass on private property provides a useful, if imperfect, analogy,” U.S. District Judge Charles Breyer wrote. “Store owners open their doors to the public, but occasionally find it necessary to ban disruptive individuals from the premises.” Craigslist’s unambiguous “keep out” notice, therefore, perfectly set up the case against 3Taps. Defiance of the notice by 3Taps clearly implicated the CFAA’s prohibition of unauthorized access to computer systems, the court held.
Finding the CFAA’s application clear, the court gave short shrift to 3Taps’ plea that its ruling went against Internet culture and that a “permission-based” regime “could implode the basic functioning of the Internet itself.” Faced with “unambiguous statutory language,” the court found that Craigslist had full rights to selectively revoke authorization to access its website.
The decision was issued in the context of a motion to dismiss, so it holds only that a CFAA claim is plausible in these circumstances; whether 3Taps is actually liable will be determined at trial, based on all the facts and circumstances. And, of course, it may take an appeal in this and other cases to fully resolve this new CFAA theory for “keep out” signs on the Internet.
Mark Sableman is a partner in Thompson Coburn’s Intellectual Property group. He is the editorial director of Internet Law Twists & Turns. You can find Mark on Google+ and Twitter, and reach him at (314) 552-6103 or firstname.lastname@example.org.