6 Ingenious Ways Hackers Break Into the World’s Most Secure Computers
And one simple technique that’s more powerful than them all
In a now-famous scene from the 1999 techno-thriller The Matrix, Laurence Fishburne’s character Morpheus sits in a chair, drenched in sweat, desperately resisting attempts to break into his web-connected brain.
His friends look on helplessly. “It’s like hacking a computer,” one intones in a way that somehow didn’t seem melodramatic back in the 90s. “All it takes is time.”
The scene has become symbolic of the world of cybersecurity. Modern encryption, virus detection systems, and CDNs can fight off the majority of attacks. But given enough time and resources, every system is potentially vulnerable.
One cybersecurity measure, though, was long thought to be nearly unhackable: the airgap. That was until one dedicated and relentless Israeli researcher came along and made it his mission to defeat the world’s most secure computers. Sometimes doing so required ingenuity, creativity, and a knowledge of the inductive properties of different motherboard components. And sometimes it just required dropping a USB drive in a parking lot.
What is an airgap, anyway? According to an explainer in Wired magazine, the term “airgapped” refers to computer systems that have been physically isolated from the Internet, or from any other network.
Imagine you’ve just installed a computer in a data center. Now, yank out all its networking hardware, and anything that allows it to communicate wirelessly. Stick it deep inside a secure part of the facility (preferably underground). Only allow people to reach it by passing through several locked doors and checkpoints. Point a bunch of surveillance cameras at it, just for good measure.
That computer is now “airgapped.” The “air” is literal--ideally there will be no physical connection (nothing but air) between your computer and the outside world. Airgapped systems are often used in situations where security is a matter of life and death: in military bases, nuclear power plants, sensitive government data repositories, aviation control centers, and the like.
At first glance, airgaps seem like they should be the definitive solution to cybersecurity. If a computer isn’t connected to the Internet, and physical access to it is controlled, how could hackers possibly breach it?
Mordechai Guri, a researcher at Ben-Gurion University of the Negev in Israel, has made a career out of answering that question. He and his team have developed and demonstrated numerous ingenious ways to break into airgapped systems--some wildly complex, and some embarrassingly simple.
The Negev is a massive desert of rolling sand dunes — bathed in sun and starkly beautiful — which covers more than half of Israel’s landmass. Traveling around it, it’s easy to believe that you’ve been transported to another planet. It’s the perfect metaphor for Guri’s work--his team is diligently hacking airgaps in a place that’s as physically separated from the outside world as one could hope to be.
Much of the team’s research was shared in a massive presentation at the Black Hat security conference, and in a detailed profile in Wired. In their published work, Guri’s team demonstrates a variety of ingenious ways to get data out of airgapped systems, often exploiting physical vulnerabilities in the computers themselves.
One of Guri’s first hacks, which the team dubs Beatcoin, uses an airgapped computer’s speakers to transmit data via sound waves. Human hearing tops out at around 20,000 hertz. But as anyone who’s used a dog whistle knows, it’s readily possible to generate sound at frequencies above that range.
Guri’s team realized that by hijacking an infected computer’s speakers, they could encode sensitive data into sound waves above the range of human hearing. Even with the speakers playing these encoded sounds at full blast, no human could hear their output.
The sound could then be picked up and decoded by a nearby cellphone — or with a listening device installed at a distance, if the target computer wasn’t well enough isolated. The team showed how they could use the tech to steal Bitcoin from an airgapped computer--thus the Beatcoin moniker.
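The core trick--shifting bits into tones that speakers can play but humans can’t hear--can be sketched in a few lines of Python. This is a toy illustration, not Guri’s actual code: the two carrier frequencies, the bit rate, and the simple frequency-shift keying scheme here are my own assumptions, and a real attack would have to contend with speaker response, microphone sampling limits, and ambient noise.

```python
import math

SAMPLE_RATE = 44_100             # standard audio sample rate (Hz)
FREQ_0, FREQ_1 = 18_500, 19_500  # assumed near-ultrasonic carriers for bits 0 and 1
BIT_SECONDS = 0.05               # assumed 50 ms per bit, i.e. ~20 bits/second

def encode(bits: str) -> list:
    """Turn a string of '0'/'1' characters into raw audio samples,
    one pure tone per bit."""
    n = int(SAMPLE_RATE * BIT_SECONDS)
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit == "1" else FREQ_0
        samples.extend(math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
                       for t in range(n))
    return samples

def goertzel_power(chunk: list, freq: float) -> float:
    """Energy of the chunk at a single frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in chunk:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def decode(samples: list) -> str:
    """Recover the bit string by comparing energy at the two carriers,
    as a nearby microphone (e.g. in a cellphone) would."""
    n = int(SAMPLE_RATE * BIT_SECONDS)
    bits = ""
    for i in range(0, len(samples), n):
        chunk = samples[i:i + n]
        bits += "1" if goertzel_power(chunk, FREQ_1) > goertzel_power(chunk, FREQ_0) else "0"
    return bits

message = "1011001"
assert decode(encode(message)) == message
```

Even this crude scheme round-trips data cleanly in simulation; the hard part of the real exploit is the physical channel, not the encoding.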
If that sounds clever, just wait--the team’s techniques only get crazier. Realizing that speakers aren’t the only way to generate sound waves, Guri’s team turned to other hardware.
Their Fansmitter exploit changes the speed of a secure computer’s cooling fan, essentially using it to tap out rhythms that encode data. Again, this can be picked up and decoded by a nearby infected cellphone. Their Diskfiltration hack does the same thing, but goes a step further. It uses the read/write arm of a hard drive to tap out the sounds, eliminating the need for speakers altogether.
Sound isn’t the only wave Guri’s team has exploited, either. Their Airhopper hack transmits special images to a computer’s monitor. The images are designed to induce a specific electromagnetic field in the wires connecting the monitor to the computer, essentially turning them into makeshift radio antennas that can transmit data over a distance. A similar hack uses USB cables to the same effect.
Some of the team’s hacks go even deeper into the infected computer’s hardware. One, called Magneto, induces tiny magnetic fields in a computer or phone’s internal circuit boards. These encode data, and can pass through a Faraday cage, which blocks most other signals and is sometimes used to add another layer of security to airgapped systems.
Another series of hacks uses lights connected to the infected computer to transmit data. This can be as simple as blinking the computer’s LED status light in a specific sequence, like a high-tech, optical version of Morse code. Or it can be as complex as hijacking the infrared night-vision LEDs on surveillance cameras connected to the infected computer, using them to blink out sequences that can be read by a passing drone.
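These optical hacks--and the fan, disk, and heat hacks above--all reduce to the same idea: turn each bit of a secret into a timed on/off pattern that some sensor can watch. Here is a minimal, purely illustrative sketch of that encoding, assuming a 100 ms slot per bit (my assumption, not the published timing). Actually driving a status LED would require platform-specific calls (on Linux, writes under `/sys/class/leds/`), which are omitted here.

```python
BIT_MS = 100  # assumed slot length per bit; real exploits tune this to the receiver

def to_bits(data: bytes) -> str:
    """Flatten bytes into a string of '0'/'1' characters, MSB first."""
    return "".join(f"{byte:08b}" for byte in data)

def blink_schedule(data: bytes) -> list:
    """Encode data as a list of (led_on, milliseconds) pairs: one time slot
    per bit, LED on for '1' and off for '0'."""
    return [(bit == "1", BIT_MS) for bit in to_bits(data)]

def recover(schedule: list) -> bytes:
    """Invert a schedule back into bytes, as a receiver (say, a camera
    sampling the LED once per slot) would."""
    bits = "".join("1" if on else "0" for on, _ in schedule)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert recover(blink_schedule(b"secret")) == b"secret"
```

Swap the LED for a fan’s RPM, a hard drive arm’s clicks, or a CPU’s temperature, and you have the skeleton of several of the team’s exploits--the channel changes, the encoding doesn’t.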
One technique even uses heat to transmit data. By controlling the heat emitted by different components of an infected computer, Guri’s team shows that they can transmit a few bits of data at a time. It’s not enough to steal a big document, but it provides enough bandwidth to (slowly) exfiltrate an encryption key.
Most of these hacks assume that the attacker has some form of physical proximity to the infected computer. That could mean an infected cellphone in a repair contractor’s pocket, or a sophisticated measuring device set up near a secure facility to monitor for radio or magnetic transmissions.
My favorite hack, though, doesn’t even require physical access to the infected facility. Called Powerhammer, it varies the infected system’s electrical consumption in order to transmit data over a facility’s powerlines. These signals could potentially be read by an attacker miles away, with nothing more than an induction clamp on the facility’s power line.
At first, it’s easy to dismiss Guri’s techniques as elaborate party tricks--technically impressive, but unlikely to work in the real world.
Except they’re not. The NSA has reportedly used an RF technique similar to Airhopper for years, transmitting data from airgapped computers to listening stations up to 8 miles away. And the Stuxnet worm famously infected airgapped computers at a nuclear enrichment facility in Iran, reportedly causing centrifuges and other industrial equipment to fail.
Guri’s team provides a variety of recommendations to help facility operators protect their systems from these threats. They include disabling all unused hardware (like audio hardware) on airgapped computers, unplugging monitors and connected devices, ensuring aural and visual separation of critical systems from the outside world, and more.
But Guri and his team also acknowledge that the biggest threat to airgapped systems isn’t a technical hack. The biggest threat these systems face is their users.
In many cases, the easiest way to infect an airgapped system is to infect a USB drive, and then get someone to plug it into the airgapped computer. Indeed, this is how Stuxnet found its target. Sometimes this requires targeting specific parties--like a contractor with physical access to the airgapped system--using traditional spy-craft.
But sometimes, it requires nothing more than dropping an infected USB drive in a parking lot. In an alarming study from the University of Illinois, researchers left hundreds of USB drives in public spaces around the university’s campus. About 45% of the time, someone came along, found the drive, and plugged it into their computer.
When surveyed, most people said they did so because they were worried someone had lost the drive, and they wanted to return it to its rightful owner. But 18% cited “curiosity” as their only reason for plugging the suspicious drives in.
And lest you think that this ruse could never work on security-minded government or military contractors, think again. According to a 2011 study by the Department of Homeland Security, when USB drives were left in the parking lots of sensitive facilities, the percentage that got plugged in wasn’t 45%. It was 60%. If a drive had the logo of the target facility printed on it, that number rose to 90%.
In 2008, a USB drive carrying a malicious worm was plugged into a computer at a military base in the Middle East (rumors, never confirmed, say it was dropped in a parking lot). The resulting security breach was described as the worst in U.S. military history, and took the Pentagon 14 months to clean up. The damage was so bad that it led to the creation of United States Cyber Command, and prompted a military-wide ban on USB drives that lasted more than two years.
The cybersecurity community has long known that so-called “human factors” are the weakest link in any system. By some estimates, up to 95% of breaches are the result of human error, not a technical failure. All the complex security software in the world is no match for the office worker who keeps their passwords on a Post-It stuck to their monitor.
That’s part of why all of Guri’s work is essentially a coda to a much more important point--and why his team focuses mainly on exfiltrating data, taking it as a given that malicious code can be planted on a machine in the first place: people are the best vector for spreading malware and other cyber threats.
The Matrix had something right: given enough time, any computer can be hacked. Guri’s research is proof that no system, no matter how secure, is safe from the ingenious exploits of hackers.
But perhaps the most surprising secret of the cybersecurity world is that hacking is often unnecessary. To break into the world’s most secure airgapped systems, you could try encoding data into fan vibrations, sending telltale signals through powerlines, or converting monitor cables into makeshift antennas.
Or you could write IMPORTANT on a USB drive in Sharpie, drop it in a parking lot, and rely on “human factors” to do the rest.