Q&A: What’s new in the effort to prevent hackers from hijacking chips?

Designers have always had to find and fix the logic flaws, or bugs, that make chips vulnerable to attack. Now computer algorithms are helping them identify these threats.

As hackers develop new ways to attack chips, researchers aim to anticipate and forestall their malicious intrusions. | Illustration by Kevin Craft

In their previous work, Stanford engineering professors Clark Barrett and Subhasish Mitra developed computer algorithms to automate the process of finding bugs in chips and fixing these flaws before the chips are manufactured.

Now, the researchers are adapting their algorithms to thwart a new type of peril — the possibility that hackers could misuse a chip’s features to carry out some nefarious end. In a recent discussion with Stanford Engineering, Barrett and Mitra explain the risks, and how algorithms can help prevent them.

What’s new when it comes to finding bugs in chips?

Designers have always tried to find logic flaws, or bugs as they are called, before a chip goes into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This process is called debugging, and it has never been easy. But we are now starting to discover a new type of chip vulnerability that is different from a bug. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is no flaw in the logic, but hackers may be able to pervert the logic to steal sensitive data or take over the chip.

Have we already suffered from these unintended-consequence attacks?

In a way, yes. Last year some white hat security experts — good guys who try to anticipate hack attacks — discovered two attacks that could be used to guess secret data contained in sophisticated microprocessors. The white hats called these attacks Spectre and Meltdown. The attacks misused two features designed to speed up chip performance, known as “out-of-order execution” and “speculative execution.” These features store certain data in a chip in a way that makes it immediately available should the program require it. Say the program requires access to credit card info or private health data. The white hats discovered that Spectre and Meltdown could eavesdrop on any network to which the chip is connected and read that stored data right off the chip.

How?

The analogy would be guessing a word in a crossword puzzle without knowing the answer. If a clue demands a plural answer, the last letter is probably an ‘s.’ If the word is in the past tense, the last two letters are probably ‘ed’ and so on. The white hats discovered that hackers could use the out-of-order and speculative execution features as clues to make repeated guesses about what data was being stored for instant use. We think Spectre and Meltdown were discovered before hackers could actually perform such attacks. But it was a big wake-up call. You wouldn’t want a hacker using a technique like that to take control of your self-driving car.
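
For readers who want to see what such a misuse can look like, here is a minimal, hypothetical C sketch of the “bounds check bypass” pattern described in the public Spectre write-ups. It is our illustration rather than code from the researchers, and the array and function names are invented for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical illustration of a Spectre-style "bounds check bypass" gadget,
 * loosely following the publicly described attack pattern. If the branch
 * predictor guesses that x is in bounds, the processor may speculatively
 * read array1[x] even when x is far out of bounds, and the dependent access
 * to array2 leaves a footprint in the cache that an attacker can later
 * measure to infer the secret byte. */

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];   /* probe array: one cache line per possible byte value */

void victim_function(size_t x) {
    if (x < array1_size) {                            /* the bounds check the attacker trains */
        uint8_t value = array1[x];                    /* speculative out-of-bounds read */
        volatile uint8_t tmp = array2[value * 4096];  /* encodes the value into cache state */
        (void)tmp;
    }
}
```

The point is not the specific code but the pattern: nothing here is a logic bug, yet the speed-up machinery of speculation plus caching can let an attacker recover data it was never supposed to see.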

How do your algorithms deal with traditional bugs and these new unintended weaknesses?

Let’s start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they’d found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we’ll be able to find and fix more logic flaws in less time.
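
As a rough software analogy (our sketch, not the researchers’ actual tooling, which works on the hardware design itself), the core QED idea can be pictured as duplicating a computation and checking that the two copies always agree; Symbolic QED then applies formal techniques such as bounded model checking to search exhaustively for any execution in which they could disagree.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical software analogy of a QED-style check. In the real flow,
 * duplicate-and-compare checks like this are inserted into short instruction
 * sequences running on the processor design, and a formal engine searches
 * for any reachable state where the two copies diverge, which would expose
 * a logic bug in the hardware. */

static uint32_t operation_under_test(uint32_t a, uint32_t b) {
    return (a + b) ^ (a << 3);   /* stand-in for whatever the processor computes */
}

void qed_style_check(uint32_t a, uint32_t b) {
    uint32_t original  = operation_under_test(a, b);  /* original instruction stream */
    uint32_t duplicate = operation_under_test(a, b);  /* duplicated "shadow" stream */
    assert(original == duplicate);                    /* QED check: the copies must match */
}
```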

Would Symbolic QED have found vulnerabilities like Spectre and Meltdown?

Not in its current incarnation. But we recently collaborated with a research group at the Technische Universität Kaiserslautern in Germany to create an algorithm called Unique Program Execution Checking (UPEC). Essentially, we modified Symbolic QED to anticipate the ways that hackers might exploit a chip’s legitimate features for their own ends. The German researchers then applied UPEC to a class of processors that might run a home security system or other appliance hooked up to the Internet of Things. UPEC detected new types of attacks that didn’t result from logic flaws, but from the potential misuse of some seemingly innocuous feature.
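
The property UPEC checks can be sketched, again only as a hypothetical software analogy (the actual analysis runs formally on the hardware model), as: two runs that differ only in their secret data must never differ in anything an attacker can observe.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the UPEC-style property: changing only the secret
 * must not change any attacker-observable state. A formal tool proves this
 * for all inputs on the hardware model; here we merely show the shape of
 * the check for one pair of runs. */

typedef struct {
    uint32_t observable;   /* state an attacker could measure, e.g. cache or timing */
    uint32_t secret;       /* protected data that must stay invisible */
} MachineState;

static void step(MachineState *s, uint32_t public_input) {
    s->observable = public_input * 2;   /* safe: depends only on public data */
}

void upec_style_check(uint32_t public_input, uint32_t secret_a, uint32_t secret_b) {
    MachineState run1 = { .observable = 0, .secret = secret_a };
    MachineState run2 = { .observable = 0, .secret = secret_b };
    step(&run1, public_input);
    step(&run2, public_input);
    /* Unique program execution: the secret never influences what is observable. */
    assert(run1.observable == run2.observable);
}
```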

This is just the beginning. The processors we tested were relatively simple. Yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to detect and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we’ll need automated systems to find and fix all potential vulnerabilities, both traditional bugs and unintended consequences, before chips go into manufacturing. Otherwise, we’ll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.