Ted Hoff: the birth of the microprocessor and beyond

Marcian “Ted” Hoff (PhD '62 EE) is best known as the architect of the first microprocessor. Intel’s 4004 was released in November 1971, 35 years ago this month. The history that his ingenuity helped spawn is now the subject of a new DVD, the Microprocessor Chronicles. Hoff came to Stanford for graduate work after earning his undergraduate degree at Rensselaer Polytechnic Institute in Upstate New York, the region where he grew up. His career has morphed from engineering to litigation consulting, and his journey is full of interesting stories.

What was your path to Intel?

I used to play with vacuum tube circuits when I was in high school. When I graduated in 1954, I got a summer job at the company where my father worked. I always considered myself lucky to have had that job because I got to work with both magnetic cores and transistors. The transistor was only seven years old, and core memory was the major technology for computer memory. After I got my bachelor’s degree, I came to Stanford to do graduate work in electrical engineering. I got a master’s degree in 1959 and then did research on adaptive systems under the guidance of Professor Bernard Widrow. Together we developed the LMS algorithm for adaptive systems, which is still used in modems and other adaptive filters to this day. I got my PhD in 1962 and stayed on doing government-sponsored research on adaptive systems.

During that time, Professor Bob Pritchard, who I believe came to Stanford from Motorola, started courses in integrated circuit design. Someone suggested that I be a guinea pig for his lab course. It seemed like everything went wrong in that course, but it showed how difficult it was to make integrated circuits. It was a learning experience.

In the meantime, I talked about technology with Rex Rice, who often did on-campus recruiting for Fairchild Semiconductor. One thing we discussed was the possibility of semiconductor memory: having worked with magnetic cores and knowing how difficult they were to work with, it seemed to me that semiconductor memory might be very attractive. I understand that Bob Noyce, when starting Intel, approached Professor Jim Angell, who had consulted for Fairchild, about whether there was anyone on campus he should talk to, and Professor Angell gave him my name along with several others. I ended up becoming Intel’s employee number 12 in September of 1968.
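
The Widrow-Hoff LMS update that Hoff mentions is remarkably compact: at each step the filter nudges its weights in the direction that reduces the instantaneous squared error. Below is a minimal Python sketch of the idea; the tap count, step size, and test signal are illustrative choices, not details from the interview.

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.05):
    """Widrow-Hoff least-mean-squares adaptive filter."""
    w = np.zeros(num_taps)                     # filter weights, adapted on the fly
    err = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]    # most recent input samples, newest first
        y = w @ u                              # current filter output
        err[n] = d[n] - y                      # instantaneous estimation error
        w += 2 * mu * err[n] * u               # steepest-descent weight update
    return w, err

# Example: identify an unknown 4-tap channel from noisy observations
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])            # the "unknown" system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = lms(x, d)
print(np.round(w, 3))                           # converges toward h
```

With a small enough step size, the weights converge toward the unknown system’s coefficients, which is what makes the algorithm useful in adaptive equalizers and echo cancellers.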

Tell us about how the 4004 came to be.

Intel was founded with the idea of doing semiconductor memory. Up until that time, most computer memory used small magnetic cores strung onto wire arrays. In most cases they were wired by hand, and some of these cores weren’t much bigger than the tip of a mechanical pencil. When I interviewed with Bob Noyce, he asked me what I thought would be the next big thing for semiconductors. I said memory — a lucky guess, because that was before I knew why Intel was being founded.

While we were developing memory products, there was a feeling within Intel’s management that it might take a while before the computer industry would accept semiconductor memory as an alternative to cores. So it was felt that we should undertake some custom work, that is, build chips to the specifications of a particular customer. We were contacted by a Japanese calculator company whose calculators came out under the name Busicom. They said that they would like to have us build a family of chips for a whole series of different calculator models, models that would vary in the type of display, whether they had a printer or not, the amount of memory they had and so on. A contract to make their chips was signed in April of 1969. Three engineers from Japan came to Intel in June of that year, and I was assigned to act as a liaison for them. I had no design responsibility or anything like that. Rather, if they had a problem they were to come to me, and I was to try to find the appropriate person to deal with it.

However, I was curious about the calculator design. I knew little about it, although I was fairly familiar with computer architectures and I had been at the sessions where the project had been discussed. The more I studied the design, the more concerned I became, based on what I had learned about Intel’s design and packaging capability and costs. It looked like it might be tough to meet the cost targets that had been set in April. The Japanese design was programmable, using read-only memory, but it seemed to me that the level of sophistication of their instruction set was too high, because there was a lot of random logic and many interconnections between different chips. There were about a dozen different chip designs needed. There was a special chip to interface a keyboard, another chip for a multiplexed display and yet another chip for one of those little drum printers. It seemed to me that improvements could be made by simplifying the instruction set and then moving more of the capability into the read-only memory, perhaps by improving the subroutine capability of the instruction set.

I mentioned some of my concerns and ideas to Bob Noyce. He was really encouraging, saying that if I had any ideas I should pursue them, because it was always nice to have a backup design. I did so throughout the months of July and August. The original calculator design called for shift register memory, but I’d been doing some work with dynamic random-access memory (DRAM). While a shift register used six transistors for each bit of storage, our DRAM used only three, so it seemed to me that there was an advantage to DRAM. In addition, because the DRAM allowed random access to the memory, its control logic was simpler than that needed for the shift register’s serial access. It seemed to me we could simplify the control logic, reduce the number of transistors, and cut the overall cost by going to DRAM.
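
To put rough numbers on that transistor comparison, here is a back-of-the-envelope Python sketch contrasting six-transistor shift-register storage with the three-transistor DRAM cell Hoff describes. The register sizing is a hypothetical round number chosen only to illustrate the savings, not a figure from the actual Busicom design.

```python
BITS_PER_DIGIT = 4  # one binary-coded-decimal digit needs four bits

def storage_transistors(digits, registers, transistors_per_bit):
    """Transistors needed to hold `registers` numbers of `digits` BCD digits each."""
    return digits * BITS_PER_DIGIT * registers * transistors_per_bit

# Hypothetical sizing: four working registers of 16 decimal digits each
for name, per_bit in (("shift register (6T/bit)", 6), ("DRAM (3T/bit)", 3)):
    print(f"{name}: {storage_transistors(16, 4, per_bit)} transistors")
```

Halving the transistors per bit halves the storage cost outright, and the simpler random-access control logic Hoff mentions adds further savings on top of that.
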
Together, Stan Mazor and I — Stan joined at the beginning of September — created an outline of what we were talking about, and our marketing department proposed our alternative approach to the calculator company in the middle of September. In October, the management of the calculator company came over to the U.S. for a meeting in which both approaches were presented. At that point they said they liked the Intel approach because it offered a more general-purpose instruction set, a simpler processor, fewer chips to be developed and a broader range of applications. Our proposal reduced the number of chips needed from around a dozen to only four.

How did you know that what you came out with was a microprocessor?

Our initial goal was never to make a microprocessor, only to solve this particular customer’s problem, this calculator design problem. But there were several aspects of the design that became more evident as it was pursued. One was that, being more general purpose and faster than the original design, it might be useful for a broader range of applications than just the calculator family. Dr. Federico Faggin was hired around April of 1970 and given responsibility for chip circuit design and layout, to turn this architecture into a physical transistor layout. He developed a number of techniques to take advantage of Intel’s new silicon-gate metal-oxide-semiconductor (MOS) process and even found ways to improve performance using techniques that others felt were impossible to do with silicon gate. He had working parts by around January of 1971.

Of course you didn’t envision the PC at the time.

The PC? No. Our expected usage was for what today is called embedded control. I fault the media for not even being aware of embedded control, and yet it’s a huge market for microcontrollers and microprocessors. It wasn’t that we wouldn’t have liked to have our own personal computers built with microprocessors, but the other support devices, like hard or floppy disk drives and printers, were all exceedingly expensive. If one had to make those investments, one would be better off using a higher-performance minicomputer rather than a microprocessor. In effect, the peripherals problem had to be solved before the personal computer became economically feasible.

So how many transistors were on that processor?

With the 4004, I don’t know the exact number. In fact, Dr. Faggin and I each came up with a different number. He had used a form of programmable logic array, and we had designed the instruction set in a way that we hoped would let such an array be used. I believe Dr. Faggin counted all possible sites in the array and reported 2,300 transistors for the 4004 chip, while I counted only the transistors actually implemented in the array and got a count of 2,100, so it’s somewhere in that range.

As someone in on the ground floor of all this, what did you think of “Moore’s Law”?

Gordon showed me his chart soon after I joined Intel. He had originally published it around 1965. At one point, he added a few of Intel’s new parts to it. The improvements due to Intel’s silicon gate process represented growth even faster than his original prediction. But I don’t know if any of us expected it to go on as long as it has. We expected to hit limits to integrated circuit capabilities.
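
Moore’s observation is just a statement about exponential growth, so it is easy to play with numerically. The sketch below projects transistor counts forward from the 4004’s roughly 2,300 transistors; the two-year doubling period is an illustrative assumption (Moore’s original 1965 paper suggested doubling about every year, a cadence he later revised).

```python
def projected_transistors(n0, t0, year, doubling_years=2.0):
    """Project a transistor count forward under a fixed doubling cadence."""
    return n0 * 2 ** ((year - t0) / doubling_years)

# Starting from the 4004's ~2,300 transistors in 1971 (assumed 2-year doubling)
for year in (1971, 1981, 1991, 2001):
    print(year, round(projected_transistors(2300, 1971, year)))
```

At that cadence the count multiplies about 32-fold per decade, which is the shape of the curve Hoff saw Intel’s silicon gate parts outrunning.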

So how did your career evolve after the 4004 project?

Around 1975, Bob Noyce asked me if I could take a look at the telephone industry to see if we could apply our semiconductor technology to telephony. I hired some people who had telephony experience and, after some study, decided to try to make what is called a CODEC, for coder/decoder. The CODEC does the standard conversions between analog and digital representations of voice signals as used in telephone systems: analog signals are converted to a 64 kilobit-per-second digital signal and vice versa. Although most circuits for analog applications used bipolar technology, I developed some techniques for doing the digital-to-analog and analog-to-digital conversion in our MOS technology. We then persuaded Professor Paul Gray of Berkeley to consult for us. He had done pioneering work in using MOS technology for analog functions and had also developed switched-capacitor filtering techniques. I soon had two design teams, one doing the CODEC and one under Professor Gray doing the filter that is needed for use with the CODEC. We ended up, I believe, having the first commercially available monolithic CODEC and the first commercially available switched-capacitor filter for use with it. The normal use for CODECs at that time was for transmission between central offices, but we figured we could design our CODEC so that it could also serve as the basis for a digital private branch exchange.
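
The 64 kbit/s figure comes straight from telephone practice: voice is sampled 8,000 times a second and each sample is companded into an 8-bit code. As a rough illustration of the coding half of that job, here is a Python sketch of the continuous mu-law companding formula; real telephone CODECs such as G.711 use a segmented approximation of this curve, and the test tone here is an arbitrary example.

```python
import numpy as np

MU = 255.0  # mu-law companding constant used in North American telephony

def mu_law_encode(x):
    """Compress samples in [-1, 1] into 8-bit codewords (0..255)."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return ((y + 1) / 2 * 255).astype(np.uint8)

def mu_law_decode(codes):
    """Expand 8-bit codewords back to samples in [-1, 1]."""
    y = codes.astype(np.float64) / 255 * 2 - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

fs = 8000                                   # samples per second
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)    # one second of a 440 Hz tone
codes = mu_law_encode(tone)
recovered = mu_law_decode(codes)
print("bit rate:", fs * 8, "bits/s")        # 8,000 samples * 8 bits = 64 kbit/s
print("max error:", np.max(np.abs(recovered - tone)))
```

Companding spends the 256 code levels logarithmically, so quiet passages of speech get finer resolution than loud ones, which is why an 8-bit telephone channel sounds better than linear 8-bit quantization would.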

From there you went to Atari?

Intel moved the telephone group to Arizona, but I liked California and didn’t want to move. For a while I had a group at Intel looking at speech recognition, but then I was contacted by a headhunter for Atari. I went to Atari in early 1983. Atari had some really advanced ideas. Unfortunately, it didn’t have good financial controls or good visibility into its market. While initially very profitable, the video game market really started to saturate. Without market visibility, Atari overinvested. It went from annual revenue of well over $2 billion down to below $1 billion in the space of a year. That’s very difficult for a company to deal with. About the middle of 1984, Warner Communications sold the company to Jack Tramiel, who had formerly headed Commodore. I was presented with the option of trying to find a place in the new company or having my contract bought out. I chose the latter.

Now your career involves intellectual property litigation consulting. How did you make that transition?

I’m not a lawyer, so I can’t present myself as a legal expert, but I have worked in this field long enough that I know a fair amount of the jargon. I know the kinds of things attorneys are usually looking for in patent litigation, which helps in communicating with them. I also maintain an extensive library at home. I have a big collection of semiconductor data books that go all the way from the late 1960s up to the present: something like over 1,700 data books, plus an almost uncountable number of datasheets and the like. A key piece of information about a patent is the date it was filed. If you can show something that was on sale a year or more before the patent was filed, and that has all the elements of the claims in the patent, then those claims may be invalidated.
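
The date arithmetic Hoff describes is the pre-2013 U.S. “on-sale bar”: a sale more than one year before the filing date can invalidate a claim the product embodies. A trivial Python sketch of that check, with made-up dates for illustration:

```python
from datetime import date

def on_sale_bar_triggered(sale_date: date, filing_date: date) -> bool:
    """Pre-AIA 35 U.S.C. 102(b): sale more than one year before filing.

    (Ignores the Feb 29 edge case for simplicity.)
    """
    critical_date = filing_date.replace(year=filing_date.year - 1)
    return sale_date < critical_date

# Hypothetical example: a data book shows the product on sale in mid-1968,
# and the patent was filed in late 1970 — the claim is vulnerable.
print(on_sale_bar_triggered(date(1968, 6, 1), date(1970, 11, 15)))  # True
```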

This is a company you joined?

While we were both at Atari, Gary Summers headed the semiconductor design activity and reported to me. After Atari was sold, we both left. About a year later, Gary started a company called Teklicon to consult for attorneys who were looking for either testimonial experts or advice about where to find particular art. Around 1986, I started working as an independent consultant, using Teklicon as an agent. In 1990, I became an employee of the company, and I am still an employee to this day.

So you’ve made a transition from engineering to consulting. Should engineering students strive to understand business and the law?

I look back and try to remember where I was back then. The love of science was more important than the love of money, so the stock market and finance did not interest me at the time. However, if you want to enjoy a good lifestyle, it’s nice to have some resources, and one way to do that is to be associated with a successful business. So now I look back and wish I had appreciated business more than I did at the time. I can think of decisions made by colleagues or Intel management whose significance I didn’t appreciate at the time; looking back with the experience I now have, I can see how important business decisions can be. Look at the dotcom bubble and the havoc it wreaked here in Silicon Valley. Many of those mistakes, I believe, came about from ignoring lessons about business. So even though science and technology are wonderful, what really gets them out there for people to use is having businesses built around them. It takes savvy businessmen as well as savvy technologists to make that work.