

Electronics History 4 - Transistors




 

Some investigators were convinced that semiconductors could be given the powers of a triode as well. In late 1947 that goal was met by John Bardeen and Walter Brattain at Bell Labs. Their invention (the little cylinder that provoked a shrug from the New York Times) essentially consisted of two "cat's whiskers" placed very close together on the surface of an electrically grounded chunk of germanium. A month later a colleague, William Shockley, came up with a more practical design—a three-layer semiconductor sandwich. The outer layers were doped with an impurity to supply extra electrons, and the very thin inner layer received a different impurity to create holes. By means of complex interactions at the junctions where the layers met, the middle portion of the sandwich functioned like the grid in a triode, with a very small voltage controlling a sizable current flow between the outer layers. Bardeen, Brattain, and Shockley would share a Nobel Prize in physics as inventors of the transistor.

Although Shockley's version was incorporated into a few products where small size and low power consumption were critical—hearing aids, for example—the transistor didn't win widespread acceptance by manufacturers until the mid-1950s, because germanium transistors suffered performance limitations. A turning point came in early 1954, when Morris Tanenbaum at Bell Labs and Gordon Teal at Texas Instruments (TI), working independently, showed that a transistor could be made from silicon—a component of ordinary sand. These transistors were made by the selective inclusion of impurities during the growth of silicon single crystals, and TI manufactured Teal's version primarily for military applications. Because this method was poorly suited to large-volume production, Tanenbaum continued to pursue more promising approaches, and in early 1955 he and Calvin Fuller at Bell Labs produced high-performance silicon transistors by the high-temperature diffusion of impurities into silicon wafers sliced from a highly purified single crystal. This technique was admirably suited to large-volume production; the semiconductor industry migrated from germanium to diffused silicon, and the Silicon Age was launched. The diffusion technique was also critical to the later invention of the integrated circuit.

Any electronic circuit is an assemblage of several types of components that work together as a unit. Previously, the various circuit elements had always been made separately and then laboriously connected with wires. But in 1958, Jack Kilby, an electrical engineer at Texas Instruments who had been asked to design a transistorized adding machine, came up with a bold unifying strategy. By selective placement of impurities, he realized, a crystalline wafer of silicon could be endowed with all the elements necessary to function as a circuit. As he saw it, the elements would still have to be wired together, but they would take up much less space. In his laboratory notebook, he wrote: "Extreme miniaturization of many electrical circuits could be achieved by making resistors, capacitors and transistors & diodes on a single slice of silicon."

The following year, Robert Noyce, then at Fairchild Semiconductor, independently arrived at the idea of an integrated circuit and added a major improvement. His approach involved overlaying the slice of silicon with a thin coating of silicon oxide, the semiconductor's version of rust. From seminal work done a few years earlier by John Moll and Carl Frosch at Bell Labs, as well as by Fairchild colleague Jean Hoerni, Noyce knew the oxide would protect transistor junctions because of its excellent insulating properties. It also lent itself to a much easier way of connecting the circuit elements: delicate lines of metal could simply be printed on the coating, reaching down to the underlying components through small holes etched in the oxide.

By 1965 integrated circuits—chips, as they were called—comprised as many as 50 elements. That year a physical chemist named Gordon Moore, who would later cofound the Intel Corporation with Robert Noyce, wrote in a magazine article: "The future of integrated electronics is the future of electronics itself." He predicted that the number of components on a chip would continue to double every year, an estimate that, in the amended form of a doubling every year and a half or so, would become known in the industry as Moore's Law. While the forecast was regarded as wild-eyed in some quarters, it proved remarkably accurate. The densest chips of 1970 held about 1,000 components. Chips of the mid-1980s contained as many as several hundred thousand. By the mid-1990s some chips the size of a baby's fingernail packed 20 million components.
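Moore's prediction amounts to simple exponential growth, so the figures above can be checked with a little arithmetic. The short Python sketch below is purely illustrative: it assumes the roughly 50-element chip of 1965 mentioned above as a baseline and the amended doubling period of about a year and a half, numbers taken loosely from this passage rather than from Moore's article.

    # Illustrative sketch: Moore's Law as exponential growth.
    # Assumed baseline: ~50 components in 1965 (the figure cited above),
    # with a doubling period of 1.5 years (the amended rate).
    def projected_components(year, base_year=1965, base_count=50, doubling_years=1.5):
        """Project the number of components on the densest chip in a given year."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1970, 1985, 1995):
        print(year, round(projected_components(year)))
    # Prints roughly 500, 520,000, and 52 million -- the same order of
    # magnitude as the chip densities cited in the paragraph above.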


 






