1938: Barrier layer theory
Physicist Walter Schottky had a significant influence on the evolution of communications technology through investigations reaching back as far as 1912. The space-charge-grid tube and the screen-grid tube, the theory of shot noise and the heterodyne receiver all originated with Schottky.
In addition to his other important work, at the end of 1938 he formulated the basic outlines of his barrier layer theory. A fully developed analysis followed the next year, providing for the first time a qualitative – and in some respects also quantitative – understanding of the rectifying effect that occurs at a metal-semiconductor junction.
The barrier layer theory proved to be a pioneering step in semiconductor research and development. In the aftermath of World War II, its results provided the foundation for the triumph of electronics using components like diodes and transistors – and later the integrated circuit (IC).
1953: Floating zone method
In 1953, Siemens researcher Eberhard Spenke and his colleagues at the Pretzfeld semiconductor laboratory developed a process for making ultrapure silicon that revolutionized the world of electrical engineering and electronics.
In this process, a polycrystalline silicon rod, grown from a single seed crystal, is passed through a high-frequency field that melts a narrow zone of the rod. The part of the rod that melts and then cools takes on the desired single-crystal structure, while chemical impurities stay behind in the molten zone and are carried along with it, yielding ultrapure silicon. Only about two atoms of other substances remain among a billion atoms of silicon – a purity required for making semiconductor components like diodes, transistors and chips.
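As a rough illustration of the purity level just described – assuming the "two atoms among a billion" figure is read as 2 parts per billion of impurities, which is an interpretation, not a statement from the article – the arithmetic looks like this:

```python
# Illustrative arithmetic only: expresses "two atoms of other
# substances among a billion atoms of silicon" as an impurity
# fraction and a purity percentage.

impurity_atoms = 2
silicon_atoms = 1_000_000_000

impurity_ppb = impurity_atoms / silicon_atoms * 1e9   # parts per billion
purity_percent = (1 - impurity_atoms / silicon_atoms) * 100

print(f"Impurity level: {impurity_ppb:.0f} ppb")
print(f"Purity: {purity_percent:.7f} %")
```

In other words, the floating zone method had to deliver silicon that was about 99.9999998 percent pure.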
The “floating zone method” was subsequently licensed to numerous companies in the USA, Japan and Germany.
1987: 1-Mbit memory chip
In the early 1970s, the transition from analog to digital technology vastly accelerated in all fields of electrical engineering. Digitalization called for more and more memory capacity to process programs and data. The chip market exploded. Chip capacity quadrupled every three years, bringing new, more powerful, yet cheaper generations of chips onto a burgeoning market.
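The quadrupling rule of thumb can be sketched numerically. This is a minimal illustration under that stated rule only; the function name and the sample years are chosen here for the example and are not figures from the article:

```python
# Sketch of the growth rule described above: capacity quadrupling
# every three years. Purely illustrative extrapolation.

def capacity_mbit(year: int, base_year: int = 1987, base_mbit: float = 1.0) -> float:
    """Capacity in Mbit under a 4x-every-3-years growth rule."""
    periods = (year - base_year) / 3
    return base_mbit * 4 ** periods

for year in (1987, 1990, 1993):
    print(year, capacity_mbit(year), "Mbit")  # 1, 4, 16 Mbit
```

Actual product generations could run ahead of or behind such a rule of thumb; it describes the overall market trend, not any one company's roadmap.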
In 1983, Siemens declared that developing megabit memory chips would be a strategic corporate goal – meaning chips with more than a million binary memory cells and able to store around 64 pages of text. By the end of 1987, four years after the MEGA Project was launched, the first 1-Mbit chips went into production. In cooperation with international partners, storage capacity kept growing with every new generation of chips. These successes were the foundation for many of Siemens’ innovations in communication, information and automation technology.
1996: 256-Mbit memory chip
Following the MEGA Project’s first achievement of 1987, the 1-Mbit chip, and despite a number of difficulties, the project reached its next goal ahead of schedule in the summer of 1988: a laboratory specimen of a 4-Mbit chip. Mass production, however, could not start until the end of 1989 – still a year ahead of Japanese competitors.
The race for bigger, cheaper memory accelerated in the 1990s. By 1996 Siemens and its cooperating partners were ready to present the first customer samples of the 256-Mbit memory chip. This, the smallest and fastest chip to date, could store all the works of Shakespeare, Goethe and classical Japanese literature combined.
Finally, in 1999 Siemens introduced the first 1-gigabit memory chip. With the technology of 1970, a storage medium of this capacity – here no bigger than two chip cards – would have covered 700 square meters (over 7,000 square feet) of space. This was the climax of chip development at Siemens: two years later, the semiconductor production unit was transferred to the new company Infineon.