If you’re reading this, then you have billions of transistors at your fingertips – and billions more around you in every microchip that exists, processing everything from Candy Crush to the performance of our cars and the temperature of our homes. These vast, tiny cities of switches and amplifiers printed onto silicon wafers allow today’s technology to process data almost instantaneously. They are so familiar that we take them for granted – but how much do we really know about them?
They’ve been around for a surprisingly long time – the first commercial transistors could be found in hearing aids and pocket radios in the early 1950s, around the same time that the three physicists who invented the transistor received their Nobel Prize for doing so. When William Shockley, John Bardeen and Walter Brattain of Bell Labs (today known as Nokia Bell Labs) set out to improve and shrink the bulky vacuum-tube ‘valves’ used in radar systems during the war, they inadvertently stumbled upon the door to the 4th Industrial Revolution. Shockley’s subsequent championing of silicon as the semiconductor material of choice for building these transistors was the key to opening it. You could say that he was the founding father of Silicon Valley.
In the subsequent decades, the humble transistor has both shrunk and grown at the same time. Today’s minuscule chips are printed at high speed in vast quantities using a process called semiconductor lithography, and the number of transistors on a microchip has essentially doubled every two years, with processing power broadly following suit. This observation was first made in 1965 by Gordon Moore (who went on to co-found Intel) and is commonly known as Moore’s Law. It has proven frighteningly accurate ever since, and this steady increase can be tracked right through to the present day. However, little is said about the mass production of these chips, besides a nod to the billions of dollars that the semiconductor industry is worth, and the volumes required to keep a world that relies on smartphones ticking.
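Moore’s observation boils down to simple exponential growth, and we can sketch it in a few lines of Python. The starting point below – the roughly 2,300 transistors of Intel’s 1971 4004 chip – is a commonly cited figure, and the projection is illustrative rather than exact historical data.

```python
# A rough sketch of Moore's Law: transistor counts doubling roughly
# every two years. The 1971 Intel 4004 (~2,300 transistors) is used
# as an illustrative starting point, not precise industry data.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count assuming a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Run over fifty years, the doubling compounds into the billions-per-chip figures we see today – which is exactly why the law’s longevity is so striking.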
Writing on stones
Semiconductor lithography is sometimes also called photolithography and describes the act of ‘printing’ a master pattern of circuits onto a wafer – silicon or another semiconductor material, depending on the application. The name derives from the Greek ‘lithos’ (stone) and ‘graphein’ (to write). However, this is where the simplicity ends. In essence, the machine projects a circuit pattern from a plate called a reticle, optically reducing it as it exposes the circuit onto the wafer. This happens repeatedly, because a complex circuit is created by overlaying many layers of these ultra-fine patterns. To give you an idea of the precision required, semiconductor lithography uses the nanometre – a billionth of a metre – as its unit of measurement.
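The reduce-and-repeat idea above can be sketched numerically. The 4:1 reduction ratio below is a commonly used figure for projection lithography, and the feature sizes and layer names are purely illustrative assumptions, not a description of any specific machine.

```python
# A simplified sketch of projection lithography: the reticle pattern
# is optically reduced (4:1 is a commonly used ratio) and exposed onto
# the wafer, one layer at a time. All numbers here are illustrative.
REDUCTION = 4            # assumed reticle-to-wafer reduction ratio
reticle_feature_nm = 40  # feature size as drawn on the reticle

wafer_feature_nm = reticle_feature_nm / REDUCTION
print(f"Feature printed on wafer: {wafer_feature_nm} nm")

# A chip is built up by overlaying many such exposures, one per layer:
layers = ["transistor", "contact", "metal 1", "metal 2"]
for i, layer in enumerate(layers, start=1):
    print(f"Exposure {i}: align and print the {layer} layer")
```

The alignment step in that loop is where the nanometre-scale precision comes in: every layer must land on top of the previous ones to within a few billionths of a metre.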
When you consider that the first single transistor was the size of a standard scientific calculator, it’s truly astonishing how far we’ve come in such a short period of human history – especially when you compare it to, say, the time elapsed between the invention of the printing press in the fifteenth century and the advent of the second wave of mass media, radio, which came about over 300 years later.
More than Moore?
Can smaller and more powerful continue forever? For years, rumours of the demise of the semiconductor as we know it have been rife, but unfounded. Smartphones are obviously the biggest current consumer of semiconductors, but right now the Internet of Things means that many new areas are right on their tail: better, faster and more connected consumer electronics, wireless communication, telematics and cloud computing are demanding more power in less space. And it almost goes without saying that Artificial Intelligence, security, data transmission and storage, and energy saving are the world’s future chip-guzzlers. This, coupled with demand from new geographical markets, means that the race is still on to keep up with Moore’s Law.
At the same time, alternatives to the traditional chip are on the distant horizon, the most well-known of which is the quantum chip. Where a transistor-based chip treats every single piece of information (or ‘bit’) as existing in one of two states – 1 or 0 – quantum bits (or ‘qubits’) can exist as 1, 0, or a superposition of both at once. And while this doesn’t sound particularly thrilling, the repercussions are huge: it phenomenally increases the speed and number of operations a quantum chip is capable of processing compared to its binary counterpart. But before we all start to fall down the rabbit hole of what this means for our current technology, it’s worth a reality check: quantum computing is still very much in its infancy and requires temperatures close to absolute zero to operate. And even though the race to market is very firmly on, the time and cost of developing the quantum chip mean we have a long, long way to go before the world opens its arms to this particular brave new world.
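The bit-versus-qubit distinction can be made concrete with a toy calculation. This is a sketch of the arithmetic behind superposition, not a real quantum simulator: a qubit’s state is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, and n qubits together span 2ⁿ amplitudes at once.

```python
import math

# A classical bit is simply one of two values:
bit = 1

# A toy qubit in an equal superposition of 0 and 1: two amplitudes
# whose squared magnitudes are the measurement probabilities.
amp0 = 1 / math.sqrt(2)
amp1 = 1 / math.sqrt(2)

p0, p1 = amp0 ** 2, amp1 ** 2
print(f"P(measure 0) = {p0:.2f}, P(measure 1) = {p1:.2f}")

# The scaling behind the excitement: n qubits describe 2**n amplitudes
# simultaneously, where n classical bits hold just one n-bit value.
for n in (1, 10, 50):
    print(f"{n} qubits -> {2 ** n:,} simultaneous amplitudes")
```

The catch, as the reality check above notes, is that keeping those fragile amplitudes intact is what demands temperatures close to absolute zero.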
In the meantime, we look again to Moore’s Law as a means to continue the digital evolution of our species. Miniaturisation of semiconductors has been a decades-long journey, and Canon’s nanoimprint technology – in development since the 1990s – is 2019’s very modern solution, meeting the trinity of size, reliability and cost-effectiveness demanded by today’s tech.