Professor Michelle Simmons AO
Boyer Lectures 2023
Delivered Thursday, 19 October, 8pm AEDT.
Across the four lectures she’ll explore manufacturing at the atomic scale, why Australia is perfectly positioned to build the world’s first error-corrected quantum computer, and the importance of doubt in science. Since 1959, the ABC’s Boyer Lectures have sparked conversations about critical ideas.
View on ABC iview
Listen on ABC Radio National
Lecture may vary on delivery.
01 THE ATOMIC REVOLUTION
In the Science Museum in London, there is part of a nineteenth century mechanical calculating machine, termed the ‘analytical engine’, dreamed up by the British computing pioneer, Charles Babbage. Made of brass, bronze, steel, and wood, it looks like a component from a weaver’s loom, hocked from an old textile factory. It certainly doesn’t look anything like a modern-day computer. And at roughly a metre cubed – and this was only a trial piece for a much larger machine – it is a quaint reminder not only of how far we’ve come, but also of how our technologies scale over time.
There is an even more fascinating computer in the Scienceworks Museum in Melbourne. This is the famous CSIRAC computer built by Trevor Pearcey and his Australian team in the 1940s. This computer was state-of-the-art in its day.
Made using 2000 vacuum tubes, that period's version of our modern transistors, each big enough to hold in your hand, and stacked together in banks, the machine filled an entire room. PAUSE I know I'm not the first to point out the contrast with our modern calculating machines, but it is astonishing, by comparison, to think that the current Apple M1 Max chip, manufactured by TSMC, manages to fit 57 billion transistors onto one little sliver of an object about the size of an after-dinner mint.
Computers are not the only technology that has shrunk over time. The first motors were enormous – big, bulky, steam-powered machines originally designed to pump water out of mines. Now you can find miniature electronic motors all over the place: in refrigerators, dishwashers, vacuum cleaners, cameras, and of course in an ever-expanding range of transportation technologies. The first mechanical clocks were also rather grand and expensive; so much so that the earliest mechanical chronometers were only ever found in significant public places like churches, cathedrals and town halls. Today, anyone can buy a watch and our whole world has become synchronised as a result.
Or think about printers. Gutenberg’s printing press was twice as high as a man, larger than a grand piano, and arduous to use. Yet now you can pick up an efficient laser printer from Officeworks that you can carry in both hands.
The miniaturisation of computing, then, is not an entirely unique story.
One of the ways humans can enhance any technology is to try to miniaturise it. It is not uncommon that, having made something smaller, we find new uses for it, which end up transforming society in unanticipated ways.
The story of computing is unique, however, in scale if not in kind.
And I say this not just because of the stark comparison that might be drawn between Charles Babbage’s effort and the iPhone, or between the CSIRAC of the late 1940s and the wild multitude of conveniently sized and extremely powerful computing devices we all have at our disposal today.
What sets computing apart from other technologies that we have miniaturised is the speed and the scale of miniaturisation, and the unbelievable fact – as I will explain – that we have now brought this particular process to its ultimate limit, whereby the core features of our computing machinery can be reduced to the size of individual atoms.
Today, I am going to tell you the story of how this happened and why it matters. It is a story that links fundamental physics, extraordinary technological advancement, and the future of computing. It is also largely an Australian story. And it starts with two things you may not often hear about on the radio: the transistor and the atom.
The foundation of all modern computers is the transistor, which was invented back in 1947. That probably feels like a long time ago, but in historical terms it is very recent. Indeed, it is remarkable to think that throughout most of human history, there was no such thing as a transistor. Yet the transistor has become the core calculating component at the heart of every computer, mobile phone, and smart device on the planet.
Transistors are electronic devices that act as switches, embodying the ‘ones’ and ‘zeros’ of the digital age. They increasingly run the world – microprocessors contain billions of transistors, and most of us use trillions every day without giving it a moment’s thought. They are the elemental gears that run our digital economy; they are all-pervasive – inescapable even – in the modern world. Yet they are also now so small as to be imperceptible.
The person whose name will forever be associated with the process of miniaturisation in computing is Gordon Moore, the co-founder of Intel. He’s the one who first noted that the number of transistors on a microchip was roughly doubling every 18 months to 2 years – which essentially meant transistors were halving in size over the same timeframe.
Moore published his data in the 1960s and projected the rate of growth into the future, observing that if any semiconductor company wanted to remain competitive over time it would have to keep cramming more and more transistors into less and less space. In so doing, he set a standard known as Moore’s law that the entire industry strove for, and turned what was then just an observation from a few years of data into a self-fulfilling prophecy that has spanned decades.
In the late 1990s, when I was working in the Cavendish Laboratory in Cambridge, and considering migrating to Australia, I remember looking at this data and wondering about the forward projection. Back then, judging from the speed at which the semiconductor industry was innovating, it looked as if it would be only 20 more years (so, the early 2020s) when we would be working at the atomic scale.
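The arithmetic behind that projection can be sketched in a few lines. This is a rough illustration with assumed round numbers (a 200-nanometre feature size in 1999, a halving every two years, and an atomic diameter of roughly 0.2 nanometres), not industry data:

```python
# Back-of-envelope Moore's law projection (assumed round numbers):
# transistor feature sizes halve roughly every 2 years.
feature_nm = 200.0  # typical feature size, late 1990s
atom_nm = 0.2       # rough diameter of a single silicon atom
year = 1999

while feature_nm > atom_nm:
    feature_nm /= 2  # one halving per ~2-year cycle
    year += 2

print(year)  # ten halvings (a factor of ~1000), about 20 years later
```

Ten halvings shrink a feature by a factor of about a thousand, which is why, starting from 200 nanometres in the late 1990s, the atomic scale looked roughly twenty years away.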
To understand what a breathtaking thought this was, and why I found it such an exciting prospect, you should know something about the atom.
The first person (that we know of) to speculate about the existence of atoms was the Greek philosopher Democritus (duh·mo·kruh·tuhs). Around 460 BC, he asked a simple question. If you break something in half, and then in half again, how small would the pieces eventually become?
He theorised that at some point you would end up with the smallest part of matter, which he called “the atom”. Unfortunately for Democritus, experimental science was not particularly advanced in 460 BC and, to make matters worse, Aristotle (who was a much more influential philosopher) didn’t like the theory, so no one paid much attention to the idea for about 2000 years.
But he was basically right. The atom is not the smallest unit of nature, for atoms themselves are made up of electrons, protons, and neutrons. But atoms are nature’s building blocks. All the material that surrounds us, including the earth on which we walk, the food we eat, and our very own bodies, are composed of atoms.
And if most humans in history have had no conception of their existence, this is hardly surprising, because atoms are unimaginably small. A single atom is about one ten-billionth of a metre across, or roughly 500,000 times smaller than the width of a human hair.
It should not surprise you then to know that, even in the late 90s, the idea of making transistors and other electronic devices with components made from individual atoms seemed a little far-fetched. In those days, despite the 20-year projections from Moore’s law, there was no known technology for making things on such a scale.
But I was keen on the idea. To make electronic devices out of individual atoms – that was something almost unimaginable, an idea truly at the frontier of possibility. And I liked that, not just because I like a challenge, but because in science it is at the frontiers that the greatest discoveries are made.
At the Cavendish I was already trying to see if I could make the fastest, most perfect transistors.
However, I found that as these devices got smaller and smaller, they became harder and harder to make reproducibly. I liken it to a sculptor trying to make an exquisitely carved statue. The tools we were using to make these devices, tools even more advanced than those the semiconductor industry was using, were still just too blunt.
Working at the atomic scale meant forging a different path. It meant doing something that no one had ever done before. In some ways, back then, it was like saying you were going to walk on the moon. It also meant accessing the quantum world, the mysterious world of the very small, where the Laws of Nature take on completely counterintuitive and wonderfully powerful properties.
But how to manage it?
Fortunately, a new kind of microscope had been invented in the 1980s – the scanning tunnelling microscope – which allowed us for the first time to “see” individual atoms. This microscope works by bringing a very fine metal tip down to a surface under vacuum. When it gets close to the surface, a tiny current begins to flow from the tip to the atoms. Electrons essentially tunnel from the tip to the atom – hence the ‘tunnelling’ microscope. If we scan this tip across the atoms, keeping the current constant, we essentially get a height profile of the atoms on the surface – hence the name “scanning tunnelling microscope”.
Using this microscope, it became possible for the first time to build up a 3D image of all the atoms on a surface. This is an awe-inspiring tool – and widely recognised as such. The scanning tunnelling microscope was one of the fastest inventions ever to win the Nobel Prize, receiving the award in 1986, just five years after it was invented.
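The constant-current scanning principle can be illustrated with a toy model. The numbers here are purely illustrative and the exponential current is an idealisation; a real instrument does this with piezoelectric actuators and feedback electronics:

```python
import math

# Toy model of constant-current STM scanning. The tunnelling current
# falls off exponentially with the tip-surface gap: I = I0 * exp(-k*gap).
# The feedback loop raises or lowers the tip to hold the current at a
# setpoint, so the recorded tip height traces the atoms on the surface.
I0, k = 1.0, 10.0                    # arbitrary units; k sets the decay
setpoint = I0 * math.exp(-k * 0.5)   # the current at the target 0.5 gap

surface = [0.0, 0.1, 0.3, 0.1, 0.0]  # atom heights along one scan line

profile = []
for h in surface:
    # invert I = I0 * exp(-k * (z - h)) for the tip height z that
    # keeps the current at the setpoint above this point
    z = h + math.log(I0 / setpoint) / k
    profile.append(round(z, 3))

print(profile)  # the tip height mirrors the surface at a constant gap
```

Because the recorded tip height sits a fixed gap above each atom, the scan line reproduces the surface topography – which is exactly the “height profile of the atoms” described above.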
But it turned out to be possible to go a step further. In 1990, IBM researchers in California tried something highly imaginative.
Instead of just using this very special microscope to look at atoms on a surface, they applied voltages to pick atoms up and move them around, famously arranging 35 unbonded atoms on a metal surface to spell IBM – making the world’s smallest logo!
This was the idea that captured my imagination. It was a long way from being a perfected tool. For whilst it was easy enough to use this microscope to pick up unbonded atoms on a metal surface, it is a very different thing to manipulate atoms inside a semiconductor crystal, where the chemical bonding presents a far stronger impediment. Nevertheless, I had some ideas of my own.
In 1999, I decided I would come to Australia and see if I could adapt this imaging and manipulation tool to build electronic devices in silicon at the atomic scale. Rather than making devices smaller and smaller each year as industry was doing, I was determined to leapfrog Moore’s Law, and to push straight for the end point: to make devices with atomic precision.
Now, at that time, conventional transistors were being made with feature sizes of around 200 nanometres, which is equivalent to 1000 atoms across. And every integrated circuit was being made from a complex mixture of atoms, with elements drawn from nearly two-thirds of the periodic table – from silicon to aluminium, tungsten, copper, and many complex oxides.
My plan was to build functional devices in silicon at the atomic scale (i.e., 1000 times smaller) and made out of just two elements – silicon and phosphorus.
Moreover, we knew that the scanning tunnelling microscope wasn’t going to be enough to do this on its own. I knew – and the wider team at UNSW knew – that we had to combine it with another technology, a tool that allowed us to grow perfect crystals of silicon one atomic layer at a time. Yet here’s the catch: up until that point, no one had combined these technologies. Both had to operate under ultra-high vacuum, the same type of vacuum you find in outer space, but one (the microscope) needed extremely high stability, while the other (the crystal growth tool) used powerful pumps that vibrated almost like a washing machine. They seemed fundamentally incompatible.
The consensus view within the scientific community was that combining these two technologies was near impossible. And even if we succeeded on that front, it was pointed out that there would be other significant technical challenges to overcome – indeed, eight major technical impediments were identified, none of which had ever been overcome. Even the senior scientists at IBM were sceptical. But we were in Australia – the land of the ‘give it a go’ attitude. And so, despite the critics, we did it anyway!
By 2010, we made what was then the world’s smallest transistor: a device constructed out of just 7 phosphorus atoms. (In addition to being published in the usual scientific places, this achievement made it into the Guinness Book of World Records – as my son discovered one day to his astonishment while sitting in his school library!)
Then in 2011, we showed that we could pattern a wire just one atom high and four atoms wide. We discovered, much to our surprise, and against all theoretical predictions, that it could carry current much as if it had been made of copper.
Then, in 2012, we made a transistor where the active part of the device was just a single phosphorus atom, beating those old industry predictions from Moore’s law by nearly a decade. Indeed, Moore’s law has since slowed, and the semiconductor industry, even today, has yet to reach this atomic limit.
It took us a long time to figure out how to do all this.
And to make it happen we drew on talent from all over the world. In addition to our Australian base, I was fortunate to have great people from many different countries come to help – from Germany, England, Switzerland, the US, Poland, the Netherlands, Singapore, China, Israel, South Korea, and New Zealand, among other places.
Even so, it took us over 10 years of sustained and systematic problem solving. And of course, we are still refining things.
But the approach now is proven, and our laboratories at UNSW, and the Sydney manufacturing facilities of our company, Silicon Quantum Computing, now make electronic devices reproducibly at the atomic scale as a matter of routine.
Most recently, in 2022, thanks to the efforts of an exceptionally talented team we reported the creation of the world’s first integrated circuit made with atomic precision. An integrated circuit essentially combines multiple device components such as transistors on one integrated chip.
We have done this with input and output leads, and control gates all constructed at the atomic-scale out of phosphorus and silicon atoms using our atomic manufacturing process.
This incredible piece of engineering came roughly ten years after we reported designing, engineering, and measuring our single-atom transistor – which, curiously, is about the same time it took classical computing to get from the invention of the transistor in 1947 to the creation of the first integrated circuit in 1958.
In fact, we’ve gone beyond the atomic scale.
Electrons are subatomic particles that surround the nucleus of every atom. They are the fundamental particles that form the physical basis of electricity.
I had always believed that if we could control the world with atomic precision, then it would also be possible to manipulate the quantum properties of individual electrons. And in our devices, we can. In fact, we can control the movement of individual electrons on and off individual atoms. We can also measure and control the spin properties of these electrons within our devices. The spin of an electron is a property of quantum physics in which the electron acts like a tiny bar magnet.
In other words, we now have access to the quantum world.
It is my belief that these collective attainments represent the dawn of something truly important. By developing tools to see, organise, control, and measure information using individual atoms, I, and those I have worked with, have aspired to open a new frontier in electronics – one based upon the intricate control of nature’s fundamentals, in which the specific placement of every atom counts.
But we have always been aware of a deeper implication. Because atoms, and the electrons around them, are extremely small and possess quantum properties, our invention of atomic assembly raises the potential for making a completely new and powerful kind of computer: a quantum computer.
A quantum computer is a type of computer that exploits the laws of quantum physics so that, instead of performing calculations sequentially like a conventional computer, it works with an extreme form of parallelism, exploring many possible outcomes at the same time and, for certain problems, delivering an exponential speed-up in computational power. Quantum computers have the potential to transform nearly every industry, from finance and cryptography to transportation, logistics, machine learning, and even drug design. Little wonder, then, that today there is an international race to build a quantum computer – and that the leading nations of the world are investing billions in what has been nicknamed the ‘space race’ of the computing era.
The promise of quantum computing is the subject of my second lecture, but for now let me say this: our ability at UNSW to make electronic devices at the level of single atoms has given us a unique opportunity to build a quantum computer here in Australia.
We have figured out how to control the placement of individual atoms, the movement of electrons on and off individual atoms, and the spin states of electrons within quantum electronic devices. Australia is at the very forefront of this field, leading the way, and backed by a strong patent portfolio. PAUSE Even today, despite well-funded efforts in other countries, no one else in the world has been able to replicate our process.
With the visionary backing of the Australian Government, the Commonwealth Bank, Telstra, the NSW Government, and the University of New South Wales, my team and I are now on a mission to use our atomic assembly technology to build a complete prototype quantum computer for which all the functional elements are manufactured and controlled at the atomic scale.
This is not a research project: it is a commercial enterprise. Current estimates suggest that 40% of our economy has the potential to be impacted by quantum computing and that the global quantum computing opportunity could be worth USD $100 billion per annum by 2040.
To make this work, we have spun a commercial company out of the University of NSW, called Silicon Quantum Computing.
It is an ambitious and entrepreneurial undertaking to scale up what we have done to date – to take all the advantages from 20 years of world-leading Australian research – in order to manufacture a new kind of product with the potential to create enormous wealth here in Australia.
To find out about that you’ll have to tune in next time. But before I leave you, I want to ask one final question. What I’ve described to you today started out a quarter century ago as frontier science: a project to test the bounds of what was technically possible.
And we have delivered. Like those who invented the first transistor or the integrated circuit, we have demonstrated a completely new way of doing things. We have shown that it is humanly possible to manipulate atoms into working atomic-scale devices.
But this isn’t just an esoteric accomplishment; it has also given us a novel and extremely powerful manufacturing capability with clear utility and a clear pathway for making a quantum computer. PAUSE The question is, can we keep a capability like this in Australia?
It is sometimes said that we can’t manufacture things in this country – that the economics are too difficult. And for certain kinds of products, maybe that’s right. Even if Australians had invented the transistor, for example, I’m not sure the technology would have stayed here very long. In a world without the internet, manufacturers benefited from being close to their customers, and geographically speaking, Australia is an isolated country.
And, even if that weren’t the case, the country would have lacked the specialised workforce needed.
But atomic manufacturing is different – and not just because we have a technological lead to begin with, or because we hold key patents and know-how, or because the rest of the world has not yet been able to catch us. There are other reasons, too, for expecting that this is a technology that can be industrialised here in Australia.
The goal of our company, Silicon Quantum Computing, is to use the fundamental breakthroughs of our atomic revolution – in other words, atomic manufacturing – to build and operate the most precise and highest quality quantum computers. An early error-corrected quantum computer will be an extraordinarily powerful but also a highly specialised product, manufactured in low volume at very high margins – PAUSE something that does not require the economies of scale of a mass-market consumer product.
It is also likely that users will access quantum computing as a service via the cloud. The internet has eroded the old tyranny of distance.
And we have the specialised workforce we need. With more than two decades of Centre of Excellence funding from the Australian Research Council, our country’s burgeoning quantum start-up sector has already attracted a sizeable workforce with the ingenuity and skills we need to make this happen.
For all these reasons, I believe this is a technology that we can exploit from Australia, for Australia; and I like to hope that other Australians might share this mindset.
It is odd that we ourselves should be made of atoms, and that all the matter we deal with every day is made of atoms, yet nearly all the humans who’ve ever lived have not had the slightest notion of these facts. It’s only in the last few decades that we’ve been able to “see” individual atoms through a microscope. And now we are creating devices at this scale – a scale we didn’t know existed for much of human history. Miniaturisation has driven technology; and in this country, our researchers have taken it to its extreme limit.
We have turned this ability from a lab exercise into a reproducible manufacturing process. And we are now trying to use that manufacturing capability to make computers of extraordinary power. If we succeed in this last step (and in my next lecture I will explain why I think we will), at the heart of our machines will be components as invisible to the world as the atoms they are made from. Yet they will also prove more consequential than any computer built thus far.
Indeed, if we succeed in this last step, everything I’ve told you today will just be the beginning.