Professor Michelle Simmons AO
Boyer Lectures 2023
ABC Australia

The 2023 Boyer Lecture series is called ‘The Atomic Revolution’ and is presented by Professor Michelle Simmons AO, a pioneer in atomic electronics and global leader in quantum computing.

Across the four lectures she’ll explore manufacturing at the atomic scale, why Australia is perfectly positioned to build the world’s first error-corrected quantum computer, and the importance of doubt in science. Since 1959, the ABC’s Boyer Lectures have sparked conversations about critical ideas.

The lecture as delivered may vary from this text.

The computer chip is the defining technology of our era. Computer chips mediate our communications, our commercial and financial transactions, and our storing and sharing of information. They are the chief instruments by which we automate factories, navigate cities, and monitor our environment. We increasingly make and consume drama, music, and games using computer chips. And, of course, they have become the indispensable tools by which we seek to predict the weather, the state of the economy, the price of financial goods and the political and consumption behaviours of others.

Our dependence on computer chips has come about with extraordinary speed. The transistor was only invented in 1947, the integrated circuit in 1958. The personal computer was a product of the 70s and 80s. The smartphone has been with us for only about 20 years. It is hard to think of any other technology that has transformed so many parts of human life so quickly and so profoundly.

The speed at which computing technology has taken over so many aspects of our lives is due to three key factors.

First, it’s effective. The transistor, the integrated circuit and the microprocessor have proved profoundly reliable as information processing tools – tirelessly reproducing the same results, even when performing incredibly complex calculations over countless iterations.

Second, computing is universal – which is to say, computer chips have come to touch almost every aspect of society. And there’s an infinite number of things that can be done with them.

Third, computing offers ever-expanding capabilities. This is the effect of Moore’s Law over several decades, which has seen an almost unimaginable increase in processing speed and power with each new generation of computing products.

The word ‘revolution’ is overused. We are conditioned by the media, by advertising, and by our own wishful hopes, to hype the significance of anything new. For anyone who has lived through the late twentieth and the early twenty-first century, however, the transistor and the silicon chip are the real deal. They have triggered a genuine and profound revolution that has impacted upon every aspect of society.

Now imagine what the world would be like if it were possible to build a new kind of processor that would make even the most powerful supercomputer on the planet today look primitive.

This is the promise of quantum computing.

A quantum computer is a completely new type of computer, first proposed in the early 1980s, which exploits the laws of quantum mechanics.

Quantum mechanics is a branch of physics which deals with nature at very small scales – it is the world of atoms, subatomic particles and even individual photons of light. For Nature’s tiniest constituents, things are possible that would be miraculous should they happen to beings as large as you or me.

Take electrons, for example. They don’t spin the way you might imagine; rather, they have a quantum property called spin, which can take one of two states: spin-up or spin-down. What’s bizarre about quantum particles like electrons, though, is that they can exist in a combination of these spin states, so they are some fraction of the spin-up state and some fraction of the spin-down state … at the same time. This is ‘superposition’.

Another peculiar feature of the quantum world is that quantum particles, such as electrons, can become correlated with one another, so that whatever happens to one instantaneously and automatically correlates with whatever happens to the other. This happens independent of their separation in space. This is ‘entanglement’. When two or more particles are entangled, they operate as if they are a single entity, even if they’re a universe apart, which is why Einstein referred to the effect as “spooky action at a distance”.

A quantum computer seeks to exploit both these effects to perform a kind of computational magic, enabling massively parallel and incredibly powerful calculations. The basis of this concept is highly mathematical … but let me try to explain.

In classical computing all information is reduced to ones and zeros, with each one and each zero corresponding to the ‘on’ and ‘off’ states of current flowing through a transistor. A classical computer works by operating a series of transistors in a logical sequence. Anyone who has written code knows this. When writing code, you must go from one line to another, providing your computer with a sequential set of instructions.

This sequential processing is a fundamental constraint for classical computing. Imagine you have a directory of telephone numbers and that I’ve written someone’s number on a piece of paper.

If I asked a classical computer whose number it is, it would search through the directory, starting at the beginning and working its way to the end until it had found the number. If I wanted to go faster, I could break the problem up. I could split the directory in two and have two computers working. If I wanted to go faster still, I could split the directory in three and use three computers. But fundamentally, in each case, the operation is the same. The basis of computing in the classical world is a sequential, arduous process.
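To make that contrast concrete, here is a minimal Python sketch of the sequential lookup just described. The directory, the names, and the numbers are invented for illustration; splitting the directory among workers speeds things up, but each worker is still scanning line by line:

```python
# Illustrative sketch: a classical reverse phone-number lookup is a
# sequential scan. Splitting the directory among workers still leaves
# each worker scanning its share one entry at a time.
directory = {"Alice": "555-0101", "Bob": "555-0102", "Carol": "555-0103"}

def reverse_lookup(entries, target_number):
    """Scan entries one by one until the target number is found."""
    for name, number in entries:
        if number == target_number:
            return name
    return None

entries = list(directory.items())

# One computer scans the whole directory...
print(reverse_lookup(entries, "555-0103"))  # Carol

# ...or we split the directory in two and give each half to a worker.
halves = [entries[: len(entries) // 2], entries[len(entries) // 2 :]]
results = [reverse_lookup(half, "555-0103") for half in halves]
print(next(r for r in results if r is not None))  # Carol
```

Either way, the underlying operation is the same sequential scan; parallel workers only divide the workload, they do not change its nature.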

Things are very different in the quantum world.

A moment ago, I mentioned the mysterious properties of electron ‘spin’, ‘superposition’, and ‘entanglement’. Imagine I’m in the quantum world and I’m an electron on an atom. The spin of an electron is affected by magnetic fields. If you put me in a magnetic field at low temperature, I’ll behave like a little bar magnet. Indeed, you can imagine me as a bar magnet sitting in the middle of the earth. I’d either be pointing up to the north pole (spin up) or down to the south pole (spin down). These two states are the equivalents of classical bits – our digital ones and zeros of information – only instead of using the term ‘bits’ we use ‘quantum bits’ or ‘qubits’.

So far, there’s not a lot of difference between a classical bit and a quantum bit: the former has ones and zeros, the latter can be spin up or spin down. But the difference in terminology is significant because a quantum bit can do things that a classical bit simply cannot. You’ll remember that in addition to being ‘spin up’ or ‘spin down’ – i.e., one or zero – an electron can also exist in a superposition of states.

It can be in some fraction of the up state and some fraction of the down state at the same time. If we go back to imagining our electron as a little bar magnet in the centre of the earth, what this means is that instead of just pointing to the north or south pole, my little bar magnet can point anywhere on the surface of the earth. To London, New York, Sydney or even Quito in Ecuador, which sits almost exactly on the equator, i.e., half-way in between.

Without going into the details, you should know that the direction of this little bar magnet – or the superposition state of a single quantum bit – can be described mathematically using two classical bits of information, one describing the fraction of the one state and the other describing the fraction of the zero state. So, already you can see that a quantum bit contains twice the information of a classical bit.

But now consider what happens when I bring two electrons together, so they become entangled. Each electron can either be spin up or spin down … or a superposition of both states. That means that two entangled electrons can be in four possible spin states: up-up, down-down, up-down, or down-up. We’re no longer constrained to ones and zeros. Every time I add a quantum bit to my system, I’m doubling the amount of information it contains. That is very different from the situation with a classical computer.
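This doubling can be seen directly if you track the list of numbers needed to describe a register of qubits classically. Here is a hedged Python sketch (the `kron` helper and the state names are mine, not the lecture’s): each added qubit doubles the length of the amplitude list.

```python
def kron(a, b):
    """Tensor product of two state vectors, given as lists of amplitudes."""
    return [x * y for x in a for y in b]

# An equal superposition of spin-up and spin-down for a single qubit.
plus = [2 ** -0.5, 2 ** -0.5]

state = [1.0]  # an empty register
for n in range(1, 5):
    state = kron(state, plus)  # add one more qubit
    print(f"{n} qubit(s) -> {len(state)} amplitudes")
# 1 qubit(s) -> 2 amplitudes
# 2 qubit(s) -> 4 amplitudes
# 3 qubit(s) -> 8 amplitudes
# 4 qubit(s) -> 16 amplitudes
```

A classical register of n bits holds one n-bit value at a time; the quantum description above needs 2 to the power n numbers to capture the state.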

This is not the only advantage, however. You’ll recall that entanglement results in my electrons – or my quantum bits – functioning as if they were a single entity. This means that if I do any operation on this combined state, I do it on all the individual components at the same time. This makes quantum computing intrinsically parallel, and amazingly powerful, in a way that classical computing can never be.

The implications are profound. The power of doubling means that by the time you get to 50 quantum bits, there are tasks for which your quantum computer will outperform the world’s most powerful supercomputer. And if we get to several hundred quantum bits, all operating with low errors, it’s been predicted that you’d have something more powerful than all the computers in the world connected together. That’s a pretty extraordinary prize. And this is one of the reasons why there are many teams across the world racing to try and build one.
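To give a feel for the 50-qubit figure, here is a back-of-envelope memory estimate. The 16 bytes per amplitude is my assumption (two 8-byte floating-point numbers per complex amplitude), not a figure from the lecture:

```python
# Back-of-envelope arithmetic: what it would take to store the full
# classical description of a 50-qubit register.
n_qubits = 50
amplitudes = 2 ** n_qubits          # roughly 1.13e15 amplitudes
bytes_needed = amplitudes * 16      # assume 16 bytes per complex amplitude
pebibytes = bytes_needed / 2 ** 50  # 1 PiB = 2**50 bytes
print(f"{amplitudes} amplitudes, about {pebibytes:.0f} PiB of memory")
```

On those assumptions the exact state no longer fits in any single machine’s memory, which is why simulating a low-error 50-qubit register classically becomes impractical.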

So how will quantum computing change the world?

If someone had tried to answer that question in 1947 about the transistor, they would have been wildly wrong. Back then it was impossible to see how these funny little on/off switches would eventually shake the world with their first uses being in hearing aids, calculators and transistor radios. I’m sure the same is true for quantum computers. Having a capacity more powerful than all the computers of the world combined will generate opportunities and applications that are currently unimaginable – at least for someone of my abilities.

However, there is a key difference between the two eras, which makes some crystal-ball gazing possible for quantum computing, at least in the near term.

Whereas the transistor emerged as a novel form of hardware looking for its applications, in the quantum field there are huge numbers of software engineers and algorithm developers already working on quantum problems, even before the hardware has been built!

Already more than 60 algorithms have been dreamed up that could be run on quantum computers to solve problems in areas like logistics, search optimisation, machine learning, portfolio optimisation, financial market analysis, catalyst design, drug design, aircraft design, supply chain management, and cryptography to name a few.

To elaborate on one example, current drug discovery processes have notoriously high costs and high failure rates. One of the critical impediments in this regard is that classical computers – even the most powerful supercomputers in the world – have intrinsic limitations when it comes to modelling drug structures and biological interactions. That’s because the quantum mechanical properties of drugs (such as the way electrons behave within a drug molecule, or the ways drugs interact with an organism’s biology) are often too complex for a classical computer to calculate.

Yet these kinds of calculations lend themselves to simulation on a quantum computer. Indeed, the use of quantum computing for molecular modelling is likely not only to facilitate faster design of new therapies for medical purposes but also to yield improved catalysts for fertiliser manufacturing, completely new materials, and accelerated chemical prototyping for the development of more efficient batteries.

Annually, the worldwide agriculture sector spends an estimated AU$170 billion on synthetic nitrogen fertilisers to enhance crop yields and meet the growing demand for food. The Haber-Bosch process, responsible for producing synthetic nitrogen fertilisers, consumes approximately 1-2% of the world’s annual energy supply, including 3-5% of global annual natural gas production, and releases vast quantities of CO2, accounting for 3% of global carbon emissions.

The ability to improve the production of nitrogen fertilisers through advanced quantum chemistry simulations has the potential to not only save billions of dollars annually but also substantially reduce carbon emissions, promoting both environmental sustainability and global economic stability.

Another key application area is in optimisation. Our complex modern economy is awash with optimisation challenges. Management of supply chains, route planning for delivery or logistical systems, and even traffic control are all examples of optimisation problems which become excessively complicated for classical computers as these problems scale in size or complexity. Whilst it might be easy to alter the train timetable if there is a problem with one of the tracks, how does this affect the connecting buses and ferries?

There are similar complex optimisation problems in finance, notably in risk estimation and portfolio optimisation. What all complex optimisation problems have in common is dependencies across large numbers of variables. Because of the inherent parallelism due to entanglement, quantum computers will be able to cope with problems of this nature much more effectively than classical computers.

A final example worth mentioning is cybersecurity. Some of the earliest quantum computing algorithms suggested that quantum computers might be used to break the encryption that secures all our important information online. It is premature to worry on this front. Implementing these algorithms requires a quantum computer with a very large number of qubits and extremely low error rates – a product that is much more sophisticated than anything likely to be available in the next decade. Moreover, the threat has inspired security agencies to develop so-called “quantum-resistant” cyber protocols. So, applications in this domain are probably a long way off. Nevertheless, the power of quantum computers to crack encryption problems is of significant enough concern that this will definitely be another space to watch.

So, we already have a surprisingly large set of quantum applications on the table and ready to go, simply awaiting the invention of the right hardware.

The big question, then, is not whether a quantum computer can work, or whether it will provide some advantage, or what the applications will be, but whether we can practically deliver the hardware to operate all these quantum algorithms. That’s the challenge now; that’s what the world is racing for. Everyone’s waiting for the hardware. So, how do we build this? And, more importantly, what is the best way to build it?

Currently, there are several possible routes being pursued around the world. In our Australian company, Silicon Quantum Computing, which is the world leader for atom-based quantum computing in silicon, we are creating a quantum computer using the globally unique atomic manufacturing technology I’ve developed with my team at UNSW over the past quarter century.

But you may have heard of other companies too. D-Wave, IBM, and Google are building their quantum computers with quantum bits made using superconducting junctions. IonQ is a company that is making quantum bits using ions of a rare-earth metal called ytterbium. Xanadu makes quantum bits using photons of light.

What all these companies have in common is that their quantum bits are constructed out of Nature’s fundamental building blocks such as electrons, atoms, ions, photons, and superconducting metals. Going right down to these fundamental components is essential if you want to tap into the power of quantum mechanics. Nevertheless, each implementation is different, with its own pros and cons.

For example, some of the earliest qubits were demonstrated in ionic systems because ions are incredibly stable. But operating these systems is cumbersome: the time required to address and operate an ionic qubit is long. And this is a materials technology that has never been manufactured at scale, which will make it extremely challenging to reach the high qubit numbers needed to build a quantum computer that is actually useful.

By contrast, if you read the news, you might have heard that the superconducting qubit companies are now in the lead based on raw qubit counts. The key advantage of these companies is that they were able to deploy rapidly using pre-existing, 1990s manufacturing technology. They’ve had many incredible achievements. But this implementation, too, has its challenges. Because they are physically large, superconducting qubits are particularly prone to interference from their environment. This limits the time that a superconducting qubit can be held in a specific quantum state – or to use the technical jargon, they have what is known as short ‘coherence’ times.

The makers of quantum computers based on superconducting qubits are now trying to control for this using elaborate electronics. But as they scale beyond a certain point, they face higher error rates and other engineering constraints: their chips are now so large that they no longer fit in their ideal low-temperature operating environment.

So, which system is best? At this stage, no one truly knows. But the longer you can hold your material system in a specific quantum state – i.e., the longer its coherence time – the more stable and less error-prone your quantum computer is likely to be.

But there’s another thing that matters. How fast can you operate your qubit? It’s not much use to have a long coherence time if you have a very, very slow operation time. This is the issue for ionic qubits. Ultimately, what really counts is how long you can maintain a specific quantum state and the speed with which you can perform operations on your quantum state. The higher that ratio, the better your system is likely to be, the more qubits you can build, and the more powerful your ultimate computer.
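That ratio can be written down as a simple figure of merit: roughly, how many operations fit inside one coherence window. The numbers below are placeholders chosen for illustration, not measured values for any real platform:

```python
def ops_per_coherence_window(coherence_time_s, gate_time_s):
    """Roughly how many gate operations fit within one coherence time."""
    return coherence_time_s / gate_time_s

# Hypothetical platform A: long coherence but slow operations.
slow_but_stable = ops_per_coherence_window(coherence_time_s=1.0,
                                           gate_time_s=1e-3)

# Hypothetical platform B: shorter coherence but much faster operations.
fast_but_fragile = ops_per_coherence_window(coherence_time_s=1e-4,
                                            gate_time_s=1e-8)

print(slow_but_stable, fast_but_fragile)
```

On these made-up numbers, the platform with the shorter coherence time still wins, because its operations are so much faster; it is the ratio, not either quantity alone, that matters.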

This is one of the reasons I am so excited about atom-based quantum computing.

We have a very simple, elegant, and clean system. In our atom-based approach, quantum information is encoded on phosphorus atoms – either on the spin of a phosphorus electron or on the spin of a phosphorus nucleus. These phosphorus-based quantum bits are kept in place and protected by a silicon lattice, which is ideal since silicon naturally forms a low-noise, crystalline structure.

Significantly, too, all the electronic components in the processor are made with atomically patterned phosphorus: from the control electrodes that create entanglement; to the sensors that initialise and read-out the information; as well as the qubits themselves. Everything, all together on one monolithic chip, is made of just two elements: phosphorus and silicon.

This brings huge advantages because it means we do not have the material interfaces and imperfections that are well known to create defect states and to cause qubits to lose their information or decohere. Working with just two kinds of atoms enables us to form stable, high-quality qubits.

This is not to say, however, that we don’t have our challenges as well.

For atom-based quantum computing, the biggest impediment has been technological. A quarter century ago, there was no off-the-shelf technology for us, as there was for the superconducting teams. We had to start from scratch. Before we could exploit the quantum effects in our system, we first had to figure out systematically how to design and manufacture atomic-scale devices, and then we had to learn how to control the quantum states of individual electrons and of the nuclei of individual atoms within these devices.

None of this was easy, but as I described in my last lecture, that’s what we’ve done over the past quarter century at UNSW. Our globally unique atomic manufacturing technology enables us to place individual phosphorus atoms into a silicon substrate to create electronic devices with atomic precision. This, on its own, is an extraordinary capability.

But we’ve since gone on, steadily but surely, to exploit this technology to demonstrate all the key components of a quantum computer.

In conjunction with our spin-out company Silicon Quantum Computing, we’ve created quantum bits where information is encoded in a single electron on a single phosphorus atom; we’ve entangled quantum bits to form what is known as a two-qubit gate; we’ve gained exquisite control of a single electron spin so we can both initialise and read out the state of our qubits; we’ve developed the capacity to individually address two quantum bits with extraordinary accuracy even though they are less than 10 nanometres apart; and, most recently, we’ve engineered an integrated circuit in which we placed phosphorus atoms with such precision that they were used to simulate the quantum behaviours of electrons in an organic molecule.

Our goal now is to produce a 100-qubit quantum processor by 2028. Will we get there? I believe we will.

There are still big challenges ahead, but we have come up against a legion of serious challenges over the past two decades and found our way through. We have great people working on this – and I mean truly superb people who have come from all over the world to join us. We also know what we need to do, which is a huge help.

I am a firm believer in road maps and milestones. Not all scientists work this way, but for an ambitious, multi-decadal experimental physicist it is the only way to reach one’s destination.

And then there is the strength of our competitive position. Not only have we realised ways to input and read out information from individual atoms with higher accuracies, longer lifetimes and faster speeds than others, but with our atom-based technology we have established unequivocal international leadership, and we possess the know-how and patents that will be necessary to hold this lead. I believe we can take this all the way.

At the start of today’s lecture, I mentioned the revolution in all our lives brought about by the invention of the transistor. In explaining how a quantum computer works, I have tried to give you a flavour of the promise of quantum computing and why I, together with many other physicists, believe that a quantum computer could produce a similar transformation not so far off in humanity’s future.

There is something important, however, that I hope will be different this time.

The discovery of the transistor spawned several global industries: the semiconductor industry, the computing industry, the software industry, and all the modern digital industries we rely on today. While these are all global industries, each had its genesis in America, and I think there’s little doubt as to why. In the early years, it was the Americans who invented both the transistor and the integrated circuit, and it was the Americans who controlled the early industrialisation of these technologies. Everything else followed.

We don’t know yet which implementation will ultimately prove the most effective for quantum computing, but if it turns out that our atomically engineered devices are the winners, then it could – surprisingly – be Australians who control the invention and the industrialisation this time around.

One thing we know is that whoever gets there first, a quantum computer is worth striving for. This is much more than just a ‘gee-whizz’ story about scientific discovery and technological breakthrough producing the potential for global transformation. It is an area in which, somewhat miraculously, Australia has emerged with unique potential – a potential not just to play the game but to lead it.

And this, too, is part of the quantum promise.

Of course, we still have quite a road to travel. There are many unknowns. Ultimate success will require tremendous hard work, insight, technical prowess, and our fair share of good fortune. But the potential nonetheless is there – it is real – if we are able to seize it.