Welcome to another episode of *The New Quantum Era Podcast* hosted by Kevin Rowney and Sebastian Hassinger.

In this episode, we are joined by Dorit Aharonov, a professor at the Hebrew University of Jerusalem and one of the pioneers of quantum computing. She's also the Chief Science Officer at QEDMA, a quantum startup based in Israel. Dorit is one of the major movers and shakers of quantum error correction and co-author of the important Threshold Theorem for quantum error correction. Kevin, Sebastian, and Dorit talk about her recent work on the theoretical foundations of random circuit sampling.

[4:22] Dorit shares her path into quantum information and computing.

[8:27] Dorit explains the threshold theorem in an easy-to-understand manner.

[16:35] Whether the speed of error correction outpaces the generation of errors in the computation could depend on the physical implementation, the algorithm, or both.

[18:53] A more powerful assertion Dorit makes is that there's a deeper connection between phases of matter, like the transition between solid and liquid, and these quantum error correction thresholds.

[19:51] A lot of the foundations of classical error correction were laid down in the mid-40s in Von Neumann's work when the IAS system was being built. Dorit still sees the echoes of that.

[22:35] We might be witnessing a growing momentum around the powerful expression of new quantum error correction technologies.

[25:28] Dorit talks about the difference between error mitigation and error correction.

[26:55] Dorit explains the idea of the reset gate.

[30:22] It might be safe to say that challenges are primarily engineering in nature and that we have enough science to enable that engineering to get to fault tolerance.

[31:50] Dorit discusses a possible timeline for this engineering to get to fault tolerance.

[34:07] Is Dorit an NISQ optimist or a pessimist when it comes to real-world applications?

[39:21] Dorit addresses the difference between practical and asymptotic quantum advantage.

[41:30] Dorit shares what the paper on random circuit sampling shows.

[45:25] Dorit explains why the machine learning algorithms that were dequantized are treacherous.

[49:56] Dorit shows optimism regarding the possibility of seeing evidence of a quantum event.

[52:25] Dorit admits to finding constructive interference between working in the industry and working on theoretical questions.

[53:50] Is there something Dorit is excited about in the next year or two that will be another step forward?

[56:50] Dorit talks about concrete examples of experiments and sensors that might be arriving thanks to quantum computing advancements.

[1:00:35] Sebastian and Kevin share the highlights of a fantastic conversation with Dorit.

Visit The New Quantum Era

The New Quantum Era Podcast

Limitations of Noisy Reversible Computation, Dorit Aharonov, Michael Ben-Or, Russell Impagliazzo, Noam Nisan

The Complexity of NISQ, Sitan Chen, Jordan Cotler, Hsin-Yuan Huang, and Jerry Li

A polynomial-time classical algorithm for noisy random circuit sampling, Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, Umesh Vazirani

“Nobody actually believed that it was possible to correct errors that occur on quantum states because of the lack of reversibility.” — Dorit Aharonov

“it's a physics phenomenon… below a certain threshold, we can think of this as if the system is capable of some completely different behavior, like ice and water. It's just like a phase transition -- below that, there would be macroscopic entanglement and … ability to control large scale quantum correlations. And above it, this would not be possible.” — Dorit Aharonov

Guest

Composer

Omar Costa Hamido

OCH is a performer, composer, and technologist, working primarily in multimedia and improvisation. His current research is on quantum computing and music composition, telematics, and multimedia. He is passionate about emerging technology, cinema, teaching, and performing new works. He earned his PhD in Integrated Composition, Improvisation and Technology at University of California, Irvine with his research project Adventures in Quantumland (quantumland.art). He also earned his MA in Music Theory and Composition at ESMAE-IPP Portugal with his research on the relations between music and painting. In recent years, his work has been recognized with grants and awards from MSCA, Fulbright, Fundação para a Ciência e a Tecnologia, Medici, Beall Center for Art+Technology, and IBM.

Your hosts, Sebastian Hassinger and Kevin Rowney, interview brilliant research scientists, software developers, engineers and others actively exploring the possibilities of our new quantum era. We will cover topics in quantum computing, networking and sensing, focusing on hardware, algorithms and general theory. The show aims for accessibility - neither of us are physicists! - and we'll try to provide context for the terminology and glimpses at the fascinating history of this new field as it evolves in real time.

Sebastian Hassinger 0:02

The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney.

Hey Kevin, how are you? Hey Sebastian, how are you? I'm very well. I'm really excited about the conversation we have for our listeners today. We're joined by Dorit Aharonov, who's a professor at the Hebrew University of Jerusalem and really one of the pioneers in the field. She's also the chief science officer at Qedma, which is a really interesting quantum startup based in Israel as well. And I was just really excited to get a chance to talk to Dorit. Yeah, I think, Sebastian, we were really lucky, actually, to get Dorit on the podcast. She's actually one of the major movers and shakers around quantum error correction, and co-author of the major, highly important threshold theorem for quantum error correction. She'll get into that in some depth. But you know, just given how big a year it has been in quantum error correction, it was truly a privilege, I think, to have her perspective on this important subject, and to get a glimpse of what the future could bring in terms of these technologies and algorithms. We got a great view of her origin story, which was a treat as well. And then we also talked a bunch about some recent work she's done on the theoretical foundations of a technique called random circuit sampling. We left out the context a little bit, I'm afraid, in our conversation. The reason that's a topic of interest is that it's the approach that Google used in their so-called supremacy experiment in October 2019. It's a technique to try to prove that their chip could perform a task in, I think it was, a matter of a couple hundred seconds, instead of a couple hundred thousand years, or something on that scale, by classical means. And that was really interesting to delve into, her recent work on that, as well.
And it was an interesting news event at the time. You know, many of these big breakthroughs in quantum computing research are hard to really grapple with in real time, but it made a big splash when it happened, with a little bit of controversy, and major names worked on that release. And this paper, I think, really put that very claim into perspective. So I think it's a great intro to this great interview with Dorit.

Okay, so we're joined today by a very special guest: we have Dorit Aharonov with us. She's a professor at the Hebrew University in Jerusalem and a quantum information scientist, and we're really excited to get a chance to talk with her. Dorit played an important role in the fault tolerance theorem, and more recently has done some work on random circuit sampling. She's also the chief science officer at Qedma, which is a really interesting quantum startup in Israel. So Dorit, welcome.

Dorit Aharonov 3:57

Thank you. Thank you, Sebastian. Thank you, Kevin. Nice to be here.

Kevin Rowney 4:02

Pleasure to have you on today. Yeah.

Sebastian Hassinger 4:04

So yeah, I think, you know, Dorit, we always love to start with guests by getting a little bit of a bio, because we find that everyone has their own path into quantum information and quantum computing. And it's always really fascinating to hear the twists and turns that each person's individual story has taken. So where did you start on your path toward quantum information? And how did you get here?

Dorit Aharonov 4:32

So basically, I started my master's degree at the Weizmann Institute, after doing my BSc at the Hebrew University. And already before I started my BSc, actually, I wanted to do brain research.

This was my main goal, and I was looking for a master's advisor in brain research, and I couldn't find one at the Weizmann Institute. And then a friend advised me, in some conversation at a party, to go check out the Hebrew University. So I told myself, well, why not check that? And I, I sort of,

I don't know exactly why, but I had a feeling that in the computer science department I would find something interesting. And I kind of knocked on the door of my advisor-to-be, Michael Ben-Or. And I wanted to talk to him about brain research. But then he told me, look, there's something really interesting going on,

something that might interest you. And he gave me a paper about quantum computation. That was in '95, one year after Shor's algorithm. He talked about error correction, and about the question of whether... I didn't understand a word of what he was saying, but I took this paper, and it just caught me as something extremely interesting.

Sebastian Hassinger 5:59

You remember which paper it was?

Dorit Aharonov 6:02

It was Deutsch's paper, I think. Okay, the definition of quantum circuits.

Sebastian Hassinger 6:07

That's a foundational work for sure. Yes. And, and an exciting time in the field!

Dorit Aharonov 6:14

Yeah, it was. It's an extremely important paper. And I remember reading it, and reading it again, again and again for months, until I finally got it. Nobody understood quantum computation at the time. The one thing that I really didn't understand was the connection between the exponentially many dimensions and the n qubits. You know, just the

basics of the model. I remember the moment when I actually got it.

Kevin Rowney 6:51

That's awesome, fantastic story. And it's also interesting how quantum computation, even though its abstraction is so formidable, does draw people in, even though at first they perhaps don't really understand it. Right, there's some magnetic, compelling draw. Yeah, it's compelling. I think you can sense the depth. Quantum computation is really special in its depth, and in its connections to so many things in mathematics, and you can really sense that without even understanding anything, just by knowing what the definitions are. They seem to be mind-boggling even before you really understand them.

Yeah, no doubt, no doubt. Yeah, I

Sebastian Hassinger 7:34

like to joke that if anyone knows anything about quantum physics, it's probably that it's really, really hard to understand.

Kevin Rowney 7:44

Yeah, this is such a pleasure, to have you on board. You know, I think it might be good, if it's okay with you, to start to jump into a talk of the fault tolerance theorem and the threshold theorem; you're a fundamental contributor on that front. I mean, our audience are people who are, you know, math and science amateurs, some professionals, and people in deep tech trying to grapple with quantum computation and what it might mean for the future. So we try, if we can, to come up with in-depth explanations of these challenging topics, but still stay away from all of the abstraction. I'm wondering, I mean, that's a challenging ask I'm making of you, but is it possible for you to give that a go, to try to explain the threshold theorem in that context? Yeah,

Dorit Aharonov 8:27

yeah, actually, I don't think it's that complicated to explain from my point of view. So I think, you know, it starts from the amazing discovery by Peter Shor and Steane of quantum error correcting codes. And this was really, at the time, in '95, nobody actually believed that it is possible to correct errors that occur on quantum states, because of the lack of reversibility. It's like, you know, a state is being measured; how can you recover from such an irreversible event? But once you have a quantum error correcting code, then basically the true question is, can you actually compute on encoded states while performing the error correction with noisy gates? So the whole idea, so let me start even earlier: why would you even be interested in this question of whether you can compute in the presence of noise? That's the main subject of the fault tolerance theorem. So if quantum computers were completely ideal, then we know that there are these algorithms, like Shor's algorithm and other algorithms, that seem to be, we don't really have a proof of that, but they seem to be exponentially better and faster than classical algorithms. Yeah, so the question that was, you know, very, very pressing at the time, just after Shor's algorithm, was: is this amazing computational complexity that quantum computers promise due to the extreme accuracy, the high accuracy, that we require from the gates and from the states? Or is it okay to perform the computation in the presence of noise, in a non-ideal situation, and still the quantum system will maintain its exponential advantage?
And this is critical, because there are models of computation where, if you require infinite accuracy, you can actually derive exponential advantages or even more. But if you get to, you know, real requirements, real physical systems with realistic requirements from the model, with finite precision, finite energy that you use for every elementary step, etc., then suddenly the advantage is lost. So of course we wanted, I mean, it's a very natural question to ask: does this happen to a quantum computer? I mean, it could be that the advantage exists just within an ideal framework, but once you make it realistic, then the whole promise disappears.

Kevin Rowney 11:24

And this is interesting to me, forgive me for interrupting, but if I'm not mistaken, the rise of analog computation, the theory around that, I mean, if it was assuming infinite precision, it could make breakthrough new capacities, but that's obviously not physically realizable. Do I have that correct? That was one of the tangents to the theory around analog computing.

Dorit Aharonov 11:48

So I'm not sure which theory you actually mean, when you talk about theory of analog computing, are you referring to something related to quantum computation or more general Oh, it was

Kevin Rowney 11:57

before quantum computation, but I thought there was at least some tangent within the theory of computation related to analog computers. So there was, you know, an early era of computation where there was a debate about, you know, digital versus analog, exploring the limits of and comparing both systems. And so I have some abstract memory of that tangent to the theory of computation,

Dorit Aharonov 12:21

there is a very deep theory of analog computation and of analog quantum computation, but I was referring especially to an example, just a theoretical example, of how to actually factorize numbers if you have algebraic precision. That wasn't the suggestion of a particular physical system; it was just an example of how, if you assume non-physical assumptions like infinite precision, you can get extremely complex and powerful computers. So the name of the game is actually to achieve advantage while not making extravagant assumptions like infinite precision, but just making realistic assumptions. And that was the starting point for the question, first of all about quantum error correcting codes, and the amazing discoveries of those. And then in '95, when I started my MSc, which basically turned into a PhD, we were wondering whether quantum error correcting codes can actually be used in order to compute on a noisy quantum computer and maintain the quantum advantage, the computational advantage. And the whole idea is, well, first of all, the difficulty is that the error correction itself is a procedure where you basically have some kind of a state, and if there are errors occurring on that state, then you could in principle detect those errors and reverse them. But in order to detect those errors, you have to perform various operations, and in order to correct them you have to perform more operations, and all these new operations are also faulty, they're also noisy. If you take a state that is noisy and then you start playing with it with noisy operations, you might only do more harm than good. And the question is, can you actually use noisy operations on noisy states such that the noise doesn't do more harm than good, but actually maintains the amount of error below a certain level all the time?
And this is an engineering task, and that is what we did, me and Michael Ben-Or, my PhD advisor. We actually found a way to do this. So basically, the key point is that there's sort of a tension between two forces. One is the force of the errors: the errors come in at a certain rate, and they sort of ruin the coherence and the entanglement in the system at a certain rate. And then there are the error correction gates, which create entanglement and sort of get rid of this noise that the faults introduce. And these are two forces that compete with each other. And the point is that if both of them act at the same rate, they're sort of on an equal footing, and if one of them has a slightly faster rate, then it wins. And there is a critical point where, if the error correction works essentially faster than the errors accumulate, then you can actually get control of the errors and maintain them below a certain level that does not grow through the computation. And this happens at a critical point. And that's the threshold.
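The tension Dorit describes between error accumulation and error correction can be caricatured numerically. The sketch below uses the standard heuristic scaling for surface-code-style fault tolerance, p_L ≈ A(p/p_th)^((d+1)/2); the threshold value, the constant A, and the code distances are all illustrative assumptions, not figures from the episode:

```python
# Toy illustration of the threshold phenomenon: the logical error rate vs.
# code distance, for physical error rates below and above an assumed
# threshold p_th. All constants here are illustrative placeholders.

def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    """Heuristic logical error rate for physical error rate p and distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.02):  # one rate below the assumed threshold, one above
    rates = [logical_error_rate(p, d=d) for d in (3, 5, 7)]
    trend = "suppressed" if rates[-1] < rates[0] else "amplified"
    print(f"p = {p}: d = 3, 5, 7 -> {rates} ({trend})")
```

Below threshold, growing the code distance drives the logical error rate down; above it, the same machinery amplifies errors, which is the "two competing forces" picture in miniature.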

Kevin Rowney 16:10

Interesting, beautiful, interesting.

Sebastian Hassinger 16:13

And is the, let's say, the velocity at which the error correction happens, versus the generation of errors in the computation, is that dependent on a more engineering sort of physical implementation of the error correction? Or is it dependent on the algorithm? Or both?

Dorit Aharonov 16:32

This is a beautiful question, which I'm still, so that was, I don't know how long ago, but I'm still fascinated by this question. So, the original proof of ours, and also those of Knill, Laflamme, and Zurek, and of Kitaev, who also had other proofs of more or less the same theorem, these original proofs are very, very engineering-like. So basically, the value of the threshold is the error rate below which we can actually correct. And once we can correct, the noisy quantum computation only needs to be slightly larger than the ideal quantum computation, and it will still work. So these constructions, these protocols, are extremely engineering-like. Basically, the threshold comes from designing a certain error correction procedure and a certain computation procedure on encoded quantum states, such that the errors don't propagate too much in the circuits, and so on and so forth. It's very engineering-like, and I personally don't like this. I think that it doesn't explain fault tolerance well, because really, it's a physics phenomenon. Below a certain threshold, we can think of this as if the system is capable of some completely different behavior, like ice and water. It's just like a phase transition: below it there would be macroscopic entanglement and the ability to control large-scale quantum correlations, and above it this would not be possible. And this distinction between two really very different behaviors in nature is, at this point, explained by computer science tools. And that is wrong, I think. I mean, really, an interesting physical phenomenon should be explained in a physics language. Right. And I think we're still missing such a theory.

Kevin Rowney 18:36

That's just so fascinating. Are you even more powerfully asserting that there's a deeper connection between, say, you know, the phases of matter, the transition between solid and liquid, and these quantum error correction thresholds? Is there a more powerful analogy there?

Dorit Aharonov 18:54

I think that there is some kind of an analogy between different phases of matter and the situation in fault-tolerant systems, in which below a certain threshold there is a different type of behavior of matter, in which long-range quantum correlations, controllable ones, are possible, whereas above it, they are not.

Sebastian Hassinger 19:20

Interesting. And I think you did a postdoc, I believe, at the Institute for Advanced Study at Princeton. Is that right? That's true. Yeah. I'm just curious if there are any sort of echoes or resonance with the work that von Neumann had done with the original IAS machine, where a lot of the foundations of classical error correction sort of get laid down in the mid-40s when they were building that system. Right.

Dorit Aharonov 19:50

Definitely, definitely. First of all, it was wonderful to be in that place. And yes, of course, I mean, this whole question of can you perform computation in the presence of noise, von Neumann studied it in the context of classical computation. And when me and my advisor thought about this, that was the starting point for us. Basically, von Neumann used majority gates to perform the corrections. And quantumly, this is much more complicated. But, yeah, in essence, they had it easy.

Unknown Speaker 20:29

Yeah, they had a classical... That's actually not true, because they had...

Dorit Aharonov 20:36

But the basic idea, the ingenious idea of von Neumann, to compute on encoded states, this is also something that we took from him; this also goes into the quantum world.
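Von Neumann's classical trick of computing on replicated bits restored by majority gates can be sketched in a few lines. This is a minimal three-bit repetition-code illustration, not his actual multiplexed construction:

```python
# A three-bit repetition code with majority-vote restoration, the classical
# ancestor of quantum error correction. We compute the exact probability that
# majority decoding fails when each copy of an encoded 0 flips independently
# with probability p.
from itertools import product

def majority(bits):
    """Von Neumann-style majority gate: restore a noisy replicated bit."""
    return int(sum(bits) >= 2)

def logical_error_prob(p):
    """Exact failure probability of 3-bit majority decoding at flip rate p."""
    total = 0.0
    for flips in product([0, 1], repeat=3):
        prob = 1.0
        for f in flips:
            prob *= p if f else (1 - p)
        if majority(flips) == 1:  # decoded bit differs from the encoded 0
            total += prob
    return total

p = 0.1
print(f"raw error rate {p}, after majority vote {logical_error_prob(p):.4f}")
```

For any p below one half, the encoded error rate 3p² − 2p³ is smaller than the raw rate p, which is why replication plus majority voting helps; quantumly, as Dorit notes, the measurement step that majority voting relies on is exactly what you cannot do naively.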

Kevin Rowney 20:51

So interesting, because, you know, phase transitions in matter, I mean, there's some deep water out there: statistical thermodynamics, and what it actually even means to express precisely and physically a phase transition. So that sounds like a fascinating domain of possible future research.

Dorit Aharonov 21:10

Yeah, I would love to see such a theory, and I'm actually working on that also today. But also, I think that there is some deep theory that's waiting to be discovered about this, about what it takes for quantum systems to be, you know, below the threshold, to be in this phase of matter.

Sebastian Hassinger 21:30

It's true. I mean, I guess the analogy does sort of extend, in that there is a kind of stability, right? If your propagation of errors is below the threshold, then you are in this stable state where you can perform computations, and essentially, everything else being equal, it'll just stay in that state of error correction. I guess that's what I'm saying. I mean, I kind of get what you're saying in terms of the analogy to phase transitions. Super interesting.

Kevin Rowney 22:02

Cool stuff. I don't know, I mean, just as observers of the space, it appears from where we sit that it's been a big year, this past year, in quantum error correction. It feels like there have been numerous breakthroughs, both theoretical and practical. I mean, do you feel like the momentum around the powerful expression of new quantum error correction technologies is growing and strengthening?

Dorit Aharonov 22:27

Yeah, definitely, certainly, from various directions. Also, you know, there are these discoveries of the LDPC codes, which are amazing mathematical objects that might have practical implications very soon. There are also amazing developments in the area of error mitigation. So right now, we are not yet at the place where we can actually perform really useful quantum error correction in physical systems. But on the other hand, there are the error mitigation techniques, in which we don't have reset gates, but we just have the ability to reduce the effect of errors on quantum circuits and quantum algorithms. So people often think of this as kind of a discontinuous path from the NISQ era to the fault-tolerant era. But actually, I'm more in favor of, I think Jay Gambetta is one of the advocates for, the opposite point of view, in which the progress towards fully fault-tolerant quantum computers is kind of continuous. And I think that's true. I think that basically we're going to see that it's not as if we'll get to the fault-tolerant era and then, that's it, we enter the door, this is where we are. We will see more and more evolutions and progress towards that, on larger and larger systems, with better and better noise rates, etc. So I see this as a more continuous path.
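Error mitigation, as distinct from error correction, can be illustrated with zero-noise extrapolation, one common mitigation technique (chosen here only as an example; the episode does not tie any company to a specific method): run the same circuit at deliberately amplified noise levels and extrapolate the measured expectation value back to zero noise. The numbers below are synthetic stand-ins for hardware results:

```python
# Sketch of zero-noise extrapolation (ZNE), an error-mitigation technique:
# estimate an observable at several amplified noise levels, fit a model,
# and evaluate it at the zero-noise limit. All measured values here are
# synthetic, standing in for real hardware estimates.
import numpy as np

noise_scales = np.array([1.0, 2.0, 3.0])   # noise amplification factors
measured = np.array([0.85, 0.70, 0.55])    # synthetic noisy expectation values

# Fit a line E(s) = a*s + b and evaluate at s = 0 (the zero-noise limit).
a, b = np.polyfit(noise_scales, measured, deg=1)
print(f"mitigated estimate at zero noise: {b:.2f}")
```

Note the contrast with error correction: mitigation post-processes classical measurement statistics to cancel the average effect of noise, while correction actively removes entropy from the quantum state during the computation, which is why it needs reset gates.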

Sebastian Hassinger 24:15

Yeah, it's interesting, too. I mean, we've seen, you know, Bacon-Shor demonstrated on trapped ions by, I think, Ken Brown and others. And more recently, the fault-tolerant experiment, you were saying, Kevin, on nitrogen-vacancy centers, I believe. It was below the threshold, right? They actually were able to, I mean, it's a very small system, but it seems like, and then, as you were saying, Jay Gambetta, IBM, and the superconducting circuits are pursuing more of a, you know, increase in gate fidelities, and finding new strategies for mitigating errors. It feels like there are different horses in the race, so to speak, that are all coming at it from different directions.

Dorit Aharonov 25:04

Right, and I think one of the things that are going to change, so, the difference between error mitigation and error correction is really whether we have reset gates, and also the noise rate, essentially. But it seems like it might be possible, I mean, it seems that systems are going to slowly evolve towards fulfilling all the requirements of fault-tolerant quantum computation. For example, reset gates might become easier to apply. You know, you need reset gates in order to get rid of entropy; the entropy gets into the system and starts warming up the system, and you have to get rid of it. And we proved already a long time ago that you basically need a constant rate of getting rid of entropy, because the entropy just comes in at a constant rate. But reset gates are often easier to do on the edges of the quantum computer, not necessarily very close in, especially in 2D systems. Maybe you would start with not sufficiently many reset gates, but still more than zero. So this is just an example of what one could see as an intermediate step between mitigation and fault tolerance.

Kevin Rowney 26:35

That's really fascinating material, thank you. Just for our audience, could you do a quick rundown on this idea of the reset gate?

Dorit Aharonov 26:44

Oh, so a reset gate, basically. Okay, so what's a reset gate? It's just a gate that takes any qubit, and no matter what the state of that qubit is, it replaces it by a qubit in the state zero. Now, it's of course going to be noisy, so it's not going to be exactly zero, but it's very close to zero. That's the reset gate. And why is it needed? Well, the point is that when you have errors and noise and decoherence, the entropy in the qubits is just accumulating, and you don't know exactly what the state is. So you can't just rotate it to the state zero. If you knew exactly what the state of a qubit was, and you knew that it was, for example, one, you could return it back to zero, and that wouldn't be a special gate. But a reset gate is something that actually takes a qubit with high entropy and turns it into a qubit with a particular, exact state, which has zero entropy, or very low entropy,

Kevin Rowney 27:45

because there must be all these unknown contributions to the entropy, you just don't know where the state is, with rotations and spin flips and so forth. And so the reset gate gives you surety about the exact establishment of a known state.

Dorit Aharonov 27:57

And once you have this known state, then this could be a place where you can throw your entropy into; that's sort of a place to put entropy, a kind of refrigerator. If your qubit is already very warm, or has a lot of entropy, then there's no place for it to put more entropy in, and then you can't use it in order to cool the rest of the system. So the real problem with errors that start accumulating in the quantum state is that the entropy grows and you want to get rid of it. Because entropy growing means that the state is no longer what you wanted it to be; various things that you don't know were added to the state, and you want to get rid of them. You want to somehow put this excessive entropy somewhere else. And for that, you need this somewhere else to exist close to the qubit that you want to correct. Right. So these are the reset gates: reset gates prepare these qubits in a state that, you know, is ready to take this entropy.

Sebastian Hassinger 29:01

And I guess the specific implementation, again, is an engineering challenge, depending on the modality of the qubit you're working with, right?

Dorit Aharonov 29:10

That's true. Yeah, yeah. And in some cases, you would just have to wait. I mean, sometimes the solution is just to wait until the qubit relaxes to its ground state, but that can take a long time. And in some cases there are actually active reset gates, where you measure, and then based on the outcome of the measurement, you correct the state of the qubit so that it becomes zero. But then you have to apply a measurement. And there are various approaches to this.
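The active reset Dorit describes, measure and then conditionally flip, can be sketched as a toy classical simulation. The initial probability and the perfect conditional X are assumptions for illustration, not a model of any specific hardware:

```python
# Toy simulation of an active (measurement-based) reset gate: a projective
# measurement collapses the qubit to 0 or 1, and if the outcome is 1, a
# conditional X gate flips it back, so the qubit always ends in |0>.
import random

def active_reset(prob_of_one, rng):
    """Measure a qubit that is |1> with probability prob_of_one, then flip
    it to |0> if needed. Returns the qubit's state after the reset."""
    outcome = 1 if rng.random() < prob_of_one else 0  # projective measurement
    if outcome == 1:
        outcome ^= 1  # conditional X gate: |1> -> |0>
    return outcome

rng = random.Random(0)
# A high-entropy qubit (close to maximally mixed) resets to |0> every shot:
results = [active_reset(0.7, rng) for _ in range(1000)]
print(set(results))
```

The entropy does not vanish, of course: the random measurement record is where it goes, which is the classical shadow of "dumping entropy somewhere else" from the discussion above.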

Sebastian Hassinger 29:40

Right. It's interesting, I mean, listening to you, it sounds like, as you said, you feel we're on a continuous path rather than a discontinuous path to fault tolerance. Then is it safe to say that you feel the challenges are primarily engineering in nature, and we have sort of enough science under our belt, so to speak, to enable that engineering to get to fault tolerance?

Dorit Aharonov 30:07

Well, I think that has been true from '96 or '97. The fault tolerance theorem basically provides a mathematical proof that, assuming the noise satisfies certain conditions and is below this threshold, you can actually perform error correction, and you can actually perform quantum computation in the presence of noise without paying too much overhead. And since then, I should say, of course, there has been a huge amount of excellent, you know, amazing work on fault tolerance, providing much, much better constructions and much more efficient constructions. So I think that the real argument, if there is an argument, is whether the assumptions underlying the fault tolerance theorem really hold. And also, assuming that the assumptions, mainly the locality of the noise, really hold, then there is the very, very immense technical challenge of achieving these requirements. But I don't think anyone thinks that it's new physics that needs to be discovered.

Kevin Rowney 31:30

Interesting. And Dorit, I don't want to corner you, but would you be willing to speculate on the timeline for this engineering? I mean, how long? It's so hard to answer, I know, and predicting the future is very hazardous stuff. But still, any speculation you care to share?

Dorit Aharonov 31:48

As I said, it's not a single point. We're not going to wake up one day and have a fault tolerant computer available somewhere in the world without any preparation. There are lots of things that need to happen, for example increasing the number of qubits that are controlled, as well as reducing the infidelities of the gates all over the quantum computer. Not just your best gates at infidelity ten to the minus three, but maybe also the rest of the gates, which have infidelities of ten to the minus two or so. So my guess is that this is going to be a continuous path. And my personal guess, and I realize I'm recorded now, is that it should probably be pretty soon, maybe within a few years, until we see some interesting quantum advantages that require a significant reduction in the noise, on a few hundreds of logical qubits. But this is not yet sufficient to see full-fledged fault tolerant quantum computers. For that, I think maybe the right order of magnitude is ten years, or something along these lines.

Kevin Rowney 33:19

Got it. So interesting. And you also mentioned this gradual transition out of the NISQ era into perhaps a more rigorous fault tolerant era of quantum computation. There are numerous people with skepticism, and some with faith, in the capacity of NISQ-era machines to make some sort of positive contribution toward not just the theory of computation but actual practical problems. Are you a NISQ optimist or a NISQ pessimist when it comes to real-world applications?

Dorit Aharonov 33:53

So, once again, I think that the exact definition of NISQ is not so sharp. But I am in fact a NISQ optimist, if you want to call it that, because of the following observation. In an old paper of mine with Michael Ben-Or, Noam Nisan, and Russell Impagliazzo, we observed, already in '96, that log-depth quantum circuits can be made fault tolerant even without reset gates, as long as the noise rate is below a certain threshold. And Richard Cleve and John Watrous, in a beautiful paper from 2000, I think, showed that Shor's algorithm can be done using a hybrid machine of classical computation and a log-depth quantum computer. If you combine the two results, and we didn't actually realize this at the time, then basically we've known without knowing, for a long time, that Shor's algorithm is in NISQ, meaning it can be done using NISQ devices. Recently I was at QIP, and there was a beautiful talk about a paper, I forget the list of authors, called "The Complexity of NISQ." They define the complexity class NISQ, and you can argue about the exact definition, but the real point is that they gave a definition. The definition is basically that you have a hybrid machine: a classical computer, capable of doing polynomial classical computations, with access to log-depth noisy quantum computers. And I'm saying log depth because after logarithmically many time steps, the entropy grows to the point that the state is basically as useful as a completely random state. So basically you can think of NISQ as this hybrid machine of classical computers with access to log-depth quantum systems, and that contains Shor's algorithm.
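A crude back-of-the-envelope model, not taken from any of the papers mentioned, of why depth is so limited without reset gates or error correction: if each of n qubits suffers an independent error with probability p per layer, the chance that a depth-d circuit remains error-free is about (1 - p) raised to the power n times d, so the usable depth shrinks quickly as qubits or noise are added.

```python
# Crude noisy-circuit signal model (illustrative only): each of n
# qubits passes through one gate per layer with error probability p,
# so after depth d the probability that NO error has occurred is
# (1 - p) ** (n * d). Once that falls below some floor, the output
# is dominated by noise; entropy has taken over.

def signal(n: int, depth: int, p: float) -> float:
    return (1.0 - p) ** (n * depth)

def usable_depth(n: int, p: float, floor: float = 0.01) -> int:
    """Largest depth whose error-free probability still exceeds `floor`."""
    d = 0
    while signal(n, d + 1, p) > floor:
        d += 1
    return d

# More qubits or more noise means sharply less usable depth.
print(usable_depth(n=100, p=1e-3))   # tens of layers
print(usable_depth(n=100, p=1e-2))   # only a handful
```

The model is far too simple to capture real noise, but it conveys the scaling: with hundreds of qubits and gate infidelities around ten to the minus three, only shallow, roughly logarithmic-depth circuits keep any signal, which is exactly the regime the hybrid NISQ model formalizes.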

Kevin Rowney 36:27

Wow.

Dorit Aharonov 36:29

So it contains Shor's algorithm even if these log-depth quantum systems are noisy, as long as the error rate is below a certain threshold. So the reset gates might not be the main issue; these log-depth quantum systems don't need reset gates. Basically, the issue is that the error rate needs to be below a certain threshold. So that actually shows that these log-depth quantum systems, if they come together with classical machines that control them, are extremely powerful.

Sebastian Hassinger 37:09

In fact, that gave me a lot more understanding of IBM's move last spring to shift the roadmap to much more emphasis on these very tightly coupled classical and NISQ-style machines. That sounds much more powerful than I was initially imagining.

Dorit Aharonov 37:35

I think that's a common intuition, that log-depth systems, or very shallow quantum circuits, would not be so powerful. But in fact, if they come together in a hybrid machine, they could get to do Shor's algorithm.

Kevin Rowney 37:56

And tons more, right? I mean, there's all sorts of powerful problems in the domain of quantum variational eigensolvers.

Dorit Aharonov 38:03

Yes, that's true. But, and maybe I'm wrong here, I'm not sure, I don't think there is any proof of an advantage there. I mean, for Shor's there's also no proof of advantage. But I don't know of any result where, in a hybrid machine, you apply a VQE computation with log depth and then get something which is conjectured to be classically hard. I don't know of such a thing. Maybe I'm wrong.

Kevin Rowney 38:42

I don't either. I would defer to you, of course. But if I'm not mistaken, I thought there was a ton of really powerful and interesting molecular geometry problems in this domain?

Dorit Aharonov 38:54

Definitely. But it's not clear, so let me clarify this point. There is a difference between quantum advantage which is practical and quantum advantage which is asymptotic. What I was talking about earlier was the question of asymptotic advantages. And I'm not aware of results that suggest that in those hybrid machines you would get these asymptotic advantages with log-depth circuits without quantum error correction. That's why I'm a little bit hesitant to say that in the NISQ era, without quantum error correction, you would expect asymptotic advantages from those VQE and other hybrid computations. However, we're also talking about practical advantages, and it may very well be that we will see really beautiful practical advantages very soon. And also, as I said, I think that very soon we'll actually be able to move from these NISQ devices to devices with some reset gates, and we already have reset gates available on some machines. So the NISQ devices are changing over time; they're getting more and more powerful. They don't just remain in the framework of no reset gates, no error correction whatsoever, where you just do the computation and wait for the entropy to accumulate.

Sebastian Hassinger 40:38

And on the practical versus asymptotic point: the paper that you did recently on random circuit sampling, I guess that was providing theoretical proof of not having asymptotic advantage in that sort of regime, right? I think what you proved was that the more qubits in the random circuit sampling experiment, the less advantage there was when they're noisy qubits. Is that right?

Dorit Aharonov 41:08

I wouldn't put it that way, that the more qubits you have, the less advantage you have. But what we showed is basically this. There is this branch of attempts to show quantum advantage, which is really heroic, I think, and there are amazing experiments that were done, like Google's experiment, that basically use random circuit sampling. You pick a random quantum circuit, you apply this random quantum circuit and measure at the end, you do that for several quantum circuits, and then you perform some statistical test on the outcomes of these measurements. For example, the linear cross-entropy benchmark, or other statistical tests; I don't want to go into the exact definitions. You apply a random quantum circuit, you measure, you take several results of those measurements, and you perform some classical computation on those. And the idea was that whatever the outcome of this classical computation is, random quantum circuit sampling can produce it with not-too-deep quantum circuits, whereas classical computation would require super-polynomial time to do so. That's if you're thinking of asymptotic advantages. And what we showed is that no matter what statistical test you apply, we can basically mimic this with a classical computer. We have a way to take the noisy random quantum circuit model and basically simulate it, simulate several circuits of it, and then compute the classical function, whatever classical function you want. If you can use it on your quantum circuits, then you can also use it on our classical simulator, and then we basically mimic the quantum experiment.
So that means, I mean, we mimic it to within some total variation distance, arbitrarily small, inverse-polynomially small. And once you can mimic that, this means that there cannot be scalable quantum advantage. I'm putting an asterisk here; there are lots of caveats to this, and I'll get back to those caveats. But basically the lesson is that we think random circuit sampling cannot be the basis for a scalable experimental violation of what's called the extended Church-Turing thesis. The extended Church-Turing thesis states that all computational models can be simulated in polynomial time by a probabilistic classical Turing machine. The whole quantum computation area relies on the belief that quantum computers violate this thesis, and we still don't have experimental evidence for this violation.
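For readers curious about the statistical tests mentioned above, here is a minimal sketch of the linear cross-entropy benchmark used in Google's experiment (normalization conventions vary across papers, and the tiny distribution below is an invented stand-in for a circuit's ideal output probabilities):

```python
# Linear cross-entropy benchmark (XEB), sketched: given bitstrings
# sampled from a (possibly noisy) device and the ideal output
# probabilities p(x) of the random circuit, compute
#     F_XEB = 2**n * mean(p(x_i)) - 1.
# Samples concentrated on the circuit's heavy outcomes push F_XEB
# up; uniformly random samples (fully depolarized output) give ~0.

def linear_xeb(samples, ideal_probs, n_qubits):
    """samples: measured bitstrings as ints; ideal_probs: x -> p(x)."""
    mean_p = sum(ideal_probs[s] for s in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1.0

# Invented 2-qubit example whose ideal distribution favors outcome 0.
ideal = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
print(linear_xeb([0, 0, 0, 0], ideal, 2))   # heavy-outcome samples: 1.8
print(linear_xeb([0, 1, 2, 3], ideal, 2))   # uniform-looking samples: 0.0
```

The catch, which the result discussed here exploits, is that the ideal probabilities must be computed classically, and under realistic noise a classical spoofer can score as well as the device.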

Kevin Rowney 41:15

It's a treacherous area. I mean, it's a fascinating, powerful intellectual quest to try to find experimental evidence that would violate the extended Church-Turing thesis, and there have been numerous attempts where people thought they had breakthroughs, and then those algorithms have been dequantized. Lots of attempts leading to a turnaround. So it seems like a very treacherous area in which to try to make a breakthrough.

Dorit Aharonov 45:09

Yes, but I would, so I think you're referring to the machine learning algorithms that were dequantized by Tang and collaborators. These are two different topics. The machine learning algorithms that were dequantized: this is not about experimental evidence for quantum advantage; it's about the question of whether ideal quantum computers, the mathematical model, can achieve exponential speedups over ideal classical computers. And within this arena of machine learning, those particular algorithms turned out to be not as powerful as people thought. They led to very fascinating results in the classical machine learning domain, but they weren't leading to quantum advantages. However, when people seek experimental evidence for quantum advantage, that's not what we're talking about. We're not talking about finding faster solutions to problems. We're talking about the question of whether, already now, before we run Shor's algorithm on a quantum computer in ten years, we have experimental evidence, a good reason to believe, that those quantum systems are really capable of exponential advantage over classical systems. And that is not something we currently have. So remember this phase transition? Basically, we haven't really probed the phase of the high-complexity regime of quantum mechanics, and we currently don't have any viable evidence that quantum systems are really capable of performing information processing in a way which resembles the high complexity that is required for the fault tolerance regime. And I should explain this a little bit more, because I think it's really mind-boggling, this fact that we don't have this yet. So one can ask.
So if we go back to Feynman: the original motivation of Feynman was basically the fact that quantum evolutions are extremely hard to simulate. So one can ask, well, isn't the very fact that quantum systems evolve as they do already experimental evidence that quantum systems perform something extremely difficult from the computational complexity point of view? This is sort of the first response to the question of whether we have any experimental evidence for scalable quantum advantage. And the answer is that, unfortunately, so far I don't think anyone has managed to actually define a computational problem that an existing physical system solves, such that we can verify that it actually solves it, and such that we have evidence that it solves sufficiently many instances that we can really view it as something that scales. There is no such suggestion. I'm not saying that it doesn't exist, and I'm not saying that we don't already have, somewhere in one of the physics papers, existing evidence. But no one has actually pointed at such evidence and said: this is something that I propose as a challenge for classical computers; just compute what these physical systems have already computed. And then if you can't, this is evidence for a quantum advantage.

Kevin Rowney 49:24

Such deep and powerful questions, and this detail is a fantastic story, thank you. So I would again ask you to speculate: where do you think this will net out? Do you feel like there will be, at some point, a powerful breakthrough in the theory here that would show this?

Dorit Aharonov 49:44

Okay, so I'm very optimistic about this, actually. I don't know if that's exactly your question, but one can ask: is there already existing evidence? And another question one can ask is: can we actually generate evidence from existing systems, like NISQ systems? These are two different questions. And because of the power that I told you about before, the power of NISQ devices, which already in principle can implement Shor's algorithm if the error rate is below a certain threshold, and I personally think they're probably very powerful also with higher error rates, I personally think that NISQ devices are capable of providing evidence for scalable quantum advantage. But I don't know yet how to point at that problem.

Sebastian Hassinger 50:56

That must be the topic of your next research paper, I guess, or something you're working on. Or a product of your startup, maybe?

Dorit Aharonov 51:09

You know, QEDMA is dealing with making quantum devices capable of performing VQE and other quantum algorithms as best they can, extracting the quantum power out of existing devices. And as an industry company, we're not worried about proving theorems and such; we're worried about the practicalities. And with my other hat, my research hat, I'm working on these questions.

Sebastian Hassinger 51:47

Right, right. I always think that's a really healthy interplay between the search for proofs and the search for practical application.

Dorit Aharonov 51:56

Yeah, I have to say that I was very surprised by the constructive interference between working in industry and working on theoretical questions. I was expecting destructive interference, and I'm actually seeing a lot of inspiration going in both directions.

Kevin Rowney 52:13

That's really interesting. Tell us more about that. How does that work? Do you just feel like it's a group spirit of pushing ahead and inspiring each other by competition? Or is there some other underlying framework of dialogue or theory that is helping enrich the field?

Dorit Aharonov 52:33

I have to say, I think it's much more mundane than that. It's basically the fact that you can switch between different types of thinking. It's like resting: when you go back to something that you haven't thought about for a little while, and you have been thinking in a different way, then your point of view is different. And I find these switches between different ways of thinking very fruitful.

Sebastian Hassinger 53:00

Really cool. Interesting. So we're at about an hour now and I'm mindful of your time; I don't want to make you stay up, because I know it's late in Israel. But I'm really curious: we were saying error correction has been making great strides, and the systems that are being built are getting higher and higher gate fidelity. Is there something in particular you're excited about in the next year or two that you think will be another step forward in that sort of continuous progress?

Dorit Aharonov 53:37

Well, we've been talking a lot about quantum computation and about error mitigation and quantum error correction. But one direction which we haven't mentioned at all relates to sensing: basically, using quantum computation ideas to perform new types of measurements, and measurements with much higher precision. I've been very interested in this aspect of using ideas from quantum computation, information processing, entanglement, and so on, in the context of thinking of new types of experiments which are interactive between the physical system that we want to experiment on and another device, which could be thought of as a small quantum computer. And we actually showed, a year ago I think, with Jordan Cotler and Xiao-Liang Qi, that such devices are capable of achieving exponential improvements in efficiency and resources. So measurements and experiments can look completely different in a world where you have quantum computers and quantum information processing. There could be entanglement between the systems that you can perform measurements on and some kind of information processing device. And this is just one example, our example. There was also an experiment done on Google's Sycamore along these lines, a time-reversal measurement, which was much more efficient. And I think that this direction, thinking of measurements and experiments in a different way, where quantum computers are aiding those experiments, is going to develop into a whole new field of its own.

Sebastian Hassinger 55:55

It's almost as if you're taking a step backwards in time from von Neumann to Norbert Wiener and cybernetics, right? Control systems and feedback systems yielding efficiency gains.

Dorit Aharonov 56:11

Right, except now they're quantum.

Sebastian Hassinger 56:14

Exactly.

Kevin Rowney 56:16

Sorry, this is such a cool topic, I'm so glad you brought it up, and we do agree: sensing is, I think, an underrated aspect of the QC movement. Just for our audience, could you help them, and us, by giving concrete examples of the types of experiments and sensors that might be arriving as a result of these advancements?

Dorit Aharonov 56:37

Okay, one example is the one I mentioned earlier, which may be somewhat contrived, but I think it's very interesting. It's a measurement of the time-reversal symmetry type of a physical system. What the experiment on Google's Sycamore showed is that if you entangle the system that you want to measure with another qubit, you can actually perform a measurement that classifies the time-reversal symmetry of that system much more efficiently. But there are lots of examples. The machine learning examples are of this type, where you can learn quantum systems aided by more quantum systems. In fact, there are many examples in many different directions. One example that I have a paper on, with Alex Retzker and others, is a setting where, during the measurement, you apply a very rudimentary error correction, so that the signal is being corrected while it's being measured. And since you're correcting it while you're measuring it, you can actually get to much higher precisions. So that's another example. But I can think of many different examples where, if you have a quantum processor next to the system that you're trying to measure and you make use of it, you could either increase the precision or do various tricks to improve the resource counts of the measurement.

Kevin Rowney 58:33

That's really fascinating. So many possibilities. Yeah.

Dorit Aharonov 58:37

Yeah, I believe it's really something with infinitely many possibilities. It's very tightly related to quantum algorithms. You can think of a quantum measurement as a quantum algorithm where the input is your quantum system, and you're trying to learn something about that quantum system, but you also have it entangled with a quantum computer. So it's like a generalization of a quantum algorithm.

Kevin Rowney 59:04

Oh. So you expect advances in adiabatic computation, quantum walks, maybe topological systems, all of those having pertinent, direct support for these new quantum sensing applications?

Dorit Aharonov 59:20

I wouldn't know specifically about quantum walks and so on. But yes, I think that quantum measurements, quantum experiments, and protocols are examples of quantum algorithms, and progress in quantum algorithms is going to affect them and to suggest new protocols and experiments.

Sebastian Hassinger 59:44

Well, you just opened the door to a whole other collection of topics. I would love to have you back.

Kevin Rowney 59:54

We're so grateful for your time. Thank you. Yeah, very enjoyable conversation.

Dorit Aharonov 59:58

Thank you very much. Thanks a lot.

Sebastian Hassinger 1:00:42

All right, that was a great conversation. Fantastic, super enjoyable. You know, Kevin, I thought it was so interesting to hear her say, essentially, that she feels it's primarily an engineering challenge to get to fault tolerant, sort of industrial-scale quantum computers. I typically hear more of an emphasis on discontinuous innovation being required, but she said exactly the opposite: that while it's quite challenging, there's going to be continuous progress towards that goal. Which was really super interesting to hear from such a credible source.

Kevin Rowney 1:01:24

She's got some backbone, some authority on that one, so you want to pay attention. And also the timeframe she's talking about, five to ten years, before some pretty powerful, disruptive new appearances on the scene. You know, I was also struck by, and this is a result that's been known for some time, but she was really helpful in underlining it, this rigorous notion that's been published recently on the complexity analysis of NISQ machines. The acronym is a bit of jargon, but you've now got rigorous mathematical definitions of the time complexity and space complexity characteristics of NISQ machines and what they can do. And what really struck me was her connecting this idea that some quantum algorithms can be rendered as essentially log-depth circuits, meaning the logarithm of the size of the input problem gives you a limit on the depth of the quantum circuit, which is of course quite pertinent to the amount of error correction and coherence time you have to muster in order to get to this computation. And, powerfully, Shor's algorithm for factorization resides in that class, sketching out the possibility that there could be, who knows, maybe even some early breakthroughs on real implementations of that algorithm, faster than most people anticipate. So I thought that was really fascinating.

Sebastian Hassinger 1:02:53

It's true. I mean, it's another example of potentially being overly fixated, as an industry, on the hypothetical Shor machine: whatever it is, a hundred thousand fault tolerant qubits that can run the algorithm in perfect cleanroom conditions. It really gave me perspective on why there's such an urgent drive by vendors like IBM, for example, whose roadmap last spring was revised to have much deeper and tighter integration of classical compute resources with the qubit chips. Because it sounds like what they're anticipating is that that type of tight coupling, with this sort of log-depth circuitry and a hybrid kind of algorithm, might actually yield some significant results in the near term.

Kevin Rowney 1:03:53

Seems so likely that they're following this same logic as well. Yeah. You know, I was also just amazed by her really interesting idea about this parallel between different quantum error correction regimes inside quantum computers and phases of matter. And the amazing thing is, she's not just talking about it being an interesting analogy, as in wouldn't it be helpful to think of it this way; she's actually asserting that there could be a tight mathematical isomorphism between the two. The physics and mathematics of phases of matter, that's some deep stuff, and seeing a connection between those two in rigorous mathematical terms? Wow, that would be not just a beautiful idea and an amazing observation, but a theorem.

Sebastian Hassinger 1:04:45

Yeah, I mean, she started with a description of the threshold theorem in these very operational, almost feeds-and-speeds terms. She rendered it down to the simplicity of: if your errors propagate slower than your correction can be applied, then you're good, right? That's essentially what it was, and that sounds like, as she said, an engineering problem. But then at the very end she flipped it around: there may be some really fundamental science embedded in that simple proposition. Which is really interesting. Good times. Excellent. Well, that was another enjoyable conversation for us. I hope our listeners enjoyed it as well, and we'll see you in the next episode.

Kevin Rowney 1:05:32

Thank you all for your time. Okay, that's it for this episode of The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney. Our cool theme music was composed and played by Omar Costa Hamido. Production work is done by our wonderful team over at Podfly. If you're at all like us and enjoy this rich, deep, and interesting topic, please subscribe to our podcast on whichever platform you may stream from. And if you like what you've heard today, consider reviewing us on iTunes and/or mentioning us on your preferred social media platforms. We're just trying to get the word out on this fascinating topic, and we'd really appreciate your help spreading the word and building community. Thank you so much for your time.