The New Quantum Era - innovation in quantum computing, science and technology

Has quantum advantage actually been achieved — or is the field still arguing over its own milestones? Dominik Hangleiter, one of the leading theorists working on quantum computational advantage, joins the podcast to make the case that it has, explain why so many physicists remain unconvinced, and map the path toward fault-tolerant, verifiable quantum advantage.

Show Notes

Why This Episode Matters
If you follow quantum computing and want to cut through the noise around quantum advantage claims, this episode is for you. Dominik Hangleiter — an Ambizione Fellow at ETH Zürich and postdoctoral fellow at UC Berkeley's Simons Institute — has spent over a decade studying the boundary between what quantum and classical computers can do. His March 2026 paper "Has quantum advantage been achieved?" synthesizes years of experiments, classical simulation attacks, and complexity theory into a clear-eyed assessment. Whether you're an experimentalist, a theorist, or simply quantum-curious, you'll come away with a sharper understanding of what's been demonstrated, what hasn't, and what comes next.

What You'll Learn
  • Why random circuit sampling became the primary arena for proving quantum advantage — and why the task's "uselessness" is a feature, not a bug
  • How the linear cross-entropy benchmark (XEB) works as a statistical proxy for verifying classically intractable quantum computation
  • Why audiences of physicists are still split on whether quantum advantage has been demonstrated, despite multiple experiments since 2019
  • What "peaked circuits" are and how they interpolate between random sampling and structured computation
  • How post-quantum cryptography (learning with errors) exploits problems that quantum computers can't solve — and what that reveals about quantum computation's limits
  • Why basic arithmetic is surprisingly hard for fault-tolerant quantum computers, and how that bottlenecks algorithms like Shor's
  • How fault-tolerant compilation co-designs quantum circuits with error-correcting codes to make advantage experiments scalable
  • The difference between "native" quantum operations and the overhead required for universal fault-tolerant computation
  • Why the interplay between quantum and classical computing strengths — not quantum dominance — may define the field's future

Key Quotes & Insights
  • "Really what sets random circuit sampling apart is that it's really programmable. I give an input to the device, I design a circuit — I draw it randomly, yes — but then I give the circuit to the device, and whoever controls the device runs the circuit and gives me back the samples." — On why RCS qualifies as genuine computation
  • "We typically do in physics experiments a lot of extrapolation, a lot of circumstantial experiments that validate that the experiment you really care about is actually what you want to probe. And that's the sense in which I think these random circuit sampling experiments have been verified." — On the physics-style epistemology of quantum advantage
  • "Classical computers are really good at doing basic arithmetic, but quantum computers — it's really hard to do basic arithmetic. And that's for the reason that fault tolerance is very restrictive in terms of the operations that you can do on encoded information." — On the surprising asymmetry between quantum and classical capabilities
  • "I can't just tell the quantum computer to give me the outcome I want. There's rules to it. And how those rules apply to computational problems that we face in the real world beyond quantum simulation is, I think, a really intriguing challenge." — On the structured nature of quantum interference
  • "Maybe there's a world where we can stitch together different hardware systems and won't have a single platform that wins the race." — On heterogeneous quantum architectures


Calls to Action
Dominik's Quantum Frontiers blog series is one of the most accessible deep dives on quantum advantage available anywhere — start there if you want to explore beyond this conversation. Links in the show notes.
Subscribe: Apple Podcasts | Spotify | YouTube | Amazon Music
Go deeper: Sign up for the newsletter at newquantumera.com for research highlights, commentary, and behind-the-scenes looks at upcoming episodes.
Connect: LinkedIn | Bluesky

Creators and Guests

Guest
Dominik Hangleiter
Dominik is a quantum scientist working at the interface between computer science, physics, and philosophy of quantum science. He is an Ambizione Fellow at ETH Zürich and postdoctoral fellow at UC Berkeley's Simons Institute.

What is The New Quantum Era - innovation in quantum computing, science and technology?

Your host, Sebastian Hassinger, interviews brilliant research scientists, software developers, engineers and others actively exploring the possibilities of our new quantum era. We will cover topics in quantum computing, networking and sensing, focusing on hardware, algorithms and general theory. The show aims for accessibility - Sebastian is not a physicist - and we'll try to provide context for the terminology and glimpses at the fascinating history of this new field as it evolves in real time.

Sebastian Hassinger (00:01.432)
Hi Dominik, thanks for joining me.

Dominik (00:04.509)
Hi, hi, Sebastian. Great to meet you.

Sebastian Hassinger (00:07.182)
I'm happy to have you on the podcast. And if you could, can you start just by sharing your origin story with us?

Dominik (00:14.949)
My origin story. I guess my origins, my quantum origin story. Yes, yes, yes. Great. Yeah. What is my quantum origin story? So, you know, I studied a bunch of different subjects, mostly physics, but then also some philosophy and math. And what really got me into quantum, I think, is this question of verification. This fundamental question: can you verify a quantum computation even though you have no way of simulating it on a classical computer?

Sebastian Hassinger (00:16.238)
Your quantum origin story, I guess, specifically.

Sebastian Hassinger (00:35.758)
Hmm.

Dominik (00:44.145)
And I guess that's what sort of got me into the field initially. And then I got really excited about complexity theory and all these topics, which I guess we're going to discuss in more detail today.

Sebastian Hassinger (00:53.43)
Yeah, absolutely. Absolutely. So you got your PhD from the Freie Universität of Berlin, I think, right? And you did your PhD under Jens Eisert, right? Yeah. I had him on the podcast, actually. I interviewed him at the Simons Institute a couple of years ago. He's a great guy. I think he's got

Dominik (01:00.227)
Exactly, yeah, in Berlin.

Dominik (01:05.829)
Yes, that's exactly right. Yes.

Dominik (01:12.397)
wonderful. Wonderful.

Sebastian Hassinger (01:18.72)
incredible energy and, you know, 27 things going at once all the time. He seems like a great collaborator. So the reason I wanted to have you on was a series of blog posts, initially on Caltech's Quantum Frontiers blog, that you just posted to the arXiv a couple of weeks ago, or maybe even last week, around quantum advantage. And I remember the original blog post grabbed my attention because

Dominik (01:23.613)
So true, so true. Yeah, I love Jens. He's great.

Sebastian Hassinger (01:48.654)
You started by saying you were giving a talk and you asked the audience, you know, who here thinks there has been a, you know, conclusive proof that we've reached quantum advantage. And you were shocked at how few people were really convinced. Most people were on the fence or very skeptical. Why was that surprising to you?

Dominik (02:08.976)
Well, I've been working on quantum advantage for, I guess, more than 10 years now. And one of these big moments was the 2019 demonstration of quantum supremacy, as they called it. And since then, there's just been a lot of development in theory and more experiments. I remember at the time, there was this big fuss: IBM put out a paper just a few days after, saying,

we could simulate it on our huge, I guess the biggest supercomputer that there is, but they didn't actually do it. Right. And then it took like a couple

Sebastian Hassinger (02:41.272)
Frontier, yeah, the largest supercomputer in the world. No, well, now it's not worth the effort, I think.

Dominik (02:49.188)
Yeah, I don't know what exactly the framing was. And then it did take a couple of years or so until, I think, there was a, to my eyes, pretty conclusive simulation of that experiment. But the field progressed, and there was more theory development and more experiments, which were bigger and hadn't been simulated. And so in my mind, when I gave those talks last year, I was like, surely everyone's going to be convinced of quantum advantage now. We've had

six experiments, and they've grown and they've gotten better. And we have all this theory around it now. And so I was really surprised when, across these very different audiences, there was not a consensus. It was very split between people who were convinced and people who were skeptical or thought there hadn't been a quantum advantage demonstration, and I think for very different reasons. I guess the blog posts were meant to address those potential reasons.

Sebastian Hassinger (03:46.872)
Yeah, yeah. And just to rewind and set the context of the 2019 experiment that Google carried out, this was a random circuit sampling experiment, right? Can you explain a little bit about exactly how that's designed, I guess?

Dominik (03:58.128)
That's exactly right.

Dominik (04:03.076)
Yeah, that's exactly right. I guess starting in the early 2010s or so, people had been starting to think about what would the first big milestone for the field be? And they came up with this idea of just running a really trivial experiment in some sense, which is just run a random quantum computation. And those computations, they started coming up with these arguments that

they would be really hard to simulate on a classical computer. And so this is random circuit sampling, which is, you know, you have a small quantum computer and you run whatever computation is kind of native on that device, but in a random way. And you do it as far as you can go until the noise drowns you, basically, or before the noise drowns you, ideally. And then you get samples. And the task, which is kind of an unusual task when you think about

computation, usually we think about it as, I want some number out of it. I don't know, the ground state energy of a molecule, or the factors of a large number. And so the sampling task is kind of unusual. And that's maybe been kind of unsettling to people, that it's such an unconventional computational task. But what I guess distinguishes it from what I would call more like

Sebastian Hassinger (05:05.826)
Mm-hmm.

Dominik (05:29.702)
physics experiments or analog simulation experiments is... let me start differently. You could have also thought that just letting some quantum system coherently evolve for some time, that's something that's hard to simulate on a classical computer. So you could be like, well, this is quantum advantage. But really what sets random circuit sampling apart,

Sebastian Hassinger (05:39.054)
Mm-hmm.

Dominik (05:59.246)
even though it is still kind of a native computation on a quantum device, is that it's really programmable. So it's really, you know, I sort of give an input to the device, right? I'm the user of the device, I design a circuit, I draw it randomly, yes, but then I give the circuit to the device, and then whoever controls the device runs the circuit and gives me back the samples. And I can, in principle, do the same thing on a classical computer, right? I can just simulate the circuit and, right, create samples.

Sebastian Hassinger (06:05.101)
Right.

Sebastian Hassinger (06:24.43)
Right.

Dominik (06:27.802)
And really the point is that one is much harder than the other.
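
The asymmetry Dominik describes can be made concrete with a toy sketch (a hypothetical illustration, not the Google experiment): brute-force statevector simulation of a small random circuit of Haar-random single-qubit gates and CZ gates. The classical memory cost is 2^n complex amplitudes, which is exactly why this naive approach gives out somewhere around 40 to 50 qubits.

```python
import numpy as np

def random_circuit_samples(n_qubits, depth, n_samples, seed=0):
    """Sample bitstrings from a toy random circuit by brute-force
    statevector simulation. Memory cost: 2**n_qubits amplitudes."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0  # start in |00...0>

    def apply_1q(psi, u, q):
        # Apply a 2x2 unitary u to qubit q (qubit 0 = most significant).
        psi = psi.reshape(2 ** q, 2, -1)
        return np.einsum('ab,ibj->iaj', u, psi).reshape(-1)

    def apply_cz(psi, q):
        # Controlled-Z on neighbouring qubits (q, q+1).
        psi = psi.reshape(2 ** q, 2, 2, -1).copy()
        psi[:, 1, 1, :] *= -1
        return psi.reshape(-1)

    for layer in range(depth):
        for q in range(n_qubits):
            # Haar-random single-qubit gate via QR of a complex Gaussian.
            g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(g)
            state = apply_1q(state, u, q)
        for q in range(layer % 2, n_qubits - 1, 2):  # brickwork CZ layer
            state = apply_cz(state, q)

    probs = np.abs(state) ** 2
    probs /= probs.sum()
    return rng.choice(dim, size=n_samples, p=probs), probs

samples, probs = random_circuit_samples(n_qubits=8, depth=10, n_samples=100)
print(len(samples), probs.size)
```

At 8 qubits the classical side is trivial; the quantum device's job in an advantage experiment is to produce the same kind of samples at a size where this simulation no longer fits in any computer.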

Sebastian Hassinger (06:31.03)
Right. And I guess that would distinguish it from other advantage experiments that have been, as you say, either analog or that lack that sort of universal computation aspect to it. Is that right?

Dominik (06:47.108)
Right, exactly. And there is a continuum. There are these maybe analog simulators where you can control some parameters. So it's like a very restricted model of computation, if you will. Two of the first proposals for these random circuits were IQP circuit sampling and boson sampling, both of which are also not universal circuit classes. And yet they're, in my eyes at least, much closer to this ideal of a universal programmable quantum computer than

Sebastian Hassinger (07:15.661)
Right.

Dominik (07:16.037)
Maybe these analog simulators where you just have a physical system that has some native dynamics. Exactly.

Sebastian Hassinger (07:22.092)
Right. That's parameterized. You can plug in your values, but you don't actually have control over the time evolution or whatever the dynamic is that it's carrying out. Yeah. And so, you mentioned there is a plausible, very convincing simulation of the original Google experiment. Is that because they were limited by the number of qubits or the connectivity or the coherence time? But if it was,

Dominik (07:30.063)
Exactly, yeah. Exactly,

Sebastian Hassinger (07:51.668)
if the Sycamore chip had been higher quality and less NISQ-y, could that experiment have been unsimulatable?

Dominik (08:01.924)
It's a little bit of everything, I would say. The number of qubits was maybe not that large; the circuit depth, so the number of layers of elementary operations that you do, wasn't that large; and the fidelity, so the quality of the computation, wasn't that high. And the simulation really used all of those properties in order to just about be able to simulate it, basically.

Sebastian Hassinger (08:03.853)
Mm.

Sebastian Hassinger (08:26.252)
Right, right. Because I mean, I noticed when you look at quantum computation simulation packages like PennyLane or that sort of thing, they seem to trade off the sparse connectivity with the number of qubits they can simulate. You'll see claims of being able to simulate hundreds of qubits, but it's with very sparse connectivity. And that's sort of one of the bigger dials that you can adjust to make something tractable or non-tractable in a quantum context, I think, right? Yeah

Dominik (08:56.579)
Right. Yeah, so really, the simulation algorithms that people use are these tensor network algorithms. And there, we really think of a computation as something in space time. So you really think of a big block of elementary tensors, and you stitch them together. And the simulatability is really a property of this space time block, not so much of the individual parameters.

Sebastian Hassinger (09:01.995)
Right.

Sebastian Hassinger (09:18.7)
Yeah, yeah. So, I mean, random circuit sampling is an arbitrary task. It doesn't have sort of a pragmatic agenda or outcome to it. But it seems like, from what you've written, that you're making an argument, back to your point about how you verify something that is classically hard to do, that there is this method, XEB, that sort of emerged out of random circuit sampling. It's one of the tools that potentially we can use for verification beyond that simulation barrier. Is that right?

Dominik (09:53.916)
Yeah, maybe that's not how I would phrase it. So I wouldn't actually call random circuit sampling verifiable. Not verifiable in the sense that I guess I was alluding to originally, where we can really be convinced without any doubt that this is what has been done. The problem that XEB solves in the context of random circuit sampling is really that

Sebastian Hassinger (09:57.6)
Okay.

Okay.

Sebastian Hassinger (10:14.892)
Mm.

Dominik (10:21.241)
the distribution, so the probability distributions that these samples come from, they're really flat. So they're in some sense quite close to uniform samples. In particular, when you get samples from this quantum device, you'll never see the same sample twice. So this makes it really hard to distinguish against, say, a uniformly random sample. And that's the problem that XEB solves, basically by

Sebastian Hassinger (10:30.903)
Hmm.

Sebastian Hassinger (10:42.764)
Right.

Dominik (10:48.123)
taking the samples from the device and then correlating those with the ideal probabilities, like the probability that I should have seen that sample. And then you'll eventually see a small bias towards larger probabilities. And that signal you can pick up with this benchmark. So that's the problem it solves. The problem it doesn't solve is letting you verify random circuit sampling

more efficiently than you can simulate it, because to compute XEB, you still need to actually compute those probabilities, which is exactly the task that we started out saying is hard.
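
As a concrete illustration, here is a minimal sketch of the linear XEB estimator, run against a toy Porter-Thomas-style distribution. The distribution and all parameters are illustrative assumptions, not the experimental setup; the point is that faithful samples score near 1 and uniform noise scores near 0, and that the estimator needs the ideal probabilities, i.e. the classically hard simulation itself.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * E[p(x)] - 1 over the samples.
    Computing it requires the ideal probabilities p(x), which is exactly
    the classically hard part."""
    d = 2 ** n_qubits
    return d * np.mean([ideal_probs[s] for s in samples]) - 1.0

rng = np.random.default_rng(0)
n = 10
d = 2 ** n

# Toy stand-in for a random circuit's output distribution: exponentially
# distributed (Porter-Thomas-like) probabilities over all 2^n bitstrings.
probs = rng.exponential(1.0 / d, size=d)
probs /= probs.sum()

ideal_samples = rng.choice(d, size=50_000, p=probs)  # faithful "device"
noise_samples = rng.integers(0, d, size=50_000)      # pure uniform noise

print(linear_xeb(ideal_samples, probs, n))  # close to 1
print(linear_xeb(noise_samples, probs, n))  # close to 0
```

The small bias Dominik mentions shows up here as the mean sampled probability sitting slightly above 1/2^n for the faithful sampler.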

Sebastian Hassinger (11:19.253)
I see.

Sebastian Hassinger (11:26.67)
Yeah.

Right, right. Yeah, I mean, I find that threshold really fascinating, and I'm sure in some of the same ways, though not at the same depth as you, because it feels like we're heading towards a horizon where we'll have these machines that are kind of the ultimate black boxes, in a way. Every classical computation you could, with sufficient pens and paper and human hands, replicate or verify in some sort of classical way. Beyond this threshold of quantum advantage, there isn't a direct way to verify the work, because it is by definition not simulatable. What are the ways that you think might provide means for verification past that horizon, where it's no longer simulatable classically?

Dominik (12:23.169)
Right, right. So I guess there's a bunch of layers to that. So maybe I'll start from how you actually... you know, I argued in the piece that we have actually demonstrated quantum advantage. So I must have an argument that there is some sort of verification, right? So the argument there is really that, while we can't actually compute XEB, what we typically do in physics experiments is a lot of, you know, extrapolation, a lot of

Sebastian Hassinger (12:39.596)
Right.

Dominik (12:53.465)
a lot of circumstantial experiments that validate that the experiment you really care about, or the property you really care about in the experiment, is actually what you want to probe. And that's the sense in which I think these random circuit sampling experiments have been verified. There's a lot of circumstantial evidence that these devices actually sample from states with reasonably high fidelity, using maybe proxies for the XEB,

using some theory about how the fidelity behaves as a function of the noise properties of the device. There are also different ways where you maybe run the circuit, and then you invert it, and then you hope to see the sample that you started out from, essentially. And all of those together sort of build the foundation on which I would argue that these random circuit sampling experiments have been validated to the same extent that

Sebastian Hassinger (13:37.065)
Mm, right.

Dominik (13:47.739)
any physics experiment is, and I'd make the comparison to the Higgs boson, to gravitational waves. So that's maybe the bottom layer at this threshold, I would say. And then beyond that, we do now have some schemes that are based on classically delegating a computation to a quantum device under the assumption that the quantum computer can't do certain computations.

Sebastian Hassinger (13:49.922)
Right.

Dominik (14:17.05)
So it's sort of using the limitations of quantum computers in order to force them to actually do what you want them to do. So those are called, maybe, cryptographic proofs of quantumness if you just care about quantum advantage, but then you can leverage those into full-on quantum verification. That's maybe one thing you can do. Another thing you can do is you can think of yourself as the verifier, say,

Sebastian Hassinger (14:20.066)
Hmm.

Sebastian Hassinger (14:30.712)
Mmm.

Dominik (14:46.956)
as someone who has maybe a small quantum computer. Maybe I can produce single photons in a certain quantum state, and I can send those to the quantum device, maybe a big quantum computer that can do entangling operations, all the things we like. And then based on that, I can also do verification. So there's a bunch of different ideas out there which leverage, yeah, in maybe the broadest sense, some limitations of quantum computations.

Sebastian Hassinger (15:12.49)
Interesting. Okay, so two things that I wanted to ask about. One, you mentioned inverting the circuits. Would peaked circuits be sort of in that category, where you're sort of embedding a signal in the inputs and seeing if you're picking that up in the output distribution?

Dominik (15:34.586)
Yes, yeah, yeah. Yeah, so that's a good point. So peaked circuits, maybe I'll step back a moment and say what peaked circuits are. So peaked circuits are the idea that you design families of quantum computations which are more like traditional computations, in the sense that a certain outcome is the outcome you expect. And I said before that these random circuit sampling experiments are not like this at all.

So peaked circuits kind of want to interpolate between this ideal of a computation where you just get the outcome you want, the factors of a number, and these almost completely random samples. By exactly, as you say, designing circuit families in a way that hopefully there will be a peak that you know the location of when you design the family. Exactly.

Sebastian Hassinger (16:30.018)
You're anticipating it, yeah, yeah.

Dominik (16:32.57)
So that's exactly the idea, but you would still hope that these arguments for hardness from random circuits will carry over to this regime, even though now the distribution is much more structured.

Sebastian Hassinger (16:44.043)
Right. Right.

Right, Yeah.

Dominik (16:50.746)
But it's unlike this inversion idea, because an ideal inverted circuit is very trivial to simulate. I know exactly what I should get. So yeah, in that sense, it's very much unlike it. Those are really more a proxy to estimate the quality of your gates.

Sebastian Hassinger (16:53.719)
Okay.

Sebastian Hassinger (16:58.568)
Mm-hmm, right, right. Throw a negative sign in front of it.

Sebastian Hassinger (17:08.814)
Mm.

Sebastian Hassinger (17:13.792)
Okay. Okay. And then you mentioned cryptographic proofs. Is that in the sense that you said, using things that are difficult for the quantum computer to do? So what would be difficult for the quantum computer? Because we typically think of cryptography in the quantum sense as quantum computers magically having the ability to factor very large numbers. So what, cryptographically, is difficult for a quantum computer?

Dominik (17:37.58)
Right, yeah.

Dominik (17:41.729)
Yeah, so exactly. As you correctly say, one of the applications, I guess, of quantum computation is to break crypto. Yeah, I mean, I don't think it's a very good application, but, you know, here we are. Right. But that's really a property, at least to the extent that we know it so far, of the cryptosystems that we use at the moment, which are mostly

Sebastian Hassinger (17:50.474)
or threats.

You

Dominik (18:08.996)
based on integer factorization, the RSA cryptosystem. But I guess since the advent of quantum computation, probably even before that, people have been thinking more about cryptosystems that would not be breakable by a quantum computer, or alternative cryptosystems that maybe would be breakable by a quantum computer. So just to explore this world of what kinds of cryptosystems are breakable, what kinds aren't breakable, what the limitations are.

And one of the, I would say, most prominent such problems, where I think there's pretty much a consensus that quantum computers would not be able to break it, though people are still a little bit unsure, I would say, is this problem called learning with errors, which is basically the problem of... yeah, how do I explain it? I guess I like to think of it as decoding under noise.

Sebastian Hassinger (19:07.566)
Mm.

Dominik (19:08.1)
So you have some linear classical code where... OK, this is getting kind of into the weeds. But yeah, you have some linear code. And then you sort of add errors to a code word. So a code word is some element of a subspace, a binary subspace. And then you add errors to it in a sort of sparse way. And the decoding problem is,

Sebastian Hassinger (19:15.79)
It's okay. It's all right. That's all right.

Sebastian Hassinger (19:24.238)
Hmm.

Dominik (19:36.256)
once I give you the erroneous codeword, to go back to the original codeword. And that problem turns out to be really hard.
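
A toy instance makes the "decoding under noise" picture concrete. This sketch builds Learning With Errors samples; the parameters are purely illustrative (real lattice cryptography uses far larger ones). Without the error term, recovering the secret would be plain Gaussian elimination mod q; with it, the problem is believed hard even for quantum computers.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 97   # modulus (toy-sized; real schemes use much larger parameters)
n = 8    # dimension of the secret
m = 32   # number of noisy equations handed to the attacker

s = rng.integers(0, q, size=n)        # the secret vector
A = rng.integers(0, q, size=(m, n))   # public random matrix
e = rng.integers(-2, 3, size=m)       # small errors in {-2, ..., 2}
b = (A @ s + e) % q                   # the "erroneous code words"

# The attacker sees only (A, b) and must recover s. Without e this is
# just linear algebra mod q; the small errors are what make it hard.
print(A.shape, b.shape)
```

The Fourier-transform structure that makes factoring easy for a quantum computer has no known analogue here, which is the asymmetry Dominik goes on to describe.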

Sebastian Hassinger (19:42.115)
Hmm.

Interesting. Is that because of no cloning and other characteristics of quantum states that make it difficult to reconstruct or to make copies, I guess?

Dominik (19:58.69)
Yeah, why is that? I don't think it's related to no cloning.

Dominik (20:06.284)
Yeah, I mean, I don't think I can give you a deep reason for why that is right now. I guess it's just not the kind of structured problem that maybe factoring is. Factoring is really nice for quantum computers because you can do a Fourier transform, and it turns out, after some classical analysis, there's a really nice structure to the factoring problem which the quantum algorithm can make use of. And in contrast, these errors, you know,

Sebastian Hassinger (20:09.856)
Interesting. That's interesting.

Sebastian Hassinger (20:16.206)
Hmm.

Sebastian Hassinger (20:22.712)
Right.

Sebastian Hassinger (20:28.621)
Hmm.

Sebastian Hassinger (20:33.91)
Interesting.

Dominik (20:36.611)
that they kind of break all these nice primitives we have for quantum computers like the Fourier transform. Yeah.

Sebastian Hassinger (20:39.958)
Right.

Right. And I mean, there's a theme that runs through your work, which is, you know, this contrast between the native abilities of quantum computers and the native abilities of classical computers. And it seems to me like there's a lot of value in probing exactly these strengths and weaknesses, not only from a verification perspective. I mean, the search for additional algorithms

that will, you know, provide some massive advantage on quantum computers has been very, very difficult. And maybe in part that's because we don't understand those fundamental native building blocks of quantum capabilities. Is that sort of part of your... Okay.

Dominik (21:25.859)
That's absolutely right. Yeah, yeah. That's exactly the kind of deep question I'm really excited about: really understanding at a deep level what it is about quantum computers that makes them more powerful, what the types of operations are, maybe beyond some of the hand-wavy arguments that people give, let's say. Yeah.

Sebastian Hassinger (21:31.66)
Right. Yeah.

Sebastian Hassinger (21:48.253)
Yes. That's really interesting. I mean, in my mind, there's sort of two categories. When I think of native capabilities, I think of Feynman's original sort of pronouncement in 1981, you know: nature is quantum, and if you want to simulate nature, you're going to need a quantum computer. And obviously there are quantum simulators, which are non-universal, but once you get universal programmability,

you have enormous power in simulating, in choosing what to simulate and how to simulate it. But then there's this whole category, which Shor is the most notable example of, which is what I would call horizontal applications, applications outside the simulation of nature, which have

some other kind of commercial value or value creation possibilities in other industries that are not just about simulating material or simulating a drug or those types of things. Is that sort of how you map those kinds of categories as well?

Dominik (22:52.665)
Yeah, pretty much. I mean,

Dominik (23:03.545)
Let me try and think of a good way to answer this question. So certainly, this idea of simulating quantum systems using a quantum computer is very natural. Obviously, I mean, in hindsight, I guess, if I have the same type of system somehow, then it'll be much easier to simulate other systems of the same type. And so really, the interesting applications to me lie in these more like

cryptographic, or sort of beyond-simulating-quantum-systems, applications, and seeing what kinds of elementary tasks there are that quantum computers are better at, and why it is that they're better at them. And maybe one answer that's often given is this idea that quantum computers have this capability to do destructive interference. So I guess that's like a mechanism behind,

Sebastian Hassinger (23:44.536)
Mm-hmm.

Sebastian Hassinger (23:57.71)
Mm-hmm.

Dominik (24:00.29)
say, the quantum Fourier transform or Grover's algorithm, where slowly you dampen out what I like to think of as computational paths. Often we think of quantum computations as running in parallel. But really, that's not the whole story. Really, you need to actually make sure that in the end, only the things that matter end up in the outcome. And so you need a way to actually dampen out the things that don't matter.

And in some sense, this interference is maybe an answer to that, but exactly for which problems we can use this interference, because it's very structured interference somehow. Like the laws of quantum mechanics are not arbitrary. I can't just tell the quantum computer to give me the outcome I want. There's rules to it. And how those rules apply to, say, computational problems that we face in maybe the real world beyond

Sebastian Hassinger (24:35.182)
Mm.

Sebastian Hassinger (24:42.976)
Mm-hmm. Right.

Sebastian Hassinger (24:49.804)
Right. Yeah.

Dominik (24:57.689)
quantum simulation, which I guess scientists are really interested in, is I think a really intriguing challenge. Maybe also how it interfaces with what classical computers are good at. So one thing I'm really excited about is to think about just basic arithmetic. Classical computers are really good at doing basic arithmetic, but for quantum computers, it's really hard to do basic arithmetic. So yeah, I think those are...
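
The structured destructive interference Dominik points to can be seen in miniature in a Grover iteration, sketched here with plain vectors (a toy, hypothetical search instance, not anything from the episode): each step damps the amplitudes of the unmarked computational paths and boosts the marked one.

```python
import numpy as np

n = 4                # qubits
N = 2 ** n           # size of the search space
marked = 5           # arbitrary marked item

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all paths

oracle = np.ones(N)
oracle[marked] = -1.0                # phase-flip the marked item

# The optimal number of Grover iterations is about (pi/4) * sqrt(N).
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    state *= oracle                    # mark by phase
    state = 2 * state.mean() - state   # inversion about the mean

# Interference has concentrated nearly all probability on the marked item.
print(abs(state[marked]) ** 2)
```

Note how constrained the rules are: the only moves available are a phase flip and a reflection, which is exactly the sense in which you can't just tell the quantum computer to hand you the answer.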

Sebastian Hassinger (25:09.987)
Right.

Sebastian Hassinger (25:22.338)
Hmm. Because of the sampling, the probabilistic sampling of the outputs?

Dominik (25:28.737)
No, not really. So this is for a different reason, I would say. This is really for the reason that to do quantum computations, we need this thing called fault tolerance. And quantum fault tolerance is very restrictive in terms of the operations that you can do on encoded information. And arithmetic is sort of one of the things that turns out to be really hard, at least in the ways that we know how to do fault tolerance.

Sebastian Hassinger (25:31.949)
Okay.

Sebastian Hassinger (25:39.362)
Mm-hmm.

Sebastian Hassinger (25:47.468)
Hmm.

Sebastian Hassinger (25:54.286)
Oops. Right.

Dominik (25:56.525)
Yeah, and so this is really a bottleneck for running Shor's algorithm now, like actually doing this arithmetic and compressing these circuits down, and understanding why it is that for this task that classical computers are really good at, like it's really simple for them to do arithmetic, quantum computers struggle somehow. And then there are other tasks that quantum computers are really good at, but classical computers struggle with. Yeah.
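[The "arithmetic" at issue here is concrete: Shor's algorithm has to perform modular exponentiation coherently, the same operation a classical machine does in a few lines. A minimal classical sketch (illustrative only, not something discussed verbatim in the conversation):]

```python
def modexp(a: int, x: int, N: int) -> int:
    """Square-and-multiply computation of a**x mod N: trivial on
    classical hardware, but the dominant cost when implemented as a
    reversible, fault-tolerant quantum circuit in Shor's algorithm."""
    result = 1
    a %= N
    while x:
        if x & 1:                     # low bit of exponent set
            result = (result * a) % N  # multiply it into the result
        a = (a * a) % N                # repeated squaring
        x >>= 1
    return result
```

[Each of those multiplications becomes a large reversible circuit once encoded fault-tolerantly, which is roughly why "compressing these circuits down" is the bottleneck Dominik describes.]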

Sebastian Hassinger (26:00.01)
Right.

Sebastian Hassinger (26:19.458)
And so that's sort of the segue, or the pathway, where your work is turning into that fault-tolerant compilation kind of research as well, right? Yeah.

Dominik (26:30.166)
Yeah, absolutely. Exactly. Yeah. So this is something that I guess I've worked on more recently: taking these ideas of what the native computations are, say these random circuits, for sort of plain quantum devices where everything is noisy, and merging them with these early fault tolerance capabilities that we have now, and seeing what the native operations are now and

what I can really easily do on those devices.

Sebastian Hassinger (27:02.774)
Right, right. And I mean, in those early models of fault tolerance, or maybe in any foreseeable model of fault tolerance, there's a round tripping between the quantum and the classical, right? There needs to be sort of mid-circuit measurement of some form, and then a feed-forward that detects and corrects the error syndromes as the computation proceeds. Is that potentially where...

the interplay of the strengths and weaknesses of quantum and classical can be, you know, sort of applied?
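[To make that round trip concrete, here is a toy sketch of the classical half of the loop for a three-qubit bit-flip repetition code, a hypothetical minimal example rather than any particular vendor's decoder: two parity checks are measured mid-circuit, and a classical decoder feeds back which qubit to flip.]

```python
def decode_repetition(s01: int, s12: int):
    """Given the two syndrome bits (parity checks Z0Z1 and Z1Z2)
    measured mid-circuit on a 3-qubit repetition code, return the
    index of the qubit to correct, or None if the syndrome is clean."""
    if s01 and s12:
        return 1   # both checks fire: the shared middle qubit flipped
    if s01:
        return 0   # only the first check fires: edge qubit 0 flipped
    if s12:
        return 2   # only the second check fires: edge qubit 2 flipped
    return None    # no error detected, apply no correction
```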

Dominik (27:38.008)
I wouldn't say so yet, because that's really just a layer below the algorithmic side. Like, this is really at the layer of how do I correct for errors in the quantum computer. However, there are a lot of ideas out there now for how to interface quantum with classical computers, how to maybe use a quantum computer to get a good starting point for a classical algorithm or so. And those are really, I think, the ideas

Sebastian Hassinger (27:45.78)
Hmm, I see I see

Sebastian Hassinger (27:59.608)
Right.

Sebastian Hassinger (28:04.267)
Right, right, Yeah.

Dominik (28:07.373)
that we need, to understand how quantum computers can interface with classical computations and really play to their strengths.

Sebastian Hassinger (28:13.334)
Right. Right. Okay. So then, to rewind to where I got it wrong: what you're talking about is a compilation of a circuit in a fault-tolerant regime that takes advantage of the native capabilities and avoids the weak points, or the blind spots, in that set of capabilities. Okay. Yeah. I find it really fascinating that

Dominik (28:34.474)
Exactly, yes. That's exactly right, yes.

Sebastian Hassinger (28:40.606)
not only is fault tolerance already giving rise to very divergent definitions of what a logical qubit is, definitions that differ from modality to modality and vendor to vendor and variation of modality to variation of modality, but it also has huge implications for what the abstraction looks like for how to program logical qubits and compile circuits for fault tolerance. I mean,

It seems like you've got your work cut out for you.

Dominik (29:12.888)
Definitely. Yeah, definitely. I think there's a lot to be done in the next few years on these different levels of the, if you want to call it, hardware stack.

Sebastian Hassinger (29:24.428)
Yeah. And, I mean, you're operating in the realm of theory. Do you have a sense for how to sort of localize the theory to a specific modality, a specific vendor, a specific architecture?

Dominik (29:42.993)
I'm not quite sure I got that.

Sebastian Hassinger (29:45.58)
Okay. Well, I mean, the implementation of the theoretical stuff that you're doing, the theoretical fault-tolerant compilation, let's say. That implementation is going to be very different from one type of qubit to another, one vendor to another. Have you tackled that, sort of the implementation side of it?

Dominik (30:00.646)
OK, I get it, I get it. Yeah, yeah, totally. I guess I'm just not used to the vendor lingo. But right, yeah, that's right. No, exactly. And I think that's what's so exciting about it: maybe some hardware platforms have different native capabilities than other hardware platforms. And OK, not maybe. They do. The question is, do we find

Sebastian Hassinger (30:08.236)
Yeah, yeah, yeah. I'm a technologist, not a physicist.

Sebastian Hassinger (30:21.164)
Yeah. Yeah. They do.

Dominik (30:29.27)
ways to leverage those native capabilities, from the base hardware level up to maybe an algorithmic level, where some algorithmic primitives, as I like to call them, are really easy on some kind of hardware and not so easy on another, and vice versa. And maybe there's a world, or maybe our world is like this, where we can sort of stitch together these different hardware systems and won't have a single platform that

wins the race.

Sebastian Hassinger (30:59.714)
Yeah. Yeah. Well, Dominik, that's fascinating. I mean, I think it's really interesting how the question of simple verification of advantage opens the door to all of these much deeper and broader questions. And I can tell you have a philosophy background from the way you approach these questions. I'm very impressed, and I'm excited to see where your research goes next. So thank you so much for joining me today.

Dominik (31:25.09)
Great. Yeah. Thank you so much, Sebastian. This has been a lot of fun.

Sebastian Hassinger (31:30.286)
Great. I will hit stop.