Your hosts, Sebastian Hassinger and Kevin Rowney, interview brilliant research scientists, software developers, engineers and others actively exploring the possibilities of our new quantum era. We will cover topics in quantum computing, networking and sensing, focusing on hardware, algorithms and general theory. The show aims for accessibility - neither of us are physicists! - and we'll try to provide context for the terminology and glimpses at the fascinating history of this new field as it evolves in real time.
Sebastian Hassinger:Welcome back to the podcast. We are joined today by Professor Zoe Holmes from EPFL in Lausanne, Switzerland. She's a professor of physics there. She's been there for a few years. She's got a PhD from Imperial, and she did her undergraduate at Oxford.
Sebastian Hassinger:And she's been doing a lot of interesting work in variational techniques, and in seeing what the limits are of what we can get out of NISQ machines, or, you know, algorithms that can run on the systems we have today. So welcome, Zoe. Thank you for joining us.
Zoe Holmes:Cool. Thanks for having me.
Sebastian Hassinger:Excellent. So I became interested in your work last year sometime. There was a paper on quantum imaginary time evolution that seemed like a novel approach to the kind of variational techniques, you know, VQE and other variational hybrid algorithms, that are trying to get the most out of these NISQ machines that are available today. Is that something you're still working on? Are you sort of probing what the limits are, what we can expect to get out of these machines?
Zoe Holmes:So that paper that you're talking about, it's all about going back to sort of the philosophy that nature's actually really good at finding ground states.
Sebastian Hassinger:Right.
Zoe Holmes:But if we try and force ourselves to find them, if we just search randomly, that seems pretty hard. So nature's good at finding ground states by cooling. So let's try and use a cooling-inspired algorithm to come up with an approach that is guaranteed to converge to the ground state. And it's not an entirely novel idea. People have been playing around with this in different ways for some time, but we have a new way of doing it and some nice guarantees for it.
Sebastian Hassinger:And so cooling inspired, how does that differ from the standard VQE kind of approach for trying to find the ground state?
Zoe Holmes:Okay. So VQE, you have a cost function, a parameterized cost function, and a parameterized guess state. The cost function tells you the energy of your parameterized guess state. And what you've got to do is apply an optimization algorithm, try and find the best set of parameters to minimize that energy, which will then give you your answer. But that search, the optimization problem, can be the hard part.
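For readers who want to see the shape of that loop, here is a minimal sketch in plain NumPy and SciPy (a toy statevector simulation, not any particular framework's API): a small two-qubit Hamiltonian, a parameterized guess state, and a classical optimizer minimizing the measured energy.

```python
# Toy VQE: minimize <psi(theta)|H|psi(theta)> for a 2-qubit Hamiltonian.
# A minimal statevector sketch, not any particular quantum framework's API.
import numpy as np
from scipy.optimize import minimize

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Transverse-field Ising Hamiltonian on 2 qubits: -Z Z - X I - I X
H = -kron(Z, Z) - kron(X, I) - kron(I, X)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ansatz(theta):
    """Hardware-efficient-style guess state: RY layer, CNOT, RY layer."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                                   # start in |00>
    psi = kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    psi = kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi

def energy(theta):
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=np.random.uniform(0, 2 * np.pi, 4), method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy {result.fun:.4f} vs exact ground energy {exact:.4f}")
```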
Zoe Holmes:So it's a very simple algorithm in theory, but the optimization problem can be very hard. So time and time again, people have seen a number of different problems. The one that I've looked into most is that, in a quantum setting, the landscapes you're trying to optimize in some sense become very, very flat. With high probability, if you pick any randomly chosen parameter setting, the gradients you get at that parameter setting, that point in the landscape, will be very, very close to zero, exponentially close to zero in the size of your problem.
Zoe Holmes:So the bigger and bigger you make your problem, the system you're looking at, the smaller the probability of actually observing gradients. Big, big problem for VQE. Another big problem for VQE is local minima. It seems like quantum landscapes are riddled with local minima.
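A quick way to build intuition for that flattening is to look at how expectation values concentrate for random states. This is a generic concentration effect, not the formal barren plateau result about circuit gradients, but it has the same exponential flavour.

```python
# Concentration of a Pauli expectation value over Haar-random states:
# the spread shrinks exponentially with the number of qubits, the same
# flavour of flattening that plagues randomly initialized variational circuits.
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    """Haar-random pure state via a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

for n_qubits in range(2, 11):
    dim = 2 ** n_qubits
    # <Z on qubit 0>: +1 for basis states whose first bit is 0, -1 otherwise.
    signs = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
    samples = [np.sum(signs * np.abs(random_state(dim)) ** 2)
               for _ in range(200)]
    print(f"{n_qubits:2d} qubits: Var[<Z_0>] ~ {np.var(samples):.2e}  "
          f"(compare 1/2^n = {1 / dim:.2e})")
```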
Sebastian Hassinger:Barren plateau. Yeah.
Zoe Holmes:So that's the second problem, and then you've also got sort of expressivity concerns. Like, how the hell do you come up with a guess for this guess state? There are some ideas in certain contexts. For example, the UCC, the unitary coupled cluster ansatz, for chemistry. But in other contexts, people just don't have many good ideas, or it's not obvious what the right solution is for coming up with a good guess ansatz.
Zoe Holmes:So our approach, using imaginary time evolution, throws all of those problems away. Instead, what you do is you have this well-defined circuit that is guaranteed, if it gets deep enough, to prepare the ground state. Now, there is a downside to that. Your circuit can end up very, very deep, deeper than you're gonna be able to do, certainly on current hardware. Maybe not in the next five years, but maybe in ten, fifteen years.
Zoe Holmes:We'll see.
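As a purely classical illustration of why exact imaginary time evolution is guaranteed to converge (this is the underlying math, not the circuit construction from the paper), one can apply exp(-tau*H) to a random starting state and watch the energy fall toward the ground state energy:

```python
# Imaginary time evolution |psi(tau)> ~ exp(-tau*H)|psi(0)>, renormalized,
# damps excited states and converges to the ground state as tau grows.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

dim = 16                                   # a small random "Hamiltonian"
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2                          # real symmetric, hence Hermitian
ground_energy = np.linalg.eigvalsh(H)[0]

psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)                 # a random initial guess state

for tau in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
    phi = expm(-tau * H) @ psi             # exact imaginary time propagator
    phi /= np.linalg.norm(phi)             # cooling is not unitary: renormalize
    energy = phi @ H @ phi
    print(f"tau = {tau:4.1f}:  energy = {energy:8.4f}   "
          f"(ground state energy = {ground_energy:8.4f})")
```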
Sebastian Hassinger:Right. Right. And that guarantee is because the circuit is, as you said, cooling inspired, and cooling is nature's way of finding the ground state, and it's infallible. Right. Yeah.
Sebastian Hassinger:That's really interesting. And so, I guess, one of the arguments about VQE, or criticisms of VQE and other variational approaches, is that they are heuristics. Right? They're not theoretically provable to be efficient. Is that something that the imaginary time evolution approach gets around, or is it also a heuristic?
Zoe Holmes:No. It's not heuristic. It's completely non heuristic. But because of that, the circuit depth is larger. We have this guarantee that a single iteration will decrease your energy by an amount proportional to the energy fluctuation in the initial state, for example.
Zoe Holmes:So we have a guarantee on how much a certain circuit depth will cool it, but the circuit depth can end up...
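The textbook identity behind that guarantee (not the specific finite-depth bound from the paper) is that, under exact imaginary time evolution, the energy decreases at a rate set by the energy variance, the energy fluctuations, of the current state:

```latex
% Energy decrease under exact imaginary time evolution,
% |psi(tau)> = e^{-tau H} |psi(0)> / || e^{-tau H} |psi(0)> ||:
\frac{d\langle H\rangle_\tau}{d\tau}
  = -2\bigl(\langle H^{2}\rangle_\tau - \langle H\rangle_\tau^{2}\bigr)
  = -2\,\operatorname{Var}_\tau(H) \;\le\; 0
% so the energy strictly decreases whenever the state has nonzero energy
% fluctuations, i.e. whenever it is not already an eigenstate of H.
```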
Sebastian Hassinger:Right. And is that circuit depth, the one that's guaranteed to cool it by a certain amount, proportional to the size of the system whose ground state you're trying to estimate?
Zoe Holmes:The circuit depth, is it proportional to the size of the system?
Sebastian Hassinger:In other words, the larger the quantum system you're trying to simulate, are you anticipating you'll need a deeper circuit to
Zoe Holmes:yeah.
Sebastian Hassinger:That could be rough. Interesting. So, in your experience, what would be sort of the minimal practical requirements for the circuit depth you'd need to support for imaginary time evolution to be useful?
Zoe Holmes:We have not really done those calculations yet. The thing that makes it harder is, well, you asked whether our algorithm was heuristic or not. The paper we put out recently was not heuristic. It was really giving these guarantees
Sebastian Hassinger:Right.
Zoe Holmes:In a certain regime. But, actually, I think there are sort of more heuristic versions of it that could, in practice, work out better, but we don't have guarantees for those yet. So what it boils down to is we have to use what we call a short time step. We have to use really short steps, so that our circuit with a short imaginary time step will approximate imaginary time evolution. It will approximate cooling.
Zoe Holmes:But if we keep that time step short, we can guarantee that we're well approximating imaginary time evolution, and if we're well approximating imaginary time evolution, we have guarantees that you're gonna converge towards the ground state. However, when you have these short time steps, you need lots and lots of them. We only have guarantees, basically, in the infinite time limit. So if you have short time steps, you need loads and loads of them, and that leads to deep circuits. On the other hand, you may well be able to get away with a much bigger time step.
Zoe Holmes:We do have some numerical evidence of that. However, we don't have guarantees then.
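A toy classical stand-in for that trade-off (a linearized cooling step on a dense matrix, not the actual circuit construction from the paper): small imaginary time steps faithfully approximate the cooling but need many steps, which is the depth cost, while an aggressively large step has no such guarantee and, on this toy spectrum, fails outright.

```python
# Step-size vs. depth trade-off for a linearized imaginary-time step:
# |psi> <- (I - dt*H)|psi>, renormalized.  Small dt tracks exp(-tau*H)
# faithfully but needs many steps; too large a dt loses the cooling guarantee.
import numpy as np

rng = np.random.default_rng(3)
dim = 16
lams = np.linspace(0.0, 4.0, dim)               # spectrum in [0, 4], ground energy 0
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
H = Q @ np.diag(lams) @ Q.T                     # random Hamiltonian with that spectrum

psi0 = rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

def linearized_cooling(dt, total_tau=6.0):
    """Apply the first-order cooling step total_tau/dt times."""
    steps = int(round(total_tau / dt))
    psi = psi0.copy()
    for _ in range(steps):
        psi = psi - dt * (H @ psi)
        psi /= np.linalg.norm(psi)
    return steps, float(psi @ H @ psi)

for dt in [0.01, 0.05, 0.2, 1.0]:
    steps, energy = linearized_cooling(dt)
    print(f"dt = {dt:4.2f}: {steps:4d} steps, final energy = {energy:6.3f} "
          f"(ground state energy = {lams[0]:.3f})")
```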
Sebastian Hassinger:I see. So is it analogous to, with the larger time step, sort of getting two data points that are further apart and extrapolating between them? Is it somewhat like that?
Zoe Holmes:What we have is a number of approximations to get from an algorithm that exactly implements imaginary time evolution, which is guaranteed to converge, to some circuit that approximates imaginary time evolution.
Sebastian Hassinger:Okay. Okay.
Zoe Holmes:How well our circuit approximates the imaginary time evolution depends on the step size.
Sebastian Hassinger:Right. Okay. And so, I mean, that sounds like the category of trade-offs and searching for tricks that is kind of required by the type of device that's available today, if you want to run circuits on real hardware. Is that the case, that you're trying to make the best you can out of the fallible, noisy machines that are available today?
Zoe Holmes:Yeah. So if we wanted to try and use this approach on current devices, you would either have to accept that you are not getting anywhere near the ground state, but you're just getting a slightly better approximation. There was actually another paper, I think it was David Gosset et al. who put it out, where they were advocating a similar approach. They said if you already have some shallow circuit that you can use, for QAOA say, then just do one iteration of an algorithm that's of a similar spirit to ours, and you can improve the energy by a certain amount, reduce the energy by a certain amount. And if your initial guess state is pretty shallow, the circuit depths required to run this thing are not too bad.
Zoe Holmes:So you're not getting to the ground state, but you have this guaranteed cooling anyway, which could be useful. So that's one approach you can take. Just forget about the ground state, and have something a bit closer. Or maybe you can throw away the guarantees. Maybe you can forget about using the step size that we've derived has guarantees, and instead use a much bigger step size and see how you do.
Zoe Holmes:But in this paper we were not really thinking about now.
Sebastian Hassinger:Right. Okay.
Zoe Holmes:We were thinking more about early fault-tolerant, maybe fault-tolerant, devices. And that's an interesting future step, to try and really understand to what extent it can be made practical.
Sebastian Hassinger:Of course. Of course. Yeah. So, I mean, there are sort of two aspects to this. One, as you said, is the thought experiment of what you can do once you have a fault-tolerant machine, and then there are the trade-offs that could make it potentially runnable on existing machines.
Sebastian Hassinger:Do you think those trade-offs could lead to sort of scientific value? I mean, what's interesting to me is that the devices we're building now are starting to produce sort of novel scientific results for the physical sciences. In other words, simulation of quantum systems, you know, utility, I think, is the term that IBM uses. But it's really in a scientific setting rather than trying to figure out sort of the theoretical field of quantum information. And rather than waiting until we have fault-tolerant machines at scale, it's like, what can we do scientifically that's useful with the machines we have today?
Sebastian Hassinger:Is that sort of one of your areas of focus?
Zoe Holmes:I mean, yes. I think current machines are most likely to be useful for many-body physics problems, maybe quantum chemistry problems. I think they probably haven't been useful yet, and I think most people would agree that they've not. They are becoming interesting for benchmarking different methods, for comparing classical simulation methods to quantum simulation methods. It's now challenging to simulate some of these devices, and that makes them a tool to help develop our methods for simulating quantum systems, things along those lines.
Zoe Holmes:I think in the near future, yes, applications are most likely to be within the physical sciences. But you seem to present a dichotomy, is it just useful for the physical sciences or is it useful for the rest of the world? I'd say the things that are useful for the physical sciences are also useful for the rest of the world. So
Sebastian Hassinger:True. Yes. That's an arbitrary categorization that somewhat maps to the way industry sees it. Right? I mean, the analogy I use often is quantum, sorry,
Sebastian Hassinger:classical Monte Carlo, which was created by Stan Ulam and others at Los Alamos for, I think it was neutron diffusion calculations initially, so it was a physical science application initially. And that was, like, nineteen fifty something or forty something. And then 1977 is the first application of Monte Carlo in a financial services setting. And so, to a certain degree, it's that kind of map that the overall quantum computing industry is looking for. Right?
Sebastian Hassinger:It's like techniques emerging from physical sciences applications and scientific applications of the devices we have today, leading almost serendipitously to broader application with more direct, you know, sort of economic impact rather than scientific impact.
Zoe Holmes:Even if you took it as far as financial forecasting or whatever. I don't think you need to take it as far as financial forecasting in order for it to have an impact on society. If you have a better understanding of catalysis for chemical reactions, and I'm not a chemist, but that's gonna be useful for all of these different companies.
Sebastian Hassinger:Yeah. And, I mean, the speculation about the earliest use cases tends to be sort of battery technology or FeMoco or, right, these are the scenarios. And I agree. I use the Monte Carlo and financial services example because it's the most obvious sort of dichotomy between, you know, a scientific technique and then a broadly applicable general algorithm.
Sebastian Hassinger:But, yeah, so when you think about those types of early use cases, do you think that's still something that's going to require fault tolerance to actually deliver those scientific applications, scientific uses, and scientific value? Or do you think there's value in this NISQ era or, like, late NISQ era, early fault-tolerance era?
Zoe Holmes:That's a big question. And I think, again, it depends what you mean by scientific value. If you mean scientific value for studying problems that are interesting to physicists, understanding better the dynamics of all sorts of different spin chains that can help us model all sorts of different magnetic materials, I think there's value there. If you are talking about an application that will bring commercial value from a scientific topic, I think it remains to be seen. I think there is, well, there's a growing body of research into approaches that are very, very lightly quantum assisted.
Zoe Holmes:So the algorithm is almost entirely classical, usually non-variational now, though some of them have a variational component, with the quantum computer just used for a minor subroutine. And maybe using the quantum system for that minor subroutine brings some kind of heuristic advantage. Probably not a provable one, probably not an excellent one. Maybe one where someone, a couple of years later, classically figures out a way of doing it. That said, the other thing that I think is important to highlight here: I'm a strong believer that there is at this moment this sort of dichotomy, this competition between classical and quantum methods.
Zoe Holmes:But in this competition between classical and quantum methods, we are improving our strategies to understand these systems, to model these systems. Scientists are sort of competitive, and if the main benefit of quantum computing is that it triggers that competitiveness in us and gets us to develop better classical methods that help us solve problems, I mean, it's an expensive way of doing it, but it's still a good thing. And I think we are gonna increasingly see that, hopefully, small quantum subroutines maybe have some form of added benefit, even if not a more scalable benefit.
Sebastian Hassinger:Yeah. And there's also, I mean, another aspect of that sort of dichotomy that may not be valid, which is: there's value in NISQ and variational techniques using NISQ machines, versus there is no value and we have to wait for fault tolerance. It seems like you're much more open to the possibility of there being value from this kind of phase, from the machines that we have today. Right?
Zoe Holmes:I'm a scientist. I'm not a business person. I don't have to put my money where my mouth is.
Sebastian Hassinger:Yeah. And to be clear, I mean value in the broadest sense. I'm not meaning dollars-and-cents value. I mean, as you say, if it progresses information science, that's fantastic. If it progresses quantum mechanics and our understanding of the quantum world, that's fantastic.
Sebastian Hassinger:It's more, I guess: is it fertile territory for those types of discoveries that drive our knowledge ahead? What I'm saying is distinct from driving quantum technology itself; it's more like deriving knowledge from the use of the technology we can make today, if that makes sense.
Zoe Holmes:So, yeah, I am still optimistic in
Sebastian Hassinger:Yes.
Zoe Holmes:physics settings.
Sebastian Hassinger:Yes. Okay. Okay. And, you know, one of the arguments against that, which you've already mentioned, is that it's just heuristics, and therefore there isn't sort of a theoretical basis to provably derive advantage. But that's sort of beside the point from your perspective.
Sebastian Hassinger:Is that right? I mean, if a heuristic works, the heuristic works. It doesn't really matter.
Zoe Holmes:I mean, we do have all sorts of provable advantages in different settings. What I was talking about is provable advantage for algorithms that we think are gonna have an advantage in a near-term setting.
Sebastian Hassinger:Okay.
Zoe Holmes:Particularly for applications that are of broader interest. I think we don't have that yet.
Sebastian Hassinger:Right.
Zoe Holmes:But I think we have a number of different groups putting forward different approaches where they maybe have guarantees for their approaches. They maybe don't. But maybe the classical case doesn't have guarantees either. And so it's just a game of who can do best at that moment in time.
Sebastian Hassinger:Right. Right. Yeah. And the evolution of classical algorithms often starts with heuristics, just whatever is pragmatically working at the time. And then, after the fact, there's some sort of theoretical proof or foundation laid down to codify it.
Sebastian Hassinger:So I think it's entirely reasonable to expect the same in quantum computing. So, you were part of that review paper, which was sort of looking at the myths of no-go theorems, or, you know, what we can and can't expect from NISQ machines. Given your overall more optimistic stance, what do you think you'll be focused on in the coming year or two, in terms of research areas?
Zoe Holmes:So one area of research I really like is trying to understand the scalability of variational methods in quantum computing. That's historically something that I've worked on a lot, and I somewhat feel we're reaching a dead end with it, but it's still a question that I find very theoretically interesting, because the point is we have these quantum algorithms that have guarantees, but these variational ones don't. And we can prove lots of no-gos against them scaling in certain regimes. And so, actually, over the last five years, me and a bunch of others have collected a sort of big set of seemingly damning evidence about the scaling. And yet you have to concede, and we do this in the paper on myths, our perspective on myths, that these no-gos don't absolutely rule out anything.
Zoe Holmes:We're trying to stare into a crystal ball. We have arguments why in certain cases these things are unlikely to scale, but we have to allow the possibility that, heuristically, things could work out better. That said, there are heuristics and heuristics. If you just slap some kind of completely generic approach on a problem, I mean, it does work sometimes classically, but I want to know why a quantum method is gonna be better. And I think a mistake people have maybe made is they take a problem that doesn't seem to be inherently quantum at all and assume a quantum solution will be better.
Zoe Holmes:I am more optimistic when it comes to, at least in the near-ish term, quantum problems being tackled on quantum devices, where we can make use of the structure of those quantum problems to do something more interesting, particularly in a very small context. And that's why I started looking at these cooling-based algorithms, because it seemed more natural to take inspiration, to have an inductive bias based around something that should work, and modify that, rather than sort of searching in these exponentially large spaces kind of blindly.
Sebastian Hassinger:And when you think about quantum imaginary time evolution, the ability to implement that in some kind of useful way, do you think that'll be affected more by, you know, things like the new generation of superconducting devices, for example, that are either cat qubits or dual-rail qubits, that are better at identifying or suppressing certain types of errors? In other words, do you think increased fidelity of superconducting devices is going to help more, or increasing scale of high-fidelity devices like trapped ions or neutral atoms?
Zoe Holmes:I mean, high fidelity is better. Well, high fidelity has to be balanced against other factors, like the time it takes to implement the gates, the decoherence times, things along those lines. But in general, yes, of course we want it.
Sebastian Hassinger:Well, I guess what I'm saying is, if you think of superconducting as fast but noisy, and atom-based as higher fidelity but slower, do you expect greater gains, from your perspective, from increasing fidelity on fast-gate devices like superconducting, or from increasing scale on slower atom-based devices?
Zoe Holmes:It depends which hat I've got on. But for this new line of research, these non-variational cooling algorithms, they don't need to be fast. So just give me high-fidelity gates. That's what I want. In terms of dynamical simulation, well, we want high fidelity and we want big, to try and push beyond what can be done classically.
Zoe Holmes:And we want a good topology. So you're gonna be balancing all those components out. In terms of variational stuff, well, correct me if I'm wrong, but no one seems to be that optimistic about implementing large variational algorithms on current hardware pushing beyond what we can do classically. I feel like the combination of hardware noise, local minima, expressivity limitations, lack of good ansätze, all of those, I think, makes people relatively pessimistic for right now. But some are optimistic for soon.
Zoe Holmes:I'm not sure how optimistic I am for soon, but there are some people who are more optimistic for soon.
Sebastian Hassinger:Yeah. I mean, I think the only exception to what you're saying would be the generative quantum eigensolver work that Alán Aspuru-Guzik and others are doing, where they're trying to bring accelerated classical computing and machine learning to improve the ansatz and therefore reduce the, yeah. I mean, it's
Zoe Holmes:I don't think that avoids any of the problems. I think it's no worse, but no better, with respect to the issues that all of the other approaches face, in my take. But there are different opinions on how much these issues are issues. I think the point is, with the variational approaches, you need both the gates to be quick and the noise to be relatively low, and we don't have that. The combination of the noise and the landscape being flat is really, really damning right now.
Sebastian Hassinger:Yeah. And, I mean, I guess the fidelity in superconducting is promising to increase, but that's sort of resetting the scale in a sense. Right? I mean, you've got Quantum Circuits Incorporated's dual-rail qubit device, which is gonna be eight qubits. IBM's LDPC logical qubit device, that chip is 288 physical qubits.
Sebastian Hassinger:It's gonna be 12 logical qubits, according to their research papers. So, in other words, there is increased fidelity coming down the pipe, but it's decreasing the scale of the system that you can actually simulate.
Zoe Holmes:Yep. Or some of the atomic devices, trapped ions, Rydberg atoms, they have higher fidelities. So
Sebastian Hassinger:Rydberg atoms don't have gates yet, so that's a bit of an issue. But, yeah, the trapped-ion devices, I mean, you know, Quantinuum's got a roadmap to get a large number of qubits shipped in the near future as well. So there's always hope for the next device.
Zoe Holmes:I mean, Quantinuum's got, is it 54 or 56? Something
Sebastian Hassinger:like that. 56. Yeah.
Zoe Holmes:I mean, that's enough to be challenging someone's laptop. Yeah, and actually challenging someone on a cluster too, if you're doing something
Sebastian Hassinger:That's true.
Zoe Holmes:But, yeah, you're not gonna be doing variational stuff on that, for what that's worth.
Sebastian Hassinger:Right. Is that a scale at which, potentially, QITE could be implemented on one of those, like you said, with the bigger time step?
Zoe Holmes:Yeah. Potentially. We've talked a bit about that and just need to think about a good application.
Sebastian Hassinger:That'd be really interesting. So, I guess, if you look ahead again, do you think you'll continue more to, you know, progress this idea, to try to get to an implementation running on real hardware, or do you think you'll continue more down the theory track?
Zoe Holmes:We are exploring all branches there currently. So we're exploring all sorts of longer-term theoretical understandings of the algorithm and how it can be applied. We're exploring different applications of it. We're exploring different variants of it. We're also interested in near-term, more heuristic variants.
Zoe Holmes:I'll just say, in parallel, you asked about my research earlier. So this is one avenue. Then the sort of variational stuff, is it screwed or is it not, is another area. The final one is I also have one foot in the classical simulation world, almost the other side of the barrier. Because I think, actually, only by understanding the limits of your classical simulation methods are you not only gonna be able to figure out when you can get an advantage, but also figure out how best to use these devices.
Zoe Holmes:We're developing what we call these Pauli propagation algorithms, and we have all sorts of ideas about how to not just use them on their own as a competitor, though we do like using them as a competitor, but also combine them with hardware to get the most out of it. Because that goes back to the point I was making before, that if we're gonna make the most of these devices in the near term, I suspect it may well be for just some tiny subroutine, with a lot of the rest of the work done classically.
Sebastian Hassinger:And what's an example of that? I mean, hypothetically, what would an example of that sort of classical-plus-quantum solution look like, versus a variational one, which is also hybrid, but iterating back and forth between the classical and the quantum?
Zoe Holmes:Yeah, it's different. I'll give you two examples. So one, I've got some students who have worked on things like using tensor networks to try and compile some really short-depth circuits. You'd use tensor networks to simulate for as long as you can classically, then compress that down into a short-depth circuit. Then you could apply that circuit on hardware, and actually you'd use that circuit on hardware just to do some Schrödinger evolution, evolving the state forward.
Zoe Holmes:But you can also back-evolve the observable, and that bit can be done classically. So you've got these sort of classical methods pinning in from both angles, and just a little bit of quantum in the middle. So that's one example.
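A hedged sketch of that "pinning in from both angles" idea, in plain NumPy rather than the group's actual tensor-network pipeline: the same expectation value can be computed by pushing the state forward through the early part of the evolution (the piece you might run on hardware) and pulling the observable backward through the late part classically.

```python
# "Meet in the middle": evolve the state forward through the early part of a
# circuit and back-evolve the observable through the late part (Heisenberg
# picture).  Both bookkeepings give the same expectation value.
import numpy as np

rng = np.random.default_rng(4)

def random_unitary(dim):
    """Haar-ish random unitary from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

dim = 8                                    # three qubits, dense toy matrices
U_early = random_unitary(dim)              # stand-in for the compiled shallow circuit
U_late = random_unitary(dim)               # stand-in for the remaining evolution
O = np.diag([1.0 if b % 2 == 0 else -1.0 for b in range(dim)])  # Z on the last qubit

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

# (a) Schrodinger picture all the way: evolve the state through both pieces.
psi_full = U_late @ (U_early @ psi0)
schrodinger = np.real(psi_full.conj() @ O @ psi_full)

# (b) Split: state goes forward through U_early, observable comes back through U_late.
psi_half = U_early @ psi0
O_back = U_late.conj().T @ O @ U_late
split = np.real(psi_half.conj() @ O_back @ psi_half)

print(f"full evolution: {schrodinger:.6f}, split evaluation: {split:.6f}")
```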
Zoe Holmes:And then you've got all sorts of people trying to explore variants where you get samples from your quantum computer, and those samples maybe are used to project down into a subspace for diagonalizing the Hamiltonian. IBM have been really pushing these methods, and they have nice guarantees for them in certain settings. But then the big question there is, are you actually adding anything that you couldn't do classically? And they completely see that; this is an algorithm that is practical on current devices, and you are getting this little bit of quantum added value, but the question is whether it's actually enough added value. But it's an interesting direction.
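A sketch of the generic "project and diagonalize" recipe behind those methods (an illustration only, not IBM's specific sample-based quantum diagonalization implementation): take a few basis states, standing in here for bitstrings sampled from a device, project the Hamiltonian into the subspace they span, and diagonalize the small matrix classically.

```python
# Project-and-diagonalize: use a small set of (here, hand-picked) computational
# basis states, standing in for states sampled from a quantum computer, to
# build a small effective Hamiltonian and diagonalize it classically.
import numpy as np

# 3-qubit transverse-field Ising Hamiltonian, built as a dense 8x8 matrix.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(op, site, n=3):
    """Single-site operator embedded into the n-qubit Hilbert space."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = -sum(op_on(Z, k) @ op_on(Z, k + 1) for k in range(2)) \
    - sum(op_on(X, k) for k in range(3))

exact_ground = np.linalg.eigvalsh(H)[0]

# Pretend these bitstrings came out of sampling a quantum device.
sampled_bitstrings = [0b000, 0b111, 0b001, 0b100]
basis = np.eye(8)[:, sampled_bitstrings]           # columns = selected basis states

H_sub = basis.T @ H @ basis                        # Hamiltonian projected into the subspace
subspace_ground = np.linalg.eigvalsh(H_sub)[0]

print(f"subspace estimate: {subspace_ground:.4f}   exact ground: {exact_ground:.4f}")
```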
Zoe Holmes:And then, yeah, there are other approaches people are considering that use sampling. You maybe train classically. Do some training classically, for example in generative modeling, and then you get samples from your circuit. You might have an algorithm where the classical bit is hard to sample from, but if you can train classically to get some circuit and then sample on the hardware, you get added advantage from the hardware producing the samples. So, things along those lines.
Sebastian Hassinger:That's really cool. That's excellent. Well, thank you very much, Zoe. This has been fascinating. I'm going to continue to watch the work from your group and from EPFL generally, and at some point we may have you back on for an update as you progress your work.
Sebastian Hassinger:Thank you very much. The podcast is a production of The New Quantum Era. It is produced and edited by me, Sebastian Hassinger, with music by Omar Costa Hamido. If you enjoy the podcast, please subscribe and share with your friends. They can go to newquantumera.com to access the podcast, and the book that I'm writing is also available on O'Reilly's site, with the first twelve chapters as a preview.
Sebastian Hassinger:Thank you, and we'll look forward to the next episode.