The New Quantum Era

Welcome to The New Quantum Era, a podcast hosted by Sebastian Hassinger and Kevin Rowney. In this episode, we have an insightful conversation with Dr. Toby Cubitt, a pioneer in quantum computing, a professor at UCL, and a co-founder of Phasecraft. Dr. Cubitt shares his deep understanding of the current state of quantum computing, the challenges it faces, and the promising future it holds. He also discusses the unique approach Phasecraft is taking to bridge the gap between theoretical algorithms and practical, commercially viable applications on near-term quantum hardware.


Key Highlights:

  • The Dual Focus of Phasecraft: Dr. Cubitt explains how Phasecraft is dedicated to algorithms and applications, avoiding traditional consultancy to drive technology forward through deep partnerships and collaborative development.
  • Realistic Perspective on Quantum Computing: Despite the hype cycles, Dr. Cubitt maintains a consistent, cautiously optimistic outlook on the progress toward quantum advantage, emphasizing the complexity and long-term nature of the field.
  • Commercial Viability and Algorithm Development: The discussion covers Phasecraft’s strategic focus on material science and chemistry simulations as early applications of quantum computing, leveraging the unique strengths of quantum algorithms to tackle real-world problems.
  • Innovative Algorithmic Approaches: Dr. Cubitt details Phasecraft’s advancements in quantum algorithms, including new methods for time dynamics simulation and hybrid quantum-classical algorithms like Quantum enhanced DFT, which combine classical and quantum computing strengths.
  • Future Milestones: The conversation touches on the anticipated breakthroughs in the next few years, aiming for quantum advantage and the significant implications for both scientific research and commercial applications.


Creators & Guests

Host
Kevin Rowney
Host
Sebastian Hassinger🌻
Business development #QuantumComputing @AWScloud Opinions mine, he/him.
Guest
Toby Cubitt
Father, pianist, scientist - alphabetical ordering implies no precedence ranking. Opinions my own.

What is The New Quantum Era?

Your hosts, Sebastian Hassinger and Kevin Rowney, interview brilliant research scientists, software developers, engineers and others actively exploring the possibilities of our new quantum era. We will cover topics in quantum computing, networking and sensing, focusing on hardware, algorithms and general theory. The show aims for accessibility - neither of us are physicists! - and we'll try to provide context for the terminology and glimpses at the fascinating history of this new field as it evolves in real time.

The New Quantum Era.

A podcast by Sebastian Hassinger and Kevin Rowney.

Welcome back to the podcast.

I've got Kevin here with me.

Hey, Sebastian.

We've got a really interesting interview today with Dr. Toby Cubitt.

He is a co-founder of Phasecraft in the UK.

He's also a professor at UCL in London.

He's been a really sort of active and visible participant in the quantum computing field for quite a number of years.

One of sort of the OGs, I guess, the early adopters before there really was much hardware or software out there.

And I'm really looking forward to this conversation because I think Phasecraft consistently does really high quality work in sort of the algorithm research area.

No doubt.

Yeah.

I mean, like not a bad gig.

Professor at UCL and founder of a startup that's got such a huge future.

I mean, Phasecraft is a really interesting venture.

I mean, it feels like they're really taking a sober and realistic look at which problems are commercially valuable, together with the current constraints on quantum hardware and how to optimize for those, together with a team that can keep making progressive improvements to really difficult underlying algorithms.

So those are three hard objectives all to be put into a blender and solved at once.

That's a cool future I think they've got.

Yeah.

I'm really looking forward to this conversation.

So let's jump in.

Awesome.

Thanks for having me.

It's been a pleasure.

Welcome back to the podcast.

We are joined today by Dr. Toby Cubitt of Phasecraft, joining us from the UK.

Thanks for joining us, Toby.

Hi, nice to be here.

Thanks a lot.

Great to talk to you, Toby.

Yeah.

So Toby, Phasecraft is a really strong participant in this stage of the quantum computing industry.

You do a ton of really interesting work, both open science and work with clients. Would you characterize yourself as a consultancy at this point?

No, definitely not.

We are a pure play quantum algorithms and applications company.

So we are very focused on getting real world applications onto hardware that exists now or in the very near future.

We don't do the kind of consulting where people want to know when quantum computing is going to be relevant to their business and ask us to write a report on it.

That kind of consulting we steer well clear of.

It doesn't drive the technology forward.

Right.

It brings in some revenue, but we're very focused on moving the technology forward, and that doesn't.

Rather than consulting, we tend to think in terms of partnerships with commercial customers and end users: much deeper relationships, much more of a joint-development style of relationship than a consulting one.

Those we do have.

Right.

Quite a few very interesting collaborations, which I've seen in the published work.

I guess it'd be really interesting.

I mean, I have seen the work you've done as being really at that forefront of trying to find productive applications that can run on current or, as you said, near future hardware.

You know, in some ways, quantum computing over the last year or two has been going through, I would say, a little bit of a rough patch.

Generative AI kind of maybe stole some of the spotlight.

Maybe there's some growing impatience with the stage of the maturity of the hardware.

What's your perspective on sort of where we are in that journey towards quantum advantage, so to speak?

It's very interesting.

I mean, I've been in this field for 20 years plus now, originally in academia.

So I've been watching this field, and the industry side from before there was an industry side, and seen it grow.

And, you know, I've been writing theorems and papers about these quantum computers that, up until not that long ago, essentially didn't exist.

And when we founded Phasecraft, getting on for five years ago now, it was sort of peak quantum hype after the so-called supremacy experiments.

And, you know, we were telling investors, this is really difficult.

This needs some major R&D breakthroughs.

We thought we had ideas of how we could do it, because very few people at the top level of academia were crossing over to try and apply that work on the algorithm side of the field, which is in any case extremely small.

But we were like, this is not happening tomorrow.

We sounded like complete pessimists and doom mongers, while everyone else was saying quantum tomorrow, switch off your classical computers.

You know, a few years later, the hype cycle has swung the other way.

We are saying pretty much the same thing we said when we founded Phasecraft, except we're four or five years on and we've changed the dial internally.

I'm more cautiously optimistic about near-term quantum computing than I was when we founded the company.

We're saying pretty much the same thing as we did originally.

And now we sound like the crazy optimists because everyone else has gone past that.

And you know, you've been the same throughout.

Yes.

Really nothing has changed in our outlook except that things have moved on.

We've made progress.

Research ideas that were speculative at the beginning have actually worked out.

And that's not just the last five years.

It's been like that the entire 20 years, now nearly half my lifetime, that I've been working in this field.

This is like difficult, challenging science and technology development.

It goes through waves of progress and then hits a wall and things get difficult.

And the hype from people not involved in doing it tends to fluctuate around that.

The reality is that the general public and the general media just see the things that rise above some threshold.

And it looks like there's been some major breakthrough.

No, it was coming.

We've known for 10 years that it was on its way, making steady progress.

You just see the spikes and you don't see everything that's behind that.

So I am not much more or less pessimistic or optimistic than I was.

I'm slightly more optimistic than I was five years ago when we founded the company because things have worked out.

We've made a lot of progress and we think we're getting there.

I still think it's extremely challenging to get quantum computers to do anything useful.

No one has yet run a useful computation on a quantum computer ever.

Bottom line right there.

It's easy to get lost in the hype and not see that clearly, but that's the raw truth right now.

I'm not sure that founding a quantum computing startup is the easy way to make a fortune, but it's intellectually extremely interesting because it's hard and because it's really changing the dial.

It's not just writing some app that lots of people will buy.

It's really, we are fundamentally pushing the technology forward.

And that's why Ashley Montanaro, John Morton, and I founded Phasecraft.

It's because we wanted to take the stuff we'd been doing and theorizing about for 20 years in academia.

We have an opportunity to now turn this into reality and be part of making this happen.

And to some extent, having been involved in building this field for 20 years and at the top of the field in academia, it sort of came down to, for me at least, why wouldn't you get involved in actually now making it happen now that this is starting to become real, not just some theory or a possibility.

But we're not there yet.

It's massively overhyped, but it's not overhyped in the sense that there's nothing behind it.

It's just that the excitement runs ahead of the progress on technology.

It's in the early stages of technology and that's always the case.

And in terms of AI taking the limelight, yeah, rightly so; the stuff behind large language models has been on the cards for a while.

I have colleagues, I'm still a professor at UCL in the CS department.

This stuff is not a major surprise and didn't come out of nowhere.

It's huge.

It's a significant breakthrough, but of course it's hit the mainstream and now everyone's talking about it.

And that's great.

I don't think that really matters to quantum computing.

AI and quantum computing, machine learning and quantum computing, are not really in competition.

They address very different problems and are very complementary.

We can talk about that more.

And, you know, AI has been through its own hype cycles since the 1980s.

They survived their winter just about, well, maybe didn't survive it, but they recovered from it eventually.

There we go.

And, you know, there's the original old school now, deep neural networks, ML.

And that's now old school because now it's transformers and large language models.

It's gone through that cycle; it's fine.

You know, and then everyone's worried about the robots taking over.

And, you know, the reality is that ChatGPT is amazingly good at producing cogent English sentences that are very persuasive.

And that's both amazingly great and terrifying, but humanity and civilization are not yet in danger of being taken over.

It's so great until it hallucinates, you know, with extreme confidence.

Toby, how would you summarize the market thesis for Phasecraft?

I mean, you've got a specific idea about a theme of algorithms and contributions here that you're placing your focus and energy behind.

Yeah.

So, I mean, my perspective, or our perspective, is that at this stage of the technology development, and this will not always be the case, there are really only two things that matter to turning this from a potential industry into an actual, real industry.

One is the hardware development.

That's absolutely critical.

And the other is the algorithms applications development.

Because if you've got nothing to run on that hardware, the hardware is useless.

If you've got no hardware to run them on, the algorithms and applications are useless; they're vaporware.

Everything else is essentially peripheral.

If you cannot solve those two problems, there is no industry.

There is no quantum computing.

You've got to get through those.

Now that's not the only thing that you need to solve to build a successful business, but it's a necessary condition.

If no one gets there, then we will go through a quantum winter.

I don't believe that will happen; I think we won't.

As I said, the fact that I'm cautiously more optimistic than I was when we founded the company is due to four or five years of very hard work by a very smart team of people.

And we are on the second of those two things.

We're not a hardware company.

We are an algorithms and applications company.

But a lot of the world's memories, like the VC community's, don't stretch back to the 1940s and 50s of conventional computing.

The VC community didn't really exist back then.

My own memory doesn't stretch back that far either; I'm getting older, but I'm not quite that old.

But the memory of the computer science community stretches back that far.

And if you read up on your history of classical computing, back in the early days, algorithm development was as instrumental to classical computing becoming a real thing as the hardware was.

But to do that, you had to really, really understand the hardware.

You had to really understand the applications in depth to manage to invent the algorithms that would squeeze something useful out of this incredibly primitive, flaky hardware.

That's the 1950s of classical computing I just described.

And it's also today in quantum computing.

We're working on that.

At Phasecraft, we are not focused on one particular application area.

Every potential application of quantum computing is something Phasecraft is interested in, but we have our views on which ones are coming sooner because of the hardware constraints, which ones will be able to run sooner.

You've got to get to useful applications sooner rather than later.

So you've got to take a view on which ones are going to be the ones that are going to be accessible to the hardware sooner than others.

And there are some that are further off.

Shor's factoring algorithm.

I don't know a way of managing to do that without a scalable fault-tolerant quantum computer.

Despite some impressive theory breakthroughs in the last couple of years, it's still not one where I think we at Phasecraft can make major improvements that will change the dial on when you can run it.

I think you need a scalable fault-tolerant computer for that one.

But other algorithms and other applications, in particular the simulation of chemistry and material science, and even more specifically, for algorithmic reasons, material simulation, still feel like the ones that, for good algorithmic and scientific reasons, hit a nice sweet spot as some of the earliest that might be within reach of the hardware.

It's also one of the most commercially important.

There's a lot more money in that than there is in factoring large numbers.

No doubt.

Yeah.

I mean, you mentioned sort of the 40s and 50s.

The use case that sort of brought about classical computing was high energy physics.

Are you sort of suggesting that- That's right, and ballistics and military applications.

Right.

Are you sort of suggesting that material science, material simulation might be sort of the base use case for quantum computing in the same way?

I mean, it's one of the ones that comes within reach earliest.

And there are some deep algorithmic, I mean, some fairly simple reasons, well known within the scientific community, why, for example, periodic crystalline materials are algorithmically slightly simpler than a molecule.

So there's, you know, this huge hype about quantum chemistry and drug discovery.

Yeah, I mean, there's potential there, but you need a slightly bigger quantum computer to do that in terms of the number of gates, not the number of qubits.

And this is something else that isn't super well understood.

Because in material science, sort of ground state simulation is more important or proportionally more important than- No.

No, okay.

What's the size?

No, it's sometimes a much simpler thing.

No, it's actually just this: think about electrons in a material to a very crude, rough approximation; you have to do some work to make this rigorous, but the intuition is actually not bad.

The electrons on a site in a crystalline lattice talk to the electrons nearby on adjacent sites, but not to the ones that are really far away.

In a molecule, essentially all of the orbitals, so all of the places the electrons can be, talk to all of the other ones.

That means, with a small amount of math, that the algorithm picks up a factor that scales as the number of electron orbitals, the number of places an electron can be, to the power of four in molecules, or three if you're careful, whereas that's a constant factor in material systems.
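To make that scaling intuition concrete, here is a minimal sketch; the orbital counts and the coordination constant are illustrative assumptions, not Phasecraft's actual cost models:

```python
# Illustrative sketch only: compare how the number of interaction terms
# grows with the number of electron orbitals N for a molecule (all-to-all
# orbital interactions, ~N^4 terms) versus a periodic material (each
# lattice site talks only to a constant number of neighbours, ~N terms).

def molecule_terms(n_orbitals: int) -> int:
    """All-to-all interactions give roughly N^4 Hamiltonian terms."""
    return n_orbitals ** 4

def material_terms(n_orbitals: int, coordination: int = 6) -> int:
    """Local lattice interactions: a constant factor times N terms.
    The coordination number 6 is an assumed illustrative value."""
    return coordination * n_orbitals

for n in (10, 50, 100):
    print(f"N={n}: molecule ~{molecule_terms(n):>12,} terms, "
          f"material ~{material_terms(n):>6,} terms")
```

With these toy numbers the gap is already about five orders of magnitude at N=100, which is the rough intuition behind materials needing fewer gates than molecules.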

Now, there's a lot more to it than that.

There are a lot of symmetries in a periodic crystalline lattice that you can exploit in clever ways.

I was about to say the periodicity of structure might be a huge favor to you.

Yeah, you've got more structure to play with.

When you've got more mathematical structure, that usually means you can exploit that.

Or if you're clever, maybe you can find ways to exploit that algorithmically to simplify, compress down what you need to do to do the simulation.

What I'm saying is nothing new; it's well known within the small academic world of quantum computing and algorithms physicists that materials are probably a little bit earlier as an application of quantum computing than, say, computational chemistry.

That's not to say it won't come, but it's a little bit further off.

It's not a hundred percent clear, because to simulate molecules you need fewer qubits but more gates, whereas for materials you need more qubits but fewer gates.

Interesting.

That's currently the way the hardware is progressing.

It's easier to scale the number of qubits than the number of gates.

So that seems like the sweet spot.

We work on both at Phasecraft, but at the very beginning, when we were literally a few people in one rented room, like all good startups, we had to really place a bet on one focus.

What did we think was the most important thing?

And we started off focusing on material simulation.

We're now at a scale of 30-plus people where we don't have to do just that.

So we're also looking at optimization problems.

We're also looking at chemistry simulation, but the view hasn't changed: in terms of running worthwhile, interesting problems on a quantum computer, it's modeling periodic many-body quantum systems, the electronic structure problems for crystalline materials, which are very commercially relevant.

I mean, there's a lot of people who would like to do that.

Many of whom we've worked with; some of whom got so excited they came and joined us, and now no longer work for the partners but for us.

But yeah, I still believe that's likely to be one of the first ones that will come within reach of the technology.

That's such a fascinating perspective of like matching up the underlying fundamentals of the limits of current quantum hardware together with commercial opportunities.

So you're solving for multiple equations and multiple unknowns.

Yeah.

I mean, in some sense we view the partners we work with, both on the hardware side and on the end-user customer side, the big chemicals companies, the pharma companies, as collaborators; we work with their science teams.

For example, in the early days of Phasecraft we worked with the materials modeling team at Johnson Matthey, one of the big UK chemicals companies, to really understand their problems.

They have the domain knowledge of what they actually need to solve.

It's not like we talk about simulating a material or simulating a molecule.

That's not what you want to do.

That's just like a subroutine.

What you want to do is compute, say, the charge drift coefficient in this particular material, that kind of thing.

So you have to really drill down into it, and they have a lot of knowledge from half a century plus of doing this extremely effectively to manufacture next-gen materials for real-world technology applications.

What they don't have is the expertise of how to design quantum algorithms.

On the other side, with the hardware partners, we dive very deeply into exactly what the hardware can do.

I remember a conversation in the early days with IBM, where we were discussing access to their hardware and running on it.

And they asked us, what would we want to know about the hardware?

And we said, well, we'd like to know this, this, this, and this.

And they said, no one's ever asked us that before.

Because we want to know everything you can do with the hardware to figure out how can we design the algorithms to squeeze every bit of juice out of that hardware because it's so early.

And that's what those people did in the like 1940s and fifties of computing.

It's what people did in the 1980s of home computing when they did amazing things with primitive home computers.

It's that kind of mentality.

And you know, one day, six year old kids will program a quantum computer in their bedroom better than I can.

I'm pretty sure I will be retired long since retired before we reach that stage.

It'd be great if we get there sooner, Phasecraft will be laughing, but I don't think we have it that easy.

We have to really just squeeze everything out of this hardware because it's still very primitive.

And I mean, at this stage, and actually even when you talk about deployed production solutions, there's necessarily going to be an interplay between classical compute and quantum compute resources.

What do you think about the challenges of the overhead of state preparation, pre-processing, error correction, and post-processing taking away the potential advantage through classical overheads?

Yeah, definitely; you should really account for everything.

I mean, there is no such thing as a quantum algorithm that only runs the quantum part.

There's always a whole bunch of classical computation going on around that.

Even the textbook stuff like Shor's algorithm, a lot of it is classical.

I still teach it to undergrads.

There's a core of it that's phase estimation, at least in one way of teaching it, and everything else around that is a classical algorithm.

And that's even like one of the classics.

It depends on what you're talking about.

If we're talking about error mitigation methods, these are ways of avoiding doing error correction or fault tolerance.

They do have an exponential overhead, but maybe one that's not blowing up too fast.

With an exponential overhead you have to be a bit careful, because if you push it to the limit, you're essentially saying: I've got exponentially big classical computations for free.

And then of course I can do everything, but that's cheating.

So you have to be careful.

You have to put in the real numbers.

In everything we do at Phasecraft, we're obsessed with actually putting in real numbers.

We don't care about asymptotic scaling at all.

We sort of joke that phase craft is all about the constants.

In the real world, and this is true in classical computer science as well, those constants really matter.

So yeah, you have to actually work out, is this realistic and feasible?

If you put everything together, how long is it going to run?

Sometimes the answer is different on different hardware.

For example, on ion traps and cold atoms, the gate speeds are like three orders of magnitude slower than on solid-state hardware like superconducting circuits.

For some algorithms that doesn't matter too much.

For some algorithms, that means you just can't run them on an ion trap; or, the counter-argument to that is, if you have to do so much error mitigation to get a superconducting-circuit quantum computer to have good enough effective fidelities to run, maybe you've lost that factor of three orders of magnitude in speed again.

And these things matter totally.

So you have to actually do that.
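As a toy illustration of that trade-off, total wall-clock time is roughly shots × circuit depth × gate time; the gate times and mitigation overhead below are assumed round numbers for illustration, not vendor specifications:

```python
# Toy wall-clock estimate: shots * depth * gate_time. The gate times and
# the 1000x mitigation shot overhead are assumed illustrative numbers.

def wallclock_seconds(shots: int, depth: int, gate_time_s: float) -> float:
    """Crude total runtime, ignoring readout, reset, and compilation."""
    return shots * depth * gate_time_s

depth = 1_000
base_shots = 100_000

# Superconducting: ~100 ns gates, but assume 1000x more shots for mitigation.
superconducting = wallclock_seconds(base_shots * 1_000, depth, 100e-9)

# Ion trap: ~100 us gates (about three orders of magnitude slower).
ion_trap = wallclock_seconds(base_shots, depth, 100e-6)

print(f"superconducting with mitigation: {superconducting:,.0f} s")
print(f"ion trap:                        {ion_trap:,.0f} s")
```

With these particular assumed numbers the two come out identical: the 1000× gate-speed advantage is exactly cancelled by the 1000× shot overhead, which is why the comparison has to be done with real numbers per algorithm and per device.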

So it sounds like, sorry.

Go ahead.

So it sounds like maybe this emphasis you have on matching commercial opportunities against large numbers of qubits but short bursts of quantum computation leads you directly down the path of embracing the whole variational quantum eigensolver (VQE) framework, or are there others that also feature prominently in your plans?

There are many others.

VQE is one amongst many.

When we founded Phasecraft, I think people viewed simulating the dynamics of a quantum many-body system as just beyond reach of near-term hardware.

One of the first things we did at Phasecraft is change the story on that by knocking six orders of magnitude off the best previous algorithm for doing time dynamics of a testbed model of the Fermi Hubbard model.

Suddenly that brought it from "you can't do it until you have a scalable fault-tolerant quantum computer" to "you can fit it on a quantum computer that is at least plausible within the next few years."

And that was one of the first Phasecraft results four years ago.

It's published in Nature Communications a while back.

That's out of date.

We're quite a long way on from that.

We've kept on pushing that down for more real-world problems as well.

We knocked a factor of 43 million off the circuit size and gate count for simulating transition metal oxide materials.

The problem is that 43 million is still not a big enough improvement in the algorithm to get it onto current hardware.

So that's out of date.

We've gone beyond that in Phasecraft.

I mean, it's tough to get it to work.

No doubt.

A steep, steep road.

But yeah, in some sense, what the hardware can currently do is definitely limited by circuit depth or gate count.

Not quite the same thing, but both matter.

That's kind of what you're usually up against, more than the number of qubits.

Got it.

And it's just the nature of the current landscape of hardware.

Yeah, that's right.

And theoretically, we know the answer.

It is fault tolerance and error correction.

At least that's the only answer we have theoretically that anyone's come up with so far.

The problem is the overhead is very big, and it's a while before we'll be able to deploy it.

Yeah.

And that raises the topic of logical qubits being on the horizon.

I mean, IBM's published a paper, and they've updated their roadmap to talk about LDPC.

Yeah.

Google had a big result just last week.

Yeah, big news.

Had a big result.

Do you think that-- I mean, are you more excited about the first logical qubits, which are probably going to be sort of-- no.

That's what I thought.

Because they won't be that performant at first.

Right?

It takes time to work out exactly how best to fabricate and then control any new variation on qubit technology.

So I guess the other side of this, are you more excited about higher fidelity, higher quality existing sort of NISQ chip evolutions, right?

Or system evolutions?

So, I mean, yes.

But the good thing is you want that regardless of whether you're wanting to build a fault-tolerant scalable quantum computer or whether you're building a good NISQ-type device, you care a lot about getting the raw gate fidelities to be better.

For fault tolerance, you care about that because the overhead depends very sensitively on the raw error rate.

It's all polylog theoretically, but those constants really matter.

So getting the fidelities up, or the infidelities down, by an order of magnitude, going from, whatever, three nines to four nines, can make a huge difference in the overhead of the fault tolerance.

It might take you from needing a billion-qubit quantum computer to one with only 10,000 physical qubits per logical qubit.

The problem with fault tolerance... I mean, there's been really good progress in the experimental results over the last 12 months, from multiple different hardware companies and groups.

But if you read the details of those papers, you see just how far off we are from having even a fault-tolerant gate.

Some of them have done the easy part of that by cheating a little bit.

And this is not to diss those results.

It's a monumental effort to get there and we have to go through those milestones.

But I'm actually fairly optimistic about scalable fault-tolerant quantum computation.

And I think we will have it in 20 years time.

And that's not pessimistic.

That's just that there are three or four difficult engineering obstacles to overcome before you have a quantum computer that's so good and so big that I can forget about the details, pretend it's an ideal computer, and run as big an algorithm as I want, which is basically what we do classically.

It's incredible.

We've forgotten how amazing that is.

These are timescales for overcoming big, deep engineering obstacles where people have multiple different approaches in mind, but they're still at the university-lab stage at the moment.

You know, error bars of plus or minus three years is good going if you're hitting your milestones.

Stack three or four of those, and you're out to 10 or 20 years before it all comes together and I can forget about the hardware.

And that's not me being pessimistic about it.

It's just realistically, it's going to take a while before that is going to be useful to a company like Phasecraft.

That doesn't mean error correction and fault tolerance aren't useful to us.

It's just not the kind of general scalable fault tolerant quantum computer that you just forget about the details of the hardware like we do classically.

That's a long way off.

I should say if I'm wrong about that, we are laughing.

I mean, if someone drops a scalable fault-tolerant quantum computer next month, Phasecraft's business model is not torpedoed.

Quite the contrary.

Everything we do, we just flip it.

And that means we can solve problems that are 43 million times bigger than anyone else.

I will be very happy to be in that world or that parallel universe, but I fear that that's not the one we're going to land in.

So we have to do the hard work.

The realism is important here.

So yeah, I guess this feels like a very realistic market thesis.

And again, this bet on larger numbers of qubits and short compute times involves not just variational quantum eigensolvers; there are probably other algorithm classes our audience would like to hear about.

So is there any kind of way to characterize or summarize the other major dimensions of innovation in that space?

Yeah.

So there's a whole bunch of new techniques.

I mentioned briefly in passing time dynamics simulation, which is almost the oldest proposed quantum algorithm.

It goes back to Feynman in 1982, roughly, his original suggestion.

But to make that actually something that can be done within reach of near-term hardware without a scalable fault tolerant computer was not at all obvious.

We have a sequence of papers where we've really hammered on that.

And that involves inventing a whole bunch of new algorithmic techniques.

Getting the cost of doing that within reach of near-term hardware is not one idea.

It's three, four, five ideas.

One of them shaves off a factor of three, one a factor of 10, one a factor of five, and eventually they build up to six orders of magnitude shaved off.

But it's a bit different to academic research that I do as a hobby still, in some sense.

There, you have one big theorem, a massive breakthrough that changes everything.

On the real industry side, often there's a lot of other work too. Those big ideas are great.

I'm very happy if someone on the team at Phasecraft comes up with one idea that completely changes the story.

But also there's a lot of mileage in doing this sort of interesting, intellectually challenging, but more engineering type work.

It's algorithmic engineering: really looking at where we can save.

And if we put this technique together with that one, or use some math we know to do something differently here, we can keep on improving things bit by bit.

And that kind of- Like a sequence of incremental improvements.

That's right.

Yeah.

So for example, we came up with a new way of encoding fermions, that is, electrons, into qubits.

Because they don't match up, they're different types of particles.

You get some overhead from representing them on a quantum computer.

The problem is that everything you care about in the real world is electrons.

So fermions matter, because all of chemistry is electrons, all of materials science is electrons.

Everything except nuclear physics is electrons.

So you really care about that.

There's some overhead.

We invented a new fermion encoding in the early days of Phasecraft that was a bit more efficient.

In fact, you can't do better than our encoding in the particular parameters that we care about.

It got us down to three qubits per fermionic mode, where the best previous encoding needed four.

That sounds trivial, but that improvement goes in the exponent rather than a constant prefactor, and it ends up giving us at least an order of magnitude improvement.

And it's that kind of thing, plus another three or four ideas, that gets our time dynamics simulation algorithms within reach of current hardware.
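As a back-of-envelope illustration of why a per-mode saving compounds with system size (the lattice size below is made up; only the 3-versus-4 ratio comes from the conversation):

```python
# Back-of-envelope sketch of why "3 qubits per fermionic mode instead
# of 4" is not a trivial saving: it scales with system size, so any
# cost that is exponential in circuit width gains a 2^(saved qubits)
# factor. The lattice size below is hypothetical.
modes = 36                    # e.g. a hypothetical 6x6 lattice model
old_width = 4 * modes         # qubits with the previous best encoding
new_width = 3 * modes         # qubits with the improved encoding
qubits_saved = old_width - new_width

# For anything that scales exponentially in qubit count (for example,
# exact classical simulation of the circuit), each saved qubit is
# roughly a factor of 2 in cost:
cost_ratio = 2 ** qubits_saved
```

The saving here is 36 qubits, so an exponentially-scaling cost drops by a factor of 2^36, which is how a one-qubit-per-mode improvement turns into many orders of magnitude.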

But that's not the only one.

We've also recently been pushing further.

Because even with the 43-million-fold improvement in the algorithms, when we applied all of the ideas to material systems, we found we still needed a bigger quantum computer.

So we have to do something else and keep on going.

For the last couple of years, we've been inventing quantum-enhanced approaches both to optimization problems, combinatorial optimization, which we haven't really talked about, and to electronic structure, so materials and chemistry simulation. These use a quantum computer to solve the difficult quantum part of the problem, and use classical algorithms, in particular DFT, density functional theory, which is the workhorse of all materials modeling, to do most of the rest.

And the interesting thing here is that you don't get the answer off the quantum computer.

You use the quantum computer to steer the classical algorithm to the right solution.

Whereas the problem with DFT is that when it fails, it tends to just go to the wrong solution.

You predict something's an insulator, then you go to the lab and measure it and it's a conductor.

Often DFT does really well.

It's incredible, you know, there's a Nobel Prize for it for a reason, but it's understood where it chokes.

It chokes when electron correlations, or what in quantum information theory is called entanglement, are strong.

It's not quite the same thing; you have to be a bit careful about that because they're fermions.

But in any case, it's understood where these electron correlation effects play an important role in the chemistry or physics.

This is where DFT struggles.

This is well understood.

People have been doing it for like 50 years computationally now.

Quantum computers are of course very good at computing those things, but you can't fit big systems on them yet.

So we have an algorithm that we call quantum-enhanced DFT; there's a preprint out where we solve exactly that piece.

There's one difficult part of the many-body electron problem, the so-called universal functional.

It's really just the electron-electron interactions that are difficult for a classical computer.

That's the core of the difficulty for the Born-Oppenheimer optimization.

Right.

Yeah.

Right.

Everything blows up there.

Yeah.

So what we did is come up with a new approach that uses a quantum computer to solve just that bit of it, and we've actually tested this on Google's hardware; not yet at a scale beyond what you can simulate classically, where you could just simulate the quantum computer itself.

But it outperforms conventional DFT.

It doesn't outperform the best classical methods yet; on the Fermi-Hubbard model, quantum Monte Carlo actually performs better. But you want to compare against the thing that is the workhorse across all the useful materials.

So it's promising: we have these hybrid quantum-classical approaches where we can leverage a small, noisy quantum computer.

And here it doesn't matter if the output is flaky and a bit noisy, as long as it's qualitatively correct.

Right.

In our testing at least, and this is heuristic: I love proving things, but I can't prove this result.

There aren't very many proofs in DFT.

Even with the noisy output of the quantum computer, where you're solving a much smaller problem on the quantum computer than the full system you're modeling, it steers the classical algorithm to a good solution and away from the bad ones, by using the quantum computer in this small core.
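To make the shape of that hybrid loop concrete, here is a deliberately toy sketch: a classical DFT-style self-consistency iteration that calls out to a subroutine for the hard interaction term. In the real scheme that subroutine is a quantum computation; here it is a classical placeholder, and every name and formula below is invented for illustration, not Phasecraft's actual algorithm.

```python
import math

def quantum_energy_gradient(density):
    # Stand-in for the quantum subroutine: in the scheme described, a
    # small quantum computation supplies the hard electron-electron
    # piece. Here it's a classical placeholder, the gradient of the
    # toy interaction energy 0.5 * sum(n_i^2), which is just n_i.
    return list(density)

def hybrid_dft(v_ext, iters=100, mix=0.3):
    # Toy self-consistency loop in the spirit of DFT: the effective
    # potential combines a one-body external potential with the
    # (here, mocked-out) interaction term.
    m = len(v_ext)
    density = [1.0 / m] * m
    for _ in range(iters):
        grad = quantum_energy_gradient(density)
        v_eff = [v + g for v, g in zip(v_ext, grad)]
        weights = [math.exp(-v) for v in v_eff]
        z = sum(weights)
        new = [w / z for w in weights]
        # Linear mixing keeps the iteration stable; the external input
        # only *steers* the classical loop, so moderate noise in `grad`
        # shifts the fixed point slightly rather than derailing it.
        density = [(1 - mix) * d + mix * n for d, n in zip(density, new)]
    return density

# Density ends up normalized and concentrated where the potential is low.
n = hybrid_dft([0.0, 0.5, 1.0, 1.5])
```

The point the sketch captures is structural: the answer comes out of the classical loop, and the quantum (here, mocked) part only steers it toward the right solution.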

So that's the kind of thing we've been pushing hard on lately.

It's a new algorithm.

It's not the kind that you will prove theorems about because in the end it's heuristic.

VQE, or whatever your favorite way of preparing ground states is, is a subroutine in that; VQE is one method.

We have an algorithm that I invented called DQE, the dissipative quantum eigensolver, which is a different beast to VQE.

It provably works, but it's also worst case exponential time.

It's provably immune to noise, actually, which is the whole reason I was interested in it.

But it's not very practical.

That sounds cool.

Is there an arXiv link on that, or a preprint or whatever?

There is, yes.

Yeah.

I mean, whether that's practical or not is a little unclear at the moment. But, you know, people are down on VQE at the moment, and yet, for solving ground state problems of quantum many-body systems, it's pretty hard to beat in practice right now.

It works very well.

And there are algorithmic theory reasons why, for that particular application and in some formulations of it, a lot of the no-go theorems and the arguments for why it won't work don't apply to that use case.

So roughly speaking, if you take one form of VQE for ground state problems, what's called the Hamiltonian variational ansatz, you're essentially doing adiabatic state preparation: you take a trivial Hamiltonian, slowly change it into the one whose ground state you actually want to find, and drag the ground state with you.

You're approximating that with a kind of Trotterization.

So you're cutting it into small, discrete time steps and approximating the continuous evolution, just like you would when solving a differential equation on a classical computer: you discretize time and solve in small time steps.

Okay.

So write that down as a quantum circuit.

And now, well, you can't do a circuit that deep.

Okay.

By the adiabatic theorem, under its assumptions, you will provably get to the right state as long as the gap doesn't collapse.

But you can't do that, because the circuit's way too deep.

So now do the deepest circuit you can.

Take your time steps as big as you need to actually fit it.

That makes the approximation much cruder, and maybe it won't work, but you do it as well as you can.

You're approximating, as best you can on the current hardware, the adiabatic state preparation, which provably works.

Now take the time step sizes, make those your variational parameters, and just do gradient descent to make them better.

You're never going to make it worse.

That version of VQE, in the limit of larger depths, larger in the sense of polynomial-sized, becomes adiabatic state preparation and provably works.
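A toy single-qubit version of that recipe can be sketched directly (illustrative only: real instances involve many qubits and hardware-native circuits, and a simple greedy search stands in here for gradient descent):

```python
import math

def apply_step(a, b, dt, psi):
    # Apply exp(-i*dt*(a*X + b*Z)) to a 2-component state, using the
    # identity exp(-i*t*(n.sigma)) = cos(t)*I - i*sin(t)*(n.sigma)
    # for a unit vector n.
    r = math.hypot(a, b)
    if r * dt == 0:
        return list(psi)
    c, s = math.cos(r * dt), math.sin(r * dt)
    ax, az = a / r, b / r
    return [
        (c - 1j * s * az) * psi[0] - 1j * s * ax * psi[1],
        -1j * s * ax * psi[0] + (c + 1j * s * az) * psi[1],
    ]

def energy(dts):
    # Start in |+>, the ground state of H(0) = -X, step along the
    # interpolated Hamiltonians H(s) = -(1-s)*X - s*Z, then return the
    # final-Hamiltonian energy <psi|(-Z)|psi> (ground energy: -1).
    psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]
    p = len(dts)
    for k, dt in enumerate(dts):
        s = (k + 1) / p
        psi = apply_step(-(1 - s), -s, dt, psi)
    return -(abs(psi[0]) ** 2 - abs(psi[1]) ** 2)

def optimize(dts, delta=0.5, tol=1e-3, max_sweeps=500):
    # Greedy coordinate search that only accepts improving moves, so
    # the energy is monotonically non-increasing: tuning the step
    # sizes can never make the discretized schedule worse.
    best = energy(dts)
    while delta > tol and max_sweeps > 0:
        max_sweeps -= 1
        improved = False
        for i in range(len(dts)):
            for step in (delta, -delta):
                trial = list(dts)
                trial[i] += step
                e = energy(trial)
                if e < best:
                    best, dts, improved = e, trial, True
        if not improved:
            delta *= 0.5
    return dts, best

dts0 = [0.5, 0.5, 0.5, 0.5]   # a crude fixed-step schedule
dts_opt, e_opt = optimize(list(dts0))
```

Because the optimizer only accepts improving moves, the tuned schedule can never do worse than the crude fixed-step schedule it starts from, which mirrors the "you're never going to make it worse" point above.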

A lot of the no-go theorems for VQE don't apply to that.

They apply to other variants.

On the no-go theorems: I think it was John Bell, of Bell inequality fame, who famously said that the only thing proven by a no-go theorem is a lack of imagination.

That's really true.

I mean, no-go theorems are really nice and important results.

But seeing how to violate their assumptions is how you route around them.

So we're less down on VQE at Phasecraft than perhaps the general community is, but it depends on what you're using it for.

If you're just saying, I'm going to throw a circuit at it without any knowledge of the problem I'm solving, variationally optimize a gazillion parameters, and hope for the best, it will not go well.

Yeah.

And you know, we don't do that.

I doubt it will work.

And that goes back to your point of having to know the most minute details of the performance of the hardware in order to get something valuable out of it.

And the actual details of the problem you're trying to solve as well.

In this era, there are no black-box solutions that you can just plug and play.

And that's fine by me because that makes it interesting.

You know, it's difficult, but interesting.

And I hope we will get to be part of reaching the point where it's no longer interesting and quantum computing is boring.

That will be success.

You know, it's just a commodity that everyone uses for the obvious things.

So I know it's difficult at this stage to predict how things are going to go, but if you look forward over the next year, are there specific kinds of milestones you're trying to reach with Phasecraft, in terms of the scale of the problem or the type of problem you're able to address?

So the key next milestone for us, and I think for the whole field and industry, is to reach what many people call quantum advantage; the problem is this term has too many definitions.

So run a useful computation.

Maybe it's just a scientifically interesting computation.

Maybe it's not yet commercially or industrially relevant, but it's at least a problem that other people have cared about before.

Even if it's just academic many-body physics research or condensed matter toy models: solve a problem like that on a quantum computer, beyond anything that a classical computer can do.

That's a milestone we've got to pass through.

It's not yet commercial viability, but a milestone is something you pass on the way to where you're going.

That's something we at Phasecraft believe we have a good shot at with some of the algorithms we've developed.

I think it's in the next couple of years; I know I can no longer prove that we can't do it.

And as a theorist, you know, that's amazing.

I wouldn't have been able to say that 12 months ago.

And we're not the only ones.

I mean, if you look at the experimental results that have come out in the last 12 to 24 months, they're not there yet, but we're getting closer.

You know, the IBM utility paper got attacked a lot, but it's actually a really nice experiment, which they knew; the paper was carefully written.

They didn't claim, at least in the paper, that they got over that line, but it's a good experiment.

There are experiments the Google team have done; again, they don't claim they're over the line yet, but they are pushing, and the hardware is continuing to develop.

I think in the next couple of years, we will probably cross that threshold.

Of course, the classical numerical simulations will get better in the meantime.

And that's a good thing too.

It's driving things forward, right?

You know, if the end result of quantum computing is we can solve all of chemistry and material science on a classical computer efficiently, you know, great.

Even if it comes at the cost of a series of embarrassing dequantization results.

I mean, you know, Phasecraft is a company; if we come up with a quantum-inspired classical algorithm that is commercially useful, we are totally happy with that.

So I think it's very good, both for science and for the technological applications of quantum computing, for there to be this back and forth; firstly, it keeps you honest.

Don't claim quantum advantage unless you're sure of it, because there are some very smart classical numerical algorithms people who might catch you out tomorrow.

But at the same time, in the end, a lot of these problems, especially for example the dynamics of a quantum system, we know are provably exponentially hard classically, modulo big complexity-theoretic collapses that are very unlikely to happen.

So for a lot of these things, the classical techniques are up against an exponential curve, and an exponential curve, as we know from things like the COVID pandemic, looks a bit like a wall once you start going up it. We're not there yet, but in that sense the hardware just has to get a bit bigger: one extra qubit and you're doubling the difficulty classically.

You can be very clever, and maybe it doesn't double it.

Maybe it's only a factor of one point something, but then you just need one more qubit on top of that, and maybe another; maybe it's 10 more qubits, maybe it's five, maybe it's 20, but it's not like a thousand more qubits before you're into the regime where all of the classical methods we know run out of steam.
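The arithmetic behind that point is worth making explicit; the speedup figures below are hypothetical:

```python
# Rough arithmetic behind "one more qubit doubles the classical cost":
# even if clever classical methods cut the effective per-qubit factor
# from 2 down to, say, 1.5, a modest number of extra qubits offsets
# any fixed classical speedup. Numbers below are illustrative.
import math

def extra_qubits_to_beat(classical_speedup, per_qubit_factor=2.0):
    # Smallest q with per_qubit_factor**q >= classical_speedup.
    return math.ceil(math.log(classical_speedup, per_qubit_factor))

# A million-fold classical algorithmic speedup is undone by:
q_at_2 = extra_qubits_to_beat(1e6)          # 20 extra qubits
q_at_15 = extra_qubits_to_beat(1e6, 1.5)    # 35 extra qubits
```

Even a dramatic, million-fold classical improvement is erased by a few dozen additional qubits, which is the sense in which the classical side is racing an exponential wall.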

But I think in the next couple of years we will probably get past that milestone.

It may or may not rise above the peak of the media hype picking up on it.

It may be that you'll have a whole load of media articles going, quantum computing comes of age, or maybe it'll fly below the radar and only the scientific community will understand the importance of it.

But I think we're approaching that and it's not a hard threshold, right?

It's fuzzy.

Right.

We're somewhere in the vicinity of it.

It won't necessarily be really clear that we've crossed it until we're quite a lot beyond it.

But I think that's not as far off as people outside the field think.

So let's keep in touch, because if you come up with that breakthrough, or if you see somebody else do it, we want to cover it here, even if it fades into obscurity against the media hype, right?

That would be good to know and to cover for our audience.

You know, I mean, you know, very good.

Yeah.

There are other good teams out there working in that direction, of course.

But I think the sensible people know that we've got to get past that.

We've got to work on that.

A lot of us feel we're still in the early days; this is not yet a real industry.

It's a promised industry.

A lot of these are people we may in fact be collaborating with on some of this stuff.

It's friendly competition at the moment, at least among the people who are really tackling the hard science problems.

And at the moment, you know, why are Google giving us access to their hardware?

Because if we manage to make their hardware do something useful, at this stage it's win-win for everyone in the whole industry.

Yeah.

Absolutely.

This has been really a satisfying conversation, Toby.

Thank you so much for both the perspective on the nature of the science and how it matches against the economics, and for a sober and realistic view of the steep climb ahead.

But there's still that optimism.

It's just great to see that shine.

So thank you so much for your time.

Oh, you're very welcome.

No, it's been a pleasure.

(Music ends.)

(Music plays.)

Hi, hey, look, so that was an amazing conversation.

I appreciated his clear-sighted characterization of the commercially valuable problems within reach with a larger number of qubits; apparently the qubit count of contemporary hardware is growing faster than the length of computation, the number of gates, that you can do.

So that spells out a particular class of commercially viable problems that work around so much of the skepticism out there.

And it was just amazing to hear him express such profound faith in that particular market thesis, that particular direction: to predict that in one to two years there could be a major breakthrough, either commercially realistic or academically profound, that demonstrates supremacy.

Yeah, and as you said, Kevin, we'll definitely have Toby back on under such circumstances so that we can get the clear story.

I mean, I think, you know, one of the things I really appreciate about Toby's perspective is how clear-sighted it is.

Yeah, very sober-minded.

Very, yeah.

I mean, he's very realistic.

And in fact, you know, I think the measure of that is the way he said that, you know, five years ago when they founded Phasecraft, they were optimistic, but came off as maybe more pessimistic than the hype.

And now that some of that hype is running out of steam, they're appearing to be more optimistic, but in fact, their perspective hasn't changed.

It's a testament to being, you know, sort of internally consistent with your thesis.

Yeah, their profound confidence in their path ahead.

Yeah, exactly.

So just amazing stuff.

We do live in miraculous times.

I really look forward to talking with the Phasecraft team again.

Absolutely.

And before we sign off, I'd be remiss if I didn't mention that, of course, Toby has the best name in quantum computing: Dr. Qubit.

And he's got a personal site, doctor-qubit.org, where he talks a little bit about his surname.

And, you know, I think he probably put that site up so that he can easily and efficiently field the questions that inevitably come up when he's introduced as Dr. Qubit.

Thanks for the important details.

Of course.

On the record, yes.

We'll add that to the links in the episode, along with a myriad of excellent research papers from the arXiv and from Nature Communications, as you mentioned.

So excellent.

Okay, that's it for this episode of The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney.

Our cool theme music was composed and played by Omar Costa-Hamido.

If you are at all like us and enjoy this rich, deep and interesting topic, please subscribe to our podcast on whichever platform you may stream from.

And even consider, if you like what you've heard today, reviewing us on iTunes and/or mentioning us on your preferred social media platforms.

We're just trying to get the word out on this fascinating topic and would really appreciate your help spreading the word and building community.

Thank you so much for your time.