All Points West

Can we trust Artificial Intelligence?

Recently, SiMa.ai, a leading edge-computing chip firm, secured $70 million in funding, with significant backing from Dell, underscoring the growing investment in AI and edge computing technologies. While AI chatbots continue to evolve at an accelerated pace, advancements in algorithms are driving faster performance, shaping how we interact with technology. At OctaiPipe, Eric Topham and his team are committed to delivering trustworthy AI solutions for the Internet of Things (IoT).

Eric Topham is a 'sporty geek' operating at the cutting edge of the high-tech world of AI and edge computing. The founder and chief executive of OctaiPipe, the start-up British tech firm, has developed a platform that aims to protect critical infrastructure, such as power plants, by putting AI right at the heart of connected devices. He is super smart, with a PhD from Oxford University, but has a laid-back 'work to live' ethos, inherited from his parents.

Episode 16 of the All Points West podcast hears why the adventurous divemaster has, so far, eschewed the well-trodden 'tech bro' path to Silicon Valley, and why he believes AI presents more potential opportunities than risks.

Nothing in this podcast is intended as investment advice and the people in this podcast may hold positions in the stocks they talk about. Do not buy anything based solely on a tip or recommendation. Please do your own research. 

What is All Points West?

The All Points West podcast meets leaders from some of the most innovative and influential small and mid-cap UK-listed companies to learn more about them and their businesses. Hosted by former Sunday Times business journalist, Karl West.

Want to get in touch? Send an email to allpointswestpod@gmail.com.


Welcome Eric Topham to the All Points West podcast. Eric is chief executive and founder of OctaiPipe, the technology firm that is making waves in the brave new world of artificial intelligence, or AI. He founded OctaiPipe in 2016 and, prior to that, he worked as a data scientist and statistician at the NHS and UCL, University College London.

Eric's mission at OctaiPipe is to deliver trustworthy AI for IoT, or the Internet of Things, to protect the world's critical infrastructure by placing intelligent device performance, security, privacy, and resiliency right at the heart of connected devices. OctaiPipe wants to provide scientists, engineers, and developers with the tools to ensure the safety, security, and monitoring of edge AI devices, to protect physical systems and the people that depend on them, as well as to unlock new levels of device, machine, and infrastructure intelligence.

Eric, could you give us a quick explainer on what edge AI devices are? As far as I understand it, these are devices that don't have to send as much data to the cloud. Is that right? So the data is utilized closer to the system that it supports, such as a power plant, for example.

Yeah, absolutely. So firstly, thanks for having me, Karl. It's a pleasure, and thanks for the great intro. I think you absolutely nailed that. The first thing is just to talk a little bit about what we mean by edge. I think you're pretty close to what we would think of as edge computing.

So edge computing is where we have computational resource that is close to the site of generation of data, and where the output of that system is then taken downstream. So you might think of an edge computing node as perhaps one where we have an industrial PC that is connected to a large metal pressing machine, but then there are degrees of edge.

So people think of the extreme being the embedded edge, where we have devices with very, very low computational resource that are, as the name suggests, embedded deeply into the physical system that we're dealing with. That's opposed to the concept of the cloud, where we have compute resource that is remote from the point at which the data is generated and where the system is operating.

What are the advantages of operating edge systems? Is it to reduce the distance between where the data is being generated and that central hub, in this case the cloud, which is on a central server miles and miles away? And is it to potentially reduce the impact of hackers, for instance, who have become quite adept at getting into these systems?

Yeah, so I think generally the reason people are looking to leverage edge computing is that they're trying to undertake that computational task as close as possible to the point at which it's required for the operation of the system, in order to have low latency, and also to be somewhat less reliant on, and therefore more resilient against, the loss of network and back-end cloud services.

So we want that device to keep operating as it should, therefore we want to compute calculations where they're needed. The piece around cybersecurity is interesting, but the reason I think people are generally trying to push computation down towards edge devices is not just so that we get this low latency and resiliency; it's that there's a cost involved in moving data from where it's generated to somewhere else to be computed and then moved back again. So, you know, that's inefficient from the point of view of latency. It's also inefficient in terms of the costs that you incur in that process. So where we have edge computing that is becoming more powerful, it means that we can push more of that computational task down to where it's actually needed, rather than having to outsource it to the cloud and pay for the privilege, and pay for the network service on the way there and back again.
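As a back-of-envelope illustration of that cost argument, here is a minimal sketch. All figures are invented for the example, not OctaiPipe measurements:

```python
# Illustrative assumptions: 1,000 edge devices, each generating 1 GB of sensor data per day,
# versus shipping only a ~50 kB trained-parameter update per device in each direction.
devices = 1_000
raw_bytes_per_device = 1 * 10**9          # 1 GB of raw data per device per day
model_update_bytes = 50 * 10**3           # ~50 kB of learned parameters per device

centralised = devices * raw_bytes_per_device    # move all raw data to the cloud
federated = devices * model_update_bytes * 2    # move updates up, global model back

print(f"centralised: {centralised / 10**12:.1f} TB/day")   # 1.0 TB/day
print(f"federated:   {federated / 10**9:.2f} GB/day")      # 0.10 GB/day
print(f"reduction:   {centralised / federated:,.0f}x")     # 10,000x
```

Under these made-up numbers the network traffic drops by four orders of magnitude; the real ratio depends entirely on how much raw data each device generates and how large the model is.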

Cybersecurity is interesting because, you know, traditionally people would think of distributed systems as being more vulnerable, because there are more attack vectors and planes that can be exploited. An attacker can get into an individual node and propagate through the network.

So centralization typically tends to be thought of as a more secure way of doing things, but actually, we do need these systems to be distributed. That's actually one of the very reasons for the type of technology that we're developing: to allow us to do that distributed compute, but in a way that is just as privacy preserving and just as secure as we might expect with a different paradigm.

So cheaper, more resilient and more secure, essentially, is what we're talking about.

Those are the value propositions, yeah. We have huge amounts of data in these distributed IoT systems. I read an estimate, and there have always been these kinds of estimates, it's hard to say what the numbers are, but something like 10 billion IoT devices today and 1 trillion by 2030.

That's a massive amount of data, and there's a lot of latent value in that data that we could release by solving problems using it. Unlike the data that's on the internet, which is largely freely available, most of this data is locked up, if you like. It's siloed and distributed and segregated.

On the one hand, there are barriers to sharing that data across those networks. On the other, there are economic issues around, as we've just discussed, the costs involved in the current way of doing things.

How would you explain OctaiPipe's technology? What does it do? How is it different from what's already available, and what's your target market? Who are you aiming at?

The first thing is just to think about the way that things are done today in machine learning. Today, what we tend to do is aggregate a pool of data into one single data store, and there we train an algorithm to learn a set of relationships for the particular task we're trying to learn.

In the world of IoT and edge computing today, typically what we do is therefore move a very large volume of data up from where it's generated, over a network, into the cloud, store it and process it there, and then we might distribute the results of that trained algorithm back to edge devices.

That's kind of unsatisfactory for the reasons that we've described: you have risk involved in that centralization process, there's cost, and there's a lack of resiliency. So instead, we allow you to do things the other way around. Rather than centralizing the data to train your algorithm, we allow you to push the training program for your algorithm out to all of the devices from which you want to learn, and then we learn one individual model per device. Then what we do is just return the outcomes of the learning, what we call the trained model parameters. So we take all of these learned relationships from all of the devices, and we aggregate those. Now, these learned parameters are only a few kilobytes of information.

So rather than moving gigabytes, terabytes, petabytes of data from the edge over the network to the cloud for training, instead we just move a few kilobytes of information back and forth between the edge environments, which we then aggregate. Once we've got this aggregated model, it is basically the same thing as you would have got if you'd put your big pile of data in the cloud, paid all that money and got around all that governance headache.

The magic trick is that we obtained it without ever moving any of that data. So it's very, very privacy preserving, because the data stays where it's generated and we can secure it there. It's very efficient, because now we only move a few kilobytes, and every time we add another device to the network, we have a fixed cost in the cloud for aggregation and a very small incremental cost in terms of the network communication overhead.

So it's very, very efficient. It's very scalable.
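The train-locally-then-aggregate loop described above can be sketched in a few lines, in the spirit of federated averaging. The toy model (a single slope parameter), the simulated device data, and the size-weighted aggregation here are invented for illustration and are not OctaiPipe's implementation:

```python
import random

def train_local(data, lr=0.05, epochs=200):
    """Fit y = w*x on one device's private data by gradient descent;
    only the single learned parameter w ever leaves the device."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(params, sizes):
    """Aggregate per-device parameters, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(params, sizes)) / total

# Three simulated devices, each holding private samples of y = 3x (never shared).
random.seed(0)
devices = [[(x, 3 * x) for x in (random.uniform(0, 1) for _ in range(50))]
           for _ in range(3)]

local_params = [train_local(d) for d in devices]   # a few bytes per device
global_w = federated_average(local_params, [len(d) for d in devices])
print(round(global_w, 2))  # close to the true slope 3.0
```

The point of the sketch is that only `local_params`, a handful of numbers, crosses the network; the raw `(x, y)` samples stay on their devices, yet the aggregated model recovers roughly what central training on the pooled data would have learned.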

So by moving smaller amounts of data, I guess you're able to do that much faster as well.

It's rather more the fact that by parallelizing the learning process, we can speed things up. The way to think of this is, rather than having to read through one long data set, we can split the data set out across multiple devices, and each device processes a small amount of data in parallel.

So actually, that's true. We can speed up what we call the wall-clock training time, the time to complete the training task, by a factor of 10 because of this parallelization process.

Got it. So it breaks the data up into manageable chunks, basically, because you're doing it across multiple devices.

Exactly. Then the process of aggregating the learning results is very, very efficient, because of the small message packet sizes that we're moving over the network and the computationally cheap nature of the aggregation process that happens in the cloud.
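That speedup claim can be sketched as a simple, idealised cost model. The per-sample training time and aggregation overhead below are illustrative assumptions, not OctaiPipe benchmarks:

```python
def wall_clock(total_samples, devices, secs_per_sample=1e-4, agg_overhead=0.5):
    """Idealised wall-clock training time when the dataset is split evenly across
    devices that train in parallel, plus a small fixed aggregation cost in the cloud."""
    per_device = total_samples / devices
    return per_device * secs_per_sample + agg_overhead

seq = wall_clock(1_000_000, 1)    # one central pass over all the data
par = wall_clock(1_000_000, 10)   # ten devices, each over a tenth of it
print(f"sequential: {seq:.1f}s, parallel: {par:.1f}s, speedup: {seq/par:.1f}x")
```

In this toy model ten devices give a speedup just under 10x: the fixed aggregation overhead is what keeps the scaling from being perfectly linear, which is why the cheapness of the aggregation step matters.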

So who's your target market? Who are you aiming at and who are you working with currently?

Yeah, so as I said in the intro, we're interested in using machine learning, for the purposes of AI, to help secure, and make more sustainable, critical systems and infrastructure. So naturally, the things that we think about are energy generation and management systems, the smart built environment, and industrial automation and manufacturing.

So basically generating energy, making stuff and moving it: things that, if they stop happening, bad things happen to the world, and things that we would like to be more efficient at doing so that we can be more sustainable, I guess, is the way of thinking about it. So those are the domains. Then we can ask ourselves: when would we want this kind of distributed architecture?

One is where we have a single organization that has a naturally distributed architecture. So where we might have remote operations: wind farms, offshore gas drilling, distributed solar farms, distributed manufacturing plants, where we want to be able to combine the learning from data from all of those things, but we want to do it in a much more cost-effective way.

Right. So that's number one. Number two would be that perhaps you are the manufacturer of a smart connected product and, to give it more intelligence, you want to train algorithms, but you need real-world data, and that real-world data only exists across your customers, and your customers don't want to share that data with you. So how do you resolve that conundrum? The answer is to do it in a privacy-preserving way, where you don't actually remove their data but only share the model parameters, and that allows you to then improve your product. So that's kind of a one-to-many scenario.

Then the other one would be a collaborative, many-to-many scenario, where a group of entities want to collaborate in learning for a task that is perhaps non-competitive, but without the risk of having to actually share the data with each other. So we have an example of a project that's just started with Procter & Gamble, involving some other blue chips that are collaborating as well, such as Unilever, where they want to train models that predict the likelihood of a health and safety near-miss event. In order to do that, you need scale, because fortunately these events are relatively rare, and that means you need to collaborate with others. But those others tend to be other organizations that look like you, and that tends to mean your competitors. So you need to keep the data private to all of those different organizations while getting the benefit, which is the trained model that can predict these events happening.

Therefore, you can try and prevent them.

How did the idea for OctaiPipe come about?

Well, it was a pretty organic process, and it's also one of those things where, it's funny, today it's easy to draw the common thread, but at the time it wasn't necessarily conscious. So, as you said in the introduction, my PhD was in the field of ecology, where we were interested in modeling complex biological networks using statistical analysis, statistical modeling and machine learning. So that's the interest in complex systems. Then, as you rightly said, I went and worked in the world of public health and healthcare research, and there you immediately run into issues of data privacy, right?

People's personal data needs to be managed in a way such that it's kept private, yet you still want to get the benefits of being able to model these things. So that's the awareness around complex systems and data privacy. I then started a business building bespoke machine learning systems for industrial applications.

When you then start to work in industry, you immediately start to work with IoT and edge computing systems, because those are used to run those industrial processes. As part of that business, we actually set up an R&D collaboration with the computer science department at Imperial College.

I had some academic acquaintances there, and for a number of years hosted research students to come and do six-month research projects on hard and innovative machine learning problems applied to real-world settings, in order to, firstly, just learn, and secondly, see whether we could identify things that addressed the kind of bigger problems we were experiencing.

So, in late 2019, I think, we picked up some papers on federated learning. Federated learning had existed at that point for about 18 months; the first paper came out from Google in 2017. We thought, well, this looks jolly interesting for solving some of the problems that we see specifically within the kind of industrial applications we were dealing with. Also, the awareness around data privacy and trying to model in complex networks meant that it seemed like a really good way of approaching the problem.

So, from that point, we did a bunch of testing to make sure that it really did what it promised, and then took a decision to pivot the business around that technology set, utilizing our existing customer base, but then heading off increasingly down a traditional deep-tech, product-company trajectory, supported by VC.

So that's where we are today.

Just to give listeners a little bit of recent background: in January, you raised £3.5 million in pre-series A funding, including a half-a-million-pound grant from Innovate UK, which is the government's innovation agency. Pre-series A just means the stage after seed funding but before series A, which is generally when some of the bigger VCs jump in. What are you going to do with that cash injection?

Yeah, it's been a really great experience doing that funding round. It was quite extended, in that there was a lot of interest, so we actually took more money than we had originally planned to. The money we brought in is really here to, on the one hand, allow us to accelerate the technology development that we're doing within the product, and also to really start to scale out our go-to-market operations and the front end of the business.

We think that one of the challenges for the federated learning domain has been the transition from an academically interesting piece of technology to something that's actually useful in the market. So if we were to say at a very high level what we're focusing on: one, it's the true productization around that core piece of technology. I'd say that OctaiPipe is much more than just the federated learning piece; there's a whole infrastructure, and therefore product, around that to make it useful in a real-world setting. At the same time, we're having to invest in productizing the business, in the sense that we're now really building out that go-to-market function and motion, so that we can effectively take that product to the market and accelerate its penetration.

That's the specifics about OctaiPipe and where you sit in the AI universe. If we can just zoom out a little bit now and talk about AI in more general terms: there's been a lot of understandable concern about AI in some quarters, about it taking over huge swathes of the economy and replacing people in their jobs, et cetera. How much of this is justified, do you think? How do you see AI for the future? On balance, does it present more opportunities than risks?

Well, I wouldn't be sitting here if I didn't think that it presented more opportunities than risks. Equally, it would be disingenuous of me to say that there aren't risks, right? I think that risk is precisely one of the things that we are trying to help mitigate. So I think that the fundamental currency of AI today is trust.

If you can't trust something, you can't scale it. If you don't scale it, you won't get benefit from it. So the question is: how do we make these systems more trustworthy so that we can scale them? Now, let's deal with the question of whether there will be impacts within the workplace and the economy in terms of what people do.

I think inevitably, yes, it's unavoidable. Without sounding glib, that's probably true of all technological revolutions. I'm sure there were some people in the late Victorian era wondering about the impact of the steam engine and industrialisation, but that's not to minimize the concerns today.

So the answer is it will have an impact. The question is really only how we then respond to that. Are we really ahead of the game in responding to it? I think that, from a technology perspective, what we have to do is, one, be quite human-centric in what we design and how we design it.

I think there does need to be more regulation. The regulation needs to be carefully calibrated to allow for innovation while also protecting the end user, which is all of us. We need to make sure that when we do design these systems, there's a clear focus on trustworthy-by-design principles.

That's really one of the things that we're trying to bring here. We talked about the cost efficiencies, but the cost efficiency is there because we're really talking about energy efficiency, so that's clearly got to be a prime driver.

We want these systems to be efficient, and to be net contributors to the sustainability goals that we should all have. The second piece is that we can trust these systems to keep our data private and secure to an appropriate level for the tasks that they're there for, and that they don't expose us to unnecessary risks in the process.

If I could turn the focus on you a little bit, one of the things that struck me looking through your CV is that you came into AI through a slightly unusual route, if you don't mind me saying, because your academic background, which is impressive, is in ecology and the natural sciences, including a PhD in zoology from Oxford University.

So it strikes me that, coming from that point of view, it's not an obvious route to have ended up in this high-tech world of AI. How did you get into this field?

Well, yeah, it is an unobvious route. Like I say, I think there were common threads, and it's important not to be distracted by the red herrings. I think it's actually that kind of pluralistic background that has made me particularly adept at what we're doing.

From a technical skill set point of view, machine learning is something that I've been doing for quite a long time; it's just a question of under what auspices you're doing it, right? The analytics involved in modeling a lot of these problems are very much the same.

What is called survival analysis in one place is called remaining useful life estimation in another, but the modeling you're doing is basically the same, whether you're modeling the remaining useful life of a machine or the survival of some biological system. So that's one common thread.

So in many ways, it's been a natural progression. Like I said, the move out of academia into public health was driven by interest. The relevancy today is that, like I say, you immediately run into these issues around data governance and data privacy, and how to do your modeling while dealing with those constraints.

That gave me a very clear understanding of some of the problems that need to be solved when dealing with privacy problems. I think it's a pluralistic background, but one that means I can synthesize quite a lot of information from different places and take quite a novel view on things, whereas if you come from one very specific background, then perhaps that's all you know.

So, yeah, that's the view that I have: that kind of pluralistic background has actually been a real strength.

It's allowed me to see things from multiple perspectives and experience things under different conditions. Like I say, from a technical understanding point of view, in the end, learning about machine learning is learning about machine learning; what you then apply it to is what varies. So, yeah, it's been an interesting journey, but, like I say, one that's been a net asset rather than anything that's held me back.

Yeah, so here you are now in the tech world. Why stay in the UK? Were you ever tempted to head off to Silicon Valley and become a fully fledged tech bro?

You're implying that I'm a tech bro, but just not fully fledged. I am wearing a baseball cap, so fair enough. Yeah, a semi-fledged tech bro. That's fine with me. No, I mean, look, to be honest, I'm sorry to say this to any American friends, but I've never been particularly drawn to the US. My sister lived in Denver for five years and it was nice to visit, but it has its foibles and wrinkles as well. Perhaps better the devil you know. I feel very comfortable here. I think of myself as European, I'm actually half French, and my friends and family are all here, and those things are important. You know, life isn't just the business.

Even though I work pretty hard, I think we should remember why we're doing that. You know, work to live, not live to work, whatever people say. So it's always just been the natural choice, I think, to be here. Perhaps the world will take us in that direction at some stage.

I have to say, as a business, I think today our eye is more drawn towards the European market, the DACH region, Germany in particular. We have a potential opportunity to set an office up over there from the middle of the summer onwards. So that might be an interesting new part of the journey, to have some footprint over there.

Just at the personal level, my other half lives in France, so I actually commute back and forth quite a bit anyway. If one day it demands us to go to the West Coast, then so be it, but from a business point of view, currently the cake is large enough, and there are few enough of us trying to solve this problem, that it doesn't really require us to rush over there just yet.

I think it would take a sizable war chest to finally make that move. So this is where things have led me, and there isn't a huge pull to go anywhere else for now.

That's great. I'm not trying to suggest that you should. We're very, very glad that you want to stay, and we wish you all the best with it. As we touched on a bit earlier, you've got a degree in natural sciences from Cambridge University, and you also played rugby for the university team.

Then from there, you went to Imperial College London, where you got a master's in ecology, evolution, and conservation. Then came your PhD in zoology from Oxford, where you did yacht racing and were Blues captain. Eric, you're clearly a very smart guy, but can I take you back further to find out a bit more about where that comes from?

So where did you grow up and what was that like?

Yeah, well, I was born in France, in a place called Suresnes, which is just on the outskirts of Paris in the northwest. My mum's French, my father's English, and they moved here to the UK when I was two and a half. So I grew up in Dorchester, down in Dorset, and went to the local comprehensive. My mum took me to play rugby when I was six or seven, and that was a core pastime all the way through into my mid-twenties. My father is a sailing fanatic, might be the right term; I'm sure he wouldn't mind. My parents did some crazy stuff in the eighties, sailing a very small boat most of the way around the world.

So I grew up on boats before I could walk. Those things you mentioned from when I was at university, that's where those are rooted: very much in my childhood. I did a lot of things growing up, very outdoorsy.

Today, I still do a lot of hiking and climbing and skiing. My family are quite avid skiers as well. We were brought up with a very active childhood, not a lot of time for sitting inside, lots of use of the imagination and being very active.

So I'm sure the way that you're brought up has an impact later.

What did your parents do for careers?

My father was a software developer by trade, later moving into management, and my mum was a nurse. My dad is, I think, probably one of the most intelligent people I know.

My mum, I think, has a great mind for history and the arts. My dad is very logical and mathematical. So I think I've inherited a healthy mix from both of them. They're also people who, like I said earlier, believe we've got to remember why we're doing these things.

It's work to live, and they very much put that first. I really appreciate, from my childhood, that we were given all of those opportunities. I'm sure that they sacrificed a bit on their careers as well.

Do you have siblings? You've got brothers and sisters?

Yeah, I do. I have two younger twin sisters. Well, they're twins; they're not my twins. One lives in London, and she's a patent attorney, so I get lots of good free IP advice, which is great. My other sister lives in Spain. She's a translator, so if anyone needs any of our technical documentation translated into Spanish one day, I'll get some friendly service there as well.

Good stuff. What were you like at school? What were your favorite subjects?

Oh, it's a good question. I think I was probably a bit of a contradiction in terms in that, on one hand, I was very, very sporty, did a lot of rugby, athletics, tennis and so forth. On the other hand, clearly a bit of a geek, right? Looking back on it, I always think of my school days as being a slightly strange experience, in the sense that I had one foot in each camp. I had a lot of friends who were kind of the cool kids at school, because we played rugby together and I was reasonably good at it. At the same time, I was clearly a bit geeky and quite interested in my studies in a way that perhaps others were less interested.

What was I interested in at school? It's funny, you know, you were pointing out the pluralism of my career path. Well, I would say that my interests were relatively pluralistic. I mean, I could quite easily have ended up going and doing history at university. I really, really enjoyed history.

I think that's something I get from my mum. I did my history A levels, did very well, and I honestly could have gone to university to do history. I remember being at university, doing my actual work, then being a bit bored for a minute and deciding to pick up a history book and have a look at it, you know, have a look at the interesting pictures and read some of the interesting stuff. It's that kind of thing.

So, yeah. I did fine at English and things, but it was very much more in the sciences. It's an interesting point that maths never actually came super easily; it's something I've always had to work at really hard to be any good at. So that was probably one of the things I had to work the hardest at. The things that came a little bit more easily were the biological sciences. At the time, I did physics as well, and that was nice because I sort of thought of physics as being a little bit more applied. Of course, the reality isn't that, but it helped me make sense of the maths that we were being taught.

Good. I read also that you've got a real passion for travel, and clearly, looking at your CV, you're a pretty adventurous guy. You've worked as a divemaster in Honduras and Spain. What does that involve, and what drew you to that? I'm guessing that circles back to your sporty background.

So I have to say, I was very lucky in that my dad's a member of a scuba diving club in Dorset. So, my 12th birthday present was to go and see if I liked diving. I have to say, actually, my childhood hero was Jacques Cousteau, so I used to read a lot of Jacques Cousteau books and things. So yeah, I grew up doing a lot of diving, going diving every week with my dad, sometimes after school, sometimes at the weekend. Then it was kind of the natural thing: university summer holidays, go and work somewhere nice and hot on a beach and get paid to do the thing you like doing as a hobby.

So I did a bit of that, and that was really fun. Then, yeah, I did a lot of backpacking, South America in particular, and over in Asia with one of my university friends as well. I've just always really enjoyed travelling. I think, probably, it's not just travelling, right?

Really what it's about is the process of discovery. It's probably why I like doing what I'm doing now. It's probably why I like the academic side of the sciences: that process of discovery, finding something new, and that kind of enjoyment, that very special moment where, for the first time, you've experienced that new thing. There's something about that that's quite addictive.

I think that would be a common thread. In travelling, you're seeing new things, you're meeting new people, particularly the experience of meeting new people and exchanging with them, and seeing new stuff for the first time. That's a common thread, and it's the same with a PhD: there's a moment where you've analysed your experiment, for instance, you've tested a hypothesis, and at that point you're the only person who knows the result. There's something in that. And today, here at OctaiPipe, there are things that we're doing that other people haven't done yet; we're the only people that have done it. I think there's probably also a slight aspect of a healthy appetite for calculated risk.

This would be the other piece that probably comes through. So with the sailing, you know, I did a lot of offshore sailing, which is considered to be an extreme sport, so there's calculated risk. With travelling, there's an element of calculated risk: you're going into places that you don't know. You can easily think you know a place because you've seen it on the TV, but you don't actually know it. So there's that, and it's the same with business; we take a lot of calculated risks all of the time. The other piece about all of that is that you're ultimately only responsible to yourself. I mean, my team might take issue if they heard me say that, you know, I haven't forgotten about them, but the point is that you are not just a cog in a wheel; the decisions you make have very direct feedback onto yourself. When you're travelling, the decisions you make about where you go, what you experience, what you do to keep yourself safe, have very immediate feedback. When you're doing a PhD, the decisions you make about the direction of your studies and the boundaries you want to push have very direct feedback.

With business, when you're running a business and the decisions you make go well, that was down to you. And when it goes badly, well, there's no hiding: that was down to you as well. And I think that's where the enjoyment comes from.

Yeah. So just finally, it's a kind of obvious question to ask the founder of an early-ish stage company, but where do you see OctaiPipe in five or 10 years' time? What's the plan? Do you intend to float the business, or is it more of a scale-and-sell to a bigger tech consolidator?

It's a good question, because it's one of those questions where, I expect, the textbook answer is that I should have a really clear answer for you, all perfectly laid out, and that's what I should be telling everyone.

It's never usually like that.

So I will do my best. As a business, for us, success would be that we are, you know, the global standard infrastructure for building AI applications into edge computing that you can really trust, right? That's what we would like to see: that we've really enabled that part of the revolution.

What does that then mean in terms of plans for the business? I think that, ultimately, there is an exit to be had. Where that exit is, is an interesting question. Some would say that we should always aim for the IPO market, but I think today there's a question mark about the state of the IPO market.

The consolidation piece, consolidating up into a larger tech organisation, I think there's a clear route there, and there are entities out there that have already expressed signals of interest. No surprise that it's the likes of ARM, Intel and NVIDIA, these sorts of organisations.

This is an interesting point, because if you think about it, what does our technology really do for a potential acquirer? For an acquirer like them, ultimately their core business is to sell you high-performance chunks of silicon on which you can process data, and their success as a business rests on how much market share they have in selling that silicon.

And just look what the LLM world has done for NVIDIA recently in terms of its market cap. You might argue that LLMs are a catalyst for the sale of that silicon, because without the application side, that silicon doesn't do much by itself.

Yeah. Well, Eric Topham of OctaiPipe, thanks for joining me on the All Points West podcast. I wish you all the best with it.

Thank you very much. It's been a real pleasure.