Rail Technology Magazine Podcast

In this episode of the Rail Technology Magazine Podcast, we chat to Vaibhav Puri, Director of Sector Strategy and Transformation at the Rail Safety and Standards Board.

Vaibhav discusses in detail the RSSB's role in how innovation in rail should be regulated and controlled, and what needs to be done to make rail as conducive an environment for innovation and technological transformation as possible.

But as the industry gets to grips with the changing world around it, it also wrestles with how to look after the data it will inevitably handle. Vaibhav discusses how best to handle data, the frameworks that need to exist to ensure privacy, and the capabilities of emerging technologies to change how the industry works with it.

Vaibhav is an influential voice in the evolution of UK and EU railway legislation, including supporting the Department for Transport and the sector in transitioning seamlessly to a post-EU exit legal landscape. He explains how his role has evolved as the industry navigates its way through the changes that are happening.

Artificial intelligence is also discussed and we chat about its new role and its vast capabilities to fundamentally change the way rail looks in the future.

But with great change comes a race to ensure that the industry adjusts appropriately and the wider rail community is on board and up to date. Vaibhav discusses how the board is reacting and proactively helping to facilitate these changes.

We also discuss technology advancements in rail since Vaibhav started his role at RSSB, and what regulatory frameworks might be needed once Great British Railways is established.

What is Rail Technology Magazine Podcast?

Welcome to the Rail Technology Magazine Podcast. Keeping you up to date with the most current rail industry news, giving you an all-access pass to the key insights and innovations helmed by the decision makers in our industry.

Vaibhav Puri: The users and the supply chain in the private sector are keen to introduce innovation, and are keen to introduce innovation in areas where there's a clear use case... having the appropriate level of regulation and central intervention and checks and balances so that you don't stifle any development, particularly in areas where you can't predict... There was a focus on safety. Safety improvement was the goal. That was the thing we were trying to fix. If you can't provide a safe mode of transport, forget about anything else: your whole business model falls apart.

Voiceover: This is the Rail Technology Magazine Podcast, bringing you views, insight and conversation from leaders across the rail industry, presented by Richard Wilcock.

Presenter: I'm pleased today to be joined on the Rail Technology Magazine Podcast by Vaibhav Puri, who is the Director of Sector Strategy and Transformation at the Rail Safety and Standards Board. Hi, Vaibhav. You ok?

Vaibhav Puri: Very well, thank you.

Presenter: Wonderful. So can you just give me a little bit more about, yourself and how you've landed, with RSSB and your role in general?

Vaibhav Puri: Yeah, I've been with RSSB for a while. I wouldn't describe myself as somebody who is essentially from a rail background, but I've been in it for over 20 years now, so I guess that I qualify. I've done several roles in the past. I was the Deputy Director of Standards at RSSB, and the Head of Technical and Regulatory Policy before my current role, which I have been in for nearly two years now. And it's an area that essentially is cross-cutting. So for listeners who are perhaps not familiar with RSSB, we are an independent safety body. We look at aspects related to standards, safety, sustainability and a whole host of other areas. And we are independent of all parties: independent of the private operators running the railway, but also independent of government. Our role is to bring the sector together and collectively address the challenges that it faces. So, as we have specific areas like standards and safety, there is a requirement for a cross-cutting area, which is where sector strategy comes in, where we can explore, first of all, what are the cross-cutting themes across safety, standards, sustainability, health, all of these kinds of interesting areas, the challenges that the sector is facing. But also, and perhaps this is relevant for today, look at what are the future challenges and risks that the sector might face, and how we can be better prepared today to deal with them. A lot of the work that RSSB does is enabling. It helps the sector do things that it wishes to do: dealing with future technology changes, future challenges like weather resilience, the emergence of AI, for example. How do we ensure that those enabling functions are prepared?
We know what to do, we know what the roadmap is towards, I guess, making rail attractive, making rail an amazing mode of transport for its customers, but also essentially ensuring that rail keeps its place as a key center for infrastructure in the wider UK economy, and ensuring that we are playing our part in enabling innovation, bringing about change, but doing so in an ethical and safe manner.

Presenter: So the development of technology, as we know, is moving at a really fast pace at the moment. What needs to be done to make sure that regulation keeps pace with it?

Vaibhav Puri: Well, I think we can almost use AI as an example in that context, where there's always this tension between having the appropriate level of regulation and central intervention and checks and balances, so that you don't stifle any development, particularly in areas where you can't predict where they're going next. But at the same time, particularly with those kinds of technologies, there is a concern: well, precisely because we don't know where they will go next and how far their capability can be pushed, we need to have some principles, some boundaries, around what is possible, and pre-empt that as much as possible. And it's a difficult tension. You don't know where they're going yet; you want to have some degree of control and ethical application of those things to protect people, really, at the end of the day, from any harm. So it's a very interesting tension, and we have seen that with all sorts of technologies that have been introduced in the past, which were novel at the time. But I think AI in particular offers an interesting challenge, where the technology is developing so rapidly and is so fundamentally linked to human behavior, in some ways mimics human behavior, that I think there is a set of unique challenges it brings about. Can the regulators wait for the technology? So often in the past, a rough model would have been: new tech comes along, you see the first iterations of it applied, and that first mover essentially becomes the template for understanding what works and what doesn't work. That's true of everything. But with AI, because the technology is so rapid and potentially so complex, and its application can be in various places, there is an interesting need to really think about what are the questions that we need answers to now.
Is there a way of coming up with some principles so that the development of this technology is guided a little bit more, so that it avoids the pitfalls? Where something was perhaps acceptable for another technology, for AI you might say: I don't really want you to go there. Now, that's easy to say and very difficult to do. And you could almost see two models developing. I don't know if you're familiar with the EU AI Act, which I think the Parliament approved in December 2023, likely to come into force sometime in 2025. I think it is one of the first true regulatory interventions. It sets out these categories of AI applications, so it goes from unacceptable risk to high risk to limited risk and, I think, minimal risk. And it has attached some principles around what it considers to be unacceptable. So 'unacceptable' is something like: if it impacts people's cognitive abilities or can manipulate them, then it falls into that category. High risk would be a product which has an existing safety regulation, children's toys, for example, and so on. So they've gone through that approach. And then there is an element of identifying the technologies that underpin something like AI, so machine learning, et cetera. What do you do with generative AI? Is that a general-purpose AI? And is the emphasis there on transparency, versus something where, let's say, you're using AI in the legal profession, where a more stringent approach may be required: let's say judges, and I'm making this up slightly, but judges using it to help with a decision. If you go into that sort of fairly sensitive space, then some degree of high regulation will be required. It's not quite unacceptable, but pretty stringent or robust elements will be required.
So they've gone down that route of essentially identifying, I guess, red lines. Now, I'm sure they're not static red lines, but red lines for now, that will shift as technology changes. UK policy, if you look at it, is slightly different: the UK hasn't gone down the route of identifying categories of risk. It's more of a spectrum. Sorry, go on.
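The tiered structure Vaibhav describes can be pictured as a simple lookup. Here is a minimal Python sketch of that shape; the example use cases and the keyword tests are entirely my own assumptions for illustration, not the Act's actual legal criteria:

```python
from enum import Enum

# The four EU AI Act tiers as described in the conversation: a toy triage, not legal guidance.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity assessment before deployment"
    LIMITED = "transparency obligations, e.g. disclosing that a user is talking to AI"
    MINIMAL = "no specific obligations"

def triage(use_case: str) -> RiskTier:
    """Illustrative mapping of example use cases to tiers.

    The keyword checks below are placeholders standing in for the Act's
    detailed tests, purely to show the tiered decision structure.
    """
    if "manipulat" in use_case:  # cognitive/behavioural manipulation
        return RiskTier.UNACCEPTABLE
    if any(k in use_case for k in ("safety", "toys", "judicial")):
        return RiskTier.HIGH
    if "chatbot" in use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("subliminal manipulation of passengers"))  # RiskTier.UNACCEPTABLE
print(triage("AI component in a train safety system"))  # RiskTier.HIGH
```

The UK's spectrum-of-risk approach, by contrast, would not map cleanly onto a fixed enum like this, which is roughly the difference Vaibhav goes on to draw.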

Presenter: And why do you think that is? Why the difference?

Vaibhav Puri: I think the EU approach was primarily because they are dealing with 27 member states. I think there is an element of some principles and some central regulation being a driver, because it's a big market, and if you want to place some of these products on the market, then some form of regulation may be required. I think the UK's approach has been more pragmatic, in the sense of being more open around the innovative aspects of it. So: do we go too hard, too quickly, so as to stifle innovation? Or do you provide a more high-level set of principles: well, this is a spectrum of risk, you can have different types of assurance, and essentially you apply a very risk-based approach to understanding where something can be applied. And perhaps not categorizing something as unacceptable, as the EU approach has, maybe. Now, who's to know what's better? I suspect there'll be a degree of convergence of those sorts of approaches as new applications come on board, and people will start to understand what is the best way of looking at some of these technologies and what is the best way of ensuring we understand what they do. Really, that's what it comes down to.

Presenter: So is there a degree of push and pull between the industry, the TOCs and the regulatory bodies like yourself? Where does the responsibility lie in terms of regulating the new tech? Is it something which you're looking at primarily and driving forward, or are TOCs and other operating companies and the industry in general coming to you first?

Vaibhav Puri: I think it's a collective. It's obvious to say that, but there's a bit of collective responsibility here, because the regulator is coming at it from a particular perspective of ensuring harm doesn't occur. Then there are the users and the supply chain. I will expand on that a little bit: you have operators, plus the supply chain that supports them, in the private sector, who are keen to introduce innovation, and keen to introduce innovation in areas where there's a clear use case. I think our role in that becomes: well, how do we create opportunities where those use cases can be explored and applied and tested without breaking the system? Can we provide opportunities to do that? So I think there is a degree of push and pull, but there's also a degree of rail setting some clear messages, some clear flags, around what it expects from the market. So you can't be completely market-driven; you can't just say the market will decide. The challenge we face in the rail industry in particular, because of where we are with the general challenges of the railway, the cost constraints and performance issues, is that short-termism could become a dominant view: you have to just run the railway, and we'll deal with things like AI and tech when they come along. So essentially the supply chain will provide technology, and if it can be integrated, so be it. Another approach could be for the rail industry to show some very clear pathways: these are the areas we want to fix, and we're very open, this is messaging really, we are very open for the supply chain and the operators to find use cases to apply this new tech. But just saying that is not good enough, because essentially you've just pushed the risk onto the operators and the supply chain. So the job really of organizations like ours is: how can we de-risk that a little bit?
How can we provide a framework and a messaging approach so that the operators and the supply chain feel confident that, okay, I will introduce this new tech, I'll introduce AI in XYZ, and I feel there's a support structure available to me from organizations like RSSB, because a lot of these people are our members, to help me through that process. And the regulator is behind that help. It's not just independent help; it's with the backing of the regulator. So I think there's a space there to offer encouragement. So yes, there's push and pull, but if you don't offer encouragement, I'm not sure there will be too much of a push, because people will say, well, okay, then we'll just run the railway for now, because there's a bit of a risk in introducing something novel for the operators and the supply chain.

Presenter: And the industry itself is rather risk-averse anyway, so obviously there's got to be that sort of collaborative nature to it, certainly. Coming back to AI in particular, what is being done currently to regulate it? Are you retrofitting previous regulations, or are you looking at completely new legislation for the technology?

Vaibhav Puri: This is where the two worlds almost, not collide, but I guess there's a bit of an overlap, because there are existing safety and product regulations around the things that the railway uses: assets, functions, et cetera. And you could say, well, that's enough. You have to make sure a train is safe, so what else is there to really think about? But then alongside that is this technology, like AI, being introduced into that environment. So you have your existing regulations, but technology being introduced which, clearly, at an EU level, will start to get regulated. So that pathway, in this big market next to us, is happening already. So there is that to contend with; whether we like it or not, that's going to start to happen. So I think the challenge is one of integration. You might have generic regulation of AI as a technology and an area and an application, but obviously it will not be applied in isolation. It will probably fit into a system, a complex system, and then essentially will do certain functions in that system. So the real question, almost even before legislation, because timing is not our friend in some ways, is that application might happen now, and there's no regulation. And then you say: what are we to do now? Do we just wait and wait and wait? Or do we really try and understand what the right thing to do is? So it really pushes the emphasis onto assurance. If somebody were to bring some sort of machine learning capability inside a system, and it has to be deployed on the railway, what does assurance of that look like, so that everybody has a degree of confidence? And then there's a different question around to what extent that assurance has to be signed off by somebody or checked by somebody; that's almost the regulatory layer. But at the most basic level, regulation or no regulation, if you buy something, you need to assure yourself it's safe, it works, it's ethical, it will not do funny things.
I think the focus from our side, and I'm speaking from an RSSB perspective, is much more around those frameworks of: how do we ensure the railway, as a buyer of these things and an integrator of these things, understands the implications? So I'll give you an example. We've been working with various parties to really explore that, and this year we'll really, and maybe I can come back and tell you more in a few months, really explore what assurance of machine learning, so to speak, if I don't use the word AI, just machine learning, looks like. That will be massively helpful in understanding how these things get integrated, because the challenge is not the AI bit; it is how you integrate this into whatever you wanted to do on the railway. And one of the things that people often focus on is all of the cool stuff, right? The algorithm and the training of the AI, all the lovely stuff. But believe it or not, '75%', and I'll put that in inverted commas, of the effort is really data preparation. So if you think about AI and machine learning, why is it that dramatically different to any other software? At the end of the day, it's software, right? You get a piece of software and it works. And very few of the AI systems, the stuff you see in cars, are interacting with the operational environment live. What tends to happen is the data will go back, and then somebody somewhere will very quickly retrain this thing and then push another software update onto the scene. You're living in an environment where that sort of software-related thing is happening. So all AI and machine learning is, is an executable: it's a data model that becomes a software, and it gets trained. So the big emphasis here is not on the coding bit, because with your traditional stuff you look at the code and you can say, oh, this code does this; you can look at the bugs in it. With AI, the challenge is ever so slightly different, in that you have an algorithm that you are training with data.

Vaibhav Puri: Some data that you have collected which you believe represents reality. This is the key bet. So how close is that data to the ground truth?

Presenter: Yeah.

Vaibhav Puri: So the preparation of the data to train the model is where a lot of the effort is, because you will make some interesting decisions there around what data you give it. Because, as I think somebody said, the only real bit of artificial intelligence is the artificial bit: it will learn whatever data you give it. And then how do you deal with outliers? Somebody preparing that data might decide that these 2% of the population, that's noise in the data.

Vaibhav Puri: I'll take it out, because that's where all the money is; the training is where all the cost comes in. I'm going to shave off the outliers to make this data cleaner. You train the AI, the AI learns from the core data and performs brilliantly, let's say. But in the real world it's encountering outliers, and it doesn't understand what they are. So the data preparation effort is huge here. So I think one of the big things we are trying to do is really get industry and everybody else to focus on this data preparation thing. It's not a little kit that you can just point at stuff; it has to be trained on data. Do you have the data? Have you coded things properly? And what's the process to be applied when you, I guess, prepare that data for the use and training of ML? I think that's a new thing. With traditional software it was: we just code, and debugging, somebody else will do that. Now it is: okay, training, who trained it, what was excluded, what wasn't, and for what reasons? So I think those become interesting.
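The outlier-trimming trap Vaibhav describes can be shown with a toy model. A minimal Python sketch, where the dwell-time scenario and all the numbers are invented purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical station observations: (passengers boarding, dwell time in seconds).
# Two "messy" event-day points sit far from the everyday cluster.
obs = [(20, 30), (30, 35), (40, 40), (50, 45), (60, 50), (300, 400), (320, 430)]

# A data-preparation step decides the rare event days are noise and shaves them off.
core = [(p, d) for p, d in obs if p < 100]

a, b = fit_line([p for p, _ in core], [d for _, d in core])

# The model performs brilliantly on the core data it was trained on, but when
# it meets an event-day crowd in the real world, it has never seen that regime:
predicted = a + b * 300
print(round(predicted))  # 170 seconds, far below the ~400s actually observed on event days
```

The 2% that got shaved off as "noise" was exactly the part of reality the deployed model ends up facing, which is why the data preparation decisions carry so much of the risk.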

Presenter: So what excites you the most when we bring it back to rail? What excites you the most about this technology and what also equally terrifies you the most about it?

Vaibhav Puri: I think the exciting bit is incredible, isn't it? Hopefully I don't have to sell this. The pace of change and the capabilities are amazing. I think it was 1997, I could be wrong here, when IBM's Deep Blue beat Garry Kasparov at chess. And then I think 2016 is when, I can't recall who it was, but essentially the software defeated the champion at Go, which is like the most complex game ever. Just 2016; we're not that far away from where we are now. And the speed of improvement is unbelievable. So it's clear that this thing can outperform humans in certain things. We also know it can't do certain things. Humans can do things that we never really cared about; we just assume, because it's a human, they'll be able to figure out the difference between a cat coming in front of a car versus a human. All of those assumptions that we think, oh, humans will figure it out. So this thing is incredibly exciting because it can pick up, and I can't take credit for this because somebody else said it to me, it's just the ability to find the needle in the haystack. That ability machine learning has, and it outperforms humans already, and will continue to do so as it exponentially improves. So it's exciting that it can do that. And that, and the fact that it's black-boxed, is precisely the reason why it's terrifying.

Presenter: The idea that it can find a needle in the haystack, and you're quite right there, it is quite terrifying, but equally exciting. And I guess with regulation, if I'm hearing you right, it is not so much about controlling it; it's about maintaining correct procedure for applying it, really.

Vaibhav Puri: Absolutely. I'll just add one thing there, just to push the haystack analogy: it finds the needle in the haystack, but you may not be able to find out why and how it found the needle. And that's just the inherent reality of this tech, because if you use a neural network, it will have lots of hidden layers, where even the developers do not actually know why it found that needle and how it found that needle. So the brilliance is, certainly, you found the needle. But is that your needle? Is that the needle you needed? And you don't know how it arrived at it. I think that's the terrifying bit.

Presenter: Yeah. And I guess over time, as we learn more about this tech and we work with it more, maybe that will become apparent. But like you say, we're taking giant strides at the moment, although in the grand scheme of things they are quite small, incremental steps. Part of your role is involved with safety standards in rail. How have you seen attitudes shift around safety standards, and safety in rail in general, since you've been at RSSB? And where do you think it can be improved? What are the areas where you can see real, genuine improvement over the years?

Vaibhav Puri: Well, I think the one obvious shift that certainly I have seen is from where RSSB began, after Ladbroke Grove and the bad rail accidents of the 90s. There was a focus on safety. Safety improvement was the goal. That was the thing we were trying to fix. If you can't provide a safe mode of transport, forget about anything else: your whole business model falls apart. There's been a shift from that to, perhaps, a recognition now that safety is part of the wider picture. Safety doesn't sit in isolation.

Presenter: No.

Vaibhav Puri: Safety decisions have to be taken in the context of cost; there are trade-offs involved with performance; there are trade-offs involved with other, extended aspects of safety, or maybe even health, and, let's say, even running a greener mode of transport. I think there has been a recognition of that shift. So you've gone from safety being the goal to safety being part of the wider decision-making framework. You've gone from 'we need to improve this thing and therefore we need to tell people what to do', which was perfectly right for that context, to 'no, we need to increase the choice space for people so that they can account for all of the factors and reach the right decision'.

Presenter: What do you mean by choice space?

Vaibhav Puri: Well, I think the choice space to make the right decisions, so that this trade-off between, let's say, safety needs and performance needs or cost constraints can be managed in a more optimal fashion. You can tell people what to do: whatever the cost, you have to do this. That's one way of looking at the world, and perhaps that was the right thing to do in the 90s. But safety performance is a lot better now. We are the best, if not one of the best, safety-performing railways in the world. So you're now in a different paradigm. And in that paradigm you then ask yourself, if you've made all the safety improvements, what guidance are we providing? I think what you're essentially trying to say is: we now write rules. They're not even safety standards; they're standards. And essentially they are rules that allow for one component to work with another. And we have to write those rules in such a way that allows for all the different types of scenarios: not to overfit, not to create one-size-fits-all solutions, but to create a set of rules that allow the users and the supply chain to go: well, I think I'm facing this type of situation here, and the risk here is lower than what is generally faced across the railway, so I can come up with a unique, more bespoke, more specific solution, a more optimal solution for my bit of the railway. How do you allow for that? So this is a very interesting tension: standardization of approaches as opposed to standardization of solutions. So I think there's that shift, from a single solution, safety being the main thing, to safety being part of a bigger decision framework. And you have to provide a way so that all of the people approach the same problem with the same rigor, but may come up with different answers to suit their needs.

Presenter: I guess it's baking in the thought process from the very start, isn't it? And like you say, years ago when Ladbroke Grove happened, and afterwards, it was all about safety, rightly so; it was about making the railway safe. Now it's about sort of future-proofing the railway, and I guess that's probably where the biggest shift has been in those ideas. Where do you think it can be improved? Is it a case of just carrying on with what we're doing right now? Because, like you say, we are a reasonably safe railway.

Vaibhav Puri: We are safe. But once we get to that paradigm where safety is part of the bigger picture, the bigger picture becomes the opportunity. It's not necessarily the safety that's the opportunity, but how do you make improvements? And there's still a long way to go. There are some bits where safety improvements can still happen, but how do we do it so that it's baked into the railway becoming more efficient? A safer railway doesn't have to be a costlier railway; it can be a better railway. You can come up with safety improvements which are perfectly aligned with reducing cost on the railway, in the long run and in the mid-term. So once you recognize that it is part of a bigger picture, the question then becomes: how do you trade those things off? How do you decide what's important in different circumstances? So the problem question has almost changed. You're now saying: well, I have these many objectives, safety being a very important one, and a legal obligation. How do I navigate this decision space to decide what the best thing to do is, so that I remain safe, and I'm still able to add innovation, and I'm still able to reduce my costs whilst remaining safe? That's a very difficult question, and I think the place where we can improve, and I can certainly speak for RSSB here, is that that kind of trade-off requires a more sophisticated modeling approach, better decision support, better data, a more mature way of looking at some of these situations, so that safety doesn't essentially become a sort of barrier. And I think that's a space where we can do a lot more work, so that the burden on the sector reduces and those decisions are taken with a degree of confidence, as opposed to fear that, oh my God, I'm taking a risk here; if I do that, then I'm exposing myself to safety liabilities.

Vaibhav Puri: We want people to go and innovate and change and improve stuff, because we know the railway needs to improve, but to do so with a degree of confidence. So I think that's the thing. Better models, better approaches, and how we get everybody along so that they can see this wider picture: that's the tricky bit.

Presenter: And I just wanted, the last question I just wanted to touch upon was, railway's relationship with data is a constantly changing thing. And as we move towards smarter tech, a digital railway, do you envisage many changes in how data is handled and ultimately regulated?

Vaibhav Puri: I think so. As we just talked about with AI, in some ways the data is almost more important than the algorithm, and I think that recognition is definitely there. And I think the white paper talked about open data; data open by default is the policy of government. The challenging thing here is how we promote data interoperability, which I'll unpack a little bit: providing data in a format and in a way that is aligned with standards. So there is a new data and telematics standards committee that RSSB manages on behalf of the sector, so the governance structure is there to create standards. So you have data that's aligned with standards and is published in a way that's machine-readable. If we recognize where the future is headed and how this data is going to be used, how do we go from that position to a machine-readable set of data, which is essentially zeros and ones, really, that AI can read? It's really embracing that challenge; it's really understanding what are the things you need to standardize. There are some obvious ones: time, stock and crew systems, headcodes, basically unique identifiers for trains. These are some of the things that are in the pipeline for the railway to really think about: how can we standardize these and publish them in a way that can be used effectively, so there's a degree of consistency? One of the first things is being able to uniquely identify a train service. You have to start there. And I think that's where a lot of data standardization will become important, because in my view, the best AI use cases, if I tie the two together, will be in areas where there is a very strong amount of standardization and codification of the environment in which AI is performing.
If you want to apply AI in an image environment, and you already have a library of images with the right tags already in place and standardized, and everybody accepts it, now, if you're a coder, you know how to deal with that. You can write an algorithm that recognizes them. So I think in places where we make good progress on data and data standardization, accepting it and writing it in a way that is machine-readable, AI almost becomes more enabled. Plus, there's a lot of work going on on the Rail Data Marketplace, which is essentially an approach for the sector to create a platform so people can share data with app developers and others. RSSB has been heavily involved; the Rail Delivery Group has been leading on that, with our help. And that platform essentially is going to provide the marketplace which allows publishers to say: here's my data, I've codified it; app developers, you can buy this, use it for your purposes, et cetera. So I think that's very exciting. That, combined with better standards and targeting things in areas to say, let's fix this problem of unique identifiers first. If we start to go on that journey, I think we will create an environment. So I'm not sure it needs legislation, but it needs standards, agreements, and ways in which people can share data, commercially and in an open sense.
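What "standards-aligned, machine-readable" data means in practice can be sketched very simply. In this Python example the record fields, the schema check, and the headcode pattern (a UK-style train reporting number such as "1A23") are my own illustrative assumptions, not any actual RSSB or committee standard:

```python
import json
import re

# Hypothetical convention: a headcode is digit, letter, two digits, e.g. "1A23".
HEADCODE = re.compile(r"^[0-9][A-Z][0-9]{2}$")

def validate_service(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable as-is
    by downstream machine consumers (apps, ML pipelines, the marketplace)."""
    problems = []
    if not HEADCODE.match(record.get("headcode", "")):
        problems.append("headcode is not a valid train reporting number")
    if "departure_utc" not in record:
        problems.append("missing standardized timestamp")
    return problems

service = {
    "headcode": "1A23",                       # the unique identifier for the service
    "departure_utc": "2024-03-01T08:15:00Z",  # one agreed time format, not many
    "stock": "Class 800",
    "crew_depot": "Paddington",
}

print(validate_service(service))            # [] means it passes the toy schema
print(json.dumps(service, sort_keys=True))  # machine-readable, consistently ordered
```

The point of the sketch is the order of operations Vaibhav describes: agree the identifiers and formats first, and the data becomes something an algorithm, or a marketplace consumer, can rely on without bespoke cleaning.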

Presenter: Vaibhav, thank you very much for your time today. It's been an absolute pleasure.

Vaibhav Puri: Thanks very much.

Voiceover: You've been listening to the latest podcast from Rail Technology Magazine. Don't forget to like and subscribe to make sure you receive every new edition.