upGrad Enterprise aims to build the world’s largest GenAI learning initiative to enable high-growth companies to embrace technology’s transformative business impact. Hosted by Srikanth Iyengar, CEO, upGrad Enterprise, the GenAIrous Podcast will curate an exciting roster of global experts and guests who are at the cutting edge of Generative AI and its varied applications in the world of business.
Srikanth Iyengar (CEO, upGrad Enterprise) in conversation with Kit Burden (Partner, DLA Piper)
[00:00:00] Srikanth Iyengar - CEO, upGrad Enterprise: Welcome to the GenAIrous Podcast, where we unravel the fascinating world of generative AI and its transformative impact on businesses globally. I'm your host, Srikanth Iyengar, CEO of upGrad Enterprise. At upGrad Enterprise, we're building the world's largest GenAI learning initiative, and we're empowering high-growth companies to leverage cutting-edge technology. Each week, join me and a roster of global experts as we explore innovations shaping the world of work as we know it. Are you ready? Let's get GenAIrous.
Welcome to another episode of the GenAIrous Podcast. It promises to be a very exciting discussion. Today I have with me Kit Burden, member of the Global Leadership Team at DLA Piper, one of the world's top three law firms, working across a gazillion countries with revenues of over a few billion dollars. Kit, thank you and welcome to the show.
So Kit, first of all, I believe congratulations are in order. I know that DLA recently won an award, one of many over the last few years. So you must be proud.
[00:01:14] Kit Burden - Partner, DLA Piper: Oh, it's very good. Yes. Well, we won the Law Firm of the Year award at the Global Sourcing Association Awards this week, which is a nice award to win because it's not one awarded just by other lawyers; it's an industry award, and it's always nice to get some recognition from your clients and contacts. So yeah, thanks very much. It was a nice night.
[00:01:32] Srikanth Iyengar - CEO, upGrad Enterprise: Congratulations again, and for our listeners, I must say Kit and I have known each other for probably 15 years, so I know how good they are. I still bear the scars, but you know what? I learned a lot as well. It was a good discussion, so thank you for that. And Kit, thank you for taking the time.
As you know, we, through this podcast, invite thought leaders exploring the topic of generative AI from multiple perspectives. At DLA Piper, given your specific role, you sit at the cusp of technology and legal, which is one of the key issues that's facing a lot of decision makers and practitioners around generative AI today. So I'd love to get into a bit of a detail on your perspective on how the space is going to pan out.
[00:02:20] Kit Burden - Partner, DLA Piper: Clearly it's a huge inflection point for enterprises and individuals worldwide. Of course, AI has been around for a long time in various different forms, so in that sense it's not new. But you're quite right to pick up GPT as being a major stepping stone in the development of AI and the public consciousness, because that was the point at which generative AI became better known to the rank-and-file members of the public, and its capabilities, I think, became better appreciated.
[00:02:50] I think what we can see from here is that the rate of change is going to be monumental. We are into that hockey stick type curve of speed of development, where with AI being utilized to train new AI and to further develop it, the pace of improvement is going to be monumental. From a legal point of view, we do have a real challenge, if we are honest, because one of the things about law is that, generally speaking, it doesn't move that quickly.
[00:03:20] It takes a fair amount of time for regulators or legislators to come around to reaching a common view as to what should be done to solve any particular issue of the day and then to go through a process. Particularly so if you want to have laws and regulations which are harmonized across multiple different jurisdictions and different types of law.
[00:03:42] Technology doesn't wait for that to happen. Technology moves on at a rapid pace anyway, but particularly so with AI. So the first and most important challenge is, can law and regulation even keep pace with the way in which AI is going to develop over the coming years? There is an attempt to create more of a framework type approach, which will enable more flexibility going forward. But in the meantime, we're left with a situation where there are relatively few restrictions.
[00:04:14] But one of the most important questions is not so much "can I do it?" but rather, ethically and from a social responsibility perspective, "should I do it?" And that's, I think, a really key question for a lot of the decision makers today.
[00:04:28] Srikanth Iyengar - CEO, upGrad Enterprise: No, that's a great point, Kit. So there is obviously, as you said, the letter of the law, if I call it that, the regulation, but then there is the spirit.
So given the fact that enterprises are wading through this with varying levels of knowledge across senior executives and practitioners, you know, what would your advice be to these companies? What kind of governance processes should they set up to ensure that, you know, they stay on the right side of the spirit as well or of the self-policing aspect?
[00:04:58] Kit Burden - Partner, DLA Piper: So I think there are a few absolute requirements. You do need to look at the laws as they currently exist, and in that regard you are probably inevitably going to look at the EU's new AI Act. I say inevitably because, although at first blush that's only relevant to organizations who are operating in the EU or who are looking to do business in the EU, that obviously casts a very broad net in and of itself.
[00:05:24] I think we can also see the EU's approach as an initial high watermark, which other jurisdictions around the world will probably seek to emulate, in exactly the same way we saw with data protection in the form of the GDPR. If you look at the EU's approach, firstly it specifies some things which it just says you should not use AI for: things which seek to take advantage of vulnerable groups, for example. But then it also sets out a number of what it calls high-risk activities: uses of AI in the context of HR and employment, in the context of safety systems, and in critical infrastructure, just as examples, and where you are going to be utilizing AI in those areas.
[00:06:07] And obviously that's a pretty broad area of application. You have to apply specific safeguards and processes, and those will include some relatively common-sense safeguards and steps which should then be built into any governance process. It means ensuring that you're utilizing the appropriate data to train these AI models in the first place, recognizing that the old cliché of rubbish in, rubbish out still applies to AI: if you have an inherent discriminatory bias in the data you use to train the tool, that's what the tool is going to take as gospel going forward.
[00:06:45] So you absolutely need to ensure that you've got the right data sets to begin with. Then you need to ensure that there is transparency as to the way in which the AI is then going to make its decisions based on that data.
It needs traceability of those decisions so they can be justified in due course, were that to be required. You need to ensure that there is a human element in the loop at an appropriate point. So you need, therefore, to put a governance structure in place in your organization now, both to consider the potential applications for AI and to ensure that, when it is implemented, it is implemented with those restrictions and requirements in mind.
[00:07:24] Yeah, I think that has to be the touchstone, because if you don't, the risk that you will inadvertently fall foul of either the EU AI Act or similar restrictions coming into force in various jurisdictions around the world is much increased. And what you can't do is wait to see what's going to happen in the US, Japan, China, or wherever else, because if you do, you run the risk that some of these new requirements may have retrospective effect, and you might need to pull out of or backtrack on applications that you've already put into use in your business.
[00:07:58] Srikanth Iyengar - CEO, upGrad Enterprise: Absolutely. Data is clearly at the heart of it. Two questions that I hear from a lot of clients and stakeholders when it comes to data: first of all, there's the aspect of privacy, of what sits within the firewall. Because if you're using an open LLM, there is the aspect of data crossing the firewall, and a generative AI model is such that once it's got the data, it learns on it.
You can't pull it back in that sense. That's the first aspect. But the second aspect is that a lot of models are now also being built on synthetic data, which in some ways amplifies any biases that already exist. So, any perspectives on both those aspects, the firewall and synthetic data?
[00:08:21] Kit Burden - Partner, DLA Piper: It's very important to remember that an AI application, or any development of an AI application, doesn't exist in a vacuum.
So we've talked about the new laws that may come into place to deal with AI, but we mustn't lose sight of the fact that there are other relevant and existing rules which might apply. And obviously data protection is first and foremost amongst them. So you may have as an organization, lots of personal data, either relating to your employees or to customers or other third parties that might be very valuable, maybe absolutely perfectly suited for use for training your application, but do you actually have the legal right to do that?
[00:09:16] Because your usage of that data may, at law, be restricted to the purposes for which it was gathered and not anything else that you might, in an ideal world, want to do with it. Equally, that data may have been provided to you by somebody else, whether a market data provider or some other contract counterparty, and your use of that data may then be subject to contractual restrictions.
[00:09:39] And again, have you actually got the right to apply that data in the way that you want to? Now, synthetic data might get around some of those issues, in the sense that it isn't personal data because it can't identify a living individual; it's not going to be linked to anybody in particular. If it is synthetic and you've created it, okay, that gets around permissions. But is it suitable? Is it accurate? Is it actually tied to the real-world issues your generative AI tool is intended to address? Because if it isn't, precisely because it is synthesized, then you run the risk of training a tool to do something which is going to be inherently flawed from day one.
[00:10:19] Because all it's going to do then is to reflect whatever biases or inherent challenges exist in the core set of data which you synthesized from.
Srikanth Iyengar - CEO, upGrad Enterprise: Absolutely, I think the amplification of that bias is a key concern. Couldn't agree more. So, a question around your advice to your clients. You're talking to Fortune 500 and FTSE 100 companies and European leaders at board level, and this is very much a board issue. I recently saw a report, I think from a US publication, which said that for 40 percent of boards today, generative AI is, if not the first, then among the top two items on the agenda when they meet. So what would your advice be to boards and leadership teams when they start thinking about this?
[00:11:13] Kit Burden - Partner, DLA Piper: Well, it's a really good question. I actually look at this from two perspectives because aside from advising the boards of many of my clients, I actually also sit on the board of DLA Piper International. So I see it from a rather selfish perspective as well. What I would say is this: For many organizations, AI is an existential issue. So it isn't one of the things which you can look at and think, well, maybe we will make use of this. Maybe it's something that we should be interested in.
You have to be, because if you're not, one of your competitors will be, or your customers will be, and they will be expecting to see the benefit of your application of AI in the delivery of services or products to them. And if you're not making use of it, you can assume that one of your competitors is, and you will very quickly be left behind.
[00:12:00] Equally, if you don't approach the use of AI properly, the potential for damage to your business, be it reputational or economic, is likewise huge. So in the technology space, we do occasionally run the risk of overhyping things. Blockchain is a good example. Blockchain has been very, very important. It's been applied to great benefit in many, many instances, but it was never the kind of tsunami of change that was predicted a few years back. And that's left a few people a little bit more cynical. I don't think there is any chance that AI will suffer the same fate. I think it would be quite the contrary.
[00:12:37] I think we will see that the pace of change is going to surpass our expectations and that is going to leave a lot of organizations struggling to catch up. If you're on the board or a senior executive position and you've not already got AI as one of your top two or three agenda items to every major meeting you have, I would suggest that you should be changing your approach and making sure that you've got the right people within your organization providing you with the advice that you need in terms of identification of use cases, setting up the necessary conditions for success in terms of its use. And also obviously looking at the legal and regulatory framework.
[00:13:14] Srikanth Iyengar - CEO, upGrad Enterprise: On the other point that you touched on, I just want to get into a bit of detail. At DLA Piper you work in every major jurisdiction in the world, with over 4 billion in revenue and a few thousand lawyers. Obviously, the legal profession itself is also impacted by generative AI, and you sit on the international board of DLA Piper. So how do you think law firms will adapt to this?
[00:13:39] Kit Burden - Partner, DLA Piper: Well, there's no doubt we're going to be significantly impacted and law firms and the legal sector generally have been conservative with a small C for many, many years. So we have used technology, we have digitized our services in many ways, but at heart the practice of law has not changed very much over the course of the last few decades.
[00:14:00] That is going to change and the question is whether our lawyers become what we call internally within DLA ‘Iron Men and Women Lawyers’, where AI augments what we do and improves our service delivery. Or does AI replace the lawyer such that it actually becomes more of a threat to us? I think that the truth is that there are elements of both.
[00:14:22] So the legal profession of the future is going to have augmented lawyers, but there will be fewer of them, because there is a large number of tasks currently done by lawyers that will instead be done by AI. And I think this comes back to a question of trust and data, because if we think of lawyers like any other profession, questions arise.
[00:14:45] Well, why do you trust the doctor? Why do you trust the lawyer? Why do you trust the accountant? Is it because they're really nice people or is it something else? And the reality is we trust them all because they're maybe from a profession, but then when you scratch beneath the surface of that, they're in a profession because they've had access to learning books, materials, and therefore data.
[00:15:06] So ultimately, you can say that in all these kinds of professions, you do trust data. When you turn to AI, it's just a different form of trusting data: it's now data being manipulated by an application, which can do it faster and potentially more accurately than a human being. So when you have businesses and people who are willing to trust in the AI, as opposed to trusting in a professional, you can see the threat to that underlying profession, because then you could see a situation where people are willing to trust in the app and what the app is telling them rather than what a lawyer is doing.
[00:15:44] And I can already give you examples of that. We have Copilot, for example, sitting on our browsers now at DLA. There will frequently be times when I could ask one of the junior lawyers in my team to do me a memo, or I could ask Copilot, and the reality is that Copilot is going to do a job which is pretty good 99 percent of the time, massively faster, and at relatively little cost. But in terms of other applications, there are many examples of law firms, ourselves being just one, who have already used AI to develop tools. We have, for example, one called Ascension, an anti-cartel and anti-competition tool, which goes into our clients' systems, roots through unstructured data, emails, et cetera, and identifies indicators of anti-competitive or cartel behavior, doing so in a way that historically would have taken armies of lawyers and hundreds of mandates. Instead, it can now do it on an automated basis using AI self-learning techniques. We're now also exploring the use of AI tools to do augmented contract drafting, in the sense of not just automated document creation but actually smart
[00:17:03] markups, so that you're applying changes to different types of contract, different wording and different cross-referencing, to do the kind of mass contract review and remediation exercises which often come around and cost customers potentially millions of dollars or pounds, but which might now be reduced to tens of thousands. And again, you might say that's an example of lawyers cannibalizing their own work, but it's a classic case of: if we don't do it to ourselves, somebody else will come along and do it to us.
[00:17:39] Srikanth Iyengar - CEO, upGrad Enterprise: A fantastic perspective, the competition that one can't see. I completely agree. The only thing I'd say is that, in my view, the devil is in the detail. As we both know, it's about data discovery, and then it's about interpretation and, like you said, the incisive knowledge that lawyers build through years of practice. Now, through AI tools and the technology, that discovery process and access to information are level set.
[00:18:00] So it comes down to how you can be a better lawyer, how you can be the best at your profession. In some ways, while there might be fewer lawyers, I think it'll probably push lawyers to raise their game, if I could say that.
[00:18:12] Kit Burden - Partner, DLA Piper: I don't think that lawyers will disappear as a profession entirely; we will not simply disappear overnight. What it will mean is that there'll be fewer of us. But those of us who are left will hopefully be more in the nature of trusted advisors to our clients, able to offer more strategic and well-rounded advice, because the better lawyers already are not pure black-letter-law advisors.
[00:18:35] They're people who can offer commercial insight and guidance based upon their experience. And I think it just is going to be even more so in the future. So I think that for those who remain in the profession and the lawyers of tomorrow, it can be an incredibly exciting profession. It will just be a very different one to the way it was when I first entered the profession 30 years or so ago.
[00:18:57] Srikanth Iyengar - CEO, upGrad Enterprise: Absolutely. And then, Kit, you talked about companies that come with technology solutions that could impact legal firms, but clearly lawyers themselves need to be tech-savvy today. They need to upskill themselves and understand the technology tools available to them. So what would you advise a young lawyer today? How should they upskill themselves?
[00:19:18] Kit Burden - Partner, DLA Piper: It would be good to be a coder. My two daughters are both in their early twenties, and I regret now that I didn't push them harder to do at least some basic coding courses when they were younger. I don't mean that everybody needs to be an absolute technologist.
So I think that gives you a better understanding of the way in which some of these concepts, applications and technologies work, so as to give you more of an insight into the art of the possible. I think you've got to embrace change. I think that all of the younger people coming through need to have a change mindset, be more flexible, and be willing to ask questions as to how things are being done.
[00:20:01] And not simply accept things as the status quo, thinking, well, if that's how it's been done in the past, that's how I'll do it again in the future, but instead be willing and able to challenge that orthodoxy and ask: is there a better way to do this which is more tech-enabled? So I think that's the real challenge: for the young lawyers to feel empowered to ask about different ways of doing things.
[00:20:26] Srikanth Iyengar - CEO, upGrad Enterprise: And it sounds like you're keeping abreast yourself. You clearly know a lot about the technology. So how do you do that? Because I'm sure that'll be an inspiration to a lot of people.
[00:20:35] Kit Burden - Partner, DLA Piper: Well, I'm lucky that I came from the technology space anyway, because for the last 20 or 30 years I've been a technology and outsourcing lawyer, so this has always been my space. I think that naturally led me to be closely aligned with what we, as a firm, have been doing with technology for the last 10 or 15 years. We have a specific innovation arm within DLA, which we call Law&, and that has an executive team who drive it.
[00:21:02] They're responsible on a day-to-day basis for acting like an incubator for various different ideas and working out which ones have the greatest prospects and therefore deserve the greatest support; you've got, as you can imagine, any number of ideas coming through to that body at any one time. By dint of my background and interest, at board level at DLA I am the person with the greatest degree of responsibility for the oversight of those initiatives. And it is just a question of immersing yourself in it. So I will speak to a lot of people on a weekly, if not daily, basis about what's within the art of the possible, not just in the legal services world but also more generally.
[00:21:42] Obviously you read as much as you possibly can, and there are various different podcasts. There are a lot of very good websites you can read, and I think even the general press has a lot of material you can keep up to date with. You don't need to be an absolute technologist, but you need to understand the direction of travel. And as long as you're looking for information to fill your gaps, identifying what you don't know but which you think you'd be interested in, I think you can very easily develop at least a minimum level of understanding. But I think it is very important to make sure that you have good people around you.
[00:22:15] So we're blessed at DLA with having some very good technologists who are able to provide that level of specialism, to help ensure that we understand what is within the art of the possible now, as opposed to what may become possible in times to come. To take one example, we had an initiative just recently on the back of a client request. They happened to be an investment bank, and they were asking whether AI could be utilized as a tool for a particular contract process that they're looking to run across multiple hundreds of their own end customers.
[00:22:53] I thought, yeah, I can see how one could automate that and apply AI to do something which is not simply strict processing. But the good check-and-balance input from our tech team was that even with four or five hundred customers, that's probably not a critical mass of data for the training of an AI tool; you would need more. So you need that kind of expert input to ensure that you don't go down too many rabbit holes.
[00:23:22] Srikanth Iyengar - CEO, upGrad Enterprise: Financial services is probably one of the industries where AI is being explored across the space, as you know, whether it's retail banking, investment banking, treasury, all of these aspects. And I think that's one area where the ramifications of the AI models going wrong could be significant for a lot of people. Thank you for a fascinating chat. I think you and DLA are at a very, very exciting point in time in that sense, as the legal implications of AI play out.
I mean, firms like yours play a very critical role in shaping thought and driving thought leadership across the board with your clients and with various stakeholders in the geographies that you operate in. So thank you for your time and your insights.
[00:24:04] Kit Burden - Partner, DLA Piper: No problem.
[00:24:05] Srikanth Iyengar - CEO, upGrad Enterprise: And that concludes another episode of The GenAIrous Podcast. We're very grateful to our guests for their time and their expertise. A big thank you to our producer Shantha Shankar in Delhi and our audio engineer Nitin Shams in Berlin for making the magic happen behind the scenes. Don't forget to subscribe to GenAIrous wherever you listen to your podcasts. See you next time.