HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.
Michael Bird (00:00):
Hello, and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We are your hosts, Michael Bird...
Aubrey Lovell (00:23):
And Aubrey Lovell. And in this episode we are looking at a topic which organizations and even governments have been rushing to adopt, AI and skills in the workforce. The rise of generative AI has been a seismic shift in the way we do business over the last couple of years, and not just in tech, but across every sector. And that presents an issue for the workforce at large. How do non-IT departments and fields, managers, education systems, or even us as individuals, deal with this new technology? Well, that's what we're going to unravel in this episode.
(00:55):
We'll be asking whether the workforce is ready for AI. We'll also be investigating what's being done to prepare us. And we'll be discussing whether what's happened so far is setting us as workers up for success or failure.
Michael Bird (01:08):
Wow. Yeah, super important. So, if you are the kind of person who needs to know why what's going on in the world matters to your organization, then this podcast is for you. And of course, if you haven't yet done so, make sure you subscribe on your podcast app of choice so you don't miss out. Right. Aubrey, let's get on with the show.
Aubrey Lovell (01:26):
Onward.
Michael Bird (01:30):
So, it might be fair to say that organizations and I guess even governments have been scrambling to keep up with AI. In 2021, the British government launched a drive to create skilled AI jobs, which was somewhat successful according to government reporting, which we've linked to in the show notes. However, that doesn't cover the tens of millions of workers and students for whom AI isn't their core focus. It's just something that one day will arrive in their workplace, and that's a worry.
(02:04):
In August 2024, the UK announced a 7 million pound training program to help build skills among workers at small and medium businesses. It came as the government announced that, "Evidence shows that a lack of AI skills in businesses is hindering AI adoption, in part due to low investment in AI upskilling by UK businesses." Meanwhile, in the US, government research has shown that Americans are increasingly concerned rather than excited about AI, with 52% in August 2023 believing that AI was a danger to them and only 44% believing that they have any common interaction with AI. That's a worryingly low level of understanding and AI literacy according to the reports, which again, we've linked in the show notes. So, how serious is the problem? Where do the gaps lie? Who is responsible for upskilling us? And what can be done about it?
Aubrey Lovell (03:00):
Well, Michael, I've been speaking to Dr. Erin Young. She's a project co-lead and research fellow in public policy at the Alan Turing Institute, an institute dedicated to solving societal problems with technology. Really great conversation.
(03:13):
So, Erin, we know that with AI, it's the big topic of conversation. How do you see AI transforming the nature of jobs in the next five to 10 years?
Erin Young (03:23):
I'll begin with two important caveats, taking a step back and zooming out. So, firstly, I'm sure we've all seen lots of headlines about X number of jobs being displaced by AI by year X. But the truth is, the full impacts of AI on work and society more generally are still very much uncertain, especially as capabilities advance and investments increase, and as AI scales across domains and populations rapidly. But what is certain is it will absolutely depend on the choices and decisions made around design, development and adoption, including the policies and principles implemented by organizations, governments, and individuals.
(04:17):
And then my second caveat is the term AI is, of course, a massive umbrella, and changes in the labor market will look very different across different technologies, applications, domains and sectors, across healthcare and media and education, energy and transport. Forms of AI have actually been used for decades in certain industries like finance, but with rapid advancements in, as I said, capabilities, investments, data, compute, which most notably have been seen in the last few years in the diffusion of multimodal, frontier gen AI systems. We're now seeing its gradual, arguably experimental adoption, this kind of early pilot testing into all kinds of workplaces across functions, tasks, and workflows globally.
(05:10):
So, there have been various indexes which look at which jobs are most susceptible to automation and augmentation. And an interesting assessment I read by the IPPR, of 22,000 tasks in the UK economy and how exposed they are to automation, found that 11% of tasks are already exposed to GenAI and that this could increase fivefold as AI systems are more deeply integrated into organizational processes. And yes, this has understandably brought with it widespread discussion and indeed, often anxiety around the potential for AI to take jobs or displace workers entirely.
(05:56):
I read an interesting 2024 Gallup poll, which found that nearly 25% of workers worry that their job will become obsolete because of AI. But in contrast to lots of headlines, and as I said, again, depending on human choices and actions and the levels of trust we have in the technology, what I think is more likely is actually much more nuanced, in that we could see AI changing the nature of work in the next decade. So, what I mean by this is we'll see, yes, some jobs automated, but also maybe some job roles will merge. For example, product manager and product developer will become one job. New jobs will be created, for example, AI safety consultant. We're already seeing this job advertised. And I think mostly, above all, we will see our workflows augmented by AI.
Aubrey Lovell (06:50):
And as a society, are we prepared for that? If not, what should we be doing to embrace this kind of change?
Erin Young (06:58):
Yeah, very good question. So, in response to your first question, "As a society, are we prepared?" In short, not yet. There is a widespread lack of AI readiness or what we sometimes call AI preparedness globally, varying significantly across populations and regions. And then, of course, the same is true for organizations and businesses.
(07:18):
And in particular, AI skills pipelines really haven't kept pace with the technology. There was a survey by a leading cloud computing and infrastructure platform that found that 75% of employers, so about three in four, can't find the talent that they need for their AI-augmented workforce. And then, some other research by Hays in 2023 found that more than half of the employees they surveyed don't believe that they have the right skills to confidently use AI in the workplace.
(07:50):
And so, what I think we really need urgently is accessible AI literacy campaigns at scale, for lifelong learning across formal and non-formal educational pathways to adapt. And on top of this, to make sure that pre-existing digital skills gaps aren't exacerbated. This doesn't mean only learning technical skills applicable in the workplace, although of course, this is important. It's about much more. It's about understanding the implications of AI, knowing how to use AI responsibly and practically. Thinking critically about where are the greatest opportunities and where might the most significant individual and socioeconomic risks and harm surface? And how do we mitigate them and manage them to reap the benefits?
(08:40):
And what's crucial is this is not only for people as workers, but as consumers and citizens. So yes, of course, this is about ensuring that negative impacts on employment are mitigated, but it's also educating people, technically or otherwise, about, for example, building and deploying AI in line with human values and rights and building safeguards against its misuse, for example, in disinformation. Ensuring we understand and can make decisions about how our data is used to train models, so including data privacy. Understanding limitations of model outputs across applications. I could go on. And a few stakeholders are making strides, but there is an incredible amount of work to do. We're moving in the right direction, but we've really got to pick up the pace.
Aubrey Lovell (09:29):
Definitely. And I think you touched on this, but what skills do you believe will be essentially the most critical for workers to adapt to these changes?
Erin Young (09:37):
We're still in the process of understanding the new skills that AI-supported work will require. What we do know, however, is, as I said, the importance of workers having at least basic AI literacy. There are some really interesting emerging AI skills and competency frameworks and AI literacy curricula, but we really need new agile skills taxonomies for citizens and workers alongside these.
(10:07):
I think some of the most critical skills will include data processing, programming, then extensions to what are already framed as digital media, data literacy. So, as I said, understanding how the AI works, thinking about risks and challenges. Then also making sure that we're honing what are sometimes called our human-centric skills, so critical thinking and empathy and interpersonal skills. I think on top of this as well, the need for adaptability. So, honing our resilience and willingness to learn new skills. And then, making sure we're also focusing on decision-making skills, how to judge model output, draw conclusions, making sure that we're not over-relying on the technology and experiencing some kind of skill atrophy. And always thinking about how we collaborate with other humans, but also technologies.
Aubrey Lovell (11:00):
That's a good point. I mean, when you think about governments, right? Governments, companies, educational institutions, what should they be doing to ensure that young people and workers are prepared for an AI economy? And have they started to look at this effectively in terms of planning and integration?
Erin Young (11:17):
Yes. So, many governments and organizations absolutely broadly recognize the need for proactive measures. And countries like the UK, USA, those in the EU, are making progress with AI strategies and policies and regulation focused on principles like safety, privacy, transparency. Some governments, including the UK, are also promoting AI education and re-skilling initiatives through funding provision. And then, there are large corporations, especially big tech companies, that are increasingly stepping up in-house as well as external training programs aimed at building foundational AI skills and bridging current skills gaps. I think they're kind of seeing upskilling and re-skilling as an investment rather than just a cost, which is great, but we're definitely seeing a gap between major corporations and smaller businesses in this.
(12:13):
These are all great moves, but I think what is really needed is a sustained, multifaceted, multi-stakeholder, international, interdisciplinary, agile approach to policy and practice. So, right across the board: across education, business, investment, responsible governance. As I said, it needs to be proactive, not reactive.
(12:36):
So, in education, this could look like integrating AI education into curricula at all levels, across subjects, investing in teacher training, developing innovative pedagogical approaches. Companies really have a responsibility, and I believe it's a kind of a strategic imperative for them to provide access to relevant, quality training and development programs and apprenticeships. And they can partner with governments and educational institutions to develop these programs and create a pipeline of skilled workers for the AI economy.
(13:14):
They should also, of course, provide support for workers during job transitions and implement robust AI governance. And then, governments and policymakers must absolutely build AI capacity and foster an AI ecosystem which prioritizes human oversight and supports all of the above to shape an equitable AI economy. So, things like establishing clear policies for responsible AI development and developing a robust AI workforce strategy. For example, offering platforms or short courses or accreditation systems for employers and workers. So, the establishment of Skills England in the UK, for example, could be a good basis for that. As well as creating thoughtful employment and skills policies, they need to be investing in talent and entrepreneurship alongside technology adoption.
(14:11):
And I think, arguably most importantly, they need to help scale up existing AI training programs so that they can reach everyone, so they can reach all communities, including those that have traditionally been underserved. I really think that governments have a responsibility to address potential inequalities and ensure that AI education and training opportunities are available to everyone.
Michael Bird (14:38):
Thanks, Aubrey. That was a really, really interesting interview. Loads of amazing insights there, and we'll head back to that interview later on in the show.
(14:49):
Right. Well, now it is time for Today I Learned, the part of the show where we take a look at something happening in the world that we think you should know about. Aubrey, I think it's your turn again.
Aubrey Lovell (14:59):
Yes. It's one for me this week, and it's potentially a bit of a farming revolution, believe it or not. So, what if we could grow food in total darkness, feeding the plants only the nutrients they need to grow? This is actually called electro-agriculture, and it could be a game changer for tackling global food supply. This is going to get really interesting.
(15:17):
So, the idea is really simple. When seeds first grow, they don't have access to light. Instead of photosynthesis, they feed off acetate solutions until they sprout above the soil. So, by using genetic engineering to "switch off" the gene which starts the transformation to photosynthesis, we can actually grow plants which rely totally on a carefully supplied liquid nutrient. Without the need for intense, inefficient lighting, plants can be grown in more condensed environments for a fraction of the cost. In fact, according to the research, which was published in the journal Joule, land use in the US alone could be cut by 88%.
(15:54):
Now, the big challenge: engineering the crops and getting them certified for human consumption. It's clearly seen as a technology with promise though. Among the funders for electro-ag projects are NASA, which is interested in deep space food production, and DARPA, which is interested in shortening military supply chains by growing food on site. Pretty interesting stuff.
Michael Bird (16:14):
Yeah. Yeah, absolutely fascinating. Thank you for that, Aubrey. Very, very, very, very cool. Right. Well, now it is time to return to Aubrey's interview with Erin Young to talk about the AI skills gap.
Aubrey Lovell (16:28):
So, how do we make sure that AI upskilling opportunities are equally accessible to everyone? And how would that play out?
Erin Young (16:37):
Absolutely. I mean, equity is absolutely core to creating effective and successful AI upskilling programs and campaigns. We published some work from the Institute in 2023, where we explored the nature of work and skills in new AI careers as they're being formed. And we found that there's structural inequality in the data and AI fields, with career trajectories themselves differentiated by gender.
(17:10):
And so, while access to tech and training programs and careers remains uneven, without interventions these inequalities could deepen. And so, governments, I believe, should implement targeted support and resources for communities that face barriers to accessing upskilling opportunities, particularly those from low-income backgrounds or underrepresented groups. So, initiatives like scholarships and subsidized training programs and infrastructure investments to bridge the digital skills gap.
Aubrey Lovell (17:47):
So, Erin, tell us a little bit about the taskforce that you're a part of.
Erin Young (17:50):
Yes, absolutely. So, we bring together in the UK, business and government and third sector and industry groups to support women from non-technical backgrounds to gain the skills to pivot to specialist digital roles, particularly in financial and professional services. And initiatives like these are absolutely crucial when we're discussing the AI skills gap.
Aubrey Lovell (18:18):
So, the big question is, can AI itself play a role in that upskilling? Can AI tools effectively teach AI-related skills while still keeping humans in the loop?
Erin Young (18:33):
Yes, absolutely. And as I said, it is still early days, but we're already seeing examples where AI is being used for direct AI skill development through various platforms, or gamification, or simulation. In industry, we've also seen how different AI systems are being used for skills assessments. So, some large tech and consulting companies are already using these to recommend targeted learning resources around data and AI skills and otherwise, and other internal opportunities. And then, technologies like personalized learning pathways and AI tutors and coding assistants. I'm particularly excited to see how the evolution of AI agents will develop and how this will also work towards AI being used to teach AI-related skills.
Aubrey Lovell (19:27):
Fabulous. Okay. Last question, Erin. This has been a really good conversation, by the way. Bottom line, why should our organizations be paying attention to the AI skills gap?
Erin Young (19:37):
So, I hope I've made this clear throughout our discussion, but just to reiterate: firstly, it will absolutely impact their ability to adopt AI effectively and therefore, to remain not just innovative, but competitive and resilient. Secondly, as we see capabilities advance and AI scale across domains and applications, this skills gap is projected to increase.
(20:04):
Third, organizations that actively address the skills gap will inevitably be more attractive to top talent. And then, I think fourth, and arguably most importantly, addressing the AI skills gap, especially for traditionally underrepresented groups in tech, is really a social and economic responsibility for us all.
Aubrey Lovell (20:28):
Well said. Thank you, Erin.
Erin Young (20:30):
Thank you so much.
Michael Bird (20:31):
Thank you so much, Aubrey. That was a really fascinating interview, and of course, thank you to Erin for joining us today. And you can find more on the topics discussed in today's episode in the show notes.
Aubrey Lovell (20:44):
All right. Well, we're getting towards the end of the show, which means it's time for This Week in History, a look at monumental events in the world of business and technology which have changed our lives. Michael, what do we have this time?
Michael Bird (20:55):
Well, the clue last week was: it's 1991, and we all came together for this explosive discovery. Did you get it? I'm not sure I did. Anyway, it was the first controlled release of fusion power, which occurred at the JET Laboratory here in the UK after four decades of experimenting with substitute fuels. This week, 33 years ago, a mixture of the hydrogen isotopes deuterium and tritium was brought together at incredible speeds and energies, resulting in a brief 1.5 megawatt power output. Now, three decades on, we are still not quite there yet for commercial fusion power, but that first stepping stone set us down an exciting path.
Aubrey Lovell (21:38):
Very interesting story. Thank you, Michael. And the clue for next week is, it's 1984 and time for NASA to bring in the trash. Do you have any clues on that?
Michael Bird (21:50):
I think it's to do with a satellite being de-orbited. That's going to be my prediction. We'll see if I'm right next week.
Aubrey Lovell (21:57):
Yeah, definitely. I don't know. I don't have an opinion yet, but we'll see next week. And that brings us to the end of Technology Now for this week. Thank you so much to our guest, Dr. Erin Young, project co-lead and research fellow in public policy at the Alan Turing Institute. And to our listeners, as always, thank you so much for joining us.
Michael Bird (22:15):
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird. And this episode was produced by Sam Datta-Paulin with production support from Harry Morton, Zoe Anderson, Alicia Kempsen Taylor, Alison Paisley, and Alyssa Mitry.
Aubrey Lovell (22:28):
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman, Katie Guarino. And our social media designers are Alejandro Garcia, Carlos Alberto Suarez, and Ambar Maldonado.
Michael Bird (22:38):
Technology Now is a Lower Street production for Hewlett Packard Enterprise. And we'll see you the same time, the same place, next week. Cheers.