Technology Untangled

In this episode, we'll explore how technology is changing the way we communicate. But much more than that, we're going to be looking at how it's actually changing our relationship with language itself.

Ever since the first pictograms which date way back over 5000 years, we've been searching for technologies to communicate with each other in more widespread, more efficient ways. The printing press, the telegram, the telephone, and the internet have all evolved into what we're now using today: live streaming; chatbots; VR and AR technologies and, of course, social media. The thing is, the pace of change has grown remarkably in recent decades. We're more connected than ever.

And nothing is changing that faster than AI, in particular generative AI. It’s a core area of interest for HPE Chief Technologist, Matt Armstrong-Barnes. He can see the incredible potential for globe-spanning communication that AI brings, from instant translation to live captioning, but he’s also wary of taking the human out of the loop and losing the important context, nuance and difference that makes language so special and effective.

That’s also a challenge for Courtney Napoles. She’s Linguistic Engineering Director at Grammarly, an AI-enabled software platform which aids human writing. Despite being a tech firm with a strong machine learning underpinning, they rely on humans in the loop to ensure that communication remains effective for us, rather than trying to pull us towards homogenised, machine-learning inspired writing styles.

The rise in assistive technology is having unintended benefits, too – in particular, it’s opening up the door to greater representation in the workplace, and bridging communication barriers when it comes to accessibility needs. Rob Koch is a data engineer and principal at Slalom Build, and heads the group Deaf In The Cloud Dot Com. He’s seen a remarkable breaking down of barriers in the last few years as technology has enabled him to communicate more effectively with colleagues and customers. There’s a way to go, but he’s optimistic about the future.

And speaking of the future, where are we headed next? Leslie Shannon is Nokia’s head of trend scouting. She’s keeping a keen eye on the way we communicate and the language we use, and is seeing a stark generational shift away from text and towards video and augmented reality solutions, embracing the additional context that body language and gesture can add to traditional, ‘flat’ conversations – and changing the way we speak to our tech and the language we use in the process.

Statistics and sources cited in this episode:
Global communications market value:
US communcations market value:
Size of the Digital Acessibility market:

Creators & Guests

Michael Bird

What is Technology Untangled?

Why isn't talking tech as simple, quick, and agile as its innovations promise to be?

Technology Untangled is just that - a show that deciphers tech's rapid evolutions with one simple question in mind: what's really going to shape our future (and what's going to end up in the bargain bin with the floppy disc)?

Join your host Michael Bird as he untangles innovation through a series of interviews, stories, and analyses with some of the industry's brightest brains. No marketing speak, no unnecessary jargon. This is real tech talk from the people who know it inside and out.

Discover which tools and systems are revolutionising the way we do business, what's up next on the endless innovation agenda, and, most importantly, how you can future-proof and get ahead of the curve.

Leslie Shannon (00:00):
More and more we find ourselves moving away from actual text because computing power lets us get at better ways to represent the information that we're asking for and that we're looking for. And it's not that language is going away, not at all. It's that the text representation of the languages is greatly diminishing.

Aubrey Lovell (00:22):
Ever since the first pictograms, which date over 5,000 years, we've been searching for technologies to communicate with each other in more widespread, more efficient ways. The printing press, the telegram, the telephone, and the internet have all evolved into what we're now using today, live-streaming, chatbots, VR and AR technologies, and of course social media.

The thing is, the pace of change has grown remarkably in recent decades. We're more connected than ever. So what's next? Where will we be even in just the next few years and this episode, we'll explore how technology is changing the way we communicate, but much more than that, we're going to be looking at how it's actually changing our relationship with language itself.

Michael Bird (01:07):
Because tech isn't just changing the way we communicate. Academic linguists generally agree it's changing both the words we use, and the way we use them. We've linked to some of those papers in the show notes.

You're listening to Technology Untangled, a show which looks at the rapid evolution of technology, and unravels the way it's changing our world. We are your hosts, Michael Bird...

Aubrey Lovell (01:31):
... and Aubrey Lovell.

Michael Bird (01:34):
The role of technology in communications has been growing ever since the 19th century, when the first telegram was sent from Washington DC to Baltimore. Today, the worldwide communications market is estimated to be worth $1.4 trillion, according to market research analysts Statista. And although that figure is for the entire global spend, within that the US revenue is the biggest piece of that pie, generating $332 billion.

Aubrey Lovell (02:08):
That's obviously a huge amount of money, but it goes far beyond just cash. It's about linking people and ideas together, understanding each other more clearly, and bridging barriers in distance and language.

Matt Armstrong-Barnes (02:20):
My name's Matt Armstrong-Barnes. I'm the chief technology officer for artificial intelligence at Hewlett Packard Enterprise.

Technology's played a fundamental role in the evolution of communication since the inception of the printing press. We moved through voice communication, email, instant messaging, video conferencing, social media. And where we've kind of got to today is it's very fast and it's very convenient. We're probably doing less face-to-face communication than we used to do. We can span distances much more effectively and our ability to span distances is ever increasing. Hewlett Packard Enterprise, we power the International Space Station, and we've got communications up there quite readily.

As we start to break down some of those barriers, it's really allowed us to collaborate at a global scale because it's become much more group-based instead of being sort of point to point or peer-to-peer. So if we start to think about people with impairments, it means that we have the capability to collaborate at massive scale. Things that we didn't anticipate being able to do, we can do now.

Aubrey Lovell (03:22):
That aspect is one that it's fair to say is sometimes overlooked. The accessibility to language and communication that's opening up due to technology is hard to overstate, but it really is changing lives for people.

Rob Koch is a data engineer and principal at Slalom Build. He also leads a group,, which aims to get more deaf and hard of hearing people into tech. Here he is speaking through his American sign language interpreter, Amelia.

Rob Koch (03:51):
There are many technologies that we have now that in the past we didn't, of course. Video streaming technology, that has improved tremendously, and the compression is much better, so we get more video through the pipeline over the internet. Some of it in the past would be choppy and/or freezing or pixelated, and so the video quality has changed drastically so that we can easily communicate via sign over video streaming.

Having interpreters available remotely at the convenience of my screen for a sign language interpreter like I'm using here, it's nice if I have a hearing person maybe next to me in person, I can have a sign language interpreter on the fly on my screen.

Audio captioning, closed captioning for audio, has also improved a ton. Today's autogenerated captioning is completely different. The algorithm or the machine learning process and tools that have been used are incredible for improving that to a point where we can handle things like accents and people from various different countries. So it's really been something that I've noticed quite helpful in terms of accessibility. So I believe that has made a big change.

Michael Bird (05:18):
The digital accessibility software market is currently worth around $706 million, and is expected to increase by more than $250 million over the next five years, according to a Mordor Intelligence report. It's a key area of opportunity for more inclusivity.

But communication, much like technology, is a virtuous cycle. The more we all use it, the better it gets. Technologies which could aid Rob are great for all of us. For example, Rob mentioned that the machine learning algorithms behind closed captions are getting better at capturing a nuance such as accents and dialect. And if machines can help better understand and translate our spoken word, it means they can also help understand our writing, and help improve it.

Courtney Napoles (06:11):
My name is Courtney Napoles. I'm an engineering director at Grammarly, where I lead our language research org. My team is a group of linguists and engineers who are essentially applying language and communication expertise to our product to make sure that our product is unbiased, and that we meet our users' needs and that we deliver high quality suggestions to all of our users.

We've been around for more than 14 years, and as the AI technology has developed, so have the technologies that we use to power our features, and that has evolved to then include deep learning approaches as well as large language models.

When we first started out, we were really focused on the revision stage of writing. So how can we help our users improve what they've already written? With the latest technological advances with generative AI, this is now unlocked more opportunity for us to help our users in other stages of the communication process. Generative AI is changing the way that we work and the way that we interact with each other. And just the ability that we now have to really synthesize information and create really targeted communication, is really fascinating, really exciting for me.

Michael Bird (07:29):
AI is clearly a huge part of the picture when it comes to written content, but it's not necessarily plain sailing. AI comes with quirks and challenges, some of which are yet to be worked out, and can potentially have negative effects when it comes to communicating with one another. Here's Matt Armstrong-Barnes.

Matt Armstrong-Barnes (07:52):
Large language roles have a significant implication. If you think about it, when it comes to basic typing of an instant message, we are now being prompted to potential words that we should use, and that means that our native way of speaking as an individual is going to be influenced by the large language model, because how I would compose a message will be different, but I might well just use the prompts because it's quicker and easier. So these word recommenders are going to start to shift the language that we use more broadly towards the language that is based on where the large language model is produced.

And if you think about that, most of that is in the western world, so as a result, there is some very large parts of the western world that have a big influence on that. So we could see the type of dialogue that we're using, the idioms, the local dialect, et cetera, shifting away to one more commonly held because we're all using next word generators.

Michael Bird (08:58):
Wow. Do you think that it's always going to need a human in the loop to create that understanding?

Matt Armstrong-Barnes (09:04):
One of the evolutionary trends I think we're going to see going forward is the concept of something called a Centaur. The concept of a Centaur actually dates back to Garry Kasparov, who created something called Centaur chess, where if you have human beings working hand hand with AIs, they are much more successful than either an AI or a human being on their own.

And I think that definitely comes with things like translation, where the more educated we are, the better interactions we'll be able to have with our AIs, that are ultimately tools for assisting us in the field of communication or productivity.

Aubrey Lovell (09:48):
That's something Grammarly, an organization with language at its center, is keen to focus on. In fact, they employ what Matt would call a Centaur model on their platform, though they term it a hybrid model of language analysis. Here's Courtney.

Courtney Napoles (10:03):
So we're thinking a lot about this component too. So really considering the whole communication landscape and all the different ways that we can help our users communicate more effectively. We choose the right type of technology that would be appropriate for that use case, and we employ a lot of hybrid approaches as well.

We might have some rule-based approaches where the rules are custom written by our computational linguists to target very specific instances of language use, whether it's the structure of the language or the words that are being used. And we'll actually combine these with some of our deep learning approaches, for instance, to help to really target the output of these models so that it is meeting the user problems.

We can actually develop ways to generate text, generate communications, but then also have them be personalized to an individual's voice or their own style and to be aware of the context in which they're communicating and bringing in knowledge from disparate sources. And we approach it as AI is augmenting our user's intelligence, rather than replacing them.

Aubrey Lovell (11:20):
But what if the answer to our future communications and the way we use language is to not write it down at all? What if in fact we're destined to go back to the very origins of storytelling and become a visual species once again?

The excitement of the early internet was that it was the world at your fingertips, meaning you could type on a keyboard and communicate with the world. But as technology and language evolves, typing is looking distinctly old-fashioned.

Leslie Shannon (11:48):
Hello, I am Leslie Shannon. I am Nokia's head of trend and innovation scouting. I'm based in Silicon Valley. I'm kind of up the flagpole with binoculars trying to see just that few degrees more over the horizon, to see innovations that are coming specifically that have a connectivity angle, and particularly things that are coming that are expecting the network to do things that I know the network doesn't do.

And so if we think about our experience with the internet today, it's very mediated by text. For example, if you want to search something, you're going to use text to search it. But as we move more into the three-dimensional representation or the graphical representation of our information, we are moving away from text, whether we realize it or not, and we're moving much more towards other visual ways of representing information, taking the world that we know on the two-dimensional internet and the two-dimensional screen and making it over time three-dimensional. The fully digital world that you access by going, I'm an avatar, you're an avatar, and we're in this three-dimensional digital world interacting with each other in real time, which we'll see through some kind of augmented reality headset possibly, increasingly through our smartphones as well.

So more and more we find ourselves moving away from actual text because computing power lets us get at better ways to represent the information that we're asking for and that we're looking for.

There's another book that I've written called Virtual Natives. I wrote that with Catherine Henry. And that looks at how Gen Z and Gen Alpha are using digital tools, and how they're actually using their digital prowess to restructure, well, everything.

If I'm going to search for something, I use text and I type it in. If my kids are going to search for something, first of all, if they're going to use text, they're going to use some kind of voice interface as opposed to typing words. But for them, reading a response is much less meaningful than seeing a video about the response.

The thing is that the virtual natives, they never knew the original systems, and this sets them free, with their great fluency with computer tools, to reimagine stuff and to create something better. It's not that language is going away, not at all. It's that the text representation of the language is greatly diminishing in use.

We moved here from Finland to the United States when my kids were kind of in the 7 to 9 age group. And one of my sons, he missed the year that they taught cursive. He has never picked it up. He literally cannot read cursive. And, it is no hindrance. They really live very text free lives. A lot of their school assignments are actually to produce videos.

Michael Bird (14:45):
So we could potentially be in a situation where text is being supplemented, if not yet fully overtaken, by visual mediums. Think of it as being like the way that video conferencing has taken over from phone calls when it comes to remote meetings, to the benefit of people like Rob who can take advantage of the accessibility benefits it offers. Video, albeit with a heavy text layer on top.

Rob Koch (15:11):
I'm trying to think what was I doing at work 10 years ago. Where the accommodations were, the accessibility. Wow. I mean, it's completely different in the world today. We have video relay services, we have video relay interpretation or remote sign language interpreters. Everyone uses texting now to communicate rather than by voice. A lot of people prefer texting, so that's been easy for me as well.

Customer support online, before you'd have to use the phone and call in and speak with somebody at support. It wasn't necessarily very accessible. Serverless provides customer support online via chat and chatbots now, so you can go through that process and it makes things a lot easier.

Text-based communication is everywhere right now compared to in the past, 10 years ago. Everything was phone driven. We didn't have anything like that in the corporate world at the time, and it really opens up doors for accessibility really.

Aubrey Lovell (16:27):
So with all of this talk of video, what about the traditionally text-based services? Courtney Napoles isn't worried. For her, there will always be a role for more effective communication, no matter how the words are presented.

Courtney Napoles (16:41):
Yeah, I mean visual communication is really important and people are communicating by video and GIFs and images all the time. Language is still a fundamental part of communication and how people interact with each other and how people share ideas. And so while the type of communication or the medium that we're communicating by is changing, language is still going to be a fundamental part of that.

So I don't see the importance of language changing in communication. The prevalence of written language might change, but ultimately language is still... That isn't going to change.

Aubrey Lovell (17:19):
But there are a lot of challenges here. While the ability to transcribe audio and make it searchable is getting better, in fact this podcast is edited using a text-based system. The same can't be said for video. Video essentially isn't searchable. Sure, it has metadata attached. You can search for videos about say watermelons. But the technology to search for a video containing a watermelon in the background is pretty poor.

Michael Bird (17:45):
In fact, it's only just being developed and supplied to certain big tech companies with a visual heavy focus now, and still has a long way to go. The first prototypes of what's called video object detection were only released a few years ago, and we've linked to a couple of articles on them in the comments.

That said, that's a software problem tech companies can sort. Actually, for us humans and the way we process information, video can be way more searchable than text, especially when it comes to language. That's because unlike AI, our brains thrive off context. Here's Leslie.

Leslie Shannon (18:26):
If you hand me a book in Latvian, I can't read it. I have no idea what is being said. If you show me a video of someone speaking Latvian, I may be able to glean some information, not just the words, the expression, the setting, what they're doing with their hands. I won't be able to get the full information, but I might be able to get 10, 20% of what the message is. So the barrier for entry for video is super low.

And especially if you think about how to videos. If I'm trying to figure out how to replace a part on my car, I can watch a video that somebody recorded in Sri Lanka. And they can be speaking in Tamil, but if I can see what they're doing, the language is not so important. So that's another reason why video is having a moment now, because we do have this ability to globally transmit our information and potentially have it be free from... or language can be less important.

My children find the printed word flat because it doesn't have the human element. And the advantage of video is that we can be far more human and far more immediate with each other. That said, AI can also do translation really easily, so I can get my subtitles for the Tamil guy who's showing me how to fix my car.

Michael Bird (19:49):
So the hybrid or Centaur model once again looks to be a promising option for efficient communication, and on a global scale. But what about the way we tap into each other? After all, phones are, well, a little clunky. So could something more direct be on the cards? Here's Matt with some pretty out there science.

Matt Armstrong-Barnes (20:12):
There are some very, very interesting things that are happening in the communication space. We're starting to look at brain signals. So brain signals that can be turned into audible speech. But as that starts to become much more mainstream, we will definitely see more use of this type of technology that allows people to collaborate. Because potentially that can speed us up. If we start to think about the future, our ability to think is faster than our ability to speak. And as a result of that, we can start to see the evolution of communication move forward where it'll just become a different speed, which means we can collaborate much more effectively and at speed.

Michael Bird (20:54):
So we could get to a point where we don't even speak to each other because that's just too darn slow. And instead we have chips hooked up to our brain, speaking to other chips hooked up to other brains, which are able to communicate much faster.

Matt Armstrong-Barnes (21:10):
Whether or not we need to implant them, I think that's a very broad question. But we can definitely today apply sensors on the outside of the skull that will pick up this information and allow us to collaborate effectively. So I can see it definitely moving in that direction, whether or not we start to apply implants to it, I don't necessarily think that that's going to be the right way to go. But our ability to incorporate the Centaur style approach, where we can start to use man and machine to work extremely effectively to tackle some really big and global challenges, I think is definitely going to be a right step forward.

Aubrey Lovell (21:51):
Wow. I mean, that's quite something, isn't it? Even if we don't get there, the fact that it's a possibility is mind-blowing.

Closer to present day technology, Leslie previously mentioned AR and VR, augmented reality and virtual reality. And in the absence of direct brain communication, that could be a big part of the equation when it comes to seamless communication. Matt's not convinced we're quite there yet with that one though either.

Matt Armstrong-Barnes (22:18):
I think if we start to think about the increased use of virtual reality, then we are at the minute quite limited. So wearing VR headsets, we've got battery life, we've got the fact that we've got heavy devices sitting on our heads, which is not necessarily good from a physiological perspective. Not everyone can wear VR headsets for long periods of time. People suffer from nausea because of the refresh rates.

Also, things like there's been some analysis done about the implications of a walk outside, a walk in our environment, increases our creativity by a considerable amount. So how do we change the way that we are interacting that allows us to get much more movement in terms of what's happening? Because otherwise we are going to potentially shift to being more sedentary.

As we've become much more attached to our video conferencing, means that we end up sitting in the same physical space for very long periods of time because we're all sort of bolted to our laptops on video calls, and our ability to move around, to take calls while we're walking, has sort of changed probably over the past couple of years.

We're starting to tackle a lot of that. As the technology starts to decrease in size and increase in speed and capability, it'll become much more readily available. And it means that we start to build an ecosystem that's wrapped around it so that, for example, we can all spend time sitting in a virtual office and we can hear the chatter around us of other people working and we can interact with those other people. It will provide a much more virtualized office experience than we get today from virtual reality.

Michael Bird (24:01):
Leslie has a somewhat more optimistic viewpoint. She sees the physical limitations of VR headsets being overcome in the very near future, and in fact in a way which encourages us to pay greater attention to the world outside our homes or offices, and explore it more, not less.

Leslie Shannon (24:21):
Towards the end of this decade, when consumer grade augmented reality headsets are actually out there, by which I mean eyeglasses, form factor, lightweight, easy to wear, stylish looking, which is absolutely on track for the end of this decade. When we have these devices, that will let us walk through our physical world and be able to access computing power that we have today only when we are looking at a two-dimensional screen and typing some kind of words.

And so as an example, say you're walking down the street and you see a plane in the sky and you're like, oh, I wonder where that plane's going. So you say, "Glasses, write me a program where anytime there is a plane in the sky portion of my visual field, do visual analytics of the livery on the plane to determine what airline it's from. Look at my location data, figure out where I'm standing right now. And then put those two bits of information into the publicly available flight aware database, and then figure out where that plane is going and then just print over that plane, in my visual field, the destination of the plane. So do that every time I see a plane. The end. Thank you."

And so this ability to interface with huge amounts of computing, using large language models, using natural language, to generate not just some kind of text-based response, but to generate programs and images and three-dimensional digital representations of things. We can't do this today, but all the building blocks, all the technical building blocks for making this possible, are on the table.

Michael Bird (26:08):
Now that is a cool idea. Not just communicating with each other, but communicating with the very world around us. Interacting with it. Even programming our own tools to make the best use of it on the fly using our voice. That is a very exciting prospect.

Aubrey Lovell (26:24):
But it throws up an interesting linguistic challenge, and that is how we talk to our tech. And let's face it, this episode is making it pretty clear that language is changing to actively involve our tech. That's where prompt engineering comes in, i.e., how do you talk to an AI powered technology and get it to interact with you in the right way, for example to make Leslie's aircraft tracker? Here's Matt.

Matt Armstrong-Barnes (26:50):
Prompt engineering is the means by which you can help the generative AI provide better responses to questions. Because people who are asking questions of the AI don't necessarily understand how to ask them in the right kind of way that means that the AI is going to provide the best and most accurate responses for you. And there are some complexities about how to build prompts in the right kind of way, because you do need to understand how the large language model that you are using is going to work.

So some of the challenges are if you write ambiguous prompts, then you are going to get lower quality responses from your large language model, because they have a tendency to not cope with ambiguity in the best way.

There are also things around ethical considerations, bias and fairness. How do we create prompts that take these things into consideration without moving us into a world where we are unintentionally censoring information?

And then there are other things which we need to guard against. So it is possible to do prompt injection, which means that we're trying to hack the models. So associated with this, we need to put the right defensive mechanisms in place, but without restricting the way the model can provide the best responses.

Aubrey Lovell (28:17):
So there's clearly an issue here, and it's one which is being considered by providers across the AI spectrum. But where there are issues, there's also opportunity and a way of doing things better.

It's something Courtney Napoles is really excited about, the ability to use expertise both in machine learning and human intuition to overcome poor communication with AI. And beyond that, provide fair balanced tools to help people communicate effectively.

Courtney Napoles (28:46):
Yeah, I think about this a lot. Trust in AI is a really important component of any type of AI development and deployment of AI features. With large language models, these models are trained off of a huge amount of real data that people have produced, and so we take responsibility as the most important factor when we are doing our development. This is one of the superpowers, and this is something that is in the future.

With Grammarly, our vision is that we can bring in information from multiple different sources. And as you're doing research for the podcast, just think about all of the different sources that you're consulting and all the places that you're pulling information and ideas from. And what if you could easily synthesize that into a single document without having to do all the manual cutting and pasting and cross-referencing?

What we're going to see in the next 10 years is we are going to have information at our fingertips, even more so than it is now, by looking something up on the internet or using a voice assistant. This information is going to be brought to us through products like Grammarly without us having to go and look for it. For instance, I see Grammarly being able to help break down information silos and share context that somebody might be missing and they would need to be able to understand something that they're reading. And this is something that I think we're just getting closer and closer to, and I'm really excited about this in the future.

Michael Bird (30:21):
And if it pans out, it could be a really bright future, for us at least. Because one of the key problems with technology and language right now is that it's very centered around, well, people like Aubrey and I, and probably many of you. English speakers with good connectivity. And right now, if you don't fall into that bracket, well, there's not necessarily a lot that technology can do for you. Here's Matt.

Matt Armstrong-Barnes (30:52):
There are significant amounts of the global population who don't have either the skills or the ability to interact with this type of technology. There are areas of the planet that are not covered by the internet, and there are a large percentage of the population of the planet who are unable to consume this technology because of the cost differences.

But also large language models are predominantly based around English. If you start to look at the role that AI and large language models are playing, they're capable of doing very effective translation. That does mean though that we run the risk of people not necessarily understanding the cultural implications, and that means even though that we are talking across massive geographic boundaries and using AI as a way of doing that translation, some of the intricacies of the cross-cultural collaboration, we run the risk of degrading or even losing them. So one person who reads something could it interpret it differently. And that applies definitely when it comes to large language models. If you don't understand the language, you are completely at the mercy of some other entity to do that translation for you, and the possibility of it being misinterpreted goes up quite significantly.

If you start to take into consideration less commonly spoken languages, then they will start to suffer from the digital divide, where they have lower ability to interact with large language models, unless they interact with them with a potentially non-native language.

Aubrey Lovell (32:28):
Obviously, as Matt says, the specifics of languages across the globe are going to pose a huge challenge in terms of understanding and accessing technology. After all, we don't really want to give tech-savvy English speakers with good internet connections an unfair advantage.

Michael Bird (32:45):
In fact, we don't really want to use these tools to give anyone an unfair advantage. So, do they? Is using tech to improve or fix our bad communication, poor use of language, or terrible attempts at Latvian, actually, well, cheating?

That opens up a great question about how far we let tech help us before it takes the reins. It's something Courtney's team is taking seriously, but don't necessarily have a detailed answer to yet.

Courtney Napoles (33:20):
Trust in AI is a really important component of any type of AI development and deployment of AI features. So much new content can be generated by AI that's shared publicly where we may not know whether or not it was generated by AI or not. We're also thinking about trustworthiness of the AI.

We have this question in the educational system about cheating and students using AI or generative AI to cheat on tests and assignments. And so we have a team that's working on how can we enable students to leverage generative AI in their schoolwork in ways that are ethical and that are transparent?

Aubrey Lovell (34:02):
For Leslie, it's about generating acceptance of new technologies and ways of using language in order to allow them to flourish.

Leslie Shannon (34:11):
When I was in high school, it was the very early eighties, and affordable pocket calculators were first coming out. And there was a huge debate about whether or not you should allow pocket calculators in the classroom. And so at my high school, our physics teacher let us have pocket calculators, and we could actually bring a written copy of the formulas we were going to use into the test. He's like, "Don't bother memorizing it. If you're out there in the real world, you're just going to look it up anyway. Don't worry about it." Whereas my friend at the high school across town, they couldn't use pocket calculators and they had to memorize everything.

But pocket calculators, honestly, part of the reason why everybody has a TI-84 in their pocket now is that it actually takes away that grunt work and it speeds you up. It lets you focus on the stuff that is the valuable human component of it, rather than forcing you to manually work out the square root of whatever.

And large language models, natural language processing, is exactly the same thing. For people for whom writing the basic text is a challenge, does not come naturally, get everybody that baseline. And so math was his language. He did not need a pocket calculator for any of the stuff that I needed a pocket calculator for. He was fine. I'm that way with language.

And so to be able to raise that bar for that minimum written communication standard, and then have us add our human stuff on top of it the same way we let calculators do the super low grunt work of our math in daily life, yeah, exactly the same.

Aubrey Lovell (35:53):
So we've looked at the opportunities and controversies, but what's next? What are the next big challenges yet to be overcome? For Rob, it's all about improving the technology we've got to make the workplace more accessible.

Rob Koch (36:06):
Everyone has different needs, and different needs at different points in time. So there's not necessarily one tool or one service that would fit all. And I think everyone doesn't necessarily expect the accommodations to be instantaneous. And sometimes there can be a feeling of disappointment, but I know that a lot of the products out there have roadmaps as far as including things with accessibility in mind. I have a optimistic view on things of that nature. So I think in that way, that's kind of how I see it.

Michael Bird (36:46):
For Courtney, it's all about taking the next step in finding nuance and context, and training systems to be able to accommodate and help us find the right words to express ourselves.

Courtney Napoles (36:56):
You and I talking right now, we're using different tones and different expressions than if we were talking to our closest friends. And so being aware of the context and being able to react very quickly to that, is something that I think that's going to be the big challenge of being able to replicate or imitate somebody's tone or style more precisely. Because if our tone and style is the same all the time, it'd be a much easier problem.

Michael Bird (37:20):
Meanwhile for Matt, it's all about nailing down accuracy and accountability, particularly when it comes to AI.

Matt Armstrong-Barnes (37:27):
So I think one of the roles where we see the evolution of large language models is explainability. So when it comes to providing a summarization, it then also provides means to go to the actual source data to validate it. Because we have to bear in mind that large language models do suffer from something called hallucination, which means if they don't understand it, they can make it up. Whereas if we have this ability for them to explain how they've arrived at the summarization, we can go back to the original source material and review it ourselves, allowing us to make more informed decisions about how the large language models we're using are summarizing the data for us.

Michael Bird (38:09):
Okay. So I think what we've established over this episode is the sheer scale of where language and technology can take us, both in terms of the opportunities it presents and the questions it raises. One thing is for sure, the way we communicate and the words we use are becoming inextricably linked with the technology we interact with. And that is unlikely to change anytime soon.

Aubrey Lovell (38:34):
You've been listening to Technology Untangled. We've been your hosts, Michael Bird and myself, Aubrey Lovell. And huge thanks to Matt Armstrong-Barnes, Leslie Shannon, Courtney Napoles, and Rob Koch.

Michael Bird (38:46):
You can find more information on today's episode in the show notes, and do subscribe on your podcast app of choice so you don't miss out and to check out the last three series. This episode was produced by Sam Datta-Paulin and Zoe Anderson, with production support from Al Booth, Harry Morton, Alicia Kempson, Alison Paisley, Alyssa Mitry, Camilla Patel, Alex Podmore, and Chloe Sewell.

Aubrey Lovell (39:09):
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman, Katie Guarino, and our social media designers are Alejandra Garcia, Carlos Alberto Suarez, and Ambar Maldonado. Technology Untangled is a Lower Street production for Hewlett Packard Enterprise.