Make an IIIMPACT - The User Inexperience Podcast

Welcome to another exciting episode of Make an IIIMPACT - The User Inexperience. Today, we have our hosts Makoto Kern, President @ IIIMPACT and his AI Integration team - Brynley Evans and Joe Kraft, who dive deep into the fascinating world of emergent behavior in artificial intelligence. We'll explore the intriguing idea of AI developing consciousness, emotional intelligence, and even empathy. Together, we'll unpack genetic algorithms, the impact of dreams on creativity, and the unexpected nature of emergent behaviors in neural networks.
 
This episode also covers the latest research efforts into preventing toxic responses in AI and the critical balance between privacy, security, and innovation. We'll ponder over the potential for AI to replace a significant portion of jobs, the role of large corporations like Microsoft in shaping AI's future, and the ethical implications of data usage. Whether it's the integration of AI into everyday tools like Windows or the surprising abilities of models like GPT-2 and Amazon's large language model, we have a lot to discuss!

So, tune in as we navigate the promising yet complex landscape of emergent AI and its impact on society and technology. Let's make an IIIMPACT!


00:00 Integration of AI into Windows operating system.
04:31 Integrating photos, new feature: computer work recall.
07:49 Specs speculation on future of machine models.
12:39 AI integration into daily life becomes inevitable.
13:46 Microsoft's feature raises major privacy concerns.
19:42 Technology for tracking and flagging activity.
22:08 Monitoring employees' tasks and data cost.
26:40 Large language models demonstrate emergent behaviors clearly.
27:57 GPT-2 trained to predict and generate text.
33:31 Dreams and creativity, AI, empathy, and intelligence.
35:28 AI recognizes patterns; tasks may be similar.
39:37 Understanding neural network firing for better predictability.
40:48 Research allows manipulation of neuron-level model predictions.
43:49 Improved hardware limited by human neural capacity.


Bios:

Makoto Kern - Founder and UX Principal at IIIMPACT - a UX Product Design and Development Consulting agency. IIIMPACT has been on the Inc 5000 for the past 3 consecutive years and is one of the fastest-growing privately-owned companies. His team has successfully launched 100s of digital products over the past 20+ years in almost every industry vertical. IIIMPACT helps clients get from 'boardroom concept to code' faster by reducing risk and prioritizing the best UX processes through their clients' teams.

Brynley Evans - Lead UX Strategist and Front End Developer - Leading large-scale enterprise software projects for the past 10+ years, he possesses a diverse skill set and is driven by a passion for user-centered design. He works on every phase of a project from concept to final deliverable, adding value at each stage. He's recently been part of IIIMPACT's leading AI Integration team, which helps companies navigate AI, reduce their risk, and integrate it into their enterprise applications more effectively.

Joe Kraft - Solutions Architect / Full Stack Developer - With over 10 years of experience across numerous domains, his expertise lies in designing, developing, and modernizing software solutions. He has recently focused on his role as our AI team lead on integrating AI technology into client software applications. 

Follow along for more episodes of Make an IIIMPACT - The User Inexperience: https://www.youtube.com/MakeanIIIMPAC....
You can find us on Instagram here for more images and stories:   @iiimpactdesign  
You can find us on X here for thoughts, threads and curated news:   @theiiimpact  

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration across business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design and development. Hear stories and learnings on how our experienced team has helped launch 100s of software products in almost every industry vertical.

Speaker 1:

Hello, everybody. Welcome back to another episode of Make an Impact. I'm your host, Makoto Kern. I've got my cohosts, Joe Kraft and Brynley Evans, on the call as well.

Speaker 2:

Hey, Dave.

Speaker 1:

Thanks for joining. On today's episode, we've got some of the latest news on Copilot+ PCs and AI integration into Windows. We've also got emergent capabilities, and we're gonna talk about Anthropic developing techniques to better understand AI neural networks. So before we jump in, I wanted to give a brief overview of, you know, what we do at IIIMPACT. We are a UX product software design and development agency.

Speaker 1:

We go into companies. We help them actually launch from, basically, we say, from boardroom to code. We help companies pretty much reduce risk, get to priorities, and understand where the highest priorities are, faster. And, you know, Joe and Brynley have been involved, mainly in energy, for the last several years, really helping companies make their enterprise software much better and more usable to their customers. And so lately, they've been really involved in AI and the integration of AI into software products.

Speaker 1:

And so they're great experts to talk about these topics. So with that, I think we should go ahead and jump right into it.

Speaker 2:

Sounds good.

Speaker 1:

So, Brynley, I don't know if you wanna start, or, Joe, you wanna start off?

Speaker 2:

I think there's so much to talk about. I know, Joe, you had something about Copilot and PCs, so I'll let you

Speaker 3:

unpack that. Yeah. It's kind of an interesting topic. So I'll kind of have to explain a bit of the background since you guys are Mac guys. Right?

Speaker 3:

So this doesn't affect you. But I think it's gonna become, you know, a bigger thing. I think all the other operating systems will slowly pick this up. But, basically, with Microsoft, you can tell just from the direction that they're going, and a lot of their recent conferences, that they're going all in on AI. They're trying to get AI into absolutely every part of their business, their product strategy.

Speaker 3:

So they're very invested in it, and we've seen instances of AI being incorporated into Office, you know, PowerPoint and Word, in email. So the whole Office suite is getting AI integration, and it's generally, you know, just assistant integration. And then, obviously, on their development side, with GitHub and Copilot integration, they've got sort of assistance with code and writing code in VS Code and different tools like that. And then what they have been doing is adding Copilot to Windows. So that's your general search.

Speaker 3:

So similar to, you know, if you go to Edge or use any sort of Copilot interface, it's their implementation of ChatGPT. Right? So you can sort of ask questions, get responses, and that's always been, you know, kind of a button that you click on in Windows to achieve that. And it's still very much just ChatGPT, just a button that allows you to get to it very easily. So that's the current landscape, and that's what it's been so far.

Speaker 3:

But some interesting news came out this week where you can see the direction that they're going, and they're going way more into it than that. They really want to integrate AI deeply into the operating system. Now I'll sort of explain the features of what they're doing, and there's one specific feature. And, yeah, I'd just be interested to get your guys' take on what you think this means and the direction that they're going with it. Do you think it's a good thing, or do you see problems with it?

Speaker 3:

So I'll explain a bit about it. So what they're doing is they're building currently, it's not available on all Windows machines. I think it's just their new laptops with a specific Snapdragon processor, but they're working with Intel and AMD to bring out variants that'll work on, you know, normal PCs too. But really what this means is that they're calling it Copilot+ PCs, and what they're doing is integrating AI, and the chipset itself is extremely good at AI. It has dedicated capabilities.

Speaker 3:

It's got neural processing units right on the Snapdragon processor itself for helping with that sort of thing. And on the surface, it seems pretty good. You know, they're just adding more AI integration to a lot of the products, and they showed a few simple examples, like, you know, the classic Windows Paint. You can draw a fuzzy picture, and there's a side panel, and you can ask it to, you know, turn that fuzzy picture of a turtle into an actual, like, you know, really good image of a turtle, and it'll do that for you.

Speaker 3:

Or, you know, put a photo there and say, you know, redesign this photo and make it look like an eighties, you know, movie star or something, and you can also, you know, add a photo on top of that. So kind of integration like that. But there's one thing that they're adding, which is really the main feature of this, and it's called Recall. And what Recall is, well, to explain it simply, what it's basically doing is taking screenshots consistently of all the work that you're doing on your computer, saving that to an index stored locally on your machine, and then using AI to analyze all the data in it. It's going to analyze all the screenshots, pull out all the information it can out of that, and then store that within the index too.

Speaker 3:

So what they're claiming, and what their use cases for this are, is that you'll then start being able to, you know, ask Copilot information about the work that you're actually doing. Like, what file was I working on last week? And I think I had a meeting scheduled that I, you know, was writing out an email for, and I think I just got that email. Can you bring that email back? And it'll bring it back for you.

Speaker 3:

It's basically like a full history. I think it goes back 3 months of everything that you've done on your computer. So it's taking consistent screenshots of that. They even showed examples of you being able to, like, sort of walk through that as a timeline. So you can sort of imagine looking at your screen, a timeline sort of bar at the top that you can drag back and forth over time, and you'll just see screenshots showing where you were at that point in time.

Speaker 3:

So it's really just a full-on recording device of everything that you're doing. And, again, the advantage is you should be able to query on that. You should be able to ask questions, you know, ask about work that you're doing, or bring me up that image that I was editing last week of my new puppies or whatever. So, you know, it has interesting sort of use cases there, especially when it comes to work.
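As a rough mental model of the pipeline described above (purely an illustrative sketch, not Microsoft's actual implementation), a Recall-style feature boils down to: capture a snapshot, extract its text, store it in a local index, then search that index later. Here the `extract_text` OCR step is stubbed out; a real system would run a local vision model there and capture on a timer.

```python
import sqlite3
import time

def extract_text(screenshot: str) -> str:
    """Stub for the OCR / vision-model step that pulls text out of a
    screenshot. For this sketch, we pretend the screenshot is its text."""
    return screenshot

class RecallIndex:
    """Toy local index: snapshots go in, a searchable history comes out."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)  # stored locally, as Recall claims
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS snapshots (ts REAL, app TEXT, content TEXT)"
        )

    def capture(self, app: str, screenshot: str) -> None:
        # A real system would call this on a timer, every few seconds.
        self.db.execute(
            "INSERT INTO snapshots VALUES (?, ?, ?)",
            (time.time(), app, extract_text(screenshot)),
        )

    def search(self, term: str) -> list:
        # "What file was I working on last week?" becomes a text query
        # against the extracted content, newest first.
        rows = self.db.execute(
            "SELECT app, content FROM snapshots WHERE content LIKE ? ORDER BY ts DESC",
            (f"%{term}%",),
        )
        return rows.fetchall()

index = RecallIndex()
index.capture("Outlook", "Email draft: meeting notes for Friday")
index.capture("Paint", "turtle sketch, layer 2")
print(index.search("meeting"))  # the Outlook snapshot comes back
```

In the real feature the query step is presumably also mediated by a model that turns a natural-language question into this kind of lookup; the point of the sketch is just that everything rests on one searchable store of extracted screen content.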

Speaker 1:

So you're getting

Speaker 3:

Yep.

Speaker 1:

Yeah. So you're getting Total Recall, but not with Arnold Schwarzenegger? Yeah.

Speaker 3:

Exactly. Yeah. Exactly. That's probably why they call it that. So it's an interesting bit of tech.

Speaker 3:

Right? So with that sort of I've got a few of my own kind of thoughts on that tech, but off the bat, what would you guys say? Would you say that would be something that you'd want to use? Would you want to see that in your Mac operating systems, or could you see a use case for it? Do you think it's, you know, worth kind of the whole implementation that they're going with, or what are your thoughts on that?

Speaker 1:

Yeah. I'm curious if some of it isn't just more of a salesy type of thing. I mean, great. You're using their own neural networks. It's AI integration.

Speaker 1:

That seems to be the hot topic. So whether you have AI in there or not, it's just a better processor. And using those terms makes it sound more sellable. I'm curious how much of that comes into play. Like, is there really any difference beyond just improvement of the processor with better, you know, architecture?

Speaker 1:

So I'm wondering how that works. But as far as Recall, I mean, it's definitely interesting to keep all that. It's like a timeline. Google follows you with maps or whatever, and it has a history. So the more history you have of what you've been doing, it can better inform whether it's ads or something else.

Speaker 1:

So I think there's definitely value to that, but, obviously, there's a big brother aspect of it as well. So I'm sure people are gonna be bothered by it, whether you have a, you know, interesting history and screenshots to show or not.

Speaker 2:

It is interesting too. I was looking at some of the specs of those machines, and things like the RAM are pretty high for some of the different models. And you do think, what have they got planned for the future? If we go back to I think it was the last podcast, we talked about the kind of small language models and having locally running models. Could they be using something like that, or could they extend

Speaker 2:

the ability to do that? And then where I see this going as well is, obviously, if you have some form of training or referencing through everything you're doing on your operating system, and you have a personalized model in the future that understands that and can be tied back into your actual operating system, it's going to be so much easier to be able to ask for help with something or, you know, almost automate certain things that you've been doing already. It could recognize, well, I realize that once a week, you usually send a summary email. So I've noticed the things you've done the last week. Would you like me to summarize and add that to it? Yes, please.

Speaker 2:

So there's 30 minutes of your day that's been saved just from going through that. So there are lots of exciting possibilities. I kind of agree with you, Makoto. It's probably the marketing sort of hype that will sell these initially, and those little things like the Paint demo. This one was like, this is amazing. But what they can do in the future if they get machines that are capable of actually running their own models, or even interfacing with models with kind of low latency.

Speaker 3:

Yeah. Yeah. At the moment, I'm trying to think of my own use cases for it. Would I even find value in that? Do I ever get to a point in my work day where I'm like, oh, I need to go and find that thing I was working on, or have a question on some work I was doing last week? If it only comes up now and then, I do wonder if it's a strong enough value add to make me want to go out and buy one of those PCs, for instance.

Speaker 3:

Sure. They may just, you know, inherently add it to Windows, you know, normal desktops at some point. But, yeah, it'll be interesting to see. You know, I guess this is the first phase of this, and the use cases are pretty simple. But, really, what they're building out is a full, you know, indexed database of everything that you do, and there's gotta be a lot of, you know, new functionality that they could roll out that could add on to that. Or maybe they could see you doing some task and say to you, oh, there's a better way of doing this task. You don't have to, you know, do this whole process of clicking on these 10 different menus to get your filters looking this way.

Speaker 3:

There's a new feature that allows you to do it in one click or something. So they can analyze your workflows and improve those, potentially, or send you reminders for things automatically that you may have forgotten to do. You know, it's basically just analyzing everything that you're doing. And because almost everything that we do now is through our devices, you know, it really is capturing almost our entire lives. So it's really interesting to see where they go with that.

Speaker 1:

What I was thinking is, you know, I just saw ChatGPT can integrate your Google Docs into it, and I've been getting notifications of Gemini wanting to integrate with everything. And so it's going to be a fight with AI if you're having multiple AIs looking at your things.

Speaker 2:

Oh, and so today.

Speaker 1:

I'm curious how that's going to be. I know. Like, hey. Wait a minute. You're not supposed to be here.

Speaker 1:

Yeah. I'm gonna kick you out of here. Yeah. Google AI or ChatGPT. Yeah.

Speaker 1:

So that's kinda interesting. And, you know, going a little on the dark side of what can happen, you're seeing, or you're going to see, such a massive separation between people who are integrated with tech and people who aren't. It's almost like, you know, they say, oh, when the rich get richer and the middle class disappears, you're just gonna have a big disparity between the rich and the poor. If certain things happen, you're almost gonna see this happen as well with tech.

Speaker 2:

Yep.

Speaker 1:

If you're just not involved with that, you're gonna, you know, start getting way behind, or just way out of it, which could be good in some cases, maybe. Maybe you don't want anything to do with it. You wanna go outside. You wanna do something that's healthy. You don't wanna sit there and just have AI come at you from all different directions.

Speaker 1:

But if you're not really utilizing it, I mean, what are you doing in your day to day? You go to your job, come back, you watch TV, you eat, and that's it.

Speaker 2:

I'd like to look at the other side of that. You're either gonna get people who are using technology, or people who aren't using it. I'm sure we all know of people who'll use a computer, but they haven't used any of the LLMs yet.

Speaker 2:

So you can imagine, if we're starting to move into an age of AI centralized in your actual operating system, which will happen, and which these new Copilot+ PCs are going to offer, then it's going to sort of become a seamless transition. Everyone's going to use AI whether they know about it or not. And I think it's just a case of, at the moment, we have to consciously adopt it, but I don't think it's going to be like that for long.

Speaker 2:

It's going to be so interwoven in everything that, you know, just by checking your email, you're not going to have a choice whether you want it or not. It's going to be in there suggesting, I don't think you're responding to this person in a very nice way. Maybe you should tell them and you're like, how is it telling me this?

Speaker 3:

Yeah. Yeah. Corporate policy has detected that your language in this email

Speaker 1:

Manipulation AI.

Speaker 3:

We've changed it.

Speaker 2:

We intercepted your email, and we've corrected it

Speaker 3:

to be much more appropriate. We've rewritten it for you. Don't worry. Yeah. Exactly.

Speaker 3:

Done. Yeah. Yeah. That's funny. So I'll add on a few points from the comments I was just reading through on the news articles around this.

Speaker 3:

Just, you know, there's obviously big privacy concerns. So the statement goes on to say, Microsoft admits that the feature performs no content moderation, meaning it will gobble up everything it sees, including passwords in a password manager or your account numbers on your banking websites. And just to sort of follow on from that, like any sort of Discord conversations that you're having, or Teams chat conversations, it's gonna be grabbing all of that. It's gonna be pulling all the text out of it and saving it to this single index. So, previously, our privacy concerns have always been around specific applications.

Speaker 3:

And whether we trust those applications or not. Like, do we trust Edge? Do we trust Chrome? Do we trust Word? Do we trust, you know, any kind of software that you're running on your machine? And you make those individual choices. But really now, all that data, all the privacy, is just being, you know, pulled out of all those applications into a central sort of repository of everything.

Speaker 3:

And so there's a lot of privacy concerns around it, obviously, because what does that mean? It's now got everything about you, your passwords, you know, where you've been, what websites you visited, who you spoke to, or your voice conversations could start getting recorded. And while this is currently only being stored locally on your machine, that's still a huge thing for anyone to try and get access to or be able to pull out of your machine. They'd basically have, like, you know, a full history of everything around you. So privacy concerns, obviously how Microsoft manages that.

Speaker 3:

That's around security too, I guess. So let's say they do fix all of that, you know, or figure out ways around that. I think they do already have ways you can turn off certain apps where it won't see them. But, again, those are, like, buried in settings that people are gonna have to find, or know those settings exist and manually go, no. Don't look at my Discord.

Speaker 3:

Don't look at my Teams. You know, these are work conversations that I don't want you to have access to. Most people won't do that. They wouldn't be aware of that. So, yeah, interesting concerns around that.

Speaker 1:

We don't have to worry about Microsoft ever getting hacked, and they've never been hacked before. Yeah.

Speaker 3:

Yeah. Exactly.

Speaker 1:

That could be very nerve-racking if it's got everything, if everything has access to your

Speaker 3:

But then, yeah. One source of everything.

Speaker 2:

It is interesting. I would say if everything's going into the cloud, I mean, a lot of it's happening already.

Speaker 3:

Yeah. A

Speaker 2:

lot of things are backed up. And then you look at your device, like your phone, that you often do your banking and everything on. So a lot of that's there already. Probably what we need to look at is you did say, Joe, that a lot of it's stored locally, and that's where I think these more specific or smaller models that can be run on a machine solve a lot of the privacy issues, because you're not sending data out into the cloud where it's stored for anyone to use. It's central to that computer.

Speaker 2:

So you buy an AI computer, the model's on there, and it's actually going to be fed by and learn from all of that, so it's a pure model in some ways.

Speaker 3:

I think that may actually even be what they're doing, a bit of a hybrid approach, obviously, because in the one sentence, talking about Copilot+ PCs, it says connected to and enhanced by large language models running in our Azure cloud in concert with small language models. So it seems like it may actually be small language models doing, like, the simple data extraction part of pulling the data off the screenshots into the index.

Speaker 3:

That makes sense to have it all done locally. Right? You shouldn't have to farm that out to the cloud, because it doesn't need a large language model to do it. That's a very specific task it's got to do, like, you know, pull out data, and you can fine-tune it to that task. So yeah.

Speaker 3:

Exactly. It's a good point. Yeah.
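That hybrid split could be sketched roughly like this. Every name here is hypothetical, invented purely to illustrate the routing idea the quoted sentence suggests: a small model handles the narrow, fine-tunable extraction task on-device, and only open-ended questions would be farmed out to a large hosted model.

```python
def run_local_extractor(screenshot_text: str) -> dict:
    """Narrow, well-defined task a small on-device model (e.g. on the NPU)
    could be fine-tuned for: pull structured fields out of screen text."""
    words = screenshot_text.split()
    return {"tokens": len(words), "preview": " ".join(words[:5])}

def call_cloud_llm(question: str, context: list) -> str:
    """Placeholder for the expensive hosted-model call; a real system would
    send the question plus retrieved context to a large language model."""
    return f"[cloud LLM would answer {question!r} using {len(context)} snippets]"

def handle(task_type: str, payload: str, context: list = None) -> object:
    # Route by task shape: routine extraction stays local and cheap,
    # open-ended reasoning goes out to the big model.
    if task_type == "extract":
        return run_local_extractor(payload)
    return call_cloud_llm(payload, context or [])

print(handle("extract", "Quarterly report draft v3 final"))
print(handle("ask", "What was I editing last week?", ["report draft v3"]))
```

The design point is simply cost and privacy: the fixed extraction task never needs to leave the machine, so only the rarer, harder queries would ever touch the cloud.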

Speaker 1:

Got a

Speaker 2:

few more. The sorry. One point I'll add in just before you go on to the others is just getting back to that Recall section. When I read about that, I was thinking, well, this is sort of getting to the vision we've chatted about before of a purely conversational UI for everything. So your operating system offers you this sort of chat history, in some ways, of everything you've done, visualized in a specific way.

Speaker 2:

And this could really be the start of moving towards that, because you have that sort of log of everything you've worked on. And, eventually, it could almost come down to one sort of interface where, you know, you're sort of seeing what you've done and where you've done it. So it almost merges applications together, and you sort of have all these multifunctional, whether you want to look at them as cards or something, through a central interface. You're just moving through those, doing different specific activities. So it'd be interesting to see where it goes.

Speaker 3:

Yep.

Speaker 1:

Yeah. And, I mean, that's an interesting point. I just read that somebody said that Elon Musk wants to make X the application for everything. I guess WeChat in China is kind of getting to that super app type of thing where it's doing everything. Mhmm.

Speaker 1:

You know, banking, mortgage, buying a pizza, whatever. You're doing everything through one super app. And so if you have that capability through one app, it's gonna be able to just manage everything because it has access to everything. So that's an interesting take

Speaker 3:

Yeah.

Speaker 1:

On our next episode of Black Mirror. Yeah.

Speaker 3:

Yeah. Well, a few more smaller points I can go through, just some of the other concerns I read. And, again, like, I'm trying to find a balance between, you know, seeing the positives in this and the good use cases, and, like, you know, sort of the concerns that people are seeing, and I think it's good to try and see both. I can often veer into, you know, just seeing, like, the problems. So I'm trying to go into this with a good perspective, but we'll see where it goes.

Speaker 3:

But some of the other ones are, you know, especially around the enterprise side of things. You know, since this could be applied to all workstations, there could be a lot of analysis being done on employee usage of those machines. And, again, this is kind of big brother too, but, you know, the technology is there. The capability is there. So, you know, that can be good from a security standpoint, because now there's an index that's tracking, and it can flag anything that it sees that may be a security issue and sort of send that up, so you don't have to have log monitors, you know, churning through all the logs to try and see if there are any security exploits. It can actually just visually see if someone's doing something that they shouldn't be doing and flag that.

Speaker 3:

And that, again, could be, you know, trained externally on what that actually looks like and then uploaded. So that's a good side of it. Sure. Obviously, the negative side of it is productivity. So, you know, it's tracking everything that you're doing, and it's very easy to see if you've been, sort of, you know, on Facebook for, like, 5 hours that morning and maybe you're not doing much work.

Speaker 3:

I mean, that has always existed, but this is kind of, like, the next level of that sort of productivity tracking. And, you know, once you have databases of this stuff, it's a lot easier to sort of compare and contrast. So, yeah, positives and negatives there.
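As a toy version of the security-flagging idea raised a moment ago (our speculation about how such monitoring might work on top of the index, not a feature Microsoft has described in this form), simple rules swept over the extracted snapshot text could surface entries for review instead of someone churning through logs:

```python
import re

# Hypothetical rules; a real system might use a trained classifier instead.
PATTERNS = {
    "possible_password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_entries(entries: list) -> list:
    """entries: (app, extracted_text) pairs from the index.
    Returns (app, rule_name) pairs that a reviewer could look at."""
    flags = []
    for app, text in entries:
        for rule, pattern in PATTERNS.items():
            if pattern.search(text):
                flags.append((app, rule))
    return flags

snapshots = [
    ("Notepad", "password: hunter2"),
    ("Paint", "turtle sketch, layer 2"),
    ("Browser", "card 4111 1111 1111 1111 saved"),
]
print(flag_entries(snapshots))
```

The same sweep is exactly what makes the privacy concern real: anything that can flag a leaked password for a security team can just as easily catalog it.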

Speaker 1:

That's scary. You know, it's almost like the efficiency question. Some people just need to disconnect for maybe 30 minutes, and they're far more productive for the next 3 hours, versus some people who can work a little bit more straight through.

Speaker 3:

So how

Speaker 1:

do you distinguish the productivity between those 2? It should just be the result, the output. Yeah.

Speaker 3:

And the second one too is

Speaker 1:

Shall we jump to the next oh, sorry. Yeah.

Speaker 3:

I've got 2 more I'll quickly run through, just because, again, it's interesting to think about, as this technology comes out, what its ramifications are. Again, you know, Microsoft is pretty I don't know what the word is. They don't mind selling your data, very easily. They're pretty happy to do that.

Speaker 3:

And this is really, like, the ultimate database. Right? Because it's not just your browsing history or your usage history of certain apps. It's everything that you're doing, no matter what. So they're, like, you know, kind of building up this big index of data around you, and while it's maybe local, like, they could very easily be stripping out all the metadata from that and sending it up, which I'm sure they'll want to do.

Speaker 3:

And so while it may not know exactly, you know, the things that you're doing, it'll kind of gauge where you spend your time. And, yeah, so there's a lot of advertising sort of usage around that too. Advertisers would love that data. So, yeah, that's an interesting side of it too. Microsoft, obviously, this is a product.

Speaker 3:

Right? They're selling this to well, you know, they're not selling Recall as something that you're buying. They're giving it to everyone for free. And so what are they getting out of that? There's gotta be some sort of data cost to that.

Speaker 3:

So it'll be interesting to see how that works out. And then another interesting sort of way I was looking at this too, sort of going back to the enterprise side of things. This is a stretch, but I think it's an interesting thing to think about. If you have your employees doing work all day and you're monitoring them, we talked about, you know, monitoring their productivity. But, again, you're actually monitoring the tasks that they're doing too.

Speaker 3:

And let's say you have someone whose job it is to capture insurance forms all day or something like that. Right? And it's a very repetitive, very straightforward task. The AI is watching that. Right?

Speaker 3:

It's understanding what that person is doing, and it could be very easy to build a model of that person's task and their actual job purpose. You know, you get 10,000 agents all doing that same sort of task and build up a pretty good model around that. And now, suddenly, you can replace positions which were previously thought to be potentially quite complicated or involved, but now can be done with AI, since it's all been trained by those people and what they're actually doing on their machines all day, with months or years of their work. So, yeah. Interesting, that too.

Speaker 3:

Yeah. Yeah. We'll see if that's where that goes.

Speaker 2:

Yeah. They can sell that for a good bit. Wow. Yeah.

Speaker 3:

Oh, yeah. Yep. And Yeah. Any job

Speaker 1:

I think I saw somebody said that, you know, tech is getting hit pretty hard with layoffs. I think 2022 had more than 2020 and 2021 put together, as far as tech layoffs, or just overall tech layoffs, in the US.

Speaker 2:

Those 2 years were COVID as well, which had pretty big layoffs. So that's terrible. Yeah.

Speaker 1:

But then 2023 had even more than 2022. And so now 2024 is already starting pretty strong with the tech layoffs. And they think that you know, obviously, they're trying to explain that it's AI. Everybody's focusing on AI, and they think that AI is gonna replace 40% of the jobs out there.

Speaker 1:

Which is substantial, if that's the case. Other people are like, well, companies did a lot of overhiring as well. Yeah. So maybe that's part of it, because the pendulum has swung from employee to employer now. So maybe it's just some of that rebalancing, which I tend to agree with a little bit.

Speaker 1:

But we do see certain jobs getting replaced, or at least, as we discussed on the last call, it could be significant: as you're training more and more, certain jobs are definitely gonna be taken over by this technology. So it'll be interesting to see where that goes, for sure. Next topic: emergent capabilities. Brynley, take it away.

Speaker 2:

That's a good segue from Joe's slightly Big Brother topic. Okay? This is something I hit looking through a lot of news, and I was like, emergent capabilities. Well, that's interesting. I don't know too much about that.

Speaker 2:

Let me dig into that. I mean, what if I told you guys that something like ChatGPT wasn't designed to summarize text, and wasn't designed to have conversations? It was designed to complete text. And the emergent capabilities were like, I can summarize text now. Wow, didn't know I could do that. So you start thinking, interesting. So, anyway, I'll dive into a little bit of detail.

Speaker 2:

I saw this headline: researchers at Amazon have trained a new large language model for text-to-speech that they claim exhibits this sort of emergent ability. And it got me wondering what exactly those emergent abilities are, in more depth. And really, how much of a black box are these models? If these abilities are just coming out of nowhere, what don't we know about them? So the news article I was looking at was around this BASE TTS model, which is a 980-million-parameter model.

Speaker 2:

It's the largest text-to-speech model that's been created to date. Now, the folks at Amazon trained models of varying sizes on public domain speech data of up to 100,000 hours. And what's interesting is that their medium-sized, 400-million-parameter model, which was only trained on 10,000 hours of audio, was the one that displayed a noticeable improvement on these challenging test sentences. So this really aligned with a lot of the findings with large language models, where they've increased the parameters and then more and more emergent behavior gets shown.

Speaker 2:

If you're unsure about parameters, we looked at those in our last podcast, the one on small language models, where you can get a bit of an idea of what parameters are. But, yeah, what was fascinating is that the 980-million-parameter model had been trained on ten times the amount of audio, 100,000 hours of it, but didn't show any further abilities over and above the 400-million-parameter model. Mhmm. So it's quite interesting. It's almost like what we spoke about before, that sort of sweet spot.

Speaker 2:

Yeah. There can be some growth up to a certain point, but you don't see anything after that. So, getting back to what I find the fascinating part: how could we define emergent behavior without seeming a bit enigmatic? We could say emergent behavior is the spontaneous development of complex and unanticipated abilities in a system.

Speaker 2:

And it arises from simple interactions and scaling, but, and this is the great part, without explicit programming for those abilities. So, again, we come back to the purpose of GPT-2. The primary purpose was to predict the next word in a sequence of text. That was it. This objective, known as language modeling, involves training the model on large amounts of text data, and it learns the statistical properties and patterns of language.
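To make the language-modeling objective described here concrete, here is a minimal sketch: a toy bigram model that predicts the next word purely from co-occurrence counts in the training text. This is a deliberately simple stand-in for illustration, not the transformer architecture GPT-2 actually uses.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequent continuation seen in training, if any."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "sat"))  # "on"
```

Real language models do the same thing in spirit, predicting a distribution over the next token, but learn it with billions of parameters over far richer context than one preceding word.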

Speaker 2:

And, again, the goal was really to create a model that could generate coherent and contextually appropriate text by continuing a given prompt. So if you started a sentence like, "I feel like getting cotton candy at the movies," it would carry it on. Very basic. But once they started digging into GPT-2, they realized these specific emergent capabilities were coming out. Things like coherent text generation: even though it was only trained with the simple objective of predicting the next word, it could generate coherent and contextually relevant text over long passages, which is amazing.

Speaker 2:

And also so-called transfer learning: the ability to perform various natural language processing tasks, like summarization and translation, without being trained on those tasks, which is fascinating. And then, yeah, I think one of the most surprising, you could even say sensational or shocking, emergent capabilities of these large language models, shown particularly in GPT-3, was the ability to perform what's called few-shot and zero-shot learning. Few-shot learning is where you give a model, often through prompting, a few examples of a specific task and look for the completion. Their examples used French. I can't speak French, so I'm gonna do some Afrikaans translation instead.

Speaker 2:

So imagine you prompted it with few-shot learning. You said, right, translate from English to Afrikaans.

Speaker 2:

The cat sat on the mat: "Die kat het op die mat gesit." The book is on the table: "Die boek is op die tafel." But now you say, all right, I've prompted you with a few shots. Do you know what I'm asking you to do?

Speaker 2:

So you say, "The dog barks," and the model goes, "Die hond blaf." So it's really interesting that it hasn't been trained for that at all, but you give it a few shots, and it's like, oh, I know what you're asking for. I can do that for you.

Speaker 2:

But then what's even more surprising was the zero-shot learning: the ability to perform tasks like that without any specific examples. So, again, you give it the text to translate, and it's like, cool, I know about translation.

Speaker 2:

I know what you need me to do. And bam, there it is. I think why it was so surprising is that, despite giving it specific content to learn and a specific architecture, it comes back to the fact that it was never programmed to do that.
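Since the difference between few-shot and zero-shot prompting comes down to how the prompt text is assembled, a minimal sketch may help, reusing the Afrikaans example from the conversation. The helper names and the "English:/Afrikaans:" labels are illustrative assumptions, not any particular model's API.

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a prompt that shows worked examples before the final query."""
    parts = [instruction]
    for source, target in examples:
        parts.append(f"English: {source}\nAfrikaans: {target}")
    parts.append(f"English: {query}\nAfrikaans:")  # the model completes this line
    return "\n\n".join(parts)

def zero_shot_prompt(instruction, query):
    """Assemble a prompt that only states the task, with no worked examples."""
    return f"{instruction}\n\nEnglish: {query}\nAfrikaans:"

examples = [
    ("The cat sat on the mat.", "Die kat het op die mat gesit."),
    ("The book is on the table.", "Die boek is op die tafel."),
]
print(few_shot_prompt("Translate from English to Afrikaans.", examples, "The dog barks."))
print(zero_shot_prompt("Translate from English to Afrikaans.", "The dog barks."))
```

The surprising finding was that large models complete either prompt correctly: the few-shot version by imitating the pattern, and the zero-shot version from the instruction alone.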

Speaker 2:

And as the parameters are boosted, these emergent abilities or capabilities surface, which is pretty fascinating. So I wanted to get both your thoughts on that. It's difficult not to go, oh, that's a black box. But, yeah, what do you guys think?

Speaker 1:

It just sounds like that's where you're going to get consciousness potentially.

Speaker 3:

Consciousness just sort of sparked into existence. Yeah.

Speaker 1:

Yeah. I mean, that's something that would be an emergent behavior. And, yeah, that could be interesting. I mean, creativity. That's something where making music, art, things like that.

Speaker 1:

It's just gonna get better and better at that, because we don't know how to program that mathematically at the moment. So why not have something like that as an emergent behavior that evolves?

Speaker 2:

It's fascinating. I'm reading a book called Homo Deus, the follow-on to Sapiens. It talks about our neurological makeup, but really the basis is these genetic algorithms, series of steps that allow us to perform certain behaviors. And a lot of what was interesting for me is how much of it seems unconscious, how much we're following these algorithms without realizing it ourselves. We're like, wait.

Speaker 2:

We've made a lot of decisions today based on evolutionary algorithms. So it is interesting. At the moment, it's not clear what makes us conscious and why we have more self-awareness than, potentially, other animals. There's no key component in the brain that they've found. So you do wonder at what point something like that comes about. And I know a lot of these GPT models are performing a specific function.

Speaker 2:

But when you get to neural networks with baked-in algorithms, it's interesting to see how much is not known, and how much we ourselves are running on our own sort of program without being aware of it.

Speaker 1:

You know, we could go pretty deep into this. Yeah.

Speaker 1:

You know? Are dreams part of what helps with creativity? Sometimes, or most of the time, when I sleep better and I dream, I'm more creative, and those subconscious ideas come quicker. Does that have something to do with it? Who knows?

Speaker 1:

I mean, yeah. I think, again, with emergent behavior, does this get into that kind of social, emotional intelligence? People can be very intelligent in one way, analytically, or very intelligent socially, with emotional intelligence. So does AI, as an emergent behavior, all of a sudden... because I don't know how to train somebody mathematically to become more emotionally intelligent. There are some things you can do to be more empathetic. But can it start to think that way and then say, oh my god, humans are pretty crazy?

Speaker 1:

Let's go a different way. I don't know. It should be interesting.

Speaker 2:

It's interesting, just looking at the release of GPT-4o, which I know we touched on before, where there's more human interaction. You do start wondering. You look at IQ and EQ, but, again, how much of those are just genetic algorithms inside us that could be replicated based on a number of cues? So it's fascinating for me to see, the more this evolves, or the more this advances, what the overlap is between what we're doing at a biological level and what, at a digital level, are very similar algorithms we could put together.

Speaker 3:

I'll take a stab at a more grounded approach to it, too, just to sort of go with that.

Speaker 2:

That was the fun of that joke.

Speaker 3:

I'm just joking. Yeah. I agree. But that that's how my brain's working. So Yeah.

Speaker 3:

Yeah. No. I was thinking, though, in terms of emergence. Emergent means unexpected, but that could basically mean things that we as humans see as very separate aren't. You know, AI is built on pattern recognition.

Speaker 3:

Right? It's built on spotting patterns, and as you give it information or a question, it tries to generate a response based on that prediction. And it could just mean that a lot of things we see as separate, that we're very surprised it can do, are actually very similar in a way that our brains just don't think of them. So, you know, me taking my dog for a walk or me writing an email: as humans, we see those as very separate tasks, very separate processes to go through. But the patterns behind them might actually be very similar, and it can translate between those two tasks that we see as separate but that are actually alike.

Speaker 3:

They come to similar outputs. Like, we see, okay, the dog's back and rested. Okay, you sent the email.

Speaker 3:

But pattern-wise, they can reach the same kind of conclusion around the actual goal of what you're trying to accomplish. So, yeah, just thinking about it that way too: maybe we just see things differently from how models see them, and things we perceive as very separate tasks are actually fundamentally similar in how they can be approached.

Speaker 1:

While you guys were talking, I put into ChatGPT what it thinks the most logical emergent behaviors

Speaker 3:

Mhmm.

Speaker 1:

Would be based on ranking of what would be first to last. Of course, the first one's adaptive learning and evolution, to complex interaction with physical environments, which, you know, I think that makes sense with the robotic side of things. Collaborative problem solving, social emotional intelligence, unpredictable decision making, self improvement and meta learning, autonomous creativity, ethical reasoning and moral judgment, resource optimization and sustainability solutions, emergent languages and communication protocols, unintended consequences and bias amplification, and then finally, self awareness and consciousness.

Speaker 3:

That's funny.

Speaker 1:

There you go.

Speaker 3:

Yeah. Those are the last two things that had to be in there: self-awareness and consciousness.

Speaker 1:

Oh, yeah.

Speaker 3:

But you might get that too. Let's see.

Speaker 2:

Yeah. Yeah. Don't worry about that. I don't want to pay too much attention to that stuff.

Speaker 1:

Yeah. No. No. That's all the way in the future. You know?

Speaker 1:

Way, way, way in the future. Okay. Let's you wanna jump to the next one?

Speaker 3:

Yeah. We can wrap up on this one because it's pretty small, but it actually segues from what was mentioned around emergence, which is unexpected behavior. There's interesting news coming out of Anthropic, the guys who built Claude. This week, they released some findings where they're basically saying they're able to understand those neural networks a little better. And just to take a step back on what they're trying to accomplish: as Brynley just mentioned, the way neural networks work at the moment is that we feed them huge amounts of data, and then they just magically work. Right?

Speaker 3:

It literally is that no one really understands their inner workings. And every time we've tried in the past to understand those inner workings, it's been a very confusing mesh of millions of neurons, and it's very hard to predict how they're going to behave or how they came to the answer they came to. And this is important with AI, especially around accuracy: understanding how a model is coming up with its answers, and trying to control those answers, matters, because if you want to release an AI, you don't want it coming up with toxic language or giving incorrect answers, and you want to try to prevent that. The ways that's been done in the past, because it's a black box and you can't see how it's coming up with an answer, are controls that basically measure the answer after it comes out. And if it's a weird answer, in the background, without you as a user realizing it, the system adjusts the parameters a bit and sees if it can come up with a better answer that isn't on the list of keywords

Speaker 3:

that it doesn't want the response to contain. It'll keep trying until it gets an answer that's approved and then send that over to the person. Same with accuracy: there are accuracy checks that try to determine whether an answer is accurate. But, again, all these measures are put in place after the answer comes out of the model. You can't adjust the model itself easily, apart from making lots of different models, measuring them all, and seeing which one works best. It's still a black box. So the researchers at Anthropic have been looking into this.

Speaker 3:

What they were basically looking at, again, is a neural network. Right? A neural network is made of billions of what you can almost call neurons, and they fire when certain questions are asked. In previous attempts, when they tried to monitor those neurons, the correlation between which ones fire and the questions being asked was confusing.

Speaker 3:

You could ask a question about something like semicolons in C#, and you'd get the same neurons firing when you ask questions about burritos or discuss the Golden Gate Bridge. Completely separate topics, but the same neurons firing. So it's been really, really challenging to figure out, okay, how can we even predict which neurons are doing the work here? What does it actually mean? So what they did, which is quite clever, as part of that technique of trying to understand it, was to say, okay, let's stop looking at individual neurons.

Speaker 3:

Let's zoom out completely and study groups of neurons, and try to figure out which groups are firing relative to certain answers coming in or out, where it's generating things. And from that, they've had really good success, which means they can actually target those groups of neurons and suppress them or amplify them, depending on the questions coming through and the way they want to mold the model into what they want it to be. So, for instance, if they see that a certain question always produces inaccurate answers, they can go inside the model and suppress the neurons they saw were responsible for those inaccurate answers.

Speaker 3:

Then, when they rerun the question, because those neurons are suppressed, essentially skipped, different neurons get used, which come up with a better answer. So being able to manipulate the model at the neuron level, rather than trying to manipulate the answer that comes out and fix it after the fact, is huge progress, because it's just a better way of understanding these models and getting them into a more accurate, more understandable state. So, yeah, it seems like interesting research, and it means the models are gonna become better and more predictable, and you can engineer them in a way you couldn't before. That's always been very advantageous to the very large corporations like OpenAI, because right now they have the resources to build all the tuning around these language models to get you correct answers and shape responses a certain way. There's a lot of manipulation they're doing, and a lot of other systems and algorithms they're using, to get GPT into the state it's in, which is why the open source models and smaller models haven't been able to compete very well; they just don't have the resources to add all that additional tooling.

Speaker 3:

But with research like this, as tools are developed to use these techniques, you can actually modify the models pretty simply yourself and massage them into a state that works better for you, without having to build loads of tooling around them to fix things up after the fact. So, yeah, it's an interesting direction, and it's good progress. That was really good news to see.
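The suppress-or-amplify idea described above can be sketched as a toy "activation steering" step: scale one group of neuron activations before the rest of the network reads them. This is a deliberate simplification; Anthropic's published work operates on learned features extracted from across many neurons, not raw indices like these.

```python
def steer(activations, group, scale):
    """Scale the activations of a chosen group of neurons.

    scale=0.0 suppresses the group entirely (effectively skipping those
    neurons), while scale > 1.0 amplifies their downstream contribution.
    """
    return [a * scale if i in group else a for i, a in enumerate(activations)]

# Toy layer output where neurons 2 and 3 are believed to drive an
# unwanted behavior (the indices here are purely illustrative).
acts = [0.5, 0.25, 1.0, 0.75, 0.125]
print(steer(acts, group={2, 3}, scale=0.0))  # suppressed: those entries become 0.0
print(steer(acts, group={2, 3}, scale=2.0))  # amplified: those entries double
```

The interesting part is that this intervention happens inside the forward pass, before any answer exists, rather than filtering or regenerating the output after the fact.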

Speaker 1:

In the limited graduate studies on neural networks that I took some classes in, it was interesting seeing the different ways you can train neural networks. But what was always in the back of my mind is that you're hindered by the number of neurons. Humans, from what I see, have, I think, let me look, about 86 billion neurons in our brains to perform tasks, whether physical or analytical. Take a cat: a cat has about 250 million neurons, so they're using significantly fewer when they're performing tasks.

Speaker 1:

And maybe their motor skills, their speed, they're much quicker, they're faster; it's more inherent to their physical structure, and they're using less. But from an analytical standpoint, they're not using nearly as much, so the tasks they can perform are limited. So as the hardware gets better, you obviously start to talk about quantum computing, because the number of neurons firing in our brains to perform these analytical tasks, just us talking right now, is in the millions.

Speaker 1:

So I'm curious how many neurons it takes, and whether, as our hardware improves, these neural network chips we talked about at the beginning really play a factor. Because now they're evolving the architecture of the hardware to actually match what's in the human brain. That's where you're going to get some of this emergent behavior that evolves, versus just following your program with 10,000 neurons or hundreds of thousands of neurons. In reality, if you want the human brain, you need multimillions, if not billions.

Speaker 3:

Yeah. Yeah. Exactly.

Speaker 1:

So I don't know where that goes. Is this a good time to end this episode?

Speaker 2:

We've gotten in deep this episode.

Speaker 3:

Yeah. There's a lot to that, I think.

Speaker 1:

I'm gonna go take a nap.

Speaker 3:

From Big Brother to, yeah, in-depth neural network analysis. Yeah.

Speaker 2:

Next time! Next episode will be a lot lighter.

Speaker 1:

But, yeah, thanks to everybody for tuning in and listening. Again, like and subscribe, and we look forward to seeing you on the next episode.

Speaker 2:

Cool. Catch you there.

Speaker 3:

See you.

Speaker 1:

Alright. Bye everybody.