HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.
MICHAEL BIRD
Hi Aubrey
AUBREY LOVELL
Hello.
MICHAEL BIRD
I have a joke for you.
AUBREY LOVELL
Okay…
MICHAEL BIRD
I say I have a joke for you. Give me a second. I'm going to ask for a joke. Please can I have a joke, question mark. Okay. Here's the joke. Here we go. Get ready to laugh, Aubrey. Why do skeletons fight each other?
AUBREY LOVELL
I don't know.
MICHAEL BIRD
Because they don't have the guts.
AUBREY LOVELL
I walked right into that one. That was right there.
MICHAEL BIRD
So that joke came from Chat HPE, our company workplace assistant AI. I'm not sure it's fine-tuned for jokes. Do you want to hear another joke? Should we do another joke?
AUBREY LOVELL
Can we make this so-called joke HPE themed?
MICHAEL BIRD
Yeah, okay, so please can I have another joke HPE themed. Here we go. All right. This is gonna be a good one. I just know it. Okay, here we go. Why did the server go to school? I don't know. It wanted to get a better connection and then... Hang on. Hang on, the joke's not finished. It wants to get a better connection, open brackets, and trust HPE to deliver those reliable connections. Exclamation mark, close bracket.
AUBREY LOVELL
What a good one, Chat HPE.
MICHAEL BIRD
I'm not sure how funny the jokes were, but they were jokes.
I’m Michael Bird
AUBREY LOVELL
I'm Aubrey Lovell
MICHAEL BIRD
And welcome to Technology Now from HPE.
MICHAEL BIRD
Chatbots are pretty much everywhere from Workplace assistants and voice assistants in our homes to general online chatbots which can generate conversations and images at the mere click of a button. Aubrey, do you use them at all in your day to day life?
AUBREY LOVELL
Yeah, so, my gosh, well, the other day I used it to give me specific dietary needs for my dog, who's older, and it was amazing. In like 30 seconds I had a full meal plan, grocery list, everything. It was great. I've used it for even just, like, exercise planning. In terms of business, though, I've also used it with advertising, helping to ideate certain scripts and different headlines for types of banners, et cetera. So it's really useful.
MICHAEL BIRD
Aubrey, as you probably know, we have our very own workplace assistant, created bespoke for HPE, which is called... It is, it certainly is. And that is going to be the topic of today's podcast episode. And we are joined by one of the team who developed it, Jose Mejias, a distinguished technologist who works in the data office here at HPE.
AUBREY LOVELL
Okay, so that sounds pretty cool. To set the scene a little bit, before we talk about our modern workplace assistant, I want to take it back to the sixties and look at the first chatbot ever created, Joseph Weizenbaum’s ELIZA.
It’s time for Technology Then.
AUBREY LOVELL
It’s 1966 and Professor Joseph Weizenbaum has just announced the creation of a programme which produces the illusion of understanding. The chatbot, called ELIZA after the character Eliza Doolittle from Pygmalion, was accessed from a typewriter which was hooked up to a mainframe. A user could type into the typewriter and ELIZA would reply by typing on the same typewriter, thus allowing a user to hold a conversation with a computer.
There was a catch though… (there’s always a catch).
The responses were rather… let’s call them basic. We’re obviously used to modern-day chatbots which are remarkably advanced when it comes to simulating human conversation. ELIZA, however, was not quite there yet, so… Do you want to hear an excerpt from a conversation with ELIZA which Weizenbaum published in his paper about the programme?
MICHAEL BIRD
Always!
AUBREY LOVELL
Great. I’ll play the user and you can be the chatbot! Our producer has said he will put a filter over you to make you sound more robotic for this. So the conversation went as follows:
CONVERSATION WITH ELIZA
Men are all alike.
IN WHAT WAY?
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I’m unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY?
AUBREY LOVELL
So as you can hear, it’s responding to very specific parts of what the user was writing rather than looking at the conversation as a whole but it’s still pretty impressive right…
MICHAEL BIRD
Oh for sure
AUBREY LOVELL
One particularly interesting thing which Weizenbaum began to notice when people used ELIZA was the way they often began to attribute human feelings and characteristics to the programme. This would later become known as the “Eliza effect”. The effect didn’t just apply to strangers: even Weizenbaum’s secretary, who had watched him build the programme and knew it was designed to just repeat users’ own words back to them, requested that Weizenbaum leave the room after she began to speak to the computer. The Eliza effect is still seen today with large language models – especially given the size of modern LLMs and their ability to sound incredibly realistic in their mimicry of human speech.
MICHAEL BIRD
So, we started with ELIZA, and now we have our very own chatbot here at HPE to assist staff. To find out more about it, I spoke to Jose Mejias, an HPE Distinguished Technologist who works in the Data Office. He and his team created a workplace assistant called Chat HPE, and I wanted to know why they did this…
JOSE M. MEJIAS
Well, you know, two or three years ago, there was this new kind of model, the large language model, and it really took the industry by surprise. That capability that you can talk, you can connect with an entity that knows context, that understands a more natural conversation. And also it's been trained on the best information within the internet. And it changed everything. So right now you can have natural conversations. It's like having a co-worker. It's like having an assistant with you. Now you don't have to have all this cumulative set of data points, architecture in my case, all in the book. Now you can rely on those entities to really bring that information to you in natural language. And now it's shifting how you operate, how you work. So all the boring kinds of things that you typically do, you can delegate to that assistant, and now you can focus on what is important, which is bringing value to the enterprise.
MICHAEL BIRD
Okay, okay.
And so from a user perspective, does it, like what's the user experience for it?
JOSE M. MEJIAS
People were really impressed at first, but one of the key learnings is that you need to learn how to connect and establish a conversation with these entities. This is what is called prompt engineering. So the initial phase is about how we can interact with it, how we can get the most out of it. And we're still in this phase. So we're doing training and training, and my recommendation to any kind of enterprise is to invest in training. Prompting, in general, is how to communicate effectively, and you have to master it.
So you can ask questions. I'm an architect, so architectural questions: how to manage emails, how to answer emails, how to create presentations, how to focus on a particular area. You can bring your own documents. In our experience, in the end it's creating around four hours of time savings on average, and multiplying the value of each user three or four times.
MICHAEL BIRD
so how does it actually work sort of under the hood?
JOSE M. MEJIAS
People feel that they are intelligent entities, like a physical AI, but it's all vectors, matrices and maths, applied maths. They predict what is the next word. But they do that in a very intelligent way. The model looks at the entire conversation and puts weights on each word in the conversation in order to have a semantic understanding, and based on that it can predict the next word. This is all it does, but with billions and billions and billions of parameters. And with GPUs, these graphics processing units that are able to do huge computations at microsecond levels, this is how it all comes together: the data from the internet, and algorithms that can understand semantic language and communicate effectively while maintaining semantic context.
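What Jose describes — weighting every word in the conversation against the others, then predicting the next word — can be sketched in a few lines. This is a toy illustration only: the three-dimensional "embeddings" and the tiny vocabulary are invented for the example, whereas real models learn billions of parameters.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 3-dimensional word vectors, made up for illustration.
embeddings = {
    "the":    [0.1, 0.0, 0.2],
    "server": [0.9, 0.1, 0.3],
    "went":   [0.2, 0.8, 0.1],
    "down":   [0.8, 0.3, 0.2],
    "school": [0.1, 0.2, 0.9],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(context_words, query_word):
    """Weight each context word by its similarity to the query word,
    then mix their vectors into one context summary (the 'attention' step)."""
    weights = softmax([dot(embeddings[w], embeddings[query_word])
                       for w in context_words])
    mixed = [0.0, 0.0, 0.0]
    for w, weight in zip(context_words, weights):
        for i in range(3):
            mixed[i] += weight * embeddings[w][i]
    return mixed

def predict_next(context_words, candidates):
    """Score each candidate next word against the attended context."""
    ctx = attend(context_words, context_words[-1])
    return max(candidates, key=lambda w: dot(ctx, embeddings[w]))

print(predict_next(["the", "server", "went"], ["down", "school"]))  # → down
```

Scaled up by many orders of magnitude, with learned rather than hand-written vectors, this weight-and-predict loop is essentially the mechanism he is pointing at.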
MICHAEL BIRD
And in terms of the data that this uses, where is that data sourced from? Is it just sort of all the data we've ever created at HPE?
JOSE M. MEJIAS
Well, there are three different kinds of data. First, outside data, the internet data, which is all the data that is around in all these web portals. Commercial models are trained with a vast set of data from the internet, but they don't have access to your internal data, because there are firewalls.
MICHAEL BIRD
Yeah
JOSE M. MEJIAS
Okay. So the question is how you use those commercial large language models in order to interact with your enterprise data. This is where grounding comes in. This is a common term, grounding. How you ground your model.
So what you do is you create a knowledge management system in which you put your data: your unstructured data, your structured data, your transactional data, any kind of data. You put it in a format that can be consumed by the system, and then you need to connect this data and make it available. So in Chat HPE you're going to see Q&A and summarization and also RAG.
There are three functions that you can use where you can upload first your documents, your enterprise documents. You can establish a question and answer session. Also, you can summarize.
So this is how we mix and match: internet data, company data, and then there's a third one, the transactional data, which is also real-time data that you can embed in your conversation as well. These three kinds of data are really what makes large language models powerful, by grounding the large language model in the enterprise data.
MICHAEL BIRD
And so the big difference between Chat HPE and a large language model that you can just sort of access commercially is our own data.
JOSE M. MEJIAS
Yes, sir, our own data. And this is what makes it really powerful for the enterprise. And the connectivity with the external data as well. Sure.
MICHAEL BIRD
So how do you put guardrails on this sort of thing? Because presumably if we have lots of our company data, some of that is going to be quite sensitive, some of that is going to be confidential to a particular person or a particular group of people. So how do you ensure that the right data goes to the right people at the right time?
JOSE M. MEJIAS
That's a great question, because that is one of the key differentiators between an external assistant and an internal assistant. So we put in place four things: guardrails, policy enforcement, making sure that the conversation is without bias, that it is in line with the company values, and that it is grounded properly with the right data.
But also we do identity, so we are identified using our single sign-on: who is the specific user that is interacting, who is authorized and who's not. We're logging all the conversations, the prompts and the responses. But also, we would like to democratize it in a fair way, so we have quotas: you have a limit, a certain amount of quota per month that you can use. So all these layers that I shared with you are critical in order to have safe AI and get the value out of it. And it's a key differentiator between how we do things internally in the enterprise and how the commercial assistants enable people outside the enterprise.
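The layers Jose lists — identity and authorization, logging, and per-user quotas — can be sketched as a thin gateway in front of the model. This is a hypothetical sketch, not Chat HPE's actual API; every name and field here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Gateway:
    """A toy access-control layer sitting between users and a model."""
    authorized_users: set
    monthly_quota: int                       # max prompts per user per month
    usage: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def handle(self, user_id, prompt):
        # Layer 1: identity and authorization (single sign-on upstream).
        if user_id not in self.authorized_users:
            return "denied: not authorized"
        # Layer 2: quotas, so capacity is shared fairly.
        used = self.usage.get(user_id, 0)
        if used >= self.monthly_quota:
            return "denied: monthly quota exhausted"
        self.usage[user_id] = used + 1
        # Layer 3: log every prompt (and, in practice, every response too).
        self.audit_log.append((user_id, prompt))
        return f"response to: {prompt}"

gw = Gateway(authorized_users={"alice"}, monthly_quota=2)
print(gw.handle("alice", "summarise the Q3 report"))  # served
print(gw.handle("bob", "hello"))                      # denied: not authorized
print(gw.handle("alice", "draft an email"))           # served
print(gw.handle("alice", "one more"))                 # denied: quota exhausted
```

A real deployment would also run the policy and bias guardrails he mentions before anything reaches the model, but the ordering — identify, authorize, meter, log — is the same idea.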
MICHAEL BIRD
And how do you train these models? What's the process to train it? Is it, you know, pointed at all of our data and it just sort of looks at it, or is it a more involved process? Is there a data cleansing process as well?
JOSE M. MEJIAS
Absolutely. Garbage in, garbage out. So you need to make sure that you have the right data with the right quality, data that really moves the needle. So you asked me how you train. First you have to label data, because the process of training is that, hey, for this word or these following words, this should be the next word. So in order to do that you have to label data. You have to have a human in the loop that is labeling that data, analyzing it and determining, yeah, for this conversation, for this set of words, this is the next one. But also, in order to speed up the process, you create digital twins.
You create synthetic data in order to do that. So it's very computationally intensive, and this is why we have partnered with a lot of companies that help us with specific products.
MICHAEL BIRD
And is it sort of one big batch job to train on all the data, or is it just a sort of continuous...
JOSE M. MEJIAS
So we generate three to five terabytes of data per month. So it has to be a continuous process. Data keeps changing, and this field changes. One week seems to be one year.
MICHAEL BIRD
Yeah, because if you're not constantly training on new data, the data set gets old very, very quickly and it will give you potentially incorrect answers.
JOSE M. MEJIAS
So training is a complex thing. It takes time. So there are other tools that we use in order to get at this data faster and ground the models. There are patterns like RAG, retrieval-augmented generation, where basically, rather than training the model, you add the pertinent data for a conversation to the context window of the model and then you answer the question. So it's a more efficient, less compute-intensive process that is really serving us well. Also, there is technology like the Model Context Protocol that allows you to connect in real time to systems of record. So we blend; we're always looking for efficiency. So RAG, MCP, and fine-tuning. And there is a third technique where you don't have to train the entire model: you train only the later layers of the model, and it's therefore less compute-intensive.
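The last technique Jose mentions — updating only the later layers instead of the whole model — can be shown with a toy model where each layer is just a parameter count and a trainable flag. A real implementation would flip gradient flags on tensors in a deep learning framework; this sketch only illustrates how much compute the trick saves.

```python
# A toy "model": layer names and parameter counts are invented for illustration.
layers = [
    {"name": "embedding", "params": 50_000_000, "trainable": True},
    {"name": "block_1",   "params": 30_000_000, "trainable": True},
    {"name": "block_2",   "params": 30_000_000, "trainable": True},
    {"name": "head",      "params": 10_000_000, "trainable": True},
]

def freeze_all_but_last(layers, keep_last=1):
    """Mark every layer except the last `keep_last` as frozen,
    so fine-tuning only updates the tail of the network."""
    for layer in layers[:-keep_last]:
        layer["trainable"] = False
    return layers

freeze_all_but_last(layers, keep_last=1)
trainable = sum(l["params"] for l in layers if l["trainable"])
total = sum(l["params"] for l in layers)
print(f"training {trainable:,} of {total:,} parameters")
# → training 10,000,000 of 120,000,000 parameters
```

Here fine-tuning touches roughly a twelfth of the parameters, which is why partial fine-tuning is so much cheaper than full retraining.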
MICHAEL BIRD
You mentioned RAG a couple of times. For listeners who don't know, can you just give a quick definition of what RAG is and why it's useful?
JOSE M. MEJIAS
Yes. R is retrieval, A is augmented, then generation. What it means: you put your documents in a search engine. Then, when you're prompting the model, we send that question first to the search engine. The search engine extracts from terabytes of data the pertinent passages for that particular conversation, three or four or five.
Okay, then it brings this data to the large language model and lets the large language model answer the question based on this data, using the external data as well. So it's efficient in the sense that it's very pointed to the conversation you're having, and really fast.
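The three steps Jose names map directly onto code. This is a minimal sketch: the keyword-overlap scoring stands in for a real search engine, and `call_llm` is a stub standing in for an actual model API.

```python
# A tiny document store; the contents are illustrative only.
documents = [
    "ELIZA was created by Joseph Weizenbaum at MIT in 1966.",
    "RAG grounds a language model in enterprise data at query time.",
    "HPE is headquartered in the United States.",
]

def retrieve(question, docs, top_k=2):
    """Retrieval: rank documents by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, passages):
    """Augmentation: prepend the retrieved passages to the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using these passages:\n{context}\n\nQuestion: {question}"

def call_llm(prompt):
    """Generation: stub for the model call that produces the answer."""
    return f"[model answers based on {prompt.count('- ')} passages]"

question = "who created ELIZA"
passages = retrieve(question, documents)
print(call_llm(build_prompt(question, passages)))
```

The efficiency he describes comes from the retrieval step: the model only ever sees a handful of relevant passages, not the terabytes they were pulled from.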
MICHAEL BIRD
Yeah, so you're sort of using existing technology, that search engine technology which has been around for years, to do the thing that it's good at, and then allowing the large language model to do the thing that it's good at.
Okay, that makes sense. I mean, does Chat HPE, or workplace AIs, do they have the ability to interact with other large language models?
JOSE M. MEJIAS
Yes, actually we built an orchestrator, because there are going to be small models, there are going to be large language models, there are going to be open source models, commercial models, domain models, all kinds of models. So we put intelligence on top. When you have a conversation, the orchestrator knows, and will direct you to the right model. In many cases it might be multiple models, in order to get the best response for your answer. So yeah, it's
multimodal, meaning that you can add videos, audio, text, but it's also multi-model in the sense that it's able to tap into multiple models.
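An orchestrator of the kind Jose describes can be sketched as a router that inspects the conversation and picks a model. The routing rules and model names below are invented for illustration; a production orchestrator would typically use a trained classifier and could fan a query out to several models at once.

```python
# Hypothetical model registry; these names are made up for the example.
ROUTES = {
    "code":    "small-code-model",
    "image":   "multimodal-model",
    "company": "grounded-enterprise-model",
}

def route(prompt):
    """Pick a model by simple keyword rules; fall back to a general model."""
    text = prompt.lower()
    if any(w in text for w in ("function", "bug", "python")):
        return ROUTES["code"]
    if any(w in text for w in ("image", "photo", "diagram")):
        return ROUTES["image"]
    if any(w in text for w in ("hpe", "policy", "internal")):
        return ROUTES["company"]
    return "general-llm"

print(route("fix this Python function"))       # → small-code-model
print(route("what is our HPE travel policy"))  # → grounded-enterprise-model
print(route("tell me a joke"))                 # → general-llm
```

The payoff of this design is cost and quality: cheap, specialised models handle what they are good at, and only the queries that need it reach the biggest models.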
MICHAEL BIRD
And again, how do you make sure that sensitive data doesn't get out? Because from my understanding, AI models can sometimes be a bit opaque. It's a little bit hard to know what's going on under the hood.
JOSE M. MEJIAS
I always get this question about the security of the generation, but think about...
Generation is similar to how you treat your emails, how you treat your Teams conversations. You are in the driving seat to determine how good this information is, who you can share it with, how you should package this information. So that responsibility still resides with the user, and the user is fully responsible for that.
This comes to the topic of hallucination. Oh, we need to be careful because the models hallucinate, and if they hallucinate wrongly, then it has a financial impact.
Absolutely, this is true. But I would like to give you a different perspective on hallucination. This is what makes large language models powerful. They establish relationships where you don't think there might be a relation. So they apply one pattern here to a different domain and discover things that you haven't thought about. So when talking about hallucination, yes, keep a human in the loop to make sure that what it says is factual, but also let it hallucinate, because it's going to help you with your thinking.
MICHAEL BIRD
How do you think the increase in use of AI systems, workplace large language models, how do think that's going to impact the way that we do work?
JOSE M. MEJIAS
I read a study saying that, in general, a large language model, if not used well, is making us dumber. I don't know whether you have read that. But I think it's going to empower us to get to the next level, to have critical thinking. So I always think about how we use this as Sherlock Holmes. What makes Sherlock Holmes great? It's not that he has all the facts. It's that he can connect all the facts and uncover hidden patterns, with...
an exponential way to access this knowledge. This is what we should do. We need to become Sherlock Holmes now. So you have models with all your data, but knowing the data is not good enough right now. It's how you then create these new patterns, this new knowledge, this new technology. A large language model cannot create new knowledge; it's trained on known patterns. But you can use it to empower your critical thinking and create new knowledge, new patterns. I think this is the pivot to exponentially empowering AI users at the enterprise level.
MICHAEL BIRD
So what's the future for Chat HPE?
JOSE M. MEJIAS
It's going to be reaching all employees in the next couple of months. It's going to be embedded with more models. It's going to be applied to multiple applications. Right now we have 50-plus applications; we are looking at the next 100 applications in the next six months, to impact the entire corporation. So the journey is to pivot into the intelligent enterprise: an enterprise that can sense, that can perceive, that can reason, and that can adapt, making intelligent decisions at any point in time to really move the specific KPIs.
AUBREY LOVELL
That was a really fascinating and interesting conversation, Michael. I really enjoyed it.
MICHAEL BIRD
What I found really fascinating is that Jose and his team have created this system that is using, you know, HPE’s data in creative ways. I know there are loads of conversations about AI as it relates to data, and lots of thoughts and conversations as it relates to sensitive company data, etc. So the fact that we can actually input our own data and, as an organisation, use that… yeah, I found that really fascinating.
I also thought what he said around hallucination was quite interesting. He said something along the lines of, hallucination can be a good thing. I guess, in particular, when you're looking at creativity: if you know that the model is hallucinating, then actually you could potentially use that to your advantage, because it's maybe creating something that didn't exist before.
AUBREY LOVELL
That's true. I mean, it's kind of like a canvas, right? So if those things are happening, then it does give you different opportunities to either fix it, or maybe it takes you in a different direction. So sometimes little blips like that are a good thing.
MICHAEL BIRD
And what he talks about is sort of bridging to external large language models. Again, that's not something I'd really heard of, but it's almost creating this single engine that will say, actually, this question doesn't necessarily relate to us as an organization. You're asking about what the weather's like in St. Petersburg, Florida, so I'm going to send that somewhere else. Or, I'm going to query just a database, because I know those answers. Just being clever with where you're routing questions. I think it's quite an interesting thing.
AUBREY LOVELL
Yeah, it definitely is. I think us being able to have that tool, or any company that has their own kind of language model tool with AI, is really important, because you think about, like, intranets and how they work, or have worked in the past, and how clunky they are, right? So as you're building your knowledge base, it's so important, right, between enablement, people being able to find things within your company, the questions, the history. I think it's just a better collaboration tool all around to find what you need. So yeah, very interesting.
MICHAEL BIRD
You are so right about intranets. Intranets are so hard to maintain, but they are so vital for a company. And maybe with a new starter, you just give them a window to Chat HPE and say, all right, fire away, what questions have you got? And it will just spit out the right answers.
MICHAEL BIRD
Anyway, Aubrey, seeing as this is a podcast about Chat HPE, I thought we should give it the final word. So we asked it to generate an outro for the podcast. Are you ready, Aubrey?
AUBREY LOVELL
Fire away, Bird.
MICHAEL BIRD
Producer Harry, can you enhance my voice so I sound a bit like a, just 10% robot here? Okay, here we go.
ChatHPE – Voiced by Michael
Thanks for tuning in to Technology Now, the show where we dive deep into the world of ChatHPE and explore how AI is transforming the way we connect, create, and solve problems. We hope today’s discussion sparked new ideas and gave you some valuable insights into the incredible potential of ChatHPE and conversational AI.
AUBREY LOVELL
Okay that brings us to the end of Technology Now for this week.
Thank you to our guest, Jose Mejias,
And of course, to our listeners.
Thank you so much for joining us.
MICHAEL BIRD
If you’ve enjoyed this episode, please do let us know – rate and review us wherever you listen to episodes. And if you want to get in contact with us, unlike what ChatHPE said, you’ll have to send us an email to technology now AT hpe.com. And don’t forget to subscribe so you can listen first every week.
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird
This episode was produced by Harry Lampert and Izzie Clarke with production support from Alysha Kempson-Taylor, Beckie Bird, Allison Gaito, Alissa Mitry and Renee Edwards.
AUBREY LOVELL
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman and Jacqueline Green and our social media designers are Alejandra Garcia, and Ambar Maldonado.
MICHAEL BIRD
Technology Now is a Fresh Air Production for Hewlett Packard Enterprise.
(and) we’ll see you next week. Cheers!