Recommender systems are among the most challenging, powerful and ubiquitous areas of machine learning and artificial intelligence. This podcast hosts the experts in recommender systems research and application. From understanding what users really want to driving large-scale content discovery - from delivering personalized online experiences to catering to multi-stakeholder goals. Guests from industry and academia share how they tackle these and many more challenges. With Recsperts coming from universities all around the globe or from various industries like streaming, e-commerce, news, or social media, this podcast provides depth and insights. We go far beyond your 101 on RecSys and the shallowness of another matrix factorization based rating prediction blogpost! The motto is: be relevant or become irrelevant!
Expect a brand-new interview each month and follow Recsperts on your favorite podcast player.
Note: This transcript has been generated automatically using OpenAI's Whisper and may contain inaccuracies or errors. We recommend listening to the audio for a better understanding of the content. Please feel free to reach out if you spot any corrections that need to be made. Thank you for your understanding.
You are listening to episode number 17 of RECSPERTS, the recommender systems experts podcast, where we discuss the application and research in recommender systems, talking to experts from industry and academia.
I'm Marcel Kurovski, data scientist and RecSys enthusiast, and I'm your show host.
I hope you enjoy this show and I'd be happy if you support it.
Doing this is very easy and straightforward.
Subscribe to RECSPERTS on your favorite podcast player, leave a rating and follow me on LinkedIn.
This is the easiest way to support me and this project. Thank you.
If you seek more details on our discussions, check out the episode show notes.
There you can find a rich compilation of papers, blog posts, guest information, talks and more for each episode, along with the episode's transcript.
If you have questions, a recommendation for an interesting expert you want to hear on the show, or any other suggestions, reach out to me on LinkedIn or Twitter or contact me via email at Marcel at RECSPERTS.com.
That is M-A-R-C-E-L at R-E-C-S-P-E-R-T-S.com.
And now off to my conversation with Miguel Fierro from Microsoft.
A lot of companies claim that they have a recommender system, and then you start scratching the surface and they don't.
I don't think there are many technological solutions that have this super extreme high ROI.
When you have a lot of talented people, a lot of times people have strong opinions on how the code should be done or whether we should use a class or a function.
There are some intense discussions about that, right?
We found a solution to that problem.
And I think that was the main driver of the success of recommenders, because the result is that we had a lot of talented people who could work together in the same direction.
A constitution.
So that's what we created. We created a law.
Our law was that all of us, all the contributors to recommenders, had to behave in a certain way.
In our view, the best way is when you have something that works and then adapt it to your problem.
So that's why we have so many examples.
Almost no matter what problem you have in your recommendation system, you're going to find five to ten different notebooks there, and hopefully what you can do is run the notebook, see if it works, then just remove the dataset that you see there, put in your dataset, and start iterating.
Hello and welcome to this new episode of RECSPERTS, recommender systems experts.
Today, I have invited Miguel Fierro as my guest, and we will be talking about Microsoft recommenders and also a bit about the most recent developments in generative AI and what Miguel's view on them is.
To give you a brief introduction, Miguel Fierro is a principal data scientist manager working at Microsoft where he is leading the personalization team.
And this is not his only duty: he's also an adjunct professor at IE University.
Miguel Fierro obtained a master's degree in robotics and automation, and a PhD, also in robotics, from King's College London.
And he has several publications at, for example, KDD, WWW, and NeurIPS.
So hello and welcome to the show, Miguel.
Hi, hi, Marcel, it's nice to see you and yeah, happy to have a conversation about recommendation systems.
Yeah, I'm happy that you joined the crowd because I see you very often posting very interesting insights that you share on LinkedIn.
So you are very active there, sharing your thoughts and giving advice to people on how to keep up with the most recent developments.
And yeah, I really like to see your thoughts and what you are sharing there.
Can you share with us, actually, who are you and how did you get into recommender systems?
Yeah, well, I'll say I'm a very curious person. Probably that's a very good way of defining me.
I studied engineering. My undergrad was electrical engineering.
And when I was finishing that degree, I realized I didn't like it.
So kind of wasted some years of my life.
And then, at the end of the degree, everybody had to do a final project.
And typically, the normal thing is that people do the final project in their area of expertise, which for me would have been electricity.
But I thought electricity was not really very interesting for me.
So I was looking into other possibilities, and I went to the Department of Robotics, and I kind of fell in love with robotics and AI.
And that's where I did my master's and PhD.
And then I did a couple of startups and later I ended up at Microsoft.
And it was kind of when I was doing the transition between startup and Microsoft that I started working on recommendation systems.
The startup I had was a visual recommender.
So we had a SaaS solution where you could take a picture of a garment on the street, and it would recommend the most similar garment in a retailer's catalog.
That was kind of my first experience with recommendation systems.
And then when I got to Microsoft, I also started working on recommendation systems.
One of the things we did is the recommenders repository, which has gotten a good amount of attention and a lot of contributions from really talented people.
And I would say that's kind of my path to the area.
Oh, okay.
I see.
So you basically started out with fashion recommendations, one could say, and then delved into other areas of recommender systems.
I mean, Microsoft as a whole is engaged in so many different areas where you can personalize your products with the help and support of recommender systems.
So which other domains and areas have you encountered during your path besides that very start on fashion recommendations?
It's very interesting because during my time at Microsoft, I've been working in, I could say, two big areas.
One is you could call it technical sales.
And technical sales is that we go to customers, typically large customers, and we help them to build solutions on Azure.
And my team precisely, we help them build recommendation systems on Azure.
And we work from media companies to big retailers, some of the big retailers, also gaming industry as well.
Yeah, I would say these three are the ones where we did the most recommender work, particularly in retail, probably, where we had the biggest impact.
And then I moved to what people call engineering, which is basically where you create internal products for Microsoft.
And then I worked on some internal products related to e-commerce, so retail as well, and also gaming.
So actually, you know, these three probably are the strongest.
So quite a broad set of very different domains and also with all their specifics.
I mean, recommenders are not always only the generic set of algorithms that you have, but you also need to incorporate a lot of domain expertise, domain knowledge to kind of tailor them to the corresponding domain.
So what is it actually that made you so fascinated with recommender systems?
Because I do get that you were, in the very beginning, very enthusiastic about robotics, driven by being dissatisfied with electrical engineering itself.
And then at some point there was another switch: was AI kind of the bridge leading you from, let's say, robotics into the field of recommender systems?
Did you switch just, let's say, one part, or what was it that made you say: hey, personalization, that's it?
Yeah, so I would say, like, you know, first of all, robotics is super broad, right?
Like you have the mechanics of robotics or electronics.
And you also have the AI, right, which is kind of how the robot moves or behaves, et cetera.
When I was, even when I was doing my PhD, I was more focused on the AI part.
So maybe I would say 80% or 85% was AI versus electronics and almost nothing related to mechanics.
So I already kind of was leaning more towards AI.
But I particularly like this applicability and usability, right?
Like the nice thing of a robot is, you know, you do some programming and at the end you have a robot that moves, right?
Also in AI, a lot of the time you have that, right?
Like you create a recommendation system and you can provide a better experience to the users.
Right. Another thing that was very interesting.
Also, I have my entrepreneurship background.
So this ability to kind of, I would say that in AI, you can do two things.
You can either, in terms of economic value, you can either increase revenue or you can reduce cost, right?
A lot of the time, many of the AI solutions are kind of reducing cost, right?
For example, you know, chatbots: you have your company and you have a call center with a lot of people, you know, doing the repetitive task of taking calls, et cetera.
Then you can have a bot that, you know, can do it all the time.
And then these people, maybe they can go to high level jobs, right?
In that situation, you are reducing costs, right?
And a lot of people, a lot of things related to computer vision, to forecasting, to NLP, a lot of them are related to reducing costs.
And reducing costs is something that, for the finance people, is not the best thing, right?
Because, you know, with reducing costs you always have a limit, and it's very predictable.
The best thing you can always have is increased revenue. That's the best thing.
Because the sky is the limit.
Kind of, right? I mean, like maybe the limit is the human population, right?
But I mean, like the limit is very, very high.
So a recommendation solution is actually one of the few AI systems that can increase revenue.
And another thing that is very, very important, and this is one of the biggest hurdles that we had in my team and in general, in technical teams, is what we call the dependency.
So dependencies are when you create an ML solution.
Sometimes you are dependent on some other technology. Just an example.
So let's say you create a churn solution: you want to identify churn, or you want to segment customers into different groups, right?
Now, what you want to do is you want to reduce churn, right?
So you create your machine learning solution that creates some segments of people, and then you need another system.
So you have another dependency, which is the email marketing software that sends this email, right?
And so you are dependent not only on the technology, but also on what you put in this email, right?
The problem is that there is no direct path from the churn identification that you build to value, right?
Whereas in reco, you have a direct path, right?
So you have a marketplace. You basically put different items for each individual.
And then the person buys or doesn't buy. So it's completely direct.
I guess that's a bit the difference between, let's say, the predictive nature of certain ML outputs versus the prescriptive nature of what we see with recommender systems, where we directly intervene in the experience that users have.
Exactly.
Would you say it like that?
Yeah, exactly. And, you know, I think that was one of the reasons why I got attracted, because suddenly, you know, a lot of the success, the majority of the success relies on my team.
I don't have to rely on different things. And that's a massive advantage.
And the other thing is, I remember there was a report by McKinsey around 10 years ago, talking about the Amazon recommender. It's kind of famous.
And they mentioned that 35% of the revenue in Amazon.com comes from recommendations.
I mean, 35% of the revenue. I remember that I did the math, because 35%, obviously, people think it's a lot. So 35%, when the report was published, was around 25 or 27 billion. Now it's around 100 billion. And the very interesting thing is that there's a massive ROI, because you don't need tens of thousands of engineers to create this 100 billion.
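Note: as a rough, back-of-the-envelope illustration of the figures Miguel quotes, assuming the 35% share is applied to Amazon's publicly reported net revenue; the rounded revenue numbers below are editorial additions, not from the conversation.

```python
# Back-of-the-envelope check of the "35% of revenue" figure.
# Revenue numbers are rounded approximations from Amazon's public annual reports.
share = 0.35
revenue_2013_bn = 74    # ~USD 74B net revenue around the time the figure circulated
revenue_2019_bn = 281   # ~USD 281B net revenue in 2019

print(f"~{share * revenue_2013_bn:.0f}B")  # ~26B, in line with "25 or 27 billion"
print(f"~{share * revenue_2019_bn:.0f}B")  # ~98B, in line with "around 100 billion"
```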
So I don't think there are many technological solutions that have this super extreme high ROI.
That's very interesting.
Basically, you don't need the people to scale it. You need basically a good team of engineers, scientists that you work with, of people that are able to integrate and innovate.
But then to scale it, given that your algorithms and systems are designed in a scalable way, it's basically about the compute and the storage that you throw at it.
It's funny that you bring up that study by McKinsey, because I have also seen several studies on personalization, on the value of personalization, claiming that people are demanding it. And I always have difficulty finding the real source of that 35% figure.
When you brought it up, I actually had to think of that Fortune article that was citing internal numbers from Amazon, where they claimed a 29% sales increase within just one year from integrating recommendations into almost every part of the purchasing and buying process.
Definitely numbers that are, let's say, in the same ballpark, and that might convince you that there's real growth there. And I actually really like the perspective that you bring up: top line versus bottom line. So it's not that reducing costs provides no value, but that opportunity is somehow limited, whereas increasing revenue is not as limited. Okay, cool. That brings us to our main topic for this episode. And I'm really glad that you joined, because as far as I understand, you're the main person in charge of driving the Microsoft recommenders repository, which is a great collection of many algorithms.
Can you give us some background on how it came to that point, how you developed that repository, and why it gained so much traction?

Yeah, so the background, the reason why we built this, is that when we were working in technical sales, the way we worked is that one or two of us would go to a customer and build a solution. And it was very clear at the beginning that we were reinventing the wheel all the time. We would go to customer A, build our recommender from scratch, and not share any code. We'd go to customer B, same thing. And then a couple of us said: guys, we should build some libraries so we can share code. Instead of taking six or nine months to do a project, it should take us three or whatever. Around that time, my team was doing a lot of blogs and a lot of PR and technical marketing. So we said, okay, can we publish this as open source? And our leadership said, yeah, do it.

And what we did at the beginning is, we wanted this to be a big effort, not just an effort of our team, because our team was kind of small. The core contributors to recommenders, I don't think over the years we've been more than five to eight, something like that, right? But we needed more people. So the first thing we did is we partnered with Microsoft Research. There is a very, very good team in Beijing.
It's the team of Xing Xie; in my opinion, he's one of the top researchers in recommendation systems.
So we partnered with them. And we also partnered with some other open source contributors. The one that I think was the first external person who agreed to contribute was Nicolas Hug. He had a recommender framework, very famous at the time, called Surprise.
Nicolas is now at Meta, and he's actually a contributor to scikit-learn. I think he also contributed to PyTorch. So, I mean, the guy is really good. And I remember he was the first person outside Microsoft to actually contribute to recommenders. He did a notebook with Surprise, actually, which was very good. And, you know, that was kind of the origin.

But I think there is one very, very important thing, which is the strategy that we used to build recommenders. We spent a lot of time on the contribution guidelines and how we contribute, because in many technical teams, and I think this happens in all the big companies, Google, Microsoft, Meta, Amazon, when you have a lot of talented people, a lot of times people have strong opinions on how the code should be done, or whether we should use a class or a function. There are some intense discussions about that, right?
So we found a solution to that problem. And I think that was the main driver of the success of recommenders, because the result was that we had a lot of talented people who could work together in the same direction.

The thing is, this came from a podcast about the Roman Empire. The Roman Empire is a very, very interesting period, in part because it, you know, rose and fell, right? And the historian, the guy who was doing the podcast, said that there were three reasons why the Roman Empire was so successful. The first one is talent: they had the best warriors, right? And if you think about the big tech companies, it's kind of easy to attract talent, right? So all the tech companies have this piece. The second thing in the Roman Empire is that they had a mission: they actually wanted to Latinize Europe, expand, share the culture, et cetera, right? So that's a common mission. And again, in all these companies, it's kind of easy to have a common mission; depending on the company, sometimes it's bottom-up, sometimes it's top-down, but it's kind of easy, right? So these two pieces are there. Which is the one that is missing?

The one that was missing, the thing in the Roman Empire that surprised me, is that they had the law. In the Roman Empire, law was very, very important; everybody had to abide by the law. And then I thought: that was the piece that was missing. We needed a law, we needed a constitution. So that's what we created. We created a law. Our law was that all of us, all the contributors to recommenders, had to behave in a certain way. And this is something that we all chose; we all agreed on this way of working, right? And it completely changed the way we worked, because at the beginning we were having a lot of discussions and meetings where we were not making progress. The moment we had this law, and we agreed on how to behave, how to review pull requests between us, and how to make decisions, suddenly we were able to go super, super fast. So that was super, super important. And that's something that I have since employed in many of my different teams.
And I think that was super, super important for the success of recommenders.
Okay, so this was, let's say, not really the turning point, but the point where things really accelerated, since you were providing alignment within your team: some internal alignment at Microsoft for the people working on that repo, which you derived from the demands of the projects where you put recommender systems into production, seeing that there was a need to make things more generic and thereby, of course, decrease the effort required each and every time another demand came up. So that kind of alignment allowed you to speed up, but also, I would say, to manage the complexity of the development process to a much better or more efficient degree than before. Did you share or make public that law, those guidelines, with your open source community? Or how did you actually enable this?
Yeah, the guideline is there, it's available. And I think a lot of teams, the way they operate is with what some people call the benevolent dictator, right? The benevolent dictator typically is the most senior engineer, the person who makes the final decision, right?
That's how Python was run, that's how Linux was run. Now, the problem is that that's not a team, right? That's a leader, well, it is a leader, but it's kind of a dictator who actually tells all the people what to do. In the approach that we took, we didn't have one; all of us were equal, and we all respected and trusted each other. So we really enjoyed the project, because all the contributors, I think, believed that it was their project.

It's probably not like the contributors to Linux or the contributors to Python; I don't know to what degree they believe it is their project, right? What I wanted to achieve with this way of working is that it's not my project, and it's not the project of the four people who started recommenders. Everybody is welcome to participate, everybody follows the rules. And no matter the experience, no matter the seniority, everybody follows the same standard.
Okay, okay, I see. These rules, how strict are they? At what level do they guide people to behave, write code, or design algorithms in a certain way in order to contribute to that repository?

So they are not that strict, I would say. For example, one very important thing that a lot of people are not doing is deciding how to make technical decisions, right? One decision that was very tricky: in recommenders, we had code in Spark, in PySpark, and we had code in Python, right? Now, for those who don't know the difference, typical Python code looks quite different from typical PySpark code; PySpark code looks a lot like Java. So if you are a Python purist, you'll say: no, no, no, we're going to do everything as Python, right? Because this is a Python library, and we shouldn't do classes for everything, you know? And the way we decided is, okay, instead of saying we're going to do a vote, which is what most people do, like, okay, there are 10 or 11, 12 people, let's raise our hands, what we said is: okay, who are we serving here?
And we call this evidence-based design, okay? The thing is, who are we serving? Well, we are serving the people who go to recommenders, the users of recommenders. They want to build recommendation systems, so we want to make the experience fantastic, right? So we stopped thinking about what I, Miguel, personally like, and asked: okay, what would a person who starts with recommenders like? And then we thought, okay, if you're a PySpark developer, you are used to this specific Java-slash-Python way of doing things, everything is a class, et cetera. So if you come to recommenders and everything is pure Python, it's going to be really weird for you, right? So then we said: okay, all the tools, all the functions and classes related to PySpark follow the PySpark naming and the PySpark structure, whereas the rest follow Python, right?

So the thing is, when you have a team, and obviously depending on the members of the team, you completely change the discussion from what I prefer to what your customer prefers, what the person you're serving prefers. And then it's all about bringing evidence on what the customer wants, and the discussion goes completely smoothly, right? People will come and say: okay, I've seen these customers doing this, and those customers doing that. And then basically the only thing we do is follow what they want; we give them what they want. That's a good example of something that we did, and I think it's completely different from what other people do.

Okay, okay. So I've just gone through a bit of it. And you are actually also providing a couple of quick-start Jupyter notebooks, where you are quickly guided through an algorithm, its fundamentals, and applying it to certain datasets, for example the Amazon reviews or, as everybody in recommender systems knows, the MovieLens dataset, and then evaluating its performance in terms of several retrieval metrics. And there I have already seen that some of them require you to have access to a Spark session, or to fire up a Spark session and work in Spark, of course using PySpark. But some other notebooks don't require this, have other dependencies, and run in different environments or under different assumptions. So it's not that you provide every environment for each algorithm, but rather those which you assessed to be the most common context for using a certain algorithm. Would that be a way to put it?

Yeah, yeah, yeah. Actually, the original recommenders wasn't a big library, right? It was just all the notebooks. And the reason why we wanted to have notebooks is, again, because we were all the time thinking about our user. We were thinking: okay, what is the best way for these people to start building recommendations? In our view, the best way is when you have something that works, and then adapt it to your problem, right? So that's why we have so many examples. Almost no matter what problem you have in recommendation systems, you're going to find five to ten different notebooks there, and hopefully what you can do is run the notebook, see that it works, then just remove the dataset that you see there, put in your dataset, and start iterating.
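Note: to make the quick-start flow concrete, here is a minimal sketch in the spirit of those notebooks, assuming the recommenders package and its SAR model; exact module paths and signatures may differ between versions, so treat it as an illustration rather than the canonical notebook code.

```python
# A minimal quick-start sketch assuming the `recommenders` package
# (pip install recommenders); paths and signatures may vary by version.
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_stratified_split
from recommenders.models.sar import SAR
from recommenders.evaluation.python_evaluation import ndcg_at_k, precision_at_k

# Load a toy dataset (MovieLens 100k); swapping in your own user-item
# interaction data is the "put in your dataset and iterate" step.
df = movielens.load_pandas_df(
    size="100k", header=["userID", "itemID", "rating", "timestamp"]
)
train, test = python_stratified_split(df, ratio=0.75)

# SAR is one of the CPU-only algorithms, so no Spark session is needed here.
model = SAR(
    col_user="userID",
    col_item="itemID",
    col_rating="rating",
    col_timestamp="timestamp",
    similarity_type="jaccard",
    time_decay_coefficient=30,
    timedecay_formula=True,
)
model.fit(train)
top_k = model.recommend_k_items(test, top_k=10, remove_seen=True)

# Retrieval metrics, as in the evaluation sections of the notebooks.
print("NDCG@10:", ndcg_at_k(test, top_k, col_prediction="prediction", k=10))
print("Precision@10:", precision_at_k(test, top_k, col_prediction="prediction", k=10))
```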
And that is, I think, another of the reasons why it was so successful, because people started using it exactly that way, right? They download recommenders, and that's exactly what they do. And actually, some of our customers really liked that approach. They said: hey, that's exactly what we did; you had this toy dataset, we put in our dataset, we pressed run, and the thing ran. And they said: wow, now the next thing is, how do we productionize that? So they were able to iterate very, very quickly and create a solution that works.

Okay, okay, I see. To provide our listeners with a bit more of a picture of what it really looks like: can you maybe quickly guide us through the structure of the repository and what's contained in it?

Yeah, yeah. So I would say that we have three big pieces of content. The first one is the examples, which are all the notebooks. There we have, I think, around 30 different algorithms with different datasets. Sometimes it's the typical user-item interaction dataset; sometimes it's what you could find in any e-commerce, where you have a user, a product, and details for the product, or a user and details for the user, right? Sometimes you have examples for text recommendation, like news recommendation, where you can use text. So if people want to start with recommenders, that's the first place I would go. The second one is the library. All the functions are not in the notebooks; they are in a library. So basically you can do pip install recommenders and you get the library. If you are a developer, that's where you put your algorithms and your utilities. And then the last one is the tests.
And again, the tests are another reason why recommenders has been so successful. We have around 900 tests that run almost every day, and we have a very, very sophisticated pipeline for MLOps and for testing. The reason we wanted to have this really strong and really complex way of testing is that we wanted to make sure that every time people download the library, it works. That's what we wanted to make sure.
And yeah, having a very strong MLOps pipeline solves one of the most difficult problems in ML development and software development, which is maintenance. Maintenance is very, very difficult because you need constant manpower, right? So a very, very strong test infrastructure, in my mind, is like a wall, right? It's protecting you from bugs and making sure that everything works.

Yeah, it makes your job far easier and also provides reliability for the users who take advantage of recommenders.
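Note: the notebook-as-test idea Miguel describes can be sketched with pytest and papermill, which executes a notebook end to end and fails on any error; the notebook path and parameters below are illustrative assumptions, not the repo's exact test suite.

```python
# Illustrative smoke test in the spirit of the recommenders test pipeline,
# assuming pytest and papermill; the path and parameters are examples.
import papermill as pm
import pytest


@pytest.mark.parametrize(
    "notebook", ["examples/00_quick_start/sar_movielens.ipynb"]
)
def test_notebook_runs(notebook, tmp_path):
    # Execute the example notebook end to end; any raised error fails the test.
    pm.execute_notebook(
        notebook,
        str(tmp_path / "output.ipynb"),
        parameters={"MOVIELENS_DATA_SIZE": "100k", "TOP_K": 10},
    )
```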
Yeah, actually, in terms of the users: do you have some insights into who, besides Microsoft internally, is actually using recommenders, whether for, let's say, ideation or actually in production?

It's very funny, because a lot of people from top companies, even some competitors, like cloud providers, have forked our repo.
And, you know, it's funny, because every now and then we go like: oh, these guys forked it, so maybe they are using it. But the thing is, this is an open source project and we don't have any telemetry, so we don't really know who is using it; the only thing we can know is the information that is provided by GitHub. I know that internally there are many teams at Microsoft that are using it, and I know customers that I directly work with are using it. But every now and then it's funny: somebody writes to me, like, hey, I'm working with recommenders, can you help me with this thing? And I'm like, wait, you? Yeah, we've been working with this; we've been using this live for two years.
Wow. Okay, that definitely sounds great, even if you're sometimes very surprised about where it has made its way and where it's being used. So maybe the other side, which is sometimes also the same side as the users: talking a bit more about the contributors. As far as I get from what you're saying, there are a couple of people at Microsoft working on it, the five to eight main contributors that you mentioned. How is the reception in the community overall? Who is actually contributing to it? What is the structure? Is it just a core of people contributing to it, or is it more commonly shared between non-Microsoft people and Microsoft people?
What does it look like? Yeah, so right now we have actually divided people into two groups. One is what we call the maintainers, and the other group is what we call the contributors. The maintainers are seven right now, and these are the people who can accept PRs; any contributor can review PRs, but these seven people are the ones actually looking after all the PRs, right? And then we have contributors from outside Microsoft; for example, we have people from different universities, and people who were at Microsoft and are now at other companies, even some competitors. I would say a lot of people from universities, and then several people from big tech and, well, not that big, just technology companies.

Okay. In terms of organizing the contributor community: how do you decide whether you want to go with a new algorithm? I guess the last time I counted, I ended up with 33 different algorithms, and you basically reference the source and more in the notes on the main README. How do you actually decide which new algorithms you want to onboard into the repository?

Yeah, so I guess almost every time it was either somebody on our team who said, okay, this looks super interesting, let's add it, or somebody external who proposed something. The way it works is that anybody can contribute any algorithm, as long as it's not the same content, right? If you want to have another implementation of the same thing, then somehow you need to, for example, improve what was there, or maybe remove what was there and add the new content. The only reason is that we want to keep it simple for people. But yeah, anyone can add the algorithm that they want.
We already, rather implicitly, touched on scalability to a certain degree, because, as we discussed, some of these algorithms are tied to using Spark. But I have also seen that some of them are, let's say, GPU-ready. So how do you perceive the scalability across the board of algorithms? Would you say that all of them are ready to use and scale to, let's say, millions of customers in your user base? I mean, it's hard to say, because it also depends on the amount of data these customers generate and the window length you look back over, but what is your perception of the scalability of those algorithms and the library in general?

Yeah, that's a very good question. It's very interesting, because intuitively people will think that a recommendation system is a problem where you have a lot of data, right? So it's either Spark or GPU-distributed training, or maybe multiple GPUs, multi-node, or something like that. In reality, what we found is that, at least in our experience, there are not that many customers with huge amounts of data; very, very few. We had some customers with a ridiculous amount of data, and then it becomes a data engineering or machine learning engineering challenge: how to deploy this thing, how to train with an algorithm that is fast enough to accommodate this amount of data. But most of the time, a small Spark cluster, or even a CPU machine, or a GPU machine with a reasonably big GPU, maybe just one GPU or one of those machines with four GPUs, is enough.
So here we are talking about, let's say, data-intensive machine learning algorithms. I mean, with Microsoft, along with OpenAI, you are at the forefront of a bunch of the most recent innovative breakthroughs, especially tied to generative AI, with ChatGPT and the growth of large language models. What is your perception of the impact that large language models and all these recent developments are having on recommender systems, and on models specifically? What do you think is going to change, or how do things need to change to adapt?

Yeah, that's very interesting, because among the people around me, people close to me, there are different views, and maybe I can comment on both. There is one view, and I've seen some papers on this, where you have something like GPT-4 and you transform the recommendation problem into text. Something as simple as: Miguel, who has this profile, bought this item, which is this product with this price. You take the dataset and just put it together as text, right? And that's the information that you give to GPT-4, to the LLM. And then you say: recommend similar products. And actually, you need to do some tricks in terms of memory; right now GPT-4 has, I think, over 30,000 tokens of context, right? So it's a huge number of tokens, so you can actually add a lot of information there. And that's the recommender, right? I've seen some papers where that approach works very well. There is another very similar approach: there is an NLP network by Google called T5, and some researchers did a version applied to recommendation systems, I think they call it P5. The idea is kind of the same: you take the data of the recommender, somehow put it in text form, and just use the LLM to predict, to recommend. And I think we're going to see more and more of this approach, right? Just using pure LLMs as the recommender.
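Note: a minimal sketch of this "flatten the data into text and ask the LLM" idea; llm_complete is a hypothetical stand-in for whatever completion API you use (for example, a GPT-4 endpoint), and the profile and catalog fields are made up for illustration.

```python
# Sketch of the pure-LLM recommender idea: serialize user data as text,
# then ask the model for similar items. `llm_complete` is hypothetical.

def llm_complete(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion endpoint (e.g. GPT-4)."""
    raise NotImplementedError("plug in your provider's completion call here")


def build_prompt(profile: dict, history: list[dict], catalog: list[dict]) -> str:
    # Flatten profile, purchase history, and candidate items into plain text.
    lines = [f"User profile: {profile}", "Purchase history:"]
    for item in history:
        lines.append(f"- {item['title']} ({item['category']}, ${item['price']})")
    lines.append("Candidate catalog:")
    for item in catalog:
        lines.append(f"- {item['title']} ({item['category']}, ${item['price']})")
    lines.append("Recommend the 3 catalog items this user is most likely to buy, ranked.")
    return "\n".join(lines)


prompt = build_prompt(
    {"name": "Miguel", "interests": ["robotics", "AI"]},
    [{"title": "Humanoid Robotics", "category": "books", "price": 45}],
    [
        {"title": "Deep Learning", "category": "books", "price": 60},
        {"title": "Garden Hose", "category": "home", "price": 15},
    ],
)
# recommendations = llm_complete(prompt)  # one request per user; context size is the limit
```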
And then we have the other point of view, which says that is not enough. Actually, arguably the best researcher on the recommenders team, I asked him this question, and he was not very convinced by this pure-LLM approach. He believes it is going to be more like a combination of the backbone of the LLMs, the attention mechanism, plus maybe GNNs, graph neural networks; graph neural networks for reco are very, very interesting. And maybe extra information through knowledge graphs. Actually, we have some networks that are knowledge-graph aware, so basically you can get extra information about entity pairs and add that information to your network. He believes that's the route. But again, in my opinion, both routes are very interesting. Personally, just to give you my view, I would like to explore the GPT-4, pure-LLM path.

I mean, they share some common ground, which is that we should definitely see it as a component towards building better recommender systems, whatever better means. On the downside, if you, for example, turned each user you want to provide recommendations for into that kind of text you elaborated on for the first approach, that might also drive you into scalability issues. And there might be some argument saying: okay, but then how do you actually optimize for things on a global scale, for example if you want to balance the relevance of recommendations with, let's say, the diversity of recommendations? So this is something where you might get into trouble. But I guess that path is not at its end yet, and we are still on our way to finding out how we can make use of these systems to build better recommender systems. What I actually liked is that you brought up the GNN perspective, because it was, if I remember correctly, Max Welling who gave a keynote at RecSys 2021 that was all about GNNs and their relationship to recommender systems. And actually, Max Welling is now at Microsoft.
Max is really, really strong. Yeah, he's one of the pioneers in graph convolutional networks.
Another researcher that I like a lot is Petar Veličković from DeepMind. He's doing super interesting work on neural algorithmic reasoning, so how to mix neural networks with classical algorithms. But yeah, in general, there are many, many ideas, many paths that are very interesting. Another one, for example, is reinforcement learning. A recommender system is naturally a closed-loop system, so you could apply a reinforcement learning algorithm, but the problem in recommendation systems is that it's actually very, very difficult to close this loop. I come from robotics, and in robotics everything is a closed loop: you have your model, the model makes a prediction, then you have a sensor that feeds back. That's the closed loop, and you could potentially create the same on a website. But the problem is that you have so many dependencies: you need to have your machine learning model, you need to do the prediction, and then, in real time, you need to track all the information of the user and mix it with the model.
So it's very, very difficult. I've seen just a few companies doing reinforcement learning really well right now. It's very difficult.

Yeah, I guess there are some great publications coming from Google on reinforcement learning for recommender systems.
So it's also a topic that people have been thinking about for quite some time; I remember some YouTube papers dealing with reinforcement learning for RecSys, and also several deep reinforcement learning techniques. However, what you said, I would challenge a bit. I would agree that closed loops are quite the standard in recommender systems, but they are also posing a problem, because you might introduce several different biases that result from the fact that users can only interact with what the recommender system decided to show them. And then it's always a bit hard to see the real ground truth of user intention and what users really like, so you need to embed more organic signals from users that were not under the influence of the recommender system.
Yeah, absolutely. And I think that problem is actually very, very big in media companies, right? Like Netflix, for example. If you compare a media company like Netflix to something like an e-commerce: when you enter Netflix, your whole page is many recommenders, many, many different recommenders. I would say except the hero row, the top row, which is typically done manually, everything is a recommendation system. So that's where you have these biases, the selection bias, right?
Because you don't have that many people searching. In e-commerce, I would say it's easier, because in e-commerce people search more, right? You go to a big e-commerce site and it's like: I want some specific food or some specific garment. And then you get all the search data, so in that case it's kind of easy to get that unbiased information. But yeah, that's one of the biggest problems.

True point. True point. Yeah, actually, I really do like these two different perspectives on the effect of LLMs and everything associated with them on recommender systems. Talking less about the systems, the methods, and the approaches, and more about the people: what is your perception there? Because, I mean, you are very active on LinkedIn, providing advice, technical insights, and your ideas, which is very beneficial for the community, I think. What would you offer to people who are very active in the field and maybe getting a bit anxious about what they need to learn and where they need to skill up?
What advice would you give RecSys researchers or RecSys practitioners, from what you see and how you experience these changes?

So, particularly for people in research: I would say the content I share is typically more for what I call applied AI rather than research, because in research your objective is to create performant machine learning algorithms. That's your objective. Applied AI is a little bit broader: your objective is not just to make a good recommender system, it's more like, okay, how can you help the business with recommendation?
So for these people, one idea that I share a lot is this T-shaped professional profile.
So a T-shaped professional is somebody who has very deep expertise in one or two areas, let's say recommendation systems, and then has general knowledge of all the areas related to the business. So, for example, maybe you are the best on your team at recommendation systems, but hopefully you also know a little bit about NLP, a little bit about computer vision.
Also how the business side of machine learning works. Maybe you also know MLOps, right? Maybe you are not the super expert, but the thing is, from your perspective as an individual, I think it's really, really powerful, right? Because, first of all, you are the go-to person for one area, in this case, let's say, recommendation. So if someone on the team has a need around recommendation systems, this person is going to ask you, and that gives you a lot of leverage. But you also understand that you are not isolated in just your area of expertise; you know all the areas, right? For example, for recommendation systems it's also interesting to understand how a website works and what the key metrics are in a website like an e-commerce, right?
Like, how important is usability to the business? What are the key metrics, like click-through rate or conversion rate or churn, and how can your recommender affect all these metrics, right? So that's the T-shaped professional. And then, from the company's point of view, if you have a team of T-shaped professionals, you have the best team in the world, because it's like you have a special forces team, right? Imagine you have the mega expert in reco, the mega expert in computer vision, the mega expert in NLP, the mega expert in MLOps. Suddenly you have an amazing team of people who can solve any problem, right? So this T-shaped professional, in my view, is what most companies want, because it's not just the super good researcher who only knows about their own area; it's more like: how can you provide value with what you're a super expert in?

Yeah, definitely. I mean, it also facilitates communication a lot, because even though the RecSys expert is actually not a computer vision expert or an MLOps expert, if they know the basics, they are much more versatile when communicating with the MLOps or computer vision experts and sharing thoughts there, because there is some kind of shared knowledge or, let's say, skill base, which acts as a bridge between these two people to then solve problems much more quickly. I have to think about it in a technological manner as well.
So, I mean, these data scientists who are working in a business context and, for example, only concentrating on Jupyter notebooks or something like that, this is not really providing the best value, because you also need to think about, okay, how, for example, to create a package, how to work with Docker, how to adhere to certain MLOps principles, CI/CD, and so on, to really be able to work along with data engineers, ML engineers, and so forth, and not the old image of throwing your Jupyter notebooks over the fence and leaving the stuff to other people. That's actually still very common. So there is still a lot of room for further development there for people, but also for the different areas that we are talking about.
Yeah, in terms of the broader picture of recommender systems, or personalization more generally, which challenges do you see for the future that we ought to solve or address?
Well, I still see a challenge of adoption, a challenge and an opportunity there, right? Because it's very interesting: a recommendation system is something that has been there for many years, it's not really new. And something very interesting is that a lot of companies claim that they have a recommender system, and then you start scratching the surface and they don't.
Actually, to the point that I had some meetings with customers, and I remember one specifically, a big retailer in Europe, where we had to help them with a reco solution, right? And then my PM came and said: Miguel, I don't think these guys need our solution, because they already have a recommender. And I said: okay, well, let's talk with them and see if we can help them somehow, right? And then I actually talked with the head of data science at that company, and they recognized that they didn't have anything.
But when we talked with the PM, the PM was not able to go super deep into the details, into the algorithms. And in the end it's like: you know, guys, you actually need something that we have to offer. So it's very interesting that not a lot of people are taking advantage of these solutions. I think that's an opportunity and, again, a challenge, because, I mean, why are people not adopting them? That's one thing. The other thing that I think is very, very important is the ethical part of recommendation systems. A recommendation system can be used to provide a better experience to users, but if used in a bad way, it can also be used to influence the behavior of people. We've seen, for example, the scandal of Cambridge Analytica, where kind of a recommender system was used to target specific groups of people to vote for a specific candidate. And I think that debate is kind of happening right now with ChatGPT and GPT-4: who's to blame, who should be responsible? Actually, Sam Altman, the CEO of OpenAI, was doing a tour in Europe; I think he was recently in Germany, last week or so, and he went to Spain. And it's interesting, because he went to the US Congress to ask for regulation.
But then, you know, the EU is doing some regulation, and he mentioned that that's too much regulation. Yeah. If you haven't seen his testimony in Congress: the lawyers and the politicians were kind of saying, well, it's the first time in history that a company comes to us asking us to regulate, right? A lot of people have built their careers going after companies to regulate them, and here is a company saying: regulate us; you are the first one that has come to us, wow. And then, you know, a week later, Europe says, okay, here is regulation. And the guy says: nope.
So I don't know, it's great. At first sight it seems a bit contradictory, because on one side he's saying this, and on the other side he's saying that. One might also argue that, for example, there is too little regulation in the States and too much regulation in the EU. So what is the common ground? That's easy to say, but maybe hard to define and then also to bring into reality. So yeah, I followed that and was also getting a bit confused. It was funny, though. You know, we need to be careful with recommendation systems.
And I guess with any technology, right? It can be used for good or bad. The way I see it, it's like a search engine with superpowers, right? It's a technical solution that saves me time when I go to an e-commerce site. Instead of spending half an hour, and I particularly don't like shopping a lot, for me it's like five minutes and I get exactly what I want. That's fantastic.

Yeah. In recommender systems, how critical the ethical side can be is very domain- or context-dependent. One might say that for, let's say, retail recommendation there is not such a great problem. But on the other side, if you, for example, are operating a marketplace and don't provide enough opportunity for your smaller businesses to be shown, then you again have ethical concerns. Whereas in social media or news recommendations, the ethical concerns are much more apparent, since you might, let's say, create division or spread hate, and then, of course, they need to be addressed much more urgently. So maybe just a short reference there: for people who want to know more about fairness in recommender systems, check out the last episode with Michael Ekstrand, where we talked a lot about fairness, which is one of the many ethical concerns that come up in recommender systems. And I like what you say; as with every technology, it should be used with care, and so it is with recommender systems.

Yeah, Miguel, thanks for sharing all these great thoughts and impressions from your work, and also on the recommenders library and all of that. There were also some great hints for people to think about how they want to position themselves, for example your T-shaped data scientist, or T-shaped data person, I guess, since that not only holds true for data scientists but generally for people working in software and IT.
And also about the challenges regarding the broader adoption of recommender systems and the ethical concerns. You have already done, to a certain degree, what I always ask my guests towards the end of the episode, which is whether there is a certain person they would like me to reach out to and invite as a guest. Besides the people we already mentioned, is there a specific person you have in mind that you would like me to feature or invite to the show and have on RECSPERTS?

I mean, yeah, Nicolas. Nicolas Hug is a super nice guy, definitely really, really good. And Nicolas is very hands-on, right? A great, great, great developer. And, you know, people at Microsoft Research, not sure how easy that is, but there are a couple; one of the maintainers of recommenders is Jianxun Lian.
He's really, really good; one of the best researchers I know. And his manager is Xing Xie.
Xing is amazing. Yeah, any of these; not sure how easy it will be to get them, but they are fantastic. Okay, great. So I will do my best and reach out to give them the opportunity to also get on board of this podcast. Yeah. So, Miguel, thank you for your time, and thank you for sharing your thoughts with the recommender systems community; it was a great talk. Yeah, thank you, Marcel. Super nice interview. Thanks. So have a nice day. Bye.
Thank you so much for listening to this episode of RECSPERTS, recommender systems experts, the podcast that brings you the experts in recommender systems. If you enjoy this podcast, please subscribe to it on your favorite podcast player and share it with anybody you think might benefit from it. If you have questions, a recommendation for an interesting expert you want to have on my show, or any other suggestions, drop me a message on Twitter or send me an email to Marcel at RECSPERTS.com. Thank you again for listening and sharing, and make sure not to miss the next episode, because people who listen to this also listen to the next episode. Goodbye.
Bye.