The NeuralPod

The NeuralPod welcomes Shreesha Jagadeesh, Associate Director of Applied Machine Learning at Best Buy, for an insightful conversation. 

Shreesha shares his extensive career journey, from his early days in biomedical diagnostics to his current role at Best Buy. The discussion delves into machine learning techniques in retail, the evolution of recommendation systems, and the technical challenges of personalisation at scale. They also explore his contributions to HR tech at Amazon and his innovative paper on homepage personalisation using XGBoost, soon to be published at RecSys.

Additionally, Shreesha offers valuable advice on leadership, career growth, and navigating the competitive field of machine learning. Get ready for an episode packed with expert insights and practical tips for aspiring AI professionals.

00:00 Introduction and Guest Background
01:48 Career Journey: From Academia to Industry
03:34 Transition to Software and Data Science
04:19 Consulting and Managerial Roles
07:09 Joining Amazon and HR Tech
09:35 Advising a Startup in India
11:19 Joining Best Buy and Recommender Systems
12:46 Challenges in Retail Personalisation
28:28 Implementing XGBoost for Homepage Personalisation
40:01 Top-Down and Bottom-Up Approaches in AI
40:57 Challenges in Implementing Recommender Systems
42:20 Understanding Business Objectives in AB Testing
44:36 Experimentation and Value Demonstration
46:20 Representation Learning in Machine Learning
51:03 Leadership Principles in AI
58:08 Hiring and Team Building in AI
01:02:43 Future of Recommender Systems and Generative AI
01:03:00 Upcoming Book on Recommender Systems
01:10:21 AI Tools for Productivity
01:16:20 Conclusion and Final Thoughts

References: 

Multi-stage recommender systems blog
https://eugeneyan.com/writing/system-design-for-discovery/

Hidden technical debt in machine learning 
https://papers.nips.cc/paper_files/paper/2015/hash/86df7dcfd896fcaf2674f757a2463eba-Abstract.html

HSTU paper
https://github.com/meta-recsys/generative-recommenders/blob/main/README.md

What is The NeuralPod?

The NeuralPod AI Podcast

The NeuralPod is all about deep diving into technical machine learning topics and showing their real-world impact, bridging the gap between AI-first companies and adoption.

We chat with subject matter experts across reinforcement learning, recommender systems, deep learning and generative AI, as well as with business leaders, VCs, and operations and strategy experts.

Who are NeuralRec.ai? NeuralRec is a recruitment staffing agency. We build niche ML teams and represent some of the globe's best ML talent.

Chris: Welcome, Shreesha, to the podcast.

Um, it's great to have you on today.

How, how are you doing?

Shreesha Jagadeesh: Pretty good.

Excited for the, uh, the podcast.

It's, uh, it's a cloudy day here
in Boston, but fairly warm for

this time of the year, so, yeah,
so a good weekend coming up

Chris: Nice.

Well, uh, yeah, it's, it's, uh,
nice sort of here in Manchester.

But, um, ju just quick intro
to you, you, uh, you're gonna

go through your background

Shreesha Jagadeesh: here.

Chris: um, you are the Associate
Director of Applied Machine Learning at Best Buy.

Um, heavily focused on recommender
systems, and it's somewhat of a,

um, a good subject matter for, for
the podcast is lots of people who

listen from that kind of background.

So know there'll be, uh, lots of
people interested in, in what you,

you've got to say today, but I guess,
uh, just let people know at home.

We're gonna kind of introduce people to your
career, um, what you've been working

on, your background, kind of lessons
learned in the first half of the podcast.

And then, it'll be a, a NeuralPod first.

You're gonna, uh, present
your RecSys paper, uh, around what

you did around XGBoost and
some of the, uh, really great,

interesting stuff and, um, homepage
personalization that you've been doing.

Um, a little bit about your leadership
experience and, and how you've got

to, where you've got to, and as always,
uh, with, with the, the show, just

some future predictions and, and
also discuss your book, which, uh,

seems super, super interesting.

So, uh, Shreesha.

Would you just like to talk people through
your, your career history, how you got

to today, and introduce yourself?

Shreesha Jagadeesh: yeah.

Thanks for the, the brief
introduction about myself and

the overview of the topic.

So, hello everyone.

I'm Shreesha Jagadeesh.

I'm one of the associate directors
of Applied Machine Learning

at Best Buy United States.

Um, yeah, so I, I briefly, I kind of
lead the, the teams for personalization.

Um, I've previously worked on
like ad tech, recommender systems,

representation learning use cases, kind
of moved around within, uh, different

portfolio projects within Best Buy.

So kind of like a long
arc on how I got here.

Um, I'm trying to think of where
would be the best place to start.

Right.

So maybe my master's degree at the,
uh, University of Toronto in Canada

would be a good place to start.

So, um, yeah, so I, I went to
Toronto, uh, in the early 2010s

to do a master's, um, that was in
electrical and computer engineering.

Back then, I was focused on
making a career out of more

on the hardware aspects.

And, uh, so that masters turned
into, um, an internship at a, at a

startup that my thesis supervisor had
co-founded, uh, it was called Chip Care.

They made, uh, biomedical, uh,
diagnostics for HIV patients in Africa.

Um, and then the, the core sort of
intellectual property was to use like

a single drop of blood to detect

biomarkers in the blood, such as white
blood cells, that would proxy for the

status of the HIV patients.

So my role was kind of like a jack of all
trades, kind of all in one, both hardware

and a bit of software analytics as well.

So I was building like
biomedical prototypes.

I was a hardware engineer there building
some, uh, photonics devices like cameras,

imaging equipment, uh, to detect what's
in the blood and then also, uh, count

those cells and then analyze it, right?

So we were a pretty small company
and, um, yeah, pretty enjoyed my time

there, but over time I realized that
the hardware profession is, has a very

long feedback cycle in terms of um,
how quickly you can iterate on the

ideas and get feedback from customers.

So, uh, looking at the overall sort of
division between hardware and software,

I realized that in the long term
it's better to be in areas where

the, the feedback loop is
much quicker so that the idea

iteration cycles are much shorter.

So I made a hard pivot towards
more on the, uh, the software side.

And then within the software, I kind
of took a liking towards more on the

statistical data science aspects of
things while I was at, at the startup.

Um, so yeah, so the subsequent job was
as a consultant for, uh, Cisco,

where I was working on supply chain, uh, data
science and analytics, uh, use cases.

So, uh, a very interesting use case was to
like figure out forecasting for the right

inventory levels to be stocking so that
there's a trade-off between, you know,

excess inventory on one hand, and on the
other hand you have shortages and such.

So, um, along the way I realized
that, uh, while the data science

part is great, I think to like really
make, uh, impact, it's useful to

have visibility and some level of,

uh, mandate to also touch
upstream and downstream systems.

So that's when, um, the role at
Ernst and Young happened as well, where,

um, I was recruited to be a manager
within their tax practice in, in Canada.

And I was one of the first
technical hires, uh, to do

some digital transformation
using data science techniques.

That role was very interesting because,
uh, yes, I did get the mandate to maybe go

upstream and downstream, re-engineer the
process and do everything, but that also

meant that I was wearing a lot of hats.

Um, so not only was there a
substantial amount of data science

and ML work to be done, but
there was also software engineering.

So my team consisted of like
software developers, front and

backend, and also data scientists
and a few product managers as well.

So we had like a, a very interesting
portfolio where we were building

full stack software applications with
the data science being the key IP.

Uh, an interesting use case was a software
as a service application where we took

in, uh, client documents, financial
statements, and then analyzed it using

like a simple binary classification to
determine whether or not a particular

line item is eligible for a research,
research and development tax credit.

And once those tax credits are,
uh, consolidated, we would then,

the software application would then
like create like a draft, uh, set of

tax returns and then work with the,
the accountants who then take over

and, and then do some audits there.
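
For a sense of what that kind of line-item classifier could look like, here is a minimal sketch, assuming a labelled set of line-item descriptions; the file, columns, and features are illustrative, not the actual system described:

```python
# Hypothetical sketch: flag line items that may qualify for an R&D tax
# credit. The data file and column names are assumptions for illustration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("line_items.csv")  # columns: description, eligible (0/1)

X_train, X_test, y_train, y_test = train_test_split(
    df["description"], df["eligible"], test_size=0.2, random_state=42
)

# TF-IDF over the free-text description, then a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Rank candidate R&D line items for the accountants to audit.
scored = pd.DataFrame(
    {"description": X_test, "p_eligible": model.predict_proba(X_test)[:, 1]}
)
print(scored.sort_values("p_eligible", ascending=False).head())
```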

Um, so that was pretty interesting.

And then that's when, kind of like towards
the very end of, uh, 2019, early 2020 is

when like Covid hit and there was a little
bit of dislocation in the, the job market.

Um, I was still employed in Canada, but
around that time I realized that, you

know, the world being topsy-turvy might be
a good time to like actually get married

and settle down with my wife.

So she was doing her graduate
program at MIT back then, uh,

a combination of PhD and MBA.

So, but I was in Toronto, so I was
trying to get to the United States.

So, um, so
that's when Amazon happened.

So, uh, the Amazon role was basically
working in the, uh, the HR tech space.

And, um, yeah, the, the mandate was
quite interesting because, uh, Amazon

is obviously a very large company and
there's so many other business units

and both internal and external sort
of customer facing, um, applications.

And this was a purely sort of internal,
uh, internal facing role where, uh,

unlike, let's say ey where it's a
mix of internal and external, this

was like, okay, you're mandated
to build out, um, data science

capabilities that would impact
other data science developers
across the HR space.

Um, and it's surprising the amount of
resources that a large, I guess, big tech

company like Amazon can spare towards
improving internal employee experiences.

Um, so that ended up evolving into
something, not necessarily establishing

a feature store, one already existed, but more like
an evolution of the internal feature store

platform, as well as the MLOps platform
to sort of streamline how data scientists
to sort of streamline how data scientists

and machine learning researchers
develop models and deploy them.

Um, so that was one part of the mandate.

The other part was actually helping
build bespoke machine learning models, I

guess, going all the way from, uh, scoping
to design, development, to deployment.

Um, but I was still officially a
product manager, technical, not like

a data science manager or a,
like a machine learning IC role.

So this, this is very cross-functional.

It was also a very interesting time
because I think it was the very early

stages of, you know, what is considered a
data science PM. Like, are they a data

scientist or are they a product manager?

So, um, I think one of the summers 2020.

Yeah.

So one of the summers I ended up having
an intern, um, who, who was a machine

learning researcher to build like
recommender systems and help match, uh,

potential career growth ready employees
with, um, let's say other managers in

other departments to be able to review.

So we built up this POC and that
kind of got me kind of fascinated

with recommender systems and, and
personalization type of use cases.

And in parallel, I was sort of advising.

So just outside of my sort of day job,
I was advising a friend over in, in

India, he had gone back to India, uh,
to basically build out a startup that

supplies agricultural, um,
accessories and inputs to farmers

in India, in Central India.

So think about like fertilizers
and pesticides and all that.

So he had a, a physical, the
company's name is Farm Cart.

They have a physical store
as well as like a website.

And then he tapped my shoulder like,
Hey, do you wanna help guide what are

some of the best practices that could
help maybe, um, launch some of the

analytics, uh, department and data science
department within, within Farm Cart?

So I was like, sure, I'll give it a shot.

And then I kind of ended up helping
them to hire a few key analytics and

data science roles within the company.

And then one of the data scientists,
uh, and I, we ended up collaborating

to build recommender systems
for their omnichannel use cases.

Right.

So, but it was fairly basic
because they're still a startup.

You don't really need
like sophisticated models.

Uh, so we kind of started off with like,
you know, collaborative filtering, matrix

factorization type of use cases to

figure out, okay, customers who
bought this also bought that type

of user experiences, and that was
a big value add to, to the startup.
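
A minimal sketch of that "also bought" pattern via matrix factorization follows; the purchase matrix is a toy, and the factorization choice (truncated SVD) is illustrative rather than what the startup actually ran:

```python
# Toy "customers who bought this also bought that" via low-rank
# factorization of a user-item purchase matrix; data is illustrative.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# rows = users, cols = items, value 1 = purchased
rows = np.array([0, 0, 1, 1, 2, 2, 3])
cols = np.array([0, 1, 1, 2, 0, 2, 1])
user_item = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 3))

# Factorize; items live in the right singular vectors scaled by the
# singular values.
u, s, vt = svds(user_item.asfptype(), k=2)
item_factors = vt.T * s  # shape (n_items, k)

# Cosine similarity between item factors gives the "also bought" list.
normed = item_factors / np.linalg.norm(item_factors, axis=1, keepdims=True)
sim = normed @ normed.T

item = 0
also_bought = [i for i in np.argsort(-sim[item]) if i != item]
print("bought item 0, also bought:", also_bought)
```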

Yeah.

Um, let me know if you, if you want
me to transition to Best Buy, I think.

Chris: Yeah, that'd be great.

Yeah, yeah.

Talk us through Best Buy then, it'd
be good to, you know, I think one

of the great things about you,

Shreesha Jagadeesh: Yeah,

Chris: across lots of different
industries and I'd love to just dig

into that and the differences after.

But yeah, talk us through Best Buy.

I think you've done some,

Shreesha Jagadeesh: sure,

Chris: stuff there from what you said.

Shreesha Jagadeesh: sure.

Yeah.

So right around like 2022, I realized
that while, um, I enjoyed some of

the contributions that I was able
to, uh, make at Amazon, and the,

uh, the peer group was amazing.

Like I had a chance to work with,
uh, the Nobel laureate economist Guido

Imbens, uh, to sort of build out or
get help in designing some of

those models in conjunction with the
other ML researchers on the team.

However, the, in, in terms of the
broader context of where I was situated

in, I felt like maybe, uh, I had the
opportunity to influence the company's

strategic direction a lot more if I
went to like, maybe a, a mid-scale,

uh, sort of Fortune 50 company.

So that was kind of my reasoning to,
uh, shift towards Best Buy, where,

uh, there was this expansive mandate
of helping drive recommender systems,

personalization and, you know,
representation, learning type of use

cases, um, that would be very beneficial.

That would be sort of like first in class
for the entire company rather than just

being yet another business department
in the grand scheme of things.

So that was kind of like the
transition point in, in mid 2022,

um, where, yeah, so I joined Best
Buy and since then I've been there.

We launched quite a few use cases.

Um, I won't get into technical details
on all of them, but there may be some

relevant things such as the homepage
personalization for which I published

a paper, which we can cover later on.

Chris: Yes, that sounds great.

Um, so moving on there, um, Best Buy,
obviously a mid Fortune 500 company, and

Amazon, you mentioned you were
surprised at how much resources

they could, um, dedicate to tasks.

You know, to me they're
both, uh, personalization

and recommendations at scale.

What would you say the main difference
was, if you can say, doing

recommendations at Amazon compared
to Best Buy, who, you know, have also got,

I would say, quite a lot of resources?

Shreesha Jagadeesh: Yeah.

Yeah.

So I think, um, the, the domain
is, is also quite different.

I think there's multiple
distinctions there.

Um, in addition to the company itself.

The sheer fact that one, sort of
like an offline use case versus an

online use case makes a difference.

So in, in, in any retailer, you have
a situation where a customer lands on

the website or the app, and then you
have, the system has to make predictions

in, in real time in most of the cases.

I mean, you could still cache
some of those predictions.

Um, however, in the case of like
internal sort of HR tooling, you don't

necessarily have that constraint.

You, you can do a lot of
like batch predictions.

Um, that's not to say real-time
inference won't make a difference,

but you could, you could probably
get away with maybe slower SLAs.

Uh, the other consideration is, is the,
the, the data privacy piece as well.

Like when.

When it comes to helping internal
employees, uh, on certain career

trajectories, uh, the data privacy
aspects become super critical.

Uh, there's, there's a lot of data,
uh, that you're not allowed to use.

Um, I won't get into specific
details for that at Amazon.

Uh, but in the case of retail, I
think it's a little bit more relaxed,

um, especially when it comes to like
clickstream and browsing behavior.

There's still some sensitivities
around like personally identifiable

information, even in the case of retail.

But the vast majority of like
clickstream events, search events,

browse and so on, add-to-cart,
purchases, there shouldn't generally

be constraints in the retail industry.

So that make, that makes model
performance generally, uh, the

main determining factor rather than
many of the privacy considerations.

Chris: Okay, interesting.

And I'm moving on.

Um, like I say, you've worked across
HR tech, uh, biomedical engineering

Shreesha Jagadeesh: Yeah.

Chris: um, retail.

What would you consider, you know, the
fundamental machine learning techniques

that have stood you in good stead that
are transferable across industries?

Because, you know, some industries
doing well and others not so well at the

moment, and people looking to move around.

What, what do you consider
to be, um, transferable?

Shreesha Jagadeesh: Sure.

I think I, I think, uh, taking a
step back, right, I think this is a

very, uh, important question to be
able to sort of introspect on for

any sort of person in terms of their
career opportunities out there.

Like what, what differentiating
factors do I bring to the table?

And I think, um, the key driving aspect
is really understanding the customer

problem, no matter which domain you are in
or which sort of technical skill you have.

It could be software engineering, product
management, machine learning, whatever,

finance or whatever it is, right?

Um, so really trying to get into the
heart of what the real problem is, rather

than the stakeholders or the business
team's representation of the problem.

Many times they might actually contradict.

So, um, and, and as you go along,
you start realizing that not all

the requirements are mandatory.

Some of them are mandatory,
some of them are nice to have.

Some of them are actually
not even requirements.

Some of them are assumptions or
speculations that the business

thinks is useful, but it may, it's
not usually backed up by data.

So really getting to the
heart of the problem.

How do you frame it?

What happens if you
don't build a solution?

What's the baseline?

So thinking through that specifically
in terms of machine learning, I think,

uh, a fundamental sort of curiosity
about upstream and downstream, uh,

processes and technology stacks that
are complementary to the model itself.

It's critical.

Uh, I think there was a very
interesting 2015 paper by Google.

It's called Hidden Technical
Debt in Machine Learning Systems.

You, you might have seen that infographic.

So there's like so many other modules.

The figure basically depicts an
ecosystem of, not technology stacks, but different

modules in the machine learning pipeline.

We got like feature ingestion,
feature processing, feature

engineering, data quality monitoring.

And then the, the model itself is
like this tiny, tiny little rectangle

in the overall sort of architecture.

Right?

The, where I'm getting at with that
sort of, uh, thought process is like you

have to really, as a machine learning
engineer, you have to really understand

what is the source of your data.

Where do your clickstream
events come from, or where does

your supply chain data come from?

What generated it?

Is there any sort of
like interference in the,

in the processing that
might bias your samples?

Like are you only collecting clicks
or are you only collecting one

type of data versus the other?

Um, so, and then same thing with
like the downstream, how are your

model predictions getting used?

In the case of recommender system
personalization, it's most likely

your model predictions are directly
put in front of the customers, right?

You have a front end that renders
your ranking somewhat, maybe filters

out duplicates or something.

But in general, it's not, there's
not a lot of interference, uh,

in the model predictions, uh,
before it reaches the customer.

But that may not be true with many of the
backend sort of internal tooling or other

types of non-customer facing ML use cases.

So think through on how the
downstream integrations happen.

Are there post-processing
layers that might invalidate

how you generated the results?

So I'll give you an example.

There was an interesting use case where
the marketing team was like, well, we

really need model explainability, right?

And then, I mean, if I was a junior
engineer, like looking back at

myself eight years or 10 years
ago, I would be like, sure, I'll

give you a feature importance.

But now I'm like taking a step back.

Why do you need model feature importance?

Like why do you need model explainability?

What does that help you with, right?

And they're like, well, if I know which
attributes in the model are helping to

generate those predictions, then I can
like tweak something and then like re

uh, re-engineer the system to do that.

But I'm like, hold on, I don't think
you need this extra two step process to

get feedback on feature importance and
then try to like fine tune it manually.

We can do that in an automated
system by retraining the model, by

looking at the same feedback data.

So you don't need an extra step.

So I was just using that as an
example to say, really understand your

downstream and upstream processes.

Um, the third, I would say common
attribute is like, really keep learning.

Whatever field you pick,
try to go really deep into it.

Could be computer vision, natural
language processing like NLP and, and,

uh, recommender system personalization.

The surface level knowledge is becoming
a very much of a commodity these days

because anyone can go into ChatGPT and
like you can literally say, Hey, can you

build me an algorithm that reflects what's
in this paper. That is possible today.

I've tried it even for very
advanced papers and it gives me

fairly reasonable results.

Uh, Copilot gives me like fairly
reasonable Python scripts that

could replicate a, a research paper.

So, um, yeah, so that, that those
would be my sort of like three main

pieces of advice for, um, ML engineers
looking to sort of transition.

Chris: Nice.

And you've kind of already answered
this in that question, but you know,

with, with AI and copilot, it's
becoming a little bit harder as

someone junior or mid-level to break
into the first role after internship.

Um, you know, what, what

piece of advice would you give to them

to become, you know, an A-player
engineer or A-player research scientist,

um, you know, when they're competing
against potentially a, a co-pilot

or, or AI in the first instance, or,
you know, teams are now a little bit

smaller because, uh, they're more efficient.

Shreesha Jagadeesh:
That's a great question.

I think the, that is definitely the case
that in the last couple of years, the job

market has become extremely competitive
for early career employees, uh,

looking to break into the profession.

Um, I mean, if this happened like 10
years ago when I was entering the field, I

would've been in a lot of trouble myself.

So, um, I kind of have sympathies
towards the, the newer generation

of folks who are entering it.

Um, of course, whoever is listening,
take my advice with a pinch of salt.

But I think with the speed at which AI
engineering and ML engineering is moving,

I think paradoxically what we need to do
is, uh, look for the most like highest

signal to noise ratio content available
and try to like learn it rather than like

spreading yourself like way too thinly.

Um, and I think that aligns with
many of the guidance given by

much more senior members in the ML
community that I've heard as well.

I don't think they're all keeping
up with, with the pace at which

this is advancing as well.

Um, specifically for like more
tactical, I guess, areas that

could improve the profile.

I would say like build
a lot of things, right?

Like the barrier to entry to
building is very low today.

Like you could, you could mock up
something very useful in the span of a few

hours if you have access to copilot, um,
and maybe like other sort of coding tools.

But what matters to employers is like,
are you able to think end to end, right?

It's, it's pretty easy to like create
a or replicate a tutorial, but can

you go like one step further and
change a few, like design choices?

Do you actually understand
the system you're building?

Right?

Can you tweak a few things?

Tell me what you've
learned from those things.

So I think the, the, that's what
would differentiate yourself from

the, the massive amount of people who
are just like, you know, replicating

a tutorial and then passing that
off as an experience in the resume.

Chris: No really good points,
and I'm sure people have found

that really, um, really useful.

Moving on to retail personalization and
kind of some of the marketplace dynamics

Shreesha Jagadeesh: Yeah.

Chris: you've been doing at Best Buy, you know,
Best Buy, obviously a, a huge retailer.

Um, you know, what unique ranking
and recommendations, problems and

challenges come with a, with a
company of that size and, um, you

know, such, such varied products
I guess, that you're dealing with.

And, um, across such a, a,
a large geography as well.

Shreesha Jagadeesh: Yeah,
definitely, definitely.

Um, again, I'm not a spokesperson
for Best Buy, so these are completely

my personal opinions on, on the
nature of the problem space here.

So I can talk in general about the,
uh, uh, nature of problems that retail

companies in, in general face, right,
versus let's say the common types of

personalization or recommender systems.

The experiences you might have
seen in, in social media companies

like Facebook Newsfeeds or
Instagram or Snapchat or something.

So, um, I think especially for
social media companies, you

have a massive amounts of, uh,
content being added every day.

Uh, which, which means that the
cold start problem is, is very

much front and center, right?

Like you, you need to figure out a
way to, uh, get the, get the high

quality content in front of the
customers as quickly as possible.

Um, in contrast, I think retailers
also face cold start problem with, with

respect to new products, but it's not as
severe as, let's say, TikTok at scale.

Like you're not uploading a billion
videos every month, every day or

something in, in most retailers, right?

There's no videos.

But the analogy would be maybe
the products themselves, right?

The turnover in the catalog
is nowhere in that scale.

It is still substantial enough
to, to warrant its own thing.

So instead, what ends up happening
is, I think the, the emphasis

is more on like enrichment.

What do we know about the products, right?

Is there, is there a way to like,
augment the quality of the products with

additional, sort of enriched metadata?

Um, so those are some of those challenges.

It's an open-ended challenge.

It's not like, yes, we, we
enriched it and now we are done.

It's more like, okay, we, we have
some like, I don't know, TechCrunch

articles or some other like blogs.

Is there a way to like bring
that insight into what we know?

Right.

So those would be some of the frontiers.

I don't know if it's a solved
problem in other companies as well.

Um, but so that would
be on the product side.

In terms of the customer
side, I think it's,

um, and we can talk about it a little
bit when we get to the

homepage personalization, right?

When it comes to the customer side,
you have a situation where, um, there's

both like logged in customers and
visitors who are unrecognized, right?

Like of course if a customer is logged
in, sure, you know, a lot of information

about their previous purchases and,
and, um, browsing history, clickstream

histories, uh, preferences they might
have set in their accounts, their email

clicks, and a lot of other information.

If they're not logged in, then you're
kind of like, okay, what contextual

information am I, should I be relying
on to make more accurate predictions?

And that's, that's generally true
with any retailer in general.

Like if you don't have customer, uh,
logging in, then you have a much more

limited set of, uh, attributes and
signals, features to work on to make

the model predict accurately, um, kind
of along the customer lines, right?

So you also have.

Um, the, when the customers are
traversing multiple stages, so like

homepage search and a product page, and
then maybe accessories page, then back

again to like deals page and so on.

How do you ensure you're not, like
how do you strike the balance between,

uh, content that might be perceived
as being duplicated versus it's an

intentional, there's a reason why we
are showing a few things repeatedly.

Right.

So those are some of the sort of
like the frontier challenges, right?

Is there some sort of, um, time
invariant sort of model that can

look across the customer journey?

Or is it like a point in time prediction
that is sort of stateless model, right?

Like those are some of the
sort of frontier challenges.

The more practical challenges are like.

Um, on the data side.

So imagine a customer goes from
like the homepage to a search

page or a product page, right?

Can we, or is the data infrastructure
fast enough to know those real time

signals of what happened in the
previous pages and feed that back

into whatever internal pipelines into
the predictive model by the time the

customer goes to the next page, right?

So those sort of like real time event
architecture would be sort of like

some of the more practical challenges,
even if you're doing sort of like

point in time predictions without
having, uh, stateful ML models.

Chris: And, uh, yeah, the, the next
part I'm particularly excited for,

um, I know you replaced a, a static
homepage, um, with an XGBoost model

and, um, you know, got an accepted RecSys paper.

yeah.

Do you wanna just talk us through that,
some of the challenges you've faced

and payoffs and, yeah, over to you.

Shreesha Jagadeesh: Yeah, sounds good.

Sounds good.

Yeah.

So the conference for which this paper
submission was made is the ACM RecSys

Conference in, uh, the Czech Republic later
this year, in September, uh, the 22nd.

So the industry paper title is Retail
Homepage Personalization of Heterogeneous

Content Using an XGBoost-Based Ranking System.

Um, so yeah, so we, in that paper I talk
about some of the, some of the ways like

homepage heterogeneous
content personalization is a little bit

different from, let's say, uh, newsfeed
ranking on Facebook or other types of

like similar content recommendations.

So allow me to explain, so in the
case of let's say, um, you're,

what would be a good example?

So imagine you're like looking
at restaurant recommender

or something, right?

So you, you.

Your content is generally all
restaurants, and then you have

access to the item information.

In this case, let's say restaurants.

And then you can set up a homepage
where you have, you can dynamically

say, okay, the best restaurant
should kind of go on the top.

And then within that carousel, the
best restaurants in that carousel

should be on the far left, right?

And then kind of like downwards.

The, the higher the position on the
page, the higher the quality of

the recommendations would be, is
what would be expected.

However, in this case, the, the
problem framing is slightly different

in the sense that there's separate,
uh, business domain models, other ML

models or heuristics that generate sort
of within carousel recommendations.

Um, then the question becomes then what
is the purpose of the homepage model?

The homepage model is more like
it's placing it vertically on

the page as well as we ended up
having like a hierarchical nature.

So in addition to, like, the,
uh, concept of a carousel,

we have like nested carousels.

Um, so it, it's, it's more
like if you open your Best Buy

app, it's, it's pretty obvious.

Um, so there's, there are what we call
stories, which are basically content

blocks that conform to a particular theme.

Um, so continue shopping might
be a theme and it'll have your

basket, it'll have your saved items.

It'll have recently viewed
items, for example, but each of

them is a careful on its own.

Recently viewed will
have its own set of SKUs.

Uh, saved items will have its own
set of SKUs, and your cart basket

will have its like own set of SKUs.

And that itself will be sort of
like ordered somewhat, right?

Like chronologically ordered or
some heuristics or ML model based.

Um, and then, so now the model,
the homepage personalization model

has to decide which of these three,

um, carousels,

the recently viewed, the, the
in-cart, or saved, would have to

come first, second, and third.

And then where on the vertical, uh,
layout of the page the entire

continue shopping story would come first.

So it's, it's not necessarily
trivial because I think a common

way many people would tackle, let's
say, recommendations problem is

to do a two tower approach, right?

You have a, a content tower, and then
like a customer tower, the content tower.

You feed in all of the item
attributes, and then a customer tower.

You have all the customer profile
and engagement information.

Uh, we have customer profile engagement
information, but your content

tower, it's a little bit different.

We can't really say, okay, these are just
items, these are not items to be ranked.

These are like carousels.

Uh, so it's kind of like
one level higher up.

So we didn't really approach
it using a two tower model.

Instead, we, we said, okay, let's try.

Let's try the most powerful sort of
formulation of traditional machine

learning using, you know, gradient
boosted decision trees and their variants.

So we've kind of landed upon XGBoost 2.0,

and the paper goes into details of why
we believe XGBoost is suitable for the

specific problem that we attempted.
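
The paper has the real formulation; purely as a hedge, here is a minimal sketch of how an XGBoost learning-to-rank setup over carousel-level rows might look, with synthetic data and invented feature dimensions:

```python
# Illustrative only (not the paper's model): rank carousels per homepage
# impression with XGBoost's pairwise learning-to-rank objective.
import numpy as np
from xgboost import XGBRanker

rng = np.random.default_rng(0)
n_impressions, carousels_per_page, n_features = 200, 5, 8

# One row per (impression, carousel): customer signals joined with
# carousel-level signals such as historical CTR or content freshness.
X = rng.normal(size=(n_impressions * carousels_per_page, n_features))
# Graded engagement labels, e.g. 0 = ignored, 1 = click, 2 = add-to-cart.
y = rng.integers(0, 3, size=n_impressions * carousels_per_page)
# qid groups rows that belong to the same homepage impression.
qid = np.repeat(np.arange(n_impressions), carousels_per_page)

ranker = XGBRanker(objective="rank:pairwise", n_estimators=100, max_depth=6)
ranker.fit(X, y, qid=qid)

# At serve time, score one page's candidate carousels and lay them out
# top to bottom by predicted engagement.
page = rng.normal(size=(carousels_per_page, n_features))
print("vertical order:", np.argsort(-ranker.predict(page)))
```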

So yeah, so we, we experienced a
huge increase in, uh, engagement.

I mean, this was in the
first version of the model.

We actually had ML-based versions
prior to even the XGBoost one, which also

experienced increases in engagement.

Um, but this was the one that actually
demonstrated increases in like some of the

revenue metrics the business cared about.

So, um, so we decided this would
be the perfect time to like

publish a paper as well to sort
of talk about some of the aspects.

The other part that the paper talks about, go ahead.

Chris: Yeah, I was gonna say, um, I'm
sure it's summarized in the paper.

Um, but what would you say, um,
at a higher level, the, the key

challenges of, of, um, implementation
were, from, uh, replacing the

static page to, to this model?

Shreesha Jagadeesh: Sure, sure.

I think it's less about the, uh, well,
I think static page is a misnomer.

There was still some element of heuristics
driven personalization before, but

it wasn't necessarily an ML driven,
fully ML driven app personalization.

Um, so I think it's always, this is
actually a very good question because

reflecting upon the journey that happened,
and this will be true with the, any sort

of new user experience that is getting
launched, you don't have training data

when you first launch any experience.

Your customer behavior is
so radically different.

You cannot use the historical

experience's, um, uh, engagement feedback
as the basis for training your, your

new version of the model, right?

So I think that was the critical challenge
that we faced when we moved away from

the old experience to the new experience.

And this would be true
for any placement, right?

Not just homepage, like search or product
page or like other, and this is not just

about Best Buy, like any retailer or
company would have the same challenge.

So then, then it's more of a internal
sort of development challenge.

If you don't have training data,
then what are you building, right?

But you still need a model.

So what ends up happening is, what ended
up happening last year was we said,

okay, we are gonna develop our code base.

We are gonna develop training
pipelines and serving pipelines,

assuming this data schema is what
is gonna be generated when the data
is gonna be generated when the data

actually starts flowing in, right?

So we, we have to be really

thoughtful about how we sequence
our work so that we can have all

of the sort of skeleton code and
skeleton sort of pipelines ready.

So when the homepage turns on, the model
starts learning immediately, right?

So there's a little bit of like
exploration-exploitation, we did

a little bit of reinforcement learning
before the XGBoost model, but so that, that's

kind of like the critical aspect of,
uh, any new experience transformation.
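
He mentions running some reinforcement learning for exploration before the XGBoost model; the following is only a toy epsilon-greedy sketch of that exploration-exploitation idea, not the production approach:

```python
# Toy epsilon-greedy sketch of cold-start exploration: with no training
# data at launch, occasionally randomize the carousel order so the new
# experience generates its own feedback.
import random
from collections import defaultdict

EPSILON = 0.1
views, clicks = defaultdict(int), defaultdict(int)

def choose_order(carousels):
    if random.random() < EPSILON:
        order = carousels[:]          # explore: random ordering
        random.shuffle(order)
        return order
    # exploit: order by the click-through rate observed so far
    return sorted(carousels, key=lambda c: clicks[c] / max(views[c], 1),
                  reverse=True)

def record_feedback(carousel, clicked):
    views[carousel] += 1
    clicks[carousel] += int(clicked)

for _ in range(1000):  # simulated impressions of the top slot
    shown = choose_order(["continue_shopping", "deals", "recommended"])
    record_feedback(shown[0], clicked=random.random() < 0.2)

print(sorted(views.items()))
```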

And then specifically for the
homepage model, I think the biggest

internal challenge is it's more
of an engineering challenge rather

than an algorithmic challenge.

It's around feature store.

And how do we make sure that the
overall latencies when we are calling

in, uh, uh, like features from feature
store, doesn't exceed our SLAs?

So what I mean by that is like if
you have hundreds of, uh, signal

like features on your customer.

Um, you can't sequentially call 'em
because the feature store API might

store them in like different sort of,
not tables, but different APIs basically.

You can't sequentially call them,
get the data, and then the next

one, get the data, because that
would like exceed your latency

SLA. You'd have to parallelize
your calls, that's number one.

And then you would have to join
the data after you get back.

It sounds fairly trivial, but on
the engineering side it's, it's

nontrivial because you might have
like race conditions and so on.
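
A sketch of the parallel fan-out-and-join he describes; fetch_features is a stand-in for whatever client the feature store actually exposes:

```python
# Sketch: parallelize feature-store lookups so total latency tracks the
# slowest single call instead of the sum. fetch_features is assumed.
import asyncio
import time

async def fetch_features(api_name: str, customer_id: str) -> dict:
    await asyncio.sleep(0.03)  # simulate one ~30 ms feature-store call
    return {f"{api_name}_signal": 1.0}

async def get_customer_features(customer_id: str) -> dict:
    apis = ["profile", "clickstream", "purchases", "email_engagement"]
    # Sequential calls would cost ~4 x 30 ms; gather runs them concurrently.
    results = await asyncio.gather(
        *(fetch_features(api, customer_id) for api in apis)
    )
    merged: dict = {}
    for partial in results:  # the join step after the parallel fan-out
        merged.update(partial)
    return merged

start = time.perf_counter()
features = asyncio.run(get_customer_features("cust-123"))
print(features, f"~{(time.perf_counter() - start) * 1000:.0f} ms total")
```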

So, um, the other aspect is
around, um, multi-region deployment

and like co-location, right?

So a user making a request in, let's
say New York, their request should not

be routed to like all the way towards
the west coast and then back again.

So that would like breach
your SLA requirements.

So you have to be really cognizant
about where is your request

traversing the network and that.

So we ended up deploying, we ended up
doing, uh, multi-region deployment.

Then the feature store itself is
sort of like co-located within the

model endpoint so that the network
latencies are as minimal as possible,

Chris: And yeah, I think,
um, that's super interesting.

By the way, I think you were just about to
go on to some of the payoffs that you got.

It sounds like, um, you worked
really hard. What results did you

kind of notice, um, you know,
once you got it up and running?

Shreesha Jagadeesh: um, in terms of
the engineering results or like the

customer benefits or company benefits?

Uh,

Chris: let, I think, let's
go into all, all three.

I think all three.

Um, yeah, super relevant.

Shreesha Jagadeesh: sure, sure.

Yeah, so I think in terms of the, uh,
the engineering results, we managed to

squeeze out, squeeze out latencies from
the system, and it's like, well within

a hundred milliseconds overall latency.

Um, and, um, in terms of the customer
behavior, we, we did notice like

double digit increases in homepage
engagements, which is, uh, quite a lot

relative to what it was before.

And also increases in,
kind of like, we have our own

sort of internal revenue metric.

Um, it's related to what we extrapolate
to be the increase in basket size

as customers shop, right?

Because the, and that's one of the
things I perhaps I might have missed

earlier on when I explained some of
the differences between retailers

and let's say social media companies.

Um, a lot of the high value retailers,
the frequency of customer shopping is

much lower than, let's say, a
grocery company or a social media company.

People are not coming back like
every day to like buy a new thing.

They might buy an iPhone like
once a year or a laptop like

once every couple of years.

Or washer dryer, right?

It's not like, okay, you're
coming back next week.

So, um, so some of the business
metrics are more like long term,

like sort of forward looking.

Okay.

Do, if we extrapolate this trend.

What do we anticipate, uh, the business
benefits to be if this trend keeps up?

Chris: Okay.

And, um, you know, getting
a RecSys project like

Shreesha Jagadeesh: Yeah,

Chris: this off the ground, I imagine, um,
has certain costs associated with it.

you know, some of the guests we've
had on in the past really, um, gone

deep into a BS experimentation.

But, um, from your perspective, and I
guess this is not, um, restricted to

Best Buy, more, you know, you've worked
at lots of different enterprises,

um, you know, how do you frame, um,
you know, potentially high-cost

buy-in to projects like this?

And what kind of initial
A/B tests could you do

before you kick the, the projects off?

Or do you do them
once you've started?

Shreesha Jagadeesh: yeah, yeah.

So I think, um,

well, first of all, I think it's, it's
a combination of a top-down and bottom-up

approach in the sense that if you have
leadership buy-in ahead of time that yes,

machine learning, AI driven experiences
are gonna be beneficial to the customers

or any other sort of internal goals, then
that's the best position to be in, right?

So it saves the team a lot of effort to be
like, um, doing some education of, okay,

why is personalization beneficial, right?

So, but let's assume that
is not necessarily the case.

And there's some, let's say, pockets
of awareness, but not necessarily

fully prevalent in the company.

Uh, then the, then the situation would
be sort of like, you'd have to invest.

It's not gonna be like,
okay, I have an idea.

I'm gonna propose this and
it's gonna get approved.

It's, it's a long-term plan.

You'd have to slowly start,
uh, bringing in people and

stakeholders along the journey.

For example.

Uh, a common pattern in, in
recommender systems is to basically

do like a multi-stage, right?

So you do like a fast retrieval
stage using like a two tower model

and then like a ranking stage that
can, that is, uh, a much heavier model,

but also, um, incorporates a lot of
other signals, and, right, but

actually processes fewer like items
or customers to recommend, right?

So, but what is that
multi-stage recommender system?

People are generally not aware of it
outside, you know, specialists in

recommendations and personalization.

So there's this element of education
in place to kind of explain, here's,

here's sort of like the best practices
in the industry and here are the

reasons why you need like a two stage
approach, but you don't necessarily

need both stages to be effective.
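
A minimal sketch of that two-stage pattern, with synthetic embeddings and a placeholder second-stage scorer standing in for the heavier ranking model:

```python
# Two-stage recommender sketch: cheap retrieval over the full catalog,
# then a heavier (here, placeholder) ranker over the survivors.
import numpy as np

rng = np.random.default_rng(1)
n_items, dim = 100_000, 32
item_embeddings = rng.normal(size=(n_items, dim)).astype(np.float32)

def retrieve(user_emb: np.ndarray, k: int = 200) -> np.ndarray:
    # Stage 1: fast dot-product scoring over everything, keep top k.
    scores = item_embeddings @ user_emb
    return np.argpartition(-scores, k)[:k]

def rank(user_emb: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    # Stage 2: richer features, but over ~200 items, not the catalog.
    rich = np.concatenate(
        [item_embeddings[candidates],
         np.tile(user_emb, (len(candidates), 1))],
        axis=1,
    )
    scores = rich.sum(axis=1)  # stand-in for an XGBoost/neural ranker
    return candidates[np.argsort(-scores)][:10]

user = rng.normal(size=dim).astype(np.float32)
print(rank(user, retrieve(user)))
```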

So one of the, one of the things
that I did in, in one of my previous

projects was yes, even though two
stages are beneficial, let's start

off with just one of those stages.

Um, demonstrate value and then start like
tacking on more complexity to it, right?

So, so kind of like an incremental sort of
evolutionary approach to how you sequence

the projects becomes, uh, important.

Um, and then in terms of the AB test
itself, I think it, it goes back to what

is considered important by the business.

So allow me to explain.

So, um, imagine, imagine a business
stakeholders comes to you and say,

Hey, I, I wanna, I wanna introduce
this new loyalty offer on the

cart page when people are checking
out and I wanna increase revenue.

So let's say that's kind of like
the problem statement, right?

And then I mean, if, if it was, if it
was a junior ML engineer, they'll be like,

sure, I'm gonna build you like a, a
binary classification model that can

take in some, uh, customer signals and
then recommend or not recommend that

loyalty offer and they're checking out.

I think way, way, way before that, like
try to really understand the problem

even before you go into an A/B test mode.

The reason is, hidden in that
request is actually a very contradictory

set of objectives that would need
to be surfaced ahead of time.

Because, for example, if you introduce
a loyalty offer, you have to have

the product intuition to be able to
say that you're introducing friction

into the customer experience and
therefore your conversions will

actually decrease, not increase.

So, and then, then you might get into
a debate with the product manager

or business partner saying that,
well, I meant long-term conversions,

not short-term conversions.

By having a loyalty, uh, subscription,
the customer is now able to go

in and shop more often in the
future, and therefore their future

conversions will be increased.

Which is fine, but that's a
different problem formulation

than what was stated earlier.

The reason why that's important is
there are some metrics that you can

measure in a two week AB test, but
there are some metrics you cannot

measure in the duration of a two weeks.

Like the long term sort of benefits
from loyalty would not be observable

in a two week experiment, right?

So the most you would have is the short
term detriment to immediate conversions,

uh, because you're introducing
more friction in the popup, right?

So tho those would be some
of the considerations.

Uh, and then it becomes like a
question of going back to your earlier

question, how do you demonstrate value?

Sometimes demonstrating value
would itself require some very

thoughtful, um, experiments to
collect data that would validate or

disprove your initial assumptions.

So just extending the same
example a little bit, uh, a simple

way to experiment would be to

do A/B testing where the control group
never receives any recommendations.

And then the treatment group, all of
the people receive a recommendation.

There's no model, but just, just throw
all the recommendations at those people

and see if there's a detriment, right?

Or an improvement.

That will kind of give you a
benchmark of, okay, here's the maximum

possible, uh, loyalty signups you could
get for a given cohort of customers

when there's no model in place.

And then you can kind of like play
around with a trade off, okay,

for this amount of loyalty offers
being, uh, subscribed, here's a

detriment in conversions that I get.

So now you can start playing around
with that trade-off balance and then have

a working conversations with your
leadership saying, we are gonna build

a model that will improve precision.

But that precision also means that
you'll get lower recall, which is, you're

not casting as wide a net to everyone.

If you don't want precision, then
you should not even be asking us for a

model because, you know, you can just
show that loyalty offer to everyone.
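
Reading out such a test might look like the sketch below; the counts are invented, and the significance test is one common choice rather than anything any particular retailer uses:

```python
# Toy readout of the "offer to everyone vs. no one" test described
# above. All counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

control_conv, control_n = 4_800, 100_000      # no loyalty popup
treatment_conv, treatment_n = 4_500, 100_000  # popup shown to everyone
treatment_signups = 2_100                     # loyalty subscriptions gained

# Two-proportion z-test on the short-term conversion detriment.
stat, p_value = proportions_ztest(
    [treatment_conv, control_conv], [treatment_n, control_n]
)
lift = treatment_conv / treatment_n - control_conv / control_n
print(f"conversion delta: {lift:+.2%} (p = {p_value:.4f})")

# The trade-off to socialize: signups gained per conversion lost.
lost = max(control_conv - treatment_conv, 1)
print(f"signups per lost conversion: {treatment_signups / lost:.1f}")
```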

So I think, I think there's various,
depending on the use case, there's

like various ways to frame the value.

Um, so some of them require, you know,
socializing, some of them might require

like experimentation to determine what
the maximum possible value and so on.

Chris: Yeah, I mean,
that's super insightful.

I think what I take from that is you've,
you've had really good, obviously, uh,

foundations in machine learning, but
being able to translate into products

and, and the business and also, uh,
you know, it all intersects that

your, your business acumen and, um,
yeah, uh, I think really interesting.

I think moving on to the next part, um,
you know, we, we, we spoke previously, uh,

before the podcast about some of the work
you've done using Representation learning

and, um, how that's come to help you with
more traditional ML methods and now, uh,

gen AI, you know, how are you using, uh,
representation learning,

uh, to kind of add value to the the

Shreesha Jagadeesh: Yeah,

Chris: journey?

Shreesha Jagadeesh: for sure, for sure.

I think, um, representation
learning has been around for,

uh, at least a decade now, right?

I think in terms of, or maybe even longer
because the original sort of collaborative

filtering model, like fundamentally it
turns users and items into some sort of

an embedding space, and then you, you
try to find, uh, distances between them.

Now, like taking that concept along,

I think the advent of deep learning, or
I should say the popularization of

deep learning techniques to generate
embeddings basically means that you

can convert a product like let's say
in the traditional sort of data science

world, you would take a, a SKU ID, the
brand, category, price, um, all of the

other information about that product.

And then build that, and then maybe
something similar for the

customers, like, you know, their,
uh, profile characteristics, their

clickstream behavior, and then do
manual feature engineering, and then

feed that into a supervised model.

And then click on, no click,
like a classic CTR prediction.

However, when you're talking about it at an
enterprise level, the value proposition

for representation learning is more
around, do you want to centralize
the product understanding and customer

understanding so that multiple teams
are not duplicating the effort, right?

You don't want, so what ends up
happening is each team will be

working on a very specific problem,
like cybersecurity, fraud, or you

know, the loyalty subscription or
like recommender or ad tech use cases.

But fundamentally, you are still
working on the same customer, right?

And then your product catalog
is still the same catalog.

But each team might do
feature engineering ever so

slightly differently, right?

So is there a way to like have a
common set of agreed upon patterns

ahead of time so that it, it
saves everyone else time in the

future from doing feature engineering.

So that's kinda like the value
proposition for, uh, doing any sort of

like user modeling and item modeling.

And then more tactically, from, uh, a
reusability perspective, what ends

up happening is you have a central
team that creates product and item

embeddings and then stores it in a
feature store so that other teams,

other ML teams can, can consume it
for their domain specific use cases.

Right.

So where I think the literature has sort
of evolved in the last decade or so is

that there's some very popular approaches
to using, um, variations of, let's say

Word2Vec type of approaches, Prod2Vec
or Meta-Prod2Vec, to represent

products better.
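
A minimal sketch of the Prod2Vec idea he references, treating each session as a "sentence" of SKUs and learning item embeddings with Word2Vec; the sessions here are toy data:

```python
# Prod2Vec-style sketch: sessions as sentences, SKUs as words.
from gensim.models import Word2Vec

sessions = [
    ["sku_tv_55", "sku_hdmi_cable", "sku_soundbar"],
    ["sku_laptop", "sku_mouse", "sku_laptop_bag"],
    ["sku_tv_55", "sku_soundbar", "sku_wall_mount"],
    ["sku_mouse", "sku_keyboard", "sku_laptop"],
]

model = Word2Vec(
    sentences=sessions,
    vector_size=32,  # embedding dimension
    window=3,        # co-occurrence context within a session
    min_count=1,
    sg=1,            # skip-gram, as in the original Prod2Vec setup
    epochs=50,
)

# Items that co-occur across sessions land close in embedding space,
# ready to be stored in a feature store for downstream teams.
print(model.wv.most_similar("sku_tv_55", topn=2))
```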

Um, and then I think recently
Facebook also released a paper,

uh, called HSTU where they're using
like transformer based approaches to

really understand their customers.

So they have like an LLM-based
recommender system.

So it's kind of like getting to the
intersection of the more traditional

sort of representation, learning, sort
of evolving into LLM based approaches.

Mind you, the LLMs that I'm talking about
are not the generative AI, so they're not

like the chatbot type of conversational
LLMs, but more like transformer-based

approaches in, in general,

Chris: Okay.

Yeah, that's super interesting.

A lot, lot of customers we're working
with, uh, certainly, uh, moving towards

more, uh, LLM-based recommenders.

Um.

Shreesha Jagadeesh: The one more thing
that I wanna add there is like, if you

really wanna use the power of generative
AI to understand customers and products

more, you can actually literally feed in,

let's say, clickstream data into
another, into a generative AI like

the Gemini API or the ChatGPT API,

and ask the question, what is
the customer trying to do?

You can summarize it that way.

Chris: Oh really?

Shreesha Jagadeesh: I don't know
how effective they are.

But it's possible.

Chris: Yeah.

Shreesha Jagadeesh: The constraint is more to
do with the cost and latency because you

generally cannot do that in real time.

And then if you try to do that for like
hundreds of millions of customers every

other day, it's gonna be problematic.
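
Illustratively, and with the cost and latency caveat he raises, such a call might look like the sketch below; the client, model name, and prompt are assumptions, not a recommendation:

```python
# Hypothetical sketch: summarize a shopper's clickstream with a hosted
# LLM. Model choice and prompt are illustrative assumptions only, and,
# as noted above, cost and latency make this hard to run per request.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

clickstream = [
    "viewed: 55-inch OLED TV",
    "searched: soundbar under $300",
    "added to cart: HDMI 2.1 cable",
    "viewed: TV wall mount",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model; illustrative choice
    messages=[
        {"role": "system",
         "content": "In one sentence, what is this shopper trying to do?"},
        {"role": "user", "content": "\n".join(clickstream)},
    ],
)
print(response.choices[0].message.content)
```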

Chris: Yeah.

Fair.

Fair enough.

Alright, well, uh, yeah,
that, uh, concludes, uh, the

personalization part of the podcast.

I think some, really insightful
points you made there.

And, um, yeah, thank you for sharing.

I think moving on to the last couple of
parts, um, this is more about experience

and leadership, and something you've got,
um, plenty of, you know, more leadership

in the enterprise-level organizations

You've worked at, um, what, you
know, what would you say are the,

the main leadership principles that
have allowed you to be successful?

And, uh, I hope you don't mind
me saying, but it seems like

you've moved up pretty quickly.

Um, you know, what got
Shreesha Jagadeesh: Thank you.

Chris: you there?

Shreesha Jagadeesh: Yeah, I think
I've covered a couple of those points

before, but just reiterating those, right.

I think, uh, customer obsession is a big
aspect of, uh, what I personally, um,

think of in terms of leadership principle.

Like really understand

who your ultimate customer is,
what business problems are being

solved, and go upstream, downstream.

Right.

And then trying to figure out if it's
not just about like me being customer
obsessed, it's about imparting that sort
upset, it's about imparting that sort
of perspective upon all of the other

developers in the team as well, right?

Like how do you ensure that, how
do you empower everyone else in the

team to also focus on the customer
problem rather than a specific

stakeholder you're trying to solve for?

Right?

So I think, um, the corollary
of that is, it, it also moves

people away from obsessed with a
particular modeling technique, right?

Modeling techniques are basically ideas.

Some may work, some may not work.

It's okay.

You can move on to other modeling
techniques or data approaches, right?

But falling in love with
the problem is, is actually more critical

than falling in love with the solution.

I think that's kind of like the,
the phrase that I would use.

The second aspect that I would
say is kind of like really,

deeply understand what
your team is doing, right?
your team is doing, right?

So there's one element of, you know,
empowering and delegating to the

team, um, where they're good at.

And, but you would also need
to really understand what each

person brings to the table, right?

Not everyone is gonna be like a
unicorn of, okay, they can do like,

you know, feature engineering or, uh,
modeling and then model deployment

and then optimizations and monitoring.

But you have like a, a symphony, like
an orchestra, like each person is,

is a specialist in certain areas.

They may be able to do other
areas as well, but they may not

necessarily be good at, right.

Um, um, I guess complementary to
that idea about diving in deep into

each of those things is it also helps
to speed things up when, when projects

are on a very tight deadline, right?

So I think even like five years or
maybe three years ago, it was very

common to say, oh, most machine learning
projects do not make it into production.

I don't think, personally, I
don't think that's true anymore.

I think most machine learning projects
these days actually make it into production.

Not only that, many of them go
into production in a span of

like two or three months even.

I've seen some extremely fast
execution in, in several of the

companies that I worked for.

Yeah,

Chris: What would you say is the biggest difference
from, you know, a lot of projects

not going into production to now, um,

Shreesha Jagadeesh: yeah.

Chris: more, more are

Shreesha Jagadeesh: Yeah.

So, uh, yeah, digressing
a little bit there.

So you raised a very good point on
what, what the differentiating attributes are.

My personal observation is that many
of the projects that do not end up

going into production have not been
de-risked at the very beginning.

So one, one sort of hypothetical
example about the loyalty offer, right?

Like really honing in on what
your evaluation metric is.

So it's, it's either a problem with
the right data not being available,

the evaluation metric being
incorrect, or the downstream, um,

consumers of the model outputs not
being ready to actually consume it.

Uh, the other things that I've
seen are infrastructure issues.

Can you really run the model at a
very stringent latency, or is your

latency gonna cause you issues?

So much of those things can actually be
de-risked ahead of time, or at least in

parallel to your model development, right?

These are not like unknown things.

Of course, people have faced these
challenges in the industry, right?
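
As a rough illustration of de-risking the latency question early, here is a minimal sketch in Python; the placeholder model, the request shape, and the 50 ms budget are all hypothetical, not anything from a real system:

    import random
    import time

    def placeholder_model(features):
        # Stand-in with the same interface as the planned model.
        return sum(features) / len(features)

    def latency_smoke_test(n_requests=1000, budget_ms=50.0):
        latencies_ms = []
        for _ in range(n_requests):
            features = [random.random() for _ in range(50)]
            start = time.perf_counter()
            placeholder_model(features)
            latencies_ms.append((time.perf_counter() - start) * 1000)
        latencies_ms.sort()
        p50 = latencies_ms[len(latencies_ms) // 2]
        p99 = latencies_ms[int(len(latencies_ms) * 0.99) - 1]
        print(f"p50={p50:.4f} ms  p99={p99:.4f} ms  budget={budget_ms} ms")
        return p99 < budget_ms  # flag the risk before modeling work starts

    latency_smoke_test()

A harness like this can run against the serving path long before the real model exists, which is exactly the kind of parallel de-risking being described.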

So sequencing the work in a way that
allows projects to be de-risked along

the way is gonna be a critical attribute
to ensuring projects are successful.

And that's where the, the diving deep
into your, uh, the work that your team

does and everyone else is doing to help
support your team is very critical.

Like as a leader.

You have to be hands-on in terms
of at least knowing what your team

does, even if you're not necessarily
the one coding out every single

module in the, in the pipeline.

Chris: Yeah.

And, uh, thanks for sharing that.

I think I interrupted you a
little bit, so apologies for that.

But, uh, in terms of leadership
principles, is there anything

else you'd like to add in terms
of what, what's helped you?

Um, you know, you, you mentioned,
I think before we left off about,

you know, just knowing what your
teams are doing and strengths and

Shreesha Jagadeesh: Yeah.

Yeah, definitely.

So I think in terms of, um, continuing
on the thread of leadership principles,

uh, like really respecting your,
uh, product and engineering partners

who are helping out, I guess
complementing the machine learning

team's strengths would be super critical.

Um, I mean, great, like, sort of data
engineers and, and platform teams are

sort of like the unsung heroes, you
know, the ones who keep the machine

learning, uh, models from failing, right?

So.

The, the amount of effort that goes into
data quality is unimaginably complex.

There's like so many clickstream
events and, and offline data all

being like joined together, some
massive ETL jobs that are running.

So those are very thankless jobs.

And when models succeed, many times
those data teams and infrastructure

folks do not get the credit for it.

Right?

So like how do you kind of like
keep them in the loop, make them

aware, sharing the success, right?

So it is more like a collaborative thing
and that would benefit future projects

because now they're more open to sharing
their thoughts: Hey, I don't think

this data pattern that you're trying to
do is, is actually gonna be beneficial,

but instead what if you do this, right?

So it's kind of this
collaborative nature there.

Um, the last thing that I would
say is, is sort of like try to

be as, as curious as possible
about the upstream and downstream.

And I kind of mentioned that
in the earlier stages as well.

And sometimes stakeholders are kind
of shocked that, oh, I thought you

guys are machine learning people.

Why are you guys interested
in all of these other aspects?

But it's important because if the machine
learning model is used wrongly, it

might not work the way you would expect.

Right.

So just purely selfishly, to ensure the
machine learning model succeeds, I

actually have to be sort of selfless
and understand everyone else.

Right.

So empowering the team to really be
curious about everything else around

them is, is gonna be critical as well.

Chris: Oh, uh, again, super
interesting, really like how curious

you are about the customer and
the problems that you're solving.

I think, um, yeah, it, it,
it's definitely unique.

Um, in, in terms of, uh, we kind
of touched on this, um, at the

start of the podcast, but traits
and things that you look for in,

in, um, engineers or researchers that
kind of round out your team.

Are there any kind of, uh,

Shreesha Jagadeesh: Yeah.

Chris: that, that make people
good engineers or, or researchers?

Shreesha Jagadeesh: For sure, for sure.

I think in the last, um, six to nine
months, there's been like dozens

and dozens of interviews conducted.

So a lot of these are
like fresh in my memory.

So we, we, we were like massively
scaling up our, uh, Best Buy

India office in, in Bangalore.

So I interviewed like multiple
director candidates, senior manager,

manager, senior IC, staff, principal.

So I, I think it really depends on
the, the level at which the candidate

is interviewing, uh, whether
they're interviewing for like a research

role or an engineer role, right?

Or like an IC
role or a manager role.

I think these all make a difference.

Um, so like, of course as a, a senior
IC or staff kind of person, you

have to be really, really good at
writing good code, efficient code.

Those are like fairly
strong fundamentals there.

And I think, uh, definitely another
critical aspect of being like a

researcher IC is being able to
sort of link the solution that you're

proposing with sort of like industry
standard as well as the problem

that you're trying to solve, right?

Like how does, how does your
solution, how is it unique or how

is it building off of what other
people have done in the industry?

Because if you're trying to propose a
solution that doesn't exist at all in the

industry, that's kind of odd in the sense
that where, where did that idea come from?

How do we know that it's gonna work?

Right?

So it's kind of, you're not necessarily,
most of the times you're not necessarily

needing to innovate on the problem.

Like many of the problem patterns
already exist in the industry.

So if a proposed solution seems to come out
of nowhere, it usually means that the person

has not done enough literature review to
really understand the problem space.

Um, I'm talking about like technical
literature review, right?

And then also trying to see how
their solution maps to the problem

that you're trying to solve.

Like, did they work on a particular
problem because they wanted to like

pad up their resume? Or is it
actually a, a good problem, uh, or a

good solution to the problem
they faced, right?

Like, what are those design choices?

What are those loss functions?

What are those evaluation metrics that
are bespoke to your problem, where, where

you believe you made the change, right?

So trying to, getting a sense of their
contributions to this versus, you

know, just running on autopilot while
they're, they're part of a bigger team.

Um, so as an, as a manager or like
a senior manager candidate, I think

the, the emphasis is now a little bit
different in the sense that how did

you empower your team to be productive?

Right?

Did you work on the most fun parts
of the projects for yourself, or did

you work on the things that are the
highest value add for the product team?

So those can be like different things.

Sometimes the highest value add may
not be the most fun to do, right?

So you have to sort of understand,
is this person a team player or

are they trying to be, uh, yeah.

So, so those are the sort of
things that you'd wanna be

looking at a manager candidate.

But even more importantly, like at the
researcher level, you would want someone,

uh, specifically for recommender systems
and personalization team use cases,

to have worked on online deployments.

Um, because the intuition that comes
from building a model all the way to

production and then seeing that, oh,
the data that the serving

pipeline gets at inference from
the feature store API is not

the same as what I saw in training.

There's a train-serve skew.

That level of insight does not
come easily for people who have

only worked on batch inference.

And this is true for ML engineers as well.

The online serving use case is a
whole other ball game compared

to any sort of batch deployment.

So working at scale, working on
online deployments, um, rather than

just a batch inference process.

Those are some of the, like, critical
factors, and you might have seen that

in your hiring, uh, as well
for recommender systems candidates.
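
To make that train-serve skew intuition concrete, here is a minimal sketch in Python; the feature values and the units mismatch are invented for illustration:

    import statistics

    # The same feature as seen in training vs. as logged at serving time.
    training_price = [9.99, 14.50, 20.00, 5.25, 12.75]     # dollars
    served_price = [999.0, 1450.0, 2000.0, 525.0, 1275.0]  # cents!

    ratio = statistics.mean(served_price) / statistics.mean(training_price)
    if not 0.5 < ratio < 2.0:
        # A crude mean check; production systems compare full distributions.
        print(f"train-serve skew suspected: mean ratio {ratio:.0f}x")

Logging what the inference API actually receives and comparing it against the training data is how this class of bug gets caught.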

Chris: Yeah, definitely.

I think, um, you know, recommenders at
at scale is totally different to, um,

anyone who's done it in a startup or,

Shreesha Jagadeesh: Mm-hmm.

Chris: So, um, cool.

And, uh, look, looking to the future.

I know you've, you've got your,
your book coming out, um, which

must be super exciting for you.

I, I imagine,

Shreesha Jagadeesh: Yeah,

Chris: You know, do you wanna just
share what it's about and,

um, what you're discussing?

Shreesha Jagadeesh: Sure.

Sure.

I think so.

Uh, yeah, just for context, right, so
I kind of, uh, when I first started

down the path of, you know, recommender
systems and personalization.

There were quite a few like
introductory books on this topic.

Um, and, and of course like
tutorials on collaborative

filtering, content filtering.

But I didn't really, at least when
I started my sort of journey in

recommender systems, there weren't
really a lot of good sort of like books

specifically focused on practical stuff
like end-to-end recommender systems.

Uh, there were end-to-end ML
engineering books; like, Chip Huyen

had written a fantastic ML
engineering book a few years ago,

uh, where, uh, there's a lot of
best practices for any sort of

machine learning problem in general.

So what if there was a book around
recommender systems, kind of in

that sort of caliber, right?

So, so that kind of got me thinking,
is there a way in which future entrants

to the field of recommender systems and
personalization do not have to go through

the journey of, you know, cobbling
together conference videos, uh, a few

blogs and a few like papers out there?

Rather, it's all like
consolidated in one place.

So that was kind of like the
motivation for, uh, kind of start

starting to write this book.

And so far, I mean it's, uh,
I've always kind of debated on

what's in the book versus what
I just refer people to outside.

But so far the structure of the book
is like, rather than starting off

with a theory of, oh, okay, this
is a collaborative filtering model,

this is like content filtering.

I think many of those techniques
are starting to show their age and

perhaps not as relevant for industry
enterprise recommender systems anymore.

Uh, those are good to know,
but not necessarily how things

go in production these days.

So instead the, the, the flow is more
like, okay, here's like an end-to-end sort

of rxi architecture in the modern times.

Yes, there's some introductory
collaborative filtering, but that's

more like the history of recommenders.

And, but instead it's more
like, let's say the, the famous

Eugene Yan blog post on, here's like
multi-stage recommender systems,

and here's all the different components, and
then you kind of like dive into each stage.

So each of the book chapters
focuses on a different part of

the recommender systems pipeline.
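
For a feel of that multi-stage shape, here is a minimal sketch in Python; the catalog and both scoring functions are toy stand-ins, not any production system:

    def retrieve(user_id, catalog, k=100):
        # Stage 1: a cheap heuristic (popularity) over the full catalog.
        return sorted(catalog, key=lambda item: item["popularity"],
                      reverse=True)[:k]

    def expensive_model_score(user_id, item):
        # Stand-in for a learned ranker such as XGBoost or a neural model.
        return item["popularity"] * 0.5 + len(item["title"]) * 0.01

    def rank(user_id, candidates, k=10):
        # Stage 2: a costlier per-item score, run only on the survivors.
        return sorted(candidates,
                      key=lambda item: expensive_model_score(user_id, item),
                      reverse=True)[:k]

    catalog = [{"title": f"item {i}", "popularity": i % 7} for i in range(1000)]
    shortlist = rank("cust_1", retrieve("cust_1", catalog))
    print([item["title"] for item in shortlist])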

So we are talking about model
training infrastructure, right?

How do you work with the feature
store team to make sure that you have

the training data in the right format
necessary to build a supervised model?

That is something that
not a lot of people cover.

Like, so I'll give you a classic
example of like a gotcha.

So I've seen use cases where people
say, oh, we have lots of data, we

have lots of historical training data.

But when you start going into their,
into their databases, you realize

that what they're calling data is
only the people who have clicked on

a particular item. They do not have
impressions data; they have not set up

the front end clickstream event
ingestion for customers

who came upon the page, viewed
something, and did not click on it.

It doesn't exist.

So there's, it's very hard to like
structure a supervised problem if

you only have partial data, right?

But things like that.
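
To make that gotcha concrete, here is a minimal sketch in Python (customer and item IDs are made up) of how impressions joined with clicks yield a labeled dataset, and why clicks alone cannot:

    # Every (customer, item) pair that was actually shown on the page.
    impressions = [
        ("cust_1", "item_a"), ("cust_1", "item_b"),
        ("cust_2", "item_a"), ("cust_2", "item_c"),
    ]
    # The subset of shown pairs that was clicked.
    clicks = {("cust_1", "item_a"), ("cust_2", "item_c")}

    # Positives come from clicks, negatives from impressions that were
    # not clicked; with clicks alone there are no negatives to learn from.
    training_rows = [
        {"customer": c, "item": i, "label": int((c, i) in clicks)}
        for c, i in impressions
    ]
    for row in training_rows:
        print(row)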

So coming back to the
table of contents, right?

So it walks you through the journey
of building the recommender systems

pipeline end to end, and also
diving into the specifics of, uh,

particular technology stacks.

So if you talk about training
data, you need BigQuery, right?

And you need some sort of like a
feature store wrapper around BigQuery.

Then people ask, why do you want,
why do I need a feature store?

Well, then you have the problem
of train-serve skew.

You need consistency between what's
in training and what's in serving.

And that's not something that's very
prevalent outside of online inference,

the concept of train-serve skew.
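
A minimal sketch of that consistency idea, assuming a hypothetical in-process feature registry rather than any particular feature store product: the same feature definition is read by both the training job and the serving path, so the logic cannot silently diverge:

    from datetime import date

    # Single source of truth: each feature is defined exactly once.
    FEATURE_DEFINITIONS = {
        "days_since_last_purchase":
            lambda raw: (raw["as_of"] - raw["last_purchase"]).days,
    }

    def get_features(raw_record):
        # Both the offline training job and the online inference API
        # call this same function.
        return {name: fn(raw_record)
                for name, fn in FEATURE_DEFINITIONS.items()}

    record = {"as_of": date(2025, 1, 15), "last_purchase": date(2025, 1, 1)}
    print(get_features(record))  # {'days_since_last_purchase': 14}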

Then you talk about like
the actual models:

the ranking model, graph
neural networks, and all that.

And then you talk about like AB testing.

And this is a critical
area that a lot of people

are not fully aware of.

They think that, oh, we'll just
delegate it to the platform teams

who will implement the AB test.

But unfortunately, it's much easier
for an ML engineer

to understand a, a bit more about
the statistics, to make sure the

entire experiment runs well, rather
than for the platform teams to

understand how your model's working.

So that's kind of like the motivation
to sort of nudge industry

practitioners to learn a little bit
more about the AB testing procedure.
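
As an example of the statistics worth knowing, here is a minimal sketch in Python of a two-proportion z-test on click-through rates; the counts are invented for illustration:

    from math import erf, sqrt

    def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
        p_a, p_b = clicks_a / n_a, clicks_b / n_b
        p_pool = (clicks_a + clicks_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    z, p = two_proportion_z(clicks_a=480, n_a=10_000,
                            clicks_b=540, n_b=10_000)
    print(f"z={z:.2f}, p={p:.3f}")  # ship B only if p clears your threshold

An ML engineer who can run this sanity check is far less likely to call a noisy uplift a win.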

Um, and then lastly, it talks a lot about
like how you can make the development

life cycle scalable across the enterprise.

And this is more catered towards managers,
senior managers, directors, leaders

in the, in the company who have, okay,
they now have a portfolio of projects.

Great, people can build
their models all day long.

That's not the problem anymore.

But instead, you cannot be building
a model for each bespoke use case.

How do you make your team more efficient?

And that's where representation
learning, customer embeddings, product

embeddings, using LLMs to enrich data.

All of that comes into the picture, right?

How do you set up centralized, uh, data
assets using machine learning techniques

so that other ML teams, uh, in domain
specific use cases would benefit?
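
A minimal sketch of that centralized-asset idea, using a toy matrix factorization in plain Python; the interactions, dimensions, and hyperparameters are all made up:

    import random

    random.seed(0)
    interactions = {("c1", "p1"): 1.0, ("c1", "p2"): 1.0,
                    ("c2", "p2"): 1.0, ("c2", "p3"): 1.0}
    dim, lr, epochs = 4, 0.05, 200

    cust_vecs = {c: [random.gauss(0, 0.1) for _ in range(dim)]
                 for c in {c for c, _ in interactions}}
    prod_vecs = {p: [random.gauss(0, 0.1) for _ in range(dim)]
                 for p in {p for _, p in interactions}}

    for _ in range(epochs):
        for (c, p), label in interactions.items():
            pred = sum(cu * pr for cu, pr in zip(cust_vecs[c], prod_vecs[p]))
            err = label - pred
            for k in range(dim):  # gradient step on both embedding tables
                cu, pr = cust_vecs[c][k], prod_vecs[p][k]
                cust_vecs[c][k] += lr * err * pr
                prod_vecs[p][k] += lr * err * cu

    # Downstream teams consume cust_vecs / prod_vecs as ready-made
    # features instead of each training a bespoke model.
    print(cust_vecs["c1"])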

So kind of like, so that's kind of
like the end-to-end sort of structure

of this book, which I hope to be
complementary to all of the other sort

of fairly solid blogs and tutorials
out there, which may not necessarily

get into like the enterprise
recommender systems development.

Chris: Okay.

Yeah.

And this may be an obvious question,
but who, who and what kind of people

do you think it will benefit the most?

Shreesha Jagadeesh: Sure.

I think the first half of the book
is, uh, or the first, I would say two

thirds of the book is mostly catered
towards, you know, the hands-on ML

engineers, researchers, uh, developers
who are coming in, who are already,

let's say in the field of data science
or machine learning for several years.

They've built models.

This is not their first sort of
ML model by any means, but now they

wanna get into recommender systems and

they wanna accelerate their journey.

So that's kind of like the
main audience for this book.

And then the second sort, sort of
like the, uh, not the second, but the

last one-third of the book is sort
of, as I mentioned, catered towards
of, as I mentioned, catered towards

leaders who lead, uh, recommendation
systems or personalization teams.

And they have, they're technical leaders.

They can understand what's going on,
but they may not necessarily be familiar

with the nuances of what, what goes
into recommendation and personalization.

So

Chris: Cool, sounds, uh, really exciting.

Um, what, what's the planned release
date, and, uh, you know, where,

where can people find it?

Shreesha Jagadeesh: yeah, I'm still
working with, uh, publishers to figure

out like who would be taking it up.

So this is more like, it's still
in sort of like a table of contents

stage at this moment, but yeah.

So I haven't really paused my writing
to sort of wait for the publishers.

I'm just saying, you know what,
I'm just gonna keep writing

as my ideas start flowing in.

Um, and then maybe eventually it
will kind of like converge to Okay,

the publisher's ready, I'm ready.

And kind of meet in the middle.

So you're, you're probably looking at
like late 2025 or early 2026 release?

Chris: Nice.

Well, uh, yeah, may, maybe we can
have you back on for 20 minutes,

Shreesha Jagadeesh: Yes.

Chris: to do, do a live release.

Um, cool.

Okay.

Thank you.

And the, the last part, just, just a
little bit of fun, uh, Shreesha, um, but

also just having lots of people like
yourself who are, um, you know, really

well accomplished in AI leadership.

And I'd just like to ask kind of
what AI tools they're using in their

kind of work life and, you know,

Shreesha Jagadeesh: Yeah,

Chris: to be, um, you know, squeeze
that extra bit of productivity

out of each workday.

Shreesha Jagadeesh: For sure, for sure.

I'm still, I guess, I guess.

I have a little bit of FOMO in
not using or not being able to

use all the tools out there.

But the ones that I use the most often
are, of course, like the, the copilot

capabilities in the, in the VS code.

Um, so writing unit tests or literally
just feeding in a very advanced paper

and say, Hey, convert this to code.

And that serves as a starting template
or structuring the projects better.

Right.

Um, so that's super helpful.

And then ChatGPT Deep Research I've used
very extensively to do like literature

review, find the state of the art papers
out there, um, do some compare and

contrast, kind of ask like deep dive.

So it saves me like hours and hours.

It does that in like 15 minutes
and now I have to just go out and

Chris: And how, how accurate
do you find the deep research?

You know, obviously you've got the

Shreesha Jagadeesh: Yeah.

Yeah.

Chris: to be able to challenge it.

How, how accurate do you

Shreesha Jagadeesh: So that's
a very interesting question.

I think, uh, I feel like the accuracy drops
off the longer the conversation goes.

So if I just start off with like
a fresh instance of deep research

and have like one or two prompts,
the, the results are extremely good.

Uh, they're mostly summarizing the
state of the art, but once you
start going kind of like, okay, you

have a multi-turn conversation, I
kind of ask it follow-up questions.

Can you kind of investigate
this a little bit more?

Now it starts getting, I wouldn't
necessarily call it hallucination, but

it, it starts mixing up terms because
it's not necessarily an expert in, in

recommender systems or personalization.

So it makes some assumptions
about the articles it reads.

Um, I think more, more than accuracy,
the one that I, I find somewhat

limiting in some situations is the
technical depth of the articles

compared to what I can find on my own.

So if you give me the arXiv explorer,
I can like quickly, in maybe like four

or five iterations, end up with like a
large catalog of very interesting papers

to read over the weekend, and then come back
with a good amount of literature review.

Um, I wish deep research capability
would evolve to that, but right now

I think there's still some prompts where,
I, I haven't really found a pattern of why,

in some situations, the literature review
it does is not as deep as I expected.

But yeah, it's not a hundred
percent, but it's a good starting

point for many use cases.

Chris: Cool.

Okay.

And, um, yeah, RecSys obviously,
um, LLMs into RecSys is kind of

Shreesha Jagadeesh: Yeah.

Chris: the next big thing.

Uh, like I say, a lot of marketplaces
I'm aware of are kind of looking at

it or, are already doing it.

But where do you see
the RecSys market going?

I think, um, obviously some of the
spotlight's gone onto other topics, but

still a great industry to be involved in.

Where, where do you see it heading?

Shreesha Jagadeesh: Um, again, this,
this, these are my personal opinions.

Um, I think the impact of LLMs
on RecSys is just getting started.

So far, as I mentioned, the, the cost of
inference and the speed, the latency of

inference, have been the two most critical
factors in using generative AI LLMs directly

in RecSys and personalization, right?

Many of the use cases that I've seen
in the industry generally focus on

offline, uh, product or customer data
enrichment, and then using that to

then train supervised traditional
ML models or deep learning models.

Um, where I think it would be a game
changer is if one of two things happens.

I mean, both will eventually happen; it's a
question of when. The cost, if it goes down

further, like let's say 50x, then you end
up in a situation where it, it kind of

makes sense to use LLMs everywhere, right?

That's not necessarily true right now.

Um, and looking at the trends, I
was actually doing some, uh, review

last week on what's the cost curve.

I think Andreessen Horowitz, the venture
capital firm, had a very good graph

last year where they have noticed a
10 x reduction in cost per million

tokens every year pretty much, right?

So if that trend keeps up, which I
think it will, we'll probably get

there in like a year and a half or so.

So it's not unforeseen.
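
The arithmetic behind that estimate, assuming the 10x-per-year trend holds:

    import math

    yearly_drop = 10   # cost per million tokens falls ~10x each year
    target = 50        # the total reduction mentioned above
    years = math.log(target) / math.log(yearly_drop)
    print(f"{years:.1f} years")  # ~1.7, i.e. roughly a year and a half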

The second aspect is the latency part.

This is the part that I'm not
fully sure when we would reach

a stage where, you know, real
time recommendations can happen.

Um, it might also be a question
of like moving targets as well.

We are still in the realm of like half a
second to one second latency for, uh,

like large, really large generative
AI models that have good accuracy.

Of course, you can distill a
large language model, get like

a small container and then try
to do inference, but then you're

compromising on the accuracy aspect.

So yeah, like trying to figure
out how to get that down.

And I think, so I was at the NVIDIA
GTC conference, um, in March, and then

NVIDIA has like very good infrastructure
for like hosting your own LLMs, right?

And, and that might be a trend
that we need to keep an eye on.

Like, they have all the best sort of
GPU engineers in the world and they

know how to optimize LLMs to run better
on their infrastructure, and then

they create those microservices that
make it available to people, right?

So, so those are sort of the
things that I think would, would

transform recommender systems.

Um, just like figuring out how to
incorporate the advantages of reasoning
models, generative

AI models in, in the recommendation space.

Of course, if you're just building
like a, a conversational search, I

think that area is already there.

I think a lot of companies are already
using conversational chatbots powered by

some OpenAI or Gemini API in the back end.

Chris: Okay.

Cool.

Well, uh, yeah, that, that
wraps everything up for today.

Thanks so much for coming on, sharing
your knowledge, sharing the RecSys paper,

uh, sharing your future predictions, and,
um, yeah, I'm sure a lot of people will

find, um, some golden nuggets to help with
their career or, uh, their RecSys projects.

Um, so yeah, thanks so much.

I appreciate your time and,
and sharing your knowledge.

Shreesha Jagadeesh: Wonderful.

Thank you, Chris, for setting up this, uh,
podcast, and I love your previous episodes.

They cover a wide range of diverse topics:
AI, aging, government, VC speakers.

So yeah, so thanks for bringing me in.

Honored.

Thank you.