1
00:01:04,940 --> 00:01:06,440
Well, hello, everyone.

2
00:01:06,440 --> 00:01:07,900
Long time no see.

3
00:01:07,900 --> 00:01:11,760
And welcome to another exciting episode of Pharma Insights,

4
00:01:11,900 --> 00:01:18,880
the talk show brought to you by Platforce,
where we explore the intersection of technology, strategy,

5
00:01:18,960 --> 00:01:21,720
and innovation in the pharmaceutical industry.

6
00:01:21,720 --> 00:01:25,660
For all of you who do not know me,
my name is Juliana Kreisel,

7
00:01:25,660 --> 00:01:28,760
and I'll be one of your moderators for today.

8
00:01:29,430 --> 00:01:34,270
Do you want to harness the power of AI agents into your business?

9
00:01:34,440 --> 00:01:40,560
Do you want to discover how these technologies
can enhance innovation and streamline processes in pharma?

10
00:01:40,560 --> 00:01:42,640
Then you are in the right place.

11
00:01:42,640 --> 00:01:51,230
The healthcare landscape is rapidly evolving and artificial intelligence
is at the forefront of this transformation.

12
00:01:51,230 --> 00:02:01,350
As AI agents become increasingly integral to pharma operations,
understanding the role in driving innovation is crucial.

13
00:02:01,350 --> 00:02:02,750
In this episode,

14
00:02:02,750 --> 00:02:06,620
We will explore how AI agents are reshaping strategies

15
00:02:06,620 --> 00:02:11,160
and enhancing operational efficiency within the pharmaceutical industry.

16
00:02:11,580 --> 00:02:14,700
For this, we have prepared a special panel,

17
00:02:14,700 --> 00:02:23,600
but before we introduce them, I need to introduce
my co-host and colleague from Platforce,

18
00:02:23,640 --> 00:02:26,000
Mr. Stefan Rappin, head of marketing.

19
00:02:26,190 --> 00:02:28,210
Hello, Stefan.

20
00:02:28,210 --> 00:02:29,470
Namaste, everyone.

21
00:02:29,470 --> 00:02:30,670
How is everyone?

22
00:02:30,670 --> 00:02:32,580
It's evening here in Europe,

23
00:02:32,580 --> 00:02:34,160
where I'm based right now.

24
00:02:34,160 --> 00:02:39,300
I'm super happy to talk to you guys about anything other than ChatGPT.

25
00:02:39,460 --> 00:02:45,580
Today we're talking about a pretty interesting thing called AI agents.

26
00:02:47,460 --> 00:02:55,240
They're so unique to pharma, I would say,
that it made sense for us to get the best of the best.

27
00:02:55,380 --> 00:03:02,410
And we got Harvey, Salva, and one more guest from Microsoft,
a very special guest for you.

28
00:03:03,480 --> 00:03:06,000
You don't want to spoil it?

29
00:03:06,000 --> 00:03:10,180
I don't want to spoil it because the guest is very special.

30
00:03:10,880 --> 00:03:15,320
It took a lot of effort to get that person with us.

31
00:03:15,520 --> 00:03:18,500
And I would welcome the guest to the stage.

32
00:03:18,620 --> 00:03:21,500
So enjoy, and please ask us questions.

33
00:03:21,500 --> 00:03:24,820
Ask all the participants questions.

34
00:03:24,820 --> 00:03:27,360
We will be delighted to answer.

35
00:03:27,360 --> 00:03:28,600
Thank you.

36
00:03:28,820 --> 00:03:29,540
Thank you.

37
00:03:29,540 --> 00:03:32,040
So first we have Salvador

38
00:03:32,040 --> 00:03:32,790
Carlucci.

39
00:03:32,790 --> 00:03:34,630
Hi Salvador, how are you doing today?

40
00:03:34,630 --> 00:03:36,130
Hi Juliana, hi Stefan.

41
00:03:36,130 --> 00:03:36,940
Good to see you.

42
00:03:36,940 --> 00:03:39,660
Thank you very much for the invitation to be here.

43
00:03:39,660 --> 00:03:42,080
It is a very exciting topic.

44
00:03:42,080 --> 00:03:45,840
I do believe artificial intelligence is probably the technology

45
00:03:45,840 --> 00:03:51,260
that we will experience in our lifetime that
is going to revolutionize the way we see the world.

46
00:03:51,950 --> 00:03:56,280
For us at Atacana, we believe there is also a change in

47
00:03:56,280 --> 00:04:00,280
how we make decisions within the pharmaceutical industry,

48
00:04:00,750 --> 00:04:05,060
but also how companies are going to compete in the new market as AI

49
00:04:05,060 --> 00:04:09,620
gets embedded within the different parts of the organization.

50
00:04:09,870 --> 00:04:15,380
I believe AI agents are probably the first step
that we're going to see that's going to be concrete

51
00:04:15,380 --> 00:04:17,000
on how we interact with them.

52
00:04:17,000 --> 00:04:19,620
Then I do believe there's going to be agentic workflows,

53
00:04:19,620 --> 00:04:24,680
AI teams that we're going to have to put in place as the technology evolves.

54
00:04:25,170 --> 00:04:27,530
My background has been in Big Pharma.

55
00:04:27,530 --> 00:04:30,650
I was the head of global competitive intelligence for Novartis.

56
00:04:30,800 --> 00:04:36,050
I was the global head of competitive intelligence for Roche
and then left corporate to set up Atacana.

57
00:04:36,050 --> 00:04:40,170
We're working with 9 out of the top 10 biggest pharmaceutical companies.

58
00:04:40,170 --> 00:04:44,340
And a lot of the discussions, I'll say, especially this year, have been around

59
00:04:44,340 --> 00:04:49,900
artificial intelligence, and in particular AI agents,
and how we can leverage them to

60
00:04:49,970 --> 00:04:51,450
make better decisions.

61
00:04:51,670 --> 00:04:52,360
Nice.

62
00:04:52,360 --> 00:04:58,060
And I know, well, of course I know, that you have
a special invitation for our attendees.

63
00:04:58,060 --> 00:04:59,790
So I'm going to share it here

64
00:04:59,790 --> 00:05:01,230
on our screen.

65
00:05:02,130 --> 00:05:08,620
We have a masterclass coming up:
Transforming Decision Making in Pharma with AI.

66
00:05:08,620 --> 00:05:12,060
Guys, you need to go into this.

67
00:05:12,060 --> 00:05:13,740
So register.

68
00:05:14,620 --> 00:05:15,560
Yes.

69
00:05:15,590 --> 00:05:17,720
You can follow me on LinkedIn also.

70
00:05:17,740 --> 00:05:19,610
And then I can share the invitation.

71
00:05:20,020 --> 00:05:25,660
Since we're working with the pharmaceutical industry
and big pharma, a lot of the conversation is:

72
00:05:25,660 --> 00:05:29,220
can we use AI agents to process large amounts of data

73
00:05:29,390 --> 00:05:30,540
and drive new insights?

74
00:05:30,540 --> 00:05:35,430
And I think it's very fitting here with the Pharma Insights show.

75
00:05:35,740 --> 00:05:39,760
But it's really this idea on how do we see the technology evolving?

76
00:05:39,760 --> 00:05:43,120
And how do we see this technology being embedded in decision making

77
00:05:43,120 --> 00:05:47,160
within the pharmaceutical industry, from preclinical all the way to loss of exclusivity.

78
00:05:47,280 --> 00:05:55,320
In the masterclass, we go a little bit deeper in terms of
how we see this evolving, the agents evolving over time.

79
00:05:57,500 --> 00:05:58,640
Thank you.

80
00:05:58,640 --> 00:05:59,310
Guys,

81
00:05:59,310 --> 00:06:00,460
register.

82
00:06:00,800 --> 00:06:05,250
Now I will introduce our second speaker for today, Mr.

83
00:06:05,250 --> 00:06:06,140
Harvey Castro.

84
00:06:06,140 --> 00:06:06,820
Hello, Harvey.

85
00:06:06,820 --> 00:06:08,170
How are you doing?

86
00:06:12,270 --> 00:06:14,380
Yeah, I cannot hear you.

87
00:06:16,360 --> 00:06:17,220
It's okay.

88
00:06:17,220 --> 00:06:17,610
okay.

89
00:06:17,610 --> 00:06:18,040
Where's the...

90
00:06:18,040 --> 00:06:18,690
I had muted it.

91
00:06:18,690 --> 00:06:20,160
I wanted to make sure.

92
00:06:20,500 --> 00:06:21,300
Thanks for having me.

93
00:06:21,300 --> 00:06:22,900
I'm excited to be here.

94
00:06:23,140 --> 00:06:23,660
Thank you.

95
00:06:23,660 --> 00:06:24,540
Thank you so much.

96
00:06:24,540 --> 00:06:30,150
And now we're going to introduce the spoiler
that Stefan didn't want to introduce first.

97
00:06:30,150 --> 00:06:31,030
We have Mr.

98
00:06:31,030 --> 00:06:32,570
Shadab.

99
00:06:32,570 --> 00:06:33,060
Hello.

100
00:06:33,060 --> 00:06:34,830
How are you doing today?

101
00:06:35,630 --> 00:06:37,770
You're the special guest.

102
00:06:38,130 --> 00:06:40,190
Stefan's special guest, I will say.

103
00:06:40,190 --> 00:06:41,590
Hello, everyone.

104
00:06:43,330 --> 00:06:44,570
Okay, so...

105
00:06:44,710 --> 00:06:46,450
We have introduced the panel.

106
00:06:46,450 --> 00:06:48,470
We have introduced the topic.

107
00:06:48,470 --> 00:06:54,130
So I think now is the time to start discussing a little bit about this.

108
00:06:54,130 --> 00:07:04,510
So my first question, and it's something that we were
discussing off camera before, is: what are AI agents?

109
00:07:04,510 --> 00:07:07,090
Salvador, do you want to start with that?

110
00:07:07,270 --> 00:07:08,250
Yeah, I'm looking.

111
00:07:08,250 --> 00:07:12,030
I'm very curious to hear from Harvey and Shadab on their definition of AI agents.

112
00:07:12,030 --> 00:07:14,390
I think there have been multiple definitions,

113
00:07:14,760 --> 00:07:16,500
and different companies describe it in different ways.

114
00:07:16,500 --> 00:07:23,000
The way we kind of simplify the definition is that an
AI agent has some type of reasoning and planning.

115
00:07:23,000 --> 00:07:27,180
I don't think they acknowledge this there yet,
but they have a large language model.

116
00:07:27,440 --> 00:07:29,460
It's kind of the core component.

117
00:07:29,460 --> 00:07:30,480
Then they have a purpose.

118
00:07:30,480 --> 00:07:32,930
And for us, the purpose can be the system prompt.

119
00:07:32,930 --> 00:07:35,760
So what is the job description that the AI agent has?

120
00:07:35,760 --> 00:07:38,960
And then they have a task that they need to complete,
which might be the short prompt,

121
00:07:38,960 --> 00:07:42,000
or what they need to do, like summarizing a particular document.

122
00:07:42,860 --> 00:07:44,970
And then for us, AI agent also has a skill set.

123
00:07:44,970 --> 00:07:47,790
The skill set can be access to a vector database.

124
00:07:47,790 --> 00:07:49,160
It can be access to the web.

125
00:07:49,160 --> 00:07:51,000
It can be access to SQL database.

126
00:07:51,000 --> 00:07:53,670
So then these agents now can interact with the external world.

127
00:07:53,670 --> 00:07:59,280
I think Claude 3.5 came out this week, and it can control a computer.

128
00:07:59,280 --> 00:08:04,400
So I think this skill set is also going to evolve
as they also get smarter and multi-modal.
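A minimal sketch of the agent anatomy Salvador describes here, for readers who want to see it in code: an LLM core, a purpose (the system prompt, or "job description"), a task, and a skill set of tools such as a vector database, the web, or SQL. The Agent class, the call_llm stub, and the example skills are illustrative assumptions, not any specific framework's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a call to any large language model API (assumption, not a real provider)."""
    raise NotImplementedError("plug in your LLM provider here")

@dataclass
class Agent:
    purpose: str  # system prompt: the agent's "job description"
    skills: Dict[str, Callable[[str], str]] = field(default_factory=dict)  # tools the agent can use

    def run(self, task: str) -> str:
        # Gather context from each skill (tool), then let the LLM core reason over the task.
        context = "\n".join(f"{name}: {skill(task)}" for name, skill in self.skills.items())
        return call_llm(self.purpose, f"Task: {task}\nContext:\n{context}")

# Example wiring: the skills below are placeholders for a vector-database lookup,
# a web search, and a SQL query against internal data.
ci_agent = Agent(
    purpose="You are a competitive-intelligence analyst for a pharma company.",
    skills={
        "vector_db": lambda q: f"(top documents retrieved for: {q})",
        "web": lambda q: f"(latest public news for: {q})",
        "sql": lambda q: f"(rows returned from internal database for: {q})",
    },
)
```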

129
00:08:04,520 --> 00:08:06,180
OK.

130
00:08:06,180 --> 00:08:08,690
Harvey, do you want to give your insights as well?

131
00:08:08,690 --> 00:08:09,380
Yeah.

132
00:08:09,380 --> 00:08:12,940
I see agents kind of like an AI detective

133
00:08:12,940 --> 00:08:18,990
aware of its environment, looking for patterns
that you can have him or her help you find things.

134
00:08:18,990 --> 00:08:23,960
I personally think of it just as an assistant, my digital assistant, and

135
00:08:23,960 --> 00:08:25,960
it's able to find things and observe, and it
doesn't ever get tired, it works 24/7, it helps me out.

136
00:08:30,460 --> 00:08:31,380
Nice.

137
00:08:31,380 --> 00:08:38,690
And Shadab, I know you have more insights into the AI spectrum at Microsoft.

138
00:08:38,690 --> 00:08:42,730
Do you want to give your insights, especially since you are

139
00:08:42,730 --> 00:08:47,410
in pharma but in other markets as well?

140
00:08:47,550 --> 00:08:58,450
Sure. You know, an agent is someone who has nice cars,
a lot of money, travels all around the world, and has a license to kill.

141
00:08:58,450 --> 00:09:00,190
Joking, that's the wrong agent.

142
00:09:00,190 --> 00:09:01,870
We're talking about the AI agent, right?

143
00:09:01,870 --> 00:09:12,140
So the AI agent is really, you know, an assistant
who has the ability to plan, to reason, to orchestrate,

144
00:09:12,140 --> 00:09:18,360
to respond to percepts from the environment and take actions on your behalf, right?

145
00:09:18,800 --> 00:09:21,160
And also importantly, learn.

146
00:09:21,160 --> 00:09:22,820
So it continues to learn as well.

147
00:09:22,820 --> 00:09:26,450
And that's the definition of the modern AI autonomous agents.

148
00:09:26,450 --> 00:09:27,770
That's what it is.

149
00:09:27,770 --> 00:09:32,160
And the good thing about these agents
is that they just cut across industries,

150
00:09:32,160 --> 00:09:33,670
not just pharma.

151
00:09:33,670 --> 00:09:35,640
They just cut across every industry.

152
00:09:35,640 --> 00:09:41,040
In fact, the most popular example of an agent I can give you is your own Siri.

153
00:09:41,040 --> 00:09:43,870
For example, it does things on your behalf.

154
00:09:43,870 --> 00:09:45,490
It's a great example of it.

155
00:09:45,490 --> 00:09:57,780
Now think of it in different settings, whether it's an agent for a doctor
to help with the notes or diagnosis, or it's even an agent acting on your own behalf.

156
00:09:57,780 --> 00:10:04,620
So everybody can have their own agent, and then those agents
can work with even other agents or other humans to do it.

157
00:10:04,620 --> 00:10:05,980
So consider this.

158
00:10:05,980 --> 00:10:09,420
This is like an assistant for specialist tasks.

159
00:10:09,420 --> 00:10:11,130
That's what I will define it as.

160
00:10:11,700 --> 00:10:12,270
Love it.

161
00:10:12,270 --> 00:10:13,810
Thank you for that.

162
00:10:14,050 --> 00:10:22,390
We were discussing that agents can learn, or one of the insights about agents
is that they are continuously learning.

163
00:10:22,390 --> 00:10:28,700
Do you think that they can predict market trends
and consumer behaviors in the pharmaceutical industry?

164
00:10:28,700 --> 00:10:30,210
Can we use it that way?

165
00:10:30,210 --> 00:10:40,680
Or do you think it's too much of, I don't know,
a future-vision kind of thing, not grounded in information?

166
00:10:41,620 --> 00:10:49,410
So my take on that would be that, look, you know, your AI
has learned things from what has been generated by humans.

167
00:10:49,410 --> 00:10:52,640
It is not creating something totally out of place, right?

168
00:10:52,640 --> 00:10:56,120
It is generating based on what it has learned.

169
00:10:56,920 --> 00:11:00,790
So it cannot speak a language, which it has not learned, for example.

170
00:11:00,790 --> 00:11:05,640
So it can predict based on the pattern that it has seen.

171
00:11:05,700 --> 00:11:11,100
But if, let's say, an unprecedented event happened
and things completely changed,

172
00:11:11,100 --> 00:11:14,380
that would disturb the data patterns on which it has learned.

173
00:11:14,380 --> 00:11:19,950
So as long as the patterns are being followed,
it's going in the same direction using the patterns, yes, it can forecast.

174
00:11:19,950 --> 00:11:21,180
It can forecast trends.

175
00:11:21,180 --> 00:11:27,200
It can forecast what it has learned, but it cannot
create something totally new altogether.

176
00:11:27,720 --> 00:11:28,860
OK.

177
00:11:28,940 --> 00:11:32,280
Salvador, do you have something to add to that?

178
00:11:34,200 --> 00:11:40,250
I think with the clients that we work with, they rely on
accurate information and the latest information.

179
00:11:40,250 --> 00:11:48,140
So what we have found is that we need to be providing
the latest data that sometimes is not in the public domain to these agents.

180
00:11:48,140 --> 00:11:54,040
So a lot of what we're doing is figuring out how we
do the automation so that the agent, either

181
00:11:54,040 --> 00:11:58,320
through a vector database or SQL, can access
the latest information that is being

182
00:11:58,540 --> 00:12:00,360
created on the client side.

183
00:12:00,360 --> 00:12:03,310
So maybe internal information that is not in the public domain,

184
00:12:03,310 --> 00:12:08,440
or information that maybe is restricted for whatever reason,

185
00:12:08,440 --> 00:12:13,340
to make sure that those agents are working
on very specific tasks to get access to that.

186
00:12:13,790 --> 00:12:22,650
Yeah, and I want to add to that: think of the agents as different tasks,
and so you can assign an agent to every little task.

187
00:12:22,650 --> 00:12:30,350
And so for that portion, then when you mentioned predictive analytics,
I mean, we could have an agent that's looking into the market in real time.

188
00:12:30,350 --> 00:12:33,860
and constantly searching, and then every time
something new comes in, it has that.

189
00:12:33,860 --> 00:12:38,050
Then you can have another agent to validate what that agent said.

190
00:12:38,050 --> 00:12:42,640
And then you can have another agent
that's coordinating all the agents for the bigger picture.

191
00:12:42,640 --> 00:12:52,570
And so you can really segment this down into different parts to have those
predictive analytics tailored and customized to what you're looking for.

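A rough sketch of the multi-agent pattern Harvey outlines: one agent watches the market, a second validates its findings, and a coordinator combines them into the bigger picture. The run_llm stub and the role prompts are illustrative assumptions only, not a specific product's API.

```python
from typing import Callable, List

def run_llm(role: str, prompt: str) -> str:
    """Stand-in for a call to any LLM with a role-specific system prompt (assumed, not a real API)."""
    raise NotImplementedError("plug in an LLM provider here")

def market_watcher(topic: str) -> str:
    # Agent 1: scans the market for new developments on a topic.
    return run_llm("You scan market news in real time.", f"Summarize new developments on: {topic}")

def validator(finding: str) -> str:
    # Agent 2: fact-checks another agent's output.
    return run_llm("You fact-check another agent's output.", f"Validate and flag weak claims in: {finding}")

def coordinator(topic: str, workers: List[Callable[[str], str]]) -> str:
    # Agent 3: runs the worker agents, validates each finding, and synthesizes the result.
    findings = [worker(topic) for worker in workers]
    validated = [validator(finding) for finding in findings]
    return run_llm("You synthesize validated findings into one view.", "\n".join(validated))

# Example (hypothetical topic): coordinator("new product launches in oncology", [market_watcher])
```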
192
00:12:52,770 --> 00:12:58,950
Obviously, when you have these agents going,
every time one does a query, a question, it's going to take up some

193
00:12:59,450 --> 00:13:04,210
amount of tokens, and that amount of tokens
may translate into money.

194
00:13:04,210 --> 00:13:09,160
So depending on how long you guys are running these agents,
that's going to determine how much it's going to cost you.

195
00:13:09,160 --> 00:13:12,900
So it's not as simple as saying, okay,
I'm just going to run this agent all week.

196
00:13:12,900 --> 00:13:16,920
You may have a nice bill at the end of the month
if you haven't been careful.

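A back-of-envelope sketch of the cost point raised here: tokens per query, times queries per day, times a price per million tokens. The volumes and the rate below are placeholder assumptions, not any provider's actual pricing.

```python
def monthly_cost(queries_per_day: int, tokens_per_query: int,
                 usd_per_million_tokens: float, days: int = 30) -> float:
    """Rough monthly spend for an always-on agent, under assumed volumes and prices."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Example with assumed numbers: an agent polling ~500 times a day at ~8,000 tokens per call,
# at an assumed $10 per million tokens, comes to about $1,200 a month.
print(monthly_cost(queries_per_day=500, tokens_per_query=8_000, usd_per_million_tokens=10.0))
```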
197
00:13:17,500 --> 00:13:23,430
The good thing, Harvey, is that companies like Microsoft,
they give you money to run some of these experiments.

198
00:13:23,430 --> 00:13:27,510
So we did get a $50,000 grant from Microsoft.

199
00:13:27,510 --> 00:13:28,920
So thank you, Shadab,

200
00:13:28,960 --> 00:13:32,900
for that, that allows me to do a lot of these experiments.

201
00:13:33,140 --> 00:13:35,410
So now I know who to call when I need some tokens.

202
00:13:35,410 --> 00:13:40,340
There you go, see we're building connections here.

203
00:13:40,660 --> 00:13:46,610
You know, a fun fact: in the last two years
the cost of tokens has gone down 99%.

204
00:13:46,610 --> 00:13:50,620
So cost is not so much of an issue, and it will continue to go down, right?

205
00:13:50,620 --> 00:13:59,100
I mean, the way things are going, it's such a competitive environment
that cost would hardly be a constraint, honestly.

206
00:13:59,970 --> 00:14:01,390
Yeah, whatever you want to.

207
00:14:01,390 --> 00:14:06,450
Like, I've seen that a lot of people are coming out with agents.

208
00:14:07,090 --> 00:14:12,700
We were discussing it with Stefan
at the last Dreamforce event from Salesforce,

209
00:14:12,700 --> 00:14:18,060
they were also announcing their AI agents
and how everything is a revolution,

210
00:14:18,960 --> 00:14:23,460
making a revolution in every single industry, as we discussed before.

211
00:14:24,640 --> 00:14:32,460
But I want to know: how do you think AI agents will transform
pharmaceutical research and development processes, for example?

212
00:14:33,640 --> 00:14:35,800
Harvey, do you want to take on that?

213
00:14:36,000 --> 00:14:40,590
Yeah, as a doctor, one of the main things I'm always looking at is data.

214
00:14:40,590 --> 00:14:42,090
How good is that data?

215
00:14:42,090 --> 00:14:45,250
It's the famous phrase in AI: garbage in, garbage out.

216
00:14:45,250 --> 00:14:51,880
And so if I can point the AI to start looking at certain databases and do that research
for me,

217
00:14:51,880 --> 00:14:52,880
Not that I'm being lazy.

218
00:14:52,880 --> 00:15:02,740
If I'm trying to go through an enormous amount of data and information,
AI can bring that data to me, and I can have strict criteria for what I'm looking for.

219
00:15:02,740 --> 00:15:09,200
Then it brings that data to me and then now I can go into the weeds
and see every article and make sure that that makes sense.

220
00:15:09,200 --> 00:15:14,660
When it comes to clinical trials, there's so many issues,
but one of them, one of the main issues

221
00:15:14,660 --> 00:15:18,580
is making sure that the person
that I'm trying to study is the correct person for

222
00:15:18,680 --> 00:15:21,020
the right problem, for the right solution.

223
00:15:21,070 --> 00:15:25,090
Having AI be able to better predict: is that patient qualified?

224
00:15:25,090 --> 00:15:27,630
Is that person something that can be in this study?

225
00:15:27,630 --> 00:15:31,790
And then once I have you in my clinical study, are you going to stay?

226
00:15:31,790 --> 00:15:33,670
Are you just here for a month?

227
00:15:33,670 --> 00:15:35,990
Are you going to stay the whole trial or are you just in and out?

228
00:15:35,990 --> 00:15:39,180
So then to be able to use another agent to analyze the data

229
00:15:39,180 --> 00:15:41,180
and say, okay, this population is more likely to stay
and here are the reasons why, and then I can totally capture that.

230
00:15:45,550 --> 00:15:50,720
And so there's many ways of using an agent to help during clinical trials for sure.

231
00:15:52,010 --> 00:15:53,100
Thank you for that.

232
00:15:53,100 --> 00:15:56,120
And Salvador, do you want to add?

233
00:15:56,740 --> 00:16:03,950
Yeah, I think depending on where we look at the pharmaceutical industry
and where we look at drug development, there are different opportunities.

234
00:16:04,140 --> 00:16:08,100
I think Moderna did a great job showing
how much you can do with AI, right?

235
00:16:08,100 --> 00:16:10,260
They developed the COVID vaccine.

236
00:16:10,260 --> 00:16:19,530
From the time that they got the sequence from China, it only took two days
to develop the vaccine, and then it took a year to test it and build the capacity.

237
00:16:19,530 --> 00:16:21,840
But even the manufacturing capacity was done

238
00:16:21,930 --> 00:16:22,880
with AI.

239
00:16:22,880 --> 00:16:26,480
So I think we're moving from drug discovery to drug engineering,

240
00:16:26,480 --> 00:16:30,420
where it's not so much about trying to figure out what kind of drug
we should develop; it's more about being

241
00:16:30,520 --> 00:16:31,570
more intentional.

242
00:16:31,570 --> 00:16:32,720
This is what we're going to do.

243
00:16:32,720 --> 00:16:36,740
So I think there's going to be tremendous change
and hopefully a lot more breakthrough

244
00:16:36,740 --> 00:16:40,000
therapies that are actually going to cure diseases in the near future.

245
00:16:40,350 --> 00:16:45,180
And then when we think about the commercialization part
or even some of the clinical trial,

246
00:16:45,180 --> 00:16:50,940
this idea of digital twins or digital humans,
where now you can role-play a patient,

247
00:16:51,290 --> 00:16:54,560
how they feel, what they think about a specific drug.

248
00:16:54,590 --> 00:16:58,800
So you can train these models
with patient data and then they can role play.

249
00:16:58,800 --> 00:17:01,820
So then again, that's probably gonna speed up also how we think about

250
00:17:01,820 --> 00:17:06,840
bringing these drugs to the market and also
what is the right patient to bring into these clinical trials.

251
00:17:08,630 --> 00:17:10,520
Thank you.

252
00:17:10,520 --> 00:17:11,910
So we were, yeah.

253
00:17:11,910 --> 00:17:18,900
And let me just quickly add, not to forget that
this year's Nobel Prize in chemistry

254
00:17:18,900 --> 00:17:23,460
was for using AI to actually discover protein structures.

255
00:17:23,540 --> 00:17:25,390
And think about it, what it would lead to.

256
00:17:25,390 --> 00:17:33,000
It would actually lead to a lot of R&D, new drugs, new molecule
or protein discoveries, etc., that would eventually help.

257
00:17:33,140 --> 00:17:36,220
And ultimately, all of that you can package within the agents as well.

258
00:17:36,220 --> 00:17:43,500
So there could be research agents literally working alongside
the humans and helping with this protein structure.

259
00:17:43,500 --> 00:17:48,080
And then eventually that will lead to all
these new drugs and treatments, hopefully.

260
00:17:48,780 --> 00:18:01,260
My question now will be related to the big, I will say, the big scary part of AI:
losing jobs. It's a trending topic that some...

261
00:18:01,260 --> 00:18:09,220
Some people believe that AI will bring a lot of layoffs
due to the work that agents can do, as Harvey said before.

262
00:18:09,220 --> 00:18:19,860
They can do enormous tasks or a lot of tasks in a short amount of time.

263
00:18:19,860 --> 00:18:28,930
Do you think that it will impact the job market,
or do you think that it's, as another speaker told us, a tool

264
00:18:28,930 --> 00:18:37,450
that we need to learn how to use, so we can re-utilize it
and add more value to our workload right now?

265
00:18:37,450 --> 00:18:39,470
Harvey, do you want to take on that?

266
00:18:39,470 --> 00:18:40,350
Yeah.

267
00:18:40,350 --> 00:18:48,240
So my favorite phrase in the AI world is:
human plus AI will beat the best human or the best AI.

268
00:18:48,240 --> 00:18:51,520
It's working together that will give you a better output.

269
00:18:51,520 --> 00:18:55,380
The second thing I want to say is I call it the great shift.

270
00:18:55,720 --> 00:18:58,800
Pretend I'm a dermatologist,

271
00:18:59,020 --> 00:19:04,190
and you're a family practice doctor
and you're using AI, am I going to fire you?

272
00:19:04,190 --> 00:19:10,920
No, what's going to happen is AI is going to enable you
to do more things within your scope and maybe broaden your scope.

273
00:19:10,920 --> 00:19:17,600
And some of the things that you might have sent to me as a dermatologist
will change because AI is going to start helping you to some degree.

274
00:19:17,600 --> 00:19:24,580
And then me as a dermatologist, I'm going to start seeing sicker patients
and more concentrated on things that I need to do.

275
00:19:24,580 --> 00:19:26,000
So I call it the great shift.

276
00:19:26,000 --> 00:19:26,490
Then,

277
00:19:26,490 --> 00:19:29,500
for job security, people are like, well, I'm going to lose my job.

278
00:19:29,500 --> 00:19:31,590
My answer to that is, it may be a shift.

279
00:19:31,590 --> 00:19:34,360
You may do your job a little differently.

280
00:19:34,360 --> 00:19:36,980
And so that's my two cents on that.

281
00:19:37,680 --> 00:19:38,760
Thank you.

282
00:19:38,760 --> 00:19:42,240
Shadab or Salvador, whoever.

283
00:19:42,380 --> 00:19:44,110
Go ahead, Salvador.

284
00:19:45,750 --> 00:19:50,960
I think certain tasks are going to disappear
from the job description or the job profile.

285
00:19:50,960 --> 00:19:54,860
So I think certain low-level tasks like summarizing,

286
00:19:55,020 --> 00:20:00,130
going through documents to extract the relevant information;
we have a lot of entry-level analyst jobs.

287
00:20:00,130 --> 00:20:01,580
A lot of times they do that.

288
00:20:01,580 --> 00:20:04,090
I think those types of activities will probably disappear.

289
00:20:04,090 --> 00:20:11,260
Like Harvey said, though, you still need a human
to bring the creativity, to bring the politics of the organization,

290
00:20:11,260 --> 00:20:15,220
to bring the influencing and negotiation components of

291
00:20:15,340 --> 00:20:20,630
how they reposition that information in a better way.

292
00:20:20,630 --> 00:20:22,420
So, definitely I'll say tasks

293
00:20:22,420 --> 00:20:23,320
are going to disappear.

294
00:20:23,320 --> 00:20:27,520
And if people are focusing on very specific tasks,
then of course those people need to be re-skilled

295
00:20:27,520 --> 00:20:31,340
in order to take advantage of the technology.

296
00:20:33,890 --> 00:20:35,810
That is true.

297
00:20:39,090 --> 00:20:42,160
The thing is that, for example, one of the surveys that happened recently

298
00:20:42,160 --> 00:20:48,500
predicted that between 2030 and 2060,
half of the things that we do today

299
00:20:48,500 --> 00:20:51,860
as tasks will be done by the AI,

300
00:20:51,960 --> 00:20:52,890
at least.

301
00:20:52,890 --> 00:20:55,620
Or they will disappear altogether.

302
00:20:55,940 --> 00:20:59,640
So the thing is that the jobs will definitely change.

303
00:21:00,000 --> 00:21:04,470
And that would mean that a lot of
the drudgery of the work will actually disappear.

304
00:21:04,470 --> 00:21:11,750
And to be honest, not just the drudgery of the work,
like even highly intelligent work, like coding, etc.

305
00:21:11,750 --> 00:21:13,460
Honestly, that will also be done.

306
00:21:13,460 --> 00:21:16,550
It's just that people will be much more productive now.

307
00:21:16,550 --> 00:21:18,810
That means that you will probably need fewer people.

308
00:21:18,810 --> 00:21:22,310
However, that doesn't mean that the job disappears.

309
00:21:22,310 --> 00:21:24,970
Well, that did not happen with the advent of the computer, right?

310
00:21:24,970 --> 00:21:31,610
I mean, it just exploded into altogether new areas of the economy,
which nobody expected before, right?

311
00:21:32,010 --> 00:21:38,990
Imagine those industries when the industrial age came;
the same fears and apprehensions were there as well.

312
00:21:38,990 --> 00:21:42,060
And I'm not saying that this fear is unfounded.

313
00:21:42,060 --> 00:21:43,590
Yes, the jobs will change.

314
00:21:43,590 --> 00:21:45,850
Yes, some jobs will disappear.

315
00:21:45,850 --> 00:21:46,950
That's true.

316
00:21:46,950 --> 00:21:49,150
But there are new opportunities that will also be created.

317
00:21:49,150 --> 00:21:52,260
What is important is for people to understand that

318
00:21:52,320 --> 00:21:55,240
there is a shift coming.

319
00:21:55,240 --> 00:21:58,960
So they need to actually just essentially change with the times,

320
00:21:58,960 --> 00:22:04,620
not trying to resist it, but just trying to skill yourself
for the new expectations that you will have in the job.

321
00:22:04,770 --> 00:22:06,070
That's what it is.

322
00:22:07,270 --> 00:22:07,850
Thank you.

323
00:22:07,850 --> 00:22:09,220
Thank you for that.

324
00:22:09,260 --> 00:22:11,870
Stef, do you have a question for the panel?

325
00:22:14,340 --> 00:22:15,300
You're muted.

326
00:22:18,220 --> 00:22:21,200
Question, in fact, for the panel.

327
00:22:21,320 --> 00:22:27,360
So I was listening very carefully to what Harvey said,
and he as a clinician, as a doctor himself,

328
00:22:27,360 --> 00:22:34,380
I guess he sees a lot of potential improvements
that AI can help with in pharma and

329
00:22:34,810 --> 00:22:38,330
my, you know, I like to dig, I like to ask hard questions.

330
00:22:38,330 --> 00:22:44,090
So my question is, considering that AI is so advanced,
we still have the human factor.

331
00:22:44,090 --> 00:22:46,090
And what's the problem?

332
00:22:46,090 --> 00:22:47,990
Why is

333
00:22:47,990 --> 00:22:54,570
pharma so resistant to change and not getting ahead with AI?

334
00:22:54,570 --> 00:23:03,190
What's the problem in integrating AI into critical processes
that pharma has, like drug discovery, R&D, and so on?

335
00:23:03,190 --> 00:23:08,240
Because these things, like you spend
a lot of money as a pharma company on this.

336
00:23:08,240 --> 00:23:09,330
How is it integrated?

337
00:23:09,330 --> 00:23:10,420
What are the problems?

338
00:23:10,420 --> 00:23:12,400
I would like to hear your opinion guys.

339
00:23:12,580 --> 00:23:14,800
Yeah, I'll jump in.

340
00:23:14,800 --> 00:23:17,090
As a healthcare professional,

341
00:23:17,090 --> 00:23:20,490
I'll say healthcare is very cautious, very conservative.

342
00:23:20,490 --> 00:23:23,930
And the reason mainly, obviously, there's a lot at stake.

343
00:23:23,930 --> 00:23:25,690
If you make a mistake, someone can die.

344
00:23:25,690 --> 00:23:26,480
The stakes are high.

345
00:23:26,480 --> 00:23:33,690
It's not like, I made a bad choice and I lost some money
or I maybe had to spend some tokens or made a bad stock choice.

346
00:23:34,070 --> 00:23:36,970
If this messes up or hallucinates, someone may die.

347
00:23:36,970 --> 00:23:40,850
So with that said, there's also the cultural aspect.

348
00:23:40,850 --> 00:23:44,740
Medicine is an art, and it's this art that has been taught from

349
00:23:44,740 --> 00:23:45,990
generation to generation.

350
00:23:45,990 --> 00:23:52,000
Because if you think about the preceptorship, I go to residency, I go to med school,
and someone older is teaching me how to practice medicine,

351
00:23:52,000 --> 00:23:54,000
then the culture has not changed.

352
00:23:54,960 --> 00:23:59,480
We need more leaders, not to brag, someone like me
who knows medicine, who understands AI,

353
00:23:59,480 --> 00:24:03,100
that can combine the two, that can teach the next generation so that

354
00:24:03,180 --> 00:24:04,280
culture can shift.

355
00:24:04,410 --> 00:24:08,920
Anytime you bring a new technology to someone,
the first thing is the fear of change.

356
00:24:08,920 --> 00:24:10,670
Like, my God, I don't know what that is.

357
00:24:10,670 --> 00:24:11,810
That must be the devil.

358
00:24:11,810 --> 00:24:13,260
AI is horrible.

359
00:24:13,340 --> 00:24:18,340
But then when you start unpacking it and educating
and teaching doctors and healthcare professionals

360
00:24:18,420 --> 00:24:22,100
how to use this tool, then they say, wait a second, this is good.

361
00:24:22,650 --> 00:24:29,920
To your point, I love what DeepMind did
with AlphaFold and got the Nobel Prize,

362
00:24:29,920 --> 00:24:34,420
being able to discover, I truly think
this is gonna open the door to personalized medicine.

363
00:24:34,740 --> 00:24:41,480
The second thing that AI has done is companies are
now able to produce medicines and discover them cheaper.

364
00:24:41,480 --> 00:24:45,950
And some studies have said some of these costs have dropped by 90%.

365
00:24:45,950 --> 00:24:46,970
And so what does that mean?

366
00:24:46,970 --> 00:24:51,640
If you have a rare disease that maybe let's pretend
only a thousand people in the world have that disease,

367
00:24:51,640 --> 00:24:55,180
is the pharmaceutical company going to go out and make a drug that may

368
00:24:55,260 --> 00:24:56,770
be $10 million for a dose?

369
00:24:56,770 --> 00:24:58,230
The answer is they're not.

370
00:24:58,230 --> 00:25:03,060
But if I can bring that price down
and I can break it down to some point that now

371
00:25:03,120 --> 00:25:07,840
it makes financial sense for the pharmaceutical company,
now they're going to start creating drugs

372
00:25:08,200 --> 00:25:10,810
that will address that smaller population.

373
00:25:10,810 --> 00:25:12,550
So it really can revolutionize the world.

374
00:25:12,550 --> 00:25:17,350
That's why I'm so happy to see they got a Nobel Prize
because I truly think this is going to help.

375
00:25:17,350 --> 00:25:21,690
So, big picture: we just need to change
the culture and we need to educate.

376
00:25:24,260 --> 00:25:25,360
Well said.

377
00:25:29,400 --> 00:25:36,480
In fact, we had a guest on the podcast, well, earlier in the year,

378
00:25:36,480 --> 00:25:40,460
and he
works for an orphan drug company,

379
00:25:40,990 --> 00:25:48,590
he said that he's seeing way more orphan, well,
orphan drug companies are dealing with rare diseases, basically.

380
00:25:48,590 --> 00:25:54,020
So he's seeing way more companies pop up like mushrooms,

381
00:25:54,020 --> 00:25:57,500
because it's easier for them to deliver and, like, test new drugs,

382
00:25:58,110 --> 00:26:03,780
specifically with AI agents. And something
that he mentioned would be a company like,

383
00:26:03,780 --> 00:26:08,860
for example, Turbine AI, where you don't need that many,
well, you still need clinical tests,

384
00:26:09,020 --> 00:26:15,800
but like basically you could sort of use some VR
and you could use like a Turbine and

385
00:26:15,800 --> 00:26:23,880
play around with the model and how the actual drug molecules
react with like biological targets like

386
00:26:24,150 --> 00:26:27,540
people or like, you know, animals without even

387
00:26:27,680 --> 00:26:31,190
administering it to any biological targets.

388
00:26:31,190 --> 00:26:38,210
And that's amazing because that opens the door to way
more pharma, way more R&D, way more development.

389
00:26:43,420 --> 00:26:44,330
Yeah.

390
00:26:45,210 --> 00:26:50,580
Would you guys agree, or did you actually have
experience with something like that,

391
00:26:50,580 --> 00:26:59,060
where you had an AI fully built maybe within the last few years
and then you had something like

392
00:26:59,070 --> 00:27:00,240
a breakthrough or something?

393
00:27:00,240 --> 00:27:02,940
Well, at least in the R&D space.

394
00:27:07,560 --> 00:27:09,460
That's a question for everyone.

395
00:27:09,460 --> 00:27:12,400
some of you, Harvey, you should have. Don't be shy.

396
00:27:12,460 --> 00:27:14,510
Yeah, I think there are several companies.

397
00:27:14,510 --> 00:27:19,150
One is called Insilico where they're really
using AI to do drug discovery.

398
00:27:19,150 --> 00:27:23,460
I think finally they have a few compounds in the clinical development.

399
00:27:24,040 --> 00:27:29,040
I'm not sure, I guess, depending on what the definition of an agent is,

400
00:27:29,140 --> 00:27:35,500
but clearly there are companies now being able to,
especially around this technology, being able to predict

401
00:27:35,600 --> 00:27:36,810
the protein folding.

402
00:27:36,810 --> 00:27:40,310
that are coming up with new drugs that are moving into the clinic.

403
00:27:40,830 --> 00:27:43,920
A lot of times, big pharma especially,
they're probably going to wait

404
00:27:43,920 --> 00:27:47,340
until this technology goes into phase one,
phase two, and then they buy them up.

405
00:27:47,430 --> 00:27:54,340
So the model for big pharma, a lot of times,
has been kind of a big-bet investment where

406
00:27:54,340 --> 00:27:59,560
the goal is not to innovate too much,
but really to go out and buy innovation from

407
00:27:59,630 --> 00:28:03,810
smaller players and then build those up because of the commercial

408
00:28:03,820 --> 00:28:05,260
capabilities.

409
00:28:05,320 --> 00:28:08,180
But I just see quite a bit of activity now

410
00:28:08,180 --> 00:28:12,360
across our clients and other companies that we're talking to.

411
00:28:12,680 --> 00:28:17,880
That AI, I guess, thanks to ChatGPT, people can understand it better now.

412
00:28:17,880 --> 00:28:20,750
And now they're seeing the potential in the day to day.

413
00:28:20,750 --> 00:28:27,000
So I think they're being a little bit more bullish on also investing
across all the different decision-making in the industry.

414
00:28:29,560 --> 00:28:30,350
Thank you.

415
00:28:30,350 --> 00:28:32,820
Guys, we have a question from the audience.

416
00:28:32,820 --> 00:28:35,890
One second, I'm going to showcase it.

417
00:28:35,890 --> 00:28:43,600
It says, from Isaias Cuello, considering AI agents, what is the real balance
between automation and human expertise?

418
00:28:43,600 --> 00:28:45,600
and how to keep the balance?

419
00:28:47,950 --> 00:28:49,300
Who wants to take on that?

420
00:28:49,300 --> 00:28:50,390
Harvey?

421
00:28:50,610 --> 00:28:52,210
I can see you smiling.

422
00:28:55,510 --> 00:28:56,770
Yeah.

423
00:28:58,850 --> 00:29:00,580
Again, I'm going to repeat what I said earlier.

424
00:29:00,580 --> 00:29:06,170
I truly believe it's the human plus AI,
the combination that's going to give you the best.

425
00:29:06,170 --> 00:29:11,800
It's my clinical gestalt, my human intuition,
my human ability to see things,

426
00:29:11,800 --> 00:29:17,020
plus using AI, to see what AlphaFolds are out there and

427
00:29:17,250 --> 00:29:22,310
what molecules there are, and then putting those together and saying,
wait, we can discover this or we can create this.

428
00:29:22,550 --> 00:29:25,950
Finding the balance, again, I truly, I'm stuck on it.

429
00:29:25,950 --> 00:29:31,650
I truly believe it's about education, educating the doctors
and the healthcare professionals and people making these drugs.

430
00:29:31,650 --> 00:29:33,080
What are the limitations of AI?

431
00:29:33,080 --> 00:29:34,380
What are the biases?

432
00:29:34,380 --> 00:29:39,990
How can we make this more transparent so that when I make that decision,
I'm able to take it to the next level.

433
00:29:39,990 --> 00:29:41,290
Quick example.

434
00:29:41,290 --> 00:29:46,260
If today I'm using ChatGPT, and tomorrow I'm using Claude
and then I'm using Gemini, there's all these models.

435
00:29:46,260 --> 00:29:47,850
I don't know which one does what.

436
00:29:47,850 --> 00:29:54,320
But if we had a food-label equivalent, where now I look at it and say,
okay, this is the positive, this is the negative, these are the biases.

437
00:29:54,320 --> 00:29:56,640
This is why it's not transparent.

438
00:29:56,640 --> 00:29:58,270
This is why this model would work.

439
00:29:58,270 --> 00:29:59,320
This is why it wouldn't.

440
00:29:59,320 --> 00:30:00,810
Now I'm educated.

441
00:30:00,810 --> 00:30:07,740
Now when I'm creating the different drugs, now I'm able to know
which tool to use and which one I should and why and what population.

442
00:30:07,740 --> 00:30:09,120
Really powerful stuff.

443
00:30:09,120 --> 00:30:11,050
And that's how we could find balance.

444
00:30:12,190 --> 00:30:14,670
I get some other one.

445
00:30:15,470 --> 00:30:21,170
Yeah, I think it is still a world where the human needs to be in the loop.

446
00:30:21,490 --> 00:30:27,030
My recommendation is, if the agent can automate it
and can do the task, then let the agent do it.

447
00:30:27,310 --> 00:30:35,690
Because I think, like Shadab said, in the real world, the AI agents
are going to take all the boring work off our plate.

448
00:30:35,690 --> 00:30:41,240
And then that allows us to focus on the things
that we like and enjoy doing the most, the

449
00:30:41,320 --> 00:30:43,800
creative work, but also I think a lot of times,

450
00:30:43,800 --> 00:30:47,000
they don't have enough time to think through different scenarios,

451
00:30:47,000 --> 00:30:50,160
enough time to maybe go after different activities.

452
00:30:50,300 --> 00:30:55,300
I think the agents are really gonna increase productivity,
but they're also gonna allow us to really explore

453
00:30:55,300 --> 00:31:00,740
bigger domains, ask more questions,
and then as humans we're also gonna get

454
00:31:00,820 --> 00:31:05,360
better and probably do work that we enjoy more moving forward.

455
00:31:05,620 --> 00:31:06,840
Nice.

456
00:31:07,440 --> 00:31:10,710
We lost you, Shadab, but you're back.

457
00:31:10,710 --> 00:31:12,540
So that's a good thing.

458
00:31:12,960 --> 00:31:16,640
We have another question coming from the audience saying,

459
00:31:16,640 --> 00:31:24,340
who will validate that what AI is giving
us as a result is relevant?

460
00:31:25,610 --> 00:31:27,910
And that's why we need the human in the loop.

461
00:31:27,910 --> 00:31:29,810
There you go.

462
00:31:29,810 --> 00:31:31,850
He came back with an answer.

463
00:31:33,120 --> 00:31:34,440
I was here all the time.

464
00:31:34,440 --> 00:31:36,220
It's just that you could not see me.

465
00:31:36,220 --> 00:31:38,830
I was just like an agent, just keeping an eye.

466
00:31:40,310 --> 00:31:44,880
But yeah, that is exactly why we need
a human in the loop because there are

467
00:31:44,880 --> 00:31:50,420
liabilities, particularly in industries which are regulated,
like pharma and healthcare

468
00:31:50,560 --> 00:31:52,000
and a bunch of other industries.

469
00:31:52,000 --> 00:31:54,880
You definitely want to have the human in the loop.

470
00:31:54,880 --> 00:31:57,440
Not just that, there are other aspects as well.

471
00:31:57,440 --> 00:32:01,830
You need to also make sure not just that

472
00:32:02,180 --> 00:32:04,580
the answers that are being given are relevant,

473
00:32:04,580 --> 00:32:10,980
but that they're also accurate, they are bias-free, they are certified.

474
00:32:11,080 --> 00:32:14,870
So it takes a lot to get those data prepped.

475
00:32:14,870 --> 00:32:17,320
So with data, it's as we said earlier, right?

476
00:32:17,320 --> 00:32:18,570
Garbage in, garbage out.

477
00:32:18,570 --> 00:32:24,910
So we need to make sure that, for example, the diagnosis results
are based on those demographics, etc.

478
00:32:24,910 --> 00:32:26,260
So it takes a lot.

479
00:32:26,260 --> 00:32:28,470
You have to design a system

480
00:32:28,680 --> 00:32:30,300
to take care of these kinds of things.

481
00:32:30,300 --> 00:32:31,500
There's no hallucination.

482
00:32:31,500 --> 00:32:36,710
So that's why human in the loop is required while developing it.

483
00:32:36,710 --> 00:32:40,360
Human in the loop is required while actually interacting with it

484
00:32:40,360 --> 00:32:43,060
so that people get the certified answer.

485
00:32:43,140 --> 00:32:46,040
And that's why there is, I would say,

486
00:32:46,040 --> 00:32:51,380
for example, if you have to go and get a diagnosis done

487
00:32:51,380 --> 00:32:56,420
or get a healthcare-related answer,
I would not just solely rely on an agent.

488
00:32:56,600 --> 00:32:57,670
I would like it

489
00:32:57,670 --> 00:32:59,430
to be a doctor with an agent.

490
00:32:59,430 --> 00:33:01,950
That's what my preferred model would be.

491
00:33:01,950 --> 00:33:06,110
And that would be, I think, probably the most optimal model in the future.

492
00:33:06,970 --> 00:33:12,810
I think I guess one question is who validates what the humans produce, right?

493
00:33:12,810 --> 00:33:15,550
There's also a check there that needs to happen.

494
00:33:15,550 --> 00:33:21,800
And then the other one is shouldn't the AI validate
what a doctor is prescribing, right?

495
00:33:21,800 --> 00:33:27,310
If we look at radiology, where the AI is really better at detecting gastrointestinal

496
00:33:27,310 --> 00:33:31,600
cancers, and if that has access to all this information, shouldn't the AI

497
00:33:31,600 --> 00:33:38,120
or the doctor validate what he's prescribing with AI
that can look at all the drug-to-drug interactions,

498
00:33:38,370 --> 00:33:41,530
maybe connect the dots across the whole patient history.

499
00:33:41,650 --> 00:33:47,600
So I think there is a part where somebody
needs to validate what the AI is creating,

500
00:33:47,600 --> 00:33:53,160
but there's also a scenario where the AI
is going to be required for people when they're

501
00:33:53,240 --> 00:33:56,000
prescribing, to make sure that they have a complete picture of the patient.

502
00:33:56,000 --> 00:33:59,980
They have an agent to understand all the drug-to-drug interactions,
to make sure that what you're prescribing

503
00:33:59,980 --> 00:34:01,560
really makes sense for that patient.

504
00:34:01,560 --> 00:34:07,000
If not, it tells you, hey, maybe you want to consider this X, Y, Z.

505
00:34:07,000 --> 00:34:10,640
And then the physician can go back and say, OK,
does it make sense what I'm prescribing or

506
00:34:10,640 --> 00:34:12,640
is there a better option that I might consider?

507
00:34:14,030 --> 00:34:16,310
And I want to add to this.

508
00:34:16,310 --> 00:34:17,870
Think of the whole lifecycle.

509
00:34:17,870 --> 00:34:19,060
Who has to do it?

510
00:34:19,060 --> 00:34:25,230
So obviously, we need the data scientists to make sure
they validate the data saying this is doing correctly.

511
00:34:25,300 --> 00:34:27,720
For example, there's something called data drift.

512
00:34:27,720 --> 00:34:28,710
I can make a model.

513
00:34:28,710 --> 00:34:29,860
It's working great.

514
00:34:29,860 --> 00:34:35,060
But then with time, it'll start drifting
and it'll start creating something that it wasn't meant to do.

515
00:34:35,060 --> 00:34:38,880
And so having those data scientists analyzing that AI,

516
00:34:38,880 --> 00:34:41,840
making sure that it's doing the task that it was supposed to do.

517
00:34:42,390 --> 00:34:44,380
Second, we need domain experts,

518
00:34:44,380 --> 00:34:48,660
pharmacists, doctors, people that are in there,
scientists that are looking at that AI,

519
00:34:48,660 --> 00:34:50,660
making sure that that does make sense, using that human in

520
00:34:50,720 --> 00:34:55,270
the loop, making sure it's doing reinforcement learning
to make sure that it's going down the path it's supposed to.

521
00:34:55,270 --> 00:34:59,160
Having the open mind to understand AI,
but then knowing when to use it and when not to.

522
00:34:59,160 --> 00:35:02,840
Obviously, on the regulatory side, we need the FDA or government agencies

523
00:35:02,840 --> 00:35:07,260
that are gonna regulate this and make sure
those bodies are doing their part.

524
00:35:07,530 --> 00:35:10,440
And then the last part, obviously, the whole reason we do all of this

525
00:35:10,440 --> 00:35:12,440
is, at the end, the end user, the patient.

526
00:35:12,710 --> 00:35:17,100
Make sure the patient is able to look at it and say,
yes, it is doing what it's supposed to do

527
00:35:17,100 --> 00:35:20,340
and being able to give feedback back
to the system so that way we can

528
00:35:20,360 --> 00:35:23,970
create that whole ecosystem to make the best AI product.

529
00:35:25,030 --> 00:35:25,590
Thank you.

530
00:35:25,590 --> 00:35:26,980
Thank you for that.

531
00:35:27,120 --> 00:35:29,080
We have two more questions.

532
00:35:29,080 --> 00:35:31,540
Our attendees are waking up or something.

533
00:35:31,540 --> 00:35:33,380
I don't know.

534
00:35:33,380 --> 00:35:38,320
The first one is, can AI be effectively applied in a patient-centric manner

535
00:35:38,320 --> 00:35:41,960
to enhance health care outcomes and improve patient experience?

536
00:35:43,790 --> 00:35:45,100
Who wants to tackle that?

537
00:35:45,100 --> 00:35:46,110
Harvey?

538
00:35:47,450 --> 00:35:51,320
So at the end of the day,

539
00:35:51,320 --> 00:35:53,090
I'll joke with you guys.

540
00:35:53,090 --> 00:35:54,280
It's all about communication.

541
00:35:54,280 --> 00:35:59,050
When you fight with your spouse, when you fight with your kids,
your boss, it's about communication.

542
00:35:59,050 --> 00:36:04,110
So how can we use that AI to better enhance
health care in a patient-centric manner?

543
00:36:04,110 --> 00:36:06,550
I truly think it's leveraging AI.

544
00:36:06,550 --> 00:36:10,820
This is a great example how people from around the world are listening to this.

545
00:36:10,820 --> 00:36:16,560
And if I'm your doctor and I give you, I'm in Texas,
Texas examples, you'd be like, I don't live in Texas.

546
00:36:16,560 --> 00:36:18,440
I have no idea when he says football.

547
00:36:18,440 --> 00:36:20,760
I think of soccer as football.

548
00:36:20,760 --> 00:36:26,240
But if I can leverage AI to speak your language, your examples,

549
00:36:26,240 --> 00:36:31,400
your accent, your language that is primary, now you'll understand.

550
00:36:31,880 --> 00:36:36,320
Now I'll have a huge impact of how I deliver health care
because you'll be able to hear me,

551
00:36:36,320 --> 00:36:38,940
but not just hear me, you'll be able to understand me.

552
00:36:39,020 --> 00:36:42,960
And if you understand me, you're less likely
to come back into the healthcare system

553
00:36:42,960 --> 00:36:47,020
in the sense that it's like, you know what,
I didn't understand what the doctor said or how

554
00:36:47,080 --> 00:36:48,440
to take my meds or X, Y.

555
00:36:48,440 --> 00:36:50,510
The second part to this,

556
00:36:50,680 --> 00:36:56,000
is having things like edge technology
or things like this phone or iWatch

557
00:36:56,000 --> 00:36:57,780
that's telling me, hey, you didn't sleep enough.

558
00:36:57,780 --> 00:37:00,800
Like every morning I look at it, I'm like, all right, I probably need

559
00:37:00,800 --> 00:37:04,080
to sleep a little bit more if I want to live longer,
if I don't want to gain weight.

560
00:37:04,220 --> 00:37:09,960
And so using AI to now notify my watch
and saying, hey, Harvey's not sleeping.

561
00:37:09,960 --> 00:37:14,710
And then instead of being a reactive health care,
we create a proactive health care.

562
00:37:14,710 --> 00:37:18,820
So now the doctor is texting, calling, saying,
hey, you need to increase your meds.

563
00:37:18,820 --> 00:37:20,530
Hey, you need to come in sooner.

564
00:37:20,700 --> 00:37:25,420
And so that's how we'll be able to use this, like the question asked.

565
00:37:25,420 --> 00:37:26,430
like that.

566
00:37:26,430 --> 00:37:34,050
I think my doctor will go crazy, crazy, sending
me messages about things I'm doing wrong.

567
00:37:34,050 --> 00:37:36,440
What can you do?

568
00:37:36,440 --> 00:37:40,800
I would like to mention a recent, I think there was

569
00:37:40,860 --> 00:37:46,420
a paper that was written, I think most of you
would already know, but it was fascinating.

570
00:37:46,540 --> 00:37:47,520
It was, I think,

571
00:37:47,520 --> 00:37:51,600
a paper about the use of AI in the healthcare setting.

572
00:37:51,600 --> 00:37:57,560
It was actually a paper based on, I think, 50, you know, I think

573
00:37:57,560 --> 00:38:02,940
doctors and physician assistants trying out AI for diagnosis.

574
00:38:03,990 --> 00:38:05,940
And I'm going to paraphrase some other things.

575
00:38:05,940 --> 00:38:10,170
going to, I've forgotten the exact numbers, so pardon me, but here it is, right?

576
00:38:10,170 --> 00:38:14,870
The outline was that half of the doctors and assistants were given

577
00:38:15,340 --> 00:38:19,050
no tools except for the conventional tools that they had used.

578
00:38:19,050 --> 00:38:29,570
The other half were given AI, ChatGPT access, to actually use that
to help with the diagnosis and the treatment plan, etc.

579
00:38:30,990 --> 00:38:36,320
And surprisingly, you would assume that the AI folks would have done better.

580
00:38:36,320 --> 00:38:37,600
They actually did.

581
00:38:37,600 --> 00:38:39,940
It was just marginally better.

582
00:38:40,520 --> 00:38:43,490
Now, which was surprising.

583
00:38:43,490 --> 00:38:48,400
But on the other hand, there was another group in the test where only AI,

584
00:38:48,400 --> 00:38:53,440
some AI experts, not the doctors, but the AI experts actually used AI.

585
00:38:53,770 --> 00:38:58,860
And the AI alone, ChatGPT alone, was able to give 15% better results.

586
00:38:58,860 --> 00:38:59,900
So what does that mean?

587
00:38:59,900 --> 00:39:04,580
Like, is AI better than a human, and better than a human in the loop with AI as well?

588
00:39:04,580 --> 00:39:12,430
No, actually, the important thing was
how the adoption is done.

589
00:39:12,430 --> 00:39:13,310
Right.

590
00:39:13,310 --> 00:39:18,340
So for example, everybody in the healthcare setting,
whether it's a doctor, assistant, nurses,

591
00:39:18,340 --> 00:39:22,840
everybody. You can't just throw a tool at them.
You need to actually teach them how to

592
00:39:23,050 --> 00:39:24,270
properly use it.

593
00:39:24,270 --> 00:39:25,790
Secondly, there is an adoption, right?

594
00:39:25,790 --> 00:39:31,890
I mean, you can't expect somebody highly qualified
to just accept what a chatbot says, really.

595
00:39:31,890 --> 00:39:34,910
So there is that adoption process.

596
00:39:34,910 --> 00:39:42,150
So if we do the right adoption and we teach everybody
to use it right, there is a lot of upside here.

597
00:39:42,350 --> 00:39:46,850
15% better, imagine, in the treatment plan or the diagnosis.

598
00:39:46,850 --> 00:39:50,190
So there is a lot to gain, but there is a lot of work to be done.

599
00:39:50,190 --> 00:39:54,310
It's not just creating an account and then let them run wild with it.

600
00:39:54,310 --> 00:39:55,570
That's not how it works.

601
00:39:55,570 --> 00:40:01,600
That's the balance we need to find, the right speed,
the right skilling, and the right adoption plan.

602
00:40:04,270 --> 00:40:04,920
Thank you.

603
00:40:04,920 --> 00:40:06,230
Thank you for that.

604
00:40:06,230 --> 00:40:08,510
Salvador, do you want to add something?

605
00:40:08,610 --> 00:40:12,840
Yeah, I think there was another study coming out of the UK, NHS, where

606
00:40:12,840 --> 00:40:18,320
patients got a chance to talk to the physicians
or got a chance to talk to the chatbot.

607
00:40:18,630 --> 00:40:27,610
It goes kind of to what you were saying, Harvey,
that the chatbots can be empathetic to the patients.

608
00:40:27,610 --> 00:40:30,080
And even though they're now talking to technology,

609
00:40:30,080 --> 00:40:33,860
the way they communicate and the amount of information they have

610
00:40:34,160 --> 00:40:35,870
they can really fine-tune that message.

611
00:40:35,870 --> 00:40:40,040
they don't have the stress of having seen
50 patients throughout the day.

612
00:40:40,040 --> 00:40:44,220
They don't have the stress of the fight,
maybe with the wife or the husband the day before,

613
00:40:44,220 --> 00:40:46,940
maybe they had a night shift and they're tired from that.

614
00:40:46,980 --> 00:40:48,270
They don't have all of that, right?

615
00:40:48,270 --> 00:40:51,880
So they bring like the best qualities if provided with the right tools

616
00:40:51,880 --> 00:40:57,100
to engage with the patients and provide answers
that are a little bit more empathetic than a human's.

617
00:40:57,140 --> 00:40:57,920
So.

618
00:40:58,410 --> 00:41:01,900
We're seeing that in these studies,
and I think also in customer service,

619
00:41:01,900 --> 00:41:07,900
I think that's one of the industries that is going to be
probably disrupted quite a bit, because a lot of the

620
00:41:08,060 --> 00:41:13,190
engagement probably can be done better by AI.

621
00:41:13,190 --> 00:41:16,700
So I think that is also going to have an impact
as we move forward on

622
00:41:16,700 --> 00:41:21,520
when does it make sense to talk to a human versus
when does it make sense to talk to a machine,

623
00:41:21,680 --> 00:41:24,780
especially when it comes to getting specific tasks done.

624
00:41:26,190 --> 00:41:27,920
And I do want to add that

625
00:41:27,920 --> 00:41:33,520
It's just because I'm a doctor and I hear that study
quoted all the time, and I put it in all my presentations.

626
00:41:33,540 --> 00:41:38,220
I want to convey that it's again, it's about the communication.

627
00:41:38,220 --> 00:41:42,130
When a doctor answers your question,
unfortunately, they may be quick.

628
00:41:42,130 --> 00:41:45,190
The average time in the United States with the doctor is 13 minutes.

629
00:41:45,190 --> 00:41:48,050
So that interaction is very limited.

630
00:41:48,050 --> 00:41:51,430
But AI can give you a whole book on your subject.

631
00:41:51,430 --> 00:41:56,910
And then the other thing I'm going to encourage you
to ask for the most for your health care is the following.

632
00:41:56,910 --> 00:41:58,770
Before you see your doctor.

633
00:41:58,770 --> 00:42:01,910
Obviously, make sure there's GDPR and HIPAA compliance and all that stuff.

634
00:42:02,070 --> 00:42:06,050
Ask your medical questions and say, I have diabetes, hypertension.

635
00:42:06,050 --> 00:42:08,600
What are the top questions I need to ask my doctor?

636
00:42:08,600 --> 00:42:11,540
Because when you're there in those 13 minutes,
you want to maximize them.

637
00:42:11,540 --> 00:42:15,190
So I'm going to encourage you to use AI to improve your health care.
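
(A minimal sketch of that pre-visit prep, assuming the OpenAI Python SDK and a placeholder model name; the conditions and the wording of the prompt are invented for illustration, not something the panel specified.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical pre-visit prompt; conditions and model name are placeholders.
    prompt = (
        "I have diabetes and hypertension. "
        "List the top questions I should ask my doctor in a 13-minute visit, "
        "ordered by importance, with one sentence on why each matters."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)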

638
00:42:16,190 --> 00:42:17,670
I love that.

639
00:42:18,010 --> 00:42:20,770
It's very short, 13 minutes.

640
00:42:21,070 --> 00:42:23,560
I always say hi and I'm already out of time,

641
00:42:23,560 --> 00:42:25,210
or I speak a lot.

642
00:42:25,650 --> 00:42:30,220
Next up, we have another question from our attendees.

643
00:42:30,220 --> 00:42:32,010
Appreciate the insights.

644
00:42:32,010 --> 00:42:36,400
How do you anticipate AI agents will be used initially for pharma,

645
00:42:36,400 --> 00:42:40,680
drug discovery, R&D, helping sales teams, or something else?

646
00:42:43,470 --> 00:42:48,660
Actually, everything, to be honest, because AI agents, you know, AI agents,

647
00:42:48,660 --> 00:42:55,960
you can design the AI agent for everything,
whether it's a process, whether it's like a knowledge board.

648
00:42:56,950 --> 00:43:00,420
And remember that AI agents can be very, very smart as well.

649
00:43:00,420 --> 00:43:02,160
They can actually even replicate themselves.

650
00:43:02,160 --> 00:43:05,220
There is an architecture pattern called AI swarm

651
00:43:05,220 --> 00:43:09,420
in which the AI agents, based on the problem that you throw at them,

652
00:43:09,420 --> 00:43:12,740
they can actually reason, they can create a plan.

653
00:43:12,820 --> 00:43:17,500
And if they have to, for example,

654
00:43:17,500 --> 00:43:23,160
let's say, create new agents for doing some tasks, they can actually do that.

655
00:43:23,240 --> 00:43:26,620
So AI agents can be very, very powerful in that regard.
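
(A rough sketch of that swarm idea, in Python; this is an illustration only, with hypothetical agent names, and simple keyword routing standing in for the LLM-driven planning and agent creation described here.)

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Agent:
        name: str
        handle: Callable[[str], str]

        def run(self, task: str) -> str:
            return f"[{self.name}] {self.handle(task)}"

    # Hypothetical specialist behaviors, keyed by the capability they cover.
    SPECIALISTS: Dict[str, Callable[[str], str]] = {
        "onboarding": lambda t: "collected consent forms and scheduled the intro call",
        "scheduling": lambda t: "booked the follow-up visit",
        "education":  lambda t: "sent a plain-language medication guide",
    }

    def plan(task: str) -> List[str]:
        # A real planner would ask an LLM to reason about the task and decide
        # which specialists (or brand-new agents) it needs; keyword matching
        # stands in for that step here.
        needed = [name for name in SPECIALISTS if name in task.lower()]
        return needed or ["onboarding"]

    def run_swarm(task: str) -> List[str]:
        # Agents are created on demand for this task, then each handles its part.
        agents = [Agent(name, SPECIALISTS[name]) for name in plan(task)]
        return [agent.run(task) for agent in agents]

    if __name__ == "__main__":
        for line in run_swarm("Patient onboarding: send education material and handle scheduling"):
            print(line)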

656
00:43:26,620 --> 00:43:31,280
So I would say that they are not limited
to any particular function or aspects.

657
00:43:31,280 --> 00:43:33,350
They can be part of anything.

658
00:43:33,450 --> 00:43:36,650
What is important is to identify the purpose of it.

659
00:43:36,650 --> 00:43:40,820
So most of the agents are very specialized, for specific tasks.

660
00:43:40,820 --> 00:43:46,620
So for example, a travel app or let's say for example, patient onboarding.

661
00:43:46,680 --> 00:43:48,510
That could be a specific process.

662
00:43:48,510 --> 00:43:51,980
Within the patient onboarding itself, there could be multiple agents,

663
00:43:51,980 --> 00:43:57,260
not just one agent, working together,
depending on what tasks are required.

664
00:43:58,140 --> 00:43:59,380
And that's just one area.

665
00:43:59,380 --> 00:44:04,110
There could be tons of areas like this
where you can actually do the work.

666
00:44:04,110 --> 00:44:07,550
But yeah, everything is relevant for the agents.

667
00:44:08,630 --> 00:44:09,670
Thank you.

668
00:44:11,110 --> 00:44:13,590
Salvador, do you want to add?

669
00:44:14,630 --> 00:44:20,590
Yeah, again, I think there's a difference
between just applying AI versus applying agents.

670
00:44:20,590 --> 00:44:25,340
I think in terms of agents, this is going to be how pharma engages

671
00:44:25,340 --> 00:44:30,460
with the patients, the physicians,
the different stakeholders in the ecosystem.

672
00:44:30,750 --> 00:44:32,980
I think that's going to play a role,

673
00:44:32,980 --> 00:44:37,660
especially around when we're talking
about specific diseases or specific drugs.

674
00:44:38,130 --> 00:44:43,840
The agent will be able to give a kind of HIPAA-compliant, legally compliant answer.

675
00:44:43,840 --> 00:44:48,740
I think we're also seeing in the customer engagement,

676
00:44:48,740 --> 00:44:53,060
using AI to connect the dots and figure out, for this physician,

677
00:44:53,060 --> 00:44:54,420
what is the right detail aid?

678
00:44:54,420 --> 00:44:57,200
What are the topics that this physician likes to talk about?

679
00:44:57,200 --> 00:45:03,950
So it can help the sales reps and the medical first people
have the right message for the right physician.

680
00:45:04,210 --> 00:45:08,080
If we go in-house, I think there are a lot of use cases around

681
00:45:08,440 --> 00:45:10,380
training and onboarding of employees as well.

682
00:45:10,380 --> 00:45:13,180
So like Shadab said about onboarding a patient

683
00:45:13,180 --> 00:45:17,560
that maybe needs to take different medications
or needs to use a medical device,

684
00:45:17,560 --> 00:45:19,200
there is an onboarding there.

685
00:45:19,200 --> 00:45:23,370
And then our focus at Atacana is around decision-making, right?

686
00:45:23,370 --> 00:45:26,440
How do you use these agents to process the information

687
00:45:26,440 --> 00:45:30,620
and have the right information at the right time when you're making the decisions?

688
00:45:32,240 --> 00:45:33,340
Thank you.

689
00:45:33,440 --> 00:45:36,630
Stef, do you have a question for the speakers?

690
00:45:36,630 --> 00:45:38,190
Do you want to follow up?

691
00:45:41,110 --> 00:45:43,990
I always have a question.

692
00:45:46,530 --> 00:45:50,630
In fact, I was commenting on AI swarm in the chat.

693
00:45:50,630 --> 00:45:54,120
So that's a really useful tool, ladies and gentlemen,

694
00:45:54,120 --> 00:45:57,280
for pharma and beyond pharma if you're also in tech.

695
00:45:57,910 --> 00:45:59,390
I wanted to show you this.

696
00:45:59,390 --> 00:46:01,860
So some of you maybe know, some maybe don't.

697
00:46:01,860 --> 00:46:04,490
It's a book written by a famous surgeon.

698
00:46:04,490 --> 00:46:06,410
His name is Atul Gawande.

699
00:46:06,410 --> 00:46:08,200
The book is called The Checklist Manifesto.

700
00:46:08,200 --> 00:46:09,780
He was a surgeon.

701
00:46:09,920 --> 00:46:13,450
He talks basically, the whole book is about checklists, right?

702
00:46:13,810 --> 00:46:19,280
And it's more of an insight than anything else.

703
00:46:19,280 --> 00:46:24,760
He talks about the fact that in medicine you use checklists,
in aviation you use checklists,

704
00:46:24,760 --> 00:46:29,520
in many other fields where there is
a high responsibility on people, they use

705
00:46:29,640 --> 00:46:30,650
checklists.

706
00:46:30,810 --> 00:46:37,070
And I was wondering, maybe again,
it's sort of an open question for everyone.

707
00:46:37,550 --> 00:46:39,760
Maybe with, we could...

708
00:46:39,760 --> 00:46:46,600
implement AI better in pharma in particular
or in many industries if we had a checklist about

709
00:46:46,600 --> 00:46:54,940
how to use AI in pharma and
how to implement AI in the right way. Maybe

710
00:46:55,560 --> 00:46:59,320
using a checklist could be a good role model. What do you guys think?

711
00:47:01,770 --> 00:47:07,120
I know when I was coming up the ranks
in med school, this was a big, big issue.

712
00:47:07,120 --> 00:47:11,350
I would always say we need to work on checklists,
make sure things are performed right.

713
00:47:11,350 --> 00:47:15,320
I know surgeons are notorious for checklists and I mean it in a good way

714
00:47:15,320 --> 00:47:21,060
being able to go from A to Z, making sure that everything is done the right way in the right sequence.

715
00:47:21,140 --> 00:47:24,880
That way you're not taking off the wrong leg
because you weren't following the checklist.

716
00:47:25,660 --> 00:47:32,750
And it's been really big in the airline industry, obviously,
before they fly any plane, they go through their checklist.

717
00:47:32,750 --> 00:47:37,980
I think it's very important in medicine, having AI create that checklist

718
00:47:37,980 --> 00:47:43,080
and then applying that to AI agent in particular, we could do that as well.

719
00:47:43,410 --> 00:47:49,940
There's already a science to this when you look at
how to discover medicine, how to do things, it's very A through Z.

720
00:47:50,320 --> 00:47:52,720
and creating the AI to make sure you're going through the checklist.

721
00:47:52,720 --> 00:47:53,440
So I totally agree.

722
00:47:53,440 --> 00:47:57,050
And I'm already on Amazon and I'm going to
order that book for me because I love reading.
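
(One small sketch of that idea of checking an AI's output against a checklist before it goes out, in Python; the checklist items and the draft text are invented for illustration and are not from the book or the discussion.)

    # Gate an AI-generated care summary behind a simple checklist.
    CHECKLIST = [
        ("mentions dosage",               lambda text: "dosage" in text.lower()),
        ("mentions a follow-up",          lambda text: "follow-up" in text.lower()),
        ("flags emergency warning signs", lambda text: "emergency" in text.lower()),
    ]

    def review(summary: str) -> list[str]:
        # Return the checklist items the draft fails; an empty list means it passes.
        return [name for name, check in CHECKLIST if not check(summary)]

    draft = "Take 5 mg once daily (dosage). Book a follow-up visit in two weeks."
    failures = review(draft)
    if failures:
        print("Hold for human review, missing:", ", ".join(failures))
    else:
        print("Checklist passed, release to the patient.")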

723
00:47:58,610 --> 00:48:00,140
Nice, nice.

724
00:48:00,140 --> 00:48:02,650
I'm big on checklists as well, but I'm not a doctor.

725
00:48:02,650 --> 00:48:06,120
I'm big on checklists for everything.

726
00:48:06,120 --> 00:48:08,950
My lists have lists and so on.

727
00:48:09,750 --> 00:48:14,560
Salvador, what are your insights regarding this?

728
00:48:16,240 --> 00:48:23,120
Yeah, I read the book and it definitely had an impact on how we think about

729
00:48:23,120 --> 00:48:28,680
educating the team on what they need to do
to get the right outputs or the right deliverables.

730
00:48:30,210 --> 00:48:36,880
I think with AI, I think there is a place for it.

731
00:48:37,560 --> 00:48:41,380
I think the challenge for us in terms of AI is that
every other week there's something new

732
00:48:41,380 --> 00:48:43,880
that comes out that has a new different skill set,

733
00:48:44,160 --> 00:48:45,420
new capabilities.

734
00:48:45,420 --> 00:48:48,340
So if you have the right checklist for the AI today,

735
00:48:48,340 --> 00:48:51,600
probably in a couple of months it'll already be outdated.

736
00:48:51,720 --> 00:48:56,020
So for us, it's been more about thinking, what are the processes,

737
00:48:56,020 --> 00:49:00,820
maybe the steps to put in place that as the new technology,

738
00:49:00,820 --> 00:49:02,780
you take out the cartridge, you take out

739
00:49:02,780 --> 00:49:07,900
GPT-3.5, and then you put the next GPT inside.

740
00:49:07,920 --> 00:49:13,490
And then that way, you don't have to rethink
the whole process, but you only

741
00:49:13,600 --> 00:49:20,780
improve the whole process by now swapping
these different LLMs as they become more powerful.

742
00:49:21,340 --> 00:49:28,100
So yeah, it is a good question trying to see
how they will evolve as the technology also evolves.
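
(A minimal sketch of that "swap the cartridge" idea, in Python: the process is written once against a small interface and the model behind it can be replaced without touching the process; the class names and model labels are assumptions for illustration.)

    from typing import Protocol

    class ChatModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class StubModel:
        """Stand-in for any hosted LLM (GPT-3.5, the next GPT, or something else)."""
        def __init__(self, name: str):
            self.name = name

        def complete(self, prompt: str) -> str:
            return f"({self.name}) response to: {prompt[:40]}..."

    def onboarding_step(model: ChatModel, patient_notes: str) -> str:
        # The process stays the same no matter which model is plugged in.
        return model.complete(
            "Summarize these onboarding notes for the care team: " + patient_notes
        )

    if __name__ == "__main__":
        for cartridge in (StubModel("gpt-3.5"), StubModel("gpt-next")):
            print(onboarding_step(cartridge, "Prefers evening calls, allergic to penicillin."))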

743
00:49:30,640 --> 00:49:31,240
Thank you.

744
00:49:31,240 --> 00:49:31,980
Thank you for that.

745
00:49:31,980 --> 00:49:32,750
Stefan?

746
00:49:35,400 --> 00:49:38,980
Well, Shadab didn't answer so I'd like to hear that

747
00:49:38,980 --> 00:49:44,260
No, I think you know I won't get that book but I can tell you that

748
00:49:44,260 --> 00:49:47,380
I use AI a lot already in a lot of things, and I'm

749
00:49:47,480 --> 00:49:52,300
gonna take Harvey's advice on using the AI
to prep before the doctor's exam

750
00:49:52,300 --> 00:49:56,340
and, you know, give them a hard time with all the questions, right?

751
00:49:56,360 --> 00:49:57,310
Really?

752
00:49:57,330 --> 00:50:04,240
So but yeah, I think that is very very cool
and remember that I think now that

753
00:50:04,490 --> 00:50:09,500
If you use, for example, an Apple iPhone, now
you have Apple Intelligence on the phone.

754
00:50:09,500 --> 00:50:15,180
And I use a lot of these devices for fitness monitoring,
health monitoring and things like that.

755
00:50:15,180 --> 00:50:18,840
I think, you know, AI is, whether you want it or not,

756
00:50:18,840 --> 00:50:24,560
it's going to be everywhere and it's going to be on your devices
and you can actually really make a good use of it.

757
00:50:25,320 --> 00:50:30,240
Remember, your Apple Watch itself is going to have a lot of AI, and you can actually
configure it.

758
00:50:30,240 --> 00:50:33,550
It can monitor a lot of vital signs, etc, as well.

759
00:50:35,350 --> 00:50:38,410
Of course, I think that the question is whether it can do the diagnosis.

760
00:50:38,410 --> 00:50:42,500
I think a lot of companies will stay away
because of the compliance and others.

761
00:50:42,500 --> 00:50:48,500
So I think it will be fun to see, because one aspect
that we don't have very clearly spelled out

762
00:50:48,500 --> 00:50:53,820
is what the regulatory framework will be around all this use of AI,

763
00:50:53,950 --> 00:50:57,260
you know, in the diagnosis, in the personal space, et cetera.

764
00:50:57,780 --> 00:50:59,380
It would be fun to watch.

765
00:51:00,050 --> 00:51:01,210
you know, how will that emerge?

766
00:51:01,210 --> 00:51:02,220
Will it slow down?

767
00:51:02,220 --> 00:51:04,810
Will it enhance the innovation phase?

768
00:51:04,810 --> 00:51:07,210
I don't know, but it will be fun to watch.

769
00:51:11,230 --> 00:51:16,020
Yeah, a recent study on AI usage

770
00:51:16,020 --> 00:51:22,080
actually showed that AI is really good at
gathering research and giving you the data, but

771
00:51:22,280 --> 00:51:25,100
it's still people who have to make the decision.

772
00:51:25,100 --> 00:51:28,740
So it's, you know, it's human led AI.

773
00:51:28,740 --> 00:51:32,120
And although you can gather a lot of data

774
00:51:32,120 --> 00:51:34,120
and Harvey mentioned here that

775
00:51:34,120 --> 00:51:38,920
if you have garbage data and you feed it to the AI,
you're going to get garbage out.

776
00:51:39,550 --> 00:51:40,940
A very easy concept.

777
00:51:40,940 --> 00:51:46,060
So no matter how many humans or how much AI you're going to have,

778
00:51:46,060 --> 00:51:52,220
if the data is not enough, or it's not statistically significant data,
then you cannot be objective with your AI.

779
00:51:53,300 --> 00:52:02,650
And to be fair, AI does its fair share of creating things
out of thin air that do not exist.

780
00:52:02,650 --> 00:52:09,240
So that's why you always got to have a human and
maybe another human to make sure that you do not make

781
00:52:09,240 --> 00:52:12,070
the same mistake again.

782
00:52:13,130 --> 00:52:18,190
Guys, we have a question from the audience from Stefan.

783
00:52:18,190 --> 00:52:19,600
What are the ideas?

784
00:52:19,600 --> 00:52:23,540
What are the ideas to think beyond applying AI to the way we do business

785
00:52:23,540 --> 00:52:29,760
and to rethink the way we serve customers and patients
in combinations of AI and the human component?

786
00:52:36,330 --> 00:52:39,310
It depends honestly on a lot of things.

787
00:52:39,310 --> 00:52:50,060
In my experience, what I've seen is there are times where it's easier
to just interact with AI than the human to be honest.

788
00:52:50,300 --> 00:52:55,310
Sometimes it's a pain working with humans in certain areas.

789
00:52:55,310 --> 00:52:57,010
On the other side, sometimes it's a pain,

790
00:52:57,010 --> 00:53:02,630
honestly, to work with the AI; you just want to talk
to someone because your problem may not be simple.

791
00:53:02,630 --> 00:53:10,190
Because like, for example, I'll tell you, I was planning
an international itinerary and it was complicated.

792
00:53:10,190 --> 00:53:14,090
And all I could get was AI agents to try to help me.

793
00:53:14,090 --> 00:53:22,670
And they just, I just couldn't explain properly sometimes
because in order to use AI effectively, you need to know how to use AI, right?

794
00:53:22,670 --> 00:53:30,510
I mean, how you instruct it, because, you know,
if you can't do that, the AI is not always going to...

795
00:53:30,510 --> 00:53:33,930
read the sentiments and everything properly.

796
00:53:34,520 --> 00:53:39,620
sometimes it's just not going to give you the right answer
if you don't know how to ask the right question.

797
00:53:39,620 --> 00:53:41,020
So that's important.

798
00:53:41,020 --> 00:53:42,710
Many times customers don't know that.

799
00:53:42,710 --> 00:53:46,930
And that's why human in the loop in customer scenarios is still important.

800
00:53:46,930 --> 00:53:48,510
Patients are still important.

801
00:53:48,510 --> 00:53:53,020
On the other hand, there will be people
who will be just comfortable asking directly to the AI.

802
00:53:53,020 --> 00:53:55,030
They don't want to talk to a human.

803
00:53:55,170 --> 00:53:58,850
It could be a patient discussing something sensitive, etc.

804
00:53:58,850 --> 00:54:00,400
In those cases, you know,

805
00:54:00,560 --> 00:54:03,620
AI could be the first channel to actually start with, right?

806
00:54:03,620 --> 00:54:06,460
I mean, so, you know, it depends.

807
00:54:06,460 --> 00:54:14,240
I would say that it's hard to say that one size fits all;
you know, it really depends.

808
00:54:14,240 --> 00:54:19,760
I think what it will do is it will give a lot of new options

809
00:54:19,760 --> 00:54:25,360
for the customers and patients to get their problem solved,
which is what I'm excited about.

810
00:54:26,460 --> 00:54:28,110
You know, they can choose the channel.

811
00:54:28,110 --> 00:54:29,720
I would say that

812
00:54:30,270 --> 00:54:33,760
That is where we should be, not just eliminating the channels, like

813
00:54:33,760 --> 00:54:38,040
putting bots in rather than humans completely, or the other way around;

814
00:54:38,040 --> 00:54:40,240
it should be multiple channels, because different

815
00:54:40,300 --> 00:54:43,180
people have different comfort level working with the channels.

816
00:54:44,480 --> 00:54:45,060
Yeah.

817
00:54:45,060 --> 00:54:47,360
And I just want to add real quick to it.

818
00:54:47,960 --> 00:54:52,800
Leveraging the last part of your question,
leveraging humanity and AI, how to make that better.

819
00:54:52,800 --> 00:54:58,820
And I'm going to stick to healthcare. I love doctors to death,
but not all doctors have the same empathy.

820
00:54:59,070 --> 00:55:03,590
And that empathy, you can argue that it's hard to teach.

821
00:55:03,670 --> 00:55:06,910
I'm working on a product that will do the following.

822
00:55:07,050 --> 00:55:09,800
It will take the doctor, the healthcare professional,

823
00:55:09,800 --> 00:55:14,400
and use AI to analyze the way they speak, the way they deliver news,

824
00:55:14,400 --> 00:55:18,060
and then have the AI teach that doctor to be more empathetic.

825
00:55:18,160 --> 00:55:22,880
And that's a great example of how
a human can use AI to make things more humane.

826
00:55:22,880 --> 00:55:25,580
Because you're like, you're using
AI to make that person more human?

827
00:55:25,580 --> 00:55:26,560
Yes.

828
00:55:27,140 --> 00:55:28,980
And so that's just a quick example.

829
00:55:31,000 --> 00:55:32,200
Thank you.

830
00:55:32,720 --> 00:55:34,620
We have a question.

831
00:55:35,040 --> 00:55:36,740
Someone wanted to say something.

832
00:55:36,740 --> 00:55:41,020
Yeah, I like this idea, and there have been a lot of cases,

833
00:55:41,020 --> 00:55:47,180
not just between the physician and the patients,
this idea of role playing with the AI, right?

834
00:55:47,180 --> 00:55:52,980
So I think a lot of people that have stage fright
or that need to figure out how to have a difficult conversation,

835
00:55:52,980 --> 00:55:56,220
you can start role-playing that with the AI, and then you can go

836
00:55:56,620 --> 00:56:00,110
into a meeting more prepared, like you were saying

837
00:56:00,340 --> 00:56:03,870
about asking which questions
to ask the physician; it's the same thing.

838
00:56:03,870 --> 00:56:08,720
if you're facing a negotiation, you can role play with the LLM

839
00:56:08,720 --> 00:56:13,240
and it can help you think through, OK, I can have a better
interaction in the next meeting, going in

840
00:56:13,340 --> 00:56:15,300
based on this context.
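
(A small sketch of that role-play setup, again assuming the OpenAI Python SDK; the persona, the system prompt wording, and the model name are invented for illustration.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical persona: the model plays a time-pressed, skeptical physician
    # so a rep (or a patient) can rehearse a difficult conversation.
    messages = [{
        "role": "system",
        "content": "You are a time-pressed cardiologist who is skeptical of new therapies. "
                   "Push back on weak claims and ask for evidence.",
    }]

    while True:
        user_turn = input("You: ")
        if not user_turn:
            break
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print("Physician:", answer)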

841
00:56:16,770 --> 00:56:17,990
Thank you.

842
00:56:18,110 --> 00:56:24,620
We have five minutes, so we have one big, long comment.

843
00:56:24,620 --> 00:56:28,040
I will try to read it and showcase it here.

844
00:56:28,310 --> 00:56:39,840
It says from Nathan, in the practice of medicine,
ensuring the accuracy and reliability of medical results is paramount.

845
00:56:39,840 --> 00:56:47,170
Validating AI's responses requires a collaborative
approach involving multiple stakeholders.

846
00:56:47,170 --> 00:56:55,650
While AI offers innovative solutions, medicine is still
fundamentally driven by human expertise and ethical considerations.

847
00:56:55,680 --> 00:57:07,090
Studies and research are often funded by entities with vested interests,
necessitating transparency and rigorous validation to maintain integrity.

848
00:57:07,090 --> 00:57:15,040
Transparency is essential for both AI and human researchers
to enhance accountability and trust.

849
00:57:15,040 --> 00:57:24,670
As AI integrates into healthcare, its role is to assist providers by handling
administrative tasks and supporting clinical decision-making.

850
00:57:24,670 --> 00:57:27,550
And it's the last part.

851
00:57:27,550 --> 00:57:30,140
It says AI can

852
00:57:30,300 --> 00:57:32,480
... accountability...

853
00:57:32,780 --> 00:57:34,040
One second.

854
00:57:34,040 --> 00:57:36,340
Decision making.

855
00:57:37,780 --> 00:57:46,510
can identify patterns that may be overlooked
due to social or cultural nuance, enabling more comprehensive care.

856
00:57:46,510 --> 00:57:54,880
However, AI contributions must be continuously validated by qualified
medical professionals to ensure accuracy and safety.

857
00:57:54,880 --> 00:58:00,540
Ultimately, validating AI's responses
in medical contexts requires a balanced

858
00:58:00,540 --> 00:58:08,580
approach that leverages AI's capabilities while maintaining
the essential role of human judgment and oversight.

859
00:58:09,220 --> 00:58:15,400
I hope, guys, you were able to listen and read it.

860
00:58:15,400 --> 00:58:17,630
It's commentary rather than a question.

861
00:58:17,630 --> 00:58:19,510
And I would say I agree.

862
00:58:19,590 --> 00:58:21,150
That's all I can say.

863
00:58:21,550 --> 00:58:29,910
Because I think, yes, I mean, that's why it will be fun to see
how the compliance and regulation will emerge.

864
00:58:30,880 --> 00:58:33,940
The way we see it is that AI has potential.

865
00:58:34,560 --> 00:58:37,240
I think that last year it was 4% adoption.

866
00:58:37,240 --> 00:58:39,570
It has already jumped to 22% adoption.

867
00:58:39,570 --> 00:58:42,640
It is already moving extremely fast.

868
00:58:42,860 --> 00:58:48,700
I think there are areas in which you still
are going to do the wait and watch.

869
00:58:49,700 --> 00:58:52,460
And health care could be one of those segments.

870
00:58:52,820 --> 00:58:57,370
The question is that how soon the regulations will emerge.

871
00:58:57,370 --> 00:58:58,300
We don't know.

872
00:58:58,300 --> 00:58:59,530
Europe is

873
00:58:59,530 --> 00:59:04,450
regulating too much, the US is relaxed, but we will see.

874
00:59:04,450 --> 00:59:06,590
It is an area that is still very much in flux.

875
00:59:06,590 --> 00:59:16,060
For example, I'll tell you that recently there was news
about a teen killing himself because of an interaction with a chatbot.

876
00:59:16,060 --> 00:59:24,670
So these kinds of things, it's an example of AI ultimately
operating in a setting which could affect your health.

877
00:59:24,670 --> 00:59:27,920
So mental health is also part of the health care overall.

878
00:59:28,920 --> 00:59:29,630
We will see.

879
00:59:29,630 --> 00:59:31,060
This is now in the court.

880
00:59:31,060 --> 00:59:34,820
It will be decided whether the AI bot is actually responsible or not.

881
00:59:34,820 --> 00:59:42,980
But these kinds of legal decisions that are happening will define
how AI will be used in the future in the health care setting or not.

882
00:59:42,980 --> 00:59:44,800
So something to watch for.

883
00:59:45,400 --> 00:59:46,210
Thank you.

884
00:59:46,210 --> 00:59:47,640
Thank you for that.

885
00:59:47,640 --> 00:59:54,980
Salvador, Harvey, closing comments regarding
the comment from our attendee?

886
00:59:54,980 --> 00:59:55,920
Yeah.

887
00:59:56,380 --> 00:59:57,040
Good.

888
00:59:57,040 --> 01:00:00,140
I got a hard stop at four, but go ahead.

889
01:00:00,320 --> 01:00:03,680
Technology is evolving.

890
01:00:03,680 --> 01:00:05,640
Regulation is going to be evolving.

891
01:00:05,700 --> 01:00:10,580
Clearly, there is a human in the loop component
that cannot be taken away right now.

892
01:00:10,580 --> 01:00:13,840
But I think the key takeaway for me is we need to embrace the technology,

893
01:00:13,840 --> 01:00:16,640
figure out how to leverage it for our advantage.

894
01:00:16,760 --> 01:00:20,560
And of course, it's changing so rapidly that we're going to see different regulation

895
01:00:20,560 --> 01:00:24,160
different guardrails are going to be put in place to mitigate any risk.

896
01:00:25,460 --> 01:00:26,380
Great.

897
01:00:26,380 --> 01:00:29,470
And all I was going to say is this is growing so fast.

898
01:00:29,470 --> 01:00:33,320
Come to seminars like this to learn, follow people that are here.

899
01:00:33,320 --> 01:00:37,450
The more you follow them, the more you're learning
because it takes a village to learn this information.

900
01:00:37,450 --> 01:00:42,040
It's growing so fast and no one person really is an expert,
but together we are an expert.

901
01:00:42,040 --> 01:00:44,190
So with that, I appreciate being here.

902
01:00:44,190 --> 01:00:45,170
Thank you.

903
01:00:45,310 --> 01:00:45,900
Thank you.

904
01:00:45,900 --> 01:00:46,640
Thank you all.

905
01:00:46,640 --> 01:00:51,890
Thank you, our attendees for tuning in and don't forget to follow these guys.

906
01:00:51,890 --> 01:00:54,390
We have set up their podcasts and things.

907
01:00:54,390 --> 01:00:55,010
So.

908
01:00:55,010 --> 01:00:57,830
Follow that and go to Salvador's event.

909
01:00:57,830 --> 01:00:59,660
We will know if you haven't been there.

910
01:00:59,660 --> 01:01:01,950
So with that, thank you all and goodbye.

911
01:01:01,950 --> 01:01:03,700
All right, thank you.