How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, I'm so pleased to bring you Philip Rathle. Philip is the Chief Technology Officer at Neo4j. He joins us today to provide a primer on all things Knowledge Graph and the emerging intersection of graphs with Retrieval Augmented Generation or RAG. Welcome to the show, Philip.
PHILIP RATHLE: Kimberly, hi. Hello, audience. Great to be with you today.
KIMBERLY NEVALA: So let's start with having you tell us a little bit about your journey from our mutual days in the way back, starting as developers, to your current role at Neo4j.
PHILIP RATHLE: In the way, way back.
KIMBERLY NEVALA: Way, way back.
PHILIP RATHLE: I started, pretty early in my career, getting caught up in data and databases and really falling in love with the potential of, essentially, starting with a model and then all that unfolds from there. All the way down to dirty data and duplicate data and unresolved data and all the problems that that brings in. From getting the model right in the first place for whatever it is you're trying to solve, down to curation and all the human, business, organizational, all the other factors.
So I started at Accenture back when it was still Andersen Consulting, which is where you and I first met, and got into data modeling, where I was a bit more on the analytic side: data warehousing, data analytics.
Fast forward, I spent about a decade doing consulting with some of the largest database systems on the planet. And again, some of that with you at Tanning Technology, which no one's ever heard of now, but back in the day was pretty recognized for its acumen in dealing with large, real-world systems. I always looked at Tanning as being The Wolf in Pulp Fiction - a much more IT-oriented version, of course.
And then fast forward into landing in roles where I enjoyed technology, I enjoyed the business side of things. I often landed at the intersection in a consulting world.
Then in the early 2010s, got really interested in all the new models that were emerging in data and databases. And I was a bit perplexed that all the things that were popular at the time were essentially ripping out the concept of relating data and relationships or - as we say in the relational world, primary foreign keys, constraints, et cetera - out of the model in order to solve simple data at scale.
And at around that time, I met up with Neo4j's founders, including our current CEO, Emil Eifrem, who was CEO at the time as well. He was taking the complete opposite angle and saying: What's most valuable in data? What's most valuable in data is probably these connections and these relationships. And sure, they're inconvenient. But man, if you could actually make the investment to build a product that focused on those, you could probably do a lot of really valuable things.
So that's what launched me on this graph journey, which I've been on for 12 years. I live in Silicon Valley, and I think by tech benchmarks, and certainly Silicon Valley ones, that makes me a bit of a dinosaur with respect to tenure at one company. But I still feel like we're only just getting started, and there's a ton of opportunity this unlocks. So it's been exciting for me.
KIMBERLY NEVALA: So it is interesting because graph databases, graph analytics, knowledge graphs have been around for a bit now. They remain, in some cases, still somewhat obscure to a lot of folks, even in the data and analytics space. So I'd like to start with providing a little bit of foundational knowledge for folks who may not be particularly familiar with the space. What is a knowledge graph? And what is the type of information that graphs are well-suited to encapsulate?
PHILIP RATHLE: Let me answer your second question first. It is real world data, let's say it comes from real world systems. Digital world data comes from digital world systems that tend to manifest as interconnected systems. So you have supply chain as a system. The systems of biology are all just things that connect to other things that influence things that influence things, and so on down the line. Telephony is connected systems where the connections are calls between people at one layer, but then also literal connections between infrastructure and trunks and fiber and wireless, point to point, and so on. And you can just go on down the line, payment networks.
And even inside of an organization, you look at a large organization with many thousands or even into the millions of employees, and we think of this as a hierarchy. So again, that's non-tabular. But actually, if you drill in deeper and you're working in the HCM domain, you quickly see that, no, what I want to manage is richer than a hierarchy, because each person might have a person they directly report to. But then they're part of a community of practice, and they have a mentor, and they have this history. All this changes over time. And they work for a project, or they're on loan to a project and there's a project manager, and so on. And then you can hang facilities off of that, and skills, and journeys. And we have customers using Neo4j to help employees manage their journey across their life cycle: as an employee, getting from where I am today, with the journey I've had and all the skills I have, to some place I want to get to, both in terms of job function, in terms of level, in terms of geography, and so on.
So these are all examples of what I think of as real-world systems. And what do they all have in common? Connections are very important. Dynamism is very important. They tend to be very dynamic.
Another thing these things have in common is data network effects. Where if I have the employee graph but I don't have skills, I just have the hierarchy of who reports to whom and now all of a sudden I add in skills. The skills data just became much more valuable by virtue of being attached to the core data and vice versa. So you have this additive effect. And you can even go a step further. I'm going to add my customer graph to my employee graph and see who's assigned to customers, who's engaged with them. Now, I've suddenly up-leveled both again. I've enabled all these other things.
So really, any time I'm dealing with real world data, real world systems, that's where graphs come into play. So what is a knowledge graph - back to your first question - it's a reflection of your world model. And you could say, how is that different from a digital twin? I don't actually think it is. Maybe digital twin is a narrower example: a particular system I have that maybe exists as a physical object. The smart city probably isn't an object, but an airplane certainly is. We have airplane manufacturers who will store a digital twin of each aircraft, throughout its entire life, in a graph, because that's a million parts and multiple levels, and there are interdependencies both at the design level and the maintenance level. That's the way I think about it, certainly.
KIMBERLY NEVALA: So what would be common or - I was going to say traditional, though, something that we're probably using actively over in the last 10 years is not that old - traditional applications of knowledge graphs or graph analytics that would help ground the concept for folks?
PHILIP RATHLE: Well, there's GraphRAG, which is an emerging one, and perhaps less traditional.
KIMBERLY NEVALA: We're going to get to that.
PHILIP RATHLE: Though, in the AI world, anything older than two weeks, is that traditional?
KIMBERLY NEVALA: [LAUGHS]
PHILIP RATHLE: But, yeah, more traditional, what people think of as graph use cases, are really cases where the value of doing something in a graph comes from how well-suited the language is to asking these more intricate questions about how things are connected. Like shortest path, Kevin Bacon type questions. You can do these in a graph with just one line, as opposed to multiple SQL statements, each one of which gets progressively much longer and takes forever to run. And it's orders of magnitude cheaper than joins at scale - joining through primary and foreign keys and through join tables - so performance.
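To make the Kevin Bacon example concrete, here is a minimal sketch of that kind of path question in plain Python (not a Neo4j API - in a graph query language like Cypher the same question is a single path clause, while here we have to spell out the breadth-first search by hand). The cast list is invented for illustration.

```python
from collections import deque

# Toy "co-starred with" graph as an adjacency map. In a graph database
# this is a one-line path query; here is the same idea spelled out.
COSTARS = {
    "Kevin Bacon": ["Tom Hanks", "Laurence Fishburne"],
    "Tom Hanks": ["Kevin Bacon", "Meg Ryan"],
    "Meg Ryan": ["Tom Hanks", "Billy Crystal"],
    "Laurence Fishburne": ["Kevin Bacon", "Keanu Reeves"],
    "Keanu Reeves": ["Laurence Fishburne"],
    "Billy Crystal": ["Meg Ryan"],
}

def shortest_path(graph, start, goal):
    """Return the shortest list of hops from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

path = shortest_path(COSTARS, "Keanu Reeves", "Meg Ryan")
```

The relational equivalent needs one self-join per hop, so the SQL grows with the path length, while the graph traversal does not.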
And then the fact that your physical, logical, conceptual model are all more or less the same. There isn't this vast distance you have between the whiteboard world, which is more circles and lines, and actually a graph - the world of a business domain expert - and the world of tables and DBAs and so on.
So what are those things? Fraud detection tends to be pretty high up on the list, and I'd say maybe more broadly, financial crimes, to include things like money laundering. And why is that? That's because fraudsters exploit the fact that most systems are not designed to detect connections between things and to look for complex patterns. And what is money laundering but exploiting that through creating multiple intermediaries? Well, guess what? If you have a system that is easily able to just spider across, do pattern recognition, and pierce through the intermediaries, then it doesn't matter how many levels you've got. So that's one pretty powerful area: most financial institutions use Neo4j for this, including all 20 of the top 20 US banks, as do a lot of government agencies.
Another one is recommendations - predicting behavior. Also, storing an entity-reconciled view of the graph, where different business units, different identities, different identifiers all oftentimes lead to the same person being treated as if they were different people. And then you've got the question of what people disclose to you as a company. You might suspect - you might have a very strong suspicion, maybe it's 99% - that this activity comes from this person, but they haven't disclosed that. Depending on your privacy policy, oftentimes it's OK to make a recommendation of, here's the next thing you might be interested in reading, based on a suspected identity. Whereas it wouldn't be OK at all to show someone their bank account from this other line of business based on a suspected identity. So in the graph, you can have things like level of certainty as a weight on a relationship, because your relationships can carry attributes, along with start date, end date, things like this.
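Here is a hedged sketch of that idea - relationships carrying a confidence property, with different use cases demanding different confidence bars. The edge structure, names, and thresholds are invented for illustration; this is plain Python, not a Neo4j data model.

```python
# Hypothetical identity graph: each edge carries properties, including a
# confidence weight on SAME_PERSON links. Names and thresholds are
# illustrative only.
edges = [
    {"from": "profile:web-123", "to": "person:kimberly",
     "type": "SAME_PERSON", "confidence": 0.99},
    {"from": "profile:app-456", "to": "person:kimberly",
     "type": "SAME_PERSON", "confidence": 0.60},
]

def linked_profiles(person, min_confidence):
    """Profiles we may treat as this person, given a confidence bar."""
    return [e["from"] for e in edges
            if e["to"] == person and e["type"] == "SAME_PERSON"
            and e["confidence"] >= min_confidence]

# A low bar is fine for a reading recommendation...
recommend_from = linked_profiles("person:kimberly", 0.50)
# ...but showing a bank account demands near-certainty.
show_account_for = linked_profiles("person:kimberly", 0.99)
```

The same graph answers both questions; only the threshold on the relationship property changes.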
I'll just rattle off a few others. Pharma, most pharma companies use Neo4j and Knowledge Graphs for accelerating drug discovery by a year or two by connecting all the dots between whatever pathogens or ailments and how that affects the biology. All the connected dots inside of the biology to what are things, say, like compounds that might be able to react positively or affect things in a positive way. Again, this is lots of cascading things. And speaking of cascades, supply chain is another really big one, particularly in this world where there's a lot of rethinking and refactoring of supply chains for geopolitical but also other kinds of risk minimization reasons.
KIMBERLY NEVALA: So if I was going to grossly oversimplify, it'd be the relationships and the connections between people, places, and things, and all of the different paths by which they can be connected, I suppose. It also strikes me that this mechanism allows us to look at somebody like Kimberly and see that Kimberly may have multiple personas in different areas - and allows us to look at not just people, but objects or people, places, things in points of time and in relation to each other - and understand more multidimensional views of any one thing. Which really is not well-suited to some of our more traditional methods of capturing data.
PHILIP RATHLE: That's right. And the more I know about how Kimberly interacts with the world, the better I'm able to predict Kimberly's behavior. Which is - it's kind of obvious spoken about in human terms. But if we look at how data is often used in order to predict behavior, oftentimes we just use facts about things and maybe kind of trick ourselves or end up in a pattern where we're just trying to gather more and more facts.
But actually, there's some great research; it's in a book called Connected, written by Nicholas Christakis and James Fowler. They're social scientists, but they look at different behaviors and find that I can better predict someone's behavior - they happen to look at smoking, obesity, and alcoholism, but you can apply this to any kind of behavior - by understanding the people around them at three levels out. Like one, two, and three levels out. You have a much better chance at predicting those behaviors that way than if you have all the facts in the world about a given person. We are products of our environment in ways that are non-obvious.
KIMBERLY NEVALA: It strikes me as exceedingly powerful and particularly perilous, particularly when we're talking about people. And it begs a whole different conversation about privacy and ethics to which we will not apply ourselves today, but which we do talk a lot about on the pod and more broadly.
PHILIP RATHLE: For sure.
KIMBERLY NEVALA: So let's turn our attention to the all-consuming these days topic of generative AI and large language models. Graphs are enjoying a bit of a, I can't say resurgence, I think a bit of a highlight or sort of a spotlight in the context of LLMs in particular with recent pushes towards what's known as retrieval augmented generation or RAG.
So again, keeping with the primer-esque intention here, can you first tell us what RAG is? And then we'll talk about the intersection of graph and RAG.
PHILIP RATHLE: Yes. So RAG stands for retrieval augmented generation and it is simply the next step that practitioners discovered they needed to go to in order to make LLMs useful in an enterprise context.
So the first step was let's just try large language models, small language models, whatever it might be.
Well, they're not trained on my data, first of all. Then second of all, they only have data up until the point where they've been trained, which means they're not very useful if I want to interact with a customer about something that happened in the last six months since the model was trained or whatever the time frame might be. So then you have fine tuning, but it turns out fine tuning isn't a good way to get a model to learn about data. It's more of a good way to, let's say, tweak the model so that it gives better results for the kinds of questions that you're asking and that it's answering.
So retrieval augmented generation takes advantage of the fact that, as part of a question, I can also feed a model more context. And so, well, why not feed it context from a database to give it information about what's relevant? And when people talk about RAG, typically, they talk about vector-based RAG, which is effectively a similarity search to find text that is conceptually, mathematically, statistically as close as possible to the question. Then you just feed that text back to the model to inform the answer.
So there's feeding data in, and then there's getting data back. But effectively, RAG is getting better results from a model - better meaning it's more recent, incorporates your business data, and usually is also able to match answers with questions based on text that a company already has somewhere in the documents you're searching.
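A minimal sketch of that vector-based retrieval step, under heavy simplification: the document chunks here carry fake three-dimensional embeddings, where a real system would use a learned embedding model with hundreds or thousands of dimensions. The chunk texts and vectors are invented for illustration.

```python
import math

# Toy vector-based RAG retrieval: find the chunk whose embedding is
# closest to the question's embedding by cosine similarity, then
# splice that text into the prompt fed to the model.
chunks = {
    "Our refund window is 30 days.": [0.9, 0.1, 0.0],
    "The office cafeteria opens at 8am.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(question_vec):
    """Return the chunk text most similar to the question embedding."""
    return max(chunks, key=lambda text: cosine(chunks[text], question_vec))

# Pretend this vector embeds "Can I get a refund?"
context = retrieve([1.0, 0.0, 0.1])
prompt = f"Answer using this context:\n{context}\n\nQuestion: Can I get a refund?"
```

The match is purely statistical closeness in the embedding space, which is exactly why, as discussed below, it can never give you certainty.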
KIMBERLY NEVALA: And correct me if I'm wrong here, because again, I'll be at risk of grossly oversimplifying what can (CHUCKLING) be a fairly complicated topic. But the idea of RAG came along, as you said, to provide a little bit more focus to the content that an LLM or GenAI was using to generate the answer. So it was specific to your internal corpus, whether that's your company documents, or customer information, or what have you. With an idea that this could then not only provide more contextually relevant answers - particularly looking at things that might be more specific to the problem at hand, to the organization, to the enterprise - but also try to address this issue of, call them hallucinations, confabulations, or BS, depending on your orientation these days to the hype or the harm aspects of LLMs.
Because whatever the term, this idea of generation and these systems being non-deterministic, being probabilistic, means that hallucinations - I'll use the more common term - are somewhat of a feature, not a bug. So is it true then that RAG is also an attempt to not only provide more relevant, recent, contextual information, but also to try to shrink the potential space? Because I don't think there's any way to eliminate this in total, by virtue of how the systems work. But to help us reduce the potential for things like hallucinations and confabulations.
PHILIP RATHLE: Yeah, I agree with that.
One metaphor that I use is, it's like loading dice. You're rolling dice with an LLM, and with RAG, you're loading those dice. Or at least with vector-based RAG you are. And so an interesting feature comes into play when you're doing graph-based RAG. Because with vectors, it's entirely in the statistical realm. You literally can't have certainty. It's just an approximation of meaning based on statistical models.
But with a graph, I have determinism, which I can then bring to bear. With graphs, you've got both deterministic queries. You also have non-deterministic, unsupervised learning kind of stuff. So it's a rich world where you have both.
But a concept that is actually fairly important here given those options, is where's the locus of reasoning? Is my reasoning happening in the LLM? In which case, with vector-based RAG, your reasoning entirely happens in the LLM. Then with graphs, I can actually tune. I can pull data from the graph, feed it to the LLM. The LLM makes its own decision. Now I've loaded the dice even more. But I get some form of informed creativity as the outcome.
But you could also say, I'm going to use the fact that graphs can give you a deterministic answer. So if I have a supply chain risk question, well, that's a multi-level calculation and it involves math. LLMs aren't good at either of those things. But guess what? Why would you use a tool that gives you approximate answers for answering a question that has an exact answer, particularly when you can calculate it? So there you actually would do your calculation in the graph. That's the locus of reasoning. And then you hand it back to the LLM and you say, here's the answer, use it. Now use your amazing language skills to formulate that back into the right language and couch it in the right words for whatever my usage is.
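A sketch of that "locus of reasoning in the graph" pattern, with an entirely made-up supplier tree and risk model: the multi-level calculation is done deterministically over the graph, and the LLM receives the finished number purely to put it into words. This is an illustration of the pattern, not the speaker's actual implementation.

```python
# Hypothetical supply chain: each part depends on suppliers, which may
# depend on further suppliers. Failure probabilities are invented.
SUPPLIERS = {
    "widget": ["acme-metals", "bolt-co"],
    "acme-metals": ["rare-earth-ltd"],
    "bolt-co": [],
    "rare-earth-ltd": [],
}
RISK = {"acme-metals": 0.2, "bolt-co": 0.05, "rare-earth-ltd": 0.4}

def combined_risk(part):
    """Probability that any supplier anywhere in the chain fails,
    assuming independent failures - a deterministic graph traversal."""
    p_all_ok = 1.0
    stack = list(SUPPLIERS.get(part, []))
    while stack:
        supplier = stack.pop()
        p_all_ok *= 1.0 - RISK.get(supplier, 0.0)
        stack.extend(SUPPLIERS.get(supplier, []))
    return round(1.0 - p_all_ok, 4)

risk = combined_risk("widget")
# Only now hand off to the language model, purely for phrasing:
prompt = (f"The computed supply risk for 'widget' is {risk:.0%}. "
          "Explain this to a buyer in one sentence.")
```

The exact answer comes out of the traversal; the model's only job is the language around it.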
KIMBERLY NEVALA: Now, there will be folks that are taking umbrage, perhaps, and perhaps rightly so, when we say the reasoning is happening in an LLM, i.e. LLMs do not reason. But what I hear you really saying is - when I think about reasoning in the context of LLMs, I tend to go, is it reasoning or is it providing an approximation of reasoning? But what I hear you saying is that, to some extent, that deterministic reasoning is inherent in the structure of the graph itself. It captures relationships. It captures context, and concerns, paths. And so the LLM or GenAI then inherits that rationalized output as a result of integrating a graph.
PHILIP RATHLE: Yeah, that's right. So LLMs can't reason. That's a great point.
And in fact, Yann LeCun has these four characteristics of intelligent behavior, of which reasoning is one. The others are planning, understanding, and persistent memory. He uses these as a benchmark for the ingredients of AGI, and his point is that because LLMs, as they're currently constructed, aren't built to do any of these things, you can't expect them to give you that.
What I meant by "reasoning" here is more well, yes, they're not able to do reasoning, but people use them to reason for them all the time in spite of that. Like when I say, "Help plan a trip," and I get some answers back. What's happening inside the LLM isn't actually reasoning, which you and I and the listeners know. But it is through advanced word completion effectively and more, giving me something that I can then use in the same way as I would use a properly reasoned decision. And this is the danger, also the opportunity. This can be a feature or a bug, depending on how you use it.
KIMBERLY NEVALA: So let's talk a little bit specifically about the introduction of graphs to RAG or what's now being called GraphRAG. What is the differentiation? What does GraphRAG provide us that the original approach to RAG did not?
PHILIP RATHLE: The best way to think about it is through a metaphor, actually. So let me explain it that way first, and then I'll explain it at more of a bottom-up, technology level.
But the metaphor is, the brain has two hemispheres. And you have two systems of thinking, if you want to take the more Kahneman-oriented approach. One is what we usually think of as the right hemisphere, or the thinking-fast system: it's intuitive, at times impulsive. You don't really know why it's doing what it's doing. In fact, you might have to spend weeks trying to reason through, why did I do this thing? (LAUGHING) Be it good or bad; hopefully more good than bad, I think. So it's a powerful system, but it's a black box. It's this complex system that we can't really rationalize or understand, and it tends to be more creative. Complementing that is the slow-thinking system, or let's call it the left brain, which is more reasoned and deterministic. And I can go and actually understand and explain what happened.
And so, unsurprisingly, what a lot of companies are discovering is the limits of using only technologies that give you right-brain kind of behavior. Because an LLM, even with vector-based RAG, ends up being not at all explainable, and it hallucinates, as we were talking about earlier, with no idea why. It's sometimes right, and it's sometimes spectacularly wrong. And that's not OK for a lot of use cases. It might be OK for a customer service chatbot, where I have a pretty low bar.
But a left-brain complement can give you deterministic answers, has a model of the world, and gives you explainability. I think of knowledge graphs as not only giving you explainability and determinism, but as actually being this bridge for human beings to be able to direct the AI. I mean, would you want a 100% accurate, non-hallucinating, black-box, machine-oriented system to make all the decisions for you? I certainly wouldn't. [LAUGHS] I'd not only want explainability, I'd want some level of human agency. So graphs are this bridge that can be understood by humans, used as an input, used to help understand the outputs, and reasoned upon by both humans and machines. So that's the maybe more metaphorical view.
I guess the more systems-oriented view is, vectors represent one kind of algorithm - generously, two algorithms: a spatial distance and an angular distance. And it's based entirely on unstructured data. But companies oftentimes do have structured data - facts and systems of record about who their customers are, what their products are, their product components, their suppliers, and so on - that they want to bring in to complement all the unstructured text.
Being able to run queries against that - queries that run multi-level, that have math, that calculate exact answers in cases where you have them - matters. Again, using a technology that is imprecise in cases where you have an exact answer doesn't make sense. So that's kind of the basis for knowledge graphs. And the trigger for this has been people.
Necessity is the mother of invention, right? It's people trying things out, trying LLMs. No, not good enough. Fine tuning, not good enough. Vector-based RAG, still not good enough for my particular use case. So you have this getting-to-production problem where, maybe it's good enough to create an amazing demo but not to get me into production. And it turns out that GraphRAG solves the getting-to-production problem for a large number of use cases that otherwise get stuck.
KIMBERLY NEVALA: So are there some common use case or patterns that are emerging for GraphRAG right now? Are there 2, 3, 4 top types of situations where you might look to implement a graph within your RAG process or GraphRAG?
PHILIP RATHLE: Yeah, let me give a few examples and then we can distill those into some of the patterns that are emerging.
So one example is, we have a large employer with many hundreds of thousands of employees that has a chatbot giving employees guidance on their careers. So someone can come in and say, I have xyz job in this location. Well, it knows what job they have, so you don't need to say that.
But here's where I'm trying to get to. It will then give them some advice based on their history, based on what it knows about whom they've worked with, and on what teams, and what skills and certifications they have, and so on: what geography, and the current openings. So you have an employee view, you have a manager view. And it will basically connect those dots, feed that back to the LLM.
So here you have a strongly informed LLM to make a much better recommendation. In some cases, there are clear paths that need to be followed. In that case, the LLM is just serving that back into language. And then in other cases, the LLM might have more of a creative recommendation based on things that are maybe not specific to the company; around certifications and what's the best way to learn about xyz thing. In that case, you can depend on the LLM or maybe on vector-based RAG.
Another example is, I mentioned supply chain and supply chain risk earlier. This is one where, if you build a supply chain risk application by taking all of your data and feeding it into an LLM or into vectors, it doesn't do a good job, because math and multi-level are not within the sweet spot of what those technologies can do. So in that case, you literally have the LLM take a human-language question, map it to a graph query, run the graph query, and then translate the result back.
I'll give you another example which is outbound email recommendation where you have two components. And this is email remarketing. So someone who's browsed a website, maybe had things in their shopping cart, they maybe have some history of prior purchases. And then the company has more broadly history of people who have bought these two things together, well, oftentimes they buy these other three because those go together in some fashion. Or people who have bought this, buy this other thing just because of a shared interest. Those are very graphy kinds of problems. Back to your core use cases that you asked about earlier, that's a very well understood and well-trodden path.
Now, how do you phrase that email? Well, it might depend a lot on what country the person is in, what language they speak, what age demographic they're in, what their occupation is, and so on and so forth.
In this case, the outbound email makes a product recommendation based on the graph query and then leaves everything else to the LLM. In your prompt, you come back and say, here's everything I know about this person; tailor an email based on their demographics, et cetera.
So, lots more examples. But broadly, going back to the locus-of-reasoning concept, you can see how you've got certain problems where you do a core part of the calculation in the graph and use that because it's definitive.
Anti-money laundering is another one. It's definitely in the sweet spot of graph for the query. But then I might want to complete a SAR, a suspicious activity report, and formulate that into text using an LLM, because it's good at language.
Another one is more informed creativity: passing information from the graph to the LLM so that it can make its own decision. Again, like you pointed out earlier, this isn't true reasoning. But it kind of behaves there like it's -- yeah, I mean, it's reasoning from the perspective of passing the duck test, right?
If I'm using it to do something, and it looks like it's reasoning, and I'm treating it as if it were, then it kind of is, maybe. So that's the other one: informed creativity is another pattern.
And then the last pattern we're seeing a lot of is actually helping to rank the vectors using things you know from the graph. So let's say I have a customer service application where I'm trying to surface the document that is most relevant to solving a user's problem. You might have text matches and vector matches across a hundred different documents, and they'll be ranked in some way according to whatever the vector math is. But the best result is probably the one that has had the most positive resolutions, or that customers have rated a useful answer. These are things in the graph. And there are well-known algorithms like PageRank that you can run to count the number of inbound relationships of a certain type to a particular thing, push that rank down as a property, and then just use that property - whatever decimal value from 0 to 1 - to rank your vector results and serve up the appropriate vector. So that's another one; you could say Vector++, or better together.
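A minimal sketch of that reranking pattern, with invented document names and scores: each candidate carries both its vector similarity and a graph-derived rank property in [0, 1] (for instance, from a PageRank-style count of positive-resolution links), and a tunable weight blends the two. This is an illustration, not any particular product's API.

```python
# Candidates: (document, vector similarity, graph-derived rank).
# The graph rank might come from counting inbound "resolved this
# ticket" relationships and normalizing to 0..1. All numbers invented.
candidates = [
    ("reset-password.md",    0.91, 0.10),
    ("reset-password-v2.md", 0.89, 0.80),
    ("billing-faq.md",       0.40, 0.95),
]

def rerank(results, weight=0.5):
    """Blend vector similarity and graph rank; weight=0 is pure
    vector search, weight=1 is pure graph rank."""
    return sorted(results,
                  key=lambda r: (1 - weight) * r[1] + weight * r[2],
                  reverse=True)

best = rerank(candidates)[0][0]
```

With the blended score, the slightly-less-similar but much-better-proven document wins over the raw vector ordering.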
KIMBERLY NEVALA: Interesting. I've learned a new word today to add to my own lexicon, which is "graphy." I don't know if [LAUGHS] it would have occurred to me to use "graphy" as a noun and an adjective. But now I know. So that's fantastic.
So I think a lot of the cases that you're talking about here also just highlight that for LLMs, it's really the value of communicating in natural language. But they also highlight the fact that that doesn't divorce us from the need for a lot of our very traditional analytics and other techniques to really manage and surface knowledge. So in a lot of those circumstances that you're talking about, the LLM really is the conversational interface, not necessarily the knowledge manager, if you will. Again, is that a gross simplification or is that somewhat fair?
PHILIP RATHLE: Yeah, I guess if I zoom out and look at the use of graphs, you have GraphRAG, which we just talked about, but there are other foundational ways in which you can use graphs to help with your AI more broadly.
For example, Entity Resolution, which we talked about. It's a great middle way between "federate everything" [CHUCKLES] and "no, we're going to take all the systems and just create one giant system out of them." With a graph, you can just say, I'm going to have this master entity, which then has all these other entities hanging off of it with their own identifiers, and then contacts hanging off of those, and federate from there.
Another one is system of systems: understanding data lineage and, at the metadata layer, how things connect. Which is also really important if you're going to have success in a GenAI project. The quality of your output depends a lot on the quality of your inputs. Being able to map those out and get data from the right system, at the right level of timeliness, subject to the right regulatory requirements, et cetera, is important.
Then I'll also add one other thing we haven't talked about: the data privacy, data security, and regulatory angle. There's a need for explainability in many kinds of use cases from a regulatory perspective. But even if you don't have a regulatory mandate, the person who's on the hook for a decision oftentimes needs to be able to see how an answer was reached, and GraphRAG can help with that.
But then from a data security perspective, once you have your data in a graph, you can actually apply fine-grained permissions to the data inside that graph. There's even a permission called Traverse that lets you see that two things are connected without knowing how. So it's, again, a middle way between not seeing that connection at all and knowing everything about that connection: the type, the direction, the properties associated. And that's not something that you get with vectors, which is a real limitation.
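The graded visibility behind the Traverse idea can be illustrated with a small sketch. This is not Neo4j's actual security API; the permission names and edge shape are assumptions made for the example.

```python
# Three levels of edge visibility, in the spirit of the Traverse
# permission Philip describes: NONE hides the connection entirely,
# TRAVERSE reveals only that two nodes are connected, and READ exposes
# the relationship's type, direction, and properties.

NONE, TRAVERSE, READ = 0, 1, 2

def view_edge(edge, permission):
    """Return what a user with `permission` may learn about an edge."""
    if permission == NONE:
        return None                     # cannot even see a link exists
    if permission == TRAVERSE:
        return {"connected": True}      # existence only, no details
    return {                            # full visibility
        "connected": True,
        "type": edge["type"],
        "source": edge["source"],
        "target": edge["target"],
        "properties": edge["properties"],
    }

edge = {"type": "PAID", "source": "acct:1", "target": "acct:2",
        "properties": {"amount": 5000}}
```

The TRAVERSE case is the middle way: a query can route through the connection without ever disclosing what it is, which has no analogue in a flat vector store.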
KIMBERLY NEVALA: So as much interest and work is happening around GraphRAG today, do you think organizations should also be taking, I won't say "a step back," but maybe taking a broader view? Are organizations today under-utilizing just the power of Graph and graphs themselves, even outside of the very exciting and attention-grabbing world of GenAI?
PHILIP RATHLE: For sure, yeah, we're still just getting started in that regard. I think having a standard is another big step in the journey towards making graphs more democratized, and available, and safe to invest skills into. And as a technology you've got lots of vendors taking this on at everything from the visualization layer to the data layer to everything in between.
So for sure, still very, very early days with Graph. And it's really neat how GraphRAG has just shown up a little bit out of the blue. From one perspective, I feel like we've accidentally built exactly the right product over the last decade plus to solve a really key problem in GenAI. But from another, it's been unsurprising, in that my whole journey for the last decade plus has been seeing how graphs just unlock more and more different kinds of opportunities that no one could have imagined before.
So definitely an important technology going forward that's only possible thanks to modern hardware and the world becoming more connected. I often think that if someone were to create a database from scratch today, and had no concept of databases, it would look much more like a graph model than a tabular model. Which honestly was designed back in the '70s, '80s when the problem of the day was I need to digitize all these paper forms.
KIMBERLY NEVALA: Well, as they say, "luck favors the prepared." So clearly those preparations are coming to the fore now. As you've been talking, I've been speculating whether GraphRAG will ultimately not be the biggest value generator or point of interest for graphs, but rather just the entrance that familiarizes people with graphs and their very broad applications. Just as we say GenAI and LLMs are one very small component of what is an AI portfolio, this may just be the point of entree for folks to really leverage this technology in ways that have very little, if anything, to do with "AI." But it provides a very nice awareness-generating intro to the technology.
PHILIP RATHLE: Yeah. Or a strong motivation. If you need it in order to solve your GenAI problems, well, once you have it for GenAI, you can use it for other things.
So in addition to data network effects, you also have this really cool thing of use case network effects. The data that I gather into my knowledge graph for solving supply chain, may be half the data that I need to do recommendations. And then once I have those two, maybe that's 90% of the data that I need to solve fraud. And maybe it's all the data that I need to solve some manufacturing efficiency problem, or portfolio risk problem, or some other problem. So you definitely have this pattern where graphs have this huge utility that just compounds over time.
And as to the question of 'when do you get started?' and how strong that motivation is, GenAI definitely is a strong call-to-action. At least for any company, any organization trying to do anything where the stakes are high in any sort of way. High dollar value, brand or reputational impact, regulatory, health and human safety, risk of bias, et cetera. If any of those things are in the mix, then you actually care a lot more about answer quality, accuracy, completeness. Kind of like bashing hallucinations down to potentially zero, if you use your graph as the locus of reasoning in cases where an exact answer is available. And that's kind of a mindset switch. But yeah, definitely it's a good entry point to what's potentially a very long journey.
KIMBERLY NEVALA: So you mentioned getting started. Are there any foundational capabilities, practices, mindsets that organizations need to begin their Graph journeys, either within their GenAI or AI strategies or outside of it?
PHILIP RATHLE: I recently wrote a post called "The GraphRAG Manifesto" that expresses some of the ideas that I talk about here. But at the end, I have a "Further Reading" section, which has a long list of things that one can dig into. Many of them are training, and code, and then the reference to other papers, and some case studies, and so on and so forth. So I'd actually look at "Further Reading" there as a good one.
A lot of the frameworks - the GenAI frameworks like Langchain, LlamaIndex, Haystack, et cetera - have integrated knowledge graph capability and have Neo4j integration specifically. There's a really cool tool that we recently released called the LLM Knowledge Graph Builder, which is a UI. In a few minutes I can spin up a free Neo4j database in the cloud, find any YouTube video, any web page, any Wikipedia article. Point to it. Have that be dissected and entity extracted using an LLM of choice. And then just pop in and visualize the graph. You can also ask questions of the LLM and it will do GraphRAG. It'll also vectorize things, by the way. So it'll do vector and GraphRAG. And it'll refer to the LLM and then you get an answer. And, last but not least, you can click in and actually see where your answer came from and what the graph query was. So this is all something you can do, once it's up and running, in literally two, three minutes. OK, you add some setup time, and it's maybe 15, generously. So I'd encourage users and listeners to check that out if you want a little playground.
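The vector-plus-graph retrieval that tool demonstrates can be sketched in a few lines. Everything below is a toy: the embeddings are hand-made two-dimensional vectors and the entities and relationships stand in for what an LLM would extract.

```python
# A toy sketch of combined vector and graph retrieval: vector similarity
# finds an entry-point node, then one hop of graph traversal pulls in
# connected facts that can ground the LLM's answer.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Nodes with toy embeddings; edges carry extracted relationships.
nodes = {
    "Neo4j":    {"embedding": [0.9, 0.1]},
    "GraphRAG": {"embedding": [0.8, 0.3]},
    "Cypher":   {"embedding": [0.2, 0.9]},
}
edges = [("Neo4j", "SUPPORTS", "GraphRAG"),
         ("Neo4j", "QUERIED_WITH", "Cypher")]

def graph_rag_context(query_embedding, top_k=1):
    # 1. Vector step: rank nodes by similarity to the query.
    ranked = sorted(nodes,
                    key=lambda n: cosine(nodes[n]["embedding"],
                                         query_embedding),
                    reverse=True)
    entry = ranked[:top_k]
    # 2. Graph step: expand one hop to neighbors for grounded context.
    context = set(entry)
    for src, rel, dst in edges:
        if src in entry:
            context.add(dst)
        if dst in entry:
            context.add(src)
    return context
```

The retrieved `context` set is what would be handed to the LLM as grounding, which is also why the answer's provenance (which nodes, which edges) remains inspectable afterwards.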
KIMBERLY NEVALA: Sounds almost too good to be true. We will include references to all of those resources in the show notes. So you're active in the space, you're looking at where we've been and where this is going. Have there been any emerging applications, whether those are in-production, not-in-production, or just in research mode, that have particularly captured your attention recently? In the graphy world?
PHILIP RATHLE: In the graphy world.
KIMBERLY NEVALA: [CHUCKLES]
PHILIP RATHLE: Yeah, hold on to it. It's a good one. So there are some things that are, I think, not yet for today, but farther out that have come up as topics when we talk to leaders. At least in the non-graph part of the AI world, or let's say non-graph first part of the AI world, because these two worlds are rapidly merging.
It's the fact that, as you get to more agentic architectures, you've got multiple small models, and then maybe a large one, and they're each playing their roles, and you have this multi-shot kind of thing. If you fast forward a few years, you can see these architectures getting very sophisticated and having lots of different parts and conditionality and so on. Well, you can represent that as a graph, as it turns out. And so something that's organically emerged out of discussions - ones that actually started on the side that's looking at graphs from the outside, not the inside - is that agentic architectures can be represented as a graph. So that's one.
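That representation can be made concrete with a small sketch. The agent names and handoffs below are invented for illustration; the idea is that agents become nodes, handoffs become directed edges, and a topological sort yields a valid execution order.

```python
# Sketch of an agentic pipeline as a graph: each agent maps to the
# agents it hands results to, and Kahn's algorithm derives an order in
# which every agent runs only after all of its inputs are ready.
from collections import deque

handoffs = {
    "router":     ["retriever", "planner"],
    "retriever":  ["summarizer"],
    "planner":    ["summarizer"],
    "summarizer": [],
}

def execution_order(graph):
    """Topological sort (Kahn's algorithm) of the agent handoff graph."""
    indegree = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for t in graph[n]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    return order
```

Once the architecture lives in a graph, the same machinery used for any other graph applies: you can query it, visualize it, or add the conditionality Philip mentions as edge properties.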
I think for me, the areas of excitement are the things like the LlamaIndex's Property Graph Index and the fact that these major frameworks are adding support for knowledge graphs. It's been neat to see how the piece I mentioned, "The GraphRAG Manifesto" has been received. It's really done the rounds and it builds upon a lot of good work by many users and customers in our community. As well as Microsoft, whom I give credit for popularizing the term "GraphRAG" and research they've done, code they've done. We've recently added the repository as well, and so on.
So I think ultimately what excites me is seeing what happens in the real world and what users do with it, what companies and governments and NGOs and so on do with it. So can't wait to see more as this evolves.
KIMBERLY NEVALA: This is excellent. Well, this has been a fascinating tour into the "graphy" world and into the emerging intersection of Graph with RAG, which, given the ongoing excitement about generative AI and LLMs, shows no sign of slowing down. So thank you so much. These were just great insights. And as always, I always enjoy any excuse to talk to you. So thanks for joining us.
PHILIP RATHLE: It's been a blast. Thanks for the invite.
[GENTLE MUSIC]
KIMBERLY NEVALA: And thanks, everyone, for joining us as well. If you'd like to continue learning from thinkers and doers such as Philip, subscribe to "Pondering AI" now. We're available on your favorite podcast channel and now also on YouTube.