Welcome to Chuck Yates Got A Job with Chuck Yates. You've now found your dysfunctional life coach, the Investor Formerly known as Prominent Businessman Chuck Yates. What's not to learn from the self-proclaimed Galactic Viceroy, who was publicly canned from a prominent private equity firm, has had enough therapy to quote Brene Brown chapter and verse and spends most days embarrassing himself on Energy Finance Twitter as @Nimblephatty.
0:00 All right, so I get to come on and talk a little bit about use cases. I'm Todd Bush, CEO of Collide. I've been here for a couple of years, helped build out the community, and got into some
0:11 of the initial versions of AI products, and now spend a lot of time in sales discussions as well as product meetings. So I get to go a little beyond the traditional high-level use cases and get
0:28 into a lot of detail as we start talking about different processes, different things like that. So definitely enjoy that opportunity and enjoy kind of digging into the different use cases that are
0:40 in each individual customer. So just thinking about this past summer, I know there's a lot of conversation about the MIT study that went around, basically saying 95% of AI projects fail. There was a little
0:53 nugget in there that basically two-thirds of
0:59 projects were succeeding if the company had domain expertise and they were kind of a partner in the project. And that's -
1:11 It's fake news. Yeah, fake news. Yes, exactly. And so that's one little study. And then on the back of that, there were a couple of different journal articles about not needing to measure
1:23 the ROI of anything AI-related, which I feel like was just the worst setup for anyone thinking about AI projects, thinking about this step change that we're going through around different
1:37 workflows and the potential for automation. So I liked leading with that. I know many people in the room have probably read it and dug through that paper. But it's a great setup to begin
1:51 talking about where we spend a lot of our time, which is where's the value, where's the return on investment, and how do we get there, and how do we make sure to enable that across different
2:03 companies. And so when we're talking about this, we'll generally say there's kind of three legs of the stool. There's the G&A side, which is typically more people- and process-based. There's the capital side: where and how
2:17 are you going to spend CapEx and OpEx, and what can that value be, given the amount of capital that's in the industry and the amount of opportunity there. And then of course risk
2:29 and safety. And the risk and safety piece could be something as specific as an insurance process, because there's a lot of time and money invested in the insurance process for a lot of E&Ps, all
2:43 the way down to the field level, where you're looking at safety and automating best practices and sharing that tribal knowledge across teams. And so from a G&A standpoint, I think one
2:55 of the things that we see with different use cases really bears out the idea of these routine tasks. Maybe it's a lot of data, maybe it's a little bit of information. It could be as simple as: I have
3:08 production alerts I'm looking at from a production operation side. Every morning I get a spreadsheet emailed to me. It already has the alerts on it, it's coming in from SCADA, but I have to
3:20 literally go through a hundred of these alerts for my asset and try to figure out what to do with them. And that's the perfect kind of daily task where we can apply some AI, some rules, and
3:33 some internal processes around: okay, what do I need to focus on? What areas can I actually take action on? And then accelerate those decisions, rather than spending the first part of
3:47 your day, or the first half of your day, routinely going through that. The other part on
3:54 the G&A aspect is the tribal knowledge. If you've been in oil and gas a long time, you've heard about the crew change. We've talked about it a lot. There's a tremendous amount of knowledge
4:06 that's not captured in procedures. It's probably in a couple of emails here and there, or in the head of a guy in his 60s or 70s who's getting ready to walk out the door. And so
4:21 there's a great opportunity there to either record meetings or interview some of your experts across different assets, bring that into the system, and actually share what others know about
4:34 a particular field, a particular reservoir, even a formation, where you can actually get into that knowledge, understand what somebody else knows about it, and really codify that
4:44 across the company. I think from the capital perspective, I've had a lot of good discussions lately about teams that are trying to optimize equipment, understand where they're placing
4:58 equipment in the field. And even though we're thinking of that as maybe a $10 to $20 million expense for the company, there's certainly opportunity to ask: okay, where does this equipment need to be?
5:14 Where is it going to be activated? And what needs to happen to make sure it's in the right place, to actually deploy that equipment and showcase some, we'll say,
5:23 optimization around that, all using AI, but very tailored, very focused on that specific use case. So there's a lot of opportunity when we start looking at those three levers.
5:36 And as you get to meet the rest of the team, I think one of the things you'll see is probably Michael or Nick or John or myself coming in and starting to talk about the opportunity to focus on
5:49 value, measure it, understand where you are today, and then how we can move toward some agentic workflow or even an automated process. You know, it doesn't have to be fully AI-enabled.
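For the production-alert example from a few minutes ago, a rules-first pass is often enough before any model gets involved. This is only a minimal sketch: the alert fields, alert kinds, and thresholds are all made up for illustration, not anything from Collide's product.

```python
# Hypothetical sketch: triaging a morning SCADA alert sheet with simple rules.
# Field names, alert kinds, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    well: str
    kind: str          # e.g. "low_pressure", "high_tank_level", "comms_loss"
    value: float
    threshold: float

def priority(alert: Alert) -> str:
    """Rank an alert so an operator sees actionable items first."""
    deviation = abs(alert.value - alert.threshold) / max(alert.threshold, 1e-9)
    if alert.kind == "comms_loss":
        return "review"            # often a telemetry issue, not a well issue
    if deviation > 0.25:
        return "act_now"
    if deviation > 0.10:
        return "act_today"
    return "monitor"

def triage(alerts: list[Alert]) -> dict[str, list[str]]:
    """Bucket a whole alert sheet by priority."""
    buckets: dict[str, list[str]] = {"act_now": [], "act_today": [], "monitor": [], "review": []}
    for a in alerts:
        buckets[priority(a)].append(a.well)
    return buckets

alerts = [
    Alert("Smith 1H", "low_pressure", 300.0, 500.0),
    Alert("Smith 2H", "high_tank_level", 0.82, 0.80),
    Alert("Jones 4", "comms_loss", 0.0, 0.0),
]
print(triage(alerts))
```

The point of a pass like this is that the first half of the morning collapses into reviewing a short "act now" list instead of a hundred rows.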
6:02 It might be kind of an automation or a better understanding of what's going on in that particular workflow. And so when we start kind of thinking about this at a little bit higher level, you know,
6:14 your organization across land, ops, BD, accounting, whatever it may be, being able to have access to different agents in different groups that have access to your
6:29 systems of record. Maybe it's Quorum for land; it could be Petrosite or WellView for drilling. And on the operations or production side, think about all the different SCADA systems
6:42 that are out there, all the different production accounting systems that are in place today. And often, especially as we're going through acquisitions, there's this need to
6:53 integrate all that information. From this intelligent orchestration layer all the way down to the data, we can get access to that, start blending it across multiple departments, and tackle
7:08 workflows at scale. And so I love that Bill set us up perfectly for what's down the road five years from now: this idea of managing agents, an agent that understands all the tools that are
7:23 available. This is exactly how we see the world at Collide. And you can take specific examples here. And we've touched on a couple of these already from a regulatory standpoint. Maybe the revenue
7:36 statement is that first piece into the non-op side. But you start looking at this left to right and start thinking about all the different lease agreements that are in an organization. Some of those
7:50 are probably in Quorum. Some are in systems of record already; many are not. And what we get to do is basically take all those provisions, understand the detail of those leases and how they
8:03 impact maybe the drilling team, or how they impact even the accounting side, being able to understand what payments are there. Bonus payments, for example, might be required for some leases,
8:16 where other leases might have other payment terms. So all of this is interconnected. And if you've felt this within your organization, talking to different groups, talking to different
8:26 departments, having to work across those groups and the data they have access to, that's exactly how we see building out some of these agents across different functions. I think
8:40 one that we're gonna spend a lot of time on is regulatory. Regulatory has the nuance of each individual state, the differences if you're operating vertical or horizontal wells, and
8:55 having to understand what stacked laterals look like when filing W-10s. There are a lot of details, a lot of, I'll say, inherent knowledge in some of these teams that we
9:07 have to surface and bring out so that you can see that value come all the way through on an individual use case. Another piece I would say is pretty common across all oil and gas
9:21 companies would be more on the commercial side, in understanding gas purchase agreements. So we've been able to take those gas purchase agreements (you'll hear Ariane talk a little more
9:31 about that specific workflow), but we're pretty excited about not only that starting point of being able to search across purchase agreements, but also creating addendums to those: being able to
9:41 understand what those initial terms were, how you can negotiate those terms in your favor, and then also figuring out where there's overlap, or even a niche understanding of what
9:55 dedications are there for each contract. And how does that
10:02 relate back to operations, and where is this all located from an operations standpoint? I think from our perspective, we start looking at all these individual agents. And at first glance, you
10:18 could say, and you could argue, that we're going after a lot of ground here. And I think it's safe to say, yeah, we're going to be pretty aggressive about delivering individual use cases. But what
10:32 we see is, once you can get into one use case (maybe you want to start with regulatory, and that requires only your well database and maybe your production accounting system or the production
10:46 system itself), we can take that and start expanding into other use cases. Maybe after the W-10 it's the G-10. Or after the G-10 you want to get into active injectors, and there are some filings
10:60 you have to do there. I know next week we get to talk about R-2s and understanding the oil hauling side, so we can begin looking at what that format looks like. I think one
11:12 of the fascinating things that happens once you start getting in and using Gen AI a little bit is you start uncovering all the other little workflows that you're doing on the side. That may be
11:24 copying and pasting; it may be downloading a spreadsheet from somebody else; it could be gathering a well list. You know, we have an example on just a well status change and what that does to the entire
11:36 organization. Maybe a well goes from producer to injector, and you have to get everyone involved so that as you report that to the regulatory side, you know the impacts to the lease agreements,
11:49 you understand the impacts to the contracts and the midstream partners. There's so much involved in just some subtle changes, and we can really begin looking at those use cases and trying to
12:01 measure value across each one. I think the other piece that John highlighted earlier was the data access side. We spend a lot of time internally focused on: all right, do we go after
12:15 WellView, Petrosite, or OpenWells on the drilling side? And we're really relying on our customers to help direct us in the right direction to go there. But for each one
12:26 of these systems of record, we're actually hoping we can take a slightly different approach than everyone else. We're not going to necessarily build a direct connection or a direct integration,
12:37 which is pretty typical across all the E&Ps today. We want to have an MCP server, basically this Model Context Protocol, that sits above WellView, above maybe your production and your
12:50 accounting systems, and even potentially the SCADA side as well, so that we can read alerts and understand everything. That MCP layer for us allows us to go directly back to the Collide
13:04 interface so that you can query those systems directly. So for example,
13:11 with an E&P in West Texas that literally spent, I think it was three or four months, taking their well documents, historical well documents, and bringing, quote unquote, most of the information
13:25 over into WellView: copying, pasting, having the intern go through and do all that. Come to find out, people are still using the documents and relying on the documents, and not relying on WellView.
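Surfacing where two sources like that disagree can start as a simple field-by-field diff. A minimal sketch, assuming hypothetical extracted records; none of these field names come from WellView itself.

```python
# Hypothetical sketch: diff values extracted from well documents against a
# system-of-record export. Field names and records are made up for illustration.

def find_differences(doc_record: dict, system_record: dict, tolerance: float = 0.001) -> list[str]:
    """Return the fields where the two sources disagree."""
    diffs = []
    for field in sorted(set(doc_record) | set(system_record)):
        doc_val = doc_record.get(field)
        sys_val = system_record.get(field)
        if doc_val is None or sys_val is None:
            # Present in one source only: worth a human look.
            diffs.append(f"{field}: missing in one source ({doc_val!r} vs {sys_val!r})")
        elif isinstance(doc_val, (int, float)) and isinstance(sys_val, (int, float)):
            # Numeric fields get a relative tolerance instead of exact equality.
            if abs(doc_val - sys_val) > tolerance * max(abs(sys_val), 1.0):
                diffs.append(f"{field}: {doc_val} vs {sys_val}")
        elif doc_val != sys_val:
            diffs.append(f"{field}: {doc_val!r} vs {sys_val!r}")
    return diffs

from_documents = {"total_depth_ft": 9850, "casing_od_in": 5.5, "last_workover": "2019-04"}
from_wellview  = {"total_depth_ft": 9800, "casing_od_in": 5.5}

for line in find_differences(from_documents, from_wellview):
    print(line)
```

The output is a review list for the engineer, not an automatic correction; the subject matter expert still decides which source is right.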
13:37 Well, one thing that we're approaching them with is: great, let's use those documents, let's use the information in WellView, and let's find out where the differences are. So that if
13:49 I'm going to go out and spend a couple million dollars working over a well, or even thinking about doing something, I know exactly what's happened previously on that well. So that's one
13:58 specific example on the
14:05 drilling side. I think another example that we like talking about as well is on more of the accounting side, where there's a lot of
14:17 back-office operations and workflows that happen across the land and lease team, or even taking some of the production information and getting it into your accounting system. When we
14:30 have that layer sitting above and accessing the systems through the MCP server, you can actually query them directly. So now, maybe I'm sitting there in front of
14:45 Collide and I want to query our accounting system to see how the production was allocated, and I want to get a listing of the actuals from SCADA. Now I can start seeing both of those things
14:58 at the same time. And one opportunity we see with getting people access to a Gen AI tool is you can begin seeing some of these, I'll say,
15:15 maybe correlations is a strong word, but you can start seeing things across systems, understand where they're coming from, and still rely on yourself or your team as the subject matter
15:26 expert who's going to decide and make a choice. But at this point, now you have all that information. You have access to many different systems, and you can pull that information back and make an
15:38 intelligent choice, without having to worry about: do I have the most recent information? Do I have access to something
15:47 I'm unaware of? Do I need to go talk to another group, talk to another department, and figure that out? So we are excited about that manager-of-agents model and what we can do across different
16:00 groups and across different departments. I think one piece I mentioned briefly is the regulatory side, and there are two angles on regulatory that I just want to highlight. One would be
16:17 the filings, and obviously everything that has to happen for compliance. So that could be things like your active producers, inactive wells, the active injectors I
16:31 mentioned briefly, crude transport, any of those status changes, flare exceptions. Those filings we're now looking at primarily, in the dark yellow, in Texas and North Dakota, and then we have a couple requests for New
16:48 Mexico. And then from the data and insights perspective, you have us gathering some nuanced information from, maybe, the state of Ohio, so we can understand what details are buried in some post-job
17:02 reports, or even getting into the last well test that was filed, because it's not recorded in the system. We've actually seen a handful of examples of this for gas wells: it
17:15 didn't get recorded in the system, but it got filed with the state. So we're actually going back to the state on the regulatory side, seeing that and uncovering that. And what we're pretty
17:27 excited about is being able to use different types of agents to go out to websites and actually pull that information, or maybe we need to scrape a list of wells or some information there, to
17:41 generate a template or generate a response that goes back to the regulator. So, a lot of opportunity when you start thinking about the compliance side, and a lot of insights that we can
17:55 gain from gathering some of that data that's typically not available from an Enverus well database or public data system. Okay, one question. We've talked a little bit about the glorious,
18:08 and most of the workflows and agents that you're talking about, especially when you talk about land with horizontal wells. How does that relate spatially, beyond
18:18 that?
18:20 So, to integrate the map portion into how all this fits in with agents: you're pulling a unit that's made up of a lot of leases that are all in the path of the wells. Some of that could be a
18:32 structured hierarchy, but a lot of times it's not.
18:38 So there's a lot of excitement around mapping capabilities. Michael and I received really clear direction from Kinesh and Armitage. We are rapidly
18:50 working towards a very aggressive FDE model, similar to what Palantir has become really successful at: deploying on location, capturing the requirements, and then spitting out quick proofs of
19:01 concept to show use cases and confirm, like, hey, this is a scrappy prototype we put together in two weeks. It captures the core functionality that you're looking for. This is the output or
19:12 visualization that you described
19:19 to us. Can you confirm this is what you're looking for? You then go, heck yeah, this is awesome. Or, hey, Nick, you kind of missed the mark here; we need to go back and refine a few things.
19:22 Using AI creation tools like Cursor, we're able to rapidly deploy these quick prototypes, get feedback, and then work closely with Kinesh and the larger development team to take that core idea that
19:36 the end user gave to us, communicate it to the development team, and then scale it into an enterprise-grade tool that you can then have in your environment.
19:45 Mapbox is one of the core elements that we're looking at. It's a very strong mapping platform that can work seamlessly with multiple data sources. Esri and
19:56 other vendors work well, but there are a lot of price constraints and concerns around those. So that's the specific API that we're looking at. I know that you like to geek out over that kind of stuff.
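On the data side, platforms like Mapbox and Esri can render layers from the standard GeoJSON format, so turning ingested lease records into a map layer can be as simple as emitting one. A minimal sketch; the lease records, field names, and coordinates here are invented for illustration.

```python
# Hypothetical sketch: build a GeoJSON FeatureCollection of lease polygons
# that a mapping platform (Mapbox, Esri, etc.) can render as a layer.
import json

leases = [
    {"lease_id": "TX-0001", "lessor": "Smith Ranch", "expires": "2027-06-01",
     # A linear ring: first and last coordinate pairs must match.
     "polygon": [[-101.50, 31.90], [-101.48, 31.90], [-101.48, 31.92],
                 [-101.50, 31.92], [-101.50, 31.90]]},
]

def to_geojson(records: list[dict]) -> dict:
    """Convert lease records into a GeoJSON FeatureCollection."""
    features = []
    for r in records:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [r["polygon"]]},
            # Everything except the geometry rides along as feature properties,
            # so the map can show lease terms on click.
            "properties": {k: v for k, v in r.items() if k != "polygon"},
        })
    return {"type": "FeatureCollection", "features": features}

layer = to_geojson(leases)
print(json.dumps(layer)[:80])
```

Because the provisions extracted from each lease travel as feature properties, the same record that answers a chat question can also drive what appears on the map.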
20:06 Michael and I got to review as well. Is there anything else, specifically, that you want to know? Yep. Yep. Well, and I'm going to ask
20:17 the question. When does Tom get to see a prototype with a map? That is a good question. Thirty and a half hours. Yeah.
20:28 Depends on how many old fashioneds you feed me, Chuck. No,
20:33 but we can get that done very quickly. Very quickly. So, we were talking about this internally, and I was talking with Kinesh about it, and you couldn't tell if I was kidding or not,
20:46 'cause I told them, when we do this on the interface, the geologist is gonna want it upside down. And Kinesh looks at me, and I go, no, they always stand at the top of the table and they're
20:58 looking at the map upside down. He couldn't tell if I was kidding or not.
21:05 So we see a ton of opportunity. I mean, mapping is so core to different teams within oil and gas. And thinking about our data pipeline side: as we're bringing in contracts
21:17 or bringing in leases, the natural thing to do is then to create layers based on that. Yeah, so it was like a turning where you got so many thrones, I had all the legal and other constraints in
21:29 the warehouse. And so, being able to build layers, but then also present them, for everyone's benefit,
21:38 just how we pull up the PDF or the spreadsheet, we're essentially thinking you pull up the map in the same way, in the same kind of canvas. That way you have a way to go back and forth between
21:48 chat and the map and have a nice little workflow there. Are there any states that are not highlighted that y'all would like to see highlighted, or would like to see us dive into and investigate? I'll ask
22:00 you, Tom, maybe with you and Tessa. Yep, yeah, that hasn't come up yet, but that's a great one to add. I assume Arkansas?
22:11 Arkansas. Yeah. I personally don't need it, but there are a hundred old wells in California; I don't have anything to do with that. Yeah, we're not doing business in California yet. Yeah, and
22:16 so, yeah, I can think of a
22:28 couple examples where we're getting asked, on the BLM side, to interact with the data and bring in information to see what's not leased, and what are the terms of those leases, from different states up
22:48 here. Each one has a unique amount of information that you can get from the public data sources. We don't want to go after that information; that's not our intent here. Our intent is to go through
23:02 the documents, through the related information, that typically is hard to surface. And so really good examples of that are in Ohio and Wyoming and North Dakota and Montana, where there's
23:15 ancillary information in the well documents and the well history that typically isn't available in the Enverus well databases and IHSs of the world. So we'll continue to do that and basically
23:29 augment those public data sources and hopefully bring more insights, and a lot of granular well data, to the table. The other piece that I just want to talk briefly about, and maybe
23:45 not as exciting as some of the regulatory stuff that we're doing, but we have a beginning start to what we're thinking about as this non-op agent, or, before Chuck left, he would
23:58 call it the OBO bot, because we have to use 'bot' for everything. So this non-op side started on the revenue statement. You saw that what we're going to do is essentially extract all that
24:10 information and then be able to push it somewhere: maybe it's an ARIES database where you want to see the actuals, maybe it's an internal production database where we're going to add in the actuals.
24:21 We then take another portion of that and create a CDEX file to import into an accounting system. So that's the basic blocking-and-tackling piece of it. Where it gets super
24:32 interesting is as we start looking at AFEs and decoding some of the AFEs: doing the same process, extracting a lot of that information and understanding the costs, suppliers, all that detail, and
24:44 then being able to actually take those volumes, take the information, and do the return-on-investment analysis, so you can actually see IRR, or any ROI metrics that you have internally, pairing
24:57 that with some of the lease information and typical contract information for those non-op partners. And that's gonna be a much larger workflow when we start developing it, but that's
25:11 our approach: we'll take one of these small use cases, show a little bit of value, and then grow into a larger workflow that would be a multi-agent,
25:23 multi-step type of process. The non-op piece, I feel like, at many of the companies that we're talking to today has just a few resources dedicated to it. So it seems like another reason,
25:39 another rationale, for developing something that's focused on the non-op side. I know, back in my day at Chevron, essentially this whole process was an email inbox that someone would
25:53 have to drag into a shared folder. That folder would then be reviewed, and you'd have a monthly summary produced that was then sent out to all the asset teams and asset managers.
26:07 So we're not going back to that. We're going to try and make this as agentic as possible and actually
26:16 make the non-op engineer's life so much easier, but also give him some unique tools to handle the amount of data that's coming in.
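The AFE economics step described a minute ago (take the extracted costs and volumes and compute IRR and ROI) reduces to a standard discounted-cash-flow calculation. A sketch with made-up cash flows, solving for IRR by bisection on the NPV function:

```python
# Hypothetical sketch: solve for IRR on a non-op AFE's cash flows by bisection.
# The cash flows are invented for illustration (initial AFE spend, then net revenue).

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value, one period per cash flow, first flow at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = -0.99, hi: float = 10.0, tol: float = 1e-8) -> float:
    """Rate where NPV crosses zero, assuming exactly one sign change in the flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the root is at a higher rate
        else:
            hi = mid
    return (lo + hi) / 2

# AFE spend of 1000 up front, then 300 per period of net revenue for 5 periods.
flows = [-1000.0] + [300.0] * 5
rate = irr(flows)
print(f"IRR per period: {rate:.4f}")
```

In practice the flows would come from the extracted AFE costs and the partner's forecast volumes times price decks, but the solver itself doesn't change.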
26:32 Can you talk to me about this? It would probably be good, it would be interesting, to focus on non-op companies, companies where that's all they do, and see their workflows; then that'll be very
26:40 applicable. Yeah, there are a lot of operators who don't care.
26:45 They will. Yes.
26:49 Apron. Yeah. Perfect. And so, in the last couple minutes that I have: we spend a lot of time talking about use cases with different groups, marketing teams, commercial, BD, getting into some
27:05 of the production operations and even some of the drilling teams. And I think the focus of this, and you'll hear Nick and Michael and John repeat this as well, is start small.
27:18 Start on a focused asset, specific fields, a specific area. Really think about that use case and boil it down to a little area that you can expand, where there are obvious places to
27:33 expand on it. And we tend to focus a lot on those outcomes. So we'll be asking for value statements. We'll be asking: how much time did it take? How much money does it cost? Where is there an
27:45 opportunity to actually increase revenue, if it's more of a production-focused workflow? And then thirdly, building on the team's capability. And one thing that we're seeing, where we have
27:59 champions (and we get to hear from two of them today), where we have internal champions that are guiding the company and helping direct opportunities across the organization, is we find
28:12 so much value in those partnerships, and in being able to take on not just one use case or two use cases; I think for some of these companies we're going to have probably five to ten use cases in Q1
28:27 of next year. So it's an exciting time to basically see the team capability grow. And when we're deploying these individual agents, we often see a lot of interest and an uncovering of new
28:46 workflows, new ideas that they can bring back to us. So it's a great opportunity. And then I wouldn't
28:55 leave in front of you without the little pitch of: work with us, and collaborate with a partner that is going to help you go through all the new acronyms, the LLMs and MCPs and RAGs and agents and
29:09 all that good stuff. We're excited to have you all here and just appreciate the opportunity to be in front of you and to work with you. So with that, I'll open it up for any questions or any other comments.
29:23 Have you gotten into emissions control and flaring?
29:28 We have not. There are a couple of operators that have talked to us about permits for flaring, which we could see coming down the pipe. And then on the flip side,
29:42 being able to report based on some methane standards. We definitely see that as another important regulatory workflow, where you have certain requirements in Colorado versus PA
29:56 versus other states. And, you know, in Texas one operator is asking us for the flare exception permit, versus another in Colorado where we're on the other side of that kind of reporting,
30:11 reporting methane. So. Yeah, we're going to be needing our flaring permits. We're on roll
30:17 360 by schedule. So it's kind of difficult, as we're going through everything, once you add up all our flaring numbers, and we'd be like, oh, we've flared this much for now, so the next six months we're
30:31 gonna have to watch our volumes. And how are y'all handling that, or managing it, today, or back then? So I was a facilities engineer, so it really was us out there figuring out what our constraints are. You know,
30:48 we had bumped down production a little bit because of our flaring limits. One big thing is a lot of collaboration, and making sure that our field users are capturing everything, so you don't
31:08 have to redo the process, I guess.
31:14 Well, that sounds like a great one to sit down and figure out: okay, where can we attack that, either on the permit side or the reporting side? Yeah, yeah. And our emissions,
31:26 where are our emissions coming from? There were different technology things available out there, but are our emissions coming from our tanks, or are they coming from another facility that's adjacent to
31:39 us?
31:42 Cool, thank you for that.