CRAFTED.

"We're not building a set of blogs or something like that where the domain is well understood. We're really pushing the forefront of what people are doing with vessels and optimization." — Todd Sundsted, former CTO, Nautilus Labs
"The effect of global shipping on the climate is hard to overstate," reports The New York Times. "Cargo shipping is responsible for nearly 3 percent of global greenhouse gas emissions — producing roughly as much carbon each year as the aviation industry does."

Nautilus Labs uses AI to make maritime vessels more efficient, helping giant cargo ships optimize their routes across the oceans and suggesting when they need maintenance, saving money and reducing carbon emissions in the process.

On this episode of Crafted, Nautilus Labs' former CTO Todd Sundsted describes how the team models ships and their routes and pushes the envelope with AI. He also shares his approach to organizational design and how to align teams to customer problems.

Key Moments:
00:00 - Intro
01:53 - Helping cargo ships be more efficient
04:17 - Getting the data from difficult environments
06:24 - Aha moments
07:37 - Improving Nautilus Labs’ organizational design
12:31 - Aligning teams to the impact they have
15:15 - Outro

What is CRAFTED.?

Honored two years in a row as a top tech podcast by The Webby Awards, CRAFTED. is a show about great products and the people who make them, featuring incredible founders, innovators, and makers who reveal how they've built game-changing products, and how you can, too.

What trade-offs did they make? What experiments did they run? And what was the moment when they knew they were on to something BIG?

Hosted by Dan Blumberg, an entrepreneur, product leader, and public radio host. Dan has founded startups and led product releases and growth initiatives at LinkedIn, The New York Times, and as a consultant to big banks and startups. Before getting into tech, Dan produced and guest hosted WNYC's Morning Edition, the most-listened-to show on the country's largest NPR station.

Listen to CRAFTED. to find out what it *really* takes to build great products and companies.

Todd - 00:00:00: Don't assume you can take some requirements that someone hands to you on the product side or the design side and just implement them with no questions asked. Think about what you're being asked to do, bring your mind to that challenge, ask some more questions if you need to, and then synthesize that into the solution that you're trying to build.

Dan - 00:00:19: That's Todd Sundsted. He's the former CTO of Nautilus Labs, which uses AI to make maritime vessels more efficient. Nautilus helps giant cargo ships optimize their routes across the oceans and suggests when they may need maintenance; doing that saves money and reduces the ships' carbon emissions. I spoke to Todd just before Nautilus was acquired by Danelec, and in this episode he shares more on how Nautilus pushes the envelope with AI.

Todd - 00:00:45: Like we're not building a set of blogs or something like that where the domain is well understood. We're really pushing the forefront of what people are doing with vessels and optimization.

Dan - 00:00:54: Welcome to Crafted, a show about great products and the people who make them. I'm your host, Dan Blumberg. I'm a product and engagement leader at Artium, where my colleagues and I help companies build incredible products, recruit high-performing teams, and achieve the culture of craft they need to build great software long after we're gone.

Todd - 00:01:18: Yeah, we focus on two things, really: optimizing the voyages that vessels are on as they go from port to port, and optimizing the performance of the vessels themselves. To really simplify it, the less efficiently a vessel performs, the more fuel it's going to consume and the more emissions it's going to emit. So the signal you look at is, hey, we think we're consuming too much fuel. Then there's a troubleshooting process to identify how to optimize that vessel and eliminate that waste. And on the voyage side, it's much the same thing.

Dan - 00:01:53: Can you take us a little deeper into one of those two products? Explain a little more of how you're building something novel that's helping these giant cargo ships be more efficient?

Todd - 00:02:02: Yeah, I think a lot of it starts with how you think about modeling the vessel. Think about the amount of fuel a tugboat is going to consume versus a large container ship; obviously they're different. So it really starts with how you think about a vessel, and one of the innovations that Nautilus brought into the market was using machine learning and data to build more accurate models. We build models that use physics to simulate how a vessel is going to behave, but rather than tuning the parameters by hand, we use historical voyage and vessel data to figure out what those parameters should be. So we build much more accurate models of how a vessel is going to perform. That's the nuts-and-bolts side of it.

Then on top of that, the question is, how do you look at signals and figure out what's wrong? Let's take performance optimization as a good example. The signal you might have is, hey, I think we're consuming too much fuel. But what does that actually mean? Do I have stuff growing on the hull? Do I have problems with the main engine? Is there something else going on? So the question is, how do you look at various kinds of data to help the operators and owners troubleshoot those signals? What we do is, once again, look at data and use machine learning techniques to sift through it and get a sense of, based on this model we built of the vessel, are we probably looking at hull fouling, or are we probably looking at a main engine issue? And all of that depends on the kind of data we can get, because not every vessel has high-frequency sensors.

So really the technique is all about taking advantage of the data we've got, building good models, and then using that data, both historically for training and going forward, to try to pinpoint sources of performance loss, and finally raising that to the awareness of the owners and operators so they can take effective action on it. And that's the key point. When we think about performance, the goal is, hey, can we help you spot, say, hull fouling one voyage earlier than you might have otherwise? Because 5% additional fuel cost or additional power is a substantial chunk of additional fuel you're going to spend, and of course a chunk of additional greenhouse gas emissions you're going to dump into the atmosphere.
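To make that nuts-and-bolts step concrete, here's a minimal sketch of the general idea Todd describes: a physics-shaped fuel curve whose parameters are fit from historical voyage data instead of being tuned by hand. The functional form, names, and numbers below are illustrative assumptions, not Nautilus's actual model.

```python
# Illustrative sketch: fit the parameters of a simple physics-shaped
# fuel model from historical voyage data, rather than tuning by hand.
# The cubic speed term reflects the classic propeller-law approximation;
# a real vessel model would be far richer.
import numpy as np
from scipy.optimize import curve_fit

def fuel_rate(X, a, b, c):
    """Fuel consumption (tons/day) from speed and draft.

    a, b, c are the parameters we learn from data: roughly a baseline
    load, a speed^3 resistance term, and a draft (loading) correction.
    """
    speed, draft = X
    return a + b * speed**3 + c * draft

# Historical noon-report observations (hypothetical numbers).
speed = np.array([10.0, 11.5, 12.0, 13.2, 14.0, 12.5])   # knots
draft = np.array([ 9.8, 10.1, 11.0, 10.7,  9.9, 10.4])   # meters
fuel  = np.array([18.0, 24.5, 27.9, 35.2, 41.0, 30.1])   # tons/day

params, _ = curve_fit(fuel_rate, (speed, draft), fuel)

# "Are we consuming too much fuel?" becomes a residual check: compare
# observed consumption to the fitted model and flag sustained deviations.
residuals = fuel - fuel_rate((speed, draft), *params)
print("fitted params:", np.round(params, 4))
print("residuals (tons/day):", np.round(residuals, 2))
```

With a fitted baseline like this, sustained positive residuals are the signal that kicks off the troubleshooting Todd describes: hull fouling, engine issues, and so on.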

Dan - 00:04:17: Can you paint a picture of how you actually get the data? Do you get it in real time? Do you need it in real time?

Todd - 00:04:23: Yeah, that's a great question. There's this concept in maritime of a noon report, which is literally a report of a bunch of characteristics of the vessel and the voyage, captured at noon and maybe noted in a log. The good news about noon reports is that they're pretty consistently captured. It's things like: how deep is the vessel in the water, what's its draft? What's its heading? What's its speed over water? Speed through water? A bunch of characteristics like that. That's the standard baseline data you have available about vessels of almost any kind.

Then on top of that, there's sensor data. Noon data is pretty low volume; you could literally transmit it by speaking over a phone if you had to, and there's enough connectivity that ships can usually send it back in one form or another. Sensors are a couple of orders of magnitude more data than that, because you're often sampling at minutes or seconds or even sub-seconds for, say, engine monitoring. There it's a much broader variety, and the quality of what you're looking at depends a lot on the particular sensor. Speed through water is notoriously unreliable, because you've got a sensor sitting in the water trying to measure the speed through water, and of course gunk gets in there and fouls up that particular sensor. Whereas something like engine shaft speed is much easier to measure reliably, as you can imagine.

All of that then gets aggregated in a bunch of different ways. Sometimes it's stored on the vessel and sent back in batches, maybe every day. Increasingly, a lot of it is streamed to data warehouses or data storage facilities shoreside, and many of those have APIs that shipping companies have developed that you can hit. So it's pretty heterogeneous in the sense that there are a lot of different kinds of endpoints. After that point, it's a pretty traditional data pipeline: we drop the data into an S3 bucket, and then there are successive transformations that normalize it, identify problems with it, and get it to a point where we can build applications on top of it.
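As a rough illustration of that normalization step, here's a small sketch that folds a daily noon report and a high-frequency sensor feed into one common table, nulling out implausible speed-through-water readings along the way. The schema, field names, and quality rules are assumptions for the example, not Nautilus's actual pipeline.

```python
# Illustrative sketch: normalize heterogeneous vessel data -- a daily
# noon report plus a high-frequency sensor feed -- into one tidy table.
import pandas as pd

# Hypothetical raw inputs.
noon_raw = pd.DataFrame({
    "dt": pd.to_datetime(["2024-03-01 12:00", "2024-03-02 12:00"]),
    "stw": [12.1, 11.8],          # speed through water, knots
    "draft_m": [10.2, 10.2],
})
sensor_raw = pd.DataFrame({
    "timestamp": pd.date_range("2024-03-01 12:00", periods=60, freq="min"),
    "speed_through_water_kn": [12.0] * 58 + [-5.0, 45.0],  # two bad readings
})

def normalize_noon(raw: pd.DataFrame) -> pd.DataFrame:
    """Rename noon-report fields to the common schema."""
    df = raw.rename(columns={"dt": "timestamp", "stw": "speed_through_water_kn"})
    df["source"] = "noon_report"
    return df

def normalize_sensor(raw: pd.DataFrame) -> pd.DataFrame:
    """Quality-flag and downsample the high-frequency feed."""
    df = raw.copy()
    # Speed-through-water sensors foul easily (as Todd notes), so null
    # out physically implausible readings before aggregating.
    bad = (df["speed_through_water_kn"] < 0) | (df["speed_through_water_kn"] > 30)
    df.loc[bad, "speed_through_water_kn"] = None
    df = df.set_index("timestamp").resample("15min").mean().reset_index()
    df["source"] = "sensor"
    return df

combined = pd.concat([normalize_noon(noon_raw), normalize_sensor(sensor_raw)],
                     ignore_index=True)
print(combined)
```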

Dan - 00:06:24: What have been some of the big unlocks or any sort of aha moments as you've been building these models and discovered something that allowed you to help the ships be way more efficient?

Todd - 00:06:33: I think the most interesting one for me, generally, is how effective you can be at taking the high-frequency sensor data you have for the vessels that have it, and building models for vessels that don't have it, models that still perform better than the noon-only models that are the industry standard. So the question is how you optimize for those kinds of vessels. We have a methodology called data pool modeling, where we can train models for vessels that are similar to vessels we do have sensor data for, and get a considerable improvement over what's possible with just that baseline noon data. Basically, for vessels that only have noon data, these data pool models give us about 60% of what you would get if you actually had sensors installed. In retrospect, it's like, oh, that kind of makes sense, but at the time it wasn't immediately obvious that it was going to work, or work out well. And that's a big chunk of what we offer right now: an almost-as-good-as-sensors solution without having to have sensors.
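Here's a minimal sketch of the pooled-modeling idea on synthetic data: train one regression model on pooled voyages from several sensor-equipped vessels of a similar class, with a vessel characteristic as a feature, then apply it to a sister vessel that only reports noon data. This is a generic illustration of the concept, not Nautilus's actual data pool method.

```python
# Illustrative sketch of pooled modeling across similar vessels.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def make_vessel_data(length_m, n):
    """Synthetic voyages for one vessel: (speed, size) -> fuel (tons/day)."""
    speed = rng.uniform(9, 15, n)
    fuel = 0.01 * length_m + 0.012 * speed**3 + rng.normal(0, 0.5, n)
    X = np.column_stack([speed, np.full(n, length_m)])
    return X, fuel

# Pool high-frequency data from several similar, sensor-equipped vessels.
pools = [make_vessel_data(length_m, 500) for length_m in (280, 290, 300)]
X_pool = np.vstack([X for X, _ in pools])
y_pool = np.concatenate([y for _, y in pools])

model = GradientBoostingRegressor().fit(X_pool, y_pool)

# Apply the pooled model to a sister vessel (295 m) with noon data only.
X_new, y_new = make_vessel_data(295, 30)
err = np.abs(model.predict(X_new) - y_new).mean()
print(f"mean abs error on the noon-only vessel: {err:.2f} tons/day")
```

The design intuition is the lookalike one Dan raises next: vessels of a similar class behave similarly enough that a model trained on their richer data transfers useful signal to a data-poor sibling.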

Dan - 00:07:35: That's awesome. That reminds me of marketers who create lookalike audiences based on characteristics of their best customers. You said organizational design is a passion of yours. I'd love to understand: when you came into Nautilus Labs, what did you observe? How are you helping the teams organize to be successful?

Todd - 00:07:52: Yeah.I mean, in a nutshell, I think coming in at the stage that I came in, right, it's about building the notion of a team in the first place. Obviously, there are teams, you've got departments and you had an engineering team and you actually had a couple of teams inside of engineering, but they weren't really teams in any kind of really traditional functional sense, right? The dynamic of a startup typically is you get a bunch of people in a room at some point, maybe four or five or six people. And because there's a lot to do, everyone kind of grabs an area that they're passionate about and starts working on it. You know, one person's hammered. One person's hammering away on the database layer and another person's hammering on the data pipeline and another person's building the front end. And, you know, I think it's as pragmatic an approach to kind of bootstrapping up early stage startup as I can imagine, right? The problem is that that doesn't scale, but, you know, 20 people all with their individual area of expertise, right? You go from four or five people and it being somewhat streamlined and yeah, someone goes on vacation or whatever, but you're cohesive enough group that you can kind of deal with those interruptions. And navigate around them when you start getting bigger and bigger. Obviously you just run into impediment after impediment. Someone leaves and they take the knowledge with them and they go on vacation and they take the knowledge with them. So those knowledge silos become less and less effective. You also bring too few brains to tough problems, right? The problems we've got, I think are very traditional wicked problems, right? Like we're not building a set of blogs or something like that where the domain is well understood. We're really pushing the forefront of what people are doing with vessels and optimization. So. How to do it isn't clear. How people are going to react to what you're doing isn't clear. What value means isn't clear, right? There's just a whole bunch of challenging issues that you have to try to unwind, right? And so those are hard problems. And if you've got one person's brain thinking about some subset of it, you've got too little brain power really focusing on it. And I guess what I've found in most cases is most problems in this space aren't ideally solved by one engineer going off into a corner. It's typically. Maybe two engineers in your pair of programming, and you immediately start to solve some problems, right? You get rid of the knowledge silos. You, I think, increase your productivity. So it's a long digression to say that the thing that I did at Nautilus and the thing that I would probably do with any similar size company is really focus on going from this, you know, we had a front-end team and a back-end team, and, you know, we had three different product managers multiplexing work on the front-end and the back-end team. So all three of those product managers were dispatching work to both of those teams, and then the teams were trying to sort out that influx of work, and then they were trying to coordinate amongst themselves. I mean, it was just like kind of the antithesis of exactly what you really want in a well-functioning organization. 
So pulling that apart, we used Team Topologies: building stream-aligned teams that are really focused on feature areas that are meaningful to our clients, and that are fairly decoupled so they can work independently, while refactoring common concerns out into platform teams. I think that's really been the approach. Nothing weird or unique there; it's one of those things that just takes a lot of effort.
Dan - 00:11:14: Yeah, you're absolutely speaking Artium's love language here. Pair programming is a big tenet here, for all the reasons you just described: breaking down silos and that intense focus you talked about. Some people hear "pair programming" and think, that's two people doing the work of what I thought was one person. It's counterintuitive that it can be more effective. I wonder if you could expand on that for another beat.

Todd - 00:11:34: Yeah, my standard response is: try it. I can guarantee that anyone who's actually done honest-to-God pair programming realizes very quickly that it is really challenging to do. You bring someone else on board, and if you're really doing it the traditional way, one person's writing a test and the other person is making that test pass. You're engaged at a very high level of thought constantly. Your mind's not wandering; you're not getting bored and doing something else. It's the hard, hard, hard way to do it, not because it's like climbing over rocks, but because it pulls a level of performance out of you that you aren't going to see otherwise. It's like going to the gym by yourself versus going with a lifting buddy. The workout you get when you've got somebody cheering you on and challenging you is just significantly better than any workout you're going to get by yourself. It's the same thing.

Dan - 00:12:31: I want to go back to what you said about how it's ambiguous or can be difficult to measure the impact that teams are having. You're a writer, you're a storyteller, and I'm interested to know how are you helping teams align to the impact that they're having?

Todd - 00:12:43: Yeah.So that is probably the topic du jour right now, in fact, as we really think about. So you go through some kind of basic reorganization, reorientation activities, right? The first order of improvement is just getting a sensible structure in place. You're not even thinking about optimizing it. But so right now we're trying to think about how we get better, right? And a big chunk of that is helping people align more closely to the mission. So for me at this stage, I think it comes down to really trying to plug people into the problems, right? My general sense, kind of from a litmus test perspective, is a bad sign is people that are trying to operate at arm's length, right? I think you can make an argument that, hey, if I don't have all the distractions of what the clients want and I can really think about the purity of the client. I can really think about the code that I'm writing. I'm going to be more productive, right? And maybe if we were building a database or something like that, I think you could buy that argument. And maybe there's even a handful of problems where focus on the technical components is going to make that solution better. But most of what we build is going to be much better if we tie into what our clients need. So my point is, you can very easily go from, you know, kind of an understanding the mathematics involved to a solution that makes no sense whatsoever for a user, right? So right now, you know, I'm not at the point of being able to say, hey, let's talk quantitatively about KPIs, about what alignment means, right? But I can tell you qualitatively what we need, right? We need people to sit down and say, hey, let's understand what this problem looks like. When you're segmenting a voyage manually, you know, how do you break it up? How many times a day or how many times a week or how many times over that voyage do you change what you're doing? What's too many? What's too few? And because this is a complex domain like that, right? You're just not going to know the answers to those questions. You're certainly not going to know if you come from outside maritime. Even if you come from inside maritime, different segments operate differently. So you just have to ask the questions. So right now, to answer your question, what I try to do to get alignment is to push the teams to just get close to the clients. Don't assume you can take some requirements that someone hands to you on the product side or the design side and just implement them with no questions asked. Think about what you're being asked to do. Bring your mind to that challenge. Ask some more questions if you need it, and then synthesize that into the solution that you're trying to build. So anyway, that's kind of in a nutshell down the road. Maybe there are some KPIs that we can use to measure this, but we're just not at that point yet.

Dan - 00:15:14: Right on. Todd, thank you for your time.

Todd - 00:15:17: It's been a pleasure. Thanks, Dan.

Dan - 00:15:19: That's Todd Sundsted. This is Crafted, from Artium. If you're building something ambitious, let's talk. At Artium, we love partnering with visionaries to help them build incredible products, recruit high-performing teams, and achieve the culture of craft you need to build great software long after we're gone. You can learn more about us at thisisartium.com and start a conversation by emailing hello@thisisartium.com. If you liked today's episode, please subscribe and spread the word, because Crafted can up your game.

Todd - 00:15:51: It pulls a level of performance out of you that you aren't going to see otherwise.