You’ve seen the dancing Boston Dynamics dogs, Honda’s ASIMO greeting people at malls, and the half-court-shooting robot at the Olympics, among other awe-inspiring robot stories that are getting increasingly common. But equally fascinating, especially for us programmers, is the amount of programming and structure needed to make sure these robots work as intended. In this episode, we talk with Abhay Venkatesh, Software Engineer at Anduril Industries, about platforms for robotics and the intricacies happening inside these mechanical wonders.
- Anduril Industries https://www.anduril.com/
- Nvidia https://www.nvidia.com/en-us/
- Boston Dynamics https://www.bostondynamics.com/
- Arduino https://www.arduino.cc/
- Raspberry Pi https://www.raspberrypi.org/
- Nvidia Tegra https://developer.nvidia.com/tegra-development
- NixOS https://nixos.org/
- Docker https://www.docker.com/
- Bazel https://bazel.build/
What is Programming Throwdown?
Programming Throwdown educates Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.
Programming Throwdown Episode 118: Building a Robotics Software Platform with Abhay Venkatesh
[00:00:00] Jason Gauci: Programming Throwdown Episode 118: Building a Robotics Software Platform with Abhay Venkatesh. Take it away, Patrick.
[00:00:23] Patrick Wheeler: Hey, everybody. We're here with another exciting interview. Today we have Abhay, who's a software engineer at Anduril Industries. Go ahead and say hello to us, Abhay, and tell us a little bit about your current role there.
[00:00:36] Abhay Venkatesh: Yeah, thanks for having me on the podcast, Jason and Patrick. I'm Abhay, I'm a software engineer at Anduril Industries, and I work on platform infrastructure, mainly focusing on aspects of simulation and deployment and so forth. Before that I did a lot of work in the autonomy area, did some foundational work there, and I also have a background in perception and machine learning.
[00:01:02] Patrick Wheeler: You said so many buzzwords already right up front. You got a lot of people very excited. I mean, I think there's a couple topics that I think are very motivational to people early on when they're getting into programming, or at least they were for me. Maybe now people are excited by like, websites and web apps and stuff. But for me it was always building games and building robots.
[00:01:24] Abhay Venkatesh: Right. Yeah. I guess to the point of buzzwords, I often like to say we do everything, but blockchain and Unreal. (laughter) So we really do have that diversity of expertise, and certainly robotics is one of the areas that we are particularly excited about, I'm particularly excited about. And yeah.
[00:01:43] The first time I started working on robotics was at Stanford, where we had this one project of building a chair bot. The problem space was: we're going to study how humans interact with various robotic furniture, and how you can imagine those kinds of interactions. I worked on building this chair robot that was literally a chair with a Roomba attached to it, and you could control it using an iPad and run your experiments that way. So yeah, it's a fun space.
[00:02:17] Patrick Wheeler: I mean, my first thought is like, you're programming the chair to move out from underneath someone when they try to sit on it as a prank. (laughter) But I assume that's not what you were doing?
[00:02:24] Abhay Venkatesh: Yeah, I cannot deny that never happened, (laughter) but certainly it was a lot around "what is an acceptable approach?" Like, if you think of a chair, what are the acceptable interactions that would be socially pleasing, or at least not aggressive? Thinking about how robots should interact with humans in general, and trying to build a design model around that research goal. So at least that was my foray into robotics, and it has certainly taken me a lot more places from there.
[00:03:00] Patrick Wheeler: Very cool. So for a second here, I think most people kind of know the word "robot" or even know of robots, right? So you mentioned like Roomba, people think of like Androids. So, I mean maybe in like your mind, not, not necessarily like a definition, I mean, maybe that's kind of too boring, but like, what is it that makes robotics as like a field or a practice? Like what makes it different? What makes it robotics? And then for you, what makes it something that you're excited to work on?
[00:03:28] Abhay Venkatesh: Yeah. I guess I think of robotics in a pretty broad sense. So usually, like you mentioned, people think of Roombas. Or even more broadly, people think of some robot like C3PO or R2D2, something that somehow has a body and a physical state. But robotics can be much broader than that. Really, even just a sensor with a control loop, meaning that it interacts with the environment and gains data from the environment: you could think of that really as a robot.
[00:04:01] And in that broad sense, robots are all around us. Even your fridge has a robotic component to it, and even your microwave, or certainly your phone. So yeah, that's how I really think about it. And if you think about it in that broad sense, then you can start making sense of what robots really are, what they do for us, and the value of them.
[00:04:24] Patrick Wheeler: Are there things about programming robots like that, that you've found in your work that are different or unique? I mean, I guess for me, hearing you say it: it involves the physical world and interacting with it in some way, measuring it, affecting it, and having some control loop for that. Is that the essence of what makes it different? Or do you think there's something more there? Like before, you were mentioning your chair bot, and how there's a human component: once you have something interacting in the world, you're sort of interacting with humans as well. What do you find particularly engaging, or is there something that you find particularly engaging about that?
[00:04:59] Abhay Venkatesh: Yeah, I guess there are a few differences. The main thing I would say is that interactive component with the environment. So you can contrast it against, like, building a database, where it's a lot less effectual in the sense that you don't necessarily directly interface with a human; you're usually serving requests for other pieces of software. Whereas usually when you're making robots, a sensor or some actuator does have that closer control loop or interaction loop with the environment. And that is, I would say, quite different, mostly because interacting with the real world is really on a higher dimension almost.
[00:05:42] So when you're building, let's say, a drone and you have to make it fly around, it's kind of funny in the beginning, at least. Because when I was first writing control loops, you would think that the robot would behave in a fairly straightforward manner, but it almost never does. And it does take some getting used to in terms of building a robust robotic system and making it impervious to all the various states that could be possible in a complex environment.
[00:06:10] Patrick Wheeler: Ooh, impervious. I like that. Yeah. I mean, I've done a little bit of hobbyist dabbling, and I think this is the thing you're pointing out, right? It's very different from my day-to-day programming to work on something which has mechanical failures and mechanical limits. You have to think about it: it's not just, I'm at value 1 and I want to be at value 2. You have to think about how it transitions between, I guess we're calling them, states. Like, I have a servo motor, and a servo motor doesn't move instantaneously. So if I tell it to go to 90 degrees and then to 50 degrees and then to 70 degrees, well, it may still have only just started moving. So you've got to think about the progression of time and the limits of the system. I also find that a very different challenge, I guess, than, like you point out, working with a database or an application.
[00:07:00] Abhay Venkatesh: Yeah. Certainly, maybe you're getting at this notion of hysteresis, which I learned about when building these systems, where there's a lag in the system and you want to counteract that. Building some mechanism for handling that is pretty useful in general.
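The servo behavior Patrick describes, commands arriving faster than the motor can physically move, can be sketched with a toy model. The class, the 10-degrees-per-tick slew rate, and the numbers are all invented purely for illustration:

```python
class ServoModel:
    """Toy servo that can only slew a limited number of degrees
    per control tick, so rapid commands overlap in time."""

    def __init__(self, position=0.0, max_step=10.0):
        self.position = position
        self.max_step = max_step  # max degrees moved per tick

    def command(self, target, ticks=1):
        """Drive toward `target` for `ticks` control cycles."""
        for _ in range(ticks):
            error = target - self.position
            # Clamp the movement to what the hardware can do in one tick.
            step = max(-self.max_step, min(self.max_step, error))
            self.position += step
        return self.position

servo = ServoModel()
print(servo.command(90, ticks=3))  # 30.0: three ticks in, nowhere near 90
print(servo.command(50, ticks=2))  # 50.0: the new command takes over mid-motion
```

A control loop has to account for this in-flight state rather than assuming each command completes instantly.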
[00:07:18] Patrick Wheeler: Yeah. We never really talked in depth, I don't think, about control loops. I don't know that I want to go there now; that's going to end up being a whole topic unto itself, the control of something.
[00:07:28] And then I think you were talking as well about sensors and measuring and observing in order to have a control loop. And I think that's also a really interesting field, I don't know how much you've gone into it: even thinking about when you observe the world, it's pretty noisy, both because the world itself is noisy and because your sensors have noise, and they have limits. And knowing those as well, so that your model of how the world works is accurate to what's actually happening.
[00:07:56] Abhay Venkatesh: Yeah. I think that does get into this notion of, I guess, a platform, or a software platform, in which you can have a place to ingest all this data from sensors and track it. So when you have a robotic system, there's all this telemetry that you must track, and maybe collect health data from the telemetry, or use the telemetry itself to do some higher-order planning.
[00:08:23] And that's kind of where, having a, like a broader, let's say like platform that contains tracking systems, data, ingestion systems, and so forth can be pretty valuable, especially when you're, building simpler robotics applications that are not necessarily very highly complex on, let's say actuation, but are more complex on the side of making sense of the kind of information they are producing.
[00:08:50] Patrick Wheeler: Interesting. Oh, that's another very densely packed sentence, I guess, or sort of comment. So what you're saying is you have sensors and you're collecting information, and then you're doing some understanding of that information. And I guess when you were saying a platform or a framework for handling these things, there are some parts of it, and some flows, which are common between applications. Is that what you're getting at?
[00:09:18] Abhay Venkatesh: Yeah, absolutely. So you can think of a lot of these sensor platforms: let's say you have a smart home, or you have some security camera, or you even have a robot, or, let's say, a drone that's flying around. There's a lot of commonality between these kinds of robots. At the end of the day, it really is about getting the information from them, storing it somewhere, performing analytics on it, and then also maybe controlling them and sending commands to these various things. So you can think of it in that sense; those are the commonalities, I guess. And you do actually end up seeing a lot of these in many different application areas these days, from self-driving cars to doing delivery.
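The common flow Abhay lists, ingest, store, analyze, command, can be sketched as a minimal pipeline. `SensorPlatform` and the device IDs here are hypothetical names invented for illustration:

```python
class SensorPlatform:
    """Toy version of the common platform flow: ingest readings,
    store them, run analytics over them, and queue commands back."""

    def __init__(self):
        self.store = {}   # device id -> list of readings
        self.outbox = []  # (device id, command) pairs awaiting delivery

    def ingest(self, device_id, reading):
        self.store.setdefault(device_id, []).append(reading)

    def analyze(self, device_id):
        """A stand-in for real analytics: average the readings."""
        readings = self.store.get(device_id, [])
        return sum(readings) / len(readings) if readings else None

    def command(self, device_id, cmd):
        self.outbox.append((device_id, cmd))

platform = SensorPlatform()
platform.ingest("drone-1", 42.0)
platform.ingest("drone-1", 44.0)
print(platform.analyze("drone-1"))  # 43.0
platform.command("drone-1", "return_to_base")
```

Real platforms split each of these stages into dedicated services, but the ingest/store/analyze/command shape stays the same.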
[00:10:07] Patrick Wheeler: So, a little bit of an aside, but when you were saying that, it triggered a memory. Someone introduced me to a concept a while ago, and it's been kind of interesting to hear it in a broader context, cause I kind of think about it in terms of control loops, cause I guess that's where I first stumbled across it. But this is the OODA loop. Have you heard of this before?
[00:10:24] Abhay Venkatesh: No, what is that?
[00:10:25] Patrick Wheeler: Okay. So the OODA loop, I guess, is something that, I believe, a United States Air Force Colonel came up with, which is basically a way for humans to tackle problems. I believe he was working with fighter pilots and thinking about the decision-making process. And so he came up with this term, this OODA loop.
[00:10:42] So it's four steps, the four letters of OODA, O-O-D-A: observe, orient, decide, act. So he has it as this continual process running in your brain. You first observe, and you orient yourself, you decide what you're going to do, and then you act. Then you observe the output of that action, right? And so hearing you describe this, I know this is somewhat of an aside, but coming from, I guess what we call, meatspace, like humans and the Air Force, and having this OODA loop. And then I've heard it applied to business, right, and making business decisions. And then here I'm reminded of it again, when you talk about robotics and about how you want to get this flow set up and this cycle set up.
[00:11:23] Abhay Venkatesh: No, yeah. Actually, now that you mention that, I have basically heard something exactly like that, so it does come to mind now. And that action loop is, I think, at the crux of these kinds of robotic systems, especially because they have to make decisions in the real world, but also humans can make decisions based on the information collected from these, let's say, robots or sensors.
[00:11:49] It's observe, orient, decide, act, if I got that right. And yeah, you can map those onto the various components of, let's say, a robotics system or platform, where you ingest data and you collect it, and maybe you display it in some part of a web UI of your platform, and the user orients themselves. And then maybe they find some information they want to act on, and then they can send a command back to the robot and get their work done. So yeah, that's absolutely, I think, the right analogy when thinking of these kinds of systems.
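The mapping the two of them describe can be sketched as one observe-orient-decide-act pass per sensor reading. The thermostat-style setpoint and thresholds below are invented purely to make the loop concrete:

```python
def run_ooda(readings, setpoint=20.0):
    """One observe-orient-decide-act pass per sensor reading."""
    history = []
    actions = []
    for reading in readings:           # Observe: take in raw sensor data
        history.append(reading)        # Orient: fold it into a world model
        recent = history[-3:]          #   (here, a short running average)
        estimate = sum(recent) / len(recent)
        if estimate < setpoint - 1:    # Decide: pick an action from the estimate
            action = "heat"
        elif estimate > setpoint + 1:
            action = "cool"
        else:
            action = "hold"
        actions.append(action)         # Act: stubbed out as a log entry
    return actions

print(run_ooda([17.0, 18.0, 22.5, 23.0]))  # ['heat', 'heat', 'hold', 'cool']
```

In a real platform the "orient" step is a state estimator, the "decide" step is a planner or a human at a UI, and the "act" step sends a command back to the robot.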
[00:12:27] Patrick Wheeler: I'm debating with myself whether to go through the rest of the things you say, or to lift up a level and talk about all the different components of a platform, because you're already starting to allude to some. So you mentioned telemetry, the UI display component, trackers. I'm tempted to go into each of these, but I think that might become a little long. So we were mentioning a platform. To you, in your mind, since you work with these platforms and the commonality between them and the software that drives them: I don't know if you have a canonical list of, say, five components, but what are the most common ones that you see come up that are useful across these things? Could you talk a little to what the normal pieces are that would make up a robotics platform?
[00:13:12] Abhay Venkatesh: Absolutely. Yeah, I guess there certainly are some commonalities. Before I describe them, I do want to, I guess, clarify what we even mean by this platform notion. Cause in my head, a platform is something that enables applications, and with each application you introduce, or that is enabled by this platform, future applications are made cheaper or easier to integrate.
[00:13:37] Let me give you, again, maybe an example of what a platform might be. So normally, in the software world, you can think of, let's say, the iPhone as a platform: it enables the consumer internet, and each application that was added to the App Store made future applications, let's say, easier, because Apple could extend its functionality through the APIs exposed to app developers, and you also have this ecosystem effect. And over time, you had this explosion of apps on the App Store. So that's, I guess, roughly what I mean by platform. Does that make sense?
[00:14:17] Patrick Wheeler: Yeah. Maybe I'll take a little bit of an aside here, and I'll tell it back to you and you tell me if it makes sense. So I mentioned dabbling in, like, electronics and robotics stuff. I was just a hobbyist. So for me, one of the things that was really interesting is, even though I was a programmer, and I started very early, predating some of the more modern stuff, you would have to go and find a C compiler for an ATmega part, an Atmel part, the AVR 8-bit or whatever, right? And it was some strange platform. It had very low-level libraries. It was very hard to write for that chip. Then the Arduino and Raspberry Pi came out.
[00:14:55] And so to me, when I hear you, what you're describing is something like an Arduino, where it is a hardware set, but in actuality it's a library that gives you your input point. It gives you libraries for taking in sensor data, for actuating motors and whatever. And you can actually run it on several different boards, not just Arduino platforms themselves; even other processors will adopt that library. Because, like you were saying, it speeds up people's ability to get in and do integrations, and makes applications cheaper, because so much of the lifting is done for you.
[00:15:30] Abhay Venkatesh: Absolutely. Yeah, that's exactly what I mean by platforms, and Arduino certainly qualifies for that. There is a similar platform in the hardware world called Nvidia Tegra, which has basically, I think single-handedly, enabled the edge perception or edge IoT revolution that's currently happening. And certainly it's for the same reasons you mentioned: they do a lot of heavy lifting for you. They give you libraries with which you can implement perception algorithms, and you also get the board, obviously, with the GPU attached to it. And you can literally slap it on a robot, and now you have edge machine learning, edge computing. So that is the power of platforms.
[00:16:11] Patrick Wheeler: So I actually don't know anything about that. Can you speak to it? So it's, like, Nvidia, so I assume it means it has a general-purpose GPU processing setup on it? Is that the...
[00:16:21] Abhay Venkatesh: Yeah, that's basically, I think the idea. So Nvidia has these boards that you can use. And I guess the killer app there is, it's a board with the GPU on it. And the idea is that it's something that you can slap on at the edge. So it's pretty small, it's compact, it's low power and yeah, it runs Linux and it comes with, you know, a general purpose GPU on which you can run all the fancy machine learning and computer vision algorithms that you would want.
[00:16:54] And the promise of that, again, is you don't have to, let's say if you have frames or camera video that you have on your robot or sensors, the promise of Nvidia Tegra is that you don't have to ship those frames or video off to the cloud and run inference on the cloud. You can actually do it on the edge, which can yield performance improvements, much better latency on inference and so forth.
[00:17:17] Patrick Wheeler: Okay. That's probably worth a callout as well here. So the word "edge" isn't an edge between two nodes; the edge is, like, the frontier of the thing doing the observation.
[00:17:28] Abhay Venkatesh: Yes, exactly. So there's edge compute and then cloud compute, and that's the contrast here: the cloud runs in a data center somewhere, in a big warehouse, but the edge is near, it's at the edge of, where it's actually happening, so to speak. And you have compute that is spread out across the environment, right? Yeah, that's the idea.
[00:17:54] Patrick Wheeler: So that's pretty interesting. I guess in my experience, I've mostly encountered what you're calling edge compute, but edge compute as only edge compute, like there's no cloud component. So are you saying that for some robotics applications, and I guess that makes sense, you have cooperating, I don't know what you would call them, robots, agents, that interact, so some data processing is done on the edge, in the local thing, but then some is pushed to the cloud, and then data is pushed back out and shared across the others?
[00:18:20] Abhay Venkatesh: Yeah, absolutely. I think that division of labor is quite powerful. And the reason for this is that there are applications where there's this hard limit. So on the Nvidia Tegra platform I mentioned, you get four gigs of RAM, and even on the GPU you can't necessarily run the largest models there. So it has pretty hard limits in terms of what you can do.
[00:18:44] So, for example, if you want to do batch data processing, that's probably not the right place to do something like that. But you can ingest data from all your edge sensors, and maybe you have a fleet of edge sensors that are, like, patrolling the environment, doing all their work. Maybe they're in a warehouse or on a construction site and so forth. Maybe you do want to do analytics. You want to understand what they're doing. You want to get, maybe, a report of: how is my fleet doing as a whole?
[00:19:09] So those kinds of functions: you can have a data ingestion system to get all that data, put it in the cloud, and run batch data analytics and all the fun stuff you can do in the cloud that wouldn't otherwise be possible at the edge. So that kind of division of labor is quite powerful when you're building a robotics platform.
[00:19:29] Patrick Wheeler: Okay. So I guess you brought it up earlier, but we'll bring it up here again: telemetry. In my mind, that's the recording or log of what happened in, I'll call it a robot, because I don't know better. But what happened in a robot, or even in, like, a race car. Some of the telemetry is streamed, but some of it may exceed the bandwidth, and so is recorded locally. And so then what you're saying is there's also a component here, when you get to this at a large enough scale, or commercial scale, where you want to do further processing and aggregation of those streams across nodes. Okay. Yeah, that makes sense.
[00:20:05] Abhay Venkatesh: That's kind of the idea. And it's funny you mentioned racing, cause I think the Formula One teams are getting into this. I think they're using Palantir; there was a release on this, where they're using their software in Formula One. So Formula One teams are getting their telemetry and using Palantir's big data analytics tools to analyze it. That's exactly the kind of application we are starting to see with this new sort of IoT or sensor evolution that's going on right now.
[00:20:37] Patrick Wheeler: I saw this the other day. I just Googled it while you were talking, cause I wanted to get some reasonable number, but an F1 car, they're saying, has over 300 sensors, and just the transmission from the cars to the pits is something like over a million data points per second. So, I mean, even if you took each of these as only a byte, you're still at, like, a megabyte a second of data, and they're clearly not one byte each. Yeah, that's crazy. I'm not a big F1 fan, but I imagine they know all sorts of things about the car.
[00:21:08] Abhay Venkatesh: Yeah, exactly. And I guess, going to the point of the bandwidth limitations, I think about this a lot as well. When you have an edge network, it's not as ideal as having a data center with a wired connection, fiber, gigabit, and so forth, where you can literally ship all that data.
[00:21:30] Often you're running on LTE or other kinds of lower-bandwidth, less reliable links, and the amount of data that can be generated in theory is pretty large. So you do have this question of how you process all the telemetry that you might see, let's say on an F1 car, in an efficient manner. And that in itself can be a pretty interesting data processing challenge. It's a question of how you efficiently get the useful information you need without hitting the hard limits of bandwidth and so forth.
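One simple way to stay under a constrained link, as a sketch of the problem Abhay raises: only report a sample when it has moved meaningfully since the last report (change-of-value reporting). The threshold and the data stream below are made up:

```python
def thin_telemetry(samples, min_delta=0.5):
    """Report a sample only when it differs from the last reported
    value by at least `min_delta` (change-of-value reporting)."""
    reported = []
    last = None
    for sample in samples:
        if last is None or abs(sample - last) >= min_delta:
            reported.append(sample)
            last = sample
    return reported

stream = [10.0, 10.1, 10.2, 11.0, 11.1, 13.0]
print(thin_telemetry(stream))  # [10.0, 11.0, 13.0]
```

Real systems layer prioritization, compression, and on-board summarization on top of this, but the idea is the same: send the signal, not every sample.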
[00:22:06] Patrick Wheeler: Today's sponsor for Programming Throwdown is SignalWire. SignalWire is a pretty awesome company that allows developers to use multiple languages to call their APIs and deliver low latency video and audio technology. So imagine if you're building an application or a website and you want to host an interactive event, like a charity event that they supported for the American Cancer Society, where they're able to have multiple rooms, people interacting in the rooms like a video conference call, but way more tailored to your specifications, with so much more flexibility in the APIs that enable you to do that.
[00:22:43] They're already being used by large TV studios, film companies, and the Fortune 500. These are all things that have definitely been battle tested. And today we are happy to have them as a sponsor of Programming Throwdown.
[00:22:57] Jason Gauci: Yeah, SignalWire provides expert support from the real OGs of software-defined telecom. These are the original geeks of that technology. SignalWire has a complete, unified platform for integrating video, voice, and messaging capabilities into any app. You can try it today at SignalWire.com and use the code THROWDOWN to receive $25 in developer credit.
[00:23:29] Patrick Wheeler: Now back to our episode.
[00:23:30] So, talking about hard limits there, and maybe a bit of my background: when you say this difference between edge compute and cloud compute, I start to think about decisions that need to be made under a given timeline, right? So we were talking about control loops, right? There are hard, real-time limits if you want certain performance out of your loop. So when you start to mix in these sort of non-guaranteed bandwidth streams and cloud compute, are you able to still make any real-time guarantees, or does it become a much softer thing?
[00:24:06] Abhay Venkatesh: Yeah. That's an interesting point, and something I've personally run into when building these complex robotic systems. There are a few different points there I noted, but to riff on guaranteed delivery: I would say that was one of the primary concerns for me when I was building actual robotic systems that do fly and do actual things. It's something I would always have at the back of my head whenever I'm writing, let's say, a for loop that acts on external data. I have to think about: what if this data does not actually arrive? What if I miss measurements that would have been otherwise critical in my decision-making?
[00:24:46] So one of the patterns that I ended up adopting over time is focusing a lot on idempotence, and writing my code, or structuring my code, in a way in which I would get almost eventual behavior. So eventually my robot would do certain things; it would tend towards things, rather than depending on exact delivery of messages. If that makes sense.
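A sketch of the "tend toward the goal" style Abhay describes: each tick re-reads the latest known setpoint and closes a fraction of the remaining error, so a dropped message only delays convergence instead of breaking the behavior. The gain, tick count, and message schedule are all invented:

```python
def settle(latest_setpoint, position=0.0, gain=0.5, ticks=20):
    """Each tick, use the most recent setpoint we know about and move
    a fraction of the remaining error toward it. A tick where no
    message arrives (None) just reuses the previous setpoint."""
    setpoint = position
    for t in range(ticks):
        update = latest_setpoint(t)
        if update is not None:  # a message may or may not have arrived
            setpoint = update
        position += gain * (setpoint - position)
    return position

# The setpoint message only arrives on ticks 0 and 5; every other
# tick's message is "dropped", yet the loop still converges.
messages = {0: 10.0, 5: 10.0}
print(round(settle(lambda t: messages.get(t)), 3))  # 10.0
```

Because each tick recomputes from current state, redelivering or reordering the same setpoint message changes nothing, which is the idempotence he is pointing at.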
[00:25:12] Patrick Wheeler: Yeah. I mean, I guess for like, maybe just to kind of illustrate it for other people. So when you're building a video game, people know like, oh, you're trying to hit a certain frames per second and say it's 30 frames per second or 60 frames per second. And so various parts that maybe this isn't obvious, but various parts of the system get budgets.
[00:25:30] Right? So the AI for path planning for all of the bad guys, right? That has a budget for how long it can take to run. There's how long it can take to determine all the polygons you need to render. Everything gets a budget, because you have a timeline. But for things like robotics, the difference is that instead of dropping a frame of video or having a stutter, which is not ideal, the consequences are very serious.
[00:25:53] If you are flying a quadcopter, your motors need to have a certain signal sent at a certain rate to control how fast the props are spinning. If you miss one of those, the issue is that the system isn't going to behave like you thought it would. There's a question of how fast you need to respond to an input, from either the vehicle itself or from a human controller if they're in the loop, for the system to behave as you've modeled it.
[00:26:23] So I guess in some systems you get these deadlines, which are very serious to the operation. That was super vague, but imagine a robot arm moving around in the world, and it sees a person step into its path. You need a guarantee that you can see with your camera or sensor that the person is now in the danger zone, and stop the robot, within a hundred milliseconds.
[00:26:48] If you end up with garbage collection running during that time, and you end up with a 50 millisecond delay, then all of a sudden you can't guarantee the safety of humans around that robot.
[00:27:00] Abhay Venkatesh: No, exactly. That's a pretty big concern when you're building robots, especially those that interact with the real world and are near humans and are doing similarly life-critical stuff.
[00:27:14] So I guess one pattern that shows up when you're trying to build systems like that is you want to have, let's say, a safety layer. So you actually break up your system into two: the functional layer and the safety layer. And you want to keep these pieces quite decoupled. The safety layer is something that kicks in whenever, let's say, your telemetry goes off, or whenever it thinks that something is off, like it stopped receiving the information it needs to be receiving. The safety layer should kick in, and hopefully you have a path to a safe exit. And I would say this is probably a pattern that you see across drones, robotic arms in manufacturing, or even probably self-driving cars, where you want to safely stop if your sensor systems are malfunctioning or your telemetry is off and so forth. So yeah, I've found that pattern pretty useful in general.
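A minimal sketch of the decoupled safety layer Abhay describes: a watchdog that flips into a safe mode when telemetry goes quiet. The class, the mode names, and the 100 ms timeout are illustrative, not any particular system's API:

```python
import time

class SafetyLayer:
    """Watchdog decoupled from the functional layer: the functional
    side calls heartbeat() on every fresh telemetry message; if the
    link goes quiet for longer than timeout_s, check() switches to
    the safety behavior (e.g. land, stop, or return to base)."""

    def __init__(self, timeout_s=0.1, now=time.monotonic):
        self.timeout_s = timeout_s
        self.now = now  # injectable clock, handy for testing
        self.last_seen = now()
        self.mode = "FUNCTIONAL"

    def heartbeat(self):
        self.last_seen = self.now()

    def check(self):
        if self.now() - self.last_seen > self.timeout_s:
            self.mode = "SAFE_EXIT"
        return self.mode

# Simulate with a fake clock instead of real time.
clock = [0.0]
watchdog = SafetyLayer(timeout_s=0.1, now=lambda: clock[0])
watchdog.heartbeat()
clock[0] = 0.05
print(watchdog.check())  # FUNCTIONAL: telemetry still fresh
clock[0] = 0.25
print(watchdog.check())  # SAFE_EXIT: link went quiet past the deadline
```

Keeping this logic out of the functional layer means a bug or hang in the main application can't take the safety behavior down with it.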
[00:28:07] Patrick Wheeler: Yeah. I guess these topics get pretty involved, like how you guarantee safety, especially if you start talking about the various certification organizations that safety-certify something, or medical equipment like a pacemaker. Not only having it be safe, but proving that it will be safe, becomes quite an involved process, I guess.
[00:28:31] Abhay Venkatesh: Absolutely. Maybe that's a topic in formal verification. That could be an interesting area. I haven't looked too much into it, but certainly you would have to prove that your software would work under these constraints and show them your source code and so forth. I haven't personally been through that process so far, but I imagine it's a tough problem.
[00:28:54] At least, one thing I do really like when I build some of these systems is very defensive programming: default to doing nothing, or to landing if I'm flying a drone, for example, or to returning to base, which is the other concept in drones.
[00:29:11] But yeah, always having that default fallback pattern helps you have that kind of defensiveness without requiring formal verification.
[00:29:23] Patrick Wheeler: I've had incidental contact with some of this a couple of times, I guess: formal verification, where you have a provably correct, or at least for one definition provably correct, way of coding or a design spec. And you have certain axioms, and you guarantee those axioms are met. I've actually never come across a system that tried to tackle the problem that way, although I know people do do it. It is an interesting field.
[00:29:50] But yeah, what I've seen more commonly is, I guess, similar to what you're doing, this defensive programming, which is, you could talk about things like C or C++ coding standards and what you're allowed to do.
[00:30:02] So one of the easiest ones to understand: it's very common, if you're using, I'll use C++ since that's what I'm familiar with, if you're using the C++ standard library and you have an STL vector and you're inserting something. It's actually allocating as you're inserting, it's allocating extra and moving stuff around, it's doing a lot of work.
[00:30:22] And if you're doing that inside of a loop, you can end up with a lot of performance issues, because when the memory goes to be freed, what the allocator does or doesn't do at that point isn't very easy to guarantee or to reason about. Or what if you run out of memory because you didn't plan for it?
[00:30:36] So one of the techniques there, and this is sort of the defensive programming you mentioned, is to do all of your allocations upfront. So figure out what you're going to need, do it at the very beginning; that way the system either fails to start up, or once it starts up, you know there are no more allocations. And I've seen that done even to the point where the allocator is effectively turned off after some phase of the startup cycle, and anything that tries to allocate is guaranteed to not work.
[00:31:04] Abhay Venkatesh: Yeah. I think C++ even makes it pretty easy for you with the reserve keyword. I'm not sure if you're familiar with that, but you can reserve space before you even add to the vector. So certainly that's a pretty good pattern in C++ defensive programming. But you could also do defense not necessarily at, let's say, the code level; you could also do it at the process level, which is, say, more coarse, but easier to manage.
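The preallocate-everything idea is language-agnostic; here is a toy sketch of it (in Python for brevity, the C++ analogue being `vector::reserve` plus a no-allocation policy after startup). The class name and the fail-loudly error policy are invented for illustration.

```python
class FixedPool:
    """Do all allocation at startup; refuse to allocate on the hot path."""

    def __init__(self, capacity):
        # One up-front allocation; fails at startup if memory is short.
        self._slots = [None] * capacity
        self._size = 0

    def push(self, item):
        if self._size == len(self._slots):
            # Fail loudly instead of silently growing (and allocating).
            raise MemoryError("pool exhausted; runtime allocation is forbidden")
        self._slots[self._size] = item
        self._size += 1

    def __len__(self):
        return self._size
```

The design choice here is the same one described above: either the system fails at startup, or it runs with a known, fixed memory footprint.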
[00:31:32] For example, you can set constraints on how much memory or CPU a process should be consuming at any given time. And you can manage the processes running on your compute board in a way that you just kill, let's say, the application processes whenever they're going off the rails for whatever reason, so that the backup or safety processes have enough memory that the drone can return home or the robot can reset to a safe state.
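One minimal, POSIX-only way to sketch that process-level defense is to run the application code in a child process under a hard OS resource cap, so a runaway process gets killed without touching the supervising/safety process. This uses CPU time rather than memory just to keep the sketch small and portable; `run_capped` is an invented helper, not a real API.

```python
import resource
import subprocess
import sys

def run_capped(code, cpu_seconds):
    """Run application code in a child process under a hard CPU-time cap.

    If the child goes off the rails (e.g. spins forever), the OS delivers
    SIGXCPU and kills it, leaving the supervising process untouched.
    """
    def set_limit():
        # Applied in the child only, just before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

    return subprocess.run([sys.executable, "-c", code], preexec_fn=set_limit)
```

A real compute board would more likely use cgroups or systemd resource controls to cap memory as well; the point is that the enforcement lives outside the application process.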
[00:32:05] So I also end up thinking a lot about these various levels of abstraction of the system, where you kind of separate it out. It's maybe a shortcut or an easier way to build these systems, where the alternative is to be super careful with how your C++ is structured, which can be pretty hard, and C++ is a beast of its own.
[00:32:28] Patrick Wheeler: Well, yeah, let's not get into that. (laughter) So you bring up another interesting point there: managing processes, and the approach of making sure that things stay within their limits. I guess that's an interesting thing, too. So when we talk about something like Tegra, and you mentioned running Linux and having that kind of stuff, in your experience, is it mostly very familiar Linux operating systems with all the programmer ergonomics you would normally see, or are you a lot of the time interacting with devices that have more specialized operating systems?
[00:33:04] Abhay Venkatesh: Right. That's a good question. I would say the main difference is actually the architecture, which is maybe not, not that different these days, since you have ARM on Mac, but the Nvidia Tegra platform is an ARM thing.
[00:33:18] So, normally your Linux running in the cloud or on your desktop is on an x86 architecture. That ends up being probably the biggest difference in terms of the development environment that is exposed to people. That being said, you also do want to be careful in terms of how you structure the operating system on the robot itself.
[00:33:39] And by that I mean, you know, one of the intricacies of compiling a program on Linux is that there's no guarantee you get the same artifact when you compile it on, let's say, your cloud computer versus your desktop. It could be the same architecture, but the intricacies of your package management and all the paths you have set up on your Linux can basically mean that the compiled artifacts are completely different. And that could mean there are issues, like memory management problems or even CPU problems, that you can run into from that.
[00:34:13] So that is a, I would say a big area of challenge. And we do have, there are actually pretty good tools these days to deal with that. And I don't know if you want to go into that, but there is this thing called Nix, I don't know if you've heard of it. It does solve this kind of problem.
[00:34:27] Patrick Wheeler: So I guess what you're talking about here, and maybe the term is doing reproducible, hermetic builds, so that you know every engineer building the same thing gets the same thing. I've heard of people using Docker to tackle that challenge, but you were mentioning this Nix. So what is Nix?
[00:34:41] Abhay Venkatesh: Yeah. So I think Nix's goal is to solve precisely the problem you mentioned, which is that you want to have hermetic or reproducible builds. So when two programmers are building an artifact, it results in the same artifact. And you should be able to prove that, because you can hash the binary or the build artifact, and if they have the same hash you can compare those and so forth. Nix tackles this problem by providing, well, Nix is a programming language that allows you to build things.
[00:35:11] And you can specify: here is my source code, these are the build instructions, these are the inputs to my build. So if you need make or, you know, Bazel or whatever to build your things, you can specify those as inputs.
[00:35:28] And then what it outputs is an artifact with a hash. And you could really build anything. You could build a text file using Nix: the build instruction is just to copy this text into a file, and the output is build.txt. The concept is called a derivation, which is the blueprint of a build, and the key thing here is that when two developers run the same derivation on two computers, they get the exact same output. And if they don't, they can verify this by comparing the hashes. So that is the key feature it enables.
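The derivation idea, a build as a pure function of its declared inputs, content-addressed by hash, can be mimicked in a few lines. This is a toy in Python to show the concept only, not actual Nix; the function names and the trivial "build" are invented for the example.

```python
import hashlib

def run_derivation(build, inputs):
    """Toy 'derivation': the output depends only on the declared inputs,
    and the artifact is identified by the hash of its bytes."""
    out = build(inputs)
    return hashlib.sha256(out).hexdigest(), out

# A trivial build step: concatenate the declared sources into build.txt's contents.
def build_txt(inputs):
    return b"\n".join(inputs["sources"])
```

Two developers running the same derivation get the same hash; any drift in the inputs shows up immediately as a different hash, which is the reproducibility check described above.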
[00:36:08] Patrick Wheeler: Interesting. Yeah. So I guess I've seen this also come up with people doing Python stuff. It just grabs whatever you installed on your system, and you don't know what the dependencies are, so you send it to someone else and they're like, it doesn't work.
[00:36:22] So then people use things like virtual environments with pip and a requirements file that specifies, I want this version of the library to be used. And I guess what you're saying is that Nix is able to do that for more than just a single application, for the operating system itself.
[00:36:37] Abhay Venkatesh: Yeah. I'm very impressed that you were able to pick that up. That's exactly the idea here: you don't necessarily have to have your build target be a text file or a binary, you can literally build an operating system in this manner. You mentioned Python virtual environments, and yeah, you can specify a requirements.txt, but then say you want to set up a Python virtual environment.
[00:37:00] That is actually non-trivial. I actually struggled a lot with getting a virtual environment going and installing my requirements.txt for whatever reason. But Nix actually lets you have all of this defined in a set of files. There's even a notion of modules, so you can decompose your operating system into various modules, and you can literally have a repository that describes an operating system, and that lets you reproducibly build these operating systems.
[00:37:27] And it's a huge deal, I think, especially when you're building these resource-constrained robots, where you really do care about the artifacts, you really do care about the limited resources you have in an operating system. Having that reproducibility basically lets you eliminate a myriad of variables you could otherwise have when debugging an application. So I think it's a really powerful tool.
[00:37:52] Patrick Wheeler: Well, I'm going to maybe segue a bit, I guess. So we've been talking a little bit about, I guess, the very low-level parts, right? Operating systems, real-time constraints. That's sort of my background, so I'm interested in talking about that, but I don't want to lose the context you were giving earlier, where you were thinking across multiple different kinds of robots, multiple instances of the same robot, thinking about it as a platform and more than just an individual, you were saying, edge agent. And then you were also mentioning stuff like even doing image recognition on the data, the kinds of other bigger tasks that you might do.
[00:38:36] So when you think for yourself and you were describing this platform and things that might go in it, what are some of the other things besides just like configuring a single robot and its communication that would go into that platform?
[00:38:49] Abhay Venkatesh: No, absolutely. Yeah, we did take a digression there, so thanks for redirecting us; it was a long digression for sure. But I think there are a few big components, and I'm not sure I think of them as necessarily being unique to robotics. There's a Jack Bauer line: events happen in real time. So anything that happens in real time can be thought of in this way, and these components of the platform that I'd describe, you could in theory apply them there.
[00:39:22] So I think there are a few key components. The system itself, which we already discussed, is certainly one key component when you're thinking about, let's say, deploying a fleet of robots. You do want to own the systems, and you want to own the installation of those.
[00:39:35] So getting to the installation of software or artifacts, there's the notion of deployment, which you can think of as the D in CI/CD. The main concern is, okay, you're building software, you have iteration cycles, developers are constantly making updates, in the new age of development, as opposed to the old age of enterprise software, where you would maybe release a CD or a single build artifact once you were done coding for months.
[00:40:06] These days, we don't do programming that way. We are interested in continuous integration and continuous deployment. So deployment becomes a key aspect of running these sensor platforms.
[00:40:17] The other aspects are simulation, which is simulating the robotic fleet in a cloud environment. There's data infrastructure: these are data-generating systems, so how do you ingest that data? How do you perform analytics on it? How do you run image detection like you were mentioning earlier? Then there's a networking component: these robots have to communicate over a network, and even if you wired them together, that would still be a network, so there's always a communications and networking aspect. And finally there's building clients and APIs for these robots, and the ideal ways in which you can structure that.
[00:40:57] So I would say these five or six areas are roughly the key or core components of building such a platform. And you will see these patterns across pretty much any robotics or IoT or sensor platform, whether it's Amazon or Microsoft or any other company really.
[00:41:14] Patrick Wheeler: So I'm going to try my best to go one by one through them and give us a chance to talk about each one. So deployment. I guess for me, I think about the Mars rovers: oh no, there's a problem, we don't know what's wrong, and we need to run a build and send it. I guess I'll touch on simulation here as well. So there's some issue, the Mars rover stops moving, some poor guy or girl has to show up to the office and figure out how not to strand a multi-billion-dollar piece of equipment on another planet. Eventually all that happens, and someone has to deploy the binary.
[00:41:53] Abhay Venkatesh: Right, right. No, that's exactly the challenge. It's funny that you mention that, because I was having the exact same conversation with a colleague who worked on similar stuff in the past. So your description is really accurate. And certainly in the Mars case the challenge is quite complex, because the communication itself is not instantaneous; it takes a few minutes, maybe 30 minutes if I'm remembering correctly, to get packets from Earth to Mars. And certainly they don't necessarily have the challenges of continuous integration and continuous deployment, where we have a fleet of coders releasing software every day or every week. But the question of patching bugs is quite important, because when you have these robot or sensor fleets, they are basically out there in the environment for their lifetime, unless you send a field service agent to go and fix them. And in the case of Mars, that's a logistical nightmare. You'd have to call up Elon or Bezos these days to see if they're taking a flight.
[00:43:02] But yeah. So I think deployment does solve that problem for you, right? It is a big, deep area of thinking about how you deliver software to your fleet.
[00:43:13] Patrick Wheeler: I guess I have, like, two questions there. So first, we were talking a little bit about compartmentalizing your system and having a safety part and the normal part monitoring each other.
[00:43:24] But then, for deployment as well, there's always a risk if you're doing continuous deployment: sure, you tested it, but there could be something different, or some glitch, or a bit gets flipped because whatever neutrino flies through and swaps a bit in your robot. How do you handle the risk of over-deploying and accidentally shipping something that's bad?
[00:43:46] Abhay Venkatesh: Yeah, I think the key aspect of building good deployment software is building good rollback software. So when you make a software update onto your edge, you have to be ingesting that telemetry we previously mentioned. You have to constantly have well-designed health checks that will catch bugs for you and automatically roll back.
[00:44:11] Deployment for robotics is slightly different from deployment for cloud, because the cloud is always connected. You always know there's going to be a network, so it's easy to roll back; you don't even have to have a rollback artifact present, you can just resend the previous artifact that you want to roll back to. Whereas when you're doing deployment at the edge, you need to ensure that whenever you're making a software update, you have the previous rollback locally available and ready to go, and you have an operating system that can do that for you. Apple does this pretty well, I guess: if there's some bug, or say you turn off your phone in the middle of an update, it will revert back to a functional condition. So you do have to think about those kinds of things when you're deploying at the edge, and have that rollback ready.
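The keep-the-previous-artifact-locally pattern is roughly the A/B-partition scheme many devices use. Here is a hedged toy version of it; the slot names, the `stage`/`boot` interface, and the health-check callback are all invented for illustration, not a description of any real updater.

```python
class EdgeUpdater:
    """A/B-slot updates: the previous artifact stays on-device, so the
    robot can roll back on its own with no network link."""

    def __init__(self, initial_artifact):
        self.slots = {"a": initial_artifact, "b": None}
        self.active = "a"

    def _spare(self):
        return "b" if self.active == "a" else "a"

    def stage(self, artifact):
        # Write the update into the inactive slot, then boot into it.
        self.slots[self._spare()] = artifact
        self.active = self._spare()

    def boot(self, health_check):
        # If the freshly booted slot fails its health check, flip back
        # to the known-good slot; no cloud round-trip required.
        if not health_check(self.slots[self.active]):
            self.active = self._spare()
        return self.slots[self.active]
```

The design choice matching the discussion above: the rollback decision lives entirely on the device, so a bad update never depends on the network being up to recover.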
[00:45:03] Patrick Wheeler: So, yeah, I guess that makes sense, having them on the edge and being able to roll back. And you were talking about not having constant communication. So I'm curious, is there some negotiation that occurs about when a robot wants to take an update, like when it's a good time to do it? Or is that something that's just pre-programmed, like, oh, at night when it's plugged into the charger?
[00:45:27] Abhay Venkatesh: Yeah, that's an interesting question, right? Like I mentioned, health checks: if you don't have the network link, how do you know that the health checks are going off? So certainly you could have a pattern where, let's say in a drone, maybe it has a life cycle. Let's take the example of a drone delivery system, where the drones fly around, make their deliveries, and return to base at the end.
[00:45:52] So you could maybe have your software update process plan around those frequent landings, where it's a relatively safer or better time to deliver the updates. But at the same time, when you build the rollback mechanisms, they cannot only live in the cloud. You do have to have some of those living on the edge itself.
[00:46:12] The robot has to be able to figure out on its own whether its latest software is working for it or not. It cannot rely on the cloud for that.
[00:46:23] Patrick Wheeler: And then I guess, we've been talking about making sure things work, and you mentioned simulation. I mean, I think people think about writing unit tests and maybe integration tests with our software, but how is like simulation for robot systems different than maybe what people are used to?
[00:46:40] Abhay Venkatesh: Yeah, absolutely. So I would say, if you're building a cloud application or a database system, you don't necessarily need a simulation component in your platform, because the software environment in which you program is roughly representative of the software environment in which you'll be deploying.
[00:46:57] Whereas the problem space becomes quite different when you have a fleet of robots, and it's even more problematic when you have a heterogeneity of robots: let's say you have different versions of robots, different versions of sensors, maybe they have different hardware components. Just those few variables lead to, let's say, a combinatorial explosion in configurations.
[00:47:19] So your software is running in a heterogeneity of environments, a much more diverse set of environments, and just that piece itself increases the complexity of testing and deploying your software. So you do need some notion of simulation, of simulating your hardware devices in the cloud, which adds that extra layer of testability.
[00:47:40] So before you deploy, you have some level of confidence versus no level of confidence.
[00:47:45] Jason Gauci: Yeah. One question about this. It's really interesting, this whole idea of using simulation as a test. What comes to my mind is, when you write a unit test, let's say you're testing the addition operator. So you do two plus four, and then you verify that's six, right? With the verification, you know what the right answer should be. And even if it's something more complicated, you can estimate it pretty easily, or you can apply some common-sense reasoning.
[00:48:15] But if it's a simulation, like, fly this drone or drive this autonomous car down the street, I mean, obviously there are extreme cases, you fall off a cliff or something. But in general, how do you really know if the test passed, what the fidelity of it was? Because it's not a binary thing. How do you know the quality of the test? And are you doing some kind of self-supervised thing, like, if I can predict what's going to happen next, then maybe that is success, even if what happens next is bad? Can you walk us through that? I think it's so interesting.
[00:48:51] Abhay Venkatesh: No, absolutely. I think you do hit the nail on the head about the complexity of that kind of problem. At least my model for thinking about something like this is that it feels a lot more like UI testing and less like unit testing or backend testing. And the reason is, like you mentioned, you have a robot, and it has to do stuff in the environment, and it may not be deterministic, so it may do different things and those might all be valid.
[00:49:17] How do you verify it? It really does increase the complexity of your tests. And certainly when you're building an automated testing system, a key thing to keep in mind is flakiness: how flaky is the test? But when I think about testing these robotic systems, I always go back to the Google testing pyramid. I'm not sure if you're familiar with that concept; maybe I can briefly describe what it is.
[00:49:40] Jason Gauci: Yeah, I've never heard of it.
[00:49:41] Abhay Venkatesh: Okay. Yeah. So the testing pyramid is essentially a pyramid with three layers. At the bottom, the thickest part of the pyramid, are your unit tests. That basically means you want to have most of your tests concentrated in unit tests.
[00:50:00] The center part of the pyramid is what they call integration tests, which in their definition really is the integration of multiple units. So if in a program you have, let's say, two functions, and they're combined in a certain way to achieve some goal, and you test that end to end for your program,
[00:50:16] that is an integration test. And finally there's end-to-end testing, which is testing the entire system. In an application, that would mean testing the database, the UI, and whatever else you might have, testing that whole workflow. And I guess the key part here is that you want to have most of your tests concentrated at the bottom, in the set of unit tests.
[00:50:37] You want to have fewer integration tests, and you want to have the bare minimum of end-to-end tests. The main reasoning here is that end-to-end tests are flaky; there's a lot more going on, a lot more complexity there. And just thinking about the testing pyramid helps you restructure your code so that it's more testable at the unit-test level and doesn't have to rely on end-to-end testing.
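As a concrete, made-up illustration of the pyramid for a drone stack: many small unit tests at the bottom, a thinner integration layer combining those units above. All the function names and behaviors here are invented for the example.

```python
# Unit level: small, fast, deterministic -- most tests live here.
def battery_ok(level):
    return level > 0.2

def plan_route(waypoints):
    return list(waypoints)

def test_battery_unit():
    assert battery_ok(0.9)
    assert not battery_ok(0.1)

def test_route_unit():
    assert plan_route([(0, 0), (1, 1)]) == [(0, 0), (1, 1)]

# Integration level: the units combined -- fewer of these.
def start_delivery(waypoints, battery_level):
    return plan_route(waypoints) if battery_ok(battery_level) else []

def test_delivery_integration():
    assert start_delivery([(0, 0)], 0.9) == [(0, 0)]
    assert start_delivery([(0, 0)], 0.1) == []  # refuses on low battery
```

End-to-end simulation runs would sit above both of these, and per the pyramid you would keep only a handful of them.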
[00:51:03] So I think it's a pretty powerful framework when you think about how you should structure, let's say, something like a simulation test.
[00:51:13] Jason Gauci: Yeah. That makes sense. Cool. Thanks for explaining that.
[00:51:15] Abhay Venkatesh: Yeah, I guess the way that applies, at least in my head for simulation is that we have these behaviors that robots can do. Let's say like you have a drone delivery trip, and you want to make that trip, and you want to verify aspects of that in an automated fashion.
[00:51:30] You would focus on testing the bare minimum of that. So maybe you would test all the message passing in the system, or some terminal state that the system might have, but you would probably try to relax the constraints of the drone delivery.
[00:51:47] Maybe you don't test the exact path that it takes; you can just say, okay, we're going to test the beginning state and the end state and ignore everything in between. And that leads to much more robust tests that are not flaky.
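That relaxation, assert the start and terminal states and ignore the nondeterministic path, looks like this in a toy simulation. The grid world, the goal position, and both function names are invented purely for illustration.

```python
import random

GOAL = (3, 3)

def simulate_delivery(seed):
    """Toy nondeterministic sim: the exact path differs run to run."""
    rng = random.Random(seed)
    path = [(0, 0)]
    while path[-1] != GOAL:
        x, y = path[-1]
        dx, dy = rng.choice([(1, 0), (0, 1)])  # random step toward the goal
        path.append((min(x + dx, GOAL[0]), min(y + dy, GOAL[1])))
    return path

def check_delivery(seed):
    # Assert only the begin and end states; the path in between may vary,
    # which keeps the test robust (non-flaky) under nondeterminism.
    path = simulate_delivery(seed)
    assert path[0] == (0, 0)
    assert path[-1] == GOAL
    return True
```

Asserting the exact path would make this test fail intermittently; asserting only the endpoints is the constraint relaxation described above.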
[00:52:01] Patrick Wheeler: So also, one of the things I'm curious about with simulation is, how do you know? If you're ultimately going to be on some device which has, what would you call it, an actuator, a motor or something like that, do you end up running a test where you try to see, does the motor actually move?
[00:52:17] Abhay Venkatesh: Yeah, that's a good question. I think at least one of my models for robotics these days is that the actuators, the mechanical engineering, are actually quite sophisticated, and they have been proven out over a long period of time.
[00:52:33] And it's often the case that, at least in application domains like drones, those are actually not the biggest concern when it comes to building these robotic applications. So most of the time you can rely on the mechanical components being fairly reliable, mostly because we're at the point on the curve where we've been building super fancy mechanical robotics for a while. I mean, you can see the Boston Dynamics robots, they have so many actuators, I don't even know how many, right? We're at the stage of the innovation curve for actuators where they are really highly reliable.
[00:53:15] And that is one of the reasons I'm personally excited about robotics software platforms. Because if you think about the tooling, the basic building blocks, like Arduino, like Nvidia Tegra, these are actually very new. So we're at, let's say, the earliest stage of the S-curve on the software side of things.
[00:53:32] That has, at least in my personal experience, made me focus a lot more on testing the software components themselves, saying the hardware folks have done an awesome job and I trust the hardware completely, and having a lot less trust in the software at the moment.
[00:53:50] Patrick Wheeler: You remind me, like, at first it's, oh, what do you mean you distrust the software? (laughter)
[00:53:54] But I think it's funny, I've run across it a couple of times with really new programmers, where they'll do something and then they'll check it twice. So in their code they'll do, like Jason was pointing out before, one plus one equals two, and then they'll write a check to say, is it equal to two, and then inside that check they'll write the check again: is it equal to two?
[00:54:15] And then you ask them, is it going to change between these? How would it change? Well, I don't know. And it's like, well, yeah, at some level, if you can't trust the computer not to modify random values, then your whole program doesn't make any sense. So when you say this about motors, yeah, there's always a line. You're ultimately trusting that the CPU is repeatable in the instructions it's running, right? We don't normally test that. So there always is some line below which you don't test.
[00:54:44] Abhay Venkatesh: Yeah. And certainly I'm sure the CPU has hiccups. There's no hundred-percent system in the real world. There are no lines, no triangles, right? Those geometric objects are ideal objects, Platonic ideals. So there's no hundred-percent-working computing system either; it just works most of the time, or sufficiently often that we don't have to think about it.
[00:55:05] Although, to be honest, a funny thing I did run into is that on the ARM computing platform, the guarantees are actually much weaker. So if you're writing concurrent code and you're doing lock-free concurrency, there's actually no guarantee on execution ordering. So maybe sometimes that trust breaks down even on the computing side.
[00:55:27] Patrick Wheeler: Yeah. I mean, all the out-of-order execution and stuff that Intel had that led to the Spectre vulnerabilities and all of that. It turns out none of us know what the CPU is doing, or at least it's not doing what we thought it was. But cool.
[00:55:39] I want to make sure we get a little bit of time, so, we talked about most of the elements, I think there were a couple we left off, but I want to make sure we have a few minutes to talk about, sort of, what comes next.
[00:55:50] So you were describing having a platform where the robot's data gets brought in, where you deal with this heterogeneity, and, I imagine, you didn't really talk about this, but databases where the data is stored and can be analyzed and gone over, which leads to the ability to refine behavior and improve things by doing traditional big-data analytics.
[00:56:13] But beyond that, I don't know, what is your thought? I have some ideas, but I don't want to bias the conversation, so I guess I'll let you go first. What do you think comes next? There are more and more robots, you mentioned drone delivery, Boston Dynamics, cars.
[00:56:31] Like I think we are seeing more robots, like that's inevitable at this point. Like what do you think comes next?
[00:56:37] Abhay Venkatesh: Yeah, absolutely. I think that's an interesting question. When I think about, let's say, technological progress in general, I'm often reminded of Isaac Newton's line on this, which is, "if I have seen further, it is by standing on the shoulders of giants."
[00:56:53] And similarly, I think we make technological progress when we stand on the shoulders of platforms. And you can think of these in layers: we have the Linux layer, the operating system layer, the networking layer, the hardware layer.
[00:57:08] And like we mentioned earlier about trust, when we make progress, we trust each layer of the system incrementally. Similarly with these platforms. I think we're not quite at the point where we can say we have a mature set of robotics platforms on top of which we can easily build applications, but once we do have some of these mature platforms, where it's very easy to build applications, you have higher-level APIs.
[00:57:35] One of the things I've been fantasizing about is that you literally just have a declarative behavior specification system, and everything else is abstracted away from you. You don't have to think about the systems, you don't have to think about the deployment or the simulation; all of that is somehow provided to you.
[00:57:52] If you have that kind of platform, you can build higher-order applications. You can do, I don't know, things like much better fleet management for robots, where they actually optimize their paths, they share tasks. Collaborative autonomy is, I would say, one big area that gets enabled this way, where if you have, let's say, a fleet of robots, they're not necessarily doing work independently or individually, but collaboratively. They're sharing data. You can imagine a higher order of applications.
[00:58:25] And this gets mentioned a lot, but I do think you can also start building really powerful learning systems that automatically learn behaviors. Once you have all that infrastructure in place, you can start realizing some of the hopes we have for reinforcement learning and so forth.
[00:58:43] Patrick Wheeler: That makes sense, moving up the abstraction ladder, let's call it, and getting higher and higher orders. Most people don't worry about programming in assembly anymore, or at least do it rarely; we keep moving to higher levels. And then this thing you mentioned, reinforcement learning, is one of the things I always think about, and Jason will probably chime in, but it feels like it's maybe in a little bit of a perpetual future. You see things like, what's that, OpenAI did Gym, where it's like, oh, here's a bunch of video games with standardized input.
[00:59:16] You were talking about simulation. If you have enough fidelity in your simulation, in theory you can put agents in there, give them a motivation or not, and have them figure out what it means to move around in that world and how to optimize themselves. And yeah, it gets very sci-fi, people write about this, but hopefully they don't turn a Terminator on us.
[00:59:41] Yeah. Jason, what are your thoughts about that?
[00:59:43] Jason Gauci: Yeah, so actually I think robotics and reinforcement learning both have the same phenomenon. I don't know if maybe you gave me this sound bite, but feel free to take credit for it. People say it's called a robot until it works reliably; then it's called an appliance.
[00:59:59] Right? So your washing machine isn't a robot anymore. Your refrigerator isn't a robot anymore. But at the time, they were these mechanical aberrations, right? They didn't work well, and so that was the right time to call them robots.
[01:00:12] And I feel like reinforcement learning is actually in the same situation, where it's reinforcement learning until it works reliably; then it's control theory, you know? (laughter) And so, yeah, I wonder if reinforcement learning is always going to be the word for the thing that doesn't work yet, and control theory and, I don't know, bandits and all these other words will be used for the things that are already established.
[01:00:40] But yeah, I think we're making tremendous progress in both areas, robotics and RL. And I don't know if we'll see killer robots, because on the whole question of human values we're still really far behind. But getting a robot to, like, climb the stairs of 90% of houses in the country,
[01:01:01] that would be really powerful. Or even just folding 90% of articles of clothing in the laundry would be amazing. And I wonder how far away we are from that. As someone with no background in robotics, I can't tell if we're a year away from that or two decades away.
[01:01:18] Abhay Venkatesh: Yeah, I think there's a lot there, I would say.
[01:01:21] And I love that you mentioned "robot" as almost a pejorative term. You don't want robots, you want stuff that works, which is pretty funny, I guess. And I think there's some truth to that. My theory on this is that people both overestimate and underestimate scale at the same time.
[01:01:41] So on the one hand, people believe that if you have big data, you can predict everything, you can know everything, that the AI can know more about you than you know about yourself. So there is certainly that belief. But on the other hand, people underestimate the power of scale in the sense that there is something that really does change.
[01:02:02] There's a step-function change that happens once you do have that scale. And at least my theory of what happened in machine learning is that initially it was kind of a toy, and neural networks were kind of a joke, but then you had a maturity of systems. Google had these really mature big data systems.
[01:02:21] Then it did start working; it really does work for recommendation and search and so forth, though of course with pretty hard limits at the same time. So maybe my view on this, again going back to software platforms, is that once we have these robotics software platforms established and permeating the world, that's when you get the higher-order learning systems, and that's when they start being robots, being things that work.
[01:02:47] Jason Gauci: That makes a ton of sense.
[01:02:49] Patrick Wheeler: Yeah, thanks so much. This has been an awesome interview. I know you have a website, which we'll put in the show notes, along with your Twitter handle. Is there anything else you want to talk about, or tell people to do, or visit, or read?
[01:03:05] Abhay Venkatesh: No, I think that sounds like a great spot to stop. So yeah, I do want to thank you for having me on the show; it's been a pretty awesome experience.
[01:03:13] Patrick Wheeler: I want to thank everyone for, I guess, tuning in (that's outdated), for downloading or streaming the podcast. And thank you, Abhay, for coming on and talking to us about robotics platforms.
[01:03:25] Thank you to all our patrons who help make all this possible. If you would like to become a patron, you can visit Patreon.com/ProgrammingThrowdown. We have so many great people writing in to us, telling us stuff and helping us out, and there's been a lot of enthusiasm for the more frequent podcasts.
[01:03:44] So I hope all of you are staying safe and healthy and we'll see you next time.
[01:03:48] Jason Gauci: See you later.
[01:04:02] Patrick Wheeler: Music by Eric Barndollar.
[01:04:06] Jason Gauci: Programming Throwdown is distributed under a Creative Commons Attribution-ShareAlike 2.0 license. You're free to share, copy, distribute, and transmit the work, and to remix and adapt the work, but you must provide attribution to Patrick and me, and share alike in kind.