RRE POV

In this episode of RRE POV, Will, Jason, and Raju explore the fascinating and rapidly expanding world of robotics. They delve into the technological advancements since the 1970s, highlighting the critical role of computer vision, AI, and processing power. The conversation also touches on the labor shortages driving the need for robotics, the innovative core technologies, and the safety considerations essential for deploying robots in various environments. They also speculate on the future of humanoid robots in the workplace and home, providing a comprehensive overview of current trends and future predictions in robotics technology.


Show Highlights
(00:00) Introduction
(00:58) Robots from a venture perspective
(05:20) How computer vision is driving innovation
(08:56) Robotics in the single-purpose domain
(13:30) Modern chips and the advancement of robots
(18:34) Growth and robotics working in concert
(24:02) What is a robot?
(26:59) Labor shortages, robotics, and the potential for investment
(32:28) The true cost of labor replacement
(37:48) Safety in robotics
(40:59) Designing a robotics business with a purpose
(43:18) Gatling gun segment


Links 
RRE POV Website: https://rre.com/rrepov
Twitter: @RRE
Apple Podcasts: https://podcasts.apple.com/us/podcast/rre-pov/id1719689131

What is RRE POV?

Demystifying the conversations we're already having here at RRE and with our portfolio companies. In each episode, your hosts, Will Porteous, Raju Rishi, and Jason Black will dive deeply into topics that are shaping the future, from satellite technology to digital health, to venture investing, and much more.

Raju: Yeah, I would love to see—I would love to see a Roomba go rogue one day though. Like, you know what I mean? There’s got to be software in it that prevents it from going rogue. But like—

Jason: [unintelligible 00:00:08] yeah.

Raju: —there’s a lot of dirt on children, and the Roomba could say, “I see dirt,” and happens to be on a child, and it’s just chasing kids around the house. I mean, or cats… dogs.

Jason: What I’d love to see is the Roomba deploy a little mini Roombas, or something, you know? Like, it would just kind of crawl up on things and, like, clean off your table [laugh]. That would be cool.

Raju: Yeah. Oh, that’d be cool. That would be cool. Like a dusting machine—

Jason: Yeah.

Raju: —that just, like, sort of extracts itself.

Jason: [laugh]. Yeah, exactly.

Will: Welcome to RRE POV—

Raju: —a show in which we record the conversations we’re already having amongst ourselves—

Jason: —our entrepreneurs, and industry leaders for you to listen in on.

Will: Welcome back to RRE POV. Today we’re going to be talking about robots. Robots have captured our imagination for decades, beginning with the science fiction era; they began to show up in industrial settings in the ’70s, and they’re increasingly a part of our lives. Today, we’re going to talk about the venture perspective on robots. And RRE has somewhat of a unique perspective on this, having been the lead investor in Clearpath Robotics, one of the few really successful venture-backed outcomes in the sector. So, I want to turn things over to my partners, Raju Rishi and Jason Black. Guys, what’s the right framework for thinking about robots and robotics from a venture perspective?

Jason: Yeah. I’m hoping—actually, you kind of sounded like a robot on that intro. I don’t know if that’s just my [laugh] internet. Or—

Raju: Yeah, you did.

Jason: —Raju—

Raju: It was like Max Headroom—

Jason: This is perfect.

Raju: —Will Porteous.

Jason: Exactly. That Connecticut internet really layering on a beautiful robotic effect. We’ll see if that [laugh] becomes a factor here. But we heard you the whole time. Yeah, so robotics. I mean, we’re starting to see it in the headlines. As I said just before we started the pod, whenever my mom asks me about a technology, I know it’s made it into the public consciousness. And so, we’re just seeing headline after headline.

You know, as many will know, robots are not new at all. You know, the six-degree-of-freedom, kind of, industrial robot was first introduced in the early 1970s. And as you might imagine, they were pretty dumb. Many of them still very much are because the task that they’re doing is very repetitive, there is basically low mix, so they don’t do a lot of different tasks—frequently, just one task—but it’s high volume, so it makes sense to automate. And so, if you think about the compute at the time in the 1970s, we’re talking about mainframes, where the state-of-the-art compute is the size of a room—the global compute in 1970 was the equivalent of the compute on a single iPhone 4. Not all iPhone 4s; [laugh] a single iPhone 4.

So, we have over 20 trillion times more compute today, and so it should come as no shock that the hard-coded, low-mix, high-volume robots have continued to be very successful and helpful in efficient manufacturing lines. But it’s really, if you’re going to go beyond hard-coded or you’re taking on a broader task scope—not just one thing, but maybe three or five or 30, or 50 different things—as well as doing those things without oversight and in an autonomous fashion—maybe even making decisions itself—those higher-autonomy, higher-scope tasks require software and AI. And that’s where the key differentiator is. And so, yet again, it shouldn’t come as a huge surprise that we’re really only about a decade into the venture-backed robotics ecosystem, where if you look at venture dollars invested, it really starts to tick up in 2012.

So, why was that? Certainly not a coincidence. In 2012, there were not one but many breakthroughs in deep neural networks, and critically, one of the most famous examples was AlexNet. Geoffrey Hinton and Ilya Sutskever, among others, published this paper on deep neural networks for computer vision tasks with a tremendous amount more data. So, instead of tens of thousands of images, they trained on 15 million images.

And with a neural network—which was completely out of style; like, it was even tough during that time period to get papers published if you mentioned neural networks, it was so out of favor—and not a very shallow neural network, but one with tons of layers. And so, they kind of scaled the layers, and they scaled the data, and got this massive leap in ability to detect objects. Which, as you can imagine, for a robot that’s taking on more tasks, and is doing that autonomously, [laugh] being able to recognize what objects are in the scene is pretty important.

Will: So Jason, I think that’s actually fascinating. Because, you know, it seems like you’re starting with computer vision as one of the foundational technologies for enabling this wave of innovation in robotics, I would have thought it was something like mobility, or kind of, different kinds of sensors, or that sort of thing. But actually, it’s computer vision?

Jason: Yeah. I mean, at a certain point—well, number one, vision takes up the largest area of our brain. It’s our most primary sense, and pretty much everything we do when we’re awake [laugh] relates to how we perceive the world. We’ve learned so much about the world—we’ll get into this in a little bit—but we kind of forget what we learned from when we’re a baby to [laugh], you know, call it 18, where you’re, like, running your own little mini-physics experiments, basically, as a child, and mapping the physics of the world to your inputs; you know, the most primary one is vision. In terms of mobility, I mean, mobility is kind of downstream of computer vision.

Computer vision—I don’t think it technically covers LiDAR, which is a laser scanner that gives you, kind of, a 3D environment—but yeah, if you can’t see the world, it’s difficult to navigate around it, it’s difficult to manipulate objects, it’s difficult, really, to do any task. And in the same way, you can kind of think about the hard-coded robots as being blind. Like, that’s why they literally keep them in cages: there are robotic arms that are over a story tall that can lift—I don’t even know the weight limits on them; I mean, like, a full car easily—and that robotic arm will crush you [laugh] if you haven’t turned it off. It has no sense of the world. It is purely doing one task over and over.

Will: You don’t want to make that robotic arm mad.

Jason: No, you don’t [laugh]. And so, we keep them in cages. But we’re taking the robots out of the cage, and it’s because they’ve gotten far more intelligent. Intelligent is probably too human of a word to use; I don’t want to overstate the capabilities, but all of that software, all that AI has enabled so many more tasks to be taken on in an autonomous way. And that means the robots are more applicable in huge swaths of what makes the world go around.

And you know, maybe to further the point a little bit, over 60% of US GDP maps to knowledge work, like, the information economy. But in terms of the raw numbers of labor, the Bureau of Labor Statistics actually tracks the strength requirement for each job in America—which I think is kind of awesome that [laugh] they’re tracking that; I didn’t know that was a stat before doing a little mini-robotics presentation recently—and 71% of jobs have at least some strength requirements. So, light strength requirements would be, like, a hairdresser; obviously, heavier strength requirements would be, like, a construction worker, miner, people working on oil rigs, that type of stuff. And so, it’s a huge swath of the population that, you know, needs to accomplish tasks in the physical world. And we’re not going to have software that goes and cuts your hair, even though it is a very, very light task. So, that’s kind of the state of play, and why we are where we are today.

Raju: Yeah. I mean, it’s quite interesting, actually. I mean, these things, robots have been around a long time. A long, long time, and you know, they’re in the home. Like, a lot of people have these Roombas that kind of move around, but the technology has been sort of escalating, and maybe we can spend a few minutes on just the technical layer, Jason and Will, and kind of highlight why we think, you know, sort of robotics is moving into this, you know, sort of more general-purpose domain as opposed to single-purpose domains. And you know, love your guys’ thoughts. I have some thoughts as well on things that have changed, but why don’t you guys jump in and give some of your opinions on this?

Jason: Sure. Well, I think all the core technologies have been moving forward since 2012, not purely pushed by robotics. I mean, even just doing computer vision on images on the internet, just to classify things on Instagram, and detecting, you know, this object or that object, that’s just useful broadly, even in, kind of, a pure knowledge-worker context. And so, you have these, kind of like, parallel tracks of, like, what’s in this scene, and then, like, what objects are in the scene. So, Meta actually trained on over a billion masks from images that are, like, publicly available, et cetera, and built a software stack that’s open-source called Segment Anything.

And in real time, it can detect just a dizzying array of objects, general-purpose objects: oh, this is a knob. This is a light switch. This is a t-shirt. This is a hat. This is a chair. This is a pool. I mean, it’s amazing to see it live. And so, that’s useful for Meta, just because they want to classify and tag all the stuff that’s, you know, in their videos and images. So, that’s kind of like some of the side benefit stuff.
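For readers who want to see what that looks like in practice, here is a minimal sketch of running Meta's open-source Segment Anything model over a single camera frame. The checkpoint filename, image path, and model size are illustrative stand-ins, not anything from the episode.

```python
# Minimal sketch: automatic mask generation with Meta's open-source
# Segment Anything model (github.com/facebookresearch/segment-anything).
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load a pretrained SAM backbone from a downloaded checkpoint (illustrative path).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Read a frame, e.g., from a robot's RGB camera, and segment everything in it.
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each mask dict carries a binary segmentation, a bounding box, an area, and a
# stability score: the raw material for "what objects are in this scene?"
for m in sorted(masks, key=lambda m: m["area"], reverse=True)[:5]:
    print(m["bbox"], m["area"], round(m["stability_score"], 3))
```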

There’s also more specific industry stuff that has moved forward. Once you know what’s in the scene—as a human, it’s so easy to just kind of take the cognitive load for granted—you know, you can tell how far away objects are. Robots don’t know that inherently. It’s a flat image at the end of the day. We can intuit from shadows, and our binocular vision, and the parallax effect, how far away objects are in a scene. A robot has to figure that out.

And so, there are other kinds of technology stacks; one’s called [laugh]—helpfully—Depth Anything. So, we have Segment Anything; there’s one called Depth Anything. There’s another one called ZoeDepth. But those are also technologies that will take in a flat image, right—a computer vision image from, like, a camera—and estimate the distances. And it’s a very basic set of functionalities, but it’s critical for a robot to be able to operate in the real world. It’s like, what’s here? Where is it?
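And a minimal sketch of the depth side, using a published Depth Anything checkpoint through the Hugging Face pipeline API; the model ID is one public variant, and the image path is a stand-in.

```python
# Minimal sketch: monocular depth estimation from one flat RGB image.
from PIL import Image
from transformers import pipeline

# A small published Depth Anything variant; substitute any depth-estimation model.
depth_estimator = pipeline("depth-estimation",
                           model="LiheYoung/depth-anything-small-hf")

frame = Image.open("scene.jpg")        # a single flat camera image
result = depth_estimator(frame)

# "predicted_depth" is a per-pixel relative depth map (a torch.Tensor);
# "depth" is a ready-made visualization image. Relative depth still needs
# calibration before it maps to real-world metric distances.
depth_map = result["predicted_depth"]
print(depth_map.shape, depth_map.min().item(), depth_map.max().item())
```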

And then finally, you kind of get to the how, right—like, how do you do a task? And that’s where physics comes in. Fortunately, another, kind of, happy side benefit for robotics is that these video game engines—which are also used for computer graphics and movies—have dramatically progressed. So, you have realistic real-time ray tracing, far more sophisticated fluid simulations, and light simulations, and, like, [caustics 00:12:19] [laugh]—like, you know, subsurface scattering—it’s incredible what has happened in that space, and that’s actually really useful for robots. You can build these simulated worlds and train robots not just in a physical space—where you have to, like, build each one and then set up the task and reset it—but in a setting where you just get data point, data point, data point, data point. You just literally run it overnight, task over task over task.
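To make the "run it overnight" point concrete, here is a toy sketch of why simulation is such cheap training data, using the Gymnasium API with a stand-in environment rather than a real robot-task simulator.

```python
# Toy sketch: reset, act, record, repeat -- thousands of episodes overnight
# with no physical robot to set up or reset.
import gymnasium as gym

env = gym.make("Pendulum-v1")        # stand-in for a robot-task simulator
dataset = []

for episode in range(1000):          # "task over task over task"
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()   # replace with a learning policy
        next_obs, reward, terminated, truncated, info = env.step(action)
        dataset.append((obs, action, reward, next_obs))
        obs, done = next_obs, terminated or truncated

print(f"collected {len(dataset)} transitions without touching hardware")
```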

And then the final one, which seems really dumb but is a really core technology, is just basically puppeteering a robot; it’s called teleoperation. If you’re trying to train a robot to, like, you know, water a plant—you got a robot with two arms—what you can do is just give it a bunch of example data, where a human will go in and basically embody the robotic arms and water the plant over and over and over. And that’s actually really useful training data to teach, you know, the system to autonomously achieve that task. So, those are some of the core things.
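A hedged sketch of that idea: log (observation, action) pairs while a human teleoperates, then fit a policy to imitate them, which is behavior cloning in its simplest form. The dimensions and the random stand-in "demonstrations" are purely illustrative.

```python
# Behavior cloning from teleoperation demos, in miniature.
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 7             # e.g., joint states in, arm commands out

policy = nn.Sequential(              # a small MLP policy
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-ins for logged teleop demonstrations of, say, watering a plant.
demo_obs = torch.randn(10_000, obs_dim)
demo_act = torch.randn(10_000, act_dim)

for step in range(1_000):            # imitation here is plain supervised learning
    idx = torch.randint(0, demo_obs.shape[0], (256,))
    loss = nn.functional.mse_loss(policy(demo_obs[idx]), demo_act[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```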

Raju: Yeah, I love it. I mean, it’s fascinating. I had been—actually, all of us had been playing around with robots for a long time, investing in robotics for a long time, and the elevation of technology on all the different layers in the past, sort of, half a dozen years is pretty profound. You know, we’ve seen the videos of some of these, you know, humanoid robots. The actuators are actually pretty darn important.

And we used to always have, like, limitations on their use and their cost, but you know, we have a combination of hydraulic, pneumatic, and electric actuators now on all of these robots that give it degrees of freedom and mobility that kind of didn’t exist half a dozen years ago. And—Jason, you touched on this—like, the processing power has just blown up. I mean, we know about the chips that are going on, you know, with Nvidia, and, you know, the number of transistors on chips is just growing exponentially year over year. That’s kind of really important for robots [laugh]. You know, they’ve got to be able to leverage a bunch of processing power to, you know, sort of do tasks and also just learn about their environment.

You know, the perception sensors are exploding. The number of different sensors on these robots is probably changing by an order of magnitude year over year. So, you know, those actuators, the processing power, those perception sensors, kind of, lead to some of those things that you were talking about with computer vision, and being able to see the environment in a much more, you know, sort of detailed way. But I think in the last year, the explosion of AI and machine learning, layering that on top of all this hardware innovation we’re seeing, is kind of giving us the opportunity to get into this new domain. And that’s what’s really interesting. It’s like, you can do a lot with hardware, and you need to, but the fact of the matter is the, you know, robot has to navigate unpredictable terrain.

And it’s not like what we talked about earlier in the pod, which was, you’re doing the same task over and over again, and you’re not really changing your landscape. These robots have to navigate different kinds of landscape. And so sensors, coupled with a very, very high degree of processing power to understand things quickly, coupled with the AI is really giving us a lot of potential.

Jason: Yeah. People talk too much about the hardware—or maybe other people do, I don’t know—I [laugh] frequently just kind of assume, it’s like, okay, yeah, we’ve gotten really good at building, like, robotic arms. Like, how much different is it? But when you’re really thinking about it—particularly, you know, there are people building these humanoid robots for general purpose—that’s a big, meaty, giant goal that is far from clear. But also, like, many of the breakthroughs that we’ve had were kind of far from clear until they were super clear [laugh], right?

Raju: [laugh]. Right.

Jason: Like, you know, the Transformer being a great example of just kind of one of those where, like, you have the right architecture, and finally you could, like, scalably ingest more data, and the context windows and the, kind of, coherence of the system would persist across these vast arrays of tokens. But the hardware still is really important, particularly in the humanoid space, where, number one—you mentioned the chips—well, you got to run the inference on the robot, and, like, the round trip to the cloud is too… it’s too slow. And it’s too slow particularly in environments where you could tip over and injure somebody. Your perception, your ability to navigate, and your ability to safely achieve those tasks need to be running real-time on the robot. And so, that means, you know, the more sophisticated the system, and the more data we’re trying to, like, push through these things, the more sensors we put on it, the more the requirements go up dramatically for the robot. The more joints you put on it, you know, it has to now predict the future potential path or state for each one of those joints, right, the actuators.

Will: So Jason, that’s really interesting, because I was thinking you were going to say that, kind of, the sensor layer and the physical layer were for the most part pretty built out. But it sounds like what you’re saying is that, as we try to do more and more complex things with robots, the pressure point is in the ability to provide, at a local level—that is, on the robot—the quality of processing and response time needed, and that there’s still quite a lot of innovation happening at that layer, well before we get to the implementation of some of the artificial intelligence that Raju was talking about, which may reside, kind of, at a master level within the robot.

Jason: Yeah. It kind of works all in concert, right? So, Figure, one of the primary, you know, humanoid robotics companies—that’s raised about three-quarters of a billion dollars by this point—they’re building their own electric motors. So, there’s the processing power, which obviously also literally sucks power, but there’s also the actuators. And having enough battery—it’s like range anxiety in a car. You know, if you had an electric car, but it could only go 30 miles, useless, right?

And so, there is a consideration as well [laugh], you know, that all of these things take literal power, right—running a chip, and these actuators—and if you’re going to do things autonomously where you’re not plugged in, or, you know, plugged into the floor or plugged into some compute, increasingly it’s a systems consideration as well. Which, you know, the pioneering robots need to take into consideration.

Raju: It’s actually true. And you know, like, my son’s a roboticist at Carnegie Mellon, and he’s just finishing up his master’s degree there. And you know, I kind of talked to him a bunch about why this is so difficult. Why is, like, a robotics degree so hard? And it turns out, it’s the whole system thing [laugh].

You got to know how to do vision systems. You got to be really good at, you know, some sort of machine learning or AI, you have to know mechanical engineering [laugh], you have to know electrical engineering. And so, like, you got to get, like, four degrees [laugh], effectively, to be a robotics engineer. And you’re absolutely right: the technology needs to improve in concert. You can’t really just take AI and put it on a 1970s robot because you just don’t have the processing power, you don’t have the, you know, sort of vision systems that are going to effectively be able to see the environment, you don’t have the sensors, and the actuators are just inefficient. But I do see all of these improving—I wouldn’t say at the same rate, because some of the technologies have, sort of, been around a little bit, but they are improving at every level.

Will: So guys, I think that—you’ve given me a really clear view on the innovation challenges in the landscape, which are, frankly, much more compartmentalized than I was expecting. How does an investor approach that set of problems, right? Does every robotics company have to be a full-stack company that requires hundreds of millions of dollars to become a scalable enterprise, or are there investment opportunities within that innovation landscape that you just described? Because it sounds like there’s a real need for people who are pushing innovation at the physical layer, at the vision systems layer, et cetera.

Jason: There aren’t really Lego blocks yet, and it’ll probably be a little while before we have the Lego blocks. The world is a very unstructured, chaotic place [laugh], and we’ve talked previously about this—I talk about them as, you know, robots as humans or robots as aliens, and people are really excited about robots as humans. There are a lot of really unique challenges that go into it—novel, pioneering R&D challenges that are not clear cut—that need to be solved in order to have a truly general-purpose humanoid robot. You know, the flip side is, you just build a robot for the task that you want to do, and you just pick a really high-value task, or, you know, a very repetitive task.

We talked about Roomba earlier, right? I [laugh] always pose to people, “Would you rather have a Roomba, or a humanoid robot use your vacuum?” And it’s like, well, at a certain point, if the humanoid robot could do N number of additional tasks, that would be, like, valuable enough for me to go buy that thing. But by the way, it would also cost you about 10x, if not 100x, more than a Roomba. So, it has to be very general purpose, and by the way, very safe if it’s operating in your home.

Like, tipping over into grandma is not a thing [laugh] that you want to be doing. Even if it’s washing the dishes, you know—it tips over into grandma, and now she broke her hip; that’s a massive, massive, massive problem. So, safety is always key. And the thing is, people kind of poo-poo the Roomba as a robot, but it’s designed to go under your couch. Like, it’s purpose-built. And so, it’s actually better, in a lot of ways, where, you know, if you’ve had [laugh] to vacuum under a couch before, you’re getting down on your hands and knees, you know, and really getting under there. It’s not a very efficient thing to do.

Raju: Yeah, I would love to see—I would love to see a Roomba go rogue one day though. Like, you know what I mean? There’s got to be software in it that prevents it from going rogue. But like—

Jason: [unintelligible 00:23:30] yeah.

Raju: —there’s a lot of dirt on children, and the Roomba could say, “I see dirt,” and happens to be on a child, and it’s just chasing kids around the house. I mean, or cats… dogs.

Jason: What I’d love to see is the Roomba deploy a little mini-Roombas, or something, you know? Like, it would just kind of crawl up on things and, like, clean off your table [laugh]. That would be cool.

Raju: Yeah. Oh, that’d be cool. That would be cool. Like a dusting machine—

Jason: Yeah.

Raju: —that just, like, sort of extracts itself.

Jason: [laugh]. Yeah, exactly. But the final thing I’ll say is that, you know, we’re kind of getting to, like, what is a robot, even? And I just talked to a company, it’s called [ArrowVac 00:24:08]. They’re those little trolleys, those tiny little trains—there’s, like, a truck on the front, and then there’s, like, a long line of the baggage things. There’s probably a specific name for them. But the things that bring your suitcases, you know, from when you check your bags to the plane, and from the plane to the baggage carousels. We’ve all seen them. They’re creating an autonomous version of that.

So, they put sensors on there, but really what they’re doing is just, like—there’s a GPU and some sensors, and now it’s an autonomous robot. And so, there are some things where it’s like, could we just roboticize existing things? That is actually a very low-cost, much more, kind of, efficient way. There are only so many of those tasks where you can take a thing that was previously just driven or done by a human and create the autonomous version of it, but that’s kind of, like, the closest one where you’re going to say, yeah, it’s not, from a hardware perspective, that challenging. The remaining tasks—I think they’re some of the hardest businesses to build, for Raju’s exact points. It’s just, you’ve got to be excellent at just a huge number of things.

And even if you’re excellent, your supply chain also needs to be excellent. You have a thousand-part robot; if one part is not available, you cannot ship robots. And so, it is very challenging. The flip side is, number one, the incumbents have next to no software and AI. So, it’s one of those rare industries—unlike, you know, language models, which we’ve seen just being devoured by the big tech companies and immediately deployed out, right, just kind of doing scorched earth for all the new potential startups that would crop up—that is not the case in robotics.

So, you have this kind of unique thing where if you can build something, you’re probably not going to have much, if any, competition in your space, which is awesome. And the second thing is just the labor shortages that we’re going through, industry after industry. I mean, the trucking industry, the food service industry, the agricultural industry, I mean, you can—manufacturing, like, all these things, these are things that make the world go round. Like, remember, during Covid where like you couldn’t get toilet paper or, like, you know, like, where things just kind of like weren’t on shelves? Like, these are the things that make it happen.

Raju: I’ve gotten used to not using it because of that shortage. It’s fine.

Jason: [laugh]. Oh, Jesus. Okay. Oh, you got one of those amazing Japanese toilets?

Raju: Oh, no, no. I’ll do that next. Right now, it’s just… people keep their distance.

Will: Someone needs to tell Raju that Covid is over.

Raju: Oh, Covid is over?

Jason: Somebody needs to tell Raju he’s on a podcast.

Raju: Oh, never mind. Okay. No, you hit a really good point. This labor shortage thing, I think, is massive. It’s a massive issue, and it’s not getting solved. And it actually… like, you know, China’s trying to increase the population now because they kind of shrunk the population, you know, using legislation, and now they’re trying to grow it. And, you know, there’s immigration rules that are going to prevent a normalization of people that are needed in particular countries.

And so, Jason, like that driving the need for this, like, I’d love to explore that a little bit more. Like, why do we think—as VCs, and why does the industry think—that now is the right time to be putting money into this? And I think it really does start with the labor shortage, but you know, what else are you seeing in the world that’s saying, like, hey, you know, it could be a good venture bet.

Jason: Yeah. Well, there’s a couple of things in there. I think it is difficult to overstate the labor shortage problem. It’s like climate change, where we’ve known for a long time—like, you can’t make more 21-year-olds today; you had to make them 20 years ago. And so, like, just with our birth replacement rates, we’re kind of locked in where we are.

And this is country after country. I know Italy’s dealing with this. I know Finland has massive incentives to bring populations in. France spends, like, 4% of their GDP on encouraging people to [laugh] have children. And so, you know, my quick take on the labor shortage thing is just, I don’t think people want to buy robots; I just think robots are going to be deployed because we can’t find humans to do the jobs. It’s going to be a very long time before there is, like, true labor displacement.

I’m not talking about, like, an n-of-one level, but just at a macro level. So, that’s thing number one: robots are coming and we definitely need them. We definitely, definitely need them as, like, a human population globally because we’re running out of labor. So, that’s thing number one, and it’s a massive tailwind, and that’s a structural tailwind, which are some of the strongest tailwinds you can get, if not the strongest.

Thing number two—I don’t know if we want to dive back into some of the new technologies that are creating new avenues, but that is certainly another massive unlock. We’re still figuring out how massive, but language models—they started out as language models; they’re now multimodal. Multimodal means they can handle more than one modality: image is a modality, text is a modality, video is a modality, audio is a modality. And the beautiful thing about transformers is, as long as you can kind of tokenize and vectorize things, you can basically, you know, quote-unquote, “throw it into a transformer,” and get English out on the other side [laugh]. And so, you can kind of map, you know, something that you’ve tokenized into a vector space to the English language, which is pretty amazing. People are throwing all sorts of things into transformers.

One of the most obvious ones is that you can now go readily use GPT-4 or GPT-4o, you know, PaLM or Gemini—like, all these systems are actually multimodal. So, you can throw in an image, and it will recognize what’s in the image, and can talk to you about the image, and explain the image. And yet again, a thing that seems really simple to a human [laugh] but to date has been so difficult for roboticists is task planning. It is thinking about the future, taking a scene, breaking down what possible actions you could take, and then planning out those actions.

Again, sounds super simple. We kind of do it without even thinking, half the time, linguistically. Like, I’m going to pick up this bottle and, you know, drink some water, da-da-da-da-da. I don’t have to plan that in English. For a robot, like, people forget that, you know, silicon that drives the chips, like, it’s something as dumb as rocks. I mean, it’s a metalloid, but I like to say it’s dumb as rocks.

Like, it starts its life as this—yeah, as a metalloid, and we have to teach it. You know, we’ve taught it the ones and zeros. Now—you know, quote-unquote, “teach;” I don’t want to anthropomorphize too, too far—but physics is a very complex system, and these robots need to operate in that world. And giving language to a vision system that can now plan a task is actually incredibly useful, because then you kind of already know, oh, I need to go into my grasping mode.

And it’s like, well, I’ve actually been, like, trained to grasp a whole array of objects. But even knowing when to grasp for something, and whether it’s reachable, or how you’re going to have to navigate, or achieve a task that isn’t just, like, very straightforward, that portion of the, kind of, robotic cognition has taken a massive leap forward. So, that’s thing number one. I don’t know if we want to talk about that anymore, so I’ll pause, but I have one other big leap that has been really helpful. I don’t know if you guys have any questions on that part, or comments.
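A minimal sketch of the task-planning idea Jason is describing: hand a multimodal model a camera frame and ask for a plan expressed in a fixed skill vocabulary. This uses the OpenAI Python SDK as one example; the model name, prompt, and skill names are assumptions, and a real robot stack would validate every step against its actual skill library.

```python
# Sketch: vision-language task planning over one camera frame.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("scene.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text":
                "You control a two-armed robot with skills MOVE_TO(x), "
                "GRASP(object), POUR(object). Plan how to water the plant "
                "in this image as a numbered list of skill calls."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # e.g., 1. MOVE_TO(watering_can) ...
```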

Raju: Yeah. No, I don’t. I mean, I think those are good points. I think the VC dollars, from my perspective, being put to work here really kind of trigger off of the main point, which is just, we have a labor shortage. 70% of the tasks, as you mentioned, Jason—like, just work done on the planet—require human labor. It can’t be automated just by software; it needs hardware.

And you know, the third point I was just going to make is that the cost is dropping on these things. Like, I remember, you know, a couple years ago, it was, like, between $50,000 and $250,000 for one of these humanoid robots, and now we’re looking at $30,000 to $150,000. You know, so it’s getting in a range. When you look at that $150,000, let’s say we get it to $100,000, right? Now, you’re talking about, we have a labor shortage, and it’s $100,000 to, you know, kind of buy this robot. Let’s say it’s another $50,000, or $20,000, you know, sort of to basically maintain it—you’re going to need parts or whatever. It’s like $100k upfront, $20k year-over-year, to flip a burger?

Jason: Oh, the math, the math is—

Will: Well, right. It starts to sound like, kind of, a basic capital expense that becomes a permanent part of your cost structure for running a business. And that to me is really useful—like, Jason, you’ve given us, I think, a really useful frame around labor replacement being the kind of near-term construct for investments in the robotics sector. And I’m kind of intrigued by your point earlier about the challenges of future planning for robots, right? But if they know what their job is, and their job is, you know, defined by a certain regular set of variables, then maybe they can offload a lot of basic labor shortages, and maybe the venture community can build a bunch of great robotics businesses that are about specialized labor replacement—the burger flippers, et cetera, of the world.

Jason: Yeah, there’s a couple of things there. Number one, a side benefit of, you know, kind of internally being able to think through a task is that there’s, like, a native and built-in interface, right? We all speak [laugh] a language. Now, you know, there are human-robot interactions that are going to be important. “Hey, what are you doing?” You know?

Like, just asking that to a robot in natural language—like, now you can get an answer, which is pretty amazing. This has been used at Wayve, which is an autonomous driving company; they now use language as one of the core ways that the autonomous system actually outputs the driving instructions. But the side benefit is, “Oh, go park behind that white car,” or, “Give this cyclist some extra room,” is something that you can now kind of just, like, say to your car, which is pretty amazing. So, that’s kind of, like, the point on the side benefit of the language piece.

Coming back around to the cost portion, I think people get the math wrong a lot. Like, they’re like, oh, well, you pay this person $50,000 or $60,000 a year, so the robot’s got to be less than $50,000 or $60,000. Like, not even close [laugh]. Not even close. I mean, you could charge a million dollars for an actual general-purpose humanoid robot for manufacturing. Easily a million dollars.

And it would make total sense because these businesses are used to buying hardware that is expensive, and they amortize it over its useful life, which is usually measured in decades. And, number one, even just finding the labor to begin with—we talked about labor shortage. We didn’t talk about labor turnover, right? Many of these tasks have really high turnover.

What does turnover mean? Turnover means you need to be constantly recruiting, you are, you know, re-skilling that human, right? If they need to understand how to do certain things, that takes time. That’s a drag on your business because things aren’t getting done, or they’re getting done improperly. You have worker's comp, and all the additional health insurance, and people show up drunk or sick, and get the other workers sick. I mean, there’s all these considerations.

A robot, like, you know, as long as it stays charged [laugh], is 24/7 labor right there—that doesn’t get tired, that doesn’t complain, that doesn’t show up drunk, that doesn’t, you know, do all these things. And as long as you can complete the task with a high enough mean-time-between-failure and success rate, all of a sudden the math looks very attractive, even at some pretty high cost rates. And again, I don’t think people are going to be replacing worker X with robot Y. I think it’s mostly going to be, we can’t find worker X—

Raju: Yeah.

Jason: Or we have to rehire that person, you know, three times in a year.
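Raju's earlier numbers make the math easy to sanity-check. A back-of-the-envelope sketch, with every figure illustrative: a $100k robot with $20k a year in upkeep, against one $50k-a-year worker per shift, where a near-24/7 robot covers more than one shift.

```python
# Back-of-the-envelope robot-vs-labor comparison (all numbers illustrative).
robot_upfront = 100_000      # purchase price ($)
robot_upkeep = 20_000        # parts and maintenance per year ($)
worker_cost = 50_000         # fully loaded cost per worker, per shift-year ($)
shifts_covered = 2           # a near-24/7 robot can cover multiple shifts
uptime = 0.9                 # assumed fraction of time actually working

for year in range(1, 6):
    robot_total = (robot_upfront + robot_upkeep * year) / uptime  # penalize downtime
    labor_total = worker_cost * shifts_covered * year
    print(f"year {year}: robot ~${robot_total:,.0f} vs. labor ${labor_total:,.0f}")

# Even with conservative uptime, the robot crosses over by year two, before
# counting recruiting, turnover, training, or workers' comp.
```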

Raju: Yeah. No, it’s so true. And you said this in previous podcasts, Jason, but, like, dull, dirty, dangerous, right—

Jason: Yeah.

Raju: Nobody wants to do that job.

Jason: No.

Raju: Nobody wants to do that work.

Jason: No.

Raju: And even if you do have labor around, they’re going to pick a lot of other jobs before they do those kinds of things—really repetitive tasks. And, you know, a million dollars—I don’t know if somebody’s going to drop a million on an experimental technology, but yeah, I think, like, you get into a range where it’s like, hey, that robot can work 24/7, 365, that robot doesn’t complain, you know, it doesn’t need a raise, it doesn’t quit the job, and I don’t have turnover issues. And it kind of makes a, sort of, calculated sense to purchase. So, it’s super interesting.

Jason: Yeah.

Raju: What about safety valves? Safety nets? Like, let’s talk a little bit about that. I don’t know how much I am up to speed on the safety nets for these things. I know in the cobot world, you know, there’s a bunch of sensors on cobots. Cobots, for those people who don’t know, are manufacturing arms, effectively, and they sit in a manufacturing environment, adjacent to humans.

But they have a lot of sensors, so that when they see a human, come near a human, or graze something that they’re not supposed to graze, they just stop—because if they didn’t, they’d probably take off your head. And those have been around for a while, you know? We could never get them operational otherwise. They used to put them in cages, as you mentioned, Jason, and now they’re out of cages, right, because they have enough—

Jason: Yeah, they have force sensors.

Raju: Exactly, force sensors.

Jason: And so, they’re—they’re called collaborative robots, hence, cobot.

Raju: Yeah.

Jason: That’s where the term comes from.

Raju: You know, have you guys heard of anything? Or what are your thoughts around that in more general purpose robots?

Jason: I mean, this is a really, really important point. And this is a really important point for customers. You know, just from my experience with OTTO Motors and Clearpath Robotics—selling into, I don’t know which customers are public, but, you know, a large automotive manufacturer—they take [laugh] safety so seriously. Like, it’s not a joke. They rigorously test to make sure that this can’t injure their workers. And as they should, right?

And we could actually run the robots way faster. Like, way faster. Maybe twice as fast as they currently go. But in order to ensure a remarkably high safety tolerance for these autonomous mobile robots, we run slower. And we can still deliver the ROI. But—

Will: But Jason, in that scenario, isn’t the problem the humans in that environment?

Jason: [laugh].

Raju: If we pulled all the humans off the shop floor, could we just speed things up and really improve productivity massively?

Jason: That’s a great question. There’s actually a term for this called lights-out manufacturing. And it’s lights-out because humans need light to see; you know, obviously, you can just run the thing in the dark. Honestly, that isn’t really technically true, because computer vision [laugh]—you know, it’s really helpful to have some light.

But it’s the idea, right—like, a fully automated system that is just autonomous, right, that just goes. And so, the short answer is definitely yes. And that’s why they put these, you know, old dumb robots in cages. We’ve got these new ones that have force sensors. But the first thing is, if you’re around humans, you got to be thinking about safety.

And, you know, I talked about it earlier with, like, tipping over into grandma as, like, a little bit of a jokey scenario, but that’s very real. And, you know, people talk about the Terminator scenario—which, I guess, is another one, but this is much more innocuous, and frankly, it’s a huge challenge. Nobody’s trying to design things that [laugh] tip over, right, but staying upright is actually a difficult challenge if you’re a bipedal robot. Even humans tip over.

Raju: Any other topics we want to cover in this before we jump into Gatling gun? Or should we just run into that?

Jason: No, I just—I guess I would just kind of summarize the take throughout the podcast, which is that these businesses are very, very idiosyncratic. They need to be designed for a specific set of tasks, and if you’re going to build a robotics business—and I encourage you to do so; it’s only getting better and better, and we’re going to need it more and more—it’s not a science project. It’s got to be a business, and in order to be a business, your customers really, really, really need to want them, right, or just be in a position where they, you know—as I had mentioned before—just can’t afford not to automate. And I frequently see kind of this idealized thinking without really fully understanding the customer.

I kind of joke that, like, you know, the banks have the know-your-customer requirements [laugh]. Robotics businesses need to know their customer 100x better than a bank. I mean, you need to know the priorities, where their pain points are, what yield, or throughput, or mean-time-between-failure can be tolerated. Are you creating extra jobs? You know, instead of, like, the person doing the task, now there’s just, like, a person watching the robot. So, like, have you actually moved the needle?

And really consider what part you play in the overall system that the customer is running. You know, with OTTO Motors, they’re basically the lifeblood. Materials flowing around these systems is, like, the veins and the blood; it’s basically the beating heart. And if that goes down in an automotive manufacturing environment, that costs them millions of dollars. And so, your, kind of, tolerance for error goes way down.

And so, this is all to say that these are some of the most challenging businesses to build. You know, you could be an n-of-one in a space where the budgets are huge. I mean, individual customers of OTTO Motors, if they were fully expanded, could be in the hundreds of millions of dollars per customer. Honestly, like, $100 million-plus per year is not even close to out of the question. And so, it can be massive. Come talk to us if you’re building robotics companies.

Raju: Yeah, yeah. For sure. We love them. Okay, so I’m going to move on to the Gatling gun segment. We’ve covered the topic before—we actually had another robotics podcast, which focused more on Clearpath—so I’ll try to keep these questions a little bit different from that one. Some of these are fun. Some of these are just inquisitive. So, you guys: best humanoid robotics movie that you’ve seen?

Jason: I love I, Robot. Sonny. Lovely robot.

Raju: Yeah. What about you, Will?

Will: Oh, I think the Terminator series, like, dominates my imagination. Has since I was a kid.

Raju: Yeah. You know, I had both of those, but then I picked Ex Machina.

Jason: Mmm.

Raju: Because that was awesome.

Jason: Because Alicia Vikander?

Raju: Yeah, maybe. But, okay. Worst humanoid robotics movie?

Jason: Oh. It’s probably some random low-budget—

Raju: It is. It is. It is.

Jason: —Nickelodeon movie made in the ’90s [laugh] that I definitely watched. But [laugh] yeah.

Raju: It’s either Cyborg Cop, or Cyborg Cop 2.

Jason: Yeah.

Will: [laugh].

Jason: There you go [laugh]. Solved.

Raju: Okay. All right, what’s the coolest humanoid robotics video you’ve seen this year?

Jason: Mmm. I’ve watched a lot of them.

Raju: I know.

Jason: It’s actually not a humanoid. That’s why I’m having difficulty. So, I’ll just say the non-humanoid one. There’s an amazing, like, Google researcher who initially took a large language model and just had it rewrite the code on the robot live, so the language model could update the robot based on, like, just natural language input. And it was pretty amazing just to watch. Like, “Oh yeah, now pick up the red ball.” And it’s like, there’s no [laugh] there’s no red ball in the code.

“Oh, and move it a little to the left.” A little to the left in human scale in language is like a couple inches. But a robot has no sense of scale. It can be like, “Oh, you mean three light years to the left?” No. It’s like, because of the language model, and like all of this, like, sense of human scale, very quickly, you got this very useful robot that was otherwise very dumb that could just update its own code, which is pretty amazing. That was a pretty cool one.

Raju: I have a—my brain is a lot simpler than yours. I thought the Tesla bot skiing was pretty cool. It just—

Jason: Wasn’t that fake, though? Wasn’t that—VR—

Raju: It might have been.

Jason: —or sorry, not VR. I’m pretty sure that one—snowboarding. Yeah.

Raju: Yeah, yeah, yeah.

Jason: That was CG, I’m fairly certain.

Raju: Oh, shit. Well, then—

Will: That was robot propaganda published by the office to promote the future of robots.

Raju: Yeah.

Jason: Probably just published by Elon.

Raju: Yeah. All right. So, this is just a predictive question. What’s the first task that you think a humanoid robot will be designed to execute?

Jason: Well, it’s not folding clothing. That’s a pretty—[laugh] it’s a pretty tough one.

Raju: [laugh]. Yeah.

Jason: Well, we’ve already seen it. I mean, there’s, like, the dishwasher, like Figure, they’ve done demonstrations, picking up eggs, sorting stuff, I mean, that’s kind of—

Raju: Yeah.

Jason: —they’ve already done the tasks.

Raju: It’s actually going to be in the business environment. I think it’s going to be, like, moving boxes.

Jason: Mmm.

Raju: I think it’s going to be packing boxes.

Jason: Yeah.

Raju: It’s done everywhere. Such a need. And, you know, they can do it [laugh] pretty easily.

Will: I think the food delivery robots that we’re already seeing on the street are—

Raju: Oh, yeah. That’s true.

Jason: I think that’s pretty meaningful. That’s a human job that is supplied by low-wage labor today, and it’s clearly in the midst of being supplanted in some cities, some economies of the world.

Raju: Okay. Those are good points. All right, what year do you think we’ll have the first general purpose humanoid robot in the workplace? General purpose.

Jason: General purpose means that you can ask it to do anything, and it will do it?

Raju: Yeah, exactly. And it might not be able to do everything, but it’ll tell you, “I can’t.” But, you know, it’s not designed for just a certain task.

Jason: Well, [laugh] then, we have them today. It needs to be able to do the things. I’m going to say properly general purpose, probably 2040.

Raju: Will?

Will: Yeah, that feels about right to me. 15-plus years.

Raju: Yeah, I’ll go with 2042. And how about in the home?

Jason: My spicy take is that I think Figure and a lot of the humanoid robots will actually end up in the home. They’re targeting the manufacturing environment, but those environments are built around tasks, not around humans. And so, you can have, like, an alien-looking robot do a task really well. The homes are built around humans because we got to live in them, and so it actually makes a ton of sense [laugh] to have the humanoids in the home. So, when do I think they’ll get there? I’m going to say maybe, like, 2030… early-2030s, maybe 2035?

Raju: Yeah. How about you, Will?

Will: I would have thought it’s further out, because I think the tolerance is going to be a lot lower for variance in the behavior of the humanoid robot than it will be in the workplace. So, my take on it is that the humanoid general-purpose robot at home has to deal with a lot more complexity, and the risks are greater. So, I’m going out beyond 2040. I’m going towards 2050.

Jason: Yeah. I didn’t hear the general-purpose portion. That’s why I said 2035. Because if you can do a certain number of tasks and make sure you don’t fall over, you know, you don’t have to, like, do literally everything for it to still be useful. But yeah.

Raju: Yeah, I think it’s going to be 2041. And this can be a year after it’s a general purpose for the—

Jason: [laugh]. Nice.

Raju: Yeah, because I just think the risks are too great; the environment is more chaotic in the home. But I think when we see them in the workplace, we’re going to see them at home, like, shortly thereafter. Not too much longer. Okay. What year do you think robots will unionize?

Jason: [laugh]. That’s more of a—that’s an AGI question more than [laugh] more than a robotics question. Um… 2050. But I don’t know.

Raju: Uh… yeah. I’m going to go with 2042. They’ve been abused for two years in the work environment; they’re ready. They’re ready to say, “I’m not picking up that piece of paper anymore.” “I pick things up and put them down.” So, you know, they’re going [unintelligible 00:49:47] unionize. Okay. Year that humanoid robots realize that humans are optional?

Will: Well, I think that’s, by your definition, already 2042 [laugh].

Raju: Yeah, I guess so. Yeah. If they’re unionizing… I’ll go with the same 2042. The year they realize that we’re liabilities?

Will: [laugh].

Jason: This is the plot of I, Robot. This is the whole plot of I, Robot [laugh].

Will: Yeah, once you’re at this point on the curve, you’re headed up the asymptote. It all happens very quickly [laugh].

Jason: Very shortly thereafter.

Raju: I think it’s 2042 and one day [laugh].

Jason: [laugh]. There you go.

Will: A lot of things happening in 2042.

Raju: Yeah, exactly.

Will: Plan accordingly.

Raju: Okay, activity you are most looking forward to robots doing on your behalf? So, Will, I know you got a dirty mind, so keep this clean. Keep this clean.

Will: [laugh]. You know what? Everything to do with car maintenance, believe it or not. Like, I find owning a car in New York City to be, like, the most tedious thing, and I want my car—my robot—which will really be my autonomous car—to basically drive itself and get it inspected by the authorities. That’s my problem this week, so I’m kind of like looking for a robot to solve it for me.

Raju: Okay, fantastic. Yeah, I’m going to go with talking to my wife. No, no, I’m just—no just—no. She’s not listening to this. Somebody just kill that [laugh]. All right. Jason?

Jason: I think you’ve got to look at the tasks that people who can hire people to do things for them hire for first: someone to drive them places, someone to cook for them. Like, those are immediate, low-hanging fruit. But people want that kitchen robot.

Raju: I know.

Jason: People want to be able to hand the robot the recipe and know that it’s getting done.

Raju: All right. Well, that’s all the questions I had. Jason, I’ll let you wrap it up for us.

Jason: And that’s a wrap.

Raju: [laugh].

Jason: I don’t think I’ve used that one before.

Raju: Don’t expend too much effort on this, Jason.

Jason: That’s going to be—yeah exactly. “That’s a wrap.”

Raju: Don’t [laugh]—

Jason: We did it, you listened to it, and if you’re still here, you probably already follow us at @RRE on socials; we’re RRE Ventures on LinkedIn. Be sure to like and subscribe, or follow, or whatever the terminology is, podcast-wise. Share it with your robot friends or your friends who love robots. If you’re a robot listening to this, we welcome you, future overlords. Thank you for existing. And yeah, reach out anytime—you can find more information about us at rre.com.

Raju: So, I’m thinking the short version was better. No, no [laugh] I’m kidding.

Jason: [laugh].

Will: [laugh].

Raju: Thanks, guys. Thanks for listening. Thanks, everybody.

Will: Thank you for listening to RRE POV.

Raju: You can keep up with the latest on the podcast at @RRE on Twitter—or shall I say X—

Jason: —or rre.com, and on Apple Podcasts, Spotify, Google Podcasts—

Raju: —or wherever fine podcasts are distributed. We’ll see you next time.