Venture Step

Summary

Dalton discusses two groundbreaking projects in this episode: NVIDIA's Project Groot and Neuralink's human trials. Project Groot is part of NVIDIA's GEAR project roadmap and aims to create a foundational model for robotics, which Dalton explains using a Lego analogy. It combines Omniverse, Isaac Lab, and LLMs to enable robots to learn and perform tasks. Neuralink, on the other hand, focuses on connecting the brain to computers and robots. Dalton shares the inspiring story of Noland Arbaugh, who regained independence through Neuralink's chip implant. He also highlights the ethical concerns and the potential impact of AI and technology on society.

Takeaways

NVIDIA's Project Groot combines Omniverse, Isaac Lab, and LLMs to create a foundational model for robot learning and task performance.
Neuralink's human trials have shown promising results, allowing individuals like Noland Arbaugh to regain independence through brain-computer interfaces.
The rapid advancement of AI and technology raises ethical concerns and society needs to establish appropriate guidelines.
Both projects can revolutionize various industries, including manufacturing, elderly care, and space travel.

Chapters

00:00 Introduction
00:27 Overview of Project Groot
01:17 Neuralink's Human Trials
03:26 Introduction to Project Groot
04:26 Foundational Models and Legos Analogy
06:20 Building Blocks of Foundational Models
07:35 Reinforcement Learning and Simulated Environments
08:51 Omniverse and Hyper-Realistic Physics Models
10:23 Isaac Lab and Robotics Playground
11:23 Large Language Models (LLMs)
12:53 Combining Building Blocks into Foundational Models
13:50 Demonstration of Groot's Capabilities
14:38 MimicGen and Human Control of Robots
15:46 Applications of Groot in Various Industries
24:14 Overview of Neuralink and Human Trials
25:12 Neuralink's Brain Chip and Neural Spikes
26:12 Improved Independence and Communication with Neuralink
27:28 Positive Impact of Neuralink on People's Lives
31:58 Ethical Concerns and Progress of Neuralink
35:29 Neuralink's Vision for the Future
38:09 The Pace of Technological Advancements
39:09 The Need for Society to Adapt to Technological Advancements
42:40 Conclusion and Future Topics

Creators & Guests

Host
Dalton Anderson
I like to explore and build stuff.

What is Venture Step?

Venture Step Podcast: Dive into the boundless journey of entrepreneurship and the richness of life with "Venture Step Podcast," where we unravel the essence of creating, innovating, and living freely. This show is your gateway to exploring the multifaceted world of entrepreneurship, not just as a career path but as a lifestyle that embraces life's full spectrum of experiences. Each episode of "Venture Step Podcast" invites you to explore new horizons, challenge conventional wisdom, and discover the unlimited potential within and around you.

Dalton (00:01)
Welcome to the Venture Step Podcast, where we discuss entrepreneurship, industry trends, and the occasional book review. Get ready for a future where robots cook your dinner, care for your grandparents, and maybe even read your mind. NVIDIA's Project Groot and Neuralink's mind-blowing human trials are making sci-fi a reality. Before you dive in, I'm Dalton, your host. My...

Background is a bit of a mix of programming and insurance. Offline, you could find me running, lost in a good book, or building my side business. You can listen to the podcast in video or audio format on Spotify or YouTube. If audio is your thing, you can of course find the podcast on Apple Podcasts, Spotify, YouTube, or wherever else you get your podcasts. Today we'll be discussing NVIDIA's Project Groot,

that was announced two weeks ago during NVIDIA's conference slash project roadmap demo. We'll also be discussing Neuralink's human trials. Neuralink was recently approved by the FDA to do human trials, their first human trial,

and that individual had their surgery in January. It took them, I think, a month to recover. Two weeks ago, Neuralink sent out a video, a live demo with Nolan, and Nolan is the first human that Neuralink is being prescribed to. Nolan was...

or not was, but is paralyzed from the chest down from a freak diving accident, I think eight years ago, and he lives with his parents. And basically the video kind of discussed how Neuralink has changed his life, and it's only been a short time. And so I think it's a really touching story about how technology can help out and improve the day-to-day lives of

people that you might not see on the surface, but they're around, and they need help, and they don't want to have to rely on other people. They want to have some level of independence, whether they were born that way or freak accidents or choices, you know, led to independence being taken away. And NVIDIA,

slash Neuralink slash Figure slash

Tesla Optimus has a promise to...

give people independence again. And then maybe, if we have enough time, I don't think I'll talk about it, but when tech outpaces society, maybe I could touch upon it a little bit. But for Groot: so Groot, once again, was announced by NVIDIA a couple weeks ago, and Groot is part of the project roadmap for GEAR. GEAR is

an additional research arm that NVIDIA spun up, with Jim Fan as the head of it. Jim Fan is, I think, the head of AI research and innovation at NVIDIA. Jim Fan got his PhD at Stanford, and I think he was a professor for some time, or still is, but he's obviously highly involved with NVIDIA and is heading the next

chapter of AI research at NVIDIA. And so Groot is a foundational model, and I'm not sure everyone understands what a foundational model is, but we'll discuss it. And I think that the best way to do so is with an analogy with Legos. So say that you have three sets of Legos. You have one set that is

just the Legos, just basic Legos, nothing much about it. Then you have another set of Legos that is a kit to build a princess castle. And with the kit, you know, if you follow the instructions, you can build the princess castle. And then you have another set that's also a kit, one that builds a Batmobile, right?

And those kits could represent purpose-built models. And so with a purpose-built model, if you build a Batmobile, the model knows exactly how to build a Batmobile. Or if you build a princess castle, the model can build the best princess castle. But if you ask it to build, say, you ask the Batmobile

kit, or I guess in this analogy, if you try to build a princess castle using the Batmobile kit, you're obviously not going to be able to do so, because you're not going to have the right parts. You're not going to have the right instructions. And so that's the point of a foundational model or general-purpose model: you have these basic blocks, and these basic blocks are just different

models combined into each other that are considered the building blocks, right? Like as a human, you have hearing, you have sight, taste, smell, touch. And so you have all these senses and all of these senses give you data on your environment and what you're doing and how you're doing it and if it's good or bad. And so that's what is trying to be achieved with a

foundational model or general-purpose model: you just have these basic Legos, and with these Legos you could build a Batmobile, you could build the princess castle,

without having the instructions. But to do that, you need a solid understanding of your environment, a solid understanding of physics, and a grasp of how your body interacts with your external environment. Crazy things like, okay, when you step as a robot, how...

How long should you wait until you take your next step, how much force is in the rotation between steps, where do you lean your body, and how do you lower your center of gravity on slippery surfaces? All these things that we don't think are very complicated, because we just do them, are pretty complicated for robots. And so,

One of the ways that you can tackle this is with reinforcement learning, and this is typically done in a simulated environment. Other people would call it virtual reality. So in this simulated environment,

they would do a task, say the robot would climb the stairs, and then it would learn how to climb the stairs. It would climb the stairs like 200,000 times. And then, after the robot could probably climb the stairs in the simulation, they would try to do so in the real world. And so the robot would do the same thing, and maybe they'll pour some water on the stairs,

and the robot slips, and they have to go back to the drawing board. And typically, or not typically, but the way that Boston Dynamics does it is they have these code-built models where the developers slash engineers build these models and code out the conditions and write what you're supposed to do in certain situations. And then they supplement it with reinforcement learning.
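To make the reinforcement learning loop he's describing a little more concrete, here is a minimal sketch in Python. The toy stairs environment, the slip probability, and the tabular Q-learning update are all illustrative assumptions; this is not NVIDIA's or Boston Dynamics' actual training code.

import random

class StairsEnv:
    """Toy simulation: the agent has to climb five steps without slipping."""
    def __init__(self, slip_prob=0.1):
        self.slip_prob = slip_prob
        self.n_steps = 5

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        # action 0 = cautious step, action 1 = fast step (more likely to slip)
        slip = random.random() < self.slip_prob * (2 if action == 1 else 1)
        if slip:
            self.position = 0
            return self.position, -1.0, True      # state, reward, episode over
        self.position += 1
        done = self.position >= self.n_steps
        return self.position, (1.0 if done else 0.0), done

# Tabular Q-learning: run the task in simulation a huge number of times,
# the way the episode describes the robot climbing the stairs 200,000 times.
q = {(s, a): 0.0 for s in range(6) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.95, 0.1
env = StairsEnv()

for episode in range(200_000):
    state, done = env.reset(), False
    while not done:
        if random.random() < epsilon:
            action = random.choice((0, 1))                        # explore
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])     # exploit
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("Preferred move at the bottom step:",
      "cautious" if q[(0, 0)] > q[(0, 1)] else "fast")

The point of the sketch is just the shape of the loop: do the task in simulation an enormous number of times, update the policy from reward, and only then try it on the real robot, where the wet stairs may still send you back to the drawing board.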

And that's kind of what Groot is doing. They'll have reinforcement learning, which happens in Omniverse. And so Omniverse is a hyper-realistic, physics-based model that has many purposes. One being, I spoke about it maybe seven episodes ago, with Lockheed Martin using it to simulate global warming and how global warming would affect the environment.

Another good example would be Dune 2. For Dune 2, they used Omniverse to generate and simulate the grains of sand when the sandworm was moving around. Like, you know how the sand was swooshing around and looked crazy realistic? That was from

Omniverse.

Sidebar, but the Dune movie was great. I really enjoyed it and I can't wait for Dune 3, if there is one. If there's not, I don't know, I can't tell you, because if you haven't seen the movie I don't want to ruin it, but...

It was great. You should definitely see it. Anyways, so Omniverse is this hyper-realistic, physics-based model.

They built a lab on top of it, on top of Omniverse, called Isaac Lab. So Isaac Lab is, I guess, think about it as a robotics playground where robots would be able to do these reinforcement learning activities. So that would entail a robot doing simulations, and the

robot in this case, I mean, it's called an agent because the robot's not really there, but it's easier to talk about it as if the robot is there. So the robot is doing X thing in the simulation. And what's cool about it is the robot would learn a thousand times faster in Isaac Lab than it would in real time.

So the throughput between real time versus the simulated environment is a thousand to one, which is a lot. It's really hard to put it in perspective. But I think that that's huge, right? So they have this Omniverse piece, so that would be reinforcement learning. And then they have LLMs.

So large language models, you know, I've talked about them many times on the show, but a large language model is a model that communicates via text and is trained on lots of text, and they have a lot of parameters, like a billion, and they are able to talk about many topics if it involves text, or I guess audio,

which you can do depending on the situation and the model you're using. But Groot is also gonna be using LLMs. And so think about Omniverse and Isaac Lab as one building block, and then think about LLMs as another building block. And they keep building on these little kind of side projects, and they combine them into this foundational model with all these other

pieces, where one might be your sight, one might be your hearing. And so I don't know what Omniverse would be, but LLMs are easy, where LLMs would be, I guess, your voice, that's not really a sense, but that, or your hearing. Yeah, your voice and your hearing, okay.

I stumbled there. So with LLMs hooked up to the robots, like using the Groot model, robots would be able to communicate back to you, and they would also be able to understand tasks on the fly. And so, in the demo they show, and we'll show it in a second, they ask, hey, can you, can you give me a high five?

And the robot takes the audio, turns the audio into text, then from the text it understands what it needs to do and what inputs it needs to send to the machine. And then the machine puts its hand up, goes for the high five. That's pretty cool. And they also ask, oh, can you show me some moves? And then it gets jiggy with it. And I think it does a dab.
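Here is a rough sketch of that audio-to-action chain in Python. Every name in it (transcribe, plan_action, the HIGH_FIVE skill) is a hypothetical placeholder rather than Groot's real API; it just illustrates audio becoming text, text becoming a task, and the task becoming motor commands.

def transcribe(audio_bytes):
    """Stand-in for a speech-to-text model."""
    return "can you give me a high five?"

def plan_action(command_text):
    """Stand-in for an LLM that maps free-form text to a skill the robot knows."""
    text = command_text.lower()
    if "high five" in text:
        return "HIGH_FIVE"
    if "show me some moves" in text:
        return "DANCE_DAB"
    return "IDLE"

def execute(skill):
    """Stand-in for the low-level controller that drives the joints."""
    motions = {
        "HIGH_FIVE": ["raise_right_arm", "open_hand", "wait_for_contact"],
        "DANCE_DAB": ["tuck_left_arm_to_face", "extend_right_arm"],
        "IDLE": [],
    }
    for motion in motions[skill]:
        print("motor command:", motion)

audio = b"...microphone capture..."
text = transcribe(audio)      # audio becomes text
skill = plan_action(text)     # text becomes a task the robot knows
execute(skill)                # task becomes joint-level motion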

It does a dab. If you're not familiar with what a dab is, it's kind of when you take your opposite arm and you kind of put it towards your face, like towards your nose, and then you point your other arm out parallel. And I'm dabbing if you guys are watching the video.

Dabbing on the internet. But so they have the LLMs, they have Omniverse, which includes Isaac Lab. Then they also have this other thing, which is pretty cool, called MimicGen. MimicGen, I think, was created in 2023, but MimicGen allows a human to control the robot

with VR, and not only control, but teach the robot what to do. And so one, one example might be, okay, so in a simulation you are teaching the robot to cook, and the robot needs to take the cookie sheet or pan or whatever the robot's using, and it

takes the pan out of the oven. And in the simulation, they do that, I don't know, a thousand times. They take the pan out of the oven, put it on the counter. Take the pan out of the oven, put it on the counter.

But in the real world, the oven might be in an odd spot. It might not be on the ground, well, not the ground, but you know what I mean. Like the oven might be one of those wall ovens where it's hidden, or it might be the vertical oven, the double, I think it's called. I don't know what it's called. The double oven. I haven't bought a house yet, so still working on that one. I'll probably learn those things when they become more important, but it's either like,

an oven that's not on the ground, let's just say that. So that could be a hidden oven, where maybe you're at one of those nice houses and the kitchen cabinets and the oven look the same, so you can't really tell, same as the refrigerators. Or the oven's just not in the normal spot it is in the simulation. And so maybe the robot knows what to do once it finds the oven,

but it can't find the oven, because it hasn't encountered anything of the sort in its simulated training. So what you would do is you would hook up your MimicGen and you would train the robot how to do the task a couple of times in that area. And then the robot would understand what to do and it would learn, which is crazy.
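For a rough feel of how a handful of human demonstrations can be reused in a new scene, here is a toy sketch in Python: a teleoperated trajectory is recorded relative to the oven, then replayed when the oven turns up somewhere else. This is only an illustration of the concept, under the assumption that demos can be stored object-relative; it is not MimicGen's actual data-generation pipeline.

def record_demo(oven_pose, gripper_waypoints):
    """Store each waypoint relative to the oven the human demonstrated on."""
    ox, oy = oven_pose
    return [(x - ox, y - oy) for x, y in gripper_waypoints]

def replay_demo(relative_demo, new_oven_pose):
    """Re-target the stored demo onto a new oven position."""
    ox, oy = new_oven_pose
    return [(dx + ox, dy + oy) for dx, dy in relative_demo]

# One human demonstration: approach the oven, grab the pan, set it on the counter.
demo = record_demo(
    oven_pose=(2.0, 0.0),
    gripper_waypoints=[(1.5, 0.0), (2.0, 0.5), (2.0, 1.2), (0.0, 1.2)],
)

# The real kitchen has a wall oven in a different spot; reuse the same demo there.
new_trajectory = replay_demo(demo, new_oven_pose=(4.0, 1.5))
print(new_trajectory)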

And I am going to share my screen. Let's see here. If you're not watching on video, I am going to just lightly narrate. I'm not gonna do the whole video. I'm just going to

kind of show snippets of it, and I'm going to turn off the sound. Okay, so they've got the robot walking around and doing things with its hands. It's got human-like hands. There's many robots, but they have this weird, odd video of thousands of robots walking around this blank, white background with stairs, stairs that go up and down,

and uneven surfaces, and there's robots falling down and stumbling, and it's very chaotic. It's like a city center with robots, but they're falling down and making mistakes.

So in this example, they have this robot getting trained by a human how to pick up cups and pour coffee and make, I think this looks like lemonade. This one's a demonstration of a human teaching a robot how to do a dance move via video. So this was done in Omniverse. So they went and they submitted the video, the robot learned

via Omniverse and then they went back to reality and demonstrated that it understood.

This one's about the high five example I was kind of talking about earlier. So show me some cool moves. And it does the dab. It's pretty sick.

So that's it. So let me stop sharing. Let's see here.

Okay, so what kind of jobs would Groot be able to take on to enable a safer workplace or more efficient work? I think that there's some pieces of manufacturing where you could remove the human element from what is dangerous, and there would still definitely need to be human input and

monitoring of situations and people to maintain the robots. But if you can remove the human...

The human death factor of certain aspects of manufacturing, that would be great. Sorting of trash, like trash cleanup. Like, in some cities, they have people sorting everything from plastics to just...

debris that can be incinerated. And so they obviously don't want to light plastic on fire, so they have people sorting them. Or hazardous debris cleanup. Space travel was something that was really interesting, where the general consensus for space travel is, if you're going to set up a base somewhere and have humans inhabit it, like say the moon or Mars, you would need to

deploy a fabricator, and then this fabricator would need to build out what you would need to start a base. So the fabricator would make the environment for you, it would make the solar panels, the generators, maybe the battery storage, or maybe not the battery storage because it's a little bit more complicated, but it would need to be able to do the majority of the stuff. And then someone's going to have to hook up

the items to the fab, not the fabricator, but put the items together so they would have power and there'd be a little space set up before the humans came there. It would make sense for these little task robots to handle small things, but then obviously you need a robot that could be general purpose and handle many different aspects of setting up the base. And one of the things I think is

the biggest is elderly care. And I think that there might be some skepticism. Okay, you're thinking these robots are probably like 200K, 500,000. The Tesla Optimus robot, according to Tesla, is rumored, not really rumored, but their estimation of the price is gonna be 20 to 25,000, which I think is very reasonable

given that it's cheaper than a car, the average price of a car. And this robot would be able to complete general tasks and help out at home. I don't know if it would necessarily, right out of the box, be able to handle the requirements of elderly care, but I think that would be the long-term goal, like within 10 years.

So they've got all this cool stuff and they had this cool demo, and the same thing with Figure a couple of weeks ago, but these are demos, and they're not going to put out stuff that shines a bad light on the company. So these are kind of internally produced, not externally vetted, presentations. So I would have a little bit of skepticism, but I think that

companies have maybe learned from Lordstown. I don't know. Lordstown, if you're not familiar, they had the truck incident where they demoed, they supposedly demoed, a truck that was going down and driving on this highway. Sorry, I had to sneeze. Going down and driving on the highway. But actually, they just put it on a hill,

and they had it go fast down a hill, and then they videoed it on flat ground, and they're like, coming soon. But really they didn't have a functional product, and people found out about it, and it was a big deal, and it basically bankrupted Lordstown. Anyways, so I hope that companies are being honest with what they're presenting and they are confident that if that

product was opened up to the public that day, the public could get similar results to what they're getting during their demos.

Neuralink. Earlier I talked about the patient Noland Arbaugh. Noland Arbaugh was someone who became paralyzed from the chest down, or the neck down. And he had to move in with his parents because he can't, you know, he can't function by himself anymore,

and they also can't afford to have a stay-at-home nurse. So his parents take care of him and he has limited independence. And so he was recently selected to undergo the Neuralink connectivity surgery. And basically, Neuralink will open up your skull and put a chip in your brain,

and the chip has these receptors, and there's about a thousand of them, and they'll monitor the neural activity in your brain, and then they would...

not they, but the chip, would eventually understand what you're thinking, and they call these neural spikes. And so a neural spike means that the chip understands that the brain is trying to communicate something. And then these communications, or attempts to communicate, are sent via Bluetooth to the device that you're trying to communicate with.

And it sounds kind of complicated, but really it's: you're thinking about stuff, the chip understands that you're trying to send data, and that data is sent via Bluetooth to the device you're trying to communicate with.
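As a very rough mental model of that loop, electrodes to spike detection to intent decoding to Bluetooth, here is a toy sketch in Python. The channel count echoes the "about a thousand" receptors mentioned above, but the threshold and the decoder are made-up illustrations, not Neuralink's firmware.

import random

N_CHANNELS = 1024        # roughly the "thousand receptors" described above
SPIKE_THRESHOLD = 0.8    # made-up detection threshold

def read_channels():
    """Stand-in for sampling the electrode array."""
    return [random.random() for _ in range(N_CHANNELS)]

def detect_spikes(samples):
    """A channel 'spikes' when its sample crosses the threshold."""
    return [i for i, v in enumerate(samples) if v > SPIKE_THRESHOLD]

def decode_cursor_velocity(spiking_channels):
    """Toy decoder: pretend the first half of the array votes 'right'
    and the second half votes 'up'."""
    right = sum(1 for c in spiking_channels if c < N_CHANNELS // 2)
    up = sum(1 for c in spiking_channels if c >= N_CHANNELS // 2)
    return right - up, up - right          # (dx, dy), purely illustrative

def send_over_bluetooth(dx, dy):
    """Stand-in for the radio link to the paired computer."""
    print("BLE packet -> move cursor by", (dx, dy))

for _ in range(3):                         # a few passes through the loop
    spikes = detect_spikes(read_channels())
    dx, dy = decode_cursor_velocity(spikes)
    send_over_bluetooth(dx, dy)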

Insane, crazy, crazy stuff. This is straight out of sci-fi, like controlling computers with your brain. And Nolan has only, he's only had the chip in and functional for a couple months, I think. And Neuralink

demoed his experience at his house, at Nolan's house, and they played some chess and just talked about his experience, how cool it is to be independent, how cool it is to be able to talk with his friends on the internet, how cool it is to be able to play video games whenever he wants and not have to rely on his mom or dad to help him play.

And, you know, he understands that it's nice for them to spend time with him. But at the same time, he doesn't want to have to rely on other people more than he needs to. And so he felt that it was awesome and amazing to have

more independence than what he's been experiencing in the last eight years or so. And one of the things that he likes to do is play chess. I don't know what his Elo is, I don't think he said it, but he was playing and talking at the same time and he was doing okay.

So he wasn't really focusing on the game and was still playing and able to talk, and it was pretty neat to watch. And I'll share the video in a second. And another thing that he did, or likes to do, was play Age of Empires, and Age of Empires is a 4X strategy game. So 4X stands for explore, expand, exploit, exterminate, so basically, like, four dimensions of strategy. And so...

Normally, a strategy game might have, like, you have to manage your economy and war, but in 4X games you have to manage your economy, your war, your citizens, and the world, slash, like, diplomacy with other nations. And so it becomes very complicated, and they're

quite challenging, intriguing simulations of strategy. But the caveat is, with this complexity comes an increased time allotment to complete a game. And so I've played a couple of 4X strategy games. My favorite one that I've interacted with, and I've never beat the game, or I guess not beat the game, but won a match,

was this game called, or is this game called Stellaris? And Stellaris is a 4X space game, and you're a nation, and you start out in the middle of the galaxy, and you kind of build your space empire, and it's crazy complicated, and in one game, you're not even close to finishing, and you could be 40 hours in.

So I just said all that stuff to put it in perspective: one of Nolan's favorite activities was to play

Age of Empires, and basically what he would have to do is have his mom or dad click everything on the computer, you know, he would have to tell them, oh, click here, do this, move this unit there. And so it wasn't a good experience for him, because they don't know the game and he can't move as fast as he wants to move. And he can't play online

with other people in a competitive game, with all of these extra inputs that are required to move and make decisions. Because it's 4X, but it's actually pretty fast-paced. So if you fall behind, you fall behind, and it's very difficult to recover. Well, basically, it was very difficult, basically impossible, for him to play Age of Empires. And this...

technology enabled Nolan to play, and that's what he did. The first night, he played Age of Empires, I think he said, until like 5:30, 6 a.m., and he said he wanted to keep playing, but his Neuralink died, so he had to charge it, and so then he went to bed.

I just think that there are positives to technologies like Neuralink and it's great to see people getting results from all of this research they've been doing for so long. Neuralink's been around for a long time and they haven't made...

much progress. They've been testing on animals, but they were really struggling for years, and rightly so, to get approval to test this on humans. And I think that

this would be something that would change the lives of many people and give people a level of independence that they may have never had or used to have. And it just gives a different perspective and aspect to their lives that they didn't think was possible, which is amazing.

And so I'm sharing this video. If you're watching the video of Nolan, he's playing chess, and you can kind of see his mouse is moving around, and he's moving the mouse with his mind, which is really cool. So he just moved the chess piece, and he's talking about how he goes about

the movement of the mouse. And so he started out and he would attempt to move it, and in his head he would say, move, but he said that doesn't really work that well. You need to really think about the movement and kind of manifest where you want it to go and imagine what you want to do. And then that works. But saying it, saying the command in his brain, it didn't compute the way that it's supposed to, or how

he wanted it to. And so he said, basically, manifesting the ideal result of what he wants, that would work with the computer, versus him saying the command, which is pretty interesting. And it took me a couple of takes to be able to explain that, because it's complicated to think about, because I don't know what it feels like.

And then Nolan's just kind of explaining that, you know, he could play online and he can have this like new level of independence and have a good time. And he's smiling, laughing and just overall just thrilled to be a part of Neuralink and is extremely thankful that he was selected and just overjoyed.

That being said, there are some ethical concerns with Neuralink, and the results of their tests on animals were, let's say, less than desirable. Quite a few of the test subjects died and some of the animals were just...

no longer what they used to be. I mean, there are definitely some consequences to trying to innovate quickly. And people have been bringing that up, like they're very upset about it, and rightly so. That being said, long-term, Neuralink envisions

being able to not only connect people to computers, they want to connect people to robots, the Tesla Optimus robot. They want people with Neuralink to be able to control the Tesla Optimus robot. They also envision Neuralink being able to fix people that, I wouldn't say fix people, but be able to allow individuals to...

walk again, or walk. And the process is a little fuzzy in my head. But basically, they have the Neuralink that's in your brain, and the

pathways to certain body parts aren't functioning the way that they normally should. And the way around that is to put a device in your spine. And this would basically make this brain-computer interface. And this would allow the user to send commands with their brain: the Neuralink chip

decodes the brain signals into intended movements, and then it sends the message down to the microelectrodes that are connected to the spinal cord, with what they're calling a digital bridge, to stimulate movement. And then the digital bridge

sends the movement signals to the limbs that are damaged, and it kind of bypasses the original nervous system. So basically what Neuralink wants to do is create an artificial nervous system using your brain as the key compute,

like the computer, the CPU, the CPU making all the commands, and then the chip decodes the commands, and then the codes are sent to...

The codes, the commands slash codes, are sent to the spine, and the spine has the digital bridge, and the digital bridge sends commands to the damaged limbs. Other than that, I don't know how they connect the digital bridge to the limbs. I'm not sure how that all works, but that's what I could gather, which is pretty nuts and insane, and something that would be amazing for so many people

and overall very exciting.
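To summarize the signal path being described, brain to chip to digital bridge to limb, here is a plain data-flow sketch in Python. Every function name and step in it is an assumption based only on the episode's description, not Neuralink's published design.

def decode_intent(neural_spikes):
    """Brain implant: turn raw spike activity into an intended movement."""
    return "lift_left_leg" if "motor_cortex_burst" in neural_spikes else "rest"

def digital_bridge(intent):
    """Spinal device: translate intent into stimulation patterns for the limb."""
    patterns = {
        "lift_left_leg": ["stimulate_hip_flexor", "stimulate_quadriceps"],
        "rest": [],
    }
    return patterns[intent]

def stimulate(patterns):
    """Microelectrodes on the spinal cord fire, bypassing the damaged pathway."""
    for p in patterns:
        print("stimulation pulse:", p)

# One pass through the bypass: brain -> chip -> bridge -> limb.
stimulate(digital_bridge(decode_intent(["motor_cortex_burst"])))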

I think the last thing is, tech is very close to just outpacing society. And I think that we haven't had a time like this, and I also understand, or envision, that this current present day is the kind of inflection point for AI and technology advancements that we've never seen before,

and the pace that we're improving at is just incredible. That being said, we are moving rapidly, and I don't think that society has correctly, or kind of, put in place the right guardrails for

these things, where we're just moving at this incredible rate.

And things, you know, bad things can happen, and we're not, we're not necessarily, how do I put this? When electricity came about, right, people had a job where every night they would go and they would light the lamps for that night. So they had street lights, but really the street lights were someone going by and lighting each light, so it wouldn't be dark on the street.

And then I think in the morning they would go around and put the lights out. And electricity obviously replaced their job, but electricity didn't just turn on everywhere all at once. It was a slower process where they'd maybe do a section of the city at a time, and then the workers could re-educate and they could find another job, maybe learn how to

work on the electricity systems or build the light poles that did the wiring. But with this AI stuff, it's kind of the snap of your finger and it's just there. There is no slow transition to where you've got 10 years or you've got five or three. It's kind of like once AI is ready to go and it's general and

they've built these foundational models that I'm talking about, it's kind of just, it is what it is. And the robots, you know, I mentioned earlier that they could use them for elderly care and all these great things, but also at the same time, they're pretty inexpensive in the grand scheme of things, like 20K, like you could buy five robots and potentially replace five people.

I don't necessarily think that they would be doing, like, a technical job. Like, they're not going to be coding at a desk or something like that, but they're definitely going to take over some things for sure. And it's not going to be what I think of as a slow transition. I think once the robots are ready, large companies will book big orders, and then once they get to mass-producing them,

It is what it is. So I am intrigued, but a little unsettled and I am really excited about the opportunity to adapt along with the robots, right? So we're not, it's not about replacing us. It's about how we adapt alongside them and

reap the benefits of our relationship. I don't have any new comments this week. Maybe you're saving your comments for next week, but I am excited to keep moving forward. Next week we'll be discussing a book, a book review. I've got Meditations by Marcus Aurelius that we'll be discussing, and I'll share my thoughts,

and hopefully I'll get to entrepreneurship stuff soon. I was sick for...

three months. So I was having a tough time even doing these podcasts, slash getting my work done at work, and having to work late, and some of it was at a busy time this year. And then I had to do these podcasts, and sometimes I was just in such bad shape, I was having a difficult time mentally getting through these podcast episodes, and it was difficult to talk without coughing constantly,

and it was difficult to express myself because I was in constant pain. So I am happy to say I have been sick-free for two weeks, which is huge because I've been sick since January, which is something that's not normal for me. Typically I don't get sick at all. But that being said, I hope that I'll either do the book review or I'll talk about my vision in a little bit more detail,

because in the intro I keep talking about discussing entrepreneurship, industry trends, and the occasional book review, and it seems like all I'm doing is industry news slash trends and book reviews. So I would like to get some entrepreneurial information in there and kind of talk about what I've been working on, but we will see how this week pans out. All right. See you everyone. Appreciate your time,

and I'll talk to you next week, bye.