What if we could? A podcast exploring this question that drives us.

Summary

On his birthday, David DeVore discusses his interest in day trading crypto and finding ways to 10x his productivity as an engineer. He introduces the idea of using AI tools, specifically AI-powered code writing, to speed up workflow. The conversation then focuses on Cursor, an AI-powered code editor built on VS Code, and its ability to provide context-aware code suggestions and generate code snippets. The discussion highlights the time-saving benefits of Cursor and its integration with other tools like Replit. The conversation also touches on the challenges of keeping AI models up to date with the latest technologies and the importance of human expertise in problem-solving.

Keywords

day trading crypto, 10x productivity, AI tools, AI-powered code writing, Cursor, VS Code-based editor, context-aware code suggestions, code generation, time-saving benefits, integration with Replit, keeping AI models up to date, human expertise

Takeaways
  • Consider day trading crypto as a new venture
  • Explore AI tools to 10x productivity as an engineer
  • Cursor is an AI-powered editor built on VS Code that provides context-aware code suggestions and code generation
  • Cursor saves time by eliminating the need to switch between the IDE and a separate AI chat
  • Cursor's Composer feature can generate code for new features and UI components
  • Keeping AI models up to date with the latest technologies is a challenge
  • Human expertise is still crucial in problem-solving and articulating complex prompts
Titles
  • 10x Productivity with AI Tools
  • Exploring New Ventures: Day Trading Crypto
Sound Bites
  • "What am I going to do this year that I've not done in previous years?"
  • "What is the interface between an engineer and AI to really speed up workflow?"
  • "Let's unpack Cursor a little bit."
Chapters

00:00
Introduction and Birthday Reflections
02:07
Unpacking Cursor
04:26
Cursor: Always Having Context Available
06:21
Cursor's Composer Feature
08:54
Preloading Context and Documentation
10:43
Challenges of Keeping AI Models Up to Date
15:00
Time Savings and Net New Projects
16:49
The Challenge of Crafting the Right Question

What is What if we Could?

"What if we Could?" A podcast exploring this question that drive us. We explore the practical application of artificial intelligence, product design, blockchain & AR/VR, and tech alpha in the service of humans.

David DeVore (00:01.354)
Yo, yo, yo, what it is. We're Pixel and Protocol and it is August 26th, 2024. It's my birthday, so I'm stoked. Part of what I'm trying to do is be thoughtful on my birthday and think about, hey, what am I going to do this year that I've not done in previous years? And so I'm thinking

about day trading crypto, right? Sort of looking at futures trading, which I know is Wild Wild West, but thinking a little bit about learning how to better day trade crypto. But the other thing that I'm thinking a lot about, and I think a lot of engineers are also thinking about this, is like, hey, how do we start to 10x ourselves? What's the stuff that takes a lot of time? What's the stuff

that, you know, where I can make my life easier and cut corners and whatnot. And so we've been really working on this concept and rubbing the edges using AI tools, ChatGPT, et cetera, well, even before ChatGPT, even before OpenAI, right? But specifically really keeping our eye on the tools and

automations and capabilities, and tinkering with the APIs and building assorted applications and so forth. And one of the things that we're really interested in is that AI is tremendously good at writing code, right? So what is the interface between an engineer and AI to really speed up workflow? That's what we're going to unpack today.

There's two tools on the market that we've been working and playing around with. One is Zed (zed.dev) and the other is Cursor. So Bob, I'm going to just hand it over to you and let's unpack Cursor a little bit.

@Bitcoinski (02:07.692)
Yeah, for sure. Let me share my screen.

@Bitcoinski (02:19.648)
Okay. So I mean, one of the beauties of Cursor, and most of these tools in general, is that they're essentially built on top of VS Code in most cases. And that usually feels very familiar for devs; it's the market leader in IDEs. Certainly there's more. But Cursor came along and said, let's extend what people know with the features they crave. And

so what are those features? I think he kind of nailed it out of the gate: AI is good at writing code. You think GPT and so on, but usually in a vacuum, right? A single question, an answer back, and forth, for sure. And it's going to teach you a lot. It's great out of the box for, like, teach me how to code, teach me how to write JavaScript or whatever. Where the challenge was, over the last, call it, year,

was sort of swivel-chairing between your IDE and GPT to use it for troubleshooting or scaffolding or whatever, but the gap was that GPT didn't really know the current state of your code base, right? All these sorts of tips and tricks started to flourish.

So we saw plugins for downloading an entire GitHub repo as a markdown file that you could upload to GPT. And that sort of started some of the conversation around this. Cursor clearly saw it differently, and I think their big value prop is this notion of sitting on top of the active code base, because it's in your IDE. So it does a better job, because it understands the context of every file in your repo as you ask these questions.

It helps 10x in the same way I think AI is purpose-built to 10x engineers, which is time savings, right? So it removed that requirement on the dev of, let me go find all the context and put it in a doc so I can upload it to GPT, ask this one question, take the answer, try it out. That didn't work.
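As an aside, the pre-Cursor workaround described above (export the repo, upload it as chat context) can be pictured with a minimal sketch like the one below, written as a Node/TypeScript script; the file extensions, skip list, and output name are illustrative assumptions, not how any particular plugin works.

```typescript
// repo-to-markdown.ts (hypothetical sketch, not a specific plugin)
// Walks a repo and concatenates source files into one markdown file that
// could be uploaded or pasted into a chat as context.
import { readdirSync, readFileSync, statSync, writeFileSync } from "node:fs";
import { extname, join, relative } from "node:path";

const INCLUDE = new Set([".ts", ".tsx", ".rs", ".sw", ".sol", ".md"]); // adjust per project
const SKIP_DIRS = new Set(["node_modules", ".git", "dist", "build"]);
const FENCE = "`".repeat(3); // code fence around each file's contents

function collect(dir: string, root: string, chunks: string[]): void {
  for (const name of readdirSync(dir)) {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) {
      if (!SKIP_DIRS.has(name)) collect(full, root, chunks);
    } else if (INCLUDE.has(extname(name))) {
      const body = readFileSync(full, "utf8");
      chunks.push(`## ${relative(root, full)}\n\n${FENCE}\n${body}\n${FENCE}\n`);
    }
  }
}

const root = process.argv[2] ?? ".";
const chunks: string[] = [];
collect(root, root, chunks);
writeFileSync("repo-context.md", chunks.join("\n"));
console.log(`Wrote ${chunks.length} files into repo-context.md`);
```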

@Bitcoinski (04:26.766)
Well, now I've got to give you all the new context. I may have changed some stuff as I tested that idea out from GPT, and now I've just sunk another 15 minutes into getting the context ready to ask my next question. So for me, that's one of Cursor's big, big unlocks: it's just always having that context available. And they really have it on steroids. So as you're building stuff, this is our Props Fuel repo. I just had it up randomly.

But as you're building stuff, you can ask questions about what you have on screen: highlight something, add it to your chat with Cmd+L on a Mac and say, what does this do? And that's pretty straightforward, because you've kind of given it the context just like you would with GPT. It'll explain it to me as I've asked it to. If I wanted to ELI5 it, cool, whatever. It's an LLM, so it can do all those things, answer in Pig Latin, whatever, make it rhyme.

David DeVore (05:24.297)
Hmm.

@Bitcoinski (05:26.211)
Where it gets really interesting, though, is maybe we ask a different question. Okay, what other Sway programs use this function in this folder, right? So there are a lot of dev helpers to pass it more robust context, like, yeah, I can spider through your

repo. You may have a nice folder structure or something; in this case, our addition contract for Fuel Network. Let's limit the scope of context to just that: I can just @-mention that folder, which is really, really nice. And of course, the back and forth here is, let's call it, the tip of the iceberg for sure, and it does most of the things you're going to want to do regularly. The other thing that's really cool is their feature called Composer. Composer is a little bit different.

It can take on much harder challenges, and it'll start to write multiple files at once. So this is a great tool for new feature creation. To bring that up, you can do a Cmd+I. It comes down here kind of low on the screen and you can use it there. A little pro tip: Shift+Cmd+I brings it up in full screen. So now I could say...

Let's say, write me a Fuel contract in Rust. I don't know that that is a boilerplate. I don't know, a VBOT. See what it does here.

@Bitcoinski (07:08.524)
And so you see it's not just answering my one question; it's starting to do the work of a developer. Really awesome for UI components too. Like, write me a UI component that does XYZ. We know that UI components can be a single file, but depending on how you architect your code base, maybe you already have a pattern in place, right? So match my pattern of wrapper parent,

David DeVore (07:14.323)
Incredible.

@Bitcoinski (07:32.808)
you know, sub child component, and so on down the tree. And it's just really, really excellent at doing that. You can go back and forth here, and when you accept it, right, then it'll create that file. And now, if I go in here, we should see a VBOT folder somewhere. I lost it, but you get the gist. We've seen a couple of really cool dev demos on Twitter and elsewhere where

folks are starting to tie Cursor and Composer to something like Replit for deployment, right? Just really simple, sandbox-based deployers integrating in, like, zero dev. So: here's my code base, create me an interface for it. Then just drag and drop those outputs back into their repo and sort of use Cursor, and that'll allow them to work out bugs and things, which are usually ours.

So yeah, that's it. I mean, again, in a nutshell, it's about time savings, I think. And part of that calculus, for devs, is how much time do you spend figuring out a really hard problem? It's not just the syntax. It's the mental scaffolding of a really big problem and a big solution to solve that problem. It takes hours or even days to do that, and you get down to minutes now, at least to a starting point.
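To make the "wrapper parent, sub child component" pattern mentioned above concrete, here is a minimal sketch assuming a React + TypeScript codebase; the component names and props are hypothetical, not actual Props Fuel code.

```tsx
// Hypothetical "wrapper parent -> child" pattern a Composer prompt might be
// asked to match. Names (CardList, CardListItem) are illustrative only.
import React from "react";

export interface CardItem {
  id: string;
  title: string;
  description?: string;
}

// Child component: renders a single item, knows nothing about the collection.
export function CardListItem({ item }: { item: CardItem }) {
  return (
    <li className="card-list__item">
      <h3>{item.title}</h3>
      {item.description && <p>{item.description}</p>}
    </li>
  );
}

// Wrapper parent: owns layout and iteration, delegates rendering to the child.
export function CardList({ items }: { items: CardItem[] }) {
  return (
    <ul className="card-list">
      {items.map((item) => (
        <CardListItem key={item.id} item={item} />
      ))}
    </ul>
  );
}
```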

David DeVore (08:54.738)
Do you have to preload it with context around different languages? Or, like in the case of what you're doing with Fuel, are you preloading it with Fuel documentation? How is it aware of the SDK or whatever other information it needs to know to activate correctly?

@Bitcoinski (09:22.318)
Yeah, it's the baseline LLM models. So you can actually choose Cursor Pro; they sort of have their own access to GPT-4o, I believe, as part of your Pro plan. But you can also just use your OpenAI API key and use your own models. The really killer combo here is using it with Claude 3.5 Sonnet, which is really a

very good LLM when it comes to programming. It's versed in essentially every language you can think of, and in Fuel's case it's Rust-based. But what it doesn't know is how Fuel works. Fuel's not on mainnet yet; it's an emerging network that hasn't really launched. You know, eventually you wait for the next GPT-4o update and it may have spidered and brought that information into the model, but you can't really bank on that for something that's net new.

So yeah, that's the cool part: if you wanted to, you know, scrape or extract documentation and then bring it in as files to your repo, absolutely. It just looks at that as another file of context. That said, there are a lot of great tools and there have been for years, but certainly now, with a lot of sites blocking bots to prevent AI scraping, there are some really great local repos you can run to do

David DeVore (10:28.424)
Interesting.

@Bitcoinski (10:43.724)
really great documentation scraping to generate those files for you, so you can give it that baseline context. And that's massive.
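A rough sketch of that documentation-scraping idea, assuming a Node 18+ TypeScript script and a hand-picked list of doc URLs; the URLs and output folder are placeholders, the tag-stripping is deliberately naive, and you should only scrape docs you're permitted to.

```typescript
// docs-to-context.ts (hypothetical sketch)
// Fetches a few documentation pages and saves them into the repo so an
// editor like Cursor can pick them up as plain-file context.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Placeholder URLs; swap in the docs you actually need.
const PAGES: Record<string, string> = {
  "getting-started": "https://example.com/docs/getting-started",
  "contracts": "https://example.com/docs/contracts",
};

async function main(): Promise<void> {
  const outDir = "docs-context";
  mkdirSync(outDir, { recursive: true });

  for (const [name, url] of Object.entries(PAGES)) {
    const res = await fetch(url); // built-in fetch, Node 18+
    if (!res.ok) {
      console.warn(`Skipping ${url}: HTTP ${res.status}`);
      continue;
    }
    // Naive cleanup: strip tags so the model sees mostly prose and code.
    const text = (await res.text()).replace(/<[^>]+>/g, " ");
    writeFileSync(join(outDir, `${name}.txt`), text);
    console.log(`Saved ${url} -> ${name}.txt`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```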

David DeVore (10:51.112)
Huge, yeah. Because, I mean, if it knows the SDK or the documentation, then we can quickly run integrations and builds on all kinds of stuff, right? In addition to the base knowledge that ChatGPT or Sonnet has, right, in terms of language.

Do you find things are pretty up to date? Especially in terms of packages, right? I'm thinking, like, we're right now doing an update, trying to get MySQL up to whatever it is, 8.0. And I'm finding that sometimes ChatGPT, or even Copilot with GitHub, was not really up to speed on,

you know, what the latest version of something like MySQL is, right? So how do you overcome that? Do you send it after, like, the MySQL documentation, or?

@Bitcoinski (12:00.546)
You can. I mean, another good example that comes to mind is ethers.js, right? Another challenge for it is that it knows both ethers v5, which is the predominantly used version of ethers, and ethers v6. It knows both of these, but it often gets confused between them, right? Think about a neural network, like a graph; depending on how they break that out, it's probably

David DeVore (12:06.484)
Hmm.

@Bitcoinski (12:29.312)
really well suited for a vector database in this case, because there are a lot of breaking changes between v5 and v6. Where libraries are backwards compatible, it usually doesn't matter as much, because those functions still exist with the same arguments; they may have just updated the logic in a new version, right? But between v5 and v6, ethers has total breaking changes, like massive ones all over the place, which is part of the reason why folks

David DeVore (12:31.924)
Mm

David DeVore (12:38.41)
Mm.

@Bitcoinski (12:58.978)
sort of prefer v5: they already built all their stuff in v5, and there's a bit of a dev mutiny on v6. We use v6. We just thought, hey, let's go after the thing that's going to be maintained. But that's just another good example. So usually when it struggles with that, I'll tell it exactly, like, please only use v6.

In Cursor, you can set, in your settings, essentially additional context to go along with every call you make. And so I actually hard-coded that in there: never give me answers in ethers v5, always use v6. It still gets confused a lot.
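For reference, a few of the ethers v5-to-v6 breaking changes being discussed, shown in a small TypeScript snippet; this is an illustrative sample, not a full migration guide.

```typescript
import { ethers } from "ethers"; // v6 shown; v5 equivalents in comments

// Providers: v5 nested them under ethers.providers, v6 flattens the namespace.
// v5: const provider = new ethers.providers.JsonRpcProvider(rpcUrl);
const provider = new ethers.JsonRpcProvider("https://rpc.example.com");

// Utilities: v5 kept helpers under ethers.utils, v6 exposes them at the top level.
// v5: const wei = ethers.utils.parseEther("1.0"); // returns a BigNumber
const wei = ethers.parseEther("1.0"); // returns a native bigint in v6

// Big numbers: v5 used the BigNumber class, v6 uses native bigint.
// v5: const doubled = wei.mul(2);
const doubled = wei * 2n;

console.log(ethers.formatEther(doubled)); // "2.0"
```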

David DeVore (13:36.286)
Yeah. Yeah, it's interesting. I mean, Calvin and I were talking about documentation, because I think Calvin started by building a GPT for Fuel, right? And it's still out there someplace, and you can ask it. He went through all the documentation and consumed it into the GPT, so that way you could ask questions around the Fuel documentation.

Not sure that it's up to date; this was probably a month ago when he first built it. But yeah, it makes perfect sense, and it's really, really interesting. It's a great start, but it's like, okay, I've got a GPT, I can ask it questions, but it's not really in line, at the execution level, with the, you know,

user, or in this case engineer, computer interface, right? So I'm really excited to see stuff like Cursor start to come around, because this is where it's really going, especially for engineers like yourself and Calvin, who are not just full stack but also architects. You can go from, you know, A to Z

really, really rapidly, right? Because you can just get down and go for it, right?

@Bitcoinski (15:00.568)
For sure. Yeah, I guess another thing I learned here too, even last week on Friday: I've been using Cursor full time for like two or three weeks now, just totally got rid of VS Code at this point. And of course, over the last year I've been starting to get really anxious. Like, am I going to have a skill that yields value to provide for my family in like two years, or am I going to be totally obsolete because of this? And of course, like,

it's not the idea, it's execution, and that whole thing, right? But still, it's scary, right? If you can prompt anything into existence, of course that role could get commoditized. But, you know, it turns out Cursor and an LLM are really excellent, even now, at starting net new, right? It's like, help me build this thing that's new.

David DeVore (15:52.83)
Hmm.

@Bitcoinski (15:54.732)
But a challenge I ran into last week was like, help me figure out this thing in my brain that's really hard, and it isn't a prompt. I actually spent two hours on Friday just crafting a question to ask it, because it wasn't a single-sentence question. What I was trying to do for our Props Diamond SDK was introduce this, like, facet catalog manager.

David DeVore (16:18.59)
Mm-hmm.

@Bitcoinski (16:21.954)
But that needs to, and the code base is extremely well typed, right? Every single thing has a type, and a type has a type, and a type has a type, all the way down this cascading tree, just for efficiency. And it was a very hard problem to articulate. Even though I understood it, it was my problem, I could not articulate it well enough to figure it out. And so I think part of that put me at ease a bit.

You think about, you know, there are still COBOL engineers that make great money maintaining old mainframes and stuff. AI can help on that side, but it's taking 40 years for that role to go away. I think if you have some expertise, you're going to be okay for a while.
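Purely as an illustration of the "a type has a type has a type" point, a hypothetical TypeScript sketch of what a deeply typed facet catalog could look like; these names are invented and are not the actual Props Diamond SDK.

```typescript
// Illustrative only: NOT the actual Props Diamond SDK. A rough idea of the
// kind of cascading types a "facet catalog" might involve.
type Address = `0x${string}`;
type Selector = `0x${string}`; // 4-byte function selector

interface FacetFunction {
  name: string;
  selector: Selector;
}

interface Facet {
  name: string;
  address: Address;
  functions: readonly FacetFunction[];
}

interface FacetCatalog {
  version: `${number}.${number}.${number}`;
  facets: Readonly<Record<string, Facet>>;
}

// Example entry, with types cascading down the tree.
const catalog: FacetCatalog = {
  version: "1.0.0",
  facets: {
    ownership: {
      name: "OwnershipFacet",
      address: "0x0000000000000000000000000000000000000000",
      functions: [{ name: "owner", selector: "0x8da5cb5b" }],
    },
  },
};
```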

David DeVore (17:05.61)
100%, because it's still, you know, it's still...

David DeVore (17:13.718)
It still takes a tremendous amount of, you know... if you're not just building a website, and you want something that is useful in the world that hasn't already been built, it takes the human in the loop to really understand the problem well enough to articulate the prompt, or series of prompts, that

bring solutions, right? So yeah, I'm super stoked. I really want to spend more time on this topic, continue to unpack it, and do some additional tutorials and whatnot, just for some fun tips and tricks and things. We can do that a future day. This has been awesome. Thank you for unpacking your journey around it so far. And I'm excited to jump into it as well here this week.

So yeah, let's go. Awesome. All right. We will talk to you soon.