Learn from inspiring developers about how they found meaningful and fulfilling work that also pays them well. On The Scrimba Podcast, you'll hear motivational advice and job-hunting strategies from developers who've been exactly where you are now. We talk to developers about their challenges, learnings, and switching industries in the hopes of inspiring YOU. This is the podcast that provides the inspiration, tools, and roadmaps to move from where you are to work that matters to you and uniquely fits your strengths and talents.
Alex Booker (00:00):
How would you define an AI engineer?
Per Borgen (00:03):
It is a developer, first and foremost. I think their most important skill is building products and features. Stay on top of it. Be a knowledge worker that can work alongside AI and not be replaced by it, because human plus AI is still a lot better than just AI. Scrimba.
Alex Booker (00:23):
That was Per Borgen, developer, teacher, and the co-founder of Scrimba. I wanted to talk with Per because Scrimba's about to launch their second-ever learning path, the AI Engineer Path. When I first heard AI engineer, I thought it might refer to the data scientist, machine learning mathematical type that trains and builds AI models. But as you will learn from Per today, an AI engineer is an emerging type of front-end developer who uses those AI models and their APIs to build features and products that are in high demand.
(00:58):
Scrimba's creating this path even before a backend one, even before a [inaudible 00:01:02] one, even before trending technology X, Y, or Z because the opportunity for front-end devs to become one of the few equipped to build next generation apps with AI is enormous. This is so important. In fact, I'm interrupting our regular schedule to bring you a rapid response series where I'm speaking with Scrimba teachers about the rise of the AI engineer. In this first episode, I'm joined by Per who's going to teach you what an AI engineer does, what tools they use, and explain this fundamental shift in more detail. Per, welcome to the show.
Per Borgen (01:37):
Thank you. It's really cool to be here yet again after, I think, two years since I was here last time. I've been listening to the success stories, and they truly make my day when I hear the stories of doctors becoming engineers through Scrimba or lawyers or teachers and whatnot, so awesome to be here.
Alex Booker (01:57):
Oh my god, it's incredible and such a privilege as well. I know the podcast has had some success in the sense that it gets a few listeners and people share it. But for someone to come on and share a story about how they managed to find work that gives them purpose or achieve a remote job that lets them spend time with their family or, frankly, just achieve a level of financial stability they couldn't before thanks to tech and the Scrimba Frontend Developer Career Path, it really truly is a privilege to get to hear those and bring those stories to people every week. You were on the show at the very beginning, probably over a hundred episodes ago.
Per Borgen (02:34):
Wow.
Alex Booker (02:34):
Since then, a lot has changed, not only at Scrimba but in tech in general. Back then, Scrimba had one offering. It was the Frontend Developer Career Path. It is frankly the best curriculum to take you from being a new or inexperienced developer, potentially with no experience at all, to being able to code at a higher level. I love that the podcast is kind of proof of that. People sometimes ask, "Does it work? If I stick at it, will I be successful?" I'm like, "Yes," because we literally interview people after they've done it on the show. But that was probably two years ago at this point, and there are a lot of irons in the fire at Scrimba. What's top of mind these days for you?
Per Borgen (03:11):
Right now, we're prepping the launch of our second path after the Frontend Developer Career Path. This time it's the AI Engineer Path, which is due to launch on Wednesday, so gathering together all the loose ends and prepping the marketing. I'm super stoked about getting that out into the world and trying to create the best curriculum out there for aspiring AI engineers.
Alex Booker (03:37):
We've gone from the front-end path to the AI path, but what happened to the backend path, Per?
Per Borgen (03:43):
We really should have had the Backend Path by this time or a Full-Stack path. I can promise you, it is coming, but it's hard to predict how things play out. It just so happened to be that the AI Engineer Path was more important to get out and also easier to get out, technically.
Alex Booker (04:01):
Why is it then that the AI path is so important right now, in particular in 2023?
Per Borgen (04:07):
This path has been created, of course, on the back of the rise of generative AI. Over the last few years, the generative AI models and APIs, most prominently GPT, have created a whole new category of companies and startups and products. You can build whole new user interfaces. There are just so many opportunities out there. There's no doubt at this point that all industries and companies will have to adopt AI in some way or another. That means there is loads of work to be done by developers, who are the ones who will implement this AI. We saw this opportunity. Our job is to make our users succeed in the tech industry as much as they can, and right now, the one clear thing to do was to teach them how to use AI in applications.
Alex Booker (04:58):
There's a really great parallel here between what Scrimba is enabling and something that we've learned through podcast interviews and community events and videos you've made in the past. It's just this idea that when you're new to tech, you sometimes feel like you're so far behind. But actually tech is moving so fast. Even the most experienced OpenAI developer in the world only has one year, two years of experience at most. So there's a great opportunity to start cultivating these skills which will make you very productive in general.
(05:25):
I think what you're describing, Per, is like a supply and demand, where the demand for people who understand how to utilize AI technologies, not necessarily machine learning-type people or data scientist-type people... By the way, we're going to be very clear in this interview that that's not the kind of archetype we're describing. We're talking about a developer, for example, a front-end developer who leverages AI tools that sits on top of all that data science goodness to actually bring the value to end users, and ultimately, this is what companies value. The demand is going to become greater. It's already greater than the supply. That creates a scarcity that makes, I think, an AI engineer a very valuable resource.
Per Borgen (06:07):
Exactly. It's moving so fast, so there are no experts in AI engineering, or at least not people with many years' experience. That is something aspiring developers and juniors can play to their advantage. If you just walk that extra mile and learn these APIs and stay updated with the latest models, trends, and best practices, you're suddenly stronger in this field than a senior engineer. That is a door opener.
Alex Booker (06:33):
We were talking about this the other week, weren't we? You mentioned the comparison with the Apple App Store.
Per Borgen (06:38):
Yes. I think we're in kind of the same exciting phase right now where there are just so many products that need to be built with AI, just as there were so many apps that had to be built when the App Store launched. You could launch a flashlight app and make thousands of dollars a month or tens of thousands of dollars.
Alex Booker (06:57):
Or a calculator.
Per Borgen (06:58):
Yes, it was so easy to build new products and monetize them. I'm seeing that all over the place now, both in the indie hackers community and also being part of Y Combinator: a lot of companies coming in there with tons of traction with all these things that pessimistic people just call GPT wrappers. But actually, in many cases, the context is the product. So it's not just a wrapper. You're providing the AI to the user when they need it and maybe with the context where they need it. I would say there are true opportunities out there.
Alex Booker (07:32):
We're going to get all into it. For anyone listening, this is going to be a fantastic jumping-off point. If you've heard rumblings about OpenAI, APIs, or you're seeing companies build AI features into their products, this isn't a tutorial obviously, but it's going to be your roadmap so you can understand exactly what's out there. Also, we want you to feel excited about the opportunity. I have to be honest, Per. I think in the scheme of things, I'm a bit of a laggard, meaning that I'm not an early adopter. Sometimes I'm a little bit like, "Ah, I'm going to see if the dust settles before I get really into this kind of thing." Web 3.0, still don't know what I think about that. And NFTs, I know they're a scam.
Per Borgen (08:13):
For sure.
Alex Booker (08:15):
Why is the AI engineer different? What has ChatGPT's success looked like in recent years? Where are investors putting their money, and what kind of numbers are we seeing that might convince someone listening that this is something worthy of their attention?
Per Borgen (08:30):
With NFTs and a lot of crypto, there's still the never-ending debate of what the one true, best use case is, and what the value is. I don't think you have to ask that question with AI. Just go to whatever café or library in the world now and peek over people's shoulders at what they're doing on their computers; you'll see people using ChatGPT all the time, probably even more at schools, which is a whole other story, maybe not the most positive one. But the numbers speak for themselves. People's usage speaks for itself. Compare that with crypto, which has been tanking in usage. ChatGPT, when it launched, was the fastest product ever to hit 100 million users. It took like two months. I think they beat TikTok, who had the record before that.
Alex Booker (09:15):
I wanted to put something in perspective there, because I forget the precise numbers, but you're right. I think TikTok was previously the fastest to 100 million. It was maybe nine months or a year and two months, something in that ballpark. Previously, it was the Instagrams and the Snapchats, and it took them years to reach 100 million users. So, yes, there are more people online and the internet's more accessible and stuff, but that was just as true, I think, with regards to Snapchat. Yet, there's been this proliferation of people wanting to sign up and use ChatGPT. It's actually kind of insane.
Per Borgen (09:46):
It is. There were a couple of times we were talking where I was a little bit tempted, like, "Oh, should Scrimba have its own coin, or should we mint some NFTs out of some scrims and stuff like that?" I'm so glad we didn't jump on that, and I'm so glad we waited and saved our energy to jump fully on the AI hype train, because this time I think the hype is real.
Alex Booker (10:08):
Absolutely. Well, there you have it folks. There's our introduction. We're going to get all into this in the next 30 minutes or so.
Jan Arsenovic (10:14):
But first, let's take a quick look at your social media posts about the show. On LinkedIn, Miks Silis wrote, "Tonight, I listened to the latest Scrimba podcast featuring Matt and Eric from Self-Taught Devs. Fantastic episode. The phrase from the title, productivity anxiety, feeling like you're not doing enough, is something that'll stick with me. I felt it in my music career and in my career change, if I just spend one more hour doing this or I can't go have fun because I'll be missing out on my dedicated learning time." The struggle is real. If it makes you feel better, I feel it all the time. It was a great episode. If you haven't heard it, find it wherever you listen to podcasts.
(10:54):
On Twitter, Daniel @mrgiles1 is starting #100DaysOfCode. "Day one, working on a freelance project, the website for a Montessori-inspired Spanish language immersion school for young learners. Also finishing up freeCodeCamp's New Responsive Web Dev Certificate, and listening to CodeNewbie and Scrimba podcasts to inspire me." That sounds like a full day. Great job and keep going.
(11:22):
If you would like to get a shout out on the show, just post about it on social media. If you're feeling super supportive, you can also leave us a rating or a review in your podcast app of choice. Yes, we also read those on the show. But for now, let's go back to Alex and Per.
Alex Booker (11:40):
What changed in the last few years or even the last few months, arguably, with ChatGPT? ChatGPT has been around for about a year now. But before then, we still knew of AI. We knew that it was affecting the technology that we use, for example, natural language type stuff with Siri or Okay Google. We also know about autonomous vehicles. I think Google released the first self-driving car experiment in 2009. Even in years gone past, we've seen Teslas with their autonomous capabilities hit actual roads. There are also things like image identification, where Microsoft and IBM, for a while now, probably seven to 10 years, something like that, have had APIs where you could input an image and it would classify it in some way: "Is this person smiling? Is this image offensive?" I don't fully tune into this kind of stuff, but I have heard rumblings over the years about things like DeepMind playing the game Go and defeating world champions using AI. What I'm saying is that that stuff has been around for a little bit, but then this all feels very new and exciting somehow. What would you say is the main difference?
Per Borgen (12:45):
AI is an old field, or everything is relative, but it's been around since the 1950s. The way I look at it is that the big shift happened when we went from what you could call discriminative AI to generative AI. With discriminative AI, what you would do would typically be classification tasks. As you said, "Is this person smiling? Would Alex like this song or not?", which is a prediction more than a classification, but I would put it in the same category, and "Is this a spam email or not?" That was great if you had the need to do exactly that, and thus, if you had all the data you needed to classify and if classifying it gave you value. But that is typically a use case that you only have after you have a successful startup or company, because then you have a lot of data, a lot of users, and you need to maybe make recommendations for them faster or just somehow improve the user experience with this classification-type AI, or narrow AI, as some people call it.
(13:50):
The big shift in my opinion was when generative AI came, and especially in 2017 when the transformer architecture was discovered, or shared in a paper by Google researchers, because now you could suddenly use AI to generate things: text, images. Within text, imagine all the types of text that knowledge workers create: meeting notes, articles, code, reports, the list goes on. Suddenly, when the AI can do that, it becomes a knowledge worker in itself. That's when you could kind of slap a UI on top of it or put it in some kind of context and create whole new products and startups.
Alex Booker (14:33):
That distinction, I think, between classification and generative makes a lot of sense. Literally, if you take all the examples I gave, they more or less fit within that classification. Think about all the examples I gave: Tesla, Google, Microsoft, and IBM with their APIs. These kinds of technologies were really limited to the big, big companies who could hire ML-type engineers (ML means machine learning, by the way) in-house and data scientists, and also just have an insane amount of computing power and budget to do that kind of work, because it's very GPU and CPU intensive, especially while you're experimenting and iterating on things. But if I understand well, what you're saying is that in 2017, the discovery of transformers led to something called foundation models, which are perhaps a lot easier for the average developer like you or I to take advantage of.
Per Borgen (15:24):
Yes. That's what led to GPT-1, 2, 3, and now 4, and also the open source ones, LLaMA, there's a bunch of others as well, Google's PaLM. All of these foundation models are built upon the transformer architecture, or at least the foundation models I know of or we all know of. Maybe there are some that aren't, but as far as I know, all of them are based on the transformer architecture.
Alex Booker (15:50):
GPT, whether that's GPT-3.5 or 4, they are models. They're foundation models, right?
Per Borgen (15:56):
Yeah.
Alex Booker (15:56):
Then you've got ChatGPT, which is like the user interface that uses those GPT models. ChatGPT is a user experience product. You make an account and you use it. But if you want to actually build something like ChatGPT within your own application, we're going to talk a bit more about how you can tailor these models to your data and specific use cases, because that's where a lot of the opportunity lies. But just to say, if you wanted to implement something similar, you could go to OpenAI, for example, make an account, get an API key, and start to interact with these foundation models, which are your jumping-off points for getting really intelligent answers about whatever inputs you give them.
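The flow Alex describes, make an account, get an API key, send a request to a foundation model, can be sketched in a few lines. The model name and endpoint below follow OpenAI's chat completions API as it looked at the time; treat the specifics as illustrative, since model names and versions change.

```python
import json

def build_chat_request(user_message, system_message="You are a helpful assistant."):
    """Assemble the JSON body for an OpenAI-style chat completions call."""
    return {
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request("Summarize this support ticket in one sentence.")
print(json.dumps(body, indent=2))

# Sending it is a single HTTP POST to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <YOUR_API_KEY>" header.
```

That request/response round trip is the whole integration surface: everything else, the UI, the context you inject, the way you present the answer, is ordinary front-end work.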
Per Borgen (16:35):
You can. I think what you're saying there, the distinction between GPT and ChatGPT, is quite interesting to look into. Because when ChatGPT was launched, it used GPT-3.5, as they called it, but for over a year before that, or I don't remember when they launched GPT-3, a year or two before that, they had really impressive capabilities, not through a chat interface but through a playground interface and the API, though far fewer people noticed this. They didn't get 100 million users until the chat interface came, and I think that was actually the biggest innovation. It was a new UI through which you could interact with the AI. Had you built that with GPT-3, it would've blown your mind just as much as GPT-3.5 did. That, I think, is super interesting because it means that what's stopping us from utilizing these AIs is very often UIs. And who builds UIs? The developers. Now you have this GPT API available, and you can be innovative in that last layer of UX and UI.
Alex Booker (17:45):
It is wild. I think it depends a little bit where your proclivities and interests lie, but if you want to build a successful career or you want to build a successful startup, that happens in the UX innovation layer. Frankly, I almost feel bad for machine learning people because they are so, so smart. They spend so many years researching and going in depth on things, and then some indie hacker comes along and makes millions of dollars.
(18:11):
What you're describing, I think, is very apt. It's like the UX innovation. ChatGPT is literally just a user experience. It's an interface. It could have been built with React, for example. I don't know what it was built with, but it could have been a React developer literally building ChatGPT by standing on the shoulders of giants. And it's not a new idea either. When we build SMS, we use Twilio. When we do email, we use Twilio SendGrid. When we want to do something very infrastructure heavy or very complicated, we don't code that ourselves. A lot of the time, we don't even code password authentication ourselves. We use an API. I think a lot of modern development is assembling APIs, but then putting a lot of weight on the front-end expertise to not only do that in a manageable way, but create something that the user's going to find valuable because that's their interface with the product ultimately.
Per Borgen (18:57):
Yeah. I wouldn't have thought that it would be the web developers and the front-end developers who would benefit from this AI revolution the most, at least in the short term. A few years ago, my bet would've been the ML engineers and AI researchers, but they are struggling to compete with ChatGPTs and LLaMAs and the PaLMs of the world right now, where just the very, very best top 1% can work.
Alex Booker (19:22):
That's such an exciting opportunity, I think, for anybody listening, because if you know front-end, even just how to call an API, there are opportunities to do some really cool things. I thought it might be nice to bring it back to some concrete examples and talk about companies that are using AI models, that are maybe using companies like OpenAI under the hood and their APIs to build successful businesses from scratch. Maybe you can list a few.
Per Borgen (19:48):
There's plenty. One that I remember blew my mind before ChatGPT is Jasper AI. It's a tool that helps you write articles, a copywriting tool, essentially. They grew from zero to $50 million or something in two years and became a unicorn just like that. Super impressive. Yeah, the traction speaks for itself. There's also Character.ai, a very cool app that just lets you chat with a bunch of different types of characters, like game characters and a psychiatrist and people from film and TV-
Alex Booker (20:22):
What?
Per Borgen (20:22):
... and just all kinds of random... You can chat with whomever you want, like historical people and Elon Musk. It's quite cool. They have something like 20 million monthly active users.
Alex Booker (20:32):
How does that work? I guess a historical figure would be the best example. Say you wanted to talk to Freddie Mercury, for instance. Does it talk in the tone of Freddie Mercury? Does it say things he said? How does that even work?
Per Borgen (20:44):
Yeah, I think that's what it does. There are multiple ways to do that. The easiest way is what the API calls system messages, which is where you give it the initial instructions. There, you could say, "Speak in the tone of X, Y, Z." Another thing is actually fine-tuning a neural network specifically on dialogue that resembles this person. That's kind of going back to the training process a little bit.
(21:09):
Also, then there's the third thing, which is a technique called RAG, which is also an AI engineering technique, where you have a bunch of extra data, specific knowledge, that you turn into text embeddings and use as you do the prompting, essentially. This is a little bit technical, but it's an AI engineering technique for using specific data. There you could pull up, like, "How would Einstein reply to this question?", where he actually did reply to this question in 1905. Well, then we pull in exactly what he said, and the AI tries to compose an answer using that context. So that's three examples of how you could do this. I don't know how Character.ai does it, but there are, yeah, many, many ways to do it.
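The retrieval step Per outlines can be illustrated with a toy version: embed some documents, find the one most similar to the question, and stuff it into the prompt. Real systems use learned embeddings (e.g. an embeddings API) and a vector database; the bag-of-words vectors here are just a stand-in, and the documents are made up for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "Einstein published the special theory of relativity in 1905",
    "Freddie Mercury was the lead singer of Queen",
]

def retrieve(question):
    """Return the document most similar to the question."""
    q = embed(question)
    return max(documents, key=lambda d: cosine(q, embed(d)))

question = "What did Einstein publish in 1905"
context = retrieve(question)
# The retrieved snippet is injected into the prompt before calling the model
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The "AI engineering" in RAG is mostly this plumbing: chunking the data, embedding it, retrieving the right pieces, and assembling the final prompt.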
Alex Booker (21:50):
What other companies are out there?
Per Borgen (21:52):
There's a bunch of companies building tools for developers, which is also super exciting. Warp, they are an AI terminal. If you're a developer, you know how much of a pain it is to see errors in the terminal hitting you every time you try to upgrade Node or something like that. They've got $23 million in funding. It's just a godsend for developers when they're wrestling with a terminal.
Alex Booker (22:16):
I think it has auto-complete as well. So if you can't remember, like, "Oh, what's that parameter for this command?" it'll help you out a little bit. Very cool.
Per Borgen (22:24):
It makes life so much better.
Alex Booker (22:26):
What about, I know that Scrimba is part of Y Combinator, which is an incubator that fosters innovative companies. Obviously, Scrimba is on a pretty cool mission to help anybody learn to code, even if they can't afford a bootcamp, for example, and we have the interactive format and stuff. Have you, by chance, seen any companies that are really early taking advantage of these technologies to achieve really big success?
Per Borgen (22:48):
I think actually Jasper there is the best example. They are a YC company. The reason I heard of it initially was because I talked with people in the YC network. People told me how crazy they had grown. I was like, "What? Really?" I looked at the product, and I'm like, "Yeah, write articles. Is it really that good?" It just blew my mind. I remember, I think I showed it to you as well, and you were like, "Ooh!"
Alex Booker (23:11):
Yeah, it's mind blowing and terrifying.
Per Borgen (23:13):
Terrifying, it is. It is a little bit scary what they will do, how they will change the world. I think as a knowledge worker, your best bet, because this revolution is happening, is to just stay on top of it. It doesn't take that much to actually be on top compared to the majority of people in the world. If you just go around asking people how much they use these tools, most people still don't use them much. I think the way to relieve or decrease your anxiety a bit is to just get to know these tools and be a knowledge worker that can work alongside AI and not be replaced by it, because human plus AI is still a lot better than just AI.
Alex Booker (23:54):
I was going to say, well, at least I'm a podcast host because AI can't do that. But then I was listening to [inaudible 00:24:02] podcast the other day, and the intro is a female voice. I was like, "Okay, they got someone to help record the intro maybe because they're busy people." It turns out that's AI speaking in a human, natural tone. It was literally indiscernible from a regular person.
Per Borgen (24:20):
I think you podcast hosts will stick around for a long time.
Alex Booker (24:24):
Very good to hear. Very good to hear. That's interesting because these are examples of companies built on top of AI, and that's the whole thing. But then there's lots of companies that have been around for years who also want to capitalize on this opportunity. Basically, using these AI APIs and foundation models, they theorize that they can improve the user experience for their customers and perhaps generate more revenue as well. I'm wondering if any companies that have existed for a while but added AI in a useful way come to mind as well.
Per Borgen (24:56):
Yeah. I think those who have invested in this so far have been the companies that used to be startups not too long ago, so the technology companies. GitHub, for example, with their GitHub Copilot that's helping developers write code, is a great example that's reached tremendous scale. But there are also newer companies like Loom, which has done a really good job of integrating AI into editing the videos you record, because they are a video recording tool for recording a quick video on the fly. They automatically create these really nice transcripts for you and chop them up into chapters. It just makes the experience much more fluid. And I would say Notion. I've used their AI helper quite a lot when working in documents, and that's an example of context. It's so easy to just click right there and fix the grammar in this paragraph as opposed to going to ChatGPT, opening a new tab, and doing that. The context is the product.
Alex Booker (25:50):
I think it's really helpful to list some of these companies that are using AI for a couple of reasons. The first is that because ChatGPT is our entry point for so many of us, we see this kind of chatbot-type of interface. It's easy to kind of narrow the use case into a chatbot, but based on your examples, that's clearly not true. Sometimes it can be text-based. For example, it can be within the experience, like within Copilot, for example. It's not a chatbot, but it's right there with you. Then there's obviously other examples as well, which might be summarizing call notes, for example, in the case of Loom, or even looking at a transcript and removing ums and ahs and bits that don't seem important to then generate a more efficient video. That's totally possible as well. The other reason I think it's really helpful is because things are changing, and there is a lot of opportunity. These are just some examples.
(26:37):
But you know how it is when a big company releases a feature and it's based on a pattern or a model that can be replicated: a lot of other companies say, "Oh, if Google is doing it, we must do it," and that's literally how a lot of the internet works. That's how a lot of product roadmaps work. That's how a lot of features get prioritized. There will then be a need for someone who not only understands these technologies to come and help do it, but ideally, they'll be able to advise and say, "Oh, these are the right tools for the job." Or with a little bit of experience, maybe just a year or two (we're looking into the future now, of course, if you're just starting), you might even be able to give more specialized advice on how to deploy it in a production-reliable way or how to do it cost-effectively, because every API has a pricing model.
(27:18):
Some developers, they specialize in a platform like AWS. AWS has a suite of APIs that you can use to do backend infrastructure-type stuff and some application development stuff. As people become more specialized, they literally get hired by companies to go in and save them money by combining these tools and technologies and things. So there's lots of opportunities to be at the beginning to see it evolve from the front line, but also start building that experience sooner than later.
(27:45):
It's like this old idea that, wasn't it Marc Andreessen who wrote that blog post, "Software is eating the world"? He kind of theorized that every company will become a software company or it will die. You might have read that when it was published, possibly 10 years ago, and been like, "Oh, I don't know." But then you look at the present day and you realize that's absolutely true. Even the most boring businesses you never thought could benefit, they have a Shopify storefront, or they have some backend to manage their inventory, for example. Do you think that idea could be applied to AI in a sense as well? Do you think that every, or at least most, companies will start to utilize AI to be successful?
Per Borgen (28:25):
Yeah, I think so, at least the big ones. Maybe you can have mom and pop shops and small companies with very specialized niches that aren't really that affected, because they have such a good grip on their niche. Other than that, most companies who are out in the big market and competing will definitely have to. The exact same thing will happen, and I think it'll happen faster. That's just the cycle of exponential growth in technology. There's less time between every revolution.
Alex Booker (28:53):
This idea of taking a foundation model and tuning it in some way, you described three techniques, but let's just say broadly you're somehow adding on top of it so it's more aware of some context to do with your business. I suppose businesses need to start thinking about this because they will have better opportunities to fine-tune those models if they start thinking about and collecting data sooner rather than later. Is that fair to say, do you think?
Per Borgen (29:18):
Yeah, for sure. Right now, there's just so much value to be captured just by implementing the APIs that are available and some simple techniques. But of course, over time, maybe the data ends up becoming the secret sauce, because everyone has access to the same great algorithms and models. Let's say you're a doctor or a hospital or a health tech company: if you have the best imagery for finding tumors somewhere, then you will get ahead. Hopefully, that gets shared open source so that everyone can benefit from it, of course, but that's just an example. Data can be the secret sauce.
Alex Booker (29:54):
Let's talk a little bit now then about the AI engineer job title and what it means. Obviously, it's very connected to Scrimba's new path as well, the AI Engineer Path. How would you define an AI engineer?
Per Borgen (30:07):
It is a developer, first and foremost, a web developer in many cases. I think their most important skills are their engineering and hacking skills, so to speak, like building products and features.
Alex Booker (30:22):
Like SQL injection type stuff or-
Per Borgen (30:23):
No, like hacking together a product.
Alex Booker (30:24):
Like an NDI, okay, yeah.
Per Borgen (30:25):
But also, then there are the specific AI engineering skills on top of that, which we teach in the AI Engineer Path. That's the basics: tokens, prompts, system messages, stop sequences. The list goes on and on. Then there are certain best practices emerging these days, like the RAG technique I mentioned, and various prompting techniques for creating agents, autonomous agents that can take a prompt and decide not just what to reply to you, but tell you what you should do and tell the computer program what it should do. These techniques are being developed and discovered as we speak; every week or month there are new ideas popping up, so it's incredibly hard to keep an overview of it all. That points to another key skill for an AI engineer: the ability to stay up to date, because otherwise you'll fall behind very quickly.
Alex Booker (31:20):
How is it different from prompt engineering? That's something that was coming up a lot eight, 10 months ago, I feel like.
Per Borgen (31:27):
Maybe the word prompt engineer is a bit misleading. I wouldn't say it is engineering to talk to an AI. It's more like prompt crafting, more of a creative endeavor than an engineering endeavor. That kind of prompt design, or whatever you want to call it, applies to all kinds of professions. As a lawyer, you should know how to prompt to speed up your work. As an accountant, you should know how to do it. Everyone should know how to do that if they want to interact with AI, not just developers.
Alex Booker (31:56):
Prompt engineering, I suppose, describes the right way to structure your inputs into ChatGPT to get the most useful possible response.
Per Borgen (32:04):
Yeah.
Alex Booker (32:04):
That could be code generation, but, as you point out, it could also be an accountant or a lawyer, and I think that's the key difference. The other question that I think would help narrow down what an AI engineer is: how is it different from a machine learning or data scientist-type engineer?
Per Borgen (32:22):
A machine learning engineer or a data scientist or an AI researcher, all these kinds of roles that are deeper in the stack, one step further down, they are more about cleaning and gathering the data, deciding what data to get a hold of in the first place, and also using that to train the model, writing the algorithms that train the neural network, and doing the training. Then there's the final step, which is inference, which is when you take new data and put it into the model and it predicts something or generates something. It is in the inference layer that AI engineers work; they don't work in the training layer, or very little. Who knows how these roles will evolve? There's an overlap, of course, but for the most part, they're separated.
Alex Booker (33:07):
I guess another way of distinguishing it is based on what tools you use. For example, a machine learning-type person will probably use PyTorch a lot, whereas an AI engineer would never need to go near that kind of tool. As you speak, I realize that it's a little bit like how, to use a programming language, you don't need to know how to write a compiler, and to use a database, you don't need to know how to write SQL Server. Nor would you want to specialize in that necessarily, because it's so constrained compared to the broad skillset you need to build useful apps, for instance. I guess you could say that an ML engineer, an AI engineer, their job is to produce the foundation models, and our job as AI engineers is to use those via APIs to create really useful user experiences and apps.
Per Borgen (33:52):
Yes, I think you said AI engineer for both of those now, but you meant AI researchers, right, for the first one.
Alex Booker (33:57):
AI researchers, yes.
Per Borgen (33:58):
That's exactly correct. I like the analogy with programming languages, creating them versus using them. That's exactly how it will be in AI as well, much fewer AI researchers and many more AI engineers.
Alex Booker (34:11):
I guess we've spoken about what an AI engineer is and what it isn't as well as some of the traits that would make a promising AI engineer. What we haven't spoken about as much, even though you touched on it briefly, are some of the tools that an AI engineer would likely use. I think one of the big tools and technologies that comes to mind is OpenAI and their respective APIs. Maybe that's a good place to start. Could you tell us what OpenAI is and what they offer? But also, maybe tell us about the landscape in general? Are there alternatives to OpenAI we should know about as well?
Per Borgen (34:45):
OpenAI provides a bunch of different APIs for AI engineers and a bunch of different models. You can use GPT-3.5, GPT-4, GPT-4 Turbo, and whatnot, and also image generation with DALL·E and a suite of tools, plus some extras. Like, now they've started with the Assistants API, which is in beta and simplifies some of these other techniques you've mentioned, like RAG and stuff like that. They are kind of becoming the advanced provider of AI-as-a-service.
(35:13):
Then behind that, we have something we also teach in the AI Engineer Path, which I think is really important, which is the open source alternatives. Plenty of these models are shared out in the open as open source. Even though they're lagging a little bit behind GPT, they're not that far behind, and there are so many benefits to using open source. So if you go to Hugging Face, for example, they have tons of open source models you can pick and choose from and use just as you would use the GPT APIs. You can use LLaMA, for example, released by Meta and hosted on Hugging Face.
(35:46):
The business model is a little bit different in terms of what you pay for, but overall, if you go open source, you'll get a lot of cost savings. You can even host it yourself if you're more advanced; then you're moving further down the stack. Probably most AI engineers won't do that to begin with, or at least you don't need to in order to provide value to companies and products. But I think open source is important to look at as an alternative as well, probably also because, for many tasks, you don't need anything close to artificial general intelligence to rewrite a title, find some grammar errors in your text, or do other smaller tasks.
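To make "implementing the APIs" concrete, here is a minimal JavaScript sketch of the kind of request body you would send to a chat completions endpoint. The model name, field names, and endpoint are based on OpenAI's commonly documented format, but treat them as placeholder assumptions rather than a definitive integration:

```javascript
// Hypothetical sketch: the shape of a chat completions request body.
// Field names follow the commonly documented v1/chat/completions format;
// the model ID and prompt text are placeholders.
const requestBody = {
  model: "gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this article in one sentence." },
  ],
  max_tokens: 100,   // cap the length (and cost) of the completion
  temperature: 0.7,  // higher = more creative, lower = more deterministic
};

// In a real app you would send it with fetch and your API key:
// fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//   },
//   body: JSON.stringify(requestBody),
// });

console.log(requestBody.messages.length); // system message + user message
```

Note that the key would live server-side in practice; exposing it in the browser would let anyone run up your bill, which ties into the cost-management point Per makes later.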
Alex Booker (36:26):
It sounds like the basis for all of this are the foundation models. OpenAI has two text-based ones, which are GPT-3.5 and GPT-4, I think, but there are also foundation models for image generation. I don't know if the model is called DALL·E, but it's the same idea, and it sounds like they have some other things. But OpenAI as a company, they are a research company. They have spent a ton of time innovating to create these models and a ton of computing power, probably millions of dollars' worth of computing power, to produce these models.
(36:56):
Long and short, I can't do it myself, you can't do it yourself. We need some help in accessing that foundation model, and that's what OpenAI provides. On top of that, they give us some APIs that let us interface with that foundation model in a way where we probably don't have to worry too much about how to deploy it and the infrastructure side of things, and I think you pay a premium for that. But then there are open source alternatives. One of the most famous text-based foundation models that came out is from Meta, and it's called LLaMA. I think it's kind of open source but kind of not, right? They made the model available, and they open sourced some of the... I think it's called the weights or something, but it's not 100% open source.
Per Borgen (37:33):
Yeah, and there are some limitations to who can use it and stuff like that. But they're much more open than what OpenAI does.
Alex Booker (37:40):
Still, yes, still a lot more open. Where does Hugging Face come in?
Per Borgen (37:44):
You can look at Hugging Face as GitHub for AI. If you have a model you want to share and expose through an API, the easiest way to do that is through Hugging Face. You can pick and choose from probably thousands of models there, text generation, image generation, image manipulation, transformation, and so many cool things you can do. Going forward, I think what you'll find is that the cost management part of AI engineering is going to be more and more important. We looked at this with Scrimba, actually. We are, of course, going to implement AI into our product as well. We're working on it. But if we're going to give all of our users a lot of feedback on their code, that's going to be expensive with the current prices.
Alex Booker (38:25):
Does Hugging Face just let you download LLaMA to host yourself and make part of your project, or do they give you an API?
Per Borgen (38:31):
They give you an API, an inference API that is super simple to access, and you can pick and choose from all of these models. If you want to become an AI engineer and be a little bit cutting-edge, just keep an eye on Hugging Face and play around with the hot new models that pop up every month, and you'll be ahead, actually.
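As a rough sketch of what Per describes, the hosted inference API boils down to a POST request against a model ID from the Hub. The URL pattern and header below follow Hugging Face's commonly documented format, but the model ID, token, and input are placeholders, not a tested integration:

```javascript
// Hypothetical sketch of a call to Hugging Face's hosted inference API.
// Any model ID from the Hub can be slotted into the URL; the token below
// is a placeholder for your personal access token.
const model = "meta-llama/Llama-2-7b-chat-hf";
const url = `https://api-inference.huggingface.co/models/${model}`;

const requestOptions = {
  method: "POST",
  headers: {
    Authorization: "Bearer hf_your_token_here", // placeholder token
    "Content-Type": "application/json",
  },
  // The payload is just the text you want the model to work on.
  body: JSON.stringify({ inputs: "Rewrite this title: Ten tips for devs" }),
};

// In a real app: const response = await fetch(url, requestOptions);
console.log(url); // the model ID is part of the endpoint itself
```

Because only the URL changes between models, swapping in next month's hot model is often a one-line change, which is part of why keeping an eye on the Hub pays off.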
Alex Booker (38:51):
I wanted to point out something you said as well just to emphasize it, which is that, when you think about ChatGPT, it is omniscient. It knows everything. It's got internet-scale knowledge about everything, and so obviously that's a very big model and probably a bit computationally expensive. But if you have a more specialized use case, say you're creating accounting software for a Norwegian tax firm, for example, that model does not need to know anything about cats.
Per Borgen (39:15):
It's a great point. An interesting development we're also seeing now, which is partly pushed by Hugging Face, is machine learning models that run in the browser, so on your device. They can be like 20, 30, 40 megabytes in size and do really cool stuff.
Alex Booker (39:32):
I swear I heard of one that's four megabytes. Even though that's probably not nearly as capable as something like GPT or LLaMA, it would run in the client entirely, so then you don't have to worry so much about the service side of things. That was a pretty cool concept. It might've been totally useless, Per, to be honest, in my opinion.
Per Borgen (39:47):
Yeah, probably, for now. But imagine in a year or two. Then going back to the costs, that's totally free for us developers because the user's device takes the cost.
Alex Booker (39:59):
That's definitely one part of the AI engineer stack. It's something like OpenAI, something like LLaMA with Hugging Face. We can call this the system of reasoning or the foundation model. That's something you need for every project that an AI engineer is iterating on top of. I think another part is this idea of RAG. This is a way, I think, to take this foundation model and customize it a bit. What does RAG mean, and what kind of tools do we use when we're practicing RAG?
Per Borgen (40:28):
It's short for retrieval-augmented generation. That's one of those three-letter acronyms you come across as a developer all the time that just sounds so insanely complex. I remember thinking about AJAX when I started to learn how to code. I'm like, "What is this complex AJAX thing?" I learned it, and, "Oh wow, it's actually not that complex." They just created a fancy word. I think RAG is not that complex. You just have to learn it. The idea is just as you said, to give the model some kind of specific knowledge, for example, about your company. So what you do, let's say you are building that, what was it, Norwegian accountants or something like that-
Alex Booker (41:04):
Something like that, yeah.
Per Borgen (41:05):
... for tax reporting. Then you would need a bunch of data about tax laws in Norway. So you would download that, chop it up into chunks, and create something called embeddings of all of these chunks. That is essentially turning sentences into a long array of numbers, so you kind of encapsulate the meaning of the sentence in numbers. That gives you the opportunity to use it in collaboration with the model to ensure that the model doesn't go astray and hallucinate, because you combine the embeddings with the generalized knowledge, the system of reasoning, from these models. For this, you have to use a vector database. There are plenty of them: Pinecone, Chroma, Weaviate. Everyone's creating a vector database these days. You've just got to learn RAG, and then you can build amazing things.
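The retrieval step Per describes can be sketched in plain JavaScript, with made-up three-dimensional "embeddings" standing in for the high-dimensional vectors a real embedding model would produce. A vector database like Pinecone, Chroma, or Weaviate essentially performs this similarity search at scale:

```javascript
// Toy RAG retrieval sketch. The chunk texts and 3-number "embeddings"
// are invented for illustration; a real embedding model outputs vectors
// with hundreds or thousands of dimensions.
const chunks = [
  { text: "VAT in Norway is 25% for most goods.", embedding: [0.9, 0.1, 0.0] },
  { text: "Income tax returns are due in April.", embedding: [0.1, 0.9, 0.1] },
  { text: "Cats sleep up to 16 hours a day.",     embedding: [0.0, 0.1, 0.9] },
];

// Cosine similarity: how closely two vectors point in the same direction.
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Retrieve the stored chunk whose embedding is closest to the query's.
// (In a real app, the query embedding also comes from the embedding model.)
function retrieve(queryEmbedding) {
  return chunks.reduce((best, chunk) =>
    cosineSimilarity(chunk.embedding, queryEmbedding) >
    cosineSimilarity(best.embedding, queryEmbedding) ? chunk : best
  );
}

const queryEmbedding = [0.8, 0.2, 0.1]; // pretend query: "What is the VAT rate?"
const bestChunk = retrieve(queryEmbedding);
console.log(bestChunk.text); // the VAT chunk wins the similarity search
```

The retrieved chunk is then prepended to the prompt, so the model answers from your data instead of hallucinating, which is the "augmented" part of retrieval-augmented generation.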
Alex Booker (41:55):
What about things like LangChain? Where do they come into it?
Per Borgen (41:58):
They are kind of... call it a jQuery or React for AI engineering.
Alex Booker (42:04):
The jQuery, really?
Per Borgen (42:07):
Because they were the first, even though jQuery is a bit frowned upon now. LangChain is as cutting-edge as jQuery was.
Alex Booker (42:13):
Was, yeah.
Per Borgen (42:15):
Hopefully, they will stay afloat and continue to innovate and stay on top. They are a tool or a library, maybe call it a framework actually, that makes it easier for you to build these kinds of applications, to create that RAG setup, for example, so that you don't have to do it all in vanilla JavaScript. We teach that in the AI Engineer Path so that you have a tool you can use to quickly spin up these products or features when you go into the industry and start building your AI apps.
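One of the steps a framework like LangChain automates is splitting a document into overlapping chunks before embedding. Here is a toy vanilla JavaScript version of that step; the function name and parameters are hypothetical, not LangChain's actual API:

```javascript
// Toy text splitter: slice a long document into fixed-size chunks with
// some overlap, so sentences cut at a boundary still appear whole in at
// least one chunk. Real splitters also respect paragraph/sentence breaks.
function splitIntoChunks(text, chunkSize, overlap) {
  const chunks = [];
  // Advance by (chunkSize - overlap) so consecutive chunks share text.
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

const doc = "A".repeat(250); // stand-in for a long tax-law document
const chunks = splitIntoChunks(doc, 100, 20);
console.log(chunks.length); // 250 chars step by 80: starts at 0, 80, 160, 240
```

Each chunk would then be embedded and stored in the vector database; a library mostly saves you from hand-rolling and tuning details like chunk size and overlap yourself.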
Alex Booker (42:43):
To be honest, I have a lot of questions, but I also think something like this is incredibly hard to summarize on a podcast. For example, a vector is best illustrated visually. These are newer concepts to me as well that I'm learning about for the first time. Nothing helped it drop into place better than Guil Hernandez's introduction in his embeddings course, because he takes time to visualize it with real examples. I think once you understand what a vector looks like in a visual way, and basically accept that you're never really going to understand the inner details of how these foundation models work, the whole point again being that an AI engineer builds on these abstractions, it does at least help you understand why these vector databases are needed.
(43:26):
Basically, they help represent the data in a way that the foundation model can understand. Say Scrimba wanted to build a chatbot to help answer questions that are specific to Scrimba, like, "How do I upgrade my plan? How much does this cost? What is the best order to do the career path in?" Imagine if we took all the transcripts from the Scrimba Podcast, all the chat messages in the Scrimba Discord community, all of the FAQ articles, and all of the Zendesk Q&As and things, minus sensitive information, obviously, that's something to be very careful about. We would essentially need to take all that data and represent it in a vector database somehow, in a format that is understood. How do you do that? The basic answer, I think, and, Per, you'll tell me if I'm understanding okay, is that you would use something like LangChain and the tools within it to facilitate that process of taking your own data and representing it in a vector format that can then influence the output of the foundation model when you ask it questions and things.
Per Borgen (44:21):
Yes, LangChain makes that a lot easier. The one thing to keep in mind as well, or that one should be aware of, is that you actually need a model to create these text embeddings. There is intelligence involved; it takes an intelligent model to create semantic vector representations of text. You can't just do it with Math.random in JavaScript or anything else you have access to in the browser.
Alex Booker (44:55):
We're coming to a close now, unfortunately. This has been a very enlightening conversation. I already knew some of these things. I've definitely learned some things as well. Needless to say, I'm really excited to go and keep playing with some of these things, as I think someone listening might be as well. What we need, though, is a good jumping off point. So how would you suggest someone get started with these technologies?
Per Borgen (45:17):
Of course, that is going to Scrimba and the AI Engineer Path-
Alex Booker (45:22):
Really?
Per Borgen (45:23):
... and taking that course.
Alex Booker (45:26):
Tell us a bit about the AI path and how it's structured.
Per Borgen (45:29):
It starts with Tom Chant, our teacher, who takes you through the AI engineering basics, and that's based upon the OpenAI APIs because those are the most common ones you'll use out in the industry. Then myself and Arsala, a new teacher at Scrimba, go through open source and Hugging Face and how to use those APIs, before Guil Hernandez, as you mentioned, goes through everything you need to know about embeddings and vector databases so you can do this RAG technique. After that, we take a deep dive into AI agents, building features or products that can act on your behalf, not just generate text back to you, which is super exciting and has just limitless potential. That's created by Bob Ziroll, our very popular React and JavaScript teacher. Finally, going full circle, we have Tom Chant rounding it all up with LangChain.js.
Alex Booker (46:23):
Nice. I was so impressed when I was watching that course. In the introduction, someone from LangChain comes to introduce the course.
Per Borgen (46:30):
Yeah.
Alex Booker (46:31):
There's obviously quite a tight synergy there between Scrimba and LangChain, which is really cool.
Per Borgen (46:35):
And Hugging Face. We're collaborating with both LangChain and Hugging Face in creating many of these tutorials. So we're not just pulling this out of thin air. We're talking with the cutting-edge companies in the industry and making sure that we teach you the best practices.
Alex Booker (46:49):
So exciting, and I'm personally so excited to interview Tom, interview Guil, interview Bob. We're going to be doing a little series here where we get to go a bit more in depth on some of these subjects. It does make sense, of course, to watch the course, because you do need, I think, some of those visuals. I think as well, in true Scrimba style, nothing has changed: it's all about getting hands-on and interacting with the code to build something useful. That's how you're going to learn the best. We still believe in that 100%.
(47:15):
But I do think that, as far as the AI engineer goes, there are lots of great opportunities that we can teach you about here on the podcast, as well as, yeah, maybe keep you some company while you're on that journey to becoming an AI engineer. So I hope you'll subscribe and tune in to future episodes. Because we're releasing this weekly as well, if you're listening to this in the week it comes out, which is the week of the 23rd of November, you can share the episode or reach out to me on social media (we'll talk more about those links at the end) and ask questions that I can then direct to the teachers. I think it's going to be really awesome, just like this interview. Per, I know you're busy building the future of education, so I really appreciate you taking the time to come and share with us.
Per Borgen (47:53):
No, the pleasure was mine, and I am looking forward to hearing the first AI engineer success story on this podcast before too long.
Alex Booker (48:01):
Yes, yes, it's going to be such a big milestone. I love that.
Jan Arsenovic (48:04):
That was the Scrimba podcast. Thanks for listening or watching on YouTube. That's new. Hi. If you made it this far, please consider subscribing. You can find the show wherever you get your podcasts. The show was hosted by Alex Booker. I've been Jan, the producer. Keep coding and see you next time.