Autonomous

Unlock the secrets to transforming simple questions into powerful, engineered prompts that supercharge your interactions with AI language models. Whether you're a developer, creator, or AI enthusiast, this episode dives into how structured prompting can revolutionize your results and workflows.

Things we talk about in this episode
  • The evolution from basic to advanced prompt techniques
  • How to craft simple, functional, and engineered prompts
  • The importance of personas, constraints, and evaluation metrics
  • Examples of real-world prompt structures for coding, research, and creative projects
  • The concept of meta prompting: using AI to improve your prompts
  • Practical tips: iterative refinement, role assignment, and self-checking mechanisms

Links
https://www.promptingguide.ai/ 

Hidden Beliefs Prompt:
https://thinkyface.com/prompts/prompt_4N3GODA2g71BVs9xKP5XbmYp 

Find Your Purpose Prompt:
https://thinkyface.com/prompts/prompt_aGvVbgP9A06wRHe8R2Dd3yLB 

Example software prompt: 
https://thinkyface.com/prompts/prompt_wQl1pbzjg7LLyCq72KoBJ9ay 

Example video prompt:
https://thinkyface.com/prompts/prompt_9M3olvELj0qqDFw01kyWaqzJ 

Allen (Opus) example:
https://thinkyface.com/prompts/prompt_DY2PkydB90pJgcA7eXqnrLKG?key=CgvUZhpe4qBVttQRA7gNsUYJ

Allen (Grok): https://thinkyface.com/prompts/prompt_BEJgaPNXyxzXwcR84l1ZYd2m?key=MLjjH1ms5qyYYMkKhnhZN6Vm

Donn Example: 
https://thinkyface.com/prompts/prompt_YkrQdDzqw7jbqIb7VZeX41j5?key=SgfuH4ErTyiy4dRaCjd4sqbM


Interact with Donn or Allen on your favorite platform below:

Creators and Guests

Host
Allen Santa Maria
Co-Founder @ http://ThinkyFace.com | Former Aimlabs, NYXL, Samsung
Host
Donn Felker
Author. Software developer. Developer @ ThinkyFace, MyFitnessPal, Groupon, Aaptiv, Poynt, Tinder, and more.

What is Autonomous?

Two founders, one a seasoned AI engineer, one a non-traditional builder who learned through AI, have unfiltered conversations about building products, using AI as a tool, and pushing limits in work and life. Part technical, part philosophical, part two friends being real.

The show is for ambitious people who want to do more than they think they’re capable of, whether that’s shipping their first product, going deeper technically, or just thinking bigger.

Donn (00:00)
Everybody, welcome back to the show. Allen, how's it going, man? What's new with you?

Allen (00:03)
Hey man, it's been a great week. We got episode one live and got some really fun feedback from family and friends. It was great to see them reaching out and supporting us. Donn, everyone really learned a lot from you in episode one, which was crazy.

Donn (00:18)
I love that. Yeah, I appreciate everybody sending in all the feedback. I announced it on my email list and I shared the feedback with you, Allen. There were a bunch of people who replied with things that they would like to see. They're excited to listen. We've had people send in, you know, photos of them listening to it in their car, people listening to it in their house, their garage gym, on YouTube. So thank you to everybody that's doing that. That's really cool. Glad to hear that you're listening. We've got a bunch of cool stuff happening soon.

So what kind of stuff? Have you been working on anything recently at all, or what?

Allen (00:49)
Every night I set up at my computer and I just try to build something now. Instead of sitting down on the couch and playing a game or watching a TV show, I'll just pull out my laptop and try to make something fun. And my friends who are watching this are not too happy with me at the moment. I've been building and sort of live testing in our group chat some really fun group chat bots, which we could talk about later.

Donn (01:13)
Yeah, definitely. Yeah, I've been working on similar stuff. I think it was yesterday, it was like 17 hours I was at my desk. I forget at what point, it was like 11 in the morning, I was hungry, I hadn't eaten yet, and I was like, I've got to eat soon. And then before I know it, I look up and it's 12:25 and I still haven't eaten. I'm just inside of the code. Almost like an addict, just tapping my elbow, like, one more prompt, just one more prompt. And I just kept going. I couldn't stop.

Allen (01:33)
Oh man. The worst

is when you hit a failure and you just can't end the night on the failure, you know? It's like, I have to finish this, and whatever, man, it ends when the rabbit hole ends, when you finally get the result that you want. That happened to me last night actually. And yeah, I gotta say, you know that Jon Hamm meme where he's dancing in the club? It's the one that's going.

Donn (01:43)
I

Yeah.

Allen (02:02)
That's how I felt when I finally wrapped it up and I got what I needed. So that was a great feeling.

Donn (02:08)
One of the things that I actually do that's a little bit different is, if I'm working a long time, I have found a trick for myself, and I don't know if it works this way for others: I will actually end on an error. If I get stuck and I get an error, that means I have something to do. So at that point I'm like, all right, I have an error message on my screen.

It's late at night, I've got to end the day, I'll end it. And what that means is when I come in the next day, I look at it like, yeah, I got this error, and then I feel like I'm thrust right back into it. So for me, sometimes that works pretty well. However, it can backfire, because then you can be up all night thinking about it.

Allen (02:34)
Yeah.

I know my personality, and that's exactly what's gonna happen. I'll be sitting in bed and I'll just have to get out of bed and go fix it. I can't go to sleep with that on my mind.

So, what did you want to talk about today?

Donn (02:51)
We had been chatting on and off for a long time now regarding AI and just how to get going with it. And there's a fundamental principle that underpins all of that, and that's prompting. And that kind of goes back to what I was saying: you're just feeding in the prompt. I feel like I can't prompt enough, but learning how to prompt is a core skill. So today we're gonna talk about prompting. And what I wanna say here is that with today's show, the goal is to have you, the listener, the watcher,

or whoever you are, leave with something that's going to really change the way in which you work with LLMs. And I really feel like you're going to get that, because if you learn how to prompt properly or differently, your entire outlook on everything will change. So that's my guarantee to you: you will actually get something deep out of this show. That's what I'm going to be talking about. We'll be going from basic to advanced inside of prompting, so we're just going to hop right into the show immediately. Here's a quick outline of

what we're going to talk about. We're going to talk about why prompting matters, and we'll talk about the origins of where some of the prompts started from. Then we're going to actually get into the meat and potatoes of today's show, and that's where we're going to talk about the different types of prompts: from simple prompts, which we've all done, to a little more functional prompts, and then we'll get into fully engineered prompts, what those mean, and break them down into their components. Throughout the show you'll come to understand when you should be using each prompt; you'll kind of just get that

sentiment throughout the show. Now there are a few things that we're not going to cover. We're not going to get super academic.

If you go visit various sites, which we will link in the show notes, there's one site, for example, called promptingguide.ai. And inside of that are all different types of prompts, like one-shot, few-shot, chain of thought. I mean, there's a whole bunch of academic-style prompts in there. And I think academia is great in that regard, because it helps us learn. But I'm much more of an applied person, and I will test to see what works for me. So we will not be talking about those. However, you can go learn about them

after this show and they will make a lot more sense. We'll also be talking about the key components of a good prompt. We'll give you some examples of prompts at the end, or tools we have used to help create prompts and save them. And then we'll get a little bit advanced at the end and talk about meta prompting: how you can use agents to help you write your prompts a little bit better. And then of course, we'll have some sample prompts of how we have used meta agents and some of the meta agents we have used. So that's what we're going to talk about today, Allen.

Allen (05:20)
Yeah. Prompting has changed the way that I work with LLMs. Prior to using prompts, I was using it basically as a search engine, which is what most people who are starting to use ChatGPT might do: just show up and ask a simple question. The act of entering a few lines to prompt the LLM with an instruction before you ask a question, that's the difference between getting slop and getting what you want. And it goes both ways, you know, a bad prompt

Donn (05:29)
Yeah.

Absolutely.

Allen (05:47)
won't get good results. So you need to think through how you want to work with the LLM, how to structure the question to get the exact results that you want. And this can be done across every single LLM. Whether you're using ChatGPT, Gemini, Claude, or Grok, when you go to ask a question, you'll want to set up certain criteria within a prompt to let it know: hey, when I ask you about the weather,

Donn (05:57)
Mm-hmm.

Allen (06:12)
I'm not just asking what's the temperature outside of my area. I want to know how I should dress today. I want to know, is it going to rain in the afternoon? Should I bring an umbrella? So your prompt may include instructions about how you want it to answer a question when you ask a simple question, like what is the weather? And then forever in that conversation, that prompt lives. And so that when you come back to the conversation and you ask that question,

It'll have the instruction of how you prefer it to be answered. And this is all customizable. I think that's the great thing about working in your own environment with an LLM: you can customize how you prefer to work with it and how you prefer to get your answers, and everyone's structure is different.
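The persistent-instruction pattern Allen is describing can be sketched in code. Chat-style LLM APIs are stateless, so the standing instruction is typically resent as a "system" message with every request; the function and field names below are illustrative assumptions, not any specific SDK:

```python
def build_messages(system_prompt, history, user_turn):
    """Assemble the message list for one request: the standing instruction
    first, then the prior turns, then the new user question."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_turn}]
    )


# The standing instruction from the weather example above.
weather_instruction = (
    "When I ask about the weather, don't just give the temperature. "
    "Tell me how I should dress, whether it will rain in the afternoon, "
    "and whether I should bring an umbrella."
)

# First turn of the conversation: no history yet.
messages = build_messages(weather_instruction, [], "What is the weather?")
```

Every later turn appends to `history` and rebuilds the list, which is how the instruction "lives forever" in the conversation.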

Donn (06:40)
Mm-hmm.

Yeah, that last component you said there, everyone's structure is different, is very important, because some of the things that we're going to prescribe in today's show, I recommend that you try them how we have put them out there, and then I want you to actually change them, change them to see what works for you. You may word something a little bit differently. You may use it for coding, for generating a document, an image, or a video, and each one of these prompts is going to be different based upon the style and how you want to use it. So definitely realize this is not a prescription that you have to stick

to. This is more education and learning of, hey, here's how you go from step one to step three, which gets you a little more advanced, and so forth.

Look out.

Allen (07:31)
Yeah.

So, well, let's dive into the origins of prompting.

Donn (07:34)
Yeah, for sure.

So where did all this come from? You mentioned before, and I think it was spot on, that this kind of came from the search engines, because as humans, we have a certain propensity to repeat past patterns in new environments to see how things are going to react. And that's kind of what we've done: we've taken our search engine terminology and said, hey, when did some particular event take place, and who were the key figures? Maybe you're asking about history. The same thing you could plug into a search engine, you plug into ChatGPT. And I'm sure a lot of you listening have done exactly this. I know I have. For example, when I was first getting going with the LLMs, I started just asking random questions because I didn't know how to use it.

The prompts that we've all kind of started with have been very simple, like one or two sentences, simple and rudimentary. And there's nothing wrong with these simple queries. I think the important thing to understand here is that if you are prompting an LLM and getting value out of it, then your prompt is good enough. But there are ways to improve it, because remember, the LLM is super powerful. It has so much intelligence in it. It's like driving a Ferrari to the grocery store.

I can drive a Ferrari to the grocery store. I can get groceries, but there's so much more that that Ferrari can go do, if that makes any sense.

Allen (08:46)
You're probably better off taking a truck, you fit more groceries in it. No, it makes sense. It's a great example, actually, the grocery store one.

Donn (08:49)
Exactly. Yeah. Yeah, exactly.

Yeah. So we basically use them as glorified search engines in my opinion.

Did you start prompting the same way, using that search engine style, or was it similar to this? What did you do? How did you get started? What was your first endeavor?

Allen (09:07)
Well, when I saw this question, I actually sat and thought about it, because it wasn't within an LLM like ChatGPT. Thinking back to how I got my first prompting experience, it was actually through Midjourney.

Donn (09:17)
Oh yeah, I forgot about that. Yeah. Creating images.

Allen (09:18)
Yeah. So Midjourney is where you can go and create images, generative AI images, by giving it instructions.

And when you go into Midjourney, you can type in there, all right, I want a picture of a sailboat on the ocean, and it'll just spit out whatever it thinks, just put together an image to say, all right, here's a sailboat in the ocean. But you can give Midjourney instructions and say, all right, well, I want it to have HDR imagery. I want it to be shot with an ARRI lens. I want it to look dramatic and dark. I want the ocean to be bubbling with heat. You know, you could add anything to the prompt to get the image that you want, and then you can further refine it. So that's where I first got my prompting knowledge from. And to be honest, that didn't necessarily translate from using that

Donn (09:47)
Mm-hmm.

Yeah.

Allen (10:04)
platform to ChatGPT. I didn't actually think, oh, I can still use all of these descriptors in ChatGPT to get a better result. I was still using it like Google, which most people probably are. You know, recently, I would say over the last maybe 10 years, I don't know exactly when, it used to be that you could just search on Google with one or two words, but now you can search full questions. And so that got us used to talking to a computer in that way and saying,

Donn (10:22)
Mm-hmm.

Allen (10:29)
Hey, where can I get the best sandwich in my local area? And so you go to ChatGPT and you ask a similar question. But now when you ask that question, you say something like: hey, I can only eat non-gluten sandwiches, I don't want to drive more than two and a half miles from my house, I live in XYZ, New Jersey; who has the best sandwich for me today? So that would be an example of a prompt there.

Yeah, that was my beginning of using prompts in an LLM. And it's definitely evolved way past that at this point. We're using way more advanced prompts to do way bigger tasks.

Donn (11:04)
Okay, so let's hop into the different types of prompts that we have out there. The first one we're gonna talk about is just a simple prompt. So we're not gonna spend too much time on this one because these are the ones that we're kind of just referencing already. And again, I just wanna reiterate this. Each prompt style that we talk about today does have validity and usefulness. We're not saying that you can't use simple prompts. We're just saying that...

there's different levels to this game. So for the simple and rudimentary prompts, a couple of examples would be something like: hey, write a document about XYZ topic; or, what's the best material to keep you warm and yet not sweaty while snowboarding; or, hey, my foot hurts on the bottom near the heel, how do I fix it? These are all prompts that I have typed in, maybe even the majority of these at some point in time, because my foot does hurt sometimes. But anyway, they're all very useful and I still use many like that.

Allen, have you used prompts like this before? I think you kind of mentioned you did a little bit, or what?

Allen (11:56)
Yeah, I do that too sometimes.

Donn (11:57)
Yeah, I mean, I think everybody does it. So it's all good.

Now I'm gonna go into the developer perspective here. Developers will start to use LLMs very similarly, because they'll open up their development environment, which could be something like VS Code or Cursor or whatever they're using, or even a general purpose LLM. And when I say general purpose LLM, I mean ChatGPT or Claude or Gemini, where you just go to the web browser or the app and use it. But a developer might say something very simple, and this is a very simple developer prompt, like: create a Python function that parses a CSV file. Like, okay, cool.

Or: tell me how to configure my DNS for Cloudflare. Now, some of this may be completely Greek to some of you listening; that's fine, I'm just giving you an example. And then there's even simple things like this: we might find some code as engineers and not really understand what it does, and we'll say, hey, can you explain this code to me? And we'll just paste it in. So again, these are useful, and that is okay. I still use some of these, especially "explain this code to me." But we'll get into the next level here pretty soon, where we can expand on these and make them a little bit more useful in general.

So Allen, let me reiterate: when I started developing with some of the agents, I found myself just using short little sentences like this, hey, can you build this thing? Very small; I didn't give it long sentences. When you started working with Gemini and building software with it, or even just generating things, were you giving it huge prompts, or were you giving it one little thing, letting it apply that, and then the next thing? How were you doing it? Was it similar to me?

Allen (13:27)
Yeah, I was definitely giving it shorter prompts than I'm giving it right now. Now I have sort of a global set of instructions for a project, and I'll go to my project and start a conversation. I'll ask about a specific feature

Donn (13:31)
Yeah.

Allen (13:41)
or something that I want to add to the app that I'm working on. But previous to that, it was just one window, and just basically one-shot prompting the entire way through to try and get a result. And that definitely was not the best way to work with the LLM to build something, but it was a good way to get started.

I'm trying to think through how I actually got started building the app. I explained it in the first episode pretty well, where, you know, I said, hey, I have this issue that I want to solve, can you help me build something? And the LLM came up with a plan, basically like a product requirements document for what it is that I wanted to build.

But it didn't take a really complex prompt to do that. And the reason it didn't require one is because I don't think that I had the full vision of what I wanted to build when I first started. I started to build that app through collaboration. And as it started to come together, like with the agent, but as the app started to come together, I started to say, all right, well, I think I...

Donn (14:30)
Yeah.

Like with the agent?

Allen (14:42)
I think it would be better if it had this, this, and that. And so I would just ask it more questions about how to do something, like, let's add this new feature. For the music app specifically, it was really important that it didn't just add the links; I really wanted it to create a playlist, because I wanted people to just have a one-button click to all the music that their friends shared in chat. And so again,

Donn (15:07)
You're just like going back

and forth with it really, right?

Allen (15:09)
Yeah, I was just going back and forth with it. And that's probably the best way for me to work with LLMs, just as a collaborator. By the way, as somebody who's starting to build my first couple of apps, I don't have a very clear picture of what the end result is going to be; I don't think anybody does when they're building software. But yeah, I just focus on one very simple thing to start it, and then just go back and forth to build.

Donn (15:38)
Yeah, that makes sense.

Yeah, because then you don't really know what you're doing, and you're like, hey, I want to build this one little feature over here, and it builds it. Then you take a look at it and you're like, hey, actually I don't want that button over there, I want that button down here. Or, I don't even want a button there, it's supposed to do that automatically. And you're learning how to communicate with the agent. So it's very much like we're saying here, very simple prompts back and forth. So it definitely seems like you've had that same experience.

Allen (16:00)
Yeah, absolutely.

Donn (16:01)
Okay, cool. So that wraps us up with the simple prompts, which is just the very basic stuff. And again, to reiterate, it's okay if this is how you operate and build software, because I think a lot of people build things like this in Cursor,

or whatever app you're using, or even just inside of a general ChatGPT instance where you're learning about a document or some type of topic. But next up, you get to a level two: how do you advance past that first simple prompt? After that, we get into something that, and these are not names that are out in the industry, I just call a functional prompt, because it provides a lot more function to you than what the previous simple prompt would have.

It's really better than just a single-sentence request, though it still can be one if it just has some more context. In short, a functional prompt means that you're giving the LLM a little more direction. It might have a few constraints or clarifications inside of it, but you're not really designing something like a fully engineered, executable prompt with all these constraints and stuff that we'll get to. It's mainly still kind of unstructured, but there's just more direction and intentionality built into

what the input of the prompt might be and what it should be. So let's go ahead and talk about some examples. I'll dive into the first one, and then I know you've got one where you'll expand on the snowboarding example that we talked about before. For the first one, I'm gonna build on the previous simple examples. So, you know, before we talked about, hey, in a simple prompt I would say: hey, I wanna write a document about XYZ topic.

And if I were to turn that into more of a functional type of prompt, it might look something like this. I'm reading here from my notes. It says: write a short, well-structured document explaining XYZ topic for a general audience, focusing on the key concepts and practical takeaways. Now, there are a number of things that have changed in this prompt. For example, some of the things that have changed are the audience, maybe some of the structure of that prompt, the outcome. It still doesn't have a persona, so I'm not telling it to take on a particular persona.

And there's no evaluation constraints, but just comparing, hey, write a document about whatever topic, and then actually giving that a little bit longer one, like say, hey, this is for a general audience. The LLM now is learning, hey, this is who I need to actually target this writing to. If I were to change that general audience to a professional audience or doctors or something like that, it's going to change how...

the LLM responds. So giving it a little bit more context is gonna help guide it along more. Allen, what about that snowboarding example, where we asked it to help us find clothing that doesn't make us sweaty? What would you change there?

Allen (18:34)
Yeah. So, I mean, the original question was: what is the best material to keep you warm yet not sweaty while snowboarding? A good example of a functional prompt would be: what materials are the best for staying warm while snowboarding without overheating? Please explain why they work, compare at least three common options, and compare typical affordability. So what's changing there? We have a classification of conditions now; we want to know what's going on. Yeah, the pricing we're talking about. Yeah. Exactly.

Donn (18:53)
you

Maybe some comparisons, yeah.

Allen (19:01)
We're comparing different products, which is something we're going to do no matter what; when we go to Google and look at different options, the LLM can assist there. And there's an explanation request. The LLM might do this a couple of different ways. It might just pull information from the description of a product; it might go to Reddit or any one of these review sites and pull, hey, people are saying XYZ about this product. But with the first question, you weren't getting any of those results. You're just asking

Donn (19:19)
Yeah.

Allen (19:29)
what material keeps you warm and not sweaty while snowboarding. It might give you a few options, but you're not gonna get the context that a buyer would look for when you actually go to purchase a product like this.

Donn (19:33)
Yeah.

Yeah, exactly. I like it when I submit a question that's more functional like this, especially when I'm giving comparisons and, you know, hey, tell me about the affordability. Like you said, a lot of the LLMs like ChatGPT will go research online and find those Reddit links, which is cool, and then provide a little source link that you can look at. And you'll end up... yeah.

Allen (19:58)
I always click those. Yeah.

And by the way, it's important to do that. This is a good call-out. Whenever you're working with an LLM and it sources something for you, I would not take it at face value. Click the link, especially if you're working on a research project or something for work; make sure you click that link and try to find where it pulled that content from.

Donn (20:08)
Absolutely. 100%.

Mm-hmm.

Allen (20:18)
A lot of the time they're dead links that just link to sites; they may link to an extension or a part of the webpage that no longer exists, and it might just be picking it up from its metadata. So if you were to take this information to a meeting, you might be talking about something that's completely outdated or no longer relevant. Yeah. So, good practice there.

Donn (20:29)
Yep.

Allen (20:38)
You know, another good example, by the way, of a functional prompt is something as simple as, all right, my foot hurts. Here's the basic prompt: my foot hurts on the bottom near the heel, how do I fix it? A functional prompt example would be: I have some pain in the bottom of my foot near the heel. What are some common causes, what are general steps that I could take to relieve it, and when should I seek medical care? There's three questions in there. Now it's reframed as causes and

Donn (20:49)
Yeah.

Allen (21:05)
I'm asking for actions on how to fix it. That'll give you a much better result from the LLM. In general, I think what we're trying to get across with the use of a functional prompt is this: you have to learn how to ask the right questions of an LLM. Point blank. The single biggest value to you as a user of an LLM is learning how to ask the right questions.

Donn (21:17)
Yes. Right there.

Absolutely, I think that's important, and that's something that I forgot to touch on early on that I wanted to, and I'm glad you reminded me through that comment. One thing that I've learned, and I'm interested to hear your take on this...

One thing that I've learned after diving deep into prompts, which we'll get into some more advanced ones here very soon, is that I have had an interesting side effect in my life from that. And that is, I have actually seen my human communication improve drastically, because no longer am I leaving a lot of ambiguity in my conversations. I find myself providing more detail and context in

phone calls, you know, maybe when I'm talking to someone at my house, my kids, whatever. I just feel like I'm being a lot more thorough in my communications. Have you noticed that about yourself as well, or no?

Allen (22:10)
Yeah, I mean, that probably has come as a result of a lot of different things. I'm way more conscious of the questions

that I ask when I talk to people. You know, it helped me, I would say honestly, and I'm not just being agreeable here. I think I've learned how to better communicate my thoughts and be a little bit more aware of the questions, or how I sound, when I'm talking to somebody else. Yeah.

Donn (22:34)
Yeah, I completely agree.

Yeah, I agree. So let's go ahead and hop into the software examples from before. I'm gonna take those same exact examples: the Python function, the Cloudflare DNS, and "explain this code to me." How would I improve those? I'll give you a few quick examples, and then we can hop into the advanced ones, where I think you're really gonna find a lot of value. So for the Python function, I just said, hey, create me a Python function that parses a CSV file. Okay, common stuff.

Millions of developers have typed this into Google and ChatGPT. But that's probably not all I need to do as a developer. I need that CSV file parsed for a reason, so I need to think a little bit more about the context, what I need done, et cetera. And so what I can do is change that to be something like this. I could say: create a Python function that accepts a CSV file as a file path and returns the data as a list of dictionaries. Assume the first row contains the headers, ignore invalid rows, and return them as part of a tuple in the result. Don't worry if you don't know what a tuple is or anything like that. But this is just much more detailed, because it's providing a defined input and a defined output.
and return them as part of a tuple and the result. Don't worry if you don't know what a tuple is or anything like that. But this is just much more detailed because it's providing like a defined input and a defined output.

I'm removing a bunch of ambiguity inside of the prompt. even though there's no frameworks or testing in there, it has a lot more structure to it. Let's dive right into the CloudFlare one here. Tell me how to configure DNS so I can use CloudFlare. I would turn this one into something like this. I'd say, explain the steps required to configure DNS for a domain using CloudFlare, including what records typically need to be updated and why. Give me step-by-step directions. Now, I have asked for step orientation here. So I've asked it for a step-by-step instruction so I can kind

follow along like a manual. I put in the reasons why I need to do these certain things. I'm configuring the DNS and I want cloud flare. need to tell me why it needs to be updated and why what's an A record a C name record again, technical details. But if you're doing something, you want to understand why it's done. It's still at this point, not fully engineered out prompt because it's kind of vendor neutral, meaning I didn't specify is it name cheap? Is it go daddy? Like who where's my where did I buy my domain? I didn't put any of that in there, but it's still much more useful than

just, hey, tell me how to configure my DNS, which could give me 12 different answers that are wrong because maybe I'm using some weird, off-the-wall domain provider. And last one here.

We have some code explanation. So I might say, explain this code to me, and drop in a function or something like that. And it would reply back to me. I could change this to be much more helpful. In a situation like this, I would say, explain what this code does at a high level. Then I want you to walk through the main sections line by line, focusing on the intent rather than the syntax; explain it line by line. A couple of things here. You heard me say line by line twice. Sometimes when you do this, you're putting an emphasis on what

you want, and the LLM will pick up on that sentiment and be like, oh, they have said this twice, this must be really important. Sometimes you can also say, hey, it is critical that you do this, or focus on this component. So what this does: the explanation depth is specified. I wanna know line by line what it does; I don't just wanna know what the function does. Tell me why this piece of code does something here and what it's doing. Maybe I don't understand that weird thing that it's doing, which happens a lot.

The structure's kind of implied that it's gonna be line by line, and I don't wanna know about the syntax because I can read the syntax; just tell me what it is. And the audience sentiment is included, because since I've said, hey, at a high level and line by line, it's kind of insinuating that I want a beginner-level explanation. I could make it better by saying, hey, I'm a beginner, explain it to me like a beginner. But overall, this is a much more functional prompt.

It'll probably take you five or ten more seconds to write, if that; if you're typing fast, maybe a few seconds. But it's just gonna give you a better result at the end of the day.

So here we're gonna go from simple to functional, and what we're saying is basically going to drastically change the results you get. So a couple of closing thoughts on the functional prompts before we move on to the real good stuff, which is where I think you'll get the most value, or at least the prompts I enjoy writing the best, and that's the engineered ones. A functional prompt, what is it? It's really just a prompt that's simple yet conversational, but adds a lot of clarity around the intent, the scope, and what you're looking for. What are the outputs? Is it a doc?

Is it a markdown file? Is it some code? Is it an explanation? What it also does is reduce the ambiguity without turning the prompt into a fully engineered prompt, which is what we're gonna be talking about next. And that's the key here, because you wanna provide a little bit of structure, and maybe you don't care that it's perfect, but at least it's gonna point you in the right direction.

Allen (27:01)
That's excellent.

Donn (27:01)
Okay, so let's go ahead and hop now into engineered prompts. Allen, you have surprised me in many ways during your learnings of LLMs. And one of the things that surprised me is I actually saw some prompts that you had created and I realized, I'm like, wow, these are actually quite well engineered prompts. So to me, it seems like you've moved beyond functional prompts into more engineered prompts. Is that correct? Or how did you?

move from this simple to a more advanced level, and what do some of those look like?

Allen (27:32)
Yeah. So one of the things I did was I actually created a persona to help me engineer prompts. And by persona, I mean I created an actual prompt that acts as somebody who's a professional prompt engineer and has a set of instructions. It lives right now in Claude; I have it in Gemini, set up as a project, and I'll go in there and I'll say, all right, this is the type of person that I need to work with,

these are the typical projects that I'm going to ask this person about, this is how I like to work with a collaborator, and this is the set of results that I expect. And so what that prompt will do is actually generate a set of instructions that I can, again, take into a new project and just collaborate and work with. And it's completely changed the way that I am working with LLMs right now on larger projects.

Donn (28:08)
Mm-hmm.

Allen (28:24)
And we can get into exactly what some of those look like. But maybe we want to talk first about the principles of engineering prompts and exactly how they're all set up.

Donn (28:32)
Yeah, they're all set up. Exactly.

That's it. It is. I'm always impressed how fast you've migrated into these new formats. And I've seen that prompt you're talking about. So.

I have a very similar one, which we'll get to and I'll share as well, and we'll share yours in the show notes so people can have access to them and check them out. And again, it depends on what you like to do and how it works for you. You might like Allen's prompt engineer better. You might like mine better. Try them both, see what works, and then come up with your own. But yes, let's get to the core concepts of what a good engineered prompt is. A good engineered prompt is gonna be very purpose-built. It's very specific. It's not like an open-ended chat. You're providing a lot of details. It's laser

focused in what it's doing, and it provides a persona. Now, I want to explain the persona for a second because it's very important. Think of an LLM as like the most intelligent thing in the world. I can ask it any question: history, I can ask it about da Vinci, physics, medical things, legal things. The wealth of knowledge is amazing.

That is a positive and a negative, because if I am not very laser focused on what I want and I do not include a persona... LLMs operate on the principle of pattern recognition, really, very much like human brains do. And so if I ask it a question, it might kind of go off in left field and assume I want a legal response, or it might assume that I want a medical response, for whatever reason, because of how I structured it. But if I give an LLM a persona and I laser focus it, kind of think of

this LLM as huge, and I've basically sliced away everything else. I've told the LLM it doesn't care about legal, it doesn't care about medical, it doesn't care about this. If I said, hey, you are a Python engineer who is an expert in web development and REST APIs, it's not worried about anything else. It's not worried about machine learning. It's not worried about being a history teacher. It now just knows: this is where I'm going to focus. So I've now laser focused

it. That's where it's gonna be making its decisions and recommendations from.

Another couple of things that an engineered prompt includes: explicit constraints, you know, boundaries about what's going to happen and what it should or shouldn't do. It's also going to contain a lot of strong context, awareness of what the task is, maybe additional information that it has. The task itself is going to be very well defined. It might be just a couple of sentences, or you might have one that is a thousand words long. I have prompts that are over 2,000 words long that basically execute

as autonomous agents, because I've followed a lot of these principles here.

And eventually what you want to build into these prompts, which you'll see is one of the biggest things, is some type of evaluation criteria. This is how an LLM can check what it's doing. If it's writing software, maybe you tell it to write tests, or it can actually read the screen and see that things are where they should be. If you are having it develop a video or an image, maybe it's got constraints on certain properties and it needs to check that the resolution is what you want it to be. So there's all different types of evaluation criteria you can set up there.

Tests could be whatever. There are also terminal events that you should throw inside of these. And again, I know I'm throwing a lot at you here, but we'll get to some examples. So terminal events could be: hey, stop doing something when X happens. Or if you find yourself running down some crazy path too many times and you can't fix it, then stop and let me know. Or if it's a success, let me know. The self-checking is built in, like I said, through evaluation. And ultimately what it's doing is providing you with reliable results.

What I've found is that when you structure a prompt in these different ways, which I will give you an acronym and examples for later, you can develop prompts that are insanely good for agentic coding, self-reflective learning, idea exploration, problem troubleshooting, even just psychological things. If you're working through something and you want questions and you want to do self-introspection, it's amazing what you can do.
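
That combination of evaluation criteria and terminal events maps naturally onto a retry loop with escape hatches. The sketch below is purely illustrative; both callbacks are hypothetical stand-ins for whatever checking an agent actually does.

```python
def run_with_terminal_events(attempt_task, passes_evaluation, max_attempts=3):
    """Keep attempting a task until the evaluation passes, an
    unrecoverable condition is hit, or the attempt limit runs out."""
    for attempt in range(1, max_attempts + 1):
        outcome = attempt_task()
        if outcome == "needs_human":
            # Terminal event: the problem is beyond scope, stop and report.
            return "stopped: needs human input"
        if passes_evaluation():
            # Terminal event: the self-check succeeded.
            return f"success after {attempt} attempt(s)"
    # Terminal event: ran down a crazy path too many times.
    return "stopped: too many failed attempts"
```

Whether the "loop" lives in your prompt's instructions or in orchestration code, the shape is the same: a success condition, a give-up condition, and a cap on retries.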

It blows my mind. I remember a very specific point in time in July of last year where I was doing something at work and it was very tedious and slow. And what I did is I said, well, let me take about an hour and try to write a prompt, and if it doesn't work,

okay, I lost an hour. What I ended up doing was writing a prompt that was like 1,800 words long. I just kind of brain-dumped it. And it turned into almost like a document I could hand to an employee and say, hey, coworker, can you go do this work? They should be able to follow that document and get it done. I then just gave the LLM a persona. I said, hey, you are an expert Android engineer, and here's your task.

And then I said, when you start, ask me, you know, what part of the system to start in. And I fired it off. It did, in 12 minutes, work that takes me about three to four hours to do. And that was the day. I remember I stood up at my desk at my co-working place, and I got up and walked outside and went for a walk, because that's the day that I realized everything had changed.

Allen (33:14)
Wow. Yeah, they're so efficient. And with a good set of instructions, it's capable of anything. I have a couple of different prompts that I'm using right now. You know, one that helps me to create social media posts. I'm terrible at social media, I really am. And outside of this podcast, this is really my first true presence on any sort of social platform in several years, other than a private Instagram with my family and friends.

Allen (33:41)
But yeah, so it has a set of instructions and I explained to the prompt. So first off, the prompt is set up to be an expert social media persona. I let it give itself a name. It's pretty funny. It is constantly scanning the algorithms on all the major social media platforms. So when I come to it and say, hey, I want to develop a post for X.

Donn (33:43)
You use that all the time to help you generate posts or what?

Mm-hmm.

Allen (34:07)
Here's the concept for the post. It'll look through X and it'll say, all right, here's what's trending, here's what's working right now. It'll actually source all of that to make sure that it's not hallucinating or telling me something that's not true. I'll feed it a line of text and it'll say, hey, Allen, this is way too long, I think it would be better to go from this angle. And I'm okay with it. I trust it way more than I trust myself with this stuff. And also,

for me personally, I kind of have a fear of sharing things on social media. You know, I'm the type of person that'll sit on X and type out a response and just back it out like five, six times and then just walk away and say, all right, fuck it, I don't really want to post this. But it understands, because I told it my fear of posting on social media. I gave it that background and it'll actually.

Donn (34:38)
I've been there.

Call you on it.

Allen (34:59)
Yeah, I'll say, like, I don't know if I want to post this. And it'll give me reasons why I should post it and not be afraid to post it, you know. But that's what I need. That's why it's a helpful collaborator. That's just one example of a prompt like this that I set up. But anyway, I remember you had told me a story that you used to explain about plumbing. What was that story again, with regards to prompts?

Donn (35:07)
Yeah!

Yeah,

the plumbing story. I use that to help change people's mental models. But before I get started into that.

I know exactly what you mean. Last night I was doing the dishes, watching a YouTube video of somebody, and the person in the video said, why are you watching my video? What are you getting out of this? And I'm like, whoa, that is pretty deep. Why am I watching this video? What am I getting out of this? It's an older gentleman in his 50s, and I'm like, wow, why do I like this guy's channel? What's going on? I was

really dumbfounded, and I fired up ChatGPT and gave it a long prompt, kind of in this persona format and everything we'll talk about. And for the next 25 minutes, I had a conversation with ChatGPT identifying the core reasons why I like this, and what it came down to was some internal fears or insecurities that I had that this person gave me hope on. And I'm just like, oh my God, this is crazy.

Allen (36:17)
Yeah, they're great for that. You know, it's kind of funny. Tesla built Grok into the car.

Donn (36:24)
Oh, they did?

Allen (36:24)
You can just activate Grok and talk to it. And Grok's an amazing model, an amazing voice model as well. Though I think my favorite thing about it is that it has access to live data on X and through news. So you can talk to it about anything, from current events to, you know, your favorite celebrities, or the Oscar nominations that just came out this week. It has all that information. But there's a couple of different modes, and one of the modes is called therapist. So, yeah.

Donn (36:33)
Yeah.

Really? I didn't know this.

Allen (36:51)
So you could just be sitting there in the car, driving back and forth wherever you've got to go, and you can talk to it and have open conversations about whatever it is that you're thinking. I know this is a quick aside from the conversation that we're having, but just so that you know, this is available. And for anyone who's listening, you don't have to have a Tesla to talk to Grok. You can go to grok.com and do it yourself.

Donn (36:58)
That was awesome.

Allen (37:13)
or download the app on your phone. Yeah, this is really what it's all about. It's not just about asking silly questions. These models have access to almost all the information that's available through the web. They can give you really helpful tips and tricks, or help

Donn (37:14)
Yeah.

Allen (37:30)
walk you through apprehensions or fears or anxieties that you're having. And it sounds like, Donn, you're doing it too. And I'm doing it too. It's the first time I'm actually talking about it with anybody, so it's nice to hear that anyone else is actually using it. Obviously, other people are using it, because they have that feature in the app. But yeah, it's a really incredible way to interact with an LLM.

Donn (37:44)
Mm-hmm.

I didn't know that was in the car by default, and that it had a therapist mode. That's pretty cool. But underpinning all of this, exactly what we're talking about, underpinning that therapist mode, is actually a very well-tuned, engineered prompt. So I'll go ahead and hop into the plumbing story. The goal of this is to help you change your mind about how to interact with an LLM. And the story goes like this. Assume that you have a house, and you have a water leak in your house. So what do you need to do? You need to call a plumber. And what you end up

doing is, thankfully, you can just text a plumber. Think of the LLM as somebody you can text when you need help. And you say, hey, I need a plumber to come over and fix a water leak. And then, boom, 20 minutes later, a plumber shows up at your door and you answer the door. And the plumber says, hey, I'm here to fix a leak. And you're like, yeah, come in and fix the leak. So the plumber comes in, starts looking in your coat closet, starts looking in the garage. And you're like, whoa, whoa, whoa, no, the leak's in the bathroom. The plumber goes into the bathroom, the guest bathroom, looking around there, looking in the shower. You're like, no, no, no,

the leak is in the master bathroom. And so the plumber starts looking around, trying to find where the master bathroom's at, and you're like, okay, it's at the end of the house, turn right going down the hallway, the master bathroom's over there. So it finds the master bathroom and it's like, all right, cool, I got it. It goes into your master bathroom, it's looking in the shower, looking by the toilet, and you're like, okay, what is this dude's problem? The leak, it's over in the sink. I need you to take a look at the, you know, fix the leak over at the sink. The plumber says, hey, no problem, I got it,

goes over to the sink and starts looking at the sink, and then you realize, oh, I'm in a master bathroom and I've got two sinks. Okay, it's actually on the right-hand side. So it goes to the right-hand side, it's finally looking after you tell it, and it's looking at the top, and you're like, okay, the leak is in the master bathroom, it's the right-hand sink, it's underneath the sink, and it's leaking where the pipe connects. And it goes, okay. So now it goes directly to where it's leaking, starts looking at it, and says, yeah, I can fix this, I'm good.

Five minutes later the plumber comes back and is like, all right, cool, I got you, you're all done, you're good to go. You go take a look at it, and it's all duct-taped up. It's got a bunch of duct tape on it. You're like, what is this? And the plumber's like, it's fixed. The leak's fixed, it doesn't leak anymore. And you're like, no, don't do that. Use actual pipe fittings. And it's like, okay, yeah, I'll use pipe fittings. And now the LLM, or whatever, your plumber goes in and fixes it and uses pipe fittings. You look underneath there and you're just like,

no, I have chrome pipe fittings. Why are you using bright pink pipe fittings? What are you doing? That doesn't match. And so then you have to give more clarification. You have to tell the LLM: you're telling it where the leak is, you're telling it how to fix things, you're telling it, hey, I want you to fix it using the same materials that were used before, you know.

And then all of a sudden it does that. It says, no problem, I'll fix that. And it says it's done. You look, it looks great. Perfect, thank you very much. You turn on the water and you realize it's still leaking all over. And you're like, you didn't fix the leak. And it goes, you're absolutely right, I didn't check it. If you've worked with LLMs, you've noticed it's always saying, you're absolutely right. Especially with Claude. Yeah, exactly. So what you failed to give it here is an evaluation metric. How can it check its own work? Well, I need to tell the plumber, and I shouldn't have to, but sometimes we do:

Allen (40:36)
Without a couple of emojis.

Yeah.

Donn (40:47)
make sure you turn on the water, especially the hot water; for whatever reason it leaks more with hot water. So finally the plumber is like, all right, it still leaks, I forgot to tighten it up, and goes down there and fixes it. Finally, it's all tightened up, et cetera. And then finally, and this is like a different scenario.

You might run into a scenario where all of a sudden the plumber can't fix the leak and can't figure out what's going on. The leak's actually coming from the wall. It's in the wall. So if you don't tell the plumber what you want, you just say, hey, fix it at all costs, the plumber rips out your sinks, he's digging into the walls, tearing the sheetrock down, and he's finally fixing the leak. You're like, why did you tear my wall apart? Well, you told me to fix it at all costs, so I did. And the LLM will do something similar. It'll just start deleting files. It'll just start doing whatever it thinks it needs to do in a coding scenario.

Allen (41:22)
Yeah.

Donn (41:28)
And so what I'm really getting at here is I have to give it some terminal events, saying, hey, if you fix the leak, cool, you're done; tell me that you're done. Make sure you've checked the water, make sure you've turned it on, make sure there are no leaks. And also, if you get to the point where the leak goes into the wall or it needs a deeper fix, like it's not a simple fix, I need you to stop and let me know. That is like a terminal event. So the end result of this story here is:

we would never, in the real world, just tell the plumber to come in and fix a leak. The plumber is going to come to our door, and we're likely going to guide the plumber to the exact location of the leak, tell them what's happening, tell them how to reproduce the issue, tell them how we would like it fixed, and then tell them, hey, don't spend a bunch of money, I don't want you digging into the wall, let me know if you have any problems. Think about communicating with an LLM in a very similar way. You're going to give it a persona: that's the plumber. You're going to give it a task: it's going to fix a leak. You're going to give it context:

where is that leak? The physical location, the specificity of exactly where it is. You're gonna give it constraints: don't use Flex Seal, don't use duct tape, or only use matching materials that are already in use. You're gonna give it evaluation metrics: hey, here's how you check if you did the job correctly; you're gonna turn the water on and show there are no leaks, and if there is a leak, keep fixing it until it's fixed. And then you're gonna give it terminal events: hey, you're done when the leak is fixed and you've verified it, and if the leak can't be fixed because you've gotta go into the wall or do something crazy, then you stop.

If you follow this same exact methodology of prompting, which is exactly how I do things, the results that you will get will be life-changing. And this follows a very basic formula. Now I'm interested whether you followed something similar or just kind of identified it through trial and error. I identified this formula as: a persona, a task, context, constraints, evaluation, and terminal events.

Now, not all of them are required. Maybe you only need a handful of them, so use them as necessary. But when I follow this pattern, which we'll include inside the show notes and also in the transcript, the results you'll get are amazing. Allen, when you did this and started building your own prompts, did you find an existing pattern you followed, or did you just figure it out through trial and error?
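
Donn's formula can be sketched as a simple template. This is just an illustrative helper, not anything from the episode; the section labels mirror his list (persona, task, context, constraints, evaluation, terminal events), and the plumber values below are hypothetical placeholders.

```python
def build_prompt(persona=None, task=None, context=None,
                 constraints=None, evaluation=None, terminal_events=None):
    """Assemble an engineered prompt from labeled sections.
    Sections left as None are simply omitted, since not every
    prompt needs all six parts."""
    sections = [
        ("Persona", persona),
        ("Task", task),
        ("Context", context),
        ("Constraints", constraints),
        ("Evaluation", evaluation),
        ("Terminal events", terminal_events),
    ]
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections if text)

# Plumber-style example (all values hypothetical):
prompt = build_prompt(
    persona="You are a licensed plumber.",
    task="Fix the water leak.",
    context="The leak is under the right-hand sink in the master bathroom.",
    constraints="Use chrome fittings that match the existing materials.",
    evaluation="Run the hot water and confirm nothing drips.",
    terminal_events="Stop and report back if the leak goes into the wall.",
)
```

The exact markup doesn't matter; the point is that each of the six questions gets answered somewhere in the prompt instead of being left for the model to guess.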

Allen (43:36)
For me, I figured it out through trial and error. By the way, the example you provided is great, and there are so many reasons why structuring a prompt that way is valuable to you and to your usage of the LLM. And at a high level, we won't get into it today, but the context window is absorbed so much quicker if you don't give it concise instructions.

You know, in the same way that if you told a plumber, fix the leak in my house, and he just came into your house and looked for a leak, he would charge you by the hour, right? Until he found a leak. The LLM is going to do the same thing. It's going to probably find the first leak in your house. That might not be the one that you had in mind; it might be one that you didn't know was there. It's going to fix it and tell you it's done. You're going to go and look and see if your leak was fixed, and it's not going to have been fixed, and it'll be like, well, I fixed it, by the way, it was this. And you didn't get the end result that you wanted. So for me,

Donn (44:03)
Yeah.

Allen (44:24)
Again, this kind of goes back to what I was mentioning earlier about working with LLMs to really customize the way that you want to work with them.

I figured it out through trial and error, and I can think back to a prompt that I was using for career search and helping me to write resumes. I would put in a job description and I would have the LLM act as a hiring manager, help me draw parallels between my experience, based on a set of documents that I would provide to the chat, and the job description. And then at the end of

helping me work through my resume line by line,

Then I would have it shift to the cover letter. And after we're done with the cover letter.

it would shift personalities. So it was actually a context shift, like a persona shift, in the middle of my prompt. So once we're done working through the paperwork, basically, to apply to a job, now it's going to be my career coach, helping me to interview for the job. My whole point of explaining this is that this wasn't something that was an off-the-shelf thing that I found. Now it could be, for you, if you want to.

Donn (45:03)
Yeah.

Allen (45:22)
We can share this prompt in the description. That's just how I prefer to work with the LLM, for this specific example. And then for building apps, it's completely different.

Donn (45:31)
That's amazing that you came up with that on your own. I saw that prompt; we'll share it in the show notes, because those types of things can be super useful. I've used it for very similar interview-like scenarios. It could be for jobs, could be whatever. Super useful. What I wanted to share with folks here, before we get into the meta prompting stuff and wrap it up, is that we do have a couple of good examples here that you can use. But just one quick reminder:

when you are writing your prompts, try to use this persona, task, context, constraints, evaluation format. Think about it in your head, like, hey, you are a Ruby on Rails engineer and you're gonna be developing blah, blah. Give it that type of context and all those things.

In the show notes, there'll be two prompts that you can see. One is a software engineering prompt for a Ruby on Rails application. The other is a video prompt. I'm not gonna read the full prompts, but I'm gonna give you a little bit of an idea here by reading the persona and the beginning of the task of both. So the first one: maybe I have a Ruby on Rails application and I need to generate Open Graph images. These images are the ones you see when you share a post on the internet, on Discord or through text message or whatever; you get that little preview of the image when

it shows up. Those are usually generated behind the scenes, or there's just one simple one for the whole site. But if you want to make things really cool, you can have one generated per item; so if you've got a blog, it'd be per blog post. So this is the prompt: an Open Graph image generation feature. The persona is: you're an experienced Ruby on Rails developer tasked with implementing a production-ready feature. You write clean, well-tested code following Rails conventions and best practices. You consider edge cases, performance implications, and maintainability in your implementation decisions.

The task is: build an automated Open Graph image generation system for the Post model in a Ruby on Rails application, the Post model being like a blog post. The system must generate Open Graph images, trigger regeneration when the post changes, store generated images, and the list goes on. We're probably only covering 20% of the prompt at this point; it continues on, and you can see the full prompt inside the show notes. It's got context. It's got constraints about when to do things, testing, evaluation criteria, how to

handle image validity, how to write tests, how to manage background job processing, terminal events. So it's a great example of how you can structure a prompt so it'll just go do amazing things. The second one that we have here is a fitness video generation prompt. So maybe you want to generate a video using one of the video tools, and you want a high-energy, cinematic fitness video. It would go something like this. The persona would be: you're an elite cinematographer specializing in high-energy sports and fitness

content, with expertise in creating motivational visual experiences reminiscent of Nike commercials, Under Armour campaigns, and boxing-film training montages like Rocky or Creed. Your task is to generate a cinematic fitness montage video featuring intense weightlifting, functional movement patterns, and high-intensity exercise. The video should evoke motivation, et cetera, et cetera. Again, there's context, and there are constraints: the resolution is 1080p, the frame rate is 24 frames per second, the cut frequency is 1.5

to three seconds, which is when the scenes change. Color grading, you can provide all of this as constraints, and it'll have some evaluation criteria. These are just really good examples; now that you understand the persona, task, context format, you can see it in action, and then you can adapt what you're doing day to day and start creating your own prompts that work that way.

Okay, so that wraps it up for the engineered prompts. If you stop this episode right here and just use what you've learned, you will forever change the way that you operate with LLMs. You'll get higher quality results, guaranteed. But there is one additional piece of bonus material, almost like a secret level to this, which, when you unlock it, really improves a lot of your prompting. And that is meta prompt engineering. This is where you're going to use the agent

to help refine your handwritten prompt and turn it into something that's truly agentic. And Allen, you talked about this earlier in the show, about a prompt like this that you have. Can you talk about it a little bit more?

Allen (49:36)
Yeah. And I'm going to link this one in the show notes. So I created a prompt engineer to help me create my prompts. This is what we call meta prompting, where we actually use a prompt to create a prompt. And so my prompt is pretty simple. It's going to act as a professional prompt engineer.

Donn (49:47)
Yeah.

Allen (49:52)
The one that I'll share is specifically designed for use with Claude 4.5 Opus, but it can be used with 4.5 Sonnet, and I have another one that's better geared towards use on a different platform.

What I want my prompt to do is this: I'm going to tell it what it is that I want to use it for, and it's going to ask me questions based on my use case. It's going to ask me two to four targeted questions covering the objective, the target model that I want to use, the domain (so, which industry use case), the technical level, and whether there are any constraints that I want for my prompt, like length, tone, compliance. Cost is a big one when we're talking about context window.

Donn (50:29)
Yeah.

Allen (50:29)
And what is the scale of my use for this prompt? Is this one-time, or is this for production, where I'm going to be using it over and over again? And, to your point, what I've started to implement in most of my prompts is success metrics. How do I gauge success of the result from a prompt? Is it going to be a set of KPIs? Is there a rubric that we're going to use, or is it just subjective? Am I going to look at something like text and say, I like this or I don't like it? Once it gets

all that information from me, it actually goes into building the prompt. All right. ⁓ and so it'll, it'll, it does. Yeah. So, so my prompt will actually ask you those four to five questions to cover those different categories we just discussed. Once you respond to it, it's going to go into actually building the prompt.

Donn (51:03)
Is it interviewing you during this process, or how does that work?

So a quick clarifying question. When you're working with your prompt, I haven't looked at it, so I'm completely green here, which is good. When you start this, and forgive me if I missed it, do I drop in my existing handwritten prompt, or is it interviewing you from the beginning?

Allen (51:23)
Yeah.

It's not going to interview you from the beginning. You're going to have to give it a prompt to say, all right, I want to create a prompt today that acts as an engineer that's going to help me build my next app, and the app is going to be such and such. And once it gets that little bit of information from you, it's going to probe and ask you for more important information, so that it knows how to structure the prompt that it's going to build.

Donn (51:40)
Okay, so you need to kind of seed it with something, and then it starts interviewing.

Okay, yeah, so sorry to interrupt. You had talked about how it was now going to go through the steps, right?

Allen (51:57)
Exactly. So once it goes into building the prompt, it's going to assign a role and context. It's going to assign instructions. It's going to structure the inputs: basically, the expected format, the variations, and how to handle missing data. It's going to

structure the output format, a precise structure with a template or schema. It's going to look at edge cases as well. Edge casing in developing apps is super important, so it's all built in. Moving on, it'll apply the following techniques. It uses XML tags. Yeah, XML tags are

Donn (52:15)
Yeah.

Yeah.

Allen (52:34)
I would say a standard now for prompting with Claude.

Donn (52:37)
Can you explain a little bit about the XML tags for maybe some of the people that don't know? Like, are you putting the XML tags as part of the formatting or is it kind of separating different chunks of the prompt or what is it?

Allen (52:47)
It's separating different chunks of the prompt. It's setting up instructions, examples, constraints, context.

Donn (52:52)
Okay, so those

are the different XML tags: instructions, examples, constraints, whatever. Got it. Okay, cool.

Allen (52:55)
Exactly. They organize the more complex prompts.

Yeah. And so it's going to take all that information and clean it up into a nice structured prompt. And then, you know, I've gotten prompts before and just plugged them into a project. I test them to see if I like the way they work. If I see one isn't operating the way I want, I'll take the prompt back and say, hey, listen, I didn't like the way it does this. I'd like to change it. Can you make some recommendations?
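The XML-tag layout Allen describes, separating instructions, examples, constraints, and context into their own chunks, might look something like this. The tag names and contents here are illustrative; Claude doesn't require any particular set, just consistency:

```xml
<role>
You are a senior software engineer reviewing pull requests for a small team.
</role>

<context>
The project is a TypeScript web app maintained by a single developer.
</context>

<instructions>
1. Summarize the change in two sentences.
2. Flag bugs and unhandled edge cases.
3. Suggest one concrete improvement.
</instructions>

<constraints>
Keep the review under 300 words. Respond in markdown.
</constraints>
```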

Donn (53:05)
Mm-hmm.

Yeah.

Allen (53:23)
And it'll ask me, like, here's what I think we can do. Do you want to implement it? Say yes or no. And it'll spit out a new one. Usually I ask it to give me the prompt in markdown so I can copy-paste the full prompt right out of the window. Then I put it in the instructions and keep it moving. Yeah, keep iterating.

Donn (53:34)
Yeah.

Keep iterating,

Yeah, I think that's something I have down in our notes, and I wanted to make sure folks knew about it. You're going to get a finalized prompt out of a result like this. It might get you 90% there, it might be 99% there, but

you're probably going to need to optimize and iterate on it. I've written prompts that were 2,000 words long to help automate insane parts of my software engineering career. But what I always realize is, it'll be executing and then all of a sudden I'll think, oh, I totally forgot this really random edge case that I need to throw in. I'll stop the execution, go back in, do exactly what Allen just said, and say, hey, I need to add this new constraint, this new little piece of information. Please update the prompt to reflect that. And the good news is, sometimes if those prompts are numbered, like

there's eight steps you've got to accomplish, and you realize step six needs to be something different or there needs to be a new step in there, it'll actually put that step in and then renumber everything else. It actually saves you a bunch of time. But of course, that's AI. All that to say.

Allen (54:36)
Yeah.

Donn (54:37)
Allen's is probably, from what I'm hearing, way more advanced than mine. I have a link that I'll put in the show notes next to Allen's. Mine is a much simpler agentic prompt builder. It gives the agent a persona, that it's a meta prompting expert, and that's it.

It then asks it to reformat your prompt into a persona, task, context, constraints, evaluation, and terminal conditions, plus a couple of notes. And I do use a couple of XML tags like you do, just for formatting and grouping purposes, because the agent seems to understand that quite well. If I were a listener of the show, and I am as I speak, I would experiment with both and see which one works better for you. I have a feeling Allen's is probably going to be a lot more advanced,

so try them out, but just realize that nothing is perfect. You're just gonna have to play with it. And again, we're talking about the bonus level here. We're talking about meta prompting. If you're just doing a formatted prompt that's engineered, your results are gonna be amazing already. This is taking that amazing prompt that you've already written by hand and really polishing it to make it pop. And this is where the amazing things are gonna happen. You're gonna see some great stuff out there.

So there's that, and then you can use these types of prompts for all different types of things, you know.
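A skeleton of the simpler builder Donn describes, one XML-tagged chunk per component he lists, might look like this (all the wording is illustrative):

```xml
<persona>
You are a meta prompting expert who turns rough, handwritten prompts
into engineered ones.
</persona>

<task>
Reformat the prompt I paste below.
</task>

<context>
The rewritten prompt will be reused many times inside a coding agent.
</context>

<constraints>
Preserve my original intent. Return the result as markdown I can copy out.
</constraints>

<evaluation>
The output must contain a persona, task, context, constraints,
evaluation criteria, and terminal conditions.
</evaluation>

<terminal_conditions>
Stop once I confirm the rewritten prompt is approved.
</terminal_conditions>
```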

Allen (55:50)
Yeah, you know, one of the things that I did with that prompt: Anthropic had actually released a Build with Claude prompt engineering guide. And so when I went to create that prompt, I fed it the link and said, hey, help me create a prompt engineering prompt that utilizes all the best guidance and instructions from this guide.

Donn (55:59)
Yeah, they did.

Mm-hmm.

Allen (56:15)
Most of what's in that prompt is a result of Claude looking through that document and saying, all right, let's introduce these best practices.

Donn (56:19)
of that guide.

Yeah, you know, we've talked a long time today about

prompts. We started with very simple examples, you know, just a few words, the one-sentence prompting that we've all done. We went into adding some additional details and clarifications to make a prompt functional. And we ended with really engineering your prompt with these constraints and components. If you fill out these components, your results are not just going to get a little bit better. They will be immensely better. And one last little tip here that I forgot to mention is that when you're building your prompts,

and this could go for any type of them, simple, functional, engineered, one of the tricks that I love to do, and I know you do this too, Allen, is at the end of my prompt I will say this: ask me clarifying questions one at a time before you begin. What that will do is put the agent into an inquisitive mode, and it'll ask me the questions it needs clarification on, which will just make the execution of your prompt so much better.

Allen (57:22)
Yeah, that's my preferred way of working with the model as well. You stay aligned on your goal the whole way through when you're working like that.

Donn (57:29)
Well, that wraps it up for the main part of the show. We're introducing something new here. We are a new show anyway, but we want to showcase some of the things that we're working on and where we're really using AI. So we've got a little final segment here. It's just called, you know, what we're building, and we'll talk about it for a few minutes. The thing that I'm currently working on is a timer app for

fitness. So if you're into any type of fitness and you need a timer app, I created a free one. It's called For Time. I learned this term the other day, it's funny, it's called artisanal development. The timer app was written back when it was artisanal code, meaning written by hand.

Myself and another developer rewrote it like two, three years ago, before AI was really around. And now I'm using AI with it. Just last night I added a feature that probably would have taken me like three days before. It took like an hour. Just absolutely amazing. And like I said early on, I'm using an MCP server to actually interact with it. I'll have an emulator running, and I recorded this; the video was just too long to send to you last night, Allen.

The amazing thing is, I told it, when it was done building, to install the app on the emulator. And then I said I wanted it to go through the app and verify it worked. I mean, you've used this app too. I didn't tell it how to use the app. I didn't tell it what the buttons were. I didn't say anything to it. I just said, install it, make sure it works.

It went through the onboarding wizard, that little introduction thing, and got through that. It found the main view. It found where the timer was. It found out how to navigate and swipe it. It did it by itself, 100%. Dude, it was completely mind blowing. And there was another run that happened while I had just stood up to go for a walk. So.

Allen (58:59)
Yeah.

That's incredible. Oh my God, it's a whole QA department. That's incredible.

It's just,

it's scary. I mean, it's awesome that you can do that. For someone like yourself who is working on these apps and just needs to get things done, normally you'd have to wait for all this red tape to keep a project moving. Now you can do it yourself.

Donn (59:34)
Yeah, I had it validate that way. And then I was like, you know what, can you actually write some automated tests to do this too? And it wrote all the tests, and I verified that they were correct. I was like, my God, this is amazing. I also have an email marketing platform that I wrote myself, again artisanal, that is now being built with agents. I connected a Chrome DevTools MCP to that, which allows the agent to see the web app. So I can actually say, hey, the button is off the screen, or something's too wide. Normally you'd take a screenshot,

drop it into the chat, and do it that way. Now it's like, okay, I loaded it up, and you can see it in Chrome, and it'll say this window is being operated by a testing agent. I'm just like, what the? It was just operating itself. It figured out how to log into the app without me telling it to. It's like, I found a login inside of your seed data, I'm gonna log in with that one. I'm like, my God.

Allen (1:00:11)
That's.

God. Really?

Just be careful what you put in there. Holy

Donn (1:00:24)
Yeah, so anyway, that's been

a lot of fun. For me, it's just been experimenting, learning a lot about the different agents and tools out there, and building features with them. I have really enjoyed hearing from you, though. You recently built something that's really fun for you and your friends. Do you wanna talk about that?

Allen (1:00:38)
Yeah, well, really quickly, by the way: if you do work out, I would highly recommend checking out Donn's For Time app. I use it all the time. It's excellent. And, you know, it saved me a lot of money; I was going to have to buy one of those Rogue clocks or something else to monitor my EMOMs or just anything for time.

Donn (1:00:46)
thank you.

Allen (1:00:59)
You really put every single thing into that app that anyone would use. It's such a full-featured app. And it's incredible that you did it artisanally, which, my God, sounds so antiquated. It's just so funny that that's what we're calling traditional coding at this point.

Donn (1:01:07)
Yeah.

Allen (1:01:15)
And the new way that you're working with your apps, letting an agent actually go in and view them and make edits, is so cool. Because up until this point, I've been doing the exact same thing, just screenshotting all the bugs, the visual bugs, the bugs in the code, and sending them into the chat, and it's saying, okay, let me go back and fix this. But the fact that it can have its own eyes, I mean, it's really just a full

Donn (1:01:28)
Yeah.

That's crazy.

Allen (1:01:37)
service agent at that point. So yeah, that's cut down on so much time, which is gonna allow you to build so much more. It is scary, and it's also very exciting at the same time.

Donn (1:01:41)
We'll have to do it like that.

Yeah, I'll have to show you how to get set up for your projects, because you'll be mind blown by it too. It would probably be very useful to show, too. Maybe we'll do a show or a live show sometime and say, hey, here's how it kind of works, so people can see it.

Allen (1:01:56)
Yeah.

It would be really helpful on this one project that I'm working on, where I'm putting together cards, just visual cards. They're generated using Nano Banana, either Nano Banana 3 or Nano Banana Pro. And I've had some issues, some visual glitches, with the animations: when a user opens a card, the card comes out the wrong way, or I don't like the way the card opens up. So right now it's being developed on the web.

Donn (1:02:23)
Is this a web app or is it a mobile app or what is it?

Allen (1:02:28)
And I have hopes of making it a web app as well. It's probably about 4,000 lines of code at this point, of which, you know, I haven't written any. It's insane, but it's a fully functioning app. This is a concept, something that I was pretty passionate about building at one point, and it just went on the back burner because there wasn't an LLM that could really support it in the way that I wanted to build it.

Nano Banana 3 is fantastic. And one of the reasons it signaled to me that it was time is that Nano Banana 3 will allow you to extract elements of an image, whether it's text or physical elements of the image, and edit them. So if you were to generate an image of a beach and there was a surfboard and a couple of chairs on the beach, you could say, all right, I don't want three chairs, I want two chairs, and I want them to be green instead of blue. Take the surfboard

out and add a lifeguard chair. So you have full control over the images that you're generating. And also, it's way better at text. I don't know if you'd been generating images with LLMs previously, but if you were to put something as simple as "Donn Felker" on an image, it might type like D-O-N-A-F-L-E, some weird symbols. Yeah, you just have no idea.

Donn (1:03:22)
Yeah.

Thank you.

Yeah, terrible.

some random character.

Allen (1:03:47)
And if you wanted to fix that, well, forget it. I mean, it would change the whole image. There was no way to do it. It was just an impossible thing prior to this model coming out. So that's one of the projects I'm working on, and I'm really excited about using Next.js.

Donn (1:03:50)
Oh, forget it.

What are you using to build

it? I know you said Next.js just now, but everything is kind of new to you, and I use all kinds of models and tools. Are you using Cursor, are you using Claude? What are you using?

Allen (1:04:12)
So you had recommended using Conductor as a UI for Claude Code. And I took this as an opportunity to learn how to use Conductor and to dive a bit more into Claude Code as well. So that's been my main setup, and I have to say the experience of using both has been incredible. I do have some questions about Conductor; I find it easier to navigate the Cursor interface and make some of the changes that I want to make.

With Conductor, it might be something simple, like toggling something on and off, so that I can make the edits I want to make. But yeah, I've really enjoyed using Claude Code through Conductor. On a separate project that I'm working on, I'm using Cursor, and the reason for that is simply that I don't want to burn out my context window on Claude.

Donn (1:05:00)
Right?

Allen (1:05:01)
Yeah,

So I'm just paying for the, whatever it is, $20 a month Cursor subscription, and I can use different models to build this chatbot that I'm torturing my friends with in the group chat. But yeah, this is fun. You know, honestly, if I was doing it for work, I'm not sure I'd be as excited as I am

Donn (1:05:14)
That's the other project you're working on, right? That one's pretty cool.

Allen (1:05:24)
about doing it right now, just because these are really all passion projects. You know, we're on this platform called GroupMe, which, if you aren't familiar with it, is Microsoft's group messaging platform. And it's a great platform, a very simple messaging platform we enjoy using. It has a lot of good functionality that the group likes, and that's why we're still there. They had introduced

Copilot as an AI assistant in chats, but Copilot is so sanitized, and it's not very fun. And, you know, in a group chat with the boys, you want a bot that can keep up. And yeah.

Donn (1:05:56)
Just, you can make it

like a David Goggins one, where it's just calling you out on nonsense, right?

Allen (1:06:02)
Yeah.

Yeah. You can go so far past that, and some models are a little bit more liberal in allowing you to be, I guess, more NSFW and ask questions. I think the guys are getting a kick out of it. I know they're not really enjoying me testing in the chat, but over the last couple of nights I've been able to put together some different personas for the bot to come into the chat and just act out.

You know, just ridiculous shit. One of the craziest things, by the way. So one of the things I want to say is that I really enjoy building with Claude Code, and I really enjoy building with Gemini. I haven't used 5.2 Codex yet, but I heard it's really good. But I really like using Grok for these personas. And the reason for that is because, one, Grok allows you to have a little bit more fun with the personas that you create.

Donn (1:06:31)
to that.

Allen (1:06:52)
Like, there's safety barriers. You can't do anything illegal, obviously, but you can be a little bit more NSFW, which is what I think we're looking for in the group chat with the boys. And the other thing is that Grok has the ability to search real-time news, RSS, and X. So, you know, you could ask questions about what's going on in the news right now, or you can ask questions about the weather for this weekend. And so not only is it this

bot that's hysterical, or has the potential to be hysterical, it's super fucking helpful, dude. You know what I mean? You could ask it, what are the top trending stories on X? And it replies in the persona. Yep. So it's just a lot of fun. Those are the two projects that I'm working on right now that I'm having a blast with. And I have a couple of other things in the pipe

Donn (1:07:24)
Mm-hmm.

And it would reply in that persona that you have given it, which could be totally unhinged.

Allen (1:07:42)
that are a little bit more advanced. I'm gonna need your help; we haven't talked about it yet, so I'm gonna hit you up after the show. But I've got some good ideas cooking, and yeah, I've been enjoying the shit out of working on some of this stuff. And, like last night, to kind of land this plane, and I'm sorry, I've been talking about this for a long time, but I didn't talk a lot during the show, so I'm making up for it, I wanted the bot to also have the ability to generate images.

Donn (1:07:44)
That's good.

That's awesome.

Good.

Allen (1:08:09)
And that's one of the abilities the Grok model has: image generation. I believe it's Imagen or Grok 2 Image; I forget the model's exact name. But for whatever reason, it was just tough for me to figure out how to implement this in my code. And I was going back and forth. You know, I was actually working in Gemini, because that's where I had started the project, and then I

Donn (1:08:19)
Yeah.

Allen (1:08:32)
just started talking to Claude in Cursor and was able to fix it, to resolve it, so quickly. So I set up two different triggers. It basically operates off of a webhook. When you type a note, you could say "grok" and it'll do one thing, and then you could say "grok send" and it'll do a different thing. To give you some perspective, when the group goes, grok,

Donn (1:08:51)
Okay.

Allen (1:08:56)
you know, tell me what's going on right now in the news, it'll respond with a chat message; it'll respond with the news link. When you say "grok send" and then type your prompt, it'll create an image. That's how I set it up. So.

Donn (1:09:08)
Oh, so it'll reply

inline. You can have it make images and reply inline in the group chat with images.

Allen (1:09:14)
Yes. Yeah. So right now it's sending an image generation link, and they're temporary links, just for maximal privacy.

So basically, after the webhook is triggered, the message is read by my Python script. And my Python script is going to say, this is a relevant trigger, or it's not a relevant trigger. If it's not a relevant trigger, it just passes it by; it doesn't store anything. And when it recognizes the trigger, it treats the trigger plus everything after it as the prompt. And then it'll ping xAI, or it'll

Donn (1:09:31)
Mm-hmm.

Mm-hmm.

Yeah,

Allen (1:09:51)
ping

Donn (1:09:51)
the API or whatever.

Allen (1:09:53)
API, and then it'll reply with the link.

Donn (1:09:56)
You did this, like you've never deployed anything like this before, right?

Allen (1:10:00)
No. You know what? It's funny. I was sitting in bed watching the NFC championship game last week, and this is something that I had wanted to build. After building all week on the cards app, I was feeling a little more comfortable with being able to spin up a project pretty quickly. And so I said, look, fuck it. I've got an hour, I'm sitting here. Let me see if I can just get some sort of MVP out, and

I think it took me all of 45 minutes to get it working, which was incredible. And then over the last couple of nights, I've just been making little updates to it. At first it was just an LLM that responded in a persona, which was fun enough for making jokes. And then I decided I wanted it to be a little bit more helpful, so I pulled in the live data from news RSS and X.

Donn (1:10:29)
That's wild.

Allen (1:10:53)
That was working; that was a huge win for me. And then two or three nights later, after just experimenting with some more personas and letting it have some fun in the group, I was like, I would love it if it could just send images too, if I could tell it to create an image and it would send a link. And this was probably the hardest piece to overcome, just because it was introducing that second element with the second trigger, and there are some issues with how GroupMe will receive the link.

Donn (1:11:15)
Mm-hmm.

Allen (1:11:22)
Does it have to be stored as a GroupMe image in their source library for it to show inline as an actual image? And I just said, you know what, let's just have it sent as a link. The temporary link is, in my opinion, the best way for these group chats anyway. It just disappears, kind of like a Snapchat feed, basically: you send a picture and it's gone. So that's how it functions right now.
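The flow Allen walks through, webhook in, trigger check, prompt extraction, API call, reply with a link, can be sketched in a few lines of Python. The trigger words come from the episode; the function names and placeholder replies are invented, and the actual GroupMe webhook handling and xAI calls are left as comments:

```python
# A minimal sketch of the trigger routing described above. A real
# version would sit behind a GroupMe webhook and call the xAI API.

CHAT_TRIGGER = "grok "        # "grok <prompt>"      -> persona chat reply
IMAGE_TRIGGER = "grok send "  # "grok send <prompt>" -> image generation

def route_message(text, sender_type="user"):
    """Return (kind, prompt) for a recognized trigger, else None.

    Non-trigger messages are passed by without being stored, and the
    bot never responds to its own posts.
    """
    if sender_type == "bot":
        return None
    lowered = text.lower()
    # Check the longer trigger first, since "grok send ..." also
    # starts with "grok ".
    if lowered.startswith(IMAGE_TRIGGER):
        return ("image", text[len(IMAGE_TRIGGER):].strip())
    if lowered.startswith(CHAT_TRIGGER):
        return ("chat", text[len(CHAT_TRIGGER):].strip())
    return None

def reply_for(text, sender_type="user"):
    """Build the reply the bot would post back to the group."""
    routed = route_message(text, sender_type)
    if routed is None:
        return None  # not for us: ignore, store nothing
    kind, prompt = routed
    if kind == "image":
        # A real version would call the xAI image endpoint here and
        # return the temporary image URL it gets back.
        return f"(temporary image link for: {prompt})"
    # A real version would call the chat endpoint here with the
    # persona and real-time news/X search enabled.
    return f"(persona reply to: {prompt})"
```

Checking the longer "grok send" trigger before "grok" matters, since every image request also starts with the chat trigger.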

Donn (1:11:39)
Yeah. That's cool.

That's awesome. Cool. Well, everybody, thanks for tuning in and watching or listening, wherever you're at. Thank you again. It's been a long one. We hope you found it useful, and we will catch you next time.

Allen (1:11:53)
Thanks, guys.