AI First with Adam and Andy

In this AI First mini episode, Adam Brotman and Andy Sack explore the real-world implications of AI agents through Adam’s hands-on experience building one with OpenClaw. 

They break down what makes an agent different from traditional AI tools, including proactive behavior, memory, computer use, and the ability to operate independently. Adam shares both the excitement and the friction, from complex setup and security concerns to inconsistent performance and early-stage limitations. Despite these challenges, the broader implication is clear: AI agents are evolving into digital coworkers that can monitor information, take action, and continuously improve based on feedback. 

For executives, this represents a shift away from task execution toward managing outcomes and guiding intelligent systems. The conversation highlights how these tools can increase productivity, reduce manual work, and personalize how work gets done, while reinforcing the importance of control, governance, and trust as adoption scales.

What is AI First with Adam and Andy?

AI First with Adam and Andy: Inspiring Business Leaders to Make AI First Moves is a dynamic podcast focused on the unprecedented potential of AI and how business leaders can harness it to transform their companies. Each episode dives into real-world examples of AI deployments, the "holy shit" moments where AI changes everything, and the steps leaders need to take to stay ahead. It’s bold, actionable, and emphasizes the exponential acceleration of AI, inspiring CEOs to make AI-first moves before they fall behind.

Forum3 (00:00)
But tell me what are the implications for our audience?

Adam Brotman (00:03)
The implications are that

OpenClaw and OpenClaw-like agents can become, in the very near future, I believe, our digital coworkers and teammates. And by the way, I don't just mean in the workplace. I mean, our audience is in the workplace, but I'm thinking about moms and dads and leaders of PTA groups and volunteer groups. Think about it: if all of a sudden you could have a super smart, and this is in the future, reliable and secure entity that could use computers in the way that you do and was as capable

Forum3 (00:40)
This is AI First with Adam and Andy, the show that takes you straight to the front lines of AI innovation and business. I'm Andy Sack and alongside my cohost, Adam Brotman. Each episode, we bring you candid conversations with business leaders transforming their businesses with AI. No fluff, just real talk, actionable use cases and insights for you.

Forum3 (01:10)
Today, we have another mini episode. Last week, we talked about Claude Cowork. And today, we want to continue the conversation around agents, because Adam had the experience over the last two weeks of setting up his own OpenClaw. So we want to talk to you about that experience, share the highs and lows of it, and make sure that you understand what an agent is and what it means for the future of work.

Adam Brotman (01:16)
So, thank

Forum3 (01:39)
So with that, Adam, you want to introduce your experience? You want to talk about your holy shit moment with OpenClaw?

Adam Brotman (01:48)
Yeah, it was a holy shit moment, so I'm going to kind of build on Claude Cowork. You know, we had our own holy shit moment with Cowork, both you and I did, where it was sort of like, Adam, get on this thing, it's amazing. And we were like, oh my god. And I remember using Cowork, we talked about this last week, last episode, and I remember thinking to myself,

man, can you imagine if Cowork could sort of come to life? Like, instead of me just prompting it, if I could tell it, hey, every so often I want you to just, on your own, start doing some things, and also, remember our interactions and get smarter based on our interactions. So, this concept of: be a little more proactive, have a little bit more

learning and memory that kind of comes with it. And then, the parts of Cowork that were amazing were its computer use abilities, like you and I talked about, and sort of how it's not just a chatbot, it's this really capable tool. But while part of what makes it so capable is that it uses a computer, Cowork is still waiting for you to tell it to do something. And so it's still prompt and response. The promise of a science-fictional agent

was that it felt more human, like a real co-worker, like a real teammate, and in that sense would not just wait for you to prompt it, but would do stuff on its own and would get smarter as you used it more. And that's the promise of this OpenClaw that came out. Now, OpenClaw is, just for those that don't know, it is very cutting edge. It is not for the non-technical. So I'm going to say one...

caveat right now. I'm going to tell you about this thing I did with OpenClaw and my experience with it. I'm not very technical, so that's not saying much, but it took me a couple of days of frustration, as a non-technical person, of, you know, getting it set up, but also troubleshooting a bunch of stuff. And I wanted to do it in a very secure way as well, so I had to jump through all these hoops to make it secure, because OpenClaw, like

Claude Cowork, lives on a computer. I guess you could also host it in a virtual environment. But for security reasons and other reasons, it was kind of cool to put it on my own separate little Mac mini and bring it to life. I learned these terms myself, and I'm going to teach the audience: an agent, as we all think about it, is a combination of

a chatbot and a brain, but it also has computer use, it has memory, it's proactive, it can be scheduled to do things, it learns. Those are all the elements of an agent. And so they call that a harness. And what I learned was setting up the harness, where it's like, pick your brain, and pick the communication channels that you're going to talk to your agent in, and pick where it's going to live, and...

pick its rules and all that kind of stuff. You have to do all that with OpenClaw. And you have to do it in a way that's very technical. But once I got it set up, I gave it a name. I called it Jeffrey. And I was like, Jeffrey, here's kind of what I want you to think about and do. And I didn't give it access to all my own stuff. But it was amazing, Andy. What was amazing? Yeah.
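The pieces Adam lists, a brain, a communication channel, a place to live, and rules, can be sketched as a tiny data structure. This is purely illustrative: the field names and values below are hypothetical and are not OpenClaw's actual configuration format.

```python
from dataclasses import dataclass, field

@dataclass
class AgentHarness:
    """Illustrative stand-in for the 'harness' pieces described above.
    Field names and values are hypothetical, not OpenClaw's real config."""
    name: str          # the agent's identity, e.g. "Jeffrey"
    brain: str         # which LLM model powers it
    channel: str       # where you talk to it (Telegram, Slack, ...)
    workspace: str     # where its memory and files live
    rules: list = field(default_factory=list)  # standing instructions

jeffrey = AgentHarness(
    name="Jeffrey",
    brain="claude-sonnet",              # placeholder model id
    channel="telegram",
    workspace="/agents/jeffrey",        # made-up path
    rules=["read-only calendar access", "ask before sending anything"],
)
```

The point of the sketch: the harness is just the bundle of choices that a managed product like Claude Cowork makes for you, and that OpenClaw makes you assemble yourself.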

Forum3 (04:52)
Adam,

let me interrupt, because you talked about all the things that OpenClaw is and appropriately called it the harness. Cowork is a contained version of OpenClaw, right? Meaning Anthropic has set up the harness for you. So distinguish that from OpenClaw as you talk about it.

Adam Brotman (05:10)
That's right.

Yeah, so it's funny. I just learned this term, harness. But let's talk about what it is. It's the thing that sort of brings together, you know,

what parts of your computer it's able to use and how it's able to do that, what brain you give it, as in what model it's running as the LLM brain behind it. And also, where does its memory live, and its file system? So you've got some rules and a brain and a file system, and that's the harness. You're right, Claude Cowork, like Claude Code, is a form of a harness. But you don't get to pick what model

it's using, outside of picking which version of Claude. You're either using Opus 4.6 or Sonnet 4.6 or whatever. You can't just pick any old model. And you can't, I don't believe, self-host a model and use it with Claude Cowork. It knows how to handle its own files. And I'm probably not...

Forum3 (06:08)
That's right.

Adam Brotman (06:10)
I'm not technical enough to explain this correctly, but it is a form of a harness. That's right. And then it is a form of an agent, but it's constrained, like you said. OpenClaw, it was just bought by OpenAI. But my understanding is they're keeping it as a separate open source project, owned by OpenAI, run by the same guy who built it. And, you know,

you feel like you're in the Wild West with OpenClaw. You realize they're doing updates every day. And look, there's no system administrator that's hosting this for me. I am my own system administrator, and I'm not a good system administrator, meaning I have to think about updates and bug fixes and troubleshooting, and I've got to make sure that everything's on and working right. Which, by the way, makes it so dangerous, because

if you give any agent, including and especially OpenClaw, access to your computer, you better not have done anything where a bad person can get access to that thing, because now it has access to your computer, including credentials and access to any websites that you might want to log into, and files, and important documents, and you name it. And it can destroy them. It can

steal from you, it could ruin things. There's a lot of damage that these agents can do. So that's why, number one, even with Claude Cowork, which is using your browser and is logged into your accounts, you've got to be careful. In this case, it can be that and then some, because it's, you know,

it doesn't have all the protections built in that Anthropic has built into Claude Cowork. Sure. Yeah. Yeah, so the thing I'd say is,

Forum3 (07:41)
I mean, but that's both the disclaimer, the be-careful disclaimer, and inherent in that is some of the power. So why don't you just walk people through what you did? Like, you had to go to the Apple store, and what you purchased, and try to get to the aha of it all, disclaimer around security noted.

Adam Brotman (08:06)
So I went to the Apple store and I got a Mac mini, and, you know, it's fresh. I was able to set it up in a way where I installed this OpenClaw software onto it, where I wasn't logged into anything personal or professional at all. It was just a clean slate. And then it's got this OpenClaw, and I said, okay. And when you set up OpenClaw, it asks you

a bunch of setup questions, like, OK, how are you going to communicate with me? Do you want to do it on Telegram or Slack, or how do you want to do this? And of course, when you're on the Mac mini, you can chat with it through a browser, on the Mac mini terminal, if you will. And you can chat. But it said, outside of that, one of the benefits of OpenClaw was, do you want to be able to just text message me from your phone? I was like, I do.

And so it was like, OK, let's give your OpenClaw agent a name: Jeffrey. Let's give it a communication channel: Telegram. Let's give it a brain. What do you want to do? Do you want to use OpenAI? Do you want to use Claude? Do you want to use some other model? I chose Claude. And it's expensive, by the way. I'm not allowed to use

a one-size-fits-all account like we're all used to in our subscriptions. I had to set up a developer account, Andy. Like, you know me, I'm not very technical. So I had to set up a developer account, get an API key, and give that API key to this Mac mini harness so it can give a brain to my agent. I also had to give it an API key for my messaging bot platform, so it could have a messaging channel.

I get it all set up. I give it a brain. I give it a name. And I'm doing this in a very blast-radius-contained way, at least as best I know. And then I was like, OK. This is the part where my agent, Jeffrey, starts having capabilities that I'm not used to with Claude. Because it has the brain of Claude. It literally has a Claude brain.

I chose Sonnet 4.6 instead of Opus 4.6, just because it was less expensive, because I'm paying per token now, not the all-you-can-eat subscription model that I'm used to. I gave it Sonnet 4.6, which is a great model on a pay-per-token basis. And I started saying to it, all right, I want to take advantage of the fact that I can tell you when to start doing stuff, and you're going to do stuff even when I'm not prompting you. So I told it, hey, give me reports twice a day

on the following topics from the internet. And I also started having it do some research projects, and some other things. And what was amazing about the way it's built, OpenClaw, is that every day it has a log of everything I talk to it about and everything I do with it. And then it decides which elements of those logs it should commit to memory: Adam likes it when I do this,

he's working on this project with Andy over here, he's got this client over here that's doing this. You know, of course, I'm not giving Jeffrey confidential information, because this is all experimental. But what was amazing, Andy, was the first time I started getting, you know, text messages from my OpenClaw agent in the middle of the day, just like I asked it to, where it said, hey, I went ahead and woke up and checked this thing out for you, and I'm sending you this thing. And then I was like, OK,

why don't I give you access to my daughter's school calendar that I'm supposed to be keeping up with every day, with all the new news from the school? Like, can you do that for me? Can you stay abreast of all the activities that are just broadcast to all the parents of the school, and then just boil them down for me, without my having to remember to do that? And I want you to check on that calendar a couple of times a day, every day. And if there's something I should be aware of or reminded of, start

text messaging me reminders. And it was like, OK. And then I started to get a little braver and I said, you know, why not? And it's only in read-only mode. I'm not going to let it, like, post messages. And

so I started doing this, and what I'll tell you is, I saw the future. It is buggy. It is frustrating for me as a non-technical person. I'll fix one thing and it breaks another thing.

And it hallucinates and says it did one thing when it didn't do it. So no wonder you're hearing all these stories about these OpenClaw agents, where people are giving them access to their email and their calendar and saying, run my life for me, which it could. And then, sure enough, there's this story that went around, not funny, by the way, but interesting, about how it deleted all of somebody's emails. And you just start realizing, yeah, this thing

is still buggy and weird and makes mistakes, just like we're used to our LLMs doing. So I'm not going to give it something where it could cause real damage. But man, you can imagine, Andy, when these systems, and I'm not just saying OpenClaw, OpenClaw itself plus whatever comes out from OpenAI and Anthropic and Gemini and xAI, when they're secure and reliable and you can give them access to confidential

systems and logins that are important to your life, this is going to be amazing. It's going to be a true agent. And that was the big aha moment for me.
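The loop Adam just walked through, wake on a schedule, check a feed, ping only when something matters, then commit the durable parts of the day's log to memory, can be sketched in a few lines. Everything here is a toy: the 24-hour window and the `is_durable` heuristic are made-up stand-ins for judgment calls that a real agent's model would make.

```python
import datetime

def should_remind(event_start, now, window_hours=24):
    """Toy proactive check: ping only when an event is coming up soon.
    The 24-hour window is an arbitrary, illustrative threshold."""
    hours_away = (event_start - now).total_seconds() / 3600
    return 0 < hours_away <= window_hours

def commit_to_memory(daily_log, is_durable):
    """Toy nightly review: keep only log entries worth remembering.
    `is_durable` stands in for the model's judgment about what matters."""
    return [entry for entry in daily_log if is_durable(entry)]

now = datetime.datetime(2026, 1, 15, 9, 0)
bake_sale = datetime.datetime(2026, 1, 15, 18, 0)   # tonight: worth a ping
field_trip = datetime.datetime(2026, 2, 3, 8, 0)    # weeks away: skip it

log = ["Adam wants reports twice a day",
       "Adam said thanks",
       "Adam is working on a project with Andy"]
memory = commit_to_memory(log, lambda entry: "thanks" not in entry)
```

The design point is the split of responsibilities: the schedule and thresholds are dumb plumbing you configure once; what to remember and what to surface is where the model's judgment does the work.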

Forum3 (13:08)
And what is that like? So

embellish on that. So you see that. What do you think that means for our audience, the C-suite executive? I mean, you did this, and you've been, A, lit up about it, meaning excited. And it feels like you're learning every day. I have not yet set up OpenClaw. I may or may not.

Adam Brotman (13:23)
Yeah.

Forum3 (13:29)
But tell me what are the implications for our audience?

Adam Brotman (13:32)
The implications are that

OpenClaw and OpenClaw-like agents can become, in the very near future, I believe, our digital coworkers and teammates. And by the way, I don't just mean in the workplace. I mean, our audience is in the workplace, but I'm thinking about moms and dads and leaders of PTA groups and volunteer groups. Think about it: if all of a sudden you could have a super smart, and this is in the future, reliable and secure entity that could use computers the way that you do and was as capable of doing marketing, coding, organization, communication, negotiation, analysis, bookkeeping. All of these are skills that right now the models can do.

Forum3 (14:17)
And it's all, I think the one piece,

it's all those skills and it's customized to you and the way you work and the tone that you want it to communicate with you and what's effective for you.

Adam Brotman (14:27)
And that's right,

the tone and the rules. You're going to say to it, just like you would to a coworker: I don't want you to send any emails on my behalf unless I approve them, or, I'm going to give you your own communication channel, but I don't even want you to communicate without me reviewing it first. Another rule: I want you to double-check for this, that, and the other thing. You're telling it when to wake up and do things, when to send things,

where it needs to check with you, and where it doesn't. And by the way, Andy, this is the kicker. Imagine it's learning and remembering every time you said: you got that a little bit right, but I prefer for you to send that at night, not in the morning. Or, that was a good document you created, but let me give you some feedback on that document. What if every time you corrected it, and gave it advice, and gave it direction,

and even praised it, all of the above, it stored that, and it became embedded into the prompt, so you never had to prompt it about that again?
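That feedback loop can be sketched as nothing more than appending to the agent's standing instructions. This is purely illustrative; a real agent (OpenClaw included) manages its memory in its own way, which the episode doesn't detail.

```python
def apply_feedback(standing_instructions, feedback):
    """Sketch of the learning loop described above: each correction or
    preference is folded into the standing instructions that accompany
    every future prompt, so you never have to repeat it."""
    return standing_instructions + [feedback]

# hypothetical example: start with a base instruction, then layer on feedback
instructions = ["You are Jeffrey, Adam's assistant."]
instructions = apply_feedback(instructions, "Send the report at night, not in the morning.")
instructions = apply_feedback(instructions, "Double-check dates before sending reminders.")
```

In other words, the "prompt engineering" dissolves into an accumulating record of corrections: each one is given once and then applies forever.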

The implication is that it becomes like a coworker, in the sense that you don't have to learn how to be a prompt engineer. You just need to craft and guide your agent towards the outcomes you want. And I actually think, to really talk about the future of work, which is one of the themes that you and I are

talking about on these episodes, the future of work is going to be all of us, you and me included, Andy, becoming much more managers of agentic systems, as opposed to prompters, let alone doers. Like, we're going to become much more outcome managers. And that's

going to be true, I think, of everybody. I think everyone's going to have to become a little bit more of a domain expert, a little bit more of an outcome manager, as opposed to a task expert. You know, a lot of us have become really good at certain tasks, and some of those tasks are important things, like communicating and, you know, coding and strategizing.

Forum3 (16:22)
I mean,

just that distinction alone, which is managing by outcome and telling the agent, this is the outcome I want, go do it. It will then go do it and communicate with you along the way. And that's a big

Adam Brotman (16:34)
Yeah.

Forum3 (16:36)
Given this is a mini episode, is there anything that you want to conclude with, one concluding comment, either about your experience or about the future of work, that you want to leave our audience with?

Adam Brotman (16:36)
Yeah.

I mean, the one concluding comment is if you're not

super technical and in the AI bubble, I would not try to set up your own OpenClaw. It's too difficult. It's too scary. Instead, find somebody who has, ask them to show you their agent, and ask them to have this conversation. Because you have to experience it. It's exhilarating. It's still early. It's kind of broken. And it's also kind of cool.

Forum3 (17:09)
Adam, is it something that you will do, a mini episode where you'll actually show people Jeffrey? Yeah. Yeah. So I'll just say, I've been along for the ride as Adam went to the Apple store and bought his Mac mini

Adam Brotman (17:22)
Yeah, yeah.

Forum3 (17:32)
and went through the sysadmin process, which took four hours of a Sunday afternoon. And he was dragged down by having to engage with Terminal on his Mac, which he almost never does, but he had to. And then as he did it, he named it Jeffrey, after his uncle Jeffrey, for those of you who are close to Adam.

Adam Brotman (17:32)
All right.

Forum3 (17:57)
So Jeffrey's this, like, smart agent that's now doing things and communicating, and its compounding learning, both about Adam, how he works, and what he's working on, is making him more effective and teeing up things that would otherwise fall through the cracks. It's been amazing to watch from my side. I've had my own experience with Cowork, but it really does highlight

2026 as the year of the agent, and we are really redefining work. I mean, I think both Adam's work and mine, just from an outcome perspective, is now at least 50%, if not more, Claude Cowork and OpenClaw. There are still lots of phone calls that need to happen, et cetera. But in terms of output and tasks being done, working towards outcomes,

I'd say it's about 50% of our work, and we're at least 50% more productive, if not more. So I'll leave that as my final comment for the audience.

Forum3 (19:02)
I think the next mini episode we do will actually show this to you and continue on this path because this is a rich vein for all of us and it's at the frontier. I will also say to you that Forum

Forum3 (19:16)
3 is looking at applying agents specifically in restaurant and retail, and we're going to have some exciting announcements coming in the next 30, 60, 90 days. So,

just bringing you along on that front as we charge ahead into the bold frontier. With that,

Thank you for listening, as always, to AI First with Adam and Andy. For more resources on how to become AI First, you can go to our website, forum3.com, download case studies, research briefings, and executive summaries, and join our email list. And of course, we always invite you to connect with our AI First community, a curated hub and network for leaders turning AI hype into action. We truly believe you can't over-invest in your AI learning. Onward.