Back in America

Is our education system ready for AI—or still grading with yesterday’s rules? In this episode of Back in America, Stan talks with Shahid, an award-winning fractional CTO and CISO with 35+ years in regulated industries, from medical devices to federal health tech. He argues that AI isn’t just a tool; it’s a colleague, a co-student, and a force multiplier—if teachers and teams learn context engineering and treat AI as a companion.

We dig into:

  • AI in education: hyper-personalized learning, teacher workflows, and why schools must let students “pair program” with AI.
  • Hiring in the AI era: why entry-level jobs are shrinking, how juniors can win by mastering prompts and fundamentals, and the risk of skipping a generation of talent.
  • Safety and ethics: lessons from life-or-death medical device software, where reliability, empathy, and human oversight matter.
  • Parents & teachers: practical ways to co-work with AI without abdicating judgment.

Clear take: AI can elevate learning and work—if humans stay in the loop and standards stay high.

What is Back in America?

Interviews from a multicultural perspective that question the way we understand America



Shahid, thank you so much for being here with us today on Back in America. Tell me, is our education ready for AI?

It is in certain areas, Stan, where people understand that this is not a tool for us to use as part of education, but an educator, and almost a co-student. So is education ready for AI? In those areas, yes: the teachers have to be using AI to teach better, at a hyper-personalized level, for languages, algorithms, et cetera.

That's one area. Then there's the other area: is the school ready to treat and teach students to use AI as a colleague, as a co-equal coder with them? And there the answer is no. We don't quite know how to teach people to use AI as a companion. So what you'll see is people using AI for minimal kinds of stuff. But in the real world, if we are hiring in the business world today, we don't hire one guy who knows 50 things.

We hire 50 people who know 500 things, and then we have to coordinate their work. And so we use AI to do that coordination. But education, of course, is currently woefully behind, because it doesn't know how to do that any better than the business world does.

All right, and we're going to come back to that. So let me try to introduce you.

So when I did some research, I learned that you are an award-winning fractional CTO, and you are going to tell us what that is, and a CISO. You've got a hands-on approach to technology.

You've got over 35 years of engineering experience. You've built secure, compliant systems for complex regulated industries, from medical devices to digital health for the federal government, among others. You blend deep technical skills with a real grasp of how the market and regulation shape innovation.

Yeah. Number one, as a fractional CTO: my background is as a computer scientist, so I was trained to write code. But as a fractional CTO, what I've spent the last couple of decades doing is coming in, almost parachuting in like a special-ops guy, to assist companies that may already have a CTO but need a coach for that CTO, or that might need a CTO because theirs is in transition. The chief technology officer in product-oriented companies is literally the chief technical person there, who makes decisions, et cetera. Then I've also done work as a chief information security officer, or CISO. That, of course, is exactly what it sounds like: software needs to be secured, infrastructure needs to be secured.

Now, I like to say that I'm hands-on because a lot of CTOs at my skill or experience level, and I'm at 35-plus years now since I got my degree, having worked on lots of different projects, a lot of those CTOs no longer code. This is a pet peeve of mine: a CTO who is supposed to be a coder but does not code. Today that's not right. The tools are coming out fast and furiously, and if we're going to instruct younger people, whether from an educational perspective or a training perspective at work, you have to know something about what you're training them on in order to do that training. And so that's why I like to say that I'm hands-on.

Tell me, what pulled you toward this kind of high-stakes engineering, working for highly regulated companies? Is there an attraction for you to these kinds of industries?

Yeah, the principal attraction for guys like me is impact, right? I've worked on medical devices that I know save lives. I'm an entrepreneur; I work long, long hours and long days. At the end of a long day, I don't wanna say I did a fantastic job on a new cartoon-creation app.

Those are not bad, I use them for my grandchildren, but that's not for me. For me, I like to say that when I finished the job, it helped somebody save a life, get a job done, et cetera. So it's mostly about impact. And doing so in a heavily regulated industry allows me to use my special skill set, which is that I'm one of those few engineers who can speak to humans as well as machines. And that's not common. In order to do all of this really highly technical work in regulated environments, you have to understand the law. So I'm part lawyer: I have to read the law, understand the regulations, know what compliance means, et cetera. And then I have to be an engineer.

There are a lot of people who can do one or the other, but not both. The best of us at this level of skill like to say that we have to understand a bigger world, and that the code we generate from AI has to live in the real world, where you can accidentally spend more money, you can accidentally kill somebody, you can put this code into a microchip that might be in an airplane that could fall out of the sky. It's really important. And so I just like to be working on important, impactful things. That's why I get up in the morning. I'm close to retirement age, but I don't see myself retiring anytime soon.

So you spoke about interpreting the code that comes from a machine and keeping your humanity. Take me back to a time when you were building something and you faced a bug, or the system crashed, and how it really impacted how you see human judgment versus machine logic.

Yeah, that's such a great question, because it takes me back. Back in the late nineties and early two thousands, I worked on a very large Class III medical device in a blood bank. With blood banks, obviously, one person gives blood: I could be the one donating blood, and you could be the one receiving it, because you got into a car accident and you're at the hospital and you need the blood.

So it's a super critical thing that blood be taken properly, tracked properly, tested properly, et cetera. I worked on this blood-banking software, and there were multiple times over the years where we saw bugs but didn't understand what would happen if we didn't fix them.

A lot of times, when you're working on a business app and you've got a bug, the wrong color shows up on the screen, and if nobody tells the developer, nothing happens. Here, when a bug occurs, somebody's not gonna get their blood, or they might get the wrong blood type, right? You're supposed to get O negative, but you got B positive; that's death, right?

It's not a small thing that happens there. Having worked in these environments, I understand the bug when it could impact a life. That's where your humanity comes in, right? That's where you spend the extra 40 hours that weekend trying to get through and make sure that this code can get out the next Monday.

And I've lived through that. I've worked on probably a dozen or so medical devices, and I know the code that I wrote had bugs that probably accidentally harmed people, and that stays with you. I can actually remember two or three cases, which I can't talk about publicly, where yes, it did.

We know that people were harmed. Now, nobody would arrest me or sue me, 'cause it was a team, right? Medical devices do harm all the time, just like planes do fall out of the sky when things go wrong. But that's where your humanity comes in: you see that every decision you make matters. If AI is gonna generate the code for you and a human doesn't watch, well, the AI doesn't care whether it's being used to save a child's life or draw a cartoon figure.

It's just generating code. But you and I know that one requires a high amount of reliability, a high amount of safety checks, et cetera, versus the other. That's where we have to come in and recognize that no matter how much AI comes in, if the target of the software we are building is other human beings, you need us in the middle to verify that the AI doesn't harm the other human being.

So you talk about AI. When did you realize that AI was not just another tool, but something that would change the way we learn, the way we work, the way we code? When was that?

Yeah. Some of us worked with AI very early on. As we know, late 2022 is when ChatGPT was released, but that version of the technology was available in open communities a few years earlier. When we used it before ChatGPT came out, you had to wire up a bunch of stuff.

You had to write a lot of code, write a lot of prompts, put those in, and then wire it all up. And so that never showed us the impact this could have. But then ChatGPT came out: that exact software, with a direct interface, human to human, like you and I are talking right now.

The very first time I realized it was probably the second or third month after ChatGPT came out.

I'd already been using it for a while; I signed up for it literally the weekend it came out. So I'd been using it for code and other things, and I knew what it could do. But the one really cool thing I ended up doing: I have family members with chronic conditions, like diabetes and cancer, et cetera.

So when I started asking it, giving it a little bit of context, for family members who needed medical advice, even when I already knew the answer, that's when the light bulb went off: I was getting real information that a real doctor would give me, that I could make actionable.

That's how I knew that it's now a colleague, not a tool that I use. Just like I have friends who are doctors: before ChatGPT, I had oncologist friends, I had cardiologist friends, and whenever I had a problem, I called my cardiology colleague or my oncology colleague, and they would help out. They would brainstorm, they would discuss and say, did you try this paper? Did you read that paper? In essence, 90% of that I can now do right there with the AI. That's what I realized the first time.

Now take that output, and then bring in the human colleague after that, and things get even better, because the humans can now take the obvious stuff and make it unobvious. Or the humans can say, oh wow, I never thought about that. You know what? I just saw a patient two days ago where we could have done this. We didn't even think about it.

Let me go back and do it with that patient as well, or extend it. So it became crystal clear that the AI is your colleague, it's your friend, it's a co-worker.

How did you feel at the time, when that light bulb came on? How did it make you feel?

At first I was like, no way. I said, this can't be right; it has to be a fluke. Because I know the technology behind generative AI well enough to say, I know it's gonna generate English, 'cause that's what it's been trained on, et cetera, et cetera.

But when it put together the words in the order that it did, that is not just what it was trained to do, because there's more, reinforcement learning, more training that happens on top of that, which means there's a bank of humans at OpenAI, at Anthropic, et cetera, tens of thousands that we know of, reading and rewriting and updating these responses.

But what it made me feel is: does this obviate me? Meaning, am I no longer necessary, the way the doctor is no longer necessary?

Then of course, in a few days, you find out: no, of course not. 'Cause it's incomplete. It doesn't know how to prompt itself. It doesn't know how to execute on things. It doesn't know how to be empathetic and talk to one patient versus another if you don't set the prompts properly. So that's when I knew: oh, I'm not going anywhere.

There's plenty of opportunity. 

I want to come back to education. We see today that a lot of companies are letting AI do the grunt work of debugging, boilerplate, all that kind of stuff. What does it mean for a young, freshly-out-of-school student looking for an entry-level job?

Yeah, it's gonna be tougher and tougher for those entry-level positions, but not in the obvious way. The obvious way is that companies are not looking for the junior guy; they want the mid or senior who can help operate AI. That's the obvious part. The unobvious part is that, in the minds of the ones hiring this way, the juniors don't know enough to drive the AI faster in their environment.

Said another way, if you look at it from a high level: if you're really good at what you do, the AI makes you a hundred times better. And of course all younger people are gonna be bad at coding, 'cause they don't have enough expertise yet; you gotta build that expertise.

But if you're bad at coding, you're a hundred times worse. It doesn't mean that if you're bad at coding, you get good because you have AI. The first thing you have to realize as a young person is: you don't know shit, right? You don't know what you don't know. You don't know how little you do know.

And so you pretend that you do know, and you put that prompt in, and then you get stuff out, right? Code is going to come out the other end; you just don't know whether it's good or bad. So if I'm a leader who has the opportunity to hire five seniors and five mid-levels versus five entry-levels, just for my own sanity, so I don't have to check the work of those younger people who are prone to create slop, I need to hire the mid or upper end. That's the normal case. Even without AI, that was the case: if I had a choice, I would go get a senior. But I often did not have a choice. Say I needed 10 people.

I knew I could only get two seniors, so I'd get three mid-levels and five juniors, because I knew I needed 10 people. If you're being educated today and you're coming outta college, you need to recognize: in the old days, I might have needed those five juniors. I had to take you because I had no other choice.

Now, if I have two seniors and one mid, I may not need the rest of them. Mentally speaking, I may decide that and say, I don't need the rest of them. It's not quite true, but that's what I'm going to think. And so what do I do? I just let my seniors continue and my mid-levels continue. What I don't realize is: if I don't build my juniors now, how are they gonna become that mid? When my senior leaves, the mid's gonna go up to become a senior, but I've got this whole tier which is gonna be completely empty.

As a senior executive, I will stop hiring junior people only if I'm an idiot. If I'm not an idiot, I have to get those juniors in to shadow the seniors and the mids, just like in the old days. And so this is what the education establishment, comp-sci education, IT education, has to be teaching the juniors and the fresh guys coming outta college: be so useful with AI that it would be silly not to use you in combination with the seniors and the mids. And senior executives, senior people like me, are making a mistake right now.

We're gonna pay for it in two years, or five years, or 10 years, because we're not building that community of juniors able to become the mid, who then becomes the senior.

Is that what you call the adapter, someone who guides AI?

You could think of them as context engineers. You don't really need to guide the AI per se. In general, what you're saying is correct, but guiding the AI really means telling it the background it needs to know so that it can generate what you're asking it to generate.

So if you go to ChatGPT in a fresh account today and you say, write me a function that can sort a linked list, and that's all you say, you don't say anything else, it will most likely pick Python, 'cause it's a popular language, and it will do so in a way that is fairly advanced, 'cause it'll use traditional linked-list algorithms, and you'll get a function.

Now, you were supposed to tell it that this is to be in Assembler or Pascal or C or Ada or whatever. You didn't give it that background, which is what was lacking, right? You didn't give it the context. In the modern era, I would rather get a junior guy who understands context engineering, and then I can give him or her the extended language-specific things I need, rather than a mid-level who has the skills but cannot prompt an AI.
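To make that concrete, here is a minimal sketch of the kind of thing that bare prompt tends to produce: idiomatic Python, a classic merge sort over a singly linked list, with no awareness of the constraints you never stated. The code and names are illustrative, not from the episode.

```python
# What an unguided "sort a linked list" prompt tends to yield:
# textbook merge sort on a singly linked list, in Python by default.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def sort_linked_list(head):
    """Merge sort; O(n log n) time, recursive splitting."""
    if head is None or head.next is None:
        return head
    # Split the list in half with slow/fast pointers.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = sort_linked_list(head), sort_linked_list(mid)
    # Merge the two sorted halves behind a dummy head.
    dummy = tail = Node(None)
    while left and right:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```

If the prompt had carried the context he describes, say, C on a memory-constrained medical device with recursion forbidden, the output would look entirely different. That gap is exactly what context engineering fills.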

As an example: if you are a designer, HTML, CSS, et cetera, and you're fresh outta college, you don't know all the advanced ways of doing that. But boy, do you know how to use Lovable. Lovable is a popular design tool, as we all know. So you know Lovable, but you don't have super-high-level skills in HTML or CSS.

You're still more valuable as a junior guy who knows Lovable, who can take my words or my prompts, put them into the tool, and get something out that I can then pass downstream to someone else, than a senior guy who's been writing HTML and CSS his or her whole life but doesn't even know Lovable exists.

Now you have to ask yourself, which one do you want? Do you want the junior guy who knows Lovable exists, and not only that, can drive Lovable better than everybody else? Or do you want the senior guy who doesn't know Lovable exists? I'm taking the junior one any day of the week, and twice on Sunday for sure.

And that brings me to a natural question about skills, right? Being curious, being innovative, using those new tools, that's what we talked about. But what are the human skills? You talked about empathy earlier on, and I believe that properly trained AI can be extremely empathetic. So what are the skills you are looking for, as a CTO, when you look at those kids fresh on the market?

Three main things. Thing one is: when you deal with AI, how much do you use AI to help you with AI? Now, this sounds very weird and very meta, but take the best prompters in the world, like myself. When I work with AI, the very first question I ask when I'm teaching it context or need a task done is: what do you need from me in order to do your job?

Give me the prompts that you would have me give to you. So you literally sit there for minutes or hours, just coordinating with the AI to figure out: do you understand me, and do I understand you? Because when it gives me the prompt, I understand it more. I have 35 years of programming experience; you could give me almost any problem.

And with a little bit of books, obviously, we're all old and we're gonna have to go back to reference books and things like that, I would be able to write your function, write your code, et cetera. But if I hadn't already done six, seven years of this work, what I wouldn't be able to do is organize my thoughts and provide the context to an AI.
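As an illustration of that loop, here is a minimal sketch in Python, assuming the official OpenAI client; the model name, the task, and the wording are placeholders, not something prescribed in the episode.

```python
# Sketch: ask the model what it needs before asking it to do the task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Port this billing module from Python 2 to Python 3."  # placeholder task

# Step 1: the meta-prompt -- "what do you need from me to do your job?"
elicit = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            f"I need this done: {TASK}\n"
            "Before you start, list the context you need from me, "
            "and draft the prompt you would have me give you."
        ),
    }],
)
print(elicit.choices[0].message.content)

# Step 2 (the human's job): read its questions, fill in the real
# context -- codebase, constraints, target environment -- and send
# the refined prompt back for the actual work.
```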

So that's thing one. You can tell your hiring manager, or the guy interviewing you: look, I don't know everything. I know what Marissa Mayer said at Google a long time ago: it's not what you know, it's what you can find out. Now it is not what you know, nor what you can find out, but what you can prompt through context.

So if you extend that, thing one is: can you talk to the computer better than the next guy they're trying to hire? If the answer is yes, prove it. Write the articles, show your prompts, explain how you do things. Go on to GitHub, put up project after project. Get everything that you're doing with AI public, because that's how you're gonna be able to prove to me that you know what you're doing.

So that's thing one. Thing two, then, is: do you have the basic skills? If I said, give me a circular linked list versus a unidirectional linked list, or give me a directed graph versus an undirected graph. If you're gonna say, oh, I know how to tell AI what to do, but you don't know the difference between a directed graph and an undirected graph, it doesn't really matter how well you communicate in English, 'cause you don't have that skill set.
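For readers who want the distinction he's testing spelled out, here is a compact sketch using plain adjacency lists in Python; the helper names are mine, not his.

```python
# Directed vs. undirected graphs as adjacency lists: the core
# difference is whether an edge is recorded one way or both ways.
from collections import defaultdict

def add_edge_directed(graph, u, v):
    graph[u].append(v)          # u -> v only

def add_edge_undirected(graph, u, v):
    graph[u].append(v)          # u -- v is traversable both ways
    graph[v].append(u)

directed, undirected = defaultdict(list), defaultdict(list)
add_edge_directed(directed, "A", "B")
add_edge_undirected(undirected, "A", "B")

print(dict(directed))    # {'A': ['B']}: A -> B, but no way back
print(dict(undirected))  # {'A': ['B'], 'B': ['A']}: both directions
```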

Kids coming out today not only have to have the basics, they've got to be stronger in those basics than ever, because the AI is going to generate code in the way that you ask it to. If you tell it to create type-safe code, it creates type-safe code. If you say, give me unsafe code, it gives you unsafe code. You have to know what to ask for. And so the first problem is:

can you communicate well to a machine? That is not the same as communicating to humans; it is different, and we know that. But two: if you say you know how to communicate to the machine, but you don't have the basic skills in computer science, understanding the algorithms and when to use one versus another.

If you can't do that, you're not very useful; the first skill is not very useful without the second. But then there's the third thing. If you couldn't do number two but you could do number three, I'll let it slide. Number three is: look, I'm really good at what I do. I'm not great at the algorithms and things like that, but I understand how to talk to humans, get requirements input, understand their expectations, and I will use that to coordinate and work with a mid-level engineer, filling in with AI, so that the two of us, a junior plus a mid, cover it.

The mid knows more about the computer science part, but the junior knows how to talk to other humans. If you can fill that gap, then of those three things, gimme two of the three. The first one is a must, right? But the other two I'm willing to negotiate on, depending on how good you are with the tooling.

Now, the world is hard for younger people, and it's always been hard for young people. When I was young, it wasn't like the world was easy, but at least I was only competing with other human beings, right? Today, kids are not just competing with other human beings; they're competing with AI colleagues, and with mids and seniors who are 5, 10, 15, 25 times more productive than they were just five years ago.

I love that you said you have grandkids. As the parent of a student, what would be your fears when it comes to AI, and your hopes?

Yeah. My biggest fear is that people our age, and those in the job market today, don't understand the benefits of AI, and that operating it as an additional colleague is more valuable than trying to replace the humans that we have. That's my biggest fear. And of course it's understandable; this stuff is two or three years old.

It's not like we've lived with it for decades. So the fear is: when you think it can do too much, it is going to harm your environment, because you won't build the human skills that are necessary to operate complex machinery, complex equipment, et cetera. But if you say, okay, I'm treating it as a colleague, and every new hire that I have deserves a proper AI twin, well trained, designed to work with the person that comes in, that's okay.

Now, my hope. I have a couple of granddaughters; one is six, the other is two, turning three this November. The biggest hope that I have is that their teachers can do what I do. When my granddaughters come over, I often play with them with AI. They still play a traditional Donkey Kong game on their Nintendo Switch or whatever, and of course they watch TV.

But when they spend time with me, we are on AI. Each session might be: hey, let's invent a game together and let's play it together with AI. And when they're coloring, they're not coloring Disney characters; they're coloring their own pictures. I'll load up pictures and create coloring books, stories, et cetera, with their parents, their grandparents, their siblings, their cousins in them.

So the stuff that they do is still hyper-creative, but it's personalized to them. The stories the AI works on with me for the 2-year-old are great for a 2-year-old. With the 6-year-old, the games that we play, it escalates the game, because it knows the request and response. Now, I am a parent, I'm a grandparent.

I love my grandkids; of course I'm gonna spend all that time with them. But you see the power of this. What I'm really excited about is: could teachers recognize, within the next months or years, however long it takes, that each student in their class can have a personalized teacher that can operate at their speed, at their level, at whatever pace they want to go?

And that the teacher in the classroom, whether it's for a 6-year-old or an 18-year-old, it doesn't really matter, becomes a proctor, an organizer, an orchestrator of all the other teachers that every single student has in their environment. I'm just imagining the level of intelligence that students could have if each one of them had a bank of personalized teachers teaching them math, science, biology, et cetera, at their level, at the pace they wanna go, instead of being bored out of their minds and going into different places in their heads.

Because they're just bored: they're really good at biology, but their biology class is too far behind; they're really good at math, but the class is too far behind, right? You see that part. So that's got me really excited.

So we are almost at the end of this interview, and we've got parents, we've got teachers listening to us today. If you had one thing you would like to leave parents and teachers with, what would it be?

Yeah, just think about the AI as a personal third parent, for example. Good parents will spend time with their children, right? They'll spend time doing homework, et cetera, et cetera. But when you are busy as a parent, often you're not able to do the things the kid needs attention on.

The AI knows where attention is needed, right? Because you can actually give it the prompts and say, my kid got this homework, I don't have time to do it with them, could you help with it? And it'll do it. For teachers, the one thing you can start doing is to see which students could be the ones that are your voice to the AI.

Like in my case, when I spend time with my 6-year-old: as soon as she saw me chatting with the AI, she started giving it prompts. In her mind, it didn't even occur to her that this was an amazing piece of technology that she didn't have when she was born. She started talking to it as if it had always been with her her entire life, and she's like, oh, ask it to do this.

Oh, this is a better idea, let's do this. This is a 6-year-old. Now imagine what an 8-year-old or a 14- or 18-year-old could do if we trusted them. Teachers today are scared of AI, just like everybody else is in many different circumstances. But what I would recommend for teachers and parents is to adopt it.

But you sit there with the AI; do not leave your kids with their AI on their own. In our case, when our kids were young, we only had computers in a central area where we could see them. So do that: let them use AI, but be there with them. And don't be afraid of it.

There are universities out there that are preventing kids from using AI to turn in their homework. It's the dumbest thing in the world. How are you going to teach them that it's a companion or a colleague when you say you can't talk to this massively useful technology? So don't be afraid of it.

Work with your kids, in class and outside, whenever you can, in an interactive way, so you can see: am I building their prompting skills, or am I accidentally making them dumber? Because otherwise they're not gonna look at anything on their own, they're not improving their prompting skills, they're not saying, I learned this in my environment, let me put it in. Instead, they're just waiting for information to come out. That is a horrible situation that we have to try to prevent.

Thank you so much. So, this podcast is called Back in America, and in the podcast we look at what makes America: we look at the culture, the identities, and the values. If you are familiar with the podcast, you know I always end with one question, which is: what is America to you?

To me, America is, well, I've been here since I was four years old; I started elementary school here. The number one thing America is to me is opportunity. Opportunity, opportunity, opportunity, in this era especially. Having seen lots of other countries, 'cause I've been flying around my entire life, what we see is that even today, lots of other countries have caught up in many areas, but they haven't exactly caught up with the sheer amount of opportunity available to do anything that you want to do. And now with AI it's easier in one way, 'cause you can do whatever you want to do, but then there's the tyranny of choice. You can literally now do whatever you want to do.

There's an old Scientific American paper, if you don't know what I'm talking about: in 2004, a paper came out called The Tyranny of Choice, about how having choice like we do in America is wonderful at the surface level, but when you have a lot of choice, your life is actually much harder than when you don't have any choices at all.

So read that paper, understand that it's about opportunity, and start selecting your opportunities wisely.

Thank you so much for your time today. It was a pleasure to speak with you.

Yeah, I had fun. Thanks. We'll do it again.