Margin of Thought with Priten

In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.
Key Takeaways:
  • Scaffolding matters more than speed. AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.
  • Teacher skepticism is healthy—and often fades with use. Most teachers approach AI with skepticism, which is appropriate. But just like PowerPoint and video once were new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.
  • Gen Alpha's AI fluency makes teacher presence more important, not less. Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.
  • We lack clear guidelines—so educators must set their own. In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. The technology is moving faster than regulation can keep up.
  • Creative technologies extend beyond chatbots. From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.

Yanni Chen is an Ed.M. candidate at the Harvard Graduate School of Education, where she studies Learning Design, Innovation, and Technology. She earned her B.S. from Boston University, majoring in Public Relations and minoring in Applied Human Development. Her work sits at the intersection of education, product management, AI, XR, and edtech. She focuses on student experience and the design of educational products that foster engagement, growth, and meaningful learning outcomes. Drawing from both her academic training and her work in edtech, Yanni brings the perspective of both a student and a product manager to conversations about teaching, learning, and educational innovation.

Creators and Guests

Host
Priten Soundar-Shah
ED of PedagogyFutures / Founder of Academy 4 Social Civics / CTO at ThinkerAnalytix
Guest
Yanni Chen
Student at Harvard Graduate School of Education

What is Margin of Thought with Priten?

Margin of Thought is a podcast about the questions we don’t always make time for but should.

Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.

Each season centers on a key tension in modern life that affects how we raise and educate our children.

Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.

[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. We spend a lot of time asking how teachers should respond to AI, but what about the people actually building the tools teachers are being asked to use? Today's guest is Yani Chen, a master's student at the Harvard Graduate School of Education, studying learning design, innovation and technology. Yani is also a product developer at Deep Brain Academy, where she's designing an AI math tutor with something rare in ed tech: a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers in the loop. We're going to talk about what it actually takes to build AI tools that support learning rather than shortcut it. Why Gen Alpha's fluency with AI makes the educator's role more important than ever, and what a student and product developer sees when she looks at where education technology is headed.

[00:01:05] Let's begin.

Yani: Hi, I'm Yani Chen. I'm currently a master's student at Harvard Graduate School of Education, studying learning design, innovation and technology.

Priten: Do you have experience in the field already? Can you tell us a little about what that experience has been?

Yani: Yes, I do. In the past I've worked as a community manager for an EdTech company called Lobster, and currently I work for an EdTech startup called Deep Brain Academy on their AI educational product development.

Priten: So on this show, we've heard from a lot of different folks, but we haven't yet spoken to someone developing AI ed tech tools. I'm very curious to hear your perspective on what responsible development of these tools looks like and what their utility is. If you want to start generally: when is ed tech useful in the classroom, versus when do you think it actually distracts?

[00:02:02] Yani: For the product I'm currently developing, one of the major challenges is how we position teachers to work alongside AI. Gen Alpha students are already really fluent AI users, the most fluent of any generation. So it's really important to understand how teachers can help them develop their critical thinking. One of the things I consider is the ethics of AI. AI is trained on broad datasets that include lots of biases. We need to help teachers teach students how to navigate these situations.

Priten: I'm curious about how you navigate both trying to get uptake for the product and get folks excited about it, while also talking about potential dangers. That balance is tricky.

[00:03:07] Yani: The math tutor I'm creating is one tool students could use to work on math problems at home or anywhere teachers can't follow up directly. But AI sometimes has logistical and ethical problems. For example, it can give math examples that aren't culturally inclusive. We don't want students to experience examples that frame their thinking in a biased way. When we're training the AI, we focus on including as much inclusive material as possible, using pedagogical and curriculum design frameworks so students understand both the knowledge and the ethics.

[00:04:04] Priten: Is most of the work you do focused on building tools for use outside the classroom, or do you also have projects for direct classroom use?

Yani: The product I'm creating has two different versions. One is for outside the classroom, one for inside. For the classroom version, we're thinking more about safety issues. If students input any dangerous or inappropriate messages, we alert the teacher so they can help the student navigate the situation. For in-class use, we want to keep teachers as involved as possible. When AI answers a student's question, it only gives step-by-step guidance, not the direct answer. If a student is still struggling, the teacher comes in to help.

Priten: Have you heard from teachers about whether they're excited about something like this? Is it daunting? Because we work with schools helping them think about which tools to use, or whether to use a tool at all.

[00:05:04] I want to know from your experience: are teachers looking for something like this, and what excites them about it?

Yani: Teachers are mostly skeptical about AI because they don't fully trust it yet. But as soon as they use AI in their classroom, they become more comfortable navigating these tools. It's the same as when we started using PowerPoint slides or videos, the multimedia tools of the past. AI is just a new tool that helps teachers teach better.

Priten: Do you see that shift happening with teachers once they get exposure to the tool?

Yani: I do see some shift, though most teachers still hold skepticism. But I get valuable feedback from teachers about how we can develop the product further.

Priten: I'd love to go into the details of what that process looks like. Because I think one of the things we hope more ed tech companies do is stay in conversation with teachers, build tools that teachers want, not make teachers want the tools that the ed tech company is building. I love that you're getting that feedback and iterating on it. Is there a formal process for that, or is it mostly ad hoc conversations?

[00:06:09] Yani: Right now we're doing mostly conversations. We haven't done a formal evaluation because not many teachers are willing to adopt these tools in their classrooms yet. Most teachers are attached to the traditional classroom style. One common piece of feedback is that students get addicted to AI tools because AI can give out information so fast. A teacher has to adapt the material to each student and personalize it, but AI can do that in seconds. Students ask AI all kinds of questions, so one thing we developed later was adding safety checks through red teaming. This prevents students from getting too absorbed in AI.

[00:07:10] When a student asks an irrelevant question, we pause them and let the teacher come back and say: okay, do your math work right now, not chat with AI about random sports or concerts.

Priten: So the goal is to initiate human intervention at those points and get the teacher to redirect them. That's interesting. I know you're talking about how skeptical teachers are and some of the friction you're facing. What keeps you motivated to do the project?

Yani: I'm really excited about how AI tools could reduce a lot of workload for teachers. Teachers are overstressed with so much work right now. By applying AI tools, they can save time to do other kinds of work. In the past, when teachers tutored a student, they had to take a full hour stepping through the material. But now students could do that at home, and in class, teachers could focus on engaging students and helping them become interested in the knowledge.

[00:08:10] Priten: I know you're also a student yourself. I'd love to speak about that. What motivated you to go back to school? What role are you hoping that plays in your future career? And what's your experience been as a student? Let's start with the why.

Yani: Because AI is developing so fast, we could never study it enough. Going back to school helped me learn about the most emerging technology in the educational space and sparked new ideas about how to help teachers teach better and engage students more in classrooms.

[00:09:00] Priten: Tell us about the program. You're not just doing a Master's in education, right? It's a specialization in a very relevant field. Can you explain it?

Yani: The program is called Learning Design Innovation and Technology. Most of my coursework focuses on developing AI agents and curriculum design. One course I'm really excited about uses laser cutters and 3D printing to create sense-making objects for students to learn STEM. Learning involves building something and learning from that process.

Priten: Is this to get teachers to build tools for the classroom, or students themselves building with the printer?

Yani: Students building with the printers themselves. It's important to teach students these techniques so they can express their imagination in the objects they build. But we have to be cautious: we don't want to design for mass-produced objects. I read about a case where students became obsessed with creating acrylic keychains and it turned into mass production. We don't want to design curriculum like that.

[00:10:06] Priten: How do you teach that intentionality to students so they really consider what they're building and the implications?

Yani: I focus on what the intention of the course is. We want students to learn the technology and the content behind the building. So it's about designing curriculum that's not too hard, not too easy—just right. It's really hard right now because I'm still navigating this process. I haven't figured out the right way yet. But through feedback from students, I'm really excited about what's happening.

[00:11:01] Priten: What role has AI played in your own education?

Yani: AI has helped me a lot. During courses I take pages of notes, and when I want to revise and revisit the content, it's hard to keep my focus on the key points. So I input my notes into AI, let it summarize them, and I'm ready for the next class.

Priten: One concern people have is that by having AI do the synthesizing or summarizing, you might miss out on some learning. It sounds like that wasn't the case with you. You actually benefited. What do you think made it helpful rather than just a shortcut?

[00:12:04] Yani: For me, it's important that you learn for yourself. You're not learning for AI, you're not learning for anyone. The knowledge you absorb is totally yours. During class I pay as much attention as possible. When I go back to revise the content, I use AI to summarize the bullet points so I get a clear picture of what happened in the classroom and the examples and scenarios we discussed.

Priten: That's a very independent use of the technology. How do your professors feel about it? Are they talking about using it in the classroom? Do they have restrictions on how you use AI? I'm curious how your professors are approaching it, especially since you're in a learning technology track.

[00:13:10] Yani: Most professors approach it optimistically. They allow us to use AI to brainstorm, summarize, and get a clear vision of our work. But there are strict restrictions on using AI in our assignments and final products. Still, AI is useful in brainstorming sessions, because it's hard to generate that many ideas on your own, and AI can guide you toward what you want to do.

Priten: When they say they're not allowing it in the final product or assignment, does that mean you can use it as a thought partner to come up with the idea, but the execution has to be yours? Or are there other places you're allowed to use it?

Yani: We're allowed to use it as a thought partner, but not for specifically writing the assignment. But there are products that involve building AI agents and chatbots, so that's totally okay. For writing your own reflection about the discussion of the content—that's not allowed.

Priten: Has there been a moment where AI made something possible in a final project that you wouldn't have been able to do otherwise?

[00:14:10] Yani: In my J term class, which is January term, I took a course about building agents. It was my first time using Google AI Studio. I was fascinated by how much it could do. It literally created a whole website with all the functions I wanted, and I could add individual prompts to all the things I wanted. Vibe coding is really easy these days because AI is so powerful. It could help you create a whole product yourself. But we have to be cautious about what information we input into AI and what outcome we want from it.

Priten: Tell me about the vibe coding part. Do you have a programming background, or are you building things for the first time using AI tools?

[00:15:03] Yani: I don't have a programming background. This was actually my first time vibe coding. Before, I was mostly using AI for summarization. I never thought I could use AI for vibe coding, and it's really helpful to create something I couldn't do before. For the aesthetic part, I'm not a UI designer, but AI designed it pretty well.

Priten: It's an exciting use case. A lot of folks in education, especially if you're teaching or working with students, have an idea of a tool that would be useful. There's often a disconnect between what teachers want and what ed tech companies build. So I'm excited to see folks who can bridge that gap and build things themselves. Vibe coding might not be fully there yet in terms of security, so we want to keep seeing how that develops. But there are some really creative use cases coming from the potential to build your own tools. When you consider your path post-graduation, what excites you about next steps in your career?

[00:16:14] Yani: I'm really excited about becoming an AI educational product manager. I feel there's a lot of potential in AI products, especially in education, that still hasn't been discovered. I'm excited to be part of that and help students learn better.

Priten: When we think about using AI in education, I share some excitement about the potential use cases. But we often hear concerns that we don't yet have research showing it actually changes how students learn. It might be a different way to learn, but it could lead to worse or the same learning. And AI comes at a cost—financial cost, training cost, time cost for teachers, plus ethical costs like environmental and data concerns. I struggle with figuring out when the benefit to learning will be demonstrated and be worth it. Have you thought about how you might approach that, or started thinking about it?

[00:17:06] Yani: I have thought about it. In the States, we actually lack guidelines for using AI, including for the things you mentioned: data security, safety, and ethical issues. I think it's important for us as educators to set our own individual guidelines right now. Since AI is evolving so quickly, I think in the future there will be systems and guidelines that let us rethink how to apply AI in classrooms.

Priten: Do you think we'll get federal or state policy, or will educators in schools and educational organizations help guide this?

[00:18:01] Yani: Currently, companies and schools have individual guidelines. But we do want rules at the federal, state, or school-district level to guide how educators should use AI tools. Since we're in the early stage of implementing AI in education, each educator should have their own guidelines about what to use and what not to use. There should be clear lines around data security, safety, and ethical issues.

Priten: For our educator audience, what are some things you'd recommend—from someone who helps build these tools and thinks about it from the inside—that educators should consider when deciding whether to adopt an AI tool?

[00:19:06] Yani: One thing is whether it aligns with your own ethical guidelines. AI is trained from a large dataset with lots of different perspectives. You have to be cautious about navigating those. If you aren't ready to work with all kinds of perspectives from AI trained on that dataset, I think it's not yet the time to implement AI.

Priten: What can they do to develop that?

Yani: The only thing is to use it. The more you use a tool, the more familiar you become with how it reacts and functions. I don't mean implementing it in your classroom yet, just for your daily personal use. You'll find the pros and cons of using AI.

Priten: That's one thing we've found very effective too—for teachers who are afraid or unsure what role AI will play in the classroom. We recommend low-stakes use. Don't think about your students or how you'll use it in class. Go find a recipe or something personal and low-stakes. That gives you opportunities to start exploring the technology. We've talked a lot about chatbots and tutoring, but you've worked on other aspects of ed tech and exciting frontiers in education. You mentioned 3D printing. Everybody's talking about AI and text-based chatbots, but there are other cool things happening in ed tech that are innovative. What other projects have you heard of or worked on?

[00:20:02] Yani: Laser cutting and 3D printing are things I constantly work with because they're easy to learn to use. The downside is the cost of the machines. Like AI, you can use them to create anything you want. I think they're a great learning resource for students and educators to develop more creative learning plans.

Priten: Is there anything else besides those that excites you?

Yani: AR and VR really excite me. The last company I worked for had VR simulations for nursing students to interact with patients and get familiar with daily processes. It's a great way to gain that experience, because labor costs and safety issues make it hard for students training to be practitioners to practice with real humans. In medical schools, they use mannequins and patient actors. But the problem is that students don't get conversation or communication with patients; they don't learn how to handle real-time interaction or problems they might encounter in real life. AR and VR are a great way to let students learn in a safe environment, fully guided by an instructor.

[00:21:04] Priten: So like a training ground before they're actually interacting with real humans, where the stakes are much higher. Are there use cases in K-12 education that you've thought about with AR and VR?

Yani: I'm working on a personal open source project to create free AR models for STEM educators to use to teach students. Some STEM equipment is expensive, and AR could be used on phones or any technology device, so it's easier for students to learn STEM concepts.

Priten: Can you give me an example? I haven't taken a science class in forever. When would it have been possible to use AR?

[00:22:03] Yani: I've created an AR model of Newton's Cradle. It's really interesting: students can click buttons 1, 2, 3, 4 on the AR model and see that number of balls swing to show the physics process. Most schools might only have a few Newton's Cradles, so students have to share them in groups. But with AR, each student with a device can use one on their own, visualize what's happening, and investigate their own ideas.

[00:23:10] Priten: I haven't explored the limitations and capabilities of AR. When I've seen conversations about generative AI, one concern is that world models aren't great. Physics simulations and video output are flawed—shadows and movement aren't correct, falling speeds aren't accurate. This limits generative AI video production. Are there easier ways to control for that in AR? Is the technology different?

Yani: When you're creating an AR model, it depends on which tools you use. I use Blender, which has a really good embedded physics system that mimics real-world physics. A lot of games use these kinds of models. Blender is a great choice for creating AR models. I understand we can't fully copy what happens in real life, but it's an opportunity for schools without many resources to use these technologies to educate students.

[00:24:01] Priten: Is there anything else about your schoolwork or work that you'd love to share with teachers or that you think they should hear about—what's coming, what's exciting, any concerns?

Yani: I think scaffolding is a big part. AI hasn't really learned what scaffolding is. It only gives direct answers because that's what it's engineered to do. As educators, we want to scaffold students. We don't want direct answers, and that's where AI in the classroom should pay extra attention. We still want teachers in the space.

Priten: Are you saying teachers should look for tools that do scaffolding, or that teachers need to do a lot of scaffolding alongside AI?

[00:25:06] Yani: I think both. For example, teachers should choose tools that don't give direct answers but give step-by-step instructions and lead students to think and discuss the final answer. If there aren't those kinds of tools, teachers should scaffold students to drive their thinking. Without the thinking process, students aren't really learning.

Priten: In those use cases, what would you say to a teacher who says: I'm going to have to do that work. Why should I even bother using AI? I'll just do it myself.

Yani: That's a con of using AI, because AI is still developing. Those kinds of AI tools aren't publicly available yet. Right now, when teaching things like math, AI still gives direct answers. That's where teachers have to come in.

[00:26:04] Priten: It sounds like you're excited about where the technology will be in a few years or maybe months at the pace things are moving. But maybe we're still not exactly there where it'll make teachers' lives fully easier, or they can fully rely on it. Is that an accurate characterization?

Yani: Yes, I do think that. Nothing can replace what teachers do in a classroom. It's about the environment, engagement, and the learning process. AI can only output information for students. It can't copy what teachers do to guide their thinking process.

Priten: I'm thinking back to the example you gave of the math tutor—helpful for students to get feedback and practice as a rote practice tool outside the classroom. But they still have to engage with a teacher in the classroom for direct instruction. Because so much conversation right now is about AI, thinking about all the different ways we can use technology not just for answers and shortcuts, but to explore creativity and go down learning rabbit holes is interesting. The 3D printing example is definitely cool. Which classes do the 3D printing use cases normally come up in?

[00:27:08] Yani: It's normally in digital fabrication classes. In K-12 settings, it mostly appears in design and art classes. But there could still be projects in STEM classes that allow students to work on their own sense-making process. In humanities, students could also create a lot using those tools.

Priten: My last question for everybody is: when you think about the next five years of education, are you mostly scared or mostly excited? And why?

[00:28:00] Yani: I'm really excited about what's coming, because AI is evolving. Not in years, not in months. I feel like it's in weeks. Each week, AI will be different. For the next five years, I can't imagine what AI will be like. What I can imagine is that the introduction of AI will make a lot of people really comfortable using these tools.

Priten: I think there's excitement in seeing all these different use cases, especially in STEM. I'm thinking about where we can get our students to be even more hands-on with the real world. I really appreciate your time today. It's nice to hear from the side of engineering ed tech tools, particularly because there's something very different about using ed tech in technology, science, or math classrooms than in English or history classrooms. The skills are different, the context is different. Even though there's a lot of fear in those contexts, it's nice to hear some excitement in the STEM world.

[00:29:05] Priten: Thank you to Yani for joining us and offering a view of AI and education we don't hear often—from someone who is both a learner and a builder at the same time. Yani reminded us that the design choices behind these tools are never neutral. Getting them right means centering equity, scaffolding, and teacher presence from the very beginning. Her work reminds us that the future of ed tech doesn't have to be something that happens to educators. It could be something we build with them in mind. Keep listening as we continue exploring the ethics of education technology. And don't forget to pre-order my upcoming book, Ethical Ed Tech, at ethicaledtech.org. Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.