Margin of Thought with Priten

In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.
Key Takeaways:
  • Citation can't bridge the gap between AI-generated ideas and their sources. Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.
  • A global AI disclosure standard is actively being built. Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.
  • AI use in research often falls outside methodology entirely. A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.
  • Separating the disclosure from the assignment makes students more likely to do it. At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.
  • Authorship will likely settle at the disciplinary level, not the universal one. Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.
Kari D. Weaver (she/her) holds a B.A. from Indiana University, an M.L.I.S. from the University of Rhode Island, and an Ed.D. in Curriculum and Instruction from the University of South Carolina, where her dissertation examined the impact of professional development interventions on academic librarian teaching self-efficacy. She is the Program Manager, Artificial Intelligence and Machine Learning with the Ontario Council of University Libraries, on secondment from her permanent role as the Learning, Teaching, and Instructional Design Librarian at the University of Waterloo. Additionally, Dr. Weaver is a continuing sessional faculty member in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education (OISE) at the University of Toronto. Her wide-ranging research background includes studies of accessibility in online learning, information literacy, academic integrity, and misinformation. She is widely recognized as an expert in AI citation, attribution, and disclosure practices for her development of the Artificial Intelligence Disclosure (AID) Framework and is currently the co-lead of the 2026 World Conferences on Research Integrity Focus Track: Toward a Global Reporting Standard for AI Disclosure in Research.

Creators and Guests

Host
Priten Soundar-Shah
ED of PedagogyFutures / Founder of Academy 4 Social Civics / CTO at ThinkerAnalytix
Guest
Kari Weaver
Learning, Teaching, and Instructional Design Librarian at the University of Waterloo

What is Margin of Thought with Priten?

Margin of Thought is a podcast about the questions we don’t always make time for but should.

Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.

Each season centers on a key tension in modern life that affects how we raise and educate our children.

Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.

Episode 19 - Kari Weaver: What's the Line Between Research Integrity and Using AI as a Tool?
===

[00:00:00]

Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.

We talk about how students and teachers should use AI, but one of the most pressing and least resolved questions is how we should document that use: who gets credit, what gets disclosed, and how do we build a research and learning ecosystem we can actually trust? Today's guest is Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries. She's also an adjunct faculty member at the University of Toronto and the creator of the AID Framework, a practical tool for helping students and researchers disclose their use of AI with clarity and [00:01:00] consistency. We're going to talk about why citation isn't enough, what a global AI disclosure standard might look like, and why the thorniest questions about authorship and integrity don't have clean answers yet, and why that's not a reason to stop asking them. This is about building the infrastructure of trust that AI in education desperately needs. Let's begin.

Kari: I am Kari Weaver. I'm a hybrid librarian and educator, and currently I'm the program manager for the Artificial Intelligence and Machine Learning Initiative with the Ontario Council of University Libraries. I'm in this role on secondment from my day job as the learning, teaching, and instructional design librarian at the University of Waterloo Libraries.

The University of Waterloo, if it's not familiar to you, is a large public research university in [00:02:00] Canada. It was actually the university where BlackBerry was developed — the precursor to the modern smartphone. So it's a very STEM-focused institution on the education side.

In addition to those roles, I'm also a sessional or adjunct faculty member at the University of Toronto in the Ontario Institute for Studies in Education, where I teach graduate students about teaching and learning in higher education and educational research. So I have a variety of different hats, but those different experiences are really crucial to what I'm doing with AI. Currently I'm not only training and educating and researching about AI, but I'm also overseeing several large-scale projects at the Ontario Council of University Libraries — which I'll call OCUL from here [00:03:00] on.

At OCUL we have several large-scale projects where we're looking at automated workflows in library spaces and how we can use artificial intelligence to augment that work and improve the services we're able to offer to faculty, staff, students, and hopefully ultimately the public.

But we're doing that in ethical ways, where we can protect people's privacy and make sure that we're not violating intellectual property — all of the ethical issues that libraries care about. And when we own and control the infrastructure, we're also much more [00:04:00] aware of the environmental footprint and what sustainability might look like in that context.

The projects we're working on right now are looking at augmenting these workflows. We've just finished one where we looked at how effectively we can train Whisper to do transcription of existing audiovisual materials within library collections that simply aren't accessible because the amount of lift and time required for human transcription is significant and beyond what staffing in libraries can support.
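For readers who want a concrete picture of that kind of pipeline, here is a minimal sketch using the open-source openai-whisper package. The model size, language hint, and file paths are illustrative assumptions, not OCUL's actual setup.

```python
# Minimal batch-transcription sketch with the open-source "openai-whisper"
# package. Model size, language hint, and paths are illustrative
# assumptions, not OCUL's actual pipeline.
import pathlib

import whisper

model = whisper.load_model("medium")  # larger checkpoints handle accented speech better

for audio_file in sorted(pathlib.Path("collection_audio").glob("*.mp3")):
    # An explicit language hint avoids misdetection on short or noisy clips;
    # Quebec French still uses the "fr" language code.
    result = model.transcribe(str(audio_file), language="fr")
    out_path = audio_file.with_suffix(".txt")
    out_path.write_text(result["text"], encoding="utf-8")
    print(f"transcribed {audio_file.name} -> {out_path.name}")
```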

Funding and staffing in higher education are not great. Libraries certainly suffer from that as a downstream impact. We've just concluded that project, and [00:05:00] it's particularly interesting because at OCUL we do have collections and services that we're offering bilingually, in both French and English.

So when we're considering these things, we have to look at the capabilities not just in English, but also in French and Quebec French, which has its own particularities. We've just finished that quite successfully. We have another project where we're looking at whether a chatbot could be used to help both end users and the people who staff our consortium chat reference service.

We have another project where we're looking at whether we can improve the remediation of books and book chapters into accessible formats, which we provide as an existing service within Ontario and more broadly within Canada. And [00:06:00] we have a fourth project where we're looking at large-scale metadata extraction from a collection of about 50,000 historic Canadian government documents. Right now you can find that the document exists, but you cannot tell what information might be in there without looking at the document. Some of these are 2000-plus pages of records for an individual document. With AI we can explore doing this. Without AI, this work would never happen.
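As a rough illustration of the metadata-extraction idea, here is a hedged sketch that prompts a locally hosted model through the ollama Python package, which fits the own-and-control approach described earlier. The model name, prompt, and metadata fields are all hypothetical; the episode doesn't describe OCUL's actual implementation.

```python
# Hypothetical sketch: asking a locally hosted LLM (via the "ollama"
# package) to pull structured metadata out of a chunk of a long scanned
# government document. Model name, prompt, and fields are illustrative.
import json

import ollama

PROMPT = (
    "Extract metadata from this government document excerpt. "
    "Reply with JSON only, using the keys: title, years_covered, "
    "agencies, subjects.\n\nExcerpt:\n{chunk}"
)

def extract_metadata(text_chunk: str) -> dict:
    response = ollama.chat(
        model="llama3.1",  # any locally hosted model would do; illustrative choice
        messages=[{"role": "user", "content": PROMPT.format(chunk=text_chunk)}],
    )
    content = response["message"]["content"]
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON; keep the raw text
        # so a human can review the failure.
        return {"raw_output": content}
```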

Priten: I want to talk a little bit more about those various projects, but often when folks are thinking about the role of librarians in higher education right now, a lot of it is about research integrity. And I'd love to start there in terms of how you're thinking about this, both in your classroom and in supporting peers who work directly with students. What has that conversation looked like, [00:07:00] and how has it evolved over the last few years?

Kari: Research integrity is a really interesting space. What's been determined in the research integrity space is that you can use artificial intelligence for research. That is clear. But what isn't clear, and this is not necessarily the work I've been doing at OCUL, but one of those pieces at the nexus of all of the different work I do with artificial intelligence, is that for research integrity, we need to have a way to capture what AI tools and technologies people are using in different facets of the research process, and how they used them. We need some specification about the model or the timing of that, because if we discover a year down the line [00:08:00] that a particular iteration of an AI model had significant bias toward a particular population, that could certainly impact how we accept or reject that research in the scholarly record.

Unfortunately, the way we traditionally have done this is we either incorporate this information into the methodology or we use citations as a core piece of how we communicate our positioning and connection to the existing scholarly record. And artificial intelligence is really interesting in that it's disconnecting the ideas from the place where those ideas originated. It's actually quite funny, not in a truly amusing way, but funny in a "you don't like it when it happens to you" sort of way. [00:09:00] That disconnection between the ideas and the place and the originator: this is what we have done to traditional knowledge and lived knowledge in scholarly knowledge production.

And AI is now making that happen to people who produce scholarly work. From a research integrity perspective, this means we can't actually use our existing practices. We have to come up with a different system. So that's really moved in the direction of AI disclosure and generating some sort of structured AI disclosure statement.

Most journals now do require, if AI is allowed in the research, a disclosure statement. What's challenging right now is that disclosure statement is [00:10:00] semi-voluntary and inconsistent. So often when you're publishing, you have the situation where you prefer a journal but it might be a little bit of a stretch. You do the work, you submit it anyway, and if it doesn't get accepted, at least the feedback will help you improve it so you can submit elsewhere. From a research workflow and integrity standpoint, if I have to give different disclosures or make decisions about where I'm publishing based on what use of AI is allowed, that's not very helpful in building a research ecosystem where we have an understanding of how and in what ways people are using these tools.

A solution might be to just say AI is a tool incorporated in your method. The reality [00:11:00] is there are lots of uses of artificial intelligence that might not be methodological in nature. I'll use as an example the French-English dichotomy. I had a recent question from a researcher about using AI to translate some articles they really wanted to read that were in a language they weren't familiar with: the work of a critical scholar in a new area of scholarship they wanted to engage with. While we were able to figure out a way that aligned with licensing and copyright restrictions to allow them to do that, they're then going to want to use that translated information in their research. That's not a methods issue, but that is a use of artificial intelligence.

[00:12:00] And depending on the tool they're using for that translation, that could certainly impact their interaction with and thinking about that information. This is a very concrete example of where artificial intelligence use in research is happening — and it's happening in a really beneficial way, supporting the research and building better connections among people doing this work across the globe.

But we couldn't really integrate that into the methods section. So there are all these nuanced challenges about this.

Priten: We've always struggled with the predominant types of advice that folks are giving students — citing it or including it in some sort of methodology section, especially for scholarly publication. And neither of those really feels right. I think you hit the actual tension, which is that there are parts of the research process that aren't the actual answering of the research question, nor are they being used [00:13:00] as sources of authority, which hopefully is not how most folks in the research world are using AI. But when we think about that in terms of student usage, there's value to scholars having a standardized disclosure statement. And you pointed out that even in the publication process, we need to make sure we're not making it super onerous on folks to submit to different journals, and that it doesn't influence which journals folks submit to. Those are all very concrete reasons why we ought to have a standardized disclosure statement. What we've noticed is that across educators there are very different standards for this. Do you think that the kinds of disclosure statements you're pushing for research purposes might also be pedagogically valuable?

Kari: I think they are pedagogically valuable. In my work at the University of Waterloo, before I transitioned to my [00:14:00] current role at OCUL, I developed a framework for disclosing the use of artificial intelligence. It's called the Artificial Intelligence Disclosure (AID) Framework — and that acronym is intentional, because it should be helping you. The AID Framework was really built specifically out of a need from graduate students. We had a number of graduate students finishing their theses or dissertations who were at risk of their supervisor not signing off because the supervisor didn't feel comfortable or confident in how they had used, disclosed, or cited their use of artificial intelligence. So I was tasked with coming up with a solution to that. It ended with the AID Framework. [00:15:00] But in creating that, I did quite a lot of work with individual classes on how to implement this, and I think there is a need for consistency, because the student experience is that they're taking three, four, sometimes five courses all at the same time, and they're experiencing such a range of policies and expectations.

The other thing is, if you're really trying to have an understanding of what students are doing with artificial intelligence, you need to give them clear tools.

So one of the things I did with the AID Framework was generate a rubric. The framework has you give information on the AI tool that you're using, and then it has a taxonomy of different use cases for artificial intelligence, essentially in [00:16:00] learning or research contexts. And the first part of the rubric is: Did you use it for this purpose, yes or no? And then if it's yes, you can go in and describe it. Right now, getting students to disclose at the graduate level is really easy. If you ask them to, they will, because this is really impacting them quite heavily.
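To make the shape of that rubric concrete, here is a hypothetical record structure in Python: tool information, then a yes/no per use case with an optional description. The field and use-case names are illustrative, not the actual AID Framework taxonomy (see aidframework.org for that).

```python
# Hypothetical disclosure record mirroring the rubric shape described
# above: tool information, then a yes/no per use case with an optional
# description. Names are illustrative, NOT the actual AID taxonomy.
from dataclasses import dataclass, field

@dataclass
class UseCaseDisclosure:
    use_case: str           # e.g. "brainstorming", "editing", "data analysis"
    used: bool              # the rubric's yes/no question
    description: str = ""   # filled in only when used is True

@dataclass
class AIDisclosureStatement:
    tool_name: str          # e.g. "ChatGPT"
    model_version: str      # matters if bias is later found in a specific model
    date_of_use: str        # ISO date; model behavior changes over time
    use_cases: list[UseCaseDisclosure] = field(default_factory=list)

# Example: a student disclosing one use and denying another.
statement = AIDisclosureStatement(
    tool_name="ChatGPT",
    model_version="hypothetical-2025-05",
    date_of_use="2025-10-01",
    use_cases=[
        UseCaseDisclosure("editing", True, "Asked for grammar feedback on a draft."),
        UseCaseDisclosure("data analysis", False),
    ],
)
```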

At the undergraduate level, voluntary disclosures are a little bit more difficult. So one of the things I've found in practice, both myself and working with colleagues and students, is to separate the disclosure from the assignment but have them submitted at the same time. Maybe you give them the rubric, or ask them to generate the disclosure, and have them submit that to a separate dropbox for that assignment.

[00:17:00] And that gives the student an opportunity to practice disclosure, which is an expectation in the research world and, in many jurisdictions, increasingly a legal requirement in the corporate working world. But the disclosure is then also on file: it can either be marked separately, or it can just be something that's there. If there are questions or something seems off with the assignment, the instructor or TA can look at that and use that information to help guide the conversation with the student about what happened and whether we're really meeting our learning goals.

So those are a couple of options. [00:18:00] The other thing I will sometimes recommend is scoping the disclosure. If we're using a framework, we can also, especially with first-year students, pre-select the uses that we want them to disclose. Use a standard framework, but tell them what disclosure you're expecting.

And I think that's particularly valuable when we're thinking across disciplines. In the STEM disciplines, a lot of student use of AI is around problem-solving and problem iteration. And then in the humanities and social sciences, it's a lot more about writing, editing, and iteration. These are very different uses of artificial intelligence even for students at the same level, [00:19:00] just based on disciplinary differences and norms.

So thinking through that and giving them concrete guidance is helpful. But if it can be set within a standardized framework being used across the institution, I think that's helpful.

Priten: There's still a component of the disclosure statements that requires buy-in on the part of the person doing the disclosing. And that's true of undergraduate students, but also true of researchers. What is the conversation like about enforceability? And beyond that, how are you viewing the culture of research integrity? Because we've had problems with research integrity in scholarly space, and we've had problems with integrity in assessments, but the scale of this is much larger. I think the standards are less clear than we've had in the past with plagiarism where there are bright lines. We don't have bright lines here. [00:20:00] But what does the conversation look like when you're pushing for disclosure statements?

Kari: What I would say is I don't have a solution, right? The solution is ultimately multi-pronged and systemic. But what I do know in practice is that if you're clear with people about your expectations, and also clear about how they can meet those expectations, they're much more likely to conform. A lot of the move toward disclosure comes down to this: if we all collectively agree as educators and researchers that disclosure is in fact the answer — it's not citation, it's not something else, it is disclosure — and we can generally come to a collective agreement on how we want to do that.

Then we're going to be in a space where ultimately it's a cultural change. And once the expectation becomes normalized and consistent, [00:21:00] it's the same as with plagiarism. Are you going to have students who plagiarize or violate academic integrity? Yes, there are going to be some students who will take the shortcut and do that.

As an educator, I do my best, but that's kind of not my business. My business is the folks who, if given the structure and the motivation, would do it. Those are the people I'm trying to reach. And that's most of the people, right? When we think of the bell curve, we're trying to reach the folks in the middle.

I think as it becomes more normalized, it will become easier for conformance and compliance. The other thing is, certainly on the research side, agentic research workflows are increasingly important, and disclosure is really important when we're thinking about agentic [00:22:00] workflows. Not only are you, as a human, supposed to check in throughout those research workflows, but having a structured, consistent disclosure statement means we can actually ask the agentic workflow to produce a breakdown of its own disclosure statement. That gives you a much better sense of how that automated workflow is responding to the tasks you've given it. So it actually works fairly well. But the key is having some structure and standardization.
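One way to picture this: if the disclosure format is structured and consistent, an agent's self-reported log becomes machine-checkable. A small sketch under that assumption follows; the field names and step labels are hypothetical, not part of any published standard.

```python
# Hypothetical check of an agentic workflow's self-reported disclosure
# log. Field names and step labels are illustrative assumptions.
REQUIRED_FIELDS = {"step", "tool", "model_version", "purpose", "human_reviewed"}

def validate_disclosure(entries: list[dict]) -> list[str]:
    """Return a list of problems found in an agent's self-reported log."""
    problems = []
    for i, entry in enumerate(entries):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"entry {i} is missing fields: {sorted(missing)}")
        elif not entry["human_reviewed"]:
            problems.append(f"entry {i} ({entry['step']}) was never human-checked")
    return problems

# The kind of breakdown a consistent standard would let us ask an agent for:
agent_log = [
    {"step": "literature search", "tool": "search agent",
     "model_version": "hypothetical-2025-06", "purpose": "retrieval",
     "human_reviewed": True},
]
print(validate_disclosure(agent_log))  # an empty list means the log is complete
```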

Priten: Well, what has that structure been? Not even just for agentic flows, but in general. When I think about the number of places you encounter AI these days, and even just in the last six months, it has changed quite a bit. What are the kinds of things you're considering when you think about what needs to be included in a disclosure versus not?

[00:23:00] And the particular examples I'm thinking of are like, you're in Excel and you use it to generate formulas, or you're in Google and you choose some of the results that Google's AI summary provides as your starting points, right? These are very tiny places, but they become massive in terms of how they influence how folks go about their research and writing.

And there's not a very clear line for me in terms of, okay, this has to be disclosed, and this we probably could get away with. There are obvious cases, but I'm curious about how you're navigating that middle ground.

Kari: So what I would say is I don't have answers to that, but there is a very large-scale effort that I'm lucky enough to be collaboratively leading with Bert Segers, who's the president of the European Network of Research Integrity Offices and works at the Flemish Commission on Research Integrity, in collaboration with the International Science Council, the Committee on Publication Ethics, STM, [00:24:00] and several other large publishing and research integrity organizations. We're really digging in with people who have specific expertise on this and working through some of these issues and having some of those conversations to land at a place where we could have a global AI disclosure standard that gives some more guidance on some of these thorny issues.

So I hope it's something we'll have a better answer to within the next year. We're doing this work through the World Conferences on Research Integrity as their focus track for this year, because we are at the point where we're far enough into our journey with integrating AI into research and education that now is the time to really dig in and do [00:25:00] that work. We have an incredible network of folks who are working on this and really thinking deeply about these issues. So despite being someone very intimately connected with this and working on this, I don't have answers yet. But that is to say, people are working; some of the best minds on this are working to really answer those questions. I think we'll probably arrive at a place that everybody feels equally discontented with, which will probably be the correct answer.

Priten: When you think about the community response in general to this project, are you getting pushback on whether this is the right approach? I'm curious about the two alternatives you already cited — citations or incorporating it in the methodology section. [00:26:00]

Are you seeing folks advocate for those more strongly, as opposed to disclosure statements? And are there alternatives that I haven't thought of, or haven't heard about, that other folks are suggesting?

Kari: That's a complex question. What I would say is certainly people have different opinions, but I think there is a general understanding that citation is kind of not it. I appreciate that many of the large citation organizations have tried to grapple with this themselves.

They're not really able to overcome that disconnect between the ideas and the source because the source, the thing, is really what they're concerned with when it comes to citation. So I think there is general agreement that citation is really not it — unless what you're citing is a discrete thing that you have produced with AI. [00:27:00]

Like if it is an AI-generated graph that you have produced from a data set, then you could probably cite that graph and that would make sense. But for a lot of the uses, that disconnection just really prevents citation from being the answer. I think there is a faction that feels very strongly that we should just integrate it into the methodology and that's the answer. But then when you get into a lot of these use cases, they can't articulate how you would manage to integrate that into a methodology. So I think we arrive at it more by process of elimination — probably the disclosure statement is going to be the thing that moves forward. And truthfully, I think that's okay. Realistically, if we already had something in the ecosystem that filled the need, I would be very supportive of making that adaptation. [00:28:00] But I think that divide on the methodology often falls along disciplinary lines. People in the social sciences and humanities tend to say, well, this doesn't fit in the method. And people in STEM fields feel like, well, of course this fits in the method. I'm using these tools to write my code, to analyze this data set. Both of those perspectives are valid and correct, but if we're thinking about this as part of the whole research integrity and educational ecosystem, we have to meet all those needs, right?

So we have to meet philosophy and we have to meet theoretical physics. And both of those things have to be addressed by what we're doing.

Priten: You were talking about agentic flows. I come from a philosophy background, and I was trying to figure out what a valid use of that in philosophy might be. [00:29:00] And that's much harder for me to imagine than in the hard sciences, and probably in the social sciences. I realize none of these answers are gonna come overnight. But one of the ways I've been framing it for some educators is that this is like the classic Ship of Theseus problem in philosophy: you have a wooden ship, you're replacing it plank by plank, and at what point is it really the same ship versus not?

And similarly with our writing and our research: at what point is it AI's — you know, as much as we can attribute possession to a piece of tech — and at what point do you lose ownership of your own research and writing? Are there concrete thoughts coming out?

What are some of the biggest questions about not just disclosing, but when it even makes sense to say this is yours?

Kari: The authorship question is so thorny. It always has been thorny, though, in the sense of: how many folks have gone through some level of graduate study and felt like they made a contribution to something their lab [00:30:00] or their supervisor was doing that wasn't credited with authorship in a publication or presentation?

So I just want to acknowledge that these tensions don't just exist for AI. They exist in other aspects of this ecosystem. I think what you have to do is really look at the core, irreplaceable elements of whatever that particular work or method might be.

And I think you do need to really ask yourself when doing this work: Is this aspect of it something that I need to be doing? I am by training a qualitative researcher. I do a lot of focus group and interview-based studies. I need to do the [00:31:00] coding for that. I am the research instrument. I don't want to outsource that to AI because that is the work.

I might be able to use AI in that context if I'm having trouble naming that idea in coding — I can get some ideas and so forth. But the actual work of doing that coding, that's the work. And honestly, that's the thing I like doing anyway, so I don't want to farm that out.

So I think these are the things. And for student development, one of the conversations I'm often having with students is: where are your skills? What are the things that you are trying to upskill on? If searching for literature is truly something you've already upskilled on and you want to use AI-assisted search tools to help with that, [00:32:00] sure, fine. As a professional librarian, that's totally fine with me. But if this is something you haven't spent a lot of time on and your skills really aren't up to the level of your study or expectation, that's an opportunity to spend some time with it and really learn to do that part of the work yourself.

And I think a lot of it comes down to that. I think the challenge is, especially on the research productivity side for those who are in tenure-track positions, there's a certain amount of production that's expected and required, and so I think there has [00:33:00] been continuing conversation about that and movement toward reconsidering those requirements across many institutions in a way that really focuses on quality and impact over quantity. I think probably what we're going to see is continued drive in that direction. But I have to say, that doesn't really answer the authorship question. I don't have an answer. I think you have to come to those places yourself as you are doing the work.

Priten: I'm thinking about the coding in particular, right? You've isolated that as an important part of the research process for you, something you need ownership over to feel good about your research. I've seen AI coding platforms that take survey data and interviews and will code them for researchers. And I don't think there's a view of them as cheating tools or whatever, right? [00:34:00] They're a standard part of the research process. So do you think this will all end up being individualized — what do you need in order to feel a sense of authorship — or do you think we might come up with some sort of cultural or community-based standard for some of these things?

Kari: I think we're going to see disciplinary-based standards around a lot of these things. I think that's really the place, and that is not happening right now. There is not consistency within discourse communities around what use is acceptable or what level of use is acceptable.

I absolutely am somebody who has experimented with those AI-assisted coding tools. I can tell you that not only do they make me, as a researcher, feel more disconnected from my work, [00:35:00] but I'm often not happy with how they actually categorize things; when I look at the output, I don't feel it represents the work.

The tools do continue to improve, so maybe my opinion will change. But I think there is certainly a difference depending on the kind of work that you do. And I do think we need to allow space for that, but that doesn't prevent us from moving forward with disclosure.

And I do have to say, as a librarian and interested party, what does that mean for interdisciplinary research contexts, which we're often being pushed more toward? I don't know. I think that's perhaps the next frontier of a really big cultural conversation that we need to have in the scholarly community.

Priten: I would love to share with the audience how they can see more about the work that you're doing, [00:36:00] stay updated on any of the results of conversations that you all are having, and if they have other thoughts that they want to share with you and the consortium, where might they go?

Kari: A few different places. You can find more about the work that we're doing at OCUL at ocul.on.ca, which is our website. We post regular updates about our progress on our exploratory projects and work that we're doing with AI, and information on the work to establish an international artificial intelligence disclosure standard.

The majority of that information is being hosted on the International Science Council website. And more information can also be found on the World Conferences on Research Integrity website. We will be moving to the second round of consultation on that work, which will really focus on [00:37:00] the content of disclosure — what we need to disclose with AI — which is part of why we'll be getting into some of those conversations in much more detail. And more information about my work on artificial intelligence disclosure can be found at aidframework.org, along with a helpful statement builder. So if you're looking for a way to help students, especially in educational contexts, semi-automate their disclosure process, that statement builder is really there to support that work.

Priten: Awesome. Thank you so much. I'm grateful to Kari for joining us and bringing a level of rigor and nuance to the AI disclosure conversation that is long overdue. Kari reminded us that we need to build systems that make honest, transparent use the path of least resistance. Her work on the AID Framework and the global disclosure standard is exactly the kind of structural thinking that we need to match the pace of technology.

Keep listening as we continue exploring the ethics of [00:38:00] education technology and pre-order my book for more on this conversation at ethicaledtech.org. Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.