Join the Artificial Intelligence Student Organization, RAISE, at the Crummer Graduate School of Business and Rollins College, to discuss the reality and implications of AI in the business and academic world.
RAISE Podcast 2.2
===
Pristine: [00:00:00] This is the AI on Campus and Beyond podcast. Today we will be talking to you about how AI will affect our future.
Casey: We have Dr. Myers, Associate Professor of Computer Science here at Rollins as our guest.
Pristine: I'm Pristine, your co-host and co-president of the AI student organization, RAISE. I'm self-designing a major here in AI.
Casey: And I'm Casey, your other co-host. I am the graduate-side co-president of the student AI organization, RAISE, and I currently work at the AI Edge Center with Dr. Leibowitz.
Pristine: Dr. Myers, would you like to tell us a little bit more about yourself?
Dr. Myers: Sure thing, Pristine. My name is Dan Myers. I'm an Associate Professor of Computer Science here at Rollins.
I started here in 2014, so this is my 11th year as a professor. I grew up in East Tennessee in a town called Kingsport, moved to Miami when I was in high school, and ended up going to the University of Florida in Gainesville. That's where I first got involved with artificial intelligence. I ended up writing my master's thesis at UF, [00:01:00] applying neural networks to remote sensing image data.
That led me to get a job at Sandia National Labs in Albuquerque, where I worked with a team that did artificial intelligence, computer vision, and image processing for national security applications. So we were looking at things like satellite imagery, finding objects of interest in satellite data. We worked on some projects related to circuit imaging, where teams of electrical engineers would basically take chips that the United States buys that go into our military hardware and things,
grind them down layer by layer, and then take images of the circuit structure in the chips with an electron microscope. We worked on being able to take those images and actually construct the mesh of images, the alignment of images, so that these engineers could then go in and reconstruct what circuits were actually in those chips at the transistor level and make sure that they did exactly what they were [00:02:00] supposed to do and didn't do anything else that they weren't supposed to do.
I was at Sandia for about three years and then decided I wanted to go and do doctoral work. So I went to the University of Wisconsin in Madison, where I did operations research and focused primarily on computer performance modeling: building mathematical models of computer systems to be able to answer questions about performance and efficiency.
I was able to do some work during that time period with Google and build some models for storage infrastructure in their backend data centers, which was a very cool experience. I also got some experience teaching: teaching programming to high school students and teaching courses at UW in operating systems.
And then that led me to kind of want to seek an experience where I could be a full-range professor and have the opportunity to teach a lot of different classes, to be able to work with students, and then to also be able to do my own research projects. So that's what I've been able to do here at Rollins.
So right now I'm doing a lot of [00:03:00] AI stuff. Previously, I've done research with students on applications of AI to teaching and programming-related code applications. I also have a research group called the Community Impact Lab, where we work with nonprofits from Orlando and do impact assessment projects.
That work is actually very qualitative, where we're often bringing stakeholders together and we're doing a focus group process to collect their experiences. And then we mine those experiences for themes and write white papers that go back to our partners that they can then use for fundraising or strategic planning.
We're now actually looking at bringing AI models into that process and seeing if we can use AI as a tool to help analyze some of this community impact data.
Casey: Awesome. What kind of projects have you done with the Community Impact Lab in the past?
Dr. Myers: So we worked with a group called the Victory Cup Initiative, which is a nonprofit development group that actually holds a pitch competition.
And they wanted us to work [00:04:00] with some of their previous participants to learn about their experiences going through the program so that they could then turn that into a better pitch to recruit more participants to come be in their program. We worked with another group called Crave, which is a really interesting leadership development program that has an ecumenical faith-based component.
And they had been around for about five years. They were interested in the question of, well, what is working well about our program at this point? Are there things we should think about changing to be able to reach a broader set of leaders than we're currently being able to reach? So we were able to do some sessions with their leadership team and with their recent participants and come back and talk about recommendations for like, what are the core strengths of your program?
And it turned out that the ecumenical faith-based piece was actually a very important part of making meaning for the participants in the program.
Pristine: So you said you'd been working with AI a lot recently. How do you think AI systems can be designed to [00:05:00] work with humans instead of working against them and replacing them?
Dr. Myers: Yeah. One of the things I talk a lot about whenever I'm having a discussion or a workshop about AI is this idea of moving from delegation to collaboration. So when you give somebody a tool like ChatGPT for the first time, their natural instinct is to ask it to do work for them: "Hey, ChatGPT, write this paper for me, write this email for me, write this program for me."
Sometimes that's useful, but what we are learning is that delegating work to AI is often like a candy bar: it feels really good in the moment, but in the long run it can't be the only thing that you do, and it's kind of bad for you if you rely on it too much. So what I talk a lot about with other faculty and with students is this idea of thinking about AI as a collaborator or as a co-intelligence that exists to help you solve problems in your work, kind of like a partner [00:06:00] that you have as part of the creative process.
So to come to your question, to help systems do that better, first of all, we as users need to be thinking in a collaborative mindset. And that extends up through organizations. So if you're an organizational leader and you're looking at AI and you're rubbing your hands together, thinking about how that's going to help you eliminate as much of your workforce as possible.
Well, there's not really a lot we can do with that kind of attitude. If you're looking instead and you're thinking, how is this going to help us achieve our mission? How can we leverage AI to do the things that we want to do better, whether that's a nonprofit mission, like we might have here at Rollins with the educational programs, or whether that's a more for-profit mission where you want to create value for your customers, then you can start to think productively about, well, what is the AI good at?
What are humans good at? And what is the connection good at? Because those are three [00:07:00] different things where we might have some tasks that are best suited to an AI, some tasks that are better suited to humans individually without much AI support, and then some tasks that are helped by the collaboration.
On the technical side, as far as I'm aware, we haven't really looked at training AI models so that they are best positioned to be collaborators. If you've used a model like ChatGPT, it sort of has this default voice where it wants to be very helpful to you. And it's very cheerful. We often say it's like an intern that sometimes forgets to take its meds.
Casey: I've heard a golden retriever.
Dr. Myers: Yeah. And we don't know yet: is that the best type of personality to be used as a collaborator for a lot of different tasks?
Casey: I've heard this represented almost as AI coworkers. I think Dr. Leibowitz has sent me a few things talking about corporations actually developing [00:08:00] different personality models for AI and marketing and selling these different personalities as employees.
Dr. Myers: Yeah, and on the educational side, we're looking at the idea of tutors or bots that might be integrated into the course process as an alternate source of information that's always available on demand, 24/7, for students. Still early stages, but that's another case where we want to think about, okay, what is the right role
for an AI to have within the overall mission of delivering a good educational experience to students? Turning the whole thing over to AI doesn't seem very promising, but not using AI at all also seems limited. So we have to think about what is the right scope and role for the AI in the process.
Casey: Have you heard this term? What is it? Humanics? It's, I guess, this blanket term for where business is heading in terms of people having to work alongside [00:09:00] artificial intelligence and having to, as you're saying, collaborate.
Dr. Myers: I haven't heard that term specifically, but yeah, the concept has definitely been out there.
So Ethan Mollick is a professor at the Wharton School, somebody who's written a lot about AI and education. He uses the term co-intelligence, and that's the title of a book that he wrote that was one of the most prominent early general-interest works about this new wave of AI models. So we tend to lean towards the idea of collaborating with AI or using AI as a co-intelligence.
Casey: I think it definitely makes sense, then, that in education you're trying to make sure it's incorporated, because you're not really going to prepare students for the world they're entering if they don't know how to harness or leverage AI.
Dr. Myers: Yeah, one of the challenges that we see is that I like to say students have no idea what they're doing.
And that's not a criticism of the students, like y'all are very intelligent and you're working very hard. But we often have this expectation [00:10:00] that young people coming up and entering our programs for the first time are somehow naturally able to use all technologies. And what we're finding is that with AI models, that's often not really the case.
And then, in fact, it's often people who are more experienced, who already have core skills developed and professional standards that they've learned, who are better able to think about how to integrate AI into their own work. The challenge for a lot of our students is that you are now developing a lot of those core skills at the same time that you're also learning how to use AI effectively.
Pristine: As a professor, are you worried about AI replacing your job, or is that not a big concern for you?
Dr. Myers: I wouldn't say that I'm not worried. I think there's a low, outside chance; you know, it depends on how we see models continuing to develop. When I think about labor [00:11:00] and working conditions in higher education,
the challenges are less about something like AI technology and more about basic costs and efficiency and a desire for productivity. We've already seen over the last 20 years at many institutions that the pivot has been away from the full-time traditional faculty member towards this revolving cast of part-time instructors.
Many of those part-time instructors are very good at the basic nuts and bolts of teaching. And they're very dedicated and they do a good job in their classes, but they fundamentally don't have the same institutional status and security and compensation that we expect a traditional professional educator to have.
So the challenges that we see are often driven by institutions looking to cut costs as aggressively as possible. You're certainly going to see opportunities in the future, not here at Rollins, I don't think, but at some places, [00:12:00] to roll out AI teaching very aggressively, whether the technology is really good or not.
And the cost of that is going to be the students in those programs who won't learn as much as they could have learned. They will not be as prepared for the workforce as they might have been with a more robust program. And ultimately, completion rates are going to go down, and you're going to see students wasting a lot of time and money on degrees that ultimately don't have value.
So while I am kind of in the long term positive about the role of AI in higher ed, I see that as an integrated role that's part of a traditional model with traditional faculty, traditional standards, the liberal arts vision of education, somebody who's comprehensively educated.
Casey: Talking about that revolving door of professors that universities are pivoting towards, that reminds me of something Pristine and I saw at an AI presentation recently, [00:13:00] where replacement isn't actually as large of a concern as people believe.
A lot of the tasks AI can fully replace have already either been outsourced or are in the process of being automated.
Dr. Myers: Yeah, that's a good point. There is the notion of comparative advantage in economics: even if we have really powerful AI models, that doesn't mean that we're automatically going to be in a position, or even want, to hand over huge chunks of the economy to those AI models. There may be limitations around how much they cost or how available they are, and the resources that they require might mean that we can only use them for certain tasks.
So even if AIs got really good at, say, teaching, or got really good at medical diagnosis, that doesn't mean there won't be roles for humans in those sectors for the foreseeable future.
Casey: One thing I do see, though, is that in this presentation they mentioned that even though it's not going to replace people, it closes the performance gap.
So the [00:14:00] performance gap between the top half of performers and the bottom half has shrunk by approximately 50 percent with the introduction of AI. It just reminds me of the introduction of the assembly line: professional, fully tenured, fully compensated professors may not have as much security once somebody with less training can do the same job almost as well with AI assistance.
Dr. Myers: Yeah, that's one of the interesting takeaways from that early research that I mentioned about the organizational aspect. There are now multiple studies that have shown the people that benefit the most from AI tend to be the least skilled or least experienced performers: if you're a high performer, AI might boost your productivity to some extent, but not as much as it's going to benefit somebody who's at the lower end of the performance distribution.
And like you said, that closes the gap. The comparative advantage you have as a high performer in, say, software development is now not as great as it would [00:15:00] have been previously. Also, for software, I think we're still observing what impact that's going to have on junior developers. So if, say, an experienced developer who's been in the industry for a few years can now do a little bit more, does a company need to hire entry-level junior developers anymore?
That's something that we're watching and paying attention to: whether we see positions for new, first-time entrants into the field start to decrease.
Casey: Interesting. And I can see, I mean, companies are always, I think, overly willing to do that: replace someone making six figures with two new professionals making half that.
So, going back to education, have you really had a chance to review, in any kind of thought-out way, or to take a survey of how other professors within the department are using AI? What's your takeaway from the department here at Rollins?
Dr. Myers: Yeah, I've been pretty involved in a lot of [00:16:00] conversations going back two years.
So to give you a sense of the timeline, okay, we're now just about two years past the public release of ChatGPT, which was at the end of November in 2022. The week after ChatGPT was publicly released, I sat down with Dr. Forsyth from the English department, showed him what the model was, and we experimented with feeding questions into it to see what kind of output it could produce.
The very next day, he emailed the entire English department to tell them what we had just talked about. So we've actually had a lot of people from a lot of different backgrounds looking at this issue almost literally since the very beginning. So at Rollins, I think, we're in a fortunate position to have both had some lead time to think about how this was going to impact our teaching and to have people from a broad range of perspectives [00:17:00] that are looking at how to use AI.
So some of our biggest leaders at using AI on campus are not from tech- or science-affiliated fields. We have people from English and from the writing studies program that are very engaged, people from communications, from business, and from physics that are very engaged. We've all been able to add little pieces of our own experiences and sort of build a mutual understanding of what works well for our context.
Pristine: So what is the most pressing challenge regarding AI, in your opinion?
Dr. Myers: In terms of pressing challenges, I can kind of think of three or four things. So kind of to run down the list: right now, integration into organizations is a big practical issue. We're at the point now, it's almost like the internet back in 1994, where we can look at this and say, yeah, this is definitely interesting.
Yeah, I could see ways this could be beneficial to organizations, to [00:18:00] colleges, to businesses, and so forth. But the technology, the infrastructure, is not quite at the place yet to enable mass adoption of this technology or mass increases in productivity. So I think about the internet in the 1990s. It was, you know, very primitive by modern standards.
We didn't have the idea of interactive web pages yet. Internet commerce barely existed. Social media didn't really exist. So it really took about 10 to 15 years to move from the early public introduction of the internet, which was itself, you know, decades on from kind of the research piece, to the sort of mass thing that we now rely on as our primary means of distributing and receiving information. We may be in a similar place with AI, where we can look at it now and say, yeah, it's definitely beneficial for some class of intellectual [00:19:00] workers to be able to boost their productivity by 10 to 20 percent.
That's actually pretty significant in terms of productivity-increasing technologies that we might deploy. But it's not going to radically transform entire sectors of the economy, like some people are predicting. So when we look at the organizational piece, I think the question is, how do we get people to experiment with AI?
How do we find the best ways to use AI for particular contexts? For example, here, that's looking at how do we use it for teaching? How do we adapt our classes to work well with AI while still teaching students the skills that we want them to have? We are making some progress. But there's still a lot of open questions.
Casey: I can see that. I think, my undergraduate was actually in history, and a lot of times technology will outpace the culture or the social organization aspect of it. One of the things I've been working on with Dr. Leibowitz is AI policy and strategy for large organizations and how to incorporate [00:20:00] ethics into that.
Pristine asked about challenges. Obviously there's the organizational structure you've just mentioned, but do you see ethics as a separate challenge or as part of that?
Dr. Myers: I think ethics is definitely related to that, and that was going to be another one of my list of three or four things. We're still getting a handle on how to use these models responsibly.
One of the early papers on AI and organizations talked about how GPT is a general-purpose technology. GPT is a GPT. It doesn't do one specific thing. And that means, well, what are its capabilities? We are still getting a handle on both the positive and negative things that these models can be deployed for.
We've talked about the possibility, for example, of misinformation, of bias in models, the fact that they are trained essentially from the internet and human-written work. So what we get is more or less an average of what people on the internet think about a subject, and that's not [00:21:00] always reliable or helpful or beneficial.
So on the education side, we have to have conversations with students both about the literal problem that a model can hallucinate something and can make up information that's very definitely not true, but also the idea that models can give you information that is somewhat true, but framed in a misleading way.
And that's more challenging and requires critical thinking skills to be able to work through.
Casey: How much of that problem do you think relates to the personality of the AI model wanting to be a people pleaser? And how much of it is a structural or technological problem with the hallucinations?
I mean, do you think that a personality that is less of a people pleaser would be less likely to hallucinate?
Dr. Myers: Possibly. People have experimented with the idea of a model that's more straight-talking, that tries less to placate its users or to always agree with you and is willing to be more argumentative. [00:22:00]
Now, the question is, can we do that in a way that's constructive and not just needlessly combative? I'm not sure about that yet. The other piece that matters on the technical side is the quality of the data. So when you train a model, for example, when you pull in images to train an image generation model, ultimately your conception of what, say, a doctor or a software engineer might look like is determined by how those things are represented in your data set.
So you have to think about, well, is the data that I have really representative of the totality of the world, or is it one specific slice or perspective on the real world?
Casey: I can see that being a very slippery slope. I guess the term is data sufficiency, but one of the things we look at when examining biases is that even just the language you speak brings a certain amount of bias to how you think, limiting your thought diversity.
And if we're basing models off of even a really well-curated set of data, [00:23:00] it's going to be thinking like a more narrow-minded version of ourselves.
Dr. Myers: Yeah, I think that's true.
Pristine: Kind of going off of that, and also the topic of ethics, do you think that there are things that humans will always be able to do better than artificial intelligence?
For example, morality is a human concept. Do you think it would ever be possible for us to train a machine that is more moral than a person?
Dr. Myers: I like the way you framed the question. So if you look back at the history of AI research, there's this long tradition of people saying a computer will never be able to do this thing that I care about, where that thing might be writing a poem or beating a human at a board game.
Or having a conversation with a human. And then eventually people work on it for a little while, and they do in fact build a computer system that can do those things, and then we turn around and we say, it turns out that wasn't that hard. So this [00:24:00] notion of what constitutes intelligence, what it means to be an artificial intelligence that has the capabilities that we associate
with human intelligence, is really this constantly shifting benchmark. Every generation kind of throws up one problem that it sees as the definitive test of intelligence, and then we eventually meet and then surpass that benchmark. And then we decide that really, it wasn't a test of intelligence as much as it was an engineering problem.
So I'm very skeptical of the idea that we could set up a particular task or test or quality and say, this is the thing that defines what it means to have human intelligence.
History has shown that we've eventually gotten very good at engineering computer systems to meet whatever benchmarks we set out for them.
Casey: I play Dungeons and Dragons, and AI actually seems very intelligent when it comes to that, because it has a predetermined set of [00:25:00] rules and conditions that it has to operate within. But when you look at ethics, it is often an ethical framework, a set of rules and standards and conditions. Your answer makes me a little bit more hopeful that some of that can be achieved or trained or incorporated into the way these systems operate in organizations.
Dr. Myers: There are some interesting questions you raised about morality, for example. Now, like, I'm 100 percent not a moral philosopher, but I have read interesting arguments talking about things like art. So could we have an AI that makes art? Well, clearly we can have AIs that make pictures or make audio or things like that, but are they art?
One aspect of the argument is that things like art or morality fundamentally exist because of dialogue between humans. That what counts as art isn't some sort of objective standard of like, it's a picture and it's pretty enough or it's interesting enough. Instead, it's this process that emerges as [00:26:00] humans negotiate the boundaries of what we consider to be art or what we consider to be morality, for example.
Now, I'm sure that's one perspective, and there are other perspectives other people might have too. But if you look at it from that perspective, a machine isn't really part of that discussion right now, whatever its technical capabilities. Determining something as art kind of depends upon the human-level conversation.
So until we have a machine that can actually be a full participant in a conversation like that as an independent actor, we might argue that machines really can't create art.
Casey: I see it kind of like a mirror. Art is the exchange of ideas through the senses; that's how I've seen it defined before. But you know, you're right.
It can't generate its own ideas, but in a way I think it allows the user, or someone questioning it, to look at this strange kind of reflection of the sum aggregate of [00:27:00] human ideas averaged across the internet. Whether that's healthy or unhealthy, it does provide some reflexive thinking, but I do think it will limit perspectives.
Dr. Myers: Yeah, there's a long tradition of people using computer systems in various ways to create art or music, or other kinds of creative work, where it operates more like a system: the creator, the human artist, is doing something to set the process in motion, and then the computer carries out that process.
So we could see a lot of AI generation systems as being in that spirit, where there is a very sophisticated, technically complicated creative process, but ultimately there's still a seed that's being supplied by the human behind the process.
Pristine: I know a really common fear when it comes to AI is like the thought of AI overtaking us.
And you mentioned before that every single [00:28:00] test that we've had for a machine in the past, they've been able to overcome. So, if AIs become better than humans at everything, and they also have their own personalities, as you mentioned before, do you think that's a real risk?
Dr. Myers: In theory, yeah,
right? If a bear can ride a bicycle in a circus, anything is possible. Am I worried about that happening within a relatively short time frame? My answer is more or less no. Do I expect to see disruptive changes from AI models? I think yes. I've talked with our colleagues about the idea that we should be viewing this as changes on the same scale as the internet.
So if you think about how much information, business, commerce, and media have changed because of the internet over 30 years, we should be thinking about transformations on a similar scale arriving as a result of AI. There's a famous saying that we tend to overestimate the impact [00:29:00] of technology in the short term and underestimate it in the long term.
So we might be playing that game a little bit with AI where people are trying to imagine how in two or three or five years, everything will be radically different. I'm not very sure that we're going to have dramatic changes in relatively short windows of time. But as we look over the scale of the next couple of decades, right?
The remainder of my life, more or less, then I do expect we will see very significant transformations.
Casey: That's reassuring. I again hope that it will be more of a mirror, more of the humanics, a collaborator. So we'll see, but you mentioned disruption, and I think it poses significant economic challenges. I mean, a 10 to 20 percent productivity gap would be very disruptive, and the ability to pay workers less if you close that performance gap is also a potential risk.
And you pointed out some of the long-term consequences in education to that: [00:30:00] lower degree quality, lower value actually coming from the education. I think that would permeate all industries. You might have these short-term benefits of getting better rates on employees and productivity boosts, but then long term, are you actually going to have a valuable organization or valuable product after
10 or 20 years if you do not incorporate it properly?
Dr. Myers: Yeah, one thing I'm concerned about in higher ed specifically is that we're going to see an emerging gap between institutions that are well prepared to integrate AI thoughtfully, that have resources to support experimentation and course development, that have experienced people
that already know how to teach well and can adapt their teaching to work well with AI, and I think Rollins is broadly in that category, so I'm happy about that, and then another class of institutions that are just going to get steamrolled. I think even if we stopped all AI [00:31:00] development, which we're not going to, but even if we did and we went just with the current generation of trained models, what we have is already powerful enough to have a disruptive effect on higher education.
In particular, that impersonal, large, lecture-based, in-person or online model where you try to serve content out to hundreds or thousands of people: I think that model of education is fundamentally challenged by AI, because quality control becomes impossible. The type of education that's going to be empowered in the future by AI is going to be more individualized.
It's going to be more personalized. It's going to be very project oriented, focused on learning through discovery, learning through building. Those are all things that AI will be able to actually help students do.
Casey: That's the big difference I've seen starting my graduate program here at Rollins. UCF was actually where I did my undergraduate.
And it definitely falls into the latter of those two [00:32:00] categories. It's a large institution, it's kind of sink or swim, and they focus on curriculum dissemination: they give you the information. They've almost completely pivoted to hybrid online courses after the pandemic. And I can see, in 10 to 20 years, the value of that education being almost nonexistent.
I mean, you look at history and you see that education after the first industrial revolution is when it started changing. This has been called the fourth industrial revolution, so maybe it'll have a little bit of that
dialectic effect where it swings back in the right direction.
Do you think AI could have intuition? Do you think that's a definable term?
That's something Dr. Leibowitz and I have been kind of discussing and looking into. I mean, is that even a valid question?
Dr. Myers: So could an AI have intuition? Again, kind of how you define that is clearly a tricky question. One thing that people have been looking at is the extent to which a large language model like ChatGPT can function as an [00:33:00] effective model of the world.
So somehow, inside all the weights and parameters of this neural network that have been trained through this very long process, is it simply memorizing information, or is it somehow extracting a higher-level, more abstract understanding of how the world works in a meaningful way? If the answer to that question is yes, if sufficiently large transformer-based language models really do learn something meaningful about the world broadly, then we could start to think about exploring the space of what they understand.
So Anthropic, the company that makes the Claude AI, has done some interesting research where they're able to go in and actually find the aspects, the parameter combinations in the model that correspond to specific concepts, and then turn those concepts up or [00:34:00] down. So they actually made a version of the Claude AI that is obsessed with the Golden Gate Bridge.
And every answer that it gave you was somehow turned around to be about the Golden Gate Bridge in some way. What was interesting, though, is that wasn't done through a prompt. It wasn't prompted to make everything about the Golden Gate Bridge. It was done internally in the model itself.
So when we think about intuition, I tend to interpret that as the idea of, can we explore interesting aspects of the model that bring about unexpected insights about the world? The answer is a very definite maybe.
Pristine: So if AI can have intuition and personalities, and I'm sure we could simulate emotions too, at what point, if any, would you think it's immoral to use them for free labor or anything like that?
Dr. Myers: I'm torn. On the one hand, we could say AIs are not [00:35:00] people, if we want to accept the idea that humans have some sort of unique moral status that's accorded to us; you know, biblically, people would say we're made in the image of God, and machines are not. Or we could simply say that if something is sentient, if it has the capacity to feel discomfort or to experience displeasure as a result of the actions it's required to take, then it would be immoral to use it
for our own ends, in other words. So it's sort of like a means versus ends thing, where, like, is it okay to use a technology to achieve certain ends, regardless of the effect that might have on the technology itself? We haven't really been forced to deal with those questions, because we haven't had the possibility of a technology
that could potentially, in some sense, have displeasure at the way [00:36:00] it's being used.
Casey: I think that's a pretty good area to wrap it up: the long-term scary consequences, if you want to call them that. Well, thank you guys for joining us.
We really appreciate your time, Dr. Myers, and I really appreciate your insights. It was pretty interesting.
Dr. Myers: All right. Thank you. It's great to be with you.
Casey: Thank you for joining us for this interview with Dr. Myers. This has been AI Across Campus and Beyond. If you are looking for more information, feel free to check out the show notes and see you next time.