Career education is a vital pipeline to high-demand jobs in the workforce. Students from all walks of life benefit from the opportunity to pursue their career education goals and find new employment opportunities. Join Dr. Jason Altmire, President and CEO of Career Education Colleges and Universities (CECU), as he discusses the issues and innovations affecting postsecondary career education. Twice monthly, he and his guests discuss politics, business, and current events impacting education and public policy.
Jason Altmire (00:05):
Welcome to another edition of Career Education Report. I'm Jason Altmire, and I think one of the topics, probably the topic, we get the most requests to do an episode on is artificial intelligence. It's affecting all aspects of society. It's hard to believe that ChatGPT first came to public view in November of 2022, which is not that long ago. It seems like something that's been in existence for a long time. But we wanted to find the perfect guest to have this discussion, especially AI as it relates to higher education and the challenges therein.
(00:44)
And so our guest today is Dr. Joe Sutherland. He is, among other things, a noted technology executive. He served as a White House assistant under President Obama. He's an educator, he's a highly cited researcher, and, most important to our topic today, he's the inaugural Director of the Emory Center for AI Learning. So Dr. Sutherland, thank you for being with us.
Dr. Joe Sutherland (01:12):
Thank you so much for having me.
Jason Altmire (01:15):
I think you're one of these guys, when you look at your resume, it makes the reader feel inferior. You have done so much. I mentioned your White House service with President Obama; you have a PhD and two master's degrees from Columbia, and you got your degree in political science from Washington University in St. Louis. But you've also been a technology executive. So I wanted to start by asking about that experience working in the tech world as an executive for companies like Amazon and Cisco, and you also founded your own tech companies, very successfully. How has that informed your work in thinking through what the future is going to look like with AI?
Dr. Joe Sutherland (01:59):
Thank you for the question. Also, if you want to come everywhere with me, I would love that.
Jason Altmire (02:05):
I'll give your introduction wherever you go.
Dr. Joe Sutherland (02:09):
The other funny part is, we were talking about this earlier, but it took me, what, seven minutes to log into this software. So I struggle with the same problems with technology that everybody else has. I actually think that that's part of what brings a little bit of a different perspective.
(02:24)
The way I've looked at artificial intelligence throughout my career has been as a tool to help us achieve some sort of outcome or result. I think where we go wrong is when we look at artificial intelligence as a technology in itself, something worth studying simply as an end in itself. And I think that there are lots of valuable insights that could be generated by studying these technologies, but I think really where the opportunity lies is in applying these technologies to really, really difficult questions. And so when we look at the Center for AI Learning at Emory University, which you very kindly introduced and mentioned, when we bring faculty in, and to this day, as part of our AI for Humanity initiative, we've hired 46 tenure-track faculty, we bring them in as experts in a subject matter area first and experts in artificial intelligence second.
(03:15)
And the reason for that is, when we look at all the experience we have at Emory, a highly ranked private liberal arts university with expertise in nine different schools, including one of the largest healthcare systems in the country and a top-ranked business school in addition to the college, we want to make sure that we're bringing all these strengths in the liberal arts, in understanding social justice, in healthcare, in business, et cetera. We want to make sure that we are leveraging those things with the best possible tools.
(03:44)
And so you asked me, across my career, what are the insights that I've had? And I really think it's just this idea that applying AI, that's where it's at, not necessarily developing artificial intelligence as an end in itself.
Jason Altmire (03:58):
When AI first came into public discussion, as I mentioned, about two years ago, the concern was immediately raised in academia, "We have to find a way to stop this," because there was resistance to the idea that students could use it for nefarious purposes, basically to cheat in all kinds of ways. But now, of course, people have come to their senses and realized AI is not going anywhere, it's not something you can stop, and this is only the very tip of the iceberg; there's so much more to come that we can't even conceive of. And now we're turning to how we can use AI to our benefit in society, and, for our topic today, in higher education. How can schools and instructors use AI for good, to help students and to help them do their jobs more effectively?
(04:53)
So that led to the creation, among of course many other things at Emory University, of the Emory Center for AI Learning, which you lead. So maybe talk a little bit about the mission of the organization. I think an important consideration whenever people think about the mission of an organization is where the funding comes from. And do you focus on making life better at Emory, using AI for your colleagues and leadership at Emory, or is this something that you're looking to expand into the community and elsewhere?
Dr. Joe Sutherland (05:28):
Great question. So the Center for AI Learning focuses on two things. One is providing opportunities for AI experiences: experiential learning, collaboration with other types of learners, opportunities to apply AI skills to experiential learning projects. So we actually partner with a lot of external organizations who will bring projects into the center, where we'll bring faculty and students together to go and work on them.
(05:56)
I think one of the best ways to learn is through actually collaborating with your peers to solve an unknown and ambiguous problem. A lot of times in class, I'll give these packaged data science problems, and it's a problem, but you don't get the full experience of having to work with a client: helping them translate their problems into real data science issues that you can solve with data and methodologies, bringing the passion and expertise and framing to those problems so that you can help them realize whether they're even the right problems to begin with. There are so many components that wrap around this necessity of applying artificial intelligence to the right problems. And I think the best way for students to learn those is through experiential learning, and I think the best way to have faculty collaborate with them is through these types of projects.
(06:45)
So that's one thing we focus on: helping anybody. You could be a faculty member, a student, a staff member at the university, a community member anywhere in Atlanta, the state of Georgia, the country. You could show up at one of our workshops and we will treat you just the same as everybody else.
(07:03)
There's a second component, which is that we are building a robust intellectual community of AI scholars, the students working with them on those research problems, and other community members in industry who are interested in sponsoring research associated with those problems here at the center. So I mentioned we hired these 46 tenure-track faculty, and what we've done is actually place them in every school and department where possible. And so they have kind of a matrixed relationship. They're maybe a professor of biology or biostatistics in a department that's in the college, and then they also cross-report into the center as part of the AI for Humanity faculty community.
(07:48)
How do we fund those? I can't give the full details of how it works, but it is a blended model. One of the ways we were able to secure the funding was through one of these stories: at some point in the past, there was a bioassay technique that was widely used by people in biology labs doing research, and a new one came out that was superior. And if the labs weren't using that newer bioassay technique within four or five years, they actually went unfunded. So having access to the latest and greatest tools is one way for the university to remain competitive. And so through that story and the ideas behind it, and of course I've talked a little bit about the positioning of Emory University when it comes to applying these technologies to the hardest problems for the good of humanity, another big piece of it is just, how do we remain competitive as an institution by investing in our faculty, our students, our degree programs, and our community relationships to be able to do those things?
Jason Altmire (08:54):
And Emory has a certificate program on AI workforce readiness. Why would one want to seek that certificate, and what could you do with it when you've completed it?
Dr. Joe Sutherland (09:06):
So the Workforce Readiness Certification Program happened because I went down to our campus in Covington, Georgia, the Oxford campus, where you can actually get a two-year degree. A lot of Emory University students will start there and then end up matriculating at the university. It's an amazing campus, nice and quiet and serene. So I went down there and met with their local Chamber of Commerce, because the dean had organized a meeting with them. And I asked, "How many of you have used AI and are familiar with AI technologies?" Zero hands went up in the room, and I realized at that very moment, "Oh, wow, we're kind of in a bubble here." We're all talking about AI at Emory and other peer institutions, but the community hasn't had access to understand it.
(09:58)
And so we ended up raising money, partnering with the Rowen Foundation, a statewide foundation that's interested in founding a new, kind of like a Research Triangle Park here in Georgia, focused on commercialization of university technologies, co-location with businesses, and development of the workforce, and with the Georgia Chamber of Commerce. The spiel there is always that we're interested in making Georgia the best state to live, work, and raise a family. And to do that, you need the latest and greatest skills.
(10:25)
And so we put together this vision: why don't we go out to the community? Why don't we go and visit 20 communities around the state? Let's do a little bit of demystification, let's share tools with them that they can use. So actually, if you come to these sessions, you get free access to Emory's AI platform, which has sort of a ChatGPT-type interface; it looks like a large language model. And then, how do you keep in touch with us to develop your skills further, if you wanted to use the skills you learn at these seminars to go and get a job, or at least to enhance your efficiency in your current job, or to earn more by doing more things with your time? That was what hatched this idea that maybe we should take all these people who are interested in learning, build a community, and help certify them in the latest and greatest AI technologies that they can use in their work and lives. And that was the genesis of this workforce development program.
Jason Altmire (11:16):
I saw in one of your presentations you used a very interesting analogy to 50 or 60 years ago, when the calculator first came along. And of course, students and people in academia were very concerned that students were no longer going to get out their yellow legal pads and do long division, that it was going to hurt their learning, and that it was something to be discouraged. And of course, as time played out, it hasn't worked out that way; it's been very beneficial. And AI, in your view, and I think in many people's view, is very similar.
(11:51)
But now we're at the very beginning of thinking about what AI can mean, the benefits, and what it can look like, and we think about it in the context of task automation. In higher education, that would be designing the quiz for you, writing the questions on the test, maybe helping with grading, lecture materials, course design, things like that. But there's so much more that can be done today and that is still to come. Can you talk more about how scholars, professors, lecturers, and administrators in the higher academic setting can use AI now and in the future?
Dr. Joe Sutherland (12:28):
There's a preceding question I usually get before this one, which is, "How is this affecting students' ability to learn?" I mean, we're all academics, we're all educators, we're all interested in providing our students with the ability to do one of two things, right? One is producing a high-quality output with the skills we give them; the second is learning critical thinking skills that are going to help them solve problems they haven't seen before. So the first one is more efficiently solving problems that we have seen; the second one is inventing new ways to solve problems that we haven't yet seen. And what a lot of people ask me is, well, aren't ChatGPT and all these large language models preventing us from being able to provide our students with critical thinking skills, because it essentially just offloads the thought process to the model? And I understand the concern.
(13:18)
What I've actually seen is something a little bit different. So this is getting to your question. Students use these tools, at least as far as I've observed, as kind of like learning coaches. What I see them do is interact with the tool, maybe talk to it a little bit, before they go to the TA. It's almost like it helps them build self-esteem and confidence in their own abilities before they go and talk to the teacher, because I think a lot of them are actually worried that when they go and talk to the teacher, they're going to sound dumb. And especially students with English as a second language who are at English-speaking universities are worried that they're not going to be able to articulate their ideas well enough. So I've seen them interact with these models as sort of like coaches.
(14:05)
Another way of thinking about it is as a conversation partner. I'm very verbal in the way I process; when I'm developing a new paper idea, I love co-authors, because we go and we talk, we think through it, we whiteboard, and I need that to be able to put my ideas forward in the best articulated way, just like you and I are having a dialogue right now. So having access to tools like large language models, I think, helps students do that.
(14:30)
From an academic's, or I guess an educator's, perspective, the problem I see is being able to draw the line between when they're using it for thought development or coaching versus when they need to use it as part of an evaluation. And the evaluatory mandate that falls on the faculty is really going to become more difficult in the short term, because we're going to have to think through how we can evaluate our students' development of critical thinking skills without the old evaluatory methods we used to use, which relied a lot on rote memorization and articulating repackaged ideas that may not be too new, the five-paragraph essay. So I can go deeper on that, but that's the short answer.
Jason Altmire (15:19):
And many schools, in addition to businesses, associations, and government organizations, are now creating standards for acceptable use of AI. In an academic setting, from your work at Emory and in talking with other schools and peer institutions, how would you define unacceptable use of AI? What are some examples?
Dr. Joe Sutherland (15:43):
I think it still falls back on the basic principles that we have in education. It comes back to honesty, it comes back to transparency, it comes back to knowledge-seeking. I think if you can display those characteristics and values in the work that you do, then there shouldn't be an issue in using the models. So a good example here: if you're trying to develop new work that has integrity and intellectual coherence, don't just let what the model generates go unedited. I think a lot of students just pop it out, and it looks like it's good, but when you read it, it doesn't actually hang together. And I could go into further detail about what it means to be honest, right? Well, if you're using the machine, tell everybody that you used the machine, and then let that be reflected in the evaluatory tools that we have.
Jason Altmire (16:35):
I wanted to ask you about the future, and nobody can predict the future; if you try, it's just a guess. But you are more qualified than anybody to talk about the future and what it might look like with AI. So I would just ask: if we were to fast-forward 10, 15, 20 years down the road, once AI is fully integrated, people are comfortable with it, and it's part of their daily lives, what is that going to look like?
Dr. Joe Sutherland (17:02):
It's going to be another tool in the toolkit. To go back to that calculator example, I could see AI being something that you would use as an everyday tool to help you make your grocery list a little bit faster. I mean, instead of having to sit down and sift through all the ingredients you have in the fridge and look through all the things you have in the pantry, let's use a tool that can take the 30 minutes we would've spent, do it in five seconds, and then let us spend those 30 minutes with our kids and our families, right?
(17:32)
I think there'll be other opportunities where, instead of having us develop research proposals ourselves (if you've ever had to do a grant application, and I'm sure you have, it's like speaking another language just to get through the system; I'm so blessed to have research support staff), I could see us having tools that help automate that logic we have to go through, so that we can be more efficient and effective in applying for more grants, or larger grants, or higher-quality grants, in less time. So I think that's really the value proposition of these technologies: they can help us do more with the time that we have, and potentially produce things at a higher quality, because we have more time to work on them.
Jason Altmire (18:15):
You have researched and written in areas exploring the utilization of AI and machine learning in congressional lawmaking, regulation, state policies, and development economics. You and I have both worked in the political arena, and politicians, of course, are reactive; they're often slow on things like this, and they allow political considerations to cloud their judgment. So what do you see as the future regulations that might spring up around this, and how do you think the political apparatus will respond?
Dr. Joe Sutherland (18:55):
You're keying in on, I think, one of the key concerns that generated the impetus to start the U.S. AI Safety Institute at the federal level, through the National Institute of Standards and Technology. You're also keying in on this interest that we're seeing from the business community. Recently I saw MasterCard's chief technology officer put this really well. He said, "We are not sure what to invest in as these technologies are moving so fast, because we're actually worried that once we invest and build out some pipeline, it's going to end up being regulated, and it would actually cost more to undo it or change direction than otherwise."
(19:38)
And so the problem is that these things are developing so fast and being invested in so rapidly (and I should mention the rapid investment is happening largely within the private sector at the moment, less so in the academic sector, although there are overlaps) that the lawmaking can't keep up. The secondary problem I'd point out, which I think a lot of people miss, is that we already have a problem where the lawmaking is actually faster than the implementation of what the lawmaking was originally intended to do.
(20:09)
So you have this sort of threefold problem of differing speeds. And there are things that we need to do from a blocking-and-tackling perspective, even just at the state and governmental procurement levels: how does the government even understand how to appropriately evaluate the technological tools being sold to it to provide those services? That's more of an administrative problem than one of regulation or lawmaking, but it is still something that will help accelerate the execution that's based on that administration of lawmaking.
(20:39)
The long and short of it is we can't over-pivot and we can't overly constrain, because these things are going to be transformational in the powers that they have. At the same time, I think it's useful to call out the bad actors who are using it inappropriately, and that could be done through certain types of legislation and regulation. I think it's important to make sure that we protect consumers appropriately; that can be done through certain types of legislation and regulation, and through cultural norms as well. I mean, one of the easiest ways to solve a problem is for us all to agree to solve it; it doesn't necessarily have to go through the regulatory or lawmaking process to be solved.
Jason Altmire (21:18):
If somebody wanted to learn more about the Emory Center for AI Learning or get in touch with you and learn more about what you're researching and writing and thinking about, how would they do it?
Dr. Joe Sutherland (21:29):
Easy to do, just go to our website, ailearning.emory.edu. You can also find more on Instagram, it's @joesutherlandinsta, or you can connect with me on LinkedIn. We do respond to every inbound request that we receive.
Jason Altmire (21:48):
Our guest today has been Dr. Joe Sutherland, Director of the Emory Center for AI Learning. Dr. Sutherland, thank you for being with us.
Dr. Joe Sutherland (21:56):
Thank you very much.
Jason Altmire (21:59):
Thanks for joining me for this episode of The Career Education Report. Subscribe and rate us on Apple Podcasts, Google Play, Spotify, or wherever you listen to podcasts. For more information, visit our website at career.org and follow us on Twitter @CECUED, that's @ C-E-C-U-E-D. Thank you for listening.