TCW is a leading global asset management firm with over 50 years of investment experience and a broad range of products across fixed income, equities, emerging markets, and alternative investments. In each episode of TCW Investment Perspectives, professionals from the firm share their insights on global trends and events impacting markets and the investment landscape.
Cindy Paladines: Welcome to the TCW Investment Perspectives podcast. My name is Cindy Paladines, and I'm a senior analyst with TCW's Sustainable Investment Group. I'm joined today by Evan Feagans, a portfolio manager on TCW's Global AI Equity team. TCW manages equity portfolios with a global artificial intelligence strategy, some of which may have a sustainable investment objective.
Evan and I recently sat down with his co-portfolio manager, Beau Fifer, and my colleague on the sustainability team, Ed Mitby, to pen a piece for a sustainable insights publication on the prospects and sustainability challenges associated with artificial intelligence. I wanted to sit down with Evan today to expand on some of the team's thoughts on the sustainability implications of generative AI technologies. Evan, it's great to be with you.
Evan Feagans: Thanks, Cindy. It's great to be here.
Cindy Paladines: We know that there's been a flurry of attention associated with AI in recent months among investors, the media, policymakers, and the like. What's behind all the buzz?
Evan Feagans: It's been a whirlwind couple of months here. Obviously, the big moment was ChatGPT being released to consumers. Everyone has been aware of AI for years now, but this was the first widespread release of a generative AI product. I think most people expect AI to be able to do forecasting or predictive modeling, things fairly close to data science that people intuitively understand computers do better than humans.
When ChatGPT was released, it was the first generative AI product moving into the creative space that we normally think humans are better at. In this case, it's responding to queries the way a human would, but generative AI can also create music or art, things that were generally thought to be core to the human creative process. It is impressive stuff, but we don't view it as a singular breakthrough so much as a culmination of technological progress.
To run a large language model, or an LLM, you need a massive amount of tech hardware, compute power, memory, networking bandwidth, all of which have made huge progress and it's now powerful and cheap enough to run these models. Additionally, you need a whole lot of data to train those models on, and the internet happens to be a great repository for this and provides plenty of data to train the model and allow it to respond as a human would.
Overall, I think that the potential here is enormous, the buzz is real, and it's an investible theme for us.
Cindy Paladines: That's great. Certainly, the buzz is real. Evan, you brought up the sustainability perspective, and I think it's important to note that some concern has been raised that artificial intelligence technologies may increase inequality by replacing some workers in the workforce, for instance. What are your impressions of the role of generative AI models with respect to these issues?
Evan Feagans: I think it's a really interesting topic, and the concern is very understandable. When people think of technological progress, they think about machines replacing workers in factories, leaving more knowledge-based, white-collar jobs relatively untouched. An offset to this is that large language models and generative AI actually impact white-collar workers more. Blue-collar trades, construction, plumbing, those aren't really affected by things like ChatGPT.
Meanwhile, relatively standardized legal or accounting work is actually more likely to be displaced here. I think that's an offset to the inequality argument. The other point I'd make is that generative AI can help people accomplish the same tasks with less training and fewer skills. For example, to write code, you don't necessarily need a software engineering degree; you can ask a large language model for what you want the program to accomplish in English, or whatever your natural language is, and it can spit out the code to accomplish the task. Generative AI actually reduces the amount of technical knowledge and training required to achieve impressive results.
Cindy Paladines: You already started to reference some of the sectors that you feel could be particularly impacted. If you could expand on that point, what are some of the sectors that could be impacted for the good or for the bad?
Evan Feagans: We think really everything is going to be impacted in some way. The applications for productivity gains here are just too broad for any industry to be exempt. The most obvious area where people are expecting job losses is customer service: companies are already reducing the amount of human intervention needed on service requests. For chats and phone calls, a large language model can usually handle 95%+ of these queries on its own, without human intervention.
Another one that has actually been on the horizon for a while now is autonomous vehicles replacing drivers, so taxis and truck drivers. For truck drivers specifically, those have been hard positions to fill, and they aren't necessarily desirable jobs, being away from your home for days on end. Automation there might actually be a net benefit, taking over a job that humans aren't too keen to do.
On the positive side, I would reiterate that generative AI makes producing the desired outputs easier than it used to be, with less training required and more job mobility. Take software engineering roles: there simply haven't been enough software engineers on this earth to fulfill the needs of companies. I think we'll see a huge expansion of software and broader IT roles with generative AI taking hold.
Cindy Paladines: That's interesting. We've already begun to talk about the wide-ranging set of legal and regulatory issues at play. You mentioned autonomous driving, which might be impacted from a labor force perspective, but of course there are regulatory issues at play there as well. In our sustainable insights piece, we referenced a non-exhaustive list of four key areas where these issues are clearly surfacing as a result of the advent of AI. Could you summarize these key areas for our listeners?
Evan Feagans: Sure. There are four main points I want to make here, so I'll go through them quickly. The first one is input copyright issues. Generative AI models are trained on massive amounts of data, as I said, mostly scraped from the internet. What is publicly available on the internet is generally covered under fair use. You're allowed to read an article and summarize its content in an essay or report without getting permission from the owner to use the material. But when a model scrapes that material and spits out a reworded version of it, is that still fair use? It's unclear where that falls.
The second one is output copyright: can you copyright the output of the model? I think that's interesting because I don't think all material is going to be 100% human-written or 100% generative AI-written; it'll be a combination. People will use generative AI to help make their point, or maybe to do a first draft. It's unclear whether that's copyrightable material.
The third one is output liability. Right now, Facebook, for example, is not liable for what its users post online, thankfully for them. But is a generative AI company liable for what its model spits out in response to a query? That still hasn't been resolved in the courts.
The final one, the fourth one, is just the issue of misinformation.
Generative AI models are really good at producing words, video, and audio that sound like they came from someone else. Think about the misinformation campaigns that have already happened on social media. If you're leveraging generative AI to make it seem like someone said words they never did, it can become a powerful tool for harm, so that's a real concern going forward. Overall, I would also say that for most of these legal and regulatory issues, it's still too early; they just haven't been worked out. We're expecting a lot more to come, with laws being made and cases working through the courts, so most of these issues haven't been resolved yet.
Cindy Paladines: You mentioned the role of multiple public entities in this process of developing policy guardrails. The OECD, the G20, the U.S. White House and Congress, and multiple entities in Europe have already prepared policy guidance documents to help support the role of public and private actors in addressing some of these challenges that you referenced. Companies, obviously, can also play a role here.
In your view, how can the most sustainable AI companies manage any adverse consequences responsibly? What are some of the best practices that you're already seeing in the industry?
Evan Feagans: Companies following the law is really the bare minimum of what we're looking for, and as I just mentioned, a lot of these laws don't really exist yet. What matters is how we expect these companies to behave in the face of an underdeveloped regulatory landscape. One of the things we're seeing is that some companies are steering clear of using copyrighted material to train their models.
They're only using content in the public domain, or content that they've acquired the rights to. This is probably sacrificing some efficacy. Generally, models are more powerful the more data you train them on, but it creates a business that's sustainable no matter how the regulatory landscape unfolds. We think that's a really smart way to insulate their businesses.
Cindy Paladines: That makes a lot of sense, that they're preempting some of these challenges by creating business models that circumvent them. You also mentioned the need for transparency in business practices. What are some of the good transparent business practices that you've already been seeing?
Evan Feagans: Yes, transparency is also important. Again, in the absence of clear regulation, we think one of the best approaches is for companies to be completely open about how they're handling these issues. Companies should let consumers know exactly when they're consuming AI-generated content and exactly to what extent AI was used: was it used to clean up and help edit the video, or to create the video from scratch? There's a big difference there.
Then, in general, we also think companies should let consumers know what their broader ethics policies and procedures are, and ideally those should be as rules-based as possible to minimize subjectivity. That's just the start of what we're looking for, but there's really no silver bullet here to prevent issues. Ultimately, we're not even going to know all the issues until they arise, unfortunately.
Cindy Paladines: It's a quickly evolving landscape, so it's hard to keep up. Although I know that we're excited by all the progress that we've been seeing recently, of course, from a sustainability perspective, there's been some talk about the intense energy and resource use required to develop and expand AI capacity in the marketplace. Obviously, for us, this is an important issue as well, as we think about sustainable investments in generative AI.
How are you thinking about the role of the intense energy and resource use required to power these technologies?
Evan Feagans: Yes, it's a great point. As I mentioned earlier, one of the reasons generative AI is coming about now is that technological capabilities have increased. It requires a lot of infrastructure, particularly semiconductors, and building semiconductors emits a lot of carbon, uses a lot of water, and requires a lot of power as well. It's not great from a sustainability perspective in the short term.
However, in the longer term, you think about what you can do with those models, and AI is really good at optimizing efficiency and improving resource management in pretty much everything. We're looking at industries like agriculture, transportation, and energy as areas where companies can really reduce their use of resources using AI. For example, in agriculture, some companies are developing smart weed control products using AI.
They're towed behind a tractor; they identify the weeds, separate them from the crops, and target only the weeds with herbicide, rather than spraying it over the entire field. You use far less herbicide, which means the crops and the groundwater have less exposure to it, while maintaining or even boosting food production. That's just one example of how AI can help from a sustainability perspective over the long term.
Cindy Paladines: It can help over a multitude of sectors, as you've mentioned, which I think is also an exciting prospect, but still, there are some naysayers that feel that the prospect of computing technology becoming, in fact, more intelligent than humans, is a bit of a sobering thought. Do you have a more optimistic take, Evan, that you can leave our listeners with?
Evan Feagans: Yes, absolutely. I think AI is consistent with other technologies: ultimately, it's a set of tools aimed at making our lives better, accomplishing tasks better than we can on our own. I don't think it's necessarily different from the internet or other technologies that came before it. And from a sustainability perspective, I think AI is a huge opportunity, again, in managing resources more efficiently.
It really can help reduce our environmental footprint, without sacrificing too much output and quality of life for humans. I think it's an important step in our path towards a more sustainable future.
Cindy Paladines: Evan, thanks so much for leaving us with such an optimistic take. Thanks for joining me today.
Evan Feagans: Yes, thanks, Cindy. It was fun.
Cindy Paladines: If our listeners have any further questions on this exciting topic, feel free to reach out to us.
For more insights from TCW, please visit tcw.com/insights.
This material is for general information purposes only and does not constitute an offer to sell or a solicitation of an offer to buy any security. TCW, its officers, directors, employees, or clients may have positions in securities or investments mentioned in this publication, which positions may change at any time without notice. While the information and statistical data contained herein are based on sources believed to be reliable, we do not represent that they are accurate, and they should not be relied on as such or be the basis for an investment decision. The information contained herein may include preliminary information and/or "forward-looking statements." Due to numerous factors, actual events may differ substantially from those presented. TCW assumes no duty to update any forward-looking statements or opinions in this document. Any opinions expressed herein are current only as of the time made and are subject to change without notice. Past performance is no guarantee of future results.