Since 1986, Convene has been delivering award-winning content that helps event professionals plan and execute innovative and successful events. Join the Convene editors as we dive into the latest topics of interest to — and some flying under the radar of — the business events community.
Convene Podcast Transcript
Convene Series: AI and Event Contracts: Legal Risks, Compliance, and Ethical Best Practices with Jill Blood
*Note: the transcript is AI-generated; please excuse typos and inaccuracies
[00:04] Jill Blood: But I think the more sensitive the information, the more proprietary the information, the more careful you want to be about what tool you're using, how safe it is, and how you're using it.
[00:18] Magdalina Atanassova: Welcome to Season 6 of the Convene Podcast, brought to you by Philadelphia Convention and Visitors Bureau. AI is transforming contract management, but how can event planners use it safely and effectively? In this episode, we sit down with Jill Blood, VP, Deputy General Counsel at Maritz, to explore the intersection of AI, contracts, and legal risk. With a background in mergers and acquisitions and a reputation for being “surprisingly reasonable… for a lawyer,” Jill brings a wealth of knowledge on the topic.
She shares insights on how AI can assist with contract amendments, the legal pitfalls event planners should watch out for, and how to ensure compliance with evolving regulations like GDPR and AI governance laws.
We start now.
[01:20] Magdalina Atanassova: Jill, welcome to the Convene Podcast. It's a pleasure to have you here.
[01:24] Jill Blood: Hi. Thank you so much for having me. I'm happy to be here.
[01:27] Magdalina Atanassova: And we have a very interesting topic to cover. How can planners use AI for legal insights and amending contracts?
[01:38] Jill Blood: I think, first of all, as a lawyer, I'm sort of obligated to say that AI is not something you should use to replace your lawyer or replace the need to consult a lawyer. So for your initial contract templates, for important contracts, you definitely want to be bringing in your lawyer. That said, I think AI can be really powerful for idea generation, for helping you think of something. I was recently drafting a type of contract I had never drafted before. Somebody wanted to bring a helicopter to our campus, and I'd never thought about a helicopter-on-campus agreement. So I was writing an agreement, starting from a template we had, but then I asked a generative AI tool: what sort of language would you want for this type of contract? I didn't use that language exactly, but it helped me think through different types of things. It said you might need security there to make sure the area is safe, and you might want to cover who pays for that. I thought, well, that's a great idea. So I think it can be good for that kind of idea generation: what types of clauses might I want in a contract during hurricane season? Or, give me an example clause for this; I want a different form of attrition clause. Let it generate ideas. I think the critical part is not taking it one-for-one, but using your human brain and expertise to read it and applying your critical thinking skills. As a thought starter, or a way to confirm what you already know, those tools can be really valuable.
[03:13] Magdalina Atanassova: I like that. And are there any legal risks event planners should be aware of when integrating AI technologies into their operations?
[03:22] Jill Blood: I think so. I think there's a couple. There's some legal and ethical risk to using AI. I like to joke internally that I'm our resident wet blanket when it comes to AI. We have so many teams doing really exciting and cool AI stuff, and like I said, I use it personally. I think it's going to change the world once we really know how to harness its power. But it raises concerns about intellectual property. Are you using AI in a way that protects your IP but is also thoughtful about others'? So if next year, instead of interviewing me, you go into an AI tool, put in everything I've ever said, and say, generate a podcast in Jill's voice, that doesn't feel great to me. Even less so if I'm a celebrity who's on the speaking circuit and you're using my public works for that reason. I think you want to be thoughtful about that. The other big thing to think about is discrimination and bias. AI reflects back a lot of the content on the Internet, and so it can reflect and amplify that bias. So I think planners should be thoughtful. If you're using it for something like, I want a stock image created of a surgeon, it's going to be more likely to give you a picture of a white male surgeon, even more so than would statistically be true. And I think you want to be thoughtful there. You just want to be really aware of the tool, how the information you put in is used, and what happens to it. The example I always use is, if you want to create a numerical ranking of everybody's Social Security numbers, generative AI is probably not the tool for that. You don't want to be putting in data of that sensitivity if you don't have to. But for idea generation, things like that, it's really great. I think the more sensitive the information, the more proprietary the information, the more careful you want to be about what tool you're using, how safe it is, and how you're using it.
[05:16] Magdalina Atanassova: Yeah, that makes sense. And it brings me to my next question. So what should event planners look for in contracts with AI technology vendors, if that's an option at all, to protect their organizations legally and ethically?
[05:32] Jill Blood: Yeah, I think this sounds basic, but read the entire contract. You know, we've seen a few, and like any new technology, there are a million new entrants to the market. Some of them are amazing and thoughtful and safe; some of them are less so. So I think really having the conversations, vetting them, asking questions: how do you use my data? What happens to my data when I put it in? Do you train your model on my data? Really being thoughtful about it, and then customizing the tool to your use cases. At Maritz, we have some AI tools that are custom to us. They're very protected; information doesn't go out, and it's not used to train. We use those for things that maybe include more sensitive or more confidential data. But we also have given all of our employees access to a generative AI tool that's maybe not quite as locked down, and we encourage them to use that for things like idea generation, thought starters: I need language for a thank-you note, and this AI tool can help me break out of that writer's block. So I think different tools for different uses, and then just really asking the questions, talking to the vendor, not just taking marketing materials at face value, but really thinking about the contract.
[06:43] Magdalina Atanassova: And have you had this situation where you actually sit with the AI vendor and you look through their terms and conditions and you renegotiate them for your benefit?
[06:56] Jill Blood: Yeah, absolutely. I'm lucky that we have a very, very thoughtful AI team internally, so they've done a lot of the vetting of initial tools. What will happen is somebody within Maritz will identify a tool or will hear of something. They will have meetings and talk to those providers and see, what is the offering? Does this make sense to us? Is it useful to us? And then they and I will have a meeting with our information security team; we'll talk through the use, we'll meet with the vendor, they'll tell us what's going on, and then I'll review the contract. And we've negotiated those pretty heavily; most people have let us. I think the more custom and more expensive the tool, the more likely you are to be able to negotiate. With the off-the-shelf, OpenAI-type tools, you're sort of just clicking terms and conditions. But even if you're not going to negotiate the terms, understanding what they say is very important, so that you know, well, this tool is going to use my data to train its AI model. That might be fine. You just might want to adjust what type of information you input into the system. Maybe you're putting in your own information, but not your attendee information. So I think understanding what the different tools are and what their limits are, even if you don't plan to negotiate them, is really important.
[08:09] Magdalina Atanassova: And did you have any specific training with the team to kind of point them in the right direction, as in, this tool is for this purpose and that tool is for that other purpose?
[08:19] Jill Blood: You know, it's been really interesting and sort of a fun process, because this technology is so new that we pretty early on formed a cross-functional team that has me, technology leads, our innovation lead, information security, our finance team, and our people and development team. And we had those conversations early and often, saying, what use cases would move the needle for us, and how can we do that in a way that feels safe, protective of our clients, but also true to who we are as a company? We as a company want to make sure we're respecting other people's intellectual property. We also want to move fast and be innovative. So right now we have an AI policy that isn't restrictive; it doesn't say you can't use it. What it has is guidelines for use. I think there's eight of them, and they say things like avoid plagiarism, respect intellectual property rights. And we ask that our team think about those guidelines and those north stars when they're going through their use cases. That's worked pretty well for us, because it kind of has to be a partnership between different teams. You know, I would probably be more risk averse, but then our innovation team comes to me and says, look at how we can move the needle doing this, and we've almost always found a way to make it work. But it's tough building the plane while you fly it, for sure.
[09:39] Magdalina Atanassova: Because you're talking about intellectual property: who owns the intellectual property when you put information into, or take information from, AI?
[09:48] Jill Blood: Yeah, I think the law is still being developed on that, and I think it will be for a while. Technology has outpaced our laws and our legislatures. So laws are coming down. In the U.S., there are a couple of laws that really get at things like profiling, where you're scanning faces and using that for something, and at things like making employment decisions in HR. But the IP laws, exactly what that looks like, are still being worked out. Our stance internally has been that we want to be really thoughtful when we're using AI for something that we would want to be 100% confident that we own. So if I'm coming up with the title of a session, I don't need that to be copyrighted, I don't need it to be trademarked; that's a great place to use AI for that idea generation. If I was coming up with a new logo for my meeting that I wanted to be able to use for the next 10 years, and didn't want anybody else to, I think I would hesitate to use AI for that right now, because it'd be too important if it was wrong or if the law changed. So I think it's something to be thoughtful about, and a little bit cautious, at least while those laws are being developed. You know, if you're naming a one-off event and it's not important to you to own the name, maybe it's fine. But if it's something you want to be able to copyright or trademark, I'd encourage you to work with a lawyer before relying on an AI tool and assuming that you would have ownership.
[11:16] Magdalina Atanassova: I just want to insert here that the team at Maritz is not a handful of people; you have a huge team. So actually having everyone aligned and aware, and just mindful of which cases they should come and ask you about, whether something is okay or not, is quite a lift.
[11:35] Jill Blood: I think we did that early, early on. We got the right people in the room, and we're lucky enough to have a lot of passion from our innovation and tech teams, where they really spearheaded both the innovation side of it and getting new tools in, but also socializing it within the company: socializing the ways you need to be cautious, but also the ways you can use it. I think these generative AI tools are so powerful, but if it's brand new to you, it can be intimidating. And I think they've done a really good job of educating teams: hey, here's what I use it for, here's what some other people use it for, you might want to think about that. And it seems like almost every time they do that, there's a whole new flush of people saying, actually, wait, this could really help me, here are some thoughts on it. So, yeah, I think getting the whole team on board and having internal education and meetings and workshops, really listening, and then taking that feedback and pushing it back out was really, really important for us. Even for a big company, where it's hard to get everybody on board and educated about just about anything, I think we've done a good job on that one.
[12:39] Magdalina Atanassova: Yeah. For sure. And how do you effectively communicate the use of AI with stakeholders to maintain trust and transparency?
[12:49] Jill Blood: Yeah, I think this is one that's evolving. A year ago I had never seen an AI provision in a client or a vendor contract, and now I see them in probably about 50%. What a lot of them ask for is: we want you to disclose your use, and we want to be able to consent to it. I think that's going to get harder as AI gets baked into more and more things. So we try to limit that to, if we're going to use generative AI, we'll let you know; but for some of the back-end process automation, maybe we won't get consent first. It's something we've negotiated with clients, and we're living into those terms. But it feels like there's going to be a new normal where it's just in every contract, the way a lot of other provisions are. Until then, I think it's going to be case by case. We have clients who are saying, we want you to use AI for as much as you can, we're excited about it, we want to innovate, we want it to show up in our meetings and events; and others who are saying, we're kind of nervous about this, we don't really want it used on our programs yet, or we want to dip our toes in the water and understand it a little bit more. So I know our innovation and tech teams, and even our planning teams, have spent a lot of time meeting with clients, educating them about what the tools can do and how we use them safely, and then navigating: do you want to be front of the pack, or do you maybe want to see where the tide turns? What I haven't seen yet is a lot of detail on how people will communicate the use of AI to guests and attendees and how it'll be used. I think we're going to see that come down in the next few years too, especially as more and more tools for things like engagement emerge. As an industry, we're going to have to land on how we're disclosing that, how we're managing it, what that looks like. I don't think we've quite figured that out yet, because it's still a little bit of a novelty.
[14:43] Magdalina Atanassova: What about data privacy laws like GDPR, CCPA, and so forth? How can event planners stay compliant while using AI, especially when it comes to data collection and personalization?
[14:56] Jill Blood: Yeah, I think that's a great question. I would say those laws apply the same; it's not as though, if you're using AI, you don't have to worry about them. So I think it's a great thing to keep in mind, because sometimes the tool is so powerful and so useful that we hear, well, could we use it to do something with a housing list? But then you have to think about, well, what do we do with that information? A lot of that comes down to what tool you're using. If you're using an enterprise tool that's locked down, that doesn't use your data to train, it's probably pretty similar to using that data within your own internal systems. But if you want to put something into a public-facing generative AI tool that you're using for free, I think you have to be really thoughtful about what that looks like. Those are also terms that are, and should be, included in these AI vendor contracts. So that's something to look for too, and to have your privacy and legal teams review. As you see those terms, think about it: you have the same obligations to disclose to people how their data is being used and to let them control it. So I would think of these tools like you would any other downstream vendor, and be thoughtful about those laws and how you're complying with them.
[16:09] Magdalina Atanassova: A word from our sponsor.
[16:11] Philadelphia Convention and Visitors Bureau Ad: Philadelphia is a city of innovation. From 1776 to today, Philly leads in life sciences with world-renowned research and cutting-edge facilities. Host your next convention in a city where breakthroughs happen. A world of discovery is waiting. Visit discoverPHL.com to start planning your next life sciences meeting.
[16:45] Magdalina Atanassova: Now back to the program. How can AI help event planners mitigate risks, going the other direction, such as identifying potential security threats or managing large-scale crowd control?
[16:57] Jill Blood: I think it can probably help mitigate risk in things like idea generation and knowledge. If you said, you know, when is hurricane season in Florida, and it helps you with that; or, what items would I need to have on site, or what steps should I take, to be prepared for this type of disaster? Again, use your own brain, think about that, check it. I would never take an emergency preparedness plan straight from a generative AI tool and just adopt it. I think what's complicated, and where there will be changes in the law in the next couple of years, is using AI for things like risk assessments: if you were going to scan everybody's face and say, this person has an expression that makes me nervous, or we're scanning them for IDs. A lot of the current U.S. legislation is around that use of making decisions based on AI's output. So if you said, if AI flags somebody, we're not going to let them into the event, that could be challenging. Not that you can't do it, but I would put that in the category of, if you want to do a use like that, which is so new and potentially controversial, that's one where you'd really want to talk to a lawyer, and also be really clear with people about how you're using it and what you're doing with it. And I think you'd have to be very, very aware of the inherent bias in those tools: if you say identify threats and it's disproportionately identifying minorities, or men, or women, that can be challenging and open you up to liability. So I think it can be done, I think the technology can do it, and it could be something in the future. But for the next few years, I'd really encourage planners: if you're exploring any sort of facial scanning, or using AI to make decisions about who can or can't come into your event or who is or isn't a threat, that's one where you really want to make sure you're consulting an expert, which is what we would do too. Even with my expertise internally, if we wanted to do something like that, we would contact somebody who does this full time.
[19:06] Magdalina Atanassova: Yeah, totally, I get that. I hope our listeners get it too, and do the same. And you already started to speak about the future. So what changes or emerging trends in AI regulation should event planners keep on their radar to remain compliant?
[19:25] Jill Blood: I think the laws are going to change pretty rapidly. There are a few that have come down already. Like I said, a lot of the current ones focus mostly on HR decisions, making decisions about people. So if you work at a company that has 100,000 employees and you receive 2,000 applications a month to work there, there are AI tools that could scan those and tell you these are the best or worst candidates. There's pushback in legislation on that, because you're making a decision about somebody's life as a computer, and because of the bias. So you can do it, but there are pretty robust procedures. I think Colorado has a law, and a couple of other states, Washington and California, have considered a bunch of AI laws; they've enacted a few, and more are coming. I think the big one to keep an eye on, in addition to those, is the EU's new AI act coming down that, similar to GDPR, has some applicability even outside of the EU, if you have EU attendees or EU exhibitors. And it's pretty restrictive. It's a comprehensive framework with different types of controls on different levels of AI; it outright prohibits some uses, and some uses are pretty much okay. That's coming online in stages, but starting relatively soon. I think we're going to hear a lot more about that one as the details come out and it goes into place. Sometimes it's easy to think, well, my meeting's in the U.S., I don't need to worry about it. I think that one is worth keeping an eye on, both because most large meetings have at least some international attendees, and because, like with GDPR, sometimes when the EU, with its size, does something, other people follow at least part of it. So I think it's one to be aware of, be familiar with, and be vetting vendors on their ability to comply with. That's a good question to ask if you're thinking about a tool.
[21:24] Magdalina Atanassova: Yeah, for sure. And it's not only attendees.
[21:27] Jill Blood: Right.
[21:27] Magdalina Atanassova: You have sponsors and other parties that may be protected by different laws.
[21:33] Jill Blood: Yeah. And I think sometimes it's easy to lose sight of that. Especially as Americans, we can sometimes be pretty America-focused. So if you are hosting an event, even in the United States, it's worth thinking about what international laws might impact you. I think also, you know, with the EU law, the idea that it applies here, even if you aren't hosting your meeting in the EU, was kind of a foreign concept for a lot of American companies when GDPR came into effect. So I think just keeping it on people's radar is going to be really, really important.
[22:08] Magdalina Atanassova: How can event planners create an ethical framework for incorporating AI in ways that prioritize human-centric values?
[22:17] Jill Blood: Yeah, I think it's hard right now, when you're still figuring out what this technology is even going to be. A lot of what we've done internally is just asking, what is this helpful for, and where is it faster, easier, or better to have a human do it? We've had a ton of use cases, and some of them have been incredibly successful and really moved the needle, and there were a few where we said, I'm not sure AI is adding anything there. So I think that can be hard when you're still figuring out how you want to use a tool. Like I said, what we've done is establish a series of guidelines. It says things like, we want to have fair employment practices, we're going to respect IP, we're going to respect confidentiality, we're going to make sure information is secure. It made more sense for us to have guidelines that we use to run ideas through, as opposed to hard-and-fast rules that say we'll always do this, we'll never do this, because it's changing so fast; the technology is evolving, the regulations are evolving. So that made sense to us. We said, based on who we are as a company and how fast we want to move, we're not going to say no to anything, and we're not going to say yes to everything. What we are going to do is try to establish this framework that we use to say: even if it's cool, we're not going to use AI to plagiarize somebody else's work; even if it's helpful, we're not going to use AI in a way that compromises guest data. But the details of that? Maybe five or ten years from now we'd be able to say, these are the exact things you can and can't do. For now, it's really a discussion every time with the right stakeholders: does this make sense to us? Is it in line with who we are as a company? Is it in line with what our clients, vendors, and event attendees expect, and is it fair to them? So it's really about looking at each case and having the right people in the room. You know, our technology teams love this stuff; they're excited about it, they're ready to crash through the walls and do it. I'm a little bit more nervous, and I like to think that we pull each other to the middle, where I'm a little too cautious and they're a little too forward, and that balances out through those conversations in a place where we're moving, we're moving fast, but we're not breaking more than we need to.
[24:34] Magdalina Atanassova: Yeah, I love that, and I love the fact that you have come together and just decided on the way forward. And what would be your advice to other planners on their long-term strategy, as AI is evolving so quickly, so that they can stay ethical, compliant, and competitive like you are?
[24:55] Jill Blood: I think this is going to sound a little counterintuitive, but it's: don't be afraid of AI, but be a little bit afraid of AI. Don't be so afraid that you're not exploring it, that you're not thinking about the possibilities, that you're not tapping into what it can do. We hear that fear in everything from "I'm just not good with technology" to "I don't trust it, I don't trust the data." And then we hear the fear of, if I use this, is it going to take my job? On that one, first of all, I think it won't. People in the events industry have so much knowledge, and there's so much nuance to doing what we do, that AI can help, it can create efficiencies, but I don't believe it could ever replace the knowledge that people have. So we do see people who are probably a little too nervous and a little too scared to dip their toes in the water. You know, my mom recently said, I would never use that, robots are going to take over the world. And then I showed her that I had used one of these tools to personally plan a vacation I was taking, and she said, well, actually, that seems pretty nice. But then on the other end of the spectrum, we see some people who aren't scared enough, who are saying, we're just going to use it for everything, and they're trusting it implicitly: well, AI told me that this was the answer. You're a smart person with decades of experience; just because AI told you something doesn't make it true. Essentially, you wouldn't trust everything you read on the Internet, so don't trust everything AI tells you. I think finding that balance between experimenting and moving fast, but also keeping your critical thinking skills and making sure you're reading everything and being thoughtful about it, is the path to success for now. Which is easier said than done; striking that balance is not easy. But I think that's the sweet spot for now, while we as an industry and a society figure out what these tools are really going to mean.
[26:43] Magdalina Atanassova: I love that. And was there anything we didn't mention that we should, before we wrap up?
[26:50] Jill Blood: You know, what I would say about the legislation, the rules coming down, is that sometimes that stuff can be intimidating, and people want to put their head in the sand and not read it. I think there are a lot of good resources; it's stuff Maritz will be talking about and putting out content on. The EU law is probably hundreds of pages long, and it's written like it was written by a lawyer. So continue to engage with that stuff and be thoughtful about it. There are a lot of resources out there. Even if you don't have the resources to have the full internal team we do, there's stuff you can do to make sure you're using this safely. There are podcasts like this one that can give you information. So I think continuing to engage with it and getting educated about the advantages and the risks is really, really important.
[27:36] Magdalina Atanassova: I'm thinking, would event planners grab that hundred-page document, just feed it into AI, and say, give me the highlights?
[27:43] Jill Blood: Would that actually work? Our CEO was talking at an event and thought he might get some questions about the regulation, and I thought, what better thing to ask AI about than itself? What it put forward was 80% great as a summary of the law. What it was really good at was giving me a starting point for a summary of the law to send him, which I then rewrote. But it got a lot of the main points, it summarized it, and most of it was right. I wouldn't have wanted to copy and paste it directly, but honestly, yeah, AI is not a bad tool to ask about these regulations. When we were drafting our policy and guidelines initially, I asked ChatGPT, what should you include in an AI policy? And what it put forward, like I said, you wouldn't want to use it one-for-one, but it was interesting as a thought starter: here are some things. And that kind of blows my mind. What other tool can tell you how to restrict use of itself? It's sort of a strange future moment. But yeah, AI is a good tool for trying to stay on top of that type of stuff.
[28:50] Magdalina Atanassova: Yeah. And something that actually just came up in my social media feed was that apparently an update in Microsoft has automatically opted you in to a feature that is essentially learning by reading whatever you put in Word, whatever you put in Outlook. So I checked, and it was enabled on my computer. That was very interesting. Have you ever navigated such a scenario?
[29:23] Jill Blood: You know, I'm not familiar with that at all. I think this is probably true of a lot of lawyers, and there's that old saying about the cobbler's children having no shoes: despite being a lawyer, and a privacy lawyer, I'm pretty, pretty quick to click yes on those types of terms, because I enjoy the convenience and I think this technology is cool. So yeah, I don't know about that one. I do think this is the type of area where you'll often see the Facebook or TikTok or other social media trends about Microsoft using all of your data, so that's one where it's worth digging into exactly what they mean by that. There are also sometimes vendors that put provisions in their contracts that are very broad, but what they're actually doing is not as scary as it sounds. But yeah, I'm not familiar with that update. I'm honestly surprised it didn't make more headlines. I suspect there will be trial and error with those companies over how much people are willing to tolerate and put up with, and what the appetite is for those tools, and that will find the balance.
[30:35] Magdalina Atanassova: I think the problem, just as a user, is the need to be really transparent about the updates. It's fine if a company bakes AI into its product; just be transparent, just give enough information for people to inform themselves and decide to opt in or opt out, and that's fine.
[30:57] Jill Blood: I totally agree. And, you know, not knowing anything about upcoming legislation or having any control over it, I suspect that's what we'll see in upcoming laws around AI, similar to data privacy: saying you can do this, but you have to tell people in a really conspicuous way, and they have to have the ability to opt in or opt out. There are so many laws that get at that with privacy, so I would imagine that AI will go in a similar direction: we can do these things, but we have to be a little bit more transparent and deliberate about it. But I think it'll take a couple of years for that stuff to really shake out.
[31:39] Magdalina Atanassova: Yeah, for sure. So for now: stay vigilant, contact a lawyer, do not make decisions based solely on AI's advice, and use your critical thinking. That was the takeaway for me.
[31:52] Jill Blood: Perfect.
[31:52] Magdalina Atanassova: Jill, thank you so much for being on the podcast. It was my pleasure, of course.
[31:58] Jill Blood: Thank you for having me.
[32:03] Magdalina Atanassova: Remember to subscribe to the Convene Podcast on your favorite listening platform to stay updated with our latest episodes. We want to thank our sponsor, Philadelphia Convention and Visitors Bureau. Visit discoverPHL.com to start planning your next life sciences meeting. For further industry insights from the Convene team, head over to PCMA.org/convene. My name is Maggie. Stay inspired. Keep inspiring. And until next time.