Transcript:
Sophie Duffy: Welcome back to our podcast series AI at Work, where we explore the intersection of AI, employment law and the workplace. Today, we’re diving into a new topic that many of our clients are grappling with: whether – and how – to implement a workplace AI policy. I’m joined today by Jen Cormier and Greg Demers, partners in our employment practice, and Alyssa Horton, a partner in our asset management practice. Thank you all for being here.
Sophie Duffy: Let’s start with the basics. Greg, why should employers have a dedicated AI use policy, and what should be the policy’s primary objectives?
Greg Demers: Well, to begin with, a well-crafted AI policy makes employees aware of when they can – and can’t – use AI and makes them responsible for remaining in compliance. It also provides companies with a defensible framework if problems arise—whether that’s a data breach, a discrimination claim, an IP dispute, or any number of other issues in what is undeniably a fluid and rapidly evolving area of the law.
Jen Cormier: To build on Greg’s point, a comprehensive AI policy also signals to employees that the organization takes responsible AI use seriously. A policy like that should set clear expectations about what is and isn’t okay, and systems should be put in place to hold people accountable. From a training and compliance standpoint, a clear and current AI policy is really the foundation for effective employee education. If you don’t have a clear policy, you can’t meaningfully train employees on what’s expected of them, and without training, it’s also harder to credibly enforce compliance.
Sophie Duffy: Alyssa, what are the risks of not having an AI policy? Are there specific considerations investment advisers or asset managers should be aware of?
Alyssa Horton: Without clear guardrails, organizations face significant legal and reputational risk from uncontrolled AI use by employees. For investment advisers specifically, the risks are amplified because a single misstep—say, an analyst inputting MNPI about a portfolio company into a public AI tool—could trigger insider trading liability, breach of fiduciary duty claims, regulatory enforcement actions, and serious reputational harm with investors. The SEC views AI governance as an extension of an adviser’s compliance obligations, and we expect that this will be an area of great interest on exam.
Sophie Duffy: Now, one thing we see clients struggle with is defining exactly what their policy should cover. What are some key definitional considerations for organizations when drafting these policies?
Greg Demers: This is really a critical issue in policy drafting. Employers need to distinguish between, on the one hand, general “AI,” meaning technologies that perform tasks typically requiring human intelligence (such as classification, translation, pattern recognition, and the like), and, on the other hand, “Generative AI,” which refers to AI that generates new outputs or content based on prompts or data. The distinction matters because traditional AI—for example, using an algorithm to screen applicant resumes for particular qualifications—has been the primary focus of regulatory attention to date, while generative AI introduces a new set of concerns around content creation and information disclosure. When defining “AI” in a corporate policy, companies should use language that’s broad enough to capture emerging technologies, but also include concrete examples so employees recognize when they’re using covered tools, and prohibited tools, for that matter.
Jen Cormier: As a practical matter, I would also say that a clear definition of AI in your policy is going to help employees understand which tools fall within the policy’s scope and which do not. We often see employees use AI in ways they don’t even recognize as AI use: voice-to-text transcription, smart email drafting features, and automated scheduling assistants, for example. The definition also needs to address emerging tech. Because the AI landscape is evolving so rapidly, organizations need to regularly reassess and update these policies to keep pace with developments, ideally at least annually, though given how quickly everything is moving, I would say more frequently, perhaps every few months.
Alyssa Horton: Equally important is distinguishing between “Open Generative AI Tools”—publicly available tools, such as ChatGPT, accessed via personal or non-enterprise accounts, where inputs or outputs may be used to train underlying models or shared with third parties—versus “Licensed” or “Closed” versions, which are enterprise-licensed tools with contractual controls that restrict access and prevent model training on company data. Licensed versions mitigate many of the privacy, confidentiality, and IP risks that come with open tools.
Sophie Duffy: Got it. Let’s talk about governance. Who within an organization should be responsible for overseeing AI use, and what kind of approval processes should ideally be in place?
Alyssa Horton: Oversight can be managed by an AI Committee or by someone in legal and compliance, often in coordination with the IT team. The responsible persons should maintain a current list of approved Generative AI tools, and there should be a clear approval process for adopting new tools.
Greg Demers: The approval process itself should be robust, too. Employees should not be allowed to use Generative AI tools outside of a pre-approved list without getting clearance from the appropriate members of management. The terms of each vendor contract should be reviewed to understand how queries and information entered into the AI tool are used. Companies should also perform diligence on vendors, for example by screening for concerns regarding the receipt of material nonpublic information, and ideally should put the product through a number of test runs prior to deployment.
Sophie Duffy: What types of information should be considered “protected” and therefore subject to restrictions under an AI policy?
Jen Cormier: Sure. A few examples would be employee personnel files, performance reviews, disciplinary records, medical information, accommodation requests, leave documentation, and investigation files. All of those are examples of sensitive information, and they should be strictly off limits for input into AI tools, particularly open tools. Companies should be really mindful of that because they face significant exposure here: privacy violations, HIPAA concerns if health information is involved, ADA issues if accommodation information is disclosed, and liability under state privacy laws that increasingly protect employee data. For example, California has expanded its Consumer Privacy Act protections to employee information, and we are seeing other states following suit.
Alyssa Horton: It’s also important to note that any information obtained under an NDA or other third-party contractual obligations, proprietary or trade secret information, and privileged information should all be considered “protected.” For investment advisers, this would also include information about portfolio investments, prospective investments, investors, and counterparties.
Sophie Duffy: This brings us to the heart of employment law concerns. I would love to hear a little bit about what specific employment-related prohibitions organizations should include in their AI policies.
Greg Demers: This is really key: A well-drafted policy should prohibit using Generative AI tools in any manner that is illegal, harmful, intended to circumvent cybersecurity controls, or that could be considered discriminatory, harassing, or otherwise in violation of company policy. Instead of broadly stating “AI should not be used inappropriately in HR matters,” companies should list the specific contexts where AI is prohibited, which may include interviews, disciplinary actions, investigations, accommodation discussions, performance evaluations, and the like. I do want to highlight the anti-discrimination element because it deserves particular attention. AI tools can at times perpetuate and amplify bias in ways that create significant Title VII, ADA, and ADEA exposure. If an organization uses AI to screen resumes, evaluate candidates, or inform employment decisions, any bias embedded in the AI model—whether from training data or biased algorithm design—becomes the organization’s liability. I’d also emphasize that these policies should not be drafted by HR alone. Given the legal complexity and evolving regulatory landscape that we have discussed, it’s critical to have in-house or outside counsel review and advise on AI policies to ensure compliance and proper risk management.
Jen Cormier: I think Greg laid out those risks really well. As a result, organizations should consider prohibiting reliance on Generative AI output as a significant factor in any hiring, promotion, compensation, discipline, or termination decision. Using AI for things like background research or to summarize information for decision-making assistance may be okay, but it is a really important distinction: as a general rule of thumb, AI should support human judgment rather than replace it. I’d also add that the investigation context is especially sensitive in this area. Using AI to analyze witness statements, summarize complaint allegations, or draft investigation reports can create significant risks—AI might miss nuance, mischaracterize statements, or introduce bias into what should be an objective fact-finding process. AI has come very far, but it still misses that type of nuance pretty frequently. The integrity of a workplace investigation ultimately depends on human judgment, and employers should be careful not to use AI as a shortcut in that context.
Sophie Duffy: Thanks, Jen. That is a super important distinction. Greg, can you elaborate on the regulatory landscape around AI in employment decisions?
Greg Demers: Absolutely. To the extent a company permits use of Generative AI in HR or talent-related functions—especially in California, Colorado, Illinois, New York, and a growing number of jurisdictions—state and local laws often impose particular anti-discrimination standards and may require bespoke training. These jurisdictions have been at the forefront of regulating automated decision-making tools in employment contexts, and the legal landscape is evolving rapidly. For example, New York City’s Local Law 144 requires bias audits of automated employment decision tools. Illinois has the Artificial Intelligence Video Interview Act governing the use of AI to analyze video interviews. Colorado has passed legislation around algorithmic discrimination. These are just a few examples, but they give you a sense of the variation at the state and local levels, and you can be sure more regulation will continue to unfold at a fairly rapid rate to keep up with the technology.
Jen Cormier: Agreed. It’s really a patchwork of state-level activity right now, but we are seeing it gain a lot of momentum. There are legislative proposals in several other states, and at the federal level the EEOC has issued guidance making clear that employers can be held liable under federal anti-discrimination laws for AI-driven discrimination even if they didn’t design the AI tool themselves. That’s an important point: the EEOC’s position is that employers are responsible for the outcomes of the tools they use, regardless of whether a third-party vendor created the tool. This means organizations need really robust vendor diligence processes for any AI tools used in employment and HR contexts, including making sure there is transparency about how the tools work, what data they were trained on, and whether they’ve been audited for bias, as Greg noted.
Alyssa Horton: For companies with operations in multiple jurisdictions, the compliance complexity can be significant. A firm headquartered in New York with offices in California, London, and Hong Kong needs to navigate overlapping and potentially conflicting requirements. A strong AI policy should establish baseline standards that satisfy the most stringent applicable requirements, with flexibility for jurisdiction-specific adaptations where necessary. And of course, as the regulatory landscapes continue to evolve, it’s critical to regularly review policies to ensure continued compliance.
Sophie Duffy: What about AI transcription services for meetings? That’s become incredibly common and is top of mind for a lot of our clients. Are there specific employment-related guardrails organizations should consider?
Jen Cormier: AI transcription can really be a minefield if not handled carefully. We generally advise against allowing AI transcription unless the employer has very specific guardrails in place, including ensuring that all participants in the meeting have consented to the recording. Consent requirements vary by state: some states require only one party’s consent, while others require consent from all parties. So as a practical matter, the most conservative approach is to require consent from everyone; if you have employees across many states, it’s just not going to be practical to research the requirements in each one. There should also be a process in place to review any AI-generated transcript for accuracy. Companies also need to think about where the transcript summaries are stored and who might have access to them.
Greg Demers: Employers should also specify categories of meetings and calls where AI transcription is off limits without special authorization, such as interviews, performance evaluations, and hiring or firing discussions, as well as other meetings regarding personnel-related matters. Similarly, communications involving privileged or potentially privileged information, including meetings or calls where legal advice is being sought or provided, should be excluded as well, with any approved exceptions requiring privilege disclaimers at the beginning and end of the meeting.
Alyssa Horton: The transcription issue intersects with confidentiality obligations in important ways. Investment committee meetings, deal discussions, conversations with portfolio company management—all of these involve confidential information that should not be processed by open AI transcription tools. Even licensed transcription services raise questions about recordkeeping requirements, access controls, and security that all need to be vetted carefully.
Sophie Duffy: We’ve talked a lot about the ways in which employees should NOT use AI, but that said, we know there are many ways in which AI can be incredibly helpful and improve efficiencies. So, I would love to hear about some of the best use cases for AI.
Greg Demers: Safer use cases typically include research and analysis: fact-checking, brainstorming, summarizing general principles, exploring industry trends, and the like. Content generation is also typically fine, such as creating first drafts of documents like job descriptions, policy documents, internal communications, and slide decks, as long as all content is thoroughly reviewed and validated before use. Bottom line: A human still needs to have eyes on the work product, and those expectations should be clearly set forth in the policy. Employees must remain fully responsible for the quality, accuracy, and completeness of any content created with AI assistance.
Sophie Duffy: Speaking of responsibility, what disclosure obligations should employees have when they are using or creating AI-generated content?
Alyssa Horton: Ultimately, transparency is essential. Employees should be required to clearly indicate when content was generated by an AI tool, unless they have thoroughly checked and confirmed that every aspect of the output being shared is entirely accurate and complete. All video, images, and audio content generated with AI should be labeled as such.
Sophie Duffy: Got it. What about training and enforcement? How should companies ensure employees actually comply with their AI policies?
Jen Cormier: It’s a great question, and the training component is worth emphasizing because this is often where policies fall short. You can have a really well-drafted policy, but that’s not going to be enough on its own. Employees need to understand not just what the rules are, but why they exist and how to apply them in practice. Effective training should include concrete examples and scenarios that employees are likely to encounter, not just abstract principles. A lot of the time, conversations about AI stay very high level or vague, so laying out the specific tools and use cases that are permitted or off limits will be really helpful. Training should also be role-specific where appropriate; for example, how you train your HR professionals should be different from how you train your investment analysts. And the training really needs to be updated regularly, as we noted with the policies themselves, because the technology is evolving and your organization may be learning from experience over time. We also want training to cover the resources available when employees are confused or aren’t sure what to do, because that happens a lot given how rapidly this area is developing. You want to create a culture where employees feel comfortable asking questions and know who to turn to, and that can be as important as the training content itself.
Alyssa Horton: It’s also worth noting that, on the monitoring side, organizations should make clear that all uses of approved Generative AI tools will be monitored for security purposes. To the fullest extent permitted by applicable laws, employees using these tools should have no expectation of privacy over their use.
Sophie Duffy: This has been so interesting. Finally, as organizations draft these policies, what are some common pitfalls you see? And do you have any final advice for our listeners?
Alyssa Horton: Sure. The first relates to a common theme we’ve been discussing today: drafting an overly rigid policy that doesn’t account for the rapidly evolving nature of this technology. Policies should acknowledge that the list of prohibited uses is illustrative and may not be comprehensive given how quickly AI is changing. Building in flexibility to allow approval of exceptions on a case-by-case basis helps balance risk management with operational needs. The second pitfall is treating AI governance as a standalone silo—it should be integrated with an organization’s broader policies so the compliance framework works together holistically.
Greg Demers: I also want to underscore the importance of integrating AI policies with existing company policies—for example, confidentiality, insider trading, marketing, privacy, and cybersecurity policies, and of course, importantly, the employee handbook and employment policies. In short, your AI policy shouldn’t exist in a vacuum. Cross-references to other policies should be explicit, so employees understand how the AI policy relates to obligations they’re already familiar with.
Sophie Duffy: Thank you all so much for those insights. For our listeners, if you have questions about drafting AI policies for your organization, please don’t hesitate to reach out to our speakers or Ropes & Gray’s employment or asset management teams. Stay tuned for future episodes in the series where we will continue to discuss the intersection between AI and employment considerations. You can also subscribe and listen to the other Ropes & Gray podcasts wherever you regularly listen to your podcasts, including on Apple and Spotify. Until next time.