Ropes & Gray attorneys provide timely analysis on legal developments, court decisions and changes in legislation and regulations.
Greg Demers: I’m Greg Demers, a partner in our employment practice, and I’m happy to be joined today by the head of our employment practice, Meg Bisk, and employment associate John Milani, for this inaugural episode of AI at Work. We advise clients across a variety of industries, including asset management, private equity, life sciences, healthcare, and technology, on the full spectrum of labor and employment issues. A growing part of that work, as you all can imagine, involves artificial intelligence, so that will be our focus in this new podcast series: the most pressing legal issues involving AI and the workplace.
John, I’m going to turn it over to you, so that you can set the stage a bit for us today.
John Milani: Thanks so much, Greg. In this first episode, we will tackle issues that in-house counsel and HR professionals are seeing today around the use of AI by employees and in HR processes. Now, at the outset, this series is meant to be informational and not legal advice, but we hope it arms you with concrete strategies on AI use in the workplace.
And with that, Greg and Meg, let’s start with what you’re seeing day to day. What’s the single most common pitfall when employees use large language models like ChatGPT at work?
Greg Demers: Meg and I were actually discussing this the other day. I had seen a statistic that since ChatGPT launched, over 10% of employee searches in generative AI tools have involved some form of sensitive information, including employee and personnel data, client data, and the like. The top issue we’ve seen on the employee-use side is pasting confidential or proprietary content into ChatGPT and other large language models to “summarize” it, “polish” it, or even expand upon it. This content can include fund models, target due diligence memos, pipeline lists, and unpublished clinical data, among many other things.
John Milani: And what kind of risks does that entail?
Greg Demers: Well, it creates at least two problems. First, an employee could compromise confidentiality and trade secret status if that data becomes accessible to third parties or is incorporated into model training for future use. Second, you lose control over where the data goes and who sees it. In highly regulated industries, like asset management, finance, and pharmaceuticals, that can pose huge concerns. Even if the employee’s intent is benign, the act can undermine industry goodwill with vendors, clients, and peers, as well as regulatory compliance efforts. A third issue is that it could potentially waive the attorney-client privilege or attorney work product protection, depending on the information involved and the source. That’s a particularly thorny issue that we’ll likely address in some detail in a later podcast episode.
John Milani: Yes, risky on a lot of fronts. I think that means now is a great time to review and refresh the confidentiality and intellectual property protections in your employment documentation, just to make sure they’re ironclad. On another note, in many industries, like finance and life sciences, accuracy and precision are incredibly important. How might overreliance on AI outputs show up as a risk in that context?
Meg Bisk: Well, John, we’ve really seen two main fact patterns there. The first is generative AI “hallucination,” which I’m sure you’re familiar with: confidently stated but fabricated citations, metrics, or case references that creep into work product. The second is “source opacity”: if an AI output lacks attribution, then employees can’t verify where it comes from or whether it can be permissibly used. In clinical and regulatory contexts, or when supporting investment theses, that’s just not workable. Employers should assume unverified AI output is a rough draft at best, and never a final output. Failure to verify can be incredibly problematic if downstream decisions rely on faulty outputs.
John Milani: Understood. Thanks, Meg. Turning now to HR functions specifically, in your practices or what you’ve seen around the industry, where do employers stumble when deploying AI tools in recruiting, screening, or promotion?
Greg Demers: I would say bias and accessibility top the list. AI tools can result in “disparate impacts” in employment decisions because of skewed data that may be tied to protected traits – potentially disadvantaging job applicants or employees based on race, gender, disability, or a similar protected characteristic. New York City, for instance, already requires bias audits and transparency notices to candidates for the use of automated employment decision tools (also known as “AEDTs”). Accessibility is also critical under disability laws. For example, if a video interview or personality test disadvantages an applicant with a disability and there’s no reasonable accommodation, you’ve got a potential compliance problem under the Americans with Disabilities Act and similar state laws. And one other word of caution here: talent acquisition vendors frequently market tools as “bias-free,” but the legal responsibility to vet and audit the algorithms used in those tools remains with the employer using the tool.
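To make the “bias audit” idea concrete, here is a minimal Python sketch of the kind of impact-ratio arithmetic an auditor might run on screening outcomes: a selection rate for each group, with each rate divided by the highest group’s rate. The sample data, group labels, and the 0.8 flag threshold (the familiar “four-fifths” benchmark) are illustrative assumptions, not requirements of any particular law or of the NYC AEDT rules.

```python
# Hypothetical sketch: selection rates and impact ratios by group.
# The sample data, group labels, and the 0.8 flag threshold are
# illustrative assumptions, not legal requirements.
from collections import defaultdict

# (group, selected) outcomes for applicants screened by a hypothetical tool
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in applicants:
    totals[group] += 1
    selected[group] += int(was_selected)

# Selection rate per group, then each rate divided by the highest rate
rates = {group: selected[group] / totals[group] for group in totals}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    impact_ratio = rate / highest if highest else 0.0
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

In practice, any analysis like this would be run on real applicant data, with independent auditors and counsel shaping the methodology and thresholds.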
Meg Bisk: To jump in on that point, Greg, I have been considering how we approach guidance to clients on vendors, specifically. Due diligence, contractual protections, and human oversight are critical for employers who want to use those tools. We’d advise that you take an extra few minutes to read the indemnification, liability, and insurance requirements when engaging any vendors who are going to be using AI. Understand what data the tool ingests, how and for how long it’s going to be stored, and the purposes for which that data can then be used. Be sure that the contract you are entering into with your vendor requires bias testing, gives you audit rights, and allows for cooperation on accommodations and in the event of any legal claims. It’s also important to ensure the contract protects you by prohibiting secondary uses of your data without consent and requiring appropriate security controls.
John Milani: Thank you, both! Let’s shift gears now to what happens when AI is misused in the workplace. Greg, have you seen an uptick in employee misconduct tied to AI? How should companies respond in the moment?
Greg Demers: Yes, John, I’ve definitely seen it, and I expect it will become more prevalent in the months and years to come. To begin with, once such an incident is discovered, immediately loop in the appropriate internal or external IT and data privacy experts to assess the scope of the breach. Preserve the record – preservation is critical so you’ve got a clear record of what the company did to protect the information at issue, both before and after the fact. This includes generative AI inputs, outputs, and chat logs, among other things. Then, evaluate notification obligations if personal or proprietary data was exposed. And investigate the incident in accordance with existing company protocols (and if you don’t have them, this is a plug to get that process started sooner rather than later).
Now, more proactively, I recommend several steps including:
Publishing clear policies on approved tools;
Blocking prohibited platforms;
Ensuring that you have strong information security measures in place to address circumstances like these;
Training managers to spot red flags of artificial intelligence use; and
Developing a response plan ahead of time, not on the fly.
John Milani: And what about employee monitoring and productivity analytics? Greg, where do AI-enabled tools create risk there?
Greg Demers: I would say two areas. First, surveillance that chills protected activity or sweeps too broadly can create labor relations and privacy issues. Employers should tailor monitoring to legitimate business purposes, disclose the monitoring, and obtain employee acknowledgments where required. Different states require different forms of acknowledgment, so be wary of those differences, too. Second, automated timekeeping or monitoring can misclassify work or miss off-the-clock time, which creates potential wage-and-hour exposure. Above all, it is absolutely critical to be fully transparent with employees about the scope of the monitoring – notice is required in just about every jurisdiction – and to obtain affirmative consent in those jurisdictions that require it.
John Milani: Thanks, Greg. So, I know we touched on this a bit earlier, but if an organization hasn’t formalized an AI policy, what does one need to do to build one out? Obviously, all companies are different, so any broad-based tips?
Meg Bisk: I’d say start with the scope and definitions. A comprehensive policy should cover generative and non-generative AI, public and proprietary tools, and clarify that use of any unapproved tool is prohibited. Then I’d define what’s okay and what’s not in terms of AI use. Permitted uses might include drafting non-confidential emails or summarizing public materials. Prohibited uses need to include – at an absolute minimum – inputting confidential or proprietary information (including information belonging to customers and other third parties). The type of business and AI use will really mold the policy after that, but it’s important to require human verification of outputs and to prohibit sole reliance on AI for legal, financial, or medical conclusions. We’d suggest establishing approval pathways for use of AI tools, designating responsible “AI supervisors” at your organization, and making sure that your policy aligns with others that you have, like equal employment opportunity, information security, social media, and records retention.
John Milani: Great overview, that all makes sense. It’s a tough balance to strike because AI innovation, unlike technological developments before it (like the TV or the computer), occurs seemingly in a timespan of weeks and months, not years. It’s almost impossible to keep pace with all of the developments in the field, which is what makes policy grounded in principles and use-cases so crucial, rather than trying to address each specific model that crops up. And, Meg, how about discipline for AI misconduct?
Meg Bisk: So, disciplinary best practices for AI misuse should really track a lot of existing best-practice concepts, such as anchoring discipline in existing policy frameworks like codes of conduct and confidentiality policies, supplemented by the AI policy that we just discussed. Discipline should be fair and proportionate. And the key will really be consistency, documenting your rationale, and calibrating sanctions to risk and intent. Employers may also want to consider safe harbor reporting for early self-disclosure of improper AI use. We’ve found that it encourages prompt reporting so that employers can deal with issues before they become bigger problems.
Greg Demers: And I’ll just add that for unionized settings, you should ensure any monitoring or disciplinary practices comply with collective bargaining obligations and are applied in line with the CBA. Also, be mindful of employees’ protected concerted activity under the National Labor Relations Act if employee monitoring or discipline is implicated.
John Milani: Okay, thank you, both. Shifting gears slightly, how would you describe good employee training on AI use?
Meg Bisk: Training should really be role-specific. Investment professionals and life sciences teams need concrete, job-related hypotheticals showing how available tools work and what cannot be pasted into any external tools. Managers should be trained to review AI-assisted work, spot hallucinations, and ask for sources. HR teams should understand bias testing results and accommodation obligations. Employers should refresh training regularly to reflect evolving tools and laws, and we’d suggest that they require employee attestations to reinforce accountability.
John Milani: Great – a few more questions from me here. Meg, what recordkeeping and audit practices should employers build out?
Meg Bisk: Employers should maintain a central inventory of approved AI tools and use cases. They should preserve inputs and outputs when they are used for any kind of material decision, especially in HR contexts, in a manner that’s consistent with record retention schedules and requirements and any litigation hold obligations. Audits should be conducted regularly – we often advise biannually if not quarterly – to test adherence to permitted and prohibited uses, evaluate bias and accessibility impacts in HR tools, and confirm that security controls are actually working. The key is to then use those audit results to refine policies, training, and technical controls.
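As one way to picture the inventory and input/output preservation Meg describes, here is a minimal, hypothetical Python sketch of an approved-tool registry and a preserved usage record. The tool names, field names, and `approved_tools` structure are assumptions for illustration, not a prescribed schema; a real system would write to durable, retention-managed storage rather than printing.

```python
# Hypothetical sketch of a central AI-tool inventory and a preserved usage record.
# Tool names, field names, and the storage approach are illustrative assumptions.
import json
from datetime import datetime, timezone

# Central inventory of approved tools and their permitted use cases
approved_tools = {
    "internal-summarizer": {"owner": "IT", "use_cases": ["summarize public materials"]},
    "resume-screener": {"owner": "HR", "use_cases": ["initial resume screening"]},
}

def log_ai_use(tool: str, user: str, purpose: str, prompt: str, output: str) -> dict:
    """Preserve the input and output of an AI interaction tied to a material decision."""
    if tool not in approved_tools:
        raise ValueError(f"{tool} is not on the approved-tool inventory")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "purpose": purpose,
        "prompt": prompt,    # preserved input
        "output": output,    # preserved output
    }
    # A real system would write this to durable, retention-managed storage.
    print(json.dumps(record, indent=2))
    return record

log_ai_use("resume-screener", "hr_analyst_01", "initial screen for role 123",
           "Rank the attached resumes ...", "Ranked candidate list ...")
```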
John Milani: Excellent. Our clients are doing really complex and specialized work that’s very fast-paced and doesn’t always leave much time to reflect on risk. How would you advise striking a balance between innovation and risk control, particularly in fast-moving markets?
Greg Demers: I’d say we’ve seen success with creating “sandboxes” with pre-approved tools and appropriate redaction utilities. Building access tiers based on roles and data classes can narrow use to those who need it. Encourage employees to be creative with possible use cases, and pair that with transparent messaging about why certain uses are limited and where AI can accelerate work safely. The goal is to introduce AI in a compliant way, not to suppress it entirely.
John Milani: Very well said. And lastly, if you had one recommendation for HR leaders to implement this quarter, what would it be?
Greg Demers: I would say, adopt and roll out a concise, practical AI Acceptable Use Policy. Include a simple decision tree for when and how to use AI, a bright-line prohibition on confidential inputs to public tools, a requirement to verify and cite sources, and a clear process to request approval for new use cases. Follow it with targeted training and technical controls to make compliance the path of least resistance. If you make it complicated or burdensome, you can bet it will lead to increased non-compliance in the future.
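As a purely illustrative example of the “simple decision tree” Greg mentions, a policy’s core logic might reduce to something like the Python sketch below. The data categories, tool names, and outcomes are hypothetical assumptions; any real policy would be tailored to the organization and reviewed by counsel.

```python
# Hypothetical sketch of an AI acceptable-use decision tree.
# Data categories, tool lists, and outcomes are illustrative assumptions.

APPROVED_TOOLS = {"internal-llm"}    # hypothetical approved tool
PUBLIC_TOOLS = {"public-chatbot"}    # hypothetical public tool

def may_use_ai(tool: str, data_is_confidential: bool, output_verified: bool) -> str:
    if data_is_confidential and tool in PUBLIC_TOOLS:
        return "Prohibited: never input confidential data into public tools."
    if tool not in APPROVED_TOOLS:
        return "Request approval before using this tool."
    if not output_verified:
        return "Permitted, but verify and cite sources before relying on the output."
    return "Permitted."

print(may_use_ai("public-chatbot", data_is_confidential=True, output_verified=False))
print(may_use_ai("internal-llm", data_is_confidential=False, output_verified=True))
```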
John Milani: Thanks so much, Meg and Greg, this has been really insightful. That’s a wrap for this episode. Stay tuned for future episodes in the series where we will continue to discuss the intersection between AI and employment considerations. You can also subscribe and listen to the other Ropes & Gray podcasts wherever you regularly listen to your podcasts, including on Apple and Spotify. Thanks again for listening.