Good Morning, HR

In episode 175, Coffey talks with Melanie Ronen about legal considerations and emerging regulations surrounding the use of AI in employment decisions.

They discuss current anti-discrimination laws affecting AI use; recent litigation related to AI in hiring; strategies for avoiding disparate impact discrimination; the role of bias testing in AI systems; proposed legislation in California and other jurisdictions; transparency in AI usage; considerations when selecting AI vendors; and future implications of AI regulation in employment.

Good Morning, HR is brought to you by Imperative—Bulletproof Background Checks. For more information about our commitment to quality and excellent customer service, visit us at https://imperativeinfo.com.

If you are an HRCI or SHRM-certified professional, this episode of Good Morning, HR has been pre-approved for half a recertification credit. To obtain the recertification information for this episode, visit https://goodmorninghr.com.

About our Guest:

Working with clients across a broad array of industries, Melanie Ronen advises on all aspects of employment law, including hiring, promotion, termination, privacy, wage and hour, disability and protected leave, and retaliation and whistleblower issues. She assists with the drafting and review of employment manuals and agreements, ensuring compliance with federal and state employment laws.

Melanie works side by side with employers to provide practical, tailored solutions to the various issues that arise in the workplace. When unavoidable, Melanie defends employers in litigation involving all aspects of employment-related claims.

Melanie handles single-plaintiff and complex employment disputes in state and federal courts and arbitration involving claims related to discrimination, harassment, retaliation, wrongful termination, breach of contract, defamation, trade secret, unfair competition, and wage and hour issues, including class action and Private Attorneys General Act (PAGA) claims.

When complaints arise, Melanie also conducts workplace investigations regarding unlawful or unwanted conduct, including harassment and race, age and gender discrimination. She proactively identifies potential risk and formulates strategies in line with clients’ business objectives to mitigate conflict and avoid litigation.

Melanie Ronen can be reached at
https://www.stradley.com 
https://www.linkedin.com/in/melanie-ronen-07888123

About Mike Coffey:

Mike Coffey is an entrepreneur, human resources professional, licensed private investigator, and HR consultant.

In 1999, he founded Imperative, a background investigations firm helping risk-averse companies make well-informed decisions about the people they involve in their business.

Today, Imperative serves hundreds of businesses across the US and, through its PFC Caregiver & Household Screening brand, many more private estates, family offices, and personal service agencies.

Mike has been recognized as an Entrepreneur of Excellence and has twice been named HR Professional of the Year.

Additionally, Imperative has been named the Texas Association of Business’ small business of the year and is accredited by the Professional Background Screening Association.

Mike is a member of the Fort Worth chapter of the Entrepreneurs’ Organization and volunteers with the SHRM Texas State Council.

Mike maintains his certification as a Senior Professional in Human Resources (SPHR) through the HR Certification Institute. He is also a SHRM Senior Certified Professional (SHRM-SCP).

Mike lives in Fort Worth with his very patient wife. He practices yoga and maintains a keto diet, about both of which he will gladly tell you way more than you want to know.

Learning Objectives:

  1. Implement testing and monitoring procedures to identify potential discriminatory impacts of AI tools in employment decisions.
  2. Develop transparency protocols around AI usage that protect both employer interests and employee/candidate rights.
  3. Evaluate AI technology vendors based on their ability to test for bias, make necessary adjustments, and provide compliance documentation.

What is Good Morning, HR?

HR entrepreneur Mike Coffey, SPHR, SHRM-SCP engages business thought leaders about the strategic, psychological, legal, and practical implications of bringing people together to create value for shareholders, customers, and the community. As an HR consultant, mentor to first-stage businesses through EO’s Accelerator program, and owner of Imperative—Bulletproof Background Screening, Mike is passionate about helping other professionals improve how they recruit, select, and manage their people. Most thirty-minute episodes of Good Morning, HR will be eligible for half a recertification credit for both HRCI and SHRM-certified professionals. Mike is a member of Entrepreneurs’ Organization (EO) Fort Worth and active with the Texas Association of Business, the Fort Worth Chamber, and Texas SHRM.

Melanie Ronen:

The employer still has to ensure that the technologies they're using aren't operating with a disparate impact. So whether it's the true testing that would have to be done under the statute, or whether it's just good HR hygiene to make sure that whatever methodologies are being used to select employees, evaluate employees, or get job postings matched to the right candidates, there still has to be some effort taken by the employers to make sure that they're not engaging in a disparate impact.

Mike Coffey:

Good morning, HR. I'm Mike Coffey, president of Imperative, bulletproof background checks with fast and friendly service. And this is the podcast where I talk to business leaders about bringing people together to create value for shareholders, customers, and the community. Please follow, rate, and review Good Morning HR wherever you get your podcasts. You can also find us on Facebook, Instagram, YouTube, or at goodmorninghr.com.

Mike Coffey:

In the last year and a half, we've had four podcasts directly related to the use of AI in the employment context, and they're among our most downloaded episodes. Additionally, AI is the most popular topic when I speak at conferences. I've made my AI presentation more than two dozen times over the past year and a half, and it's constantly changing. While Europe was quick to implement AI regulations, the US has been a little slower, but that is changing. New York City already has some regulations in place, and legislators and regulators in California, always a bellwether of future legislative trends, are actively promoting new regulations.

Mike Coffey:

And the EEOC just settled its first AI-related Title VII case. Joining me today to discuss the trends in US AI regulation and what the coming year may hold is Melanie Ronen. Melanie is a partner with the national law firm Stradley Ronon, where she advises clients across industries on a variety of employment-related topics. She also represents clients in litigation matters before state and federal courts. Welcome to Good Morning HR, Melanie.

Melanie Ronen:

Thank you, Mike. Happy to be here.

Mike Coffey:

So let's start with the current laws in the US that already implicate the use of AI. You know, we've got federal anti-discrimination laws and state laws. The EEOC has already been active in this area, and their guidance has basically said, right, Title VII exists. Shocker. So where are we going?

Mike Coffey:

What are the deficits in the existing laws that regulators see? And let's just talk about that whole "Title VII exists, so follow it" piece.

Melanie Ronen:

Sure. As you correctly note, there are plenty of anti-discrimination statutes around the country, starting with Title VII, the ADA, the ADEA, etcetera. So we already have a framework where there's a requirement that employers avoid intentional and disparate impact discrimination. That's true among a lot of states, California obviously being one of them, that has a very robust anti-discrimination statute. I think the deficiencies tend to be, you know, we're operating in sort of a new world in some respects with AI, and it's being used in a lot of different contexts: to try to address volume work in some respects, to try to provide greater efficiencies for employers and reduce the need for, you know, manpower.

Melanie Ronen:

Somebody just, you know, literally sitting and sifting through mountains of resumes. And I think there's at least some belief, or I've certainly heard people comment, that taking the human element out of the function is how we protect against discrimination: just take the human element out, and we're obviously going to avoid that discrimination. And experience is teaching us that that's not necessarily the case.

Melanie Ronen:

So then the question becomes, what do we need to do from an employment perspective or a business perspective to ensure that there is compliance with the anti-discrimination laws, given that we're operating and using technologies that we really hadn't used before?

Mike Coffey:

So the front wave of this, like with most legislation, has been lawsuits. What are you seeing on the litigation side so far about employers' use of platforms that are using AI, or employers' use of AI in making employment-related decisions?

Melanie Ronen:

I think what we see a lot of is a focus on hiring platforms that may be using different technologies to isolate individuals that are qualified for a particular position or selected for interviews. And they may be using things like ZIP code or location as some of the relevant criteria, which may ultimately operate as proxies for discrimination. So I think you're seeing some of that context and that dialogue in the existing claims that we're seeing.
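
One way to surface the proxy problem Melanie describes is to check whether a facially neutral input like ZIP code lines up with protected-class membership and depressed selection rates. Here's a minimal sketch in Python; the records, field names, and numbers are entirely hypothetical and are not from the episode:

```python
from collections import defaultdict

# Hypothetical applicant records; field names and values are illustrative only.
applicants = [
    {"zip_code": "76101", "group": "A", "selected": True},
    {"zip_code": "76101", "group": "A", "selected": True},
    {"zip_code": "76102", "group": "B", "selected": False},
    {"zip_code": "76102", "group": "B", "selected": False},
    {"zip_code": "76102", "group": "A", "selected": True},
]

# Tally selections and group composition per ZIP code.
stats = defaultdict(lambda: {"n": 0, "selected": 0, "group_b": 0})
for a in applicants:
    s = stats[a["zip_code"]]
    s["n"] += 1
    s["selected"] += a["selected"]
    s["group_b"] += a["group"] == "B"

# Flag ZIPs where one group is concentrated and selection is depressed:
# a crude signal that ZIP code may be operating as a proxy.
for zip_code, s in sorted(stats.items()):
    sel_rate = s["selected"] / s["n"]
    b_share = s["group_b"] / s["n"]
    flag = "  <-- possible proxy" if b_share > 0.5 and sel_rate < 0.5 else ""
    print(f"{zip_code}: selection rate {sel_rate:.0%}, group B share {b_share:.0%}{flag}")
```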

Mike Coffey:

So basically redlining, which we've dealt with throughout history. And then there was that iTutorGroup case, which was basically, as best I can tell, outright discrimination by the user of the software. Right? Just eliminate people over a certain age from consideration for this role, which, again, is prima facie discrimination. So what would you tell an employer then who wants to avoid a disparate impact?

Mike Coffey:

I mean, we wanna use this data. There's the example I use in my AI talk: Amazon, I think it was in 2017, did a test. Here's our big candidate pool. Here are our best engineers.

Mike Coffey:

AI, figure out what the commonalities are and pick my best candidates. And it picked good candidates, but it consistently eliminated women. And they took all the female names off the resumes, ran it again, and it continued to eliminate women based on things like what college they went to or what sports they played, things like that. So, bad data. Sometimes correlation is not causation, and AI doesn't always see that.

Mike Coffey:

Certainly, we don't wanna do the intentional discrimination. But on disparate impact, what should an employer do to avoid that kind of discrimination?

Melanie Ronen:

Sure. And it segues nicely into some of the measures we're seeing across the country. One of the common themes we see is, how do we go about testing? Are there assessments required of the technology to see what it's actually accomplishing? Is it actually producing a fair representation of the groups that are being screened?

Melanie Ronen:

But how is it actually operating, as opposed to just sort of blindly letting it operate and then, you know, here's the consequences? One of the things that's really important with the use of generative AI and decision-making tools like that is that how it starts out may not be how it actually results after it's been used and the data continues to work. So, as...

Mike Coffey:

It learns, as they say. Yeah.

Melanie Ronen:

Right. Exactly. So sometimes it actually learns the biases that may already have existed, and it may perpetuate them and make them worse in terms of how it weeds out certain groups. So I think it's about paying attention to what the ultimate effects of the technology are on a regular basis, to make sure that it's not operating in a way that wasn't intended by the user of the software.

Melanie Ronen:

Intentional discrimination, I think we all agree we wanna avoid intentionally discriminating against particular groups, and I think that's a pretty easy topic. How you avoid disparate impact is always a harder topic, I think.
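
That "regular basis" point matters because a tool that keeps learning can drift. Here's a minimal sketch of the kind of period-over-period check Melanie describes, using entirely hypothetical monthly counts:

```python
# Hypothetical monthly (applied, selected) counts for one protected group
# versus everyone else; all numbers are illustrative, not from the episode.
monthly = [
    ("2024-01", {"protected": (50, 15), "other": (100, 30)}),
    ("2024-02", {"protected": (60, 12), "other": (110, 35)}),
    ("2024-03", {"protected": (55, 6),  "other": (105, 34)}),
]

print("month     impact ratio")
for month, counts in monthly:
    p_applied, p_selected = counts["protected"]
    o_applied, o_selected = counts["other"]
    # Ratio of the protected group's selection rate to everyone else's.
    ratio = (p_selected / p_applied) / (o_selected / o_applied)
    # A ratio trending downward month over month suggests the tool is
    # "learning" a bias even if it started out balanced.
    warn = "  <-- investigate" if ratio < 0.8 else ""
    print(f"{month}  {ratio:.2f}{warn}")
```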

Mike Coffey:

And sometimes technology companies especially get out over their skis and get too clever for their own good. I mean, there's a company here in the US right now that does video interviews where the employer poses a question to the candidate, and they record a response. And the employer can look at the response on their own time, which is probably fine. I'm not thrilled about it from a protected-class point of view, but we see protected class when we do an in-person interview too. But this company is saying their technology can basically give you a behavioral profile for this person and tell you what their fit will be in your organization, like a DiSC-type assessment.

Mike Coffey:

That seems to me to be a horrible thing. I mean, I like behavioral assessments, but based on video, there's just a lot of problems. Because they say they're reading the neurolinguistics of the person's face and all that, neurodivergent people are gonna have problems with that process. And the darker your skin is, the less well that facial recognition stuff works. So sometimes these technology companies get out ahead of us.

Mike Coffey:

So to protect ourselves, either from AI learning bad data, seeing correlations and mistaking them for causation and following those, or just from software that's maybe not so clever, what do we do? Take a sampling of our qualified applicant pool and say, okay, based on our current non-technology processes for identifying qualified applicants, we expect a certain number of women, a certain number of people from different racial groups, or whatever your criteria are, and then compare that to what the AI is spitting out? How do you go about doing that bias testing?

Melanie Ronen:

I mean, I think that's one way to do it. The EEOC gives the example of using the four-fifths rule, which is what it recommends in all sorts of other disparate impact analyses. So doing the same sort of analysis: is the hit rate for the protected class at least 80% of the hit rate for, say, white applicants? That's one good way to do it that I think folks are used to from dealing with other sorts of disparate impact. You can also look at it in terms of who's being weeded out.

Melanie Ronen:

Right? If there are certain groups that are consistently being weeded out, even if it doesn't necessarily fall into that exact four-fifths rule, is it worth looking into a little bit more? If you see large groups of applicants that fall into a protected class being excluded, not given the interview, or their resumes aren't being forwarded, dig behind that a little bit and see why it is. There may be nothing to be done. But before you ever get to a place where it looks like there's gonna be a potentially discriminatory impact, you may be able to get some answers as to why groups are being weeded out.

Melanie Ronen:

Do you have a proxy situation where you've got ZIP codes that are being excluded, for example?
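
To make the four-fifths rule concrete: each group's selection rate is compared to the highest group's rate, and a ratio below 0.80 is the conventional flag for potential adverse impact. Here's a minimal sketch with hypothetical group names and counts (none from the episode):

```python
# Hypothetical selection counts per group; numbers are illustrative.
outcomes = {
    "group_1": {"applied": 200, "selected": 60},  # 30% selection rate
    "group_2": {"applied": 150, "selected": 30},  # 20% selection rate
}

# Selection rate for each group, and the highest rate as the benchmark.
rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}
highest = max(rates.values())

# The four-fifths rule: flag any group whose selection rate is less
# than 80% of the highest group's rate.
for group, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "FLAG: below four-fifths threshold"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```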

Mike Coffey:

And apart from the litigation or what the law says, why would we as employers make our lives harder and eliminate qualified candidates based on things that aren't relevant? I think that's the bigger argument from an employer point of view. Certainly, we don't wanna get sued, and we don't wanna be unfair. But the reality is, I've spent all this money and effort to build a deep applicant pool. That's what true diversity is.

Mike Coffey:

Right? I want as many qualified applicants from whatever background as I can get, and then I'm gonna have some tool eliminate some of those? That's shooting myself in the foot. Right?

Melanie Ronen:

Absolutely. And the money spent developing the tool is designed to improve your applicant process and help you achieve that goal of having a greater applicant pool and attracting more qualified applicants. So if that AI is ultimately working at cross purposes with the goals of the employer, typically unintentionally, it's certainly worth the employer finding that out so that adjustments can be made and the goals are ultimately achieved.

Mike Coffey:

And so on the flip side, a lot of these platforms are really dependent on the information provided by the employers. Right? We've always had garbage in, garbage out with computers. Now we've got bias in, bias out. So if I've got an amazing team of engineers, but the hiring manager over this team of engineers only hires people with their background, who maybe even went to their specific school, those kinds of things, that doesn't mean those people aren't fully qualified.

Mike Coffey:

But we give that data to the AI firm or the platform and say, here's what we wanna model after, and they're gonna spit out more of it. We're just gonna replicate that bias. How much liability do the tech companies have if there is a disparate impact?

Melanie Ronen:

Gosh. That's a hard question because, in many respects, it depends on what the tech company is doing, how they're being utilized by the employer, and, in some respects, the different states they're operating in. Certainly, some of the legislation being proposed around the country, maybe not necessarily enacted yet, is focusing on the developers of the AI and putting some responsibility on them as well. There's also at least some discussion about whether some of those technology developers, depending on the usage, can be considered an agent for purposes of some of the anti-discrimination laws. So I think that's really still an open question in many respects.

Mike Coffey:

So then what questions do you think an employer ought to ask their technology provider? If we're looking at spending a number with a whole bunch of zeros at the end on a new platform, what should we ask them so that we know they're not gonna put us in a ditch, or that this technology is not gonna get us in trouble down the road?

Melanie Ronen:

Sure. I think you would wanna, at a minimum, be asking what sorts of testing are being done on the AI. How is it being tested to determine if it's operating in any sort of discriminatory way? I would certainly wanna know how much it has been used. How new is it?

Melanie Ronen:

Has it been utilized enough that you've seen some results to be able to make those determinations? Obviously, everything has to be used a first time, but if it's me, I probably don't wanna be the first one.

Mike Coffey:

Yeah. I don't wanna be the bleeding edge on something like this.

Melanie Ronen:

Yeah. For sure. And also, what can be done? What sort of protections do you have in any contract with the developer to make adjustments as needed if testing, by either the developer or the user of the AI, determines that biases are being perpetuated, or even furthered and worsened in some way through machine learning?

Mike Coffey:

And let's take a quick break. Good Morning, HR is brought to you by Imperative—bulletproof background checks with fast and friendly service. At Imperative, we help clients make well-informed decisions about the people they involve in their business. This includes very thorough employment-related background investigations, but also all kinds of due diligence on vendors, clients, investment targets, or even joint venture partners.

Mike Coffey:

If there are people involved, there's risk involved. We help our clients mitigate that risk. We're waiting to be of service at imperativeinfo.com. If you're an HRCI or SHRM-certified professional, this episode of Good Morning HR has been pre-approved for one half hour of recertification credit. To obtain the recertification information, visit goodmorninghr.com and click on recertification credits.

Mike Coffey:

Then select episode 175 and enter the keyword Ronen, that's R-O-N-E-N. And if you're looking for even more recertification credit, check out the webinars page at imperativeinfo.com. And now back to my conversation with Melanie Ronen. One of the things I really push in my presentations, and in the little bit of consulting that's actually come in related to how employers can use AI, is that I'm a big believer in transparency in all of this. I've always believed that in the absence of a real understanding of how a decision was made, that applicant or that employee is always gonna believe they were treated unfairly.

Mike Coffey:

And so being really upfront about here's how we use it: AI doesn't make the decision. I don't see anybody out there advocating for AI to make the hiring decision, or the termination decision on productivity, or any of that. It's a tool. It's generating information, and then a hopefully empathetic and knowledgeable human is going to apply that information given their skill set and experience. So I'm a big believer in transparency about that.

Mike Coffey:

But you always hear from those folks who are gun-shy about transparency; they've got the idea that the less you tell the applicant, the less fodder it gives them to pinpoint what was unfair. So how do you feel about transparency on these issues?

Melanie Ronen:

Gosh. That's a good question. I think it really, in some respects, depends on what kind of technology you're talking about. I mean, some of those decisions may ultimately be made for employers in certain jurisdictions, because that's another area that is certainly a hot topic.

Mike Coffey:

New York City's got that. Yeah.

Melanie Ronen:

And some of the legislation being proposed around the country is addressing that, providing notice to the applicants or whoever is the subject of the AI. Certainly, more information, I do think, is helpful. One place where that becomes maybe more imperative is with respect to the ADA and avoiding disability lawsuits. If you've got somebody with a disability and they don't know that AI is being utilized, or how it's being utilized, there may be an inability to even request an accommodation. There are certain circumstances with these technologies, take gaming technology, or facial recognition is another one, or voice recognition, that may ultimately impact folks with disabilities. If they don't know that it's being utilized and how it's being utilized, they may not be able to request an appropriate accommodation, and the employer can't appropriately engage in an interactive dialogue to address any accommodation that might be needed.

Mike Coffey:

So Texas's legislative session is about to start. We have a biennial legislative process, so we don't have to worry twelve months a year, every year, about potential doom and gloom from our legislature. We spend six months trying to kill as many bills as possible. But in our session, the interim hearings have already been going on. AI is gonna be a thing.

Mike Coffey:

AI and privacy are big topics here. California just went through a session, and there were some bills there. What were the trends in the bills that you saw in California related to AI?

Melanie Ronen:

So the big one is California's AB 2930, which has gone through a lot of iterations. It's ultimately stalled for now, but it addressed algorithmic discrimination in a variety of contexts, not just employment, but all sorts of consequential decisions where AI might be used, such as housing, health care, and financial services, employment being just one of them. And the discussion around that bill was really in, I think, two primary areas. One, notice, which we talked about: making sure the subjects of the AI are on notice that it's being used, with an ability to opt out if that's technologically feasible.

Melanie Ronen:

And then second, periodic testing: making sure the technology is being tested, both by the developer of the AI and/or the user or deployer of the AI, to ensure there aren't discriminatory impacts. There's been a lot of discussion around that bill. Like I said, it stalled. I expect we're gonna see it come back again next year, and that discussion will continue. Much of the discussion is really on the technicalities of it.

Melanie Ronen:

How are things defined? Is it too broad, so that it's really unworkable? Or is it really working to accomplish the goals the legislature is looking for while not unreasonably burdening employer or deployer groups, giving them an appropriate framework where they really know how they need to be operating, and not leaving so many ambiguities that it makes it really difficult for employers to implement?

Mike Coffey:

And it does seem even the EU regulations have a lot of ambiguity, and we've seen it in the EU with some of their AI actions against US companies, where it's really up to the regulators to figure out what's appropriate. In a post-Chevron US, maybe we won't see that at the federal level, but we could still see it. The New York City Human Rights Commission making decisions about whether something's really creating bias or not doesn't give me a lot of confidence that they're gonna be able to do that. So in the California bill, who was doing the bias testing? And were there guidelines for how you measure its effectiveness?

Mike Coffey:

How were they suggesting that we know this testing is accurate and valid, and that it's gonna pass muster, before you spend all that money?

Melanie Ronen:

Before you spend money on the technology itself.

Mike Coffey:

On the technology, too, but also the testing. I mean, I don't think it's gonna be inexpensive. I look at what some employers spend just on getting their affirmative action plans done. I can only imagine what the cottage industry of bias testing is gonna look like in a year or two.

Melanie Ronen:

Yeah. There wasn't a lot of guidance on the details of it. Certainly, it was something the Civil Rights Department would be involved in. There was also, again, this has gone through so many iterations, but there's the idea of each employer having a governance program to oversee it and take a look at it. But in terms of what end result would actually be deemed not discriminatory...

Melanie Ronen:

I don't think that was really laid out, if that was your question.

Mike Coffey:

Yeah. So it would be the Civil Rights Department that would do that.

Melanie Ronen:

Yeah.

Mike Coffey:

And I can see employers of a certain size being able to really test their data, but I'm thinking down the road, just to throw a name out, if Indeed or one of the big candidate-and-job-matching websites rolls out their own AI to help, and they've already got some in the back end already. And I'm an employer with 50 or 100 employees. I don't have a massive ATS. I don't have a really complex HR system. I'm just relying on these cloud-based software-as-a-service type products. I'd be really concerned that there's a law that says I have to bias test this. But how do I, in my smaller context, let's say I've got 105 employees, so every possible law would likely apply to me.

Mike Coffey:

How would I go about that? That would be a concern for me.

Melanie Ronen:

And that's a valid concern, for sure. I think one of the amendments that took place over the course of the year addressed whether deployers, the ultimate end users of the AI, can rely on the testing that was done by the developer. And I think there's at least some movement towards that being the case, so that every employer isn't having to do testing on top of what the developer did. But with a caveat: the employer still has to ensure that the technologies they're using aren't operating with a disparate impact. So whether it's the true testing that would have to be done under the statute, or whether it's just good HR hygiene to make sure that whatever methodologies are being used to select employees, evaluate employees, or get job postings matched to the right candidates,

Melanie Ronen:

there still has to be some effort taken by the employers to make sure that they're not engaging in a disparate impact. But to answer your question more directly, I think there is at least a movement to not duplicate effort between deployers and developers.

Mike Coffey:

And then the Civil Rights Council in California had proposed rules around AI and employment discrimination. Did those vary significantly from what you just described in the proposed bill?

Melanie Ronen:

They did, in the sense that, the way I think of those, they really updated some of our statutes and definitions under the Fair Employment and Housing Act, that's California's anti-discrimination statute. It's very robust as it is. But now, with these regulations, there'll be some definitions that speak to the technology we're living with now, AI, algorithmic discrimination, having a good understanding of what those are and how they might fit into the existing anti-discrimination statutes. It's really more of a look at these statutes that have been in place for a long time but, in my mind, updating them to reflect the technologies being used in the employment world today.

Mike Coffey:

So I'm an employer, let's say anywhere in the country, certainly California or New York, but even a less progressive state on these issues. What would you say to an employer who wants to implement this technology? They want the efficiencies out of their employee selection process and all of that, but they wanna make sure they don't run afoul of existing law, and they wanna kind of future-proof themselves. What would you suggest their considerations be as they look at new technologies?

Melanie Ronen:

Gosh, there's a lot of things that can be looked at with future technologies. One is having the ability to adjust and change as the technology is used more and more information comes out. Is there an ability to adjust and make changes so that you're not stuck in a stagnant technology that you'd have to scrap if it ultimately wasn't working appropriately? I think partnering with technology providers that have the ability to do testing, and to make sure the products they're providing have been tested on their end, so it's not just left to the employer to figure out how to do that, particularly if they're not a technology company themselves.

Melanie Ronen:

That can be quite difficult. And, just as you would in the world of non-AI hiring, paying attention to what your applicant pool looks like and what the groups you're interviewing and ultimately offering jobs to look like. Are you seeing trends that may raise a red flag? In my mind, that's, again, good HR practices. I think that's one of the best ways you can put yourself in a good position: being attentive to trends developing in your organization, so that you don't find yourself too far down the road, where it makes it difficult to remedy a situation after the fact.
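
That funnel-watching advice can be made concrete by computing each group's pass-through rate at every hiring stage, so a red flag surfaces at the stage where it appears rather than only in final offers. Here's a minimal sketch; the group names and counts are hypothetical, not from the episode:

```python
# Hypothetical counts of candidates per group at each stage of the
# hiring funnel; all names and numbers are illustrative.
funnel = {
    "applied":     {"group_1": 400, "group_2": 300},
    "interviewed": {"group_1": 80,  "group_2": 30},
    "offered":     {"group_1": 20,  "group_2": 7},
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    print(f"{prev} -> {curr}:")
    # Pass-through rate per group between consecutive stages.
    rates = {g: funnel[curr][g] / funnel[prev][g] for g in funnel[prev]}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        warn = "  <-- red flag at this stage" if ratio < 0.8 else ""
        print(f"  {group}: pass-through {rate:.0%} (ratio {ratio:.2f}){warn}")
```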

Mike Coffey:

So, final question. Do you think you're bearish or bullish on AI in the employee selection process?

Melanie Ronen:

When you say bearish or bullish, give me some more context.

Mike Coffey:

Bearish: nervous, let's step back and take a deep breath. Bullish: let's run toward the fire.

Melanie Ronen:

Oh, gosh. I would say I'm probably on the more bullish side, but leaning towards the middle. I think the opportunities are immense, and the efficiencies and the scope are really great. You can reach a lot of candidates in a way that really couldn't have been done in prior generations, and you can assess a lot of applicants. And I think that's all great.

Melanie Ronen:

If it's done with careful selection processes and with eyes on ensuring that it's done fairly and appropriately, I think it's a great thing. I just think it's something we have to continue to watch, continuing to see how the AI is developing and being used. And, I hate to say it, nobody wants to be the example that people learn from, but learn from the examples you're seeing in the media. They're out there.

Melanie Ronen:

Right? The ones that didn't work out right, learn from those, and the next one will be better.

Mike Coffey:

There you go. Well, that's all the time we have. Thanks for joining me, Melanie.

Melanie Ronen:

Thank you so much. I really appreciate it.

Mike Coffey:

And thank you for listening. You can comment on this episode or search our previous episodes at goodmorninghr.com or on Facebook, Instagram, or YouTube. And don't forget to follow us wherever you get your podcasts. Rob Upchurch is our technical producer, and you can reach him at robmakespods.com. And thank you to Imperative's marketing coordinator, Mary Anne Hernandez, who keeps the trains running on time.

Mike Coffey:

And I'm Mike Coffey. As always, don't hesitate to reach out if I can be of service to you personally or professionally. I'll see you next week, and until then, be well, do good, and keep your chin up.