Time to Hire

Ep 45 AI Governance & Trust in Recruitment: Insights from FairNow's CEO

Recruitment Process Outsourcing Association (RPOA)

AI adoption is transforming recruitment, yet 30+ new global AI regulations introduced in just four years reveal a sector racing to manage unprecedented risk. Recruitment leaders grapple with the mounting challenge of implementing trustworthy AI. Only organizations investing in robust governance strategies today will seize tomorrow’s opportunities, while others face stalled innovation and escalating compliance threats.

How can recruitment process outsourcing (RPO) leaders innovate without losing trust? In this special episode of the Time to Hire podcast, recorded live at the 2025 RPOA Annual Conference, host Lamees Abourahma welcomes guests Guru Sethupathy (CEO, FairNow) and Emily Kahn (Head of Product & Innovation, Hueman People Solutions). Together, they show how recruitment organizations can build AI governance frameworks that enhance trust, ensure compliance, and accelerate progress. Drawing on deep industry expertise and practical case studies, this conversation cements RPOA's role as the trusted convener for the entire RPO community.

Hueman RPO is a sponsor of the 2025 RPOA Annual Conference and a Gold Member of the RPO Association. 

Learn about and find more content from the 2025 RPOA Annual Conference here. 

About the Podcast

Time to Hire is produced by the Recruitment Process Outsourcing Association (RPOA), the leading authority on recruitment process outsourcing (RPO) foresight and innovation, and the trusted convener for the global RPO community. Through conversations with industry leaders, the podcast explores the trends, insights, and innovations shaping the future of talent acquisition.

Learn more about RPOA and join the community at: https://www.rpoassociation.org.

Follow the host, Lamees Abourahma, on LinkedIn.


Lamees Abourahma:

The biggest risk today isn't using AI, it's using it without the right guardrails. A great insight from today's podcast. Welcome to Time to Hire, the podcast from the Recruitment Process Outsourcing Association, where we explore stories and strategies shaping the future of talent acquisition. I'm your host, Lamees Abourahma. In this special episode, we bring you a live conversation from our 2025 RPOA Annual Conference in Chicago. Guru Sethupathy, CEO of FairNow, and Emily Kahn, Head of Product and Innovation at Hueman People Solutions, share leadership insights and a practical framework for building AI governance and trust in recruitment partnerships. You'll hear actionable lessons and industry-tested strategies that are shaping how smart organizations innovate without losing sight of what matters most. Let's dive in.

Guru Sethupathy:

Awesome. Thanks for having us, RPOA. It's a pleasure to be here. Emily, good to chat with you. I think this is a really nice conversation to have following up Stan's talk. I don't know if he's still here. Oh, there he is, yeah. Stan and I have chatted a bunch about AI and about AI governance, and it's a nice segue because, look, I'm actually quite bullish on AI. I'm very bullish on companies like Scotty. I do think AI is gonna change the world. Now, what the timeline of that is gonna be is interesting, and one of the things that's gonna determine that timeline is how organizations handle risk, compliance, and governance. When I go around talking to companies, they're overwhelmed with what they're hearing about AI. Every vendor is talking about AI, their board and their C-suite are telling them to go do AI, whatever that means. But at the same time, there's a lot of concern around risk and compliance. And when we're talking about the RPO space, I think I met Lamees maybe a year, year and a half ago, and I really got introduced to the RPO space. I've spent time in it now, and we actually work with Hueman and many other RPO partners. I've really come to understand the role you play in the talent acquisition process and how important trust is. Even putting aside AI, you handle some of the most important functions for an organization in terms of acquiring talent, and companies trust you on that journey, whether you're doing the end-to-end process or bits and pieces. Now that you're starting to think about how to use AI to enhance and improve what you're doing, that measure of trust is gonna be challenged, right? Your customers are gonna come and ask you a whole set of questions, if they're not already, about how they can trust the technologies that you bring to the table. So it's gonna be paramount for you to figure out how to govern the technology stack that you are building, implementing, buying, or partnering on in support of what your customers are doing. And that's what Emily and I are gonna chat about over the next bit. This is a conversation, but it's not just a conversation between us. This is only gonna work well if you ask us questions, and hopefully you have a lot of them. I've been hearing a lot of questions, so I expect you will. Please interrupt us. We have some predefined topics, but please interrupt us and ask questions, and hopefully we'll have a good conversation. Sound good? Awesome. Emily, we've been working with you for almost a year now, and I was really excited because you especially are leading a lot of the innovation efforts at Hueman. AI is a big part of that, and I was really impressed with your foresight into the importance of governance. So help me first understand why AI governance is important to you and to Hueman.

Emily Kahn:

Thanks, Guru, and thank you for having me. I'm excited to have this conversation with all of you. AI governance is important to me personally because, like you, I think there is so much opportunity in what AI can do to transform our organizations and the talent acquisition process, but there are risks involved with that. Earlier in my career, I did a lot of research on differences in bias in processes, and AI is only as good as the process around it and the data that it's trained on. So I wanted to ensure that we were monitoring it and using it appropriately. Separately, why it's important for Hueman: as you mentioned, trust is a big part of being an RPO. We work with a lot of healthcare organizations where trust with their patients is really important, trust in the safety of their data is really important, and we're helping them bring in new people to support their organizations and their culture long term. This is something that could potentially disrupt that trust. So how do we make sure they know that, even as we innovate, we are still maintaining our quality as a partner and thinking about the risks that could affect us and them in a thoughtful way?

Guru Sethupathy:

One of the interesting experiences that led me on this journey was when I was an executive at Capital One. I came in guns blazing with a very ambitious AI agenda. Capital One is a very innovative firm, but it's also, in some ways, in a very similar space to HR. There are actually fascinating analogies between lending and hiring, both in terms of the funnel, the process, and the data, but also the risks, right? And Capital One is very risk averse in some ways too. So I was an outsider coming in, trying to implement AI, and there was just so much pushback internally. It was only when I implemented governance, increased transparency, put a process in place, brought people in, added documentation, an audit trail, compliance, right? When I started doing those things, I realized that actually helped me move faster from an innovation standpoint. I used to think governance was check-the-box, a pain, compliance, "oh, I gotta do this thing, it's gonna slow me down from an innovation standpoint." And it was a real aha moment: no, this has actually enabled me to go so much faster. That's the message I've been trying to share with others who think, "hey, I don't want to do this, I may have to do it." No, this not only reduces your risks, it actually enables you to go faster. So whether you're getting pressure from your board, your investors, your C-suite, or your customers who might be saying, "hey, I want AI, I want to go faster," governance actually enables that, because it builds trust. Otherwise, you're fighting battles constantly around the AI. So that was one big realization. Emily, how have you seen that in action at Hueman?

Emily Kahn:

I'm glad you asked that, and it's a really good segue from your example. One of the biggest challenges to adopting AI is the change management internally. There's having the trust with our clients on how we're using the tool, how we're using their data, what the outcomes will be, and what risks we're exposing them to. But there's also the internal piece with all of our recruiters, who are used to interacting with candidates in a particular way. There's trust to build there as well: what is this gonna mean for the candidate experience? What does this mean for the relationship I have with the candidate? What does it mean for what I'm promising them in the process and what they can expect from their employer? Part of this governance piece was also being able to answer their questions: what does this mean for a candidate's experience, and what does this mean for bias? Are we still treating all of the candidates the way that I would want to in my role, in how I care about actively recruiting the talent? So a big piece of it, to your point, is not just gaining the trust of our clients, but gaining adoption from the people who are gonna be using the tool every day.

Guru Sethupathy:

The next question I have for you: we started working together about a year ago, and I'd like you to share with the audience where you were in that journey at that point, especially around governance. Because while I think you're ahead now, I'm starting to talk to other RPOs, and I think they are where you were. So maybe share a little bit of where you were at that point in your journey and where you've gotten to since then.

Emily Kahn:

So at the point at which we engaged Guru and FairNow, we truly had nothing. Well, I should say we had all of the normal compliance things that you have unrelated to AI, but we didn't have any AI-specific governance: how do we vet vendors? How do we ensure the AI is operating appropriately? What do we do if, when we test it, it's not operating appropriately? What mitigations do we apply in that case? Do we have different use cases that are higher risk or lower risk, and how do we evaluate that? How do we communicate what we're doing, both to our clients and to the external world? What kinds of use policies do we have for our recruiters? We didn't have any of that. We had decided that we wanted to use AI, we had picked a couple of use cases we wanted to start with, and we had started looking at a variety of tools to help support us with that. But we hadn't selected anybody, we hadn't contracted on anything, we hadn't built anything out, and we definitely hadn't started rolling anything out to frontline users. We were truly at the beginning of that journey: none of the infrastructure, definitely none of the communications, and not a whole lot of decisions made either. Very early.

Guru Sethupathy:

I've been talking to several folks in this room and outside it, both in the RPO space and the HR ecosystem more broadly, HR technology, HR buyers, and we're seeing a lot of that right now: organizations are starting to realize the importance of this. One of the things driving it, and you can see it on the slides up there, is the regulatory compliance landscape. What's happening is that buyers, your customers, HR organizations, et cetera, are often the ones heavily in scope for many of these laws and regulations. At the end of the day, they are deploying your technology, or the output of your technology is being used to make decisions in their organizations, right? You're talking about hiring and talent acquisition, and ultimately they're mostly responsible for it. Emily and I actually had a really good conversation about whose responsibility this is at the end of the day. Where does the decision sit? Where does the responsibility sit? Where does the risk sit? It's very nuanced and it depends, but most of it sits on the side of the deployer, the organization whose employees are affected, which is not you as much as your buyers, right? But then that's gonna impact you. So what they're concerned with is this list: four years ago, there were no laws and regulations around AI. Now there are over 30 globally. I know many of you serve organizations all over the world. The biggest one, of course, is the EU AI Act, which goes into effect next year. And what's interesting is that HR is very much in the wheelhouse of these regulations, because it's considered a high-risk domain. You're impacting people's careers, right? Which we all know. Financial services and healthcare are some of the others, and if you're sitting where Hueman is, with customers who are HR organizations in the healthcare domain, that's a double-risk scenario. So in effect, because they are now taking on so much risk, they are doing a bunch of diligence on you and on their other vendors. That's where it starts to come together in terms of your responsibilities: how are you managing that trust and those relationships? Companies like Hueman are in both spots: you have a vendor, or really a series of vendors, that you're buying AI technology from, and you're using that to do work for customers. Emily, can you share more about that complexity?

Emily Kahn:

So in addition to the series of vendors, in terms of compliance and the way we're using things, there's compliance around what the tool does itself, and then there's compliance in how you use it. As we start to string things together, or as we decide to use a tool in different client contexts, all of a sudden it interacts with each of those regulatory considerations in a different way. One of the things we wanted to ensure we were doing, and that FairNow has helped us with, is understanding, as we have different use cases with the same tools and the same technology, how it actually applies differently in those different scenarios, and how that changes our obligations to our clients and to the greater environment at large. The other piece here, though: you mentioned, Guru, that our clients are doing diligence on us and asking how we do this. A lot of times, what we were hearing from our clients was, "we don't really know what we're supposed to be asking you, we just know there's something we should know. Tell me what it is that you're doing that you're supposed to be doing." So there was a lot here, too, in terms of building trust: let me tell you what this landscape looks like, let me tell you why we're confident we know what's going on, and then what we're doing about it. Because the space is so nascent, we're not just beholden to clients because they have questions we need to answer; a lot of times they weren't sure what questions they were trying to ask. How could we be a great partner and create an even deeper level of trust by helping them along in their own journey?

Guru Sethupathy:

I think that's a great point, and I want to emphasize it for a second. They're not only coming to you for recruitment process outsourcing, and not even just for the ROI, the efficiency, and all that. They're also coming to you for the things they don't understand about this technology that's coming, right? You're in a really interesting spot. And that comes along with the relationships you've had with your customers for a long time and the trust you've built for a long time; that's why they're coming to you. So the more you can become an expert on this topic, the better. And I think that's where, Emily, you're a leader now in this space, in terms of how I've seen you evolve and learn over the last year. You can now go talk to people about what good governance looks like, how to do it well, and why it matters, and that's gonna build that trust even more. So I think that's a really important point you were making.

Emily Kahn:

In a lot of situations, the RPO acts as a talent acquisition expert. Part of what we bring to the table as an industry is that we know how to recruit incredibly well, and we can help you fulfill your current needs but also evolve your talent acquisition organization so that it looks better, different, best in class. In the same way that recruitment tools have long been part of that expertise: we're experts, here's what we bring to the table, we can advise you on how to use this variety of things. AI is in many ways part of that as well: how do you use AI as it relates to talent acquisition? So this is one other way in which we felt it was important to be an expert, and, for the RPO industry broadly, for the community at large to be knowledgeable and bring that to the global, entire talent acquisition space, as leaders in how you do this and how you do it well. Because as we show that we all understand how this goes, it helps everyone else garner the trust it takes to use these things effectively.

Guru Sethupathy:

I'll answer at a high level, and then, Emily, feel free to add on. There's a set of policies, procedures, and techniques you can use to handle this. Now, part of it depends on whether you're building this internally or getting it from vendors, right? Let's assume for a second, just to simplify, because we could talk about this for half an hour, that you're getting it from a vendor, a particular tool. Then what you want to do first is ask them for the results of any testing they've been able to do. Ideally third-party testing; that's better. You should expect your vendors, especially if their tools are used to select, score, or rank, to have done this kind of testing. If they haven't, that's a red flag to me. Okay, so that's part one. Part two: on top of that, you should still do your own testing. The data they may use for their testing is helpful, that's good, but the data you might be using as part of that tool or technology might be different, so you don't know how the tool is gonna handle those different scenarios. Third, one of the challenges, and this is where we've had to come up with some creative ways to handle it, is that a lot of companies don't have demographic data. In some ways that's good: you don't want to actually use demographic data in your AI systems, right? But the challenge is that it then becomes harder to test. So one of the things we've done is come up with synthetic data specifically tuned for the HR ecosystem that we can use to test these different AI systems, get the output, and then evaluate how well they perform. So those are some of the things you should do and that we're helping with. Emily, do you want to add on?
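
To make the testing idea concrete, here is a minimal sketch of one widely used bias check, the four-fifths (adverse impact ratio) rule from US hiring compliance, run on synthetic outcomes of the kind Guru describes. The function, data, and threshold here are illustrative assumptions, not FairNow's actual methodology.

```python
# Compare selection rates across demographic groups and flag any group whose
# rate falls below four-fifths (0.8) of the highest group's rate.
# The applicant data below is synthetic and purely illustrative.
from collections import defaultdict

def adverse_impact_ratios(records, threshold=0.8):
    """records: iterable of (group, selected) pairs.
    Returns {group: (ratio_vs_top_group, passes_threshold)}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: s / t for g, (s, t) in counts.items()}
    top = max(rates.values())
    return {g: (round(r / top, 3), r / top >= threshold)
            for g, r in rates.items()}

# Synthetic screening outcomes: group A selected 60/100, group B 40/100.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 40 + [("B", False)] * 60)
print(adverse_impact_ratios(sample))
# {'A': (1.0, True), 'B': (0.667, False)}  -> group B falls below 0.8, flagged
```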

Emily Kahn:

I think you covered the vast majority of it. The other piece around governance that helps with this, part of what we put together in the governance playbook FairNow helped us develop, is: how often should you be testing? When you're testing, what are you looking for? What's the indicator that something isn't working the way it should, that you're having adverse effects on specific populations? If you have that in your process, you know: I'm testing at this regular interval, I know what I'm looking for, and this is how I know it's working the way I want it to. Having that in place and documented both helps alleviate your own concern about it not working properly, and documents what you would do if you found that something wasn't working the way you'd want, which are all the things Guru just mentioned.
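
One way the "regular interval plus known indicator" idea from the playbook might look in practice, reusing the adverse_impact_ratios sketch above. The cadence, threshold, and mitigation hook are illustrative assumptions, not the contents of Hueman's actual playbook.

```python
# Sketch of a periodic bias review: pull the review window's outcomes, run
# the adverse-impact check, and trigger a documented mitigation path for any
# flagged group. Assumes adverse_impact_ratios() from the previous sketch.

def periodic_bias_review(fetch_period_outcomes, mitigate, threshold=0.8):
    """fetch_period_outcomes() -> list of (group, selected) pairs for the window.
    mitigate(group, ratio) is whatever the governance playbook prescribes
    (pause the tool, retest with different data, notify the governance team)."""
    results = adverse_impact_ratios(fetch_period_outcomes(), threshold)
    for group, (ratio, ok) in results.items():
        if not ok:
            mitigate(group, ratio)  # documented, auditable response
    return results

# Illustrative wiring (names hypothetical):
# periodic_bias_review(lambda: load_outcomes("2025-Q3"),
#                      mitigate=lambda g, r: alert_governance_team(g, r))
```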

Guru Sethupathy:

Just to add very quickly before we go to your question: there's a document called a model card, which is quite useful. We've been helping some of our clients develop model cards, and what a model card does is define and outline how a technology works. How does that AI system work? What kind of AI system is it? Is it machine learning algorithms? Is it generative AI based on LLMs? Is it using foundation models? What data does it take in? What has it been trained on? What are the outputs? How is it intended to be used? What are the risks? It goes through all of these things in good detail. Why is that helpful? One, it's good for you to know all of that if you're ever asked. Two, it gives your customers a lot of comfort, and transparency. One of the things they're looking for is to know that you're on top of this. That's what trust amounts to, right? Are you on top of knowing what your tools do, how they work, and how you manage risk? The model card helps you share that. The last thing it does, to your point, and this made me think of it, is cover your risk a little bit. In the model card, you also explain how the tool is intended to be used. Because one of the things that can happen, and that you need to start thinking about, is: if you have these technologies, do you make decisions off the technology, or do your customers? We talked about this. How does that workflow happen? Are you making hiring decisions on their behalf, or are you just giving them the technology and they make those decisions? And if it's the latter, is there user error, and who does that come back to bite? We can talk about that more, but it becomes a very important consideration. I remember we discussed this with your C-suite as well, in terms of determining how you want that workflow to go.
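
Model cards are a documented industry practice; here is a minimal sketch of the fields Guru lists, captured as a simple structure a governance process could fill out per AI system. The field names and example values are illustrative, not a formal standard or FairNow's template.

```python
# A model card skeleton covering the questions above: what the system is,
# what goes in and out, how it is meant to be used, and its known risks.
from dataclasses import dataclass

@dataclass
class ModelCard:
    system_name: str
    system_type: str            # e.g. "ML ranking model" or "LLM-based generative AI"
    foundation_model: str       # underlying foundation model, if any
    training_data: str          # what it was trained on
    inputs: list[str]           # data the system takes in
    outputs: list[str]          # what it produces
    intended_use: str           # how (and by whom) it is meant to be used
    out_of_scope_uses: list[str]
    known_risks: list[str]
    bias_testing: str           # summary of tests run and results

card = ModelCard(
    system_name="Resume screening assistant",
    system_type="LLM-based generative AI",
    foundation_model="(vendor-disclosed foundation model)",
    training_data="Vendor-curated resume / job description pairs (per vendor docs)",
    inputs=["resume text", "job requisition"],
    outputs=["match score", "shortlist summary"],
    intended_use="Recruiter decision support; a human makes the final decision",
    out_of_scope_uses=["fully automated rejection without human review"],
    known_risks=["proxy bias in training data", "over-reliance by users"],
    bias_testing="Quarterly adverse-impact testing on synthetic HR datasets",
)
```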

Emily Kahn:

One other thing that I think is a really important distinction here: there's bias that's introduced by the underlying model, and bias that's introduced because your process has bias in it. The second one is not the AI's fault; the first one is. If you have things in a job description that treat one population differently than another, that bias is going to be there regardless of whether an AI is doing the screen or a human is doing the screen. So there's a component here that is "how is the AI introducing bias," but then also "how are my processes exacerbating or introducing bias that wasn't there when the model stood on its own?" This is where testing in your environment versus in the vendor's environment is important. It's an important distinction to look at what you have around the model, and how it's being used, versus just the underlying model itself.

Guru Sethupathy:

Sorry, I just need to add on, because this actually came up with one of our customers, and it's such an important point. We did a bias test of a vendor tool they were using, and it looked very biased. Then we used different data, and the bias went away. Working with that customer, it came out that it was their processes that were feeding biased data into the model, which then led to the bias. So this is a really, really crucial point.

Emily Kahn:

One example I was talking with some people about earlier is sourcing. One of the things sourcers will often use is a Boolean string that represents characteristics you're trying to look for. Say I want to hire a salesperson, and what I'm looking for in that person is being incredibly competitive. Well, some things that are representative of being competitive are having participated in sports at different levels. But competitiveness comes out in a lot of other activities too. So if I'm searching for people who've played baseball, basketball, football, et cetera, I'm gonna find one population, and that population might look different than if my Boolean string searches for someone who was in debate, or who participated in, I don't know, what else is competitive? Chess. Yep, all very competitive. Potentially different demographic sets. That means the population I'm looking at when I'm sourcing has some bias in it, versus looking for an indicator of competitiveness itself. It's a similar thing in a lot of other parts of the process. That's a choice you make process-wise, not something that's necessarily underlying in the tool. I could go into my sourcing tool and say, look for someone who played football. Now I've introduced bias. That's where the process piece can look different from the underlying model, because the tool is looking for what I've told it to look for, and I've told it to look at a particular population.
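
Emily's sourcing example, sketched in code: the same "competitive" intent expressed as a sports-only keyword filter versus a broader trait-based filter reaches different slices of the candidate pool. The candidates and keyword lists are synthetic and illustrative, not real sourcing queries.

```python
# Two ways to encode "competitive" as a Boolean-style keyword filter.
# The narrow sports-only set reaches a subset of the trait-based pool,
# which is exactly the process-introduced skew described above.
SPORTS_KEYWORDS = {"baseball", "basketball", "football"}
COMPETITIVE_KEYWORDS = SPORTS_KEYWORDS | {"debate", "chess", "hackathons"}

candidates = [
    {"name": "c1", "activities": {"football"}},
    {"name": "c2", "activities": {"debate"}},
    {"name": "c3", "activities": {"chess", "hackathons"}},
    {"name": "c4", "activities": {"basketball", "debate"}},
]

def boolean_search(keywords):
    """Return candidates whose activities overlap the keyword set."""
    return [c["name"] for c in candidates if c["activities"] & keywords]

print(boolean_search(SPORTS_KEYWORDS))       # ['c1', 'c4']  narrower, skewed pool
print(boolean_search(COMPETITIVE_KEYWORDS))  # ['c1', 'c2', 'c3', 'c4']  trait-based pool
```

The tool faithfully returns what it was told to look for in both cases; the difference in who gets surfaced comes entirely from the query, which is the process choice, not the model.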

Guru Sethupathy:

Yeah, we're seeing a wide range, but with the vast majority actually being pretty unsophisticated on the customer side. I think they're at a stage now where they know they should be asking you things, they know there's some risk to be managed, they know there are a bunch of laws, but they don't know what that looks like. And the other thing they don't know is what a good response from you looks like. We worked on this with you too: hey, we got this response, but is that good? Is that bad? Is it good enough? Is it gonna protect us? There's that whole element. So this is all gonna be a learning process. The way I describe it: I'm a terrible ice skater, and so is my wife, so if we're trying to help each other out, we're both probably just gonna fall. That's a little bit of where we are right now in this AI journey: both the buyer side and the vendor side are in the early days of learning how this works. Again, that's where, if you can get ahead of this and move faster in terms of understanding and building the governance, they're gonna be more likely to trust you in the market. There are some more sophisticated folks, but even then, they're usually not in the HR group. What happens is HR brings in IT or risk, and those folks end up knowing more, but they also slow down the sales cycle. So there's a bit of, "oh man, now IT is involved in this conversation."

Emily Kahn:

That's real. A couple of additional things to add. Remind me, you asked about contracting, and about what we're hearing in terms of more sophisticated versus less sophisticated buyers.

First, the type of question depends on the audience. The HR side has a very different set of questions than the IT and compliance side, so it's important to have a different set of materials or a different talk track, knowing which one is looking for what information. We've talked about a lot of the technical things: I want to see your governance documents, I want to see your policies. IT cares about that; HR generally doesn't. Maybe they should, but those aren't usually the questions at the forefront. What we've heard more from the HR side is: how is there bias, how are you mitigating it, and how do you know if something is there? How are you using my data? They don't usually know what they should be looking for in the answer, but they know it's one of the things they should be asking. What are you doing with my data and where does it go? And then, what is the candidate experience like? That's usually a big question, less about governance, but one of the biggest worries on the HR side. On the IT side, the governance questions tend to be more around information security: how are you testing? Where is the data held? How often does it get deleted? What kind of data are you using? Is it personally identifiable information, which opens up a whole different set of risks and regulations to know about? What is the underlying data packet you are playing with? Those are the big pieces there.

As for best practices: we've developed materials for both audiences, which saves you time and saves them time. It takes asking a couple of these questions and going through a few rounds to get the full set, but once you have the set of questions and the set of answers, forward it to whoever needs it. Give IT a huge packet and they're really happy. They have their own processes, but it's much easier for everybody if you have the material ready to pull through. We've also helped our HR-side partners on the adoption piece, and with some comfort there, with talk tracks for their teams, because ultimately our partners want to be able to speak knowledgeably about this to their senior leaders and their teams. So how can we support them not just in answering their questions, but in being able to turn around and answer those questions themselves?

On contracting, were you referring to vendors or to clients? To clients. That is very organization-specific. When you're creating your own governance documents, part of what you're assessing is: what risk am I comfortable taking on? What goes in your contract is a reflection of that. So it's very dependent on where you are as an organization, what kind of exposure you're comfortable with, and also what the set of clients you generally work with are comfortable with. That fluctuates a bit depending on where you end up sitting as an organization.

Guru Sethupathy:

I want to add on to answer that question and then pivot to asking you something. In terms of best practices, one of the things we've seen work really well is being proactively prepared with transparency and documentation. This is where I was going with the model card concept, but there are other documents like it: model cards, impact assessments, data documentation, et cetera. So even before IT comes in, imagine if you can say, hey, look, we have all of this. One of the things we've started seeing more and more of is a trust center. A trust center is a page off of your website that has all of the various compliance certifications you hold, all of your transparency documents, your model card, and any bias testing reports you have. It would all be there. The nice thing about that is, one, it's transparent: any customer can come in and automatically get their questions answered, so you're already saving yourself a lot of time. Imagine all the bespoke questions you get; those get answered automatically now. And the second thing is it builds trust: they know you're on top of this. We're starting to see more and more of these trust centers, so I encourage you to look into them. If you want some examples, I can send you examples of companies that are doing this well. If you can bring that to the conversation proactively and say, hey, we've got all these documents, we're compliant with all these things, ahead of the conversation, that's gonna make the customer conversation go that much smoother. I'd like to ask you... oh, I remembered what it was. Yeah, go ahead before you forget.

Emily Kahn:

On all of these documents, the trust center, the contracts, the talking points: these are all living documents. This is definitely not a set-it-and-forget-it type of thing, particularly because the legal landscape is evolving so rapidly. This is such a new space, and how governments interact with it, and what we find the issues are as our use cases evolve, will continue to change the legal landscape, and the interpretation of how the laws are implemented is continuing to evolve too. So it's the type of thing, and Guru has helped us with this as well, where we've put in place what we think is a great starting point. And then part of what's in the governance documents, and that does become a mouthful after a while, is how often we come back to refresh and review them and make sure they're still serving us the way we need. So it's definitely a living piece.

Guru Sethupathy:

Well, and that's both because the regulatory landscape is evolving and because the AI technology is evolving so fast, right? Things are constantly changing.

Emily Kahn:

So, in terms of risk: there are very few things that have no risk. You can use the example of what we do in general, right? Hiring people also has risk. There are places within all of our organizations and processes that introduce some risk. There is some chance that some recruiter says something that treats one candidate differently from another, even though the process says otherwise. There are issues everywhere. But the processes you have in place, the more standard something is, the training you have around it, all of that reduces the likelihood of that type of thing happening. So this is about what you have set up to minimize that risk. If you set the expectation that nothing is perfect, but ask what you can do to make it operate as intended as often as possible, that's how you reduce the risk. From a contractual standpoint, what kind of risk are you willing to take on? Sometimes there are true business decisions in that. If I have a clause in a contract that a client won't sign, am I willing to get rid of that clause in order to bring this client on board? That can be a case-by-case decision, as long as the appropriate people are part of that discussion. Sometimes these are business decisions, not black-and-white, right-or-wrong decisions. You just have to decide: am I willing to do that or not? Are we as an organization willing to do that or not?

Guru Sethupathy:

Everything she said, but I'm gonna say it in a more pithy way: the number one risk you or your customers can take right now is not investing in AI. Right? I'm gonna start with that. Because it is already significantly providing value to your competitors. So if you're not investing in AI and you're delaying when you start going down that road, that is gonna hurt you from a business standpoint. The second biggest risk you can take right now is not governing the AI that you have. They go hand in hand. If you're too terrified to use AI, you're gonna lose. If you take AI on and don't put any controls or guardrails around it, you're gonna lose. So there is a middle ground that's actually quite powerful. I'll give you an analogy that maybe all of us can understand; it's not AI-related. Imagine you go back home and find out that in your town there are no traffic lights, no stop signs, no police cars, anybody can drive, there's no such thing as a DUI, and there are no seatbelts anymore. How comfortable would you be driving 80, 90 miles an hour on the highway? Would you go faster than you would today? All the regulations are gone. You would?

Audience member:

Absolutely.

Guru Sethupathy:

Anyone else agree with him? Who else is gonna drive faster? Yeah. Exactly. So look, maybe he's fine with it, but I'm gonna bet 99% of people are not. And that's the world we're in. It's the complete wild, wild west right now with AI. If you don't have any guardrails whatsoever, no one's gonna adopt your technology. They're just not. People don't trust it right now; people are terrified of it. There are all sorts of questions, and there are risks involved. So you have to be able to overcome that. The car analogy is great because everyone uses cars, and it's similar in that a car is an incredibly powerful technology that can also do incredible harm. When you have that kind of technology, you want smart governance. Now, I completely agree that you can go overboard with governance and compliance and just gum up the gears, and that sucks too. That is super annoying, no doubt about it. That's the beauty of what I've been calling precision governance, or smart governance, which actually does both: it reduces your risks and increases the rate of innovation and adoption. That's absolutely the sweet spot to be in. One more thing I'll say related to that, and this is a really interesting point, is the concept of human in the loop. It's gonna be really interesting to see how this evolves over the next couple of years: what role do humans play, and what role does the AI play? One of the interesting pieces is this sense of accountability. Self-driving cars are a good analogy in that they're already safer than human drivers. But what's holding them back is what happens when an accident comes. We have a particular process that we're all familiar with: if someone rear-ends you, you both get out, you exchange information, you figure out who's at fault, the cops come. There is a process of accountability. With a self-driving car, we have not established, as a society and as a legal system, what that process looks like. Who's responsible? Who do you get in touch with? We've already gone down the road where Tesla says, "it's not our fault." Well, that doesn't give people confidence about what to do. So that's where it's gonna be really interesting to see, and this is why, when we talk about responsible AI governance frameworks, human in the loop, human review, human oversight, and human accountability are gonna continue to play a really important role for some time to come.

Emily Kahn:

The other thing is about governance being a barrier to entry. Guru, do you remember when we first started working together? We were actively evaluating and beginning to implement at the same time that we were building our governance. This doesn't have to be a thing where you've fully buttoned everything up before you begin to take steps, test something out, or start to talk to vendors. You answer each important question as you get there. What does a vendor need to answer as we're evaluating vendors? That was an early question we asked. What is our governance structure for evaluating vendors as we evaluate more and more of them? That was something we asked later in the process. So it's something you can do in parallel, especially because it's evolving. If you were to say, "I'm not doing this until it's totally perfect," you would never start, because it's going to keep changing and something about what you have will become obsolete. So how do you take the pieces you need and start to build the building blocks around them? It doesn't have to be the thing that stops you from getting going.

Guru Sethupathy:

There's again a nice analogy with the automobile. Insurance companies offer a device you can stick in your car, and it monitors how you're driving: your speed, the bumpiness, how fast you accelerate and decelerate, all these things, and that affects your insurance rates. There's a company like that that partners with us, and through the work that we do and the work that they do, they're able to see how risky a tool is and whether it's being monitored. They'll do some of their own testing, we'll do some of the testing, and based on all that information, they can insure that tool for you. So yes, I think this is absolutely gonna be a huge market in future years. AI governance is actually a team sport. In terms of the people involved, this is what we're starting to see at leading organizations: legal is involved, risk and compliance is involved, the CISO can get involved depending on the data being used, the business is involved because this directly affects their ability to implement the AI and they're responsible for these products, data science and engineering teams are involved, and HR is involved. So there are a lot of folks involved, and that's actually part of the complexity of governance; our tool helps simplify all of that as well. The other thing is which verticals we're seeing the patterns in: the high-risk domains. The places right now that are leading the way in thinking about AI governance are HR, financial services, insurance, healthcare, and government. These are the highest-risk domains in the economy, so companies in those sectors are absolutely thinking about this and leading the way. Just to quickly wrap up: I'll be here for a little bit longer, so if you have questions, I'm more than happy to chat. Both Emily's and my contact info, hopefully that's okay, Emily, are up here, and we're happy to follow up and answer questions that way. But great conversation, great questions. Emily, thank you for joining me. The message I would leave you with is that I really do believe that if you do governance smartly and efficiently, it's actually gonna help you both reduce risks and increase your innovation and adoption. That's the key. You can go overboard in either direction, but that's really the thing to figure out.

Emily Kahn:

FairNow has been a phenomenal partner to us. One of the biggest learnings for me and for Hueman at the beginning was that you don't have to know how to do this on your own. If you don't know, find somebody who does and ask them. That has served us really well. This is a nascent space, and we didn't have the internal expertise to build this out. Being okay with saying, "we're not going to have that in-house, but there is somebody who does know, and we can ask them all the questions we need to," helps ease the burden of starting. You don't have to know internally; you just have to find somebody who can help you.

Guru Sethupathy:

Thank you guys. Appreciate it. Thank you.

Lamees Abourahma:

I hope you enjoyed this episode of the Time to Hire podcast from the Recruitment Process Outsourcing Association. Give us a review wherever you listen to podcasts. And as always, stay connected, stay engaged, and stay informed about what's happening in the talent and recruiting world by tuning in to the RPOA, the place to go for RPO.