When it comes to hiring these days, AI isn’t simply a tool of The Man. AI is The Man.
So what’s the problem? After all, AI can speed up the process of screening, interviewing, and hiring new workers. It can more accurately and efficiently match an applicant’s skills to a job. But…
Yes, there is a But. And that But, according to many experts, is that AI-based tools can violate privacy rights and have been shown to introduce bias into the hiring process.
One answer to the problem is legislative control.
Several U.S. states, including California, Maryland, and Washington, have already enacted or are considering legislation to regulate the use of AI in hiring. Congress is considering the Algorithmic Accountability Act, which would require companies to perform impact assessments of automated decision-making systems. And the U.S. Equal Employment Opportunity Commission (EEOC) recently announced that it intends to increase oversight and scrutiny of AI tools used to screen and hire workers.
New York City’s Local Law 144 prohibits employers from using automated employment decision tools unless the organization conducts a bias audit and makes the resulting data publicly available. In addition, companies must disclose their use of AI to job candidates who live in New York City.
Experts say laws and regulations governing AI in HR are gaining momentum. And many industry watchers believe the new laws are necessary as the technology outpaces existing protections for underrepresented groups.
Now we just have to make sure that AI isn’t writing – worse, voting on – the new legislation.