Recruitment

AI regulation in workplace recruitment and selection



On November 10, 2021, the New York City Council approved a measure, Int. 1894-2020A (the “Bill”), to regulate employers’ use of “automated employment decision tools,” joining Illinois and Maryland in legislating to reduce bias in hiring and promotions. The Bill, which awaits the signature of Mayor de Blasio, is due to take effect on January 1, 2023. If the mayor neither signs nor vetoes the Bill within thirty days of Council approval (i.e., by December 10), it will become law.

Automated employment decision tools

The Bill defines an “automated employment decision tool” as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that scores, classifies, or makes a recommendation and is used to substantially assist or replace an individual’s discretionary decision-making. The Bill exempts automated tools that do not materially impact individuals, such as a spam filter, firewall, calculator, spreadsheet, database, data set, or other compilation of data. It is unclear whether passive recruiting tools, such as LinkedIn’s suggested jobs, are covered by the Bill.

The Bill applies only to decisions screening candidates for employment, or employees for promotion, within New York City; it does not apply to other employment-related decisions.

Employer requirements under the AI Bill

The Bill prohibits employers and employment agencies from using an automated employment decision tool to screen candidates or employees for an employment decision unless: (1) the tool has undergone an independent bias audit no more than one year before its use; and (2) a summary of the audit results, along with the distribution date of the tool to which the audit applies, has been made publicly available on the website of the employer or employment agency. The Bill does not specify whether and when the bias audit must be updated, or whether a new bias audit must be obtained before each “use” by the employer.

The Bill defines an acceptable “bias audit” as an impartial evaluation by an independent auditor that includes testing the tool to assess its disparate impact on individuals in any category that employers must report in “Component 1” of the federal EEO-1 report; in other words, whether the tool would have a disparate impact based on race, ethnicity, or gender.
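The Bill does not prescribe an audit methodology. One common way auditors screen a tool’s outcomes for disparate impact is the EEOC “four-fifths rule” (29 CFR 1607.4(D)): a group whose selection rate is less than 80% of the highest group’s rate is flagged for further scrutiny. The sketch below illustrates that arithmetic only; the counts are hypothetical, and a compliant bias audit under the Bill would involve considerably more than this.

```python
# Illustrative only: the Bill does not mandate this methodology.
# This sketch applies the EEOC "four-fifths rule" as a first-pass
# screen for disparate impact in a tool's selection outcomes.

def impact_ratios(selected, total):
    """selected/total: dicts mapping group name -> counts.
    Returns each group's selection-rate ratio vs. the highest-rate group."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_disparate_impact(selected, total, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, r in impact_ratios(selected, total).items() if r < threshold]

# Hypothetical screening counts, not real data:
selected = {"group_a": 50, "group_b": 30}
total = {"group_a": 100, "group_b": 80}
# group_a rate = 0.50, group_b rate = 0.375; ratio 0.75 < 0.8
print(flag_disparate_impact(selected, total))  # → ['group_b']
```

A ratio below 0.8 does not by itself establish unlawful discrimination, and a ratio above it does not immunize a tool; it is simply a widely used triage threshold that an independent auditor might apply alongside statistical-significance testing.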

Additionally, New York City employers using automated employment decision tools must notify each candidate or employee who resides in the city of the following:

  • at least ten business days before such use, that the tool will be used to assess or evaluate the individual, and allow the candidate to request an alternative selection process or accommodation;

  • at least ten business days before such use, the job qualifications and characteristics that the tool will use to assess or evaluate the individual; and

  • if the information is not posted on the employer’s website, then within thirty days of a written request from a candidate or employee, the type of data collected for the tool and the source of that data.

While the Bill allows candidates to request an “alternative process or accommodation,” it does not specify what obligations, if any, an employer has upon receiving such a request.

Employers or employment agencies that fail to comply with any of the Bill’s requirements may face a fine of up to $500 for a first violation, enforced by the New York City Corporation Counsel or the Department of Consumer Affairs. Each subsequent violation carries a fine of $500 to $1,500.

In anticipation of the Bill’s likely effective date of January 1, 2023, employers using automated employment decision tools in their New York City hiring and promotion practices should take steps now to ensure compliance, including:

  • ensuring that any automated employment decision tool has undergone an independent “bias audit” no more than one year before its use, in particular to determine whether the tool would have a disparate impact based on race, ethnicity, or gender;

  • posting the audit results on the employer’s public website;

  • developing a method for notifying employees and candidates of the use of the tool and of the qualifications and characteristics that the tool will assess or evaluate; and

  • carefully designing job assessments to ensure that only key knowledge, skills, and abilities are measured, and considering potential reasons for any disparities.

Employers looking to implement AI recruiting and screening tools responsibly and ethically should not limit their efforts to complying with this Bill. Although not required by the New York City measure, employers should also consider the accessibility of these tools for people with disabilities and others. They should likewise consider whether and how to use passive recruiting tools, which may be unable to satisfy the Bill’s candidate-notification requirements.

Employers may also wish to consult with counsel before implementing any type of digital hiring or promotion platform, to ensure compliance with this law and other recently enacted employment laws relating to AI.

* Kamil Gajda, a Law Clerk – Admission Pending in the firm’s New York office, assisted in the preparation of this post.

© 2021 Epstein Becker & Green, P.C. All rights reserved. National Law Review, Volume XI, Number 322
