[EU] Proposed AI law could affect employers globally
15 Sep 2022

Companies with employees inside the European Union (EU) could be impacted by a landmark proposal to regulate the use of AI across the region, SHRM reports.

The EU Artificial Intelligence Act is currently making its way through the legislative process. It is expected to shape technology and standards worldwide.

The act comprises a broad set of rules seeking to regulate the use of AI across industries and social activities, according to Jean-François Gerard, a Brussels-based attorney for Freshfields Bruckhaus Deringer.

The AI regulation proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rule, Mr Gerard told SHRM. The proposal would classify AI applications as posing unacceptable, high, limited or minimal risk, according to a client briefing Mr Gerard helped to produce.

The proposed law - introduced by the European Commission (EC) in April 2021 - is reportedly expected to play a major part in shaping AI in the EU, serve as a model for regulatory authorities around the world and affect companies globally that have operations in Europe.

"The AI Act aims to ensure that AI systems placed and used in the European market are safe and respect existing legislation on fundamental rights and values of the European Union, among which is the General Data Protection Regulation [GDPR]," Johanne Boelhouwer - an attorney with Dentons in Amsterdam - said.

"In this way, the act facilitates an internal market for safe and reliable AI systems while preventing market fragmentation. AI needs to be legitimate, ethical and technically robust," she added.

Risk-Based Approach

In its risk-based approach, the AI Act distinguishes between allowing a light legal regime for AI applications with negligible risk and banning applications with unacceptable risk, Ms Boelhouwer said. "Between these extremes, stricter regulations apply as the risk increases. These range from nonbinding self-regulatory soft law impact assessments with codes of conduct to onerous externally audited compliance requirements."

High-risk AI systems would be allowed in the European market if they meet mandatory requirements and undergo a prior conformity assessment, according to Ms Boelhouwer. She told SHRM that these systems must meet rules related to data management, transparency, record keeping, human oversight, accuracy and security.

The proposal will become law once the Council of the EU - representing member states' governments - and the European Parliament agree on a common version of the text, Ms Boelhouwer said. Negotiations are reportedly expected to be complex, given the thousands of amendments proposed by political groups in the European Parliament.

Beyond its direct application in EU member countries, the act is expected to embed European norms and values into the architecture of AI technology, extending its influence well beyond Europe, according to Marc Elshof, an attorney with Dentons in Amsterdam.

Employment Uses: High Risk

AI systems used in employment contexts such as recruiting and performance evaluation would be considered "high risk" under the draft legislation and subject to heavy compliance requirements, according to legal experts.

"This will be new to the many employers who have been using so-called people analytics tools for years with limited compliance requirements" other than data privacy and, in some jurisdictions, the need to inform and consult with employee representatives, Mr Gerard said.

Employers would need to ensure that any AI system they consider using in these contexts meets all of the act's requirements, including that it has successfully undergone a conformity assessment and compliance certification, Ms Boelhouwer said. The conformity assessment would be an obligation for AI system providers, but employers should not use systems that have not passed it.

"For example, employers who consider giving employees a bad performance review on the basis of algorithms that can read people's feelings through text, voice tone, facial expressions and gestures can't simply implement such systems without ensuring compliance with the AI Act," she said.

Discriminatory Impact

Ms Boelhouwer noted that such AI systems may appreciably impact people's future career prospects and livelihoods.

"The act emphasizes that companies should be very mindful of biases in the AI systems throughout the recruitment process's evaluation, promotion or retention of those persons. These AI systems may lead to discrimination, for example, against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation," she said.

AI systems used to monitor employees' performance and behaviour may also affect their rights to data protection and privacy. Employers should continue to comply with the GDPR when the use of AI systems involves processing personal data, Ms Boelhouwer said.

"Recruitment algorithms, which have been widely used by large employers, especially in the tech industry, led to some heated discussion about algorithmic bias and discrimination," Mr Gerard said. He added that some have called for banning AI from recruitment.

The act would reportedly apply to AI users inside the EU as well as to those in other countries if the system's output, such as content, recommendations or decisions, affects activity in the EU.


Source: SHRM

(Quotes via original reporting)
