Trade unions, campaigners and UK MPs are calling for stricter oversight of the use of artificial intelligence (AI) in the workplace as concerns about its effect on workers’ rights grow, The Guardian reports.
The Trades Union Congress (TUC) held a half-day conference this week to highlight the challenges of ensuring workers are treated fairly, amid the growing prevalence of what it calls “management by algorithm”.
“Making work more rewarding, making it more satisfying, and crucially making it safer and fairer: these are all the possibilities that AI offers us,” said Mary Towers - an employment lawyer who runs a TUC project on AI at work.
“But what we’re saying is, we’re at a really important juncture, where the technology is developing so rapidly, and what we have to ask ourselves is, what direction do we want that to take, and how can we ensure that everyone’s voice is heard?”
The TUC has turned its spotlight on the growing use of employee surveillance. Simon Thompson - the Royal Mail chief executive - recently acknowledged that some postal workers’ movements were being minutely tracked using handheld devices. Such data was used, for example, for performance management.
Speaking to MPs in February, however, Mr Thompson reportedly blamed rogue managers for breaching the company’s policy.
Striking staff at Amazon’s Coventry warehouse described a difficult regime of ever-changing targets that they believe to be set by AI. Amazon says these performance goals are “regularly evaluated and built on benchmarks based on actual attainable employee performance history”.
An operations manager with experience working at several retail distribution centres told academics compiling recent TUC research, “At some point, warehouses will be expecting the efficiency of robots from humans.”
Matt Buckley - chair of United Tech and Allied Workers, a branch of the Communication Workers union focusing on the sector - reportedly said his members had highlighted worries about being monitored at work.
“There’s really no regulation at all around employee surveillance as a concept at the moment; it’s really just up to companies,” he said. “Really, what we need is not a series of new laws, it’s a new body that can be flexible and iterative, and responsive to workers’ needs.”
Campaigners report that some of the most alarming cases are those where judgments about workers’ behaviour are effectively made by algorithms and involve little or no human oversight. These include so-called “robo-firings”.
A group of UK-based Uber drivers recently took the gig economy giant to the court of appeal in Amsterdam, successfully forcing it to reveal details about how decisions had been made about them.
The company is reportedly considering whether to appeal against the ruling at the Dutch supreme court. A spokesperson said, “Uber maintains the position that these decisions were based on human review and not on automated decision-making.”
Such cases have relied on the EU’s General Data Protection Regulation (GDPR), but campaigners caution that the UK government is poised to weaken those protections in forthcoming legislation.
They argue that the data protection and digital information bill - which had its second reading in the House of Commons on April 17 - will make it easier for firms to turn down workers’ requests for data held about them, and loosen the requirement to have a human involved in decision-making.
Cansu Safak - from the campaign group Worker Info Exchange, which supported the Uber case - said, “We’re essentially trying to bridge the gaps in employment law by using the GDPR. The reason we’re using the GDPR is because these workers have no other recourse. They have no other avenues of redress.”
Adam Cantwell-Corn - from Connected by Data, which calls for more public involvement in the way AI is implemented - said, “Most people’s experience of GDPR is annoying pop-ups, but if we understand it in the context of increasing datafication and artificial intelligence in the workplace in particular, it’s got really important provisions that the bill is weakening.”
Angela Rayner - Labour’s deputy leader - has the future of work in her portfolio. She said, “The powerful potential of data analysis and artificial intelligence is already transforming our economy. Rights at work must keep pace with these changes so that risks can be managed and harm prevented, while benefits are felt by workers.
“Labour will update employment rights and protections so they are fit for the modern economy.”
The UK government published a separate white paper on AI that set out a series of principles for the use of the technology, including the need for fairness, transparency and “explainability”.
It reportedly suggested that existing regulators, including the Health and Safety Executive and the Equality and Human Rights Commission, could take on the responsibility of ensuring that these principles were followed.
Mr Cantwell-Corn dismissed this approach, calling it, “basically just a bunch of intentions with no firepower behind it”.
And some Conservatives agree. The former cabinet minister David Davis, who has a long history of defending civil liberties, said, “The conventional regulatory approach will fail – because it will be civil servants thinking they know what’s going on, when they don’t.”
Mr Davis called for a “rapid royal commission” on the best way of overseeing the technology, with the key principle being “if you use an AI, you are responsible for the consequences”.
The TUC is reportedly calling for a “right to explainability”, so that workers are able to understand how technology is being used to make decisions about them, and a statutory duty for employers to consult before introducing new AI.
Source: The Guardian
(Links and quotes via original reporting)