AI in the Workplace: Ensuring New Technologies Work for Workers

Annette Bernhardt

“AI in the Workplace: Ensuring New Technologies Work for Workers,” Joint Informational Hearing, California State Assembly, Committee on Privacy and Consumer Protection and Committee on Labor and Employment. August 7, 2024, Sacramento, CA.

Prepared Testimony by Dr. Annette Bernhardt, Director, Technology and Work Program, UC Berkeley Labor Center


Good afternoon. My name is Annette Bernhardt and I direct the Technology and Work Program at the UC Berkeley Labor Center. Thank you, Chairs Bauer-Kahan and Ortega, for the opportunity to speak with you today, and for your leadership in California’s efforts to be a national model on AI policy.

For the past five years, my team and I have been conducting research and policy analysis on AI and other data-driven technologies, with the goal of ensuring that working families are able to thrive in the 21st century economy. I’d like to share several observations from our work.

To start, employers across the country are increasingly using data and algorithms in ways that stand to have profound consequences for wages, working conditions, race and gender equity, and worker power.

For example, hiring software can rank job applicants based on their tone of voice or choice of words during video interviews. Algorithms are being used to predict whether workers will quit or become pregnant or try to organize a union. Call center technologies are analyzing customer calls in real time and nudging workers to adjust their behavior. The retail industry is using “just in time” scheduling software, often wreaking havoc on workers’ lives with impossible combinations of shifts. And Amazon’s warehouse workers endure constant productivity tracking, pushing the pace of work to an alarming rate and putting their health at risk.

It is important to stress that workplace technologies range from simple digitization of payroll and HR systems, to monitoring and task direction software, to complex models predicting worker aptitudes and behavior, and now of course Generative AI. In my mind, the appropriate focus of regulation is this broad range of data-driven technologies, not just one particular type.

The core problem is that employers are introducing these (often untested) technologies with almost no regulation or oversight.

Currently in the U.S., workers largely do not have the right to know what data is being gathered on them, or whether it is being sold. They don’t have the right to review or correct their data. Employers are not required to notify workers about electronic monitoring or their use of algorithms to make consequential employment decisions – and workers do not have the right to challenge those decisions. Most important, there are virtually no meaningful guardrails on which technologies employers can use and how they use them in their workplaces.

As a result, there is broad consensus among legal scholars that existing employment and labor laws are inadequate to the task of protecting workers in the data-driven workplace. Last year, workers in California did gain some basic data rights under the CCPA. But those are no substitute for the broad protections that workers have, for example, in the 27 countries of the European Union.

This regulatory vacuum in the U.S. opens the door to a wide range of potential harms to workers.

We are only beginning to understand how digital technologies impact workers. But there is already evidence of a range of harms, including work intensification and speed-up; deskilling and automation; hazardous working conditions; discrimination; growth in contingent work; lower wages; loss of autonomy and privacy; being blamed and penalized for error-prone tech; and suppression of the right to organize.

Of particular concern is that workers of color, women, and immigrants face direct discrimination via biases in the technology itself, and are also more likely to work in occupations at the front lines of technological experimentation.

But these harms are not inevitable.

I believe that we currently stand at the juncture of two very different paths going forward:

  • A dystopian path where employers use technology to cut labor costs and control, exploit, and displace workers
  • A high-road path where employers use technology to augment and support workers, enabling them to gain new skills while also raising productivity

The policy choices we make today will determine which path we take. Specifically, we need a new set of 21st century labor standards establishing worker rights and employer responsibilities for the data-driven workplace – much like the foundational standards we laid down in the last century around wages and working conditions.

These standards should be established both in public policy and in collective bargaining agreements. You will hear more detail from the panelists that follow, but I’d like to lift up several principles that unions and other worker advocates have developed over the past several years.

  1. Transparency: Workers should have the right to know about all data-driven technologies being used in their workplaces, including AI.
  2. Guardrails: We need robust standards to ensure responsible use when employers monitor workers, make decisions based on algorithms, and deploy technologies that automate workers’ tasks. Use of unproven or questionable technologies should be prohibited.
  3. Humans in command: Workers should have the right to override the systems they work with, in order to prevent harm to themselves or the public.
  4. Right to organize: Workers should have the right to organize and bargain around new technologies.
  5. Education: Workers should have a fundamental right to education and training so they can adapt to changes in their jobs.
  6. Accountability: AI and other digital technologies should be tested for a broad set of harms before use, including discrimination, deskilling, health and safety harms, and job loss.
  7. Enforcement: Government agencies should play a central role in holding employers responsible for harms caused by digital technologies, and workers should be able to sue for violations.

A final point: labor standards by themselves are not enough. Ultimately, workers should fully participate in decisions over which technologies are developed, how they are used in the workplace, and how the resulting productivity gains are shared.

This participation need not and should not be anti-innovation, because workers have a wealth of knowledge and experience to bring to the table. Dehumanization and automation are not the only path. With workers at the table, AI and other digital technologies can be put in the service of creating a vibrant and productive economy built on living wage jobs, safe workplaces, and race and gender equity.

Thank you again for this opportunity, and I’m happy to take any questions.