Last updated 1/9/2024

Across the country, employers are increasingly using digital technologies in ways that stand to have profound consequences for wages, working conditions, race and gender equity, and worker power. But regulation of these new technologies (like worker surveillance, algorithmic management, and artificial intelligence) is still in its infancy.

In this guide, we give an overview of current U.S. public policy that regulates employers’ use of digital workplace technologies. Our goal is to cover all major bills and laws and identify core regulatory concepts; however, this is not a legislative tracker. Note that we organize the guide topically; in practice, several topics might be combined into one policy. This is a living document that we will update frequently. See also the list of further resources at the end.

Electronic Monitoring

There is growing legislative momentum to regulate electronic monitoring in the workplace. An early but weak effort was a 2021 New York State law requiring private-sector employers to give advance notice to employees of electronic monitoring, first upon hiring and then annually.

Currently, the trend is to include electronic monitoring as part of more comprehensive regulation. For example, the 2022 California Workplace Technology Accountability Act (WTAA) lays out a broad framework for regulating employers’ collection of worker data and use of electronic monitoring and algorithmic management, and also requires impact assessments. Although the bill didn’t pass, it inspired similar 2023 legislation in Massachusetts and Vermont, as well as parts of a current New York State bill. Sections of the WTAA also appear in the 2023 federal Stop Spying Bosses Act (and its companion bill, see below). While none of these bills have been enacted into law so far, we support this type of comprehensive approach as most protective of workers.

Another approach is to regulate electronic monitoring in what are known as “just cause” bills, introduced in recent years in Illinois and New York City. These bills prohibit the unjust discharge of workers, establish a framework for justified discipline and firing, and limit employers’ reliance on data from electronic monitoring in making those decisions.

Key concepts appearing in one or more policies:

  • Employers are only allowed to use electronic monitoring for specific purposes, in a manner that affects the smallest number of workers.
  • Employers must give prior notice of any electronic monitoring.
  • Employers are prohibited from using electronic monitoring that results in a violation of labor and employment laws; records workers off-duty or in sensitive areas; uses high-risk technologies, such as facial recognition; or identifies workers exercising their rights under employment and labor law.
  • Employers are prohibited from relying exclusively on data from electronic monitoring when making decisions like hiring, firing, discipline, or promotion. Instead, the employer must independently corroborate the data, and provide the worker with full documentation.
  • Employers must conduct impact assessments of electronic monitoring systems, testing for bias and other harms to workers, prior to use.
  • Productivity monitoring systems in particular must be documented and reviewed by regulatory agencies overseeing workplace health and safety before use.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Algorithms

Many of the comprehensive policy frameworks covered above also address algorithmic management. The 2022 California Workplace Technology Accountability Act sets guardrails on employers’ use of algorithms to manage workers, a framework replicated by the 2023 bills in Vermont and Massachusetts and, at the federal level, by the 2023 No Robot Bosses Act (the companion bill to the Stop Spying Bosses Act). A current New York State bill incorporates parts of this framework as well, alongside the civil rights standards for employment selection procedures discussed below. All of these bills include provisions that prohibit employers from relying exclusively on automated systems to manage workers. As with electronic monitoring, none of these bills has been enacted so far, but we consider this type of comprehensive approach the most protective of workers.

A less developed but important policy area focuses on defining rules of the road for how humans work alongside algorithms, usually in specific industries. For example, a 2022 California bill allows health care workers to override a hospital’s care-directing algorithm if doing so is in the best interest of the patient, and provides some anti-retaliation protections for workers. A 2023 California bill requires a trained human safety operator to be present in large autonomous vehicles. In a similar vein, bills introduced in 2023 in Massachusetts and Texas require that AI systems used in the provision of mental health services be continuously monitored by a licensed mental health professional.

Key concepts appearing in one or more policies:

  • Employers must give workers detailed prior notice before using any algorithmic decision systems. The employer must also share any relevant impact assessments.
  • Employers are prohibited from using algorithms that result in a violation of labor and employment laws; profile or make predictions about a worker’s behavior that are unrelated to their job responsibilities; identify workers exercising their legal rights under employment and labor law; or use high-risk technology, such as facial or emotion recognition technologies.
  • Employers are prohibited from relying exclusively on outputs from algorithms when making decisions like hiring, firing, discipline, or promotion. Instead, the employer must independently corroborate the algorithms’ output via meaningful human oversight, and provide the worker with full documentation.
  • Workers should be able to direct and override automated technologies in the interest of the safety of workers and the public, without fear of retaliation.
  • Employers must conduct impact assessments of algorithmic decision systems prior to use, including for bias and discriminatory effects.
  • Productivity algorithms must be documented and reviewed by regulatory agencies overseeing workplace health and safety before implementation.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Data Privacy

Unlike many other countries, the U.S. currently does not have a federal data privacy law. Instead, 13 states have passed their own data privacy laws in recent years, focused on establishing consumers’ rights over the data gathered by platform companies and other large businesses. California is currently the only state whose data privacy law gives workers the same rights as consumers. In 2023, several more states passed privacy laws that explicitly exclude workers, suggesting that the current trend is against extending data privacy rights to the workplace.

A better model is to integrate workers’ data privacy rights into broader bills focused on workplace technologies. For example, the 2022 California Workplace Technology Accountability Act provides a robust set of rights and protections around all worker data collected by employers. We support this approach; the experience to date is that incorporating workers into consumer-focused data privacy laws is an awkward fit, often conferring fewer or ill-fitting rights on workers.

Key concepts appearing in one or more policies:

  • Employers can only collect worker data if doing so is necessary to achieve specific purposes (such as enabling workers to do their jobs, protecting workers’ health, or for the administration of wages and benefits).
  • Employers must give workers detailed prior notice about what data they intend to collect on workers, what rights workers have, and, importantly, how the data will be used to make employment decisions.
  • Workers have the right to access and correct their data.
  • Employers cannot sell or license worker data to third parties.
  • Workers have the right to limit employers’ use of their sensitive data.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Bias and Discrimination

Unlike other areas covered in this guide, where there are few if any existing worker protections, anti-discrimination laws at the federal and state level have long protected workers from discrimination in employment. However, emerging technologies have posed new questions about how these laws apply and how to prove violations – even as the potential for discriminatory outcomes from AI-based technologies in the workplace is clear (see the EEOC’s initial guidance to employers under the ADA and Title VII, as well as this testimony presented at the January 2023 EEOC meeting).

One policy approach – and our recommended model – is to focus specifically on discrimination in hiring and other employment-related technologies (rather than on discrimination in multiple areas). Here, the best model is the 2022 Civil Rights Standards for 21st Century Employment Selection Procedures, parts of which have been incorporated into a current New York State bill as well as the federal No Robot Bosses Act. Unfortunately, weaker laws and bills are starting to populate this area and could undermine the goal of establishing robust anti-discrimination protections in worker assessment technologies, including a New York City law that has been roundly criticized. Bills based in part on the New York City law are currently pending in New Jersey, Pennsylvania, and New York State.

A related approach is to mandate that automated decision-making systems in general be tested for discriminatory impacts before use. One example is the federal Algorithmic Accountability Act of 2023, which requires large businesses to conduct impact assessments when using automated systems for critical decisions in areas such as housing, education, employment, health care, and finance. The aforementioned New York State bill blends this approach with several of the Civil Rights Standards’ requirements. But here again, there is the danger of weaker bills being introduced, including recent bills in California.

Since existing anti-discrimination laws theoretically cover the use of digital workplace technologies, regulatory changes might suffice to address some of the discrimination issues such technologies raise. But legislation will ultimately be needed to ensure that vendors have an incentive to cooperate with employers and regulatory agencies to test their tools for bias and job-relatedness, as well as to ensure workers receive adequate disclosures regarding when and how they will be assessed.

Key concepts appearing in one or more policies:

  • Businesses, including data brokers, are prohibited from using discriminatory algorithms in employment (and other critical sectors like housing and education).
  • Employers and vendors must conduct bias audits on automated decision tools in the workplace before using them, and annually thereafter (see the illustrative sketch after this list).
  • Employers must notify job candidates and employees about the use of automated tools in hiring assessments or evaluations for hire or promotion.
  • Worker selection procedures that rely on facial recognition, emotion recognition, and other suspect technologies should not be allowed.
  • Worker assessment tools must measure the ability to perform essential job functions rather than attributes tied to protected characteristics.
  • Workers should have the right to opt out of being assessed by an automated selection procedure and instead be evaluated by a human or other alternative means.
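
To make the bias-audit concept concrete, here is a minimal sketch of the selection-rate and impact-ratio arithmetic that such audits typically involve. Everything in it is an illustrative assumption: the dataset, the group labels, and the 0.8 threshold (which echoes the EEOC’s traditional four-fifths rule). The actual methodology an audit must follow is defined by each law or regulation, not by this sketch.

```python
# A minimal, illustrative impact-ratio calculation -- the core arithmetic
# behind many bias-audit requirements. All names and numbers here are
# hypothetical; actual audit methodologies are defined by each law.

from collections import defaultdict

# Hypothetical (group, selected) outcomes from an automated screening tool.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per group: the share of candidates the tool advanced.
totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += int(was_selected)

rates = {group: selected[group] / totals[group] for group in totals}
highest_rate = max(rates.values())

# Impact ratio: each group's selection rate relative to the highest rate.
# A ratio below 0.8 is the traditional "four-fifths rule" flag for review.
for group, rate in sorted(rates.items()):
    ratio = rate / highest_rate
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({status})")
```

Real audit requirements are more detailed (for example, they may cover intersectional categories or score-based tools), but the core test is the same: compare each group’s selection rate against that of the most-selected group.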

Biometric Data

Biometric data (typically fingerprints, voiceprints, retina scans, hand scans, or face geometry, but not biological data collected for health or medical purposes) is one of the more developed regulatory spaces of workplace technology in the U.S. The standard approach is to enact comprehensive biometric privacy legislation that covers both consumers and workers and establishes rules for the collection, storage, disclosure, and sale of sensitive data by businesses. So far, three states have broad biometric privacy laws in place: Illinois (2008), Texas (2009), and Washington (2017). In particular, the landmark Illinois Biometric Information Privacy Act (BIPA) is considered the toughest law in the U.S. and has resulted in numerous employment class action lawsuits. Bills modeled on BIPA have been introduced in more states and at the federal level (2020).

There is also a growing body of more targeted legislation aimed at regulating the use of biometrics in hiring specifically. For example, Maryland enacted a law in 2020 that prohibits employers from using facial expression analysis during job video interviews, unless the applicant consents. In 2019, Illinois passed the AI Video Interview Act (amended in 2021), which instituted disclosure, consent, and privacy guidelines to ensure job candidates are informed that their interview videos will be analyzed by AI-driven technology (which can include, for example, emotion or affect recognition).

Key concepts appearing in one or more policies:

  • Employers must obtain workers’ consent to collect biometric data and to use AI-enabled assessments, such as facial recognition, during video interviews for employment.
  • Employers must inform workers in writing about what biometric data are being collected or stored, and the specific purpose and length of time for that collection and storage.
  • Employers must inform workers about the use of any AI assessments in video interviews, explain how the AI works, and identify the “general types of characteristics” used to evaluate applicants.
  • Employers must have a written retention and destruction policy for biometric data.
  • Employers cannot sell or otherwise profit from workers’ biometric data.
  • Employers must safeguard biometric data in the same (or a more protective) way that they protect other confidential information, using a reasonable standard of care.
  • Employers cannot disclose or redisclose workers’ biometric data, unless workers consent to the disclosure or under other limited circumstances.
  • Workers have a private right of action to pursue relief for violations.

Public Sector

To date, we have not seen significant legislative activity focused on technology and public sector workers. However, there is growing activity, both legislative and administrative, to ensure accountability and responsibility in the public sector’s use of digital technology in providing services to the public. For example, states are establishing AI oversight agencies, study commissions, tech audits, and various other initiatives to create an accountability infrastructure. (We’re not able to summarize all of this activity here, but see the report Legislation Related to Artificial Intelligence for more information.)

We view this as an important policy area for public sector unions and worker advocates more generally. For example, the Biden administration and two states (California and Pennsylvania) recently issued wide-ranging executive orders directing the development and adoption of responsible AI standards for government procurement, funding, and use. There are also legislative versions, such as bills in Washington, Maryland, and Connecticut. Less ambitiously, some states (like California and Vermont) have passed laws requiring comprehensive audits of their state’s use, development, or procurement of high-risk AI systems. These represent important opportunities to ensure that public sector workers are at the table in decision making around technology adoption and safety.

Automation

To date, we have not identified strong legislative models addressing the automation of tasks or jobs by new technologies; this will be a vitally important area for public policy innovation going forward.

In our view, older displaced-worker policies such as Trade Adjustment Assistance (TAA) and the Worker Adjustment and Retraining Notification (WARN) Act are not a great fit for emerging digital technologies, where the pace of change is typically incremental and occupations evolve gradually through partial task automation and augmentation (rather than the mass plant layoffs that drove the older policies).

That said, one promising recent model is the comprehensive Workers’ Right to Training Act of 2019. This bill establishes strong requirements for employers to provide on-the-job training to workers whose jobs are in danger of being changed (e.g., in their pay, working conditions, or skill requirements) or replaced due to new technologies. The bill also requires employers to provide advance notice of the introduction of automating technologies, and six months’ severance to all workers who lose their jobs as a result. Finally, workers who lose their jobs must be given hiring priority for any new or open positions.

Currently, however, the more typical policy approach is modest bills that mandate studies of the impact of AI on the state economy and workforce (e.g., New Jersey, Massachusetts, and New York). A variation on this theme is a 2023 New Jersey bill that requires applicants for unemployment benefits to indicate on their application whether their job loss was due to automation or other technological advances, and requires the state labor agency to track and maintain this data. To the extent that legislators are taking up training, they are largely focused on programs that would train workers on skills related to the use, operation, and creation of artificial intelligence systems (e.g., Texas and this federal bill).

Warehouse Quotas

A number of states have enacted laws regulating productivity quotas in large warehouse distribution centers, following California’s first-in-the-nation 2021 law. These bills and laws address warehouse employers’ use of opaque electronic surveillance and productivity monitoring systems. The goal is to protect employees from the harms of data-driven management systems that use employee productivity data to generate frequently shifting performance standards based on ranking employees against one another in real time.

These policies establish employee rights and protections, such as prohibiting employers from disciplining employees based on undisclosed productivity quotas, and granting employees access to work speed data (both their own and aggregate data for the worksite) to ensure fairness in quota-based discipline.

Bills or laws that roughly replicate California’s have been introduced or enacted in Montana (bill), South Dakota (bill), Nebraska (bill), Washington (law), Minnesota (law), Connecticut (bill), Illinois (bill), and New York State (law).

Collective Bargaining

To date, we have seen little legislative activity focused on collective bargaining with regard to new technologies, though we expect more in 2024. Exceptions include the recently introduced federal Protect Working Musicians Act of 2023, which would exempt musicians from antitrust restrictions, allowing them to bargain collectively over music licensing terms with dominant online distribution platforms or companies developing or deploying generative AI. In California, State Assemblymember Ash Kalra recently introduced a bill (AB 459) allowing actors and artists to nullify vague contract provisions that enable studios and other companies to use AI to digitally clone their voices, faces, and bodies. An older bill is the federal Workers’ Right to Training Act of 2019, which includes requirements that employers bargain directly with workers on how best to implement new technologies that are likely to change or eliminate workers’ jobs. None of these bills has passed.

Several regulatory actions are important to note. For private sector unions, the NLRB General Counsel issued a memo on surveillance technology in 2022, underscoring that Section 7 of the NLRA protects workers from surveillance and algorithmic technologies intended to interfere with the right to organize and unionize. The NLRB also announced memoranda of understanding with the CFPB (to protect workers in both labor and financial markets) and the FTC (to protect workers against the impacts of algorithmic decision-making and unfair labor practices, among other issues).

For public sector unions, states that have adopted collective bargaining rights similar to or broader than those under the NLRA may choose to adopt the NLRB’s interpretation of the Section 7 protections outlined above. (See, for example, this decision by the California PERB.) Of course, state legislatures are free to amend existing law to strengthen collective bargaining rights for public sector workers. One example is California, where the governor recently approved a bill (AB 96) that requires public transit employers to notify affected unions of plans to procure autonomous transit vehicle technology that would automate jobs or job functions, and allows collective bargaining to begin within 30 days of such notice.

Further Resources

This guide is a living document that we are updating frequently; please send us updates and corrections. Many thanks to Gabrielle Rejouis, Matt Scherer, Hayley Tsukayama, Irene Tung, and Ken Wang for their help and feedback – however, all errors and snafus are our own.

For errors or to flag policies we should include, please email Annette Bernhardt, UC Berkeley Labor Center, annette.bernhardt@berkeley.edu.

For technical assistance on these policy models, reach out to:

  • Annette Bernhardt, UC Berkeley Labor Center, bernhardt@berkeley.edu (most topics, comprehensive bills)
  • Matt Scherer, Center for Democracy and Technology, mscherer@cdt.org (most topics, especially discrimination and surveillance)
  • Tim Shadix, Warehouse Workers Resource Center, tshadix@warehouseworkers.org (warehouse bills)
  • Hayley Tsukayama, Electronic Frontier Foundation, hayleyt@eff.org (biometric bills)
  • Irene Tung, National Employment Law Project, ITung@nelp.org (just cause bills, warehouse bills)
  • Ken Wang, California Employment Lawyers Association, ken@cela.org (most topics, especially discrimination)