
Last updated 11/12/2024

Across the country, employers are increasingly using AI and other digital technologies in ways that stand to have profound consequences for wages, working conditions, employment, race and gender equity, and worker power. Regulation of these new workplace technologies is still in its infancy, though the 2024 legislative session saw a significant increase in activity, reflected in this update.

In this guide, we give an overview of current U.S. public policy that regulates employers’ use of digital workplace technologies. Our goal is to cover major bills and laws and identify core regulatory concepts. However, this is not a legislative tracker. Note that while we organize the guide topically, in practice several issue areas might be addressed in one policy. This is a living document that we will update periodically. See also the list of further resources at the end.

Electronic Monitoring

There is growing legislative momentum to regulate employers’ use of electronic monitoring in the workplace. A weak approach is to require only that employers give employees advance notice of electronic monitoring, as in this 2021 New York State law and this 2024 California bill.

A stronger approach is to include electronic monitoring as part of more comprehensive regulation of employers’ use of digital technologies. For example, California’s 2022 Workplace Technology Accountability Act and New York State’s 2024 Bossware and Oppressive Technologies Act both lay out a broad framework for regulating employers’ use of electronic monitoring and automated decision systems, and also require employers to conduct impact assessments. Similar concepts appeared in 2023 bills in Massachusetts and Vermont, as well as the 2023 federal Stop Spying Bosses Act (and its companion bill, No Robot Bosses Act, see algorithmic management section below). While none of these bills have been enacted into law so far, we support this type of comprehensive approach as most protective of workers.

Another approach is to regulate electronic monitoring through what are known as “just cause” bills, introduced in recent years in Illinois and New York City. These bills prohibit the unjust discharge of workers, establish a framework for justified discipline and firing, and limit employers’ reliance on data from electronic monitoring in making those decisions. In addition, a bill introduced at the federal level would regulate the use of electronic monitoring in the warehouse industry specifically, which we discuss in greater detail in the warehouse quotas section below.

Finally, a 2024 circular released by the Consumer Financial Protection Bureau (CFPB) clarifies that employers must obtain consent, provide transparency and disclosure, and allow workers to dispute inaccuracies when using data gathered from third parties to make employment decisions about them.

Key concepts appearing in one or more policies:

  • Employers must give detailed prior notice of any electronic monitoring (particularly if data collected will be used to make an employment-related decision).
  • Employers are only allowed to use electronic monitoring for a limited set of purposes, gathering the least amount of data, and affecting the smallest number of workers.
  • Employers are prohibited from using electronic monitoring that results in a violation of labor and employment laws; records workers off-duty or in sensitive areas; uses high-risk technologies, such as facial recognition; or identifies workers exercising their rights under employment and labor law.
  • Employers are prohibited from relying primarily or exclusively on data from electronic monitoring when making decisions like hiring, firing, discipline, or promotion. Instead, the employer must independently corroborate the data and provide the worker with full documentation, including the actual data used.
  • Employers must conduct impact assessments of electronic monitoring systems, testing for bias and other harms to workers, prior to use.
  • Employers who electronically monitor workers to assess their performance (through productivity or quota systems) are required to disclose performance standards to workers and apply these standards consistently across workers.
  • Productivity monitoring and quota-setting systems must be documented and reviewed by regulatory agencies overseeing workplace health and safety before use.
  • Workers have the right to access their data collected through electronic monitoring systems.
  • Workers have the right to correct any data collected about them and employers must adjust any employment-related decisions that were based, partially or solely, on inaccurate data.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Algorithmic Management

Many of the comprehensive policy frameworks discussed above also address algorithmic management—that is, employers’ use of digital technologies to make a wide range of decisions about workers, including their wages and working conditions, their job tasks and responsibilities, and other terms and conditions of employment (some of the bills cited below may use terms such as “automated decision systems”).

The 2022 California Workplace Technology Accountability Act sets guardrails on employers’ use of algorithms to manage workers—a framework replicated by 2023 bills in Vermont and Massachusetts. A similar approach appears at the federal level in the 2023 No Robot Bosses Act, which serves as a companion bill to the Stop Spying Bosses Act. New York State’s 2024 Bossware and Oppressive Technologies Act also incorporates a closely related civil rights framework on employment selection procedures. All of these bills include provisions that prohibit employers from relying exclusively or primarily on automated systems to manage workers. While none of these bills have been enacted into law so far, we support this type of comprehensive approach as most protective of workers.

Additionally, the 2024 CFPB circular discussed above has implications for algorithmic management as well—clarifying that employers must obtain consent, provide transparency and disclosure, and allow workers to dispute inaccuracies when using data gathered from third parties to make employment decisions about them.

Key concepts appearing in one or more policies:

  • Employers must give workers detailed prior notice before using any algorithmic management system (including the results of any relevant impact assessments).
  • Employers are prohibited from using algorithmic management systems that: result in a violation of labor and employment laws; profile or make predictions about a worker’s behavior that are unrelated to their job responsibilities; identify workers exercising their legal rights under employment and labor law; or use high-risk technologies, such as facial or emotion recognition.
  • Employers are prohibited from relying primarily or exclusively on outputs from algorithmic management systems when making decisions like hiring, firing, discipline, or promotion. Instead, the employer must independently corroborate the algorithms’ output via meaningful human oversight, and provide the worker with full documentation, including the data used and output generated.
  • Employers must give workers notice that an algorithmic management system was used to make an employment decision about them. Workers must be given the opportunity to correct any data that went into the decision and to appeal the decision.
  • Employers must conduct impact assessments of algorithmic management systems prior to use, testing for harms to economic security, race and gender equity, privacy, workplace health and safety, mental health, right to organize, labor rights, and other terms and conditions of work.
  • Productivity management systems must be documented and reviewed by regulatory agencies overseeing workplace health and safety before implementation.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Data Privacy

The U.S. currently does not have a federal data privacy law. Instead, 20 states have passed their own data privacy laws, focused on establishing rights for consumers about the data gathered by platform companies and other large businesses. California is currently the only state whose data privacy law, the 2018 California Consumer Privacy Act (CCPA), gives workers the same rights as consumers. Other states have continued the trend of passing data privacy laws that explicitly exclude workers. (A sector-specific exception is a 2024 bill introduced in Maryland that seeks to protect the privacy of student and employee data in school settings.)

When possible, a better model is to integrate workers’ data privacy rights into broader bills focused on digital workplace technologies. For example, the 2022 California Workplace Technology Accountability Act provides a robust set of rights and protections around all worker data that is collected by employers. We support this approach, since incorporating workers into consumer-focused data privacy laws can be an awkward fit.

A subset of data privacy legislation governs biometric data specifically, which warrants an additional layer of protection. Biometric data typically includes fingerprints, voiceprints, retina scans, hand scans, or face geometry (note that this is distinct from biological data collected for health or medical purposes).

So far, three states have broad biometric privacy laws in place, and importantly, all of them cover workers: Illinois (2008), Texas (2009), and Washington (2017). In particular, the landmark Illinois Biometric Information Privacy Act (BIPA) is considered the toughest biometric law in the U.S. and has resulted in numerous employment class action lawsuits. Bills modeled on BIPA have been introduced in other states and at the federal level (2020). In 2024, Colorado passed a law amending the Colorado Privacy Act (CPA), placing some limits on employers’ ability to make the collection of biometric data a condition of employment.

There is also a growing body of legislation regulating the use of biometric data specifically in hiring. These bills require employers to obtain consent from job applicants for the use of AI (e.g., the Illinois AI Video Interview Act, passed in 2020 and amended in 2021, and a New Jersey bill introduced in 2024) or facial recognition technologies (e.g., a Maryland law passed in 2020) in the hiring process.

Key concepts appearing in one or more policies: 

  • Employers must give workers detailed prior notice of any data they intend to collect about them (particularly if data will be used to make an employment-related decision).
  • Employers can only collect worker data for a limited set of purposes (such as enabling workers to do their jobs, protecting workers’ health, or for the administration of wages and benefits).
  • Workers have the right to access and correct their data.
  • Employers cannot sell or license worker data (including biometric data) to third parties, and can only disclose biometric data in limited circumstances.
  • Workers have the right to limit employers’ use of their sensitive data (such as health-related data, data related to protected characteristics such as race and ethnicity, and genetic data).
  • Employers must obtain workers’ consent to collect and process workers’ biometric data.
  • In the hiring context, employers must give a detailed explanation, provide notice, and obtain consent when using AI-enabled assessments, such as facial recognition, during video interviews for employment.
  • Employers must not make the collection and processing of biometric data a condition of employment unless strictly necessary, for a specified and limited set of goals.
  • Employers must take steps to protect all worker data and must have a written retention and destruction policy. Biometric data requires greater protections.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Automation and Job Loss

Automation was the focus of many bills introduced in the 2024 legislative session. The bills differ, however, in their approach. We have grouped them into three broad categories:

1. Defining the rules of the road for how humans work alongside digital technologies.

In many cases, digital technologies replace specific tasks rather than entire jobs. Implementing these technologies in work settings can have negative impacts such as deskilling workers, eroding worker autonomy, and creating harmful impacts for the public.

In this context, an important policy approach is ensuring that workers are in command of AI and other digital technologies. A prime example is a 2022 California bill that allows health care workers to override a hospital’s care-directing algorithm if doing so is in the best interest of the patient. It also provides some anti-retaliation protections for workers (see also similar bills introduced in Illinois and Maine in 2023). Another group of bills calls for human supervision over digital technologies and their outputs. We see this concept in multiple sectors: large autonomous vehicles (like this 2024 California bill), mental health services (like this 2023 Massachusetts bill), publishing (like this 2024 New York bill), legal settings (like this 2024 California bill), and health care (like this 2024 California law). Another bill introduced in California (2024) creates safety guidelines around autonomous vehicles, including mandating training for first responders.

Less ambitious bills from 2024 instead focus on disclosure requirements, such as when generative AI is used to generate patient communications (passed in California) or interact with a consumer (passed in Utah), or when AI is used by mental health care professionals (introduced in Illinois).

2. Protecting workers from automation and job loss.

A second policy strategy is to prohibit employers from displacing workers with new digital technologies. Important examples from recent legislation include bills placing limits on automated checkout counters in retail stores (a 2024 California bill) and bills prohibiting AI from replacing community college faculty (a 2024 California law), eliminating core job functions of call center workers (a 2024 California bill), replacing or supplementing a teacher’s role in the classroom (a 2024 Texas bill), or replacing care functions in health care settings (a 2023 Maine bill). Several bills from 2024 protect workers by mandating that specific jobs be performed by humans, not artificial intelligence: a federal bill does this for musicians, and a California law does so for community college faculty.

A related policy model is to ensure that workers have control over their work product and are not forced to train AI systems that could replace them. Most bills in this category focus on creative workers (e.g., artists, actors, musicians, and journalists). For example, a federal bill introduced in 2024 would prohibit developers from using a creative worker’s digital product (text, image, audio, or video) that has provenance information attached to it without consent (COPIED Act). A related strategy is to prohibit the use of a worker’s digital replica without first obtaining consent from the worker, as in this 2024 Washington bill. Finally, several states passed laws in 2024 that strictly regulate contracts governing digital replicas (see California and Illinois).

A final policy strategy attempts to change the incentives and economic context for employers’ decisions about technology adoption. For example, recent bills introduced in New York would tax (2024) or withhold subsidies (2023) from companies that use technology to displace workers. Two bills introduced in New Jersey in 2024 would offer tax credits to companies that hire workers displaced due to automation or participate in technology-related apprenticeship programs for workers.

3. Education and training 

In our view, older policies tackling job displacement, such as the Trade Adjustment Assistance (TAA) and Worker Adjustment and Retraining Notification (WARN) acts, are often not a great fit for emerging digital technologies. Rather than the mass plant layoffs that drove these policies, the more typical pattern we see today is incremental, with occupations evolving gradually over time through partial task automation and augmentation.

That said, education and training in the context of a rapidly changing workplace is clearly a key area requiring policy innovation. One important model here is the comprehensive federal Workers’ Right to Training Act of 2019. This bill establishes strong requirements for employers to provide on-the-job training and offer alternative employment to workers whose jobs are in danger of being changed (in their pay, working conditions, or skill requirements) or replaced due to new technologies. A more recent New Jersey bill (2024) would require employers to provide notice, retraining, and severance pay to workers who experience technology-related job loss. Other bills direct government agencies rather than employers to provide training to workers in industries impacted by technological change, such as two bills introduced at the federal level (Investing in Tomorrow’s Workforce Act of 2023 and the Workforce of the Future Act of 2024).

Key Concepts 

  • Employers should be prohibited from using digital technologies to automate or eliminate core job functions.
  • Employers must conduct an impact assessment prior to deploying any digital technologies that have the potential to automate or eliminate core job functions.
  • Employers must give notice, retraining, and compensation to workers when deploying digital technologies that will change or displace workers, and give priority to current workers when filling new positions.
  • Employers must consult workers when implementing a consequential workplace technology and share the results of impact assessments with workers.
  • Workers must have the right to override decisions made by digital technologies, and have the resources and expertise to do so.
  • Businesses that use digital technologies that emulate human interactions (such as generative AI) must clearly disclose this fact.
  • Businesses cannot train an AI model with a worker’s work product (e.g., digital likeness, expertise, voice, writing, art, music), or use a worker’s digital replica without receiving express and informed consent and potentially giving the creator credit and compensation.
  • Government taxation and economic development policies should be leveraged to disincentivize job automation.

Bias and Discrimination

Unlike other areas covered in this guide, where there are few if any existing worker protections, anti-discrimination laws at the federal and state levels have long protected workers from discrimination in employment. However, emerging technologies have posed new questions about how these laws apply, how to identify companies that may be violating the law, and how to prove such violations—even as the potential for discriminatory outcomes from AI and other digital technologies in the workplace is clear (see the Equal Employment Opportunity Commission’s [EEOC] initial guidance to employers under the Americans with Disabilities Act (ADA) and Title VII, as well as this testimony presented at the January 2023 EEOC meeting).

The topic of AI and discrimination has been the focus of a significant amount of legislative activity in 2024. The main policy model has been to introduce broad laws that prohibit using digital technologies to discriminate against protected classes in a wide range of settings, including employment, housing, education, and insurance. Though such bills have appeared in states across the country, many have been critiqued for offering only weak protections for workers and consumers; see the in-depth analyses provided in recent reports by the Center for Democracy & Technology (CDT) and the Future of Privacy Forum (FPF). Only one state, Colorado, ultimately passed a broad anti-discrimination AI law in 2024, establishing basic guardrails for both developers and deployers of “high risk” artificial intelligence systems.

Another policy approach is to focus specifically on discrimination in hiring and other employment-related technologies, rather than on discrimination in multiple areas. Here, the best model is the 2022 Civil Rights Standards for 21st Century Employment Selection Procedures, parts of which have been incorporated into New York State’s 2024 bill, the Bossware and Oppressive Technologies Act, as well as the federal No Robot Bosses Act. Unfortunately, weaker laws and bills could undermine the goal of establishing robust anti-discrimination protections in worker assessment technologies, including a 2021 New York City law that has been roundly criticized and that, according to a 2024 study, companies are largely ignoring.

Key concepts appearing in one or more policies:

  • Businesses, including data brokers, are prohibited from using discriminatory digital technologies in employment (and other critical sectors like housing, lending, and education).
  • Employers and vendors must conduct bias audits on digital technologies in the workplace before using them and annually thereafter.
  • Employers must disclose details of any digital technology systems in use (e.g., training data, model specifications, safety testing conducted).
  • Both developers and deployers of digital technologies must report any bias or discrimination incidents or discovered risks.
  • Employers must notify job candidates and employees about the use of digital technologies in hiring assessments or evaluations for hire or promotion.
  • Employers must not use worker selection procedures that rely on facial recognition, emotion recognition, or other suspect technologies.
  • Worker assessment tools must measure the ability to perform essential job functions rather than attributes tied to protected characteristics.
  • Workers should have the right to opt out of being assessed by an automated selection procedure and instead be evaluated by a human or other alternative means.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Public Sector

Federal and state governments have a significant imprint on the economy as employers, as funders and administrators of benefits, and through their power to shape standards through their procurement of goods and services. The regulation of AI use by government agencies is therefore an important policy area for worker advocates. There is growing activity, both legislative and administrative, to ensure accountability and responsibility in the public sector’s use of digital technology when providing services to the public; we are not able to summarize all of this activity here, but see this legislative tracker for examples.

However, we have seen less attention paid to the impact of digital technologies on public sector workers. On the administrative front, the Biden administration and a handful of states including California, Pennsylvania, and Washington issued wide-ranging executive orders directing the development and adoption of responsible AI standards for government procurement, funding, and use. Only President Biden’s executive order included a specific emphasis on workforce impacts; it also directed the U.S. Department of Labor (DOL) to issue responsible use standards for employers and developers, which it did in 2024. The other executive orders pay some limited attention to public sector worker impacts, but it is unclear whether these will remain central as the orders are implemented.

On the legislative front, only a handful of bills focus on public sector workers specifically. A bill introduced in California (2024) prohibits government agencies from contracting with call centers that replace workers with AI. Other bills (such as in New York in 2024 and in North Carolina in 2023) focus on investigating the impacts of AI on the state workforce. Several policies also require agencies to train government workers on how to use new technologies, such as a federal bill (2021) and a California law (2024) focusing on generative AI.

Finally, several bills create varying levels of human review requirements when public agencies use digital technologies to make decisions that impact the public, also known as “worker in the loop” mandates. A bill introduced in New York (2024) would require a human to make any decision related to public rights or benefits and would strictly limit the use of large language models (LLMs). Other bills require human review of public sector decisions made by digital technologies, such as a 2024 law passed in New Hampshire and a 2021 bill introduced in Washington.

Key Concepts (not necessarily specific to, or limited to, workers)

  • Government agencies must not use digital technologies to replace human workers or essential job functions of workers.
  • Government agencies must conduct impact assessments before adopting new digital technologies, testing for a range of harms and making the results available to the public.
  • Government agencies must carry out inventories of digital technologies currently in use and make these inventories available to the public.
  • Government agencies must disclose their use of AI when interacting with the public, and ensure that a human alternative is available if requested.
  • Government agencies must ensure that any output produced by a digital technology system is reviewed by a human with sufficient knowledge and resources.
  • Decisions that impact a member of the public’s rights, services, and benefits must be made by a human; assistance from AI must be purely advisory.
  • Government agencies must provide sufficient training to workers on how to use digital technologies in a way that protects the privacy, safety, and rights of both the public and public sector workers.
  • Government agencies must adhere to a clear set of standards when procuring digital technology systems, including assessments of harm, privacy, and fairness.

Collective Bargaining

To date, we have not seen much legislative activity focused on collective bargaining around new technologies. Exceptions include the federal Protect Working Musicians Act of 2023, which would exempt musicians from antitrust restrictions, allowing them to bargain collectively over music licensing terms with companies. The federal Workers’ Right to Training Act of 2019, mentioned above, includes requirements that employers bargain directly with workers on how best to implement new technologies that are likely to change or eliminate workers’ jobs. And two laws recently passed in California and Illinois establish that the presence of union representation is one way in which contracts governing a worker’s digital replica can be considered valid.

Several regulatory actions are important to note. For private sector unions, the National Labor Relations Board (NLRB) General Counsel issued a memo on surveillance technology in 2022, underscoring that Section 7 of the NLRA protects workers from surveillance and algorithmic technologies intended to interfere with the right to organize and unionize. The NLRB also announced memoranda of understanding with the Consumer Financial Protection Bureau (CFPB) (to protect workers in both labor and financial markets) and the Federal Trade Commission (FTC) (to protect workers against the impacts of algorithmic decision-making and unfair labor practices, among other issues).

For public sector unions, states that have adopted collective bargaining rights similar to or broader than those under the National Labor Relations Act (NLRA) may choose to adopt the NLRB’s interpretation of the Section 7 protections outlined above. (See for example this decision by the California Public Employment Relations Board [PERB].) Of course, state legislatures are free to amend existing law to strengthen collective bargaining rights for public sector workers. One example is California, where the governor approved a bill (AB 96) that requires public transit employers to notify impacted unions of their plan to procure autonomous transit vehicle technology that would automate jobs or job functions, and allows for collective bargaining to start within 30 days of such notice.

Warehouse Quotas

A number of states have recently introduced or enacted legislation addressing warehouse employers’ use of opaque electronic surveillance and productivity monitoring systems, following the first such law, enacted in California in 2021. States that roughly replicate California’s law include Montana (bill), Oregon (law), South Dakota (bill), Nebraska (bill), Washington (law), Minnesota (law), Connecticut (bill), Illinois (bill), and New York State (law).

Senator Ed Markey of Massachusetts introduced a federal bill in 2024 that borrows many key concepts from, but significantly strengthens, the California law. Both pieces of legislation require employers to provide workers with notice of any productivity quotas in place. However, the federal bill expands the list of prohibited quotas, provides additional protections against adverse employment actions, and limits how employers can use worker data.

Further Resources

This guide is a living document that we update frequently. To report errors or to flag policies we should include, please email Mishal Khan, UC Berkeley Labor Center, mishalkhan@berkeley.edu.

For technical assistance on these policy models, reach out to:

The authors thank the John D. and Catherine T. MacArthur Foundation and Omidyar Network for their generous support of this project.