
Last updated 12/2025

Across the country, employers are increasingly using AI and other digital technologies in ways that stand to have profound consequences for wages, working conditions, employment, race and gender equity, and worker power. While regulation of these new workplace technologies is still in its infancy, there has been significant legislative activity over the last few years—several hundred worker-impacting bills were introduced in the 2025 legislative session alone.

In this guide, we give an overview of current U.S. public policy that regulates employers’ use of digital workplace technologies. Based on a review of over 350 bills and laws across all U.S. states and at the federal level, we identify nine major topics of proposed or enacted legislation. For each topic, we also describe key concepts establishing worker rights and employer responsibilities around digital technologies. Note that while we organize the guide topically, in practice several issue areas might be addressed in one bill or law. Also, unless otherwise stated, we analyze the content of bills as they are first introduced. This is a living document that we will update periodically. See also the list of further resources at the end.

Electronic Monitoring

There is growing legislative momentum to regulate employers’ use of electronic monitoring in the workplace.

Two union-supported 2025 bills in California (see here and here) would together establish a broad framework for regulating employers’ use of electronic monitoring on and off the job, including notice requirements, prohibitions, and responsible use guardrails. A 2025 bill in Maine contains some of these prohibitions, and bills in Massachusetts, Vermont, and Washington combine similar electronic monitoring standards with algorithmic management provisions (see the section “Algorithmic Management” below). All of these bills draw on concepts from California’s 2022 Workplace Technology Accountability Act and New York State’s 2023 Bossware and Oppressive Technologies Act, as well as the federal 2024 Stop Spying Bosses Act. While none of these bills have been enacted into law so far, we consider this type of comprehensive approach to be most protective of workers.

A more limited model only requires employers to give employees advance notice of electronic monitoring, as in this 2021 New York State law, this 2024 California bill, and an older generation of laws on the books in a number of states (see Connecticut, New York, and Delaware). A 2025 California bill takes a different approach by requiring public and private sector employers to disclose all electronic surveillance tools in use to the state and the public.

Another approach regulates electronic monitoring through what are known as “just cause” bills, introduced in Illinois and New York City. These bills prohibit the unjust discharge of workers, establish a framework for justified discipline and firing, and limit employers’ reliance on data from electronic monitoring in making those decisions.

Finally, a series of sectoral bills would establish notice requirements when businesses conduct electronic monitoring in specific worksites, like nursing homes, correctional facilities, and residential homes. But the focus is often on protecting patients rather than workers, potentially opening the door to employers misusing monitoring data against workers.

Key concepts appearing in one or more policies:

  • Employers must give detailed prior notice of any electronic monitoring (particularly if the collected data will be used to make an employment-related decision).
  • Employers are only allowed to use electronic monitoring for a limited set of purposes, collecting the least amount of data, affecting the smallest number of workers, and only as often as is strictly necessary.
  • Employers are prohibited from using electronic monitoring that results in a violation of labor and employment laws; that records workers off-duty or in sensitive areas; that uses high-risk technologies, such as facial recognition; or that identifies workers exercising their rights under employment and labor law.
  • Employers are prohibited from relying primarily or exclusively on data from electronic monitoring when making decisions about hiring, firing, discipline, or promotion. Instead, the employer must independently corroborate the data and provide the worker with full documentation, including the actual data used.
  • Prior to use, employers must conduct impact assessments of electronic monitoring systems, testing for bias and other harms to workers.
  • Employers who electronically monitor workers to assess their performance (through productivity or quota systems) are required to disclose performance standards to workers and apply these standards consistently across workers.
  • Workers have the right to access and correct their data collected through electronic monitoring systems, and appeal any decisions made by an employer using this data. Employers must adjust any employment-related decisions that were based on inaccurate data.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Algorithmic Management

A growing number of bills would regulate algorithmic management—that is, employers’ use of digital technologies to make a wide range of decisions about workers, including their wages, working conditions, job responsibilities, and other terms and conditions of employment. (Some of the bills cited below may use terms such as “automated decision systems.”)

Two landmark union-supported bills in 2025 would establish a robust set of guardrails on employers’ use of algorithms to manage workers, including transparency requirements, prohibitions, and responsible use provisions: California’s No Robo Bosses Act and Massachusetts’ FAIR Act. Similar bills have been introduced in Vermont and Washington. Like the electronic monitoring bills described above, these proposed policies build on concepts contained in the 2022 California Workplace Technology Accountability Act as well as the federal 2024 No Robot Bosses Act. New York State’s 2023 Bossware and Oppressive Technologies Act also incorporates a closely related civil rights framework on employment selection procedures. While none of these bills have been enacted into law so far, we consider this type of comprehensive approach to be most protective of workers. A more modest approach requires notice only (see this Illinois law).

Another set of bills is not solely focused on workplace technologies, but instead requires transparency and impact assessments when algorithms are used to make decisions about consumers in a broad range of contexts, including housing, insurance, and employment. The Colorado AI Act that was signed into law in 2024 falls into this category, as do current bills in California, New Mexico, and Vermont.

And finally, a new category of bills in 2025 would prohibit employers from using surveillance data as part of an automated system to set individualized wages (introduced in Colorado, Georgia, Illinois, and at the federal level), while other bills would prohibit employers from using AI for disciplinary purposes in specific sectors such as education (e.g., a bill in Illinois).

Key concepts appearing in one or more policies:

  • Employers must give workers detailed prior notice before using any algorithmic management systems to make decisions about them.
  • Employers are prohibited from using algorithmic management systems that result in a violation of labor and employment laws; that profile or make predictions about a worker’s behavior that are unrelated to their job responsibilities; that identify workers exercising their legal rights under employment and labor law; or that use high-risk technologies, such as facial or emotion recognition.
  • Employers are prohibited from using AI to perform teacher evaluations.
  • Employers are prohibited from using personal data as part of an automated system to set individualized worker wages.
  • Employers must ensure that there is meaningful human oversight when an algorithm is used to make an employment decision.
  • Employers are prohibited from relying primarily or exclusively on outputs from algorithmic management systems when making decisions about hiring, firing, discipline, or promotion. Instead, employers must independently corroborate the decision based on other performance data.
  • Employers must give workers notice after an algorithmic management system is used to make an employment decision about them. Workers must be provided the opportunity to correct any data that went into the decision, and have the right to appeal the decision.
  • Prior to use, employers must conduct impact assessments of algorithmic management systems, testing for harms to economic security, race and gender equity, privacy, workplace health and safety, mental health, right to organize, labor rights, and other terms and conditions of work.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Data Privacy

The U.S. currently has no federal data privacy law. Instead, 20 states have passed their own laws, focused on establishing consumer rights over the data gathered by platform companies and other large businesses.

However, California is currently the only state whose data privacy law, the 2018 California Consumer Privacy Act (CCPA), gives workers the same rights as consumers. Other states have continued the trend of passing data privacy laws that explicitly exclude workers, whether employees, independent contractors, or job applicants. This lack of basic data rights for workers is one of the glaring omissions in U.S. public policy.

By contrast, workers are often covered by a subset of data privacy legislation that governs biometric data specifically and requires an additional layer of protection. Biometric data typically include fingerprints, voiceprints, retina scans, hand scans, or face geometry (note that this is distinct from biological data collected for health or medical purposes).

So far, three states have broad biometric privacy laws that cover workers: Illinois (2008), Texas (2009), and Washington (2017). In particular, the landmark Illinois Biometric Information Privacy Act (BIPA) is considered the toughest biometric law in the U.S. and has resulted in numerous employment class action lawsuits. Replicas of BIPA have been introduced in more states and at the federal level (2020); see also this 2023 federal bill regulating the collection of biometric data in the public sector. In 2024, Colorado passed a law amending the Colorado Privacy Act (CPA) to place some limits on employers’ ability to make the collection of biometric data a condition of employment, while a New York bill would prohibit requiring facial, iris, or retina scans as a condition of attaining or continuing employment.

There is also a growing body of proposed legislation regulating the use of biometric data in hiring specifically. These bills mandate that employers obtain consent from job applicants for the use of AI (e.g., the Illinois AI Video Interview Act passed in 2020 and amended in 2021, and a New Jersey bill introduced in 2024) or facial recognition technologies (e.g., a Maryland law passed in 2020) in the hiring process.

Key concepts appearing in one or more policies:

  • Employers must give workers detailed prior notice of any data they intend to collect about them (particularly if data will be used to make an employment-related decision).
  • Employers can only collect worker data for a limited set of purposes (such as enabling workers to do their jobs, protecting workers’ health, or administering wages and benefits).
  • Workers have the right to access and correct their data.
  • Employers cannot sell or license worker data (including biometric data) to third parties, and can only disclose biometric data to third parties in limited circumstances.
  • Workers have the right to limit employers’ use of their sensitive data (such as health-related data, data related to protected characteristics such as race and ethnicity, and genetic data).
  • Employers must obtain workers’ consent to collect and process their biometric data.
  • In the hiring context, employers must provide notice, give a detailed explanation, and obtain consent when using AI-enabled assessments, such as facial recognition, during video job interviews.
  • Employers must not make the collection and processing of biometric data a condition of employment, unless strictly necessary, and for a specified and limited set of goals.
  • Employers must take steps to protect all worker data and have a written retention and destruction policy. Biometric data requires greater protections.
  • Government agencies should be prohibited from collecting biometric data on the public.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Automation and Job Loss

Automation has been the focus of many bills introduced in 2024 and 2025. The bills differ, however, in their approach. We have grouped them into three broad categories:

1. Protecting workers from automation and job loss

Several laws have already been enacted in this category. In 2025, Illinois passed laws prohibiting the use of AI in place of mental health professionals or community college faculty (a handful of states, including New York, Florida, and Pennsylvania, introduced replicas of these bills). California passed a law prohibiting businesses from advertising their AI products using terms reserved for licensed health care professionals, while Oregon passed a similar law covering nursing professionals. Pennsylvania enacted a law prohibiting state agencies from using generative AI to record administrative proceedings without a human stenographer present. A 2024 California law defines community college faculty as humans, not artificial intelligence (a federal bill would do the same for musicians). At the city level, Long Beach, California passed a first-in-the-nation ordinance in 2025 limiting the use of self-service checkout stations in retail stores (similar to a 2025 California bill).

Important examples from introduced bills include: prohibiting AI from replacing media workers (a 2025 New York bill), teachers (2025 bills in New York, Texas, and Connecticut), court reporters (a 2025 California bill), core job functions of call center workers (a 2024 California bill), or care functions in health care settings (a 2025 Maine bill). Other bills mandate the presence of a human driver in commercial vehicles (e.g., 2025 bills in Colorado and Massachusetts).

Another policy strategy is to change the incentives surrounding automation. For example, several bills were reintroduced in New York in 2025 that would tax or withhold subsidies from companies that use technology to displace workers. Similarly, two bills introduced in New Jersey would offer tax credits to companies that either hire displaced workers (2024) or participate in technology-related apprenticeship programs (2025) for workers displaced due to automation.

Finally, in 2025 New York State updated its Worker Adjustment and Retraining Notification (WARN) rules, requiring companies to disclose whether layoffs are due to technological innovation or automation. The federal AI-Related Job Impacts Clarity Act and a Pennsylvania bill take similar approaches. Importantly, a New York State bill would also require employers to give workers advance notice of any technology-related displacement.

2. Protecting workers from being forced to train their AI replacement

A related policy model is to ensure that workers have control over their work product and are not forced to train AI systems with their data. Most bills in this category focus on creative workers such as artists, actors, musicians, and journalists. In 2024, several states passed laws in this area, strictly regulating contracts governing digital replicas (see laws in California, Illinois, and New York). Other proposed legislation that would require written authorization for the use of a worker’s digital replica includes the federal NO FAKES Act and a Massachusetts bill, both introduced in 2025. A 2024 Washington bill similarly creates consent requirements when employers want to use a worker’s digital likeness in the workplace context.

Other bills expand their focus beyond digital replicas. The New York bill covering media workers, mentioned above, would prohibit employers from using media workers’ creative output to train a generative AI system without notice, consent, or the opportunity to bargain. Another example is the federal COPIED Act, introduced in 2024, which would prohibit developers from using, without consent, a worker’s digital product (text, image, audio, or video) that has provenance information attached.

3. Education and training for the 21st century economy

Providing workers with the education and training they need to navigate rapid technological change is a key area requiring policy innovation. One important model is the comprehensive federal Workers’ Right to Training Act, introduced in 2019. This bill would establish strong requirements for employers to provide on-the-job retraining and offer alternative employment to workers whose jobs are in danger of being changed (in their pay, working conditions, or skill requirements) or replaced due to new technologies. A more recent New Jersey bill (2024) would require employers to provide notice, retraining, and severance pay to workers who experience technology-related job loss. Another New Jersey bill, from 2025, would require the state to engage with unions in expanding training programs in sectors impacted by technology-related job loss, and proposes a tax on AI infrastructure to fund these programs. Other bills direct government agencies rather than employers to offer retraining to workers in industries impacted by technological change; two such bills introduced at the federal level are the Investing in Tomorrow’s Workforce Act of 2023 and the Workforce of the Future Act of 2024. Two 2025 New Jersey bills direct public funds to nonprofits or to public-private partnerships to provide AI training to workers.

Key concepts appearing in one or more policies:

  • Employers should be prohibited from using digital technologies to automate jobs, eliminate core job functions, or reduce work hours.
  • Employers must conduct an impact assessment prior to deploying any digital technologies that have the potential to automate, eliminate, or change core job functions.
  • Employers must give notice, retraining, and compensation to workers when deploying digital technologies that will change jobs or displace workers, and give priority to current workers when filling new positions.
  • Employers must consult workers when implementing consequential workplace technology and share the results of impact assessments with workers.
  • Businesses are prohibited from claiming that AI systems have professional expertise reserved for licensed professionals.
  • Businesses cannot train an AI model with a worker’s work product (e.g., digital likeness, expertise, voice, writing, art, music) or use a worker’s digital replica without express and informed consent and, in some policies, credit and compensation for the creator.
  • Employers cannot retaliate against a worker for refusing to consent to having their work product used to train a generative AI system.
  • Workers must have additional protections when entering into contracts allowing companies to use their digital replicas, such as having union or legal representation present, or ensuring that the contract language is specific to the intended use.
  • Government taxation and economic development policies should be leveraged to disincentivize job automation.
  • Government agencies should invest in retraining initiatives for workers in industries undergoing technological change.

Human-in-the-Loop

Even if digital technologies do not automate entire jobs, they can still replace or take over specific tasks, potentially deskilling workers, eroding worker autonomy, and creating harmful impacts for the public. An important emerging policy approach is to lay down ground rules for how workers interact with technology, ensuring that a human is always in the loop and has ultimate decision-making authority when digital technologies are used in the workplace. These policies are typically industry-specific, vary in how much agency they carve out for workers, and can be motivated by protecting workers, protecting the public, or both.

In 2025, numerous states passed versions of a 2024 California law requiring physician review when an algorithm is used in making a health insurance benefit decision (e.g., laws in Maryland and Nebraska). Texas enacted the strongest version of these laws, prohibiting the use of algorithms to make, wholly or partly, an adverse health benefit determination. Similar bills were introduced in over 20 states in 2025. Beyond health insurance, new laws establish human review requirements when algorithms are used in critical infrastructure (Nevada), criminal justice (Virginia and Utah), and medical diagnostic settings (Texas). A law in California now requires human review when generative AI is used to produce reports in the law enforcement context. Several public sector laws also contain human review requirements when state agencies use AI under certain conditions: see 2025 laws passed in Montana and Arkansas. Finally, three separate laws signed in Texas (see here, here, and here) mandate the provision of AI training for public sector workers to ensure they are equipped to interact with new digital technologies.

Turning to bills, a prime example of a strong human-in-the-loop requirement can be found in a 2025 federal bill, the Federal Right to Override Act, which would allow health care workers to override a hospital’s care-directing algorithm if doing so is in the best interest of the patient (see similar nursing-focused bills in California from 2022 and in Minnesota and Maine from 2025). The bill would also prohibit businesses from retaliating against workers who exercise this right, prohibit the sharing of override data on workers, and ensure that workers can provide feedback on algorithms that exhibit bias or inaccuracies or that require frequent override. A 2025 New York bill would require the incorporation of at least one worker with relevant expertise into the design and development of technologies deployed in their field. Another example is the wide-ranging 2025 New York bill (mentioned above) that would ensure that media workers have the right to approve, modify, or deny any decisions made by AI. Other bills contain human-in-the-loop requirements for large autonomous vehicles (e.g., 2025 bills in California and Massachusetts), mental health (e.g., a 2025 Texas bill), publishing (e.g., a 2024 New York bill), and law (e.g., a 2024 California bill). A variation of this type of bill requires that a human alternative be made available when members of the public interact with digital technologies, such as in call centers (e.g., these 2025 federal, Illinois, and Pennsylvania bills).

Human-in-the-loop requirements have also been central to a number of bills focusing on the public sector. A bill introduced in New York (2024) would require that a human make any decisions related to public rights or benefits, and would strictly limit the use of large language models (LLMs). A 2025 Illinois bill would require not just human review but “continuous meaningful human review” of digital technologies used by state agencies. And similar to the Texas training laws mentioned above, other 2025 bills require the provision of training in specific sectors to ensure that workers can use new digital technologies safely and in a manner that enhances their work. Sectors include education (Maryland), criminal justice (Virginia), and law enforcement, specifically when interacting with autonomous vehicles (New Jersey).

Key concepts appearing in one or more policies:

  • Workers must have the right to deny, approve, modify, or override decisions made by digital technologies without retaliation, and businesses must not share data on workers who exercise this right.
  • Workers must be given the resources and time needed to meaningfully review decisions made by algorithms in both private and public sector contexts.
  • Workers must be able to share feedback on the efficacy of digital technologies deployed in the workplace, including whether they exhibit bias or provide incorrect outputs requiring frequent override.
  • Businesses and government agencies must ensure that a human alternative is available when using digital technologies that interact with the public.
  • Government agencies must ensure that any output produced by a digital technology system is reviewed by a human with sufficient knowledge and resources.
  • Decisions affecting the public’s rights, services, and benefits must be made by a human; assistance from AI must be purely advisory.
  • Workers must receive adequate training to enable them to use any digital technologies deployed in the workplace in a way that protects privacy, safety, and the rights of both workers and the public, and allows workers to enhance their work.

Bias and Discrimination

Anti-discrimination laws at the federal and state levels have long protected workers from discrimination in employment. However, emerging technologies have posed new questions about how these laws apply, how to identify companies that may be violating the law, and how to prove such violations.

As a result, the topic of AI and discrimination has been the focus of significant legislative activity in both 2024 and 2025. The main policy model consists of broad laws that build upon existing anti-discrimination protections, adding requirements for transparency, disclosure, duty of care, and impact assessments when digital technologies are developed and deployed. These laws typically cover a wide range of applications, including employment, housing, education, and insurance. The 2024 Colorado AI Act is the first law of this kind to be enacted. Several 2025 bills in California, New Mexico, and Vermont follow a similar model, as does the 2024 federal AI Civil Rights Act. See these Center for Democracy & Technology (CDT) and Future of Privacy Forum (FPF) reports analyzing 2024 versions of this category of bills.

Another policy approach is to focus solely on discrimination in employment-related technologies. Here, the best model is the 2022 Civil Rights Standards for 21st Century Employment Selection Procedures. No state has yet passed this type of expansive law, but Illinois and California have taken action to clarify that the use of AI in employment-related decisions is covered by their existing anti-discrimination laws. Unfortunately, weak laws and bills threaten to undermine the goal of strong protections in this area. For example, a 2021 New York City law requiring bias audits and notice when AI is used in hiring has been roundly criticized and, according to a 2024 study, largely ignored by companies. Bills similar to New York City’s law were introduced in New Jersey and Pennsylvania in 2025.
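
For readers unfamiliar with what a bias audit actually computes, the core arithmetic is a comparison of selection rates across demographic groups. The sketch below is a minimal illustration in Python, using hypothetical applicant data and placeholder group names; it is a simplified version of the impact-ratio calculation used in audits of this kind, not the precise methodology any particular law prescribes. Ratios below 0.8 are commonly flagged under the EEOC’s longstanding four-fifths guideline.

    from collections import Counter

    # Hypothetical (group, was_selected) outcomes for applicants scored by a hiring tool
    applicants = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = Counter(group for group, _ in applicants)
    selected = Counter(group for group, was_selected in applicants if was_selected)

    # Selection rate: the share of each group's applicants that the tool selected
    rates = {group: selected[group] / totals[group] for group in totals}
    highest_rate = max(rates.values())

    for group, rate in sorted(rates.items()):
        # Impact ratio: a group's selection rate relative to the highest group's rate;
        # ratios under 0.8 are commonly flagged under the EEOC four-fifths guideline
        ratio = rate / highest_rate
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")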

Key concepts appearing in one or more policies:

  • Businesses, including data brokers, are prohibited from using discriminatory digital technologies in employment (and other critical sectors like housing, lending, and education).
  • Employers and vendors must conduct bias audits on digital technologies in the workplace before using them and annually thereafter.
  • Employers must disclose details of any digital technology systems in use (e.g., training data, model specifications, safety testing conducted).
  • Both developers and deployers of digital technologies must report any bias or discrimination incidents or discovered risks.
  • Employers must notify job candidates and employees about the use of digital technologies in hiring assessments or evaluations for promotions.
  • Employers must not use worker selection procedures that rely on facial recognition, emotion recognition, or other suspect technologies.
  • Worker assessment tools must measure the ability to perform essential job functions rather than attributes tied to protected characteristics.
  • Workers should have the right to opt out of being assessed by an automated selection procedure and instead be evaluated by a human or other alternative means.
  • Workers must have a private right of action and be protected from retaliation for exercising their rights.

Public Sector

Federal and state governments have a significant impact on the economy as employers, funders, and administrators of benefits, as well as through their power to shape standards via the procurement of goods and services. The regulation of AI use by government agencies is therefore an important policy area for worker advocates. There is growing activity, both legislative and administrative, to ensure accountability and responsibility in the public sector’s use of digital technology when providing services to the public, for example by creating disclosure, impact assessment, and meaningful review requirements. We are not able to summarize all of this activity here, but see this legislative tracker for examples.

However, we have seen less attention paid to the impact of digital technologies on public sector workers specifically. In 2023 and 2024, the Biden administration and a handful of states, including California, Pennsylvania, and Washington, issued wide-ranging executive orders shaping the development and adoption of responsible AI standards in government procurement, funding, and use (see this 2025 CDT brief). President Biden’s executive order (since rescinded) placed significant emphasis on workforce impacts; for example, the U.S. Department of Labor (DOL) issued responsible AI use standards for employers and developers in 2024. Pennsylvania’s executive order is also noteworthy; in 2025, the governor announced a partnership with SEIU to deploy generative AI across agencies.

On the legislative front, only a handful of policies focus on public sector workers. In 2024, New York enacted the LOADing Act, which prohibits public agencies from implementing digital technologies that would displace workers (a 2025 Illinois bill has similar provisions). A bill introduced in California (2024) would prohibit government agencies from contracting with call centers that replace workers with AI, while a federal bill would establish reporting requirements when the use of AI in federal call centers results in job loss. Several bills limit their focus to studying the impacts of AI on the state workforce (e.g., a 2025 bill in Connecticut and a 2024 bill in New York).

Key concepts appearing in one or more policies (not necessarily specific to, or limited to, workers):

  • Government agencies must not use digital technologies to replace human workers or essential job functions of workers, or contract with companies that do so.
  • Government agencies must conduct impact assessments before adopting new digital technologies, testing for a range of harms and making the results available to the public.
  • Government agencies must carry out inventories of digital technologies currently in use and make these inventories available to the public.
  • Government agencies must consult with workers before implementing new digital technologies in the workplace.
  • Government agencies must disclose when they use AI in interactions with the public, and ensure that a human alternative is available upon request.
  • Government agencies must monitor the impacts of digital technologies on state workforces.
  • Government agencies must adhere to a clear set of standards when procuring digital technology systems, including assessing for harm, privacy, and fairness.

Collective Bargaining

We have seen important examples of legislative activity focused on collective bargaining around new technologies. The 2024 New York LOADing Act prohibits state agencies from deploying digital technologies that undermine existing collective bargaining agreements (a 2025 Illinois bill has similar provisions), while a 2025 Washington State bill would strengthen the rights of public sector workers to bargain over AI. A New York bill focused on media workers would prohibit employers from using generative AI without bargaining in advance, or in a way that undermines existing collective bargaining agreements. A 2023 California bill would require public transit agencies to notify impacted unions of any plan to procure autonomous transit vehicle technology that would automate jobs or job functions, and would mandate that collective bargaining begin within 30 days of such notice. Finally, the federal Workers’ Right to Training Act of 2019, mentioned above in “Automation and Job Loss,” would require employers to bargain directly with workers on how best to implement new technologies that are likely to change or eliminate workers’ jobs.

In 2025, California enacted a law that will give Transportation Network Company (TNC) workers the right to bargain collectively with on-demand labor platforms. Laws passed in California and Illinois in 2024 establish that the presence of union representation is one way in which contracts governing workers’ digital replicas can be considered valid. Finally, also relevant is the federal Protect Working Musicians Act of 2023, which would exempt musicians from antitrust restrictions, allowing them to bargain collectively over music licensing terms with streaming platforms.

Warehouse Quotas

Some states have recently enacted laws addressing warehouse employers’ use of opaque electronic surveillance and productivity monitoring systems, following a 2021 California law. Oregon (2024), Washington (2023), Minnesota (2023), and New York State (2022) have passed laws that roughly replicate California’s. Similar bills have been introduced in at least 13 additional states, including Montana (2023), South Dakota (2023), Georgia (2025), Arizona (2025), Virginia (2025), and Massachusetts (2025). Some bills also require the creation of a worker-led joint labor-management safety committee (e.g., bills in Florida and Arizona).

In 2024, Senator Ed Markey of Massachusetts introduced a federal bill that borrows many key concepts from, but significantly strengthens, the original California law. Both pieces of legislation require employers to provide workers with notice of any productivity quotas in place. However, the federal bill expands the list of prohibited quotas, provides additional protections against adverse employment actions, and limits how employers can use worker data.

Further Resources

This guide is a living document that we update frequently. To report errors or to flag policies we should include, please email Mishal Khan, UC Berkeley Labor Center, mishalkhan@berkeley.edu.

For technical assistance on these policy models, reach out to:

The authors thank the John D. and Catherine T. MacArthur Foundation and Omidyar Network for their generous support of this project.