A First Look at Labor’s AI Values: An analysis of recent statements about technology by unions and other worker organizations

Mishal Khan and Kung Feng

Over the past few years, various unions and worker organizations have published a series of principles, public statements, frameworks, and resolutions articulating a vision for how AI and other digital technologies should be developed and deployed in the workplace. We analyzed 17 of these documents by 15 organizations and identified the key values being put forward. This report represents our interpretation of these documents. (A legend of union and worker organization acronyms, with links to the documents analyzed, appears below.)

Theme 1: Laying down rules for responsible tech use

“CWA members and leadership will not accept that the effects of AI systems are inevitable or pre-determined. We will hold company executives and management accountable for their decisions when adopting and implementing these systems and for the results they have on workers, customers, and communities.” – CWA

Transparency and disclosure

Transparency rights are vitally important to labor. They are understood to be foundational to all other rights (UNI) and key to strengthening collective bargaining (CWA). Transparency around the use of digital technologies in the workplace is articulated as both the right to advance notice (CFLU, CWA, NNU, NEA, UNI, TUC, IATSE, AFL-CIO) and the right to post-use explanation (UNI, UNID). Companies should disclose any copyrighted works used to train an AI system (NWU, IATSE), clearly identify AI-generated content (AFSCME), and share the results of impact assessments with workers (UNI). For UNITE HERE, transparency means ensuring that algorithmic systems provide not only clear instructions but also clear explanations for why specific tasks are assigned (UH). For others, the data and ethical considerations that go into making an AI system should be available for investigation when questions of liability arise (AFSCME, UNI).

Guardrails around the collection and use of data

For many groups, worker data rights are fundamental as new digital technologies are used in the workplace. Workers must be able to know about, access, correct, and delete any data gathered about them by employers (UNID, UH, NNU, CFLU, CWA, SEIU, AFL-CIO). In addition, they should be able to influence how employers use their data—especially if it is used to make employment decisions about them or to train an AI system (UNIA, CFLU, TUC, CWA). UNI Global argues that worker consent, by itself, does not provide adequate protection when employers collect worker data, and that additional guardrails are necessary (UNID). These include strict data minimization rules, opportunities for workers to actively opt in or out of data collection, and limits on surveillance that lacks a clear purpose (UNID, NNU, NEA). For the AFL-CIO, surveillance not only harms workers' mental health but also threatens their right to organize. Other concerns revolve around the privacy of workers and the public they serve (NNU, AFT), the safe storage of worker data (NEA, AFT, UH), and the right to transfer worker data between platforms upon request (UNID).

Human-made employment decisions

The right to have important decisions about workers made by a human—not an algorithm—is critical for worker groups (CWA, UNIA, SEIU). For the California Federation of Labor Unions, decisions such as “hiring, discipline, terminations, work quotas, or wage setting” are simply too important to be made solely by an algorithm. For the AFL-CIO, workers must be able to appeal AI-driven decisions made about them without retaliation. NEA argues against employers relying on technology to evaluate or discipline teachers, instead emphasizing the importance of collaborative processes, personalized feedback, and providing opportunities for growth.

Protection from discrimination and bias

Numerous organizations recognize that digital technologies can manifest bias—especially when used in hiring, promotion, and firing (AFSCME, UH, UNI, NEA, HCAP, AFL-CIO). Workers therefore should have the right to be protected from any potentially discriminatory impacts of these technologies. Employers should set procurement standards for the technologies they purchase, regularly test for harms, and ban technologies found to be problematic (NEA, UNI, CWA). UNI Global points out that certain business practices are inherently biased, highlighting for example that customer ratings can be a “backdoor to bias and discrimination” (UNIA). For NEA, it is important that the decision-makers who shape how technology is deployed in educational settings are themselves from diverse backgrounds.

Health and safety at work

“Patients and workers have a right to safety at work or while receiving care.… The burden of demonstrating safety should rest with developers and deployers, not patients and their caregivers.” – NNU

Labor wants to prioritize workers’ rights to health and safety as new digital technologies are introduced (CWA, TUC, NNU). Testing for health and safety harms through impact assessments is important for ensuring these protections (CFLU, NEA). The AFL-CIO emphasizes that, especially in the public sector, AI procurement standards are crucial to ensuring the health and safety of workers and the public. In addition to physical health and safety, mental health and a healthy work-life balance should also be protected (TUC, NNU). The California Federation of Labor Unions highlights that production algorithms or quotas should never be used in ways that violate health and safety laws.

Accountability structures

Unions and worker organizations are pressing for the right to clear accountability structures to accompany the use of digital technologies in the workplace (AFL-CIO). The California Federation of Labor Unions asserts that companies that train or develop AI should be held liable when technology “causes harm, violates the law, or has other adverse impacts.” UNI Global states that “legal responsibility for a robot should be attributed to a person” and that “machines must maintain the legal status of tools.” TUC argues for workers to have access to legal redress and strong enforcement regimes when their rights are violated by unlawful deployment of AI, while NWU wants developers of generative AI systems to be held accountable for ensuring that creators receive proper credit and compensation when their works are used.

Prohibited technologies

The right to be protected from particularly harmful technologies has emerged as a key concern for several groups, some of which call for outright bans. For the California Federation of Labor Unions, facial recognition, predictive behavioral analysis, and profiling should not be used on or off the job. For UNI Global, equipment revealing a worker’s location should only be used under strictly limited circumstances (UNID).

Theme 2: Centering workers

Collective bargaining

“As with many labor issues, the best outcome is one in which the people who know the work the best, the workers, have a seat at the table and are empowered to be part of solving the problems that directly impact their work lives.” – UNITE HERE

The right to organize and collectively bargain has long been central to labor power. Many groups assert the importance of collective bargaining rights for ensuring that workers remain at the center of decisions about how digital technologies are introduced and governed (NNU, NEA, TUC, IATSE, AFSCME, UH, UNID, CFLU, SEIU, AFL-CIO). Joint labor-management partnerships are also seen as important institutional structures for incorporating workers into decisions about technology deployment (NEA, HCAP).

Worker-in-command

The right to be in command of digital technologies deployed in the workplace is central for many groups. While algorithms can provide data and make suggestions, workers must have the final authority over decisions and must always retain control of the system (UH, UNI, UNIA, SEIU). UNITE HERE suggests that instead of “algorithmic management,” we should think in terms of “algorithmic guidance,” allowing workers to override or modify the algorithm’s suggestions or directives. In other words, it is imperative that workers continue to be allowed to exercise their professional judgment and not simply provide a rubber stamp, or worse, a “liability shield” for automated systems (NNU, AFT, AFL-CIO, SEIU). It is also important that workers receive regular and ongoing training so they can adapt to and effectively use any new digital technologies they work alongside (HCAP, IATSE, UH, AFT, UNIA, NEA).

Design and deployment

“When governments use taxpayer dollars to fund AI research, workers and unions should be part of the process; including them from the beginning helps to create more effective, safer technology and can help to avoid bad decisions around AI development and implementation. The reality is that America spends billions in public money to advance innovation—incorporating worker voices and unions into these research initiatives should be a requirement and a national priority.” – AFL-CIO

A variety of groups highlight the right to participate in the design, development, and procurement of technology from the outset (CFA, CFLU, AFT, TUC, UH, NEA, HCAP, AFSCME, AFL-CIO). Workers are most familiar with the on-the-ground reality of the workplace and are best placed to ensure that appropriate and beneficial technology is implemented. AFSCME references President Biden’s Executive Order on AI, which articulates the same principle. UNITE HERE envisions a future where “the worker is at the center of technological innovation.”

Saying no

“Empower educators to make educational decisions. Certified professionals must decide when, whether and how to incorporate technology in pursuit of their larger educational priorities. Technologies and technology vendors must serve, not drive, those decisions and priorities.” – AFT

For several organizations, the right to refuse to work with certain technologies is crucial. Workers should be able to say no when vendors or universities push products on educators for classroom use (NEA, AFT, CFT) or when a company wants to use a creative worker’s output or digital likeness to train an AI system (NWU). For NNU, workers have the right to refuse to participate in data collection and worker surveillance, while for CWA and UNI Global (UNIA), workers have the right to refuse to work on AI systems that are harmful or unethical.

Theme 3: Improving jobs and livelihoods

Work enhancement

“Technology should make work safer, more productive, and increase workers’ skills, not worsen conditions or eliminate jobs. The best way to do that is to treat technology, including AI, as a tool for workers to control, not that controls them.” – CFLU

Many organizations insist that workers have the right to ensure that digital technologies enhance the work experience—rather than replace workers (HCAP, CWA, HAC, AFL-CIO). Digital technologies should be an “additive tool” (NEA) rather than a means of control (CFLU, IATSE). Human Artistry Campaign writes that under the right circumstances, AI can assist the creative process. But instead of making work easier, new technologies have frequently increased job requirements—for example, by demanding higher technological proficiency or mastery over English (UH). However, better choices for how to use technology are possible. UNI Global suggests that management algorithms could help identify when workers need extra training rather than act as a tool to discipline workers (UNIA).

The essential role of human workers

“We envision AI-enhanced technology as an aid to public educators and education, not as a replacement for meaningful and necessary human connection.” – NEA

Labor is articulating the right to ensure that essential tasks continue to be performed by human workers. Digital technologies should not replace nurses providing in-person care (NNU), teachers carrying out the vital task of educating students (AFT, NEA), or faculty members’ intellectual work (CFA). Human Artistry Campaign states that generative AI cannot replace the essential roles that journalists and creative workers such as artists, musicians, and actors play in our society. NWU writes that human creativity is “the work of millions of human lives” and therefore must be protected from exploitation.

Control over work product

Workers’ rights to control their intellectual property, copyrighted works, digital likeness, and work products are important for many groups (AFL-CIO, CFLU). Creators want to ensure their work is not used to train generative AI models without consent and that contracts governing creative works contain clear language and are strictly regulated (NWU, HAC). CFA wants to ensure that third parties do not use faculty-produced materials to train AI systems, while NEA asserts that educators should have a proprietary right over their work.

Distributing productivity gains

Several groups assert that workers have the right to reap the benefits of any enhanced productivity brought about by the use of digital technologies in the workplace. This can take the form of higher pay, enhanced benefits, more paid time off, or other improvements in working conditions (CWA). The economic prosperity created by AI should be distributed broadly and equitably—benefiting as many people as possible (UNIA, UNI, AFSCME).

Just transition and training

Many groups affirm that workers have the right to a “just transition” as digital technologies transform the labor market (UNI, CWA, NWU, IATSE). Employers should provide notice prior to layoffs, retraining, priority bidding for comparable roles, severance pay, and partial wage replacement (CWA, AFL-CIO). Labor-management training partnerships, union training centers, and registered apprenticeships offer the best paths to prepare workers for technological change (UH, AFL-CIO). Other groups look to governments to provide social security, lifelong learning, and retraining so that workers can take advantage of new opportunities created through AI (UNI, AFSCME). The Teamsters contend that Congress must “directly and forcefully” address the workforce impacts of the rollout of commercial autonomous vehicles.

Theme 4: Advancing the public good

Benefiting society

“Make AI Serve People and the Planet. This includes codes of ethics for the development, application and use of AI so that [AI systems] increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity.” – UNI Global

Labor is looking beyond narrowly defined workplace issues to the broader impacts of digital technologies on the public good and society at large. Organizations are concerned, as UNI Global puts it, with ensuring that AI “serves people and the planet” and is compatible with “fundamental human rights.” A crucial way to ensure this is securing whistleblower protections for workers reporting on potential dangers of AI systems (AFL-CIO). In TUC’s view, AI should benefit society, not just employers and commercial interests. AFT calls for a “culture of the mindful use of technology” and wants to use AI to help build “capacity for civic engagement.” Both AFT and the AFL-CIO are concerned about the role that AI will play in building a democratic future. AFSCME similarly expresses concern for the impact of AI on the nation’s civic community, while NWU calls attention to “ghost workers” in the global South whose labor powers AI models. For the Teamsters, public safety is a high priority as autonomous vehicles become more common on our roads, while AFSCME advocates for policies that prioritize the safety and well-being of communities.

Enhancing equity

Several organizations acknowledge that AI will have unequal impacts on marginalized communities and workers who are not represented by a union (NWU, AFL-CIO). Some organizations seek to promote equitable access to AI tools (NEA, TUC), while others want to ensure that AI narrows rather than widens existing inequities in schools (AFT) and globally (UNI). In health care, AI should be used as a tool to decrease social disparities and enhance access to good quality care (HCAP).

Protecting the planet

Unions and worker organizations are drawing attention to the environmental impacts of AI and the threats it poses to the planet. NWU and NEA recognize that AI and data centers are exacerbating the climate crisis through excessive and growing energy and water use. UNI Global wants to use AI to protect and improve our planet’s ecosystems and biodiversity.

Collaboration

Many organizations emphasize the importance of working with multiple partners to protect workers, with IATSE advocating for a comprehensive approach to tackling the challenges ahead. Some emphasize the global scale of the problem, cognizant of how global regulation and trade agreements may impact technology issues and the importance of multi-stakeholder governance (UNI, TUC, NWU). Others mention collaborating with researchers (IATSE, CWA) to understand how technology is impacting work, or with parents and community members to determine the best way to implement technology (AFT). AFSCME resolves to engage, collaborate, and support the efforts of other unions.