Theme 1: Laying down rules for responsible tech use
“CWA members and leadership will not accept that the effects of AI systems are inevitable or pre-determined. We will hold company executives and management accountable for their decisions when adopting and implementing these systems and for the results they have on workers, customers, and communities.” – CWA
Transparency and disclosure
Transparency rights are vitally important to labor. They are understood to be foundational to all other rights (UNI) and to strengthening collective bargaining (CWA). Transparency around the use of digital technologies in the workplace is articulated as both the right to advance notice (CFLU, CWA, NNU, NEA, UNI, TUC, IATSE, AFL-CIO) and the right to a post-use explanation (UNI, UNID). Companies should disclose any copyrighted works used to train an AI system (NWU, IATSE), clearly identify AI-generated content (AFSCME), and share the results of impact assessments with workers (UNI). For UNITE HERE, transparency means ensuring that algorithmic systems provide not only clear instructions but also clear explanations for why specific tasks are assigned (UH). For others, the data and ethical considerations that go into making an AI system should be available for investigation when questions of liability arise (AFSCME, UNI).
Guardrails around the collection and use of data
For many groups, worker data rights are fundamental as new digital technologies are used in the workplace. Workers must be able to know about, access, correct, and delete any data gathered about them by employers (UNID, UH, NNU, CFLU, CWA, SEIU, AFL-CIO). In addition, they should be able to influence how employers use their data, especially if it is used to make employment decisions about them or to train an AI system (UNIA, CFLU, TUC, CWA). UNI Global argues that worker consent, by itself, does not provide adequate protection when employers collect worker data, and that additional guardrails are necessary (UNID). These include strict data minimization rules, opportunities for workers to actively opt in or out of data collection, and limits on surveillance without a clear purpose (UNID, NNU, NEA). For the AFL-CIO, surveillance not only harms workers' mental health but also threatens their right to organize. Other concerns revolve around the privacy of workers and the public they serve (NNU, AFT), the safe storage of worker data (NEA, AFT, UH), and the right to transfer worker data between platforms upon request (UNID).
Human-made employment decisions
The right to have important decisions about workers made by a human—not an algorithm—is critical for worker groups (CWA, UNIA, SEIU). For the California Federation of Labor Unions, decisions such as “hiring, discipline, terminations, work quotas, or wage setting” are simply too important to be made solely by an algorithm. For the AFL-CIO, workers must be able to appeal AI-driven decisions made about them without retaliation. NEA argues against employers relying on technology to evaluate or discipline teachers, instead emphasizing the importance of collaborative processes, personalized feedback, and providing opportunities for growth.
Protection from discrimination and bias
Numerous organizations recognize that digital technologies can manifest bias, especially when used in hiring, promotion, and firing (AFSCME, UH, UNI, NEA, HCAP, AFL-CIO). Workers therefore should have the right to be protected from any potentially discriminatory impacts of these technologies. Employers should set procurement standards for the technologies they purchase, regularly test for harms, and ban technologies found to be problematic (NEA, UNI, CWA). UNI Global points out that certain business practices are inherently biased, highlighting, for example, that customer ratings can be a "backdoor to bias and discrimination" (UNIA). For NEA, it is important that the decision-makers who shape how technology is deployed in educational settings are themselves from diverse backgrounds.
Health and safety at work
“Patients and workers have a right to safety at work or while receiving care.… The burden of demonstrating safety should rest with developers and deployers, not patients and their caregivers.” – NNU
Labor wants to prioritize workers' rights to health and safety as new digital technologies are introduced (CWA, TUC, NNU). Testing for health and safety harms through impact assessments is important for ensuring these protections (CFLU, NEA). The AFL-CIO emphasizes that, especially in the public sector, AI procurement standards are crucial to ensuring the health and safety of workers and the public. In addition to physical health and safety, mental health and a healthy work-life balance should also be safeguarded (TUC, NNU). The California Federation of Labor Unions highlights that production algorithms or quotas should never be used in ways that violate health and safety laws.
Accountability structures
Unions and worker organizations are pressing for the right to clear accountability structures to accompany the use of digital technologies in the workplace (AFL-CIO). The California Federation of Labor Unions asserts that companies that train or develop AI should be held liable when technology “causes harm, violates the law, or has other adverse impacts.” UNI Global states that “legal responsibility for a robot should be attributed to a person” and that “machines must maintain the legal status of tools.” TUC argues for workers to have access to legal redress and strong enforcement regimes when their rights are violated by unlawful deployment of AI, while NWU wants developers of generative AI systems to be held accountable for ensuring that creators receive proper credit and compensation when their works are used.
Prohibited technologies
The right to be protected from particularly harmful technologies has emerged as a key concern for several groups, some of which call for outright bans. For the California Federation of Labor Unions, facial recognition, predictive behavioral analysis, and profiling should not be used on or off the job. For UNI Global, equipment revealing a worker's location should be used only under strictly limited circumstances (UNID).