Data and Algorithms in the Workplace: An Overview of Current Public Policy Strategies
Introduction
This working paper provides an overview of existing and proposed public policy strategies designed to mitigate the risks and maximize the benefits of data processing systems[1] and algorithms at work.[2] Employers may use a multitude of technologies and techniques to monitor or otherwise collect data on their workforce. Through these methods they can collect a wide array of information on workers, including their location and movements,[3] computer activity,[4] health and wellness status,[5] co-worker interactions, biometric identifiers,[6] and social media activity.
The troves of data collected in the workplace – along with advances in the capacity of computers to store and process that data – have fueled innovations in the development of algorithmic technologies. These systems are often used to analyze worker data, make predictions or decisions about workers, influence or shape worker behaviors, plan and direct workplace tasks, train or assist workers in their jobs, or automate tasks entirely. In other words, algorithmic systems are used throughout a variety of operational areas to assist, augment, or automate work. For example, a Human Resources (HR) department may use predictive analytics to anticipate future staffing needs, screen or prioritize applicants with the assistance of an algorithmic system,[7] and score candidates’ video interviews using emotion recognition technology. Algorithmic systems are fueled by data; as they become more widely adopted in the workplace, the imperative to collect granular, real-time, and continuous data similarly increases.
In some cases, employers implement algorithms that make or facilitate consequential employment-related decisions without any human oversight. These decisions may have significant impacts on workers’ wages, benefits, hours, work schedules, hiring, discipline, promotions, terminations, job content, and productivity requirements. While proponents maintain that algorithmic technologies are efficient and can mitigate human prejudices, critics argue that they are often unaccountable, dehumanizing, and can introduce new sources of bias. It is often difficult or impossible to determine how an algorithm arrived at its determinations, which is especially problematic when employers use these systems to make important decisions about hiring, work scheduling, promotions, or discipline. Policy makers, academics, and advocates who are concerned about these issues are considering a variety of approaches to regulate the information systems and algorithmic technologies that employers use to collect and process worker data.
Policy makers at the federal, state, and local levels have begun to respond to specific issues raised by data processing systems and algorithmic technologies.[8] Think tanks, academics, and advocacy groups have also advanced numerous policy proposals. These responses reflect varying approaches to addressing the novel issues raised by data processing systems and algorithms. Some recommend technology-specific policies, such as moratoriums or bans on facial recognition technologies, while others have proposed laws targeting particular issue areas, such as laws regulating the use of algorithmic systems in the criminal justice system. Another strategy focuses on addressing specific harms, such as erratic work schedules generated by algorithms. Although the majority of policies currently target government use of algorithms or focus on consumer privacy issues, many of these regulations could be adapted or applied to the workplace. In some cases, such as algorithmic bias, the workplace is one of the primary areas of concern.
This working paper provides an inventory of existing public policy strategies that have been developed to address the challenges of data processing systems and algorithms in the workplace. The policy elements presented here are organized into the following five groups:[9]
- Notice and Transparency
Employers may collect data and implement algorithms without disclosing these practices to affected workers. For example, unbeknownst to its employees, Amazon used an algorithm to programmatically fire warehouse fulfillment center workers who failed to meet productivity targets.[10] When employers are not transparent about the systems they use, workers and their representatives are left with little recourse or ability to demand accountability. Notice and transparency policies aim to remedy this situation by requiring employers to provide workers with substantive information about their use of data processing systems and algorithmic technologies or periodically disclose such information to third-party auditors or government agencies.
- Accountability
Although transparency is a necessary condition of accountability, it is insufficient to ensure that employers use data processing systems and algorithms responsibly. Unfortunately, employers may experience data breaches that reveal sensitive information about their workforce, including biometric identifiers, social security numbers, or bank account numbers.[11] These breaches may put workers at risk and cause harm to their personal lives.[12] Beyond the threat of data exposure, data processing systems and algorithms may hurt workers by invading their privacy, threatening their personal autonomy and dignity, and jeopardizing their health and safety. Policies that promote accountability require organizations to adequately safeguard workers’ personal information and employ a risk-based approach to collecting data and using algorithmic systems.
- Individual Data Rights
Providing workers with privacy rights over their own data is another approach to safeguarding workers from emerging harms. Privacy rights allow individual workers to exercise agency over how their data are collected and used, and to establish limitations based on their personal preferences. Specifically, they may allow workers to consent or object to the collection, processing, or use of personal data; delete or correct inaccurate or misleading data; or access and transport data generated over the course of their employment.
- Workplace Rights
Data processing systems and algorithmic technologies used in the workplace may further harm workers by undermining established employment and labor laws. Algorithmic systems can produce outcomes that are biased against workers in protected classes, such as when Bon-Ton Stores, Inc. implemented a hiring algorithm that evaluated factors highly correlated with race.[13] Unfortunately, existing laws may be unable to provide redress for workers who are discriminated against by algorithmic systems.[14] Furthermore, the chilling effects of electronic monitoring technologies may inhibit workers from exercising their workplace rights, including engaging in protected concerted activity. For example, Walmart employs both traditional surveillance techniques and novel methods such as monitoring employees’ social media activity, which serves to discourage workers from organizing or forming a union.[15] Policies promoting workplace rights attempt to bolster antidiscrimination, employment, and labor laws to address the emerging or heightened challenges posed by data processing systems and algorithmic technologies.
- Government Oversight and Regulation
Although notice and transparency, accountability measures, individual data rights, and workplace protections are important, they may be insufficient to address the collective harms created by data processing systems and algorithmic technologies. Individual workers may lack the time and expertise to exercise their rights or may fear retaliation. Furthermore, due to rapid developments in computing power, monitoring techniques, and artificial intelligence, protections that target specific technologies or practices may quickly become obsolete. Recognizing these challenges, some have advocated for expanding the powers of existing regulatory agencies or establishing new governance institutions to address the impacts of data-fueled technologies at work. These institutions may be given broad regulatory authority similar to the Food and Drug Administration (FDA), or vested with more limited powers, ranging from standards setting to apportioning liability.
Before proceeding, it should be noted that the purpose of this working paper is to provide a general overview of existing public policy strategies and proposals responding to data processing systems and algorithms in the workplace. It does not address the role of worker voice in governing data and algorithms at work; worker voice is a significant topic that deserves more detailed treatment than can be afforded here. Nor does this working paper attempt to analyze the efficacy of specific policies or advocate any particular strategy. In certain instances, arguments for and against specific policies are presented, along with brief explanations, as they appear in the literature; this is solely for the purpose of familiarizing the reader with the debates surrounding certain issues.
Endnotes
[1] Data processing refers to any operation or set of operations performed on data, including collection, storage, manipulation, alteration, transmission, dissemination, correction, use, or erasure. Data processing systems refer to the combination of hardware, software, organizations, people, and policies that execute or otherwise influence a data processing activity.
[2] An algorithm refers to a computational process for solving a problem or accomplishing a task. A basic algorithm can be outlined in code by a human programmer, using simple if-then logic for how the computer will perform a task. Alternatively, in the case of learning algorithms (often referred to as “machine learning algorithms” or “artificial intelligence”), programmers write code enabling the computer to develop its own rules for how to perform the task by leveraging statistical, mathematical, and computer science techniques on large data sets. The data sets that these algorithms analyze to develop a model are called “training data.” Due to their complexity, it may be impossible for anyone, including a machine learning algorithm’s creator(s), to understand how the algorithm determined the appropriate “rules.” Although algorithms are technically a component of a data processing system, they will be referred to separately because they are associated with distinct harms which are often addressed in targeted policies. For more information on algorithmic systems, how they work, and how they are used, please see Data and Algorithms in the Workplace Part I.
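To make this distinction concrete, the following minimal Python sketch contrasts a hand-coded if-then rule with a learning algorithm that derives its own rule from training data. The screening task, the three-year threshold, and the data are all hypothetical and purely illustrative; they are not drawn from any system described in this paper.

```python
# Minimal, purely illustrative sketch (hypothetical data and task)
# contrasting a hand-coded rule with a learning algorithm.
from sklearn.tree import DecisionTreeClassifier

# Hand-coded algorithm: a programmer writes the if-then logic explicitly.
def screen_applicant(years_experience: float) -> bool:
    # The threshold (3 years) is chosen directly by a human.
    return years_experience >= 3

# Learning algorithm: the programmer supplies "training data" (past
# applicants and past outcomes) and the computer infers its own rule.
training_features = [[1.0], [2.0], [4.0], [6.0]]   # years of experience
training_labels   = [0, 0, 1, 1]                   # 0 = rejected, 1 = hired

model = DecisionTreeClassifier().fit(training_features, training_labels)

# Both produce a screening decision, but only the first rule can be read
# directly; the second was derived from the training data.
print(screen_applicant(5.0))          # True
print(model.predict([[5.0]])[0])      # 1
```

Even in this toy case, the learned rule must be inferred from the fitted model rather than read off the code, which anticipates the opacity concerns discussed in this paper.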
[3] This may include GPS location, acceleration and deceleration, and speed.
[4] Computer activity data may include data such as browsing history, email contents, keystrokes, and time spent idle.
[5] Employers may offer workplace “wellness” programs in order to promote health, decrease the risk of disease, and lower insurance costs. These programs may collect information such as heart rate, exercise history, and sleep patterns. Ajunwa, Ifeoma, Kate Crawford, and Jason Schultz. “Limitless Worker Surveillance.” California Law Review 105 (2017): 735.
[6] Biometrics refers to the intrinsic physical or behavioral characteristics that can be used to verify an individual’s identity. Examples of biometric identifiers include facial and palm geometry, gait, voice tone, and iris measurements. See “Biometrics.” Electronic Frontier Foundation, accessed June 3, 2020. https://www.eff.org/issues/biometrics.
[7] Algorithmic systems include “decision-making algorithms,” which are also often referred to as Automated Decision Systems (ADS). While there is no universally agreed-upon definition of decision-making algorithms or ADS, they generally refer to “[a]ny software, system, or process that aims to automate, aid, or replace human decision-making. Automated decision systems can include both tools that analyze datasets to generate scores, predictions, classifications, or some recommended action(s) that are used by agencies [organizations] to make decisions that impact human welfare and the set of processes involved in implementing those tools.” Richardson, Rashida. Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force. AI Now Institute, December 4, 2019. https://ainowinstitute.org/ads-shadowreport-2019.html.
[8] While there have been several high-profile federal proposals addressing issues raised in this working paper, the majority of legislation that has actually been enacted was implemented at the state and local level.
[9] Some policies that have been proposed or enacted comprise elements in multiple categories. For this reason, specific policies may be referenced in multiple places throughout this inventory.
[10] Employees learned of the system only once it was revealed through a labor dispute. Lecher, Colin. “How Amazon Automatically Tracks and Fires Warehouse Workers for ‘Productivity.’” The Verge, April 25, 2019. https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations.
[11] Porter, Jon. “Huge Security Flaw Exposes Biometric Data of More than a Million Users.” The Verge, August 14, 2019. https://www.theverge.com/2019/8/14/20805194/suprema-biostar-2-security-system-hack-breach-biometric-info-personal-data; Whittaker, Zack. “Chegg Confirms Third Data Breach since 2018.” TechCrunch, April 29, 2020. https://social.techcrunch.com/2020/04/29/hackers-chegg-employee-breach/; Stahie, Silviu. “Interserve Hit by Data Breach; 100,000 Employee Records Stolen.” Security Boulevard, May 15, 2020. https://securityboulevard.com/2020/05/interserve-hit-by-data-breach-100000-employee-records-stolen/.
[12] “Kaspersky Finds 30% of IT Security Managers Missed Important Personal Events Due to Data Breaches.” Accessed May 22, 2020. https://usa.kaspersky.com/about/press-releases/2020_do-you-care-about-your-companys-reputation-and-employee.
[13] Bon-Ton Stores used an algorithmic system developed by Kenexa to screen job applicants. Among the factors it considered was how far the applicant lived from work. While this may seem like a facially neutral variable, its correlation with race or ethnicity may result in it acting as a “proxy variable” for protected categories. Williams, Betsy Anne, Catherine F. Brooks, and Yotam Shmargad. “How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications.” Journal of Information Policy 8 (2018): 78–115; Walker, Joseph. “Meet the New Boss: Big Data.” Wall Street Journal, September 20, 2012, sec. Management. https://www.wsj.com/articles/SB10000872396390443890304578006252019616768.
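As a rough, schematic illustration of this proxy-variable dynamic (not a reconstruction of the Kenexa system), the sketch below uses entirely synthetic data in which commute distance is correlated with membership in a hypothetical protected group. A model trained on distance alone, with no access to the protected attribute, still produces sharply different selection rates across groups.

```python
# Illustrative sketch with synthetic, hypothetical data: the model never
# sees the protected attribute, yet a correlated "neutral" feature
# (commute distance) lets disparities re-enter as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute (0/1) and a commute distance correlated
# with it, e.g., as a result of residential segregation.
group = rng.integers(0, 2, size=n)
distance = rng.normal(loc=5 + 10 * group, scale=3, size=n)

# Historical outcomes that (unfairly) penalized long commutes.
hired = (distance + rng.normal(scale=3, size=n) < 12).astype(int)

# Train on distance alone; the protected attribute is never an input.
model = LogisticRegression().fit(distance.reshape(-1, 1), hired)
predicted = model.predict(distance.reshape(-1, 1))

# Selection rates still diverge sharply by group.
for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
```

The point is only structural: removing a protected attribute from a model’s inputs does not prevent discriminatory outcomes when other inputs are correlated with it.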
[14] Kim, Pauline T. “Data-Driven Discrimination at Work.” William & Mary Law Review 58 (2016): 857.
[15] Berfield, Susan. “How Walmart Keeps an Eye on Its Massive Workforce.” Bloomberg, November 24, 2015. http://www.bloomberg.com/features/2015-walmart-union-surveillance/.