Beyond Basic Income: Claiming Our Right to Govern Technology

Annette Bernhardt

Originally published on Medium, May 11, 2017

One common characteristic of universal basic income advocates, and indeed progressives and labor more generally, is a near-fatalistic acceptance of the current path of technological development. It is a gaping hole in discussions about the future of work: either we are sticking our heads in the sand and avoiding the topic altogether, or we’re accepting automation as inevitable and jumping straight to basic income as the solution. For a movement that routinely challenges the market discipline of capitalism, this constitutes a striking retreat. To state the obvious, humans are the creators of new technology and can shape the path it takes (at least for now). Automation and displacement are not the only possible outcomes.

A truly progressive agenda around the future of work should therefore add control over technology into the mix: control over which technologies are developed, to what ends, and how they are incorporated into the workplace. And this agenda needs to expand beyond the current fixation on automation. New technologies have many other direct effects on tasks — deskilling or upskilling existing ones, creating new ones — as well as a slew of indirect effects, such as enabling outsourcing and the integration of a global virtual labor force. It’s not just about the robots.

So what would it look like to claim our right as a society to govern technological development and its effects on workers and the labor market? Here are three strategies, moving from less to more interventionist. Much of what follows is framed as questions, given how little has been done to develop a proactive, worker-focused response that (importantly) is not anti-innovation.


Mitigation

At the very least, it is high time that progressives develop a robust and well-funded mitigation agenda. Universal basic income is one form of mitigation, of course, but fleets of omniscient robots are decades away. There are plenty of near- and medium-term technologies whose effects we can anticipate or already see. Immediate forms of restitution could include industry-specific funding pools and the technology equivalent of Trade Adjustment Assistance (education, training and job placement). Any number of business-side taxes could be levied for funding, including the robot tax endorsed by Bill Gates or a requirement that Uber pay into a fund for every self-driving car it puts on the road. And again, mitigation is not just about responding to automation. We might devise a deskilling tax, or mandatory retention and re-training laws when skill-changing technologies are introduced in the workplace.

Whatever the specific set of tools we decide upon, a broader cost-benefit analysis of new technologies will be needed. Imagine that we include as metrics the number of workers displaced, the loss in their lifetime earnings, and the impact on their health and their children’s earnings. How would self-driving trucks fare under such an analysis? Even if the benefits still end up outweighing the full societal costs, at least we then have a metric by which to assess restitution. Or perhaps a model of truck automation would emerge that preserves some percentage of the workforce to guide and manage the fleet.
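The broader accounting described above can be made concrete. Here is a minimal sketch of such a tally; every figure and cost category in it is a hypothetical placeholder chosen for illustration, not an estimate of actual trucking-industry numbers:

```python
# Illustrative full-cost accounting for an automation project.
# All figures below are hypothetical placeholders, not real estimates.

def societal_net_benefit(efficiency_gains, workers_displaced,
                         avg_lifetime_earnings_loss, health_cost_per_worker,
                         childrens_earnings_cost_per_worker):
    """Net benefit once displacement costs are counted alongside gains."""
    displacement_cost = workers_displaced * (
        avg_lifetime_earnings_loss
        + health_cost_per_worker
        + childrens_earnings_cost_per_worker
    )
    return efficiency_gains - displacement_cost

# Hypothetical numbers for a truck-automation scenario (USD):
net = societal_net_benefit(
    efficiency_gains=50e9,               # projected industry savings
    workers_displaced=300_000,
    avg_lifetime_earnings_loss=100_000,  # per displaced worker
    health_cost_per_worker=20_000,
    childrens_earnings_cost_per_worker=30_000,
)
print(f"Net societal benefit: ${net:,.0f}")
```

Even a crude tally like this makes the point: a project can look enormously profitable to the firm while most of the gross gain is consumed by social costs, and whatever remains provides a baseline for assessing restitution.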

Collective bargaining

Because unions are currently fighting for their lives, the first instinct within labor can be to obstruct technology. Important opportunities are likely being missed as a result. In workplaces where unions still have enough density, the deployment of new technologies should become a topic of bargaining. In the 1960s and ’70s, the longshoremen’s union (ILWU) bargained over the adoption of shipping containers, ensuring job security and guaranteed pensions for incumbent workers. But a lot of technological change is slow and incremental, the result of many small decisions. Management consultants shouldn’t be the only voice guiding unionized employers when those decisions are made.

Technological change within one industry can also open up opportunities in another. For example, meal delivery apps are threatening to disrupt the food supply chain by delivering meal kits directly to consumers. Beneath the high-tech gloss lie surprisingly traditional jobs: scores of workers in large food processing facilities, many of them direct employees. Investigative reporting on Blue Apron’s plants last year uncovered low wages and serious health and safety violations. If this new industry segment grows and thrives, it could offer fertile organizing ground.

A more ambitious approach is to figure out how to harness new technology for organizing. For example, alt-labor is exploring whether the aggregation provided by on-demand platforms can help to organize workers who were previously isolated in disaggregated workplaces, such as domestic workers. One barrier is that these platforms typically do not allow worker-to-worker communication (which is no accident). Why not require, as a condition of receiving a business license, that labor platforms enable secure communication between workers and agree to bargain if organizing results?


Governance

While mitigation and bargaining over impacts are important, ultimately the progressive goal should be outright governance: a seat at the table when decisions are made over which technologies are developed in the first place, and in pursuit of which goals. The biotech field offers a fascinating example. The advent of genetic engineering has set off robust debates over who owns technology, whether monetization distorts innovation, and government’s right to regulate. The public’s right to weigh in on biotechnology seems obvious. Why is it not equally obvious that we have the right to weigh in on other impacts of new technology, including job quality and employment?

One version of governance is to control technology via direct regulation. For example, consider lending, hiring or sentencing algorithms that yield discriminatory outcomes by race or gender. Legal scholars are actively debating what type of anti-discrimination legal regime is needed to address these cases, which could potentially lead to regulating the very nature of machine learning itself (since it is dependent on classification schemes). Product market regulation is another ripe arena. How different would ride-sharing look if legislators had resisted Uber’s lobbyists and classified Uber as a taxi company? Taxi apps would still have been developed, but likely with different effects on drivers. The question of who can access the big data generated by private sector firms is also receiving attention; often it is customers and workers who contribute that data. Greater access by government could unveil underlying business models (such as predatory pricing) that might then be subject to regulation.
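One concrete tool regulators could already demand: the adverse-impact ratio long used in US employment law (the “four-fifths rule”) can be computed directly from an algorithm’s decisions. A minimal sketch, using invented lending decisions purely for illustration:

```python
# Adverse-impact ("four-fifths rule") check on an algorithm's decisions.
# The decision data below is invented for illustration.

def selection_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Under the four-fifths rule, a ratio below 0.8 flags
    potential disparate impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical lending decisions (1 = approved) for two demographic groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse-impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50, below 0.8
```

The point of the sketch is that disparate-impact auditing needs only the algorithm’s inputs and outputs, not its internals, which is what makes it a plausible first target for regulation.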

The more ambitious version of governance is to shape technology via a multi-stakeholder model. The key insight here is that there are multiple paths of technological development. Optimizing efficiency by reducing or eliminating human input is not the only path; within any given occupation or industry there are alternatives where technology works with humans to improve productivity. But how to shape what engineers call the design choice — augmentation vs. automation — is not yet clear.

Ideally, we would establish mandated oversight structures that allow for multi-stakeholder decision-making over what is developed. We would greatly expand the goals of innovation — to eliminating poverty, saving the planet, ensuring the full realization of every human being, ending dangerous and back-breaking work — and maybe even insist that some amount of work has intrinsic value to humans. And we would harness the powerful fact that public dollars fund a lot of technological development, often in universities (as the saying goes, venture capital only funds the last mile).

Opportunities for this ambitious form of governance will often be found at the industry level, especially if there is a clear public interest. In Germany, government is actively collaborating with employers and labor to make its manufacturing sector a leader in technology while preserving a role for workers. While we don’t have a social partners system in the US, the principle still holds. In our health care sector, for example, the path of technological development is not at all set in stone. Current attempts to introduce new technologies such as electronic patient records, automatic medication dispensers, and computer-assisted diagnosis have run into myriad challenges, some due to lack of federal standards, some due to competing goals, some due to unintended effects. A robust social bargaining model backed up by regulation could help pave the way to a health care system that uses technology to free up workers to deliver high-quality, patient-centered care.

Is any of this even remotely feasible? Regulating and shaping technological change will require an enormous amount of power — over the private sector, over government, and over universities. But that is equally true of the basic income model, which is predicated on there being sufficient political will to generate the needed revenue. If we are willing to challenge capital to fund a basic income response to automation, then why not also try to govern technology directly?

The progressive case to society is that alternatives to shareholder capitalism exist and can thrive in the U.S. By extension, it should be possible to design and implement technology in a way that complements and values human work and is economically viable.

A final word: the tech sector is forging ahead without us. Last year, Google, Facebook, Amazon, IBM and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society, with the goal of establishing an ethics of artificial intelligence to ensure that it is developed “safely, ethically, and transparently.” Reportedly, stakeholders from civil society will be invited, but in the end this is self-regulation. Who will be at the table to represent the voices of affected communities and workers, and how much power will they bring?

A previous version of this article appeared in the Boston Review, April 25, 2017.