
Artificial Intelligence: Principles To Protect Workers



Introduction: AI in the Workplace

There is a path where new technology makes work better and safer, with good union jobs that have fair pay and better job quality. In this vision, working people have economic security, knowing that companies and public agencies must follow rules to make sure technology such as artificial intelligence (AI) is used safely, responsibly, and fairly. These rules put people first, and include worker input in the research and development (R&D) process, during development and deployment, and at the collective bargaining table where they negotiate protections with employers. There is accountability with meaningful enforcement so that employers think twice before designing or using AI systems that hurt workers or communities. Everyday Americans have the power to shape how, when and if new technology is deployed. With workers having a real voice in technology, AI strengthens, rather than weakens, democratic institutions, creating an economy that benefits everyone and ensuring public services are not undermined by improper uses of AI. AI should be about benefiting everyone, not just tech billionaires and corporate shareholders. These principles seek to create that future.

Throughout our history, unions have been at the center of technological change, have managed through major transitions, and have fought to ensure that new technologies serve workers and their communities, and make jobs better and safer. But AI shouldn’t repeat the mistakes of the past, where periods of globalization, such as the North American Free Trade Agreement, and automation prioritized short-term gains for companies and investors, and left working families and their communities to bear the cost. There is an opportunity at this moment to avoid previously failed approaches and instead advance new strategies that produce better outcomes for working people and our country.

This vision for a worker-centered technological future does not reflect what we are seeing today as new AI-powered technology is being unleashed on all of us, much of it unregulated and some of it dangerous. Without commonsense rules and a true commitment to boosting worker voice, there is a different future where good jobs, working conditions, safety, economic security, and worker and civil rights are at risk. Already we are seeing threatening uses of AI at a time when elected officials are trusting AI developers—many of them the world’s largest technology companies—to act responsibly, even though we know they are failing to protect people from harm and few guardrails are in place. There is an urgency to have a meaningful national conversation about how to both propel innovation and adopt sensible policies that protect working people and the general public from the well-documented negative consequences of unregulated AI.

Harmful AI is not inevitable—the choices we make today will determine the future of cutting edge technology and work. And doing nothing, as Big Tech special interests would recommend, is a choice—the wrong choice. The AFL-CIO has adopted principles for fair, safe, responsible, and worker-centered AI. We choose a future where progress and opportunity benefit everyone and where AI isn’t used against us or to weaken the protections that a fair and thriving democracy demands.

Key Priorities for Worker-Centered AI

1. Strengthen labor rights and broaden opportunities for collective bargaining

For over a hundred years, the labor movement has negotiated with employers over the implementation of new technologies, and AI is no different. The adoption of new technology in a workplace should be negotiated by labor and management to make sure it makes work better, respects labor rights, minimizes harm to the workforce, and is developed and deployed through genuine labor-management collaboration.

If AI does lead to job disruption or displacement, there must be proper advance notice of a reduction in jobs or job functions, meaningful reemployment and income support, and effective training and retraining. Technological change must never be shoved down the throats of a workforce or used to undermine the labor rights that built the middle class.

Strong enforcement of labor rights is also essential to prevent AI from being used as a union-busting tool that undermines workers’ rights. It is critical that protections be in place to stop employers from weaponizing AI systems against workers’ right to organize. Surveillance systems can alert management when people are talking in a bathroom or when the word “union” is said, track “troublemakers,” and manipulate scheduling to prevent organizing or even to retaliate.

2. Advance guardrails against harmful uses of AI in the workplace

When private sector employers use AI that harms workers, they are placing profit over the interests of people. When public agencies deploy AI hastily without asking the tough questions and ensuring the public is protected, people and services suffer. AI systems can be used to monitor, evaluate and control workers, often without workers even knowing about it. Workers can be punished for not smiling enough or for taking a bathroom break, or pushed to work harder and faster in unsafe ways. These practices can lead to psychological stress, increased injury risk, burnout, turnover and, in the absence of appropriate guardrails, job loss. Meanwhile, employers collect and often sell worker data while feeding it into AI systems that carry out dangerous experimentation with unsafe and unproven automation, leading to other harms such as deskilling.

Workers deserve to know what data is being collected about them and how it’s being used, with clear consent and opt-in requirements. Automated decisions must be reviewed by a human, and workers must be given the right to appeal AI-driven decisions about scheduling, discipline, pay, hiring and firing, with protection from retaliation. AI systems must be tested before and during use to reduce dangers and prevent dangerous directives that harm workers and the public. Private and public sector workers who report harms and misuses of AI must have whistleblower protections. AI should not become a tool to jettison millions of workers from their jobs; instead, it should be used to augment work and improve job quality.


3. Support and promote copyright and intellectual property protections

Workers in creative industries and sports face the continuing risk of seeing their works, their voices and their likenesses stolen by generative AI. Without protections, AI may upend the livelihoods of professionals who rely on effective copyright and intellectual property rights to earn compensation and benefits, as well as to ensure future career opportunities. Upholding these protections, like making sure AI isn’t trained on creative works without explicit consent and compensation, ensures creative professionals maintain their pay, health care, retirement security and future job opportunities.

4. Develop a worker-centered workforce development and training system

Technological change can lead to significant shifts in needed job skills. It can and does spawn new industries and jobs. Oftentimes, low-quality Band-Aid training programs promise quick fixes but fail to help workers meaningfully; in some cases, there are no worker support programs at all, resulting in displacement. Instead, joint labor-management partnerships and union-centered, high-quality training programs, including Registered Apprenticeships, should be used when developing AI workforce training or digital literacy programs. The labor movement and its network of union training centers, which spans the entire United States, is one of the largest training institutions in the nation, second only to the Department of Defense. These training programs have a proven track record of preparing workers for and connecting them to high-quality, family-sustaining jobs. Innovation will not succeed if workers are not supported and if unions do not play a central role in managing transitions, as they always have.

5. Institutionalize worker voice within AI R&D

Workers are often left out of the technology development process, despite their expertise and the fact that they are often the end users. Botched technology rollouts are usually the product of clumsy development and deployment that ignores workers. Working people know what is needed day to day better than a distant engineer, software developer or senior executive. When governments use taxpayer dollars to fund AI research, workers and unions should be part of the process; including them from the beginning helps to create more effective, safer technology and can help to avoid bad decisions around AI development and implementation. The reality is that America spends billions in public money to advance innovation—incorporating worker voices and unions into these research initiatives should be a requirement and a national priority.

6. Require transparency and accountability in AI applications

AI is often a black box, lacking transparency and leaving workers in the dark about when and how it is being used. The negative consequences of black box systems include threats to people’s rights, such as those embodied in civil rights laws and in collective bargaining agreements. Allowing an algorithm to secretly determine whether a worker should be hired, promoted, disciplined or fired is a recipe for discrimination, unfairness, stress and job deterioration. Easy-to-understand explanations and meaningful human review of decisions are essential to upholding fairness, especially given the high rate of error found in various AI systems, including large language models. Workers must also have the right to use their professional judgment, without retaliation, to override AI decisions, especially in safety-sensitive or even life-or-death situations. There must also be strong accountability and enforcement for AI companies and employers that harm workers. Companies whose AI systems harm workers and the public must be held liable, and that legal responsibility must be meaningfully enforced, including significant financial penalties, both to incentivize the development and deployment of safe systems and to provide remedies for the harms that have been caused.

7. Model best practices for AI use with government procurement

Public tax dollars should not be spent on purchasing AI systems that are unsafe, harm public sector workers or reduce the quality of government services. Public agencies must set the gold standard for responsible AI use and require that AI systems uphold the public interest and respect workers’ rights, including transparency, privacy, intellectual property rights and civil rights. They also must engage with unions beforehand to protect jobs and vital services, enlist workers’ experience in evaluating what works and what doesn’t, and, overall, center humans in the work. A federal, state or local agency should not adopt new AI-enabled tools that affect work or services without negotiating and collaborating, in the early stages of development, with the workforce and their unions.

8. Protect workers’ civil rights and uphold democratic integrity

AI systems may turbocharge bias and discrimination, making existing inequalities worse through systems that are difficult to detect and challenge. When algorithmic decision-making systems exclude a person based on race, gender, age, disability or other protected characteristics, those uses violate their civil rights and must be barred. These threats aren’t limited to the workplace; they spill over into broader society. When AI systems are used to produce misinformation and deepfakes, they undermine elections and our democracy. The ability to quickly, easily and inexpensively use generative AI to produce realistic content has filled the internet with material that may or may not be true, undermining the public trust and informed public discourse that are fundamental to democratic elections and systems. There should be serious consequences for using technology to undermine democracy and civil rights.