From “could we” to “how should we”

Over the past decade, leaders have faced an increasing need to grapple with tough ethical issues posed by the future of work. The question to ask is not just "could we," but also "how should we."

As the future of work rapidly evolves and organizations integrate people, technology, alternative workforces, and new ways of working, leaders are wrestling with an increasing range of resulting ethical challenges. These challenges are especially pronounced at the intersection between humans and technology, where new questions have risen to the top of the ethics agenda about the impact of emerging technologies on workers and society. How organizations combine people and machines, govern new human-machine work combinations, and operationalize the working relationship between humans, teams, and machines will be at the center of how ethical concerns can be managed for the broadest range of benefits. Organizations that tackle these issues head-on—changing their perspective to consider not only "could we" but also "how should we"—will be well positioned to make the bold choices that help to build trust among all stakeholders.

Current drivers

Ethical concerns are front and center for today’s organization as the nature of work, the workforce, and the workplace rapidly evolve. Eighty-five percent of this year’s survey respondents believe that the future of work raises ethical challenges—but only 27 percent have clear policies and leaders in place to manage them. And managing ethics related to the future of work is growing in importance: More than half of our respondents said that it was either the top or one of the top issues facing organizations today, and 66 percent said it would be in three years.

When we asked our respondents what was driving the importance of ethics related to the future of work, four factors rose to the top: legal and regulatory requirements, rapid adoption of AI in the workplace, changes in workforce composition, and pressure from external stakeholders.

The leading driver that respondents identified was legal and regulatory requirements. Given that there is often a lag in laws and regulations relating to both technology and workforce issues, this perception is surprising. Granted, there has been some activity on this front within the European Union: In February 2019, the European Parliament adopted a resolution framing European industrial policy on AI and robotics, aiming to encourage the establishment of laws that would promote “ethical by design” technologies. There has also been some state and city legislation in the United States, including California’s 2019 law requiring hiring entities to treat gig workers as employees instead of contractors. However, outside a few moves such as these, policy changes have been slow in coming.

The pressure on ethics created by the rapid adoption of AI in the workplace, however, is much more understandable. AI and other technologies make ethics in the future of work more relevant because the proliferation of technology is driving a redefinition of work. Perhaps the issue that has attracted the most attention in this regard is the question of how technology affects the role of humans in work. While our survey found that only a small percentage of respondents are using robots and AI to replace workers, headlines of a forthcoming "robot apocalypse" continue to capture global attention and raise concern. Organizations implementing technologies that drive efficiencies can expect to face decisions about whether and how to redeploy people to add strategic value elsewhere and, if they decide to eliminate jobs, how to support the workers thus displaced.

As technology becomes more embedded into work, its design and use need to be assessed for fairness and equity. Organizations should consider questions such as whether their applications of technology decrease or increase discriminatory bias; what procedures they have in place to protect the privacy of worker data; whether technology-made decisions are transparent and explainable; and what policies they have to hold humans responsible for those decisions' outputs.

The third driver of ethics' importance in the future of work cited most often by respondents is changing workforce composition, which raises issues about the evolving social contract between the individual and the organization, and between the organization and society. The growth of the alternative workforce is one major phenomenon contributing to these concerns. The number of self-employed workers in the United States is projected to hit 42 million this year, and in Britain, the gig economy more than doubled from 2016 to 2019 to encompass 4.7 million workers. "Invisible labor forces" are being exposed by recent research such as Mary Gray and Siddharth Suri's Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, which documents the unsavory working conditions of many workers performing the high-tech piecework (e.g., labeling data, captioning images, flagging X-rated content, and so on) that powers automation and AI. The fast growth of this workforce segment is calling attention to related ethical concerns, including alternative workers' access to fair pay, health care, and other potential benefits.

The final major driver of ethics’ importance in the future of work is that organizations are facing pressure from customers, investors, and other external stakeholders to act responsibly on ethical issues, even those that do not affect business operations—including issues such as access to health care, rising inequality, and climate change. Organizations are being called upon to address these challenges from a future-of-work perspective by designing work in innovative ways that can help ameliorate related concerns.

What is also interesting is that one major stakeholder group, the board of directors, is not generally weighing in on these issues, as only 12 percent of our respondents felt that board and C-suite pressure was driving a focus on ethics in the future of work. This finding is somewhat worrisome, as boards and leaders must set the right “tone at the top” for organizations to make ethics in the future of work a priority.

Our 2020 perspective

These drivers are shaping a number of specific ethical challenges related to the future of work, which can suggest an actionable agenda for addressing these issues. Yet respondents also overwhelmingly indicated that their organizations were not ready to manage these ethical challenges, with only between 8 and 19 percent saying their organization was “very ready” for any given issue.

A closer look at our respondents' views on organizational readiness reveals an interesting insight: Organizations are least prepared to handle ethical dilemmas in areas where humans and technology intersect. The issue for which the most organizations reported being prepared was a technology-focused one: maintaining privacy and control of workers' data. Next came issues that are distinctly human, such as fairness of pay, the design of jobs for sustainability, and treatment of alternative workers. But in matters where humans and technology converge—automation, use of AI, and use of algorithms—many organizations appear woefully unprepared.

We believe this gap has much to do with the broader tendency for organizations to treat technology and humanity as distinct paths with their own programs, processes, and solutions. Now, as boundaries blur between humans and machines, organizations are not ready to address the two paths together. These questions at the intersection of humanity and technology—how individuals are being monitored, how decisions are being made on their behalf, or how their jobs may be affected or eliminated—are very personal. And once the challenges become personal, organizations are finding themselves unprepared to meet them.

In the face of increasing ethical challenges, we believe that organizations must make intentional and bold choices. Those choices should be framed by a change in perspective: a shift from asking only "could we" to also asking "how should we" when approaching new ethical questions. By considering the broader implications of integrating teams, people, and technology, organizations can evolve an ethical approach to the future of work that goes beyond an assessment of technological feasibility to consider technology's impact on humans and business results.

Learning by example

Stakeholder pressures are prompting organizations to act on the ethics question. Survey respondents from leading ethical organizations are more likely to say the increased importance of ethics is driven by direction from their boards and leaders, legal and regulatory requirements, and pressure from external stakeholders.

One way some organizations are responding is by creating senior executive positions with a specific focus on driving ethical decision-making across the organization. Beyond traditional chief ethics and compliance officer roles, these organizations are formalizing responsibility for ethics around specific future-of-work domains such as AI. In 2019, Salesforce hired its first Chief Ethical and Humane Use Officer to ensure that emerging technologies were being implemented ethically in the organization and to help Salesforce use technology in a way that "drives positive social change and benefits humanity."

Some organizations are also addressing ethics issues by using new technologies in ways that can have clear benefits for workers themselves. For example, the technology company Drishti designs and implements solutions that combine AI and computer vision technologies to measure manual processes and associated tasks performed by human workers on a manufacturing line in near-real time. The technology gives workers access to robust training information, supports safer work habits to reduce workplace injuries, and provides feedback and rewards for individual contributions on the line—all things that have historically been challenging in a fast-moving manufacturing environment. While observing people in near-real time at work could be seen as a violation of personal privacy, Drishti addresses those concerns head-on by bringing workers into the conversation early, showing them the angle of the cameras and emphasizing the focus on the process, not the individual. The company reports that when this is done, workers almost immediately “see the value of the technology and its potential to improve their lives and secure their jobs, and they’re on board and excited.” The aim of the technology—improving the human experience through process analysis, measurement, and insights—sets a clear and ethical case for its use, a case that benefits the company and, just as importantly, the line associate.

Pivoting ahead

In an age when more people trust their employers to do what is right than trust governments, nongovernmental organizations, the media, or even business in general, it is incumbent upon organizations to address challenging ethical questions in all aspects of the future of work. Rather than reacting to ethical dilemmas as they arise, those who wish to lead on this front will anticipate, plan for, and manage ethics as part of their strategy and mission, focusing on how these issues may affect stakeholders both inside and outside the enterprise. The challenge is to move beyond the view that ethical issues must involve trade-offs and competition, and to focus on how to operationalize and govern the combination of humans, machines, and algorithms working as a team. This can enable organizations to harness the power of humans and technology together to truly operate as a social enterprise.