By Yannis C. Yortsos
Dean of the Viterbi School of Engineering
We live in unprecedented times. Technological advances occur at an exponentially accelerating pace, constantly changing our world as never before. And this rate of change is only expected to grow further, as it is a consequence of the fundamental underpinnings of innovation (often called Moore's law, but in fact having a much more universal validity than exponentially increasing electronic chip density). These changes can be immensely beneficial, e.g. in health care, or in helping eliminate extreme poverty. They can also have unintended consequences, which can be equally powerful and long-lasting. This is not at all paradoxical.
I like the following very simple definition of technology, which I have paraphrased from Brian Arthur: Technology is leveraging phenomena for useful purposes. Absent from the definition is what is typically associated with technology: devices, algorithms, etc. Indeed, technology now encompasses not only physical and chemical phenomena, as in the more conventional definition. It also includes biological and, increasingly, social phenomena. "Tech" companies today, such as Facebook, penetrate and shape the realm of social phenomena as never before.
A key word in the above definition is useful; it is a pivotal word that connects technology to ethics. Technology is by definition amoral. It is intent that provides the missing link to ethics. If I were to think in terms of Maslow's hierarchy of needs, a useful purpose would be enhancing any of the corresponding stages of the Maslow pyramid: sustainability, security, health and life enrichment. These in fact are the key buckets of many recent Grand Challenges initiatives, from the National Academy of Engineering Grand Challenges for Engineering to the United Nations Sustainable Development Goals. I hasten to add that a key part of many of these is the scientific and technological discovery of new phenomena, which, fed back into the above definition of technology, creates a positive feedback loop that produces the current exponentially accelerating technology.
Useful is the key objective: it links phenomena with leveraging. It underscores a process of decision-making. Ideally, decision-making should occur at the intersection of three, hopefully overlapping, circles: Smart, Legal, and Ethical. I must add that, at least so far, such decision-making remains a domain of human activity. Whether machines will be able to produce innovation that operates at such an intersection, however, is becoming increasingly less implausible. It is in fact a question for the discussion of ethics today.
Does the above definition actually conform with how technology evolves? Namely, has technological innovation (e.g. a startup) followed from its very onset a specific "useful" purpose, in an intersection that has remained constant (and limited to that scope)? The answer is quite nuanced. One obvious reason is that the original innovation idea is subject to change and evolution, often dramatically, as a result of what is known as pivoting. But a more fundamental reason is that if the technological leap is such that it can produce a significantly powerful effect, its evolution will almost certainly lead to equally powerful unintended consequences. Technology will have a strong core, but it will also grow branches, which are in fact likely to reside outside the three-circle intersection. Moreover, the intersection itself will likely vary, as what is ethical and legal may evolve with the technology itself. Such effects are more pronounced the more powerful the technology, and the more strongly it impacts society.
It is instructive to see this as a dynamic phenomenon. As a successful (and powerful) technology evolves in time, it first starts within the three-circle intersection, within which it (the core technology, that is) continuously lives. However, one or more of its branches may follow a path (intentionally, through a bad actor, or through unintended consequences) outside the three-circle intersection. To imagine this, think of a plane parallel to the first one, where the circles have not changed, but where a technology branch deviates outside the three-circle intersection.
The most obvious example is a deviation or a mutation due to a bad actor. Examples abound: technology-enhanced violation of privacy rights, cybersecurity breaches, sabotage. In some ways, these are relatively easy to spot. Malfeasance has been encountered throughout the ages; it leads to deliberate actions, where the useful purpose of the bad actor is of course detrimental to society at large. They remind us that what is useful to one person may not be useful to another. These are not unintended consequences.
Less obvious, but no less important, are (unpredictable) unintended consequences. These are the unavoidable outcome of technologies that are ubiquitous, powerful and disruptive, as many of today's technologies are. At the risk of becoming too technical, their unpredictability lies in two facts: our world is not "linear" (hence leading to unpredictable phenomena), and organized society reacts much more slowly, through the legislative process, to technological change.
How we react as a society to technological change has multiple dimensions. The ideal action would be to keep strengthening the core of technology that serves useful purposes, namely the one that continues to reside in the three-circle intersection, and to prune the unwanted or undesirable branches that grow outside. For this to happen, we need ethically-minded technologists to discourage the growth of such branches, and much faster policy and legislative processes that keep in step with technology and can create the new perimeter defining what is legal. I would add that a crucial additional dimension in the latter endeavor is accurate and factual communication to the public. All of this will have a cascading effect on how we educate our students today about the importance of ethics, about acquiring and maintaining an internal moral compass, about the process of decision-making, and about the power of technology and its unintended consequences.
But this is only one aspect of the discussion. Equally importantly, as technology evolves into uncharted territory, it will invariably challenge us to redefine the perimeter of the other circle, that of ethics. Multiple new and rapidly evolving fields address the synergy of humans with technology, e.g. Human Machine Interaction (HMI), Human Building Interaction (HBI), and Socially Assistive Robotics (SAR), to name a few. Autonomy introduces a symbiosis of machines with humans, until now only a figment of the imagination of science fiction writers. The impact of automation on human labor and income questions the ability of society to adapt to exponential change, while challenging the fundamentals of education. Everything related to personalized customization (from medicine to preferences and human desires) risks the loss of privacy at unprecedented levels. The use of Machine Learning and AI to model human and societal behavior, and hence to inform future action, inherently includes biases, a risk recently termed WMD (Weapons of Math Destruction). Reverse engineering the brain, one of the NAE Grand Challenges, probes truly fundamental aspects of what it means to be human, as does the field of Synthetic Biology. All of these raise fundamental questions about what we value as a society, from the individual to the collective.
Which brings me to a likely unavoidable question: Should technology entities become engaged in the task of predicting unintended consequences (the branches noted above) and then guide their evolution in ways that are consistent with our values, both past and evolving? Or should such companies also establish another "C-Suite" position, this time the Chief Ethics Officer? The question may not be that far-fetched.