In this article, David Autor analyzes the use of Artificial Intelligence (AI) in the context of the economic history of technological change over the last three hundred years. The analysis runs from the artisans who preceded the Industrial Revolution to generative AI, passing through factory machinery and the role of computers.
Knowledge is expert when it is both necessary and scarce. This is what artisans had in the period before the Industrial Revolution: knowledge that required a long training process, which only a minority could afford. These experts developed each product creatively, as a complete and differentiated unit.
The Industrial Revolution displaced artisans, producing a great increase in productivity but relegating the majority of industrial workers to extremely hard work for miserable wages for several decades. Productivity increased because, from then on, each worker was in charge of a small part of the process, performing it repetitively and in a specialized way on a mechanized assembly line.
The Luddites (who protested the mechanization that wiped out artisans) had a point: it took five decades for industrial workers to see their real wages grow significantly, and that growth required the power of unions and the expansion of democracy, as well as further technological change. A middle class of mass experts did eventually emerge (intermediate workers doing administrative tasks), and until personal computing and the Internet arrived, these workers saw their real wages rise and swelled an abundant middle class in developed societies. But because they followed rules and lacked discretion (they were not the ones making the decisions), they were vulnerable to the automation that computers brought from the second half of the 20th century onward.
Computers are very effective at routine tasks, but not at those requiring tacit knowledge, such as improvising language or recognizing in an adult's face the child they once were. AI is the opposite: much more effective with tacit knowledge than with routine tasks.
Both before the advent of AI and with it, the starting point should be that tools are levers that improve human work, not substitutes for it. Think of calculators, electric saws, or drills. These three examples share two characteristics: first, they make the task of those who work with them much easier; second, they require some training to use.
As in other stages of accelerated technological change, AI will not eliminate human work. Employment has kept growing with the emergence of new technologies, even though many professions have become obsolete; other professions have been created, and existing ones have developed in new ways. The productivity gains that technological change allows generate new demand for products and services that did not exist before or were enjoyed only by a small minority. The challenge is to ensure that the new jobs created improve the dignity and living conditions of working people. In this sense, David Autor argues against the “inevitabilism” of assuming that AI will make human work redundant (an outcome he does not consider desirable, unlike, perhaps, some supporters of basic income, as the article notes in passing).
Unlike other technological changes, however, AI can complement decision-making (and not just routine tasks), which can open decision-making to many more people, eroding the monopoly power of specialized professions such as doctors or university professors (or football coaches; see this article in Nature). Existing AI already helps us make decisions, although the final responsibility lies with the human being: accepting or rejecting a suggestion to complete a sentence, for example, or heeding or ignoring a smart car's warning about speed and direction.
The text compares computers to classical music, which follows a set of rules reproducible in every concert, and AI to jazz, which allows improvisation and adaptation to changing circumstances. David Autor suggests that AI will generalize what has already happened in nursing, where in recent years a portion of nurses have been enabled to assume functions (prescribing, for example) that previously could only be performed by people with a medical degree. This requires additional training, but not the training traditionally required for a medical degree, and this expansion of job responsibility has been made possible by technological developments such as the connection and digitization of medical records. Analogous developments can occur in education.
In this way, AI can make healthcare and education (or football quality) more affordable, taking them out of the hands of elites who monopolize the knowledge needed to make decisions, whether in an operating room or a classroom. Combined with current demographic trends, this means the future will bring not a shortage of jobs but a shortage of people who can work, although, as in the past, some jobs will disappear and new ones will emerge.
The problem is not the disappearance of work, but the dignity and remuneration of working people. Human decisions will remain irreplaceable. That is why self-driving cars have failed: they do not know how to make quick decisions when reality changes. The role of AI is not to drive the car, but to assist in driving.
The unique opportunity that AI offers humanity is to reverse the shrinking of the mass of workers earning decent wages: to expand the relevance, reach, and value of human expertise to a broader set of tasks. Not only could this reduce income inequality and the costs of key services such as healthcare and education, but it could also help restore the quality, prestige, and prominence that too many people and jobs have lost. This alternative path is not an inevitable or intrinsic consequence of AI development. For David Autor (in line with other economists such as Dani Rodrik or Daron Acemoglu), however, it is technologically plausible, economically coherent, and morally convincing. Recognizing this potential, we should not ask what AI will do for us, but what we want it to do for us.
The article does not make a prediction; it points out a possibility. The same technology can have different uses depending on how institutions and incentives develop. Just as nuclear energy can be used to make atomic bombs or to produce electricity without contributing to climate change, AI can be used to enrich a small minority, to pit elites against each other, or to improve the lives and work of the vast majority.