In *The value of consciousness and Free Will in a Technological Dystopia*, Allan McCay makes the all-too-familiar claim that:
“there will always be valuable skills requiring a particular kind of judgement that are possessed by humans, but not by non-conscious algorithmic machines, however advanced.”
I find these sorts of predictions to be dangerous. As a data scientist, I believe it is important to be honest about the full extent of what AI might be capable of, including the potential to render millions of people unemployed and, ultimately, even to replace us as a species. Only by facing the difficult possibilities can we attempt to comprehend and prepare for them.
Therefore, in this piece, I will respond to one such attempt at drawing a bright line, which illustrates the sort of fallacious reasoning that typically leads to these sorts of foolish predictions.
Let me pause to mention, however, that possibility does not imply probability. Indeed, I myself consider it more likely that Silicon Valley will collapse under the sheer weight of its own bullshit than that it will ever move beyond building AI solely for use in marginally-more-invasive targeted advertising. But it is certainly possible.
Also, the fact that I have so far been unconvinced by all the “bright lines” I have seen does not mean that I think there aren’t any. There might be. I just think that people are currently far too trigger-happy in calling things “impossible” for AI that aren’t. The most plausible candidates for a bright line are probably characteristics related to the hard problem of consciousness.
Can Economic Decisions be Incommensurable?
Allan provides a significant amount of background literature review and context setting. But for the sake of brevity, I believe that I can jump straight to the example provided without being unfair to the author. The full article is linked above and I encourage you to read it if you are interested.
The example Allan provides of a task that humans can perform well but which is not “algorithmic” is that of weighing incommensurables:
“A form of plausible reasoning takes place when humans are weighing incommensurables. Thus if one is deciding whether to honor a promise to help a friend, or to go for dinner with a person one finds attractive, the competing reasons are of a different kind (duty and desire) and thus are incommensurable. The reasons do not lead to conclusions through entailment, and judgment is required. This involves reasoning about what to do, but Hodgson argues that reasoning about what to believe is also a form of plausible reasoning. An example of this comes from Hodgson’s judicial experience in which he describes a judge trying to decide what to believe after hearing inconclusive evidence.”
First off, there is an ongoing conflation in this paper between formal reasoning (“conclusions through entailment”) and algorithmic reasoning. Algorithmic reasoning can be stochastic. That’s precisely what machine learning is. It seems to me highly unlikely that humans will always be better than machines at making judgements based on inconclusive evidence in a court environment. Indeed, I’m not convinced that they are currently.
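To make the distinction concrete: an algorithm can produce a graded degree of belief from inconclusive evidence rather than a deterministic entailment. A minimal sketch, using a single Bayesian update with numbers invented purely for illustration (they are not from Allan's paper or Hodgson's example):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: update a prior belief given one piece of evidence.

    The evidence is 'inconclusive' in the sense that it is merely more
    likely under the hypothesis than not -- no entailment involved.
    """
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Hypothetical numbers: a judge starts at 50/50, and hears testimony
# that is somewhat more consistent with the claim being true (0.7)
# than false (0.4). The output is a judgement of plausibility,
# not a logical conclusion.
belief = posterior(prior=0.5, likelihood_if_true=0.7, likelihood_if_false=0.4)
```

The point is only that “judgement under uncertainty” is itself expressible algorithmically; whether current systems do it *well* in a courtroom setting is a separate, empirical question.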
But let me move to address the stronger claim: that some decisions are between options in different categories, that these options therefore cannot be algorithmically compared, and that ‘human judgement’ will therefore always have a role.
What it means to say that two things are incommensurable is that there is no metric that applies to both. So for example, I can compare cheese with tomatoes based on which tastes better. However, there is no (obvious) way to compare cheese with chalk.
A priori there is a problem with saying that humans are “better” at judging incommensurables. How do you know? How could you possibly know? The word “better” assumes the existence of a metric, which is precisely what is being denied!
Perhaps there is an Absurdist argument to be made here. One could argue that in choosing between honoring a promise and going on a date, there is no correct answer. That the only way to proceed is to be authentic, and that whichever option is chosen, so long as it is chosen authentically, will be the better option. Allan could have argued that only humans can be authentic, and therefore humans will necessarily make the better decision. But he didn’t. So I’ll proceed.
In the AI community we refer to this shared metric as the “objective function”, or, when it is being minimised, the “error function”. Every machine learning algorithm in use today has an objective function: a metric it is trying to optimise. And anything which has an objective function can, in principle, be subjected to numerical optimisation that, with enough data points, will outperform humans.
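To see how an objective function dissolves the appearance of incommensurability, here is a minimal sketch using Allan's own example. The weights and scores are invented for illustration; the point is only that once “duty” and “desire” are projected onto one scale, the choice becomes a trivial maximisation:

```python
def objective(option):
    """Score an option by combining different kinds of reasons on one scale.

    The weights below are hypothetical -- but choosing them at all IS the
    act of making the options commensurable.
    """
    weights = {"duty": 0.6, "desire": 0.4}
    return sum(weights[kind] * strength
               for kind, strength in option["reasons"].items())

options = [
    {"name": "help a friend", "reasons": {"duty": 0.9, "desire": 0.2}},
    {"name": "go on a date",  "reasons": {"duty": 0.1, "desire": 0.8}},
]

# Once scored, 'judgement' reduces to picking the maximum.
best = max(options, key=objective)
```

One could object that choosing the weights is where the real judgement lives; but in a machine learning setting those weights would themselves be fitted from data about past decisions and outcomes, rather than hand-set as here.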
Indeed, one of the main points of Harari’s Homo Deus (which Allan was in turn responding to), is that, as complicated as any individual human’s motives tend to be, the objective function for the entire species is very clearly defined: grow the population.
Our individual “judgements” and “motives” are all ultimately attempts by evolution to optimise this overarching objective function. And hence this is precisely why Harari argues that as a species, we are quite replaceable by entities more efficient at such optimisation.
Curiously, the thrust of Allan’s argument is not only that humans will always be better at making some of these incommensurable weigh-ups, but that they will be better at it from an economic perspective. In economics the objective function is clearer than in any other domain: make profit.
Sometimes it is more complicated than this (e.g., NGOs), but there is no case I know of in economics in which an objective function cannot be sufficiently described. As a result, I cannot think of any economic situation that would involve true incommensurables.
This mistake actually mirrors very closely another one I have seen countless times in economic commentary: stating that two goods are either substitutes for each other, or they aren’t. Substitutability is never binary. Cars are certainly a closer substitute to trains than books are. But no doubt there is a conceivable set of circumstances under which, if the price of trains went up, somebody might choose to stop taking the train and instead use the money to buy books. Why? Because there is a global objective function, or metric, something along the lines of “happiness”, against which cars and books can be compared.
Again, this is not to say that such weigh-ups between very distantly related options are easy for an AI to do. Indeed, currently, almost all AI is incredibly specialised, and the options that AI systems have to choose between are usually very closely related.
It’s just to say that it is possible.