Yes, AI Could Do That: A Response to Allan McCay

In "The Value of Consciousness and Free Will in a Technological Dystopia", Allan McCay makes the all-too-familiar claim that:

“there will always be valuable skills requiring a particular kind of judgement that are possessed by humans, but not by non-conscious algorithmic machines, however advanced.”

The predictions I have now read or heard that boldly proclaim a ‘bright line’ which AI could never cross, no matter how advanced, are too numerous to enumerate. Suffice it to say that the trend shows no sign of abating.

I find these predictions incredibly dangerous. As a data scientist, I believe it is important to be honest about the full extent of what AI might do, including its potential to render millions of people unemployed and, ultimately, even to replace us as a species. Only by facing the difficult possibilities can we attempt to comprehend and prepare for them.

Therefore, in this piece, I will respond to one such attempt at drawing a bright line, which illustrates the sort of fallacious reasoning that typically leads to these sorts of foolish predictions.

Let me just begin by mentioning, however, that possibility does not imply probability. Indeed, I myself consider it more likely that Silicon Valley will collapse under the sheer weight of its own bullshit, than that it will ever move beyond building AI solely for use in marginally-more-invasive targeted advertising. But it is certainly possible.

Also, the fact that I have so far been unconvinced by every “bright line” I have seen does not mean that I think there aren’t any. There might be. I just think that people are currently far too trigger-happy in calling things “impossible” for AI that aren’t. The most plausible candidates for a bright line are probably characteristics related to the hard problem of consciousness. But I have yet to see a convincing argument along those lines.

Can Economic Decisions be Incommensurable?

Allan provides a significant amount of background literature review and context setting. But for the sake of brevity, I believe that I can jump straight to the example provided without being unfair to the author. The full article is linked above and I encourage you to read it if you are interested.

The example Allan provides of a task that humans can perform well but which is not “algorithmic” is that of weighing incommensurables:


“A form of plausible reasoning takes place when humans are weighing incommensurables. Thus if one is deciding whether to honor a promise to help a friend, or to go for dinner with a person one finds attractive, the competing reasons are of a different kind (duty and desire) and thus are incommensurable. The reasons do not lead to conclusions through entailment, and judgment is required. This involves reasoning about what to do, but Hodgson argues that reasoning about what to believe is also a form of plausible reasoning. An example of this comes from Hodgson’s judicial experience in which he describes a judge trying to decide what to believe after hearing inconclusive evidence.”

First off, there is an ongoing conflation in this paper between formal reasoning (“conclusions through entailment”) and algorithmic reasoning. Algorithmic reasoning can be stochastic. That’s precisely what machine learning is. It seems to me highly unlikely that humans will always be better than machines at making judgements based on inconclusive evidence in a court environment. Indeed, I’m not convinced that they are currently.

But let me move to address the stronger claim: that some decisions are between options in different categories, that these options therefore cannot be algorithmically compared, and that ‘human judgement’ will therefore always have a role.

What it means to say that two things are incommensurable is that there is no metric that applies to both. So for example, I can compare cheese with tomatoes based on which tastes better. However, there is no (obvious) way to compare cheese with chalk.

A priori there is a problem with saying that humans are “better” at judging incommensurables. How do you know? How could you possibly know? The word “better” assumes the existence of a metric, which is precisely what is being denied!

Perhaps Allan could have made an Absurdist argument here. He could have argued that in choosing between honouring a promise and going on a date, there is no correct answer; that the only way to proceed is to be authentic, and that whichever option is chosen, so long as it is chosen authentically, will be the better one. Allan could then have argued that only humans can be authentic, and therefore that humans will necessarily make the better decision.

But he didn’t.

In the AI community we refer to a metric shared between the options as the “objective function”, or, when it is to be minimised, the “error function”. Every machine learning algorithm in use today has an objective function: a metric that it is trying to optimise. And anything which has an objective function can, in principle, be subjected to numerical optimisation that, given enough data points, will outperform humans.
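To make the idea concrete, here is a minimal sketch of what “having an objective function” buys you. The model, data, and learning rate are all invented for illustration: we fit a single parameter by plain gradient descent on a squared-error function, the simplest possible instance of the numerical optimisation described above.

```python
# Toy illustration: any task with an objective (error) function can be
# handed to a numerical optimiser. Here we fit one parameter w in the
# model y = w * x to noisy observations by gradient descent.

def error(w, data):
    """Mean squared error of the model y = w * x over the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w, data):
    """Analytic derivative of the error with respect to w."""
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def optimise(data, w=0.0, lr=0.01, steps=1000):
    """Repeatedly step downhill on the error function."""
    for _ in range(steps):
        w -= lr * gradient(w, data)
    return w

# Observations generated (roughly) by y = 3x, plus noise.
data = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]
w = optimise(data)
```

Nothing here cares what the objective *means*; duty, desire, or profit are all the same to the optimiser once they have been folded into a single number.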

Indeed, one of the main points of Harari’s Homo Deus (which Allan was in turn responding to), is that, as complicated as any individual human’s motives tend to be, the objective function for the entire species is very clearly defined: grow the population.

Our individual “judgements” and “motives” are all ultimately attempts by evolution to optimise this overarching objective function. And hence this is precisely why Harari argues that as a species, we are quite replaceable by entities more efficient at such optimisation.

Curiously, the thrust of Allan’s argument is not only that humans will always be better at making some of these incommensurable weigh-ups, but that they will be better at them from an economic perspective. In economics the objective function is clearer than in any other domain: make a profit.

Sometimes it is more complicated than this (e.g., NGOs), but there is no case I know of in economics in which an objective function cannot be sufficiently described. As a result, I cannot think of any economic situation that would involve true incommensurables.

This mistake actually mirrors very closely another one I have seen countless times in economic commentary: stating that two goods are either substitutes for each other, or they aren’t. Substitutability is never binary. Cars are certainly a closer substitute to trains than books are. But no doubt there is a conceivable set of circumstances under which, if the price of trains went up, somebody might choose to stop taking the train and instead use the money to buy books. Why? Because there is a global objective function, or metric, something along the lines of “happiness”, against which cars and books can be compared.

Again, this is not to say that such weigh-ups between very distantly related options are easy for an AI to perform. Indeed, almost all current AI is incredibly specialised, and the options that an AI has to choose between are usually very closely related.

It’s just to say that it is possible.


Bitcoin Wasn’t Built for You




Bitcoin wasn’t made for you. It was made for the 1.7 billion people who have been cut off from the world’s finances. Whether or not it will actually help them, or whether it’s another saviour-mission gone wrong, remains to be seen. But that is what it was built to do.

I predict that for at least the next decade businesses are going to be running around in circles trying to figure out just how they’re supposed to actually make money off this blockchain thing. Until they finally figure out the truth. Bitcoin was invented by a Cypherpunk. A counter-culture, anarchistic, anti-big-business, anti-censorship hyper-intelligent geek. And they didn’t invent it for you.

It literally says it right there in the genesis block, if you cared to read it:

“The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

Open blockchains won’t help your business. They’re built to do the opposite. And everything else is bullshit.




Why I’m Leaving WhatsApp

In a nutshell

I want to be able to communicate privately and securely, and I do not believe that WhatsApp can provide this anymore. Furthermore, the behaviour of WhatsApp’s parent company, Facebook, has reached the point where I believe that the only reasonable response is a boycott.

Hasn’t Facebook announced a new privacy-focussed strategy, including more encryption of communications?

Yes, they have, and I can’t yet rule out the possibility that they are being serious and sincere when they say this.

However, it is incorrect to think that data encryption is the only, or even the primary, mechanism for creating private communications. In many cases, metadata can reveal more about your preferences and beliefs than data can, and can be more easily de-anonymised. For example, if I know that you sent a message to a number which matches a public record for a divorce attorney, and that you sent it at 11pm from a suburban location far from where you normally are… I would probably know more about you than if I had read the content of the message.

Continue reading


An Immutable Data Persistence Layer based on Structured Query Language: A Viable Alternative to Permission-Based Blockchain Networks?

In a recent report, “Cryptocurrencies: Beyond the Hype”, the Bank for International Settlements makes a case for the use of permission-based blockchain technology. One key characteristic of such systems is so-called “immutability”: the property that once data has been inserted into a persistence layer, it can never be modified or deleted. KODAKONE, for example, lists this as the key reason why blockchain will power its new rights-management platform. But is it possible to introduce this sought-after property into traditional persistence systems without the overhead and computational power required to run a blockchain?
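One way the question above could be explored is with an ordinary SQL table made tamper-evident by chaining each row to the hash of the previous one, in the spirit of a blockchain but without mining. This is only a sketch of the idea, using SQLite for brevity; the schema and function names are invented.

```python
# Sketch: an append-only SQL "ledger" where each row's hash covers its
# payload plus the previous row's hash. Retroactive edits break the chain.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ledger (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        payload   TEXT NOT NULL,
        prev_hash TEXT NOT NULL,
        row_hash  TEXT NOT NULL
    )
""")

def append(payload):
    """Insert a row chained to the hash of the latest existing row."""
    row = conn.execute(
        "SELECT row_hash FROM ledger ORDER BY id DESC LIMIT 1").fetchone()
    prev_hash = row[0] if row else "genesis"
    row_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    conn.execute(
        "INSERT INTO ledger (payload, prev_hash, row_hash) VALUES (?, ?, ?)",
        (payload, prev_hash, row_hash))
    conn.commit()

def verify():
    """Recompute the whole chain; any modified row breaks a link."""
    prev_hash = "genesis"
    for payload, stored_prev, stored_hash in conn.execute(
            "SELECT payload, prev_hash, row_hash FROM ledger ORDER BY id"):
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if stored_prev != prev_hash or stored_hash != expected:
            return False
        prev_hash = stored_hash
    return True

append("rights record A")
append("rights record B")
```

After the two inserts, `verify()` succeeds; if any existing payload is later UPDATEd, verification fails. Note this only makes tampering *detectable*, not impossible — which is one axis along which such a system differs from a true permissioned blockchain.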

Continue reading


Machine Learning Defined

It’s become a trend for machine learning resources to differentiate themselves by claiming to focus more on the practice, and less on the theory. My reaction to this is similar to when software teams list their focus on agile development, instead of the waterfall approach, as a key differentiating factor. Everyone’s doing it now. It isn’t differentiating anymore.

I won’t dwell on the dismal state of linear algebra in the applied fields, since I already did that here, but it needs specific mentioning that very few machine learning authors are able to give a set-theoretic account of the objects involved in machine learning.

So I’m going to try. Not necessarily because I think that this description is better, per se, but because this description helps to clarify some core concepts, and I think leads to some key insights as well.

Continue reading


Some Thoughts and Ideas on Consensus, Proof-of-Work and Distributivity

Note: like most articles on my personal blog, this one assumes a fair degree of domain familiarity on the part of the reader. If you are new to blockchain technology, I have listed at the end of this article the resources that I’ve found to be the most clear and helpful introductions, and that I would suggest consulting if you want introductory material. Feel free to post more specific questions as comments.

Mark Zuckerberg has made headlines again for announcing a dedication to fixing everything wrong with Facebook. Included in the post was a personal reflection on decentralisation:

Continue reading