The AI Delusion

In The AI Delusion, Gary Smith argues very convincingly against the 'intelligence' part of artificial intelligence (AI). With numerous examples, and sometimes a rather deep dive into statistics, he shows that present-day AI is nothing more than competence without comprehension.

An AI system might be very good at reading stop signs, but put a sticker on the sign and it's lost. Robustness and common sense are missing from AI systems. The book gave me new insight into how far off 'thinking' machines still are. At the same time, I'm looking forward to exploring The Book of Why, a book that argues that we can express causality in math (where current AI systems only capture correlation).

In some ways I was already on board the 'AI is awesome' bandwagon. I had heard about Google Flu, about AI systems that hand out loans (supposedly without bias), and about algorithms to prevent crime. All these examples come up in the final chapter, and all are ruthlessly shot down.

Smith's main argument is that when you put together a lot of data and then let a system hunt for correlations, it will find them. If we then don't look inside the black box (at which correlations it actually used), things can get pretty weird.

Examples in the book include the weather in an Australian city predicting (in a given year) the next day's temperature in an American city (the two were inversely correlated). And time and again he uses a random number generator to show that when you gather enough data and test enough correlations, you will get 'results'.
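The book stays away from code, but the data-dredging point is easy to reproduce in a few lines. This is my own toy sketch, not an example from the book: correlate enough series of pure noise with a target and the best one will look like a real discovery.

```python
import numpy as np

rng = np.random.default_rng(0)

n_years = 20          # observations of the thing we want to "predict"
n_predictors = 1000   # candidate series of pure noise

target = rng.normal(size=n_years)                      # e.g. yearly returns
predictors = rng.normal(size=(n_predictors, n_years))  # pure random noise

# Correlate every random series with the target and keep the best one.
correlations = np.array([np.corrcoef(p, target)[0, 1] for p in predictors])
best = np.argmax(np.abs(correlations))

print(f"best |correlation| out of {n_predictors} random series: "
      f"{abs(correlations[best]):.2f}")
# With 1000 noise series and only 20 observations, the winner typically
# correlates around 0.65 or more -- impressive-looking, and meaningless.
```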

Smith tackles fields like technical analysis (looking at stock charts for correlations and patterns), drug discovery, and more. With regard to stocks he mentions numerous 'systems' and shows that they don't work outside of the training data, and that many 'gurus' change their system over time (which is touted as evolution, but is of course just refitting the model on new training data).

The trouble is that in many cases the results don't translate outside the training data (the data in which you let the AI find the correlations). This was, for instance, the case with the Google Flu system. And when you do find out which signals it uses, it can also be gamed (just like people game the SAT). One example: people with Android (versus Apple) phones turned out to be worse credit risks. If you (the person wanting a loan) know this, just switch phones.
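A minimal sketch of why great in-sample results can mean nothing out of sample (again my own illustration, not Smith's): fit an overly flexible model to noise, and it will look brilliant on the data it saw and useless on fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(coeffs, x, y):
    """R^2 of a fitted polynomial on the given data."""
    pred = np.polyval(coeffs, x)
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# "Training" data: 15 observations of pure noise.
x_train = np.linspace(0, 1, 15)
y_train = rng.normal(size=15)

# An overly flexible model (degree-10 polynomial) happily "learns" the noise.
coeffs = np.polyfit(x_train, y_train, deg=10)

# Fresh data from the same structureless process.
x_test = np.linspace(0, 1, 15)
y_test = rng.normal(size=15)

print(f"R^2 on the training data: {r_squared(coeffs, x_train, y_train):.2f}")  # close to 1
print(f"R^2 on fresh data:        {r_squared(coeffs, x_test, y_test):.2f}")    # ~0 or negative
```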

Yet even when you look outside the training data, you can still get lucky (or unlucky, depending on your point of view). When Smith took his random numbers, looked at the ones that 'predicted' stock prices in one year, and then at the ones that also worked the next year, some did very well. Yet that doesn't mean they will do well again the year after (remember, random numbers).
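A toy sketch of that survivorship effect (not Smith's actual experiment): generate thousands of random 'trading systems', keep the ones that beat a coin flip on market direction two years running, and watch them fall back to chance in year three.

```python
import numpy as np

rng = np.random.default_rng(2)

n_days = 250           # trading days per "year"
n_systems = 10_000     # random series pretending to be trading systems

# Each "system" is just a fixed series of random daily up/down calls.
calls = np.sign(rng.normal(size=(n_systems, n_days)))

def hit_rate(returns):
    """Fraction of days each system calls the market's direction correctly."""
    return (calls == np.sign(returns)).mean(axis=1)

# Three independent years of random market returns.
year1, year2, year3 = (rng.normal(size=n_days) for _ in range(3))

# Keep the systems that looked good in year 1...
survivors = hit_rate(year1) > 0.55
print("systems beating 55% in year 1:", survivors.sum())

# ...and of those, the ones that also looked good in year 2.
still_good = survivors & (hit_rate(year2) > 0.55)
print("still beating 55% in year 2:", still_good.sum())

# In year 3 the double survivors are back to coin-flip accuracy (~50%).
print("year-3 hit rate of the double survivors:",
      round(hit_rate(year3)[still_good].mean(), 3))
```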

One point he drives home is that we shouldn't trust computers blindly. When they show competence without comprehension, you need to be the one who instils the common sense. Computers are way better than us (read: perfect) at remembering numbers, but they have no clue what those numbers mean.

The book is a bit long in the middle (Smith's background is in statistics, and it shows), but it is also a good wake-up call that we're not there yet. I think that with our narrow, stupid, but still very competent AI we can do many great things. But for now, we should leave the comprehension and critical thinking to us humans.