As an IT person, I always get pretty annoyed when people start talking about artificial intelligence and ‘robots taking over’. In my view we are light years away from a singularity (and let me add that light years are a measure of distance, not time). We are led to believe we are so technologically advanced that robots will soon do everything for us. What is actually happening is a few tweaks and some re-branding: the new AI movement is nothing more than Prozac renamed to ZacPro. And obviously monetised (again, it’s all about the money).
My favourite AI categorisation is:
- Weak AI (narrow AI) – non-sentient machine intelligence, typically focused on one narrow task.
- Strong AI / artificial general intelligence (AGI) – a (hypothetical) machine with the ability to apply intelligence to any problem, rather than just one specific problem, typically meaning “at least as smart as a typical human”. Its potential future creation is referred to as a technological singularity, and constitutes a global catastrophic risk (see Superintelligence, below).
- Superintelligence – (hypothetical) artificial intelligence far surpassing that of the brightest and most gifted human minds. Due to recursive self-improvement, superintelligence is expected to be a rapid outcome of creating artificial general intelligence.
The keyword attached to the last two is critical: hypothetical. We are still in the weak AI age. The things we are building are pretty complex from a layperson’s point of view, but still nowhere close to AI.
Take operational robotics as an example. These are known to most developers as automated tests driven by UI input (hellishly expensive to maintain if the user interface keeps changing). There is a huge movement now to replace ‘FTEs’ (full-time equivalents) with robots. But in the end it is not AI; these are just computer programs with a very narrow and specific application.
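A toy illustration of why this kind of UI-driven automation is so expensive to maintain (everything below is hypothetical: the screen is just a dictionary, not a real UI framework): the ‘robot’ finds a field by its label, so a harmless rename in the next release breaks the script.

```python
# Hypothetical sketch of a UI "robot" that fills a form by element label.
# The screen is modelled as a plain dict; no real automation library is used.

def fill_form(screen, values):
    """Type each value into the field with the matching label."""
    for label, text in values.items():
        if label not in screen:
            raise LookupError(f"UI element '{label}' not found - script broken")
        screen[label] = text
    return screen

# Version 1 of the UI: the script works fine.
screen_v1 = {"Username": "", "Password": ""}
fill_form(screen_v1, {"Username": "alice", "Password": "secret"})
print(screen_v1["Username"])  # alice

# Version 2 renames "Username" to "Login"; the same script now fails.
screen_v2 = {"Login": "", "Password": ""}
try:
    fill_form(screen_v2, {"Username": "alice", "Password": "secret"})
except LookupError as e:
    print(e)
```

No learning, no adaptation: a one-word label change defeats the ‘robot’, and a human has to go and patch the script.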
The other revolution comes from the big data movement. Data storage cost has dropped exponentially since the PC was invented and sold as an IT server. This has led organisations that originally kept their files on paper to move to electronic storage. The next step was to ask: what could we do with the data? Are there any patterns? The application of simple algorithms (machine learning), or their more advanced family based on neural networks (deep learning), has allowed firms to use the data more effectively. Still, this is not AI. Well, deep learning could theoretically be called weak AI, as it uses feedback and self-learning, but it can only be applied to very specific purposes.
Examples of such applications are:
- Netflix, which analyses how people watch movies to decide what to produce next.
- Traffic analysis and car navigation, feeding into self-driving cars.
- Accenture’s proposal to use deep learning in the insurance business.
- Applications in marketing and pricing.
- Customer support bots.
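What ‘finding patterns in data’ actually means can be sketched in a few lines (the data and names below are illustrative, not from any real system): the simplest machine-learning algorithm, a one-variable linear fit, just turns stored historical numbers into a trend and an extrapolation.

```python
# Minimal "machine learning": fit y = a*x + b to stored data points
# using the closed-form least-squares solution. No intelligence involved,
# only arithmetic over historical data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy "historical data": a storage cost falling over the months.
months = [1, 2, 3, 4, 5]
costs  = [100, 80, 60, 40, 20]

a, b = fit_line(months, costs)
print(a, b)        # -20.0 120.0
print(a * 6 + b)   # extrapolated cost for month 6: 0.0
```

Deep learning replaces this straight line with millions of parameters arranged in layers, but the principle is the same: compress historical data into a function, then extrapolate from it.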
Still, this is not really AI, not really a thinking machine, but rather extended computer programs that are harder to understand. In the end they still follow Turing’s model: everything needs to be 0 or 1, true or false. There is no ambiguity or magic. On top of that, you need to feed your deep learning model tonnes of data, or it won’t work at all.
The ideas that support today’s weak AI development were created in the early 1930s. However, the cost of equipment was high, the computers were very primitive, and there were no good computer-to-human interfaces (programming languages). They would never have coped with the calculations required for deep learning. The other challenge was how to teach machines the way humans learn: from their errors. Work on neural networks started in the 1960s, and the first robots were created in the early 1970s. Then all ‘robotics’ funding was pulled from the mid-1970s until the early 1990s. The AI winter had started.
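That ‘learning from errors’ idea can be sketched as a 1960s-style perceptron (the training data and constants below are illustrative): the weights are only nudged when the prediction is wrong, so the machine quite literally learns from its mistakes.

```python
# A tiny perceptron learning the logical AND function.
# 1960s-era training rule: weights change only on a wrong prediction,
# i.e. the machine learns from its errors.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            if error:  # only errors change the weights
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
    return weights, bias

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

Note how narrow this is: the same handful of numbers learns AND, but nothing else, and only because we hand it every example with the right answer attached.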
Big data, deep learning and the drop in equipment cost were the major factors behind this kind of weak AI returning in the 21st century. There is also the fact that it stopped being a purely academic exercise: you could actually start earning money on it.
Should you be afraid of AI? My answer is yes, as the current culture within big organisations is to automate wherever possible. Data-driven jobs are already taking a toll, and low-skilled workers should beware. Take truck drivers, where wide-spread application of deep learning is expected in the next 10 to 20 years. Some people will learn new things, as happened with the weavers in the 19th century. For others, a guaranteed minimal income is supposed to be deployed. The latter, however, comes with multiple controversies, including the fact that large organisations pay close to no tax; if they paid more, the cost would either be passed on to the end consumer or would simply eat up the savings made by the robots.
This social dilemma will blow up at some point.
As for a hostile AI takeover: we have ongoing experiments with quantum computers. These do not deal in simple black and white, 0 or 1, true or false. They actually accept maybe, maybe not, maybe yes, potentially. They are based on the quantum properties of atoms, and they are at the hypothesis and early-prototype stage now, not ready for commercialisation.
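Those ‘maybe’ states can be sketched numerically (a hand-rolled two-amplitude toy model, not any real quantum SDK): a single qubit carries two amplitudes at once, and only measurement collapses it into a definite 0 or 1.

```python
import math
import random

# Toy single-qubit model: state = (amp0, amp1) with |amp0|^2 + |amp1|^2 = 1.
# The probability of reading 0 or 1 is the squared magnitude of each amplitude.

def measure(amp0, amp1, rng=random):
    """Collapse the superposition into a classical 0 or 1."""
    return 0 if rng.random() < abs(amp0) ** 2 else 1

# An equal superposition: neither 0 nor 1, but "maybe" both.
amp0 = amp1 = 1 / math.sqrt(2)
print(round(abs(amp0) ** 2, 3), round(abs(amp1) ** 2, 3))  # 0.5 0.5

# Repeated measurement gives roughly half 0s and half 1s.
random.seed(42)
samples = [measure(amp0, amp1) for _ in range(1000)]
print(sum(samples) / 1000)
```

Until measured, the state is genuinely ‘maybe’; classical Turing-model programs, by contrast, are 0 or 1 at every single step.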
To conclude: with Turing-based AI, we should not expect robots to do us any harm. Once we move towards quantum computing and scale it up, we should start to be afraid. Quantum computing research started in the 1980s, and breakthroughs are published every month. My guess is we are safe for the next 50 years. And hopefully we will have run out of precious materials by then.
In 2018, Earth Overshoot Day, the date when humanity’s demands on nature exceed what the Global Footprint Network’s analysts estimate the Earth can regenerate over the entire year, fell earlier than ever before.
To wrap up: the current AI hype is nothing other than very smart marketing. Try not to fall for it, and if you are forced to use it, ask yourself why you really need it. Note that even Netflix still uses creative directors to make the final decisions, because data-driven choices are no guarantee of success. They are just most-probable, educated guesses based on shit-loads of historical data (my personal definition of deep learning). We won’t be able to avoid aggressive AI marketing, but at least let’s try to make the best use of it.