The rise and fall (and rise) of Artificial Intelligence
Artificial intelligence has been around nearly as long as humans have been able to think about themselves and about thought itself. Empathy is wired into us - some more than others, but we are all capable of thinking from another's point of view. This capacity leads us to anthropomorphize things that aren't human, to imbue things in our daily lives with human qualities like moods, characteristics and personality. When we build puppets, robots and models that look vaguely human, it is easy for us to assign them greater power, ability and promise than is really there. For marketers in other fields, having consumers attribute 'magical' properties to their products would be a dream come true; for artificial intelligence it is a nightmare - one the industry has expended funds marketing against.

Artificial intelligence has delivered many great tools which today we take for granted. Our phones listen to us and understand our requests in the context of our calendar, our cameras recognise faces and social networks tell us who those faces belong to, machines translate words from one language to another (although don't get the translations tattooed just yet), and the list goes on. We chuckle at the mistakes these learning and adaptive systems make, we see the huge strides and investment, and we expect a new human-like intelligence to emerge in the short term. Around the middle of every decade since the 60s there has been a peak in excitement for AI, a frustration with its lack of progress, and a reduction in funding - the AI winters, as they are called. In the eighties it was LISP machines; in the nineties it was expert systems. Now in the twenty-tens (I thought it was the teenies, but that's a kids' show apparently) we are seeing a resurgence of AI: a blending of machine learning, predictive modelling and cognitive computing, along with self-driving cars. This raises some rare and interesting questions:
- Are we headed for a new AI winter?
- Or an AI apocalypse?
- Also, will I still be cleaning my home in 2020?
It is certainly true to say that the set of tasks we can expect software and physical computing systems to do has vastly increased compared to just a decade ago, and massively so since the 60s. Doing all the things humans can do - living in our society, empathising with and understanding us in that broad context - is still well beyond computers, but engaging with us in specific, well-defined domains, such as our calendar or what we would like to buy from the shop, is well within their grasp today. Previously difficult tasks such as searching a database for information, reading it from one screen and keying it into another are now entirely possible - see the earlier blog post on bots. Having a drone fly itself around an obstacle to reach an objective is still very hard. Having a vehicle drive itself on the road is in fact easier, albeit most humans don't benefit from lidar sensors, ultrasonics and eyes in the back of their head (alright, bumper).
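To give a flavour of what such a bot does, here is a minimal sketch in Python. Both systems are hypothetical stand-ins invented for illustration - a sqlite3 database as the source and a CSV file as the target; a real bot would typically drive the actual screens through UI automation, but the read-transform-write loop is the same.

```python
# Minimal sketch of a bot re-keying data between two systems.
# The source and target here are hypothetical stand-ins: an
# in-memory sqlite3 database and a CSV file.
import csv
import sqlite3

# Build the hypothetical source system.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE claims (ref TEXT, amount REAL)")
src.executemany("INSERT INTO claims VALUES (?, ?)",
                [("C-001", 1200.00), ("C-002", 85.50)])

# "Read from one screen": query the source for its records.
rows = src.execute("SELECT ref, amount FROM claims").fetchall()

# "Key into another screen": write each record into the target system.
with open("rekeyed_claims.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["reference", "amount"])
    writer.writerows(rows)
```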
It is good to see AI on the rise again - I have loved the topic ever since getting into programming and taking a cognitive psychology course some years ago. I recall writing an expert system in Pascal back in the 90s. I am concerned, as the insurance industry should be, about the prospect of a new AI winter.
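For the curious, here is a minimal sketch of the forward-chaining core that such an expert system is built around - in Python rather than Pascal, and with hypothetical rules and facts invented purely for illustration.

```python
# Minimal forward-chaining rule engine: the core loop of a simple
# expert system. Facts are strings; a rule fires when all of its
# conditions are known facts, adding its conclusion to the fact base.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules for an illustrative underwriting domain.
RULES = [
    (("young driver", "sports car"), "high risk"),
    (("high risk",), "refer to underwriter"),
    (("clean record", "low mileage"), "low risk"),
]

print(forward_chain({"young driver", "sports car"}, RULES))
# -> {'young driver', 'sports car', 'high risk', 'refer to underwriter'}
```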
Self-driving cars and vehicles have the potential to make the roads safer for all. We will, when we see them, imbue them with more power than they have - this is human nature. We will, in the not too distant future, hear people say things like, "the car likes to give cyclists a lot of room on the road" or "the car prefers to take this corner at a fair speed" - imbuing a complex machine of sensors and programming with preferences, desires and likes - human qualities. When the first death comes, we will ask how it could do such a thing. When an automated car is put in a position where it must decide between a set of actions, each leading to injury, we will hear people discuss why it chose to do what it did. Some may say, "it did the best it could", or worse, "no person would ever have done that; this is why machines shouldn't be able to choose." The latter of course reveals the human construct, an unspoken contract: our expectation that smart or intelligent systems will operate like us, share our values and our culture, and that we can predict their actions in our context. This is the greatest threat to AI and always has been - the expectation, the contract, that the new intelligence will be like human intelligence. Some winters are due in part to that contract being broken, to these systems not living up to the expectation and making inhuman mistakes.

There is a set of tools available now that are not intelligent, but they are smart and they are powerful. We would be remiss in our duty to our customers and shareholders if we did not leverage them. We must manage expectations about these powerful tools and understand the very real limits on them. If we can do this we may benefit from the AI boom and avoid another AI winter.
Will we see an AI apocalypse? Ironically, it is not human-like intelligence that may be our greatest threat but simpler intelligences. A human-like intelligence could empathise, could act in accordance with values and could be relatively predictable (in a human way). Science fiction is full of stories of smart robots that act like insects and do nothing but replicate - machines that only make copies of themselves, yet pose a great threat to any civilisation. They are not intelligent, and they don't want to kill off all life in the galaxy - they just turn all the available resources into copies of themselves, which would have that effect. Frankly, we are much closer to building that threat (with drones, 3D printers, etc.) than a super-intelligence that decides all human life is worthless. For now though, I expect these things to stay firmly in the space of science fiction. I include this discussion here because it demonstrates a key difference between 'smart with unintended consequences' and 'intelligent' - a lesson worth bearing in mind for those adopting AI.
Finally - will we see robots cleaning our homes by 2020? Well, the Roomba is out there and sort of does that. Stairs and steps are still a huge challenge to robots, and frankly, differentiating furniture, pets, clutter, magazines, rubbish, dust and recycling in a changing environment is still a very complex problem. As in insurance, I think smart things will make cleaning easier and assist those who invest in them, but there will be a role for human intelligence in ensuring the pets aren't recycled and the customer ultimately gets the service they expect.
Comments
-
I wouldn't necessarily say that there was a "fall" per se, at least in one area. I've been actively involved in the development of Expert Systems for 24 years (in the insurance industry for over 20) and it has been growing, for the most part, the entire time I've been involved with it. It just happens to have been "re-marketed" as "Business Rules Engines" or "Business Rule Management Systems".
On the other hand, I've seen some spikes in interest in Neural Networks, but haven't really seen much on that front for some time (at least 10 years).
-
For that we would have to define "good" and "bad". Potentially: fewer professionals in the industry, lower-cost players, better service at lower cost - it's good for some but not all.
Great blog topic! I think we love AI and even the idea of it learning, thinking and feeling as long as those thoughts and ideas benefit us or make our lives easier, more convenient. The problem arises when my self-driving car goes all Skynet on me and refuses to take me to the movie theater, instead insisting I need a workout and dropping me at the gym. Seriously, the liability questions are most of what is holding technology back in this area, but will it be a good thing or a bad thing when such obstacles are overcome?