I have been interested in the field of Artificial Intelligence since the 80's – pretty much the heart of the "AI winter". Like many others, I remember when Kasparov lost to IBM's Deep Blue chess AI in 1997, and watched in fascination as DeepMind's AlphaGo beat Lee Sedol at Go in 2016.
At this point, Kasparov's defeat is mostly relegated to the history books; everyone is focused on AlphaGo's wins and the follow-on technologies they spawned. Kasparov's response to that defeat tends to get ignored, but it shouldn't be. What happened after it with the evolution of the game of chess, with human and AI opponents, is fascinating – and I think informative.
In the wake of chess AI getting so strong, a style of play called Freestyle chess developed, and from it a "Centaur" model appeared: a team of a human and an AI system working together to play the game – and it's immensely powerful. There's an article from 2010 that outlines more of Kasparov's views on this, and it's worth a read if you're interested. I'm also certainly not the first to see this pattern; there are articles in the Huffington Post about Centaur Chess, from the BBC in 2015, and from TechCrunch last year that draw the same parallels.
The gist of the thesis is that current AI and human cognition aren't strong in the same areas, and each tackles tasks in very different ways. If you combine the two in a manner that exploits the strengths of each, you get massive capability and productivity improvements. It is exactly this pattern that I see as the future for quite a number of jobs.
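To make that division of labor concrete, here's a toy sketch in Python. None of this comes from a real engine; the moves, scores, and the `evaluations` dictionary are all invented for illustration. The shape is what matters: the machine scores everything quickly, and the human applies judgment to a short, legible list.

```python
# A toy sketch of the "centaur" division of labor: the machine does broad,
# fast evaluation; the human makes the final call. The moves and scores
# below are made-up stand-ins for a real chess engine's output.

def propose_candidates(evaluations, top_n=3):
    """Machine strength: score everything, surface only the best few."""
    ranked = sorted(evaluations.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

def human_selects(candidates):
    """Human strength: final judgment over a small, explained set of options."""
    for i, (move, score) in enumerate(candidates):
        print(f"{i}: {move}  (engine score {score:+.2f})")
    choice = int(input("Pick a move: "))
    return candidates[choice][0]

# Toy data standing in for a real engine's evaluations.
evaluations = {"Nf3": 0.42, "e4": 0.38, "d4": 0.35, "h4": -0.90}
best = human_selects(propose_candidates(evaluations))
print(f"Playing {best}")
```

Notice that the human never wades through every legal move; the machine's filter plus the human's judgment is the whole trick.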
I don't know exactly what form this will take, but I can extrapolate some obvious pieces – two attributes that will show up more and more.
The first is "explainability". Somewhere tucked in between psychology, the cognitive sciences, and the art of user experience and design is the place where you get amazing transfer of information between computing systems and humans. If you have an AI system predicting or analyzing something, then sharing that information effectively is critical to effective coordination. Early examples are already prevalent in cars, planes, and military combat vehicles, which have seen even more of this kind of investment. Heads-up displays (HUDs) are so useful that they're a common staple of video games today, and with the popularity of first-person shooters, we're effectively training people en masse to understand the conventions of assistive displays.
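To show what I mean by explainability in the smallest possible way, here's a sketch of my own (the features and weights are made up) of a linear scoring model that reports not just its prediction but why. For a linear model, each feature's contribution is simply its weight times its value, so the explanation falls straight out of the math.

```python
# A tiny explainability example: a linear risk score that also reports
# each feature's contribution (weight * value), so a human can see *why*
# the score is what it is. Features and weights are invented toy data.

weights = {"engine_temp": 0.8, "vibration": 1.5, "hours_since_service": 0.3}
reading = {"engine_temp": 0.9, "vibration": 0.2, "hours_since_service": 2.0}

contributions = {f: weights[f] * reading[f] for f in weights}
score = sum(contributions.values())

print(f"Failure risk score: {score:.2f}")
for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>22}: {contrib:+.2f}")
```

A bare number tells the human nothing; the ranked contributions tell them where to look, which is the coordination I'm talking about.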
But don't get too focused on the visual aspects of explainability – the directional haptics and sounds the Apple Watch uses while navigating are another example: very different from a heads-up display, but just as effective. Conversational interfaces like the ones you see with Alexa, Siri, or Google Home are all steps toward broadening these interactions, and the whole field of affective computing reaches into understanding, and conveying, information at an emotional level.
Because this is so different from how AI has been portrayed in movies and in public opinion, some folks try to highlight the distinction by calling it Intelligence Augmentation, although that label seems to be a corporate marketing push that hasn't gained much traction.
The second is collective intelligence. This was more of a buzzword in the late 90's, when some of the early successes really started appearing. The obvious, now common, form of this is the recommendation engine, but that's really only the edge of where the strength lies. By collecting and collating many opinions, we can leverage collective knowledge to help AI systems know where to start looking for solutions. That includes human bias – and depending on the situation, bias can be helpful or hurtful to the solution. In a recommendation engine, for example, it's extremely beneficial.
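As a concrete illustration of collective knowledge at work, here's a toy item-based collaborative filter, the core trick behind many recommendation engines. The ratings matrix is invented; the point is that items liked by the same crowd of users end up "similar", and that similarity is the crowd's aggregated opinion.

```python
# A toy item-based collaborative filter. Rows are users, columns are items,
# 0 means "unrated". The ratings are invented for illustration.
import numpy as np

ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns: items rated alike by the same
# users come out similar -- the crowd's collective judgment, encoded.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user_row, top_n=2):
    """Score unrated items by their similarity to the items this user rated."""
    scores = similarity @ user_row
    scores[user_row > 0] = -np.inf   # don't re-recommend rated items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(ratings[0]))   # items the first user's "crowd" would suggest
```

Every bias in those ratings flows straight into the similarity matrix – which, for recommendations, is exactly what you want.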
I think there are secondary forms of this concept, although I haven't seen research on them, that could be just as effective: using crowd-sourced knowledge to know when to stop looking down a path, or even a whole category, of searches. I suspect a lot of that kind of use is happening right now in guiding the measures of effectiveness of current AI systems – categorization, identification, and so forth.
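Since I haven't seen research on this, treat the following as pure speculation made concrete: a sketch where crowd-sourced priors tell a search when a path isn't worth pursuing. Every category, prior, and threshold here is invented.

```python
# Speculative sketch: crowd-sourced priors used to prune a search early.
# All categories, priors, and costs below are invented for illustration.

crowd_prior = {          # fraction of past users who found answers here
    "hardware": 0.62,
    "drivers": 0.25,
    "cosmic rays": 0.01,
}

def search_category(name):
    """Stand-in for an expensive search over one category."""
    print(f"  ...searching {name}")

def crowd_guided_search(priors, give_up_below=0.05):
    """Search promising categories first; skip ones the crowd rarely used."""
    for name in sorted(priors, key=priors.get, reverse=True):
        if priors[name] < give_up_below:
            print(f"Skipping '{name}': crowd prior {priors[name]:.2f} too low")
            continue   # the crowd says: stop looking down this path
        search_category(name)

crowd_guided_search(crowd_prior)
```

Knowing where *not* to look is cheap to encode and, I suspect, just as valuable as knowing where to start.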
There is a lot of potential in these kinds of systems, and the biggest problem with them is that we simply don't know how best to build them or where to apply them. There's a lot of work still outstanding just to identify where AI assistance could improve our abilities, and even more work in making that assistance easy to use and applicable to a wide diversity of people and jobs. It's also going to require training, learning, and practice to use any of these tools effectively. This isn't downloading "kung fu" into someone's head; it's more like giving Archimedes the lever he wanted to move the world.