AI: Savior? Demon? Or Boring Kubrick Film?

[Image: rogue AI poster. Credit: nood2708 via DeviantArt]

We just can’t trust Artificial Intelligence. At least that’s the claim rampant across Western pop culture.

AI run amok is a well-worn Hollywood trope, from the eponymous HAL 9000 in 2001: A Space Odyssey to the more recent AI robot Ava in the highly acclaimed film Ex Machina (Oscar-caliber performance by Alicia Vikander, just saying).

You know the story: In the spirit of extreme hubris, a mad and/or self-deluded scientist cooks up a machine that passes the Turing test. The AI entity (usually stuck with some acronym moniker) is declared intelligent (a label many would say is purely subjective). The world rejoices. It's implied the AI can solve all of humankind's maladies.

But then … Something. Goes. Wrong. Through some glitch, often unexplained or glossed over, the AI decides humans are not worth serving, leading to the wholesale slaughter or enslavement of its Homo sapiens masters. Only the Keanu-esque action hero and the sexy PhD can save our species … with explosives. And there is much rejoicing — yea.

But is this vision true to reality? Should we assume that AI would adopt the same Will to Power sickness that has defined our species from Tutankhamen to Trump?

I remain agnostic on where the AI hammer will come down, but I lean toward technological optimism. Perhaps it's too early to tell whether emergent AI will fall on the side of the OMG, We're All Going to Die Coalition or the Hurrah! The Machines Will Save Us Party.

Now at bat for Team Doom-and-Gloom, we have inventor/innovator/geek deity Elon Musk. In 2014 remarks reported by The Washington Post, the Tesla CEO urged extreme caution in AI adoption, calling it a potential existential threat.

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

Valid points. But keep in mind that many of Tesla's futuristic ideas (self-driving cars, Smart Cities, massive solar infrastructure) could be considered "summoned demons" as well if deployed willy-nilly. And comparing the regulation of AI to a demonic exorcism is a shaky analogy: presumably, a demon has no kill switch.

On Team AI-is-Neato, we have Guru Banavar, IBM’s chief science officer of cognitive computing. Writing for Harvard Business Review, Banavar opines:

We have never known a technology with more potential to benefit society than artificial intelligence. We now have AI systems that learn from vast amounts of complex, unstructured information and turn it into actionable insight. It is not unreasonable to expect that within this growing body of digital data — 2.5 exabytes every day — lie the secrets to defeating cancer, reversing climate change, or managing the complexity of the global economy.

Given Banavar's impressive expertise in cognitive computing, I tend to respect his opinion over Musk's. Make no mistake: I think Elon has made, and will continue to make, some of the most beneficial and progressive innovations of the century. At the same time, Banavar has obviously given the topic deeper, more considered thought (not to mention he bears the same title as Mr. Spock!).

Banavar goes on to point out that our trust in AI will have to be earned incrementally. "We trust things that behave as we expect them to," he writes. "AI systems must be built from the get-go to operate in trust-based partnerships with people."

Barring unexpected leaps in our current technical understanding, we won't be interacting with high-level AI — be they servants or masters — for at least another decade. Until then, the baby steps we're taking give reason for optimism.

Example: Google Research announced the development of a deep learning system that could enable early detection of diabetic retinopathy, a diabetes-related cause of blindness. "Screening methods with high accuracy have the strong potential to assist doctors in evaluating more patients and quickly routing those who need help to a specialist," the researchers concluded. It's not a home run for AI research, but it's a solid single.
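For the curious, here is a minimal, hypothetical sketch of what a deep learning screening model of that general kind can look like, written in Python with the Keras API in TensorFlow. This is not Google's actual system; the input size, layer choices, and variable names below are illustrative assumptions only.

    # A minimal, hypothetical sketch of a deep-learning retinal screening model.
    # NOT Google's system: just a toy Keras CNN assuming retinal photos resized
    # to 299x299 RGB and labeled 0 (no retinopathy) or 1 (referable retinopathy).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_screening_model(input_shape=(299, 299, 3)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # probability of referable retinopathy
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        return model

    # Hypothetical usage with a labeled set of retinal images:
    # model = build_screening_model()
    # model.fit(train_images, train_labels, validation_split=0.1, epochs=10)

The point of the sketch is simply that a screening model of this sort outputs a probability, which a clinic can use to route likely cases to a specialist, exactly the assistive role the researchers describe.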

So, what have we learned? The quest for meaningful AI is evolving; some would even say progressing. Expect to see deeper conversations about the real need for regulation and policing, from both the right and the left. Expect the alarmists to scream louder and more incoherently. Either Banavar is correct and AI will join us in creating a better, more sustainable world, or Musk's vision of demonic hell-machines enslaving us and stealing our gold (or whatever) will come to pass. The difference between the two opinions? Demons don't exist.
