On Time Tech

Is AI Getting Too Powerful?

Written by Lance Stone | Aug 31, 2017 10:54:00 AM

Artificial Intelligence (AI) is at the forefront of new developments in computer science. But is it safe? According to experts, it could be "the best, or the worst, thing ever to happen to humanity."

Devastating battles are depicted in dystopian sci-fi movies, where computer-controlled armies defeat human-led forces. Luckily for our sake, it's all just a game. At The International, Valve's largest Dota 2 tournament, an OpenAI bot defeated top professional Dota 2 players, and it did so with only two weeks of practice.

If you don't know about Dota 2, it's one of the most popular multiplayer online battle games, played by professional gamers worldwide. Its popularity is due to its complexity, which requires mastering multiple strategies along with a deep level of understanding and skill.

AI experts believe that OpenAI's performance at The International was significant because it displayed the true power of today's AI, even more so than when the AlphaGo bot defeated a South Korean Go champion. Unlike Go, which is categorized as a perfect-information game where all players have access to the same information, Dota 2 involves a great deal of hidden information, which forces players to react quickly and adapt their strategies.

So, what does all this game playing have to do with you and your business? As these games show, computers are now smart enough to beat humans at highly complex tasks. The IT experts at Intivix believe this is important. Not only is technology becoming an integral and essential part of modern life, but it's growing more powerful every day, and that is scaring some very intelligent people.

Should you be worried?

It's true. AI is rapidly improving, and the technology behind it is being incorporated into every part of our lives. From book recommendations on Amazon to Siri and other virtual assistants, AI helps us with many daily tasks.

Although AI is in its infancy, the technology can reproduce much of what we thought was only possible in the science-fiction movies of 40 or 50 years ago. In another 40 years, will AI advance far enough to turn one of these fictional nightmare scenarios into reality?

People are already questioning whether building better AI is such a good idea. Elon Musk expressed his concern to the National Governors Association, telling attendees that despite his warnings about the dangers of unchecked AI development, authorities have done little to protect against its misuse. Musk called AI the "biggest risk we face as a civilization" and called for preemptive regulation of the industry.

Musk isn't the only scientist who predicts AI may lead to disaster. Prominent theoretical physicist Stephen Hawking fears that once the singularity is achieved (the point where machines are more intelligent than humans), we won't be able to prevent AI from acting independently. Hawking shares Musk's view that if lawmakers don't put regulations into place, advanced AI will become "either the best, or the worst, thing ever to happen to humanity."

Others say the AI doomsayers have been watching too many movies and that the idea of AI overtaking humans is ludicrous. Professor Nick Jennings, vice-provost of research at Imperial College London, is one of them. He states that while it's possible to develop AI that excels at a single task, creating AI capable of human-like intelligence across multiple subjects is beyond the ability of today's scientists, and he doesn't foresee technology advancing that far for a long time.

What worries more people than the "rise of the machines" nightmare becoming reality is the continuation and acceleration of a trend that began in the 1980s. Andrew McAfee, an economist at MIT, describes a massive decline in the number of middle-class jobs in the US. He believes the coming AI revolution will greatly accelerate that decline, not only for middle-class workers but for all workers.

Others disagree. Many economists look to history to predict how new technology will affect the economy. They point to the Industrial Revolution: when machines displaced workers from factory jobs, those workers found new and better jobs created by the introduction of the machines, such as mechanics. These views are supported by a 2011 study from the International Federation of Robotics, which found that for every one million robots added to the workforce, three million new jobs were created.

What are people doing to prevent a horror scenario from unfolding?

So, what are we doing to ensure AI is developed in a moral and safe manner? The quick answer is: not enough. Most people working in the AI field disagree with those who are looking for ways to regulate the industry, and at present there is no AI regulatory legislation being worked on in Congress. However, some are actively trying to address the problem. DeepMind, which Google acquired several years ago, conducted a study on how to develop a 'big red button' that could shut down rogue AI in the future.

Whether or not you believe AI has the potential to be dangerous, there's no denying that it is a powerful technology that will change the world. It's important to be part of the discussion about how companies develop AI and how it's regulated. We all have a responsibility to learn more and stay educated.