Mark Cuban on dangers of A.I.: If you don’t think Terminator is coming, ‘you’re crazy’


Cuban doubled down: the government should be involved in funding artificial intelligence, he said, because of the seriousness and scope of the threat posed by autonomous weapons.

“The future of warfare is on, built around, through, up and down artificial intelligence. If we don’t win that war, the next generation that are sitting here having this conversation for the next generation of high school kids? It is going to be, how are we going to catch up? That’s why we need to invest,” Cuban says. “I think we are truly at threat from autonomous weapons unless as a nation we either come to agreements with other nations on this — and we have the ability to monitor them, you know, trust but verify — but as big a threat as nuclear is, AI is even bigger.”

Cuban’s fears are similar in scope to those of another high-profile tech entrepreneur and billionaire: Elon Musk. The Tesla and SpaceX boss has said that artificial intelligence is both “far more dangerous than nukes” and poses “vastly more risk than North Korea.”

Earlier in July, Musk, along with all three co-founders of Google’s DeepMind, was among the thousands of individuals and almost 200 organizations that publicly committed not to develop, manufacture or use killer robots. They did so by signing a pledge published and organized by the Future of Life Institute, a Boston-based nonprofit that researches the benefits and risks of artificial intelligence, along with other existential issues related to advancing technology.

Even as companies pledge not to be involved with killer robots, Cuban says he is sure the technology for autonomous weapons will become commonplace.

“We already have the ability to have weapons think,” Cuban says. “There is already the ability for autonomous weapons and they are only going to go further, further, further as processors get more advanced. Once we solve the [portable] battery problem, so these Terminators can be out in the field for an extended period of time…. If we don’t win that battle, this world is upside down. That’s what scares the s— out of me.”

The potential of artificial intelligence to become dangerous is why it is one of a few select issues the government should be involved in regulating, says Cuban. Other issues the billionaire tech entrepreneur says should be regulated by the government include health care and climate change.

Artificial intelligence won’t just change the way war is waged. It will change the job market, too, Cuban says.

“Literally, who you work for, how you work, the type of work you do is going to be completely different than your parents within the next 10 to 15 years,” Cuban says. “Even if you have no interest in computers, no interest in programming, it doesn’t matter. Just like you laugh at your parents who might or might not understand Snapchat and Instagram and Twitter and the like, you are going to have to understand AI or people are going to laugh at you.”

Disclosure: CNBC owns the exclusive off-network cable rights to “Shark Tank.”
