
Let's Hold Off on the Killer Robots

Author
Message
arirish
Admin Group

Joined: June 19 2013
Location: Arkansas
Status: Offline
Points: 39215
Topic: Let's Hold Off on the Killer Robots
    Posted: July 28 2015 at 11:26am

Musk, Woz: Let's Hold Off on the Killer Robots

By Stephanie Mlot
July 28, 2015 12:30pm EST

More than 1,000 artificial intelligence and robotics researchers endorsed an open letter warning against the technology.

Some of the industry's top players have signed on to support a ban on autonomous weapons systems.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and professor Stephen Hawking are among the 1,000-plus artificial intelligence and robotics researchers who endorsed an open letter warning against the technology.

Penned by the Future of Life Institute (FLI) and presented today at a conference in Buenos Aires, the letter warns that "deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high."

While autonomous weapons could make the front line safer for soldiers, they may also lower the threshold for going to battle and would likely result in more human casualties, according to the FLI.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," the letter said.

The authors argue that if any major military power were to forge ahead with AI weapon development, it would lead to a global arms race, much like the one that erupted over the atom bomb.

Still in its infancy, artificial intelligence is used by tech giants like Google and the Defense Advanced Research Projects Agency (DARPA). The latter, which boasts a strong focus on "rethinking military systems," has already put AI to use in its Ground X-Vehicle Technology program. The semi-autonomous armoured tank was built last year with the intention of creating a more mobile, less expensive combat platform.

Still, not everyone is convinced of these systems' benefits.

"[T]he endpoint of this technological trajectory is obvious," the FLI said in its letter. "Autonomous weapons will become the Kalashnikovs of tomorrow."

Cheaper and easier to construct than nuclear weapons, autonomous weapons are also easier to sell on the black market. And, really, no one wants armed quadcopters falling into the hands of terrorists, dictators, or warlords.

"We therefore believe that a military AI arms race would not be beneficial for humanity," the FLI said. "There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

Last fall, Tesla and SpaceX chief Elon Musk declared that "with artificial intelligence, we are summoning the demon." Bill Gates expressed similar fears, saying during a Reddit Ask Me Anything in January that he is concerned with "super intelligence."

Former Apple exec Steve Wozniak, meanwhile, had a change of heart about the technology: Initially terrified of AI, he recently backtracked, saying it could eventually benefit the human race. Still, Woz signed the FLI letter, alongside Skype co-founder Jaan Tallinn, professor Noam Chomsky, and Google DeepMind CEO Demis Hassabis.

A founding member of the Future of Life Institute, Tallinn is joined in the group by Scientific Advisory Board members Hawking and Musk (as well as actors Alan Alda and Morgan Freeman).


"All technologies can be used for good and bad," Toby Walsh, professor of AI at the University of New South Wales in Australia, said in a statement. "Artificial intelligence is a technology that can be used to help tackle many of the pressing problems facing society today—inequality and poverty, the rising cost of health care, the impact of global warming… But it can also be used to inflict unnecessary harm."

"We need to make a decision today that will shape our future and determine whether we follow a path of good," he added.

In April, a report from Human Rights Watch and Harvard Law School expressed similar concerns.
http://www.pcmag.com/article2/0,2817,2488632,00.asp
Buy more ammo!
jacksdad
Executive Admin

Joined: September 08 2007
Location: San Diego
Status: Offline
Points: 47251
Posted: July 28 2015 at 5:26pm
AI - specifically Artificial Super Intelligence - is a worry. We've no idea how the world would be changed by the emergence of machine intelligences with IQs so far beyond ours that they would be measured in the thousands or more. Scary stuff when you start researching it, and potentially not that far off.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
"Buy it cheap. Stack it deep"
"Any community that fails to prepare, with the expectation that the federal government will come to the rescue, will be tragically wrong." Michael Leavitt, HHS Secretary.
Guests
Guest Group
Posted: July 28 2015 at 5:55pm
This says it all. From the movie Terminator 2: Judgment Day:

John Connor: We're not gonna make it, are we? People, I mean.

The Terminator: It's in your nature to destroy yourselves.

John Connor: Yeah. Major drag, huh?
Kilt-3
Valued Member

Joined: February 04 2015
Status: Offline
Points: 1365
Posted: August 04 2015 at 5:22pm
Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.