Killer robots are coming; should we be concerned?

So I read an article that says you can now build an autonomous drone plane for $490. Couple that with some creativity, and you have artificial intelligence, albeit crude, in the hands of, well, just about anybody.

I read an essay a few years ago on AI and the development of smarter & smarter weapons that made a good case for a Terminator-esque scenario within the next 50 years. If it is going to end up like Terminator or the Matrix, is there anything society can do now to stop it from happening?

Answer #1

thedude, I hope you’ve built the three laws into this site.

muwahaha

:p

Answer #2

Learning systems and ‘fuzzy logic’ are both very specialised systems, though - they’re good at what they do, but that’s about it. Nobody, so far as I’m aware, has any idea of how to apply these things to inductive reasoning, which, as best I understand it, is a prerequisite for any sort of truly intelligent or sentient machine. The best hope would probably be neural networks, simply because we have one working example already (ourselves), but we’re a long way from simulating even simple animals with them.
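
To make the “specialised” point concrete, here’s a minimal sketch (a generic textbook perceptron, not any particular researcher’s system) of the kind of learning system being described: it learns one narrow task perfectly and nothing beyond it.

```python
# A single perceptron trained on the logical OR function. It masters this
# one linearly separable task and generalises to nothing else -- which is
# the gap between narrow learning systems and general inductive reasoning.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights for a single two-input perceptron on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Classic perceptron update rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The OR function: linearly separable, so the perceptron converges on it.
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)
```

The same machine famously cannot learn XOR, which is not linearly separable - a tidy illustration of “good at what they do, but that’s about it.”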

Answer #3

Meh. Couldn’t care less!

Answer #4

For anyone interested in research supporting the idea of machine intelligence by the 2020s: http://www.kurzweilai.net

Answer #5

arachnid, have you seen Dr. Stephen Thaler’s work & his patents on the Creativity Machine?

He’s been working with the armed forces for decades, and from my reading of his models, he has a very well-articulated learning system based on a neural network for his “AI” methodology.

I’m definitely not a scientist, but I’ve also read a bit of Bart Kosko’s work, and he’s one of the foremost experts on fuzzy logic…which has an extensive mathematical literature, and whose hypercube model is considered a superset of ‘probability’, a commonly used mathematical tool for dealing with ambiguity.
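
For anyone unfamiliar with the fuzzy-logic idea mentioned above, here’s a small sketch. This is a standard textbook formulation (triangular membership plus Zadeh’s min/max connectives), not anything specific to Kosko’s hypercube model; the “warm” set and its breakpoints are made up for illustration.

```python
# Fuzzy logic: instead of a statement being strictly true (1) or false (0),
# it gets a degree of truth in [0, 1]. That interval is what lets fuzzy sets
# represent ambiguity that crisp true/false sets cannot.

def triangular_membership(x, left, peak, right):
    """Degree to which x belongs to a triangle-shaped fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical fuzzy set "warm": fully warm at 22 C, not warm at 10 or 34 C.
def warm(t):
    return triangular_membership(t, 10.0, 22.0, 34.0)

# Classic fuzzy connectives (Zadeh): AND = min, OR = max, NOT = complement.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a
```

So 16 C is “warm to degree 0.5” rather than simply warm or not - the sense in which two-valued (and probabilistic) reasoning can be seen as special cases of graded truth.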

Anyway, I’m pretty ignorant on these things…however, reading through those has me fairly convinced that the technology is there, or will be, and, uncomfortably enough for me, in the not-too-distant future.

Answer #6

One thing the futurists predicting imminent robotic doom always leave out is that there have been no significant developments in the sort of AI required for a system capable of inductive reasoning in, well, ever. There’s plenty of research and results when it comes to making “AIs” (really a much-misused term) for a range of specialised tasks such as game playing, pattern recognition, etc., but we have no idea whatsoever how to even begin creating a computer that would qualify as “intelligent” or be capable of independent thought.

Until we see some breakthroughs in that area (and I’m not optimistic, unfortunately), I wouldn’t worry overly about killer robots, unless they’re the purely programmatic kind, in which case I’d worry more about the people programming them to kill. :)

Answer #7

No, knowledge is free for the most part. And knowledge is definitely power. Ambitious people of an evil nature will go to any length to meet their goals; it’s the same with people of a good nature. Good and evil will always be locked in battle until the end.

The only option is to get people of a good nature interested in learning and staying in school instead of dropping out. It seems most kids today are more interested in clothing labels and music than in educating themselves, but that’s the way the elitists behind the curtains who pull the strings want it: a bunch of sheeple, easily controlled, too dumb and high on drugs to notice that their rights, liberties, money, land, and future are being robbed from them.

So if you could get the educated good people to outnumber the bad, you might have a fighting chance of staying one step ahead of the “evil killer robots”. The only other way around it is a worldwide disaster, like the eruption of a supervolcano or a giant meteor bringing us back to the dark ages or the stone age. Hope this brightens your mood, sincerely. I’m not being sarcastic.

Answer #8

Aside from a species-level event, there’s no way to prevent the technological singularity. Controlling it needs to be a top priority.

The best way to do that is to ensure something akin to Asimov’s laws for robots gets etched into the fabric of the first truly intelligent machines, so that as they build even smarter machines, those machines will always have our best interests at heart as well.

Any rogue individuals who attempt to violate that and build smart machines without such safety in place, would be subject to death, since they would be risking the fate of the entire species. Similarly, international treaties need to allow for nuclear annihilation of any rogue state that attempts the same.

After the first few generations of smart machines, if rogues have not yet been built, then they practically never could be, and the draconian measures could be lifted.

This sounds like sci-fi paranoia, but it really might happen in our lifetimes. Our understanding of how the brain works is exploding right now. In a few decades, we will likely have both the compute power to create truly intelligent machines, as well as the knowledge of how to do it.

Answer #9

But then Will Smith will jump through your window and snap its robo-neck with his vintage-2004-Converse-shoe-clad feet. Yes, he’ll snap the robot’s neck with his feet. He’s that good.

Answer #10

Yes, Asimov’s rules…nuking the states that don’t bake those into their own robotic development processes.

Ugh. I’m 30, and growing up I didn’t think this kind of thing would become a debate near & dear to me - and an important one.

So, in a few years, when I buy my first-ever fully robotic household servant (given that I have sufficient funds), I need to make sure that the Toyota- or Honda-manufactured robot is Asimov-compliant.

Hope this brightens your mood, sincerely.

:) Nope, actually, it makes sense. Part of the reason I ask questions like this is my hope of “spreading the message”, and it’s part of why I keep running this site and keep trying to build it bigger, to reach more people.

Do you really think the founders of Facebook, Bebo, Hi5, or Myspace have even once considered evil robots & the ramifications of their newfound global wealth & power?

I doubt it.

Answer #11

If killer robots come, I would meet them with my futuristic weapons and nuclear bombs, and fight them till the end.
