
Sunday, December 29, 2013

Terminator Components To Be Mass Produced in 2014 To Accelerate Development


A new arms race to produce the smartest 'bots with the greatest autonomy, within the budgetary reach of all the developed nations

The first units ten years ago were what you would expect. As Moore's Law kicks in, they are going to improve exponentially. Anybody who thinks they are going to remain simple drones under the control of an operator is living in a dreamworld. They are going to fulfill the worst nightmares of science fiction writers over the past fifty years, because human beings are vicious, nasty, crazy creatures who love to kill other human beings with minimum risk to themselves. It is horrible to contemplate and it is also true. That is the way they are. It is not the way Neanderthals were, and the only reason we do not all live in mud huts and eat our meat raw is the 5% of the population who are the descendants of Abel and not Cain.
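
For a rough sense of what "exponentially" means here, a back-of-the-envelope sketch in Python (the ~18-month doubling period is my assumption, the usual textbook figure, not anything stated in the post):

```python
# Rough Moore's Law compounding sketch.
# ASSUMPTION: capability doubles roughly every 18 months (1.5 years);
# this is the textbook figure, not a claim made in the post above.
DOUBLING_PERIOD_YEARS = 1.5

def capability_multiplier(years: float) -> float:
    """Approximate improvement factor after `years` if the doubling holds."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for years in (5, 10, 20):
        print(f"{years:>2} years -> ~{capability_multiplier(years):,.0f}x")
    # 10 years works out to roughly a 100x jump, which is the scale of
    # change the paragraph above is gesturing at.
```

On that assumption, the decade since the first crude units already buys roughly two orders of magnitude of improvement, which is why "simple drones forever" is a hard position to defend.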

Some military think tanks will make strong arguments that they should not be built to look like human soldiers. These will be very rational, well-thought-out reasons for keeping them looking like machines. The military brass will ignore these arguments because manboons are not rational or learning animals. Third and fourth generation units will start to look as close as possible to real human soldiers, because it makes killing the enemy more fun and more personal, and that makes it an inevitable development. I know how Homo Sapiens thinks and it isn't pretty at all. They are bad eggs and they will want human-shaped robots because it makes the game of war much more fun for them, in the way they crave "fun."

4 comments:

Some dude said...

Well, look at the bright side: human-shaped machines will be easier to destroy.

And it's not the larger bots that worry me; it's the tiny ones.

Publius said...

In an earlier post of yours, I asked for your opinion of the book Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat.

It is extremely alarming (I just started it, but I get the gist). If strong AI research can really produce an AI with human-level general intelligence, he hypothesizes that the AI could, and will, evolve into a superintelligence hundreds or thousands of times smarter than we are. If this happens, and it is unleashed, it is game over for humanity, especially combined with nanotech. The AI would probably convert us all into more infrastructure for itself.

What do you think? Even the scenario of AI-based killer robots that are only as smart as us, or even dumber and deadlier, is catastrophic.

If AI research could really produce an intelligence explosion that leads to extinction, shouldn't these researchers be stopped at any cost?

I am still hoping that the human mind/brain is not a computer, and that AI research will therefore fail to produce a superintelligence.

Other than stopping these researchers physically, which probably won't happen - they do the bidding of govcorp, after all, and are well-protected in high-security labs - the only hope would lie in a fast collapse of industrial civilization, so that the research projects simply die for lack of funding and energy.

samhuih said...

In the "known space" sci-fi novels of Niven and Pourelle all AI's are useless because they go crazy. I have no doubt that a machine can be made to think better than a Man but I'm not so sure it won't go crazy. If they do make one and it doesn't go crazy we're done for.

Some good reads in the Known Space series.
"Ringworld" series
"War World" series
"There Will Be War" is a very good series.

Lots of these were written and/or edited by Larry Niven and/or Jerry Pournelle.

The most frightening thing is that any group which doesn't try to make an AI might fall behind others who succeed. Think of the power and profits from controlling an almost omnipotent intelligence. So even if the risks are great, they are unknown, and stopping might be more dangerous than building one, if the other guy gets one first. Very scary.

An awful idea. Maybe the only purpose of humans is to build an AI.

Publius said...

More supercomputer news of concern:

http://www.washingtonpost.com/world/national-security/nsa-seeks-to-build-quantum-computer-that-could-crack-most-types-of-encryption/2014/01/02/8fff297e-7195-11e3-8def-a33011492df2_print.html
