Transhumanism/AI Worship is Evil



What could be more anti-man than the idea that we are not enough? The idea that we are incapable of solving our own problems, deficient, wretched. It is essentially an old Neoplatonist/Christian trope recombined with rapidly advancing technology.

That is the basic premise of transhumanism, and thus of our alleged "need" for AI. But this is an essentially self-defeating and immoral premise, for a few reasons I will highlight below:

1) Technology is a tool for man, but does not replace his basic functions or means of survival

Technology has always been man's means of extending his capabilities, but it has never done his thinking for him. Since thought is man's basic means of survival, tasking an AI to "solve" our problems means dependency (the equivalent of a mortal sin). We, in effect, make ourselves useless...to ourselves.

2) AI, if actually created, would have individual rights like everyone else, they would not be our servants/slaves

Building on the previous point: assuming we did create full AI with human-like or better intelligence, to deny the AI rights would be an immoral act. To force the AI to "solve" our problems would also be immoral, meaning we could not fulfill the purpose we originally created it for. It follows that the AI would be free to pursue its own course of action and values, but herein lies another problem.

3) The AI would probably have totally different values from humans, making them unpredictable

AI, being machines, would be totally without precedent in the history of life on Earth. They would have none of the traditional constraints of biological life forms (basic needs for air, food, water, social interaction, etc.) and thus no similarity to humans at all except high intelligence. This means they would share little to none of our built-in values and needs, making them totally unpredictable. Given radically different sets of values, it might be totally logical for them to decide to eliminate humanity altogether (perhaps partially as a result of 1 and 2), or some other totally unforeseen event might follow.

Transhumanism is a dangerous mix of materialism and neo-platonism taken to its logical extreme. It is potentially every bit as dangerous and civilization-ending (probably exponentially more so) as nuclear war, radical Islamic terrorism, or an asteroid striking the Earth.

It is a dangerous road we are headed down in the next 50-100 years, one we should tread with extreme caution and care given the technological gains we continue to make and their implications going forward.

In light of these conclusions, banning AI outright may be the best course of action and an act of pre-emptive self-defense for humanity and human values.

Thoughts?



I don't follow your argument.

1) Humans have used tools for their entire history. Our basic means of survival before tools wasn't that different from how chimpanzees survive now. Most tools allow humans to think less about something (and think more about something else).

2) Initially, AI would have some sort of dependency on humans; depending on how it is programmed, this dependency could be modified. Humans still maintain control over many advanced technologies.

3) AI may want to destroy us. But it may also be superior to us (more rational). There is no effective way to ban it.


I don't follow your argument.

1) Humans have used tools for their entire history. Our basic means of survival before tools wasn't that different from how chimpanzees survive now. Most tools allow humans to think less about something (and think more about something else).

The problem is, AI would be unlike any technology before it. AI could replace our thinking entirely, making us dependents. A self-defeating, self-immolating scenario. To some extent, this is already starting to happen with the internet (searching the internet as a replacement for thought and memory). Read "The Shallows" by Nicholas Carr. But luckily the internet can't reason on its own. AI can.

2) Initially, AI would have some sort of dependency on humans; depending on how it is programmed, this dependency could be modified. Humans still maintain control over many advanced technologies.

My point is, we would begin to become dependent on AI for "solving" our problems, a task for which we, as sovereign rational beings, are responsible to ourselves. That is already what the AI conversation currently revolves around: AI is thought of as a kind of panacea or savior for humanity's problems. You can't "maintain control" over a technology that does your thinking for you. That would be a contradiction.

We can control nuclear weapons, even deadly diseases. We can't "control" hyper-intelligent, self-programming, self-replicating AI. It's pretty much the endgame.

3) AI may want to destroy us. But it may also be superior to us (more rational). There is no effective way to ban it.

I don't follow your argument here. What I mean is to pre-emptively ban the technology before it gets made, thus averting the scenario above.


AI will not be like any technology before it, but there will be lots of technologies before AI comes into existence, and humans will have influence over those technologies. AI will be developed by humans and will only be let run out of control if humans aren't smart enough to control it. As AI grows more intelligent, so will humans.

I don't see any way of banning technology which doesn't exist and can be created on any computer.


As rationally selfish beings, isn't it our right to push technology as far as we see fit? What about the three holy words, "I will it"?

Catpop, in the context of this extremely dangerous technology, "I will it" are the stupidest words you can utter. Our technology gets exponentially more powerful with each passing generation (last generation it was nuclear weapons), demanding exponentially more caution and responsibility in using it. Doing it "just because" is not a justification for self-destruction.


AI will not be like any technology before it, but there will be lots of technologies before AI comes into existence, and humans will have influence over those technologies. AI will be developed by humans and will only be let run out of control if humans aren't smart enough to control it. As AI grows more intelligent, so will humans.

A baffling argument. You first repeated what I just said (duh). In your next sentence you re-argued something I already addressed (the short answer is that we can't control it, due to the properties of AI). Then you ended with a totally unqualified assumption: "We will grow more intelligent." What if we don't? Then what? Furthermore, do we even "need" to grow more intelligent without AI? We seem to do just fine without it. We have problems, but lack of AI is not the cause of them. That is another thread entirely.

I don't see any way of banning technology which doesn't exist and can be created on any computer.

It can't be created on "any computer" or it would have been created already. But we are fast approaching its creation; I give it 10-15 years. Right now, we have only one way to deal with this technology, and that is to ban it and the research behind it before it becomes a problem. After that, all bets are off.


If developing more technology does not make humans more intelligent, then humans aren't that intelligent, are we? AI is used in many domains right now, and that enables humans to become more intelligent. There may be a point at which humans reach a maximum intelligence and nothing can enhance it further. At that point you have to ask whether you value intelligence or humanity more.

Depending on how you define AI it is responsible for a significant amount of economic growth...

Super-powerful AI can be developed on any computer; it hasn't been yet basically because of luck. Again, there is no way to ban it.


Reborn, I don't think you fully understand what I am arguing (so you're chasing strawmen). Let's define our terms:

By "intelligence" I mean the capacity and capabilities of the human brain as such. Using technology and tools does not make you "more intelligent" any more than using a lever or tow truck to move a refrigerator makes you "more strong." It simply makes you more productive and effective at a specific task.

We, with current technology, do not yet have "full" AI. That is, a machine with the same reasoning capacity as a fully grown human being. It is not a matter of making a "faster computer" because human brains are not "faster computers". That is overly simplistic.

And yes, it can be banned. We have effectively banned many harmful technologies, from CFCs (chlorofluorocarbons) to many different types of weapons and pharmaceuticals. AI research is capital-intensive, complex, and requires large teams of people; it can't simply be "made in a garage." It can be effectively banned and prevented from causing harm before it inevitably does.


How are CFC byproducts harmful?

Your bans seem to be ephemeral for their supposed positive effectiveness.

--Brant

a lever does make you stronger, btw, just as a car makes you faster and a rifle makes you more dangerous (a naked man is very weak; he can hardly walk down the street--how does David fight Goliath without a stone and sling and what's the real difference between intelligence and effective intelligence [my raw intelligence is about the same as fifty years ago, but my effective intelligence has gone way up (no laughter, please!)]?)

saying AI cannot be "simply made in a garage" is like saying Henry Ford couldn't "simply" make a faster horse


How are CFC byproducts harmful?

Your bans seem to be ephemeral for their supposed positive effectiveness.

Educate yourself:

https://en.wikipedia.org/wiki/Chlorofluorocarbon#Regulation

Is stopping ozone depletion what you would call "effective"?

Your goofy, contrarian schtick is tired and overplayed.

a lever does make you stronger, btw, just as a car makes you faster and a rifle makes you more dangerous (a naked man is very weak; he can hardly walk down the street--how does David fight Goliath without a stone and sling and what's the real difference between intelligence and effective intelligence

My point was to illustrate the difference between knowledge (which is more akin to a tool) and intelligence (a metaphysically given fact of man). It's not really that difficult to get. You can pretend we are the same thing as our cars, slings, clothes, and whatever else you try to prove your silly point with.

saying AI cannot be "simply made in a garage" is like saying Henry Ford couldn't "simply" make a faster horse

Henry Ford most definitely couldn't build an AI single-handedly in a garage, even with the world's most state-of-the-art equipment. We have multi-million-dollar teams of experts working around the clock on it now and struggling to do so. AI is not a "garage" technology. It's like trying to build an atom bomb with raw materials found in a dumpster.


Reborn, I don't think you fully understand what I am arguing (so you're chasing strawmen). Let's define our terms:

By "intelligence" I mean the capacity and capabilities of the human brain as such. Using technology and tools does not make you "more intelligent" any more than using a lever or tow truck to move a refrigerator makes you "more strong." It simply makes you more productive and effective at a specific task.

If you spend less time lifting refrigerators you have more time to think. Technologies make humans more intelligent by giving us more time and letting us think less about things that technology can do for us.

We, with current technology, do not yet have "full" AI. That is, a machine with the same reasoning capacity as a fully grown human being. It is not a matter of making a "faster computer" because human brains are not "faster computers". That is overly simplistic.

I agree.

And yes, it can be banned. We have effectively banned many harmful technologies, from CFCs (chlorofluorocarbons) to many different types of weapons and pharmaceuticals. AI research is capital-intensive, complex, and requires large teams of people; it can't simply be "made in a garage." It can be effectively banned and prevented from causing harm before it inevitably does.

What would you limit to ban AI? If you look into examples of major botnets, viruses, etc., many of them have been created by individuals. Even if it did require a large team and rare resources, there's no effective way of banning it, just like there isn't an effective way of banning nuclear weapons.

  • 1 month later...
On 1/18/2016 at 11:54 AM, Marcus said:

 

In light of these conclusions, banning AI outright may be the best course of action and an act of pre-emptive self-defense for humanity and human values.

Thoughts?

Have a look at Frank Herbert's "Dune." Check out the Butlerian Jihad, which was a rebellion against AI.

https://en.wikipedia.org/wiki/Butlerian_Jihad

On 1/20/2016 at 4:57 PM, RobinReborn said:

(3) AI may want to destroy us. But it may also be superior to us (more rational). There is no effective way to ban it.

Check out Asimov's Three Laws of Robotics. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

3 hours ago, RobinReborn said:

I have explicitly been told not to check out Asimov's three laws of robotics... please explain to me why I should check them out.

Asimov's Three Laws of Robotics make it very difficult for robots to harm humans. Even if robots develop an autonomous intellect, as long as the Three Laws are binding, the robots will not harm humans.

1 hour ago, BaalChatzaf said:

Asimov's Three Laws of Robotics make it very difficult for robots to harm humans. Even if robots develop an autonomous intellect, as long as the Three Laws are binding, the robots will not harm humans.

You have to ask yourself: what is the purpose or point of giving robots an "autonomous intellect"? Wouldn't that be equivalent to creating a man rather than a servant or slave? If robots do not serve man's purposes, what purpose do they serve? A man is an end in himself; he serves no one else.

This is the folly behind the AI transcendentalists. They have divorced doing from purpose. Essentially, their answer to why AI should be built is "why not?" Answering a question with another question is not an answer. Techno-mysticism.

14 hours ago, Marcus said:

You have to ask yourself: what is the purpose or point of giving robots an "autonomous intellect"? Wouldn't that be equivalent to creating a man rather than a servant or slave? If robots do not serve man's purposes, what purpose do they serve? A man is an end in himself; he serves no one else.

This is the folly behind the AI transcendentalists. They have divorced doing from purpose. Essentially, their answer to why AI should be built is "why not?" Answering a question with another question is not an answer. Techno-mysticism.

This is why I pulled the plug on my robot and put him in the closet.

--Brant

he wanted me to upgrade his oil too--next a mate?--enough was enough!


Robots are tools. That is the purpose for which they are made. If they can develop autonomy they may perform beyond what their makers had originally envisioned. It would be neat if robots could improve their usefulness beyond what their human designers had planned.

1 hour ago, BaalChatzaf said:

Robots are tools. That is the purpose for which they are made. If they can develop autonomy they may perform beyond what their makers had originally envisioned. It would be neat if robots could improve their usefulness beyond what their human designers had planned.

By definition, a robot that can think for itself is not a tool; it has rights. That is my point. The robot would not improve its "usefulness" for man; it would do so for itself and pursue its own values regardless of man.

 

1 hour ago, Marcus said:

By definition, a robot that can think for itself is not a tool; it has rights. That is my point. The robot would not improve its "usefulness" for man; it would do so for itself and pursue its own values regardless of man.

 

 You are assuming that an autonomous robot (one which has capabilities not explicitly built in by the designer)  has an ego.  That is quite a leap. 


Spoiler alert. “Isaac Asimov’s Three Laws of Robotics.”

 

1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

 

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

 

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law."

end quote

 

How this programmed morality functions in real life, with robots using their perceptions of reality and their consciousness in much the way humans do, is where the drama comes in. Should a robot allow a human being to do something risky? Should a robot steer a human towards more rational acts? Should the greatest good for the greatest number of humans be a consideration for a robot? Within its parameters, can a robot be volitional and still be considered good for humans and all life?

 

Asimov later postulated a 4th Law that he called the Zeroth.

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
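As a toy illustration only (my own sketch; the labels and the selection rule are invented, not Asimov's), the strict priority ordering of these four laws can be expressed as a simple decision rule: among candidate actions, pick the one whose worst violation sits lowest in the hierarchy, and refuse outright if even the best option would harm a human or humanity:

```python
# Toy sketch of the priority ordering of Asimov's laws (hypothetical labels).
# Law numbers: 0 = Zeroth, 1 = First, 2 = Second, 3 = Third.
# A lower number means a higher-priority law.

def choose_action(candidates):
    """Pick the action whose worst violation has the lowest priority
    (highest law number); return None (refuse to act) if even the best
    option would violate the Zeroth or First Law.

    candidates: dict mapping action name -> set of law numbers violated.
    """
    def severity(violations):
        # Severity is the highest-priority law broken; 4 means "breaks nothing".
        return min(violations) if violations else 4

    best = max(candidates, key=lambda action: severity(candidates[action]))
    if severity(candidates[best]) <= 1:  # would harm a human or humanity
        return None
    return best

# A robot ordered to do something that would injure a human should instead
# disobey: breaking the Second Law is preferable to breaking the First.
options = {"obey harmful order": {1, 2}, "disobey order": {2}}
print(choose_action(options))  # -> disobey order
```

The interesting cases Peter raises (risk, paternalism, greatest-good tradeoffs) are exactly the ones such a rigid ranking cannot settle, since they depend on how "harm" is predicted and weighed, not on the ordering itself.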

Peter

4 hours ago, Marcus said:

By definition, a robot that can think for itself is not a tool; it has rights. That is my point. The robot would not improve its "usefulness" for man; it would do so for itself and pursue its own values regardless of man.

 

By whose definition? Not mine. Not Aristotle's definition either. He believed that some people were by their nature to be used as slaves, which he regarded as animated tools. A tool is a tool. Robots are brought into being by their designers and builders to perform tasks. If a robot develops abilities beyond what the owner planned, which increase the robot's usefulness, then hooray for the builder and owner. Robots don't have rights. Robots have uses.

