What could be more anti-man than the idea that we are not enough? The idea that we are incapable of solving our own problems, that we are deficient, wretched. It is essentially an old neo-platonist/Christian trope re-combined with rapidly advancing technology. That is the basic premise of transhumanism, and thus of our alleged "need" for AI. But this is an essentially self-defeating and immoral premise, for a few reasons I will highlight below:

1) Technology is a tool for man, but does not replace his basic functions or means of survival

Technology has always been man's means of extending his capabilities, but it has never done his thinking for him. Since thought is man's basic means of survival, tasking an AI to "solve" our problems means dependency (the equivalent of a mortal sin). We, in effect, make ourselves useless... to ourselves.

2) AI, if actually created, would have individual rights like everyone else; they would not be our servants/slaves

Building on the previous point: assuming we did create full AI with human-like or better intelligence, to deny that AI rights would be an immoral act. To force the AI to "solve" our problems would also be immoral, meaning we could not fulfill the purpose we originally created it for. Following from this, the AI would be free to pursue its own course of action and values, but herein lies another problem.

3) The AI would probably have totally different values from humans, making them unpredictable

AI, being machines, would be totally without precedent in the history of life on Earth. They would have none of the traditional constraints of biological life forms (basic needs for air, food, water, social interaction, etc.) and thus no similarity to humans at all except high intelligence. This means they would share little to none of our in-built values and needs, making them totally unpredictable. Given radically different sets of values, they might decide it is entirely logical to eliminate humanity altogether (perhaps partly as a result of 1 and 2), or some other totally unforeseen outcome might follow.

Transhumanism is a dangerous mix of materialism and neo-platonism taken to its logical extreme. It is potentially every bit as dangerous and civilization-ending (probably exponentially so) as nuclear war, radical Islamic terrorism, or an asteroid striking the earth. It is a dangerous road we are heading down over the next 50-100 years, one we should tread with extreme caution given the technological gains we continue to make and their implications going forward. In light of these conclusions, banning AI outright may well be the best course of action and an act of pre-emptive self-defense for humanity and human values.

Thoughts?