Marcus

Everything posted by Marcus

  1. If the manipulator has acquired some position of power, whether social or political, "walking away" may not be a viable option. To go against his/her wishes would then be defiance. It is possible for them to gain enough power to hurt or damage the virtuous, the good, but only if the good are compliant. Also, manipulation does not require virtue; it preys upon some vice (such as emotionalism or evasion) in other men. To make manipulators the rulers of virtuous men would mean the virtuous are impotent. In reality, it's the opposite. Manipulators "feed on sores," as Rand would put it. (Interestingly, she portrayed her villains just as they are often portrayed in popular fiction, i.e. sickly, gangly, sniveling, conniving little busybodies, ultimately foiled by the hero.)
  2. By definition, a robot that can think for itself is not a tool; it has rights. That is my point. The robot would not improve its "usefulness" for man, it would do so for itself and pursue its own values regardless of man.
  3. You have to ask yourself: what is the purpose or point of giving robots an "autonomous intellect"? Wouldn't that be equivalent to creating a man rather than a servant or slave? If robots do not serve man's purposes, what purpose do they serve? A man is an end in himself; he serves no one else. This is the folly behind the AI transcendentalists. They have divorced doing from purpose. Essentially, their answer to why AI should be done is "why not?" Answering a question with another question is not an answer. Techno-mysticism.
  4. After quite a bit of thinking and reading on this subject, I've changed my position, which can be summed up in three words: Defiance is enough. Objectivists don't need to manipulate to counter evil, they need only disobey. I think that was Rand's point all along. She probably thought about this angle too, and came to the same conclusion. Now I see why Rand called it "petty" and "small". Nathaniel Branden, too, was a genius, and has quite a few insights on the subject of obedience vs. disobedience. Martin Luther King brought down an entire evil system through open acts of disobedience. It was (strikingly!) enough all by itself. No manipulation required. Evil is easily exposed and defanged. The problem is the people who keep going along with it. Now I see the problem with the "Objectivist movement"™ as such, and especially its lack of appeal to the youth. It's not defiant, it's conformist. You can see this in the routine, stodgy, cordial way things are done at ARI. Pardon my French, but we need to start breaking shit and asking real questions about this culture we live in. All youth movements are defiant. For Objectivism to regain its forward momentum, it must embrace defiance.
  5. Thanks, Mike, for the stimulating response. Good stuff in there. You're right, Roark did use what could be termed "manipulation" (most interestingly on Dominique) at different points in the story. But I've come to an epiphany (more on that later). I'll check the book out when I get the chance.
  6. Tony, to clarify, my basic point about defensive vs. offensive manipulative strategies has not changed. What I mean is that the basal ganglia are often the *basis* of what makes manipulation possible. If humans were 100% rational, no manipulation would be possible or even necessary. They would simply do the right thing all the time. The need for manipulation really arises from human error/emotionality: it exists to produce more beneficial outcomes and keep yourself from being "screwed". I earlier gave the example of "manipulating" taxes. If there were no taxes to pay, would there be a need to manipulate the tax code? So the answer is not necessarily that "we're all screwed", but without manipulation, "being screwed" is the likely result.
  7. Men have free minds, but we also have highly predictable and common patterns of behavior, patterns with biological/emotional causes. We are not free of our biological needs. Advertisers know this. Most people, for example, want to get married. Most people want children. Most people patiently stand in line at grocery stores (naturally cooperative). Most people like music. Most people are captivated by pictures of food, sex, violence, etc. Humans are rational animals, but never underestimate the power of the basal ganglia.
  8. Let me ask you a question: if a robber has a gun to your head, is it "evil" to "manipulate" the criminal into not firing? What about "manipulating" the tax code so as to pay less in taxes? Is figuring out a way to double the response rate on an advertisement (and thus make more money) by appealing to powerful basic emotions/needs "manipulation"? Here we come again to an interesting question. What exactly is "manipulation", and in what context is it "evil"? It is clear from the examples above that manipulation is not a clear-cut, black-and-white issue. "Manipulation" is simply cleverness: finding new ways to interpret and use the rules of a system to your benefit. Like a gun, it can be used for defensive or offensive purposes. Many forms of martial arts have used manipulation techniques for thousands of years. In certain cases, using or not using it could come down to a life-or-death issue. Objectivism looks down on "manipulation" as a strategy (this is actually irrational, as I demonstrate), but makes no mention of manipulation as a defensive/neutral strategy, only as an "offensive" strategy. The Fountainhead depicts Ellsworth Toohey's machinations, his slick, slimy maneuvers to thwart the work of the main hero, as the work of a "blond louse". The hero of the novel, Howard Roark, by contrast, does not even advertise his business. Early in the book, he goes broke needlessly because of a lack of advertising and a failure to defend himself from an ongoing smear campaign. My question is, if he had, how much *more* successful would he have been? How much closer and quicker would that have put him to his self-professed goals? In that context, how is "manipulation" not rational? Manipulation, in my view, is a rational strategy when pursued neutrally/defensively, and irrational/evil when used for offense. Thoughts?
  9. Of course not. Not by a long shot. That is my point. These characters aren't "normal men" like Objectivism claims, either genetically or otherwise.
  10. Howard Roark, hero of The Fountainhead: is he beast or god? He is a man without any apparatus (per Ayn Rand's journal) for understanding others. Obviously a sociopath or serial killer would qualify as a "beast", but Howard Roark was none of these. He was totally self-sufficient, had few things to say, few friends (in fact, none in the beginning). Howard Roark did not concern himself with the affairs of men. Peikoff says he was a "normal" man, but it's not "normal" to be totally comfortable outside the domain of other men and society. He was not a supernatural phenomenon, but he wasn't a "normal" man by any means. Does that make him a "god" in a sense?
  11. Now you're taking things out of context. For the sake of argument we are talking about fully grown adults and the (moral) choices they make. Small children do not yet have the capacity to make certain choices. Their needs must be provided for and they must be guided by adults. That's not the issue here. And Hobbes, by the way, was describing existence *in a society* before the advent of the Renaissance, not on a desert island. I would argue it's worse to live in an ignorant theocracy than on a desert island, where at least you don't have to contend with violent threats, punitive taxes by decree or deadly diseases (aside from the animals). I'll take the desert island over that "society" any day.
  12. I think you miss the larger (Objectivist) point. We must still live and produce our own values to trade with others, just as we must produce our own values in seclusion as a hermit on a desert island. The fact that living with and among others increases our chances of survival and opportunities for trade does not change the basic facts.
  13. Korben Dallas multipass! "Yeah I know, it's a multipass...." Sorry, had to say it lol
  14. Educate yourself: https://en.wikipedia.org/wiki/Chlorofluorocarbon#Regulation Is stopping ozone depletion what you would call "effective"? Your goofy, contrarian schtick is tired and overplayed. My point was to illustrate the difference between knowledge (which is more akin to a tool) and intelligence (a metaphysically given fact of man). It's not really that difficult to get. You can pretend we are the same thing as our cars, slings, clothes and whatever else you try to prove your silly point with. Henry Ford most definitely couldn't build an AI single-handedly in a garage, even with the world's most state-of-the-art equipment. We have multi-million-dollar teams of experts working around the clock on it now and struggling to do so. AI is not a "garage" technology. It's like trying to build an atom bomb with raw materials found in a dumpster.
  15. Reborn, I don't think you fully understand what I am arguing (so you're chasing straw men). Let's define our terms: by "intelligence" I mean the capacity and capabilities of the human brain as such. Using technology and tools does not make you "more intelligent" any more than using a lever or tow truck to move a refrigerator makes you "stronger". It simply makes you more productive and effective at a specific task. We, with current technology, do not yet have "full" AI, that is, a machine with the same reasoning capacity as a fully grown human being. It is not a matter of making a "faster computer", because human brains are not "faster computers". That is overly simplistic. And yes, it can be banned. We have effectively banned many harmful technologies, from CFCs (chlorofluorocarbons) to many types of weapons and pharmaceuticals. AI research is capital-intensive, complex and requires large teams of people; it can't simply be "made in a garage". It can be effectively banned and prevented from causing harm before it inevitably does.
  16. A baffling argument. You first repeated what I just said (duh). In the next sentence you re-argued something I already addressed (the short answer is that we can't control it, due to the properties of AI). Then you ended with a totally unqualified assumption: "We will grow more intelligent". What if we don't? Then what? Furthermore, do we even "need" to grow more intelligent without AI? We seem to do just fine without it. We have problems, but lack of AI is not the cause of them. That is another thread entirely. It can't be created on "any computer" or it would have been created already. But we are fast approaching its creation. I give it 10-15 years. Right now, we have only one power to deal with this technology, and that is to ban it and the research before it becomes a problem. After that, all bets are off.
  17. Catpop, in the context of this extremely dangerous technology, "I will it" are the stupidest words you can utter. Our technology gets exponentially more powerful with each passing generation (last generation it was nuclear weapons), demanding exponentially more caution and responsibility in using it. Doing it "just because" is not a justification for self-destruction.
  18. The problem is, AI would be unlike any other technology before it. AI could replace our thinking entirely, making us dependents. A self-defeating/self-immolating scenario. To some extent, this is already starting to happen with the internet (searching the internet as a replacement for thought and memory). Read "The Shallows" by Nicholas Carr. But luckily the internet can't reason on its own. AI can. My point is, we would begin to become dependent on AI for "solving" our problems, a task for which we as sovereign rational beings are responsible to ourselves. That is already what the AI conversation revolves around. AI is thought of as a kind of panacea/savior for humanity's problems. You can't "maintain control" over a technology that does your thinking for you. That would be a contradiction. We can control nuclear weapons, even deadly diseases. We can't "control" hyper-intelligent, self-programming, self-replicating AI. It's pretty much the endgame. I don't get your argument here. What I mean is to pre-emptively ban the technology before it gets made, thus averting the scenario above.
  19. What could be more anti-man than the idea that we are not enough? The idea that we are incapable of solving our own problems, deficient, wretched. It is essentially an old neo-platonist/Christian trope recombined with rapidly advancing technology. That is the basic premise of transhumanism and thus of our alleged "need" for AI. But this is an essentially self-defeating and immoral premise, for a few reasons I will highlight below:
      1) Technology is a tool for man, but does not replace his basic functions or means of survival. Technology has always been man's means of extending his capabilities, but it has never done his thinking for him. Since thought is man's basic means of survival, tasking an AI to "solve" our problems means dependency (the equivalent of a mortal sin). We in effect make ourselves useless... to ourselves.
      2) AI, if actually created, would have individual rights like everyone else; they would not be our servants/slaves. Building on the previous point, assuming we did create full AI with human-like or better intelligence, to deny the AI rights would be an immoral act. To force the AI to "solve" our problems would also be immoral, meaning we could not fulfill the purpose we originally created it for. Following from this, the AI would be free to pursue its own course of action and values, but herein lies another problem.
      3) The AI would probably have totally different values from humans, making them unpredictable. AI, being machines, would be totally without precedent in the history of life on Earth. They would have none of the traditional constraints of biological life forms (basic needs for air, food, water, social interaction, etc.) and thus no similarity to humans at all except high intelligence. This means they would share little to none of our in-built values and needs, making them totally unpredictable. Given radically different sets of values, they might decide it is perfectly logical to eliminate humanity altogether (maybe partially as a result of 1 and 2), or some other totally unforeseen event might follow.
      Transhumanism is a dangerous mix of materialism and neo-platonism taken to a logical extreme. It is potentially every bit as dangerous and civilization-ending (probably exponentially so) as nuclear war, radical Islamic terrorism or an asteroid striking the earth. It is a dangerous road we are headed down in the next 50-100 years, a road we should tread with extreme caution and care given the technological gains we continue to make and their implications going forward. In light of these conclusions, banning AI outright would probably be the best course of action and an act of pre-emptive self-defense for humanity and human values. Thoughts?
  20. But wait! That's not all you get! If you order right now, we'll double the rationality! Nice try. But not quite. In a free society, people must be persuaded. Even ideas have to be sold. As "dumb" as it sounds, you basically have to sell people on the idea of being consistently rational and on how it makes their life better. Otherwise people will usually adopt some form of pragmatism (subjectivism) or religiosity (intrinsicism).
  21. Can you guys (you know who you are), um, take your little quarrel to another thread? This thread is about the problems of Objectivist outreach efforts; let's keep it on topic, folks. Thanks.
  22. And now wimpy Roger Bissell has just provided a real-time example of the above. On the "How to Improve Objectivism (2002)" thread, which is in the "Roger Bissell Corner," wimpy Roger deleted my latest post rather than answer its substance, and he also took the added precaution of locking the thread. Hahahaha! That's right, Roger, go ahead and airbrush reality, erase it, and blank it out! That'll make it go away! As I said above, it's standard behavior of Objectivish intellectual poseurs. Afraid to enter the arena. Heh. Chickenshit losers. Here's the post that wimpy Roger censored: LOL, you seem to try your best to emulate the actual South Park character. You are the "method actor" of trolls at OL. It is mildly amusing and funny, I admit. Where was that "reality" guy taken from? A villain from a 1920s vaudeville act?
  23. It doesn't have to be. It's not just about the "hardcore" element but about basic principles and beliefs. Just a basic respect for and understanding of the philosophy is enough. Folk philosophy is enough for most people. Most people are not total "creeps" or "cowards". Objectivism is suited to the vast majority of people (because it's largely common sense), but it needs to be presented well, just as most people by now understand that capitalism works and socialism doesn't. Exactly how is it a "spent force", in your opinion? How can it be re-ignited? Who needs philosophy? Good question. Philosophy, as I understand it, maximizes your chances of success because it (good philosophy) is fully consistent with reality. Though it is possible to survive without an explicit philosophy, you are always implicitly under the influence of certain basic beliefs; thus you are already practicing some kind of philosophy at any given point. I liken it to driving: your driving skills recede into the background after you have learned how to drive and slip beneath conscious awareness. The act of driving is not "explicit" to most people; the set of skills you learn by driving becomes second nature. Those same skills will keep you alive on the road. And sexiness is not just "one element"; it is one of many things a talented communicator brings to an organization. My point was that TED is better at presenting its ideas than ARI is. Orders of magnitude better. That makes a significant difference in how many people it can reach, even with one video. This is not insignificant and should not be ignored by Objectivism.
  24. Comment of the Day (On Donald Trump and Politicians in general): http://www.realclearpolitics.com/video/2015/12/16/trump_im_just_an_entertainer_thats_a_lot_of_bullshit.html#comment-2413574529 This guy gets it.
  25. Watching this YouTube lecture, you can see Yaron Brook, foremost communicator of Objectivism, in a valiant but ultimately dull performance. It probably evokes, at most, a loud, wailing yawn. Looking around the room, the boredom is almost palpable. Blank stares throughout, as if they are watching not a real man in front of them but his holographic projection. It has only 1,246 views and counting. The Ayn Rand Institute, a 40-year-old organization with millions of dollars in endowment, has only just over 12,000 subscribers on YouTube. A sad, sad state of affairs, and pitiful in this age of million-plus views for cat and dog videos. Compare this with TED talks, which routinely get millions of views. As you can see, good presentation doesn't just make a difference, it makes a HUGE difference. And the issue is not that people don't engage with intellectual ideas; they don't engage with boring ones. Not all of the blame for this can be cast upon the public. At least part of the blame is that ARI is simply too boring for its own good. Nobody was ever won to a noble cause by boredom. Especially not young people. As I highlighted in a previous thread (among others), Donald Trump gets this. I disagree that movements somehow "must" take several generations to come to full bloom. You can speed up the process. This requires a talented communicator. Presentation, charisma, delivery and ease of talk. All that + infectious enthusiasm = more followers to the cause. IMO Objectivism would be better off hiring a TV pitchman at this point. Really, anyone with half-decent sales skills and/or acting chops. We need to bring sexy back. How can we do it? That's what this thread is about.