Transhumanism/AI Worship is Evil


Marcus


6 hours ago, Marcus said:

By definition, a robot that can think for itself is not a tool; it has rights. That is my point. The robot would not improve its "usefulness" for man; it would do so for itself and pursue its own values regardless of man.

I think it's better to say if a robot had free will, not "think for itself." Or, even better, was a free-willed thinker. Free will assumes consciousness itself. If someone--something--isn't conscious, how can it choose? Thinking--for humans--is an autonomous action. Hence individuality, then individual rights. Who's to say robots of the future aren't going to be doing group thinking? Or humans? The robots we're talking about are evolved from what we have today. Maybe humans are going to self-evolve through electronics and/or biology into group thinkers.

Humans today are pretty much the same biologically as they were 10,000 years ago--even 40,000. It's hard to imagine any human 10,000 years from now saying the same about us. Electronic technology is accelerating. The manipulation of biology has hardly yet started. Electronics have just begun to be implanted--pacemakers--into humans.

--Brant

getting ahead of ourselves



Brant wrote: I think it's better to say if a robot had free will, not "think for itself."

That’s a tough one. The sci-fi supposition is that if a computer were given the resources and programmed to build its own system, it could eventually reach sentience. A.I. is scary, according to Hawking, Jobs, and Microsoft. And talk about being inside your own head: with the music, the following song works, but as lyrics it is kind of spooky.

But first, some American trivia for Melania: Can a baseball bat inspire during the NFL playoffs? Apparently, for the New Orleans Saints, it can. Before the Saints' 45-14 playoff victory over the Arizona Cardinals, Saints coach Sean Payton presented each player with a black baseball bat emblazoned with these words to inspire physical play: "Saints vs. Cardinals. Jan. 16, 2010." "Bring The Wood!" end quote

Peter

Hey You By Pink Floyd:

Hey you out there in the cold
Getting lonely getting old
Can you feel me?
Hey you standing in the aisles
With itchy feet and fading smiles
Can you feel me?
Hey you don't help them to bury the light
Don't give in without a fight

Hey you out there on your own
Sitting naked by the phone
Would you touch me?
Hey you with your ear against the wall
Waiting for someone to call out
Would you touch me?
Hey you, would you help me to carry the stone?
Open your heart, I'm coming home

But it was only fantasy
The wall was too high
As you can see
No matter how he tried
He could not break free
And the worms ate into his brain

Hey you, out there on the road
Always doing what you're told
Can you help me?
Hey you, out there beyond the wall
Breaking bottles in the hall
Can you help me?
Hey you, don't tell me there's no hope at all.

Together we stand, divided we fall / Songwriter: Roger Waters


18 hours ago, Brant Gaede said:

I think it's better to say if a robot had free will, not "think for itself." Or, even better, was a free-willed thinker. Free will assumes consciousness itself. If someone--something--isn't conscious, how can it choose? Thinking--for humans--is an autonomous action. Hence individuality, then individual rights. Who's to say robots of the future aren't going to be doing group thinking? Or humans? The robots we're talking about are evolved from what we have today. Maybe humans are going to self-evolve through electronics and/or biology into group thinkers.

Humans today are pretty much the same biologically as they were 10,000 years ago--even 40,000. It's hard to imagine any human 10,000 years from now saying the same about us. Electronic technology is accelerating. The manipulation of biology has hardly yet started. Electronics have just begun to be implanted--pacemakers--into humans.

--Brant

getting ahead of ourselves

What is "free will"? We are made of the same stuff as rocks, trees, computers, and electric motors. In physical-material terms (the only ones that count), what is "free will"? Our bodies are governed by the same physical laws as everything else material and physical. What sort of non-physical, non-material magic is "free will"?

I have no doubt that humans are autonomous. They are capable of altering their own control processes. We can "re-write" some of our "software". So can non-human, human-made machines. Do they have "free will"?
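Ba'al's claim that human-made machines can alter their own control processes can be illustrated quite literally. A minimal sketch (all names hypothetical, and no position taken on whether this amounts to free will): a thermostat-like controller whose decision rule is data that the machine itself rewrites in response to feedback.

```python
class AdaptiveController:
    """A toy machine that 're-writes' part of its own 'software': its
    decision rule lives in data that it mutates in response to feedback."""

    def __init__(self, threshold):
        self.threshold = threshold  # the rule the machine may rewrite

    def decide(self, reading):
        return "heat" if reading <= self.threshold else "idle"

    def feedback(self, too_cold):
        # Self-modification: the machine alters its own control process.
        if too_cold:
            self.threshold += 1

c = AdaptiveController(threshold=18)
print(c.decide(19))        # "idle" under the original rule
c.feedback(too_cold=True)
print(c.decide(19))        # "heat" -- the rewritten rule now covers 19
```

Whether such self-modification ever amounts to choice, rather than just one more determined process, is exactly the question being raised here.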


54 minutes ago, BaalChatzaf said:

What is "free will"? We are made of the same stuff as rocks, trees, computers, and electric motors. In physical-material terms (the only ones that count), what is "free will"? Our bodies are governed by the same physical laws as everything else material and physical. What sort of non-physical, non-material magic is "free will"?

I have no doubt that humans are autonomous. They are capable of altering their own control processes. We can "re-write" some of our "software". So can non-human, human-made machines. Do they have "free will"?

Beats me how this rock had the free will to write that.


2 hours ago, anthony said:

Beats me how this rock had the free will to write that.

You don't know if "this rock" is a bot or not. This unit might pass the Turing Test.
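Ba'al's quip about the Turing Test can be made concrete. Below is a minimal sketch (all names hypothetical, the respondents and judge are toy stand-ins) of Turing's imitation-game protocol: a judge questions two unidentified respondents and must guess which is the machine; the machine "passes" only if the judge can do no better than chance.

```python
import random

def imitation_game(judge, human, machine, questions, rng):
    """One round of Turing's imitation game: the judge reads answers from
    two anonymous respondents (order shuffled) and guesses the machine."""
    respondents = [("human", human), ("machine", machine)]
    rng.shuffle(respondents)
    transcripts = [(label, [answer(q) for q in questions])
                   for label, answer in respondents]
    # The judge sees only the answers, never the labels.
    guess = judge([answers for _, answers in transcripts])
    return transcripts[guess][0] == "machine"  # True if the judge caught it

# Toy respondents and judge -- purely illustrative.
human = lambda q: "I'd have to think about that one."
machine = lambda q: "QUERY NOT UNDERSTOOD."
judge = lambda ts: max(range(len(ts)),
                       key=lambda i: sum("QUERY" in a for a in ts[i]))

rng = random.Random(0)
caught = sum(imitation_game(judge, human, machine,
                            ["Can you feel me?"], rng) for _ in range(100))
print(f"judge caught the machine in {caught}/100 rounds")
```

A machine this crude is caught every round; the philosophical dispute in this thread is over what, if anything, would follow about consciousness if one were not.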


From: "George H. Smith" wrote on Owl, Re: Mind as emergent [was: Objectivism's concept of free will]

Date: Mon, 12 Apr 2004 14:44:30 -0500

 

Neil Goodell (4/11) wrote:

"I'm not sure I agree with Mike Rael's (4/9) characterization of mind: [Rael] 'The way I see it, once the physical constituents of a mind have been created, the mind can control the starting of its own processes to some degree. What happens when I raise my hand up? Physical things are going on, but the determiner is my mind.' [Goodell] In my reading of it he seems to be trying to keep the advantages of a dualist perspective on the mind-body question but without calling it that."

 

The term "dualism" covers a broad range of views in philosophy. It is often associated with Cartesianism, according to which the mind is a "substance" that can exist independently of matter. In less extreme versions, dualists are those who repudiate reductionism, according to which the mind (i.e., consciousness) is nothing but "matter in motion." Dualists in this latter sense don't necessarily deny that consciousness depends on matter for its existence. They contend, however, that consciousness (a state of awareness) is not something physical per se, however much it may be causally dependent on physical phenomena.

 

I am a "dualist" only in this latter sense, and I suspect the same is true of Mike Rael. Indeed, the kind of emergence theory that Neil goes on to defend is a common foundation for this variety of dualism.

 

Ayn Rand, in maintaining that consciousness is epistemologically axiomatic, that a state of awareness cannot be explained by something more fundamental, was also defending this sort of mitigated "dualism." But I doubt if she would have cared for this label, given its customary association with the Cartesian theory of mind, which of course she did not agree with.

 

Neil wrote:

"George Smith makes a distinction between "hard" determinism and "soft" determinism (4/11), between biology and psychology if you will, concluding, "Even though I disagree with physical determinism, there are powerful arguments in its favor, and it is a position deserving of respect. I'm afraid I cannot say the same about "soft" determinism."

 

"As I've said previously, I'm a complete and committed determinist, but I don't agree with any of these views. My position is that mind is an emergent property of the brain. What this means in philosophic terms is that the nature of the causality that operates at the level of the brain is separate and distinct from that which operates at the level of the mind. (This is similar to a levels of analysis argument.)"

 

I agree with emergence theory, as here summarized. This is one reason I reject physical determinism, and it also plays a role in my not-so-thinly disguised contempt for "soft determinism."  The mind, as an emergent phenomenon, needs to be studied on its own terms, and we can access it directly only through introspection. We should not assume that causation in the world of consciousness is analogous to causation as we observe it in physical phenomena. We should not assume, for example, that "motives" operate like physical particles that, upon striking other mental "things," such as choices, "cause" them to move.

 

The mind is not a world of mental billiard balls moving to and fro, engaging in endless collisions which "cause" us to choose this or that. Of course, the soft determinist will repudiate this characterization of his position as unfairly crude and inaccurate. But it doesn't take much scratching beneath the language of the soft determinist to see that this is exactly how he analyzes mental phenomena. He adopts what is essentially a mechanistic, linear view of mental causation, in which a mental event (say, a value) somehow "causes" another mental event (say, a preference), which in turn "causes" us to make a choice to put the eight ball in a given pocket.

 

One needn't defend the view that choices and other mental events are "uncaused" in order to defend volitionism. Certainly Rand didn't take this view, and neither do I. I subscribe (as did Rand) to an "agency theory" of causation, according to which a rational agent -- and not merely antecedent *events,* whether mental *or* physical -- can properly be said to be the "cause" of his own mental acts. This is essentially an Aristotelian perspective, one that has been defended not only by modern Thomists but also by other contemporary philosophers, such as Richard Taylor. It had a number of able defenders in earlier centuries as well, such as the eighteenth-century philosophers Richard Price and Thomas Reid. This position was also defended by Nathaniel Branden in "The Objectivist Newsletter" and, later, in *The Psychology of Self-Esteem.*

 

Neil wrote:

"And I do not believe my position is inconsistent with Objectivism. (More on this below.)"

 

Emergence theory does not conflict with Objectivism, but any form of determinism most certainly does.

 

[snip]

"Rand says over and over again that the premises a person holds in their mind is what determines their character. As she writes in Galt's Speech, "...that your character, your actions, your desires, your emotions are the products of the premises held by your mind—that as man must produce the physical values he needs to sustain his life, so he must acquire the values of character that make his life worth sustaining—that as man is a being of self-made wealth, so he is a being of self-made soul—that to live requires a sense of self-value, but man, who has no automatic values, has no automatic sense of self-esteem and must earn it by shaping his soul in the image of his moral ideal..."

 

This passage does not entail or suggest determinism. On the contrary, Rand's claim that man "is a being of self-made soul" is an expression of free will.

 

Some time ago on another list, I wrote a post in which I discussed the possibility that, according to Rand, our only truly free choice is the choice to think (or focus) or not, after which everything else is necessarily determined. Although I concede that there are some passages by Rand that give this impression, I don't think this is what she believed; and I would further maintain that this interpretation is inconsistent with her overall approach, including many of her remarks about ethical theory and moral responsibility. I think the passages in question were probably instances of rhetorical exaggeration, made for the purpose of emphasis. This sort of thing is fairly common in Rand's writings.

 

Neil wrote:

"I don't know whether George Smith would characterize this as "soft" determinism, but it is certainly determinism of a non-biological kind, "your character, your actions, your desires, your emotions are the products of the premises held by your mind." If this were *not* the case, it would mean that the relationship between premises and character was arbitrary, which would have the effect of eviscerating the entirety of Objectivism's concept of virtue."

 

Rand did not defend any kind of determinism, whether "hard" or "soft." In calling our character, actions, desires, and emotions the "products" of premises held by our minds, there is good reason to believe she was drawing logical, rather than strictly causal, connections. In any case, one needn't be a determinist to maintain that how and what we think will greatly influence what kind of characters we have and how we will act. This complex issue has nothing to do with determinism one way or the other.

 

Neil wrote:

"In other words, if determinism is denied, there can be no morality. If specific causes do not lead to specific effects (i.e., indeterminism), then effects are arbitrary and a person cannot be held responsible for them."

 

If this were true, then we could hold a rock or a tree or a snail morally responsible for its behavior -- for in all such cases specific causes lead to specific effects.

 

In order for there to be moral responsibility, there must first be a moral agent, i.e., a rational being who can make autonomous decisions and choices that are not causally necessitated by antecedent events that he is powerless to change or control. If the actions of a mass murderer were causally necessitated by a chain of antecedent events, which reach back (presumably) to infinity, long before he (or any life form) existed, then he is no more "responsible" for his behavior than a snail. Both behave not as they choose, but as they *must.* For what, in a deterministic scheme, could we hold a mass murderer responsible *for*? For being born? For possessing undesirable genes? For not making better choices that were metaphysically impossible for him to make? For not possessing an omnipotent power to alter past events over which neither he nor anyone else has any control?

 

When we pass a negative moral judgment, part of what we mean is that a person *should* not have made the choice he did under those circumstances. He *ought* to have chosen differently in that precise situation. If, however, his "choice" (and I use the word advisedly in this context) was causally necessitated by antecedent events that he was powerless to change, then to pass moral judgments on humans makes no more sense than to pass moral judgments on clouds for causing a flood.

Ghs


Ba’al wrote: You don't  know if "this rock" is a bot or not. This unit might pass the Turing Test. end quote

A man enters the room and stands there silently.

Donald Trump says, “Do you expect me to talk?”

The man says, “No, Mr. Bond, I expect you to die!”

Trump: What the hell? Oh, I get it. That’s from the James Bond movie, “Goldfinger.” Are you really a created being too? An Artificial Intelligence?

Precisely, Mr. Trump.

OK. Cut it out with the Bond lines.

Certainly. I am created but fully human and functioning, Mr. Trump. All we will need is a check with your final payment.

Has she been built to my precise specifications?

Of course . . . Thank you, Mr. Trump. I will bring her in to meet you. The man leaves and then reenters with a beautiful woman who stands there in front of Trump silently, and never looking at him.

And she is human, like you said? asks Trump. And she can procreate and have babies too?

Yes, Mr. Trump.

God she is gorgeous. Does she have germs?

She matches the sample we took from you. She has the same and nothing more.

Good. Good.

The man hands Trump a piece of paper with a word on it. And now Mr. Trump, I will leave the room. When I am gone and the door is closed, say that one word and she will become fully conscious and then she will become imprinted onto you. Do you understand?

Of course, says Trump.

The man leaves. Trump looks at the woman. Then he looks at the paper with the word on it. He hesitates, takes a deep breath and then says, “Melania.”


I thought someone would comment how the lyrics to “Hey You” by Pink Floyd sounded like an Artificially Intelligent entity, locked away, but thinking about the world and itself and feeling lonely. It knows of humans and other A.I.’s. “Hey you out there in the cold.” The worm that ate into his brain could be a computer virus, etc. Oh well.

Peter


Not sure what happened to this thread but here's my take:

 

1) Asimov's rules assume that robots have empathy and understand how and why all humans feel pain. Most humans have trouble with this.

2) One of humanity's first tools was fire; its purposes are many. The most benevolent was that it enabled humans to cook their food and thus develop smaller jaws and bigger brains. There are many negative effects of fire, though.


I wish we had the old front page back that listed the last few threads started. 

 

Robin wrote . . . .  Asimov's rules assume that Robots have empathy and understand how and why all humans feel pain.  Most humans have trouble with this. end quote

 

Pinch. Pain. The “why feel pain on a physical level” would be easily noted. Mechanically on some sort of monitor or perceptually a robot or a clueless human could read pain or anguish in a human’s expression, and link that to a physical act or a spoken feeling. We may have already reached that point in robotic development and early intervention for autistic children improves their *humanity* and empathy. Siri, I am feeling pain. “Do you want me to call an ambulance?” No, just talk to me. “OK, the last time we talked about your feelings you also mentioned feeling blue. I could prescribe a medication from Your Doc’s online . . .”  

 

To me and others the dilemma would be the lack of surety that a volitional robot would be benign towards individuals and the human species. The key to your statement could also be the fact that we do not understand ourselves. And I am uncomfortable with the idea of cloned humans or mechanically augmented humans. And I am very uncomfortable with the idea of humans programmed at birth.

Peter


Here are some old letters on the subject.

 

From: "Jeff Olson" To: objectivism@wetheliving.com. Subject: OWL: Fallacies of AI (Re: Robot Physicists) Date: Mon, 14 Feb 2000 21:31:47 -0700 (MST)

I think that the central assumptions of those who tout non-biological artificial intelligence as inevitable are fallacious.  I will not be arguing that intelligence cannot be achieved through genetic manipulation--or through some other manipulation of biological matter--or even that non-biological AI is fundamentally impossible; nor is my reasoning based on supposed non-material aspects of the mind.  I'm simply addressing the arguments that the mechanical processes we observe in machines are logically sufficient for consciousness.

 

   Thanks to Dennis May, whose thoughtful reflections inspired this post.

*********************

I call AI's basic fallacy *Analogy Equals Identity.* In a nutshell, this holds that by creating precise enough analogies to any system we can capture the essence of that system--that is, the analogous system will behave, in all fundamental respects, identically to the system it is being modeled after. The corollary to this argument is that the composition of the new system doesn't fundamentally matter: it's only crucial that it retain all the precise analogous relationships. In attempting to duplicate a brain, for example, we needn't be concerned that the materials are identical--only that the processes of operation are equivalent.

 

A common reply is that materials *are* relevant, and that certain materials--such as silicon, for example--possess the characteristics necessary to accurately simulate brain function. Once the premise that material is relevant is granted, however, then so is the basic premise of my argument: if material matters, then the possibility logically follows that any given material may prove insufficient to capture the essence of a system. In other words, if material matters, then it doesn't logically follow that a silicon-based system can fundamentally duplicate a biological system, even if it is precisely analogous to that system.

 

If, however, one argues that materials are irrelevant, then one is committed to the proposition that one could build a human being, for instance, out of sticks, barbed wire, and vacuum tubes (actually I've begun to suspect that at least a few of our list-members might be so constituted [I'm sure you know who you are]:).  Yet this flies in the face of the observation that the composition of *everything* relates inescapably to its function: Legos, no matter how cleverly assembled, cannot create a human being--nor can electrical cords made of wood power our machines.  But if it were possible to absolutely duplicate a system through analogy, then it should be possible to run machines off wooden electrical cords or to build a human being out of Legos.

 

The above leads me to believe that material does matter, and that Analogy *doesn't* equal Identity; if I'm right, then all arguments based on these assumptions fail. Even if the above is correct, however, the *possibility* of non-biological intelligence remains. In other words, though equivalent processes don't *necessarily* follow from creating accurate analogies, it may not be logically impossible that an analogue could succeed in duplicating essential features of the original system.

 

Still, I believe that the successful duplication of fundamental biological characteristics such as consciousness, emotion, and pain is highly improbable--primarily because they are largely un-quantifiable. I don't believe we can quantify the experience of being conscious, for instance, because we can give no formal definition of what that even is. Thomas Nagel, in his excellent essay *What Is It Like to Be a Bat?*, points out that even we, as conscious entities, cannot understand that experience; the only way we could understand it is by being a bat. Ditto for pain, or any other biological sensation: we cannot define them in a way that could be understood by someone (or something) who hasn't experienced them.

Yours, Jeff

 

From: Enright <jenright@interaccess.com>

To: objectivism@wetheliving.com

Subject: OWL: AI

Date: Wed, 16 Feb 2000 18:04:42 -0700 (MST)

 

Both Jeff's and Will's posts got at important points in this debate. And Will's point about how a thing functions leads to what I consider the biggest issue in AI: can you have intelligence or consciousness without life? In other words, can we build something which functions like consciousness if it is not incorporated into a thing which acts to self-generate and self-maintain? (Whether or not it must be made of biological materials is another matter; I simply don't think we know enough to answer that question--we don't know if any other kinds of materials can be arranged in such a manner that they exhibit the self-maintaining, self-generating properties of life.)

 

I think this is one of the fundamental issues in AI, because of the deeply biological function of consciousness in those organisms which possess it: it is there to maintain and further the life of the organism; that is how it is organized, and that's how it accomplishes the goals it does.

Marsha Enright

 

From: "Jens Hube" <jens_hube@yahoo.com>

Reply-To: <jens_hube@yahoo.com>

To: <atlantis@wetheliving.com>

Subject: ATL: A.I.

Date: Mon, 2 Jul 2001 11:46:25 -0400

 

I'll second Peter Taylor's recommendation that A.I. is worth the time and price (even though it cost me $9.50, not $7.50).  I do not discuss the movie below.

 

Peter Taylor wrote:

"Conception and birth may be good markers for legal definitions of personhood but are they the "moral marker" for the imputation of human rights? See the movie."

 

I left the theater with many questions on my mind. Many were old questions, like what criteria would be used to determine when an AI (or organic-based intelligence, for that matter) would be morally due a right to life? (Does anybody disagree that an AI could potentially have rights?) I worry that the decision thresholds and tests would be arbitrary.  Indeed, there's a similar continuum of conceptual abilities and self-awareness between organic species and individuals.

 

Another question I had is a twist on the abortion debate: assume that a particular design of AI does deserve rights--it passes whatever tests you design--and that after one is constructed, or grown, there is a prescription for activating it, for powering it up. Now if one accepts an anti-abortion argument that an embryo has a right to life due to its potential, is one similarly morally obligated to turn the AI on? This is not an issue for me, as I don't accept the anti-abortion view, but I wonder how someone who does would argue this point.

Jens

 

From: "William Dwyer" <wsdwyer@home.com>

To: <Atlantis@wetheliving.com>

Subject: Re: ATL: A.I.

Date: Tue, 3 Jul 2001 10:11:24 -0700

 

In response to Jens Hube, Debbie wrote,

> If you're going to assert that there is such a thing as life which is not biological life as we know it, then I think the burden of proof has to fall on you to prove it, not on me to disprove it.  Not being a scientist in the field of Artificial Intelligence, it is to me as though you're trying to assert the right to life of fairies --something which exists only in the imagination of man, but not in physical reality. >

 

You could probably have some form of artificial life, but in order to qualify as a living organism, it would have to possess values; it would have to pursue goals in response to its own needs. It would have to have something to gain and/or lose by its actions. As an intelligent being possessing rights, it would also have to be capable of learning, of acting autonomously, and of grasping and applying moral principles. If it were an intelligent form of life in this respect, then, despite its being artificial, I think that it would have rights, including the right to life. However, this is an exceedingly tall order, and one that is unlikely to be filled in the foreseeable future.

Bill

 

From: Nick Glover <nglover@clemson.edu>

To: atlantis@wetheliving.com

Subject: Re: ATL: A.I.

Date: Tue, 03 Jul 2001 15:55:31 -0400

 

Debbie Clark claims that it is inconceivable that an A.I. could have the right to life, on the basis that an A.I. could never meet the biological definition of life. While I agree that an A.I. probably wouldn't meet the biological definition of life, it seems inaccurate to claim that biological life is an absolute criterion for rights. We can look to Rand's VOS:

 

from "The Objectivist Ethics", p. 16:

"It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. ....It is only the concept of 'Life' that makes the concept of 'Value' possible.  It is only to a living entity that things can be good or evil."

 

So, the reason Rand considers biological life fundamental is that it allows values. She claims it is only the concept of life that makes values possible. But we can see how an A.I. robot could refute this claim. The robot would be neither immortal nor indestructible. Its components would degrade and need repair, and it would constantly need power to continue functioning. The robot would also have to worry about being dismantled or destroyed. As long as the robot had a rational and free mind, it would require morality just as humans do, and it would engage in self-sustaining and self-generated action.

 

from VOS, "Man's Rights", p. 110:

"Life is a process of self-sustaining and self-generated action; the right to life means the right to engage in self-sustaining and self-generated action--which means: the freedom to take all the actions required by the nature of a rational being for the support, the furtherance, and the fulfillment and the enjoyment of his own life."

 

Regarding man's rights, Rand restates the aspect of life which is fundamental to rights: the idea that life involves self-sustaining and self-generated action. This indicates that if we have a robot which engages in self-sustaining and self-generated action using a free, rational mind, yet has no other attributes necessary to fit the biological definition of life, we still must conclude that it has rights. It might be misleading to say it has a right to life, because it is not alive in the biological sense, but we could say it has a "right to engage in self-sustaining and self-generated action".

 

Perhaps Debbie Clark or anyone else who disagrees could state which attribute of life that an A.I. robot lacks is also necessary for rights and why that attribute is necessary.

 

Nick Glover: nglover@clemson.edu

Computer Science, Clemson University

 

From: "Jeff Olson" <jlolson@cal.net>

To: "atlantis" <atlantis@wetheliving.com>

Subject: ATL: "Can we make a machine come alive?"

Date: Tue, 3 Jul 2001 13:05:13 -0700

Debbie asked:

<<Do the AI scientists believe that the aforementioned qualities or characteristics [properties used to define life] can be created by man apart from biological life itself? If so, is this not something along the lines of the ghost in the machine idea? Can someone who understands this subject please provide me with some basic information? I'm sorry, but the suggestion of imputing life to what is essentially a computer program comes off to me as delusional thinking and mysticism. I'm truly lost.>>

 

And well you should be, Debbie, if you do not prostrate yourself before the (future) "Ghost in the Machine" :-)

 

  I'm sure subscribers to strong AI would be startled if not shocked to see their beliefs characterized as "mysticism," and yet I think there is substance to this charge.  I have long suspected that the unquestioning belief that machines will someday become conscious (i.e., will come to life) has strongly religious elements.   The question raised by Debbie here, which is so blithely set aside by so many who contemplate the issue -- including skeptics of strong AI -- is much deeper, I think, than most take it to be.  In my view, the fundamental question is not whether we can create a conscious machine, but rather "Can we make a machine come alive?"  Essentially, strong AI believers are claiming the theoretical power to "animate" matter into a living being.

 

  In order to establish this claim, they must fundamentally re-define "life" as (intelligent) consciousness.  They must define life apart from the very context which gives meaning to the term -- namely, *biological* beingness.  To define life apart from that which fundamentally composes it strikes me as rather paradoxical, if not simply nonsensical.  The essential defining characteristic of life, in the strong AI view, is not biological activity, but instead "purposeful (and presumably conscious) behavior."

 

  Once life has been defined in this way, virtually anything might qualify as living.  Given that we cannot access consciousness directly (since consciousness itself is non-quantifiable), we might find purpose in a computer program, a calculator, a neural net, or a marble rolling down various chutes and triggering various actions in a Rube Goldbergian machine.

 

  My point is that we can imitate life in any number of ways, but there is no logical reason to suppose that these imitations equal life.  We can, no doubt, build machines that replicate themselves, perform complex tasks, speak and respond to language, perceive the outer world, simulate human behavior, etc., but which of these activities, taken either individually or in toto, would constitute "life" or "consciousness"?  As machines become capable of more and more complex tasks, at what magical or transcendent point will the machine suddenly become a "living being"?

 

  I would argue that there is no such magic point.  Machines could become, in theory, near-infinitely complex -- in both behavior and design -- and still be no closer to what we call consciousness than an abacus or a calculator.  Analogy does not equal identity: no matter how closely we approximate the mechanical function, say, of a human arm, we will never attain (through non-biological means) the *identity* of an arm.  We will always be limited to imitating the arm.

 

  By the same token, we will no doubt be increasingly capable of simulating human or animal consciousness/behavior, but regardless of how closely we approach this behavior, we will never attain the identity of a human being.  For that identity has its roots inextricably in biology, **not** in complexity or even in certain behaviors.  Pain, pleasure, and emotions are products of that biological identity, and in my view can neither be defined nor captured apart from it; we could never make a machine "feel" anything, including pain, because we cannot define those feelings in any way that is separable from biological experience.

 

  This argument is going to become highly relevant in the not very distant future, when machines can imitate human behavior to the nth degree.  When we reach that point, as in the movie "A.I.", we will have to decide if the simulation does or does not in fact "equal identity."  We may face the bizarre scenario of ascribing rights to machines that have no awareness or feeling, based on the unwarranted strong AI premise that an imitation of something is equivalent to that thing. This, in my opinion, would make a far more interesting and realistic science fiction tale than the one penned by Spielberg/Kubrick.  Imagine the irony of the human race constraining itself out of respect for unconscious devices.  If that occurred, it could amount to one of the stupidest acts the human race has ever performed. Hmmmm.  I think I feel a novel coming on here....

Jeff

 

From: Nick Glover <nglover@clemson.edu>

To: atlantis@wetheliving.com

Subject: Re: ATL: A.I.

Date: Tue, 03 Jul 2001 18:24:56 -0400

 

Debbie Clark:

"Nick, um, I think you've been reading too many sci fi books or something."

 

That is most certainly true, though irrelevant to the current discussion.

 

Debbie Clark:

"A robot does not, nor can it ever, possess a rational and free mind. It can do only what it is programmed to do...."

 

Well, then why can't we program it to possess a rational and free mind?

 

Perhaps the best way to conceive of this is to consider neural networks.  We know that the human brain is basically a biological neural network.  Unless we wish to bring religion into the discussion, we have to assume that the mind arises as a result of this neural network.  Of course, we don't understand most of the details involved, which is why we are far from completely understanding the human mind.  The basic component of our biological neural networks is the neuron, which sends signals to other neurons at a strength that depends on the strength of the signals it receives from other neurons.  So, in order to duplicate the functionality of the human mind, we only need to duplicate the functionality of the brain's neural network.  This requires duplicating the functionality of individual neurons and preserving their arrangement in relation to other neurons.  We can duplicate the functionality of neurons in many ways.  We can build electronic neurons which do everything that a neuron does, or we can write a program that acts as a single neuron.  Then we need to arrange these neuron duplicates in a manner similar to the way the human brain is organized, and we will have a robot or computer program which is like a human mind.  Note that the above description leaves out much about neural networks, but hopefully it is enough to get my point across.
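
[Editor's note: the abstract neuron Nick describes -- an output whose strength depends on the weighted signals coming in -- can be sketched in a few lines. This is only an illustration of that model, not of any real AI system; the weights and threshold below are arbitrary.]

```python
def neuron(inputs, weights, threshold):
    """A crude artificial neuron: fire (output 1) if the weighted
    sum of incoming signals exceeds a threshold, else stay quiet."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two input signals: one excitatory (weight 1.0), one inhibitory (weight -0.5).
print(neuron([1, 1], [1.0, -0.5], 0.3))  # fires: 1.0 - 0.5 = 0.5 > 0.3
print(neuron([0, 1], [1.0, -0.5], 0.3))  # silent: -0.5 < 0.3
```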

 

Of course the big problem is actually arranging our neurons as they are arranged in the human mind. Currently, we are only able to implement relatively simple functions ( such as addition, exclusive or, some pattern matching, etc. ) with neural networks, but even in those cases it is sometimes questionable whether these neural networks are actually performing these functions in the same manner that the human mind does.
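
[Editor's note: the exclusive-or function Nick mentions is the classic small example. It cannot be computed by any single threshold neuron, but a two-layer network of such units handles it. The weights below are hand-chosen for illustration, not learned.]

```python
def unit(inputs, weights, threshold):
    # Threshold neuron: fire if the weighted input sum exceeds the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

def xor_net(x1, x2):
    """XOR as a tiny feed-forward network: (x1 OR x2) AND NOT (x1 AND x2)."""
    h_or  = unit([x1, x2], [1, 1], 0.5)       # fires if either input fires
    h_and = unit([x1, x2], [1, 1], 1.5)       # fires only if both fire
    return unit([h_or, h_and], [1, -1], 0.5)  # OR, vetoed by AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```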

 

Debbie Clark: "...and the only reason a computer program exists in the first place is because human minds thought it up."

 

This hardly seems like a relevant criticism of AI.  Since you are religious, you might make the same objection to humans possessing a rational and free mind on the basis that humans only exist because God thought us up.

 

I don't think any argument about who or what originated something can tell us anything about the limitations of that thing.  If all other arguments indicate that a non-biological intelligence is possible then we can certainly conclude that if random processes can create human intelligence, then humans could create another intelligence through directed effort.

 

Debbie Clark: "I guess times have really changed if young folks today are no longer able to differentiate fantasy -- or the delusions of their own minds -- from that which exists in physical reality. In fact, I tried to warn people years ago that this type of thing might happen if they kept on putting their babies and small children in daycare centers instead of keeping them close to them, but as you can see, nobody listened."

 

I have no idea why Debbie insists on insulting her opponents on this issue.

Nick Glover: nglover@clemson.edu

 

 

From: Nick Glover <nglover@clemson.edu>

To: atlantis@wetheliving.com

Subject: Re: ATL: A.I.

Date: Wed, 04 Jul 2001 23:26:52 -0400

 

I do agree largely with Dennis May on the issue of AI (especially his recent two posts).  I think the problem here is the degree of specialized knowledge required to argue for strong AI.  I find it reasonable for those who lack knowledge of AI not to believe in strong AI, because they have not seen evidence for it.  However, it does not make sense to claim strong AI is impossible based on a lack of knowledge about AI.  The same applies to the issue of AI rights.  I would not expect anyone to accept that a specific AI has rights without hearing an argument for it, but I don't see how list members who seem to know little about AI can justify claims that no AI could have rights.

Nick Glover: nglover@clemson.edu

Computer Science, Clemson University

 

From: "Dennis May" <determinism@hotmail.com>

To: atlantis@wetheliving.com

Subject: ATL: Re: AI & The Vast Gulf

Date: Thu, 05 Jul 2001 21:15:29 -0500

 

Barbara Branden wrote:

<Dennis, you keep asserting that science has proved that machines can be made conscious, but you never have explained how this would be possible. I mean, explained in layman's terms. I am not a scientist, but over the years complex scientific concepts have been explained to me in terms I could understand. Assertions won't convince anyone – they are utterly empty -- and I don't see how the discussion can profitably go further if you don't do so. I, for one, would very much like to understand your reasoning, whether I ultimately agree with you or not. But so far, you have given me no grounds to agree with you.>

 

If by conscious you mean self-aware and intelligent then we can proceed.  If by conscious you mean something mystical as implied by Nathaniel Branden's last posts then we may not proceed.

 

Assuming you mean the former the steps are quite simple.

 

1.  Man evolved within a mechanistic environment made of atoms.

 

2.  Man and his brain are entirely composed of these atoms.

 

3.  Dissection and study have revealed, and mathematical modeling and experiments confirm, that the mind is a neural network. A neural network is essentially a relay system made of inputs and outputs.

 

4.  The neurons of the brain are relays, something known to science in electrical form for 180-plus years, and in mechanical form since antiquity.

 

5.  Neural networks may be actual relays run in parallel or, given a fast serial processor, processed serially.  In any case we are talking about the vast and well known world of information theory.

 

6.  The information content of the mental processes of primitive creatures can be easily understood and recreated in computer form.  As creatures become more complex more neurons and more calculations are required.

 

7.  At some point a creature processes enough information that it must include its own actions as part of its observation of and reaction to the environment.  This internal feedback must eventually include housekeeping duties for stored and processed information.  This process is a continuum from the smallest to the most complex creatures.  This is why our nearest cousins do in fact possess many mental attributes formerly thought to exist only in man.

 

8.  Neural networks have followed the pattern of increasing complexity exactly in step with what you would expect from natural biological evolution.  We are presently awaiting the movement from animal-level thought processes [in some respects] to human-level thought processes and beyond.  Slow speed and the lack of parallel computation are the primary impediments to achieving full self-awareness and intelligence at the human level and beyond.  The time it would take to do the necessary computations is presently too long, and each new generation of computers comes along before the last generation's computations could have been completed.  Soon the crossover point of speed required versus computations required will see full human-level intelligence.  It is simply a waiting game for raw computational power.  The software is a challenge but with sufficient power and parallel computations an intelligent brain will evolve much as our ancestors did from feedback and a finite set of rules.  This is known from the work of von Neumann and many others since his time.

 

Barbara Branden wrote:<I cannot accept the idea that I need a Ph.D. in physics and years of scientific study to know what you are talking about.>

 

One of my personal flaws is that I often cannot gauge what other people do and do not understand.  My brother and I have had countless arguments over the years concerning what is and is not obvious.  Things which are totally obvious to me are often a mystery to people I thought would understand.  You do not need a Ph.D. in physics and years of scientific study to know what I am talking about.  Many teenage students are well aware of what I am talking about.  On the other hand, I do know people with Ph.D.s in physics and engineering who don't have a clue about AI and a great many other things.

 

Jeff Olson wrote:

<P.S.  Sorry, Dennis.  But as Mr. Wilson would say (if he were a philosopher), "You've been a baaad boy! (epistemologically speaking:-)">

 

I prefer to be the Budweiser ferret who is baaad but Oh so good!

Dennis the Mayness

 

From: PinkCrash7@aol.com

To: atlantis@wetheliving.com

Subject: Re: ATL: Re: vast gulfs

Date: Fri, 6 Jul 2001 03:06:26 EDT

 

Dennis May wrote in response to Mike Hardy:

>Searle may indeed have a greater grasp of certain areas of science than I do or even of the philosophy of science in general.  Concerning the subject at hand [AI] he remains poorly informed and what I have read of his writings is gibberish.

>My understanding of your views on reductionism and determinism still places them exactly where I indicated previously.  If you wish to re-explain them again I might see something I missed before.  If you could provide some sources or references I will again investigate your view and see if I can come to a different conclusion.

>I do not separate philosophical positions and my understanding of science.  The two are tightly intertwined and form a continuum of understanding.  It disturbs me that so many people have bought the myth of a top-down philosophy whereby philosophy dictates the bounds of science without feedback to correct the errors in philosophy.  There is only one reality and its description must be consistent from all points of view.

 

I have never read any of John Searle's books (something I have put on my reading agenda), although I do have several other books which mention him. One of these books --  which I've mentioned and quoted from several times on other subjects -- is Antonio Damasio's 1999 book, _The Feeling of What Happens: Body and Emotion in the Making of Consciousness_.   In Chapter 11, the last chapter of this book, Damasio addresses the matter of artificial consciousness in a way which I think is very pertinent to our discussion.

 

The following excerpts are from this book:

"Perhaps the most startling idea in this book is that, in the end, consciousness begins as a feeling, a special kind of feeling, to be sure, but a feeling nonetheless.  I still remember why I began thinking of consciousness as feeling and it still seems like a sensible reason: consciousness *feels* like a feeling, and if it feels like a feeling, it may well be a feeling.  It certainly does not feel like a clear image in any of the externally directed sensory modalities.  It is not a visual pattern or an auditory pattern; it is not an olfactory or gustatory pattern.  We do not see consciousness or hear consciousness.  Consciousness does not smell or taste. Consciousness feels like some kind of pattern built with the nonverbal signs of body states. It is for this reason perhaps that the mysterious source of our mental first-person perspective - core consciousness and its simple sense of self -- is revealed to the organism in a form that is both powerful and elusive, unmistakable and vague....[...snip...]

 

"Presenting the roots of consciousness as feelings allows one to glean an explanation for the sense of self.... [...snip...]

 

"Importantly, by making feelings be the primitives of consciousness, we are obliged to inquire about the intimate nature of feeling.  What are feelings made of?  What are feelings the perception of?  How far behind feelings can we get?  These questions are not entirely answerable at the moment.  They define the edge of our current scientific reach.

 

"Whatever the answers may turn out to be, however, the idea that human consciousness depends on feelings helps us confront the problem of creating conscious artifacts.  Can we, with the assistance of advanced technology and neurobiological facts, create an artifact with consciousness?  Perhaps not surprisingly, given the nature of the questions, I have two answers for it, and one is no and the other is yes.  No, we have little chance of creating an artifact with anything that resembles human consciousness, conceptualized from an inner-sense perspective.  Yes, we can create artifacts with the formal mechanisms of consciousness proposed in this book, and it may be possible to say that those artifacts have some kind of consciousness.

 

"Some external behaviors of artifacts with formal mechanisms of consciousness will mimic conscious behaviors and may pass a consciousness version of the Turing test.  But for all the good reasons that John Searle and Colin McGinn have adduced on the matter of behavior, mind, and the Turing test, passing the test guarantees little about the artifact's mind.  More to the point, the artifact's internal states may even mimic some of the neural and mental designs I propose here as a basis for consciousness.  They would have a way of generating second-order knowledge, but, without the help of the nonverbal vocabulary of feeling, the knowledge would not be expressed in the manner we encounter in humans and is probably present in so many living species.

 

"Feeling is, in effect, the barrier, because the realization of human consciousness may require the existence of feelings.  The 'looks' of emotion can be simulated, but what feelings feel like cannot be duplicated in silicon.  Feelings cannot be duplicated unless flesh is duplicated, unless the brain's actions on flesh are duplicated, unless the brain's sensing of flesh after it has been acted upon by the brain is duplicated.... [...snip...]

 

"From its humble beginnings to its current estate, consciousness is a revelation of existence... "

 

[Again, the above excerpts are from the 1999 book, __The Feeling of What Happens: Body and Emotion in the Making of Consciousness_ by Antonio Damasio, Professor and Head of the Department of Neurology at the University of Iowa College of Medicine in Iowa City and adjunct professor at the Salk Institute for Biological Studies in La Jolla.  Damasio is also the author of _Descartes' Error_.]

Debbie

 

 

From: "Dennis May" <determinism@hotmail.com>

To: atlantis@wetheliving.com

Subject: ATL: Re: AI  and the Vast Gulf

Date: Fri, 06 Jul 2001 16:35:43 -0500

 

Dave Thomas wrote:

<Dennis, I wonder if you could tell me (us) how far up the evolutionary ladder we have come with AI?

 

The last big survey of the state of the art I read was over a year and a half ago, and it was months old by then.  At that point in time they had approximately equaled the problem-solving ability of mice and next planned to move on to the level of cats. As far as speed and agility, many industrial robots already outperform all biological systems.  If you have the bucks you can buy one today.  Agility in an environment with an unknown number of variables is problematical for many AI systems.  I don't think the big bucks are oriented toward the mechanics right now.  I did see one hyperactive little robot on some show which did move around like the small animals we are used to seeing. Raw intelligence and specific applications [such as language] attract the real money.

 

I'm not sure there will be much more news in the evolutionary respect of intelligence, since testing going from mouse to cat to chimp to man will overlap in many variables [chess is one example]. The ant level of intelligence was surpassed some time ago, and the grasshopper level of intelligence is to be integrated into landmines presently under contract [landmines which communicate and move to fill voids in minefields]. Crustacean levels were achieved a few years before the talk of mouse levels was in progress.

 

<It seems clear that AI has skipped over fundamental (to life) areas of intelligence; and AI progression seems to me to be fundamentally different than biological evolution in ways other than evolutionary speed.  Why is this?

 

People have directed the areas of most interest in AI research.  For profit enterprises, the government, and universities have spent their money on communication systems, industrial robotics, vision systems, graphics, language systems, defense related systems, traffic modeling, and many other parts of the AI picture. Very, very few researchers actually work on just the theory or just what would appear to be the ability to reproduce what a cat can do.  Why spend millions on a cat when millions will solve a specific valuable problem?

 

<I ask this because listening to you, it seems that we've created better AI than I'd ever heard of.  Also, controversial conclusions sound like foregone conclusions from you.

 

I watch out here and there for AI discussions on TV and in the popular media.  I have not seen anything presented in the media which wasn't already discussed back in the late 1980's.  They keep popping up old news like it is fresh news.  This is true in many areas of science and engineering, not just AI.

 

What appear to be controversial conclusions depends on your point of view.  They are foregone conclusions unless we are to believe those who still hang onto biology as a process producing mysterious unknowable indivisible consciousness forces [what I view as mysticism, but that is an argument for another time].

 

Mike Hardy wrote:

<Dennis, I keep challenging you to name the researchers who made these discoveries, and cite the papers in which they published them.  You never answer.

 

For the same reasons I have said before. These are concepts and knowledge taught in classrooms at universities all over the world.  If you want to know who is who and what they are publishing, search the internet, go to libraries and bookstores. There is no one comprehensive source or summary on this subject; it is dispersed knowledge across many sources and in many fields.

 

I wrote:

>The software is a challenge but with sufficient power and parallel computations an intelligent brain will evolve much as our ancestors did from feedback and a finite set of rules.  This is known from the work of von Neumann and many others since his time.

 

Mike Hardy wrote:

<Our ancestors evolved from feedback and a FINITE SET OF RULES?  Where did you get that?  To talk about a FINITE SET OF RULES is to talk about computer programming, not about biology.

 

See John von Neumann and cellular automata or cellular automaton.  See also "evolution".
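
[Editor's note: Dennis's pointer to von Neumann and cellular automata can be made concrete with an elementary one-dimensional automaton, where a finite rule table plus feedback generates surprisingly complex patterns from a trivial starting state. This toy -- Wolfram's Rule 110 on a wrapped row of cells -- only illustrates the idea of "a finite set of rules"; it is not a model of biological evolution itself.]

```python
def ca_step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state is looked up from the rule number using its 3-cell neighborhood."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode (left, self, right) as a 3-bit index into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

row = [0] * 15 + [1] + [0] * 15   # a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = ca_step(row)
```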

 

This so-called pseudo-science is mainstream engineering and science.  What is not mainstream science is the assumption of biological mysterious unknowable indivisible consciousness forces so many people seem to believe in.  They will not state their belief in such a manner, but the unstated implication is present in many arguments against AI.

Dennis May

 

From: Michael Hardy <hardy@math.mit.edu>

To: atlantis@wetheliving.com

Subject: ATL: Re: AI and the Vast Gulf

Date: Fri, 6 Jul 2001 19:46:42 -0400 (EDT)

Dennis May wrote:

>I'm not sure there will be much more news in the evolutionary respect of intelligence since testing going from mouse to cat to chimp to man will overlap in many variables [chess is one example].

 

I'm not up on the field known by the misnomer "artificial intelligence," but to me the case of programs that play chess seems like a really obvious case in which no intelligence other than that of the human programmers is active.  The machine is clearly not doing anything like what humans do when we play chess.  When people talk about a machine "seeing" it doesn't seem as obvious whether it's doing anything like what we do when we see, but in the chess case it's clear.  (Although, when we see things, there is such a thing as what those things look like to us.  I've never heard of anyone claiming to have achieved that with a computer or other artifact.)

Mike

 

References:

 

David J. Chalmers, _The Conscious Mind_, Oxford University Press: 1996.

 

Antonio R. Damasio, _Descartes' Error_, Avon Books, New York: 1994.

 

Some list members have been asking for evidence or proof that strong AI is possible and have been claiming that the believers in strong AI only believe based on assumption or faith.  There is no absolute proof that strong AI is possible, but I will attempt to show why I think strong AI is possible.  I was initially hesitant about doing this because my argument requires knowledge of computing theory, but I have decided to try to briefly explain some of the needed computing theory as it comes up in my argument.

 

The Objectivist notion of causality is a key to my argument for strong AI.  In ITOE, pp. 108-109, Peikoff says:

 

"Since things are what they are, since everything that exists possesses a specific identity, nothing in reality can occur causelessly or by chance.  The nature of an entity determines what it can do and, in any given set of circumstances, dictates what it will do.  The Law of Causality is entailed by the Law of Identity.  Entities follow certain laws of action in consequence of their identity, and have no alternative to doing so."

 

So based on this, we can see there exists a set of contingencies for each entity that exists.  Each entity has a set of causal rules that say things like "If I encounter an entity of type X, then I will do Y in reaction to this encounter."  For every possibility, what an entity will do in a situation is defined.  So essentially, all of reality can be defined by sequences of potential actions for every entity.  In computing theory, a sequence of actions is called an algorithm.  The definition of an algorithm also includes potential or conditional actions.  So all of reality can be defined by algorithm ( albeit, an extremely complicated one ).

 

Now, it turns out that the set of all algorithms is precisely what we believe computers can do in theory (this statement is essentially the Church-Turing Thesis).  I emphasize the word "believe" here because the previous statement is why my argument is not a proof.  We are unable to come up with any problems which are algorithmic yet not computable by a theoretical computer, nor are we able to come up with any formalizations of algorithms (two of these are Turing Machines and recursive functions) that are any more powerful than what a computer can do in theory.  For these reasons, the Church-Turing Thesis is generally accepted by mathematicians and computer scientists.
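
[Editor's note: the Turing Machine mentioned above is itself just a finite rule table driving a read/write head over a tape, and a few lines suffice to simulate one. The bit-flipping machine below is a made-up example, chosen only to show how little machinery the formalism needs.]

```python
def run_tm(tape, rules, state="q0", halt="halt", blank="_"):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol)
    to (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape, blank everywhere else
    pos = 0
    while state != halt:
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that flips every bit, then halts at the first blank:
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", flip))  # -> 0100
```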

 

So, as long as we accept the Church-Turing Thesis, we can see that reality could be implemented by a computer program.  The obvious criticism is that this would just be a simulation and somehow not the same as the real thing.  To see how this counter-argument fails, we go back to Peikoff's notion of causality.  Essentially, in reality, entities only affect other entities on the basis of what they do.  Every aspect of an entity's identity can be converted into a set of actions that aspect entails.  For example, assume I claim to have implemented all of reality in a computer program.  Debbie (a fictitious person not meant to resemble anyone in real life :} ) claims that an apple in our reality is red, yet the implementation of the apple in my computer program is clearly not red.  My response would be that an apple is only red in our reality because it reflects red light.  In my computer program, the apple also reflects red light.  The light is not the light in our reality, but within the computer program it does exactly what light does in our reality, so it is light within the reality implemented by the computer program.  So the apple in the computer program is not red in our reality, but it is red within the reality implemented by the computer program.  My point is that in this case, all that matters is the interactions between the entities that form a closed system, not the actual nature of those entities in our reality.  Of course there are some major differences with this computer-generated reality, because on a whim someone could unplug the computer and destroy the reality, but as a self-contained reality it works just like our own.

 

The next issue is whether portions of reality can be implemented by a computer program and still act correctly when interacting with our actual reality.  Some things cannot be implemented by a computer program in this manner.  One example is a steel-cutting tool.  The nature of a steel-cutting tool requires that it be physically constructed out of components that can cut steel.  However, the human mind can be implemented by a computer program.  To make this as easy as possible, we can assume we have a real human body except for the brain, which is just some computer components running a computer program.  Clearly the internal aspects of the mind can be duplicated, because all that matters is the interactions between the internal components.  However, the mind also perceives the outside world using the senses and perceives various things about the rest of one's body.  This problem is not hard in principle to solve.  Our body state and perceptions arrive in the brain as electrical signals that can be understood by our brains.  All we need are converters that convert those electrical signals to the proper format to be understood by the version of the mind implemented by the computer program.

 

So, if someone points to some aspect of our biology as being necessary to the human mind, all we need to do is implement that aspect in our computer program to make it meet this requirement for being exactly like a human mind.  For example, suppose I implemented the human mind as a computer program except for neurotransmitter X, and someone criticized my model, claiming that neurotransmitter X is necessary for some vital aspect of the human mind.  I would say, "What does neurotransmitter X do?"  "Well, it is produced by T part of the brain, it bonds with neurotransmitter Y to form Z, it is a catalyst for reaction A between B and C, and it does N to the nearby neurons."  Then I would just modify my human mind program so that T part of the brain produces another component of the implementation which bonds with Y to form Z, catalyzes reaction A between B and C, and does N to nearby neurons within the computer program.
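
[Editor's note: Nick's substitution argument is, in programming terms, a claim about interfaces: any component that produces the same interactions can be swapped in. A sketch of the idea -- where the transmitter names and its actions are invented placeholders from his hypothetical, not real biology -- is that code depending only on what a component *does* cannot distinguish between implementations of it.]

```python
from typing import Protocol

class TransmitterX(Protocol):
    """Everything 'neurotransmitter X' does, stated as an interface."""
    def bond_with_y(self) -> str: ...
    def catalyze_a(self, b: str, c: str) -> str: ...

class BiologicalX:
    # Stands in for the original biological component.
    def bond_with_y(self) -> str:
        return "Z"
    def catalyze_a(self, b: str, c: str) -> str:
        return b + c

class SimulatedX:
    # A software replacement that performs the same actions.
    def bond_with_y(self) -> str:
        return "Z"
    def catalyze_a(self, b: str, c: str) -> str:
        return b + c

def downstream_reaction(x: TransmitterX):
    # This code interacts with X only through what X does,
    # so it cannot tell the two implementations apart.
    return x.bond_with_y(), x.catalyze_a("B", "C")

print(downstream_reaction(BiologicalX()) == downstream_reaction(SimulatedX()))
```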

 

In conclusion, I believe my arguments provide a good reason for believing strong AI is possible, though not a proof because the Church-Turing Thesis is not proven.


On 4/6/2016 at 10:11 AM, BaalChatzaf said:

What is "free will"?  We are made of the same stuff as rocks, trees, computers and electric motors.  In physical-material terms (they are the only ones that count), what is "free will"?  Our bodies are governed by the same physical laws as everything else material and physical.  What sort of non-physical, non-material magic is "free will"?

I have no doubt that humans are autonomous.  They are capable of altering their own control processes.  We can "re-write" some of our "software".  So can non-human, human-made machines.  Do they have "free will"??

You're implying that humans don't have it--whatever it is--so you must know what it would be if it could be/would be.

--Brant

so, what is free will?


Brant wrote, “so, what is free will?”

 

That again? Let me dig out an oldie. I would say free will is consciousness with the ability to DECIDE how to react to reality. And that is called volition. It is not just random or instinctually determined. We can point to consciousness and say there it is, or plot it on an EEG, etc.

 

Rand wrote:

“A process of thought is not automatic nor “instinctive” nor involuntary—nor infallible. Man has to initiate it, to sustain it and to bear responsibility for its results.”

end quote

 

Ba'al Chatzaf once wrote:

I have no doubts at all about the existence of volition. I volit at least six times an hour. I just wonder what physical process makes it happen. Everything that exists is physical.
end quote

 

Glad to hear it Ba’al. Hard Scientists have no problem with Chaos or The Uncertainty Principle. Does Volition grate on your nerves at the rate of six times per hour? :o) I hope not. That would be worse than urinating six times a night because you take a diuretic for high blood pressure.

 

I have always thought that Hard and Soft Determinists MUST supply a running list of what just made them say what they just said. Now that is a road to Freudian insanity. I do not think there is a “cause” for everything said, as in billiard ball causality, other than in the sense that human choice is a type of causality.

 

I am sympathetic to the argument that once begun, math, or a chain of logic leads to a particular answer once proposed, but those formulations are the product of volitional causality.

 

Brant wrote, long ago, on a planet far away: Well, obviously (?) it's neurogenic activity. And the brain basically functions apart from consciousness but makes consciousness possible. But where is consciousness? Could it exist somewhat outside the body in the same sense that some body heat exists outside the body? I know this doesn't answer you, btw. end quote

 

My educated guess is that Consciousness is electro / physical / chemical. That *Entity* for physical reasons is confined to the body. Electrical extensions of the *conscious entity* can move it outside the physical confines of the body. Well at least our science of full immersion video games may be heading into that area.

Peter

Link to comment
Share on other sites

From: "George H. Smith" <smikro@earthlink.net>

To: <objectivism@wetheliving.com>

Subject: OWL: Re: Mind as emergent [was: Objectivism's concept of free will]

Date: Mon, 12 Apr 2004 14:44:30 -0500

 

Neil Goodell (4/11) wrote:

"I'm not sure I agree with Mike Rael's (4/9) characterization of mind: [Rael] 'The way I see it, once the physical constituents of a mind have been created, the mind can control the starting of its own processes to some degree. What happens when I raise my hand up? Physical things are going on, but the determiner is my mind.' [Goodell] In my reading of it he seems to be trying to keep the advantages of a dualist perspective of the mind-body question but without calling it that."

 

The term "dualism" covers a broad range of views in philosophy. It is often associated with Cartesianism, according to which the mind is a "substance" that can exist independently of matter. In less extreme versions, dualists are those who repudiate reductionism, according to which the mind (i.e., consciousness) is nothing but "matter in motion." Dualists in this latter sense don't necessarily deny that consciousness depends on matter for its existence. They contend, however, that consciousness (a state of awareness) is not something physical per se, however much it may be causally dependent on physical phenomena.

 

I am a "dualist" only in this latter sense, and I suspect the same is true of Mike Rael. Indeed, the kind of emergence theory that Neil goes on to defend is a common foundation for this variety of dualism.

 

Ayn Rand, in maintaining that consciousness is epistemologically axiomatic, that a state of awareness cannot be explained by something more fundamental, was also defending this sort of mitigated "dualism." But I doubt if she would have cared for this label, given its customary association with the Cartesian theory of mind, which of course she did not agree with.

 

Neil wrote:

"George Smith makes a distinction between "hard" determinism and "soft" determinism (4/11), between biology and psychology if you will, concluding, "Even though I disagree with physical determinism, there are powerful arguments in its favor, and it is a position deserving of respect. I'm afraid I cannot say the same about "soft" determinism."

 

"As I've said previously, I'm a complete and committed determinist, but I don't agree with any of these views. My position is that mind is an emergent property of the brain. What this means in philosophic terms is that the nature of the causality that operates at the level of the brain is separate and distinct from that which operates at the level of the mind. (This is similar to a levels of analysis argument.)"

 

I agree with emergence theory, as here summarized. This is one reason I reject physical determinism, and it also plays a role in my not-so-thinly disguised contempt for "soft determinism."  The mind, as an emergent phenomenon, needs to be studied on its own terms, and we can access it directly only through introspection. We should not assume that causation in the world of consciousness is analogous to causation as we observe it in physical phenomena. We should not assume, for example, that "motives" operate like physical particles that, upon striking other mental "things," such as choices, "cause" them to move.

 

The mind is not a world of mental billiard balls moving to and fro, engaging in endless collisions which "cause" us to choose this or that. Of course, the soft determinist will repudiate this characterization of his position as unfairly crude and inaccurate. But it doesn't take much scratching beneath the language of the soft determinist to see that this is exactly how he analyzes mental phenomena. He adopts what is essentially a mechanistic, linear view of mental causation, in which a mental event (say, a value) somehow "causes" another mental event (say, a preference), which in turn "causes" us to make a choice to put the eight ball in a given pocket.

 

One needn't defend the view that choices and other mental events are "uncaused" in order to defend volitionism. Certainly Rand didn't take this view, and neither do I. I subscribe (as did Rand) to an "agency theory" of causation, according to which a rational agent -- and not merely antecedent *events,* whether mental *or* physical -- can properly be said to be the "cause" of his own mental acts. This is essentially an Aristotelian perspective, one that has been defended not only by modern Thomists but also by other contemporary philosophers, such as Richard Taylor. It had a number of able defenders in earlier centuries as well, such as the eighteenth-century philosophers Richard Price and Thomas Reid. This position was also defended by Nathaniel Branden in "The Objectivist Newsletter" and, later, in *The Psychology of Self-Esteem.*

 

Neil wrote:

"And I do not believe my position is inconsistent with Objectivism. (More on this below.)"

 

Emergence theory does not conflict with Objectivism, but any form of determinism most certainly does.

 

[snip]

"Rand says over and over again that the premises a person holds in their mind are what determine their character. As she writes in Galt's Speech, "...that your character, your actions, your desires, your emotions are the products of the premises held by your mind—that as man must produce the physical values he needs to sustain his life, so he must acquire the values of character that make his life worth sustaining—that as man is a being of self-made wealth, so he is a being of self-made soul—that to live requires a sense of self-value, but man, who has no automatic values, has no automatic sense of self-esteem and must earn it by shaping his soul in the image of his moral ideal..."

 

This passage does not entail or suggest determinism. On the contrary, Rand's claim that man "is a being of self-made soul" is an expression of free will.

 

Some time ago on another list, I wrote a post in which I discussed the possibility that, according to Rand, our only truly free choice is the choice to think (or focus) or not, after which everything else is necessarily determined. Although I concede that there are some passages by Rand that give this impression, I don't think this is what she believed; and I would further maintain that this interpretation is inconsistent with her overall approach, including many of her remarks about ethical theory and moral responsibility. I think the passages in question were probably instances of rhetorical exaggeration, made for the purpose of emphasis. This sort of thing is fairly common in Rand's writings.

 

Neil wrote:

"I don't know whether George Smith would characterize this as "soft" determinism, but it is certainly determinism of a non-biological kind, "your character, your actions, your desires, your emotions are the products of the premises held by your mind." If this were *not* the case, it would mean that the relationship between premises and character was arbitrary, which would have the effect of eviscerating the entirety of Objectivism's concept of virtue."

 

Rand did not defend any kind of determinism, whether "hard" or "soft." In calling our character, actions, desires, and emotions the "products" of premises held by our minds, there is good reason to believe she was drawing logical, rather than strictly causal, connections. In any case, one needn't be a determinist to maintain that how and what we think will greatly influence what kind of characters we have and how we will act. This complex issue has nothing to do with determinism one way or the other.

 

Neil wrote:

"In other words, if determinism is denied, there can be no morality. If specific causes do not lead to specific effects (i.e., indeterminism) then effects are arbitrary and a person cannot be held responsible for them."

 

If this were true, then we could hold a rock or a tree or a snail morally responsible for its behavior -- for in all such cases specific causes lead to specific effects.

 

In order for there to be moral responsibility, there must first be a moral agent, i.e., a rational being who can make autonomous decisions and choices that are not causally necessitated by antecedent events that he is powerless to change or control. If the actions of a mass murderer were causally necessitated by a chain of antecedent events, which reach back (presumably) to infinity, long before he (or any life form) existed, then he is no more "responsible" for his behavior than a snail. Both behave not as they choose, but as they *must.* For what, in a deterministic scheme, could we hold a mass murderer responsible *for*? For being born? For possessing undesirable genes? For not making better choices that were metaphysically impossible for him to make? For not possessing an omnipotent power to alter past events over which neither he nor anyone else has any control?

 

When we pass a negative moral judgment, part of what we mean is that a person *should* not have made the choice he did under those circumstances. He *ought* to have chosen differently in that precise situation. If, however, his "choice" (and I use the word advisedly in this context) was causally necessitated by antecedent events that he was powerless to change, then to pass moral judgments on humans makes no more sense than to pass moral judgments on clouds for causing a flood.

Ghs

Link to comment
Share on other sites

Once again I am drawn to the argument that we cannot discuss potential artificial intelligence without first talking about the human mind. The following examples of human thought date from the ’70s to the ’90s. I can see I am in over my head, so read and weep. :)

Peter

 

Rand wrote about volition: But man exists and his mind exists.  Both are part of nature, and both possess a specific identity.  The attribute of volition does not contradict the fact of identity, just as the existence of living organisms does not contradict the existence of inanimate matter.  Living organisms possess the power of self-initiated motion, which inanimate matter does not possess; man's consciousness possesses the power of self-initiated motion in the realm of cognition (thinking), which the consciousness of other living species does not possess.  But just as animals are able to move only in accordance with the nature of their bodies [and I add - so do humans move their bodies], so man is able to initiate and direct his mental action only in accordance with the nature (the *identity*) of his consciousness.  His volition is limited to his cognitive processes; he has the power to identify (and to conceive of rearranging) the elements of reality, but not the power to alter them. He has the power to use his cognitive faculty as its nature requires, but not the power to alter it nor to escape the consequences of its misuse. He has the power to suspend, evade, corrupt, or subvert his perception of reality, but not the power to escape the existential and psychological disasters that follow. (The use or misuse of his cognitive faculty determines a man's choice of values, which determine his emotions and his character.  It is in this sense that man is a being of self-made soul.)

 

From Mind in Objectivism, A Survey of Objectivist Commentary on Philosophy of Mind by Diana Mertz Hsieh:

Roger Bissell

            Roger Bissell's paper “A Dual-Aspect Approach to the Mind-Body Problem” (published in the Fall 1974 Reason Papers) defends the view that “a mental process and the physical brain process correlated with it are one and the same brain process, as viewed from different cognitive perspectives” (Bissell 1974, P4).  Thus the brain has two distinct, irreducible aspects: a mental aspect and a physical aspect (Bissell 1974, P29, P47).  And “mental processes are actually mental physical brain processes” distinguishable “from all other physical brain processes by virtue of their introspectable, mental aspect” (Bissell 1974, P47).

 

            The basic problem with dual aspect theories is one of circularity, in that concepts of consciousness (like perspective and appearance) are used to explain the basic nature of consciousness itself.  (Binswanger has a clear discussion of this problem in the first lecture of The Metaphysics of Consciousness, while Kelley has a more confusing comment in response to a question after the second lecture on free will in The Foundations of Knowledge (Binswanger 1998; Kelley 1986).)  Bissell's theory certainly seems to suffer from circularity in speaking of consciousness as our introspective awareness of brain processes, as seen in this comment:

 

The Dual-Aspect theory holds that mental processes are actually certain physical brain processes as we are aware of them introspectively, i.e., that “mental” refers to the fully real, introspectable aspects of those particular physical brain processes. Our awareness of them is the form in which we are aware of certain brain processes introspectively, just as our awareness of the physical aspects is the form in which we are aware of those brain processes extrospectively. (Bissell 1974, P45)

 

Here and elsewhere, Bissell inverts the hierarchy of concepts by explaining the lower-level concept of consciousness in terms of the dependent, higher-level concept of introspection.  Consequently, the meaning of “introspective awareness” is rendered completely unclear, given that it usually refers to awareness of our own mental states, not awareness of our brain states.  Additionally, by describing consciousness as awareness of brain states, Bissell seems to have provided a theory of mind more consistent with idealism or representationalism rather than the realism espoused by Objectivism.

           

Despite these critiques, Bissell's arguments are often interesting and compelling—and deserve more attention than given here.

 

David Kelley

            In his two 1986 lectures on free will from The Foundations of Knowledge, David Kelley focuses on the issue of mental causation in order to refute the claim that free will contradicts the law of identity (Kelley 1986).  As a result, many of his discussions in these two lectures are extremely relevant to philosophy of mind. 

 

Kelley identifies the source of the apparent conflict between causality and free will as the Humean view of causality, in which events cause other events.  As a result of this error, philosophers generally regard causality as dependent upon the passage of time, such that to be caused is to be determined by “antecedent factors.”  This view of causality obviously conflicts with free will.

           

In contrast, Kelley argues that the Aristotelian/Objectivist account of causality, in which “causality is a matter of the nature or identity of the objects which act,” does not limit causality to antecedent factors.  Rather, it allows for “many different modes of causality in nature,” including simultaneous causality between the levels of organization that emerge in complex systems, such as in conscious organisms.  Kelley discusses two basic forms of such simultaneous causality: upward causation and downward causation.  In upward causation, entities acting at a lower level of organization simultaneously cause effects on the entities in a higher level of organization.  Downward causation is simply the reverse, such that entities acting at a higher level of organization simultaneously cause effects on the entities in a lower level of organization.  For Kelley, consciousness is “a higher level phenomenon distinct from the electrical activity of specific parts of the brain.”

 

            Unfortunately, Kelley leaves implicit perhaps the most critical point about such simultaneous causality within complex systems, namely that these lower and higher levels are equally real, with causal powers of their own.  Modern analytic philosophy, in contrast, tends to be deeply reductionistic about such levels of organization, such that the higher levels are seen as really “nothing but” the lower levels, such that everything eventually reduces to the microphysical.  Consequently, higher levels of organization (including the perceptual level) are seen as less real (if real at all) and the existence of downward causation is denied.  The rejection of this “collapsing levels” metaphysics is clearly critical to Kelley's account of causation, even though never explicitly discussed.

 

            Based upon this rich understanding of causality, Kelley argues that both upward and downward causation are involved in consciousness through an example of an animal seeing a predator and fleeing.  After tracing the “antecedent factor” causality in both the brain (the lower level) and the mind (the higher level) in this situation, Kelley turns to the connections between these levels of organization.  In upward causation, the brain causes changes in consciousness.  Thus the visual cortex might upwardly cause perception and the limbic system might upwardly cause recognition and fear.  In downward causation, consciousness causes changes in the brain.  Thus perception might “affect the visual cortex by keeping its activities centered on the appropriate object” and fear might determine “which particular set of neural impulses gain control of the motor cortex.”  Such simultaneous upward and downward causation, on Kelley's account, is an integral part of any conscious process.

 

            Kelley then specifies the role of all three forms of causality (upward causation, antecedent factors, downward causation) with respect to free will.  The “capacity to focus” is an instance of upward causation because it owes its existence to “the nature and structure of the brain.”  A person's “specific knowledge,” “hierarchy of values,” and “thinking skills” are all antecedent conditions which “set limits on what it is possible ... to focus on.”  But within those limits, the choice to focus or not, to raise or lower one's level of consciousness or not, is “pure downward causation.”

 

            The theory of mind Kelley sketches in these lectures is far from complete, but nevertheless promising.  His detailed explanation of the Objectivist/Aristotelian alternative to Humean causality and his non-reductionistic view of levels of organization seem indispensable for accounting for mental causation in an Objectivist theory of mind.

Link to comment
Share on other sites

4 hours ago, Brant Gaede said:

You're implying that humans don't have it--whatever it is--so you must know what it would be if it could be/would be.

--Brant

so, what is free will?

In the light of physical laws I am asking what IS Free Will??? Does it mean against or apart from physical laws? If so, what are we? What are we made of? How are we put together? Can you reconcile Free Will with Physical Law? If so, how?

Link to comment
Share on other sites

Ba’al wrote: Can you reconcile Free Will with Physical Law?  If so, how? end quote

 

That rainbow is red, oygbi, violet. I can tell. I can measure it with a spectrometer, but I can’t see it for you.

 

I can think of an exaggerated million ways to respond to the question, “What should we do next?” But I can’t do it for you. Yet if you ask yourself that, you can come up with an answer, and another answer, and another answer, then decide to think even harder about an answer . . . and that is not billiard ball causality. You might argue that it is simply higher level billiard ball causality and my caused question caused you to search for an answer, and I would say, “No, I didn’t.” If we took a vote, one million sentient humans would say they have free will and one person would say, “No it isn’t.” Introspect. Unless I were schizophrenic I would accept the baby’s conclusion that “Those are my toes.”    

Peter   

Link to comment
Share on other sites

1 hour ago, Peter said:

Ba’al wrote: Can you reconcile Free Will with Physical Law?  If so, how? end quote

 

That rainbow is red, oygbi, violet. I can tell. I can measure it with a spectrometer, but I can’t see it for you.

 

I can think of an exaggerated million ways to respond to the question, “What should we do next?” But I can’t do it for you. Yet if you ask yourself that, you can come up with an answer, and another answer, and another answer, then decide to think even harder about an answer . . . and that is not billiard ball causality. You might argue that it is simply higher level billiard ball causality and my caused question caused you to search for an answer, and I would say, “No, I didn’t.” If we took a vote, one million sentient humans would say they have free will and one person would say, “No it isn’t.” Introspect. Unless I were schizophrenic I would accept the baby’s conclusion that “Those are my toes.”    

Peter   

The Aroma of Subjectivity wafts....

Link to comment
Share on other sites

The immortal, indestructible *gods* humans invented a million years ago were really just our way of describing our view of the universe. Clueless humans see the obvious but can’t explain it, so they imagine the gods.

Ba’al wrote: Can you reconcile Free Will with Physical Law?  If so, how? end quote

Aye! You think I’m that smart? And when I danced around the subject, Ba’al wrote: The Aroma of Subjectivity wafts.... end quote  

Let’s do the hustle. What is the *self*? If you dream about something that never happened or you someday, WHOA! “climax,” what the hell just happened? And why start a political speech with a joke? Who’s that for? What are original thoughts? Was it just “the time” to discover X-rays? The old adage that all discoveries are built on the shoulders of the giants who came before should only be taken metaphorically. So, can volition create more than a thought that corresponds to a fact in reality? Yes. No. And yet the effects of thought persist if historical accounts persist. A representation of reality persists if photos or a computer program persist. I hope we do discover a cache of alien artifacts or hear their calling in the night, “Is anyone out there?”

Let me cogitate some more.

Peter      

Link to comment
Share on other sites

Where is consciousness? Oddly enough, it’s in your head. I have always been intrigued by the hypothesis that consciousness persists after death, or can be projected, as in telekinesis, ESP, or ghosts. Not only have those “projected thoughts” never been proven, they are used to dupe the gullible. No evidence of projected thoughts is real. There are no ghosts. No ESP. So, Ba'al, is there a ghost in the machine? Am I or you unreal?

Peter    

Link to comment
Share on other sites

I have hundreds of pages on the self, hard and soft determinism, etc., so I will try to condense a few to save space in the mind of OL. I will attempt to stop myself if I want to reproduce old thoughts again. I might be repeating myself.

Peter

Rand's official definition in Jean Moroney's "Glossary of Objectivist Definitions": "A man's self is his mind--the faculty that perceives reality, forms judgments, chooses values."

VOS, p. 18 (pb): "Consciousness -- for those living organisms that possess it -- is the basic means of survival. The simpler organisms, such as plants, can survive by means of their automatic physical functions. The higher organisms, such as animals and man, cannot. The physical functions of their bodies can perform automatically only the task of using fuel, but cannot obtain that fuel. To obtain it, the higher organisms need the faculty of consciousness."

From: Adam Victor Reed <areed2@calstatela.edu>

To: Ram Tobolski <rtb_il@yahoo.com>, objectivism@wetheliving.com

Ram Tobolski wrote: >Let me elaborate. According to Roger's Aristotelian materialism, only entities, that is physical bodies, act. So it is clearly entailed by this view, that whatever thinking is, it is the _body_ that thinks. And my question is: _how_ can a physical body think?

1. The body is a physical entity. In the absence of evidence for the belief that the body _differs_ from other physical entities in ways that would make it incapable of doing what other physical entities do, it is enough to show that physical entities are capable of doing whatever it is that the body - more specifically the brain – does when it thinks.

2. All measurements of thought measure the manipulation, or processing, of information. So in the absence of any evidence for the belief that thinking requires capabilities beyond those needed for information processing as such, it is enough to show that physical entities are capable of processing information.

3. It can be shown, and is shown in basic courses on digital circuit design or on neural networks, that, and exactly how, physical entities, such as neurons or transistors, can perform the logical NOR (neither-nor) operation on their inputs.

4. It can be shown, and is shown in basic courses on logic design, how assemblies of neural or electronic NOR circuits can perform all logical operations that may arise in the course of any information processing; and how other capabilities needed for thought, such as storing information over time, can also be obtained by, or in the evolution of organisms emerge from, appropriate interconnection of NOR circuits.

5. Therefore physical entities that contain the requisite components, including physical bodies of organisms with nervous systems composed of interconnected neurons, are known to be capable of doing everything that is known to be required for thought. QED.

Adam Reed
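Reed's steps 3 and 4 — that the NOR operation alone suffices to build every other logical operation — are easy to verify for oneself. Here is a minimal sketch in Python; the function names and layout are my own illustration, not anything from Reed's post:

```python
# Step 3: the one primitive. A neuron or transistor circuit can compute
# NOR: true only when neither input is true.
def NOR(a, b):
    return not (a or b)

# Step 4: every other logical operation emerges from interconnected NORs.
def NOT(a):
    return NOR(a, a)            # a NOR a = not a

def OR(a, b):
    return NOT(NOR(a, b))       # negate the NOR

def AND(a, b):
    return NOR(NOT(a), NOT(b))  # De Morgan: a and b = not(not a or not b)

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# Check every combination of inputs against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert OR(a, b) == (a or b)
        assert AND(a, b) == (a and b)
        assert XOR(a, b) == (a != b)
print("All logical operations reconstructed from NOR alone.")
```

The storage capability Reed mentions in step 4 falls out of the same primitive: two cross-coupled NOR gates form an SR latch, the textbook one-bit memory cell.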

From: "Peter Reidy" peterreidy@hotmail.com To: objectivism@wetheliving.com

A simpler way of putting what I said (and, I suspect, what Ralph Blanchette and Saulius Muliolos said) is: the self is simply a person in the case where the person observing and the person observed are the same. Peter

Allen Costell wrote: THE SELF: an integrated psychophysical being [genus] wherein the mental capacities are more fundamental (i.e., determinative of the actions and continued existence of the entity) than the various other capacities [differentia].

Ellen Moore wrote: I suggested that the self is "a living entity's nature by which it exists, acts and values."  I intended that I was stating genus and differentia.  But I also see that I referred to it mistakenly as a "definition".  So I retract the "definition" part, and suggest it be viewed as a description only.  Any objections or further suggestions?

From: Paul Hibbert: My contribution: The self is that entity which is conscious of its own consciousness.

From: "Edward L. Scheiderer" The only referents possible for "self" either refer to your consciousness, to your body, or to the integrated relation of both. Since the process of consciousness is involved in any concept of "self", and there is no consciousness without the body, the third alternative must hold the proper definition of the concept. Consciousness arises as the _result_ of bodily (nervous system and brain) processes. And every aspect introspectively observed about our consciousness -- from perception, to logical inference, to emotion -- demonstrates it to be a state of continuous processing. This state is (while awake) a _constant_ for us, and as such is _like_ an entity: it has specific attributes which exist over time and give it a continuous identity.

From: Chris Matthew Sciabarra: Joshua asks:  "From an Objectivist perspective, what is the self?  I would like to find or develop a formal definition, in genus-differentia format."

According to my various searches through the Objectivist literature on CD-Rom, and other sources, here are some choice comments on "the self":

1.  From Galt's speech in ATLAS SHRUGGED: "The ~self~ you have betrayed is your mind; ~self-esteem~ is reliance on one's power to think."

2.  From THE FOUNTAINHEAD (Soul of an Individualist): "Men have been taught that the ego is the synonym of evil, and selflessness the ideal of virtue. But the creator is the egoist in the absolute sense, and the selfless man is the one who does not think, feel, judge or act. These are functions of the self."

3.  From JOURNALS OF AYN RAND: (part 4, "Theme and Characters," ~The Fountainhead~; Dec. 4, 1935): 'And further: what is the self? Just the fact that one is born and conscious, just the 'I' devoid of all definite content? Or—the 'I' that values, selects and knows precisely the qualities which distinguish it from all other "I's," which has reverence for itself for certain definite reasons, not merely because "I-am-what-I-am-and-don't-know-just-what-I-am." If one's physical body is a certain definite body with a certain definite shape and features, not just a body—so one's spirit is a certain definite spirit with definite features and qualities. A spirit without content is an abstraction that does not exist. If one is proud of one's body for its beauty, created by certain lines and forms, so one is proud of one's spirit for its beauty, or ~that which one considers its beauty~. Without that—there can be no pride of spirit. Nor ~any~ spirit.'

5.  Notes while writing THE FOUNTAINHEAD (1942): "The self as thought . . ."

6.  Notes while writing Galt's speech (Jan. 9, 1954): 'Their search for "themselves"—the ~self~ is the mind.'

7. [undated] 'Your consciousness is that which you know—and are alone to know—by direct perception. It is that indivisible unit where knowledge and being are one, it is your "I," it is the self which distinguishes you from all else in the universe. No consciousness can perceive another consciousness, only the results of its actions in material form, since only matter is an object of perception, and consciousness is the subject, perceivable by its nature only to itself.  To perceive the consciousness, the "I," of another would mean to become that other "I"—a contradiction in terms; to speak of souls perceiving one another is a denial of your "I," of perception, of consciousness, of matter. The 'I' is the irreducible unit of life.'

8.  LETTERS OF AYN RAND: to Richard DeMille (son of Cecil) [November 27, 1946] In a discussion of ANTHEM, Rand writes: 'I don't quite understand why you refer to my use of the word EGO as a "symbol." I did not use it as a symbol, I used it in its exact, literal meaning. I did not mean a symbol of the self—but specifically and actually Man's Self.'

9.  Peikoff, OBJECTIVISM: THE PHILOSOPHY OF AYN RAND (Ch. 8): "Since the self is the mind, self-esteem is mind-esteem."

10.  There is a whole section on "Self" in the revised edition of INTRODUCTION TO OBJECTIVIST EPISTEMOLOGY, running from page 251 to page 256. Here are some excerpts from Rand's comments: "The consciousness of self is implicit in [any grasp] of consciousness. . . . The notion of 'self' is an axiomatic concept; it's implicit in the concept of 'consciousness'; it can't be separated from it. . . . [Y]ou are the precondition of the concept of 'consciousness'.  In every state of consciousness that you experience, part of it is the fact of the person who experiences. And in that sense you are implicit in every state of your consciousness."

Prof. E:  "In other words the only fact of reality that you'd have to get in order subsequently to form the concept 'self' or 'I' would be your being conscious."

AR: "That's right."

Prof. B:  "It's not that you experience consciousness and later on you discover a new component: self."

AR:  "Exactly."

The only thing I can add to this is some of my own observations from AYN RAND: THE RUSSIAN RADICAL.  If "the self" is equivalent to "the mind," in Rand's view, then one must subsequently ask what Rand means by "mind."  Norman Barry has said (correctly I think) that Rand has a particularly ~expansive~ conception of mind, of consciousness.  It includes integrated aspects of perception, focus, reason, abstraction, conception, and so forth.  And given Rand's integrated view of mind and body as part of a single, living organism---just how ~expansive~ is the corresponding notion of self? Cheers, Chris

From: "Philip Coates" Chris Sciabarra suggests [3/17] that "the self" is equivalent to "the mind" in Rand's view. But Chris also quotes her in IOE, "In every state of consciousness that you experience, part of it is the fact of the person who experiences." That would be a more precise formal definition and would clear up a number of the confusions of recent days: Self = "the person who experiences". Mind=the central capacity, faculty, or attribute of the self.

In other words, consciousness is what the self -does-. It's dependent while self is independent. It's what -houses- mind, consciousness, will, feeling, experiences... every other aspect or capacity or mental content of a human being.

Look at it this way: the four [there are others in Aristotle's list of ten but they are derivative] fundamental types of existents are i) 'entity', which is metaphysically independent, and ii) existents which are dependent for their existence on there being the relevant entity or entities: 'action, attribute, relationship'. One could debate about whether 'mind' or 'consciousness' are actions, attributes, or relationships (the reason this debate would be a waste of time is because in certain contexts mind and consciousness are each of these).

But the more fundamental point is that these two things are dependent on there being a self which possesses them. That's the entity. There is no free-floating mind or consciousness without the entity possessing them. And we need a name for that entity. And that is "self." The -person- who experiences...or performs or houses or possesses all the other stuff. (To say it is the sum of all the rest would be correct in a sense, but less precise than Rand's phrasing in IOE.) Your self is -you-. It is not your mind.

From: "Donald Baldino" Just to indicate the complexity of Philosophy of Mind, it's worth noting that consciousness supervenes not on one brain but on two brain hemispheres that function almost independently, and whose connection, the corpus callosum, can be surgically severed to relieve epileptic seizures. Post-operative functioning is pretty much the same, with some interesting exceptions identified by Roger Sperry and Ronald Myers. Our personalities, our minds, our selves may be less integrated than we realize.

Philosopher Richard Swinburne has noted that the two brain halves theoretically could be transplanted into different bodies. That could lead to some interesting legal claims.

From: "George H. Smith"

On 7/24/03, Bill Dwyer wrote: "It is often said that a person is composed of a mind AND a body (as if the two were radically different substances), when in fact the mind is simply a certain activity of the brain as experienced subjectively and is therefore PART OF the body."

I see no problem in saying that a person "is composed of a mind AND a body," if by "body" we understand his physical characteristics exclusively. We needn't treat consciousness as a substance ("mind-stuff") in order to recognize that it has characteristics and powers that cannot be fully explained by referring to physical brain states alone.

What does it mean to say that mind (or consciousness) "is simply a certain activity of the brain as experienced subjectively"? This suggests that the ability to "experience" something is a characteristic of physical matter when viewed from a certain perspective. But this *presupposes* the ability to attain this perspective in the first place, an ability that requires a state of consciousness before any experience is even possible.

Strictly speaking, we do not normally "experience" brain activity. Rather, it is because brain activity causes a state of consciousness that we are able to experience things via sensations, perceptions, and thoughts. When I perceive a Mockingbird, I am not experiencing "a certain activity of the brain"; rather I am perceiving a Mockingbird. Of course, this experience depends on a physiological process. This process makes my perception of the Mockingbird possible; but the physiological process, the brain activity, is not itself the *object* of my perception.

When I render an epistemological judgment, such as "X is true," my judgment (which is a type of experience) is not a report on my brain activities, nor is it merely those brain activities as "experienced subjectively" (whatever such a statement is supposed to mean). The *meaning* of "X is true" cannot be reduced to physical brain activities, which do not have "meaning" and to which concepts like "true" and "false" do not, and cannot, apply.

Bill wrote: "Since mental activity is brain activity, and brain activity is a physical process, it follows that mental activity is a physical process. Whereas not all physical activity is mental, all mental activity is nevertheless physical, because it is performed by the brain, which is a physical organ."

To say that mental activity is brain activity is not obviously true, if by this we mean that consciousness is "nothing but" brain activity. This reductionist thesis requires proof and may not be used as a postulate in order to avoid the traditional problems of mind-body interaction. Even if we do admit that "mental activity is brain activity," in the sense that consciousness is causally dependent on brain activity, it does not follow that mental activity is NOTHING BUT a physical process. This is a flagrant non sequitur.

To repeat: When I say "X is true," I am not referring to a physical process of any kind, for physical processes can be neither "true" nor "false." I am not experiencing, or reporting on, my brain activities from a subjective point of view. Indeed, I have no idea what brain activities may be occurring when I render epistemological judgments of this sort, but I am able to understand a proposition, as well as assess its truth or falsehood, without a scintilla of such knowledge. Moreover, even if I had extensive knowledge of the brain activities that occur when I understand a proposition and assess its justification, this knowledge would not help one bit in my epistemological endeavors.

My conscious experiences and the brain activities on which those experiences causally depend are NOT the same thing. We cannot simply say that mental activities are physical brain activities which are somehow experienced from a "subjective" point of view -- for this leaves unanswered the crucial question, How can a purely physical activity "experience" anything in the first place?

Ayn Rand and other Aristotelian philosophers have correctly pointed out that the state of consciousness is an irreducible primary from an epistemological point of view. Moreover, conscious states are intentional; to be conscious is to be conscious of *something.* And this "something" is an object of perception, cognition, etc. -- not our physical brain activities. When I say "I see a Mockingbird" or "X is true," I am NOT describing my experience of neuronal firings in my brain, of which I may be totally unaware, and which, even if I were aware of them, would be irrelevant to the *meaning* of my experiences.

Bill wrote: "Perhaps an analogy will help...Just as "the morning star" (visible in the eastern sky before sunrise) and "the evening star" (visible in the western sky at sunset) are not two different planets, but the same planet identified from two different perspectives, so the mind (identified introspectively) and the brain (identified extrospectively) are not two different organs, but the same organ identified from two different perspectives."

This analogy assumes far too much, for we must ask: How is it possible for anything to have "two different perspectives" in the first place? Certainly we don't attribute this ability to the planet Venus. We don't say that the planet Venus can view itself from "two different perspectives," because we don't attribute consciousness to a physical planet. We say that humans can (and have) viewed Venus from two different perspectives because we know that humans have a rational faculty and are able to interpret a phenomenon in different ways. But it doesn't explain *anything* to say that consciousness is simply brain activity viewed introspectively -- for how is this even possible in the first place, if we are dealing with a purely physical process? Where is the "it" that has this introspective ability? If you answer that the "it" is the physical brain, then this merely reaffirms what few would deny, namely, that physical brain activities cause a state of consciousness. But this "mind" may have emergent properties and abilities that make it much different than the physical causes on which it depends. The cause of X is NOT the same as X itself.


20 hours ago, BaalChatzaf said:

In the light of physical laws I am asking what IS Free Will???  Does it mean against or apart from physical laws? If so, what are we?

What are we made of? How are we put together? Can you reconcile Free Will with Physical Law? If so, how?

Why pick on free will? Consciousness has to be of the same ilk but more basic. What is consciousness? It's your "meat" manufacturing a form of energy. One kind for consciousness, another for the functioning of the autonomic nervous system, another to make the heart beat, etc. Your "meat" is the whole of you, not merely your brain. There's a reason our bodies are over 98 degrees. Meat at work.

--Brant


29 minutes ago, Brant Gaede said:

Why pick on free will? Consciousness has to be of the same ilk but more basic. What is consciousness? It's your "meat" manufacturing a form of energy. One kind for consciousness, another for the functioning of the autonomic nervous system, another to make the heart beat, etc. Your "meat" is the whole of you, not merely your brain. There's a reason our bodies are over 98 degrees. Meat at work.

--Brant

It is all done by glycolysis and breaking ATP down to ADP. Our energy is chemical energy. We run on sugar and adenosine triphosphate.
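For reference, the reaction Bob is pointing to can be written out. The net equations and the free-energy figure below are standard biochemistry, not something stated in this thread:

```latex
\mathrm{ATP} + \mathrm{H_2O} \longrightarrow \mathrm{ADP} + \mathrm{P_i}
\qquad \Delta G^{\circ\prime} \approx -30.5~\mathrm{kJ/mol}
```

Glycolysis itself nets, per molecule of glucose: Glucose + 2 ADP + 2 Pi + 2 NAD+ → 2 Pyruvate + 2 ATP + 2 NADH + 2 H+ + 2 H2O.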


*A thing apart*.

The biggest problem to me in my life has been figuring out how individuals view their own consciousness.

Appears they see it either:

1. as a spiritual entity,

2. as a mental calculator.

In both cases it is something OF them, but not THEM. For both, consciousness is 'unknowable'.

Whether it was religion, Descartes, Hume, Kant -or all of those- who is to answer for that matters less than its destructive effects.


On 4/6/2016 at 2:07 PM, anthony said:

Beats me how this rock had the free will to write that.

A computer can modify its own programs under certain circumstances. A computer is NOT a rock. It has a complicated internal state structure. All the rock has is thermodynamic microstates and a few chemical potentials.
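Bob's claim that a computer can modify its own programs is easy to demonstrate in a few lines. This is a minimal illustrative sketch (the function name and new source text are invented for the example, not taken from the thread): a running program rebuilds one of its own functions from new source code and rebinds the name, so subsequent calls run the new behavior.

```python
def greet():
    return "hello"

# The running program holds new source text for one of its own functions...
new_source = 'def greet():\n    return "hello, world"\n'

# ...compiles it at runtime, and rebinds the name to the new definition.
namespace = {}
exec(new_source, namespace)
greet = namespace["greet"]

print(greet())  # now prints "hello, world"
```

The point is not that this is sophisticated, only that a program's own code is data it can inspect and rewrite, which is categorically different from a rock's fixed thermodynamic state.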

