Subject and Object



In the Metaphysics topic "Absolute versus Objective" General Semanticist asserted:

The object results from an interaction between the observer and the observed. Simply said, the "object" doesn't exist for you, when you turn away. Maybe a memory of it does but the immediate abstraction we call the object has gone. This is all it means.

There was a General named Korzybski

Who faced an infantry charge-iski

It was a fine day

When he turned smartly away

And was shot in the cheek of his sit-ski.

Being shot would be a direct perception, bringing the infantry back into his awareness and making it real once more, even though, temporarily, it did not exist for him.

In essence, perhaps, it might be claimed that all perception is an interaction between the object and the subject. A long time ago, in another discussion entirely, a friend suggested this example of the importance of context. You see an image of a primitive man, crouched, rock in hand, snarling, the very symbol of the proto-human, a brute, threatening. But widen the context and you now see him facing a predator, a large cat; behind the man is a female with an infant. Now what you perceive is something else, a protector and defender, the human versus the true threatening beast.

Clearly, we bring to every perception our history, both cultural and personal.

This afternoon, I read a chapter of a book on forensic evidence. The chapter was about the limitations of fingerprinting as an element in criminal investigation. If you accept the validity of fingerprinting as presented in the mass media, an expert on a witness stand who states, based on his experience, that the print matches the defendant's print, then you will perceive that evidence from one point of view. If, however, you come to the problem with the knowledge of a criminalist, you may have a host of questions about reliability, falsifiability, statistical inferences, the techniques of printing and matching. Scotland Yard and the British Home Office historically insisted on more "match points" than the plethora of American jurisdictions. At root, fingerprinting is a qualitative art, not a quantitative science. How it is perceived, however, depends not so much on the objective reality of the evidence as on the subjective experience of the perceiver.

Whether the courtroom "ceases to exist" for the juror who falls asleep is a different question entirely. I am willing to accept the broader claim that what we perceive depends on who we are. I am unwilling to accept the unfounded and unsupported assertion of General Semanticist that the very existence of the object depends upon its being perceived. That claim seems sophomoric at best.

We know the claims of objectivism -- of rational-empiricism, including the weaker assertions called "positivism" as well as the stronger claims of capital-O Objectivism. If there is any useful benefit to the general semantic non-A theory, this would be a good place for such a catalog... as opposed to bald claims that only elicit predictable replies.

Edited by Michael E. Marotta

I am unwilling to accept the unfounded and unsupported assertion of General Semanticist that the very existence of the object depends upon its being perceived. That claim seems sophomoric at best.

Think of a camera. When you point it at something, you see an image in the viewfinder; when you point it away, the image is gone. Our "viewfinder" is our visual cortex, and so we must remember that what we are "seeing" is in our brain, not outside.


I am unwilling to accept the unfounded and unsupported assertion of General Semanticist that the very existence of the object depends upon its being perceived. That claim seems sophomoric at best.

Think of a camera. When you point it at something, you see an image in the viewfinder; when you point it away, the image is gone. Our "viewfinder" is our visual cortex, and so we must remember that what we are "seeing" is in our brain, not outside.

What is the benefit in "must remember"? For me I "must remember" that that tree I'm driving toward really is there and I'd best turn the wheel, not my head.

--Brant


What is the benefit in "must remember"? For me I "must remember" that that tree I'm driving toward really is there and I'd best turn the wheel, not my head.

--Brant

I guess you've never heard of an optical illusion?


What is the benefit in "must remember"? For me I "must remember" that that tree I'm driving toward really is there and I'd best turn the wheel, not my head.

--Brant

I guess you've never heard of an optical illusion?

How does gs know an illusion from a non-illusion?

--Brant


GS:

So since you cannot "see" the bullet, can you extrapolate that it is on its way at any point?

Adam


I think it is because the speed of light is so fast and our nervous system is so fast that we are unaware of the time delay between when the light leaves "the object" and the image manifests itself in our cortex. Of course, when we look at distant "objects" in space it is very clear. In fact they may not even exist anymore yet we can still "see" them. :D
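The time delay GS describes can be made concrete with a little arithmetic. A minimal sketch in Python; the distances below are illustrative round numbers, not measurements:

```python
# How "old" is the image that reaches the visual cortex?
# Travel time = distance / speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def travel_time_seconds(distance_m: float) -> float:
    """Seconds for light to cross the given distance."""
    return distance_m / SPEED_OF_LIGHT_M_S

# An object across the room, about 3 m away: roughly 10 nanoseconds,
# far below anything the nervous system can resolve.
room = travel_time_seconds(3.0)

# The Moon, about 3.84e8 m away: over a second.
moon = travel_time_seconds(3.84e8)

# A star 100 light-years away: the light we see left a century ago,
# so the star "as we see it" may no longer exist.
LIGHT_YEAR_M = 9.4607e15
star_years = travel_time_seconds(100 * LIGHT_YEAR_M) / (365.25 * 24 * 3600)

print(f"room: {room:.1e} s, moon: {moon:.2f} s, star: {star_years:.0f} years")
```

The point of the numbers: for everyday objects the delay is real but imperceptible, and only at astronomical distances does the gap between "the object" and its image become obvious.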


GS

Ok.

So at some micro- or millisecond level we can "see" the bullet, yes.

Adam


  • 2 months later...

I've been doing some backreading (for fun) and came upon this post. MMarotta quoted GS, I think, saying that

"The object results from an interaction between the observer and the observed. Simply said, the 'object' doesn't exist for you, when you turn away. Maybe a memory of it does but the immediate abstraction we call the object has gone. This is all it means."

Just thinking... the object never exists for you, whether you turn away or not. When looking at an object, the object excites receptors which send electrical signals to the brain. The brain sees these electrical stimuli. When looking away from an object, the brain can still "see" the stimuli, but it does so through electrical impulses generated within the imagination. In either case, consciousness is aware of signals... it's just the source of the signals that changes.

If we're talking about "what exists for someone," we have to define "existence to someone." Here are the results of an experiment:

3 groups of participants:

1st group practiced basketball free throws for 15 minutes each day for a week

2nd group visualized practicing free throws for 15 minutes each day for a week

3rd group had no practice

Comparing ability before and after the week, the 1st and 2nd groups' free-throw performance increased significantly and equally; the 3rd group showed no increase in free-throw ability.

Conclusion: as far as the mind was concerned, visualization is just as real as stimulus-interaction. ... Enter the Matrix Neo, wooohoowoohoo


This afternoon, I read a chapter of a book on forensic evidence. The chapter was about the limitations of fingerprinting as an element in criminal investigation. If you accept the validity of fingerprinting as presented in the mass media, an expert on a witness stand who states, based on his experience, that the print matches the defendant's print, then you will perceive that evidence from one point of view. If, however, you come to the problem with the knowledge of a criminalist, you may have a host of questions about reliability, falsifiability, statistical inferences, the techniques of printing and matching. Scotland Yard and the British Home Office historically insisted on more "match points" than the plethora of American jurisdictions. At root, fingerprinting is a qualitative art, not a quantitative science. How it is perceived, however, depends not so much on the objective reality of the evidence as on the subjective experience of the perceiver.

I think this is an interesting question (partly because I spend my days writing fingerprint analysis software). Currently, a human expert is better than any computer program at interpreting and matching fingerprints. That's because we don't completely understand the process of visual perception even as it relates to a constrained domain such as fingerprint analysis.

When I look at a pair of potentially matching fingerprints, I often get to a point where I say, "ah, hah, they match." It seems like we, as humans, are able to tell when we have enough information to say that two things match. A similar thing happens when they don't match. At some point, the fact that two prints (or images of prints) cannot match becomes clear. At this point, the match or non-match of the fingerprints seems certain.

The same cannot be said of current computer programs. Computerized matching of fingerprints usually occurs in two stages. In the first stage, the fingerprints are independently analyzed and "match points" (minutiae) or some other descriptors are extracted from them. In the second stage, the descriptors are matched.

Now, since descriptors can sometimes be in error -- true minutiae can be missed and false ones spuriously introduced -- the matching program must generally adopt some sort of statistical model of match probability. But, in such a model, the question of whether two prints match can never be answered definitively. In practice, the computed probability is usually much less than one.
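For concreteness, the two-stage pipeline just described can be sketched in a few lines. Everything here is invented for illustration: the Minutia fields, the tolerances, and the scoring rule are toy assumptions, not how any real AFIS descriptor or statistical model works.

```python
# Toy sketch of two-stage fingerprint matching:
# stage 1 reduces each print to a list of minutiae (assumed already done),
# stage 2 pairs up minutiae and returns a probability-like score.
import math
from typing import NamedTuple

class Minutia(NamedTuple):
    x: float      # position in pixels (hypothetical coordinates)
    y: float
    angle: float  # ridge direction, radians

# Stage 1 output: descriptor lists for two prints (made-up values).
print_a = [Minutia(10, 12, 0.3), Minutia(40, 55, 1.1), Minutia(70, 20, 2.0)]
print_b = [Minutia(11, 13, 0.32), Minutia(41, 54, 1.08), Minutia(90, 90, 0.5)]

def paired(m1: Minutia, m2: Minutia, dist_tol=3.0, angle_tol=0.2) -> bool:
    """Two minutiae 'match' if close in both position and ridge angle."""
    d = math.hypot(m1.x - m2.x, m1.y - m2.y)
    return d <= dist_tol and abs(m1.angle - m2.angle) <= angle_tol

def match_score(a, b) -> float:
    """Stage 2: fraction of minutiae that find a partner, a crude stand-in
    for a statistical match probability (rarely exactly 1 in practice)."""
    hits = sum(any(paired(m, n) for n in b) for m in a)
    return hits / max(len(a), len(b))

score = match_score(print_a, print_b)
print(f"match score: {score:.2f}")  # 2 of the 3 minutiae pair up here
```

Note that the score is a fraction, not a yes/no answer: even for these deliberately similar prints it comes out below 1, echoing the point that a descriptor-based matcher yields a probability-like number rather than a definitive match.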

In some cases the original images are consulted during the matching process. However, due to limitations in the matching technology, it is still impossible to make a definitive assertion about whether the prints match or not.

So, the question becomes, is it possible, in principle, to collect enough information about the similarity of two fingerprints (or two images of anything) to make a definite assertion that the images came from the same source? It certainly seems like it, psychologically, but we do not understand the process well enough to duplicate it algorithmically and we have no theory to argue for it, mathematically. Having a mathematical theory would probably imply an algorithmic solution to the problem, but at this time we have neither.

Darrell


Just thinking... the object never exists for you, whether you turn away or not. When looking at an object, the object excites receptors which send electrical signals to the brain. The brain sees these electrical stimuli. When looking away from an object, the brain can still "see" the stimuli, but it does so through electrical impulses generated within the imagination. In either case, consciousness is aware of signals... it's just the source of the signals that changes.

When the cortex produces an abstraction (object), I believe it represents a certain sequence of neurons firing. When abstractions are produced repeatedly, the cortex begins to be able to produce them without the stimuli present (imagining). You can see this in simple cases: when you look at a bright light and then close your eyes, you can "see" the effects lingering in your retina. The important thing to remember is that "reality" is out there, but we only have abstractions to work with.


  • 2 weeks later...

This is a fascinating topic that rings a lot of my bells. There are so many byways leading off it that I don't know where to start. On an objective level, it raises questions like how do we know what we see, what do we choose to see (how selective are we being with our "sight-consciousness"), the awareness (or otherwise) that things haven't ceased to exist just because we turned away, etc., etc., all culminating in the Objectivist epistemology that object and subject are independent entities.

Personally, this all harks back to my introduction, almost simultaneously, to photography and Objectivism at age 21. No connection, one might think, but indeed there was! As an intensely 'inward' boy, I was aware that (a) I didn't SEE like other people did and (b) I was escaping reality into a dream world.

Whatever the roots of my problem were - a childhood trauma, or suchlike, I guess - is not relevant here, but I had a strong sense that I was living behind a screen, one on which my own thoughts were projected, as well as my sight. This meant that vision was a deliberate act on my part: either on or off. Intensely focused, or not at all. As GS's posts suggest to me, the abstraction of the image on the retina is probably not much different from the abstraction of thought.

Well, it was, and still is to a lesser degree, for me. Not noticing friends in the street; having little visual memory compared to my wife, who can 'see pictures in her head' from childhood right up to yesterday - these are what I've got used to. Little wonder that I took to the camera so eagerly! And it became my career.

In the final analysis, it's all about reality, I think. The ability to see it without prejudice and fear - to experience life in full consciousness - to have trust in my own senses (such as they are), but especially in my mind. Little wonder that I took to Objectivism, too.

Well, I have certainly gone off-topic here, and had better hold back other ideas on this subject for now. I would certainly appreciate more in this vein from GS, Christopher, and anyone else.


I think it is because the speed of light is so fast and our nervous system is so fast that we are unaware of the time delay between when the light leaves "the object" and the image manifests itself in our cortex. Of course, when we look at distant "objects" in space it is very clear. In fact they may not even exist anymore yet we can still "see" them. :D

We see light (i.e., we process incoming photons). We do not see objects that do not radiate or reflect light in the visible spectrum. We sometimes see what appear to be objects that are not there, as with a hologram. (By the way, holograms can now be synthesized.)

Ba'al Chatzaf


Ba'al:

What do you mean by that statement?

"By the way, holograms can now be synthesized."

I have been extremely interested in holography for years.

Adam


I think this is an interesting question (partly because I spend my days writing fingerprint analysis software). Currently, a human expert is better than any computer program at interpreting and matching fingerprints. That's because we don't completely understand the process of visual perception even as it relates to a constrained domain such as fingerprint analysis.

When I look at a pair of potentially matching fingerprints, I often get to a point where I say, "ah, hah, they match." It seems like we, as humans, are able to tell when we have enough information to say that two things match. A similar thing happens when they don't match. At some point, the fact that two prints (or images of prints) cannot match becomes clear. At this point, the match or non-match of the fingerprints seems certain.

The same cannot be said of current computer programs. Computerized matching of fingerprints usually occurs in two stages. In the first stage, the fingerprints are independently analyzed and "match points" (minutiae) or some other descriptors are extracted from them. In the second stage, the descriptors are matched.

This sounds like looking for local features. Don't you use "holistic" methods like neural networks? These are used in many different methods of pattern recognition and I suppose they are also a better model for the human "aha, they match" kind of recognition.


Hello WhyNot. I think you already demonstrate quite a bit of self-awareness in your observations.

The direction of your discussion is closer to the topic of Nathaniel Branden's The Disowned Self, and is best dealt with from a psychological (versus perceptual) angle. I know exactly what you mean about focusing and projecting the image that you want to see onto the world. I do the same myself quite frequently. However, one of the interesting things about awareness per se is not that it involves deep focus; rather, it involves opening up consciousness to receive signals. Thus, awareness functions, perhaps most importantly, as a receptive vehicle.

I believe our minds are divided into several sections, each having its own unique set of feelings, motives, and memories. When we disown a part of our psychology, we forget the memories associated with that part of our brain, and we lose the experience of those feelings and desires. In essence, part of ourselves disappears. We cease to consciously receive signals from that part of our brain. Disowning is a psychological defense mechanism that usually acts to protect us from painful thoughts and memories that feel threatening. The handful of people who can remember their childhood and all the parts of their life clearly are lucky in that they have never disowned parts of themselves.

The effects of psychological disownment play a very powerful role in perception of the world. For example, without certain motivations active, our consciousness does not automatically focus on many signals within the environment. In essence, our attention is not drawn to all the aspects of our environment that we would otherwise be aware of. Here is what John Bowlby, the eminent psychologist of attachment theory, had to say about attachment (i.e., love) between children and adults: "if young children’s attachment behavior is continually aroused but not responded to, they eventually exclude from awareness the sights, thoughts, or feelings that normally would activate attachment behavior." The interesting part is that even if a child still "perceives" (sees) his parent, there would not be the same feelings associated with that perception (the same evaluation of that perception) - thus, even focusing on the same things yields different results based on the organization of one's psychology.

This is just a taste of what you're talking about. It is a very deep topic, and perhaps one of the foundations for the entire field of clinical psychology. We have the joy here of applying such knowledge to the philosophy of epistemology.

Best,

Chris


Ba'al:

What do you mean by that statement?

"By the way, holograms can now be synthesized."

I have been extremely interested in holography for years.

Adam

See

http://www.informaworld.com/smpp/content~db=all~content=a713819247


Christopher, thank you.

I'm trying to avoid the trap of over-simplifying, or paraphrasing, your meaning here, but my first reaction is this: to be 'visual', one must have been 'visible' (in those formative years)?

Which, to go a half-step further, correlates with confidence and trust in one's powers of perception; and so with self-efficacy, and so with self-esteem.

Or have I strayed too far?

The blocking out (or selectivity) of vision "based on the organisation of one's psychology" has of course always engrossed me. I noticed (especially in my years covering hard news) that a bunch of people all viewing the same scene - particularly a traumatic one - all came away with differing images and responses. Interesting.

The Disowned Self is one N. Branden book I never read - I must find a copy.

Thanks again for your considerate reply.

Tony.


To be visual, one must be visible...

If you mean by "visual" the awareness of certain perceptions, then I would agree. Formative years are generally the most vulnerable years, and if the child's internal signals are not reinforced by parents, or if those signals are actively rejected, then there is a high probability that part of the child's psychology becomes invisible, and the loss of internal awareness extends to a loss of external awareness.

I think psychological visibility is more about self-love (worthiness, deserving of happiness). I don't know that we can conclude that a weaker external awareness resulting from disownment necessarily results in less self-efficacy of perception. After all, a child might no longer perceive loving bonds between people, but the child would also no longer care to do so nor be interested in the results that such perception would entail. Thus, for the child's desired goals (the goals the child is actively aware of), the child can perceive just fine.

One area I find fascinating is morality. Here's a scenario:

It’s wartime, and you and some of your fellow villagers are hiding from enemy soldiers in a basement. You are holding a baby you found abandoned when you first entered the basement. The baby starts to cry, and you cover the baby’s mouth to block the sound. If you remove your hand the baby will cry, the soldiers will hear, and they will find you and the others and kill everyone they find, including you and the baby. If you do not remove your hand, the baby will smother to death. Do you smother the baby?

The question is hypothetical, but when people answer yes or no, different parts of their brain light up under fMRI. For people who say yes, a utilitarian, cost-benefit area is more activated (logic dictates that the baby dies either way). For people who say no, an area of the brain connected to relational empathy lights up (imagine the horror of having to kill). In other words, people have entirely different psychological experiences of, and likewise perceptions of, the situation, which then dictate their behavior.

Chris

Edited by Christopher

Web address: http://www.sciencedaily.com/releases/2009/08/090813142430.htm

Brain Innately Separates Living And Non-living Objects For Processing

ScienceDaily (Aug. 14, 2009) — For unknown reasons, the human brain distinctly separates the handling of images of living things from images of non-living things, processing each image type in a different area of the brain. For years, many scientists have assumed the brain segregated visual information in this manner to optimize processing the images themselves, but new research shows that even in people who have been blind since birth the brain still separates the concepts of living and non-living objects.

The research, published in the Cell Press journal Neuron, implies that the brain categorizes objects based on the different types of subsequent consideration they demand—such as whether an object is edible, or is a landmark on the way home, or is a predator to run from. They are not categorized entirely by their appearance.

"If both sighted people and people with blindness process the same ideas in the same parts of the brain, then it follows that visual experience is not necessary in order for those aspects of brain organization to develop," says Bradford Mahon, postdoctoral fellow in the Department of Brain and Cognitive Sciences at the University of Rochester, and lead author of the study. "We think this means significant parts of the brain are innately structured around a few domains of knowledge that were critical in humans' evolutionary history."

Previous studies have shown that the sight of certain objects, such as a table or mountain, activates different regions of the brain than does the sight of living objects, such as an animal or a face—but why the brain would process these two categories differently has remained a mystery, says Mahon. Since the regions were known to activate when the objects were seen, scientists wondered if something about the visual appearance of the objects determined how the brain would process them. For instance, says Mahon, most living things have curved forms, and so many scientists thought the brain prefers to process images of living things in an area that is optimized for curved forms.

To see if the appearance of objects is indeed key to how the brain conducts its processing, Mahon and his team, led by Alfonso Caramazza, director of the Cognitive Neuropsychology Laboratory at Harvard University, asked people who have been blind since birth to think about certain living and non-living objects. These people had no visual experience at all, so their brains necessarily determined where to do the processing using some criteria other than an object's appearance.

"When we looked at the MRI scans, it was pretty clear that blind people and sighted people were dividing up living and non-living processing in the same way," says Mahon. "We think these findings strongly encourage the view that the human brain's organization innately anticipates the different types of computations that must be carried out for different types of objects."

Mahon thinks it's possible that other parts of the human brain are innately structured around categories of knowledge that may have been important in human evolution. For instance, he says, facial expressions need a specific kind of processing linked to understanding emotions, whereas a landmark needs to be processed in conjunction with a sense of spatial awareness. The brain might choose to process these things in different areas of the brain because those areas have strong connections to other processing centers specializing in emotion or spatial awareness, says Mahon.

Mahon is now working on new experiments designed to further our understanding of how the brain represents knowledge of different classes of objects, both in sighted and blind individuals, as well as in stroke patients.

The data for the study were collected at the Center for Mind/Brain Sciences at the University of Trento in Italy.

Journal reference:

  • Bradford Z. Mahon, Stefano Anzellotti, Jens Schwarzbach, Massimiliano Zampini, Alfonso Caramazza. Category-Specific Organization in the Human Brain Does Not Require Visual Experience. Neuron, 2009; DOI: 10.1016/j.neuron.2009.07.012

Adapted from materials provided by University of Rochester.


One area I find fascinating is morality. Here's a scenario:

It’s wartime, and you and some of your fellow villagers are hiding from enemy soldiers in a basement. You are holding a baby you found abandoned when you first entered the basement. The baby starts to cry, and you cover the baby’s mouth to block the sound. If you remove your hand the baby will cry, the soldiers will hear, and they will find you and the others and kill everyone they find, including you and the baby. If you do not remove your hand, the baby will smother to death. Do you smother the baby?

The question is hypothetical, but when people answer yes or no, different parts of their brain light up under fMRI. For people who say yes, a utilitarian, cost-benefit area is more activated (logic dictates that the baby dies either way). For people who say no, an area of the brain connected to relational empathy lights up (imagine the horror of having to kill). In other words, people have entirely different psychological experiences of, and likewise perceptions of, the situation, which then dictate their behavior.

Chris

Not so hypothetical.

Ponder this. Along the Wilderness Road, or Boone's Trail, in the eighteenth century, westward through Cumberland Gap up to Kentucky, many families and trail parties lost their lives in border and Indian warfare. Compare two episodes in which pioneers were pursued by savages. (1) A Scottish woman saw that her suckling baby, ill and crying, was betraying her and her three other children, and the whole company to the Indians. But she clung to her child, and they were caught and killed. (2) A Negro woman, seeing how her crying baby endangered another trail party, killed it with her own hands, to keep silence and reach the fort. Which one made the right decision?

[Cited from Joseph Fletcher, Situation Ethics (Philadelphia, 1966), p. 124]

This is a footnote to a passage discussing an actual incident during the Holocaust in which a crying infant endangered a group trying to hide from German soldiers. They were hesitant to cover the child's mouth with a pillow for fear the infant would suffocate, but one man eventually did grab the pillow and use it to cover the child's face, with the result that the infant died but the rest of the party survived.


  • 3 weeks later...

Rand has given a definition of the term "objectivity". Has she also defined the terms "objective" and "subjective"?

Oh no!

Alert Gulch and the CDC there is an outbreak of the "subjective flu" T32R024L3L strain.

Adam

Post Script: That complies with one of my pledges to the President, the other one is too subversive for print under O'biwan, The Most Merciful's rule.


One area I find fascinating is morality. Here's a scenario:

It’s wartime, and you and some of your fellow villagers are hiding from enemy soldiers in a basement. You are holding a baby you found abandoned when you first entered the basement. The baby starts to cry, and you cover the baby’s mouth to block the sound. If you remove your hand the baby will cry, the soldiers will hear, and they will find you and the others and kill everyone they find, including you and the baby. If you do not remove your hand, the baby will smother to death. Do you smother the baby?

The question is hypothetical, but when people answer yes or no, different parts of their brain light up under fMRI. For people who say yes, a utilitarian, cost-benefit area is more activated (logic dictates that the baby dies either way). For people who say no, an area of the brain connected to relational empathy lights up (imagine the horror of having to kill). In other words, people have entirely different psychological experiences of, and likewise perceptions of, the situation, which then dictate their behavior.

Chris

Not so hypothetical.

Ponder this. Along the Wilderness Road, or Boone's Trail, in the eighteenth century, westward through Cumberland Gap up to Kentucky, many families and trail parties lost their lives in border and Indian warfare. Compare two episodes in which pioneers were pursued by savages. (1) A Scottish woman saw that her suckling baby, ill and crying, was betraying her and her three other children, and the whole company to the Indians. But she clung to her child, and they were caught and killed. (2) A Negro woman, seeing how her crying baby endangered another trail party, killed it with her own hands, to keep silence and reach the fort. Which one made the right decision?

[Cited from Joseph Fletcher, Situation Ethics (Philadelphia, 1966), p. 124]

Chris/Jeffrey,

Is there a "right" decision? What is your opinion about it in view of the examples provided?

Edited by Xray
