SCORECARD! Can't tell the players without a scorecard!



Getting back to the original topic of this thread. :)

Bob Campbell's and MSK's articles pretty much cover things.

What I have learned in the brief time since I came back to check on the state of the O'ist world is that there are broadly two groups: the 'orthodoxy', who are by and large aligned with Peikoff and ARI, and everyone else. In the 'everyone else' camp are a wide range of different groups. I would say they can usually be divided into two main groups: those who hate the Brandens and those who don't.

Few people move from the 'everyone else' camp to the orthodoxy. Those who do can only do so by denouncing/denying prior people and positions (including denouncing the Brandens). It's more usual to move the other way, from the orthodoxy to the 'everyone else' camp. You just have to be excommunicated.

I thought a chart might be good for showing the people and groups and their alignment. The only problem is that such a chart wouldn't be static. Just in the last year I've noted certain people being denounced by the orthodoxy.


That gives me a convenient opening for asking Daniel Barnes a question I've wanted to ask him: Am I correct in understanding that Popper argues that, no, we do NOT learn by induction? (Near as I've thus far understood his comments against Hume's view on the psychological issue of induction, this is what he's arguing -- I'm doing some heavy mulling over on that issue, since I've always hated the description "trial and error," given my Behaviorist college education, but Popper is leading me to think of "trial and error" in a different way than I ever have before -- and I'm doubting that "our brains and nervous systems" do work by "induction.")

Ellen

___

How else does one formulate rules and strategies for dealing with situations not yet encountered and for guiding actions not yet taken? We hominids are trial and error learning machines. It has been so at least as far back as when our hairy ancestors figured out ways of making flint tools as needed. I seriously doubt that our worthy ancestor deduced the flint blade a priori.

Ba'al Chatzaf


.....

Thus the difference between the mind and the body is one of function, not of essential nature or organization.

.....

Michael

Are minds made of atoms? If not, how are they just like other stuff? If so, what is their chemical composition?

Ba'al Chatzaf


Bob,

We already know that brains are made of atoms and that the nature of consciousness is still under investigation. It is too early to say what it is with absolute precision. In the end, it just might be made of atoms (although I agree that this is not likely). You certainly will not find any scientific consensus on the fundamental nature of consciousness. (Or religious consensus, for that matter.)

btw - As everyone knows, atoms are made of even smaller particles. Think of what that implies for the brain. I predict that even more little suckers will be discovered over time. I consulted my Ouija board yesterday and I was assured of it.

:)

Michael


MSK,

Sorry about replowing ground that's already been plowed here.

There are too many interesting threads on this site for me to be able to contribute to more than a few at a time. Either that, or I have too many opinions and too few resources with which to express them :(

What's more, if we just stick to epistemological problems somewhere in the vicinity of induction, there are a whole bunch of different ones. Limited working memory capacity, the boundary (if any) between perceptual and conceptual knowledge, causes of error, the difference between heuristics and algorithms, the relationship between logic, the functioning of the mind, and characteristics of the environment, etc. etc. etc. etc. have all been stated or alluded to, somewhere on this one thread...

Let's take what I think is the core of your position here:

In the classic example: "I have observed several white swans, therefore all swans are white," is a misuse of induction. The correct use is: "I have observed several white swans, therefore white swans exist as a category of reality." This implies that other white swans exist and rests on an axiom (one I have not seen anywhere in my reading, yet) that if two or more existents are observed and identified as a group, other unobserved members of that group exist. Science actually rests on this axiom in addition to deduction.

Obviously induction can be used to speculate, so the classic swan problem qua speculation is not a misuse of induction. But I am using induction here to mean a form of reasoning (or logic) for identification, i.e., for gaining knowledge. As a form of using induction for gaining knowledge qua knowledge, the classic problem as stated is a misuse of induction.

Finding a black swan does not disprove the knowledge of white swans gained by induction, because finding a black swan does not contradict the validity of the category. It only contradicts a proposition of closing the category off to new knowledge of the type of entity (closing swans off to any other color than white). In fact, a black swan causes the category of swan to be divided into two subcategories: white swans and black swans (more precisely, at that point, the categories are white swans and at least one black swan, and this will only become "white swans and black swans" when more than one black swan is observed).

When this kind of reasoning is applied to the is/ought problem (along with deduction), it becomes very easy to derive ought from is. One does not close the categories involved. One merely makes a statement about the categories that have already been identified.

In fact, there can be no deduction without categories. Induction is nothing more than volitional concept formation.

The problem is that you don't know in advance whether to form a swan concept or a white swan concept. There are tradeoffs, depending on which one you form.

As long as you get evidence of more and more white swans, it could be either, because you have no evidence of differences in color among birds that are otherwise the same shape, swim and fly pretty much the same way, lay similar eggs (if female), etc. Neither (I'm supposing) do you know much about the mechanisms of heredity, which are responsible for what color feathers these birds can have under normal environmental conditions.

What's going to keep you from overgeneralizing your swan concept with regard to color, if all of your data pertain to white swans and you don't know either a reason why birds that are otherwise very much like white swans can be a different color, or what color they could be if they weren't white?

So, biologists based in Europe formed a swan concept that subsumed an erroneous generalization, namely, that all swans are white. Encounters with black swans in Australia and black and white swans in South America promptly convinced them to revise their swan concept.

If they'd "stuck closer to the data" and all that, European biologists would have done what you recommended, and formed a white swan concept. But then, on encountering the relevant birds in South America and Australia, they'd have had to form a new black swan or black and white swan concept, and then realize that nearly all of the generalizations that they could correctly make using their white swan concept could be correctly made using an overall swan concept.

Either way, they had to change their concept in the face of new data. Which means, in turn, that their old one wasn't completely adequate.

Meanwhile, native Australians could have formed a black swan concept instead of a swan concept, but on encountering white and black-and-white swans, they'd have had to subsume their black swan concept--or correct their swan concept. Either way, they'd have had to fix something.
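A toy sketch of the revision-either-way point (my construction, not Campbell's; the data and field names are invented): whichever concept the European biologists start with, the Australian datum forces a change somewhere.

    # Editor's sketch: whichever concept you start with, the new observation
    # forces a revision of the old conceptual scheme.
    european_swan = {"name": "swan", "colors": {"white"}}   # color over-generalized into the concept

    def fits(color, concept):
        return color in concept["colors"]

    new_datum = "black"                                     # the Australian encounter
    if not fits(new_datum, european_swan):
        # Option 1: widen the existing concept so color is no longer treated as defining.
        european_swan["colors"].add("black")
        # Option 2 (not shown): keep "white swan" narrow and coin a separate "black swan"
        # concept, then notice that nearly every generalization about the one also holds
        # of the other under a broader "swan" concept.
        # Either way, the original scheme could not be left exactly as it was.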

When you make any interesting generalization, you run the risk of error. And as you learn more, you have to correct your old generalizations, sometimes even toss them out.

Robert Campbell


When you make any interesting generalization, you run the risk of error. And as you learn more, you have to correct your old generalizations, sometimes even toss them out.

Robert Campbell

Proving that inductive generalization is valid except when it is not valid.

Will you forgive me for being underwhelmed by all of this?

Ba'al Chatzaf


Bob K,

Help me out here.

Are you underwhelmed because you think you and I agree on something?

Or because you think we don't?

Don't worry, there's no shortage of things that we don't agree about. :)

What I was trying to say to MSK is that the goal of rational conceptual change without a lick of error along the way--indeed, without occasional wholesale rejection of old concepts--is unattainable.

Rand wasn't the only one who thought that conceptual change could be error-free, if only guided by philosophically correct rules and principles. But her claims in ITOE are the point of reference in this discussion.

I'm saying she was wrong.

And in my post up-thread, I agreed with you that there are no valid argument forms in inductive logic. I.e., INDUCTIVE GENERALIZATION IS NOT LOGICALLY VALID.

Robert Campbell

PS. I believe on some level Rand, despite all of her disclaimers about honest error and human beings not being omniscient or infallible, longed for epistemological procedures that would rule out all error, comprehensively, once and for all--except the kind that stems from evil or dishonesty. Leonard Peikoff pretty much gave the game away, in his infamous article on "Fact and Value."

There aren't any such procedures. There aren't going to be. What, in fact, would it mean for a goal-directed system or a living organism to be incapable of error?


Robert,

I do not see where the discovery of a black swan invalidates the category of white swan. It only invalidates the proposition that ALL swans are white. I don't know who said that, but it was obviously stretching matters to consider color a defining characteristic (genus, differentia and all that).

Did people actually claim that all swans are white as a matter of being logically proven, or was this something started by Hume? Even from your description of European biologists, they appear to have merely included an obvious observation in their classification because most birds are not one color only. Does anyone think that any of the biologists back then would have stated that the existence of a black swan was an impossibility? That doesn't sound like any biologist I have ever read.

As I understand it, a higher category is "swan," anyway. There is no way to make the categories of "white swan" and "black swan" and "two-tone swans" without "swan" in the first place.

This leads to a very interesting question. How do people who use this argument know they are dealing with a black swan if all swans are supposed to be white as a defining characteristic? Doesn't that in itself imply that the essential characteristic of "swan" is something other than color? When one says in the famous proposition "All swans are white," are they not already talking about "swan" as a category irrespective of color?

There is one proposition that is absolutely true. Up to the moment a black swan was observed, all swans that had been observed by the people involved had been white. This is what Peikoff calls a contextual absolute (which he defines as "an immutable truth within the specified context," OPAR, p. 175).

Since induction is based on observation, all induction contains the amount of observation made as one of its essential elements, i.e., its context. No observation, no induction.

There is a law of nature on which all science rests (stating my observations from the previous post in different words):

Everything in reality can be classified.

Then there is another law:

The human mind can classify all things that are observed.

Classification, however, implies including unobserved units. This is based on the above ontological concept: everything in reality can be classified.

I don't know how to prove that, but I know I am on to something.

I just thought of the following (and I remember reading about it in ITOE regarding proper names). There is a fundamental difference between a name for a single existent (or specific number of existents) and a name for a category. A category (concept) is made to be open-ended and include more units and new information (the inevitable result of more observation) without changing the category (sort of like a file folder). A name for a specific existent (or something like "that desk" while pointing at it, since it is already identified by being in a category) never changes to include new units or new information, unless the information is in regard to that single item. (This also holds for specific multiple items.)
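A small sketch of the "file folder" picture (my illustration; the names and fields are invented, not taken from ITOE): a concept keeps accepting new units and new information without changing what it is, while a proper name stays tied to a single referent.

    # Editor's sketch of an open-ended concept ("file folder") versus a proper name.
    swan_concept = {
        "label": "swan",
        "known_units": [],        # grows indefinitely as observation continues
        "known_colors": set(),    # new information is filed without renaming the folder
    }

    def observe(concept, individual, color):
        concept["known_units"].append(individual)
        concept["known_colors"].add(color)

    observe(swan_concept, "swan #1", "white")
    observe(swan_concept, "swan #2", "black")   # the folder widens; the concept is still "swan"

    that_desk = "the desk I am pointing at"     # a proper name: one referent, no new units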

Once we get to cloning, though, this starts getting blurred.

Michael


Robert,

I do not see where the discovery of a black swan invalidates the category of white swan. It only invalidates the proposition that ALL swans are white. I don't know who said that, but it was obviously stretching matters to consider color a defining characteristic (genus, differentia and all that).

Michael

The existence of a non-white swan negates the universally quantified statement that ALL swans are white. It does not make the class of white swans disappear. There are white swans and non-white swans. These are non-empty sets. Somehow you have not grasped what a universally quantified proposition is. This is why I keep telling you to buy a book on logic (real logic please, not Objectivist $logic). Learn what quantification is. Learn what a set is. There are books that can teach you the basics without running you through a mathematical meat grinder. The heavy math is for the specialists and the technicians. What you need are the basics.

Get Copi's textbook on logic. It is a middle-brow text that is sufficient for conveying the basics to you. It also covers classical logic, the kind that Aristotle formulated. There is even a simpler book (no insult intended - honest!): -Logic for Dummies-. It may be sufficient for you to grasp the basics.

For technical types and professionals deeper study is required. Since you do not plan (I assume) to make a career out of mathematical logic, you need not take this route. I, on the other hand, did. Before I became a grad school dropout I was planning to get a thesis topic in the field of mathematical logic, in particular the field of computable functions. Instead I dropped out and made a career in applied mathematics and computer software. Frankly, I was getting disgusted with the academic environment and wanted to get to the real world of real work for real pay. So that is what I did. I did not have the temperament or the discretion to last in an academic setting. My urge to tell the Emperor that he is Bare Ass would have been my undoing. So I did software and mathematical applications, some of which I will tell you about some other time. In the world of software, the only thing that matters is bug-free software delivered on time and within budget.

Ba'al Chatzaf


The existence of a non-white swan negates the universally quantified statement that ALL swans are white.

Bob,

Who has ever made that statement or affirmation?

Former biologists?

Or did it all start with Hume?

If it started with Hume, then it was a straw-man argument (or straw-swan argument as the case may be).

Michael


The existence of a non-white swan negates the universally quantified statement that ALL swans are white.

Bob,

Who has ever made that statement or affirmation?

Former biologists?

Or did it all start with Hume?

If it started with Hume, then it was a straw-man argument (or straw-swan argument as the case may be).

Michael

Predicate logic (also known as first-order logic) 101. The negation of FORALL x P(x) is THEREXISTS x such that -P(x), where the "-" indicates logical negation. This is basic stuff. The FORALL and THEREXISTS are -quantifiers-. The P is a predicate or property. P(x) means x has the property P. Teenagers learn this in school.
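A quick illustrative sketch (not part of Ba'al's post; the observations are made up): over a finite domain the quantifier duality can be checked with Python's built-in all() and any().

    # Over a finite domain, "not FORALL x P(x)" coincides with "THEREXISTS x, not P(x)".
    swans = ["white", "white", "white", "black"]   # hypothetical observations
    P = lambda color: color == "white"             # the predicate "x is white"

    assert (not all(P(x) for x in swans)) == any(not P(x) for x in swans)
    # The single black swan makes both sides True: the universal claim is negated,
    # but the non-empty class of white swans is untouched.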

Ba'al Chatzaf


Bob,

I already know what a "universally quantified statement" is. For example, let's look at your own statement and break it down into two:

There are white swans...

This is based on a universally quantified statement (all white swans are swans). And the continuation:

... and non-white swans.

This is also based on a universally quantified statement (all non-white swans are swans).

No exceptions.

Michael

EDIT: If you don't like those because of circularity, here are a couple more (and I could go on all day):

All white swans are birds.

All non-white swans are birds.

Universally quantified and no exceptions. These statements are true for every single white and non-white swan observed in the past and present, and they include all to be observed in the future.


The existence of a non-white swan negates the universally quantified statement that ALL swans are white.

Bob,

Who has ever made that statement or affirmation?

Former biologists?

Or did it all start with Hume?

If it started with Hume, then it was a straw-man argument (or straw-swan argument as the case may be).

Michael

Predicate logic (also known as first-order logic) 101. The negation of FORALL x P(x) is THEREXISTS x such that -P(x), where the "-" indicates logical negation. This is basic stuff. The FORALL and THEREXISTS are -quantifiers-. The P is a predicate or property. P(x) means x has the property P. Teenagers learn this in school.

I find it very difficult to explain these things to you because you lack the basics. Fortunately these can be acquired with minimal effort. May I suggest you buy or borrow -Logic for Dummies-. Part IV, page 223. No joke, no insult intended. You can get it used on amazon.com very cheap or at your nearest branch library.

Ba'al Chatzaf


Who has ever made that statement or affirmation?

Predicate logic (also known as first-order logic) 101. The negation of FORALL x P(x) is THEREXISTS x such that -P(x), where the "-" indicates logical negation. This is basic stuff. The FORALL and THEREXISTS are -quantifiers-. The P is a predicate or property. P(x) means x has the property P. Teenagers learn this in school.

All that is a who?

Michael


When little babies learn what is what and what is where and what is when, they do it by induction. It is as natural to humans as breathing in and out. It is the way our brains and nervous systems work.

That gives me a convenient opening for asking Daniel Barnes a question I've wanted to ask him: Am I correct in understanding that Popper argues that, no, we do NOT learn by induction? (Near as I've thus far understood his comments against Hume's view on the psychological issue of induction, this is what he's arguing -- I'm doing some heavy mulling over on that issue, since I've always hated the description "trial and error," given my Behaviorist college education, but Popper is leading me to think of "trial and error" in a different way than I ever have before -- and I'm doubting that "our brains and nervous systems" do work by "induction.")

Ellen

___

Bob K., you reply:

How else does one formulate rules and strategies for dealing with situations not yet encountered and for guiding actions not yet taken? We hominids are trial and error learning machines. It has been so at least as far back as when our hairy ancestors figured out ways of making flint tools as needed. I seriously doubt that our worthy ancestor deduced the flint blade a priori.

You didn't follow the query I'm raising. You're apparently equating learning by induction with learning by trial-and-error. But Popper, I gather, doesn't equate these. Instead, he argues that we don't learn by induction; instead, by trial-and-error forming and testing of theories.

I await: (1) Further input from Daniel; (2) Further reading of Popper. (I've ordered Objective Knowledge and received notice from Amazon that I can expect its delivery Monday.)

Ellen

___


You didn't follow the query I'm raising. You're apparently equating learning by induction with learning by trial-and-error. But Popper, I gather, doesn't equate these. Instead, he argues that we don't learn by induction; instead, by trial-and-error forming and testing of theories.

I await: (1) Further input from Daniel; (2) Further reading of Popper. (I've ordered Objective Knowledge and received notice from Amazon that I can expect its delivery Monday.)

Ellen

___

While you are waiting for your book, read this piece by Popper himself on the matter of Induction:

http://dieoff.org/page126.htm

The article pretty well states Popper's position on Induction.

Ba'al Chatzaf


While you are waiting for your book, read this piece by Popper himself on the matter of Induction:

http://dieoff.org/page126.htm

The article pretty well states Popper's position on Induction.

Ba'al Chatzaf

Thanks for the link.

Here, succinctly expressed, is a paragraph saying just what I thought he thought:

The principle of empiricism (3) can be fully preserved, since the fate of a theory, its acceptance or rejection, is decided by observation and experiment - by the results of tests. So long as a theory stands up to the severest tests we can design, it is accepted; if it does not, it is rejected. But it is never inferred, in any sense, from the empirical evidence. There is neither a psychological nor a logical induction. [my emphasis] Only the falsity of the theory can be inferred from empirical evidence and this inference is a purely deductive one.

Thus he's disagreeing with Hume's belief (and as I understand some remarks of yours, with your belief) that a psychological induction is how we learn.

He goes on to spell out plainly both his agreement and disagreement with Hume:

I hold with Hume that there simply is no such logical entity as an inductive inference; or, that all so-called inductive inferences are logically invalid - and even inductively invalid, to put it more sharply [see the end of this selection]. We have many examples of deductively valid inferences, and even some partial criteria of deductive validity; but no example of an inductively valid inference exists. And I hold, incidentally, that this result can be found in Hume, even though Hume, at the same time, and in sharp contrast to myself, believed in the psychological power of induction; not as a valid procedure, but as a procedure which animals and men successfully make use of, as a matter of fact and of biological necessity.

I take it as an important task to make clear, even at the cost of some repetition, where I agree and where I disagree with Hume.

I agree with Hume's opinion that induction is invalid and in no sense justified. Consequently neither Hume nor I can accept the traditional formulations which uncritically ask for the justification of induction; such a request is uncritical because it is blind to the possibility that induction is invalid in every sense, and therefore unjustifiable.

I disagree with Hume's opinion (the opinion incidentally of almost all philosophers) that induction is a fact and in any case needed. I hold that neither animals nor men use any procedure like induction, or any argument based on the repetition of instances. The belief that we use induction is simply a mistake. It is a kind of optical illusion.

What we do use is a method of trial and the examination of error; however misleadingly this method may look like induction, its logical structure, if we examine it closely, totally differs from that of induction. Moreover, it is a method which does not give rise to any of the difficulties connected with the problem of induction.
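The "purely deductive" inference Popper refers to in the first paragraph quoted above is just modus tollens. Schematically (the notation and letters are mine, with S for "is a swan", W for "is white", and a for the observed bird; a gloss, not a quotation from the article):

    \[
    T \rightarrow O,\;\; \lnot O \;\vdash\; \lnot T
    \qquad\text{e.g.}\qquad
    \forall x\,(Sx \rightarrow Wx),\;\; Sa \land \lnot Wa \;\vdash\; \lnot\,\forall x\,(Sx \rightarrow Wx)
    \]

From the universal theory one deduces Sa -> Wa; the observation report supplies Sa and not-Wa; the contradiction deductively refutes the theory. No inference runs the other way, from the observations to the theory.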

Ellen

___


(I don't know where this goes, so I have this posted here and at the other thread also.)

=================================================

But why why why?

If we look at the Popperian assertion regarding the single counterexample, why is this a falsification? It seems to ignore some basics that are not stated, but which, if they became the focus, might change some definitions completely.

1) Human logic must be defined in terms of human capabilities. A definition of logic that leaves out human capabilities is an oxymoron.

2) Every entity that exists, having a physical being, is finite. There is nothing that has been observed (that I know of) that is infinitely large. (Note: the "Universe" is a reified construct and cannot be adequately defined in relation to any other entity outside itself, except by theists.)

3) As finite human beings, we lack the capacity to perceive infinite entities using our finite sense data. We lack the capacity to observe something infinite in scope.

4) Our set of empirical observations (inferential knowledge) is by nature thus limited in scope and quantity, and the realm of these observations is defined and delimited by how we focus our attention. (This is the side issue of volitional consciousness.)

5) Every entity exists in time and space, and when we focus on an entity, we focus on it in a specific defined place or realm (context).

6) Within every set there is a finite number of instances of finite entities within the defined realm.

7) If there is an instance of something (similar to the set members) outside of the defined realm, it is irrelevant. The rational mind cannot focus on two different defined realms simultaneously.

8) If there is an instance of something (within the defined realm) that is outside of the set, it is outside the set; it may be used as a delimiter of the set definition, but according to the Popperian definition, it must invalidate or falsify the set. I don't see this as a valid inference.

9) Inferential knowledge is thus based in statistical probability, which I have referred to elsewhere as "the Confidence Factor". Never 100%, but it's what we've got.
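One common way to put a number on such a "Confidence Factor" is Laplace's rule of succession; the post does not commit to any particular formula, so treat this as a possible sketch rather than the author's method. After a run of uniformly white observations the estimate approaches, but never reaches, 100%.

    # Laplace's rule of succession as one candidate "Confidence Factor":
    # estimated probability that the next observed swan is white, given the tallies so far.
    def confidence_next_is_white(white_seen, nonwhite_seen):
        return (white_seen + 1) / (white_seen + nonwhite_seen + 2)

    print(confidence_next_is_white(100, 0))   # ~0.990 -- high, but never 100%
    print(confidence_next_is_white(100, 1))   # ~0.981 -- one black swan lowers it without zeroing it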

What I am calling a "set" is what elsewhere is called a "concept", as formed within the basic S-I-R paradigm (with appropriate feedback loops). Sets of percepts are delimited empirically, and this is entirely a "psychological" process. The so-called "logical induction" of perfect concepts is an artist's dream, nothing more, and should be discussed under a separate subject (if not dismissed completely) of "Logic as an Art", instead of as a science.

Whereas inductive generalization is the process of locating, in similar realms, duplicate or parallel sets of entities. But nothing goes on forever, additional realms are not necessarily identical, and so these derived generalizations tend to undergo their own peculiar "entropy" over time. (They "fall apart", usually through some ill-informed attempt at exegetical application.) But at least at first, in their generalities, they approximate reality well enough to be able to use them to succeed in some other goal-directed behaviour. This provides justification for continuing use of the "concept". ("Don't Argue with Success!") Until the defined realm is sufficiently different that the "concept" doesn't work any more; it is then maintained as an irrational "falsehood" in the face of "reality"; or else it is abandoned and replaced by a new organized set of percepts, derived from a new defined realm being brought into focus.

And no, AR did not address this topic in full detail.


RC says:

INDUCTIVE GENERALIZATION IS NOT LOGICALLY VALID

~ Given that word 'generalization', that's definitionally true; no 'generalization' is.

~ The prob question is: is induction nothing MORE than mere generalizing? --- Let's not do blue swans counterfacted by an orange one here; some observations ARE over-generalized, ergo, NOT 'induced.' Indeed, I think some are falsely 'over-generalizing' the process of generalizing as being synonymous with that of induction.

~ At any rate, if it is, then the statement that "Deduction is always free of error-introduction" is not an induction. In that case, it must be a deduction (question: from what?), or, it's an observation of...? --- As an aside, I do not see this statement as 'analytic' or 'contingent', for those whose framework of thinking is so inclined, but, I'm open to comments on even that.

LLAP

J:D


John,

I'm pretty sure that Mill will put in an appearance.

Mill's methods were covered in Leonard Peikoff's old logic course. They are also mentioned in the Peikoff-Harriman treatment of induction.

Robert Campbell


RC:

~ Thanx on that response. Haven't kept up on all of LP's lecture-tapes (I think I need to hit on Gates for a loan for doing that.)

~ A while ago (ok; quite a while) I caught (guess it was his 1st) a tape-course of his on Logic, but, it didn't really cover Induction much; some aspects were mentioned though. I remember him covering the question "How many occurrences are necessary?" with the answer "Oh, 3 or 4", but it wasn't really done 'in depth'. And I'm sure Mill wasn't mentioned. The course primarily covered the standard classical logic stuff (deductions, mediate and im-, syllogisms, fallacies, etc)...standardly. --- The Q&A's were more interesting, but, expectedly irrelevant to the subject.

LLAP

J:D


Bob K...in my post up-thread, I agreed with you that there are no valid argument forms in inductive logic. I.e., INDUCTIVE GENERALIZATION IS NOT LOGICALLY VALID.

Huh? Are you simply saying that induction is not deduction, and that induction is not deductively valid? If so, Peikoff agrees with this, and he points out in his IPP lectures that one of the biggest mistakes theorists have made in trying to validate induction is to try to reduce it to a form of deductive logic.

But logic in general refers to the ~instruments~ or ~methods~ of human thought, and concepts, propositions, and arguments are such instruments/methods. Induction is an extended form of concept-application by which we form more general propositions and argue for more general conclusions. Without induction, our deductive conclusions are groundless, for all the "logical validity" they might have.

And induction ~does~ have a set of procedures that are just as clear-cut and rigorous as the procedures of deductive logic -- and if conscientiously pursued, they will lead just as reliably to truth/factual conclusions as will deductive logic properly employed.

Peikoff points out that inductive logic is self-correcting, if one rigorously, conscientiously applies the essential method of induction to one's thinking. You can't ask for much more than that. In that respect, it's even more reliable than deductive logic.

REB

