Blame David Hume


BaalChatzaf


So, are you defending Popper's view or not? If so, then you cannot assign any probability other than 0.5 to the event of the Sun rising tomorrow or to an egg breaking when dropped on concrete. You must assume complete ignorance because past experience has "no bearing" on expectations about the future.

Well firstly Darrell, to be honest, from your comments here it doesn't sound like you fully understand the argument you're attempting to criticise. (For example, Popper would not assign a 0.5 probability to the sun rising tomorrow based on X no. of previous observations. He would say that no such probability is assignable. And so does Hume.)

I thought you might object to my probability assignment. However, in some sense, it is unavoidable. A probability assignment may be thought of as representing a person's state of knowledge. If a person doesn't know anything about an experiment, other than that there are two possible outcomes, it is logical to assign a probability of 0.5 to each outcome.

Consider the Monty Hall problem. The contestant is presented with three curtains, one of which conceals a valuable prize and the other two of which conceal booby prizes. The contestant doesn't have any information about which curtain conceals the valuable prize. Therefore, from the standpoint of the contestant, the probability of the valuable prize being behind any particular curtain is one third. That follows from the fact that the contestant doesn't know anything about what the curtains conceal. From the standpoint of the host (Monty Hall) however, the probability that the valuable prize is behind one particular curtain is equal to one while the probabilities for the other two curtains are zero. That follows from the fact that Monty Hall knows where the valuable prize is located.

The above probability assignment is not arbitrary. If a subject, knowing nothing about what is concealed behind three curtains, is asked to choose in repeated experiments, he will guess right about one third of the time. So, a probability assignment of one third represents the actual state of his knowledge.
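As a quick illustrative sketch (in Python; the curtain and trial counts are arbitrary, not from the post), a blind guesser over three curtains lands on the prize about one third of the time:

```python
import random

def blind_guess_rate(num_curtains=3, trials=100_000, seed=0):
    """Simulate a contestant with no knowledge of which curtain hides the prize."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        prize = rng.randrange(num_curtains)  # where the prize actually is
        guess = rng.randrange(num_curtains)  # a guess made in complete ignorance
        hits += (guess == prize)
    return hits / trials

print(blind_guess_rate())  # roughly 0.333, matching the one-third assignment
```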

Now recall that Popper and Hume state that past experience has "no bearing" on future outcomes (a clear contradiction of probability theory). That implies a condition of complete ignorance and demands a probability assignment of 0.5 to each possible outcome of a binary experiment.

BTW, I am quite aware of the nature of the problem. I have struggled with it for many years, starting in graduate school. I was struck by the illogical use of probability by AI theorists. They assumed that nothing was known for certain (likely as a result of the influence of people such as Hume and Popper) and that, therefore, one must assign probabilities to all events. However, I quickly realized that if nothing is known with certainty, then it is impossible to know the probability of anything either (and to Hume's and Popper's credit, they are at least consistent on this point).

One possible conclusion is that knowledge is impossible. Another possible approach is to try to understand how it is possible to know some things with certainty. If you accept the first conclusion, you are still left with the problem of trying to explain how it is possible for people to function in the world. So, throwing up your hands in disgust or despair is no solution. It just places you back at square one. I have taken the latter tack.

Darrell


[Popper]:"The logical problem: Are we rationally justified in reasoning from repeated instances of which we have had experience to instances of which we have had no experience?"

Actually, here seems to be the problem. You need to focus on the first three words of this quote.

And your point is ... ?


They don't have to be anything.

Brant,

They sure do. They have to represent mental units going back to "one."

I object to the insinuation that our mind is one thing and reality is something else, that our mind can contain a "nothing" it can work with. Nothing is nothing. Period. It is not a kind of fact in our minds. We are part of reality, not separate from it.

Abstractions exist as abstractions, i.e., as things that go on in your mind. Just because an abstraction is not the same thing as its referent, this does not mean that it does not exist in its own right (as a mental operation or memory). Otherwise you could not have abstractions of abstractions, which are treated as referents. This is one of the fundaments of Objectivist epistemology.

Michael


I thought you might object to my probability assignment. However, in some sense, it is unavoidable. A probability assignment may be thought of as representing a person's state of knowledge. If a person doesn't know anything about an experiment, other than that there are two possible outcomes, it is logical to assign a probability of 0.5 to each outcome.

The thing is that this has nothing to do with induction. This is a deductive argument. You've set the terms in advance (two possible outcomes); thus there is a 0.5 probability. This is true no matter how many times you've observed the sun rising. It's true even if you'd never seen the sun rise at all.

Consider the Monty Hall problem.

There is nothing inductive about the Monty Hall problem either. Once again it is deductive - that is, the probability is not based on X number of past experiences of "Let's Make A Deal"!! In fact it's true even if you've never experienced it.

Now recall that Popper and Hume state that past experience has "no bearing" on future outcomes (a clear contradiction of probability theory). That implies a condition of complete ignorance and demands a probability assignment of 0.5 to each possible outcome of a binary experiment.

Yes, Popper and Hume and you and anyone else would all say that the deductive result would be 0.5. It's the inductive result that is the unassignable one.

BTW, I am quite aware of the nature of the problem. I have struggled with it for many years, starting in graduate school. I was struck by the illogical use of probability by AI theorists. They assumed that nothing was known for certain (likely as a result of the influence of people such as Hume and Popper) and that, therefore, one must assign probabilities to all events. However, I quickly realized that if nothing is known with certainty, then it is impossible to know the probability of anything either (and to Hume's and Popper's credit, they are at least consistent on this point).

Yes.

One possible conclusion is that knowledge is impossible.

Well, only absolutely certain knowledge. Provisional, hypothetical...hey, even open-ended knowledge...now that's a different thing again. Think doxa, rather than episteme.

Another possible approach is to try to understand how it is possible to know some things with certainty. If you accept the first conclusion, you are still left with the problem of trying to explain how it is possible for people to function in the world.

I think your basic assumption (and Rand's, and that of many other philosophers) is faulty. Why is "absolutely certain" knowledge necessary for people to function? Why can't a series of rough, approximate theories, gradually improved over time, work? Who knows? If you are lucky enough, you might hit on the truth (even if you could never inductively confirm that it was the truth). So truth is still possible, even if unlikely.

So, throwing up your hands in disgust or despair is no solution. It just places you back at square one. I have taken the latter tack.

Well, good for you. It is a prominent philosophical problem, and a noble aim to wrestle with. But the fact that you end up repeatedly converting inductive problems into deductive ones in order to come up with a solution suggests that square one is still where you are at.

Edited by Daniel Barnes

The thing is that this has nothing to do with induction. This is a deductive argument. You've set the terms in advance (two possible outcomes); thus there is a 0.5 probability. This is true no matter how many times you've observed the sun rising. It's true even if you'd never seen the sun rise at all.
Consider the Monty Hall problem.

There is nothing inductive about the Monty Hall problem either. Once again it is deductive - that is, the probability is not based on X number of past experiences of "Let's Make A Deal"!! In fact it's true even if you've never experienced it.

Daniel, perhaps you're not familiar with Bayesian probability theory? In that scenario, prior knowledge of outcomes does indeed affect the probability of future outcomes, the way I understand it.


Daniel, perhaps you're not familiar with Bayesian probability theory? In that scenario, prior knowledge of outcomes does indeed affect the probability of future outcomes, the way I understand it.

I am no expert in this area, and the issue is complex to say the least (there are two schools of Bayesian thought for starters, subjective and objective). Also Popper and Bayes have some strong similarities as well as differences, so they do not clash entirely. At any rate, Bayesian modelling depends on so-called "prior probabilities" (this is the "prior knowledge" you refer to, right?) But how does one securely establish this "prior probability"? By earlier statistical studies, which of course are deductive. Thus it does not seem to avoid the original problem.


They don't have to be anything.

Brant,

They sure do. They have to represent mental units going back to "one."

As such "mental units" are arbitrary constructs. In this case they only have to be individuated. I see them as empty two-dimensional square boxes represented by four lines. What is in your mind doesn't have to exist out there. What makes mathematical statements true (?) is the logical structure to a conclusion with everything in the brain. This does strike me as circular and self-referential though not necessarily fallacious. That is, they aren't mathematical statements if they are illogical and the units aren't immutable.

--Brant

Edited by Brant Gaede

You need to understand what someone is saying and actually verify what they need before telling them what they need.

Did you do this with me? :)

GS,

Yep.

On purpose.

Rhetorical emphasis by presenting an example and not talking about it. (There must be a technical name for this, but I don't know it. Artists use this method all the time.)

Michael

Isn't that "ostensive definition?"

--Mindy


I am no expert in this area, and the issue is complex to say the least (there are two schools of Bayesian thought for starters, subjective and objective). Also Popper and Bayes have some strong similarities as well as differences, so they do not clash entirely. At any rate, Bayesian modelling depends on so-called "prior probabilities" (this is the "prior knowledge" you refer to, right?) But how does one securely establish this "prior probability"? By earlier statistical studies, which of course are deductive. Thus it does not seem to avoid the original problem.

Take, for example, the question of whether or not the sun will rise tomorrow: you would say that is highly likely, would you not? Specifically, a much higher probability than 50/50, which is the standard probability model. Given that the earth has been revolving for billions of years (we think), and according to all our models of planetary motion, we fully expect this to be the case tomorrow. I don't know how to compute this; maybe someone here does.
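One classical way to put a number on it, offered here only as a sketch, is Laplace's rule of succession: assume a uniform prior over the unknown chance of a sunrise and update on n consecutive observed sunrises, which gives (n+1)/(n+2) as the probability of one more. The sunrise count below is a rough illustrative figure, not a measured one.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule: probability the next trial succeeds, given `successes`
    out of `trials` so far and a uniform prior over the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

n = 1_600_000_000_000  # very rough count of past sunrises over ~4.5 billion years
print(float(rule_of_succession(n, n)))  # about 0.999999999999375, nowhere near 50/50
```

Whether such a prior is legitimately available in the first place is, of course, exactly what Hume and Popper dispute.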


Strictly speaking, according to Hume --- the logical conclusion of the Problem of Induction --- it is not merely the case that a particular egg might not break, if dropped, but that it might be the case that no egg will ever break again --- because eggs have no particular nature. Mysteriously, however, they can still be identified as eggs. A manifest contradiction.

As I wrote in another post, the basic problem with this whole line of argument is that it conflates two different things:

1) Reality

2) Our knowledge of reality

Hume's critique applies to 2), not 1). It doesn't entail that things "have no particular nature." You are quite mistaken. All it says is that from X previous experiences we are not entitled to draw a logically valid future prediction. This is a knowledge problem, not a reality problem. Thus there is no "manifest contradiction", and no conflict with the LOI.

If you wish to focus on the knowledge problem, that is fine with me. It will allow me to sharpen my attacks. Recall that I have never defended induction as a method of knowing. Instead, I have focused on understanding through deduction.

Let us return to the pool ball problem. It is clear that one pool ball causes another to move when it strikes it due to its speed, mass, and shape and the mass and shape of the previously stationary ball. Moreover, an observer can see that nothing else caused the movement of the second ball. Therefore, the observer can conclude, by process of elimination, that the first pool ball caused the second to move. He may therefore also conclude that if, in the future, one pool ball strikes another in a similar fashion, the second will move. That follows from his understanding of the nature of the pool balls, their mass, speed, and shape, and of the other prevailing conditions. If those conditions are repeated in the future, then the same outcome is to be expected.

The observer's conclusions are not based upon induction. The number of repetitions of the experiment is immaterial to his conclusion. In this case he is assumed to already know something about pool balls. That knowledge may have been gained through earlier experiments with pool balls. He may have held them and determined their mass. He may have observed them and noted their shape. And, he may be able to determine their velocity, to some level of accuracy, by observing them with his eyes. He may also hear the impact when one strikes another and notice that the previously stationary ball starts to move immediately after being struck.

If the observer has a hard time determining the exact cause of the movement of the second pool ball, that difficulty has nothing to do with the process of induction. If he wishes to repeat the experiment, it is not for the sake of repetition, it is in order to isolate, by process of elimination, the exact causes of the event observed. Once those causes have been determined, the observer is logically justified in concluding that in all future experiments in which the essential conditions are the same or sufficiently similar, the same or similar results will be obtained.

Darrell


As such "mental units" are arbitrary constructs. In this case they only have to be individuated.

Brant,

If they "have to be individuated," then they are not arbitrary, are they?

Try this. If you think the mental units on which math is based are arbitrary, that means they can easily be replaced with something else. Right? Or that math does not depend on them. Right?

So I would be interested if you could explain to me how one can do math without the mental unit of one. You can replace it with anything you wish, or not use any mental unit at all, but I bet it won't work.

Isn't that "ostensive definition?"

Mindy,

LOL...

Maybe "ostensive rhetoric?"

:)

Michael


The observation that some things belong to the same category is not a process of induction. If it were, how could you determine that two objects were of the same kind? How could you determine that two birds were swans (or were even birds)?

This is covered in the beginning of ITOE. What we are concerned about with induction is not assigning things into a pre-existing category so much as identifying a category, and creating a category (i.e., a concept). You identify by noticing differences and similarities, then by differentiation and integration.

You don't need a teacher for this, either, as looking at any baby can prove. (But you can have a teacher like in your example. It's just not a requirement for noticing differences and similarities.)

That's the Objectivist version, at least.

Michael

I have nothing against the Objectivist version, as explained above, but that explanation has nothing to do with induction. I'd have to reread ITOE to know whether the concept of induction slips in somewhere.

Darrell


Darrell,

ITOE (2nd), Chapter 3, Abstraction from Abstractions, p. 28:

Thus the process of forming and applying concepts contains the essential pattern of two fundamental methods of cognition: induction and deduction.

The process of observing the facts of reality and of integrating them into concepts is, in essence, a process of induction. The process of subsuming new instances under a known concept is, in essence, a process of deduction.

Michael


As such "mental units" are arbitrary constructs. In this case they only have to be individuated.

Brant,

If they "have to be individuated," then they are not arbitrary, are they?

Try this. If you think the mental units on which math is based are arbitrary, that means they can easily be replaced with something else. Right? Or that math does not depend on them. Right?

So I would be interested if you could explain to me how one can do math without the mental unit of one. You can replace it with anything you wish, or not use any mental unit at all, but I bet it won't work.

They are arbitrarily individuated. :)

--Brant


I am no expert in this area, and the issue is complex to say the least (there are two schools of Bayesian thought for starters, subjective and objective). Also Popper and Bayes have some strong similarities as well as differences, so they do not clash entirely. At any rate, Bayesian modelling depends on so-called "prior probabilities" (this is the "prior knowledge" you refer to, right?) But how does one securely establish this "prior probability"? By earlier statistical studies, which of course are deductive. Thus it does not seem to avoid the original problem.

There is an excellent web page here about Bayesian reasoning - http://yudkowsky.net/bayes/bayes.html

Similarly, Popper's dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ~X would have disconfirmed the theory to some extent. If you try to interpret both X and ~X as "confirming" the theory, the Bayesian rules say this is impossible! To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory. On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
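The asymmetry described in that passage can be made concrete with Bayes' theorem. The numbers below are purely hypothetical: a theory that assigns probability 0.99 to result X gains only a little credence when X is observed, but loses most of its credence when ~X is observed.

```python
def posterior(prior: float, p_obs_given_theory: float, p_obs_given_alternative: float) -> float:
    """Bayes' theorem: P(theory | observation) after a single observation."""
    evidence = prior * p_obs_given_theory + (1 - prior) * p_obs_given_alternative
    return prior * p_obs_given_theory / evidence

prior = 0.5        # hypothetical starting credence in the theory
p_x_theory = 0.99  # the theory says X is almost certain
p_x_alt = 0.50     # without the theory, X is a coin flip

print(posterior(prior, p_x_theory, p_x_alt))          # X observed:  ~0.66 (mild confirmation)
print(posterior(prior, 1 - p_x_theory, 1 - p_x_alt))  # ~X observed: ~0.02 (near-falsification)
```

On these toy numbers, falsification shows up as a very strong but still probabilistic update, which is the point the quoted passage makes against Popper's stricter reading.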

I'd have to reread ITOE to know whether the concept of induction slips in somewhere.

In the next post, #164, MSK quotes the two-(short)-paragraph passage in which induction slips in:

ITOE (2nd), Chapter 3, Abstraction from Abstractions, p. 28:

Thus the process of forming and applying concepts contains the essential pattern of two fundamental methods of cognition: induction and deduction.

The process of observing the facts of reality and of integrating them into concepts is, in essence, a process of induction. The process of subsuming new instances under a known concept is, in essence, a process of deduction.

According to the Index, this passage contains the only reference either to "deduction" or "induction" in the entire text of ITOE. "Induction" is also referenced in the following pages of the Excerpts from the Epistemology Workshops: 295-304, 306. However, those passages aren't addressed to what she wrote on pg. 28, i.e., to why she said that concept formation and application "contains the essential pattern of [...] induction and deduction." Instead they're addressed to the issue of establishing a causal relationship. Furthermore, on page 303-04, she says, in response to questioning from "Prof. M":

ITOE (2nd), Appendix, p. 303-04:

Prof. M: The question is: when does one stop? When does one decide that enough confirming evidence exists? Is that in the province of the issue of induction?

AR: Yes. That's the big question of induction. Which I couldn't begin to discuss--because (a) I haven't worked on that subject enough to even begin to formulate it, and (b) it would take an accomplished scientist in a given field to illustrate the whole process in that field.

In regard to that answer, Daniel Barnes has commented numerous times (he might even have commented earlier in this thread) about her not seeing that the issue of induction is a logical issue.

My best guess in regard to her peculiar statement in the passage on pg. 28 is that she was talking in terms of an analogy and might have realized, if pressed, that what she wrote could mistakenly be taken literally.

Ellen

___

Edited by Ellen Stuttle

I'd have to reread ITOE to know whether the concept of induction slips in somewhere.

In the next post, #164, MSK quotes the two-(short)-paragraph passage in which induction slips in:

ITOE (2nd), Chapter 3, Abstraction from Abstractions, p. 28:

Thus the process of forming and applying concepts contains the essential pattern of two fundamental methods of cognition: induction and deduction.

The process of observing the facts of reality and of integrating them into concepts is, in essence, a process of induction. The process of subsuming new instances under a known concept is, in essence, a process of deduction.

According to the Index, this passage contains the only reference either to "deduction" or "induction" in the entire text of ITOE. "Induction" is also referenced in the following pages of the Excerpts from the Epistemology Workshops: 295-304, 306. However, those passages aren't addressed to what she wrote on pg. 28, i.e., to why she said that concept formation and application "contains the essential pattern of [...] induction and deduction." Instead they're addressed to the issue of establishing a causal relationship. Furthermore, on page 303-04, she says, in response to questioning from "Prof. M":

ITOE (2nd), Appendix, p. 303-04:

Prof. M: The question is: when does one stop? When does one decide that enough confirming evidence exists? Is that in the province of the issue of induction?

AR: Yes. That's the big question of induction. Which I couldn't begin to discuss--because (a) I haven't worked on that subject enough to even begin to formulate it, and (b) it would take an accomplished scientist in a given field to illustrate the whole process in that field.

In regard to that answer, Daniel Barnes has commented numerous times (he might even have commented earlier in this thread) about her not seeing that the issue of induction is a logical issue.

My best guess in regard to her peculiar statement in the passage on pg. 28 is that she was talking in terms of an analogy and might have realized, if pressed, that what she wrote could mistakenly be taken literally.

Ellen

___

I don't understand why concept-formation, or at least the whole process taken to its conclusion in forming a definition of a new concept, isn't easily understood as being inductive. It integrates multiple instances and is "bottom-up" and it creates the categorical knowledge that is then applied inductively.

--Mindy


I don't understand why concept-formation, or at least the whole process taken to its conclusion in forming a definition of a new concept, isn't easily understood as being inductive. It integrates multiple instances and is "bottom-up" and it creates the categorical knowledge that is then applied inductively.

--Mindy

Umm...you mean "applied deductively," right?

Isn't concept-formation inductive and concept-application deductive? We generalize, and then we apply those generalizations to specific cases, right? And those processes are, respectively, induction and deduction -- at least, in Rand's use of the terms.

(Rand)

ITOE (2nd), Chapter 3, Abstraction from Abstractions, p. 28:

Thus the process of forming and applying concepts contains the essential pattern of two fundamental methods of cognition: induction and deduction.

The process of observing the facts of reality and of integrating them into concepts is, in essence, a process of induction. The process of subsuming new instances under a known concept is, in essence, a process of deduction.

[emphasis added]

REB


In mathematics numbers are not adjectives but abstract objects.

This is how to create floating abstractions.

If you like. But that they are floating abstractions doesn't mean that they can't be usefully applied. Our whole modern technology would not exist without those floating abstractions (like infinite-dimensional complex spaces for example).


The law of identity implies certain things. For one, it implies that it takes a finite amount of time for something that exists to be transformed into something else. If that were not true, then it would be impossible to state that a thing had an identity in the first place.

Not at all. Suppose you have thing X that is equal to A until a certain time t1, after which it suddenly changes into B: X(t, t ≤ t1) = A; X(t, t > t1) = B, with B ≠ A. This would not violate the law of identity, as for all times t: X(t) = X(t). That we don't find such instantaneous transitions in the real world is an empirical datum, while the law of identity is a logical law, which doesn't give us empirical information.

It is also clear that the more massive something is, the longer it takes to transform it and the more energy the transformation requires. These seem like corollaries of the law of identity, though, at this point, I'm sort of going out on a limb. Still, it seems reasonable that to transform something massive, all of its parts must be transformed. So, if it takes a certain amount of time and energy to transform something small, it should take longer and require more energy to transform something large.

If this were true, it would still be an empirical datum, but in fact it is not true. If you have a milligram of polonium-214, 0.5 milligram will be turned into lead-210 in 160 microseconds, but if you have 1000 kg of polonium-214, 500 kg will be turned into lead-210 in the same time.
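The point generalizes: in exponential decay the fraction remaining after time t is (1/2)^(t/T) for half-life T, independent of the starting amount. A minimal sketch, taking the 160-microsecond figure above as the half-life:

```python
def fraction_remaining(t_us: float, half_life_us: float = 160.0) -> float:
    """Fraction of a radioactive sample left after t microseconds (any starting amount)."""
    return 0.5 ** (t_us / half_life_us)

for start_grams in (0.001, 1_000_000.0):  # 1 milligram versus 1000 kg, expressed in grams
    left = start_grams * fraction_remaining(160.0)
    print(f"{start_grams} g -> {left} g remaining after one half-life")
# Both samples lose half their mass in the same 160 microseconds.
```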

Edited by Dragonfly

I don't understand why concept-formation, or at least the whole process taken to its conclusion in forming a definition of a new concept, isn't easily understood as being inductive. It integrates multiple instances and is "bottom-up" and it creates the categorical knowledge that is then applied inductively.

--Mindy

Umm...you mean "applied deductively," right?

Isn't concept-formation inductive and concept-application deductive? We generalize, and then we apply those generalizations to specific cases, right? And those processes are, respectively, induction and deduction -- at least, in Rand's use of the terms.

(Rand)

ITOE (2nd), Chapter 3, Abstraction from Abstractions, p. 28:

Thus the process of forming and applying concepts contains the essential pattern of two fundamental methods of cognition: induction and deduction.

The process of observing the facts of reality and of integrating them into concepts is, in essence, a process of induction. The process of subsuming new instances under a known concept is, in essence, a process of deduction.

[emphasis added]

REB

Sorry, yes, I meant "deductive" at the end of my post.

Mindy


The law of identity implies certain things. For one, it implies that it takes a finite amount of time for something that exists to be transformed into something else. If that were not true, then it would be impossible to state that a thing had an identity in the first place.

Not at all. Suppose you have thing X that is equal to A until a certain time t1, after which it suddenly changes into B: X(t, t ≤ t1) = A; X(t, t > t1) = B, with B ≠ A. This would not violate the law of identity, as for all times t: X(t) = X(t). That we don't find such instantaneous transitions in the real world is an empirical datum, while the law of identity is a logical law, which doesn't give us empirical information.

It is also clear that the more massive something is, the longer it takes to transform it and the more energy the transformation requires. These seem like corollaries of the law of identity, though, at this point, I'm sort of going out on a limb. Still, it seems reasonable that to transform something massive, all of its parts must be transformed. So, if it takes a certain amount of time and energy to transform something small, it should take longer and require more energy to transform something large.

If this were true, it would still be an empirical datum, but in fact it is not true. If you have a milligram of polonium-214, 0.5 milligram will be turned into lead-210 in 160 microseconds, but if you have 1000 kg of polonium-214, 500 kg will be turned into lead-210 in the same time.

It's easy to think of examples where additional time AND energy are not required, such as acceleration. Added mass is accelerated by equivalent added energy in the same time.

--Mindy


Isn't concept-formation inductive and concept-application deductive? We generalize, and then we apply those generalizations to specific cases, right? And those processes are, respectively, induction and deduction -- at least, in Rand's use of the terms. [emphasis added]

Does she anywhere else use the terms in so vaguely non-technical a way?

I looked in the Lexicon. There's no entry for "Deduction." The entry for "Induction and Deduction" cites exactly the quote from pg. 28 of ITOE, no other reference.

I think she was seeing a "family resemblance" -- generalizing and applying -- and using the terms "induction" and "deduction" loosely in that context. (I'm being charitable. ;-))

Ellen

___

