The Logical Leap: Induction in Physics



The role "contextual certainty" seems to me to play for Objectivists is to let them think that they have a grasp which they don't have on physics and the history of physics. I don't see them using it in regard to issues like the defense of individual rights.

I've seen them use it where it suits an apparent purpose of not learning from mistakes and dogmatically clinging to beliefs, which certainly isn't limited to the realm of physics. I've seen no other use of this concept.

I don't think any field would be good for validating induction, if by that you mean reaching certainty through induction, since I don't think induction -- which I'm using here basically as Harriman uses it, arguing from some cases to laws pertaining to all cases -- ever can be validated.

I certainly don't mean to mean what Harriman means, nor would I mean what you think Harriman means; when I say a word, it means what I mean. And what I mean is: if you actually want to show how induction works, show them on an issue that is important to everyone. What is induction? Reasoning from particulars to general principles. Unless you believe in a priori knowledge, this is the only method of knowing anything about anything.

That isn't the way Rand argues for rights.

So?

Shayne



No, the non-omniscient argument pulls in a nonsensical standard, a standard which has no relationship to knowledge anyway, so it doesn't actually tell you anything about the possibility of certainty.

The point of the argument is that from the hypothesis of the existence of a counterexample it follows that this impossible standard would be true, and therefore the counterexample cannot be true - that's the crux of a reductio ad absurdum argument. The certainty with which you know that this nonsensical standard cannot be true is the certainty with which you know that the counterexample is impossible; in other words, the more nonsensical the standard, the stronger the argument.

Your argument is not a reductio ad absurdum argument. Instead it unjustifiably poses an absurdity as the only possible alternative to what you're arguing against. You take as your criterion for certainty an indefensible idea (which moreover is a stolen concept in Randian language) -- the idea of knowing everything all at once with no means or method of knowledge -- and then claim that because knowledge requires a means and method, certain knowledge isn't possible. You might as well set up immortality as your standard for whether life is possible and then say that since humans aren't immortal, they can't live.
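To make the structural point explicit, here is a skeletal contrast (my own schematization, not anyone's quoted wording; T stands for the thesis, C for a hypothesized counterexample, A for the impossible standard):

```latex
% Genuine reductio ad absurdum: the counterexample C entails the absurd standard A;
% since A is impossible, C is rejected.
\[
  (C \Rightarrow A), \quad \neg A \;\vdash\; \neg C
\]
% The structure objected to above: the absurd standard A is simply posed as the only
% alternative to the thesis T, i.e. the premise (\neg T \Rightarrow A) is assumed
% rather than established, and T is then inferred from \neg A.
\[
  (\neg T \Rightarrow A), \quad \neg A \;\vdash\; T
\]
```

The second form is only as good as its first premise, which is exactly what is in dispute.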

I agree with the contention that certainty about the laws of science isn't possible, but not for the reason that humans aren't omniscient -- rather, because a final tally to check the correctness of conclusions would be impossible.

Ellen


I certainly don't mean to mean what Harriman means, nor would I mean what you think Harriman means; when I say a word, it means what I mean. And what I mean is: if you actually want to show how induction works, show them on an issue that is important to everyone. What is induction? Reasoning from particulars to general principles. Unless you believe in a priori knowledge, this is the only method of knowing anything about anything.

"Reasoning from particulars to general principles" is too vague to give me a sense of what you'd take as an inductive demonstration.

That isn't the way Rand argues for rights.

So?

You were suggesting using rights as a "proving ground for validating induction." Since I couldn't tell what you mean by induction, I said "if what you mean is X," and then that that isn't how Rand argues for rights. Fine, if you say a word, it means what you mean, but you didn't provide a basis for me to know what you mean. Hence, I hypothesized.

Ellen


An infinite number of neologisms? That's a real stretch. As for "possible new concepts," even if you believe that their number is potentially infinite (a claim that strikes me as bizarre)

In turn, I don't see why it's "bizarre" at all. If "concepts" are true things we can know, and their number is finite and not infinite, then this entails that we might one day run out of new truths to know. In which case we might know everything; in which case we might one day be omniscient. (And this is without touching on the issue of false concepts, or anti-concepts.)

In contrast to this, I would maintain that this is unlikely; that there are a potentially infinite number of possible new concepts (which, for now, to avoid being sidetracked, I will use interchangeably with the term "theories"), hence we, either individually or collectively, can never come even close to being omniscient. And of course this also tracks with the idea of underdetermination of theories, which has a number of good arguments supporting it.

So I put it to you that, far from being "bizarre", such a claim is no more outrageous than the contrary proposition; in fact it is probably far less so. It seems a reasonable basis to proceed from.

Obviously it seems we have quite a bit to thrash out about just how good Popper's theory is before we can even try to see if it applies to Rand. But can I just say that I feel we are off to a promising start now, after a bit of argy-bargy to begin with, and I would also like to thank you for your recent thoughtful responses.

This is in fact one of Popper's most enduring, and least known, dogmas. He was still propounding it in Unended Quest. (Actually, if you have a moment, and UQ to hand, I suggest you have a look at his presentation there, in Chapter 7 as I recall. That, combined with the Aristotle chapter in OSE, is I think the clearest statement of his argument. Ellen, if you are reading this, I suggest you re-read the UQ chapter, as it might explain some of what you're seeing as a clash in Popper's pronouncements. If necessary I will also post some of the relevant UQ arguments.) If it turns out the old dog was wrong, so much the better.

I also propose we put Phil out of his misery and move this to another thread. If you agree, I will start one shortly.

I will also point out that it is Monday here and I am now back at work, so my replies may necessarily be piecemeal, such as this one; I apologise in advance.

Edited by Daniel Barnes

"Reasoning from particulars to general principles" is too vague to give me a sense of what you'd take as an inductive demonstration.

You should learn to distinguish your understanding being vague from a proposition actually being vague. Or is it your position that my particular word choice is, regardless of how one might define terms, inherently and irredeemably vague? It would be interesting to examine your theory of vagueness, as to which combinations of words in the English language we are not entitled to use, on the grounds of their inherent vagueness.

You were suggesting using rights as a "proving ground for validating induction." Since I couldn't tell what you mean by induction, I said "if what you mean is X," and then that that isn't how Rand argues for rights. Fine, if you say a word, it means what you mean, but you didn't provide a basis for me to know what you mean. Hence, I hypothesized.

I am not privy to how you define words. Hence I cannot provide in advance what you presume I should. This elementary fact, surprisingly to me, seems to be relatively unknown to most people, who prefer presumption to asking a simple question.

Shayne


I thought we'd established that *Rand* did not ever, in any of her published work which appears on the CD-ROM, use the term "contextual certainty."

I'm not even sure that Peikoff used that exact wording. If anyone reading the thread has notes from any of his courses which document his using that wording, I'd appreciate documentation.

Ellen,

In Peikoff's OPAR, p. 171, there's a whole chapter about "certainty as contextual".

Not a chapter, a section of Chapter 5, as I said in the post you're quoting.

Ellen


Shayne,

Sorry I addressed you as if maybe you'd be willing to have a reasonable conversation and not go into your standard posture attack mode. You've been gone for a while. I hoped you might have learned meanwhile how to have a conversation instead of an adversarial match. Since you haven't learned, end of my replying to you.

Ellen


Sorry I addressed you as if maybe you'd be willing to have a reasonable conversation and not go into your standard posture attack mode. You've been gone for a while. I hoped you might have learned meanwhile how to have a conversation instead of an adversarial match. Since you haven't learned, end of my replying to you.

It isn't my intention to "attack." I merely describe the nature of your reply. I can only suggest that if the actual nature of what you did is not agreeable to you, then perhaps you should refrain from doing it. But don't shoot the messenger.

Edit: I suppose that by "reasonable conversation" Ellen must mean that I gag myself while she accuses me of being vague and puts words in my mouth. I'm not allowed to object to her illogic; I just have to do my best to be polite and find some way of communicating with her other than trying to get her to reframe her side of the conversation in rational terms. In other words, by "reasonable conversation" she means the opposite.

Edited by sjw

An infinite number of neologisms? That's a real stretch. As for "possible new concepts," even if you believe that their number is potentially infinite (a claim that strikes me as bizarre)

In turn, I don't see why it's "bizarre" at all. If "concepts" are true things we can know, and their number is finite and not infinite, then this entails that we might one day run out of new truths to know. In which case we might know everything; in which case we might one day be omniscient. (And this is without touching on the issue of false concepts, or anti-concepts.)

First, concepts are not "true things we can know." (I don't even know what this is supposed to mean.) Rather, concepts are mental classifications that we use to organize our knowledge and express it in propositional form.

Second, even if it is true that there are an infinite number of things to know, it does not follow that we would need an infinite number of concepts to incorporate such knowledge. On the contrary, it would defeat a primary purpose of concepts -- which Rand calls "unit-economy" -- to insist that each new piece of knowledge requires a new concept. As Rand puts it in ITOE:

The essence...of man's incomparable cognitive power is the ability to reduce a vast amount of information to a minimal number of units -- which is the task performed by his conceptual faculty. And the principle of unit-economy is one of that faculty's essential guiding principles. (p. 63)

...

Concepts represent a system of mental filing and cross-filing.... This system serves as the context, the frame-of-reference, by means of which man grasps and classifies (and studies further) every existent he encounters and every aspect of reality. (p. 69)

...

The descriptive complexity of a given group of existents, the frequency of their use, and the requirements of cognition (of further study) are the main reasons for the formation of new concepts. Of these reasons, the requirements of cognition are the paramount one.

The requirements of cognition forbid the arbitrary grouping of existents, both in regard to isolation and to integration. They forbid the coining of special concepts to designate any and every group of existents with any possible combination of characteristics. For example, there is no concept to designate "Beautiful blondes with blue eyes, 5'5" tall and 24 years old." Such entities or groupings are identified descriptively. If such a special concept existed, it would lead to senseless duplication of cognitive effort (and to conceptual chaos): everything of significance discovered about that group would apply to all other young women as well. There would be no cognitive justification for such a concept -- unless some essential characteristic were discovered, distinguishing such blondes from all other women and requiring special study, in which case a special concept would become necessary. (pp. 70-71).

...

[T]he requirements of cognition determine the objective criteria of concept-formation. The conceptual classification of newly discovered existents depends on the nature and extent of their differences from and the similarities to the previously known existents. (p. 73)

...

(These excerpts are from Chapter 7, "The Cognitive Role of Concepts." This is perhaps the most important chapter in ITOE for understanding Rand's overall perspective. I have only quoted some sample passages here. A careful study of the entire chapter is necessary if one wishes to understand Rand's approach.)

As Rand indicates elsewhere, any given concept can subsume a potentially infinite number of units, past, present, and future. There is absolutely no reason to suppose that a potentially infinite amount of knowledge would necessitate an infinite number of concepts.
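As a rough illustration of the unit-economy point (a sketch of my own, not anything from ITOE; the predicate and sample "units" are invented), a concept can be modeled as an open-ended classification rule rather than a list of its units, so an open-ended number of units does not require an open-ended number of concepts:

```python
# A toy model of unit-economy (illustrative only; the predicate and the sample
# "units" below are invented for this sketch): a concept is represented as an
# open-ended classification rule rather than as a list of the things it subsumes.

def is_unit_of_table(thing: dict) -> bool:
    """Classify a thing under the concept 'table' by distinguishing
    characteristics, not by enumerating every table that exists."""
    return thing.get("has_flat_surface", False) and thing.get("has_supports", False)

# One rule subsumes any number of units -- past, present, or future -- so newly
# encountered tables do not require coining new concepts.
candidates = [
    {"name": "kitchen table", "has_flat_surface": True, "has_supports": True},
    {"name": "beach ball", "has_flat_surface": False, "has_supports": False},
]
print([c["name"] for c in candidates if is_unit_of_table(c)])
```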

In any case, as I said in my first post on this matter, all this is completely irrelevant to Popper's argument. Consider his example that I mentioned before, namely, the definition of "democracy." Okay, if you wish to define this concept, you might also have to define the major elements of your definition, such as "people" and "will," and then you might have to define some elements of those definitions. So what? Political philosophers do this all the time, and it doesn't involve anything like an infinite regress.

Popper shows no awareness of how definitional analysis often functions in philosophical arguments. Rarely do philosophers begin with an exhaustive list of definitions. Rather, they frequently argue to definitions rather than from definitions, in an effort to pinpoint and clarify their specific areas of disagreement. As Freud once pointed out, if we don't always begin arguments with precise definitions of the key terms, this is often because the purpose of an argument is to arrive at such definitions, i.e., to clarify and explain what we mean with more precision than we started with. This is scarcely a new idea. On the contrary, it is one of the primary goals of the "Socratic method."

Suppose I encounter a person who claims that freedom is the highest political value. Well, since I also believe that freedom is the highest political value, it might seem as if we will agree on most policy issues. But it turns out we don't. My interlocutor is a welfare statist, whereas I am a libertarian. So I ask him, "What do you mean by 'freedom'?" -- and it turns out that he is talking about the "positive freedom" associated with Hegelianism, the "new liberalism" of the late 19th century, and the modern welfare state, whereas I am thinking of the "negative freedom" that dominated classical liberalism.

Okay, we have each defined our basic terms, and in so doing we have clarified the major source of our disagreements. We might ask each other to define a few more terms (such as "coercion"), but this won't go on all that long, and it borders on idiocy to suggest that it would, or should, continue "to infinity" (as Popper claims). After we each know where the other stands so far as the definitions of our basic terms are concerned, we can then argue over more substantial points, as Isaiah Berlin did in his classic essay "Two Concepts of Liberty."

Popper quotes a philosopher as follows: "Most of the futile arguments on which we all waste time are largely due to the fact that we each have our own vague meanings for the words we use and assume our opponents are using them in the same senses. If we defined them to start with, we could have far more profitable discussion." (Open Society, vol. II, pp. 16-17).

This procedural advice has been offered again and again by philosophers for many centuries, and it is exactly right. But Popper blunders on with some truly dumb-ass remarks, e.g., that such a procedure will result in explanations that are "infinitely long," and that "the attempt to define terms will only increase the vagueness and confusion."

I scarcely know how to respond to comments like this. They leave me flabbergasted, since they are so obviously wrong, as all of us have witnessed many times on a practical level. If nothing else, when we learn that we are working from radically different definitions, this might indicate that further argument is pointless -- and that's a good thing to know, is it not?

Why does Popper make such peculiar claims? I can only surmise that he was so involved with his pet theory about definitions that he let common sense pass him by.

Ghs


[...] I would maintain [...] that there are a potentially infinite number of possible new concepts (which to for now to avoid being sidetracked I will use interchangeably with the term theories) [...].

Bad idea. Popper doesn't use "concept" and "theory" interchangeably. In his chart on pg. 19 (or thereabouts, if in a different edition than I have), he uses "designations," "terms" and "concepts" interchangeably -- which differs from Rand, for whom the word isn't the concept. He distinguishes "designations or terms or concepts" (left side of the chart) from "statements or propositions or theories" (right side).

His argument about an infinite number of possible implications *of a theory* involves the infinite series of natural numbers. It doesn't apply, even as he uses "concept," to concepts.

[...] the content of any (nontautological) statement -- say a theory t -- is infinite. For let there be an infinite list of statements a, b, c, ..., which are pairwise contradictory, and which individually do not entail t. (For most t's, something like a: "the number of planets is 0", b: "the number of planets is 1", and so on, would be adequate.) Then the statement "t or a or both" is deducible from t, and therefore belongs to the logical content of t; and the same holds for b and any other statement in the list. From our assumptions about a, b, c,..., it can now be shown simply that no pair of statements of the sequence "t or a or both", "t or b or both",..., are interdeducible; that is, none of these statements entails any other. Thus the logical content of t must be infinite.
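The step Popper leaves to the reader ("it can now be shown simply...") runs roughly as follows -- a reconstruction from his stated assumptions, not his own wording:

```latex
% Suppose, for some i \neq j, that (t \vee a_i) entailed (t \vee a_j).
% Since a_i entails (t \vee a_i), we would then have
\[
  a_i \;\vdash\; t \vee a_j .
\]
% But a_i and a_j are pairwise contradictory, so a_i \vdash \neg a_j, which together
% with the line above gives
\[
  a_i \;\vdash\; t ,
\]
% contradicting the assumption that no a_i individually entails t. Hence no two of
% (t \vee a_1), (t \vee a_2), \ldots are interdeducible, and t has infinitely many
% logically non-equivalent consequences.
```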

Ellen

Edited by Ellen Stuttle

Bad idea. Popper doesn't use "concept" and "theory" interchangeably. In his chart on pg. 19 (or thereabouts, if in a different edition than I have), he uses "designations," "terms" and "concepts" interchangeably -- which differs from Rand, for

I was doing it for the sake of argument, actually. I am quite aware of the difference, as anyone who has seen that chart is. It's just that, as you'd equated definitions (and, I presumed, perhaps wrongly, the concepts behind them) with hypotheses (which Popper uses interchangeably with theories), I thought this would be OK with you for now. Popper also talks about replacing the analysis of concepts with the testing of theories. I was going to save the concepts vs. theories discussion till later, as I hoped my remark about a "sidetrack" made clear.

Edited by Daniel Barnes

Why does Popper make such peculiar claims? I can only surmise that he was so involved with his pet theory about definitions that he let common sense pass him by.

This is all good; I wish I had time to reply more fully. Have you seen the chart Ellen is referring to in UQ? If you don't have it, I will post it in order to clarify this very point.


Popper shows no awareness of how definitional analysis often functions in philosophical arguments.

Actually, as Chapter 11 of OSE and Chapter 7 of UQ try to make clear, he argues that "definitional analysis" is responsible for long-running dysfunction in philosophical arguments.

That's the point of his comparison of the output of philosophy (and the social sciences) over the past 500 years or so with the output of natural science - a difference which, he claims, is largely due to the degree to which natural science has departed from Aristotle's doctrine. He claims that's why most philosophy is still mired in Scholasticism while natural science races ahead in leaps and bounds.

This is one of the things that has made Popper rather less popular with the academic community than with the science community.

All this is of course up for debate. But that is his point.

Once again apologies for brevity.

Edited by Daniel Barnes

Popper shows no awareness of how definitional analysis often functions in philosophical arguments.

Actually, as Chapter 11 of OSE and Chapter 7 of UQ try to make clear, he argues that "definitional analysis" is responsible for long-running dysfunction in philosophical arguments.

That's the point of his comparison of the output of philosophy (and the social sciences) over the past 500 years or so with the output of natural science - a difference which, he claims, is largely due to the degree to which natural science has departed from Aristotle's doctrine. He claims that's why most philosophy is still mired in Scholasticism while natural science races ahead in leaps and bounds.

Yeah, this story has been told countless times. It was especially popular among 19th century Comtean positivists. Comte gave this Enlightenment bedtime story its most enduring form. If only, Comte often said, we could apply the rigorous methods of science, which have produced remarkable results, to philosophy and the humanities...then we would finally see some results! (Popper's stress on definitional analysis renders his version less plausible than Comte's.)

I would give this story a failing grade for veracity. But I would give it an A+ for the number of things it leaves out.

Ghs


I scarcely know how to respond to comments like this. They leave me flabbergasted, since they are so obviously wrong, as all of us have witnessed many times on a practical level. If nothing else, when we learn that we are working from radically different definitions, this might indicate that further argument is pointless -- and that's a good thing to know, is it not?

And this clinging to radically different definitions might be the right procedure, even though the cost is that discussion breaks down?

This is exactly what Popper condemns about this approach.

Edited by Daniel Barnes

I scarcely know how to respond to comments like this. They leave me flabbergasted, since they are so obviously wrong, as all of us have witnessed many times on a practical level. If nothing else, when we learn that we are working from radically different definitions, this might indicate that further argument is pointless -- and that's a good thing to know, is it not?

And this clinging to radically different definitions might be the right procedure, even though the cost is that discussion breaks down?

This is exactly what Popper condemns about this approach.

I said nothing about "clinging" to radically different definitions.

Arguments sometimes break down, and sometimes they don't. Some arguments deserve to break down -- for instance, a metaphysical argument between a Scientologist and a Jehovah's Witness -- but there is no Popperian law of nature that they must.

Philosophy has produced many worthwhile results, such as theories of rights, freedom, and limited government that exerted an enormous influence during the 17th and 18th centuries. The arguments that produced these results typically placed great stress on definitions. One reason for this stress, especially during the 17th century, was the widespread admiration among philosophers (e.g., Spinoza) for the geometrical method, in which definitions play a fundamental role. The light emanating from the Enlightenment was the light of clear thinking.

I posted an article some time ago that is relevant to this discussion.

Ghs


[...] I would maintain [...] that there are a potentially infinite number of possible new concepts (which to for now to avoid being sidetracked I will use interchangeably with the term theories) [...].

Bad idea. Popper doesn't use "concept" and "theory" interchangeably. In his chart on pg. 19 (or thereabouts, if in a different edition than I have), he uses "designations," "terms" and "concepts" interchangeably -- which differs from Rand, for whom the word isn't the concept. He distinguishes "designations or terms or concepts" (left side of the chart) from "statements or propositions or theories" (right side).

His argument about an infinite number of possible implications *of a theory* involves the infinite series of natural numbers. It doesn't apply, even as he uses "concept," to concepts.

[...] the content of any (nontautological) statement -- say a theory t -- is infinite. For let there be an infinite list of statements a, b, c, ..., which are pairwise contradictory, and which individually do not entail t. (For most t's, something like a: "the number of planets is 0", b: "the number of planets is 1", and so on, would be adequate.) Then the statement "t or a or both" is deducible from t, and therefore belongs to the logical content of t; and the same holds for b and any other statement in the list. From our assumptions about a, b, c,..., it can now be shown simply that no pair of statements of the sequence "t or a or both", "t or b or both",..., are interdeducible; that is, none of these statements entails any other. Thus the logical content of t must be infinite.

Ellen

Actually, since I can see that hopscotching between the two systems, not to mention participants' own interpretations of them, and my interpretations of those, will inevitably cause a few problems, let me try a different tack.

Ellen, you have Popper's little chart to hand.

What practical problem(s) do you think Rand's approach solves that Popper's doesn't?

What theoretical problems? (Other than the problem of universals, which we'll park for now.)

Edited by Daniel Barnes

Arguments sometimes break down, and sometimes they don't. Some arguments deserve to break down, but there is no Popperian law of nature that they must.

Absolutely. Cars break down sometimes too, and sometimes they don't. Normally if they break down consistently in a particular fashion, we take a look under the hood to see what might be causing this problem.

You've described one way arguments can consistently break down: when two people dogmatically cling to two radically different definitions of a term, each refusing to budge.

Popper is suggesting ways we might decrease such breakdowns.

Philosophy has produced many worthwhile results, such as theories of rights, freedom, and limited government that exerted an enormous influence during the 17th and 18th centuries. The arguments that produced these results typically placed great stress on definitions. One reason for this stress, especially during the 17th century, was the widespread admiration among philosophers (e.g., Spinoza) for the geometrical method, in which definitions play a fundamental role. The light emanating from the Enlightenment was the light of clear thinking.

This is a good point against Popper.


Arguments sometimes break down, and sometimes they don't. Some arguments deserve to break down, but there is no Popperian law of nature that they must.

Absolutely. Cars break down sometimes too, and sometimes they don't. Normally if they break down consistently in a particular fashion, we take a look under the hood to see what might be causing this problem.

Locke's explanation is more plausible and richly textured than Popper's. Here is how I put it in Why Atheism? (I often use excerpts like this because I have the material in a computer file and can easily paste it to a post. This saves me a lot of typing.)

No trend in modern philosophy is more significant than the shift of emphasis from the metaphysical theory of being to the epistemological theory of knowledge. Alexander Pope’s dictum that the proper study of mankind is man was echoed by David Hume’s contention that a “science of man” is the foundation of all other sciences, because all knowledge, including our knowledge of nature, is a product of the human mind. Thus, as John Locke had previously argued, philosophers should investigate the origin and limitations of human knowledge before venturing into the murky world of metaphysical systems.

Locke’s empiricism and high regard for Newtonian physics made him contemptuous of traditional metaphysics. “You and I have had enough of this kind of fiddling,” he wrote to a friend; we are unlikely to learn anything in metaphysical works on God and spirits. And in his Essay Concerning Human Understanding, Locke declares that the “master-builders” of his era were not metaphysicians but scientists, such as “the incomparable Mr. Newton,” whose work had been hindered by the “uncouth, affected, or unintelligible terms” of metaphysics. Metaphysicians attempted to pass off “vague and insignificant forms of speech” as profound insights and deep learning, when in fact they were nothing more than “the covers of ignorance, and hindrance of true knowledge.”

This is a fairly complex issue, as we see in the fact that Locke himself would later be hailed by Enlightenment thinkers as the father of modern metaphysics. Modern and traditional metaphysics had something in common, in that both viewed metaphysics as what Aristotle had called the science of first principles. But modernists looked for these first principles in the human mind, not in “being” external to the mind, because there can be no knowledge without a cognitive process that produces this knowledge. Moreover, this modernist metaphysics included more than we now understand by the term “epistemology” (which was coined in the nineteenth century). Modernist metaphysics included both epistemology and psychology, insofar as the latter was concerned with the origin and formation of ideas.

In a famous passage, Locke described his philosophic role as a modest one -- “as an underlaborer in clearing the ground a little, and removing some of the rubbish that lies in the way to knowledge.” (This refers, of course, to metaphysical rubbish.) This profession of modesty, however, is a bit disingenuous. Locke was calling for a revolution in philosophy, and he knew it.

Ghs


Thus, as John Locke had previously argued, philosophers should investigate the origin and limitations of human knowledge before venturing into the murky world of metaphysical systems.

That is a nice passage concerning Locke. I'm not sure Popper said the difference in definitions was the only reason for the success of science. It doesn't sound like the sort of thing he would say. He does, however, think it is an important, and overlooked, one.

His point was rather that the stress on the importance of definitions was a pathway to deadlocks such as the one you describe.

GHS:"If nothing else, when we learn that we are working from radically different definitions, this might indicate that further argument is pointless..."

Why might "working from radically different definitions...indicate that further argument is pointless..."?

This would only be true, surely, if the debaters choose to cling to their own radically differing definitions, thinking that they are far too important to give up. Then you are right: discussion would break down. However, if they want debate to continue, they could come to a mutual compromise, or even give up their definition, and adopt the other's.

But to do that, it seems to me that they would be implicitly deciding that their definitions are not of fundamental importance - that words can be used loosely, and useful debate can still be had.

Further, in my experience, it seems to me that where the doctrine of the fundamental importance of definition is accepted, this kind of breakdown is quite common - and not just restricted to radically different definitions, but subtly different ones too.

And, of course, subtly different definitions that are presented by their possessors as radically different ones!

It seems to me I have encountered this sort of thing a great deal over the years, and seen it break down a lot of discussions. Hence I agree with Popper that it is a problem, and that it can be overcome by adopting his principle of treating arguments over the meanings of words as an issue of minor importance.

I do not think Popper argues that his approach solves all problems with arguments, just as inventing the starter motor didn't solve all problems with cars. But it can solve the source of some of them.

Edited by Daniel Barnes

Thus, as John Locke had previously argued, philosophers should investigate the origin and limitations of human knowledge before venturing into the murky world of metaphysical systems.

That is a nice passage concerning Locke. I'm not sure Popper said the difference in definitions was the only reason for the success of science. It doesn't sound like the sort of thing he would say. He does, however, think it is an important, and overlooked, one.

His point was rather that the stress on the importance of definitions was a pathway to deadlocks such as the one you describe.

You seem so fixated on Popper's theory that you don't understand how radically Locke dissented from it. Recall the passages I quoted where Locke condemns metaphysical jargon. Much of what he says illustrates his belief that metaphysical gibberish was widely accepted because there had not been serious investigation into the meanings of metaphysical terms. Locke was far from alone in believing that clarity of meaning was the best solvent of metaphysical rubbish. Popper's notion that philosophical terms should not be defined would have been regarded by Locke as the cause of the problem, not a solution.

GHS:"If nothing else, when we learn that we are working from radically different definitions, this might indicate that further argument is pointless..."

Why might "working from radically different definitions...indicate that further argument is pointless..."?

This would only be true, surely, if the debaters choose to cling to their own radically differing definitions, thinking that they are far too important to give up. Then you are right: discussion would break down. However, if they want debate to continue, they could come to a mutual compromise, or even give up their definition, and adopt the other's.

But to do that, it seems to me that they would be implicitly deciding that their definitions are not of fundamental importance - that words can be used loosely, and useful debate can still be had.

Further, in my experience, it seems to me that where the doctrine of the fundamental importance of definition is accepted, this kind of breakdown is quite common - and not just restricted to radically different definitions, but subtly different ones too.

There are so many twists and turns in this passage that I am going to have to settle for the following:

Recall my hypothetical about two people who agree that freedom is the highest political value, yet disagree radically over what policies freedom demands. Using definitions, they soon find the main source of the problem, viz: by "freedom," one meant "positive freedom" and the other meant "negative freedom."

Now, this is not a matter of overstressing the importance of definitions, since definitions are what brought the source of the problem to light in the first place. What this process has revealed is not two people clinging desperately to their definitions who are unwilling to compromise. Rather, the process has shown us that these two guys are talking about two different concepts that happen to share the same word. Their highest political values are not the same at all; the apparent agreement was merely a linguistic illusion.

This conclusion may cause a breakdown of communication because of the mutual realization of the huge gap that separates the two guys. One or both might therefore decide that he doesn't want to undertake the arduous, time-consuming, and highly improbable task of converting someone who inhabits the other end of the political spectrum. This is nothing more than a practical decision, but it becomes an intelligent practical decision when one pays close attention to the meaning of words.

Ghs


In any case, as I said in my first post on this matter, all this is completely irrelevant to Popper's argument. Consider his example that I mentioned before, namely, the definition of "democracy." Okay, if you wish to define this concept, you might also have to define the major elements of your definition, such as "people" and "will," and then you might have to define some elements of those definitions. So what? Political philosophers do this all the time, and it doesn't involve anything like an infinite regress.

Before we get to the practical aspects of this - obviously no-one can go on to infinity in practice - and also the ins and outs of Rand's approach, I suppose we should get the non-Objectivist theoretical side cleared up.

Do you agree that trying to prove all propositions leads, in theory, to an infinite regress? That is, to prove one proposition, we need to introduce another proposition: to justify p we need p1. But to justify p1 we need p2. And to justify p2 we need p3. And so on to infinity.

I would have thought this was an uncontroversial result. Certainly you don't need to accept Objectivist or Popperian doctrine to agree, it was known long before either.

Likewise, then, as Popper points out, defining a term requires the introduction of at least one other term. So to define t, we need t1. To define t1, we need t2. To define t2, we need t3. To define t3, we need t4. And so on to infinity.
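As a rough sketch of the pattern (mine, not Popper's or Rand's; the names are made up), the regress looks like a recursion with no base case -- each step generates another demand of the same kind:

```python
# A toy rendering of the regress (purely illustrative): if every statement needs a
# further statement to justify it, a naive "justify everything" procedure never
# bottoms out unless a stopping rule (axioms, or for definitions,
# undefined/ostensively grounded terms) is added.

def justify(p: str, depth: int = 0, max_depth: int = 5) -> None:
    """Each justification introduces a new statement that itself needs justifying."""
    if depth >= max_depth:
        # Artificial cutoff so the demo terminates; in principle the chain continues.
        print(f"... (the regress continues past {p})")
        return
    supporting = p + "'"  # hypothetical further premise supporting p
    print(f"to justify {p} we appeal to {supporting}")
    justify(supporting, depth + 1, max_depth)

justify("p")
```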

Do you accept this in principle? I don't see anything wrong with it myself.

If so, then we can perhaps move to the possible consequences of this.

Edited by Daniel Barnes

You seem so fixated on Popper's theory that you don't understand how radically Locke dissented from it. Recall the passages I quoted where Locke condemns metaphysical jargon. Much of what he says illustrates his belief that metaphysical gibberish was widely accepted because there had not been serious investigation into the meanings of metaphysical terms. Locke was far from alone in believing that clarity of meaning was the best solvent of metaphysical rubbish. Popper's notion that philosophical terms should not be defined would have been regarded by Locke as the cause of the problem, not a solution.

This was not all that clear from that passage. First, it seemed to contain the type of condemnation of metaphysical cant that one might find in Erasmus or Kant, and that Popper quotes approvingly from various sources. So it didn't exactly read like a resounding condemnation of Popper's approach. Secondly, the point seemed to be about Locke's shift in emphasis from metaphysics to epistemology. That's how it seemed to me.

PS: If you don't want to continue this discussion, just say so. This could be because I'm too stupid for you to bother with, too ignorant, too fixated, or whatever. But I'm noticing you're now struggling to keep a civil tone. If it's all so silly and contemptible then perhaps we should just not bother. I'm sure we both have better things to do.

Edited by Daniel Barnes

In any case, as I said in my first post on this matter, all this is completely irrelevant to Popper's argument. Consider his example that I mentioned before, namely, the definition of "democracy." Okay, if you wish to define this concept, you might also have to define the major elements of your definition, such as "people" and "will," and then you might have to define some elements of those definitions. So what? Political philosophers do this all the time, and it doesn't involve anything like an infinite regress.

Before we get to the practical aspects of this - obviously no-one can go on to infinity in practice - and also the ins and outs of Rand's approach, I suppose we should get the non-Objectivist theoretical side cleared up.

Do you agree that trying to prove all propositions leads, in theory, to an infinite regress? That is, to prove one proposition, we need to introduce another proposition: to justify p we need p1. But to justify p1 we need p2. And to justify p2 we need p3. And so on to infinity.

No, of course I don't believe this. Not to put too fine a point on the matter, but it's bullshit. Moreover, it has no bearing on Popper's argument.

I was going to ask you to define "prove" and "justify," but I didn't want to thrust you into the hellish vortex of an infinite regress.

Ghs


Here is another passage from Why Atheism? that is relevant to the ongoing discussion. It is actually an endnote in the chapter "The Career of Reason."

Locke’s approach, which was to decompose complex ideas into simple ideas and then trace the latter to their origin in experience (either sensation or reflection), has been called the psycho-genetic method (or, more simply, the genetic method) because of its stress on the psychological genesis, or origin, of ideas. This “plain, historical method” (as Locke called it) was widely criticized by later philosophers (especially post-Kantians) as an illicit mixture of epistemology and psychology. The origin of an idea, we are told, is an issue distinct from the validity or truth-value of that idea; and Locke’s genetic method, we are further told, fails to take this crucial distinction into account. But I think this criticism is a bit unfair to Locke, who nowhere (that I know of) expressly articulates the view commonly attributed to him. And I also think that philosophers who accept this criticism tend to do so either because they rely more on secondary accounts than the original text, or because they read the text with preconceptions of what they expect to find, thereby fastening upon any passage that seems to support their preconceptions and glossing over any that may contradict it. In any case, the common complaint that philosophers before Kant failed to grasp the difference between epistemology and psychology – between the justification of a belief and its origin in sense experience – is an egregious and pretentious assumption that, if true, would mean that Locke and other empiricists were ignorant of a basic distinction that is taught in first-semester courses on logic. My interpretation of Locke (which is admittedly a sympathetic one) may be summarized as follows: The genetic method is primarily a method of conceptual analysis and clarification, a means whereby we can make our ideas clear by giving to them fixed and determinate meanings. Thus any complex idea that cannot be broken into its constituent parts and ultimately traced to experience is bound to be unclear or, at worst, incoherent. Thus, according to this view, Locke’s genetic method was concerned first and foremost with meaning -- and this is related to justification inasmuch as the meaning of a proposition must be made clear before we can know what kind of evidence would be relevant to establishing its truth.

Ghs

