Gregory Browne

Everything posted by Gregory Browne

  1. No: he assumes the exact opposite: he denies that the definitional attributes are all there is to a concept. In fact, that is his key point in this section of the article. The unknown characteristics are part of the concept, but they are not (or not always) part of the definitions.
  2. Brant, Calling something "verbiage" implies more than just that what was said could be expressed more economically, i.e., in fewer words. It implies that the words are worthless or of little value. It is connected to verbosity and the French word for chatter. In my experience it is always used pejoratively. Greg
  3. The problem is that he doesn't understand those basic premises. That's what you still have to prove. Are you still under the impression that these issues are primarily issues of science and mathematics? As I pointed out last year, these issues concerning these dichotomies are philosophical issues, and the doctrines that there are such dichotomies are philosophical doctrines, invented by philosophers. Most were argued for by Hume (and some of them are older than him); Kant gave the name to the A/S distinction; the Logical Positivists developed the whole system explicitly. All of these were philosophers. Scientists did not invent these distinctions (to their credit!). It was philosophers who persuaded many scientists of their validity. From there, of course, scientists passed the doctrine on to other scientists, but it is still a fact that philosophers invented these distinctions. Anybody who knows much about these issues knows this--whether they love the dichotomies or hate them. This is a pretty basic point. I've heard of many people who discuss these issues, but I've never heard anyone else treat them as anything but philosophic issues. You are just making personal insults of Peikoff again, and those are just Ad Hominem arguments. Ad Hominem arguments are irrelevant and therefore fallacious, unless they are used to undermine a claim to authority. And, as I pointed out before, neither Peikoff nor I ever asked you to take anything on authority. And even if these insults were relevant, they would involve somewhat circular reasoning: you base your insults on the stands he takes in the ASD, and so to then use the insults as a basis for criticizing the ASD is arguing in a circle. I think you're giving her too much credit. She wasn't an innovator in economics. The little she knew about economics? She knew a lot about economics. Probably not: Mises thought she had a first-rate mind, so she probably already understood a lot when she met him.
Almost any economist or economic writer has some background in some economic tradition. And are you saying that we shouldn't listen to anybody discuss economics unless they have a background in the economic tradition? And would you extend that to other fields? If so, then how are we ever going to get any external check on the people in any field of study? Indeed, how do you know that they are experts in the field, without some external test? Otherwise, you will simply say that certain people are experts on economics because other people in economics say they are, and you say that those people are experts because still other people say they are, and so on. Rather, the fact that Rand got so much right when economists who were supposed to be experts got those things wrong is a credit to her and a shame to them.
  4. A statement can also be true if it doesn't refer to anything that exists in reality. With the conventional definition of a unicorn, the statement "a unicorn has a horn" is an analytical truth. Yes, it is. But for a statement to be meaningful, whether it is true or false, it has to have some tie to reality. The concept of unicorns is based on concepts for real things, such as the concept of horse and the concept of horn. It is not extra to the concept of truth, but it is extra to convention. You say that the above statement about bachelors is true merely by convention, and nothing else. I have pointed out that it is due to both convention and the Law of Identity, and the Law of Identity is not due to convention. This doesn't contradict anything I've said. If you base your claim that analytic truths are non-factual (not about the world) on the claim that they are mere arbitrary human products (products of convention or stipulation), then it does contradict what you said. No, not every statement that is a product of convention or stipulation is a truth. I never said that it was. What made you think that I did? I'm not talking about all those distinctions; the only thing I say is that analytical truths, such as mathematical theorems or a particular geometry, in themselves don't tell us anything about the real world, but that we'll have to determine empirically... You are talking about 3 of the 4 distinctions, right in this passage: the analytic/synthetic one, the factual/non-factual one (non-factual truths, if there were any, would be ones that don't tell us anything about the real world) and the empirical/a priori one. You are saying (1) all analytical truths are non-factual, and (2) all factual truths are empirical. You have been claiming these things since the beginning, but you have not yet proved (1). (2) I already agree with. Mind your language, Dragonfly: I don't call your posts "rubbish", so do not call mine "verbiage".
That's true(!), but the point is not that you assert arbitrarily that such a statement is true, but that a statement that is logically deduced from arbitrary (non-contradictory) axioms is analytically true. I deny also that the axioms are arbitrary. The concepts in the axioms may be arbitrarily put together, but that does not make the truth of the axioms arbitrary. This is exactly analogous to the case of minyaks and munyaks: I arbitrarily picked the numbers 28 and 29 and arbitrarily chose to combine the concepts of them with other concepts to make the new concepts of minyaks and munyaks, but the statement "A minyak has 28 sides" is not an arbitrary statement: we are free to use the term and concept of minyaks, or not use them, but if we do use them we are not free to deny the statement (that is, we cannot do so and still avoid falsehood). It's true while it is a consistent system based on Euclid's axioms. So you're saying that it is true because it is consistent? But that is not enough to make something true. At a minimum, the words have to mean something. No, it's not about the world (more accurately: it doesn't tell us anything about the world), as you'll first have to prove empirically that space is uncurved. No. It says something about what is possible and impossible for a world to be: Euclidean geometry is saying that a world cannot have straight space and yet fail to agree with the propositions of Euclidean geometry. I'll illustrate my point this way: answer me this: is "Arsenic is poisonous" about the world? That's another assumption you have not yet proven. If it refers to things in the real world, and says something about them, then it is saying something about the real world. That statement is a bit ambiguous. A geometry as a mathematical system is non-empirical, as it is derived from purely abstract axioms. You still have not proven that assumption.
Geometries and mathematical theories are analytically true, whether they can be applied to physical systems or not. Whether they are useful for science is an empirical question. Some abstract theories may remain completely useless for a long time, and yet turn out to be applicable to some physical system much later. So why do you think that they are true? Just because they are consistent? It is sloppy, as evidenced by the fact that you and Bill Dwyer (both Objectivists, or Objectivist-leaning) make different assumptions about what Peikoff meant. If Peikoff only counted floating ice as ice, it is still beside the point: he was attacking a theory and using the kind of definition that defenders of the theory would use.
  5. That's just what I have been saying. The English word "curve" has excluded straight lines for centuries. The redefining of "curve" so as to include straight lines probably goes back only to the Logical Positivists or their forerunners such as Russell, about a century ago. (If you tell me that Descartes or somebody of his time used "curve" that way, I would still point out that this did not become the standard English usage.) Now wait a moment... we were discussing the word axiom, of which you claimed that mathematics hijacked it to give it a new meaning. Now that it has turned out that the mathematical axiom has a quite respectable age, you surreptitiously switch to the word "curve". But the fact that the word "curve" may have been used for centuries in daily language to exclude straight lines does not imply that the meaning of "curve" in a mathematical sense isn't old as well. In every dictionary you'll find the mathematical definition of a curve as one of the possible meanings. Words do have different meanings and there is no single "correct" meaning, and neither does the etymology of a word necessarily define its current meaning. How old do you think that the mathematician's use of the word "axiom" in such a way as to include mere postulates is? Is it older than about a century? Dictionaries do list the modern mathematical definition of curves, because these definitions have come into common use among mathematicians, but they also list the definition that expresses the meaning that non-mathematicians use, and that is older. The mathematician's definition is an innovation, and that is unfortunate, because it involves changing the meaning of words, resulting in confusion and ambiguity (see my reply to General Semanticist, 2 posts back).
  6. I agree with your conclusion DF, but I would suggest that it makes no sense to speak about defining concepts whatsoever. Concepts must be considered as some sort of neural process and as such are not subject to definitions. Definitions only apply to words. I agree with DF, too (with your terminological amendment, GS)--and Peikoff agrees with DF, too: a definition, to be useful, can only list a few characteristics. Unfortunately, the fact that it lists only a few has led many to assume that it does not imply many more.
  7. Mathematicians often take common terms and give them technical meanings that only apply in mathematics. For example, 'function' has a very specific meaning in mathematics which has little to do with ordinary usage. Yes. And this is something that I have been criticizing them for--and criticizing philosophers, scientists and most people for. It multiplies meanings and thereby creates ambiguities, which can lead to misunderstandings and faulty reasoning--the Fallacy of Ambiguity. It's a bad habit. When somebody comes up with a new concept, they should invent a new term--either a new word (as in the case of 'quark') or a new combination of existing words that describe--accurately describe--the things included under the concept. (Specialists such as mathematicians may plead that they are only using the term with a new meaning in their field, but that merely quarantines the problem rather than curing it. Specialists need their own concepts, but they should not use existing terms to express them.)
  8. Actually, I am a Substance Dualist: I think that the mind is a thing distinct from the brain. Since it is a thing, an individual thing, it is not abstract; it is concrete. And 'concrete' is the opposite of 'abstract', not of 'physical'. And you can see this without agreeing with my Substance Dualism: there are physical concretes and physical abstractions--or rather, physical abstractables. For example, an apple is a physical concrete and its attribute of being red is a physical abstractable--i.e., a physical attribute which can be abstracted by the mind to produce the idea of its redness (and the idea of redness in general).
  9. I see this is true for the induction/deduction issue. It is like a tug of war and, at root, both sides are right when they say their type of reasoning is essential and both are wrong when they say (or imply) that the other type is not essential or somehow inferior. The validity of deduction has been discussed so much that I have nothing to add. But I do about induction. A great deal of confusion about induction exists because of an axiom I arrived at that is never mentioned (or, at least, I have not seen it mentioned yet). I haven't formulated a definitive form but it goes something like this: When two or more similar existents are perceived, more similar ones exist. This is the root of induction and has nothing to do with mathematical probabilities. It is a fact of nature and, from what I see, it is a corollary of the Law of Identity. Not only is a thing what it is, it exists with similarities and differences to other things, thus all things can be categorized with other things when they are similar (if such are found). This is so strong that even when there is only one unique thing (like an individual person), it is possible to imagine another. In this last case, the idea of cloning has been around a long time. This almost goes back to Aristotle's forms. Categories exist. Epistemologically they are components in a manner of mental organization. Metaphysically, they reflect something that actually can be perceived, so they exist. They are essentially differences and similarities. I don't see how one can deny that. Michael Michael, I agree that both deduction and induction are justified, which is good, because we need both: we need induction to help us in areas where we don't have enough information to use deduction: sometimes we must settle for probability rather than certainty.
I believe in the model of demonstrative science (i.e., deduction from necessary and certain premises), used by philosophers as different as Aristotle and Descartes, supplemented by induction. I believe that the arguments of Hume and others against these forms of reasoning are invalid. However, I don't think we can say that your claim--"When two or more similar existents are perceived, more similar ones exist"--is always true, since perhaps those two existents are the only ones of their kind. However, since kinds are, as I put it, "wide" (that is, they can include other members besides the ones they actually include)--which corresponds to Rand's claim that their concepts are "open-ended" (specifically, I would say that they are "open-sided", but may or may not be "open-bottomed")--we could say: "When two or more similar existents are perceived, more similar ones could exist." I have more to say on this later. Greg
  10. Cal, I think that much of our disagreement may come from the following. It seems that perhaps your basic objection to Peikoff and the ASD could be phrased as follows: "Peikoff is so ill-informed on these topics: he goes against everything my teachers taught me about scientific methodology." If this is what you are saying, then I say: Peikoff knows this; his point is that he thinks that what your teachers taught you is wrong. He is challenging their basic premises. To criticize Peikoff for going against the mainstream views would be like saying: "Rand is so ill-informed on economics: she goes against everything my economics teachers taught me." Yes, she did. She challenged their basic premises. And neither Rand nor Peikoff merely asserted that the premises of the mainstream thinkers were wrong: they gave arguments in an attempt to prove that they are wrong. And by coming to this discussion I had hoped, and still hope, to hear and participate in a debate on these basic premises. Greg
  11. Robert, You bring up a number of interesting points. I won't have time to address them all tonight but I may get back to them later. Leonard Peikoff's dissertation, completed in 1964, was titled "The Status of the Law of Contradiction in Classical Logical Ontologism." Now that Clemson University has a subscription to ProQuest digital dissertations, I've been able to download the entire thing in PDF. So far I've read only the two sections on Kantianism and conventionalism (pp. 165-188). But I can tell you that the dissertation is primarily an investigation of the history of philosophy. The focus of the dissertation is on accounts that ground the laws of logic in the nature of things, i.e., on variants of Platonism and Aristotelianism. The views that supplanted them--first Kantianism, then what Dr. Peikoff calls "conventionalism"--get a lot less attention, though there is enough material about conventionalism to give the reader a reasonable idea of which contemporary philosophers Dr. Peikoff had in mind. Willard van Orman Quine is not mentioned in the dissertation. Among the logical empiricists (as Dr. Peikoff calls them) are A. J. Ayer, Carl Hempel, Hans Reichenbach, C. I. Lewis, and Ernst Nagel. Ayer, Lewis, and Nagel are quoted at some length on logic having no grounding in ontology. There is scarcely a lick about Ordinary Language Analysis in the dissertation. The sole pragmatist to be cited and quoted is John Dewey, who was a major influence on Dr. Peikoff's dissertation advisor, Sidney Hook. Although the revisability of logical truths doesn't come up explicitly in his response to Dewey, I recall that in his early 1970s lectures on modern philosophy, Dr. Peikoff devoted a lot of attention to Jamesian and Deweyan pragmatism, including some colorful stories that were almost certainly about exchanges with Hook (he spent little time on Charles Peirce, commenting that Peirce was a "mixed case" who held back from the subjectivism that Dr.
Peikoff considered typical of pragmatism). Dr. Peikoff attributed to Dewey the view that, everything inevitably being in flux, the laws of logic had worked for so long that they were bound to need replacing in the near future. Because Dr. Peikoff wanted to argue that Kantian views on logic were an unstable intermediary between old-fashioned ontologism and 20th century conventionalism, he also quoted two figures from the mid-1800s: Sir William Hamilton and Henry Mansel (the latter's interpretation of Kant was an obvious favorite with him; it's pretty clear where Ayn Rand got the Mansel quote that she used in one of her essays). Clearly, Leonard Peikoff would have considered Quine a "conventionalist" (in the loose sense in which he uses that word in the dissertation). I'm not sure he would--but how does Peikoff define the term "conventionalist"? I am not sure, but I suspect that he probably thought that Quine was something of a voice in the wilderness and that Quine's views wouldn't catch on: his "Two Dogmas of Empiricism" came out in 1953, which is 11 years before the dissertation, which is a relatively short time, philosophically speaking, and in the following 3 years Peikoff probably figured little had changed. The Port Royal school is important for this debate, as they used a version of the intension/extension distinction which was a forerunner of Frege's sense/reference distinction, which was used to bolster the dichotomies of truth we are discussing here. There is much relevant to this debate in medieval scholasticism. Does Peikoff mention that? Greg
  12. You still don't get it. Arbitrary postulates and theorems deduced from them are by definition true. No: we cannot make anything true by simply making a definition (except in special cases of self-referential truth, such as "I am making a definition", which is true if you utter it while making a definition). It sure would be nice if we could, but that's a magical view of the world. Of course we can. If we define a bachelor as an unmarried man, then the statement "a bachelor is an unmarried man" is true by definition. This is what we call an analytic truth. No: it owes its truth partly to the definition (which we made true by convention or stipulation) but also partly to the Law of Identity. The definition allows us to substitute "unmarried man" for "bachelor" in "A bachelor is an unmarried man" to get "An unmarried man is an unmarried man", and that is an instance of the Law of Identity, and that law is not a product of convention. We did not make everything identical to itself. This doesn't contradict anything I've said. If you base your claim that analytic truths are non-factual (not about the world) on the claim that they are mere arbitrary human products (products of convention or stipulation), then it does contradict what you said. It may be that the answer to the question of whether the world is Euclidean or not is a contingent truth, whereas truths of geometry, including those about minyaks, are necessary truths, but you cannot assume that the contingent/necessary distinction is the same as the analytic/synthetic, empirical/a priori or factual/non-factual distinctions. I know you assume all of those things, but you still have not proven any of them. (See my earlier post on the things you still need to prove.) An analytic truth is not true? That's interesting: truths that are not true... You need to read what I say more carefully, as I read what you say carefully. I did not say that they were not true.
I said that they were not made true by mere arbitrary assertion, i.e., that they did not owe their truth to merely arbitrary assertion. If they did not tell us anything about the real world it would be unlikely in the extreme that they were of value in physical models. Not at all. We can for example construct different geometries; if one of these can be applied to a certain physical system, the other ones cannot, as they would give different results. In other words, all those other geometries don't tell us anything about that system. Nevertheless they are all equally valid; the truth of the mathematical statements does not depend on their applicability in physics. Then what is your reason for saying that Euclid's geometry is true, and useful in physical models? My reason for saying that Euclid's geometry is true, and useful in physical models, is that it describes a possible world, a way the world could be--with uncurved space. If space were uncurved, Euclid's propositions would be true. And this last sentence is a truth of mathematics (and so it is necessary and analytic) and yet it is about the world (and so it is factual). The correctness of the math should already be established. But in a sense we did test Euclid's math and found that it does not wholly accurately describe the space of our world, which turned out to be curved. That doesn't in any way invalidate Euclidean geometry; its theorems are always true. Whether we can apply them to physical systems is a different question. Again, this is the essential distinction between analytical truths (like the theorems of Euclidean geometry) and synthetic truths (empirical statements about the geometry of physical objects). The distinction is crystal clear! You still have not proved 1. that all empirical truths are synthetic 2. that geometry is non-empirical. These are empirical questions, which have no bearing at all on the correctness of Euclidean geometry.
This is the essential point you keep evading: a mathematical theory like Euclidean geometry can be completely consistent and correct in its own right. Its statements are analytic truths. Whether you can apply them to physical systems is a different question - that is the domain of synthetic truths. Again: why do you think that they are true (and useful) if they do not apply to physical systems? This is in contradiction with what you wrote in an earlier post: 'I say, for example, that "All water is H2O" (that is, all water has the atomic structure expressed by that formula) is true and we know it to be true, because we know that being H2O is a necessary fact about water, not just a contingent one.' Heavy water is D2O, so your statement was wrong. No: D stands for deuterium, which is an isotope of hydrogen, and therefore a kind of hydrogen, and so D2O is a kind of H2O. You'll mean "ice". Peikoff was so sloppy that he didn't even give any definition of ice. That wasn't sloppiness: he assumed the definition "Ice is solid water", which almost everyone would have agreed to. In that context it was very sloppy, as the exact definition is important for determining whether the statement "ice floats on water" is an analytic or a synthetic truth. Not sloppy: he assumed it because it was what the people whose views he was critiquing would say--that "Ice floats on water" is not analytic but "Ice is solid" is analytic, because the former does not follow from the definition "Ice is solid water" while the latter does.
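The substitution argument in the post above is mechanical enough to sketch in code. The following is only a toy illustration (the mini-dictionary and the helper names are hypothetical, not anyone's actual system): replacing a defined term by its definition reduces "a bachelor is an unmarried man" to an instance of the Law of Identity, "an X is an X".

```python
# Toy sketch of the substitution argument: expanding a defined term by its
# definition turns an "analytic" statement into an instance of the Law of
# Identity. DEFINITIONS is a hypothetical one-entry dictionary.

DEFINITIONS = {"bachelor": "unmarried man"}

def expand(statement: str) -> str:
    """Replace each defined term with its stipulated definition."""
    for term, definition in DEFINITIONS.items():
        statement = statement.replace(term, definition)
    return statement

def strip_article(phrase: str) -> str:
    """Drop a leading indefinite article so 'a X' and 'an X' compare equal."""
    for article in ("an ", "a "):
        if phrase.startswith(article):
            return phrase[len(article):]
    return phrase

def is_instance_of_identity(statement: str) -> bool:
    """True when the statement has the form 'a(n) X is a(n) X'."""
    left, _, right = statement.partition(" is ")
    return right != "" and strip_article(left) == strip_article(right)

s = "a bachelor is an unmarried man"
print(is_instance_of_identity(s))          # False: not yet of the form "X is X"
print(is_instance_of_identity(expand(s)))  # True: reduces to "an X is an X"
```

The convention fixes only the expansion step; the final equality check (that X is X) is what the post attributes to the Law of Identity rather than to stipulation.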
  13. They are not incorrect. This meaning can be found in any dictionary, and is not determined by pretentious philosophers who think that they have the monopoly of language. Some of them are incorrect, because they run contrary to ordinary usage. For example, the habit of modern mathematicians of defining "curve" in such a way as to include straight lines is contrary to ordinary usage. Now are you under the impression that I think that the meaning is determined by philosophers??? That is precisely the view I have been rejecting: as I said, Leibniz was right to say that we should think with the wise but speak with the common people. That's just what I have been saying. The English word "curve" has excluded straight lines for centuries. The redefining of "curve" so as to include straight lines probably goes back only to the Logical Positivists or their forerunners such as Russell, about a century ago. (If you tell me that Descartes or somebody of his time used "curve" that way, I would still point out that this did not become the standard English usage.)
  14. Not so. Science develops along the lines of abduction which is hypothesizing the likeliest cause or maintaining the symmetry of the laws underlying the observed effects. Ba'al Chatzaf Much of what you say after this is good, but it shows you have misunderstood my point. I am saying that modern scientists do so well because they do not practice Logical Positivism, which apparently many scientists still preach (as indicated by Cal's remarks). Abduction is not a concept that Logical Positivism--at least in its classic form--uses. And I do not share the view that all science is based on induction. (By the way, Bacon did not have a high opinion of mere induction by enumeration, which he said is "puerile".)
  15. Philosophically absurd philosophies of science do not do these things; science does. So why do modern scientists do so well? Because modern scientists do not practice the philosophy of scientific methodology that they preach--thank goodness. What most of them apparently still preach is a version of the philosophy of Logical Positivism, as do you and Cal. According to Logical Positivism, science can only be based on induction. But Logical Positivists have never been able to justify induction. Why? Because their philosophy is based on the premises of David Hume, who claimed that induction cannot be justified--and if you accept his premises you cannot justify induction (this is the famous "problem of induction"--which is only a problem if you accept his premises, which I of course do not). Some Logical Positivists tried to justify it, but their work did not satisfy other Logical Positivists. Two of the most prominent attempts, one by Rudolf Carnap and one by Hans Reichenbach, were critiqued by Wesley Salmon, another Logical Positivist. Carnap argued that induction could be justified by the laws of probability. However, Salmon pointed out that the laws of probability are mathematical truths, and so are necessary truths, whereas when you claim that induction from some past or present truth(s) to (alleged) future truths is valid, you are making a factual claim. But Logical Positivists (following Hume) say that there are no necessary factual truths. So if they are right (which they aren't), Carnap's attempt at justifying induction fails. A similar objection was made by Salmon against Reichenbach's attempt. In short, both of the attempts at justifying induction made by these two Logical Positivists violated their own Logical Positivist assumptions. Fortunately most scientists, though they preach Logical Positivism, go right ahead and do their inductions, with the great successes you mentioned.
Now since I believe in necessary factual truths I can use Carnap's strategy and say that induction can be justified by mathematical laws of probability. But since you believe that there are no necessary factual truths, Ba'al, how would you try to justify induction? (And I ask the same question of Cal.)
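As a concrete, if simplified, illustration of the Carnap-style strategy mentioned above, here is Laplace's rule of succession, one classic way probability theory has been used to formalize inductive inference. This is only an illustrative stand-in, not Carnap's actual system of inductive logic: if s of n observed instances had a property, the rule assigns the next instance a probability of (s + 1) / (n + 2) of having it.

```python
# A minimal sketch of probabilistic induction via Laplace's rule of
# succession -- an illustrative stand-in for "justifying induction by the
# laws of probability", not Carnap's actual system.

from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """P(next instance has the property), given s of n observed instances had it."""
    return Fraction(successes + 1, trials + 2)

# After observing 100 ravens, all black, the next raven is black with
# probability 101/102; with no observations at all, the rule gives 1/2.
print(rule_of_succession(100, 100))  # 101/102
print(rule_of_succession(0, 0))      # 1/2
```

Note the behavior the post's argument turns on: the formula itself is a necessary mathematical truth, but applying it to ravens or to sunrises is a factual claim about the world.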
  16. Cal, In late June we started to discuss the following claims, and then you got busy. Now if you have time we should resume discussion of them, since they are central to the debate. These are some of the claims which Peikoff and I deny and which Cal presumably affirms (where "definitional truth" means a truth expressing a Nominal Definition, which is a definition we learn when we first learn the meaning of a term, and includes all truths of logic and all truths of math): 1. that only definitional truths are such that their denial yields a contradiction (i.e., are analytic) 2. that only definitional truths are necessary (i.e., such that it is impossible for them to be false) 3. that only definitional truths are non-falsifiable (i.e., certain, provable with certainty) 4. that definitional truths are not knowable empirically (i.e., from experience) 5. that definitional truths are "non-factual"--i.e., they "say nothing about the world" Now you need to argue for each of the claims on this list without using any of the other claims on this list as a premise. Otherwise you will not make your case. Now as regards 1, you wanted to say that it was true by definition. And I said that if you want to define 'synthetic truth' this way, and define 'analytic truth' as definitional truth, I could accept that, for the sake of the argument, but then you must accept the consequences of doing so: now you can no longer say that the following are true by definition: Truths whose denial is a self-contradiction = analytic truths Truths whose predicate is contained in their subject = analytic truths because those presuppose Truths whose denial is a self-contradiction = definitional truths Truths whose predicate is contained in their subject = definitional truths which need to be argued for.
(Note: I do not deny that all of those to the right of the = sign are included among those to the left of it, but I deny that all of those to the left of it are contained in those to the right of it.) So now you have 2 more to argue for. Now regarding 2: 2. that only definitional truths are necessary (i.e., such that it is impossible for them to be false) you say: 2. The necessity of a synthetic truth has no meaning to me. Such a statement may be true in this universe but not in another one. What does "necessity" mean in that regard? No, a truth that is not true in some other universe is not a necessary truth: a necessary truth is a truth that is true in all possible universes, by definition. When you say that the necessity of a synthetic truth has no meaning to you, you are simply expressing your firm belief that necessary truths cannot be synthetic, but that is no justification for saying that you don't understand my denial of this belief, and it does not prove that synthetic truths cannot be necessary. I assume that you agree that a necessary truth is one that cannot be false (in any possible universe), and you are using synthetic to mean "non-definitional". So a necessary synthetic truth, given your definitions, would be a non-definitional truth that cannot be false in any possible universe. You deny that there are such truths, because of your belief in 2: 2. that only definitional truths are necessary (i.e., such that it is impossible for them to be false). But you still have not argued for 2. So that remains on the list. Now for the next one: 3. that only definitional truths are non-falsifiable (i.e., certain, provable with certainty) You say to this that statements may be non-falsifiable without being certain. This implies that 3 is making at least 2 claims: 3a. that only definitional truths are non-falsifiable 3b. that only definitional truths are certain (i.e., provable with certainty) So these will replace 3 on the revised list. Now for 4: 4.
that definitional truths are not knowable empirically (i.e., from experience)

You say: This formulation is vague. The definitional truth may refer to something that is known empirically, but its truth cannot be derived empirically.

I would say that 'derived' is vaguer than 'known', but if you prefer this terminology you can have it. So 4 becomes:

4. that definitional truths are not derivable empirically (i.e., from experience)

Your argument for this is that definitional truths cannot be falsified by empirical observation. This leaves one premise unstated:

4a. Definitional truths cannot be falsified by empirical observation.
(4b. What cannot be falsified by empirical observation cannot be derivable empirically.)
4. Definitional truths are not derivable empirically.

But now 4b needs to be argued for.

Now for 5:

5. that definitional truths are "non-factual"--i.e., they "say nothing about the world"

You say: This formulation is also vague. A definitional truth may refer to things in the world, but it doesn't give any new information about the world, as its truth is independent of any empirical evidence.

Yes, they don't tell you anything about the world that is not already contained in them. On this we agree. But now the question is: do definitional truths give ANY information about the world? I think you believe that the correct answer is "No", and in that case we still have a disagreement, over this claim:

5'. that definitional truths do not give any information about the world.

So now the revised list of claims to be proven is this:

1'a. that only definitional truths are truths whose denial is a self-contradiction
1'b. that only definitional truths are truths whose predicate is contained in their subject
2. that only definitional truths are necessary (i.e., such that it is impossible for them to be false)
3a. that only definitional truths are non-falsifiable
3b. that only definitional truths are certain (i.e., provable with certainty)
4b. that what cannot be falsified by empirical observation cannot be derivable empirically
and perhaps
5'. that definitional truths do not give any information about the world.

2 and 3b are the most important for you to prove.
  17. That you don't know the meaning of mathematical terms does not mean that these are somehow secondhand and less correct than the meaning you happen to be familiar with.

I did not say they were secondhand. Insofar as they are incorrect, they are incorrect because they do not express the meaning of the term in ordinary language. Why is this bad? See my latest reply to Daniel. I never said it was new. Where did you get that idea?

You still don't get it. Arbitrary postulates and theorems deduced from them are by definition true.

No: we cannot make anything true simply by making a definition (except in special cases of self-referential truth, such as "I am making a definition", which is true if you utter it while making a definition). It sure would be nice if we could, but that's a magical view of the world. Even if we make up new concepts by making new definitions, we don't make the truths that can be derived from them true. For example, consider the case of minyaks and munyaks, which I discussed earlier here. I supposed that for some reason I made up these new concepts, defining 'minyak' as "a geometric figure with 28 equal straight sides of 1 inch in length" and 'munyak' as "a geometric figure with 29 equal straight sides of 1 inch in length". The concepts are arbitrary products of my mind, and the terms and their definitions are my arbitrary linguistic creations. Nonetheless, the truths that can be derived from these concepts and definitions are not arbitrary or produced by me. For example, the truth that the ratio of the area of a minyak to the area of a munyak is ______ (whatever quantity it is) is something that was not created by me (I don't even know what it is) but rather has to be discovered by careful reasoning, whose rules are not arbitrary creations.

That's assuming that analytic truths don't tell us anything about the real world, which is one of the major points you need to argue for (I will send an updated version of the list in my next post).
And analytic truths, even in the narrow sense, cannot be made true simply by arbitrarily asserting that they are true. If they did not tell us anything about the real world, it would be unlikely in the extreme that they were of value in physical models.

The correctness of the math should already be established.

But in a sense we did test Euclid's math, and found that it does not wholly accurately describe the space of our world, which turned out to be curved.

You can prove something if you can show that it is self-evident or derive it by deduction from something self-evident. You can disprove a postulate if you can deduce from it a conclusion that itself can be disproved.

No you cannot. Can you prove Euclid's 5th postulate? No, you can't. Can you disprove Euclid's 5th postulate? No, you can't. That is why it is called a postulate (or an axiom).

You are confusing a postulate with a hypothesis. Euclid thought that he was describing the properties of actual physical space. Relativity Theory says that he was not entirely correct, that space is curved, and that its properties are described by the alternative geometries.

Why would that be inaccurate? Bill Dwyer assured me that when Peikoff talked about "ice": "he's talking about normal ice, not very high density ice. Can't you see that??". So two Objectivists use a different definition of ice, one inclusive (also sinking ice is ice - implying that Peikoff's statement was incorrect), the other one exclusive (only floating ice is ice - implying Peikoff's statement was correct). This is of course just an illustration that definitions are arbitrary, not in the sense that any definition will do, but that there is not one single correct definition. Is heavy water a form of water?

The two different definitions of 'ice' express two different meanings of 'ice', because they determine two different reference classes, one of which includes sinking ice and one of which does not, the latter being a subset of the former.
Each of these definitions is correct as an expression of that meaning and reference, and each is incorrect as an expression of the other meaning and reference. I think that my definition expresses the meaning of the term 'ice' in ordinary language, because I think that ice is like a Shallow Kind, and therefore there is nothing more to it than being solid and water. However, Bill Dwyer seems to be taking ice as a Deep Kind. If ice were a Deep Kind, the meaning of 'ice' would be in part determined by paradigm cases, and the ice samples we encounter in everyday life are paradigm cases of ice; so to be an ice sample something would have to share all of their common attributes, even the unknown ones, and so the unusual sinking ice would not be true ice, though similar in many qualities, including many internal ones.

If some of the water of everyday life is Heavy Water (and my current understanding is that it is), then it is included in the meaning of the term 'water', and so Heavy Water is water: water is a Deep Kind, and therefore its meaning is in part determined by paradigm cases; the water samples we encounter in everyday life are paradigm cases of water, and if some of them are Heavy Water then Heavy Water is a subkind of water.

You'll mean "ice". Peikoff was so sloppy that he didn't even give any definition of ice.

That wasn't sloppiness: he assumed the definition "Ice is solid water", which almost everyone would have agreed to.
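The minyak/munyak ratio mentioned above is a determinate fact, and it can in fact be computed. A minimal sketch in Python, assuming the figures are regular polygons (the definitions as stated require only equal sides, so regularity is an added assumption of mine), using the standard area formula for a regular n-gon, A = n·s²/(4·tan(π/n)):

```python
import math

def regular_polygon_area(n, side=1.0):
    """Area of a regular n-gon with the given side length:
    A = n * s^2 / (4 * tan(pi / n))."""
    return n * side * side / (4.0 * math.tan(math.pi / n))

# Assumed regular versions of the figures defined in the text.
minyak_area = regular_polygon_area(28)  # 28 equal 1-inch sides
munyak_area = regular_polygon_area(29)  # 29 equal 1-inch sides
ratio = minyak_area / munyak_area       # roughly 0.932
```

The ratio (about 0.932) is fixed by the definitions together with the rules of geometry; it is discovered by computation, not stipulated by whoever coined the terms.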
  18. The term axiom is used for traditional reasons. It is picked up from Euclid's presentation of geometry. In -Elements- by Euclid, there are axioms and postulates. ... .... Perhaps it were better if you studied the history of mathematics before using slam terms such as hijack and intentional deception. Ba'al Chatzaf

If we're going to approach things that way, I'll point out that it would be better if you extended your studies of intellectual history beyond the history of mathematics; there you will find that concepts such as the concept of an axiom were neither confined to mathematics nor invented by mathematicians. Also, be wary of assuming that premodern mathematicians shared all of the methodological presuppositions of modern mathematicians.
  19. There is nothing funny about that, but misuse of the meanings of words can be comic (though sometimes tragicomic, when major practical decisions rest on it): for example, redefining 'curve' so that it applies to all lines, straight or curved, so that lines which aren't curved come to be called "curves".

In any case, why are people having so much trouble understanding what I am saying on this point, and seeing that it is true? It is almost common sense: we should not change the meanings of terms in the middle of an investigation or debate--unless we announce explicitly beforehand that that is what we are doing, and give an explicit definition expressing the new meaning--since otherwise we would be using terms ambiguously, which is a basic logical fallacy. And further, we should not even change meanings then, since it creates confusion and leaves the door open to ambiguity.

It is ironic that this happens so much in discussing these topics, since the Logical Positivists, who are the main defenders of the view that Peikoff is criticizing, aimed for an improved language in which each term had only one meaning, and yet they themselves contributed to ambiguous use of terminology by creating new meanings and definitions of existing terms. Of course they usually would say that they didn't want to add their definitions and meanings to existing ones, but rather to replace existing definitions and meanings--but why should they have been allowed to do this, to give existing terms with well-established meanings new meanings? By what right do they remake the English language and other natural languages?

Now some people would take a middle-of-the-road position and say that this is all right for technical language, but not for ordinary language: for example, if, say, physicists wanted a technical vocabulary, using existing terms but with special definitions for physicists, while lay people used the terms with the old meaning, that is OK.
But it is not: why should physicists and lay people use the same term differently? If physicists want a new vocabulary to express new concepts, that is fine, but it should be made up of new terms (either new words or new sets of old words), not old terms with new meanings. Otherwise ambiguity will be present and the physicists and lay people will just talk past each other, and there is too much lack of communication between scientists and lay people as it is. Better to follow Leibniz's saying: "We should think with the wise but speak with the common people".
  20. Be they called axioms or postulates, these are ground statements of a theory -assumed- for the purpose of deducing consequences. The postulates are generally not arbitrary. They are very carefully crafted to capture the intuitive idea of the mathematical object of interest. For example the Peano Postulates encapsulate the normal understanding of what an integer is. It may surprise you to know there are non-standard interpretations of the Peano Postulates. See "Peano's Axioms and Models of Arithmetic" by Thoralf Skolem. The postulates for vector spaces encapsulate the intuitive idea of a vector as an object having both magnitude (length) and direction. The assumptions underlying integrals are crafted to capture the idea of approximating the area under a curve with rectangles. Etc. Etc. This is hardly arbitrary.

That is good to hear: you are, in effect, saying that these postulates are definitions (or partial definitions) of terms: that which is "crafted to capture the intuitive idea" of something, or "encapsulates the normal understanding" of what something is, is a definition of the term referring to that something. And once we have these definitions, the definitional truths derived from them become self-evident: that is, they are evident through themselves--i.e., you know that they are true as soon as you understand the meanings of the words in them (and the grammar of the sentence).

By saying that they do not "give the desired results", are you saying that they do not define the term in such a way as to make it apply to all integrals? What do you mean by "does not commute with limit as an operator"? So are you now discussing the attempt to come up with a definition of the term 'measure', or are you talking about making refinements to the definition of 'integral'?

If it is based on definitions of its terms, it is not worthless, because the definitional truths derived from such definitions will be self-evident truths, which provide a solid foundation.
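The claim that the Peano Postulates "encapsulate the normal understanding of what an integer is" can be made concrete with a toy sketch (in Python, purely my own illustration; the names ZERO, succ, add, and to_int are not from the discussion). Once a zero object and a successor operation are fixed and addition is defined by the usual primitive recursion, a fact such as 2 + 2 = 4 is derived from the definitions, not stipulated:

```python
# Peano-style naturals: zero is a fixed object, and the successor of n
# simply wraps n in one more layer of structure.
ZERO = ()

def succ(n):
    """Successor: wrap n in one more layer."""
    return (n,)

def add(m, n):
    """Addition by the usual primitive recursion on the second argument:
    m + 0 = m;  m + succ(n) = succ(m + n)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def to_int(n):
    """Translate a Peano numeral back to a Python int, for display only."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
four = add(two, two)  # derived from the recursion, not asserted
```

Nothing here decides in advance that add(two, two) equals succ(succ(succ(succ(ZERO)))); that follows from the definitions, which is the sense in which the postulates capture, rather than invent, the arithmetic.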
No confusion: I am sure that Euclid did not consider his axioms to be mere assumptions, but rather self-evident truths or truths derivable from them, because the notion that math, or any other field of knowledge, can be based on mere assumptions is rare before the last two centuries. Your problem above seems to be the belief that I have been criticizing: the belief that math, because it can reach to non-actual worlds and is often accompanied by abstractions, is not based on reality, and so its basic truths are based on mere assumptions rather than definitions.

So much the worse for modern mathematicians: without self-evident truths there are no indirectly evident truths, and without those there are no evident truths at all. Self-evident truths are what you need to start with, not derive. I am glad that you admit that there are self-evident truths in logic and math. However, the law of non-contradiction is not the only one: there are also Aristotle's other two laws, and any definitional truth.

There is nothing wrong with using slam terms if they are deserved, and they are deserved in this case. And don't get touchy: I didn't say that all mathematicians are intentionally deceiving anyone, nor that you are. In any case, whether the deception is intentional or unintentional is unimportant in this context (if you want to object to using "deception" for unintentional cases, then use another word): the point is that misusing language, intentionally or unintentionally, leads to confusion, which opens the door to error. And no, the history of practices in a given field will not justify bad practices. And it is a bad practice to appropriate an existing term for a new concept. It is, in short, the result of laziness: it is much easier to appropriate an existing term than to invent a new one.

That is like saying modern logicians are meticulous.
Yes, they do try to be precise, and often succeed and do good, but they are also notorious for being overly precise and nitpicky, which has given modern logic, and modern philosophy in general, a mostly deserved reputation for being dry, obsessed with trivia, and out of touch with the real world. Yet at the same time they have been sloppy in their analysis of some issues. Modern logic and modern math unfortunately grew up together, and the errors of the former have often infected the latter (such as the myth that logic and math are not about the world because they are just products of convention or because they are purely formal, or the myth that the material conditional expresses the conditional of ordinary language), though mathematicians get away with much less because of math's heavy application to physics.

I am very glad to hear that you admit that mathematical truths have a foundation and that it can be known. I thought that you would try to deny that, since you deny that math is based on self-evident truths.

Insofar as that is true, it is because so much of philosophy has been bad, but good philosophy is an indispensable foundation for other fields. In particular, little or nothing could be achieved intellectually without sound logic, which is a branch of philosophy. This affects the attempt to deal with every topic, even those which, because they are non-quantitative, do not involve math. I know that there has been a loss of certainty in most intellectual fields in the last two centuries, and this results from uncertainty in philosophy in the same period, going back to Hume's skeptical thinking, which appeared in modified form in the work of the Logical Positivists. Attacking this excessive skepticism is, or should be, the main point of Peikoff's ASD, and it is my main point in supporting it.
  21. When you think "point" what is it you are pointing at? When you think "line" what is it you are aligned with?

I never said that all definitions are ostensive, which would be no more correct than the common view that all definitions are verbal. And even among the verbal ones, not all are of the classic genus-and-differentia form or any other description-like form: some are contextual definitions, showing how to use a word in a sentence. If the original definitions of 'point' and 'line' were ostensive, they were probably something like this: people defined 'point' by pointing to the tips of spears, arrows, knives, etc., and people defined 'line' by pointing to stripes on striped things or to rows of things.

Now this is what geometry books tell us. But note that even this is somewhat abusing ordinary language. Platonistic-minded mathematicians told us that what we think are points or lines are not "real" points and lines, which do not exist in this world but in some other world. And so they changed the meanings of the words, which referred to things in the real world. By itself it was a small change, but it paved the way for worse things. In the late 19th and early 20th century mathematicians, or some of them, defined 'curve' in such a way as to apply to all lines, straight or curved, and defined 'line' in such a way as to apply only to infinite straight lines. These definitions eventually reached even 3rd-grade math books in the 1960s, which told us 3rd-grade students that these words didn't mean what we thought they meant. Later, in the 1970s, I found a textbook my father had used in a math or engineering class in trade school in the 1930s, and found definitions of geometrical terms more in conformity with ordinary language. My gut reaction was that this older book was unsophisticated, not knowing the superior definitions I learned in school.
But I soon saw how wrong this was: the mathematicians who originated the definitions in my textbook had simply changed the meanings of words to suit their purposes. But this is wrong: neither mathematicians nor anyone else has a right to simply change the meaning of a term. Again, as I have said before, when one comes up with a new concept one should not be lazy and take an already-existing term with an already-existing meaning and re-define it; instead, invent a new term (a new word or a new combination of existing terms).

See my post 617. There is not a consensus in favor of Platonism among either mathematicians or philosophers of mathematics. Of course, you are probably saying that, being closet Platonists, they are Platonists whether they admit it or not (or even whether they realize it or not), because points and lines do not exist in the real world. So again see post 617.

Again see post 617.

See post 617. But what you say again reminds me of the case of a body which is not acted on by any forces (which Newton's First Axiom talks about), the concept of which is also an idealization. If the fact that the concepts of geometric figures are idealizations proved that the statements of geometry are not factual or not empirical, then it would prove that Newton's 1st Axiom, whose subject-concept is an idealization, is not factual and not empirical--but it does not. I haven't heard anyone dealing with this challenge.

This is, at best, true only metaphorically, not literally: taken literally it stretches the meanings of 'hallucination' and 'fantasy' way out of shape.
First, we can extrapolate from our knowledge of the actual world to other possible worlds, but this is not hallucination or fantasy, nor do the truths cease to be factual: for example, dispositional statements (such as 'That stuff is poison', which means 'That stuff would make you very sick if you drank it') reach beyond the actual world (since it may be that no one drinks the poison) but are very factual. Indeed, decisions about the future involve going beyond the actual world, since some facts are contingent and therefore there is more than one possible future.

Second, we may talk in approximations, and again this is not hallucination or fantasy. There may be some physical reason why a perfectly straight line is not even possible, but we can still treat approximately straight lines (i.e., lines that are straight within a certain given margin of error) as if they were perfectly straight, for a given practical purpose, where no harm will be done by approximating.

What is Wigner's question? It sounds like he is asking: "How is it that math can be so effective (i.e., give true and useful answers about the real world) when it has no rational justification (i.e., is not based on self-evident truths about the real world)?" However, this assumes that math does not have a rational justification. I instead would ask a similar question: "How could it be true that math can be so effective (i.e., give true and useful answers about the real world) if it were true that it has no rational justification (i.e., is not based on self-evident truths about the real world)?" To put it another way, doesn't it seem extremely improbable that a math based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention, could produce so many results that were true--or, if you wish to deny that they are actually true, then so many results that are so useful?
My answer is that it is very improbable, and that we should therefore conclude that math is not based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention.
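The point above about lines "straight within a certain given margin of error" can be put operationally. A hypothetical sketch (the function names and the tolerance values are my own illustration, not from the discussion): a sequence of points counts as practically straight when no point deviates from the chord through the endpoints by more than the stated tolerance.

```python
import math

def max_deviation(points):
    """Greatest perpendicular distance of any point from the chord joining
    the first and last points (assumes distinct endpoints)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Perpendicular distance from (x, y) to the infinite line through
    # (x0, y0) and (x1, y1), via the cross-product formula.
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in points)

def practically_straight(points, tolerance):
    """Straight 'within a given margin of error', for a given purpose."""
    return max_deviation(points) <= tolerance

# A nearly straight run of points: straight for carpentry, not for optics.
wobbly = [(0.0, 0.0), (1.0, 0.001), (2.0, 0.0)]
```

The same points pass or fail depending on the tolerance chosen for the practical purpose at hand, which is exactly the sense in which "approximately straight" is treated as straight without any hallucination or fantasy.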
  22. Hi Daniel. The debate is over the question: Is all knowledge empirical (i.e., derived from experience) or not? More specifically: Is it possible to derive all of our knowledge from experience, or do we need some other source of knowledge, such as innate ideas? I answer "Yes" to these questions: experience is the source of all of our knowledge. And when I say that it is a source, I mean that it is a database: I do not deny that we need abstraction to derive concepts from the database, and logic to derive truths from the concepts and percepts. But I deny that we need an additional database. So, as a conceptual empiricist (and I think most of us on this forum are conceptual empiricists, as are most intellectuals in the English-speaking world), I say all concepts are empirical, and, as a propositional empiricist, I say that truths derived from concepts can be empirical if the concepts are empirical.

The distinction is between concepts of things which exist in the physical world and those which do not. That is not a distinction between truths, or between empirical and non-empirical knowledge. It might be, or be close to, a distinction between perceptual knowledge and conceptual knowledge, but the latter is derived from the former.

Yes, Plato concedes a connection between abstraction and the real world--but it is the reverse of what I and other empiricists say it is! He thinks that things in the real world resemble (imperfectly) his Ideas or Forms because the Demiurge impressed these Forms on imperfect matter and made material things, whereas we empiricists say that abstractions resemble material things because we have perceived material things and then abstracted these abstract concepts (abstractions) from our percepts.

No, this is just the sort of thing I want to avoid.
I am a Conceptualist, not a Realist, on the subject of universals (and any other kind of abstraction): there are no really existing abstractions outside the mind; the only abstract beings that exist are abstract thoughts in the mind. My formulation avoids implying a secondary, inferior sort of existence belonging to shadowy beings, as Meinong did (which Quine said was constructing an ontological slum). Sherlock Holmes does not exist in the mind; rather, Holmes does not exist at all, but we have ideas of Holmes in our mind. Anselm made a famous argument for the existence of God which relied on this mistake (and others): he said that God exists at least in the mind, since we have a concept of God (even the atheist must have the concept in order to deny the existence of God). However, he would not have used this premise if he had realized that "X exists in the mind" is just a metaphor for, or a casual way of saying, "We have a thought of X".

We don't have to idealize them if we don't see the imperfections at first, as we wouldn't if the objects look perfectly triangular. And many do look perfectly triangular: many things, mostly human-made, appear to be perfectly triangular or to have other perfect geometric shapes involving perfectly straight lines, because the naked eye cannot see the imperfections: manufacturing can make lines that look so perfectly straight that it takes a refined measuring device to detect the deviations. And since most of us are never going to bother making many of these measurements, let alone all of them, we will in fact experience far more triangles, etc., that look perfectly triangular than we will experience ones that do not. Now since these objects look perfectly triangular at first inspection, we do not need to already have an abstract standard of triangularity or straightness first. Moreover, I am not sure that I am not conceding too much by saying perfect geometric shapes do not exist in the real world: is this really true on the subatomic level?
And, now that I think of it, what about 3 intersecting light beams? Tell us about the 3 World cosmology.
  23. Really? Strange then that mathematicians for example talk about Peano's axioms and the axiom of choice. See for example here:

So this, if it is an accurate description of mathematical practice, shows that mathematicians, at least some of the time, use 'axiom' and 'postulate' and 'assumption' to mean the same thing. But this involves changing the meaning of 'axiom', which did not refer to just any arbitrary assumption, but to a foundational truth. And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need to hijack 'axiom' from its existing meaning (which, by the way, is still more or less used outside of math). Also, the above passage leaves out the possibility of self-evident truths. If all you have is arbitrary postulates and theorems deduced from them, then you have no reason to think any of them are true. Then mathematics would be worthless.

You cannot prove or disprove a mathematical postulate (or axiom). Example: Euclid's 5th postulate.

You can prove something if you can show that it is self-evident or derive it by deduction from something self-evident. You can disprove a postulate if you can deduce from it a conclusion that itself can be disproved.

You may define ice in such a way that the statement "ice always floats on water" is an analytic truth. But that doesn't imply that it gives a correct description of the physical world.

If you define ice as "solid water that floats on water" then the definition will be inaccurate as an expression of the meaning of the word 'ice' in English, because that word refers to any solid water, even the newly discovered forms that don't float.
In any case, neither Peikoff nor I nor anyone else has defined 'ice' in this way. What we would say is that if "Ice floats on water" were true then it would be an analytic truth; but it isn't true, and therefore it cannot be analytically true, by either our definition or yours. Peikoff and I and most other people mistakenly believed "All ice floats on water" was true, and so mistakenly believed it was analytically true--by our definitions (the standard Kantian and Logical Positivist definitions). But we did not believe that it was analytically true by your definition--true by definition--because we did not and would not define ice as "solid water that floats on water" or in any other way that would make "All ice floats on water" a definitional truth (all this is assuming that we are talking about Nominal Definitions).
  24. How does one know what they mean, without a definition? Both could be defined, and in math terms normally are defined, and often need to be, because mathematicians often define terms differently from non-mathematicians (which is a very unfortunate habit they share with most other people: when people come up with a new concept they should come up with a new term for it--either a new word or a new combination of old words--but people are frequently lazy about that and grab a term that already has a meaning).

You cannot define every word in a given context; try it! Take any sentence and pick a word in it and define it. Then, using the definition, pick another word and define it. Keep doing this and you will find yourself defining in circles, and this means that at very basic levels of language we have to trust that the other person understands us. If we can't agree on some undefined terms then communication is not possible. In my mathematical definition of a circle above there is no need to define 'point'; it is understood what it means, on objective levels.

As I said in post 550, a definition does not have to be verbal. It can be ostensive, and perhaps there are still other ways of defining a term. Yes, beginning communication is difficult. But we can tell ourselves what we mean by a term. For example, I can use one term to refer to strawberries and another term to refer to a certain person I know, and always use the terms that way, and hope that other people come to know what I am referring to and start to use the words that way. But I know what I mean by those terms. And when these halting beginnings develop into a full-fledged language, then I can give myself a definition of those terms, and give it to other people.
  25. Not if we take 'definition' in the broadest sense of the term, to apply to anything that indicates the meaning of a term. So a definition does not have to be verbal: there are ostensive definitions, which involve pointing.

How does one know what they mean, without a definition? Both could be defined, and in math terms normally are defined, and often need to be, because mathematicians often define terms differently from non-mathematicians (which is a very unfortunate habit they share with most other people: when people come up with a new concept they should come up with a new term for it--either a new word or a new combination of old words--but people are frequently lazy about that and grab a term that already has a meaning).

Most words in natural language are defined, and defined verbally. See my last reply to Ba'al, concerning the a priori.