1. The mathematics of integers, rational numbers, real numbers, or complex numbers under addition, subtraction, multiplication, and division.

Also, there were other quotes from ITOE where Rand specifically stated that math was a science of method.

I am not doggedly trying to defend Rand. I am simply trying to understand why something that makes perfect sense to me is denigrated. I have no problem disagreeing with Rand, but I have to see the logic or at least something.

This is a half-truth. Topology is about both metric and non-metric spaces.

Well, of course every metric space is also a topological space, but the reverse is not true. The theory is more general than that of metric spaces, that is why it was invented in the first place.

Here is Korzybski's general semantics definition of mathematics: "Mathematics consists of limited linguistic schemes of multiordinal relations capable of exact treatment at a given date."

and for the numbers: "0 and 1 represent unique and specific symmetrical relations, and all other numbers also unique and specific asymmetrical relations".

In brief, the numbers 0 and 1 are used to represent the symmetrical relation of equality, as in: if a = b, then a - b = 0 and a/b = b/b = 1.
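The claim can be checked mechanically; a minimal sketch in Python (using exact rational arithmetic, so no floating-point caveats apply):

```python
from fractions import Fraction

# If a = b, then the difference expresses the relation via 0
# and the ratio (for nonzero b) expresses it via 1.
a = Fraction(3, 7)
b = Fraction(3, 7)

assert a - b == 0   # the symmetrical relation of equality, written with 0
assert a / b == 1   # ...and written with 1
```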

Rand's definition -- the science of measurement -- is far too narrow. It doesn't even include arithmetic!

Admittedly, she did not say it was a definition. Nowhere to my knowledge did she say this or that is a definition of mathematics. Maybe it's only a description. But I think that the above is the best candidate for her definition.

How does the following definition of measurement exclude arithmetic?

Measurement is the identification of a relationship in numerical terms...

How do you know it's a definition? Maybe it's only a description. Suppose you measure a stick and find it's 6 inches long. Did you do any addition, subtraction, multiplication, and division?

I'm not trying to denigrate Rand. I wouldn't expect Rand, who studied no math beyond middle-school algebra, to give a good definition.

I'll go with description (or partial definition, if needed). Still, "science" seems like an awfully good genus and "measurement" an awfully good differentia.

Suppose you measure a stick and find it's 6 inches long. Did you do any addition, subtraction, multiplication, and division?

Addition, of course. One inch and one inch and so on until you get six inches, presuming inch is the unit. If foot is the unit, then there would have to be division and subtraction. By a stretch, the six inches also could be arrived at through multiplication.

Addition, of course. One inch and one inch and so on until you get six inches, presuming inch is the unit. If foot is the unit, then there would have to be division and subtraction. By a stretch, the six inches also could be arrived at through multiplication.

I didn't ask you if numbers are subject to the rules of arithmetic, which is what you answered for 6. I asked if you did any addition, subtraction, multiplication, and division. Clearly you did not.

I might not have done the addition, but somebody did it for me if I am using a ruler divided into inches. There is the guy who made the ruler, for instance.
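The arithmetic in the 6-inch-stick example can be sketched concretely: reading the length off an inch-ruler amounts to repeated addition, while re-expressing it with the foot as the unit calls for division (with subtraction in the form of a remainder). A hedged illustration, with the numbers taken from the example:

```python
# Hypothetical sketch of the 6-inch-stick example: measuring in inches is
# repeated addition; re-expressing in feet requires division with remainder.
INCHES_PER_FOOT = 12

length_in = 0
for _ in range(6):      # one inch, and one inch, ... until six
    length_in += 1

feet, remainder_in = divmod(length_in, INCHES_PER_FOOT)  # division + subtraction
length_ft = length_in / INCHES_PER_FOOT                  # or as a fraction of a foot

print(length_in, feet, remainder_in, length_ft)  # 6 0 6 0.5
```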

Here is Korzybski's general semantics definition of mathematics: "Mathematics consists of limited linguistic schemes of multiordinal relations capable of exact treatment at a given date."

and for the numbers: "0 and 1 represent unique and specific symmetrical relations, and all other numbers also unique and specific asymmetrical relations".

In brief, the numbers 0 and 1 are used to represent the symmetrical relation of equality, as in: if a = b, then a - b = 0 and a/b = b/b = 1.

In more general terms, 0 is the identity of a so-called additive Abelian group and 1 is the identity of a multiplicative group in general. A number field (a division ring whose multiplication is commutative) is such that its elements constitute an additive group, its non-zero elements constitute a multiplicative group, and the two groups are connected by the distributive law.
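A minimal sketch of the structure just described, using Python's exact rationals as the number field (the bounds and the number of trials are arbitrary choices for illustration):

```python
from fractions import Fraction
import random

# Spot-check the field structure of the rationals: 0 and 1 as identities,
# inverses for both operations, and the distributive law linking them.
rng = random.Random(0)

def rand_q():
    return Fraction(rng.randint(-9, 9), rng.randint(1, 9))

for _ in range(100):
    a, b, c = rand_q(), rand_q(), rand_q()
    assert a + 0 == a                       # 0 is the additive identity
    assert a * 1 == a                       # 1 is the multiplicative identity
    assert a + (-a) == 0                    # additive inverse
    if a != 0:
        assert a * (1 / a) == 1             # multiplicative inverse (nonzero only)
    assert a * (b + c) == a * b + a * c     # distributive law
```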

There are algebraic structures in which there are many binary operations, not just addition and multiplication. There are groups and semi-groups in which there is just one binary operation, and some of these are non-commutative. For example, the group of invertible linear operators on a vector space, where composition is the binary operation. Such groups are non-commutative, so they are not numeric. There are also the quaternions, which form a non-commutative division ring, and the octonions, which form a non-associative algebra.
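The non-commutativity of the quaternions is easy to exhibit; here is a small hand-rolled sketch (the tuple layout (w, x, y, z) and the name qmul are just illustrative choices):

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples.
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, j) == k                   # i*j = k
assert qmul(j, i) == (0, 0, 0, -1)       # j*i = -k: multiplication is not commutative
```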

The Count was stuck with numerical-type algebras, and his characterization was right as far as it went, but it was not the most general characterization of mathematics. The Count was educated in the same era as Ayn Rand, and he was about 100 years behind where mathematics was.

The Count was educated in the same era as Ayn Rand and he was about 100 years behind where mathematics was.

Ba'al Chatzaf

Korzybski, 1879-1950; Rand, 1905-1982. I would say that puts them a good generation apart, but what difference does that make? I don't know what you mean about being 100 years behind; he wasn't doing mathematics, he was creating general semantics.

That is the wrong order. Do you define mathematics as "derivatives of functions"? Don't you have to learn how to count first? Simple to complex, etc.?

Michael

Michael,

If you read that definition of numbers I posted, you may see that numbers are symbols we use to express unique, specific (exact) symmetric and asymmetric relations. If I say 1+1=2, then it's mathematics; but if I add 1 apple and 1 apple and get 2 apples, then it's technically applied math. It would be possible to teach children about group theory (as Bob so kindly introduced us to above) without ever applying it to counting, adding, etc., real objects, but it would take a radical adjustment in our education system and attitudes, since the emphasis is usually on applying math. Mathematics is not usually valued for itself; most people think "where will I ever use this?" and dismiss it as useless.

In mathematics AND natural language 'point' would be considered undefined, and so one just 'knows' what it means.

How does one know what they mean, without a definition?

Both could be defined, and in math terms normally are defined, and often need to be, because mathematicians often define terms differently from non-mathematicians (which is a very unfortunate habit they share with most other people: when people come up with a new concept they should come up with a new term for it--either a new word or a new combination of old words--but people are frequently lazy about that and grab a term that already has a meaning).

You cannot define every word in a given context; try it! Take any sentence and pick a word in it and define it. Then, using the definition, pick another word and define it. Keep doing this and you will find yourself defining in circles, and this means that at very basic levels of language we have to trust that the person understands us. If we can't agree on some undefined terms, then communication is not possible. In my mathematical definition of a circle above there is no need to define 'point'; it is understood what it means, on objective levels.

As I said in post 550, a definition does not have to be verbal. It can be ostensive, and perhaps there are still other ways of defining a term.

Yes, beginning communication is difficult. But we can tell ourselves what we mean by a term. For example, I can use one term to refer to strawberries and another term to a refer to a certain person I know, and always use the terms that way, and hope that other people know what I am referring to and start to use the word that way. But I know what I mean by those terms. And when these halting beginnings develop in a full-fledged language then I can give myself a definition of those terms, and give it to other people.

You cannot know that a statement is true unless it is either self-evident or provable by derivation (deductively or inductively) from the self-evident. Therefore you cannot know that an axiom is true unless it is self-evident or provable. Therefore you cannot know that it is an axiom unless it is self-evident or provable (note that I am not denying that it will still be an axiom: statements can be true without us knowing them to be true).

I think you have axioms confused with postulates (see my last reply to Ba'al).

Really? Strange then that mathematicians for example talk about Peano's axioms and the axiom of choice. See for example here:

"In mathematics, an axiom is any starting assumption from which other statements are logically derived. It can be a sentence, a proposition, a statement or a rule that enable to construct a formal system. Unlike theorems, axioms cannot be derived by principles of deduction, nor are they demonstrable by formal proofs—simply because they are starting assumptions—there is nothing else they logically follow from (otherwise they would be called theorems). In many contexts, "axiom," "postulate," and "assumption" are used interchangeably.

As seen from definition, an axiom is not necessarily a self-evident truth, but rather a formal logical expression used in a deduction to yield further results."

So this, if it is an accurate description of mathematical practice, shows that mathematicians, at least some of the time, use 'axiom' and 'postulate' and 'assumption' to mean the same thing. But this involves changing the meaning of 'axiom', which did not refer to any arbitrary assumption, but to a foundational truth. And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need to hijack 'axiom' from its existing meaning (which, by the way, is still more or less used outside of math).

Also, the above passage leaves out the possibility of self-evident truths.

If all you have is arbitrary postulates and theorems deduced from them, then you have no reason to think any of them are true. Then mathematics would be worthless.

You may assume a postulate p to be true in order see its consequences, or even to prove it false. Then if you can validly deduce q from p you have proved the truth of the conditional statement 'If p then q'. But you will not have proven q, if you have not proven p.

You cannot prove or disprove a mathematical postulate (or axiom). Example: Euclid's 5th postulate.

You can prove something if you can show that it is self-evident or derive it by deduction from something self-evident.

You can disprove a postulate if you can deduce from it a conclusion that itself can be disproved.

Truths of physics and truths of mathematics differ in their subject matter, but not in their truth.

I know that the analytic-synthetic dichotomy is supposed to correspond to the mathematics-physics dichotomy, but I deny this--and do so even on your definition of 'analytic truth'--i.e., definitional truth--because I say that Newton's axioms can be derived from definitions of the terms 'body' and 'force', and so, even by your definition, are still analytic.

You may define ice in such a way that the statement "ice always floats on water" is an analytic truth. But that doesn't imply that it gives a correct description of the physical world.

If you define ice as "Solid water that floats on water" then the definition will be inaccurate as an expression of the meaning of the word 'ice' in English, because it refers to any solid water, even those newly discovered forms that don't float.

In any case, neither Peikoff nor I nor anyone else has defined 'water' in this way.

What we would say is that if "Ice floats on water" were true then it would be an analytic truth, but it isn't true, and therefore cannot be analytically true, by either our definition or yours.

Peikoff and I and most other people mistakenly believed "All ice floats on water" was true, and so mistakenly believed it was analytically true--by our definitions (the standard Kantian and Logical Positivist definitions). But we did not believe that it was analytically true by your definition--true by definition--because we did not and would not define ice as "Solid water that floats on water" or in any other way that would make "All ice floats on water" a definitional truth (all this is assuming that we are talking about Nominal Definitions).

So this, if it is an accurate description of mathematical practice, shows that mathematicians, at least some of the time, use 'axiom' and 'postulate' and 'assumption' to mean the same thing. But this involves changing the meaning of 'axiom', which did not refer to any arbitrary assumption, but to a foundational truth. And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need to hijack 'axiom' from its existing meaning (which, by the way, is still more or less used outside of math).

Also, the above passage leaves out the possibility of self-evident truths.

If all you have is arbitrary postulates and theorems deduced from them, then you have no reason to think any of them are true. Then mathematics would be worthless.

Be they called axioms or postulates, these are ground statements of a theory -assumed- for the purpose of deducing consequences. The postulates are generally not arbitrary. They are very carefully crafted to capture the intuitive idea of the mathematical object of interest. For example, the Peano Postulates encapsulate the normal understanding of what a natural number is. It may surprise you to know there are non-standard interpretations of the Peano Postulates. See "Peano's Axioms and Models of Arithmetic" by Thoralf Skolem.
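To make the "crafted to capture the intuitive idea" point concrete, here is a hedged sketch of the Peano picture: numbers generated from zero by a successor operation, with addition defined recursively from the two defining equations n + 0 = n and n + succ(m) = succ(n + m). (The nested-tuple encoding is just one arbitrary way to model it.)

```python
# Peano-style naturals: ZERO plus a successor operation; addition is defined
# recursively by n + 0 = n and n + succ(m) = succ(n + m).
ZERO = ()

def succ(n):
    return (n,)          # each number wraps its predecessor

def add(n, m):
    if m == ZERO:
        return n                 # n + 0 = n
    return succ(add(n, m[0]))    # n + succ(m') = succ(n + m')

def to_int(n):
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
assert to_int(add(two, three)) == 5   # recovers ordinary arithmetic
```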

The postulates for vector spaces encapsulate the intuitive idea of a vector as an object having both magnitude (length) and direction. The assumptions underlying integrals are crafted to capture the idea of approximating the area under a curve with rectangles. Etc. Etc. This is hardly arbitrary. But the postulates are not self-evident either. For example, the assumptions underlying the Riemann definition of an integral are not sufficiently general to give desired results. The Riemann integral (as an operator) does not commute with limit as an operator. A more general definition, along with additional postulates, is required to define measure (a generalization of area or volume) so that the integral of the limit of a sequence of measurable functions is equal to the limit of the sequence of the integrals of each term of the sequence. And so on....
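The claim that the integral need not commute with the limit can be illustrated numerically. A standard textbook-style example is f_n(x) = n·x·(1 - x^2)^n on [0, 1]: each f_n integrates exactly to n/(2(n+1)), which tends to 1/2, while f_n(x) tends to 0 pointwise, so the integral of the limit function is 0. The sketch below checks the integrals with a midpoint Riemann sum (the step count is an arbitrary choice):

```python
# f_n -> 0 pointwise on [0, 1], yet the integrals tend to 1/2:
# the limit of the integrals != the integral of the limit.
def riemann(f, a, b, steps=200_000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h  # midpoint rule

def f(n):
    return lambda x: n * x * (1 - x * x) ** n

for n in (10, 100, 1000):
    exact = n / (2 * (n + 1))          # exact value of the integral of f_n
    assert abs(riemann(f(n), 0.0, 1.0) - exact) < 1e-3
# exact -> 1/2 as n grows, but the pointwise limit function integrates to 0
```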

The mathematics so constructed is not only not worthless, it is indispensable for doing physics.

The term axiom is used for traditional reasons. It is picked up from Euclid's presentation of geometry. In -Elements- by Euclid, there are axioms and postulates. An axiom is more general in its scope and meaning than a technical postulate. So Euclid's assumptions about quantities, in general, are called axioms. The special geometric assumptions are called postulates. The confusion is -yours- because you do not know the history of how the terms are used. Mathematicians have long since stopped thinking about "self-evident" mathematical truths. This is especially true since the discovery of non-Euclidean geometry in the 19th century. Mathematics, as a deductive art, is about inferring conclusions from assumptions, not finding "self-evident" truths. The only "self-evident" truth in mathematics and logic is the law of non-contradiction. Mathematical systems must be consistent in order to be used or further developed.

Perhaps it would be better if you studied the history of mathematics before using slam terms such as hijack and intentional deception. No one has hijacked anything. Mathematicians are deceiving no one. Modern mathematicians are meticulous about showing the foundations and operating rules for their theories. Mathematics from antiquity to the present has had a long, honorable and useful career and has done more for mankind than philosophy.

You might want to read: "Mathematics: The Loss of Certainty" by Morris Kline.

>And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need to hijack 'axiom' from its existing meaning...

Ho, ho, yes mathematicians "hijacked" the term "axiom" so it "allows for unintentional and intentional deception."

>Some time ago Daniel asked me to present my own summary of Peikoff's argument in the ASD.

Hi Greg,

I recall, perhaps incorrectly, that I was suggesting a formalisation of Peikoff's argument - as you were going to do it for something else less relevant - rather than a summary of it. But thank you for doing it anyway. I have read Peikoff's essay previously. It seems you have a formalisation in mind at some point. That would be useful if you get round to it.

> This should serve to remind us of the empirical origin of even our mathematical concepts.

Let's look at this issue from a slightly different angle: namely, "originating from" is, rather obviously, not the same as "equivalent to".

To wit: even if it could be easily decided one way or the other, what does the genesis of the concept "triangle" matter? This is rather like saying there is no "dichotomy" between a bison and a jellyfish as we might speculate they all originated in some simple organism at the dawn of time.

Hi Daniel.

The debate is over the question: Is all knowledge empirical (i.e., deriving from experience) or not.

More specifically: Is it possible to derive all of our knowledge from experience, or do we need some other source of knowledge, such as innate ideas?

I answer “Yes” to these questions: experience is the source of all of our knowledge. And when I say that it is a source, I mean that it is a database: I do not deny that we need abstraction to derive concepts from the database, and logic to derive truths from the concepts and percepts. But I deny that we need an additional database.

So, as a conceptual empiricist (and I think most of us on this forum are conceptual empiricists, as are most intellectuals in the English-speaking world), I say all concepts are empirical, and, as a propositional empiricist, I say that truths derived from concepts can be empirical if the concepts are empirical.

The cutting edge of the issue is, as you state generally (but then seem to take back in a rather ad hoc formulation which we will look at in a moment), that "perfect triangles" do exist in our heads but do not exist in the physical world. Thus there is a clear dichotomy, or at the very least a highly useful distinction.

The distinction is between concepts of things which exist in the physical world and those which do not. That is not a distinction between truths, or between empirical and non-empirical knowledge. It might be, or be close to, a distinction between perceptual knowledge and conceptual knowledge, but the latter is derived from the former.

If all you want to argue is that there is some kind of original connection between the abstraction and the real world, well so does Plato. But this seems to me to be beside the point.

Yes, Plato concedes a connection between abstraction and the real world—but it is the reverse of what I and other empiricists say it is! He thinks that things in the real world resemble (imperfectly) his Ideas or Forms because the Demiurge impressed these Forms on imperfect matter and made material things, whereas we empiricists say that abstractions resemble material things because we have perceived material things and then abstracted these abstract concepts (abstractions) from our percepts.

Incidentally, I do not get into debates over the meanings of words, but I would suggest a minor terminological clarification that might save confusion. When you write:

Greg: "...these beings do not exist in the physical world we should say that they don't exist at all, but we have thoughts of them..."

...it might be better to say these "beings" - such as perfect triangles - exist, but abstractly, and not physically.

No, this is just the sort of thing I want to avoid. I am a Conceptualist, not a Realist, on the subject of universals (and any other kind of abstraction): there are no really existing abstractions outside the mind; the only abstract beings that exist are abstract thoughts in the mind.

My formulation avoids implying a secondary, inferior sort of existence to shadowy beings as Meinong did (which Quine said was constructing an ontological slum). Sherlock Holmes does not exist in the mind; rather, Holmes does not exist at all, but we have ideas of Holmes in our mind.

Anselm made a famous argument for the existence of God which relied on this mistake (and others): he said that God exists at least in the mind, since we have a concept of God (even the atheist must have the concept in order to deny the existence of God). However, he would not have used this premise if he had realized that “X exists in the mind” is just a metaphor for, or casual way of saying, “We have a thought of X”.

Now, with that minor distinction made, what appears to me to be a rather ad hoc formulation is as follows:

Greg:"...there are surfaces that look perfectly triangular to the unaided eye, and from these we formed the concept of perfect triangles, without even needing to idealize, and then only later, when we measured carefully, did we find that the triangles were not composed of perfectly straight lines." (italics DB)

This seems simply confused, and merely verbalist, because when we "formed the concept of perfect triangles", we could have hardly done anything else but "idealised" them - obviously because the perfect triangle did not physically exist in the first place! Your theory becomes even weaker as you don't mention exactly what objects "look perfectly triangular" - in reality no object does, although there are objects that resemble perfect triangularity to a greater or lesser extent. However, to even make such a judgement seems to presume an abstract standard in the first place.

These are pretty straightforward criticisms of your position.

We don’t have to idealize them if we don’t see the imperfections at first, as we wouldn’t if the objects look perfectly triangular.

And many do look perfectly triangular: many things, mostly human-made, appear to be perfectly triangular or to have other perfect geometric shapes involving perfectly straight lines, because the naked eye cannot see the imperfections: manufacturing can make lines that look so perfectly straight that it takes a refined measuring device to detect the deviation. And since most of us are never going to bother making many of these measurements, let alone all of them, we will in fact experience far more triangles, etc. that look perfectly triangular than we will experience ones that do not.

Now, since these objects look perfectly triangular at first inspection, we do not need to already have an abstract standard of triangularity or straightness first.

Moreover, I am not sure that I am not conceding too much by saying perfect geometric shapes do not exist in the real world: is this really true on the subatomic level?

And, now that I think of it, what about 3 intersecting light beams?

FYI, I am a Popperian, and find Popper's "3 World" cosmology provides a fruitful hypothesis for some of these and other perennial problems.

Not if we take 'definition' in the broadest sense of the term, to apply to anything that indicates the meaning of a term. So a definition does not have to be verbal: there are ostensive definitions, which involve pointing.

When you think "point" what is it you are pointing at? When you think "line" what is it you are aligned with?

I never said that all definitions are ostensive, which would be no more correct than the common view that all definitions are verbal.

And even among the verbal ones, not all are of the classic genus-and-differentia form or any other description-like form: some are contextual definitions, showing how to use a word in a sentence.

If the original definitions of 'point' and 'line' were ostensive, they were probably something like this:

people defined 'point' by pointing to the tips of spears, arrows, knives, etc., and people defined 'line' by pointing to stripes on striped things or to rows of things.

A small spot of ink or graphite on a piece of paper is NOT a point. A stretched cotton or linen thread is NOT a line nor is an elongated streak of ink or graphite on a piece of paper.

Now this is what geometry books tell us.

But note that even this is somewhat abusing ordinary language. Platonistic minded mathematicians told us that what we think are points or lines are not "real" points and lines, which do not exist in this world, but in some other world. And so they changed the meanings of the words, which referred to things in the real world.

By itself it was small change, but it paved the way for worse things.

In the late 19th and early 20th centuries, mathematicians, or some of them, defined 'curve' in such a way as to apply to all lines, straight or curved, and defined 'line' in such a way as to apply only to infinite straight lines. These definitions eventually reached even 3rd-grade math books in the 1960s, which told us 3rd-graders that these words didn't mean what we thought they meant. Later, in the 1970s, I found a textbook my father had used in a math or engineering class in trade school in the 1930s, and found definitions of geometrical terms more in conformity with ordinary language. My gut reaction was that this older book was unsophisticated, not knowing the superior definitions I had learned in school. But I soon saw how wrong this was: the mathematicians who originated the definitions in my textbook had simply changed the meanings of words to suit their purposes. But this is wrong: neither mathematicians nor anyone else has a right to simply change the meaning of a term. Again, as I have said before, when one comes up with a new concept one should not be lazy and take an already-existing term with an already-existing meaning and re-define it; instead, invent a new term (a new word or a new combination of existing terms).

There is literally nothing in the world some of whose properties you omit which will yield point or line.

See my post 617.

Which indicates that mathematicians (or anyone else) when they are -doing- mathematics are Closet Platonists. They can't help it.

There is not a consensus in favor of Platonism among either mathematicians or philosophers of mathematics.

Of course, you are probably saying that, being closet Platonists, that they are Platonists whether they admit it or not (or even whether they realize it or not), because points and lines do not exist in the real world. So again see post 617.

They are dealing with things nowhere in the physical world. Nowhere.

Again see post 617

This, by the way, shows that Rand's notion of measurement omission is not quite on the mark. There is nothing in the world that has the property -length- such that if you omit its quantity you get the property -length-. What there -is- in the world are measuring sticks (rulers, yardsticks, tape-measures) and things in the world that we lay these devices against. And even then we are idealizing their rigidity. These devices are NOT rigid. They bend and stretch when sufficient force is applied, which is a state intermediate between nothing happening to them (a perfectly rigid object) or they break or become permanently deformed.

See post 617.

But what you say again reminds me of the case of a body which is not acted on by any forces (which Newton's First Axiom talks about), the concept of which is also an idealization. If the fact that the concepts of geometric figures are idealizations proved that the statements of geometry are not factual or not empirical, then it would prove that Newton's 1st Axiom, whose subject-concept is an idealization, is not factual and not empirical, but it does not. I haven't heard anyone deal with this challenge.

In short our most effective tool for understanding the world, i.e. mathematics, is an exercise in applied hallucination. We are fantasizing.

This is, at best, true only metaphorically, not literally: taken literally it stretches the meaning of 'hallucination' and 'fantasy' way out of shape.

First, we can extrapolate from our knowledge of the actual world to other possible worlds, but this is not hallucination or fantasy, nor do the truths cease to be factual: for example, dispositional statements (such as 'That stuff is poison', where that means 'That stuff would make you very sick if you drank it') reach beyond the actual world (since it may be that no one drinks the poison) but are very factual.

Indeed, decisions about the future involve going beyond the actual world, since some facts are contingent and therefore there is more than one possible future.

Second, we may talk in approximations, and again this is not hallucination or fantasy. There may be some physical reason why a perfectly straight line is not even possible, but we can still treat approximately straight lines (i.e., lines that are straight within a certain given margin of error) as if they were perfectly straight, for a given practical purpose, where no harm will be done by approximating.
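The "straight within a given margin of error" idea can be made operational; a minimal sketch (the tolerance values and sample points are arbitrary illustrations):

```python
# Treat a sampled curve as "straight within tol" if no sample deviates from
# the chord between the endpoints by more than tol (perpendicular distance).
def straight_within(points, tol):
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    return all(abs(dy * (x - x0) - dx * (y - y0)) / length <= tol
               for (x, y) in points)

nearly_straight = [(0, 0), (1, 0.0004), (2, -0.0003), (3, 0)]
assert straight_within(nearly_straight, tol=0.001)       # straight, for this purpose
assert not straight_within(nearly_straight, tol=0.0001)  # not, under a finer standard
```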

Which gets back to Wigner's question which he raised in his essay on the unreasonable effectiveness of mathematics. Rand never did give an answer to Wigner's question. Neither has anyone else.

Ba'al Chatzaf

What is Wigner's question? It sounds like he is asking: "How is it that math can be so effective (i.e., give true and useful answers about the real world) when it has no rational justification (i.e., is not based on self-evident truths about the real world)?"

However, this is assuming that math does not have a rational justification.

I instead would ask a similar question: "How could it be true that math can be so effective (i.e., give true and useful answers about the real world) if it were true that it has no rational justification (i.e., is not based on self-evident truths about the real world)?" To put it another way, doesn't it seem extremely improbable that a math that is based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention, could produce so many results that were true--or, if you wish to deny that they are actually true, then so many results that are so useful? My answer is that it is very improbable, and that we should therefore conclude that math is not based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention.

So this, if it is an accurate description of mathematical practice, shows that mathematicians, at least some of the time, use 'axiom' and 'postulate' and 'assumption' to mean the same thing. But this involves changing the meaning of 'axiom', which did not refer to any arbitrary assumption, but to a foundational truth. And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need to hijack 'axiom' from its existing meaning (which, by the way, is still more or less used outside of math).

Also, the above passage leaves out the possibility of self-evident truths.

If all you have is arbitrary postulates and theorems deduced from them, then you have no reason to think any of them are true. Then mathematics would be worthless.

Be they called axioms or postulates, these are ground statements of a theory -assumed- for the purpose of deducing consequences. The postulates are generally not arbitrary. They are very carefully crafted to capture the intuitive idea of the mathematical object of interest. For example, the Peano Postulates encapsulate the normal understanding of what an integer is. It may surprise you to know there are non-standard interpretations of the Peano Postulates. See "Peano's Axioms and Models of Arithmetic" by Thoralf Skolem.

The postulates for vector spaces encapsulate the intuitive idea of a vector as an object having both magnitude (length) and direction. The assumptions underlying integrals are crafted to capture the idea of approximating the area under a curve with rectangles. Etc. Etc. This is hardly arbitrary.

That is good to hear: you are, in effect, saying that these postulates are definitions (or partial definitions) of terms: that which is "crafted to capture the intuitive idea" of something or "encapsulates the normal understanding" of what something is, is a definition of the term referring to that something.

And once we have these definitions, then definitional truths derived from them become self-evident: that is, they are evident through themselves, i.e., you know that they are true as soon as you understand the meanings of the words in them (and the grammar of the sentence).

But the postulates are not self evident either. For example the assumptions underlying the Riemann definition of an integral are not sufficiently general to give desired results.

By saying that they do not "give the desired results," are you saying that they do not define the term in such a way as to make it apply to all integrals?

The Riemann integral (as an operator) does not commute with limit as an operator.

What do you mean by "does not commute with limit as an operator"?

A more general definition along with additional postulates is required to define measure (a generalization of area or volume) so that the integral of the limit of a sequence of measurable functions is equal to the limit of the sequence of the integrals of each term of the sequence. And so on....
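To make the non-commutation of integral and limit concrete, here is the standard textbook counterexample (my illustration, not part of the original posts): enumerate the rationals in [0, 1] as q_1, q_2, ... and set

```latex
% Each f_n differs from zero at only finitely many points, so it is
% Riemann integrable with \int_0^1 f_n = 0.  The pointwise limit is the
% Dirichlet function, which is not Riemann integrable at all, so
% \lim_n \int f_n \neq \int \lim_n f_n (the right-hand side is undefined).
f_n(x) =
\begin{cases}
1 & x \in \{q_1, \dots, q_n\} \\
0 & \text{otherwise}
\end{cases}
\qquad\Longrightarrow\qquad
\lim_{n \to \infty} f_n = \mathbf{1}_{\mathbb{Q} \cap [0,1]}.
```

The Lebesgue integral, built on measure theory, handles the limit: the limit function has Lebesgue integral 0, agreeing with the limit of the integrals.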

So are you now discussing the attempt to come up with a definition of the term 'measure', or are you talking about making refinements to the definition of 'integral'?

The mathematics so constructed is not only not worthless, it is indispensable for doing physics.

If it is based on definitions of its terms, it is not worthless, because the definitional truths derived from such definitions will be self-evident truths, which provide a solid foundation.

The term axiom is used for traditional reasons. It is picked up from Euclid's presentation of geometry. In -Elements- by Euclid, there are axioms and postulates. An axiom is more general in its scope and meaning than a technical postulate. So Euclid's assumptions about quantities, in general, are called axioms. The special geometric assumptions are called postulates. The confusion is -yours- because you do not know the history of how the terms are used.

No confusion: I am sure that Euclid did not consider his axioms to be mere assumptions, but rather self-evident truths or derivable from them, because the notion that math, or any other field of knowledge, can be based on mere assumptions is rare before the last two centuries.

Your problem above seems to be the belief that I have been criticizing: the belief that math, because it can reach to non-actual worlds and is often accompanied by abstractions, is not based on reality, and so its basic truths are based on mere assumptions rather than definitions.

Mathematicians have long since stopped thinking about "self evident" mathematical truths. This is especially true since the discovery of non-Euclidean geometry in the late 18th and 19th centuries.

So much the worse for modern mathematicians: without self-evident truths there are no indirectly evident truths, and without those there are no evident truths at all.

Mathematics, as a deductive art, is about inferring conclusions from assumptions, not finding "self-evident" truths.

Self-evident truths are what you need to start with, not derive.

The only "self evident" truth in mathematics and logic is the law of non-contradiction. Mathematical systems must be consistent in order to be used or further developed.

I am glad that you admit that there are self-evident truths in logic and math. However, the law of non-contradiction is not the only one: there are also Aristotle's other two laws, and any definitional truth.

Perhaps it would be better if you studied the history of mathematics before using slam terms such as hijack and intentional deception. No one has hijacked anything. Mathematicians are deceiving no one.

There is nothing wrong with using slam terms if they are deserved, and they are deserved in this case.

And don't get touchy: I didn't say that all mathematicians are intentionally deceiving anyone, nor that you are.

And in any case whether it is intentional or unintentional deception is unimportant in this context (if you want to object to using "deception" for unintentional cases then use another word): the point is that misusing language, intentionally or unintentionally, leads to confusion, which opens the door to error.

And no, the history of practices in a given field will not justify bad practices. And it is a bad practice to appropriate an existing term for a new concept.

It is, in short, the result of laziness: it is much easier to appropriate an existing term than invent a new one.

Modern mathematicians are meticulous

That is like saying modern logicians are meticulous. Yes, they do try to be precise, and often succeed and do good, but they are also notorious for being overly precise and nitpicky, which has given modern logic and modern philosophy in general a mostly deserved reputation for being dry, obsessed with trivia and out of touch with the real world. Yet at the same time they have been sloppy in their analysis of some issues.

Well modern logic and modern math unfortunately grew up together, and the errors of the former have often infected the latter (such as the myth that logic and math are not about the world because they are just products of convention or because they are purely formal, or that the material conditional expresses the conditional in ordinary language), though mathematicians get away with much less because of math's heavy application to physics.

about showing the foundations and operating rules for their theories.

I am very glad to hear that you admit that mathematical truths have a foundation and that it can be known.

I thought that you would try to deny that, since you deny that math is based on self-evident truths.

Mathematics from antiquity to the present has a long, honorable and useful career and has done more for mankind than philosophy.

Insofar as that is true, it is because so much of philosophy has been bad; but good philosophy is an indispensable foundation for other fields. In particular, little or nothing could be achieved intellectually without sound logic, which is a branch of philosophy. This affects the attempt to deal with every topic, even those which, because non-quantitative, do not involve math.

You might want to read: "Mathematics: The Loss of Certainty" by Morris Kline.

Ba'al Chatzaf

I know that there has been a loss of certainty in most intellectual fields in the last two centuries, and this results from uncertainty in philosophy in the same period, going back to Hume's skeptical thinking, which appeared in modified form in the work of the Logical Positivists.

Attacking this excessive skepticism is, or should be, the main point of Peikoff's ASD and my main point in supporting it.

Hume has nothing to do with it. Hume made no contributions to geometry or to any other branch of mathematics. The pioneering work was done by Gauss and Riemann. The discovery of -consistent- non-Euclidean geometries proves that geometry does NOT rest on self-evident axioms. The only requirement is consistency. It can actually be proved that Euclidean geometry is consistent if and only if both hyperbolic and elliptical geometry are consistent. In hyperbolic geometry, given a line and a point not on the line, there exists an infinite set of lines parallel to the given line passing through the given point. In elliptical geometry (for example, the geometry of geodesics on a sphere), given a line (geodesic) and a point not on the line, there are no parallels to the given line through the given point. Now, if geometry were based on "self-evident truths" then all of them could not be logically consistent. The geometrical theories are either all consistent or all inconsistent.
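The elliptic case is easy to check numerically. The sketch below (my illustration; the posts contain no code) computes the angle sum of the geodesic triangle on the unit sphere whose vertices lie on the three coordinate axes. It comes out to 270 degrees, which no Euclidean triangle can match:

```python
import numpy as np

# Geodesic triangle on the unit sphere (a model of elliptic geometry)
# with vertices on the three coordinate axes.
A, B, C = np.eye(3)  # (1,0,0), (0,1,0), (0,0,1)

def angle_at(P, Q, R):
    """Angle at vertex P between the great-circle arcs P->Q and P->R."""
    def tangent(X):
        # Tangent at P along the great circle toward X: project X onto
        # the plane orthogonal to P, then normalize.
        t = X - np.dot(P, X) * P
        return t / np.linalg.norm(t)
    return np.degrees(np.arccos(np.clip(np.dot(tangent(Q), tangent(R)), -1, 1)))

total = angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B)
print(total)  # ~270: each angle is 90 degrees, far past the Euclidean 180
```

Each angle is 90 degrees; by Girard's theorem the excess over 180 degrees is proportional to the triangle's area on the sphere.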

Einstein's Theory of General Relativity is a geometric theory based on non-Euclidean geometry. It also applies very nicely to the physical world. Your GPS works because a geometry that contradicts Euclidean geometry is consistent and applicable.

The interesting thing is that all of the major systems of geometry, both Euclidean and non-Euclidean, have found a use in physics. The geometry of the Special Theory of Relativity is hyperbolic, not Euclidean. The geometry of the General Theory is non-Euclidean. Only the space-time manifold of a universe devoid of mass and energy would be Euclidean. Clearly that does not apply to the universe we live in. One of the reasons that Newtonian physics is false is that it is based on Euclidean space. The structure of the physical space-time manifold is not Euclidean.
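The "hyperbolic" label can be made precise with one formula (my gloss, not from the post): the invariant of special relativity is the Minkowski interval, which is preserved by Lorentz boosts, and a boost acts as a hyperbolic rotation through the rapidity \(\phi\):

```latex
% The spacetime interval is invariant under boosts, which are hyperbolic
% rotations through the rapidity \phi, where v/c = \tanh\phi:
s^2 = c^2 t^2 - x^2,
\qquad
\begin{pmatrix} ct' \\ x' \end{pmatrix}
=
\begin{pmatrix} \cosh\phi & -\sinh\phi \\ -\sinh\phi & \cosh\phi \end{pmatrix}
\begin{pmatrix} ct \\ x \end{pmatrix},
\qquad \tanh\phi = v/c .
```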

Here is a piece of advice for you. Disregard just about anything that L.P. says about mathematics. He is a mathematical ignoramus. It is no shame to be an ignoramus, since we are all born ignorant. It IS a shame to be an invincible ignoramus, and L.P. is that. He is also an ignoramus when it comes to modern physics. You know, the physics that has given us the A-bomb and the GPS system, as well as transistors and lasers. The latter is based on quantum physics, which is held by Orthodox Objectivists to be patently absurd. Why is it that philosophically absurd theories predict reality correctly to twelve decimal places? The key quality of a physical theory is not its philosophical purity, but its empirical correctness.

So this, if it is an accurate description of mathematical practice, shows that mathematicians, at least some of the time, use 'axiom' and 'postulate' and 'assumption' to mean the same thing. But this involves changing the meaning of 'axiom', which did not refer to any arbitrary assumption, but to a foundational truth. And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before.

That you don't know the meaning of mathematical terms does not mean that these are somehow secondhand and less correct than the meaning you happen to be familiar with. As Bob told you, the notion of an axiom in mathematics is nothing new.

If all you have is arbitrary postulates and theorems deduced from them, then you have no reason to think any of them are true. Then mathematics would be worthless.

You still don't get it. Arbitrary postulates and theorems deduced from them are by definition true. That implies that they don't tell us anything about the real world; in other words, these are analytic statements. That mathematical theories can be used in physical models is a completely different issue. In those models we don't test the correctness of the mathematics, but the applicability of that particular mathematical theory.

You cannot prove or disprove a mathematical postulate (or axiom). Example: Euclid's 5th postulate.

You can prove something if you can show that it is self-evident or derive it by deduction from something self-evident.

You can disprove a postulate if you can deduce from it a conclusion that itself can be disproved.

No you cannot. Can you prove Euclid's 5th postulate? No, you can't. Can you disprove Euclid's 5th postulate? No, you can't. That is why it is called a postulate (or an axiom). You are confusing a postulate with a hypothesis.

If you define ice as "Solid water that floats on water" then the definition will be inaccurate as an expression of the meaning of the word 'ice' in English, because it refers to any solid water, even those newly discovered forms that don't float.

Why would that be inaccurate? Bill Dwyer assured me that when Peikoff talked about "ice": "he's talking about normal ice, not very high density ice. Can't you see that??". So two Objectivists use a different definition of ice, one inclusive (also sinking ice is ice - implying that Peikoff's statement was incorrect), the other one exclusive (only floating ice is ice - implying Peikoff's statement was correct). This is of course just an illustration that definitions are arbitrary, not in the sense that any definition will do, but that there is not one single correct definition. Is heavy water a form of water?

In any case, neither Peikoff nor I nor anyone else has defined 'water' in this way.

You mean "ice". Peikoff was so sloppy that he didn't even give any definition of ice.

>...these beings do not exist in the physical world we should say that they don't exist at all, but we have thoughts of them...

I wrote:

>...it might be better to say these "beings" - such as perfect triangles - exist, but abstractly, and not physically.

Greg replied:

>No, this is just the sort of thing I want to avoid. I am a Conceptualist, not a Realist, on the subject of universals (and any other kind of absraction): there are no really existing abstractions outside the mind; the only abstract beings that exist are abstract thoughts in the mind.

Hi Greg

Just to get clarity on the above - by "abstract" I mean as distinct from physical i.e. non-physical. That is what you mean too, yes? So we can summarise your view as:

Non-physical things exist, but only in individual consciousnesses.

Presumably you consider that consciousness, where these non-physical entities exist, is also non-physical, i.e., distinct from the physical brain (though of course this does not mean it can exist without the physical brain; any more than, say, a human being can exist without surrounding air pressure).

The reason I ask is because I sometimes encounter some confusion in this (abstract vs physical) area that's better cleared up before moving on to other issues.

>Tell us about the 3 World cosmology.

Well, in short, if you thought dualism was bad, here comes trialism!...;-) As I am short of time, here is a good summary. Popper's Tanner lecture appended at the bottom is worth a look too if you are interested in drilling down.

. . . . Beginning in 1968 experimental counts of neutrinos reaching the earth from the sun were found to be less than half the number expected according to our understanding of the nuclear-fusion process by which they are produced in the sun. There are three types of matter neutrinos (and three types of anti-matter neutrinos, and perhaps, a seventh neutrino, called “sterile” [which might constitute the negative-pressure sea we call “dark energy”]). These are the electron-, muon-, and tau-neutrinos.

Discussion of implications of new experimental results bearing on possible seventh neutrino:

## merjet

Here is my working definition.

mathematics -- the study of number, space and structures that can be quantified, and their abstractions and generalizations.

Rand's definition -- the science of measurement -- is far too narrow. It doesn't even include arithmetic!

This is a half-truth. Topology is about both metric and non-metric spaces.

## Michael Stuart Kelly

Merlin,

Thank you for the definition. I am curious, though. How does the following definition of measurement exclude arithmetic?

Here is an online definition of arithmetic:

"The mathematics of integers, rational numbers, real numbers, or complex numbers under addition, subtraction, multiplication, and division."

Also, there were other quotes from ITOE where Rand specifically stated that math was a science of method.

I am not doggedly trying to defend Rand. I am simply trying to understand why something that makes perfect sense to me is denigrated. I have no problem disagreeing with Rand, but I have to see the logic, or at least something.

Michael

## Dragonfly

Well, of course every metric space is also a topological space, but the reverse is not true. The theory is more general than that of metric spaces; that is why it was invented in the first place.

## tjohnson

Here is Korzybski's general semantic definition of mathematics. - "Mathematics consists of limited linguistic schemes of multiordinal relations capable of exact treatment at a given date."

And for the numbers: "0 and 1 represent unique and specific symmetrical relations and all other numbers, also unique and specific asymmetrical relations."

In brief, the numbers 0 and 1 are used to represent the symmetrical relation of equality, as in: if a = b then a - b = 0 and a/b = b/b = 1.

See http://www.esgs.org/uk/art/sands-ch18.pdf for a full discussion.

## merjet

Admittedly, she did not say it was a definition. Nowhere to my knowledge did she say this or that is a definition of mathematics. Maybe it's only a description. But I think that the above is the best candidate for her definition.

How do you know it's a definition? Maybe it's only a description. Suppose you measure a stick and find it's 6 inches long. Did you do any addition, subtraction, multiplication, and division?

I'm not trying to denigrate Rand. I wouldn't expect Rand, who studied no math beyond middle-school algebra, to give a good definition.

## Michael Stuart Kelly

Merlin,

I'll go with description (or partial definition, if needed). Still, "science" seems like an awfully good genus and "measurement" an awfully good differentia.

Addition, of course. One inch and one inch and so on until you get six inches, presuming inch is the unit. If foot is the unit, then there would have to be division and subtraction. By a stretch, the six inches also could be arrived at through multiplication.

Michael

## merjet

I didn't ask you if numbers are subject to the rules of arithmetic, which is what you answered for 6. I asked if you did any addition, subtraction, multiplication, and division. Clearly you did not.

## Michael Stuart Kelly

Merlin,

I might not have done the addition, but somebody did it for me if I am using a ruler divided into inches. There is the guy who made the ruler, for instance.

Michael

## BaalChatzaf

In more general terms, 0 is the identity of a so-called additive Abelian group, and 1 is the identity of a multiplicative group in general. A number field (or division ring, as it is sometimes called) is such that its elements constitute an additive group, its non-zero elements constitute a multiplicative group, and the two groups are connected by the distributive law.

There are algebraic structures in which there are many binary operations, not just addition and multiplication. There are groups and semi-groups in which there is just one binary operation, and some of these are non-commutative. For example, the group of invertible linear operators on a vector space, where composition is the binary operation. Such groups are non-commutative, so they are not numeric. There are also the quaternions, which form a non-commutative division ring, and the octonions, which form a non-associative algebra.
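To see concretely what non-commutativity means for the quaternions, here is a short sketch (my illustration, not from the post) implementing Hamilton's product rules i² = j² = k² = ijk = −1:

```python
import numpy as np

# Quaternion multiplication, with a quaternion stored as
# [w, x, y, z] = w + x*i + y*j + z*k (Hamilton's rules).
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])

print("i*j =", qmul(i, j))  # i*j =  k, i.e. [0, 0, 0,  1]
print("j*i =", qmul(j, i))  # j*i = -k, i.e. [0, 0, 0, -1]
```

Swapping the factors flips the sign of the product, so the quaternions cannot be a commutative number system; yet every non-zero quaternion still has a multiplicative inverse, which is what makes them a division ring.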

The Count was stuck with numerical type algebras and his characterization was right as far as it went, but it was not the most general characterization of mathematics. The Count was educated in the same era as Ayn Rand and he was about 100 years behind where mathematics was.

Ba'al Chatzaf

## tjohnson

Korzybski, 1879-1950. Rand, 1905-1982. I would say that puts them a good generation apart, but what difference does that make? I don't know what you mean about being 100 years behind; he wasn't doing mathematics, he was creating general semantics.

## tjohnson

Michael,

If you read that definition of numbers I posted you may see that numbers are symbols we use to express unique, specific (exact) symmetric and asymmetric relations. If I say 1+1=2 then it's mathematics, but if I add 1 apple and 1 apple and get 2 apples then it's technically applied math now. It would be possible to teach children about group theory (as Bob so kindly introduced us to above) without ever applying it to counting, adding, etc., real objects, but it would take a radical adjustment in our education system and attitudes, since the emphasis is usually on applying math. Mathematics is not usually valued for itself; most people think "where will I ever use this?" and dismiss it as useless.

Tom

## Gregory Browne

As I said in post 550, a definition does not have to be verbal. It can be ostensive, and perhaps there are still other ways of defining a term.

Yes, beginning communication is difficult. But we can tell ourselves what we mean by a term. For example, I can use one term to refer to strawberries and another term to refer to a certain person I know, and always use the terms that way, and hope that other people know what I am referring to and start to use the word that way. But I know what I mean by those terms. And when these halting beginnings develop into a full-fledged language, then I can give myself a definition of those terms, and give it to other people.

## Gregory Browne

What we would say is that if "Ice floats on water" were true then it would be an analytic truth, but it isn't true, and therefore cannot be analytically true, by either our definition or yours.

Peikoff and I and most other people mistakenly believed "All ice floats on water" was true, and so mistakenly believed it was analytically true--by our definitions (the standard Kantian and Logical Positivist definitions). But we did not believe that it was analytically true by your definition--true by definition--because we did not and would not define ice as "Solid water that floats on water" or in any other way that would make "All ice floats on water" a definitional truth (all this is assuming that we are talking about Nominal Definitions).

## Daniel Barnes

Greg Browne:

>And changing the meanings of terms, using old terms to refer to new concepts or, in this case, to other old concepts, is a very bad habit of mathematicians and of non-mathematicians, as I have said before. It creates confusion and allows for unintentional and intentional deception. Mathematicians already had the terms 'postulate' and 'assumption', and so there was no need hijack 'axiom' from its existing meaning...

Ho, ho, yes mathematicians "hijacked" the term "axiom" so it "allows for unintentional and intentional deception."

## Gregory Browne

Hi Daniel.

The debate is over the question: Is all knowledge empirical (i.e., deriving from experience) or not?

More specifically: Is it possible to derive all of our knowledge from experience, or do we need some other source of knowledge, such as innate ideas?

I answer “Yes” to these questions: experience is the source of all of our knowledge. And when I say that it is a source, I mean that it is a database: I do not deny that we need abstraction to derive concepts from the database and logic to derive truths from the concepts and percepts. But I deny that we need an additional database.

So, as a conceptual empiricist (and I think most of us on this forum are conceptual empiricists, as are most intellectuals in the English-speaking world), I say all concepts are empirical, and, as a propositional empiricist, I say that truths derived from concepts can be empirical if the concepts are empirical.

The distinction is between concepts of things which exist in the physical world and those which do not. That is not a distinction between truths, or between empirical and non-empirical knowledge. It might be, or be close to, a distinction between perceptual knowledge and conceptual knowledge, but the latter is derived from the former.

Yes, Plato concedes a connection between abstraction and the real world—but it is the reverse of what I and other empiricists say it is! He thinks that things in the real world resemble (imperfectly) his Ideas or Forms because the Demiurge impressed these Forms on imperfect matter and made material things, whereas we empiricists say that abstractions resemble material things because we have perceived material things and then abstracted these abstract concepts (abstractions) from our percepts

No, this is just the sort of thing I want to avoid. I am a Conceptualist, not a Realist, on the subject of universals (and any other kind of abstraction): there are no really existing abstractions outside the mind; the only abstract beings that exist are abstract thoughts in the mind.

My formulation avoids implying a secondary, inferior sort of existence to shadowy beings as Meinong did (which Quine said was constructing an ontological slum). Sherlock Holmes does not exist in the mind; rather, Holmes does not exist at all, but we have ideas of Holmes in our mind.

Anselm made a famous argument for the existence of God which relied on this mistake (and others): he said that God exists at least in the mind, since we have a concept of God (even the atheist must have the concept in order to deny the existence of God). However, he would not have used this premise if he had realized that “X exists in the mind” is just a metaphor for, or casual way of saying, “We have a thought of X”.

We don’t have to idealize them if we don’t see the imperfections at first, as we wouldn’t if the objects look perfectly triangular.

And many do look perfectly triangular: many things, mostly human-made, appear to be perfectly triangular or to have other perfect geometric shapes involving perfectly straight lines, because the naked eye cannot see the imperfections: manufacturing can make lines that look so perfectly straight that it takes a refined measuring device to see the flaws. And since most of us are never going to bother making many of these measurements, let alone all of them, we will in fact experience far more triangles, etc., that look perfectly triangular than ones that do not.

Now since these objects look perfectly triangular at first inspection, we do not need to have an abstract standard of triangularity or straightness first.

Moreover, I am not sure that I am not conceding too much by saying perfect geometric shapes do not exist in the real world: is this really true on the subatomic level?

And, now that I think of it, what about 3 intersecting light beams?

Tell us about the 3 World cosmology.

## Link to comment

## Share on other sites

## Gregory Browne

I never said that all definitions are ostensive, which would be no more correct than the common view that all definitions are verbal.

And even among the verbal ones not all are of the classic genus-and-differentia form or any other description-like form: some are contextual definitions, showing how to use a word in a sentence.

If the original definitions of 'point' and 'line' were ostensive, they were probably something like this:

people defined 'point' by pointing to the tips of spears, arrows, knives, etc., and people defined 'line' by pointing to stripes on striped things or to rows of things.

Now this is what geometry books tell us.

But note that even this is somewhat abusing ordinary language. Platonistically-minded mathematicians told us that what we think are points or lines are not "real" points and lines, which do not exist in this world but in some other world. And so they changed the meanings of the words, which referred to things in the real world.

By itself it was a small change, but it paved the way for worse things.

In the late 19th and early 20th centuries mathematicians, or some of them, defined 'curve' in such a way as to apply to all lines, straight or curved, and defined 'line' in such a way as to apply only to infinite straight lines. These definitions eventually reached even 3rd-grade math books in the 1960s, which told us third-graders that these words didn't mean what we thought they meant.

Later, in the 1970s, I found a textbook my father had used in a math or engineering class in trade school in the 1930s, and found definitions of geometrical terms more in conformity with ordinary language. My gut reaction was that this older book was unsophisticated, not knowing the superior definitions I learned in school. But I soon saw how wrong this was: the mathematicians who originated the definitions in my textbook had simply changed the meanings of words to suit their purposes.

But this is wrong: neither mathematicians nor anyone else has a right simply to change the meaning of a term. Again, as I have said before, when one comes up with a new concept one should not be lazy and take an already-existing term with an already-existing meaning and re-define it; instead, invent a new term (a new word or a new combination of existing terms).

See my post 617.

There is not a consensus in favor of Platonism among either mathematicians or philosophers of mathematics.

Of course, you are probably saying that, being closet Platonists, that they are Platonists whether they admit it or not (or even whether they realize it or not), because points and lines do not exist in the real world. So again see post 617.

Again see post 617

See post 617.

But what you say again reminds me of the case of a body which is not acted on by any forces (which Newton's First Axiom talks about), the concept of which is also an idealization. If the fact that the concepts of geometric figures are idealizations proved that the statements of geometry are not factual or not empirical, then it would prove that Newton's First Axiom, whose subject-concept is an idealization, is not factual and not empirical; but it does not. I haven't heard anyone deal with this challenge.

This is, at best, true only metaphorically, not literally: taken literally it stretches the meaning of 'hallucination' and 'fantasy' way out of shape.

First, we can extrapolate from our knowledge of the actual world to other possible worlds, but this is not hallucination or fantasy, nor do the truths cease to be factual: for example, dispositional statements (such as 'That stuff is poison', which means 'That stuff would make you very sick if you drank it') reach beyond the actual world (since it may be that no one drinks the poison) but are very factual.

Indeed, decisions about the future involve going beyond the actual world, since some facts are contingent and therefore there is more than one possible future.

Second, we may talk in approximations, and again this is not hallucination or fantasy. There may be some physical reason why a perfectly straight line is not even possible, but we can still treat approximately straight lines (i.e., lines that are straight within a certain given margin of error) as if they were perfectly straight, for a given practical purpose, where no harm will be done by approximating.
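This treat-as-straight-within-a-margin-of-error idea can be sketched in code. The helper below is hypothetical, my own illustration rather than anything from the thread: it declares a polyline "straight" whenever every interior point lies within a chosen tolerance of the chord joining its endpoints.

```python
import math

def straight_within(points, tol):
    """Treat a polyline as straight if every interior point lies within
    tol (a chosen margin of error) of the chord joining its endpoints.
    Assumes the endpoints are distinct."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = math.hypot(dx, dy)
    # perpendicular distance from (x, y) to the line through the endpoints
    return all(abs((x - x0) * dy - (y - y0) * dx) / chord <= tol
               for x, y in points[1:-1])

# A line with a 0.001 wobble counts as straight at tolerance 0.01,
# but not at the tighter tolerance 0.0001.
print(straight_within([(0, 0), (1, 0.001), (2, 0)], tol=0.01))    # True
print(straight_within([(0, 0), (1, 0.001), (2, 0)], tol=0.0001))  # False
```

The point of the sketch is that "straight" here is relative to a stated purpose: nothing is asserted beyond the tolerance, so no harm is done by the approximation.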

What is Wigner's question? It sounds like he is asking "How is it that math can be so effective (i.e., give true and useful answers about the real world) when it has no rational justification (i.e., is not based on self-evident truths about the real world)?"

However, this is assuming that math does not have a rational justification.

I instead would ask a similar question: "How could it be true that math can be so effective (i.e., give true and useful answers about the real world) if it were true that it has no rational justification (i.e., is not based on self-evident truths about the real world)?" To put it another way, doesn't it seem extremely improbable that a math based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention, could produce so many results that were true--or, if you wish to deny that they are actually true, then so many results that are so useful? My answer is that it is very improbable, and that we should therefore conclude that math is not based on hallucination or fantasy, or on any kind of arbitrary assumption, or on any kind of convention.

## Gregory Browne

That is good to hear: you are, in effect, saying that these postulates are definitions (or partial definitions) of terms: that which is "crafted to capture the intuitive idea" of something or "encapsulates the normal understanding" of what something is, is a definition of the term referring to that something.

And once we have these definitions, the definitional truths derived from them become self-evident: that is, they are evident through themselves--i.e., you know that they are true as soon as you understand the meanings of the words in them (and the grammar of the sentence).

By saying that they do not "give the desired results", are you saying that they do not define the term in such a way as to make it apply to all integrals?

What do you mean by "does not commute with limit as an operator"?

So are you now discussing the attempt to come up with a definition of the term 'measure', or are you talking about making refinements to the definition of 'integral'?

If it is based on definitions of its terms, it is not worthless, because the definitional truths derived from such definitions will be self-evident truths, which provide a solid foundation.

No confusion: I am sure that Euclid did not consider his axioms to be mere assumptions, but rather self-evident truths or derivable from them, because the notion that math, or any other field of knowledge, can be based on mere assumptions is rare before the last two centuries.

Your problem above seems to be the belief that I have been criticizing: the belief that math, because it can reach to non-actual worlds and is often accompanied by abstractions, is not based on reality, and so its basic truths are based on mere assumptions rather than definitions.

So much the worse for modern mathematicians: without self-evident truths there are no indirectly evident truths, and without those there are no evident truths at all.

Self-evident truths are what you need to start with, not derive.

I am glad that you admit that there are self-evident truths in logic and math. However, the law of non-contradiction is not the only one: there are also Aristotle's other two laws, and any definitional truth.

There is nothing wrong with using slam terms if they are deserved, and they are deserved in this case.

And don't get touchy: I didn't say that all mathematicians are intentionally deceiving anyone, nor that you are.

And in any case whether it is intentional or unintentional deception is unimportant in this context (if you want to object to using "deception" for unintentional cases then use another word): the point is that misusing language, intentionally or unintentionally, leads to confusion, which opens the door to error.

And no, the history of practices in a given field will not justify bad practices. And it is a bad practice to appropriate an existing term for a new concept.

It is, in short, the result of laziness: it is much easier to appropriate an existing term than invent a new one.

That is like saying modern logicians are meticulous. Yes, they do try to be precise, and often succeed and do good, but they are also notorious for being overly precise and nitpicky, which has given modern logic and modern philosophy in general a mostly deserved reputation for being dry, obsessed with trivia and out of touch with the real world. Yet at the same time they have been sloppy in their analysis of some issues.

Well modern logic and modern math unfortunately grew up together, and the errors of the former have often infected the latter (such as the myth that logic and math are not about the world because they are just products of convention or because they are purely formal, or that the material conditional expresses the conditional in ordinary language), though mathematicians get away with much less because of math's heavy application to physics.

I am very glad to hear that you admit that mathematical truths have a foundation and that it can be known.

I thought that you would try to deny that, since you deny that math is based on self-evident truths.

Insofar as that is true, it is because so much of philosophy has been bad, but good philosophy is an indispensable foundation for other fields. In particular, little or nothing could be achieved intellectually without sound logic, which is a branch of philosophy. This affects the attempt to deal with every topic, even those which, because non-quantitative, do not involve math.

I know that there has been a loss of certainty in most intellectual fields in the last two centuries, and this results from uncertainty in philosophy in the same period, going back to Hume's skeptical thinking, which appeared in modified form in the work of the Logical Positivists.

Attacking this excessive skepticism is, or should be, the main point of Peikoff's ASD and my main point in supporting it.


## BaalChatzaf

Hume has nothing to do with it. Hume made no contributions to geometry or to any other branch of mathematics. The pioneering work was done by Gauss and Riemann. The discovery of -consistent- non-Euclidean geometries proves that geometry does NOT rest on self-evident axioms. The only requirement is consistency. It can actually be proved that Euclidean geometry is consistent if and only if both hyperbolic and elliptical geometry are consistent. In hyperbolic geometry, given a line and a point not on the line, there exists an infinite set of lines parallel to the given line passing through the given point. In elliptical geometry (for example the geometry of geodesics on a sphere), given a line (geodesic) and a point not on the line, there are no parallels to the given line through the given point. Now, if geometry were based on "self-evident truths" then they could not all be logically consistent. The geometrical theories are either all consistent or all inconsistent.
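The elliptical case can be checked numerically. The sketch below (my own illustration; the function names are made up) builds the "octant" geodesic triangle on a unit sphere, with vertices at the north pole and at two equator points 90 degrees apart, and shows its angle sum exceeding the Euclidean 180 degrees:

```python
import math

def angle_at(a, b, c):
    """Spherical angle at unit vector a between the great-circle arcs
    a-b and a-c, via tangent vectors in the plane orthogonal to a."""
    def tangent(p, q):
        # component of q orthogonal to p, normalized
        dot = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - dot * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(ti * ti for ti in t))
        return [ti / n for ti in t]
    u, v = tangent(a, b), tangent(a, c)
    cos_angle = sum(ui * vi for ui, vi in zip(u, v))
    return math.acos(max(-1.0, min(1.0, cos_angle)))

# Octant triangle: north pole plus two equator points 90 degrees apart.
A = (0.0, 0.0, 1.0)
B = (1.0, 0.0, 0.0)
C = (0.0, 1.0, 0.0)

total = angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B)
print(math.degrees(total))  # about 270: three right angles, not 180
```

Each of the three angles is a right angle, so the sum is 270 degrees, which is exactly the kind of result that is impossible in Euclidean geometry yet perfectly consistent on the sphere.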

Einstein's Theory of General Relativity is a geometric theory based on non-Euclidean geometry. It also applies very nicely to the physical world. Your GPS works because a geometry that contradicts Euclidean geometry is consistent and applicable.

The interesting thing is that all of the major systems of geometry, both Euclidean and non-Euclidean, have found a use in physics. The geometry of the Special Theory of Relativity is hyperbolic, not Euclidean. The geometry of the General Theory is non-Euclidean. Only the space-time manifold of a universe devoid of mass and energy would be Euclidean. Clearly that does not apply to the universe we live in. One of the reasons that Newtonian physics is false is that it is based on Euclidean space. The structure of the physical space-time manifold is not Euclidean.
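The contrast being drawn can be put in standard formulas (textbook material, not anything specific to this thread): Euclidean space is governed by the Pythagorean distance, while special relativity replaces it with the Minkowski interval, whose minus signs are what make the geometry hyperbolic rather than Euclidean.

```latex
% Euclidean distance between two points of space:
d^2 = \Delta x^2 + \Delta y^2 + \Delta z^2
% Minkowski interval between two events (signature +,-,-,-):
s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2
```

Rotations preserve $d^2$; Lorentz transformations preserve $s^2$. Swapping the invariant quantity is exactly swapping the geometry.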

Here is a piece of advice for you. Disregard just about anything that L.P. says about mathematics. He is a mathematical ignoramus. It is no shame to be an ignoramus, since we are all born ignorant. It IS a shame to be an invincible ignoramus, and L.P. is that. He is also an ignoramus when it comes to modern physics. You know, the physics that has given us the A-bomb and the GPS system, as well as transistors and lasers. The latter is based on quantum physics, which is held by Orthodox Objectivists to be patently absurd. Why is it that philosophically absurd theories predict reality correctly to twelve decimal places? The key quality of a physical theory is not its philosophical purity, but its empirical correctness.

Ba'al Chatzaf


## Dragonfly

That you don't know the meaning of mathematical terms does not mean that these are somehow secondhand and less correct than the meanings you happen to be familiar with. As Bob told you, the notion of an axiom in mathematics is nothing new.

You still don't get it. Arbitrary postulates and theorems deduced from them are by definition true. That implies that they don't tell us anything about the real world; in other words, these are analytic statements. That mathematical theories can be used in physical models is a completely different issue. In those models we don't test the correctness of the mathematics, but the applicability of that particular mathematical theory.

No you cannot. Can you prove Euclid's 5th postulate? No, you can't. Can you disprove Euclid's 5th postulate? No, you can't. That is why it is called a postulate (or an axiom). You are confusing a postulate with a hypothesis.

Why would that be inaccurate? Bill Dwyer assured me that when Peikoff talked about "ice", "he's talking about normal ice, not very high density ice. Can't you see that??" So two Objectivists use different definitions of ice, one inclusive (sinking ice is also ice - implying that Peikoff's statement was incorrect), the other exclusive (only floating ice is ice - implying that Peikoff's statement was correct). This is of course just an illustration that definitions are arbitrary, not in the sense that any definition will do, but in the sense that there is not one single correct definition. Is heavy water a form of water?

You mean "ice". Peikoff was so sloppy that he didn't even give any definition of ice.

## Brant Gaede

Fire or ice: how the world might end, but not this argument.

--Brant


## Daniel Barnes

Greg wrote:

>...these beings do not exist in the physical world we should say that they don't exist at all, but we have thoughts of them...

I wrote:

>...it might be better to say these "beings" - such as perfect triangles - exist, but abstractly, and not physically.

Greg replied:

>No, this is just the sort of thing I want to avoid. I am a Conceptualist, not a Realist, on the subject of universals (and any other kind of abstraction): there are no really existing abstractions outside the mind; the only abstract beings that exist are abstract thoughts in the mind.

Hi Greg

Just to get clarity on the above - by "abstract" I mean as distinct from physical, i.e. non-physical. That is what you mean too, yes? So we can summarise your view as: non-physical things exist, but only in individual consciousnesses. Presumably you consider that consciousness, where these non-physical entities exist, is also non-physical, i.e. distinct from the physical brain (though of course this does not mean it can exist without the physical brain, any more than, say, a human being can exist without surrounding air pressure).

The reason I ask is because I sometimes encounter some confusion in this (abstract vs physical) area that's better cleared up before moving on to other issues.

>Tell us about the 3 World cosmology.

Well, in short, if you thought dualism was bad, here comes trialism!...;-) As I am short of time, here is a good summary. Popper's Tanner lecture appended at the bottom is worth a look too if you are interested in drilling down.

http://en.wikipedia.org/wiki/Popperian_cosmology


## BaalChatzaf

I read it. It sounds vaguely Kantian.

Ba'al Chatzaf


## Guyau

Sidebar: Sterile Neutrino Update

Discussion of implications of new experimental results bearing on possible seventh neutrino:

http://resonaances.blogspot.com/2007/04/after-miniboone.html

