Dragonfly 6 Posted January 8, 2010

The most remarkable thing about the new record is that it was achieved on an ordinary desktop computer, not on a supercomputer. See here.

BaalChatzaf 60 Posted January 8, 2010

> The most remarkable thing about the new record is that it was achieved on an ordinary desktop computer, not on a supercomputer. See here.

Mon Dieu! Some people have too much time on their hands. Anything past ten places is an exercise in wretched excess. A more challenging problem is to find an algorithm that produces the n-th digit of the decimal expansion without computing the prior n-1 digits.

Ba'al Chatzaf
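Such an algorithm does in fact exist, at least in base 16: the Bailey–Borwein–Plouffe (BBP) formula, published in 1995, yields the n-th hexadecimal digit of pi without generating the earlier ones. A minimal sketch in Python (the helper names are mine; double-precision round-off limits this toy version to modest n, not the trillions the record computations reach):

```python
def _frac_series(j, d):
    """Fractional part of sum_k 16**(d-k) / (8k + j), using 3-argument pow
    (modular exponentiation) so the huge powers of 16 are never formed."""
    s = 0.0
    for k in range(d + 1):
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    # tail k > d: terms shrink geometrically, stop below double precision
    k = d + 1
    while True:
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n = 1 gives the '2' of 3.243f...)."""
    d = n - 1
    x = (4 * _frac_series(1, d) - 2 * _frac_series(4, d)
         - _frac_series(5, d) - _frac_series(6, d)) % 1.0
    return "0123456789abcdef"[int(16 * x)]
```

Pi in hex begins 3.243f6a8885a..., so `pi_hex_digit(1)` through `pi_hex_digit(8)` spell out 243f6a88. No comparably cheap digit-extraction formula is known for base 10, which is part of why the question is a good one.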

Selene 7 Posted January 8, 2010

> The most remarkable thing about the new record is that it was achieved on an ordinary desktop computer, not on a supercomputer. See here.

WOW! Nice find, DG.

Geez, Robert, you would throw cold water on O'Biwan the Diminished's impeachment and conviction, other than the dumbest Vice President taking over!

Adam

BaalChatzaf 60 Posted January 8, 2010 (edited)

> The most remarkable thing about the new record is that it was achieved on an ordinary desktop computer, not on a supercomputer. See here.

What would have been more interesting is a parallel algorithm permitting several networked desktop or laptop computers to run collateral computation streams. I am sure the French lad did a creditable job of programming and of retaining precision throughout the computation, but once there is a formula in the form of an infinite series or product for pi (or for a simple function of pi), there is nothing gee-whiz left except parallel computation, or computation of the n-th digit of the decimal expansion of pi.

Ba'al Chatzaf

Edited January 9, 2010 by BaalChatzaf
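A series for pi is embarrassingly parallel in exactly this sense: each worker sums a disjoint block of terms independently and the partial sums are simply added at the end. A hedged sketch using the slowly converging Leibniz series (`partial_sum` and `parallel_pi` are illustrative names of mine, not anything from the record computation, which used far better machinery):

```python
import math
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum one block of the Leibniz series 4 * sum_k (-1)**k / (2k + 1)."""
    start, stop = bounds
    sign = 1.0 if start % 2 == 0 else -1.0
    total = 0.0
    for k in range(start, stop):
        total += sign * 4.0 / (2 * k + 1)
        sign = -sign
    return total

def parallel_pi(n_terms, n_workers=4):
    """Split the term range into disjoint blocks and sum each in its own process."""
    step = n_terms // n_workers
    blocks = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, blocks))
```

The blocks share no state, so the same decomposition works across networked machines just as well as across local cores; the Leibniz series itself gains only about one digit per tenfold increase in terms, so it is strictly a toy here.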

syrakusos 1 Posted January 9, 2010

> The most remarkable thing about the new record is that it was achieved on an ordinary desktop computer, not on a supercomputer. See here.

Thanks! Interesting as that is... how is it validated? What algorithm did he use to generate pi? I suppose you could "proofread" it electronically, comparing his files up to the 2.577-trillionth place against the Japanese result. That would be a start.

Close to completing a master's in social science, I am taking an undergrad class in "Ethics in Physics" in order to frame investigations into white-collar crime in science. Faking your results is fraud; that is one crime. Last summer I had a grad class in "Wrongful Convictions," and junk science and fraudulent data are both common elements of such travesties.

Anyway, it's still interesting. Nice to know what can be done on a simple machine powered by a human mind.

BaalChatzaf 60 Posted January 9, 2010

> Thanks! Interesting as that is... how is it validated?

That is an excellent question. Here is a basic fact: the problem of determining whether two algorithms are equivalent is recursively unsolvable. It is straightforward to determine that two algorithms are not equivalent, for at some stage their outputs differ. But determining, from the definitions of the algorithms themselves, that the output sequences are equal at every stage (even when there is no stopping) cannot be done by finite means. So the answer to your question is that there is not, nor can there be, an absolute proof that this algorithm is correct. All one can say is that this algorithm and some other algorithm produce the same expansion of pi up to the N-th place for some (presumably very large) N. The equivalence problem is Turing-unsolvable, as is the halting problem.

Ba'al Chatzaf
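Comparing prefixes from independent formulas is exactly what such checks amount to in practice. A sketch under stated assumptions (function names are mine): two classical arctangent identities for pi, Machin's 16 arctan(1/5) - 4 arctan(1/239) and the identity pi/4 = arctan(1/2) + arctan(1/3), evaluated in exact integer arithmetic, agree digit for digit as far as we care to look, which confirms agreement to N places and nothing more:

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 10**prec, via its Taylor series in exact integers."""
    term = total = 10 ** prec // x
    x2, n, sign = x * x, 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_machin(digits):
    p = digits + 10  # guard digits absorb the truncation error of each // n
    return str(16 * arctan_inv(5, p) - 4 * arctan_inv(239, p))[:digits]

def pi_two_arctans(digits):
    # pi/4 = arctan(1/2) + arctan(1/3)
    p = digits + 10
    return str(4 * (arctan_inv(2, p) + arctan_inv(3, p)))[:digits]
```

That the two prefixes match for 50 digits (or 10^40) is evidence, not proof, of equivalence; a single mismatch, by contrast, would be a conclusive refutation.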

tjohnson 0 Posted January 9, 2010

I get it and I don't have any friends.

Selene 7 Posted January 9, 2010

Damn, GS. And here I thought...

Adam

(tearing up the happy birthday card he was going to send to GS on Korzybski's birthday!)

Dragonfly 6 Posted January 9, 2010 (Author)

The correctness of an algorithm can be proved analytically. If two algorithms are both proved correct, they must give the same results (though not necessarily equally fast). A simple example is the formula for calculating pi from the series expansion of the arctan function (which converges very slowly, its partial sums oscillating around the limit value) versus the formula obtained by summing that series two terms at a time (which converges monotonically, without the oscillation). Of course you still have to check that the programming of the calculation is done correctly, but there is no doubt about the algorithms themselves.
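The two behaviors are easy to exhibit side by side. A small sketch (assumptions: the Leibniz form 4 arctan(1) of the arctan series, with function names of my own choosing); pairing consecutive terms turns the alternating series into a sum of positive terms 8/((4k+1)(4k+3)), so the partial sums climb toward pi from below instead of straddling it:

```python
import math

def leibniz_partials(n):
    """Partial sums of 4 * sum_k (-1)**k / (2k + 1); they alternate above
    and below pi."""
    s, out = 0.0, []
    for k in range(n):
        s += (-1.0) ** k * 4.0 / (2 * k + 1)
        out.append(s)
    return out

def paired_partials(n_pairs):
    """Same series with consecutive terms combined:
    4/(4k+1) - 4/(4k+3) = 8 / ((4k+1)(4k+3)).
    Every combined term is positive, so convergence is monotone from below."""
    s, out = 0.0, []
    for k in range(n_pairs):
        s += 8.0 / ((4 * k + 1) * (4 * k + 3))
        out.append(s)
    return out
```

`leibniz_partials(4)` swings 4.0, 2.67, 3.47, 2.90 around pi, while `paired_partials` walks through the even-indexed partial sums 2.67, 2.90, 2.98, ... monotonically; the regrouping removes the oscillation even though both sequences pass through the same values.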

BaalChatzaf 60 Posted January 9, 2010

> The correctness of an algorithm can be proved analytically. If two algorithms are both proved correct, they must give the same results...

This is not as straightforward as you think. The first tier is to determine just how much a proof of a theorem really is a proof. Proofs readable by human beings are filled with phrases such as "it is clear that...", "it is easy to see...", "a trivial calculation shows...", and the like. In short, proofs as they are written in the journals are never full-bore proofs as defined in formal systems of logic such as first-order predicate logic. A genuinely formal proof of even a theorem with a "proof" of moderate length, such as the Heine-Borel compactness theorem or the standard convergence proofs for various infinite series and products, would exceed the length of Russell and Whitehead's three-volume work entitled -Principia Mathematica- (not to be confused with Newton's work of similar title).

Then there is the matter of showing that a computer-oriented algorithm using a finite number of bits to represent real numbers (usually structured in exponent-mantissa form) actually maintains precision through all the zillions of steps necessary to produce a partial expansion a jillion digits long. And there is the matter of how an algorithm is actually written. Very few algorithms are written directly in machine language; some high-level language such as C, C++, Java, or Fortran is used. But that means the program, which presumably represents some abstract algorithm distilled from an analytical expression, must have been compiled correctly for the machine on which it is run. There is, in general, no universal algorithm for telling when two machine algorithms are equivalent. Only particular cases can be dealt with, usually by writing two versions and seeing whether their outputs agree for some finite number of steps. But this does not prove that their outputs would be the same if the algorithms produced an infinite string of digits, which is precisely what is needed for pi, a number not only irrational but transcendental. At best one could show that algorithm A produces the same string as algorithm B for, say, 10^40 characters; this still does not prove that the two are equivalent. So the problem of correct compilation is rather up in the air. The best we can do is achieve a warm fuzzy feeling that the compilations are correct. There is no absolute proof.

Here is a "dirty little secret": what passes for a proof in very complicated cases is really the decision of a committee or jury of leading professional mathematicians. Consider the proof of Burnside's conjecture by Feit and Thompson back in the 1960s, which ran to roughly 250 pages and filled an entire issue of the Pacific Journal of Mathematics. That proof was vetted by several committees, and after several months they declared that the "proof" (which is not really a formal proof, but a readable shorthand indicating how a formal proof could be written) could not be faulted. So much for "absolute" mathematical proof: it exists in theory, and sometimes in actuality, but not always.

Or consider the case of Andrew Wiles's proof of Fermat's Last Theorem, presented publicly in 1993. A committee of top number theorists vetted the first version and found a logical hole (the mathematical equivalent of a bug). Wiles closed that hole, and the committees, after several months of very careful checking, declared the "proof" (again, not really a formal proof) to be kosher, and the problem was declared solved. Or consider the resolution of the four-color problem by Haken and Appel. Their proof used as an integral part of its structure a computer program that broke the general four-color conjecture into what was claimed to be on the order of 1,500 mutually exclusive and exhaustive configurations, each of which was checked by machine to show that four colors suffice. After double-checking with alternative programs on several operating systems and getting the same decomposition of the problem, the juries declared the proof sound and valid. Many mathematicians grumbled about using a computer program as part of a proof of an abstract mathematical proposition, but that is just the old guard grumbling about new-fangled ways. As we have seen, even classically acceptable proofs have a region of doubt, since they are not full-bore proofs, and there may be a logical hole hiding between "it is clear that..." and "it can be trivially shown that...". Und so weiter.

Ba'al Chatzaf

Greybird 2 Posted January 9, 2010 (edited)

It's all spectacularly pointless, o'course, except in how it gets new algorithms into "production," tweaks math students' imaginations, and other such indirect effects.

Accurate calculation of pi to a mere twelve places (3.141592653589, and I just typed that from memory and without straining) took over 2000 years to achieve. And it's sufficient to calculate the circumference of the Earth to within a few microns.

Anything more is practical overkill, even if the next nuclear war (the first ended in August 1945) destroys all calculators with EMP blasts. (I still own a top-flight LogLog slide rule ... heh heh heh.)

What gripes me is the mathematical ignorance evinced by The Independent, or at least its Web-makeup editors. No one will ever find "the precise value of pi," not in the sense they're talking about; no such value exists. That's part of the point: pi cannot be expressed as the root of a polynomial equation with rational coefficients, and its decimal expansion is infinite and non-repeating.

Yes, that "transcendental" nature is acknowledged towards the end of the piece, but the headline is eternally and needlessly misleading.

Edited January 9, 2010 by Greybird

Dragonfly 6 Posted January 9, 2010 (Author)

The convergence formulas for calculating pi are of course very much simpler than the proof of the four-color theorem or Wiles's proof of Fermat's theorem; it's like comparing a match to an atomic bomb. It is not entirely implausible that something overlooked in proofs as complex as the latter could invalidate them. But the common derivations of the formulas for the expansion of pi are good enough for me. People who like to write them out in the most complete logical form are of course free to do so, but I don't think they'll tell us anything new. You don't have to prove every time that 2+2=4.

BaalChatzaf 60 Posted January 9, 2010 (edited)

> The convergence formulas for calculating pi are of course very much simpler than the proof of the four-color theorem or Wiles's proof of Fermat's theorem... You don't have to prove every time that 2+2=4.

That is true. But when you program a numerical method on a computer to compute the limit of a series, you need to show that your program will maintain precision. Why? Because a computer does not really operate on real numbers. It operates on floating-point numbers, which have a limited dynamic range under the computer analogs of the arithmetic operations. So it must be shown that the algorithm as actually programmed retains sufficient precision for the final result to be meaningful. Now here is the rub: the so-called proof of sufficient precision is not really a formal proof. It turns out that for most of the classical numerical methods we have confidence in our approximate proof methods and the logical complexities are not deep (as you point out), but the problem is still there.

Stating it again: computers do not really do arithmetic operations on real numbers. They do an analog of arithmetic on floating-point numbers, and it is necessary to show that the analogy is faithful. With floating-point numbers, subtraction of nearly equal quantities whose difference is later used in a division can cause a significant loss of precision, which is then propagated by addition and multiplication.

Please see -Rounding Errors in Algebraic Processes- by J. H. Wilkinson.

See also http://cauchy.math.o...98/4513-l04.pdf and pay close attention to the Loss of Precision Theorem.

See also http://docs.sun.com/source/806-3568/ncg_goldberg.html

Ba'al Chatzaf

Edited January 9, 2010 by BaalChatzaf
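The cancellation symptom is easy to demonstrate. A small illustration of my own (not taken from the linked notes): computing (1 - cos x)/x^2, whose true value tends to 1/2 as x approaches 0, directly versus via the identity 1 - cos x = 2 sin^2(x/2):

```python
import math

def curvature_naive(x):
    """(1 - cos x) / x**2 computed directly; for tiny x, cos x is so close
    to 1 that the subtraction cancels away every significant digit."""
    return (1.0 - math.cos(x)) / (x * x)

def curvature_stable(x):
    """Same quantity via 1 - cos x = 2 * sin(x/2)**2; there is no subtraction
    of nearly equal values, so full precision survives."""
    s = math.sin(0.5 * x)
    return 2.0 * s * s / (x * x)
```

At x = 1e-8 the naive form loses essentially all accuracy (on typical platforms cos x rounds to exactly 1.0 and it returns 0.0), while the stable form stays at 0.5 to full precision. This is precisely the loss-of-precision phenomenon the lecture notes formalize.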

BaalChatzaf 60 Posted January 9, 2010

> No one will ever find "the precise value of pi." Not in the sense they're talking about. No such value exists...

Well, not quite. We know mathematically that pi/4 = arctan(1), and arctan has a Taylor series expansion, so the quantity does exist as the limit of a series. Pi exists mathematically; we simply have no way of writing it out in its entirety as a decimal expansion (or as an expansion in any other integer base).

Ba'al Chatzaf

BaalChatzaf 60 Posted January 9, 2010

You guys will love this: floating-point error analysis based on interval arithmetic. Please see http://chenlab.ece.cornell.edu/Publication/Fang/icassp2003_fang.pdf

There is at least a three-to-one improvement on error bounds using interval arithmetic rather than the standard floating-point analog of ordinary arithmetic. Ye gods! I think I have discovered a New Continent.

Ba'al Chatzaf
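The core idea is easy to sketch: carry a [lo, hi] pair of bounds instead of a single float, and after each operation nudge the endpoints outward by one unit in the last place, so that the true real-valued result is always enclosed. A toy version of my own (not the paper's implementation; it widens more aggressively than a production library with directed rounding would, and it needs Python 3.9+ for math.nextafter):

```python
import math

class Interval:
    """Toy interval arithmetic: each operation widens its endpoints outward
    by one ulp, so the exact real result always lies inside [lo, hi]."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = hi if hi is not None else lo

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __sub__(self, other):
        return Interval(math.nextafter(self.lo - other.hi, -math.inf),
                        math.nextafter(self.hi - other.lo, math.inf))

    def __mul__(self, other):
        # all endpoint products, to handle negative bounds correctly
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(ps), -math.inf),
                        math.nextafter(max(ps), math.inf))

    def width(self):
        return self.hi - self.lo

    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

Adding Interval(0.1) ten times produces an interval only a handful of ulps wide that provably brackets the exact sum, so instead of a point value of unknown quality you get a guaranteed (if slightly pessimistic) enclosure; the paper's contribution is showing that such enclosures can be made markedly tighter than classical worst-case bounds.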

Dragonfly 6 Posted January 9, 2010 (Author)

> That is true. But when you program a numerical method on a computer to compute the limit of a series, you need to show that your program will maintain precision...

I know; that was drilled into me some 40 years ago by Prof. Van der Sluis, one of the pioneers in numerical mathematics, and in particular in the theory of rounding errors and the pitfalls they represent. You also had to do practical (and instructive) exercises on the computer, at that time a big one, with paper tape and punch cards as input devices and a line printer with large sheets of paper as the output device. Writing different programs for calculating the decimal expansion of π was one of the first exercises.

This turned out to be very useful knowledge when I later wrote programs for calculations in group theory (SU6 → SU3, manipulations with Young tableaux). One of those programs ran for several days on the mainframe of the time, and I had to devise several checks to test whether the results were correct.

tjohnson 0 Posted January 9, 2010

> ...at that time a big one, with paper tape and punch cards as input devices and a line printer with large sheets of paper as the output device.

Ah, those were the days! Standing in line at the card reader with your stack of punch cards until it was your turn to run them through, then going to the printer room to try to find your output.

BaalChatzaf 60 Posted January 9, 2010

> Ah, those were the days! Standing in line at the card reader with your stack of punch cards until it was your turn to run them through...

Indeed. For the majority of young folk, "026" and "029" do not ring a bell, and what is a plugboard? Batch processing is nearly 50 years out of date. I did the following experiment about 15 years ago: I dug up my K&E multi-scale slide rule from the storage bin and showed it to co-workers who were (at that time) thirty-five years old or younger. I asked, "Do you know what this is?" Half of them had no idea, and the other half said, "Yes, it is a slide rule. My dad had one of those."

Ba'al Chatzaf

BaalChatzaf 60 Posted January 9, 2010

Mr. Spock (the Vulcan, son of Sarek) would have loved this.

Ba'al Chatzaf

syrakusos 1 Posted January 9, 2010

> I did the following experiment about 15 years ago. I dug up my K&E multi-scale slide rule from the storage bin and showed it to co-workers...

When K&E closed out their warehouse, they took requests for a free slide rule. I got one. I have five or six. I carry one in my flight bag: it works without a battery; you can use it as a straight edge; God forbid you should need it as a splint...

But this is starting to remind me of the Dilbert cartoon where the guys are complaining about Windows and GUIs... "In my day..." "Well, I once wrote an entire operating system in zeroes and ones..." "You had zeroes?"
