I realise at this late date that I have been using the word "algorithm" without defining it. That is an oversight on my part. A thing is an "algorithm" if a Turing machine can do it, and that's all. Proving the equivalence of Lisp programs, Turing machines, and ordinary electronic computers is the work of undergraduate computer science.
Ylis wrote:
The probability that a computer's LISP interpreter or JVM has a bug is non-zero.
Granted. In many cases, indeed, that probability is close to one. But in contrast to what you say ...
Quote:
We have a non-zero probability that the same computer program will return different results on different computers, so the result isn't an absolute reality. It's a perceived reality, based on the integrity of the language interpreter.
... irrespective of a computer's ability to execute any given program, it remains the case that if program P can be proven to terminate on input I with output O, then P(I) is O. If a computer C, running a program which purports to be P, produces some output Q != O when given input I, then computer C is faulty. That failure has no bearing on the success or failure of the language in which P is written.
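To make this concrete, here is a sketch in Common Lisp; the function and the particular input are my own illustration, not anything under dispute:

```lisp
;; FACT can be proven, by induction on N, to terminate on every
;; natural number with output N!.  So (fact 5) *is* 120.  Any machine
;; that claims to run this program on input 5 and prints anything
;; else is a faulty machine; the program's meaning is untouched.
(defun fact (n)
  (if (zerop n)
      1
      (* n (fact (1- n)))))

(fact 5) ; => 120, necessarily
```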
A program written in Lisp can be proven to mean one particular thing; no English sentence passing between human speakers has that property. In English, Alice may communicate some message M to Bob and, though neither Alice nor Bob understands M perfectly, both get *some* of the information being communicated. With computer programs, however, the situation is far simpler: either the communication succeeds or it fails. I liken the failure of a correct, deterministic computer program to produce its deterministic output to the failure of a human being to understand a communication at all, say because he does not speak the language in which the communication is presented, or because he has recently been bashed in the head with a heavy object.
Lisp, in particular, is equivalent to the language of proof of validity. (Yes, every Turing-complete computer language has this property, but Lisp is the best one to talk about because its semantics are so easily defined.) If you accept that some correct, terminating Lisp program P can validly output different values given the same input, then what you are doing is denying the very possibility of a logical proof of anything at all, and I do not wish to talk to you any more.
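When I say Lisp's semantics are easily defined, this is the sort of thing I mean. The toy evaluator below is a sketch of my own devising, covering only numbers, +, and *; EVAL-EXPR is a name I made up:

```lisp
;; A definitional semantics in miniature.  Each COND clause is an
;; inference rule; a terminating evaluation applies finitely many
;; rules, so the evaluation itself is a derivation -- a proof --
;; that the expression denotes its value.
(defun eval-expr (form)
  (cond ((numberp form) form)              ; a number denotes itself
        ((eq (first form) '+)
         (+ (eval-expr (second form))
            (eval-expr (third form))))
        ((eq (first form) '*)
         (* (eval-expr (second form))
            (eval-expr (third form))))
        (t (error "Unknown form: ~S" form))))

(eval-expr '(+ (* 2 3) 4)) ; => 10, by a finite chain of rule applications
```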
Nick raises one point I wish I had made more explicit:
nick012000 wrote:
Discounting things like random numbers generated from the computer's clock, which is constantly changing.
The sequence of pseudo-random numbers generated from any particular seed is absolutely deterministic. If some algorithm depends upon random numbers, then those random numbers are part of the algorithm's input. Indeed, in simulations programming, it is often desirable to use a known sequence of pseudo-random values when designing and testing an algorithm, because that makes it possible to prove the expected output for any given input.
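As an illustration (the generator below is a sketch of my own; the multiplier and increment are the well-known Numerical Recipes constants, but any fixed constants would make the same point):

```lisp
;; A linear congruential generator.  The "random" sequence is a pure
;; function of the seed, hence part of the program's input: same seed,
;; same sequence, on any conforming machine.
(defun make-lcg (seed)
  (let ((state seed))
    (lambda ()
      (setf state (mod (+ (* 1664525 state) 1013904223)
                       (expt 2 32)))
      state)))

;; Two generators started from the same seed agree forever:
(let ((a (make-lcg 42))
      (b (make-lcg 42)))
  (equal (loop repeat 5 collect (funcall a))
         (loop repeat 5 collect (funcall b)))) ; => T
```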