Eyeless Blond wrote:
Easy: until you start talking about either true random numbers (yes, derived from radioactive decay or Brownian motion or some such) or quantum computing, there is no such thing as a non-deterministic algorithm. Computations in computers are *always* deterministic; the same inputs will always give you the same outputs. The timestamp-seeded pRand() function you described is just a way to ensure that you never see the same inputs over a reasonable set of normal trials. It's still not a true random number generator, and it really isn't good enough for a randomized function that truly requires real random input.
I was asking Slamlander that question because he said that he wasn't talking about determinism "in the general sense", but "in the terms of deterministic algorithms and systems in Computer Science", implying that the latter was something other than just any old algorithm that didn't rely on non-determinism in the general sense (e.g. radioactive decay).
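His pRand() point is easy to demonstrate, by the way. Here's a minimal Python sketch (the function name is mine, standing in for the timestamp-seeded pRand() he described): fix the seed and the "random" output is completely reproducible; the timestamp only varies the seed between runs.

```python
import random

def draws(seed, n=5):
    """Pseudo-random draws from a PRNG seeded with `seed` -- a hypothetical
    stand-in for the timestamp-seeded pRand() described above."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Identical seeds give identical "random" sequences -- fully deterministic:
assert draws(1234567890) == draws(1234567890)

# A different timestamp (i.e. seed) gives a different sequence, which is
# all the timestamp trick actually buys you:
assert draws(1234567890) != draws(1234567891)
```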
Quote:
And you would be wrong about that; scientists have known otherwise for nearly a hundred years now. Most of quantum mechanics is non-deterministic; Brownian motion is just a macroscopic example. This is why, for instance, single photons can interfere with themselves, producing a diffraction pattern when going through a grating.
I'm not claiming that determinism is true; I'm merely restating its definition, for the sake of trying to find out how determinism "in the terms of deterministic algorithms and systems in Computer Science" differs from regular old run-of-the-mill determinism.
Quote:
Note that I am not making the mistake of saying that quantum mechanics implies the existence of a conscious mind--that has also been fairly well debunked--but that the processes of thought in the brain are themselves non-deterministic, non-mathematical quantum-mechanical processes, and thus cannot be fully determined even by the most advanced computers. A computer could never fully calculate the whole of the human mind from complete knowledge of the inputs--complete knowledge which the Uncertainty Principle expressly forbids, by the way--which is why we are forced to come up with macroscopic models based on the brain's structure rather than computationally reading its thoughts.
OK, so let's set aside the whole issue of predictability, since (as you said, and as I said earlier) we can't know all the inputs, and even if we could, we don't know the functions (the laws of nature) perfectly. The question at hand is whether you could have functionality we would describe as a human-like mind implemented in a "deterministic" system: one which is non-chaotic and thus does not show signs of randomness at the larger scale, e.g. a computer. (Even though the universe is non-deterministic, so a computer is not strictly deterministic either but merely non-chaotic: randomness is extremely unlikely to be observed at any relevant scale, since the randomness present at the quantum scale has little effect on operations at much larger scales. If we built our computers differently it could, though--e.g. quantum tunneling becomes a problem with submicron etchings in chips.)
So, I'm willing to grant (though not being a neuroscientist I don't know for a fact) that the human brain is quite likely a chaotic system, and nondeterminism on the quantum scale is likely to manifest on larger scales significantly more often than it would in a computer system. So, predictability aside, if you were to take a person sitting alone in a cell with no outside input, and then magically rewind time and let that person live through exactly the same week over and over and over again (his mind of course being "rewound" as well), his mind would probably be in an at-least-slightly different state at the end of each run through that week; while a computer sitting in a closet doing some very difficult calculations all week long would probably have the exact same state at the end of every run, barring some extremely unlikely quantum fluctuations.
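That chaotic-versus-non-chaotic distinction is easy to see in a toy system. This is of course not a model of the brain, just an illustration (parameters mine) using the logistic map, a standard toy dynamical system: in its chaotic regime a microscopic nudge--stand-in for a quantum-scale fluctuation--gets amplified to a macroscopic difference, while in its stable regime the same nudge is damped away and reruns end up identical for all practical purposes.

```python
def logistic_trajectory(x0, r, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) from x0, returning all states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def max_divergence(x0, r, eps=1e-12):
    """Largest gap between two trajectories started eps apart."""
    t1, t2 = logistic_trajectory(x0, r), logistic_trajectory(x0 + eps, r)
    return max(abs(a - b) for a, b in zip(t1, t2))

# Chaotic regime (r = 4): the 1e-12 nudge is roughly doubled every step,
# so the two runs end up macroscopically different:
print(max_divergence(0.3, 4.0))

# Stable regime (r = 2.5): both runs settle on the fixed point 0.6 and
# the gap never grows beyond its microscopic starting size:
print(max_divergence(0.3, 2.5))
```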
The question now is: is that chaos and manifest randomness on a large scale a necessary feature of a human-like mind? My answer would be no, though if you've got arguments otherwise I'd like to hear them. But imagine, to illustrate my position, a computer system built such that the aforementioned quantum tunneling problems occur, or in which, somehow or other, true random error is introduced into the functioning of the computer. However, the computer is programmed to compensate for such errors, so they don't crash the system; they just introduce more errors into the calculations and force the computer to reprocess things and take circuitous, roundabout measures to ensure it's probably got the right output. You can imagine a system like this being produced via a genetic algorithm of some sort, where we humans determine which programs produce the "right output" most often despite the error-prone nature of the system, and thus which programs get reproduced; but one way or another, the final program we select will be code like any other, just code that can tolerate error and randomness. Such code could in principle have been written and designed instead of selected through an evolutionary process; heck, it might even have been more efficient to do so.
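To make the error-compensation idea concrete, here's a toy sketch (everything here is hypothetical: the flaky adder, the noise model, the function names). The "hardware" occasionally corrupts a result, and the program compensates by simple redundancy--recompute and take the majority--which is one of the crudest forms the compensation could take:

```python
import random

def flaky_add(a, b, corrupt):
    """An adder on hypothetical error-prone hardware. `corrupt` models the
    noise source: it returns the result either intact or perturbed."""
    return corrupt(a + b)

def robust_add(a, b, corrupt, votes=7):
    """Compensate by redundancy: recompute several times and return the
    most common answer, so occasional random errors get outvoted."""
    results = [flaky_add(a, b, corrupt) for _ in range(votes)]
    return max(set(results), key=results.count)

# A noise source that flips a random low bit 20% of the time:
rng = random.Random()
def noisy(x):
    return x ^ (1 << rng.randrange(8)) if rng.random() < 0.2 else x

print(robust_add(20, 22, noisy))  # almost always 42, despite the noise
```

A genetic algorithm would presumably discover something far subtler than majority voting, but the selected program would still just be code of this general kind: error-tolerant, but code like any other.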
But now imagine you take that final code and run it on hardware which is not so error-prone. The code running on the error-prone hardware would, like a human brain, end up giving different outputs at the end of each week of execution. That same code, running on hardware which is not error-prone, would give the exact same output at the end of every week of execution. But it's still the same code! Imagine now that that code, on the error-prone hardware full of true randomness, managed to simulate human intelligence. Would you say that the same code, running on error-free hardware, would no longer seem intelligent to us?
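The swap-the-hardware experiment can also be sketched in a few lines (again, a hypothetical toy, names mine): the very same function, handed a silent noise source, is bit-for-bit reproducible; handed an occasionally-glitching one, its reruns generally diverge.

```python
import random

def run_week(noise):
    """A long, fixed computation whose every step can be perturbed by the
    hardware. `noise` models the hardware; the code is identical either way."""
    state = 1
    for step in range(10_000):
        state = (state * 31 + step + noise()) % 1_000_003
    return state

# "Error-free hardware": a null noise source; every rerun of the week
# ends in exactly the same state.
quiet = lambda: 0
assert run_week(quiet) == run_week(quiet)

# "Error-prone hardware": rare glitches drawn from OS entropy, standing
# in for true randomness; reruns of the very same code generally end in
# different states.
rng = random.SystemRandom()
noisy = lambda: 1 if rng.random() < 0.01 else 0
print(run_week(noisy), run_week(noisy))
```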
I guess the point I'm making is that while, yes, the human brain is probably a chaotic system which manifests randomness at the macro scale, what makes it distinct from a box of true random number generators sitting in someone's skull is its non-random functionality: the patterns, functions, and predictable behavior of it. So it seems that if you could perfectly replicate all those "deterministic" features of a human brain and leave out the randomness, it would still be what we'd call intelligent. The randomness is not a necessary feature; it's just incidental.