Tamayo wrote:
Given that brains are neural networks (whatever else they are), that consciousnesses are hosted by brains, and that neural networks are very difficult to program directly, I suspect it will be centuries at least before a person's consciousness can be hosted on a machine that is not that person's original brain.
The Baron wrote:
hey, lookie there!
an article on this by a guy who teaches at my goddamn school that apparently won't let me check my grades, fix my registration for next semester, or pay them! </bitter>
good article. seems to emphasize one of the things I was about to mention before I looked this up: that the brain is massively parallel. it's keeping all autonomic functions running, parsing sensory information from an incredible number of sources, and then handling all the higher-level functions (aka sentience) on top of that. I'm inclined to agree with Tamayo that it will be centuries before someone's consciousness can be mapped into a machine; we have to understand the brain in its entirety before we can even really start researching that line of thought. I know exactly zero about AI research, so I'll let Tamayo talk about that if she wants.
Thinman wrote:
The rest of this post is more AI stuff, and probably only interesting if you're into that kind of thing.
The thing is, most AI research (that I'm aware of, anyway) doesn't even bother trying to emulate how the human brain works; instead it focuses on ideas that can be rigorously mathematically understood. For one thing, the brain is an ad hoc system. It's a product of millions of years of accretion of functionality. Things are hopelessly interconnected and not always even done in any particularly optimal way. For another, biologists and psychoperceptual scientists keep changing their theories of how the mind works.
Our little friend the <a href="http://en.wikipedia.org/wiki/Neural_network">multilayer perceptron</a> was something of a scientific fad. Claims of great results emulating nature with "neural networks" have given way to the general conclusion that MLPs are delicate, temperamental algorithms to set up, difficult to analyze once operational, and anyway bear only a superficial resemblance to how (we currently think) the brain actually works.
The current hot topic is kernel methods, like the <a href="http://en.wikipedia.org/wiki/Support_vector_machine">support vector machine</a>, which reduces the learning problem to an optimization that can be solved with deterministic numerical methods. It's stable, general, works very well, and bears no resemblance to anything known in nature.[1]
1. Vapnik, the primary mathematician behind the statistical methods used in the SVM, has written a hilariously bitter critique of the neural network community, who ignored his superior theoretical work for many years in favor of the "more natural" MLP.
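For the curious, here's a minimal sketch of what "reducing learning to an optimization" looks like in practice. This uses Python with scikit-learn (my choice of library, not something mentioned above): the RBF-kernel SVM's `fit` solves a convex quadratic program for the maximum-margin boundary, so given the same data you get the same model every time, with no random initialization to babysit.

```python
# Sketch: training a kernel SVM on a toy nonlinear dataset.
# The SVC solver is deterministic convex optimization -- the contrast
# with an iteratively-trained, initialization-sensitive MLP.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable in the input space.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps points into a higher-dimensional feature
# space where a maximum-margin separating hyperplane exists; fit() finds
# it by solving a quadratic program.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```

Nothing in there resembles a neuron, which is Thinman's point: the kernel trick is a piece of math, not a biological metaphor.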
Emy wrote:
Thinman wrote:
The rest of this post is more AI stuff, and probably only interesting if you're into that kind of thing.
The thing is, most AI research (that I'm aware of, anyway) doesn't even bother trying to emulate how the human brain works; instead it focuses on ideas that can be rigorously mathematically understood. For one thing, the brain is an ad hoc system. It's a product of millions of years of accretion of functionality. Things are hopelessly interconnected and not always even done in any particularly optimal way. For another, biologists and psychoperceptual scientists keep changing their theories of how the mind works.
Our little friend the <a href="http://en.wikipedia.org/wiki/Neural_network">multilayer perceptron</a> was something of a scientific fad. Claims of great results emulating nature with "neural networks" have given way to the general conclusion that MLPs are delicate, temperamental algorithms to set up, difficult to analyze once operational, and anyway bear only a superficial resemblance to how (we currently think) the brain actually works.
The current hot topic is kernel methods, like the <a href="http://en.wikipedia.org/wiki/Support_vector_machine">support vector machine</a>, which reduces the learning problem to an optimization that can be solved with deterministic numerical methods. It's stable, general, works very well, and bears no resemblance to anything known in nature.[1]
1. Vapnik, the primary mathematician behind the statistical methods used in the SVM, has written a hilariously bitter critique of the neural network community, who ignored his superior theoretical work for many years in favor of the "more natural" MLP.
What ever happened to colloid physics/chemistry?