RedKnight wrote:
If we go down this road, aren't we veering toward determinism? Essentially, we are saying that our responses are completely predetermined by our "programming," and if the same is true for all human beings, aren't all of our interactions since the dawn of time already set in stone? Obviously, natural events and other "outside" stimuli come into play...
And if we are, what of it? Unless you want to go back to Cartesian dualism and say that our minds are separate from our brains, our consciousness is based on a physical construct. We're probably safe from full determinism, from our point of view, thanks to quantum uncertainty, but that doesn't change the fact that if we can replicate a similar physical construct, or an artificial construct that works in an analogous manner, then it is theoretically possible to create an intelligence (there's more to it than that, of course, involving development and input/output methods, but the basic principle stands).
Quote:
What I'm trying to say is that when a computer, independent of direct human suggestion, can conceive of its own existence, it will define exactly what it means to be a sentient AI; in the opinion of the forum readers, is this possible?
Yes. Krylex's argument that we cannot create something smarter than ourselves is a bit silly. While it's true that no one person can completely understand a being smarter than themselves (in fact, we cannot completely understand ourselves, for Gödelian reasons), that doesn't mean we can't build one. No one person knows every line of code in Microsoft Windows, but it still works (more or less). Furthermore, the creation of any actual AI program would likely involve a great deal of evolutionary algorithms and similar techniques, which are set up to use simple rules in order to become more complex on their own. So, barring some unforeseen physical barrier*, I don't see why it shouldn't be theoretically possible to create a machine intelligence at some point.
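To make the "simple rules becoming more complex on their own" idea concrete, here's a toy evolutionary-algorithm sketch (my own illustration, not anything from actual AI research): it evolves a random bit string toward all ones using nothing but mutation and survival-of-the-fittest selection.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy evolutionary algorithm: evolve a bit string toward all ones.
TARGET_LEN = 20      # genome length
POP_SIZE = 30        # individuals per generation
GENERATIONS = 100
MUTATION_RATE = 0.05 # per-bit flip probability

def fitness(genome):
    # Simple rule: fitness is just the number of 1 bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Nobody tells the program what the answer looks like; the "designer" only supplies the scoring rule and the mutation mechanism, and complexity accumulates on its own. That's the basic loop behind much more elaborate evolutionary and learning systems.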
Now, whether or not we will ever do so is another question. We could easily end up killing ourselves off before then, or just banning AI research outright.
As for the wisdom of doing such a thing, I can't say I'm really opposed to it. Hopefully we would be able to give them some sense of ethics to head off any potential Matrix (Ha!) or Terminator-type scenario. And you have to keep in mind that AIs wouldn't necessarily contain the competitive and dominating impulses humans do unless we put them there. Yudkowsky has been talking*2 about this type of thing for years. Although if it came to it, and the Machines had access to advanced nanotech and automated weaponry, I don't think the requisite ragtag human resistance would have much of a chance :)
* Of which I don't see any on the horizon. Processing power could be a problem, as we still have a ways to go to reach human-equivalence for advanced abilities, but that'll probably be solved by either quantum computers or better (nanotech?) methods for making non-quantum ones.
*2 The usual disclaimers apply: his opinions are not necessarily all my own, and so forth. I haven't had the chance to read through this revised version yet, but as I recall, it's pretty good.
EDIT: Misspelled "determinism". I had no choice!