Science a GoGo's Discussion Forums
#4632 11/24/05 01:21 PM
Blacknad (OP) | Superstar | Joined: Oct 2005 | Posts: 901
I suppose I know what the answer will be, but is anyone willing to lend some support to Roger Penrose?

He argues against the strong AI viewpoint that the processes of the human mind are algorithmic and can thus be duplicated by a sufficiently complex computer. This is based on claims that human consciousness transcends formal logic systems because things such as the insolvability of the halting problem and Gödel's incompleteness theorem restrict an algorithmically based logic from traits such as mathematical insight. - Wikipedia.

Halting Problem - Given a description of a program and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.
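The definition can be made concrete with two tiny programs (a hypothetical Python sketch): one that plainly halts, and one (the Collatz iteration) for which nobody has proved halting for every input.

```python
def halts_quickly(n):
    # This program obviously halts: it just counts down to zero.
    while n > 0:
        n -= 1
    return "done"

def collatz(n):
    # Whether this halts for EVERY positive n is the open Collatz
    # conjecture: no one has proved the sequence always reaches 1.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(halts_quickly(10))   # halts
print(collatz(27))         # happens to halt, after 111 steps
```

The halting problem asks for a single procedure that decides this question for all programs and inputs, not just easy cases like these.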

Regards,

Blacknad.

#4633 11/27/05 01:46 AM
jjw | Superstar | Joined: Sep 2005 | Posts: 636
Blacknad, I know not of Roger Penrose. As I read what you say I feel there is something missing. Possibly a word left out? Anyway:

"Halting Problem - Given a description of a program and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting."

Rep: Not sure I understand the issue but I do like the question. I did at one time write all of the programs for my office in BASIC and a lot of small programs for my hobbies. We know it is possible to write programs that can write entirely new programs dependent on the input. Microsoft Quick BASIC is a good example. Other programs can go on until the equipment fails if they are continually stimulated by internal language to feed new input after a waiting period fails to provide outside input. That is what happens with your common screen saver program. I feel like I am missing the point of a "halting problem" as you may wish to present it. In short I am secure in the conviction that a computer program can reproduce itself, repair itself, modify its original design and continue to execute itself until the equipment using it falters. That is just my view but probably not relevant to your issue.
jjw

#4634 11/27/05 02:31 AM
jjw | Superstar | Joined: Sep 2005 | Posts: 636
Second try:
"The Halting Problem" you posted and to which I probably made a worthless response was reviewed as part of the data Google provides on Roger Penrose. A talented fellow it appears. The equation the encyclopedia provides is beyond my usual sand box so I will not deal with it directly. The issue, to me, boils down to whether it is possible for a computerized system to acquire enough knowledge to think rationally as a human does (when we do). The first requirement is that it must have the ability to acquire new knowledge as we do on a continuous basis, and sufficient memory to store all the data it can acquire in multiple different ways, so it is accessible in each of its parts for forging into new concepts and conclusions of its own creation. I hesitate to debate with any accepted and proven specialized academic, so that is all I have to offer - but - if it was possible to provide all of the input the average human has access to, with all of the sensory input that goes along with it, I think a computer is capable of duplicating human thought and gaining in ability while learning. Sorry it is still not an answer to the ultimate question.
jjw

#4635 11/28/05 12:30 PM
T | Megastar | Joined: Jun 2005 | Posts: 1,940
"Computer Science is no more about computers than astronomy is about telescopes." -- Edsger Dijkstra.

Edsger got it. Computer Science is actually a branch of mathematics, and it has to do with the limits of algorithmic computing - not necessarily real computers.

A Turing Machine is a theoretical computer - an abstraction of real computers that is at once more and less powerful than the real thing. It has a very simple programming language. There's a description here: http://mathworld.wolfram.com/TuringMachine.html
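The idea can be sketched in a few lines of Python (an illustrative toy, not the exact formalism on the Wolfram page). The transition table maps (state, symbol) to (symbol to write, head move, next state):

```python
def run_tm(table, step_limit=1000):
    # Sparse tape of 0/1 symbols; blank cells read as 0.
    tape = {}
    head, state, steps = 0, 'A', 0
    while state != 'HALT' and steps < step_limit:
        sym = tape.get(head, 0)
        write, move, state = table[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return state, steps, sum(tape.values())

# The 2-state "busy beaver": writes four 1s, then halts in 6 steps.
bb2 = {
    ('A', 0): (1, 'R', 'B'),
    ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'),
    ('B', 1): (1, 'R', 'HALT'),
}
print(run_tm(bb2))   # ('HALT', 6, 4)
```

Despite the tiny instruction set, machines like this can compute anything a real computer can (given unlimited tape), which is why the halting question is posed about them.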

Here is a site that explains the halting problem in a pretty simple way:
http://www.cgl.uwaterloo.ca/~csk/halt/
This is about the simplest and most lucid explanation I've ever read on the subject.


It's important to understand the definition of terms and to follow the text carefully. If it doesn't make a lot of sense, don't feel bad. Many graduate CS majors have difficulty wrapping their heads around it.

The problem isn't really about what real computers do - although the results are immediately transferable. The problem is this: is there an algorithm which could examine ANY program in advance and determine, in a definite amount of time, whether it would halt (return a yes/no answer)?
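Turing's diagonal argument for why the answer is "no" can be sketched in Python. The `halts` parameter here is a hypothetical decider that cannot actually exist; the sketch shows that any candidate is refuted by a program built from it:

```python
def make_trouble(halts):
    # Given a claimed halting decider, build the program that
    # defeats it by doing the opposite of whatever it predicts.
    def trouble():
        if halts(trouble):
            while True:      # predicted to halt -> loop forever
                pass
        return None          # predicted to loop -> halt at once
    return trouble
```

For example, a decider that always answers False is wrong about its own `trouble`, because `trouble()` then returns immediately; one that answers True is wrong because `trouble()` loops forever. No total decider escapes this trap.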

The Halting Problem is analogous to Gödel's Incompleteness Theorem and probably even equivalent to it.

#4636 12/03/05 12:46 AM
Blacknad (OP) | Superstar | Joined: Oct 2005 | Posts: 901
Thanks FF - very helpful.

And thanks for having a crack at it, Jim, and yes Penrose is bright - anyone who co-writes books with Stephen Hawking is probably not a dullard.

But his argument must remain independent of any such considerations - he believes that computers will never mirror human intelligence. It seems to be a statement of faith, but I am not as bright as he is.

Here is my ignorance - I believe the problem is that not all problems are decidable and this means that a computer has to know when to stop evaluating, but how does it know when?
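This worry is exactly right: in general a machine cannot know when to stop. What real systems do instead is run under a step budget and answer "halted" or "don't know yet" - halting is semi-decidable, so a "yes" can be confirmed but a "no" never can. A hypothetical Python sketch:

```python
def run_with_budget(gen_fn, budget):
    # gen_fn returns a generator; each yield counts as one step.
    gen = gen_fn()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration as done:
            return ('halted', done.value)
    return ('unknown', None)   # may halt later, may run forever

def countdown():
    n = 5
    while n:
        n -= 1
        yield
    return 'finished'

def forever():
    while True:
        yield

print(run_with_budget(countdown, 100))   # ('halted', 'finished')
print(run_with_budget(forever, 100))     # ('unknown', None)
```

No matter how large the budget, the 'unknown' answer never distinguishes "runs forever" from "halts just past the budget" - that gap is the undecidability.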

Please somebody set me right.

Regards,

Blacknad.

#4637 12/03/05 01:25 AM
jjw | Superstar | Joined: Sep 2005 | Posts: 636
Blacknad:

Here is my ignorance - I believe the problem is that not all problems are decidable and this means that a computer has to know when to stop evaluating, but how does it know when?

No doubt wrong again but not from my perspective.

The view boils down to the idea that a computer will never develop intuition. I disagree. The tool will "know when to stop" when it sees it is going nowhere, after which it will start all over again, seeking new sources from which to make new decisions, and on and on - until the equipment fails. This requires a concept foreign to the heavy hitters.
jjw

#4638 12/03/05 10:18 PM
T | Megastar | Joined: Jun 2005 | Posts: 1,940
I've pondered your message a bit, jj. I only vaguely understand what you're saying, but I *believe* you're confusing two ideas.

Humans use judgement to solve real-world problems, but that judgement is not perfect. The halting problem is a question about mathematical certainty.

If computers were ever imbued with intuition, it would give them a huge leg up on solving many kinds of real-world problems - BUT they would also be imbued with human fallibility.

#4639 12/08/05 08:00 PM
T | Megastar | Joined: Jun 2005 | Posts: 1,940
I think I said before (perhaps in another thread) that pattern matching is something (perhaps the main thing) that humans can do much better than computers.

http://www.cnn.com/2005/TECH/12/05/matrix.brain/index.html

I don't have time to read the Science article at the moment, but I will. It's not clear from the cnn summary exactly how important this is.

#4640 12/15/05 11:57 PM
jjw | Superstar | Joined: Sep 2005 | Posts: 636
Hi TFF:

I always appreciate your tasteful insertions, especially after I offer some lame comment on a subject I failed to give reasonable attention. I see now how far away I was from the central issue.

In general I do expect that computers will some day achieve all the primary talents of humans. With secondary attachments they will broaden this ability and potentially qualify as true robots, clones, of humans. Easily said but not readily proven. One fascinating thing about the brain is that it does not keep recording images that it already has in storage. We would soon overload if every time we looked at our desk the entire image was stored all over again. Suppose the layout is a little different due to papers being moved about. We must be able to recognize this particular scene at least temporarily but not indefinitely. This little item is the essence, as I see it, of the difference between the computers we have now and the means by which the brain is so successful: the use of multiple "computers" segregating data into multiple storage centers, which in turn are broken down into even smaller data areas. It is all so fast the entire system is searched to recreate a thought or a scene, or possibly spark some intuition for something newer than the data of which it was formed. Not my best topic.
jjw

#4641 12/16/05 12:11 AM
jjw | Superstar | Joined: Sep 2005 | Posts: 636
PS:

I meant to add a comment on the "mathematical certainty" in common use. I see most results and data quoted in averages. Each planet's orbit is known to have special facts but we use the "mean" measurement. We do so much averaging because the details may be beyond math to fully comprehend. I never tried to learn Calculus, and was not sure I could or that I needed it, but my preliminary view was that it is an ultimate scope on averaging. This would require a new topic. Being wrong is not all bad; it provides a reason to try harder.
jjw

#4642 12/16/05 04:06 PM
T | Megastar | Joined: Jun 2005 | Posts: 1,940
"I offer some lame comment"
I don't know what anyone else has said to you, but I don't recall ever having said your posts were lame.

"In general I do expect that computers will some day achieve all the primary talents of humans."
So do I. I don't think it will happen in my lifetime or in the lifetime of my children or perhaps even of my children's children. But I'm pretty sure it will happen eventually.

"I see most results and data quoted in averages."
It's not that the details are necessarily beyond math. It's that the details are beyond us. Averages allow us to "get our heads around" masses of data. But averages alone are insufficient. We like to know the standard deviation, the skewness and kurtosis. If we can hazard a guess about the source distribution that's a good thing.
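Those moments (mean, standard deviation, skewness, kurtosis) can all be computed directly from raw data. A small sketch in plain Python, showing how two datasets with the same average can have very different shapes:

```python
def moments(xs):
    # Central moments of the data, computed from first principles.
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    sd = m2 ** 0.5
    skew = m3 / sd ** 3
    kurt = m4 / m2 ** 2 - 3   # excess kurtosis (0 for a normal)
    return mu, sd, skew, kurt

symmetric = [1, 2, 3, 4, 5]
lopsided  = [1, 1, 1, 1, 11]   # same mean, very different shape
print(moments(symmetric)[0], moments(lopsided)[0])   # both 3.0
```

The symmetric set has zero skewness; the lopsided one is strongly right-skewed, which is exactly the information the average alone throws away.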

We encapsulate a huge mass of data elements into a few numbers - and eventually, with luck, into a simple formula.

Calculus is a good thing. Frankly I don't think I've used it more than a few dozen times in my work (and most of that in the last year). But it's a good thing to know - and it's not all that hard if you can find the right teacher or the right book. It's a natural extension of what most students learn at the very end of algebra II, namely sequences, series, and limits.
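That link from sequences and limits to calculus can even be seen numerically: the derivative is the limit of difference quotients as the step shrinks. A small illustrative sketch:

```python
def diff_quotient(f, x, h):
    # The slope of the secant line over a step of width h; the
    # derivative is its limit as h approaches 0.
    return (f(x + h) - f(x)) / h

f = lambda x: x * x
for h in (0.1, 0.01, 0.001):
    print(h, diff_quotient(f, 2.0, h))   # approaches 4.0 (exact: 2x)
```

Shrinking h walks the quotient toward the exact derivative, which is the limit idea from the end of algebra II carried one step further.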


Copyright © 1998 - 2016 Science a GoGo and its licensors. All rights reserved.