Freethought & Rationalism Archive (the archives are read only)
07-12-2002, 07:25 PM | #11 |
Regular Member
Join Date: May 2002
Location: North America
Posts: 203
Synaesthia:
Quote:
07-12-2002, 07:27 PM | #12 |
Regular Member
Join Date: May 2002
Location: North America
Posts: 203
John Page asks a good question: Why is being organic necessary to feel?
What are the necessary conditions for having the capacity for experiential states?
07-12-2002, 07:37 PM | #13 |
Regular Member
Join Date: May 2002
Location: North America
Posts: 203
Here is an interesting quote from Colin McGinn's The Mysterious Flame:
Quote:
07-12-2002, 07:52 PM | #14 |
Veteran Member
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
|
Induction therefore suggests that this assumption is backed by some kind of necessary truth? Perhaps, but I suspect it is not the one he seems to think. The most likely explanation for the observation "All the cases of consciousness we know of are associated with organic brains" is "The only significant replicators known to exist are organic." Jumping to the conclusion that some unknown causal power resides in the collection of atoms we label "organic" is completely unwarranted.
07-12-2002, 09:25 PM | #15 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Taffy Lewis:
...Could there be a robot or computer which is *not conscious* yet capable of performing any action or engaging in any behavior that a human can?

Well, firstly I'll quote my definitions of awareness and consciousness:

Quote:

Those definitions give you things you can test: you can test whether the computer can actively learn new behaviours... you could see if it is at stage 5 by seeing whether it can have philosophical discussions with you that last for *hours* in which it shows evidence of new learning. If it satisfies my definitions then it would truly be aware and conscious.

I don't think beliefs can be preprogrammed if they are self-learnt. (By "self" I mean the system as a whole.) If they are self-learnt and it is acting on them, they are its real beliefs - they aren't fake. Same with self-learnt problem-solving strategies, etc.

Basically I'm saying that it wouldn't be able to fully appear to be human and not be conscious. It might appear conscious to someone naive, but not to someone who knows how to test these things. (e.g. you've got to see whether it can actively learn new beliefs and problem-solving strategies.)

Also, can there be conscious computers or conscious machines which are not organically based? Of course
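[Editor's note: excreationist's proposed test - can the system acquire a genuinely new behaviour through its own learning, rather than replay a preprogrammed one? - can be illustrated with a toy sketch. The agents, signals, and reward scheme below are my own illustrative inventions, not anything described in the thread:]

```python
import random

class PreprogrammedAgent:
    """Fixed stimulus -> response table; has no learning mechanism at all."""
    def __init__(self, table):
        self.table = dict(table)

    def act(self, stimulus):
        return self.table.get(stimulus, "ignore")

    def feedback(self, stimulus, response, reward):
        pass  # cannot change its behaviour, no matter the feedback

class LearningAgent:
    """Crude trial-and-error learner: remembers whichever response earned a reward."""
    def __init__(self, responses):
        self.responses = list(responses)
        self.table = {}  # starts with NO stimulus -> response mappings

    def act(self, stimulus):
        # Use a learnt response if one exists, otherwise experiment at random.
        return self.table.get(stimulus) or random.choice(self.responses)

    def feedback(self, stimulus, response, reward):
        if reward:
            self.table[stimulus] = response  # keep what worked

def acquires_novel_behaviour(agent, stimulus, desired, trials=200):
    """Reward the desired response to a novel stimulus, then check it stuck."""
    for _ in range(trials):
        response = agent.act(stimulus)
        agent.feedback(stimulus, response, reward=(response == desired))
    return agent.act(stimulus) == desired

fixed = PreprogrammedAgent({"greeting": "wave"})
learner = LearningAgent(["wave", "speak", "ignore"])

print(acquires_novel_behaviour(fixed, "new-signal", "speak"))    # False: replays its table
print(acquires_novel_behaviour(learner, "new-signal", "speak"))  # True (with overwhelming probability)
```

The point of the sketch is only the distinction the post draws: the preprogrammed agent can never pass, however rich its table, while the learner acquires a mapping it did not start with. Whether passing such a test implies consciousness is, of course, exactly what the thread is debating.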
07-12-2002, 11:58 PM | #16 |
Contributor
Join Date: May 2002
Location: Saint Paul, MN
Posts: 24,524
I have no idea. My instinct is to believe that anything which is genuinely indistinguishable from a conscious entity is, in fact, conscious, but I can't tell.
I can't imagine how we'd *know*; after all, if, as we assume, the behavior is outwardly indistinguishable, what are the grounds for claiming it's not conscious? We can't say "oh, we know how it works, and there are no conscious parts"; careful study of human brains reveals no "conscious" part. We're a bunch of chemicals and electrical impulses, and yet each of us knows that at least *ONE* of us is conscious.
07-13-2002, 02:54 AM | #17 |
Regular Member
Join Date: Mar 2002
Location: CT
Posts: 333
Quote:
Suppose people build a complex machine that becomes self-aware. Maybe it could even be programmed to exhibit signs of anger at appropriate times. Maybe it could program itself to exhibit such signs at times that are in its best interest. Do computations that produce signs that we humans would recognise as anger actually produce "feelings" of anger within the machine? Will all its "neural" circuits fire at once? Will adrenaline flood its "bloodstream"? Will it "feel" its muscles tense and its heart race? I doubt it; ergo, ersatz consciousness - from an organic perspective, anyway. (What other perspective is there?)

SB

[ July 13, 2002: Message edited by: snatchbalance ]
07-13-2002, 04:07 AM | #18 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
in reply to snatchbalance:
AIs that satisfy my definitions for awareness and learn to become conscious (high-level awareness) would have "drives" which they *constantly* try to satisfy - sometimes in new ways that they've learnt... I don't think it is necessary for them to behave exactly as we do to be conscious... e.g. they don't need to have malfunctioning tear ducts (crying) when they feel great loss, etc.

[ July 13, 2002: Message edited by: excreationist ]
07-13-2002, 06:18 AM | #19 |
Regular Member
Join Date: Mar 2002
Location: CT
Posts: 333
excreationist,

I agree that a conscious being does not need to behave exactly like we do for it to be conscious. However, I would say that it must have SOME recognisable behaviours before we would consider any being to be conscious. I would go on to say that it may be possible to build a machine that mimics consciousness, but that such a machine would not be conscious in the way that an organic being is conscious. Hence, I would still refer to it as an ersatz form of consciousness - elitist organic organism that I am.

SB
07-13-2002, 07:42 AM | #20 |
Banned
Join Date: Jun 2002
Location: Montréal
Posts: 367
Synaesthesia: ..."exhibited the same capabilities, it would certainly be conscious".

Did Searle not show this to be false? The machine would only be a symbolic manipulator.

* * *

Sammi says: The basic machine (the kernel of the NEW MACHINE) must necessarily collect information about itself, its basics, in order TO TAKE the first step on the road to consciousness. Our current CPU technology will never qualify "current computers" to be conscious. No matter how hard we try, Searle will always crop up to dismiss our quasi-claims of machine consciousness.

Sammi Na Boodie (sounds good! does it not?)