FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 10-08-2002, 07:16 AM   #51
Banned
 
Join Date: Sep 2001
Location: Eastern Massachusetts
Posts: 1,677
Post

Quote:
No machine can have a brain of the type humans have (else it wouldn't be a machine).
Vogelfrei, you have created a tautology. If anything is proven conscious, it is no longer a machine by your definition, so you are right. If a machine is proven conscious, it must have "a brain of the type humans have", and is thus no longer a machine, so you are right.

What, exactly, is the vague "a brain of the type humans have"? Surely not the specific human carbon-based biology, because there is nothing inherent in nature, to our current knowledge, that prohibits non-carbon-based forms of life.

And surely not the relative complexity, since it is only a matter of time before "machines" have the equivalent processing power. In fact, it will soon (most likely within our lifetimes) be possible, if anyone wanted to bother, to create an exact copy of the human brain, neuron by neuron, synapse by synapse, impulse by impulse, in silicon rather than meat. Not considered the most efficient way to create artificial intelligence, but possible. Add artificially created versions of all the sensory organs. You can even put it in an erect, bipedal robot. You might even be able to "download" the current imprint of an actual human brain into it, so that it would not have to "grow" up for 20 years in order to be an adult.

What about this "machine" would prevent it from being "conscious", if not, ultimately, an irrational belief in an intangible "soul"?

Or would it no longer be a machine, by your definition, by virtue of being conscious?

You see the tautology? Help us out of it.

[ October 08, 2002: Message edited by: galiel ]
galiel is offline  
Old 10-08-2002, 03:05 PM   #52
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

Vogelfrei

1) It is not at all clear that animal-type neurons with animal-type biochemistry cannot be replicated in a machine. Both functional replication and computer simulation are potential avenues for doing so.

2) Unless you are arguing that there is some property that animal neurons have which nothing else does, that gives rise to consciousness, I fail to see how you are arguing against the possibility of artificial intelligence.

A) I am frankly amazed that you would accuse me of being unscientific, when all you have to offer is the assertion "it is a brute fact that animal neurons have the property of causing consciousness." This is what is known as "giving up", and there is hardly a less scientific attitude. It is a possibility, but it is the last possibility, the position taken when all other avenues of investigation have been exhausted.

B) If by saying that I am "appealing to a computational explanation of consciousness" you mean I think that consciousness is the product of chain reactions of neural activity in the brain, then you are correct. As for your supposed problems:

I. The phrase "computation is in the eye of the beholder" appears to simply be a cute way of stating the second "problem."

II. That there are multiple coherent interpretational schemes for any particular system does not appear to be a problem for a computational explanation of consciousness. It might make figuring out exactly how the brain works difficult, but it has no more applicability to the issue at hand than it does to the ability of a computer to play chess.

III. Since a computational explanation of consciousness does not entail that computation is consciousness, the assertion that it would imply the universe is some sort of vast consciousness is ridiculous.

And, gratuitously:

3) As Searle asks, is your digestive system conscious? It instantiates computational processes, albeit in a nontraditional manner. If you think not, because it's not "complex" enough, specify which hidden magical cause it is that allows a system to jump from non-consciousness to consciousness once some particular threshold is passed.

3) As I said, it is a matter of complexity and organization. The question does nothing to increase my respect for Searle, as it is analogous to asking "Can Duke Nukem play chess?" Simply being as "complex" as the brain, by whatever measure you are using, does not imply the ability to do what the brain does, as complexity says nothing of organization. Is this not obvious? I am embarrassed that I have to explain it to you.

Edited to say: The way it looks from here, computation = a system + an interpretation, and that's how I've intended the term, above. Feel free to disagree, but make sure you do it explicitly.
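
(A minimal sketch of that definition, in Python; the toy state sequence and the mapping below are hypothetical, chosen only to illustrate what "a system + an interpretation" could mean, not to model anything in the thread.)

Code:
# A "system" here is just a bare sequence of physical states (say, voltage
# levels); an "interpretation" is a mapping an observer imposes on them.
# On the definition above, only the pair counts as a computation.
system = [0, 1, 1, 0, 1, 0]                # bare physical states
interpretation = {0: "OFF", 1: "ON"}       # observer-imposed meaning

computation = [interpretation[state] for state in system]
print(computation)  # ['OFF', 'ON', 'ON', 'OFF', 'ON', 'OFF']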
tronvillain is offline  
Old 10-08-2002, 03:53 PM   #53
Veteran Member
 
Join Date: Jan 2001
Location: India
Posts: 6,977
Post

Vogelfrei, you seem to believe that human brains alone can have consciousness. I would rather say that we have human-type consciousness. Machines, if they become conscious, would have machine-type consciousness, which might be very different from ours.
hinduwoman is offline  
Old 10-09-2002, 06:07 AM   #54
Junior Member
 
Join Date: Jul 2002
Location: NY
Posts: 37
Post

Quote:
Originally posted by galiel:

Vogelfrei, you have created a tautology. If anything is proven conscious, it is no longer a machine by your definition, so you are right. If a machine is proven conscious, it must have "a brain of the type humans have", and is thus no longer a machine, so you are right.
Here's what I wrote about what I meant by "machine":

"I thought it was obvious that my usage of the word 'machine' was shorthand for "non-standard non-biological machine"."

Clearly, if a non-biological thing was shown to be conscious, it would remain a machine, under the partial definition I previously provided, contrary to your baseless assertion about what I meant.

Dishonest debating tactics don't work too well online.

Quote:
What, exactly, is the vague "a brain of the type humans have"?
When a scientist or doctor sees the brain in a person's head (whether by MRI, or autopsy, or...), they aren't suddenly taken aback, wondering whether it's a human brain in there, or a sheep brain, or a fish brain...

Whatever identifying characteristics they use to distinguish between human brains and brains of other species, I use. I don't care to write an expository essay about them. The characteristics may be vague, but they clearly distinguish human brains from silicon wafers, and if they can distinguish a human brain from at least one thing, reasonable arguments against functionalism can be made.

Quote:
Surely not the specific human carbon-based biology, because there is nothing inherent in nature, to our current knowledge, that prohibits non-carbon-based forms of life.
Dishonest debating tactic #2: Who's talking about life?

I agree that under current (vague) definitions, all sorts of non-biological organisms could qualify as being alive. But life isn't consciousness, so there isn't any point to what you've said.

As for consciousness, maybe it is carbon-based physiology, or, as I previously suggested, animal-type biochemistry and organization (a further refinement of the carbon-based requirement). There isn't any evidence for the proposition, and there isn't any evidence against it. It's got to be the fifth time I've said it, but everyone seems to prefer arguing over reading: BECAUSE THERE IS NO EVIDENCE FOR OR AGAINST ANY PROPOSITION ABOUT THE NECESSARY/SUFFICIENT CAUSES OF CONSCIOUSNESS BEYOND HUMAN BRAINS (or beyond each subject's individual brain, if you want to split hairs), ANY BELIEF ABOUT CONSCIOUS MACHINES, OR THE IMPOSSIBILITY OF SUCH MACHINES, IS TAKEN ON FAITH.

Quote:
And surely not the relative complexity, since it is only a matter of time before "machines" have the equivalent processing power. In fact, it will soon (most likely within our lifetimes) be possible, if anyone wanted to bother, to create an exact copy of the human brain, neuron by neuron, synapse by synapse, impulse by impulse, in silicon rather than meat.
Dishonest debating tactic #3: An exact copy of a thing is indistinguishable from the original. What you really mean is "functionally/computationally equivalent copy".

What's the difference? I can encode the molecular structure of a steak in a computer...but I can't derive nutrition from the hard drive. You've committed a grievous category error.

I agree that the computational power of computers will exceed that of humans eventually. I don't see what that has to do with anything. If my claim is that substrate could have something to do with consciousness, then no amount of computer power will be able to mimic a substrate insofar as the computer isn't built using that substrate; if it were, then there wouldn't be any need to mimic it.

Quote:
What about this "machine" would prevent it from being "conscious", if not, ultimately, an irrational belief in an intangible "soul"?
Dishonest debating tactic #4: Strawman.

My claim is only that it's possible that organizational principles such as the substrate upon which a putatively-conscious machine/organism is built may determine whether that machine/organism is conscious. This is no more mysterious than any other causal process (which is, at bottom, pretty mysterious, since ALL causal processes don't have "explanations"; at bottom, they "just work that way"). No "hidden property" is needed beyond the properties which any physical molecular arrangement exhibits.
Vogelfrei is offline  
Old 10-09-2002, 06:54 AM   #55
Junior Member
 
Join Date: Jul 2002
Location: NY
Posts: 37
Post

Rather than give a full response to your post, I'll simply respond to what seems to me to be a false categorization of my views. I plan on separately posting a clear and uncluttered logical argument, expounding my position, since no one seems to be able to attempt a reasonable exegesis of my position.

Quote:
Originally posted by tronvillain:
Vogelfrei

1) It is not at all clear that animal-type neurons with animal-type biochemistry cannot be replicated in a machine. Both functional replication and computer simulation are potential avenues for doing so.
Replication = exact replication
Functional replication /= exact replication
Simulated replication = functional replication
Simulated replication /= exact replication

If you don't think so, then consider whether a functional replication (molecular/structural model) of a steak provides nutrition. Note that this isn't asking for something that mimics the steak's effect on human physiology; that's a different animal.

If you think a molecular/structural model of a steak, in 1s and 0s, isn't what parallel reasoning demands, and that mimicking the steak's effect on human physiology is required, then I further require that your functional replication be indistinguishable from a real steak in every way. That is to say, using ideal scientific tools, I want to view the substrate upon which your functional replication is based and say "yes, this steak is made of carbon, just like a real steak". Furthermore, since functionalism suggests that there's no difference between substrates, I require you to make the steak without using carbon.

That's impossible, by definition, isn't it! Functionalism can only show that different instantiations of a thing are computationally equivalent.

Quote:
2) Unless you are arguing that there is some property that animal neurons have which nothing else does, that gives rise to consciousness, I fail to see how you are arguing against the possibility of artificial intelligence.
I've already provided my argument regarding how animal neurons might be the only possible substrate upon which consciousness can rest, and I said nothing of hidden properties. By dismissing my argument out of hand, as you are now, you only beg the question of functionalism.

Quote:
A) I am frankly amazed that you would accuse me of being unscientific, when all you have to offer is the assertion "it is a brute fact that animal neurons have the property of causing consciousness."
I've never asserted that.

Here are two representative statements that I've made:

"Maybe neurons with animal-type organization and biochemistry are required."

"I've already specified a potential physical property which cannot be replicated in a machine: animal-type neurons with animal-type biochemistry."

Thus, I suggested one possible property that a computer couldn't have, and suggested that it COULD be the case (note the "maybe") that that property is necessary for consciousness.

Feel free to review all my posts on this subject to verify my assertion. I pride myself on trying to understand the arguments of those whom I discuss issues with, but apparently you don't care to do the same. My argument has rejected the possibility of knowing whether computers can be conscious, from the beginning; asserting the necessity of animal neurons for consciousness would be a ludicrous thing for me to do, wouldn't it?

Quote:
B) If by saying that I am "appealing to a computational explanation of consciousness" you mean I think that consciousness is the product of chain reactions of neural activity in the brain, then you are correct.
Append "qua those chain reactions instantiate computation" to accurately convey my meaning. This is different from the bare fact of physical activity.

Quote:
I. The phrase "computation is in the eye of the beholder" appears to simply be a cute way of stating the second "problem."
Far from it. This problem concerns the ontological nature of the interpretational schemes themselves; that is, whether they're real things in the first place, or just imaginings unrelated to the systems they interpret. That's quite different from how many interpretational schemes there are.

Facile dismissal of complex points never got anyone anywhere in philosophy.

Quote:
II. That there are multiple coherent interpretational schemes for any particular system does not appear to be a problem for a computational explanation of consciousness. It might make figuring out exactly how the brain works difficult, but it has no more applicability to the issue at hand than it does to the ability of a computer to play chess.
Computers don't play chess. Computers instantiate physical reactions to which you apply an interpretational scheme (in this case, one which converts physical reactions into chess moves). Computers similarly do many more things for which interpretational schemes could be applied, and there are alternate interpretational schemes that could be applied to the physical reactions you interpret as chess-playing, but those things go conveniently ignored. They MUST, however, be addressed, if computation is sufficient for consciousness (I only appear to have one consciousness, although there are infinite interpretational schemes that could be applied to the system that is my brain), and the only way I see to address them is to claim that interpretational schemes are real if and only if we believe in them. But that runs into even larger problems, like how consciousness ever originated...

Quote:
Since a computational explanation of consciousness does not entail that computation is consciousness, the assertion that it would imply the universe is some sort of vast consciousness is ridiculous.
This looks like a non-sequitur to me. My claim is that an interpretational scheme exists that yields a hyper-intelligent universe; if computation is sufficient for consciousness, then the universe must be conscious (= God).

Quote:
3) As Searle asks, is your digestive system conscious? It instantiates computational processes, albeit in a nontraditional manner. If you think not, because it's not "complex" enough, specify which hidden magical cause it is that allows a system to jump from non-consciousness to consciousness once some particular threshold is passed.
3) As I said, it is a matter of complexity and organization.

Either these are issues of the system, or of the interpretational scheme. However...it's logically possible to create an interpretational scheme that yields just about anything for ANY system, because interpretational schemes are arbitrary (just say "X event means Y", and make Y whatever you like; unlike computers meant for productive human ends, there's no requirement that the interpretational scheme be useful). And since system + interpretation = computation, you would need to claim that computation isn't sufficient for your above statement to mean anything...
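
(A minimal sketch of the point about arbitrary schemes, in Python; both mappings are invented purely for illustration: the same bare event sequence reads as chess moves under one scheme and as digestive events under another.)

Code:
# The same system (a bare event sequence) under two arbitrary
# interpretational schemes; nothing in the events themselves picks one out.
events = ["A", "B", "A", "C"]

as_chess     = {"A": "e4", "B": "e5", "C": "Nf3"}
as_digestion = {"A": "secrete enzyme", "B": "absorb", "C": "contract"}

print([as_chess[e] for e in events])      # ['e4', 'e5', 'e4', 'Nf3']
print([as_digestion[e] for e in events])  # ['secrete enzyme', 'absorb', 'secrete enzyme', 'contract']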

Quote:
The question does nothing to increase my respect for Searle, as it is analogous to asking "Can Duke Nukem play chess?"
I don't see any relationship between my claim and that.

Quote:
Simply being as "complex" as the brain, by whatever measure you are using, does not imply the ability to do what the brain does, as complexity says nothing of organization.
All the brain does is physically react. If you claim that complexity is a question of its physical organization qua physical organization, then you're arguing the position which I suggested as possible and you rejected as impossible. If you claim that complexity is a question of interpretational scheme, then I respond that the digestive system is as simple or complex as I want, depending on the scheme I choose. If you claim that complexity is a question of computation, then my response is the same.

[ October 09, 2002: Message edited by: Vogelfrei ]
Vogelfrei is offline  
Old 10-09-2002, 06:56 AM   #56
Junior Member
 
Join Date: Jul 2002
Location: NY
Posts: 37
Post

Hmm.

Given that, after posting the formal exposition I promised, I'd probably have at least a handful of people vying to respond, and given that I doubt I'd have sufficient time to respond fully and fairly to all of them, it seems that won't work out.

However, I'd still like to continue discussing this issue. Therefore, I wonder if it's possible to organize a formal debate, on the formal debate board, on the topic. That way, I would be limited to responding to only a single person, which would be acceptable for me in terms of time investment, and would allow me to focus more closely on the argument.

I don't know how to officially propose a formal debate, but I imagine the first prerequisite is a debate partner. Are any of my adversaries here interested?

[ October 09, 2002: Message edited by: Vogelfrei ]
Vogelfrei is offline  
Old 10-09-2002, 07:22 AM   #57
Veteran Member
 
Join Date: Mar 2001
Location: Portsmouth, England
Posts: 4,652
Post

So if I have a ladder made of wood and you show me a ladder made of steel I can just say "that's not a ladder because it's not made of wood" and my justification for that claim could validly be that "maybe a prerequisite for being a ladder is to be made of wood"?

Seems like a strange way of arguing to me.

I decline an invitation to formally debate you until I can work out wtf you are talking about!

Amen-Moses
Amen-Moses is offline  
Old 10-09-2002, 07:32 AM   #58
Banned
 
Join Date: Jun 2002
Location: Montréal
Posts: 367
Post

This may be a case where the ladder made of wood is best suited to work in a vineyard or near winepresses, and so it will never be a ladder if it is not made of wood.

The definition, if put in tautological form, adheres completely to its implication. The reason the definition comports is what gives it its truer sense.

It seems that, in comparison to human consciousness, the substrate or the base mass may play an important part in attaining human-like consciousness. Remember that human consciousness at this point also entails building machines to mimic itself.

Sammi Na Boodie ()
Mr. Sammi is offline  
 
