FRDB Archives

Freethought & Rationalism Archive

Old 03-27-2003, 07:37 PM   #1
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Default Imputing a creature's process of mind

I went to the Baltimore Aquarium today. It struck me that all the creatures that had eyes could see (something). Assuming they could see something using a sophisticated mechanism, I concluded that all creatures that have eyes (likely) use them to collect and memorize visual impressions of their environment. If they had no memory from their senses they would bump into the sides of the tanks.... Anyway, I just felt that the shark *knew* I was there - not necessarily in a human-consciousness way, but in the same way that it senses when its prey are aware that it's near - hence it develops its hunting tactics based on the likely awareness of its prey that they *know* the shark is there. This is a repeat of a thought sequence that occurred to me after staring into a jaguar's eyes at Edinburgh Zoo some years ago.

I have no evidence of a biological nature regarding how sharks, turtles, puffins, sea horses, fish, etc. actually think, let alone what they think. However, from the train of reasoning above, in which I have tried to be careful to avoid anthropomorphism, it seems clear to me that most creatures have a very sophisticated awareness of their surroundings, of the behavior of other creatures in those surroundings, and of their relationship to those other creatures.

1. Do you think my line of thinking unreasonable?
2. If you think my line of reasoning unreasonable, are you misjudging me in a similar manner to my misjudging the above creatures?

Cheers, John
John Page is offline  
Old 03-27-2003, 07:51 PM   #2
Regular Member
 
Join Date: Feb 2003
Location: Denmark
Posts: 122
Default

In any case you will suffer from what Chalmers calls the pre-experimental bridging principle. That is, according to crude behaviorism (or any other empirical investigation, for that matter) you can only assume (without any reasonable scientific or philosophical foundation*) what happens phenomenologically within the mind of the creature. A point well presented in A.I. (you know, the Steven Spielberg movie). Is the boy conscious or not? How would you determine it?

*Unreasonable, since the principle was determined before the experiment and without the possibility of verification. This is not usually how science is conducted.

More another day, as the time in Denmark is 4:51 in the morning (and I'm slightly pissed).

Cheers Frotw
Frotiw is offline  
Old 03-27-2003, 07:56 PM   #3
Regular Member
 
Join Date: Feb 2003
Location: Denmark
Posts: 122
Default

Damn, almost forgot. What I also meant to say was that you can easily create a simple robot that is almost certainly not aware of anything, yet is still able to navigate without conscious memory or awareness.
Frotiw is offline  
Old 03-28-2003, 09:09 AM   #4
Senior Member
 
Join Date: Jan 2003
Location: Switzerland
Posts: 889
Default

Perfectly reasonable - what's your problem?
Quote:
*Unreasonable, since the principle was determined before the experiment and without the possibility of verification. This is not usually how science is conducted.

So what? This doesn't make it unreasonable, just less fit for use.
DoubleDutchy is offline  
Old 03-28-2003, 10:34 AM   #5
Junior Member
 
Join Date: Feb 2003
Location: Chicago
Posts: 95
Default Re: Imputing a creature's process of mind

John,

The phenomenon you are describing may very well be happening to you. I've read in more than one place that human vision is drawn to vertical symmetry. The supposition is that if another creature is looking at me, it's either going to try to attack me, try to avoid being my lunch, or recognize me as a harmless stranger, ally, or mate. All good things to know for survival. So it's instinctive to find meaning and intent when we meet eye to eye with another animal.

That being said, I think your line of thinking is reasonable. I don't think it's unreasonable to start with a feeling about something and then try to figure out what could have inspired that feeling. The real trick is to develop experiments that would prove or refute your idea, and then to submit your findings for review by others.

frotiw made this interesting point:

Quote:
Damn, almost forgot. What I also meant to say was that you can easily create a simple robot that is almost certainly not aware of anything, yet is still able to navigate without conscious memory or awareness.
frotiw, I'd like to know what you think about this: your simple robot, for instance, hits a wall and turns around until it hits another obstacle. That's what it's built to do. Could we say that the robot is as aware as it needs to be, as aware as its programming allows it to be?

What if we upgrade? Every time it hits an obstacle, it remembers where that obstacle is in relation to the last obstacle and so on. Assuming the obstacles don't move from day to day, every time the robot is let loose it navigates a little better until it has essentially mapped out all the obstacles. Would that be awareness of a sort?

Take it one further. Now our robot has to avoid moving obstacles. To do so, it is able to sense the direction and speed of an obstacle and anticipate its trajectory. It acts on this information and avoids the moving obstacle. Now, not only does it have an internal map of its surroundings, it can also anticipate the actions of others. Have we crossed a line into awareness yet?
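
Just to make the three levels concrete, here is a minimal sketch in Python (illustrative only - the class names and the "turn"/"evade" actions are made up, and the odometry that would keep the robot's position current is omitted):

Code:
# Sketch of the three robots described above. All names are invented.

class BumpBot:
    """Level 1: hits an obstacle, turns. No memory at all."""
    def step(self, bumped):
        return "turn" if bumped else "forward"

class MappingBot(BumpBot):
    """Level 2: remembers where obstacles were on earlier runs."""
    def __init__(self):
        self.position = (0, 0)     # kept current by the omitted odometry
        self.obstacle_map = set()  # positions of known obstacles

    def step(self, bumped):
        if bumped:
            self.obstacle_map.add(self.position)  # remember the obstacle
        if self.position in self.obstacle_map:    # avoid known obstacles
            return "turn"
        return super().step(bumped)

class AnticipatingBot(MappingBot):
    """Level 3: predicts where a moving obstacle will be next tick."""
    def step(self, bumped, obstacle=None, velocity=None):
        if obstacle is not None:
            # Linear extrapolation of the obstacle's trajectory.
            ahead = (obstacle[0] + velocity[0], obstacle[1] + velocity[1])
            if ahead == self.position:
                return "evade"  # act on the forecast, not the collision
        return super().step(bumped)

bot = AnticipatingBot()
print(bot.step(False, obstacle=(1, 0), velocity=(-1, 0)))  # -> "evade"

Each level adds a capability - memory, then prediction - yet nothing in the code obviously corresponds to "feeling" any of it, which I suppose is frotiw's point.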

-Neil
Neilium is offline  
Old 03-28-2003, 11:10 AM   #6
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Default Black Box

Frotiw:

Yes, you point out a major difficulty. What I'm trying to figure out is whether one can reach the same conclusions as "mentally occurred" to me using, say, a black box approach (where one does not concern oneself with exactly what or how the creature is thinking) and deductive reasoning.

Maybe something like:

1. I know that things happen through cause and effect.
2. I know that animals comprise muscles that are operated through signals from a nervous system, but I do not know how the nervous system determines what signals to transmit.
3. As for item 2, but related to the incoming sense data.
4. By correlating stimulus and action over time, one can build up a picture of the inferences that must necessarily have been made by the creature's brain.

I think this is methodological behaviorism, i.e. the scientific study of behavior. I guess all I'm trying to impute is that the cause of (most of) the behavior is the creature's nervous system.
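
As a toy illustration of step 4 (a minimal sketch in Python; the stimulus/action log and all the names in it are invented for the example), one could tally how often each stimulus is followed by each action and read the strongest pairings off as the black box's inferred rules:

Code:
from collections import Counter, defaultdict

# Hypothetical observation log of (stimulus, action) pairs recorded
# while watching the creature. The data here is invented.
observations = [
    ("prey_scent", "approach"), ("prey_scent", "approach"),
    ("large_shadow", "flee"), ("prey_scent", "circle"),
    ("large_shadow", "flee"), ("glass_wall", "turn"),
]

# Count how often each stimulus is followed by each action.
tallies = defaultdict(Counter)
for stimulus, action in observations:
    tallies[stimulus][action] += 1

# The black-box "picture": for each stimulus, the creature's most
# frequent response, read off as an inferred stimulus -> action rule.
for stimulus, actions in tallies.items():
    action, count = actions.most_common(1)[0]
    print(f"{stimulus} -> {action} ({count}/{sum(actions.values())} of trials)")

This only recovers stimulus-response regularities, of course - the inferred rules stay strictly at the level of behavior.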

Cheers, John
John Page is offline  
Old 03-28-2003, 12:46 PM   #7
Regular Member
 
Join Date: Feb 2003
Location: Denmark
Posts: 122
Default

I just read the OP through again and I enjoy your very careful approach of not concluding right away whether animals think or not. My point is that one should not confuse the concept of thinking we apply to ourselves with the concept of thinking we apply to animals and robots. As I understand it, J. Page has noted this, so the objection is perhaps targeted more at others. I think you (J. Page) may conclude that the creature is able to navigate in its environment, and that's it. The part really interesting to philosophers is untouched: the part about awareness, namely phenomenological awareness.

As for Neilium and whether the robot would possess awareness of some sort: no, I don't think so. At least not in the way we usually mean when we speak of awareness. The usual objection, which really should be read from e.g. Chalmers' papers*, is that besides being able to navigate the world we also have the awareness of navigating the world. There is no reason to assume the robot actually "feels" anything while navigating.

Methodological behaviorism is undoubtedly a rewarding and good empirical method, but one should always keep in mind that the area of investigation is behavior and only behavior. The problem arises when behaviorism is applied outside its field of investigation; then the method or science moves onto "very thin ice". Again, I stress that I am not sure how much I really disagree, as I am not completely sure how much is being proposed and claimed. One place I think there might be a problem is in point 4: "By correlating stimulus and action over time, one can build up a picture of the inferences that must necessarily have been made by the creature's brain."

If what is meant by "inferences" is simply the behavioral and, e.g., physical side of the explanation, there is no problem; only if "inferences" means phenomenological awareness are there major problems. To sum up the first objection again: there is no way to match the correlates with anything. The subject of investigation is simply not directly available for investigation. You may observe the physical act and then presume it is related to some conscious act, but you cannot verify it; it will remain an assumption. Assumptions are not uncommon in science, but they are often justified (mostly because they can be verified later on); the problem in this case is that the assumption is actually the core of what we want to investigate. Also, in this case the assumption cannot be cleared up later on. I hope I make myself clear.

*Find the link in the booklist; there is plenty of material VERY specific to this issue on Chalmers' page. Oh, I also recommend reading about the "Chinese Room" thought experiment, which is also related.

Cheers Frotiw
Frotiw is offline  
Old 03-28-2003, 01:39 PM   #8
Regular Member
 
Join Date: Feb 2003
Location: Croydon: London's Second City
Posts: 144
Default Re: Imputing a creature's process of mind

Quote:
Originally posted by John Page
I went to the Baltimore Aquarium today...
Very evocative opening. Better than "Call me Ishmael", or "Stately, plump Buck Mulligan..." IMO.

Hi, John!

After being cloistered up, and then trying to jam the "hard determinism" thread with my waffling, I might venture something along the lines of:
The shark's "knowledge" may be fired by stimuli, such as your putative resemblance to a light yet filling meal. Such knowledge is dependent on its past activities and on its reception of pertinent stimuli previously received (with evolutionarily derived behaviour thrown in). Our knowledge may be more useful than a shark's, in that we can recombine memories into novel mental objects, i.e. forecasts. This means we can extrapolate behaviour from a more limited set of real-world stimuli. Our extrapolations etc., insofar as they are mental objects, may be analogized as "awareness".
Or not, as the case may be.

Take care,
KI

PS You do know what's going to happen to you if you keep staring out predators?
King's Indian is offline  
Old 03-28-2003, 02:14 PM   #9
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Default Re: Re: Imputing a creature's process of mind

Quote:
Originally posted by King's Indian
This means we can extrapolate behaviour from a more limited set of real-world stimuli.
Cool! You're extrapolating my behavior of extrapolating the shark's behavior, whereas the shark has a more limited means to extrapolate my behavior (but nasty, sharp, bitey teeth and a mean streak a mile wide). Better brains = better tactics, all other things being equal, I guess.
Quote:
Originally posted by King's Indian
PS You do know whats' going to happen to you if you keep staring out predators?
Deconstruction?

Separately, you raise a good point - to what extent does one include internal behavior as opposed to externally observable behavior?

Cheers, John
John Page is offline  
Old 03-28-2003, 09:12 PM   #10
Junior Member
 
Join Date: Feb 2003
Location: Chicago
Posts: 95
Default

Quote:
Originally posted by Frotiw

....besides being able to navigate the world we also have the awareness of navigating the world. There is no reason to assume the robot actually "feels" anything while navigating.
Frotiw,

To be aware, must one be aware that one is aware? This sounds more like it applies to consciousness than to awareness. Maybe I'm making a distinction between the two that doesn't exist.

Is self awareness required for awareness?

My idea of "aware" was that the robots were able to collect information, albeit very rudimentary, about their environment and use it to take the appropriate action. I certainly don't think that such robots are conscious.

Also, thanks much for pointing me to Chalmers' site. What an astounding collection!

-best
-neil
Neilium is offline  
 
