FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Go Back   FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)
Old 05-12-2003, 11:11 PM   #81
Senior Member
 
Join Date: Aug 2000
Location: Chicago
Posts: 774
Default

Sorry about my last post above, I kept getting a "busy server" message whenever I tried to reenter the IIDB. So I couldn't edit the post.


Quote:
Originally posted by excreationist


Quote:


....But the process of reducing the "components" of thinking to their "lowest terms" seems to reach a stopping point in the case of mental phenomena, where subjective experiences, like pain for example, cannot be reduced to more fundamental terms.....

I think bodily pain signals are automatically given a strong priority in our brain for us to avoid. Depending on our other priorities we might endure the pain or discomfort (mild pain) for a while. Some people might seek pain out of boredom or the sense of justice it brings (if they're a martyr). Even if we don't avoid the pain, we can still be aware of it (though there is a small limit to how many things we can be aware of). This information can then be used in the future. e.g. we might feel back pains every now and then, but act on it later... So it's about priorities - "do's" and "do not's", things to seek and things to avoid - and bodily pain is by default a thing our brain is set to avoid. So that's how I reduce pain. I don't think it's just some sensation that can't be reduced at all. The reason why it "feels" bad is because I think "we" are our brains - we are the thing with priorities - things we *must* seek or avoid - and lesser priorities that we would prefer to seek or avoid.

While I would agree that the sensations of pain that we feel in our bodies have a physical cause, I wouldn't agree that that demonstrates that "we" (presumably, our "conscious selves") are nothing but our brains. What may actually be occurring during a sensation of pain in the body may be the result of the "mind" interpreting certain sequences of neurological impulses as pain.
But I would agree that the brain is "set" to avoid pain. In fact, it is my contention that pain triggers much of the same brain chemistry as other things that we instinctively seek to avoid, such as bitter tasting chemical elements and compounds. But of course, the research could prove me wrong.

Quote:

Quote:


....Mental/physical causality, for example, can occur in both directions. Mental states can produce physiological effects and vice versa....

This suggests that mental phenomena have a very physical basis... e.g. chemicals and injury can affect your mood or thinking ability... stress can lower the immune system (due to other priorities) and increase the heart rate and muscle tension (for fighting/running), etc. (stress is just the instinctual fight-or-flight response to threats), and calm thoughts decrease the heart rate and leave more resources to the rest of the body... (the brain uses a very large amount of resources, and intense thinking uses even more, I think... and the absence of stress involves the fight/flight response not happening).

But stress can have psychological causes, which would seem to suggest that an immune system whose resistance has been lowered due to the stress (a physiological effect), has its origin in mental phenomena. In other words, observation based reasons can be adduced for reducing the mental/physical relationship either way. So reductionism, in my view, should remain provisional.



Quote:

Quote:


But if mental phenomena cannot be reduced to physical phenomena, the actual relationship between consciousness and the brain may be more complex than it appears to be from the standpoint of Physicalist Reductionism.

If the "observer" is totally passive (though still non-physical) then things would be a little more complicated..... if the observer wasn't passive (i.e. they could think and act "outside the box") then it would be more complicated....

This is a good point. I'm not sure whether there is any way to determine, from the data of observation, whether the "observer" is passive. However, assuming that the "observer" is totally passive is tantamount to embracing epiphenomenalism, which has problems of its own.
jpbrooks is offline  
Old 05-12-2003, 11:20 PM   #82
Senior Member
 
Join Date: Aug 2000
Location: Chicago
Posts: 774
Default

Quote:
Originally posted by DRFseven


You don't think of the mechanism of sight or of pain as actually being the pain. I don't either, really, but it is convenient to speak of it that way. And that's the way I look at thinking, as well. For all practical purposes, it seems identified with its mechanism, though when I really reduce it down, I am aware that electrochemical firing is not the same thing as the chalupa dinner I am anticipating now. But if it's not the same thing, it surely causes it, don't you think?
In reductionistic terms, yes. From the standpoint of language usage, this problem is similar to that encountered when one attempts to describe all of one's experiences of things in the world in terms of "sense data" statements rather than in our usual way of describing the sensed things themselves. As in that case, it is much easier to avoid mentioning the intermediate steps in a causal sequence in the description of its causal relationships, even though the intermediate steps are actually there (i.e., they haven't been ontologically reduced out of existence).
jpbrooks is offline  
Old 05-12-2003, 11:53 PM   #83
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

jpbrooks:
While I would agree that the sensations of pain that we feel in our bodies have a physical cause, I wouldn't agree that that demonstrates that "we" (presumably, our "conscious selves") are nothing but our brains.
"We" being our brains is just my belief, I didn't mean to claim that my last post demonstrates/proves that.

....But I would agree that the brain is "set" to avoid pain.
I think it is actually set up to avoid those chemical pain signals (depending on their intensity).... its attempt at avoidance can have physical manifestations (like screams [for help], clenched muscles [to resist instinctive screaming, flinching], stomach-heaving, etc) - I think that is what gives pain its visceral physical-type sensation. Even if we aren't flinching or resisting flinching, we could be imagining ourselves doing it...

In fact, it is my contention that pain triggers much of the same brain chemistry as other things that we instinctively seek to avoid, such as bitter tasting chemical elements and compounds. But of course, the research could prove me wrong.
I think our bodily pain, as well as things like bitter/sour foods, etc., goes into a type of universal priorities area where our different priorities can be compared (to see if some outweigh others, and to determine the future course of action). The initial chemicals used to signal bodily pain or really bitter tastes might be different, but I think they are eventually translated into the same format so they can be compared.
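The "universal priorities area" idea above can be sketched as a toy model (purely illustrative, not a claim about actual brain chemistry; the function name, signal names, and numbers are all made up): signals of different origins are translated into one common signed-urgency format, and the strongest one wins the competition for behaviour.

```python
# Toy sketch of the "common priority format" idea: every signal, whatever
# its source, is reduced to a (label, signed urgency) pair so they can be
# compared directly. Negative = avoid, positive = seek.

def strongest_priority(signals):
    """Pick the signal with the greatest urgency (largest absolute strength)."""
    return max(signals, key=lambda s: abs(s[1]))

signals = [
    ("back pain",        -6),   # avoid, high urgency
    ("finish the essay", +4),   # seek
    ("mild hunger",      -2),   # avoid, low urgency
]
print(strongest_priority(signals)[0])  # → back pain
```

Whether the signal is a pain or a bitter taste no longer matters once it is in this format; only its strength does, which is the point being made above.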

But stress can have psychological causes,
I said "stress is just the instinctual fight or flight response to threats" - threats would be determined by the brain. They might be determined very rapidly though, much faster than it takes us to consciously realize what is going on. Or these could be threats that our conscious thinking (train of thoughts) leads us to recognize. (The threats could be imaginary or exaggerated though - e.g. stage fright, etc)

which would seem to suggest that an immune system whose resistance has been lowered due to the stress (a physiological effect), has its origin in mental phenomena.
I agree, although sometimes threats are recognized very rapidly... e.g. if there is a sudden noise that makes your heart race - although by the time it affects your immune system you would have consciously realized that there was an unusually loud noise.

In other words, observation based reasons can be adduced for reducing the mental/physical relationship either way.
You mean it works both ways? (mental affects physical and vice-versa) Well I agree.

So reductionism, in my view, should remain provisional.
I'm not saying reductionism is a "fact". It is my opinion.

....I'm not sure whether there is any way to determine, from the data of observation, whether the "observer" is passive....
In the far future we could see if the brain acts totally according to the rules of physics or not. If it doesn't, perhaps a person's "soul" is affecting it.... quantum physics might affect neurotransmitters a little (perhaps the "soul" interacts with neurotransmitters in that way).... if souls and God don't exist, I'd expect those quantum effects to be random, though very, very occasionally doing something unlikely. Perhaps a person with a lot of "free will" would have quantum phenomena happening in a very non-random way in their brain... so that their decisions vary from what they ordinarily would have been.
excreationist is offline  
Old 05-13-2003, 01:10 AM   #84
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

Quote:
Originally posted by mhc
To say that physical activities in the brain *cause* thoughts seems to lead back to dualism. How to avoid ending up with exclusive physical and mental realms again?
What if goal-oriented information (thoughts) is just a property of certain processes involving matter and energy?
That is like how rotation is a property that can exist in certain processes involving matter and energy... same with "wetness"... (I think)
I don't think quadism (four-ism) is needed to account for the existence of the physical, mental, rotation and wetness realms....

There's probably a flaw in that comparison somewhere... but I was assuming that the mental realm was just another aspect of the physical world, like how rotation and wetness are. Maybe rotation and wetness sound too physical compared to thoughts... well what about other patterns/similarities that exist in the physical world - e.g. similarities in quantities - there can be two apples or two galaxies. "Two-ness" seems quite non-physical, though I think it requires a physical world in order for the concept to exist. If no aspect of the concept applied to the physical world then the concept would be meaningless. Though things like infinity and perfection may not directly apply to the physical world, we can sense them to a lesser degree (near-infinity, near-perfection) and imagine our experiences involving greater quantities or more perfection. Or we could experience perfection in some areas (e.g. sense perfect red, etc) and then apply that concept to other areas that we haven't sensed directly - e.g. a perfect car.
excreationist is offline  
Old 05-13-2003, 04:58 AM   #85
Regular Member
 
Join Date: Oct 2001
Location: Sweden Stockholm
Posts: 233
Default Re: Re: Re: Mechanistic Thinking Process?

Quote:
Originally posted by DRFseven
But there are objections to those objections; just because we haven't produced AI to duplicate everything the human brain can do, doesn't mean that the essential component is an immaterial force.

Thinking is all about memory, which is defined neurologically as cell changes that reflect data input, and that, in turn, effect changes in behavioral output. Either this is an illusion (and behavioral information that perceptions depend upon previously learned information is false, as well), or our will is not free will.
Soderqvist1: Not everything in a neural system is computable!
In the set of all arithmetical systems, there are more true statements than a Turing machine can possibly prove according to its own defining set of rules; therefore, in arithmetical systems there are true statements which are computationally undecidable, but we know by non-computational means that these statements are true! Therefore the human mind is something more than a computational system or a Turing machine, because we can decide that arithmetical systems are incomplete, which is fundamentally undecidable by Turing machines!
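For reference, the incompleteness claim being invoked can be stated compactly (a standard textbook formulation, not Soderqvist's own wording): for any consistent, effectively axiomatized theory $T$ containing enough arithmetic, there is a sentence $G_T$ such that

```latex
T \nvdash G_T
\quad\text{and}\quad
T \nvdash \lnot G_T ,
```

yet $G_T$ is true in the standard model of arithmetic. Provability in $T$ is thus strictly weaker than truth, which is the gap the argument above tries to exploit.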

Susan Greenfield is at Oxford University; she is Peter Atkins's wife, and Atkins and Richard Dawkins are colleagues at Oxford University! Here are the references that my representation is based on!

Quote:
Neuroscientist Professor Susan Greenfield is the first woman director of the prestigious 200-year-old Royal Institution of Great Britain. She leads a team dedicated to finding out how the brain works and whose latest work has been to look at the connection between Parkinson's and Alzheimer's disease. Professor Greenfield was in Australia as a special guest of the Andrew Olle Memorial Trust Lecture and gave a public lecture in Sydney. The Trust raises money for research into brain cancer.

I would just like to indicate why biological brains are currently not like current artificial systems. First we have non algorithmic processes, that is to say we have commonsense and intuition, we don't necessarily think in a step by step algorithmic process.
http://www.abc.net.au/rn/science/ss/stories/s137294.htm


Jones and Wilson, An Incomplete Education
In 1931, the Czech-born mathematician Kurt Gödel demonstrated that within any given branch of mathematics, there would always be some propositions that couldn't be proven either true or false using the rules and axioms ... of that mathematical branch itself. You might be able to prove every conceivable statement about numbers within a system by going outside the system in order to come up with new rules and axioms, but by doing so you'll only create a larger system with its own un-provable statements. The implication is that all logical systems of any complexity are, by definition, incomplete; each of them contains, at any given time, more true statements than it can possibly prove according to its own defining set of rules.

Nagel and Newman, Gödel's Proof
Gödel showed that within a rigidly logical system such as Russell and Whitehead had developed for arithmetic, propositions can be formulated that are undecidable or un-demonstrable within the axioms of the system. That is, within the system, there exist certain clear-cut statements that can neither be proved nor disproved.

Hofstadter, Gödel, Escher, Bach
All consistent axiomatic formulations of number theory include undecidable propositions ...
Gödel showed that provability is a weaker notion than truth, no matter what axiom system is involved ...
http://www.miskatonic.org/godel.html
Peter Soderqvist is offline  
Old 05-13-2003, 07:22 AM   #86
mhc
Regular Member
 
Join Date: Mar 2003
Location: CA
Posts: 124
Default

Great links, Peter Soderqvist. Thanks!
I am reminded that there is more to the picture. The mental may or may not be reducible to the physical, but either way, the model of brain as computer is incomplete at best.

excreationist: Consciousness as a property of brain makes sense! It allows a causal relationship without the linear 'this, then that' relationship that didn't seem possible and wasn't helpful. Thank you!
mhc is offline  
Old 05-13-2003, 07:46 AM   #87
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

Peter Soderqvist:
Do you understand that there are two main approaches to AI? One is symbolic (high-level maths/logic based - the top-down approach) AI, the other is neural network type AI which is inspired by studying biological systems (genetic algorithm AI is similar) (the researchers work in a bottom-up way).
Gödel's theorem, etc., is what traditional (symbolic) AI is all about.
Let's say that a system had to prove that 1*2, 2*2, 3*2, 4*2, etc will never be equal to an odd number. So if an odd number is ever found, the search can be stopped. But if an odd number isn't found, the search needs to continue.
The AI program would try and prove or disprove:
For all x, (x*2) is not odd.
so for x=1, the answer is 2, not odd.
for x=2, the answer is 4, not odd.
for x=3, the answer is 6, not odd.
and so on, until it has checked all of the x's to make sure none of them are odd.
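The enumeration just described can be sketched in a few lines (illustrative only; the function name and the search bound are my own). A brute-force checker like this can refute "for all x, x*2 is not odd" by finding a counterexample, but it can never finish confirming it, so the sketch caps the search at a fixed bound:

```python
# Brute-force search for a counterexample to "for all x, (x*2) is not odd".
# If an odd double is ever found, the search stops; otherwise it must go on
# forever, so we stop at an arbitrary bound.

def find_odd_double(limit):
    """Return the first x <= limit with x*2 odd, or None if none is found."""
    for x in range(1, limit + 1):
        if (x * 2) % 2 == 1:   # is x*2 odd?
            return x
    return None                # no counterexample up to the bound

print(find_odd_double(10_000))  # → None: no counterexample below the bound
```

The cap is the whole point: a purely mechanical, step-by-step search cannot by itself deliver the "for all x" conclusion that a human sees immediately from the definition of even numbers.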
Neural networks work differently though... they are trained about things (and give wrong answers at first).... but after they are trained properly they can guess the answers for things they haven't been trained for... they can infer and predict things and handle distorted/noisy inputs.
In the case of odd/even example, a properly trained neural network might be able to take an educated guess (have a "hunch") that x*2 will never be odd. It could maybe be trained with similar (but different) questions...
I guess that example isn't very good though since neural networks haven't come very far as far as their capacity to deal with symbolic manipulation (abstract maths) goes... remember that currently artificial neural networks only have a few thousand or a few hundred thousand neurons... and our brains have about 100 BILLION, where each one is connected to about 10,000 others - and yet it takes many years of training/learning for us to learn about symbols and complex maths. That is what neural networks are about - they take a long time to train/develop - just like civilized people, circus animals, etc - though people can apply what they've learnt to other areas ("intuitively" infer things, etc).
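The train-by-example approach described above can be illustrated with the smallest possible case: a single artificial neuron (a perceptron) learning the logical AND function. This is a generic sketch, not any particular research system; the learning rate and epoch count are arbitrary choices. As the post says, it starts out giving wrong answers and is nudged toward the right ones:

```python
# Minimal single-neuron "neural network" (perceptron). Its weights start at
# zero, so early answers are wrong; each training pass adjusts the weights
# in proportion to the error until the examples are classified correctly.

def step(x):
    return 1 if x >= 0 else 0

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (a, c), target in samples:
            out = step(w[0]*a + w[1]*c + b)
            err = target - out
            # nudge weights toward the correct answer
            w[0] += lr * err * a
            w[1] += lr * err * c
            b += lr * err
    return w, b

# train on the logical AND function
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([step(w[0]*a + w[1]*c + b) for (a, c), _ in samples])  # → [0, 0, 0, 1]
```

Nothing in the code states the rule "output 1 only when both inputs are 1"; the behaviour emerges from repeated correction, which is the contrast with rule-driven symbolic systems being drawn above.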
http://www.newscientist.com/hottopics/ai/gambler.jsp
This talks about a neural network used for gambling that is a lot better than the best human tipster. It basically learns patterns and predicts how games will go. It isn't programmed about exactly what strategies to take, like Deep Blue (the chess computer) is.... it simply arrives at a prediction in one go. Many of the neurons work together simultaneously doing simple additions and multiplications, etc. In the left-hand column of that link it has more news stories about neural network developments.
(For an introduction to neural networks see this link)
excreationist is offline  
Old 05-13-2003, 07:50 AM   #88
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

Quote:
Originally posted by mhc
excreationist: Consciousness as a property of brain makes sense! It allows a causal relationship without the linear 'this, then that' relationship that didn't seem possible and wasn't helpful. Thank you!
I'm glad you got something from that... lately I've been having trouble comprehending it all.
excreationist is offline  
Old 05-13-2003, 08:56 AM   #89
Junior Member
 
Join Date: Mar 2003
Location: Indianapolis,Indiana
Posts: 27
Default

I think Gödel's proofs, especially the second one (or, as it's classically listed, the corollary to the first), have been largely accepted by most analytic philosophy folks. Yes, it does imply limiting factors for math-based AI discussions. But it's not totally clear if it is THE totally limiting factor in either math-based logic systems or in machines that compute past infinity, or so I have been told.
Excreationist - You're right, there are two approaches to AI. The latter approach of a build-up from scratch also has several methods we are looking at. They have obvious applications to supercomputing. The problem with this research is that the money has to come from somewhere to do it. And the dollars really are limited. You would think that IBM or Microsoft would lead that parade. Instead it seems to be the Japanese that have the biggest handle on this.
DRFseven - On reviewing my post I had confusion on my part about your definition of "Mechanistic". It occurs to me that your definition would include a post-genetic process. Sorry, I blew that one!
Cobrashock
cobrashock is offline  
Old 05-13-2003, 09:22 AM   #90
Junior Member
 
Join Date: Mar 2003
Location: Indianapolis,Indiana
Posts: 27
Default

Anyone know whatever happened to the NASA research projects that were looking at ways for planetary probes to teach themselves how to react to new situations? They don't list it on NASA's web site anymore.
cobrashock
cobrashock is offline  

This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.