FRDB Archives

Freethought & Rationalism Archive

Old 11-13-2002, 01:52 PM   #1
Banned
 
Join Date: Sep 2002
Location: Fall River, N.S.
Posts: 142
Post The 'meaning' of Meaning

What is 'meaning'? Does the word have any application in AI? How does a biochemical mechanism create meaning out of data? What is the mechanical explanation for 'significance', and for the existence of signs and symbols? How does a brain "understand"? Can a computer understand? What is 'understanding', anyway?

All questions I can't answer. Any suggestions?

--pickle
picklepuss is offline  
Old 11-14-2002, 04:45 AM   #2
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
What is 'meaning'?
Meaning is about symbols... e.g. a spoken or written word referring to objects and properties and relationships and behaviours...
or memories of those things or hypothetical instances of those things...

Quote:
Does the word have any application in AI?
In AI they sometimes talk about "semantics" vs. syntax... e.g. the following sentence of Chomsky's is apparently syntactically (grammatically) correct but meaningless - "Colorless green ideas sleep furiously". In AI you could also try to decipher "natural language" (everyday English) and have the program access its database and learn according to the meaning of what you're saying. Sometimes meaning can be ambiguous, and that causes problems in AI... e.g. shorthand like "it" or "she/he" can be ambiguous... so there can be problems in AI about what particular words (or "tokens") refer to.
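To make the ambiguity point concrete, here's a toy Python sketch (my own made-up illustration, not any real NLP system) of the simplest possible rule - resolve "it" to the most recently mentioned known noun - and of how it goes wrong:

Code:
# Toy illustration (made up for this post): resolve "it" to the most
# recently mentioned known noun, to show why shorthand tokens are
# ambiguous for a program.
KNOWN_NOUNS = {"dog", "ball", "box"}

def resolve_pronouns(words):
    last_noun = None
    resolved = []
    for w in words:
        if w == "it" and last_noun is not None:
            resolved.append(last_noun)   # guess: "it" means the latest noun
        else:
            resolved.append(w)
            if w in KNOWN_NOUNS:
                last_noun = w
    return resolved

print(resolve_pronouns("the dog saw the ball and it barked".split()))
# The recency rule picks "ball", but "barked" tells a human that "it"
# must be the dog - resolving the token needs knowledge of the world.

The rule only looks at the surface tokens, which is exactly why the program can't tell what "it" refers to.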

Quote:
How does a biochemical mechanism create meaning out of data?
In the same way that animals can learn to associate the smell of smoke with an unseen fire, etc, we can also learn to associate spoken and written words with objects, properties, etc. We can also infer things (like neural networks - which is a field of AI) - and actively learn...
Anyway, I think an important thing about "creating meaning out of data" is that the signal (the symbols) are *used* for some purpose - to pursue subgoals and goals - which are ultimately motivated by fundamental instinctual desires... (e.g. the desire for a certain amount of newness - and stability/coherence, etc)

Quote:
What is the mechanical explanation for 'significance', and for the existence of signs and symbols?
Symbols are quite arbitrary, though sometimes they are very appropriate, e.g. "moo" representing the sound a cow makes (it sounds like "moo"). Signs and symbols allow us to communicate with one another in complex ways and transfer patterns of experience to each other. e.g. if a person knows what "boots" are and about "cats", etc, they could be told about a new concept - a cat who wore boots... or they could plan how to build a city, using lines to represent streets. It has survival advantages... I've heard from various sources that our ancestors killed off all their close relatives - so now our closest relatives are chimps... even about 2 or 3 million years ago we apparently had stone-age technology, but the ones who could use symbols (use language to communicate) would have had the advantage.... imagine a fight between two groups - one that could communicate among its members and another that couldn't talk or even use hand gestures... assuming the communication is used to its full potential, the group with the better communication would have a huge advantage. (And the fittest survive...)

Quote:
How does a brain "understand"? Can a computer understand? What is 'understanding', anyway?
I think understanding is a self-motivated activity which involves an intelligent thing analysing and reanalysing how something else works. It would possess a collection of all the patterns that summarize that system. It would probably also need to have learnt those patterns itself (so they're not instinctual or preprogrammed...), which also means it's capable of adapting if the system those patterns describe changes dramatically...
excreationist is offline  
Old 11-14-2002, 05:34 AM   #3
Veteran Member
 
Join Date: Oct 2001
Location: Canada
Posts: 3,751
Post

Have you looked at the famous Hilary Putnam paper of the same name as your post? (modulo the quotation marks)
Clutch is offline  
Old 11-14-2002, 06:30 AM   #4
Banned
 
Join Date: Sep 2002
Location: Fall River, N.S.
Posts: 142
Post

Thanks very much, excreationist. I get most of it, but I'm still not sure I grasp where neurons form associations that create abstraction, symbolism, meaning, and understanding. Does it take place at a molecular, or at an atomic, or at a quantum level?

Thanks for the tip on Putnam, clutch. I'd never heard of her. I'll do a web search.

--pickle
picklepuss is offline  
Old 11-14-2002, 05:08 PM   #5
Veteran Member
 
Join Date: Oct 2001
Location: Canada
Posts: 3,751
Post

Him, actually.

I'd read about five Putnam papers before I figured out that she was a he.
Clutch is offline  
Old 11-14-2002, 06:37 PM   #6
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by picklepuss:
Thanks very much, excreationist. I get most of it, but I'm still not sure I grasp where neurons form associations that create abstraction, symbolism, meaning, and understanding. Does it take place at a molecular, or at an atomic, or at a quantum level?
Well that is quite a complex thing to think about... but I'll try and answer it...


[Image: a basic neural network.] It has 3 inputs and 2 outputs, and the individual neurons have between 1 and 3 inputs and outputs...
In case you didn't know, this is how neural nets work:
The inputs of a neuron are "summed up" and if they cross a threshold, the neuron fires and produces an output, which then becomes input for other neurons. e.g. say the threshold for a neuron is +1 and it has 4 inputs with weightings of 2, 3, -2 and 1. Let's say the first two inputs weren't on but the last two were - this would give a total of -1 (-2 + 1), so the neuron wouldn't fire (-1 isn't above the +1 threshold).
Sometimes neurons work in an analog way rather than a binary one, so they partially fire depending on the sum of their inputs. Our brain uses both types I think (analog and binary - I'm not sure what the medical term is).
And our brain has about 100 billion neurons with each one being connected to about 10,000 others.
Another thing about neural networks is that they are "trained"... i.e. the weightings of the neuron inputs are adjusted so that the network gives the correct (or almost correct) output based on the input it was just exposed to.
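To make the threshold idea concrete, here's a minimal Python sketch of a single binary threshold neuron. The weightings 2, 3, -2, 1 and the +1 threshold are just the made-up figures from the example above:

Code:
# Minimal sketch of one binary threshold neuron, using the made-up
# numbers from the example above.

def neuron_fires(active_inputs, weights, threshold):
    # Sum the weightings of only those inputs that are currently "on"
    total = sum(w for on, w in zip(active_inputs, weights) if on)
    return total > threshold   # fire only if the sum gets above the threshold

weights = [2, 3, -2, 1]
active = [False, False, True, True]      # first two inputs off, last two on
print(neuron_fires(active, weights, +1)) # False: -2 + 1 = -1, not above +1

Training a network just means nudging those weightings until the outputs come out right for the inputs it's being trained on.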

About associations and neural networks.... there is a kind of neural network called a "bidirectional associative memory"... I don't really understand the maths involved - neural networks involve lots of maths - though I can follow the basic kind where the inputs are summed up against a threshold.

from Associative Memories (http://www.comp.nus.edu.sg/~pris/AssociativeMemory/AssociativeMemoryContent.html):
Quote:
An associative memory is a content-addressable structure that maps a set of input patterns to a set of output patterns. There are two types of associative memory: autoassociative and heteroassociative. An autoassociative memory retrieves a previously stored pattern that most closely resembles the current pattern. In a heteroassociative memory, the retrieved pattern is, in general, different from the input pattern not only in content but possibly also in type and format.

...The network structure of the bi-directional associative memory (BAM) model is similar to that of the linear associator but the connections are bi-directional, i.e. BAM allows forward and backward flow of information between the layers. The BAM model can perform both autoassociative and heteroassociative recall of stored information.
Bidirectional Associative Memory (http://www.comp.nus.edu.sg/~pris/AssociativeMemory/BidirectionalAssociativeMemory.html) has the maths for it...

Here is some source code which demonstrates bidirectional associative memory in action (http://www.geocities.com/CapeCanaveral/1624/bam.html), and here are the programs and source code files (http://www.geocities.com/CapeCanaveral/1624/nn.zip).
Quote:
TINA -> P:*[HBQ | TINA -> 6843726
ANTJE -> !_+87&9 | ANTJE -> 8034673
LISA -> )@;ZV)- | LISA -> 7260915
6843726 -> SP4^L | 6843726 -> TINA
8034673 -> ^;GI# | 8034673 -> ANTJE
7260915 -> A-/%1 | 7260915 -> LISA
TINE -> CW48F ^ | TINA -> 6843726
ANNJE -> @ZB%8Q! | ANTJE -> 8034673
RITA -> H0(@=^/ | DIVA -> 6060737
So you can give it names as inputs and it retrieves the closest-matching name and the associated number as output, or vice versa.
(The program isn't interactive, though it could have been made so.)
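For a rough feel of the maths without running the linked program, here's a small Python/numpy sketch of a Kosko-style BAM. The bit patterns are made-up stand-ins for the name/number pairs above, not the codes the demo actually uses:

Code:
# Rough sketch of a Kosko-style bidirectional associative memory (BAM).
# Pattern pairs are stored as bipolar (+1/-1) vectors in a single weight
# matrix; recall works in both directions by multiplying and thresholding.
import numpy as np

def bipolar(bits):
    # turn a 0/1 list into a +1/-1 vector
    return np.array([1 if b else -1 for b in bits])

# two associated pattern pairs (made-up stand-ins for name <-> number)
pairs = [
    (bipolar([1, 0, 1, 0, 1, 0]), bipolar([1, 1, 0, 0])),
    (bipolar([1, 1, 0, 0, 1, 1]), bipolar([0, 1, 0, 1])),
]

# Hebbian-style weight matrix: sum of outer products of the stored pairs
W = sum(np.outer(x, y) for x, y in pairs)

def recall_forward(x):
    # recall the associated y-pattern from an x-pattern (possibly noisy)
    return np.sign(x @ W)

def recall_backward(y):
    # recall the associated x-pattern from a y-pattern
    return np.sign(W @ y)

noisy = bipolar([1, 0, 1, 0, 1, 1])   # first x-pattern with one bit flipped
print(recall_forward(noisy))          # recovers [1, 1, -1, -1], the paired y-pattern
print(recall_backward(pairs[0][1]))   # recovers the original x-pattern

Even with a bit flipped in the input, it settles on the stored pattern it most closely matches - the same "closest match" behaviour as in the output above.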

Associations are fuzzy things really... there can be "strong" associations or "weak" associations... they involve patterns or relationships - or probabilities...

I think it has to do with the setup of a neuron and what thing it will trigger depending on its inputs... the thing it triggers is what was "associated" with its inputs.

So it involves signals and neurons... this can be done with biological neuron cells, with artificial neurons in software, with electronic neurons, and probably with mechanical neurons (if they existed) - it's about large-scale structures and systems, so molecules, atoms and quantum particles are pretty irrelevant.

I guess that didn't fully answer your question as far as abstraction, etc, goes but I think to understand about neural networks being used for abstraction you first need to understand the basics of neural networks and probably things like bidirectional associative memory.
excreationist is offline  
Old 11-14-2002, 08:29 PM   #7
Banned
 
Join Date: Sep 2002
Location: Fall River, N.S.
Posts: 142
Post

Ok, I think I'd better give up. I do see the scope and direction of what goes on, but once it dissolves into math I'm just not up to it. But I thank you very much, excreationist, for trying to explain it to me.

Yeah, I see that 'she' is a 'he', clutch.
I'm still mulling over this XYZ is not H2O thing. I still don't grasp the importance of his point. Yes, of course there is functional and nominal equality where there is equality of properties and attributes, even if actual identity differs. That's only saying that the significance we give to something is unrelated to what the thing is, per se. And that's 'old hat'. Or perhaps I'm not really grasping his point at all. I'm getting pretty slow in my old age. I'll do a bit more mulling.

--pickle
picklepuss is offline  
Old 11-15-2002, 06:39 AM   #8
Veteran Member
 
Join Date: Oct 2001
Location: Canada
Posts: 3,751
Post

Pickle, the point is just what he says off the bat. An apparently good prima facie characterization of meaning would have it fulfilling two roles: it's the cognitive import of words we understand; and it determines the reference of words and sentences. Putnam argues this: Meaning can't do both things. If it's what we grasp, then it doesn't determine reference (since your narrow psychological state would be the same whether you lived in an XYZ world or an H2O world). If it determines reference, then it must do so in virtue of something other than your narrow psychological state (for the same reason).

This all depends on the plausibility of the claim that, in 1750, the referent of 'water' was H2O. If you agree, then Putnam's conclusion seems to follow; nothing in the psychological state of someone circa 1750 could have determined such a fact. If you disagree, then (the story goes) you owe some account of how the reference changed between then and now.

I disagree with Putnam, fwiw. But by contemporary standards, that leaves me swimming upstream.
Clutch is offline  
Old 11-15-2002, 08:45 AM   #9
Banned
 
Join Date: Sep 2002
Location: Fall River, N.S.
Posts: 142
Post

Quote:
Originally posted by Clutch:
Pickle, the point is just what he says off the bat. An apparently good prima facie characterization of meaning would have it fulfilling two roles: it's the cognitive import of words we understand; and it determines the reference of words and sentences. Putnam argues this: Meaning can't do both things. If it's what we grasp, then it doesn't determine reference (since your narrow psychological state would be the same whether you lived in an XYZ world or an H2O world). If it determines reference, then it must do so in virtue of something other than your narrow psychological state (for the same reason).

This all depends on the plausibility of the claim that, in 1750, the referent of 'water' was H2O. If you agree, then Putnam's conclusion seems to follow; nothing in the psychological state of someone circa 1750 could have determined such a fact. If you disagree, then (the story goes) you owe some account of how the reference changed between then and now.

I disagree with Putnam, fwiw. But by contemporary standards, that leaves me swimming upstream.
Thanks for your patience, clutch. Is Putnam just talking about semantic technicalities - symbol/referent relationships in human speech - or is he referring to the nature of reality/truth? I don't grasp the meaning of "determine the reference".
I'm not usually this dense. I hope.

--pickle
picklepuss is offline  
Old 11-16-2002, 06:16 AM   #10
Veteran Member
 
Join Date: Oct 2001
Location: Canada
Posts: 3,751
Post

Reference is just how language picks things out. Eg, the words 'horse' and 'cheval' refer to the same thing; presumably there is some story to tell about how this reference is determined within the respective languages to which they belong. Presumably there is also some story to tell about what, psychologically speaking, is involved in understanding the words. The received view, before Putnam, was that it's the same story in both cases. He's arguing that it can't be.

Suppose that you see something out of the corner of your eye -- a moving shadow, but it's gone when you turn to get a better look. You conjecture that you saw a rat who ran round the corner. You take to calling the rat 'Ted', and you say things like, "I wonder where Ted sleeps at night?", and "I hope Ted doesn't have nasty diseases".

Consider two situations consistent with all of this. In the first one, you really did see the moving shadow of a running rat. In the second, the moving shadow was cast through the window onto your wall, and was just an accident of some moving branches in the moonlight. The point is that both situations are consistent with the same visual experience and psychological states on your part. The only difference is whether there actually is a rat. In the first situation, what does your use of the term 'Ted' refer to? Obviously, the rat. In the second, what does it refer to? Obviously, nothing. So your psychological state cannot determine reference; at most it co-determines reference, along with a collection of brute facts that typically go far beyond anything you know.
Clutch is offline  
 
