Freethought & Rationalism Archive
06-10-2002, 09:26 PM | #31 |
Banned
Join Date: Jan 2002
Location: Australia
Posts: 991
If I programmed a robot to choose between two separate options, does the robot then have free will? At what point does the robot acquire free will? And what does a robot with free will have that a robot without free will lacks?
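A minimal sketch of the thought experiment in Python (the Robot class, the coin-flip rule, and the fixed seed are illustrative assumptions, not anything specified here): the robot's "choice" is entirely fixed by its program, its inputs, and its seed.

import random

class Robot:
    """A toy robot whose "choice" is fully determined by its program and inputs."""

    def __init__(self, seed=None):
        # Any apparent spontaneity comes from this pseudo-random generator,
        # which is itself deterministic once the seed is fixed.
        self.rng = random.Random(seed)

    def choose(self, option_a, option_b):
        # The robot "chooses" by applying a rule its programmer wrote.
        return option_a if self.rng.random() < 0.5 else option_b

robot = Robot(seed=42)
print(robot.choose("left", "right"))  # same seed, same inputs, same "choice" every run

Nothing in the path from inputs to output is left undetermined, which is what the question is probing.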
06-10-2002, 09:26 PM | #32 |
Veteran Member
Join Date: Mar 2001
Posts: 2,322
|
Quote:
[ June 10, 2002: Message edited by: DRFseven ]
06-10-2002, 09:36 PM | #33 |
Junior Member
Join Date: Apr 2002
Location: oklahoma
Posts: 96
|
I'm sorry, DRF, I misunderstood you.
Perception of data has a bit of subjectivity to it, but generally it's not really a choice what your brain will store. It just does it. I don't think this truly relates to free will. Free will is knowingly selecting an option from a choice of many.
06-10-2002, 09:39 PM | #34 |
Junior Member
Join Date: Apr 2002
Location: oklahoma
Posts: 96
|
Quote:
[ June 10, 2002: Message edited by: unworthyone ]
06-10-2002, 09:42 PM | #35 |
Regular Member
Join Date: Apr 2002
Location: In your mind!
Posts: 289
|
What I cannot figure out is God saying, after A + E ate the fruit, "NOW they are like us, knowing good and evil." They had the choice beforehand, didn't they?
Eat = bad. Not eat = good. Does this mean that when Adam and Eve chose to condemn all humanity (not directly), they did not know that what they were doing was bad? That means they (and we) were punished for a choice that was not made with complete free will! So we have free will, yet because of our forefathers' sin we are all destined to hell. Did we have a choice in the matter? No. [Bang Head]
06-10-2002, 09:49 PM | #36 |
Junior Member
Join Date: Apr 2002
Location: oklahoma
Posts: 96
|
Quote:
Once a child is brought to accountability for their actions, only then do they realize the free will (right vs. wrong) that was involved in them. [ June 10, 2002: Message edited by: unworthyone ]
06-10-2002, 10:11 PM | #37 |
Veteran Member
Join Date: Mar 2001
Posts: 2,322
|
Quote:
Quote:
06-10-2002, 10:36 PM | #38 |
Banned
Join Date: Jan 2002
Location: Australia
Posts: 991
|
Quote:
Who determines Right and Wrong? Society. Why adhere to Right and Wrong? Self-preservation.

Program 100 robots with ten distinct personality parameters (such as desire to mate, desire to kill, desire to follow rules, etc.) and about 100 variables each, as well as two main goals: self-preservation and mating. Also program the robots to react to every other possible robot trait (1,000 in total). Furthermore, give the robots the ability to determine positive and negative actions of other robots and store this information. Finally, create an isolated environment with two specific rules: no killing and no mating by force (the moral name for no robot raping).

Because each robot is different, they'll react differently to other robots, and their interpretations of the rules will differ. One robot's desire to mate may be so great that, in disregard of the rules, it may try to force-mate with another robot. Other robots may kill this robot to prevent the rules from being broken again, or the robot that was attacked may kill in order to maintain self-preservation. Because other robots know that forced robot mating is ultimately a negative action, they may decide not to do it. The forced-mating traits of some robots may be so great, however, that influence from other robots makes little difference, or they may pick a more defenceless robot to attack.

This is a pretty complex society, but it is still very simple compared to human society. Nonetheless, elements of right and wrong begin to emerge, just as in human society. Each robot acts and reacts differently, and each robot knows the difference between positive and negative effects for itself and for other robots. You could even take it one step further and introduce actions to please or anger robots, thereby creating simple emotions such as happiness (which robots will endeavour to achieve as much as possible) or vengeance (to get revenge against an attacker). Obviously, multiplying the variables by 1,000 significantly increases the complexity of this robot society.
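A rough sketch of that kind of simulation in Python might look like the following. The three traits shown, the thresholds, and the reputation scoring are illustrative assumptions standing in for the ten parameters and 100 variables described above, not a definitive implementation.

import random

class Robot:
    def __init__(self, rid, rng):
        self.rid = rid
        self.alive = True
        # Three of the ten personality parameters, drawn at random per robot.
        self.desire_to_mate = rng.random()
        self.desire_to_kill = rng.random()
        self.desire_to_follow_rules = rng.random()
        # Memory of other robots' positive and negative actions (a crude reputation store).
        self.memory = {}

    def judge(self, other, negative):
        # Record whether another robot's action was positive or negative.
        self.memory[other.rid] = self.memory.get(other.rid, 0) + (-1 if negative else 1)

    def act(self, others, rng):
        # The environment has two rules: no killing and no mating by force.
        # A drive that overwhelms the robot's rule-following leads it to break a rule.
        if self.desire_to_mate > self.desire_to_follow_rules + 0.4:
            return "forced mating", rng.choice(others)
        if self.desire_to_kill > self.desire_to_follow_rules + 0.5:
            return "killing", rng.choice(others)
        return "idle", None

def step(robots, rng):
    for robot in robots:
        if not robot.alive:
            continue
        others = [r for r in robots if r is not robot and r.alive]
        if not others:
            break
        action, victim = robot.act(others, rng)
        if action == "idle":
            continue
        # Every surviving observer stores the rule-breaking as a negative action.
        for observer in robots:
            if observer is not robot and observer.alive:
                observer.judge(robot, negative=True)
        # Self-preservation: the victim may kill the offender to stop further attacks.
        if victim.desire_to_kill + rng.random() > robot.desire_to_follow_rules:
            robot.alive = False

rng = random.Random(0)
robots = [Robot(i, rng) for i in range(100)]
for _ in range(50):
    step(robots, rng)
print(sum(r.alive for r in robots), "of 100 robots survive after 50 rounds")

Even in this stripped-down version, "right" and "wrong" show up only as which robots survive and which accumulate negative scores in the others' memories, which is essentially the point being made above.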
06-11-2002, 05:02 AM | #39 |
Veteran Member
Join Date: Feb 2002
Location: Southeast of disorder
Posts: 6,829
|
Quote:
Quote:
Quote:
06-11-2002, 06:24 AM | #40 |
Contributor
Join Date: Jun 2000
Location: Buggered if I know
Posts: 12,410
|
Quote:
Quote:
Quote:
Quote:
Furthermore, many people protect a shaky belief by willfully and consciously refusing to consider alternatives or contrary evidence - a decision that can be reversed. A hard-core belief can be difficult to change, but often it is still possible - for example, by challenging that belief (each and every time it results in an emotional reaction and behaviour) and building a contradictory mental perspective around the belief and its consequent mental state, until the emotive power of that underlying belief is rendered nil over time.