FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 08-02-2003, 03:27 AM   #11
Veteran Member
 
Join Date: Jun 2001
Location: Boulder, Colorado
Posts: 3,316
Default

Where do you pick up this notion that "we have to stick to probability," Friar?

That way you can argue that no one won last night's lottery, since it is too improbable. Please consider the fact that probability has nothing to do with reality in the sense of establishing fact. All probability can tell you is "maybe" with some degree of confidence, and "maybe" sounds far from a fact to me.

Again, if the human species has a finite lifespan then this is correct: every human child born brings us closer to the last person who will ever be born.

But then - why do we need probability to tell us that a finite thing will come to an end?
Kat_Somm_Faen is offline  
Old 08-02-2003, 08:32 AM   #12
Junior Member
 
Join Date: Feb 2003
Location: Flying around the US
Posts: 47
Default

I am not really sure about the statistical parallel between the urns and the population lifespan. In the urn example, you are setting a constraint that there are 2 urns. But in population lifespans, there is always one more "urn".

1. The lifespan of humanity is 100 billion.
2. The lifespan of humanity is 100 trillion.
3. The lifespan of humanity is not listed here.

We can fill out the list with any number of hypotheses, but there is always that last one. Ultimately, there is no limit to the number of "urns". I could be wrong, but this seems like a misapplication of Bayesian analysis, which is properly applied when there is a clear dichotomy (like with the two urns).

Although we can assume the lifespan of humanity is finite, our existence does not help predict how finite.
IoftheBholder is offline  
Old 08-02-2003, 09:25 AM   #13
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Default

Quote:
Originally posted by Furby
Suppose we were to present this Doomsday Argument several thousand years back, when the number of humans that had existed was only one hundred thousand.
That's a bit like saying, suppose we present the tiny probability of winning the lottery to the person who actually is going to win the lottery. Of course any statistical argument is bound to be wrong in a small number of cases, but all other things being equal, you're unlikely to be such a special case.

There's some very good analysis of the doomsday argument by philosopher Nick Bostrom at http://www.anthropic-principle.com, which I encourage anyone who's interested in this issue to check out. He extends this into a generalized version of the anthropic principle which he calls the self-sampling assumption: in general I should reason as though I were randomly selected from the set of all observers (or possibly 'observer-moments'...see below), and then I can use that to do the kind of Bayesian reasoning that Friar Bellows outlined (where you start with a prior probability distribution and then update it based on knowledge about yourself, such as your birth order in the human species). Bostrom also wrote a book on the subject called Anthropic Bias: Observation Selection Effects in Science and Philosophy, but I haven't read it.

One of the interesting questions brought up by Bostrom is the "problem of the reference class"--what are the exact boundaries of the set that I should reason as if I'm randomly selected from? All humans? All "observer-moments" of humans (which long-lived people would have more of)? All observer-moments of sentient beings (which would include any intelligent descendants of humans, any A.I.s we create, etc., and which could make a big difference to how you interpret the doomsday argument)? Only those observer-moments which are considering probabilities based on the self-sampling assumption at that moment? The fact that the answer to this question is not at all obvious is, for me, probably the biggest weakness in the idea.

However, one can invent thought-experiments where there is not much ambiguity because we're dealing solely with a group of humans who are likely to have about equal intelligence and lifespans, and there it seems to be valid. For example, consider this example by philosopher John Leslie (from the FAQ at anthropic-principle.com):

Quote:
A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you’re one of the humans in question. You don’t know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. ... [Y]ou mustn’t say: ‘My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large.’ (Leslie 1996, pp. 222-23)
Of course, Leslie's example fails to state the prior probabilities, but suppose you also know that part of the "firm plan" was that which batch would be male and which batch would be female was to be determined by a random coinflip. In this case, would anyone disagree that one should bet that there's a 5000:3 chance that one is a member of the larger batch, and therefore that the larger batch is very likely to be (insert your sex here) while the smaller batch was (insert opposite sex here)? But if you do agree, then you're using something like the self-sampling assumption, at least in this case.
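To make the arithmetic explicit, here is a minimal sketch of that calculation in Python (my own illustration, assuming the 50-50 coinflip prior just described and treating yourself as a random sample from the 5003 people created):

Code:
# Bayes' theorem applied to Leslie's two-batch example, assuming a
# 50-50 coinflip prior over which sex got the large batch.
prior_large_female = 0.5
prior_large_male = 0.5

# Likelihood of observing "I am female" under each hypothesis, treating
# yourself as a random sample from the 5003 people created.
p_female_if_large_female = 5000 / 5003
p_female_if_large_male = 3 / 5003

# Posterior probability that the large batch is female, given you are female.
numerator = prior_large_female * p_female_if_large_female
posterior = numerator / (numerator + prior_large_male * p_female_if_large_male)

print(posterior)  # 5000/5003, roughly 0.9994 -- i.e. 5000:3 odds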
Jesse is offline  
Old 08-02-2003, 11:03 PM   #14
Veteran Member
 
Join Date: Apr 2001
Location: arse-end of the world
Posts: 2,305
Default

Quote:
Originally posted by Furby
Suppose we were to present this Doomsday Argument several thousand years back, when the number of humans that had existed was only one hundred thousand.

Now, we again have two models of humanity. In one model, humans become extinct after the 1 millionth human has been born, while in the other model, humans become extinct after the 60 billionth human.

So, according to the argument, it is far more probable that the first model is true rather than the second one. But hey, here we are with 60 billion humans having been born and we still exist!
And our current existence is not ruled out by the Doomsday argument. You're contaminating prior probabilities with posterior probabilities. You can't transport our knowledge back in time and pass it on to a Neolithic man. He must reason in the same way as we do; the only difference is that his information set is different from ours. So what if our current existence seems highly unlikely from the point of view of the stone-age man? Add this to your prior probability model if you like.

Quote:
Originally posted by Chimp
What about other factors in the equation, for example, when humans reach a certain level of technological advancement and they are able to guide their own evolution?
Good question. I don't know.

Quote:
Originally posted by Tenpudo
Taken to extremes, wouldn't this formula indicate that humanity is most likely to go extinct before another single person is born?
Well, it depends on how you weight your prior probabilities. The trick to Bayesian probability analysis is choosing a suitable prior.

Quote:
Originally posted by Kat_Somm_Faen
But then - why do we need probability to tell us that a finite thing will come to an end?
We want to know which is more likely, given certain initial assumptions. Jesse answered the rest of your objection.

Quote:
Originally posted by IoftheBholder
I am not really sure about the statistical parallel between the urns and the population lifespan. In the urn example, you are setting a constraint that there are 2 urns.
I'm not asking when humanity will become extinct. I'm asking which of the two models I presented is more likely, starting off with certain prior assumptions about the two models, and then using the fact that roughly 60 billion humans have existed. Nevertheless, Bayesian probability can definitely handle cases where there are more than two choices. If there were n models, and:

p(i) = the prior probability of the ith model
p(60B | i) = probability of being at 60 billion given the ith model

then the probability of the ith model being true given that we are at 60 billion is:

p(i | 60B) = p(i) * p(60B | i) / S(n)

where S(n) = sum from j=1 to n of p(j) * p(60B | j)

Make n as big as you like.
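
For what it's worth, here is a minimal Python sketch of that update (the three totals, the equal priors, and the 1/N likelihood below are illustrative assumptions of mine, not values anyone in this thread has committed to):

Code:
# Posterior over n models: p(i | 60B) = p(i) * p(60B | i) / S(n),
# where S(n) = sum over j of p(j) * p(60B | j).
def posterior(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    s_n = sum(joint)                      # this is S(n)
    return [j / s_n for j in joint]

# Illustration: three models for the total number of humans ever born,
# equal priors, and p(60B | i) = 1/N_i (i.e. our birth rank is equally
# likely to fall anywhere within the ith model's total).
totals = [100e9, 100e12, 100e15]
priors = [1/3, 1/3, 1/3]
likelihoods = [1 / n for n in totals]

print(posterior(priors, likelihoods))     # heavily favours the smallest total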

If you want to learn more, visit the site Jesse and I referenced. Fascinating stuff.
Friar Bellows is offline  
Old 08-04-2003, 01:16 AM   #15
Regular Member
 
Join Date: Jul 2003
Location: 123 Fake Street
Posts: 279
Default

I first saw this argument in Martin Rees' book "Our Final Hour" ("Our Final Century" in the UK). It is, of course, based on Bayesian theory, and the reasoning is sound as far as all that goes. The main reason the idea in the book is disturbing is that you have the concept, basically, that "you are in the middle".

Being in the species-specific middle is not so disturbing if it is 1,000,000 BC. You might have made this argument to our distant ancestor, and he could simply have said: well, that's interesting, but I have good statistics here showing that population grows only slowly. You say there have been maybe a billion of us and there might be a billion more. My numbers say that it took a million years to get this far, and it'll take a million more to get to 2 billion.

The "argument from the middle" doesn't become scary until you are in a rapid population growth phase. Thus, it might have taken a million years to get 60 billion of us, but it will take only a hundred more to get to 120 billion given current trends. If we are truly "in the middle", then it's a nasty prospect. I'm not saying I agree, but that is the reason the idea scares so many.
Boredom is offline  
Old 08-04-2003, 04:59 AM   #16
Contributor
 
Join Date: Jan 2001
Location: Folding@Home in upstate NY
Posts: 14,394
Thumbs up

Quote:
Originally posted by Tenpudo
Taken to extremes, wouldn't this formula indicate that humanity is most likely to go extinct before another single person is born?
*checks watch*
Oop-- too late!
Back to the drawing board...
Exactly what I was thinking! As others have since pointed out, there are quite a few more options than the two listed, many of them (approx. 40 billion) with probabilities higher than even the 100 billion option!

Anyway, I don't think I'm going to let this stop me from living my life. If those statistics worry you, you probably shouldn't get out of bed in the morning, since the chances you'll die from any of a myriad of other things are much higher! Forget about driving or flying or even walking anywhere.

I'm going to file those probabilities under 'useless facts that don't affect my day-to-day life.'
Shake is offline  
Old 08-04-2003, 11:32 AM   #17
Regular Member
 
Join Date: Mar 2002
Location: Earth
Posts: 247
Default

False analogy!

The number of specimens of a given species that have lived is no indication of the likelihood of its extinction.
Hans is offline  
Old 08-04-2003, 11:51 AM   #18
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Default

Quote:
Originally posted by Hans
False analogy!

The number of specimens of a given species that have lived is no indication of the likelihood of its extinction.
Hans, what do you think of the example from the anthropic-principle.com FAQ that I mentioned in an earlier post? Here it is again, with my comments:

Quote:
A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you’re one of the humans in question. You don’t know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. ... [Y]ou mustn’t say: ‘My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large.’ (Leslie 1996, pp. 222-23)
Of course, Leslie's example fails to state the prior probabilities, but suppose you also know that part of the "firm plan" was that which batch would be male and which batch would be female was to be determined by a random coinflip. In this case, would anyone disagree that one should bet that there's a 5000:3 chance that one is a member of the larger batch, and therefore that the larger batch is very likely to be (insert your sex here) while the smaller batch was (insert opposite sex here)? But if you do agree, then you're using something like the self-sampling assumption, at least in this case.
Jesse is offline  
Old 08-04-2003, 02:00 PM   #19
Regular Member
 
Join Date: Mar 2002
Location: Earth
Posts: 247
Default

Quote:
Originally posted by Jesse
Hans, what do you think of the example from the anthropic-principle.com FAQ that I mentioned in an earlier post?
I'll plead ignorance as statistics and probabilities aren't my strong points.

Whether Doom is in store for us tomorrow, in a trillion years, or never, it is true in each scenario that the sequence of events leading up to today (pre-Doom) would be realized.

A. Today-Doom
B. Today---------------Doom
C. Today------------------------------------Doom Never

The fact that we are here today, in and of itself, will tell us nothing of a pending doom or lack thereof, IMHO. What is true, and what I believe the doomsday argument plays on, is that if there is a doom anywhere in the future, we are one day closer to it today than we were yesterday.
Hans is offline  
Old 08-04-2003, 02:25 PM   #20
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Default

Quote:
Originally posted by Hans
I'll plead ignorance as statistics and probabilities aren't my strong points.

But there's not really much statistics in the scenario. You just know that two opposite-sex batches of humans were created, one with 5000 members and one with 3. You also know that there were 50-50 odds originally that it'd be 5000 males and 3 females as opposed to 5000 females and 3 males. And you know that you are a product of one of these batches, although you haven't met any of your batch-mates.

So, without even getting into the exact odds, would you say that the fact that you observe yourself to be a male makes it a lot more likely that it was 5000 males and 3 females? Or do you still think it's equally likely that it was 5000 females and only 3 males?

Keep in mind, if every person in both batches bets that the larger batch has the same sex as themselves, 5000 will have bet correctly and only 3 will have bet wrongly.
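
As a sanity check on that betting argument, here is a small simulation sketch in Python (my own illustration, again assuming the 50-50 coinflip and that you are equally likely to be any of the 5003 people):

Code:
# Simulate the betting policy "the large batch shares my sex":
# a random observer among the 5003 wins this bet iff they are in the
# large batch, so the long-run success rate is about 5000/5003.
import random

trials = 100_000
correct = 0
for _ in range(trials):
    large_is_male = random.random() < 0.5           # coinflip over the batches
    in_large_batch = random.randrange(5003) < 5000  # which batch am I in?
    my_sex_is_male = large_is_male if in_large_batch else not large_is_male
    correct += (my_sex_is_male == large_is_male)    # did my bet win?

print(correct / trials)  # roughly 0.9994, i.e. about 5000 wins per 3 losses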
Jesse is offline  
 
