Freethought & Rationalism Archive
09-06-2005, 03:57 PM | #11
Veteran Member
Join Date: Jan 2005
Location: USA
Posts: 1,307
Quote:
Quote:
Stephen
09-07-2005, 08:28 AM | #12
Junior Member
Join Date: Aug 2003
Location: Illinois
Posts: 70
Quote:
On Poisson vs. normal: a normal distribution is the standard bell curve we hear about a lot. It can take positive, negative, and non-integer values. A Poisson distribution cannot take negative values and can only take integer values, so it is a good choice for something like word counts (or the textbook example of individuals arriving at an ATM). Word counts are never negative or fractional. If you know a process should be described by a Poisson distribution and you know the average rate (say, of arrivals at an ATM), you can give the probability of any other number of arrivals. For example, if you normally get 5 arrivals in 5 minutes, you can give the probability of seeing 10 arrivals (see the sketch at the end of this post).

Stephen did a nice job with most of your question. You ask: does it take into account that the 020 material might be more or less lengthy than the 002 material? I can reaffirm that the answer is yes, that is taken into account.

I think that, based on the study alone and without taking other factors into account, the 3SH looks best. Lately, for reasons outside the study, I tend to think that Luke did not use Matthew originally, but that a lot of Matthean material made its way over later.
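A minimal sketch of that last point in Python, assuming the Poisson model and the 5-arrivals-per-5-minutes rate mentioned above (the function name is just for illustration, not something from the study):

from math import exp, factorial

def poisson_pmf(k, rate):
    # Probability of observing exactly k events when the average count is `rate`.
    return rate**k * exp(-rate) / factorial(k)

# Average of 5 arrivals per 5-minute window: how likely are exactly 10 arrivals?
print(poisson_pmf(10, 5.0))  # ~0.018, i.e. roughly a 1.8% chance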
09-07-2005, 08:44 AM | #13
Junior Member
Join Date: Aug 2003
Location: Illinois
Posts: 70
Quote:
There is no special procedure for dealing with nulls.

To Stephen: they are treated as zeros, not nulls.

To the non-statistical: "null" in statistics usually means missing data, which can be a tricky issue requiring special care. In my study there are no missing values, but there are zero counts. The information is not missing; we know for a fact that there are zero occurrences of that type of word in that category.

Yes, if two categories both lack a word that appears elsewhere, that will tend to draw them together and make them look more similar, but it is the relative lack of the word that matters. An example: suppose that, based on a control, we expect 1 occurrence of the word "cat". If we examine the two categories and find zero cats in both, that increases their similarity, but only a tiny bit. If instead we expected to find 20 cats based on the control and found zero cats in both categories, that is a much stronger indication of similarity (a rough sketch follows at the end of this post).

The technique could be applied to smaller samples of text if the word counts were acquired, but the results would likely lack significance.
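To put rough numbers on the cat example, here is a minimal Python sketch assuming a simple Poisson model (an illustration of the intuition only, not necessarily the study's actual similarity measure). Under a Poisson model, the probability of observing zero occurrences when mu are expected is exp(-mu):

from math import exp

def prob_of_zero(expected):
    # Poisson probability of seeing zero occurrences given an expected count.
    return exp(-expected)

print(prob_of_zero(1))   # ~0.37: zero cats when 1 was expected is unremarkable
print(prob_of_zero(20))  # ~2e-9: zero cats when 20 were expected is very surprising

So a shared zero where the control predicts 20 occurrences is far stronger evidence of similarity than a shared zero where the control predicts only 1.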
09-07-2005, 10:59 AM | #14
Veteran Member
Join Date: May 2005
Location: Midwest
Posts: 4,787
Quote:
Thanks.

Ben.
09-07-2005, 11:07 AM | #15
Veteran Member
Join Date: Jul 2001
Location: the reliquary of Ockham's razor
Posts: 4,035
Why not publish in a journal on computational linguistics rather than a journal of theology or NT studies? The former will not care so much if you don't give a history of the Synoptic Problem, and it will give you greater latitude for describing the mathematics.
kind thoughts, Peter Kirby
09-07-2005, 08:32 PM | #16
Junior Member
Join Date: Aug 2003
Location: Illinois
Posts: 70
Quote: