Freethought & Rationalism Archive (the archives are read only)
01-25-2007, 02:51 PM | #1 |
Veteran Member
Join Date: Jul 2003
Location: Colorado
Posts: 8,674
|
Computer software for determining interpolations?
Is there computer software that analyzes texts to determine the likelihood that a passage is an original part of the text or from a later hand?
|
01-25-2007, 03:15 PM | #2 |
Veteran Member
Join Date: May 2005
Location: Midwest
Posts: 4,787
|
|
01-25-2007, 04:25 PM | #3 |
Veteran Member
Join Date: Jan 2005
Location: USA
Posts: 1,307
|
Not that I'm aware of. The closest things I'm familiar with are stylostatistical authorship programs, which determine, for example, which of the disputed Federalist Papers were written by which of three possible candidates.
For smaller interpolations, there may not be enough data for the statistical techniques to achieve significance. |
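For illustration, here is a minimal sketch of that kind of stylostatistical comparison, using relative frequencies of common function words and a Burrows' Delta-style distance. The function-word list, the author names, and the sample texts are placeholders for illustration, not a real study.
Code:
# Minimal sketch of function-word stylometry (Burrows' Delta style).
# The word list, sample texts, and author names below are placeholders.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "by",
                  "it", "is", "with", "as", "for", "not", "on", "be"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def mean_std(profiles):
    """Per-feature mean and standard deviation across a set of profiles."""
    means, stds = [], []
    for col in zip(*profiles):
        m = sum(col) / len(col)
        var = sum((x - m) ** 2 for x in col) / len(col)
        means.append(m)
        stds.append(var ** 0.5 or 1e-9)  # avoid division by zero
    return means, stds

def delta(p, q, means, stds):
    """Burrows' Delta: mean absolute difference of z-scored frequencies."""
    z = lambda v: [(x - m) / s for x, m, s in zip(v, means, stds)]
    zp, zq = z(p), z(q)
    return sum(abs(a - b) for a, b in zip(zp, zq)) / len(zp)

def attribute(disputed_text, author_samples):
    """Rank candidate authors by Delta (smaller = stylistically closer)."""
    profiles = {name: profile(txt) for name, txt in author_samples.items()}
    means, stds = mean_std(list(profiles.values()))
    test = profile(disputed_text)
    return sorted(((delta(test, p, means, stds), name)
                   for name, p in profiles.items()))

# Hypothetical usage: known writings of each candidate vs. a disputed essay.
# samples = {"Hamilton": hamilton_text, "Madison": madison_text, "Jay": jay_text}
# print(attribute(disputed_text, samples))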
01-26-2007, 01:35 AM | #4 | |
Junior Member
Join Date: Sep 2006
Location: Canada
Posts: 23
|
Quote:
Also, we must ask "What constitutes 'an original part of the text'"? For instance, is the original text of Matthew the first 'edition' of Matthew, even though Matthew itself incorporates Mark? Or do we exclude all the non-Markan material in Matthew as additions to Mark? Does this reflect historical reality (i.e. was Matthew simply adding to Mark, or was he writing a new and creative work of his own, drawing upon previous work(s))? In the end, I suspect that a program of this sort would merely replicate the parameters we put into it. In effect, it would produce nothing but tautology. |
|
01-26-2007, 05:44 AM | #5 |
Veteran Member
Join Date: Jul 2003
Location: Colorado
Posts: 8,674
|
Well, I think this software definitely needs to be developed, and can be developed, and that in the future it will be more accurate and more powerful than any human judgment of the text.
There are already very sophisticated reading programs and AI "machines" that read books and crawl the Internet to learn. This would be a great way to get an unbiased observer, assuming of course that the developers don't build a bias into the software, which can largely be avoided by not letting them know which texts the program will be run on. Maybe I'll try to get a university to develop such a program if I ever get a PhD, since my background is in computers anyway. I think AI is already much more sophisticated than you realize, and in the next ten years it's going to double in power or more. |
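To make the idea concrete, here is a toy sketch, not a description of any existing program, of what an automated interpolation-flagger might do: slide a window across the text and rank the windows by how far their function-word profile departs from the document as a whole. The window size, the word list, and the distance measure are all arbitrary choices made for illustration.
Code:
# Toy sketch of window-based interpolation flagging; window size, word
# list, and distance measure are arbitrary illustrative choices.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "by",
                  "it", "is", "with", "as", "for", "not", "on", "be"]

def freq_profile(words):
    """Relative frequency of each function word in a list of words."""
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def flag_windows(text, window=500, step=250, top_n=5):
    """Rank word windows by how far their function-word profile
    departs from the profile of the document as a whole."""
    words = re.findall(r"[a-z']+", text.lower())
    whole = freq_profile(words)
    scored = []
    for start in range(0, max(len(words) - window, 1), step):
        chunk = words[start:start + window]
        prof = freq_profile(chunk)
        # Euclidean distance between the window's profile and the whole text's.
        dist = sum((a - b) ** 2 for a, b in zip(prof, whole)) ** 0.5
        scored.append((dist, start, start + len(chunk)))
    # The most divergent windows are only candidates for closer human scrutiny.
    return sorted(scored, reverse=True)[:top_n]

# Hypothetical usage with a placeholder file name:
# print(flag_windows(open("some_text.txt").read()))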
01-26-2007, 06:13 AM | #6 | |
Contributor
Join Date: Mar 2002
Location: nowhere
Posts: 15,747
|
Quote:
spin |
|
01-26-2007, 06:21 AM | #7 | |
Contributor
Join Date: Mar 2002
Location: nowhere
Posts: 15,747
|
Quote:
spin |
|
01-26-2007, 06:44 AM | #8 | |
Veteran Member
Join Date: Jul 2004
Location: Western Sweden
Posts: 3,684
|
Quote:
As has been mentioned, I think that given sufficiently large amounts of material, software will help determine authorship, but only as a complement to human efforts. A comparison: software for automatic translation would have a substantial financial (and other) impact, yet even in that field nothing exists that doesn't rely heavily on human touch-up of its sometimes utterly ridiculous output. For this audience it might be interesting to note a fact probably forgotten by most of the few who knew it in the first place: the man who in Sweden almost singlehandedly introduced the use of computers in the humanities, and for authorship analysis at that, was the Professor (now Emeritus) of English and author of Jesus – One Hundred Years Before Christ, Alvar Ellegård. |
|
01-26-2007, 09:38 AM | #9 | |
Veteran Member
Join Date: Jan 2005
Location: USA
Posts: 1,307
|
Quote:
Stephen |
|
01-26-2007, 10:09 AM | #10 |
Veteran Member
Join Date: Jul 2003
Location: Colorado
Posts: 8,674
|
It will be a while: I'm hoping to go back for a master's next year, plus starting a family, plus working on books, plus my day job... oy...
|