FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 01-25-2007, 02:51 PM   #1
Veteran Member
 
Join Date: Jul 2003
Location: Colorado
Posts: 8,674
Computer software for determining interpolations?

Is there computer software that analyzes texts to determine the likelihood that a passage is an original part of the text or from a later hand?
Malachi151 is offline  
Old 01-25-2007, 03:15 PM   #2
Veteran Member
 
Join Date: May 2005
Location: Midwest
Posts: 4,787

Quote:
Originally Posted by Malachi151
Is there computer software that analyzes texts to determine the likelihood that a passage is an original part of the text or from a later hand?
Yes. The human brain.

Or you can just ask spin.

Ben.
Ben C Smith is offline  
Old 01-25-2007, 04:25 PM   #3
Veteran Member
 
Join Date: Jan 2005
Location: USA
Posts: 1,307

Not that I'm aware of. The closest I'm familiar with are stylostatistical authorship programs, which determine, e.g., which of the Federalist Papers were written by which of three possible candidates.

For smaller interpolations, there may not be enough data for the statistical techniques to achieve significance.
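
To give a rough idea of what those stylostatistical programs do, here is a minimal sketch (in Python) of attribution by function-word frequencies, loosely in the spirit of Mosteller and Wallace's Federalist study. The marker words and the raw distance measure are illustrative placeholders only; real studies use much larger, carefully chosen marker sets and proper statistical models.

Code:
import re
from collections import Counter

# Illustrative marker words only; real studies use far larger sets.
MARKERS = ["upon", "while", "whilst", "by", "to", "of"]

def marker_profile(text):
    """Rate of each marker word per 1,000 tokens of the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {w: 1000.0 * counts[w] / total for w in MARKERS}

def profile_distance(a, b):
    """Crude Manhattan distance between two marker profiles."""
    return sum(abs(a[w] - b[w]) for w in MARKERS)

def rank_candidates(disputed, candidates):
    """Rank candidate authors by closeness of their marker profile
    to the disputed text. candidates maps each author's name to a
    large sample of his undisputed prose."""
    target = marker_profile(disputed)
    return sorted(
        candidates,
        key=lambda name: profile_distance(target, marker_profile(candidates[name])),
    )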
S.C.Carlson is offline  
Old 01-26-2007, 01:35 AM   #4
Junior Member
 
Join Date: Sep 2006
Location: Canada
Posts: 23

Quote:
Originally Posted by Malachi151
Is there computer software that analyzes texts to determine the likelihood that a passage is an original part of the text or from a later hand?
I am not sure that one could make such a program that works in any sort of rigorous fashion, precisely because the methods themselves require a great deal of human judgment. For instance, one principle is that when a passage seems to be a parenthetical note clarifying the larger passage, it is likely due to an editor or scribe--in effect, an early example of commentary on the passage, incorporated into the passage itself. How would one program software to detect such clarifying notes?
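
To make the difficulty concrete, here is about the best a naive program could do: scan for surface cues (a sketch in Python; the cue phrases are my own guesses, purely illustrative).

Code:
import re

# Guessed surface cues for explanatory glosses; purely illustrative.
CUES = [r"\bthat is\b", r"\bwhich means\b", r"\bbeing interpreted\b"]

def flag_possible_glosses(sentences):
    """Yield (index, sentence) pairs matching a crude 'clarifying note' pattern."""
    for i, sentence in enumerate(sentences):
        lowered = sentence.lower()
        if any(re.search(cue, lowered) for cue in CUES):
            yield i, sentence
        elif re.search(r"\([^)]{15,}\)", sentence):  # a longish parenthesis
            yield i, sentence

Such a filter flags an author's own asides just as readily as a scribe's; deciding which hand a flagged note comes from is exactly the judgment it cannot make.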

Also, we must ask, "What constitutes 'an original part of the text'?" For instance, is the original text of Matthew the first 'edition' of Matthew, even though Matthew itself incorporates Mark? Or do we exclude all the non-Markan material in Matthew as additions to Mark? Does this reflect historical reality (i.e. was Matthew simply adding to Mark, or was he writing a new and creative work of his own, drawing upon previous work(s))?

In the end, I suspect that a program of this sort would merely replicate the parameters we put into it. In effect, it would produce nothing but tautology.
Brooke is offline  
Old 01-26-2007, 05:44 AM   #5
Veteran Member
 
Join Date: Jul 2003
Location: Colorado
Posts: 8,674

Well, I think that this software definitely needs to be developed, and can be developed, and in the future will be more accurate and more powerful than any human judgment of the text.

There are already very sophisticated reading programs and AI "machines" that read books, and read the Internet to learn, etc.

This would be a great way to get an unbiased observer, assuming of course that the developers of the software don't write in a bias, but that can pretty well be avoided by not allowing those people to even know what texts the program is going to be used on etc.

Maybe I'll try to get a university to develop such a program if I ever get a PhD, since my background is in computers anyway.

I think that AI is already much more sophisticated than you understand, and in the next 10 years it's going to double in power or more.
Malachi151 is offline  
Old 01-26-2007, 06:13 AM   #6
Contributor
 
Join Date: Mar 2002
Location: nowhere
Posts: 15,747

Quote:
Originally Posted by S.C.Carlson
Not that I'm aware of. The closest I'm familiar with are stylostatistical authorship programs, which determine, e.g., which of the Federalist Papers were written by which of three possible candidates.

For smaller interpolations, there may not be enough data for the statistical techniques to achieve significance.
This stirs up distant memories of texts I read long ago, one of which chopped up the Pauline corpus into sections and analysed them statistically for a collection of linguistic markers, such as the lengths of sentences and paragraphs, to see whether any distinctions existed between the sections. I seem to remember the names MacGregor and Morton in such statistical analyses, perhaps for the book of John, but certainly this methodology has no way of recognizing interpolations. It may distinguish between samples of texts, as long as the samples are large enough to make the statistics based on them indicative. I wonder what happened to that means of analysis.
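
If one wanted to replay that kind of analysis today, the sketch below (Python, assuming SciPy is available) gives the flavour: collect sentence lengths per section and test whether two sections could come from the same distribution. The two-sample Kolmogorov-Smirnov test is merely my stand-in for whatever statistics MacGregor and Morton actually used, and as noted it can at best distinguish sections, not identify interpolations.

Code:
import re
from scipy.stats import ks_2samp  # assumes SciPy is available

def sentence_lengths(text):
    """Word count of each sentence, splitting crudely on ., ! and ?."""
    return [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]

def sections_look_distinct(section_a, section_b, alpha=0.05):
    """Compare two sections' sentence-length distributions; returns (distinct?, p)."""
    stat, p = ks_2samp(sentence_lengths(section_a), sentence_lengths(section_b))
    return p < alpha, p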


spin
spin is offline  
Old 01-26-2007, 06:21 AM   #7
Contributor
 
Join Date: Mar 2002
Location: nowhere
Posts: 15,747

Quote:
Originally Posted by Malachi151
Well, I think that this software definitely needs to be developed, and can be developed, and in the future will be more accurate and more powerful than any human judgment of the text.

There are already very sophisticated reading programs and AI "machines" that read books, and read the Internet to learn, etc.

This would be a great way to get an unbiased observer, assuming of course that the developers of the software don't write in a bias, but that can pretty well be avoided by not allowing those people to even know what texts the program is going to be used on etc.

Maybe I'll try to get a university to develop such a program if I ever get a PhD, since my background is in computers anyway.

I think that AI is already much more sophisticated than you understand, and in the next 10 years it's going to double in power or more.
Natural language understanding has not advanced very much at all in the last 40 years. To understand language, you need the culture behind it. What's left is programming which reflects the programmer's understanding of the issues and is almost certainly based on statistics (as the most successful natural language parsers are). Statistics will not go beyond certain limits: on small samples they cease to be useful. And interpolations usually involve very small samples.
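
To put a number on the small-sample problem (a Python sketch, assuming SciPy; the counts are invented for illustration): below, a 40-word passage uses a marker word at more than twice the rate of the surrounding 5,000-word work, yet a chi-squared test on the counts comes nowhere near significance.

Code:
from scipy.stats import chi2_contingency  # assumes SciPy is available

def rate_difference_significant(hits_a, tokens_a, hits_b, tokens_b, alpha=0.05):
    """Chi-squared test on the rate of one marker word in two samples."""
    table = [[hits_a, tokens_a - hits_a],
             [hits_b, tokens_b - hits_b]]
    _, p, _, _ = chi2_contingency(table)
    return p < alpha, p

# 2 hits in a 40-word passage (5%) vs. 100 hits in a 5,000-word work (2%):
# prints (False, p around 0.4) -- the passage is simply too small to tell.
print(rate_difference_significant(2, 40, 100, 5000))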


spin
spin is offline  
Old 01-26-2007, 06:44 AM   #8
Veteran Member
 
Join Date: Jul 2004
Location: Western Sweden
Posts: 3,684

Quote:
Originally Posted by spin
Natural language understanding has not advanced very much at all in the last 40 years.
Exactly, and I blame Chomsky and his followers for pushing the field of research in the wrong direction.

As has been mentioned, I think that, given sufficiently large material, software will help in determining authorship, but only as a complement to human efforts. A comparison: software for automatic translation would have a substantial financial (and other) impact, yet even in that field there is nothing to be found that doesn't rely heavily on human touch-up of its sometimes utterly ridiculous output.

For this audience it might be interesting to note a fact that is probably forgotten by most of the few who knew it in the first place: the man who almost singlehandedly introduced the use of computers in the humanities in Sweden, and for authorship analysis at that, was Alvar Ellegård, Professor (now Emeritus) of English and author of Jesus – One Hundred Years Before Christ.
Lugubert is offline  
Old 01-26-2007, 09:38 AM   #9
Veteran Member
 
Join Date: Jan 2005
Location: USA
Posts: 1,307

Quote:
Originally Posted by Malachi151
Well, I think that this software definitely needs to be developed, and can be developed, and in the future will be more accurate and more powerful than any human judgment of the text.

...

Maybe I'll try to get a university to develop such a program if I ever get a PhD, since my background is in computers anyway.
IMHO, the field is wide open and in need of ground-breaking contributions. Peter Kirby was once interested in pursuing this, and I think the field would benefit from even more minds working on these problems. If you choose to get into it, more power to you.

Stephen
S.C.Carlson is offline  
Old 01-26-2007, 10:09 AM   #10
Veteran Member
 
Join Date: Jul 2003
Location: Colorado
Posts: 8,674

It will be a while; I'm hoping to go back for a master's next year, plus starting a family, plus working on books, plus my day job... oy...
Malachi151 is offline  
 
