Freethought & Rationalism Archive
The archives are read only.
#1 |
Contributor
Join Date: May 2001
Location: San Jose, CA
Posts: 13,389
Can a computer program create new information? That is, given a dataset and a set of logical and mathematical operations, can any new information be created?
Many people I have asked immediately respond yes, but I am not sure. It seems that they are responding to a more useful output rather than new information. My response is: (1) the organization of data is task dependent and ultimately arbitrary; (2) logical and mathematical operations are tautological and cannot create new information. Take a bubble sort as an example, with the input as an array of numbers. The bubble sort just compares numbers and swaps them depending on size. Does this add any new information? The numbers are still the same; does placing them in numeric order reduce entropy? Not if the numbers are my phone number!
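The bubble-sort example can be made concrete. A minimal sketch in Python (the ten-digit "phone number" is made up for illustration):

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements until sorted."""
    a = list(arr)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

phone = [4, 0, 8, 5, 5, 5, 0, 1, 9, 9]  # hypothetical phone number
print(bubble_sort(phone))  # [0, 0, 1, 4, 5, 5, 5, 8, 9, 9]
```

The output is just a permutation of the input: the multiset of digits is preserved, and nothing in the sorted result lets you recover the original ordering, i.e. the phone number itself.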
#2 |
Veteran Member
Join Date: Oct 2000
Location: Nashville, TN, USA
Posts: 2,210
Please provide your definition of the term 'information'.
#3 |
Veteran Member
Join Date: Oct 2000
Location: Nashville, TN, USA
Posts: 2,210
I'll expand.
Consider the set of prime numbers. At any given time throughout our history, there has been a largest known prime. Calculating a higher prime number is a matter of computation, not contemplation. Now, computers, given a set of computational rules, can calculate for us previously unknown prime numbers. Have they created information? I would think under the ordinary interpretation of the word the answer would be yes. Computational rules have determined the truth value of the statement "The number n is a prime number", previously unknown. I would consider this new information. I can see how one could also argue that the number itself and the truth value of the statement are a consequence of elementary number theory, and are therefore not information 'created' by the algorithm. I think it comes down to a matter of semantics. Your thoughts?

Bookman
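The prime-number example is easy to make concrete. A minimal sketch in Python, using trial division (fine for small numbers, though not how record primes are actually found):

```python
def is_prime(n):
    """Trial division: n is prime iff no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    c = n + 1
    while not is_prime(c):
        c += 1
    return c

print(next_prime(100))  # 101
```

The program deterministically produces a prime nobody had to know in advance, which is exactly the tension in the post above: the result was always entailed by number theory, yet it is news to us.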
#4 |
Contributor
Join Date: May 2001
Location: San Jose, CA
Posts: 13,389
Bookman:
The definition of information I would like to use for this is very similar to that of entropy. From Merriam-Webster, definition 2b: "the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (as nucleotides in DNA or binary digits in a computer program) that produce specific effects."

In response to your example about prime numbers: the new prime is a deterministic outcome of the program used to calculate the number. Many mathematicians would even say that the new prime is the result of an equation on a blackboard and just leave it at that.

It seems that this will ultimately rest on the question of whether a computer can be creative, which is what I think is necessary to create new information. A materialist might ask if humans can actually be creative, or whether we are just operating deterministically with our programming and input data...

I don't think it is just semantics. I think information content is measurable independent of the subjective "usefulness" of the information.
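Since the definition being used is essentially entropy, the "measurable" claim can be sketched. Here is a minimal Shannon-entropy calculation in Python (symbol-frequency entropy only, which is one of several ways to quantify information content):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Average bits per symbol, estimated from symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("0110100110"))  # 1.0 -- evenly mixed bits, maximal surprise
print(shannon_entropy("0000000000"))  # zero -- a constant string carries no surprise
```

Note that by this measure, sorting a sequence changes nothing: the symbol frequencies, and hence the entropy, are identical before and after, which supports the point about the bubble sort.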
#5 |
Veteran Member
Join Date: Oct 2000
Location: Nashville, TN, USA
Posts: 2,210
I'm not really sure where you're going with this, and I don't have my arms around your concept of 'creating information'. Can you give an example? I'm afraid your definition just left me a little further in the dark.
Can you give me a set of data relevant to basic mathematical and logical operators, and then provide an example of 'new information' that cannot be created by a computer?

Bookman
#6 |
Veteran Member
Join Date: Jun 2003
Location: Houston, TX
Posts: 4,197
Algorithmically, I don't think you can get more information out than you put in. For cryptography purposes, you sometimes need a good source of random numbers. Linux, for example, resorts to using the timings between interrupts from various devices (the mouse, the keyboard, the disk drive controllers, etc.) to construct the data in the systemwide entropy pool used by /dev/random. These measurements have a decent amount of unpredictability, especially in their least significant digits, and they accumulate into a systemwide pool of entropy, which can then be used, typically, to generate cryptographic keys.

If you're setting up PGP keys for the first time and your computer has just booted, the entropy pool might run out, and you might have to move the mouse or type a bit to get key generation to proceed, since it needs a certain amount of data in the entropy pool and blocks until that data is available. I think it may be significant that they resort to such measures to get random data: it means randomness is not to be had from an algorithm.

(Edited for grammar and to add link.) Here's recent Linux source for the device driver for /dev/random: http://lxr.linux.no/source/drivers/c...?v=2.6.0-test7 There are comments near the top of the file, starting around line 40, that explain it better than I did.
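The idea of harvesting timing jitter can be sketched in a few lines of Python. This is a toy illustration only, not the kernel's algorithm: the real driver measures interrupt timings and does careful entropy accounting, while this toy just hashes scheduler-induced clock jitter into a pool:

```python
import hashlib
import time

def toy_entropy_pool(samples=256):
    """Mix unpredictable event timings into a pool via a hash function."""
    pool = hashlib.sha256()
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        time.sleep(0)  # yield to the OS; rescheduling jitters the clock
        delta = time.perf_counter_ns() - t0
        pool.update(delta.to_bytes(8, "little"))
    return pool.digest()

print(len(toy_entropy_pool()))  # 32-byte digest
```

The point in the post stands: the unpredictability here comes from a physical measurement (how long the OS took to reschedule us), not from the algorithm. The hash only mixes and whitens what was measured.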
#7
Contributor
Join Date: May 2001
Location: San Jose, CA
Posts: 13,389
Bookman:
Maybe this definition of information might be more informative.

Godless Wonder: I was thinking that truncation and bit errors might be a source of information, or rather of creativity.
#8 |
Veteran Member
Join Date: Oct 2000
Location: Nashville, TN, USA
Posts: 2,210
I guess I'm just clueless because the way you've posed this makes it seem trivial.
Given a set of computer instructions P and a set of inputs I, is it possible to have created 'new information' in the result set R after some number of iterations n? Obviously no, if I understand your definitions. Even if the result is highly dependent upon the set I, the results cannot be of high entropy. Even if R is complex and seemingly 'random' (of high entropy?), a 'data compression strategy' to eliminate the apparent randomness would be to simply transmit P, I, and n. I'm guessing that there's more to your question than I'm getting still.

Bookman
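That compression strategy can be demonstrated. A sketch in Python, where P is the standard library's Mersenne Twister generator, I is a seed, and n is a count: a long, random-looking result set R is fully reproduced from just those three items, so its description length is tiny despite its apparent complexity:

```python
import random

def run_program(seed, n):
    """Deterministic P: expand (I = seed, n) into n pseudo-random digits."""
    rng = random.Random(seed)
    return [rng.randrange(10) for _ in range(n)]

R = run_program(seed=42, n=10_000)        # 10,000 digits that look random...
R_again = run_program(seed=42, n=10_000)  # ...but (P, I, n) regenerates them exactly
print(R == R_again)  # True
```

A receiver holding P needs only the pair (42, 10000) to reconstruct all 10,000 digits, which is the algorithmic-information-theory sense in which a deterministic program adds nothing beyond its inputs.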
#9 |
Veteran Member
Join Date: Jun 2000
Location: Montreal, Canada
Posts: 3,832
Like other posters said, it depends a lot on the definition of "information" used. It also depends on the definition of "computer". Do you mean by that a Turing machine? Because if we use a broad definition of "computer", then the answer is obviously yes, because we are computers and we can create information.
If by "information" you mean "entropy", then the answer is also yes for a Turing machine: it can take n random numbers and sort them. The entropy definitely changes.
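One way to make the sorting claim quantitative: the sorted output keeps the multiset of values but discards the ordering, and specifying one of the n! possible orderings costs log2(n!) bits. A sketch in Python:

```python
from math import lgamma, log

def order_bits(n):
    """Bits needed to name one of n! orderings: log2(n!) via the log-gamma function."""
    return lgamma(n + 1) / log(2)

print(round(order_bits(10), 1))  # ~21.8 bits to specify the order of 10 distinct items
```

So whether sorting "changed the entropy" depends on what you count: the symbol statistics are untouched, but the information needed to recover the original arrangement (up to log2(n!) bits) has been thrown away.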
#10 |
Veteran Member
Join Date: Jan 2001
Location: Median strip of DC beltway
Posts: 1,888
Here's a counter-question: When a mathematician finishes his proof for a new theorem, did he create any new information?
Wittgenstein pointed out that all proofs are necessarily tautologies, and thus don't actually contain anything that wasn't in the original axioms. Thus in the history of mathematics, we haven't really ever created anything new, right? That seems unsatisfying. Similar things apply to computer programs, since an algorithm is more or less a "proof" within a different system (that's not entirely accurate, but close enough for now).

From the point of view of the total information content of the inputs versus the total information content of the outputs, nothing new was generated. From the perspective of the human generating the information, information is generated. For example, if I plug a bunch of data points into Excel and do a linear regression on them, the slope of the best-fit line is arguably already part of the set of data points plus the linear-regression relation. However, since I did not know that information, it reduces my uncertainty, and thus provides information.

You might want to read Gregory Chaitin's homepage. He's pretty much the top algorithmic information theorist around, and has a lot of easy-to-read briefs on his site that talk about the mathematics of information paired with algorithms on a hypothetical machine. It doesn't really answer your question, but it seems to be in the spirit of your question.
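The linear-regression example can be sketched without Excel. A minimal least-squares slope in pure Python (the sample points are made up for illustration):

```python
def best_fit_slope(points):
    """Least-squares slope: covariance(x, y) / variance(x), fixed entirely by the data."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

pts = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
print(best_fit_slope(pts))  # about 1.94: fully determined by pts, yet news to the analyst
```

The slope is a pure function of the data points, so in the input-versus-output sense nothing was added; it only counts as information relative to the person who didn't yet know it.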