FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 07-10-2003, 05:17 AM   #41
Veteran Member
 
Join Date: Jan 2001
Location: USA
Posts: 1,072
Default

Quote:
Jeremy Pallant: Admittedly I do less and less programming these days. I am diversifying into management.
DNAunion: Tsk tsk...selling out

So far I've successfully resisted all attempts by management to move me "up" into their ranks.

If they think my salary is unjustifiable as a "mere programmer", they can try to do without me. I can easily find another job in the field, but they can't easily find someone who can fill my shoes. In at least one sense, the way things stand, I have the upper hand on management, even though I'm just a "lowly programmer".
DNAunion is offline  
Old 07-10-2003, 05:44 AM   #42
Veteran Member
 
Join Date: May 2003
Location: On the road to extinction. . .
Posts: 1,485
Default trashing warning

I must issue a trashing warning :

nFoundOne = 0;
while (nFoundOne == 0)
{
nIndex = (int) GetRandomNumber(0, nLetteredTiles - 1);
...
}

The set of solutions that use randomization to visit every element of a set exactly once is prone to what is termed trashing. That is, if one randomly picks numbers from 1..n, it is not true that n tries will visit every element.

In practice you end up with 2n or 3n computations before every element has been visited. The elegant solution is to randomly choose 85% to 90% of the indices, then sequentially select every item not yet chosen. The cost of this algorithm is bounded at roughly 1.8n, so the overhead is fixed and known.

A true randomiser may never pick a given entry, so your program may loop forever, even though you think it won't.
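The hybrid scheme described above can be sketched roughly as follows (the function name and the exact 85% cutoff are illustrative, not from any particular codebase):

```cpp
#include <cstdlib>
#include <vector>

// Visit every index of an n-element set exactly once: pick indices at
// random until ~85% are covered, then sweep sequentially to pick up the
// stragglers. This bounds the total work, unlike pure random retries,
// which can repeat already-seen indices indefinitely.
std::vector<int> random_then_sweep(int n, unsigned seed) {
    std::srand(seed);
    std::vector<bool> seen(n, false);
    std::vector<int> order;               // indices in first-visit order
    int covered = 0;
    const int target = (n * 85) / 100;    // switch to the sweep at 85%
    while (covered < target) {
        int i = std::rand() % n;          // random phase: repeats are wasted work
        if (!seen[i]) { seen[i] = true; order.push_back(i); ++covered; }
    }
    for (int i = 0; i < n; ++i)           // sequential phase: no wasted work
        if (!seen[i]) { seen[i] = true; order.push_back(i); }
    return order;
}
```

Note that if a uniformly random *order* (rather than a mostly-random one) is wanted, the standard fix is a Fisher-Yates shuffle of the index array, which visits every element in exactly n steps.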

Happy Computing
sophie is offline  
Old 07-10-2003, 05:53 AM   #43
Veteran Member
 
Join Date: May 2003
Location: On the road to extinction. . .
Posts: 1,485
Default contiguous

DNAunion, nice of you to have written a doubly linked list. The last example of doubly linked lists I saw was in the VMS kernel.

The claim that the elements of an array are stored linearly and contiguously is not necessarily true.

The appearance of contiguity is achieved through memory mapping.
sophie is offline  
Old 07-10-2003, 06:00 AM   #44
Veteran Member
 
Join Date: Sep 2001
Location: Los Angeles Area
Posts: 1,372
Default

So is this thread about computer programming or computer science? I see a smattering of both. Has anyone created their own minimal language and compiler? I'd love to try this one day.
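For a small taste of what that involves, here is a minimal recursive-descent evaluator for integer expressions (a sketch, with illustrative names; a real compiler would add proper lexing, an AST, and code generation, but the grammar walk is the same idea in miniature):

```cpp
#include <cctype>
#include <string>

// Grammar:
//   expr   := term (('+'|'-') term)*
//   term   := factor (('*'|'/') factor)*
//   factor := number | '(' expr ')'
class Eval {
public:
    explicit Eval(const std::string& s) : src(s), pos(0) {}
    long run() { return expr(); }
private:
    std::string src;
    std::size_t pos;
    void skip() { while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos; }
    char peek() { skip(); return pos < src.size() ? src[pos] : '\0'; }
    long expr() {                       // handles + and - at lowest precedence
        long v = term();
        while (peek() == '+' || peek() == '-') {
            char op = src[pos++];
            long r = term();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
    long term() {                       // * and / bind tighter than + and -
        long v = factor();
        while (peek() == '*' || peek() == '/') {
            char op = src[pos++];
            long r = factor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    long factor() {                     // a number, or a parenthesized subexpression
        if (peek() == '(') {
            ++pos;
            long v = expr();
            if (peek() == ')') ++pos;   // consume the closing paren
            return v;
        }
        long v = 0;
        skip();
        while (pos < src.size() && std::isdigit((unsigned char)src[pos]))
            v = v * 10 + (src[pos++] - '0');
        return v;
    }
};
```

Each grammar rule becomes one function, and precedence falls out of which function calls which.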
fando is offline  
Old 07-10-2003, 07:04 AM   #45
Contributor
 
Join Date: Jul 2001
Location: Deep in the heart of mother-lovin' Texas
Posts: 29,689
Default

Mageth: Blind luck?

DNAunion: No, because of the way memory is allocated for array variables....


No, it's blind luck. Any time you write to an address outside the allocated bounds of a variable/data structure, you are risking serious problems. The fact that your program worked is, from the programmer's perspective, blind luck.

Quote:
DNAunion: But the two subscripts are misleading…there is no real element [5, 1] in the array. That is, to access element [5, 1] the computer does not go down 6 rows and then over to the right 1. No, elements of an array are stored linearly and contiguously. You can demonstrate this for yourself by using pointer arithmetic instead of subscripts.

Umm, I learned this, what, 22 years ago in my intro programming classes. And since then, I've seen many strange problems in code caused by overstepping the bounds/misaddressing of an array, even by only one element. And like sophie indicated, a programmer should never assume how the data's being stored.

Quote:
DNAunion: So the array should be the last variable that memory is reserved for and overshooting the upper bound by one element does not cause a problem.

Pulling the trigger during Russian Roulette when the hammer happens to be over an empty chamber doesn't cause a problem, either.
Mageth is offline  
Old 07-10-2003, 08:27 AM   #46
Veteran Member
 
Join Date: Jan 2001
Location: Median strip of DC beltway
Posts: 1,888
Default Re: contiguous

Quote:
Originally posted by sophie
DNAunion, nice of you to have written a doubly linked list. The last example of doubly linked lists I saw was in the VMS kernel.

The claim elements of an array are stored linearly and contiguously is not necessarily true.

The appearance of contiguity is achieved through memory mapping.
I think you might be confusing some things. Memory mapping goes on at the hardware/OS level, and you'll never be able to tell the difference. It is guaranteed that if you have a "char gargle[25]", then you will have 25 contiguous bytes of RAM with gargle pointing to the first one.

A multidimensional array is actually an array of arrays.
char foo[10][10];
is an array of 10 blocks of 10 contiguous bytes. *Usually* these are stored contiguously, so that foo[0][9] and foo[1][0] are adjacent, but there is no such requirement in the language spec. It's perfectly valid for foo to point to an array of pointers to arrays. In fact, you can see this in the similar notation:

Code:
char* x[10];
for(int i=0;i<10;i++)
   x[i]=new char[10];
You can then access x[0][1] and it should work and look just like a multidimensional array, though only the blocks of 10 in the final subscript are likely to be contiguous.

The key difference between the two is that typeof(foo[1]) is a const char* and typeof(x[1]) is char*. This lets foo[1] be optimized into a constant offset from a memory location rather than an address stored in memory that must be loaded.

Strictly speaking, the only assumption you can make is that the last subscript will be contiguous (e.g. pointer math will work within its bounds); otherwise it's possible that your program will break if you change compilers.
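The difference between the two layouts can be checked directly (a sketch; the variable names and the deliberately reversed row order are just for illustration):

```cpp
#include <cstdint>
#include <cstddef>

// A true 2-D array: one contiguous block of 10*10 chars.
char foo[10][10];

// An array of pointers that merely *looks* like a 2-D array when
// subscripted; each row is a separate block and need not be adjacent.
char storage[10][10];
char* x[10];

// Byte distance between the start of row 1 and row 0 of foo.
std::size_t row_stride() {
    return reinterpret_cast<std::uintptr_t>(&foo[1][0]) -
           reinterpret_cast<std::uintptr_t>(&foo[0][0]);
}

// Wire x up with its rows deliberately reversed: x[0][1] still works,
// showing that the subscript syntax hides the extra indirection.
void wire_rows() {
    for (int i = 0; i < 10; ++i)
        x[i] = storage[9 - i];
}
```

sizeof tells the two apart as well: sizeof(foo) is 100 bytes of character data, while sizeof(x) is only the size of ten pointers.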
NialScorva is offline  
Old 07-10-2003, 08:39 AM   #47
Veteran Member
 
Join Date: Jan 2001
Location: Median strip of DC beltway
Posts: 1,888
Default

Quote:
Originally posted by DNAunion
The only issue that might arise is that the very last element overshoots the array bounds by 1 element’s worth. But, as long as the compiler doesn’t override my sequence of variable declarations to try some optimization, that doesn’t occur. All of the variables are defined at the very beginning of main(), and the last one defined is the array. So the array should be the last variable that memory is reserved for and overshooting the upper bound by one element does not cause a problem.
Except that local variables are stored on the stack, so the first time you call a subroutine you'll overwrite the last element with one of the saved registers. If you were to access that last element inside the subroutine, then it would gleefully overwrite the saved state, so you'd get undefined behavior when you returned from the function and restored the registers to a state different from when you went in. A bigger problem is if the registers are preserved after the call is made: the block you're overwriting is then the return address, meaning that you have no idea where code will execute when you return from the function. Not only that, but you'll corrupt the stack and be unable to see where your program is crashing. At the *very* least, you run the risk of overwriting other local variables, since there is no guarantee as to what order they'll be on the stack (in C++, anyway).

If your array is on the heap, you have similar problems. Most heaps allocate in block sizes, so you may well have run-off room at the end. That doesn't mean you should use it. If your array is the same size as the heap's block size, then running past the end will likely overwrite the metadata for the next heap block. Once your heap is corrupted, all bets are off. Your program will run fine 9 times out of 10, but then crash hard. It all depends upon what's in memory when the program runs. This is *not* easily debugged.
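One way to turn that kind of silent corruption into a loud, debuggable failure is bounds-checked access, e.g. std::vector::at(), which throws instead of scribbling over stack or heap metadata (a sketch; the wrapper function is illustrative):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Write to buf[i] with bounds checking: at() throws std::out_of_range
// on an overrun, so the bug surfaces at the bad write itself rather
// than as a mysterious crash much later.
bool write_checked(std::vector<char>& buf, std::size_t i, char c) {
    try {
        buf.at(i) = c;     // bounds-checked, unlike buf[i]
        return true;
    } catch (const std::out_of_range&) {
        return false;      // overrun caught here, not in corrupted memory
    }
}
```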
NialScorva is offline  
Old 07-10-2003, 08:59 AM   #48
Senior Member
 
Join Date: Aug 2000
Location: San Diego, CA, USA
Posts: 913
Default

Quote:
Originally posted by DNAunion
DNAunion: Tsk tsk...selling out

So far I've successfully resisted all attempts by management to move me "up" into their ranks.

If they think my salary is unjustifiable as a "mere programmer", they can try to do without me. I can easily find another job in the field, but they can't easily find someone who can fill my shoes. In at least one sense, the way things stand, I have the upper hand on management, even though I'm just a "lowly programmer".
It is never a good idea to bet on your own irreplaceability. No one is so valuable that they cannot be replaced. It might be painful and inconvenient for the company, but if they decide that they are better off without you than with you, you're history. And don't go convincing yourself that your programming skills are so highly developed that no one can follow behind you. Any code, no matter how obscure, can be understood given enough time and incentive.
LeftCoast is offline  
Old 07-10-2003, 09:43 AM   #49
Banned
 
Join Date: Jul 2002
Location: U.S.
Posts: 4,171
Default

Quote:
Originally posted by Mageth

That depends, generally, on the target platform and the application/purpose/function of the code.
Well of course. That's why I used non-absolute terms.

Quote:
Originally posted by Mageth
In addition, many compilers optimize "inefficient" code for you (though I've had some optimizing compilers introduce some strange behavior into my code).
What you will find is that in a great number of cases, if not a vast majority, those considerations are irrelevant in this day and age.

Quote:
Originally posted by Mageth
However, there's usually no reason that code must be made more complex just for understandability.
Yes there is. The social context in which the code is written.

I may program something that I will never touch again, and some unknown person may pick it up. In that case, it's of prime importance that the code be understandable.

Quote:
Originally posted by Mageth
IMO, it's usually the case that simplifying and streamlining the code makes it more understandable (comments are more important than code structure, generally).
That may be the case, but it's not always the case by far. It also depends on what is meant by "complex." For example, separating a multi-step process into individual, well-commented code blocks may be less "complex" by one definition than writing the same multi-step process as a single "elegant" code block.
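The trade-off looks something like this (a made-up example for illustration; both versions compute the same thing):

```cpp
#include <vector>

// Dense: correct, but every step is fused into one line.
double mean_dense(const std::vector<double>& v) {
    double s = 0; for (double d : v) s += d; return v.empty() ? 0 : s / v.size();
}

// Separated: same result, but each step is named and commentable,
// which is what matters when an unknown person inherits the code.
double mean_stepwise(const std::vector<double>& v) {
    if (v.empty()) return 0;          // step 1: guard the edge case
    double sum = 0;
    for (double d : v) sum += d;      // step 2: accumulate the total
    return sum / v.size();            // step 3: divide by the count
}
```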

DC
Rusting Car Bumper is offline  
Old 07-10-2003, 10:26 AM   #50
Senior Member
 
Join Date: Aug 2000
Location: San Diego, CA, USA
Posts: 913
Default

If you want to see spaghetti code written by professionals, take a look at some of the examples at The International Obfuscated C Code Competition.
LeftCoast is offline  
 
