[Lazarus] How to compile lazarus from its SVN source in WinXP?

Popeye Spinach popeye.lists at yahoo.com
Mon Apr 6 13:07:41 CEST 2009


Hello:

I have run my own two-cents benchmark.

Two small programs (plain FPC, no Lazarus): one using
TStringList.LoadFromFile and one using readln(file, ...).
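The two approaches can be sketched roughly like this (a minimal sketch; the
file name 'sample.txt' and the per-line writeln are just illustrative, not my
exact benchmark code):

```pascal
program ReadDemo;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils;

procedure ViaStringList(const FileName: string);
var
  Lines: TStringList;
  i: Integer;
begin
  Lines := TStringList.Create;
  try
    // Whole file is pulled into memory in one go
    Lines.LoadFromFile(FileName);
    for i := 0 to Lines.Count - 1 do
      WriteLn(Lines[i]);
  finally
    Lines.Free;
  end;
end;

procedure ViaReadLn(const FileName: string);
var
  F: TextFile;
  Line: string;
begin
  AssignFile(F, FileName);
  Reset(F);
  try
    // One line at a time, roughly constant memory
    while not Eof(F) do
    begin
      ReadLn(F, Line);
      WriteLn(Line);
    end;
  finally
    CloseFile(F);
  end;
end;

begin
  ViaStringList('sample.txt');
  ViaReadLn('sample.txt');
end.
```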

For small files, LoadFromFile is faster. For big files, the readln way is
faster (on my system, the old-fashioned way becomes faster for files over
150K). The bigger the file, the better the readln way does. I must admit
that I was a little puzzled; I had thought the readln way would always be
faster, because I reasoned: "They do the same thing, read a file, but the
TStringList must do more work to handle the class and allocate memory."

But they don't do the same thing. LoadFromFile loads the file from disk in
big blocks, as big as the system allows. On the other hand, "readln" reads
only until it finds an EOL (well, I suppose the operating system reads a
minimum block size and caches a little), so it sends many I/O commands, one
per line. In the old days I used to do a similar trick using blockread. I
suppose that the bigger the file, the bigger the overhead of allocating and
deallocating memory.
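The blockread trick mentioned above looks something like this (a sketch; the
file name and the 64K buffer size are illustrative choices, and the
line-splitting logic is left out):

```pascal
program BlockReadDemo;
{$mode objfpc}{$H+}
var
  F: file;                       // untyped file, required for BlockRead
  Buf: array[0..65535] of Byte;  // 64K buffer; size is a tunable choice
  BytesRead: Integer;
  Total: Int64 = 0;
begin
  Assign(F, 'sample.txt');
  Reset(F, 1);                   // record size 1 = byte-oriented reads
  repeat
    // One I/O call per 64K block instead of one per line
    BlockRead(F, Buf, SizeOf(Buf), BytesRead);
    Inc(Total, BytesRead);
    // ...scan Buf[0..BytesRead-1] for line endings here...
  until BytesRead = 0;
  Close(F);
  WriteLn('Read ', Total, ' bytes');
end.
```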

So the conclusion: in many cases, LoadFromFile is faster. Even when it's not
faster, it may be better because it makes many tasks easier.

My complaint was that when Dians asked how to read a text file, the right
answer should have been to show both ways.

In fact, I pushed the benchmark to the limit with a 1 GB file. The "readln"
program processed the file, but the TStringList one popped an out-of-memory
error. And that was not even the big problem: the problem was that with
TStringList, for a minute, the system turned almost unresponsive, while with
"readln" I didn't notice anything.

Having things in memory is often a good idea, particularly if you must read
the data several times; otherwise you waste a lot of time reading from disk.
But we must also be aware that memory is a valuable resource. When we use
memory, we are punishing the rest of the processes running on our system. I
know I am not revealing a secret, but I am afraid we are forgetting it... we
don't balance pros and cons anymore, we just grab the memory.

>
> Very true!  Just curious, how do you know how much memory an
> application uses? Preferably a Linux and Windows method.
>
> Does the 'heaptrc' unit do that?
>
> eg:  TStringList vs Old Fashioned TextFile
> Both CLI test programs loaded the same sample text file, simply
> did a writeln() for each line of it, and then quit.
> The sample text file is 26.4KB in size.
>





More information about the Lazarus mailing list