[Lazarus] Writing >1000 TBufDataset records to file is extremely slow
Martin Schreiber
mse00000 at gmail.com
Mon Mar 27 10:46:00 CEST 2017
On Sunday 26 March 2017 23:53:08 Werner Pamler via Lazarus wrote:
> Trying to extend the import/export example of fpspreadsheet from a dBase
> table to a TBufDataset, I came across this issue with TBufDataset: while
> data are posted to the database as quickly as usual, writing to file
> takes extremely long once there are more than a few thousand records.
>
> Run the demo attached below. On my system, I measure these (non-linearly
> scaling) execution times for writing the TBufDataset table to file:
>
> 1000 records -- 0.9 seconds
> 2000 records -- 8.8 seconds
> 3000 records -- 31.1 seconds
> etc.
>
> Compared to that, writing the same data to a dbf file takes the blink of
> an eye. Is there anything I am doing wrong? Or should I report a bug?
>
Can you switch off the 'applyupdate' functionality in TBufDataset? MSEgui's
TLocalDataset (a fork of FPC's TBufDataset) writes 1'000'000 records in about
0.4 seconds if the option bdo_noapply is set.
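For stock FPC TBufDataset (which has no bdo_noapply option), a rough sketch of the same idea, assuming the non-linear cost comes from the pending-update log that every Post also feeds: folding the change log into the data buffers before saving may avoid it. MergeChangeLog does exist on FPC's TCustomBufDataset; whether it actually removes the slowdown reported here is untested, and SaveQuickly is a hypothetical helper name.

```pascal
uses
  SysUtils, DB, BufDataset;

procedure SaveQuickly(ds: TBufDataset; const FileName: string);
begin
  // Fold all pending updates into the record buffers so that
  // SaveToFile no longer has to process a large update log
  // (assumption: the update log is what scales non-linearly).
  ds.MergeChangeLog;
  ds.SaveToFile(FileName);
end;
```

Note that after MergeChangeLog the edits can no longer be reverted with CancelUpdates, so this only fits workloads where the dataset is used as a local store rather than a cache of server-side changes.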
"
1000000: 0.313s
1000000: 0.308s
1000000: 0.319s
1000000: 0.311s
1000000: 0.411s
1000000: 0.293s
1000000: 0.327s
1000000: 0.321s
3000: 0.001s
3000: 0.001s
3000: 0.001s
"
"
procedure tmainfo.recev(const sender: TObject);
var
  i1: int32;
  t1: tdatetime;
begin
  locds.active:= false;
  locds.disablecontrols();
  try
    locds.active:= true;
    for i1:= 1 to reccount.value do begin
      locds.appendrecord([i1, inttostrmse(i1)+'abcdefghiklmnop', 10*i1]);
    end;
    t1:= nowutc();
    locds.savetofile('test.db');
    t1:= nowutc()-t1;
    writeln(reccount.value, ': ', formatfloatmse(t1*60*60*24, '0.000s'));
    locds.active:= false;
  finally
    locds.enablecontrols();
  end;
end;
"
Martin