[Lazarus] Writing >1000 TBufDataset records to file is extremely slow
mse00000 at gmail.com
Mon Mar 27 10:46:00 CEST 2017
On Sunday 26 March 2017 23:53:08 Werner Pamler via Lazarus wrote:
> Trying to extend the import/export example of fpspreadsheet from a dBase
> table to a TBufDataset, I came across this issue with TBufDataset: while
> data are posted to the dataset as quickly as usual, writing to file
> takes extremely long once there are more than a few thousand records.
> Run the demo attached below. On my system, I measure these (non-linearly
> scaling) execution times for writing the TBufDataset table to file:
> 1000 records -- 0.9 seconds
> 2000 records -- 8.8 seconds
> 3000 records -- 31.1 seconds
> Compared to that, writing the same data to a dbf file takes the blink
> of an eye. Is there anything I am doing wrong? Or should I report a bug?
Can you switch off the 'applyupdates' functionality in TBufDataset? MSEgui's
TLocalDataset (a fork of FPC's TBufDataset) writes 1'000'000 records in about
0.4 seconds if the option bdo_noapply is set.
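In plain FPC TBufDataset there is no bdo_noapply option, but the slowdown comes from the growing change log that Post maintains for ApplyUpdates. A hedged workaround sketch (assuming the data never needs to be applied back to a server) is to merge the change log into the buffers before saving; MergeChangeLog is the TBufDataset method intended for this, though the exact timing behaviour should be verified against your FPC version:

```pascal
uses
  BufDataset;

procedure SaveWithoutChangeLog(ds: TBufDataset; const FileName: string);
begin
  // Fold all pending inserts/edits into the record buffers so the
  // change log no longer has to be scanned per record on save.
  ds.MergeChangeLog;
  ds.SaveToFile(FileName);
end;
```

Note that after MergeChangeLog the pending changes can no longer be sent upstream with ApplyUpdates, so this only fits the "local table" use case described above.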
procedure tmainfo.recev(const sender: TObject);
var
  i1: integer;
begin
  for i1 := 1 to reccount.value do begin
    // ...
  end;
end;