[Lazarus] Writing >1000 TBufDataset records to file is extremely slow
Howard Page-Clark
hdpc at talktalk.net
Mon Mar 27 00:53:25 CEST 2017
On 26/03/17 22:53, Werner Pamler via Lazarus wrote:
> Trying to extend the import/export example of fpspreadsheet from a
> dBase table to a TBufDataset, I came across this issue with
> TBufDataset: while data are posted to the dataset as quickly as
> usual, writing to file takes extremely long when there are more than
> a few thousand records.
>
> Run the demo attached below. On my system, I measure these
> (non-linearly scaling) execution times for writing the TBufDataset
> table to file:
>
> 1000 records -- 0.9 seconds
> 2000 records -- 8.8 seconds
> 3000 records -- 31.1 seconds
> etc.
>
> Compared to that, writing the same data to a dbf file takes the
> blink of an eye. Is there anything I am doing wrong, or should I
> report a bug?
I don't think you are doing anything wrong.
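For reference, here is a minimal sketch of the kind of fill-then-save
loop your demo presumably runs (the field layout, field names and file
name are my assumptions, since the actual demo was attached rather than
quoted):

program BufBench;
{$mode objfpc}{$H+}

uses
  SysUtils, DB, BufDataset;

const
  N = 2000;                          // number of records to write

var
  ds: TBufDataset;
  i: Integer;
  t0: QWord;
begin
  ds := TBufDataset.Create(nil);
  try
    // Hypothetical two-field layout; the attached demo's fields may differ
    ds.FieldDefs.Add('ID', ftInteger);
    ds.FieldDefs.Add('Name', ftString, 20);
    ds.CreateDataset;

    // Posting the records is fast...
    for i := 1 to N do
    begin
      ds.Append;
      ds.FieldByName('ID').AsInteger := i;
      ds.FieldByName('Name').AsString := 'Record ' + IntToStr(i);
      ds.Post;
    end;

    // ...it is this step whose runtime grows non-linearly with N
    t0 := GetTickCount64;
    ds.SaveToFile('bench.bds');
    WriteLn('SaveToFile: ', GetTickCount64 - t0, ' ms for ', N, ' records');
  finally
    ds.Free;
  end;
end.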
I can get small performance increases by
- avoiding FieldByName() calls and using AppendRecord instead
- using SaveToFile directly, avoiding an intermediate memory stream
- increasing the value of PacketRecords
but the speedups are insignificant (see the sketch after this list).
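A sketch of the write loop with those three tweaks applied, reusing the
assumed two-field layout and uses clause from the sketch above (the
helper name FillAndSave is mine, not from the demo):

// Tweaked variant: AppendRecord, larger packet size, direct save
procedure FillAndSave(ds: TBufDataset; NumRecords: Integer);
var
  i: Integer;
begin
  ds.PacketRecords := -1;            // keep all records in one packet
  for i := 1 to NumRecords do
    // AppendRecord sets all fields positionally in one call, avoiding
    // repeated FieldByName lookups inside the loop
    ds.AppendRecord([i, 'Record ' + IntToStr(i)]);
  // Save straight to the file instead of filling a TMemoryStream
  // first and copying it out
  ds.SaveToFile('bench.bds');
end;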
Clearly either the insertion algorithm, the buffering, or the way the
buffered records are written to disk needs to be improved. Maybe all
three areas of TBufDataset can be optimised for better performance.