[Lazarus] Losing data when saving Database fields with "Size" defined and UTF8 chars

Reinier Olislagers reinierolislagers at gmail.com
Wed Jul 17 09:46:51 CEST 2013


On 17/07/2013 01:18, Hans-Peter Diettrich wrote:
> Reinier Olislagers schrieb:
>> On 16/07/2013 17:18, Hans-Peter Diettrich wrote:
>>> Reinier Olislagers schrieb:
>>>> On 15-7-2013 18:43, Hans-Peter Diettrich wrote:
>>>>> Reinier Olislagers schrieb:
>>>>>> On 14-7-2013 8:00, Daniel Simoes de Ameida wrote:
>>>>> Another workaround: use the appropriate codepage for storing
>>>>> strings in
>>>>> the database, so that all characters are single bytes. With the new
>>>>> (encoded) AnsiStrings this should be quite easy (automatic
>>>>> conversion).
>>>>>
>>>> Wouldn't you run into trouble if you want to use a character outside
>>>> the
>>>> codepage? Presumably OP has enabled UTF8 on the db instead of some
>>>> other
>>>> codepage on purpose.
>>> Then the choice of byte sized characters in the DB field is
>>> inappropriate at all. I wonder how the DB or SQL would sort or compare
>>> (LIKE) such strings?
>>
>> No it isn't. Why shouldn't a user be able to enter Chinese, Greek,
>> Cyrillic and Latin characters?
> 
> Then such characters either must be understood properly by the DB, or
> the DB must not care about such data at all.

No, it must be understood properly; otherwise you can't, e.g., perform
SELECTs with WHERE clauses or ORDER BY clauses.
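To see why the database must understand the encoding rather than treat the field as opaque bytes, here is a small sketch. It is in Python rather than FPC, purely for illustration; the strings and the `ascii_fold` helper are made up for the example:

```python
# Illustrative sketch (Python, not FPC): why the DB must understand the
# encoding of its text fields for WHERE/ORDER BY comparisons to work.
s1, s2 = "étude", "ÉTUDE"

# Encoding-aware, case-insensitive comparison matches, as a Unicode-aware
# DB collation would:
assert s1.casefold() == s2.casefold()

def ascii_fold(b: bytes) -> bytes:
    """Byte-wise ASCII-only lowercasing, as a codepage-unaware engine might do."""
    return bytes(c + 32 if 65 <= c <= 90 else c for c in b)

# The same strings compared as opaque UTF-8 bytes do NOT match, because
# 'é' (0xC3 0xA9) and 'É' (0xC3 0x89) differ outside the ASCII range:
assert ascii_fold(s1.encode("utf-8")) != ascii_fold(s2.encode("utf-8"))
```

So a case-insensitive `WHERE name LIKE 'étude'` only behaves correctly if the engine knows the column is UTF-8.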

>> As for sorting etc, there are various unicode collation standards.
> 
> A DB implementing these standards has to offer Unicode fields in the
> first place.

That's not the problem. Can you show me one DB supported by sqldb
that *doesn't* offer Unicode fields?

The point wasn't that DBs don't support Unicode; the point was that the
FPC dataset concept focuses on byte lengths instead of character
lengths.
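The byte-length vs. character-length distinction is exactly where the data loss in the subject line comes from. A minimal sketch (again Python, not FPC, with a made-up input string and a hypothetical Size of 10) of what byte-level truncation does to multi-byte UTF-8 text:

```python
# Illustrative sketch: what happens when a field's Size=10 is interpreted
# as 10 *bytes* rather than 10 characters.
SIZE_IN_BYTES = 10
s = "Grüße München"            # 13 characters ...
raw = s.encode("utf-8")        # ... but 16 bytes: ü and ß take 2 bytes each

truncated = raw[:SIZE_IN_BYTES]  # byte-level truncation, as with Size=10
# The cut lands in the middle of the second 'ü', leaving an invalid
# UTF-8 tail; decoding yields a replacement character and lost text:
print(truncated.decode("utf-8", errors="replace"))  # "Grüße M�"
```

Not only do fewer than Size characters fit, the cut can split a multi-byte sequence and leave the stored value invalid as UTF-8.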

More information about the Lazarus mailing list