[Lazarus] Hi-DPI tweak of components
Ondrej Pokorny
lazarus at kluug.net
Thu Jun 8 13:12:36 CEST 2023
On 08.06.2023 12:24, Giuliano Colla wrote:
>
> On 08/06/23 11:58, Ondrej Pokorny via lazarus wrote:
>> All in all, an over-complicated approach with little gain.
>
> The gain would be that you do not add up rounding errors. We can't
> have fractional pixels, of course, but we may have the exact actual
> size at design/creation time, and for each different DPI the best
> approximation to the actual size. If you switch back and forth between
> two monitors with different DPI the rounding errors remain constant,
> they don't add up.
You have to consider that for monitor DPI scaling, an accumulating series
of multiplications like
A * 1.5 * 1.25 * 1.75 * 2.00 * ...
never appears.
You always have a limited count of different resolutions (usually 2) and
you always switch between up- and down-scaling. Statistically, the
Round() function rounds up and down for equally large sets of float
values. So, statistically, the errors cannot add up; they correct
themselves over the course of the up- and down-scaling operations.
Here is my proof for switching between 2 different resolutions, starting
from 100%:
program Project1;
uses Math;

const
  Resolutions: array[0..3] of Double = (1.25, 1.50, 1.75, 2.00);

var
  R: Double;
  I, F: Integer;

begin
  for R in Resolutions do
    for I := 0 to 1000 do
    begin
      F := I;
      // scale first up and then down
      F := Round(F * R);
      F := Round(F / R);
      if not SameValue(F, I) then
        Writeln(I, ': ', F);
    end;
  ReadLn;
end.
Run the program, and you will see that there is no starting value at 100%
that differs after being scaled up and then down.
-------
If you want to test scaling down and then up, there will of course be a
discrepancy between the starting and ending value after one down- and
up-scaling cycle, because you lose resolution with the first division.
But if you do the same down/up scaling again, you will see that every
starting value settles on a well-defined pair of values. The values
definitely do not grow or shrink with the count of scaling operations.
program Project1;
uses Math;

const
  Resolutions: array[0..3] of Double = (1.25, 1.50, 1.75, 2.00);

var
  R: Double;
  I, F, ErrorCountInMiddleValue, ErrorCountAfterSecondScaling,
    MiddleValue: Integer;

begin
  ErrorCountInMiddleValue := 0;
  ErrorCountAfterSecondScaling := 0;
  for R in Resolutions do
    for I := 0 to 1000 do
    begin
      F := I;
      // first scale down, then up
      F := Round(F / R);
      F := Round(F * R);
      MiddleValue := F;
      if not SameValue(F, I) then
      begin
        Inc(ErrorCountInMiddleValue);
        // scale down and up a second time
        F := Round(F / R);
        F := Round(F * R);
        if not SameValue(F, MiddleValue) then
        begin
          Writeln('Error after second scaling: ', I, ': ', F);
          Inc(ErrorCountAfterSecondScaling);
        end;
      end;
    end;
  Writeln('ErrorCountInMiddleValue: ', ErrorCountInMiddleValue);
  Writeln('ErrorCountAfterSecondScaling: ', ErrorCountAfterSecondScaling);
  ReadLn;
end.
As you can see, there is no starting value that causes a constantly
incrementing or decrementing series of values. Every starting value
settles on a well-defined pair of values after the second scaling.
I consider my arguments proven. Your suggestion to use higher-precision
units for sizes doesn't help at all.
---
And even if you found some very rare scaling combination A -> B -> C -> A
for which some value X changes by 1px every time the whole scaling cycle
is performed: who cares if, after that many scaling operations, the
window/column/... width is bigger or smaller by 1px?
As I said in the beginning, the default values should not be scaled with
this approach; they should always be scaled from a constant value at
100% / 96 PPI. That means the rounding error problem does not apply to
them. It would apply only to user-defined sizes, and there it does not
matter, because if the size doesn't suit the user, he can always resize
the window/column/whatever.
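To make the default-value approach concrete, here is a minimal sketch of
the idea (plain FPC, not actual LCL code; the names are made up): the
default is always recomputed from its constant value at 100% / 96 PPI in
a single rounding step, so no error can accumulate no matter how often
the monitor changes:

program Project1;
const
  DesignPPI = 96;
  DefaultWidth96 = 75; // constant default value at 100% / 96 PPI
  // simulated sequence of monitor PPIs the form moves across
  Monitors: array[0..6] of Integer = (96, 120, 144, 168, 192, 120, 96);

function ScaleFromDesign(ADesignValue, ATargetPPI: Integer): Integer;
begin
  // always exactly one rounding step from the 96 PPI design value
  Result := Round(ADesignValue * ATargetPPI / DesignPPI);
end;

var
  PPI: Integer;
begin
  for PPI in Monitors do
    Writeln(PPI, ' PPI: ', ScaleFromDesign(DefaultWidth96, PPI));
  ReadLn;
end.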
Ondrej