[Lazarus] Fast drawing to canvas

Reimar Grabowski reimgrab at web.de
Sun Feb 26 09:14:58 CET 2012


On Fri, 24 Feb 2012 22:22:49 +0100
Marco van de Voort <marcov at stack.nl> wrote:

> It is sufficient indication that one and the same binary must be able to do
> both. Regardless of whether that is between vendors or not.
If you have to support a range of cards, you are absolutely right. Choosing which technique to use is the problem most of the time: either an invisible mini-benchmark at program startup (render to a texture but never show it to the user, which at least gives you an idea of the real performance) or some heuristic based on OpenGL version/vendor/driver version/card name/star alignment.
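
Roughly what I mean by the heuristic variant, as a Free Pascal sketch (needs the GL and SysUtils units, and a current context, e.g. from a TOpenGLControl; the function name and the version/vendor checks are made-up examples, not recommendations):

// Pick an upload strategy from the driver strings. Purely illustrative.
function UsePBOUpload: Boolean;
var
  Vendor, Version: string;
begin
  // Requires a current OpenGL context.
  Vendor  := LowerCase(String(PChar(glGetString(GL_VENDOR))));
  Version := String(PChar(glGetString(GL_VERSION)));  // e.g. '2.1.2 NVIDIA 295.20'
  // Very crude: only trust PBO uploads on GL >= 2.1 and on vendors you
  // have actually tested (a lexical compare is good enough for 'x.y').
  Result := (Version >= '2.1') and
            ((Pos('nvidia', Vendor) > 0) or (Pos('ati', Vendor) > 0));
end;

The render-to-texture benchmark is more work, but it tells you what the card actually does instead of what its name promises.
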
 
> Of course not. You can test OpenGL version and extensions and conditionally
> execute code accordingly.
That's a given.
I think you misunderstood me. I was talking about different ways of using the PBO. All three of the methods I mentioned will work if PBOs are supported.
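
For reference, the basic PBO upload pattern (orphan, map, copy, TexSubImage) looks roughly like this in Free Pascal. Sketch only: it needs the GL and glext units, the entry points must have been loaded beforehand (e.g. Load_GL_version_1_5 / Load_GL_version_2_1 from glext), and SrcFrame, W, H, FPbo, FTex are assumed to come from the caller:

procedure UploadFrameViaPBO(SrcFrame: Pointer; W, H: Integer;
  FPbo, FTex: GLuint);
var
  Dst: Pointer;
  Size: Integer;
begin
  Size := W * H * 4;  // RGBA, 8 bits per channel
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, FPbo);
  // Orphan the old storage so the driver need not wait for a transfer
  // that may still be in flight.
  glBufferData(GL_PIXEL_UNPACK_BUFFER, Size, nil, GL_STREAM_DRAW);
  Dst := glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
  if Dst <> nil then
  begin
    Move(PByte(SrcFrame)^, PByte(Dst)^, Size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, FTex);
    // With an unpack PBO bound the last argument is an offset into the
    // buffer, not a client pointer, so the copy can run asynchronously.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
      GL_RGBA, GL_UNSIGNED_BYTE, nil);
  end;
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
end;

Common variations fill the buffer with glBufferSubData instead of mapping it, or cycle two PBOs round-robin; the TexSubImage call stays the same.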

> I do. But for me that is the interesting benchmark. And though I readily
> admit that I only started to scratch the surface of testing the various
> performance aspects, I do present realistic benchmarks of about 2 days
> efforts spent on the issue.
I have been doing OpenGL programming for over 10 years (nowadays mostly as a hobby, so I am not up to date on the current bleeding-edge stuff) and I have seen a lot in this time. If you tested your stuff on 2 cards and 2 driver versions, that's all your results are really valid for. It is not unreasonable to draw some conclusions from that, but they may be wrong nonetheless (not likely in your case, but still possible). That does not mean that you are doing anything wrong, only that I take even my own tests with a grain of salt. OpenGL can be a b*tch, and from one driver version to the next (or one vendor to the other, or an X more in the name of the card) the performance of a given code path may change dramatically (at least that's what I have seen). Perhaps the situation got better, but I doubt it.
 
> It depends on your purpose. I'm in computer vision, and I want to show my
> frame as soon as possible.
I thought that especially in computer vision you would try to create the frame on the GPU, perhaps using CUDA or OpenCL. At least there is quite some info about computer vision algorithms and GPGPU on the net, but you are the expert.
Just uploading one texture, showing an image and not doing much more on the card is not a purpose OpenGL is heavily optimized for.
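
For contrast, that kind of workload is essentially just this (fixed-function sketch; GL unit in the uses clause, and Frame, W, H, FTex assumed to exist):

procedure ShowFrameSimple(Frame: Pointer; W, H: Integer; FTex: GLuint);
begin
  glBindTexture(GL_TEXTURE_2D, FTex);
  // Client-memory upload: the driver is free to block here until it has
  // copied (or at least scheduled) the data.
  glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, Frame);
  glEnable(GL_TEXTURE_2D);
  glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
  glEnd;
end;

One upload, one quad, one swap: there is simply not much for the driver to overlap.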

> If you know a way to fire an event after upload
> is finished, don't hesitate to mention it. As long as I don't have that, I
> have no choice but to block on it.
No events in OpenGL, sorry. Only the driver knows what the driver does. That's why I mentioned the NVidia SDK. ATI can give you similar information, but I currently don't know what they call it.
 
> This is not a game where you just fire all textures on level load, and just
> proceed and block at the end, hoping that everything parallelizes as much as
> possible.
Your knowledge about game technology seems to be a little outdated.

> Nvidia's border conditions are typically not my own. And in practice only
> real-life (read: average) performance is usable. Peak performance under
> idealized circumstances is a mere footnote.
It is an SDK. It gives you information from the card/driver that is not accessible in any other way. I am thankful that I can now get at least some info and am not working with a black box as it used to be. You as the developer decide what you use it for, and most of the time you use it to benchmark your own code, so I fail to see what you are talking about.
It is used, for example, by gDebugger, which I can recommend.

R.