It’s late at night and I’m back in Pisa for the last year of my master’s degree. And, as often happens, a weird idea struck me. What if OpenGL is not the right thing for Lightspark? No, I’m not talking about dropping hardware-accelerated rendering, as that’s surely the right way to go, but using OpenGL for it really feels unnatural. In the design of the advanced graphics engine, OpenGL is basically used only to upload images rendered with cairo to VRAM, and to blit and composite all the rendered chunks together... do we really need all of OpenGL’s complexity to do this?
Well... OpenGL is basically the only thing we have, the only way to talk to the graphics hardware. But here comes the Gallium project! Since Gallium separates the API from the driver, it should be possible, in theory, to write a specialized Gallium state tracker that does only the work we need... and maybe does it better.
I’m writing here first because I’m not (yet) experienced enough with the Gallium platform to know if the idea is sane, and second because I somehow feel the same approach could be useful for other apps... for example, Lightspark and compositing window managers have similar needs. So I’d like to have some feedback about writing a small API and a Gallium state tracker to do:
- DMA-accelerated transfers of rendered image data
- Blitting and compositing of such data on screen
- Notify the application when asynchronous work (such as DMA transfers) has finished (BTW: what’s the right way of doing this in OpenGL?)
- Enqueue to-be-uploaded-to-VRAM images and have them transferred sequentially
- Apply simple but programmable (shader-like) transformations to pixel data