Currently the wgpu rasterizer only actually utilizes the GPU for blitting and blurring. I imagine this is only useful in two scenarios:
WebAssembly, because it "kills" performance (roughly 2x slower than native), so offloading work to WebGPU might be useful. However, due to the complexities of Rust <-> JS interoperability in the current WASM ecosystem, subrandr is incapable of actually utilizing hwaccel on the web, so I have not tested this and am purely speculating.
Size-1100 font text in srv3, which currently takes 100 (!) milliseconds to blur on the CPU at large resolutions. (I will attempt to optimize this further in the future.)
In particular, the rasterizer would probably see major performance gains if it started rasterizing glyphs. For this I think there are mainly two approaches:
Flatten the Bezier curves in the glyph outline into a polyline, then tessellate the polyline on the CPU (a sketch of the flattening step follows this list). This is difficult, but not as difficult as the other approach. Because of the flattening, the tessellation will also be pretty bad, with lots of thin triangles (especially if it's not a Delaunay triangulation, which is more difficult to implement for the non-zero winding rule). An alternative is to handle Beziers on the GPU, although I'm not sure how to do that while handling winding-count changes across overlapping Beziers. Bezier intersection could work, and a tessellator could theoretically handle Beziers this way, but it is also slow and approximate since it is most commonly implemented via recursive subdivision (analytical approaches also exist afaik), and it would further degrade the precision of the tessellator.
Rasterize the entire outline on the GPU. This is very complex, and there are whole projects like Pathfinder or vello that attempt to do exactly this. Their results are very impressive, but the complexity involved is probably too much to re-implement ourselves. Maybe an experiment that makes use of one of these libraries would be worth it, though.
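For reference, here is a minimal sketch of the flattening step from the first approach: recursive subdivision (de Casteljau) of a cubic Bezier until the control points lie within a tolerance of the chord. The `Point` type, tolerance value, and function names are illustrative and not subrandr's actual types; a real implementation would also handle quadratic outlines and reuse the library's own geometry types.

```rust
/// Illustrative 2D point type; subrandr's own point type would be used in practice.
#[derive(Clone, Copy, Debug)]
struct Point {
    x: f32,
    y: f32,
}

fn lerp(a: Point, b: Point, t: f32) -> Point {
    Point {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
    }
}

/// Squared distance from `p` to the infinite line through `a` and `b`.
fn line_dist_sq(p: Point, a: Point, b: Point) -> f32 {
    let dx = b.x - a.x;
    let dy = b.y - a.y;
    let len_sq = dx * dx + dy * dy;
    if len_sq == 0.0 {
        let px = p.x - a.x;
        let py = p.y - a.y;
        return px * px + py * py;
    }
    let cross = (p.x - a.x) * dy - (p.y - a.y) * dx;
    cross * cross / len_sq
}

/// Recursively subdivide a cubic Bezier until both control points lie within
/// `tolerance` of the chord p0..p3, then emit the endpoint of each flat segment.
fn flatten_cubic(p0: Point, p1: Point, p2: Point, p3: Point, tolerance: f32, out: &mut Vec<Point>) {
    let tol_sq = tolerance * tolerance;
    if line_dist_sq(p1, p0, p3) <= tol_sq && line_dist_sq(p2, p0, p3) <= tol_sq {
        out.push(p3);
        return;
    }
    // de Casteljau split at t = 0.5, then recurse on both halves.
    let ab = lerp(p0, p1, 0.5);
    let bc = lerp(p1, p2, 0.5);
    let cd = lerp(p2, p3, 0.5);
    let abc = lerp(ab, bc, 0.5);
    let bcd = lerp(bc, cd, 0.5);
    let mid = lerp(abc, bcd, 0.5);
    flatten_cubic(p0, ab, abc, mid, tolerance, out);
    flatten_cubic(mid, bcd, cd, p3, tolerance, out);
}

fn main() {
    let p0 = Point { x: 0.0, y: 0.0 };
    let p1 = Point { x: 10.0, y: 40.0 };
    let p2 = Point { x: 40.0, y: 40.0 };
    let p3 = Point { x: 50.0, y: 0.0 };
    let mut polyline = vec![p0];
    flatten_cubic(p0, p1, p2, p3, 0.25, &mut polyline);
    println!("flattened into {} points", polyline.len());
}
```

The resulting polylines would then go through a polygon tessellator that respects the non-zero winding rule, which is where the thin-triangle problem mentioned above comes from.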
Maybe at least a glyph atlas could be implemented to reduce the number of draw calls, though I'm not sure whether that would help (a rough sketch of what such an atlas could look like follows below). In general, the rasterizer is currently not as much of a bottleneck as I thought it would be, so I'd rather not worry about this for now. There are more important things to work on, so the wgpu rasterizer will likely remain in this "not really much better than software" state for now.
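To make the glyph atlas idea concrete, here is a rough sketch of a CPU-side cache using naive shelf packing, so that many glyphs share one texture and can be drawn with a single bind/draw. All names (`GlyphKey`, `GlyphAtlas`, etc.) are hypothetical, not part of subrandr or wgpu; the actual upload of the rasterized coverage bitmap into the reserved region would happen separately (e.g. via wgpu's `Queue::write_texture`).

```rust
use std::collections::HashMap;

/// Hypothetical key identifying a rasterized glyph; a real key would also
/// include subpixel offset, style flags, etc.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct GlyphKey {
    font_id: u32,
    glyph_index: u32,
    size_px: u32,
}

/// Location of a glyph inside the atlas texture, in pixels.
#[derive(Clone, Copy, Debug)]
struct AtlasSlot {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

/// Single-texture atlas with naive shelf packing: glyphs are placed
/// left-to-right on the current row; a new row starts when the current
/// one is full.
struct GlyphAtlas {
    size: u32,
    cursor_x: u32,
    cursor_y: u32,
    row_height: u32,
    slots: HashMap<GlyphKey, AtlasSlot>,
}

impl GlyphAtlas {
    fn new(size: u32) -> Self {
        Self { size, cursor_x: 0, cursor_y: 0, row_height: 0, slots: HashMap::new() }
    }

    /// Returns the cached slot, or reserves a new one for the caller to
    /// upload into. Returns `None` when the atlas is full and would need
    /// to be grown or evicted.
    fn get_or_insert(&mut self, key: GlyphKey, width: u32, height: u32) -> Option<AtlasSlot> {
        if let Some(slot) = self.slots.get(&key) {
            return Some(*slot);
        }
        // Start a new row if the glyph does not fit on the current one.
        if self.cursor_x + width > self.size {
            self.cursor_x = 0;
            self.cursor_y += self.row_height;
            self.row_height = 0;
        }
        if self.cursor_y + height > self.size {
            return None; // atlas full
        }
        let slot = AtlasSlot { x: self.cursor_x, y: self.cursor_y, width, height };
        self.cursor_x += width;
        self.row_height = self.row_height.max(height);
        self.slots.insert(key, slot);
        Some(slot)
    }
}

fn main() {
    let mut atlas = GlyphAtlas::new(1024);
    let key = GlyphKey { font_id: 0, glyph_index: 42, size_px: 32 };
    if let Some(slot) = atlas.get_or_insert(key, 24, 30) {
        println!("glyph cached at ({}, {})", slot.x, slot.y);
    }
}
```

Whether this actually reduces draw-call overhead enough to matter would need to be measured, since the rasterizer is not currently the bottleneck.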
I have already spent too much time investigating this, time that could have been spent on more impactful things.