SDL_RenderGeometryRaw: optionally copy additional coordinates based on stride #11276
The stride could indicate some other data besides additional vertex coordinates. What's the use case for this?
This change alone is sufficient to enable rendering fully textured perspective projections by transforming the coordinates themselves on the CPU, without SDL knowing anything about shaders or matrices or z-buffers. Without it, there is no way to get at the hyperbolic interpolation needed for perspective-correct mapping.
Can you provide a little demo program?
Wouldn't this break vertex positions for people who have wide strides that contain non-position data? For example, if they have an array of structs where other per-vertex fields immediately follow the xy position, those fields would suddenly be interpreted as z and w.
Yes, it would. I'm pretty sure the answer here is that you should use the new GPU API; I'm just curious what could be done with this.
Look, Ma, no shaders (or matrices)!

perspective.mp4

```lua
local original = {
{300,0,0,1, 1,0},
{0,300,0,1, 0,1},
{0,0,0,1, 0,0},
}
local tri = love.graphics.newMesh(
{
{"VertexPosition", "float", 4},
{"VertexTexCoord", "float", 2},
},
original,
"triangles",
"dynamic"
)
local tex = love.graphics.newCanvas(32,32)
function love.load()
love.graphics.setCanvas(tex)
for i=1,32,4 do
love.graphics.line(i,0,i,32)
end
for j=1,32,4 do
love.graphics.line(0,j,32,j)
end
love.graphics.setCanvas()
tri:setTexture(tex)
end
local phase = 0
function love.update(dt)
phase = phase + dt*math.pi*2
local extra = math.cos(phase)
tri:setVertexAttribute(1, 1, original[1][1], original[1][2], original[1][3], original[1][4] + extra)
end
function love.draw()
love.graphics.draw(tri)
end
```

But yeah, I guess RenderGeometryRaw's behaviour shouldn't be altered in the manner originally stated, due to its breaking nature. But it's just so tantalizingly close for the Render API, with just one tiny change, without SDL having to take up any additional responsibility or design opinion.
That's interesting, but I mean from a practical point of view, for creating an SDL game.
Here's a better-looking example:

look_ma_no_vertex_shaders.mp4

Code:

```lua
local original = {
{-1,-1,-1, 1, 0,0},
{-1,-1, 1, 1, 0,1},
{ 1,-1, 1, 1, 1,1},
{-1,-1,-1, 1, 0,0},
{ 1,-1, 1, 1, 1,1},
{ 1,-1,-1, 1, 1,0},
{-1,-1,-1, 1, 0,0},
{ 1,-1,-1, 1, 1,0},
{ 1, 1,-1, 1, 1,1},
{-1,-1,-1, 1, 0,0},
{ 1, 1,-1, 1, 1,1},
{-1, 1,-1, 1, 0,1},
{-1,-1,-1, 1, 0,0},
{-1, 1,-1, 1, 0,1},
{-1, 1, 1, 1, 1,1},
{-1,-1,-1, 1, 0,0},
{-1, 1, 1, 1, 1,1},
{-1,-1, 1, 1, 1,0},
{ 1, 1, 1, 1, 0,0},
{ 1, 1,-1, 1, 0,1},
{-1, 1,-1, 1, 1,1},
{ 1, 1, 1, 1, 0,0},
{-1, 1,-1, 1, 1,1},
{-1, 1, 1, 1, 1,0},
{ 1, 1, 1, 1, 0,0},
{-1, 1, 1, 1, 1,0},
{-1,-1, 1, 1, 1,1},
{ 1, 1, 1, 1, 0,0},
{-1,-1, 1, 1, 1,1},
{ 1,-1, 1, 1, 0,1},
{ 1, 1, 1, 1, 0,0},
{ 1,-1, 1, 1, 0,1},
{ 1,-1,-1, 1, 1,1},
{ 1, 1, 1, 1, 0,0},
{ 1,-1,-1, 1, 1,1},
{ 1, 1,-1, 1, 1,0},
}
local tri = love.graphics.newMesh(
{
{"VertexPosition", "float", 4},
{"VertexTexCoord", "float", 2},
},
original,
"triangles",
"dynamic"
)
local tex = love.graphics.newCanvas(32,32)
function love.load()
love.graphics.setCanvas(tex)
for i=1,32,4 do
love.graphics.line(i,0,i,32)
end
for j=1,32,4 do
love.graphics.line(0,j,32,j)
end
love.graphics.setCanvas()
tri:setTexture(tex)
end
local phase = 0
function love.update(dt)
phase = phase + dt*math.pi*2 * 0.5
end
function love.draw()
for i=1, #original do
local X,Y,Z,W = unpack(original[i])
X, Z = X*math.cos(phase) - Z*math.sin(phase), X*math.sin(phase) + Z*math.cos(phase)
Z = Z + 5
local x = X*500 + 400*Z
local y = Y*500 + 300*Z
local z = W
local w = Z
tri:setVertexAttribute(i, 1, x, y, z, w)
end
love.graphics.draw(tri)
end
```

Also, the example is made in Love2D because, obviously, I can't access the third and fourth coordinates with SDL_RenderGeometryRaw currently; Love2D by default just forwards the coordinates verbatim, as I described.
Being able to manually manipulate all four coordinates means that one could create simple unshaded 3D visualizations without going into the GPU API at all; for example, math teachers who don't care about fancy lighting or shader performance and just want to draw a simple XYZ axis plot with correctly mapped images. It allows them to do their own math to transform the homogeneous coordinates, without SDL prescribing any notion of projection matrices. As long as the values are transmitted verbatim to the render backend, the built-in interpolation hardware will take care of the perspective mapping.
The thing to note is that the fixed-function interpolation stage intrinsically has no notion of transform or projection matrices: all it does is dumbly divide xyz by w before linearly interpolating the output values. That's all there is to "perspective projection". Opening up the ability to manually set the w it divides by is sufficient to access the proper interpolation functionality.
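As a minimal sketch of that claim (names are illustrative, not SDL code): the fixed-function stage divides each vertex by its w, and interpolates each attribute by linearly blending attr/w and 1/w before dividing, which is all "perspective correctness" amounts to.

```c
#include <stdio.h>

/* Illustrative sketch of what the fixed-function hardware does with a
   homogeneous vertex: no matrices anywhere, just a divide by w. */
typedef struct { float x, y, z, w; } Vec4;

/* Perspective divide applied to every vertex after the vertex stage. */
static Vec4 perspective_divide(Vec4 v)
{
    Vec4 ndc = { v.x / v.w, v.y / v.w, v.z / v.w, 1.0f };
    return ndc;
}

/* Perspective-correct interpolation of one attribute (e.g. a texture
   coordinate) between two vertices: attr/w and 1/w are interpolated
   linearly in screen space, then divided at the end. */
static float interp_perspective(float a0, float w0,
                                float a1, float w1, float t)
{
    float num = (1.0f - t) * (a0 / w0) + t * (a1 / w1);
    float den = (1.0f - t) * (1.0f / w0) + t * (1.0f / w1);
    return num / den;
}

int main(void)
{
    Vec4 p = perspective_divide((Vec4){ 2.0f, 1.0f, 0.0f, 2.0f });
    printf("ndc x=%f y=%f\n", p.x, p.y);

    /* Halfway across the span, the attribute is biased toward the vertex
       with the smaller w (the "closer" one): prints 0.333333. */
    printf("%f\n", interp_perspective(0.0f, 1.0f, 1.0f, 2.0f, 0.5f));
    return 0;
}
```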
I took a look at this a bit, and allowing this is not trivial. The assumption that you're only using 2 floats for the xy position and that you're using the normal model-view matrices is baked into all the backends. You're probably better off writing a thin layer over the SDL GPU API that has this designed in, along with support for shaders and so forth. If you write something like that I'd love to see it!
I don't understand your argument about the model-view matrix; it's not touched in any way at all. All that's happening is sending four coordinates to the GPU; this could literally be implemented in OpenGL 1.1 with `glVertexPointer`. The only change involved is sending two additional coordinates while keeping everything else unchanged. I think the traditional modelview notion is throwing you off into thinking that perspective has to have some sort of matrix involved, when in reality all you ever need is four numbers to represent your vertex.
The whole point of this use case is to not involve writing shaders or any sort of additional complexity; it's trivially achieved by not artificially locking away the last two coordinates that the GPU is already using for interpolation anyway.
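A toy OpenGL 1.1 fragment illustrating the claim (assuming a current GL context and a bound texture; this is a sketch, not code from the SDL backend): both matrices stay identity, yet the fixed-function divide by w still produces perspective-correct texturing.

```c
#include <GL/gl.h>

/* Hand-built homogeneous coordinates with identity matrices and no
   shaders: the divide by w happens in fixed function regardless. */
static void draw_perspective_triangle(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);
    glTexCoord2f(0.0f, 0.0f); glVertex4f(-0.5f, -0.5f, 0.0f, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex4f( 1.0f, -0.5f, 0.0f, 2.0f); /* w != 1 */
    glTexCoord2f(0.5f, 1.0f); glVertex4f( 0.0f,  0.5f, 0.0f, 1.0f);
    glEnd();
}
```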
Well, feel free to submit a proof of concept PR.
I think I might have misunderstood what you meant by the following:
> You're probably better off writing a thin layer over the SDL GPU API that has this designed in.
Do you mean this as "you are welcome to try to add this functionality to the Render API by wrapping the GPU API", or did you mean it as "if you want to do something like this, you're better off going straight to the GPU API instead of the Render API"?
I meant the latter. The implementation isn't that hard (you can look at SDL_render_gpu.c for inspiration) and you'll have complete flexibility over how the API works for you. You can add full coordinate support and shader support, etc.
And are you still open to a PR demonstrating that the functionality can be trivially added to the Render API without altering any of the matrices currently used by the backends?
Sure.
Is a fully functioning PR required to convince you that the existing matrices do not need to be touched in any way, or would a toy GL 1.1 example matching the same assumptions made by the backend be enough to get the point across? It's a lot of abstraction to dig through from scratch, without existing knowledge of the organizational cruft, just to prove a point, and I'm concerned about whether there's a release-timeline cutoff for preliminary consideration, which would require a hyper-accelerated effort on my part to meet. In any case, it'd help if you could point me to where and which assumptions are made, and where the geometry is actually communicated to the GPU.
I understand the point, I'm just not sure how the existing code could be easily modified to accept data in the way you propose. That's one of the reasons I'm suggesting you write a new API on top of the GPU API that gives you more flexibility and lets you pass through data however you choose. We often have people asking for "the render API + stuff", and maybe it's time to write that. :)
No, there's no release-timeline cutoff. If we added something like this, it would not use the proposed API; instead, we would probably add a new function that allowed the application to specify the vertex layout, and that could be done anytime.
Take a look at QueueCmdGeometry() in SDL_render.c; that's probably the best place to start.
@slouken found it: SDL/src/render/opengl/SDL_render_gl.c, lines 1409 to 1419 in a10578a:
```diff
        } else {
            // SetDrawState handles glEnableClientState.
            if (thistexture) {
-               data->glVertexPointer(2, GL_FLOAT, sizeof(float) * 8, verts + 0);
-               data->glColorPointer(4, GL_FLOAT, sizeof(float) * 8, verts + 2);
-               data->glTexCoordPointer(2, GL_FLOAT, sizeof(float) * 8, verts + 6);
+               data->glVertexPointer(4, GL_FLOAT, sizeof(float) * 10, verts + 0);
+               data->glColorPointer(4, GL_FLOAT, sizeof(float) * 10, verts + 4);
+               data->glTexCoordPointer(2, GL_FLOAT, sizeof(float) * 10, verts + 8);
            } else {
-               data->glVertexPointer(2, GL_FLOAT, sizeof(float) * 6, verts + 0);
-               data->glColorPointer(4, GL_FLOAT, sizeof(float) * 6, verts + 2);
+               data->glVertexPointer(4, GL_FLOAT, sizeof(float) * 8, verts + 0);
+               data->glColorPointer(4, GL_FLOAT, sizeof(float) * 8, verts + 4);
            }
        }
```

This is all that's needed to enable a new `RenderGeometryHomogeneous` in the OpenGL backend. Basically, the old RenderGeometry and RenderGeometryRaw functions simply copy just the xy coordinates and leave zw as 0 and 1; only in RenderGeometryHomogeneous do you also copy the zw components over. This is significantly simpler than a whole new API that involves attaching custom vertex formats and matching vertex shaders -- at that point you're literally just in GPU-API territory. This proposal instead forces you to stick with the default shader and default transforms, only letting you code-golf perspective mapping by manipulating the vertices manually.
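A hypothetical sketch of the matching copy logic on the SDL_render.c side (the function name and the defaulting rule follow this thread's proposal; this is not actual SDL code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Copy up to four position floats per vertex into the backend's vertex
   buffer, defaulting z and w to 0 and 1 when the caller's stride only
   carries xy (mirroring the proposal in this issue). */
static void copy_homogeneous_xyzw(float *dst, const float *xy,
                                  int xy_stride, int num_vertices)
{
    /* Does the stride leave room for z and w after x and y? */
    const bool has_zw = xy_stride >= (int)(4 * sizeof(float));

    for (int i = 0; i < num_vertices; ++i) {
        const float *src =
            (const float *)((const uint8_t *)xy + (size_t)i * (size_t)xy_stride);
        dst[0] = src[0];                 /* x */
        dst[1] = src[1];                 /* y */
        dst[2] = has_zw ? src[2] : 0.0f; /* z defaults to 0 */
        dst[3] = has_zw ? src[3] : 1.0f; /* w defaults to 1 */
        dst += 4;
    }
}
```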
Interesting. I wonder if this would cause performance issues for existing platforms. I'm guessing not? The SDL 2D render API is pretty lightweight relative to most 3D GPU loads.
Can a new enum value be added to `SDL_RenderCommandType`? On one hand, having a new render command type prevents the draw calls from being batched with the 2D RenderGeometry commands, but on the other hand, I don't think there are many scenarios where you would frequently interleave 2D draw calls with them.
Internals can change, and new API functions can be added; we just can't change the existing public API.
SDL_RenderGeometryRaw already lets you specify an arbitrary stride for your vertex list.
It would be very useful to allow up to four spatial coordinates to be transmitted to the render backend when the stride is greater than 8 bytes.
Backends which accept up to four spatial coordinates should have the third and fourth coordinates default to zero and one respectively if the user provides a stride of only two coordinates.
SDL should make no additional guarantees and take no responsibility beyond dumb transmission of the coordinates to the hardware, and the software renderer should do nothing at all with the optional third and fourth coordinates.
Example:
SDL/src/render/opengl/SDL_render_gl.c, line 963 in e0321ca
SDL/src/render/opengl/SDL_render_gl.c, lines 994 to 995 in e0321ca
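For illustration, a hypothetical caller under this proposal might look like the following (assuming SDL3's SDL_RenderGeometryRaw signature; the MyVertex layout is invented here, and today SDL reads only x and y from the position stream):

```c
#include <SDL3/SDL.h>

/* Interleaved vertex with a full homogeneous position up front, so the
   position stride of sizeof(MyVertex) > 8 bytes would signal the extra
   z and w coordinates under this proposal. */
typedef struct {
    float x, y, z, w;  /* homogeneous position */
    SDL_FColor color;
    float u, v;        /* texture coordinates */
} MyVertex;

static void draw_homogeneous_triangle(SDL_Renderer *renderer,
                                      SDL_Texture *texture,
                                      const MyVertex verts[3])
{
    /* One interleaved array; every attribute pointer strides by the
       whole struct. */
    SDL_RenderGeometryRaw(renderer, texture,
                          &verts[0].x, (int)sizeof(MyVertex),
                          &verts[0].color, (int)sizeof(MyVertex),
                          &verts[0].u, (int)sizeof(MyVertex),
                          3, NULL, 0, 0);
}
```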