Hacker News

What do you mean GPUs don't support them? Most OpenGL tutorials I've read have people building triangles in 2D to learn fragment shaders and vertices before going 3D. Most GPUs also support parametric curves, so even hardware-accelerated Bézier curves should be possible.


Wait, why do you say GPUs support parametric curves? GPUs are based on triangles mostly.



That's like saying "GPUs support Master Chief". You can model Master Chief with triangles, and you can model parametric curves with triangles. But I wouldn't call that "supporting parametric curves": you're still rasterizing triangles; they're just morphed into the shape of a curve. And most practical, shipping versions of this technique do adaptive triangulation on the CPU, since otherwise you have no idea of your mesh density and are either over-submitting or under-submitting triangles.
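A minimal sketch of that adaptive-triangulation idea (Python standing in for real CPU-side engine code; the control points and tolerance are made up): recursively split a quadratic Bézier until each piece is flat enough, then hand the resulting line segments to the GPU as geometry. A tighter tolerance emits more segments, which is exactly the over/under-submission trade-off mentioned above.

```python
# Hypothetical sketch: adaptive flattening of a quadratic Bezier on the CPU.

def subdivide(p0, p1, p2):
    """Split a quadratic Bezier at t = 0.5 (de Casteljau)."""
    m01 = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    m12 = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    mid = ((m01[0] + m12[0]) / 2, (m01[1] + m12[1]) / 2)
    return (p0, m01, mid), (mid, m12, p2)

def flatness(p0, p1, p2):
    """Distance of the control point p1 from the chord p0-p2."""
    dx, dy = p2[0] - p0[0], p2[1] - p0[1]
    num = abs(dy * p1[0] - dx * p1[1] + p2[0] * p0[1] - p2[1] * p0[0])
    return num / max((dx * dx + dy * dy) ** 0.5, 1e-12)

def flatten(p0, p1, p2, tol=0.25, out=None):
    """Adaptively flatten the curve into line segments within `tol`."""
    if out is None:
        out = [p0]
    if flatness(p0, p1, p2) <= tol:
        out.append(p2)  # flat enough: emit the chord as one segment
    else:
        left, right = subdivide(p0, p1, p2)
        flatten(*left, tol=tol, out=out)
        flatten(*right, tol=tol, out=out)
    return out

pts = flatten((0, 0), (50, 100), (100, 0), tol=0.25)
```

The segment count adapts to curvature and tolerance, rather than being fixed up front.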

Loop-Blinn, similarly, is mostly a CPU-side approach and has a lot of drawbacks, but at its core it's using the pixel shader to define a curve profile.
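For reference, a hedged CPU mock of what the Loop-Blinn pixel shader does for one quadratic segment: the three control points are rasterized as a single triangle with canonical texture coordinates (0,0), (1/2,0), (1,1) assigned to its vertices, the hardware interpolates those coordinates, and the shader just evaluates the implicit function u² − v.

```python
# Hypothetical CPU mock of the Loop-Blinn pixel-shader test for one
# quadratic Bezier segment drawn as a single triangle.

def loop_blinn(bary):
    """bary = barycentric weights of a pixel inside the curve's triangle.
    Interpolates the canonical (u, v) coords (0,0), (0.5,0), (1,1) and
    returns u^2 - v: <= 0 means the pixel is on the filled side of the
    curve, > 0 means it would be discarded."""
    w0, w1, w2 = bary
    u = w0 * 0.0 + w1 * 0.5 + w2 * 1.0
    v = w0 * 0.0 + w1 * 0.0 + w2 * 1.0
    return u * u - v
```

As a sanity check, the on-curve point B(0.5) = 0.25·p0 + 0.5·p1 + 0.25·p2 interpolates to (u, v) = (0.5, 0.25), where u² − v is exactly 0, while the off-curve control point p1 evaluates to 0.25 and gets discarded.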


The parametric curves don't get transformed into triangles. This isn't tessellation or a similar technique. You aren't feeding the GPU any triangles - you're only feeding it the function that defines the parametric curve. Then, using that function, the GPU can calculate the pixel output directly. Again, modern GPUs (really, most GPUs of the past decade) can support more than just triangles. These more exotic techniques just don't get as much attention, since most graphics assets are still built from triangle meshes.
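To make that concrete, here's a sketch of the idea (Python standing in for a fragment shader; the control points, stroke width, and sample count are all made-up assumptions): each pixel evaluates the curve's parametric function directly and decides its own coverage, with no triangles describing the curve itself.

```python
# Hypothetical per-pixel evaluation of a parametric curve, brute-force
# sampled in t. A real shader would run `covered` once per fragment.

def curve(t):
    """Quadratic Bezier C(t) with fixed, made-up control points."""
    p0, p1, p2 = (0.0, 0.0), (50.0, 100.0), (100.0, 0.0)
    s = 1.0 - t
    return (s * s * p0[0] + 2 * s * t * p1[0] + t * t * p2[0],
            s * s * p0[1] + 2 * s * t * p1[1] + t * t * p2[1])

def covered(px, py, half_width=1.5, samples=256):
    """'Pixel shader' body: is (px, py) within half_width of the curve?"""
    for i in range(samples + 1):
        cx, cy = curve(i / samples)
        if (px - cx) ** 2 + (py - cy) ** 2 <= half_width ** 2:
            return True
    return False
```

The brute-force t-sampling is purely illustrative; shipping shader techniques use closed-form or implicit distance tests instead.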


GPUs can take in way more than just triangles as input. Particle simulation and even ray tracing are implemented on GPUs nowadays. Support for parametric curves was one of the more recent additions.


That's a very... naive view of how a GPU works. Particle simulation is done in compute, and ray tracing, as implemented in RTX/DXR, is done on a soup of triangles. The core of rasterization is still done on triangles, and can't easily be done in compute. Have any references to parametric curves on GPUs? All the approaches I know of, like the recent mesh shader work, still output triangle meshes.


This is from 2007: https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-...

Again, the vast majority of use cases for GPUs are 3D vertex graphics. But they're capable of more than that. Modern GPUs are very different from early GPUs that only worked with triangles. Some of the early ones were essentially ASICs and couldn't even load different shader programs.


"In compute" just means that there aren't inputs and outputs related to the current output frame in the calculation job - the computational possibilities are the same.

Triangles aren't necessarily involved in rendering either; see e.g. how the stuff on shadertoy.com works.
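A toy CPU version of the shadertoy model (the framebuffer size and the circle are arbitrary assumptions): one "shader" function runs once per pixel, and the shape exists only in the math, not in any triangle list.

```python
# Hypothetical mock of a shadertoy-style fragment shader: every pixel
# evaluates the same function of its coordinate; here the geometry is a
# signed distance to a circle.

WIDTH, HEIGHT = 16, 16

def sdf_circle(x, y, cx, cy, r):
    """Signed distance from point (x, y) to a circle: negative inside."""
    return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

# "Run the shader" for every pixel center of a tiny framebuffer.
image = [[1 if sdf_circle(x + 0.5, y + 0.5, 8.0, 8.0, 5.0) < 0 else 0
          for x in range(WIDTH)]
         for y in range(HEIGHT)]
```

On shadertoy the only triangles involved are the two that cover the screen; everything visible comes out of the per-pixel function.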


GPUs aren't based on anything in particular. They're Turing complete; you could run Linux on one if you wanted to.

What you are saying is outdated by maybe a couple of decades.


A Turing machine is Turing complete too, but it would be hopelessly inefficient to run Linux on one. We're not talking about raw computability here, but feasibility. And still, I'm not aware of anything "running Linux on a GPU"; their scheduling engines aren't designed for those sorts of workloads.


The first triangle is still in 3D space, but the z coordinates are set to 0.


Or you can just not define a z coordinate in the vertex buffer object (at least in OpenGL). I think the vertex shader might still need to output a third coordinate. But you can always just discard it.
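A sketch of that setup (a Python mock of the GL pipeline; the names are hypothetical): the buffer stores only 2-component positions, and the "vertex shader" pads them out the way GLSL's `gl_Position = vec4(pos, 0.0, 1.0)` would.

```python
# Hypothetical mock: 2-component vertex buffer, with the vertex shader
# padding each position to the 4-component clip-space form.

triangle_2d = [(-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)]  # one vec2 per vertex

def vertex_shader(pos):
    x, y = pos
    return (x, y, 0.0, 1.0)  # pad z = 0, w = 1, like vec4(pos, 0.0, 1.0)

clip_positions = [vertex_shader(p) for p in triangle_2d]
```

The buffer never stores a z; the shader supplies the constant third and fourth components.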



