If you're not in a Node.js context, the Canvas API does a decent job of rasterizing SVGs nowadays. Once rasterized, you can call canvas.toDataURL() to get a download link. Here's a demo:
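A minimal sketch of that demo, assuming a browser context (the SVG markup here is just a placeholder): load the SVG into an image via a data URI, draw it onto a canvas, then call toDataURL():

```javascript
// Sketch: rasterize an SVG string with the Canvas API and grab a data URL.
// Browser-only parts are guarded; the SVG markup is a placeholder example.
function svgToDataURI(svg) {
  // Encode the SVG source as a data URI an <img> element can load.
  return 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);
}

if (typeof document !== 'undefined') {
  const svg =
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">' +
    '<circle cx="50" cy="50" r="40" fill="teal"/></svg>';
  const img = new Image();
  img.onload = () => {
    const canvas = document.createElement('canvas');
    canvas.width = 100;
    canvas.height = 100;
    canvas.getContext('2d').drawImage(img, 0, 0);
    // Ready to use as the href of an <a download> link.
    const pngDataURL = canvas.toDataURL('image/png');
    console.log(pngDataURL.slice(0, 30));
  };
  img.src = svgToDataURI(svg);
}
```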
I used to have trouble with retina displays: I always got a blurred image, no matter what trick I tried. Is there a way to fix the rendering pixel ratio at 1?
The usual approach with canvas for high dpi is to increase the size of the canvas and then scale it back down afterwards: https://codepen.io/graup/pen/jOPpopR
This downloads a 200x200 image. How do I scale it back to 100x100, ideally so that the image comes out identical no matter which display it was exported on? I'd also like to avoid pulling in any image-processing libraries. The use case is exporting an HTML element (e.g. a chart) and sending it to another API (e.g. attaching it to a report), without downloading anything.
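One way to get a display-independent export, as a sketch: draw into a canvas whose pixel size is set explicitly to 100x100 (ignoring devicePixelRatio), then ship the blob with fetch instead of triggering a download. The /reports endpoint is hypothetical; parseDataURL is a small helper in case the API wants base64 from toDataURL() instead of a blob:

```javascript
// Sketch: export at a fixed pixel size (ratio 1) regardless of display DPI,
// then send the result to an API instead of downloading it.
function parseDataURL(dataURL) {
  // Split "data:image/png;base64,AAAA" into its mime type and base64 payload,
  // for APIs that accept base64 rather than multipart uploads.
  const [header, data] = dataURL.split(',');
  const mime = header.slice('data:'.length).split(';')[0];
  return { mime, data };
}

if (typeof document !== 'undefined') {
  const source = document.querySelector('canvas'); // e.g. the 200x200 hi-dpi canvas
  const out = document.createElement('canvas');
  out.width = 100;  // fixed pixel size: identical output on every display
  out.height = 100;
  out.getContext('2d').drawImage(
    source, 0, 0, source.width, source.height, // full source rect
    0, 0, 100, 100                             // scaled into the fixed target
  );
  out.toBlob((blob) => {
    const form = new FormData();
    form.append('chart', blob, 'chart.png');
    fetch('/reports', { method: 'POST', body: form }); // hypothetical endpoint
  }, 'image/png');
}
```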
If CSS support in SVG is important, then it's actually better to use a backend library like librsvg (which renders through Cairo), with node-canvas on top of it, the way Automattic does. Even Inkscape has problems with CSS in SVG files. CSS rendering in browsers is a moving target: pages may use styles that have no straightforward rasterization path. I haven't had much luck rasterizing SVG styles beyond the CSS2 spec.
Not JavaScript, but screenshots work best; other approaches suffer from CSS and rendering bugs.
The best way is to just use your browser in headless mode from the command line. You can take a screenshot and specify the window size.
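For example, with a recent Chrome/Chromium build (the binary name and URL are placeholders for your setup):

```shell
# Take a screenshot with headless Chrome at an explicit window size.
# The binary name varies by platform: chrome, chromium, google-chrome, etc.
# The URL is a placeholder.
chrome --headless --disable-gpu \
  --window-size=1280,720 \
  --screenshot=chart.png \
  https://example.com/chart.html
```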
True, but on a digital display (one with discrete, fixed pixels, rather than vector-based analog primitives) you'll eventually have to convert the "vector" picture into a "pixel" picture.
Is there any good book or resource on how all the fancy stuff you can use in Inkscape/Illustrator is actually implemented in terms of rendering? E.g. how are curves and lines with a thickness rendered, or shapes defined by boolean operations, etc.?
CS148 [1] varies from year to year (most people complained that I made the final assignment on subdivision surfaces too hard, but I got at least a few "Thank you, this was awesome") and is the precursor to 248. I'd recommend it if you're getting started.
If you like ray tracing, Cem Yuksel currently teaches most of the related courses at Utah [2].
There's tons of material online now. If you want a "traditional Utah graphics" curriculum, Cem is teaching most of the stuff (with updates!) from 15 years ago. You can also probably find the old course slides and assignments. I'd suggest cs6620, personally.
CS 140 (I hope I remembered the course number right), where you write an operating system mostly from scratch, was notoriously time-consuming and rewarding, at least back when I was there.
Alternatively: if you build SDKs or do work in gaming, mobile, or really anything involving images, you will end up understanding or implementing this, or more than this.
It's pretty ridiculous and has a ton of just bizarre stuff you have to read through (and do!) to even get to something remotely interesting.
One of the first projects I "assigned" to myself when learning to program was to create a 3d environment and be able to scale, rotate, translate objects in 3d. It was easy and fun because I got to choose the language, the rules, etc.
This assignment makes me cringe a bit because there are a lot of hoops to jump through. But yeah, I guess, no pain no gain or something like that :)
No. We implemented a 3d software frame buffer pipeline at my non-Stanford, non-"elite" computer science school, in C and with no dependencies other than the standard library, for our computer graphics class.
I think the spirit of the post is correct: the quality of instructors can dramatically increase the quality and relevance of course content in the curriculum. Stanford courses tend to produce new, interesting assignments that other schools crib. Same applies to all the top engineering departments.
My friend tried to get a CS degree at Pepperdine and the curriculum was 15 years out of date. For students who are new to the field, it's hard to figure out where to start, even if they could learn it independently.
I assume the point is that you have to implement a pure software rasterizer. It's not actually that difficult, but it feels like a lot for an undergrad who's not familiar with the space. I assume this is done as part of a class that gives you context.
For an example of a rasterizer, you can take a look at my pure JS implementation of canvas (which is roughly the same imaging model as SVG). All lines and shapes are flattened into a pure line polygon, then drawn with a scanline rasterizer. Getting the basics working is fairly easy. Handling the endless edge cases, however.... :)
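To give a sense of the core loop, here's a minimal scanline fill; a sketch, not the linked implementation. Even-odd rule, sampling at pixel centers, no anti-aliasing:

```javascript
// Minimal scanline polygon rasterizer (even-odd rule). Takes a closed
// polygon as [x, y] points and returns a height x width coverage grid
// of 0s and 1s. No anti-aliasing; samples at pixel centers.
function rasterize(points, width, height) {
  const grid = Array.from({ length: height }, () => new Uint8Array(width));
  for (let py = 0; py < height; py++) {
    const y = py + 0.5; // sample at the pixel center
    const xs = [];
    // Collect the x-intersections of every polygon edge with this scanline.
    for (let i = 0; i < points.length; i++) {
      const [x0, y0] = points[i];
      const [x1, y1] = points[(i + 1) % points.length];
      if ((y0 <= y && y1 > y) || (y1 <= y && y0 > y)) {
        xs.push(x0 + ((y - y0) / (y1 - y0)) * (x1 - x0));
      }
    }
    xs.sort((a, b) => a - b);
    // Fill between alternating pairs of crossings (even-odd rule).
    for (let i = 0; i + 1 < xs.length; i += 2) {
      const start = Math.max(0, Math.ceil(xs[i] - 0.5));
      const end = Math.min(width - 1, Math.floor(xs[i + 1] - 0.5));
      for (let px = start; px <= end; px++) grid[py][px] = 1;
    }
  }
  return grid;
}
```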
Yeah, writing a scanline rasterizer from scratch is fun. For beginners, I advocate the sample-based approach I've written about, as functional "is inside shape" can be easier to reason about for simple cases. It's harder for non-convex polygons though :)
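A sketch of such an "is inside shape" function, here using the even-odd crossing rule (one common choice, not necessarily the exact approach from the post); casting a ray to the right and counting edge crossings:

```javascript
// Even-odd point-in-polygon test: cast a ray from (px, py) to the right
// and toggle "inside" each time it crosses a polygon edge.
function insideShape(px, py, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i];
    const [xj, yj] = poly[j];
    if ((yi > py) !== (yj > py) &&
        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}
```

The crossing count makes this work even for non-convex outlines, at the cost of being less obvious to reason about than a convex "all edges on one side" test.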
Making a 2D rasterizer is fairly trivial; making a rasterizer that looks good is insanely difficult. Handling things like aliasing without overblowing it, and making it work in all possible cases, is tricky, and one spends 99% of the time addressing those tiny little details that "make one fall in love with software".
GPUs aren't really designed for 2D graphics out of the box. I've written about this before [0].
Text rendering can be tricky, but not that much trickier -- it's just the same curves at smaller scales.
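For instance, TrueType glyph outlines are quadratic Bezier curves, and evaluating one is the same arithmetic at any scale. A minimal sketch:

```javascript
// Evaluate a quadratic Bezier curve (the curve type used by TrueType
// glyph outlines) at parameter t in [0, 1], given [x, y] control points.
function quadBezier(p0, p1, p2, t) {
  const u = 1 - t;
  // B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
  return [
    u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
    u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1],
  ];
}
```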
Not sure what you mean by stencil textures. Are you talking about the NV_path_rendering approach where you stencil out the path? Yeah, that's not really a thing that people do these days.
It's not that simple, unfortunately. Hinting for TrueType fonts is a thing, and without properly aligning the resulting pixels to the grid at each size, fonts rendered from the pure curve definitions tend to look very ugly. Oversampling at e.g. 8x size is one possible option, but then other problems pop up once you downscale to display size.
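A sketch of the downscaling step in that oversampling approach, using a plain box filter; the simplest possible choice, and exactly the step where those "other problems" (ringing, blur, uneven stems) live:

```javascript
// Box-filter downsample: average each n x n block of an oversampled
// coverage grid into one output pixel. Assumes grid dimensions are
// exact multiples of n.
function downsample(grid, n) {
  const h = grid.length / n;
  const w = grid[0].length / n;
  const out = [];
  for (let y = 0; y < h; y++) {
    const row = [];
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let dy = 0; dy < n; dy++)
        for (let dx = 0; dx < n; dx++)
          sum += grid[y * n + dy][x * n + dx];
      row.push(sum / (n * n)); // average coverage of the n x n block
    }
    out.push(row);
  }
  return out;
}
```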
Fast, efficient 2D graphics using the PostScript (i.e. SVG) rendering model is still an open research problem. Most shipping implementations these days do full tessellation to triangles on CPU.
What do you mean GPUs don't support them? Most opengl tutorials I've read have people building triangles in 2d to learn fragment shaders and vertices before going 3d. Most of them also support parametric curves, so even hardware accelerated bezier curves should be possible.
That's like saying "GPUs support Master Chief". You can model Master Chief with triangles, and you can model parametric curves with triangles. But I wouldn't call it "supporting parametric curves", you're still rasterizing triangles, they're just morphed into the shape of a curve. And most practical, shipping versions of this technique would do adaptive triangulation on the CPU, since otherwise you don't have an idea of your mesh density and are either over-submitting or under-submitting triangles.
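A sketch of the adaptive part: recursively split a quadratic Bezier at t = 0.5 until each piece is flat enough, producing the polyline you'd then triangulate. The flatness metric here (control point's distance to the chord midpoint) is just one simple choice; the tolerance is what controls mesh density:

```javascript
// Adaptively flatten a quadratic Bezier into a polyline by recursive
// de Casteljau subdivision. Smaller tol => more segments.
function flattenQuad(p0, p1, p2, tol, out) {
  out = out || [p0];
  // Flatness test: how far is the control point from the chord midpoint?
  const mx = (p0[0] + p2[0]) / 2;
  const my = (p0[1] + p2[1]) / 2;
  if (Math.hypot(p1[0] - mx, p1[1] - my) <= tol) {
    out.push(p2); // flat enough: emit a straight segment
  } else {
    // Split at t = 0.5; the control-to-chord distance shrinks by 4x per
    // level, so the recursion terminates.
    const a = [(p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2];
    const b = [(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2];
    const m = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2];
    flattenQuad(p0, a, m, tol, out);
    flattenQuad(m, b, p2, tol, out);
  }
  return out;
}
```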
Loop-Blinn, similarly, is mostly a CPU-side approach and has a lot of drawbacks, but at its core it's using the pixel shader to define a curve profile.
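The core of that curve profile can be sketched in plain JS. In the standard quadratic case, the control points get texture coordinates (0,0), (1/2,0), (1,1), the rasterizer interpolates them across the control triangle, and the pixel shader evaluates f = u^2 - v, which is zero exactly on the curve and changes sign across it:

```javascript
// Sketch of the Loop-Blinn quadratic test, as the pixel shader would
// compute it. Input is a barycentric coordinate [a, b, c] within the
// control triangle; output is f = u^2 - v, zero exactly on the curve.
function loopBlinnF(bary) {
  const [a, b, c] = bary;
  // Interpolate the per-vertex texture coords (0,0), (0.5,0), (1,1).
  const u = 0.5 * b + c;
  const v = c;
  return u * u - v;
}
```

Since the curve point B(t) has barycentric coordinates ((1-t)^2, 2t(1-t), t^2), its interpolated coords come out to (u, v) = (t, t^2), so f = 0 along the whole curve.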
The parametric curves don't get transformed into triangles. This isn't tessellation or similar techniques. You aren't feeding the GPU any triangles - you're only feeding it the function that defines the parametric curve. Then, using that function for the parametric curve the GPU can calculate pixel output. Again, modern GPUs (really, most GPUs in the past decade) can support more than just triangles. These more exotic techniques don't get as much attention, since most graphics assets are still implemented with triangle meshes.
GPUs can take in way more than just triangles as input. There are particle simulations and even ray tracing implemented on GPUs nowadays. Support for parametric curves was one of the more recent additions.
That's a very... naive view of how a GPU works. Particle simulation is done in compute, and ray tracing, as implemented in RTX/DXR, is done on a soup of triangles. The core of rasterization is still done on triangles, and can't easily be done in compute. Have any references to parametric curves on GPUs? All the approaches I know of, like the recent mesh shader work, still output triangle meshes.
Again, the vast majority of use cases for GPUs is 3d vertex graphics. But they're capable of more than that. Modern GPUs are very different from early GPUs that only worked with triangles. Some of the early ones were actually ASICs, and couldn't even load different shader programs.
"In compute" just means that there aren't inputs and outputs related to the current output frame in the calculation job - the computational possibilities are the same.
Triangles aren't necessarily involved in rendering either; see e.g. how the stuff on shadertoy.com works.
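A CPU sketch of that shadertoy model: the fragment shader evaluates an implicit scene function (here a circle signed distance function) at every pixel, with no triangle geometry anywhere:

```javascript
// Signed distance to a circle: negative inside, zero on the boundary,
// positive outside.
function sdCircle(x, y, cx, cy, r) {
  return Math.hypot(x - cx, y - cy) - r;
}

// "Fragment shader" loop: evaluate the scene function once per pixel,
// the way a shadertoy-style full-screen shader does. Returns a binary
// coverage image (1 inside the circle, 0 outside).
function shade(width, height) {
  const img = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) {
      const d = sdCircle(x + 0.5, y + 0.5, width / 2, height / 2, width / 4);
      row.push(d < 0 ? 1 : 0);
    }
    img.push(row);
  }
  return img;
}
```

On a real GPU this loop is the per-fragment program and the "scene" is whatever implicit function you write; the only geometry submitted is a full-screen quad (two triangles), which carries no shape information itself.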
A Turing machine is Turing-complete, but it would be inefficient to run Linux on one. We're not talking about raw computability here, but feasibility. And still, I'm not aware of anything "running Linux on a GPU"; their scheduling engines are not designed for those sorts of workloads.
Or you can just not define a z coordinate in the vertex buffer object (at least in OpenGL). I think the vertex shader might still need to output a third coordinate, but you can always just hard-code it.
[1] https://news.ycombinator.com/item?id=14694179