by seveibar on 6/5/25, 2:05 AM with 71 comments
by badmintonbaseba on 6/5/25, 10:18 AM
Possibly you do the subdivisions along the edges uniformly in the target space, and map them to uniform subdivisions in the source space, but that's not correct.
edit:
Comparison of the article's and the correct perspective transform:
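The distinction can be sketched numerically (my own illustration, not code from the article). For a single edge whose endpoints sit at assumed depths w0 and w1 and carry source (texture) coordinates u0 and u1, interpolating uniformly in target space and mapping that straight to the source space gives the affine result; the perspective-correct result interpolates u/w and 1/w linearly and then divides:

```javascript
// Affine: uniform steps in target space map to uniform steps in
// source space. This is the incorrect shortcut described above.
function affineU(t, u0, u1) {
  return (1 - t) * u0 + t * u1;
}

// Perspective-correct: interpolate u/w and 1/w linearly in screen
// space, then divide. Uniform t no longer gives uniform u.
function perspectiveU(t, u0, u1, w0, w1) {
  const num = (1 - t) * (u0 / w0) + t * (u1 / w1);
  const den = (1 - t) / w0 + t / w1;
  return num / den;
}

// Midpoint of the screen-space edge, with endpoint depths 1 and 3:
console.log(affineU(0.5, 0, 1));            // 0.5
console.log(perspectiveU(0.5, 0, 1, 1, 3)); // 0.25 -- pulled toward the near end
```

At the screen-space midpoint the correct transform samples the source at 0.25, not 0.5, which is exactly why uniform-to-uniform subdivision looks wrong.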
by gyf304 on 6/5/25, 5:29 AM
More reading: https://retrocomputing.stackexchange.com/questions/5019/why-...
by JKCalhoun on 6/5/25, 2:29 AM
A friend was writing a flight simulator from scratch (using Foley and van Dam as reference for all the math involved). A classic perspective problem might be a runway.
Imagine a regularly spaced dashed line down the runway. If you get your 3D renderer to the stage that you can texture quads with a bitmap, it might seem like a simple thing to have a large rectangle for the runway, a bitmap with a dashed line down the center for the texture.
But the texture mapping will not be perspective-correct (well, not without a lot of complicated math involved).
Foley and van Dam say: break the runway into a dozen or so "short" runways laid end to end (subdivide). The bitmap texture for each is just a single short stripe. Now, because you have a bunch of these quads end to end, it is as if there is a longer runway with a series of dashed lines. And while each individual piece of the runway (with its single stripe) is not in itself truly perspective-correct, each quad as it gets farther from you nonetheless accounts for perspective — it is smaller, more foreshortened.
by jesse__ on 6/5/25, 5:17 AM
Meanwhile… drawing 512 subdivisions for a single textured quad.
It's a cute trick, certainly, but ask this thing to draw anything more than a couple thousand elements and I bet it's going to roll over very quickly.
Just use WebGL, where perspective-correct texture mapping is built into the hardware.
by exabrial on 6/5/25, 5:52 PM
"Everything else" would be pluggable execution runtimes distributed as browser plugins — [WASM engine, JVM engine, SPIR-V engine, BEAM engine, etc.] — with SVG as the only display tech. The last thing we'd define is an interrupt and event model for system and user interactions.
by shaftway on 6/5/25, 11:27 PM
I was on the original SVG team at Adobe back in '00 and built some of the first public demos that used the technology. This kind of 3D work was some of the first stuff I tried to do, and I found it similarly limited by the available transforms. I had some workarounds of my own.
One demo had a 3D stack of floors in a building for a map. It used an isometric projection (one where parallel lines never converge) and worked pretty well. That is pretty easy and can be accomplished with rotation and scaling transforms.
The other was a 3D molecule viewer where you could click and drag to rotate the structure. This one basically used SVG as a canvas with x and y coordinates for drawing. All of the 3D movement was done in JavaScript, computing x and y coordinates and updating shapes in the SVG DOM. Styles were used to handle single / double / triple bonds, and separate groups were used to layer everything for legibility.
by laszlokorte on 6/5/25, 7:27 AM
by rollulus on 6/5/25, 4:51 AM
by iamleppert on 6/5/25, 2:21 PM
by unwind on 6/5/25, 7:12 AM
One possibly uncalled-for piece of feedback: is that USB-C connection finished, and does it comply with the detection-resistor requirements for the CC pins? It seemed very bare; I was expecting an Rd network so the upstream host can identify the device. Sorry if I'm missing the obvious — I'm not an electronics engineer.
See [1] for instance.
[1]: https://medium.com/@leung.benson/how-to-design-a-proper-usb-...
by JKCalhoun on 6/5/25, 2:32 AM
by weinzierl on 6/5/25, 9:03 AM
by dedicate on 6/5/25, 4:12 AM
by m-a-t-t-i on 6/5/25, 11:42 AM
by itishappy on 6/5/25, 2:04 PM
by looneysquash on 6/5/25, 3:55 PM
by rixed on 6/5/25, 4:58 PM
by leptons on 6/5/25, 6:49 PM
Why did you feel you had to do this with SVG?
by stuaxo on 6/5/25, 9:35 AM
by est on 6/5/25, 6:40 AM
by ndgold on 6/5/25, 3:58 AM