Last edited on 15 Aug 2021 by 100bit.
Some details about the code in here.
It is an infinite zoomer inspired by TBL's spectacular zoomer in [Magia](https://www.pouet.net/prod.php?which=9468). Large images (up to 8192x8192) with lots of detail get converted to polar images. The bigger the polar image, the further we can zoom in. The neat thing about this conversion is that only the detail that matters while zooming towards a single point is kept, while the rest gets discarded. The original file size of some of these images is maybe 200 MB or so.
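The conversion can be sketched roughly like this. A minimal Python version: the function name and the nearest-neighbour sampling are my own assumptions; the real tool presumably filters properly and works on far larger images.

```python
import math

def to_polar(src, angles, radii):
    """Resample a square image (list of rows) into a polar image.

    Column u = angle around the centre, row v = distance from the centre.
    Detail near the zoom target survives at full resolution; detail that
    only matters far from it is discarded by the resampling.
    """
    size = len(src)
    cx = cy = (size - 1) / 2.0
    max_r = size / 2.0
    polar = []
    for v in range(radii):
        r = (v + 0.5) / radii * max_r
        row = []
        for u in range(angles):
            a = (u + 0.5) / angles * 2.0 * math.pi
            x = int(cx + r * math.cos(a))
            y = int(cy + r * math.sin(a))
            # clamp to the source image bounds
            x = min(max(x, 0), size - 1)
            y = min(max(y, 0), size - 1)
            row.append(src[y][x])
        polar.append(row)
    return polar
```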
The polar image gets mapped onto your typical tunnel table of U,R coordinates. Instead of scrolling the r-coordinate, as you would in a tunnel, it is scaled by a zoom factor. To avoid choppy movement, there is some subtexel precision on the radius and angle coordinates, allowing very smooth movement. Without the subtexel precision, slow movement would be impossible. The LUT itself was made piecewise linear to avoid reading so much table data from RAM. The effect is embedded in the c2p to save further bandwidth.
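In code the idea looks roughly like this. An 8.8 fixed-point sketch: the FRAC width, the flat LUT layout, and the two-tap radius blend are illustrative assumptions, not the demo's actual inner loop, which lives inside the c2p.

```python
FRAC = 8  # subtexel bits

def sample_polar(polar, u_fp, r_fp):
    """Fetch a polar texel with 8.8 fixed-point coordinates, blending the
    two nearest radius rows so slow zooms do not snap texel to texel."""
    rows, cols = len(polar), len(polar[0])
    u = (u_fp >> FRAC) % cols
    r_int, r_frac = r_fp >> FRAC, r_fp & ((1 << FRAC) - 1)
    a = polar[min(r_int, rows - 1)][u]
    b = polar[min(r_int + 1, rows - 1)][u]
    # linear blend on the fractional radius gives the smooth movement
    return (a * ((1 << FRAC) - r_frac) + b * r_frac) >> FRAC

def zoom_frame(polar, lut, zoom_fp):
    """lut holds one (u_fp, r_fp) pair per screen pixel; instead of
    scrolling r as in a tunnel, r is scaled by the zoom factor."""
    return [sample_polar(polar, u, (r * zoom_fp) >> FRAC) for u, r in lut]
```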
At first we tried automatically stitching together multiple smaller images, but the seams between the images always turned out blurred. Painting these images is also a huge amount of work. I seem to remember TBL saying Louie spent a lot of effort on that Magia zoomer and I'm really impressed at how long it zooms and its seamless transitions.
In the end we came up with the idea of using gigantic fractal images and we called upon Evilryu to help out with coding these in Shadertoy. He tried several kinds of fractal formulas and color combinations. The most detailed ones caused too much aliasing, and in the end the techno world you see in the demo was chosen. The images are blurred before conversion to polar to reduce some aliasing. Since so many images were made, I'm tempted to release another demo just featuring all of these variations. Lots of cool stuff.
While this was going on, Farfar modeled the mask object, which fits very well inside this technomaze imagery.
6 fractal images, rendered with a 90 degree field of view from a single point, mapped onto a cube.
I really wish it were a movieplayer as suspected above, as that would be more impressive :) I suspect the reason it looks like a movie is that the "glow" adds some MPEG-like artefacts.
The exporter takes a lightly extended glTF format. It has a few custom properties exported from Blender to describe particle systems. One very convenient thing about the exporter is that it works on a 24-bit original scene and collects all textures, overlays and shadetables into a common palette that everything gets remapped to. The palette generator tries to make sure there are colors for transparent effects such as particles and glow. Without this automation it is very painful to manage the palette for Amiga 3d scenes.
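A minimal sketch of such a common-palette pass: median cut plus nearest-colour remap. The real generator also reserves entries for transparency and glow, which this toy version ignores, and the function names are mine.

```python
def channel_range(box, i):
    vals = [c[i] for c in box]
    return max(vals) - min(vals)

def median_cut(colors, n):
    """Reduce all collected (r, g, b) colors to at most n palette entries."""
    boxes = [sorted(set(colors))]
    while len(boxes) < n:
        # split the box with the widest colour spread
        box = max(boxes, key=lambda b: max(channel_range(b, i) for i in range(3)))
        if len(box) <= 1:
            break
        axis = max(range(3), key=lambda i: channel_range(box, i))
        box.sort(key=lambda c: c[axis])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # each palette entry is the average colour of its box
    return [tuple(sum(ch) // len(b) for ch in zip(*b)) for b in boxes]

def remap(pixels, palette):
    """Remap every pixel of every asset to its nearest shared-palette index."""
    def nearest(c):
        return min(range(len(palette)),
                   key=lambda i: sum((palette[i][k] - c[k]) ** 2 for k in range(3)))
    return [nearest(c) for c in pixels]
```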
- Node animation and hierarchy
- Shape morphs
- Affine texturemapping
- Perspective correct texturemapping (unused)
- "Normalmap" + Texture (unused)
The "normalmap" is actually a cylinder map with (angle,y) components. So just adding an constant to this angle will make a light fly around the object on a fixed axis. The angle-constant is calculated using:
It is not going to please the physics professors, but it is minimal overhead to get some dynamic-looking lighting on Amiga. I suggest consulting the papers of Larusse, Kippenes et al for more on this technique.
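Since the actual formula isn't shown above, here is a hedged sketch of what cylinder-map lighting of this kind can look like. The 8-bit angle wrap and the shade-table mapping are my assumptions, not the demo's real code.

```python
def light_texel(angle8, light_angle8, shade_levels=16):
    """angle8: the texel's cylinder-map angle, 0..255 around the model's axis.
    Adding a per-frame constant to every angle makes the light orbit the
    object on a fixed axis for the cost of one 8-bit add per texel."""
    # signed angular distance between texel normal and light direction
    d = (angle8 - light_angle8) & 0xFF
    if d > 128:
        d = 256 - d
    # map 0 (facing the light) .. 128 (facing away) onto shade table rows
    return (d * (shade_levels - 1)) // 128
```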
- Motion: turbulence field, velocity, damping ..
- Compiled sprites
- Colored particles that can blend against each other in 256 colors
- Export time clustering to fake more particles
The particle images are pieces of code that draw the particle. This way no extra cycles are spent on blending fully transparent areas within the particle image. Whether this gave any performance advantage is yet to be measured; however, I always wanted to try it, so I left it in there.
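The compiled-particle idea, sketched in Python. On the Amiga the "ops" would be generated machine code and the plot would blend rather than overwrite; this just shows how transparent pixels vanish from the draw loop entirely.

```python
def compile_sprite(image, transparent=0):
    """'Compile' a particle image into a flat list of plot operations.
    Fully transparent pixels never appear in the list, so drawing spends
    no cycles skipping them."""
    ops = [(x, y, c)
           for y, row in enumerate(image)
           for x, c in enumerate(row)
           if c != transparent]

    def draw(frame, px, py):
        for x, y, c in ops:
            frame[py + y][px + x] = c  # real code would blend, not overwrite
    return draw
```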
Antialiased polygons are quite rare on the Amiga scene. This AA method works by first detecting the silhouette of a mesh while backface culling. Edges that are shared between a backface-culled triangle and a visible triangle make up the silhouette edge set.
As these edges are encountered during rendering, the coverage data and background of the edge are recorded before the triangle is drawn. This data is then blended back in after the triangle has been drawn. Sometimes this leads to bleedthrough artefacts, but these are cleverly concealed by blinking, and the viewer's attention is trivially diverted by the MPEG-artefact-like "glow".
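The silhouette test itself is cheap because the backface results are already known from culling; a sketch:

```python
def silhouette_edges(triangles, facing):
    """triangles: list of (i0, i1, i2) vertex-index tuples.
    facing: parallel list of booleans from the backface test.
    An edge is on the silhouette when it is shared by one front-facing
    and one back-facing triangle."""
    edge_facing = {}
    for tri, front in zip(triangles, facing):
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(a, b), max(a, b))  # direction-independent edge key
            edge_facing.setdefault(key, set()).add(front)
    return {e for e, f in edge_facing.items() if f == {True, False}}
```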
The glow is not limited to glowing towards light; it also glows toward darkness. The image brightness gets sampled on a 16x16 interval grid. This gets linearly interpolated across the surface and sent through Tone-Loc mapping before being written to the surface. This glow has the property that it looks MPEG-ish, making people think they are watching a Video-CD thing in 1998.
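A sketch of the coarse-grid glow sampling. The 16-pixel grid step and integer bilinear weights are assumptions, and the actual tone curve is not shown.

```python
def grid_brightness(image, step=16):
    """Sample image brightness on a coarse grid (every 16th pixel here)."""
    return [[image[y][x] for x in range(0, len(image[0]), step)]
            for y in range(0, len(image), step)]

def bilerp(grid, x, y, step=16):
    """Bilinearly interpolate the coarse grid back to per-pixel values;
    the result would then go through a tone curve before being written."""
    gx, fx = divmod(x, step)
    gy, fy = divmod(y, step)
    gx1 = min(gx + 1, len(grid[0]) - 1)
    gy1 = min(gy + 1, len(grid) - 1)
    top = grid[gy][gx] * (step - fx) + grid[gy][gx1] * fx
    bot = grid[gy1][gx] * (step - fx) + grid[gy1][gx1] * fx
    return (top * (step - fy) + bot * fy) // (step * step)
```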
FPU temporary registers:
The demo changes the FPU rounding/internal precision mode so that 32-bit values can be stored in FPU registers without getting corrupted. This way one can store addresses and integers in tmp FPU regs instead of spilling them to RAM.
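The reason this works: once rounding/precision is pinned, the FPU's mantissa is wider than 32 bits, so any 32-bit address or integer survives the float conversion unchanged. A Python model of the roundtrip (Python floats are 64-bit with a 53-bit mantissa, which stands in for the configured 68060 registers here):

```python
import struct

def roundtrip_through_fpu_reg(value):
    """Model parking a 32-bit integer in a floating-point register:
    convert to a 64-bit float and back. Because the mantissa (53 bits)
    is wider than 32 bits, no 32-bit value is corrupted."""
    as_float = float(value)               # int -> FP register
    packed = struct.pack('>d', as_float)  # the bits that sit in the register
    return int(struct.unpack('>d', packed)[0])  # FP register -> int
```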
The music routine approximates an original 16-bit signal in chunks of 512 samples using normalized_8_bit_signal[i] x amiga_channel_volume[chunk]. This gives more bit resolution in low-volume parts of the track, making the output better than pure 8-bit and without the volume loss of the traditional 14-bit technique. If the best approximation would be a volume of 31.5, then the amiga_channel_volume of channel 0 is set to 31 and that of channel 1 to 32, etc., hopefully giving a net multiplier of 31.5.
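A sketch of the per-chunk approximation. Samples here are assumed to already be scaled to the hardware's output range, and the function names are mine, not the replayer's.

```python
def approximate_chunk(samples):
    """Pick the smallest per-chunk volume that still covers the loudest
    sample, then requantize the chunk to signed 8 bits at that volume.
    Quiet chunks get a small volume and thus finer effective resolution."""
    peak = max(abs(s) for s in samples) or 1
    vol = min(64, -(-peak // 127))  # ceil(peak / 127), clamped to Amiga max
    s8 = [max(-128, min(127, round(s / vol))) for s in samples]
    return vol, s8

def split_volume(vol_times_2):
    """Half-step volumes via two channels playing the same 8-bit data:
    a net multiplier of 31.5 comes from volumes 31 and 32 on the pair."""
    return vol_times_2 // 2, vol_times_2 - vol_times_2 // 2
```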
For more about this technique consult Kippenes et al.