Archive: Some more wishes...


5th April 2002 00:56 UTC

Some more wishes...
Note: Some of these will probably only ever stay wishes =(

-RenderAPE - 3D : (2 wishes)
------------The ability for users to submit 3D "object presets" to the Winamp site, under Plugins. (Of course, this would require users being able to MAKE 3D objects)
------------A textbox for movement, with variables d (distance, affects size), x, y, z (for location movement), rx, ry, rz (rotation around those axes), and, if possible, Red, Green, and Blue.

-Fast Brightness - let the affected color spectrum be editable

-I heard of Trilinear Filtering... wait, never mind. Bilinear Filtering is enough =)

-Effect Lists - 2 textboxes - one to set how many frames the list runs for, one to set how many frames until the list runs again.

-AVS presets having presets of their own (alternatives - click somewhere and the preset changes to a desired alternative... a preset of a preset could also be named. This way remixes won't clutter the AVS list too much) (One of the ones that is less likely to come true)

-Let AVS dig 2 folder levels deep in the right-click menu (many packs from the same user could be turned into 1 folder. More space, less clutter)

-Daily/weekly presets from the staff. Who knows who's better at making presets, the staff or the non-staff :D

-Monthly/Bi-weekly pack of the best presets in the AVS community.

-I know it's been mentioned before, but please, an Undo (and if possible, Redo) button. Or, to narrow it down, at least an Undo Delete button.

-For the wishes in this forum to come true. Most/some of them are for a better AVS community - if they're made real, we'll all (probably) see much better presets. Who knows how good AVS can get... the edge of the universe is the limit.


5th April 2002 02:10 UTC

Will add them to the list in the next update...

Oh and for the tri-linear filtering, here's the scoop... (just read it slowly, I'll try to be clear :)). I've made a short image to illustrate the concepts:

http://acko.net/dump/trilinear.png

Traditionally, real-time 3D on computers works by defining an object through a mesh of small triangles (a triangle is always flat) that compose the object's surface. To make them less boring, a texture (typically an image) is stretched over this surface.

Now the texture obviously has a finite size, and when the camera comes too close to the surface, the individual texels (pixels of the texture) show as big squares on the screen. To counter this effect, bi-linear filtering was introduced. Instead of drawing the nearest texel, we blend the 4 surrounding texels with correct weights, so that you get a smooth surface. That fixes big ugly squares. The name comes from the fact that the color is blended linearly and in 2 dimensions (bi).
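
In rough C (just a sketch for illustration - the grayscale texture layout and the function name are made up), "blend the 4 surrounding texels with correct weights" means something like this:

#include <math.h>

/* Sketch only: a hypothetical grayscale texture stored as one float per
   texel, w texels wide and h texels high, sampled at the non-integer
   position (u, v). Assumes 0 <= u <= w-1 and 0 <= v <= h-1. */
float bilinear_sample(const float *tex, int w, int h, float u, float v)
{
    int   x  = (int)floorf(u),  y  = (int)floorf(v);
    float fx = u - x,           fy = v - y;          /* fractional parts */

    /* the 4 surrounding texels (clamped at the right/bottom edge) */
    int x1 = (x + 1 < w) ? x + 1 : w - 1;
    int y1 = (y + 1 < h) ? y + 1 : h - 1;
    float c00 = tex[y  * w + x], c10 = tex[y  * w + x1];
    float c01 = tex[y1 * w + x], c11 = tex[y1 * w + x1];

    /* blend linearly in x, then linearly in y -> "bi"-linear */
    float top    = c00 * (1 - fx) + c10 * fx;
    float bottom = c01 * (1 - fx) + c11 * fx;
    return top * (1 - fy) + bottom * fy;
}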

On top of looking like crap from up close, the texture also looked like crap from a large distance. Visualise the texture 'unstretched' again off the surface of the triangle. When the triangle gets smaller, the distance between the texels that actually get drawn increases. Suppose you had an alternating white-black texture, like this:

[ 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1 ]


When the texture is too small, so that only the even texels get picked to be drawn, we get:
[    0,    0,    0,    0,    0    ]


which is something totally different. And when the scale factor is not a nice whole number, even more severe distortions occur.
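
To make that concrete, a tiny sketch (again, just my illustration): nearest-neighbour sampling of that alternating texture at half size picks every other texel, which here happen to be exactly the zeros:

#include <stdio.h>

int main(void)
{
    float tex[11] = { 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1 };

    /* nearest-neighbour at half size: every drawn pixel reads every
       other texel, which here are exactly the zeros */
    for (int i = 0; i < 5; i++)
        printf("%g ", tex[2 * i + 1]);   /* prints: 0 0 0 0 0 */
    printf("\n");
    return 0;
}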

To counter this effect, MIP-mapping was introduced. This means that aside from the original texture, smaller copies of it are also stored. Suppose we have this texture:

[ 1, 0, 1, 1, 1, 0, 0, 1 ]


We take the average of every 2 numbers, like this (and repeat the step):

[  0.5,    1,  0.5,  0.5 ]


[    0.75,        0.5    ]


And when the triangle moves too far from the camera, we read the texels out of the smaller texture instead of the larger one. The distortions are gone, and like the human eye, we will see the average color if we move far away.
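
As a minimal sketch of how those levels are built (1D here to keep it short - real mip maps average 2x2 blocks of a 2D texture), it's just repeated pairwise averaging:

#include <stdio.h>

int main(void)
{
    float level0[8] = { 1, 0, 1, 1, 1, 0, 0, 1 };
    float level1[4], level2[2];

    /* each mip level is half the size of the previous one,
       made by averaging pairs of texels */
    for (int i = 0; i < 4; i++)
        level1[i] = (level0[2 * i] + level0[2 * i + 1]) / 2;  /* 0.5, 1, 0.5, 0.5 */
    for (int i = 0; i < 2; i++)
        level2[i] = (level1[2 * i] + level1[2 * i + 1]) / 2;  /* 0.75, 0.5 */

    printf("%g %g\n", level2[0], level2[1]);
    return 0;
}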

Though MIP mapping was good (especially when combined with bi-linear filtering), it had one flaw: when you take a long polygon that is almost parallel to the viewing direction, you can visibly see where the renderer switches to a lower MIP level. To counter this, we add another layer of blending: on top of blending between the texels in one image, we blend between the texels in 2 separate MIP maps (and thus, in 3 directions). Now, the distortions in the distance are gone and separate mip maps blend smoothly too.
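
As a rough sketch (assuming the bilinear_sample from above, plus a per-pixel lod_frac value that says how far between two mip levels this pixel is), the extra blend is just one more linear interpolation:

/* Sketch only: mip0 is the larger level, mip1 the next (half-size) one.
   lod_frac (0..1) says how far between the two levels this pixel is;
   in a real renderer it comes from the polygon's depth/scale. */
float trilinear_sample(const float *mip0, int w0, int h0,
                       const float *mip1, int w1, int h1,
                       float u, float v, float lod_frac)
{
    float a = bilinear_sample(mip0, w0, h0, u, v);
    float b = bilinear_sample(mip1, w1, h1, u / 2, v / 2); /* half-size level */
    return a * (1 - lod_frac) + b * lod_frac;
}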


Now... all that wasn't really necessary, but it has taught you something about how 3D and texture mapping work :). So when AVS does bilinear filtering, it simply means that when a movement specifies the coordinates (100.4, 105.1), AVS will not just take pixel (100, 105), which is nearest, but will rather blend the pixels (100, 105), (101, 105), (100, 106), (101, 106) with correct weights. There is no need for MIP mapping, so no need for trilinear filtering (since there is no depth information anyway). Also, MIP maps are normally calculated once when the program starts (it's slow, high-quality resizing), so recalculating them every frame would be far too slow for real-time animation like AVS.
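
To make those "correct weights" concrete (my own numbers for this example): the fractional parts of (100.4, 105.1) are 0.4 and 0.1, so the blend is

pixel (100, 105): (1 - 0.4) * (1 - 0.1) = 0.54
pixel (101, 105): 0.4 * (1 - 0.1) = 0.36
pixel (100, 106): (1 - 0.4) * 0.1 = 0.06
pixel (101, 106): 0.4 * 0.1 = 0.04

and the four weights sum to 1.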

5th April 2002 21:14 UTC

Hmm, just remembered one (that I think is) important wish :

Instead of LineWidth in pixels, use % of screen size: this way, presets which critically use an edited LineWidth won't change much across different AVS window resolutions.
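
For example (numbers just for illustration): a LineWidth of 1% would come out as 4 pixels in a 400-pixel-high AVS window and 8 pixels in an 800-pixel-high one, so the preset keeps the same proportions at both sizes.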


5th April 2002 22:08 UTC

So would tri-linear filtering be possible in AVS, or would it slow things down too much? From what I got from your article, tri-linear would make the texture much clearer and less glitchy. What about quad-linear filtering? But of course nic01 has a good point, bi-linear is enough...


6th April 2002 01:21 UTC

Ouch... read my explanation again. Or maybe I'll shorten it and leave out the irrelevant stuff:

Bi-linear filtering means that, when a pixel with non-integer coordinates is needed, the 4 surrounding pixels are blended with correct weights, so that you don't see the individual pixels. This makes movements/transformations a lot more accurate, because small movements are not lost.

Mip-mapping means that you calculate high-quality smaller versions of the image beforehand, and when the distance between individual pixels gets too large, you read out of a smaller mip-map level to avoid distortion and aliasing of details. This only works when depth information is available, e.g. in 3D geometry.

Tri-linear filtering means that you have bi-linear filtering of texels, and that you blend between mip-map levels as well... so instead of 'switching' to a lower mip-map level, you blend from one into the other. The extra level of blending shouldn't have been called tri, because it makes it sound as if the 3 steps are related.

There's no such thing as quad-linear filtering, simply because there is no other 'dimension' to blend in:
there are 2 for the image itself, and 1 for the separate mip-levels...


Here's a good page with images that explain mip mapping and trilinear filtering... the actual text is Cyrillic (Russian, I think) though.

http://www.ixbt.com/video/3linear.html

Just remember that none of this makes sense in AVS, because movements do not have depth information and AVS is simply a 2D tool.