Archive: ugly bmp->megabuf method


30th March 2005 06:50 UTC

ugly bmp->megabuf method
Until we get an APE for loading a bitmap straight into gmegabuf, here is a 'dirty' approach. Basically, the exe included in the zip will convert *some* .bmps into a .gvm containing gmegabuf assigns for the width, height and the RGB data.

I call this a dirty approach for several reasons. Firstly (and most importantly), a .gvm is a lot bigger than the bmp it came from... and we don't store any more data... actually we store a bit less. Secondly, it can only handle some bitmaps, as I am far too lazy to handle all the millions of special cases and know that PAK-9 is working on a much better APE solution.

24-bit square bitmaps with even width and height are all it's supposed to be able to handle.

I won't support this or promise to develop it further; I'm just posting it because I feel a bit lame for having tools that do this stuff easily... which others may not have, simply because they don't know about bitmap files or C++ or whatever...

Included are a sample .gvm, .bmp and .avs, as well as bmpconv.exe.

It gives you help if you just type bmpconv... for those too lazy to do that, it works like this:

bmpconv mybmp.bmp mygvm.gvm <n>

where <n> is the offset in the gmegabuf at which the data will start. gmegabuf(offset) is the width, gmegabuf(offset+1) is the height, and the RGB data is stored after that.
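for the curious, the conversion is simple enough to sketch in a few lines of Python. this is not bmpconv's actual source, just a guess at the same idea; the 0..1 colour scaling and the exact assignment format are my assumptions:

```python
import struct

def bmp_to_gvm(data: bytes, offset: int) -> str:
    """Parse a 24-bit uncompressed BMP and emit gmegabuf assignments.

    Layout as described above: gmegabuf(offset) = width,
    gmegabuf(offset+1) = height, then one 0..1 value per channel
    (r, g, b interleaved), top row first. (Scaling is an assumption.)
    """
    if data[0:2] != b"BM":
        raise ValueError("not a BMP file")
    pixel_start, = struct.unpack_from("<I", data, 10)
    width, height = struct.unpack_from("<ii", data, 18)
    bpp, = struct.unpack_from("<H", data, 28)
    if bpp != 24:
        raise ValueError("only 24-bit bitmaps are handled")
    row_stride = (width * 3 + 3) & ~3           # BMP rows pad to 4 bytes
    lines = [f"gmegabuf({offset})={width};",
             f"gmegabuf({offset + 1})={height};"]
    i = offset + 2
    for y in range(height):                     # BMP rows are stored bottom-up
        row = pixel_start + (height - 1 - y) * row_stride
        for x in range(width):
            b, g, r = data[row + x * 3: row + x * 3 + 3]  # pixels are BGR
            for c in (r, g, b):
                lines.append(f"gmegabuf({i})={c / 255:.6f};")
                i += 1
    return "\n".join(lines)
```

this also shows why the output balloons: every byte of pixel data becomes a whole line of evallib code.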

n.b. please don't base anything off of the example preset. it's horribly unoptimised and is probably buggy.


30th March 2005 12:10 UTC

Since we're posting dirty tools

Converts an interleaved RGB .raw file into a series of gmegabuf assignments in a file for use with Jheriko's APE.

Like J's: quick, dirty and unsupported :D
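again, not the real tool, just a rough Python equivalent. a .raw file has no header, so width and height have to be supplied; that the output layout matches the bmp converter's is an assumption:

```python
def raw_to_gvm(data: bytes, width: int, height: int, offset: int) -> str:
    """Emit gmegabuf assignments for an interleaved RGB .raw image.

    Assumed layout: width, height, then r, g, b per pixel scaled to 0..1.
    """
    assert len(data) == width * height * 3, "size does not match dimensions"
    lines = [f"gmegabuf({offset})={width};",
             f"gmegabuf({offset + 1})={height};"]
    for i, byte in enumerate(data):             # bytes come r, g, b, r, g, b...
        lines.append(f"gmegabuf({offset + 2 + i})={byte / 255:.6f};")
    return "\n".join(lines)
```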


30th March 2005 12:43 UTC

i know i already posted one... but here is a second example (with source bitmaps) which implements general-case normal mapping.

its not particularly well commented, but the basic method is to do normal lighting, i.e. light direction dot surface normal = brightness (clamped between 0 and 1), with an attenuation factor derived from the distance (i'll assume you can do that :P). then, to apply the normal map, each local normal is dot-producted with the relevant components of the 'axis vectors', which are trivially (1,0,0), (0,1,0), (0,0,1) in a non-rotating static world... but these may be moving and rotating (when the object the normal map is attached to moves), so you need to recalculate the 'world normal' like this, since the texture only stores a 'local normal'.
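here is roughly what that looks like in Python. a sketch only: the function names are hypothetical and the attenuation formula is my own choice, the preset's real code is in the zip:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(local_normal, axis_x, axis_y, axis_z, light_pos, point, falloff=1.0):
    """Per-point normal-mapped diffuse lighting as described above.

    The texture stores a 'local normal'; the world normal is rebuilt
    from the (possibly rotating) axis vectors of the surface.
    """
    # world normal = sum of local components along each axis vector
    n = tuple(local_normal[0] * axis_x[i]
              + local_normal[1] * axis_y[i]
              + local_normal[2] * axis_z[i] for i in range(3))
    to_light = tuple(light_pos[i] - point[i] for i in range(3))
    dist = math.sqrt(dot(to_light, to_light))
    l = tuple(c / dist for c in to_light)
    brightness = max(0.0, min(1.0, dot(l, n)))   # clamp to [0, 1]
    atten = 1.0 / (1.0 + falloff * dist * dist)  # one possible attenuation
    return brightness * atten
```

with the identity axes this reduces to plain diffuse lighting; rotate the axis vectors with the object and the normal map follows it around.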

anyway... look at the code.. it probably makes more sense than my crappy explanation.

note, i have found a few bugs in my APE. one that seems to cause most crashes can be totally avoided by not using more than one instance, so in this preset i stuck the two .gvms together so i could load everything with a single instance. am working on the fix and will release asap. this zip contains a 1.01 with //$notrans support so it works better with the faster eeltrans.ape, plus a few minor bugfixes (typo in help gone, filename length stored correctly)


the attachment is a .rar inside a .zip because that way it just fits under the max size


30th March 2005 14:15 UTC

Originally posted by jheriko
this zip contains a 1.01 with //$notrans support so it works better with the faster eeltrans.ape
you forgot to mention that this also needs a new eeltrans.ape, which you can get from my deviantart page

30th March 2005 22:42 UTC

:)

Use eeltrans; it turns evallib into 'not a crap language'.

[/advert]


14th April 2005 12:53 UTC

looking forward to an APE writing the actual rendered picture into megabuf.
That would enable reflections done with a superscope.
Btw, I'm currently working on a superscope which has quite realistic lighting and whose own shadows affect the lighting of the superscope.


17th April 2005 05:30 UTC

you mean reflections? those'll be dead slow
self-shadowing... sounds interesting
why not use triangles though? :)
they're faster and look much more solid


17th April 2005 09:53 UTC

yes, i mean reflections, but only one-level reflections (direct to environment).
my code will support the triangle idea, which will surely speed up the whole thing.
but unfortunately the global var manager doesn't seem to work for me.


17th April 2005 11:18 UTC

jheriko just recently released a new version of his globmgr.ape in another thread. Perhaps it will work for you now?!


17th April 2005 13:43 UTC

thanks ^..^ !
from Germany?
texturing done! :)


17th April 2005 14:12 UTC

yep, I am! what makes you think that?


17th April 2005 22:07 UTC

storing the current framebuffer into the megabuf would probably be quite slow. as there is no direct access, you would need to recompile and execute a huge block of code every frame. notice how long a large load takes, for instance.

btw, doing shadows using a megabuf texture on triangles and raytraces is feasible (for small scenes, or in very low quality if you like real time). i'm currently working on getting the obj data and bmp data to work together so that i can start making some nicely shadowed, normal-, spec- and gloss-mapped presets using some complex models. but i'm lazy and it's taking ages.

good luck with making that preset.


18th April 2005 13:20 UTC

The problem with shadowing is speed, if you take the raytrace method. Jheriko and I had a discussion about it over the phone; personally I think it's only really practical if you reverse it and do a sort of 'shadow volumes' approach, i.e. just fire rays through the vertices of the shadowed object and use a few triangles to represent the shadow.

It's more complex to implement but faster, because you're doing less raytracing and just rendering a few triangles for the shadow. If you imagine casting a shadow from a triangle onto a plane, for example, it's just three ray-plane intersections and rendering a single triangle for the shadow; pretty cheap.
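a minimal sketch of that projection, assuming a point light and an infinite plane (Python, hypothetical names):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_shadow(light, verts, plane_point, plane_normal):
    """Cast each vertex from a point light onto a plane: one ray-plane
    intersection per vertex, giving the corners of the shadow triangle."""
    out = []
    d = dot(plane_normal, plane_point)          # plane: dot(n, p) = d
    for v in verts:
        ray = tuple(v[i] - light[i] for i in range(3))
        denom = dot(plane_normal, ray)
        if denom == 0:
            raise ValueError("ray is parallel to the plane")
        t = (d - dot(plane_normal, light)) / denom
        out.append(tuple(light[i] + t * ray[i] for i in range(3)))
    return out
```

for a single triangle that really is just three intersections, then you draw one triangle with whatever dark colour/blend you like.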


18th April 2005 16:10 UTC

one thing to note is that it's impossible to do 'real' shadows this way, since instead of blocking lighting you are just rendering a black blob... which isn't what a shadow is. for the simpler case of one light source it would be indistinguishable, but for two or three light sources i suspect it would start looking really crappy.


18th April 2005 17:53 UTC

PAK's idea isn't that wrong.
you can make the effort to influence the shadow blobs with the other light sources.
but shadowing on non-environment surfaces needs raytracing.
jheriko: i think before 2006 :(
quite realistic shadowing won't be done with more than 1 light source (speed).
one-level reflections are a lot faster than real shadowing.
two-level reflections will be as slow as real shadowing with maximum geometric detail.

so keep fighting the code
:up:


18th April 2005 18:11 UTC

Originally posted by jheriko
one thing to note is that it's impossible to do 'real' shadows this way, since instead of blocking lighting you are just rendering a black blob... which isn't what a shadow is. for the simpler case of one light source it would be indistinguishable, but for two or three light sources i suspect it would start looking really crappy.
Well, that's when the advantages of AVS come in, because you can render the shadow using 50/50 blend or some other blend mode for a nice fake translucency effect.

18th April 2005 18:45 UTC

btw, realistic shadowing with more than one light source will be done before 2006. ;)

i understand that you can fake the translucency etc., but once you have more than one light source it looks very unnatural, especially to lamers like me who spend so much time making lighting 'demos' that every error glares out.

i don't know how you could make a true reflection consistently faster than a true shadow in all cases (especially the simpler ones we are more likely to use), since the number of raytraces for a reflection of any level is a minimum of two per point per triangle per reflective triangle, whereas for a shadow it is a maximum of one per point per triangle per triangle per light. So what I'm saying is that with two or three lights, a true shadowing algorithm could well be faster than a true reflection algorithm on the same scene if all faces are reflective.

The speed advantage of reflection, I suppose, would be that you don't always have to have a reflective surface, whereas every triangle will need to cast a shadow.

Plus it's not computationally expensive to make these shadows soft rather than hard; in fact it's pretty close to being free. this is because you can use the standard-ish 'sum of angles = 360' method to check whether a point is in a polygon, and reuse the same value in the other code path to determine how far from the polygon the point is when it isn't.

I'm not saying it's particularly efficient, but it is quite plausible.
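a rough Python sketch of the angle-sum idea (the signed-angle variant; the mapping from angle shortfall to shadow strength is an arbitrary choice of mine):

```python
import math

def angle_sum(point, poly):
    """Signed sum of the angles subtended at `point` by each polygon edge:
    ~(+/-)2*pi when the point is inside the polygon, ~0 when outside."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i][0] - point[0], poly[i][1] - point[1]
        bx, by = poly[(i + 1) % n][0] - point[0], poly[(i + 1) % n][1] - point[1]
        # signed angle between the two corner vectors
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total

def shadow_strength(point, poly, softness=1.0):
    """1.0 inside the polygon, fading out with the angle shortfall --
    the 'nearly free' soft edge, since the sum is computed anyway."""
    short = 2 * math.pi - abs(angle_sum(point, poly))
    return max(0.0, 1.0 - softness * short)
```

the inside/outside test and the softness factor come from the same loop, which is why softening costs almost nothing extra.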


18th April 2005 18:59 UTC

the reason why i think one-level reflections are a lot faster
is that for shadowing you must check whether the point is hit by a shadow ray.
for reflections we assume the camera ray is hitting every point we see.
so we take the vector from the camera to the point, transform that vector with the normal of that point, and calculate where the ray hits the environment.
two-level reflections must additionally check whether the ray hits non-environment geometry and then transform the vector again with the normal of that point.
the shadow check must be done (displayed points of polygon) * (number of points of the geometry) times;


18th April 2005 19:25 UTC

i've reread jheriko's last post
and have a bit to add to my last post.
one-level reflections do need a lit environment (which is done fast) for realistic purposes, and a reflection raytrace is done faster than a shadowing one (for reflections, ca. 10 operations).


18th April 2005 20:12 UTC

Note that the method I suggested also produces higher-resolution shadows, because you aren't limited by the 'pixels' in your 'texture', which, let's be honest, is pretty low.


18th April 2005 20:38 UTC

what i was trying to get at, asd5a, is that for a reflection you trace from the camera to some point, checking for a collision so we know when the reflection is blocked, then reflect the ray and see what it hits in the world. for a shadow all i need to do is raytrace from the light to the point and see if it collides, which is the first step of the reflection algorithm. my reflection algorithm could be a crap one though... the assumption i made is that it isn't. i also assume that you are using triangles, or quads that are subdivided, or some superscope parallel which would operate similarly, rather than a DM, which works very differently and in which reflection would indeed be a lot more practical.


19th April 2005 07:21 UTC

for one-level reflections (after reflection the ray definitely hits only environment; 'environment' being e.g. the cube your things are flying in, which isn't itself reflective) we don't have to check whether the ray from the camera is blocked: we assume that every non-environment surface point that is facing towards us is reflecting. after the transformation of the ray vector i directly check where it hits the environmental cube.
jheriko, you are (in some ways) wrong: the first check isn't necessary.
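the 'transformation of the ray vector' here is just the standard mirror formula; a tiny Python sketch (assuming a unit-length normal):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror the incoming direction d about the unit normal n:
    r = d - 2*(d.n)*n."""
    k = 2 * dot(d, n)
    return tuple(d[i] - k * n[i] for i in range(3))
```

the reflected direction then gets intersected with the environment cube to find which wall pixel to show.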


19th April 2005 17:31 UTC

well, it depends on how you implement it. the first check isn't required, but why reflect a point when you won't be able to see the reflection? i just thought that would be optimal.

btw, that torus thing i said in another thread is wrong: you need to use the central circle's normals to do the first split rather than depth. jaak just pointed this out to me and i realised the error. silly, as i have done exactly that in other presets :P


19th April 2005 19:01 UTC

that torus problem is solved.
well, i want to implement reflections into that torus preset, and it's a pity that i haven't learned yet how to raytrace the inner side of a cube in a DM. could you maybe raytrace a cube with edge length 2 at distance 2 from the camera, please?


20th April 2005 05:58 UTC

can't raytrace a cube?

there are lots of threads and forum topics dealing with raytracing,

as well as Google etc.
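for what it's worth, here is a Python sketch of the requested setup: an axis-aligned cube with edge length 2, centred 2 units down the view axis. the centring and the assumption that the ray actually passes through the cube are mine; translating it into DM per-pixel code is left to the reader:

```python
def hit_cube_inner(origin, d, center=(0.0, 0.0, 2.0), half=1.0):
    """Where the ray origin + t*d hits the inner (far) wall of an
    axis-aligned cube with edge length 2*half centred at `center`.

    Uses the usual slab method, but keeps the *exit* point of the ray,
    i.e. the inside of the cube wall. Assumes the ray passes through."""
    t_exit = float("inf")
    for i in range(3):
        o = origin[i] - center[i]
        if d[i] != 0:
            # far slab crossing on this axis; the exit is the nearest of these
            t_exit = min(t_exit, max((-half - o) / d[i], (half - o) / d[i]))
        elif not -half <= o <= half:
            return None          # parallel to this slab and outside it: a miss
    return tuple(origin[i] + t_exit * d[i] for i in range(3))
```

a camera at the origin looking straight down +z sees the back wall at z = 3; tilt the ray and you land on a side wall instead.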