Monday, December 21, 2009

Stable Shadows

hi
this time i worked on stabilizing the flickering you get when doing cascaded/pssm shadow maps.
if you've had the chance to do cssm/pssm you know what i'm talking about, you can't ignore the flickering when you move your camera (panning/rotating).
the first game i know of that solved this problem was killzone 2, and there's nothing bad i can say about that game!
i have a lot of respect for those guys :)
anyway, if you don't know what i'm talking about and what those flickers are, let me explain (i can't promise you will understand it but i will still try...)
i will talk about cssm from now on, the same idea works for pssm.
when you render your shadows you create/generate your shadow matrices whenever the view changes (your camera view). if this change is very small and almost smooth you are fine, but if not, the shadow boundaries are rendered, or more correctly rasterized, differently, which causes visible flickering of the shadow edges.
the more area your farther cascades cover, the more you will notice it.
ok, you get the idea, so how do i solve it? well, i split it into 2 cases:
1. rotation problem
2. moving/panning problem
case 1: can be solved by eliminating the rotation of the shadow maps when our camera is rotated (this rotation causes our shadow maps to be rasterized differently). we can compute the smallest enclosing sphere of each cascade frustum and use it to generate our cascade matrices, using the sphere center as the look-at target and the radius to generate the orthographic projection matrix.
this assures that your shadow maps will not rotate with the camera.
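here is a minimal sketch of what case 1 could look like; the math types and helpers (Vec3, Mat4, Sphere, ComputeEnclosingSphere, LookAt, OrthoProjection) are hypothetical stand-ins, not my actual engine code:

// case 1 sketch: build a rotation-invariant cascade matrix from the smallest
// enclosing sphere of the cascade frustum (all names here are illustrative)
Sphere sphere = ComputeEnclosingSphere(cascadeFrustumCorners, 8);

Vec3 lightDir = Normalize(sceneLightDirection);
Vec3 eye = sphere.center - lightDir * sphere.radius;

Mat4 lightView = LookAt(eye, sphere.center, Vec3(0.0f, 1.0f, 0.0f));

// the ortho box depends only on the sphere radius, so it never changes
// size or orientation when the camera rotates
Mat4 lightProj = OrthoProjection(-sphere.radius, sphere.radius,
                                 -sphere.radius, sphere.radius,
                                 0.0f, 2.0f * sphere.radius);

Mat4 cascadeMatrix = lightProj * lightView;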
case 2: can be solved by canceling the movement at sub-texel level. this can be achieved by transforming a fixed world position with our cascade matrix, rounding the result and seeing how much it differs from the unrounded position; note that this must be calculated in shadow map space in order to get the right offset. we then use this offset to translate our cascade matrix.
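and a sketch of case 2, again with hypothetical math types (Vec4, Mat4, TranslationMatrix); the idea is simply to snap the cascade to whole shadow map texels:

// case 2 sketch: cancel sub-texel movement by snapping to texel increments
// (floorf comes from <cmath>)

// transform a fixed world point (the origin here) into the cascade's clip space
Vec4 shadowOrigin = cascadeMatrix * Vec4(0.0f, 0.0f, 0.0f, 1.0f);

// convert to texel units, see how far we are from an exact texel, convert back
float texX = shadowOrigin.x * shadowMapSize * 0.5f;
float texY = shadowOrigin.y * shadowMapSize * 0.5f;
float dx = (floorf(texX + 0.5f) - texX) * 2.0f / shadowMapSize;
float dy = (floorf(texY + 0.5f) - texY) * 2.0f / shadowMapSize;

// translate the cascade matrix so its texels stay locked in place while panning
cascadeMatrix = TranslationMatrix(dx, dy, 0.0f) * cascadeMatrix;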
note that my implementation is different, i use bounding box slice selection and a different flickering solution which is very stable (i can't give info about that because we are working on a big project so i need to keep a few tricks to myself)
here is a video to show the translation of all those words into action :)


The resolution of the shadow maps is low to make the problem more noticeable

Saturday, November 7, 2009

Revisiting Shadows Methods

hi
shadows have always been an issue when it comes to real-time apps, there is no such thing as a perfect method for shadows. until now i chose vsm for shadowing, but the more i work with them, the more i get the feeling they are great for "demos" but not for complex scenes where shadow complexity is high. yes, i know about the bleeding issues and how to minimize them, but there are scenes where you need to minimize them a lot, and at the end you just get a simple shadow map, and the softness you applied by blurring and such isn't noticed so much.
because of this i started to investigate a few other methods, and i must say i got really good results:
a word about popular shadowing methods:
1. sm - old school shadow maps
2. pcss - percentage closer soft shadows
3. vsm - variance shadow maps
4. esm - exponential shadow maps
5. cssm - cascaded shadow maps
6. pssm - parallel split shadow maps
as odd as it may sound, i chose method 1 for indoor shadows, yes, old school shadow maps with a poisson disk and jittering give me very good results.
while method 2 sounds really good, it doesn't apply very well to every scene, and it's also not free in terms of performance.
anyway, i created 16 poisson disk samples and generated a jitter texture which i pass to the shader. the poisson disk can be tuned for more or fewer samples, the jittering can be tuned for more or less jitter.
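for illustration, generating such a sample set can be as simple as dart throwing; this is just a sketch (the sample count, minimum distance and names are made up, not my actual generator):

#include <cstdlib>
#include <vector>

struct Sample { float x, y; };

std::vector<Sample> GeneratePoissonDisk(int count, float minDist)
{
    std::vector<Sample> samples;
    int attempts = 0;
    while ((int)samples.size() < count && attempts++ < 100000)
    {
        // random candidate inside the unit disk
        float x = 2.0f * rand() / RAND_MAX - 1.0f;
        float y = 2.0f * rand() / RAND_MAX - 1.0f;
        if (x * x + y * y > 1.0f)
            continue;

        // reject candidates that fall too close to an already accepted sample
        bool tooClose = false;
        for (size_t i = 0; i < samples.size(); ++i)
        {
            float dx = samples[i].x - x, dy = samples[i].y - y;
            if (dx * dx + dy * dy < minDist * minDist) { tooClose = true; break; }
        }
        if (!tooClose)
        {
            Sample s = { x, y };
            samples.push_back(s);
        }
    }
    return samples;
}

// e.g. GeneratePoissonDisk(16, 0.35f) could give the 16 offsets that get baked into the shader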
because i use two shadow maps, a static one for static shadows and a dynamic one for dynamic shadows, i managed to optimize this by using one texture with offsetting.
methods 5 and 6 are used for outdoor scenes, i use pssm...
here are a few screenshots:

complex shadows from 2 light sources
note that vsm will fail below the platform (light bleeding) and the fence details won't be visible even with 1k shadow maps, because the comparison is based on probability and can fail where a simple comparison will pass.

complex surface casting shadows

zoom in on shadows

before you say it's not so impressive, note that the 2 light sources are using only 8 samples on 1k maps, the barrel is a dynamic object which means the dynamic shadow map is updated and merged with the static shadow map, the fence has very tiny elements, and it all runs on my old g6600, so i think it's pretty good ;)

Wednesday, September 30, 2009

Maya Exporter

hi
this time i'll talk about the maya exporter i wrote, so i can define whatever i want, where i want and the way i want, without sticking to the id md5 format.
when you want to create a maya exporter you have a few options:
1. create a plugin dll
2. create an exe application and call maya functions from it
3. create a mel script
i chose option 2 because it's easier to debug and doesn't need maya running every time you want to export and test something; the drawback is that you can't go to the file menu in maya, select export and see your new format.
option 3 is also good if you know mel script, but it doesn't allow you to access everything in maya, and i don't think you have tools to debug it like in dev studio.
anyway, the new format is called OGE, and it is designed to support both static and dynamic models.
the structure of this format looks like this:
MESH
mesh_info - vertex count, texcoord count, normal count, weight count (if animated), faces etc...
material_info - the name of the material defined in the .mtr script
mesh_data - vertices, normals, texcoords, indices etc...
anim_data - weights, skeleton data

global for all meshes
anim_data - anim sets count, fps of each set, joints animated and their anim data

the reason anim_data is shared between all meshes is that each mesh can be influenced by different joints and has its own skinCluster; because i don't want to manage and control an animation controller per skinCluster, i merge those skinClusters' data into one skinCluster and export it as if it was a single skin for all the model's meshes.
each of those data blocks is optional and can be selected from the exporter ui.
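just to make the layout above concrete, the per-mesh and global parts could map to something like this (the field names and types here are made up for illustration, not the actual OGE spec):

#include <vector>

// illustrative sketch of the per-mesh / global layout described above
struct OgeMeshInfo { int numVerts, numTexCoords, numNormals, numWeights, numFaces; };
struct OgeMeshData { /* vertices, normals, texcoords, indices ... */ };
struct OgeAnimData { /* weights, skeleton data, anim sets, fps per set ... */ };

struct OgeMesh
{
    OgeMeshInfo info;
    char materialName[64];   // name of a material defined in a .mtr script
    OgeMeshData data;
    OgeAnimData anim;        // optional, only for skinned meshes
};

struct OgeFile
{
    std::vector<OgeMesh> meshes;
    OgeAnimData globalAnim;  // shared between all meshes (merged skinClusters)
};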
here is a screenshot of the exporter:

options:
1. convert from right-handed to left-handed (maya uses right-handed, the engine uses left-handed)
2. convert from Y up to Z up (the engine uses z up)
3. export positions, normals, texture coordinates (possible to invert the v coord)
4. export materials
5. export animations, possible to set the animation name
6. on the right i have a meshes list, which allows me to export only the meshes i select. skinned meshes are shown with a '(skinned)' keyword after their name.

the reason i have the option to override the animation name is so i can use the exporter to create animation files (containing only animation data without geometry data).
the model can be created at first with one idle animation (or without animation at all), and at a later point i can add more animations to it.
also, if you have a few characters with the same skeleton, you don't want to duplicate all the animations, you just want a shared animation folder containing all the animation files for those characters.

Tuesday, September 29, 2009

New Animation System

hi
it's been more than a month since i updated this blog, the reason is that i just didn't make time for taking screenshots/creating a video and updating the blog.
so here i am, finding a few minutes to write about the new things i implemented, hopefully updating with screenshots/videos in the next post ;)
list of things i've been working on:
1. new animation system
2. abandon the md5 file format and use an internal format (writing an exporter from maya)
3. real-time atmospheric scattering
4. HDR
5. read shaderx6 :)

the first thing i've wanted to rewrite for a long time is the animation system; calling it an animation system is a nice joke, but still, it could play different animations for different parts of the model, do some kind of blending, and at the time it was enough.
but the time has come and i wanted a more advanced system which allows me to control and create animations on the fly, add sse and such...
the new animation system is based on blend trees, which means you can apply operations/operators on trees and get a new tree as a result.
the system is designed so the operators work just like in math: you apply an operator, get a result, apply another operator on it and so on...
so how is this going to help with animation?
when doing skinned animation, you need a few things:
a. a model with weights
b. a bind pose skeleton
c. animation data for that bind pose skeleton
this skeleton forms a tree, so once you define the operations in terms of animation you are set.
you can think of an animation as a list of skeletons which define new positions & orientations for this skeleton, or just a list of that skeleton with different poses.
the operations i'm using are:
a. subtract tree A from tree B - useful for separating certain parts from a specific animation, for example: you have a walk animation which moves both the hands and the legs, but you want to separate it into two new animations, one that only moves the legs and another that only moves the hands.

b. multiply tree A with tree B - useful for merging two trees to form a combined animation, for example: merging the two animations created in a. to get back the original animation (the one which moves both hands and legs)

c. transition from tree A to tree B - this is by far the most useful operation; it creates a transition from one animation to another by setting a transition/blend factor [0..1]. what this means is that for blend factor 0 tree A will be returned, for 1 tree B will be returned, and for 0.5, for example, a tree halfway between A and B will be returned. so why is it so useful? well, imagine you have a character that has walk and run animations and you want to play the walk animation for velocity under 10 and run above 50, but what happens between 10 and 50? well, you can use the velocity to compute a blend factor and create a smooth transition from walk to run and from run to walk.
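a minimal sketch of what the transition operator can boil down to per joint; Quat, Vec3, Slerp, Lerp and the flattened tree layout are hypothetical stand-ins for the engine types:

#include <vector>

struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };
Quat Slerp(const Quat& a, const Quat& b, float t);   // hypothetical math helpers
Vec3 Lerp(const Vec3& a, const Vec3& b, float t);

struct Joint { Quat rotation; Vec3 position; };
typedef std::vector<Joint> SkeletonTree;   // flattened joint tree, same layout in A and B

SkeletonTree Transition(const SkeletonTree& a, const SkeletonTree& b, float blend)
{
    SkeletonTree result(a.size());
    for (size_t i = 0; i < a.size(); ++i)
    {
        // blend == 0 returns tree A, blend == 1 returns tree B
        result[i].rotation = Slerp(a[i].rotation, b[i].rotation, blend);
        result[i].position = Lerp(a[i].position, b[i].position, blend);
    }
    return result;
}

// example: walk/run transition driven by velocity
// float blend = (velocity - 10.0f) / (50.0f - 10.0f);   // clamp to [0..1]
// SkeletonTree pose = Transition(walkPose, runPose, blend);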

setting an animation is just a matter of setting a list of skeletons, but when implementing it you don't want to switch from one skeleton to another in one frame; what you want is to create a smooth transition from one skeleton to the next, for example: skeleton from frame 1 to 2, 2 to 3, 3 to 4 and so on... the result is a skeleton in between the old frame's skeleton and the current frame's skeleton, and this skeleton is used as input to the other operations.
also, the animation system works hand in hand with the new engine format 'oge', used for both static and animated models; this format is created using the new exporter i wrote.
next time i will write about the other things i worked on...
cya

Thursday, August 20, 2009

Deferred Lighting

hi, quick update
in the last crytek presentation they showed their deferred lighting system, and i thought... hmmm, i implemented those things a few months ago, doing tests and stuff to see if it's good, but i didn't finish all the tiny things that made it unstable (i wanted to focus on things other than graphics & effects), so last week i finished handling those tiny things and here is the result:

87 dynamic lights (yes, they all cast dynamic shadows)

Sunday, August 2, 2009

Undo/Redo System

hi, last week i implemented an undo/redo system.
as you all know, editing software without an undo/redo system is pretty useless.
i think an undo/redo system is a must in any editing software, and the system is very simple and easy to implement (depending on the app, but still) so i can't see any reason not to support it.
ok, so that explains why, now i'll explain (in general) how i added it to my level editor; i will explain the idea, so for implementation details just ask...
the more generic your system is (in terms of managing editable objects) the easier it becomes; here i describe a system with a user-defined max size for the undo/redo history (because of memory issues, you don't want to consume more than needed).
you need to support few things before implementing it:
1. you have to be able to take a snapshot of the current scene state, meaning every editable object must be stored in a way that can be restored later.
2. a deque container; i use a deque so i can drop the oldest elements when the history grows past the limit (used to limit the memory used by the system), but basically any container with the option to add and remove from both ends will be fine.
after you support those, it's just a matter of identifying the places where you want to support undo/redo and wrapping them with Undo_Start & Undo_End or whatever name you want.
so what do those functions do?
1. Undo_Start - takes a snapshot of the scene and saves it in the undo deque. i make sure to store a flag (let's call it 'done') which says whether Undo_End was called or not, and i check this flag right after entering this function; this assures i won't call Undo_Start inside an Undo_Start/Undo_End block (prevents nesting). the function also gets a string so i can name the operation i wrap, useful if you want your console to output text like: undo create new box, or undo delete...
2. Undo_End - checks that an undo op was started, and sets the 'done' flag to true
simple enough, so now we'll see how to perform undo/redo actions:
1. Perform_Undo - pop the head from the undo deque, push it onto the redo deque, and restore the snapshot you saved. i also check the size limit right after pushing a new element, and if it's greater than the max size i defined i just pop from the tail.
2. Perform_Redo - pop the head from the redo deque, push it onto the undo deque, and restore the snapshot you saved, and do the same limit test as in Perform_Undo

note that this system can be optimized in a few ways, one place to start is the snapshot function.
how exactly you do the snapshot depends on the app, but a simple method is to duplicate all editable objects in the scene.
also note that the snapshot needs to run ONLY once, just before you modify the scene.
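to make it concrete, here is a minimal sketch of such a manager, assuming a hypothetical SceneSnapshot type and TakeSceneSnapshot/RestoreSceneSnapshot hooks into the editor (this isn't my editor's actual code). note that the sketch takes a fresh snapshot of the current scene before restoring, so a redo brings back the state you just left:

#include <cstddef>
#include <deque>
#include <string>

struct SceneSnapshot { /* copies of all editable objects */ };
SceneSnapshot TakeSceneSnapshot();                       // hypothetical editor hooks
void RestoreSceneSnapshot(const SceneSnapshot& s);

struct UndoEntry { std::string name; SceneSnapshot snapshot; };

class UndoSystem
{
public:
    explicit UndoSystem(size_t maxEntries) : m_max(maxEntries), m_done(true) {}

    void Undo_Start(const std::string& name)
    {
        if (!m_done) return;                              // prevent nesting
        m_done = false;
        m_undo.push_front(UndoEntry());
        m_undo.front().name = name;
        m_undo.front().snapshot = TakeSceneSnapshot();    // state before the change
        if (m_undo.size() > m_max) m_undo.pop_back();     // drop the oldest entry
        m_redo.clear();                                   // a new edit invalidates redo
    }

    void Undo_End() { m_done = true; }

    void Perform_Undo()
    {
        if (m_undo.empty()) return;
        m_redo.push_front(UndoEntry());
        m_redo.front().name = m_undo.front().name;
        m_redo.front().snapshot = TakeSceneSnapshot();    // current state, for redo
        if (m_redo.size() > m_max) m_redo.pop_back();
        RestoreSceneSnapshot(m_undo.front().snapshot);
        m_undo.pop_front();
    }

    void Perform_Redo()
    {
        if (m_redo.empty()) return;
        m_undo.push_front(UndoEntry());
        m_undo.front().name = m_redo.front().name;
        m_undo.front().snapshot = TakeSceneSnapshot();
        if (m_undo.size() > m_max) m_undo.pop_back();
        RestoreSceneSnapshot(m_redo.front().snapshot);
        m_redo.pop_front();
    }

private:
    std::deque<UndoEntry> m_undo, m_redo;
    size_t m_max;
    bool m_done;
};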
practical example:
if you have a 'CreateNewBox' function that you call when you click on the create new box button on the main toolbar, you need to wrap this function like this:
OnButtonCreateNewBox
{
Undo_Start("Create New Box")
CreateNewBox()
Undo_End()
}
if your system only supports those kinds of scene changes, speed isn't an issue, but if you support dragging and manipulating object properties with the mouse (like scale/rotate/move vertices/faces) you need to do some optimizations.
that's it, pretty simple right?

Sunday, July 19, 2009

Building Editor

hi
i thought it would be good to leave the id tools (their editor) because the more the engine improves, the less the id editor gives me good options to set the new properties the engine supports. for example, i want to create constraints and apply them to brushes or entities; until now i added a new entity called constraint_hinge (for example) and then added properties to this entity that the engine can parse to set the constraint values, but the editor doesn't know it's a hinge constraint and can't show me the angles in the viewport. it would be better if the editor could draw/show me the angles in a more friendly way so i know how to set them correctly without running the engine and testing it.
this is just an example, so i started to build an editor (i always wanted to, but there were more important things to do before...) which will support all engine features and the option to add generic properties to each entity (just like the id editor).
the editor isn't going to be like the crysis editor, it's more like the ut editor where you build the level from scratch or, more correctly, from basic objects; detail objects can be added as well.
right now i've set up the basic ui, viewports, tools toolbar and basic manipulation of brushes such as move, scale, rotate and multi-selection, also supporting a copy/paste system.
the 2d viewports support zooming and a snap-to-grid option, and the 3d viewport lets you control a free camera with the mouse so you can view the scene from any angle.
i still have to finish a lot of things but it's getting better each day i work on it.
next time i work on it, i think i will add the undo/redo system; without it i think the editor is useless ;)
btw: no, it's not C#, it's pure c++ with mfc! using the new feature pack which is great...
here is a screenshot of how it looks so far:

Saturday, June 20, 2009

Havok Physics Demonstration

hi
it's been a long time since i posted anything, this is because i didn't have time to work on the engine... (busy busy busy)
so here is the video of a few physics elements using havok that i said i would post:



a few of the elements shown in the video:
1. better and smoother character controller (no jittering, smooth jumping), every character/bot has its own properties set using a script.
2. moving platform interacting with the character controller, the platform pushes everything on it, which means if something gets on the platform it will move with it.
3. dynamic bridge using a limited hinge constraint, i added options to add constraints via the editor so this bridge is a simple brush with a hinge constraint at the bottom; when the character applies enough force on it, it falls down.
4. smooth movement using a fixed time step and interpolation, almost every physics engine (physx, bullet, havok, ode, newton etc...) has the option to use a fixed time step (i use 1/60), but what this means is that when the application runs below 60 hz you will notice slow motion and a moon gravity effect because the physics simulation is updated at its own fixed time step (which is 1/60). to overcome this we need to check for this situation, run the right number of iterations and interpolate between the last and new physics object states if the current dt is not a multiple of the physics dt (for more info see my post on fixed time step or ask on the forum)
if you have any question feel free to ask http://orenk2k.forumotion.net/

Sunday, May 3, 2009

Havok Integration Completed

hi
it's been a while since i posted anything, i didn't have time to code so...
anyway, i've finished integrating havok physics and got rid of nvidia physx completely, and i must tell you that i'm pleased with the results, havok rocks!
a few features added:
1) anything can be physical: entities, brushes, models etc., which means i can create houses and towers directly from the editor and make them interact/break into parts very easily.

2) using a fixed time step, this is a huge win because by default havok/physx runs great when you do the stepping in a fixed manner, which means you need to choose your dt and step the simulation with it.
why do you need this?
a quick example, let's say we are using dt=1/60, which means your game needs to run at 60 hz for the physics simulation to be fine; if your game runs at 30 hz you get slow motion physics or a moon gravity effect, and if your game runs at 120 hz your physics simulation gets adrenaline and runs twice as fast as you intended.
consoles don't have this issue as they run at a fixed tv hz (25/30/60), but on pc you don't, so to fix this fast you can turn on vsync and everything will be fine (at least on hardware that supports it), but this way your renderer is limited to 60 hz! not so good, so the ideal solution is to run the renderer at as high a hz as possible and the physics at a fixed hz, so what you should do is
simulate the physics engine considering your fixed and dynamic time steps and perform one or more steps of the physics simulation to achieve smooth motion, using an interpolator.
in our example (dt=1/60), if the game runs at 30 hz we should perform 2 physics updates, if the game runs at 5 hz we should do more, and if the game runs at 120 hz we should do one update every 2 frames.
to deal with this i use an accumulator and an interpolation factor; the accumulator is used to know how many updates we should do (you should set a max step count so you won't freeze the game when the step count gets too high) and the interpolation factor is used to interpolate between the previous and current states of the objects, used when the game dt isn't a multiple of our fixed dt (if for example we are running at 40 hz and not 60 hz, we should perform 1.5 updates, but we can't do half an update, so instead we do 2 updates and use 0.5 as the interpolation factor).
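here is a minimal sketch of this kind of loop in the standard accumulator form (SavePreviousStates/Simulate/InterpolateStates are hypothetical hook names, and the exact bookkeeping in the engine may differ slightly from this):

void SavePreviousStates();            // hypothetical: remember the last state of every object
void Simulate(float dt);              // hypothetical: step havok with the fixed dt
void InterpolateStates(float alpha);  // hypothetical: render state = lerp(previous, current, alpha)

const float fixedDt = 1.0f / 60.0f;
const int maxSteps = 5;               // don't freeze the game when dt explodes
float accumulator = 0.0f;

void UpdatePhysics(float frameDt)
{
    accumulator += frameDt;

    int steps = 0;
    while (accumulator >= fixedDt && steps < maxSteps)
    {
        SavePreviousStates();
        Simulate(fixedDt);
        accumulator -= fixedDt;
        ++steps;
    }

    // the leftover time that didn't make a full step becomes the blend factor
    float alpha = accumulator / fixedDt;
    InterpolateStates(alpha);
}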
if you want to know more details on the subject please ask at the forum...

3) character controller interaction with physical objects: a moving platform moves the controllers, the controller blocks other physical objects, other physical objects can push the controller, characters can push other characters and more.

4) character motion properties are set by script, so each bot can have different abilities, jump height, movement speed etc.

5) ragdolls are much more stable, angle limits are set very easily via scripts, and havok has a great constraint for doing shoulder joint limits.

i probably forgot a few things but those are the main features.
i already have some scenes that show those in action, but as usual i preferred to code a few minutes more rather than capture a movie :)
as usual, if you have any question feel free to ask http://orenk2k.forumotion.net/

Sunday, April 5, 2009

Moving to Havok

hi
well, the day has come and i realized that nvidia physx isn't good enough for my gameplay physics, so i started to read the havok physics sdk and i really like it; it's not as simple as physx but it gives better control over things like the character controller, ragdolls, ray casts etc.
why am i doing it right now and not waiting until after i release a demo?
the thing is that physics is a very important part of the game, especially the character controller, so i prefer doing it now and having a solid system rather than saying: "i can't do it now, it's a lot of work".
a few things i found physx is very bad at:
1. filters, there aren't enough of them and they aren't general enough to let me filter collision detection between objects and such, for example: i want the bot to shoot but i don't want it to collide with its own ragdoll! i solved it via bit filtering but that limits me to only 32 bots.
2. the character controller! smooth collision and nice handling of different shapes (box, sphere, capsule) but very very bad at interacting with other physics elements; the thing is that this character controller (cct) is placed outside the physical system and performs sweep tests against the world to do sliding and stair handling.
so interacting with other rigid bodies is done via a collision callback that tells you which bodies the cct collided with and allows you to add forces to do the pushing effect, but what about other rigid bodies pushing the cct? well, this is not possible! because the cct is a kinematic object, which means it's effectively static and has higher priority than non-kinematic objects.
so doing moving platforms that allow the cct to stand on them and move with them isn't possible! you can try dirty hacks and maybe you get the right results, but it will jitter a lot and won't perform very well.
those are 2 things i can't compromise on, and because of them (mostly because of the cct) i moved to havok, which has a great cct and lets you choose the one that fits your game better: they have a rigid body cct and a proxy cct which acts like the physx cct (placed outside the physical system, doing sweep tests and such). when you choose the rigid body cct it is part of the physical system and everything works! meaning you can push things, things can push you, and moving platforms become very easy to do.
they also have a very generic filtering system and ragdoll support; they even have animation support which works with ragdoll physics and ik, so you can do animation with ik and enable ragdoll physics when you want.
there are a lot of things i didn't mention, but i think that if you need a good physics system and you don't want to compromise you should select it very carefully or you'll end up changing it like me :)
so in two words: Havok Rocks!
cya until next time...

if you have any question feel free to ask http://orenk2k.forumotion.net/

Friday, March 27, 2009

Triggers in Action

hi, after i finished updating my system (installing and updating a few things) i remembered that i said i would post a video to show triggers in action, so i created a small map with a box attached to the ceiling with a few physics objects inside, so when i touch the trigger the box opens and the objects fall down.
this is a very simple example of using trigger_once (which triggers its targets only once), other triggers can be used in the same manner to achieve more complex behavior.


using triggers

if you have any question feel free to ask http://orenk2k.forumotion.net/

Saturday, March 21, 2009

Doing some maintenance

hi, this week i did some maintenance; something i've wanted to do for a long time is to replace my hard disk. my current disk (an old 80 gb) started to make some noises so i thought it was time to replace it, so i bought a new 320 gb and cloned everything.
i thought it would be fast and simple but the new norton ghost is so bad, i just wanted to do a simple task (as i did in the dos days) and they made it so hard; they changed the clone hard disk option to a copy hard drive option which allows you to clone only one partition at a time and not the whole disk as one unit.
so i searched for another app, and i found really good software, 'Acronis True Image Home'; this is the best app i found for doing the job, very easy to use with a lot of options for backing things up. anyway, i used it and it worked perfectly.
i also installed source control software (which i didn't use until now); the project has become very big with a lot of files and it has become very complicated to manage, so i installed svn (which i also use at work), i really like this software, very fast and reliable.
the last thing i did was to upgrade my dev environment to vs2008; i was using vs2003 which wasn't so great, i hope the new one performs much better :)
that's it for now, bye

Wednesday, March 11, 2009

Triggers

hi, it's been a long time since i posted anything, i've been busy at work and also wanted to try a few techniques and read a few articles from gpu gems 3 and gpg 7; i'm going to get a new copy of ai game programming wisdom 4 which will complete my series, so i'm really looking forward to reading it.
anyway, after checking some new methods on the graphics side (deferred methods) i wanted to get on with the gameplay, so i decided to add new triggers to support interesting level design stuff.
some background on triggers for those who are not familiar with them:
a trigger is a convex volume which fires an event when a game entity (player, bots, projectiles etc.) touches it.
a simple example of using triggers: you place a door and want it to open only when you get near it, so you place a trigger around the door but make it a little bit bigger so you will touch the trigger before you touch the door; you set the target of the trigger to be the door, so when you touch the trigger it sends a 'trigger' message to the door.
each trigger has basic attributes:
1) target - name of the entity that will be triggered when we touch the trigger, we can set multiple targets.
2) message - message to display when triggered
3) delay - amount of time between the touch and the actual triggering
there are a few more, depending on the trigger type.
the triggers i've added so far:
1) trigger_once - triggers only once; once it has been triggered you can't trigger it again, can be used to destroy a bridge after you pass over it.
2) trigger_multiple - triggers multiple times, you can set a wait time before the trigger can be triggered again when touched.
3) trigger_hurt - applies damage when touched, the damage amount can be set.
that's the main idea of triggers, i've a few more to add and there is cool stuff you can do with them.
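for illustration, the touch handling for the types above could look roughly like this (Entity, SendTriggerMessage and ApplyDamage are hypothetical names, not my actual classes):

#include <string>
#include <vector>

struct Entity;                                                     // the entity touching the trigger
void SendTriggerMessage(const std::string& target, float delay);   // hypothetical event dispatch
void ApplyDamage(Entity& e, float amount);                         // hypothetical damage hook

struct Trigger
{
    enum Type { TRIGGER_ONCE, TRIGGER_MULTIPLE, TRIGGER_HURT };

    Type type;
    std::vector<std::string> targets;  // names of the entities to trigger
    std::string message;               // message to display when triggered
    float delay;                       // time between the touch and the actual triggering
    float wait;                        // trigger_multiple: time before it can fire again
    float damage;                      // trigger_hurt: damage to apply
    bool fired;                        // trigger_once: already used?
    float nextFireTime;

    void OnTouch(Entity& toucher, float gameTime)
    {
        if (fired || gameTime < nextFireTime)
            return;

        for (size_t i = 0; i < targets.size(); ++i)
            SendTriggerMessage(targets[i], delay);

        if (type == TRIGGER_HURT)
            ApplyDamage(toucher, damage);
        if (type == TRIGGER_ONCE)
            fired = true;
        if (type == TRIGGER_MULTIPLE)
            nextFireTime = gameTime + wait;
    }
};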
next time i hope i will finish the rest and upload some video to show some action.
bye
as always if you have any question feel free to ask http://orenk2k.forumotion.net/

Sunday, February 15, 2009

Custom Deformation

hi, a quick update about a new feature i've added to the material system called custom deform.
this feature allows you to create a specific deformation on a geometry batch.
by specific i mean that you define 3 functions/expressions (anything you want) to define the deformation of the x,y,z components of each vertex in the batch.
for example:
here is a material definition for sin wave deformation:
materialTestDeform
{
deform custom 0, sin(time+VertexIndex), 0
}
ok, what this simple material does to any geometry using it is apply a sin wave deformation to the y component of each vertex (the x,z components don't do anything as they are 0).
time - internal keyword returning the game time in ms
VertexIndex - internal keyword returning the index of the vertex we are going to deform.
i could also use it to do some old school water deformation (using a height map) by doing something like this:
materialWaterDeform
{
deform custom 0, heightMapTexture[256*VertexPos.y+256*VertexPos.x], 0
}
heightMapTexture - the dynamic 256x256 height map texture
VertexPos.x/y/z - internal keywords returning the components of the vertex we are going to deform.
these are just a few examples of using this new feature (you probably won't apply this water deformation as it's not fast enough and could be done faster on the gpu)
because this deformation is based on the functions you define (it's not constant), it can't be done on the gpu, so it takes some cpu time.
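to give a concrete picture, the sin wave material above boils down to something like this on the cpu each frame; this is only a sketch (the real code evaluates the parsed expressions generically, and the Vertex layout here is made up):

#include <cmath>

struct Vertex { float x, y, z; };

// deform custom 0, sin(time+VertexIndex), 0
void ApplySinWaveDeform(Vertex* batch, int vertexCount, float gameTimeMs)
{
    for (int i = 0; i < vertexCount; ++i)
        batch[i].y += sinf(gameTimeMs + (float)i);
}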
thats all for now.
if you have any question feel free to ask http://orenk2k.forumotion.net/

Wednesday, February 4, 2009

Forum For All

hi
i decided to open a forum to discuss my engine programming and game development in general.
the main idea of the forum is to be the place to ask questions and hopefully get answers :)
it also contains a game design area which allows you to share your work and show what you've got; the main idea of this section is to get in contact with some talented users who would like to help with new models, animations, level design etc.
i'll still post news here, but if you have any questions about something (new or old), i would like you to ask them in the forum, so enjoy and cya in the forum :)
forum address:
http://orenk2k.forumotion.net/

Sunday, January 25, 2009

Improving Shadows System Continued

hi, in the last post i let you think about an answer to a question about my shadow updating scheme.
in short, the problem was that if a dynamic entity gets into the influence area of a light, the light needs to update its shadow map, but this turns out to be very intensive if this area contains lots of detail models which are static (read the last post to get the details and comments...)
ok, so the solution to this problem could be one of:
(1) two shadow maps for each light
(2) stay with one shadow map per light but...
(3) other
let's check those options:
(1) two shadow maps for each light is the obvious way to go but still raises some problems:
a. more memory (twice as much), if the light took 1.5 mb per cube shadow map, now it will take 3 mb!
not so good, as the reason i use the shadow cache system is to maximize the shadow count and minimize the memory used by them.
b. we need to merge those shadow maps to get the final one we use when rendering, so we need yet another shadow map for this! and the merging process isn't free either, so i think you get the idea why this option isn't so good.
(2) here i stay with one shadow map per light, but i use another shadow map which is shared between all lights; this second shadow map is only used for dynamic geometry, so the unique shadow map per light stays ONLY for static geometry and the shared one is ONLY for dynamic geometry.
so the memory issue is solved, but what about the merging? well, instead of merging those shadow maps on the cpu, we merge them on the gpu.
sounds good, but doesn't that mean we need another shadow map? not necessarily; what i do is bind both shadow maps and merge them inside the ps, and then compute the shadow as usual. so instead of using the value sampled from one shadow map, i use the merged sample.
this method saves the need for another shadow map and the extra pass for merging them.
(3) other? nothing i can see right now, but maybe you can think of something?...
in practice it works very well and the performance gain is huge.
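the merge itself is trivial; written here as plain c++ for clarity (in the engine this lives in the pixel shader, and the names are only illustrative):

// per-fragment shadow test using the merged sample
float SampleMergedShadow(float staticDepth, float dynamicDepth, float fragmentDepth)
{
    // merging two depth shadow maps just means taking the closer occluder
    float mergedDepth = (staticDepth < dynamicDepth) ? staticDepth : dynamicDepth;

    // then the usual shadow comparison on the merged value
    return (fragmentDepth > mergedDepth) ? 0.0f : 1.0f;   // 0 = in shadow, 1 = lit
}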
here are two screenshots showing the results using this method:

using both static & dynamic shadows

using only static shadows

i see those screenshots but what is so special about them? it's just shadow maps...
well, yes and no, the first screenshot shows a simple scene with some static geometry and one dynamic pickup item (armor); the only shadow map that is being updated is the dynamic shadow map, and what you see is the result of merging the two shadow maps in the ps, so you can see both static and dynamic shadows.
the second screenshot shows the same scene, but this time we picked up the item, so the resulting shadow comes only from the static shadow map (the dynamic one is empty!)

Monday, January 19, 2009

Improving Shadows System

hi
recently i've been working on performance issues and such, and one of the biggest challenges in this area is shadows and how we make them faster.
the answer to this question is another question:
how much cpu and memory can we sacrifice for this?
the answer to this one is: it depends on the other processes in the game and your platform.
so here i'm working on the pc platform, where memory is not an issue (unlike on ps2/xbox/ps3), so the main question is:
what do you do if you have a big scene with 5000 point lights that need to cast dynamic shadows?
let's say we can determine their visibility very fast, so the bottleneck is the shadow generation and not the visibility of the lights. another assumption is that the lights are mostly static, but if at any time we want some of the lights to become dynamic, we can.
so the answer to this question can be one of these:
1. use light maps
2. compute all shadow maps at load time
3. other
well, let's check those methods one by one:
1. light maps are out of the question because they are static and if a light moves, its shadows won't be updated. also, if we have a bridge and we walk beneath it, the shadows of the bridge won't be cast on our character, because when we computed those light maps our character wasn't there.
2. this sounds promising, it solves the problems in 1 but has another problem.
each point light uses a cube map for its shadow maps, which means 6 maps, one for each side, so if we take for example a 256 by 256 cube map for all the lights we get something like 1.5 mb per light (assuming 4 bytes per pixel); this becomes 1.5*5000=7500 mb!! = 7.5 gigs, which is huge and of course can't be done in the real world where we have limited memory.
3. so what we need is a solution that solves both 1 and 2 and takes a reasonable amount of memory, and the solution i came up with is to use a cache system for shadows.
this system has a specific amount of memory dedicated to shadows, and based on a priority value computed for each light, the system gives a cache entry to the most important lights (for example: the lights with a large range that are closer to the viewer).
so basically the system works with x entries that keep shadow maps and assigns them as needed to the right lights; when we assign a shadow entry, we flag the light that it needs to update its shadows, this way we won't spend memory on far lights the viewer won't even see.
so when the scene is loaded, the system assigns shadow entries to the most important lights, and when a light tries to generate its shadows we check:
a. does this light have a shadow entry? if not, this light is not important and won't generate shadows.
b. has this light already updated its shadows or not? if not, the light became dirty or something flagged the light that it needs to update its shadows (see below)
if we have a shadow entry and the light needs to update its shadows, we take the shadow maps from the cache entry and render the light's shadows into them.
when we are done, we flag this light as having updated its shadows so next frame we won't update it again.
a few things we need to consider:
1. if an entity that casts shadows gets inside the light bounds, the light must update its shadows so the shadows of the entity get in.
2. if the light is moving we need to update its shadow maps.
ok, so this is the main idea of the system; it is very fast and generic, and as with any algorithm there are some tiny things that need to be done correctly so it works smoothly and handles all the special cases.
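here is a rough sketch of the cache idea described above; the structures and the RenderShadowMaps call are illustrative names, not the engine's actual code:

#include <vector>

struct CubeShadowMap { /* the 6 shadow map faces for one entry */ };

struct Light
{
    float priority;      // e.g. based on range and distance to the viewer
    int cacheEntry;      // -1 = no entry, this light won't cast shadows
    bool shadowsDirty;   // does it need to re-render its shadow maps?
};

struct ShadowCache
{
    std::vector<CubeShadowMap*> entries;       // fixed budget of shadow map memory

    // give the entries to the most important lights, flag newly assigned lights as dirty
    void AssignEntries(std::vector<Light*>& lights);
};

void RenderShadowMaps(CubeShadowMap& maps, const Light& light);   // hypothetical render call

void GenerateShadows(Light& light, ShadowCache& cache)
{
    if (light.cacheEntry < 0)    // a. no entry -> not important enough, no shadows
        return;
    if (!light.shadowsDirty)     // b. already up to date -> nothing to do this frame
        return;

    RenderShadowMaps(*cache.entries[light.cacheEntry], light);
    light.shadowsDirty = false;  // don't update again until something dirties the light
}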
before i finish, here is another question:
what if we have a room with 500,000 faces and one point light that casts shadows?
we use this great shadow cache system :)
right, but what if an entity, let's say a small ball with 16 faces, gets inside that room?
hmm, the light needs to update its shadows so the shadows of the entity get in.
well, that's right, but that means that for this tiny entity with only 16 faces you need to regenerate shadows from 500,000+16 faces???
think about it until next time...

Tuesday, January 13, 2009

Volumetric Flares

hi
while changing some of my render batch design to support static and dynamic batches better and faster, i decided to add this nice fake volume effect that i call light flares or volume flares.
unlike the simple 4-vertex billboard/sprite flares that we rotate to face the camera, these flares are different and look much better.
when editing the map i place quad shapes aligned with the scene geometry and give them the special flares material; inside the engine i build a special border extruded by a specific size (defined inside the material script) and make sure they always face the viewer.
i also make sure the flare won't be visible if the camera position is behind it.
when the camera position is in front and smoothly moves behind, i take the distance from the camera position to the flare plane and use it to compute the flare intensity, so the flare smoothly becomes invisible the closer the camera gets.
i also bind a built-in texture which encodes a quadratic function so the borders will have a smooth falloff effect.
there are a few tiny things here and there (setting texcoords and such) but this is the main idea.
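the fade itself can be as simple as this sketch (Vec3/Plane and the fade distance are illustrative, not the actual material parameters):

struct Vec3 { float x, y, z; };
struct Plane { Vec3 normal; float d; };   // the flare plane, normal facing the visible side

float ComputeFlareIntensity(const Plane& plane, const Vec3& camPos, float fadeDistance)
{
    // signed distance from the camera to the flare plane
    float dist = plane.normal.x * camPos.x +
                 plane.normal.y * camPos.y +
                 plane.normal.z * camPos.z + plane.d;

    if (dist <= 0.0f)
        return 0.0f;                      // camera is behind the flare, hide it

    // fade out smoothly as the camera gets closer to the plane
    float t = dist / fadeDistance;
    return (t > 1.0f) ? 1.0f : t;
}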
here is a screenshot to show this in action:

Volumetric Flare Around Fluorescent

note that the volume shape changes relative to the viewer position, so it will look different from another position, but the volumetric effect is preserved.

Sunday, January 11, 2009

Improving Area Visibility

hi
last time i talked about my new visibility system and how i compute light and area visibility; this time i improved the visibility system in a way that lets me know what is visible inside each area and not just which areas are visible.
i call it internal area pvs; this pvs uses the same bitset mechanism and gives me information about all the entities/surfaces etc. that are visible inside a specific area.
at first it sounds like: why do you need it? well, when running big levels, each area has lots of detail in terms of level design and also gameplay, with high detail models and pickup items placed around, and those can take a lot of power when rendering.
also note that computing the internal area pvs is very fast and is done during the portal traversal, so with a few lines i save lots of rendering; also, what you don't see does not need to be lit, so the saving is huge in the lighting pass!
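to illustrate the bitset part, marking and testing per-area visibility can look roughly like this (the Area layout and the portal traversal hook are illustrative, not the engine's actual structures):

#include <vector>

struct Area
{
    std::vector<unsigned int> internalPvs;   // one bit per entity/surface inside the area
    int numObjects;

    void ClearPvs()
    {
        internalPvs.assign((numObjects + 31) / 32, 0u);
    }
    void MarkVisible(int objectIndex)
    {
        internalPvs[objectIndex >> 5] |= 1u << (objectIndex & 31);
    }
    bool IsVisible(int objectIndex) const
    {
        return (internalPvs[objectIndex >> 5] & (1u << (objectIndex & 31))) != 0;
    }
};

// during the portal traversal, every object whose bounds intersect the
// portal-clipped frustum just gets its bit set:
//     if (Intersects(portalFrustum, object.bounds))
//         area.MarkVisible(object.index);
// later, both the render pass and the lighting pass skip anything whose bit is 0.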
here are 2 screenshots that show it in action:
NOTE: the images are in wireframe mode so you can see what is visible without and with the internal area pvs system.

without internal area pvs
notice the small red entities on the right side

with internal area pvs
notice the red entities on the right side were removed along with other surfaces

the red entities in the images are light flares, which are a nice effect to fake volumetric light; note that unlike billboards/sprites these are not simple quads aligned to the camera view, next time i'll talk about them...