Mike notes

From Bo3b's School for Shaderhackers

Shadows

Hi bo3b - yeah this is not the right shader for shadow rendering ;-) It's not always obvious, but a few clues are:

- There is a ShadowMap sampler in there somewhere

- There is a SomethingXXXToShadow or SomethingXXXToLight matrix in there (or the reverse!)

- A world coordinate is constructed from EyePos or CamPos and a 3-component texcoord

- There is an InvView or InvProjView in there (usually generating something in world coords)

- Recognize that you may not see the word "shadow" in a shader that actually plots shadows. As noted above, you often need to look for matrices with "light" in them instead. This makes sense because these matrices do a coordinate transformation into the "view" of the light source that either casts shadows or enhances the light on a surface, say like a spotlight (very much the same as the View transformation moving to the POV of the camera). See the sketch below for what these clues look like in practice.
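
To make the clues concrete, here is a minimal sketch of what a shadow-plotting pixel shader often boils down to. It is written in HLSL for readability (the game's shaders will be DX9 assembly, but the same names and patterns show up there), and every name in it is hypothetical:

  sampler2D ShadowMap;          // clue: a shadow map sampler
  float4x4  WorldToShadow;      // clue: a SomethingToShadow / SomethingToLight matrix
  float3    CamPos;             // clue: the eye/camera position

  float4 main(float3 viewRay : TEXCOORD0) : COLOR
  {
      // clue: a world coordinate constructed from CamPos and a 3-component texcoord
      float  depth     = 10.0;                   // stand-in; real shaders read this from a depth texture
      float3 worldPos  = CamPos + viewRay * depth;

      // transform into the light's "view" and sample the shadow map with the result
      float4 shadowPos = mul(float4(worldPos, 1.0), WorldToShadow);
      float  lit       = tex2Dproj(ShadowMap, shadowPos).x;
      return lit.xxxx;
  }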

I hope this stuff helps. I basically don't consciously realize nowadays what it is that I look for, so writing this stuff down is useful for me too :-)

Good link from Mike_ar69: Shadows.

Related awesome post from DarkStarSword regarding fixing shadows: Shadow fixing, and the follow-up.

Basic learning

The general "roadmap" for learning should be something like this:

BEGINNER 1

Beginner fixes do not require shader knowledge or DX9 knowledge; you just need to learn "what to do". You will need to get familiar with the basic assembly syntax, but that's all. Studying other people's fixes will help you absorb it pretty quickly, and bo3b's references above have loads of stuff. These beginner fixes do NOT use "full stereo correction"; they are more about moving things to fixed locations/depths, etc.

  1. Read the material on what the helixmod DLL is and the difference between the debug and release versions.
  2. Learn how to disable problematic shaders. This will crystallize your understanding of the basics of using the helixmod wrapper, stepping through shaders, editing them, etc.
  3. Find a game (even if it's already been fixed!) that has a skybox at the wrong depth. Fix it using the instructions in the original Helix guides.
  4. Similarly, find a game with a 2D HUD. Practice moving the HUD elements to different depths (see the sketch after the note below).

    • When you can do the above things, you will actually be in a great position to start creating fixes that make games playable. This is not insignificant!
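
As a taste of what the fixes in steps 3 and 4 look like, here is a minimal sketch in the same notation used in the SkyBox section below. The register name r0 and the constants are hypothetical (you tune the constants by eye), and in a real helixmod fix these lines are written as DX9 assembly against the wrapper's stereo values:

  // r0 is assumed to hold the final clip-space position just before it is written out
  r0.x += stereoParams.x * 0.03              // push a 2D HUD element into the screen; 0.03 is tuned by eye
  r0.x += stereoParams.x * 0.5               // or: pull a skybox out to depth (see the SkyBox section for variations)
  o0 = r0                                    // write the adjusted position to the output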

BEGINNER 2

  5. The next step would be spending a bit more time looking at how you "separate textures". This is really useful for HUD fixes, so you can move some parts of the HUD and not others. It's also essential for some games where fixing one part of the game (e.g. moving a HUD element to depth) also affects some entirely unrelated part of the game (objects in the world). A sketch of the shader side of this follows the note below.

    • With the above, you can now do some fancy stuff :-)
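
The exact DX9Settings.ini keys for flagging a texture by its CRC are covered in the helixmod documentation and in existing fixes, so I won't reproduce them from memory here; but on the shader side the pattern usually boils down to something like this (the register numbers are hypothetical, and c200.x is assumed to be a flag the wrapper sets for the texture you configured):

  // c200.x assumed set by the wrapper: 1.0 while the flagged HUD texture is bound, 0.0 otherwise
  // c201.x holds the depth offset we want to apply to just that element
  r0.x += stereoParams.x * c201.x * c200.x   // the correction only bites when the flag is 1.0
  o0 = r0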

INTERMEDIATE 1

Intermediate fixes start to require some knowledge of what is going on in the vertex shaders, and they start to use "full stereo correction", i.e. the actual formula that Nvidia uses to inject stereo in the drivers. Only vertex shaders are involved.

  6. Haloing: A lot of the really nasty looking issues in games are caused when things "halo" or "double/triple image". There is an almost universal reason for, and pattern to fixing, all of these issues. I will state it here for when you are ready to try this out. In a VS the "output position" variable is calculated from a View Projection transformation (you need to read about what that is), and this is what defines the coordinates of objects in the scene - the Nvidia driver specifically stereoizes this output variable. However, what often happens in games is that a temporary variable is used for the result of the View Projection transform, and is then used to set the output position variable. The problem comes when this same temporary position variable is additionally output to other "texcoords" that get used in later pixel shaders. These texcoords do not get stereoized by the Nvidia driver, and so in the pixel shaders they hold the wrong coordinates. The fix is to manually stereoize the temp position variable before outputting it to the texcoord (see the sketch below). There are loads of example fixes for this.
  7. Water: Many (but not all) water effects are vertex shader issues, and this is where you would start looking for a solution. The main issues are haloing and reflections. Both of these are often fixed the same way as in point 6 above.
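
To spell out the haloing fix in point 6, here is a minimal sketch in the document's snippet notation. The register names (r0, o0, o1) are hypothetical, and in a real fix the stereoParams values come from the wrapper; the key line applies the driver's own formula, x += separation * (w - convergence), to the copy the driver will not touch:

  // r0 is assumed to hold the result of the View Projection transform
  o0 = r0                                            // output position: the driver stereoizes this copy
  r0.x += stereoParams.x * (r0.w - stereoParams.y)   // manually stereoize the copy going to the texcoord
  o1 = r0                                            // texcoord output: now consistent with the stereoized o0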

ADVANCED

These fixes require Pixel Shader correction. They require you to understand much better the graphics rendering stages, the coordinate systems, and the transformations between them, including what the stereo correction is doing - this background is actually the most important part. You still do *not* need to know much about shader programming, though it will start to help a teeny bit to understand how to apply matrix transforms. I am not going to go into detail right now, but there are a few levels at which PS correction is relevant:

- Accepting a newly defined replacement variable from a parent VS and using that (e.g. a stereoized version of some texcoord that is needed in only one part of the PS, but not everywhere)
- What are called "texture coordinate" corrections, to offset where sample maps are sampled (often a texture coordinate needs to be "built" from something called vPos)
- Direct correction of projection space coordinates (easiest)
- Indirect correction of world space coordinates for deferred rendered shaders (hardest)

A sketch of the "direct correction" case follows below.
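
For the "direct correction of projection space coordinates" case, the pattern is usually the reverse of the vertex shader correction: the pixel shader has reconstructed a projection-space position from the rasterized (already stereoized) coordinates, so the driver's offset has to be removed before that position is reused. A minimal sketch in the same notation (the register name is hypothetical, and the sign depends on which direction the data is travelling):

  // r10 assumed to hold a projection-space position reconstructed inside the pixel shader
  r10.x -= stereoParams.x * (r10.w - stereoParams.y)   // remove the driver's offset before reusing r10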


SkyBox

Hi bo3b. In a nutshell, it is whatever looks best :-) The standard formula almost never applies to skyboxes.

That formula applies a correction where one has not been applied by the driver, or 'un-applies' a correction that should not have happened. Skyboxes are at the "correct" depth; it's just that the developer made that depth stupidly small. The usual way to fix a skybox is to just multiply stereoParams.x by a constant value (between 0 and 1). Sometimes that just does not seem to work well, or sometimes you need to multiply by a much bigger number than one, and oftentimes the skybox may still exhibit dependence on convergence.

The fix "r10.x += stereoParams.x * (stereoParams.y);" basically removes the driver level convergence correction, leaving only whatever the depth correction was - so if the depth of the box was "1", this is equivalent to the usual correction of just multiplying stereoParams.x by a number between 0-1 (and in this case actually "1"). This fix will now be independent of convergence. If you look in other shaders you will see things like (r10.x += stereoParams.x * (-r10.w + stereoParams.y + 1); and what this is doing is completely offsetting the driver correction, "(-r10.w + stereoParams.y)" (making it at 2d screen depth) and then applying the max depth correction with the "1").

Like I say, it's all trial and error because it depends on what arbitrary depth they set the skybox at. In AC3/4, they don't use the same depth for the skybox, clouds, stars, sun or moon, hence the variation in ways to correct them, and hence the variation in lining them up.

Sorry that was a bit long-winded, but the upshot is you try one of these 4 approaches, in this order, and see what works best:

  1. r10.x += stereoParams.x * [0-1] or [0-inf...];
  2. r10.x += stereoParams.x * (stereoParams.y);
  3. r10.x += stereoParams.x * (stereoParams.y) * [0-1];
  4. r10.x += stereoParams.x * (- r10.w + stereoParams.y + [0-1]);

- By 'works best' I mean the skybox stays in the same place when separation and convergence are adjusted, or scales perfectly with the scene when they are adjusted. If the skybox goes in and out of depth as convergence is adjusted, that's generally not good, unless the 'playing range' of convergence is such that not much movement happens.

Hope that helps.