Shaders and Effects
There are pixel shaders and vertex shaders. This chapter first explains the difference between them, how they work and what they can do for you. Then it introduces the shader language HLSL, its syntax and how to use it, especially how to call it from your program. Finally, you will also learn about the program FX Composer, which shows you how to load effects, inspect and modify their HLSL code, and export the finished shaders for use in your game.
Development of shaders
In the past, computer-generated graphics were produced by a so-called fixed-function pipeline (FFP) in the video hardware. This pipeline offered only a reduced set of operations in a fixed order, which proved not flexible enough for the growing complexity of graphical applications such as games.
That is why a new graphics pipeline was introduced to replace this hard-coded approach. The new model still has some fixed components, but it introduced so-called shaders. Shaders do the main work of rendering a scene to the screen and can easily be exchanged, programmed and adapted to the programmer's needs. This approach gives the graphics programmer full creative freedom, but also more responsibility.
There are two main kinds of shaders: the vertex shader and the pixel shader (called fragment shader in OpenGL). With DirectX 10 and OpenGL 3.2 a third kind of shader was introduced: the geometry shader, which offers even further possibilities by creating additional, new vertices based on the existing ones.
Shaders describe and calculate the properties of either vertices or pixels. The vertex shader deals with vertices and their properties: their position on the screen, each vertex's texture coordinates, its color and so on.
The pixel shader deals with the result of the vertex shader (rasterized fragments) and describes the properties of a pixel: its color, its depth compared to other pixels on the screen (z-depth) and its alpha value.
Types of shaders and their function
Nowadays there are three types of shaders that are executed in a specific order to render the final image. The scheme shows the role and order of each shader in the process of sending data from XNA to the GPU and finally rendering an image. This process is called the GPU workflow:
Vertex Shader
Vertex shaders are special functions used to manipulate vertex data with mathematical operations. To do this, the vertex shader takes vertex data from XNA as input. That data contains the position of the vertex in the three-dimensional world, its color (if it has one), its normal vector and its texture coordinates. The vertex shader can manipulate this data, but only the values are changed, not the way the data is stored.
The most basic function of every vertex shader is transforming the position of each vertex from the three dimensional position in the virtual space to the two dimensional position on the screen. This is done by matrix multiplication with the view, world and projection matrix.
The vertex shader also calculates the depth of the vertex on the two dimensional screen (z-buffer depth), so that the original three dimensional information about the depth of objects is not lost and vertices that are closer to the viewer are displayed in front of vertices that are behind other vertices.
The vertex shader can manipulate all the input properties such as position, color, normal vectors and texture coordinates, but it cannot create new vertices. Vertex shaders can, however, be used to change the way an object is seen: fog, motion blur and heat-wave effects can all be simulated with them.
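As a preview of the HLSL syntax covered in detail later in this chapter, the core transformation performed by almost every vertex shader is just three matrix multiplications (the variable names here are illustrative only):

float4 worldPosition = mul(inputPosition, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
float4 screenPosition = mul(viewPosition, ProjectionMatrix);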
Geometry Shader
The next step in the pipeline is the newer, optional geometry shader. The geometry shader can add new vertices to a mesh based on the vertices that were already sent to the GPU. One way to use this is geometry tessellation: the process of adding more triangles to an existing surface according to certain procedures to make it more detailed and better looking.
Using a geometry shader instead of a high-poly model can save a lot of CPU time, because not all of the vertices that will eventually be displayed on the screen have to be processed by the CPU and sent to the GPU. In some cases the polygon count sent by the CPU can be reduced to a half or a quarter.
If no geometry shader is used the output of the vertex shader goes straight to the rasterizer. If a geometry shader is used, the output also goes to the rasterizer after adding the new vertices.
Pixel / Fragment Shader
The rasterizer takes the processed vertices and turns them into fragments (pixel-sized parts of a polygon). Whether the primitive is a point, line, or polygon, this stage produces fragments to "fill in" the polygons and interpolates all the colors and texture coordinates so that the appropriate value is assigned to each fragment.
After that the pixel shader (DirectX uses the term "pixel shader", while OpenGL uses "fragment shader") is called for each of these fragments. The pixel shader calculates the color of an individual pixel and is used for diffuse shading (scene lighting), bump mapping, normal mapping, specular lighting and simulating reflections. Pixel shaders are generally used to give surfaces the effects they have in real life.
The result of the pixel shader is a pixel with a certain color that is passed to the Output Merger and finally drawn onto the screen.
So the big difference between vertex and pixel shaders is that vertex shaders are used to change the attributes of the geometry (the vertices) and transform it to the 2D screen. Pixel shaders, in contrast, are used to change the appearance of the resulting pixels with the goal of creating surface effects.
Programming with BasicEffect Class in XNA
The BasicEffect class in XNA is very useful and effective if you want simple effects and lighting for your model. It works like the fixed-function pipeline (FFP), which offered a limited and inflexible set of operations.
To use the BasicEffect class we first need to declare an instance of BasicEffect at the top of the game class.
BasicEffect basicEffect;
This instance should be initialized inside the Initialize() method because we want to initialize it once, when the program starts. Doing this elsewhere (for example once per frame) could lead to performance problems.
basicEffect =
new BasicEffect(graphics.GraphicsDevice, null);
Next, we implement a method in the game class to draw a model with the BasicEffect class. With BasicEffect we do not have to create an EffectParameter object for each variable. Instead, we can simply assign the values to the BasicEffect's properties.
private void DrawWithBasicEffect
(Model model, Matrix world, Matrix view, Matrix proj){
basicEffect.World = world;
basicEffect.View = view;
basicEffect.Projection = proj;
basicEffect.LightingEnabled = true;
basicEffect.DiffuseColor = new Vector3(1.0f, 1.0f, 1.0f);
basicEffect.SpecularColor = new Vector3(0.2f, 0.2f, 0.2f);
basicEffect.SpecularPower = 5.0f;
basicEffect.AmbientLightColor =
new Vector3(0.5f, 0.5f, 0.5f);
basicEffect.DirectionalLight0.Enabled = true;
basicEffect.DirectionalLight0.DiffuseColor = Vector3.One;
basicEffect.DirectionalLight0.Direction =
Vector3.Normalize(new Vector3(1.0f, 1.0f, -1.0f));
basicEffect.DirectionalLight0.SpecularColor = Vector3.One;
basicEffect.DirectionalLight1.Enabled = true;
basicEffect.DirectionalLight1.DiffuseColor =
new Vector3(0.5f, 0.5f, 0.5f);
basicEffect.DirectionalLight1.Direction =
Vector3.Normalize(new Vector3(-1.0f, -1.0f, 1.0f));
basicEffect.DirectionalLight1.SpecularColor =
new Vector3(0.5f, 0.5f, 0.5f);
}
After all necessary properties have been assigned, the model can be drawn with the BasicEffect class. Since a model can contain more than one mesh, we use a foreach loop to iterate over each mesh of the model:
private void DrawWithBasicEffect
(Model model, Matrix world, Matrix view, Matrix proj){
....
foreach (ModelMesh meshes in model.Meshes)
{
foreach (ModelMeshPart parts in meshes.MeshParts)
parts.Effect = basicEffect;
meshes.Draw();
}
}
To view our model in XNA, we just call our method inside the Draw() method.
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Black);
DrawWithBasicEffect(myModel, world, view, proj);
base.Draw(gameTime);
}
Draw texture with BasicEffect Class
To draw a texture with the BasicEffect class we must enable texturing. After that we can assign the texture to the effect.
basicEffect.TextureEnabled = true;
basicEffect.Texture = myTexture;
Create transparency with BasicEffect class
First we assign the transparency value to the BasicEffect's Alpha property:
basicEffect.Alpha = 0.5f;
then we must tell the GraphicsDevice to enable alpha blending with this code inside the Draw() method:
protected override void Draw(GameTime gameTime){
.....
GraphicsDevice.RenderState.AlphaBlendEnable = true;
GraphicsDevice.RenderState.SourceBlend = Blend.SourceAlpha;
GraphicsDevice.RenderState.DestinationBlend = Blend.InverseSourceAlpha;
DrawWithBasicEffect(model, world, view, projection);
GraphicsDevice.RenderState.AlphaBlendEnable = false;
.....
}
Programming your own HLSL Shaders in XNA
[edit | edit source]Shading Languages
Shaders are programmable, and to make that possible several C-like high-level shading languages have been developed.
The High Level Shading Language (HLSL) was developed by Microsoft for the Microsoft Direct3D API. It uses C syntax and we will use it with the XNA Framework.
Other shading languages are GLSL (OpenGL Shading Language), offered since OpenGL 2.0, and Cg (C for Graphics), another high-level shading language developed by Nvidia in collaboration with Microsoft, which is very similar to HLSL. Cg is supported by FX Composer, which is discussed later in this article.
The High Level Shading Language (HLSL) and its use in XNA
Shaders in XNA are written in HLSL and stored in so-called effect files with the file extension .fx. It is best to keep all shaders in one separate folder, so create a new folder "Shaders" in the content node of the Solution Explorer in Visual C#. To create a new effect file, simply right-click the new "Shaders" folder and select Add → New Item. In the New Item dialog select "Effect File" and give the file a suitable name.
The new effect file will already contain some basic shader code that should work, but in this chapter we will write the shader from scratch, so the already generated code can be deleted.
Structure of a HLSL Effect-File (*.fx)
As already mentioned, HLSL uses C syntax: you declare variables and structs and write functions. A shader in HLSL usually consists of four different parts:
Variable declarations
Variable declarations that contain parameters and fixed constants. These variables can be set from the XNA application that is using the shader.
Example:
float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f);
With this statement a new global variable is declared and assigned. HLSL offers standard C-like data types such as float, string and struct, but also shader-specific data types for vectors, matrices, samplers, textures and so on. The official reference can be found on MSDN.
In the example we declared a 4 dimensional vector that is used to define a color. Colors are represented by 4 values that represent the 4 channels (Red, Green, Blue, Alpha) and have a range from 0.0 to 1.0.
Variables can have arbitrary names.
Data structures
Data structures that the shaders use to input and output data. Usually there are two structures: one for the input of the vertex shader and one for its output. The output of the vertex shader is then used as the input of the pixel shader. Usually no structure is needed for the output of the pixel shader, because that is already the end result. If you include a geometry shader you need additional structures, but we will look at the most basic setup consisting of a vertex and a pixel shader. Structures can have arbitrary names.
Example:
struct VertexShaderInput
{
float4 Position : POSITION0;
};
This data structure contains one variable of type float4 (a four-dimensional vector) called Position (the name is arbitrary).
POSITION0 after the variable name is a so-called semantic. All variables in the input and output structs must be identified by semantics. A list can be found in the official HLSL reference on MSDN.
Shader functions
The implementation of the shader functions and the logic behind them. Usually this is one function for the vertex shader and one for the pixel shader.
Example:
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return AmbienceColor;
}
Functions work like in C: they can have parameters and return values. In this case we have a function called PixelShaderFunction (the name is arbitrary) which takes a VertexShaderOutput object as input and returns a value with the semantic COLOR0 and type float4 (a four-dimensional vector representing the four color channels).
Techniques
A technique is like the main() method of a shader and tells the graphics card when to use which shader function. Techniques can have multiple passes that use different shader functions, so the resulting image on the screen can be composed in multiple passes.
Example:
technique Ambient
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
This example technique has the name Ambient and just one pass. In this pass the vertex and pixel shader functions are assigned and the shader version (in this case 1.1) is specified.
First try: A simple ambient shader
The simplest shader is a so-called ambient shader that just assigns a fixed color to every pixel of an object, so only its outline is visible. Let's implement an ambient shader as a first try.
We start with an empty .fx-File that can have an arbitrary filename. The vertex shader needs the three scene matrices to calculate the two dimensional position of a certain vertex on the screen based on the three dimensional coordinates. So we need to define three matrices inside the fx-file as variables:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f);
A variable of the type float4x4 is a 4 dimensional matrix. The other variable is a 4 dimensional vector to determine the ambient light color (in this case a gray tone). The color values for the Ambient color are float values that represent the RGBA channels, where the minimum value is 0 and the maximum value is 1.
Next we need the input and output structures for the vertex shader:
struct VertexShaderInput
{
float4 Position : POSITION0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
};
Because this is a very simple shader, the only data the structs contain at the moment is the position of the vertex in the virtual 3D space (VertexShaderInput) and the transformed position of the vertex on the two-dimensional screen (VertexShaderOutput). POSITION0 is the semantic of both positions.
Now we need to add the shader calculation itself. This is done in two functions. At first the vertex shader function:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
return output;
}
This is the most basic vertex shader function and every vertex shader should look similar. The position that is saved in input is transformed by multiplying it with three scene matrices and then returning it as the result. The input is of the type VertexShaderInput and the output is of the type VertexShaderOutput. The matrix multiplication function that is used (mul) is part of the HLSL language.
Now all we need is to give the pixel shader the position that was calculated by the vertex shader and color it with the ambient color (based on the ambient intensity). The pixel shader is implemented in another function that returns the final pixel color with the data type float4 and the semantic type COLOR0:
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return AmbienceColor;
}
So it should be clear why in the end result every pixel of the object has the same color: we do not have any lighting in the shader yet, and all the three-dimensional information gets lost.
To make our shader complete we need a so called technique, which is like the main() method of a shader and the function that is called by XNA when using the shader to render an object:
technique Ambient
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
A technique has a name (in this case Ambient) which can be called directly from XNA. A technique can also have multiple passes, but in this simple case we just need one. Within a pass it is defined exactly which function of our shader file is the vertex shader and which is the pixel shader. We do not use a geometry shader here, because in contrast to the vertex and pixel shader it is optional. Furthermore, the shader version to use is specified, because the shader models are continually developed and new features are added. Possible versions are: 1.0 to 1.3, 1.4, 2.0, 2.0a, 2.0b, 3.0 and 4.0.
For the simple ambient lighting we just need version 1.1, but for reflections and other more advanced effects pixel shader version 2.0 is needed.
The complete shader code:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.5f, 0.5f, 0.5f, 1.0f);
struct VertexShaderInput
{
float4 Position : POSITION0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return AmbienceColor;
}
technique Ambient
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
Now that the shader file is complete and saved, we just need to get our XNA application to use it for rendering objects.
First, a new global variable of type Effect has to be defined. Each Effect object references a shader inside an fx-file.
Effect myEffect;
In the method that is used to load the content from the content folder (like models, textures and so on) the shader file needs to be loaded as well (in this case it is the file Ambient.fx in the folder Shaders):
myEffect = Content.Load<Effect>("Shaders/Ambient");
Now the Effect is ready to use. To draw a model with our own shader we need to implement a method for that purpose:
private void DrawModelWithEffect(Model model, Matrix world, Matrix view, Matrix projection)
{
foreach (ModelMesh mesh in model.Meshes)
{
foreach (ModelMeshPart part in mesh.MeshParts)
{
part.Effect = myEffect;
myEffect.Parameters["WorldMatrix"].SetValue(world * mesh.ParentBone.Transform);
myEffect.Parameters["ViewMatrix"].SetValue(view);
myEffect.Parameters["ProjectionMatrix"].SetValue(projection);
}
mesh.Draw();
}
}
The method takes the model and the three matrices that are used to describe a scene as parameters.
It loops through the meshes in the model and then through the mesh parts of each mesh. For each part it assigns our new myEffect object to the part's property, which is called Effect as well.
But before the shader is ready to use, we need to supply it with the required parameters. By using the Parameters collection of the myEffect-object we can access the variables that were defined earlier in the Shader file and give them a value. We assign the three main matrices to the equivalent variable in the shader by using the SetValue() method.
After that the mesh is ready to be drawn with the Draw() method of the ModelMesh class.
So the new method DrawModelWithEffect() can now be called for every model of type Model to draw it on the screen using our custom shader! The result can be seen in the picture. As you can see, every pixel of the model has the same color because we have not used any lighting, textures or effects yet.
It is also possible to change fixed variables of the shader directly in XNA by using the Parameters collection and the SetValue() method. For example to change the ambient color in the shader in the XNA application the following statement is needed:
myEffect.Parameters["AmbienceColor"].SetValue(Color.White.ToVector4());
Diffuse shading
Diffuse shading renders an object in the light that comes from a light emitter and reflects off the object's surface in all directions (it diffuses). It is what gives most objects their shading, so that they have brightly lit parts and darker parts, creating a three-dimensional effect that was lost in the simple ambient shader. Now we will modify the previous ambient shader to support diffuse shading as well. There are two ways to implement diffuse shading: one uses the vertex shader, the other uses the pixel shader. We will look at the vertex shader variant.
We need to add three new variables to the previous ambient shader file:
float4x4 WorldInverseTransposeMatrix;
float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f);
float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
The variable WorldInverseTransposeMatrix is another matrix needed for the calculation: it is the transpose of the inverse of the world matrix. With ambient lighting only, we did not have to care about the normal vectors of the vertices, but with diffuse lighting this matrix becomes necessary to correctly transform the normals of the vertices for the lighting calculations.
The other two variables define the direction the diffuse light comes from (the values are X, Y and Z in 3D space) and the color of the diffuse light that bounces off the surface of the rendered objects. In this case we simply use white light that shines along the x-axis of the virtual space.
The structures for VertexShaderInput and VertexShaderOutput need some small modification as well. We have to add the following variable to the struct VertexShaderInput to get the normal vector of the current vertex in the vertex shader input:
float4 NormalVector : NORMAL0;
And we add a variable for the color to the struct VertexShaderOutput, because we will calculate the diffuse shading in the vertex shader, which will result in a color that needs to be passed to the pixel shader:
float4 VertexColor : COLOR0;
To do the diffuse lighting in the vertex shader we have to add some code to the VertexShaderFunction:
float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix));
float lightIntensity = dot(normal, DiffuseLightDirection);
output.VertexColor = saturate(DiffuseColor * lightIntensity);
With this code we transform the normal of the vertex so that it is relative to where the object is in the world (first new line). In the second line the HLSL function dot() calculates the dot product of the surface normal and the light direction, which measures how directly the light hits the surface; this value is used as the intensity of the light on the surface. Finally, the color of the current vertex is calculated by multiplying the diffuse color with the intensity. This color is stored in the VertexColor property of the VertexShaderOutput struct, which is later passed to the pixel shader.
At last we have to change the value that is returned by PixelShaderFunction:
return saturate(input.VertexColor + AmbienceColor);
It simply takes the color we already calculated in the vertex shader and adds the ambient component to it. The HLSL function saturate() makes sure that the color stays within the range of 0 to 1.
You might want to make the AmbienceColor component a bit darker so its influence on the final color is not as big. This can also be done by defining an intensity variable that regulates the intensity of a color. But we will keep things short and simple for now and discuss that later.
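For example, with a hypothetical AmbientIntensity variable (it is not part of the shader listings in this chapter) the pixel shader could weight the ambient component like this:

float AmbientIntensity = 0.3f;
...
return saturate(input.VertexColor + AmbienceColor * AmbientIntensity);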
The complete shader code:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.2f, 0.2f, 0.2f, 1.0f);
// For Diffuse Lighting
float4x4 WorldInverseTransposeMatrix;
float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f);
float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
struct VertexShaderInput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 NormalVector : NORMAL0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 VertexColor : COLOR0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
// For Diffuse Lighting
float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix));
float lightIntensity = dot(normal, DiffuseLightDirection);
output.VertexColor = saturate(DiffuseColor * lightIntensity);
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return saturate(input.VertexColor + AmbienceColor);
}
technique Diffuse
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
That is it for the shader file. To use the new shader in XNA we have to make one addition to the XNA application that uses the shader to render objects:
We have to set the shader's WorldInverseTransposeMatrix variable from XNA. So right in the DrawModelWithEffect method, where the other parameters of the myEffect object are set with SetValue(), we also set the WorldInverseTransposeMatrix. Before setting it, it needs to be calculated: we invert and then transpose the world matrix of our application (which is first multiplied with the object's transformation, so everything is in the right place).
Matrix worldInverseTransposeMatrix = Matrix.Transpose(Matrix.Invert(mesh.ParentBone.Transform * world));
myEffect.Parameters["WorldInverseTransposeMatrix"].SetValue(worldInverseTransposeMatrix);
That is all that needs to be changed in the XNA code. Now you should have nice diffuse lighting; you can see the result in the pictures. Remember that this shader already uses diffuse and ambient lighting, which is why the dark parts of the model are gray and not black.
If we modify the pixel shader to just return the vertex color without adding the ambient light, the scene looks different (second picture):
return saturate(input.VertexColor);
The dark parts of the model where there is no light are now completely black because they no longer have an ambient component added to them.
Texture Shader
Applying and rendering textures on an object based on texture coordinates is also done with shaders. To adapt the previous diffuse shader to work with textures we have to add the following variables:
texture ModelTexture;
sampler2D TextureSampler = sampler_state {
Texture = (ModelTexture);
MagFilter = Linear;
MinFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
ModelTexture is of the HLSL data type texture and stores the texture that should be rendered on the model. Another variable of type sampler2D is associated with the texture. A sampler tells the graphics card how to extract the color for one pixel from the texture file. The sampler contains five properties:
- Texture: Which texture file to use.
- MagFilter + MinFilter: Which filter should be used to scale the texture. Some filters are faster, others look better. Possible values are: Linear, None, Point, Anisotropic.
- AddressU + AddressV: Determine what to do when the U or V coordinate is not in the normal range (between 0 and 1). Possible values: Clamp, Border Color, Wrap, Mirror.
We use the Linear filter, which is fast, and Clamp, which simply uses the value 0 if the U/V value is less than 0 and the value 1 if the U/V value is greater than 1.
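To illustrate the difference, a second sampler using Wrap (purely hypothetical; it is not used anywhere in this chapter) would repeat the texture instead of clamping, so a U coordinate of 1.3 would sample the texture at 0.3:

sampler2D WrapSampler = sampler_state {
Texture = (ModelTexture);
MagFilter = Linear;
MinFilter = Linear;
AddressU = Wrap;
AddressV = Wrap;
};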
Next we add texture coordinates to the output and input structs of the vertex shader so this kind of information can be collected by the vertex shader and forwarded to the pixel shader.
Add to struct VertexShaderInput:
float2 TextureCoordinate : TEXCOORD0;
And add to struct VertexShaderOutput:
float2 TextureCoordinate : TEXCOORD0;
Both are of the type float2 (a two-dimensional vector) because we just need to store two components: U and V. Both variables also have the semantic type TEXCOORD0.
The color of the texture is applied to the object in the pixel shader, not in the vertex shader. So in the VertexShaderFunction we just take the texture coordinate from the input and copy it to the output:
output.TextureCoordinate = input.TextureCoordinate;
In the PixelShaderFunction we then do the following:
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate);
VertexTextureColor.a = 1;
return saturate(VertexTextureColor * input.VertexColor + AmbienceColor);
}
The function now calculates the color of the pixel based on the texture. Additionally the alpha value for the color is set separately in the second line, because the TextureSampler does not get the alpha value from the texture.
Finally in the return statement the texture color of the vertex is multiplied by the diffuse color (which adds diffuse shading to the texture color) and the ambient color is added as usual.
We also need to make a change in the technique this time. The new PixelShaderFunction is now too sophisticated for pixel shader version 1.1, so the version needs to be set to 2.0:
PixelShader = compile ps_2_0 PixelShaderFunction();
The complete shader code for the texture shader:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.1f, 0.1f, 0.1f, 1.0f);
// For Diffuse Lighting
float4x4 WorldInverseTransposeMatrix;
float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f);
float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
// For Texture
texture ModelTexture;
sampler2D TextureSampler = sampler_state {
Texture = (ModelTexture);
MagFilter = Linear;
MinFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
struct VertexShaderInput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 NormalVector : NORMAL0;
// For Texture
float2 TextureCoordinate : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 VertexColor : COLOR0;
// For Texture
float2 TextureCoordinate : TEXCOORD0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
// For Diffuse Lighting
float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix));
float lightIntensity = dot(normal, DiffuseLightDirection);
output.VertexColor = saturate(DiffuseColor * lightIntensity);
// For Texture
output.TextureCoordinate = input.TextureCoordinate;
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
// For Texture
float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate);
VertexTextureColor.a = 1;
return saturate(VertexTextureColor * input.VertexColor + AmbienceColor);
}
technique Texture
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
Changes in XNA:
In the XNA Code we have to add a new texture by declaring a Texture2D object:
Texture2D planeTexture;
Load the texture by loading a previously added image from the content project (in this case a file called "planetextur.png" located in the "Images" folder of the content node of the Solution Explorer):
planeTexture = Content.Load<Texture2D>("Images/planetextur");
And finally assign the new texture to the shader variable ModelTexture in our usual draw method:
myEffect.Parameters["ModelTexture"].SetValue(planeTexture);
The object should then have a texture, diffuse shading and ambient shading as you can see in the sample image.
Advanced Shading with Specular Lighting and Reflections
[edit | edit source]Now let's create a new, more sophisticated effect that looks convincingly real and can be used to simulate shiny surfaces like metal. We will combine a texture shader with a specular shader and a reflection shader. The reflection shader will reflect a predefined environment.
The specular lighting adds shiny spots on the surface of a model to simulate smoothness. They have the color of the light that is shining on the surface.
What distinguishes specular lighting from the shaders we have used before is that it is influenced not only by the direction the light comes from, but also by the direction from which the viewer is looking at the object. So as the camera moves through the scene, the specular highlights move around on the surface.
The same goes for the reflection shader: the reflection on an object's surface changes with the position of the viewer.
Calculating reflections as in the real world would mean tracing individual rays of light bouncing off surfaces (a technique called ray tracing). This requires far too much computing power, which is why real-time graphics frameworks like XNA use a simpler approach. The technique, called environment mapping, maps an image of the environment onto the object's surface. This environment map is shifted as the viewer's position changes, creating the illusion of a reflection. The approach has some limitations: the object only reflects a predefined environment image, not the actual scene, so the player and all other moving models will not be reflected. In practice, though, these limitations are hardly noticeable in a real-time application.
The environment map could be the same as the skybox of the scene. More about skyboxes in another article: Game Creation with XNA/3D Development/Skybox. If the environment map is the same as the skybox, it will fit the scene and look accurate; however, you can use whatever environment map looks good on the model in the scene.
The basis for the following changes is the previously developed texture shader. For specular lighting the following variables need to be added:
float ShininessFactor = 10.0f;
float4 SpecularColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
float3 ViewVector = float3(1.0f, 0.0f, 0.0f);
The ShininessFactor defines how shiny the surface is. A low value produces broad surface highlights and should be used for less shiny surfaces. A high value produces small but very intense highlights, as on shinier surfaces like metal. A mirror would in theory have an infinite value.
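To get a feel for what the exponent does, here is a small Python sketch (illustrative only, not shader code) of the specular falloff formula the pixel shader computes later, max(cos(angle), 0) raised to the shininess power, where the angle is measured between the reflection direction and the view direction:

```python
import math

# Illustrative sketch (not shader code) of the specular falloff:
# intensity = max(cos(angle), 0) ** shininess
def highlight(angle_deg, shininess):
    c = max(math.cos(math.radians(angle_deg)), 0.0)
    return c ** shininess

# 30 degrees away from the perfect reflection direction:
# shininess 1 leaves a broad highlight, shininess 100 almost extinguishes it,
# so a higher exponent concentrates the highlight into a small, intense spot.
broad = highlight(30, 1)
tight = highlight(30, 100)
```

This is why a low ShininessFactor reads as a dull surface and a high one as polished metal.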
The SpecularColor specifies the color of the specular light. In this case we use white light.
The ViewVector is a variable that will be calculated and set by the XNA application at run time. It tells the shader the direction from which the viewer is looking.
For the reflection shader we need to add the environment texture and a sampler as variables:
Texture EnvironmentTexture;
samplerCUBE EnvironmentSampler = sampler_state
{
texture = <EnvironmentTexture>;
magfilter = LINEAR;
minfilter = LINEAR;
mipfilter = LINEAR;
AddressU = Mirror;
AddressV = Mirror;
};
The EnvironmentTexture is the environment image that will be mapped as a reflection onto our object. This time a cube sampler is used, which differs slightly from the previously used 2D sampler: it assumes that the supplied texture was created to be mapped onto a cube.
No changes need to be made to the VertexShaderInput struct, but two new variables need to be added to the VertexShaderOutput struct:
float3 NormalVector : TEXCOORD1;
float3 ReflectionVector : TEXCOORD2;
NormalVector is simply the normal vector of a single vertex and comes directly from the input. The ReflectionVector is calculated in the vertex shader and used in the pixel shader to pick the right part of the environment map for the surface. Both are of the semantic type TEXCOORD. There is already one variable with the semantic TEXCOORD0 (TextureCoordinate), so we continue counting with TEXCOORD1 and TEXCOORD2.
In the VertexShaderFunction we have to add the following commands:
// For Specular Lighting
output.NormalVector = normal;
// For Reflection
float4 VertexPosition = mul(input.Position, WorldMatrix);
float3 ViewDirection = ViewVector - VertexPosition;
output.ReflectionVector = reflect(-normalize(ViewDirection), normalize(normal));
First, the previously calculated normal vector of the current vertex is written to the output, because it is needed later for the specular shading in the pixel shader.
For the reflection, the vertex position in world space is calculated along with the direction from which the viewer looks at the vertex. Then the reflection vector is computed with the HLSL function reflect(), using the normalized ViewDirection and normal vectors.
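The HLSL intrinsic reflect(i, n) computes i - 2 * dot(n, i) * n, with i the incident direction and n the surface normal (both expected to be normalized); the shader passes -normalize(ViewDirection) as the incident vector. A purely illustrative Python version of the same formula:

```python
# Illustrative Python version (not shader code) of the HLSL intrinsic
# reflect(i, n) = i - 2 * dot(n, i) * n, where i is the incident direction
# and n the surface normal, both normalized.
def reflect(i, n):
    d = sum(a * b for a, b in zip(n, i))        # dot(n, i)
    return tuple(a - 2 * d * b for a, b in zip(i, n))

# A ray coming in at 45 degrees onto a floor with normal (0, 1, 0)
# bounces off at 45 degrees on the other side:
r = reflect((0.7071, -0.7071, 0.0), (0.0, 1.0, 0.0))   # -> (0.7071, 0.7071, 0.0)
```

The pixel shader then uses this vector to look up the matching texel in the cube map, which is what makes the environment appear mirrored on the surface.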
To the PixelShaderFunction we add the following calculations for the specular value:
float3 light = normalize(DiffuseLightDirection);
float3 normal = normalize(input.NormalVector);
float3 r = normalize(2 * dot(light, normal) * normal - light);
float3 v = normalize(mul(normalize(ViewVector), WorldMatrix));
float dotProduct = dot(r, v);
float4 specular = SpecularColor * max(pow(dotProduct, ShininessFactor), 0) * length(input.VertexColor);
So to calculate the specular highlight, we need the diffuse light direction, the normal, the view vector and the shininess. The result is another color vector containing the specular component.
This specular component is added along with the reflection to the return statement at the end of the PixelShaderFunction:
return saturate(VertexTextureColor * texCUBE(EnvironmentSampler, normalize(input.ReflectionVector)) + specular * 2);
In this case we dropped the diffuse and ambient components because they are not necessary for this demonstration; here the result even looks better without them. Without the diffuse lighting component, the light appears to come from everywhere and reflect off shiny metal.
So in the return statement the texture color is used along with the reflection and the specular highlight (multiplied by 2 to make it more intense).
The finished shader code:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.1f, 0.1f, 0.1f, 1.0f);
// For Diffuse Lighting
float4x4 WorldInverseTransposeMatrix;
float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f);
float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
// For Texture
texture ModelTexture;
sampler2D TextureSampler = sampler_state {
Texture = (ModelTexture);
MagFilter = Linear;
MinFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
// For Specular Lighting
float ShininessFactor = 10.0f;
float4 SpecularColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
float3 ViewVector = float3(1.0f, 0.0f, 0.0f);
// For Reflection
Texture EnvironmentTexture;
samplerCUBE EnvironmentSampler = sampler_state
{
texture = <EnvironmentTexture>;
magfilter = LINEAR;
minfilter = LINEAR;
mipfilter = LINEAR;
AddressU = Mirror;
AddressV = Mirror;
};
struct VertexShaderInput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 NormalVector : NORMAL0;
// For Texture
float2 TextureCoordinate : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 VertexColor : COLOR0;
// For Texture
float2 TextureCoordinate : TEXCOORD0;
// For Specular Shading
float3 NormalVector : TEXCOORD1;
// For Reflection
float3 ReflectionVector : TEXCOORD2;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
// For Diffuse Lighting
float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix));
float lightIntensity = dot(normal, DiffuseLightDirection);
output.VertexColor = saturate(DiffuseColor * lightIntensity);
// For Texture
output.TextureCoordinate = input.TextureCoordinate;
// For Specular Lighting
output.NormalVector = normal;
// For Reflection
float4 VertexPosition = mul(input.Position, WorldMatrix);
float3 ViewDirection = ViewVector - VertexPosition;
output.ReflectionVector = reflect(-normalize(ViewDirection), normalize(normal));
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
// For Texture
float4 VertexTextureColor = tex2D(TextureSampler, input.TextureCoordinate);
VertexTextureColor.a = 1;
// For Specular Lighting
float3 light = normalize(DiffuseLightDirection);
float3 normal = normalize(input.NormalVector);
float3 r = normalize(2 * dot(light, normal) * normal - light);
float3 v = normalize(mul(normalize(ViewVector), WorldMatrix));
float dotProduct = dot(r, v);
float4 specular = SpecularColor * max(pow(dotProduct, ShininessFactor), 0) * length(input.VertexColor);
return saturate(VertexTextureColor * texCUBE(EnvironmentSampler, normalize(input.ReflectionVector)) + specular * 2);
}
technique Reflection
{
pass Pass1
{
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
To use the new shader in XNA we need to set two additional shader variables from XNA in the draw method:
myEffect.Parameters["ViewVector"].SetValue(viewDirectionVector);
myEffect.Parameters["EnvironmentTexture"].SetValue(environmentTexture);
But first the object environmentTexture has to be declared and loaded (as usual):
TextureCube environmentTexture;
environmentTexture = Content.Load<TextureCube>("Images/Skybox");
In contrast to the model texture, this texture is not of type Texture2D but of type TextureCube, because in our case we use a skybox texture as the environment map. A skybox texture consists not of one image like a regular texture, but of six different images that are mapped onto the sides of a cube. The images have to fit together at the right angles and be seamless. You can find some skybox textures here: RB Whitaker Skybox Textures
Secondly, the viewDirectionVector we use to set the ViewVector variable in the reflection shader should be declared as a field of the class:
Vector3 viewDirectionVector = new Vector3(0, 0, 0);
It can be calculated this way:
viewDirectionVector = cameraPositionVector - cameraTargetVector;
Here cameraPositionVector is a 3D vector containing the current position of the camera, and cameraTargetVector is another vector with the coordinates of the camera target. If, for example, the camera is simply looking at the point (0, 0, 0) in virtual space, the calculation becomes even shorter:
viewDirectionVector = cameraPositionVector;
//or
viewDirectionVector = new Vector3(eyePositionX, eyePositionY, eyePositionZ);
With all these changes in the XNA game the reflection should look like in the picture. But the appearance largely depends on the environment map used.
Additional Parameters
[edit | edit source]Another good idea is to introduce parameters for the intensity of a shader component. For example, instead of simply adding the ambient color in the return statement of the pixel shader function of the diffuse shader above:
return saturate(input.VertexColor + AmbienceColor);
One could return:
return saturate(input.VertexColor + AmbienceColor * AmbienceIntensity);
Here AmbienceIntensity is a float between 0.0 and 1.0. This way the intensity of the color can be easily adjusted. The same can be done with every component we have calculated so far (ambient, diffuse, texture color, specular intensity, reflection).
Postprocessing with shaders
[edit | edit source]Until now we have worked with 3D shaders, but 2D shaders are also possible. A 2D image can be modified and processed with image editing software such as Photoshop to adjust its contrast and colors and to apply filters. The same can be achieved with 2D shaders that are applied to the entire output image resulting from rendering the scene.
Examples of the kinds of effects that can be achieved:
- Simple color modifications like making the scene black and white, inverting the color channels, giving the scene a sepia look, and so on.
- Adapting the colors to create a warm or cold mood in the scene.
- Blurring the screen with a blur filter to create special effects.
- Bloom effect: a popular effect that produces fringes of light around very bright objects in an image, simulating an effect known from photography.
So to start, we create a new shader file in Visual Studio (call it Postprocessing.fx) and insert the following post-processing code:
texture ScreenTexture;
sampler TextureSampler = sampler_state
{
Texture = <ScreenTexture>;
};
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
float4 pixelColor = tex2D(TextureSampler, TextureCoordinate);
pixelColor.g = 0;
pixelColor.b = 0;
return pixelColor;
}
technique Grayscale
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
As you can see, for post-processing we only need a pixel shader. Post-processing works by supplying the rendered image of the scene as a texture, which the pixel shader then takes as input, processes and returns.
The function has only one input parameter (the texture coordinate) and returns a color vector with the semantic COLOR0. In this example we just read the color of the pixel at the current texture coordinate (which is the screen coordinate) and set the green and blue channels to 0, so that only the red channel is left. Then we return the color value.
Using this 2D shader in XNA is a bit more tricky. First we need the following objects in the Game class:
GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;
RenderTarget2D renderTarget;
Effect postProcessingEffect;
It is very likely that the GraphicsDeviceManager and SpriteBatch objects are already created in an existing project. The RenderTarget2D and Effect objects, however, still have to be declared.
Check that the GraphicsDeviceManager object is initialized in the constructor:
graphics = new GraphicsDeviceManager(this);
And the SpriteBatch object is initialized in the LoadContent() method. The new shader file we just created should be loaded in this method as well:
spriteBatch = new SpriteBatch(GraphicsDevice);
postProcessingEffect = Content.Load<Effect>("Shaders/Postprocessing");
Finally make sure that the RenderTarget2D object is initialized in the method Initialize():
renderTarget = new RenderTarget2D(
GraphicsDevice,
GraphicsDevice.PresentationParameters.BackBufferWidth,
GraphicsDevice.PresentationParameters.BackBufferHeight,
1,
GraphicsDevice.PresentationParameters.BackBufferFormat
);
Now we need a method that draws the current scene to a texture (in the form of a render target) instead of to the screen:
protected Texture2D DrawSceneToTexture(RenderTarget2D currentRenderTarget) {
// Set the render target
GraphicsDevice.SetRenderTarget(0, currentRenderTarget);
// Draw the scene
GraphicsDevice.Clear(Color.Black);
drawModelWithTexture(model, world, view, projection);
// Drop the render target
GraphicsDevice.SetRenderTarget(0, null);
// Return the texture in the render target
return currentRenderTarget.GetTexture();
}
Inside this method we use the draw function that applies our 3D shader (in this case drawModelWithTexture()). So we still use all the 3D shaders to render the scene first, but instead of displaying the result directly, we render it to a texture and do some post-processing on it in the Draw() method. After that, the processed texture is displayed on the screen. So extend the Draw() method like this:
protected override void Draw(GameTime gameTime)
{
Texture2D texture = DrawSceneToTexture(renderTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState);
postProcessingEffect.Begin();
postProcessingEffect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(texture, new Rectangle(0, 0, 1024, 768), Color.White);
postProcessingEffect.CurrentTechnique.Passes[0].End();
postProcessingEffect.End();
spriteBatch.End();
base.Draw(gameTime);
}
First the normal scene is rendered to a texture named texture. Then a sprite batch is started along with postProcessingEffect, which contains our new post-processing shader. The texture is then rendered by the sprite batch with the post-processing effect applied to it.
The effect should look like in the picture.
Another simple effect that can be achieved with a post-processing shader is converting the color image to grayscale and then reducing it to 4 colors, which creates a cartoon-like effect. To achieve this, the PixelShaderFunction inside our shader file should look like this:
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
float4 pixelColor = tex2D(TextureSampler, TextureCoordinate);
float average = (pixelColor.r + pixelColor.g + pixelColor.b) / 3;
if (average > 0.95){
average = 1.0;
} else if (average > 0.5){
average = 0.7;
} else if (average > 0.2){
average = 0.35;
} else{
average = 0.1;
}
pixelColor.r = average;
pixelColor.g = average;
pixelColor.b = average;
return pixelColor;
}
A grayscale image is generated by calculating the average of the red, green and blue channels and using this single value for all three channels. The average is then additionally reduced to one of 4 discrete values. Finally, the red, green and blue channels of the output are all set to this reduced value; the image is grayscale precisely because all three channels carry the same value.
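The quantization is easy to check outside the shader. A small Python sketch of the same logic (purely illustrative, using the same thresholds and output levels as the HLSL function above):

```python
# Illustrative Python sketch (not shader code) of the grayscale posterization,
# with the same thresholds (0.95, 0.5, 0.2) and levels (1.0, 0.7, 0.35, 0.1)
# as the HLSL pixel shader above.
def posterize(r, g, b):
    average = (r + g + b) / 3
    if average > 0.95:
        average = 1.0
    elif average > 0.5:
        average = 0.7
    elif average > 0.2:
        average = 0.35
    else:
        average = 0.1
    return (average, average, average)

# A pure red pixel averages to about 0.33, so it lands in the 0.35 band:
banded = posterize(1.0, 0.0, 0.0)
```

Every pixel of the scene ends up in one of only four gray bands, which is what gives the output its cartoon-like look.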
Creating a Transparency Shader
[edit | edit source]Creating a transparency shader is easy. We can start with the diffuse shader example from above. First we need a variable called alpha that determines the transparency. The value should be between 1 for opaque and 0 for completely transparent. To implement the transparency shader we just need a small modification in the PixelShaderFunction: after all lighting calculations have been done, we assign the alpha value to the resulting color.
float alpha = 0.5f;
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float4 color = saturate(input.VertexColor + AmbienceColor);
color.a = alpha;
return color;
}
To enable alpha blending, we must add some render states to the technique:
technique Transparency {
pass p0 {
AlphaBlendEnable = TRUE;
DestBlend = INVSRCALPHA;
SrcBlend = SRCALPHA;
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
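These render states tell the GPU how to combine the shader output with the pixel already in the frame buffer. With SrcBlend = SRCALPHA and DestBlend = INVSRCALPHA, the blend per channel is: result = source * alpha + destination * (1 - alpha). A purely illustrative Python sketch of this formula (not XNA code):

```python
# Illustrative sketch (not XNA code) of the alpha-blend equation selected by
# SrcBlend = SRCALPHA, DestBlend = INVSRCALPHA:
#   result = source * sourceAlpha + destination * (1 - sourceAlpha)
def alpha_blend(src, dst, src_alpha):
    return tuple(s * src_alpha + d * (1 - src_alpha) for s, d in zip(src, dst))

# A white pixel drawn with alpha 0.5 over a black background turns mid-gray:
mixed = alpha_blend((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.5)
```

With alpha = 1 the new pixel fully replaces the old one; with alpha = 0 the background shows through untouched, which is exactly the behavior we want from the transparency shader.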
The complete transparency shader:
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjectionMatrix;
float4 AmbienceColor = float4(0.2f, 0.2f, 0.2f, 1.0f);
// For Diffuse Lighting
float4x4 WorldInverseTransposeMatrix;
float3 DiffuseLightDirection = float3(-1.0f, 0.0f, 0.0f);
float4 DiffuseColor = float4(1.0f, 1.0f, 1.0f, 1.0f);
// For Transparency
float alpha = 0.5f;
struct VertexShaderInput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 NormalVector : NORMAL0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
// For Diffuse Lighting
float4 VertexColor : COLOR0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, WorldMatrix);
float4 viewPosition = mul(worldPosition, ViewMatrix);
output.Position = mul(viewPosition, ProjectionMatrix);
// For Diffuse Lighting
float4 normal = normalize(mul(input.NormalVector, WorldInverseTransposeMatrix));
float lightIntensity = dot(normal, DiffuseLightDirection);
output.VertexColor = saturate(DiffuseColor * lightIntensity);
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float4 color = saturate(input.VertexColor + AmbienceColor);
color.a = alpha;
return color;
}
technique Transparency
{
pass Pass1
{
AlphaBlendEnable = TRUE;
DestBlend = INVSRCALPHA;
SrcBlend = SRCALPHA;
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
Other kinds of shaders
[edit | edit source]A few other popular shaders, each with a short description.
- Bump mapping applied on the even surface of a sphere
- Adding details back to a low-poly model by using normal mapping
- Toon shader
Bump Map Shader
[edit | edit source]Bump mapping is used to simulate bumps on otherwise flat polygon surfaces, to make a surface look more realistic and give it some structure in addition to the texture. Bump mapping works by loading another texture that contains the bump information and perturbing the surface normals with it: the original normal of a surface is changed by an offset value that comes from the bump map. Bump maps are grayscale images.
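One common way to derive the normal offsets from a grayscale bump map is a finite-difference gradient of the gray values. A small Python sketch of this idea (illustrative only; real implementations differ in scale factors, filtering and tangent-space handling):

```python
# Illustrative sketch (not shader code) of bump mapping: the gray values of
# the bump map are turned into an offset for the surface normal using a
# finite-difference gradient, then the perturbed normal is renormalized.
def bumped_normal(bump_map, x, y, strength=1.0):
    # bump_map is a 2D list of gray values in [0, 1]
    dx = (bump_map[y][x + 1] - bump_map[y][x - 1]) * strength
    dy = (bump_map[y + 1][x] - bump_map[y - 1][x]) * strength
    # perturb the flat surface normal (0, 0, 1) and renormalize
    nx, ny, nz = -dx, -dy, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

On a uniformly gray bump map the gradient is zero and the normal stays (0, 0, 1); wherever the gray value changes, the normal tilts away from the brighter side, which is what makes lighting pick up the fake bumps.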
Normal Map Shader
[edit | edit source]Bump mapping has nowadays largely been replaced by normal mapping. Normal mapping is also used to create bumpiness and structure on otherwise flat polygon surfaces, but it handles drastic variations in normals better than bump mapping.
Normal mapping follows a similar idea to bump mapping: another texture is loaded and used to change the normals. But instead of just offsetting the normals, a normal map uses a multichannel (RGB) map to completely replace the existing normals: the R, G and B values of each pixel in the normal map correspond to the X, Y and Z coordinates of the normal vector.
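Since each color channel is stored in [0, 1] but a normal coordinate lies in [-1, 1], the texel has to be decoded with a linear remap. A purely illustrative Python sketch of this decoding step:

```python
# Illustrative sketch (not shader code): decoding a normal-map texel.
# Each channel is stored in [0, 1] and maps linearly to a coordinate in [-1, 1].
def decode_normal(r, g, b):
    return (2 * r - 1, 2 * g - 1, 2 * b - 1)

# The typical bluish color of normal maps, RGB (0.5, 0.5, 1.0),
# decodes to the "straight up" normal (0, 0, 1):
up = decode_normal(0.5, 0.5, 1.0)
```

This remap is also why normal maps look predominantly light blue: a mostly flat surface stores normals close to (0, 0, 1), which encodes to (0.5, 0.5, 1.0).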
The further development of normal mapping is called parallax mapping.
Cel Shader (Toon Shader)
[edit | edit source]A cel shader is used to render a 3D scene in a cartoon-like look, so that it appears to be drawn by hand. Cel shading can be implemented in XNA with a multi-pass shader that builds the result image in several passes.
Toon Shader Example
[edit | edit source]To create a toon shader we can start from the diffuse shader. The basic idea behind a toon shader is that the light intensity is divided into several levels; in this example we use 5 levels. The array ToonThresholds defines the boundaries between the levels, and the array ToonBrightnessLevels holds the brightness value for each level.
float ToonThresholds[4] = { 0.95, 0.5, 0.2, 0.03 };
float ToonBrightnessLevels[5] = { 1.0, 0.8, 0.6, 0.35, 0.01 };
Now, in the pixel shader, we implement the classification of the light intensity and assign the appropriate brightness level:
float4 std_PS(VertexShaderOutput input) : COLOR0 {
float lightIntensity = dot(normalize(DiffuseLightDirection),
input.normal);
if(lightIntensity < 0)
lightIntensity = 0;
float4 color = tex2D(colorSampler, input.uv) *
DiffuseLightColor * DiffuseIntensity;
color.a = 1;
if (lightIntensity > ToonThresholds[0])
color *= ToonBrightnessLevels[0];
else if ( lightIntensity > ToonThresholds[1])
color *= ToonBrightnessLevels[1];
else if ( lightIntensity > ToonThresholds[2])
color *= ToonBrightnessLevels[2];
else if ( lightIntensity > ToonThresholds[3])
color *= ToonBrightnessLevels[3];
else
color *= ToonBrightnessLevels[4];
return color;
}
The complete toon shader
float4x4 World : World < string UIWidget="None"; >;
float4x4 View : View < string UIWidget="None"; >;
float4x4 Projection : Projection < string UIWidget="None"; >;
texture colorTexture : DIFFUSE <
string UIName = "Diffuse Texture";
string ResourceType = "2D";
>;
float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseLightColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;
float ToonThresholds[4] = { 0.95, 0.5, 0.2, 0.03 };
float ToonBrightnessLevels[5] = { 1.0, 0.8, 0.6, 0.35, 0.01 };
sampler2D colorSampler = sampler_state {
Texture = <colorTexture>;
FILTER = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VertexShaderInput {
float4 position : POSITION0;
float3 normal :NORMAL0;
float2 uv : TEXCOORD0;
};
struct VertexShaderOutput {
float4 position : POSITION0;
float3 normal : TEXCOORD1;
float2 uv : TEXCOORD0;
};
VertexShaderOutput std_VS(VertexShaderInput input) {
VertexShaderOutput output;
float4 worldPosition = mul(input.position, World);
float4 viewPosition = mul(worldPosition, View);
output.position = mul(viewPosition, Projection);
output.normal = normalize(mul(input.normal, World));
output.uv = input.uv;
return output;
}
float4 std_PS(VertexShaderOutput input) : COLOR0 {
float lightIntensity = dot(normalize(DiffuseLightDirection),
input.normal);
if(lightIntensity < 0)
lightIntensity = 0;
float4 color = tex2D(colorSampler, input.uv) *
DiffuseLightColor * DiffuseIntensity;
color.a = 1;
if (lightIntensity > ToonThresholds[0])
color *= ToonBrightnessLevels[0];
else if ( lightIntensity > ToonThresholds[1])
color *= ToonBrightnessLevels[1];
else if ( lightIntensity > ToonThresholds[2])
color *= ToonBrightnessLevels[2];
else if ( lightIntensity > ToonThresholds[3])
color *= ToonBrightnessLevels[3];
else
color *= ToonBrightnessLevels[4];
return color;
}
technique Toon {
pass p0 {
VertexShader = compile vs_2_0 std_VS();
PixelShader = compile ps_2_0 std_PS();
}
}
Using FXComposer to create shaders for XNA
[edit | edit source]FX Composer is an integrated development environment for shader authoring. Using FX Composer to create our own shaders is very helpful: we can see the result immediately, which makes it very efficient to experiment with shaders.
Using the FX Composer shader library in XNA
[edit | edit source]
In this example I use FX Composer version 2.5. Using the FX Composer library in your own XNA project is a very easy task. Let's just start with an example.
Open FX Composer and create a new project. In the Materials panel, right-click and choose "Add Material From File", then choose metal.fx.
All you need to do is copy all the code from metal.fx, create a new effect in your XNA project, and replace its content with the code from metal.fx. Alternatively, you can copy the metal.fx file directly into your XNA project.
From there, all we need are some modifications in the XNA class based on the variables in metal.fx.
In metal.fx you can see this code:
// transform object vertices to world-space:
float4x4 gWorldXf : World < string UIWidget="None"; >;
// transform object normals, tangents, & binormals to world-space:
float4x4 gWorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
// transform object vertices to view space and project them in perspective:
float4x4 gWvpXf : WorldViewProjection < string UIWidget="None"; >;
// provide transform from "view" or "eye" coords back to world-space:
float4x4 gViewIXf : ViewInverse < string UIWidget="None"; >;
In our XNA class we must use these parameter names when setting the effect parameters:
Matrix InverseWorldMatrix = Matrix.Invert(world);
Matrix ViewInverse = Matrix.Invert(view);
effect.Parameters["gWorldXf"].SetValue(world);
effect.Parameters["gWorldITXf"].SetValue(InverseWorldMatrix);
effect.Parameters["gWvpXf"].SetValue(world*view*proj);
effect.Parameters["gViewIXf"].SetValue(ViewInverse);
We must also set the technique name in the XNA class. Because XNA uses DirectX 9, we choose the technique "Simple":
effect.CurrentTechnique = effect.Techniques["Simple"];
Now you can run the code with metal effect.
The complete function:
private void DrawWithMetalEffect(Model model, Matrix world, Matrix view, Matrix proj){
Matrix InverseWorldMatrix = Matrix.Invert(world);
Matrix ViewInverse = Matrix.Invert(view);
effect.CurrentTechnique = effect.Techniques["Simple"];
effect.Parameters["gWorldXf"].SetValue(world);
effect.Parameters["gWorldITXf"].SetValue(InverseWorldMatrix);
effect.Parameters["gWvpXf"].SetValue(world*view*proj);
effect.Parameters["gViewIXf"].SetValue(ViewInverse);
foreach (ModelMesh meshes in model.Meshes)
{
foreach (ModelMeshPart parts in meshes.MeshParts)
parts.Effect = effect;
meshes.Draw();
}
}
Particle Effects
[edit | edit source]To create a particle effect in XNA we use a point sprite. A point sprite is a resizable, textured vertex that always faces the camera. There are several reasons to use point sprites for rendering particles:
- A point sprite uses only one vertex, which saves a significant number of vertices when rendering thousands of particles.
- There is no need to store or set UV texture coordinates; this is done automatically.
- Point sprites always face the camera, so we don't need to bother with angles and orientation.
Creating a point sprite shader is very easy; we just need a small pixel shader that samples the texture at the sprite's coordinates:
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float2 uv;
uv = input.uv.xy;
return tex2D(Sampler, uv);
}
In the vertex shader we only need to return a POSITION0 for the vertex:
float4 VertexShaderFunction(float4 pos : POSITION0) : POSITION0
{
return mul(pos, WVPMatrix);
}
To enable point sprites and set their properties, we use render states in the technique:
technique Technique1
{
pass Pass1
{
sampler[0] = (Sampler);
PointSpriteEnable = true;
PointSize = 16.0f;
AlphaBlendEnable = true;
SrcBlend = SrcAlpha;
DestBlend = One;
ZWriteEnable = false;
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
The complete point sprite shader:
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WVPMatrix;
texture spriteTexture;
sampler Sampler = sampler_state
{
Texture = <spriteTexture>;
magfilter = LINEAR;
minfilter = LINEAR;
mipfilter = LINEAR;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float2 uv :TEXCOORD0;
};
float4 VertexShaderFunction(float4 pos : POSITION0) : POSITION0
{
return mul(pos, WVPMatrix);
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float2 uv;
uv = input.uv.xy;
return tex2D(Sampler, uv);
}
technique Technique1
{
pass Pass1
{
sampler[0] = (Sampler);
PointSpriteEnable = true;
PointSize = 32.0f;
AlphaBlendEnable = true;
SrcBlend = SrcAlpha;
DestBlend = One;
ZWriteEnable = false;
VertexShader = compile vs_1_1 VertexShaderFunction();
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
Now let's move to our Game1.cs file. First we need to declare and load the effect and the texture. To store the vertex positions we use an array of VertexPositionColor elements. The vertex positions are initialized with random numbers.
Effect pointSpriteEffect;
VertexPositionColor[] positionColor;
VertexDeclaration vertexType;
Texture2D textureSprite;
Random rand;
const int NUM = 50;
....
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
textureSprite = Content.Load<Texture2D>
("Images//texture_particle");
pointSpriteEffect = Content.Load<Effect>
("Effect//PointSprite");
pointSpriteEffect.Parameters
["spriteTexture"].SetValue(textureSprite);
positionColor = new VertexPositionColor[NUM];
vertexType = new VertexDeclaration(graphics.GraphicsDevice,
VertexPositionColor.VertexElements);
rand = new Random();
for (int i = 0; i < NUM; i++) {
positionColor[i].Position =
new Vector3(rand.Next(400) / 10f,
rand.Next(400) / 10f, rand.Next(400) / 10f);
positionColor[i].Color = Color.BlueViolet;
}
}
As the next step we create a DrawPointsprite() method to draw the particles:
public void DrawPointsprite() {
Matrix world = Matrix.Identity;
pointSpriteEffect.Parameters
["WVPMatrix"].SetValue(world*view*projection);
graphics.GraphicsDevice.VertexDeclaration = vertexType;
pointSpriteEffect.Begin();
foreach (EffectPass pass in
pointSpriteEffect.CurrentTechnique.Passes)
{
pass.Begin();
graphics.GraphicsDevice.DrawUserPrimitives
<VertexPositionColor>(
PrimitiveType.PointList,
positionColor,
0,
positionColor.Length);
pass.End();
}
pointSpriteEffect.End();
}
Then we call DrawPointsprite() from the Draw() method.
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Black);
DrawPointsprite();
base.Draw(gameTime);
}
To make the positions dynamic, we move a randomly chosen vertex each frame in the Update() method.
protected override void Update(GameTime gameTime)
{
positionColor[rand.Next(0, NUM)].Position =
new Vector3(rand.Next(400) / 10f,
rand.Next(400) / 10f, rand.Next(400) / 10f);
positionColor[rand.Next(0, NUM)].Color = Color.White;
base.Update(gameTime);
}
This is a very simple point sprite shader. You can create more sophisticated point sprites with dynamic size and color.
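As a sketch of such an extension (assuming the same WVPMatrix and Sampler declared in the .fx file above, and compiling with the vs_2_0/ps_2_0 profiles instead of 1.1), the vertex shader can write a per-vertex size through the PSIZE0 semantic and forward the vertex color, so distant sprites appear smaller and each sprite is tinted by its VertexPositionColor color. The function names and the scaling constant here are hypothetical, not part of the original sample:

```hlsl
// Sketch only: per-vertex size and color for point sprites.
// Assumes WVPMatrix and Sampler are declared as in the effect above.
struct DynamicVSOutput
{
    float4 Position : POSITION0;
    float4 Color    : COLOR0;
    float  Size     : PSIZE0;   // per-vertex point sprite size
};

DynamicVSOutput DynamicVSFunction(float4 pos : POSITION0,
                                  float4 color : COLOR0)
{
    DynamicVSOutput output;
    output.Position = mul(pos, WVPMatrix);
    output.Color    = color;
    // Shrink the sprite with its distance from the camera;
    // 640.0f is an arbitrary scaling constant.
    output.Size     = 640.0f / output.Position.z;
    return output;
}

float4 DynamicPSFunction(float2 uv : TEXCOORD0,
                         float4 color : COLOR0) : COLOR0
{
    // Tint the sprite texture with the interpolated vertex color.
    return tex2D(Sampler, uv) * color;
}
```

With this variant the fixed PointSize render state in the technique becomes unnecessary, since each vertex now carries its own size.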
The complete Game1.cs:
namespace MyPointSprite
{
public class Game1 : Microsoft.Xna.Framework.Game
{
GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;
Matrix view, projection;
Effect pointSpriteEffect;
VertexPositionColor[] positionColor;
VertexDeclaration vertexType;
Texture2D textureSprite;
Random rand;
const int NUM = 50;
public Game1()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
}
protected override void Initialize()
{
view =Matrix.CreateLookAt
(Vector3.One * 40, Vector3.Zero, Vector3.Up);
projection =
Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
4.0f / 3.0f, 1.0f, 10000f);
base.Initialize();
}
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
textureSprite =
Content.Load<Texture2D>("Images/texture_particle");
pointSpriteEffect =
Content.Load<Effect>("Effect/PointSprite");
pointSpriteEffect.Parameters
["spriteTexture"].SetValue(textureSprite);
positionColor = new VertexPositionColor[NUM];
vertexType = new VertexDeclaration
(graphics.GraphicsDevice, VertexPositionColor.VertexElements);
rand = new Random();
for (int i = 0; i < NUM; i++) {
positionColor[i].Position =
new Vector3(rand.Next(400) / 10f,
rand.Next(400) / 10f, rand.Next(400) / 10f);
positionColor[i].Color = Color.BlueViolet;
}
}
protected override void Update(GameTime gameTime)
{
positionColor[rand.Next(0, NUM)].Position =
new Vector3(rand.Next(400) / 10f,
rand.Next(400) / 10f, rand.Next(400) / 10f);
positionColor[rand.Next(0, NUM)].Color = Color.Chocolate;
base.Update(gameTime);
}
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Black);
DrawPointsprite();
base.Draw(gameTime);
}
public void DrawPointsprite() {
Matrix world = Matrix.Identity;
pointSpriteEffect.Parameters
["WVPMatrix"].SetValue(world*view*projection);
graphics.GraphicsDevice.VertexDeclaration = vertexType;
pointSpriteEffect.Begin();
foreach (EffectPass pass in
pointSpriteEffect.CurrentTechnique.Passes)
{
pass.Begin();
graphics.GraphicsDevice.DrawUserPrimitives
<VertexPositionColor>(
PrimitiveType.PointList,
positionColor,
0,
positionColor.Length);
pass.End();
}
pointSpriteEffect.End();
}
}
}
Links
Introduction to HLSL and some more advanced examples. Last accessed: 9 June 2011
Another HLSL introduction. Last accessed: 9 June 2011
Very good and detailed tutorial on how to use shaders in XNA. Last accessed: 15 January 2012
Official HLSL reference by Microsoft. Last accessed: 9 June 2011
Author
- Leonhard Palm: Basics, GPU Pipeline, Pixel and Vertex Shader, HLSL, XNA Examples
- DR 212: BasicEffect Class, Transparency Shader, Toon Shader, FX Composer, Particle Effects