Cg Programming/Unity/Screen Overlays

[Image: Title screen of a movie from 1934.]

This tutorial covers screen overlays.

It is the first in a series of tutorials about non-standard vertex transformations, i.e. transformations that deviate from the standard vertex transformations described in Section “Vertex Transformations”. This particular tutorial uses texturing as described in Section “Textured Spheres” and blending as described in Section “Transparency”.

Screen Overlays

There are many applications for screen overlays, e.g. titles as in the image above, but also other GUI (graphical user interface) elements such as buttons or status information. The common feature of these elements is that they should always appear on top of the scene and never be occluded by other objects, nor should they be affected by any camera movement. Thus, the vertex transformation should go directly from object space to screen space. Unity offers various ways to render a texture image at a specified position on the screen; this tutorial achieves it with a simple shader.

Rendering a Texture to the Screen with a Cg Shader

Let's specify the screen position of the texture by the x and y coordinates of the lower, left corner of the rendered rectangle in pixels, with (0, 0) at the center of the screen, and by the width and height of the rendered rectangle in pixels. (Specifying the coordinates relative to the center often allows us to support various screen sizes and aspect ratios without further adjustments; for example, _X = -64 and _Y = -64 with _Width = _Height = 128 centers a 128×128 rectangle on any screen.) We use these shader properties:

   Properties {
      _MainTex ("Texture", Rect) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128
      _Height ("Height", Float) = 128
   }

and the corresponding uniforms

         uniform sampler2D _MainTex;
         uniform float4 _Color;
         uniform float _X;
         uniform float _Y;
         uniform float _Width;
         uniform float _Height;

For the actual object, we could use a mesh that consists of just two triangles to form a rectangle. However, we can also just use the default cube object, since back-face culling (and culling of triangles that are degenerated to edges) makes sure that only two triangles of the cube are rasterized. The corners of the default cube object have coordinates -0.5 and +0.5 in object space, i.e. the lower, left corner of the rectangle is at (-0.5, -0.5) and the upper, right corner is at (+0.5, +0.5). To transform these coordinates to the user-specified coordinates in screen space, we first transform them to raster positions in pixels, where (0, 0) is at the lower, left corner of the screen:

         uniform float4 _ScreenParams; // x = width; y = height; 
            // z = 1 + 1.0/width; w = 1 + 1.0/height
         ...
         vertexOutput vert(vertexInput input) 
         {
            vertexOutput output;
 
            float2 rasterPosition = float2(
               _X + _ScreenParams.x / 2.0 
               + _Width * (input.vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0 
               + _Height * (input.vertex.y + 0.5));
            ...

This transformation maps the lower, left corner of the front face of our cube from (-0.5, -0.5) in object space to the raster position float2(_X + _ScreenParams.x / 2.0, _Y + _ScreenParams.y / 2.0), where _ScreenParams.x is the screen width in pixels and _ScreenParams.y is the screen height in pixels. The upper, right corner is mapped from (+0.5, +0.5) to float2(_X + _ScreenParams.x / 2.0 + _Width, _Y + _ScreenParams.y / 2.0 + _Height). Raster positions are convenient and, in fact, they are often used in OpenGL; however, they are not quite what we need here.
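
For example (with made-up numbers that are not part of the shader), on a 1024×768 screen with _X = 0, _Y = 0, _Width = 128, and _Height = 128, the two corners end up at these raster positions:

            lower, left corner:  (0 + 1024/2, 0 + 768/2)             = (512, 384)
            upper, right corner: (0 + 1024/2 + 128, 0 + 768/2 + 128) = (640, 512)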

The output parameter of the vertex shader is in the so-called “clip space” as discussed in Section “Vertex Transformations”. The GPU transforms these coordinates to normalized device coordinates between -1 and +1 by dividing them by the fourth coordinate in the perspective division. If we set this fourth coordinate to 1, this division doesn't change anything; thus, we can think of the first three coordinates as coordinates in normalized device coordinates, where (-1, -1, -1) specifies the lower, left corner of the screen on the near plane and (+1, +1, -1) specifies the upper, right corner on the near plane. In order to specify any screen position as vertex output parameter, we have to specify it in this coordinate system. Fortunately, transforming the x and y coordinates of the raster position to normalized device coordinates is not too difficult. For the z coordinate we want to use the z coordinate of the near clipping plane; in Unity, this value depends on the platform, therefore we use Unity's built-in uniform _ProjectionParams.y, which specifies the z coordinate of the near clipping plane.

            output.pos = float4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               2.0 * rasterPosition.y / _ScreenParams.y - 1.0,
               _ProjectionParams.y, // near plane is at -1.0 or at 0.0
               1.0);

As you can easily check, this transforms the raster position float2(0, 0) to normalized device coordinates (-1, -1) and the raster position float2(_ScreenParams.x, _ScreenParams.y) to (+1, +1), which is exactly what we need.
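
Spelled out for the x coordinate (the y coordinate works the same way):

            2.0 * 0 / _ScreenParams.x - 1.0               = -1.0
            2.0 * _ScreenParams.x / _ScreenParams.x - 1.0 = +1.0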

There is one more complication: sometimes Unity uses a flipped projection matrix where the y axis points in the opposite direction. In this case, we have to multiply the y coordinate by -1. We can achieve this by multiplying it with _ProjectionParams.x:

            output.pos = float4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               _ProjectionParams.x * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0),
               _ProjectionParams.y, // near plane is at -1.0 or at 0.0
               1.0);

This is all we need for the vertex transformation from object space to screen space. However, we still need to compute appropriate texture coordinates in order to look up the texture image at the correct position. Texture coordinates should be between 0 and 1, which is actually easy to compute from the vertex coordinates in object space between -0.5 and +0.5:

            output.tex = float4(input.vertex.x + 0.5, 
               input.vertex.y + 0.5, 0.0, 0.0);
               // for a cube, vertex.x and vertex.y 
               // are -0.5 or 0.5

With the vertex output parameter tex, we can then use a simple fragment program to look up the color in the texture image and modulate it with the user-specified color _Color:

         float4 frag(vertexOutput input) : COLOR
         {
            return _Color * tex2D(_MainTex, input.tex.xy);   
         }

That's it.

Complete Shader Code

If we put all the pieces together, we get the following shader, which uses the Overlay queue to render the object after everything else, and uses alpha blending (see Section “Transparency”) to allow for transparent textures. It also deactivates the depth test to make sure that the texture is never occluded:

Shader "Cg shader for screen overlays" {
   Properties {
      _MainTex ("Texture", Rect) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128
      _Height ("Height", Float) = 128
   }
   SubShader {
      Tags { "Queue" = "Overlay" } // render after everything else
 
      Pass {
         Blend SrcAlpha OneMinusSrcAlpha // use alpha blending
         ZTest Always // deactivate depth test
 
         CGPROGRAM
 
         #pragma vertex vert  
         #pragma fragment frag

         #include "UnityCG.cginc" 
           // defines float4 _ScreenParams with x = width;  
           // y = height; z = 1 + 1.0/width; w = 1 + 1.0/height
           // and defines float4 _ProjectionParams 
           // with x = 1 or x = -1 for flipped projection matrix;
           // y = near clipping plane; z = far clipping plane; and
           // w = 1 / far clipping plane
 
         // User-specified uniforms
         uniform sampler2D _MainTex;
         uniform float4 _Color;
         uniform float _X;
         uniform float _Y;
         uniform float _Width;
         uniform float _Height;
 
         struct vertexInput {
            float4 vertex : POSITION;
            float4 texcoord : TEXCOORD0;
         };
         struct vertexOutput {
            float4 pos : SV_POSITION;
            float4 tex : TEXCOORD0;
         };
 
         vertexOutput vert(vertexInput input) 
         {
            vertexOutput output;
 
            float2 rasterPosition = float2(
               _X + _ScreenParams.x / 2.0 
               + _Width * (input.vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0 
               + _Height * (input.vertex.y + 0.5));
            output.pos = float4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               _ProjectionParams.x * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0),
               _ProjectionParams.y, // near plane is at -1.0 or at 0.0
               1.0);
 
            output.tex = float4(input.vertex.x + 0.5, 
               input.vertex.y + 0.5, 0.0, 0.0);
               // for a cube, vertex.x and vertex.y 
               // are -0.5 or 0.5
            return output;
         }
 
         float4 frag(vertexOutput input) : COLOR
         {
            return _Color * tex2D(_MainTex, input.tex.xy);   
         }
 
         ENDCG
      }
   }
}

When you use this shader for a cube object, the texture image can appear and disappear depending on the orientation of the camera. This is due to Unity's view frustum culling: Unity doesn't render objects whose bounding volume is completely outside of the region of the scene that is visible in the camera (the view frustum). This culling is based on the conventional transformation of game objects, which doesn't make sense for our shader. In order to avoid it, we can simply make the cube object a child of the camera (by dragging it onto the camera in the Hierarchy Window). If the cube object is then placed in front of the camera, it will always stay in the same relative position and thus won't be culled by Unity (at least not in the game view).

Changes for Opaque Screen Overlays

Many changes to the shader are conceivable, e.g. a different blend mode or a different depth to have a few objects of the 3D scene in front of the overlay. Here we will only look at opaque overlays.

An opaque screen overlay will occlude triangles of the scene. If the GPU were aware of this occlusion, it wouldn't have to rasterize the occluded triangles (e.g. by using deferred rendering or early depth tests). In order to make sure that the GPU has a chance to apply these optimizations, we have to render the screen overlay first, by setting

Tags { "Queue" = "Background" }

Also, we should avoid blending by removing the Blend instruction. With these changes, opaque screen overlays are likely to improve rendering performance instead of costing rasterization performance.
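
Putting both changes together, the SubShader of an opaque overlay shader could begin as follows. This is only a sketch, under the assumption that the Cg code between CGPROGRAM and ENDCG stays exactly as above; ZWrite On is Unity's default and is spelled out here only to emphasize that writing the overlay's near-plane depth is what lets the GPU reject occluded fragments:

   SubShader {
      Tags { "Queue" = "Background" } // render before everything else
 
      Pass {
         // no Blend instruction: the overlay is opaque
         ZTest Always // the overlay itself always passes the depth test
         ZWrite On // write the overlay's (near-plane) depth so that 
            // geometry rendered afterwards fails the depth test where 
            // it is occluded; "On" is Unity's default
 
         CGPROGRAM
         // ... same vertex and fragment shader code as above ...
         ENDCG
      }
   }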

Summary

Congratulations, you have reached the end of another tutorial. We have seen:

  • How to render screen overlays with a Cg shader.
  • How to modify the shader for opaque screen overlays.

Further reading

If you still want to know more

  • about the standard vertex transformations, you should read Section “Vertex Transformations”.
  • about texturing, you should read Section “Textured Spheres”.
  • about blending, you should read Section “Transparency”.

Unless stated otherwise, all example source code on this page is granted to the public domain.