Cg Programming/Unity/Mirrors

[Figure: “Toilet of Venus”, ca. 1644-48, by Diego Rodríguez de Silva y Velázquez.]

This tutorial covers the rendering of plane mirrors.

It does not require any shader programming (unless you want to use it with stereo rendering) but it does require some understanding of Section “Vertex Transformations” and of texturing as discussed in Section “Textured Spheres”.

Rendering Plane Mirrors

There are various ways of rendering plane mirrors in computer graphics. If rendering to textures is possible, the most common method consists of the following steps:

  • Mirror the main camera's position at the mirror and place a “mirror camera” at this mirrored position behind the mirror.
  • Render the scene from the point of view of the mirror camera using the mirror plane as view plane. Render this image to a render texture.
  • Use the render texture as texture of the mirror when rendering the scene with the main camera.

This is the basic method. Let's implement it.

The first issue is to obtain the position of the main camera. For monoscopic cameras, this is just transform.position of the camera. For stereoscopic cameras, we have to decide whether we use the position of the camera for the right eye or the camera for the left eye. We can get the view matrix of a camera mainCamera for the right eye with

   Matrix4x4 viewMatrix = mainCamera.GetStereoViewMatrix (Camera.StereoscopicEye.Right);

and the view matrix for the left eye with

   Matrix4x4 viewMatrix = mainCamera.GetStereoViewMatrix (Camera.StereoscopicEye.Left);

The inverse of the view matrix transforms the origin of the camera coordinate system to the vector in the 4th column (3rd column if you start counting with 0). Since the origin of the camera coordinate system is the position of the camera and the inverse of the view matrix transforms from camera coordinates to world coordinates, it follows that the 4th column of the inverse of the view matrix specifies the position of the camera in world coordinates. If the view matrix is stored in viewMatrix, we can obtain this vector with this code:

   mainCameraPosition = viewMatrix.inverse.GetColumn (3);
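
For a monoscopic camera, the view matrix is available as worldToCameraMatrix, so this relation is easy to verify. Here is a minimal sketch of such a check that could be placed in any script's Update function; mainCamera is assumed to be a reference to the camera:

   // sanity check (sketch): the 4th column of the inverse view matrix 
   // should be (almost) equal to the camera's transform.position
   Matrix4x4 viewMatrix = mainCamera.worldToCameraMatrix;
   Vector3 positionFromMatrix = viewMatrix.inverse.GetColumn (3);
   Debug.Log (positionFromMatrix + " should be approximately " 
      + mainCamera.transform.position);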

Mirroring the main camera's position at a plane can be achieved in various ways. In Unity, one simple way is to transform the main camera's position into the object coordinate system of a quad game object that represents the mirror. Since the quad lies in the xy plane of its own object coordinate system, the position can then be mirrored by changing the sign of its z coordinate. This mirrored position is then transformed back to world space.

Here is a C# script that implements this process:

// This script should be called "SetMirroredPosition" 
// and should be attached to a Camera object 
// in Unity which acts as a mirror camera behind a 
// mirror. Once a Quad object is specified as the 
// "mirrorQuad" and a "mainCamera" is set, the script
// computes the mirrored position of the "mainCamera" 
// and places the script's camera at that position.
using UnityEngine;

[ExecuteInEditMode]

public class SetMirroredPosition : MonoBehaviour {

    public GameObject mirrorQuad;
    public Camera mainCamera;
    public bool isMainCameraStereo;
    public bool useRightEye;

    void LateUpdate () {
        if (null != mirrorQuad && null != mainCamera &&
            null != mainCamera.GetComponent<Camera> ()) {
            Vector3 mainCameraPosition;
            if (!isMainCameraStereo) {
                mainCameraPosition = mainCamera.transform.position;
            } else {
                Matrix4x4 viewMatrix = mainCamera.GetStereoViewMatrix (
                    useRightEye ? Camera.StereoscopicEye.Right :
                    Camera.StereoscopicEye.Left);
                mainCameraPosition = viewMatrix.inverse.GetColumn (3);
            }
            Vector3 positionInMirrorSpace =
                mirrorQuad.transform.InverseTransformPoint (mainCameraPosition);
            positionInMirrorSpace.z = -positionInMirrorSpace.z;
            transform.position =
                mirrorQuad.transform.TransformPoint (
                    positionInMirrorSpace);
        }
    }
}

The script should be attached to a new camera object (in the main menu choose Game Object > Camera) and be called SetMirroredPosition.cs. The mirrorQuad should be set to a quad game object that represents the mirror, and mainCamera should be set to the main camera of the game.

To use the mirror quad as the view plane, we can use the script from Section “Projection for Virtual Reality”, which should also be attached to our new mirror camera. Add the line [ExecuteInEditMode] just before the class definition of that script to make it run in the editor. Make sure to check setNearClipPlane such that objects that extend behind the mirror plane are clipped. If there are artifacts at the intersection of objects with the mirror plane, decrease the parameter nearClipDistanceOffset until these artifacts disappear.
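
If that script is not at hand, the clipping part of its job (what setNearClipPlane enables) can be approximated with Unity's Camera.CalculateObliqueMatrix. The following script is only a sketch with a hypothetical class name: it clips the mirror camera's view at the mirror plane but, unlike the script from Section “Projection for Virtual Reality”, it does not set up the off-axis projection that maps the mirror quad exactly to the view plane:

// Sketch only (hypothetical class name): clips the mirror camera's 
// view at the mirror plane by setting an oblique near clip plane with 
// Unity's Camera.CalculateObliqueMatrix. It does not set up the 
// off-axis projection of Section "Projection for Virtual Reality".
using UnityEngine;

[ExecuteInEditMode]

public class SetObliqueNearClipPlane : MonoBehaviour {

    public GameObject mirrorQuad;

    void LateUpdate () {
        Camera mirrorCamera = GetComponent<Camera> ();
        if (null == mirrorQuad || null == mirrorCamera) {
            return;
        }

        // plane through the mirror quad with its normal pointing away 
        // from the mirror camera, i.e. towards the scene to be mirrored
        Vector3 pointOnPlane = mirrorQuad.transform.position;
        Vector3 planeNormal = mirrorQuad.transform.forward;
        if (Vector3.Dot (planeNormal, 
            pointOnPlane - transform.position) < 0.0f) {
            planeNormal = -planeNormal;
        }

        // express the plane in camera coordinates (the view matrix is a 
        // rigid transformation, so the normal can be transformed directly)
        Matrix4x4 worldToCamera = mirrorCamera.worldToCameraMatrix;
        Vector3 pointInCamera = worldToCamera.MultiplyPoint (pointOnPlane);
        Vector3 normalInCamera = 
            worldToCamera.MultiplyVector (planeNormal).normalized;
        Vector4 clipPlane = new Vector4 (normalInCamera.x, normalInCamera.y, 
            normalInCamera.z, -Vector3.Dot (pointInCamera, normalInCamera));

        // recompute the projection matrix with the mirror plane as near plane
        mirrorCamera.ResetProjectionMatrix ();
        mirrorCamera.projectionMatrix = 
            mirrorCamera.CalculateObliqueMatrix (clipPlane);
    }
}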

To store the rendered image of the mirror camera in a render texture, create a new render texture by selecting Create > Render Texture in the Project Window. In the Inspector Window, make sure the size of the render texture is not too small (otherwise the image in the mirror will appear pixelated). Once you have a render texture, select the mirror camera and, in the Inspector Window, set the Target Texture to the new render texture. You should now be able to see the image in the mirror in the Inspector Window of the render texture.
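
Alternatively, the render texture can be created and assigned in a script. Here is a minimal sketch, assuming mirrorCamera is a reference to the mirror camera:

   // sketch: create a 1024x1024 render texture with a 24-bit depth 
   // buffer and use it as the render target of the mirror camera
   RenderTexture mirrorTexture = new RenderTexture (1024, 1024, 24);
   mirrorCamera.targetTexture = mirrorTexture;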

To use the render texture as texture image for the mirror, apply a shader with texturing to the mirror quad, e.g., the Standard shader or the shader Unlit/Texture. Use the render texture for texturing like any other texture object. Note that you might have to rotate the mirror quad such that its front face is visible to the main camera. By default, the texture image will appear mirrored horizontally. However, there is an easy fix using the shader properties for textures: set the X coordinate of Tiling to -1 and the X coordinate of Offset to 1.
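
The same texture assignment and horizontal flip can also be done in a script. Here is a minimal sketch, assuming mirrorRenderer is a reference to the quad's MeshRenderer (with a material that uses a main texture, e.g. Unlit/Texture) and mirrorTexture is the render texture:

   // sketch: use the render texture as main texture of the mirror quad 
   // and flip it horizontally via the tiling and offset of _MainTex
   mirrorRenderer.sharedMaterial.mainTexture = mirrorTexture;
   mirrorRenderer.sharedMaterial.mainTextureScale = new Vector2 (-1.0f, 1.0f);
   mirrorRenderer.sharedMaterial.mainTextureOffset = new Vector2 (1.0f, 0.0f);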

Stereo Rendering

For stereo rendering, we need two mirrored cameras: one for the left eye and one for the right eye. We also need one render texture for each eye. The texturing of the mirror has to make sure that each eye accesses its corresponding render texture. To this end, Unity provides a built-in shader variable unity_StereoEyeIndex, which is 0 for the left eye and 1 for the right eye.

A basic shader that takes both textures and returns the color from the texture for the currently rendered eye could look like this:

Shader "BasicStereoTexture"
{
    Properties
    {
        _LeftTex ("Left Texture", 2D) = "white" {}
        _RightTex ("Right Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
  
            #include "UnityCG.cginc"

            struct vertexInput
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct vertexOutput
            {
                float2 uvLeft : TEXCOORD0;
                float2 uvRight : TEXCOORD1;
                float4 vertex : SV_POSITION;
            };

            sampler2D _LeftTex;
            uniform float4 _LeftTex_ST;
            sampler2D _RightTex;
            uniform float4 _RightTex_ST;

            vertexOutput vert (vertexInput i)
            {
                vertexOutput o;
                o.vertex = UnityObjectToClipPos(i.vertex);
                o.uvLeft = TRANSFORM_TEX(i.uv, _LeftTex);
                o.uvRight = TRANSFORM_TEX(i.uv, _RightTex);
                return o;
            }

            float4 frag (vertexOutput i) : SV_Target
            {
                return lerp(tex2D(_LeftTex, i.uvLeft), 
                    tex2D(_RightTex, i.uvRight), 
                    unity_StereoEyeIndex);
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
}

Here is an alternative implementation as a surface shader:

Shader "StereoTexture"
{
    Properties
    {
        _LeftTex ("Left Texture", 2D) = "white" {}
        _RightTex ("Right Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        CGPROGRAM
        #pragma surface surf Standard 

        uniform sampler2D _LeftTex;
        uniform sampler2D _RightTex;

        struct Input
        {
            float2 uv_LeftTex;
            float2 uv_RightTex;
        };

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            fixed4 c = lerp(tex2D(_LeftTex, IN.uv_LeftTex), 
               tex2D(_RightTex, IN.uv_RightTex), 
               unity_StereoEyeIndex);
            o.Emission = c.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
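
To use either shader, the scene needs two mirror cameras (each with its own SetMirroredPosition script and its own render texture), one for the left eye and one for the right eye. The two render textures then have to be assigned to _LeftTex and _RightTex of the mirror's material, including the horizontal flip. Here is a minimal sketch, assuming mirrorRenderer references the quad's MeshRenderer and leftTexture and rightTexture reference the two render textures:

   // sketch: assign the render textures of the left-eye and right-eye 
   // mirror cameras to the stereo shader and flip both horizontally
   Material mirrorMaterial = mirrorRenderer.sharedMaterial;
   mirrorMaterial.SetTexture ("_LeftTex", leftTexture);
   mirrorMaterial.SetTextureScale ("_LeftTex", new Vector2 (-1.0f, 1.0f));
   mirrorMaterial.SetTextureOffset ("_LeftTex", new Vector2 (1.0f, 0.0f));
   mirrorMaterial.SetTexture ("_RightTex", rightTexture);
   mirrorMaterial.SetTextureScale ("_RightTex", new Vector2 (-1.0f, 1.0f));
   mirrorMaterial.SetTextureOffset ("_RightTex", new Vector2 (1.0f, 0.0f));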

Limitations

There are several limitations of this implementation which we haven't addressed. For example:

  • multiple mirrors (you might have to share the same render texture for all mirrors)
  • multiple reflections in multiple mirrors (this is complicated because you need an exponentially increasing number of mirror cameras)
  • reflection of light in mirrors (each light source should have a mirrored partner to take light into account that is first reflected in the mirror before lighting an object)
  • uneven mirrors (e.g. with a normal map)
  • etc.

Summary

Congratulations, well done! Some of the things we have looked at:

  • How to mirror positions at a plane by transforming them into the object coordinate system of the plane and changing the sign of the z coordinate.
  • How to render a camera view into a render texture.
  • How to use a mirrored render texture for texturing.

Further reading

If you want to know more

  • about using the stencil buffer to render mirrors, you could read Section 9.3.1 of the SIGGRAPH '98 Course “Advanced Graphics Programming Techniques Using OpenGL” organized by Tom McReynolds, which is available online.

Unless stated otherwise, all example source code on this page is granted to the public domain.