Unity

Unity Shaders Reference

Unity shaders are written in ShaderLab, a unique syntax with separate blocks: some parts describe the shader to the engine while other parts contain the actual shader code.

Unity's render engine (to my knowledge) uses the classic rasterization model of rendering, not ray tracing.

Unity offers a few different workflows depending on the kind of effect you want to make. For lighting objects you'll most likely want to start with a Surface shader, which has access to light details and creates shadows. Surface shaders don't use the old-school vertex and fragment shader functions but offer something similar.

Properties

These define the data a material passes to the shader.

Properties
{
    _nameInShader ("Display Name", Type) = Default Value
}

They are not comma-separated!

Types

Arrays and matrices can also be provided, but only through scripting.
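
For reference, here's a Properties block using each of the built-in types (the names are illustrative):

Properties
{
    _MyFloat ("A Float", Float) = 0.5
    _MyInt ("An Int", Int) = 2
    _MyRange ("A Range", Range(0, 1)) = 0.5
    _MyColor ("A Color", Color) = (1, 1, 1, 1)
    _MyVector ("A Vector", Vector) = (0, 0, 0, 0)
    _MyTexture ("A Texture", 2D) = "white" {}
    _My3DTexture ("A 3D Texture", 3D) = "" {}
    _MyCubemap ("A Cubemap", Cube) = "" {}
}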

Attributes

Attributes can be added before property definitions to affect how they are displayed in the material editor.
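
A few of the built-in attributes (not an exhaustive list, and the property names are illustrative):

[HideInInspector] _Hidden ("Hidden", Color) = (1, 1, 1, 1)  // not shown in the material editor
[NoScaleOffset] _Ramp ("Ramp", 2D) = "white" {}             // hides the tiling/offset fields
[Normal] _BumpMap ("Normal Map", 2D) = "bump" {}            // marks the texture as a normal map
[HDR] _Glow ("Glow", Color) = (1, 1, 1, 1)                  // high-dynamic-range color picker
[Toggle] _Invert ("Invert?", Float) = 0                     // displays a checkbox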

Accessing them

You can either use the values directly in ShaderLab commands with bracket notation [_PropertyName], or declare a variable of the matching type in the HLSL shader code: type _nameInShader.
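
For example (illustrative names; the bracket form assumes a _CullMode Float property exists):

// ShaderLab: use a property directly in a render setup command
Cull [_CullMode]

// HLSL: declare variables whose names and types match the properties
fixed4 _BorderColor;
sampler2D _MainTex;
float _BorderSize;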

Scaling and Tiling data

Textures can be given this data in the editor. To access it, just append _ST to the property name and declare a float4, where the x and y components contain the tiling data and the z and w components contain the offset data.
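
A minimal sketch of applying it by hand (the TRANSFORM_TEX macro from UnityCG.cginc does the same thing):

sampler2D _MainTex;
float4 _MainTex_ST;

// uv * tiling + offset
float2 uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
// equivalent: float2 uv = TRANSFORM_TEX(v.texcoord, _MainTex);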

SubShaders

When Unity renders a mesh, it picks the first SubShader that will run on the player's hardware. Ideally you create an HD shader first, then simpler, cheaper shaders below it.

Pass

For each Pass, the geometry of the object is rendered once.

You can give a shader pass a name so it's easier to debug and so other shaders can reference it with UsePass. All names must be uppercase!

Name "MY_PASS_NAME"

Tags

Tags specify when the shader will be run in the pipeline. There can be tags for the subshader or for the pass, and each have their own separate properties and options.

You can make your own tags and read them from scripts via Material.GetTag.
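
For example, from C# ("MyCustomTag" is a hypothetical tag name):

string value = GetComponent<Renderer>().material.GetTag("MyCustomTag", false, "Not found");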

SubShader Tags

Tags
{
    "Queue" = "Background | Geometry | AlphaTest | Transparent | Overlay"
    "RenderType" = "Same as above"
    "PreviewType" = "Sphere | Plane | Skybox"
    "DisableBatching" = "True | False | LODFading"
    "ForceNoShadowCasting" = "True | False"
    // others not listed here
}

Render Setup

Each pass can specify its render setup: the flags and state the object will be rendered with (how blending is done, depth testing, etc.)

Cull Back | Front | Off
ZTest Less | Greater | LEqual | GEqual | Equal | NotEqual | Always
ZWrite On | Off
Offset OffsetFactor, OffsetUnits // Z-buffer offset
Blend sourceBlendMode destBlendMode
Blend sourceBlendMode destBlendMode, alphaSourceBlendMode alphaDestBlendMode
BlendOp colorOp
BlendOp colorOp, alphaOp
AlphaToMask On | Off
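
For example, a typical setup for standard alpha-blended transparency:

Cull Back
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha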

Surface Shaders

Surface shaders unlock more of the Unity engine's power, but they have a strange, unique syntax. You create a surface function that takes in whatever data you need and fills in a SurfaceOutput object.

struct SurfaceOutput
{
    fixed3 Albedo;  // diffuse color
    fixed3 Normal;  // tangent space normal, if written
    fixed3 Emission;
    half Specular;  // specular power in 0..1 range
    fixed Gloss;    // specular intensity
    fixed Alpha;    // alpha for transparencies
};

// Physically Based rendering
struct SurfaceOutputStandard
{
    fixed3 Albedo;      // base (diffuse or specular) color
    fixed3 Normal;      // tangent space normal, if written
    half3 Emission;
    half Metallic;      // 0=non-metal, 1=metal
    half Smoothness;    // 0=rough, 1=smooth
    half Occlusion;     // occlusion (default 1)
    fixed Alpha;        // alpha for transparencies
};
struct SurfaceOutputStandardSpecular
{
    fixed3 Albedo;      // diffuse color
    fixed3 Specular;    // specular color
    fixed3 Normal;      // tangent space normal, if written
    half3 Emission;
    half Smoothness;    // 0=rough, 1=smooth
    half Occlusion;     // occlusion (default 1)
    fixed Alpha;        // alpha for transparencies
};

You define a Surface shader by using the surface directive #pragma surface surfaceFunction lightModel [optionalparams]. The surface function should have the form void surf (Input IN, inout SurfaceOutput o), where Input is a struct you define containing the data you want. lightModel is one of the built-in lighting models (Standard, StandardSpecular, Lambert, or BlinnPhong) or a custom Lighting<Name> model that you define.

Custom Lighting Functions

For basic forward rendering and no UnityGI work, you can use this function:
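
half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half atten);

<Name> matches the lightModel name given in the surface directive (e.g. LightingCustom for #pragma surface surf Custom, as in the example shader below). The view-dependent variant adds a half3 viewDir parameter before atten.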

There are also hidden variables used in the lighting functions (like _LightColor0, the current light's color) that are not documented anywhere because fun times.

These are for using your own GI (light map data?) deconstruction:
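
To the best of my knowledge the pair looks like this:

half4 Lighting<Name> (SurfaceOutput s, UnityGI gi);
void Lighting<Name>_GI (SurfaceOutput s, UnityGIInput data, inout UnityGI gi);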

Transparency

Append one of these options to the surface directive to control how the object gets rendered with transparency, for example alpha:fade, alpha:premul, alphatest:_Cutoff, or keepalpha.

Modifiers, shadows, and tessellation

Append one of these options to the surface directive to modify how the shader runs, for example vertex:VertexFunction, finalcolor:ColorFunction, addshadow, fullforwardshadows, or tessellate:TessFunction.

Input Structures

These define the values you can put into an Input struct.
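
The commonly available members (worldRefl and worldNormal additionally require INTERNAL_DATA if the shader writes to o.Normal):

struct Input
{
    float2 uv_MainTex;    // texture coordinates: "uv" + property name
    float3 viewDir;       // view direction
    float4 color : COLOR; // interpolated per-vertex color
    float4 screenPos;     // screen space position
    float3 worldPos;      // world space position
    float3 worldRefl;     // world reflection vector
    float3 worldNormal;   // world normal vector
};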

Example Shader

Here is an example surface shader that was used for making Hyperwolf.

Shader "Bitzawolf/Creature"
{
    Properties
    {
        _BorderColor ("Border Color", Color) = (0.3, 0.3, 0.3, 1)
        _BorderSize ("Border Size", Range(0, 1)) = 0
        _MainTex ("Texture", 2D) = "black" {}
        _LightRamp ("Light Ramp", 2D) = "white" {}
    }

    SubShader
    {
        // Create border around the object
        Pass
        {
            Name "TreeBorder"
            Tags { "Queue" = "Transparent" }
            Blend SrcAlpha OneMinusSrcAlpha
            Cull Off
            ZTest Always
            ZWrite Off

            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct v2f {
                    float4 pos  : SV_POSITION;
                };

                float _BorderSize;
                fixed4 _BorderColor;

                v2f vert (appdata_full v)
                {
                    v2f o;
                    // push the vertex out along its normal to create the border shell
                    float3 vert = v.vertex.xyz + v.normal * _BorderSize;
                    o.pos = UnityObjectToClipPos(vert);
                    return o;
                }

                half4 frag(v2f i) : SV_Target
                {
                    return _BorderColor;
                }
            ENDCG
        }

        CGPROGRAM
            #pragma surface surf Custom fullforwardshadows
            #pragma target 3.0

            sampler2D _MainTex;
            sampler2D _LightRamp;

            struct Input
            {
                float2 uv_MainTex;
            };

            void surf (Input IN, inout SurfaceOutput o)
            {
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
                o.Albedo = c.rgb;
                o.Alpha = 1;
            }

            half4 LightingCustom (SurfaceOutput s, half3 lightDir, half atten)
            {
                half NdotL = dot(s.Normal, lightDir);
                // remap [-1, 1] to [0, 1] and use it to sample the ramp texture
                half val = (NdotL + 1) / 2;
                half4 ramp = tex2D(_LightRamp, float2(0.5, val));
                half4 c;
                c.rgb = s.Albedo * _LightColor0.rgb * ramp.rgb;
                c.a = s.Alpha;
                return c;
            }
        ENDCG
    }
    Fallback "Diffuse"
}

UsePass

UsePass lets a shader reference a pass inside a different shader. That way you can define an effect in one location and reuse it without rewriting the code.

UsePass "Shader/Name"

This copies every pass with that name from the other shader directly into this one.
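
For example, to reuse the border pass from the Creature shader above (pass names are matched in uppercase):

UsePass "Bitzawolf/Creature/TREEBORDER"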

GrabPass

This will grab the current screen's contents that is behind the object being drawn.

GrabPass { }

The screen texture will be stored in a texture variable called _GrabTexture.
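
A minimal sketch of sampling it (ComputeGrabScreenPos is in UnityCG.cginc):

sampler2D _GrabTexture;

// vertex shader: compute the grab UV from the clip-space position
o.grabPos = ComputeGrabScreenPos(o.pos); // grabPos is a float4 in the v2f struct

// fragment shader: sample what's behind the object
fixed4 behind = tex2Dproj(_GrabTexture, i.grabPos);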

User Interface

UI is a huge pain in Unity, but only because their UI system is so flexible.

All UI components are positioned on the screen by either a percentage or a pixel value. A button could be 10% from the left side of the screen, or 50 pixels. That same button could have a width of 5% of the entire screen, or a fixed width of 100 pixels no matter the screen size.

This obviously presents a lot of scaling issues... What looks good on your screen right now will not look good at a different size, and this is the biggest problem with Unity's UI.

The solution is to understand how the UI works and then create UI elements that will actually work and scale appropriately.

Anchors

These use units of percentage. They describe where an element will be placed (where it will be anchored). As the screen changes size, the anchors will remain in the same exact place relative to the screen's size.

For example, an anchor of 0.5 means the anchor is in the exact middle of the screen (50%).

Another example: an anchor of 0.2 means the anchor is 20% from the bottom/left of the screen. So if the screen is 1000 pixels wide and the min X anchor is 0.2, the element will start 200 pixels from the left of the screen.

Rect Transform

These use units of pixels. They describe the offset from the anchors that this element will be placed at.

For example, a value of 20 means the element will be drawn 20 pixels after one of the anchors.

Using the same 0.2 anchor example from above, if an element has a "left" value of -50 then the element will be positioned starting 150 pixels from the left of the screen (200 - 50).

How Anchors and Rect Transforms work together

So, after some playing around, you might notice that the labels for the Rect Transform values change. Sometimes one might say "Top", then changing an anchor value will cause it to say "Pos Y".

This is based on the shape the anchors make. If the anchor is a point (all 4 values are the same) then the Rect only allows options for specifying a position and size. This makes sense because an anchor that's a single point means the element will never change size, just change position on the screen. So we can specify a fixed height and width for the element.

If the anchor forms a box (4 separate values) then the Rect specifies 4 different offset values in terms of the element's top, right, bottom, and left border. This makes sense because the anchor box will vary in pixel size as the screen size changes.

Tips

Common workflow

Scripting Collisions

Unity has an odd system for determining which function is called when a collision occurs, so here's a handy reference.

Static colliders are game objects with a collider attached but no rigidbody. They are not expected to ever move, even by script. The engine applies optimizations to these types (for lighting and physics simulations), so it's a really bad idea to move them.

Rigidbody colliders are game objects with a collider and a normal rigidbody. They react to forces and gravity and are fully simulated.

Kinematic Rigidbody colliders also have a collider and rigidbody, but the kinematic option is enabled. These do not react to forces and gravity and are not simulated, but they are not optimized like the static colliders so they are a good choice when you want game objects moved or enabled/disabled by script.

Rigidbody colliders will receive collision events against any static, rigidbody, or kinematic object.

Kinematic colliders will only receive collision events against rigidbody colliders.
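
As a quick reference, collision and trigger events arrive through MonoBehaviour callbacks like these:

using UnityEngine;

public class CollisionLogger : MonoBehaviour
{
    // Called when this collider/rigidbody starts touching another collider
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Collision with " + collision.gameObject.name);
    }

    // Called when another collider enters this object's trigger collider
    void OnTriggerEnter(Collider other)
    {
        Debug.Log("Trigger entered by " + other.gameObject.name);
    }
}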

Triggers change up the chart quite a bit and are more complex. The following list indicates which objects the listening object will receive events for.

FMOD

Tips for integrating FMOD into Unity.

Initial Setup

1) Log in to https://www.fmod.com
2) Download the latest Unity package: https://www.fmod.com/download#integrations
3) Import the package into your game
4) Drop the FMOD assets into your project folder
5) Set up FMOD to know where the audio project files are:
   a) FMOD button at the top of the screen
   b) Edit Settings
   c) Set "Studio Integration" to "Multiple Platform Build"
   d) Click Browse and locate the folder containing the "Desktop" folder
6) If all is well, look at FMOD -> Event Browser; it should show you some events.
7) Attach an FMOD Studio Listener script to the camera

Creating a simple sound emitter

Use an FMOD Studio Event Emitter component on some object and enter the path to the event. It can play based on the standard Unity collision/trigger detection functions.

Customizing a sound
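
A common way to customize a sound at runtime is to drive an event parameter from script. A minimal sketch (the event path and parameter name are placeholders, and older FMOD integrations use setParameterValue instead of setParameterByName):

FMOD.Studio.EventInstance howl = FMODUnity.RuntimeManager.CreateInstance("event:/Creature/Howl");
howl.start();
howl.setParameterByName("Distance", 10f); // setParameterValue in older versions
howl.release(); // release the handle once the event is done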

© Bitzawolf 2019