Zelda's Silent Realm in Unity HDRP

(Minor spoilers ahead: We'll be looking at something you don't encounter until about 1/3 into Skyward Sword, but I will not be directly discussing the role these areas play in the story of the game.)

While doing some research for a different effect from The Legend of Zelda: Skyward Sword, I was reminded of the unique look of the Silent Realm and it inspired me to recreate my own take on the effect in Unity.

The Silent Realm is a parallel world you visit in Skyward Sword. It’s a spirit world of sorts shrouded in twilight with air filled with floating bits of ethereal energy, which Link finds himself covered in. It’s a hostile place, patrolled by guardians much stronger than the normal enemies you’d expect to find in the same areas in the physical realm of Hyrule.

Link at the entrance of Farore’s Silent Realm

Three effects stand out to me in the above clip:

  1. The shadowy cyan-green tint over the whole scene
  2. The spirit particles floating around the world
  3. The spirit smoke shimmering across Link’s body

This article will primarily focus on the spirit smoke, but I’ll talk about the first two as well to properly set the mood.

Also before we go any further I wanted to thank my friend Angela for collecting all the clips featured from Skyward Sword HD, she saved me a ton of time chasing them all down!

Creating our test scene

For our test scene I’m starting from Unity’s updated high-definition render pipeline (HDRP) scene template as a base. I’m using Unity 2022.1.16f1, HDRP 13.1.8, and version 10.9.1 of the template.

To avoid distracting artifacts, I disabled motion blur and swapped TAA for SMAA. I also tweaked the sun and sky lighting a bit, but otherwise the HDRP baseline of the test scene is the same as the template.

Instead of Link we’ll be using Princess Zelda courtesy of Christoph Schoch’s excellent Breath of the Wild Zelda model (see this tweet for downloads.)

To breathe some life into Zelda I’m using animations from Mixamo, retargeted using Unity’s own humanoid retargeting support. (This really doesn’t do Christoph’s rig justice, it’s just a quick and dirty solution. If you’re planning on following along it’d be easier to just use models from Mixamo.)

Everything else in the scene is just reused assets from the Unity template. The foliage from room 2 was duplicated numerous times to create a discount Faron Woods behind Zelda, and the Unity material test sphere was recolored to red and placed in the forest to provide some color contrast.

Establishing the basic Silent Realm vibe

Before we get started on the spirit smoke effect, let’s establish the overall Silent Realm vibe for the scene.

As seen in the comparison below, the Silent Realm looks very similar to the equivalent location in Hyrule, except there is a cyan-green tint over the whole scene and bits of spirit particles floating around in the air.

I don’t know enough about the Wii’s graphics hardware to speculate how this was done in the original game, but in modern graphics pipelines color grading is a great way to accomplish the color tint effect.

There’s numerous ways to manipulate the color grading in HDRP. The most flexible method is to manipulate the color grading look-up texture (LUT) directly, and Unity provides instructions for doing so with Photoshop or DaVinci Resolve.

The less flexible method is to manipulate the values used by Unity to generate the LUT internally.

What’s a color grading LUT?

The short version is that it’s a three-dimensional texture where each texel represents the output color for the input color mapped to that point. So if you have a LUT with color (197, 163, 255) at coordinate (255, 0, 0), it would replace pure red (255, 0, 0) inputs with (197, 163, 255).
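
Conceptually, applying the LUT is just a dependent texture read where the input color is used as the coordinate. Here’s a minimal sketch of the idea in HLSL (not HDRP’s actual implementation):

float3 ApplyLut(float3 color, Texture3D<float4> lut, SamplerState lutSampler)
{
    // The input color itself is the coordinate into the LUT;
    // the texel found there is the graded output color
    return lut.SampleLevel(lutSampler, color, 0).rgb;
}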

For a more detailed explanation, see this lovely article by Harry Alisavakis on manually implementing color grading with LUTs. (It’s maybe worth noting that HDRP uses an actual 3D texture rather than the unrolled 2D texture Harry did.)

When you don’t manually specify your own LUT it’s generated by the pipeline from the various color grading-related parameters using the LutBuilder3D compute shader.

I went with the second option since the version of Photoshop I own is too old to export color lookup tables in the format Unity expects (remember when you could buy Photoshop outright?)

In either case you’ll create an empty game object in your scene and add a Volume component to it along with a new volume profile to describe our changes. Explaining all the different settings you can override is beyond the scope of this blog post (there’s a bajillion of them), but here’s the settings I ended up with:

Filter color is 167, 221, 218 @ intensity 0

The most important change here is the hue vs hue color curve. The hue vs hue curve takes an input hue on the x-axis and offsets it by the value on the y-axis. For example: putting a point at (0, 60) would cause hue 0 (red) to offset to hue 60 (yellow). This means setting it to a completely linear curve from one corner to the other results in everything getting hue 180 (which is cyan.)

Because Unity doesn’t really support discontinuous curves, it’s easier to set everything to 180 and then adjust it to the exact hue you want using the offset in the color adjustments section (in my case -3 gets us around hue 177 for the full image.) I also pulled in the top and bottom of the curve slightly to preserve the differences between hues and prevent the scene from becoming completely monochromatic.
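
In other words, the curve behaves roughly like this (a conceptual sketch, where hueVsHueOffset stands in for evaluating the curve; this is not Unity’s internal code):

// Each input hue is shifted by the curve's value at that hue, wrapping at 360
float outputHue = fmod(inputHue + hueVsHueOffset(inputHue), 360.f);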

In addition to the color fiddling, I also enabled some somewhat aggressive bloom. Skyward Sword doesn’t have true full-screen bloom, but it imitates it, and I think having it matches the spirit world vibe we’re looking for.

Why does your hue vs hue graph have so many points?

While I was fiddling with the graph I wrote this little script to help me generate curves algorithmically. You don’t need to use it, but I found it useful for testing various ideas so maybe you will too. Ideally you’d manually create a LUT instead, since the tools Unity exposes aren’t ideal for recreating this specific effect.

using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public sealed class ProfileCurveHelper : EditorWindow
{
    private VolumeProfile SelectedProfile;
    private float TargetHue = 180f; //177f;
    private float Multiplier = 1f;

    private void OnGUI()
    {
        SelectedProfile = (VolumeProfile)EditorGUILayout.ObjectField("Volume Profile", SelectedProfile, typeof(VolumeProfile), allowSceneObjects: false);

        ColorCurves colorCurves = null;
        EditorGUI.BeginDisabledGroup(SelectedProfile == null || !SelectedProfile.TryGet(out colorCurves));

        TargetHue = EditorGUILayout.FloatField("Target hue", TargetHue);
        Multiplier = EditorGUILayout.FloatField("Multiplier", Multiplier);

        if (GUILayout.Button("Configure hue vs hue curve"))
        {
            TextureCurve curve = colorCurves.hueVsHue.value;

            // Clear the curve
            int oldLength = curve.length;
            for (int i = 0; i < oldLength; i++)
            { curve.RemoveKey(0); }

            // Generate the curve
            float targetHueNormalized = Mathf.Clamp01(TargetHue / 360f);
            const int steps = 36;
            for (int i = 0; i <= steps; i++)
            {
                float t = ((float)i) / ((float)steps);
                float delta = targetHueNormalized - t;
                delta *= Multiplier;
                delta += 0.5f;
                curve.AddKey(t, delta);
            }

            curve.SetDirty();
        }

        if (GUILayout.Button("Dump curve"))
        {
            TextureCurve curve = colorCurves.hueVsHue.value;

            if (curve.length == 0)
            { Debug.Log("Curve is empty"); }

            for (int i = 0; i < curve.length; i++)
            {
                Keyframe keyFrame = curve[i];
                Debug.Log($"[{i}] {keyFrame.value} @ t = {keyFrame.time}");
            }
        }

        EditorGUI.EndDisabledGroup();
    }

    [MenuItem("Pixel Alchemy/HDRP Volume Profile Curve Tool")]
    public static void ShowEditor()
        => GetWindow<ProfileCurveHelper>().titleContent = new GUIContent("Curve Tool");
}

I have modern Photoshop, how would you do it there?

The general instructions are outlined in the HDRP documentation.

I found the following filters give a pretty good effect:

  • Brightness/Contrast (applied first – IE: It should be lower on the layers list.)
    • Brightness -18
    • Contrast 100
  • Hue/Saturation
    • Hue 177
    • Saturation 26
    • Lightness -3
    • Colorize Yes

For the particles, it actually turns out the dust particles already present in the HDRP template scene are a pretty good approximation for the ones in Skyward Sword. I simply cloned the particle system from room 2 and tweaked a few settings:

(It’s not especially visible in the reference video at the beginning of this article, but there’s also particles which emanate from Link himself. I did not attempt to recreate these since the particles were not my focus.)

All of this together results in the following scene:

It’s not 100% perfect, but it gets us in the ballpark. You wouldn’t want to get overly attached to the settings used for a test scene like this anyway since stylistic color grading like this should really be tailored to the assets in your actual game.

Creating the spirit smoke effect

Now onto the main attraction: The spirit smoke effect!

First let’s take a closer look at how the effect looks in our reference material:

Judging by the way the smoke behaves, I would hazard a guess that it’s just a static smoke texture scrolled across Link. It’s not totally clear to me if Link’s UVs are just authored in a way that allows the smoke texture to flow over him naturally or if he has a second set of UVs specifically for this effect, but either way the effect has seemingly impacted the workflow of the character modeler.
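
In shader terms that guess boils down to something like the following (a hypothetical sketch with made-up names, not the game’s actual code):

// Hypothetical declarations for the sketch
Texture2D _SmokeTex;
SamplerState sampler_SmokeTex;
float2 _ScrollDirection;

// In the pixel shader: scroll a static smoke texture across the mesh's UVs
float smoke = _SmokeTex.Sample(sampler_SmokeTex, input.uv + _ScrollDirection * _Time.y).r;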

Another more subtle detail which isn’t immediately obvious is that you can actually see through Link in the brightest parts of the smoke. This is especially visible on Link’s shoulder and hat in the closer-up clip below:

Goals for my take on the effect

As I mentioned earlier, this article is about my take on the spirit smoke effect, so we aren’t trying to recreate this effect exactly but implement our own effect to evoke a similar feeling. Skyward Sword was originally released on the Nintendo Wii, which by modern standards has a very anemic GPU with a fixed-function pipeline (IE: it did not support shaders as we know them today.) So while the effect is nice, what if we looked at this effect again in a world with programmable shaders and advanced rendering pipelines like HDRP?

My interpretation of this effect is that it represents how the ethereal energy of the Silent Realm reacts with a foreign entity such as Link.

One thing in particular which I find lacking in Skyward Sword in this regard is that the smoke doesn’t react to Link moving around, as seen below:

If you look closely, it also always flows the same way. It’s mostly up from his feet and towards his head, except on his hands where it flows down for some reason.

I also think that ideally the effect shouldn’t require any special attention from the modeler, that way it can be extended to other objects and is easier to use by small teams which might be working from pre-made assets.

With these ideas and criticisms in mind, here are my primary goals for this effect:

  1. Procedurally generate the smoke in such a way that it doesn’t affect the authoring of the character model (or any other objects we might want to apply the effect to)
  2. Have the smoke morph around the character as they move so that it looks like they’re moving around in a pool of energy (as opposed to having it glued to their skin)
  3. Drive the density/brightness of the smoke based on the character’s movement to simulate the energy temporarily becoming more dense as they move into it
  4. Works in Unity HDRP (Most of this should be doable in URP or Unity’s legacy render pipeline too, but the approach would probably be different.)

I also want to experiment with having the smoke float slightly off of the affected mesh.

What about being able to see through parts of Link?

TL;DR: I don’t care for it, it’s tricky to do in HDRP, and I want to explore an idea that conflicts with it.

One aspect of the original effect which I am not planning to recreate in my implementation is the holes on Link at the brightest parts of the smoke.

I don’t find this aspect of the effect particularly compelling as it’s not very noticeable outside of cutscenes. (I actually wasn’t even sure if it occurred outside of cutscenes at first.)

More importantly though, it doesn’t actually fit into the extensibility model of HDRP very well. Extending rendering in HDRP is primarily done through custom passes. A custom pass can’t (easily) bring back pixels behind something already rendered. We could use a custom pass to render spirit smoke objects completely separately, but HDRP does not provide much extensibility in the earlier stages of rendering and in particular doesn’t officially support rendering objects prior to lighting. (You might get excited by AfterOpaqueDepthAndNormal, but according to the documentation passes there are not supposed to touch the color buffer.)

I’m pretty sure with enough effort you could extend HDRP’s render graph to accomplish this, but I wanted to keep things relatively simple and well-supported for this article.

In theory we could also override the object’s material directly, except Unity explicitly discourages authoring HLSL shaders for materials in HDRP. Shader graph is supported, but its support is not well advertised, and in my experience it’s slightly janky, so I’m not super confident in it. Personally I’d also rather work with textual code than graphs. Additionally, the most straightforward way of accomplishing this also means making what is normally an opaque object transparent, which fundamentally changes how it’s rendered in a way that feels wrong to me for an effect like this. (Transparent objects are rendered in a completely separate stage in HDRP.)

However, it is possible to incorporate HLSL into a shader graph, and you could discard pixels instead of using alpha blending. So if you do want to accomplish that aspect of the original effect, this might be a viable alternative.
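
A sketch of what that discard could look like in the character’s own pixel shader (with a hypothetical _HoleThreshold property):

// Punch a hole wherever the smoke is at its brightest
// (_HoleThreshold is a made-up property for this sketch)
if (smokeIntensity > _HoleThreshold)
{ discard; }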

Additionally I wanted to experiment with having the smoke floating off of the surface of Link, which I don’t feel would work well with a see-through effect in the first place.

The basic custom HDRP pass

There’s a handful of different ways we might implement this effect in HDRP, but the most straightforward is by using a custom pass. Our custom pass will re-render Zelda with our spirit smoke material instead of her normal HDRP one.

There’s two primary ways to create this custom pass:

  1. Through a renderers custom pass, which applies a shader+material structured specifically as a custom HDRP pass to specific objects.
  2. Through a custom pass written in C#.

We’ll be using the latter because it enables some features we’ll use later on, and personally I find the lack of boilerplate simpler.

Creating the custom pass script

First things first we need to create our CustomPass implementation in C#. HDRP provides a template for this under Assets > Create > Rendering > HDRP C# Custom Pass. We’ll simply call it SpiritSmokePass:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public sealed class SpiritSmokePass : CustomPass
{
    public Material Material;
    public LayerMask LayerMask;

    protected override void Execute(CustomPassContext ctx)
    {
        CoreUtils.SetRenderTarget(ctx.cmd, colorBuffer: ctx.cameraColorBuffer, depthBuffer: ctx.cameraDepthBuffer, ClearFlag.None);
        CustomPassUtils.DrawRenderers(ctx, LayerMask, overrideMaterial: Material);
    }
}

There isn’t much here yet, but let’s go over what we have so far:

public Material Material;
public LayerMask LayerMask;

As you’re used to with other Unity components, public fields will be shown in the inspector window when we add our custom pass later on.

This material will be applied to objects matching the configured layer mask.

Next is the implementation of Execute, which describes what our custom pass does during rendering. It is executed once per rendering camera for every single frame.

CoreUtils.SetRenderTarget(ctx.cmd, colorBuffer: ctx.cameraColorBuffer, depthBuffer: ctx.cameraDepthBuffer, ClearFlag.None);

This tells Unity we will be rendering to the camera’s color and depth buffer, and that we don’t want to clear them. We won’t want our smoke to write to depth, but we will want to read it to make sure we respect it. (Without a depth buffer our smoke effect would render on top of objects between the camera and Zelda.)

CustomPassUtils.DrawRenderers(ctx, LayerMask, overrideMaterial: Material);

This draws all renderable objects matching the given layer mask with the configured spirit smoke material.

(CustomPassUtils.DrawRenderers is a helper function which handles some of the finer details of accomplishing this using Unity’s lower level rendering infrastructure found in the UnityEngine.Rendering namespace.)

Creating the custom pass shader and material

Next we’ll define a shader for our smoke effect. There’s no template for low level shaders like this, so just choose whatever. If you’re feeling indecisive: Assets > Create > Shader > Unlit Shader will do the job.

We’ll get to the smoke later, for now let’s just get things working:

Shader "CustomPass/SpiritSmoke"
{
    Properties
    {
        [HDR] _Color("Color", Color) = (1.0, 0.0, 0.0, 0.5)
        _MeshOffset("Mesh Offset", Range(0.0001, 0.1)) = 0.0001
    }

    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // Material property values
            float4 _Color;
            float _MeshOffset;

            // Unity-provided values
            float4x4 unity_MatrixVP;
            float4x4 unity_ObjectToWorld;

            struct PsInput
            {
                float4 Position : SV_Position;
            };

            #pragma vertex VertexMain
            PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL)
            {
                position += normal * _MeshOffset;

                PsInput result;
                float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
                result.Position = mul(unity_MatrixVP, worldPosition);
                return result;
            }

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                return _Color;
            }
            ENDHLSL
        }
    }
}

This is a pretty straightforward “transform a mesh and render it with a solid color” shader, but there’s a few things worth noting:

ZWrite Off – This disables writing to the depth buffer. At this point Zelda is already present in the depth buffer, so there’s no reason to modify it further. It wouldn’t necessarily hurt to write her to the depth buffer in this situation, but it’s atypical to write to it when rendering transparent objects since it will write regardless of whether the shaded pixel is solid.

position += normal * _MeshOffset; – This expands the mesh ever so slightly to prevent z-fighting. We’ll also be able to use this later on to make the smoke float off the surface of the mesh (hence why it’s adjustable.)

What is HLSLPROGRAM?

If you’re used to writing shaders for Unity’s legacy render pipeline, you might be used to using CGPROGRAM in your shaders.

The main difference between the two is that CGPROGRAM implies automagically including various built-in shader includes.

CGPROGRAM is considered legacy and is not officially supported with the newer render pipelines.

The use of HLSLPROGRAM is also why we’ve had to manually declare the unity_* variables in our shader. You can use #include "UnityShaderVariables.cginc" instead if you’d rather not do this yourself.

Why 0.0001? / I get artifacts when I increase the mesh offset.

Ideally we want this value to be as small as possible without introducing artifacts. There’s no exact science to 0.0001, it was determined experimentally: 0.00001 (one extra 0) is pretty close, but still has some artifacts.

Unity seems to agree with 0.0001 as well, it’s what they use in their custom renderables pass template.

You might be tempted to set it higher, but if you go too big you can get artifacts caused by pixels getting shaded twice by the overlapping parts of the mesh. This is problematic since we don’t write depth and we’re using alpha blending. (I might address how to deal with this properly in a future article.)

See the below screenshot for an illustration of what I’m talking about. This is with the mesh offset upped to 0.01. See how there’s a brighter red outline near the tops of her boots? This is happening because first a triangle on the top of her boot is rendered, and red shading is blended onto the color buffer. Then later on a different triangle from the lower part of her boot is rendered and red shading is blended onto the same pixel a second time.

You can see this anywhere on her mesh where two triangles overlap after being offset. Essentially any pixel which is shaded by two offset triangles will be “too red”.

This isn’t a problem with a low offset even when triangles overlap because we have depth testing enabled and Zelda was already previously drawn to the depth buffer. Any pixels shaded by triangles under other ones will fail the depth test.

Now that the shader’s done, we need to create a corresponding material. Simply right-click the shader in the assets window and select Create > Material.

Finally we need to designate a layer to hold the objects targeted by our effect. Go to Edit > Project Settings... > Tags and Layers and name one of the layers SpiritSmoke.

Enabling the custom pass in our scene

Now that we have our script, shader, and materials created we’re ready to add the custom pass to our scene. Pick any game object (probably the same one you already use to manage global volumes) and add a Custom Pass Volume component to it.

Of the basic custom pass volume settings, only one is especially relevant to us and that’s the injection point. The injection point determines where in the render pipeline our pass is added, and consequently which render pipeline resources it has access to.

The diagram below from the Unity HDRP manual shows an overview of its render pipeline with custom pass injection points highlighted in purple.

As seen in the C# script above, our pass reads/writes the color buffer and reads the depth buffer. The earliest injection point where both of these are ready is Before PreRefraction, so that’s what we’ll use. It’ll work in any of the later stages too, but having it early means our effect will be affected by later effects like refraction, motion blur, or color grading (the last of which happens during Remaining Post Process.)

Next click the + button to add a custom pass to the custom pass volume, selecting the SpiritSmoke custom pass script we created.

In the custom pass settings select the following:

What are the target buffer settings used for and why are they set to ‘None’?

If you set values here Unity will automatically call CoreUtils.SetRenderTarget for you with the specified settings.

Using Camera/Camera/None here is equivalent to the SetRenderTarget call we already wrote in our script. If you find the configuration flexibility to be valuable, it’d be perfectly fine to use these settings instead and remove the call from the script. My personal preference is to have the SetRenderTarget call be explicit and apparent when reading the logic of the pass.

In summary, here’s what the final custom pass volume configuration should look like:

Finally we need to assign Zelda to the layer we created earlier. Select Zelda’s root object in the hierarchy window and in the inspector set the layer to the one you created earlier. When prompted, ask Unity to apply the layer to child objects as well.

If everything is set up correctly, you should find yourself with a red-tinted Zelda (or a cyan-tinted one if you left the color grading volume enabled.)

Creating a procedural smoke texture

Now that we have the ability to layer effects on top of Zelda, let’s get started on the smoke. As the basis for the smoke I’m going to use a procedural texture based on fractal Brownian motion or fBm. This texture goes by a few different names (EG: Blender calls it a Musgrave Texture after Ken Musgrave who likely deserves credit for popularizing the use of fBm in graphics) and implementations vary slightly, but the way it works is generally the same: Take a source of noise and layer it multiple times with an offset and decreasing influence for each layer.

As seen below, this results in a soft smoke-like pattern, which makes it great for our use-case.

Another thing that makes it great for our use-case is that it’s smooth in all three dimensions. If you slowly scrub through the Z slice in the visualization above you’ll see that the texture slowly morphs as you move through the third dimension. This will be helpful as it means we can sample fBm in world space and get something sensible that flows over our object without any extra effort.

Feel free to fiddle with the other parameters to get a feel for how the texture behaves.

What do the other parameters mean?

  • Octaves - This is the number of layers of noise that will be accumulated
  • Initial influence - This is how much the first layer impacts the result
  • Influence multiplier - The multiplier applied to the influence for each layer
  • Position multiplier - The multiplier applied to the position for each layer
  • Zoom - Scales the input x/y coordinate before it is fed into the fBm function
    • (Z isn’t scaled since scaling it is confusing for this 2D visualization.)

If you’re familiar with traditional fBm you might be used to seeing lacunarity and H parameters. These are not directly represented in my implementation. If you want to visualize “vanilla” fBm, set Initial influence to 1, Position multiplier to lacunarity, and Influence multiplier to pow(lacunarity, -H).
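
For reference, here’s that mapping applied to the textbook formulation (a sketch, where noise() stands in for any smooth 3D noise function):

float fbmClassic(float3 position, float lacunarity, float H, int octaves)
{
    float result = 0.f;
    float influence = 1.f;                // "Initial influence" = 1
    for (int i = 0; i < octaves; i++)
    {
        result += influence * noise(position);
        position *= lacunarity;           // "Position multiplier" = lacunarity
        influence *= pow(lacunarity, -H); // "Influence multiplier" = pow(lacunarity, -H)
    }
    return result;
}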

Adding fBm to our custom pass

As mentioned earlier, fBm works off of layers of noise, so we need a source of noise. For this we’ll be hashing the input position with David Hoskins’s hash without sine and smoothing it out using Morgan McGuire’s noise function (which I’ve dubbed smoothNoise.)

Here’s our shader updated with fBm and supporting functions. (I’d highlight the lines that have changed but almost all of them did 😅)

Shader "CustomPass/SpiritSmoke"
{
    Properties
    {
        [HDR] _Color("Color", Color) = (1.0, 0.0, 0.0, 1.0)
        _MeshOffset("Mesh Offset", Range(0.0001, 0.1)) = 0.0001
        _PositionScale("Position Scale", Range(0, 100)) = 10
        _TimeScale("Time Scale", Range(0, 10)) = 1

        _FbmOctaves("fBm Octaves", Integer) = 3
        _FbmInitialInfluence("fBm Initial Influence", Range(0, 1)) = 0.5
        _FbmInfluenceMultiplier("fBm Influence Multiplier", Range(0, 1)) = 0.5
        _FbmPositionMultiplier("fBm Position Multiplier", Range(0.001, 10)) = 2
    }

    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // Material property values
            float4 _Color;
            float _MeshOffset;
            float _PositionScale;
            float _TimeScale;

            int _FbmOctaves;
            float _FbmInitialInfluence;
            float _FbmInfluenceMultiplier;
            float _FbmPositionMultiplier;

            // Unity-provided values
            float4x4 unity_MatrixVP;
            float4x4 unity_ObjectToWorld;
            float4 _Time;

            struct PsInput
            {
                float4 Position : SV_Position;
                float4 FbmPosition : FbmPosition;
            };

            #pragma vertex VertexMain
            PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL)
            {
                position += normal * _MeshOffset;

                PsInput result;
                float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
                result.Position = mul(unity_MatrixVP, worldPosition);
                result.FbmPosition = worldPosition;
                return result;
            }

            // hashing function (with precision tweaks by Morgan McGuire)
            // Copyright (c) 2014 David Hoskins
            // Licensed under the MIT license
            // https://www.shadertoy.com/view/4djSRW
            float hash(float p)
            {
                p = frac(p * 0.011f);
                p *= p + 7.5f;
                p *= p + p;
                return frac(p);
            }

            // smoothNoise calculates noise at 8 corners of an integer-aligned cube and interpolates between them based on the fractional part, essentially smoothing it out
            // https://www.shadertoy.com/view/4dS3Wd
            // By Morgan McGuire @morgan3d, http://graphicscodex.com
            // Reuse permitted under the BSD license.
            float smoothNoise(float3 seed)
            {
                // Split the seed into integer and fractional parts
                float3 i = floor(seed);
                float3 f = frac(seed);

                // For performance, compute the base input to a 1D hash from the integer part of the argument and the incremental change to the 1D based on the 3D -> 1D wrapping
                const float3 step = float3(110, 241, 171);
                float n = dot(i, step);

                // Calculate 8 random values for each corner of a cube to interpolate between
                float s000 = hash(n + dot(step, float3(0, 0, 0)));
                float s100 = hash(n + dot(step, float3(1, 0, 0)));
                float s010 = hash(n + dot(step, float3(0, 1, 0)));
                float s110 = hash(n + dot(step, float3(1, 1, 0)));
                float s001 = hash(n + dot(step, float3(0, 0, 1)));
                float s101 = hash(n + dot(step, float3(1, 0, 1)));
                float s011 = hash(n + dot(step, float3(0, 1, 1)));
                float s111 = hash(n + dot(step, float3(1, 1, 1)));

                // Interpolate between the corners using the smoothstep of the fractional bit
                float3 u = f * f * (3.0 - 2.0 * f);
                return lerp(
                    lerp(
                        lerp(s000, s100, u.x),
                        lerp(s010, s110, u.x),
                        u.y
                    ),
                    lerp(
                        lerp(s001, s101, u.x),
                        lerp(s011, s111, u.x),
                        u.y
                    ),
                    u.z
                );
            }

            // This is a variant of fBm, a fractal which accumulates layers of offset noise to create an interference pattern
            // Based on the version described in Texturing and Modeling: A Procedural Approach
            float fbm(float3 position)
            {
                float result = 0.f;
                float influence = _FbmInitialInfluence;

                for (int i = 0; i < _FbmOctaves; i++)
                {
                    result += influence * smoothNoise(position);
                    position *= _FbmPositionMultiplier;
                    influence *= _FbmInfluenceMultiplier;
                }

                return result;
            }

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                float3 timeInfluence = _Time.yyy * _TimeScale;
                float x = fbm(input.FbmPosition.xyz * _PositionScale + timeInfluence);
                return _Color * x;
            }
            ENDHLSL
        }
    }
}

As mentioned earlier we’re sampling fBm in world space, so the world position is now preserved in VertexMain for use in the pixel shader.

I also added some logic for offsetting the world position based off of time with a configurable scale to give a nice flowing smoke effect even when Zelda isn’t moving.

You can get some pretty interesting effects based on how you drive this offset. For example, the texture for the logo on my home page is based on a 2D fBm with the time offset driven to flow up and wobble left and right to give a mystical fire effect. If your scene has a natural wind vector, it would probably make sense to use it here. (Although if your wind changes direction or magnitude you’d want to animate an offset on the CPU and feed it into the material instead of using _Time directly.)
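
As a sketch of that CPU-driven approach, a small component like the one below could accumulate the offset each frame. (_FbmOffset is a hypothetical property; the shader would need to expose it and use it in place of the _Time-based offset.)

using UnityEngine;

public sealed class SmokeWindDriver : MonoBehaviour
{
    public Material SmokeMaterial;
    public Vector3 WindVelocity = new Vector3(0.1f, 1f, 2f);

    private Vector3 accumulatedOffset;

    private void Update()
    {
        // Integrating the velocity each frame keeps the flow continuous
        // even when WindVelocity changes direction or magnitude
        accumulatedOffset += WindVelocity * Time.deltaTime;
        SmokeMaterial.SetVector("_FbmOffset", accumulatedOffset);
    }
}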

After making these changes, we now have a red, smoky Zelda:

Looking good so far! Just gotta give it a dose of ethereal vibes…

(If you’re following along and the smoke seems a lot dimmer, you need to update the color on your material. I changed the default alpha from 0.5 to 1.0.)

Giving it a dose of ethereal vibes

To make our smoke more like ectoplasmic goo, I wanted to essentially take just the highlights of the smoke and eliminate everything else.

To do this I added a threshold parameter to set a level below which all fBm values are rejected. I found 0.95 yielded the general appearance I was looking for, but the edges were too harsh. I solved this by scaling the value so that the threshold (0.95) was mapped to 0 (you can toggle this behavior using the Scaled checkbox in the visualization below.)

I did not end up using it, but I also introduced the ability to apply an exponent to the fBm value. I’ve left it in in case you want to mess with it.

Finally I tweaked the return value to prevent the alpha from exceeding the 0..1 range as it doesn’t make sense for it to be able to. (It’s OK and even desirable for the color to be able to since we’re targeting an HDR color buffer.)

Here’s an updated visualization that lets you play with these values. The Default Preset button will reset things to the default cloudy fBm if you want to start from that and tweak for yourself. The Ethereal Preset button uses the settings I settled on within Unity.

These changes result in the following updated PixelMain:

float4 PixelMain(PsInput input) : SV_Target
{
    float3 timeInfluence = _Time.yyy * _TimeScale;
    float x = fbm(input.FbmPosition.xyz * _PositionScale + timeInfluence);
    x = pow(x, _Power);
    if (x < _Threshold)
    {
        x = 0.f;
    }
    else
    {
        x -= _Threshold;
        x /= 1.f - _Threshold;
    }
    return float4(_Color.rgb * x, _Color.a * saturate(x));
}

Updated shader in full

Here’s the full shader again with these changes, highlighted lines being the ones we changed.

Shader "CustomPass/SpiritSmoke"
{
    Properties
    {
        [HDR] _Color("Color", Color) = (1.0, 0.0, 0.0, 1.0)
        _MeshOffset("Mesh Offset", Range(0.0001, 0.1)) = 0.0001
        _PositionScale("Position Scale", Range(0, 100)) = 10
        _TimeScale("Time Scale", Range(0, 10)) = 1

        _FbmOctaves("fBm Octaves", Integer) = 3
        _FbmInitialInfluence("fBm Initial Influence", Range(0, 1)) = 0.5
        _FbmInfluenceMultiplier("fBm Influence Multiplier", Range(0, 1)) = 0.5
        _FbmPositionMultiplier("fBm Position Multiplier", Range(0.001, 10)) = 2

        _Power("Power", Range(0.001, 10)) = 1
        _Threshold("Threshold", Float) = 0.95
    }

    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // Material property values
            float4 _Color;
            float _MeshOffset;
            float _PositionScale;
            float _TimeScale;

            int _FbmOctaves;
            float _FbmInitialInfluence;
            float _FbmInfluenceMultiplier;
            float _FbmPositionMultiplier;

            float _Power;
            float _Threshold;

            // Unity-provided values
            float4x4 unity_MatrixVP;
            float4x4 unity_ObjectToWorld;
            float4 _Time;

            struct PsInput
            {
                float4 Position : SV_Position;
                float4 FbmPosition : FbmPosition;
            };

            #pragma vertex VertexMain
            PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL)
            {
                position += normal * _MeshOffset;

                PsInput result;
                float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
                result.Position = mul(unity_MatrixVP, worldPosition);
                result.FbmPosition = worldPosition;
                return result;
            }

            // hashing function (with precision tweaks by Morgan McGuire)
            // Copyright (c) 2014 David Hoskins
            // Licensed under the MIT license
            // https://www.shadertoy.com/view/4djSRW
            float hash(float p)
            {
                p = frac(p * 0.011f);
                p *= p + 7.5f;
                p *= p + p;
                return frac(p);
            }

            // smoothNoise calculates noise at 8 corners of an integer-aligned cube and interpolates between them based on the fractional part, essentially smoothing it out
            // https://www.shadertoy.com/view/4dS3Wd
            // By Morgan McGuire @morgan3d, http://graphicscodex.com
            // Reuse permitted under the BSD license.
            float smoothNoise(float3 seed)
            {
                // Split the seed into integer and fractional parts
                float3 i = floor(seed);
                float3 f = frac(seed);

                // For performance, compute the base input to a 1D hash from the integer part of the argument and the incremental change to the 1D based on the 3D -> 1D wrapping
                const float3 step = float3(110, 241, 171);
                float n = dot(i, step);

                // Calculate 8 random values for each corner of a cube to interpolate between
                float s000 = hash(n + dot(step, float3(0, 0, 0)));
                float s100 = hash(n + dot(step, float3(1, 0, 0)));
                float s010 = hash(n + dot(step, float3(0, 1, 0)));
                float s110 = hash(n + dot(step, float3(1, 1, 0)));
                float s001 = hash(n + dot(step, float3(0, 0, 1)));
                float s101 = hash(n + dot(step, float3(1, 0, 1)));
                float s011 = hash(n + dot(step, float3(0, 1, 1)));
                float s111 = hash(n + dot(step, float3(1, 1, 1)));

                // Interpolate between the corners using the smoothstep of the fractional bit
                float3 u = f * f * (3.0 - 2.0 * f);
                return lerp(
                    lerp(
                        lerp(s000, s100, u.x),
                        lerp(s010, s110, u.x),
                        u.y
                    ),
                    lerp(
                        lerp(s001, s101, u.x),
                        lerp(s011, s111, u.x),
                        u.y
                    ),
                    u.z
                );
            }

            // This is a variant of fBm, a fractal which accumulates layers of offset noise to create an interference pattern
            // Based on the version described in Texturing and Modeling: A Procedural Approach
            float fbm(float3 position)
            {
                float result = 0.f;
                float influence = _FbmInitialInfluence;

                for (int i = 0; i < _FbmOctaves; i++)
                {
                    result += influence * smoothNoise(position);
                    position *= _FbmPositionMultiplier;
                    influence *= _FbmInfluenceMultiplier;
                }

                return result;
            }

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                float3 timeInfluence = _Time.yyy * _TimeScale;
                float x = fbm(input.FbmPosition.xyz * _PositionScale + timeInfluence);
                x = pow(x, _Power);
                if (x < _Threshold)
                {
                    x = 0.f;
                }
                else
                {
                    x -= _Threshold;
                    x /= 1.f - _Threshold;
                }
                return float4(_Color.rgb * x, _Color.a * saturate(x));
            }
            ENDHLSL
        }
    }
}

Here’s the settings I used:

Spirit smoke material settings

(The color is 191, 191, 191, 192 with an intensity of 0.917)

And here’s the final result!

Issues with movement

Zelda’s looking a little stiff from standing in one place for too long, let’s get her walking around!

That…doesn’t look great.

Our procedural texture is way too high frequency. It looks nice when she’s mostly still as with the idle animation, but when she moves through the field of smoke it’s just way too distracting. Not quite what I had in mind when I said I wanted the smoke to react to Zelda moving through it 😅

I knew this was potentially going to be a problem, but I definitely didn’t think it’d be quite this bad.

Switching to model-space

Thankfully there’s an easy solution to this. We can just map the spirit smoke texture in model space instead of world space:

#pragma vertex VertexMain
PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL)
{
    position += normal * _MeshOffset;

    PsInput result;
    float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
    result.Position = mul(unity_MatrixVP, worldPosition);
    result.FbmPosition = float4(position, 1.f);
    return result;
}

This fixes the issue in general, but it does have some downsides.

The most obvious one is that the spirit smoke no longer reacts to the object moving around within the world. For a single object this is easy to fix by offsetting FbmPosition by a scaled object position.

The second is that the object’s scale is no longer properly reflected in the spirit smoke. This can be seen in the screenshot below where the left ball is 5x the size of the other. This is also easy to fix as we can just scale down the FbmPosition (or adjust _PositionScale.)
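
A sketch of both fixes in VertexMain, assuming hypothetical _ObjectScale and _ObjectPositionOffset properties fed from C#:

float3 _ObjectPositionOffset; // hypothetical: object's world position, pre-scaled on the CPU
float _ObjectScale;           // hypothetical: object's uniform scale

// In VertexMain:
result.FbmPosition = float4(position * _ObjectScale + _ObjectPositionOffset, 1.f);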

The third and final issue is that it doesn’t work with skinned meshes. That’s a biiiiitttt of a problem considering Zelda is a skinned mesh.

Let’s focus on solving the skinned mesh issue first since it’s arguably the biggest issue here.

Preserving neutral vertex positions for skinned meshes

If you read through our vertex shader implementation you’ll find that there’s nothing in it concerning skeletal animation. This is because Unity applies skeletal animation to our mesh ahead of time in a compute shader. Essentially we’re rendering a copy of the mesh which has already had all of its vertices transformed for the current frame of animation. You can see the preparation of this temporary pre-transformed mesh happening in Unity’s frame debugger as shown below:

How does skeletal animation work?

The finer details of how skeletal animation works are far beyond the scope of this article, but if you want a quick overview I’d recommend reading over the skin-related sections of the glTF file specification overview.

The short version though is that artists make a mesh and a skeleton. They then assign each vertex of that mesh/skin to one or more bones/joints in the skeleton, with each association being given a weight (level of influence) between 0 and 100%. The animation system drives the transforms of each of those bones (which Unity represents as objects in the scene hierarchy) and then each vertex is transformed by their associated bone transformations proportional to the weight for each. This can all be done in your vertex shader, but if you render a particular instance of a mesh many times per frame it can make sense to do it once ahead of time and save that pre-transformed version into a temporary buffer – which is exactly what Unity does.
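
A conceptual sketch of that per-vertex blend, assuming up to four bone influences per vertex and a hypothetical _BoneMatrices buffer (Unity’s actual implementation lives in its skinning compute shader):

// Hypothetical buffer of bone transforms driven by the animation system
StructuredBuffer<float4x4> _BoneMatrices;

float3 SkinVertex(float3 position, uint4 boneIndices, float4 boneWeights)
{
    float3 result = 0;
    [unroll]
    for (int i = 0; i < 4; i++)
    {
        // Each bone pulls the vertex toward its own transformation,
        // proportional to that bone's weight (the weights sum to 1)
        result += boneWeights[i] * mul(_BoneMatrices[boneIndices[i]], float4(position, 1.f)).xyz;
    }
    return result;
}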

All of this means that the vertex positions fed into VertexMain will be moving around, which is not desirable in our situation. Ideally we want to use the vertex positions of the neutral pose (usually a T-pose or A-pose in the case of humanoid models.)

Unfortunately as far as I’m aware Unity does not expose this information to our shader.

It’s a bit of a hack, but we can work around this by copying all of the mesh’s vertex positions to an unused attribute. I went with TexCoord7. (Unity does not allow defining custom vertex attributes, but there’s 8 texture coordinate buffers so it’s not unreasonable to eat one for this purpose. Do note that some are reserved by HDRP, see the comments below.)

Ideally we’d do this at asset import time, but I’m not aware of a way to do so for mesh formats natively supported by Unity. So we just do it in a little C# script:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public sealed class SpiritSmoke : MonoBehaviour
{
    // The vertex attribute to use for the neutral pose positions
    // This should be a texture coordinate and must match what is used in the SpiritSmoke shader.
    // Channels 0-3 are expected to be available for normal texture use
    // Channels 4-5 are used internally for HDRP motion vector stuff (see MotionVectorVertexShaderCommon.hlsl)
    // Channels 6-7 should be free
    // (This is inferred from the HDRP source code, it is not explicitly documented anywhere.)
    private const VertexAttribute NeutralPosePositionsAttribute = VertexAttribute.TexCoord7;
    private const int NeutralPosePositionsChannel = NeutralPosePositionsAttribute - VertexAttribute.TexCoord0;

    private void Awake()
    {
        List<Vector3> positions = null;
        foreach (SkinnedMeshRenderer renderer in GetComponentsInChildren<SkinnedMeshRenderer>())
        {
            Mesh mesh = renderer.sharedMesh;
            if (mesh == null)
            { continue; }

            // Add neutral pose positions to skinned meshes if necessary
            if (!mesh.HasVertexAttribute(NeutralPosePositionsAttribute))
            {
                mesh.GetVertices(positions ??= new List<Vector3>());
                mesh.SetUVs(NeutralPosePositionsChannel, positions);
                mesh.UploadMeshData(markNoLongerReadable: false);
            }
        }
    }
}

This script can be applied to the top-level object containing the renderers you wish to use with this effect and it will automatically update all meshes used by child skinned mesh renderers.

Next we need to update our vertex shader to use this new attribute instead of POSITION (which is the transformed pose that we don’t want.)

#pragma vertex VertexMain
PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL, float3 neutralPosePosition : TEXCOORD7)
{
    position += normal * _MeshOffset;

    PsInput result;
    float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
    result.Position = mul(unity_MatrixVP, worldPosition);
    result.FbmPosition = float4(neutralPosePosition, 1.f);
    return result;
}

While I was at it I also changed TimeScale to a vector, since the implicit change of angle revealed that using the same time scale for all axes wasn’t ideal:

Shader "CustomPass/SpiritSmoke"
{
    Properties
    {
        // ...
        _TimeScale("Time Scale", Vector) = (1, 1, 1, 1)
        // ...
    }

    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // Material property values
            float4 _Color;
            float4 _TimeScale;
            float _MeshOffset;
            float _PositionScale;

            // ...

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                float3 timeInfluence = _Time.yyy * (_TimeScale.xyz * _TimeScale.w);
                // ...
            }
            ENDHLSL
        }
    }
}

Since Shader Lab properties only support 4-component vectors, and constant buffer packing rules mean we’d be wasting a scalar on padding anyway, I opted to make w a multiplier for independently adjusting the overall speed.

For my speed scale I chose (0.1, 1, 2, 1), all other material settings have gone unchanged.

Here’s what our walk cycle looks like with these changes, much better!

Modifying our custom pass to allow per-object material settings

Earlier I mentioned that the world movement and scale issues could be solved by adjusting the offset/scale of the fBm position, and you might be thinking that sounds problematic because our material properties are applied indiscriminately to all objects on the SpiritSmoke layer. This is the typical way to apply a custom pass to renderables in HDRP and is the only way explicitly documented by Unity.

However, Unity provides an overload of CustomPassUtils.DrawRenderers which takes an array of shader tags. It’s poorly documented, but these shader tags specifically correspond to the value of the LightMode shader tag. (Which doesn’t actually have much to do with lighting and is likely named as such for legacy reasons. HDRP uses it for plenty of non-light-related things.)

We can use this to render all renderables which reference a material with a particular LightMode pass. Then we just have to attach our spirit smoke material to our renderables, no need to specify a layer anymore.

To make this switch, first we add a LightMode tag to our shader:

Shader "CustomPass/SpiritSmoke"
{
    // ...
    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Tags { "LightMode" = "SpiritSmoke" }
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // ...
            ENDHLSL
        }
    }
}

And then update the custom pass to use the tag instead of the layer+override material:

using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public sealed class SpiritSmokePass : CustomPass
{
    private ShaderTagId[] LightModeTags;

    protected override void Setup(ScriptableRenderContext renderContext, CommandBuffer cmd)
    {
        LightModeTags = new[] { new ShaderTagId("SpiritSmoke") };
    }

    protected override void Execute(CustomPassContext ctx)
    {
        CoreUtils.SetRenderTarget(ctx.cmd, colorBuffer: ctx.cameraColorBuffer, depthBuffer: ctx.cameraDepthBuffer, ClearFlag.None);
        CustomPassUtils.DrawRenderers(ctx, LightModeTags, ~0);
    }
}

Why ~0 for the layer mask?

~0 represents a bitmask of all 1’s and as such matches all layers; it’s equivalent to the “Everything” option in Unity’s layer mask selector.

Since we’re using the LightMode tag to filter the renderables list now, we don’t need to filter on the layer ID anymore. I removed it since I felt like it was confusing to have two different filters, but you can leave it if you find value in having it. (You could even do a hybrid of both pass methods if you wanted to use a global material for static objects and an object-specific one for dynamic ones. Our shader still works fine as an override material.)

Now to apply the spirit smoke effect to our objects, just add an additional material slot and assign the spirit smoke material to it:

Handling submeshes

When an object has multiple submeshes, each material slot corresponds to an individual submesh. Additional materials beyond the submesh count sadly only apply to the last submesh, so this technique doesn’t work well with them.

As a workaround, you can add a child to affected objects with an identity transform (IE: no local position/rotation/scaling – that way it inherits the transformation of the “real” object) and the same mesh with a renderer configured to use the spirit smoke material. See below for an example using the Unity material test ball, where the left inspector is the “normal” ball rendered by HDRP and the right inspector is the fake spirit smoke ball rendered by our custom pass.

(I also disabled things relating to shadows and reflections for this object. This isn’t strictly necessary since the materials on this object have no passes that affect those, but I felt it helped clarify intent and avoid any unpleasant surprises.)

Another less flexible approach is to continue using a global override material and extract translation from the object’s transformation matrix. See the section at the end of this article for details.
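
(As a sketch of the idea: the object’s translation lives in the last column of the object-to-world matrix, so it can be pulled out with a one-liner.)

// The object's world position is the translation column of unity_ObjectToWorld
float3 objectPosition = float3(unity_ObjectToWorld._m03, unity_ObjectToWorld._m13, unity_ObjectToWorld._m23);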

Since I’m implementing this effect just for fun I’m assuming more flexibility is automatically better. In a real game you’d want to have a conversation with the rest of your team to figure out which set of compromises makes sense for your project. If your artists aren’t using submeshes then this is a non-issue. If spirit smoke objects never move around then all of this model-space stuff would’ve been a non-issue. If you aren’t using skinned or morph target animation with spirit smoke objects the neutral pose stuff is a non-issue.

Using shader keywords to handle skinned mesh differences

Right now our shader only works with skinned meshes due to the neutral pose position workaround. This workaround also has a quirk in that it doesn’t work in edit mode unless the game has entered play mode at least once to allow the mesh modification to take place.

Ideally we’d like to only use this workaround when necessary, and we can do so by introducing a shader keyword to create a variant of our vertex shader:

#pragma multi_compile_vertex _ USE_NEUTRAL_POSE_ATTRIBUTE
#pragma vertex VertexMain
PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL, float3 neutralPosePosition : TEXCOORD7)
{
    position += normal * _MeshOffset;

    PsInput result;
    float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
    result.Position = mul(unity_MatrixVP, worldPosition);
#if USE_NEUTRAL_POSE_ATTRIBUTE
    result.FbmPosition = float4(neutralPosePosition, 1.f);
#else
    result.FbmPosition = float4(position, 1.f);
#endif
    return result;
}

(It is important to use multi_compile instead of shader_feature here because we’re going to toggle this keyword at runtime. If you used shader_feature instead Unity wouldn’t bother to build the neutral pose variant because it isn’t actually referenced by any materials.)

Before we update the SpiritSmoke component to toggle this keyword, let’s add a quick extension method to Material/Shader to help us identify our spirit smoke material:

using UnityEngine;
using UnityEngine.Rendering;

public static class UnityExtensions
{
    public static bool HasTagPair(this Material material, ShaderTagId tag, ShaderTagId tagValue)
        => material.shader.HasTagPair(tag, tagValue);

    public static bool HasTagPair(this Shader shader, ShaderTagId tag, ShaderTagId tagValue)
    {
        for (int passIndex = 0; passIndex < shader.passCount; passIndex++)
        {
            if (shader.FindPassTagValue(passIndex, tag) == tagValue)
            { return true; }
        }

        return false;
    }
}

This method will make it easier to identify our material using the LightMode tag we defined earlier. (This is preferable over checking the shader’s name or something since it identifies our material using the same method as our custom pass. This also makes it easier to introduce other shaders to be rendered during this pass.)

Next up we can modify our SpiritSmoke component to clone the material and enable this keyword for skinned renderers. On Unity 2022 we take advantage of the new material variant system (not to be confused with shader variants discussed earlier) to ensure edits to the base material are propagated to our objects.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public sealed class SpiritSmoke : MonoBehaviour
{
    // The vertex attribute to use for the neutral pose positions
    // This should be a texture coordinate and must match what is used in the SpiritSmoke shader.
    // Channels 0-3 are expected to be available for normal texture use
    // Channels 4-5 are used internally for HDRP motion vector stuff (see MotionVectorVertexShaderCommon.hlsl)
    // Channels 6-7 should be free
    // (This is inferred from the HDRP source code, it is not explicitly documented anywhere.)
    private const VertexAttribute NeutralPosePositionsAttribute = VertexAttribute.TexCoord7;
    private const int NeutralPosePositionsChannel = NeutralPosePositionsAttribute - VertexAttribute.TexCoord0;

    private void Awake()
    {
        List<Vector3> positions = null;
        Dictionary<(bool IsSkinned, Material Material), Material> materialMap = new();
        ShaderTagId lightModeTag = new("LightMode");
        ShaderTagId spiritSmokeTag = new("SpiritSmoke");

        foreach (Renderer renderer in GetComponentsInChildren<Renderer>())
        {
            SkinnedMeshRenderer skinnedRenderer = renderer as SkinnedMeshRenderer;
            bool isSkinned = skinnedRenderer is not null;

            // Clone and configure any spirit smoke materials
            Material[] materials = renderer.sharedMaterials;

            bool clonedAny = false;
            for (int i = 0; i < materials.Length; i++)
            {
                Material material = materials[i];

                // If this material has been cloned already, just replace it
                if (materialMap.TryGetValue((isSkinned, material), out Material existingClone))
                {
                    materials[i] = existingClone;
                    clonedAny = true;
                    continue;
                }

                // Skip this material if it isn't spirit smoke
                if (!material.HasTagPair(lightModeTag, spiritSmokeTag))
                { continue; }

                // Otherwise make a fresh clone
                Material clone = new(material);
#if UNITY_2022_1_OR_NEWER && UNITY_EDITOR // On newer versions of Unity we make our clone a variant so that modifications to the base material propagate to this one
                clone.parent = material;
#endif
                clone.name += " (Script Clone)";
                materialMap.Add((isSkinned, material), clone);
                materials[i] = clone;
                clonedAny = true;

                // If this is a skinned renderer, the material will use the neutral pose attribute we create below
                if (isSkinned)
                { clone.SetKeyword(new LocalKeyword(clone.shader, "USE_NEUTRAL_POSE_ATTRIBUTE"), true); }
            }

            // Nothing left to do for this renderer if it did not have any spirit smoke materials
            if (!clonedAny)
            { continue; }

            // Apply the updated materials array to the renderer
            renderer.sharedMaterials = materials;

            // Add neutral pose positions to skinned meshes if necessary
            if (isSkinned)
            {
                Mesh mesh = skinnedRenderer.sharedMesh;
                if (mesh != null && !mesh.HasVertexAttribute(NeutralPosePositionsAttribute))
                {
                    mesh.GetVertices(positions ??= new List<Vector3>());
                    mesh.SetUVs(NeutralPosePositionsChannel, positions);
                    mesh.UploadMeshData(markNoLongerReadable: false);
                }
            }
        }
    }
}

As you might’ve noticed, I also updated the script to affect non-skinned renderers. There’s no difference in the cloned materials yet, but there will be later.

We also make an effort to avoid cloning materials unnecessarily by re-using the clone for children which use the same material and mesh type.

Reintroducing the reaction to world-space movement

Now that we have a separate material for each spirit smoke object, we can work on driving the spirit smoke differently for each of them.

First let’s introduce an additional matrix to enable arbitrary transformations of FbmPosition:

int _FbmOctaves;
float _FbmInitialInfluence;
float _FbmInfluenceMultiplier;
float _FbmPositionMultiplier;
float4x4 _FbmTransform;

// ...
PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL, float3 neutralPosePosition : TEXCOORD7)
{
    // ...
#if USE_NEUTRAL_POSE_ATTRIBUTE
    result.FbmPosition = float4(neutralPosePosition, 1.f);
#else
    result.FbmPosition = float4(position, 1.f);
#endif
    result.FbmPosition = mul(_FbmTransform, result.FbmPosition);
    return result;
}

Why wasn’t _FbmTransform added to the Properties block?

The properties block only determines what appears in the material inspector GUI. It isn’t needed for values which are only modified from code.

Unity also doesn’t support matrices in the properties block, which is fairly sensible since the raw values of a matrix are rarely user-friendly for direct editing purposes.
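
Setting such a value from C# is a one-liner with the Material API; a minimal sketch (using Matrix4x4.identity as a placeholder value, exactly the call our component makes below):

material.SetMatrix(Shader.PropertyToID("_FbmTransform"), Matrix4x4.identity);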

From there we can modify our SpiritSmoke component to update this transform based on the object’s transform:

using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Rendering;

public sealed class SpiritSmoke : MonoBehaviour
{
    // ...
    private const VertexAttribute NeutralPosePositionsAttribute = VertexAttribute.TexCoord7;
    private const int NeutralPosePositionsChannel = NeutralPosePositionsAttribute - VertexAttribute.TexCoord0;

    private Material[] Materials;
    private static readonly int FbmTransformPropertyId = Shader.PropertyToID("_FbmTransform");

    private Quaternion LastOrientation = Quaternion.identity;
    private Quaternion SmokeOrientation = Quaternion.identity;

    [Range(0f, 1f)] public float PositionScale = 0.15f;
    [Range(0f, 1f)] public float RotationScale = 0.1f;

    private void Awake()
    {
        // ...

        // Create the materials array
        Materials = materialMap.Values.ToArray();
        if (Materials.Length == 0)
        {
            Debug.LogWarning($"{nameof(SpiritSmoke)} is attached to '{gameObject.name}', but it has no applicable materials.", this);
            enabled = false;
        }

        // If this game object is static, update the transform once and then disable
        if (gameObject.isStatic)
        {
            LateUpdate();
            enabled = false;
        }
    }

    private void LateUpdate()
    {
        // Animate the smoke orientation based on the change in object orientation
        Quaternion orientation = transform.rotation;
        Quaternion delta = Quaternion.Inverse(LastOrientation) * orientation;
        delta = Quaternion.SlerpUnclamped(Quaternion.identity, delta, RotationScale);
        SmokeOrientation = (delta * SmokeOrientation).normalized; // Normalization needed due to floating point error accumulation eventually denormalizing the quaternion
        LastOrientation = orientation;

        // Build fBm transform and apply it to all cloned materials
        Matrix4x4 fbmTransform = Matrix4x4.TRS
        (
            transform.position * PositionScale,
            SmokeOrientation,
            // We don't do anything special for scale so that the smoke looks similar between objects
            // If you plan to animate scale you'd probably want to come up with a strategy for handling it better
            transform.lossyScale
        );

        foreach (Material material in Materials)
        { material.SetMatrix(FbmTransformPropertyId, fbmTransform); }
    }
}

In Awake we added an extra step to save the cloned materials so we can update them later. We also added some logic to disable the component when it won’t need updating (either because it’s misconfigured or the object is marked as static.)

LateUpdate is the primary change here. This calculates a new fbmTransform each frame and applies it to the material(s) associated with the object. We do this in LateUpdate so that any sibling components animating this object in Update are accounted for consistently.

The position is easy: we just scale it so that 1 unit of movement in the world becomes 0.15 units of movement in the spirit smoke.

The orientation is a bit trickier. We can’t just scale the orientation because it wraps around. (Consider an object at 350 degrees on frame N that ends up at 10 degrees on frame N + 1: the actual motion was only 20 degrees, but scaling the absolute angles by 0.1 would jump from 35 degrees back down to 1 degree.) Therefore we keep track of our previous orientation so that we can calculate the delta rotation, scale that delta, and then apply it to the fBm rotation.

I didn’t do anything special for the scale, it’s just applied directly since I’m assuming it won’t be animated. (I felt like the exact method you’d want to use for scale depends on why you’re animating scale in the first place.)

Do note that this implementation assumes that our component is applied to the root of the object’s motion. Any transformation applied to children won’t be reflected in the spirit smoke. If you intend to animate any child objects separately or any of them have scaling besides (1, 1, 1), you might want to apply the spirit smoke to objects individually or modify the script to handle things appropriately.

Here’s how Zelda looks with these changes. I disabled the time scale for this clip so we can see our animation in isolation:

And here’s another clip with Zelda walking in circles to show the orientation animation too:

Bonus content

Guardian fury effect

The Silent Realm isn’t always a peaceful place. If the guardians of the Silent Realm are made aware of Link’s presence they will pursue him relentlessly until he can subdue them again to restore peace. During their pursuit the atmosphere of the Silent Realm completely changes to convey the danger Link has found himself in:

There are three major changes visible here:

  1. The cool cyan-green color grading is replaced with a warm, reddish tint
  2. The spirit particles fall out of the air toward the ground
  3. A tunnel vision effect creeps in from the edges of the screen

In contrast to what we did earlier, the color grading is much simpler:

Filter color is 255, 213, 201 @ intensity 0

Tint is 255, 244, 232

For the particles I modified the system to set the initial velocity of the dust particles to (0, -2, 0) and reduced the lifetime to 1-2 seconds.
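
If you’d rather make the same tweak from a script (I just edited these values in the inspector), a sketch might look like the following. FurySpiritParticles is a hypothetical helper, and I’m using the Velocity over Lifetime module as a stand-in for however your particle system defines its velocity:

using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public sealed class FurySpiritParticles : MonoBehaviour
{
    private void Awake()
    {
        ParticleSystem system = GetComponent<ParticleSystem>();

        // Shorten the particle lifetime to 1-2 seconds
        ParticleSystem.MainModule main = system.main;
        main.startLifetime = new ParticleSystem.MinMaxCurve(1f, 2f);

        // Make the particles fall toward the ground
        ParticleSystem.VelocityOverLifetimeModule velocity = system.velocityOverLifetime;
        velocity.enabled = true;
        velocity.x = 0f;
        velocity.y = -2f;
        velocity.z = 0f;
    }
}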

For the tunnel vision effect, I opted to use HDRP’s post-process system instead of a full custom pass. I went this route primarily because we don’t gain anything from the flexibility of the custom pass system this time around. I also thought the After Post Process Blurs injection point made the most sense for this effect since it is a blur effect of sorts, and injecting it any earlier would have weird impacts on motion blur (if it were enabled.)

Like custom passes, custom post-process effects have both a C# script component and a shader component:

using System;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

[Serializable, VolumeComponentMenu("Post-processing/Custom/Tunnel Vision")]
public sealed class TunnelVisionEffect : CustomPostProcessVolumeComponent, IPostProcessComponent
{
    public BoolParameter Enabled = new(false);
    public ClampedIntParameter LayerCount = new(3, 0, 10);
    public ClampedFloatParameter LayerOffset = new(0.04f, 0f, 1f);
    public ClampedFloatParameter LayerAlpha = new(0.2f, 0f, 1f);

    private Material Material;

    public override bool visibleInSceneView => false;

    public bool IsActive()
        => Material != null && Enabled.value && LayerCount.value > 0 && LayerOffset.value > 0f && LayerAlpha.value > 0f;

    public override CustomPostProcessInjectionPoint injectionPoint => CustomPostProcessInjectionPoint.AfterPostProcessBlurs;

    public override void Setup()
    {
        const string shaderName = "Hidden/Shader/TunnelVisionEffect";
        Shader shader = Shader.Find(shaderName);
        if (shader == null)
        {
            Debug.LogError($"Could not locate shader '{shaderName}'!");
            return;
        }

        Material = new Material(shader);
    }

    public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
    {
        Material.SetInt("_LayerCount", LayerCount.value);
        Material.SetFloat("_LayerOffset", LayerOffset.value);
        Material.SetFloat("_LayerAlpha", LayerAlpha.value);
        cmd.Blit(source, destination, Material, 0);
    }

    public override void Cleanup()
        => CoreUtils.Destroy(Material);
}
Shader "Hidden/Shader/TunnelVisionEffect"
{
    Properties
    {
        _MainTex("Main Texture", 2DArray) = "grey"
    }

    SubShader
    {
        Tags{ "RenderPipeline" = "HDRenderPipeline" }
        Pass
        {
            Name "TunnelVisionEffect"
            ZWrite Off

            HLSLPROGRAM
            #include "Packages/com.unity.render-pipelines.high-definition/Runtime/PostProcessing/Shaders/RTUpscale.hlsl"

            int _LayerCount;
            float _LayerOffset;
            float _LayerAlpha;

            TEXTURE2D_X(_MainTex);

            struct VsInput
            {
                uint VertexID : SV_VertexID;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct PsInput
            {
                float4 Position : SV_POSITION;
                float2 Uv : TEXCOORD0;
                UNITY_VERTEX_OUTPUT_STEREO
            };

            #pragma vertex VertexMain
            PsInput VertexMain(VsInput input)
            {
                PsInput output;
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(output);
                output.Position = GetFullScreenTriangleVertexPosition(input.VertexID);
                output.Uv = GetFullScreenTriangleTexCoord(input.VertexID);
                return output;
            }

            float3 SampleAt(float2 uv)
            {
                return SAMPLE_TEXTURE2D_X(_MainTex, s_linear_clamp_sampler, uv).rgb;
            }

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
                float3 inputColor = SampleAt(input.Uv);

                float alpha = _LayerAlpha;
                float3 result = inputColor;
                float2 fromCenter = float2(0.5f, 0.5f) - (input.Uv / _RTHandleScale.xy);
                fromCenter.x *= _ScreenSize.x * _ScreenSize.w; // Correct aspect ratio
                for (int i = 1; i <= _LayerCount; i++)
                {
                    float3 layer = SampleAt(input.Uv + fromCenter * _LayerOffset * i);
                    result = lerp(result, layer, alpha);
                    alpha *= alpha;
                }

                return float4(result, 1);
            }
            ENDHLSL
        }
    }
}

Since this is bonus content I won’t go into a ton of detail here, but the gist of the effect is that the rendered scene is overlaid on top of itself multiple times, with each layer increasing in size and transparency. (With the default LayerAlpha of 0.2, successive layers blend in at 0.2, 0.04, 0.0016, and so on, since the alpha is squared each iteration.) One thing worth pointing out is that fromCenter is intentionally not normalized in order to make the effect more pronounced at the edges of the screen.

What’s up with the _RTHandleScale bit?

Internally Unity will sometimes render our scene into a buffer which is larger than what’s actually visible. As a result, UV (0.5, 0.5) will not actually represent the middle of the screen. The divide here rescales the UV so that it does.

(It’s beneficial to use over-sized buffers to avoid excessive memory reallocations when the output size changes. Additionally, the alignment requirements for textures on the GPU mean it’s usually not even beneficial to reallocate textures when their size only changes slightly, since their effective size in memory remains the same.)

Why correct for aspect ratio?

I’m correcting for the aspect ratio to make the impact of the effect circular. Without this, the offset will appear exaggerated across the shorter screen dimension (vertical in our case.)

If you’re having trouble visualizing the “shape” of the effect, you can add this before the return at the end of PixelMain:

float fromCenterLength = length(fromCenter);
if (fromCenterLength > 0.2f && fromCenterLength < 0.21f)
    return float4(0, 0, 1, 1);

With the aspect ratio correction you’ll see a blue circle, without it you’ll get a blue ellipse.

Due to the way custom post-processes interact with Unity’s volume system we don’t use an explicit material here. Instead the shader’s properties are driven by the script directly. As such we don’t need to create a material asset and the properties block doesn’t need to be populated (other than _MainTex which is required by CommandBuffer.Blit.)

To enable the post-process in HDRP, go to Edit > Project Settings > Graphics > HDRP Global Settings then under Custom Post Process Orders add TunnelVisionEffect to After Post Process Blurs. Finally add an override to the relevant volume to set the Enabled parameter to true. (The defaults for the other settings are fine.)

Here’s the final result:

A different approach to the world-space movement issue

I briefly mentioned this above when talking about handling submeshes, but I wanted to further describe an alternative approach to solving the world-space movement issue without using separate spirit smoke materials per object. (You might prefer this approach if you use lots of sub meshes, find that using a dedicated layer is easier in your workflow, or maybe you just want other ideas.)

One thing I noticed while modifying the SpiritSmoke component is that for smaller objects the orientation animation doesn’t matter all that much. (Or more specifically, it doesn’t matter much for objects which are small perpendicular to the axis about which they tend to rotate – IE: Zelda is small in the XZ plane and tends to rotate about the Y axis.)

This means the only component of the transform that really needs modification is the translation. We already have the world matrix of the object, and the translation bits of the matrix are kept separate from the rotation and scale, so they’re easy to modify (in Unity they are stored in the last column of the matrix.) Therefore we can actually just calculate the fBm matrix in our vertex shader:

float4x4 fbmTransform = unity_ObjectToWorld;
// Scale only the translation column (_m03, _m13, _m23), leaving rotation and scale untouched
fbmTransform._m03_m13_m23 *= _FbmWorldScale;
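
Since this variant no longer needs per-object materials, _FbmWorldScale can be set once for the whole scene. Here’s a minimal sketch of a hypothetical helper component doing just that (the property name comes from the snippet above, and the default matches PositionScale from the per-object component):

using UnityEngine;

public sealed class SpiritSmokeGlobals : MonoBehaviour
{
    private static readonly int FbmWorldScaleId = Shader.PropertyToID("_FbmWorldScale");

    [Range(0f, 1f)] public float FbmWorldScale = 0.15f;

    private void Awake()
        => Shader.SetGlobalFloat(FbmWorldScaleId, FbmWorldScale);

    // Keep the global in sync when the value is tweaked in the inspector
    private void OnValidate()
        => Shader.SetGlobalFloat(FbmWorldScaleId, FbmWorldScale);
}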

The obvious downside of this approach is that it doesn’t offer much flexibility for the future: you can’t easily modify the orientation or scale. (Extracting them from the matrix is non-trivial, and orientation has to be animated in order to scale it properly.)

You also lose out on the ability to easily use shader variants to handle the differences between skinned and non-skinned meshes. (You could handle this instead by having two different layers for the two types of meshes, just keep in mind that layers are a limited resource.)

Future work

This article has ended up quite a bit longer than I had originally planned, but I wanted to briefly mention some ideas I didn’t get to fully explore. (I might revisit these in their own articles in the future, so be sure to let me know if you think either of these sound interesting!)

Make the smoke float above the surface of the mesh

As I noted earlier, you get artifacts if you set _MeshOffset too high. These artifacts actually aren’t nearly as noticeable with the spirit smoke effect as they were with the transparent red test effect. They’re still visible though, especially around Zelda’s eyes. The proper solution would be a spirit smoke depth prepass of sorts. This would involve customizing our custom pass to duplicate the camera’s depth buffer, rendering our spirit smoke into it with an added depth-only pass, and then rendering the spirit smoke with the camera color buffer and our cloned depth buffer bound.

Using velocity to drive the highlights

This was initially a goal of my effect, but after thinking about it more I realized it was going to be more involved than I initially thought, and I was already pretty happy with the effect without it.

Originally I planned to do this by using motion vectors (mainly because they properly account for skinned mesh animation) but they honestly aren’t super ideal for this since they track motion in 2D on the screen rather than movement in 3D space (which is typical and makes sense given their purpose.) In theory you could use just the magnitude of the motion vector, but it’s pretty inefficient to use them that way and I felt that ideally this effect would take the direction of movement in 3D into account (IE: only glow stronger on the front of Zelda when she’s moving forward.)

Doing that wouldn’t really be feasible using motion vectors if the object is moving parallel with the eye vector since the motion vectors in this case are actually outwards/inwards for movement toward/away from the camera respectively. (If you want to visualize this, go to Window > Analysis > Rendering Debugger and then under Rendering > Fullscreen Debug Mode select MotionVectors.)

Like I said, I have not fully explored this, but I think a better approach would be to have our SpiritSmoke component pass the velocity of the object to our shader and use it along with the surface normal to drive the highlights. For skinned mesh animation we can use the previous frame’s position data stored in TEXCOORD4 to compute a per-vertex velocity in our vertex shader.
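
To make that slightly more concrete, here’s a rough, untested sketch of the vertex shader side. TEXCOORD4 holding the previous frame’s skinned positions is inferred from the same HDRP motion vector code mentioned earlier, and _SmokeDeltaTime is a hypothetical uniform our component would set to Time.deltaTime each frame:

// Previous frame's object-to-world matrix (declared manually, matching the style of our shader)
float4x4 unity_MatrixPreviousM;
// Hypothetical uniform, set from the SpiritSmoke component each frame
float _SmokeDeltaTime;

// In VertexMain, with "float3 previousPosition : TEXCOORD4" added to the inputs
// and a Velocity field added to PsInput:
float3 worldPos = mul(unity_ObjectToWorld, float4(position, 1.f)).xyz;
float3 prevWorldPos = mul(unity_MatrixPreviousM, float4(previousPosition, 1.f)).xyz;
// Per-vertex velocity in world units per second, passed on to the pixel shader
result.Velocity = (worldPos - prevWorldPos) / max(_SmokeDeltaTime, 1e-5f);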

Closing thoughts

This article ended up being way longer than I had planned for, so if you made it all the way to the end thanks a lot for reading! (And even if you just skimmed and ended up here, thanks to you too! Hopefully you found a few places to stop and read in more detail.)

If you read through all this and thought to yourself “Wow I never want to deal with that nonsense, if only I could just hire this guy to deal with that…”: Well you’re in luck because I’m currently looking for work! So please don’t hesitate to get in touch.

Thanks again once more to my friend Angela for collecting all of the reference clips from Skyward Sword HD, I would’ve melted if I had to do that part myself in addition to this write-up.

Final code

For the sake of completeness, here’s the final version of the bits we were working on incrementally:

using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Rendering;

public sealed class SpiritSmoke : MonoBehaviour
{
    // The vertex attribute to use for the neutral pose positions
    // This should be a texture coordinate and must match what is used in the SpiritSmoke shader.
    // Channels 0-3 are expected to be available for normal texture use
    // Channels 4-5 are used internally for HDRP motion vector stuff (see MotionVectorVertexShaderCommon.hlsl)
    // Channels 6-7 should be free
    // (This is inferred from the HDRP source code, it is not explicitly documented anywhere.)
    private const VertexAttribute NeutralPosePositionsAttribute = VertexAttribute.TexCoord7;
    private const int NeutralPosePositionsChannel = NeutralPosePositionsAttribute - VertexAttribute.TexCoord0;

    private Material[] Materials;
    private static readonly int FbmTransformPropertyId = Shader.PropertyToID("_FbmTransform");

    private Quaternion LastOrientation = Quaternion.identity;
    private Quaternion SmokeOrientation = Quaternion.identity;

    [Range(0f, 1f)] public float PositionScale = 0.15f;
    [Range(0f, 1f)] public float RotationScale = 0.1f;

    private void Awake()
    {
        List<Vector3> positions = null;
        Dictionary<(bool IsSkinned, Material Material), Material> materialMap = new();
        ShaderTagId lightModeTag = new("LightMode");
        ShaderTagId spiritSmokeTag = new("SpiritSmoke");

        foreach (Renderer renderer in GetComponentsInChildren<Renderer>())
        {
            SkinnedMeshRenderer skinnedRenderer = renderer as SkinnedMeshRenderer;
            bool isSkinned = skinnedRenderer is not null;

            // Clone and configure any spirit smoke materials
            Material[] materials = renderer.sharedMaterials;

            bool clonedAny = false;
            for (int i = 0; i < materials.Length; i++)
            {
                Material material = materials[i];

                // If this material has been cloned already, just replace it
                if (materialMap.TryGetValue((isSkinned, material), out Material existingClone))
                {
                    materials[i] = existingClone;
                    clonedAny = true;
                    continue;
                }

                // Skip this material if it isn't spirit smoke
                if (!material.HasTagPair(lightModeTag, spiritSmokeTag))
                { continue; }

                // Otherwise make a fresh clone
                Material clone = new(material);
#if UNITY_2022_1_OR_NEWER && UNITY_EDITOR // On newer versions of Unity we make our clone a variant so that modifications to the base material propagate to this one
                clone.parent = material;
#endif
                clone.name += " (Script Clone)";
                materialMap.Add((isSkinned, material), clone);
                materials[i] = clone;
                clonedAny = true;

                // If this is a skinned renderer, the material will use the neutral pose attribute we create below
                if (isSkinned)
                { clone.SetKeyword(new LocalKeyword(clone.shader, "USE_NEUTRAL_POSE_ATTRIBUTE"), true); }
            }

            // Nothing left to do for this renderer if it did not have any spirit smoke materials
            if (!clonedAny)
            { continue; }

            // Apply the updated materials array to the renderer
            renderer.sharedMaterials = materials;

            // Add neutral pose positions to skinned meshes if necessary
            if (isSkinned)
            {
                Mesh mesh = skinnedRenderer.sharedMesh;
                if (mesh != null && !mesh.HasVertexAttribute(NeutralPosePositionsAttribute))
                {
                    mesh.GetVertices(positions ??= new List<Vector3>());
                    mesh.SetUVs(NeutralPosePositionsChannel, positions);
                    mesh.UploadMeshData(markNoLongerReadable: false);
                }
            }
        }

        // Create the materials array
        Materials = materialMap.Values.ToArray();
        if (Materials.Length == 0)
        {
            Debug.LogWarning($"{nameof(SpiritSmoke7)} is attached to '{gameObject.name}', but it has no applicable materials.", this);
            enabled = false;
        }

        // If this game object is static, update the transform once and then disable
        if (gameObject.isStatic)
        {
            LateUpdate();
            enabled = false;
        }
    }

    private void LateUpdate()
    {
        // Animate the smoke orientation based on the change in object orientation
        Quaternion orientation = transform.rotation;
        Quaternion delta = Quaternion.Inverse(LastOrientation) * orientation;
        delta = Quaternion.SlerpUnclamped(Quaternion.identity, delta, RotationScale);
        SmokeOrientation = (delta * SmokeOrientation).normalized; // Normalization needed due to floating point error accumulation eventually denormalizing the quaternion
        LastOrientation = orientation;

        // Build fBm transform and apply it to all cloned materials
        Matrix4x4 fbmTransform = Matrix4x4.TRS
        (
            transform.position * PositionScale,
            SmokeOrientation,
            // We don't do anything special for scale so that the smoke looks similar between objects
            // If you plan to animate scale you'd probably want to come up with a strategy for handling it better
            transform.lossyScale
        );

        foreach (Material material in Materials)
        { material.SetMatrix(FbmTransformPropertyId, fbmTransform); }
    }
}
Shader "CustomPass/SpiritSmoke"
{
    Properties
    {
        [HDR] _Color("Color", Color) = (1.0, 0.0, 0.0, 1.0)
        _MeshOffset("Mesh Offset", Range(0.0001, 0.1)) = 0.0001
        _PositionScale("Position Scale", Range(0, 100)) = 10
        _TimeScale("Time Scale", Vector) = (1, 1, 1, 1)

        _FbmOctaves("fBm Octaves", Integer) = 3
        _FbmInitialInfluence("fBm Initial Influence", Range(0, 1)) = 0.5
        _FbmInfluenceMultiplier("fBm Influence Multiplier", Range(0, 1)) = 0.5
        _FbmPositionMultiplier("fBm Position Multiplier", Range(0.001, 10)) = 2

        _Power("Power", Range(0.001, 10)) = 1
        _Threshold("Threshold", Float) = 0.95
    }

    SubShader
    {
        Pass
        {
            Name "SpiritSmoke"
            Tags { "LightMode" = "SpiritSmoke" }
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            HLSLPROGRAM
            // Material property values
            float4 _Color;
            float4 _TimeScale;
            float _MeshOffset;
            float _PositionScale;

            int _FbmOctaves;
            float _FbmInitialInfluence;
            float _FbmInfluenceMultiplier;
            float _FbmPositionMultiplier;
            float4x4 _FbmTransform;

            float _Power;
            float _Threshold;

            // Unity-provided values
            float4x4 unity_MatrixVP;
            float4x4 unity_ObjectToWorld;
            float4 _Time;

            struct PsInput
            {
                float4 Position : SV_Position;
                float4 FbmPosition : FbmPosition;
            };

            #pragma multi_compile_vertex _ USE_NEUTRAL_POSE_ATTRIBUTE
            #pragma vertex VertexMain
            PsInput VertexMain(float3 position : POSITION, float3 normal : NORMAL, float3 neutralPosePosition : TEXCOORD7)
            {
                position += normal * _MeshOffset;

                PsInput result;
                float4 worldPosition = mul(unity_ObjectToWorld, float4(position, 1.f));
                result.Position = mul(unity_MatrixVP, worldPosition);
            #if USE_NEUTRAL_POSE_ATTRIBUTE
                result.FbmPosition = float4(neutralPosePosition, 1.f);
            #else
                result.FbmPosition = float4(position, 1.f);
            #endif
                result.FbmPosition = mul(_FbmTransform, result.FbmPosition);
                return result;
            }

            // hashing function (with precision tweaks by Morgan McGuire)
            // Copyright (c) 2014 David Hoskins
            // Licensed under the MIT license
            // https://www.shadertoy.com/view/4djSRW
            float hash(float p)
            {
                p = frac(p * 0.011f);
                p *= p + 7.5f;
                p *= p + p;
                return frac(p);
            }

            // smoothNoise calculates noise at the 8 corners of an integer-aligned cube and interpolates between them based on the fractional part, essentially smoothing it out
            // https://www.shadertoy.com/view/4dS3Wd
            // By Morgan McGuire @morgan3d, http://graphicscodex.com
            // Reuse permitted under the BSD license.
            float smoothNoise(float3 seed)
            {
                // Split the seed into integer and fractional parts
                float3 i = floor(seed);
                float3 f = frac(seed);

                // For performance, compute the base input to a 1D hash from the integer part of the argument and the incremental change to the 1D based on the 3D -> 1D wrapping
                const float3 step = float3(110, 241, 171);
                float n = dot(i, step);

                // Calculate 8 random values for each corner of a cube to interpolate between
                float s000 = hash(n + dot(step, float3(0, 0, 0)));
                float s100 = hash(n + dot(step, float3(1, 0, 0)));
                float s010 = hash(n + dot(step, float3(0, 1, 0)));
                float s110 = hash(n + dot(step, float3(1, 1, 0)));
                float s001 = hash(n + dot(step, float3(0, 0, 1)));
                float s101 = hash(n + dot(step, float3(1, 0, 1)));
                float s011 = hash(n + dot(step, float3(0, 1, 1)));
                float s111 = hash(n + dot(step, float3(1, 1, 1)));

                // Interpolate between the corners using the smoothstep of the fractional bit
                float3 u = f * f * (3.0 - 2.0 * f);
                return lerp(
                    lerp(
                        lerp(s000, s100, u.x),
                        lerp(s010, s110, u.x),
                        u.y
                    ),
                    lerp(
                        lerp(s001, s101, u.x),
                        lerp(s011, s111, u.x),
                        u.y
                    ),
                    u.z
                );
            }

            // This is a variant of fBm, a fractal which accumulates layers of offset noise to create an interference pattern
            // Based on the version described in Texturing & Modeling: A Procedural Approach
            float fbm(float3 position)
            {
                float result = 0.f;
                float influence = _FbmInitialInfluence;

                for (int i = 0; i < _FbmOctaves; i++)
                {
                    result += influence * smoothNoise(position);
                    position *= _FbmPositionMultiplier;
                    influence *= _FbmInfluenceMultiplier;
                }

                return result;
            }

            #pragma fragment PixelMain
            float4 PixelMain(PsInput input) : SV_Target
            {
                float3 timeInfluence = _Time.yyy * (_TimeScale.xyz * _TimeScale.w);
                float x = fbm(input.FbmPosition.xyz * _PositionScale + timeInfluence);
                x = pow(x, _Power);
                if (x < _Threshold)
                {
                    x = 0.f;
                }
                else
                {
                    x -= _Threshold;
                    x /= 1.f - _Threshold;
                }
                return float4(_Color.rgb * x, _Color.a * saturate(x));
            }
            ENDHLSL
        }
    }
}