
A frame of Slime Rancher
Slime Rancher is a game where you play as intrepid explorer Beatrix LeBeau, exploring the lands of the Far, Far Range, discovering the secrets within and, well…ranching slimes.
Developed by Monomi Park and released in August 2017, Slime Rancher is available on Steam, Epic, and whichever games console you like.
In case you haven’t had the pleasure of experiencing Slime Rancher for yourself, here’s a recent trailer to get you up to speed:
Slime Rancher is such a lovely looking game and so full of life. I think my friend Stephanie put it best:

Today we’re going to be taking a closer look at just what makes those pixels so gosh darn wiggly.
I’m going to be using the current Steam release on Windows (1.4.4) along with the Secret Style Pack. I’ll be using the Very High graphics preset at 1080p on my NVIDIA GTX 1080. For my analysis I’ll be using Microsoft PIX to peek behind the curtain.
I’ll start by going over the broad render pipeline of the game followed by details about specific things I found interesting.
For the most part we’ll be looking at things from a bird’s-eye view as it’s not particularly feasible to look at the implementation details of individual shaders without their corresponding debug info.
A lot of the fun of Slime Rancher comes from exploration and discovering new things. Since I’ve just about 100%ed Slime Rancher, a lot of what we’ll be looking at doesn’t show up until later in the game. I don’t think this article will spoil you any more than that trailer just did, but just in case consider this your final warning!
(Also I should point out that Slime Rancher 2 was recently released into early access on Windows and Xbox, which is pretty hype! This article is entirely about the first game though.)
A frame of Slime Rancher
We’ll be starting our journey with a broad look at Slime Rancher’s rendering pipeline.
I looked at quite a few captures from this game, but for most of our analysis we’ll be focusing on the following scene:

Two million pixels of wiggly goodness
At a high level, Slime Rancher is using Unity’s built-in render pipeline, specifically using the forward rendering path. By modern standards the built-in pipeline is arguably the legacy option, but it was the only sensible option back when Slime Rancher was initially developed. (This version of Slime Rancher was built using Unity 2019.4, but the game would’ve originally been developed using something much older.)
There are 3,851 draw calls involved in rendering this scene. That’s quite a few, but I’ve seen Slime Rancher push out even more, with my Hen Hen farm shoving out 9,199 of ’em!
As far as I saw, Slime Rancher isn’t using GPU instancing, for the most part*. That pile of cuberries rotting away in the corner of your ranch? Each one was submitted to the GPU independently. I’m not entirely sure why instancing isn’t in use; maybe some limitation of Unity’s instancing back then disagreed with Slime Rancher and so it was disabled.
*I eventually found some instanced objects in the Slimeulation, but it seems to be the exception. Nothing is instanced in the frame we’re looking at.
By the way, a quick aside: This article is meant to be educational in nature. So when I mention stuff like this please don’t take it as criticism against the developers at Monomi Park. Slime Rancher performs pretty well on my GTX 1080 and as far as I’m aware it’s meeting performance targets for their target hardware. Making games is hard and Monomi Park was quite small back then. At best these comments should be interpreted as “Oh hey that’s weird”-type observations and maybe as a hint for something you might want to think about for your own games.
Also when it comes to performance oddities you can never really know if something is a problem until you benchmark it.
Depth texture
For our first pass, the scene is rendered with only a depth buffer bound in order to create a depth texture:

Camera DepthTexture - D32_FLOAT_S8X24_UINT
What is a depth buffer? / What am I looking at here?
The depth buffer encodes how close objects are to the camera. It’s used by GPUs along with depth testing to ensure objects appear in the correct order visually regardless of the order in which they are drawn.
In the illustration above, the brighter a pixel is the closer it is to the camera.
It’s worth noting that the banding artifacts (most visible on the left cliffside) are the result of me encoding the depth buffer into a grayscale image for you to see. In reality each pixel is a 32-bit floating point value so it’s much smoother!
You can read more about depth buffers and their uses on Wikipedia.
Why are the sparkling solar anomaly particles missing from the depth buffer?
The depth texture intentionally does not include transparent objects, such as the sparkling solar anomaly particles floating in the air, the torch fire, or the glow above the slime statues.
This is because the pixels at those locations essentially belong to two different objects. You can’t store two different values at the same location in the depth buffer (or any standard GPU texture for that matter), so we only store the value for the opaque object. (Most games are structured to render opaque objects before transparent ones for this reason. Games which do not do this are likely to have weird rendering artifacts, but modern game engines have guardrails in place to prevent this mistake so it’s not very common anymore.)
If you’re wondering about the grass and the tree leaves – Those aren’t alpha-blended, they’re alpha-tested. Alpha-tested objects are effectively opaque since they’re either 0% or 100% visible with nothing in-between. As such the shape of their texture’s alpha mask can be applied to the depth buffer (and other non-color buffers) without causing issues.
(If you’re still wondering about the tree leaves because you can see branches and stuff through them – that isn’t being done how you might expect. Don’t worry, we’ll get to it later✨)
What the heck is a D32_FLOAT_S8X24_UINT?
That’s the format this texture uses on the GPU. I’ll be including it below textures mainly for those interested. You don’t have to worry about it too much as I’ll call attention to it when it has implications.
This particular format should be read as “A texture with a 32-bit floating point depth component, an 8-bit unsigned integer stencil component, and a 24-bit unused component”.
The stencil buffer is present in this texture but is never written and as such is unused. As far as I saw, Slime Rancher never actually uses any stencil buffers.
You can read about the various DirectX texture formats in the DXGI_FORMAT documentation. (There’s a lot of them but once you figure out the patterns it’s not so bad.)
It’s worth noting that this is not a depth pre-pass. It will not be used for depth testing after it’s been created.
This is the texture Unity creates when you enable the depth texture on your camera.
Slime Rancher uses it for various purposes that we’ll look at later; mainly for screen-space shadows, screen-space ambient occlusion (SSAO), and certain particle effects.
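If you want to read this texture from your own shaders under the built-in pipeline, Unity exposes it as _CameraDepthTexture along with sampling helpers in UnityCG.cginc. Here’s a minimal sketch (the API is real, the visualization is just for illustration):

```hlsl
// Minimal sketch: sampling Unity's camera depth texture in a
// built-in pipeline shader.
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;

fixed4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    // Raw hardware depth: non-linear, and whether 0 or 1 is "near"
    // depends on the platform's depth conventions
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    // Linear 0..1 depth between the near and far clip planes
    float linearDepth = Linear01Depth(rawDepth);
    return fixed4(linearDepth.xxx, 1); // visualize as grayscale
}
```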
Normals texture
Next the scene is rendered again to create a view-space normals texture for the entire scene. This texture will be used later during SSAO calculations and nothing else.

Camera DepthNormalsTexture (Red/Green only) - R8G8B8A8_UNORM
What are normals?
Normals are vectors used to represent the direction a surface is facing.
If you’ve worked with 3D modeling tools before you’ve maybe come across the concept of normal maps, which encode normals in texture space. What you’re seeing above encodes the normals for every pixel on the screen in view space.
If you’re used to looking at normals in graphics stuff you might be confused why the capture above has a sickly yellow-green tint rather than the purple one you’re used to. (Or maybe you read the caption and you’re wondering how you get three numbers out of two color channels.)
What we’re looking at is Unity’s DepthNormals texture. In order to save on texture bandwidth (and given the age of this pipeline: texture slots), Unity encodes the normal into just the red and green channels with a typical 8 bits per channel.
You can find the functions Unity and your shaders use for encoding/decoding these normals in <Unity install directory>\Editor\Data\CGIncludes\UnityCG.cginc; look for DecodeDepthNormal and follow it to DecodeViewNormalStereo. The Stereo part of this function name is referring to stereographic projection, which is described in this article along with several other methods for encoding normals. (It’s worth pointing out this article is very old, make sure to read the disclaimer at the top before you actually use anything from it!)
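For reference, that encode/decode pair looks roughly like this (lightly reformatted from UnityCG.cginc, so double-check against your own copy):

```hlsl
// Stereographic projection squashes a 3D unit normal into 2D. kScale
// trades precision against the range of representable normals.
inline float2 EncodeViewNormalStereo(float3 n)
{
    float kScale = 1.7777;
    float2 enc = n.xy / (n.z + 1);
    enc /= kScale;
    return enc * 0.5 + 0.5; // map -1..1 into the 0..1 texture range
}

inline float3 DecodeViewNormalStereo(float4 enc4)
{
    float kScale = 1.7777;
    float3 nn = enc4.xyz * float3(2 * kScale, 2 * kScale, 0)
              + float3(-kScale, -kScale, 1);
    float g = 2.0 / dot(nn.xyz, nn.xyz);
    float3 n;
    n.xy = g * nn.xy;
    n.z = g - 1;
    return n;
}
```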
Here’s the texture again except I’ve decoded and reencoded it with all three channels:

Keep in mind these normals are in view-space, not world-space like you might be used to. Bright red represents a normal to the right, bright green is up, bright blue is towards the camera.
By the way, did you notice that the leaves of the sunburst trees are missing from this pass even though they were present in the depth texture pass? Remember that for later; it’ll be important.
Can you show me the separate channels?
In case you’re having trouble visualizing things with all the colors mixed up, below are the individual channels separated out as grayscale images.
If you focus on the Y channel in the second image, notice how flat surfaces pointing up (like the floor) are white and the surfaces pointing down (like the ceiling of the structure in the background) are black.
Keep in mind that 50% gray here means 0, so the floor is gray in the X channel because the floor doesn’t point in any direction along the X axis.

X / Red

Y / Green

Z / Blue
How do more modern pipelines like Unreal Engine 4, Unity URP, and Unity HDRP handle normal encoding?
It’s not uncommon for more modern pipelines to just store the normals more or less unmodified.
Based on my previous graphics study, Unreal Engine 4 seems to prefer to dedicate an entire R10G10B10A2_UNORM to store all three components of the normal.
Unity URP in deferred mode stores all three in the RGB channels of an R8G8B8A8_SNORM.
Unity HDRP (also in deferred mode) is the odd one out and still uses a special encoding for normals. However it’s different from the built-in pipeline’s: it instead uses octahedron-normal vectors, with each of the two components encoded as a 12-bit unorm value, and the pair stored across the 8-bit red, green, and blue channels. The basic gist is that the low 8 bits of each unorm value are stored in the red/green channels and the remaining 4 bits of each are stored in the low/high nibbles of the blue channel. (The alpha channel in this buffer is used for PBR roughness.)
If you want to see the actual implementation, start at DecodeFromNormalBuffer and follow it to Unpack888ToFloat2 and UnpackNormalOctQuadEncode.
As implied by the name of this pass’s texture, it also encodes depth. This is more or less redundant to the depth texture we already created in the previous pass, and as far as I saw nothing in Slime Rancher ever actually uses it. But if you’re curious, here’s what the depth channels look like:
As with the normals, these use a special encoding. 8 bits of precision simply isn’t enough to reasonably represent depth, so two channels are used to get additional precision. You can find the math in the same UnityCG.cginc file mentioned earlier; look for EncodeFloatRG/DecodeFloatRG.
The short version is that blue encodes a rough depth and alpha encodes a fine depth relative to the rough depth. (This is why the alpha channel has a repeating sawtooth look to it.)
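Those helpers look roughly like this (again lightly reformatted from UnityCG.cginc):

```hlsl
// x holds the coarse depth; y holds the fractional remainder at 255x
// the precision. Decoding is just a weighted sum of the two.
inline float2 EncodeFloatRG(float v)
{
    float2 kEncodeMul = float2(1.0, 255.0);
    float kEncodeBit = 1.0 / 255.0;
    float2 enc = frac(kEncodeMul * v);
    enc.x -= enc.y * kEncodeBit; // remove the fine bits from the coarse channel
    return enc;
}

inline float DecodeFloatRG(float2 enc)
{
    float2 kDecodeDot = float2(1.0, 1.0 / 255.0);
    return dot(enc, kDecodeDot);
}
```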
Because it’s required for rendering, this pass also results in a full depth buffer. Well, except for those sunburst trees. Ignoring the trees, this makes the previous pass basically redundant to this one.
Actually the trees aren’t the only difference, if we compare the two depth buffers there’s another very subtle difference:
(You can click+drag or swipe to switch between the two buffers)
No I’m not talking about the vines (although they’re missing too.)
I’m talking about the grass, it’s ever so slightly different in the normals pass. Turns out the grass is animated in the vertex shader, but that shader isn’t being used in the normals pass. Whoops!
Since the normals are only used for SSAO, any artifacts this might cause will most likely be completely unnoticeable.
Real-time shadows
Now that we have normals out of the way, it’s time to render some shadows!

Shadowmap for the sun - 4096x4096 D16_UNORM
Rotated 90° counter-clockwise for the sake of legibility
This is the shadow map for the sun. Like most games made in the past ever, Slime Rancher accomplishes real-time shadows using shadow maps. Shadow maps are created by rendering the scene (again) into a depth buffer from the perspective of the light. As such this depth buffer encodes how close objects in the scene are to the light. You can use a pile of math to reproject arbitrary positions in the world back to the shadow map and determine if that position is being hit by the corresponding light or shadowed by something else.
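That pile of math boils down to something like the following sketch (the general technique with made-up uniform names, not Slime Rancher’s or Unity’s exact shader):

```hlsl
// Generic shadow map test. _LightViewProj and _SunShadowMap are
// assumptions for illustration.
float4x4 _LightViewProj; // the light's view-projection matrix
sampler2D _SunShadowMap;

float SampleSunShadow(float3 worldPos)
{
    // Reproject the world position into the light's clip space
    float4 lightClip = mul(_LightViewProj, float4(worldPos, 1));
    // The divide is a no-op for an orthographic directional light,
    // but keeps this correct for perspective lights too
    float3 lightNDC = lightClip.xyz / lightClip.w;
    float2 shadowUV = lightNDC.xy * 0.5 + 0.5; // NDC -> texture coords

    float occluderDepth = tex2D(_SunShadowMap, shadowUV).r;
    float bias = 0.002; // avoids self-shadowing ("shadow acne")
    // 1 = lit, 0 = shadowed (depth conventions vary by API/platform)
    return lightNDC.z - bias <= occluderDepth ? 1.0 : 0.0;
}
```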
What do you mean by “real-time” shadows?
Real-time shadows as opposed to static, ahead-of-time, baked shadows.
Real-time shadows are shadows which are calculated while the game is running. Since they’re calculated while the game is running, they can respond to objects in the scene moving around or even the lights themselves moving around. Shadows are pretty hard to calculate though, so real-time shadows generally aren’t super accurate compared to real life.
One apparent limitation is that rendering shadows via shadow maps doesn’t consider the effects of indirect lighting caused by light bouncing off of objects. Imagine yourself in a dark room shining a bright flashlight at a wall. You would expect the shadows cast by the objects in the room to point away from the wall, since they’re being indirectly lit by the wall rather than directly by the flashlight. Shadow maps do not properly account for this, so the phenomenon doesn’t appear in games unless extra effort has been made to fake it.
The opposite of real-time shadows are baked shadows (or baked lighting as it’s more commonly called.) Baked lighting can be much more sophisticated because it doesn’t matter if it takes a long time to calculate since it’s done once ahead of time when the game is built. The downside of baked lighting is that the lights can’t move (so no more setting sun) and moving objects will not cast a shadow.
It is possible to use a hybrid of baked lighting and real-time shadows to overcome the moving object limitation. There are also techniques for solving the moving light problem in limited circumstances, such as baking the sun’s lighting for several fixed positions. However these techniques are not without their drawbacks.
As far as I’ve seen, all lights in Slime Rancher are real-time, which I think is ideal for a stylized game like this with only a handful of lights present in any given area.
This shadow map in particular is a cascaded shadow map. Each quadrant of the shadow map essentially represents a different zoom level from the light’s perspective. This allows objects close to the camera to use high-detail shadows from the closest cascade at the bottom-right, while objects far in the distance can use the low-detail shadows from the top-left cascade (and everything in-between.)
I’m having trouble relating what’s in the scene with what’s in the shadow map.
Remember that the shadow map is rendered from the perspective of the light. Here’s the same area in the game, except I’ve flown up to give you a bird’s eye view.
This isn’t the exact angle since I can’t get much higher than this due to the world height limit, but hopefully it makes it a bit easier to tell what’s going on.

A good point of reference to find is that stone platform with the torch on it. Some objects aren’t drawn for the distant cascades, so focus on the close-up cascades on the right side of the shadow map.
You might notice that some objects behind the player (like those cliffs in the background) are missing from the shadow map. Unity knows it can skip them since those objects are behind the main camera and cast shadows away from it, so there’s no possibility for their shadows to be visible.
You might notice that everything in the shadow map looks a lot more angular than usual (the pillar in the top-right cascade is particularly noticeable.) Directional lights (like the sun) essentially represent an infinitely-sized light source infinitely far away in a particular direction. The rays of light for a light like this don’t diverge, so their shadow map is rendered with orthographic projection (as opposed to perspective projection.)
Another thing you might’ve noticed is the slimes look a little polygonal in the shadow map. In order to save on some rendering time, the shadow map is rendered with lower level of detail (LOD) variants of some models. You can also see things like a tabby slime’s tail/ears or a phosphor slime’s wings/antenna are skipped for the same reason.
When you’re looking for it, this can be pretty apparent in the final render. Here’s an example focusing on the tabby slime on the left side of the screen:
If you’d like to read more about shadow maps as well as other (mostly legacy) real-time shadow techniques, this presentation provides a nice overview.
Screen-space shadow map
In order to save the effort of sampling and filtering the shadow map for every pixel of every object (including pixels which might eventually be hidden by something else being rendered on top of them), Slime Rancher renders a screen-space shadow map.
For each pixel, the world position is reconstructed using the view matrix and the depth texture from the first pass. This world position is then projected onto the shadow map to determine if that pixel is in shadow. (That’s the basic gist; there’s a whole lot of other stuff like filtering, choosing the cascade, blending between them and such too.)
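The reconstruction half of that looks something like this sketch (_InverseViewProj is an assumption; Unity’s actual screen-space shadow shader also handles the cascade selection and filtering mentioned above):

```hlsl
// Sketch: reconstructing a world position from the depth texture.
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;
float4x4 _InverseViewProj; // inverse of the camera's view-projection

float3 ReconstructWorldPos(float2 uv)
{
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    // Rebuild this pixel's clip-space position...
    float4 clipPos = float4(uv * 2 - 1, rawDepth, 1);
    // ...then un-project it back into the world
    float4 worldPos = mul(_InverseViewProj, clipPos);
    return worldPos.xyz / worldPos.w;
}
// The result then goes through a shadow map test like the one
// sketched earlier, and the lit/shadowed value is written out to the
// screen-space shadow map.
```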
You can see the results of this pass below; I’ve included the output of the opaque objects pass as well so you can see how this screen-space shadow map will later be applied.
Screenspace ShadowMap - R8G8B8A8_UNORM
You’ll notice there’s some artifacts on some surfaces, particularly those facing away from the sun. You’ll also notice that those artifacts are not present in the final output.
Because those surfaces are known to be facing away from the sun (IE: the dot product between the sun’s direction and the surface normal is positive), we also know they’re definitely in shadow. So when we render them later they can be treated as if they’re in full shadow, which hides these artifacts.
Why does the shadow of the brick next to the torch look sharper in the render than it does in the shadow map?
Good eye! That sharp shadow is actually from the shadow cast by the torch rather than the sun.
Below is a view of what that stone platform looks like before the torch’s shadows are applied to it.
As you can see, before the torch’s shadow is applied on top of it the shadow is blurry and angled just as you’d expect.
It’s worth noting that this screen-space shadow map is what will be used for all shadows due to the sun for this scene. The sun’s light-space shadow map will not be used again after this.
(In some games the light-space shadow map would still be used later on for objects that have transparency, but as far as I saw Slime Rancher doesn’t do this as transparent objects are generally shiny things that don’t need shadows.)
Point light shadows
Now that the sun’s shadows have been calculated, it’s time to handle other lights in the scene. All of these lights are point lights, which means they’re located at a single point in space and emanate light out in all directions from that point.
Point lights use shadow maps for their shadows just like the sun, except with a few major differences:
- They use perspective projection instead of orthographic since the light is emanating outwards from a single point (IE: the rays of light are not parallel)
- In order to cast shadows in all directions we actually render six shadow maps – one for each axis and direction to form a cube – a shadow cube map!
- Unlike the sun, they don’t use cascades since the light isn’t cast far enough for it to matter
These shadow maps are also quite a bit lower resolution than the 4096x4096 pixel shadow map that was rendered for the sun. Slime Rancher renders shadow maps for five different point lights in this scene, with the lowest resolution one being 128x128x6 and the highest resolution one being that torch in the foreground at 1024x1024x6.
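Sampling one of these at shading time is arguably simpler than the directional case, since the fragment-to-light vector directly selects a cube face and texel. A hedged sketch (not Unity’s exact implementation):

```hlsl
// Generic point light shadow test. The cube map is assumed to store
// distance from the light, normalized by the light's range.
samplerCUBE _PointShadowCube;
float3 _LightPos;
float _LightRange;

float SamplePointShadow(float3 worldPos)
{
    float3 lightToFrag = worldPos - _LightPos;
    // The direction alone picks the cube face; no reprojection needed
    float occluderDist = texCUBE(_PointShadowCube, lightToFrag).r;
    float fragDist = length(lightToFrag) / _LightRange;
    float bias = 0.01; // avoids self-shadowing
    return fragDist - bias <= occluderDist ? 1.0 : 0.0;
}
```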
It’s a lot easier to visualize cube maps in 3D, so I’ve created the little visualization below to let you look around within them.
Why is everything so spooky and foggy?
These shadow maps only “see” as far as the light’s sphere of influence. The underground lights in particular are quite dim, so you can’t “see” very far in their shadow maps.
The distance visible isn’t actually uniform like a sphere though. It’s more cube-like because the area rendered in each direction is shaped like a truncated pyramid called a frustum.
You might be confused about where these underground lights are coming from. Off to the left through that gate are the Ancient Ruins. There’s three lights in an underground area there which affect objects close enough to be considered worth rendering but aren’t actually visible. These shadow maps are effectively useless, but they’re still fun to look at!
Opaque objects
It’s time for the main event: The opaque object passes!
Opaque objects are anything which is visually solid, IE: You can’t see what’s behind them.
Like most games, Slime Rancher renders opaque objects before transparent stuff. This is done mainly for two reasons:
- It allows rendering objects in an arbitrary order by using the depth buffer (as opposed to relying on the painter’s algorithm)
  - Transparent objects (such as the sparkling solar anomaly particles in our scene) cannot be represented in the depth buffer since each of their pixels would effectively be associated with two different depths.
- Certain full-screen special effects (such as the SSAO pass coming later) rely on the contents of the screen matching the data in the depth buffer/normals buffer.
  - Transparent objects are, again, problematic for buffers like these, so they must be rendered after those effects are handled.
The opaque pass in Slime Rancher is split into two halves, which are loosely defined. This split was implicitly created by the use of Unity’s GrabPass feature. We’ll talk about GrabPass in more detail during the transparents pass, but for now just know that it enables shaders for objects drawn after it to access whatever was on the screen when the GrabPass occurred.
Both textures 1920x1080 R8G8B8A8_UNORM, resolved from 8xMSAA R8G8B8A8_UNORM intermediates
Note: Actual GrabPass texture likely has garbage from a previous frame in the black areas as the source buffer was not cleared before drawing.
The weird phantom shadows in the GrabPass are a side-effect of using a screen-space shadow buffer. They don’t matter since they’re eventually covered up.
But what is this GrabPass texture actually used for?
Notice anyone missing from the GrabPass texture? The phosphor slimes are missing! Turns out the GrabPass was created for them.
It’s pretty subtle, but phosphor slimes actually refract and reflect the world around them. It’s much more apparent when you look at a largo:
As far as I know this is the only thing this opaque objects GrabPass is ever used for (it’s definitely the only use in this particular scene.) However due to the dynamic nature of GrabPass there might be other uses elsewhere in Slime Rancher I didn’t notice.
Anti-aliasing with MSAA
The display you’re reading this article on– wait you printed it?! Jeeze I hope it’s just to PDF at least. Anyway, the display you’d normally be reading this article on is more or less made up of a grid of pixels. (Technically not always a literal grid at the physical level, but from a rendering perspective it’s a grid.)
3D models in games are made up of many triangles that will be laid out over this pixel grid through a process called rasterization. Basic rasterization shades pixels completely or not at all based on whether or not the triangle covers the center of said pixel. A side-effect of this is that it makes it hard to represent smooth, curved surfaces (like a slime body) without giving it a jagged appearance we call aliasing. (There’s other types of aliasing too, but this is the kind that most people care about.)
Thankfully there’s an entire class of computer graphics techniques called anti-aliasing designed to help reduce this jagged appearance. The technique Slime Rancher uses is called multisample anti-aliasing (MSAA.)
The basic gist of how MSAA works is that each pixel is still evaluated once for shading, but the contribution of that shading towards each pixel is based on how much of the pixel is covered by the triangle. The number of points to be sampled within each pixel to determine this coverage is called the multisample count. Slime Rancher exposes MSAA with sample counts of 2, 4, and 8–the captures in this article were taken with 8xMSAA. Higher values result in better quality anti-aliasing at the cost of performance.
MSAA is unique among anti-aliasing techniques in that it is implemented in hardware as it changes how rasterization works and requires encoding textures differently than usual. A side-effect of this is that in order to (easily) access the data on a multisampled render target, you must first resolve the “actual” value of each pixel and store it in a normal non-multisampled texture.
Below is a 4x zoom-in of a tabby slime from our scene. The first image has no anti-aliasing applied and the second uses 8xMSAA.
Notice in particular how much smoother it looks in the area where the slime’s ear meets their body.
This zoom-in also demonstrates one of the bigger limitations of MSAA: It focuses on geometry.
Rather than using geometry to represent individual blades of grass, the grass in the background uses alpha-tested textures. As a result they are not anti-aliased since MSAA focuses on the edges of geometry, and the visual edges of the grass do not line up with their geometric edges (which are invisible.)
Below is an interactive comparison to let you see the opaques pass with and without MSAA. I’ve cropped it slightly to avoid things being scaled down, but if you’re on a phone or some other device with a small/low-resolution display it might be hard to tell the difference.
Enabled has 8 samples
Don’t worry if you don’t notice a huge difference here, people have different sensitivities to aliasing. It’s also less noticeable without movement.
It’s worth noting that this actually wasn’t the first MSAA resolve of the frame. The first one was actually in-between the two halves of the opaque pass.
MSAA resolve happens every time the contents of the render target will be used as the input to a pixel shader.
In this particular frame of Slime Rancher, MSAA resolve happened 9 times. This is a little excessive for reasons we’ll see later, and not super ideal because the MSAA resolve is somewhat expensive. However having MSAA enabled kind-of implies you’re playing on a nicer computer, so it’s not the worst thing ever.
Screen-space ambient occlusion
In order to simulate how light from the sun tends to hit everything both directly and indirectly, many games use a global ambient light component across the entire scene. This ambient light component is the reason it’s not pitch black in the shadows of this scene. It is a very rough approximation of the indirect lighting caused by the sun bouncing off nearby objects.
The problem with using a constant ambient component is that it isn’t super realistic as it does not account for situations where indirect lighting is occluded (IE: blocked.)
You can get away without this sort of occlusion, but it can lead to the scene having a visually flat appearance.
It’s not super noticeable in this scene since it’s outdoors and a lot of the scene is directly lit (plus Slime Rancher’s art style is pretty legible without it.) It is however noticeable here on this slime statue:

In the real world you’d expect there to be a shadow between the slime statue and the stone underneath it, but there isn’t one.
Accurately calculating indirect lighting to account for this is extremely expensive and isn’t practical for most games. Instead games tend to use an approximation of these shadows calculated based on what is visible on the screen. This class of techniques is called screen-space ambient occlusion (SSAO.)
The basic idea behind calculating SSAO is that for each and every pixel you randomly sample the depth texture for the surrounding area to approximate whether there’s anything nearby that might be occluding indirect light from reaching that pixel.
The orientation of the pixel (sourced from the normals texture) is used to determine which parts of the depth buffer should be sampled. This is done to avoid problematic sampling of areas which wouldn’t occlude indirect light in the first place. (If you don’t do this, a flat object would occlude ambient light for itself, which is not accurate.)
Slime Rancher also uses the luminosity of each pixel to limit how much SSAO has an impact on bright-looking areas.
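Putting those three paragraphs together, the core of a typical SSAO shader looks something like this sketch (the general technique; the kernel and helper functions are assumptions, not Slime Rancher’s actual shader):

```hlsl
// Sketch of the core SSAO idea. _SampleKernel, ProjectToScreenUV,
// LinearEyeDepthAt, and EyeDepthOf are assumed helpers/uniforms.
#define SAMPLE_COUNT 16
float3 _SampleKernel[SAMPLE_COUNT]; // random offsets in a unit sphere
float _Radius;

float ComputeOcclusion(float3 viewPos, float3 viewNormal)
{
    float occluded = 0;
    for (int i = 0; i < SAMPLE_COUNT; i++)
    {
        // Flip samples into the hemisphere above the surface so flat
        // geometry doesn't occlude itself
        float3 offset = _SampleKernel[i];
        if (dot(offset, viewNormal) < 0) offset = -offset;

        float3 samplePos = viewPos + offset * _Radius;
        float2 sampleUV = ProjectToScreenUV(samplePos); // assumed helper
        float sceneDepth = LinearEyeDepthAt(sampleUV);  // assumed helper
        float sampleDepth = EyeDepthOf(samplePos);      // assumed helper

        // If the depth buffer holds something closer than our sample
        // point, that something may be blocking indirect light
        if (sceneDepth < sampleDepth)
            occluded += 1;
    }
    // 1 = fully open, 0 = fully occluded
    return 1 - occluded / SAMPLE_COUNT;
}
```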
The output of this calculation is a buffer which describes how much indirect light reaches each pixel.

SSAO buffer - 960x540 R8G8B8A8_UNORM
You might notice this buffer is a little small – half-sized to be exact. SSAO is still pretty expensive to calculate so it’s done at a smaller size and scaled back up afterwards.
The calculation of SSAO here represents the single most expensive draw call done for the entire frame – it took over 0.34 ms all on its own. (About 4% of the frame.)
Now that we have our finished SSAO buffer we can apply it back to our scene:
1920x1080 R8G8B8A8_UNORM
Notice anything a little odd about the changes SSAO had on our scene? It added a translucency effect to the leaves of the sunburst trees!
But wait…based on our description of what SSAO is supposed to accomplish it doesn’t seem like it should be doing that 🤔
Remember way back during the creation of the normals texture, I told you to remember that the leaves were missing from the normals texture? This is what causes that! SSAO expects the data in the normals texture and the depth texture to match, but here it doesn’t. This causes SSAO to sample the wrong hemisphere to determine what’s occluding the leaves, which causes the occlusion buffer to contain artifacts in the shape of whatever’s behind the leaves.
I had noticed this effect on the sunburst trees before but didn’t connect the dots that it was due to SSAO. I initially made the connection to SSAO when I stumbled upon a vine in the Moss Blanket with the same effect while exploring the game for interesting things to capture, and when I saw it there I thought it was probably a bug. But I kept noticing it in other places too: Other (different) vines use it, phase lemon trees use it, quantum slimes use it, and after seeing it be responsible for the sunburst tree effect I think it has to be on purpose. (Right?)
I think this is a pretty cool intentional misuse of how SSAO is calculated✨
The only downside is that anyone playing with SSAO disabled will never see these translucency effects.

From my notes: “Is this SSAO being applied over a transparent object?”
(Not quite right but close enough!)

Different vines exhibiting the effect

Mochi’s giant peach cloud tree uses the effect
(As do the smaller ones)

Nimble needle trees too!
Finally to end things off, I wanted to show you a different scene where SSAO had a more significant impact.
This capture comes from a cave in the Indigo Quarry, so there’s no sunlight to speak of (in fact, the sun’s shadow map actually stops rendering entirely when you’re inside caves like these.) As a result, all shadows in this scene are from ambient occlusion.
Notice in particular that the back wall looks almost completely flat before SSAO. Adding ambient occlusion here reveals it’s actually quite a bit more faceted.
If you’d like to learn more about SSAO, here’s a nice presentation which goes into more detail on the implementation, flaws, and motivations behind SSAO.
Transparent objects
With all our opaque objects out of the way, it’s time to move on to transparent objects! These include the sparkling solar anomaly particles, glass tiles for the mosaic slimes, spheres of radiation for the rad slimes, the subtle glow of the phosphor slimes, fire, clouds, glowing statues, and various decorative gems.
But there’s something sinister lurking within this pass…
It’s time to talk about GrabPass.

GrabPass
GrabPass is a feature in Unity that allows shaders to access the contents of whatever’s currently present in the render target. It was introduced in Unity 5 (2015) and they describe it as follows:
GrabPass is a command that creates a special type of Pass that grabs the contents of the frame buffer into a texture. This texture can be used in subsequent Passes to do advanced image based effects.
On paper GrabPass sounds like a pretty cool feature!
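To make that concrete, here’s a minimal made-up example of the kind of distortion shader GrabPass enables. It grabs the screen, then samples it with wobbled coordinates:

```shaderlab
// Minimal GrabPass example (not Slime Rancher's actual shader):
// fake refraction by sampling the grabbed screen with offset UVs.
Shader "Example/SimpleDistortion"
{
    SubShader
    {
        Tags { "Queue" = "Transparent" }

        // Grabs the current screen contents into _GrabTexture
        GrabPass { }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 grabPos : TEXCOORD0;
            };

            v2f vert(float4 vertex : POSITION)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(vertex);
                // ComputeGrabScreenPos handles platform UV differences
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                float2 uv = i.grabPos.xy / i.grabPos.w;
                uv += sin(uv.yx * 40 + _Time.y * 4) * 0.01; // wobble
                return tex2D(_GrabTexture, uv);
            }
            ENDCG
        }
    }
}
```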
Slime Rancher uses it heavily for various distortion effects in the game. We saw it was used for the phosphor slimes during the opaque objects pass, but it’s also used to render:
- Ponds of water
- Splashes of water when you refill the Vacpack
- The wind vortex effect when you’re vacuuming things up
- The glowing radiation sphere around rad slimes
- Hunter slimes when they’re being sneaky
- The shockwave from feeding gordo slimes
- The spinning vortex above teleporters
The main thing it’s used for in our reference scene are the three rad slimes chilling in the background.
Here’s the GrabPass captured before they were rendered and what the output of their radiation spheres looks like:
These various distortion effects all look great in Slime Rancher, so what’s the problem?
The main issue with GrabPass is the implicitness of the passes it creates, and that it’s a huge performance footgun which Unity does a very poor job of indicating.
To Unity’s credit, they do vaguely acknowledge this as of the Unity 2020.3 documentation. (Five years after this feature was introduced, but I’ll take it.)
This command can significantly increase both CPU and GPU frame times. You should generally avoid using this command other than for quick prototyping, and attempt to achieve your effect in other ways. If you do use this command, try to reduce the number of screen grabbing operations as much as possible; either by reducing your usage of this command, or by using the signature that grabs the screen to a named texture if applicable.
Additionally the feature is more or less deprecated as it no longer works in the modern scriptable pipelines like URP and HDRP. Unity instead recommends using more explicit alternatives with them.
Implicitness
Let’s talk about the implicitness first.
If you’ll recall from earlier, we saw in the opaque objects pass that GrabPass is used to render phosphor slimes to allow them to refract the world around them.
Here’s what those GrabPass textures looked like:

GrabPass Temp for phosphor slimes - 1920x1080 R8G8B8A8_UNORM

GrabPass Temp for phosphor slimes - 1920x1080 R8G8B8A8_UNORM
Lots is missing from these textures, most notably the sky, tree leaves, and slime faces.
When you use GrabPass you’re basically adding an additional pass to the end of whichever object pass the shader will be used in. However you don’t get any direct control over where this split happens and Unity is pretty vague about where it will place it.
Historically GrabPass was described as follows:
This shader has two passes: The first pass grabs whatever is behind the object at the time of rendering, then applies that in the second pass.
This paragraph has been removed from recent versions of the Unity documentation, probably because it turns out that it’s a pretty flimsy guarantee and as you can see in the textures above it’s not even true.
If a human had deliberately placed this pass, they likely would’ve placed it after all of the opaque objects were rendered in their entirety.
Here’s another example of GrabPass being used to render the vortex above this teleporter. The GrabPass ends up being captured before the clouds are rendered:
As a result the distortion will not affect the clouds and the clouds will render as if the distortion never happened.

After clouds are rendered
This causes some odd artifacts around that sunburst tree since the clouds are rendered as if the tree is still in its original location.
This happens because the clouds are rendered according to the depth buffer, which isn’t affected by the distortion.
Show me the depth buffer!

Depth buffer after clouds are rendered - 1920x1080 D32_FLOAT_S8X24_UINT
It’s really hard to see, but the clouds actually ended up in this depth buffer–that’s not typical for transparent objects so it’s mildly interesting.
Another reason this implicitness can easily become a problem is that you can end up with weird GrabPass usage patterns without even knowing about it.
Going back to our original frame, there are five different GrabPass captures being made, one in the opaque objects pass and four in the transparent objects pass:
- Used for phosphor slimes (as we saw before)
- Used for a teleporter vortex (not actually visible on screen)
- Used for absolutely nothing
- Used for the various gems in the scene (eyes on the slime statues, the gems around the gateway)
- Used for the rad slime radiation glow spheres
Here’s what the GrabPass textures look like for the four in the transparent objects pass:

GrabPass #2 used for unseen teleporter vortex
GrabPass #3 used for nothing
GrabPass #4 used for various gems

GrabPass #5 used for rad slime radiation spheres
Why are #2, #3, and #4 all identical?
#2 and #3 are identical because the only thing rendered between the two grabs is the teleporter vortex, which is completely culled.
#4 is technically not completely identical; it actually has 8 pixels of difference, found in a pair of very distant particles:
These three grabs ending up identical isn’t Unity’s fault (it had no practical way of knowing ahead of time that they would be.) I more wanted to emphasize that the seemingly arbitrary placement of these pass boundaries ends up wasting a lot of resources for no practical gain.
There’s also an inefficiency here where GrabPass #2 could’ve reused the output of SSAO instead of resolving the scene texture again since nothing is drawn before it’s grabbed, but I think you get the point by now.
Performance
The process of capturing the current contents of your render target to a texture for the purposes of GrabPass and then continuing to render to it some more actually comes with a fairly heavy performance penalty.
GPUs do not allow you to read the contents of a texture while it’s being rendered to. The main reason for this is that GPUs are insanely concurrent devices and the contents of a render target are rarely coherent unless you specifically request it.
To put it another way: A texture can be a render target or a shader resource, but not both.
Whenever GrabPass grabs the current output, we must first warn the GPU that we’re going to read the contents of our scene texture before we make a copy of it for shaders to use. Once we’ve made our copy, we must also inform the GPU that we’re going to resume rendering to the scene texture again.
What are the detailed steps involved?
Here are the specific steps Unity takes to grab the current output for the purposes of GrabPass:
- Transition the MSAA scene texture from the RENDER_TARGET state to the RESOLVE_SOURCE state
- Transition the Resolve texture to the RESOLVE_DEST state
- Resolve the MSAA scene texture to the Resolve texture
- Transition the Resolve texture to the COPY_SOURCE state
- Transition the GrabPass texture to the COPY_DEST state
- Copy the Resolve texture to the GrabPass texture
- Transition the MSAA scene texture back to the RENDER_TARGET state
- Transition the GrabPass texture to the PIXEL_SHADER_RESOURCE state (this can wait until the texture is actually needed by a draw command)
- Start submitting draw calls again
In Direct3D 12 terminology these transitions are called resource barriers. (Slime Rancher is a Direct3D 11 game but these barriers still happen, just implicitly.)
That first step is akin to saying “Hello GPU, I would like to access the scene texture as a resolve source soon, so please make sure everything is in order”.
What’s up with Resolve texture?
As mentioned earlier, a multisampled texture must be resolved to a normal texture in order to use it in pixel shaders.
For whatever reason Unity resolves to an intermediate texture dedicated to resolving, which is then copied to the GrabPass texture. This shouldn’t be necessary; I’m assuming it’s a quirk of how things are structured internally within Unity.
When MSAA is disabled, the process is similar except the Scene texture is copied straight to the GrabPass texture:
- Transition the Scene texture from the RENDER_TARGET state to the COPY_SOURCE state
- Transition the GrabPass texture to the COPY_DEST state
- Copy the Scene texture to the GrabPass texture
- Transition the Scene texture back to the RENDER_TARGET state
- Transition the GrabPass texture to the PIXEL_SHADER_RESOURCE state (this can wait until the texture is actually needed by a draw command)
- Start submitting draw calls again
That initial warning to the GPU is the expensive part. When you tell the GPU that you’re going to read the contents of a render texture it first has to wrap up any outstanding work, flush out its caches, perform [NVIDIA TRADE SECRET REDACTED], etc.
When this happens, the pipeline is stalled. The GPU can no longer work on things concurrently; the only thing happening is the process of creating the GrabPass texture.
We can actually see this happening using PIX’s timeline view:

I’ve narrowed this view down to only include the opaque pass, the SSAO pass, and the transparents pass.
The individual blue rectangles represent individual units of GPU work (mostly draw calls), the taller the stack the more concurrent work is happening. (IE: Taller is generally better)
The bright yellow rectangles represent a grab for GrabPass (specifically they represent the MSAA resolve in step 3 in the collapsed detailed section above.)
As you can see, whenever GrabPass has to grab the contents of the scene texture, everything grinds to a halt. As a result the concurrency in the transparents pass is absolutely pitiful. Thankfully there isn’t a ton going on in the transparents pass for this scene so it’s not the end of the world, but I hope you can still see how breaking things up unnecessarily can cause issues.
If you’re still using Unity’s built-in render pipeline, hopefully I’ve inspired you to explore more sophisticated options for accomplishing these sorts of effects.
I’ve also hopefully made you think twice about exposing rainbow-colored footguns covered in sparkles with no safety installed. It’s not Monomi Park’s fault for choosing to use GrabPass. On the surface it’s a very compelling feature, and it wasn’t even documented as being performance-heavy back when Slime Rancher was developed.
(As an aside, I didn’t test it, but I believe you can mitigate some of the performance issues by naming your GrabPass textures and using the same name between shaders. However you’re also maybe more likely to run into unpleasant emergent behavior from where the grab implicitly ends up.)
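For reference, the named variant looks like this; per Unity’s documentation, a named grab only happens once per frame for the first object that uses it, and every shader referencing the same name shares that texture:

```shaderlab
// Named grab: shared by every shader that declares the same name,
// so Unity only performs the expensive grab once per frame.
GrabPass { "_SharedGrabTexture" }
```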
(It’s also a valid takeaway to recognize that even if you don’t do everything “perfect” the game will still be fun and perform well enough that nobody really cares.)
(I also think there’s ways Unity could’ve exposed something similar to GrabPass without these problems, but this tangent has gone on way longer than it should’ve, so maybe next time.)
Bloom
When something is significantly brighter than its surroundings it tends to look as if it’s glowing a bit. In real life this happens due to light scattering in the air and some quirks of how light interacts with the lenses in your eyes. In gamedev terminology, the imitation of this phenomenon is called bloom.
Bloom helps convey to the player how bright things are, such as the radiant mosaic slime towards the right side of our scene.
The general strategy still used to render bloom today was made popular in 2004 by the developers of Tron 2.0 thanks to an article they published describing their technique, although modern pipelines tend to create the highlights differently. (Also back then they referred to it as glow.)
To get started with bloom, Slime Rancher first scales down the output to just a quarter of its normal size:

Downsized output - 480x270 R8G8B8A8_UNORM
Bloom is inherently a blurry, low-detail effect so calculating it at a low resolution doesn’t harm the result. (Similar to SSAO this also helps out with performance.)
From this downsized output we extract highlights. Highlights are basically the “shiny things” which bloom will be applied to.
In the original Tron article highlights were defined manually by artists, but like many modern games Slime Rancher determines them automatically.

Extracted highlights - 480x270 R8G8B8A8_UNORM
Many modern games use high dynamic range (HDR) textures, and one of their many reasons for doing so is to make it easier to extract accurate bloom highlights. However you might’ve noticed that Slime Rancher has been using R8G8B8A8_UNORM for its scene textures. This is a standard dynamic range (SDR) texture format.
The big downside of using SDR up until this point is that we don’t actually know what in this scene is super bright and what isn’t, so this is a pretty rough approximation. This is fine for Slime Rancher since its bloom is pretty subtle, but I wanted to point it out since this is why the extracted highlights seem to include some odd things.
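The extraction itself is conceptually just a luminance threshold, something like this sketch (the threshold value and weighting are made up, not Slime Rancher’s actual shader):

```hlsl
// Sketch: extracting highlights from an SDR scene texture.
sampler2D _SceneTex;
float _Threshold; // e.g. 0.8

fixed4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    fixed3 color = tex2D(_SceneTex, uv).rgb;
    // Perceptual luminance; with SDR input this is the best guess at
    // "brightness" since real intensity was clipped at 1.0
    float luma = dot(color, float3(0.2126, 0.7152, 0.0722));
    float highlight = saturate((luma - _Threshold) / (1 - _Threshold));
    return fixed4(color * highlight, 1);
}
```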
What’s the difference between HDR and SDR? How does HDR allow for higher quality highlight extraction?
The main difference between HDR and SDR is the number of bits of data used to represent each channel. SDR uses 8 bits for each channel, which gives the possibility of 256 different values. HDR on the other hand tends to use either 10 or 16 bits per channel, which increases the possibility to 1,024 and 65,536 different values per channel respectively.
SDR is perfectly fine for most images as most consumer displays are 8-bit SDR (although 10-bit HDR displays are becoming more common.) However SDR is not good for encoding colors which are brighter than what can be represented on the screen. As such, more modern games will render to a HDR buffer to keep this information around and then use only a subrange of the values from that buffer for what actually ends up on the screen.
For example, a 10-bit HDR buffer might have value 512 mapped down to SDR 255 on your display. The game still has information about things brighter than level 512 and could use that information to extract highlights automatically.
The inverse situation is technically possible–where you render to an 8-bit SDR buffer and map 128 up to 255–but this isn’t great for color fidelity since you’re effectively using 7 bits per channel now.
The other alternative is for your artists to manually define which parts of which objects should have highlights. This is what Tron 2.0 did, since 2004-era hardware wasn’t capable of rendering to HDR textures and Tron’s bloom highlights were pretty strong.
Slime Rancher most likely doesn’t use HDR textures in order to support lower-spec hardware as well, but they can get away with the low-quality highlight extraction since their bloom highlights are so subtle. (If they weren’t subtle, Slime Rancher would be a blurry mess.)
Here’s a comparison between the bloom-less scene render and extracted bloom highlights from above so you can see what got pulled out.
This shows the limitations of using an SDR input to automatically extract highlights like this. Canonically we would expect the mosaic slime and the glittering solar anomaly particles to be much brighter than everything else, and in contrast we wouldn’t really expect the gentle light from the torch to be so prominent in a bright outdoors scene like this. However the information required to discern between these objects simply doesn’t exist by this point.
Now that we have our extracted bloom highlights, we need to blur them to make them more glowy. Slime Rancher blurs the highlights in a typical four-pass blur: two passes for each direction, alternating between directions.
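Each of those passes is conceptually a one-dimensional Gaussian kernel; here’s a sketch of a single direction (the weights and names are made up):

```hlsl
// Sketch: one direction of a separable Gaussian blur. _BlurDir is
// (texelSize.x, 0) for horizontal passes and (0, texelSize.y) for
// vertical ones.
sampler2D _HighlightsTex;
float2 _BlurDir;

static const float weights[5] = { 0.227, 0.194, 0.121, 0.054, 0.016 };

fixed4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    fixed3 sum = tex2D(_HighlightsTex, uv).rgb * weights[0];
    for (int i = 1; i < 5; i++)
    {
        // Sample symmetrically outwards from the center texel
        sum += tex2D(_HighlightsTex, uv + _BlurDir * i).rgb * weights[i];
        sum += tex2D(_HighlightsTex, uv - _BlurDir * i).rgb * weights[i];
    }
    return fixed4(sum, 1);
}
```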
Blurred highlights in hand we can finally apply them to our scene!
Like I said, it’s pretty subtle in Slime Rancher.
Vacpack
By now you’re probably wondering where Beatrix’s trusty Vacpack is. It hasn’t been rendered yet since it doesn’t really exist in the world. (Although if you think about it, does anything in any game actually exist? 🤔🤔🤔)
The Vacpack is rendered separately after everything else, though not before the depth buffer is cleared. This ensures the Vacpack doesn’t interact with the world in undesirable ways. (For example, rendering the Vacpack in a separate pass like this ensures it doesn’t clip through walls when the player gets close to them.)
First things first, a new depth texture is created with just the Vacpack (and Beatrix’s arms.)

Camera DepthTexture (for Vacpack) - D32_FLOAT_S8X24_UINT
This texture is conceptually the same as the depth texture generated way back at the beginning of the pipeline, except this one isn’t actually used for anything during this frame.
For a long time I didn’t think it was ever used, but I did eventually find out why it’s created…but we’re gonna have to look at it later because it’s only used in one very specific location in the game. (I’ll give you a hint though: This unused depth texture is only created when you’re outside–IE: Not in caves.)
Hints of tangents aside; after creating that depth texture, Beatrix’s arms and the Vacpack are rendered right on top of our scene:

Since these objects are essentially “outside” of the scene, they aren’t affected by any of the shadow maps we created earlier. This means, for example, that if you walk under a tree the Vacpack won’t appear to be in shadow. In fact, Beatrix and the Vacpack can’t cast or receive shadows at all since there weren’t any shadow maps built for this pass. (This is all fine and could easily be an aesthetic choice.)
The display on the Vacpack is rendered as a single quad all at once with the composition of the screen elements all being done within the pixel shader. The dial face and both meters are all encoded in separate channels of the same texture, which is mildly neat:

dial_face - 256x256 BC1_UNORM
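Here’s my guess at how that single-quad composition might work in the pixel shader (everything here, names included, is made up for illustration):

```hlsl
// Guess at the approach: each channel of dial_face holds the mask
// for one UI element, and the shader composites them with per-element
// colors and fill amounts driven by gameplay state.
sampler2D _DialFace;
fixed3 _DialColor, _LeftMeterColor, _RightMeterColor;
float _LeftMeterFill, _RightMeterFill; // 0..1 from gameplay state

fixed4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    fixed4 masks = tex2D(_DialFace, uv);
    fixed3 result = masks.r * _DialColor;
    // Meters only draw below their current fill level
    result += masks.g * _LeftMeterColor * step(uv.y, _LeftMeterFill);
    result += masks.b * _RightMeterColor * step(uv.y, _RightMeterFill);
    return fixed4(result, 1);
}
```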
Otherwise there’s not much else to be said about the Vacpack pass. Both the Vacpack and Beatrix’s arms are disembodied, but that’s pretty typical. (Emphasis on arms though. Beatrix’s right arm is drawn, it’s just off screen.)
Output is flipped
I have a confession to make. All of the screenshots I’ve shared with you thus far have been LIES.
This is what Slime Rancher’s output actually looked like at the end of the Vacpack pass:

Turns out the Far, Far Range is actually just Australia
One of the more obnoxious differences between Direct3D and OpenGL is the location of the origin for textures and the screen. By default Direct3D places [0, 0] at the upper-left and OpenGL places it at the bottom-left.
To help hide this difference, Unity renders (almost) everything upside-down when rendering on non-OpenGL platforms and flips it back during this pass.
Since I’m looking at Slime Rancher as it was rendered with Direct3D, everything has been flipped in the debugger. (PIX thankfully lets you flip its visualization to help preserve your sanity.)
Why does Unity prefer OpenGL’s convention?
Back in the ancient times (2005) Unity was released exclusively for Mac OS X. As a side effect, Unity has roots in being an OpenGL-only game engine.
Additionally, OpenGL historically did not support configuring its clip space. You can use glClipControl on platforms which support it, but this extension came around way too late, so macOS is not one of those platforms. (WebGL also doesn’t support it, and if I remember right OpenGL ES platforms like smartphones don’t either.)
2D HUD
The only thing really left to do now is to render the HUD!
Not much worth noting here. It’s a HUD. It’s rendered. It looks nice, gets the job done. It reminds you phosphor slimes like fruit.

Cuberries are their favorite though
Present!

Wrapped up and ready to go!
That’s all, brave adventurer! Our journey through the pipeline is finally over. The output of the 2D HUD is copied to the back buffer and presented to the screen.
It’s out of our hands now; it’s up to the GPU to convert our wiggly pixels into wiggly electrons to be sent to the display to be converted into wiggly photons beamed into your wiggly-loving eyeballs!
Don’t think we have time to relax though, time to go back to the very beginning and get started working on the next frame!
Frame statistics
For the curious, here are some statistics from this frame.
These were collected by replaying the capture, so they do not account for CPU bottlenecks or anything related to the game’s simulation.
Segment | Time | % of total | Draw calls | % of total | Triangles submitted | % of total |
---|---|---|---|---|---|---|
Total | 8.92 ms | 100.0% | 3,851 | 100.0% | 2,441,265 | 100.0% |
Real-time GI texture upload | 0.74 ms | 8.2% | 0 | 0.0% | 0 | 0.0% |
Depth texture | 0.39 ms | 4.4% | 660 | 17.1% | 523,434 | 21.4% |
Normals texture | 0.34 ms | 3.9% | 656 | 17.0% | 523,954 | 21.5% |
Real-time shadows | 1.85 ms | 20.8% | 1,364 | 35.4% | 509,862 | 20.9% |
Sun | 1.44 ms | 16.1% | 902 | 23.4% | 318,522 | 13.0% |
Shadow map | 1.26 ms | 14.2% | 901 | 23.4% | 318,520 | 13.0% |
Screen-space shadow map | 0.17 ms | 1.9% | 1 | 0.0% | 2 | 0.0% |
Point lights | 0.42 ms | 4.7% | 462 | 12.0% | 191,340 | 7.8% |
Opaque objects | 3.62 ms | 40.5% | 993 | 25.8% | 814,373 | 33.4% |
SSAO | 0.50 ms | 5.7% | 4 | 0.1% | 8 | 0.0% |
Transparent objects | 0.51 ms | 5.8% | 114 | 3.0% | 30,376 | 1.2% |
Bloom | 0.20 ms | 2.3% | 9 | 0.2% | 18 | 0.0% |
Unused resolve | 0.05 ms | 0.6% | 0 | 0.0% | 0 | 0.0% |
Vacpack | 0.54 ms | 6.1% | 13 | 0.3% | 38,768 | 1.6% |
Output flip | 0.09 ms | 1.0% | 1 | 0.0% | 2 | 0.0% |
2D HUD | 0.02 ms | 0.2% | 36 | 0.9% | 468 | 0.0% |
Present | 0.06 ms | 0.7% | 1 | 0.0% | 2 | 0.0% |
For some context on the timings: When you’re targeting a typical 60 frames per second (FPS) you have about 16.67 milliseconds to render a frame. Do remember though that this only represents GPU work. As such Slime Rancher can’t necessarily accomplish 112 FPS on my system, I actually get around 70 FPS in this area with vsync off.
What is “Real-time GI texture upload”?
I didn’t mention this in the pass overview because:
- It’s not actually used by anything
- It’s the very first thing that happens and it felt like a confusing place to start
Global illumination (GI) is the contribution of light from indirect sources. (Ambient occlusion is trying to create shadows to compensate for the fact that we don’t usually calculate global illumination.)
Unity has a system for computing global illumination in real-time; I assume this is related to that.
These textures aren’t actually used for anything, so I assume something related to real-time GI is enabled when it shouldn’t be.
If you’re curious, here’s what those textures look like:
What is “Unused resolve” after bloom?
I’m not sure! The results of this resolve aren’t ever used.
It doesn’t always happen, but it did happen during this frame.
I didn’t mention it during the overview since it seems inconsequential, but it felt wrong to leave it out of the statistics (or lump it into a neighboring pass.)
A closer look at specific things
Now that we’ve gotten the overall pipeline squared away, let’s look at a handful of other object renderings in more detail.
This section will mostly consist of things I was curious about or didn’t fit into the macro-level overview above.
Slime rendering
It wouldn’t be Slime Rancher without the slimes!
Slime Rancher features 21 distinct species of slime, 91 largo variants, and 14 gordo variants – and that’s without including secret styles!
Let’s take a closer look at how these cuties are rendered in general, followed by a deeper look at some of the interesting differences I happened to notice in specific varieties.

Tag yourself, I’m the phosphor slime freaking out in the middle of the group on the right.
Slimes are made up of four main parts:
- Eyes (for finding chickens)
- Mouths (for eating chickens)
- Bodies (for digesting chickens)
- Extra bits (for extra wiggling)
Extra bits are things like a tabby slime’s ears and tail, a phosphor slime’s wings and antenna, a mosaic slime’s glass tiles, etc.
Most slimes have all of these, but some only have a subset. For example: Puddle and fire slimes don’t have mouths or extra bits.
As mentioned back when we discussed the opaque objects pass, the opaque pass happens in two halves implicitly created by the use of Unity’s GrabPass feature.
Generally speaking, bodies and extra bits will be drawn in the first half and slime faces will be drawn in the second half, but there are some exceptions.
Here’s the scene from above at both stages of the opaque pass:
As we saw earlier, this GrabPass was created specifically for phosphor slimes’ bodies. As such they aren’t visible in the GrabPass. (Although oddly enough the pink phosphor largo’s extra bits are visible. Their extra bits don’t use the refraction shader like their normal-sized counterparts.)
Slime bodies in detail
Below you’ll find a selection of slime meshes for your viewing pleasure.
The white wireframe is the body mesh, and the purple wireframe represents the extra bits. Sparkles✨ indicate that the slime is using their secret style.
Tangle slime
Tabby slime✨
Puddle slime
Puddle slime✨
Saber rock largo
Tarr
Quicksilver slime
Dervish gordo
slime_default is the base mesh used by the vast majority of slimes as well as largos.
slime_DLC_default is slightly odd. It’s used by some secret styles (but not all of them.) It’s structurally the same as slime_default, but it has two pairs of texture coordinates instead of one–and neither of them matches the default mesh’s. Not sure what the practical difference is.
Puddle slimes, fire slimes, and quicksilver slimes all switch to the default mesh when they jump. Additionally, quicksilver slimes use the puddle slime mesh when they’re far away.
Slimes aren’t just a static mesh, they wiggle too. All 238 vertices of each little cutie are animated on the CPU. Would’ve been cool to see them animated in a compute shader, but that wasn’t as reasonable back when this game was developed.
Since I focused on the GPU side of things I can’t say exactly how they’re animated. Based on how they act I’m pretty confident it’s at least somewhat procedural though.
Slime faces in detail
Slime faces are rendered by drawing the slime’s body in a few extra per-object passes: once for their mouth and again for their eyes.
The slime faces themselves are represented as signed distance field (SDF) textures. SDF textures are a technique for encoding arbitrary vector shapes as relatively low-resolution raster images. In typical implementations, a value of 50% represents the boundary of the shape, anything below that is outside of the shape and anything above is inside. The bigger the difference from the boundary value, the further away that pixel is from the shape boundary.
(The reason for encoding 0%..50%..100% instead of -100%..0%..100% like you might expect from the word “signed” is that these textures are usually encoded using 8-bit unorm textures, which can’t actually store negative numbers.)
The main benefit of SDF textures is that they allow you to render shapes with nice crisp edges at arbitrary sizes. In the context of Slime Rancher, this means slime’s faces never look blurry even when you get really close to them. (For this reason SDF textures are frequently used to render text, so you’ll be unsurprised to learn Slime Rancher uses them for that too.)
Since you’re able to know the distance from the edge of the shape, an additional perk of SDF textures is that you can use them to render neat effects like outlines, glow, and shadows. Slime Rancher is presumably using this property of SDF textures to add depth, specular highlights, outlines around mouths, etc.
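To make that concrete, here’s a minimal sketch of the usual way a pixel shader turns an SDF sample into crisp coverage plus an outline. This is illustrative Python, not Slime Rancher’s actual shader; the threshold values and names are my own:

```python
def smoothstep(edge0, edge1, x):
    # Standard GLSL/HLSL-style smoothstep: clamped cubic ramp from 0 to 1.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def shade_sdf_pixel(distance_sample, smoothness=0.02):
    """Turn one 8-bit unorm SDF sample (0.0..1.0, boundary at 0.5)
    into coverage for the shape plus a surrounding outline band."""
    # Crisp-but-antialiased edge: ramp over a narrow band around 0.5.
    coverage = smoothstep(0.5 - smoothness, 0.5 + smoothness, distance_sample)
    # An outline is just a second band slightly outside the shape.
    outline = smoothstep(0.40, 0.45, distance_sample) - coverage
    return coverage, outline

# A few samples across an edge: well outside, near the boundary, inside.
for d in (0.30, 0.42, 0.49, 0.80):
    print(d, shade_sdf_pixel(d))
```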
The use of SDF textures in games was popularized by Valve back in 2007; if you want to learn more about SDFs I’d suggest reading their original paper.
Another thing worth highlighting about SDF textures is that they can be encoded in a single color channel (although there is a multi-channel variant called MSDF.) Slime Rancher takes advantage of this by encoding different parts of the slime faces in different channels of the face textures.
I’ve created a little widget to let you play with the various slime face textures below. (I also included the font atlas if you want to see what that looks like.)
In a typical SDF implementation you’d want to drive the smoothness based on the relative size of the texture on the screen (IE: using font size or screen-space derivatives of the texture coordinates), but I’ve left it exposed manually so you can play with it.
As mentioned earlier, SDF textures can be pretty low resolution. Most face textures in Slime Rancher are 64x64 pixels, a few are 128x128 (mainly the glitch and hunter faces), and the feral ones are 256x256. (For reference, that viewport above is rendered at no less than 256x256 pixels–it might be more if you’re reading on a high resolution display.)
For the majority of the textures, red is used for eyes and alpha is used for mouths. The blue channel is used for slimes which have eye whites (such as the hunter slime pictured on the right – that’s hunter_elated if you want to poke at the SDF.)
I’m less certain about the green channel. It seems to contain various special effects (mostly glows and highlights); I don’t think it’s actually meant to be interpreted as a signed distance field.
I stumbled upon an article written by Ian McConville, a principal artist (now art director) at Monomi Park: Art Blog: Let’s Talk About Face Shaders
I was on the right track with the green channel, turns out it’s specifically highlights for the eyes!
In the article Ian also goes into detail on how he authored the textures, how slime tongues work, and more! It’s a good read if you’re still hungry for more slime face rendering knowledge.
Glitch slimes
Speaking of things which don’t look like SDFs, if you poked around in the above widget long enough you might’ve noticed that the glitch faces are different from the others.
I did confirm the same shader is being used to render glitch slimes’ eyes, so it seems they are being rendered as SDF. Due to how SDF textures are rendered, trying to interpret a normal non-translucent texture as an SDF will end up looking pretty similar but the edges won’t be as smooth. Considering the context, these artifacts are actually desirable.
The mouth shader on the other hand is different. Not sure if it’s technically using SDF (doesn’t really matter), but it uses some extra shader goodness to add some random noise to their mouths, which you can see pretty clearly in the capture below:

Look at that poor boi completely squished in the back. Ouch!
Hunter slimes
Hunter slimes are another one of the exceptions to the slime rendering pass rules.
When they’re cloaked they’re rendered during the transparent objects pass. (When they’re fully visible they’re rendered during the opaque pass as per usual; when they’re in-between states they render in the transparents pass.)
An interesting quirk of this is that they end up refracting their own faces (which are still rendered during the opaque pass as per usual):
Puddle slimes
Who doesn’t just love puddle slimes? Look at these little guys! Splishin’ and splashin’ with their rubber ducky 🥺 an–hey wait a minute, what’s that one in the back doing?

Hey you!

Yeah, you.
Look, I’m not mad. But are you okay little guy? Your water looks…holey and you seem kinda squished.
Puddle slimes are quite a bit different compared to the other slime species. They use an entirely different mesh (slime_puddle), don’t eat food (or even have mouths to do so), don’t get agitated (unless you count blushing), and just generally vibe.
However, they do still sometimes jump! And when they do, they’re actually rendered with the same slime_default mesh used by most other slimes.
Now you might be thinking that slime still doesn’t look very much like a normal slime, and you’re not wrong. On top of using the normal slime mesh while jumping, puddle slimes also have their mesh modulated in the vertex shader using a puddly noise texture:
This modulation gives the slime an appearance of flowing like water while it’s in the air.
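I can only guess at the exact shader, but this kind of vertex modulation generally boils down to sampling scrolling noise and displacing each vertex along its normal. A tiny sketch, with invented names and constants:

```python
import math

def noise(u, v):
    # Stand-in for sampling a tiling noise texture; any smooth 2D noise works.
    return 0.5 + 0.5 * math.sin(u * 12.9898 + v * 78.233)

def modulate_vertex(position, normal, uv, time, amplitude=0.08, scroll=0.35):
    """Displace a slime vertex along its normal using scrolling noise,
    giving the body a flowing, watery wobble while airborne."""
    n = noise(uv[0] + time * scroll, uv[1] + time * scroll)
    offset = (n - 0.5) * amplitude  # centered so the mesh wobbles both ways
    return tuple(p + nc * offset for p, nc in zip(position, normal))

print(modulate_vertex((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.2, 0.7), time=1.0))
```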
That explains the squish, but what about the hole in the water?
You might’ve noticed the hole looks fairly slime-shaped–kinda like the unmodulated silhouette shown above. If you’ll recall from earlier, slime faces are drawn in a separate pass from their bodies, but still using the same mesh.
Here’s what the depth buffer looks like before that slime’s face is drawn, drag right to reveal the truth:
I think I’ve seen a ghost!
Turns out the vertex shader used while rendering the face doesn’t include the modulation. Whoops!
As a result, when the water is rendered later during the transparents pass the pixels around that puddle slime incorrectly fail the depth test and are culled:

Visible pixels are shaded green, culled pixels are red.
While it’s not ideal, this bug is honestly pretty hard to notice. If you stare at them long enough you’ll eventually see it, but it’s generally pretty subtle, and I got very lucky here that it’s so apparent in this capture. (Believe it or not, I actually got this capture by accident, and it’s how I noticed the bug in the first place!)
Jumping slimes and bugs aside, another unique aspect of puddle slimes is their ability to blush. Puddle slimes are quite shy, and if there’s too many friends nearby they’ll get a lil’ overwhelmed and blush uncontrollably.
This blush is rendered at the same time as the slime’s eyes from the texture seen below.

face_blush
- 128x128 BC3_UNORM
Quantum slimes
Quantum slimes have a couple interesting things to note about their rendering, so let’s take a visit down to my free-range quantum slime farm I keep in the Grotto!

Just look at these quantum cuties!
Those floating rings seen around two of the slimes above appear when they are agitated. (Don’t blame me, the phase lemons are right there.)
I had expected those rings to be made up of a couple rotating planes centered on the slime or maybe a few trail meshes, but it turns out that it’s just a single billboard and all the 3Dness is done in the pixel shader itself. Pretty neat!
(A billboard is a quad in 3D space always oriented towards the camera)
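If you’ve never built one, here’s roughly how a camera-facing quad gets constructed. This is a generic sketch in Python, not the game’s code; the simplest version just spans the quad along the camera’s right and up vectors:

```python
def make_billboard(center, cam_right, cam_up, size):
    """Return the four corners of a quad centered at `center`,
    spanning the camera's right/up axes so it always faces the camera."""
    hw, hh = size[0] * 0.5, size[1] * 0.5
    corners = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        corners.append(tuple(
            c + r * sx * hw + u * sy * hh
            for c, r, u in zip(center, cam_right, cam_up)
        ))
    return corners

# Camera looking down -Z: right = +X, up = +Y.
print(make_billboard((0, 1, -5), (1, 0, 0), (0, 1, 0), (2, 2)))
```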

A screenshot from the draw call responsible for the right slime’s agitation rings with an overlay of the wireframe of the drawn mesh.
The red part highlights pixels which were culled because they’re inside the slime.
The texture for these rings is called fx_quantumAgitation; it encodes each part of the effect in separate color channels so that the pixel shader can separate them out and recolor them independently. The alpha channel also contains a nice soft noise texture. Not sure what it’s used for, but if I had to wildly guess, it’s involved in the color modulation.
128x128 - R8G8B8A8_UNORM
Notice that swirl pattern on our four-dimensional friends? Just like their faces, the swirl on each quantum slime is an SDF texture:
The blue channel is most likely not meant to be an SDF; it looks to be a noise reference.
Not 100% sure what that green channel is for, but it’s neat looking!
When quantum slimes are in superposition they get all warbly and extra wiggly. Similar to puddle slimes this is done in their vertex shader, and similar to puddle slimes this effect isn’t properly applied when rendering their faces or in the depth pass.
This warbling is not super noticeable in a still screenshot, but it can still be seen by overlaying the unwarbled mesh from the eye rendering pass over the warbled body:

Notice how the slime’s body renders outside the top and bottom of the unwarbled mesh.
Green pixels were drawn, red were culled
This overlay also highlights another interesting detail about quantum slimes: Their eyes render under their body when they’re in superposition…kinda.

Enhance!
Note how the eye is under the interlaced pattern of the body, except at the very bottom.
The reason this happens at all can be seen by looking at the depth buffer used during opaque rendering:

Opaque pass depth
The interlaced pattern of slimes (and plorts) in superposition is reflected in the depth buffer too!
This is a hint that Slime Rancher is using the discard statement rather than alpha blending to achieve this effect, and indeed, if we poke at the bytecode of the shader used to render the body we can see it being used:
call void @dx.op.discard(i32 82, i1 %276), !pix-dxil-inst-num !559 ; Discard(condition)
(This is expected and good, using alpha blending wouldn’t be the right tool for the job.)
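This kind of interlaced transparency is often called screen-door (or dissolve) transparency. Here’s a little Python emulation of the general idea; the exact pattern the game uses is my guess, but the discard-instead-of-blend structure is the point:

```python
def superposition_pixel(screen_x, screen_y, color, phase=0):
    """Emulate screen-door transparency: discard every other scanline
    instead of alpha blending, so depth writes stay consistent."""
    if (screen_y + phase) % 2 == 0:
        return None  # the equivalent of HLSL's discard: no color, no depth write
    return color

# Render a 4x4 patch; discarded pixels show as '..'
for y in range(4):
    row = [superposition_pixel(x, y, "##") or ".." for x in range(4)]
    print(" ".join(row))
```

The upside over alpha blending is exactly what the depth buffer above shows: discarded pixels never write depth, so everything behind them still renders correctly.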
What I believe is happening is that the warbled body mesh is actually clipping outside the unwarbled body mesh used to render the eyes, which results in the eye being behind the body. But since the body is rendered with this interlaced effect (and doesn’t warble without it) the eye is still visible when this happens.
It’s not totally clear if this is intentional or not, but it certainly fits the aesthetic.
Speaking of depth buffers, this scene is another good example of that weird SSAO-based translucency effect we looked at earlier.
Just like the sunburst trees outside the Ancient Ruins, quantum slimes and the leaves of phase lemon trees are rendered to the depth texture but are absent from the normals texture. (Curiously quantum plorts are present in both.)
Unlike with the opaque pass depth buffer, the slime is not warbled or interlaced in the depth texture while it’s in superposition. On the other hand, quantum plorts are interlaced in the depth texture but not for the normals.
I’m not seeing any obvious signs of the effect on the quantum slimes in this scene, but in theory it should show up under the right circumstances.
It is however visible on the phase lemon tree–in particular you can see the shape of the rocks on the ceiling behind it coming through the leaves. It is a bit more subtle than it was with the sunburst trees; it’s easier to see if you swap between the normals texture and the finished scene:
And here’s the ambient occlusion texture just to show there’s not really anything that would affect the slimes:

Gordo slimes
Gordo slimes are rendered using the same process as their smaller counterparts. Thanks to the use of SDFs, the face textures are able to be the same without appearing pixelated or blurry.
The main difference is that gordo slimes use the slime_gordo mesh instead of slime_default.
Unsurprisingly their extra bits are sometimes different too; for example, the shell of glass tiles used for the happy friend below comes from the mesh mosaicGordo.

Extra per-object passes
We saw earlier that slimes are rendered in multiple subpasses of their own, but they aren’t the only ones!
In this section we’ll look at some subpasses which might apply to any opaque object–slimes or otherwise.
Opaque object lighting
One of the biggest differences between forward and deferred rendering is how lighting is applied. In deferred rendering you render each light and it imparts its color onto the objects around it. In forward rendering (which is what Slime Rancher uses) the lighting is applied when the objects it affects are rendered.
Some pipelines implement this by picking N nearby lights for each object (3 is common) and passing the data for each light to the object’s pixel shader when it’s rendered.
The other alternative (which is what Unity’s built-in forward pipeline uses) is to render the object once with only basic lighting (IE: from the sun) and then render it additional times for each light which affects it. That first pass is fully opaque and the additional passes are blended onto that opaque base.
Using that stone platform from our main scene as an example, here’s what it looks like after the base lighting pass.
This pass is rendered using the stone platform mesh, various moss and stone brick textures, the sun’s light parameters, and the screen-space shadow map:

Stone platform base pass
Much later (after 489 unrelated draw calls), an additional pass is rendered for this platform. This time it’s rendered as if the torch is the only light affecting it.
The results of this pass are then added (literally) to the previous pass to blend the results:
If there were more lights nearby, this would then be repeated for each and every light affecting the object.
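Numerically, this multi-pass approach is just additive blending. Here’s a minimal sketch of the accumulation; the lighting functions here are made up for illustration, not Unity’s actual shading model:

```python
def shade_base(albedo, sun):
    # Base pass: sun lighting only, written opaquely.
    return tuple(a * s for a, s in zip(albedo, sun))

def shade_one_light(albedo, light_color, attenuation):
    # Additive pass: one light's contribution, blended ONE:ONE on top.
    return tuple(a * c * attenuation for a, c in zip(albedo, light_color))

albedo = (0.6, 0.6, 0.5)
pixel = shade_base(albedo, sun=(1.0, 0.95, 0.9))
for light_color, atten in [((1.0, 0.6, 0.2), 0.4)]:  # e.g. a nearby torch
    contribution = shade_one_light(albedo, light_color, atten)
    # Additive blend: dest = dest + src (clamped like an 8-bit target would be)
    pixel = tuple(min(p + c, 1.0) for p, c in zip(pixel, contribution))
print(pixel)
```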
Slime splats
When slimes bounce around hard enough they leave splats of slime on whatever they hit. These types of splats are more generically called decals in gamedev terms.
Decals represent another common difference between forward and deferred pipelines.
Just like lights, in deferred pipelines decals are rendered directly. They fall naturally out of how deferred rendering works and can be accomplished in just one draw call to render the decal itself.
With forward rendering we’re faced with a choice again. One strategy is to build a list of all objects affected by a given decal and re-render their meshes with the pixel shader overridden with one which will render the decal’s texture in the appropriate location. This is the strategy Unity’s projector component uses, and is the only Unity-provided option for decals in the built-in render pipeline. As such it’s not too surprising to find that this is how Slime Rancher accomplishes its splats.
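The core of the projector strategy is re-rendering the mesh with UVs derived from the decal’s projection. A rough sketch of an orthographic projector (with hypothetical names, not Unity’s API) looks something like this:

```python
def decal_uv(world_pos, decal_origin, decal_right, decal_up, decal_size):
    """Project a world-space position into an orthographic decal's
    0..1 UV space; anything outside that range gets discarded."""
    rel = tuple(w - o for w, o in zip(world_pos, decal_origin))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    u = dot(rel, decal_right) / decal_size + 0.5
    v = dot(rel, decal_up) / decal_size + 0.5
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None  # the pixel shader would discard here
    return (u, v)

# A point half a unit to the right of a 2-unit-wide splat's center:
print(decal_uv((0.5, 0.0, 0.0), (0, 0, 0), (1, 0, 0), (0, 0, 1), 2.0))
```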
In fact, our stone platform from the previous section had an additional pass to render a slime splat of its own!

Stone platform slime splat pass
This works well enough, but as you might imagine it can get a little bit dicey when there’s a ton of splats on large, complicated objects. Thankfully it’s not quite as bad as you might fear since most of the pixels get discarded quickly so the only real overhead is the vertex shader, which is usually simple.

Although doing stuff like this does cause a 30-40 FPS dip. Slime Rancher still kept things above 60 FPS the whole time though!
To give some perspective, I did a little analysis on capture above:
This scene is rendered in 4,399 draw calls processing a total of 4,918,524 triangles.
Of those, 2,112 draw calls (48%) and 3,284,748 triangles (67%) were submitted exclusively to render slime splats.
If that sounds excessive, one thing to keep in mind is that each object affected by any given slime splat will be redrawn. It doesn’t even need to actually affect the object; Unity just has to think it does.
It actually wouldn’t surprise me if something here is misconfigured, some of these slime splat draw calls are rendering the entire slime ocean! (Which consists of 20,064 triangles all on its own. It’s not visible in this scene, but it’s still lurking under the world.)
Many games put a cap on the number of decals visible at any given time to avoid players creating situations like this. However an alternate strategy would be to just render them like a deferred pipeline would.
All you need is a depth texture, which Slime Rancher is already creating anyway. (Here’s a tutorial on how to do this in Unity.)
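The key ingredient for that deferred-style approach is reconstructing each pixel’s position from the depth texture so the decal can be projected onto it. Here’s a simplified sketch assuming a symmetric perspective camera; real implementations usually go through the inverse view-projection matrix instead:

```python
import math

def view_pos_from_depth(ndc_x, ndc_y, linear_depth, fov_y_deg, aspect):
    """Reconstruct a view-space position from a depth sample.
    ndc_x/ndc_y are in -1..1, linear_depth is the distance along +Z."""
    tan_half = math.tan(math.radians(fov_y_deg) * 0.5)
    x = ndc_x * tan_half * aspect * linear_depth
    y = ndc_y * tan_half * linear_depth
    return (x, y, linear_depth)

# Center of the screen, 10 units out, with a 60 degree vertical FOV:
print(view_pos_from_depth(0.0, 0.0, 10.0, 60.0, 16 / 9))
```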
Various full-screen effects and overlays
Held objects and largos
When Beatrix attempts to vacuum up anything too large for the Vacpack it gets stuck on the nozzle until she fires it off.

This close and personal friend is actually rendered to a dedicated render target as the very first thing in the pipeline. (Before even the camera depth texture pass.)

heldRenderTexture
- 1024x1024 R8G8B8A8_UNORM
(Transparent pixels trimmed off for layout reasons.)
One thing worth pointing out is that this render texture is not multi-sampled. As such, anything rendered to it won’t be anti-aliased. (This isn’t a big deal, but it is noticeable.)
After rendering whatever is being held, the texture is slightly blurred in four passes, which hides a lot of the aliasing:

Blurry friend
This texture is then eventually applied during the Vacpack pass by drawing it to a full-screen quad before rendering the Vacpack. (With some added transparency so you can still see what’s going on.)
Damage effects
Speaking of holding on to largos, some of them are not quite as nice as our moustachioed friend…

Those teeth look sharp!
When Beatrix takes damage, a distortion effect is added in the middle of the Vacpack pass:
Notice anything odd about the Vacpack? It’s not fully rendered yet!
This is another good example of GrabPass being problematic: this effect wasn’t explicitly placed in the pipeline, and Unity automatically chose to put it in a bit of an odd place. As a result it doesn’t affect the glass bits of the Vacpack.
This effect happens so fast though that the bug isn’t very noticeable during normal play, but it looks a bit odd in a still capture:

Vacpack pass after Vacpack’s glass parts are rendered
As a small aside, you might’ve noticed the distortion isn’t affecting the closest part of the Vacpack at all. Turns out it’s not a full-screen quad but rather a plane placed very close to the camera:

Distortion effect w/ wireframe overlay
Red indicates pixels which failed the depth test
After the first chunk of the Vacpack, the distortion, and the rest of the Vacpack are rendered, a handful of particle effects and full-screen overlays complete the effect:

Finished damage effect
(If you’re wondering why the Vacpack’s screen didn’t show up until now, that’s because it’s actually rendered after those circular particles in the middle.)
Gadget mode blueprint overlay
When you go into gadget mode, there’s a blueprint effect that animates from the center of the screen and settles on the edges:

It is unsurprising to find that this effect is drawn as a full-screen overlay at the very end of the Vacpack pass, but what is surprising is that this effect is entirely procedural!
There’s no texture bound for that blueprint pattern, nothing but math 😎
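For a taste of what “nothing but math” can look like, here’s a fragment-style grid function; this is purely illustrative and certainly not the game’s actual formula:

```python
def blueprint_pixel(u, v, cells=16.0, line_width=0.06):
    """Procedural blueprint grid: bright lines wherever the fractional
    part of the scaled UV is near a cell edge, blueprint blue elsewhere."""
    fu, fv = (u * cells) % 1.0, (v * cells) % 1.0
    edge = min(fu, 1.0 - fu, fv, 1.0 - fv)  # distance to the nearest grid line
    on_line = edge < line_width
    return (0.8, 0.9, 1.0) if on_line else (0.05, 0.15, 0.4)

# Sample a tiny horizontal strip; lines appear at cell boundaries.
print([blueprint_pixel(x / 32.0, 0.52) for x in range(5)])
```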
Augmented Vacpack
When Beatrix is in the Nimble Valley or its associated area near Mochi’s Manor, her Vacpack is augmented with enhancements from Miles Tech.

The normal Vacpack is still under there, these enhancements are made up of three extra meshes rendered on top of it…so why am I talking about it?
Is it because of this neat holographic transition effect used when it’s equipped?

This effect is very cool (and appears to be entirely procedural✨), but it’s not the main reason I wanted to talk about it.
Remember back when we looked at the Vacpack pass and there was an unused depth texture being created?
Well it’s for the augmented Vacpack…sort-of.
When the augmented Vacpack is equipped, there’s an extra couple of passes added to the start of the Vacpack pass:

Shadowmap for the sun for the Vacpack - 4096x4096 D16_UNORM
Empty cascades removed

Screenspace ShadowMap for the Vacpack - R8G8B8A8_UNORM
(Unused background pixels shaded dark gray for legibility)
Look familiar? It’s the sun’s shadow map! Except this time only for the Vacpack pass.
If you’ll recall all the way back when we talked about real-time lights, the shadow map together with the depth texture are used to create the screen-space shadow map. I believe that this is what that depth texture was being created for even though this shadow map isn’t actually being created most of the time.
The reason this shadow map isn’t typically created is that only the Vacpack enhancements are marked as being able to cast a shadow. (That’s why they’re the only thing in the light-space shadow map.)
I assume this was done by accident, because the shadows are barely ever visible, and when they are they look wrong due to the rest of the Vacpack being missing.
You can see this in the first screenshot where the shadow visible on Beatrix’s glove ends abruptly in an area that should’ve been shadowed by the main body of the Vacpack.
This goof also results in odd shadows popping in and out of existence during the holographic transition effect. (In the second photo you can see this on the barrel; that shadow is supposed to be for the oval augmentation bit.)
Teleporter transition effect
Whenever you step through one of the many teleporters spread across the Far, Far Range your screen is briefly covered by a lightning effect.
I was expecting this to be overlayed during the HUD pass, or maybe during the Vacpack pass like the gadget mode blueprint effect, but it turns out it’s actually particles glued to poor Beatrix’s face!

Ouch!
These particles are rendered during the transparents pass and end up filling the whole screen.
Their mesh (which appears to be animated on the CPU) comes through in PIX as a chaotic stack of four quad pancakes:

Pass the syrup!
I’d show you the mesh overlayed on the scene, but it just looks like Beatrix taped a bunch of cardboard triangles to her face.
So to make up for it, here’s the lightning texture used for this particle effect along with the UVs used in that capture (in case you want to try and discern the individual lightning rings or something, I dunno):

Three of the quads are on the left one, the fourth is on the right.
The Slimeulation
When you enter the Slimeulation there’s a cool full-screen effect to simulate the world materializing around you.
During the effect, the world renders as normal but at the very end of the transparent objects pass a pair of giant spheres are rendered around the player to apply the effect. These spheres are drawn with depth testing disabled so they end up covering absolutely everything in the scene.
It’s not totally clear why there’s two spheres involved. Their output is basically identical in this capture. They both use the same mesh and shaders with almost all the same inputs. I assume it’s some aspect of the effect’s animation, but in the couple captures I took of this effect I could never quite discern the intent.
The effect operates by analyzing the depth texture to outline the edges of objects and overlay a square pattern over geometry, the latter of which comes from dpt_voxelGrid:

dpt_voxelGrid
- 512x512 BC4_UNORM
This texture is used all over the place in the Slimeulation, some noteworthy examples include fringes around glitch slimes and their trails, the glowing grids scattered on the ground, the debug spray, and the edges of exit portals.

dpt_voxelGrid is used for the green splash of debug spray as well as the glitch slimes

dpt_voxelGrid is used for the portal’s fringe and glitchy bits, as well as the floating neon grids
(It is not responsible for the patterns on the cliffs; that’s a different texture called mask_voxelGrid.)
The giant wireframe overlay part of the effect comes from a cube map named vrGrid, seen below.
I’ve also included the stars cube map used to overwrite the sky. This is the same texture used for the normal night sky–although I think this effect only samples the red channel, which does not contain any of the auroras.
The grid itself is encoded in the red channel, the green and blue channels seem to be used as a mask for sky and ground areas. Not 100% sure when that’s used.
As with dpt_voxelGrid, this cube map is used for other special effects around the Slimeulation, most notably on the tarr-like corruption that spawns as the simulation begins to fall apart:

Uh Beatrix, I think it’s time to go!
Finally, I wanted to mention a Slimeulation-related effect that many people have probably never even seen.
If you attempt to access the Slimeulation before ever talking to Viktor, the simulation will fail to start and you’ll be greeted by this red overlay effect:
Just like the materialization effect, this effect is applied at the very end of the transparents pass–although it’s a full-screen quad instead of a giant sphere.
This pattern looks very similar to dpt_voxelGrid, but it’s actually a different texture named fx_vrtermFlash:

fx_vrtermFlash
- 512x512 R8G8B8A8_UNORM
The red channel is the same as dpt_voxelGrid except inverted. The green and blue channels both contain the same water pattern, presumably used in the effect’s animation.
NPC interaction UI
The NPC interaction UI is different from other interfaces in the game in that it’s implemented as its own pass!
This is necessary to accomplish the blurred background it uses. As far as I’m aware, none of the other in-game interfaces use this blur effect.
This pass unsurprisingly comes very last after the HUD pass.
The blur is performed in a single pass. Based on the ringing effect on the HUD elements, I believe it’s probably a modified box blur with sampling done at a few offsets to make it blurrier.
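If that guess is right, the shader is cheap: a handful of widely-spaced taps averaged together in one pass. A sketch of the idea (the tap offsets are made up):

```python
def blur_pixel(sample, uv, radius=4.0):
    """Single-pass blur: average a few widely-spaced taps around uv.
    `sample(u, v)` reads the source texture; wide offsets trade
    smoothness for speed, which produces the ringing mentioned above."""
    offsets = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0)]
    total = [0.0, 0.0, 0.0]
    for ox, oy in offsets:
        color = sample(uv[0] + ox * radius, uv[1] + oy * radius)
        total = [t + ch for t, ch in zip(total, color)]
    return tuple(t / len(offsets) for t in total)

# Trivially constant "texture" just to show the call shape:
print(blur_pixel(lambda u, v: (0.2, 0.4, 0.6), (100.0, 100.0)))
```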
In addition to the blur effect, I wanted to briefly talk about the NPC animation seen on Mochi in the clip above.
I had originally assumed this animation would be implemented in two halves:
- The rectangle of vertices used to draw her portrait would scale up and down vertically to simulate breathing
- An artist at Monomi Park traced the portrait a few times to create different frames of animation to add a little extra movement
Turns out I was wrong on both counts! Both aspects of the animation happen in the pixel shader.
The first part is simple: it’s just scaling the texture coordinates instead of the whole mesh.
The second part is what’s more interesting and unexpected.
I can only guess since I don’t have shader source code, but the fancy-looking stk_paintmask texture shown below is bound when Mochi is rendered. My guess is it’s used to offset the texture coordinates slightly, or something along those lines.
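Putting both guesses together, the animation might look vaguely like this in shader terms; every name and constant here is speculative:

```python
import math

def mochi_uv(u, v, time, mask_sample, breath_amount=0.01, wobble_amount=0.004):
    """Animate a portrait entirely in the pixel shader:
    scale V over time to fake breathing, then nudge the UVs by a
    paint-mask value for a hand-drawn wobble."""
    breath = 1.0 + math.sin(time * 2.0) * breath_amount
    v = (v - 0.5) * breath + 0.5  # scale vertically about the center
    wobble = (mask_sample - 0.5) * wobble_amount
    return (u + wobble, v + wobble)

print(mochi_uv(0.3, 0.7, time=1.2, mask_sample=0.8))
```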

stk_paintmask
- 512x512 BC1_UNORM
(This texture actually shows up when rendering many other things as well, this is just where it caught my attention.)
Inside the ranch house
When you enter the ranch house you end up in a completely different environment from the rest of the game, so it’s no surprise that rendering it happens differently too.

Home sweet home
In this scene Beatrix and her Vacpack are rendered to their own render texture named BeatrixHouseRenderTexture.
This happens in a handful of passes with some similarities to the game’s normal render pipeline.
First a depth texture is created:

Camera DepthTexture
- D32_FLOAT_S8X24_UINT
Followed by not one but two sets of directional light shadow maps:

Shadowmap for light 1 - 4096x4096 D16_UNORM

Shadowmap for light 2 - 4096x4096 D16_UNORM

Screenspace ShadowMap for light 1 - 1920x1080 R8G8B8A8_UNORM
(Unused background pixels shaded dark gray for legibility)

Screenspace ShadowMap for light 2 - 1920x1080 R8G8B8A8_UNORM
(Unused background pixels shaded dark gray for legibility)
With lighting out of the way: Beatrix herself, her Slimepedia, and the ever-reliable Vacpack are rendered:

BeatrixHouseRenderTexture
- 1920x1080 R8G8B8A8_UNORM
- Not multisampled
One thing to note about this render texture is that it is not multisampled. As a result Beatrix will not be anti-aliased when she’s rendered in this scene.
Next a bare-bones version of the world is rendered. All that’s really rendered is the sky sphere along with the Vacpack and the HUD. SSAO and bloom also still happen, although SSAO in particular does a whole bunch of nothing since there’s no normals texture and the depth texture was cleared.
All of this isn’t of much use since it’s just going to be covered up, but it’s not hurting much either, and it was likely easier to just leave it in.

Welcome to the void
On top of this there are just two full-screen quads: one for the background and one for Beatrix, followed by the UI, which is rendered just like the HUD.
It’s worth noting that this render texture strategy is not used when Beatrix is rendered on the title screen. She’s rendered in the world along with all the slimes just as you’d expect.
Glowing slime gate carvings
Scattered around the Far, Far Range are gates you must unlock to gain access to new areas. When you open these slime gates the carvings on them glow in a burst of light right before the door opens:

For whatever reason I’m always curious how other developers accomplish “magical glows seeping through gaps in stuff”-type effects. (Probably because there’s lots of different ways to do it.)
Slime Rancher goes for the simple approach here and just renders a glow texture on a transparent quad floating just above the door mesh:
Simple yet effective!
If you look closely at the carvings towards the top of the door you can see why this approach isn’t always ideal, but the glow effect goes by so fast you wouldn’t notice the misalignment outside of screenshots like this.
Chroma Pack color customization
As you play through Slime Rancher you unlock a variety of Chroma Packs you can use to recolor your house, various technological things, and your Vacpack.

My personal color scheme using the Kanpeki Chroma Pack for everything


Another configuration using Ginger Snap on the house, Milkshake on tech, and Eventide on the Vacpack



The PC version of Slime Rancher has 29 Chroma Packs in total. Since it wouldn’t be practical to have different textures for each of the Chroma Packs for every customizable object, Slime Rancher uses color masks to describe which Chroma colors go where on relevant objects.
Here’s a few examples: mask_ranchtech is used for many (but not all) of the ranch buildings, mask_drone is used for our robotic bee friends, and mask_vac4 is used for the portion of the Vacpack visible in the HUD.
I’ve added an alpha grid behind these textures so you can better see the transparent parts. The alpha on mask_drone and mask_vac4 maps to the glowing bits on their corresponding objects. I’m not sure what mask_ranchtech is using it for, since the meshes I looked at didn’t use that area.
Since it’s a bit hard to interpret these masks on their own, I’ve dumped the mesh data from PIX and made a tool to convert them into a Blender-friendly format in order to make a few renders with the masks applied as the normal diffuse color:
One thing I want to emphasize here is that it’s the actual color that matters: the magenta slime emblem used on the roof of the house isn’t a 50:50 blend between the “red” color and the “blue” color; it’s its own distinct color.
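In other words, the mask color acts as a key rather than a blend weight; conceptually it’s a lookup from exact mask colors to the active Chroma Pack’s palette. The mapping below is invented for illustration, not pulled from the game:

```python
# Hypothetical mask-color -> palette mapping; the real game presumably
# matches exact mask colors to the active Chroma Pack's entries.
CHROMA_PALETTE = {
    (255, 0, 0):   (210, 180, 140),  # "red" slot -> wall color
    (0, 0, 255):   (90, 110, 200),   # "blue" slot -> trim color
    (255, 0, 255): (240, 120, 200),  # magenta slot: its OWN color, not red+blue
}

def recolor(mask_rgb, base_rgb):
    """Replace a keyed mask color with its palette entry; pixels that
    aren't keyed (e.g. transparent areas) keep the base texture color."""
    return CHROMA_PALETTE.get(mask_rgb, base_rgb)

print(recolor((255, 0, 255), (128, 128, 128)))  # magenta maps to pink
print(recolor((0, 0, 0), (128, 128, 128)))      # unkeyed: falls through
```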
There’s quite a few more masks than this. For example, the lattice girder above the plort market uses mask_ranchtech_lab and the railings flanking either side of it use mask_ranchtech_labTiles.
There are also some objects using this scheme that aren’t affected by Chroma Packs. One example from our screenshots is the teleporters, which use mask_telepad to allow the game to have a wide variety of color-coded teleporters. Another is mask_hands, used for Beatrix’s arm and gloves as they appear in the first-person view.
Wait what?
Yup, turns out Beatrix LeBeau herself is rendered this way as well! Not sure if there were originally plans to allow the player to customize her appearance or if the artist working on her just liked this workflow, but I thought this was pretty cool to stumble upon.
Here’s the two masks she uses in the title screen and inside the ranch house:
mask_beatrix_head is specifically used for her head and hair, so keep that in mind when trying to relate colors to each other in the renders below, since they don’t necessarily correspond with the body mask.
Additionally, the colors used on the Slimepedia use the Chroma Pack color map despite being part of her body mask.
Below you can find Beatrix rendered using the same method as above. I’ve also included a back shot for you Boundary Break fans, which reveals her jacket has a 7Zee Corporation logo on it! Keep in mind you normally never see this angle, so don’t mind the wonky appearance. Based on the color map we should expect this to be a pale pink similar to her tank top.
And just to save you from scrolling so much, here’s Beatrix again as she’s rendered in-game. You can see how certain colors correspond with each other, such as her bandages and inner gloves matching the floofy part of her jacket.
Some of the more detailed parts of Beatrix come from a separate texture and as such aren’t visible in the color masks above. Some examples include her belt buckle, the knuckles on her gloves, and her jacket’s zippers.

Night sky
There’s nothing crazy going on in the night sky, I just think it’s pretty and wanted to take a closer look at it.

The night sky is made up of a couple layers of cube maps:
- The stars and auroras
- The slime moon
- Distant clouds (this map is used to make the horizon more interesting–the actual clouds are rendered separately using Perlin noise)
The fact that the stars and the auroras are part of the same map initially confused me. I was wondering how the game handled situations where the moon and the auroras overlap, since physically the moon should be further than the auroras but closer than the stars. The auroras aren’t present in the red channel and the stars are the same in all three channels, so in theory you could separate the two, but that’d be a weird way to do it…
Turns out the stars and the moon just revolve around the planet at the same rate so they never overlap 😅
Both the stars and the auroras are modulated using the trippy texture seen below in order to make them twinkle and add some movement.
As I mentioned in the ramble, the auroras aren’t present in the red channel. This is probably done mainly to keep the red and pink highlights in this texture from showing up in them.
You can look around the cube maps using the widget below, and just for fun I included a quick and dirty imitation night sky for you to look around in as well.
Closing thoughts
Welcome to the end, brave adventurer!
Special thanks to my friends Stephanie and Chris for reviewing an early draft of this article and providing me with valuable feedback.
While writing this article I got into an endless loop of “scouting” Slime Rancher to look for things to talk about, and inevitably I kept finding more and more interesting things. As a result it took quite a bit longer to write than I had originally planned, so thank you so much for reading (or even just skimming) all the way through! 💜
I hope I’ve helped you gain a greater appreciation for all the effort that went into Slime Rancher, and maybe taught you a thing or two about rendering stuff.
Slime Rancher is absolutely overflowing with an endless variety of interesting rendering techniques, so I never would’ve been able to cover all of it. That being said, if there’s something you wish I had talked about, feel free to politely yell at me on Twitter (assuming Twitter still exists by the time you’re reading this.)
If your game’s rendering pipeline is currently in need of a friend, you’ll be happy to know I’m currently looking for work! So please don’t hesitate to get in touch.
