Intro to Unity Shaders

Okay, first up, this is only going to cover some basic vertex-fragment shader stuff (as opposed to the other types of shaders you can make in Unity, which, for reasons listed elsewhere, have various disadvantages). These are by far the most useful in my experience (although it does depend on what you're using them for, I guess), and they're also the most easily applied to other game engines and scenarios. Also, disclaimer: I'm not a programmer. I'm hardly an authority on the matter, and it's totally possible that some of what I've written here is technically inaccurate, but it should be good enough to get you going, and I don't think that's a problem.

I also like the idea of this growing organically, so as you have questions, or as some of those who know better than I do have better explanations, we can update this. Also, I should really be doing any of a hundred other things instead of procrastinating like this, so it's more of a 1-hour stream of consciousness for now, but I hope you find it useful anyway.

What is a shader?
A shader is a computer program that takes a bunch of input data (usually meshes, textures and numbers), manipulates them in some way, and then converts them into the pixels that you see on your screen. They're super fun to write, because you can do countless effects with them, but because they run millions of times per frame, they're also one of the easiest places to kill performance, so small optimization tweaks in shaders often give you more bang for your buck than elsewhere in your code (assuming you're GPU-bound).

What does a shader look like?
I figure the easiest way for you to learn how shaders work is to take an existing shader, play with it, break it, fix it, and see what you end up with. Here's a really basic shader:

Shader "Custom/MobileUnlit" 
{
	Properties 
	{
		_MainTex ("Base (RGB)", 2D) = "white" {}
	}
	
	CGINCLUDE
	#include "UnityCG.cginc"
	
	struct v2f {
		float4 pos:	SV_POSITION;
		half2 uv:	TEXCOORD0;
	};
	
	sampler2D _MainTex;
	
	ENDCG
	
	SubShader
	{
		Tags 
		{ 
			"RenderType"="Opaque" 
		}
		
		LOD 200
		
		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag

			v2f vert(appdata_full v)
			{
				v2f o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.uv = v.texcoord.xy;
				return o;
			}

			fixed4 frag(v2f i): COLOR
			{
				fixed4 tex = tex2D(_MainTex, i.uv);
				return tex;
			}
			ENDCG
		}
	}
	FallBack "Diffuse"
}


Let's break it into bits and pieces so you know what you're doing when you're editing it.

Shader "Custom/MobileUnlit" 
{

This is the name of the shader. Using the forward-slash, you can group your shaders so that they're easier to find. With the Custom/MobileUnlit name, this shader would be found in the Custom folder in your shader selection drop-down when you assign a shader to a material.

Properties 
	{
		_MainTex ("Base (RGB)", 2D) = "white" {}
	}

These properties are what appear in your Unity Inspector. This one would show a label called "Base (RGB)" (obviously you can name it whatever you want, but it's good to be descriptive of what you're expecting, especially if you're using special kinds of textures), expects a 2D texture, and saves a reference to it as _MainTex, which you can use later. You can read more about what properties are available to you in the Unity docs and here.
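
Just as an example, a few of the other property types look something like this (these particular names are made up; use whatever your shader actually needs):

Properties 
{
	_MainTex ("Base (RGB)", 2D) = "white" {}
	_Color ("Tint (RGB)", Color) = (1,1,1,1)			// shows a colour picker
	_Strength ("Effect strength", Range(0, 1)) = 0.5	// shows a 0-1 slider
	_Speed ("Scroll speed", Float) = 1.0				// shows a plain number field
}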

CGINCLUDE
	#include "UnityCG.cginc"
	
	struct v2f {
		float4 pos:	SV_POSITION;
		half2 uv:	TEXCOORD0;
	};
	
	sampler2D _MainTex;
	
	ENDCG

CGINCLUDE is a handy place to put things that you need to refer to later on in your shader. It's especially useful if you've got a shader with multiple passes or multiple subshaders, because they can all refer to stuff that you write in here. You can also use #include to include other shader files, which is great for having multiple shaders share common functionality.
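
For example, you could keep helper functions in a shared file and pull them into as many shaders as you like. (A made-up sketch; "Common.cginc" and Desaturate() aren't built-in Unity things, just examples of your own code.)

// Common.cginc -- a hypothetical shared file of helper functions
fixed3 Desaturate(fixed3 col)
{
	// rough luminance weights for RGB
	fixed grey = dot(col, fixed3(0.3, 0.59, 0.11));
	return fixed3(grey, grey, grey);
}

// ...and then, in any shader that wants it:
CGINCLUDE
#include "UnityCG.cginc"
#include "Common.cginc"	// every pass and subshader below can now call Desaturate()
ENDCG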

The struct here holds whatever data you want to pass on from your vertex shader to the fragment shader (hence its often being called "v2f", or some variation thereof). You can see all the different types of data you can pass here. (TODO: more about semantics and which are/aren't allowed in Unity?)

Lastly, it's here that I usually declare any variables that I have in my Properties (above). sampler2D means it's a 2D texture, and _MainTex is the name I used in my property.

SubShader
	{

In Unity, a subshader is a way for you to write several different shaders and have them all in one file. In theory, Unity then picks the subshader that best matches the system that's running the game. I've never used multiple ones.

Tags 
		{ 
			"RenderType"="Opaque" 
		}

Here, you can change what part of the render queue things get drawn in, and what "type" of shader this is. The type is useful for doing shader replacement, but isn't otherwise necessary as far as I know. I've found that the most common use is changing the "Queue" tag when dealing with layers of transparent objects. Read more about the queue here.
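
For example, a typical transparent shader's tags would look something like this (with a matching Blend mode set inside its Pass):

Tags 
{ 
	"RenderType"="Transparent" 
	"Queue"="Transparent"	// or "Transparent+1", "Transparent+2"... to control ordering between your transparent layers
}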

LOD 200

This is apparently used with multiple subshaders. If Unity thinks your system can't run the shader, it checks the next subshader. If there aren't any more, then it checks the Fallback (at the end of the file). Unfortunately, I don't believe there's any easy way of knowing exactly what Unity checks to see if you can run a particular shader. The number seems rather arbitrary... and so I don't really use this either. <_<

Pass
		{

You can have multiple passes in your shader. More passes are more expensive, but there are some effects you can't do otherwise. Each pass can interact with previous passes in various ways, whether you're using ZTest or Blending or whatever. (TODO!)
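
As a rough (untested) sketch, if you added a second Pass like this after the first one in the shader above, it would additively brighten whatever the first pass drew (not very useful by itself, but it shows how passes stack):

Pass // a second pass, drawn on top of the first one
{
	Blend One One	// additive: this pass's output gets added to what the previous pass drew

	CGPROGRAM
	#pragma vertex vertAdd
	#pragma fragment fragAdd

	v2f vertAdd(appdata_full v)
	{
		v2f o;
		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
		o.uv = v.texcoord.xy;
		return o;
	}

	fixed4 fragAdd(v2f i): COLOR
	{
		// just adds a flat 20% brightness everywhere; replace with whatever effect you actually want
		return fixed4(0.2, 0.2, 0.2, 1);
	}
	ENDCG
}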

CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag

These tell Unity what the names of your vertex and fragment programs are.

v2f vert(appdata_full v)
			{
				v2f o;
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
				o.uv = v.texcoord.xy;
				return o;
			}

This is the vertex shader. It takes in mesh data (provided by Unity in "appdata_full", although there are other types listed here), does stuff to it, and spits out the struct we defined earlier. The mesh data is provided in object space, which is basically the space you modelled it in, in your 3D package. You then use matrix multiplication to convert your vertices between the different spaces, usually ending up in something called clip space, which is effectively a 2D shape that appears on your screen. You can read more about the conversion between these spaces here. (It's maths-heavy, but while it's nice to understand what's going on there, you're pretty much doing the same UNITY_MATRIX_MVP multiplication for 99.9% of shaders. Knowing the maths opens up the possibility of doing other effects, but you can get really far without it.)

So here, we're converting the vertex positions from a 3D space into a 2D canvas that you show on screen, and we're also grabbing each vertex's UV coordinates, which we'll use to read textures later. (TODO: PICTURES.)
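
As an aside, the vertex shader is also where you'd do any mesh-warping effects. For example (a sketch only; _WobbleAmount is a made-up Float property that you'd add to Properties and declare in CGINCLUDE), you could push each vertex along its normal before projecting it, which makes the whole mesh pulse:

v2f vert(appdata_full v)
{
	v2f o;
	// _Time.y is Unity's built-in time in seconds; _WobbleAmount is a hypothetical float property
	v.vertex.xyz += v.normal * sin(_Time.y) * _WobbleAmount;
	o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
	o.uv = v.texcoord.xy;
	return o;
}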

fixed4 frag(v2f i): COLOR
			{
				fixed4 tex = tex2D(_MainTex, i.uv);
				return tex;
			}

Where the vertex program basically creates a 2D canvas for you out of 3D mesh data, the fragment program "colours in" this canvas with whatever you tell it to. In our case, we're looking up the _MainTex texture using the mesh's UV coordinates, and drawing whatever colour we get.
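
You're not limited to returning the texture colour as-is, of course. For example, this version of the fragment shader draws everything with inverted colours instead:

fixed4 frag(v2f i): COLOR
{
	fixed4 tex = tex2D(_MainTex, i.uv);
	// invert the colours, just to show that you can do whatever you like before returning
	return fixed4(1.0 - tex.rgb, tex.a);
}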

ENDCG
		}
	}
	FallBack "Diffuse"
}

The Fallback here uses the Diffuse shader for any behind-the-scenes stuff that we didn't write, or uses the Diffuse shader if for some reason Unity thinks the system can't handle whatever shader we wrote. Apparently. I've never seen this happen myself. <_<

Man. So much more, but not sure if I've even covered this stuff well enough. XD

edit (2014-5-7): Fix broken tags.

Comments

  • Oh mannnn, thanks @Elyaradine!!!! It's a good start, short of going line by line :) I'm going to try and use this. Really like all the "here" links, gonna be my reading for a while. Thannnnks!! :D and looking forward to more! :D
  • @Elyaradine - Wow! Thank you so much for stating this and taking the time to give such detailed explanations. This will be extremely valuable to so many people here.
  •

    CGINCLUDE
    	#include "UnityCG.cginc"
    	
    	struct v2f {
    		float4 pos:	SV_POSITION;
    		half2 uv:	TEXCOORD0;
    	};
    	
    	sampler2D _MainTex;
    	
    	ENDCG


    The struct here holds whatever data you want to pass on from your vertex shader to the fragment shader (hence its often being called "v2f", or some variation thereof).
    Be careful about what datatypes you use, especially when creating shaders for mobile! What I've posted below is taken directly from the Unity documentation and is better than I could sum up :)

    Precision of computations
    When writing shaders in Cg/HLSL, there are three basic number types: float, half and fixed (as well as vector & matrix variants of them, e.g. half3 and float4x4):

    float: high precision floating point. Generally 32 bits, just like float type in regular programming languages.
    half: medium precision floating point. Generally 16 bits, with a range of -60000 to +60000 and 3.3 decimal digits of precision.
    fixed: low precision fixed point. Generally 11 bits, with a range of -2.0 to +2.0 and 1/256th precision.

    Use lowest precision that is possible; this is especially important on mobile platforms like iOS and Android. Good rules of thumb are:

    -For colors and unit length vectors, use fixed.
    -For others, use half if range and precision is fine; otherwise use float.
    -On mobile platforms, the key is to ensure as much as possible stays in low precision in the fragment shader
    -On most mobile GPUs, applying swizzles to low precision (fixed/lowp) types is costly; converting between fixed/lowp and higher precision types is quite costly as well.

    (Luke Note 1: Swizzling means to access the individual components of a vector/matrix variant, so doing something like 'half2 a = b.yz + c.wx;')
    (Luke Note 2: 'fixed' precision isn't supported on Desktop GPUs, so if you run a shader that uses fixed on your PC, it will replace it with 'half'. This can leave you with things that look fine while you're developing, but then end up not looking right on Android/iOS when you deploy there! So if you use fixed anywhere and your specular seems off, or something looks too dark or too yellow when you run on mobile, make sure you check that you aren't doing fixed maths that produces something outside of the -2 to 2 range!!!)
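
    A contrived sketch of that last gotcha (just to illustrate, not taken from a real shader):

    fixed4 frag(v2f i): COLOR
    {
    	fixed4 tex = tex2D(_MainTex, i.uv);

    	// Risky on mobile: a brightness of 4.0 is already outside fixed's -2..2 range,
    	// so doing this maths in fixed can clamp and look wrong on the device.
    	// fixed3 lit = tex.rgb * 4.0;

    	// Safer: do the wide-range maths in half, and only saturate back into 0..1 at the end.
    	half3 lit = tex.rgb * 4.0;
    	return fixed4(saturate(lit), tex.a);
    }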

    LOD 200

    This is apparently used with multiple subshaders. If Unity thinks your system can't run the shader, it checks the next subshader. If there aren't any more, then it checks the Fallback (at the end of the file). Unfortunately, I don't believe there's any easy way of knowing exactly what Unity checks to see if you can run a particular shader. The number seems rather arbitrary... and so I don't really use this either. <_<
    By default, Unity uses an infinite shader LOD, so if you don't tweak that yourself, the LOD value that you set is really ignored, and the highest one that you have in a shader file is used. You can however set the LOD limit of your shaders either per-shader (shaderObj.maximumLOD = X;) or globally (Shader.globalMaximumLOD = Y;) to whatever you want as a way of manually controlling your shader quality at runtime.

    So say you detect that the user is on an iPad 1, which has horrible fillrate, you would want it to choose much cheaper shaders (i.e. no Phong Specular Cubemaps :P) than if the user was running on an iPad 4. You may want to try to keep your SubShaders in LOD order in a .shader file too (so LOD 600 declared before LOD 500, etc.), as I had some issues with LODs and Fallbacks in Unity 3.5.7 during Bladeslinger development that were known Unity bugs... perhaps they were fixed in Unity 4.X, but I'm not sure.

    ENDCG
    		}
    	}
    	FallBack "Diffuse"
    }

    The Fallback here uses the Diffuse shader for any behind-the-scenes stuff that we didn't write, or uses the Diffuse shader if for some reason Unity thinks the system can't handle whatever shader we wrote. Apparently. I've never seen this happen myself. <_<
    Unity will also choose the Fallback shader when the global Shader LOD level is less than the lowest LOD value you have set for all SubShaders in this file. So if you have set "Shader.globalMaximumLOD = 100;" in your code somewhere and you render with the above shader, it will ignore the SubShader here and use Diffuse (the Fallback) instead. Also note that you can use *any* shader name here, either built-in Unity shader names like Diffuse, or other custom ones of your own, such as "Custom/MobileUnlitCheap", etc.
  • Some basic 3D knowledge helps with understanding some of what's possible with manipulating data in your shaders. A 3D mesh is essentially just an array of blocks of data. Each block of data, typically a "vertex", holds a 3D position (although this is often stored as a 4D vector because maths is fun and quaternions do stuff that I'm too stupid to explain), a 2D position, and a normal.

    [image: shaders-meshdata.png]

    (More advanced, 3D-artist info: To keep these data blocks simple (so each vertex only has one 3D position, one 2D position, one colour, one normal, etc.), when you export the data from your 3D package, or import them into Unity (I'm not sure which it is that does this), each vertex is actually duplicated. If a vertex appears once in 3D space but twice on your UVs, that's stored as two vertices. If a vertex has a hard edge, then that means it has multiple normals (TODO: explain this?), in which case you also need multiple vertices. As a result, the number of vertices you see in your 3D package is usually significantly lower than in your game engine. This also means that, if for some reason these duplicate verts aren't created (as far as I know, most packages don't expect to have to export colours that change over a single vert), you can work around it by breaking these edges and duplicating your vertices yourself (or writing a script to do it), and have exactly the same effect.)

    Your vertex can also store a tangent, normal, bitangent and a colour. While they have these names, and those are the things you typically would use them for, you don't have to be using them for that particular purpose.

    For example, while a vertex "colour" can be passed onto your fragment shader to change the colour of the pixels you see on the screen per vertex (which is often used for tinting terrain, or for tinting different meshes that use the same shader and textures to give them some variation), you don't have to use them in that way. But a colour is just a 3D (RGB) or 4D (RGBA) piece of data. You can (and many games do) use vertex colour to mask how terrain blends up to four textures together.

    As a result, you can end up with something like this (the same shader as above, but with blending of two textures, using the red channel in the vertex colour instead of just showing one).
    Shader "Custom/BlendedUnlit" 
    {
    	Properties 
    	{
    		_MainTex ("Base (RGB)", 2D) = "white" {}
    		_MainTex2 ("Second layer (RGB)", 2D) = "white" {}
    	}
    	
    	CGINCLUDE
    	#include "UnityCG.cginc"
    	
    	struct v2f {
    		float4 pos:	SV_POSITION;
    		half2 uv:	TEXCOORD0;
    		fixed4 col: COLOR;
    	};
    	
    	sampler2D _MainTex;
    	sampler2D _MainTex2;
    	
    	ENDCG
    	
    	SubShader
    	{
    		Tags 
    		{ 
    			"RenderType"="Opaque" 
    		}
    		
    		LOD 200
    		
    		Pass
    		{
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    
    			v2f vert(appdata_full v)
    			{
    				v2f o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				o.col = v.color;
    				return o;
    			}
    
    			fixed4 frag(v2f i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				fixed4 tex2 = tex2D(_MainTex2, i.uv);
    				
    				return lerp(tex, tex2, i.col.r);
    			}
    			ENDCG
    		}
    	}
    	FallBack "Diffuse"
    }


    For practice, you might try blending four textures together, using some of the other colour channels to control it.
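
    Something like this fragment shader would do it (a sketch only; _MainTex3 and _MainTex4 are extra properties/samplers you'd declare the same way as _MainTex2, and the green and blue vertex colour channels drive the extra layers):

    fixed4 frag(v2f i): COLOR
    {
    	fixed4 tex  = tex2D(_MainTex,  i.uv);
    	fixed4 tex2 = tex2D(_MainTex2, i.uv);
    	fixed4 tex3 = tex2D(_MainTex3, i.uv);	// hypothetical extra textures, declared like _MainTex2
    	fixed4 tex4 = tex2D(_MainTex4, i.uv);

    	// start with the base layer, then blend each extra layer in with one colour channel each
    	fixed4 col = tex;
    	col = lerp(col, tex2, i.col.r);
    	col = lerp(col, tex3, i.col.g);
    	col = lerp(col, tex4, i.col.b);
    	return col;
    }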
  • @Elyaradine nice thread you've got going here :) I think it's important for 3D game developers to know what's going on behind the scenes of their game and how to tinker with it.

    If I may add some technical details:

    Back-face culling (removing triangles that are facing away from the viewer) is not done using the vertex normals, but rather the 'winding order', which is the order in which each point of the triangle is specified. I think front faces are counter-clockwise by default in OpenGL, but this can be changed. While this seems like a bit of a technicality, it is important to know when you're debugging and wondering why half your model looks inside-out.

    Triangles themselves don't have normals, only their vertices do. The normals on the surface of a triangle (accessed in the fragment shader) are linearly interpolated from the three vertices making up the triangle. With flat shading, all triangles appear to have one normal interpolated across its surface. This means that each of its three vertices must have the same normal. This also means that a vertex that is part of more than one triangle has to have multiple normals, one for each triangle it is connected to. Since vertices can't actually have more than one of anything (one 3D coordinate, one 2D texture coordinate, one normal), the only way to achieve this is to duplicate the whole vertex with a different normal.

    As for why the vertex position is stored as a 4D vector, it's not actually because of quaternions, but rather because UNITY_MATRIX_MVP is a 4x4 matrix, and the only way the maths is going to work is for the vertex position to have a fourth 'homogeneous' w-coordinate. All you need to know is that the w-coordinate is equal to 1 for points, and 0 for normals.
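
    In shader terms (a small sketch), that's why you'll sometimes see the w-component written out explicitly in a vertex shader:

    // w = 1: a point, so the matrix's translation applies to it
    float4 worldPos = mul(_Object2World, float4(v.vertex.xyz, 1.0));
    // w = 0: a direction, so the translation is ignored (only rotation/scale apply)
    float3 worldNormal = mul(_Object2World, float4(v.normal, 0.0)).xyz;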

    On a side note, quaternions are amazing. I've done my own research into them and found them to be much more stable (less subject to rounding errors) than standard Euler rotations. Couple that to the lack of gimbal-lock (something horrible that happens when using Euler angles) and the ability to interpolate between them without going mad, makes them the only reasonable choice when rotating objects in 3D space.

    Just my three cents :) Hope someone finds it useful/interesting
  • Thaaaaaaanks everyone for contributing, most of all @Elyaradine!! :D


    1. If a property is declared at the top and not included in the CGINCLUDE, for example your sampler2D isn't there, does that mean you can't access that property in the shader program? I ask because the shader I'm looking at doesn't have the property in the CGINCLUDE…

    2. When you say pass data from vertex shader to the fragment shader, are we writing the vertex shader here at all, or is that just saying that this shader is a fragment shader and we aren't writing a vertex shader here?

    3. Is LOD = Level Of Detail?

    4. So when vert returns the o, is that o a pixel in clip space or a vertex in 3D space? I'm guessing the former because you're talking about spitting out into 2D canvas, but I partially don't understand because there's the possibility of having more than one vert in a 2D space. Does this spitting out do it per pixel in clip space or per… geometry in 3D space?

    5. frag (v2f i): COLOR - so this frag program runs once per pixel on screen? and i is the vehicle for .pos and .uv returned from the vert program, right?


    OK I think I'm getting it a bit more now. Thank you SOOOO much for the invaluable stuff! Looking forward to more!!! :D

    (trying to apply this knowledge to http://wiki.unity3d.com/index.php?title=Silhouette-Outlined_Diffuse. Gah HARD.)
  • @jellymann: Thanks! I didn't know about the winding and the reason for the 4D position vector.

    @Tuism:

    1. Yarp. If you don't declare the variable there, you can't use it. If the shader you're looking at behaves differently, it's possible that it's one of the other types of shaders that I kind of don't recommend you to work with, because of how non-standard and Unity-specific it is. (Yeah, just took a look, and that outline shader isn't a vertex-fragment shader, hence it's breaking all of the rules. It's one of the Shaderlab things that's super Unity-specific, and super limiting. [edit] Actually, took a proper look now, and it's a hybrid.)

    2. Okay, there's actually a lot more stuff going on behind the scenes. I just tend to ignore it because there's nothing you can do about it. What actually happens, as far as I know, is something called rasterization, where you're taking the vertices, creating triangles from them, and then turning those triangles into pieces of "canvas" that you get to draw on in your pixel shader. When we create the v2f struct, we're including that data along with that canvas, so we can find out a bit more about the mesh data and use that to manipulate our fragment shader calculations.

    3. Yep.

    4. Well, no. The "o" is just a thing that holds data. One of those pieces of data is a vertex position, which, at the end of the vertex shader should be in clip space. This happens once per vertex.

    5. Yep! The vertex shader runs once per vertex, and the fragment shader runs once per pixel. This is useful for understanding how you can best optimize something. Sometimes, you want to have fewer verts. Sometimes you want to have fewer pixels. It depends on whether it's the vertex or the fragment shader that's forming your bottleneck.

    I can look at making a vertex-fragment version of that shader for you, but with the Andrew Walsh thing going on this weekend, I've got way less time to meet my deadlines next week than I thought. :P Just remind me.
  • Coool! Thanks! Still trying to wrap my head around it all. The theory of shaders sounds simple enough, but dammmn I don't get why it looks so cryptic XD

    The shader I'm looking for would do this:

    The original vertex colour would be disregarded. If there was a mesh of a specific layer in Unity ("surface") over this object, it would add 50% black (or whatever can be tweaked later) to that colour and return it. The eventual result would be a silhouette of the object that is beneath "surface" with its colour based on the surface's colour value rather than the original object's colour value.

    I'll try and research and write something, you should focus on your Andrew Walsh thing that's a biggie :)

    [edit: crap, some digging seemed to tell me that layers don't exist in shaders - pretty much as I feared XD the only way I can think to achieve what I describe above is to apply the shader to the surface and make sure there's nothing else under it that doesn't need to be there? :/ would that be slow? "Surface" layers are the entire game floor :/ ]

    Thaaaaanks :D
  • This is how I would do silhouetting (requires Unity Pro, though):

    Render the object by itself onto a separate texture using a basic shader that only draws with one colour (so as to make the silhouette), say draw the object in white and leave the background black. Then, pass the resulting texture to the shader drawing the object(s) you wish to have the silhouette visible through (for instance the floor object) during the main render pass (the one to the screen). So then in your floor's fragment shader, for each fragment/pixel you would sample the texture you just made and subtract 50% from the colour if the sample is white, and leave as-is if the sample is black.
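
    On the shader side, the floor's pass might look roughly like this (a sketch only; _SilhouetteTex is a made-up global texture that you'd render from a second camera and pass in from a script, and _MainTex is the floor's own texture):

    sampler2D _SilhouetteTex;	// hypothetical: white where the hidden object is, black elsewhere

    struct v2f {
    	float4 pos:		SV_POSITION;
    	half2 uv:		TEXCOORD0;
    	float4 screenPos:	TEXCOORD1;
    };

    v2f vert(appdata_full v)
    {
    	v2f o;
    	o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    	o.uv = v.texcoord.xy;
    	o.screenPos = ComputeScreenPos(o.pos);	// helper from UnityCG.cginc
    	return o;
    }

    fixed4 frag(v2f i): COLOR
    {
    	fixed4 tex = tex2D(_MainTex, i.uv);
    	// look the silhouette texture up at this pixel's position on screen
    	fixed mask = tex2D(_SilhouetteTex, i.screenPos.xy / i.screenPos.w).r;
    	return tex * (1.0 - 0.5 * mask);	// darken the floor by 50% wherever the silhouette is white
    }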
  • Thanks man, that makes sense in pseudo-code... But I have no idea how that works in actual Shader code... I'll try to figure it out a bit :)
  • I saw your shark had lighting. I had a go at adding lighting to the shark shader, but I ran into a bit of trouble getting it to work the same way as the default diffuse shader. For some reason it doesn't update as regularly as the default diffuse one seems to. :/ Anyway, so I ended up just putting in an arbitrary light direction vector that you can drag around to fake/match your scene's light for now.

    Otherwise, it's pretty much what @Chippit described in his post in the Unity thread.

    Shader "Custom/SharkShader" {
    	Properties 
    	{
    		_MainTex ("Base (RGB)", 2D) = "white" {}
    		_Color ("Main Colour (RGB)", Color) = (1,1,1,1)
    		_Color2 ("Silhouette Colour (RGB)", Color) = (0,0,0,1)
    		_FakeLightCol ("Fake light colour (RGB)", Color) = (1,1,1,1)
    		_LightDirection ("Fake light direction", Vector) = (0,1,0,0)
    		_Ambient ("Ambient light colour (RGB)", Color) = (0,0,0,1)
    	}
    	
    	CGINCLUDE
    	#include "UnityCG.cginc"
    	
    	struct v2f_simple {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    	};
    	
    	struct v2f {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    		half4 lightStr:		TEXCOORD1;
    	};
    	
    	sampler2D _MainTex;
    	fixed4 _Color;
    	fixed4 _Color2;
    	fixed4 _Ambient;
    	half4 _LightDirection;
    	fixed4 _FakeLightCol;
    	
    	ENDCG
    	
    	SubShader
    	{
    		Tags 
    		{ 
    			"RenderType"="Opaque" 
    			"Queue"="Geometry-5"
    		}
    		
    		LOD 200
    		
    		Pass // Silhouette
    		{
    			ZWrite Off	// don't write this pass's depth
    			ZTest Greater	// only draw where the shark is *behind* something already drawn (i.e. hidden by the ground)
    			
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f_simple vert(appdata_full v)
    			{
    				v2f_simple o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				return o;
    			}
    
    			fixed4 frag(v2f_simple i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				return tex * _Color2;
    			}
    			ENDCG
    		}
    		
    		Pass // Above ground
    		{
    		
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f vert(appdata_full v)
    			{
    				v2f o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				
    				half3 worldNorm = normalize(mul((half3x3)_Object2World, v.normal));	// normal in world space
    				o.lightStr = saturate((dot(worldNorm, _LightDirection) * _FakeLightCol) + _Ambient);	// simple Lambert-style lighting from the fake light
    				
    				return o;
    			}
    
    			fixed4 frag(v2f i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				return tex * _Color * i.lightStr;
    			}
    			ENDCG
    		}
    	}
    	FallBack "Diffuse"
    }


    Shader "Custom/GroundShader" {
    	Properties 
    	{
    		_MainTex ("Base (RGB)", 2D) = "white" {}
    		_Color ("Main Colour (RGB)", Color) = (1,1,1,1)
    	}
    	
    	CGINCLUDE
    	#include "UnityCG.cginc"
    	
    	struct v2f {
    		float4 pos:	SV_POSITION;
    		half2 uv:	TEXCOORD0;
    	};
    	
    	sampler2D _MainTex;
    	fixed4 _Color;
    	
    	ENDCG
    	
    	SubShader
    	{
    		Tags 
    		{ 
    			"RenderType"="Opaque" 
    			"Queue"="Geometry-10"
    		}
    		
    		LOD 200
    		
    		Pass
    		{
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f vert(appdata_full v)
    			{
    				v2f o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				return o;
    			}
    
    			fixed4 frag(v2f i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				return tex * _Color;
    			}
    			ENDCG
    		}
    	}
    	FallBack "Diffuse"
    }
  • @Elyaradine I didn't think of using a Z-buffer hack like that, although as you mentioned there are problems (requires 'fake light' and you have to specify the Silhouette colour)

    Here's something I made this afternoon to explore one possible option. It allows any colour/texture for the ground no problem and also allows any Material for the silhouetted object. The only problem is it's Pro-only, so I don't know how useful it is for most people. :/

    Gif!
  • OMG I LOVE YOU GUYS YOU GUYS ARE AWESOMESAUCE OMG :D Gonna try 'em out now...
  • @jellymann: Yeah. I don't think the fake light and silhouette colour are problems with the concept of using the Z-buffer, so much as my banging my head against how Unity treats lights behind the scenes.

    With the fake light, you should be able to work with actual Unity lights by using their "AutoLight.cginc" includes and the macros that that provides. Thing is, there's no documentation for this, and sometimes the shader doesn't actually update the light direction when a light moves until an object that uses one of the default Unity shaders enters the light's radius. I can guess why (Unity optimization), but I don't know what to do about it. (Although as far as I can tell, this only happens in the scene view and not in an actual game...? I haven't tested that recently. So it might just be an annoyance, rather than being game-breaking.)

    And with the silhouette, you could just change the blend mode to use Multiply or some similar blend mode, and I imagine it'd work quite well with whatever you have with your ground shader. So I imagine you could do pretty similar stuff with blend modes to what you'd do with a rendertexture.
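
    As a rough, untested sketch, the silhouette pass from the shark shader above could become something like this, so the silhouette just darkens whatever the ground already drew instead of replacing it:

    Pass // Silhouette
    {
    	ZWrite Off
    	ZTest Greater
    	Blend DstColor Zero	// "multiply" blending: this pass's colour * whatever the ground already drew

    	CGPROGRAM
    	#pragma vertex vert
    	#pragma fragment frag

    	v2f_simple vert(appdata_full v)
    	{
    		v2f_simple o;
    		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    		o.uv = v.texcoord.xy;
    		return o;
    	}

    	fixed4 frag(v2f_simple i): COLOR
    	{
    		return _Color2;	// e.g. a mid-grey _Color2 darkens the ground to 50% where the shark is hidden
    	}
    	ENDCG
    }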
  • @Elyaradine I fixed the lighting annoyance. Turns out you can add a Lambert surface shader with other passes, it just can't be in a Pass block like the other passes.

    Improved SharkShader
    Shader "Custom/SharkShader" {
    	Properties 
    	{
    		_MainTex ("Base (RGB)", 2D) = "white" {}
    		_Color ("Main Colour (RGB)", Color) = (1,1,1,1)
    		_Color2 ("Silhouette Colour (RGB)", Color) = (0,0,0,1)
    	}
    	
    	CGINCLUDE
    	#include "UnityCG.cginc"
    	
    	struct v2f_simple {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    	};
    	
    	struct v2f {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    		half4 lightStr:		TEXCOORD1;
    	};
    	
    	sampler2D _MainTex;
    	fixed4 _Color;
    	fixed4 _Color2;
    	
    	ENDCG
    	
    	SubShader
    	{
    		Tags 
    		{ 
    			"RenderType"="Opaque" 
    			"Queue"="Geometry-5"
    		}
    		
    		LOD 200
    		
    		Pass // Silhouette
    		{
    			ZWrite Off
    			ZTest Greater
    			Blend SrcAlpha OneMinusSrcAlpha
    			
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f_simple vert(appdata_full v)
    			{
    				v2f_simple o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				return o;
    			}
    
    			fixed4 frag(v2f_simple i): COLOR
    			{
    				return _Color2;
    			}
    			ENDCG
    		}
    		
    		CGPROGRAM
    		#pragma surface surf Lambert
    
    		struct Input {
    			float2 uv_MainTex;
    		};
    
    		void surf (Input IN, inout SurfaceOutput o) {
    			o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb * _Color;
    		}
    		ENDCG
    	}
    	FallBack "Diffuse"
    }


    EDIT: fixed bug in code.
    EDIT2: added blending.
  • Great! I've never mixed them like that before!

    For reference, this is what I was trying to do. As I said, it "works", but... not quite how I expect. :P I'd like to thrash this out properly, but I've got art homework for a class tonight, so I'm short on time right now. It does kind of show how to do lighting without having to use surface shaders, but as I said, it's not quite looking like a surface shader would yet. I find that a lot of my time is wasted on trying to figure out how Unity expects you to do something (and what it's doing "for" you), as opposed to fighting actual shader code itself. :(

    Shader "Custom/SharkShader" {
    	Properties 
    	{
    		_MainTex ("Base (RGB)", 2D) = "white" {}
    		_Color ("Main Colour (RGB)", Color) = (1,1,1,1)
    		_Color2 ("Silhouette Colour (RGB)", Color) = (0,0,0,1)
    	}
    	
    	CGINCLUDE
    	#include "UnityCG.cginc"
    	#include "Lighting.cginc"
    	#include "AutoLight.cginc"
    	
    	struct v2f_simple {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    	};
    	
    	struct v2f {
    		float4 pos:			SV_POSITION;
    		half2 uv:			TEXCOORD0;
    		fixed3 light:		TEXCOORD1;
    		LIGHTING_COORDS(2,3)
    	};
    	
    	sampler2D _MainTex;
    	fixed4 _Color;
    	fixed4 _Color2;
    	
    	ENDCG
    	
    	SubShader
    	{
    		Tags 
    		{ 
    			"RenderType"="Opaque" 
    			"Queue"="Geometry-5"
    		}
    		
    		LOD 200
    		
    		Pass // Silhouette
    		{
    			ZWrite Off
    			ZTest Greater
    			
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f_simple vert(appdata_full v)
    			{
    				v2f_simple o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				return o;
    			}
    
    			fixed4 frag(v2f_simple i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				return tex * _Color2;
    			}
    			ENDCG
    		}
    		
    		Pass // Above ground
    		{
    			Tags {
    				"LightMode" = "Vertex"
    			}
    			
    			CGPROGRAM
    			#pragma vertex vert
    			#pragma fragment frag
    			#pragma multi_compile_fwdbase
    
    			v2f vert(appdata_full v)
    			{
    				v2f o;
    				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    				o.uv = v.texcoord.xy;
    				o.light = ShadeVertexLights(v.vertex, v.normal);
    				
    				TRANSFER_VERTEX_TO_FRAGMENT(o);
    				
    				return o;
    			}
    
    			fixed4 frag(v2f i): COLOR
    			{
    				fixed4 tex = tex2D(_MainTex, i.uv);
    				fixed atten = LIGHT_ATTENUATION(i);
    				return tex * _Color * atten + fixed4(i.light,1);
    			}
    			ENDCG
    		}
    	}
    	FallBack "Diffuse"
    }
  • This seems to work. First of all, ShadeVertexLights handles diffuse, ambient, and attenuation, so you don't have to worry about any of that. Second, for some reason the light has to be multiplied by 2 in the fragment shader. The only downside at the moment is that it doesn't do shadows (yet).

    Pass // Above ground
    {
    	Tags {
    		"LightMode" = "Vertex"
    	}
    	
    	CGPROGRAM
    	#pragma vertex vert
    	#pragma fragment frag
    	#pragma multi_compile_fwdbase
    
    	v2f vert(appdata_full v)
    	{
    		v2f o;
    		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    		o.uv = v.texcoord.xy;
    
    		o.light = ShadeVertexLights(v.vertex, v.normal);
    
    		TRANSFER_VERTEX_TO_FRAGMENT(o);
    		
    		return o;
    	}
    
    	fixed4 frag(v2f i): COLOR
    	{
    		fixed4 tex = tex2D(_MainTex, i.uv);
    		
    
    		return tex * _Color * fixed4(i.light*2,1);
    	}
    	ENDCG
    }


    Oh and there is a problem with just blending instead of using a render texture: The Shark/Monkey blends with itself. I've tried figuring a way around the problem by rendering things in different orders with different blending modes but it's not something you can just do. Here's an image showing exactly what I mean:

    image
  • Looks like ye olde "transparent thing doesn't cull back faces when trying to be transparent" problem?

    I'm super grateful for all you guys' input already, it's pretty much good enough for prototype :D Though that multiply blend looks awesome, what I have now is plenty good enough! Yay! :D
  • @Tuism No, those aren't back faces, those are other front faces of the object that are being occluded by the object itself, due to the mesh being of a concave nature.

    I'm on the brink of forming a possible solution that will alleviate the issue, the only problem is it breaks the skybox :P Fix one thing, break another!

    So this (probably outrageous) method involves drawing the silhouette at 50% gray before everything on a black background, then drawing the ground, multiplying it by OneMinusDstColor, then drawing everything else. The problem with this is that the background has to be cleared to black first, so you can't use a skybox. However, you can draw the skybox after the terrain (but behind the terrain) using a custom skybox shader. The issue here is that there's some bug in my custom skybox shader that I still need to figure out. :/

    Happy to help :)

    EDIT: I got it working!

    Great success, after much hackery, the skybox is fixed! Go ahead and sniff the code.

    Gif!

    (OMG such hacks!)

    EDIT2: Just realised this is probably going to break horribly if/when you use more than one "floor" object :/
  • Just to chime in on the transparency issue.

    The 'standard' way of addressing that problem is to draw transparent objects like that with a two pass shader that does the following:

    The first pass renders with a colour mask such that it draws nothing to the frame buffer, but still writes depth to the zbuffer as normal.
    The second pass uses a ZEquals depth test and the usual alpha blend mode settings to draw only the 'correct' pixels. This way you won't get those tranparency artifacts.
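
    In ShaderLab that would look roughly like this (untested sketch; the SubShader containing these passes would also need its Queue/RenderType tags set to Transparent):

    Pass // 1st pass: write the object's depth, but no colour at all
    {
    	ColorMask 0	// draw nothing to the screen
    	ZWrite On	// but do fill the depth buffer with the front-most surfaces
    }

    Pass // 2nd pass: the usual transparent draw, kept only where it matches the depth we just wrote
    {
    	ZWrite Off
    	ZTest Equal
    	Blend SrcAlpha OneMinusSrcAlpha

    	CGPROGRAM
    	#pragma vertex vert
    	#pragma fragment frag
    	#include "UnityCG.cginc"

    	sampler2D _MainTex;
    	fixed4 _Color;

    	struct v2f {
    		float4 pos:	SV_POSITION;
    		half2 uv:	TEXCOORD0;
    	};

    	v2f vert(appdata_full v)
    	{
    		v2f o;
    		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    		o.uv = v.texcoord.xy;
    		return o;
    	}

    	fixed4 frag(v2f i): COLOR
    	{
    		return tex2D(_MainTex, i.uv) * _Color;	// _Color.a controls how transparent it ends up
    	}
    	ENDCG
    }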
  • @Chippit I hope you're going to explain what you just said in the workshop
  • I concur :p as much as it's awesome to have great minds working on this (man you guys are geniuses), I'm trying my damnedest to understand the code and it's really all guesswork from my side :/

    But thank you guys! :D
  • Fengol said:
    @Chippit I hope you're going to explain what you just said in the workshop
    It's easy, you just modulate the anomalous flux capacitor until the temporal field stabiliser reaches its azimuth, then the quantum plasma conduit should align with your visual requirements.
  • "PAY TRAFFIC FINES!!!"
    "TWEAK THE FLUXNIPPLES!!!"
    "SHAKE!!! SHAKE!!!"
  • How did I only discover this now? Honestly, I only realized there was a tutorials tab now