Edge detection image effect.

Hello there,

I am trying to get edge detection working in Unity and I am having a couple of problems. I know there are some of you on the forums who have experience with shaders and image effects, so hopefully you can help ;)

Ok, so the first thing I tried was Unity's edge detection script that comes with the Standard Assets. I used the Triangle Depth Normals setting and it worked quite well, but if you have a large plane it picks up false positives on the depth test. This happens because of the way the test works: it compares the depth values of 2 pixels on opposite sides of the current one, and if there is a difference the pixel is considered to be an edge. So if you have a large plane at a shallow angle it passes that test even though I don't want lines to be drawn. It looks quite bad on the horizon.
Another (better?) explanation of the problem with images can be found here : http://williamchyr.com/2014/03/development-update-edge-detection/
If I can figure out an additional test to rule out this behaviour I would be all set.
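To make the failure mode concrete, here is a rough Python sketch of the opposite-pixel depth test described above (names and the threshold are made up for illustration, this is not Unity's actual shader code), showing how a flat plane at a grazing angle trips it:

```python
# Rough sketch (made-up names and threshold, not Unity's actual shader) of the
# opposite-pixel depth test: compare the depths of two diagonally opposite
# neighbours and flag an edge when the difference exceeds a fixed threshold.

def is_depth_edge(depth, x, y, threshold=0.01):
    """Flag (x, y) as an edge if opposite neighbours differ too much in depth."""
    d1 = depth[y - 1][x - 1]  # one diagonal neighbour
    d2 = depth[y + 1][x + 1]  # the opposite diagonal neighbour
    return abs(d1 - d2) > threshold

# A flat ground plane seen at a grazing angle: depth grows steadily per pixel,
# so the neighbour difference exceeds the threshold even though there is no edge.
ramp = [[0.02 * (x + y) for x in range(5)] for y in range(5)]
print(is_depth_edge(ramp, 2, 2))  # True: a false positive on a flat surface
```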

While researching all of this I came across the excellent blog of William Chyr (Manifold Garden).
In one of the posts (http://williamchyr.com/2014/05/revisiting-edge-detection/) he talks about an interesting way of implementing edge detection through the use of normals, colors and the depth buffer. I enthusiastically implemented this method, but I discovered it has the same problem with artifacts at shallow angles and large distances. The reason is fairly similar: you are creating a unique color for each pixel based on its depth and normals, which means that at shallow angles and large distances the colors vary enough to be considered an edge. I tried to use the vertex shader to calculate a color based on the object's position, but then I remembered that this shader is not attached to a GameObject, so the vertex shader is sending me the full-screen quad's information.
If I can find a way of creating a unique color for an entire face depending on its normals and position I will be all set.
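For illustration, here is a hedged Python sketch of the per-pixel color-encoding idea (the encoding below is made up, not William Chyr's actual shader), showing why a flat plane still reads as an edge:

```python
# Made-up encoding for illustration (not William Chyr's shader): build a color
# from the surface normal and depth, then edge-detect on color differences.

def encode_color(normal, depth):
    """Map a normal (components in [-1, 1]) and a depth value to an RGB triple."""
    nx, ny, nz = normal
    return (0.5 * (nx + 1.0), 0.5 * (ny + 1.0), depth % 1.0)

def color_difference(c1, c2):
    return sum(abs(a - b) for a, b in zip(c1, c2))

# Same normal (a flat, upward-facing plane) but depths far apart at a grazing
# angle: the depth channel alone makes the colors differ, so a per-pixel
# encoding still reports an edge in the middle of a flat surface.
up = (0.0, 1.0, 0.0)
diff = color_difference(encode_color(up, 0.10), encode_color(up, 0.45))
print(diff > 0.1)  # True: falsely classified as an edge
```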

Here is a selected image of the problem I'm trying to solve:

William solves both of these problems in his shaders and even describes how, but I can't seem to get it working in my shaders. I tried his way of adding the 4 surrounding pixels' depths together to get the average and then comparing the average to the center pixel's depth, but this gave me the same artifacts.

Any help would be appreciated. I'm still a noob with shaders so this is proving to be challenging for me.
Thanks, Kobus.

PS : Sorry for the lack of formatting or code but I am currently without internet so I am using my phone to write this.

PPS : William links to a post made by Lucas Pope (Papers, Please) on the method his new game uses. From what I understand, instead of having the color changes be controlled by the image effect, he attaches a shader to each game object, and that sets the color from which the edge detection can do its work. Link : https://forums.tigsource.com/index.php?topic=40832.msg1025558#msg1025558


  • Interesting problem, have a look here for a starting point.

  • On a plane, the normal should stay constant, right? You're checking for that too, and still getting edges within planes?
  • @elyaradine
    Well, at the moment there are 2 tests:
    1. Are the adjacent pixels' normals different? If so, this pixel is an edge.
    2. Are the adjacent pixels' depths different? If so, this pixel is an edge.

    If I were to change the tests in the way I assume you are suggesting, it would look like this:
    1. Are the adjacent pixels' normals different AND the adjacent pixels' depths different? If so, this is an edge.

    But this will miss some cases where the normals are the same but the depths aren't.
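A quick Python sketch of the two ways of combining the tests (thresholds are invented for illustration), showing the case the AND version misses:

```python
# Illustrative sketch (invented thresholds) of combining the normal test and
# the depth test with OR versus AND. The OR version fires on flat planes at
# shallow angles; the AND version misses edges where only one quantity changes.

def normals_differ(n1, n2, threshold=0.1):
    return sum(abs(a - b) for a, b in zip(n1, n2)) > threshold

def depths_differ(d1, d2, threshold=0.01):
    return abs(d1 - d2) > threshold

def edge_or(n1, n2, d1, d2):
    return normals_differ(n1, n2) or depths_differ(d1, d2)

def edge_and(n1, n2, d1, d2):
    return normals_differ(n1, n2) and depths_differ(d1, d2)

# Two faces of a box meeting at the same depth: normals differ, depths don't.
up, side = (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)
print(edge_or(up, side, 0.5, 0.5))   # True: the OR test catches this edge
print(edge_and(up, side, 0.5, 0.5))  # False: the AND test misses it
```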
  • You could do a pixel-removal pass that checks the surrounding 4 pixels: if 3 of them are 'blank' then you remove that pixel as an edge.

    Are you using a luminance/colour based edge detection or a normal based one, btw? It wasn't 100% clear from your post (and I only had time to skim it right now)

    I'll try to flesh this idea out a bit tomorrow, but I'm in a rush atm! *zoom* chat soon!
  • @raxter Interesting idea. But again, how would I determine which of my pixels are blank?

    So in my first method I am using a normal + depth based one.

    In my second method I create a color for each pixel based on the normal + depth and then run a luminance/color based detection on top of that.

    Both have this problem. It's a logic problem as far as I can tell. I just need a test that will eliminate the false positives, but I can't think of one.
  • I'm guessing here, but maybe distant objects are falsely passing the depth test because you haven't linearised it? (Usually you want more precise depth nearer the camera, so the depth texture's often encoded to give fine detail closer to the camera, and coarse data farther away, so maybe it's still in its exponential form?) Maybe you can use a depth texture with more precision?

    Maybe, if your depth is linear and your current pixel's depth falls exactly (well, you can't measure exactness, but within a tolerance) in between the adjacent depths, that part of the surface is considered flat, and it's not an edge?
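That midpoint idea can be sketched like this in Python (the tolerance value is arbitrary):

```python
# Sketch of the "midpoint" flatness test suggested above: on a flat surface
# the centre depth should sit (roughly) halfway between its two neighbours
# along a line, so treat the pixel as flat, i.e. not an edge, when it does.

def is_flat(d_left, d_centre, d_right, tolerance=0.001):
    """True if d_centre lies approximately midway between its neighbours."""
    return abs(d_centre - 0.5 * (d_left + d_right)) < tolerance

# A flat plane at a grazing angle: depth increases linearly, so the midpoint
# test recognises it as flat even though the raw differences are large.
print(is_flat(0.10, 0.20, 0.30))  # True: linear ramp, no edge
print(is_flat(0.10, 0.20, 0.90))  # False: a genuine depth discontinuity
```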

    I feel like with image-based methods you'll always be able to create cases that result in false positives, and it really comes down to tweaking your tolerance to give you an acceptable appearance. As far as making a game goes, it could mean designing your assets to avoid things looking wrong in the first place. Maybe your level's designed never to see a long, flat plane that goes far into the distance, because it's blocked by cliffs and buildings. Or maybe the edges fade away or also get thinner (to the point of disappearing). It could look quite pleasing, kind of like fog.
  • So I do scale the required threshold by the distance of the pixel. Does that linearize it?

    That fading of the edges is an interesting idea. I implemented it and it looks pretty cool, kind of like a pencil sketch. But my problem persists: if you go closer towards the ground plane, the angle towards the horizon gets shallower and thus the pixel depths vary more and more.

    I realize it must be difficult for you to help me without some sort of idea of the code. I will try to get internet somewhere to upload the shader code I am using.
  • edited
    I don't have Unity installed at work, (I'm not allowed to :P) but I remember there being a Unity macro that does the linearizing for you. I don't remember what it's called though. :P It might be DECODE_EYEDEPTH or LinearEyeDepth. I'd run a search in UnityCG.cginc to confirm.

    I think that to linearize it, you should be multiplying your depth lookup by the camera's far clipping distance. (Or maybe it's by the far clipping distance - the near clipping distance.) I actually forget this stuff all the time (because the standards seem to be different depending on platform or DX/OGL) and just try the next one if it looks wrong. XD
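For reference, here is a Python sketch of one common linearisation formula (assuming a conventional, non-reversed [0, 1] depth buffer; as noted above, the exact conventions vary by platform, so treat this as a starting point rather than gospel):

```python
# Assumption: D3D-style depth buffer in [0, 1], non-reversed. This is one
# common formula for converting a non-linear depth-buffer sample back into
# eye-space distance, which is roughly what Unity's LinearEyeDepth macro does.

def linear_eye_depth(d, near, far):
    """Convert a [0, 1] depth-buffer sample to eye-space distance."""
    return near * far / (far - d * (far - near))

# Precision is concentrated near the camera; linearising spreads it back out.
print(round(linear_eye_depth(0.0, 0.3, 1000.0), 3))  # 0.3 (near plane)
print(round(linear_eye_depth(1.0, 0.3, 1000.0), 1))  # 1000.0 (far plane)
```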

    If you're using a purely postprocess/image shader, you're always going to have the edge appearing as the surface normal and your view direction approach right angles. I don't think there's a way around that.

    I've just been thinking about it a bit, and maybe there's another hack, if you're okay with not doing purely postprocess work (it comes at lots of other costs though).

    So, we're looking for every individual object to be a distinct (flat) colour, so that the edge detection filter highlights things correctly. Maybe you could use another pass on your objects that renders a colour based on its pivot. You can get the pivot by transforming the local position float4(0,0,0,1) into clip space, and shading a "unique" colour based on that. (It'd be some combination of x, y and z positions that gives you a unique colour, bearing in mind that in clip space x and y are bound between 0 and 1, or -1 and 1, and z is bound by the clipping planes. Or something similar, I always forget.)

    You'd use the result of this in your edge detecting filter, and it'll separate all of your objects -- as long as their pivots aren't in the same position in world space.

    You could do a variation of this too using the object's centre (the centre of its bounds), although, again, it's possible the test will fail if two objects somehow have the exact same mesh centres. I've never done this in Unity before, but Unreal has a node for it, so I imagine it'd be possible to get somehow. I know you can via script, and I'm quite sure there's a concept of "bounds" in a shader; I've just never tried accessing it before.

    The downside is that in this pass, you'll need your world not to have any batching, because anything that gets batched will end up having the same pivot. Worst case scenario, you end up with a lot of duplicate mesh data (one set where everything is separate to have different colours -- and even though they're all using the same shader, they can't be batched, so potentially huge drawcall count here -- and one set for batching).
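In rough Python (the hash below is a generic shader-style trick, not anyone's actual implementation), the pivot-colour idea might look like:

```python
import math

# Hypothetical sketch of the pivot-colour pass described above: give every
# object a flat, (near-)unique colour derived from its pivot position, then
# run the colour-based edge detection on that buffer. The hash is illustrative.

def _hash(v):
    # fract(sin(v) * large_constant): a common shader trick for pseudo-random values
    return (math.sin(v) * 43758.5453) % 1.0

def pivot_color(pivot):
    """Derive a repeatable, pseudo-unique RGB colour from a world-space pivot."""
    x, y, z = pivot
    return (_hash(x * 12.9898 + y * 78.233),
            _hash(y * 12.9898 + z * 78.233),
            _hash(z * 12.9898 + x * 78.233))

# Different pivots give different flat colours, so the edge filter separates
# the objects; identical pivots (e.g. batched geometry) would collide.
print(pivot_color((1.0, 0.0, 2.0)) != pivot_color((1.5, 0.0, 2.0)))  # True
```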

    Sorry if any of this isn't clear, and not as terse or concise as I'd like. I haven't been thinking very sharply lately due to lack of sleep. :P
  • I managed to solve this yesterday. I took a look at how Unity implements their Sobel depth edge detection. So now I get the Sobel value of the 4 pixels around the current one and then compare them. This works really well, so I will post this image effect on GitHub and share it once I get home :)
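For anyone following along before the gist goes up, here is a rough Python sketch of a Sobel filter over the depth buffer (the actual effect compares the Sobel values of the 4 neighbouring pixels; this just shows the Sobel gradient itself, and is not the shader code):

```python
# Illustrative pure-Python Sobel filter on a depth image, not the actual
# shader: convolve the 3x3 Sobel kernels over depth and treat a large
# gradient magnitude as an edge.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(depth, x, y):
    """Gradient magnitude of the depth image at (x, y)."""
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            d = depth[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * d
            gy += SOBEL_Y[j][i] * d
    return (gx * gx + gy * gy) ** 0.5

# A step in depth (a genuine edge) produces a strong response.
step = [[0.1 if x < 2 else 0.9 for x in range(4)] for y in range(4)]
print(sobel_magnitude(step, 1, 1) > 1.0)  # True at the discontinuity
```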

    Thanks again guys !!
  • I am back on stable internet again (yay!) so here is what I ended up doing :
    Criticism is more than welcome. There is a lot of calculation going on, so I am worried about the performance of this shader, but so far it works.

    Here is a link to a gist : https://gist.github.com/Kobusvdwalt/d74ad7013255d275a03817453dfbe28a