Shader workshop JHB - Poll

Comments

  • @Elyaradine the car metal (or should it be car paint?) shader... I have many uses for that kind of thing :)
  • Yeah that's what I thought you meant. That's a fairly standard outline effect... it's in a bunch of games with toon shaders. OK, not that I know how the shader would be programmed, but in Photoshop: find the shape, enlarge by x pixels, place under object :P
  • @hermantullken: Oh, that one! :D I actually wanted to make something like this: (DX11 required, afaik, or you may run into artifacts and other graphical anomalies. Also, it's pretty big; I think it's 100MB or so.)
    http://dl.dropboxusercontent.com/u/16565603/Unity/Ferrari/Ferrari.html

    ...but for mobile. D:

    @Tuism: Yeah, you actually do pretty much that in 3D too. ;)
  • Tuism said:
    While we're taking requests :P

    Ghost Lamp 3D pixels effect? It *seems* easy enough given the knowledge (I'm guessing each pixel has a layer of 4, and each of their opacities is adjusted by an average value of the angle of light falling on it x strength of light), but I dunno how to express that in... anything, lol :P
    You're gonna have to explain that one, I've not seen what you mean. Pictures, videos!
  • Hahaha I got the name all wrong, it was Sprite Lamp :P

    And the effect was also used in Confederate Express:
    image
  • Not to steal any thunder from anyone, but I owe almost 100% of my Unity shader knowledge to this resource:
    http://en.wikibooks.org/wiki/Cg_Programming/Unity

    Also, @Chippit I'd recommend you make some mention of the fact that the fixed function pipeline, Cg shaders, and surface shaders within Unity are all inherently different. Dear lord that took me the longest time to grok. Totally wish someone could have just told me that.

    Edit: I also found Strumpy Shader Editor (a free node-based shader editor) a great boon in learning shaders. Helps avoid arb syntax errors, which are pretty easy to make in Cg as the compiler is... ahem... poor. What I'll often do is mock up the shader in Strumpy and then open the compiled shader output as reference and copy into my own stripped-down shader.
  • Also, @Chippit I'd recommend you make some mention of the fact that the fixed function pipeline, Cg shaders, and surface shaders within Unity are all inherently different. Dear lord that took me the longest time to grok. Totally wish someone could have just told me that.
    That would definitely be one of the things I would cover as part of a theoretical introduction, yeah!
    Tuism said:
    Hahaha I got the name all wrong, it was Sprite Lamp :P
    Ah. 2D lighting. Simpler than you think. :P It's fundamentally still the same as 3D lighting, really!
  • Chippit said:
    Ah. 2D lighting. Simpler than you think. :P It's fundamentally still the same as 3D lighting, really!
    My impression of it is that you overlay a couple of sprites, each lit from a different angle, and then you blend them by putting them on top of each other and adjusting each pixel's opacity according to lighting info :)

    I dunno if that's the "same" as 3D lighting but hey that's why I'm asking :)
  • That's one method, but an easier way is to do what Gratuitous Space Battles did and just have normal maps for your 2D sprites. All the fancy 3D contour illusion with only two textures and a per-pixel light.
  • I would love to know what a normal map is, but yes that's what the workshop is for :P
  • @Tuism: You know in Photoshop, how you have your different channels?

    Imagine now that you're drawing a picture of some sculpture or something that's lit from the right. Surfaces that are facing more toward the right will be brighter, and surfaces that are facing away from the right will be darker. So you've got a greyscale image that describes a lot of the information of the 3D object, but only in terms of whether it's facing left or right. Imagine saving that greyscale image to your Red channel.

    And repeat, but for a light that's shining from the top. And put that in your Green channel.

    You now have an image with two channels from which you can derive what angle the surface of your sculpture is facing: the brightness of the red channel corresponds to the surface's direction along one axis, and the brightness of the green channel corresponds to its direction along another axis. (And you've still got the blue channel for even more information; that's usually used for scaling/depth/normalization, but that's not important right now. :P)

    So, in essence, you've got a 2D image that can give you some 3D information that you could use to do lighting calculations as if it were a 3D mesh.

    This is an example of a (tangent space) normal map. Open it up in Photoshop, and take a look at the R and G channels.
    image
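To make that concrete, here's a tiny sketch of the remap a shader does with those channels (plain Python standing in for shader code; the helper name and pixel values are mine):

```python
# Channels are stored as 0..255, but direction components run from
# -1 to 1, so the shader remaps each channel with value / 255 * 2 - 1.

def decode_normal(r, g, b):
    """Turn one RGB pixel of a tangent-space normal map into an
    (x, y, z) direction vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The typical flat, "facing straight at you" pixel is (128, 128, 255):
print(decode_normal(128, 128, 255))  # roughly (0.0, 0.0, 1.0)
```

Real shaders do exactly this remap per pixel on the GPU; the above is just the arithmetic.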
  • Thanks for that explanation! But... that sounds just like what I explained? Overlaid images of the same image...
    tuism said:
    lit from a different angle, and then you blend them by putting them on top of each other and adjusting each pixel's opacity according to lighting info :)
    I can't tell the difference between what you're saying and what I've said?
  • With the method you described, you need multiple versions of the same image to overlay and fade. The more angles you want to light, the more images you need to draw to get a good effect. With a normal map, you only ever have two images - one with your once-off contour information (the normal map), which you use in your shaders to do lighting calculations, and one with your colour information (the actual texture). The best part is that you can cater to any possible lighting level or direction with the shader approach, whereas the faded texture method is limited to what you've drawn yourself in your art or 3D package. And let's not forget - all those textures eat memory for breakfast. Sixteen low-res textures with limited angles, or two high-res that can do anything? You choose. ;)
  • Oh, so what you're saying is a normal map, or a channel, is not a 'full image', but a greyscale layer, kinda multiplied over the original image, which is different from a full colour image?
  • Well, it's really three grayscale layers representing contours of the surface from set angles (up and right and other info), each neatly contained in the existing R, G, and B channels of an image file. We're actually being very tricksy - we're using data that would be interpreted as colours by an image program like Photoshop to actually represent the angles and contours of our surface. With a bit of maths and shader wizardry, we can use that info to calculate how an otherwise flat texture would be lit from any given angle. :)
  • I think there are two points here that are causing confusion:

    1. The texture is just a vehicle for saving data. A texture doesn't necessarily have to represent an actual image. You can save whatever you want into there.

    For argument's sake, you could use a texture to record the positions of a particle's animation. The R, G, and B channels could represent the X, Y and Z axes in 3D space. The top left pixel could be at the start of the animation, and the bottom right pixel could be the end of the animation, with the rest of the pixels being everything in between. (You'd probably never do this, but I'm just using it as an example of how your "texture" might well be a 2D image with 3 or 4 channels of greyscale data, but what it represents could be almost any kind of data.)

    2. The way that 3D lighting is calculated is by taking the direction of the light, and comparing it with the direction of the surface of the object you're shining the light onto. If the surface is facing exactly at the light, it will be very bright. If the surface is facing away from the light, it will be very dark. The normal map tells you which direction the surface is facing. You can calculate the light direction based on where it is in the world. Based on these two pieces of information (and some others, as you make it more complex), you can calculate how bright your object should be.

    It has nothing to do with multiplying or overlaying 2D images or anything. It has everything to do with performing the lighting calculations that you'd normally use in a 3D engine, but performing them on a 2D texture that contains data that represents 3D information.

    --
    It'll be a lot easier to understand once Chippit's covered some fundamentals. :P
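Point 2 above is, at its core, a single dot product (Lambert's law). A sketch in Python rather than shader code; the function name is mine:

```python
import math

def lambert(normal, light_dir):
    """Brightness of a surface whose facing direction is `normal`,
    lit from direction `light_dir`. Both are unit-length (x, y, z)
    vectors: facing the light gives 1.0, facing away clamps to 0.0."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

up = (0.0, 0.0, 1.0)
print(lambert(up, (0.0, 0.0, 1.0)))    # faces the light exactly: 1.0
print(lambert(up, (0.0, 0.0, -1.0)))   # faces away: 0.0
# A light at 45 degrees gives cos(45 deg), about 0.707:
print(lambert(up, (0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))))
```

With a normal map, `normal` comes from the texture lookup instead of the mesh, which is the whole trick.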
  • Or you guys can just carry on right here and I won't have to do anything! :D
  • I think that's the important point: a shader calculates the colour of a pixel on a surface; that's what its eventual output is. It can do simple stuff like look up a colour from a specific point in a texture and then just output that colour, or it can do stuff like also look up the information on where lights are in relation to that surface in 3D and then figure out how much to modify the output colour by the light's colour.

    You're not blending images (although you could); you're creating a new image dependent on the other information coming into the shader.

    -edit- Ninja'd.
  • @Chippit: Yeaaaaah, knowing the theory and implementing the shader are two very different animals. I'll have to bow out when it comes to the latter. :P
  • Chippit said:
    Or you guys can just carry on right here and I won't have to do anything! :D
    Or by the time we get to the workshop we can all start from advanced level and you can show us some cool techniques from 2016 :)

  • Perhaps I can explain it more simply:

    An image is really just an array of numbers. A normal map exploits this by using the R, G, B values to store the "direction" the surface is pointing - i.e. RGB doesn't represent a color but actually an x, y, z direction vector.

    Normal map:
    image
    Each pixel represents an x, y, z surface vector.

    Your shader then uses the POWER OF MATH to figure out the angle between the "direction" of your surface and the direction of light to adjust the colour of your base texture based on how much light it should receive.

    Bonus: this is why 3d drag handles in unity and other programs are coloured the way they are.
    X = (1, 0, 0) = red
    Y = (0, 1, 0) = green
    z = (0, 0, 1) = blue
    If you extrapolate this, you will realise why most normal maps are blueish: mostly, the normals point 90 deg away from the surface in the Z direction. :D
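The encoding direction - going from a vector to a colour - is the same remap in reverse (a sketch with made-up helper names, not any engine's API):

```python
def encode_normal(x, y, z):
    """Pack an (x, y, z) direction with components in [-1, 1] into
    8-bit R, G, B channel values."""
    return tuple(round((c + 1.0) / 2.0 * 255) for c in (x, y, z))

# The common "points straight out of the surface" normal:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255) - the familiar blueish purple
```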
  • If you extrapolate this, you will realise why most normal maps are blueish: mostly, the normals point 90 deg away from the surface in the Z direction. :D
    When in tangent space! Object space normal maps (uncommon) tend to be far more colourful.
  • Ahhhhh! The thing that got me to understand was:
    elyaradine said:
    2. The way that 3D lighting is calculated is by taking the direction of the light, and comparing it with the direction of the surface of the object you're shining the light onto. If the surface is facing exactly at the light, it will be very bright. If the surface is facing away from the light, it will be very dark. The normal map tells you which direction the surface is facing. You can calculate the light direction based on where it is in the world. Based on these two pieces of information (and some others, as you make it more complex), you can calculate how bright your object should be.
    Cooool :D

    Ok getting fundamentals now. But damn it sounds complicated, you'd create these normal maps... Only possible out of a 3D program, right?

    Whereas Sprite Lamp is a more artist-friendly way of doing it, correct?
  • Not necessarily. Arguably the easiest way to generate normal maps is to paint a height map for your sprite. There exists software that can transform that into a normal map (it's a fairly trivial conversion). Creating one height map is, for the most part, far simpler than painting your character as lit from multiple directions (you'll need at least two or three for that to work well!)
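That height-to-normal conversion boils down to taking slopes between neighbouring pixels. A toy version in Python (the function and parameter names are mine; `height` is assumed to be a 2D list of values in [0, 1]):

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """Approximate the surface normal at (x, y) of a height field by
    finite differences between neighbouring samples."""
    h, w = len(height), len(height[0])
    # Slope along x and y, clamping lookups at the borders.
    dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
    dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
    nx, ny, nz = (0.0 - dx) * strength, (0.0 - dy) * strength, 1.0
    # Normalise to unit length.
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A perfectly flat height field points straight up everywhere:
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat, 1, 1))  # (0.0, 0.0, 1.0)
```

Run that per pixel, pack the result with the usual [-1, 1] to [0, 255] remap, and you have your normal map.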
  • Height map... Sounds hard to visualise, but yes I get you :)
  • You'd be surprised - and actual proper artists will murder me for saying this - how often the luminance of an image serves as a passable height map for things.
  • Passable if you're blind.

    :P

    (Although, tbh, we actually do that pretty often, though we do a bit more than just grabbing luminance.)
  • Tuism said:
    Ahhhhh! The thing that got me to understand was:
    elyaradine said:
    2. The way that 3D lighting is calculated is by taking the direction of the light, and comparing it with the direction of the surface of the object you're shining the light onto. If the surface is facing exactly at the light, it will be very bright. If the surface is facing away from the light, it will be very dark. The normal map tells you which direction the surface is facing. You can calculate the light direction based on where it is in the world. Based on these two pieces of information (and some others, as you make it more complex), you can calculate how bright your object should be.
    Cooool :D

    Ok getting fundamentals now. But damn it sounds complicated, you'd create these normal maps... Only possible out of a 3D program, right?

    Whereas Sprite Lamp is a more artist-friendly way of doing it, correct?
    If I may add - the primary purpose of a normal map is to generate fake surface detail on top of the current polygons of a model. Normal polygon geometry already gets shaded by lighting in the game, but by adding this trick it will look as if there's more detail - you're basically using the rendering/lighting engine to your advantage instead of having high poly models.
  • farsicon said:
    Tuism said:
    Ahhhhh! The thing that got me to understand was:
    elyaradine said:
    2. The way that 3D lighting is calculated is by taking the direction of the light, and comparing it with the direction of the surface of the object you're shining the light onto. If the surface is facing exactly at the light, it will be very bright. If the surface is facing away from the light, it will be very dark. The normal map tells you which direction the surface is facing. You can calculate the light direction based on where it is in the world. Based on these two pieces of information (and some others, as you make it more complex), you can calculate how bright your object should be.
    Cooool :D

    Ok getting fundamentals now. But damn it sounds complicated, you'd create these normal maps... Only possible out of a 3D program, right?

    Whereas Sprite Lamp is a more artist-friendly way of doing it, correct?
    If I may add - the primary purpose of a normal map is to generate fake surface detail on top of the current polygons of a model. Normal polygon geometry already gets shaded by lighting in the game, but by adding this trick it will look as if there's more detail - you're basically using the rendering/lighting engine to your advantage instead of having high poly models.
    Fake things? Tricks? In game development? What an outlandish concept!
  • For those who, like me, value books, the following book (PDF) should be helpful, although I don't use Unity:

    http://it-ebooks.info/book/2954/

    For a solid understanding I suggest you also get one of the books I mentioned above.