Model size versus camera clipping

I'm hoping you can give me some advice. I'm trying to make an FPS sailing game, and I've run into the problem that you can see further than the far clipping plane allows.

I was originally using 1 unit = 1 meter, but I've since shrunk that by half. The problem I have now is that the near clipping plane starts to cut off models that are close to the player. I'm stuck playing this game of getting scale and forced perspective right, but I'm a little lost as to where to begin.

I'm checking Let's Play videos of Silent Hunter to see if I can figure out how they do it.

Any ideas?

Comments

  • Hmmmm, I'm not super pro at this, but I've been messing about with huge scales and I've run into the same problem. Here's what I've tried:

    Hopefully someone with more experience can chime in if these are wrong. (There's a rough sketch of these settings after the list.)

    1. On the camera, increase the view distance... but according to this thread (http://answers.unity3d.com/questions/755322/largest-reasonable-camera-view-distance.html), 1-2k units is about reasonable. My understanding is that it doesn't really matter how "far" you look in absolute units; it's more about the ratio between the size of the objects you want visible and the distance to them. So shrinking your objects while using a "smaller" view distance doesn't actually make a dent computationally.

    2. Use fog so that the viewing distance doesn't cut off abruptly.

    3. Use some kind of "stand-in" content for things that are further away, like a blend into the skybox?

    4. Couldn't you change the near clipping plane so that it sits closer to the camera?

    5. Increasing the Field of View can give you a better illusion of scale.
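
    A rough sketch of where those settings live in code (Unity C#; the component name and the numbers are placeholders I made up, so tune them to your scene):

    using UnityEngine;

    // Hypothetical helper - attach to your player camera.
    public class CameraScaleSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();

            // 1. View distance: the far clipping plane (1-2k units per the linked thread).
            cam.farClipPlane = 2000f;

            // 4. Near clipping plane: pull it in so close-up models stop getting clipped.
            cam.nearClipPlane = 0.1f;

            // 5. Field of view: a wider FOV can exaggerate the sense of scale.
            cam.fieldOfView = 70f;

            // 2. Fog, so the world fades out instead of cutting off at the far plane.
            RenderSettings.fog = true;
            RenderSettings.fogMode = FogMode.ExponentialSquared;
            RenderSettings.fogDensity = 0.002f;
        }
    }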
  • I'm not sure if this will help, but usually for this sort of problem (if you have a HUGE scene and you want to be able to zoom right into a model) you would implement a logarithmic depth buffer.

    Some info here:
    http://outerra.blogspot.co.uk/2009/08/logarithmic-z-buffer.html
    and a Unity implementation:
    http://forum.unity3d.com/threads/logarithmic-z-buffer-issue.214377/

    hope this answers your question
  • @shanemarks... I really wish I understood what that stuff was talking about!

    Is that shader trying to tell Unity to render everything that's at a HUGE distance from the camera as a flat thing? Does that make the far planes visible?
  • So you can set your near clipping plane to whatever you want it to be, and the same goes for your far clipping plane. You can have those planes be thousands of world units apart. The reason you probably don't want to do this is that your Z-buffer (which internally only has values between 0 and 1 while rendering) ends up losing accuracy (because of floating point math), so objects that are far from your near clipping plane start to do a thing called z-fighting, where surfaces flicker and appear to intersect.

    You could easily set your near plane to 0.01 and that would probably help with your clipping issues for close meshes. But that moves your z-fighting area even closer to the camera (again because of floating point math), so it starts being visible at middle distances (there are some rough numbers on this at the end of this comment).

    I'd suggest setting your near plane to the largest value you can get away with and still have your models looking right. Then, if you get z-fighting problems (which you'll probably have where water intersects ships in the distance, so look there first), solve that when you get to it. Chances are you won't have many issues, so just mess with your clipping planes and see...

    If it DOES mess up, well, you have a bunch of things you can try, including just rendering close meshes in their own pass with drastically different planes.
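
    To put rough numbers on that precision point (a back-of-the-envelope sketch assuming the classic OpenGL-style depth mapping; Unity's exact depth setup differs per platform, so treat it as illustrative only):

    using System;

    class DepthPrecisionSketch
    {
        // 0..1 depth-buffer value for an object at eye-space distance d,
        // given near plane n and far plane f (standard perspective mapping).
        static double BufferDepth(double d, double n, double f)
        {
            double ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d); // -1 at n, +1 at f
            return (ndc + 1.0) / 2.0;                                       //  0 at n,  1 at f
        }

        static void Main()
        {
            double n = 0.01, f = 2000.0; // near pulled in very close, far pushed way out

            // Almost the whole 0..1 range is spent right next to the camera:
            Console.WriteLine(BufferDepth(1.0,   n, f)); // ~0.9900
            Console.WriteLine(BufferDepth(100.0, n, f)); // ~0.99990
            Console.WriteLine(BufferDepth(200.0, n, f)); // ~0.99996 - barely differs from 100 units
        }
    }

    With only 16 or 24 bits of depth to resolve those last few decimal places, distant surfaces that sit close together land on the same depth value - which is exactly the z-fighting described above.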
  • A trick that I saw used in early Unity tutorials (and maybe what Danny's getting at when he refers to multipassing) is to have two overlapping cameras - a detail camera with a shorter frustum for more detailed items nearby, and then a "distance" camera whose near plane is set just short of the far plane of the detail camera, and whose far plane extends into the wild blue yonder. Sure, you'll be multipassing, but you shouldn't get much overdraw since the frustums (frusta?) don't overlap, and you'll preserve z-buffer resolution where you need it. You can also use the individual camera layer filters to cull any objects or lights you don't need to see in the distance.
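
    A rough sketch of that two-camera setup (the component, layer names and plane distances here are made up for illustration; adjust to taste):

    using UnityEngine;

    // Hypothetical rig: assign two child cameras in the inspector.
    public class DualCameraRig : MonoBehaviour
    {
        public Camera detailCamera;   // nearby, detailed stuff
        public Camera distanceCamera; // the wild blue yonder

        void Start()
        {
            // The distance camera draws first and covers everything far away.
            distanceCamera.depth = 0;
            distanceCamera.nearClipPlane = 950f;    // just short of the detail camera's far plane
            distanceCamera.farClipPlane = 20000f;
            distanceCamera.cullingMask = LayerMask.GetMask("FarScenery"); // hypothetical layer

            // The detail camera draws on top with its own, much finer, z-buffer range.
            detailCamera.depth = 1;
            detailCamera.clearFlags = CameraClearFlags.Depth; // keep the far image, clear depth only
            detailCamera.nearClipPlane = 0.3f;
            detailCamera.farClipPlane = 1000f;
            detailCamera.cullingMask = LayerMask.GetMask("Default", "Ships"); // hypothetical layers
        }
    }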
  • If I understand Gazza correctly, then you end up using the far camera as a zoomed scope for your telescope?
    Or do you mean to say that you use the far camera to render another image as a backdrop?
    That's awesome. I didn't know you could do that.
    ...
    Another thing that comes to mind is human visual acuity...

    The human eye can see a candle flame flicker at 38 km. However, it can't distinguish an object unless that object occupies more than 1/16 of a degree of our field of vision. This means you can only distinguish a separate human-sized object out to about 3 kilometers; beyond that distance it's just a speck/blob that melts into the background. The 1/16-degree limit comes from the cone cells in our eyes: at least two need to be excited for the eye to recognize a distinguishable object (there's a quick sketch of the arithmetic at the end of this comment).

    As Steven mentioned... fog is a valid gimmick, especially at sea. Atmospheric scattering will turn everything into a haze at great distances... even when you look at it through a telescope you'd still only see a blob (a bigger blob) because of all the atmospheric particles and humidity in between. If you're using the latest Unity 5, you should have access to some advanced fog options that are height-based (very similar to atmospheric scattering, but not quite exactly the same).
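
    If you want to turn that 1/16-of-a-degree rule of thumb into concrete cull/LOD distances, the arithmetic is just a bit of trig (a rough sketch - the object sizes are my own guesses, and real acuity is messier than this):

    using System;

    class AngularSizeSketch
    {
        // Angle (in degrees) that an object of a given size subtends at a given distance.
        static double AngularSizeDeg(double sizeMeters, double distanceMeters)
        {
            return 2.0 * Math.Atan(sizeMeters / (2.0 * distanceMeters)) * 180.0 / Math.PI;
        }

        static void Main()
        {
            double threshold = 1.0 / 16.0; // degrees - the acuity figure mentioned above

            // Distance at which an object of size s shrinks to the threshold: d = s / tan(threshold).
            double d = 1.8 / Math.Tan(threshold * Math.PI / 180.0);
            Console.WriteLine(d);                      // ~1650 m for a lone 1.8 m figure
            Console.WriteLine(AngularSizeDeg(1.8, d)); // ~0.0625, i.e. right at the threshold

            // A bigger blob (person plus boat, sail, wake...) stays distinguishable further out,
            // which is where figures in the 2-3 km range come from.
            Console.WriteLine(3.0 / Math.Tan(threshold * Math.PI / 180.0)); // ~2750 m for a ~3 m blob
        }
    }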
  • @tuism: Not quite. My understanding is a bit too fuzzy to give you a perfectly concise answer, but here is my attempt, based on some further reading from here: https://www.opengl.org/wiki/Depth_Buffer_Precision and http://www.songho.ca/opengl/gl_transform.html If this is wrong in any way, please correct me :) I am sure Elyaradine or Herman will know better than me.

    There is always more precision near the camera than far away. My understanding is that a logarithmic depth buffer fudges the depth to create a more even distribution by inserting some magic in-between the conversion from clip coordinates to normalized device coordinates.

    First, convert object coordinates to eye coordinates and then to clip coordinates using the built-in Unity MVP matrix:
    o.pos = mul( UNITY_MATRIX_MVP, v.vertex );

    Then apply our fancy logarithmic fudge for better use of z-space - my maths is not good enough to explain why this works:
    o.pos.z = log2(max(1e-6, 1.0 + o.pos.w)) * (2.0 / log2(_ProjectionParams.z + 1.0)) - 1.0;

    Finally, multiply o.pos.z by w to counteract the division by w that happens later on (w is the "perspective" component that is normally used to convert clip coordinates to normalized device coordinates):
    o.pos.z *= o.pos.w;