Model size versus camera clipping
I'm hoping you can give me some advice. I'm trying to make an FPS sailing game and I've run into the problem that you can see further than the camera's far clipping plane allows.
I was originally using 1 unit = 1 meter, but I've since shrunk that by half. The problem I have now is that the near clipping plane starts to cut off models that are close to the player. I need to play this game of getting scale and forced perspective right, but I'm a little lost as to where to begin.
I'm checking Let's Play videos of Silent Hunter to see if I can figure out how they do it.
Any ideas?
Comments
Hopefully someone with more experience can chime in if these are wrong.
1. On the camera, increase the view distance... though according to this thread (http://answers.unity3d.com/questions/755322/largest-reasonable-camera-view-distance.html), 1-2k units is about the reasonable limit. My understanding is that it doesn't matter how "far" you look in absolute terms; what matters is the ratio between the sizes of the objects you want visible and the distances involved. So making your objects tiny while using a "smaller" view distance doesn't actually save you anything computationally.
2. Use fog so that the viewing distance doesn't cut off abruptly.
3. Use some kind of "stand-in" content for things that are further away, like a blend into the skybox?
4. Couldn't you just pull the near clipping plane in closer to the camera?
5. Raising the field of view can give you a better illusion of scale.
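Point 2 can be put into numbers. Here's a minimal sketch in plain Python (not the Unity API; the density value is just an illustrative pick) of exponential-squared fog, one of the fog modes Unity offers:

```python
import math

def exp2_fog_factor(distance, density):
    """Exponential-squared fog: fraction of the original color that
    survives at the given distance (1.0 = no fog, 0.0 = fully fogged)."""
    return math.exp(-((density * distance) ** 2))

# With a density tuned so fog closes in well before a 2000-unit far plane,
# geometry near the far plane is already invisible, so the hard cutoff
# never shows on screen.
density = 0.002
for d in (100, 500, 1000, 2000):
    print(d, round(exp2_fog_factor(d, density), 3))
# 100 -> ~0.961, 500 -> ~0.368, 1000 -> ~0.018, 2000 -> ~0.0
```

The trick is picking a density where visibility dies out comfortably inside the far plane, so distant objects fade into haze instead of popping out of existence.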
Some info here:
example
http://outerra.blogspot.co.uk/2009/08/logarithmic-z-buffer.html
and a Unity implementation:
http://forum.unity3d.com/threads/logarithmic-z-buffer-issue.214377/
Hope this answers your question.
Is that shader trying to tell Unity to render everything that is at a huge distance from the camera as a flat thing? Does that make the far planes visible?
You could easily set your near plane to 0.01, and that would probably help with your clipping issues for close meshes. But that moves your z-fighting area even closer to the camera (because of floating-point precision), so it starts being visible at middle distances.
I'd suggest setting your near plane to the largest value you can get away with and still have your models looking right. Then, if you get z-fighting problems (which you'll probably have where water intersects ships in the distance, so look there first), solve that when you get to it. Chances are you won't have many issues, so just mess with your clipping planes and see...
If it DOES mess up, well, you have a bunch of things you can try. Including just rendering close meshes in their own pass with drastically changed planes.
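To put numbers on why a tiny near plane moves z-fighting inward, here's a quick sketch in plain Python of the standard perspective depth mapping (using the classic OpenGL [-1, 1] convention for illustration; Unity's actual depth range is platform-dependent, and the plane/distance values here are made up, not from the thread):

```python
def ndc_depth(d, near, far):
    """OpenGL-style NDC depth in [-1, 1] for a point at eye-space distance d."""
    return (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)

# Depth separation between two surfaces 0.1 units apart, 500 units out,
# for two choices of near plane (far plane fixed at 2000):
for near in (0.5, 0.01):
    gap = ndc_depth(500.1, near, 2000.0) - ndc_depth(500.0, near, 2000.0)
    print(near, gap)
```

With near = 0.5 the 0.1-unit gap at 500 units maps to roughly 50x more depth separation than with near = 0.01 — roughly the difference between sitting above and below the resolution of a 24-bit depth buffer (about 1.2e-7 over [-1, 1]). That's why "largest near value you can get away with" is the usual advice.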
Or do you mean to say that you use the far camera to render another image as a backdrop?
That's awesome. I didn't know you could do that.
...
Another thing that comes to mind is human visual acuity...
The human eye can see a candle flame flicker at 38 km, but it can't distinguish an object unless that object occupies more than 1/16 of a degree of our field of vision. This means you can only distinguish a separate human-sized object out to about 3 kilometers; beyond that distance it's just a speck/blob that melts into the background. The 1/16-degree limit comes from the cone cells in our eyes: at least two need to be excited for the eye to recognize a distinguishable object.
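That 1/16-degree threshold turns into a one-line distance calculation (plain Python, small-angle approximation; the object sizes plugged in are illustrative):

```python
import math

def max_resolve_distance(object_size_m, threshold_deg=1.0 / 16.0):
    """Distance beyond which an object subtends less than the angular
    threshold, using the small-angle approximation (angle ~= size / distance)."""
    return object_size_m / math.radians(threshold_deg)

# A 1.8 m figure drops below 1/16 degree at roughly 1.6 km; a silhouette of
# a bit over 3 m is what a 3 km cutoff would correspond to.
print(round(max_resolve_distance(1.8)))   # ~1650
print(round(max_resolve_distance(3.3)))   # ~3025
```

Anything past that distance can safely be a speck, impostor, or skybox blend, per point 3 in the list above.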
As Steven mentioned... fog is a valid gimmick, especially at sea. The atmospheric scattering will turn everything into a haze at great distances... even when you look at it through a telescope you'd still only see a blob (a bigger blob) because of all the atmospheric particles and humidity in between... If you're using the latest Unity 5, then you should have access to some advanced fog options that are height based (very similar to atmospheric scattering but not quite exactly the same.)
There is always more precision near the camera than far away. My understanding is that a logarithmic depth buffer fudges the depth to create a more even distribution by inserting some magic in-between the conversion from clip coordinates to normalized device coordinates.
```
// Convert object coordinates to eye coordinates and then to clip
// coordinates using a built-in Unity matrix:
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

// Apply our fancy logarithmic fudge for better use of z-space
// (my maths is not good enough to explain why this works):
o.pos.z = log2(max(1e-6, 1.0 + o.pos.w)) * (2.0 / log2(_ProjectionParams.z + 1.0)) - 1.0;

// Multiply o.pos.z by w to counteract the division by w that happens later
// on (w is the "perspective" component that would normally be used to
// convert clip coordinates to normalized device coordinates):
o.pos.z *= o.pos.w;
```
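The fudge can be sanity-checked outside a shader. Here's a small plain-Python port of that depth line (with the far-plane distance substituted for _ProjectionParams.z, which is what that parameter holds in Unity; the near/far values are illustrative):

```python
import math

def log_depth(w, far):
    """Python port of the shader's remap: clip-space w (equal to eye depth for
    a standard perspective projection) -> [-1, 1], distributed logarithmically."""
    return math.log2(max(1e-6, 1.0 + w)) * (2.0 / math.log2(far + 1.0)) - 1.0

def conventional_depth(d, near, far):
    """Standard perspective NDC depth in [-1, 1], for comparison."""
    return (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)

far = 10000.0
# The remap hits both ends of the range: -1 at the camera, +1 at the far plane.
print(log_depth(0.0, far), log_depth(far, far))

# Depth resolution left for a 1-unit step at 1000 units out:
log_step = log_depth(1001.0, far) - log_depth(1000.0, far)
std_step = conventional_depth(1001.0, 0.3, far) - conventional_depth(1000.0, 0.3, far)
print(log_step / std_step)  # hundreds of times more resolution out there
```

So the log buffer doesn't render distant things "flat" — it just redistributes depth precision away from the first few units (where the conventional mapping hoards it) toward the middle and far distances.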