Game Engine Programmer needed urgently!
Hi all
We're looking for an experienced game engine programmer (C++), preferably with a degree in computer engineering or computer science and at least a few years' experience in low-level engine design and implementation.
We're busy developing a DirectX11 engine with Oculus integration for a very specific range of applications. It is an exciting and challenging project, so anyone who is interested, please message me.
Cheers
Even with something like Simplygon integrated, Unreal, Unity, etc. cannot handle the assets we have to work with. The object count and poly count are just too high. Our application currently runs on an in-house DX9 engine that was designed to handle our assets, but we have to upgrade our visuals and add support for Oculus and other future technologies to remain current.
P.S. Just asking questions to respond to the urgency in your post title. Often using other people's stuff is a lot faster than rolling your own. Plus, I suspect you'll have more luck looking for Unity devs who can handle the optimisation problem than trying to find out-of-work DX11 devs who can code engines from scratch.
I'm fully ignorant of what you are trying to do and with what resources, but if engines like Unity and The Powerhouses That Are Unreal And CryEngine can't handle what you're throwing at them, you might want to re-look at what is being thrown :)
Of course, if you want proprietary in-house technology for other reasons (IP ownership, self-reliance, stupid EULAs, etc.), then go nuts! Regardless, there is likely a much more cost-effective and efficient way to create the content that you seem to need.
Perhaps talking to a consultant with expertise in this field, who can evaluate what you are trying to do and give you some recommendations, would be a worthwhile investment going forward for your company?
Regardless, best of luck with your endeavors!
So the problem is at a lower level than asset optimization: we need to fundamentally handle and display the data differently (including memory management, etc.) from how the mainstream game engines do it.
1. I hope you have a 120 million budget to pay this person
2. I hope you are not expecting it any time soon.
Jokes aside, those are some mammoth expectations. I don't think you will find an engineer capable of pulling off such a task hanging around here. They would be working for a AAA gaming corporation making megabucks, no doubt ;)
Hopefully there are some forum lurkers around with the experience and chops to help them out!
Unity is great, but it has limits (especially without source code access). Those limits are a lot further away than a lot of people think, as pointed out by @dislekcia and @angrymoose, but they certainly do exist. It's a bit presumptuous for us to say we understand @venteras' business needs better than they do themselves.
On the other hand @venteras, this advice is forged through (literally) decades of combined experience. There is a good chance there is another way to solve the problem that might be a path of lower resistance. (For instance, maybe you need a mesh reduction algo, not a custom game engine.)
Either way, best of luck in your endeavours!
I'd really appreciate it if anyone can give me the contact details of a suitable person!
Sorry, it's a bit of a fascinating problem... I messed with visualising outputs of complex state models years ago; the test case was a Lorenz attractor dataset that had over 400M points. Yup. Certainly seems that way :) Most of the "hey, don't write your own engine" advice is aimed squarely at situations where people haven't actually considered the impacts. This is a situation where the problem is so different from what game engines typically do that it makes sense to go about rendering that level of detail in a completely different way. When you have fewer pixels on a screen than polys in a mesh, a whole bunch of other options open up.
Most CS students know low-level stuff. Unisa and UCT teach OpenGL; UP teaches DirectX... so go and scout at universities.
I've been lurking for a while, and saw an interesting thread so I thought I would register. I would dare to call myself fairly competent at high performance graphics engine development (I'm not really into game development as such, just the tech). I'm not open to employment right now, but if anyone wants to discuss the tech side of large model rendering, feel free to PM me, or discuss it in this thread if you think others will find it interesting.
Anyway, the first thing I noticed is that at 120M triangles (I don't know if this is a realistic upper limit or not for you, @venteras), a top-end consumer graphics card should be able to handle that easily:
Rasterization pipe: at approx. 5 GTris/sec, you should be able to get ~42 frames per second.
Bandwidth: at 224 GB/s, that's enough bandwidth for ~45 bytes per triangle at 42 fps, which is more than enough. Ideally, you would use a fraction less, since you also need bandwidth for rendering to your render targets/depth maps, or possibly more if you are texturing too. Assuming these are untextured CAD models with fine-grained triangles, low depth complexity and/or good depth ordering, you should be good.
Shader: at 5 TFLOPS, you should have ~1000 ops per vertex. Similarly, this comes out of your pixel shader budget, so you wouldn't want to use the full quota, but given that your pixel count is ~2 orders of magnitude lower (assuming decent front-to-back rendering and low depth complexity), you should be fine.
On-board memory: 2-4 GB would be required for the geometry, which high-end cards have (up to 6 GB on consumer cards, or 12 GB on Quadros) - less memory is needed if some of your geometry is instanced (e.g., 12 tea cups).
The above says that you should be able to just render it, and get something useful (say, ~80% efficiency) without doing anything fancy - if this isn't the case, there's probably something beyond just the hardware capabilities that is slowing you down.
To take it to the next level (shadows, 1B+ triangles, textures, etc.), you would have to use something more advanced to remove work. This can be done by using conditional occlusion queries on the bounding boxes of coherent patches of connected triangles. At the very least, it would remove ~40% of the triangles due to back-facing triangles, and will probably remove a lot more if there is depth complexity.
To improve on the above, a very rough CPU-side render of the geometry could be used to choose your render order. The idea is to maximize spatial coverage of your depth/frame buffer using the nearest patches of geometry that will cover it; once those are rendered, the rest of the geometry can be rendered using conditional occlusion queries.
Another common technique, although a bit of a headache to implement, is to do conservative occlusion queries CPU-side. For these large models, you would rasterize (CPU-side) conservative occluders (e.g., large polygons grown internally to the closed 2-manifold meshes) into a depth buffer, and then query conservative bounding volumes (e.g., bounding boxes) of occludees. If an occludee is fully covered, you can avoid making the draw call entirely; if not, draw it, render its conservative internal occluder into the buffer, and repeat.
A totally different approach that has been really successful is ray tracing. RT complexity is fairly insensitive to the total size of the model; it is mostly sensitive to what is actually seen. Furthermore, it provides a neat way to handle out-of-core rendering for really huge models: first load the bounding volume hierarchy you are using for acceleration, then render the scene, loading the geometry/textures you need on demand off disk/network/NFS/host->GPU (if ray tracing on a GPU), and keep a last-used tag for each node in case you have to evict data. During rendering you can run interactively (very easy with just primary rays; even shadow rays and possibly one-bounce ray tracing can be done interactively these days for huge models), and when an uncached node is traversed, just display its bounding box temporarily while the geometry is loaded asynchronously to replace it. It is also possible to prefetch data (by casting sparse rays originating within some radius of the view point).
For dynamic models: if some objects are rigid but dynamic (only their orientations change), there are simple ways to handle this via compartmentalized acceleration hierarchies, and even if the scene is fully dynamic, there are really fast ways to incrementally update bounding volume hierarchies on the fly (see Bounding Interval Hierarchies).
Some other vaguely useful bits: NVIDIA Quadro cards are better at rendering sub-pixel triangles and wireframes, and if you're willing/able to do a bunch of preprocessing on the models, displacement mapping can actually save bandwidth (displacement relative to a plane is cheaper than triangles) and can provide LoD optimization implicitly as well.
Anyway, that post was a lot longer than I expected - I had some time to kill. I know that @venteras was looking for a hire, not necessarily a technology dump, but perhaps it will help.
From a technical side we have numerous techniques integrated in our engine for optimization, including the fastest occlusion culling and other algorithms out there. There are many well-documented and proven techniques for implementing a good render engine, which a good programmer can do fairly easily given enough time. Assuming all this is in place, just rendering 120M polys is indeed easy, but that is only about 20-30% of the challenge and overhead in the application. Beyond the poly count, it depends on:
- how many objects you have;
- whether you know what the objects look like before you load them (i.e. do you need to give people full flexibility to add/import models from any CAD format at any time while using the program);
- whether you need full physics with collisions on all objects;
- how many particle effects you have and what material effects you want to render on objects;
- how much post-processing will be done on the scene;
- the number of dynamic lights;
- the level of interaction with, and data connected to, each object; etc.
All of the above adds overhead, and most of those topics, and how they interact, are not very well documented. So I guess what I'm saying is that we are not just looking to render a lot of polys really fast - that wheel has been designed, built and documented many times. We are looking for someone who can both implement a render engine based on best practices and think outside the box for all the other aspects of the application that have to seamlessly tie in with and complement the rendering.
I find it interesting that a rendering engine for non-game purposes has a whole different set of problems from those typically encountered in games.