Game Engine Programmer needed urgently!

Hi all

We're looking for an experienced game engine programmer (C++), preferably with a degree in computer engineering or computer science and at least a few years' experience in low-level engine design and implementation.

We're busy developing a DirectX11 engine with Oculus integration for a very specific range of applications. It's an exciting and challenging project, so anyone who is interested, please message me.

Cheers

Comments

  • What's the reason you can't use an existing engine? I thought you might have hardware constraints or some strange OS limitation, but that gets blown out of the water by the DX11 thing.
  • We use Unity for some of our projects, and we evaluated Unreal Engine for this specific application, but the 3D models we need to handle are simply too big for any of the engines out there. As great as engines like Unreal, CryEngine and even Unity are, they still assume you're going to have optimized 3D assets, which is a luxury we don't have.

    Even with something like Simplygon integrated, Unreal, Unity etc. cannot handle the assets we have to work with. The object count and poly count are just too high. Our application currently runs on a DX9 in-house engine that was designed to handle our assets, but we have to upgrade our visuals and add support for Oculus and other future technologies to remain current.
  • @venteras: I see. I'm not entirely sure why whatever optimisation techniques you're using in your engine can't be done in something like Unity. The mesh editing/manipulation architecture is almost exactly like DX9, so any clever slicing or remapping or whatever you're doing should translate pretty easily... Of course, if the only difference is that it's faster in DX9 without any tricks, then that's due to the differences in the shader pipeline. It'll get slower as you update to DX11 too - although again, optimising shaders would help in both Unity and homegrown ;)

    P.S. Just asking questions to respond to the urgency in your post title. Often using other people's stuff is a lot faster than rolling your own. Plus I suspect you'll have more luck looking for Unity devs who can handle the optimisation problem than trying to find out-of-work DX11 devs who can code engines from scratch.
  • You're more likely to find cost-effective, talented 3D artists to "optimize" your 3D assets, with a much quicker turnaround time, than a low-level engine programmer to make upgrades to your existing engine.

    I'm fully ignorant of what you are trying to do and with what resources, but if engines like Unity and The Powerhouses That Are Unreal And CryEngine can't handle what you're throwing at them, you might want to re-look at what is being thrown :)

    Of course, if you want proprietary in-house technology for other reasons (IP ownership, self-reliance, stupid EULAs, etc.), then go nuts! Regardless, there is likely a much more cost-effective and efficient way to create the content you seem to need.

    Perhaps talking to a consultant with expertise in this field - someone who can evaluate what you are trying to do, and to what ends, and give you some recommendations - would be a worthwhile investment for your company?

    Regardless, best of luck with your endeavors!
  • The problem is we do not have any control over the 3D assets that are thrown at our application. It gets used by clients with very big, inefficient engineering models (i.e. the equivalent of ten million polys to model a tea cup, and then a set of 12 of them), which are usually protected by IP rights so only the client can see/use them.

    So the problem is at a lower level than asset optimization: we need to fundamentally handle and display the data differently (including memory management etc.) from how the mainstream game engines do it.
  • So you are looking for a single developer to code an engine from the ground up that can handle 120 million polygons per frame?

    1. I hope you have a 120 million budget to pay this person
    2. I hope you are not expecting it any time soon.

    Jokes aside, those are some mammoth expectations. I don't think you will find an engineer capable of pulling off such a task hanging around here. They would be working for a AAA gaming corporation making mega bucks, no doubt ;)
  • So you are looking for a single developer to code an engine from the ground up that can handle 120 million polygons per frame?
    venteras said:
    We're busy developing a DirectX11 engine with Oculus integration for a very specific range of applications.
    venteras said:
    Our application currently runs on a DX9 in-house engine that was designed to handle our assets, but we have to upgrade our visuals and add support for Oculus and other future technologies to remain current.
    They're not looking for someone to build them an engine from the ground up. They're looking for a talented programmer with low-level engine experience to work on upgrading and maintaining their current in-house engine tech :)

    Hopefully there are some forum lurkers around with the experience and chops to help them out!

  • They're not looking for someone to build them an engine from the ground up. They're looking for a talented programmer with low-level engine experience to work on upgrading and maintaining their current in-house engine tech
    Yes, but the industry norm is that anyone asked to do such a thing will look at the code and say they need to build it from scratch to make it better ;) haha
  • I think this might be one of the (very) rare exceptions where this forum's general distrust of anyone wanting to build their own engine tech is slightly misplaced, or at least that is my suspicion without knowing the full scope of the project and the type of work that is required.

    Unity is great, but it has limits (especially without source code access). Those limits are a lot further away than a lot of people think, as pointed out by @dislekcia and @angrymoose, but they certainly do exist. It's a bit presumptuous for us to say we understand @venteras' business needs better than they themselves do.

    On the other hand, @venteras, this advice is forged through (literally) decades of combined experience. There is a good chance there is another way to solve the problem that might be a path of lower resistance. (For instance, maybe you need a mesh reduction algorithm, not a custom game engine.)

    Either way, best of luck in your endeavours!
  • I should probably clarify that we have a team of people working on the existing engine, the application and the upgrade; we're just looking for more people. It is indeed obvious that one person will not be able to do this! And yes, if we can find someone who works or worked for a AAA studio on engine development, it would be perfect. We're not afraid of paying someone what they're worth, but unfortunately we can't find someone like that in SA.

    I'd really appreciate if anyone can give me contact details of a suitable person!
  • venteras said:
    The problem is we do not have any control over the 3D assets that are thrown at our application. It gets used by clients with very big, inefficient engineering models (i.e. the equivalent of ten million polys to model a tea cup, and then a set of 12 of them), which are usually protected by IP rights so only the client can see/use them.

    So the problem is at a lower level than asset optimization: we need to fundamentally handle and display the data differently (including memory management etc.) from how the mainstream game engines do it.
    Ouch. That's rough... What are you doing, some sort of space-filling raytracer? Anything that's trying to raster polygons at those sizes is just going to fall over if you've got to keep it realtime. I imagine that's why you need something that doesn't behave like a typical mesh-based scene graph.

    Sorry, it's a bit of a fascinating problem... I messed with visualising outputs of complex state models years ago; the test case was a Lorenz attractor dataset that had over 400M points.
    I think this might be one of the (very) rare exceptions where this forum's general distrust of anyone wanting to build their own engine tech is slightly misplaced, or at least that is my suspicion without knowing the full scope of the project and the type of work that is required.

    Unity is great, but it has limits (especially without source code access). Those limits are a lot further away than a lot of people think, as pointed out by @dislekcia and @angrymoose, but they certainly do exist. It's a bit presumptuous for us to say we understand @venteras' business needs better than they themselves do.
    Yup. Certainly seems that way :) Most of the "hey, don't write your own engine" stuff is aimed squarely at situations where people haven't actually considered the impacts. This is a situation where the problem is so different to what game engines typically do that it makes sense to go about rendering that level of detail in a completely different way. When you have fewer pixels on a screen than polys in a mesh, a whole bunch of other options open up.
  • The only local people I can think of that are anywhere close to fitting the description are @AngryMoose and maybe @ScurvyKnave.
  • Your best bet is likely to be along the lines of poaching someone from the likes of ThoroughTec or 5DT. They do some pretty heavy sim work, including military contracts, and to the best of my knowledge have (or at least had, a few years back) large-scale in-house rendering engines built around DirectX, with quite a few devs working on building and maintaining them.
  • I have read over 300 game programming books, but I can't apply for this job.

    Most CS students know the low-level stuff: Unisa and UCT do OpenGL; UP does DirectX... So go and scout at universities.
  • KD_
    Hi all,

    I've been lurking for a while, and saw an interesting thread so I thought I would register. I would dare to call myself fairly competent at high performance graphics engine development (I'm not really into game development as such, just the tech). I'm not open to employment right now, but if anyone wants to discuss the tech side of large model rendering, feel free to PM me, or discuss it in this thread if you think others will find it interesting.

    Anyway, the first thing I noticed is that at 120M triangles (I don't know if this is a realistic upper limit or not for you, @venteras), a top-end consumer graphics card should be able to handle that easily:
    - Rasterization pipe: at approx. 5 GTris/sec, you should be able to get ~42 frames per second.
    - Bandwidth: at 224 GB/s, that's enough bandwidth for ~46 bytes per triangle at 42 fps, which is more than enough. Ideally you would use a fraction less, since you also need bandwidth for rendering to your render targets/depth maps, or possibly more if you are texturing too. Assuming these are untextured CAD models with fine-grained triangles, low depth complexity and/or good depth ordering, you should be good.
    - Shader: at 5 TFLOPS, you should have ~1000 ops per vertex. Similarly, this comes out of your pixel shader budget, so you wouldn't want to use the full quota, but given that your pixel count is ~2 orders of magnitude lower (assuming decent front-to-back rendering and low depth complexity), you should be fine.
    - On-board memory: 2-4 GB would be required for the geometry, which high-end cards have (up to 6 GB on consumer cards, or 12 GB on Quadros) - less memory is needed if some of your geometry is instanced (e.g., the 12 tea cups).

    The above says that you should be able to just render it, and get something useful (say, ~80% efficiency) without doing anything fancy - if this isn't the case, there's probably something beyond just the hardware capabilities that is slowing you down.
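
    If anyone wants to sanity-check those numbers against a different card, here is a minimal back-of-envelope sketch of the same arithmetic. The constants are the throughput figures quoted above, used as assumptions rather than measured specs:

    ```cpp
    // Back-of-envelope GPU budget for brute-force rendering of N triangles per frame.
    // Swap in your own card's specs; these mirror the figures quoted above.
    #include <cstdio>

    int main() {
        const double tris         = 120e6;  // triangles per frame
        const double raster_rate  = 5e9;    // rasterizer throughput, tris/sec (~5 GTris/s)
        const double bandwidth    = 224e9;  // memory bandwidth, bytes/sec
        const double shader_flops = 5e12;   // shader throughput, ops/sec (~5 TFLOPS)

        const double fps           = raster_rate / tris;         // ~42 fps
        const double bytes_per_tri = bandwidth / raster_rate;    // ~45 bytes per triangle
        const double ops_per_vert  = shader_flops / raster_rate; // ~1000 ops per vertex

        std::printf("fps %.1f | bytes/tri %.1f | ops/vertex %.0f\n",
                    fps, bytes_per_tri, ops_per_vert);
        return 0;
    }
    ```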

    To take it to the next level (shadows, 1B+ triangles, textures, etc.), you would have to use something more advanced to remove work. This can be done by using conditional occlusion queries on the bounding boxes of coherent patches of connected triangles. At the very least, it would remove ~40% of the triangles due to back-facing triangles, and will probably remove a lot more if there is depth complexity.
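
    For reference, the predicated-draw pattern in D3D11 looks roughly like the sketch below. It assumes the bounding-box and patch buffers are already bound before each draw, that the box pass has color/depth writes disabled via blend and depth-stencil state (omitted here), and error handling is skipped:

    ```cpp
    // One patch drawn with D3D11 predicated rendering: rasterize its bounding box
    // under an occlusion predicate, then predicate the real draw on the result.
    #include <d3d11.h>

    ID3D11Predicate* CreateOcclusionPredicate(ID3D11Device* device) {
        D3D11_QUERY_DESC desc = {};
        desc.Query     = D3D11_QUERY_OCCLUSION_PREDICATE;
        desc.MiscFlags = D3D11_QUERY_MISC_PREDICATEHINT; // hint: used to predicate draws
        ID3D11Predicate* predicate = nullptr;
        device->CreatePredicate(&desc, &predicate);      // error handling omitted
        return predicate;
    }

    void DrawPatchPredicated(ID3D11DeviceContext* ctx, ID3D11Predicate* predicate,
                             UINT boxIndexCount, UINT patchIndexCount) {
        // Pass 1: rasterize the patch's bounding box, bracketed by the predicate.
        ctx->Begin(predicate);
        ctx->DrawIndexed(boxIndexCount, 0, 0);
        ctx->End(predicate);

        // Pass 2: the GPU skips this draw when the predicate result equals FALSE,
        // i.e. no box fragment passed the depth test (the patch is occluded).
        ctx->SetPredication(predicate, FALSE);
        ctx->DrawIndexed(patchIndexCount, 0, 0);
        ctx->SetPredication(nullptr, FALSE);             // back to unpredicated rendering
    }
    ```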

    To improve on the above, a very rough render of the geometry CPU side could be used to choose your render order - the idea being that you want to maximize spatial coverage of your depth/frame buffer using the nearest patches of geometry that will cover it - once this is rendered, the rest of the geometry can be conditionally rendered using conditional occlusion queries.
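
    A much cheaper stand-in for that rough CPU render, if all you need is a reasonable order, is a plain front-to-back sort of the patches by distance to the eye. The Patch type here is a hypothetical stand-in for whatever the engine's draw batch is:

    ```cpp
    // Front-to-back ordering so near geometry fills the depth buffer first and
    // later, occluded patches fail their occlusion predicates.
    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Patch {
        Vec3 boundsCenter; // center of the patch's bounding box
        // ... buffer handles, triangle counts, etc.
    };

    void SortFrontToBack(std::vector<Patch>& patches, const Vec3& eye) {
        auto dist2 = [&](const Patch& p) {
            const float dx = p.boundsCenter.x - eye.x;
            const float dy = p.boundsCenter.y - eye.y;
            const float dz = p.boundsCenter.z - eye.z;
            return dx * dx + dy * dy + dz * dz;
        };
        std::sort(patches.begin(), patches.end(),
                  [&](const Patch& a, const Patch& b) { return dist2(a) < dist2(b); });
    }
    ```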

    Another common technique, although a bit of a headache to implement, is to do conservative occlusion queries CPU-side. For these large models, you would have to rasterize (CPU-side) conservative occluders (e.g., large polygons grown internally to the closed 2-manifold meshes) into a depth buffer, and then query conservative bounding volumes (e.g., bounding boxes) of occludees - if an occludee is fully covered, you can avoid making the draw call entirely; if not, draw it, render its conservative internal occluder into the buffer, and repeat.
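
    A heavily simplified sketch of that CPU-side buffer, assuming occluders and occludees have already been projected to conservative screen-space rectangles (a real implementation would rasterize the grown occluder polygons themselves, not just rectangles):

    ```cpp
    // Conservative software occlusion: occluders write their *farthest* depth so we
    // never over-occlude; an occludee is culled only if every covered texel is nearer
    // than its *nearest* depth. Depth convention: smaller = nearer, 1.0 = empty.
    #include <algorithm>
    #include <vector>

    struct ScreenRect { int x0, y0, x1, y1; float depth; };

    class ConservativeDepthBuffer {
        int w, h;
        std::vector<float> depth;
    public:
        ConservativeDepthBuffer(int width, int height)
            : w(width), h(height), depth(size_t(width) * height, 1.0f) {}

        void AddOccluder(const ScreenRect& r) {      // r.depth = occluder's far depth
            for (int y = std::max(r.y0, 0); y <= std::min(r.y1, h - 1); ++y)
                for (int x = std::max(r.x0, 0); x <= std::min(r.x1, w - 1); ++x)
                    depth[size_t(y) * w + x] = std::min(depth[size_t(y) * w + x], r.depth);
        }

        bool IsOccluded(const ScreenRect& r) const { // r.depth = occludee's near depth
            for (int y = std::max(r.y0, 0); y <= std::min(r.y1, h - 1); ++y)
                for (int x = std::max(r.x0, 0); x <= std::min(r.x1, w - 1); ++x)
                    if (depth[size_t(y) * w + x] >= r.depth)
                        return false;                // texel not strictly nearer: draw it
            return true;                             // fully covered: skip the draw call
        }
    };
    ```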

    A totally different approach that has been really successful is ray-tracing. RT complexity is fairly insensitive to the total size of the model; it is mostly sensitive to just what is seen. Furthermore, it provides a neat way to handle out-of-core rendering for really huge models: first load the bounding volume hierarchy you are using for acceleration. Then render the scene, loading the geometry/textures you need on demand off disk/network/NFS/host->GPU (if ray-tracing on a GPU), and keep a last-used tag for each node in case you have to evict data. During rendering you can run interactively (very easy with just primary rays; even shadow rays and possibly one-bounce ray-tracing can be done interactively these days for huge models), and when an uncached node is traversed, just display its bounding box temporarily while the geometry is loaded asynchronously to replace it. It is also possible to prefetch data as well (by casting sparse rays originating within some radius of the view point).
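
    The traversal side of that idea might look something like the sketch below. Everything here (Node, AsyncLoader, the Hit convention) is hypothetical rather than from any particular library, and the triangle intersection and actual disk loading are left out:

    ```cpp
    // Out-of-core ray traversal: the BVH itself stays resident, leaf geometry streams
    // in on demand; unloaded leaves return a hit on their bounding box as a placeholder.
    #include <algorithm>
    #include <cstdint>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 o, d; };                      // origin, direction
    struct AABB { Vec3 lo, hi; };
    struct Hit  { float t = 1e30f; bool ok = false; bool placeholder = false; };
    struct Geometry;                                 // streamed triangle data (opaque here)
    struct Node;

    Hit IntersectGeom(const Ray&, const Geometry&);  // real triangle test, omitted
    struct AsyncLoader { void Request(Node*); };     // queues a background read, omitted

    struct Node {
        AABB      bounds;
        Node*     left = nullptr, *right = nullptr;  // null for leaves
        Geometry* geom = nullptr;                    // null until streamed in
        uint64_t  lastUsed = 0;                      // LRU tag used when evicting
    };

    static bool Intersects(const Ray& r, const AABB& b, float& tNear) {
        float t0 = 0.0f, t1 = 1e30f;                 // standard slab test
        const float* o  = &r.o.x;  const float* d  = &r.d.x;
        const float* lo = &b.lo.x; const float* hi = &b.hi.x;
        for (int i = 0; i < 3; ++i) {
            float tA = (lo[i] - o[i]) / d[i], tB = (hi[i] - o[i]) / d[i];
            if (tA > tB) std::swap(tA, tB);
            t0 = std::max(t0, tA); t1 = std::min(t1, tB);
            if (t0 > t1) return false;
        }
        tNear = t0;
        return true;
    }

    Hit Trace(Node* n, const Ray& ray, uint64_t frame, AsyncLoader& loader) {
        float tNear;
        if (!Intersects(ray, n->bounds, tNear)) return {};
        n->lastUsed = frame;                         // touched this frame: keep it resident
        if (n->left) {                               // interior node: recurse
            Hit a = Trace(n->left,  ray, frame, loader);
            Hit b = Trace(n->right, ray, frame, loader);
            return (a.ok && (!b.ok || a.t < b.t)) ? a : b;
        }
        if (!n->geom) {                              // leaf not resident yet
            loader.Request(n);                       // stream it in asynchronously
            Hit h; h.ok = true; h.placeholder = true; h.t = tNear;
            return h;                                // show the bounding box for now
        }
        return IntersectGeom(ray, *n->geom);
    }
    ```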

    For dynamic models, if some objects are rigid but dynamic (just their orientation changes), there are simple ways to handle this via compartmentalized acceleration hierarchies, and even if a model is fully dynamic there are really fast ways to incrementally update bounding volume hierarchies on the fly (see Bounding Interval Hierarchies).
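
    Continuing with the types from the previous sketch, the compartmentalized (two-level) idea for rigid-but-moving objects can be as simple as keeping one static per-object BVH plus a per-frame transform, and moving the ray into object space instead of ever rebuilding the hierarchy. Mat4 and the transform helpers are assumed, not shown:

    ```cpp
    // Two-level acceleration for rigid dynamic objects: the object-space BVH is built
    // once; only the instance transform changes per frame.
    struct Mat4 { float m[16]; };
    Vec3 TransformPoint(const Mat4& m, const Vec3& p); // full 4x4 transform, omitted
    Vec3 TransformDir(const Mat4& m, const Vec3& d);   // rotation part only, omitted

    struct RigidInstance {
        Node* objectBvh;     // static, built once in object space
        Mat4  worldToObject; // updated per frame as the object moves
    };

    Hit TraceInstance(const RigidInstance& inst, const Ray& worldRay,
                      uint64_t frame, AsyncLoader& loader) {
        Ray local;
        local.o = TransformPoint(inst.worldToObject, worldRay.o);
        local.d = TransformDir(inst.worldToObject, worldRay.d);
        return Trace(inst.objectBvh, local, frame, loader); // reuse the static traversal
    }
    ```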

    Some other vaguely useful bits: NVidia Quadro cards are better at rendering sub-pixel triangles and wireframes, and if you're willing/able to do a bunch of processing on the models, displacement mapping can actually save bandwidth (displacement relative to a plane is cheaper than triangles) and can provide LoD optimization implicitly as well.

    Anyway, that post was a lot longer than I expected - I had some time to kill. I know that @venteras was looking for a hire, not necessarily a technology dump, but perhaps it will help.
  • Hi @KD_, thanks for taking the time to put down all this info. It seems like you could be a suitable candidate, since you have a passion for this, so if you're interested please send me your CV.

    From a technical side we have numerous techniques integrated in our engine for optimization, including the fastest occlusion culling and other algorithms out there. There are many well documented and proven techniques to implement a good render engine, which a good programmer can do fairly easily given enough time. Assuming all this is in place, just rendering 120m polys is indeed easy, but that is only about 20-30% of the challenge and overhead in the application. Beyond the poly count it depends on how many objects you have, whether you know before you load the objects what they look like (i.e. do you need to give people full flexibility to add/import any models from any CAD format at any time while using the program), whether you need full physics with collisions on all objects, how many particle effects you have and material effects you want to render on objects, how much post processing will be done on the scene, the number of dynamic lights, the level of interaction with and data connected to each object, etc.

    All of the above adds overhead, and most of those topics (and how they interact) are not very well documented. So I guess what I'm saying is that we are not just looking to render a lot of polys really fast - that wheel has been designed, built and documented many times. We are looking for someone who can both implement a render engine based on best practices and think outside the box about all the other aspects of the application that have to seamlessly tie in with and complement the rendering.
  • KD_
    venteras said:
    Hi @KD_, thanks for taking the time to put down all this info. It seems like you could be a suitable candidate, since you have a passion for this, so if you're interested please send me your CV.
    Thanks, but I'm gainfully employed, and I'm not in SA anymore - I would definitely be interested otherwise.
    venteras said:

    From a technical side we have numerous techniques integrated in our engine for optimization, including the fastest occlusion culling and other algorithms out there. There are many well documented and proven techniques to implement a good render engine, which a good programmer can do fairly easily given enough time. Assuming all this is in place, just rendering 120m polys is indeed easy, but that is only about 20-30% of the challenge and overhead in the application. Beyond the poly count it depends on how many objects you have, whether you know before you load the objects what they look like (i.e. do you need to give people full flexibility to add/import any models from any CAD format at any time while using the program), whether you need full physics with collisions on all objects, how many particle effects you have and material effects you want to render on objects, how much post processing will be done on the scene, the number of dynamic lights, the level of interaction with and data connected to each object, etc.

    All of the above adds overhead, and most of those topics (and how they interact) are not very well documented. So I guess what I'm saying is that we are not just looking to render a lot of polys really fast - that wheel has been designed, built and documented many times. We are looking for someone who can both implement a render engine based on best practices and think outside the box about all the other aspects of the application that have to seamlessly tie in with and complement the rendering.
    Brings back memories (mostly good ones :-) ). I'm very familiar with the above, since I've worked in CAD, VR and visualization - it's a complex beast. The issues I encountered were things like mesh structures that couldn't be changed, since they were directly part of the underlying physics/EM/fluid simulation parameters; picking done at a per-triangle level, which meant wire-frame outlines had to be rendered for huge models; and real-time updates of surface properties (for graphics, and for the material properties used in simulation) on a per-triangle basis, a groups-of-triangles basis (picked via brush or selection polygon), and even an object basis (easy at the object level, but the sub-object changes required regrouping the index buffers, supporting very fine-grained undo/redo editing behavior, etc.).

    Multi-scale rendering within the same environment was another tricky problem. Sometimes engineers would try to "de-discretise" their simulation results by throwing a ridiculous number of triangles at the problem; visual vector fields, contours and flows had to be overlaid onto model surfaces; real-time high-resolution isosurface generation had to be mixed into the model render; then multiple lights, soft shadows, SSAO rendering for presentation purposes, etc. Not to mention the continuous desire for accurate (well, within bounds) transparency - really hard to get right and fast (there are some neat ways to handle that now). Back in the day, I was once asked to generate a .ps file for smaller models that would render in vector form rather than as a bitmap - got that working, but it took a long time to draw the doc.

    I find it interesting that a rendering engine for non-game purposes has a whole different set of problems to those typically encountered in games.
This discussion has been closed.