Experimenting with home photogrammetry (3D scanning) on a budget ($free)

Hey hey all,

I've recently been quite intrigued by the concept of photogrammetry. After seeing its beautiful implementation in The Vanishing of Ethan Carter and Star Wars Battlefront, I've wanted to give it a spin at home. Today I managed to create my first model with the help of this tutorial and I'd like to share the process with you all.

My tools:

A Samsung Galaxy S5 (any phone with a reasonable resolution and ISO control will do)
A plinth to place objects on (artistic relatives, enough said)
A log I found in the garden

The following software:

1. VisualSFM for photogrammetry: ccwu.me/vsfm/
2. CMVS plugin for Windows: di.ens.fr/cmvs/
3. Meshlab for remeshing: meshlab.sourceforge.net/
4. Blender for decimation and optimisation: https://blender.org/

The first step was to find a room with neutral light. Then, with my ISO level set to 800 and a steady hand, I took roughly 25 images per rotation in two rings (upper and middle):

image (Upper)
image (Middle)
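For anyone planning their shots, the two-ring pattern above can be sketched as a quick script that spaces the camera evenly around the object. This is purely an illustration of the geometry, not something the software needs; the 25-shots-per-ring count comes from the post, while the radius and ring heights are made-up placeholder values.

```python
import math

def ring_positions(n_shots, radius, height):
    """Camera positions for one ring of shots, evenly spaced around the object.

    The object is assumed to sit at the origin; every camera points inward.
    """
    positions = []
    for i in range(n_shots):
        theta = 2 * math.pi * i / n_shots  # angle of the i-th shot
        positions.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          height))
    return positions

# Roughly 25 images per rotation, in two rings (middle and upper),
# as described above. The 0.5 m radius/heights are invented numbers.
middle = ring_positions(25, radius=0.5, height=0.0)
upper = ring_positions(25, radius=0.5, height=0.5)

print(len(middle) + len(upper))  # 50 shots in total
```

The point of the even spacing is overlap: each photo shares plenty of surface with its neighbours, which is what the matching step feeds on.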

After transferring the files over to my PC, I ran VisualSFM, which matches common features across the photos, recovers the camera positions, and builds a sparse point cloud from them. This takes quite a while even with a decent amount of RAM:

image
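Under the hood, once the same surface point has been matched in two photos and the camera poses are known, its 3D position comes from triangulation: intersecting the two viewing rays. Here is a minimal pure-Python sketch of that idea (toy numbers, and nothing like VisualSFM's actual implementation) using the midpoint of the rays' closest approach:

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest approach of two camera rays.

    o1/o2: camera centres, d1/d2: viewing directions.
    Each ray is o + t*d; we solve for the t values that minimise the
    distance between the rays, then average the two closest points.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def mul(a, s): return [x * s for x in a]

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, mul(d1, t1))
    p2 = add(o2, mul(d2, t2))
    return mul(add(p1, p2), 0.5)

# Two cameras on the x axis, both looking at the point (0, 0, 5):
print(triangulate([-1, 0, 0], [1, 0, 5], [1, 0, 0], [-1, 0, 5]))
# → [0.0, 0.0, 5.0]
```

Every matched feature gets triangulated this way, which is why the point cloud thins out wherever the matching fails.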

Then, in MeshLab, you reconstruct a surface from those points and generate both the model and the texture file:

image

The final shot in Unity:

image

Positive things about this process:

1. The texture map and resulting model can look genuinely photorealistic (especially when combined with a decent post-processing stack).
2. It's considerably faster than modeling the object manually.
3. It can be done for free if you already own a phone.
4. It's photogrammetry! This is some really cool, cutting edge tech and the learning experience is very rewarding.

Negatives/lessons:

1. VisualSFM needs a lot of photos to create an accurate depiction of the object. My first try with a single ring-pass of about 30 photos wasn't enough and I had some serious gaps in the cloud. A second pass up top remedied this.
2. The amount of computing power needed to accomplish photogrammetry is fairly high. My PC was struggling at points, though it's no slouch.
3. Results can be a little messy if your phone is low resolution or doesn't have a high-quality CCD/CMOS sensor. Pixels end up looking like blobs and noisy detail comes out a little sloppy-looking. Ultimately, investing in a DSLR is probably the better long-term choice.
4. Lighting matters. A lot. Any spotlight shadows or self-shadowing will show up in the resulting albedo map.

Conclusion:

I'm absolutely going to start integrating this process into our modeling pipeline. Long term I'll likely buy a better camera and set up a permanent spot for capture. I'm especially intrigued by the possibilities of human figure capture and the time that could save in future, more advanced projects.

Give it a go if you feel adventurous!

Comments

  • Wow thanks a lot for sharing. Even tho I am sure we won't be using it in our games it would sure be awesome to give it a test in a small project!
  • Zaphire said:
    Wow thanks a lot for sharing. Even tho I am sure we won't be using it in our games it would sure be awesome to give it a test in a small project!
    It's definitely a work in progress - implementation becomes a different beast altogether, as your resources typically don't look good at all angles depending on how you photograph them. I can see it being used to create some beautiful stuff, though.
  • This looks quite cool! Will play around with it a bit.
    Thanks for the info jackshiels
    Thanked by 1jackshiels
  • This is great! I have some traditional sculpts that I wanted to capture (before melting the clay down and starting over with the next practice piece), but my experiments with 123D were pretty terrible, and I didn't need it enough to want to pay for one of the professional suites. Woot!
    Thanked by 1jackshiels
  • This is cool. I've wanted to play around with photogrammetry but never really looked into it; it's cool tech. Thanks for the links.
    Thanked by 1jackshiels
  • Could you try with a person?
  • This is great! I have some traditional sculpts that I wanted to capture (before melting the clay down and starting over with the next practice piece), but my experiments with 123D were pretty terrible, and I didn't need it enough to want to pay for one of the professional suites. Woot!
    It intrigues me that 123D is even a viable solution. It must be very low quality, given the limited amount of RAM and processing power (relatively speaking) on a smartphone.
    Sonicay said:
    Could you try with a person?
    I will, if I have some spare time tomorrow. But yes, I'm very keen to try this out next.
  • This is amazing, and the results are great!

    I have one question - you mentioned that lighting is important. How did you go about getting a consistent light level? Did you just pick a time of day when there were no clouds that could interfere with the natural light?
    Thanked by 1jackshiels
  • This is amazing, and the results are great!

    I have one question - you mentioned that lighting is important. How did you go about getting a consistent light level? Did you just pick a time of day when there were no clouds that could interfere with the natural light?
    I have a room with some very light cotton curtains, meaning the ambient level isn't too bright or too low. Then, I lit the opposing side of the object with a lamp (at a distance, to avoid garish brightness) to even it out where the window light was reflecting less. The reason the window is so amazingly bright in the photos above is that the manual ISO levelling encourages it; the actual physical light is a lot dimmer.

    Overall it's a pretty great location considering how bright it gets in SA right now :D
    Thanked by 1duncanbellsa

  • It intrigues me that 123D is even a viable solution. It must be very low quality, given the limited amount of RAM and processing power (relatively speaking) on a smartphone.
    It uploads to their cloud services and processes it there.

    @jackshiels, thanks a lot for the post! I didn't actually realize this was possible at home. I tried the laser method at home years ago, but that gives pretty terrible results.

    I've been looking for a way to scan a dress-form/mannequin for my wife, who lectures pattern design at the university here. The goal is to 3D print (or laser-cut cardboard layers for) a new one, which will be much cheaper than buying an actual one. This looks pretty great!

    I spent most of last night trying to get it to work, using various backgrounds and lighting, and trying both moving around the object and spinning it on a lazy Susan with a neutral backdrop. The big problem is that this dress-form has barely any distinguishing texture, and the front looks the same as the back. When I do the two-image compare I can see it is matching totally wrong points to each other. And this is with over 100 images around it with a DSLR.
    When I worked out that was the problem we added colored tape to the object:

    image

    This helped a massive amount, and the results I'm getting now are significantly better.
    It also seems to work better on a lazy Susan for this object.
    I'll carry on trying today in more natural light, and will let you know how it goes.

    The moral of the story for people who are going to try this out, though, is that the object needs distinguishing texture/shape. Each image is compared against every other image, and their sequence does not seem to matter. This means that if a view from the back looks the same as the front, it will likely match those and totally ruin the point cloud.
    Thanked by 1Tuism
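The failure mode described above (the back matching the front on a texture-poor object) is easy to reproduce with toy feature descriptors. Nearest-neighbour matching happily pairs identical-looking features from opposite sides, and the usual defence is Lowe's ratio test, which rejects a match when the runner-up candidate is almost as close as the winner. A rough sketch with made-up two-number "descriptors" (not real SIFT vectors):

```python
def ratio_test_match(desc, candidates, ratio=0.8):
    """Return the best-matching candidate index, or None if ambiguous.

    Lowe's ratio test: accept the nearest neighbour only when it is
    clearly better than the runner-up; otherwise the match is unreliable.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(range(len(candidates)), key=lambda i: dist(desc, candidates[i]))
    best, second = ranked[0], ranked[1]
    if dist(desc, candidates[best]) < ratio * dist(desc, candidates[second]):
        return best
    return None

# A texture-poor object: the "front" and "back" features look identical,
# so the match is ambiguous and gets rejected...
plain = [[0.5, 0.5], [0.5, 0.5]]
print(ratio_test_match([0.5, 0.49], plain))  # → None (front/back tie)

# ...while a piece of coloured tape makes one feature distinctive:
taped = [[0.5, 0.5], [0.9, 0.1]]
print(ratio_test_match([0.5, 0.49], taped))  # → 0 (unambiguous match)
```

This is exactly what the coloured tape buys you: it breaks the tie, so fewer good matches get thrown away and fewer wrong ones sneak in.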
  • (I can't figure out how to upload an image in the edit window, so sorry about the double-post)

    This is the best scan I got right at the end, and it was just a partial scan because I was tired of taking 100+ images and then it failing :P
    image

    The lighting is weird in MeshLab, and the texture is mis-aligned, but the important bit is that the area that I circled in white has extremely good/smooth shape. So that is encouraging!
  • This is really, really awesome :D Looking forward to seeing more experiments. Personally, I wonder what I can do with this to make low-poly stuff :) But of course I need to get to modelling at all first :P
    Thanked by 1jackshiels
  • roguecode said:
    (I can't figure out how to upload an image in the edit window, so sorry about the double-post)

    This is the best scan I got right at the end, and it was just a partial scan because I was tired of taking 100+ images and then it failing :P

    The lighting is weird in MeshLab, and the texture is mis-aligned, but the important bit is that the area that I circled in white has extremely good/smooth shape. So that is encouraging!
    Nice observation. I was also trying to scan a mostly black statuette and found the same issues arising. It essentially gave me three different models in one scan... not ideal. Differentiation is essential, and the markers can be edited out in post work.
  • A question, does this work best for smooth, organic forms, or does it work for angular, blocky forms, like lego? I'm guessing lego mecha is way too technically complex for this, but hey wouldn't it be cool if it works :P Maybe in disassembled bits?

    image
  • Tuism said:
    A question, does this work best for smooth, organic forms, or does it work for angular, blocky forms, like lego? I'm guessing lego mecha is way too technically complex for this, but hey wouldn't it be cool if it works :P Maybe in disassembled bits?

    image
    The scan naturally produces rounded, softer forms, since the reconstruction smooths out hard edges. However, you can always do some retopology in post to simplify it. You run the risk of losing some of the natural UV maps, though. That being said, retexturing with Substance materials is probably a better decision anyway.

  • Reflective surfaces don't capture well at all, so they are best avoided. I've used Agisoft PhotoScan before and it was a rather pain-free experience. The program isn't free, but you can request a 30-day trial, and a standard license costs $179, which is not too bad if you are going to use it a lot.

    I've attached some photos of the scans that we have done before. I also made a tutorial about it last year which is split into 4 different videos that cover scanning all the way to retopology and importing into your game engine so you can get good looking textures and low poly model counts.

    You can find the tutorial here:



    Attachments: rock_water.PNG, rocks.PNG, statue.PNG, wall.PNG
    Thanked by 1roguecode
  • OK, I tried again this afternoon with natural light, lots of photos, and more tape to give it reference points.
    It came out significantly better! I noticed in the images that the tape is reflecting, which @cryonetic says is bad above. So I may try a more matte tape.

    The texture is also still coming out vertically stretched, which doesn't actually matter for my use here, but any idea why it is happening @jackshiels?

    https://gfycat.com/AntiqueCleverBooby (yes, that really is the random link gfycat chose)
    image
  • Cryonetic said:

    Reflective surfaces don't capture well at all, so they are best avoided. I've used Agisoft PhotoScan before and it was a rather pain-free experience. The program isn't free, but you can request a 30-day trial, and a standard license costs $179, which is not too bad if you are going to use it a lot.

    I've attached some photos of the scans that we have done before. I also made a tutorial about it last year which is split into 4 different videos that cover scanning all the way to retopology and importing into your game engine so you can get good looking textures and low poly model counts.

    You can find the tutorial here:
    Beautiful stuff :)

    I checked out Photoscan before trying the free software. Definitely going to be purchasing it at some point soon. It seems to be a bit more integrated and powerful.
    roguecode said:
    OK, I tried again this afternoon with natural light, lots of photos, and more tape to give it reference points.
    It came out significantly better! I noticed in the images that the tape is reflecting, which @cryonetic says is bad above. So I may try a more matte tape.

    The texture is also still coming out vertically stretched, which doesn't actually matter for my use here, but any idea why it is happening @jackshiels?

    https://gfycat.com/AntiqueCleverBooby (yes, that really is the random link gfycat chose)
    image
    Try adding a second circle of shots from a 45-degree angle downwards (as with my log scan). It might need some vertical data to normalise. Looking good regardless.
  • This is so informative. Thanks to all for sharing. I'm so glad to see more of these types of posts recently. It seems a lot of members have been leveling up.
    Thanked by 1jackshiels
  • Yup, going to do an upper and lower ring, but want to get the centre working well first.
  • roguecode said:
    Yup, going to do an upper and lower ring, but want to get the centre working well first.
    Your centre may well fix itself if you add the upper ring. Just a shot in the dark...
  • roguecode said:
    Yup, going to do an upper and lower ring, but want to get the centre working well first.
    Your centre may well fix itself if you add the upper ring. Just a shot in the dark...
    It's a good point. But what I'm worried about is that the top view is going to have the same problems with the reflective tape. It's hard to see in my image above because of all the smoothing, but a lot of the areas of tape either have very few dots, or they're completely raised up/indented.
  • Sonicay said:
    Could you try with a person?
    An entire person would be awesome, but if you are an indie developer it is a bit of a chore; you can use programs like MakeHuman for the body instead.

    I will note that you can capture a face easily enough and then capture different expressions, though your captures have to be incredibly good for this to work properly. I have tried this in Blender: using the Shrinkwrap modifier, I set up my shapekeys/morph targets. Note that you would have to retopologize your model, shrinkwrap that onto each of the expression scans, and add shapekeys/morph targets for each one.

    You can then drive your animations with keyframes, or by putting some dots on your face and using those dots to drive the shapekeys/morph targets. The results need some clean-up, but you can get a very professional-looking facial capture doing this. It is still a very time-consuming process. Something else you can do is use an Xbox One Kinect (needs a USB adapter), as they are cheap enough (Cash Converters had a few for R600) because no one wants them, and then use a program called Faceshift.

    I don't think you can buy Faceshift anymore, but I was lucky enough to get a license before the company was bought by Apple. Anyway, have a look.


    I am not a character artist, so I keep looking for other ways to create characters easily.
    Thanked by 1jackshiels
  • roguecode said:
    roguecode said:
    Yup, going to do an upper and lower ring, but want to get the centre working well first.
    Your centre may well fix itself if you add the upper ring. Just a shot in the dark...
    It's a good point. But what I'm worried about is that the top view is going to have the same problems with the reflective tape. It's hard to see in my image above because of all the smoothing, but a lot of the areas of tape either have very few dots, or they're completely raised up/indented.
    Post here if you manage to fix it. I had literally zero scaling issues with the log, probably because the texture of the object is well defined and diverse.
  • I've been playing a lot with this and am getting amazing results. This software is just incredible.
    Here is me (well, creepy Mc DeadEyes) on first attempt.
    image
    The texture is a bit messy in some areas because I had bad lighting causing some images to be blurred, and for it to pick up a bit of noise. It even manages to handle hair relatively well, although it halved the length of my beard - presumably because I didn't do a low angle circle so it had no data for the bottom.
    The details it gets are insane - look at the creases in my shirt; those are not a normal map or just texture, they are really in the mesh.
    image
    Now to rig this and do silly things.

    p.s. I've managed to get a pretty perfect model of the dress-form I was doing. Easily good enough quality to 3D print a negative mold from it.

    p.s.2. Regarding using Kinect for the scanning, I tried with a Kinect 2 before and was never able to get things close to the detail of this. Although it is certainly faster and much more convenient.
  • Amazing!

    The issue I find that crops up after this is decimation. Often the retopology process kills the UV map.

    Remapping a lower poly model to fit the texture is a pain in the ass, so I'm still trying to find out how this can be done more effectively.
  • I think you guys should check out this presentation at GDC 2016 by DICE about photogrammetry done for Star Wars Battlefront.
    It is ~188MiB, BTW.
    Thanked by 1roguecode
  • ryan20fun said:
    I think you guys should check out this presentation at GDC 2016 by DICE about photogrammetry done for Star Wars Battlefront.
    It is ~188MiB, BTW.
    "Simple Kit (Canon 6D)" :D

    Thanks for the link! Looks like this is the video version.

    Thanked by 2jackshiels critic
  • Definitely watching that!
  • @roguecode you have just been immortalized! Those are amazing results, thanks for testing it out and sharing!
    What color background did you have?
  • @Zaphire this one was done by my wife walking around me on our balcony, and with terrible lighting.
    I found a perfect setting on my DSLR which takes about 5 photos per second by holding down the shutter. So you can just hold down and do a few loops.
    image

    I see Beautiful Desolation uses photogrammetry for some of the world.
  • Thanks a lot for showing me an example. I thought you maybe also had a white or single-coloured background. Amazing that it still works with no real lighting changes and a background with a lot of detail. Definitely going to give it a shot!