Experimenting with home photogrammetry (3D scanning) on a budget ($free)
Hey hey all,
I've recently been quite intrigued by the concept of photogrammetry. After seeing its beautiful implementation in The Vanishing of Ethan Carter and Star Wars Battlefront, I've been wanting to give it a spin at home. Today I managed to create my first model with the help of this tutorial, and I'd like to share the process with you all.
My tools:
A Samsung Galaxy S5 (any phone with a reasonable resolution and ISO control will do)
A plinth to place objects on (artistic relatives, enough said)
A log I found in the garden
The following software:
1. VisualSFM for photogrammetry: ccwu.me/vsfm/
2. CMVS, the dense-reconstruction plugin used with VisualSFM on Windows: di.ens.fr/cmvs/
3. MeshLab for remeshing: meshlab.sourceforge.net/
4. Blender for decimation and optimisation: https://blender.org/
The first step was to find a room with neutral light. Then, with my ISO level set to 800 and a steady hand, I took roughly 25 images per rotation in two rings (upper and middle):
(Upper) (Middle)
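If it helps anyone plan their own capture, here's a tiny Python sketch (my own illustration, not from the tutorial) that prints the turntable stops for two rings of 25 shots each:

```python
# Hypothetical planning helper: 25 shots per ring means one shot
# every 360 / 25 = 14.4 degrees.
SHOTS_PER_RING = 25
RINGS = ["upper (camera angled ~45 degrees down)", "middle (object level)"]

step = 360.0 / SHOTS_PER_RING
for ring in RINGS:
    print(f"Ring: {ring}")
    for i in range(SHOTS_PER_RING):
        print(f"  shot {i + 1:2d} at {i * step:5.1f} degrees")
```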
After transferring the files over to my PC, VisualSFM matches features between the photos, estimates the camera position for each shot, and builds a point cloud of common points, which takes quite a while even with a decent amount of RAM:
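If you'd rather script this step than click through the GUI, VisualSFM also has a command-line mode. Here's a minimal sketch, assuming the sfm+pmvs switch described in the VisualSFM documentation; the paths are placeholders for your own setup:

```python
import subprocess

# Placeholder paths - adjust for your machine.
VSFM = r"C:\vsfm\VisualSFM.exe"
PHOTO_DIR = r"C:\scans\log\photos"
OUTPUT_NVM = r"C:\scans\log\log.nvm"

# "sfm+pmvs" runs the sparse reconstruction and then the dense
# CMVS/PMVS step, mirroring the GUI workflow.
subprocess.run([VSFM, "sfm+pmvs", PHOTO_DIR, OUTPUT_NVM], check=True)
```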
Then, in MeshLab you reconstruct a surface from those points and generate both the model and its texture file:
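This step can also be automated: apply the reconstruction once by hand in the MeshLab GUI, save the filter script (Filters > Show current filter script > Save Script), and replay it with meshlabserver. A rough sketch, with placeholder file names:

```python
import subprocess

# Placeholder paths; poisson.mlx is a filter script saved from the
# MeshLab GUI after running a surface reconstruction once by hand.
MESHLABSERVER = r"C:\Program Files\VCG\MeshLab\meshlabserver.exe"

subprocess.run([
    MESHLABSERVER,
    "-i", "log_dense.ply",  # dense point cloud from VisualSFM
    "-o", "log_mesh.ply",   # reconstructed mesh
    "-s", "poisson.mlx",    # saved filter script to replay
    "-om", "vc", "vn",      # keep vertex colours and normals
], check=True)
```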
The final shot in Unity:
Positive things about this process:
1. The texture map and resulting model can look strikingly photorealistic (especially when combined with a decent post-processing stack).
2. It's considerably faster than modeling the object manually.
3. It can be done for free if you already own a phone.
4. It's photogrammetry! This is some really cool, cutting-edge tech, and the learning experience is very rewarding.
Negatives/lessons:
1. VisualSFM needs a lot of photos to create an accurate depiction of the object. My first try with a single ring-pass of about 30 photos wasn't enough and I had some serious gaps in the cloud. A second pass up top remedied this.
2. The amount of computing power needed to accomplish photogrammetry is fairly high. My PC was struggling at points, though it's no slouch.
3. Results can be a bit messy if your phone is low resolution or doesn't have a high-quality sensor (CCD/CMOS): fine detail turns into blobs and sensor noise makes surfaces look sloppy. Ultimately, investing in a DSLR is probably the better long-term choice.
4. Lighting matters. A lot. Any spotlight shadows or self-shadowing will show up in the resulting albedo map.
Conclusion:
I'm absolutely going to start integrating this process into our modeling pipeline. Long term I'll likely buy a better camera and set up a permanent spot for capture. I'm especially intrigued by the possibilities of human figure capture and the time that could save in future, more advanced projects.
Give it a go if you feel adventurous!
Comments
Thanks for the info, jackshiels.
I have one question - you mentioned that lighting is important. How did you go about getting a consistent light level? Did you just pick a time of day when there were no clouds that could interfere with the natural light?
Overall it's a pretty great location considering how bright it gets in SA right now :D
@jackshiels, thanks a lot for the post! I didn't actually realize this was possible at home. I tried the laser method at home years ago, but that gave pretty terrible results.
I've been looking for a way to scan a dress form/mannequin for my wife, who lectures in pattern design at the university here. The goal is to 3D print (or laser-cut cardboard layers for) a new one, which would be much cheaper than buying an actual one. This looks pretty great!
I spent most of last night trying to get it to work, using various backgrounds and lighting setups, and trying both moving around the object and spinning it on a lazy Susan against a neutral backdrop. The big problem is that this dress form has barely any distinguishing texture, and the front looks the same as the back. When I use the two-image compare view I can see it matching totally wrong points to each other. And this was with over 100 images around it, taken with a DSLR.
When I worked out that this was the problem, we added colored tape to the object:
This helped a massive amount, and the results I'm getting now are significantly better.
It also seems to work better on a lazy Susan for this object.
I'll carry on trying today in more natural light, and will let you know how it goes.
The moral of the story for people who are going to try this out, though, is that the object needs distinguishing texture/shape. Each image is compared against every other image, and the sequence of them does not seem to matter. This means that if a view from the back looks the same as the front, it will likely match those together and totally ruin the point cloud.
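If you want a quick sanity check before committing to another 100+ image session, you can count feature matches between a front shot and a back shot yourself; lots of confident matches between views that shouldn't overlap is a warning sign. A rough sketch using OpenCV (not part of the VisualSFM workflow, and the file names are placeholders):

```python
import cv2

# Load a "front" and a "back" shot of the object.
img_a = cv2.imread("front.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("back.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
_, des_a = sift.detectAndCompute(img_a, None)
_, des_b = sift.detectAndCompute(img_b, None)

# Lowe's ratio test: keep only clearly-best matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
        if m.distance < 0.75 * n.distance]

# Many "good" matches between opposite views means the solver will
# probably confuse them - add texture (coloured tape) and reshoot.
print(f"{len(good)} confident matches between front and back")
```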
This is the best scan I got right at the end, and it was just a partial scan, because I was tired of taking 100+ images and then having it fail :P
The lighting is weird in MeshLab and the texture is misaligned, but the important bit is that the area I circled in white has extremely good, smooth shape. So that is encouraging!
Reflective surfaces don't capture well at all, so they're best avoided. I've used Agisoft PhotoScan before and it was a rather pain-free experience. The program isn't free, but you can request a 30-day trial, and a standard license costs $179, which is not too bad if you are going to use it a lot.
I've attached some photos of the scans we have done before. I also made a tutorial about it last year, split into four videos that cover everything from scanning through retopology and importing into your game engine, so you can get good-looking textures and low poly counts.
You can find the tutorial here:
It came out significantly better! I noticed in the images that the tape is reflecting, which @cryonetic says above is bad, so I may try a more matte tape.
The texture is also still coming out vertically stretched, which doesn't actually matter for my use here, but any idea why it's happening, @jackshiels?
https://gfycat.com/AntiqueCleverBooby (yes, that really is the random link gfycat chose)
I checked out PhotoScan before trying the free software. Definitely going to be purchasing it at some point soon; it seems to be a bit more integrated and powerful. Try adding a second circle of shots angled 45 degrees downwards (as with my log scan). It might need some vertical data to normalise. Looking good regardless.
I will note that you can capture a face easily enough and then capture different expressions, though your captures have to be incredibly good to get this to work properly. I have tried this in Blender: using the Shrinkwrap modifier, I set up my shape keys/morph targets. Note that you would have to retopologize your model, then wrap that onto each of the expression scans and add a shape key/morph target for each one.
You can then drive your animations with keyframes, or by putting some dots on your face and using those dots to drive the shape keys/morph targets. The results need some cleanup, but you can get a very professional-looking facial capture this way. It is still a very time-consuming process. Something else you can do is use an Xbox One Kinect (it needs a USB adapter), since they're cheap because no one wants them (Cash Converters had a few for R600), together with a program called Faceshift.
I don't think you can buy Faceshift anymore, but I was lucky enough to get a license before the company was bought by Apple. Anyway, have a look.
I am not a character artist, so I keep looking for other ways to create characters easily.
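For the Blender-inclined, here's roughly what that shrinkwrap-to-shape-key step looks like as a script. It's only a sketch: the object names are made up, and the apply-as-shape-key operator assumes Blender 2.90 or newer:

```python
import bpy

# Assumed scene setup: "BaseHead" is the retopologized neutral head,
# "SmileScan" is a raw photogrammetry scan of a smiling expression.
base = bpy.data.objects["BaseHead"]
scan = bpy.data.objects["SmileScan"]

# Wrap the clean topology onto the expression scan.
mod = base.modifiers.new(name="WrapToSmile", type='SHRINKWRAP')
mod.target = scan
mod.wrap_method = 'NEAREST_SURFACEPOINT'

# Bake the wrapped result down as a shape key on the base mesh.
bpy.context.view_layer.objects.active = base
bpy.ops.object.modifier_apply_as_shapekey(modifier=mod.name)
```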
Here is me (well, creepy Mc DeadEyes) on first attempt.
The texture is a bit messy in some areas because bad lighting caused some images to blur and it picked up a bit of noise. It even manages to handle hair relatively well, although it halved the length of my beard - presumably because I didn't do a low-angle circle, so it had no data for the bottom.
The details it gets are insane - look at the creases in my shirt. Those are not a normal map or just texture; they are really in the mesh.
Now to rig this and do silly things.
p.s. I've managed to get a pretty much perfect model of the dress form I was scanning - easily good enough quality to 3D print a negative mold from.
p.s.2. Regarding using a Kinect for the scanning: I tried with a Kinect 2 before and was never able to get anything close to this level of detail, although it is certainly faster and much more convenient.
The issue I find crops up after this is decimation: the retopology process often kills the UV map.
Remapping a lower-poly model to fit the texture is a pain in the ass, so I'm still trying to find out how this can be done more effectively.
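For what it's worth, the decimation itself is easy to script in Blender; it's keeping the UVs intact that hurts. A rough sketch with a made-up object name:

```python
import bpy

# Placeholder name: a dense scan mesh imported from MeshLab.
scan = bpy.data.objects["LogScan"]

mod = scan.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1  # keep roughly 10% of the faces

# Collapse decimation interpolates the UVs, but seams still drift;
# for clean results, bake the texture from the original high-poly
# mesh onto a properly retopologized low-poly one instead.
bpy.context.view_layer.objects.active = scan
bpy.ops.object.modifier_apply(modifier=mod.name)
```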
It is ~188 MiB, BTW.
Thanks for the link! Looks like this is the video version.
What color background did you have?
I found a perfect setting on my DSLR that takes about five photos per second while holding down the shutter, so you can just hold it down and do a few loops.
I see Beautiful Desolation uses photogrammetry for some of the world.