Photoscan: Using Blender as an intermediary touch-up tool

I’m glad that I have a solution (use a good camera and lens) for capturing 3D models, but I’m still trying to mature the process to the point where I could run a scan-and-print booth at a flea market. (Not that I’m going to; I’d just like to be that good at it.)

The Problem: No Noggin

[image]

This is a DSLR scan of Dan, with a Medium dense point cloud and Moderate depth filtering (Ultra High didn’t really add to the detail). The top of the head is missing, and there’s a large seam in the back that is not filled. Also, the bottom isn’t a good base for a printed model.

The Solution:  Blender!

[image] [image]

[image]

It is important to leave the model where it comes in; I have to export it from that exact spot in order for Photoscan to pick it back up. Luckily, I can select the object and zoom to it using “.” on the keyboard. (Blender shortcut keys depend on which window is active; it has to be the 3D View window for “.” to work.)
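For repeatability, that round trip can be scripted. A minimal sketch, assuming the Blender 2.7-era Python API and hypothetical file names; the point is that nothing between import and export moves the object:

```python
import bpy

# Import the scan exactly where Photoscan put it.
bpy.ops.import_scene.obj(filepath="dan_scan.obj")   # hypothetical path

# ... edit the mesh here, but never move, rotate, or scale the object ...

# Export from the same spot so Photoscan can line it back up.
bpy.ops.export_scene.obj(filepath="dan_scan_fixed.obj")
```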

I then remove stuff from the model and add in some more surfaces (a NURBS surface is shown here), to get the model closer to what I want:

a: [image]  b: [image]  c: [image]

d: [image]  e: [image]  f: [image]

  • a: I use perspective view + border select + delete vertices to clear away an even cut, which I will then patch with a surface.
  • b, c: I create a NURBS surface (as a separate object) with “Endpoint U/V” enabled; I fit the outer borders first (so that there is a little gap showing) and then move the inner points in (these control how much it sticks out); I try to ensure there’s always a gap between the surface and the actual model (it’s easier to join later).
  • d: For the base, I create a square (a cylinder would have been better?), move the edges in, and then delete the top surface. I subdivide until it has the right “resolution”.
  • I then convert the NURBS surfaces to meshes, select the meshes, subdivide them to match resolution, and join everything into a single mesh (see the sketch after this list).
  • e: Sometimes, to avoid “helmet head” effects, I have to delete some of the edge faces of the former NURBS surfaces by hand.
  • f: Stark from Farscape.
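The convert-and-join step can also be done from Blender’s Python console. A rough sketch, assuming the 2.7-era API; the object names are hypothetical:

```python
import bpy

# Convert the selected NURBS patch objects to meshes.
bpy.ops.object.convert(target='MESH')

# Select the patches and the scan, make the scan active, and join.
scan = bpy.data.objects["dan_scan"]            # hypothetical names
for name in ("patch_head", "patch_back", "base"):
    bpy.data.objects[name].select = True       # 2.8+ uses select_set()
scan.select = True
bpy.context.scene.objects.active = scan        # 2.8+ uses the view layer
bpy.ops.object.join()                          # everything into one mesh
```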

Another touch-up I can do in Blender is to smooth skin surfaces using “Sculpt” mode (a scripted alternative is sketched below the images):

[image] [image]
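As a scripted alternative to hand sculpting (my suggestion, not part of the original workflow), a Smooth modifier can take some of the lumpiness out. A minimal sketch, assuming the 2.7-era Python API and a hypothetical object name:

```python
import bpy

obj = bpy.data.objects["dan_scan"]   # hypothetical object name
mod = obj.modifiers.new(name="SmoothSkin", type='SMOOTH')
mod.factor = 0.5       # strength of each smoothing pass
mod.iterations = 5     # more iterations = smoother (and blobbier)
```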

Wrapping it up

I have to join the pieces together by hand *somewhere*, so that the holes have boundaries. Try to keep each hole in one axis. Notice how I subdivide the larger mesh so that the points line up. Don’t forget to recalculate the normals so the faces point outward when done (a sketch follows the images).

[image] [image] [image]
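A minimal sketch of that cleanup pass, assuming Blender 2.7-era operators: weld the seam vertices, then make the normals consistent and outward-facing.

```python
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Weld vertices that sit (nearly) on top of each other along the seams.
bpy.ops.mesh.remove_doubles(threshold=0.001)

# Recalculate normals so every face points outward.
bpy.ops.mesh.normals_make_consistent(inside=False)

bpy.ops.object.mode_set(mode='OBJECT')
```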

 

I can then export the .OBJ back to disk and bring it into Agisoft Photoscan, where I finish the hole-filling process. (I’ve tried filling the holes in Blender, but I run into problems with vertex normals / faces being inverted.)

[image] [image] [image]

And then, we can build a texture, and Dan’s noggin won’t be left out.

And Now for Something Completely Different:

The other direction to go is to use a completely different mesh and see what happens:

Suzanne the monkey:

[image]

A sphere scaled to head shape:

[image]

We could also do a Minecraft version of Dan by joining several rescaled cubes and then building a texture around that (a quick sketch of the idea is below). That would make for an excellent Pepakura model. However, I’m out of time on this blog post (1h24m so far), so I’ll save that for another day.
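For what it’s worth, here’s how the cube-joining might start in Blender’s Python API (2.7-era assumed); all the block proportions are invented for illustration:

```python
import bpy

# (name, center location, scale) -- proportions are made up.
blocks = [
    ("head",  (0, 0, 1.75), (0.25, 0.25, 0.25)),
    ("torso", (0, 0, 1.10), (0.30, 0.15, 0.40)),
    ("legs",  (0, 0, 0.35), (0.25, 0.15, 0.35)),
]

for name, location, scale in blocks:
    bpy.ops.mesh.primitive_cube_add(location=location)
    cube = bpy.context.object
    cube.name = name
    cube.scale = scale

# Join all the cubes into a single blocky mesh.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.join()
```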

(after editing:  1h40m taken)

Getting Photoscan to Work: 3/N

Good news!  Using a Nikon camera with a 55mm lens, I got pretty good (printable) results!

[image]
Subject:  Lamont Adams

Here’s how it went down:

  • I borrowed Dan Murphy’s Nikon DSLR camera. He had several lenses; I chose the 55mm lens (not a prime; I just didn’t zoom it).
  • I sat the subject in the center of an empty room, beneath four fluorescent lights. Sitting keeps the subject stiller, and since I am taller, I can get more detail of their hair.
  • I started taking pictures from the subject’s back, so that the pictures across the front are contiguous/seamless.
  • I took 3 extra pictures from the front at a lower angle, to get nose and chin details.
  • [image]
  • I took lens calibration pictures starting from as far away as possible, and got a lens profile. I did not fit k4 (the highest-order radial distortion coefficient).
  • [image]
  • [image]
  • I used High accuracy matching, a Medium point cloud, and a low polygon count mesh (20k).
  • Minor editing of the point cloud before meshing and closing holes (deleting floaters and fixing hair).
  • Export the mesh to Wavefront .OBJ format; use Blender to rotate it and convert it to STL; export the STL at 100x scale (see the sketch after this list).
  • Netfabb to clean up the STL and scale it precisely (100mm height).
  • 6 hours to print it.
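The Blender step of that pipeline could look roughly like this; a sketch assuming the 2.7-era Python API, a hypothetical file name, and an STL exporter that accepts a global_scale option (older versions may not, in which case scale the object by 100 before exporting):

```python
import bpy
from math import radians

bpy.ops.import_scene.obj(filepath="lamont.obj")    # hypothetical path
obj = bpy.context.selected_objects[0]

# Stand the scan upright; the 90-degree X rotation is a guess at the
# axis mismatch and may differ for your export settings.
obj.rotation_euler.x = radians(90)

# Export at 100x so the slicer sees sensible millimeter dimensions.
bpy.ops.export_mesh.stl(filepath="lamont.stl", global_scale=100.0)
```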

I have ordered a small full-color print from Shapeways to see what that looks like. It should be here sometime in June.

He looks kinda like the KFC Colonel, what with that chin growth and all.

Addendum:   Instead of a 4/N post, I’ll just put him here:  Yellow Dan the Pirate Man with those dark sunken eyes.

[image]

Tired of PhotoScan, I Just Want it to Work: 2/N

Since the last blog post, I finished going around the object and selecting enough cameras to get a decent dense point cloud going. I did this in four chunks; I was trying to cover a quarter of the model with each chunk:

[image] 9 cameras, 1553 points

[image] 17 cameras, 3260 points

[image] 10 cameras, 2332 points

[image] 11 cameras, 2575 points

Let’s Align Some Chunks, Shall We?

Align Chunks, Camera-Based, 1-Camera Overlap

Nothing. Maybe it needs more cameras? Since the alignment is in three dimensions, it would presumably need three cameras minimum. I’ll come back to this with more cameras in the overlap.

Align Chunks, Point-Based

[image]

Not quite. And as far as I can tell, there are no manual alignment controls anywhere (in the non-professional version).

I could take this out to MeshLab and try to align it there; however, I wouldn’t be able to map a texture afterwards, so I have to solve this in PhotoScan.

Add More Cameras, Align Chunks, Camera-Based, 3-Camera Overlap

While I’m at it, I also add a few more cameras in some of the gaps that I see. And this is what I get:

[image]

Cue the Darth Vader Imperial March.

I can tell it got the cameras correct. However, my fear is realized: I think that as I walked around the subject, he moved slightly. Or my distance from the subject was not constant, so I ran into lens calibration issues, and the resulting object was not mapped at the correct size. Either way, what we have now is a FrankenDan.

[image]

I cannot resist going all the way through to a model, texturing this beauty, and learning how to do an animation in Blender at the same time:

[youtube=http://www.youtube.com/watch?v=-V4Yam2yCU8&w=448&h=252&hd=1]
Franken Dan Murphy

Attempting a Single Chunk with the Same 55 Cameras

[image]

What is happening is that either (a) the model moved, or (b) I changed distance from the model (and the camera alignment is wonky), and it just cannot get the math to work. FrankenDan is actually a better representation of the reality that was captured.

So… I don’t think there is a solution here with a GoPro Hero3 walking around a subject. There are several directions I could go, though:

  • Start from the back of the subject, so that the seam ends up in the back.
  • Put markers on the person’s back (so that there is something to “fix” on), or give them a “garland” of some sort.
  • Use the 120 fps mode to capture the model quicker, but I’d need a reliable way to spin the camera around the subject without invoking motion blur. (Hula-hoop mount?)
  • Use a better camera (not a GoPro), perhaps a DSLR, with a ring laid out for distance from the subject (see the teaser solution below).
  • Use multiple cameras! (So many people have had success with this, and they don’t have to be good cameras either.)

Teaser Solution:  DSLR

In comparison, here is me, taken with a DSLR camera and a 50mm fixed (prime) lens. It’s not quite printable, as the (shiny? homogeneous?) back of my head failed to capture. There’s definitely something to be said for not using a GoPro.

[image] [image]

Tired of PhotoScan, I Just Want it to Work: 1/N

Subtitle:  Going to ridiculous lengths to understand what doesn’t work with PhotoScan.

I took two sample videos with the GoPro a few days ago, of Dan and Rider. I want to print a color 3D model of them (Shapeways, small), just to see it done and to have a simple process for doing it. But it keeps not quite working, and it’s annoying me. So, here goes another night of experimentation. What am I missing?

Here’s the precalibration stuff from Agisoft Lens, btw:

[image]

Check #1.  How much does it matter how close or how far apart the frames are?

[image]

I extracted frames at 60 fps (the video is 120 fps), so I have 1400 frames in a circle. That’s a lot of frames.
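For reference, here’s how the frame thinning could be scripted. This is my sketch using OpenCV (an assumption, since any frame extractor would do) and a hypothetical file name:

```python
import cv2

STEP = 32   # keep every 32nd frame (the spacing that tests best below)

cap = cv2.VideoCapture("dan_gopro.mp4")   # hypothetical file name
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % STEP == 0:
        cv2.imwrite("frame_%04d.jpg" % index, frame)
        saved += 1
    index += 1
cap.release()
print("kept %d of %d frames" % (saved, index))
```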

Here are sample reconstructions using just two frames, varying the number of frames apart. I’m using 4mm as the focal length, but I will play with that in the next section. Process: Align on High, dense cloud on High. The picture on the right is what frame #0 looks like; the screen capture is “look through frame 0”, zoomed out to bring the head in frame and rotated left (the model’s right) about 45 degrees.

 

1 frame apart: cannot build dense cloud
2 frames apart: cannot build dense cloud
4 frames apart: cannot build dense cloud
8 frames apart: [image]
16 frames apart: [image]
32 frames apart: [image]
64 frames apart: [image]
128 frames apart: cannot build dense cloud
All 8 frames: [image]
Above view, to see how the cameras are aligned: [image]

Clearly, more pictures is not the answer. The best pair was frames 0 and 32, which was about a 6-degree difference.

Check #2: Trying every 32 frames, how does adding more pictures improve quality?

This time I’m moving the camera up so I can see the “jaggies” around the edges.

3 frames combined (0, 32, 64): [image]
4 frames combined: [image]
6 frames combined: [image]
7 frames combined: [image]

The same 7 frames, this time with the wall in view, trying to line up the roof and the wall:

[image]

Check #3: Focal Length

Trying to solve for the wall jagginess.

2mm: [image]   4mm: [image]
6mm: [image]   8mm: cannot build dense cloud
5mm: [image]
3mm: [image]   4.5mm: [image]

Okay, so… 4.5mm is wonky, but 4mm and 5mm are okay? It’s very hard from this angle to see any difference in quality between 3, 4, 5, and 6mm. 2, 7, and 8mm are clearly out.**

Maybe another angle:

3mm: [image]   4mm: [image]
5mm: [image]   6mm: [image]

** Or maybe 7mm is not quite out yet. It turns out that I can “align photos” once and get one result, then try aligning again and get a different result. So I retried 8mm a couple of times over, and I got this:

[image]

None of this is making any sense to me.   I guess I’ll stick with 4mm, for lack of a better idea.  Do you see any patterns in this? Moving on.

Check #4:  Low, Medium, High Accuracy?

I’ve bumped it up to 17 cameras (32 frames apart). I’m testing “Align Photos” accuracy (Low, Medium, High) + dense cloud accuracy + depth filtering.

High, High, Moderate: [image]
Low, Low: cannot build dense cloud
Medium, Medium, Moderate: [image]
High, Medium, Moderate: [image]
High, High, Mild (Mild took around 3 minutes): [image]
High, Ultra High, Aggressive (12 minutes): [image]
Close-up of High/Ultra High/Aggressive: [image]

Aggressive is definitely the way to go; however, there are still way too many floaters!

[image]

Ah, but this image might clear that up a bit. It has to do with the velocity at which I was moving the camera: I slowed down, hence several of the frames are not very far apart. I might need a different approach for frame selection.

Check #5: Compass Points Approach

[image] [image]

I will attempt to bisect angles and derive frames in that manner (a sketch of this frame picker is below). Note that I’m not going to try the full 360 degrees; I suspect that the subject moves a bit, so it can’t connect frame 359 back to frame 0. Instead, I’m hoping to get a nice 90-degree profile, and maybe merge chunks to get it down to a single model. So let’s try to get a set of frames from the image on the left (000) to the image on the right (400).

  • 0, 200, 400: aligns 2/3.
  • 0, 100, 200, 300, 400: aligns 5/5, but fails to build dense cloud.
  • 0, 50, 100, …, 350, 400:

[image] [image]
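Here’s a small sketch of that bisection frame picker in plain Python, using the endpoints from this test:

```python
def bisection_frames(first, last, rounds):
    """Endpoints first, then repeatedly bisect every interval."""
    frames = {first, last}
    step = last - first
    for _ in range(rounds):
        step //= 2
        if step == 0:
            break
        frames.update(range(first + step, last, step * 2))
    return sorted(frames)

print(bisection_frames(0, 400, 1))   # [0, 200, 400]
print(bisection_frames(0, 400, 2))   # [0, 100, 200, 300, 400]
print(bisection_frames(0, 400, 3))   # [0, 50, 100, ..., 350, 400]
```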

I have to cut this blog post short here; it looks like I have WAY too many images, and Live Writer is freaking out. I’m doing a quick edit pass, and then posting this as part 1/N.