Tired of PhotoScan, I Just Want It to Work: 1/N

Subtitle:  Going to ridiculous lengths to understand what doesn’t work with PhotoScan.

I took two sample videos with the GoPro a few days ago, of Dan and Rider.  I want to print a color 3D model of them (Shapeways, small), just to see it done, and to have a simple process for doing it.   But it keeps not quite working, and it's annoying me.   So, here goes another night of experimentation.  What am I missing?

Here’s the precalibration stuff from Agisoft Lens, btw:

image

Check #1.  How much does it matter how close or how far apart the frames are?

image

I extracted frames at 60fps (the video is 120fps), so I have 1400 frames in a circle.  That's a lot of frames.

Here are sample reconstructions using just two frames – varying the number of frames apart.  I’m using 4mm as the focal length, but I will play with that in the next section.   Process:  Align on High, Dense cloud on High.    The picture on the right is what Frame # 0 looks like; the screen capture is “look through Frame 0”, zoom out to bring the head in frame, and rotate left (model’s right) about 45 degrees.

 

1 frame apart (cannot build dense cloud)
2 frames (cannot build dense cloud)
4 frames (cannot build dense cloud)
8 frames
image
16 frames
image
32 frames
image
64 frames
image
128 frames (cannot build dense cloud)
All 8 frames
image
Above view, to see how the cameras are aligned
image

Clearly, more pictures are not the answer.    The best one was 0 to 32, which, at 1400 frames per circle, works out to about an 8 degree difference.
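As a sanity check on the frame spacing: assuming the ~1400 extracted frames cover one full 360-degree orbit (an assumption; the walk-around may not have been a perfect circle), the angle between any two frames is easy to compute:

```python
# Angle swept between two extracted frames, assuming the ~1400 frames
# cover one full 360-degree orbit (an assumption; the actual walk-around
# may have covered less than a full circle).
FRAMES_PER_ORBIT = 1400

def degrees_apart(frame_gap, frames_per_orbit=FRAMES_PER_ORBIT):
    """Degrees of arc between two frames that are frame_gap apart."""
    return 360.0 * frame_gap / frames_per_orbit

for gap in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"{gap:4d} frames apart ~= {degrees_apart(gap):5.2f} degrees")
```

By this estimate, the 32-frame gap that reconstructed best is roughly 8 degrees of separation.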

Check #2: Trying every 32 frames, how does adding more pictures improve quality?

This time I’m moving the camera up so I can see the “jaggies” around the edges.

3 Frames combined (0,32,64):
image
4 frames combined:
image
6 frames combined:
image
7 frames combined:
image

The same 7 frames, this time with the wall in view, trying to line up the roof and the wall:

image

Check #3: Focal Length

Trying to solve for the wall jagginess.

2mm: image
4mm: image
6mm: image
8mm: cannot build dense cloud
5mm: image
3mm: image
4.5mm: image

Okay, so .. 4.5 is wonky, but 4 and 5 are okay?   It’s very hard from this angle to see any difference in quality between 3, 4, 5, and 6.   2, 7, and 8 are clearly out**

Maybe another angle:

3mm:image 4mm:image
5mm:image 6mm:image

** Or maybe 7 is not quite out yet.  Turns out, I can “align photos” once and get one result, then try aligning again and get a different result.   So I retried 8 a couple of times over, and I got this:

image

None of this is making any sense to me.   I guess I’ll stick with 4mm, for lack of a better idea.  Do you see any patterns in this? Moving on.

Check #4:  Low, Medium, High Accuracy?

I’ve bumped it up to 17 cameras (32 frames apart).  Testing for “Align Photos” accuracy (Low, Medium, High) + Dense Cloud accuracy + Depth Filtering

High, High, Moderate
image
Low, Low:   Cannot build dense cloud.
Medium, Medium, Moderate
image

High, Medium, Moderate
image
High, High, Mild:  (Mild took around 3 minutes)
image
High, Ultra-High, Aggressive:  (12 minutes)
image
Close up of H/UH/A:
image

Aggressive is definitely the way to go; however, there are still way too many floaters!

image

Ah, but this image might clear that up a bit.   It has to do with the velocity at which I was moving the camera: I slowed down, so several of the frames are not very far apart.   I might need a different approach to frame selection.
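One way around the uneven camera speed would be to keep a frame only once the camera has actually moved far enough, rather than taking every Nth frame. A minimal sketch, where `path` is hypothetical (in practice the positions would come from the estimated camera centers after a rough first alignment):

```python
# Sketch of velocity-aware frame selection: keep a frame only once the
# camera has moved at least min_distance from the last kept frame.
# The positions below are hypothetical; real ones would come from
# PhotoScan's estimated camera centers after a rough alignment.

def select_spread_frames(positions, min_distance):
    """Greedily keep frame indices whose camera moved at least
    min_distance (same units as positions) since the last keeper."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    kept = [0]  # always keep the first frame
    for i in range(1, len(positions)):
        if dist(positions[i], positions[kept[-1]]) >= min_distance:
            kept.append(i)
    return kept

# Toy example: the camera slows down mid-path, so frames bunch up there.
path = [(0.0, 0, 0), (1.0, 0, 0), (2.0, 0, 0),
        (2.2, 0, 0), (2.4, 0, 0), (3.4, 0, 0)]
print(select_spread_frames(path, 1.0))  # bunched-up frames get skipped
```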

Test #5: Compass Points Approach

imageimage

I will attempt to bisect angles and derive frames in that manner.  Note that I’m not going to try the full 360 – I suspect that the subject moves a bit, so it can’t connect 359 back to 0;  instead, I’m hoping to get a nice 90 degree profile, and maybe merge chunks to get it down to a single model.   So let’s try to get a set of frames from the image on the left (000) to the image on the right (400).

  • 0,200,400 – Aligns 2/3
  • 0,100,200,300,400 – Aligns 5/5, but fails to build dense cloud
  • 0,50,100,…,350,400:

imageimage
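The bisection scheme above can be sketched as a small frame-number generator: start with the endpoints, then keep adding the midpoints of every interval. The frame numbers 0 and 400 follow the post; the stopping step size is an assumption:

```python
# Sketch of the compass-points / bisection frame ordering: endpoints
# first, then midpoints of successively smaller intervals, until
# adjacent frames are min_step apart. Frame range 0..400 follows the
# post; the min_step cutoff is an assumption.

def bisection_order(start, end, min_step):
    """Return frame numbers, endpoints first, then by repeated bisection."""
    order = [start, end]
    step = (end - start) // 2
    while step >= min_step:
        for f in range(start + step, end, step * 2):
            order.append(f)
        step //= 2
    return order

print(bisection_order(0, 400, 50))
# endpoints first, then 200, then 100/300, then 50/150/250/350
```

The nice property is that each prefix of the list is itself a usable, evenly-spread set of frames, so alignment can be retried with progressively more frames without re-picking.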

I have to cut this blog post short here – it looks like I have WAY too many images, and Live Writer is freaking out.   Doing a quick edit pass, and then posting this as part 1/N.

Pepakura Lucy next to the Real One

image

This time, I printed things out on cardstock.   Bad move – cardstock doesn’t bend very well, so I had to pre-bend every fold, and even so, the thickness of the paper caused some things to shift out of place over time.

I think it would be better to make certain pieces from cardstock and the rest from regular paper; however, you would have to “open” the model just right to get the pieces where they’re needed without it being obvious. 

Also, the resulting structure was just too complicated to put together.  There’s no way I can get the red piece on the right glued into her head correctly.   Or, there’s a way, but it’s too frustrating to keep at it.

One thing I can point out though:  In Pepakura, I said, “Model Height=160mm” .. and yes, the final model matches the original fairly well.   That’s a win.

I’ve decided I’m not spending any more time on this particular model, too many other fun projects to play with.

More Photoscan Experiments

I need to wrap up this subject soon.  But every time I visit a thread, I seem to open several others.   This is about point and model reconstruction in Agisoft Photoscan.

My Goal

image

I want to be able to create realistic busts, and maybe even poses, of people, in full color, that I could 3D color print (via Shapeways, etc.).  (Here’s a professional company doing the same thing: Twinkind, $300, including a visit to a studio.) In order to do this, I need to grab the frames very quickly, so I keep trying to use a video camera and then pull frames out of the video.  That’s not working too well for me.

Original, for reference, is on the right.  Lucille Ball?

Two Attempts Contrasted:   (Dense Cloud, because it shows imperfections well)

iPhone 5s, 26 pictures:

image

  • The subject is in focus
  • There’s a lot of resolution which leads to accuracy in 3D space
  • If you miss a shot from an angle, you are S.O.L.  For example, I don’t have her right ear.
  • Takes a while to take the pictures.  This one took 2 minutes and 30 seconds; I had to click to focus each shot, and the subject must not move.
  • I don’t think I used calibration (taking a picture of a grid and using Agisoft Lens to generate a camera model) for this; it could get what it needed from EXIF data, and it did a very good job with that.

JVC Everio HM1 1080p Video Camera, 0:47 of video

image

  • Extracted at 5 frames per second to 247 frames.
  • I actually did two passes around the subject, so really about 20 seconds would have sufficed.
  • Lower resolution on the pictures, so the 3D is blockier (I think).  And more depth errors.
  • A lot more processing time.
  • I did use Agisoft Lens to calibrate this.

GoPro Hero 3 Black 1080p Medium FOV

  • I don’t yet have decent results with this.  I think I need to do some calibration with it first, then I might retry.   Different post…

iPhone 5s Video

  • I also need to go back and try this using the quality estimation technique listed below to find the best frames.   However, the data rate is much lower than the JVC, and I’m pretty sure artifacts will be a problem.

Red 5 Diamond

  • Heheh.  I have a film friend who raves about this camera.  Nope, I don’t have one.  Smile 

More on Using Video

The workflow is not obvious, so here’s what I’ve found. My apologies for all the paid software here, but dude, I have things to solve, and I don’t have the time to fart around hunting for free solutions.

  • Use Adobe Premiere Pro, Export to PNG, “Entire Sequence” to extract the frames.   It takes a while.  (click to zoom in to these screenshots).  This gives much better frames than using VLC to automatically extract frames.  
    • image
  • Load all the frames into Agisoft PhotoScan.  But you can’t process them like this – I once tried 1183 frames of a Prius, and after a weekend, it was still only 10% done.
  • On the Photos pane, select “Details”.  I didn’t even know that existed till I read the advanced documentation… 
    • image
  • select all (click, Ctrl/A), right click, “calculate image quality”
  • Sort by Quality; skip over the best 100 frames or so, then select the rest and disable the cameras (or delete them)
    • image
  • Run the first pass over the 100 or so “good” cameras that are left, and then look at where it placed the cameras.   (I would show a screenshot, but I FORGOT TO SAVE.)  Aberrations should stand out (not be in a smooth line of frames) and should be easy to grab and exclude.
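The “calculate image quality, keep the best” step can be approximated outside PhotoScan. PhotoScan’s own quality metric isn’t documented here, so this sketch uses a simple sharpness proxy (mean absolute horizontal gradient on a grayscale image given as a list of rows); blurry frames score low:

```python
# Sketch of "calculate image quality, keep the best N" using a simple
# sharpness proxy: mean absolute horizontal gradient. This stands in
# for PhotoScan's (undocumented here) quality metric; blurry frames
# have weak gradients and score low.

def sharpness(image):
    """Mean absolute horizontal gradient of a grayscale image,
    given as a list of rows of pixel intensities."""
    total, count = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            count += 1
    return total / count if count else 0.0

def keep_best(frames, n):
    """Return the indices of the n sharpest frames, in frame order."""
    ranked = sorted(range(len(frames)),
                    key=lambda i: sharpness(frames[i]), reverse=True)
    return sorted(ranked[:n])

# Toy example: frame 1 is "blurry" (flat); frames 0 and 2 have edges.
frames = [[[0, 255, 0, 255]], [[128, 128, 128, 128]], [[0, 200, 0, 200]]]
print(keep_best(frames, 2))
```

In practice the rows would come from decoded video frames rather than hand-written lists, but the keep-the-sharpest logic is the same.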

Conclusion

I don’t yet have a solution for what I want, but I do have a solution for a model good enough to create a pepakura from.     Maybe in the future I can have an array of cameras.. that’s what the pros like TwinKind use.  I’m not a pro.   

I might need to try this with “sport” mode + burst (click click click) using a DSLR camera.  

Frustrated about 3D Stuff

General feeling of frustration tonight (time of writing: Friday 5/9).  Hard to put into words.  But I could draw it.  (might need to zoom in)

3d printing roadmap

  • The Green things are things that I have figured out.
  • The Red things are things that definitely have not worked for me.
  • The Yellow things are things I want to be able to do.
  • The Orange things are things that I haven’t yet figured out.   They depend on each other, there is usually a chain.
  • I forgot about Minecraft Prints.  Those fall under “Color print that I might could afford”, pretty much.

Latest Frustration Tonight

It turns out an Agisoft PhotoScan project file (.PSZ) looks like a ZIP file, with a doc.xml that has the below structure – so I could write code to hunt through 1420+ images and select the best 50, spread out.

image

The code looked like this (sorry, no LINQ, going old school here, command line parameters, etc)

image

Except that the PSZ is not really a Zip file.  When I tried to stuff the modified doc.xml back into the .PSZ file, it came out as corrupted.    Dead end?  Retry with 7-Zip?  Extra metadata?  An older compression format?

I guess what I have is code that tells me which cameras I should enable.   That’s workable, except that I need to grab frames/image[@path] so that a human could identify each one. 
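For what it’s worth, here’s a minimal sketch of the doc.xml round-trip using Python’s zipfile module. The `camera` element and its `id`/`enabled` attributes are assumptions about doc.xml’s structure, and since a naively re-zipped archive came out corrupted, writing to a fresh file and diffing against the original is the safer experiment:

```python
# Sketch of poking at a .psz container with the stdlib zipfile module.
# The <camera id=... enabled=...> shape is a GUESS at doc.xml's real
# structure -- verify by hand first. Output goes to a fresh in-memory
# archive rather than back into the original file.
import io
import zipfile
import xml.etree.ElementTree as ET

def disable_cameras(psz_bytes, keep_ids):
    """Read doc.xml out of a .psz (zip) blob, set enabled="false" on
    every camera whose id is not in keep_ids, and return a new blob."""
    with zipfile.ZipFile(io.BytesIO(psz_bytes)) as zin:
        root = ET.fromstring(zin.read("doc.xml"))
    for cam in root.iter("camera"):          # hypothetical element name
        if cam.get("id") not in keep_ids:
            cam.set("enabled", "false")
    out = io.BytesIO()
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zout:
        zout.writestr("doc.xml", ET.tostring(root))
    return out.getvalue()
```

If PhotoScan rejects the rewritten archive, comparing its central directory against the original (compression method, extra fields) would be the next thing to check.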

Future: Maybe I could write code to read the video, figure out the quality of the frames, and only extract the best frames?

Also: the $3000 version of Photoscan has a Python Scripting interface.  Sorry, I’d rather buy a color 3D printer than that.

However, good news: it looks like I might finally have a lens profile for a GoPro Hero3 in 720p (x120FPS) Wide mode.  I had to play with the focal length; for some reason, numbers like 0.1mm and 2mm work way better than the 7mm that folks advertise.   More to be proven later.

Slicing and Dicing with OpenSCAD

In my quest to make a cool coaster, I wanted a way to slice up a model so that each face could be printed, well, “face-up”, so I don’t run into problems with overhangs and supports and stuff gumming up the works.  I would then glue the model together later.   (In the case of coasters, I can also swap filaments and have the design “pop” on each side)…

In the process, I learned some OpenSCAD:

  • F5 = “quick”, F6 = slow and build
  • No such thing as “export this object”; it just builds, and then you can choose to export everything.
  • Variables are lexically scoped, i.e., more like constants.  There are some dynamic variables as well.
  • I had to remember some algebra.
  • I applied some D.R.Y.

Here’s the result as applied to http://www.thingiverse.com/thing:60728 – first in “ShowMe()” mode:

image

And then SliceEverything() mode without the full fledged CSG operations:

image

And then SliceEverything() in the way that I could export an STL:

Yeah, that crashed.   It was too complicated.  I had to take out the // SliceEverything part; instead, here’s just the front and an ear, exported to STL and viewed in NetFabb.

imageimageimageimage

Note: It’s NOT fast when doing the full build (F6).  It also crashes less if run from the command line – apparently the crash is during rendering to the screen?

Show Me The Code

Since it’s getting a bit long, I’m linking to GitHub below; but this is what it looks like, approximately:

imageimageimage

link to code at github

Update:  I’ve tried to use this approach for several things now, and … it’s very fragile.  So fragile that I’ve managed exactly one successful print with it.  Almost every model is “too complex” and fails at render.  I might need to try a different language.. something that is rock solid. 

Photogrammetry: What Not To Do: Dark or Small?

Trying to create a model of a Panasonic Insta-something camera. (My wife loves it):

2014-04-24 01.26.36

image

This is what I got (sparse, dense):

image

image

Looking from above, there might be some lens distortion – the edges of the room do not appear to be square:

image

In general, though, I think the problem is that a) the model is too dark, and b) the model is too small. 

I decided to switch models for something lighter, and take the pictures closer up.

UnClogged 3D Printer: Part 5: Back in Business

We’re back in business!

The last time I had tried to print something, I noticed a bunch of smoke coming out of “C” in the diagram below.  And things were dripping too freely.  I had the thermistor at position “A” below, just how it’s supposed to be.

image

Thanks to acquiring a digital thermometer capable of measuring temperatures up to 200C, I was able to read my own temperatures – and sure enough, see sample “Fred” in the table below.  The real temperature in the barrel was much higher than was being read.  Why?  This makes no sense!

Well, luckily I had watched a video on how to put a print head back together, and it turns out that I had not looped some Kapton tape around the nozzle first; I had put the thermistor directly on there and then covered it up with tape.      I tried it the other way – one loop around with tape, then the thermistor, and then another layer of tape – yielding sample “Barney”.   (I also put the fiberglass insulation (tattered, but still in one piece) on it like they suggested.)

Sample   Thermistor location   Thermistor temp (C)   Reference thermometer temp (C)
Fred     A / direct            130                   170
Barney   A / Kapton            130                   125
Scooby   B                     130                   125

Much better!   We’re in business.. almost.   While printing, the tape gave way and the thermistor fell off. 

Why, I wonder?   My guess is that with one loop of Kapton tape underneath – and Kapton tape transfers heat well – the thermistor gets to sample the average heat from all around the nozzle, rather than just one side of it.  Or something like that.   Or maybe there’s just a bad spot on the nozzle and I was unlucky.

I tried taping it back on two more times.  No luck.  I seem to have done something to the print head (the wires are shorter now), and after much cursing and screaming, I gave up.

Instead, I put the thermistor in at location “B” – inside the fiberglass insulation, which held it snugly in place – yielding sample “Scooby”.   We still seemed to be in business.   And here it is, printing:

image

The URL to the above camera is https://www.dropcam.com/p/sunnywiz, although there is no guarantee that it will be pointed at a geeky subject at the time this post posts.

(Those things in the picture, btw, are thingamabobs (technical term) that Jason needs in his Arcade machine build)

Yay! So now that it’s back to working again, what now?  Hmmm..

For future time historians, the list of all posts on the clog:   http://geekygulati.com/tag/3d-printing+clog/

Clogged 3D Printer: Part 4

Jason did a great job on the nozzle, but after I had it up and printing (nicely), I noticed a trail of smoke coming from the PEEK barrel.  (Why the heck is it named that?)

image

I think I have the barrel too far up into the PEEK thingy, and the heat is heading up there and melting things.

Also, with the thermistor dialed in for 140, the plastic was dripping freely … I suspect that things are MUCH hotter than they are registering at the moment.

I might need a secondary source to determine real temperature.  I wonder if my IR gun can get close enough for an accurate reading.

Clogged 3D Printer: part 3

@jstill is the Man.

image

Here’s what he did to unclog the nozzle.

  1. Put it on the Grill, for a while, at 700 degrees or more.     At the end of this, it still had black stuff all over it.  (Much better than my idea of the oven.. no wife being annoyed at me for stinking up the house)
  2. Tried to remove the black stuff with ____ (I didn’t quite catch it) and Mineral Spirits.. didn’t work.
  3. Had a Eureka moment, realized it was all carbon, so he used Hoppe’s No. 9 Gun Bore Cleaning Solvent. A single wipe, and it all came off.
  4. ‘Tis beautiful.

I think for good measure, I’m going to pick up some guitar strings and floss it out as well.   Then put it on and test.    This time, definitely tighter on the barrel .. I think it was loose; when I removed it, there was filament where filament ought not be.

Excited. 

Dan!

Here be a picture of Dan with his head in his hand.

2014-04-25 16.53.50

This was done using a video of his head, broken out into frames.. etc etc .. and then an X-Acto knife and glue.  The head was rendered with 200 faces, and then further simplified (delete a bunch of complicated faces, fill holes).

Closeup:

image

I wish there was a way I could print this on plastic that would shrink about 5%, so that I could put a texture around a 3D print of the same thing.