More Photoscan Experiments

I need to wrap up this subject soon, but every time I close one thread, I seem to open several others.   This post is about point-cloud and model reconstruction in Agisoft PhotoScan.

My Goal

I want to be able to create realistic busts, and maybe even poses, of people, in full color, that I could 3D color print (via Shapeways, etc).  (Here’s a professional company doing the same thing: Twinkind, $300, including a visit to a studio.)  In order to do this, I need to grab the frames very quickly, so I keep trying to use a video camera and then pull frames out of the video.  That’s not working too well for me.

Original, for reference, is on the right.  Lucille Ball?

Two Attempts, Contrasted (dense cloud, because it shows imperfections well)

iPhone 5s, 26 pictures:

image

  • The subject is in focus
  • There’s a lot of resolution, which leads to accuracy in 3D space
  • If you miss a shot from an angle, you are S.O.L.  For example, I don’t have her right ear.
  • Takes a while to take the pictures.  This one took 2 minutes and 30 seconds: I had to tap to focus each shot, and the subject must not move the whole time.
  • I don’t think I used calibration for this (taking pictures of a grid and using Agisoft Lens to generate a camera model); PhotoScan could pull the camera model from EXIF data, and it did a very good job of that.

JVC Everio HM1 1080p Video Camera, 47 seconds of video

image

  • Extracted at 5 frames per second, for 247 frames.
  • I actually did two passes around the subject, so really, about 20 seconds of video would have sufficed.
  • Lower resolution on the pictures, so the 3D is blockier (I think), and there are more depth errors.
  • A lot more processing time.
  • I did use Agisoft Lens to calibrate this.

GoPro Hero 3 Black 1080p Medium FOV

  • I don’t yet have decent results with this.  I think I need to do some calibration with it first, then I might retry.   Different post…

iPhone 5s Video

  • I also need to go back and try this using the quality-estimation technique listed below to find the best frames.   However, the data rate is much lower than the JVC’s, and I’m pretty sure compression artifacts will be a problem.

Red 5 Diamond

  • Heheh.  I have a film friend who raves about this camera.  Nope, I don’t have one.

More on Using Video

The workflow is not obvious, so here’s what I’ve found.  My apologies for all the paid software here, but dude, I have things to solve, and I don’t have the time to fart around hunting for free solutions.

  • Use Adobe Premiere Pro: Export to PNG, “Entire Sequence”, to extract the frames.   It takes a while.  (Click to zoom in to these screenshots.)  This gives much better frames than using VLC to automatically extract them.
    • image
  • Load all the frames into Agisoft PhotoScan.  But you can’t process them all like this: I once tried 1183 frames of a Prius, and after a weekend, it was still only 10% done.
  • On the Photos tab, select “Details”.  I didn’t even know that existed until I read the advanced documentation…
    • image
  • Select all (click, Ctrl+A), right-click, “Calculate Image Quality”
  • Sort by quality; skip over the best 100 or so frames, then select all the rest and disable those cameras (or delete them)
    • image
  • Run the first pass over the 100 or so “good” cameras that are left, and then look at where it placed the cameras.   (I would show a screenshot, but I FORGOT TO SAVE.)  Aberrations should stand out (cameras not in a smooth line of frames) and should be easy to grab and exclude.
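PhotoScan’s quality metric is the right filter, but when the frame count is absurd (that 1183-frame Prius), a dumb pre-thinning pass before loading anything saves a lot of waiting. A minimal sketch; the folder names are made up, not from my actual setup:

```python
import shutil
from pathlib import Path

def thin_frames(src, dst, keep_every=10):
    """Copy every Nth frame (in filename order) from src to dst."""
    frames = sorted(Path(src).glob("*.png"))
    Path(dst).mkdir(parents=True, exist_ok=True)
    kept = frames[::keep_every]
    for f in kept:
        shutil.copy2(f, Path(dst) / f.name)
    return len(kept)

# e.g. thin_frames("frames_all", "frames_thinned", keep_every=12)
# turns ~1200 extracted frames into ~100 for the first alignment pass
```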

Conclusion

I don’t yet have a solution for what I want, but I do have a solution for a model good enough to build a Pepakura papercraft from.     Maybe in the future I can have an array of cameras… that’s what the pros like Twinkind use.  I’m not a pro.

I might need to try this with “sport” mode + burst (click-click-click) on a DSLR camera.

Frustrated about 3D Stuff

General feeling of frustration tonight (time of writing: Friday 5/9).  Hard to put into words.  But I could draw it.  (might need to zoom in)

3d printing roadmap

  • The Green things are things that I have figured out.
  • The Red things are things that definitely have not worked for me.
  • The Yellow things are things I want to be able to do.
  • The Orange things are things that I haven’t yet figured out.   They depend on each other, there is usually a chain.
  • I forgot about Minecraft Prints.  Those fall under “Color print that I might could afford”, pretty much.

Latest Frustration Tonight

It turns out an Agisoft PhotoScan project file (.PSZ) looks like a ZIP file containing a doc.xml with the below structure, so I could write code to hunt through 1420+ images and select the best 50, spread out.

image

The code looked like this (sorry, no LINQ; going old school here, command-line parameters, etc.)

image

Except that the .PSZ is not really a ZIP file.  When I tried to stuff the modified doc.xml back into the .PSZ file, it came out corrupted.    Dead end?  Retry with 7-Zip?   Extra metadata?  An older compression format?
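For the record, here’s the standard-library way to round-trip a zip in Python: rebuild the archive fresh instead of patching it in place (in-place patching is the usual way to corrupt one). I can’t promise PhotoScan will accept the result, and the filenames in the usage comment are made up:

```python
import zipfile

def replace_member(archive_in, archive_out, member, new_bytes):
    """Rebuild a zip archive, swapping one member's contents.
    Every other entry is copied through with its original metadata."""
    with zipfile.ZipFile(archive_in) as zin, \
         zipfile.ZipFile(archive_out, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = new_bytes if item.filename == member else zin.read(item.filename)
            zout.writestr(item, data)

# replace_member("joel.psz", "joel_fixed.psz", "doc.xml", modified_xml_bytes)
# where modified_xml_bytes is whatever edited doc.xml you built
```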

I guess what I have is code that tells me which cameras I should enable.   That’s workable, except that I need to grab frames/image[@path] so that a human could identify each one.
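Sketching that enable-list idea in Python (my actual code was command-line C#, but it’s the same shape). The `image` elements with `path` attributes match what I saw in my doc.xml; treat that query as an assumption and verify it against your own file:

```python
import zipfile
import xml.etree.ElementTree as ET

def list_image_paths(psz_path):
    """Read doc.xml out of a .PSZ (zip) project and return the
    per-frame image paths, so a human can identify each camera."""
    with zipfile.ZipFile(psz_path) as z:
        root = ET.fromstring(z.read("doc.xml"))
    return [img.get("path") for img in root.iter("image") if img.get("path")]
```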

Future: maybe I could write code to read the video, figure out the quality of the frames, and only extract the best ones?
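The usual cheap sharpness proxy for that is variance of the Laplacian: sharp frames have strong local contrast, motion-blurred ones don’t. A pure-Python sketch of just the scoring; a real version would decode frames from the video with something like OpenCV and run this per frame:

```python
def sharpness(gray):
    """Variance of the Laplacian over a 2D list of grayscale values.
    Higher score = sharper frame; blurred frames score near zero."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor Laplacian at (x, y)
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

Rank the frames by this score and keep the top 100 or so, same as the PhotoScan workflow above.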

Also: the $3000 version of PhotoScan has a Python scripting interface.  Sorry, I’d rather buy a color 3D printer than that.

However, good news: it looks like I might finally have a lens profile for a GoPro Hero3 in 720p (x120fps) Wide mode.  I had to play with the focal length; for some reason, numbers like 0.1mm and 2mm work way better than the 7mm that folks advertise.   More to be proven later.

Slicing and Dicing with OpenSCAD

In my quest to make a cool coaster, I wanted a way to slice up a model so that each face could be printed, well, “face-up”, so I don’t run into problems with overhangs and supports and stuff gumming up the works.  I would then glue the model together later.   (In the case of coasters, I can also swap filaments and have the design “pop” on each side.)

In the process, I learned some OpenSCAD:

  • F5 = quick preview; F6 = slow, full render (required before export)
  • No such thing as “export this object”; it renders everything, and then you can choose to export the whole thing.
  • Variables are lexically scoped and fixed at compile time, i.e., more like constants.  There are some dynamic ($) variables as well.
  • I had to remember some algebra.
  • I applied some D.R.Y.
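The heart of the slicing approach is just intersecting the model with a box, so every piece gets a flat cut face to print on. Roughly, in OpenSCAD; the module name and the 200mm bounds are mine, not from the linked code:

```
// slice: keep the part of the child model between z = lo and z = hi,
// then drop it so the lower cut face sits on the print bed (z = 0)
module slice(lo, hi) {
    translate([0, 0, -lo])
        intersection() {
            children();
            translate([-100, -100, lo]) cube([200, 200, hi - lo]);
        }
}

// usage: slice(0, 10) import("thing60728.stl");
```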

Here’s the result as applied to http://www.thingiverse.com/thing:60728 – first in “ShowMe()” mode:

image

And then SliceEverything() mode, without the full-fledged CSG operations:

image

And then SliceEverything() in the way that I could export an STL:

Yeah, that crashed.   It was too complicated.  I had to take out the // SliceEverything call; instead, here’s just the front and an ear, exported to STL and viewed in NetFabb.

imageimageimageimage

Note: it’s NOT fast when doing the full build (F6).  It also crashes less if run from the command line; apparently the crash is during rendering to the screen?

Show Me The Code

Since it’s getting a bit long, I’m linking to GitHub below; but this is what it looks like, approximately:

imageimageimage

link to code at github

Update:  I’ve tried to use this approach for several things now, and… it’s very fragile.  So fragile that I have not successfully used it to create a print.  Almost every model is “too complex” and fails at render.  I might need to try a different language… something that is rock solid.

Photogrammetry: What Not To Do: Dark or Small?

Trying to create a model of a Panasonic Insta-something camera (my wife loves it):

2014-04-24 01.26.36

image

This is what I got (sparse, dense):

image

image

Looking from above, there might be some lens distortion – the edges of the room do not appear to be square:

image

In general, though, I think the problem is that a) the model is too dark, and b) the model is too small. 

I decided to switch models for something lighter, and take the pictures closer up.

UnClogged 3D Printer: Part 5: Back in Business

We’re back in business!

The last time I had tried to print something, I noticed a bunch of smoke coming out of “C” in the diagram below, and things were dripping too freely.  I had the thermistor at position “A” below, just how it’s supposed to be.

image

Thanks to acquiring a digital thermometer capable of measuring temperatures up to 200C, I was able to take my own readings, and sure enough (see sample “Fred” in the chart below), the real temperature in the barrel was much higher than what was being read.  Why?  This makes no sense!

Well, luckily I had watched a video on how to put a print head back together, and it turns out that I had not looped Kapton tape around the nozzle first; I had put the thermistor directly on there and then covered it up with tape.      I tried it the other way: one loop around with tape, then the thermistor, and then another layer of tape, yielding sample “Barney”.   (I also put the fiberglass insulation, tattered but still in one piece, back on like they suggested.)

Sample  | Thermistor Location | Thermistor Temp (C) | Reference Thermometer Temp (C)
Fred    | A / direct          | 130                 | 170
Barney  | A / Kapton          | 130                 | 125
Scooby  | B                   | 130                 | 125

Much better!   We’re in business… almost.   While printing, the tape gave way and the thermistor fell off.

Why, I wonder?   My guess is that with one loop of Kapton tape underneath (Kapton tape transfers heat well), the thermistor gets to sample the average heat from all around the nozzle rather than just the one spot it touches.  Or something like that.   Or maybe there’s just a bad spot on the nozzle and I was unlucky.
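For what it’s worth, the firmware typically turns thermistor resistance into temperature with the B-parameter equation, and that curve is steep enough that a modest resistance error from bad thermal contact becomes tens of degrees. A sketch, assuming the B-parameter model with values typical of RepRap-era 100k thermistors (not necessarily mine):

```python
import math

def thermistor_temp_c(r_ohms, r0=100_000.0, t0_c=25.0, beta=3950.0):
    """B-parameter model: 1/T = 1/T0 + ln(R/R0)/B, temperatures in kelvin.
    r0 is the resistance at reference temperature t0_c."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohms / r0) / beta
    return 1.0 / inv_t - 273.15
```

A thermistor that is poorly coupled to the nozzle runs cooler than the barrel, so its resistance stays high and the computed temperature reads low, which is exactly the Fred-versus-Barney difference in the table.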

I tried taping it back on two more times.  No luck.  I seem to have done something to the print head (the wires are shorter now), and after much cursing and screaming, I gave up.

Instead, I put the thermistor in at location “B”, inside the fiberglass insulation, which held it snugly in place, yielding sample “Scooby”.   We still seemed to be in business.   And here it is, printing:

image

The URL to the above camera is https://www.dropcam.com/p/sunnywiz, although there is no guarantee that it will be pointed at a geeky subject at the time this post posts.

(Those things in the picture, btw, are thingamabobs (technical term) that Jason needs in his Arcade machine build)

Yay!  So now that it’s back to working again, what now?  Hmmm…

For future time historians, the list of all posts on the clog:   http://geekygulati.com/tag/3d-printing+clog/

Clogged 3D Printer: Part 4

Jason did a great job on the nozzle, but after I had it up and printing (nicely), I noticed a trail of smoke coming from the PEEK barrel.  (Why the heck is it named that?)

image

I think I have the barrel too far up into the PEEK thingy, and the heat is heading up there and melting things.

Also, with the thermistor dialed in for 140, the plastic was dripping freely… I suspect that things are MUCH hotter than they are registering at the moment.

I might need a secondary source to determine real temperature.  I wonder if my IR gun can get close enough for an accurate reading.

Clogged 3D Printer: part 3

@jstill is the Man.

image

Here’s what he did to unclog the nozzle.

  1. Put it on the grill for a while, at 700 degrees or more.     At the end of this, it still had black stuff all over it.  (Much better than my idea of the oven: no wife being annoyed at me for stinking up the house.)
  2. Tried to remove the black stuff with ____ (I didn’t quite catch it) and Mineral Spirits.. didn’t work.
  3. Had a eureka moment: realized it was all carbon, so he used Hoppe’s No. 9 Gun Bore Cleaning Solvent.  A single wipe, and it all came off.
  4. ‘Tis beautiful.

I think, for good measure, I’m going to pick up some guitar strings and floss it out as well.   Then put it back on and test.    This time, definitely tighter on the barrel… I think it was loose; when I removed it, there was filament where filament ought not be.

Excited. 

Dan!

Here be a picture of Dan with his head in his hand.

2014-04-25 16.53.50

This was done using a video of his head, broken out into frames, etc., and then an X-Acto knife and glue.  The head was rendered with 200 faces, and then further simplified (delete a bunch of complicated faces, fill holes).

Closeup:

image

I wish there were a way I could print this on plastic that would shrink about 5%, so that I could put a texture like this around a 3D print of the same thing.
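Going the other direction would also work: shrink the mesh itself about 5% before printing, so the full-size paper shell fits around the plastic. Scaling a Wavefront OBJ is just multiplying the vertex lines; a quick sketch (the filename in the usage comment is hypothetical):

```python
def scale_obj(lines, factor=0.95):
    """Scale the vertex ('v x y z') lines of a Wavefront OBJ.
    Faces, normals, and texture coordinates pass through untouched."""
    out = []
    for line in lines:
        if line.startswith("v "):
            parts = line.split()
            coords = [f"{float(c) * factor:g}" for c in parts[1:4]]
            out.append("v " + " ".join(coords))
        else:
            out.append(line)
    return out

# with open("dan.obj") as f:
#     scaled = scale_obj(f.read().splitlines(), 0.95)
```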

Clogged 3D Printer Part 2

Well, we tried.  And it’s better.  But it ain’t fixed.

Here are the first bits of print that came out of it.  You can see the dark stuff, which was either the old filament or burnt material:

image

However, the stream of filament coming out the end keeps varying.  Sometimes it’s good, and then other times it fizzles out:

2014-04-24 05.12.00

Conclusion: There’s something still clogging the bottom thing.    More work needs to be done.

3D Models.. from Camera to Paper

I’ve been playing with “structure from motion” / photogrammetry apps for a bit.  Lots of stuff still to figure out, but here’s a possibly interesting path.   Note that none of this is optimal (wrong camera, wrong print medium, etc.), but it’s fun.

1.  Start with a Video of walking around The Dude Abiding.

[youtube=http://www.youtube.com/watch?v=XkrIkdK0kjI&w=448&h=252&hd=1]
He owned the copy of Lebowski that I watched, so I associate him with the Dude

(His name is Joel, he’s one of my coworkers, he also teaches kids how to code)

2.  Using VLC: play, then Shift-S to snap; play some more, then Shift-S to snap again; repeat to extract a bunch of frames.

2014-04-18 14_32_47-F__2014_3dmodels_joel

3. Load them into Agisoft PhotoScan ($179; or you could go with 123D Catch from Autodesk, which is free but possibly less accurate?).   Run “Align Cameras”, get a point cloud.

2014-04-23 16_27_44-joel.psz — Agisoft PhotoScan

4. Crop in the model box, hit Build Dense Cloud.  This is where it starts to look really interesting.

2014-04-23 16_28_03-joel.psz — Agisoft PhotoScan

5.  Build a Model .. Build Texture.  This is what the result looks like at “medium” resolution:

2014-04-18 14_32_30-Untitled_ — Agisoft PhotoScan (demo)

Not the greatest, but that’s because of the camera I used (video compression = artifacting = bumps) plus camera lens distortion etc.

6. However, that doesn’t work for what we’re doing next.  I don’t own a $10k color 3d printer.  So instead.. paper!

Build a model at lower resolutions:

2014-04-23 16_28_23-Build Mesh2014-04-23 16_28_49-joel.psz — Agisoft PhotoScan

Don’t forget to close holes! (tools menu) …  build texture…

2014-04-23 16_29_00-Build Texture2014-04-23 16_29_51-joel.psz_ — Agisoft PhotoScan

7. Export the model.  This is the step you pay the $179 for in Agisoft PhotoScan.

2014-04-23 16_30_16-Export Model - Wavefront OBJ

Note that this saves both a .JPG and a .PNG texture (JPG on the left).    I might be mistaken, but I think they were saved at the same time.   Actually, the .JPG might be from the overall “build me a texture” step, and the .PNG from the “export my model to Wavefront” step.   It’s a little freaky.

joeljoel

8.  Load the .OBJ into Pepakura.  ($39; I totally plan to buy it.)

2014-04-23 16_31_47-joel - Pepakura Designer 3

9. Unfold, and Print:

2014-04-23 16_31_55-joel - Pepakura Designer 3

10.  Print it out on cardstock, cut, and fold ..

I haven’t done this part yet.

These screenshots were from my second attempt.  Here is Joel holding my first attempt, which, at 500 faces, was a bit too complex:

2014-04-23 14.25.40