Using OpenJSCAD to Print a House (1/N)

Since before I got my 3D printer, I’ve wanted to make a scale replica of my house.   I tried doing it with Legos once – it was cost prohibitive.

I came up with a workflow where I drew out the entire house in SweetHome3D and then exported it, but the exported mesh had manifold problems and other issues.

So I did one of the floors in Sketchup.  However, that was a painful task – and the resulting model was still too big (I want 1:24 or 1:36 scale).  I’d have to slice up the model to print individual pieces, which meant cutting them in such a way that they joined together with some kind of self-aligning joint.

I was about to try it again, but the sheer amount of detail that I had to go through kept holding me back.  I wanted a formula.

A recent blog post brought my attention to OpenJSCAD, and an idea formed in my head:

Convert THIS: image
Into THIS: image

I had tried to do something similar in OpenSCAD before; however, because that language doesn’t have procedural elements, I ran into all kinds of problems.  Fresh new start!

So I set about to do it.

As you can see by this screenshot, I succeeded.

The code is here: https://github.com/sunnywiz/housejscad.  It took me about 2 hours.  You can see in the commit log that I committed every time I figured out even a small piece of the puzzle.

UPDATE 2/1/2015:  the code as of this blog post is tagged with “Post1”, ie https://github.com/sunnywiz/housejscad/releases/tag/Post1  — the code has since evolved. Another blog post is in the works.  I guess I could “release to main” every time I do a blog post.  Heh.

The Code

  • Provide a translation from each map character to a 1x1x10 primitive anchored at 0,0,0.
  • Convert the template into a 2D array, so that I can look for chunks of repeated characters.
  • Walk the pattern, looking for chunks.  Rather than get fancy, I made a list of all chunk sizes from 6×6 down to 2×1, and check for each in turn.
    • There are more efficient ways to do this, but IAGNI.
  • If a chunk is found, generate the primitive for that chunk, scale it up, and add it to the list.  “Consume” the characters that the chunk just covered.
  • When all done, union everything together.  (A rough sketch of the scan follows.)
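Here is a boiled-down sketch of that scan.  This is not the exact code from the repo – the grid, the chunk sizes, and the helper names are made up for illustration – but it shows the shape of the idea using the OpenJSCAD globals cube() and union():

```javascript
// Simplified sketch, not the repo code.  'W' = wall cell, '.' = empty.
// Assumes the OpenJSCAD globals cube() and union(), plus CSG's .translate().

function wallChunk(w, h) {
  // a w x h footprint, 10 units tall, anchored at 0,0,0
  return cube({ size: [w, h, 10] });
}

function chunkFits(grid, r, c, w, h) {
  for (var i = r; i < r + h; i++)
    for (var j = c; j < c + w; j++)
      if (!grid[i] || grid[i][j] !== 'W') return false;
  return true;
}

function consume(grid, r, c, w, h) {
  for (var i = r; i < r + h; i++)
    for (var j = c; j < c + w; j++)
      grid[i][j] = '.';            // mark the cells as already generated
}

function main() {
  var template = [ "WWWW",
                   "W..W",
                   "WWWW" ];
  // convert the template into a 2D array of characters
  var grid = template.map(function (row) { return row.split(''); });

  // chunk sizes to try, biggest first (6x6 down to 2x1 in the real thing)
  var chunkSizes = [ [4, 1], [1, 3], [2, 1], [1, 1] ];   // [width, height]

  var parts = [];
  for (var r = 0; r < grid.length; r++) {
    for (var c = 0; c < grid[r].length; c++) {
      if (grid[r][c] !== 'W') continue;
      for (var s = 0; s < chunkSizes.length; s++) {
        var w = chunkSizes[s][0], h = chunkSizes[s][1];
        if (chunkFits(grid, r, c, w, h)) {
          parts.push(wallChunk(w, h).translate([c, r, 0]));
          consume(grid, r, c, w, h);
          break;
        }
      }
    }
  }
  return union(parts);
}
```

Biggest chunks are tried first, so a straight run of wall becomes one primitive instead of a row of unit cubes.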

Notes about the Code

  • The resulting file is not manifold; however, NetFabb fixes that pretty easily and reliably.
  • The chunking is necessary if I want to represent steps in an area.  Otherwise, I didn’t need it.
  • You can define any mapping you want from a character to a function that returns a CSG (see the sketch after this list).
  • Could probably use this to generate dungeon levels pretty easily.  Or maybe take a game of NetHack and generate the level?  Coolness!
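For example, the legend could look something like this – a hypothetical mapping I’m making up for illustration, not the one in the repo:

```javascript
// Hypothetical legend: each map character maps to a function
// returning a CSG for one 1x1 cell of the floor plan.
var legend = {
  'W': function () { return cube({ size: [1, 1, 10] }); },   // full-height wall
  'H': function () { return cube({ size: [1, 1, 3]  }); },   // half wall / railing
  'S': function () { return cube({ size: [1, 1, 1]  })       // a low step with a lip
                       .union(cube({ size: [0.5, 1, 2] })); }
};
// usage: legend['W']().translate([col, row, 0])
```

Anything that returns a CSG works, so a character could just as easily produce a window opening or a staircase segment.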

Where would it go from here

  • Lay out an actual template of (part of) the house, and fine tune it from there.
    • Probably involves adding “and I want the result to be exactly 150 by 145mm” type scaling (see the scaling sketch after this list).
    • The functions will probably start taking arguments like (dx,dy) => so that the function can draw something intelligent for an area that is dx by dy in size.
    • I just noticed that the output is mirrored, because the axes differ between (row, column) and (Y, X).
  • Preferably, I’d like to create an object / class that does this work, rather than the current style of coding.  IAGNI at the moment.  Then, maybe running in Node, I could take the different floors, convert them into objects, and then do further manipulation on them.
    • Like slice them into top and bottom pieces.  Windows and doors print a lot better upside down – no support material necessary.
    • Would also need to slice them into horizontal pieces.  My build platform is limited to 6” square.
  • I live in a very 90-degree-angle house, so this kind of solution works for me.  Sorry if you live in a circular, or slightly angled, house – this solution is not for you.  Buy me a house, and I’ll build you a solution. 😛
    • Seriously thinking about this.  I’d probably have a template of “points”, and then a language of “Draw a wall from A to D to E”, and then “place a door on the wall from A to D at the intersection of F”, or something like that.
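As a sketch of the “exactly 150 by 145mm” idea from the list above – the function and the numbers are mine, nothing like this exists in the repo yet, and it leans on the csg.js getBounds() and scale() helpers:

```javascript
// Hypothetical helper: uniformly scale a CSG so its X/Y footprint
// fits inside a target size in mm (assumes csg.js getBounds()/scale()).
function scaleToFootprint(model, targetX, targetY) {
  var b  = model.getBounds();              // [min, max] corner vectors
  var sx = targetX / (b[1].x - b[0].x);
  var sy = targetY / (b[1].y - b[0].y);
  var s  = Math.min(sx, sy);               // keep X and Y proportional
  return model.scale([s, s, s]);
}

// e.g. var house = scaleToFootprint(floorPlan, 150, 145);
```

Taking the smaller of the two factors keeps the plan proportional rather than stretching it to hit both targets exactly.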

A fun night of short and sweet coding.  I had to look up a lot of JavaScript primitives, mostly around arrays of arrays and checking for undefined.

Some day I’ll get that “doll” house printed.  Then I can make scale models of all my furniture from Legos!  Fun fun.

Using Blender to make 3D font thingies to print at Shapeways

Posted a how-to video here:

[youtube=https://www.youtube.com/watch?v=muMX3byfJ9o&w=448&h=252&hd=1]

 

0:00 Intro
0:49 Choosing a Font
1:06 Blender the inadequate Intro
2:00 The “Fast” route that doesn’t always work
6:07 “Reliable” way
9:40 Deleting Inner Shells
10:43 Exporting to Shapeways via OBJ – No Texture
12:00 UV Mapping, Texture
15:31 Exporting to Shapeways via X3D – With Texture

Having now finished the video, it occurs to me that using a picture as texture would work very well with cylindrical or spherical mapping.

While in this video I’m focusing on printing at Shapeways, it’s possible to print at home – just export to STL.  However, Shapeways uses a different process whose filler material acts as support, which handles this kind of complexity much better than the FDM process I would use at home on my Solidoodle.

Video captured using Open Broadcaster Software; edited down using Premiere Pro.

From Photoscan to Shapeways: Process 2/N

TL;DR:  Nice idea, but fail.

Holy crunch, this is taking a lot of hands-on work figuring out what works and what doesn’t.  I decided to step back, take a look at the strengths / weaknesses of each program, and see if I could meta a process out of it.

Agisoft Photoscan
  • Strengths:
    • Can fill holes; gets normals correct when filling holes
    • Can export and import .OBJ
    • Can export and import .WRL
    • Unique: can create textures from images
  • Weaknesses:
    • Cannot select only visible surfaces for deletion; has to select hidden ones as well
    • Can only close holes; does not always generate a manifold result
    • Cannot export STL
    • Cannot resize
    • Imported object cannot have been moved
Netfabb
  • Strengths:
    • Great job filling holes
    • Can detect non-manifold
    • Unique: very good at making things manifold automatically
    • Import/export OBJ, STL, WRL
    • Can resize
  • Weaknesses:
    • No direct mesh manipulation
    • Free version cannot remove extra shells
Blender
  • Strengths:
    • Unique: can do union operations to add missing chunks together
    • Can detect non-manifold very well
    • In-depth mesh editing
    • Can resize
    • Thin wall detection
    • Import/export OBJ, STL, WRL
    • Can remove extra shells
  • Weaknesses:
    • Fills holes with awkward normals
    • Cannot auto-fix manifold problems
    • Deleting faces causes complex manifold problems which escalate quickly
Windows / etc.
  • Strengths:
    • Can ZIP files (heh)
Shapeways
  • Strengths:
    • Imports textures via ZIP files
    • Thin wall detection
    • Unique: can print in color!
  • Weaknesses:
    • Cannot directly import multi-part objects unless zipped first
    • Cannot resize
    • Thin wall fixes lose the texture
    • Cannot get texture when importing OBJ

Proposed Flow

Agisoft
  • Generate mesh
  • Save as A.obj
  • Use circle and block delete to remove chunks to be replaced in blender
  • Save as B.obj
Netfabb
  • Import B.obj
  • Fill holes / clean up / repair
  • Save C.obj
Blender
  • Import A.obj
  • Import C.obj – should be manifold
  • Create additional surfaces D1,D2 etc to union using A for reference
  • Union them with C <– not sure if this will work every time. 
  • If it works, save as F.obj
  • If unioning does not work, then:
    • Import B.obj
    • Prune surfaces D1,D2 etc so that there is no overlap with B
    • Create faces to join B and D1,D2 etc so that a fill holes will do the right thing
    • Save as E.obj
Agisoft (if stuck at E)
  • Load E.obj
  • Fill holes
  • Call this F.obj
Agisoft
  • Load model from F.obj
  • Generate textures
  • Export as G + GT (.obj)
Blender
  • Start over
  • Load G + GT (.obj)
  • Relocate, Orient
  • Scale, Apply scale
  • Thin wall detect
  • Save as H  + GT (.X3D)
Windows
  • Zip H+GT to HZ
Shapeways
  • Upload HZ
  • Look at thin walls situation -> leave open
Blender
  • Fix thin wall situations (and other problems)
  • Save as I + GT (.x3d)
Windows
  • Zip I+GT to IZ
Shapeways
  • Replace with IZ
  • Hopefully can print.

Wow, that’s a lot of steps.   No wonder I’m a bit frustrated.  

Update as of 6/1 (3-4 days after writing this): 

I tried it.  Twice.  Five hours later: it’s a fail.  The reason: when I get to step G->H in Blender – when I do any fixes in Blender – if those fixes involve cutting away dead faces and closing holes or anything like that, I lose the texture.  As I usually have to do this surgery around the places I filled in during steps C and D1, D2, etc., it’s all over the model, and it would look ugly.

However, I did learn a bit about fitting NURBS spheres onto human faces.  You go into the Alt-Q 4-way view (top, right, front), and first fit the edge pieces (where the surface comes all the way out to the control point) before going on to the other bits.  In the case of a human head, the anchor points are: just above the ears, an imaginary line going back from the chin, just under the nose, etc.  This could be its own blog post, but I’d need more practice to verify that it works well every time.

Next up I’m going to try a much-reduced polygon count – get rid of some of this detail that causes the thin walls.  The idea would be to use color and texture to make it look like the human.

From PhotoScan to Shapeways: 1/N – Using Blender as an intermediary

image
Shapeways is one of several print services that offer full color prints – something I cannot afford to do myself.  However, they do not allow on-site scaling – you have to scale the model yourself.  There are also various cleanup steps that have to happen before Shapeways will print your model.  For these I use Blender.  (ref: Shapeways page on Blender editing)

Scaling, then Lighting, then Checking for Problems

Blender does not (by default) understand units of measurement; however, during import, Shapeways will ask what the unit is.  I go with mm; I want the model to be up to 50mm in size.

image
The original starts out at (Dimensions) 0.972 (assumed mm) as its largest dimension.  If I scale it up by 50, I get 31 x 48 x 40 instead.  Then, very important: apply the scale so that the XYZ coordinates are actually changed (some tools, like Solidify – used for hollowing out an object – work on the pre-scale coordinates).

imageimage
Side note if you want to check textures: at this point, the model is a bit dark, as the original light is now inside the model and not quite bright enough.  I usually find the Lamp, drag it out into the open, and give it a falloff distance of 3000 and more energy.

And now the fun part!  Turn on the 3D printing tools (via extensions), and … don’t believe everything.  But do check for a minimum wall thickness of 2mm.  And… image… ouch.  A lot of stuff.  Now, Shapeways may not complain about all of this – you are always welcome to upload.  In fact, I’m doing just that, to see what they will come back with.

imageimage

Shapeways

False End #1: Do not export as .OBJ.  Shapeways does not process the .MTL and the .PNG file, and you get no color.  Instead, try .X3D.

If you open the .x3d with a large-text-file editor (like Notepad++, or even in Blender), you’ll find that it has a reference to a texture file with a different name:

image

Be aware of it – zip both the .png and the .x3d file into a .zip file for upload to Shapeways.

After the file uploads and gets processed (30 seconds?), you get this screen:

image

And if I scroll all the way down, to the interesting stuff – Full Color Sandstone:

image

First, $23.13 .. OUCH.    Second, Thin walls.  It’s a link.   Follow the link. 

image

As you can see, this is not the sea of yellow that we saw earlier.   But, it also has this “Fix Thin Walls” button.

image

Oh my, we gave Dan the pox.  Well, no big deal, let’s print that anyway …

image

But now we have lost color.  No deal.  Back to fixing wall thickness in Blender (and reducing the size of the model!)

Photoscan: Using Blender as an intermediary touch up tool

I’m glad that I have a solution (use a good camera and lens) for capturing 3D models – but I’m still trying to mature the process, to the point where I could run a scan-and-print booth at a flea market.  (Not that I’m going to; I’d just like to be that good at it.)

The Problem: No Noggin

image
This is a DSLR Dan scan with a medium dense point cloud on Moderate (Ultra High didn’t really add to the detail).  The top of the head is missing, and there’s a large seam in the back that is not filled.  Also, the bottom isn’t a good base for a printed model.

The Solution:  Blender!

image image

image

It is important to leave the model where it comes in – I have to export it at that exact spot in order for Photoscan to pick it up.  Luckily, I can select the object and zoom to it using “.” on the keyboard.  (Blender: shortcut keys depend on which window is active; it has to be the 3D window for “.” to work.)

I then remove stuff from the model, and add in some more surfaces (NURBS surface shown here), to get the model closer to what I want:

a image b image c image

d image e image f image

  • a: I use perspective + border select + delete vertices to clear away an even cut, which I will augment with a surface.
  • b, c: I create (as a separate object) a NURBS surface with “Endpoint: UV”; I fit the outer borders first (so that there is a little gap showing) and then move the inner points (how much it sticks out) in; I try to ensure there’s always a gap between it and the actual model (easier to join later).
  • d: For the base, I create a square (a cylinder would have been better?), move the edges in, and then delete the top surface.  I subdivide till it has the right “resolution”.
  • I then convert the NURBS to meshes, select the meshes, subdivide them to match resolution, and join everything into a single mesh.
  • e: Sometimes to avoid “helmet head” effects, I have to delete some of the edge faces of the former NURBS by hand.
  • f: Stark from Farscape.

Another touch-up I can do in Blender is to smooth skin surfaces using “Sculpt”:

imageimage

Wrapping it up

I have to join the pieces together by hand *somewhere*, so that the holes have boundaries.  Try to keep each hole in one axis.  Notice how I subdivide the larger mesh so that the points line up.  Don’t forget to normalize the faces outward when done.

imageimageimage

 

I can then export the .OBJ back to disk and import it into Agisoft Photoscan, where I finish the hole-filling process.  (I’ve tried filling the holes in Blender, but I run into vertex normal / inverted face problems.)

imageimageimage

And then, we can build a texture, and Dan’s noggin won’t be left out.

And Now for Something Completely Different:

The other direction to go is to use a completely different mesh and see what happens:

suzanne the monkey:
image

sphere scaled to head shape:
image

We could also do a Minecraft version of Dan – by joining several rescaled cubes, and then building a texture around that.  That would make for an excellent Pepakura model.   However, I’m out of time on this blog post (1h24m so far), so I’ll save that for another day.

(after editing:  1h40m taken)

Freakin Holes

This is my part in NetFabb.  It’s fine.

image

This is my part in Blender with Select Non-Manifold.  It’s fine.

imageimageimage

This is what Repetier Host and Slic3r 0.99 think of my part.  They are not fine.

image

Following the Trail

Loading the part up in Blender, translating it by the X, Y, Z indicated in Repetier Host, and then putting the 3D cursor at the specified coordinates:

image

Nothing suspicious?  Rereading: it said “near edge”, so there should be an edge between those two locations.  Because the X and Z stay constant, it’s along the Y axis.  So it must be one of these guys:

image

Aha – I think I found it.  There are actually several vertices here.  Ctrl-+ a few times to grow the selection, Shift-H to hide unselected, and then move some stuff around.

image

Delete the vertex, look for holes and fix:

image

Send it back over to slicer, and try again:

image

Not winning today.

Update: slic3r 1.01 didn’t seem to have any problems with it.  Now I have to figure out how to configure Repetier Host to talk to the new slic3r install on the computer.  (Update: very easy.  Tell RH which directory slic3r lives in, and it handles the rest.)

Getting Photoscan to Work: 3/N

Good news!  Using a Nikon camera with a 55mm lens, I got pretty good (printable) results!

image
Subject:  Lamont Adams

Here’s how it went down:

  • I borrowed Dan Murphy’s Nikon DSLR camera.  He had several lenses; I chose the 55mm lens (not a prime; I just didn’t zoom it).
  • I sat the subject in the center of an empty room beneath four fluorescent lights.  Sitting keeps them stiller, and since I am taller, I can get more detail of their hair.
  • I started taking pictures from their back, so that the pictures across the front are contiguous / seamless.
  • I took 3 extra pictures from the front from a lower angle, to get nose and chin details
  • image
  • I took Lens Calibration pictures starting from as far away as possible; and got a lens profile.  I did not fit k4.
  • image
  • image
  • I used High accuracy matching, a Medium dense point cloud, and a low polygon count mesh (20k).
  • Minor editing of point cloud before meshing and closing holes (deleting floaters and fixing hair)
  • Export mesh to Wavefront .OBJ format; use Blender to rotate it and convert it to STL; export STL at 100x scale
  • Netfabb to clean up the STL and scale it precisely (100mm height)
  • 6 hours to print it.

I have ordered a small full color print from Shapeways to see what that looks like.    Should be here June sometime.

He looks kinda like the KFC Colonel.  What with that chin growth and all.

Addendum:   Instead of a 4/N post, I’ll just put him here:  Yellow Dan the Pirate Man with those dark sunken eyes.

image

Tired of PhotoScan, I just want it to Work: 2/N

Since the last blog post, I finished going around the object and selecting enough cameras to get a decent dense point cloud going.  I did this in four chunks; I was trying to quarter the model with each chunk:

image – 9 cameras, 1553 points
image – 17 cameras, 3260 points
image – 10 cameras, 2332 points
image – 11 cameras, 2575 points

Let’s Align Some Chunks, Shall We?

Align Chunks, Camera Based, 1 Camera overlap

Nothing.  Maybe it needs more cameras?  I could see that, this being in 3 dimensions, it would need 3 cameras minimum.  I’ll come back to this with more cameras in the overlap.

Align Chunks, Point Based

image

Not quite.  And as far as I can tell, there are no manual align controls anywhere (in the non-professional version).

I could take this out to Meshlab and try to align it there; however, I won’t later be able to map a texture, so I have to solve this in PhotoScan.

Add More Cameras, Align Chunks, Camera Based, 3 Camera Overlap

While I’m at it, I also add in a few more cameras in some of the gaps that I see.  And this is what I get:

image

Cue the Darth Vader Imperial March

I can tell it got the cameras correct.  However, my fear is realized: I think that as I walked around the subject, he moved slightly.  Or my distance from the subject was not constant, so I ran into some lens calibration issues, and thus the resulting object was not mapped at the correct size.  Either way, what we have now is a FrankenDan.

image

I cannot resist going all the way through to a model and texturing this beauty – and learning how to do an animation in Blender at the same time:

[youtube=http://www.youtube.com/watch?v=-V4Yam2yCU8&w=448&h=252&hd=1]
Franken Dan Murphy

Attempting a single chunk with the same 55 cameras

image

What is happening is either a) the model moved, or b) I changed distance from the model (and the camera alignment is wonky), and it just cannot get the math to work.  FrankenDan is actually a better representation of the reality that was captured. 

So.. I don’t think there is a solution here, with a GoPro Hero3 walking around a subject.   There are several directions I could go, though:

  • Start from the back of the subject, so that the seam would be in the back.
  • Put markers on the person’s back (so that there is something to “fix” on), or give them a “garland” of some sort.
  • Use the 120fps mode to capture the model quicker, but I need to find a reliable way to spin the camera around the subject without introducing motion blur.  (Hula Hoop Mount?)
  • Use a better camera (not a GoPro); perhaps a DSLR; with a ring laid out for distance from the subject (see teaser solution below)
  • Use multiple cameras! (so many people have had success with this – and they don’t have to be good cameras either)

Teaser Solution:  DSLR

In comparison, here is me, taken via a DSLR camera with a 50mm fixed (prime) lens.  It’s not quite printable, as the (shiny? homogeneous?) back of my head failed to capture.  There’s definitely something to be said for not using a GoPro.

imageimage