Commute 3D print: 1/3: nodejs, openjscad, vscode

I have had a project in my brain for a long time – I think I first mentioned it in 2013*

The problem is, it wouldn’t go away.  It sat there in my mind, stale and stinking.  It had to be done.

The gist of the project is:

  • Take several ways of driving to work
  • Plot them out in 3D space with Time on the Z-Axis, in a virtual race as it were
  • Should be able to see things like stoplights show up as vertical bars

Well, I got tired of thinking about it and decided to do it.  I was spurred on by finding out that “jscad”, which I had known as openjscad.org, was now available as a node module — https://www.npmjs.com/package/jscad.  Okay, let's do this!

* Here's a first reference: Car Stats: Boogers! Foiled! (1/11/2013).  There are a few other posts just prior, where I had been plotting interesting things from car track data using powershell.

The implementation details

image

The above screenshot shows a sample output with some initial tracks, but .. they are the wrong tracks.  I’ll be recording the correct tracks later.

  • I created a “Base” for the sculpture by also drawing all the tracks along Z=0, but a little bit wider.
  • I made it a bit more stable by putting a “Pillar” up to the other end of the track, so it's supported.

Code at: https://github.com/sunnywiz/commute3d .. the version as of 4/29/2017 11:57pm should be runnable to give the above output.

image

It took me probably 12 hours spread over 3 days …  here are some of the things I learned along the way:

chocolatey install nodejs

I've taken to using chocolatey to install windows packages so I don't have to go hunt things down on the internet.  Most of my machines have it; it's part of how I set up a machine.

I could totally install nodejs via chocolatey – and once installed, that gave me npm.

image

The command line, btw, would be:   cinst -y nodejs

npm init

This bit will seem silly to people who’ve been using it for a while, but these were the walls I had to hit.

image

You can't npm install a package (everybody says “run this command” – it doesn't work) until you've first started a project with “npm init”.  I was thinking it was like CPAN, where you add a package to a central repo and it's available everywhere, but nope – it's a local dependency per project, more like nuget.  Makes sense; my CPAN (Perl) experience was from 1998, and the world has changed since then.
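
For future me, the minimal sequence looks roughly like this (the -y just accepts all the init defaults; older npms want npm install --save to record the dependency):

mkdir commute3d && cd commute3d
npm init -y          # scaffolds package.json; now this folder is "a project"
npm install jscad    # installs into ./node_modules and records the dependency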

node is awesome

It feels like the days of Perl way back.   Need something?  There’s a library.  Install it, use it, move on, solve the problem.  

VSCode is awesome

image

Editing and debugging are a LOT easier if you use an appropriate developer environment.  I tried vscode, and much to my surprise, I found it could:

  • Edit with intellisense (I knew this, but didn’t realize how fast it was)
  • Debug!  Watch!  Inspect!  OMG!   Very similar to Chrome Dev Tools.
  • Integrated terminal
  • Integrated git awareness and commits and pushes
  • Split screen yadda yadda
  • Very Nice.

nodejs asynchronous code can be a learning curve

Take these two bits of code:

image

image

The first problem I had was that it would go read all the tracks in … asynchronously.  I didn't have a way to wait till all the tracks were read before continuing on to do something with them.

That led me to some confusing code using “Promises.map”.  Then I found out that most folks do const Promises = require('bluebird')!  There's a library that makes things easier.

So, if you're not aware, and explaining to my future self (a reconstruction of the whole shape follows this list):

  • Call readTracks .. it's an async call.
  • It returns a promise.
  • The implementation of readTracks eventually calls either resolve or reject (in my case, resolve).
  • At that point, the promise's “then” clause gets run.
  • The glob(…, function (er, files) {…}) call is the “normal” nodejs callback strategy, one level below promises – it doesn't rely on promises, it's invoking a directly specified callback.
  • The bluebird .map() basically does a foreach() on files; it runs each filename through a function and sticks the result in an array.  It does this in a promise…
  • … which, when it's done with all the things, then calls resolve.
  • The text fileName => readTrack(fileName) is the same as function(fileName) { return readTrack(fileName) }.
  • When you have promises, there's a way of writing them out using “async” and “await”.  I didn't try that.
  • The .then(resolve) is the same as .then(function( x ) { resolve(x); } ).
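
A minimal reconstruction of the shape of that code (the names here are mine, not necessarily what's in the repo):

const Promises = require('bluebird');
const glob = require('glob');

function readTracks(dir) {
  return new Promise(function (resolve, reject) {
    // plain nodejs callback style, one level below promises:
    glob(dir + '/*.gpx', function (er, files) {
      if (er) { return reject(er); }
      // run readTrack over every file; collect the results into an array ...
      Promises.map(files, fileName => readTrack(fileName))
        .then(resolve)   // ... and only then resolve the outer promise
        .catch(reject);
    });
  });
}

readTracks('./tracks').then(function (tracks) {
  // only now have ALL the tracks been read
  console.log('read ' + tracks.length + ' tracks');
});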

The other beginner mistake that I kept making – I would forget to say “new” when I meant to create a new object.  :)

So I want to join two points in 3d space with a beam

I spent an embarrassingly long time trying to figure out how to take a cube, and scale it and rotate it and transform it so it acts as a beam between (x1,y1,z1) and (x2,y2,z2).

The answer is: 

image

The resolution:4 gives it four edges.
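
If I reconstruct it from the csg.js docs, the trick is the cylinder primitive, which takes start and end points directly – something like this (the radius and endpoints here are placeholders):

var beam = CSG.cylinder({
  start: [x1, y1, z1],   // one end of the beam
  end:   [x2, y2, z2],   // the other end
  radius: 1,             // placeholder thickness
  resolution: 4          // 4 facets = square cross-section, i.e. a beam
});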

I need to view .STL files on my machine

3D Builder is a windows program that would let me do it, but it hung sometimes.

The easier route was to drag the model over to http://www.viewstl.com/  (which is what I did for the screenshot with this blog post).

Netfabb got Bought Out

Once upon a time, I found some really nice software .. which I thought I liked enough to give them real money.  Looking back, I can't find a record of giving them money… I guess I intended to at one point.

In the time that I haven't been doing 3D printing, apparently AutoDesk bought them out, and .. the new cost is $125 PER MONTH.  Uh, no.  While researching this blog post, I did manage to find a binary of the old netfabb Studio Basic, so I snagged that.  I had saved my emails with the keys I'd used to activate it in the past, and those seemed to work, so yay!

CSG Unioning of 1000+ things takes a long time

I also discovered console.time() while benching this.

When running the program, I end up with a bunch of CSG’s, but they’re all intersecting.  I need to merge them together to get a printable thing.

That code, when run for a larger print (100 tracks), took all of 18 hours to do the merging.  That was … too much.  So, I tried to make the merging smarter.

1029 items (spheres, cylinders, boxes) unioned together, NO DEBUGGER, yielding a CSG with 11935 polygons:

  • If I union them in a for loop, one after the other:
    • 34 minutes!
    • It gets slower over time.
    • There's probably an O(N^2) thing happening under the hood – it has to re-consider the same shape over and over as it tries to add new shapes.
  • If I union them using a binary tree tactic (sketched below):
    • 42 seconds!!!
  • The actual numbers:
    • image
  • There are probably libraries out there that specialize in unioning arrays of CSGs, rather than two objects at a time.  Maybe I'll upgrade at some point.
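
A minimal sketch of the binary-tree tactic (not the exact code in the repo):

function fancyUnion(csgs) {
  // union pairs, then pairs of pairs: each shape gets re-considered
  // about log2(N) times instead of up to N times
  if (csgs.length === 0) { throw new Error('nothing to union'); }
  if (csgs.length === 1) { return csgs[0]; }
  var next = [];
  for (var i = 0; i < csgs.length; i += 2) {
    next.push(i + 1 < csgs.length ? csgs[i].union(csgs[i + 1]) : csgs[i]);
  }
  return fancyUnion(next);
}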

image

Note that if I upload the non-unioned version to somebody like sculpteo, they have an “auto-fixer” tool that fixes things pretty quickly, so this is not absolutely necessary.  It just .. feels mathematically clean to do it.

I did try to write the above fancyUnion using promises, but .. it didn't help.  Node still only uses one thread; it did not branch out to multithreading.

3 Major sources of Documentation for jscad / openjscad / csg:

Conclusion

Code is written!   

I'm hoping to get some tracks recorded over this week.

I'll make another post about how the printing process goes with the real sculpture.

Inventory of Projects

I’ve been meaning, for a while, to put together a spreadsheet of the (software) projects I’ve worked on in my life so far.   The sheer number of them is staggering.

The thought was I could put in enough columns so that an interesting taxonomy could emerge.

It's also one of those things where I never thought the list would get so large that I couldn't remember them all, but .. here I am.  I guess 26 years of paid experience will do that to a person.

I Started It

I tried to do that for a bit – here’s what I got:

https://docs.google.com/spreadsheets/d/19ivowzqQJiPrMVk8rAfHbZ-cf_G6tqmQAzqYIW1sc00/edit#gid=0

image

The columns I have so far are:

  • Name
  • Year(s) – turns out some of them extended over multiple years.
  • Employed by + Project For – working at igNew, my project was for a client other than my employer.  But before that, these were the same.
  • Implementation Details – it was pretty hard to figure out what columns to use here:
    • Language – It's mostly C#, but if I go back far enough there's some Java, Perl, Clipper, etc as well.
    • UI – technologies related to UI stuff
    • DB Backend – technologies related to database stuff
    • Host Platforms – I was trying to figure out how to say Asp.Net MVC vs a Windows Service.
    • Additional Technologies – I think maybe EF + Dapper need to move to DB column
  • Management Details
    • SCM used .. this feels like an unnecessary column; who would ever want this info?
    • Unit Testing / Mock framework – if sorted by start year, you can see how this becomes important.
    • Integration Testing – as I think this is super important, it gets its own column.
    • CI/Build strategy – or lack of it – how we went about running the project
  • Project Management Details
    • Roles
      • Engineer = “thought about how to do it” + “did it”
      • Lead Engineer = “mostly all me”
      • Project Manager = “updating the Burndown” + “Communicating estimates + schedule”
      • Ops Support = “things broke in Prod.  Figure it out.”
    • Slices  – If I worked with a team of people, then these are the bits that I worked on.
    • Proud Of – this is probably the best part of looking back.
    • Coworkers + Contacts – I’m going to have to go look up many names for Contacts.

What Now?

There are so many projects!  I put down 11 tonight; I think the list of paid things is probably in the 40's – 1-3 per year, depending on role – and then add in another 30-40 fun things?

I guess I could first make a list of all the projects I can think of .. vertically .. going through each Job.

I could (and will!) also add in all the for-fun projects that I’ve done.  

But Why?

Simply put, this is to battle Imposter syndrome.   Also, in my job, I’m undergoing a role shift – where I’m taking on more Ops and Maintenance type work – it feels like a good time to look back at my career as a software developer and take some stock of what I’ve accomplished so far.

The other part of it is … my resume.  I don't need one at the moment, but every time people talk about keeping a resume updated, the level of detail involved gives me anxiety.  So the thought is: if I have this spreadsheet out there, my resume can become more of “who I am” and “what I care about”, and shove all the detail crap into this spreadsheet.

Updates to twit-sort

I am currently on vacation!  In Florida!  We drove!

I took the opportunity (in the early mornings / late nights when everybody else is asleep) to work on some code that I wanted to update – I don't normally get time to do this.  It's my twitter-reading app.  Which only I use.  But hey, that's fine, I haven't tried to market it.

Changes

Change #1:  I kept seeing “…” with truncated tweets.  A little research led to some interesting stuff — basically, following retweets down to the original tweet and getting the full text from there.

Along the way I tried to build up a “who quoted who” breadcrumb:

image

The code is a bit wonky:

image

However, it's confusing – many people (or clients) retweet without actually having a retweet link, and there's “quoting”, which is different .. eh, whatever.  If the API gives it to me, I pass it through; I'll figure it out later.

Change #2: I surfaced the like count and favorite count, as well as a link per tweet to open the tweet in its own window (as hosted by twitter). 

image

Complications

I have the code in github, which is public, and I don’t want my access keys and tokens and stuff checked in there.   So, I had to fudge around a bit – this was my solution:

image

  • I created a branch with all the passwords and stuff in it, and worked from there.
  • Once a change was done, I either did a rebase (with cherry picking) or a direct cherry-pick to move commits over to master, which I pushed up (roughly the commands below).
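
In git terms, that dance is roughly the following (the branch name and sha here are made up):

git checkout master
git cherry-pick abc1234       # bring over just the commits that don't touch secrets
git push origin master
git checkout secrets-branch   # back to the working branch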

Granted, I could use my azure visual studio hosted git, but I wanted the code to be visible / usable.  So .. if anybody has ideas on how to do this better, please let me know.

What Next (with this project)

There's not much else I can do with this in its current codebase.  I could certainly make it prettier, but .. eh, that's not me.

If I had unlimited time, I’d rewrite it – make it so it did all its fetching and filtering and sorting locally (in javascript).  I could get a LOT fancier then – things like pulling out hashtags into groups, etc.  Maybe adding some sentiment analysis things.

However, I have so many other projects on the burner .. this one won't make it for a while.  My need has been solved, so there's very little itch here to scratch.

The url of the site:   http://twit-sort.azurewebsites.net

The code: https://github.com/sunnywiz/twit-sort

Making Educational Videos for Code Louisville

I volunteered to be a mentor for the latest session of CodeLouisville, on the .Net side.

No big deal, I thought .. and then TreeHouse was unable to get their Entity Framework videos ready on time, so we, the mentors, volunteered to pick up the slack.

Naturally, I don't do well when given an open-ended problem, so I went into crazy-detail-mode.  I paid $15 for a 1-year subscription to https://screencast-o-matic.com/home (VERY GOOD), and I started making videos.

And I ran out of steam.  Because, of course, I chose the slow, measured, one-thing-at-a-time approach, and created far too much work for myself.

Luckily, two other mentors suggested an alternative for our mentoring session – if I was going top down, they were going bottom up.  Basically: show the fast way to create an MVC website, Code First, Scaffold, and let the students backtrack from there.

This worked VERY well.  It boot-kicked the students into effective land.  At least one of them showed me working code in their project that same week.

Lessons Learned

I am too detail-oriented for my own good.

When in doubt, ask people what they need.

Sometimes, you just have to show the finished product and NOT explain everything, then go back and explain what's needed.  It's okay, people are resilient, they will survive.  :)

I like screencast-creation, but not for more than a few hours a week.  For a 20-minute screencast, it takes about 40 minutes of recording, about 2 hours of editing, and another 20 minutes of rendering and uploading – so basically plan on 4-5 hours for a 20-minute video on a topic.  Thus, I can probably sustain generating about 10 minutes of educational content a week.

What Did I make

This is a walkthrough of LocalDB:   https://docs.google.com/document/d/1tTDKTQsfGPr_nFSbEifGQRc8dc-FfrxbPUfnc7CyKoc/edit?usp=sharing

This is a walkthrough of C# talking to SQL:  https://docs.google.com/document/d/1lSt-C5-L3VwLLGE6oJO3A_UOpIu2lYA-EWMhzAo-Ye0/edit?usp=sharing – it has links to 5 videos, and (eventually) links to code in github.

Unit Testing vs Integration Testing

Unit Testing

  • mock everything external to your code.
  • mock everything up to the thing to be tested
  • do some stuff
  • assert one thing.
  • Don’t continue a test – instead, mock up the environment to match the new thing to test, and write that in a new test, with a new assert.
  • It's cheap to run lots of tests.
  • It's cheap to mock things.  (A sketch of the style follows this list.)
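
As a sketch of that style (NUnit + Moq assumed; every name below is invented):

[Test]
public void Total_Sums_The_Invoice_Lines()
{
    // mock everything external / everything up to the thing being tested
    var repo = new Mock<IInvoiceRepository>();
    repo.Setup(r => r.GetLineAmounts(42)).Returns(new[] { 10m, 5m });

    var service = new InvoiceService(repo.Object);

    // do some stuff ...
    var total = service.Total(42);

    // ... and assert ONE thing
    Assert.AreEqual(15m, total);
}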

Integration Testing

  • Its expensive to set up data
    • That's a talk unto itself, on how to do that in a sane, repeatable way.
  • There is no mocking.  It's real stuff, all the time.
  • Thus, when you achieve one milestone, your test (more like a “flow”) continues to the second milestone. Examples:
    • Upload, Pause, Continue Upload, Download, Pause, Continue Download
    • Upload, kill upload, Upload again
    • Upload, Download, kill Download, Download again.
    • Create item, edit item, rename/move item, delete item.
    • It's too expensive to try to get to a middle state, unlike mock-land.
  • Along the way, you print out “assessments” (to go with an AAA-style term) of where your test is at and what data it's seeing (see the sketch after this list):
    • ie, Arrange, and then
    • Act Assess Assert
    • Act Assess Assert
    • Act Assess Assert
  • In case of failure, you compare the log of the failed test with the log of a successful previous test to see what’s different.
  • The test can be VERY complicated – and LONG – and that’s fine. You only know the detail of the test while you are building it.
    • Once it goes green in a CI system, you forget the detail, until it fails.
    • If it does fail, you debug through the test to re-understand the test and inspect data along the way.
  • Expect flakiness
    • Sometimes things just fail. Example: locks placed on tables in Postgres (for Unique constraints) in their UAT environment by some other process.
    • Sometimes things fail because of a reason
      • Somebody changed a schema.
      • Somebody deleted some key data
      • Some process crashed
      • A previous test left behind data that FK-locks the data you want to work with.
      • All these need human care and feeding and verification; it's not “mathematically sound” like unit tests are.
    • It's a good idea to put an Assert.Ignore() if any failures happen during the Arrange() section (ie, databases are down, file system full, etc – no longer a valid test; not failed, but not valid, so ignored).
      • Can postpone this till a test starts to be flaky.
  • But when it works
    • you know that all the stuff is CORRECT in that environment.
    • And when it works in CI day after day after day, any failures = “somebody changed something unexpected” and needs to be looked at.
      • Fairly often it's a shared DB, and somebody else changed the schema in a way that you're not yet accounting for.
      • Or somebody changed the defaults of something, and the test data you're hinging a test on has not been updated to match.
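
Here's a sketch of that Act / Assess / Assert rhythm (NUnit assumed; all the names below are invented):

[Test]
public void Upload_Pause_Continue_Flow()
{
    // Arrange: real database, real services.  If the environment is broken,
    // this is not a valid test -- ignore rather than fail.
    TestEnvironment env;
    try { env = TestEnvironment.Connect(); }
    catch (Exception e) { Assert.Ignore("environment not available: " + e.Message); return; }

    // Act
    var upload = env.StartUpload("big-file.bin");
    upload.Pause();
    upload.Continue();

    // Assess: log enough state that a failed run can be diffed against a good run
    Console.WriteLine($"upload: state={upload.State} bytesSent={upload.BytesSent}");

    // Assert
    Assert.AreEqual(TransferState.Complete, upload.State);

    // ... and the flow continues on to the next milestone: Download, Pause, Continue ...
}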

Which Ones To Use Where

  • Use Unit Tests to explore the boundaries and paths inherent in a piece of code.
    • Faster
    • Many of them
  • Use a single integration test (or just a few) to run through the code with all dependent systems
    • try to hit every SQL statement / external service at least once
    • If it worked, combined with all the unit tests, you’re probably good

SQLite for C#/SQL Developers

Our current project uses sqlite3 for local storage of stuff, for various reasons.  This was our first time working with it, and we've learned a few things that are not obvious coming from a SqlServer world —

Tooling

SQLite Expert for the win.  It has a chocolatey package as well, although that failed to install once for me (but worked twice; who knows).

Note that SQLite is multi-process-open-able; data inserted by your program shows up automatically in the Data tab in SQLite Expert.  However, if SQLite Expert has the .db3 file open, you can't delete the file to scrub it clean.

All the same: varchar nvarchar and text

SQLite doesn't care.  It has 5 storage classes for data, depending on what the data actually is.  Handy reference: http://www.tutorialspoint.com/sqlite/sqlite_data_types.htm.

So when we were doing “uniqueidentifier” it was actually doing BLOB under the hood.

It's entirely dependent on what you're trying to store, as to what gets stored.  So you can store a string in an int field.  https://www.sqlite.org/faq.html#q3 — it's a feature, not a bug.
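
A quick demo of that flexibility, using SQLite's typeof() to show the storage class actually used:

-- the column says INT, but SQLite stores whatever fits what you hand it
CREATE TABLE demo (id INT);
INSERT INTO demo VALUES (42);
INSERT INTO demo VALUES ('not a number');
SELECT id, typeof(id) FROM demo;
-- 42           | integer
-- not a number | text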

GUIDs as binary, beware Endian.

Most programmatic forms of storing Guid data end up storing it as Binary/Blob.  You know it's binary if you do:

select id, hex(id)

And you get two things that look about the same length, but with the bytes shuffled around:

{E589C188-2575-4AC6-BD41-A66A00A1AF22}

88C189E57525C64ABD41A66A00A1AF22

You might say “hey! bad Endian!” but actually it's Guid.ToByteArray() that's doing the re-shuffling.

So, if you see the value {E589C1… in a grid result, and you want to select it, and you say

select .... where id=x'e589c1...'

that doesn't work!  You have to give the bytes in the other order (as returned from hex(id)).
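
Here's a small C# demonstration of the shuffle, using the same guid as above (Guid.ToByteArray() emits the first three fields little-endian):

var g = Guid.Parse("E589C188-2575-4AC6-BD41-A66A00A1AF22");
// prints 88C189E57525C64ABD41A66A00A1AF22 -- the same bytes hex(id) shows in SQLite
Console.WriteLine(BitConverter.ToString(g.ToByteArray()).Replace("-", ""));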

Foreign Keys Not On By Default from C#

This one was a shocker.  It's a pragma to turn foreign key checking on.  However, tools such as SQLite Expert issue it automatically for you.

What we ended up with is a GetOpenDbConnection() call that did both .Open() and executed the pragma.
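
A minimal sketch of that helper, assuming System.Data.SQLite (only the method name comes from our actual code):

public static SQLiteConnection GetOpenDbConnection(string connectionString)
{
    var con = new SQLiteConnection(connectionString);
    con.Open();
    // SQLite does NOT enforce foreign keys unless you ask, per connection:
    using (var cmd = new SQLiteCommand("PRAGMA foreign_keys = ON;", con))
    {
        cmd.ExecuteNonQuery();
    }
    return con;
}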

Logging Generated SQL from Dapper

A lot of this got figured out after we could see the SQL generated by the various libraries we were working with.  Turns out that's 2 statements after your connection is opened:

con.Flags |= SQLiteConnectionFlags.LogAll;   // tell System.Data.SQLite to log every statement
con.Trace += (o, e) => { Console.WriteLine("SQL: " + e.Statement); };   // print each statement as it runs

In Conclusion

These are the bombshells we've experienced over the last week.  Hopefully, that's all of them.

Other than this learning curve — very solid, very fast, very nice.  2 Thumbs Up. Plus One. Heart It.

git rebase

I’ve never done a rebase before.  I figured today would be a good day to try it out.   It’s a simple case —

image

image

From the client’s point of view, origin/master is ahead by one commit.   It happens to be a conflicting change.

In Tortoise (I’m starting out easy), I pick “Rebase”, and I’m saying I want to change my branch SG-EndToEndTest to be “re-based on” remote master.

image

  • It's showing me the list of commits that I've had since I started my branch from wherever.
  • Everything is picked.  I could choose to not-pick some stuff and leave it behind, like config file changes for local debugging.

I click “Start Rebase”.  It starts from the bottom of the list (ID=1) above and tries to re-apply the commits onto the new base.  It runs into conflicts –

image

I right-click on the conflict and edit it .. turns out TortoiseDiff doesn't think it's a conflict, and I can easily mark it as resolved.  I have to do this twice, both times in .csproj files.

When it's done, it's all still local – the server's not any different – but local shows:

image

  • The old commits are still there.
  • But they've been cloned and re-grafted onto a new source node, and the branch label has been moved.
  • If I had to, I could undo everything by force-moving SG-EndToEndTest back to being based on ad1c.

If I now push up to the server – I get an error:

image

Instead, I have to push with --force (“may discard unknown changes”) (ie, “tell the server to do stuff where it discards changes that I, the client, know nothing about – JUST MOVE THE REF”)

image

Now the server looks like this:

image

That was my first, very simple, rebase (without squashing).
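
For reference, a rough command-line equivalent of what Tortoise did for me (branch names as above):

git checkout SG-EndToEndTest
git rebase origin/master          # replay my commits onto the new base
# ... resolve conflicts, git add <file>, git rebase --continue ...
git push --force origin SG-EndToEndTest   # move the remote ref to the rewritten history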

The pull request that I had open against that branch survived as well, and changed from “Cannot merge – conflicts” to “Can merge automatically”:

image

Goodbye to old code and dreams of immeasurable wealth

image

In the Beginning

I would always hear about people who wrote simple pieces of software, who were in the right spot at the right time, and their stuff got used and they became famous, and .. perhaps even rich.

Every time I heard such a story, my baby tyrant would say: “I want that!  Let's DO that!”

To which my Fuddy-grownup would say, “Honey, you probably won’t become famous, and it probably won’t work out.  Are you sure?”

And the Couch-Buddy would say, “Ah, too much work.  Lets read some more facebook.”

I Made a Decision

I started coding this thing that I thought would be a good start of things.  It was an app to make reading twitter easier – fewer context switches.  It was also an experiment in using Azure, Visual Studio Online, and a little bit in starting bootstrap from scratch.

I got it working.

I started the code 7/26/2015, and by 9/22/2015 I was ready for the big time.   This was mostly an hour or two during a workweek in the evenings, and maybe an hour or two on a Sunday morning.

I had a logo created, bought a bootstrap style, added what I thought were the key features I needed, rebranded it, and bought a domain name.

image  image

I stopped.

And then life got complicated, and I let it sit – costing me monthly money, btw: $17 per month to keep it hosted at the cheapest level I could get away with, AND have a domain name.  I used it for a while.

Eventually, I got cheap, and work distributed a full MSDN license to me with an azure subscription, so I nuked it.

I’m letting it go.

Very recently, I put it back online under my MSDN license – you can use it here:

http://twit-sort.azurewebsites.net/

I’ve cleaned up the code that I deployed, removed all the passwordly bits from it, and uploaded it to github.  Here’s the guts of it:

https://github.com/sunnywiz/twit-sort/blob/master/azuremvcapp1/Controllers/ReadController.cs

Letting go of the dreams as well

I would have liked to have seen this thing become better.

  • I could have done a face lift on the front page.  Too many words.  Replace with screenshots of the configuration page and the read page.
  • I could have made it more colorful. Orange and Blue!  You can see this in the icon a bit.
  • I could have made it front-end js only, with no server side talking to twitter, using local-storage for persistence
  • I could have added “click hashtag or username” to create additional groupings on the fly.  delete groupings on the fly as well.

The good news is, all these dreams live on, in a future project – one that works with Facebook instead of Twitter.

Conclusion

Letting this one go makes psychic room for other things that interest me.  May it bless others.  If you write a good one like this, I'll use it.

Latest Round of Harvesting Car-Tracks

image

In the past, I used Android – Torque, and an upload to Dropbox, to gather car-tracks.

I’ve started up that project again – this post focuses on my solution for gathering car-tracks for later processing.


image

Android-Tablet-Always-On

I got a Samsung Galaxy Note Tablet used, and I've stuck it in the glove compartment of my car.  It's plugged into power, but power only runs when the car is turned on.  Click on image to zoom.

I use automate-it to run a few rules:

  • When power goes off, go into airplane mode.
  • When power comes on, turn off airplane mode, and start MyCarTracks

This seems to work as long as I drive a good amount each day. Then again, when I went to write this blog post, I found the tablet powered completely off – not enough charging?  temperamental battery?   Looks like not enough charging is the culprit.

It would be awesome to completely shut down the tablet when power is lost, and then power on when power is applied; however, I'd have to jailbreak the tablet to get that, and my 15 minutes of attempting a jailbreak didn't work, so, meh.

MyCarTracks

image

I could totally get by with the free version of MyCarTracks.  It's an excellent product!  It has these features which I use:

  • Auto-record car tracks – after you reach 6 miles per hour
  • Auto-stop recording car tracks – when still for 5 minutes.

I can then get access to my tracks via an “Export All” feature, which will let me export to CSV, GPX, or KML.  GPX is the winner for me:

<?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?>
<?xml-stylesheet type="text/xsl" href="details.xsl"?>
<gpx version="1.0" creator="MyCarTracks"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://www.topografix.com/GPX/1/0"
  xmlns:topografix="http://www.topografix.com/GPX/Private/TopoGrafix/0/1"
  xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.topografix.com/GPX/Private/TopoGrafix/0/1 http://www.topografix.com/GPX/Private/TopoGrafix/0/1/topografix.xsd">
  <trk>
    <name><![CDATA[2015-04-15 19:52]]></name>
    <desc><![CDATA[]]></desc>
    <number>29</number>
    <topografix:color>c0c0c0</topografix:color>
    <trkseg>
      <trkpt lat="38.242019" lon="-85.72378">
        <ele>132.89999389648438</ele>
        <time>2015-04-15T23:52:25Z</time>
      </trkpt>
      <trkpt lat="38.241821" lon="-85.723613">
        <ele>131.89999389648438</ele>
        <time>2015-04-15T23:52:29Z</time>
      </trkpt>
      <trkpt lat="38.241649" lon="-85.723491">
        <ele>129.1999969482422</ele>
        <time>2015-04-15T23:52:32Z</time>
      </trkpt>

MyCarTracks.Com

But Wait There’s More!

image

I went ahead and paid them $16 for a 1-year small-fleet service, which gives me online access to my tracks (up to 2 years old) for quick viewing.  In order to make this happen, I sometimes hook the tablet up to my WIFI and say “synchronize all”.  There's also an option where I can say “sync between 2 and 3 in the morning”, and I configure automate-it to take airplane mode off from 2-3; however, that's hit or miss.

Once the tracks are loaded up to MyCarTracks.Com, I can browse them on a pretty nice map (picture at the top of the post).

What’s Next

My intention is to load these GPX’s into a small sql-server database, using Spatial (Points), and then come up with little data sets of “here’s all the tracks that passed through these two points”, etc.
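
As a sketch of the kind of table I'm imagining (the names are placeholders; geography::Point(lat, lon, 4326) is SQL Server's built-in spatial point constructor):

-- one row per GPX track point
CREATE TABLE TrackPoint (
    TrackId   INT       NOT NULL,
    PointTime DATETIME2 NOT NULL,
    Position  GEOGRAPHY NOT NULL
);
-- e.g. the first point from the GPX above:
INSERT INTO TrackPoint VALUES
    (29, '2015-04-15T23:52:25', geography::Point(38.242019, -85.72378, 4326));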

I then want to take those tracks and convert them to a 3-D rendering with Z-axis = time, to compare various paths with each other.   

And I want to convert that into a 3-D sculpture.   Because, art.   My art.   Representation, archival – these I love.

But, one thing at a time.  I'm always willing to shelve my projects; I only work on them when they're attracting my soul.  Might be a bit before I get there.  I do have a start on the gpx-parsing code, though: https://github.com/sunnywiz/cartracks2016

SqlServer on a Ram Drive: Fast or Not?

TL;DR:  Don't bother, it's not.

I'm at it again .. writing what I call integration tests, which are effectively database-friendly-data-setup-and-teardown tests.  As can be imagined, it's definitely much slower than unit testing; however, I love doing it, and there are a lot of sprocs and other stuff that it's really nice to get some tests around (most of the bugs that led to this investment in time were in the sprocs).

Since I had the RAM available, I decided to try having SQLSERVER (Developer Edition) run against databases that were stored in RAM.  How does that compare?

The database is about 4G in size.

SqlServer Developer, 10G RAM Drive, In a VM , Tests run by R#

image

SqlServer Developer, using MDF files against C: in a VM; VM is on SSD; R#

image

SqlServer Enterprise on VM in Azure (network lag); R#

image

One test failure was because this server's copy of the invoice database was not complete, and the test was set to be readonly against this server (local sqlexpress is much happier about dropping and re-inserting records).

I could not ping all the way to the database server, but Client Statistics would indicate a round trip of probably 150ms?

image

SqlServer Enterprise at client location via VPN; R#

image

This is definitely faster than our azure-hosted SQL – looking at ping, it's a 50ms round trip.

Pinging taacasql01.triaa.local [10.120.0.10] with 32 bytes of data:
Reply from 10.120.0.10: bytes=32 time=50ms TTL=127
Reply from 10.120.0.10: bytes=32 time=44ms TTL=127
Reply from 10.120.0.10: bytes=32 time=45ms TTL=127
Reply from 10.120.0.10: bytes=32 time=45ms TTL=127

SqlServer Enterprise local network (client site); Teamcity

image

This is almost on-par with local SqlServer.  However, I don’t know if the machines are faster.

Summary

Ram-Disk Sql-Server didn't help.  Or maybe it's that, even on a regular hard drive, SqlServer was able to load everything into RAM, and got quite fast.

Local Sqlserver vs LAN SqlServer were close enough that I'd use one as a substitute for “how would it perform” on the other.  It helps that the customer is (probably) running both VM's on a Hypervisor, and network communication between the two machines is … superfast.

WAN SqlServer was definitely in the dumps; however, that's good for imposing artificial limits on ourselves, to make sure we're not doing too many round trips, etc.  Nevertheless, our main cloud sql server is slower than a VPN into our client; that doesn't seem right.  Or, our client is that awesome.  It could be the latter.

Not shown above, but if you drill into some of the tests, you can see the cost of setting up an Entity Framework context the first time.  It seemed to take about 8 seconds against my local server.  Once set up, subsequent tests were less than a second each.  However, the set-up would happen again for every other test fixture – apparently whatever it's doing to cache things got dropped between fixtures.  Possible optimization: hang onto it in a static?  *food for thought*

Methodology Notes:

I ran each full suite twice.  Sometimes three times, till I got what seemed to be a stable number.

I used SoftPerfect for the RAM disk.  It's set to sync the ramdisk to disk every hour or so.  After seeing that it didn't really improve things, I deleted it.

image

I drank 3 glasses of Tea, 1 Spicy Hot V8, and ate 3 pieces of chocolate, and 1 cream cheese snack, while running all these tests.