On the Trail of Writing a Rap Song

This year, I’m volunteering to write songs for each of my extended family’s birthdays.  There are a lot of them.  It started off with a remake of Taylor Swift’s 22 for a 13 year old, and then a Surfin’ USA remake for my spouse’s dad.  And then my ten year old nephew asked for… Thrift Shop.  By Macklemore.

Not a problem.    

Two days later, I had a very rough draft of what the first two verses might be like, and it was time to look at the music.  I’ve taken to downloading MIDI files, removing tracks, and then recording over them with live guitar and singing.  (I do not sing well.)

The free version of the Thrift Shop MIDI was dismal.  So I found GeerDes Midi Music, and their version was phenomenal.

However, opening up the file in Music Studio on my iPad yielded a problem.  The arrangement used a C#7 drum note, but the drums in Music Studio only go up to (I think) C6, so the note got substituted – and it sounded awful.  Editing those notes on the iPad was possible, but I thought: hey, maybe I can do something on the desktop computer with this.  (Actually, I had left my iPad at work, so I was using the iPhone – hence the desire to edit elsewhere; that screen can be tiny.)

So I downloaded Anvil Studio, and found it… meh, actually harder to use than Music Studio.  And it sounded horrible – the General MIDI synth in Windows is very meh.  But it did have support for VSTi’s.

Which led me to (not linked here) various VSTi downloads.  It turns out there are several that will play “.sf2” soundfonts, so I downloaded one – but it was a .sfark, so I had to get the decompressor for it, and then the VSTi ended up only playing a single instrument rather than multiple, so I abandoned that…

But in the meantime I did a search for “Best DAW MIDI”, which led me to FL Studio, which I downloaded the trial for, and OMG!  It installs polyphonic VSTi’s!  And it edits MIDI!  And layers audio!  And there’s a wicked DJ-looking thing that I COULD NOT FIGURE OUT!  And it’s so frickin complex that my head hurts!  And it might even be affordable!  Which led me to…

Lynda.com, my favorite video learning website, covers Adobe products pretty well, but not FL Studio.  But Groove3 does.  For $15.  I might do that.  4.5 hours.  The reviews are not great, but there’s nothing else out there…

Then again, I could just wipe out that one line of notes with Music Studio and move on.

But my curiosity is definitely up around FL Studio.  It’s much more geared towards music creation than Adobe Audition 1.0, which is what I’ve used for the last 8(!) years for audio editing work.  It might be time to upgrade my toolset.

Some day, I might even do voice lessons.

Catching up on Facebook Quickly–Try #1

I had removed the Facebook app from my phone, because it was taking too much time.  I dreamed of writing my own app that would let me catch up on Facebook quickly, a week at a time.  This afternoon, I got some time to play.

Three hours later

[screenshot: the blocky newsfeed prototype]

Research:

  • I had to go to developers.facebook.com, register, and create an application to get an application ID.  (Well, maybe I didn’t have to, but I would eventually.)
  • I tried going down the path of Heroku – hosting an app, git, python, etc. – but once I found the .Net client, I was done.  (NuGet: Facebook)
    • I ran into problems where heroku’s git client would not present the right ssh key, and I couldn’t find the ssh command line that it had installed to tell it the right key to use.
  • To bypass authentication, I went to https://developers.facebook.com/tools/explorer and generated an access token from there; it needed the read_stream privilege.  The token expires after an hour.
    • The URL to query is “/me/home” (thank you Stack Overflow), which returns the JSON that represents your newsfeed (most recent items first, it looks like).

Then, I created an MVC app to do the querying.  The controller side, to be replaced eventually with fancier login stuff:

var accessToken = "AAACEdEose0cBABI5wFnNe209ddAmRFpWOB9T9O8x2sCaNlc91ZB1u6gqqrxHseMvBKhuDHtkS3KY6KAlIz6Xc8Ps24nvIWKRF... etc";
var client = new FacebookClient(accessToken);
dynamic homeraw = client.Get("me/home");

That’s great, I have a dynamic object, but I’d rather have IntelliSense.  Enter json2csharp (thank you!!!!): it creates some classes for me, and after a little refactoring / renaming and some hard coding:

foreach (var dataraw in homeraw.data)
{
  var homeItem = new HomeItem()
  {
    id = dataraw.id,
    from = new From()
    {
      id = dataraw.@from.id,
      name = dataraw.@from.name,
    },
    message = dataraw.message,
    caption = dataraw.caption,
    name = dataraw.name,
...

And sent everything off to my view, which spits it out into HTML (the acronym stuff is not shown here, but I built “SG” out of “Sunjeev Gulati”):

@foreach (var item in Model.Data.data)
{
  <div class="infobubble">
    <div>
      <strong title="@item.from.name">@Model.AuthorAcroymn[item.from.id]</strong>:
      @(item.message ?? item.caption ?? item.name)
    </div>
    @if (!String.IsNullOrEmpty(item.picture))
    {
      <div>
        <img class="pic" src="@item.picture" />
      </div>
    }
  </div>
}

Added a little styling:

.infobubble 
{
    font-size: 12px;
    border: 1px solid gray;
    float: left;
    display: inline-block;
    padding: 2px;
    margin: 2px;
    word-wrap: break-word;
}

And now for some fun.  Reformatting things in jQuery to be more blocky:

    $(window).load(function () {
        $(".infobubble").each(function () {
            var width = $(this).width();
            var height = $(this).height();

            // determine if there's a picture in here somewhere
            var pic = $(this).find(".pic");
            if (pic.length > 0) {
                var picwidth = $(pic[0]).width();
                $(this).width(picwidth);
                width = $(this).width();
                height = $(this).height();
            } else {
                // no picture.  just text.  assume it's already very wide
                // bring the width down till its closer to square
                for (var i = 0; i < 10; i++) {
                    if (width > height * 2.0) {
                        $(this).width(width / 2.0);
                        width = $(this).width();
                        height = $(this).height();
                    } else if (height > width * 2.0) {
                        $(this).height(height / 2.0);
                        width = $(this).width();
                        height = $(this).height();
                    } else {
                        break;
                    }
                }
            }
            $(this).data("width", width);
            $(this).data("height", height);
        });
        $("#container").masonry({ itemSelector: '.infobubble' });
    });

This does a few things:

  • If it has a picture, size to the picture.  This usually works (though the “WAL-MART” one at the bottom of the screenshot is an interesting example; I’ll have to check for overly long text).
  • Otherwise, try to size it till it’s kinda-square.
  • Then use jquery.Masonry to fill the space.

Where to go from here

  • Use some pre-defined sizes so that Masonry has an easier time filling a wall?
  • Use photo pictures, but really small, to identify who wrote a post?
  • Deal with odd-sized things, or find a better algorithm for blockifying things.  Probably try standard widths like 50, 100, 200, and 400, and then move on.
  • Decide what to do with comments / likes, etc; support for other kinds of facebook items (only time will reveal them to me)
  • Going further back in time, rather than just a single call.  Try to make it infinite scrolling?  Or call the service over and over till I have the data I want, re-sort it, and then display it?  While being respectful of their service?  (See the paging sketch after this list.)
  • Actually go through the pain of logging the user in to facebook.
  • Host the app somewhere?  I’m not sure if it violates any facebook laws, like maybe there’s one that says “you cannot write an app that replaces the newsfeed”?
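
To get a feel for what going further back would look like: the Graph API response carries a paging object alongside data, and you just keep following paging.next.  Here’s a hedged PowerShell sketch – this hits the API directly rather than going through the .Net client (which should expose the same paging property on its dynamic result), and the token placeholder is obviously hypothetical:

$accessToken = "PASTE-A-TOKEN-FROM-THE-GRAPH-EXPLORER"
$url = "https://graph.facebook.com/me/home?access_token=$accessToken"
$all = @()
while ($url -and $all.Count -lt 200) {     # arbitrary stopping point, to stay respectful
    $page = Invoke-RestMethod $url         # JSON comes back as objects
    $all += $page.data
    $url = $page.paging.next               # the next-page URL, or $null at the end
}
"Fetched {0} items" -f $all.Count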

If this app actually gets written, some of the features that would make it useful:

  • Enter a date range to view over (defaulting to the last time you were in)
  • Inline see comments (if a reasonable number, damn you Takei)
  • Filter down by person or type of post
  • Sort by person, size, photo.
  • Keep track of items seen or not-seen; Star items to look at later; mark all as read.
  • Automatic mute / filters.
  • Jump to item in facebook proper to go do something more with it.

Of course, that’s pie in the sky.   If you want a crack at it, go for it.   What I probably could do on short order:

  • Allow the user to enter the token in a textbox, so anybody could use it (without having to tackle the facebook login hurdle)
  • Host it somewhere for people to have fun with
  • Put the source on github

That would be “a good milestone”.    Maybe next time.

Nevertheless:  that was fun.

Automerging in TFS

There’s an ongoing thread in my head on “what’s different in the land of Feature Branches”, but it hasn’t fermented into something postable yet.  However, there’s one low hanging fruit I can pluck: automatic merge between branches.

In the beginning, there was a branch…

First day hanging out with this team.  The client already has a stellar team of developers; we were discussing how we could work with them on this “other” feature that they don’t have time to handle. Overly dramatized:

  • We:  Pray tell, dear client, where shalt we code? 
  • Client:  Forsooth!  Thy code may be smelly as in fish; and perhaps thy project shalt be backburnered; thus thou shalt code here:  a subbranch from our development branch.
  • We: That shalt be wonderful, for we shall make this place our own, and be merry.

2 months go by.  The feature takes form, completely changes, and takes form again, and our code [B] is not so smelly.  However, we are also two months out of sync with their development branch, and we’re getting to the spot where we could think about releasing.  The problem: they have had one release for feature [A], and then have another feature coming up [C] which is not ready to go.

[branch diagram]

The painful merge

We ended up merging our code [B] back into their code [C] … and then followed their normal release path up to QA and out.  Luckily, we were able to extract “just our feature” (plus a few extra styles) without moving their feature [C] (but that was luck, really).

That merge took a while:

  • 3 days: Dev1 to Dev2, code + database changes + 67 conflicts.  Dev2 now contains A+B+C. Merging sprocs outside of version control can be painful, thank you Beyond Compare for being wonderful.
  • 1.5 days: Dev2 back to Dev1, mostly dealing with merging (the same) stored procedures and schema, 4 (easy) code conflicts.   Dev1 now contains A+B+C.
  • Easy:  Parts of Dev1 (the “most recent commits”) to QA.  QA now has A+B and very little of C (a few styles crept in). 
  • Again: We were lucky that there was almost no overlap between B and C. 

Having no desire to redo that pain, we came up with a plan.

TeamCity Automerge script under TFS

We use TeamCity as our “easily installable build server”, so we set up a daily task at 5:30 in the morning to automatically merge anything new in our parent branch down to our branch:

$tf = get-item "c:\program files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\TF.EXE"
# http://youtrack.jetbrains.com/issue/TW-9050
& $tf workspaces /server:http://TFSSERVER/tfs/defaultcollection
# It's very important that the right branch (source of automerge) be listed below.
& $tf merge /candidate $/PROJECTNAME/Dev/A . /r
& $tf merge            $/PROJECTNAME/Dev/A . /r /noprompt
$a=$LastExitCode
#if ($a -eq 0) { 
    & $tf checkin /comment:"automerge from A" /recursive /noprompt /override:"no workitem applies"
    $a=$LastExitCode
    # checkin with no changes yields a errorlevel 1
    if ($a -eq 1) { $a=0 } 
#}
# move this out to a separate build step to be sure it runs
& $tf undo . /r
exit $a

  • We had a problem with getting tf to recognize workspaces, hence the extra tf workspaces call.
  • The tf merge /candidate call lists the possible items to merge – used for populating the build log with information for humans.
  • The actual merge yields a 0 if there were no conflict errors.  We save that to return later.  If there are no changes, that’s also a 0.
  • If there were no conflicts, do a checkin.  In this case, no changes is an error, so ignore that error.
  • Finish up with a tf undo to “unlock” the files in the TFS server (see the sketch after this list).
  • Return the exit code that would indicate a conflict, if there was one.
  • We are running TeamCity under one of our accounts, thus there’s no login information in the calls to TFS.  With most other VCS’s, we end up putting passwords in the script; it’s not the best, but there are few alternatives.  Most companies that have a good build infrastructure usually have a build user for these kinds of things, which only the build admins know the password for – which once again would exclude us from using it.
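
Per the comment in the script, the undo really wants to be its own always-runs build step, so the files get unlocked even when the merge step fails.  As a sketch, that step is just:

$tf = get-item "c:\program files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\TF.EXE"
# unlock anything the merge left checked out, and never fail the build over it
& $tf undo . /r
exit 0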

Living with an Automerge

Most days it works fine.  But some days it has conflicts.   When it does have a conflict, it shows up as a failed build in TeamCity:

[screenshot: a failed build in TeamCity]

We started off with a 1-week rotation for dealing with conflicts, but that ended up being unfair to Tanner – he got 5 bad days in a row – so we switched to a round-robin “who has not done it recently” approach instead.

On the days that it runs green, opening the build log shows what got merged.  We hardly ever look at this:

[screenshot: the build log, listing what got merged]

New Branching Strategy

Having learned something, and having now earned the client’s trust, our next branch is rooted at the QA level, so that our development branch is a peer to theirs.  This is a continuing experiment; there’s more to consider – hence the still-cooking post on Feature Branching.

Till Later!

The Purpose Of This Blog (and deriving posting patterns)

When I started this blog, I wasn’t sure exactly what or why I was doing it.  Maybe, I wanted to be famous like Hanselman.   Maybe, I wanted to derive some ego from it.    Maybe, I wanted to feel that my geekiness could be appreciated by some. 

It is clearer now.  Today, there are several components:

  • I hope my peers read it, and get it.    And thus, get to know me a little bit better.  I am a social being, and I have a need to be known.  And unfortunately, I don’t game, I’m married, with a teenager who makes our lives interesting, and I don’t work out a lot, so there seems to be little overlap with my peers.    But yeah, this is all a part of me.
  • When I run into somebody at CodePaLousa or the .Net Meetup Group, I’d love to hand them a card with the blog’s URL on it, just so that I have a “brand” that I could be identified by.   I no longer want to be the Anonymous Software Developer who seeks to Prove Himself to You via Resume.  That is so.. un-networked.  (And no, I’m not looking, thank you very much)
  • Building on the previous point – I hope that in the future, if somebody is interested in hiring me – they’re hiring ME, not just my skills.  ME the person.   I’m hoping that I’ll have this blog to document myself – my intent, my sparkle, my love of people, love of process, love of teamwork, love of Geeky STUFF.   This is a gamble – that WHO I AM is a valuable asset, much more than whatever skills I am currently current at.

This decision, realization, understanding yields some side effects:

How Often Will I Update

I tried to be the “constant content creator.”  One post a week, on Fridays.  When I had a lot of stuff happen AND documented, I tried to stretch the posts out into the future, to have a series.   It felt like I was lying – withholding myself to increase market share or some silly thing.   Way too full of myself.   I’m not going to do that again.

Today:  I had the time, I had the keys, and I think this is my Third Post of the Day. (W00T!)   Then there might not be anything for a while, till the buffer fills up again.  And I get time to dump it out into a post.

Does this Blog Really Represent Me, or is it just a big Advertisement

Pretty much everything that goes on here is 100% indicative of me. (Although the inverse is not true: I am NOT 100% this blog, I am much more than this.. at least 3 circles more, which are off-topic here).     [But, if this was an advertisement, could I say this and be lying… ?   *ouch my brain hurts*]

If you know me in person, you will pretty quickly figure out:  I am incapable of subterfuge, of keeping up a pretense, of keeping a poker face, of being fake.    This yields some interesting awkward moments.  But it also keeps my life simple.  I like simple.  I tried complicated, that disintegrated under its own gravity into nothingness. 

What do I seek

I don’t think I’m going to discover something big, or write the most awesome anything (anymore (I used to)).   Nor do I seek to make people think really hard about abstract ideas.   Pretty much, I’m trying to figure out: 

  • How can I make sense of things
  • How can I make things better
  • How can I be of service

At least, that’s what I *think* this blog is about.   8 months after starting it.   Maybe another year will bring something different, but I doubt it.   

What now?

I have no clue.  I think with this, my buffer is clean.    Tomorrow is going to be all about family stuff.. 6 of us going to a WWE event!     Maybe next weekend..  who knows what will be brewing in me by then.

Enjoy!

Nexus 10 vs iPad Retina: Actual Usage

I’ve had the pleasure (and displeasure) of using both an iPad Retina and a Nexus 10 for the last month and a half.  (This post was started in the last week of December, when it had only been a week or two.)

This year, the iPad wins.

Nexus Failures:

  • In Facebook, a link to a video sent me to a mobile facebook website; the Nexus crashed and rebooted.   This has happened almost nightly.
  • In Facebook, pictures crop too much.   I couldn’t see the meme text without clicking in to the photograph.
  • In Evernote, linked to a Bluetooth keyboard, trying to organize the list of stuff that I want to get done for the weekend: a total disaster.
    • I could NOT use the arrow keys to navigate through the document.
    • Copy and Paste did not seem to work.
  • In the Trulia app: browsing real estate with the wife… pictures were much larger and the UI worked better on the iPad.  I was jealous that she was using the iPad and I was using the Nexus.
  • Schlock Mercenary on the Nexus was unusable – the UI elements fused together, overlapping in a horrible way.
  • Battery on the Nexus seemed to drain faster… Two days to empty instead of four.  Note: This is not a measured, scientific observation.
  • Watching Videos on YouTube:
    • If the video started rebuffering, I could not tell the video to pause for a while (to let it get ahead).  I could only pause when it was buffered and playing.    On the iPad, pausing was independent of rebuffering.
    • I experienced rebuffering when there appeared to be data still available in the buffer.
  • Netflix:  About the same, till I went to make a 30-second rewind (to show my wife something awesome).  On the Nexus, it did not register the shuttle change unless I did it for more than a minute.  When you’re trying to show a skeptical loved one something awesome, a minute is a LONG time to wait.  On the iPad, the shuttle resolution was better.

Nexus Successes:

  • Watching YouTube videos was a better experience – mostly due to the form factor, and better contrast.   (I was watching video on both devices – letting one “buffer” up while I watched on the other)

About the same:

  • Kindle book reading was about the same on both.
  • Visiting a website with Flash failed equally on both.  I tried working around it on the Nexus, but I couldn’t get that to work (probably the wrong browser).
  • Reading email via the Gmail app was about the same on both.  On both of them, I wish they fit more data on the screen.

I dearly wanted the experience to be equal… I have a soft spot for Android – I love its “intent” system for hooking apps together – but as an honest consumer, the iPad experience for me in December 2012 / January 2013 was better.  And for $100 more… yes, I’d say it was worth it.

Addendum 2/9/2013:  My wife took over the Nexus for a while.    She is frustrated with it, HOWEVER, having multiple users on the device was clearly a win.     She’s going to get an iPad Mini, it fits her collection of purses better.  (Women are weird) (I like them like that)

Randomness: WordPress Fail, Feature Branching

  • I tried editing a post that I had started from Windows Live Writer, in the WordPress iPad app. Something about how the HTML got formed killed the editor there, as well as the rich text box editor on the WordPress site. Not having a laptop with me, I tried Remote Desktop from the iPad to my work computer, but that experience was too hard to do any real content creation. So I gave up and am writing this post instead.
  • I have had a rough patch at work the last few weeks. End of Release, lots of defects. We did a post-mortem, traced down each defect and where it came from, and found out:
    • Only about 10% were actual code errors. Yay, professional pride intact.
    • The rest were of the nature that no matter how much we looked at it, we would not have detected it was not what the customer wanted. The customer had to look at it, give us feedback, unfortunately in the form of a defect, because that is the stage of the SDLC we were in.
    • We are trying to find ways to get the customer to give us their eyeballs earlier in the process. The good news: The customer “gets it”, and now one of their style stars is hanging out on campfire where we can trade screenshots. Hopefully they will also be able to use the CI build and deploy to “play” with the latest code.
    • Part of the learning experience here is that the customer is going from a single pipeline of development to “Feature Branching”, and everybody is learning this new paradigm. I could probably do a blog post on things to watch for with Feature Branching.
    • To be fair, we also missed a lot of things that we would have gotten if we had read every word of the requirements very carefully. Yet, with feature requirements changing often, it was very easy to go with what we thought was common consensus rather than being a word nazi about requirements that may not have been up to date. Possible solution: as our perspective of things drifted, we could add notes to the requirements, as addendums.
  • I have not had a chance to get back to crunching car stats. The next thing I want to do there is visualize tracks in 3D using Processing. I have something working, but it’s not pretty enough yet.
  • A friend asked me for advice on how to do a timelapse of some work their employer is getting done. I hope I get to help!

Oops look at the time I am needed elsewhere. Publish without Pretty!

Developing as a Team

Update: This entry was re-posted on my company blog, here:   http://ignew.com/2013/02/16/developingasateam/  — with some updates.   

In October, I started on my current project as a solo affair. In November, we went up to 2, and then shortly afterwards 3 developers, and we had to figure out how to work together effectively.

I’ve had some good experiences in the past working as a team, and some bad ones.   I was eager to try to craft a good working experience, and with the help of my colleagues, I think we mostly succeeded.  We just did an internal presentation to my company on the subject, so now it feels right that I can blog about it.

We started collaborating on Campfire

At one person on a project, well, there’s just myself and myself.  Not much to do there; I can email the customer questions, I can save the emails I get back for stuff that’s important.

At two people, I can communicate with my teammate over IM (if they’re in), or yell across the hall.    Maybe try to maintain a wiki, and forget to do it.   Very often, one person takes primary in client communication, and the other person takes secondary.

At three people, things get a bit different.  Multi-person IM can be a pain.   One primary and two secondaries leads to a lot more communication overhead for the primary.

Enter Campfire.   http://campfirenow.com/

With Campfire, communication can be asynchronous – as long as everybody agrees to check the history of Campfire to see what they missed.   We can work on different schedules.  (It’s still more fun when we’re on at the same time.)

  • Campfire became the place to start asking “I wonder if the customer meant this or that”
  • Campfire became the history of the project
  • Campfire became the place to state what we were working on, and to trade pieces of work (“while I’m in there, I could take care of xxx”)
  • Campfire became the place to let everybody know if one was working from home, or the office, or other
  • Campfire became the place to paste code for a quick code review
  • Campfire became the place to share cute jokes, and side technical things, as we got to know each other more

Note: There are many alternatives to Campfire; that’s just what we used.  IRC is how we could have done it in the old days, although it would take some setup to get the history to save.

We centralized our communication with the client via Basecamp

If we had a question to ask the client, we would use a basecamp discussion.

  • We would create a discussion in basecamp, with a subject of “Question: xxxxx”, and the body being the question.
  • We would “loop the client in” (specifying their email addresses) to the discussion.  The client folks would get an email with the discussion thread.  We would also add ourselves to the discussion.
  • When the client responded, it got automatically added to basecamp, and we got notified by email.
  • Once the question was answered, we would rename the subject to “Answered: xxxx”.

This gave us:

  • A central place where we could see “what decisions had been made”
  • An easy way to figure out “what has not yet been answered”

I would recommend this even for a single person working on a project, because of the second bullet point.

Once again, I’m sure there are alternatives that could be used here; although I’m not sure which ones would provide a way to log customer responses with such ease, while presenting the questions in a format that they would easily understand: email.

We collaborated on Google Documents for Digesting Requirements

When we received the equivalent of 15-20 pages of requirements to digest and estimate, with links to external resources (mockups) and other “stuff”, we floundered for about 2 hours.  This was our (eventual) solution:

  • One of us pulled ALL the information into a SINGLE document, in Google Docs.
  • We individually went through the document, adding comments at places that we had questions, highlighting things that we found interesting, etc.
    • In reality, we did this together in almost-real time.  Maybe not on the same page, but I could see my teammates putting in comments, and I could even respond to them, as we all looked at the document together.
  • We then sat down face to face (although we could have done it over a phone call), and walked the document from top to bottom – everybody talking about the stuff they found interesting, how they thought it would affect the architecture, etc.
  • We then created a spreadsheet, and started filling out the “details” of the work.  We all collaborated on it as we were typing it up:
    • By the time we left that document, I *knew* that we all agreed on the same way to do the work, and the architecture.
    • The breakdown was more rigorous than I could have done myself (everybody’s rigor got UNION’ed together)
  • During the collaboration process, we also figured out how to divvy the work up.  In this case, we decided to have one spearhead doing UI work, and another one coming up behind and connecting the UI to the underlying layers.

Google Docs is just what we had available to use at my company; one could also edit Office documents in SkyDrive in a similar way.  As long as you can see what everybody is doing in real time, this works.

We Used Planning Poker and Google Spreadsheets To Estimate

[image: example planning poker cards]

Once we had a work breakdown, we would use the planning poker process to figure out low-end and high-end estimates on the individual items.

  • We did this face to face with a real planning poker deck, once.   (Thank you Microsoft)
  • We did this online, in a google spreadsheet, once.
    • We hid the numbers with white-on-white, and ran through the spreadsheet, and called out “done” when we were ready.
    • We revealed all the numbers
    • And then visited line by line to find any differences.
    • Sadly, our estimates were way off – high – but I don’t think it was because of the process.

Beware: with multiple people, you will get a higher number.  This is because those who are more sure, and bid lower, give in to the folks who bid higher.

We started collaborating on Google Documents for Demo Notes

Our process involves a weekly demo to show the client where we are at.   These demos were on Fridays at 11am.   Thus, usually on Thursday night (depending on who got to a stopping point first), or on Friday morning, we would create a “Demo Notes” document.

  • We had a “team level” update which was usually put together by the “most chatty” of us. (me)
  • Each of us had a section, which we would fill out – of the stuff we wanted to show the customer, that we had gotten done.
    • We each had a different level of detail.  That’s fine, and in time, it came to be something we celebrated.
  • In each section, we set aside questions of stuff to ask the customer, like this – it was clear when an answer was missing, because there was a spot to type it in:
    • Question:  Why did the chicken cross the road?
    • Answer:
  • We could have added a section on “pending questions”.  Our client was excellent in responding, though, so we didn’t need to.
  • At the end of the document, we added a “Decisions” section.

During the demo:

  • We would openly share the demo notes document, using it as an agenda.  Whoever’s screen was being shared with the client, the client would see the demo notes somewhere in there.
  • As one person presented, if the client had feedback, or answers, another of us would take notes on their behalf.
    • It gave all of us a job.  With three of us on the call, we had two people listening in and ensuring that anything that was important got captured.   We helped each other.  We came to trust each other in a very “you’ve got my back” kind of way.
  • When the client stated something that was even a bit complex, we would type it into the demo notes, highlight it, and ask: “Did I capture this correctly?”
  • If decisions were made, we added them into the Decisions section.

Soon after the demo:

  • We would scrub the document (a little bit), to clean up the mess that sometimes happens during the demo.
  • We would export it as a PDF and email it to the client.

After the demo:

  • If something was noted during the demo that we needed to take care of, we would add it as a “TODO: xxx” in the demo notes.
  • When it was done, one could go back to those demo notes, and change it to “DONE: xxx”.  (we didn’t all do this; maybe it was just me – but that’s the beauty of a live document, it can represent “now” rather than “at that time”).

We started sharing administrative tasks

In a past team environment, I made the mistake of “volunteering” to be the only person who did administrative tasks, like merges, status updates, etc.  I was “team lead”, after all, wasn’t I supposed to do this?  In doing so, under the guise of “protecting” my teammates, I signed up for all kinds of pain.

In this incarnation, we’re going with the philosophy “Everybody is capable of, and willing to do, everything”.

The first part, “capable”, meaning:

  • If there’s something that one of us doesn’t know how to do, we’re willing to learn
  • We don’t have to be awesome at it.   As long as we can get it done well enough to move the project forwards.

The second part, “willing”, meaning:

  • Nobody has to be saddled with exclusive pain.
  • I know my teammates have my back.  They are capable and willing to take on my pain.

So, we’ve ended up at this:

  • We have a round robin order, for the weekly administrative tasks:
    • Preparing and sending the Weekly Status Report
    • Sending the Demo notes
    • Updating the Project Burndown

Additionally, we started doing automatic merges from the parent branch into our branch.   At first, we tried a “weekly” approach to it – but that ended up being WAY too much pain in a week.   So, if we hit that again, we might be doing an “it’s your turn today” round robin approach instead.

We started using Google Docs for Status Report Documents

The person saddled with the status report would create the document, look through our time tracking system and the commit log to see what people did, and take a first stab at the contents of the report.

Then, they would invite the rest of us in to the report.  We would update our individual sections, and add a “sign off” at the bottom of the document.

Once the document had everybody’s sign off, it got sent.

There’s More To Learn

Even with all of the above awesomeness, we have room to grow.    My teammates will probably either groan or cackle with glee when they read this, but here’s what I’m thinking:

  • There are other admin tasks we were lax on.   These could be added in to the admin-task-monkey’s list of stuff to handle.
  • We might start writing test cases together.
  • We might start running each other’s test cases [more often]
  • We might use a better way of breaking down the available work – in such a way that more than one person can get their feet wet in a feature.
  • We’ve now invited the client into some of our collaboration – might learn some things there.

The underlying thing is that we were willing to do what was necessary to have a good team working environment, and we did it.   And for that, I am grateful.

Car Stats: Boogers! Foiled!

Minor setbacks in my Car Stats Gathering Solution.

Firstly, the repeated exposure to the really cold temperatures in the car seems to have done something to my Droid battery.  It no longer charges when hooked up to the “normal” chargers; I had to hook it up to a 2.1A iPad-level USB port to get it to charge.  Rut-roh.  (Luckily my brother-in-law has given me his extra batteries from when he had a Droid, so I might have a spare… but I may not be able to leave it in the car during the winter.  Data logging might be over for the time being.)

Secondly, I found out I’m uploading the wrong folder. [Maybe].  

  • I was uploading /mnt/sdcard/.torque/tripLogs/<timestampfoldername>/<stuff>
  • The log that it generates is actually /mnt/sdcard/torqueLogs/*.csv

No problem.  I synced up the other folder instead, and…

None of my code works anymore!

The reasons are:

  • the torqueLogs csv files very often have multiple header rows (does it write to the same file twice in one drive?)
  • these log files are in the units the app is set to, i.e., miles per hour instead of km/h.  Thus, all my column names are off.

I’m going to have to do some refluctoring.  Or, decide that I like reading from Torque’s internal “triplog” (I found the separate option for turning this on/off) more than the “tracklog”.  If I stick with the tracklogs, the multiple-header problem looks fixable with a quick filter – see the sketch below.
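
A minimal sketch of that filter, assuming the repeated headers are byte-for-byte identical to the first line (untested):

$lines = Get-Content $file
$header = $lines[0]
# keep the first header row, drop any repeats of it further down the file
$body = $lines | Select-Object -Skip 1 | Where-Object { $_ -ne $header }
$data = (@($header) + $body) | ConvertFrom-Csv
# (the mph-vs-km/h column-name problem still needs separate handling)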

Also, I’m having trouble getting PeasyCam to work in Processing; it gives me some fun exceptions.   More on that later, when it’s more baked – but basically I’m trying to do 3D renderings of (Long, Lat, Time) to determine the differences between the many routes I can take to work.  There’s a lot of filtering and scaling work to do yet.

Car Stats–Speed vs Mileage Revisited

Previous posts in the series:

Revisiting the second post in this series, but this time:

  • All the data collected to date
  • Red for uphill, green for downhill, size of point for magnitude
  • Blue for mostly level

[chart: instantaneous MPG vs speed, colored by altitude change]

Analysis:

  • The ECU allows idling the engine when coasting, until I get to the last gear, at which time it changes its strategy – it always provides enough fuel to keep the engine purring at a somewhat higher number.  Probably because it doesn’t disengage the drivetrain.  But it does reduce the gas such that it’s a flat line across at 110 mpg or so.  (Just enough oomph to prevent the engine from spinning down so fast that the car feels like it’s stuck in mud, probably.)
  • I get better gas mileage around 42 mph – closer to 40mpg.  Then it drops down to the 33mpg range as I get up to 55, but pretty much stays there through 75mph.
  • When accelerating, the engine opens up in such a way that I get a nice flat line at the bottom of the graph.

Code comments:

  • I added a column for altitude change – computed within each file; I didn’t want to compute it across file boundaries.
  • Sometimes, there’s a trailing comma in the column names.
  • I added better Axes to the graph.

Code:

$alldata = @(); 
$files = gci . -r -include "trackLog.csv"
foreach ($file in $files) { 
   $lines = get-content $file
   "Processing {0}: {1} lines" -f $file, $lines.count
   
   # to get around errors with header names not being valid object names
   $lines[0] = $lines[0] -ireplace '[^a-z,]','' 
   if (-not $lines[0].EndsWith(",")) { $lines[0] = $lines[0] + "," } 
   $lines[0] = $lines[0] + "AltChange"
   
   $data = ($lines | convertfrom-csv)
   
   for ($i=1; $i -lt $data.Count; $i++) { 
       $prevAlt = [double]$data[$i-1].AltitudeM
       $Alt = [double]$data[$i].AltitudeM
       if ($prevAlt -ne $null -and $Alt -ne $null) { 
           $data[$i].AltChange = $Alt - $prevAlt
       }
   }
   
   $alldata = $alldata + $data
}
"Total of {0} items" -f $alldata.count

$altmeasure = $alldata | measure-object AltChange -min -max -average

[void][Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms.DataVisualization")
$chart = new-object System.Windows.Forms.DataVisualization.Charting.Chart
$chart.width = 800
$chart.Height = 600
$chart.Left = 40
$chart.top = 30
$chart.Name = "Foo"

$chartarea = new-object system.windows.forms.datavisualization.charting.chartarea
$chart.ChartAreas.Add($chartarea)

$legend = New-Object system.Windows.Forms.DataVisualization.Charting.Legend
$chart.Legends.Add($legend)

$series = $chart.Series.Add("Series1")
$series = $chart.Series["Series1"]
#FastPoint ignores color
$series.ChartType = [System.Windows.Forms.DataVisualization.Charting.SeriesChartType]::Point
$series.IsXValueIndexed = $false

$thresh = 0.05

foreach ($data in $alldata)
{
    if ($data.MilesPerGallonInstantMpg -eq $null) { continue } 
    if ($data.GpsSpeedkmh              -eq $null) { continue } 
    if ($data.AltChange                -eq $null) { continue } 

    $speed = [double]$data.GpsSpeedkmh * 0.621371          
    $mpg =   [double]$data.MilesPerGallonInstantMpg 
    $alt =   [double]$data.AltChange  
        
    if ($alt -lt -$thresh) { 
        # downhill green
        $color = [System.Drawing.Color]::FromARGB(100,0,255,0)
        $markersize = 1 - ($alt*2)
    } elseif ($alt -lt $thresh) { 
        $color = [System.Drawing.Color]::FromARGB(100,0,0,255)
        $markersize = 2
    } else  { 
        # uphill red
        $color = [System.Drawing.Color]::FromARGB(100,255,0,0)
        $markersize = 1+$alt*2
    }  
    
    if ($markersize -gt 5) { $markersize = 5 }
    
    $datapoint = New-Object System.Windows.Forms.DataVisualization.Charting.DataPoint($speed,$mpg)   
    $datapoint.Color = $color
    $datapoint.MarkerSize = $markersize
    
    $series.Points.Add($datapoint)
}

$chartarea.AxisX.Name = "Speed MPH"
$chartarea.AxisX.Interval = 5
$chartarea.AxisX.Minimum = 0
$chartarea.AxisX.IsStartedFromZero=$true

$chartarea.AxisY.Name = "MPG"
$chartarea.AxisY.Interval = 10
$chartArea.AxisY.Minimum = 0
$chartarea.AxisY.IsStartedFromZero=$true

$Form = New-Object Windows.Forms.Form 
$Form.Text = "PowerShell Chart" 
$Form.Width = 1100 
$Form.Height = 600 
$Form.controls.add($Chart) 
$Chart.Dock = "Fill" 
$Form.Add_Shown({$Form.Activate()}) 
$Form.ShowDialog()

Car Stats: Graphing with Powershell – Where Have I Been?

Previous posts in the series:

I could barely contain myself.  Thursday night, I had all kinds of data… just calling my name.

GPS Time, Device Time, Longitude, Latitude,GPS Speed(km/h), Horizontal Dilution of Precision, Altitude(m), Bearing, Gravity X(G), Gravity Y(G), Gravity Z(G),Miles Per Gallon(Instant)(mpg),GPS Altitude(m),Speed (GPS)(km/h),Run time since engine start(s),Speed (OBD)(km/h),Miles Per Gallon(Long Term Average)(mpg),Fuel flow rate/minute(cc/min),CO₂ in g/km (Average)(g/km),CO₂ in g/km (Instantaneous)(g/km)
Thu Jan 03 16:29:06 EST 2013,03-Jan-2013 16:29:11.133,-85.57728556666666,38.24568238333333,0.0,16.0,196.9,0.0,-0.015994536,0.9956599,0.0949334,0,196.9,0,17,0,31.66748047,19.45585442,259.47247314,-
Thu Jan 03 16:29:09 EST 2013,03-Jan-2013 16:29:14.004,-85.57729401666667,38.245684716666666,0.0,12.0,195.2,0.0,-0.015994536,0.9956599,0.0949334,0,195.2,0,20,0,31.66731453,45.80973816,259.47247314,-

Friday afternoon, once work things were completed, I started playing with it.  To start, I tried to read in the CSV file into Powershell.   I figured once I had it there, I could do *something* with it.

I faced some challenges:

  • The CSV column names are not “clean”, so I needed to sanitize them
  • Some files did not have certain pieces of data.
  • CSV import gave me strings, which needed to be cast to numbers before certain operations (“137” -gt 80.0 = false; see the example after this list)
  • The units are fixed, part of the Torque app. (Actually, part of the OBDII standard)
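
A quick illustration of that casting gotcha – in PowerShell, the left operand’s type drives the comparison, so the number on the right gets converted to a string:

"137" -gt 80.0           # False – string comparison; "137" sorts before "80"
[double]"137" -gt 80.0   # True  – numeric comparison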

After getting it read in, I looked around for a graphing library.  It turns out I can use System.Windows.Forms.DataVisualization with PowerShell (thank you sir), which had some fun stuff:

  • FastPoint ignores colors
  • Had to turn off auto-scale on the Y-Axis
  • Made the charting control Dock-Fill in the form

I ended up with this script:

$alldata = @(); 
$files = gci . -r -include "trackLog.csv"
foreach ($file in $files) { 
   $lines = get-content $file
   "Processing {0}: {1} lines" -f $file, $lines.count
   
   # to get around errors with header names not being valid object names
   $lines[0] = $lines[0] -ireplace '[^a-z,]','' 
   
   $data = ($lines | convertfrom-csv)
   $alldata = $alldata + $data
}
"Total of {0} items" -f $alldata.count

$speedmeasure = $alldata | measure-object GpsSpeedKmh -min -max
$speedspread = $speedmeasure.Maximum - $speedmeasure.Minimum
if ($speedspread -le 1.0) { $speedspread = 1.0 }

$mpgmeasure = $alldata | measure-object MilesPerGallonInstantmpg -min -max
$mpgspread = $mpgmeasure.Maximum - $mpgmeasure.Minimum
if ($mpgspread -le 1.0) { $mpgspread = 1.0 }

$ffmeasure = $alldata | where-object { $_.Fuelflowrateminuteccmin -ne "-" } | measure-object Fuelflowrateminuteccmin -min -max
$ffspread = $ffmeasure.Maximum - $ffmeasure.Minimum
if ($ffspread -le 1.0) { $ffspread = 1.0 }

[void][Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms.DataVisualization")
$chart = new-object System.Windows.Forms.DataVisualization.Charting.Chart
$chart.width = 800
$chart.Height = 600
$chart.Left = 40
$chart.top = 30
$chart.Name = "Foo"

$chartarea = new-object system.windows.forms.datavisualization.charting.chartarea
$chart.ChartAreas.Add($chartarea)

$legend = New-Object system.Windows.Forms.DataVisualization.Charting.Legend
$chart.Legends.Add($legend)

$series = $chart.Series.Add("Series1")
$series = $chart.Series["Series1"]
#FastPoint ignores color
$series.ChartType = [System.Windows.Forms.DataVisualization.Charting.SeriesChartType]::Point
$series.IsXValueIndexed = $false

foreach ($data in $alldata)
{
    if ($data.MilesPerGallonInstantMpg -eq $null) { continue } 
    if ($data.Fuelflowrateminuteccmin -eq $null) { continue } 
    if ($data.Fuelflowrateminuteccmin -eq "-") { continue } 

    $speed = (([double]$data.GpsSpeedkmh              - $speedmeasure.Minimum) / $speedspread)  
    $mpg =   (([double]$data.MilesPerGallonInstantMpg -  $mpgmeasure.Minimum) /   $mpgspread)
    $ff    = (([double]$data.Fuelflowrateminuteccmin  -    $ffmeasure.Minimum) /    $ffspread)
    
    $higherspeed = $speed; 
    if ($higherspeed -gt 0.05) { $higherspeed = [Math]::Sqrt($speed) }
    $lowerspeed = $speed * $speed; 
    
    # MPG numbers seem to be clustered closer to 0 with a few annoying outliers. spread them up a bit.
    #if ($mpg -gt 0.05) { $mpg = [Math]::Sqrt($mpg) }    

    # calculate color.   
    $blue = 250*$ff;
    
    # slower = more red
    # faster = more green
    # medium = yELLOW!
    $red = 250 - (250 * $lowerspeed)
    $green = 250 * $higherspeed
    
    $datapoint = New-Object System.Windows.Forms.DataVisualization.Charting.DataPoint($data.Longitude, $data.Latitude)   
    $color = [System.Drawing.Color]::FromARgb(125,$red, $green, $blue)
    
    $datapoint.Color = $color
    $series.Points.Add($datapoint)
    $datapoint.MarkerSize = ($mpg)*5 + 1
}

$chartarea.AxisY.IsStartedFromZero=$false

$Form = New-Object Windows.Forms.Form 
$Form.Text = "PowerShell Chart" 
$Form.Width = 1100 
$Form.Height = 600 
$Form.controls.add($Chart) 
$Chart.Dock = "Fill" 
$Form.Add_Shown({$Form.Activate()}) 
$Form.ShowDialog()

Which reads everything in, scrubs some data, figures some transforms, and yields me the following pretty picture:

[map: the GPS tracks, rendered as colored points]

  • Green = faster, Red = slower.
  • Blue = Gas used (doesn’t show very well)
  • Fatter = Better MPG (I should reverse this, probably)
  • The green lines are the interstates (I71, I264, I64, and I265 shown)
  • Stop lights show up as little red dots.
  • I had to hand-scale the window till it looked right. Real co-ordinate systems some other day.

I would like to do this in 3-D, but I haven’t gotten my Processing chops quite figured out yet. Maybe next week!