Duct Tape and Legos

While discussing the finer points of avoiding RSIs (repetitive strain injuries, aka carpal tunnel, but there are several others) at work, JS mentioned “if he could only tilt his trackball up by a few degrees”…

Welp, I’ve been struggling the last 2-3 weeks with all kinds of arm pain.. leading me to try all kinds of things. I liked the “tilt” idea. So I made one from Legos:

It’s not the only thing I’ve made. I tried making a Droid Charging Station, but that didn’t work so well; however, my Lego Monitor Tilt works pretty well:

And then.. Duct Tape. This is the original version of the “I need a handle to pull the air filter out of the furnace” thingy:

Life is good.

Old Fogey

In the department of how-old-a-fogey-am-I, an email I wrote at work might make a good blog post here. It shows my early geekness. It has been altered to be more of a blog post.

A coworker wrote:

Looks like he’s got some dev cred. Plus I think that’ll make Sunny and me the old guys no longer…

My Response:

OMG I used to lust after the contents of that book!

I wax nostalgic:

I grew up in Liberia, West Africa; my parents taught at Cuttington University College.

We had power 2-3 hours in the evening (not enough gasoline for the campus generator) – just enough to get the refrigerators cool. You took what you needed out at 4pm when the power came on, then taped the door shut (paper tape; no duct tape available there) so nobody would accidentally open it, and it would get cool enough to stay cold till the next day.

When I first met a computer, it was a TRS-80 Model I (I think; might have been a II – image on left). A Fulbright professor’s kid (Lars F) showed me a little game called “Adventure”. Then he showed me BASIC. And I was hooked. (PEEK 14400!)  (Side note: Lars also introduced me to Dungeons and Dragons, and the Rubik’s Cube.)

Later on, the campus got a “computer lab” – of TI-99/4A‘s. My parents, being math professors, taught the courses, so they brought one home with them. With that book. Of course, we didn’t have any of those cartridges.. all I could do was look through that book. (and this one also: image on right)

But yeah, I’d start planning my programs during school, on paper.. power would come on, and I’d type them in furiously, getting them to work.. play.. and then power would go out, and it was gone. Repeat daily.

(after power went out, light 4-5 candles and read novels for the rest of the night till mom pestered me to go to bed)

When I came to the US for the first time.. 1983 … I was amazed at, in order:
a) 24 hour power
b) hot water
c) vending machines
d) hamburgers
e) 300 baud modems hooking up to Iowa State University’s CS computer .. which introduced me to..
f) UNIX!

*gratitude for the little things*

Test Data Creators (Integration Tests)

Everybody seems to love Unit Tests

I agree, they are wonderful. I have lots of logic that is unit tested … and it’s easy to set up (especially with tools like Moq)…

But it’s not what I rely on. I have found unit tests too limited to give me the confidence I’m looking for as I write a system. I want as much tested as I can get – including the data access layers, how everything fits together, and whether my dependency injectors are working correctly.

Another view: in my current project, I’m using NHibernate as the data provider. The general consensus on mocking NHibernate is: don’t do it. Instead, use an in-memory database (that didn’t work for me – I had to maintain different mapping files), or write an IRepository around it.

When I do that, what I find is that most of the logic needing tests lies in the subtleties around my queries (LINQ and otherwise) – the rest is plumbing data from one representation to another. While unit testing that is valid, it does not cover the places where I find most of my failures. Stated in GWT syntax, my tests would be “GIVEN perfect data, WHEN logic is executed, THEN the app does the right thing” – “perfect data” being the elusive part.

I have tried providing a List<T>.AsQueryable() as a substitute data source in unit tests – and that works well, as long as my queries do not get complicated (involving NHibernate .Fetch and so on). Once the queries grew beyond my ability to mock them with .AsQueryable(), my “test” situation (LINQ against a list) started to differ significantly from the “real” situation (LINQ against a database), and I started to spend too much time getting the test just right, and no time on real code.
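
For reference, here’s a minimal sketch of that kind of substitution – the interface, entity, and class names are illustrative, not from a real project:

using System.Collections.Generic;
using System.Linq;

public class Account
{
    public string Name { get; set; }
    public bool IsClosed { get; set; }
}

// The real implementation would expose an NHibernate-backed IQueryable<Account>;
// the fake just wraps a List<T> with .AsQueryable().
public interface IAccountSource
{
    IQueryable<Account> Accounts { get; }
}

public class FakeAccountSource : IAccountSource
{
    private readonly List<Account> _accounts;

    public FakeAccountSource(params Account[] accounts)
    {
        _accounts = new List<Account>(accounts);
    }

    // LINQ-to-Objects here, versus LINQ-to-NHibernate in production;
    // that gap is exactly where the two situations drift apart (.Fetch, etc.).
    public IQueryable<Account> Accounts
    {
        get { return _accounts.AsQueryable(); }
    }
}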

My Solution – Test Data Creators

My solution for the past 5 years over multiple projects has been “Integration Tests”, which engage the application from some layer (controller, presenter, etc) all the way down to the database.

“Integration”, ”Unit”, and “Functional” tests — there seem to be a lot of meanings out there. For example, one boss’s idea of a “Unit” test was: whatever “Unit” a developer was working on got tested. In that case, it happened to be the “Unit” of batch-importing data from a system using 5 command-line executables. Thus, for this article only, I define:

  • Unit Test – A test called via the nUnit framework (or similar) that runs code in one target class, using mocks for everything else called from that class, and does not touch a database or filesystem
  • Integration Test – A test called via the nUnit framework (or similar) that runs code in one target class, AND all of the components that it calls, including queries against a database or filesystem
  • Functional Test – Something I haven’t done yet that isn’t one of the above two
  • Turing Test – OutOfScopeException

Having built these several times for different projects, there are definite patterns that I have found that work well for me. This article is a summary of those patterns.

Pattern 1: Test Data Roots

For any set of data, there is a root record.
Sometimes, there are several.
In my current project, there is only one, and it is a “company”; in a previous project, it was a combination of “feed” and “company”.

The Pattern:

  • Decide on a naming convention – usually, “TEST_”+HostName+”_”+TestName
  • Verify that I’m connecting to a location where I can delete things with impunity — before I delete something horribly important (example: if connection.ConnectionString.Contains(“dev”))
  • If my calculated test root element exists, delete it, along with all its children.
  • Create the root and return it.
  • Use IDisposable so that it looks good in a using statement, and any sessions/transactions can get closed appropriately.

Why:

  • The HostName allows me to run integration tests on a build server at the same time as on a local machine, both pointed at a shared database.
  • I delete at the start (rather than at the end) so that test data is left behind after the test is run. Then I can query it manually to see what happened. It also leaves behind excellent material for demoing functionality to the client and for ad-hoc manual testing.
  • The TestName allows me to differentiate between tests. Once I get up to 20-30 tests, I end up with a nice mix of data in the database, which is helpful when creating new systems – there is sample data to view.

Example:

using (var root = new ClientTestRoot(connection,"MyTest")) { 
    // root is created in here, and left behind. 
    // stuff that uses root is in here.  looks good. 
}
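
A test root class along these lines might look like the sketch below – the table name, SQL, and safety check are illustrative assumptions, not code from the actual project:

using System;
using System.Data;

// Sketch of a Pattern 1 test root: naming convention, safety check,
// delete-then-create, and IDisposable so it reads well in a using block.
public class ClientTestRoot : IDisposable
{
    private readonly IDbConnection _connection;

    public string RootName { get; private set; }

    public ClientTestRoot(IDbConnection connection, string testName)
    {
        // Only proceed against a database where I can delete things with impunity.
        if (!connection.ConnectionString.ToLower().Contains("dev"))
            throw new InvalidOperationException("Refusing to create test data outside dev.");

        _connection = connection;
        RootName = "TEST_" + Environment.MachineName + "_" + testName;

        // In real life the children get deleted first (see Pattern 4).
        Execute("delete from client where name = '" + RootName + "'");
        Execute("insert into client (name) values ('" + RootName + "')");
    }

    public void Dispose()
    {
        // The data is intentionally left behind for inspection and demos;
        // only session/transaction cleanup would go here.
    }

    private void Execute(string sql)
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = sql;
            cmd.ExecuteNonQuery();
        }
    }
}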

Pattern 2: Useful Contexts

Code Example:

using (var client = new ClientTestRoot(connection,"MyTest")) { 
    using (var personcontext = new PersonContext(connection, client)) { 
       // personcontext.Client
       // personcontext.Person
       // personcontext.Account
       // personcontext.PersonSettings
    }
}

I create a person context, which has several entities within it, with default versions of what I need.

I also sometimes provide a lambda along the lines of:

new PersonContext(connection, client, p => { p.LastName = "foo"; p.Married = true; })

to allow better customization of the underlying data.

I might chain these things together. For example, a Client test root gives a Person context, which gives a SimpleAccount context … or, separately, a MultipleAccount context.
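
A sketch of what such a context might look like (my illustration – the entity, the default values, and the use of NHibernate’s ISession here are all assumptions):

using System;
using NHibernate;

public class Person
{
    public virtual int Id { get; set; }
    public virtual string ClientName { get; set; }
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public virtual bool Married { get; set; }
}

// A Pattern 2 context: creates a default Person under the test root,
// and lets the caller tweak the defaults via a lambda before saving.
public class PersonContext
{
    public Person Person { get; private set; }

    public PersonContext(ISession session, ClientTestRoot client, Action<Person> customize = null)
    {
        Person = new Person
        {
            ClientName = client.RootName,
            FirstName = "Test",
            LastName = "Person",
            Married = false
        };

        if (customize != null)
            customize(Person);   // e.g. p => { p.LastName = "foo"; p.Married = true; }

        session.Save(Person);
        session.Flush();
        // Account, PersonSettings, etc. would be created here the same way.
    }
}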

Pattern 3: Method for Creating Test Data can be Different from What Application Uses

By historical example:

  • Project 1 (2006) – normal app data path: DAL generated by CodeSmith; test data generation: OracleConnection + OracleCommand, by hand
  • Project 2 (2007) – normal app data path: DAL generated by CodeSmith; test data generation: generic ADO.NET, using metadata from a SELECT statement plus naming conventions to derive INSERTs and UPDATEs from DataTables
  • Project 3 (2008) – normal app data path: DAL generated by CodeSmith; test data generation: DAL generated by CodeSmith – in this case, we had been using it for so long that we trusted it, so we used it in both places
  • Project 4 (2010) – normal app data path: existing DAL + business objects; test data generation: Entity Framework 1
  • Project 5 (2011) – normal app data path: WCF + SqlConnection + SqlCommand + stored procedures; test data generation: none! (see Pattern 7 below)
  • Project 6 (2012) – normal app data path: NHibernate with fancy mappings (References, HasMany, cleaned-up column names); test data generation: NHibernate with simple mappings – raw column names, no References, no HasMany, etc.

The test data creator is used only by the tests – never by the application itself – and it maintains its own connection. However you do it, get it up and running as quickly as you can, and grow it as needed. Refactor it later. It does NOT need to be clean – any problems will come to light as you write tests with it.
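
As an illustration of how blunt the test-data side can be (this is a sketch; the helper, table, and column names are made up), raw ADO.NET inserts are perfectly fine even when the application itself goes through NHibernate:

using System.Collections.Generic;
using System.Data;

// Quick-and-dirty insert helper, used only from tests.
public static class TestInsert
{
    public static void Row(IDbConnection connection, string table, params object[] nameValuePairs)
    {
        var names = new List<string>();
        var values = new List<string>();
        for (int i = 0; i < nameValuePairs.Length; i += 2)
        {
            names.Add((string)nameValuePairs[i]);
            values.Add("'" + nameValuePairs[i + 1] + "'");   // quick and dirty is acceptable here
        }

        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "insert into " + table +
                " (" + string.Join(",", names.ToArray()) + ") values (" +
                string.Join(",", values.ToArray()) + ")";
            cmd.ExecuteNonQuery();
        }
    }
}

// usage: TestInsert.Row(connection, "client", "clientid", 123, "name", "TEST_BOX1_MyTest");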

Pattern 4: Deleting Test Data is Tricky Fun

The easiest way, which everybody seems to agree on, is: drop the database and reload. I’ve had the blessing of being able to do this exactly once; it’s not the norm for me – usually I deal with shared development databases, or complicated scenarios where I don’t even have access to the development database schema.

Thus, I have to delete data one table at a time, in order.

I have used various strategies to get this done:

  • Writing SQL DELETE statements by hand — this is where I start.
  • Putting ON DELETE CASCADE in as many places as it makes sense. For example, you probably don’t want to delete all Employees when deleting a Company (how often do we delete a company? Are you sure?), but you could certainly delete all User Preferences when deleting a User. Use common sense.
  • Create a structure that represents how tables are related to other tables, and use that to generate the delete statements.

This is the hardest part of creating test data. It is the first place that breaks — somebody adds a new table, and now deleting fails because foreign keys are violated. (long term view: that’s a good thing!)

I got pretty good at writing statements like:

delete from c2
where c2.id in ( 
    select c2.id from c2
    join c1 on ...
    join root on ....
    where root.id = :id )

After writing 4-5 of them, you find the pattern: the delete query for a child of C2 looks very similar to the delete query for C2 itself, just with a little bit more added. All you need is some knowledge of what to delete first, and where you can go after that.

How Tables Relate

I no longer have access to the codebase, but as I remember, I wrote something like this:

var tables = new List<TableDef>(); 
var table1 = new TableDef("TABLE1","T1"); 
{ 
     tables.Add(table1); 
     var table2 = table1.SubTable("TABLE2","T2","T1.id=T2.parentid"); 
     { 
         tables.Add(table2); 
         // etc etc
     }
     // etc etc
}
tables.Reverse();   // so that child tables come before parent tables

I could then construct the DELETE statements using the TableDef’s above – the join strategy being the third parameter to the .SubTable() call.
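
Filling in the rest as a sketch (mine, not the original code; the assumption that every table has an “id” column is purely illustrative), a TableDef along these lines could generate those DELETE statements:

// Each child remembers its parent and the join clause, so the DELETE for any
// table can be built by walking up the chain toward the root.
public class TableDef
{
    public string Name { get; private set; }
    public string Alias { get; private set; }

    private readonly TableDef _parent;
    private readonly string _joinToParent;

    public TableDef(string name, string alias, TableDef parent = null, string joinToParent = null)
    {
        Name = name;
        Alias = alias;
        _parent = parent;
        _joinToParent = joinToParent;
    }

    public TableDef SubTable(string name, string alias, string joinToParent)
    {
        return new TableDef(name, alias, this, joinToParent);
    }

    // delete from X where X.id in (select x.id from X x join parent ... join root ... where root.id = :id)
    public string BuildDelete(string rootIdColumn)
    {
        var joins = "";
        for (var t = this; t._parent != null; t = t._parent)
            joins += " join " + t._parent.Name + " " + t._parent.Alias + " on " + t._joinToParent;

        return "delete from " + Name +
               " where " + Name + ".id in (" +
               "select " + Alias + ".id from " + Name + " " + Alias + joins +
               " where " + rootIdColumn + " = :id)";
    }
}

For the table2 defined above, table2.BuildDelete("T1.id") would produce roughly: delete from TABLE2 where TABLE2.id in (select T2.id from TABLE2 T2 join TABLE1 T1 on T1.id=T2.parentid where T1.id = :id).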

Slow Deletes

I ran into a VERY slow delete once, on Oracle. The reason: the optimizer had decided that it was faster to do a row scan of 500,000 elements than to execute this 7-table-deep delete. I ended up rewriting it as:

select x.ROWID, ... ;  foreach ... { delete ... where rowid = ... }

Moral(e): you will run into weird deletion problems. That’s okay, it goes with the territory.

Circular Dependencies

Given:

  • Clients have People
  • Feeds have Files For Multiple Clients
  • Files Load People
  • A loaded person has a link back to the File it came from

This led to a situation where if you tried to delete the client, the FK from Feed to Client prevented it. If you tried to delete the feed, the FK from People back to File prevented it.

The solution was to NULL out one of the dependencies while deleting the root, to break the circular dependency. In this case, when deleting a Feed, I nulled the link from person to any file under the feed to be deleted. I also had to do the deletes in order: Feed first, then Client.

Example:
Here’s some real code from my current project, with table names changed to protect my client:

var exists =
	(from c in session.Query<OwnerCompany>() where c.name == companyNameToLookFor select c).
		FirstOrDefault();
if (exists != null)
{
	using (var tran = session.BeginTransaction())
	{
		// rule #1: only those things which are roots need delete cascade
		// rule #2: don't try to NH it, directly delete through session.Connection

		// ownercompany -> DELETE CASCADE -> sites
		// sites -> manual -> client
		// client -> RESTRICT -> feed
		// client -> RESTRICT -> pendingfiles
		// client -> RESTRICT -> queue
		// queue -> RESTRICT -> logicalfile
		// logicalfile -> CASCADE -> physicalfile
		// logicalfile -> CASCADE -> logicalrecord
		// logicalrecord -> CASCADE -> updaterecord

		var c = GetConnection(session);

		c.ExecuteNonQuery(@" 
			delete from queues.logicalfile 
			where queue_id in ( 
			   select Q.queue_id 
			   from queues.queue Q
			   join files.client CM ON Q.clientid = CM.clientid
			   join meta.sites LCO on CM.clientid = LCO.bldid
			   where LCO.companyid=:p0
			)
			", new NpgsqlParameter("p0", exists.id)); 

		c.ExecuteNonQuery(@" 
			delete from queues.queue 
			where clientid in ( 
				select bldid
				from meta.sites
				where companyid=:p0
			)
			", new NpgsqlParameter("p0",exists.id)); 

		c.ExecuteNonQuery(@"
			delete from files.pendingfiles 
			where of_clientnumber in (
				select bldid
				from meta.sites
				where companyid=:p0
			) ",
			new NpgsqlParameter(":p0", exists.id));
		c.ExecuteNonQuery(@"
			delete from files.feed 
			where fm_clientid in (
				select bldid
				from meta.sites
				where companyid=:p0
			) ", 
			new NpgsqlParameter(":p0",exists.id)); 
		c.ExecuteNonQuery(@"
			delete from files.client 
			where clientid in (
				select bldid
				from meta.sites
				where companyid=:p0
			) ",
			new NpgsqlParameter(":p0", exists.id)); 

		session.Delete(exists);
		tran.Commit();
	}
}

In this case, ownercompany is the root, and almost everything else (a lot more than what’s in the comments) CASCADE DELETEs from the tables I delete above.

I did not write this all at once! This came about slowly, as I kept writing additional tests that worked against additional things. Start small!

Pattern 5: Writing Integration Tests Is Fun!

Using a library like this, writing integration tests becomes a joy. For example, here is a test verifying that only open accounts are shown:

Given("user with two accounts, one open and one closed"); 
{
   var user = new UserContext(testClientRoot); 
   var account1 = new AccountContext(user, a => { a.IsClosed = true;  a.Name = "Account1"; }); 
   var account2 = new AccountContext(user, a => { a.IsClosed = false; a.Name = "Account2"; }); 
}
When("We visit the page"); 
{ 
    var model = controller.Index(_dataService); 
}
Then("Only the active account is seen"); 
{
    Assert.AreEqual(1,model.Accounts.Count); 
    ... (etc)
    Detail("account found: {0}", model.Accounts[0]); 
}

The GWT stuff above is material for a different post; it’s an experiment around generating documentation as to what should be happening.

When I run this test, the controller is running against a real data service.. which could go as far as calling stored procedures or a service or whatever.
When this test passes, the green is a VERY STRONG green. There was a lot that had to go right for the test to succeed.

Pattern 6: Integration Tests Take Time To Iterate

Unit tests are fast – you can easily run 300-500 in a few seconds, so developers run ALL of them fairly often. Integration tests, not so much.

Solution: Use a CI server, like TeamCity, and run two builds:

  • Continuous Integration Build – does the compile, then runs the unit tests in **/bin/*UnitTests.dll
  • Integration Test Build – triggers if the previous build is successful; compiles, then runs the tests in **/bin/*Tests.dll

I.e., the Integration Test build runs a superset of tests – integration tests AND unit tests both.
This relies on a naming convention for the test DLLs – *UnitTests.dll being more restrictive than *Tests.dll.
There’s another approach I have used, where integration tests are marked with a category and Explicit() – so that local runs skip them, but the integration server includes them by category name. Over time, however, I have migrated to keeping them in separate assemblies – so that the unit test project does not reference any database libraries, keeping it “pure”.
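
For reference, the category + Explicit marking looks something like this in NUnit (the fixture and test names are just placeholders):

using NUnit.Framework;

// Local test runs skip [Explicit] tests by default; the CI server runs them
// by explicitly including the "Integration" category.
[TestFixture]
public class AccountIntegrationTests
{
    [Test, Explicit, Category("Integration")]
    public void OpenAccountsAreReturned()
    {
        // ... hits the real database via the test data creators ...
    }
}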

When working on code, I usually run one integration test at a time, taking 3-4 seconds per run. When I’m done with that code, I’ll run all the tests around that component.. maybe 30 seconds? Then I check it in, and 4-5 minutes later I know whether everything is green, thanks to the CI server. (AND, it worked on at least two computers – mine and the CI server.)

Pattern 7: Cannot Create; Search Instead

This was my previous project. Their databases had a lot of replication going on – no way to run that locally – and user and client creation was locked down. There was no “test root creation”; it got too complicated, and I didn’t have the privileges to do it even if I had wanted to tackle the complexity.

No fear! I could still do integration testing – like this:

// Find myself some test stuff
var xxx = from .... where ... .FirstOrDefault(); 
if (xxx == null) Assert.Ignore("Cannot run -- need blah blah blah in DB"); 
// proceed with test
// undo what you did, possibly with fancy transactions
// or if it's a read-only operation, that's even better.

The Assert.Ignore() paints the test yellow – with a little phrase, stating what needs to happen, before the test can become active.

I could also do a test like this:

[Test] 
public void EveryKindOfDoritoIsHandled() { 
    var everyKindOfDorito = // query to get every combination
    foreach (var kindOfDorito in everyKindOfDorito) {
        var exampleDorito = ...... .FirstOrDefault(); 
        // verify that complicated code for this specific Dorito works
    }
}

Doritos being a replacement word for a business component that they had many different varieties of, with new ones being added all the time. As the other teams created new Doritos, if we didn’t have them covered (think select…case.. default: throw NotSupportedException()), our test would break, and we would know we had to add some code to our side of the fence. (To complete the picture: our code had to do with drawing pretty pictures of the “Dorito”. And yes, I was hungry when I first wrote this paragraph.)

Interestingly, when we changed database environments (they routinely wiped out Dev integration after a release), all tests would go to Yellow/Ignore, then slowly start coming back as the variety of data got added to the system, as QA ran through its regression test suite.

Pattern 8: My Test Has Been Green Forever.. Why Did It Break Now?

Unit tests only break when code changes. Not so with Integration tests. They break when:

  • The database is down
  • Somebody updates the schema but not the tests
  • Somebody modifies a stored procedure
  • No apparent reason at all (hint: concurrency)
  • Intermittent bug in the database server (hint: open support case)
  • Somebody deleted an index (and the test hangs)

These are good things. Given something like TeamCity, which can be scheduled to run whenever code is checked in and also every morning at 7am, I get a history of “when did it change” — because at some point it was working, then it wasn’t.

If I enable the integration tests to dump what they are doing to the console, I can go back through TeamCity’s build logs, see what happened when the test was last green and what it looked like when it failed, and deduce what the change was.

The fun part is, if all the integration tests are passing, the system is probably clear to demo. This reduces my stress significantly come demo day.

Pattern 9: Testing File Systems

As I do a lot of batch processing work, I create temporary file systems as well. I use %TEMP% + “TEST” + testname, and delete it thoroughly before recreating it, just like with the databases.
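
A minimal sketch of that helper (my illustration, not the original code):

using System.IO;

// Per-test temp directory, deleted thoroughly before being recreated,
// mirroring the database approach.
public static class TestFileSystem
{
    public static string Create(string testName)
    {
        var path = Path.Combine(Path.GetTempPath(), "TEST" + testName);

        if (Directory.Exists(path))
            Directory.Delete(path, recursive: true);

        Directory.CreateDirectory(path);
        return path;
    }
}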

In Conclusion

Perhaps I should rename this to “My Conclusion”. What I have found:

  • I love writing unit tests where they make sense – a component with complicated circuitry can use a test around that circuitry.
  • I love even more writing integration tests over the entire system – one simple test like “CompletelyProcessExampleFile1” tells me at a glance that everything that needs to be in place for the REAL WORLD Example File 1 to be processed is working.
  • It takes time.
  • It’s definitely worth it (to me).
  • It’s infinitely more worth it if you do a second project against the same database.

May this information be useful to you.

Diabetes Type II

I am a diabetic, type II.   I talk more about that on my livejournal.

I was reading a book, Wheat Belly by William Davis, MD. It made a lot of sense, and fit directly into the knowledge that I already had – it just gave me a new term, “AGEs”. This confluence inspired me to put together a visio^H^H^H^H^H Creately diagram of the concepts I knew of so far about my diabetes.

Here it is (click for full size):

This will probably get updated and reposted over time.  If you have any questions, ask, and I’ll tell you what I understand (but remember: I am NOT a doctor.  Just a geek.  With Diabetes Mellitus Type II.)

Duplicating sections of a PostgreSQL database using PowerShell

The Problem

  • The customer has a large PostgreSQL database; it is too large to transfer over VPN.
  • I need to develop against a local copy of the database, where I can make schema modifications at will.

My Solution

  • Pull the schema
  • Pull the sequence information separately (it did not come over with the schema)
  • Pull full dumps for small tables (in order)
  • Pull subsets for large tables (in order)
  • Load everything locally
  • Do this in a script

Here is the code for the solution, with some commentary as to why certain things are the way that they are:

GetData.ps1

$PGDUMP = get-command pg_dump.exe 
$PSQL = get-command psql.exe

get-command verifies that it can find the executable in the current path, and complains if it cannot.
I try to do this for every executable I invoke in a PowerShell script.

$Env:PGCLIENTENCODING="SQL_ASCII"
$H="111.22.33.44"
$U="sgulati"
$P="5432"
$DB="deathstardb"

PGCLIENTENCODING was necessary because some of the rows in their database had UTF-8-like characters that confused the loader. I arrived at it by trial and error.

. .\tableconfig.ps1

Because I use the same configuration for getting data as for loading data, I pushed that into its own file.

tableconfig.ps1

$FULLTABLES = @( 
   "ds_employees.employees", 
   "ds_contacts.contact_types",
   "ds_contacts.companies",
   "ds_contacts.systems", 
   "ds_inbound.clients",
   "ds_inbound.feeds",
   "ds_inbound.pendingfiles"
); 
$PARTIALTABLES = @( 
   @(   "ds_inbound.processedfiles", 
        "select * from inbound.processedfiles where clientid='555' "
   ), 
   @(   "ds_inbound.missingfiles",
        "select * from inbound.missingfiles where clientid='555' "
    )
);

$FULLTABLES are tables I’m going to grab all data for.
$PARTIALTABLES are tables which I cannot grab all data for (they are too large), so I’m just going to grab the subset that I need

# PG_DUMP
# http://www.postgresql.org/docs/8.1/static/app-pgdump.html
# -s = schema only
# -a = data only
# -F = format: p = plain, c = custom
# -O = --no-owner
# -f = output file
# -C = --create
# -d = --inserts
# -X = --disable-triggers
# -E = encoding, e.g. SQL_ASCII

When a script calls a command with confusing command-line options, I put a comment in the script explaining
what the options mean, along with a link to the online documentation.
This helps with future maintenance of the script.

$exportfile = "${DB}.schema.sql"
if (! (test-path $exportfile)) { 
   "Schema: $exportfile"
   & $PGDUMP -h $H -p $P -U $U --create -F p -O -s -f $exportfile ${DB}
} else { 
   "skip schema: $exportfile"
}

I use the convention that if something has already been pulled, it does not get pulled again.
This lets me selectively refresh pieces by deleting the local cache of those files.

Note that the pg_dump command creates a schema file, but does NOT pull current sequence values.

$exportfile = "${DB}.sequence.sql"
if (! (test-path $exportfile)) { 
    $sql = @"
select N.nspname || '.' || C.relname as sequence_name
from pg_class C
join pg_namespace N on C.relnamespace=N.oid
where relkind='S'
and N.nspname like 'ds_%'
"@
    $listOfSequences = ($sql | & $PSQL -h $H -p $P -U $U -d $DB -t)
    $sql = @()
    foreach ($sequence in $listofsequences) { 
       $trim = $sequence.trim(); 
       if ($trim) { 
           "Interrogating $sequence"
           $lastval = ( "select last_value from $trim" | & $PSQL -h $H -p $P -U $U -d $DB -t ) 
           $sql += "select setval('${trim}', $lastval);" 
       }
    }
    $sql | set-content $exportfile
} else { 
    "skip sequence: $exportfile"
}

This gets complicated:

  • I am running a query to get every sequence in the system.. then for each of those sequences, I’m getting the last value.
  • I am doing this by executing PSQL and capturing its output as text; I could have done it with Npgsql called directly from PowerShell, but I didn’t go down that route at the time this was written.
  • I am saving the information in the form of a SQL statement that sets the value correctly. This eliminates the hassle of understanding the data format.
  • I am relying on the customer’s convention of prefixing their schema names with “ds_” to filter out the system sequences. You may need a different approach.

Update: My customer read through this post, and pointed out something I had missed: there’s a view called pg_statio_user_sequences which provides a list of sequences. Still need to loop to get the current values… nevertheless, nice to know!

foreach ($fulltable in $FULLTABLES) { 
  $exportfile = "${DB}.${fulltable}.data.sql";
  if (! (test-path $exportfile)) { 
     "Full: $exportfile"
     & $PGDUMP -h $H -p $P -U $U --inserts --disable-triggers -F p -E SQL_ASCII -O -a -t $fulltable -f $exportfile ${DB}

	 # we need to patch the set searchpath in certain situations
	 if ($exportfile -eq "deathstardb.ds_inbound.feeds.data.sql") { 
		 $content = get-content $exportfile
		 for($i=0; $i -lt $content.length; $i++) { 
			 if ($content[$i] -eq "SET search_path = ds_inbound, pg_catalog;") { 
				$content[$i]="SET search_path = ds_inbound, ds_contacts, pg_catalog;"; 
			 }
		 }
		 $content | set-content $exportfile
	 }

  } else { 
     "Skip full: $exportfile"
  }
}

This executes PG_DUMP on the tables where we want full data, and dumps them into “rerunnable sql” files.
However, some of the triggers (which are pulled with the schema) were badly written; they made assumptions about the runtime search_path (a Postgres thing) and thus failed.
I fixed that by adding some search and replace code to convert bad sql into good sql for the specific instances that were dying.

foreach ($partialtabletuple in $PARTIALTABLES) { 
  $partialtable = $partialtabletuple[0];
  $query = $partialtabletuple[1]; 
  $exportfile = "${DB}.${partialtable}.partial.sql"; 
  if (! (test-path $exportfile)) { 
      "Partial: $exportfile"
	  & $PSQL -h $H -p $P -U $U -c "copy ( $query ) to STDOUT " ${DB} > $exportfile
  } else { 
	 "skip partial: $exportfile"
  }
}

This runs PSQL in “copy (query) to STDOUT” mode to capture the data from a query to a file. The result is a tab-separated file.

LoadData.ps1

Things get much simpler here:

$PSQL = get-command psql.exe
$Env:PGCLIENTENCODING="SQL_ASCII"
$H="localhost"
$U="postgres"
$P="5432"
$DB="deathstardb"

. .\tableconfig.ps1

# PSQL
# -c = run single command and exit

$exportfile = "${DB}.schema.sql"
& $PSQL -h $H -p $P -U $U -c "drop database if exists ${DB};"
& $PSQL -h $H -p $P -U $U -f "${DB}.schema.sql"
& $PSQL -h $H -p $P -U $U -d ${DB} -f "${DB}.sequence.sql"

I’m going with the model of doing a full wipe – I don’t trust anything locally; I am far too creative a developer for that – hence I drop the database and start fresh.
I create the schema from scratch (there are a few errors; it hasn’t bitten me yet)
and then I set all the sequence values.

foreach ($fulltable in $FULLTABLES) { 
  $exportfile = "${DB}.${fulltable}.data.sql"
  & $PSQL -h $H -p $P -U $U -d ${DB} -f $exportfile
}

Important: The data is loaded IN ORDER (as defined in $FULLTABLES), so as to satisfy FK dependencies.
To figure out dependencies, I used pgadmin‘s “dependencies” tab on an object, and drew it out on paper.
It seemed daunting at first, but upon persevering, it was only 6-7 tables deep. (For comparison, a job I had in 2006 had 30+ tables total, maybe 7 deep.)

foreach ($partialtabletuple in $PARTIALTABLES) { 
  $partialtable = $partialtabletuple[0];
  $query = $partialtabletuple[1]; 
  $exportfile = "${DB}.${partialtable}.partial.sql"; 
  get-content $exportfile | & $PSQL -h $H -p $P -U $U -d ${DB} -c "copy $partialtable FROM STDIN "
}

Source Control

I check everything into source control (subversion for me):

GetData.ps1
LoadData.ps1
Data\tableconfig.ps1
Data\deathstardb.schema.sql
Data\deathstardb.sequence.sql
Data\deathstardb.ds_employees.employees.data.sql
Data\deathstardb.ds_contacts.contact_types.data.sql
Data\deathstardb.ds_inbound.processedfiles.partial.sql
(etc)

Important bits here:

  • My client did not have a copy of their schema in source control. Now they do.
  • The naming convention makes it easy to know what each file is.
  • I’m keeping the data in a separate folder from the scripts that make it happen.

Additional Scripting

There are some additional scripts that I wrote, which I am not delving into here:

  • the script that, when applied to a copy of the production database, creates what I am developing with.
    • Luckily, what I’m doing is all new stuff, so I can rerun this as much as I want – it drops a whole schema and recreates it with impunity
  • the script to apply the above (dev) changes to my local database
  • the script to apply the above (dev) changes to my development integration database

Whenever I’m working with a database, I go one of two routes:

  • I use the above “make a copy of prod” approach as my “start over”, and only have a script of forward-changes
  • I make my script do an “if exists” for everything before it adds anything, so it is rerunnable.

With either approach, it’s very important that when a production rollout occurs, I start a new changes script and grab a new copy of the schema.

There is a newer third route – using software that states with authority, “this is what the schema should be”, and allows a comparison and update to be made against an existing data source. Visual Studio database projects are one such example; ER/Studio is another. Hopefully it does its job right! Alas, this client does not have that luxury.

In conclusion

Getting my development environment repeatable is a key to reducing stress. I believe The Joel Test calls it #2: “Can you make a build in one step?”.

I used a ton of tricks to get it to work.. it felt like I was never going to get there.. but I did. If you do something 3-4 times, you might want to automate it.

May your journey be similarly successful.

Backing up and Restoring

I recently helped my wife set up her new work computer. I could not do everything; the IT guy had to come in and add it to the domain, and she installed various essentials like Minesweeper (j/k, I think it was Photoshop).
Being a good geek, I intend to have a good image of that computer now that it’s set up.

So, I practiced on my laptop tonight.

Step 1: Back up the machine.
Hook up the external hard drive.
Boot off Hiren’s Boot Disk.
Basically followed http://sir-sherwin.blogspot.com/2011/04/disk-imaging-using-acronis-true-image.html
(except that I used Seagate Disk Wizard Something Something with Acronis support).
2.5 hours later, I have several .tib files (partitioned into 4.7 GB chunks).
For reference, the laptop had 55 GB of used HD space.

Step 2: Play with the backup.
Attached the external hard drive to my big computer.
Downloaded http://www.vmware.com/products/converter/
Started converting the .tib file into a VMware image.
There were a lot of options.. I ended up hydrating to an 80 GB virtual drive, and got to choose the partitioning scheme.

1.5 hours later, I have a VMware image I can run.

Step 3: See Laptop living in VM on big computer
Left = Original; Right = VM

There are a few problems with drivers.. to do it perfectly, I would sysprep the machine…
It definitely validates the backup, though.

Just ’cause I could.
Yep, life is good.

What do I bill?

My first client in the consulting world at my current gig was a big company. They welcomed me spending all kinds of time, including overtime, to get their product built faster. They were also using somewhat older technologies – VS2008, VB.Net, WCF/SOA, a hand-written DAL, no BOL – to get the work done, and as a result, I knew pretty much what I was going to do. There was no “play” time needed.

My second client is a small company. Every hour counts – the fewer hours, the better. It’s also entirely up to me how to build it – so, how new do I go?

  • I could use all the cutting-edge stuff that I don’t know fluently, and charge the client for me learning it the hard way.
    • This seems unethical.
  • I could stick to older stuff that I do know.   Only sell the client the skills I’m awesome at, and spend time to bring some skills up to awesome.
    • It takes a lot of extra time to become awesome at things.
    • As I have a family I like.. this is impractical.
  • A mix of the above.
    • How?
My Employer’s Solution

They have set it up so that I am “salaried” at 36 hours.. and I have an extra 4 hours to do a self-directed project. During this time, I can become awesome at stuff I don’t yet know! And it doesn’t have to be related to any current or future project, just stuff I want to figure out.
(I have a huge list of things I want to play with and get working…)

My solution

I have partitioned my list into three sections:

  • BUCKET A: The stuff I know how to do fairly well
    • Architecture decisions
    • Project management
    • Console app, Parsing command line options
    • Setting up diagnostics in various places to make the utilities easier to use
    • Database design
    • Setting up local test data environment
    • Research into options available
  • BUCKET B: The stuff I don’t know how to do yet, that I will definitely have to learn. I do charge for this, and try to get it working as fast as I can.
    • Fluent NHibernate + NHibernate
    • MVC3.  I am NOT going webforms, sorry.
    • Dealing with ENUMs in PostgreSQL and mapping those to enums in C#.
  • BUCKET C: The stuff I need to play with to figure out &  use, but if I don’t figure it out immediately, I can get by without it.
    • SpecFlow / BDD
    • fastest way to set up and tear down data in the database for functional/integration testing
    • Selenium

Then, it’s not that I stay away from Bucket C; it’s more that I focus most of my time on Bucket A… till I feel I’ve been productive.. then B, and then maybe C.

I also timebox Bucket C. For example, I researched SpecFlow today, figured out that yes, I want to use it.. and then I cut that off at 0.5 hours. The rest of my SpecFlow “play” time will be “on my own” – until I get it working well enough that I can move it down into Bucket A.

This gives me a clear conscience – I’m not charging the customer for playing with / learning stuff that was not required. Instead, I realize that with any new technology, there’s going to be some “settling” of it into my toolbelt, and that does take time. I cannot “rush” that time – so I’ll do myself a favor and not pressure myself into learning it quickly.

For that matter, I timebox Bucket B as well. For example, I could not get C# enums to save as PostgreSQL ENUMs in about an hour of trying. I had a workaround – save as TEXT – so I went for it; I can revisit this at another time.

How this works at home

Hanging out with my wife is one of the joys of my life, and I do not shelve that easily. Luckily, and synergistically, my wife had dinner with friends today, so I was able to make tonight into “tech playtime”. I combined playtime with bringing my VMware work image home on an external hard drive – and that worked beautifully as well.

My next chance for evening tech playtime, unfortunately, may not be till next week. But if I can get ahead of my required hours at work, I might convert some work time into play time. Getting in to work early means I can start playtime at 3 or 4 pm!

 

Why would I ever read a Technical Book?

When I was in college, I used to laugh at the “technical books” section of the bookstore in the mall. Well, actually I didn’t, because at the time I would go there exclusively to drool over all the science fiction books.. $3.50 or so each.. which I could not afford, as I was living on ramen noodles and cans of peas, because that’s all I could afford. (link)

Then, when I became a working stiff paid professional, I would go to the technical books section and laugh, because.. I knew all that stuff. There was a lot less to know in the early 1990s, and there was a lot of stuff “beneath me” (dBase II, FoxPro, etc). (I was cool; I was porting apps from Clipper S87 to 6.0, and nothing came even close to the beauty of LPC.)

In the late 1990s, I would sneer, because I was a close-minded anti-Microsoft pro-Linux-Perl guy, and I really did not want to know MFC. I did, however, buy and own the Perl Cookbook, which opened my mind to the amazing ways to hack things into place to get things done. I used that book a LOT. (The next year, I got sent to a C# class because they had an extra spot, and I changed my mind about Microsoft.) Actually, I would say that Microsoft changed, and no longer annoyed me. C# was almost as awesome as LPC. (link)

For a while, in 2006 – when I found myself facing unemployment (it lasted all of about 2 weeks) – I found myself browsing technical books, lamenting: so much to learn, what shall I learn? I ended up gravitating towards unit testing and ASP.NET WebForms, which I learned almost entirely via Google, not from a book. Thank you .NET Rocks and http://www.hanselminutes.com/ for the pointers! In this case, the dead-tree books did not do anything for me, and being unemployed, I felt I shouldn’t be spending $$$ if the stuff was available on the internet for free.

I did buy some technical books in 2008 to read on vacation – wow, that was a rousing success. (Not.) I hardly picked them up. A waste of $100+. (Patterns and Practices in C#, something else.) They’re still too expensive, when all of that knowledge is available for relatively free on the internet.

So, I repeat the question:

Why would I ever buy a technical book?

My answer:

I bought one two nights ago. I wanted to know how to use EF Code First – I couldn’t sleep – so I bought the book on my iPad Kindle, and I read it, cover to cover, in about 45 minutes. And my lightbulb was born.

Here is what is different:

  • I have a very specific application, for which I might need the technology. This is not “reading for fun”, but rather, reading to get a specific job done.
  • I don’t know enough about the technology to know what to search for. (searching online only gave me introductory examples, nothing with real meat.)
  • I’m approaching it not as “a ton of money spent for a dead paperweight that I’ll never look at again”, but rather as a fairly inexpensive class briefing me on a specific subject, which I can refer back to later. Most of these books cost me less than an hour’s work, after taxes. (I am a professional, and I need to know as much as possible, as quickly as possible, to give my client the kind of service that I want to give them.)
  • I have an e-reader on my iPad, and pretty soon, on an e-ink device. I can archive with impunity, without killing bookshelves.

And thus, I’m sold. Here’s what I choose to read up on for my current client, to ensure I’m giving them the best that I can:

  • PostgreSQL (done) (pdf, free)
  • EF CodeFirst (done) ($10)
  • EF (general)
  • Asp.Net MVC 3 +/- Razor
  • Dependency Injection (Structuremap vs Unity)

Whee!