The Problem
- The customer has a large PostgreSQL database; it is too large to transfer over the VPN.
- I need to develop against a local copy of the database, where I can make schema modifications at will.
My Solution
- Pull the schema
- Pull the sequence information separately (it did not come over with the schema)
- Pull full dumps for small tables (in order)
- Pull subsets for large tables (in order)
- Load everything locally
- Do this in a script
Here is the code for the solution, with some commentary on why certain things are the way they are:
GetData.ps1
$PGDUMP = get-command pg_dump.exe
$PSQL = get-command psql.exe
get-command verifies that it can find the executable in your current path, and dies if it cannot.
I try to do this for every executable I invoke in a PowerShell script.
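If you want the missing-executable case to stop the script explicitly, rather than relying on the default error behavior, a small variation looks like this (just a sketch, not what the scripts below do):
try {
    $PGDUMP = get-command pg_dump.exe -ErrorAction Stop
    $PSQL = get-command psql.exe -ErrorAction Stop
} catch {
    throw "PostgreSQL client tools not found on PATH: $_"
}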
$Env:PGCLIENTENCODING="SQL_ASCII"
$H="111.22.33.44"
$U="sgulati"
$P="5432"
$DB="deathstardb"
PGCLIENTENCODING was necessary because some of the rows in their database had UTF-8-like characters that confused the loader. I arrived at it by trial and error.
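If you hit a similar encoding mismatch, the server and client encodings are easy to check before resorting to trial and error (a quick sketch, reusing the connection variables above):
"show server_encoding;" | & $PSQL -h $H -p $P -U $U -d $DB -t
"show client_encoding;" | & $PSQL -h $H -p $P -U $U -d $DB -t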
. .\tableconfig.ps1
Because I use the same configuration for getting data as for loading data, I pushed that into its own file.
tableconfig.ps1
$FULLTABLES = @(
"ds_employees.employees",
"ds_contacts.contact_types",
"ds_contacts.companies",
"ds_contacts.systems",
"ds_inbound.clients",
"ds_inbound.feeds",
"ds_inbound.pendingfiles"
);
$PARTIALTABLES = @(
@( "ds_inbound.processedfiles",
"select * from inbound.processedfiles where clientid='555' "
),
@( "ds_inbound.missingfiles",
"select * from inbound.missingfiles where clientid='555' "
)
);
$FULLTABLES are tables I’m going to grab all data for.
$PARTIALTABLES are tables which I cannot grab all data for (they are too large), so I'm just going to grab the subset that I need.
# PG_DUMP
# http://www.postgresql.org/docs/8.1/static/app-pgdump.html
# -s = schema only
# -a = data only
# -F = format: p = plain, c = custom
# -O = --no-owner
# -f = output file
# --create = include the CREATE DATABASE command
# --inserts (-d) = dump data as INSERT commands
# --disable-triggers = disable triggers during data-only restore
# -E = encoding, e.g. SQL_ASCII
When a script invokes a command with confusing command-line options, I put a comment in the script explaining
what the options do, along with a link to the online documentation.
This helps with future maintenance of the script.
$exportfile = "${DB}.schema.sql"
if (! (test-path $exportfile)) {
"Schema: $exportfile"
& $PGDUMP -h $H -p $P -U $U --create -F p -O -s -f $exportfile ${DB}
} else {
"skip schema: $exportfile"
}
I use a convention that if something has been pulled, do not pull it again.
This enables me to selectively refresh pieces by deleting the local cache of those files.
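For example, to force the schema to be re-pulled on the next run, I just delete its cached file and run the script again (file name from the snippet above):
remove-item .\deathstardb.schema.sql
.\GetData.ps1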
Note that the pg_dump command creates a schema file, but does NOT pull current sequence values.
$exportfile = "${DB}.sequence.sql"
if (! (test-path $exportfile)) {
$sql = @"
select N.nspname || '.' || C.relname as sequence_name
from pg_class C
join pg_namespace N on C.relnamespace=N.oid
where relkind='S'
and N.nspname like 'ds_%'
"@
$listOfSequences = ($sql | & $PSQL -h $H -p $P -U $U -d $DB -t)
$sql = @()
foreach ($sequence in $listofsequences) {
$trim = $sequence.trim();
if ($trim) {
"Interrogating $sequence"
$lastval = ( "select last_value from $trim" | & $PSQL -h $H -p $P -U $U -d $DB -t )
$sql += "select setval('${trim}', $lastval);"
}
}
$sql | set-content $exportfile
} else {
"skip sequence: $exportfile"
}
This gets complicated:
- I am running a query to get every sequence in the system; then, for each of those sequences, I'm getting the last value.
- I am doing this by executing psql and capturing its output as text; I could have done it with Npgsql called directly from PowerShell, but I didn't go down that route at the time this was written.
- I am saving the information in the form of a SQL statement that sets the value correctly. This eliminates the hassle of understanding the data format.
- I am relying on the customer’s convention of prefixing their schema names with “ds_” to filter out the system sequences. You may need a different approach.
Update: My customer read through this post, and pointed out something I had missed: There’s a view called
pg_statio_user_sequences
which provides a list of sequences. You still need to loop to get the current values; nevertheless, nice to know!
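For reference, the list-building query above could start from that view instead of pg_class; a sketch of the swap (schemaname and relname are the view's columns, and I keep the same ds_ filter):
$sql = @"
select schemaname || '.' || relname
from pg_statio_user_sequences
where schemaname like 'ds_%'
"@
$listOfSequences = ($sql | & $PSQL -h $H -p $P -U $U -d $DB -t)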
foreach ($fulltable in $FULLTABLES) {
$exportfile = "${DB}.${fulltable}.data.sql";
if (! (test-path $exportfile)) {
"Full: $exportfile"
& $PGDUMP -h $H -p $P -U $U --inserts --disable-triggers -F p -E SQL_ASCII -O -a -t $fulltable -f $exportfile ${DB}
# we need to patch the SET search_path line in certain situations
if ($exportfile -eq "deathstardb.ds_inbound.feeds.data.sql") {
$content = get-content $exportfile
for($i=0; $i -lt $content.length; $i++) {
if ($content[$i] -eq "SET search_path = ds_inbound, pg_catalog;") {
$content[$i]="SET search_path = ds_inbound, ds_contacts, pg_catalog;";
}
}
$content | set-content $exportfile
}
} else {
"Skip full: $exportfile"
}
}
This executes pg_dump on the tables where we want full data, and dumps them into "rerunnable SQL" files.
However, some of the triggers (pulled along with the schema) were badly written; they made assumptions about the runtime search_path (a Postgres setting) and thus failed.
I fixed that by adding some search-and-replace code to convert the bad SQL into good SQL for the specific instances that were dying.
foreach ($partialtabletuple in $PARTIALTABLES) {
$partialtable = $partialtabletuple[0];
$query = $partialtabletuple[1];
$exportfile = "${DB}.${partialtable}.partial.sql";
if (! (test-path $exportfile)) {
"Partial: $exportfile"
& $PSQL -h $H -p $P -U $U -c "copy ( $query ) to STDOUT " ${DB} > $exportfile
} else {
"skip partial: $exportfile"
}
}
This runs psql in "copy (query) to STDOUT" mode to capture the data from a query to a file. The result is a tab-separated file.
LoadData.ps1
Things get much simpler here:
$PSQL = get-command psql.exe
$Env:PGCLIENTENCODING="SQL_ASCII"
$H="localhost"
$U="postgres"
$P="5432"
$DB="deathstardb"
. .\tableconfig.ps1
# PSQL
# -c = run single command and exit
$exportfile = "${DB}.schema.sql"
& $PSQL -h $H -p $P -U $U -c "drop database if exists ${DB};"
& $PSQL -h $H -p $P -U $U -f "${DB}.schema.sql"
& $PSQL -h $H -p $P -U $U -d ${DB} -f "${DB}.sequence.sql"
I'm going with a full-wipe model: I don't trust anything locally (I am far too creative a developer for that), so I drop the database and start fresh.
I create the schema from scratch (it reports a few errors, but that hasn't bitten me yet)
and then I set all the sequence values.
foreach ($fulltable in $FULLTABLES) {
$exportfile = "${DB}.${fulltable}.data.sql"
& $PSQL -h $H -p $P -U $U -d ${DB} -f $exportfile
}
Important: the data is loaded IN ORDER (as defined in $FULLTABLES), so as to satisfy foreign key dependencies.
To figure out the dependencies, I used pgAdmin's "dependencies" tab on each object and drew the graph out on paper.
It seemed daunting at first, but upon persevering, it was only 6-7 tables deep. For comparison, a job I had in 2006 had something like 30+ tables, 7 levels deep.
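If drawing the graph by hand ever gets old, the foreign key relationships can also be pulled straight out of the catalog; something along these lines (a sketch against pg_constraint, not part of the scripts above, reusing the same connection variables):
$sql = @"
select conrelid::regclass as child_table,
       confrelid::regclass as parent_table
from pg_constraint
where contype = 'f'
order by 2, 1
"@
$sql | & $PSQL -h $H -p $P -U $U -d $DB -t
That lists child/parent pairs; the load order in $FULLTABLES just has to put parents before children.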
foreach ($partialtabletuple in $PARTIALTABLES) {
$partialtable = $partialtabletuple[0];
$query = $partialtabletuple[1];
$exportfile = "${DB}.${partialtable}.partial.sql";
get-content $exportfile | & $PSQL -h $H -p $P -U $U -d ${DB} -c "copy $partialtable FROM STDIN "
}
Source Control
I check everything into source control (Subversion for me):
GetData.ps1
LoadData.ps1
Data\tableconfig.ps1
Data\deathstardb.schema.sql
Data\deathstardb.sequence.sql
Data\deathstardb.ds_employees.employees.data.sql
Data\deathstardb.ds_contacts.contact_types.data.sql
Data\deathstardb.ds_inbound.processedfiles.partial.sql
(etc)
Important bits here:
- My client did not have a copy of their schema in source control. Now they do.
- The naming convention makes it easy to know what each file is.
- I'm keeping the data in a separate folder from the scripts that make it happen.
Additional Scripting
There are some additional scripts that I wrote, which I am not delving into here:
- the script that, when applied to a copy of the production database, creates what I am developing with.
- Luckily, what I'm doing is all new stuff, so I can rerun this as much as I want; it drops a whole schema and recreates it with impunity
- the script to apply the above (dev) changes to my local database
- the script to apply the above (dev) changes to my development integration database
Whenever I’m working with a database, I go one of two routes:
- I use the above “make a copy of prod” approach as my “start over”, and only have a script of forward-changes
- I make my script do an “if exists” for everything before it adds anything, so it is rerunnable.
With either approach it's very important that when a production rollout occurs, I start a new changes script and grab a new copy of the schema.
There is a newer third route: use software that states with authority, "this is what it should be", and allows a comparison and update to be made against an existing data source. Visual Studio Database Solutions are one such example; ERStudio is another. Hopefully, it does its job right! Alas, this client does not have that luxury.
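For completeness, here is roughly what the second route's "if exists" guard can look like when driven from PowerShell rather than from inside the SQL itself. A sketch, using the same connection variables as LoadData.ps1 (the table and file names here are made up):
# only apply a change script if the object it creates is not already there
$exists = ( "select count(*) from pg_tables where schemaname='ds_dev' and tablename='mynewtable'" | & $PSQL -h $H -p $P -U $U -d $DB -t -A )
if ("$exists".trim() -eq "0") {
    & $PSQL -h $H -p $P -U $U -d $DB -f "changes\001_create_mynewtable.sql"
}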
In conclusion
Getting my development environment repeatable is key to reducing stress. I believe The Joel Test calls it out in #2: "Can you make a build in one step?"
I used a ton of tricks to get it to work; it felt like I was never going to get there, but I did. If you do something 3-4 times, you might want to automate it.
May your journey be similarly successful.