AWS RDS SQL Server Native Backup and Restore

Had to learn this yesterday to clone a production environment down to a lower environment. Figured it qualified for a blog post.

exec msdb.dbo.rds_backup_database 
         @source_db_name='xxxProd',
         @s3_arn_to_backup_to='arn:aws:s3:::xxx-sql-native-backup/xxxProd.bak',
         @overwrite_S3_backup_file=1,
         @type='full';

exec msdb.dbo.rds_task_status;   -- rerun until lifecycle = SUCCESS (see the wait-loop sketch below)

ALTER DATABASE xxxUAT SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE xxxUAT;
exec msdb.dbo.rds_restore_database
         @restore_db_name='xxxUAT',
         @s3_arn_to_restore_from='arn:aws:s3:::xxx-sql-native-backup/xxxProd.bak';
exec msdb.dbo.rds_task_status;   -- again, rerun until lifecycle = SUCCESS

delete from xxxUAT.dbo.SensitiveTableStuff;
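
Polling rds_task_status by hand gets old.  Here is a minimal PowerShell sketch of the wait loop, assuming the SqlServer module’s Invoke-Sqlcmd and a SQL login on the instance (add -Username/-Password as needed); the endpoint name is illustrative:

# Wait for the RDS backup/restore task on a database to finish.
$server = 'xxx.xxxxxx.us-east-1.rds.amazonaws.com'    # illustrative endpoint
do {
    Start-Sleep -Seconds 30
    $task = Invoke-Sqlcmd -ServerInstance $server -Database 'msdb' `
        -Query "exec msdb.dbo.rds_task_status @db_name='xxxProd';" |
        Select-Object -Last 1                         # take one row if several tasks come back
    Write-Host "lifecycle: $($task.lifecycle)"
} while ($task.lifecycle -in 'CREATED','IN_PROGRESS')
if ($task.lifecycle -ne 'SUCCESS') { throw "task ended as $($task.lifecycle)" }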

The gotchas were:

  • Had to set up an option group that adds SQL Server native backup and restore (the SQLSERVER_BACKUP_RESTORE option, which needs an IAM role that can reach the S3 bucket) to the RDS instance.  It took a few minutes to apply, but the RDS instance did not reboot or go offline while it did.
  • Could not restore over an existing database, hence the DROP DATABASE above.
  • Learned the hard way that while you can detach a database using SSMS, you can’t re-attach one that way; reattaching uses a custom stored procedure.  And detaching and attaching have nothing to do with deleting.

From Certifiable to Certified

Yesterday, I passed my AWS Certified Developer Associate exam.  I started studying for it two weeks ago.

Actually, no, I started studying for it a year ago, when I started using AWS.  At the time I thought I was going to take the Certified Solutions Architect Associate exam, but we went for this one because it’s easier, and I had two weeks to study for it.

Why two weeks?  My company has been going for an AWS partnership, and for that we need two certs.  It wasn’t an immediate priority… until some stuff came up that really affected some of our value.  We raised the flag: this needs to be a priority.  Schedule it and take it, and let’s send two people, so that if either one passes, we’re good.  We both passed.

What did I learn in those two weeks about how to study for the exam?  If I could go back in time to my younger self, what would I tell him?

ACloud.Guru, but not like that

https://acloud.guru is VERY good.  I’m keeping my subscription so I can listen to some of the master-level courses during my commute.  However, I had prescribed myself something like:

  1. Listen and watch all the videos
  2. Take all the Quizzes
  3. Take the practice exam
  4. Go read other boring stuff.

Inefficient and Fear-Based

Turns out, if you get a subscription, they have this thing called an Exam Simulator (beta).  At the end of it you get to see what you did wrong (as well as what you did right), with an explanation of each question and a link to more resources:


The suggestion is to attack the problem from a different angle.

Thing is, both the real exam and the practice exam asked some pretty in-depth questions that the video instruction does not directly cover.  Ryan does say, “Make sure you read the FAQs,” and that’s 100% for real.  You gotta read the FAQs.

However, the FAQs themselves reveal the core ideas central to the service you are reading about, as do the videos.  My suggestion: once you are comfortable, you get denser information faster from the FAQs.

I also found that the developer exam really wanted to know whether you had actually used something, called some methods.  And if you hadn’t (like me: I had barely used scan and query, since mostly I was being a sysop and using Terraform to set things up for others), then reading through the actions, attributes, headers, etc. gives you a feel for what actually working with the thing is like (assuming you have plenty of experience and have done many things in the past).  You get into the minds of the system’s designers, and from that you can infer all kinds of stuff.

Some Of My Notes

I started out taking notes on paper.  Then I took notes in Google Docs.  Then I switched over to Sheets.

I’m a visual person and I like organization; having a big grid in which to “store” information (the human brain is optimized for location tracking) was very helpful for me.  If I got a concept wrong, I could go back to the place on my sheet where that information was hiding… it would probably help to have a background image on which to place information even better.  I’ll do that next time.

The Actual Exam

Here’s what was different from what I expected.  Location: Downtown Louisville, Jefferson Tech, I think.

  • Parking was easy to find early enough in the morning.  I navigated straight to a parking lot about two blocks away and paid for the day (less to worry about!).
  • The proctors were very nice, friendly people who gently guided my anxiety-stricken self through what I needed to do.
  • Much to my surprise, I got there an hour early, and they invited me to take my test early. I was done by the time my actual start time came around.
  • The actual exam was pretty much at the same level of detail as the acloud.guru exam, except with more “you really gotta know this” choose-all-that-apply questions.  I detected at least one stale question on the test, which I commented on.

My weakest areas: federation and CloudFormation, I think.  Makes sense; I haven’t really had to do those, and we use Terraform for the same tasks.

Woot woot.  Okay, gotta go run a race now.

Learning New Stuff: Terraform, AWS, Lambda, DotNetCore

This last week has been a crash course in new stuff for me.  I’m helping with the scripts that manage the infrastructure around a project: freeing up the developer to work on user stories, I’m taking care (or trying to take care) of the deployment aspects.  In a way it’s a big catch-up to other folks who have been charging ahead into newer technologies, so it’s not like I’m having to discover things on my own.  On the other hand, everything has already evolved to N+2 and I’m at N-1, so it’s a bit of a firehose.

Here goes, though; stuff I’ve picked up this week:

  • TeamCity build calling a PowerShell script to do deployment stuff.
    • New to me: I didn’t know PSPROJ was a thing – that I can now step-debug through PowerShell in Visual Studio.  We’ve come a long way since PowerShell 1.0.
    • Dotnet lambda package, zipping, sending to S3 (sketched after this list)…   Somebody else whose first name rhymes with “Miss” and last name rhymes with “Aye Lee” wrote this part for something else; I get to adapt it for the current project.
  • AWS API Gateway => AWS Lambda => C# NetCore 1.0 => MVC chain
    • Got to learn about the “version hell” that happens in NetCore 1.0.  It will probably be much nicer by the time we get to 2.0 or better; just the 1.0-to-1.1 jump is pretty rough at the moment.  Take the intersection of the bleeding edge of NetCore as it was 7 months ago with the bleeding edge of where AWS is taking their Amazon Linux.  We had to do a deviation and host some stuff via EB (Elastic Beanstalk) rather than Lambda.
    • I’ll be playing more with this on Monday as I try to debug something into not giving me a 500 internal server error.
  • Terraform as a way of deploying AWS Resources
    • Modules, and Variables, and Data sources, oh my.
    • Debugging Terraform – I found the GET/POST requests; the problem was a Content-Type on a resource in an S3 bucket.  You can’t get .body for that type, so I couldn’t get the hash value.
    • Partial applies, because sometimes you don’t recognize a change and don’t want to mess up somebody else’s experimentation (see the -target sketch after this list).
    • I got to copy what Miss Aye Lee did, nice job Dude.
  • Rewrapping my brain around Build Configurations
    • Thanks to previous training, Build Config = Debug (PDB) vs Release, but also = XSLT Config Transforms to get configuration values per environment.
    • Now, Build Config = just Debug vs Release, for “how debuggable do you want this.”
    • There’s a completely different avenue for “which settings do you want to use” (sketched after this list).
    • More playing with this on Monday.
  • AWS Security stuff
    • IAM users for local access from Visual Studio while developing.
    • Roles for when running in Lambda, EC2, etc.  (Built by Terraform)
    • Policy documents describing what access is available to what (built by Terraform), shared by the IAM user and the role.
    • All the stuff that was actually built by Terraform using a Terraform runner credential
    • The Terraform Runner’s policy that allows it to create all the things
    • All running in another account that we cross-account assume roles into (sketched after this list).
    • Somebody whose first name does not sound like XML and whose last name might have to do with Whiskey is a good teacher and dreamer.
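
A sketch of that packaging step, assuming the Amazon.Lambda.Tools CLI (dotnet lambda) is wired into the project and the AWS CLI is installed; the project path and bucket name are made up:

# Package the NetCore Lambda project and push the zip to S3 for the deploy step.
cd .\src\MyLambdaProject        # hypothetical project folder
dotnet restore
dotnet lambda package --configuration Release --output-package ..\..\artifacts\app.zip
aws s3 cp ..\..\artifacts\app.zip s3://xxx-deploy-artifacts/app.zip    # illustrative bucket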
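The partial applies under the Terraform bullet work via -target, which scopes a plan to specific resource addresses so everyone else’s in-flight work stays unplanned.  A minimal sketch (the resource address is hypothetical):

# Plan and apply only the resources you mean to touch.
cd .\env-qa1
terraform plan -target=aws_lambda_function.app -out=tfplan
terraform apply tfplan          # applies exactly what the saved plan shows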
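On the build-configuration split: configuration now only answers “how debuggable,” while “which settings” rides on a separate mechanism; in NetCore that’s typically an appsettings.{Environment}.json file selected by an environment variable.  Roughly (names are hypothetical):

# Debug vs Release controls symbols and optimization only.
dotnet publish -c Release -o .\publish
# Which settings to use is an orthogonal knob, picked up at runtime.
$env:ASPNETCORE_ENVIRONMENT = 'QA1'    # hypothetical environment name
dotnet .\publish\MyApp.dll             # hypothetical assembly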
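And the cross-account piece, sketched with the AWS Tools for PowerShell: the runner assumes a role in the target account, and Terraform’s AWS provider picks the temporary credentials up from the environment (the account ID and role name are made up):

# Assume the deploy role in the other account and export its credentials.
$r = Use-STSRole -RoleArn 'arn:aws:iam::111111111111:role/TerraformRunner' -RoleSessionName 'deploy'
$env:AWS_ACCESS_KEY_ID     = $r.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $r.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN     = $r.Credentials.SessionToken
terraform plan                  # runs as the assumed role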

The end result:

  • If starting from scratch – done by a human.
    • cd env-shared;  terraform plan & apply to create shared resources, like S3 buckets, VPCs, RDS instances, etc.
    • Any further environment changes also applied by a human via script file.  No clicky the mouse.
  • New environment – like QA1 or QA2 or other – done by a human.
    • cd env-qa1 (or mkdir, if starting new)
    • copy and edit a file that says what the environment name is
    • terraform plan and apply to create all the things
      • DynamoDB tables
      • Queues
  • Every build to be deployed – automated, not done by a human (see the sketch below).
    • powershell to get stuff up to S3
    • powershell to call terraform to deploy
      • Lambda
      • API Gateway hangs out with this.
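
Condensed, that per-build step looks something like this sketch; the bucket, key, and Terraform variable are invented, and the real thing is a TeamCity step invoking a script:

# Per-build deploy: push the artifact, then let Terraform roll the Lambda forward.
aws s3 cp .\artifacts\app.zip s3://xxx-deploy-artifacts/builds/app-42.zip    # illustrative key
cd .\env-qa1
terraform apply -var 'lambda_package_key=builds/app-42.zip'    # hypothetical variable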

Pretty powerful stuff.  Glad I’m learning it.  It will feel better by the end of next week, when I actually have something completely checked in that completely works.