Keeping Track of Work Tasks

At work, we started using Zendesk. It has been very effective. Also, due to COVID-19, we were briefly down to two people able to work; recently, one of us recovered. Rather than “fixed days working from home each week” (I used to get Wednesdays WFH), we’re switching to a “three-person rotation of who is in the office”, since we need to have an onsite presence for our primary job function, which is to keep all the folks in the warehouse productive.

This is leading to “onsite” days, where I pretty much deal with the flow of tickets, and then deal with the other little problems that need to be fixed. These days run entirely on email and Zendesk. The person onsite triages the incoming stuff so that the offsite folks can focus on their project work, and goes back through Pending tickets, updating statuses as we wait for other folks to respond and do their parts.

And then there are two days of “offsite”. Blessed ability to focus deeply on tasks, because the interruption buzz is being handled by the onsite person. I spent about two hours working on an 8-part CTE (common table expression) with a colleague today. Got the web page that reports it partially done; another 4-6 hours tomorrow and it will be done.
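
(For anyone who hasn’t met them: a multi-part CTE is just several CTEs chained in one WITH clause, each building on the previous; ours had eight parts. A trimmed-down sketch, with made-up tables:)

-- sketch only: two chained CTEs instead of eight, against invented tables
with recent_orders as (
    select customer_id, amount
    from orders
    where order_date >= '2020-06-01'
),
totals_per_customer as (
    select customer_id, sum(amount) as total
    from recent_orders
    group by customer_id
)
select customer_id, total
from totals_per_customer
order by total desc;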

Currently, we have an Open Projects spreadsheet (we were using Microsoft Teams Tasks, and prior to that ClickUp) which lists these projects. I’m thinking we’re going to transfer them into Zendesk but with a tag of “project” — and alter the other views to exclude them. That itself is a project.

Short Update: Monitoring for Data Problems in Complex Systems

The pattern: for each data problem you can detect with a query, create a view that returns the offending rows:

create view problem_view_darth_vader as (select ..... problem data)

In our case, the problem was that a particular item had two UPC codes from two suppliers. Add a couple of extra columns, like department='production' and fix='how to fix it'.
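
For illustration, here’s roughly what such a view might look like. (The table and column names are invented; the real one queried our ERP’s item/supplier tables.)

-- hypothetical sketch: flag items that carry more than one UPC across suppliers
create view problem_view_darth_vader as
select s.item_id,
       'production' as department,
       'keep one UPC; remove the duplicate from the other supplier record' as fix
from supplier_items s
group by s.item_id
having count(distinct s.upc) > 1;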

Next, write a generic thing which runs select table_name from information_schema.views where table_name like 'problem_view_%' and loops through the results; if any view has rows, it displays them. I used a DataTable and DataAdapter.Fill() because then you get the columns even if there are no rows, unlike Dapper’s <dynamic>. Show this on a web page, along with a div id=SuccessDiv that only renders when every view comes back empty.
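
(The page itself was C#, with a DataAdapter per view. Purely to sketch the logic, here’s the same loop expressed as a T-SQL cursor; nothing here is specific to my page.)

-- sketch of the checker loop: find the problem views, select from each one;
-- any rows that come back are problems to display
declare @schema sysname, @view sysname, @sql nvarchar(max);
declare problem_views cursor local fast_forward for
    select table_schema, table_name
    from information_schema.views
    where table_name like 'problem_view_%';
open problem_views;
fetch next from problem_views into @schema, @view;
while @@fetch_status = 0
begin
    set @sql = N'select * from ' + quotename(@schema) + N'.' + quotename(@view) + N';';
    exec sp_executesql @sql;
    fetch next from problem_views into @schema, @view;
end
close problem_views;
deallocate problem_views;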

Then, using your favorite “how’s my website doing” tool (mine is Datadog Synthetic Tests), set up a test that looks for the success div, and a monitor that tracks failures. Datadog takes screenshots, so the errored data is saved with the failed monitor.

Result: here’s the monitor, zoomed in to when I introduced a test view to create a problem:

And here is the screen that was captured as part of the failed test:

The only side problem is that you don’t have control over how frequently the checks run. Probably different pages could be made with different view-name suffixes, like problem_view_darth_vader_60 to check every hour, for example. I may improve it in the future.

I think the next things to put monitors up for are probably these particular things that get stuck without a thing. (Sorry, redacted, but you get the idea).

Short updates

I surrender. My life as it is now will not support the time to make beautiful posts. Like, I would have bolded the word beautiful there and added a picture. Because marketing.

Nope, this sucker is being written on my phone in the morning as I sip coffee. I have 25 minutes before I need to leave for work and I’m not dressed yet. Evidence (testing the phone camera):

Warming device

So this is how I’ll need to write posts in the future. And the purpose is: to be a testament to my future self that I DID do geeky things and find things interesting.

Yesterday … I found out the hard way that I had to ask ALL the domain controllers about when people last logged in (the lastLogon attribute is not replicated between DCs). Apparently we’ve had one controller be the PDC for a very long time, and machines remember their favorite DC for a long time.

Script modified from https://social.technet.microsoft.com/Forums/ie/en-US/d86b7495-729a-44e2-ad68-5e154ecbd6d7/getaduser-lastlogontimestamp-is-reporting-blank?forum=winserverpowershell
 
# LastLogon is a raw FILETIME (Int64); convert it with [DateTime]::FromFileTime.
# (Get-Date on the raw value treats it as .NET ticks and lands centuries off.)
$( foreach ($dc in (Get-ADDomainController -Filter "domain -eq '$((Get-ADDomain).DnsRoot)'" | % { $_.HostName }) ) {
    # ask every DC; lastLogon is not replicated between controllers
    Get-ADUser -Filter '*' -SearchBase 'DC=******' -Server $dc -Properties LastLogon |
        Select SamAccountName, LastLogon, @{n='LastLogonDC'; e={ $dc }}
} ) |
    Group SamAccountName |
    % { ($_.Group | Sort LastLogon -Descending)[0] } |
    Select SamAccountName, LastLogon, @{n='LastLogon1'; e={ [DateTime]::FromFileTime($_.LastLogon) }}, LastLogonDC |
    Export-Csv -Path "lastlogon.csv"

Chart: Maybe later

Result: I can figure out approximately how many CALs we need. Going to update the DCs to the next server version.

Crystal Reports Fun

I haven’t posted, or talked much, about my new work. Basically, I joined a company whose primary job is things; most work focuses around an ERP, and it’s all about stabilizing and optimizing flows of information around processes. Edit/Update: I wrote this in, like, February. A lot has happened since then, COVID-19 especially. It is now June.

One of my new skillsets is Crystal Reports. To which my developer peeps usually say, “Ewww.” Eh, it works. The main thing is, the ERP system that everybody is logged into has a way to embed Crystal Reports into it. That ERP system (Prophet 21) is also backed by a SQL Server database, with a decent table structure and a decent user-defined-field add-on strategy, making many tasks.. sane.

However, navigating the environment for what actually works has been hard. There was not a lot of prior art doing things at the architectural level I would like, so I’ve had to fumble around. Here’s what does NOT work:

  • Using anything other than a particular driver (I forget which one) for connecting to SQL; specifically, do not use the SQL Native Client driver. This is because when the ERP is hosting the report, it does a switcharoo on the connection to connect to the right “instance” of the ERP, and if you use the wrong driver, that doesn’t work. You don’t find out till you try to run the report from the ERP.
  • Directly using a stored procedure to do the heavy lifting for a report. You can, and it auto-creates report parameters for you, but those parameter types lead to less-than-optimal user dialogs (think dates-plus-times instead of dates).
  • Using a parent Crystal report that includes a subreport, to get around the previous problem. It works great for a crosstab, but page headers don’t come out well in a grid-style report. However, I am able to bind a parameter from the parent report through a calculated field to plug into the subreport (and thus the stored procedure).
  • Also, if you have a parent report that only calls subreports and doesn’t actually connect to the database itself, the ERP system doesn’t like that, because it cannot find the database connection to override.
  • Not choosing a printer when designing the report. Apparently this affects font choices, and Arial looks better than Device CPI 10.

Here’s what does seem to work:

  • I can use views to encapsulate business logic, for example “the view of xxx customers”, where xxx is a particular program that customers can enroll in.
  • I can use stored procedures to D.R.Y. up, for example, the logic that gets the raw data on the number of designer frames sold per customer within a time period.
  • I can call stored procedures from a “command” custom SQL block in Crystal Reports. In that block, I can declare @t table (…) and insert into @t exec SPROC {?param1} {?param2} to get data from a stored procedure (see the sketch after this list).
    • For example, there are two reports: one is a crosstab that breaks out customers across brands and # of frames, and the other is a detail report. The detail report goes into how many DIFFERENT brands were sold plus the number of frames sold; these numbers roll into a formula describing what % back the customer gets, per the rules of the contract(s). Both reports use the same stored procedure to get the underlying data.
    • However, using this method, I have not yet been able to go from (user input) to (calculated field) to (input into command) to (call stored procedure). On the other hand, I can do a lot of manipulation in T-SQL, so that should be fine.
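
To make that concrete, here’s a sketch of the pattern. The procedure name, parameters, and columns are all invented for illustration; the {?...} tokens are Crystal’s command-parameter syntax.

-- the shared stored procedure (the D.R.Y. part); schema is hypothetical
create procedure GetFramesSoldPerCustomer
    @StartDate date,
    @EndDate date
as
begin
    select o.customer_id, i.brand, count(*) as frames_sold
    from order_lines o
    join items i on i.item_id = o.item_id
    where o.order_date >= @StartDate and o.order_date < @EndDate
    group by o.customer_id, i.brand;
end

And the Crystal “command” that both reports would use to call it:

-- inside a Crystal Reports command; {?StartDate} and {?EndDate} are
-- command parameters that Crystal substitutes before running the SQL
declare @t table (
    customer_id int,
    brand varchar(50),
    frames_sold int
);
insert into @t
    exec GetFramesSoldPerCustomer {?StartDate}, {?EndDate};
select customer_id, brand, frames_sold from @t;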

I’m continuing to learn a lot of things.. next week looks like it will be learning the ways of our EDI interfaces with some bigger customers, like the names you’ll find at a mall. (I don’t know how much I can talk about our customers.) (Edit: it is preferred that I do not.)

Side note – we have added Datadog to our infrastructure, and monitoring is making our lives better, I think. That’s a separate blog post, but in summary: immediate notification on errors, and notification on lack of success. Except on weekends.

Looking back at this post I wrote four months ago.. wow, there’s so much detail I could go into about all the little things I’ve learned and tweaked. Like some PowerShell to inspect bad emails in an SMTP dead-letter folder. And a PowerShell script to automate connecting my Cisco VPN. Messing with Red Hat 8.1, re-learning all the Unix things, including SMB shares. However.. that’s another post, coming up shortly.

Using Timesheet data from Harvest to create a Force Directed Graph Animation

This is too complicated to try to put into words, so there’s a screencast instead.

Video of the final product:

https://www.youtube.com/watch?v=_TJi9yAm_kM

Video explaining how to do it:

https://youtu.be/DrrA9sd6AAQ

Source: https://github.com/sunnywiz/HarvestToGephi/blob/master/HarvestToGephi/Program.cs

In text form: C# code converts a Harvest CSV extract into node and edge CSV files. Then, in Gephi, import the two files, convert the start/end dates into an interval, and set up the prettiness. Record a long video with lots of stabilization, and then speed it up.

Sorting Watch Later by Video Length

EDIT – this does not work anymore; see the comments.

Something I do often – I sort my Watch Later list in YouTube by how long the videos are. I think it’s “shortest video first, for the quickest entertainment bang for my time” (which is semi-sad).

I wanted to automate it. Found several dead ends: YouTube doesn’t let you use their Apps Script API to modify the Watch Later list, and the page doesn’t let you run a bookmarklet that loads an external JS file.

However, Chrome allows for snippets.. and some folks pointed out that I could use various other software (Greasemonkey? another name was given) for running arbitrary JavaScript on a page.

Ctrl-Shift-I, Sources, >>, Snippets

https://gist.github.com/sunnywiz/064a4d07f8469074356e73eba5dc1215

That’s my solution for now. It gets done what I need. I wish I could have done it more generically.

Thanks to Matt H. for pointing me down the path of Chrome snippets.  That dude is bright.

Tree of Directions Revisited

image

We are in the process of possibly changing residences, so (during my coding fun times on Wednesday mornings with a friend) I revisited this project.  When I left it, it looked like this:

image

In fact, that was so not-cute that I had tried to go 2D with it instead.   

I revisited it, and went through several stages. 

First, I had to re-do how I called Google Maps. They no longer have a usable “free” tier, but they now have a “pay as you go” tier, which equates to $5 per 1,000 route requests. No problem. But, to keep costs low, I added a cache strategy so I didn’t ask for the same thing twice. (Caveat: I think I forgot to turn off traffic, so identical requests at different times were delayed by different amounts.)

Then, I tried to create a surface underneath the plot. I did this using some electrical-engineering stuff I once helped somebody with. Here’s the plot plus a surface underneath it. Better, but not the awesome I was hoping for:

image

I called that the “Minecraft” look. (That’s a very rough draft at only 10×10; it’s about the same at 50×50, except the squares are smaller.) Then I went for something where I calculated the midpoint and drew a polyhedron from each of the four vertices, and it looked a lot better:

image

(The previous images were GitHub previews of STL files; this one is the 3D Model Viewer built into Windows.)

But it was still too.. jaggedy. So I did a few things. First, rather than a spiderweb, I did a “ramp” effect (filling in underneath each path); second, I trimmed off all the residential streets (<30 mph) at the ends of the routes. This gave me a much better print, closer to what I had in mind when I started: something that showed “which are the best ways to get places”:

image

However, I didn’t realize it at the time, but I had done something even better. Here are just the ramps:

image

This is what I had been going for!  To heck with the surface print part!  You can see the mountain AND the detail!  This is commit a75a67 at https://github.com/sunnywiz/TreeOfDirection/.

Next direction: do the same thing for both addresses, and then figure out a way to do it in two colors (i.e., two prints that join together). In order to do this, I first have to force the bounding boxes to be exactly the same.

Also, other things learned: 3D slicing has come a long way; I don’t need things to be perfectly combined, as long as they don’t have holes in them.

Pruning Promotions in Gmail

I finally got around to writing some code. Turns out it’s super easy: there’s something called script.google.com; you go to Resources | Advanced Google services and enable the Gmail service, and then you can write functions to prune your inbox arbitrarily:

function prunePromotions() { 
  var response = Gmail.Users.Messages.list('me',{'q':'label:promotions', 'includeSpamTrash':false});
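  // note: list() returns one page of message ids (default max 100); pass
  // 'maxResults' and 'pageToken' in the options object to walk more pages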
  if (response && response.messages && response.messages.length > 0) { 
    // messages seem to come in from most recent to least recent
    Logger.log('inspecting '+response.messages.length+' messages'); 
    var collected = {};
    for (var i=0; i < response.messages.length; i++) { 
      var message = response.messages[i]; 
      if (message.id) { 
        message = Gmail.Users.Messages.get('me',message.id); // fetch the full message (list() returns only ids)
        if (message && message.payload && message.payload.headers && message.payload.headers.length > 0) { 
          for (var j=0; j < message.payload.headers.length; j++) { 
            var header = message.payload.headers[j]; 
            if (header.name=='From') { 
              var from = header.value; 
              if (!collected[from]) collected[from] = []; 
              collected[from].push(message); 
              break; 
            } // if header is from
          } // for each header
        } // if message.payload.headers
      } // if message.id
    } // every message
    for(var from in collected) {
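      // keep the newest message from each sender (index 0) and trash the rest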
      if (collected[from].length > 1) { 
        for (var i=1; i < collected[from].length; i++) { 
          var message = collected[from][i];
          Logger.log('trashing '+message.snippet); 
          Gmail.Users.Messages.trash('me',message.id);
        }
      }
    }
  } 
}

Unfortunately, I get a lot of promotions, and this only works on the most recent 100 messages or so (the default page size of Messages.list), so it might be of limited use in this state. But it’s a start.

Learned some IoT today

My head is swimming with new information, so I’m dumping it here in a bit. But first, a meta-description of what’s going on.

1.  I often feel “left behind” and like I’m “not having time to play with things”. I decided to try to make it a priority by committing with a friend to spend an hour learning new stuff.. together. The idea is we spend half an hour on my project and half an hour on his project, in a pair-programming kind of way.

2. We did the first non-planning session of it today. Well, it was still a lot of planning. What happened is: the 30 minutes were just enough to get me unblocked and to learn which directions to go.. and then we switched over to his project.

3. I’m looking forward to revisiting this next week.

3.97.  If this works well over time, we might polish it up a bit and open it up to more folks.

The Details. 

First, His Project:  (topic not revealed, that’s his story)

  • I saw Jupyter in action for the first time. It was amazing. I think I’ll use it the next time I need to spelunk and chart in an RDBMS (instead of query, copy-to-Excel, create chart).
  • Revisiting OpenCV for image manipulation and stuff
  • There might be some ML in that project’s future.  That’s up to him, of course.  My job is to provide interest and provide support and be a rubber duck.
  • I hope to learn more about running things in docker containers from him.

My Project – Temperature logging for my A/C and furnace – what I learned

  • I got an LED to blink and read a photoresistor via a WIFI connection (via particle.io and Tinker)
  • Kinda-sorta why we need resistors going to ground (pull-downs) to make circuits work. At least, I saw that repeated pattern.
  • What a breadboard is, and how it is wired
  • Digital vs analog ~= 3.3V and ranges, with HI = 3.3V. Some specific pins have better A-to-D converters and can read values from 0 up to 4095 (12 bits).
  • The temperature sensor I have uses something called the 1-Wire protocol, and I have to do a library-add to get the right library to read it. It involves scanning for devices; multiple devices can be on the same wire. Also, there are two ways of powering it: parasitic and non-parasitic.
  • It might be that when I plug the Photon’s power into my computer, I can debug-print via a serial connection. That will help with the 1-Wire scan.
  • Turns out particle.io does all kinds of cloudy-mc-cloud stuff, so once I have my number, all I need to do is publish(“temperature”, value) and it heads up to the cloud. Then, if I add the Azure IoT integration, it goes into Azure, where I can do other things with it. AWS doesn’t seem to have that integration. There are other folks who do have ways to graph things. I need to analyze patterns over a six-month time period, involving the drop or rise in temperature from ambient to heated/cooled.
  • For 1-Wire stuff, I need a 4.7kΩ pull-up resistor, and I’ll probably need breadboard cables. I didn’t have them yet; I ordered them. They should be coming along with a few more thermistors.
  • Using 1-Wire, I might be able to have a separate sensor for “ambient temperature”, so that I can diff the two more easily and just take the delta.
  • I read up on the spec for the “Wiring” language, which is basically C/C++ but scaled down. 
  • I learned there’s a web IDE, a desktop IDE, a CLI, and a VS Code development environment. The VS Code one seems to have a debugger. I don’t think I’ll need that, as long as I can get Serial.print() to work.

Side note – I didn’t have Let’s Encrypt set up to auto-renew my cert, so the HTTPS cert for this blog expired. I had to go find my blog post to find the link to find the commands to renew it. Done.

AWS RDS SqlServer Native Backup and Restore

Had to learn this yesterday to clone a production environment down to a lower environment. Figured it qualified for a blog post.

exec msdb.dbo.rds_backup_database 
         @source_db_name='xxxProd',
         @s3_arn_to_backup_to='arn:aws:s3:::xxx-sql-native-backup/xxxProd.bak',
         @overwrite_S3_backup_file=1,
         @type='full';

exec msdb.dbo.rds_task_status;   -- till lifecycle=SUCCESS

ALTER DATABASE xxxUAT SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
drop database xxxUAT;
exec msdb.dbo.rds_restore_database
         @restore_db_name='xxxUAT',
         @s3_arn_to_restore_from='arn:aws:s3:::xxx-sql-native-backup/xxxProd.bak';
exec msdb.dbo.rds_task_status;   -- till lifecycle=SUCCESS

delete from xxxUAT.dbo.SensitiveTableStuff;
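
One aside: per the AWS docs, rds_task_status can also take filters, which helps when more than one task is in flight. Something like this, though treat the exact parameter as from-memory:

-- optional: restrict the status check to one database
exec msdb.dbo.rds_task_status @db_name='xxxUAT';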

The gotchas were:

  • Had to set up an option group that added SQL Server native backup and restore (the SQLSERVER_BACKUP_RESTORE option) to the RDS instance. It took a few minutes to apply; the RDS instance did not reboot or go offline during this process.
  • Could not restore over an existing database.
  • Learned the hard way that while you can detach a database using SSMS, you can’t re-attach one; reattaching uses a custom stored procedure. And detaching and attaching had nothing to do with deleting.