C# code against Gmail

I wrote some code and I FINISHED IT

UPDATE – After writing this blog post, about two hours later, I got the stuff working for it to be considered “DONE”. Pretty happy with it. The rest of this post, though, accurately reflects how I was feeling at the time, so I’m leaving it as written.

I wrote this code, and I am Sad. I’m sad that I got started on this silly project (that I’ve thought about for several years), and I got far enough that I can see the end … and now:

ONE: I’m out of time. I did this while we were on our 1-week-away vacation… it started slowly, but got faster and better. This is probably 6-7 hours in? Spread over 3 evenings? I rediscovered so many things (list later), but .. I have a maximum of 4 more hours left before I return to normal life, which has NO room for focus time like this (which is something I could change/prioritize, but I don’t because I am finally prioritizing adequate sleep).

TWO: I’m not done, and I’m already rewriting and improving it. So why finish it in this incarnation? I realized that with the approach I’m taking, it deals with “recent” emails pretty well, and it does “discovery” pretty well, but it does not handle “the massive backlog of crap that could be deleted”. Nope, that would be a different approach .. in the above screenshot, I think this would be the “Discovery” tab and then there would be another “Purge-atory” tab for the deep-cleanse cycle.

Anyway, what is it? Source is here: https://github.com/sunnywiz/gmail-filter-2024/tree/blog-post-1

  • It uses OAuth to connect to Gmail
  • It downloads the last N days of email and stores just a bit of it in a local cache
  • It looks for senders who send me lots of stuff.
  • It lets me choose to keep only the most recent N things from that sender and delete the rest (a rough sketch of the API calls involved is just below this list)
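The core of it is roughly the following, a sketch using the Google.Apis.Gmail.v1 NuGet package rather than the actual repo code; it assumes a .NET 6 console project with implicit usings and a downloaded credentials.json, so treat the names and numbers as placeholders.

// Authorize, list the last N days of messages, and pull just the headers for a local cache.
using Google.Apis.Auth.OAuth2;
using Google.Apis.Gmail.v1;
using Google.Apis.Gmail.v1.Data;
using Google.Apis.Services;
using Google.Apis.Util.Store;

var credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
    GoogleClientSecrets.FromFile("credentials.json").Secrets,
    new[] { GmailService.Scope.GmailModify },   // modify scope, so messages can be trashed later
    "user", CancellationToken.None, new FileDataStore("gmail-filter-tokens"));

var service = new GmailService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "gmail-filter-sketch"
});

// Last 30 days of mail ("N days" in the list above).
var listRequest = service.Users.Messages.List("me");
listRequest.Q = "newer_than:30d";
listRequest.MaxResults = 500;
var response = await listRequest.ExecuteAsync();

foreach (var m in response.Messages ?? new List<Message>())
{
    // Metadata format keeps the payload small: enough for a cache of From/Subject/Date.
    var getRequest = service.Users.Messages.Get("me", m.Id);
    getRequest.Format = UsersResource.MessagesResource.GetRequest.FormatEnum.Metadata;
    var msg = await getRequest.ExecuteAsync();
    var from = msg.Payload.Headers.FirstOrDefault(h => h.Name == "From")?.Value;
    Console.WriteLine($"{from}: {msg.Snippet}");
}

Finding the senders who send me lots of stuff is then just a GroupBy over the cached From values.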

Along the way, I got to revisit some technology stuff:

  • I started in a console app on .NET 6, switched back to .NET Framework 32-bit talking to Outlook via interop for just a bit, realized that was also painfully slow, and switched back to Gmail’s NuGet package in .NET 6 with WPF. Thank God; at my workplace I’ve been entirely in “Framework” and it’s nice to see where (what was .NET Core and is now .NET) has gotten.
  • I wrote it somewhat with JetBrains AI Assistant, mostly looking up sample code on how to do stuff like get the list of emails from Gmail, wire up a TwoWay binding, etc.
  • I had to re-remember a lot of the WPF stuff .. haven’t worked in that since 2016? 2018? … There was this trick of setting up a Debug converter that I had to use to fix a binding; turns out I needed to change the UpdateSourceTrigger to be on property change rather than lose focus, and everything was OK otherwise (sketch of the converter below).
  • I remembered a trick of using a FlowDocument to lay out controls quickly in a way that looks okay-ish and is usable, rather than trying to get all those !@#!@# grid columns right.
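That debug-converter trick, roughly; this is the general pattern rather than the project’s actual converter:

using System;
using System.Diagnostics;
using System.Globalization;
using System.Windows.Data;

// Pass-through converter that logs every conversion, so you can see whether a
// binding is firing at all and what value is flowing through it.
public class DebugConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        Debug.WriteLine($"DebugConverter.Convert: {value}");
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        Debug.WriteLine($"DebugConverter.ConvertBack: {value}");
        return value;
    }
}

Wired into the suspect binding as something like Text="{Binding DaysToKeep, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged, Converter={StaticResource DebugConverter}}", where DaysToKeep is a stand-in name for whatever property wasn’t updating.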

So, for 6 hours spread over 3 days .. not too bad. Not 100% of where I wanted to get to. But I have a few more hours left yet. At half an hour per “session”, I might be able to get these things done – by the time you read this, this drama will be over:

  • I need to remember what the settings are for different From: email addresses, and save them, so I can auto-apply them later
  • I need a hands-off mode where it spins up, grabs email, and purges stuff.
  • I need a “keep 2 weeks” kind of thing for stuff like Paypal and Venmo transactions, and Patreon posts. I usually check on those things within 2 weeks.

And then there’s the nice-to-have:

  • Show the category that an email is in
  • Show if an email is read or unread
  • Option for “Mark all as Read” – stop cluttering my inbox notification counter!!!!
  • As mentioned above, the “Deep Clean” cycle – where it applies the rules to EVERYTHING from those senders – individual searches per sender, only get the details on the messages to keep, and everything else, I can send the messageIds to the Trash() method (rough sketch right after this list).
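The deep-clean pass would probably look something like this; same service object and usings as the earlier sketch, and the senderRules dictionary here (address to how-many-to-keep) is hypothetical:

// One search per sender; keep the newest N, trash the rest.
var senderRules = new Dictionary<string, int> { ["deals@example.com"] = 2 };

foreach (var (sender, keepCount) in senderRules)
{
    var list = service.Users.Messages.List("me");
    list.Q = $"from:{sender}";
    list.MaxResults = 500;
    var result = await list.ExecuteAsync();

    // IDs generally come back newest-first; a real pass would confirm by checking the Date header.
    var ids = (result.Messages ?? new List<Message>()).Select(m => m.Id).ToList();

    // No need to fetch details on the ones being trashed.
    foreach (var id in ids.Skip(keepCount))
    {
        await service.Users.Messages.Trash("me", id).ExecuteAsync();
    }
}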

Or, I could eat that Cinnamon Roll that I got and watch YouTube and call it done for the weekend.

I couldn’t sleep – Hyperdrive

It might be the 4 espresso drinks I drank earlier. But I couldn’t sleep. My brain was chewing on ways I could do a hyperdrive.

The idea is like .. a condor? Big bird, floats on drafts/currents. Starts off on feet and wings, expends a bunch of energy getting up into the air, but once it’s there, it floats.

So, earlier, I wanted thrust in a direction to keep on building velocity. I would make it encounter a kind of resistance that goes up, so that there’s a terminal velocity that gets approached. Let’s call that V=1.

Upon getting to that V=1, you kick in the … insert science fiction name here, but it’s the “flapping” part of getting flying. Say it kicks you into hyperspace, at V=1.3. If you get below V=1, you exit hyperspace. Maybe it’s the Hyperspace Actuator. You can get better hyperspace actuators.

In hyperspace, there’s a different engine available. It’s a directional “push” – you can push against what would normally be gravity sources in normal space. The closer the thing, the better the push. You aim your pusher beam at the target and apply push power. This speeds you up well beyond V=1 into much better V’s.

However, there’s “friction” in hyperspace .. probably map-based. I’d probably do it as a function of how close you are to a star .. basically gravity wells have cleared out the friction stuff. So depending on where you are, you slow down. If you go far away from all stars, the distance to the stars means less push, and the friction makes it impossible to stay in hyperspace.

Would have to play with the numbers, but let’s say that it would take 10-30 minutes to get between most stars at V=1. Let’s take the 30-minute case, D=30 between them. Assume stars have mass M=1.

  1. Line up your current star with your destination star. Speed up towards your current star, getting to V=1. Gravity pulls you in as well. Beginning your run, as it were.
  2. When you get close enough to V=1, kick in your hyperdrive actuator to boop over to hyperspace, maybe at V=1.5.
  3. Pass through/past the star and then use your hyperdrive vector engine to target your star, and push against it. At distance D=1, you get a nice M/D push of 1, bringing you up to 2 .. I did a quick Excel thing to expound on this (a toy version of the math is below) …
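Roughly the kind of thing the spreadsheet was chewing on, as a toy .NET 6 program; every constant here is made up for illustration and would need tuning:

// 1-D model of the run: push against a star of mass M at distance d,
// with friction that grows with speed.
double mass = 1.0;          // M of the star we push against
double distance = 1.0;      // start just past the launch star
double target = 30.0;       // D=30 to the destination star
double v = 1.5;             // velocity right after kicking into hyperspace
double friction = 0.02;     // hyperspace friction coefficient (a guess)
double dt = 0.1;            // time step, in "minutes"
double t = 0.0;

while (distance < target && v > 1.0)    // below V=1 you drop out of hyperspace
{
    double push = mass / distance;      // the M/D push: strong near the star, weak far away
    double drag = friction * v * v;     // friction slows you more the faster you go
    v += (push - drag) * dt;
    distance += v * dt;
    t += dt;
}

Console.WriteLine($"Covered {distance:F1} units in {t:F1} minutes, final v={v:F2}");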

I lost my bullet numbers. I get a transit in T=10 instead of T=30 .. I’ll probably have to make more oomph available at longer distances, so it’s more like a pole, but I really wanted it to be “if you’re caught between the stars, it’s hard to get oomph.” Also, with the above setup, your max distance from a star works out to about 60-70 units if the star has mass 1.

If I change it to be Mass/sqrt(distance), I still get 4 minutes, but the distance greatly increases. Hmm.

But, anyway, that’s the idea. As you get close to the target star, you push against it, bringing your velocity back under V=1, and you pop back into normal space. And avoid those “dense” areas where max-v is lower if you want to float longer.

Okay, now maybe I can sleep.

Space Mud/Game Ideas

Long time no update .. a codebase. https://geekygulati.com/2016/03/12/dotnetmud-spacemud-optimizing-network-traffic/ is when I talked about it last.

I was playing Starcom Nexus, and it reminded me that once upon a time I was playing with flying a ship around in gravity. I looked; it was 7 years ago. I cloned it, I updated it, I ran it. I was amazed.

I’m not going to kid myself with “I have time to work on this” .. but, if I did, what would I make? This is like “If I win the lottery”, giving space for some creative stuff to sparkle. I would have planets that (slowly, unrealistically) orbit their stars.

I would have planet and sun gravity that draw ships in.

But the ships won’t crash against the planet. Instead, there will be an “autopilot” bump that happens that assists the ships into a non-crashing orbit over time. (So a few times they’ll go through the planet, but eventually they would end up orbiting.)

Ships can do a “land” feature if they are close enough to a city; it goes into an automatic “shrink into the planet” type of thing. At that point, the game switches over to a text mode of having landed, getting out of the ship, n/s/e/w to visit whatever places.

At first I wanted real thrust-type stuff. But that can be very unmanageable.

So make it manageable. Choose an object to travel to. Accelerate towards it, then decelerate. Can be planetary. Can be a ship. Different levels of autopilot around this can be purchased. Fuel cost for the big accelerations, free small ones to prevent stranding. Standard given autopilot = bring self to a stop relative to the object, so you can end up at your target (sketch below).
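What I have in mind for that standard autopilot is roughly this; a sketch with an invented Vec2 type, not anything from the old DotNetMud code:

// Toy "go to the target and stop relative to it" autopilot: burn toward the target
// until the remaining distance is about what's needed to kill the relative velocity,
// then burn against the relative velocity.
public record struct Vec2(double X, double Y)
{
    public static Vec2 operator +(Vec2 a, Vec2 b) => new(a.X + b.X, a.Y + b.Y);
    public static Vec2 operator -(Vec2 a, Vec2 b) => new(a.X - b.X, a.Y - b.Y);
    public static Vec2 operator *(Vec2 a, double s) => new(a.X * s, a.Y * s);
    public double Length() => Math.Sqrt(X * X + Y * Y);
    public Vec2 Normalized() => Length() < 1e-9 ? new Vec2(0, 0) : this * (1.0 / Length());
}

public static class Autopilot
{
    public static Vec2 Thrust(Vec2 shipPos, Vec2 shipVel, Vec2 targetPos, Vec2 targetVel, double maxAccel)
    {
        var toTarget = targetPos - shipPos;
        var relVel = shipVel - targetVel;                 // velocity relative to the target
        double distance = toTarget.Length();
        double speed = relVel.Length();

        // Distance needed to brake from the current relative speed: v^2 / (2a).
        double brakingDistance = speed * speed / (2 * maxAccel);

        return distance > brakingDistance
            ? toTarget.Normalized() * maxAccel            // accelerate toward the target
            : relVel.Normalized() * -maxAccel;            // decelerate to a stop relative to it
    }
}

Needs a using System; at the top if implicit usings are off.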

But I don’t want acceleration to be good for interstellar travel. Not without putting in 10 minutes of game time or more. For that, I want a hypertravel mode, but with a twist:

In hyperspace mode, gravity is inverted at d^5 or something silly like that. So to send yourself to another star, you get close to your current star, switch to hyperspace, and get catapulted out that way. If you aim wrong you can still use your hyperspace drives to point yourself in some direction, but they can only slow you down or speed you up to some absolute value (the stellar launch can exceed that value). Spend more money for better vectoring, acceleration, and maximum controllable velocity. You would learn about stellar launch the hard way: you could just go into hyperspace and start accelerating towards a star.

So you would theoretically line up against a star, go into hyperspace, get catapulted, get close, and if you did it just right you would get slowed down by the target star; if not, you would drop out of hyperspace somewhere around your target star. You drop out of hyperspace only when your hyperspace velocity gets slow enough.

The rest of the game would be similar, I guess, to Starcom Nexus. You visit planets, you solve things, you get resources, you fight baddies, you mine stuff, etc. That’s the game built on top of the above. Except that planet visits are text adventures.

I’m writing this from a hotel in Phoenix, where I’m on vacation with my wife. It’s only taken .. 2 days? of vacation time to get my brain clear enough to where this stuff starts coming up for creativity. Yay!

Missed Metformin Dose

I forgot to take my metformin this morning at 7am! OMG! What shall I do?

Well, according to Wikipedia, the half-life is between 4 and 8.7 hours. I assumed 6 hours. Thanks to a little Googling, the formula to use is below. I modelled it based on taking my morning pill at 5pm when I got home and skipping my evening pill (which is not at 7pm, but usually around 9pm to 11pm), and:

The formula:
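In short: standard exponential half-life decay. The amount left t hours after a dose of A_0, with half-life t_1/2, is

A(t) = A_0 \cdot (1/2)^{t / t_{1/2}}

So with the 6-hour half-life I assumed, ten hours after a dose roughly (1/2)^(10/6), or about 31%, of it is still around.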

I like useful math

Quick Win: Small WPF App – Fast Finder Tool.

Since I don’t work on (interesting) large projects anymore, maybe I could write about the (small) places where (small) work = a (relative) win for somebody.

Returns (the department) is switching to the newer version of our ERP system. It’s web-based, and it’s a bit slow. For a customer, given a product, they have to either a) start from the customer and what they bought and filter down to the item, or b) start from the item and go the other way. Either way it’s a few seconds to bring up stuff, another few seconds to apply a filter.

Solution: WPF app, doing a SQL query and using Telerik WPF RadGridView:

customerId <tab> item shortcode (excluding size and color) <enter> (wait 1 second) – this brings up a filterable, sortable grid of when that customer bought that item (and the order numbers).

The code leaves the focus on the item short code, highlighted, so the next short code can be typed in. (Any customer will usually return several items – customers in these cases are businesses returning inventory for some reason – it’s a 20%+ return-rate industry, related to fashion.)

With this information, Returns can quickly know what prices to assign the return (the price they paid for it) and what order number to RMA against.
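The guts of it aren’t much more than this. A sketch with invented table and column names (the real query goes against the ERP’s order tables), assuming the Microsoft.Data.SqlClient package:

using System.Data;
using Microsoft.Data.SqlClient;

public static class ReturnsLookup
{
    public static DataTable FindPurchases(string connectionString, string customerId, string itemShortCode)
    {
        const string sql = @"
            select oh.order_no, oh.order_date, ol.item_id, ol.unit_price, ol.qty_shipped
            from   order_header oh
            join   order_line   ol on ol.order_no = oh.order_no
            where  oh.customer_id = @customerId
              and  ol.item_id like @itemPrefix
            order by oh.order_date desc";

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@customerId", customerId);
        command.Parameters.AddWithValue("@itemPrefix", itemShortCode + "%");   // shortcode excludes size/color

        var table = new DataTable();
        new SqlDataAdapter(command).Fill(table);    // Fill opens and closes the connection itself
        return table;
    }
}

// In the window's code-behind (grid and textboxes named hypothetically):
//   PurchasesGrid.ItemsSource = ReturnsLookup.FindPurchases(connStr, CustomerIdBox.Text, ShortCodeBox.Text).DefaultView;
//   ShortCodeBox.SelectAll();    // leave the short code highlighted for the next lookup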

Returns came back with minor feedback: Colorize the rows for purchases vs returns in the grid that comes up. Done.

I’m on my home laptop, so no screenshot at this time.

Overall the screen for Returns took maybe 6-8 hours spread over 2-3 working days over a span of 1 week, with 2 rounds of feedback. (These “fun” projects, I don’t start work on them till after 2pm, after I’ve worked on the less-fun, more-demanding work in the morning and other misc stuff in the afternoon.)

Keeping Track of Work Tasks

At work, we started using Zendesk. It has been very effective. Also, due to COVID-19, we were briefly down to 2 people able to work.. recently, one of us got back to being better. Rather than a “fixed days working from home each week” (I used to get Wednesday WFH) we’re switching to a “3 person rotation of who is in the office”, since we need to have an onsite presence for our primary job function, which is to keep all the folks in the warehouse productive.

This is leading to “onsite” days, where I pretty much deal with the flow of tickets .. and then deal with other little problem things that need to be fixed. These days are entirely run by email and Zendesk. The person onsite triages the incoming stuff so that the offsite folks can focus on their project work. Going back through Pending tickets and updating statuses as we wait for other folks to respond and do their parts.

And then there are two days of “offsite”. Blessed ability to focus in deep on tasks, because the interruption buzz is being handled by the onsite person. Spent about 2 hours working on an 8-part CTE (common table expression) with a colleague today. Got the web page that reports it partially done; another 4-6 hours tomorrow and it will be done.

Currently, we have an Open Projects spreadsheet (we were using Microsoft Teams Tasks, and prior to that ClickUp) which lists these projects. I’m thinking we’re going to transfer them into Zendesk but with a tag of “project” — and alter the other views to exclude them. That itself is a project.

Short Update: Monitoring For Data Problems in complex systems

create view problem_view_darth_vader as (select ..... problem data) 

In our case, it was that a particular item had two UPC codes from two suppliers. Add a couple of extra columns like department=’production’ and fix=’how to fix it’.

Next, write a generic thing which checks select table_name from information_schema.views where table_name like 'problem_view_%', and loops through those .. if anything has rows, display the rows. I used a DataTable and DataAdapter.Fill() because then you get the columns even if there are no rows, unlike Dapper’s <dynamic>. Show this on a web page, along with a div id=SuccessDiv (sketch below).
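A minimal sketch of that checker, with names invented, Microsoft.Data.SqlClient assumed, and the web-page rendering left out:

using System.Collections.Generic;
using System.Data;
using Microsoft.Data.SqlClient;

public static class ProblemViewChecker
{
    // Returns every problem_view_% view that currently has rows.
    public static List<DataTable> Check(string connectionString)
    {
        var problems = new List<DataTable>();
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        // Find all the convention-named views.
        var viewNames = new List<string>();
        using (var cmd = new SqlCommand(
            "select table_schema, table_name from information_schema.views where table_name like 'problem_view_%'",
            connection))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                viewNames.Add($"[{reader.GetString(0)}].[{reader.GetString(1)}]");
        }

        foreach (var viewName in viewNames)
        {
            // DataAdapter.Fill gives back the column list even when there are zero rows.
            var table = new DataTable(viewName);
            new SqlDataAdapter($"select * from {viewName}", connection).Fill(table);
            if (table.Rows.Count > 0)
                problems.Add(table);
        }

        return problems;    // the page renders these, and emits the SuccessDiv only when the list is empty
    }
}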

Then, using your favorite “how’s my website doing” tool (mine is Datadog Synthetic Tests), do a test looking for the success div. Set up a monitor, track failures. Datadog takes screenshots, so the errored data is saved with the failed monitor.

Result: Here’s the monitor zoomed in to when I introduced a test view to create a problem:

And here is the screen that was captured as part of the failed test:

The only side problem is that you don’t have control over how frequently the checks run. Probably different pages could be done, using different view names, like problem_view_darth_vader_60 to check every hour, for example (parsing sketch below). May improve it in the future.
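If I do go that route, the page could just parse the interval back out of the view name; a tiny sketch, with a made-up default:

// "problem_view_darth_vader_60" -> check every 60 minutes; no numeric suffix -> default.
static int CheckIntervalMinutes(string viewName, int defaultMinutes = 15)
{
    var lastChunk = viewName.Split('_')[^1];
    return int.TryParse(lastChunk, out var minutes) ? minutes : defaultMinutes;
}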

I think the next things to put monitors up for are probably these particular things that get stuck without a thing. (Sorry, redacted, but you get the idea).

Short updates

I surrender. My life as it is now will not support the time to make beautiful posts. Like, I would have bolded the word beautiful there and added a picture. Because marketing.

Nope, this sucker is being written on my phone in the morning as I sip coffee. I have 25 minutes before I need to leave for work and I’m not dressed yet. Evidence (test phone camera):

Warming device

So this is how I’ll need to write posts in the future. And the purpose is: to be a testament to my future self that I DID do geeky things and find things interesting.

Yesterday … I found out the hard way that I had to ask ALL domain controllers about when people last logged in. Apparently we’ve had one controller be PDC for a very long time and machines remember their favorite DC for a long time.

Script modified from https://social.technet.microsoft.com/Forums/ie/en-US/d86b7495-729a-44e2-ad68-5e154ecbd6d7/getaduser-lastlogontimestamp-is-reporting-blank?forum=winserverpowershell
 
$( foreach ($dc in (Get-ADDomainController -Filter "domain -eq '$((Get-ADDomain).DnsRoot)'" | % { $_.HostName }) ) {
    Get-ADUser -Filter '*' -SearchBase 'DC=******' -Server $dc -Properties LastLogon |
        Select SamAccountName, LastLogon, @{n='LastLogonDC'; e={ $dc }}
} ) |
    Group SamAccountName |
    % { ($_.Group | sort LastLogon -Descending)[0] } |
    select SamAccountName, LastLogon, @{n='LastLogon1'; e={ (Get-Date $_.LastLogon).ToLocalTime() }}, LastLogonDC |
    Export-Csv -Path "lastlogon.csv"

Chart: Maybe later

Result: I can figure out approximately how many CALs we need. Going to update the DC to the next server version.

Crystal Reports Fun

I haven’t posted, or talked much, about my new work .. basically, I joined a company whose primary job is things, and most work focuses around an ERP, and it’s all about stabilizing and optimizing flows of information around processes. Edit/Update: I wrote this in, like, February. A lot has happened since then .. COVID-19 especially. It is now June.

One of my new skillsets is Crystal Reports. To which my developer peeps say Ewww, usually. Eh, it works. The main thing is, the ERP system that everybody is logged into has a way to embed Crystal Reports into it. That ERP system (Prophet 21) is also backed by a SQL Server database, and has a decent table structure to it, and a decent user-defined-field add-on strategy to it, making many tasks .. sane.

However, navigating the environment for what actually works has been hard. There were not a lot of prior-art examples which did things at the architecture level I would like, so I’ve had to fumble around. Here’s what does NOT work:

  • Using anything other than a particular driver (I forget which one) for connecting to SQL. Specifically, not to use the SQL Native driver. This is because when the ERP is hosting the report, they do a switcharoo on the connection to connect to the right “instance” of the ERP, and if you use the wrong driver, that doesn’t work. You don’t find out till you try to run the report from the ERP.
  • Directly using a stored procedure to do heavy lifting for a report. You can, and it auto-creates report parameters for you, but those report parameter types lead to less-than-optimal user dialog (think dates plus times instead of dates).
  • Using a parent crystal report to include a subreport to get around the previous thing. Works great for a crosstab, but page headers not so good in a grid style report. However, I am able to bind a parameter from the parent report through a calculated field to plug in to the subreport (and thus the stored procedure).
  • Also, if you have a parent report that only calls sub-reports, and doesn’t actually connect to the database itself, the ERP system doesn’t like that because it cannot find the database connection to override.
  • Not choosing a printer when designing the report. Apparently this affects font choices and Arial looks better than Device CPI 10.

Here’s what does seem to work:

  • I can use Views to encapsulate business logic, example “The View of xxx customers” where xxx is a particular program that customers can enroll in.
  • I can use stored procedures to D.R.Y., for example, the stuff to get the raw data of the number of designer frames sold per customer within a time period.
  • I can call stored procedures from a “command” custom SQL block from Crystal Reports. In that block, I can: declare @t table (…) and insert into @t exec SPROC {?param1} {?param2} to get data from a stored procedure.
    • For example, there are two reports: One is a CrossTab that breaks out customers across brands and # of frames, and another is a detail report. The detail report goes into how many DIFFERENT brands were sold + the number of frames sold — these numbers roll into a formula describing what % back the customer gets, per the rules of the contract(s). Both of these reports use the same stored procedure to get the underlying data.
    • However, using this method, I have not yet been able to go (user input) to (calculated field) to (input into command) to (call stored procedure). On the other hand, I can do a lot of manipulation in T-SQL, so that should be fine.

I’m continuing to learn a lot of things.. next week looks like it will be learning the ways of our EDI interfaces with some bigger customers, like the names you’ll find at a Mall. (I don’t know how much I can talk or not about our customers). (Edit: It is preferred that I do not.)

Side note – We have added Datadog to our infrastructure, and monitoring is making our lives better, I think. Separate blog post, but in summary: immediate notify on errors, and notify on lack of success. Except on weekends.

Looking back at this post I wrote 4 months ago .. wow, there’s so much detail I could go into about all the little things that I’ve learned and tweaked. Like some PowerShell to inspect bad emails in an SMTP dead-letter folder. And a PowerShell script to automate connecting my Cisco VPN connection. Messing with Red Hat 8.1, re-learning all the Unix things including SMB shares. However .. that’s another post; coming up shortly.

Using Timesheet data from Harvest to create a Force Directed Graph Animation

This is too complicated to try to put into words, so there’s a screencast instead.

Video of the final product:

https://www.youtube.com/watch?v=_TJi9yAm_kM

Video explaining how to do it:

https://youtu.be/DrrA9sd6AAQ

Source: https://github.com/sunnywiz/HarvestToGephi/blob/master/HarvestToGephi/Program.cs

In text form: C# code to convert a Harvest CSV extract into a node and edges CSV file. Then in Gephi, import the two files, convert the start/end dates into an interval, and set up the prettiness. Record a long video with lots of stabilization and then speed it up.
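The shape of that conversion, much abbreviated; the CSV columns here are guesses for illustration, and the real code (at the source link above) reads the actual Harvest extract:

// Turn a Harvest time-entry CSV into nodes.csv and edges.csv for Gephi.
// Naive comma split, guessed column order: date, person, project.
using System;
using System.Globalization;
using System.IO;
using System.Linq;

var rows = File.ReadAllLines("harvest.csv")
    .Skip(1)                                   // skip the header row
    .Select(line => line.Split(','))
    .Select(f => (Date: DateTime.Parse(f[0], CultureInfo.InvariantCulture),
                  Person: f[1],
                  Project: f[2]))
    .ToList();

// Nodes: every distinct person and project.
var nodes = rows.Select(r => r.Person).Concat(rows.Select(r => r.Project)).Distinct();
File.WriteAllLines("nodes.csv",
    new[] { "Id,Label" }.Concat(nodes.Select(n => $"{n},{n}")));

// Edges: person -> project, with first/last dates logged so Gephi can build the interval.
var edges = rows
    .GroupBy(r => (r.Person, r.Project))
    .Select(g => $"{g.Key.Person},{g.Key.Project},{g.Min(r => r.Date):yyyy-MM-dd},{g.Max(r => r.Date):yyyy-MM-dd}");
File.WriteAllLines("edges.csv",
    new[] { "Source,Target,Start,End" }.Concat(edges));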