Updates to twit-sort

I am currently on vacation!  In Florida!  We drove!

I took the opportunity (in the early mornings / late nights when everybody else is asleep) to work on some code that I wanted to update – I don’t normally get time to do this.  It’s my twitter-reading app.  Which only I use.  But hey, that’s fine, I haven’t tried to market it.

Changes

Change #1:  I kept seeing “…” with truncated tweets.  A little research led to some interesting stuff — basically following retweets to get down to the original tweet to get the text from there.

Along the way I tried to build up a “who quoted who” breadcrumb:

image

The code is a bit wonky:

image

However, it’s confusing – many people (or clients) retweet without actually having a retweet link.  There’s “quoting”, which is different.. eh, whatever.  If I’m given it via the API, I pass it through and figure it out later.
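For what it’s worth, the shape of that “follow the chain down to the original tweet” idea can be sketched in a few lines. The app itself is C#, so this Python sketch is only illustrative; `retweeted_status` and `quoted_status` are real Twitter v1.1 API fields, but the minimal dict shapes used here are my assumption:

```python
def resolve_original(tweet):
    """Walk retweet/quote chains down to the original tweet.

    Expects Twitter API v1.1-style dicts ('retweeted_status' /
    'quoted_status' nest the inner tweet); the minimal shape here
    is assumed for illustration.
    Returns (original_tweet, breadcrumb_of_screen_names).
    """
    breadcrumb = []
    while True:
        inner = tweet.get("retweeted_status") or tweet.get("quoted_status")
        if inner is None:
            return tweet, breadcrumb
        breadcrumb.append(tweet["user"]["screen_name"])
        tweet = inner
```

The breadcrumb list is the “who quoted who” trail: calling this on a retweet-of-a-quote yields the innermost tweet plus the chain of screen names above it.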

Change #2: I surfaced the like count and favorite count, as well as a link per tweet to open the tweet in its own window (as hosted by twitter). 

image

Complications

I have the code on GitHub, which is public, and I don’t want my access keys and tokens and stuff checked in there.  So, I had to fudge around a bit – this was my solution:

image

  • I created a branch with all the passwords and stuff, and worked from there.
  • Once it was done, I either did a rebase (with cherry picking) or a direct cherry pick to move commits over to master, which I pushed up.
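As git commands, that flow looks roughly like the demo below. The branch and file names are made up for illustration, and the demo builds a throwaway repo so the commands can be seen end-to-end:

```shell
# Demo in a throwaway repo: keep secrets on a local-only branch,
# cherry-pick the safe commits to master, publish only master.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/master    # pin the branch name for the demo
git config user.email demo@example.com && git config user.name demo
echo base > readme.txt && git add . && git commit -qm "base"

# Local-only branch that holds the real keys; never pushed anywhere.
git checkout -qb with-secrets
echo "consumerKey=REAL_KEY" > secrets.config
git add . && git commit -qm "local secrets"

# Normal feature work happens on this branch too.
echo "new feature" > feature.txt
git add . && git commit -qm "feature work"

# Move only the safe commit over to master (the cherry-pick step);
# master never contains secrets.config and is safe to push publicly.
safe_sha=$(git rev-parse HEAD)
git checkout -q master
git cherry-pick "$safe_sha"
```

(A config file listed in `.gitignore`, or ASP.NET’s user-secrets store, is another common way to keep keys out of a public repo.)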

Granted, I could use my Azure Visual Studio hosted git, but I wanted the code to be visible / usable.  So.. if anybody has other ideas of how to do this better, please let me know.

What Next (with this project)

Not much else I can do with this in its current codebase.  I could certainly make it prettier, but .. eh, that’s not me.

If I had unlimited time, I’d rewrite it – make it so it did all its fetching and filtering and sorting locally (in javascript).  I could get a LOT fancier then – things like pulling out hashtags into groups, etc.  Maybe adding some sentiment analysis things.

However, I have so many other projects on the burner.. this one won’t make it for a while.  My need has been solved, so there’s very little itch here to scratch.

The url of the site:   http://twit-sort.azurewebsites.net

The code: https://github.com/sunnywiz/twit-sort

Making Educational Videos for Code Louisville

I volunteered to be a mentor for the latest session of CodeLouisville, on the .Net side.

No big deal, I thought.. and then, TreeHouse was unable to get their Entity Framework videos ready on time, so we, the mentors, volunteered to pick up the slack.

Naturally, I don’t do well when given an open ended problem, so I went into crazy-detail-mode.    I paid $15 for a 1-year subscription to https://screencast-o-matic.com/home  (VERY GOOD), and I started making videos.

And I ran out of steam.  Because, of course, I chose the slow, measured, one-thing-at-a-time approach, and I created far too much work for me to do myself.

Luckily, two other mentors suggested that we do X for our mentoring session – so if what I was doing was from top down, they were going bottom up.  Basically, show the fast way to create an MVC website, Code First, Scaffold, and let the students backtrack from there.

This worked VERY well.   It boot-kicked the students into effective land.  At least one of them showed me some working code in their project in the same week.

Lessons Learned

I am too detail-oriented for my own good.

When in doubt, ask people what they need.

Sometimes, you just have to show the finished product and NOT explain everything, then go back and explain what is needed.  It’s okay, people are resilient, they will survive. 🙂

I like screencast-creation, but not more than a few hours a week: for a 20-minute screencast, it takes about 40 minutes of recording, about 2 hours of editing, and another 20 minutes of rendering and uploading.  So basically plan on 4-5 hours for a 20-minute video on a topic.  Thus, I can probably sustain generating 10 minutes of educational content a week.

What Did I Make

This is a walkthrough of LocalDB:   https://docs.google.com/document/d/1tTDKTQsfGPr_nFSbEifGQRc8dc-FfrxbPUfnc7CyKoc/edit?usp=sharing

This is a walkthrough of C# talking to SQL:  https://docs.google.com/document/d/1lSt-C5-L3VwLLGE6oJO3A_UOpIu2lYA-EWMhzAo-Ye0/edit?usp=sharing – it has links to 5 videos, and (eventually) links to code in github.

Unit Testing vs Integration Testing

Unit Testing

  • mock everything external to your code.
  • mock everything up to the thing to be tested
  • do some stuff
  • assert one thing.
  • Don’t continue a test – instead, mock up the environment to match the new thing to test, and write that in a new test, with a new assert.
  • It’s cheap to run lots of tests.
  • It’s cheap to mock things.
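The post’s context is .NET, but the shape of such a test is the same anywhere. A minimal sketch using Python’s unittest/mock, with a hypothetical `Uploader` and `client` standing in for “the thing to be tested” and “everything external”:

```python
import unittest
from unittest import mock

class Uploader:
    """Hypothetical class under test; 'client' is the external dependency."""
    def __init__(self, client):
        self.client = client

    def upload(self, name):
        self.client.put(name)
        return "uploaded " + name

class UploaderTests(unittest.TestCase):
    def test_upload_puts_to_client(self):
        client = mock.Mock()                          # mock everything external
        Uploader(client).upload("a.txt")              # do some stuff
        client.put.assert_called_once_with("a.txt")   # assert one thing

    def test_upload_reports_name(self):
        # don't continue the first test -- set up fresh, assert the next thing
        client = mock.Mock()
        self.assertEqual(Uploader(client).upload("a.txt"), "uploaded a.txt")
```

Note how the second scenario gets its own test with its own mock setup, rather than piggy-backing on the first.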

Integration Testing

  • It’s expensive to set up data
    • That’s a talk unto itself: how to do that in a sane, repeatable way.
  • There is no mocking. It’s real stuff, all the time.
  • Thus, when you achieve one milestone, your test (more like a “flow”) continues to the second milestone. Examples:
    • Upload, Pause, Continue Upload, Download, Pause, Continue Download
    • Upload, kill upload, Upload again
    • Upload, Download, kill Download, Download again.
    • Create item, edit item, rename/move item, delete item.
    • It’s too expensive to try to get to a middle state, unlike mock-land.
  • Along the way, you print out “assessments” (to go with the AAA-style terms) of where your test is at and what data it’s seeing.
    • ie, Arrange, and then
    • Act Assess Assert
    • Act Assess Assert
    • Act Assess Assert
  • In case of failure, you compare the log of the failed test with the log of a successful previous test to see what’s different.
  • The test can be VERY complicated – and LONG – and that’s fine. You only know the detail of the test while you are building it.
    • Once it goes green in a CI system, you forget the detail, until it fails.
    • If it does fail, you debug through the test to re-understand the test and inspect data along the way.
  • Expect flakiness
    • Sometimes things just fail. Example: locks placed on tables in PostGres for Unique in their UAT environment by some other process.
    • Sometimes things fail because of a reason
      • Somebody changed a schema.
      • Somebody deleted some key data
      • Some process crashed
      • Previous other test left behind data that FK locks the data you want to work with.
      • All these need human care and feeding and verification; it’s not “mathematically sound” like unit tests are.
    • It’s a good idea to put an Assert.Ignore() if any failures happen during the Arrange() section (ie, databases are down, file system full, etc.) – no longer a valid test. Not failed, but not valid, so ignored.
      • Can postpone this till a test starts to be flaky.
  • But when it works
    • you know that all the stuff is CORRECT in that environment.
    • And when it works in CI day after day after day, any failures = “somebody changed something unexpected” and needs to be looked at.
      • Fairly often it’s a shared DB, and somebody else changed schema in a way that you’re not yet accounting for.
      • Or somebody changed the defaults of something, and your test data that you’re hinging a test on has not been updated to match.
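The Act-Assess-Assert flow might look like the sketch below. It’s Python for illustration, and the “store” is faked by a dict so the sketch runs; in a real integration test that would be the actual database or service. `skipTest` plays the role of NUnit’s `Assert.Ignore()` during Arrange:

```python
import logging
import unittest

log = logging.getLogger("flow")

class UploadFlowTest(unittest.TestCase):
    def setUp(self):
        # Arrange: if the environment can't be reached, the test is
        # invalid rather than failed, so skip (a la Assert.Ignore()).
        try:
            self.store = {}  # stands in for the real database/file store
        except OSError as e:
            self.skipTest("environment unavailable: %s" % e)

    def test_upload_pause_resume(self):
        # Act: upload the first half
        self.store["doc"] = b"hello "
        # Assess: log what the test is seeing, for later log-diffing
        log.info("store holds %d bytes", len(self.store["doc"]))
        # Assert: milestone one reached
        self.assertIn("doc", self.store)

        # Act: resume the upload (the flow continues; no fresh setup)
        self.store["doc"] += b"world"
        log.info("store holds %d bytes", len(self.store["doc"]))
        self.assertEqual(self.store["doc"], b"hello world")
```

The logged “assessments” are what you diff against a previously green run when the flow fails in CI.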

Which Ones To Use Where

  • Use Unit Tests to explore the boundaries and paths inherent in a piece of code.
    • Faster
    • Many of them
  • Use a single integration test (or just a few) to run through the code with all dependent systems
    • try to hit every SQL statement / external service at least once
    • If it worked, combined with all the unit tests, you’re probably good

SQLite for C#/SQL Developers

Current project uses sqlite3 for local storage of stuff for various reasons.  This was our first time working with it.  We’ve learned a few things that are not obvious, coming from a SqlServer world —

Tooling

SQLite Expert for the win.  It has a chocolatey package as well, although that failed to install once for me (but worked twice; who knows).

Note that SQLite is multi-process-open-able; data inserted by your program shows up automatically in the Data tab in SQLite Expert.  However, if SQLite Expert has the .db3 file open, you can’t delete it to scrub it clean.

All the same: varchar, nvarchar, and text

SQLite doesn’t care.  It has 5 storage classes for data, depending on what the data is.  Handy reference: http://www.tutorialspoint.com/sqlite/sqlite_data_types.htm.

So when we were doing “uniqueidentifier” it was actually doing BLOB under the hood.

It’s entirely dependent on what you’re trying to store, as to what gets stored.  So you can store a string in an int field.  https://www.sqlite.org/faq.html#q3  — it’s a feature, not a bug.
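Any SQLite binding shows this flexible typing; for example, from Python (the behavior is SQLite’s own, not the driver’s):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INT)")             # column *declared* as INT
con.execute("INSERT INTO t VALUES (?)", ("hello",))
value, storage = con.execute("SELECT n, typeof(n) FROM t").fetchone()
print(value, storage)   # hello text  -- the string is stored as TEXT anyway
```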

GUIDs as binary, beware Endian.

Most programmatic forms of storing Guid data end up storing it as Binary/Blob.  You know it’s binary if you do:

select id, hex(id)

And you get two values that look about the same length, but with the bytes shuffled around:

{E589C188-2575-4AC6-BD41-A66A00A1AF22}

88C189E57525C64ABD41A66A00A1AF22

You might say “hey! bad Endian!” but actually it’s Guid.ToByteArray() that’s doing the re-shuffling.

So, if you see the value {E589C1… in a grid result, and you want to select it, and you say

select .... where id=x'e589c1...'

That doesn’t work!  You have to give the bytes in the other order (as returned from hex(id)).
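Python’s `uuid` module can reproduce the shuffle: its `bytes_le` layout matches what .NET’s `Guid.ToByteArray()` produces (first three fields byte-swapped, last eight bytes in order), so you can use it to check which hex string to feed the `x'...'` literal:

```python
import uuid

g = uuid.UUID("{E589C188-2575-4AC6-BD41-A66A00A1AF22}")

# bytes_le mirrors .NET Guid.ToByteArray(): time_low, time_mid and
# time_hi are little-endian; the remaining 8 bytes keep their order.
print(g.bytes_le.hex().upper())   # 88C189E57525C64ABD41A66A00A1AF22
print(g.bytes.hex().upper())      # E589C18825754AC6BD41A66A00A1AF22 (straight order)
```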

Foreign Keys Not On By Default from C#

This one was a shocker.  It’s a pragma to turn foreign key checking on.  However, tools such as SQLite Expert do this automatically for you.

What we ended up with is a GetOpenDbConnection() call that did both .Open() and executed the pragma.
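The same default bites any binding; here it is from Python’s sqlite3, with a helper in the spirit of the post’s GetOpenDbConnection() (the helper name and schema are mine, for illustration):

```python
import sqlite3

def get_open_connection():
    """Open a connection AND turn on FK enforcement, in one place."""
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")   # off by default, per-connection
    return con

con = get_open_connection()
con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE child (id INTEGER PRIMARY KEY,"
            " parent_id INTEGER REFERENCES parent(id))")
try:
    con.execute("INSERT INTO child VALUES (1, 999)")   # no parent 999 exists
except sqlite3.IntegrityError:
    print("FK violation caught")   # without the pragma, this insert succeeds
```

One wrinkle worth knowing: the pragma is a no-op while a transaction is open, so it belongs right after `.Open()`, exactly as the post’s helper does it.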

Logging Generated SQL from Dapper

A lot of this got figured out after we could see the SQL Generated by the various libraries we were working with. Turns out, that’s 2 statements after your connection is opened:

con.Flags |= SQLiteConnectionFlags.LogAll;   // System.Data.SQLite: turn on statement logging
con.Trace += (o, e) => { Console.WriteLine("SQL: " + e.Statement); };   // print each statement as it executes

In Conclusion

These are the bombshells we’ve experienced over the last week.  Hopefully, all the bombshells are done with now.

Other than this learning curve — very solid, very fast, very nice.  2 Thumbs Up. Plus One. Heart It.

git rebase

I’ve never done a rebase before.  I figured today would be a good day to try it out.   It’s a simple case —

image  image

From the client’s point of view, origin/master is ahead by one commit.   It happens to be a conflicting change.

In Tortoise (I’m starting out easy), I pick “Rebase”, and I’m saying I want to change my branch SG-EndToEndTest to be “re-based on” remote master.

image

  • It’s showing me the list of commits that I’ve made since I started my branch from wherever
  • Everything is picked.  I could choose to not-pick some stuff and leave it behind, like config file changes for local debugging.

I click “Start Rebase”.  It starts from the bottom of the list (ID=1) above and tries to re-apply the commits to the new root.   It runs into conflicts –

image

I right click on the conflict, edit it .. turns out TortoiseDiff doesn’t think it’s a conflict, I can easily mark it as resolved.  I have to do this twice, both in .csproj files.

When it’s done, it’s all still local – the server’s not any different – but local shows:

image

  • The old commits are still there.
  • But they’ve been cloned and re-grafted onto a new source node, and the label has been moved.
  • If I had to, I could undo everything by force-moving SG-EndToEndTest to be based on ad1c.

If I now push up to the server – I get an error:

image

Instead, I have to push with --force (“may discard unknown changes”) (ie, “tell the server to do stuff where it discards changes that I, the client, know nothing about – JUST MOVE THE REF”).

image

Now the server looks like this:

image

That was my first, very simple, rebase (without squashing).
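The same flow from the command line would be roughly the demo below, built in a throwaway repo so every step runs. One hedge: I use `--force-with-lease`, a slightly safer cousin of `--force` that refuses to push if the server has commits you haven’t fetched yet:

```shell
# Throwaway demo: rebase a branch onto a moved master, then force-push.
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git symbolic-ref HEAD refs/heads/master    # pin the branch name for the demo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "base"
git push -q origin master

git checkout -qb SG-EndToEndTest
echo test > test.txt && git add . && git commit -qm "end to end test"
git push -q origin SG-EndToEndTest

# Someone else moves master ahead by one commit...
git checkout -q master
echo other > other.txt && git add . && git commit -qm "someone else's commit"
git push -q origin master

# ...so we re-base our branch onto the new master and force-push it.
git checkout -q SG-EndToEndTest
git fetch -q origin
git rebase -q origin/master
git push -q --force-with-lease origin SG-EndToEndTest
```

After the push, the branch’s commit sits on top of the new master, just like the TortoiseGit screenshots above.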

The pull request that I had open against that branch survived as well, and changed from “Cannot merge – conflicts” to “Can merge automatically”:

image

Goodbye to old code and dreams of immeasurable wealth

image

In the Beginning

I would always hear about people who wrote simple pieces of software, who were in the right spot, at the right time, and their stuff got used and they became famous, and.. perhaps even rich. 

Every time I heard such a story, my baby tyrant would say: “I want that!  Lets DO that!”

To which my Fuddy-grownup would say, “Honey, you probably won’t become famous, and it probably won’t work out.  Are you sure?”

And the Couch-Buddy would say, “Ah, too much work.  Lets read some more facebook.”

I Made a Decision

I started coding this thing that I thought would be a good start of things.  It was an app to make reading twitter easier – less context switches.  It was also an experiment in using Azure, Visual Studio Online, and a little bit in starting bootstrap from scratch.

I got it working.

I started the code 7/26/2015, and by 9/22/2015 I was ready for the big time.   This was mostly an hour or two during a workweek in the evenings, and maybe an hour or two on a Sunday morning.

I had a logo created, bought a bootstrap style, I had added what I thought were the key features I needed, I rebranded it, and I bought a domain name.

image  image

I stopped.

And then life got complicated, and I let it sit – costing me monthly $, btw.  $17 per month to keep it hosted at the cheapest level I could get away with, AND have a domain name.  I used it for a while. 

Eventually, I got cheap, and work distributed a full MSDN license to me with an azure subscription, so I nuked it.

I’m letting it go.

Very recently, I put it back online under my MSDN license –  You can use it here:

http://twit-sort.azurewebsites.net/

I’ve cleaned up the code that I deployed, removed all the passwordly bits from it, and uploaded it to github.  Here’s the guts of it:

https://github.com/sunnywiz/twit-sort/blob/master/azuremvcapp1/Controllers/ReadController.cs

Letting go the dreams as well

I would have liked to have seen this thing become better.

  • I could have done a face lift on the front page.  Too many words.  Replace with screenshots of the configuration page and the read page.
  • I could have made it more colorful. Orange and Blue!  You can see this in the icon a bit.
  • I could have made it front-end js only, with no server side talking to twitter, using local-storage for persistence
  • I could have added “click hashtag or username” to create additional groupings on the fly.  delete groupings on the fly as well.

The good news is, all these dreams live on, in a future project – one that works with Facebook instead of Twitter.

Conclusion

Letting this one go to make psychic room for other things that interest me.   May it bless others.  If you write a good one like this I’ll use it.

Latest Round of Harvesting Car-Tracks

image

In the past, I used Android – Torque, and an upload to Dropbox, to gather car-tracks.

I’ve started up that project again – this post focuses on my solution for gathering car-tracks for later processing.

 

image

Android-Tablet-Always-On

I got a Samsung Galaxy Note Tablet used, and I’ve stuck it in the glove compartment of my car.  It’s plugged into power, but power only runs when the car is turned on.  Click on image to zoom.

I use automate-it to run a few rules:

  • When power goes off, go into airplane mode.
  • When power comes on, turn off airplane mode, and start MyCarTracks

This seems to work as long as I drive a good amount each day. Then again, when I went to write this blog post, I found the tablet powered completely off – not enough charging?  temperamental battery?   Looks like not enough charging is the culprit.

It would be awesome to completely shut down the tablet when power is lost, and then power on when power is applied; however, I’d have to jailbreak to get that, and my 15 minutes of attempting to jailbreak didn’t work, so, meh.

MyCarTracks

image

I could totally get by with the free version of MyCarTracks.  It’s an excellent product!  It has these features which I use:

  • Auto-record car tracks – after you reach 6 miles per hour
  • Auto-stop recording car tracks – when still for 5 minutes.

I can then get access to my tracks via an “Export All” feature, which will let me export to CSV, GPX, or KML.  GPX is the winner for me:

<?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?>
<?xml-stylesheet type="text/xsl" href="details.xsl"?>
<gpx
version="1.0"
creator="MyCarTracks"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.topografix.com/GPX/1/0"
xmlns:topografix="http://www.topografix.com/GPX/Private/TopoGrafix/0/1" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.topografix.com/GPX/Private/TopoGrafix/0/1 http://www.topografix.com/GPX/Private/TopoGrafix/0/1/topografix.xsd">
<trk>
<name><![CDATA[2015-04-15 19:52]]></name>
<desc><![CDATA[]]></desc>
<number>29</number>
<topografix:color>c0c0c0</topografix:color>
<trkseg>
<trkpt lat="38.242019" lon="-85.72378">
<ele>132.89999389648438</ele>
<time>2015-04-15T23:52:25Z</time>
</trkpt>
<trkpt lat="38.241821" lon="-85.723613">
<ele>131.89999389648438</ele>
<time>2015-04-15T23:52:29Z</time>
</trkpt>
<trkpt lat="38.241649" lon="-85.723491">
<ele>129.1999969482422</ele>
<time>2015-04-15T23:52:32Z</time>
</trkpt>

MyCarTracks.Com

But Wait There’s More!

image

I went ahead and paid them $16 for a 1-year service for a small fleet, which gives me access to my tracks online (up to 2 years old) for quick viewing.  In order to make this happen, I sometimes hook up the tablet to my WIFI and say “synchronize all”.  There’s also an option where I can say “sync between 2 and 3 in the morning”, and I configure automate-it to take airplane mode off from 2-3; however, that’s hit or miss.

Once the tracks are loaded up to MyCarTracks.Com, I can browse them on a pretty nice map.  (picture at the top of the post).

What’s Next

My intention is to load these GPX’s into a small sql-server database, using Spatial (Points), and then come up with little data sets of “here’s all the tracks that passed through these two points”, etc.

I then want to take those tracks and convert them to a 3-D rendering with Z-axis = time, to compare various paths with each other.   

And I want to convert that into a 3-D sculpture.   Because, art.   My art.   Representation, archival – these I love.

But, one thing at a time.  I’m always willing to shelve my projects; I only work on them when they are attracting my soul.  Might be a bit before I get there.  I do have a start on the gpx-parsing code, though: https://github.com/sunnywiz/cartracks2016
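The core of the GPX-parsing piece is small. A sketch with Python’s stdlib ElementTree (the actual project is C#, so this is just the shape of the problem; the GPX 1.0 namespace matches the export shown earlier):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.topografix.com/GPX/1/0}"  # GPX 1.0 namespace

def parse_trackpoints(gpx_xml):
    """Flatten a GPX document into (lat, lon, iso_time) tuples."""
    root = ET.fromstring(gpx_xml)
    return [
        (float(pt.get("lat")), float(pt.get("lon")), pt.findtext(NS + "time"))
        for pt in root.iter(NS + "trkpt")
    ]
```

Those tuples are then straightforward to bulk-insert as spatial Points.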

SqlServer on a Ram Drive: Fast or Not?

TL;DR:  Don’t bother, it’s not.

I’m at it again .. writing what I call integration tests, which are effectively database-friendly-data-setup-and-teardown tests.  As can be imagined, it’s definitely much slower than unit testing; however, I love doing it, and there’s a lot of sprocs and other stuff that it’s really nice to get some tests around.  (Most of the bugs that led to this investment in time were in the sprocs.)

Since I had the RAM available, I decided to try having SQLSERVER (Developer Edition) run against databases that were stored in RAM.  How does that compare?

The database is about 4G in size.

SqlServer Developer, 10G RAM Drive, In a VM , Tests run by R#

image

SqlServer Developer, using MDF files against C: in a VM; VM is on SSD; R#

image

SqlServer Enterprise on VM in Azure (network lag); R#

image

One test failure was because this server’s copy of the invoice database was not complete; and the test was set to be readonly against this server (local sqlexpress = much happier about dropping and re-inserting records).

I could not ping all the way to the database server, but Client Statistics would indicate a ping of probably 150ms?

image

SqlServer Enterprise at client location via VPN; R#

image

This is definitely faster than our azure-hosted SQL – looking at ping, it’s a 50ms round trip.

Pinging taacasql01.triaa.local [10.120.0.10] with 32 bytes of data:
Reply from 10.120.0.10: bytes=32 time=50ms TTL=127
Reply from 10.120.0.10: bytes=32 time=44ms TTL=127
Reply from 10.120.0.10: bytes=32 time=45ms TTL=127
Reply from 10.120.0.10: bytes=32 time=45ms TTL=127

SqlServer Enterprise local network (client site); Teamcity

image

This is almost on-par with local SqlServer.  However, I don’t know if the machines are faster.

Summary

Ram-Disk Sql-Server didn’t help.  Or maybe it’s that, even on a regular hard drive, it was able to load everything into RAM, and got quite fast.

Local Sqlserver vs LAN SqlServer were close enough; I’d use one as a substitute for “how would it perform” on the other.   Helps that the customer is (probably) running both VM’s on a Hypervisor, and network communication between the two machines is … superfast.

WAN SqlServer was definitely the dumps; however, that’s good for posing artificial limits on ourselves, to make sure we’re not doing too many round trips, etc.   Nevertheless, our main cloud sql server is slower than a VPN into our client; that doesn’t seem right. Or, our client is that awesome.  It could be the latter. 

Not shown above, but if you drill in to some of the tests, you can see the cost of setting up an Entity Framework context the first time.  It seemed to take about 8 seconds against my local server.  Once set up, subsequent tests were less than a second each.  However, the set-up would happen again for every other test fixture – apparently whatever it’s doing to cache things got dropped between fixtures.  Possible optimization: hang onto it in a static?  *food for thought*

Methodology Notes:

I ran each full suite twice.  Sometimes three times, till I got what seemed to be a stable number.

I used SoftPerfect for the RAM disk.  It’s set to sync the ramdisk to disk every hour or so.  After seeing that it didn’t really improve things, I deleted it.

image

I drank 3 glasses of Tea, 1 Spicy Hot V8, and ate 3 pieces of chocolate, and 1 cream cheese snack, while running all these tests. 

My Right Forearm is Toast

image

I am writing this blog post mostly using a microphone.

I think I’ll try to do this in one shot, clean it up later, but keep the original version around, so that the blemishes and problems with voice dictation can be seen.  This is not a good sample though, because I am not using a headset: my Turtle Beach microphone has a TRS connector and the Samsung Surface doesn’t have that input.

I am having difficulty using my right hand (lateral epicondylitis) and in an effort to heal I am choosing not to use it for at least 3 weeks. This has led to several discoveries:

Taking-things-for-granted discoveries:

There are several things built into the muscle memory of my right hand.  For example, in Visual Studio, Intellisense: when a suggestion shows, choosing it is apparently down-arrow, enter – but when my left hand tried, it pressed enter alone and did not know about the down arrow involved.

I discovered that my brain would output code at the same speed that my hands could put it in. (Perhaps my brain puts it out faster than my hands can put it in, and that is why I am having a problem.) Now, down to typing with one hand, I simply cannot keep up! I get frustrated, and the thoughts that were in my brain disappear, leaving me wondering what I was typing.

I have to reevaluate how I am going to be effective at coding over the next few weeks.

Directions and Challenges

I am going to see a doctor whom I trust and see where she thinks I should go. I am hoping that rest will let my body self-heal, but if there are major tears in the tendon, etc., that might need a different level of attention. I am very reluctant to go there. I have heard that once you invade, complications arise.  Bottom line: I need diagnostic stuff, an x-ray or something, to determine the state of things under the skin.

I have been trying to find resources online that state exactly what the mechanism is for ice and compression leading to healing. I see a lot of sources that indicate that inflammation is the body’s attempt to recover – in which case, how can I get more inflammation? 🙂 (A: the Graston Method is one.)

I have another problem as well: by using my left hand more now, it too is in danger of burning out.  Friday I think my active left arm hurt more than my slinged right arm.

I have watched videos of people using various speech-recognition programs to do programming. My reaction was: OMG, that is so slow! I am hoping that if I do something like draw onto paper first and get the code out quickly, then I will be able to take my time typing it into the computer in a relaxed and optimal way. I wonder if I could hook up a MIDI keyboard to something that types in code for me.  Lots and lots of macros.

Something very nice about my current employer: he has had to go through this kind of pain as well, about two years ago.

Tools:

Here are the various tools and tricks that I have learned:

Typing one-handed with a living-room-style small chiclet keyboard: this minimizes travel distance between keys; however, I had a hard time finding a small keyboard which also had function keys. Wife and I found one; will try it on Monday.

I am now much more aware of the muscle tension in my forearm while typing.  Normally I would tense up everything and then out would come a burst of keystrokes.  I am now trying to be aware of keeping everything relaxed and returning to zero after every keystroke. 

Four ways of limiting motion:

1. Tennis elbow brace – bad. It cut off a lot of blood circulation and ended up being less than effective, because my problem is along the entire length of the muscle / tendon and not just in the elbow region.

2. Sling: a lot better. The first two days using the sling, my hand definitely went into repair and recovery mode. However, the sling dug into my neck a lot and can be very uncomfortable.

3. Athletic Tape:  bind my right hand fingers together to prevent the temptation to use them.

4. Wear a glove on my right hand: this is very effective in stopping me from using it accidentally. However, it is not as healing for my hand as a sling, but it is better on my neck.

I find it very funny that my attempt to say third came out as a turd

Another thing that deserves a mention is KT tape.  I used it for a week or so prior to becoming serious and using a sling.  It provides a lot of good support, but at the expense of skin sensitivity, as the support comes from elasticity of the skin.  Eventually the skin rebels.

And now I have a cat who is trying to use the microphone as well

Emotion:

So I have this amazing opportunity: to re-evaluate how I work, as well as something that forces me to choose what I do with my time. I am certainly not going to play much Elite Dangerous, nor am I going to work on my side project, until this is better – trying to save my hands for generating income.  I could probably get around playing American Truck Simulator with my steering wheel and using my left hand, just like driving my car.  However, there’s only two cities left to visit; I’m probably going to wait for the DLC / Arizona to show up before I play it more.

No, I now have an opportunity to catch up on several books that I have not yet read, and to focus on working out and nutrition and things like that.

However, right now I am sad and just want to say cluck it all and watch TV shows and hide. And that’s okay, it’s just part of grief. I am not sure I am out of denial yet. Maybe I’m near the pyramids. Get it? Denial? The Nile? Ed used to make that joke all the time.

The other disadvantage of voice dictation is that this laptop keeps thinking I am not doing anything, and shutting off in the middle of a sentence.

Wish me well! I am now going to go back through this post and use my left hand to edit it so it is readable.  I also had a cat volunteer to sit on my right hand helping me not use it.

Original Text – before lots of editing.

I am writing this blog post mostly using a microphone.

I think I’ll try to do this in one shot and then clean it up later but keep the original version around so that it’s blemishes can be seen. space this is not a good sample though, because I am not using a headset comma because my Turtle Beach microphone has a TRS connector which My Little surface Samsung sing sing thing doesn’t have an input for

I am having difficulty using my right hand insert black muscle Palma and in an effort to heal this hand I am choosing not to use it for at least 3 weeks. this has led to several discoveries hole in the line

There are several things that I take for granted that are built into the muscle memory of my right hand for example in Visual Studio Computing intelligence is apparently down arrow enter, but I didn’t. Know that my right hand you that when my left hand try to do the same thing it tried just pressing enter and did not know about the down arrow involved

Other thing I discovered is that for many years, my brain would output code at the same speed that my hands could put it in. perhaps my brain puts it out faster than my hands and put it in and that is why I am having a problem. Now down to typing with one hand, I simply cannot keep up the mycohl I get frustrated, and the thoughts that were in my brain disappear leaving me wondering what I was typing. I have to reevaluate how I am going to be effective at coding over the next few weeks.

My solution plan is currently very simple: I am going to see a doctor whom I trust and see where she thinks I should go. I am hoping that rest will let my body curious, but if there are major tears in the tendon or something like that color that might need a different level of attention. I am very reluctant to go there. Center Center

I have another problem as well: by using my left hand more now, it too is in danger of burning out.

I have watched videos of people using various speech synthesis program to do programming. my reaction was color oh my God that is so slow exclamation I am hoping that if I do something like hold onto paper first and get the pot out quickly, then I will be able to take my time putting the code in to the computer in a relaxed and optimal way.

Something very nice about my current employer colon he has had to go to this kind of pain as well, about two years ago

Here are the various tools and tricks that I have learned colon um

Typing one handed with a living room style small chiclet keyboard color this minimizes travel distance between keys semicolon however I had a hard time finding a small keyboard which also had function keys

At first using a tennis elbow brace, however. Cut off a lot of blood circulation and ended up being less than effective, because my problem is along the entire length of the muscle / tendon and not just in the elbow region

Second attempt is using a sling, and that has worked a lot better. the first two days using the sling Kama my hand definitely went into repair and recovery mode. however, the slang dig into my neck of lot and can be very uncomfortable align

Turd was to use either some athletic To find my right hand fingers together to prevent the temptation to use them, or to wear a large fingerless mittens on my right hand colon this is very effective in stopping me from using it accidentally. However, it is not as healing for my hand as a link, but it is better on my neck

I find it very funny that my attempt to say third came out as a turd

I have been trying to find resources online that state exactly what the mechanism is for ice and swelling leading to Healing. I see a lot of sources that indicate that inflammation in the body’s attempt to recover in which case how can I get more information? 🙂 New York

And now I have a cat who is trying to use the microphone as well

So I have this amazing opportunity: to re-evaluate how I work, as well as something that forces me to choose what I do with my time. I am certainly not going to play much Elite Dangerous, nor am I going to work on my side project until this is better, trying to save my hands for generating income. I could probably get away with playing American Truck Simulator with my steering wheel, using my left hand just like driving my car. And I now have an opportunity to catch up on several books that I have not yet read, and to focus on working out and nutrition and things like that.

However, right now I am sad, and I just want to say chuck it all and watch TV shows and hide. And that’s okay; it’s just part of grief. I am not sure I am out of denial yet. Maybe I’m near the pyramids. Get it? Denial? De-Nile? The Nile River? Ed used to make that joke all the time.

The other disadvantage of voice dictation is that this laptop keeps thinking I am not doing anything, and shutting off in the middle of a sentence.

Wish me well! I am now going to go back through this post and use my left hand to edit it so it is readable.

dotnetmud: spacemud: optimizing network traffic

I spent some time getting the network load produced by the game down. In the rest of this post, I test out the changes to see how much better everything became.

Methodology

Two ships connected; I’m going to leave their starting locations as random.  I’m going to have them turn in circles and constantly fire.  This will yield some explosions against the planet, and others against each other, but most of the missiles will be flying out into space.

  • Running against a deployed web server in the cloud.

  • Using WIFI and my home internet connection.

  • Collecting data via portal.azure.com’s App Service Monitoring graph, set to minutes. (This didn’t work; I had to go with my local wifi connection instead.)

image

https://github.com/sunnywiz/dotnetmud2015/tree/chatty1

This is where we started. Sample packet and network load:

imageimage

 

https://github.com/sunnywiz/dotnetmud2015/tree/chatty2

The main change was to give custom JsonProperty names, as well as to reduce the number of decimal places being transmitted.
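The effect can be sketched in plain JavaScript (the long property names below are made up for illustration; the short names match the `DX`/`DY`/`X`/`Y` fields visible in the sample packet):

```javascript
// Illustrative only: compare a verbose payload against one using
// short JSON property names and fewer decimal places.
const verbose = {
  DeltaX: 17.109375182342, DeltaY: 299.51273718234,
  PositionX: 37.012381723, PositionY: 1474.8123712,
};

const r = (v) => Math.round(v * 10) / 10; // keep one decimal place
const compact = {
  DX: r(verbose.DeltaX), DY: r(verbose.DeltaY),
  X: r(verbose.PositionX), Y: r(verbose.PositionY),
};

const before = JSON.stringify(verbose).length;
const after = JSON.stringify(compact).length;
console.log(before, after); // the compact form is a fraction of the size
```

In the C# version, the renaming is what the `JsonProperty` attribute does at serialization time; the rounding has to be done explicitly before sending.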

imageimage

I use Decimal instead of Double because with doubles, there’s a chance the underlying binary representation isn’t quite so digit-friendly; decimals are exact for every decimal digit. However, decimals are supposedly harder on the processor for doing math. I haven’t tested that.
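A quick illustration of the digit-friendliness issue (JavaScript numbers are IEEE-754 doubles, the same representation as C#’s `double`):

```javascript
// Doubles can't represent most decimal fractions exactly, so
// serializing the raw value can emit long digit strings that bloat the payload.
const x = 0.1 + 0.2;
console.log(JSON.stringify({ X: x }));             // {"X":0.30000000000000004}
console.log(JSON.stringify({ X: +x.toFixed(1) })); // {"X":0.3}
```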

The result: not too much better. 5.5 MB came down to 3.1 MB on the receive side.

 

imageimage  

https://github.com/sunnywiz/dotnetmud2015/tree/chatty4

chatty3, which I’m skipping, changed the method signatures to two-character method names.  Not too much saved there.

chatty4 added “blanking out” data that doesn’t change much.  It does this by keeping track of what it believes the client thinks the state is, and doing a diff:

image image

imageimage

In order to pull this off, I had to go to a dictionary of objects to render, rather than an array. I also kept a “full” frame every 10 frames, similar to MPEG encoding’s I-frames vs P-frames.

I thought I’d get fancy and send nulls if various values (DX, DY) had not changed (for example, they do not change for bullets); however, JSON sends a “:null”, which can be longer than just sending the value. The more advanced version, “don’t include the property if it hasn’t changed,” is possible, but it got too complicated, so I skipped it for now. So this version only does the image and name attributes, but that’s enough to get a nice reduction in size:
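A minimal sketch of the diff-plus-keyframe idea in JavaScript (hypothetical helper names; the real code tracks per-client state on the server, and objects live in a dictionary keyed by ID, so the diff only needs the changed fields). Note this sketch diffs every property, which is more aggressive than the version described above:

```javascript
// Sketch: send a "full" frame every KEYFRAME_INTERVAL frames (like an
// MPEG I-frame); otherwise send only properties that changed since the
// last frame sent for this object.
const KEYFRAME_INTERVAL = 10;

function diffFrame(prev, curr, frameNumber) {
  if (!prev || frameNumber % KEYFRAME_INTERVAL === 0) {
    return curr; // full frame
  }
  const out = {};
  for (const key of Object.keys(curr)) {
    if (curr[key] !== prev[key]) out[key] = curr[key]; // changed fields only
  }
  return out;
}

// Because the render list is a dictionary keyed by ID, stable fields
// like IM (image) and N (name) drop out of the in-between frames.
const prev = { IM: "ship1.png", N: "sunnywiz", X: 37.0, Y: 1474.8 };
const curr = { IM: "ship1.png", N: "sunnywiz", X: 37.5, Y: 1480.1 };
console.log(diffFrame(prev, curr, 3)); // { X: 37.5, Y: 1480.1 }
```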

 

imageimage

The savings: another 30%. It’s still a lot of data, and not quite the gains I was hoping for.

At this point, an average entity on the screen is taking 100 bytes or so, instead of the 500 they were before. 

Network Optimization: Where to go from here?

I’m going to call this good enough for now.  The directions to go from here for network optimization:

  • Instead of using the system JSON serialization, write my own serializer. This could do things like “if it hasn’t changed, don’t send it.”
    • So instead of: {“DR”:0.0,”DX”:17.109,”DY”:299.512,”ID”:0,”IM”:null,”N”:null,”R”:446.7,”RA”:3.0,”X”:37.0,”Y”:1474.8}
    • “dx17.109dy299.512id341r446.7x37y1474.8” is about half the size, but it needs a lot more code to massage it.
    • A binary packing method could work as well; a quick search doesn’t find anything that is happy in both C# and Javascript.
  • Put some logic in as to whether something needs to be sent every time or not. For example:
    • Bullets pretty much go in a straight line once fired, so they don’t need to be sent every time. In fact, if the current X,Y is approximately where the previous update’s X,Y + DX,DY would put it, then there’s no need to send those values.
    • Planets currently don’t move (that will change). This is a broader case of “nothing has changed in this object from what the client expects, so just acknowledge the object still exists, nothing more.”
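The compact-string idea in the first bullet might look something like this sketch (numeric fields only; a real format would need escaping rules for string values, and the regex-based decoder here is just for illustration):

```javascript
// Pack numeric fields as lowercase key + value, e.g. "dx17.109dy299.512".
// Assumes keys contain only letters; string values are not handled.
function pack(obj) {
  return Object.entries(obj)
    .map(([k, v]) => k.toLowerCase() + v)
    .join("");
}

// Unpack by matching alternating letter runs and number runs.
function unpack(s) {
  const out = {};
  for (const [, k, v] of s.matchAll(/([a-z]+)(-?\d+(?:\.\d+)?)/g)) {
    out[k.toUpperCase()] = parseFloat(v);
  }
  return out;
}

const packed = pack({ DX: 17.109, DY: 299.512, ID: 341, X: 37, Y: 1474.8 });
console.log(packed);            // "dx17.109dy299.512id341x37y1474.8"
console.log(unpack(packed).DY); // 299.512
```

The packed string drops all the quotes, colons, and braces, which is where the roughly 2x saving over JSON comes from.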

What’s next?

I think that authorization / identity of clients (and dealing with disconnection better) is in order.  

Then I can do the “single person logged in, multiple clients” code.

And then we can launch a spaceship from the text game.

And then we can design an actual playable game.   (oooo!)