DIY Dutch Door

Posted by & filed under DIY, Home.

One of many “my wife saw it on Pinterest” projects – next up was a Dutch door at the bottom of our stairs, at the entry to the basement.  Having a Dutch door here acts as a baby/dog gate while keeping the downstairs connected to the upstairs during social functions, or just normal life.  Most posts on this seemed pretty lacking, as this isn’t really a simple task if you want it done right.

Overview of steps:

  1. Get a solid core door slab (slab preferred over pre-hung)
  2. Determine right split, cut the door in half
  3. Build shelf onto lower door section
  4. Measure and cut hinge mortise on door
  5. Lay out jamb and transfer hinge mortise locations to jamb
  6. Drill lockset
  7. Assemble Jamb around door
  8. Install

Specific tools needed:

  1. Router (I’ve been using a simple trim router for all of this work)
  2. Door Mortise Kit
  3. Lockset Jig
  4. Jamb kit (ex: ).  TIP: Make sure you use a finger-jointed pine jamb and not MDF – MDF has too much flex for the weight and floppiness of the two door halves.
  5. Dutch door bolt – Deltana seems to have some quality hardware.
  6. 4 Hinges (3 1/2″) – there are nice sets of Schlage 3 1/2″ radius hinges, although I got this set at Home Depot (3-pack + single)

Get a Solid Core Door

I found a bunch of good door slabs at a local building salvage; the one I eyed for the Dutch door was a 6-panel, solid core door.  You’ll want solid core so you can cut it in half at any point and don’t have to fight with a hollow core.  It’s preferable if you can get one that doesn’t have the locks drilled or hinges mortised yet.  The size should match whatever the current door is; standards are usually 80″ high, and 28″, 30″, or 32″ wide.  I ended up with a door 1″ shorter (79″) than the existing door, knowing I’d be adding a shelf and could make up the extra 1″ that way.


Cut the door in Half

Determine where you should cut the door in half, some factors to help decide might be:

  • Matching door-knob height to your other doors, add enough room for your shelf
  • Desired shelf height – add/subtract from there
  • Style of door might dictate the best “look”

I found a guide with some guidelines on what a Dutch door “should” be.  I ended up cutting it at 38″ from the bottom and adding a 1″ shelf, which left adequate clearance for my knob to be drilled at 35 1/2″.  After fighting with clamps and makeshift fences for years with my circular saw, I finally got an edge clamp, which makes things so much easier.

Build Shelf/Ledge

Depending on the thickness of your ledge, you might be making up height for a short door like mine, or you may need to trim some height from one of your door halves to accommodate it.  There are lots of ways to attach it; I went with a direct attachment.  An easier way might have been to build a “U”-shaped channel that fits over the bottom half, but that adds bulk.  I used a 2×4 I milled to the depth I needed to add, then rabbeted the shelf back to match the door thickness.  I added a few dowels and glue, then attached it from the top with countersunk screws, which I covered with wood plugs.

 What side should the shelf go on?

I think the way I did it might have been slightly non-standard, but the internet tells me there’s no real standard.  I’ve seen the shelf extending to both sides, to the front only, and to the back only.  I had the shelf protrude from the back side of the door – away from the inside of the jamb; this way I didn’t have to carefully notch the shelf to fit within the door-stops, because that side of the door would still remain flat.  This seemed like the easiest option.

Cut Hinge Mortise

I ended up using a random PDF to determine hinge locations; it’s written for 4 1/2″ hinges, but you’ll probably only need 3 1/2″ hinges – four of them instead of three.

  • Top Half: Top of top hinge: 4 7/8″ from top of door
  • Top Half: Bottom of bottom hinge 4 7/8″ from bottom
  • Lower Half: Top of top hinge: 4 7/8″ from top
  • Lower Half: Bottom of bottom hinge 6 7/8″ from bottom of door.

Mark all the hinge locations and mortise all the hinges.

Drill Lockset

Before you assemble the jamb around the door is the best time to prep the lockset.  Drill the knob hole and the latch bore, then mortise the faceplate.  You could also drill and mortise the strike side of the jamb now too.

Build your Jamb

Next we’ll construct the jamb around the door, and end up with a pre-hung door assembly.

  • Lay out your hinge-side jamb next to your door halves pushed together
    • I didn’t end up making an artificial gap of any size – just pushed them together and left it
    • I aligned the top of the jamb about 1/8″ higher than the top of the door for adequate space.
    • Check your rough-opening height and trim your jamb boards now to the desired pre-hung height. (Ex: if your rough opening is 81 1/2″ high, but the jamb kit boards are 82″, you’ll want to cut them down to something like 81 1/4″.)
  • Mark all the hinge locations onto your hinge-side jamb and mortise the hinges
  • Attach the hinge-side jamb to the door halves by installing all of your hinges.  TIP: If your hinge screws protrude more than about 1/8″ from the back of the jamb, you may want to cut the tips off before you try to lift it into your rough opening.
  • Measure the top jamb piece considering adequate gap on either side of the door (1/8″-3/16″).
  • Line up the strike and top jambs to the front of the door and nail them together at the top corner.


I won’t cover all the door-hanging basics, just what I found to be different with this Dutch door.  By now you likely have a very heavy, wobbly pre-hung Dutch door.  I nailed a temporary door-stop onto the strike-side jamb where the two halves meet, just to have something to keep them closed against.  Most of the installation is the same, except you’ll want to constantly check the swing of both door halves as you start nailing in your jamb.

  • I used some clamps to keep the jamb in the doorway since I was doing this solo, you’ll need that to keep it in place if you let the top swing out while you work on the bottom.
  • Start at the bottom – make sure your bottom half has adequate clearance from the floor and swings properly.  Check level of the door and  plumb of the jamb – within reason.
  • Because I used an MDF jamb, I had a lot of flex in a few key areas that allowed for pivot-points when adjusting the door, these were at the lower-half top hinge, and near both hinges for the top half.
    • Because I didn’t add the door stops at this point I used some counter-sunk screws to be able to adjust how far off the rough opening the jamb would be
    • Play with tightening and loosening the screws to check the swing of both your upper and lower door.
    • Thinking I could suck in the top corner to keep the top half high, I actually pulled the opposite corner in too much, and the outside corner of the top half would hit the corner of the jamb.
    • Keeping the jamb as plumb/true as possible meant both halves swung free with proper spacing in the jamb.
  • Once you’ve secured the hinge-side, start shimming out the rest of the jamb and nailing it in place – do constantly check your swing in case you pinch any area too much.
  • The only friction was one of the wood plugs covering a shelf screw, which needed to be sanded flat.

Countersink screws into the jamb – allowed for adjustments (will be covered by doorstop):

Temporary door-stop:

Final Latches:

Photos show a coat of paint on the shelf and door itself, and the installation of the Dutch door bolt.  The Deltana bolt comes with two catches: a surface-mount one for when your shelf extends to the opening side of the door (vs. the strike side like mine), and a recessed plate (as pictured).  Pretty easy to throw on, and it makes it all work together!


Configure Honeywell Wifi Thermostat Binding for OpenHab2

Posted by & filed under Code, DIY, Home, Java.

I started working on setting up OpenHab as a home automation hub using an old laptop and a Z-Stick.  My main requirement is to be able to hook up my thermostat, as everything Z-Wave looks pretty straightforward.  My thermostat is a Honeywell WiFi enabled thermostat, which links up to their My Total Connect Comfort site.  The only binding available so far isn’t well documented and also forced me to switch to OpenHab2.  As getting OpenHab2 and this custom binding set up are the hardest parts, I thought I’d share my steps, since it’s not very clear from ANY of the OpenHab2 docs…

HoneywellWifiThermostat Binding

The binding has been built within a fork of OpenHab2.  The binding itself, and the only code that is unique to the fork, is:

In general, most “APIs” I’ve found (and built) for this thermostat rely on reverse-engineering the web APIs that support the mytotalconnectcomfort site, which I’ve found to be fairly reliable and consistent to hack on in lieu of a supported API.

The limited information for this binding, and the general OpenHab2 docs about custom bindings and configuration, make this a daunting task for someone who doesn’t live and breathe Java projects.  The steps needed to get your OpenHab2 configured with this binding are roughly:

  • Get the OpenHab2 IDE set up
  • Clone/download the Rendman fork with new binding
  • Build the new fork and/or add the specific binding into the main branch within your IDE
  • Run a build of the specific Honeywell binding
  • Get the built JAR for the Binding, place it in your OpenHab2 install.
  • Update addon configuration to recognize the local addon.


OpenHab2 IDE

NOTE: I did this all on Windows 7.  The guide at is pretty good, a few notes:

  • I ended up setting a JAVA_HOME environment variable
  • Added Maven to my classpath after install

Following the OpenHab instructions, Eclipse will come pre-configured to pull from all the existing repos, you’ll need to somehow get the Rendman fork next.

 Get Rendman fork with HoneywellWifi Binding

The two ways I did this:

1. Use Eclipse Git Plugin:

  • Window > Show View > Other…
  • Git > Git Repositories
  • From the Git Repositories view you can choose to clone a new Git repository into this view.

2. Clone the Repo Normally

  • Clone it somewhere as you would normally.

Build OpenHab

You’ll want to then get the dev environment working by building the main repository that Eclipse came configured with.  The largest issue I had was that some of the configs were causing a bunch of build errors due to outdated dependency references (I think).  I found the easiest thing was to add the Honeywell binding into the regular OpenHab repository.

  • If you’re good with Eclipse you can somehow add the Rendman addon’s HoneywellWifiThermostat into the main project.
  • I just copied the files directly on the filesystem and refreshed the project view in Eclipse.

Next, open a command prompt into the HoneywellWifiThermostat directory:

  • Run mvn clean package
  • If this is successful you’ll now have the .jar file to use for the binding in: [ECLIPSE DIR]\git\openhab2-addons\addons\binding\org.openhab.binding.honeywellwifithermostat\target\org.openhab.binding.honeywellwifithermostat-2.0.0-SNAPSHOT.jar

Get the JAR file and place into your running OpenHab addons directory

If you can get the .jar file, it still works pretty well.  Lucky for you, I’ll provide the latest version I generated:

Now you can configure the binding…


I’m not sure what I did in OpenHab2, but using PaperUI config I added things to the following files in conf/services:

  • addons.cfg
  • added “openhab.binding.honeywellwifithermostat” to the binding config
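
For reference, the relevant line in conf/services/addons.cfg ended up looking roughly like this.  This is a sketch from memory – the exact id string is whatever the built JAR registers, so treat it as an assumption:

```
# conf/services/addons.cfg
# append the custom binding to whatever bindings you already have enabled
binding = openhab.binding.honeywellwifithermostat
```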

Trying to reverse-engineer the settings, I can’t figure out where the actual valid config is stored.  In the end, what actually worked was getting HABmin2, which actually makes configuring things in OpenHab2 understandable.

FULL DISCLOSURE: At this point I’ve shifted to SmartThings, since I picked up a good deal on a SmartThings Hub.  It has been much simpler to pair new devices and set up various automations, plus it’s a standalone box I can just leave plugged in without constantly meddling with it.

Previously I’d spent some time playing with Home Assistant.  In a fraction of the time I spent on OpenHab I was successfully able to:

  • Get the server running
  • Add in the Honeywell Thermostat
  • Add in my UniFi Wireless controller
  • Add all 4 of my disparately-branded webcams.
  • Get the dev environment set up
  • Build and deploy the server and the UI
  • Successfully made changes to the existing Honeywell/Thermostat binding to add in support for additional “Fan Modes”.


Tunneling Slack traffic through ssh tunnels

Posted by & filed under Code, Work.

After using Slack for about a week or so, I was suddenly greeted by a corporate IT policy that blocked the domain…

Bored of not having it, I found a way this morning to selectively send my traffic through an SSH tunnel, leaving all other traffic to flow through its normal routes on VPN.  I’m using my Dreamhost account as the SSH host, and their instructions cover most of the setup.

  1. Get plink or other SSH software like Git Bash (it’s awesome!)
  2. Use a Proxy Auto-Config (PAC) file
  3. Slack free!

SSH Tunnel

Using your ssh (I find the command line to be the easiest, via either plink or a shell emulator), you’ll connect to your SSH host and open up the tunnel.  I chose 8080 as the port.

ssh -2 -D 8080 [username]@[your ssh host]

 Proxy Config

It looked like the easiest way to filter only specific traffic was a Proxy Auto-Config file.  I’m using Chrome on Windows, so I can just mess with the standard LAN settings in Chrome Settings > Proxy Settings > Advanced > Change Proxy Settings > LAN Settings.  By policy, I already had a wpad.dat automatic configuration script selected:

I downloaded it locally and went to edit it.  Mine already had conventions in place of a bunch of matching patterns and then instructions for each match.

	var slackproxy = /slack\.com/;
	if (slackproxy.test(host)) return "PROXY www.[yourproxydomain].com:8080; DIRECT";
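
For context, a complete minimal PAC file built around that pattern might look like the sketch below.  Note this is an assumption-laden example: since ssh -D opens a local SOCKS proxy, this version returns a SOCKS directive pointing at localhost; your host matching and port will differ.

```javascript
// Minimal PAC sketch: route only Slack hosts through the local SOCKS
// proxy that "ssh -D 8080" opens; everything else goes direct.
function FindProxyForURL(url, host) {
    // Match slack.com and any subdomain of it.
    if (/(^|\.)slack\.com$/.test(host)) {
        // Fall back to DIRECT if the tunnel happens to be down.
        return "SOCKS 127.0.0.1:8080; DIRECT";
    }
    return "DIRECT";
}
```

Browsers evaluate FindProxyForURL for every request, so keeping the match cheap (one regex test) is a reasonable design choice.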

File with edits:

Then, I re-pointed the auto-configuration script to my new local copy.

Now I’m back in!

Dark Social: What it isn't.

Posted by & filed under Rant, Social Media, Work.

I’d heard the term “Dark Social” on a client call about two months ago; I googled it and balked at the definition.

“Dark social describes any web traffic that’s not attributed to a known source”

Allegedly coined in an article that does call out important points about the whole theory, however in a very flowery context.  Why did they choose to call it dark social?  I guess that’s my biggest issue with it, since it’s really just “dark traffic”.  To say that this bucket of traffic is a more significant portion than any other identified source is a giant leap.

Source Reporting

Take your standard sources breakdown, as this is often the main starting point for segmentation of web analytics data.  You might be taking advantage of the built-in classifications from your tool, or getting more specific as part of some custom analysis process.  Either way, these all rely on the same points of data to determine the source breakdown.

We can see our Direct bucket as 40% of total traffic.  This “direct” identification comes from a lack of any other context about traffic source – no referring URL and no querystring parameters.  When we think about how the rest are identified, they still rely on those same two pieces of data.

Direct – or not.

Before we leap to classifying this large chunk of traffic as “Social” maybe we should think about what else it could be:

  • Any of the other identifiable sources lacking the normal ways of identification (gasp!)
    • Blocked referrers due to user-privacy settings
    • Dropped referrers due to bad redirects
    • Improperly trafficked paid traffic lacking campaign tracking parameters in URL
    • Improperly trafficked social and/or other channel traffic without campaign tracking parameters in URL
  • True “direct” traffic
    • Bookmarks
    • Return traffic
    • All other non-attributable non-social
  • True “dark social” traffic
    • Word-of-mouth
    • IM Sharing
    • All other non-attributable social
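
The bucketing above can be sketched as a decision tree.  This is purely illustrative – the input shape (referrer, utmCampaign, landingPath) is hypothetical and not from any particular analytics tool:

```javascript
// Hedged sketch of the classification logic: a visit only earns a
// "dark social candidate" label after every other explanation is ruled out.
function classifyVisit(visit) {
  if (visit.referrer) return "referred";        // an identifiable source
  if (visit.utmCampaign) return "campaign";     // tagged paid/social/email traffic
  // No referrer and no tags: "direct" -- but only deep links suggest sharing;
  // homepage/top-level landings are more plausibly typed-in or bookmarked.
  var segments = visit.landingPath.split("/").filter(Boolean);
  return segments.length <= 1 ? "true direct" : "dark social candidate";
}
```

Even then, the “dark social candidate” bucket still contains dropped referrers and untagged campaigns, which is exactly the point of the list above.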

When we look at the laundry list of reasons a traffic source can’t be identified, it’s important to make sure that you as the analyst, or your company, has a handle on the overall campaign strategy for attributing traffic.  If any one of these is less than optimal, your “dark social” assignment is just putting a name to “poor execution of marketing”.  I think you can expect at least some percentage of all unidentifiable traffic to fall into the existing source buckets – and likely in close proportion to the set of identifiable ones.


How should you use it?

As mentioned, before you attempt to classify anything as “dark social” you need to make sure that things aren’t falling into “direct” for other reasons.

  • Paid traffic without proper tracking.
  • Bad redirects dropping referrers
  • Social tracking/campaigns without proper campaign tracking

Where I do like how dark social is applied (from the original article):

The first was people who were going to a homepage ( or a subject landing page ( The second were people going to any other page, that is to say, all of our articles. These people, they figured, were following some sort of link because no one actually types “” They started counting these people as what they call direct social.

However, all things considered, I think this really helps you attribute true direct rather than this concept of dark social.  I’d be more comfortable calling visits entering those top-level pages from unidentifiable sources “true direct” and the deeper-level pages “dark social”.  I’d almost rather we bring back “word of mouth” for this bucket…


The Problem with CORS

Posted by & filed under Development, Javascript, Rant.

Cross Origin Resource Sharing (CORS) is intended to be revolutionary, empowering the web to push and pull data from everywhere.  In reality I see this causing more problems than helping anyone.  Note: I’ll use XMLHTTP instead of AJAX since the latter is so overloaded…

1. It breaks the cross-domain paradigm

This is really all I have to say: client-side, or “front-end”, code was always intended as such.  Your front-end was only allowed to talk to your back-end; all heavy lifting was to be done by your servers in a protected, secure environment.  Your front-end is a gateway, probably the least secure gateway imaginable, to your server-side infrastructure.  The restrictions put in place were intended to protect your sensitive information and ensure a secure experience for users.

  • Form POST validation – don’t respond to requests if they’re not validated as from your own servers (this is more of a convention than actual restriction)
  • Same-origin policy for XMLHTTP
  • Cross-domain cookie policies and restrictions.

2. It’s intended for resources

Yes, “resources” can mean anything – but think about what the original challenge was with AJAX when CORS was added to browsers: getting mostly static content to power single-page, rich web applications with “partial page loads”, and getting public data from a server not on your domain using XMLHTTP.  As you start to look into CORS you’ll see it defaults to allow simple loading of content with ease, without support for anything else you might expect from a true XMLHTTP call – things like authenticated requests, or reading cookies to deliver personalized responses.

3. Let the servers do the talking

In the days of quad-core processors, 64-bit browsers, untouched RAM-potential and dizzying internet speeds CORS is yet another thing that cracks open the limitation(s) that made us be conscious about what we do client side.

 “Great!  I can finally load this cross-domain data without making a server-proxy to bring it in via XMLHTTP”

Wrong.  Do you control this server?  What is their response-time SLA?  What if tomorrow they blow open the payload and send megabytes back to the browser?  Cross-domain limitations, and the reliance on servers to do the cross-domain talking, enforced increased security between parties and allowed server power to broker any issues between this data and your client side.

4. Did you forget Node.js?

Oh, you did?  Just like everyone else?  Node was a fun tool that allowed us to write server-side programming in our favorite client-side orchestration language of Javascript.  Asynchronous-all-the-things, even cross-domain requests!   There’s a reason this was allowed, server-side.  Now you can blend all of this, without using NPM, and using my browser as the server…

5. You can’t have it all.

CORS has a very specific set of restrictions in place that maintain the level of security we expect with XMLHTTP.

How the request/response combinations play out (origin restriction and cookie behavior):

  • Response Access-Control-Allow-Origin: * – all origins allowed, no cookies: CORS everywhere.
  • Response Access-Control-Allow-Origin: with a single origin – only that origin, no cookies: CORS for a single whitelisted domain.
  • Response Access-Control-Allow-Origin: with two origins listed – NOT ALLOWED: you can’t whitelist more than one.
  • Response Access-Control-Allow-Origin: * plus Access-Control-Allow-Credentials: true – all origins allowed, no cookies: will break if the CORS request contains credentials.
  • Request with withCredentials = true, response Access-Control-Allow-Origin: * plus Access-Control-Allow-Credentials: true – NOT ALLOWED: can’t have a wildcard origin whitelist AND allow credentials.
  • Request with withCredentials = true, response with a single origin plus Access-Control-Allow-Credentials: true – cookies allowed: the only way to send credentials, with a single whitelisted domain.

Notice that if all you’re receiving is static content, it’s easy to do across all domains; if you want to restrict CORS to a set of domains, you can only do it for one; and the most restricted case is the ability to send credentials – this includes auth and cookies.  I found these two posts lay out the credentials issue the best:

As we think about APIs, it’s clear that CORS wasn’t designed to provide the same heavy lifting as server-API calls.  API endpoints typically allow any number of Origins/hosts given they have proper permission, and usually have a pretty good auth scheme for that permissioning.  How do you set up CORS, with credentials, for multiple domain-origins?  You can’t, or at least you shouldn’t.  The answers I’ve seen all recommend managing a whitelist server-side and setting the Access-Control-Allow-Origin origin dynamically based on the request.
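
That server-side workaround can be sketched in a few lines.  This is a minimal illustration, not a production recommendation; the whitelist entries are made-up origins:

```javascript
// Sketch: echo the Origin header back only when it is on a server-side
// whitelist -- the only way to combine credentials with multiple origins,
// since the spec forbids "*" together with Allow-Credentials.
var WHITELIST = ["https://app.example.com", "https://admin.example.com"]; // assumed origins

function corsHeadersFor(origin) {
  if (WHITELIST.indexOf(origin) === -1) return {}; // unknown origin: send no CORS headers
  return {
    "Access-Control-Allow-Origin": origin,        // a single, dynamically chosen origin
    "Access-Control-Allow-Credentials": "true",
    "Vary": "Origin"                              // caches must not reuse across origins
  };
}
```

In a Node server you would apply it per request, e.g. res.writeHead(200, corsHeadersFor(req.headers.origin)) – which is exactly the “manage it server-side” answer the spec pushes you toward.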

6. What about JSONP?

Isn’t this why we’re here?  JSONP “isn’t good enough”, or “has security holes”, or “cross-site scripting vulnerabilities”.  Those might all be true, but in the end is CORS really any better?  Let’s think about JSONP:

  • Allows request to be made cross-domain like any other resource
  • Standard cross-domain restrictions remain in-place
    • Cookies/credentials
    • Can only send/receive data via GET
  • You’re putting the same faith in the 3rd party as with a CORS request
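
To make the comparison concrete, the entire JSONP “protocol” fits in a couple of lines.  The function and callback names here are illustrative, not from any real API:

```javascript
// Server side of JSONP, sketched: wrap a JSON payload in whatever callback
// name the client asked for, producing a script the browser will execute.
function jsonpResponse(callbackName, data) {
  return callbackName + "(" + JSON.stringify(data) + ");";
}

// The client side is just a script tag, e.g.
//   <script src="https://api.example.com/data?callback=handleData"></script>
// plus a global handleData function -- a plain GET, no credentials, no CORS.
```

Which is the point: JSONP never pretended to carry credentials or non-GET verbs, so comparing it to CORS is comparing two different trade-offs against the same trust problem.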

To me, CORS is not a better answer than JSONP; it’s just another answer to the same problem – one that, at the end of the day, you’re really better off handling server-side if you truly want to be secure.

7. It may be the right tool for the job.

I didn’t find anything that talked about how much CORS is used currently across the web.  I did find some information that leads me to believe a lot of mobile developers rely on it for apps.  I think as with most things – there’s a good set of use-cases for CORS, but that doesn’t mean it’s a silver bullet that will take the place of good ‘ol web architecture.

Check/Read Messages Exchange/Office365 Inbox with Powershell

Posted by & filed under Code, Development, Work.

We have a process by which notifications of new users in another system (users that need to get created in ours) are sent via email to a standalone inbox.  The new-user emails have to be read and entered into our system based on the information provided (name and email).  We get 10-20 a week, and the UI through which we enter these users is very slow, so it’s a big time waster.  I decided to waste even more time automating this so I NEVER HAVE TO DO IT AGAIN!

I decided PowerShell would be the natural choice.  There appear to be tools specifically for Exchange/Office365 (the Exchange Management Tools/Exchange Management Console) which have to be installed, so my goal was to avoid any of the cmdlets that would make things really easy but require those installs.

The steps that need to  happen are:

  • Check inbox for un-read messages matching a particular subject
  • Verify the email message is for a newly created user (there are also updates and removals, which are purely notifications) by checking the message body.
  • Parse out the email address and name from the email message body
  • Add the user to our system (using an in-house Python REST utility)
  • Archive the email into a specific folder for these requests.

Connect to the Inbox

In the code below, $s becomes your Exchange “Service” object.  The one dependency is having the Microsoft.Exchange.WebServices DLL available; you can download this, and then it can be loaded locally as a .NET assembly.

The other trick is the Exchange web-service endpoint for Office365

$inbox becomes the Inbox object itself.

[Reflection.Assembly]::LoadFile("C:\Program Files\Microsoft\Exchange\Web Services\1.2\Microsoft.Exchange.WebServices.dll")
$s = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2007_SP1)
$s.Credentials = New-Object Net.NetworkCredential('', 'P@$$Word', '')
$s.Url = new-object Uri("");
$inbox = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($s,[Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox)

Find Folder for Completed Requests

We do this now to set up the source and destination folders before we go play with messages.  Here we create a FolderView and a SearchFilter object to find the specific folder by name.  I’m assuming it’s returned and is the first item in the FolderView collection.

$fv = new-object Microsoft.Exchange.WebServices.Data.FolderView(20)
$fv.Traversal = "Deep"

$ffname = new-object Microsoft.Exchange.WebServices.Data.SearchFilter+ContainsSubstring([Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName,"Completed Items")

$folders = $s.findFolders([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::MsgFolderRoot,$ffname, $fv)
$completedfolder = $folders.Folders[0]

Search Inbox for Unread Request Messages

Next we’ll search the Inbox for unread messages matching the subject line.  I also need to check for text in the message body to confirm it’s an add and not an update-user request, but I found it difficult to do at this point; instead I’ll do it as I process each message.  I’m using a SearchFilterCollection; both SearchFilter and SearchFilterCollection have interesting syntax within PowerShell.

$iv = new-object Microsoft.Exchange.WebServices.Data.ItemView(50)
$inboxfilter = new-object Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection([Microsoft.Exchange.WebServices.Data.LogicalOperator]::And)
$ifisread = new-object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.EmailMessageSchema]::IsRead,$false)
$ifsub = new-object Microsoft.Exchange.WebServices.Data.SearchFilter+ContainsSubstring([Microsoft.Exchange.WebServices.Data.EmailMessageSchema]::Subject,"User Profile Sync")
# add both filters to the AND collection before searching
$inboxfilter.Add($ifisread)
$inboxfilter.Add($ifsub)
$msgs = $s.FindItems($inbox.Id, $inboxfilter, $iv)

Read and process emails

$msgs now contains all the unread messages matching our subject line.  Now we have to process them all.  There is a trick to getting certain properties of the message: we apply a PropertySet when loading each message.  I also include a user-defined function that parses text out of the email in a simple “Name: Value” format.  Based on the values parsed out of the email I call a Python script that makes the proper API call.  I’m also writing each user out to a CSV just for records.

$psPropertySet = new-object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties)
$psPropertySet.RequestedBodyType = [Microsoft.Exchange.WebServices.Data.BodyType]::Text;


function getEmailField($msg, $field){
	# match the value following "Field:" up to the next whitespace
	$pattern = "(?<=" + $field + ":\s)[^\s]+"
	return ([regex]::matches($msg, $pattern) | %{$_.value})
}

foreach ($msg in $msgs.Items)
{
	# load the full message (with the text body) using the PropertySet from above
	$msg.Load($psPropertySet)
	If ($msg.Subject -eq "User Profile Sync" -and $msg.Body.Text -match "is created"){
	# double-check the subject (left over), check message body for text indicating new user
		$body = $msg.Body.Text
		$name = (getEmailField -msg $body -field "FirstName")  + " " + (getEmailField -msg $body -field "LastName" )
		$email = getEmailField -msg $body -field "EMailID"
		($name + "," + $email ) >> ".\newusers.csv"
		# call python script, wait for it to complete
		$adduser = Start-Process 'python' -ArgumentList " $email ""$name"" $email" -wait -passthru
		do {start-sleep -Milliseconds 500}
		until ($adduser.HasExited)

		# mark as read, save the change, then move the message to the destination folder located earlier
		$msg.IsRead = $true
		$msg.Update([Microsoft.Exchange.WebServices.Data.ConflictResolutionMode]::AutoResolve)
		[void]$msg.Move($completedfolder.Id)
		write-host "Added $email to system, archived email"
	}
}


I can now schedule this locally, or set it up on a Windows machine somewhere, as long as I package it with our Python utility and the Exchange Web Services DLL.

Parsing Querystring Values in Fiddler Custom Rules

Posted by & filed under Code, Development, Work.


I’ve been hot-rodding Fiddler for some time now to make my life a lot easier when debugging my site tagging and analytics tracking requests.  You can customize just about anything within the sessions window, from the color, background, and custom columns to the request it actually makes.  I’d previously created a custom column called “EventType”, which read a value sent back in the response body by our own tracker when in “debug” mode.  We’re now switching over to Snowplow as our tracker technology, so I wanted to add the event type for Snowplow events.  Snowplow describes all the special sauce for decoding their tracker request URLs, which is what I used to parse out which event is being sent.  You could do a similar breakdown for Google Analytics, SiteCatalyst, etc.

Parsing the Querystring

Fiddler script is some weird amalgamation of .NET code and JavaScript, with a syntax like Classic ASP.  You can easily access anything within .NET; it will just look nothing like you’d expect…  I’d like to easily pull out querystring parameter values by name to determine the event type to show in the column.  We can use .NET’s HttpUtility.ParseQueryString; all we need to do is add the proper imports:


import System;
import System.Windows.Forms;
import Fiddler;
// add in system web
import System.Web;

Then we can use the method to parse the querystring. The Fiddler session object doesn’t provide us quick access to just the querystring, so we do have to parse it out of the URL.

//I like to pad the end with a "?" so split will always have a [1] index
// in the event there is no querystring present
var s_qs = (oSession.url + "?").split("?")[1];

var querystring = HttpUtility.ParseQueryString(s_qs);
// now we can easily .Get any querystring value by name
var s_event = querystring.Get("e");
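
The padding trick is easy to sanity-check outside Fiddler in plain JavaScript (the sample tracker URL below is made up):

```javascript
// Demonstrates why padding with "?" is safe: split always yields a [1]
// element, which is simply empty when no querystring is present.
function queryStringOf(url) {
  return (url + "?").split("?")[1];
}
```

With a querystring you get the parameter string back; without one you get "" instead of an undefined index.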

Snowplow Events

For Snowplow events, I decided to set up event types for the following:

  • Page View
  • Page Ping
  • Structured Events
    • Return the event “Action”
  • Unstructured Events
    • Return the event “name”


public static BindUIColumn("EventType", 60)
function FillEventTypeColumn(oSession: Session){
	//check for whatever the tracker request domain is before bothering to parse
	if (oSession.uriContains("[your tracker domain]")) {
		//I like to pad the end with a "?" so split will always have a [1] index
		// in the event there is no querystring present
		var s_qs = (oSession.url + "?").split("?")[1];

		var querystring = HttpUtility.ParseQueryString(s_qs);
		var s_event = querystring.Get("e");
		var s_eventtype = "";

		switch (s_event) {
			case "pv":
				s_eventtype = "PageView";
				break;
			case "pp":
				s_eventtype = "PagePing";
				break;
			case "se":
				// structured events: show the event "Action"
				var s_action = querystring.Get("se_ac");
				s_eventtype = s_action;
				break;
			case "ue":
				// unstructured events: show the event "name"
				var s_name = querystring.Get("ue_na");
				s_eventtype = s_name;
				break;
		}

		return s_eventtype;
	}
	return String.Empty;
}

The screenshot shows the snippet of the Web Sessions window with the Snowplow EventTypes populated among the other tracking requests:

Happy Hacking!

Connect Pentaho/Mondrian Schema Workbench to Amazon Redshift

Posted by & filed under Development, Java, Work.

The more you play with big data technologies, the more you end up circling back to the basic question “How does my end user get access/value?”.  Currently we’re using Amazon Redshift as a down-and-dirty query window on top of our raw data.  Recently, I’ve started trying to run Mondrian OLAP against Redshift.

After getting through the initial server setup against MySQL, I tried connecting to my Redshift cluster.  Since Mondrian supports PostgreSQL out of the box, it mostly works – but I ran into some key roadblocks where Redshift’s subtle differences from PostgreSQL cause failures.

When trying to use the Mondrian Schema Workbench, it connects successfully to the database using the connection manager, but in the JDBC explorer, and when I add a new Table to a cube, I only get a list of schemas and not the tables underneath them.  Looking at the log output, the error appeared to be:

org.postgresql.util.PSQLException: Unable to determine a value for MaxIndexKeys due to missing system catalog data.

You’ll notice in the extended logs it successfully grabs all of the schemas, but appears to die when getting metadata information from the “public” schema:

2013-08-12 13:30:55,473 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: initConnection
2013-08-12 13:30:55,780 DEBUG [mondrian.gui.JdbcMetaData] JDBC connection OPEN
2013-08-12 13:30:55,780 DEBUG [mondrian.gui.JdbcMetaData] Catalog name = hillsraw
2013-08-12 13:30:55,780 DEBUG [mondrian.gui.JdbcMetaData] Database Product Name: PostgreSQL
2013-08-12 13:30:55,780 DEBUG [mondrian.gui.JdbcMetaData] Database Product Version: 8.0.2
2013-08-12 13:30:55,781 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: initConnection - no error
2013-08-12 13:30:55,781 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: setAllSchemas
2013-08-12 13:30:55,887 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: setAllTables - information_schema
2013-08-12 13:30:55,887 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: Loading schema: 'information_schema'
2013-08-12 13:30:56,050 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: setAllTables - pg_catalog
2013-08-12 13:30:56,050 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: Loading schema: 'pg_catalog'
2013-08-12 13:30:56,211 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: setAllTables - pg_internal
2013-08-12 13:30:56,211 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: Loading schema: 'pg_internal'
2013-08-12 13:30:56,378 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: setAllTables - public
2013-08-12 13:30:56,378 DEBUG [mondrian.gui.JdbcMetaData] JdbcMetaData: Loading schema: 'public'
2013-08-12 13:30:56,668 ERROR [mondrian.gui.JdbcMetaData] setAllTables
org.postgresql.util.PSQLException: Unable to determine a value for MaxIndexKeys due to missing system catalog data.
	at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getMaxIndexKeys(
	at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(

Conveniently the error message points me directly to the PostgreSQL JDBC driver on GitHub.  Tracing back through the code, the error appears to be centered around this section of code.  From what I can guess, the first check of “haveMinimumServerVersion(“8.0″)” passes for Redshift, however the query to return the key information does not work in Redshift (it returns an empty result set).  The query contained within the block for “haveMinimumServerVersion(“7.3″)”, however, does work.  So although Redshift presents itself as a version 8 PostgreSQL, we find there are some oddities within it that don’t match your typical PostgreSQL (which should be no surprise).


To keep moving, I created a brute-force hack that forces all code paths into the “else” block – see my fork on GitHub.  I’ll file a bug, and if someone knows how to tell the difference between Redshift PostgreSQL and a normal version 8 PostgreSQL, let me know.

Here is the .jar file built from my source code; I tested it and it works against Redshift in Schema Workbench:

Update – 9/23/2015

I realized I never updated this post with relevant info from an issue I filed on GitHub:

Per the comments, Redshift doesn’t exactly play by the same standards as the Postgres spec/codebase.  I agree that it’s technically an Amazon problem – and perhaps their latest set of JDBC drivers for Redshift accounts for this.

Setup Mondrian on Mac OSX

Posted by & filed under Code, Java, Work.

The Mondrian documentation is terrible, and Java isn’t something you get to easily “dabble” in.  My goal was to set up Mondrian on my Mac OSX machine, starting with Mondrian 3.5.  While trying to set up the initial FoodMart sample dataset I ran into a bunch of Java errors, always something like:

Exception in thread "main" java.lang.NoClassDefFoundError: mondrian/test/loader/MondrianFoodMartLoader
Caused by: java.lang.ClassNotFoundException: mondrian.test.loader.MondrianFoodMartLoader

Based on various forum posts and other blog posts, I found this to be the most direct route to get it installed. This is all assuming you have your JDK up to date…

  1. Install Tomcat 7
  2. Download Mondrian (3.5.0 – I didn’t see the starter .sql script in the /demo folder, so I opted for 3.4.1 at first)
  3. DEPLOY THE MONDRIAN WEBAPP.  Once Tomcat is up, copy from the mondrian download /lib/mondrian.war to the “webapps” directory for tomcat.  If you follow the steps in the Tomcat install I linked to, this will be /Library/Tomcat/webapps.  This will unpack the app into /Library/Tomcat/webapps/mondrian

You should now be ready to run the java command to build your sample dataset.  The sample command from the Mondrian docs looks like:

$ java -cp "/mondrian/lib/mondrian.jar:/mondrian/lib/log4j.jar:/mondrian/lib/commons-logging.jar:/mondrian/lib/eigenbase-xom.jar:/mondrian/lib/eigenbase-resgen.jar:/mondrian/lib/eigenbase-properties.jar:/usr/local/mysql/mysql-connector-java-5.0.5-bin.jar"
     mondrian.test.loader.MondrianFoodMartLoader
     -verbose -tables -data -indexes

Horrific example, since it doesn’t say where you should be running this from, or where any of these .jar files should be…  Since you’ve deployed the web-app, we can make our lives easier by including the web-app’s /lib directory in the classpath.  NOTE: you may need to manually download your JDBC database driver; sticking it in the /lib for your mondrian web-app keeps things simple.   Also – note the “inputFile” parameter: this should be set to the location of the “FoodMartCreateData.sql” script, usually in the “demo” folder in the Mondrian download.  Just pay attention to where the file is, or where you run the script from, and you should be fine.

$ java -cp "/Library/Tomcat/webapps/mondrian/WEB-INF/lib/*"
     mondrian.test.loader.MondrianFoodMartLoader
     -verbose -tables -data -indexes

After this I followed the rest of the setup instructions (modification of the web.xml file and a few of the .jsp pages), and I was able to pivot away in the browser…

Delta Monitor shower faucet low water pressure

Posted by & filed under DIY.

Recently, my old plumbing decided to choke up a round of sediment, wreaking havoc on all the little holes in my plumbing fixtures.  Now the shower has very low water pressure, so this would be round two of clearing out the cartridge – but of course nothing went as planned.  The faucet is a standard Delta Monitor, whose old cartridge I’ve successfully replaced before.

But of course, this time it appeared that the set screw to remove the handle had been stripped…

I tried letting it soak in some WD-40 – difficult to do when it’s horizontal… none of this seemed to help and I just wanted it off.  My only choice was to start removing the set screw by other means:

  1. Try using some extractors, like this set
    • They specify a hole size for you to drill.
    • You’ll have to drill a bit deeper of a hole than the current set-screw socket.
    • These bite via a reverse thread; once embedded, they should turn the set screw counter-clockwise to remove it.

I was able to get them into the set screw, but they flexed pretty badly when torqued; this set screw was frozen. My only recourse at this point was to drill out the set screw.

  • Start smaller to see if you can get away with not ruining the handle.