Using SlowCheetah to Transform Any Config File Based on Publish Profiles (and Save Us from Config Hell)

We have recently been working on a build and deployment refactoring of a SaaS product that contained a large number of config files. Changing settings was a nightmare, and working out which config files were actually relevant was difficult and error prone. What contributed to the config bloat was the approach the previous developers used to manage configuration for each build/deployment target environment: each config was duplicated and modified to suit the target, then Post Build Events were used to copy the files to the active location. Naturally this resulted in a lot of redundant config files and a great deal of confusion. In fact, there were over 200 connection strings in the solution which, when rationalized, boiled down to 12 unique connections. Obviously we had to clean this up.

Config Transforms to the Rescue

To address “config hell” I wanted to use Config Transforms, a really useful technology MS first introduced in VS2010 that applies a delta to a base config file (there’s a small example of the transform syntax after the list below). However, two limitations of Config Transforms bothered me:

  1. Config Transforms only apply to the Web.config – This was a big issue for us, as we have Windows services and console jobs with App.config files.
  2. Config Transforms apply the delta based on Build Configuration (e.g. Debug, Release). Although we could have worked with that, what we really wanted was a delta applied based on deployment target.
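
For anyone who hasn’t bumped into the transform syntax, here’s a minimal example (the connection name and servers are invented). A Web.Release.config like this is applied as a delta over the base Web.config when building/publishing with the Release configuration:

    <?xml version="1.0"?>
    <!-- Web.Release.config: a delta applied over the base Web.config -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="MainDb"
             connectionString="Server=ProdSql;Database=Main;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>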

The Chili Peppers Save Our A$$

Turns out there is a guy at MS named Sayed Hashimi who, among others, recognized the first limitation and has written a Visual Studio extension called SlowCheetah (apparently he’s a Chili Peppers fan). SlowCheetah extends VS to support transforming any config file, not just the Web.config. Sweet! You can find more details here:

http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5

Okay, so what about limitation 2?

Publish Profiles

It turns out MS recognized the second limitation as well and implemented a new feature in Visual Studio 2012 that supports transforming a Web.config based on Publish Profiles (e.g. a Web.Staging.config applied when publishing with a “Staging” profile). This means we can apply a delta to the config files based on our deployment target. Scott Hanselman blogs about it here:

http://www.hanselman.com/blog/TinyHappyFeatures3PublishingImprovementsChainedConfigTransformsAndDeployingASPNETAppsFromTheCommandLine.aspx

We already had an upgrade to VS2012 on the road map, so we just pushed it forward a little. So now we have SlowCheetah (which supports VS2012) and our solution converted to VS2012, and all is right with the world, right? Almost…

SlowCheetah, Visual Studio 2012 and Config Transforms

We started experimenting with this combination and ran into a few hiccups, one of which is documented here:

http://stackoverflow.com/questions/13037714/can-partial-config-files-linked-to-a-web-config-via-configsource-be-transformed

After a little tweaking and some polite suggestions from Sayed, the author of SlowCheetah, we had Config Transforms operating on a VS2012 test project: we could transform any config file based on Build Configuration, but unfortunately only the Web.config was transforming based on Publish Profiles. I had wondered whether this was supported by SlowCheetah but couldn’t find a definitive answer on any of the blogs. Thankfully, Sayed appears to be a workaholic and responded to an email saying that SlowCheetah didn’t support Publish Profiles, but that he’d implement the functionality over the weekend if we’d help out by testing it. The issue is documented here:

https://github.com/sayedihashimi/slow-cheetah/issues/46

After a little back and forth we installed the updated VS extension and found the transforms were now executed based on Publish Profile for any config file, including chaining Build Configuration based transforms as well.

So there you have it: vastly simplified config settings that are transformed based on deployment target, thanks to Sayed being so willing to help out and extend SlowCheetah so quickly. I’ll write another post on how we centralized the config transforms in the near future.

Oh, and a link to the Chili Peppers’ song if anyone is interested:

Safe Cross Browser console.log in a Couple of Lines

I wanted a nice simple wrapper around console.log so that I could use it throughout my code without worrying about it choking in a particular browser. IE, for example, will only successfully execute console.log if you have the developer tools open. There are lots of suggestions floating around, and there are also complex logging libraries, like one I’ve used before with a log4j/log4net style implementation (http://log4javascript.org/). However, I just wanted something simple and lightweight. Here is what I managed to put together based on various suggestions, including a little from the YUI guys:

   var debugMode = true;

   var log = function (msg) {
       // window.console is checked before console.log so browsers without
       // a console (e.g. IE with dev tools closed) short-circuit safely.
       if (debugMode && window.console && console.log) {
           console.log(msg);
       }
   };

Nice and simple. I just instrument our JS code with calls to log(“stuff”) and don’t worry about anything else. We can disable logging by flipping the debugMode flag, and it will be easy to enhance the implementation later on if need be. One thing I do miss from more complex logging implementations is a built-in error threshold, above which the log info is sent to the server via an AJAX call to be logged using, say, log4net or NLog. I’ll probably add that at some point; having a single repository for at least all the critical log information is incredibly useful.
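
Here’s a rough sketch of what that enhancement might look like, building on the snippet above. The levels, the threshold and the /api/clientlog endpoint are all invented for illustration:

   var LogLevel = { DEBUG: 0, INFO: 1, WARN: 2, ERROR: 3 };
   var remoteThreshold = LogLevel.ERROR;

   var log = function (msg, level) {
       level = (level === undefined) ? LogLevel.DEBUG : level;

       if (debugMode && window.console && console.log) {
           console.log(msg);
       }

       // Ship anything at or above the threshold to the server, where it
       // could be persisted with log4net or NLog.
       if (level >= remoteThreshold && window.XMLHttpRequest) {
           var xhr = new XMLHttpRequest();
           xhr.open("POST", "/api/clientlog", true); // hypothetical endpoint
           xhr.setRequestHeader("Content-Type", "application/json");
           xhr.send(JSON.stringify({ level: level, message: msg }));
       }
   };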

How to Determine if you are in an IE WebBrowser Control in Javascript

As I’ve mentioned in previous posts, our API includes methods that display UI, and we provide both a Windows DLL proxy that uses the IE WebBrowser control to display UI and a JS proxy that uses a lightbox and an iframe to do the same. Unfortunately, the mechanism used to return data from each of the dialog implementations is different. In the WebBrowser control we use the external property of the window object, and in JS we simply pass the data to the parent, since we live in an iframe. To perform the right action depending on where the page/JS is hosted, we need to be able to determine whether we’re running in a browser control or an actual browser.

The simple solution was to implement the javascript function below:

    function isInBrowserControl() {
        // The host app assigns an object to the WebBrowser control's External
        // property; a normal browser is very unlikely to expose this.
        return !!(window.external && "MyExternalProperty" in window.external);
    }

MyExternalProperty is one of the properties of the object assigned to the External property of the WebBrowser control instance. In a normal browser, this property is unlikely to ever exist.
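
As a rough usage sketch (returnData and the partner origin below are placeholders, not our real API):

    function returnDialogResult(result) {
        if (isInBrowserControl()) {
            // Hand the data to the host app via the external object.
            window.external.MyExternalProperty.returnData(JSON.stringify(result));
        } else {
            // We're in an iframe in a real browser: post to the parent page.
            parent.postMessage(JSON.stringify(result), "https://partner.example.com");
        }
    }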

Trade Study – Eh?

So I recently bumped into a systems engineering concept I had never heard of, namely “Trade Studies” or “Trade-off Studies”. I’m far from all-knowing, and I’ll humbly admit there are zillions of things I don’t know about software engineering; however, I seriously had just never bumped into this one before. I struggled to intuit what it could mean from the name. If only it had been called a “Trade-off Study”, which really seems a better name in my opinion, I might have had some clue.

So what is it?

Well, it turns out it’s a process by which one selects, from a series of viable solutions, the most balanced solution, based on a bunch of functions which encapsulate the variables representing measures of the system.

Okay, that’s my first attempt… err… okay, I sort of get it. It helps you pick the best solution to satisfy your goals. However, the tricky bit, I would think, is how you distill those goals down into a series of measures or functions.

Here is the wiki page for it: http://en.wikipedia.org/wiki/Trade_study
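
As a toy illustration of the weighted-sum flavor you’ll commonly see (criteria, weights and scores all invented):

    // Toy weighted-sum trade study: score each viable solution against
    // weighted measures and pick the highest total.
    var criteria = [
        { name: "cost",            weight: 0.40 },
        { name: "maintainability", weight: 0.35 },
        { name: "performance",     weight: 0.25 }
    ];

    // Scores out of 10 for each candidate architecture.
    var candidates = {
        "Layered monolith":  { cost: 8, maintainability: 6, performance: 7 },
        "Message-based SOA": { cost: 5, maintainability: 8, performance: 8 }
    };

    for (var name in candidates) {
        var total = 0;
        for (var i = 0; i < criteria.length; i++) {
            total += criteria[i].weight * candidates[name][criteria[i].name];
        }
        console.log(name + ": " + total.toFixed(2));
        // => Layered monolith: 7.05, Message-based SOA: 6.80
    }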

There seem to be different approaches to performing Trade Studies depending on the level of uncertainty in your variables. Honestly, I like the concept, because software architecture is often about choosing from many viable options; it’s an exercise in compromise/trade-offs. What combination of patterns will give me the system we need, within the cost required, that will also be maintainable longer term? I must admit, the Trade Studies concept reminds me of a bunch of people sitting around in a windowless room, at tables arranged in a square all facing each other, discussing how we haven’t yet followed Process C3 outlined in the decision matrix defined in Policy R2. Okay, I’m probably being unfair…

If I read between the lines, I think the context I read the term in probably just meant that you’ve considered all the options, weighed the pros and cons of each, and arrived at the best solution you know of right now.

SOLID Principles – These are things to value

You may have bumped into the idea of SOLID as it applies to object oriented design. So here is a cheat sheet for the SOLID principles (using my soon-to-be-ratified “stuff” specification language):

Single Responsibility Principle – Stuff should have only one reason to change

Open/Closed Principle – You can add stuff, but you can’t modify stuff that’s already there

Liskov Substitution Principle – You can add stuff in stuff derived from stuff but you shouldn’t modify the original stuff. It’s OCP applied to derived objects.

Interface Segregation Principle – Interfaces should be broken up into groups of similar stuff

Dependency Inversion Principle – Stuff shouldn’t depend on specific stuff, but on abstract stuff

I’m sure the OO Gods would strike me down… So a little more detail on the interesting, errr, stuff…

DIP

Inverting dependencies? What does that mean? It means that we add an abstraction layer between a higher-level class and a lower-level class. By doing that we reduce coupling and allow an object’s behavior to evolve over time. When I first read about this many moons ago I thought the abstraction meant a layer of indirection, like another class, but of course not; it’s usually implemented as an interface. So if ClientService depends on ClientRepository, instead of directly referencing ClientRepository it should reference an abstraction of the repository, an IClientRepository, or perhaps an IRepository. That way, if you need to evolve the ClientService over time you can implement the abstract repository differently and provide it to the ClientService, thereby evolving its functionality without modifying its code. To me, DIP is like OCP for composition: the composition is introduced via the dependency, and the dependency uses an abstraction (which is a key approach to following the OCP), enabling the object to be extended without modifying its original code.
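
Keeping with this blog’s JavaScript bent, here’s a duck-typed sketch of the idea (JS has no interfaces, so the “abstraction” is just an agreed-upon shape; both repository implementations are invented):

    // Any object with a getById function can play the IClientRepository role.
    function SqlClientRepository() {
        this.getById = function (id) {
            // ...talk to SQL Server here...
            return { id: id, name: "Client " + id + " (from SQL)" };
        };
    }

    function InMemoryClientRepository(clients) {
        this.getById = function (id) {
            return clients[id];
        };
    }

    // ClientService depends on the abstraction, not a concrete repository.
    function ClientService(repository) {
        this.describeClient = function (id) {
            return "Client: " + repository.getById(id).name;
        };
    }

    // Swap the dependency to evolve behavior without touching ClientService.
    var service = new ClientService(new InMemoryClientRepository({ 1: { id: 1, name: "Acme" } }));
    service.describeClient(1); // "Client: Acme"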

This also ties in with another principle/guideline I’ve seen thrown around for a while: “encapsulate what varies”. Although we think of encapsulation as “data hiding”, what it really means is “anything hiding”. (I should reference the book I read that in, but I’ve long since forgotten what it was; just know I didn’t make that term up.) So “encapsulate what varies” can actually mean “hide anything that varies”. And how do we do that? By hiding the concrete, specific implementation behind an abstraction. Sounds like the DIP to me.

You can say the same thing about the OCP. By implementing a class as an abstract base class, and deriving different implementations from it, we are effectively hiding a variance behind the abstract class. Again, OCP and DIP are closely linked.

SRP

SRP is part of the reason we see the Repository pattern and model objects. The model should be persistence ignorant: if it knew how to persist itself it would need to know about the specifics of your persistence mechanism (SQL Server, MongoDB, etc.) in addition to knowing about the model, storing data, validating data, etc., and thus violate SRP. Using a repository means we push knowledge of persistence into the repository’s implementation, allowing the model to be responsible for one thing; in other words, it has only one reason to change. The Repository also follows SRP in that it too has only one reason to change, i.e. if the persistence mechanism changes.
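
A compressed sketch of that split (JavaScript again, with an invented db helper standing in for the persistence mechanism):

    // Violates SRP: the model knows its data AND how to persist itself,
    // so it has (at least) two reasons to change.
    function Client(name) {
        this.name = name;
        this.save = function () { /* SQL Server details leak in here */ };
    }

    // Better: the model owns data and validation; the repository owns persistence.
    function ClientModel(name) {
        this.name = name;
        this.isValid = function () { return !!this.name; };
    }

    function ClientRepository(db) {
        // Only one reason to change: the persistence mechanism.
        this.save = function (client) {
            db.insert("Clients", { name: client.name });
        };
    }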

In some ways, SRP can feel a little odd. We are used to objects being data plus behavior, and we are used to grouping functionality by entity rather than building objects based on responsibilities.

Facade vs Adapter Pattern

I don’t know why, but I always forget the differences between these patterns. So here is a nice simple description:

Facade

Hide something complicated behind something simple.

Automotive Analogy:

Turn ignition key =

      1. Check security code in key
      2. Flip switch to open current to starter motor
      3. Starter motor cranks engine
      4. Inject fuel into combustion chamber
      5. Fire spark plugs
      6. … and so on

In other words, the complex process of starting a car is simplified by a Facade: the ignition.
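
In code the same analogy might look something like this (all the subsystem objects are invented for illustration):

    // Facade: turnKey() hides the whole start-up sequence behind one call.
    function IgnitionFacade(security, starter, fuelSystem, sparkPlugs) {
        this.turnKey = function (key) {
            if (!security.checkCode(key)) {
                return false;
            }
            starter.crankEngine();
            fuelSystem.injectFuel();
            sparkPlugs.fire();
            return true;
        };
    }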

Adapter Pattern

Fit a square peg in a round hole. There is not necessarily any simplification; it’s just about making two incompatible interfaces compatible.

Commonplace analogy:

A power plug pin adapter: http://www.examiner.com/images/blog/wysiwyg/image/E105bkweb.jpg
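
And a matching sketch (the socket and appliance shapes are invented); note the adapter doesn’t simplify anything, it just translates one interface into another:

    // The "round hole": a European socket with its own interface.
    function EuropeanSocket() {
        this.supplyPower230v = function () { return "230V"; };
    }

    // The adapter exposes the interface the "square peg" appliance expects,
    // delegating to the incompatible adaptee underneath.
    function EuToUsSocketAdapter(euSocket) {
        this.supplyPower110v = function () {
            return euSocket.supplyPower230v(); // (pretend it converts, too)
        };
    }

    var socket = new EuToUsSocketAdapter(new EuropeanSocket());
    socket.supplyPower110v();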

Cross Domain API Using postMessage – Displaying UI and Returning Data

I promised in a previous post that I would cover how we implemented displaying UI as part of our API, and how we managed cross domain issues in this scenario. Our API is possibly a little unusual in that, not only do we have API methods that take parameters and return data just as you would expect, we also have methods that display UI. We took this approach for a couple of reasons. First, we wanted to be able to evolve the UI as needed, given there are situations where legally we need to ask for certain pieces of information. Secondly, we wanted to make consuming our API easy and simple for our partners, and avoiding the requirement for them to implement the UI themselves seemed like a good way to do so.

I already described how we allowed for API calls across domains, but the approach we used for displaying UI is interesting too. Honestly, it’s not so much the UI display that’s interesting as how we return data from the UI. The UI methods also return data from that UI; for example, if a method is creating some kind of business entity, we will return an object (serialized as JSON) after the user commits the change and closes the dialog. The challenge here is that, because our pages come from a different domain than the API consumer, the usual methods used to display pages won’t allow data to be returned. For example, window.open, or rendering our pages in an iframe in a lightbox like a jQuery UI dialog, will work just fine for displaying and interacting with our UI, but any attempt to return data will be blocked by the browser’s same origin policy.

The Options

To avoid bumping into the same origin policy we looked at the following options:

  1. Lightbox and appending the returned markup directly to the partner’s DOM
  2. window.open and using postMessage to pass data across domains
  3. Lightbox containing an iframe and using postMessage

Option 1 is difficult because our pages are all self-contained, complete HTML pages, including script references, etc. Conceivably we could return markup for the dialog only, pass it back from our domain via postMessage and attach it to the partner’s DOM, but it’s very messy and there is also the question of the script includes required by our pages. This approach might work for some very simple markup but seemed unworkable in our case.

Options 2 and 3 are very similar, the only difference being whether our pages are hosted in a popup window or in an iframe in a lightbox (essentially a DIV). The lightbox is a little more elegant, should never run afoul of popup blockers, and is easy to communicate with via postMessage. However, window.open in option 2 has the advantage that it will work from within the hidden iframe representing our domain. Which leads to the next question: where should the dialog be opened from? That is, from which domain?

Master of our own Domain?

I found it useful to think of there being a dividing line between the partner’s domain and ours in the browser DOM. Our domain exists solely in the iframe we attach to the partner’s page; the rest of the DOM is the partner’s domain. (See the diagram below if you’re a more visual person.) On the partner’s side exists the one artifact we provide to them that they must physically attach to their DOM: a JS file. This file, as I’ve mentioned in the previous post, contains, if you like, the public API the partner invokes. On the other side is our domain, where we can act without worrying about complications like cross domain restrictions. Here we attach our own JS file, which contains the “private” implementation of our API; this is where we can implement much of the complexity and hide it from the consumer. So the question remains: where should we launch the dialog from? Initially I experimented with option 2, window.open from the iframe hosting our domain. So when calling a UI method the following would have to occur:

  1. Invoke the method in the partner domain
  2. postMessage is called which passes the parameters and the method context to our domain
  3. Code in our domain launches the dialog

The problem with this approach is that, because launching the dialog is divorced from the click event, the browser’s popup blocker will activate and the dialog will be blocked. We couldn’t use a lightbox from here because our iframe was hidden. Our requirements did not allow for asking users to disable popup blockers (and besides, Safari’s support for site specific popup blocker settings is woeful), so we needed to find another way. Launching the dialog in the partner domain would solve the problem, but it introduces a lot of complexity, since we would then have two separate iframes/windows hosting our domain attached to the partner’s DOM. We would no longer have nice simple API boundaries; we’d essentially have two isolated “islands” of our domain that cannot easily communicate.

The Final Solution

The solution we ran with was to perform all operations within the one iframe hosting our domain, by making the iframe transparent, overlaying the entire partner page, and making it visible when executing a UI method. We were now free to launch the dialog as a lightbox rather than via window.open, which neatly avoids popup blockers. As an added bonus, we can now use jQuery UI to display the lightbox, since we include it in pages in our domain. The end result is a simple and clean API boundary: the partner domain hosts a very simple, thin wrapper proxy for our API methods that just hides the postMessage calls, and our domain hosts the private implementation of our API proxy that hides the AJAX calls and UI display. Returning data from the UI is not a problem, since it’s launched from an iframe in our domain. Once the data is returned and the UI dialog closed, we use standard postMessage functionality to return the data to the partner’s domain.
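
A stripped-down sketch of that return path (the origins, message shape and function names are placeholders):

    // Inside our transparent iframe (our domain): the dialog has closed,
    // so return its result to the partner page.
    function returnResultToPartner(result) {
        var message = JSON.stringify({ type: "uiResult", data: result });
        parent.postMessage(message, "https://partner.example.com");
    }

    // In the partner-side proxy script: listen for results from our domain.
    window.addEventListener("message", function (e) {
        if (e.origin !== "https://api.ourdomain.example.com") {
            return; // ignore messages from anyone but us
        }
        var payload = JSON.parse(e.data);
        if (payload.type === "uiResult") {
            // hand payload.data to the partner's registered callback...
        }
    }, false);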

[Diagram: Cross Domain Javascript API – supporting UI display using a transparent iframe and passing data using postMessage]

Determining Scheme and Hostname of Current Script Include

As part of my efforts to provide an easy-to-integrate HTTP API, I needed to know the scheme and hostname of our server (for origin validation). Since I want our partners to be able to just include a single JS file (hosted on our server) in their page, I couldn’t render a variable using a server side tech like ASP.Net MVC. I considered a few options, such as serving up an ASPX page that is really just a container for javascript. I didn’t like this idea for a few reasons, the primary one being that support for debugging javascript in a page like this is spotty, and syntax highlighting and intellisense are lost.

So for a while we simply had a hard coded placeholder in the JS file that we would have to change manually depending on where the code was deployed. The plan was to include something in our under-development publish process that would automatically update the placeholder.

Then I realized the answer was obvious: just add a little javascript to parse the scheme and hostname off the src attribute of the script element in the partner’s page, like so:

function determineHostnameScheme() {
   var scripts = document.getElementsByTagName('script');

   for (var i = 0; i < scripts.length; i++) {
      if (scripts[i].src.indexOf("NameOfOurIncludeFileInPartnerPage.js") > 0) {
         var aUrl = parseUrl(scripts[i].src);
         // Note: IE bug, SSL requests result in the standard SSL port being
         // included in 'host'; other browsers don't include it. It's okay to
         // use 'hostname' here since we run on a standard port. This will
         // need to be modified if we ever host on a non-standard port.
         return aUrl.protocol + "//" + aUrl.hostname;
      }
   }

   // getElementsByTagName returns an empty list rather than null, so the
   // real failure case is our include not being found at all.
   throw "Error - Script element not found on page. Unable to set base hostname and scheme";
}

Note: I’m using a helper function I blogged about previously, parseUrl, which creates an anchor object in order to parse a URL and give us access to each of its components.

There is a bug in IE where the host property of the anchor object includes the port when using SSL. Firefox, Chrome and Safari do not include it.

Parsing a URL – Javascript

A handy snippet of javascript I added to our code base recently was one that parses a URL by creating an anchor object. The anchor object has properties such as protocol, host, hostname etc.

MSDN – anchor object

Without jQuery:

    function parseUrl(url) {
        // Setting href makes the browser parse the URL into its components.
        var a = document.createElement('a');
        if (url) {
            a.href = url;
        }
        return a;
    }

Using jQuery:

   function parseUrl(url) {
      var a = $("<a />");
      a.attr('href', url);
      var anchor = a.get(0); // Grab the actual anchor object, rather than the jQuery wrapper.
      return anchor;
   }

This is a much cleaner way of parsing a URL versus using regular expressions or standard string parsing.
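
For example (values as returned by modern browsers; older IE has quirks, such as the host/SSL port bug noted in the post above):

    var parts = parseUrl("https://www.example.com:8080/some/path?q=1#section");

    parts.protocol;  // "https:"
    parts.hostname;  // "www.example.com"
    parts.port;      // "8080"
    parts.pathname;  // "/some/path"
    parts.search;    // "?q=1"
    parts.hash;      // "#section"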

HTTPS in ASP.Net MVC – RequireHttps or is there a better way?

A common solution to implementing HTTPS in ASP.Net MVC is to decorate your controllers or action methods with the RequireHttps attribute. This is fine if you are happy with a redirect to HTTPS when a user types HTTP; however, it does open you up to some of the attack vectors described here:

http://www.troyhunt.com/2011/11/owasp-top-10-for-net-developers-part-9.html

The drawback of not automatically redirecting is that the user must explicitly type HTTPS into the address bar. Since I’m designing an API for consumption by other developers, and a user should NEVER be typing the address into the browser directly, I don’t have to worry about that usability issue.

Instead, I used a controller base class I already had, with an OnActionExecuting override, and added just two lines of code:

if (!filterContext.HttpContext.Request.IsSecureConnection) {
    // Return an error view directly - no redirect to exploit.
    filterContext.Result = new ViewResult { ViewName = "SSLError" };
}

I have a stripped down view in the Shared folder that displays a very simple error message, something like “This site can only be accessed via HTTPS”. That’s it. No redirect for Dr Evil to take advantage of; just some HTML returned in the response.

If you don’t want to use a controller base class you can also implement this code in a custom ActionFilter attribute.