The Perils of In-Memory Sessions

December 14, 2009

The “usual” implications of in-memory sessions are fairly well understood:

  • A user is tied to a machine, so if the machine dies, the session dies. (More precisely, a user is tied to the process serving the request, so if the process dies, the session dies with it – worth keeping in mind when using web gardening, where multiple worker processes serve a single IIS application pool.)
  • Load balancing may get skewed depending on how long a user’s session lives.

However, when it comes to high availability setups, in-memory sessions have more than these basic implications. For one, availability is not just protecting against a machine going down; it is also protecting against a user’s session going down, especially if the session involves multi-step transactions. If you are looking for 99.99% availability, you really do not want in-memory sessions.

Combine that with app pool recycles on your IIS servers. If a recycle happens while users are logged on to that machine, all of those users are presented with a logon dialog in the middle of their order confirmation and are going to be somewhat upset with you.
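The way out of both problems is to keep session state out of the worker process entirely, so a recycle or a machine failure does not take the session with it. Here is a minimal, illustrative sketch of that idea – the class name and the dict-backed store are my own stand-ins, not a real session-state API; in practice the backend would be a shared out-of-process store (a state server or a database) reachable by every machine in the farm:

```python
import json
import time
import uuid

class ExternalSessionStore:
    """Sessions kept out of process survive worker recycles.

    `backend` is any dict-like key-value store. The plain dict used
    below is purely illustrative; a real deployment would point this
    at a store shared across the farm.
    """

    def __init__(self, backend, ttl_seconds=1200):
        self.backend = backend
        self.ttl = ttl_seconds

    def create(self, data):
        session_id = uuid.uuid4().hex
        self.save(session_id, data)
        return session_id

    def save(self, session_id, data):
        # Serialize, so the session can cross process boundaries.
        self.backend[session_id] = json.dumps(
            {"expires": time.time() + self.ttl, "data": data})

    def load(self, session_id):
        raw = self.backend.get(session_id)
        if raw is None:
            return None
        record = json.loads(raw)
        if record["expires"] < time.time():
            return None  # expired; the caller re-authenticates
        return record["data"]

store = ExternalSessionStore(backend={})
sid = store.create({"user": "alice", "cart": ["item-1"]})
# Any other worker process sharing the same backend can now do:
print(store.load(sid))
```

Because the session round-trips through serialization, everything stored in it must be serializable – one of the trade-offs you accept for surviving a recycle.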

The other implication is how you upgrade high availability systems. Consider that I just want to make a simple web.config change to all my web servers. A change to web.config results in an app pool recycle, so any users who are logged on will get thrown out. So, if I do need to make this change, I need to do something like this:

  • Assuming you have a reasonable load balancer, configure it so that no new sessions are allowed into this server.
  • Twiddle thumbs until you see that there are no active requests on this server.
  • Make the web.config change
  • Re-configure your load balancer to allow traffic back into this machine.
  • Repeat for all the machines in your farm.

And so, a simple web.config change that should have taken just a couple of seconds to deploy suddenly takes a few minutes, or a few hours.
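The drain-and-update loop above is mechanical enough to automate. This is a sketch only: `lb` stands for a hypothetical load-balancer client with `disable()`, `enable()`, and `active_requests()` operations, and `apply_change` stands for whatever pushes the web.config edit to one server – none of these are a real product’s API:

```python
import time

def rolling_config_update(servers, lb, apply_change, poll_seconds=5):
    """Apply a config change one server at a time, draining each first.

    `lb` is a hypothetical load-balancer client; `apply_change`
    pushes the web.config edit to a single server. Both are
    illustrative stand-ins, not a real API.
    """
    for server in servers:
        lb.disable(server)               # stop new sessions reaching it
        while lb.active_requests(server) > 0:
            time.sleep(poll_seconds)     # twiddle thumbs until drained
        apply_change(server)             # triggers the app pool recycle
        lb.enable(server)                # let traffic back in
```

The time cost does not go away – each server still has to drain – but at least nobody has to babysit the farm, and the order of operations is guaranteed.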


Layers and Tiers

June 2, 2009

At times, people tend to use “layers” and “tiers” interchangeably when they talk of application architecture. These are really two very different things:

Layers are a means of logical separation, and are an architectural pattern to separate concerns. Typical layers are the storage / data layer, the data access layer, a business logic layer, and a presentation layer. When deploying an application, these layers might exist either on one machine, or on multiple machines.

When architecting the layers of an application, one must consider various factors:

  • What is this layer responsible for?
  • Will this layer translate to a tier at any point?
  • Are other applications expected to communicate with this layer?
  • What are the dependencies for this layer?
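The separation of concerns described above can be sketched as modules with strictly one-way dependencies – presentation knows only the business logic, which knows only data access. All the class and method names below are illustrative:

```python
class DataAccessLayer:
    """Talks to the storage layer; backed here by an in-memory dict
    purely for illustration."""
    def __init__(self):
        self._orders = {}

    def save_order(self, order_id, order):
        self._orders[order_id] = order

    def get_order(self, order_id):
        return self._orders.get(order_id)

class BusinessLogicLayer:
    """Enforces the rules; knows nothing about presentation."""
    def __init__(self, dal):
        self.dal = dal

    def place_order(self, order_id, items):
        if not items:
            raise ValueError("an order needs at least one item")
        self.dal.save_order(order_id, {"items": items, "status": "placed"})
        return order_id

class PresentationLayer:
    """Formats results for display; delegates all logic downward."""
    def __init__(self, bll):
        self.bll = bll

    def submit_order_page(self, order_id, items):
        self.bll.place_order(order_id, items)
        return f"Order {order_id} confirmed ({len(items)} items)"

app = PresentationLayer(BusinessLogicLayer(DataAccessLayer()))
print(app.submit_order_page("A-100", ["widget"]))
```

Note that the layering is enforced only by discipline here – nothing stops the presentation layer from reaching into the dict directly – which is exactly why the questions above matter when deciding whether a layer should eventually become a tier with a hard boundary.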

Tiers are the physical separation of an application. Typically, a regular web application will have a data tier which maps to the database, a business and presentation tier (potentially together) that map to the underlying business logic and web pages respectively, and a front-end tier that maps to the browser. There are scenarios wherein the business and presentation layers are split, so the business layer components reside on a separate node, and the web application itself resides on another node. Tiers necessarily imply layers, but not vice versa.

While layer decisions are driven primarily by separation of concerns, a tiering decision is driven differently. In fact, decisions around tiers need to be well thought out, especially since physical tiers may result in performance degradation due to network latency and marshalling/unmarshalling overheads. For the most part, having the data sit in a separate tier on a database server is fairly well accepted, but the decision of whether or not to separate web pages into a tier separate from business logic components is a question that is often debated. A few points that support this separation are:

  • If the web server is expected to serve up a large amount of static content, it is more feasible to scale out the web servers independent of the business logic processing, and hence keep them on separate tiers.
  • If the business logic requires a significant amount of processing, and this acts as a bottleneck for the rest of the web page processing, then it is more feasible to scale out the application servers independent of the web servers, and hence keep them on separate tiers.
  • If the functionality encapsulated by the business logic components is expected to be consumed by clients other than the presentation layer, then it is better to have these hosted on a separate tier, and front-end it with a web service / WCF facade.
  • If there is a need to have more security for the business functionality, then it is better to host the business logic components in a separate tier, and separate it from the web server using a firewall.

If, on the other hand, the intent is only to scale out, without a deep need to scale the business and presentation tiers independently, then keeping them in one tier is better. Consider the case when I have five machines, and I use all five to host both my presentation and business tiers. In comparison, if I use two of these for my presentation tier, and three for my business tier, then I still have the same compute power, but now have to contend with the additional overhead of marshalling/unmarshalling, and latency. I’m hence better off with the single tier in such a case.

All said though, while a physical separation improves scale-out capability and security, it does impose overheads due to the network. The design for tiers needs to take into account the communication between tiers – too chatty, and performance degrades rapidly. The choice of protocol is equally important – the protocol of communication should be the most lightweight, and be as native as possible – so if both the web pages and the business logic components are built on the .Net stack, then a NetTcp binding is more suitable than an HTTP-based binding.
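The “too chatty” point is worth making concrete. The sketch below models each cross-tier round trip as a fixed latency charge (the 2 ms figure is an arbitrary assumption, as are the function names) – the same hundred items cost a hundred latency charges when fetched one at a time, but only one when batched:

```python
def fetch_items_chatty(ids, fetch_one, latency_ms=2.0):
    """One round trip per item: N items cost N x latency."""
    cost = 0.0
    results = []
    for item_id in ids:
        cost += latency_ms          # every call pays the network tax
        results.append(fetch_one(item_id))
    return results, cost

def fetch_items_chunky(ids, fetch_many, latency_ms=2.0):
    """One batched round trip: N items cost ~1 x latency."""
    return fetch_many(ids), latency_ms

catalog = {i: f"item-{i}" for i in range(100)}
_, chatty_cost = fetch_items_chatty(list(catalog), catalog.__getitem__)
_, chunky_cost = fetch_items_chunky(
    list(catalog), lambda ids: [catalog[i] for i in ids])
print(chatty_cost, chunky_cost)  # 100x fewer round trips when batched
```

The ratio, not the absolute numbers, is the lesson: a chatty interface multiplies whatever latency the tier boundary imposes, so interfaces that cross tiers should be designed chunky from the start.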

One interesting point in this context is how WCF supports multiple bindings, as well as the capability to build new ones. A neat extension has been built in the form of the Null Transport channel. Using this, if I want to keep the two tiers together initially but intend to separate them later, I can use WCF with the null transport for communication. This gives me performance closest to an in-proc call, while keeping the ability to subsequently switch to a different binding if needed. That seems the best of both worlds.