Talking about Load balancing

I was doing some research about load balancing for a project after speaking to one of my WebSphere colleagues (I’m a closet WebSphere support guy and fan – if you need semaphores deleted and WAS/IHS restarted, I’m your man). While typing up a document about load balancing, I came across an article which, I have to say, puts load balancing across better than I could off the top of my head. Do check it out by clicking the URL below.

http://oit2.utk.edu/helpdesk/kb/entry/1699/

Selecting a load balancing method

Several different load balancing methods are available for you to choose from. If you are working with servers that differ significantly in processing speed and memory, you might want to use a method such as Ratio or Weighted Least Connections.
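To make the difference concrete, here is a minimal sketch (not any vendor’s real API – the server names, weights and connection counts are made up) of how a Weighted Least Connections method might pick a pool member: each server’s active connection count is divided by its capacity weight, and the lowest score wins, so a faster box with a higher weight can still receive traffic even when it already holds more connections.

```python
# Illustrative sketch of Weighted Least Connections selection.
# Servers, weights and connection counts here are hypothetical.

def pick_server(servers):
    """Pick the member with the lowest active-connections-to-weight ratio."""
    return min(servers, key=lambda s: s["active"] / s["weight"])

pool = [
    {"name": "web1", "weight": 4, "active": 12},  # faster box, higher weight
    {"name": "web2", "weight": 1, "active": 4},   # slower box, lower weight
]

# Plain least connections would pick web2 (4 < 12), but weighting the
# counts picks web1, because 12/4 = 3.0 beats web2's 4/1 = 4.0.
print(pick_server(pool)["name"])  # web1
```

Plain Least Connections would send the next user to web2 here; the weighting is what lets a mixed pool of fast and slow servers balance sensibly.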

Important: Load balancing calculations can be localized to each pool (member-based calculation) or applied across all pools of which a server is a member (node-based calculation).

One thing to consider when you’re looking at implementing load balancing is what you are seeking to achieve:

  • Application compatibility – there could be application session state considerations, which is about how the application handles data within the application and the infrastructure itself. Not all applications are able to recognize or ‘cache’ session data, so if you hit web server 1, start a session, then lose the connection and try again, and the load balancer diverts you to web server 2, the application might not be able to pick up where you left off, meaning you have to log in again. In this case, do you want load balancing, or just the functionality to balance load (take down a web server for maintenance or implement disaster recovery seamlessly)?
  • Load balancing type – “automatic” or “the default” is the most infamous answer people will give you, but what if the default is none? If there is some logic involved, how do we want it configured, and are we integrating it with the application? Just because the monitoring page is up doesn’t mean the back-end infrastructure is up, so when the user presses submit it actually does what it’s supposed to.
  • Operations considerations – are we running a live/live configuration? Or do we want the site to run on only one set of servers – for example, production – because our web support team know which bits to check should a problem arise, or because of performance/latency issues?
  • Certificates – are you hosting the application certificates on the web servers themselves or on the load balancers, and how might that change the support procedures and certificate management process?
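On the monitoring point above: a bare “is the page up?” probe tells the load balancer nothing about whether the back end can actually process a submit. A sketch of a deeper health check, with hypothetical stand-in functions for the real dependency checks, might look like this:

```python
# Hedged sketch of a "deep" health check a load balancer monitor could poll.
# check_database / check_auth_service are hypothetical stand-ins for real
# dependency probes (e.g. a SELECT 1 against the database).

def check_database():
    return True   # stand-in: would exercise the real database connection

def check_auth_service():
    return True   # stand-in: would ping the authentication back end

def health_status():
    """Return (http_status, per-dependency detail) for the monitor."""
    checks = {"database": check_database(), "auth": check_auth_service()}
    healthy = all(checks.values())
    # 503 takes the server out of the pool even though the web tier
    # itself is still answering requests.
    return (200 if healthy else 503), checks

status, detail = health_status()
print(status, detail)
```

The design choice is that the health endpoint fails (and the member is pulled from the pool) when any dependency the user’s submit needs is down, not just when the web tier stops answering.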

Quick reference guide:

  • Round robin – user traffic is diverted to the next server in the pool, so in a two-web-server scenario users are alternated between the two. The analogy I use is: user one hits web server one, user two hits server two, user three hits server one – though the real logic is a bit more complicated.
  • Load balanced – two types that I’ve primarily played with. The first is based on priority: users always hit the production web server until it stops responding for a certain period, after which traffic switches to the backup web server. The other is more intuitive and can be integrated with performance or monitoring tools: traffic is distributed between web servers based on load, end-user performance and so on.
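The two behaviours in the quick reference can be sketched in a few lines (server names and helper functions here are hypothetical, not any product’s API): priority always returns the production server while it responds and falls back to the standby only on failure, while round robin simply cycles through the pool.

```python
import itertools

# Priority (failover): production serves everything until a health
# check says it is down, then the backup takes over.
def priority_pick(primary_up):
    return "web-prod" if primary_up else "web-backup"

# Round robin: each new user just gets the next server in the cycle.
rr = itertools.cycle(["web1", "web2"])

print(priority_pick(True))                # web-prod while production responds
print(priority_pick(False))               # web-backup after a failed check
print([next(rr) for _ in range(4)])       # ['web1', 'web2', 'web1', 'web2']
```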

The key thing is to understand your business and application requirements – and not to be afraid to use load balancing. One of my colleagues mentioned that they’d deployed it in their pharmaceutical business for a web application that didn’t yet support it, because they wanted the ability to control web traffic, and in the next version of the application code the developers were seeking to capitalize on the functionality.
