Server load balancing architectures, Part 2: Application-level load balancing

source:http://www.javaworld.com/javaworld/jw-10-2008/jw-10-load-balancing-2.html

The transport-level server load balancing architectures described in the first half of this article are more than adequate for many Web sites, but more complex and dynamic sites can't depend on them. Applications that rely on cache or session data must be able to handle a sequence of requests from the same client accurately and efficiently, without failing. In this follow-up to his introduction to server load balancing, Gregor Roth discusses various application-level load balancing architectures, helping you decide which one will best meet the business requirements of your Web site.

The first half of this article describes transport-level server load balancing solutions, such as TCP/IP-based load balancers, and analyzes their benefits and disadvantages. Load balancing on the TCP/IP level spreads incoming TCP connections over the real servers in a server farm. It is sufficient in most cases, especially for static Web sites. However, support for dynamic Web sites often requires higher-level load balancing techniques. For instance, if the server-side application must deal with caching or application session data, effective support for client affinity becomes an important consideration. Here in Part 2, I'll discuss techniques for implementing server load balancing at the application level to address the needs of many dynamic Web sites.

Intermediate server load balancers

In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer.

Figure 1. Load balancing on transport and application levels

The application-level load balancer appears to the transport-level load balancer as a normal server. Incoming TCP connections are forwarded to the application-level load balancer. When it retrieves an application-level request, it determines the target server on the basis of the application-level data and forwards the request to that server.

Listing 1 shows an application-level load balancer that uses an HTTP request parameter to decide which back-end server to use. In contrast to the transport-level load balancer, it makes the routing decision based on an application-level HTTP request, and the unit of forwarding is an HTTP request. Similarly to the memcached approach I discussed in Part 1, this solution uses a "hash key"-based partitioning algorithm to determine the server to use. Often, attributes such as user ID or session ID are used as the partitioning key. As a result, the same server instance always handles the same user. The user's client is affine, or "sticky," to the server. For this reason the server can make use of the local HTTP response cache I discussed in Part 1.

Listing 1. Intermediate application-level load balancer

class LoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
    private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
    private HttpClient httpClient;

    /*
     * this class does not implement server monitoring or healthiness checks
     */

    public LoadBalancerHandler(InetSocketAddress... srvs) {
        servers.addAll(Arrays.asList(srvs));
    }

    public void onInit() {
        httpClient = new HttpClient();
        httpClient.setAutoHandleCookies(false);
    }

    public void onDestroy() throws IOException {
        httpClient.close();
    }

    public void onRequest(final IHttpExchange exchange) throws IOException {
        IHttpRequest request = exchange.getRequest();

        // determine the business server based on the id's hashcode
        Integer customerId = request.getRequiredIntParameter("id");
        int idx = customerId.hashCode() % servers.size();
        if (idx < 0) {
            idx *= -1;
        }

        // retrieve the business server address and update the Request-URL of the request
        InetSocketAddress server = servers.get(idx);
        URL url = request.getRequestUrl();
        URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
        request.setRequestUrl(newUrl);

        // proxy header handling (remove hop-by-hop headers, ...)
        // ...

        // create a response handler to forward the response to the caller
        IHttpResponseHandler respHdl = new IHttpResponseHandler() {

            @Execution(Execution.NONTHREADED)
            public void onResponse(IHttpResponse response) throws IOException {
                exchange.send(response);
            }

            @Execution(Execution.NONTHREADED)
            public void onException(IOException ioe) throws IOException {
                exchange.sendError(ioe);
            }
        };

        // forward the request asynchronously by passing over the response handler
        httpClient.send(request, respHdl);
    }
}


class LoadBalancer {

    public static void main(String[] args) throws Exception {
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030) };
        HttpServer loadBalancer = new HttpServer(8080, new LoadBalancerHandler(srvs));
        loadBalancer.run();
    }
}

In Listing 1, the LoadBalancerHandler reads the HTTP id request parameter and computes its hash code. Going beyond this simple example, in some cases load balancers must read (part of) the HTTP body to retrieve the information the balancing algorithm requires. The request is forwarded, based on the result of the modulo operation, by the HttpClient object. This HttpClient also pools and reuses (persistent) connections to the servers for performance reasons. The response is handled asynchronously through an HttpResponseHandler. This non-blocking, asynchronous approach minimizes the load balancer's system requirements; for instance, no thread is tied up while a call is outstanding. For a more detailed explanation of asynchronous, non-blocking HTTP programming, read my article "Asynchronous HTTP and Comet architectures."
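As a side note, the sign correction after the modulo operation in Listing 1 (idx *= -1) can be avoided on Java 8 and later by using Math.floorMod, which always yields a non-negative index for a positive divisor. The following minimal sketch of the partitioning step is illustrative only; the class and method names are not part of the article's library:

```java
// Partitioning sketch: map a customer id to a server slot.
// Math.floorMod always returns a value in [0, serverCount) when
// serverCount > 0, so no explicit sign correction is needed.
class PartitionSketch {

    static int serverSlot(int customerId, int serverCount) {
        return Math.floorMod(customerId, serverCount);
    }

    public static void main(String[] args) {
        // the same id always maps to the same slot -> client affinity
        System.out.println(serverSlot(2336, 2));
        System.out.println(serverSlot(-7, 3)); // negative hash codes are handled, too
    }
}
```

Because the mapping is deterministic, the same id parameter always selects the same back-end server, which is exactly what makes the local cache discussed in Part 1 effective.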

Another intermediate application-level server load balancing technique is cookie injection. In this case the load balancer checks whether the request contains a specific load balancing cookie. If the cookie is not found, a server is selected using a distribution algorithm such as round-robin, and a load balancing session cookie is added to the response before it is sent. When the browser receives the session cookie, it stores the cookie in temporary memory and discards it when the browser is closed. The browser adds the cookie to all subsequent requests in that session, which are sent to the load balancer. By storing the server slot as the cookie value, the load balancer can determine the server that is responsible for the request (in this browser session). Listing 2 implements a load balancer based on cookie injection.

Listing 2. Cookie-injection based application-level load balancer

class CookieBasedLoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
    private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
    private int serverIdx = 0;
    private HttpClient httpClient;

    /*
     * this class does not implement server monitoring or healthiness checks
     */

    public CookieBasedLoadBalancerHandler(InetSocketAddress... realServers) {
        servers.addAll(Arrays.asList(realServers));
    }

    public void onInit() {
        httpClient = new HttpClient();
        httpClient.setAutoHandleCookies(false);
    }

    public void onDestroy() throws IOException {
        httpClient.close();
    }

    public void onRequest(final IHttpExchange exchange) throws IOException {
        IHttpRequest request = exchange.getRequest();

        IHttpResponseHandler respHdl = null;
        InetSocketAddress serverAddr = null;

        // check if the request contains the LB_SLOT cookie
        cl : for (String cookieHeader : request.getHeaderList("Cookie")) {
            for (String cookie : cookieHeader.split(";")) {
                String[] kvp = cookie.split("=");
                if (kvp[0].trim().startsWith("LB_SLOT")) {
                    int slot = Integer.parseInt(kvp[1]);
                    serverAddr = servers.get(slot);
                    break cl;
                }
            }
        }

        // the request does not contain the LB_SLOT cookie -> select a server
        if (serverAddr == null) {
            final int slot = nextServerSlot();
            serverAddr = servers.get(slot);

            respHdl = new IHttpResponseHandler() {

                @Execution(Execution.NONTHREADED)
                public void onResponse(IHttpResponse response) throws IOException {
                    // set the LB_SLOT cookie
                    response.setHeader("Set-Cookie", "LB_SLOT=" + slot + ";Path=/");
                    exchange.send(response);
                }

                @Execution(Execution.NONTHREADED)
                public void onException(IOException ioe) throws IOException {
                    exchange.sendError(ioe);
                }
            };

        } else {
            respHdl = new IHttpResponseHandler() {

                @Execution(Execution.NONTHREADED)
                public void onResponse(IHttpResponse response) throws IOException {
                    exchange.send(response);
                }

                @Execution(Execution.NONTHREADED)
                public void onException(IOException ioe) throws IOException {
                    exchange.sendError(ioe);
                }
            };
        }

        // update the Request-URL of the request
        URL url = request.getRequestUrl();
        URL newUrl = new URL(url.getProtocol(), serverAddr.getHostName(), serverAddr.getPort(), url.getFile());
        request.setRequestUrl(newUrl);

        // proxy header handling (remove hop-by-hop headers, ...)
        // ...

        // forward the request
        httpClient.send(request, respHdl);
    }

    // get the next slot using the round-robin approach
    private synchronized int nextServerSlot() {
        serverIdx++;
        if (serverIdx >= servers.size()) {
            serverIdx = 0;
        }
        return serverIdx;
    }
}


class LoadBalancer {

    public static void main(String[] args) throws Exception {
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030) };
        CookieBasedLoadBalancerHandler hdl = new CookieBasedLoadBalancerHandler(srvs);
        HttpServer loadBalancer = new HttpServer(8080, hdl);
        loadBalancer.run();
    }
}

Unfortunately, the cookie-injection approach only works if the browser accepts cookies. If the user deactivates cookies, the client loses stickiness.

In general, the drawback of intermediate application-level load balancer solutions is that they require an additional node or process. Solutions that integrate a transport-level and an application-level server load balancer solve this problem but are often very expensive, and the flexibility gained by accessing application-level data is limited.

HTTP redirect-based server load balancer

One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303 See Other. The client recognizes the new location and reissues the request. Figure 2 shows this architecture.

Figure 2. HTTP redirect-based application-level load balancing

Listing 3 implements an HTTP redirect-based application-level load balancer. The load balancer in Listing 3 doesn't forward the request. Instead, it sends a redirect status code, which contains an alternate location. According to the HTTP specification, the client repeats the request by using the alternate location. If the client uses the alternate location for further requests, the traffic goes to that server directly. No extra network hops are required.

Listing 3. HTTP redirect-based application-level load balancer

class RedirectLoadBalancerHandler implements IHttpRequestHandler {
    private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();

    /*
     * this class does not implement server monitoring or healthiness checks
     */

    public RedirectLoadBalancerHandler(InetSocketAddress... realServers) {
        servers.addAll(Arrays.asList(realServers));
    }

    @Execution(Execution.NONTHREADED)
    public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
        IHttpRequest request = exchange.getRequest();

        // determine the business server based on the id's hashcode
        Integer customerId = request.getRequiredIntParameter("id");
        int idx = customerId.hashCode() % servers.size();
        if (idx < 0) {
            idx *= -1;
        }

        // create a redirect response -> status 303
        HttpResponse redirectResponse = new HttpResponse(303, "text/html", "<html>....");

        // ... and add the location header
        InetSocketAddress server = servers.get(idx);
        URL url = request.getRequestUrl();
        URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
        redirectResponse.setHeader("Location", newUrl.toString());

        // send the redirect response
        exchange.send(redirectResponse);
    }
}


class Server {

    public static void main(String[] args) throws Exception {
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030) };
        RedirectLoadBalancerHandler hdl = new RedirectLoadBalancerHandler(srvs);
        HttpServer loadBalancer = new HttpServer(8080, hdl);
        loadBalancer.run();
    }
}

The HTTP redirect approach has two weaknesses. First, the whole server infrastructure becomes visible to the client. This could be a security problem if the client is an anonymous client on the Internet; providers often try to minimize the attack surface by hiding their server infrastructure. Second, this approach does little for high availability. Similarly to DNS-based load balancing (discussed in Part 1), clients do not switch to another server if their assigned server fails. The client has no easy way to recognize the dead server and keeps trying to reach it. Moreover, if the client uses the original request URL for further calls, the number of network hops stays the same, because each request goes to the load balancer and is redirected to the server again.

Server-side server load balancer interceptor

Another way to avoid additional network hops is to move the application-level server load balancer logic to the server side. As shown in Figure 3, the load balancer becomes an interceptor.

Figure 3. Server-side load balancer interceptor

Listing 4 implements a server-side application-level load balancer interceptor. The code is almost the same as for Listing 1's LoadBalancerHandler. The difference is that if the request target is identified as the local server, the request is forwarded locally instead of using the HttpClient.

Listing 4. Server-side application-level load balancer interceptor

class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
    private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
    private InetSocketAddress localServer;
    private HttpClient httpClient;

    /*
     * this class does not implement server monitoring or healthiness checks
     */

    public LoadBalancerRequestInterceptor(InetSocketAddress localServer, InetSocketAddress... srvs) {
        this.localServer = localServer;
        servers.addAll(Arrays.asList(srvs));
    }

    public void onInit() {
        httpClient = new HttpClient();
        httpClient.setAutoHandleCookies(false);
    }

    public void onDestroy() throws IOException {
        httpClient.close();
    }

    public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
        IHttpRequest request = exchange.getRequest();

        // determine the business server based on the id's hashcode
        Integer customerId = request.getRequiredIntParameter("id");
        int idx = customerId.hashCode() % servers.size();
        if (idx < 0) {
            idx *= -1;
        }

        InetSocketAddress server = servers.get(idx);

        // local server? -> forward the request locally
        if (server.equals(localServer)) {
            exchange.forward(request);

        // .. no -> send it to the remote server
        } else {
            URL url = request.getRequestUrl();
            URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
            request.setRequestUrl(newUrl);

            // proxy header handling (remove hop-by-hop headers, ...)
            // ...

            IHttpResponseHandler respHdl = new IHttpResponseHandler() {

                @Execution(Execution.NONTHREADED)
                public void onResponse(IHttpResponse response) throws IOException {
                    exchange.send(response);
                }

                @Execution(Execution.NONTHREADED)
                public void onException(IOException ioe) throws IOException {
                    exchange.sendError(ioe);
                }
            };
            httpClient.send(request, respHdl);
        }
    }
}


class Server {

    public static void main(String[] args) throws Exception {
        RequestHandlerChain handlerChain = new RequestHandlerChain();
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030) };
        handlerChain.addLast(new LoadBalancerRequestInterceptor(new InetSocketAddress("srv1", 8030), srvs));
        handlerChain.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
        handlerChain.addLast(new MyRequestHandler());

        HttpServer httpServer = new HttpServer(8030, handlerChain);
        httpServer.run();
    }
}

This approach reduces additional network hops, but only modestly: on average, the percentage of requests handled locally equals 100 divided by the number of servers. For example, with four servers, only 25 percent of requests avoid the extra hop. Consequently, this approach helps only when you have a small number of servers.

Client-side server load balancer interceptor

Load balancing logic equivalent to that of a server-side load balancer interceptor can be implemented as an interceptor on the client side. In this case no transport-level load balancer is required. Figure 4 illustrates this architecture.

Figure 4. Client-side load balancer interceptor

Listing 5 adds an interceptor to the HttpClient. Because the load balancing code is written as an interceptor, the load balancing is invisible to the client application.

Listing 5. Client-side application-level load balancer interceptor

class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
    private final Map<String, List<InetSocketAddress>> serverClusters = new HashMap<String, List<InetSocketAddress>>();
    private HttpClient httpClient;

    /*
     * this class does not implement server monitoring or healthiness checks
     */

    public void addVirtualServer(String virtualServer, InetSocketAddress... realServers) {
        serverClusters.put(virtualServer, Arrays.asList(realServers));
    }

    public void onInit() {
        httpClient = new HttpClient();
        httpClient.setAutoHandleCookies(false);
    }

    public void onDestroy() throws IOException {
        httpClient.close();
    }

    public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
        IHttpRequest request = exchange.getRequest();

        URL requestUrl = request.getRequestUrl();
        String targetServer = requestUrl.getHost() + ":" + requestUrl.getPort();

        // handle a virtual address
        for (Entry<String, List<InetSocketAddress>> serverCluster : serverClusters.entrySet()) {
            if (targetServer.equals(serverCluster.getKey())) {
                String id = request.getRequiredStringParameter("id");

                int idx = id.hashCode() % serverCluster.getValue().size();
                if (idx < 0) {
                    idx *= -1;
                }

                InetSocketAddress realServer = serverCluster.getValue().get(idx);
                URL newUrl = new URL(requestUrl.getProtocol(), realServer.getHostName(), realServer.getPort(), requestUrl.getFile());
                request.setRequestUrl(newUrl);

                // proxy header handling (remove hop-by-hop headers, ...)
                // ...

                IHttpResponseHandler respHdl = new IHttpResponseHandler() {

                    @Execution(Execution.NONTHREADED)
                    public void onResponse(IHttpResponse response) throws IOException {
                        exchange.send(response);
                    }

                    @Execution(Execution.NONTHREADED)
                    public void onException(IOException ioe) throws IOException {
                        exchange.sendError(ioe);
                    }
                };

                httpClient.send(request, respHdl);
                return;
            }
        }

        // the request address is not a virtual one -> forward the request for standard handling
        exchange.forward(request);
    }
}



class SimpleTest {

    public static void main(String[] args) throws Exception {

        // start the servers
        RequestHandlerChain handlerChain1 = new RequestHandlerChain();
        handlerChain1.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
        handlerChain1.addLast(new MyRequestHandler());

        HttpServer httpServer1 = new HttpServer(8040, handlerChain1);
        httpServer1.start();

        RequestHandlerChain handlerChain2 = new RequestHandlerChain();
        handlerChain2.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
        handlerChain2.addLast(new MyRequestHandler());

        HttpServer httpServer2 = new HttpServer(8030, handlerChain2);
        httpServer2.start();

        // create the client
        HttpClient httpClient = new HttpClient();

        // ... and add the load balancer interceptor
        LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("localhost", 8040), new InetSocketAddress("localhost", 8030) };
        lbInterceptor.addVirtualServer("customerService:8080", srvs);
        httpClient.addInterceptor(lbInterceptor);

        // run some tests
        GetRequest request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
        IHttpResponse response = httpClient.call(request);
        assert (response.getHeader("X-Cached") == null);

        request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
        response = httpClient.call(request);
        assert (response.getHeader("X-Cached").equals("true"));

        request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
        response = httpClient.call(request);
        assert (response.getHeader("X-Cached") == null);

        request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
        response = httpClient.call(request);
        assert (response.getHeader("X-Cached").equals("true"));

        // ...
    }
}

The client-side approach is highly efficient, highly available, and highly scalable. Unfortunately, it has some serious disadvantages for Internet-based clients. Similarly to the HTTP redirect-based load balancer, the whole server infrastructure becomes visible to the client. Furthermore, this approach often forces client-side Web applications to perform cross-domain calls. For security reasons, Web browsers and browser-based containers such as a Flash runtime or a JavaScript runtime will block calls to different domains, which means some workarounds must be implemented on the client side. (See Resources for a link to an article describing some strategies that address this issue.)

The client-side load balancing approach is not restricted to HTTP-based applications. For instance, JBoss supports smart stubs. A stub is an object that is generated by the server and implements a remote service's business interface. The client makes local calls against the stub object. In a load balanced environment, the server-generated stub object also acts as an interceptor that understands how to route calls to the appropriate server.

Application session data support

As I discussed in Part 1, application session data represents the state of a user-specific application session. For classic ("WEB 1.0") Web applications, application session data is stored on the server side, as shown in Listing 6.

Listing 6. Session-based server

class MySessionBasedRequestHandler implements IHttpRequestHandler {

    @SynchronizedOn(SynchronizedOn.SESSION)
    public void onRequest(IHttpExchange exchange) throws IOException {
        IHttpRequest request = exchange.getRequest();
        IHttpSession session = exchange.getSession(true);

        //..

        Integer countRequests = (Integer) session.getAttribute("count");
        if (countRequests == null) {
            countRequests = 1;
        } else {
            countRequests++;
        }

        session.setAttribute("count", countRequests);

        // and return the response
        exchange.send(new HttpResponse(200, "text/plain", "count=" + countRequests));
    }
}


class Server {

    public static void main(String[] args) throws Exception {
        HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());
        httpServer.run();
    }
}

In Listing 6, the application session data (container) is accessed through the getSession(...) method. When true is passed as an argument, a new session is created if one doesn't already exist. In accordance with the Servlet API, a cookie named JSESSIONID is sent to the client. The value of the JSESSIONID cookie is the unique session ID, which identifies the session object stored on the server side. When it receives subsequent client requests, the server can fetch the associated session object based on the request's Cookie header. To support clients that do not accept cookies, URL rewriting can be used for session tracking: every local URL of the response page is dynamically rewritten to include the session ID.
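The URL rewriting step can be sketched as a simple string transformation. The hypothetical helper below mimics what a servlet container's encodeURL method does, appending the session ID as a ;jsessionid=... path parameter ahead of any query string; the class and method names are assumptions for this illustration:

```java
// Hypothetical sketch of session-tracking URL rewriting, as performed
// by servlet containers for clients that do not accept cookies.
class UrlRewritingSketch {

    // appends the session id as a ";jsessionid=..." path parameter,
    // inserting it before the query string if one is present
    static String encodeUrl(String url, String sessionId) {
        int queryIdx = url.indexOf('?');
        if (queryIdx == -1) {
            return url + ";jsessionid=" + sessionId;
        }
        return url.substring(0, queryIdx) + ";jsessionid=" + sessionId + url.substring(queryIdx);
    }

    public static void main(String[] args) {
        System.out.println(encodeUrl("/shop/cart", "A1B2C3"));
        System.out.println(encodeUrl("/shop/cart?item=42", "A1B2C3"));
    }
}
```

A real container applies this rewriting to every local URL it emits in the response page, so the session ID travels back with each subsequent request.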

In contrast to cached data, application session data is not redundant by definition. If the server crashes, the application session data will be lost and in most cases will be unrecoverable. As a consequence, application session data must either be stored in a global place or be replicated between the involved servers.

If the data is replicated, normally all of the involved servers hold the application data of all sessions. For this reason, this approach scales only to a small group of servers: server memory is limited, and updates must be replicated to all involved servers. To support larger numbers of servers, the servers must be partitioned into several smaller server groups. In contrast to the full-replication approach, the global-store approach uses a database, a file system, or in-memory session servers to store the session data in a global place.

In general, application session data handling does not force you to make the clients affine to the server. If the replication approach is used, normally all servers will hold the application session data. If session data is modified, the changes must be replicated to all servers. In the case of a global-store approach, the application data is fetched before the request is handled. Sending the response writes the changes of the session data back to the global store. The store must be highly available and represents one of the total system's hot-spot components. If the store is unavailable, the server can't handle the requests.

However, the locality caused by client affinity makes it easier to synchronize concurrent requests for the same session. For a more detailed explanation of threading issues with session state management, read "Java theory and practice: Are all stateful Web applications broken?" (see Resources). Furthermore, if clients are affine to the server, more-efficient techniques can be implemented. For instance, if session servers are used, the session server's responsibility can be reduced to a backup role. Figure 5 illustrates this architecture. Often the session ID is used as the load balancing key for such architectures.

Figure 5. Backup session server-based application session data support

When the response is written, modifications to the application session data are written to the session server. In contrast to the non-affine case, the servers read application session data only in the event of a failover.

Listing 7 defines a custom ISessionManager based on the xLightweb HTTP library (see Resources) to implement this behavior.

Listing 7. Session management

class BackupBasedSessionManager implements ISessionManager {

    private ISessionManager delegee = null;
    private HttpClient httpClient = null;

    public BackupBasedSessionManager(HttpClient httpClient, ISessionManager delegee) {
        this.httpClient = httpClient;
        this.delegee = delegee;
    }

    public boolean isEmtpy() {
        return delegee.isEmtpy();
    }

    public String newSession(String idPrefix) throws IOException {
        return delegee.newSession(idPrefix);
    }

    public void registerSession(HttpSession session) throws IOException {
        delegee.registerSession(session);
    }

    public HttpSession getSession(String sessionId) throws IOException {
        HttpSession session = delegee.getSession(sessionId);

        // session not available? -> try to get it from the backup location
        if (session == null) {
            String id = URLEncoder.encode(sessionId);
            IHttpResponse response = httpClient.call(new GetRequest("http://sessionservice:8080/?id=" + id));
            if (response.getStatus() == 200) {
                try {
                    byte[] serialized = response.getBlockingBody().readBytes();
                    ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(serialized));
                    session = (HttpSession) in.readObject();
                    registerSession(session);
                } catch (ClassNotFoundException cnfe) {
                    throw new IOException(cnfe);
                }
            }
        }

        return session;
    }

    public void saveSession(String sessionId) throws IOException {
        delegee.saveSession(sessionId);

        HttpSession session = delegee.getSession(sessionId);

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(session);
        out.close();
        byte[] serialized = bos.toByteArray();

        String id = URLEncoder.encode(session.getId());
        PostRequest storeRequest = new PostRequest("http://sessionservice:8080/?id=" + id + "&ttl=600", "application/octet-stream", serialized);
        httpClient.send(storeRequest, null); // send the store request asynchronously and ignore the result
    }

    public void removeSession(String sessionId) throws IOException {
        delegee.removeSession(sessionId);
        String id = URLEncoder.encode(sessionId);
        httpClient.call(new DeleteRequest("http://sessionservice:8080/?id=" + id));
    }

    public void close() throws IOException {
        delegee.close();
    }
}


class Server {

    public static void main(String[] args) throws Exception {

        // set the server's handler
        HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());

        // create a load balanced http client instance
        HttpClient sessionServerHttpClient = new HttpClient();
        LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
        InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("sessionSrv1", 5010), new InetSocketAddress("sessionSrv2", 5010) };
        lbInterceptor.addVirtualServer("sessionservice:8080", srvs);
        sessionServerHttpClient.addInterceptor(lbInterceptor);

        // wrap the local built-in session manager with the backup-aware session manager
        ISessionManager nativeSessionManager = httpServer.getSessionManager();
        BackupBasedSessionManager sessionManager = new BackupBasedSessionManager(sessionServerHttpClient, nativeSessionManager);
        httpServer.setSessionManager(sessionManager);

        // start the server
        httpServer.start();
    }
}

In Listing 7, the BackupBasedSessionManager is responsible for managing the sessions on the server side. The BackupBasedSessionManager implements the ISessionManager interface to intercept the container's session management. If the session is not found locally, the BackupBasedSessionManager tries to retrieve the session from the session server. This should only occur after a server failover. If the session state is changed, the BackupBasedSessionManager's saveSession() method is called to store the session on the backup session server. A client-side server load balancing approach is used to access the session servers.

Apache Tomcat load balancing architectures

Why haven't I used the current Java Servlet API for the preceding examples? The answer is simple: in contrast to HTTP libraries such as xLightweb, the Servlet API is designed as a purely synchronous, blocking API. This lack of asynchronous, non-blocking support makes load balancer implementations based on the Servlet API inefficient. That is true for both the intermediate load balancer approach and the server-side load balancer approach. Client-side interceptor-based load balancing is outside the scope of the Servlet API, which is a server-side-only API.

What you can do is implement an HTTP redirect-based server load balancer based on the Servlet API. Tomcat 5 ships with such an application, named balancer. (The balancer application is not included in the Tomcat 6 distribution.)

A popular load balancing approach for Tomcat is to run Apache HTTP Server as a Web server and send the request to one of the Tomcat instances over the Apache Tomcat Connector (AJP) protocol. Figure 6 illustrates this approach.

Figure 6. Popular Apache Tomcat infrastructure

The Web server acts as an application-level server load balancer by using the Apache mod_proxy_balancer module. Client affinity is implemented based on the JSESSIONID cookie or path parameter. As I discussed earlier, the JSESSIONID cookie is created implicitly when a servlet retrieves the HttpSession.

To make the target server determinable, the server's response is modified by adding the necessary routing information to the JSESSIONID value. When the client sends a subsequent request, this routing information is extracted from the request's JSESSIONID value, and the request is forwarded to the target server based on it.
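As an illustration, a minimal mod_proxy_balancer configuration (Apache HTTP Server 2.2 or later) for this setup might look like the following; the host names, ports, and route names are assumptions, and each route value must match the routing suffix the corresponding Tomcat instance appends to JSESSIONID:

```apache
# two Tomcat instances reached over AJP, with sticky sessions
<Proxy balancer://tomcatcluster>
    BalancerMember ajp://srv1:8009 route=tomcat1
    BalancerMember ajp://srv2:8009 route=tomcat2
</Proxy>

# route requests based on the session id carried in the JSESSIONID
# cookie or path parameter
ProxyPass /myapp balancer://tomcatcluster/myapp stickysession=JSESSIONID
```

With stickysession set, mod_proxy_balancer inspects the incoming JSESSIONID value and forwards the request to the BalancerMember whose route matches its suffix.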

To make the application session data highly available, a Tomcat cluster must be set up. Tomcat provides two basic paths for doing this: saving the session to a shared file system or database, or using in-memory replication. In-memory replication is the more popular Tomcat clustering approach.
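For instance, in-memory replication can be enabled with Tomcat's default cluster implementation. A minimal server.xml sketch for Tomcat 6 follows; the jvmRoute value is an assumption and must match the route name configured at the load balancer:

```xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
  <!-- enables in-memory session replication with default settings -->
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
</Engine>
```

The default settings replicate every session to every cluster member, which, as noted above, is practical only for small groups of servers.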

As an alternative, you are also free to write your own Apache application-level load balancer module to distribute the load over the Tomcat instances. Or, you can use other hardware/software-based load balancing solutions like the ones shown in the preceding portions of this article.

In conclusion

Client-side server load balancing is a simple and powerful technique. No intermediate server load balancers are required, and the client communicates with the servers directly. However, the scope of client-side server load balancing is limited: cross-domain calls must be supported for Internet clients, which introduces complexity and restrictions.

As you learned in Part 1, pure transport-level server load balancer architectures are simple, flexible, and highly efficient. In contrast to client-side server load balancing, no restrictions exist for the client side. Often such architectures are combined with distributed cache or session servers to handle application-level caching and session data issues. However, if the overhead caused by moving data from and to the cache or session servers grows, such architectures become increasingly inefficient. By implementing client affinity based on an application-level server load balancer, you can avoid copying large datasets between servers. This is not the only use case for application-level server load balancing. For instance, requests from specific premium users can be forwarded to dedicated servers that support high quality of service. Or specific business-function groups can be forwarded to specialized servers.

Although commercial and hardware-based solutions have not been discussed in this article, they should also be considered when you design a server load balancing architecture. As always, the concrete server load balancing architecture you choose depends on your infrastructure's specific business requirements and restrictions.

About the author

Gregor Roth, creator of the xLightweb HTTP library, works as a software architect at United Internet group, a leading European Internet service provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and system architecture, enterprise architecture management, object-oriented design, distributed computing, and development methodologies.


posted on 2009-01-06 10:21 by .VwV.