
What is asymmetric clustering – Part 1

Symmetric and asymmetric clustering are important topics in SCEA. In this post, we will talk about asymmetric clustering.

Traditional J2EE application servers work well for a large class of applications. This class can broadly be categorized as applications that run in a stateless cluster in front of a database. I call this a symmetric cluster:

– All the cluster members can perform any task at any time.

– The application is stateless.

– The application is modal, which means it only performs work synchronously, in response to a client request received via HTTP/IIOP or JMS.

There are other applications that do not work well in such an environment, for example, an electronic trading system in a bank. Such applications typically use techniques that can greatly improve performance, such as partitioning, multi-threading and write-through caching. These are applications that can exploit asymmetric clustering. An asymmetric cluster is practically the opposite of a symmetric cluster:

– Applications can declare named partitions at any point while they are running.

– Partitions are highly available, uniquely named singletons and run on a single cluster member at a time.

– Incoming work for a partition is routed to the cluster member hosting that partition.

– The application is amodal. Partitions have a lifecycle of their own and can start background threads/alarms as well as respond to incoming events, whether they are IIOP/HTTP or JMS/foreign messages.

WebSphere XD offers a new set of programming APIs called the “Partitioning Facility”. To my knowledge, these APIs allow applications that require an asymmetric cluster to be deployed on a J2EE server for the first time.

How can partitioning improve application performance?

A stateless cluster will only scale so far once the cluster members start competing for database access. If the workload is read-mostly, then solutions like caching with optimistic locking work well. However, as the write rate increases, this starts to break down because of collisions. Some workloads also require incoming work to be executed in order; for example, buy and sell orders for a stock symbol need to be processed in order of price and then in order of arrival. When the work is spread over a cluster, this becomes more complex, and while it can be done, it’s not easy.

Partitioning allows the application to partition itself, split the incoming requests into streams corresponding to those partitions, and then route incoming work for a particular event/request stream exclusively to a single partition. The incoming work is classified into partitions, and the requests are then routed to the cluster member hosting that partition using either IIOP or messaging. Once the work arrives, the cluster member can aggressively cache read/write data specific to that partition, as the developer knows this data can only be modified on this cluster member. The cluster member can also order the work and ensure that it’s processed in the correct sequence in memory, independently of any other cluster member.

The database is now offloaded: it only receives writes from the cluster, and all reads are satisfied from the cache in the cluster member. As long as there are more partitions than boxes, adding boxes will make the application run faster until the database ultimately becomes a bottleneck.
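To make the classify-and-route idea concrete, here is a minimal sketch in plain Java. The class and member names (`PartitionRouter`, `partitionFor`, `route`) are illustrative assumptions, not the WebSphere XD Partitioning Facility API; the point is only that a deterministic function over a business key sends every request in a stream to the one cluster member hosting that partition.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: classify incoming work into named partitions and
// route each request stream to the cluster member hosting that partition.
public class PartitionRouter {

    // Deterministically map a business key (e.g. a stock symbol) to one
    // of `partitionCount` partitions.
    public static int partitionFor(String businessKey, int partitionCount) {
        return Math.floorMod(businessKey.hashCode(), partitionCount);
    }

    // Partition id -> cluster member currently hosting it (one host at a time).
    private final Map<Integer, String> partitionHosts = new ConcurrentHashMap<>();

    public void assignPartition(int partitionId, String clusterMember) {
        partitionHosts.put(partitionId, clusterMember);
    }

    // All requests for the same key land on the same member, so that member
    // can cache the partition's read/write data without collision worries.
    public String route(String businessKey, int partitionCount) {
        return partitionHosts.get(partitionFor(businessKey, partitionCount));
    }
}
```

Because the mapping is deterministic, every request for the same stock symbol reaches the same cluster member, which is what makes the aggressive per-partition caching described above safe.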

The other situation where async beans plus partitioning improve performance is that together they provide low-level primitives that can be used to build what is practically a custom application container, one that fully leverages the features of the application server. The J2EE specification provides a very high-level set of services to an application developer. If this set of services isn’t what the developer requires, the standard spec does not allow the developer to provide an alternative: the normal options are to write a standalone J2SE application or to try to have features added to the next release of the commercial application server. The partitioning facility and async beans provide low-level primitives that allow such code to run on top of the application server in a fully supported manner. This approach lets advanced customers keep the benefits of a commercial application server without the usual limitations of the J2EE one-size-fits-all philosophy.

Why might the asymmetric/partitioned model scale better than the symmetric model?

A financial application that matches buyers with sellers is one type of application we’ve seen. The application trades a set of financial instruments such as stocks. It tracks the buy and sell orders for each stock and attempts to match buyers with compatible sellers. It also compares the buyers and sellers with prices from other exchanges and may route orders to an external exchange if that exchange is currently posting a better price. Orders should be processed in price order, and orders at the same price should be processed in the order they arrived. Such a system might receive between 20,000 and 100,000 orders per day but would also receive between 500 and 2,000 remote exchange prices/second (roughly 2 to 7 million prices/hour), and this volume is growing at about 40% a year. Clearly, handling the price volume is the main performance problem.

Partitioned applications can also better exploit SQL batching to significantly improve performance, as they don’t need to worry about optimistic locking collisions: the rows are only changed on a single server, so there are no collisions.
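The ordering rule above (price first, then arrival order for equal prices) can be expressed directly as a comparator over an in-memory priority queue, which is exactly what a partition hosting one symbol can do once it knows no other member will touch that data. This is a minimal sketch with hypothetical names, not a real matching engine:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative sketch: buy orders processed best (highest) price first,
// with ties at the same price broken by arrival order.
public class BuyOrderBook {

    public static final class Order {
        public final String symbol;
        public final long priceCents;
        public final long arrivalSeq; // monotonically increasing arrival stamp

        public Order(String symbol, long priceCents, long arrivalSeq) {
            this.symbol = symbol;
            this.priceCents = priceCents;
            this.arrivalSeq = arrivalSeq;
        }
    }

    // Highest price first; equal prices ordered by earliest arrival.
    private final PriorityQueue<Order> buys = new PriorityQueue<>(
            Comparator.comparingLong((Order o) -> o.priceCents).reversed()
                      .thenComparingLong(o -> o.arrivalSeq));

    public void add(Order o) { buys.add(o); }

    public Order next() { return buys.poll(); }
}
```

Because the whole book for a symbol lives in one partition, this sequencing happens in memory with no database locks involved.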

What kind of applications typically implemented on symmetric clusters might benefit from being refactored into asymmetric ones?

Applications with smaller data sets that experience high request volumes and a relatively high write-to-read ratio. This kind of application typically doesn’t scale horizontally because of contention between cluster members, even with approaches such as optimistic locking.

Applications that require sequenced request handling, where a subset of the incoming events must be processed in a particular order. This can be implemented more efficiently using partitioning than with approaches based on database locking.
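One common way to get this per-partition sequencing without database locks is to give each partition its own single-threaded executor: events for one partition run strictly in submission order, while different partitions run in parallel. The class name and structure here are an illustrative assumption, not a WebSphere XD API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of sequenced request handling: one single-threaded
// "lane" per partition serializes that partition's events in memory.
public class SequencedDispatcher {

    private final ExecutorService[] lanes;

    public SequencedDispatcher(int partitionCount) {
        lanes = new ExecutorService[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            lanes[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Events for the same key go to the same lane, so they execute
    // in submission order; different keys may run concurrently.
    public void dispatch(String businessKey, Runnable event) {
        int p = Math.floorMod(businessKey.hashCode(), lanes.length);
        lanes[p].submit(event);
    }

    public void shutdown() {
        for (ExecutorService lane : lanes) lane.shutdown();
        for (ExecutorService lane : lanes) {
            try {
                lane.awaitTermination(5, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

The design choice here mirrors the asymmetric model: ordering is enforced by ownership (one thread per partition) rather than by locking shared rows.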

Applications that have dynamic messaging requirements. If the set of message queues or topics used by an application changes dynamically while the application is deployed, then a dynamic POJO message listener framework can easily be constructed with the exact threading model the application needs, rather than the usual cookie-cutter approach.

Applications that have very high incoming message rates, where it makes sense to split the incoming message feed so that a specific subset only goes to a single cluster member. The incoming topics are partitioned into groups using hashing or some other deterministic approach, and each cluster member only receives messages for topics assigned to partitions hosted by that cluster member. That cluster member can then aggressively cache state for its subset, which improves performance and offloads the database. This again enables horizontal scaling, especially when message order is important.
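A sketch of the deterministic topic-splitting step, under the assumption that a simple hash of the topic name is an acceptable grouping function (the class name `TopicPartitioner` is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a topic feed into N groups so that each topic
// deterministically belongs to exactly one group, and each group is consumed
// by the single cluster member hosting the corresponding partition.
public class TopicPartitioner {

    public static List<List<String>> group(List<String> topics, int groups) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < groups; i++) {
            out.add(new ArrayList<>());
        }
        for (String topic : topics) {
            // Same topic always hashes to the same group, so every message
            // for that topic reaches the same cluster member, in order.
            out.get(Math.floorMod(topic.hashCode(), groups)).add(topic);
        }
        return out;
    }
}
```

Because the assignment is a pure function of the topic name, re-running it on any member yields the same grouping, which is what preserves per-topic message ordering as the cluster scales out.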