Cluster terminology – What “Active/Active” actually means

As a follow-up to my last entry (attempting to clear up some Windows Clustering terminology), I’ve found an article that makes another distinction that I forgot to include – the difference between an active/passive and an active/active cluster:

The misconception of active/active clustering (a la AirborneGeek.com)

The understanding among those new to clustering seems to be that active/active vs. active/passive is a licensing question, and that if you’re licensed for it, you just turn it on. In reality, the terms simply describe whether your clustered services live on only one node or are split between both nodes during normal operation. (During a failover, any cluster might be active/active for a short period. Or, I suppose, your cluster is active/active if your quorum drive lives on the opposite node from your clustered service.) There’s no load balancing involved in clustering at all: at any given time, only one node owns a particular resource, and only that node responds to client requests for that resource.
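
One way to see this for yourself is to ask each clustered instance which physical node currently owns it. Here’s a minimal sketch, assuming pyodbc, a SQL Server ODBC driver, and two hypothetical clustered-instance network names (SQLCLU1 and SQLCLU2): if the two queries return different node names, the cluster is currently “active/active”; if they return the same one, it’s “active/passive”:

```python
import pyodbc

# SQLCLU1 and SQLCLU2 are hypothetical virtual network names for two
# clustered SQL Server instances on the same two-node cluster.
for instance in ("SQLCLU1", "SQLCLU2"):
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};"
        f"SERVER={instance};Trusted_Connection=yes;"
    )
    # ComputerNamePhysicalNetBIOS reports the node the instance is
    # currently online on (cast because pyodbc can't read sql_variant).
    node = conn.execute(
        "SELECT CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname)"
    ).fetchone()[0]
    print(f"{instance} is currently owned by node {node}")
    conn.close()
```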

In SQL Server 2012 AlwaysOn, the new high-availability feature, the SQL Server service runs on both cluster nodes, but client access (through the Availability Group listener) is coordinated by the cluster service. Clients connect first to the primary replica, and SQL Server there can then route them (via read-only routing, for connections that declare read-only intent) to get their data from one of the other nodes. It’s worth reiterating here that, in AlwaysOn, SQL Server isn’t clustered; the SQL services operate independently on each node.
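
Here’s a minimal sketch of what that looks like from the client side, assuming pyodbc, a hypothetical listener name AGLISTENER, a hypothetical database SalesDB, and read-only routing configured on the availability group. The client always connects to the listener; declaring ApplicationIntent=ReadOnly is what allows the primary to hand the session off to a readable secondary, which the @@SERVERNAME check makes visible:

```python
import pyodbc

# AGLISTENER and SalesDB are hypothetical; read-only routing must be
# configured on the availability group for the redirect to happen.
base = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=AGLISTENER;"
        "DATABASE=SalesDB;Trusted_Connection=yes;")

for intent in ("ReadWrite", "ReadOnly"):
    conn = pyodbc.connect(base + f"ApplicationIntent={intent};")
    # @@SERVERNAME tells us which replica actually answered.
    replica = conn.execute("SELECT @@SERVERNAME").fetchone()[0]
    print(f"{intent} connection landed on {replica}")
    conn.close()
```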

Clearing up Windows Cluster terminology

I wanted to clear up some terminology around Windows Clusters that seems to cause a bit of confusion. I’ve stumbled across a few questions on StackOverflow and Experts-Exchange that show some basic confusion about “clustering servers” and “how to install an application to a cluster,” and I’m hoping to set a few things straight.

  1. There’s really no such thing as a clustered server. Servers can have clustering enabled and configured, but the servers themselves aren’t really clustered – they’re just set up to enable clustered applications. When servers are part of a cluster, they still do all their thinking on their own, including running their own applications, services, and tasks, without the other servers in the cluster even being aware of it.
  2. You don’t cluster servers; you cluster applications and resources. Once servers have clustering installed and configured, you can cluster an application or a resource. Doing so really just tells the cluster manager that you want it to control which server clients talk to when they want to access that resource. The cluster manager ensures that the application (or service, or resource) is running on only one node at any given time, and, to the extent it’s able, it ensures that it’s always running: it watches for a failure, brings the resource online on another node, and directs clients to that node instead (see the toy sketch of this single-owner logic after this list).
  3. Applications don’t have to be “cluster-aware” to be clustered. I work mostly with SQL Server, which is cluster-aware, but applications you cluster don’t need to be. You can cluster any service or resource on a server just by adding it to the cluster manager, which will ensure it runs on only one server at a time, not allowing it to start on the other nodes. For example, we use a monitoring tool that runs as a service: we installed the service on each cluster node and then added it to the cluster manager. It can now be failed back and forth between nodes as a clustered resource, so it’s always online, is failure-resistant, and even keeps a portion of its HKLM registry settings synchronized between nodes, all without being explicitly cluster-aware.
  4. SQL Server doesn’t need to be clustered when it’s installed on a cluster. While you can install a clustered instance of SQL Server (which automatically registers everything with the cluster manager), you can also install stand-alone instances of SQL Server (or any other application) on a cluster. That’s actually how a new feature in SQL 2012, AlwaysOn, works: you install a non-clustered instance of SQL Server on each cluster node and let the cluster manager coordinate client connections to the SQL Servers, but the instances still operate independently and replicate their data between each other (the second sketch after this list shows how to ask the instances which role each one currently holds).
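
To make the single-owner idea in point 2 concrete, here’s a deliberately over-simplified Python sketch. It’s a toy model, not how WSFC actually works (there’s no quorum voting, heartbeat networking, or resource monitoring here); it only illustrates the rule that a resource has exactly one owner at a time, and that failover means moving that ownership, not balancing load:

```python
# Toy model of the cluster manager's single-owner rule: NOT how WSFC
# actually works (no quorum, heartbeats, or resource monitors here).

class ToyCluster:
    def __init__(self, nodes):
        self.up = {name: True for name in nodes}  # node name -> node is alive?
        self.owner = nodes[0]                     # the resource's one owner

    def health_check(self):
        """If the owning node has died, bring the resource online on a
        surviving node; clients are then directed there instead."""
        if not self.up[self.owner]:
            survivors = [n for n, alive in self.up.items() if alive]
            if not survivors:
                raise RuntimeError("no surviving node can host the resource")
            self.owner = survivors[0]  # failover: ownership moves, once

    def client_connect(self):
        # At any moment exactly one node answers for the resource;
        # there is no load balancing between nodes.
        return self.owner


cluster = ToyCluster(["NODE1", "NODE2"])
print(cluster.client_connect())  # NODE1
cluster.up["NODE1"] = False      # simulate NODE1 failing
cluster.health_check()           # the cluster notices and fails over
print(cluster.client_connect())  # NODE2
```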
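
And to see point 4 in action, you can ask the AlwaysOn DMVs which role each stand-alone instance currently holds. This is a minimal sketch assuming pyodbc and the same hypothetical listener name AGLISTENER as above; the views and columns queried are real SQL Server 2012 DMVs:

```python
import pyodbc

# AGLISTENER is hypothetical; the replica server names returned are
# whatever stand-alone instances you installed on each node.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=AGLISTENER;Trusted_Connection=yes;"
)
rows = conn.execute("""
    SELECT ar.replica_server_name,
           rs.role_desc,                  -- PRIMARY or SECONDARY
           rs.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_replicas AS ar
      ON ar.replica_id = rs.replica_id
""").fetchall()
for server, role, health in rows:
    print(f"{server}: {role} ({health})")
conn.close()
```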

Hopefully this clears things up rather than adding to the confusion. When I first started working with clustering, I had the impression that setting up a cluster caused the servers to act as one and share all their resources, and that misunderstanding led to a lot of confusion when it came time to set something up or troubleshoot an issue. Understanding that “clustered servers” are really just servers with clustered resources, and aren’t actually clustered themselves, will hopefully simplify things!