Distributing Load Within a Rundeck Cluster

Rundeck Enterprise's cluster features go beyond high-availability. As Rundeck usage grows, administrators want to distribute the load within an expandable cluster, and Rundeck's cluster tools make that possible. Here is how.

Split Front End and Worker traffic within the cluster

Using Rundeck Enterprise's cluster member tags and remote job execution policy, you can designate specific roles for your cluster members. For example, you can tag specific members as "Front End" to handle all front-end traffic (GUI, API, CLI). You can also tag specific cluster members as "Worker" to handle all job executions.
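As a sketch, tagging is done per member in that member's `rundeck-config.properties`. The property name and tag values below are illustrative, so verify the exact keys against your Rundeck Enterprise version's cluster documentation:

```properties
# On members that should serve user traffic (GUI, API, CLI):
rundeck.clusterMode.memberTags=frontend

# On members that should run job executions:
rundeck.clusterMode.memberTags=worker
```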

The design goal is to have a Front End cluster member receive a request to run a job or ad-hoc command (via GUI, API, CLI) and then pass that request to a Worker cluster member for execution. With this configuration, you can isolate — and scale independently — the load from user activity and the load from job execution.
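One way this could look on the "Front End" members is a remote execution section in `rundeck-config.properties` that forwards executions to members carrying the "Worker" tag. The keys and values below are a hedged sketch based on Rundeck Enterprise's cluster settings; confirm the exact names for your version:

```properties
# On "Front End" members: forward executions to tagged members
rundeck.clusterMode.remoteExecution.enabled=true
rundeck.clusterMode.remoteExecution.policy=RoundRobin
rundeck.clusterMode.remoteExecution.config.allowedTags=worker
```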

Define the policy for distributing load across Workers

Now that you have tagged the cluster members that you want to handle front-end traffic ("Front End") and those that you want to handle the execution traffic ("Worker"), you can define what policy you want to use to distribute the load among Worker members:

  • Random - Executes randomly among allowed cluster members (in this case, members tagged "Worker")
  • RoundRobin - Executes round-robin style among allowed members
  • Preset - Executes on one other preset member
  • Load - Executes on a member based on load (thread ratio and percentage of CPU by default)
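To make the four policies concrete, here is a minimal Python sketch of how each one could pick a "Worker" member. This is an illustration of the selection logic only, not Rundeck's actual implementation; the member names and load scores are made up:

```python
import itertools
import random

def pick_random(workers, rng=random):
    # Random: choose uniformly among allowed members.
    return rng.choice(workers)

def make_round_robin(workers):
    # RoundRobin: cycle through allowed members in order.
    cycle = itertools.cycle(workers)
    return lambda: next(cycle)

def pick_preset(workers, preset):
    # Preset: always execute on one configured member.
    if preset not in workers:
        raise ValueError(f"preset member {preset!r} is not an allowed worker")
    return preset

def pick_by_load(workers, load_by_member):
    # Load: choose the member with the lowest load score
    # (e.g. a combination of thread ratio and CPU percentage).
    return min(workers, key=lambda m: load_by_member[m])

workers = ["worker-1", "worker-2", "worker-3"]
rr = make_round_robin(workers)
print([rr() for _ in range(4)])   # cycles: worker-1, worker-2, worker-3, worker-1
print(pick_preset(workers, "worker-2"))
print(pick_by_load(workers, {"worker-1": 0.9, "worker-2": 0.2, "worker-3": 0.5}))
```

Whichever policy is configured, the candidate pool is always restricted first to the members carrying the allowed tag (here, "Worker").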

In some situations, you may want to isolate certain executions or support multi-tenancy. If so, you can create multiple remote execution profiles that map Rundeck projects to specific cluster member tags and remote execution policies. This is an advanced topic that we will cover in an upcoming blog post.