Consolidation of Rundeck activity is one of the most popular use cases for the cluster features in Rundeck Enterprise. Rather than manage multiple Rundeck instances (which can also fragment the user experience), Rundeck admins want to consolidate Rundeck activity into one or more highly available, scalable Rundeck clusters.
This concentration of Rundeck activity can create two challenges as usage scales. In a previous post, I covered the challenge of distributing load within a Rundeck cluster. In this post, I'll address the second challenge: segmenting and isolating execution within the cluster.
There are different use cases where you may want to segment or isolate Rundeck activity by directing executions for specific projects to different cluster members. Some use cases focus on infrastructure or platform differences, such as needing to dedicate and configure cluster members for certain environment types (e.g., Linux, Windows, PCI, etc.). Other use cases center on workload isolation (e.g., keeping operations support and incident response jobs separated from the lumpy load of scheduled operations and business analytics batch jobs).
The key mechanism for segmenting and isolating execution traffic is the profile feature of Rundeck's cluster remote execution policy. The profile feature allows you to associate a project (or set of projects) with a specific execution policy.
For example, you can create a profile for projects containing jobs that execute in PCI-compliant environments. First, you would set up specific Worker members of the Rundeck Cluster to have connectivity to the PCI environments. Next, you would add the "PCI" tag to those cluster members. Finally, you would create a remote execution policy profile that associates specific projects with a policy that sends those projects' executions to the PCI-tagged Worker members of the Rundeck Cluster. The remote execution policy also lets you set a load balancing strategy for the profile, with both softer controls ("preferred" members) and stricter controls ("allowed" members).
Note: In this scenario, we are following the convention of "Front End" cluster members handling user traffic (UI, API, CLI), with jobs and commands routed to "Worker" cluster members for execution.
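To make the PCI scenario above concrete, here is a sketch of what the profile configuration might look like in `rundeck-config.properties`. The property names, profile name ("PCIProfile"), and project names are illustrative assumptions for this example; check the Rundeck Enterprise cluster documentation for the exact keys and policy names supported by your version.

```properties
# rundeck-config.properties (illustrative sketch -- verify key names
# against the Rundeck Enterprise cluster docs for your version)

# Enable cluster remote execution so Front End members can route
# executions to Worker members
rundeck.clusterMode.remoteExecution.enabled=true

# Declare a profile for the PCI projects (profile and project names
# here are hypothetical)
rundeck.clusterMode.remoteExecution.profiles=PCIProfile
rundeck.clusterMode.remoteExecution.profile.PCIProfile.projects=pci-payments,pci-reporting

# Strict control: only Worker members carrying the "PCI" tag may run
# executions for these projects
rundeck.clusterMode.remoteExecution.profile.PCIProfile.config.allowedTags=PCI

# Softer alternative: prefer PCI-tagged members but allow fallback
# (use instead of allowedTags, not alongside it)
# rundeck.clusterMode.remoteExecution.profile.PCIProfile.config.preferredTags=PCI
```

With a layout like this, executions for the two PCI projects are routed only to Worker members tagged "PCI", while all other projects follow the cluster's default remote execution policy.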
If you want to discuss an execution model specific to your use cases, please don't hesitate to contact us.