From 937631a52f05fa509beb7e2fcc7f978472b86536 Mon Sep 17 00:00:00 2001
From: EricPonvelle
Date: Thu, 14 May 2026 17:53:16 -0500
Subject: [PATCH] OSDOCS-14819: Added Scaling Limits to the product documentation

---
 ...nt-cluster-max-classic-considerations.adoc | 48 +++++++++++++++
 ...nning-environment-cluster-max-classic.adoc | 63 +++++++++++++++++++
 ...onment-cluster-max-hcp-considerations.adoc | 48 ++++++++++++++
 ...-planning-environment-cluster-max-hcp.adoc | 63 +++++++++++++++++++
 ...rosa-planning-environment-cluster-max.adoc | 40 ------------
 rosa_planning/rosa-planning-environment.adoc  | 11 +++-
 6 files changed, 231 insertions(+), 42 deletions(-)
 create mode 100644 modules/rosa-planning-environment-cluster-max-classic-considerations.adoc
 create mode 100644 modules/rosa-planning-environment-cluster-max-classic.adoc
 create mode 100644 modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc
 create mode 100644 modules/rosa-planning-environment-cluster-max-hcp.adoc
 delete mode 100644 modules/rosa-planning-environment-cluster-max.adoc

diff --git a/modules/rosa-planning-environment-cluster-max-classic-considerations.adoc b/modules/rosa-planning-environment-cluster-max-classic-considerations.adoc
new file mode 100644
index 000000000000..46ee859c5467
--- /dev/null
+++ b/modules/rosa-planning-environment-cluster-max-classic-considerations.adoc
@@ -0,0 +1,48 @@
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-planning-environment.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="planning-environment-cluster-maximums-considerations-classic_{context}"]
+= Considerations for cluster maximum tests
+
+* *Idle pod baselines:* Metrics include `pause` pods and default cluster control plane pods on control plane nodes. Results use the `node-density` workload and change with active application load.
+* *Resource and I/O limitations:* Observed limits depend on workload intensity. For example, high-density I/O workloads, such as PostgreSQL, or hitting the PVC-per-node ceiling can reduce the maximum stable pod count.
+* *Namespace composition:* Tests include default cluster operator namespaces. Each test namespace has idle pods, config maps, and secrets to model routine operational load.
+* *Service scalability:* Data reflects tested `ClusterIP` services with a single endpoint using the `cni-density` workload. Other service types can scale differently.
+* *Infrastructure and routing:* The test environment used two routers with default settings on dedicated {rosa-classic-title} infrastructure nodes. Limits were measured by using the `cluster-density` workload.
+
+[id="planning-environment-cluster-maximums-estimate-classic_{context}"]
+== Estimating worker nodes from expected pod counts
+
+Oversubscribing physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Take steps to avoid memory swapping.
+
+Some tested maximums apply in only one dimension at a time. They change when many objects run on the cluster at once.
+
+Older Red{nbsp}Hat test setups and tunings can differ from the OpenShift 4.21 tables in the preceding sections. Treat historical formulas as general guidance unless you match the same test profile.
+
+While planning your environment, estimate how many pods can fit per node by using the following formula:
+
+----
+required pods per cluster / pods per node = total number of nodes needed
+----
+
+The tested maximum number of pods per node in the preceding tables is 250. The number of pods that fit on a node still depends on each application's memory, CPU, and storage profile, as described in _Planning your environment based on application requirements_.
+
+For example, if you want to scope your cluster for 2200 pods, you need at least nine worker nodes when you plan for 250 pods per node:
+
+----
+2200 / 250 = 8.8
+----
+
+If you increase the number of worker nodes to 20, then the pod distribution changes to 110 pods per node:
+
+----
+2200 / 20 = 110
+----
+
+Where:
+
+----
+required pods per cluster / total number of nodes = expected pods per node
+----
\ No newline at end of file
diff --git a/modules/rosa-planning-environment-cluster-max-classic.adoc b/modules/rosa-planning-environment-cluster-max-classic.adoc
new file mode 100644
index 000000000000..2b4534503180
--- /dev/null
+++ b/modules/rosa-planning-environment-cluster-max-classic.adoc
@@ -0,0 +1,63 @@
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-planning-environment.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="planning-environment-cluster-maximums_{context}"]
+= Planning your environment based on tested cluster maximums
+
+[role="_abstract"]
+You can use tested cluster maximums from performance test runs as guidance when you size {product-title} clusters and namespaces. These values are not hard limits or Red{nbsp}Hat-supported ceilings for production.
+
+The following tables summarize the latest published tested maximums for your architecture.
+
+[NOTE]
+====
+These numbers come from internal Red{nbsp}Hat tests on the latest {product-title} clusters using default cluster tunings. Reaching or exceeding a number does not mean the cluster fails or degrades immediately.
+
+Each value reflects limits for individual OpenShift resources measured with separate workloads that target a few resource types at a time. The workloads do not reproduce every production load, but they target patterns that stay close to common customer use cases. Tests used the Cloud Native Computing Foundation (CNCF) workload orchestrator. For more information, see link:https://www.cncf.io/projects/kube-burner/[the project page for this orchestrator].
+====
+
+[id="planning-environment-tested-maximums-classic_{context}"]
+== Tested cluster maximums for {product-title}
+
+[options="header",cols="2,1"]
+|===
+|Maximum type |{product-title} tested maximum
+
+|Number of compute (worker) nodes
+|249
+
+|Number of pods
+|60,000
+
+|Number of pods per node
+|250
+
+|Number of deployments
+|11,200
+
+|Number of namespaces
+|2,250
+
+|Number of routes
+|4,500
+
+|Number of secrets
+|29,000
+
+|Number of config maps
+|34,000
+
+|Number of services
+|22,000
+
+|Number of pods per namespace
+|2,000
+
+|Number of services per namespace
+|1,000
+
+|Number of deployments per namespace
+|2,000
+|===
\ No newline at end of file
diff --git a/modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc b/modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc
new file mode 100644
index 000000000000..6cee8fa3ca62
--- /dev/null
+++ b/modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc
@@ -0,0 +1,48 @@
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-planning-environment.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="planning-environment-cluster-maximums-considerations-hcp_{context}"]
+= Considerations for cluster maximum tests
+
+* *Idle pod baselines:* Metrics include `pause` pods and default cluster control plane pods on control plane nodes. Results use the `node-density` workload and change with active application load.
+* *Resource and I/O limitations:* Observed limits depend on workload intensity. For example, high-density I/O workloads, such as PostgreSQL, or hitting the PVC-per-node ceiling can reduce the maximum stable pod count.
+* *Namespace composition:* Tests include default cluster operator namespaces. Each test namespace has idle pods, config maps, and secrets to model routine operational load.
+* *Service scalability:* Data reflects tested `ClusterIP` services with a single endpoint using the `cni-density` workload. Other service types can scale differently.
+* *Infrastructure and routing:* The test environment used two routers with default settings on dedicated {rosa-classic-title} infrastructure nodes. Limits were measured by using the `cluster-density` workload.
+
+[id="planning-environment-cluster-maximums-estimate-hcp_{context}"]
+== Estimating worker nodes from expected pod counts
+
+Oversubscribing physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Take steps to avoid memory swapping.
+
+Some tested maximums apply in only one dimension at a time. They change when many objects run on the cluster at once.
+
+Older Red{nbsp}Hat test setups and tunings can differ from the OpenShift 4.21 tables in the preceding sections. Treat historical formulas as general guidance unless you match the same test profile.
+
+While planning your environment, estimate how many pods can fit per node by using the following formula:
+
+----
+required pods per cluster / pods per node = total number of nodes needed
+----
+
+The tested maximum number of pods per node in the preceding tables is 250. The number of pods that fit on a node still depends on each application's memory, CPU, and storage profile, as described in _Planning your environment based on application requirements_.
+
+For example, if you want to scope your cluster for 2200 pods, you need at least nine worker nodes when you plan for 250 pods per node:
+
+----
+2200 / 250 = 8.8
+----
+
+If you increase the number of worker nodes to 20, then the pod distribution changes to 110 pods per node:
+
+----
+2200 / 20 = 110
+----
+
+Where:
+
+----
+required pods per cluster / total number of nodes = expected pods per node
+----
diff --git a/modules/rosa-planning-environment-cluster-max-hcp.adoc b/modules/rosa-planning-environment-cluster-max-hcp.adoc
new file mode 100644
index 000000000000..2b4534503180
--- /dev/null
+++ b/modules/rosa-planning-environment-cluster-max-hcp.adoc
@@ -0,0 +1,63 @@
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-planning-environment.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="planning-environment-cluster-maximums_{context}"]
+= Planning your environment based on tested cluster maximums
+
+[role="_abstract"]
+You can use tested cluster maximums from performance test runs as guidance when you size {product-title} clusters and namespaces. These values are not hard limits or Red{nbsp}Hat-supported ceilings for production.
+
+The following tables summarize the latest published tested maximums for your architecture.
+
+[NOTE]
+====
+These numbers come from internal Red{nbsp}Hat tests on the latest {product-title} clusters using default cluster tunings. Reaching or exceeding a number does not mean the cluster fails or degrades immediately.
+
+Each value reflects limits for individual OpenShift resources measured with separate workloads that target a few resource types at a time. The workloads do not reproduce every production load, but they target patterns that stay close to common customer use cases. Tests used the Cloud Native Computing Foundation (CNCF) workload orchestrator. For more information, see link:https://www.cncf.io/projects/kube-burner/[the project page for this orchestrator].
+====
+
+[id="planning-environment-tested-maximums-hcp_{context}"]
+== Tested cluster maximums for {product-title}
+
+[options="header",cols="2,1"]
+|===
+|Maximum type |{product-title} tested maximum
+
+|Number of compute (worker) nodes
+|249
+
+|Number of pods
+|60,000
+
+|Number of pods per node
+|250
+
+|Number of deployments
+|11,200
+
+|Number of namespaces
+|2,250
+
+|Number of routes
+|4,500
+
+|Number of secrets
+|29,000
+
+|Number of config maps
+|34,000
+
+|Number of services
+|22,000
+
+|Number of pods per namespace
+|2,000
+
+|Number of services per namespace
+|1,000
+
+|Number of deployments per namespace
+|2,000
+|===
\ No newline at end of file
diff --git a/modules/rosa-planning-environment-cluster-max.adoc b/modules/rosa-planning-environment-cluster-max.adoc
deleted file mode 100644
index bfefdbe417ab..000000000000
--- a/modules/rosa-planning-environment-cluster-max.adoc
+++ /dev/null
@@ -1,40 +0,0 @@
-// Module included in the following assemblies:
-//
-// * rosa_planning/rosa-planning-environment.adoc
-
-:_mod-docs-content-type: REFERENCE
-[id="planning-environment-cluster-maximums_{context}"]
-= Planning your environment based on tested cluster maximums
-
-[role="_abstract"]
-Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
-
-Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.
-
-The numbers noted in this documentation are based on Red{nbsp}Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.
-
-While planning your environment, determine how many pods are expected to fit per node using the following formula:
-
-----
-required pods per cluster / pods per node = total number of nodes needed
-----
-
-The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in _Planning your environment based on application requirements_.
-
-For example, if you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:
-
-----
-2200 / 250 = 8.8
-----
-
-If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:
-
-----
-2200 / 20 = 110
-----
-
-Where:
-
-----
-required pods per cluster / total number of nodes = expected pods per node
-----
diff --git a/rosa_planning/rosa-planning-environment.adoc b/rosa_planning/rosa-planning-environment.adoc
index 5ac6056a2d3d..01f58b9ea198 100644
--- a/rosa_planning/rosa-planning-environment.adoc
+++ b/rosa_planning/rosa-planning-environment.adoc
@@ -8,7 +8,14 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
 toc::[]
 
 [role="_abstract"]
-This document describes how to plan your {product-title} environment based on the tested cluster maximums.
+This document describes how to plan your {product-title} environment based on tested cluster maximums.
 
-include::modules/rosa-planning-environment-cluster-max.adoc[leveloffset=+1]
+ifdef::openshift-rosa[]
+include::modules/rosa-planning-environment-cluster-max-classic.adoc[leveloffset=+1]
+include::modules/rosa-planning-environment-cluster-max-classic-considerations.adoc[leveloffset=+1]
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+include::modules/rosa-planning-environment-cluster-max-hcp.adoc[leveloffset=+1]
+include::modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc[leveloffset=+1]
+endif::openshift-rosa-hcp[]
 include::modules/rosa-planning-environment-application-reqs.adoc[leveloffset=+1]
\ No newline at end of file