@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * rosa_planning/rosa-planning-environment.adoc

:_mod-docs-content-type: REFERENCE
[id="planning-environment-cluster-maximums-considerations-classic_{context}"]
= Considerations for cluster maximum tests

[role="_abstract"]
The tested cluster maximums reflect specific test conditions. Consider the following factors when you apply the values to your own environment.
🤖 [error] AsciiDocDITA.ShortDescription: Assign [role="_abstract"] to a paragraph to use it as a short description in DITA.


* *Idle pod baselines:* Metrics include `pause` pods and default cluster control plane pods on control plane nodes. Results use the `node-density` workload and change with active application load.
* *Resource and I/O limitations:* Observed limits depend on workload intensity. High-density I/O workloads, such as PostgreSQL, or reaching the PVC-per-node ceiling can reduce the maximum stable pod count.
* *Namespace composition:* Tests include default cluster operator namespaces. Each test namespace has idle pods, config maps, and secrets to model routine operational load.
* *Service scalability:* Data reflects tested `ClusterIP` services with a single endpoint using the `cni-density` workload. Other service types can scale differently.
* *Infrastructure and routing:* The test environment used two routers with default settings on dedicated {rosa-classic-title} infrastructure nodes. Limits used the `cluster-density` workload.
🤖 [error] AsciiDoc.ValidConditions: File contains unbalanced if statements. Review the file to ensure it contains matching opening and closing if statements.


[id="planning-environment-cluster-maximums-estimate-classic_{context}"]
== Estimating worker nodes from expected pod counts

Oversubscribing physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Take steps to avoid memory swapping.

Some tested maximums apply in only one dimension at a time. They change when many objects run on the cluster at once.

Older Red{nbsp}Hat test setups and tunings can differ from the OpenShift 4.21 tables in the preceding sections. Treat historical formulas as general guidance unless you match the same test profile.

While planning your environment, estimate how many pods can fit per node by using the following formula:

----
required pods per cluster / pods per node = total number of nodes needed
----

The tested maximum number of pods per node in the preceding tables is 250. The number of pods that fit on a node still depends on each application's memory, CPU, and storage profile, as described in _Planning your environment based on application requirements_.

For example, if you want to scope your cluster for 2200 pods, you need at least nine worker nodes when you plan for 250 pods per node:

----
2200 / 250 = 8.8
----

If you increase the number of worker nodes to 20, then the pod distribution changes to 110 pods per node:

----
2200 / 20 = 110
----

Where:

----
required pods per cluster / total number of nodes = expected pods per node
----
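The two formulas above can be sketched as a small helper. This is a hypothetical illustration, not part of any ROSA tooling; the function names are invented for the example:

```python
import math

def nodes_needed(required_pods: int, pods_per_node: int = 250) -> int:
    # required pods per cluster / pods per node = total number of nodes needed.
    # Round up: a fractional result means one more whole node is required.
    return math.ceil(required_pods / pods_per_node)

def expected_pods_per_node(required_pods: int, node_count: int) -> float:
    # Rearranged form: required pods per cluster / total number of nodes
    # = expected pods per node.
    return required_pods / node_count

print(nodes_needed(2200))            # 2200 / 250 = 8.8, rounded up to 9
print(expected_pods_per_node(2200, 20))  # 110.0
```

This reproduces the worked example: 2,200 pods at 250 pods per node requires 9 worker nodes, and spreading 2,200 pods across 20 nodes yields 110 pods per node.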
63 changes: 63 additions & 0 deletions modules/rosa-planning-environment-cluster-max-classic.adoc
@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * rosa_planning/rosa-planning-environment.adoc

:_mod-docs-content-type: REFERENCE
[id="planning-environment-cluster-maximums_{context}"]
= Planning your environment based on tested cluster maximums

[role="_abstract"]
You can use tested cluster maximums from performance test runs when you size {product-title} clusters and namespaces. These values are not hard limits or Red{nbsp}Hat-supported ceilings for production.

The following tables summarize the latest published tested maximums for your architecture.

[NOTE]
====
These numbers come from internal Red{nbsp}Hat tests on the latest {product-title} clusters using default cluster tunings. Reaching or exceeding a number does not mean that the cluster fails or degrades immediately.

Each value reflects limits for individual OpenShift resources measured with separate workloads that target a few resource types at a time. The workloads do not reproduce every production load, but they target patterns that stay close to common customer use cases. Tests used kube-burner, a Cloud Native Computing Foundation (CNCF) workload orchestrator. For more information, see link:https://www.cncf.io/projects/kube-burner/[the kube-burner project page].
====

[id="planning-environment-tested-maximums-classic_{context}"]
== Tested cluster maximums for {product-title}

[options="header",cols="2,1"]
|===
|Maximum type |{product-title} tested maximum

|Number of compute (worker) nodes
|249

|Number of pods
|60,000

|Number of pods per node
|250

|Number of deployments
|11,200

|Number of namespaces
|2,250

|Number of routes
|4,500

|Number of secrets
|29,000

|Number of config maps
|34,000

|Number of services
|22,000

|Number of pods per namespace
|2,000

|Number of services per namespace
|1,000

|Number of deployments per namespace
|2,000
|===
@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * rosa_planning/rosa-planning-environment.adoc

:_mod-docs-content-type: REFERENCE
[id="planning-environment-cluster-maximums-considerations-hcp_{context}"]
= Considerations for cluster maximum tests

[role="_abstract"]
The tested cluster maximums reflect specific test conditions. Consider the following factors when you apply the values to your own environment.
🤖 [error] AsciiDocDITA.ShortDescription: Assign [role="_abstract"] to a paragraph to use it as a short description in DITA.


* *Idle pod baselines:* Metrics include `pause` pods and default cluster control plane pods on control plane nodes. Results use the `node-density` workload and change with active application load.
* *Resource and I/O limitations:* Observed limits depend on workload intensity. High-density I/O workloads, such as PostgreSQL, or reaching the PVC-per-node ceiling can reduce the maximum stable pod count.
* *Namespace composition:* Tests include default cluster operator namespaces. Each test namespace has idle pods, config maps, and secrets to model routine operational load.
* *Service scalability:* Data reflects tested `ClusterIP` services with a single endpoint using the `cni-density` workload. Other service types can scale differently.
* *Infrastructure and routing:* The test environment used two routers with default settings on dedicated {rosa-classic-title} infrastructure nodes. Limits used the `cluster-density` workload.

[id="planning-environment-cluster-maximums-estimate-hcp_{context}"]
== Estimating worker nodes from expected pod counts

Oversubscribing physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Take steps to avoid memory swapping.

Some tested maximums apply in only one dimension at a time. They change when many objects run on the cluster at once.

Older Red{nbsp}Hat test setups and tunings can differ from the OpenShift 4.21 tables in the preceding sections. Treat historical formulas as general guidance unless you match the same test profile.

While planning your environment, estimate how many pods can fit per node by using the following formula:

----
required pods per cluster / pods per node = total number of nodes needed
----

The tested maximum number of pods per node in the preceding tables is 250. The number of pods that fit on a node still depends on each application's memory, CPU, and storage profile, as described in _Planning your environment based on application requirements_.

For example, if you want to scope your cluster for 2200 pods, you need at least nine worker nodes when you plan for 250 pods per node:

----
2200 / 250 = 8.8
----

If you increase the number of worker nodes to 20, then the pod distribution changes to 110 pods per node:

----
2200 / 20 = 110
----

Where:

----
required pods per cluster / total number of nodes = expected pods per node
----
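When sizing a cluster, the node estimate can also be checked against the tested maximums from the tables above. This is a hypothetical sketch with invented names; the tested maximums are guidance from test runs, not limits that the platform enforces:

```python
import math

# Tested maximums from the preceding table (guidance values, not hard limits).
TESTED_MAX = {"worker_nodes": 249, "pods": 60000, "pods_per_node": 250}

def workers_for(required_pods: int, pods_per_node: int = 250) -> int:
    """Return a worker node count for a pod target, flagging plans that
    exceed a tested maximum."""
    if required_pods > TESTED_MAX["pods"]:
        raise ValueError("pod target exceeds the tested cluster-wide maximum")
    # Never plan for more pods per node than the tested per-node maximum.
    per_node = min(pods_per_node, TESTED_MAX["pods_per_node"])
    nodes = math.ceil(required_pods / per_node)
    if nodes > TESTED_MAX["worker_nodes"]:
        raise ValueError("node count exceeds the tested worker node maximum")
    return nodes

print(workers_for(2200))  # 9
```

At the tested cluster-wide maximum of 60,000 pods and 250 pods per node, this yields 240 worker nodes, which stays under the tested 249-node maximum.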
63 changes: 63 additions & 0 deletions modules/rosa-planning-environment-cluster-max-hcp.adoc
@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * rosa_planning/rosa-planning-environment.adoc

:_mod-docs-content-type: REFERENCE
[id="planning-environment-cluster-maximums_{context}"]
= Planning your environment based on tested cluster maximums

[role="_abstract"]
You can use tested cluster maximums from performance test runs when you size {product-title} clusters and namespaces. These values are not hard limits or Red{nbsp}Hat-supported ceilings for production.

The following tables summarize the latest published tested maximums for your architecture.

[NOTE]
====
These numbers come from internal Red{nbsp}Hat tests on the latest {product-title} clusters using default cluster tunings. Reaching or exceeding a number does not mean that the cluster fails or degrades immediately.

Each value reflects limits for individual OpenShift resources measured with separate workloads that target a few resource types at a time. The workloads do not reproduce every production load, but they target patterns that stay close to common customer use cases. Tests used kube-burner, a Cloud Native Computing Foundation (CNCF) workload orchestrator. For more information, see link:https://www.cncf.io/projects/kube-burner/[the kube-burner project page].
====

[id="planning-environment-tested-maximums-hcp_{context}"]
== Tested cluster maximums for {product-title}

[options="header",cols="2,1"]
|===
|Maximum type |{product-title} tested maximum

|Number of compute (worker) nodes
|249

|Number of pods
|60,000

|Number of pods per node
|250

|Number of deployments
|11,200

|Number of namespaces
|2,250

|Number of routes
|4,500

|Number of secrets
|29,000

|Number of config maps
|34,000

|Number of services
|22,000

|Number of pods per namespace
|2,000

|Number of services per namespace
|1,000

|Number of deployments per namespace
|2,000
|===
40 changes: 0 additions & 40 deletions modules/rosa-planning-environment-cluster-max.adoc

This file was deleted.

11 changes: 9 additions & 2 deletions rosa_planning/rosa-planning-environment.adoc
@@ -8,7 +8,14 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]

[role="_abstract"]
-This document describes how to plan your {product-title} environment based on the tested cluster maximums.
+This document describes how to plan your {product-title} environment based on tested cluster maximums.

-include::modules/rosa-planning-environment-cluster-max.adoc[leveloffset=+1]
+ifdef::openshift-rosa[]
+include::modules/rosa-planning-environment-cluster-max-classic.adoc[leveloffset=+1]
+include::modules/rosa-planning-environment-cluster-max-classic-considerations.adoc[leveloffset=+1]
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+include::modules/rosa-planning-environment-cluster-max-hcp.adoc[leveloffset=+1]
+include::modules/rosa-planning-environment-cluster-max-hcp-considerations.adoc[leveloffset=+1]
+endif::openshift-rosa-hcp[]
include::modules/rosa-planning-environment-application-reqs.adoc[leveloffset=+1]