Set mon_target_pg_per_osd to 400 only during cluster creation #3170
Conversation
This can be used to specify any Ceph config options on the cluster.

Signed-off-by: Malay Kumar Parida <[email protected]>

On existing clusters the value of mon_target_pg_per_osd is by default set to 100. If we set it to 400, there will be a massive increase in the number of PGs, which will cause rebalancing and data movement. On existing clusters, customers will have to set mon_target_pg_per_osd to 400 manually on the StorageCluster CR if they want to increase the number of PGs.

Signed-off-by: Malay Kumar Parida <[email protected]>
Force-pushed from 968287e to d4fd23d
/retest

/cc @travisn PTAL
@malayparida2000: GitHub didn't allow me to request PR reviews from the following users: PTAL. Note that only red-hat-storage members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this: `/cc @travisn PTAL`

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Just one comment to consider now, or as a follow-up.
```diff
@@ -1559,3 +1559,34 @@ func generateCephReplicatedSpec(initData *ocsv1.StorageCluster, poolType string)
 	return crs
 }
+
+// setMonTargetPgPerOsd sets the mon_target_pg_per_osd value to 400 if not already set
+func setMonTargetPgPerOsd(cephConfig *map[string]map[string]string) {
```
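Based on the signature shown in the diff, a minimal sketch of how such a defaulting helper could work. The "global" section name and the exact defaulting logic here are assumptions for illustration, not the merged implementation:

```go
package main

import "fmt"

// setMonTargetPgPerOsd applies a default of 400 for mon_target_pg_per_osd,
// but only when the user has not already set a value, so existing clusters
// keep their effective Ceph default of 100 unless overridden explicitly.
// Sketch only: the real ocs-operator code may structure this differently.
func setMonTargetPgPerOsd(cephConfig *map[string]map[string]string) {
	if *cephConfig == nil {
		*cephConfig = map[string]map[string]string{}
	}
	if (*cephConfig)["global"] == nil {
		(*cephConfig)["global"] = map[string]string{}
	}
	// Do not overwrite a user-provided value.
	if _, ok := (*cephConfig)["global"]["mon_target_pg_per_osd"]; !ok {
		(*cephConfig)["global"]["mon_target_pg_per_osd"] = "400"
	}
}

func main() {
	var cfg map[string]map[string]string
	setMonTargetPgPerOsd(&cfg)
	fmt.Println(cfg["global"]["mon_target_pg_per_osd"]) // prints 400
}
```

Passing the map by pointer lets the helper initialize a nil map in place, which matters because the CephConfig field on the CR may be unset entirely.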
I would like us to consider moving other settings from the configmap to this CephConfig section as well, so I hope we can generalize this implementation of preserving default Ceph settings. But this can be a 4.20 work item, not urgent for now.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: malayparida2000, travisn

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/hold for the last question
Ref: https://issues.redhat.com/browse/DFBUGS-2391
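For an existing cluster, the manual override described above would look roughly like this on the StorageCluster CR. This is a hypothetical sketch: the `managedResources.cephCluster.cephConfig` field path is an assumption based on this PR's `map[string]map[string]string` CephConfig shape, and should be verified against the shipped CRD:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  managedResources:
    cephCluster:
      # Assumed field path; section/option names mirror Ceph config sections.
      cephConfig:
        global:
          mon_target_pg_per_osd: "400"
```

Because the operator only defaults this value during cluster creation, existing clusters keep mon_target_pg_per_osd at 100 unless an override like this is applied, avoiding unexpected PG splitting and data movement on upgrade.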