[WIP] feat: support changing cachingMode through VolumeAttributeClass #3584
andyzhangx wants to merge 2 commits into kubernetes-sigs:master
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
force-pushed from 53abd35 to 5af5d6f (Compare)
force-pushed from 5f0ea93 to 9c1c857 (Compare)
In ControllerModifyVolume:
- Validate the cachingMode parameter
- Enforce caching limits (disks >= 4 TiB support only None)
- Enforce the PremiumV2_LRS constraint (only None)
- Pass cachingMode to ModifyDisk via ManagedDiskOptions

In ModifyDisk:
- Accept CachingMode in ManagedDiskOptions
- Log the cachingMode update (applied on the next ControllerPublishVolume attach)

Note: cachingMode is a VM-level data disk attachment property, not a disk resource property. The new value is persisted in PV volumeAttributes via VolumeAttributeClass and takes effect on the next ControllerPublishVolume.

Fixes: kubernetes-sigs#3471
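The validation rules listed in this commit message can be sketched as a small standalone Go check. This is only an illustration of the constraints, not the driver's actual code: the helper name `validateCachingMode`, its signature, and the string constants are all hypothetical.

```go
package main

import "fmt"

// 4 TiB expressed in GiB; the commit message says disks at or above this
// size support only cachingMode None.
const fourTiBInGiB = 4096

// validateCachingMode mirrors the rules described above: only
// None/ReadOnly/ReadWrite are accepted, disks >= 4 TiB support only None,
// and PremiumV2_LRS supports only None.
func validateCachingMode(mode string, sizeGiB int, sku string) error {
	switch mode {
	case "None", "ReadOnly", "ReadWrite":
		// accepted values
	default:
		return fmt.Errorf("invalid cachingMode %q", mode)
	}
	if sizeGiB >= fourTiBInGiB && mode != "None" {
		return fmt.Errorf("disks >= 4 TiB support only cachingMode None, got %q", mode)
	}
	if sku == "PremiumV2_LRS" && mode != "None" {
		return fmt.Errorf("PremiumV2_LRS supports only cachingMode None, got %q", mode)
	}
	return nil
}

func main() {
	fmt.Println(validateCachingMode("ReadOnly", 128, "Premium_LRS"))   // <nil>
	fmt.Println(validateCachingMode("ReadWrite", 4096, "Premium_LRS")) // non-nil error
}
```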
force-pushed from 9c1c857 to 0048abf (Compare)
Add modify-volume-params.yaml with cachingmode: ReadOnly and pass --csi.testvolumemutableparameters to csi-sanity so ModifyVolume tests exercise the real cachingMode code path instead of empty parameters.
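Based on this description, the params file plausibly contains just a single key; a sketch of what it might look like, not the file verbatim:

```yaml
# test/sanity/modify-volume-params.yaml (sketch inferred from the commit message)
cachingmode: ReadOnly
```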
@andyzhangx: The following test failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Pull request overview
Adds initial wiring to accept cachingMode as a mutable parameter (intended for Kubernetes VolumeAttributesClass) and exercises it via CSI sanity tests.
Changes:
- Pass `--csi.testvolumemutableparameters` into the sanity test runner and add a sample params YAML.
- Validate `cachingMode` in `ControllerModifyVolume` (including PremiumV2_LRS and >= 4 TiB constraints) and plumb it into `ManagedDiskOptions`.
- Add unit tests for valid/invalid `cachingMode` values.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test/sanity/run-test.sh | Enables csi-sanity mutable-parameter test input file. |
| test/sanity/modify-volume-params.yaml | Provides a mutable-params YAML containing cachingmode: ReadOnly. |
| pkg/azuredisk/controllerserver.go | Validates cachingMode in ControllerModifyVolume and passes it into disk modification options. |
| pkg/azuredisk/controllerserver_test.go | Adds ControllerModifyVolume unit test cases for caching mode parsing. |
| pkg/azuredisk/azure_managedDiskController.go | Extends ManagedDiskOptions with CachingMode and updates ModifyDisk logging/flow. |
```diff
@@ -588,6 +612,7 @@ func (d *Driver) ControllerModifyVolume(ctx context.Context, req *csi.Controller
 		StorageAccountType: skuName,
 		SourceResourceID:   diskURI,
 		SourceType:         consts.SourceVolume,
+		CachingMode:        cachingMode,
 	}
```
ControllerModifyVolume validates cachingMode and passes it through ManagedDiskOptions, but ManagedDiskController.ModifyDisk does not apply caching mode anywhere (it only logs). Since caching mode is a VM data-disk attachment property (not a disk resource property), this change currently has no effect for attached volumes and does not reliably affect future attaches either. Implement the intended behavior by updating the VM attachment when currentDisk.ManagedBy is set (e.g., re-attach the disk on the VM with the existing LUN and new caching mode), and keep the no-op behavior for unattached disks.
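A minimal, self-contained sketch of the re-attach idea this comment describes: re-submit the existing attachment with the same LUN but the new caching mode, and treat unattached disks as a no-op. Every type and name here (`dataDisk`, `updateAttachedDiskCaching`) is hypothetical; the real driver would go through its Azure VM client rather than an in-memory slice.

```go
package main

import "fmt"

// dataDisk is a stand-in for a VM's data-disk attachment entry.
type dataDisk struct {
	Lun         int
	DiskURI     string
	CachingMode string
}

// updateAttachedDiskCaching changes the caching mode of an attached disk
// in place, preserving its LUN. If the disk is not attached (the analogue
// of an empty disk.ManagedBy), there is nothing to update on any VM.
func updateAttachedDiskCaching(disks []dataDisk, diskURI, newMode string) ([]dataDisk, error) {
	for i, d := range disks {
		if d.DiskURI == diskURI {
			disks[i].CachingMode = newMode // keep the LUN, change only caching
			return disks, nil
		}
	}
	return disks, fmt.Errorf("disk %s is not attached; nothing to update", diskURI)
}

func main() {
	vm := []dataDisk{{Lun: 0, DiskURI: "disk-uri-1", CachingMode: "ReadOnly"}}
	vm, _ = updateAttachedDiskCaching(vm, "disk-uri-1", "None")
	fmt.Println(vm[0].CachingMode) // None
}
```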
```go
	// PerformancePlus - Set this flag to true to get a boost on the performance target of the disk deployed
	PerformancePlus *bool
	// CachingMode - The caching mode of the disk when attached to a VM.
	// This is a VM-level property stored in the PV volumeAttributes, applied on next ControllerPublishVolume.
```
The new CachingMode field comment says the value is stored in PV volumeAttributes and applied on next ControllerPublishVolume, but this code path does not persist caching mode anywhere (and ModifyDisk doesn’t apply it either). Either wire this field into an actual VM-attachment update path (for attached disks) and/or clarify in the comment what component is responsible for persisting and re-supplying the value on future attaches (e.g., Kubernetes VAC).
Suggested change:
```diff
-	// This is a VM-level property stored in the PV volumeAttributes, applied on next ControllerPublishVolume.
+	// This is a VM attachment-level setting supplied to publish/attach operations; it is not persisted by
+	// ManagedDiskOptions or by managed disk modification. If the value must survive detach/reattach cycles,
+	// a higher-level component must persist and re-supply it (for example via Kubernetes PV volumeAttributes/VAC).
```
```go
	if options.CachingMode != "" {
		klog.V(2).Infof("azureDisk - cachingMode(%s) updated for disk(%s), will take effect on next attach", options.CachingMode, diskName)
```
ModifyDisk logs that cachingMode was updated and will take effect on next attach, but there is no actual update performed (and cachingMode isn’t persisted on the disk resource). This log is misleading as-is. Either perform the VM attachment update here/elsewhere (for attached disks), or change the log to reflect that only validation occurred and the new value must be provided again via volume_context on a future attach.
Suggested change:
```diff
-		klog.V(2).Infof("azureDisk - cachingMode(%s) updated for disk(%s), will take effect on next attach", options.CachingMode, diskName)
+		klog.V(2).Infof("azureDisk - cachingMode(%s) for disk(%s) was not persisted by ModifyDisk; provide it again via volume_context on a future attach", options.CachingMode, diskName)
```
```go
		{
			desc: "success with valid cachingMode ReadOnly",
			req: &csi.ControllerModifyVolumeRequest{
				VolumeId: testVolumeID,
				MutableParameters: map[string]string{
					consts.CachingModeField: "ReadOnly",
				},
			},
			oldSKU:                 to.Ptr(armcompute.DiskStorageAccountTypesPremiumLRS),
			expectedResp:           &csi.ControllerModifyVolumeResponse{},
			expectMigrationStarted: false,
		},
		{
			desc: "success with valid cachingMode None",
			req: &csi.ControllerModifyVolumeRequest{
				VolumeId: testVolumeID,
				MutableParameters: map[string]string{
					consts.CachingModeField: "None",
				},
			},
			oldSKU:                 to.Ptr(armcompute.DiskStorageAccountTypesPremiumLRS),
			expectedResp:           &csi.ControllerModifyVolumeResponse{},
			expectMigrationStarted: false,
		},
		{
			desc: "success with valid cachingMode ReadWrite",
			req: &csi.ControllerModifyVolumeRequest{
				VolumeId: testVolumeID,
				MutableParameters: map[string]string{
					consts.CachingModeField: "ReadWrite",
				},
			},
			oldSKU:                 to.Ptr(armcompute.DiskStorageAccountTypesPremiumLRS),
			expectedResp:           &csi.ControllerModifyVolumeResponse{},
			expectMigrationStarted: false,
		},
		{
			desc: "fail with invalid cachingMode",
			req: &csi.ControllerModifyVolumeRequest{
				VolumeId: testVolumeID,
				MutableParameters: map[string]string{
					consts.CachingModeField: "WriteOnly",
				},
			},
			expectedResp:           nil,
			expectedErrCode:        codes.InvalidArgument,
			expectMigrationStarted: false,
		},
```
The added cachingMode test cases only cover “valid/invalid value” parsing. The new behavior in ControllerModifyVolume also enforces size (>= 4 TiB) and PremiumV2_LRS constraints, and (per PR intent) should update caching mode in-place for attached disks; none of that is covered here. Add unit tests that (1) assert InvalidArgument for non-None caching on >=4TiB and PremiumV2_LRS, and (2) verify the VM attachment update path is invoked when the disk is attached (e.g., ManagedBy set) and is a no-op when unattached.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Adds support for changing `cachingMode` via `VolumeAttributeClass` by validating the parameter in `ControllerModifyVolume` and updating the disk's caching mode on the VM in-place.

How it works

`cachingMode` is a VM-level data disk attachment property. When it is modified via VolumeAttributeClass:
- `ControllerModifyVolume` validates the new cachingMode value (disks >= 4 TiB support only `None`; PremiumV2_LRS only supports `None`)
- If valid, it calls `UpdateDiskCachingMode`, which:
  - checks `disk.ManagedBy` to determine whether the disk is attached
  - issues an `AttachDisk` call with the existing LUN and new cachingMode to update the VM
- The new value is persisted in the PV `volumeAttributes`

Changes
- `azure_controller_common.go`: Added `UpdateDiskCachingMode` method on `controllerCommon`
- `controllerserver.go`: Added cachingMode validation and update logic in `ControllerModifyVolume`
- `controllerserver_test.go`: Added 4 unit test cases (3 valid modes + 1 invalid)

Example VolumeAttributeClass
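A plausible sketch of such a manifest (the class name and the chosen `cachingmode` value are illustrative; note that the Kubernetes API kind is spelled `VolumeAttributesClass`):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: azuredisk-caching-none   # illustrative name
driverName: disk.csi.azure.com
parameters:
  cachingmode: "None"
```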
Which issue(s) this PR fixes:
Fixes #3471
Special notes for your reviewer:
Valid cachingMode values: `None`, `ReadOnly`, `ReadWrite`