
Feature/enhanced health #1676

Open
oyiz-michael wants to merge 3 commits into kubernetes-sigs:master from oyiz-michael:feature/enhanced-health-monitoring

Conversation

@oyiz-michael
Contributor

#1675
Is this a bug fix or adding new feature?
Adding a new feature

What is this PR about? / Why do we need it?
This PR implements enhanced EFS mount health monitoring to address critical production issues where pods crash-loop when EFS mounts degrade.

Problem: Current health probes only check if the CSI driver process is running, not whether EFS mounts actually work for I/O operations. This causes false positives where pods appear healthy but can't access storage.

Solution:

- Real-time monitoring of EFS mount points with actual I/O testing
- Multiple health endpoints (/healthz, /healthz/ready, /healthz/live, /healthz/mounts)
- Prometheus-style metrics for observability
- Enhanced CSI probe integration
- Configurable health check intervals and timeouts

Benefits:

- Prevents pod crash-loops by detecting mount issues before applications fail
- Better observability with detailed mount health metrics
- Kubernetes-native integration with readiness/liveness probes

Addresses issues #336, #1411, #1156.

What testing is done?
✅ Comprehensive unit tests for all health monitoring components
✅ Full driver test suite passes (31/31 tests)
✅ HTTP endpoint testing with proper status codes and JSON responses
✅ I/O testing validates actual file operations on mount points
✅ Code formatting (gofmt) and quality (go vet) verified
✅ Binary compilation successful
✅ Zero breaking changes - backward compatible

- Implement real-time EFS mount health monitoring with I/O testing
- Add multiple health endpoints (/healthz, /healthz/ready, /healthz/live, /healthz/mounts)
- Integrate Prometheus-style metrics for monitoring systems
- Prevent pod crash-loops through enhanced readiness/liveness checks
- Add comprehensive test coverage and documentation

This addresses the Kubernetes ecosystem need for modernized health probe
logic to support proper CSI health monitoring, moving beyond basic HTTP
responses to actual mount health assessment.

Resolves: Enhanced health monitoring for production EFS CSI deployments
- Apply gofmt formatting fixes
- Ensure consistent code style across all files
- Ready for code review
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 1, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: oyiz-michael
Once this PR has been reviewed and has the lgtm label, please assign dankova22 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 1, 2025
@k8s-ci-robot
Contributor

Hi @oyiz-michael. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Aug 1, 2025
@oyiz-michael
Contributor Author

oyiz-michael commented Sep 2, 2025

cc @justinsb @mskanth972

@oyiz-michael
Contributor Author

/ok-to-test

@k8s-ci-robot
Contributor

@oyiz-michael: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

Details

In response to this:

/ok-to-test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 3, 2025
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 15, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Feb 13, 2026
@k8s-ci-robot
Contributor

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@abeowlu

abeowlu commented Mar 5, 2026

@DavidXU12345 this PR is important for high availability and EFS server-side resiliency.

  • Wanted to clarify: is there anything specific blocking picking up this PR?
  • I left a comment on the driver's fail-fast implementation here and wonder if that is the blocker. If so, I can tweak the implementation to keep the driver running when mounts are dead and instead emit events to the kubelet for irrecoverable volumes.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 4, 2026