OCPBUGS-77949: OCPBUGS-77948: TNF node replacement test updates #30846
jaypoulz wants to merge 4 commits into openshift:main
Conversation
Pipeline controller notification: for optional jobs, comment. This repository is configured in automatic mode.
@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is invalid:
Comment: The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough
Updated the two-node test suite: replaced a hardcoded Redfish BMC address with a parameterized authority form, extended and parallelized node-replacement and recovery timeouts/flows, added job/pod log capture and force-delete helpers, improved pacemaker status/debugging, and truncated large SSH outputs for logging.
Estimated code review effort: 🎯 3 (Moderate), ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)
Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 golangci-lint (2.5.0): Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: jaypoulz. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
/jira refresh
@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.
No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid. 3 validation(s) were run on this bug.
No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/two_node/utils/core/ssh.go`:
- Around line 136-141: The current log truncation slices the UTF-8 string bytes
with out[:maxLogOutputBytes], which can cut a multi-byte rune and produce
invalid UTF-8 in logs; update the truncation logic around stdout.String()/out
and the e2e.Logf call to perform rune-safe truncation (for example, convert to
runes or iterate runes until adding the next rune would exceed
maxLogOutputBytes) and then log the safely truncated string along with the total
byte length using the existing maxLogOutputBytes and e2e.Logf call sites.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3ecbedc9-3c0f-4263-b6e6-a6a05314b2c2
📒 Files selected for processing (6)
- test/extended/testdata/two_node/baremetalhost-template.yaml
- test/extended/two_node/tnf_node_replacement.go
- test/extended/two_node/utils/common.go
- test/extended/two_node/utils/core/ssh.go
- test/extended/two_node/utils/services/etcd.go
- test/extended/two_node/utils/services/pacemaker.go
@jaypoulz: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
eggfoobar left a comment:
Looking great, just had some small suggestions.
```diff
   automatedCleaningMode: metadata
   bmc:
-    address: redfish+https://{REDFISH_IP}:8000/redfish/v1/Systems/{UUID}
+    address: redfish+https://{REDFISH_HOST_PORT}/redfish/v1/Systems/{UUID}
```
Maybe we should just rename this to REDFISH_HOST; the port can cause confusion.
```go
// to empty so the test can proceed without blocking on controller cleanup.
func deleteOcResourceWithRetry(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	return core.RetryWithOptions(func() error {
		done := make(chan error, 1)
```
Instead of doing this timer, can we just use the old core.RetryWithOptions and, if that fails, fall back to force delete?
```go
"{CREDENTIALS_NAME}":  testConfig.TargetNode.BMCSecretName,
"{BOOT_MAC_ADDRESS}":  newMACAddress,
"{BMH_NAME}":          testConfig.TargetNode.BMHName,
"{REDFISH_HOST_PORT}": redfishHostPort,
```
Same here on updating to REDFISH_HOST.
I decided to go with REDFISH_AUTHORITY since that's the technical term for the host:port part of a URL.
…logs
- Use PcsStatusFull / PcsStatusFullViaDebug for human-readable pacemaker status instead of XML
- Truncate long SSH stdout/stderr in logs (e.g. avoid pcs status xml dumps)
- On WaitForNodesOnline timeout, log full pacemaker status for debugging
Made-with: Cursor
…nHostPort for IPv6
- Use net.JoinHostPort(RedfishIP, port) so IPv6 addresses are bracketed (RFC 3986)
- BMH template placeholder {REDFISH_HOST_PORT} replaces {REDFISH_IP}; port 8000 in code
Made-with: Cursor
…e replacement
- Capture job pod logs and gate on survivor update-setup run
- Force-delete stuck BMH/Machine after 1m; shorten recovery and BMH timeouts; rename timeouts to match duration (3m/7m/10m)
- Wait for survivor and target update-setup jobs in parallel
Made-with: Cursor
CSR approval was never observed in testing; remove the wait to simplify recovery. Made-with: Cursor
547c9eb to 0816c76
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview |
@jaypoulz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c38d4d90-1cb2-11f1-8125-bec258656377-0
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/two_node/tnf_node_replacement.go`:
- Around line 1273-1280: The wait uses a hardcoded minPodCreationTime
(time.Now().Add(-2 * time.Minute)) which can include pre-Ready stale pods;
change waitForNodeRecovery to return the node Ready timestamp (e.g., readyTime)
and use that exact timestamp here instead of time.Now().Add(...), passing the
returned readyTime as minPodCreationTime into
services.WaitForSurvivorUpdateSetupJobCompletion (and the symmetric
WaitForTargetUpdateSetupJobCompletion) so the waits are gated on the node Ready
time rather than an approximate clock offset.
- Around line 1337-1354: The current attempt spawns oc.AsAdmin().Run("delete")
in a goroutine and uses time.After(deleteAttemptTimeout), which leaves the
delete running if the timer fires; replace that pattern with a per-attempt
cancelable context so each delete is actually bounded: inside the
RetryWithOptions callback create ctx, cancel :=
context.WithTimeout(context.Background(), deleteAttemptTimeout) and defer
cancel(), then invoke the delete command with that context (e.g.,
oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n",
namespace).WithContext(ctx).Output() or the project’s equivalent Run/Output
method that accepts a context), remove the extra goroutine and select, capture
and log the returned error from the cancelable delete call, and then use
ocResourceExists(oc, resourceType, resourceName, namespace) to decide
success/failure as before.
- Around line 1466-1468: The call to core.RetryOptions in
waitForEtcdResourceToStop is ignoring the function's timeout parameter and
hardcodes threeMinuteTimeout, preventing callers from controlling the deadline;
change the RetryOptions Timeout to use the function's timeout argument (the
timeout parameter of waitForEtcdResourceToStop) instead of threeMinuteTimeout,
and ensure any associated log message that references the timeout reflects the
passed-in timeout value so logs match behavior.
- Around line 1378-1393: The function forceDeleteOcResourceByRemovingFinalizers
currently returns nil even when the confirm loop times out, causing callers to
assume deletion succeeded; change the final branch so that after the timeout
(where it currently logs the WARNING) the function returns a non-nil error
(e.g., fmt.Errorf with context including resourceType, resourceName and
forceDeleteConfirmTimeout) instead of nil so callers see the failure and can
handle the retry/error path; update the log call in
forceDeleteOcResourceByRemovingFinalizers to include the same error context when
returning.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fce9d0e8-a5c5-480a-b91a-17f85f92e721
📒 Files selected for processing (6)
- test/extended/testdata/two_node/baremetalhost-template.yaml
- test/extended/two_node/tnf_node_replacement.go
- test/extended/two_node/utils/common.go
- test/extended/two_node/utils/core/ssh.go
- test/extended/two_node/utils/services/etcd.go
- test/extended/two_node/utils/services/pacemaker.go
🚧 Files skipped from review as they are similar to previous changes (1)
- test/extended/testdata/two_node/baremetalhost-template.yaml
```go
minPodCreationTime := time.Now().Add(-2 * time.Minute)
e2e.Logf("Waiting for both CEO update-setup jobs (survivor and target) in parallel")
var wg sync.WaitGroup
var errSurvivor, errTarget error
wg.Add(2)
go func() {
	defer wg.Done()
	errSurvivor = services.WaitForSurvivorUpdateSetupJobCompletion(testConfig.Jobs.UpdateSetupJobSurvivorName, etcdNamespace, minPodCreationTime, tenMinuteTimeout, utils.ThirtySecondPollInterval, oc)
```
Gate this wait on the actual Ready timestamp.
time.Now().Add(-2 * time.Minute) can still admit an update-setup pod that started before the replacement node became Ready, which is exactly the stale run this logic is trying to exclude. Capture the Ready time in waitForNodeRecovery and pass that exact timestamp through here.
```go
err := core.RetryWithOptions(func() error {
	done := make(chan error, 1)
	go func() {
		_, err := oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n", namespace).Output()
		done <- err
	}()
	select {
	case err := <-done:
		if err != nil {
			e2e.Logf("%s returned error: %v", opName, err)
		}
	case <-time.After(deleteAttemptTimeout):
		e2e.Logf("%s did not complete within %v", opName, deleteAttemptTimeout)
	}
	if !ocResourceExists(oc, resourceType, resourceName, namespace) {
		return nil
	}
	return fmt.Errorf("resource still exists")
```
deleteAttemptTimeout is not actually enforced here.
When time.After(deleteAttemptTimeout) fires, the oc delete keeps running in the background. Later retries can overlap the earlier delete, so a hung attempt is never really capped at 20 seconds. Use a cancelable delete path per attempt instead of goroutine + time.After.
```go
func forceDeleteOcResourceByRemovingFinalizers(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	_, err := oc.AsAdmin().Run("patch").Args(resourceType, resourceName, "-n", namespace, "-p", `{"metadata":{"finalizers":[]}}`, "--type=merge").Output()
	if err != nil {
		return fmt.Errorf("force-delete (patch finalizers) failed: %w", err)
	}
	e2e.Logf("Force-delete patch applied for %s %s; confirming resource is gone (timeout: %v, interval: %v)", resourceType, resourceName, forceDeleteConfirmTimeout, forceDeleteConfirmInterval)
	deadline := time.Now().Add(forceDeleteConfirmTimeout)
	for time.Now().Before(deadline) {
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
			return nil
		}
		time.Sleep(forceDeleteConfirmInterval)
	}
	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
	return nil
```
Don’t report success while the resource still exists.
If the confirm loop expires, callers continue as though deletion succeeded, but the later recreate steps still race the old BMH/Machine object. Return an error here; otherwise the failure is deferred into a much harder-to-diagnose conflict downstream.
🛠️ Minimal fix
```diff
 	for time.Now().Before(deadline) {
 		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
 			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
 			return nil
 		}
 		time.Sleep(forceDeleteConfirmInterval)
 	}
-	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
-	return nil
+	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
 }
```
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func forceDeleteOcResourceByRemovingFinalizers(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	_, err := oc.AsAdmin().Run("patch").Args(resourceType, resourceName, "-n", namespace, "-p", `{"metadata":{"finalizers":[]}}`, "--type=merge").Output()
	if err != nil {
		return fmt.Errorf("force-delete (patch finalizers) failed: %w", err)
	}
	e2e.Logf("Force-delete patch applied for %s %s; confirming resource is gone (timeout: %v, interval: %v)", resourceType, resourceName, forceDeleteConfirmTimeout, forceDeleteConfirmInterval)
	deadline := time.Now().Add(forceDeleteConfirmTimeout)
	for time.Now().Before(deadline) {
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
			return nil
		}
		time.Sleep(forceDeleteConfirmInterval)
	}
	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
}
```
```diff
 }, core.RetryOptions{
-	Timeout:      fiveMinuteTimeout,
+	Timeout:      threeMinuteTimeout,
 	PollInterval: utils.FiveSecondPollInterval,
```
Honor the timeout parameter here.
waitForEtcdResourceToStop accepts timeout but hardcodes threeMinuteTimeout, so callers cannot actually control the deadline and the log message above can be misleading.
🛠️ Minimal fix
```diff
 }, core.RetryOptions{
-	Timeout:      threeMinuteTimeout,
+	Timeout:      timeout,
 	PollInterval: utils.FiveSecondPollInterval,
 }, fmt.Sprintf("etcd stop on %s", testConfig.SurvivingNode.Name))
```