OCPBUGS-77949: OCPBUGS-77948: TNF node replacement test updates #30846

Open
jaypoulz wants to merge 4 commits into openshift:main from jaypoulz:tnf-node-replacement-fixes

Conversation

@jaypoulz
Contributor

jaypoulz commented Mar 6, 2026

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Summary by CodeRabbit

  • Tests
    • Improved node-replacement reliability with longer, per-operation timeouts and parallelized waits
    • Enhanced cleanup and force-delete resilience for test resources
    • Added automatic pod log capture after job completion and safer SSH output truncation to limit log size
    • Expanded pacemaker/status debugging with fuller status dumps on failures
    • Updated test templates to use a Redfish authority-style BMC address format

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: automatic mode

openshift-ci-robot added labels on Mar 6, 2026: jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type), jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting)
@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Walkthrough

Updated the two-node test suite: replaced a hardcoded Redfish BMC address with a parameterized authority form, extended and parallelized node-replacement and recovery timeouts/flows, added job/pod log capture and force-delete helpers, improved pacemaker status/debugging, and truncated large SSH outputs for logging.

Changes

Cohort / File(s): Summary

• Test Configuration — test/extended/testdata/two_node/baremetalhost-template.yaml
  BMC address value changed to use a Redfish authority string ({REDFISH_AUTHORITY}/redfish/v1/Systems/{UUID}) instead of a host:port IP literal.
• Node replacement test — test/extended/two_node/tnf_node_replacement.go
  Added per-operation timeouts and retry constants, parallelized update-setup waits for survivor/target, extended VM/etcd/API wait windows, swapped status calls for verbose variants, and switched BMH creation to use the Redfish authority. Added helper wiring for bounded/force-delete flows.
• SSH logging utilities — test/extended/two_node/utils/core/ssh.go
  Introduced maxLogOutputBytes and truncateForLog (UTF-8 aware) and applied truncation to stdout/stderr logging with indicators when output is shortened.
• Common utilities — test/extended/two_node/utils/common.go
  Changed TryPacemakerCleanup to call the more verbose PcsStatusFullViaDebug variant for pacemaker status retrieval.
• Job & pod handling (etcd) — test/extended/two_node/utils/services/etcd.go
  Added DumpJobPodLogs(jobName, namespace, oc); refactored WaitForJobCompletion to capture logs after completion; added WaitForSurvivorUpdateSetupJobCompletion, which gates completion on pod creation time and dumps logs on finish.
• Pacemaker services — test/extended/two_node/utils/services/pacemaker.go
  Added PcsStatusFullViaDebug(ctx, oc, nodeName) to retrieve full pacemaker status via a debug container; WaitForNodesOnline now collects and logs full status on poll timeout/failure and tolerates transient retrieval/parse errors during polling.

Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Test as Test Orchestrator
participant OC as OpenShift API (oc)
participant JobPod as Job / Pod
participant BMC as Redfish BMC
participant Pacemaker as Pacemaker (cluster)

Test->>OC: create BareMetalHost (uses REDFISH_AUTHORITY)
Test->>OC: create replacement Job
OC->>JobPod: schedule & run job pod
JobPod->>BMC: invoke Redfish actions (provisioning/inspection)
JobPod->>OC: write pod logs / status
Test->>OC: poll Job and Pod status (WaitForSurvivorUpdateSetupJobCompletion)
alt job completes or timeout
  OC->>Test: return completion + logs (DumpJobPodLogs)
  Test->>Pacemaker: poll cluster state (PcsStatusFullViaDebug)
  Pacemaker->>Test: return full status
end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

• Test Structure And Quality — ⚠️ Warning. Test code contains unaddressed timeout enforcement and error handling issues flagged in review comments. Resolution: replace background goroutine timeouts with context cancellation, use the timeout parameter in waitForEtcdResourceToStop, return errors on confirmation expiry, and use the actual Ready timestamp.

✅ Passed checks (4 passed)

• Description Check — ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
• Title Check — ✅ Passed. The title references two Jira issues (OCPBUGS-77949 and OCPBUGS-77948) and indicates this is about TNF node replacement test updates, which aligns with the core changes across multiple test files focused on node replacement testing improvements.
• Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
• Stable And Deterministic Test Names — ✅ Passed. The pull request contains only static test names in Ginkgo test definitions with no dynamic identifiers such as pod names, node names, IPs, or timestamps.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions



Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci bot commented Mar 6, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jaypoulz
Once this PR has been reviewed and has the lgtm label, please assign xueqzhan for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jaypoulz
Contributor Author

jaypoulz commented Mar 6, 2026

/jira refresh

openshift-ci-robot added the jira/valid-bug label (Indicates that a referenced Jira bug is valid for the branch this PR is targeting) and removed the jira/invalid-bug label (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting) on Mar 6, 2026
@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

Details

In response to this:

/jira refresh


@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Summary by CodeRabbit

Release Notes

  • Tests
  • Enhanced node replacement test reliability with improved concurrency and timeout handling for job waits
  • Strengthened resource cleanup logic with finalizer-based deletion support
  • Expanded debugging capabilities with verbose pacemaker status logging and extended error reporting
  • Improved SSH command output logging with automatic truncation to prevent excessive log sizes
  • Added automatic pod log capture for job completion monitoring



coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/utils/core/ssh.go`:
- Around line 136-141: The current log truncation slices the UTF-8 string bytes
with out[:maxLogOutputBytes], which can cut a multi-byte rune and produce
invalid UTF-8 in logs; update the truncation logic around stdout.String()/out
and the e2e.Logf call to perform rune-safe truncation (for example, convert to
runes or iterate runes until adding the next rune would exceed
maxLogOutputBytes) and then log the safely truncated string along with the total
byte length using the existing maxLogOutputBytes and e2e.Logf call sites.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3ecbedc9-3c0f-4263-b6e6-a6a05314b2c2

📥 Commits

Reviewing files that changed from the base of the PR and between 35bab74 and 547c9eb.

📒 Files selected for processing (6)
  • test/extended/testdata/two_node/baremetalhost-template.yaml
  • test/extended/two_node/tnf_node_replacement.go
  • test/extended/two_node/utils/common.go
  • test/extended/two_node/utils/core/ssh.go
  • test/extended/two_node/utils/services/etcd.go
  • test/extended/two_node/utils/services/pacemaker.go

@openshift-ci
Contributor

openshift-ci bot commented Mar 6, 2026

@jaypoulz: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

• ci/prow/verify — commit 547c9eb (link) — required: true — rerun command: /test verify

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Contributor

eggfoobar left a comment


Looking great, just had some small suggestions.

  automatedCleaningMode: metadata
  bmc:
-   address: redfish+https://{REDFISH_IP}:8000/redfish/v1/Systems/{UUID}
+   address: redfish+https://{REDFISH_HOST_PORT}/redfish/v1/Systems/{UUID}
Contributor


Maybe we should just rename this to REDFISH_HOST; the port can cause confusion

// to empty so the test can proceed without blocking on controller cleanup.
func deleteOcResourceWithRetry(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
return core.RetryWithOptions(func() error {
done := make(chan error, 1)
Contributor


Instead of doing this timer, can we just use the old core.RetryWithOptions and, if that fails, fall back to force delete?

"{CREDENTIALS_NAME}": testConfig.TargetNode.BMCSecretName,
"{BOOT_MAC_ADDRESS}": newMACAddress,
"{BMH_NAME}": testConfig.TargetNode.BMHName,
"{REDFISH_HOST_PORT}": redfishHostPort,
Contributor


Same here on updating to REDFISH_HOST

Contributor Author


I decided to go with REDFISH_AUTHORITY since that's the technical term for the host:port part of a URL

…logs

- Use PcsStatusFull / PcsStatusFullViaDebug for human-readable pacemaker
  status instead of XML
- Truncate long SSH stdout/stderr in logs (e.g. avoid pcs status xml dumps)
- On WaitForNodesOnline timeout, log full pacemaker status for debugging

Made-with: Cursor
…nHostPort for IPv6

- Use net.JoinHostPort(RedfishIP, port) so IPv6 addresses are bracketed (RFC 3986)
- BMH template placeholder {REDFISH_HOST_PORT} replaces {REDFISH_IP}; port 8000 in code

Made-with: Cursor
…e replacement

- Capture job pod logs and gate on survivor update-setup run
- Force-delete stuck BMH/Machine after 1m; shorten recovery and BMH timeouts;
  rename timeouts to match duration (3m/7m/10m)
- Wait for survivor and target update-setup jobs in parallel

Made-with: Cursor
CSR approval was never observed in testing; remove the wait to simplify recovery.

Made-with: Cursor
jaypoulz force-pushed the tnf-node-replacement-fixes branch from 547c9eb to 0816c76 on March 10, 2026 at 18:55
@jaypoulz
Contributor Author

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@jaypoulz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c38d4d90-1cb2-11f1-8125-bec258656377-0


coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/tnf_node_replacement.go`:
- Around line 1273-1280: The wait uses a hardcoded minPodCreationTime
(time.Now().Add(-2 * time.Minute)) which can include pre-Ready stale pods;
change waitForNodeRecovery to return the node Ready timestamp (e.g., readyTime)
and use that exact timestamp here instead of time.Now().Add(...), passing the
returned readyTime as minPodCreationTime into
services.WaitForSurvivorUpdateSetupJobCompletion (and the symmetric
WaitForTargetUpdateSetupJobCompletion) so the waits are gated on the node Ready
time rather than an approximate clock offset.
- Around line 1337-1354: The current attempt spawns oc.AsAdmin().Run("delete")
in a goroutine and uses time.After(deleteAttemptTimeout), which leaves the
delete running if the timer fires; replace that pattern with a per-attempt
cancelable context so each delete is actually bounded: inside the
RetryWithOptions callback create ctx, cancel :=
context.WithTimeout(context.Background(), deleteAttemptTimeout) and defer
cancel(), then invoke the delete command with that context (e.g.,
oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n",
namespace).WithContext(ctx).Output() or the project’s equivalent Run/Output
method that accepts a context), remove the extra goroutine and select, capture
and log the returned error from the cancelable delete call, and then use
ocResourceExists(oc, resourceType, resourceName, namespace) to decide
success/failure as before.
- Around line 1466-1468: The call to core.RetryOptions in
waitForEtcdResourceToStop is ignoring the function's timeout parameter and
hardcodes threeMinuteTimeout, preventing callers from controlling the deadline;
change the RetryOptions Timeout to use the function's timeout argument (the
timeout parameter of waitForEtcdResourceToStop) instead of threeMinuteTimeout,
and ensure any associated log message that references the timeout reflects the
passed-in timeout value so logs match behavior.
- Around line 1378-1393: The function forceDeleteOcResourceByRemovingFinalizers
currently returns nil even when the confirm loop times out, causing callers to
assume deletion succeeded; change the final branch so that after the timeout
(where it currently logs the WARNING) the function returns a non-nil error
(e.g., fmt.Errorf with context including resourceType, resourceName and
forceDeleteConfirmTimeout) instead of nil so callers see the failure and can
handle the retry/error path; update the log call in
forceDeleteOcResourceByRemovingFinalizers to include the same error context when
returning.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fce9d0e8-a5c5-480a-b91a-17f85f92e721

📥 Commits

Reviewing files that changed from the base of the PR and between 547c9eb and 0816c76.

📒 Files selected for processing (6)
  • test/extended/testdata/two_node/baremetalhost-template.yaml
  • test/extended/two_node/tnf_node_replacement.go
  • test/extended/two_node/utils/common.go
  • test/extended/two_node/utils/core/ssh.go
  • test/extended/two_node/utils/services/etcd.go
  • test/extended/two_node/utils/services/pacemaker.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/extended/testdata/two_node/baremetalhost-template.yaml

Comment on lines +1273 to +1280
	minPodCreationTime := time.Now().Add(-2 * time.Minute)
	e2e.Logf("Waiting for both CEO update-setup jobs (survivor and target) in parallel")
	var wg sync.WaitGroup
	var errSurvivor, errTarget error
	wg.Add(2)
	go func() {
		defer wg.Done()
		errSurvivor = services.WaitForSurvivorUpdateSetupJobCompletion(testConfig.Jobs.UpdateSetupJobSurvivorName, etcdNamespace, minPodCreationTime, tenMinuteTimeout, utils.ThirtySecondPollInterval, oc)


⚠️ Potential issue | 🟠 Major

Gate this wait on the actual Ready timestamp.

time.Now().Add(-2 * time.Minute) can still admit an update-setup pod that started before the replacement node became Ready, which is exactly the stale run this logic is trying to exclude. Capture the Ready time in waitForNodeRecovery and pass that exact timestamp through here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1273 - 1280, The
wait uses a hardcoded minPodCreationTime (time.Now().Add(-2 * time.Minute))
which can include pre-Ready stale pods; change waitForNodeRecovery to return the
node Ready timestamp (e.g., readyTime) and use that exact timestamp here instead
of time.Now().Add(...), passing the returned readyTime as minPodCreationTime
into services.WaitForSurvivorUpdateSetupJobCompletion (and the symmetric
WaitForTargetUpdateSetupJobCompletion) so the waits are gated on the node Ready
time rather than an approximate clock offset.

Comment on lines +1337 to +1354
	err := core.RetryWithOptions(func() error {
		done := make(chan error, 1)
		go func() {
			_, err := oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n", namespace).Output()
			done <- err
		}()
		select {
		case err := <-done:
			if err != nil {
				e2e.Logf("%s returned error: %v", opName, err)
			}
		case <-time.After(deleteAttemptTimeout):
			e2e.Logf("%s did not complete within %v", opName, deleteAttemptTimeout)
		}
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			return nil
		}
		return fmt.Errorf("resource still exists")
Copy link


⚠️ Potential issue | 🟠 Major

deleteAttemptTimeout is not actually enforced here.

When time.After(deleteAttemptTimeout) fires, the oc delete keeps running in the background. Later retries can overlap the earlier delete, so a hung attempt is never really capped at 20 seconds. Use a cancelable delete path per attempt instead of goroutine + time.After.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1337 - 1354, The
current attempt spawns oc.AsAdmin().Run("delete") in a goroutine and uses
time.After(deleteAttemptTimeout), which leaves the delete running if the timer
fires; replace that pattern with a per-attempt cancelable context so each delete
is actually bounded: inside the RetryWithOptions callback create ctx, cancel :=
context.WithTimeout(context.Background(), deleteAttemptTimeout) and defer
cancel(), then invoke the delete command with that context (e.g.,
oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n",
namespace).WithContext(ctx).Output() or the project’s equivalent Run/Output
method that accepts a context), remove the extra goroutine and select, capture
and log the returned error from the cancelable delete call, and then use
ocResourceExists(oc, resourceType, resourceName, namespace) to decide
success/failure as before.

Comment on lines +1378 to +1393
func forceDeleteOcResourceByRemovingFinalizers(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	_, err := oc.AsAdmin().Run("patch").Args(resourceType, resourceName, "-n", namespace, "-p", `{"metadata":{"finalizers":[]}}`, "--type=merge").Output()
	if err != nil {
		return fmt.Errorf("force-delete (patch finalizers) failed: %w", err)
	}
	e2e.Logf("Force-delete patch applied for %s %s; confirming resource is gone (timeout: %v, interval: %v)", resourceType, resourceName, forceDeleteConfirmTimeout, forceDeleteConfirmInterval)
	deadline := time.Now().Add(forceDeleteConfirmTimeout)
	for time.Now().Before(deadline) {
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
			return nil
		}
		time.Sleep(forceDeleteConfirmInterval)
	}
	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
	return nil
Copy link


⚠️ Potential issue | 🟠 Major

Don’t report success while the resource still exists.

If the confirm loop expires, callers continue as though deletion succeeded, but the later recreate steps still race the old BMH/Machine object. Return an error here; otherwise the failure is deferred into a much harder-to-diagnose conflict downstream.

🛠️ Minimal fix
 	for time.Now().Before(deadline) {
 		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
 			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
 			return nil
 		}
 		time.Sleep(forceDeleteConfirmInterval)
 	}
-	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
-	return nil
+	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

func forceDeleteOcResourceByRemovingFinalizers(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	_, err := oc.AsAdmin().Run("patch").Args(resourceType, resourceName, "-n", namespace, "-p", `{"metadata":{"finalizers":[]}}`, "--type=merge").Output()
	if err != nil {
		return fmt.Errorf("force-delete (patch finalizers) failed: %w", err)
	}
	e2e.Logf("Force-delete patch applied for %s %s; confirming resource is gone (timeout: %v, interval: %v)", resourceType, resourceName, forceDeleteConfirmTimeout, forceDeleteConfirmInterval)
	deadline := time.Now().Add(forceDeleteConfirmTimeout)
	for time.Now().Before(deadline) {
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
			return nil
		}
		time.Sleep(forceDeleteConfirmInterval)
	}
	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1378 - 1393, The
function forceDeleteOcResourceByRemovingFinalizers currently returns nil even
when the confirm loop times out, causing callers to assume deletion succeeded;
change the final branch so that after the timeout (where it currently logs the
WARNING) the function returns a non-nil error (e.g., fmt.Errorf with context
including resourceType, resourceName and forceDeleteConfirmTimeout) instead of
nil so callers see the failure and can handle the retry/error path; update the
log call in forceDeleteOcResourceByRemovingFinalizers to include the same error
context when returning.

Comment on lines 1466 to 1468
	}, core.RetryOptions{
-		Timeout:      fiveMinuteTimeout,
+		Timeout:      threeMinuteTimeout,
		PollInterval: utils.FiveSecondPollInterval,
Copy link


⚠️ Potential issue | 🟡 Minor

Honor the timeout parameter here.

waitForEtcdResourceToStop accepts timeout but hardcodes threeMinuteTimeout, so callers cannot actually control the deadline and the log message above can be misleading.

🛠️ Minimal fix
 	}, core.RetryOptions{
-		Timeout:      threeMinuteTimeout,
+		Timeout:      timeout,
 		PollInterval: utils.FiveSecondPollInterval,
 	}, fmt.Sprintf("etcd stop on %s", testConfig.SurvivingNode.Name))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1466 - 1468, The
call to core.RetryOptions in waitForEtcdResourceToStop is ignoring the
function's timeout parameter and hardcodes threeMinuteTimeout, preventing
callers from controlling the deadline; change the RetryOptions Timeout to use
the function's timeout argument (the timeout parameter of
waitForEtcdResourceToStop) instead of threeMinuteTimeout, and ensure any
associated log message that references the timeout reflects the passed-in
timeout value so logs match behavior.
