
etcdserver: separate "raft log compact" from snapshot #18372

Conversation

clement2026
Contributor

@clement2026 clement2026 commented Jul 27, 2024

Part of #17098. This PR separates "raft log compact" from snapshot.

Changes

  • Introduce a new variable CompactRaftLogEveryNApplies to control how often the raft log is compacted.
  • Make CompactRaftLogEveryNApplies tunable.
  • Fix the failing test case TestV3WatchRestoreSnapshotUnsync by setting CompactRaftLogEveryNApplies to 1, ensuring compaction occurs after each snapshot.

Need Help

This PR also breaks some e2e test cases, because they expect a compaction right after the snapshot to ensure a snapshot is sent to their followers.

I haven’t figured out a fix yet, other than making CompactRaftLogEveryNApplies a command-line argument, but that doesn’t seem ideal.

To Do

  • Fix failing tests
  • Benchmark different values for CompactRaftLogEveryNApplies (1, 10, 100, 1000) and choose the best one as the default value.

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: clement2026
Once this PR has been reviewed and has the lgtm label, please assign serathius for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot

Hi @clement2026. Thanks for your PR.

I'm waiting for an etcd-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@clement2026 clement2026 force-pushed the issue-17098-separate-raft-log-compact-from-snapshot branch from b758e6b to 5b4a4f8 Compare July 27, 2024 06:05
@clement2026 clement2026 force-pushed the issue-17098-separate-raft-log-compact-from-snapshot branch from 8681276 to a6ea774 Compare July 27, 2024 20:47
@clement2026 clement2026 force-pushed the issue-17098-separate-raft-log-compact-from-snapshot branch from a6ea774 to 55fbbfa Compare July 28, 2024 19:18
@clement2026
Contributor Author

How do you guys usually find the failing test case in the logs?

I tried the following keywords but had no luck this time. Any tips?

Error Trace, failed to, fatal

logs_26517259659.zip

@clement2026 clement2026 changed the title [WIP] etcdserver: separate "raft log compact" from snapshot etcdserver: separate "raft log compact" from snapshot Jul 28, 2024
@clement2026 clement2026 marked this pull request as ready for review July 28, 2024 20:09
@ivanvc
Member

ivanvc commented Jul 29, 2024

Hi @clement2026, search for FAIL in 5_Run set -euo pipefail.txt. It looks like TestMixVersionsSnapshotByMockingPartition is the one failing.

@clement2026
Contributor Author

Hi @clement2026, search for FAIL in 5_Run set -euo pipefail.txt. It looks like TestMixVersionsSnapshotByMockingPartition is the one failing.

Thanks a lot! The all-caps ‘FAIL’ is super helpful.

@k8s-ci-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
