Commit 7fe3b32

Added support for differential snapshots
Summary: The motivation for this PR is to add support for differential (incremental) snapshots to RocksDB, i.e. a snapshot of the DB changes between two points in time. One can think of it as a diff between two sequence numbers, or a diff D (an SST file or just a set of KVs) that can be applied to the DB state at sequence number S1 to bring it to the state at sequence number S2. This feature would be useful for various distributed storage layers built on top of RocksDB, as it should help reduce the resources (time and network bandwidth) needed to recover and rebuild DB instances as replicas. From the API standpoint, a client app would request an iterator between a start seqnum and the current DB state, and read the "diff".

This is a very draft PR for initial review and discussion of the approach; I'm going to rework some parts and keep updating the PR. For now, what's done here according to initial discussions:

Preserving deletes:
- We want to be able to optionally preserve recent deletes for some defined period of time, so that if a delete came in recently and might need to be included in the next incremental snapshot, it wouldn't get dropped by a compaction. This is done by adding a new param to Options (a preserve_deletes flag) and a new variable to DBImpl where we keep track of the sequence number after which we don't want to drop tombstones, even if they are otherwise eligible for deletion.
- I also added a new API call for clients to be able to advance this cutoff seqnum, after which we drop deletes; I assume it's more flexible to let clients control this, since otherwise we'd need to keep some kind of timestamp <--> seqnum mapping inside the DB, which sounds messy and painful to support. Clients could make use of it by periodically calling GetLatestSequenceNumber(), noting the timestamp, doing some calculation and figuring out by how much we need to advance the cutoff seqnum.
- The compaction codepath in compaction_iterator.cc has been modified to avoid dropping tombstones with seqnum > cutoff seqnum.

Iterator changes:
- A couple of params were added to ReadOptions, to optionally allow the client to request internal keys instead of user keys (so that the client can get the latest value of a key, be it a delete marker or a put), as well as min timestamp and min seqnum.

TableCache changes:
- I modified the table_cache code to be able to quickly exclude SST files from the iterators heap if creation_time on the file is less than iter_start_ts as passed in ReadOptions. That would help a lot in some DB settings (like reading very recent data only or using FIFO compactions), but not so much for universal compaction with a more or less long iterator time span.

What's left:
- Still looking at how to best plug this into the DBIter codepath. So far it seems that FindNextUserKeyInternal only parses values as UserKeys, and the iter->key() call generally returns the user key. Can we add a new API to DBIter such as internal_key(), and modify this internal method to optionally set saved_key_ to point to the full internal key? I don't need to store the actual seqnum there, but I do need to store the type.

Closes facebook#2999
Differential Revision: D6175602
Pulled By: mikhail-antonov
fbshipit-source-id: c779a6696ee2d574d86c69cec866a3ae095aa900
1 parent 17731a4 commit 7fe3b32

30 files changed, +432 -56 lines changed

HISTORY.md

+4

@@ -2,6 +2,9 @@
 ## Unreleased
 ### Public API Change
 * `BackupableDBOptions::max_valid_backups_to_open == 0` now means no backups will be opened during BackupEngine initialization. Previously this condition disabled limiting backups opened.
+* `DBOptions::preserve_deletes` is a new option that allows one to specify that the DB should not drop tombstones for regular deletes if they have a sequence number larger than what was set by the new API call `DB::SetPreserveDeletesSequenceNumber(SequenceNumber seqnum)`. Disabled by default.
+* API call `DB::SetPreserveDeletesSequenceNumber(SequenceNumber seqnum)` was added; users who wish to preserve deletes are expected to periodically call this function to advance the cutoff seqnum (all deletes made before this seqnum can be dropped by the DB). It's the user's responsibility to advance the seqnum so that tombstones are kept for the desired period of time, yet are eventually processed in time and don't eat up too much space.
+* `ReadOptions::iter_start_seqnum` was added; if set to something > 0, the user will see two changes in iterator behavior: 1) only keys written with sequence numbers larger than this parameter will be returned, and 2) the `Slice` returned by iter->key() now points to memory that keeps a user-oriented representation of the internal key, rather than the user key. A new struct `FullKey` was added to represent internal keys, along with a new helper function `ParseFullKey(const Slice& internal_key, FullKey* result);`.
 * Deprecate trash_dir param in NewSstFileManager, right now we will rename deleted files to <name>.trash instead of moving them to trash directory
 * Return an error on write if write_options.sync = true and write_options.disableWAL = true to warn user of inconsistent options. Previously we will not write to WAL and not respecting the sync options in this case.

@@ -14,6 +17,7 @@
 * Add a new db property "rocksdb.estimate-oldest-key-time" to return oldest data timestamp. The property is available only for FIFO compaction with compaction_options_fifo.allow_compaction = false.
 * Upon snapshot release, recompact bottommost files containing deleted/overwritten keys that previously could not be dropped due to the snapshot. This alleviates space-amp caused by long-held snapshots.
 * Support lower bound on iterators specified via `ReadOptions::iterate_lower_bound`.
+* Support for differential snapshots (via an iterator emitting the sequence of key-values representing the difference between DB state at two different sequence numbers). Supports preserving and emitting puts and regular deletes; doesn't support SingleDeletes, MergeOperator, Blobs and Range Deletes.

 ### Bug Fixes
 * Fix a potential data inconsistency issue during point-in-time recovery. `DB::Open()` will abort if column family inconsistency is found during PIT recovery.

db/compaction_iterator.cc

+21 -7

@@ -45,14 +45,16 @@ CompactionIterator::CompactionIterator(
     bool expect_valid_internal_key, RangeDelAggregator* range_del_agg,
     const Compaction* compaction, const CompactionFilter* compaction_filter,
     CompactionEventListener* compaction_listener,
-    const std::atomic<bool>* shutting_down)
+    const std::atomic<bool>* shutting_down,
+    const SequenceNumber preserve_deletes_seqnum)
     : CompactionIterator(
           input, cmp, merge_helper, last_sequence, snapshots,
           earliest_write_conflict_snapshot, snapshot_checker, env,
           expect_valid_internal_key, range_del_agg,
           std::unique_ptr<CompactionProxy>(
               compaction ? new CompactionProxy(compaction) : nullptr),
-          compaction_filter, compaction_listener, shutting_down) {}
+          compaction_filter, compaction_listener, shutting_down,
+          preserve_deletes_seqnum) {}

 CompactionIterator::CompactionIterator(
     InternalIterator* input, const Comparator* cmp, MergeHelper* merge_helper,
@@ -63,7 +65,9 @@ CompactionIterator::CompactionIterator(
     std::unique_ptr<CompactionProxy> compaction,
     const CompactionFilter* compaction_filter,
    CompactionEventListener* compaction_listener,
-    const std::atomic<bool>* shutting_down)
+    const std::atomic<bool>* shutting_down,
+    const SequenceNumber preserve_deletes_seqnum)
     : input_(input),
       cmp_(cmp),
       merge_helper_(merge_helper),
@@ -79,6 +83,7 @@ CompactionIterator::CompactionIterator(
       compaction_listener_(compaction_listener),
 #endif  // ROCKSDB_LITE
       shutting_down_(shutting_down),
+      preserve_deletes_seqnum_(preserve_deletes_seqnum),
       ignore_snapshots_(false),
       current_user_key_sequence_(0),
       current_user_key_snapshot_(0),
@@ -496,6 +501,7 @@ void CompactionIterator::NextFromInput() {
       input_->Next();
     } else if (compaction_ != nullptr && ikey_.type == kTypeDeletion &&
                ikey_.sequence <= earliest_snapshot_ &&
+               ikeyNotNeededForIncrementalSnapshot() &&
               compaction_->KeyNotExistsBeyondOutputLevel(ikey_.user_key,
                                                          &level_ptrs_)) {
       // TODO(noetzli): This is the only place where we use compaction_
@@ -595,11 +601,12 @@ void CompactionIterator::PrepareOutput() {

   // This is safe for TransactionDB write-conflict checking since transactions
   // only care about sequence number larger than any active snapshots.
-  if ((compaction_ != nullptr && !compaction_->allow_ingest_behind()) &&
+  if ((compaction_ != nullptr &&
+       !compaction_->allow_ingest_behind()) &&
+      ikeyNotNeededForIncrementalSnapshot() &&
       bottommost_level_ && valid_ && ikey_.sequence <= earliest_snapshot_ &&
-      (snapshot_checker_ == nullptr ||
-       LIKELY(snapshot_checker_->IsInSnapshot(ikey_.sequence,
-                                              earliest_snapshot_))) &&
+      (snapshot_checker_ == nullptr || LIKELY(snapshot_checker_->IsInSnapshot(
+                                           ikey_.sequence, earliest_snapshot_))) &&
       ikey_.type != kTypeMerge &&
       !cmp_->Equal(compaction_->GetLargestUserKey(), ikey_.user_key)) {
     assert(ikey_.type != kTypeDeletion && ikey_.type != kTypeSingleDeletion);
@@ -626,4 +633,11 @@ inline SequenceNumber CompactionIterator::findEarliestVisibleSnapshot(
   return kMaxSequenceNumber;
 }

+// Used in 2 places: prevents deletion markers from being dropped if they may
+// be needed, and disables seqnum zero-out in PrepareOutput for recent keys.
+inline bool CompactionIterator::ikeyNotNeededForIncrementalSnapshot() {
+  return (!compaction_->preserve_deletes()) ||
+         (ikey_.sequence < preserve_deletes_seqnum_);
+}
+
 }  // namespace rocksdb

db/compaction_iterator.h

+13 -2

@@ -49,6 +49,9 @@ class CompactionIterator {
     virtual bool allow_ingest_behind() const {
       return compaction_->immutable_cf_options()->allow_ingest_behind;
     }
+    virtual bool preserve_deletes() const {
+      return compaction_->immutable_cf_options()->preserve_deletes;
+    }

    protected:
     CompactionProxy() = default;
@@ -67,7 +70,8 @@ class CompactionIterator {
                      const Compaction* compaction = nullptr,
                      const CompactionFilter* compaction_filter = nullptr,
                      CompactionEventListener* compaction_listener = nullptr,
-                     const std::atomic<bool>* shutting_down = nullptr);
+                     const std::atomic<bool>* shutting_down = nullptr,
+                     const SequenceNumber preserve_deletes_seqnum = 0);

   // Constructor with custom CompactionProxy, used for tests.
   CompactionIterator(InternalIterator* input, const Comparator* cmp,
@@ -80,7 +84,8 @@ class CompactionIterator {
                      std::unique_ptr<CompactionProxy> compaction,
                      const CompactionFilter* compaction_filter = nullptr,
                      CompactionEventListener* compaction_listener = nullptr,
-                     const std::atomic<bool>* shutting_down = nullptr);
+                     const std::atomic<bool>* shutting_down = nullptr,
+                     const SequenceNumber preserve_deletes_seqnum = 0);

   ~CompactionIterator();

@@ -126,6 +131,11 @@ class CompactionIterator {
   inline SequenceNumber findEarliestVisibleSnapshot(
       SequenceNumber in, SequenceNumber* prev_snapshot);

+  // Checks whether the currently seen ikey_ is needed for an
+  // incremental (differential) snapshot and hence can't be dropped
+  // or have its seqnum zeroed out, even if all other conditions are met.
+  inline bool ikeyNotNeededForIncrementalSnapshot();
+
   InternalIterator* input_;
   const Comparator* cmp_;
   MergeHelper* merge_helper_;
@@ -141,6 +151,7 @@ class CompactionIterator {
   CompactionEventListener* compaction_listener_;
 #endif  // !ROCKSDB_LITE
   const std::atomic<bool>* shutting_down_;
+  const SequenceNumber preserve_deletes_seqnum_;
   bool bottommost_level_;
   bool valid_ = false;
   bool visible_at_tip_;

db/compaction_iterator_test.cc

+2

@@ -156,6 +156,8 @@ class FakeCompaction : public CompactionIterator::CompactionProxy {
   }
   virtual bool allow_ingest_behind() const { return false; }

+  virtual bool preserve_deletes() const { return false; }
+
   bool key_not_exists_beyond_output_level = false;
 };

db/compaction_job.cc

+5 -2

@@ -264,7 +264,9 @@ void CompactionJob::AggregateStatistics() {
 CompactionJob::CompactionJob(
     int job_id, Compaction* compaction, const ImmutableDBOptions& db_options,
     const EnvOptions env_options, VersionSet* versions,
-    const std::atomic<bool>* shutting_down, LogBuffer* log_buffer,
+    const std::atomic<bool>* shutting_down,
+    const SequenceNumber preserve_deletes_seqnum,
+    LogBuffer* log_buffer,
     Directory* db_directory, Directory* output_directory, Statistics* stats,
     InstrumentedMutex* db_mutex, Status* db_bg_error,
     std::vector<SequenceNumber> existing_snapshots,
@@ -282,6 +284,7 @@ CompactionJob::CompactionJob(
       env_(db_options.env),
       versions_(versions),
       shutting_down_(shutting_down),
+      preserve_deletes_seqnum_(preserve_deletes_seqnum),
       log_buffer_(log_buffer),
       db_directory_(db_directory),
       output_directory_(output_directory),
@@ -764,7 +767,7 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
       &existing_snapshots_, earliest_write_conflict_snapshot_,
       snapshot_checker_, env_, false, range_del_agg.get(),
       sub_compact->compaction, compaction_filter, comp_event_listener,
-      shutting_down_));
+      shutting_down_, preserve_deletes_seqnum_));
   auto c_iter = sub_compact->c_iter.get();
   c_iter->SeekToFirst();
   if (c_iter->Valid() &&

db/compaction_job.h

+4 -1

@@ -58,7 +58,9 @@ class CompactionJob {
   CompactionJob(int job_id, Compaction* compaction,
                 const ImmutableDBOptions& db_options,
                 const EnvOptions env_options, VersionSet* versions,
-                const std::atomic<bool>* shutting_down, LogBuffer* log_buffer,
+                const std::atomic<bool>* shutting_down,
+                const SequenceNumber preserve_deletes_seqnum,
+                LogBuffer* log_buffer,
                 Directory* db_directory, Directory* output_directory,
                 Statistics* stats, InstrumentedMutex* db_mutex,
                 Status* db_bg_error,
@@ -134,6 +136,7 @@ class CompactionJob {
   Env* env_;
   VersionSet* versions_;
   const std::atomic<bool>* shutting_down_;
+  const SequenceNumber preserve_deletes_seqnum_;
   LogBuffer* log_buffer_;
   Directory* db_directory_;
   Directory* output_directory_;

db/compaction_job_test.cc

+4 -2

@@ -76,6 +76,7 @@ class CompactionJobTest : public testing::Test {
                       table_cache_.get(), &write_buffer_manager_,
                       &write_controller_)),
         shutting_down_(false),
+        preserve_deletes_seqnum_(0),
         mock_table_factory_(new mock::MockTableFactory()) {
     EXPECT_OK(env_->CreateDirIfMissing(dbname_));
     db_options_.db_paths.emplace_back(dbname_,
@@ -253,12 +254,12 @@ class CompactionJobTest : public testing::Test {
     // TODO(yiwu) add a mock snapshot checker and add test for it.
     SnapshotChecker* snapshot_checker = nullptr;
     CompactionJob compaction_job(0, &compaction, db_options_, env_options_,
-                                 versions_.get(), &shutting_down_, &log_buffer,
+                                 versions_.get(), &shutting_down_,
+                                 preserve_deletes_seqnum_, &log_buffer,
                                  nullptr, nullptr, nullptr, &mutex_, &bg_error_,
                                  snapshots, earliest_write_conflict_snapshot,
                                  snapshot_checker, table_cache_, &event_logger,
                                  false, false, dbname_, &compaction_job_stats_);
-
     VerifyInitializationOfCompactionJobStats(compaction_job_stats_);

     compaction_job.Prepare();
@@ -294,6 +295,7 @@ class CompactionJobTest : public testing::Test {
   std::unique_ptr<VersionSet> versions_;
   InstrumentedMutex mutex_;
   std::atomic<bool> shutting_down_;
+  SequenceNumber preserve_deletes_seqnum_;
   std::shared_ptr<mock::MockTableFactory> mock_table_factory_;
   CompactionJobStats compaction_job_stats_;
   ColumnFamilyData* cfd_;

db/db_compaction_test.cc

+78

@@ -218,6 +218,84 @@ TEST_P(DBCompactionTestWithParam, CompactionDeletionTrigger) {
   }
 }

+TEST_P(DBCompactionTestWithParam, CompactionsPreserveDeletes) {
+  // For each options type we test the following:
+  // - enable preserve_deletes
+  // - write a bunch of keys and deletes
+  // - set start_seqnum to the beginning; compact; check that keys are present
+  // - advance start_seqnum way forward; compact; check that keys are gone
+
+  for (int tid = 0; tid < 3; ++tid) {
+    Options options = DeletionTriggerOptions(CurrentOptions());
+    options.max_subcompactions = max_subcompactions_;
+    options.preserve_deletes = true;
+    options.num_levels = 2;
+
+    if (tid == 1) {
+      options.skip_stats_update_on_db_open = true;
+    } else if (tid == 2) {
+      // third pass with universal compaction
+      options.compaction_style = kCompactionStyleUniversal;
+    }
+
+    DestroyAndReopen(options);
+    Random rnd(301);
+    // highlight the default; all deletes should be preserved
+    SetPreserveDeletesSequenceNumber(0);
+
+    const int kTestSize = kCDTKeysPerBuffer;
+    std::vector<std::string> values;
+    for (int k = 0; k < kTestSize; ++k) {
+      values.push_back(RandomString(&rnd, kCDTValueSize));
+      ASSERT_OK(Put(Key(k), values[k]));
+    }
+
+    for (int k = 0; k < kTestSize; ++k) {
+      ASSERT_OK(Delete(Key(k)));
+    }
+    // to ensure we tackle all tombstones
+    CompactRangeOptions cro;
+    cro.change_level = true;
+    cro.target_level = 2;
+    cro.bottommost_level_compaction = BottommostLevelCompaction::kForce;
+
+    dbfull()->TEST_WaitForFlushMemTable();
+    dbfull()->CompactRange(cro, nullptr, nullptr);
+
+    // check that a normal user iterator doesn't see anything
+    Iterator* db_iter = dbfull()->NewIterator(ReadOptions());
+    int i = 0;
+    for (db_iter->SeekToFirst(); db_iter->Valid(); db_iter->Next()) {
+      i++;
+    }
+    ASSERT_EQ(i, 0);
+    delete db_iter;
+
+    // check that an iterator that sees internal keys sees tombstones
+    ReadOptions ro;
+    ro.iter_start_seqnum = 1;
+    db_iter = dbfull()->NewIterator(ro);
+    i = 0;
+    for (db_iter->SeekToFirst(); db_iter->Valid(); db_iter->Next()) {
+      i++;
+    }
+    ASSERT_EQ(i, 4);
+    delete db_iter;
+
+    // now all deletes should be gone
+    SetPreserveDeletesSequenceNumber(100000000);
+    dbfull()->CompactRange(cro, nullptr, nullptr);
+
+    db_iter = dbfull()->NewIterator(ro);
+    i = 0;
+    for (db_iter->SeekToFirst(); db_iter->Valid(); db_iter->Next()) {
+      i++;
+    }
+    ASSERT_EQ(i, 0);
+    delete db_iter;
+  }
+}
+
 TEST_F(DBCompactionTest, SkipStatsUpdateTest) {
   // This test verify UpdateAccumulatedStats is not on
   // if options.skip_stats_update_on_db_open = true

db/db_impl.cc

+25 -1

@@ -196,7 +196,8 @@ DBImpl::DBImpl(const DBOptions& options, const std::string& dbname)
       manual_wal_flush_(options.manual_wal_flush),
       seq_per_batch_(options.seq_per_batch),
       // TODO(myabandeh): revise this when we change options.seq_per_batch
-      use_custom_gc_(options.seq_per_batch) {
+      use_custom_gc_(options.seq_per_batch),
+      preserve_deletes_(options.preserve_deletes) {
   env_->GetAbsolutePath(dbname, &db_absolute_path_);

   // Reserve ten files or so for other uses and give the rest to TableCache.
@@ -218,6 +219,11 @@ DBImpl::DBImpl(const DBOptions& options, const std::string& dbname)
   immutable_db_options_.Dump(immutable_db_options_.info_log.get());
   mutable_db_options_.Dump(immutable_db_options_.info_log.get());
   DumpSupportInfo(immutable_db_options_.info_log.get());
+
+  // Always open the DB with 0 here, which means if preserve_deletes_ == true
+  // we won't drop any deletion markers until SetPreserveDeletesSequenceNumber()
+  // is called by the client and this seqnum is advanced.
+  preserve_deletes_seqnum_.store(0);
 }

 // Will lock the mutex_, will wait for completion if wait is true
@@ -748,6 +754,15 @@ SequenceNumber DBImpl::IncAndFetchSequenceNumber() {
   return versions_->FetchAddLastToBeWrittenSequence(1ull) + 1ull;
 }

+bool DBImpl::SetPreserveDeletesSequenceNumber(SequenceNumber seqnum) {
+  if (seqnum > preserve_deletes_seqnum_.load()) {
+    preserve_deletes_seqnum_.store(seqnum);
+    return true;
+  } else {
+    return false;
+  }
+}
+
 InternalIterator* DBImpl::NewInternalIterator(
     Arena* arena, RangeDelAggregator* range_del_agg,
     ColumnFamilyHandle* column_family) {
@@ -1421,6 +1436,15 @@ Iterator* DBImpl::NewIterator(const ReadOptions& read_options,
     return NewErrorIterator(Status::NotSupported(
         "ReadTier::kPersistedData is not yet supported in iterators."));
   }
+  // If the iterator wants internal keys, we can only proceed if
+  // we can guarantee the deletes haven't been processed yet.
+  if (immutable_db_options_.preserve_deletes &&
+      read_options.iter_start_seqnum > 0 &&
+      read_options.iter_start_seqnum < preserve_deletes_seqnum_.load()) {
+    return NewErrorIterator(Status::InvalidArgument(
+        "Iterator requested internal keys which are too old and are not"
+        " guaranteed to be preserved, try larger iter_start_seqnum opt."));
+  }
   auto cfh = reinterpret_cast<ColumnFamilyHandleImpl*>(column_family);
   auto cfd = cfh->cfd();
   ReadCallback* read_callback = nullptr;  // No read callback provided.

db/db_impl.h

+9

@@ -225,6 +225,8 @@ class DBImpl : public DB {
   // also on data written to the WAL but not to the memtable.
   SequenceNumber TEST_GetLatestVisibleSequenceNumber() const;

+  virtual bool SetPreserveDeletesSequenceNumber(SequenceNumber seqnum) override;
+
   bool HasActiveSnapshotLaterThanSN(SequenceNumber sn);

 #ifndef ROCKSDB_LITE
@@ -1319,6 +1321,13 @@ class DBImpl : public DB {
   const bool manual_wal_flush_;
   const bool seq_per_batch_;
   const bool use_custom_gc_;
+
+  // Clients must periodically call SetPreserveDeletesSequenceNumber()
+  // to advance this seqnum. Default value is 0, which means ALL deletes are
+  // preserved. Note that this has no effect if DBOptions.preserve_deletes
+  // is set to false.
+  std::atomic<SequenceNumber> preserve_deletes_seqnum_;
+  const bool preserve_deletes_;
 };

 extern Options SanitizeOptions(const std::string& db,
