tests: row_cache: Add reproducer for reader producing missing closing range tombstone

Adds a reproducer for scylladb#12462, which no longer manifests in master
after f73e2c9. It's still useful to keep the test to avoid regressions.

The bug manifests as the reader throwing:

  std::logic_error: Stream ends with an active range tombstone: {range_tombstone_change: pos={position: clustered,ckp{},-1}, {tombstone: timestamp=-9223372036854775805, deletion_time=2}}

The reason is that prior to the rework of the cache reader,
range_tombstone_change_generator::flush() was used with end_of_range=true
to produce the closing range_tombstone_change, and it did not correctly
handle the case where there are two adjacent range tombstones and
flush(pos, end_of_range=true) is called with pos at the boundary between
the two.
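
To illustrate, here is a minimal standalone sketch of that failure mode,
assuming a simplified model of the generator (the types generator, rt and
rt_change are invented for this example; they are not the real ScyllaDB
classes):

  #include <algorithm>
  #include <deque>
  #include <iostream>
  #include <optional>
  #include <vector>

  struct rt { int start, end; long ts; };                 // [start, end) @ ts
  struct rt_change { int pos; std::optional<long> ts; };  // nullopt = closing change

  struct generator {
      std::deque<rt> _pending;      // consumed tombstones, position-ordered
      std::optional<rt> _active;    // tombstone currently open in the output

      void consume(rt r) { _pending.push_back(r); }

      // Emits changes up to pos. With end_of_range=true the output must not
      // be left with an active tombstone.
      std::vector<rt_change> flush(int pos, bool end_of_range) {
          std::vector<rt_change> out;
          while (!_pending.empty() && _pending.front().start < pos) {
              _active = _pending.front();
              _pending.pop_front();
              out.push_back({_active->start, _active->ts});  // opening change
          }
          // A buggy flush of this shape closes the active tombstone only when
          // pos falls strictly inside it (pos < _active->end). With two
          // adjacent tombstones meeting exactly at pos, pos == _active->end,
          // so no closing change is emitted and the stream ends with an
          // active range tombstone. Closing unconditionally on end_of_range
          // avoids that:
          if (end_of_range && _active) {
              out.push_back({std::min(pos, _active->end), std::nullopt});
              _active.reset();
          }
          return out;
      }
  };

  int main() {
      generator g;
      g.consume({1, 2, 1});  // like RT(before(1), before(2))@t1
      g.consume({2, 3, 0});  // like RT(before(2), before(3))@t0, adjacent
      for (auto& c : g.flush(2, true)) {  // pos is the boundary between the two
          std::cout << c.pos << (c.ts ? " open" : " close") << "\n";
      }
  }

After the cache reader rework, the closing range_tombstone_change is no
longer produced via flush(pos, end_of_range=true), which is why master no
longer hits the error.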

Closes scylladb#13665
tgrabiec authored and kbr-scylla committed Apr 25, 2023
1 parent 8765442 commit a717c80
Showing 1 changed file with 60 additions and 0 deletions.
test/boost/row_cache_test.cc
@@ -39,6 +39,7 @@
#include "readers/from_mutations_v2.hh"
#include "readers/delegating_v2.hh"
#include "readers/empty_v2.hh"
#include "seastar/testing/thread_test_case.hh"

using namespace std::chrono_literals;

@@ -2371,6 +2372,65 @@ SEASTAR_TEST_CASE(test_tombstone_and_row_with_small_buffer) {
    });
}

// Reproducer for https://github.com/scylladb/scylladb/issues/12462
SEASTAR_THREAD_TEST_CASE(test_range_tombstone_adjacent_to_slice_is_closed) {
    simple_schema s;
    tests::reader_concurrency_semaphore_wrapper semaphore;
    cache_tracker tracker;
    memtable_snapshot_source underlying(s.schema());

    auto pk = s.make_pkey(0);
    auto pr = dht::partition_range::make_singular(pk);

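    // Older mutation: a row at ckey 0 plus RT(before(1), before(3))@t0.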
    mutation m1(s.schema(), pk);
    auto rt0 = s.make_range_tombstone(*position_range_to_clustering_range(position_range(
            position_in_partition::before_key(s.make_ckey(1)),
            position_in_partition::before_key(s.make_ckey(3))), *s.schema()));
    m1.partition().apply_delete(*s.schema(), rt0);
    s.add_row(m1, s.make_ckey(0), "v1");

    underlying.apply(m1);

    row_cache cache(s.schema(), snapshot_source([&] { return underlying(); }), tracker);
    populate_range(cache);

    // Create a reader to pin the MVCC version
    auto rd0 = cache.make_reader(s.schema(), semaphore.make_permit(), pr);
    auto close_rd0 = deferred_close(rd0);
    rd0.set_max_buffer_size(1);
    rd0.fill_buffer().get();

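    // Newer mutation, creating a second MVCC version on top of the one
    // pinned by rd0: RT(before(1), before(2))@t1.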
    mutation m2(s.schema(), pk);
    auto rt1 = s.make_range_tombstone(*position_range_to_clustering_range(position_range(
            position_in_partition::before_key(s.make_ckey(1)),
            position_in_partition::before_key(s.make_ckey(2))), *s.schema()));
    m2.partition().apply_delete(*s.schema(), rt1);
    apply(cache, underlying, m2);

    // State of cache:
    // v2: ROW(0), RT(before(1), before(2))@t1
    // v1: RT(before(1), before(3))@t0

    // range_tombstone_change_generator will work with the stream:
    // RT(before(1), before(2))@t1, RT(before(2), before(3))@t0.
    // It's important that there is an RT which starts exactly at the slice's
    // upper bound to trigger the problem. That RT is in the stream only
    // because it is a residual of RT(before(1), before(3))@t0, which overlaps
    // with the slice in the older version. That's why we need two MVCC
    // versions.

    auto slice = partition_slice_builder(*s.schema())
        .with_range(*position_range_to_clustering_range(position_range(
                position_in_partition::before_key(s.make_ckey(0)),
                position_in_partition::before_key(s.make_ckey(2))), *s.schema()))
        .build();

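    // The reader must emit a closing range_tombstone_change at the slice's
    // upper bound; with the bug, the stream ended with rt1 still active and
    // the reader threw the std::logic_error quoted above.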
    assert_that(cache.make_reader(s.schema(), semaphore.make_permit(), pr, slice))
        .produces_partition_start(pk)
        .produces_row_with_key(s.make_ckey(0))
        .produces_range_tombstone_change(start_change(rt1))
        .produces_range_tombstone_change(end_change(rt1))
        .produces_partition_end()
        .produces_end_of_stream();
}

//
// Tests the following case of eviction and re-population:
//
