Refine Ribbon configuration, improve testing, add Homogeneous (facebook#7879)

Summary:
This change only affects non-schema-critical aspects of the production-candidate Ribbon filter. Specifically, it refines the choice of internal configuration parameters based on inputs. The changes are minor enough that the schema tests in bloom_test, some of which depend on these parameters, are unaffected. There are also some minor optimizations and refactorings.

This would be a schema change for "smash" Ribbon, to fix some known issues with small filters, but "smash" Ribbon is not accessible in public APIs. The unit test CompactnessAndBacktrackAndFpRate is updated to test small and medium-large filters. Run with --thoroughness=100 or so for much better detection power (not appropriate for continuous regression testing).

Homogeneous Ribbon:
This change internally adds a Ribbon filter variant we call Homogeneous Ribbon, developed in collaboration with Stefan Walzer. The expected "result" value for every key is zero, instead of being computed from a hash. The entropy that keeps queries from being false positives comes from free variables ("overhead") in the solution structure, which are populated pseudorandomly. Construction is slightly faster because result values need not be tracked, and it never fails. Instead, the FP rate can jump up whenever and wherever entries are packed too tightly. For small structures, we can choose the overhead to make this FP rate jump unlikely, as seen in the updated unit test CompactnessAndBacktrackAndFpRate.
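
To make the difference concrete, here is a minimal query-side sketch (hypothetical names and a plain, non-interleaved solution layout; the production code lives in util/ribbon_alg.h and util/ribbon_impl.h):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using CoeffRow = uint64_t;  // 64-bit Ribbon width for this sketch
using ResultRow = uint8_t;  // 8 solution columns

// Dot product (over GF(2), per column) of the key's coefficient row,
// placed at slot `start`, with the solution structure.
ResultRow SolutionDot(const std::vector<ResultRow>& soln, size_t start,
                      CoeffRow cr) {
  ResultRow r = 0;
  for (size_t i = 0; i < 64; ++i) {
    if ((cr >> i) & 1) {
      r ^= soln[start + i];
    }
  }
  return r;
}

// Standard Ribbon: the expected result row is recomputed from the
// key's hash and must be solved for during construction.
bool StandardQuery(const std::vector<ResultRow>& soln, size_t start,
                   CoeffRow cr, ResultRow expected_from_hash) {
  return SolutionDot(soln, start, cr) == expected_from_hash;
}

// Homogeneous Ribbon: the expected result row is always zero; the
// anti-false-positive entropy comes from pseudorandom values placed
// in unconstrained ("free") solution rows.
bool HomogeneousQuery(const std::vector<ResultRow>& soln, size_t start,
                      CoeffRow cr) {
  return SolutionDot(soln, start, cr) == 0;
}
```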

Unlike standard Ribbon, Homogeneous Ribbon seems to scale to an arbitrary number of keys if one accepts an FP rate penalty from small pockets of high FP rate in the structure. For example, 64-bit Ribbon with 8 solution columns and 10% allocated space overhead for slots seems to achieve about 10.5% space overhead vs. the information-theoretic minimum based on its observed FP rate, with expected pockets of degradation. (The observed FP rate is close to 1/256.) If targeting a higher FP rate with fewer solution columns, Homogeneous Ribbon can be even more space efficient, because the penalty from degradation is relatively smaller. If targeting a lower FP rate, Homogeneous Ribbon is less space efficient, as more allocated overhead is needed to keep the FP rate impact of degradation under control. The new OptimizeHomogAtScale tool in ribbon_test helps find these optimal allocation overheads for different numbers of solution columns, and for different Ribbon widths; 128-bit Ribbon apparently cuts space overhead roughly in half vs. 64-bit.
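
As a rough check on that overhead claim (a back-of-envelope sketch; the observed FP rate used here is an assumed illustrative value, not a measurement):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  const double columns = 8.0;         // solution columns = bits per slot
  const double slot_overhead = 0.10;  // 10% more slots than keys
  const double bits_per_key = columns * (1.0 + slot_overhead);  // 8.8

  // Suppose degradation pushes the observed FP rate slightly above
  // 2^-8, say ~1/250 (illustrative).
  const double observed_fp = 1.0 / 250.0;
  const double info_min = std::log2(1.0 / observed_fp);  // ~7.97 bits/key

  // Space overhead vs. information-theoretic minimum for that FP rate.
  std::printf("overhead = %.1f%%\n",
              (bits_per_key / info_min - 1.0) * 100.0);  // ~10.5%
  return 0;
}
```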

Other misc item specifics:
* Ribbon APIs in util/ribbon_config.h now provide configuration data for not just 5% construction failure rate (95% success), but also 50% and 0.1%.
  * Note that the Ribbon structure does not exhibit "threshold" behavior as the standard Xor filter does, so there is a roughly fixed space penalty to cut the construction failure rate in half (e.g., going from 50% to 0.1% failure costs roughly nine such increments, since 0.1% ≈ 50% / 2^9). Thus, there isn't really an "almost sure" setting.
  * Although we can extrapolate settings for large filters, we don't have a good formula for configuring smaller filters (< 2^17 slots or so), and efforts to summarize the behavior with a formula have failed. Thus, configuration data for small filters is hard-coded, generated by the updated FindOccupancy tool.
* Enhance ApproximateNumEntries for the public-API Ribbon using more precise data (new API GetNumToAdd), making it a more accurate, though not perfect, reversal of CalculateSpace. (bloom_test is updated to expect the greater precision.)
* Move EndianSwapValue from coding.h to coding_lean.h to keep Ribbon code easily transferable from RocksDB
* Add some missing 'const' to member functions
* Small optimization to 128-bit BitParity
* Small refactoring of BandingStorage in ribbon_alg.h to support Homogeneous Ribbon
* CompactnessAndBacktrackAndFpRate now has an "expand" test: on construction failure, a possible alternative to re-seeding the hash functions is simply to increase the number of slots (allocated space overhead) and try again with essentially the same hash values. (Start locations will be different roundings of the same scaled hash values, because they come from fastrange rather than mod.) This seems to be as effective as, or more effective than, re-seeding, as long as we increase the number of slots (m) by roughly m += m/w, where w is the Ribbon width. This way, there is effectively an expansion by one slot for each ribbon-width window in the banding; see the sketch after this list. (This approach assumes that getting "bad data" from your hash function is as unlikely as it naturally should be, e.g. no adversary.)
* 32-bit and 16-bit Ribbon configurations are added to ribbon_test for understanding their behavior, e.g. with FindOccupancy. They are not considered useful at this time and not tested with CompactnessAndBacktrackAndFpRate.
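
A minimal sketch of the expand-on-failure strategy mentioned above (hypothetical names; the real banding lives in util/ribbon_impl.h):

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>

constexpr uint64_t kRibbonWidth = 64;  // w

// fastrange: maps a 64-bit hash into [0, n) by multiplication rather
// than mod, so start locations become different roundings of the same
// scaled hash values as n grows. (__int128 is a GCC/Clang extension.)
inline uint64_t FastRange64(uint64_t hash, uint64_t n) {
  return static_cast<uint64_t>(
      (static_cast<unsigned __int128>(hash) * n) >> 64);
}

// try_banding is assumed to attempt banding of all keys into the given
// number of slots, deriving each start location with
// FastRange64(hash, usable_slots), and to return whether it succeeded.
uint64_t BuildWithExpansion(
    uint64_t num_slots,
    const std::function<bool(uint64_t)>& try_banding) {
  while (!try_banding(num_slots)) {
    // Expand by ~one slot per ribbon-width window (m += m/w) and retry
    // with essentially the same hash values, rather than re-seeding.
    num_slots += std::max<uint64_t>(1, num_slots / kRibbonWidth);
  }
  return num_slots;
}
```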

Pull Request resolved: facebook#7879

Test Plan: unit test updates included

Reviewed By: jay-zhuang

Differential Revision: D26371245

Pulled By: pdillinger

fbshipit-source-id: da6600d90a3785b99ad17a88b2a3027710b4ea3a
pdillinger authored and facebook-github-bot committed Feb 26, 2021
1 parent c370d8a commit a8b3b9a
Showing 13 changed files with 1,614 additions and 509 deletions.
1 change: 1 addition & 0 deletions CMakeLists.txt
@@ -766,6 +766,7 @@ set(SOURCES
util/murmurhash.cc
util/random.cc
util/rate_limiter.cc
util/ribbon_config.cc
util/slice.cc
util/file_checksum_helper.cc
util/status.cc
2 changes: 2 additions & 0 deletions TARGETS
@@ -342,6 +342,7 @@ cpp_library(
"util/murmurhash.cc",
"util/random.cc",
"util/rate_limiter.cc",
"util/ribbon_config.cc",
"util/slice.cc",
"util/status.cc",
"util/string_util.cc",
@@ -647,6 +648,7 @@ cpp_library(
"util/murmurhash.cc",
"util/random.cc",
"util/rate_limiter.cc",
"util/ribbon_config.cc",
"util/slice.cc",
"util/status.cc",
"util/string_util.cc",
1 change: 1 addition & 0 deletions src.mk
@@ -208,6 +208,7 @@ LIB_SOURCES = \
util/murmurhash.cc \
util/random.cc \
util/rate_limiter.cc \
util/ribbon_config.cc \
util/slice.cc \
util/file_checksum_helper.cc \
util/status.cc \
15 changes: 6 additions & 9 deletions table/block_based/filter_policy.cc
@@ -21,6 +21,7 @@
#include "util/bloom_impl.h"
#include "util/coding.h"
#include "util/hash.h"
#include "util/ribbon_config.h"
#include "util/ribbon_impl.h"

namespace ROCKSDB_NAMESPACE {
@@ -399,6 +400,7 @@ struct Standard128RibbonRehasherTypesAndSettings {
// These are schema-critical. Any change almost certainly changes
// underlying data.
static constexpr bool kIsFilter = true;
static constexpr bool kHomogeneous = false;
static constexpr bool kFirstCoeffAlwaysOne = true;
static constexpr bool kUseSmash = false;
using CoeffRow = ROCKSDB_NAMESPACE::Unsigned128;
@@ -598,8 +600,7 @@ class Standard128RibbonBitsBuilder : public XXH3pFilterBitsBuilder {

// Let's not bother accounting for overflow to Bloom filter
// (Includes NaN case)
if (!(max_slots <
BandingType::GetNumSlotsFor95PctSuccess(kMaxRibbonEntries))) {
if (!(max_slots < ConfigHelper::GetNumSlots(kMaxRibbonEntries))) {
return kMaxRibbonEntries;
}

@@ -628,12 +629,7 @@ class Standard128RibbonBitsBuilder : public XXH3pFilterBitsBuilder {
slots = SolnType::RoundDownNumSlots(slots - 1);
}

// Using slots instead of entries to get overhead factor estimate
double f = BandingType::GetFactorFor95PctSuccess(slots);
uint32_t num_entries = static_cast<uint32_t>(slots / f);
// Improve precision with another round
f = BandingType::GetFactorFor95PctSuccess(num_entries);
num_entries = static_cast<uint32_t>(slots / f + 0.999999999);
uint32_t num_entries = ConfigHelper::GetNumToAdd(slots);

// Consider possible Bloom fallback for small filters
if (slots < 1024) {
@@ -675,9 +671,10 @@ class Standard128RibbonBitsBuilder : public XXH3pFilterBitsBuilder {
using TS = Standard128RibbonTypesAndSettings;
using SolnType = ribbon::SerializableInterleavedSolution<TS>;
using BandingType = ribbon::StandardBanding<TS>;
using ConfigHelper = ribbon::BandingConfigHelper1TS<ribbon::kOneIn20, TS>;

static uint32_t NumEntriesToNumSlots(uint32_t num_entries) {
uint32_t num_slots1 = BandingType::GetNumSlotsFor95PctSuccess(num_entries);
uint32_t num_slots1 = ConfigHelper::GetNumSlots(num_entries);
return SolnType::RoundUpNumSlots(num_slots1);
}

4 changes: 2 additions & 2 deletions util/bloom_test.cc
@@ -431,10 +431,10 @@ TEST_P(FullBloomTest, FilterSize) {
size_t n2 = bits_builder->ApproximateNumEntries(space);
EXPECT_GE(n2, n);
size_t space2 = bits_builder->CalculateSpace(n2);
if (n > 6000 && GetParam() == BloomFilterPolicy::kStandard128Ribbon) {
if (n > 12000 && GetParam() == BloomFilterPolicy::kStandard128Ribbon) {
// TODO(peterd): better approximation?
EXPECT_GE(space2, space);
EXPECT_LE(space2 * 0.98 - 16.0, space * 1.0);
EXPECT_LE(space2 * 0.998, space * 1.0);
} else {
EXPECT_EQ(space2, space);
}
32 changes: 0 additions & 32 deletions util/coding.h
@@ -320,38 +320,6 @@ inline bool GetVarsignedint64(Slice* input, int64_t* value) {
}
}

// Swaps between big and little endian. Can be used in combination
// with the little-endian encoding/decoding functions to encode/decode
// big endian.
template <typename T>
inline T EndianSwapValue(T v) {
static_assert(std::is_integral<T>::value, "non-integral type");

#ifdef _MSC_VER
if (sizeof(T) == 2) {
return static_cast<T>(_byteswap_ushort(static_cast<uint16_t>(v)));
} else if (sizeof(T) == 4) {
return static_cast<T>(_byteswap_ulong(static_cast<uint32_t>(v)));
} else if (sizeof(T) == 8) {
return static_cast<T>(_byteswap_uint64(static_cast<uint64_t>(v)));
}
#else
if (sizeof(T) == 2) {
return static_cast<T>(__builtin_bswap16(static_cast<uint16_t>(v)));
} else if (sizeof(T) == 4) {
return static_cast<T>(__builtin_bswap32(static_cast<uint32_t>(v)));
} else if (sizeof(T) == 8) {
return static_cast<T>(__builtin_bswap64(static_cast<uint64_t>(v)));
}
#endif
// Recognized by clang as bswap, but not by gcc :(
T ret_val = 0;
for (size_t i = 0; i < sizeof(T); ++i) {
ret_val |= ((v >> (8 * i)) & 0xff) << (8 * (sizeof(T) - 1 - i));
}
return ret_val;
}

inline bool GetLengthPrefixedSlice(Slice* input, Slice* result) {
uint32_t len = 0;
if (GetVarint32(input, &len) && input->size() >= len) {
32 changes: 32 additions & 0 deletions util/coding_lean.h
@@ -98,4 +98,36 @@ inline uint64_t DecodeFixed64(const char* ptr) {
}
}

// Swaps between big and little endian. Can be used in combination
// with the little-endian encoding/decoding functions to encode/decode
// big endian.
template <typename T>
inline T EndianSwapValue(T v) {
static_assert(std::is_integral<T>::value, "non-integral type");

#ifdef _MSC_VER
if (sizeof(T) == 2) {
return static_cast<T>(_byteswap_ushort(static_cast<uint16_t>(v)));
} else if (sizeof(T) == 4) {
return static_cast<T>(_byteswap_ulong(static_cast<uint32_t>(v)));
} else if (sizeof(T) == 8) {
return static_cast<T>(_byteswap_uint64(static_cast<uint64_t>(v)));
}
#else
if (sizeof(T) == 2) {
return static_cast<T>(__builtin_bswap16(static_cast<uint16_t>(v)));
} else if (sizeof(T) == 4) {
return static_cast<T>(__builtin_bswap32(static_cast<uint32_t>(v)));
} else if (sizeof(T) == 8) {
return static_cast<T>(__builtin_bswap64(static_cast<uint64_t>(v)));
}
#endif
// Recognized by clang as bswap, but not by gcc :(
T ret_val = 0;
for (size_t i = 0; i < sizeof(T); ++i) {
ret_val |= ((v >> (8 * i)) & 0xff) << (8 * (sizeof(T) - 1 - i));
}
return ret_val;
}

} // namespace ROCKSDB_NAMESPACE
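
For context on the move, a usage sketch (not part of this commit) combining EndianSwapValue with the fixed-width helpers already in coding_lean.h to produce big-endian encodings, e.g. for keys that must sort numerically under lexicographic comparison:

```cpp
#include "util/coding_lean.h"

// EncodeFixed64 writes little-endian bytes, so swapping first yields a
// big-endian encoding; DecodeFixed64 plus a swap reverses it.
void EncodeFixed64BigEndian(char* dst, uint64_t value) {
  ROCKSDB_NAMESPACE::EncodeFixed64(
      dst, ROCKSDB_NAMESPACE::EndianSwapValue(value));
}

uint64_t DecodeFixed64BigEndian(const char* src) {
  return ROCKSDB_NAMESPACE::EndianSwapValue(
      ROCKSDB_NAMESPACE::DecodeFixed64(src));
}
```
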
2 changes: 1 addition & 1 deletion util/math128.h
@@ -215,7 +215,7 @@ inline int BitsSetToOne(Unsigned128 v) {

template <>
inline int BitParity(Unsigned128 v) {
return BitParity(Lower64of128(v)) ^ BitParity(Upper64of128(v));
return BitParity(Lower64of128(v) ^ Upper64of128(v));
}

template <typename T>
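
The one-line BitParity change above is valid because XOR preserves parity: parity(a) ^ parity(b) == parity(a ^ b), so one parity over the XOR of the two 64-bit halves replaces two parities. A quick standalone check (not RocksDB code; __builtin_parityll is a GCC/Clang builtin):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  const uint64_t lo = 0x0123456789abcdefULL;
  const uint64_t hi = 0xfedcba9876543210ULL;
  // parity(lo) ^ parity(hi) == parity(lo ^ hi)
  assert((__builtin_parityll(lo) ^ __builtin_parityll(hi)) ==
         __builtin_parityll(lo ^ hi));
  return 0;
}
```
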
51 changes: 28 additions & 23 deletions util/ribbon_alg.h
@@ -8,6 +8,7 @@
#include <array>
#include <memory>

#include "rocksdb/rocksdb_namespace.h"
#include "util/math128.h"

namespace ROCKSDB_NAMESPACE {
@@ -501,12 +502,13 @@ namespace ribbon {
// // slot index i.
// void Prefetch(Index i) const;
//
// // Returns a pointer to CoeffRow for slot index i.
// CoeffRow* CoeffRowPtr(Index i);
//
// // Returns a pointer to ResultRow for slot index i. (Gaussian row
// // operations involve both sides of the equation.)
// ResultRow* ResultRowPtr(Index i);
// // Load or store CoeffRow and ResultRow for slot index i.
// // (Gaussian row operations involve both sides of the equation.)
// // Bool `for_back_subst` indicates that customizing values for
// // unconstrained solution rows (cr == 0) is allowed.
// void LoadRow(Index i, CoeffRow *cr, ResultRow *rr, bool for_back_subst)
// const;
// void StoreRow(Index i, CoeffRow cr, ResultRow rr);
//
// // Returns the number of columns that can start an r-sequence of
// // coefficients, which is the number of slots minus r (kCoeffBits)
@@ -548,6 +550,7 @@ bool BandingAdd(BandingStorage *bs, typename BandingStorage::Index start,
typename BandingStorage::CoeffRow cr, BacktrackStorage *bts,
typename BandingStorage::Index *backtrack_pos) {
using CoeffRow = typename BandingStorage::CoeffRow;
using ResultRow = typename BandingStorage::ResultRow;
using Index = typename BandingStorage::Index;

Index i = start;
@@ -561,18 +564,19 @@

for (;;) {
assert((cr & 1) == 1);
CoeffRow other = *(bs->CoeffRowPtr(i));
if (other == 0) {
*(bs->CoeffRowPtr(i)) = cr;
*(bs->ResultRowPtr(i)) = rr;
CoeffRow cr_at_i;
ResultRow rr_at_i;
bs->LoadRow(i, &cr_at_i, &rr_at_i, /* for_back_subst */ false);
if (cr_at_i == 0) {
bs->StoreRow(i, cr, rr);
bts->BacktrackPut(*backtrack_pos, i);
++*backtrack_pos;
return true;
}
assert((other & 1) == 1);
assert((cr_at_i & 1) == 1);
// Gaussian row reduction
cr ^= other;
rr ^= *(bs->ResultRowPtr(i));
cr ^= cr_at_i;
rr ^= rr_at_i;
if (cr == 0) {
// Inconsistency or (less likely) redundancy
break;
@@ -678,12 +682,11 @@ bool BandingAddRange(BandingStorage *bs, BacktrackStorage *bts,
while (backtrack_pos > 0) {
--backtrack_pos;
Index i = bts->BacktrackGet(backtrack_pos);
*(bs->CoeffRowPtr(i)) = 0;
// Not strictly required, but is required for good FP rate on
// inputs that might have been backtracked out. (We don't want
// anything we've backtracked on to leak into final result, as
// that might not be "harmless".)
*(bs->ResultRowPtr(i)) = 0;
// Clearing the ResultRow is not strictly required, but is required
// for good FP rate on inputs that might have been backtracked out.
// (We don't want anything we've backtracked on to leak into final
// result, as that might not be "harmless".)
bs->StoreRow(i, 0, 0);
}
}
return false;
@@ -780,8 +783,9 @@ void SimpleBackSubst(SimpleSolutionStorage *sss, const BandingStorage &bs) {

for (Index i = num_slots; i > 0;) {
--i;
CoeffRow cr = *const_cast<BandingStorage &>(bs).CoeffRowPtr(i);
ResultRow rr = *const_cast<BandingStorage &>(bs).ResultRowPtr(i);
CoeffRow cr;
ResultRow rr;
bs.LoadRow(i, &cr, &rr, /* for_back_subst */ true);
// solution row
ResultRow sr = 0;
for (Index j = 0; j < kResultBits; ++j) {
@@ -976,8 +980,9 @@ inline void BackSubstBlock(typename BandingStorage::CoeffRow *state,

for (Index i = start_slot + kCoeffBits; i > start_slot;) {
--i;
CoeffRow cr = *const_cast<BandingStorage &>(bs).CoeffRowPtr(i);
ResultRow rr = *const_cast<BandingStorage &>(bs).ResultRowPtr(i);
CoeffRow cr;
ResultRow rr;
bs.LoadRow(i, &cr, &rr, /* for_back_subst */ true);
for (Index j = 0; j < num_columns; ++j) {
// Compute next solution bit at row i, column j (see derivation below)
CoeffRow tmp = state[j] << 1;
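
A note on the LoadRow/StoreRow refactoring above: the for_back_subst flag is what lets Homogeneous Ribbon supply pseudorandom result values for unconstrained rows during back-substitution. A minimal sketch of that idea (hypothetical names and an illustrative hash; not the ribbon_impl.h implementation):

```cpp
#include <cstdint>

using CoeffRow = uint64_t;
using ResultRow = uint8_t;

// Homogeneous banding need not store result rows at all.
struct HomogRow {
  CoeffRow cr;
};

void LoadRow(const HomogRow* rows, uint64_t i, CoeffRow* cr, ResultRow* rr,
             bool for_back_subst) {
  *cr = rows[i].cr;
  if (for_back_subst && *cr == 0) {
    // Unconstrained ("free") row: customize with a cheap pseudorandom
    // value derived from the slot index, supplying the entropy that
    // keeps queries from being false positives.
    uint64_t h = i * 0x9E3779B97F4A7C15ULL;  // illustrative mix
    *rr = static_cast<ResultRow>(h >> 56);
  } else {
    // Homogeneous: the expected result for every added key is zero.
    *rr = 0;
  }
}
```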