[tablet] reinforce the CountLiveRows API

In the recent patch 13426, I found that the CountLiveRows API is not safe
after the tablet has been shut down. Though the API has not been used by any
real users except test cases, I think it's necessary to add this patch to the
1.10.x release in progress if possible.

Change-Id: I56b25a6acb61564ce089be11a1605a19c25eb9e0
Reviewed-on: http://gerrit.cloudera.org:8080/13734
Tested-by: Kudu Jenkins
Reviewed-by: Andrew Wong <[email protected]>
Reviewed-by: Grant Henke <[email protected]>
(cherry picked from commit 30b88b4)
Reviewed-on: http://gerrit.cloudera.org:8080/13741
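The patch itself is not reproduced in this listing. As a rough illustration of the kind of guard it describes, here is a minimal sketch of a shutdown check on a row-count accessor; the class, names, and signatures are hypothetical, not Kudu's actual Tablet API:

```cpp
#include <cstdint>
#include <mutex>

// Hypothetical minimal tablet illustrating the guard described above.
class Tablet {
 public:
  void Shutdown() {
    std::lock_guard<std::mutex> l(lock_);
    shut_down_ = true;
  }

  // Post-patch behavior: fail cleanly instead of touching state that may
  // already have been released by Shutdown().
  bool CountLiveRows(uint64_t* count) const {
    std::lock_guard<std::mutex> l(lock_);
    if (shut_down_) return false;  // the unsafe pre-patch path skipped this check
    *count = live_row_count_;
    return true;
  }

 private:
  mutable std::mutex lock_;
  bool shut_down_ = false;
  uint64_t live_row_count_ = 0;
};
```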
thirdparty: fix build-if-necessary in tarballs

build-if-necessary, when run from a tarball, uses a build-stamp file for each
build configuration to know whether it needs to re-run. However, a normal
build will build both the 'common' and then the 'tsan' configurations, in
that order. When we go back to check whether 'common' needs a rebuild, we'll
see the tsan build-stamp file and think 'common' needs to be rebuilt. This
fixes the check to exclude other build-stamp files.

Change-Id: Ifc600d065362e902f4f768080e1f91c90b9f0594
Reviewed-on: http://gerrit.cloudera.org:8080/13707
Tested-by: Kudu Jenkins
Reviewed-by: Andrew Wong <[email protected]>
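As a sketch of the corrected staleness check (the actual script is shell; the stamp-file naming, directory walk, and function below are all assumptions for illustration, not the real build-if-necessary logic):

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Returns true if any source file under 'root' is newer than this
// configuration's build-stamp file, i.e. a rebuild is needed.
bool NeedsRebuild(const fs::path& root, const std::string& config) {
  const fs::path stamp = root / (".build-stamp." + config);
  if (!fs::exists(stamp)) return true;
  const auto stamp_time = fs::last_write_time(stamp);
  for (const auto& entry : fs::recursive_directory_iterator(root)) {
    // The fix: skip every configuration's build-stamp file when comparing
    // timestamps, so a later 'tsan' build can't make 'common' look stale.
    if (entry.path().filename().string().rfind(".build-stamp", 0) == 0) {
      continue;
    }
    if (entry.is_regular_file() && entry.last_write_time() > stamp_time) {
      return true;
    }
  }
  return false;
}
```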
Bump version to 1.10.0 (non-SNAPSHOT)

Change-Id: If4c9014deb9d5958bf97dbb6229bb2cb61095e63
Reviewed-on: http://gerrit.cloudera.org:8080/13656
Reviewed-by: Andrew Wong <[email protected]>
Tested-by: Grant Henke <[email protected]>
KUDU-2706: Work around the lack of thread safety in krb5_parse_name()

krb5_init_context() sets the field 'default_realm' in a krb5_context object
to 0. Upon the first call to krb5_parse_name() with a principal that has no
realm specified (e.g. foo/bar), 'default_realm' in the krb5_context object is
lazily initialized.

When more than one negotiation thread is configured, it's possible for
multiple threads to call CanonicalizeKrb5Principal() in parallel.
CanonicalizeKrb5Principal() in turn calls krb5_parse_name(g_krb5_ctx, ...)
with no lock held. In addition, krb5_parse_name() is not thread safe, as it
lazily initializes 'context->default_realm' without holding a lock.
Consequently, 'g_krb5_ctx', which is shared and not supposed to be modified
after initialization, may be inadvertently modified concurrently by multiple
threads, leading to crashes (e.g. double free) or errors.

This change works around the problem by initializing
'g_krb5_ctx->default_realm' once in InitKrb5Ctx() by calling
krb5_get_default_realm().

TODO: Fix the unsafe sharing of 'g_krb5_ctx'. According to the Kerberos
documentation (https://github.com/krb5/krb5/blob/master/doc/threads.txt), any
use of a krb5_context must be confined to one thread at a time by the
application code. The current sharing of 'g_krb5_ctx' between threads without
synchronization is in fact unsafe.

Change-Id: I1bf9224516e2996f51f319088179727f76741ebe
Reviewed-on: http://gerrit.cloudera.org:8080/12545
Reviewed-by: Alexey Serbin <[email protected]>
Tested-by: Kudu Jenkins
(cherry picked from commit 25af98e)
Reviewed-on: http://gerrit.cloudera.org:8080/12607
Reviewed-by: Andrew Wong <[email protected]>
Tested-by: Andrew Wong <[email protected]>
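A minimal sketch of the workaround's core idea: force the lazy 'default_realm' initialization to happen once, at startup, before any negotiation threads can call krb5_parse_name() concurrently. The krb5 calls are real API; the surrounding structure and error handling are illustrative, not Kudu's actual code:

```cpp
#include <krb5.h>
#include <cstdlib>

static krb5_context g_krb5_ctx;

void InitKrb5Ctx() {
  if (krb5_init_context(&g_krb5_ctx) != 0) {
    abort();  // real code would surface an error instead
  }
  // krb5_get_default_realm() populates the context's 'default_realm' if it
  // is still unset, so later krb5_parse_name() calls on realm-less
  // principals never hit the unsynchronized lazy-initialization path.
  char* realm = nullptr;
  if (krb5_get_default_realm(g_krb5_ctx, &realm) == 0) {
    krb5_free_default_realm(g_krb5_ctx, realm);
  }
}
```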
KUDU-2704: Rowsets that are much bigger than the target size discourage
compactions

If rowsets are flushed that are much bigger than the target rowset size, they
may get a negative contribution to their score from the size-based portion of
their valuation in the compaction knapsack problem. This is a problem for two
reasons:

1. It can cause fruitful height-based compactions not to run even though the
   compaction is under budget.
2. In an extreme case, the value of the rowset can become negative, which
   breaks an invariant of the knapsack problem: that item weights be
   nonnegative.

This fixes the issue by flooring the size-based contribution at 0. A
regression test is included that is based on the real-world example I saw. I
also verified that the real-life case I observed was fixed by this patch.

Why do rowsets get flushed "too big"? It could be because the target size was
changed after they were flushed, but I also see almost all rowsets flushed
with a size that is much too big when the number of columns becomes large.
For example, on the cluster where I discovered this problem, a table with 279
columns was flushing 85MB rowsets even though the target size is 32MB. That
issue ought to be investigated, but in the meantime this is a workable fix.
The bug has existed for a long time; the KUDU-2701 fix just made it apparent
because it increased how much rowsets exceed the target size in many cases.

Change-Id: I1771cd3dbbb17c87160a4bc38b48b3fbc7307676
Reviewed-on: http://gerrit.cloudera.org:8080/12538
Reviewed-by: Andrew Wong <[email protected]>
Tested-by: Kudu Jenkins
(cherry picked from commit fad69bb)
Reviewed-on: http://gerrit.cloudera.org:8080/12539
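To illustrate the flooring fix, here is a sketch of a size-based scoring term clamped at zero; the linear form of the bonus and the function name are stand-ins, not Kudu's actual compaction valuation:

```cpp
#include <algorithm>

// Hypothetical size-based term: rowsets below the target size get a bonus
// proportional to how far under the target they are.
double SizeBasedScore(double rowset_size_bytes, double target_size_bytes) {
  double raw = 1.0 - rowset_size_bytes / target_size_bytes;
  // The fix: floor the contribution at 0, so an oversized rowset (e.g. an
  // 85MB rowset against a 32MB target) contributes nothing rather than a
  // negative value, keeping knapsack item values nonnegative.
  return std::max(0.0, raw);
}
```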
docs: update release note for KUDU-2463

Change-Id: Id8dce61da14f67e39f6573fa42ec54809f3ceb19
Reviewed-on: http://gerrit.cloudera.org:8080/11691
Reviewed-by: Grant Henke <[email protected]>
Reviewed-by: Mike Percy <[email protected]>
Tested-by: Andrew Wong <[email protected]>
Add release notes for 1.8.0

Change-Id: I15b0ce686c5e69648fe09a18ca82b9bf54cab837
Reviewed-on: http://gerrit.cloudera.org:8080/11647
Reviewed-by: Dan Burkert <[email protected]>
Tested-by: Kudu Jenkins
Reviewed-by: Alexey Serbin <[email protected]>