[indexer-alt] Add pruner pipeline for obj_info #20539
Conversation
Some thoughts/questions on consistency between impls but mostly I'm curious/scared about issuing each delete as its own request to the DB 😬
@@ -246,7 +248,7 @@ impl ConcurrentLayer {
             (None, _) | (_, None) => None,
             (Some(pruner), Some(base)) => Some(pruner.finish(base)),
         },
-        checkpoint_lag: self.checkpoint_lag.or(base.checkpoint_lag),
+        checkpoint_lag: base.checkpoint_lag,
It would be nice to keep the logic here consistent with the sequential layer (although I don't particularly mind whether we permit overriding per pipeline or not) -- is there a reason we can't do that?
Because ConcurrentLayer does not have the checkpoint_lag field.
But you added checkpoint_lag to ConcurrentConfig, right? Why not add it to ConcurrentLayer? Even if it's unlikely that someone would want to set a checkpoint lag for all concurrent layers, it seems confusing to selectively offer the override logic -- it breaks the mental model for people who are interacting with this mainly by reading the configs.
An alternative is to remove this field entirely, even from ConcurrentConfig.
The primary intention is indeed to make sure that users cannot specify a global lag for all concurrent pipelines, since that does not make sense. This field comes from the consistency layer, not the concurrent layer.
Ended up adding it to ConcurrentLayer. I realize this is not part of the global indexer config, so it's probably fine.
// TODO: We could consider making this more efficient by doing some grouping in the collector
// so that we could merge as many objects as possible across checkpoints.
We can maybe make this slightly better by setting a long commit interval? That way each individual delete might clean up multiple old versions of a given object?
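The grouping idea can be sketched in isolation (hypothetical names and simplified types, not the actual indexer code): collapse the (object_id, checkpoint) pairs so each object is pruned once, up to the highest exclusive bound seen for it, rather than once per checkpoint.

```rust
use std::collections::HashMap;

/// Collapse per-checkpoint prune entries so each object appears once,
/// keyed by the highest exclusive checkpoint bound seen for it.
/// One DELETE per object then covers every stale version at once.
fn merge_prune_bounds(to_prune: Vec<([u8; 32], u64)>) -> HashMap<[u8; 32], u64> {
    let mut merged: HashMap<[u8; 32], u64> = HashMap::new();
    for (object_id, cp_exclusive) in to_prune {
        let entry = merged.entry(object_id).or_insert(0);
        *entry = (*entry).max(cp_exclusive);
    }
    merged
}

fn main() {
    let id = [1u8; 32];
    // Three checkpoints touched the same object; only one delete is needed,
    // bounded by the largest exclusive checkpoint.
    let merged = merge_prune_bounds(vec![(id, 5), (id, 9), (id, 7)]);
    assert_eq!(merged[&id], 9);
    assert_eq!(merged.len(), 1);
    println!("merged bound = {}", merged[&id]);
}
```

A longer commit interval would have the same effect indirectly, by letting more checkpoints accumulate before each delete runs.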
});
let mut committed_rows = 0;
for (object_id, cp_sequence_number_exclusive) in to_prune {
    committed_rows += diesel::delete(obj_info::table)
I'm ...pretty nervous about this.
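One way to ease that worry, sketched here with a hypothetical helper that folds all the (object_id, cp) pairs into a single statement (the PR itself uses the diesel DSL, and real code would bind parameters rather than format them into the string):

```rust
/// Build one DELETE covering every (object_id, cp_exclusive) pair,
/// instead of issuing a round-trip per object. Identifiers here are
/// hypothetical; a production version would use bound parameters.
fn build_batched_delete(to_prune: &[(String, u64)]) -> String {
    let predicates: Vec<String> = to_prune
        .iter()
        .map(|(object_id, cp)| {
            format!("(object_id = '\\x{object_id}' AND cp_sequence_number < {cp})")
        })
        .collect();
    format!("DELETE FROM obj_info WHERE {}", predicates.join(" OR "))
}

fn main() {
    let sql = build_batched_delete(&[("ab".into(), 5), ("cd".into(), 9)]);
    println!("{sql}");
    assert!(sql.starts_with("DELETE FROM obj_info WHERE"));
    assert!(sql.contains(" OR "));
}
```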
pub(crate) enum ProcessedObjInfoUpdate {
    Insert(Object),
    Delete(ObjectID),
}

pub(crate) struct ProcessedObjInfo {
    pub cp_sequence_number: u64,
    pub update: ProcessedObjInfoUpdate,
}
nit: Could you move the top-level elements around for consistent file order? types at the top, then impl blocks, then trait impls, then free functions.
nit (optional): I think I started with something quite similar to this pattern for StoredObjectUpdate, but ended up going for something more like this (not using a special enum type and pulling the object_id into the outer struct) as it made some things neater:

pub(crate) struct ProcessedObjInfo {
    pub object_id: ObjectID,
    pub cp_sequence_number: u64,
    pub update: Option<Object>,
}
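For comparison, a sketch (with stub types standing in for the real Object and ObjectID, not the PR's code) of how the flattened shape reads at the use site: one match on the Option instead of a dedicated enum.

```rust
// Stub types standing in for the real sui types.
type ObjectID = [u8; 32];
struct Object {
    version: u64,
}

struct ProcessedObjInfo {
    object_id: ObjectID,
    cp_sequence_number: u64,
    update: Option<Object>, // Some = insert/update, None = delete
}

fn describe(info: &ProcessedObjInfo) -> String {
    match &info.update {
        Some(obj) => format!(
            "upsert object {:02x?} at version {} (cp {})",
            &info.object_id[..2], obj.version, info.cp_sequence_number
        ),
        None => format!(
            "delete object {:02x?} (cp {})",
            &info.object_id[..2], info.cp_sequence_number
        ),
    }
}

fn main() {
    let upsert = ProcessedObjInfo {
        object_id: [1; 32],
        cp_sequence_number: 10,
        update: Some(Object { version: 3 }),
    };
    let delete = ProcessedObjInfo {
        object_id: [2; 32],
        cp_sequence_number: 11,
        update: None,
    };
    assert!(describe(&upsert).starts_with("upsert"));
    assert!(describe(&delete).starts_with("delete"));
    println!("{}\n{}", describe(&upsert), describe(&delete));
}
```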
nit: Could you move the top-level elements around for consistent file order? types at the top, then impl blocks, then trait impls, then free functions.
hmm, isn't it always better for impls to be right next to their struct definition? Otherwise it makes them difficult to locate.
This ordering of elements in the file helps in two ways:
- It's consistent across the codebase (in this case, in sui-indexer-alt/sui-graphql-rpc/sui-package-resolver), which helps to orient quickly (the reason I noticed this was that I looked for ProcessedObjInfo at the top of the file when I saw it mentioned and I couldn't find it there).
- It's a forcing function for splitting up files when they get large enough that a type is far away from its impl block -- otherwise it becomes very easy to concatenate multiple clusters of types with their impl blocks into a file that grows arbitrarily long without noticing, because you view each cluster as its own unit. It only becomes a problem when someone tries to take a holistic view of the file, and that person struggles.
Am I understanding correctly that this pruning implementation is unique to the obj_info table, and is needed because we cannot directly prune this table based on checkpoint sequence number, as objects in the pruning range may still be considered live object info?
Yes, that is correct!
Description
This PR implements the obj_info_pruner pipeline.
I refactored the obj_info pipeline so that the two pipelines can share the same process function logic.
obj_info_pruner looks at every (obj_id, checkpoint) pair produced by obj_info processing and prunes accordingly. Within each commit it prunes sequentially, which is less than ideal, but hopefully with enough concurrency at the outer layer this is not a huge problem.
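The pruning rule described above can be simulated in memory (a sketch with simplified types, not the actual diesel query): for each (obj_id, checkpoint) pair emitted by processing, delete every row for that object with a strictly smaller checkpoint, so the row at the bound survives because it may still describe the live object.

```rust
/// Simplified obj_info row: (object_id, cp_sequence_number).
type Row = (u32, u64);

/// For each (object_id, cp_exclusive) pair, drop rows for that object
/// whose checkpoint is strictly below the bound; the row at the bound
/// itself survives, since it may still be the live object info.
fn prune(rows: &mut Vec<Row>, to_prune: &[(u32, u64)]) {
    for &(object_id, cp_exclusive) in to_prune {
        rows.retain(|&(id, cp)| id != object_id || cp >= cp_exclusive);
    }
}

fn main() {
    // Object 7 was updated at checkpoints 1, 4 and 9; object 8 at checkpoint 4.
    let mut rows = vec![(7, 1), (7, 4), (7, 9), (8, 4)];
    // Processing checkpoint 9 tells us object 7's versions before cp 9 are stale.
    prune(&mut rows, &[(7, 9)]);
    assert_eq!(rows, vec![(7, 9), (8, 4)]);
    println!("{rows:?}");
}
```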
Test plan
How did you test the new or updated feature?
Release notes
Check each box that your changes affect. If none of the boxes relate to your changes, release notes aren't required.
For each box you select, include information after the relevant heading that describes the impact of your changes that a user might notice and any actions they must take to implement updates.