[indexer-alt] Add obj_info pipeline #20436
Conversation
Force-pushed from b18dd8d to 1c2254c
Force-pushed from 1c2254c to 6d4de25
```rust
if !latest_live_output_objects.contains_key(object_id) {
    // If an input object is not in the latest live output objects, it must have been deleted
    // or wrapped in this checkpoint. We keep an entry for it in the table.
    // This is necessary when we query objects and iterate over them, so that we don't
```
I am not following here; can you elaborate on the query that needs to use deleted/wrapped entries?
Also, it looks like there is no "marker column" recording whether the object ID is deleted/wrapped. Would that be problematic? For example, if a query asking for 50 IDs ended up getting some deleted objects, would the final graphql response end up with fewer than 50 results?
> can you elaborate on the query that needs to use deleted / wrapped entries?
All queries need to track deleted/wrapped objects: they start by finding candidates that meet the ownership/type criteria, and then check whether there is some later version of the object that supersedes the candidate (in which case we discard it). An object should be considered superseded if it has been deleted or wrapped.
> would that be problematic, for example if a query asking for 50 IDs ended up getting some deleted objects, and as a result end graphql response has < 50 results?
We have this problem today with `object_history` and `objects_snapshot` (or `wal_obj_types` and `sum_obj_types`), and we solve it by pushing the `LIMIT` as close into the query as possible (a SQL sketch follows this list). E.g. if we need to find 50 matching objects, we:
- Select 50 matching candidates from the summary/snapshot table, after removing objects where there's a newer version in the history/wal table.
- Select 50 matching candidates from the history/wal table, after removing objects where there's a newer version in the history/wal table.
- Union both of these together, order by object ID and again limit it to 50 results.
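As a rough illustration of that shape (not the exact queries the indexer runs: the `owner_kind = 0` filter and the assumption that each row carries a `cp_sequence_number` are placeholders):

```sql
(
  -- 50 candidates from the summary/snapshot table, dropping any object
  -- that has a newer row in the history/WAL table.
  SELECT object_id
  FROM sum_obj_types s
  WHERE s.owner_kind = 0 -- placeholder ownership/type filter
    AND NOT EXISTS (
      SELECT 1 FROM wal_obj_types w
      WHERE w.object_id = s.object_id
        AND w.cp_sequence_number > s.cp_sequence_number
    )
  ORDER BY object_id
  LIMIT 50
)
UNION
(
  -- 50 candidates from the history/WAL table, with the same
  -- "no newer version" check against itself.
  SELECT object_id
  FROM wal_obj_types w
  WHERE w.owner_kind = 0
    AND NOT EXISTS (
      SELECT 1 FROM wal_obj_types newer
      WHERE newer.object_id = w.object_id
        AND newer.cp_sequence_number > w.cp_sequence_number
    )
  ORDER BY object_id
  LIMIT 50
)
-- Merge both candidate sets and apply the limit one final time.
ORDER BY object_id
LIMIT 50;
```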
With `obj_info`, it works similarly; it's just that the summary and WAL tables are combined into one (again, see the sketch after this list):
- Select 50 matching candidates from `obj_info`, after removing objects that have been updated at later checkpoints in `obj_info`.
- Map these candidates to their latest versions in the checkpoint they are being viewed at (this is something that @lxfind and I have been struggling with a little bit, in terms of how to make it efficient).
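A corresponding sketch for the combined table (same placeholder filter column; `:view_cp` is an assumed bind parameter for the checkpoint the objects are viewed at):

```sql
SELECT o.object_id
FROM obj_info o
WHERE o.owner_kind = 0 -- placeholder ownership/type filter
  AND o.cp_sequence_number <= :view_cp
  -- Drop candidates superseded at a later checkpoint; this also covers
  -- deleted/wrapped objects, since those keep an entry in the table.
  -- (A real query would additionally exclude candidates whose latest
  -- row is itself a deletion/wrap entry.)
  AND NOT EXISTS (
    SELECT 1 FROM obj_info newer
    WHERE newer.object_id = o.object_id
      AND newer.cp_sequence_number > o.cp_sequence_number
      AND newer.cp_sequence_number <= :view_cp
  )
ORDER BY o.object_id
LIMIT 50;
```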
Force-pushed from 6d4de25 to 8837c4c
Force-pushed from 8837c4c to cf918de
LGTM!
```sql
name TEXT,
-- The type's type parameters, as a BCS-encoded array of TypeTags.
instantiation BYTEA,
PRIMARY KEY (object_id, cp_sequence_number)
```
Should the primary key be the other way around, to support the unfiltered query efficiently?
I have been thinking about this, and I think that it actually should be `(object_id, cp_sequence_number)`, to support the unfiltered query in a way that is consistent with how we deal with filtering.
When we do filter on owner and types, we first filter down to a list of object ID entries that match the filtering condition, bounded by the view checkpoint, and then join with another table where we find the max `cp_sequence_number` for each object ID. That second part, finding the max `cp_sequence_number` for each object ID, is where we need the index to be `(object_id, cp_sequence_number)`. This should not change when the filter is empty.
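For illustration, a minimal sketch of that second step using Postgres's `DISTINCT ON` (the `obj_info` table name comes from this PR; `:view_cp` and the exact column set are assumptions):

```sql
-- Find the latest cp_sequence_number for each object ID as of the view
-- checkpoint. The descending order on both keys can in principle be
-- served by a backward scan of the (object_id, cp_sequence_number) index.
SELECT DISTINCT ON (object_id)
       object_id,
       cp_sequence_number
FROM obj_info
WHERE cp_sequence_number <= :view_cp
ORDER BY object_id DESC, cp_sequence_number DESC;
```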
```rust
Owner::AddressOwner(_) => StoredOwnerKind::Address,
Owner::ObjectOwner(_) => StoredOwnerKind::Object,
Owner::Shared { .. } => StoredOwnerKind::Shared,
Owner::Immutable => StoredOwnerKind::Immutable,
```
(Will need to add the case for `Owner::ConsensusV2` here.)
Force-pushed from cf918de to 5b8a777
Description
Describe the changes or additions included in this PR.
Test plan
How did you test the new or updated feature?
Release notes
Check each box that your changes affect. If none of the boxes relate to your changes, release notes aren't required.
For each box you select, include information after the relevant heading that describes the impact of your changes that a user might notice and any actions they must take to implement updates.
- [ ] Protocol:
- [ ] Nodes (Validators and Full nodes):
- [ ] Indexer:
- [ ] JSON-RPC:
- [ ] GraphQL:
- [ ] CLI:
- [ ] Rust SDK:
- [ ] REST API: