fix(backfill): chunk up writes to DB (#19699)
## Description

When a given range of transactions generated too many affected object
IDs, the backfill script would crash, because Postgres' wire protocol
limits how many bind parameters a single statement can carry.

By chunking up the writes into batches of 1000, we avoid hitting this
limit.
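For context on the batch size: Postgres' extended query protocol encodes the bind-parameter count of a statement as a 16-bit integer, so a single `INSERT` can carry at most 65,535 bound values, and a multi-row insert spends one parameter per column per row. The sketch below (standalone Rust, with a hypothetical column count standing in for the real table width) shows why batches of 1,000 rows stay comfortably under that ceiling:

```rust
fn main() {
    // Postgres' extended query protocol uses a 16-bit count for bind
    // parameters, capping a single statement at 65,535 of them.
    const MAX_PARAMS: usize = 65_535;
    // Hypothetical column count per inserted row; the real
    // tx_affected_objects table has its own width.
    const COLS_PER_ROW: usize = 4;

    // Stand-in for the affected-object rows built by the backfill.
    let rows: Vec<u64> = (0..10_000).collect();

    let mut batches = 0;
    for chunk in rows.chunks(1000) {
        // Each batch binds chunk.len() * COLS_PER_ROW parameters,
        // well under the protocol limit.
        assert!(chunk.len() * COLS_PER_ROW <= MAX_PARAMS);
        batches += 1;
    }
    // 10,000 rows split into batches of 1,000.
    assert_eq!(batches, 10);
}
```

With 4 columns, even the protocol maximum would allow ~16,000 rows per statement, so a 1,000-row batch leaves ample headroom for wider tables.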

## Test plan

Running this backfill against the DBs now -- it is no longer crashing.

---

## Release notes

Check each box that your changes affect. If none of the boxes relate to
your changes, release notes aren't required.

For each box you select, include information after the relevant heading
that describes the impact of your changes that a user might notice and
any actions they must take to implement updates.

- [ ] Protocol: 
- [ ] Nodes (Validators and Full nodes): 
- [ ] Indexer: 
- [ ] JSON-RPC: 
- [ ] GraphQL: 
- [ ] CLI: 
- [ ] Rust SDK:
- [ ] REST API:
amnn authored Oct 3, 2024
1 parent 2e58d1d commit 66f9da1
Showing 1 changed file with 8 additions and 6 deletions.
```diff
@@ -56,11 +56,13 @@ impl BackfillTask for TxAffectedObjectsBackfill {
             })
             .collect();

-        diesel::insert_into(tx_affected_objects::table)
-            .values(&affected_objects)
-            .on_conflict_do_nothing()
-            .execute(&mut conn)
-            .await
-            .unwrap();
+        for chunk in affected_objects.chunks(1000) {
+            diesel::insert_into(tx_affected_objects::table)
+                .values(chunk)
+                .on_conflict_do_nothing()
+                .execute(&mut conn)
+                .await
+                .unwrap();
+        }
     }
 }
```
