This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

Increase sync size to 1s to avoid hitting 429 too many requests issue #21

Open · wants to merge 87 commits into base: main

Conversation

madjarevicn

📝 Summary

Increased the delay from 0.5s to 1s to avoid hitting the rate limit on the Flashbots API.

📚 References


Ruteri and others added 30 commits August 26, 2022 13:44
* Add remote relay connection for getting validator data
* Add block submission to remote relay
* Adjust readme
* Add block build trigger from beacon node

* remove empty block building

Co-authored-by: avalonche <[email protected]>
* fix issue with geth not shutting down (flashbots#97)
* Add eth_callBundle rpc method (flashbots#14)
* flashbots: add eth_estimateGasBundle (flashbots#102)
* feat(ethash): flashbots_getWork RPC with profit (flashbots#106)
* Calculate megabundle as soon as it's received (flashbots#112)
* Add v0.5 specification link (flashbots#118)
* … (flashbots#123)

* Discard reverting megabundle blocks and head change interrupted blocks

* Discard all blocks with incomplete bundles

* Run reverting megabundles regression test separately from bundle tests
@Ruteri (Collaborator) commented on Dec 7, 2022:

You probably want to change this, limiter: rate.NewLimiter(rate.Every(time.Millisecond), 510), to a second instead.
Both should be configurable, either through a build-time constant or through configuration.
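A minimal sketch of what a configurable limiter could look like, assuming golang.org/x/time/rate; the builder.rate_limit_interval flag and the helper below are hypothetical, not from the builder codebase:

```go
package main

import (
	"flag"
	"time"

	"golang.org/x/time/rate"
)

// Hypothetical flag; the builder currently hardcodes the limiter parameters.
var rateLimitInterval = flag.Duration("builder.rate_limit_interval",
	time.Second, "minimum interval between block submissions to the relay")

// newSubmissionLimiter allows one submission per configured interval,
// with no burst beyond a single token.
func newSubmissionLimiter() *rate.Limiter {
	return rate.NewLimiter(rate.Every(*rateLimitInterval), 1)
}
```

The resulting limiter would then replace the hardcoded rate.NewLimiter call in NewBuilder.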

```diff
@@ -81,7 +81,7 @@ func NewBuilder(sk *bls.SecretKey, ds flashbotsextra.IDatabaseService, relay IRe
 		builderPublicKey:     pk,
 		builderSigningDomain: builderSigningDomain,

-		limiter: rate.NewLimiter(rate.Every(time.Millisecond), 510),
+		limiter: rate.NewLimiter(rate.Every(time.Millisecond), 600),
```
Collaborator commented:
I think 510 is enough above 500 to avoid issues most of the time for relays wanting 2 blocks/s.
The bigger problem we could try to solve is that some relays limit to 1 block/s, and we'd want the rate limit to be configurable in that case.
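For reference, a sketch of how intervals map to submission rates with golang.org/x/time/rate (the values are illustrative):

```go
package main

import (
	"time"

	"golang.org/x/time/rate"
)

func exampleLimiters() (twoPerSecond, onePerSecond *rate.Limiter) {
	// ~1.96 submissions/s: a 510ms interval stays just under a relay
	// limit of 2 blocks/s (one block every 500ms).
	twoPerSecond = rate.NewLimiter(rate.Every(510*time.Millisecond), 1)
	// 1 submission/s, for relays that limit to 1 block/s.
	onePerSecond = rate.NewLimiter(rate.Every(time.Second), 1)
	return twoPerSecond, onePerSecond
}
```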

```diff
@@ -286,7 +286,7 @@ func (b *Builder) runBuildingJob(slotCtx context.Context, proposerPubkey boostTy
 	}

 	// resubmits block builder requests every second
-	runRetryLoop(ctx, 500*time.Millisecond, func() {
+	runRetryLoop(ctx, 1*time.Second, func() {
```
Collaborator commented:
This should not impact the relay rate limit, as it's a loop scheduling new blocks, not a loop controlling how often they are submitted.
Again, it would be nice to make it either dynamic based on load or configurable; changing from one hardcoded value to another won't bring much benefit.
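A hedged sketch of what a configurable call site could look like; runRetryLoop below mirrors the builder's helper, while the builder.block_resubmit_interval flag is hypothetical:

```go
package main

import (
	"context"
	"flag"
	"time"
)

// Hypothetical flag; today the call site hardcodes 500ms (or 1s after this PR).
var blockResubmitInterval = flag.Duration("builder.block_resubmit_interval",
	time.Second, "how often the block-building job is rescheduled")

// runRetryLoop mirrors the builder's helper: it calls fn once per
// interval until the context is cancelled.
func runRetryLoop(ctx context.Context, interval time.Duration, fn func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			fn()
		}
	}
}

func main() {
	flag.Parse()
	ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
	defer cancel()
	runRetryLoop(ctx, *blockResubmitInterval, func() {
		// schedule a new block-building attempt here
	})
}
```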

```diff
@@ -106,6 +106,9 @@ func (b *Builder) onSealedBlock(block *types.Block, ordersClosedAt time.Time, se
 	value := new(boostTypes.U256Str)
 	err = value.FromBig(block.Profit)
+
+	log.Info("Block profit for block", value)
```
Contributor commented:

It has to be log.Info("message", "value name", value), otherwise it will print errors.
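A short sketch of the expected call shape with go-ethereum's log package (the "profit" key name is illustrative):

```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/log"
)

func main() {
	profit := big.NewInt(1_000_000_000)
	// log.Info takes a message followed by alternating key/value pairs;
	// a bare value with no key makes the logger emit a normalization error.
	log.Info("Block profit for block", "profit", profit)
}
```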

@metachris (Collaborator) commented:

FYI, the rate limit is now 450 blocks / 5 minutes, so the appropriate internal limit would be 3 block submissions within 2 seconds.
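Under that limit, a matching limiter might look like the sketch below, assuming golang.org/x/time/rate (450 blocks / 300s = 1.5 blocks/s on average):

```go
package main

import (
	"golang.org/x/time/rate"
)

// 450 blocks per 5 minutes averages 1.5 submissions/s; a burst of 3
// permits up to 3 submissions within any 2-second window.
func newRelayLimiter() *rate.Limiter {
	return rate.NewLimiter(rate.Limit(450.0/(5*60)), 3)
}
```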

avalonche pushed a commit that referenced this pull request Feb 7, 2023
avalonche pushed a commit that referenced this pull request Mar 9, 2023
avalonche pushed a commit that referenced this pull request Mar 15, 2023
avalonche pushed a commit that referenced this pull request Mar 17, 2023
avalonche pushed a commit that referenced this pull request Mar 22, 2023
avalonche pushed a commit that referenced this pull request Jul 6, 2023