How exactly will Block Producers implement their own rate-limiting algorithm in practice? Will they each have to come up with their own solution for that?
Since smart contracts are deployed once and executed many times, could each contract's WASM instructions be indexed into a bandwidth class based on preliminary tests run before deployment to the main blockchain? One example solution: preprocess every smart contract by running it on a testnet to measure its resource consumption, then assign it a resource-consumption classifier once it is deployed to the main blockchain. A rough sketch of what that could look like follows below.
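A minimal sketch of such a classifier assignment, assuming hypothetical testnet measurements (`avg_cpu_us`, `avg_net_bytes`, `avg_ram_bytes`) and made-up thresholds; none of these names or numbers come from the EOSIO codebase:

```cpp
#include <cstdint>

// Hypothetical per-contract measurements gathered from a testnet run (not EOSIO APIs).
struct testnet_profile {
    uint64_t avg_cpu_us;    // average CPU microseconds per action
    uint64_t avg_net_bytes; // average network bytes per action
    uint64_t avg_ram_bytes; // average RAM delta per action
};

// Illustrative bandwidth classes a contract could be indexed into.
enum class resource_class : uint8_t { light, medium, heavy };

// Assign a class from the dominant resource, using invented thresholds.
resource_class classify(const testnet_profile& p) {
    if (p.avg_cpu_us > 5000 || p.avg_net_bytes > 16384 || p.avg_ram_bytes > 65536)
        return resource_class::heavy;
    if (p.avg_cpu_us > 500 || p.avg_net_bytes > 2048 || p.avg_ram_bytes > 8192)
        return resource_class::medium;
    return resource_class::light;
}
```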
So the EOSIO software would only need to compute N × M to get the maximum transactions allowed per block, where N is the vector of transaction counts, M = [resource classifier 1, resource classifier 2, ...], and × is a standard matrix/vector operation. That is more work than simply counting N transactions, but the performance hit on blockchain speed should be negligible in the grand scheme of things.
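As a rough illustration of that N × M step, the dot product of per-class transaction counts with per-class weights gives a block's total resource cost, which a producer could compare against a fixed budget. The weights and budget below are invented for illustration, not values from the EOSIO software:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical per-class weights M (relative cost units): light, medium, heavy.
constexpr std::array<uint64_t, 3> class_weights = {1, 4, 16};

// N: how many pending transactions fall into each class.
using class_counts = std::array<uint64_t, 3>;

// The "N x M" step: a dot product giving the block's total resource cost.
uint64_t block_cost(const class_counts& n) {
    uint64_t cost = 0;
    for (std::size_t i = 0; i < n.size(); ++i)
        cost += n[i] * class_weights[i];
    return cost;
}

// A producer could keep accepting transactions while the cost fits the budget.
bool fits_in_block(const class_counts& n, uint64_t block_budget = 1000) {
    return block_cost(n) <= block_budget;
}
```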
Just one bare-bones example, of course. Criticism and help fleshing out this idea are welcome.
Hopefully this would leave less subjectivity in the hands of Block Producers, since that subjectivity can lead to DoS or favouritism by BPs.
This issue targets the Subjective Best Effort Scheduling section of the Whitepaper.