Prioritize retry state over created for same singleton key in stately fetch #536
base: master
Conversation
@timgit could you please take a look? We are experiencing issues in our production application.
Thanks for pointing out this limitation with stately queues. You're correct in your assessment regarding how to extend a stately queue with a singletonKey.

After reviewing the PR, I think it adds too much complexity to enforce across all queue types, and it would negatively impact performance in a larger queue. I also don't think it will correctly resolve all failure cases. The failure you're seeing occurs because once the next job would produce a unique constraint violation (with or without a singletonKey), all job processing will be blocked. One example of how this fails is:
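A plausible illustration of the blocking failure described above, using a hypothetical in-memory model rather than pg-boss's actual tables or SQL (the job shapes, the `fetchBatch` helper, and the "at most one active job per singletonKey" rule are all assumptions made for the sketch):

```javascript
// Hypothetical in-memory model of a stately queue with a rule of
// "at most one active job per singletonKey". These names and shapes
// are illustrative only, not pg-boss internals.
const queue = [
  { id: 1, singletonKey: 'A', state: 'created' },
  { id: 2, singletonKey: 'A', state: 'created' },
  { id: 3, singletonKey: 'B', state: 'created' },
];

// A batch fetch promotes up to `batchSize` jobs to 'active' in one
// statement. Like a SQL unique index violation, any conflict aborts
// the whole statement, so no job in the batch is activated.
function fetchBatch(jobs, batchSize) {
  const batch = jobs.filter(j => j.state === 'created').slice(0, batchSize);
  const activeKeys = new Set(
    jobs.filter(j => j.state === 'active').map(j => j.singletonKey)
  );
  for (const job of batch) {
    if (activeKeys.has(job.singletonKey)) {
      return []; // constraint violation: the entire batch fails
    }
    activeKeys.add(job.singletonKey);
  }
  batch.forEach(j => (j.state = 'active'));
  return batch;
}

console.log(fetchBatch(queue, 3).map(j => j.id)); // [] — jobs 1 and 2 share key 'A', so even job 3 is blocked
console.log(fetchBatch(queue, 1).map(j => j.id)); // [1] — batch size 1 avoids the intra-batch conflict
```

The point of the sketch: with a batch size above 1, a single duplicated key inside the batch stops every job in it, including jobs on unrelated keys.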
The only way I see to avoid this case is to not use batching with stately queues. The batch-processing SQL statement would need to be enhanced to allow dropping one of the previously accepted jobs. This feels like a gray area, since the job was previously accepted, but stately queues are already the type of policy that is accustomed to dropping jobs. In its current state, batching with a unique constraint violation will block all processing until the batch size is reduced back to 1.

Another side effect of this behavior relates more closely to this PR, which adds a sort condition for the singletonKey. However, this would still produce a processing limitation once a conflict is experienced on a particular key: once a unique constraint is triggered for any job, no other jobs can be processed. This more closely aligns with the original intent of these queue policies, which is to reduce concurrency as much as possible.
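The "dropping one of the previously accepted jobs" idea could be sketched as a dedupe pass over the fetched batch. This is an assumption about what such an enhancement might look like in application-level terms (the `dedupeBatch` helper is hypothetical; the real change would live in pg-boss's fetch SQL):

```javascript
// Sketch of the suggested enhancement: instead of letting an
// intra-batch key conflict abort the whole batch, keep the first job
// per singletonKey and silently drop the rest. Hypothetical helper,
// not pg-boss code.
function dedupeBatch(batch, activeKeys = new Set()) {
  const seen = new Set(activeKeys); // keys already active in the queue
  const kept = [];
  for (const job of batch) {
    if (job.singletonKey != null && seen.has(job.singletonKey)) {
      continue; // drop the later job with a duplicate key
    }
    if (job.singletonKey != null) {
      seen.add(job.singletonKey);
    }
    kept.push(job);
  }
  return kept;
}

const batch = [
  { id: 1, singletonKey: 'A' },
  { id: 2, singletonKey: 'A' }, // dropped: key 'A' already accepted
  { id: 3, singletonKey: 'B' },
];
console.log(dedupeBatch(batch).map(j => j.id)); // [1, 3]
```

With this approach the conflicting job is sacrificed rather than blocking the batch, which is consistent with stately policies already being "accustomed to dropping jobs."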
Thank you for your detailed response and for the great work you are doing with pgboss. We truly appreciate your efforts in maintaining and improving this library.
We understand your perspective; however, using a batch size of 1 is not a viable option for us.
Could you provide more details or results from performance tests that highlight this impact? In our view, a queue processing one job per query (batch size 1) represents a far greater performance concern for the entire system. We would be interested in understanding how the proposed changes specifically add complexity or impact performance in large queues.
Could you clarify how a conflict would occur, given that the implementation uses DISTINCT on the singletonKey? From our understanding, this should prevent such conflicts from arising.

We have been attempting to upgrade to v10 for nearly two months now, but are facing significant performance issues without batch processing. We sincerely hope to find a resolution through collaboration, as we value the capabilities of pgboss. However, if we cannot maintain the necessary system performance, we may need to explore alternative solutions. Once again, thank you for your hard work and for taking the time to consider our input. We look forward to hearing your thoughts on this matter.
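The PR title ("prioritize retry state over created for the same singleton key") combined with the DISTINCT-on-singletonKey claim suggests semantics like the following sketch. This is an assumed model of the proposed ordering, not the PR's actual SQL; `statePriority` and `distinctBySingletonKey` are illustrative names, and the sketch assumes every job carries a singletonKey:

```javascript
// Assumed semantics of the proposed fetch: within the same
// singletonKey, a 'retry'-state job sorts ahead of a 'created' one,
// and DISTINCT keeps only the first job per key, so at most one job
// per key enters the batch.
const statePriority = { retry: 0, created: 1 };

function distinctBySingletonKey(jobs) {
  // Array.prototype.sort is stable, so original order is preserved
  // among jobs with the same state.
  const sorted = [...jobs].sort(
    (a, b) => statePriority[a.state] - statePriority[b.state]
  );
  const seen = new Set();
  return sorted.filter(j => {
    if (seen.has(j.singletonKey)) return false; // only first job per key
    seen.add(j.singletonKey);
    return true;
  });
}

const jobs = [
  { id: 1, singletonKey: 'A', state: 'created' },
  { id: 2, singletonKey: 'A', state: 'retry' },
  { id: 3, singletonKey: 'B', state: 'created' },
];
console.log(distinctBySingletonKey(jobs).map(j => j.id)); // [2, 3]
```

Under this model, at most one job per singletonKey is ever selected into a batch, which is the basis for the question above about how an intra-batch conflict could still occur.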
@timgit any update on this?
Issue: #535