Resque::LonelyJob

A Resque plugin. Requires Resque >= 1.20.0.

Ensures that for a given queue, only one worker is working on a job at any given time.

This differs from [resque-lock](https://github.com/defunkt/resque-lock) and resque-loner in that the same job may be queued multiple times, but you're guaranteed that the first job queued will run to completion before subsequent jobs are run. In other words, job ordering is preserved.

Installation

Add this line to your application's Gemfile:

gem 'resque-lonely_job'

And then execute:

$ bundle

Or install it yourself as:

$ gem install resque-lonely_job

Usage

Example #1

require 'resque/plugins/lonely_job'

class StrictlySerialJob
  extend Resque::Plugins::LonelyJob

  @queue = :serial_work

  def self.perform
    # only one at a time in this block, no parallelism allowed for this
    # particular queue
  end
end

Example #2

Let's say you want the serial constraint to apply at a more granular level. Instead of applying it at the queue level, you can override the .redis_key method.

require 'resque/plugins/lonely_job'

class StrictlySerialJob
  extend Resque::Plugins::LonelyJob

  @queue = :serial_work

  # Returns a string that will be used as the redis key
  # NOTE: it is recommended to prefix your string with the 'lonely_job:' to
  # namespace your key!
  def self.redis_key(account_id, *args)
    "lonely_job:strictly_serial_job:#{account_id}"
  end

  def self.perform(account_id, *args)
    # only one at a time in this block, no parallelism allowed for this
    # particular redis_key
  end
end
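To see how .redis_key partitions work, you can group pending job arguments by the key they would produce; jobs that share a key run serially, while jobs with different keys may run in parallel. This is a plain-Ruby sketch of that grouping (the jobs array and argument shapes are hypothetical, and no Redis is involved):

```ruby
# Same key-building logic as StrictlySerialJob.redis_key above.
def redis_key(account_id, *args)
  "lonely_job:strictly_serial_job:#{account_id}"
end

# Hypothetical pending jobs: [account_id, operation]
jobs = [[1, :import], [1, :export], [2, :import]]

# Group jobs by the mutex key they would contend on.
groups = jobs.group_by { |account_id, *rest| redis_key(account_id, *rest) }

groups.each { |key, js| puts "#{key} => #{js.inspect}" }
# Account 1's two jobs share a key (strictly serial); account 2's job is independent.
```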

NOTE: Without careful consideration of your problem domain, worker starvation and/or unfairness is possible for jobs in this example. Imagine a scenario where you have three jobs in the queue with two resque workers:

+---------------------------------------------------+
| :serial_work                                      |
|---------------------------------------------------|
|             |             |             |         |
| redis_key:  | redis_key:  | redis_key:  | ...     |
|    A        |    A        |    B        |         |
|             |             |             |         |
| job 1       | job 2       | job 3       |         |
+---------------------------------------------------+
                                  ^
                                  |
  Possible starvation +-----------+
  for this job and
  subsequent ones

When the first worker grabs job 1, it'll acquire the mutex for processing redis_key A. The second worker tries to grab the next job off the queue but is unable to acquire the mutex for redis_key A so it places job 2 back at the head of the :serial_work queue. Until worker 1 completes job 1 and releases the mutex for redis_key A, no work will be done in this queue.
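The requeue-at-head behavior can be modeled in a few lines of plain Ruby. This is a simplified in-memory sketch, not the plugin's actual Redis-backed implementation: a worker reserves the job at the head of the queue only if it can take the mutex for that job's redis_key; otherwise it pushes the job back to the head and gets nothing, which is exactly the starvation scenario above:

```ruby
require 'set'

# Reserve the head job only if its redis_key mutex is free;
# otherwise requeue it at the head (preserving order) and return nil.
def try_reserve(queue, locks)
  job = queue.shift
  return nil unless job
  if locks.add?(job[:key])   # Set#add? returns nil if the key is already held
    job
  else
    queue.unshift(job)       # back to the head of the queue
    nil
  end
end

queue = [
  { name: "job 1", key: "A" },
  { name: "job 2", key: "A" },
  { name: "job 3", key: "B" },
]
locks = Set.new

worker1 = try_reserve(queue, locks)  # takes job 1 and holds mutex A
worker2 = try_reserve(queue, locks)  # job 2 also needs A: requeued, worker idles
# job 3 (key B) could run, but sits behind job 2 until mutex A is released
```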

This issue may be avoided by employing dynamic queues (http://blog.kabisa.nl/2010/03/16/dynamic-queue-assignment-for-resque-jobs/), where each queue maps one-to-one to a redis_key.
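One way to sketch that one-to-one mapping is a helper that derives the queue name from the same value the redis_key is built from, so jobs for different accounts never sit in the same queue. The class and helper below are hypothetical, and the commented enqueue call assumes Resque.enqueue_to is available in your Resque version:

```ruby
# Hypothetical job using one dynamic queue per account, so the serial
# constraint and the queue coincide and cannot starve unrelated jobs.
class PerAccountJob
  def self.queue_for(account_id)
    :"serial_work_#{account_id}"
  end

  def self.perform(account_id, *args)
    # work for a single account
  end
end

PerAccountJob.queue_for(42)  # => :serial_work_42
# Resque.enqueue_to(PerAccountJob.queue_for(42), PerAccountJob, 42)
```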

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request
