
Start or ensure that Delayed Job runs when an application/server restarts

We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot-levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a capistrano recipe that is only run once per deployment).

Currently, the best way I can think of to ensure the delayed_job daemon is always running is to add an initializer to our Rails application that checks whether the daemon is running. If it's not running, the initializer starts the daemon; otherwise, it leaves it alone.

The question, therefore, is: how do we detect from inside a script that the Delayed_Job daemon is running? (We should be able to start a daemon fairly easily, but I don't know how to detect whether one is already active.)

Anyone have any ideas?

Regards, Bernie

Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:

#config/initializers/delayed_job.rb

DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

# Start the delayed_job daemon from a background thread so the Rails boot isn't blocked
def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def process_is_dead?
  begin
    pid = File.read(DELAYED_JOB_PID_PATH).strip
    # signal 0 doesn't kill anything; it just raises if the process doesn't exist
    Process.kill(0, pid.to_i)
    false
  rescue
    true
  end
end

# Start the daemon if the pid file is missing or points to a dead process
start_delayed_job if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?


Some more cleanup ideas: the begin is not needed. You should rescue only "no such process", so that new processes don't get fired when something else goes wrong. Rescue "no such file or directory" as well to simplify the condition.

DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do 
    `ruby script/delayed_job start`
  end
end

def daemon_is_running?
  pid = File.read(DELAYED_JOB_PID_PATH).strip
  Process.kill(0, pid.to_i)
  true
rescue Errno::ENOENT, Errno::ESRCH   # file or process not found
  false
end

start_delayed_job unless daemon_is_running?

Keep in mind that this code won't work if you start more than one worker. And check out the "-m" argument of script/delayed_job which spawns a monitor process along with the daemon(s).
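For illustration, a variant of the question's start helper that uses the monitor might look like the sketch below; -m is one of script/delayed_job's own options, but whether you want a monitor at all is a judgment call for your setup:

def start_delayed_job
  Thread.new do
    # -m spawns a monitor process alongside the worker and restarts it if it dies
    `ruby script/delayed_job -m start`
  end
end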


Check for the existence of the daemon's PID file (File.exist? ...). If it's there, assume it's running; otherwise, start it up.
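As an initializer, this answer boils down to something like the following sketch (it assumes the DELAYED_JOB_PID_PATH constant and start_delayed_job helper from the question):

# config/initializers/delayed_job.rb -- trust the pid file alone
start_delayed_job unless File.exist?(DELAYED_JOB_PID_PATH)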


Thank you for the solution provided in the question (and the answer that inspired it :-)). It works for me, even with multiple workers (Rails 3.2.9, Ruby 1.9.3p327).

It worries me that I might forget to restart delayed_job after making some changes to lib, for example, and end up debugging for hours before realizing it.

I added the following to my script/rails file in order to allow the code provided in the question to execute every time we start rails but not every time a worker starts:

puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid',  __FILE__)
begin
  File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
puts "delayed_job ready."

A little drawback I'm facing with this, though, is that it also gets called by rails generate, for example. I did not spend much time looking for a solution to that, but suggestions are welcome :-)

Note that if you're using unicorn, you might want to add the same code to config/unicorn.rb before the before_fork call.
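As a rough sketch of that placement (the surrounding Unicorn configuration is assumed, not shown):

# config/unicorn.rb (excerpt) -- remove a stale delayed_job pid before workers fork
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
begin
  File.delete(dj_pid_path)
rescue Errno::ENOENT
  # no stale pid file -- nothing to clean up
end

before_fork do |server, worker|
  # ... existing before_fork logic goes here ...
end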

-- EDITED: After playing around a little more with the solutions above, I ended up doing the following:

I created a file script/start_delayed_job.rb with the content:

puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid',  __FILE__)

# Stop any daemon still running under the pid recorded in the file,
# so a fresh one can be spawned below
def kill_delayed(path)
  pid = File.read(path).strip
  Process.kill("TERM", pid.to_i)
rescue Errno::ENOENT, Errno::ESRCH
  # no pid file or no such process -- nothing to stop
end

kill_delayed(dj_pid_path)

begin
  File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end

# spawn delayed_job in the same environment as the process that invoked this script
env = ARGV[1]
puts "spawning delayed job in the same env: #{env}"

# edited: the next line has been replaced with the following one in order to ensure delayed_job is running in the same environment as the one that spawned it
#Process.spawn("ruby script/delayed_job start")
system({ "RAILS_ENV" => env}, "ruby script/delayed_job start")

puts "delayed_job ready."

Now I can require this file anywhere I want, including 'script/rails' and 'config/unicorn.rb' by doing:

# in top of script/rails
START_DELAYED_PATH = File.expand_path('../start_delayed_job',  __FILE__)
require "#{START_DELAYED_PATH}"

# in config/unicorn.rb, before before_fork, different expand_path
START_DELAYED_PATH = File.expand_path('../../script/start_delayed_job',  __FILE__)
require "#{START_DELAYED_PATH}"


not great, but works

Disclaimer: I say "not great" because this causes a periodic restart, which for many will not be desirable. And simply trying to start can cause problems, because the DJ implementation can lock up the queue if duplicate instances are created.

You could schedule cron tasks that run periodically to start the job(s) in question. Since DJ treats start commands as no-ops when the job is already running, it just works. This approach also takes care of the case where DJ dies for some reason other than a host restart.

# crontab example 
0 * * * * /bin/bash -l -c 'cd /var/your-app/releases/20151207224034 && RAILS_ENV=production bundle exec script/delayed_job --queue=default -i=1 restart'

If you are using a gem like whenever, this is pretty straightforward.

every 1.hour do
  script "delayed_job --queue=default -i=1 restart"
  script "delayed_job --queue=lowpri -i=2 restart"
end
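With whenever, that block goes in config/schedule.rb, and running "whenever --update-crontab" writes the matching entries into your crontab.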
