Delayed_job not executing the perform method but emptying the job queue
I have a fresh Rails 3 app; here's my Gemfile:

source 'http://rubygems.org'

gem 'rails', '3.0.0'
gem 'delayed_job'
gem 'sqlite3-ruby', :require => 'sqlite3'
Here's the class that represents the job that I want to queue:
class Me < Struct.new(:something)
  def perform
    puts "Hello from me"
    logger.info "Hello from me"
    logger.debug "Hello from me"
    raise Exception.new
  end
end
From the console with no workers running:
irb(main):002:0> Delayed::Job.enqueue Me.new(1)
=> #<Delayed::Backend::ActiveRecord::Job id: 7, priority: 0, attempts: 0, handler: "--- !ruby/struct:Me \nsomething: 1\n", last_error: nil, run_at: "2010-12-29 07:24:11", locked_at: nil, failed_at: nil, locked_by: nil, created_at: "2010-12-29 07:24:11", updated_at: "2010-12-29 07:24:11">
Like I mentioned: there are no workers running:
irb(main):003:0> Delayed::Job.all
=> [#<Delayed::Backend::ActiveRecord::Job id: 7, priority: 0, attempts: 0, handler: "--- !ruby/struct:Me \nsomething: 1\n", last_error: nil, run_at: "2010-12-29 07:24:11", locked_at: nil, failed_at: nil, locked_by: nil, created_at: "2010-12-29 07:24:11", updated_at: "2010-12-29 07:24:11">]
I start a worker with script/delayed_job run
The queue gets emptied:
irb(main):006:0> Delayed::Job.all
=> []
However, nothing happens as a result of the puts, nothing is logged from the logger calls, and no exception is raised. I'd appreciate any help, insight, or anything to try.
By default, delayed_job destroys failed jobs, so the first step is to add an initializer that deactivates that behaviour:
Delayed::Worker.destroy_failed_jobs = false
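A minimal initializer might look like the following (a sketch; the filename is arbitrary, and max_attempts is an optional extra I've added for faster feedback while debugging):

```ruby
# config/initializers/delayed_job_config.rb
# Keep failed jobs in the table so last_error can be inspected.
Delayed::Worker.destroy_failed_jobs = false
# Optional: fail fast while debugging (the default is 25 attempts).
Delayed::Worker.max_attempts = 3
```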
Also, a deserialization failure is an instant job failure, which triggers job deletion (unless you've disabled destroy_failed_jobs as above).
So try adding the following around line 120 of the gem's worker.rb:

rescue DeserializationError => error
  say "DeserializationError: #{error.message}"
  job.last_error = "#{error.message}\n#{error.backtrace.join("\n")}"
  failed(job)
It's fairly unhelpful, but at least you'll know it's a deserialization error then.
I ended up just using the job-with-perform-method construct. Much more reliable. Also remember to put your job definition in a separate file so the class loader can find it when the worker is running (rather than inlining the class definition into one of your models).
Ben W is absolutely right. Make sure you have a file under
"#{Rails.root}/config/initializers/delayed_job_worker.rb"
This defines how the worker should behave. Otherwise, the worker is going to quietly discard failed jobs.
Once you do this, you should be able to find out more about your error. In my case, I was using delayed_job_mongoid, so this added a "last_error" entry (which you should have in your MySQL delayed_job table regardless).
And as Ben W concluded, you need to make sure the object you're creating is known to the application (or to the worker, for that matter). My problem was that I was testing out a class in the Rails console; the worker didn't know of this class, so it barfed.
In my application.rb file:
module TextSender
  class Application < Rails::Application
    require "#{Rails.root.to_s}/lib/SendTextJob.rb"
    # ...
  end
end
and my lib file:
class SendTextJob < Struct.new(:text, :number)
  def perform
    Rails.logger.info "Sending #{text} to #{number}"
    puts "Successfully sent text"
  end
end
Then running
Delayed::Job.enqueue SendTextJob.new("Work on this text NOW, please?", "5551231234")
was confirmed successful in my log/development.log file. I also tested creating an object (a user object, or whatever model you may have) in this perform method, and it worked.
I had this problem, and I found that it was because in Rails 3, files in the lib/ directory aren't autoloaded. To diagnose, I added:
# application.rb
Delayed::Worker.destroy_failed_jobs = false
as mentioned by Ben W. This told me what was going on, as I could inspect the last_error.
So to solve the autoloading problem, I found a couple of answers on SO, but the gist is adding this:
# application.rb
# Custom directories with classes and modules you want to be autoloadable.
# config.autoload_paths += %W(#{config.root}/extras)
config.autoload_paths += %W(#{config.root}/lib)
config.autoload_paths += Dir["#{config.root}/lib/**/"]
Which was kindly provided by http://hemju.com/2010/09/22/rails-3-quicktip-autoload-lib-directory-including-all-subdirectories/.
I'd be interested to see how you could solve this without turning on autoloading for the lib directory. Any thoughts?
I just copied your class into irb and tried Me.new.perform:
Hello from me
NameError: undefined local variable or method `logger' for #<struct Me something=nil>
from (irb):6:in `perform'
from (irb):14
Does your class have access to 'logger'?
You could try doing something else, like opening and writing to a file?
File.open("testing.txt", 'w') {|f| f.write("hello") }
Bear in mind that the delayed_job worker's puts output goes to the worker's stdout, so you will probably never see it. And if you want to do any logging, I think you have to create a new Logger instance within your perform method.
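To illustrate, here is a standalone sketch of building a Logger inside perform instead of relying on a `logger` method the Struct doesn't have (the log filename is my assumption; in a Rails app you might use Rails.root.join('log', ...) instead):

```ruby
require 'logger'

# Hypothetical job class: create an explicit Logger in perform so the
# output survives even when the worker's stdout is not visible.
class Me < Struct.new(:something)
  def perform
    logger = Logger.new('dj_debug.log')
    logger.info "Hello from me (something=#{something})"
    logger.close
  end
end

Me.new(1).perform
```

After running this, dj_debug.log contains a timestamped INFO line, regardless of where stdout goes.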
What version of delayed_job are you running?
You need the 2.1 branch at https://github.com/collectiveidea/delayed_job for Rails 3.0 and above.
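In the Gemfile, that would look something like this (a sketch; the exact version constraint is an assumption):

```ruby
# Gemfile
gem 'delayed_job', '~> 2.1'  # the 2.1.x series supports Rails 3
```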
Check the logs: you should see DelayedJob messages, at least which job is launched and what its exit status was.

In a terminal, run:

tail -f log/development.log

Then, from a Rails console, check what happens when you run a simple ActiveRecord query, then when you use DelayedJob; you can read what is logged in the other terminal.
cheers, A.