I'm trying to send mail in a Rails 3 application using collectiveidea's delayed_job. If I try to send mail regularly (.deliver) it works fine, but as soon as I switch to delayed job, things fall to
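For reference, the pattern delayed_job 2.1 documents for Rails 3 mailers is the `.delay` proxy, which enqueues the mailer call itself; a minimal sketch, assuming a hypothetical `UserMailer` with a `welcome_email` action:

```ruby
class UserMailer < ActionMailer::Base
  def welcome_email(user)
    mail(:to => user.email, :subject => "Welcome")
  end
end

# Immediate delivery:
UserMailer.welcome_email(user).deliver

# Queued via delayed_job -- note there is NO .deliver here; the worker
# performs the delivery when it runs the job:
UserMailer.delay.welcome_email(user)
```

Chaining `.deliver` onto the delayed call is a common source of breakage, since `.delay` returns a job proxy rather than a `Mail::Message`.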
I'm trying to log from within a delayed_job in Rails. I configure it as follows: Delayed::Worker.destroy_failed_jobs = false
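A typical place for this configuration is an initializer; a sketch, assuming the file name `config/initializers/delayed_job_config.rb` (the setting names below exist in delayed_job 2.1):

```ruby
# config/initializers/delayed_job_config.rb -- assumed location
Delayed::Worker.destroy_failed_jobs = false
Delayed::Worker.max_attempts = 3

# Give the worker its own log file so output from inside jobs is visible
# (by default it can end up mixed into the Rails log or lost when daemonized):
Delayed::Worker.logger = Logger.new(File.join(Rails.root, "log", "delayed_job.log"))
```

Inside a job, writing through `Delayed::Worker.logger` (rather than `Rails.logger`) then lands in `log/delayed_job.log`.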
I'm running 2.1.1, Rails 3, and having a heckuva time getting the delayed_job gem working. If I strip out handle_asynchronously on a mailer, everything works fine...but if I put it back in, I get:
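One likely cause, as a hedged note: in delayed_job 2.1, `handle_asynchronously` wraps a plain instance method, while Rails 3 mailer actions are invoked through a class-level proxy, so the two don't compose well; models work fine, mailers are meant to use `.delay`. A sketch with a hypothetical `Device` model:

```ruby
class Device < ActiveRecord::Base
  def deliver_notifications
    # ...slow work runs in the background worker...
  end
  # This is the intended use of handle_asynchronously -- an ordinary
  # instance method on an AR model:
  handle_asynchronously :deliver_notifications
end

# For mailers, enqueue the call instead of wrapping the action:
# Notifier.delay.signup(user)
```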
In production, our delayed_job process is dying for some reason. I'm not sure if it's crashing or being killed by the operating system or what. I don't see any errors in the delayed_
I followed the railscast which uses CollectiveIdea's fork. I'm not able to get it to work. I created a new file in my /lib folder and included this
I've been using collectiveidea's fork of delayed_job as a gem in my Rails 3 app, and it's working fine. I'm now looking for a solution to autoscale workers, specifically for Heroku. I
I cannot start the delayed_job process using a Capistrano recipe. Here's the error I am getting. /usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `mkdir': File exists
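For context, delayed_job 2.1 ships Capistrano tasks in `delayed/recipes` (`delayed_job:start`, `delayed_job:stop`, `delayed_job:restart`). A sketch of wiring them up, with the `tmp/pids` workaround labeled as an assumption (the `mkdir': File exists` error typically means the daemon tried to create a pid directory that already exists or is a symlink):

```ruby
# config/deploy.rb -- Capistrano 2 style, as used in the delayed_job 2.1 era
require "delayed/recipes"

after "deploy:stop",    "delayed_job:stop"
after "deploy:start",   "delayed_job:start"
after "deploy:restart", "delayed_job:restart"

# Workaround (assumption): make sure tmp/pids exists as a real directory
# before the daemon tries to create it, which avoids the mkdir collision.
before "delayed_job:start" do
  run "mkdir -p #{current_path}/tmp/pids"
end
```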
I am running a Rails 3.0.3 with Postgres 9.0.1 and delayed_job 2.1.1. I configured delayed_job for Solr reindexing on an after_save callback which works great in development. When running cucumber tes
I'm not sure if you guys test like this, but I'm a TDD guy and keep stumbling into weird stuff. The timestamps are converted somehow by DJ, or the time zone... I don't know. Test example follows
I have a bookmarklet that, when used, submits all of the URLs on the current browser page to a Rails 3 app for processing. Behind the scenes I'm using Typhoeus to check that each URL