At the company I’m currently working at, we’ve been using Resque to do some heavy asynchronous work. And well, that’s the obvious choice in the Rails world – just go with Resque, fork some processes and relax ;). But when you get to the point where you need a few hundred workers and a few servers, you start to ask yourself – can’t you do it better?
Sidekiq uses multi-threading, so you can leverage it even on MRI (with its infamous GIL) – if you have a lot of I/O-bound work, you can still benefit from this great piece of software (without needing to migrate to JRuby or Rubinius).
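To see why I/O-bound work still parallelizes nicely under the GIL, here’s a minimal standalone sketch (no Sidekiq involved) – MRI releases the GVL while a thread is blocked, so the `sleep` below, standing in for a network or database call, lets the other threads run:

```ruby
require "benchmark"

# Stand-in for an I/O-bound job step (HTTP call, DB query, etc.).
# MRI releases the GVL while sleeping/waiting on I/O, so other
# threads can make progress in the meantime.
def simulated_io
  sleep 0.2
end

serial = Benchmark.realtime { 5.times { simulated_io } }

threaded = Benchmark.realtime do
  threads = Array.new(5) { Thread.new { simulated_io } }
  threads.each(&:join)
end

puts format("serial:   %.2fs", serial)   # ~1.0s
puts format("threaded: %.2fs", threaded) # ~0.2s
```

Five serial waits take about a second; five threaded ones overlap and take roughly as long as a single wait. That overlap is exactly what Sidekiq’s worker threads exploit.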
I ran some simple benchmarks and said to myself – hell yeah, let’s just do it! About 70 commits later I had completely integrated Sidekiq, updated the test suite and created some custom Sidekiq middleware that I needed (man, it’s great – just take a look at the docs). Within 24 hours it processed over 1,000,000 jobs using 20x fewer workers. How cool is that? I would say – pretty cool.
A few general tips and notes for all of you and for future-me ;–)
if you’re coming from Resque – sidekiq-failures is the second thing you should check out; it will make you feel at home
when using Mongoid – be sure to install kiqstand so workers properly disconnect from the db
remember about thread-safety – that also includes the gems you are using, so be aware of what you are putting in your Gemfile
if you need to, just write your own middleware – it’s easy to test, easy to extract into a separate gem and reuse in your other projects
I wouldn’t recommend running Resque and Sidekiq within the same Redis namespace. Theoretically you can, but I just wouldn’t go for it – it’s too messy. Yeah, you will need two web backends to monitor your jobs (if you are running a Resque+Sidekiq combination for some reason), but personally I think that’s even better, especially if you are doing a lot of async work – it’s easier to see what’s going on.
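On the kiqstand tip above – a sketch of the wiring, assuming the setup lives in an initializer like `config/initializers/sidekiq.rb` (the file name is my choice here; check the kiqstand README for the exact, current instructions):

```ruby
require "kiqstand"

# Add Kiqstand's middleware to the server-side chain so Mongoid
# sessions are cleaned up after each job instead of leaking.
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Kiqstand::Middleware
  end
end
```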
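And on writing your own middleware – it really is just a plain Ruby object. A server middleware responds to `call(worker, job, queue)` and yields to continue the chain, which is why it’s so easy to unit-test. Here’s a minimal, hypothetical example (the class name and log format are made up) that times each job:

```ruby
# Hypothetical Sidekiq server middleware that logs how long each job took.
# No Sidekiq dependency is needed to define or test it: it's just an object
# with #call(worker, job, queue) that yields to the rest of the chain.
class JobTimingMiddleware
  def call(worker, job, queue)
    started = Time.now
    yield
  ensure
    elapsed = Time.now - started
    puts format("%s on %s took %.3fs", worker.class, queue, elapsed)
  end
end

# Registering it (in an initializer) would look roughly like:
#
#   Sidekiq.configure_server do |config|
#     config.server_middleware do |chain|
#       chain.add JobTimingMiddleware
#     end
#   end
```

The `ensure` block means the timing is logged even when the job raises, without swallowing the exception – handy when you’re also tracking failures.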