One Quarter of Sidekiq

It’s been about three months since I released Sidekiq. Let’s get to the numbers:

  • 668 GitHub watchers
  • 171 issues, 11 open
  • 7 commercial licenses purchased!
  • 54 pull requests from 30+ different developers!
  • 1 EngineYard podcast interview
  • 1 RailsConf lightning talk by @jwo
  • 1 South African Ruby group talk on Sidekiq!
  • 1 new license (LGPL rather than GPL)
  • 0 locks in the multithreaded codebase

I consider that a success; I’ve never had a project grow this fast with just my own promotion and community word of mouth. TheClymb switched to Sidekiq last week and our biggest problem so far has been that Sidekiq can be too parallel and crush servers with traffic; we’ve had to rewrite some jobs to be serial!

My goals remain the same:

  • Provide the easiest and best-supported queueing system for Ruby.
  • Be the first Rubygem people mention or consider when choosing a queueing system.
  • Improve Ruby’s overall efficiency and perceived performance through multi-threading.
  • Evangelize multi-threaded infrastructure written with actor abstractions as relatively straightforward for knowledgeable developers to build. Celluloid continues to be a huge asset to Sidekiq’s ease of development and stability; a minimal sketch of the actor style follows this list.
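
Here is that sketch: a minimal, hypothetical Celluloid actor (not Sidekiq internals; the counter.async proxy assumes a recent Celluloid release). Each actor runs in its own thread and processes one message at a time, which is how shared state stays safe without a single Mutex:

    require 'celluloid'

    class Counter
      include Celluloid

      def initialize
        @count = 0
      end

      # An actor handles one message at a time in its own thread,
      # so this mutation needs no lock.
      def increment
        @count += 1
      end

      def count
        @count
      end
    end

    counter = Counter.new
    10.times { counter.async.increment } # fire-and-forget messages
    puts counter.count # a plain call is a message too; it queues up
                       # behind the increments and so prints 10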

Thank you all for your support so far! Thank you especially to EngineYard, Tony Arcieri, and the early adopters who have made hacking on Sidekiq an exciting adventure rather than a lonely chore.

9 thoughts on “One Quarter of Sidekiq”

  1. Mike, nice work on this project. I’m in the process of convincing our CEO to adopt.

    He had a question today, “Do the threads work across multiple cores? Or are they stuck on the same core as the master process?”

  2. Ruby 1.9 can only “peg” one core due to the GIL. They aren’t stuck on a core but can’t really take advantage of multi-core. You can start multiple sidekiq processes or use JRuby if you really want to crush your CPU. In practice, I find that I can kill our database with traffic before MRI’s limited threading becomes an issue.
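
    A toy benchmark shows the effect (nothing Sidekiq-specific here): on MRI the threads finish in roughly serial time, while on JRuby they can spread across cores:

        require 'benchmark'

        def cpu_work
          5_000_000.times { |i| Math.sqrt(i) }
        end

        # MRI's GIL serializes these threads, so wall time is close to
        # 4x a single cpu_work; JRuby can run them on four cores at once.
        puts Benchmark.realtime {
          4.times.map { Thread.new { cpu_work } }.each(&:join)
        }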

  3. Thanks. Kinda what I suspected. That’s fine. We will definitely have several Sidekiq processes spun up, like we do now with Resque (just fewer of them). Naturally, a different core will handle each of those processes (just like today’s Resque implementation).

    I listened to your interview today on EngineYard’s “Cloud Out Loud” podcast. Again, nice job. The best part was that you somehow worked “motorcycle racing and track days” into the subject matter. [thumbs up] :-D

  4. Replaced beanstalkd with sidekiq this weekend. Definitely liking it so far, thanks for the hard work!

    I have seen you talk a lot about EventMachine and fibers for tasks that involve lots of blocking I/O. Would you still recommend taking that approach within a Sidekiq worker job, or would you recommend spinning up even more concurrent threads? Basically I want to cache tons of web links, and instead of having one worker per link, I could have batch jobs that are evented. Wondering your thoughts here.

  5. Brian, I do not recommend fibers anymore. I find them difficult to use and brittle. I’ve been using threads with actors for the last year for all my concurrency work and enjoy it much more.
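
    For your link-caching case, I’d enqueue one small job per URL and let Sidekiq’s worker threads supply the concurrency, rather than eventing inside one batch job. A rough sketch (CacheStore and urls are hypothetical stand-ins for your own storage and data):

        require 'sidekiq'
        require 'net/http'

        class LinkCacher
          include Sidekiq::Worker

          def perform(url)
            body = Net::HTTP.get(URI(url)) # blocking I/O releases the GIL,
            CacheStore.write(url, body)    # so other worker threads keep going
          end
        end

        # enqueue one job per link instead of one evented batch job
        urls.each { |url| LinkCacher.perform_async(url) }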

  6. (exceptionally late comment)

    Hey Mike,

    Can you elaborate on why you find fibers brittle? I haven’t used them much at all, so I don’t have any experience, and Googling doesn’t turn up anyone else talking about them being unsuitable for concurrency, so it’d be great for the community to have a little more information on practical real-world pitfalls… without having to trip into the pit themselves, first. :)

  7. Two issues with Fibers:

    1) They are hard to debug because they naturally swallow errors, so you wind up rescuing around every single callback.
    2) You have to fiber every single source of I/O in your process or else your concurrency goes to crap. Add a new source of I/O and forget to fiber it? Watch your performance grind to a halt. Add a new gem that does its own I/O? Performance grinds to a halt.
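
    To make the first point concrete, here’s the defensive pattern you end up writing around every callback, shown with em-http-request (handle is a hypothetical stand-in for real work):

        require 'em-http-request'

        EM.run do
          req = EventMachine::HttpRequest.new('http://example.com/').get
          req.callback do
            begin
              handle(req.response)          # `handle` is hypothetical app code
            rescue => e
              puts "callback failed: #{e}"  # without this rescue, the exception
            ensure                          # unwinds the reactor and kills every
              EM.stop                       # other pending connection
            end
          end
          req.errback { EM.stop }
        end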
