Weird Microservices Redis setup that worked nicely

This was a follow-up to that resque/sidekiq-based application: it kept growing and growing, along with the complexity of its business logic. In 2016 I suggested splitting it into a bunch of smaller ruby-sidekiq apps that would communicate via a Redis pipeline. After all, we were already using Redis for communication and relying on sidekiq heavily, so why reinvent the wheel?
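The core idea can be sketched roughly like this: one service hands work to another by pushing a Sidekiq-compatible JSON payload onto the queue the other service's workers poll. This is a minimal, hypothetical sketch — the queue and worker names are made up, and an in-memory Array stands in for the Redis list (a real setup would use redis-rb's `LPUSH`/`BRPOP`, or simply `Sidekiq::Client.push`):

```ruby
require "json"
require "securerandom"

# In-memory stand-in for Redis lists keyed by queue name.
QUEUES = Hash.new { |h, k| h[k] = [] }

# Producer side: build a Sidekiq-style job hash and push its JSON form.
def enqueue(queue, worker_class, *args)
  job = {
    "class" => worker_class, # resolved to a worker class by the consuming app
    "args"  => args,
    "queue" => queue,
    "jid"   => SecureRandom.hex(12),
    "created_at" => Time.now.to_f
  }
  QUEUES[queue].push(JSON.generate(job))
  job["jid"]
end

# Consumer side: pop and parse the next job, if any.
def dequeue(queue)
  raw = QUEUES[queue].shift
  raw && JSON.parse(raw)
end

# Service A enqueues; service B (a separate process in reality) picks it up.
enqueue("queue:reports", "ReportWorker", 42, "daily")
job = dequeue("queue:reports")
puts job["class"] # "ReportWorker"
```

Because both sides only agree on queue names and payload shape, each app stays a plain sidekiq application with no direct knowledge of the others.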

We ended up with a bunch of small apps and two internal gems. This solution brought its own challenges to the table, especially around orchestrating the services. To avoid duplicating work, we ended up sharing a database (which acted as a cache layer) across a few services, which was less than ideal.

The biggest benefits were the ability to scale services independently (a huge problem in the original solution, where we had to resort to all kinds of tricks to limit work execution in some cases), the ability to stop and start different parts of the pipeline individually, and the ease of plugging in new services — which proved helpful when adding a new feature that had to do some extra processing on existing data.
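The "plug in a new service" part could stay cheap if producers fan events out to every registered queue, so adding a consumer is just adding a queue name. A hypothetical sketch under that assumption (queue names are illustrative, and an in-memory Hash again stands in for Redis lists):

```ruby
require "json"

QUEUES = Hash.new { |h, k| h[k] = [] } # stands in for Redis lists
SUBSCRIBERS = ["queue:billing"]        # queues of the existing services

# Producer: serialize the event once, push it to every subscribed queue.
def publish(event)
  payload = JSON.generate(event)
  SUBSCRIBERS.each { |q| QUEUES[q] << payload }
end

# Plugging in the new feature = registering one more queue name;
# the producers themselves are untouched.
SUBSCRIBERS << "queue:enrichment"
publish({ "type" => "order.created", "id" => 7 })
```

The new service then simply polls its own queue, and the rest of the pipeline keeps running unchanged — which also makes stopping or starting it in isolation trivial.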

For a while that bulky application had been untouchable — nobody had a clear idea what to do with it, and it was extremely hard to maintain — so a rewrite in this case turned out to be a good call. The initial rewrite itself didn't take that long either: since the basic idea and the architecture were rather simple on their own, we managed to split the work pretty easily.

Re-using existing building blocks and parts of the infrastructure sped up the process, as we didn't have to introduce a completely new stack in the company at that point (but more on introducing new tech later on!).