That being said, I regret that we switched away from good_job (https://github.com/bensheldon/good_job). The thing is, Basecamp is a MySQL shop and their policy is not to accept RDBMS-engine-specific queries. You can see in their GitHub issues that they try to stick to "universal" SQL and are mostly concerned with how it performs in MySQL (https://github.com/rails/solid_queue/issues/567#issuecomment... , https://github.com/rails/solid_queue/issues/508#issuecomment...). They also still have no support for batch jobs: https://github.com/rails/solid_queue/pull/142 .
- No reason to switch to SolidQueue or GoodJob if you have no issue with Sidekiq. Only do it if you want to remove the Redis infra; there's no other big benefit than that, imo.
- For new projects, I might be more biased towards GoodJob. It's more mature, has a great community and has more features.
- One thing I don't like about SolidQueue is the lack of a solid UI. Compared to GoodJob or Sidekiq, it's pretty basic. When I tried it last time, the main page would hang due to unoptimized indexes. That only happens when your data reaches a certain threshold. It might have been fixed though.
Another consideration with using an RDBMS instead of Redis is that you might need to allocate a proper connection pool now. It depends on your database setup. It's nothing big, but it's one additional "cost" that you never really had to consider when you were using Redis.
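In Rails that mostly means sizing the pool in database.yml against whatever ends up checking out connections, including in-process queue workers. A minimal sketch, with placeholder names and numbers:

    # config/database.yml (sketch; database name and numbers are placeholders)
    production:
      primary:
        adapter: postgresql
        database: myapp_production
        # The pool has to cover web threads plus any in-process job worker or
        # dispatcher threads, otherwise jobs wait on connection checkouts.
        pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5).to_i + 5 %>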
https://oban.pro/articles/one-million-jobs-a-minute-with-oba...
We still keep rate limiters in Redis though; it would be pretty easy for some scanner to overload the DB if every rogue request needed a round trip to the DB before being processed. Because we only store ephemeral data in Redis, it does not need backups.
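For that kind of check, a counter with a TTL is all we need from Redis; a minimal sketch of the idea (key naming, limit, and window are made up):

    require "redis"

    REDIS = Redis.new

    # Returns true if the client identified by `key` has exceeded `limit`
    # requests in the current `window` (seconds). Sketch only.
    def rate_limited?(key, limit: 100, window: 60)
      count = REDIS.incr("rl:#{key}")
      REDIS.expire("rl:#{key}", window) if count == 1 # start the window
      count > limit
    end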
I've benchmarked Redis (Sidekiq), Postgres (GoodJob) and SQLite (SolidQueue); Redis beats everything else for the above use case.
SolidQueue backed by SQLite may be good when you are just passing around primary keys. I still wonder whether you can have a lot of workers polling from the same database and updating the queue with the job status. I've done something similar in the past using SQLite for some personal work, and it is easy to hit a wall even with 10 or so workers.
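By "passing around primary keys" I mean enqueueing just the ID and reloading the record inside the job, so the queue rows stay tiny; a sketch (job and model names are hypothetical):

    # Hypothetical job: the queue only stores a small integer, not a
    # serialized object; the record is reloaded when the job runs.
    class ProcessOrderJob < ApplicationJob
      queue_as :default

      def perform(order_id)
        order = Order.find_by(id: order_id)
        return unless order # the record may have been deleted meanwhile

        order.process!
      end
    end

    ProcessOrderJob.perform_later(order.id)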
How will you hold an open transaction for 15 minutes without seriously compromising the performance of the database?
Allowing people to do this easily will just result in an antipattern with horrible performance and reliability once the network starts randomly ending transactions. Pretty sure that, just as Python can't figure out that the connection to the DB was closed, neither can Rails.
Once people add transaction-pinning proxies and try to actually get the most performance out of the DB, these kinds of locking mechanisms that require a long-running open transaction start falling apart.
Edit: I must have misunderstood and it is a lease.
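For what it's worth, my rough understanding of how these database pollers avoid long transactions: the row lock is only held while claiming the job, and the "lease" is just a claimed marker that gets cleared or expires later. A sketch with ActiveRecord, not Solid Queue's actual code (model and column names are made up):

    # Claim a batch inside a short transaction using FOR UPDATE SKIP LOCKED,
    # then run the jobs with no transaction open at all.
    claimed = Job.transaction do
      batch = Job.where(claimed_at: nil)
                 .order(:created_at)
                 .limit(10)
                 .lock("FOR UPDATE SKIP LOCKED")
                 .to_a
      Job.where(id: batch.map(&:id))
         .update_all(claimed_at: Time.current, claimed_by: worker_id)
      batch
    end

    # The transaction is already committed here; a 15-minute job just keeps
    # its claimed_at "lease" while it runs, no open transaction required.
    claimed.each { |job| run(job) }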
From TFA. Are there really people using Rails for HFT?
> Deploy, version, patch, and monitor the server software
And with PostgreSQL you don't need it?
> Configure a persistence strategy. Do you choose RDB snapshots, AOF logs, or both?
It's a one-time decision. You don't need to do it daily.
> Sustain network connectivity, including firewall rules, between Rails and Redis
And for a PostgreSQL DB you don't need it?
> Authenticate your Redis clients
And your PostgreSQL works without that?
> Build and care for a high availability (HA) Redis cluster
If you want a cluster of PostgreSQL databases, perhaps you will do that too.
There are already a few implementations, and the reference one (django-tasks), even has a database-backed task backend that also uses FOR UPDATE SKIP LOCKED to control concurrency. With django-tasks and a few extra packages you can already get quite far compared to what Solid Queue provides, except maybe for features like concurrency controls and using a separate database for the queues.
I really enjoyed learning about the internals of Solid Queue, to the point that I decided to port it to Django [1]. It provides all of Solid Queue's features, except for retries on errors which is something that IMHO should be provided by the Django task interface, like Active Job does.
We ran into some serious issues in high-throughput scenarios (~2k jobs/min currently, and ~5k jobs/min during peak hours) and switched to Redis+BullMQ and have never looked back since. Our bottleneck was Postgres performance.
I wonder if SolidQueue runs into similar issues during high load, high throughput scenarios...
Curious about your experience with SolidQueue's reliability - have you run into any edge cases or issues with job retries/failures? Redis has been battle-tested for so long that switching always feels risky.
Would love to hear more about your production experience after a few months!
I'm not a fanboy of DHH, but I really like his critical thinking about the status quo. I'm not able to leave the cloud, or, better phrased, it's too comfortable right now. I really wanted to leave Redis behind me; it's mostly a hidden part of Rails, nothing I use directly, but I often have to pay for it "in the cloud".
I quickly hit an issue with the family of Solid features: the documentation doesn't really cover the case "inside your existing application" (at least it didn't when I looked into it shortly after Rails 8 was released). Being in the cloud (render.com, fly.io and friends), I had to create multiple DBs, one for each Solid feature. That was not acceptable, as you usually pay per service/DB, not per usage, similar to how you have to pay for Redis.
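For context, the Rails 8 defaults point each Solid feature at its own database, which is roughly how you end up with several services on those platforms; a simplified sketch of that layout in database.yml (names are placeholders):

    default: &default
      adapter: postgresql
      pool: 5

    production:
      primary:
        <<: *default
        database: myapp_production
      cache:
        <<: *default
        database: myapp_production_cache
        migrations_paths: db/cache_migrate
      queue:
        <<: *default
        database: myapp_production_queue
        migrations_paths: db/queue_migrate
      cable:
        <<: *default
        database: myapp_production_cable
        migrations_paths: db/cable_migrate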
This was a great motivation to research the cloud space once again, and then I found Railway. You pay per usage. So right now I have multiple DBs, one for each Solid feature, and on top of that multiple environments multiplying those DBs, and I pay cents for that part of the app while it's not really filled. Of course, in this setup I would also pay cents for Redis, but it's still good to see a less complex landscape in my deployment environment.
Long story short: while trying to integrate SolidQueue myself, I found Railway. Deployments are fun again with that! Maybe that helps someone today as well.
When all we are talking about is "good enough", the bar is set at a whole different level.
Especially when building new and unproven applications, I'm always looking for things that trade the time I need to set things up properly for the time I need to BUILD THE ACTUAL PRODUCT. Therefore I really like the recent changes to the Ruby on Rails ecosystem.
What we need is a larger user base setting everything up, discovering edge cases, and (!) writing about it (AND notifying the people around Rails). The more experience and knowledge there is, the better the tooling becomes. The happy path needs to become as broad as a road!
Like Kamal: at first only used by 37signals, and now used by them and me. :D At least, of course.
Kudos!
Best, Steviee
For caching, though, I wouldn't drop Redis so fast. As an in-memory cache, the ops overhead of running Redis is a lot lower. You can even ignore HA for most use cases.
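The Rails side of that is small; a sketch of a Redis cache store that fails open when the cache is unavailable (URL and env var are placeholders):

    # config/environments/production.rb (sketch)
    config.cache_store = :redis_cache_store, {
      url: ENV["REDIS_CACHE_URL"],
      # Treat Redis errors as cache misses instead of failing the request,
      # which is why you can get away without HA for plain caching.
      error_handler: ->(method:, returning:, exception:) {
        Rails.logger.warn("Redis cache error in #{method}: #{exception.message}")
      }
    }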
Source: I helped design and run a multi-tiered Redis caching architecture for a Rails-based SaaS serving millions of daily users, coordinating shared data across hundreds of database clusters and thousands of app servers across a dozen AWS regions, with separate per-host, per-cluster, per-region, and global cache layers.
We used Postgres for the job queues, though. Entirely separate from the primary app DBs.