I’m a huge fan of Heroku. Way back when, I used to manage the entire deployment infrastructure manually. I’d grab a VPS from Rackspace or AWS, install nginx, configure Ruby, tinker with deployment scripts, and then spend the following weeks endlessly fiddling with settings when things didn’t work just right. Although I did enjoy the capture-the-flag feel of finding the right service configuration to solve a problem, once Heroku became a thing I switched over every application I managed.
There’s a huge amount of leverage in never having to worry about the details of your deployment infrastructure. Heroku is expensive, but it’s orders-of-magnitude cheaper than hiring a devops expert.
However, there are some limitations. The one you’re most likely to run into is the 30-second web request timeout: if a request doesn’t finish in time, Heroku’s router kills it and the user sees an error page instead of your content. Not good.
A much better UX is to show the user some sort of ‘loading slowly, please refresh’ message and implement progressive caching. That way, if a slow service is causing an IO block, the response can be cached during the user’s first request, so their second page load succeeds.
(You may be wondering why a page load would ever take 30 seconds. Great question. I work a lot with NetSuite and sometimes need to pull content from it dynamically. If there’s an API slowdown, which happens often, page load time can spike.)
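To make the progressive-caching idea concrete, here’s a minimal sketch in plain Ruby. A real app would use `Rails.cache` rather than a constant hash, and `fetch_remote_content` and the method names here are hypothetical stand-ins:

```ruby
require "timeout"

CACHE = {} # stand-in for Rails.cache

# Hypothetical slow upstream call (e.g. a NetSuite API fetch).
def fetch_remote_content
  sleep 0.1 # simulates network latency
  "rendered content"
end

# Try to fetch within the deadline. On success, cache the result so later
# requests skip the slow call entirely. On timeout, return nil and let the
# caller render a 'loading slowly, please refresh' page; in a real app you
# might also hand the fetch to a background job so a retry finds the cache warm.
def content_with_progressive_cache(key, deadline: 2)
  return CACHE[key] if CACHE.key?(key)

  Timeout.timeout(deadline) { CACHE[key] = fetch_remote_content }
rescue Timeout::Error
  nil
end
```

The first call pays the full fetch cost; every call after that returns instantly from the cache.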
The best way I’ve found to handle this situation gracefully is to use the Ruby stdlib Timeout::timeout method to raise an exception after 29 seconds, just under Heroku’s limit. However, this method is dangerous, and you’ll want to understand how it operates under the hood first.
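Roughly what happens under the hood: Timeout::timeout spawns a watchdog thread, and when the deadline expires that thread asynchronously raises the given exception into your thread. Because the exception can land at any arbitrary instruction, even inside an ensure block or a library’s cleanup code, it can leave sockets or connections in a broken state. A minimal demonstration (the error class name matches the one used later in this post):

```ruby
require "timeout"

class WebWorkerTimeoutError < StandardError; end

# Timeout.timeout runs the block in the current thread while a watchdog
# thread sleeps in the background; when the deadline passes, the watchdog
# raises our exception *into* this thread, interrupting it mid-flight.
result =
  begin
    Timeout.timeout(0.1, WebWorkerTimeoutError) do
      sleep 1 # stands in for a slow API call
      "finished"
    end
  rescue WebWorkerTimeoutError
    "timed out"
  end

puts result # prints "timed out"
```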
In your Rails controller, here’s how you can ‘protect’ an action that could run for a long time and display a friendly timeout page instead of a standard 500:
```ruby
require "timeout"

# Controller and template names are illustrative; WebWorkerTimeoutError is a
# StandardError subclass you define yourself.
class PagesController < ApplicationController
  # Render a friendly page instead of a 500 when we hit our deadline.
  rescue_from WebWorkerTimeoutError do
    render "timeout", status: :service_unavailable
  end

  # Wrap the slow action in a 29s deadline, just under Heroku's 30s limit.
  around_action :raise_on_web_timeout, only: :show

  def show
    @state = the_long_running_thing
  end

  private

  def raise_on_web_timeout
    Timeout::timeout(29, WebWorkerTimeoutError) { yield }
  end
end
```
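One detail worth spelling out: WebWorkerTimeoutError is a custom error class you define yourself (the name is just my convention). Passing your own class as Timeout::timeout’s second argument, instead of letting it raise the default Timeout::Error, means your rescue_from won’t accidentally swallow timeouts raised by nested libraries that also use Timeout internally:

```ruby
# e.g. in app/errors/web_worker_timeout_error.rb, or an initializer
class WebWorkerTimeoutError < StandardError; end
```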