Systemd Service for Sinatra + Thin Keeps Restarting

sinatra · systemd · thin · Ubuntu

I have a systemd service for a Sinatra app running on the Thin server behind an Nginx reverse proxy. It works, but because it receives a lot of traffic I'm seeing many Nginx errors about being unable to connect to the upstream. Upon inspecting the service I noticed that it never runs for very long, a few minutes at best, which would explain why Nginx often can't connect (while the service is restarting).
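For reference, this is how I'm inspecting it, using the standard systemctl/journalctl invocations (the unit name matches the service file shown below):

systemctl status my-service.service
journalctl -u my-service.service --no-pager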

Looking at the journalctl output for the service I see a lot of this:

Dec 20 22:09:48 cs2092 systemd[1]: Started My app web site.
Dec 20 22:10:59 cs2092 bundle[11576]: pure virtual method called
Dec 20 22:10:59 cs2092 bundle[11576]: terminate called without an active exception
Dec 20 22:10:59 cs2092 systemd[1]: my-service.service: Main process exited, code=killed, status=6/ABRT
Dec 20 22:10:59 cs2092 systemd[1]: my-service.service: Failed with result 'signal'.
Dec 20 22:10:59 cs2092 systemd[1]: my-service.service: Service hold-off time over, scheduling restart.
Dec 20 22:10:59 cs2092 systemd[1]: my-service.service: Scheduled restart job, restart counter is at 7.
Dec 20 22:10:59 cs2092 systemd[1]: Stopped My app web site.
Dec 20 22:10:59 cs2092 systemd[1]: Started My app web site.
Dec 20 22:11:19 cs2092 bundle[11828]: pure virtual method called
Dec 20 22:11:19 cs2092 bundle[11828]: terminate called without an active exception
Dec 20 22:11:19 cs2092 systemd[1]: my-service.service: Main process exited, code=killed, status=6/ABRT
Dec 20 22:11:19 cs2092 systemd[1]: my-service.service: Failed with result 'signal'.
Dec 20 22:11:19 cs2092 systemd[1]: my-service.service: Service hold-off time over, scheduling restart.
Dec 20 22:11:19 cs2092 systemd[1]: my-service.service: Scheduled restart job, restart counter is at 8.
Dec 20 22:11:19 cs2092 systemd[1]: Stopped My app web site.
Dec 20 22:11:19 cs2092 systemd[1]: Started My app web site.
Dec 20 22:14:28 cs2092 bundle[11968]: pure virtual method called
Dec 20 22:14:28 cs2092 bundle[11968]: terminate called without an active exception
Dec 20 22:14:28 cs2092 systemd[1]: my-service.service: Main process exited, code=killed, status=6/ABRT
Dec 20 22:14:28 cs2092 systemd[1]: my-service.service: Failed with result 'signal'.
Dec 20 22:14:28 cs2092 systemd[1]: my-service.service: Service hold-off time over, scheduling restart.
Dec 20 22:14:28 cs2092 systemd[1]: my-service.service: Scheduled restart job, restart counter is at 9.
Dec 20 22:14:28 cs2092 systemd[1]: Stopped My app web site.
Dec 20 22:14:28 cs2092 systemd[1]: Started My app web site.

It looks like the app is getting killed at regular intervals. Why is this happening?

Here's the service:

[Unit]
Description=My app web site
Documentation=https://myapp.com
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/www/my-app
Environment="RACK_ENV=production"
ExecStart=/usr/local/bin/bundle exec /usr/local/bin/thin -R /var/www/my-app/config.ru -p 6903 --max-conns 15360 --max-persistent-conns 2048 --threaded --debug start
ExecStop=/usr/local/bin/bundle exec /usr/local/bin/thin -R /var/www/my-app/config.ru -p 6903 stop
ExecReload=/usr/local/bin/bundle exec /usr/local/bin/thin -R /var/www/my-app/config.ru -p 6903 --max-conns 15360 --max-persistent-conns 2048 --threaded --debug restart
Restart=on-failure
User=julien

[Install]
WantedBy=multi-user.target
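Side note on the unit itself: Restart=on-failure plus systemd's default hold-off time is what produces the near-instant restarts visible in the log ("Service hold-off time over, scheduling restart"). If I wanted to slow that loop down while debugging, my understanding is that adding something like this to the [Service] section would do it (the 5 second value is just an illustration):

RestartSec=5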

Another thing I don't understand: as you can see from the service file, I am starting Thin with --max-conns 15360, and yet the journalctl output shows the maximum connections being set to 1024:

Dec 21 10:24:24 cs2092 bundle[21058]: Starting my-app in production...
Dec 21 10:24:24 cs2092 bundle[21058]: 2021-12-21 10:22:30 +0000 Thin web server (v1.8.1 codename Infinite Smoothie)
Dec 21 10:24:24 cs2092 bundle[21058]: 2021-12-21 10:22:30 +0000 Debugging ON
Dec 21 10:24:24 cs2092 bundle[21058]: 2021-12-21 10:22:30 +0000 Maximum connections set to 1024
Dec 21 10:24:24 cs2092 bundle[21058]: 2021-12-21 10:22:30 +0000 Listening on 0.0.0.0:6903, CTRL+C to stop

Any idea of what's going on?

Note: Ubuntu 18.04.4
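Update: regarding the 1024 cap, my current guess is that Thin clamps --max-conns to the process's open file descriptor limit, and systemd services default to a soft limit of 1024 open files. If that's right, raising the limit in the [Service] section should presumably look like this (the 16384 value is just an illustration, I haven't verified it):

LimitNOFILE=16384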

Best Answer

So it seems the problem was within the Thin server itself; once I replaced it with Puma, all the problems disappeared. (The "pure virtual method called" / "terminate called without an active exception" lines are C++ runtime abort messages, presumably coming from EventMachine, the native C++ extension Thin's event loop runs on; when that native code aborts, it takes the whole Ruby process down with SIGABRT, which is the status=6/ABRT systemd reports.)
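In case it helps anyone, the swap mostly came down to replacing the Thin invocation in ExecStart with Puma. A minimal sketch of the changed line, assuming puma is in the Gemfile and no Puma config file is used (the thread counts are illustrative; only the port and paths come from my real setup):

ExecStart=/usr/local/bin/bundle exec puma -e production -p 6903 -t 8:32 /var/www/my-app/config.ru

Puma runs in the foreground, so Type=simple still fits, and it shuts down gracefully on systemd's default SIGTERM, so the separate ExecStop and ExecReload lines can simply be dropped.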