Puma Jungle script fully working with RVM and Pumactl

I've finally decided to switch some of my services from Apache+Passenger to Nginx+Puma. Passenger was very convenient for running more than one app per server, but I used the standard edition, and apps that should have had at most 1-2 workers would sometimes consume at least half of the worker pool. I was not able to prioritize apps easily. It also started to get pretty heavy, and its disadvantages finally outweighed the benefits of using it.

Switching the static and PHP-based content from Apache to Nginx was really simple. I installed Nginx, started it on port 82 (and 445 for HTTPS), and to maintain uptime, I proxy-passed each app from Apache to Nginx one at a time. That way I temporarily kept Apache as a proxy engine for all the "simple to move" content and as a Passenger wrapper for my Rails apps.
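
For reference, the Apache side of such a hand-over can be as small as one vhost forwarding everything to Nginx. A minimal sketch (app.example.com is a placeholder; mod_proxy and mod_proxy_http have to be enabled first):

# enable the proxy modules (Debian/Ubuntu style)
a2enmod proxy proxy_http

# then forward a single vhost to Nginx listening on port 82
<VirtualHost *:80>
    ServerName app.example.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:82/
    ProxyPassReverse / http://127.0.0.1:82/
</VirtualHost>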

I've decided to use Puma as the default Rails server for the apps on this particular machine. Everything worked great until I tried to use the Jungle script to manage all the apps at once (and add it to init.d). After a few seconds of googling, I found Johannes Opper's post on how to configure the Puma Jungle to work with RVM. It turns out you just need to edit the /usr/local/bin/run-puma file and add this line:

# Use bash_profile of your rvm/deploy user
source ~/.bash_profile
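
For context, after that change my run-puma looked more or less like this (the argument handling may differ slightly between Puma versions, so treat it as a sketch):

#!/bin/bash
# Use bash_profile of your rvm/deploy user
source ~/.bash_profile
# app dir, puma config and log file are passed in by the init script
app=$1; config=$2; log=$3
cd $app && exec bundle exec puma -C $config 2>&1 >> $log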

After that I was able to start all the apps at once:

/etc/init.d/puma start
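
The Jungle script figures out which apps to start from /etc/puma.conf, where each line describes one app as comma-separated fields: app directory, user, and optionally the config and log paths. Roughly like this (the paths and the deploy user are examples from my setup, not required values):

/home/deploy/apps/blog/current,deploy
/home/deploy/apps/shop/current,deploy,/home/deploy/apps/shop/current/config/puma.rb,/home/deploy/apps/shop/current/log/puma.log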

Unfortunately, I was not able to stop or restart Puma instances with this script. It kept giving me messages like these:

# when stopping
/etc/init.d/puma: line 99: pumactl command not found
# when restarting
/etc/init.d/puma: line 129: pumactl command not found
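
A quick way to see what is going on (deploy being a hypothetical RVM user): pumactl is only on the PATH once the RVM environment has been loaded, which the init script, running as root under plain sh, never does.

# as root, the way the init script runs it - nothing on the PATH
which pumactl
# as the RVM user, whose login shell loads RVM - found
su - deploy -c 'which pumactl'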

Since the server uses RVM, Puma was installed as a gem for one particular Ruby version, and the same goes for pumactl. To be able to use pumactl, I had to change the /etc/init.d/puma script a bit. First, I had to turn it from an sh script into a bash script:

#! /bin/bash
# instead of
#! /bin/sh

After that I had to source my deploy user's bash profile (this is also why the shebang change above is needed: source is a bashism that plain sh doesn't provide):

source ~/.bash_profile
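
Keep in mind that when the init script runs at boot, it runs as root, so ~ expands to /root. If RVM is installed under your deploy user instead, pointing at that profile explicitly is less ambiguous; a sketch (with deploy as a hypothetical username):

source /home/deploy/.bash_profile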

I also had to change how pumactl is executed in a few places:

do_stop_one method (line 99):

# replace this
pumactl --state $STATEFILE stop
# with this
# figure out which user owns the pidfile
user=`ls -l $PIDFILE | awk '{print $3}'`
# run pumactl as that user, through a login shell that loads their RVM
su - $user -c "cd $dir && bundle exec pumactl --state $STATEFILE stop"
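
As a side note, the owner lookup doesn't have to parse ls output; on systems with GNU coreutils, stat can return the owner directly:

user=$(stat -c %U $PIDFILE)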

do_restart_one (line 129):

# replace this
pumactl --state $dir/tmp/puma/state restart
# with this
user=`ls -l $PIDFILE | awk '{print $3}'`
su - $user -c "cd $dir && bundle exec pumactl --state $dir/tmp/puma/state restart"

do_status_one (line 168):

# replace this
pumactl --state $dir/tmp/puma/state stats
# with this
user=`ls -l $PIDFILE | awk '{print $3}'`
su - $user -c "cd $dir && bundle exec pumactl --state $dir/tmp/puma/state stats"
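
Since the same lookup-and-su dance now appears three times, it could also be pulled into a small helper near the top of the script. A sketch of such a refactor (run_pumactl_as_owner is my name, not part of the original script):

# run pumactl as the user owning the given pidfile,
# through a login shell so their RVM environment gets loaded
run_pumactl_as_owner() {
  local pidfile=$1 dir=$2; shift 2
  local user=$(ls -l "$pidfile" | awk '{print $3}')
  su - "$user" -c "cd $dir && bundle exec pumactl $*"
}

# e.g. in do_status_one:
run_pumactl_as_owner $PIDFILE $dir --state $dir/tmp/puma/state stats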

And that's all. After that, you should be able to manage all your Puma apps with the Jungle script.

QNAP NAS: File System not clean. Examination failed (Cannot unmount disk)

If you get this message from your QNAP:

The file system is not clean. It is suggested that you run "check disk"

and after you start a disk check you end up with a message like this:

[Mirror Disk Volume: Drive 1 2 3 4] Examination failed (Cannot unmount disk).
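
Before going further, it's worth confirming that the volume in question really is /dev/md0 (that's the usual device for the main RAID volume on a QNAP, but it can differ on multi-volume setups):

cat /proc/mdstat
mount | grep /dev/md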

You need to SSH into the NAS and execute the following (stopping the services first releases the volume so that it can be unmounted):

# stop the services that keep the volume busy
/etc/init.d/services.sh stop
/etc/init.d/opentftp.sh stop
/etc/init.d/Qthttpd.sh stop
# unmount the volume and run the file system check on it
umount /dev/md0
e2fsck -f -v -C 0 /dev/md0
# remount and reboot
mount /dev/md0
reboot

After all of the above, your QNAP NAS will reboot and everything should be back to normal.
