Passenger on Apache is quite nice for smaller applications (or for a server running multiple apps): it is easy to deploy, maintain and upgrade. However, it also has some limitations. For example, we cannot set a maximum memory consumption limit. We can set a PassengerMaxRequests limit, so each Passenger instance gets killed after N requests, but this won't help when a Passenger instance suddenly gets really big (150 MB and more).
If you use MRI instead of REE in production, you can encounter this issue. A standard small-app worker should consume around 75-125 MB of memory, but sometimes something goes crazy and workers start to grow rapidly until they hit the memory limit. After that, the server starts to respond really slowly (or stops responding altogether).
Passenger memory status to the rescue!
What can we do to protect against such situations? First of all, we can monitor Passenger memory consumption with the passenger-memory-stats command. The output should look like this:
---------- Apache processes ----------
PID    PPID   VMSize    Private  Name
--------------------------------------
1437   15768  178.1 MB  0.6 MB   /usr/sbin/apache2 -k start
3415   15768  178.0 MB  0.7 MB   /usr/sbin/apache2 -k start
3417   15768  178.1 MB  1.0 MB   /usr/sbin/apache2 -k start
4345   15768  178.1 MB  0.7 MB   /usr/sbin/apache2 -k start
4346   15768  178.2 MB  1.2 MB   /usr/sbin/apache2 -k start
4352   15768  178.1 MB  0.8 MB   /usr/sbin/apache2 -k start
4546   15768  178.0 MB  0.5 MB   /usr/sbin/apache2 -k start
4628   15768  178.1 MB  1.2 MB   /usr/sbin/apache2 -k start
4664   15768  178.1 MB  0.5 MB   /usr/sbin/apache2 -k start
4669   15768  178.2 MB  0.7 MB   /usr/sbin/apache2 -k start
4796   15768  178.1 MB  0.7 MB   /usr/sbin/apache2 -k start
5362   15768  177.7 MB  0.5 MB   /usr/sbin/apache2 -k start
6195   15768  178.0 MB  0.7 MB   /usr/sbin/apache2 -k start
6208   15768  209.3 MB  32.4 MB  /usr/sbin/apache2 -k start
6211   15768  178.0 MB  0.6 MB   /usr/sbin/apache2 -k start
6213   15768  177.6 MB  0.3 MB   /usr/sbin/apache2 -k start
6214   15768  178.0 MB  0.9 MB   /usr/sbin/apache2 -k start
6256   15768  201.7 MB  25.9 MB  /usr/sbin/apache2 -k start
6257   15768  177.9 MB  0.8 MB   /usr/sbin/apache2 -k start
6353   15768  177.5 MB  0.2 MB   /usr/sbin/apache2 -k start
15768  1      177.5 MB  0.1 MB   /usr/sbin/apache2 -k start
### Processes: 21
### Total private dirty RSS: 70.92 MB

-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB

----- Passenger processes ------
PID    VMSize    Private   Name
--------------------------------
1643   901.9 MB  105.0 MB  Rails: /rails/app/current
1658   900.6 MB  103.3 MB  Rails: /rails/app/current
3425   898.4 MB  95.4 MB   Rails: /rails/app/current
6323   874.2 MB  49.5 MB   Passenger ApplicationSpawner: /rails/app/current
6409   887.7 MB  62.9 MB   Rails: /rails/app/current
15775  22.9 MB   0.3 MB    PassengerWatchdog
15778  164.5 MB  2.6 MB    PassengerHelperAgent
15780  43.1 MB   7.0 MB    Passenger spawn server
15783  136.9 MB  0.7 MB    PassengerLoggingAgent
32082  961.7 MB  126.9 MB  Rails: /rails/app/current
### Processes: 10
### Total private dirty RSS: 553.53 MB
We are particularly interested in the Passenger processes section. To see just the PID and memory consumption of all the workers, we can filter out the unneeded data (adjust the Rails: /home pattern to match your application's path):
passenger-memory-stats | grep 'Rails: /home' | awk '{ print $1 " - " $4 }'
So the output would look like this:
# PID - MEMORY USAGE
1643 - 105.0
1658 - 106.9
3425 - 99.1
6409 - 70.7
8381 - 0.1
32082 - 130.3
So now we can get a quick overview of how our server is doing.
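The same filtering can be done in plain Ruby. Below is a hedged sketch (not from the original post): the regexp and the column positions simply follow the passenger-memory-stats output format shown above, and the helper name passenger_worker_stats is made up.

```ruby
# Parse passenger-memory-stats output into [PID, private MB] pairs.
# Assumes the column layout shown above: PID, VMSize, "MB", Private, "MB", Name.
def passenger_worker_stats(stats_output)
  stats_output.each_line.select { |line| line =~ /Rails: / }.map do |line|
    columns = line.split
    [columns[0].to_i, columns[3].to_f] # column 1: PID, column 4: private memory in MB
  end
end
```

Feeding it the captured output of `passenger-memory-stats` yields the same PID/memory pairs as the grep/awk pipeline above.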
I'm too lazy! I don't want to check it all the time. This should monitor itself!
It is quite obvious that monitoring should be done automatically. Of course it is recommended to check Passenger stats from time to time, but who would monitor and kill bloated Passenger workers on their own? Probably no one. That's why we're going to create a simple Ruby program that monitors Passenger workers and shuts them down gracefully (or kills them if they refuse to shut down).
How to kill Passenger processes from Ruby?
Each Passenger instance is a separate process with its own PID. Killing processes from Ruby is really easy. We do it with the following call:
Process.kill(signal, pid)
We will use this method and first try to shut down Passenger processes gracefully (meaning the Passenger process will complete any request it is currently handling and then shut down). If this fails, we will send a TERM signal and kill the process immediately.
- SIGUSR1 signal - shut down gracefully
- TERM signal - kill immediately
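The graceful-then-forceful pattern can be sketched on its own. This is my own illustration, not code from the post: the signal names and the waiting step mirror the monitor script below, but the helper name shutdown_process and the return values are made up.

```ruby
# Ask a process to shut down gracefully, then force-kill it if it is
# still alive after the grace period. Raises Errno::ESRCH if the pid
# is already gone when we start.
def shutdown_process(pid, grace_period = 10)
  Process.kill('SIGUSR1', pid) # Passenger: finish the current request, then exit
  sleep(grace_period)
  begin
    Process.getpgid(pid)       # raises Errno::ESRCH once the process is gone
    Process.kill('TERM', pid)  # still running - take it down immediately
    :forced
  rescue Errno::ESRCH
    :graceful                  # already exited on its own
  end
end
```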
Final Ruby monitoring
Ok, so now we know how to kill a Passenger process. The rest is simple: we need to extract the PID and memory usage of each worker, set a limit, check it, and kill every instance that exceeds the limit:
# Finds bloating passengers and try to kill them gracefully.
# @example:
#   PassengerMonitor.run
require 'logger'

class PassengerMonitor
  # How much memory (MB) single Passenger instance can use
  DEFAULT_MEMORY_LIMIT = 150
  # Log file name
  DEFAULT_LOG_FILE = 'passenger_monitoring.log'
  # How long should we wait after graceful kill attempt, before force kill
  WAIT_TIME = 10

  def self.run(params = {})
    new(params).check
  end

  # Set up memory limit, log file and logger
  def initialize(params = {})
    @memory_limit = params[:memory_limit] || DEFAULT_MEMORY_LIMIT
    @log_file = params[:log_file] || DEFAULT_LOG_FILE
    @logger = Logger.new(@log_file)
  end

  # Check all the Passenger processes
  def check
    @logger.info 'Checking for bloated Passenger workers'
    `passenger-memory-stats`.each_line do |line|
      next unless (line =~ /RackApp: / || line =~ /Rails: /)
      pid, memory_usage = extract_stats(line)
      # If a given passenger process is bloated try to
      # kill it gracefully and if it fails, force killing it
      if bloated?(pid, memory_usage)
        kill(pid)
        wait
        kill!(pid) if process_running?(pid)
      end
    end
    @logger.info 'Finished checking for bloated Passenger workers'
  end

  private

  # Check if a given process is still running
  def process_running?(pid)
    Process.getpgid(pid) != -1
  rescue Errno::ESRCH
    false
  end

  # Wait for process to be killed
  def wait
    @logger.error 'Waiting for worker to shutdown...'
    sleep(WAIT_TIME)
  end

  # Kill it gracefully
  def kill(pid)
    @logger.error "Trying to kill #{pid} gracefully..."
    Process.kill('SIGUSR1', pid)
  end

  # Kill it with fire
  def kill!(pid)
    @logger.fatal "Force kill: #{pid}"
    Process.kill('TERM', pid)
  end

  # Extract pid and memory usage of a single Passenger
  def extract_stats(line)
    stats = line.split
    return stats[0].to_i, stats[3].to_f
  end

  # Check if a given process is exceeding memory limit
  def bloated?(pid, size)
    bloated = size > @memory_limit
    @logger.error "Found bloated worker: #{pid} - #{size}MB" if bloated
    bloated
  end
end
The source code is straightforward and commented, so there is no need for further explanation. Usage is reduced to just one line:
PassengerMonitor.run
How to incorporate it into your Rails project and run it from cron?
Using this with your Rails app is really easy. First, copy the source code above into the /lib dir of your project, in a file called passenger_monitor.rb.
Then create a file in /script named passenger_monitor.rb (or whatever you like) and insert the following code:
file_path = File.expand_path(File.dirname(__FILE__))

# Load PassengerMonitor from '/lib/passenger_monitor.rb'
require File.join(file_path, '..', 'lib', 'passenger_monitor')

# Set logger to log into Rails project /log directory and start monitoring
PassengerMonitor.run(
  :log_file => File.join(file_path, '..', 'log', 'passenger_monitor.log')
)
There is one more thing we need to do: set it up in cron, so it executes every minute. To do so, type crontab -e and insert the following line into your crontab:
* * * * * env -i /usr/local/bin/ruby /rails/app/script/passenger_monitor.rb
Of course, remember to replace the /rails/app/ path with the path to your application.
Checking if monitoring is working
How do we check that monitoring is working? Go to your app root directory and type:
cat log/passenger_monitor.log
You should see something like this:
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Checking for bloated Passenger workers
I, [TIMESTAMP]  INFO -- : Finished checking for bloated Passenger workers
To see only the kill attempts, type:
cat log/passenger_monitor.log | egrep 'ERROR|FATAL'
Result:
E, [TIMESTAMP] ERROR -- : Found bloated worker: 4162 - 151.8MB
E, [TIMESTAMP] ERROR -- : Trying to kill 4162 gracefully...
E, [TIMESTAMP] ERROR -- : Waiting for worker to shutdown...
E, [TIMESTAMP] ERROR -- : Found bloated worker: 24192 - 152.6MB
E, [TIMESTAMP] ERROR -- : Trying to kill 24192 gracefully...
E, [TIMESTAMP] ERROR -- : Waiting for worker to shutdown...
E, [TIMESTAMP] ERROR -- : Found bloated worker: 3425 - 150.3MB
E, [TIMESTAMP] ERROR -- : Trying to kill 3425 gracefully...
E, [TIMESTAMP] ERROR -- : Waiting for worker to shutdown...
August 4, 2012 — 17:08
Good stuff. What would be really nice is to accomplish the same goal using a proven monitoring daemon like Monit.
August 5, 2012 — 17:23
Looks like Passenger maybe has a built-in option for this, but only in the for-pay 'enterprise' version? http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMemoryLimit
The advantage of the built-in option is that it will shut down the instance 'gracefully' without losing any requests, which would be difficult to duplicate without access to Passenger internals.
On the other hand, money.
August 5, 2012 — 19:41
My solution also shuts down a single Passenger worker gracefully, and if that fails, it kills the instance.
March 8, 2013 — 10:34
Thanks for the script!
I was getting errors like “passenger is not part of the bundle. Please add it to Gemfile” when running this under passenger+rvm. The workaround was to add “gem ‘passenger'” to the Gemfile, even though this isn’t commonly done (http://stackoverflow.com/questions/5228185/do-i-need-to-install-passenger-as-a-regular-gem-even-though-my-app-uses-bundler).
May 30, 2013 — 12:37
I have followed this tutorial and it is working. It is killing the instance of the application that consumes memory above 40MB. But the application spawner still goes down and the server gives an error “INTERNAL SERVER ERROR”.
Can anyone help me????
Any help would be appreciated!!!!
May 30, 2013 — 12:53
First of all, check the Apache errors in /var/log/something-here ;) and see what's going on. Btw, I think 40MB is a bit low for Passenger instances. Try a bigger limit.
May 31, 2013 — 09:33
i have checked the logs and the error is ” Unexpected error in mod_passenger: Cannot spawn application ‘/product/MyApp’: The spawn server died unexpectedly, and restarting it failed.”
And i have increased the limit too….
Please help me :(
May 31, 2013 — 11:29
Well, looks like my code is killing the spawner (which should not happen). You should try to modify the line "(line =~ /RackApp: / || line =~ /Rails: /)" so it catches only the Passenger worker instances (without the spawner).
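One way to tighten that match is to require the two memory columns right before the Rails:/Rack: label, which the ApplicationSpawner line cannot satisfy. This is my own sketch, not the author's fix; the constant WORKER_LINE and helper worker_line? are made-up names.

```ruby
# Match only worker lines of the form "<PID> <VMSize> MB <Private> MB Rails|Rack: ...".
# The spawner line has "Passenger ApplicationSpawner:" in the name column,
# so it fails the (Rails|Rack) anchor and is skipped.
WORKER_LINE = /\A\s*\d+\s+[\d.]+ MB\s+[\d.]+ MB\s+(Rails|Rack): /

def worker_line?(line)
  !!(line =~ WORKER_LINE)
end
```

In the monitor script, `next unless worker_line?(line)` would then replace the original two-regexp check.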
May 31, 2013 — 11:45
My passenger-memory-stats output looks like this:
—– Passenger processes ——
PID VMSize Private Name
--------------------------------
12157 215.8 MB 0.3 MB PassengerWatchdog
12162 1633.9 MB 1.5 MB PassengerHelperAgent
12164 109.2 MB 6.0 MB Passenger spawn server
12167 165.4 MB 1.0 MB PassengerLoggingAgent
12242 254.8 MB 43.8 MB Passenger ApplicationSpawner: /product/Superwifi
12392 262.9 MB 50.1 MB Rack: /product/MyApp
12400 261.6 MB 48.5 MB Rack: /product/MyApp
### Processes: 7
Should I still change that line?
May 31, 2013 — 11:56
Well, in this case the spawner should not get caught - really, really interesting. Can you mail me instead of posting here? If we figure something out together, I'll update the post with the fix ;)
May 31, 2013 — 12:00
ya sure….Your mail id please??
May 31, 2013 — 12:02
Check your email ;) I've sent you a message from maciej@mensfeld.pl
May 31, 2013 — 12:06
thanks…:)
December 5, 2013 — 05:22
Cool article. You may want to point out that ‘passenger-memory-stats’ can only properly determine the RSS if run as root, so this would have to be executed from the root crontab.
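Following the tip above, the monitor script could guard against silently incomplete data when it is not run as root. A hedged sketch (my own addition, not from the post; the helper name running_as_root? is made up):

```ruby
# passenger-memory-stats can only read every process's private RSS when
# run as root, so the monitor is only reliable from the root crontab.
def running_as_root?
  Process.uid.zero?
end

# e.g. at the top of the monitoring script:
# abort 'Run from the root crontab for full RSS data' unless running_as_root?
```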
April 21, 2016 — 12:58
Good stuff. I had to make the following change to work with my Passenger instances: (line =~ /RubyApp: / || line =~ /Rails: /), instead of RackApp.
August 12, 2016 — 19:16
Is there a “modern” method of doing this?
December 7, 2016 — 21:17
You can set a cron job to simply touch tmp/restart.txt with some frequency (this will only work if your memory bloat builds up slowly).
But I still think the stuff in this guide is valid. Minor adjustments may be needed, but overall it should work just fine.
January 22, 2018 — 18:42
I am getting “No such file or directory – passenger-memory-stats (Errno::ENOENT)”.
I have Passenger installed, and running passenger-memory-stats in the console works, but not from cron!