Sometimes we need to run a task that consumes all of the available IO bandwidth. This can lead to unexpected behaviour from the OS, which might even kill the process due to resource starvation.
Let's take an example. We want to tar.gz a huge directory with a lot of files in it. Our machine also hosts a web server that serves several sites. If we start "taring" the directory, the server may start timing out (it won't be able to respond as fast as we would expect). On the other hand, we don't care much about how long the archive takes to create; we can always throw the job in screen and detach.
# Standard approach - will slow IO response time
tar -cvf ./dir.tar ./dir
pv to the rescue!
To slow things down to a tolerable level, we will use the pv tool. pv lets a user watch data move through a pipeline, showing time elapsed, percentage completed (with a progress bar), current throughput, total data transferred, and ETA. It can also limit the speed of the incoming data, if used wisely.
To tar a directory at a given speed (in MB/s) we need to use the following command:
tar -chf - ./dir | pv -L 2m > ./dir.tar
The above example lets us tar ./dir at a maximum speed of 2 MB/s (pv's -L option accepts rates with k, m, and g suffixes).
We can use the same method to slow down a mysqldump:
mysqldump --add-drop-table -u user -ppassword -h host db_name | pv -q -L 2m | bzip2 -c > ./dump.sql.bz2
March 11, 2013 — 06:44
This was very useful for me.
October 15, 2013 — 09:58
This can do more harm than good, because a dump may lock the MySQL tables, and since you are limiting throughput the locks will be held longer. It is better to run mysqldump with some extra parameters:
mysqldump --add-drop-table --single-transaction --quick --lock-tables=false -u user -ppassword -h host db_name | bzip2 -c > ./dump.sql.bz2
It’s up to you to combine it with pv. You could also use: ionice -c3 nice -n19 :-)
Bas van Beek
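The ionice/nice suggestion above takes a different tack: instead of capping throughput, let the kernel deprioritise the job so it only uses bandwidth nothing else wants. A sketch (Linux-only; ionice -c3 puts the process in the idle I/O class, nice -n19 gives it the lowest CPU priority; ./dir is a placeholder):

```shell
# Run tar with idle I/O priority and minimal CPU priority;
# the web server's requests are served first.
ionice -c3 nice -n19 tar -cf ./dir.tar ./dir
```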