Tag: mysqldump

Backup (mysqldump) all your MySQL databases into separate files

Sometimes we would like to dump all the MySQL databases. MySQL provides an easy solution to this problem:

mysqldump -u root -p --all-databases > all_dbs.sql

However, this dumps everything into a single file. How can we dump all databases into separate files? Well, here is my solution. A small bash script:

#!/bin/bash

TIMESTAMP=$(date +"%F")
BACKUP_DIR="/backup/$TIMESTAMP"
MYSQL_USER="backup"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="password"
MYSQLDUMP=/usr/bin/mysqldump

mkdir -p "$BACKUP_DIR/mysql"

# List all databases, skipping the header row and the internal schemas
databases=$($MYSQL --user="$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)")

# Dump and gzip each database into its own file
for db in $databases; do
  $MYSQLDUMP --force --opt --user="$MYSQL_USER" -p"$MYSQL_PASSWORD" --databases "$db" | gzip > "$BACKUP_DIR/mysql/$db.gz"
done
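
To run the backup periodically, a crontab entry along these lines will do (the script location /usr/local/bin/mysql_backup.sh is just a placeholder for wherever you keep it):

# Run the backup script every day at 3:00 AM
0 3 * * * /usr/local/bin/mysql_backup.sh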

Be aware that in order to execute this script from cron, you need to store the password in it (cron cannot answer a password prompt). That is why you should not use the root account. Instead, create a new user just for backups, with only the following privileges (a sample set of statements follows the list):

  • SHOW DATABASES
  • SELECT
  • LOCK TABLES
  • RELOAD
  • SHOW VIEW
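
A minimal sketch of creating such a user (run in the mysql client as root; the host and password here are placeholders to adjust for your setup):

CREATE USER 'backup'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, SHOW DATABASES, LOCK TABLES, RELOAD, SHOW VIEW ON *.* TO 'backup'@'localhost';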

Slowing down (limiting) tar, mysqldump or other processes to save I/O bandwidth

Sometimes we want to perform tasks that consume all of the available I/O bandwidth. This may lead to unexpected behaviour from our OS; it might even kill the process in question due to resource starvation.

Let's take an example. We want to tar.gz a huge directory with a lot of files in it. Our machine also runs a web server which serves several sites. If we start tarring our directory, it might lead to timeouts on the server (it won't be able to respond as fast as we would expect). On the other hand, we don't care that much about the time needed to create the archive. We can always throw it into screen and detach.

# Standard approach - will slow I/O response time
tar -cvf ./dir.tar ./dir

pv to the rescue!

To slow things down to a tolerable level we will use the pv tool. pv allows a user to see the progress of data through a pipeline, giving information such as time elapsed, percentage completed (with a progress bar), current throughput rate, total data transferred, and ETA. It can also limit the speed of incoming data, if used wisely.
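
For example, to watch a big file being compressed while seeing the current rate and ETA (the file name huge.log is just a placeholder):

pv huge.log | gzip > huge.log.gz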

To tar a directory at a given maximum speed (in MB/s) we need to use the following command:

tar -chf - ./dir | pv -L 2m > ./dir.tar

The above example will allow us to tar ./dir with a maximum speed of 2 MB/s. pv's -L flag also accepts other unit suffixes, such as k for kilobytes and g for gigabytes.

We can use the same method to slow down mysqldump:

mysqldump --add-drop-table -u user -ppassword -h host db_name | pv -L 2m | bzip2 -c > ./dump.sql.bz2
