If you get this message from your QNAP:
The file system is not clean. It is suggested that you run "check disk"
and after you start a disk check, you end up with a message like this:
[Mirror Disk Volume: Drive 1 2 3 4] Examination failed (Cannot unmount disk).
You need to SSH into the NAS and execute the following commands:
/etc/init.d/services.sh stop
/etc/init.d/opentftp.sh stop
/etc/init.d/Qthttpd.sh stop
umount /dev/md0
e2fsck -f -v -C 0 /dev/md0
mount /dev/md0
reboot
After all of the above, your QNAP NAS will reboot and everything will be back to normal.
October 19, 2013 — 12:16
Thank you so much for posting this solution!! My ts-869 was totally stuffed and I had tried everything and nothing worked. This did the trick and all of my data shows again.
Thank-you!
regards
Casey
October 25, 2013 — 10:36
Hi, I could not run e2fsck; it was too old.
e2fsck -f -v -C 0 /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
/dev/md0 has unsupported feature(s): 64bit
e2fsck: Get a newer version of e2fsck!
Regards
Bop
October 31, 2013 — 14:06
Hi,
I also did exactly what was written above. Everything was fine, until I went looking for two folders that had disappeared. I looked everywhere, but they were nowhere to be found.
Is there a solution to getting these folders back, because these folders contained very important data. Can I rebuild the RAID set again or should I try a recovery program first?
I know. Yes it was stupid of me not to backup first.
Any ideas?
Cheers Rick
December 8, 2013 — 03:30
After running the three shell commands listed above, I was still not able to umount the array. The error indicates that the device is busy. Is there any easy way to figure out what else needs to be shut down?
Steve
December 8, 2013 — 20:33
I have a 6 bay and a 4 bay server. I have the same cannot unmount disk error message on both of them. I followed the directions above and I get a message stating that the server is busy. I turned the servers off and then on again and I still get the same message.
What should I try next?
Thanks
December 16, 2013 — 20:25
If the “mount /dev/md0” command fails with the error message:
mount: can’t find /dev/md0 in /etc/fstab or /etc/mtab
In my case I mounted it with the following command:
mount -t ext4 /dev/md0 /share/MD0_DATA
December 16, 2013 — 20:32
Ahh, another tip: if you cannot unmount /dev/md0 because the filesystem is BUSY, in my case it was because I was running an svn server. If you stop the svn service (or kill the process) then you will be able to unmount the filesystem.
You can see which processes are locking /dev/md0 with the following command, and kill them:
lsof +f -- /dev/md0
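For those copy-pasting: a small sketch that pipes the lsof output into awk to pull out just the PIDs. The helper name and the sample output below are made up for illustration; double-check what you are about to kill before running `kill` on a live NAS.

```shell
# pids_holding: print the unique PIDs from lsof output.
# lsof's first line is a header; the PID is the second column.
pids_holding() {
    awk 'NR > 1 { print $2 }' | sort -un
}

# On the NAS you would feed it the real thing:
#   lsof +f -- /dev/md0 | pids_holding | xargs kill
# Demonstration with made-up sample output:
sample='COMMAND   PID USER FD TYPE DEVICE NAME
smbd     1234 admin  4u  REG    9,0 /share/MD0_DATA/file
svnserve  987 admin  3u  REG    9,0 /share/MD0_DATA/repo'
echo "$sample" | pids_holding
```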
January 28, 2014 — 17:02
I got the following error message when trying to run the line “mount /dev/md0”:
mount: can’t find /dev/md0 in /etc/fstab or /etc/mtab
As previously mentioned by Bernardo, this could be solved with the command “mount /dev/md0 /share/MD0_DATA -t ext4”. However, I would like to go one step further and say that the line “mount /dev/md0” should be replaced by the line “mount /dev/md0 /share/MD0_DATA -t ext4”. Many forums claim that fstab and mtab are simply not used by QNAP, which is why the error appears. To avoid the error, the corrected mount line should be used instead.
March 8, 2014 — 17:35
I got the check file system examination failed message and want to run the above script, but I don’t understand how to get into SSH. Would someone please give me step by step instructions to follow?
I’m running a TS-859 PRO+ with an 8 drive RAID5. System Firmware is 4.0.3.
Disks are Seagate 2TB ST32000542AS.
I changed a failed drive, and the system was in rebuild when a power failure hit. On reboot it started rebuilding again, then the RAID went inactive when a file system error popped up.
March 14, 2014 — 03:52
In my case a bash process locked the RAID. (Found it with lsof, thanks for the reminder.) The reason was that bash was set as the login shell for admin in /etc/passwd. Additionally, the home directory for admin was set there to /share/homes/admin.
So even if I changed to /root after logging in with ssh, bash kept the ‘home’ locked.
Changing admin’s home to /root in passwd, or changing the default shell to sh (also in passwd), together with the other commands, does the trick.
Remounting the RAID was not necessary.
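For reference, the edit described above can be sketched like this, demonstrated on a sample passwd line (on the NAS you would carefully edit /etc/passwd itself and keep a backup; making both changes at once is my choice here, one may be enough):

```shell
# /etc/passwd fields are name:pw:uid:gid:gecos:home:shell.
# fix_admin rewrites admin's home to /root and shell to /bin/sh.
fix_admin() {
    awk -F: 'BEGIN { OFS = ":" }
             $1 == "admin" { $6 = "/root"; $7 = "/bin/sh" }
             { print }'
}

# Demonstration on a made-up sample line:
echo 'admin:x:0:0:administrator:/share/homes/admin:/bin/bash' | fix_admin
```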
Greetings
April 5, 2014 — 13:52
On my QNAP TS-869L (4.0.5), neither lsof +f -- /dev/md0 nor lsof +f -- /share/MD0_DATA showed anything, but umount /dev/md0 failed because /share/MD0_DATA was detected as still in use.
Fortunately I was successful in running a disk check from the web interface by logging in as user “admin”. Before, I had logged in with my administrative user.
April 12, 2014 — 15:15
Thanks guys. Worked a treat (with the comments – you’re a champ, Bernardo).
Got to pretend I was a techie for a few hours.
May 15, 2014 — 09:53
Hi,
I get the following errors.
– when I try to unmount it says: umount: /dev/md0: not mounted
– when I try to run e2fsck I get: Invalid argument while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
No idea what I’m doing wrong, hope you can help.
Thanks in advance
Piet Hein
May 15, 2014 — 15:46
Did you try stuff mentioned in comments above?
May 17, 2014 — 13:19
Regarding lsof returning an error: I think there is a formatting issue in Bernardo’s post. Replace the “—” with two plain minus signs (“--”) when copying the command.
And regarding the mount command returning “…can’t find…”: I just rebooted the QNAP after the file check finished. The RAID got mounted automatically again after the reboot…
June 25, 2014 — 05:05
I got this problem… I couldn’t umount the disk since a dd process was spawning constantly. When you kill/stop all processes that lock the drive and there is a looping process that recreates itself, you have to kill all the involved processes and unmount the filesystem in a single command.
How?
> kill -9 PID1 PID2 PID3 … PIDN && umount /dev/mdx
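The same idea can be sketched as a retry loop (force_umount is a made-up helper name; it assumes lsof prints the PID in the second column, as in the session below, and that it may take several rounds before the respawner loses the race):

```shell
# force_umount DEV [MAX]: kill whatever holds DEV open, retry umount,
# and repeat -- useful when a killed process keeps respawning.
force_umount() {
    dev=$1
    max=${2:-10}
    n=0
    until umount "$dev" 2>/dev/null; do
        n=$((n + 1))
        if [ "$n" -gt "$max" ]; then
            echo "gave up on $dev after $max attempts"
            return 1
        fi
        # PID is the second column of lsof output; skip the header line.
        lsof +f -- "$dev" 2>/dev/null | awk 'NR > 1 { print $2 }' \
            | sort -u | xargs kill -9 2>/dev/null
    done
}

# Usage on the NAS: force_umount /dev/md9
```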
[~] # cat /etc/mtab
/proc /proc proc rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
sysfs /sys sysfs rw 0 0
tmpfs /tmp tmpfs rw,size=64M 0 0
none /proc/bus/usb usbfs rw 0 0
/dev/sda4 /mnt/ext ext3 rw 0 0
/dev/md9 /mnt/HDA_ROOT ext3 rw,data=ordered 0 0
/dev/sda3 /share/HDA_DATA ext4 rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl 0 0
tmpfs /var/syslog_maildir tmpfs rw,size=8M 0 0
none /sys/kernel/config configfs rw 0 0
[~] # umount /dev/md9
umount: /mnt/HDA_ROOT: device is busy
umount: /mnt/HDA_ROOT: device is busy
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hwmond 6306 admin 3r REG 9,9 51 6700 /mnt/HDA_ROOT/.config/sysHealth.conf (deleted)
dd 7503 admin 4w REG 9,9 19369 13287 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill 6306
[~] # kill 7503
[~] # umount /dev/md9
umount: /mnt/HDA_ROOT: device is busy
umount: /mnt/HDA_ROOT: device is busy
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dd 24933 admin 4w REG 9,9 0 13292 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill 24933
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dd 24939 admin 4w REG 9,9 0 13287 /mnt/HDA_ROOT/.logs/kmsg
[~] # umount /dev/md9
umount: /mnt/HDA_ROOT: device is busy
umount: /mnt/HDA_ROOT: device is busy
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dd 24939 admin 4w REG 9,9 0 13287 /mnt/HDA_ROOT/.logs/kmsg
hwmond 24950 admin 3r REG 9,9 51 6725 /mnt/HDA_ROOT/.config/sysHealth.conf (deleted)
[~] # kill 24939
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hwmond 24950 admin 3r REG 9,9 51 6725 /mnt/HDA_ROOT/.config/sysHealth.conf (deleted)
dd 25028 admin 4w REG 9,9 0 13292 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill -9 24950 25028
[~] # umount /dev/md9
umount: /mnt/HDA_ROOT: device is busy
umount: /mnt/HDA_ROOT: device is busy
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dd 25038 admin 4w REG 9,9 0 13287 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill -9 25038
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dd 25047 admin 4w REG 9,9 0 13292 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill -9 25047 && umount /dev/md9
umount: /mnt/HDA_ROOT: device is busy
umount: /mnt/HDA_ROOT: device is busy
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hwmond 25058 admin 3r REG 9,9 51 6700 /mnt/HDA_ROOT/.config/sysHealth.conf (deleted)
dd 25096 admin 4w REG 9,9 0 13287 /mnt/HDA_ROOT/.logs/kmsg
[~] # lsof + /dev/md9
lsof: status error on +: No such file or directory
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hwmond 25058 admin 3r REG 9,9 51 6700 /mnt/HDA_ROOT/.config/sysHealth.conf (deleted)
dd 25096 admin 4w REG 9,9 0 13287 /mnt/HDA_ROOT/.logs/kmsg
[~] # kill -9 25058 25096 && umount /dev/md9
[~] #
# history
1 exit
2 /etc/init.d/services.sh stop
3 /etc/init.d/opentftp.sh stop
4 /etc/init.d/Qthttpd.sh stop
5 umount /dev/md0
6 cat /etc/mtab
7 umount /dev/md9
8 umount /dev/sda3
9 umount /dev/sda4
10 ps ef
11 ps ef | clear
12 ps ef
13 kill 8732
14 ps ef | grep apache
15 kill 4201
16 ps ef | grep apache
17 kill 7902
18 ps ef | grep apache
19 kill 24687
20 ps ef | grep apache
21 kill 24697
22 ps ef | grep apache
23 ps ef
24 kill 16516
25 kill 16843
26 ps ef | grep ffm
27 kill 5586
28 kill 7341
29 kill 9822
30 kill 16472
31 kill 16492
32 kill 16516
33 kill 16843
34 ps ef | grep ffm
35 kill 16492
36 kill 16516
37 kill 16843
38 ps ef | grep ffm
39 ps ef | less
40 ps ef | more
41 ps ef | grep apache
42 kill 24770
43 kill 24781
44 ps ef | grep apache
45 kill 24793
46 ps ef | grep apache
47 ps ef | grep http
48 kill 4091
49 kill 16303
50 kill 16423
51 kill 16508
52 kill 24844
53 ps ef | grep http
54 kill 16303
55 kill 16423
56 kill 16508
57 kill 24855
58 ps ef | grep http
59 cat /etc/mtab
60 umount /dev/md9
61 lsof + /dev/md9
62 kill 6306
63 kill 7503
64 umount /dev/md9
65 lsof + /dev/md9
66 kill 24933
67 lsof + /dev/md9
68 umount /dev/md9
69 lsof + /dev/md9
70 kill 24939
71 lsof + /dev/md9
72 kill -9 24950 25028
73 umount /dev/md9
74 lsof + /dev/md9
75 kill -9 25038
76 lsof + /dev/md9
77 kill -9 25047 && umount /dev/md9
78 lsof + /dev/md9
79 lsof + /dev/md9
80 kill -9 25058 25096 && umount /dev/md9
December 16, 2014 — 14:43
I’ve been having this problem for a couple of days now. I ran these commands over SSH, and when I input the umount command, I get the error “device busy” twice, and that is all. I’m kind of new at this, so any help would be appreciated, thanks.
December 21, 2014 — 22:56
Thanks much, this worked.
For everyone else having trouble, I’m not sure whether it’s a recent addition (I have the latest firmware for TS-212) but the culprit appears to be Apache Proxy.
To verify this, run (as previously suggested) the lsof command: you will see a bunch of `apache_pr` open `.lock` files – to stop it, simply run (after the first three suggested lines) the additional:
/etc/init.d/thttpd.sh stop
(I also have the impression that running the `Qthttpd.sh` stop command is redundant, as that’s taken care of by the first `services.sh stop` – but it won’t do any harm anyway).
Hope this helps.
[1] To verify it’s actually Apache Proxy:
[~] # lsof +f -- /dev/md0
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
apache_pr 5282 admin mem-r REG 9,0 36864 28573703 /share/MD0_DATA/.locks/gencache_notrans.tdb
apache_pr 5282 admin mem REG 9,0 40960 28573702 /share/MD0_DATA/.locks/gencache.tdb
apache_pr 5282 admin 4u REG 9,0 40960 28573702 /share/MD0_DATA/.locks/gencache.tdb
apache_pr 5282 admin 5ur REG 9,0 36864 28573703 /share/MD0_DATA/.locks/gencache_notrans.tdb
apache_pr 27169 admin mem REG 9,0 36864 28573703 /share/MD0_DATA/.locks/gencache_notrans.tdb
apache_pr 27169 admin mem REG 9,0 40960 28573702 /share/MD0_DATA/.locks/gencache.tdb
apache_pr 27169 admin 4u REG 9,0 40960 28573702 /share/MD0_DATA/.locks/gencache.tdb
… and many more like these
March 17, 2015 — 03:23
Good to see that someone else is sharing my SVN woes. It looks like the reason that the disk will not unmount is because the SVN service script provided with the QPKG does not contain any shutdown code… it only contains startup code. In fact, if we run “/etc/init.d/services.sh stop”, it will actually try to *start* svnserve a second time.
I ended up modifying the “svnstart.sh” file (for me it was located in /share/MD0_DATA/.qpkg/Optware) to accept a start/stop/restart argument using some other QNAP scripts as a model. I added a “stop” function that will automatically locate the PID of the svnserve process and kill it. My new file looks like this:
#!/bin/sh
RETVAL=0
case "$1" in
start)
echo "Starting svnserve..."
(sleep 10; /share/MD0_DATA/.qpkg/Optware/bin/svnserve -d --listen-port=3690) &
;;
stop)
echo "Stopping svnserve..."
kill $(ps aux | grep '[s]vnserve' | awk '{print $1}')
;;
restart)
$0 stop
$0 start
;;
*)
echo "Usage: $0 (start|stop|restart)"
exit 1
esac
exit $RETVAL
After modifying this script, I can now scan the volume for errors via the web UI without having to go to the terminal. However, I am still getting the error every time I restart the NAS. This is beyond frustrating and I don’t have any idea what to try next. Let me know if you have any ideas. I’m thinking converting my entire repository to Git might be easier at this point in time.
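One detail in that script worth calling out: the brackets in grep '[s]vnserve' stop grep from matching its own entry in the ps output. A quick illustration with made-up sample lines:

```shell
# The pattern [s]vnserve matches the text "svnserve", but the grep
# process's own command line contains the literal "[s]vnserve", which
# the pattern does not match -- so grep never finds itself.
sample='  812 admin svnserve -d --listen-port=3690
  990 admin grep [s]vnserve'
echo "$sample" | grep '[s]vnserve'
```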
January 14, 2016 — 17:08
The solution on newer Qnap versions (4+) is as follows
/etc/init.d/services.sh stop
/etc/init.d/opentftp.sh stop
/etc/init.d/Qthttpd.sh stop
umount /dev/mapper/cachedev1
e2fsck_64 -f -v -C 0 /dev/mapper/cachedev1
mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA
reboot
Also if you know this solution will work for you (the mount points and services are correct) you can run it as one line by placing double ampersands in between each command, for example:
/etc/init.d/services.sh stop && /etc/init.d/opentftp.sh stop && /etc/init.d/Qthttpd.sh stop && umount /dev/mapper/cachedev1 && e2fsck_64 -f -v -C 0 /dev/mapper/cachedev1 && mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA && reboot
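As a footnote on the double ampersands: && only runs the next command if the previous one exited successfully, so a failed umount or e2fsck stops the chain before it ever reaches reboot. A tiny illustration:

```shell
# && short-circuits on failure: the command after a failing step never runs.
true && echo "previous step succeeded, so this runs"
false && echo "this is never printed" || echo "chain stopped at the failure"
```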
July 30, 2016 — 19:09
e2fsck_64 -f -v -C 0 /dev/md0 gets round the “e2fsck: Get a newer version of e2fsck!” error
August 1, 2016 — 13:32
I had the memory error – I updated the firmware and then I was able to run the e2fsck_64 -f -v -C 0 /dev/md0 command successfully.
August 21, 2016 — 23:42
Thanks a lot for this info! It solved a major problem on my NAS: important web services did not work anymore because somehow the file system was not mounted correctly, and I could not fix it using the GUI or a reboot. Thanks again.
August 22, 2016 — 01:14
:) Glad to see that the title on my business card “Trained Monkey” is not true.
August 22, 2016 — 09:00
Unfortunately, after a reboot the filesystem is not mounted again and your procedure does not solve it anymore. None of my web services are running now… Anything else I can try?
August 22, 2016 — 21:30
A few things could be happening: the filesystem could actually be corrupted, the disks could be damaged, or they may not be at the location you’re typing in. Try running the commands one by one, and when you get an error message, google it to see if there’s a fix. You might also consider what got you to run the procedure above in the first place. When I had to run it, I was performing a firmware upgrade, and somehow that borked the filesystem.
August 22, 2016 — 23:01
A firmware upgrade would be an obvious reason for trouble, but the NAS had been running for quite some time without any changes at all. I found out there was a problem because I could not reach a virtual machine anymore. I started Virtualization Station to check what was up, and it reported an error that the machine was not available. Since the system had not been rebooted in a long time, I thought I might try that as a quick solution, but after I did, the problems just kept stacking up…
August 22, 2016 — 23:18
From what you describe, it doesn’t sound like a firmware upgrade; unless you told it to upgrade the firmware, it wouldn’t have started that process. You could try upgrading the firmware. Really, you should look at the logs and/or disk health to see if one or more of the disks have gone bad (if by a long time without rebooting you mean years).
August 22, 2016 — 23:46
The system had not been running for years since the last reboot, but for weeks; maybe 2 or 3 months max. I have the latest firmware already. I want to save what can be saved. I do have all data backed up. Perhaps you know where the SQL database files are stored on a QNAP, or how I can find out (QMariaDB)? I searched for that a lot but can’t find it, and phpMyAdmin also does not work for making a backup…
August 23, 2016 — 00:11
I would first try to figure out why the drives are not mounting. I don’t know how off the top of my head, but there will be logs you can extract and look at; find the errors, and then google them to see what they mean if you don’t understand them. If it is a bad disk or disks, go from there. Recovery will depend on your setup. The bad disk was only a guess on my part, since the sequence of commands I suggested first does a file system check; if that initially corrected the problem and then didn’t, it could be that a disk is failing. Again, all speculation until you look at the logs.
August 23, 2016 — 00:13
Also, you could make sure that the file system you are mounting is in fact ext4 and not, say, ext2.