Hi everyone,
I have a fairly serious problem.
In a TS-469U running firmware 4.20, configured as RAID 5 with a hot spare, a disk apparently failed and the QNAP became unreachable.
After powering it off and back on, the QNAP came up with these messages:
Code
"602","Warning","2018-11-26","21:51:59","System","","localhost","[RAID5 Disk Volume: Host Drive: 2 3 4] The file system is not clean. It is suggested that you go to [Storage Manager] to run "Check File System"."
"603","Error","2018-11-26","21:52:01","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Volume is unmounted."
I ran the check, and then:
Code
"606","Information","2018-11-26","22:12:46","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Started examination."
"607","Error","2018-11-26","22:12:49","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Examination failed."
Next idea: a reboot might help ...
Code
"608","Information","2018-11-27","00:00:46","admin","10.0.1.20","---","[Power Management] System restarting."
"609","Information","2018-11-27","00:01:22","System","","localhost","System was shut down on Tue Nov 27 00:01:22 CET 2018."
"610","Information","2018-11-27","00:03:40","System","","localhost","System started."
"611","Warning","2018-11-27","00:04:43","System","","localhost","[RAID5 Disk Volume: Host Drive: 2 3 4] The file system is not clean. It is suggested that you go to [Storage Manager] to run "Check File System"."
"612","Error","2018-11-27","00:04:45","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Volume is unmounted."
So, another file system check ...
Code
"613","Information","2018-11-27","00:13:42","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Started examination."
"614","Information","2018-11-27","02:22:06","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Examination completed."
"615","Information","2018-11-27","02:22:11","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Start ext4lazyinit."
"616","Information","2018-11-27","02:28:51","System","","localhost","[RAID5 Disk Volume Host Drive: 2 3 4] Ext4lazyinit completed."
This morning the volume could be mounted again, but everything is gone ...
only 2.1 TB left in lost+found ...
What else can be done?
Or rather: how do you sensibly get the data out of lost+found again?
Is professional data recovery an option?
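One common way to make sense of lost+found is to sort the recovered files by their magic bytes, since fsck strips the original names. Below is a minimal, hypothetical sketch of that idea (the signature table is deliberately tiny and would need extending; `triage` and `guess_extension` are made-up names, not QNAP tools). Run it on a copy of the data, never on the only remaining copy.

```python
# Hypothetical triage sketch: group files recovered into lost+found
# by their magic bytes so they can be identified in bulk.
# The signature table is incomplete -- extend it for your own data.
import os
import shutil

SIGNATURES = {
    b"\xff\xd8\xff": ".jpg",
    b"\x89PNG\r\n\x1a\n": ".png",
    b"%PDF": ".pdf",
    b"PK\x03\x04": ".zip",  # also docx/xlsx/odt containers
}

def guess_extension(path):
    """Return a file extension based on the first bytes, or None."""
    with open(path, "rb") as fh:
        head = fh.read(16)
    for magic, ext in SIGNATURES.items():
        if head.startswith(magic):
            return ext
    return None

def triage(lostfound_dir, out_dir):
    """Copy identifiable files into out_dir, grouped by detected type.

    Returns the number of files that could be classified."""
    moved = 0
    for root, _dirs, files in os.walk(lostfound_dir):
        for name in files:
            src = os.path.join(root, name)
            ext = guess_extension(src)
            if ext is None:
                continue  # unknown type: leave it in lost+found
            bucket = os.path.join(out_dir, ext.lstrip("."))
            os.makedirs(bucket, exist_ok=True)
            shutil.copy2(src, os.path.join(bucket, name + ext))
            moved += 1
    return moved
```

In practice `file -i` / libmagic knows far more formats than this table; the sketch only shows the principle.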
A few more details:
The failed disk has not been replaced yet ...
Code
"596","Error","2018-11-24","00:22:08","System","","localhost","[RAID5 Disk Volume Host Drive: 1 2 3 Host Hot Spare Drive: 4] Host: Drive1 failed."
"597","Information","2018-11-24","00:22:18","System","","localhost","[RAID Group 0] Started data migration from Drive 1 to Drive 4"
"598","Information","2018-11-24","12:11:25","System","","localhost","[RAID Group 0] Data migration done from Drive 1 to Drive 4"
Code
[admin@qnap-01 ~]# md_checker
Welcome to MD superblock checker - have a nice day~
Scanning system...
HAL firmware detected!
Scanning Enclosure 0...
RAID metadata found!
UUID: f9109f99:25113218:4e8f1fed:a74a4472
Level: raid5
Devices: 3
Name: md0
Chunk Size: 64K
md Version: 1.0
Creation Time: Nov 21 00:37:18 2013
Status: ONLINE (md0) [UUU]
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 Active Nov 24 00:21:03 2018 45677 AAA
4 /dev/sdd3 0 Active Nov 27 11:17:47 2018 67480 AAA
2 /dev/sdb3 1 Active Nov 27 11:17:47 2018 67480 AAA
3 /dev/sdc3 2 Active Nov 27 11:17:47 2018 67480 AAA
===============================================================================
WARNING: Duplicate device detected for #(0)!
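The "Duplicate device detected for #(0)" warning fits the log above: sda3 failed on Nov 24 and still carries an old superblock claiming slot 0, while the hot spare sdd3 took that slot over. md resolves such conflicts by the event counter, keeping the member whose superblock is newest. A small illustrative sketch of that comparison (the tuples are copied from the md_checker output; `pick_active` is my own name, not an mdadm function):

```python
# Illustration of how a stale superblock loses against a newer one:
# for each RAID slot, the member with the highest event count wins.
# (device, slot, events) copied from the md_checker output.
members = [
    ("/dev/sda3", 0, 45677),  # old data disk, failed Nov 24
    ("/dev/sdd3", 0, 67480),  # hot spare that took over slot 0
    ("/dev/sdb3", 1, 67480),
    ("/dev/sdc3", 2, 67480),
]

def pick_active(members):
    """For each slot, keep the device with the newest event count."""
    best = {}
    for dev, slot, events in members:
        if slot not in best or events > best[slot][1]:
            best[slot] = (dev, events)
    return {slot: dev for slot, (dev, _) in sorted(best.items())}

print(pick_active(members))
# {0: '/dev/sdd3', 1: '/dev/sdb3', 2: '/dev/sdc3'}
```

So the array itself assembled correctly from sdd3/sdb3/sdc3; the warning is only about the leftover metadata on the failed sda3.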
Code
[admin@qnap-01 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdd3[3] sdc3[2] sdb3[1]
5857395072 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdc4[0] sda4[3](S) sdd4[2] sdb4[1]
458880 blocks [3/3] [UUU]
bitmap: 2/57 pages [8KB], 4KB chunk
md9 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
530048 blocks [4/4] [UUUU]
bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
Code
[admin@qnap-01 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Thu Nov 21 00:37:18 2013
Raid Level : raid5
Array Size : 5857395072 (5586.05 GiB 5997.97 GB)
Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Nov 27 11:17:47 2018
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 0
UUID : f9109f99:25113218:4e8f1fed:a74a4472
Events : 67480
Number Major Minor RaidDevice State
3 8 51 0 active sync /dev/sdd3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
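For what it's worth, the sizes mdadm reports are self-consistent: a RAID 5 over n disks stores n − 1 disks' worth of data, and indeed Used Dev Size × 2 equals the Array Size. That suggests the array geometry is intact and the damage is at the file system level, not the RAID level.

```python
# Sanity check of the sizes from mdadm --detail above:
# RAID 5 with n members holds (n - 1) members' worth of data.
used_dev_size = 2928697536   # "Used Dev Size" in 1 KiB blocks
raid_devices = 3
array_size = used_dev_size * (raid_devices - 1)
print(array_size)  # 5857395072, matching "Array Size"
```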
That's all I know, unfortunately; I hope someone has an idea.
Thanks in advance.
Regards
Marc