Hi,
after the backup QNAP TS-859 Pro+ (firmware version 4.2) crashed without warning last week, the status of the logical volume shows as "unmounted".
Apart from removing and formatting it, nothing is possible anymore.
All 8 hard disks were fine up to that point.
dmesg says: md0 unknown partition table
Code
Intel(R) PRO/1000 Network Driver - 2.4.14-NAPI
[ 124.529672] e1000e: Copyright(c) 1999 - 2013 Intel Corporation.
[ 124.533336] e1000e 0000:02:00.0: Disabling ASPM L0s L1
[ 124.538405] e1000e 0000:02:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 124.543502] e1000e 0000:02:00.0: irq 46 for MSI/MSI-X
[ 124.543518] e1000e 0000:02:00.0: irq 47 for MSI/MSI-X
[ 124.543575] e1000e 0000:02:00.0: irq 48 for MSI/MSI-X
[ 124.652111] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:08:9b:c9:f4:f3
[ 124.656913] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection
[ 124.660723] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
[ 124.664392] e1000e 0000:03:00.0: Disabling ASPM L0s L1
[ 124.668153] e1000e 0000:03:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 124.671909] e1000e 0000:03:00.0: irq 49 for MSI/MSI-X
[ 124.671922] e1000e 0000:03:00.0: irq 50 for MSI/MSI-X
[ 124.671932] e1000e 0000:03:00.0: irq 51 for MSI/MSI-X
[ 124.774969] e1000e 0000:03:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:08:9b:c9:f4:f2
[ 124.779828] e1000e 0000:03:00.0: eth1: Intel(R) PRO/1000 Network Connection
[ 124.783675] e1000e 0000:03:00.0: eth1: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
[ 124.790815] jnl: driver (lke_9.2.0 QNAP, LBD=OFF) loaded at ffffffffa0189000
[ 124.799405] ufsd: module license 'Commercial product' taints kernel.
[ 124.803208] Disabling lock debugging due to kernel taint
[ 124.809974] ufsd: driver (lke_9.2.0 QNAP, build_host("BuildServer36"), acl, ioctl, bdi, sd2(0), fua, bz, rsrc) loaded at ffffffffa0194000
[ 124.809982] NTFS support included
[ 124.809985] Hfs+/HfsJ support included
[ 124.809988] optimized: speed
[ 124.809991] Build_for__QNAP_Atom_x86_64_k3.4.6_2014-09-17_lke_9.2.0_r245986_b9
[ 124.809995]
[ 124.866617] fnotify: Load file notify kernel module.
[ 125.934580] usbcore: registered new interface driver snd-usb-audio
[ 125.943206] usbcore: registered new interface driver snd-usb-caiaq
[ 125.957878] Linux video capture interface: v2.00
[ 125.978173] usbcore: registered new interface driver uvcvideo
[ 125.982746] USB Video Class driver (1.1.1)
[ 126.140063] 8021q: 802.1Q VLAN Support v1.8
[ 127.547545] 8021q: adding VLAN 0 to HW filter on device eth0
[ 127.634614] 8021q: adding VLAN 0 to HW filter on device eth1
[ 130.059045] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 139.162144] kjournald starting. Commit interval 5 seconds
[ 139.170258] EXT3-fs (md9): using internal journal
[ 139.174387] EXT3-fs (md9): mounted filesystem with ordered data mode
[ 142.801254] md: bind<sda2>
[ 142.806808] md/raid1:md8: active with 1 out of 1 mirrors
[ 142.810946] md8: detected capacity change from 0 to 542851072
[ 143.855216] md8: unknown partition table
[ 145.935515] Adding 530124k swap on /dev/md8. Priority:-1 extents:1 across:530124k
[ 149.107195] md: bind<sdb2>
[ 149.135291] RAID1 conf printout:
[ 149.135302] --- wd:1 rd:2
[ 149.135309] disk 0, wo:0, o:1, dev:sda2
[ 149.135317] disk 1, wo:1, o:1, dev:sdb2
[ 149.135431] md: recovery of RAID array md8
[ 149.139470] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 149.143557] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 149.148188] md: using 128k window, over a total of 530128k.
[ 151.242876] md: bind<sdc2>
[ 153.433864] md: bind<sdd2>
[ 155.574533] md: bind<sde2>
[ 157.712447] md: bind<sdf2>
[ 159.885867] md: bind<sdg2>
[ 161.474092] md: md8: recovery done.
[ 161.550513] RAID1 conf printout:
[ 161.550522] --- wd:2 rd:2
[ 161.550530] disk 0, wo:0, o:1, dev:sda2
[ 161.550537] disk 1, wo:0, o:1, dev:sdb2
[ 161.581633] RAID1 conf printout:
[ 161.581639] --- wd:2 rd:2
[ 161.581645] disk 0, wo:0, o:1, dev:sda2
[ 161.581650] disk 1, wo:0, o:1, dev:sdb2
[ 161.581654] RAID1 conf printout:
[ 161.581657] --- wd:2 rd:2
[ 161.581662] disk 0, wo:0, o:1, dev:sda2
[ 161.581667] disk 1, wo:0, o:1, dev:sdb2
[ 161.581670] RAID1 conf printout:
[ 161.581674] --- wd:2 rd:2
[ 161.581678] disk 0, wo:0, o:1, dev:sda2
[ 161.581683] disk 1, wo:0, o:1, dev:sdb2
[ 161.581687] RAID1 conf printout:
[ 161.581690] --- wd:2 rd:2
[ 161.581695] disk 0, wo:0, o:1, dev:sda2
[ 161.581700] disk 1, wo:0, o:1, dev:sdb2
[ 161.581703] RAID1 conf printout:
[ 161.581707] --- wd:2 rd:2
[ 161.581711] disk 0, wo:0, o:1, dev:sda2
[ 161.581716] disk 1, wo:0, o:1, dev:sdb2
[ 161.971923] md: bind<sdh2>
[ 161.998255] RAID1 conf printout:
[ 161.998264] --- wd:2 rd:2
[ 161.998272] disk 0, wo:0, o:1, dev:sda2
[ 161.998279] disk 1, wo:0, o:1, dev:sdb2
[ 161.998284] RAID1 conf printout:
[ 161.998289] --- wd:2 rd:2
[ 161.998296] disk 0, wo:0, o:1, dev:sda2
[ 161.998302] disk 1, wo:0, o:1, dev:sdb2
[ 161.998307] RAID1 conf printout:
[ 161.998312] --- wd:2 rd:2
[ 161.998317] disk 0, wo:0, o:1, dev:sda2
[ 161.998324] disk 1, wo:0, o:1, dev:sdb2
[ 161.998329] RAID1 conf printout:
[ 161.998333] --- wd:2 rd:2
[ 161.998339] disk 0, wo:0, o:1, dev:sda2
[ 161.998346] disk 1, wo:0, o:1, dev:sdb2
[ 161.998350] RAID1 conf printout:
[ 161.998355] --- wd:2 rd:2
[ 161.998361] disk 0, wo:0, o:1, dev:sda2
[ 161.998368] disk 1, wo:0, o:1, dev:sdb2
[ 161.998373] RAID1 conf printout:
[ 161.998378] --- wd:2 rd:2
[ 161.998383] disk 0, wo:0, o:1, dev:sda2
[ 161.998390] disk 1, wo:0, o:1, dev:sdb2
[ 162.007575] md: md0 stopped.
[ 162.020959] md: md0 stopped.
[ 162.217888] md: bind<sdb3>
[ 162.221654] md: bind<sdc3>
[ 162.225332] md: bind<sdd3>
[ 162.228956] md: bind<sde3>
[ 162.232512] md: bind<sdf3>
[ 162.236000] md: bind<sdg3>
[ 162.239427] md: bind<sdh3>
[ 162.242705] md: bind<sda3>
[ 162.246961] md/raid:md0: device sda3 operational as raid disk 0
[ 162.249880] md/raid:md0: device sdh3 operational as raid disk 7
[ 162.252619] md/raid:md0: device sdg3 operational as raid disk 6
[ 162.255320] md/raid:md0: device sdf3 operational as raid disk 5
[ 162.257989] md/raid:md0: device sde3 operational as raid disk 4
[ 162.260629] md/raid:md0: device sdd3 operational as raid disk 3
[ 162.263181] md/raid:md0: device sdc3 operational as raid disk 2
[ 162.265646] md/raid:md0: device sdb3 operational as raid disk 1
[ 162.288247] md/raid:md0: allocated 136320kB
[ 162.290746] md/raid:md0: raid level 5 active with 8 out of 8 devices, algorithm 2
[ 162.293252] RAID conf printout:
[ 162.293256] --- level:5 rd:8 wd:8
[ 162.293263] disk 0, o:1, dev:sda3
[ 162.293268] disk 1, o:1, dev:sdb3
[ 162.293273] disk 2, o:1, dev:sdc3
[ 162.293278] disk 3, o:1, dev:sdd3
[ 162.293283] disk 4, o:1, dev:sde3
[ 162.293288] disk 5, o:1, dev:sdf3
[ 162.293293] disk 6, o:1, dev:sdg3
[ 162.293298] disk 7, o:1, dev:sdh3
[ 162.293377] md0: detected capacity change from 0 to 20992903938048
[ 163.534105] md0: unknown partition table
[ 166.784616] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[ 166.784621] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[ 166.784624]
[ 167.380392] EXT4-fs (md0): ext4_check_descriptors: Checksum for group 111488 failed (52141!=20867)
[ 167.383164] EXT4-fs (md0): group descriptors corrupted!
[ 194.085066] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 234.595797] rule type=2, num=0
[ 234.758231] Loading iSCSI transport class v2.0-871.
[ 234.779460] iscsi: registered transport (tcp)
[ 234.799581] iscsid (7547): /proc/7547/oom_adj is deprecated, please use /proc/7547/oom_score_adj instead.
[ 244.976605] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 244.979216] NFSD: starting 90-second grace period
I have already read through various threads on this topic, but naturally I don't want to do anything wrong.
Is a simple "e2fsck_64 -y -C 0 /dev/md0" sufficient at this point?
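What I had in mind before committing to `-y` is a read-only dry run first, roughly like this (a sketch only; I'm assuming `dumpe2fs` is available in the QNAP shell alongside `e2fsck_64`, and that the volume stays unmounted):

```shell
# Read-only pass: -n answers "no" to every prompt, so nothing is written.
# This should only report what e2fsck would want to fix on md0.
e2fsck_64 -n -C 0 /dev/md0

# If the primary superblock itself is damaged, list the backup superblocks
# (their positions depend on the block size) so one could be passed via -b.
dumpe2fs /dev/md0 | grep -i superblock
```

Does that sound like a sensible order, or is the dry run unnecessary here?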
Many thanks!