Hello,
One of my disks failed: drive 4, a Seagate ST3000DM001 3TB.
I replaced it with a WD40EFRX, after which the RAID resync started.
After about 2 hours, device 1 was kicked out with a read error and nothing worked any more.
Unfortunately, my backup is not entirely complete.
I have now copied disk 1 with ddrescue onto an identical model (rough command further below),
replaced disk 1 with the copy, and restarted the NAS.
The RAID cannot be activated.
Can anyone help me?
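For reference, the ddrescue clone of disk 1 was made roughly along these lines; the device names and the map file path are only placeholders from memory, not the exact ones I typed:

# Clone the failing disk 1 onto the identical spare, run from a separate Linux system.
# -f forces writing to a block device, -n skips the slow scraping of bad areas on the
# first pass; the map file lets the copy be resumed later for the remaining bad sectors.
ddrescue -f -n /dev/sdX /dev/sdY /root/disk1_rescue.map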
Here are various logs and the exact config details:
Original config:
Device 1: WD30EFRX - SMART status: good
Device 2: WD30EFRX - SMART status: good
Device 3: ST3000DM001 - SMART status: good
Device 4: ST3000DM001, failed -> replaced with WD40EFRX
RAID 5
Volume encryption: no
QNAP TS-459-II
Firmware: 4.1.3 build 20150408
[~] # dmesg
_QNAP_Atom_x86_64_k3.4.6_2014-09-17_lke_9.2.0_r245986_b9
[ 63.541366]
[ 63.600301] xhci_hcd 0000:05:00.0: xHCI Host Controller
[ 63.604559] xhci_hcd 0000:05:00.0: new USB bus registered, assigned bus number 9
[ 63.609388] xhci_hcd 0000:05:00.0: irq 17, io mem 0xfebfe000
[ 63.613638] xhci_hcd 0000:05:00.0: irq 55 for MSI/MSI-X
[ 63.613650] xhci_hcd 0000:05:00.0: irq 56 for MSI/MSI-X
[ 63.613660] xhci_hcd 0000:05:00.0: irq 57 for MSI/MSI-X
[ 63.613671] xhci_hcd 0000:05:00.0: irq 58 for MSI/MSI-X
[ 63.613681] xhci_hcd 0000:05:00.0: irq 59 for MSI/MSI-X
[ 63.618498] xHCI xhci_add_endpoint called for root hub
[ 63.618506] xHCI xhci_check_bandwidth called for root hub
[ 63.618868] hub 9-0:1.0: USB hub found
[ 63.624262] hub 9-0:1.0: 2 ports detected
[ 63.629917] xhci_hcd 0000:05:00.0: xHCI Host Controller
[ 63.634646] xhci_hcd 0000:05:00.0: new USB bus registered, assigned bus number 10
[ 63.640185] xHCI xhci_add_endpoint called for root hub
[ 63.640194] xHCI xhci_check_bandwidth called for root hub
[ 63.640838] hub 10-0:1.0: USB hub found
[ 63.646514] hub 10-0:1.0: 2 ports detected
[ 63.668911] fnotify: Load file notify kernel module.
[ 64.747843] usbcore: registered new interface driver snd-usb-audio
[ 64.755823] usbcore: registered new interface driver snd-usb-caiaq
[ 64.766988] Linux video capture interface: v2.00
[ 64.787645] usbcore: registered new interface driver uvcvideo
[ 64.791779] USB Video Class driver (1.1.1)
[ 64.942072] 8021q: 802.1Q VLAN Support v1.8
[ 66.423744] 8021q: adding VLAN 0 to HW filter on device eth0
[ 66.510503] 8021q: adding VLAN 0 to HW filter on device eth1
[ 68.857047] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 79.628287] kjournald starting. Commit interval 5 seconds
[ 79.636205] EXT3-fs (md9): using internal journal
[ 79.640039] EXT3-fs (md9): mounted filesystem with ordered data mode
[ 80.004026] md: bind<sda1>
[ 80.021000] RAID1 conf printout:
[ 80.021025] --- wd:1 rd:4
[ 80.021034] disk 0, wo:1, o:1, dev:sda1
[ 80.021041] disk 1, wo:0, o:1, dev:sdb1
[ 80.021160] md: recovery of RAID array md9
[ 80.025025] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 80.028839] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 80.032797] md: using 128k window, over a total of 530048k.
[ 80.788624] md: md9: recovery done.
[ 80.820970] RAID1 conf printout:
[ 80.820979] --- wd:2 rd:4
[ 80.820986] disk 0, wo:0, o:1, dev:sda1
[ 80.820991] disk 1, wo:0, o:1, dev:sdb1
[ 85.126593] md: bind<sdc1>
[ 85.170512] RAID1 conf printout:
[ 85.170520] --- wd:2 rd:4
[ 85.170527] disk 0, wo:0, o:1, dev:sda1
[ 85.170532] disk 1, wo:0, o:1, dev:sdb1
[ 85.170537] disk 2, wo:1, o:1, dev:sdc1
[ 85.170650] md: recovery of RAID array md9
[ 85.174510] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 85.178297] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 85.182225] md: using 128k window, over a total of 530048k.
[ 86.050276] md: md9: recovery done.
[ 86.103813] RAID1 conf printout:
[ 86.103822] --- wd:3 rd:4
[ 86.103830] disk 0, wo:0, o:1, dev:sda1
[ 86.103837] disk 1, wo:0, o:1, dev:sdb1
[ 86.103844] disk 2, wo:0, o:1, dev:sdc1
[ 94.432576] md: bind<sda2>
[ 94.438064] md/raid1:md4: active with 1 out of 1 mirrors
[ 94.441959] md4: detected capacity change from 0 to 542851072
[ 95.453832] md4: unknown partition table
[ 97.541961] Adding 530124k swap on /dev/md4. Priority:-1 extents:1 across:530124k
[ 101.522273] md: bind<sdb2>
[ 101.548888] RAID1 conf printout:
[ 101.548898] --- wd:1 rd:2
[ 101.548906] disk 0, wo:0, o:1, dev:sda2
[ 101.548913] disk 1, wo:1, o:1, dev:sdb2
[ 101.549061] md: recovery of RAID array md4
[ 101.552834] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 101.556458] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 101.560225] md: using 128k window, over a total of 530128k.
[ 103.927272] md: bind<sdc2>
[ 110.043801] md: md0 stopped.
[ 110.056328] md: md0 stopped.
[ 110.272816] md: bind<sdb3>
[ 110.276551] md: bind<sdc3>
[ 110.280203] md: bind<sda3>
[ 110.284934] md/raid:md0: not clean -- starting background reconstruction
[ 110.288793] md/raid:md0: device sda3 operational as raid disk 0
[ 110.292328] md/raid:md0: device sdc3 operational as raid disk 2
[ 110.295799] md/raid:md0: device sdb3 operational as raid disk 1
[ 110.299215] NR_STRIPES is 4096 for total 253173 ram pages
[ 110.314764] md/raid:md0: allocated 68992kB
[ 110.318282] md/raid:md0: cannot start dirty degraded array.
[ 110.321728] RAID conf printout:
[ 110.321735] --- level:5 rd:4 wd:3
[ 110.321743] disk 0, o:1, dev:sda3
[ 110.321749] disk 1, o:1, dev:sdb3
[ 110.321756] disk 2, o:1, dev:sdc3
[ 110.330480] md/raid:md0: failed to run raid set.
[ 110.333876] md: pers->run() failed ...
[ 111.346463] md: md0 stopped.
[ 111.349840] md: unbind<sda3>
[ 111.358026] md: export_rdev(sda3)
[ 111.361494] md: unbind<sdc3>
[ 111.370021] md: export_rdev(sdc3)
[ 111.373349] md: unbind<sdb3>
[ 111.382024] md: export_rdev(sdb3)
[ 113.469813] md: md4: recovery done.
[ 113.521687] RAID1 conf printout:
[ 113.521696] --- wd:2 rd:2
[ 113.521703] disk 0, wo:0, o:1, dev:sda2
[ 113.521708] disk 1, wo:0, o:1, dev:sdb2
[ 113.543879] RAID1 conf printout:
[ 113.543884] --- wd:2 rd:2
[ 113.543890] disk 0, wo:0, o:1, dev:sda2
[ 113.543895] disk 1, wo:0, o:1, dev:sdb2
[ 113.646041] md: md0 stopped.
[ 116.943267] md: md0 stopped.
[ 119.042115] md: md0 stopped.
[ 122.226427] md: md0 stopped.
[ 150.053063] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[ 166.959899] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[ 181.736905] EXT4-fs (sds1): warning: maximal mount count reached, running e2fsck is recommended
[ 181.741687] ext4_init_reserve_inode_table0: sds1, 15
[ 181.744909] ext4_init_reserve_inode_table2: sds1, 15, 0, 0, 4096
[ 181.748084] EXT4-fs (sds1): recovery complete
[ 181.752066] EXT4-fs (sds1): mounted filesystem with ordered data mode. Opts: (null)
[ 182.848363] rule type=2, num=0
[ 183.030633] Loading iSCSI transport class v2.0-871.
[ 183.058325] iscsi: registered transport (tcp)
[ 183.079008] iscsid (8195): /proc/8195/oom_adj is deprecated, please use /proc/8195/oom_score_adj instead.
[ 186.352499] PPP generic driver version 2.4.2
[ 186.368463] PPP MPPE Compression module registered
[ 186.374547] PPP BSD Compression module registered
[ 186.380975] PPP Deflate Compression module registered
[ 186.399211] nf_conntrack version 0.5.0 (7911 buckets, 31644 max)
[ 186.417933] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 189.719087] Set msys_nodify as 0
[ 194.786438] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 194.790018] NFSD: starting 90-second grace period
[ 204.078948] nfsd: last server has exited, flushing export cache
[ 213.072982] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 213.076911] NFSD: starting 90-second grace period
[ 1634.572219] nfsd: last server has exited, flushing export cache
[ 1665.319839] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 1665.323999] NFSD: starting 90-second grace period
[ 1676.940592] PPP generic driver version 2.4.2
[ 1676.951639] PPP MPPE Compression module registered
[ 1676.962826] PPP BSD Compression module registered
[ 1676.972902] PPP Deflate Compression module registered
[ 1679.799079] Set msys_nodify as 0
[ 2151.896942] nfsd: last server has exited, flushing export cache
[~] #
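The key line for me seems to be "md/raid:md0: cannot start dirty degraded array." Before touching the array itself, I would like to compare the md superblocks of the three remaining members (update time, event counter, array state). As far as I know the following is read-only; is that the right way to look at them?

# Inspect the md superblocks of the three remaining data partitions (read-only)
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3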
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md4 : active raid1 sdc2[3](S) sdb2[2] sda2[0]
      530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sda4[0] sdc4[2] sdb4[1]
      458880 blocks [4/3] [UUU_]
      bitmap: 49/57 pages [196KB], 4KB chunk

md9 : active raid1 sdc1[2] sda1[0] sdb1[1]
      530048 blocks [4/3] [UUU_]
      bitmap: 36/65 pages [144KB], 4KB chunk

unused devices: <none>
[~] #
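Since md0 does not show up in /proc/mdstat at all, my understanding is that the data array would have to be assembled by hand from the three remaining members (RAID 5 with one disk missing). Would a forced assembly along these lines be the right next step, or is that already too risky? I have not run it yet; the device names are taken from the dmesg output above.

# Make sure no half-assembled md0 is lying around, then force-assemble
# the degraded RAID 5 from the three surviving members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3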
[~] # e2fsck /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
e2fsck: Invalid argument while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
[~] #
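My assumption is that e2fsck fails here simply because md0 is not assembled at all, not because the filesystem superblock itself is gone. Once md0 exists again I would check it read-only first and only fall back to a backup superblock if the primary one is really reported as damaged, roughly like this (32768 is only the usual backup location for a 4 KiB block size, not verified for my volume):

# Read-only check first, no changes written to the filesystem
e2fsck -n /dev/md0
# Only if the primary superblock is really damaged, try a backup copy
e2fsck -b 32768 /dev/md0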
Many thanks for your feedback.
Regards
labnet