Hello everyone,
Murphy's law has apparently struck with full force here: I am running a TS-419+ (firmware 4.1.0) with 4 x Samsung HD204UI in a RAID5 array.
Normally the NAS is connected to a UPS, but its battery currently needs replacing; normally two external drives for the backups are attached to the NAS as well... and normally our circuit breaker doesn't trip either.
Of course, exactly all of that happened at once during the "UPS maintenance", and now the RAID5 can no longer be rebuilt via the storage manager (Error (2): RAID recovery failed).
Does anyone have a tip on how I can restore the RAID5?
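From what I gather from the mdadm man page, the first step would probably be to compare the RAID superblocks (event counters / update times) of the four member partitions. This is only a sketch of what I was planning to run, read-only, with the device names taken from my fdisk output below; please correct me if this is the wrong approach:

# compare the md superblocks of the four md0 member partitions (read-only)
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# overview of which array the partitions claim to belong to
mdadm --examine --scan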
Here is some information from the console:
[~] # cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md4 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]
md13 : active raid1 sdc4[0] sdb4[3] sda4[2] sdd4[1]
      458880 blocks [4/4] [UUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdc1[0] sdb1[3] sda1[2] sdd1[1]
      530048 blocks [4/4] [UUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
[~] # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[~] # cat /etc/mtab
/proc /proc proc rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
sysfs /sys sysfs rw 0 0
tmpfs /tmp tmpfs rw,size=64M 0 0
none /proc/bus/usb usbfs rw 0 0
/dev/sda4 /mnt/ext ext3 rw 0 0
/dev/md9 /mnt/HDA_ROOT ext3 rw,data=ordered 0 0
tmpfs /.eaccelerator.tmp tmpfs rw,size=32M 0 0
fdisk shows the following:
[~] # fdisk -l

Disk /dev/mtdblock0: 0 MB, 524288 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock0 doesn't contain a valid partition table

Disk /dev/mtdblock1: 2 MB, 2097152 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock1 doesn't contain a valid partition table

Disk /dev/mtdblock2: 9 MB, 9437184 bytes
255 heads, 63 sectors/track, 1 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock2 doesn't contain a valid partition table

Disk /dev/mtdblock3: 3 MB, 3145728 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock3 doesn't contain a valid partition table

Disk /dev/mtdblock4: 0 MB, 262144 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock4 doesn't contain a valid partition table

Disk /dev/mtdblock5: 1 MB, 1310720 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock5 doesn't contain a valid partition table

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530125   83  Linux
/dev/sda2              67         132      530142   83  Linux
/dev/sda3             133      243138  1951945693   83  Linux
/dev/sda4          243139      243200      498012   83  Linux

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn't contain a valid partition table

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530125   83  Linux
/dev/sdb2              67         132      530142   83  Linux
/dev/sdb3             133      243138  1951945693   83  Linux
/dev/sdb4          243139      243200      498012   83  Linux

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1          66      530125   83  Linux
/dev/sdc2              67         132      530142   83  Linux
/dev/sdc3             133      243138  1951945693   83  Linux
/dev/sdc4          243139      243200      498012   83  Linux

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1          66      530125   83  Linux
/dev/sdd2              67         132      530142   83  Linux
/dev/sdd3             133      243138  1951945693   83  Linux
/dev/sdd4          243139      243200      498012   83  Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn't contain a valid partition table

Disk /dev/md4: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn't contain a valid partition table
[~] #
--- ModEdit ---
Here is some more information from dmesg (including "cannot start dirty degraded array."):
[~] # dmesg
t_rdev(sdb3)
[ 121.858236] md: unbind<sdd3>
[ 121.858334] md: export_rdev(sdd3)
[ 121.858364] md: unbind<sdc3>
[ 121.858427] md: export_rdev(sdc3)
[ 125.015745] md: md0 stopped.
[ 126.612749] md: bind<sdb2>
[ 126.712503] RAID1 conf printout:
[ 126.712511] --- wd:1 rd:2
[ 126.712518] disk 0, wo:0, o:1, dev:sda2
[ 126.712525] disk 1, wo:1, o:1, dev:sdb2
[ 126.713577] md: recovery of RAID array md4
[ 126.713775] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 126.713796] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 126.713823] md: using 128k window, over a total of 530048k.
[ 129.161062] md: bind<sdc2>
[ 129.482950] md: md0 stopped.
[ 129.494499] md: bind<sda3>
[ 129.494763] md: bind<sdc3>
[ 129.495005] md: bind<sdd3>
[ 129.495242] md: bind<sdb3>
[ 129.495355] md: kicking non-fresh sda3 from array!
[ 129.495380] md: unbind<sda3>
[ 129.495400] md: export_rdev(sda3)
[ 129.503272] md/raid:md0: not clean -- starting background reconstruction
[ 129.503321] md/raid:md0: device sdb3 operational as raid disk 1
[ 129.503340] md/raid:md0: device sdd3 operational as raid disk 3
[ 129.503357] md/raid:md0: device sdc3 operational as raid disk 2
[ 129.503374] NR_STRIPES is 4096 for total 128875 ram pages
[ 129.517473] md/raid:md0: allocated 67840kB
[ 129.517594] md/raid:md0: cannot start dirty degraded array.
[ 129.517654] RAID conf printout:
[ 129.517660] --- level:5 rd:4 wd:3
[ 129.517668] disk 1, o:1, dev:sdb3
[ 129.517675] disk 2, o:1, dev:sdc3
[ 129.517681] disk 3, o:1, dev:sdd3
[ 129.529389] md/raid:md0: failed to run raid set.
[ 129.529427] md: pers->run() failed ...
[ 129.545467] md: md0 stopped.
[ 129.545507] md: unbind<sdb3>
[ 129.545531] md: export_rdev(sdb3)
[ 129.545559] md: unbind<sdd3>
[ 129.545658] md: export_rdev(sdd3)
[ 129.545688] md: unbind<sdc3>
[ 129.545754] md: export_rdev(sdc3)
[ 131.712780] md: bind<sdd2>
[ 131.929384] md: md0 stopped.
[ 135.646821] md: md0 stopped.
[ 135.657919] md: bind<sda3>
[ 135.658182] md: bind<sdc3>
[ 135.658431] md: bind<sdd3>
[ 135.658683] md: bind<sdb3>
[ 135.658802] md: kicking non-fresh sda3 from array!
[ 135.658827] md: unbind<sda3>
[ 135.658848] md: export_rdev(sda3)
[ 135.665471] md/raid:md0: not clean -- starting background reconstruction
[ 135.665520] md/raid:md0: device sdb3 operational as raid disk 1
[ 135.665539] md/raid:md0: device sdd3 operational as raid disk 3
[ 135.665557] md/raid:md0: device sdc3 operational as raid disk 2
[ 135.665573] NR_STRIPES is 4096 for total 128875 ram pages
[ 135.679632] md/raid:md0: allocated 67840kB
[ 135.681649] md/raid:md0: cannot start dirty degraded array.
[ 135.681704] RAID conf printout:
[ 135.681711] --- level:5 rd:4 wd:3
[ 135.681719] disk 1, o:1, dev:sdb3
[ 135.681725] disk 2, o:1, dev:sdc3
[ 135.681732] disk 3, o:1, dev:sdd3
[ 135.693326] md/raid:md0: failed to run raid set.
[ 135.693363] md: pers->run() failed ...
[ 135.723600] md: md0 stopped.
[ 135.723640] md: unbind<sdb3>
[ 135.723665] md: export_rdev(sdb3)
[ 135.723797] md: unbind<sdd3>
[ 135.723821] md: export_rdev(sdd3)
[ 135.723845] md: unbind<sdc3>
[ 135.723915] md: export_rdev(sdc3)
[ 149.981003] eth0: stopped
[ 149.990075] eth1: stopped
[ 151.441907] eth0: link down
[ 151.441940] eth0: started
[ 151.570153] eth1: started
[ 153.874286] eth0: link up, full duplex, speed 1 Gbps
[ 160.653995] md: md4: recovery done.
[ 161.422583] RAID1 conf printout:
[ 161.422691] --- wd:2 rd:2
[ 161.422700] disk 0, wo:0, o:1, dev:sda2
[ 161.422707] disk 1, wo:0, o:1, dev:sdb2
[ 161.522568] RAID1 conf printout:
[ 161.522577] --- wd:2 rd:2
[ 161.522585] disk 0, wo:0, o:1, dev:sda2
[ 161.522592] disk 1, wo:0, o:1, dev:sdb2
[ 161.522597] RAID1 conf printout:
[ 161.522602] --- wd:2 rd:2
[ 161.522607] disk 0, wo:0, o:1, dev:sda2
[ 161.522614] disk 1, wo:0, o:1, dev:sdb2
[ 167.226402] eth0: stopped
[ 167.251137] eth0: link down
[ 167.251162] eth0: started
[ 167.310098] eth1: stopped
[ 167.332769] eth1: started
[ 169.557185] eth0: link up, full duplex, speed 1 Gbps
[ 190.398208] active port 0 :139
[ 190.398233] active port 1 :445
[ 190.398245] active port 2 :20
[ 190.915815] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[ 200.045573] EXT2-fs (mtdblock5): warning: mounting unchecked fs, running e2fsck is recommended
[ 204.698881] warning: process `pic_raw' used the deprecated sysctl system call with 8.1.2.
[ 207.683441] rule type=2, num=0
[ 207.846571] MAC:00:08:9B:C2:D6:4F
[ 207.850234] eth0: link down
[ 207.850250] page 3, reg 18 = 4a85
[ 207.851414] eth0: link up, full duplex, speed 1 Gbps
[ 207.851714] WOL enable
[ 207.874900] MAC:00:08:9B:C2:D6:50
[ 207.875510] page 3, reg 18 = 4a85
[ 207.876727] WOL enable
[ 213.733218] Loading iSCSI transport class v2.0-871.
[ 214.047289] iscsi: registered transport (tcp)
[ 214.730458] iscsid (6551): /proc/6551/oom_adj is deprecated, please use /proc/6551/oom_score_adj instead.
[10333.495435] md: md0 stopped.
[10333.508541] md: bind<sda3>
[10333.508809] md: bind<sdc3>
[10333.509062] md: bind<sdd3>
[10333.509314] md: bind<sdb3>
[10333.509426] md: kicking non-fresh sda3 from array!
[10333.509451] md: unbind<sda3>
[10333.509471] md: export_rdev(sda3)
[10333.520404] md/raid:md0: not clean -- starting background reconstruction
[10333.520452] md/raid:md0: device sdb3 operational as raid disk 1
[10333.520471] md/raid:md0: device sdd3 operational as raid disk 3
[10333.520489] md/raid:md0: device sdc3 operational as raid disk 2
[10333.520505] NR_STRIPES is 4096 for total 128875 ram pages
[10333.534702] md/raid:md0: allocated 67840kB
[10333.538789] md/raid:md0: cannot start dirty degraded array.
[10333.538852] RAID conf printout:
[10333.538859] --- level:5 rd:4 wd:3
[10333.538867] disk 1, o:1, dev:sdb3
[10333.538873] disk 2, o:1, dev:sdc3
[10333.538879] disk 3, o:1, dev:sdd3
[10333.550410] md/raid:md0: failed to run raid set.
[10333.550448] md: pers->run() failed ...
[10336.052778] md: md0 stopped.
[10336.052817] md: unbind<sdb3>
[10336.052841] md: export_rdev(sdb3)
[10336.052979] md: unbind<sdd3>
[10336.053003] md: export_rdev(sdd3)
[10336.053027] md: unbind<sdc3>
[10336.053096] md: export_rdev(sdc3)
[10340.980709] md: md0 stopped.
[10341.025119] md: md0 stopped.
[11195.440255] md: md0 stopped.
[11195.446537] md: bind<sda3>
[11195.446808] md: bind<sdc3>
[11195.447068] md: bind<sdd3>
[11195.447315] md: bind<sdb3>
[11195.447430] md: kicking non-fresh sda3 from array!
[11195.447454] md: unbind<sda3>
[11195.447474] md: export_rdev(sda3)
[11195.465210] md/raid:md0: not clean -- starting background reconstruction
[11195.465259] md/raid:md0: device sdb3 operational as raid disk 1
[11195.465277] md/raid:md0: device sdd3 operational as raid disk 3
[11195.465295] md/raid:md0: device sdc3 operational as raid disk 2
[11195.465311] NR_STRIPES is 4096 for total 128875 ram pages
[11195.479395] md/raid:md0: allocated 67840kB
[11195.492506] md/raid:md0: cannot start dirty degraded array.
[11195.492579] RAID conf printout:
[11195.492586] --- level:5 rd:4 wd:3
[11195.492594] disk 1, o:1, dev:sdb3
[11195.492600] disk 2, o:1, dev:sdc3
[11195.492607] disk 3, o:1, dev:sdd3
[11195.504147] md/raid:md0: failed to run raid set.
[11195.504184] md: pers->run() failed ...
[11197.903933] md: md0 stopped.
[11197.903971] md: unbind<sdb3>
[11197.903995] md: export_rdev(sdb3)
[11197.904124] md: unbind<sdd3>
[11197.904148] md: export_rdev(sdd3)
[11197.904173] md: unbind<sdc3>
[11197.904240] md: export_rdev(sdc3)
[11202.832354] md: md0 stopped.
[11202.887666] md: md0 stopped.
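For what it's worth: from other threads about the "cannot start dirty degraded array" message I pieced together the following mdadm sequence. I have NOT run it yet and would be grateful if someone could confirm or correct it first (sda3 is the member that dmesg kicks as "non-fresh"):

# stop any half-assembled md0
mdadm --stop /dev/md0

# force-assemble and start the array degraded from the three fresh members
mdadm --assemble --force --run /dev/md0 /dev/sdb3 /dev/sdc3 /dev/sdd3

# if md0 comes up, do a read-only filesystem check before mounting anything
e2fsck -n /dev/md0

# only once the data is confirmed readable, re-add the stale disk so it resyncs
mdadm /dev/md0 --add /dev/sda3

Is this the sensible way to do it on the QNAP, or should the rebuild be left entirely to the storage manager?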