Hi
Hello everyone on the forum. Unfortunately, a problem brings me here.
It's a TS-869 Pro.
RAID 6 with 5 drives:
Seagate ST3000DM001-9YN1CC4B (3 TB), i.e. listed in the hardware compatibility list and rated for 24/7 operation.
The unit is well cooled and connected to a UPS.
I just received an email (I have automatic notifications enabled):
Level: Warning
Raid6 Disk Volume: Drive 1 2 3 4 5 Rebuilding skipped.
At the same time, the NFS share was no longer online.
I have now rebooted the unit. The NFS share is back online, but only read-only :cry:
In the log I see the following entries:
LOG:
325  2013-05-05 19:50:29  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5] Rebuilding skipped.
324  2013-05-05 19:50:14  System  127.0.0.1  localhost  Lan 2 link is Up.
323  2013-05-05 19:48:29  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5 Hot Spare Disk: 5] Mount the file system read-only.
322  2013-05-05 19:48:26  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5 Hot Spare Disk: 5] Mount the file system read-only.
321  2013-05-05 19:48:19  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5] Drive 5 added into the volume.
320  2013-05-05 19:48:19  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5] Drive 4 added into the volume.
319  2013-05-05 19:48:06  System  127.0.0.1  localhost  System started.
318  2013-05-05 19:46:14  System  127.0.0.1  localhost  System was shut down on Sun May 5 19:46:14 CEST 2013.
317  2013-05-05 19:43:42  admin  192.168.0.4  ---  [Power Management] System will be restart now.
316  2013-05-05 17:39:09  System  127.0.0.1  localhost  [RAID6 Disk Volume: Drive 1 2 3 4 5] Error occurred while accessing the devices of the volume in degraded mode.
The SMART status of all drives is OK, and so are the temperatures.
Under RAID Management all I see is:
RAID 6 Disk Volume: Drive 1 2 3 5
8313.05 GB / No / In degraded mode (Read only), Failed Drive(s): 2
No operation can be executed for this drive configuration.
So at the moment I can't do anything, and the rebuild isn't running.
How do I even find out which two disks are supposed to be defective?
Or can I somehow kick off the rebuild manually? I really can't click anything at all in the RAID menu...
cat /proc/mdstat
[/] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active (read-only) raid6 sde3[4](S) sdd3[3](F) sda3[0] sdc3[2] sdb3[1]
      8786092608 blocks super 1.0 level 6, 64k chunk, algorithm 2 [5/3] [UUU__]
md8 : active raid1 sde2[2](S) sdd2[3](S) sdc2[4](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]
md13 : active raid1 sda4[0] sde4[4] sdd4[3] sdc4[2] sdb4[1]
      458880 blocks [8/5] [UUUUU___]
      bitmap: 40/57 pages [160KB], 4KB chunk
md9 : active raid1 sda1[0] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      530048 blocks [8/5] [UUUUU___]
      bitmap: 56/65 pages [224KB], 4KB chunk
unused devices: <none>
[/] #
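If I read the mdstat output correctly, sdd3 is flagged as failed (F) and sde3 is only listed as a spare (S), so md0 is currently running on just 3 of its 5 members ([UUU__]). Before doing anything I would probably look at the array and at the superblocks of the two kicked members with mdadm. This is only a sketch of what I would try, assuming mdadm can be run like this on the QNAP shell (device names taken from the mdstat output above):

# overall array state: which slots are active, failed or spare
mdadm --detail /dev/md0

# superblock / event counter of the two members that were kicked
mdadm --examine /dev/sdd3
mdadm --examine /dev/sde3

If the event counters of sdd3/sde3 are only slightly behind the other members, that would at least tell me they were dropped from the array rather than being physically dead.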
dmesg
[ 59.735986] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 59.741130] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 62.638164] md: bind<sda2>
[ 62.639463] raid1: raid set md8 active with 1 out of 1 mirrors
[ 62.639609] md8: detected capacity change from 0 to 542769152
[ 63.643872] md8: unknown partition table
[ 65.763665] Adding 530040k swap on /dev/md8. Priority:-1 extents:1 across:530040k
[ 69.458686] md: bind<sdb2>
[ 69.474542] RAID1 conf printout:
[ 69.474661] --- wd:1 rd:2
[ 69.474771] disk 0, wo:0, o:1, dev:sda2
[ 69.474876] disk 1, wo:1, o:1, dev:sdb2
[ 69.475048] md: recovery of RAID array md8
[ 69.475154] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 69.475276] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 69.475435] md: using 128k window, over a total of 530048 blocks.
[ 71.796529] md: bind<sdc2>
[ 73.259712] md: md0 stopped.
[ 73.267402] md: md0 stopped.
[ 73.350682] md: bind<sdb3>
[ 73.350986] md: bind<sdc3>
[ 73.351258] md: bind<sdd3>
[ 73.351536] md: bind<sde3>
[ 73.351812] md: bind<sda3>
[ 73.351944] md: kicking non-fresh sde3 from array!
[ 73.352066] md: unbind<sde3>
[ 73.358970] md: export_rdev(sde3)
[ 73.359094] md: kicking non-fresh sdd3 from array!
[ 73.359208] md: unbind<sdd3>
[ 73.366968] md: export_rdev(sdd3)
[ 73.368141] raid5: device sda3 operational as raid disk 0
[ 73.368253] raid5: device sdc3 operational as raid disk 2
[ 73.368364] raid5: device sdb3 operational as raid disk 1
[ 73.378256] raid5: allocated 85344kB for md0
[ 73.378454] 0: w=1 pa=0 pr=5 m=2 a=2 r=5 op1=0 op2=0
[ 73.378567] 2: w=2 pa=0 pr=5 m=2 a=2 r=5 op1=0 op2=0
[ 73.378677] 1: w=3 pa=0 pr=5 m=2 a=2 r=5 op1=0 op2=0
[ 73.378787] raid5: raid level 6 set md0 active with 3 out of 5 devices, algorithm 2
[ 73.378953] RAID5 conf printout:
[ 73.379072] --- rd:5 wd:3
[ 73.379182] disk 0, o:1, dev:sda3
[ 73.379290] disk 1, o:1, dev:sdb3
[ 73.379397] disk 2, o:1, dev:sdc3
[ 73.379563] md0: detected capacity change from 0 to 8996958830592
[ 73.990888] md: bind<sdd2>
[ 74.417119] md0: unknown partition table
[ 74.421583] md: bind<sdd3>
[ 74.443103] RAID5 conf printout:
[ 74.443221] --- rd:5 wd:3
[ 74.443332] disk 0, o:1, dev:sda3
[ 74.443443] disk 1, o:1, dev:sdb3
[ 74.443552] disk 2, o:1, dev:sdc3
[ 74.443662] disk 3, o:1, dev:sdd3
[ 74.443848] md: delaying recovery of md0 until md8 has finished (they share one or more physical units)
[ 74.548015] md: bind<sde3>
[ 76.125120] md: bind<sde2>
[ 76.814204] EXT4-fs (md0): mounted filesystem with ordered data mode
[ 77.744250] raid5: Disk failure on sdd3, disabling device.
[ 77.744254] raid5: Operation continuing on 3 devices.
[ 79.747986] md: cannot remove active disk sdd3 from md0 ...
[ 80.084907] md: recovery skipped: md0
[ 80.111057] md: md0 switched to read-only mode.
[ 81.159637] EXT4-fs (md0): mounted filesystem with ordered data mode
[ 84.637450] EXT4-fs (md0): mounted filesystem with ordered data mode
[ 88.632116] md: md8: recovery done.
[ 88.790233] RAID1 conf printout:
[ 88.790350] --- wd:2 rd:2
[ 88.790458] disk 0, wo:0, o:1, dev:sda2
[ 88.790566] disk 1, wo:0, o:1, dev:sdb2
[ 100.509432] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 100.611423] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 112.037711] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 112.163702] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 116.561091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 116.561303] NFSD: starting 90-second grace period
[ 130.085725] active port 0 :139
[ 130.085844] active port 1 :445
[ 130.085951] active port 2 :20
[ 130.143883] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[ 132.199858] nfsd: last server has exited, flushing export cache
[ 137.592502] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 137.592690] NFSD: starting 90-second grace period
[ 178.957784] rule type=2, num=0
[ 179.808694] Loading iSCSI transport class v2.0-871.
[ 179.845512] iscsi: registered transport (tcp)
[ 183.056717] nfsd: last server has exited, flushing export cache
[ 188.564349] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 188.564538] NFSD: starting 90-second grace period
[ 202.069448] nfsd: last server has exited, flushing export cache
[ 207.503684] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 207.503897] NFSD: starting 90-second grace period
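From the dmesg output it looks like sdd3 and sde3 were kicked as "non-fresh" during assembly, sdd3 was marked as failed again right after being re-added ("raid5: Disk failure on sdd3"), and the rebuild was then skipped because md0 was switched to read-only. If the drives themselves turn out to be healthy, I assume a manual rebuild would go roughly like this, but I would rather have confirmation before touching a degraded RAID 6 by hand (only a sketch, device names as in the mdstat output above):

# put md0 back into read-write mode so a recovery can run at all
mdadm --readwrite /dev/md0

# take the failed member out and add it back so it gets rebuilt from parity
mdadm /dev/md0 --remove /dev/sdd3
mdadm /dev/md0 --add /dev/sdd3

# watch the recovery progress
cat /proc/mdstat

Is that the right direction, or will the QNAP firmware get in the way of doing this manually?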
Many thanks in advance for any tip.
Best regards