Thanks David.
Good to hear. As I said, the drives are ordered, so it will probably still take a while until they arrive.
Well, apparently that really wasn't exactly what you told me to do.
I took drive 3 out and booted without it, then put drive 3 back in, booted again, and hit Check in the GUI.
Yeah, that wasn't very smart; I should have listened to you. But I've now ordered three new drives and an external 3 TB eSATA drive.
[/proc] # mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 8c52ae2b:6d84edde:adc0cbf6:9c65352f
  Creation Time : Tue Jul 13 06:43:45 2010
     Raid Level : raid5
  Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
     Array Size : 2927139200 (2791.54 GiB 2997.39 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Update Time : Thu Mar 17 01:53:45 2011
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
       Checksum : de273e33 - correct
         Events : 0.122078
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8        3        3      spare   /dev/sda3

   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8        3        3      spare   /dev/sda3
[/proc] # mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 8c52ae2b:6d84edde:adc0cbf6:9c65352f
  Creation Time : Tue Jul 13 06:43:45 2010
     Raid Level : raid5
  Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
     Array Size : 2927139200 (2791.54 GiB 2997.39 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Update Time : Thu Mar 17 01:53:45 2011
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
       Checksum : de273e45 - correct
         Events : 0.122078
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       19        1      active sync   /dev/sdb3

   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8        3        3      spare   /dev/sda3
[/proc] # mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 8c52ae2b:6d84edde:adc0cbf6:9c65352f
  Creation Time : Tue Jul 13 06:43:45 2010
     Raid Level : raid5
  Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
     Array Size : 2927139200 (2791.54 GiB 2997.39 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Update Time : Thu Mar 17 01:53:45 2011
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
       Checksum : de273e57 - correct
         Events : 0.122078
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       35        2      active sync   /dev/sdc3

   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8        3        3      spare   /dev/sda3
[/proc] #
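For anyone trying to read the dumps above: the fields to compare across the three members are the Events counter and the per-device role on the "this" line. Here is a little sketch that pulls those out of a saved copy of the --examine output (the file name `examine.txt` and the trimmed sample are just for illustration; on the NAS you would save the real `mdadm --examine /dev/sdX3` output into the file):

```shell
#!/bin/sh
# Sketch: extract the Events counter and this-device role from saved
# `mdadm --examine` output, to check whether the member superblocks
# still agree with each other. Sample data is trimmed from the dumps
# above; feed it real --examine output on the NAS instead.
cat > examine.txt <<'EOF'
/dev/sda3:
         Events : 0.122078
this     3       8        3        3      spare   /dev/sda3
/dev/sdb3:
         Events : 0.122078
this     1       8       19        1      active sync   /dev/sdb3
/dev/sdc3:
         Events : 0.122078
this     2       8       35        2      active sync   /dev/sdc3
EOF

# Print "<device> <events> <role>" per member. Matching Events counters
# mean the superblocks were last updated together (no member is stale).
summary=$(awk '
  /^\/dev\// { dev = $1; sub(":", "", dev) }
  /Events/   { ev = $NF }
  /^this/    { print dev, ev, $6 }
' examine.txt)
echo "$summary"
```

In this case all three show the same Events count, but sda3 is recorded as a spare rather than an active member, which matches the GUI refusing to start the array.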
What I did may also become clearer from the syslog:
2011-03-17 15:22:05 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID Recovery failed.
2011-03-17 14:38:18 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Examination failed.
2011-03-17 14:38:18 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Examination failed.
2011-03-17 14:38:17 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Start examination.
2011-03-17 14:37:47 System 127.0.0.1 localhost Default share Qmultimedia is not found. TwonkyMedia start failed.
2011-03-17 14:27:13 System 127.0.0.1 localhost Drive 3 plugged in.
2011-03-17 14:19:46 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Examination failed.
2011-03-17 14:19:46 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Examination failed.
2011-03-17 14:19:46 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Start examination.
2011-03-17 14:19:16 System 127.0.0.1 localhost Default share Qmultimedia is not found. TwonkyMedia start failed.
2011-03-17 14:16:28 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID Recovery failed: Not enough devices to start the array.
2011-03-17 14:16:00 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID Recovery failed: Not enough devices to start the array.
2011-03-17 14:15:07 System 127.0.0.1 localhost Default share Qmultimedia is not found. TwonkyMedia start failed.
2011-03-17 14:13:04 System 127.0.0.1 localhost System started.
2011-03-17 13:59:27 System 127.0.0.1 localhost System was shut down on Thu Mar 17 13:59:27 CET 2011.
2011-03-17 13:56:53 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID device in degraded mode.
2011-03-17 13:56:53 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Hot-remove drive 3 failed.
2011-03-17 13:56:44 System 127.0.0.1 localhost Drive 3 plugged out.
2011-03-17 01:54:23 System 127.0.0.1 localhost Default share Qmultimedia is not found. TwonkyMedia start failed.
2011-03-17 01:53:45 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Mount the file system read-only failure.
I promise I won't do any more stupid things. Should I boot it up again with all the drives in?
Best regards, Alex
Hi,
After David was already kind enough to really help me out, I unfortunately started the next stupid action.
I thought I'd just pull out the drive that had already failed and try to somehow get the RAID back into degraded mode that way.
Well, that unfortunately was a pretty stupid idea, and from me you can only learn how to make bad things even worse. (A request to anyone who comes after me: keep your hands off the RAID as much as possible when things are already bad, and always make a backup.)
In any case, the situation now is that all three drives in the RAID are shown as good in the web GUI, and my RAID shows as unknown.
mdstat now unfortunately shows:
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [raid0] [raid10]
md4 : active raid1 sdc2[2](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sdc4[2] sda4[0] sdb4[1]
      458880 blocks [4/3] [UUU_]
      bitmap: 41/57 pages [164KB], 4KB chunk

md9 : active raid1 sdc1[0] sdb1[1] sda1[2]
      530048 blocks [4/3] [UUU_]
      bitmap: 65/65 pages [260KB], 4KB chunk

unused devices: <none>
and dmesg:
[/proc] # dmesg
END FABRIC API >>>>>>>>>>>>>>>>>>>>>>
LIO_TARGET[0] - Cleared lio_target_fabric_configfs
Unloading Complete.
TARGET_CORE[0]: Released ConfigFS Fabric Infrastructure
CORE_RD[0] - Deactivating Device with TCQ: 32 at Ramdisk Device ID: 0
CORE_RD[0] - Released device space for Ramdisk Device ID: 0, pages 8 in 1 tables total bytes 32768
CORE_HBA[0] - Detached Ramdisk HBA: 0 from Generic Target Core
CORE_HBA[0] - Detached HBA from Generic Target Core
active port 0 :139
active port 1 :445
active port 2 :20
TARGET_CORE[0]: Loading Generic Kernel Storage Engine: v3.5.0 on Linux/armv5tel on 2.6.33.2
TARGET_CORE[0]: Initialized ConfigFS Fabric Infrastructure: v3.5.0 on Linux/armv5tel on 2.6.33.2
SE_PC[0] - Registered Plugin Class: TRANSPORT
PLUGIN_TRANSPORT[1] - pscsi registered
PLUGIN_TRANSPORT[4] - iblock registered
PLUGIN_TRANSPORT[5] - rd_dr registered
PLUGIN_TRANSPORT[6] - rd_mcp registered
PLUGIN_TRANSPORT[7] - fileio registered
SE_PC[1] - Registered Plugin Class: OBJ
PLUGIN_OBJ[1] - dev registered
CORE_HBA[0] - Linux-iSCSI.org Ramdisk HBA Driver v3.1 on Generic Target Core Stack v3.5.0
CORE_HBA[0] - Attached Ramdisk HBA: 0 to Generic Target Core TCQ Depth: 256 MaxSectors: 1024
CORE_HBA[0] - Attached HBA to Generic Target Core
RAMDISK: Referencing Page Count: 8
CORE_RD[0] - Built Ramdisk Device ID: 0 space of 8 pages in 1 tables
rd_dr: Using SPC_PASSTHROUGH, no reservation emulation
rd_dr: Using SPC_ALUA_PASSTHROUGH, no ALUA emulation
CORE_RD[0] - Activating Device with TCQ: 0 at Ramdisk Device ID: 0
  Vendor: QNAP  Model: RAMDISK-DR  Revision: 3.1
  Type: Direct-Access  ANSI SCSI revision: 05
T10 VPD Unit Serial Number: 1234567890:0_0
T10 VPD Page Length: 38
T10 VPD Identifer Length: 34
T10 VPD Identifier Association: addressed logical unit
T10 VPD Identifier Type: T10 Vendor ID based
T10 VPD ASCII Device Identifier: QNAP
CORE_RD[0] - Added LIO DIRECT Ramdisk Device ID: 0 of 8 pages in 1 tables, 32768 total bytes
Initiate iscsi target log successfully.
Linux-iSCSI.org iSCSI Target Core Stack v3.5.0 on Linux/armv5tel on 2.6.33.2
<<<<<<<<<<<<<<<<<<<<<< BEGIN FABRIC API >>>>>>>>>>>>>>>>>>>>>>
Initialized struct target_fabric_configfs: c0963400 for iscsi
<<<<<<<<<<<<<<<<<<<<<< END FABRIC API >>>>>>>>>>>>>>>>>>>>>>
LIO_TARGET[0] - Set fabric -> lio_target_fabric_configfs
iscsi_allocate_thread_sets:205: ***OPS*** Spawned 4 thread set(s) (8 total threads).
TARGET_CORE[iSCSI]: Allocated Discovery se_portal_group_t for endpoint: None, Portal Tag: 1
CORE[0] - Allocated Discovery TPG
Loading Complete.
iscsi_log_rcv_msg: get log pid = 7277.
[1 2] Detect fake interrupts.
QNAP: Got fake interrupts.(cnt=64)
QNAP:ignore this connect interrupt
ENABLE_WRITE_CACHE (current: enabled).
scsi 4:0:0:0: Direct-Access WDC WD15EADS-00R6B0 01.0 PQ: 0 ANSI: 5
Check proc_name[mvSata].
Check proc_name[mvSata].
sd 4:0:0:0: [sdc] Sector size 0 reported, assuming 512.
sd 4:0:0:0: [sdc] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
sd 4:0:0:0: [sdc] 0-byte physical blocks
sd 4:0:0:0: Attached scsi generic sg2 type 0
sd 4:0:0:0: [sdc] Write Protect is off
sd 4:0:0:0: [sdc] Mode Sense: 23 00 00 00
sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: [sdc] Sector size 0 reported, assuming 512.
 sdc: sdc1 sdc2 sdc3 sdc4
sd 4:0:0:0: [sdc] Sector size 0 reported, assuming 512.
sd 4:0:0:0: [sdc] Attached SCSI disk
md: bind<sdc2>
md: bind<sdc1>
RAID1 conf printout:
 --- wd:2 rd:4
 disk 0, wo:1, o:1, dev:sdc1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:0, o:1, dev:sda1
md: recovery of RAID array md9
md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 530048 blocks.
md: bind<sdc4>
RAID1 conf printout:
 --- wd:2 rd:4
 disk 0, wo:0, o:1, dev:sda4
 disk 1, wo:0, o:1, dev:sdb4
 disk 2, wo:1, o:1, dev:sdc4
md: delaying recovery of md13 until md9 has finished (they share one or more physical units)
md: md9: recovery done.
md: recovery of RAID array md13
md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 458880 blocks.
RAID1 conf printout:
 --- wd:3 rd:4
 disk 0, wo:0, o:1, dev:sdc1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:0, o:1, dev:sda1
md: md13: recovery done.
RAID1 conf printout:
 --- wd:3 rd:4
 disk 0, wo:0, o:1, dev:sda4
 disk 1, wo:0, o:1, dev:sdb4
 disk 2, wo:0, o:1, dev:sdc4
nfsd: last server has exited, flushing export cache
iscsi target log cleanup successfully.
iscsi_deallocate_thread_sets:253: ***OPS*** Stopped 4 thread set(s) (8 total threads).
TARGET_CORE[Discovery]: Deallocating iSCSI se_portal_group_t for endpoint: (null) Portal Tag 1
<<<<<<<<<<<<<<<<<<<<<< BEGIN FABRIC API >>>>>>>>>>>>>>>>>>>>>>
Target_Core_ConfigFS: DEREGISTER -> Releasing tf: iscsi
Releasing fabric: c4e3a740
<<<<<<<<<<<<<<<<<<<<<< END FABRIC API >>>>>>>>>>>>>>>>>>>>>>
LIO_TARGET[0] - Cleared lio_target_fabric_configfs
Unloading Complete.
TARGET_CORE[0]: Released ConfigFS Fabric Infrastructure
CORE_RD[0] - Deactivating Device with TCQ: 32 at Ramdisk Device ID: 0
CORE_RD[0] - Released device space for Ramdisk Device ID: 0, pages 8 in 1 tables total bytes 32768
CORE_HBA[0] - Detached Ramdisk HBA: 0 from Generic Target Core
CORE_HBA[0] - Detached HBA from Generic Target Core
active port 0 :139
active port 1 :445
active port 2 :20
TARGET_CORE[0]: Loading Generic Kernel Storage Engine: v3.5.0 on Linux/armv5tel on 2.6.33.2
TARGET_CORE[0]: Initialized ConfigFS Fabric Infrastructure: v3.5.0 on Linux/armv5tel on 2.6.33.2
SE_PC[0] - Registered Plugin Class: TRANSPORT
PLUGIN_TRANSPORT[1] - pscsi registered
PLUGIN_TRANSPORT[4] - iblock registered
PLUGIN_TRANSPORT[5] - rd_dr registered
PLUGIN_TRANSPORT[6] - rd_mcp registered
PLUGIN_TRANSPORT[7] - fileio registered
SE_PC[1] - Registered Plugin Class: OBJ
PLUGIN_OBJ[1] - dev registered
CORE_HBA[0] - Linux-iSCSI.org Ramdisk HBA Driver v3.1 on Generic Target Core Stack v3.5.0
CORE_HBA[0] - Attached Ramdisk HBA: 0 to Generic Target Core TCQ Depth: 256 MaxSectors: 1024
CORE_HBA[0] - Attached HBA to Generic Target Core
RAMDISK: Referencing Page Count: 8
CORE_RD[0] - Built Ramdisk Device ID: 0 space of 8 pages in 1 tables
rd_dr: Using SPC_PASSTHROUGH, no reservation emulation
rd_dr: Using SPC_ALUA_PASSTHROUGH, no ALUA emulation
CORE_RD[0] - Activating Device with TCQ: 0 at Ramdisk Device ID: 0
  Vendor: QNAP  Model: RAMDISK-DR  Revision: 3.1
  Type: Direct-Access  ANSI SCSI revision: 05
T10 VPD Unit Serial Number: 1234567890:0_0
T10 VPD Page Length: 38
T10 VPD Identifer Length: 34
T10 VPD Identifier Association: addressed logical unit
T10 VPD Identifier Type: T10 Vendor ID based
T10 VPD ASCII Device Identifier: QNAP
CORE_RD[0] - Added LIO DIRECT Ramdisk Device ID: 0 of 8 pages in 1 tables, 32768 total bytes
Initiate iscsi target log successfully.
Linux-iSCSI.org iSCSI Target Core Stack v3.5.0 on Linux/armv5tel on 2.6.33.2
<<<<<<<<<<<<<<<<<<<<<< BEGIN FABRIC API >>>>>>>>>>>>>>>>>>>>>>
Initialized struct target_fabric_configfs: cc2c1800 for iscsi
<<<<<<<<<<<<<<<<<<<<<< END FABRIC API >>>>>>>>>>>>>>>>>>>>>>
LIO_TARGET[0] - Set fabric -> lio_target_fabric_configfs
iscsi_allocate_thread_sets:205: ***OPS*** Spawned 4 thread set(s) (8 total threads).
TARGET_CORE[iSCSI]: Allocated Discovery se_portal_group_t for endpoint: None, Portal Tag: 1
CORE[0] - Allocated Discovery TPG
Loading Complete.
iscsi_log_rcv_msg: get log pid = 12431.
md: md0 stopped.
md: unbind<sdb3>
md: export_rdev(sdb3)
md: unbind<sda3>
md: export_rdev(sda3)
mdstat now even looks different for the other partitions. And md0 is unmounted. For now I've shut the QNAP down and ordered new hard drives.
Maybe someone can tell me 1. whether my md0 is now completely gone and every rescue attempt is in vain,
and 2. how I can find out exactly which disk is in which bay. Does that mean
sdc2 is in bay 3?
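On the bay question, one approach I've seen suggested (just a sketch, not verified on a TS-410): read the serial number the OS reports for each device and match it against the sticker on the drive in each bay. That hdparm is available on the QNAP firmware is an assumption here.

```shell
#!/bin/sh
# Sketch: print the serial number of each disk the kernel sees, so it
# can be matched to the label on the physical drive in each bay.
# Assumes hdparm exists on the firmware; smartctl -i would work too.
for d in /dev/sda /dev/sdb /dev/sdc; do
    echo "== $d =="
    hdparm -i "$d" | grep -i serial   # prints the SerialNo= field
done
```

The sdX names are assigned by the kernel at detection time, so they do not necessarily follow the bay order; the serial number is the only reliable link.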
Best regards
Alex
Can't I at least somehow mount the degraded array again so I can make my backup of the photos?
QNAP support was very rude and, without looking into it in any depth, just said:
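What I have in mind for that, once I dare touch it again, is roughly the following (only a sketch I pieced together, please correct me if it's wrong; it assembles the two members whose superblocks still say "active sync" and touches nothing read-write):

```shell
#!/bin/sh
# Sketch: bring md0 up degraded from the two good members, read-only,
# purely to copy data off. sda3 is left out because its superblock
# records it as a spare. Only attempt this on the original disks (or
# better, on clones), and stop at the first error.
mdadm --stop /dev/md0 2>/dev/null          # clear any half-assembled array
mdadm --assemble --run --readonly /dev/md0 /dev/sdb3 /dev/sdc3
cat /proc/mdstat                            # hope for: md0 active (read-only) [3/2]
mkdir -p /mnt/rescue
mount -o ro /dev/md0 /mnt/rescue            # read-only mount for the photo backup
```

Whether `--readonly` behaves identically on the old mdadm version shipped with this firmware is an assumption; on a box this fragile it would be safer to do it from a rescue Linux on a PC.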
raid 5 volume is crashed with 2 disks failed
12:27 PM
there is nothing we can do to recovery the data
12:27 PM
you can only seek help from data recovery company
I find that, let's say, borderline.
How expensive is data recovery? Is that something a private person can afford?
If I can get hold of two drives, what would the procedure look like?
Would they have to be the same model? If so, I'd quickly buy some. It would definitely be worth it to me.
There are photos from my last trips on there that I don't have anywhere else. :cry:
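From what I've read so far about the "what would the procedure look like" question: the usual first step is to clone the ailing drive sector-by-sector onto a new one with GNU ddrescue and then work only on the copy. A sketch (device names are placeholders; this assumes both drives are attached to a Linux PC and ddrescue is installed):

```shell
#!/bin/sh
# Sketch: clone a failing disk onto a new, equal-or-larger blank disk.
# /dev/sdX = failing original, /dev/sdY = new disk -- PLACEHOLDERS,
# verify against the serial numbers before running anything.
ddrescue -f -n  /dev/sdX /dev/sdY rescue.log   # first pass, skip bad areas
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.log   # retry the bad sectors up to 3 times
```

The log file lets ddrescue resume and refine; the new drive does not have to be the same model, only at least as large.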
Best regards
Alex
Hello David,
so, the result of all of it is below. I don't think it looks good:
mdstat:
[/proc] # cat mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [raid0] [raid10]
md0 : active (read-only) raid5 sda3[3] sdb3[1] sdc3[2]
      2927139200 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md4 : active raid1 sda2[2](S) sdc2[1] sdb2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdc4[2] sdb4[1]
      458880 blocks [4/3] [UUU_]
      bitmap: 41/57 pages [164KB], 4KB chunk

md9 : active raid1 sda1[2] sdc1[0] sdb1[1]
      530048 blocks [4/3] [UUU_]
      bitmap: 65/65 pages [260KB], 4KB chunk

unused devices: <none>
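If I'm reading the md0 line right: `[3/2]` means 3 members expected but only 2 active, and `[_UU]` means slot 0 is the missing one. A small sketch for pulling exactly that status pair out of a saved mdstat copy (the file name is just an example):

```shell
#!/bin/sh
# Sketch: extract the "[expected/active] [slot-map]" status from a saved
# copy of /proc/mdstat. Sample data is the md0 entry from above.
cat > mdstat.txt <<'EOF'
md0 : active (read-only) raid5 sda3[3] sdb3[1] sdc3[2]
      2927139200 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
EOF

# [n/m]: n members expected, m running; each "_" in the map is a down slot.
status=$(grep -o '\[[0-9]*/[0-9]*\] \[[U_]*\]' mdstat.txt)
echo "$status"
```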
dmesg
[/proc] # dmesg
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
sd 4:0:0:0: [sdc] Unhandled sense code
sd 4:0:0:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
sd 4:0:0:0: [sdc] Sense Key : 0x3 [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
02 60 87 41
sd 4:0:0:0: [sdc] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 02 60 87 3c 00 00 08 00
end_request: I/O error, dev sdc, sector 39880513
raid5:md0: read error not correctable (sector 37759928 on sdc3).
raid5: some error occurred in a active device:2 of md0.
md: recovery skipped: md0
md: recovery of RAID array md0
md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: Recovering started: md0
md: using 128k window, over a total of 1463569600 blocks.
md: resuming recovery of md0 from checkpoint.
nfsd: last server has exited, flushing export cache
md: md_do_sync() got signal ... exiting
md: recovery skipped: md0
md: md0 switched to read-only mode.
EXT3-fs (md0): error: couldn't mount because of unsupported optional features (240)
active port 0 :139
active port 1 :445
active port 2 :20
iscsi_log_rcv_msg: get log pid = 12253.
[/proc] #
I hope this means more to you than it does to me and that there's still hope.
Regards and thanks, Alex
Hello,
I'm pretty desperate. At the moment I can no longer get at the data on my QNAP (photos), and I have no backup of some of the pictures.
At the moment the RAID 5 with 3 disks is in unmounted status.
I'll describe exactly how the whole thing came about, and maybe someone can help me recover my data.
Yesterday, after coming back from a vacation, I noticed that a hard drive in my QNAP 410 was no longer running in the RAID after a read/write error.
2011-01-02 21:07:50 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Drive 1 failed.
2011-01-02 21:07:46 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID device in degraded mode.
2011-01-02 21:07:46 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Drive 1 removed.
The status of the RAID was degraded.
However, that had apparently already been the case for quite a while; I hadn't noticed it and had cheerfully kept using the RAID, and even successfully applied a firmware update on top of it. :shock:
2011-02-23 22:22:51 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] RAID device in degraded mode.
2011-02-23 22:22:49 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] The file system is not clean. It is suggested that you run "check disk".
2011-02-23 22:22:23 System 127.0.0.1 localhost System started.
2011-02-23 22:18:17 System 127.0.0.1 localhost System was shut down on Wed Feb 23 22:18:17 CET 2011.
2011-02-23 22:10:18 System 127.0.0.1 localhost System updated successfully from 3.3.6 to 3.4.0.
So yesterday, instead of pulling a backup right away, which of course would have been the most sensible thing, I restarted the system and ran a check on hard drive 1.
After the check (the system found no errors), the QNAP automatically tried to rebuild the RAID.
However, after 3 hours it was still at 1 %, and when I woke up this morning the RAID was in unmounted status and I can no longer get at anything on the drives.
2011-03-17 01:53:45 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Mount the file system read-only failure.
2011-03-17 01:52:16 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Error occurred while accessing the devices of the volume in degraded mode.
2011-03-16 21:37:00 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Error occurred while accessing Drive 3.
2011-03-16 21:26:40 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Start rebuilding.
2011-03-16 21:26:32 System 127.0.0.1 localhost [Drive 1] Bad Blocks Scan completed.
2011-03-16 21:26:32 System 127.0.0.1 localhost [RAID5 Disk Volume: Drive 1 2 3] Drive 1 added into the volume.
2011-03-16 15:48:11 System 127.0.0.1 localhost [Drive 1] Start scanning bad blocks.
I'm afraid of making things even worse now, so at the moment I'm not doing anything at all with the RAID.
Is there any chance of at least getting at the photos one more time for a backup?
Can anyone help me?
Best regards
Alex