Hi everyone, I'm new here. My search on this topic didn't really get me anywhere .. and above all I didn't want to blindly try random posted commands that may have been written for other processors or firmware versions...
A bit about me: I'm 48 and have been using Linux for 20 years, though more as a user than as an admin or developer.
I own a TS-809U that has been running flawlessly for three years, with 8 drives in a RAID 5.
Recently I ran a routine filesystem check from the web interface. At 44% the NAS aborted with an error, and since then the RAID can no longer be mounted.
Running
cat /proc/partitions
gives:
major minor #blocks name
65 112 125056 sdx
65 113 1008 sdx1
65 114 55296 sdx2
65 115 55296 sdx3
65 116 1 sdx4
65 117 5232 sdx5
65 118 8176 sdx6
8 48 4883770584 sdd
8 49 530125 sdd1
8 50 530142 sdd2
8 51 4882201693 sdd3
8 52 498012 sdd4
8 32 4883770584 sdc
8 33 530125 sdc1
8 34 530142 sdc2
8 35 4882201693 sdc3
8 36 498012 sdc4
8 16 4883770584 sdb
8 17 530125 sdb1
8 18 530142 sdb2
8 19 4882201693 sdb3
8 20 498012 sdb4
8 0 4883770584 sda
8 1 530125 sda1
8 2 530142 sda2
8 3 4882201693 sda3
8 4 498012 sda4
8 112 4883770584 sdh
8 113 530125 sdh1
8 114 530142 sdh2
8 115 4882201693 sdh3
8 116 498012 sdh4
8 96 4883770584 sdg
8 97 530125 sdg1
8 98 530142 sdg2
8 99 4882201693 sdg3
8 100 498012 sdg4
8 80 4883770584 sdf
8 81 530125 sdf1
8 82 530142 sdf2
8 83 4882201693 sdf3
8 84 498012 sdf4
8 64 4883770584 sde
8 65 530125 sde1
8 66 530142 sde2
8 67 4882201693 sde3
8 68 498012 sde4
9 9 530112 md9
9 13 458880 md13
9 8 530128 md8
9 0 34175410752 md0
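All eight data partitions (sda3..sdh3) show up with identical sizes, which is a good sign. A quick count (a hedged one-liner; it just greps the pattern above out of /proc/partitions):

```shell
# Count the big data partitions (sda3..sdh3) the kernel currently sees;
# a healthy 8-disk RAID 5 needs all eight to be present.
# (|| true keeps a count of 0 from being treated as an error by grep.)
grep -c 'sd[a-h]3$' /proc/partitions || true
```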
Running
mdadm --examine /dev/sda3
gives:
/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : dbb1c846:b417de10:bcb6b9fc:f38644ff
Name : 0
Creation Time : Sat Mar 24 18:16:41 2018
Raid Level : raid5
Raid Devices : 8
Used Dev Size : 9764403112 (4656.03 GiB 4999.37 GB)
Array Size : 68350821504 (32592.21 GiB 34995.62 GB)
Used Size : 9764403072 (4656.03 GiB 4999.37 GB)
Super Offset : 9764403368 sectors
State : clean
Device UUID : 272cf0d8:608b95c6:50828225:f340f5c0
Internal Bitmap : 2 sectors from superblock
Update Time : Sun Jun 10 23:43:07 2018
Checksum : 9f8de1d - correct
Events : 69
Layout : left-symmetric
Chunk Size : 64K
Array Slot : 0 (0, 1, 2, 3, 4, 5, 6, 7, failed, failed, [long run of "failed" entries truncated] ..., failed)
Array State : Uuuuuuuu 376 failed
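For what it's worth, that superblock looks sane: State clean, all eight slots present, checksum correct. The usual next step is to check whether all eight members agree on the Events counter; if they do, the metadata is consistent and reassembly is normally safe. A hedged sketch (device names taken from the output above; the guard keeps it from doing anything on a machine without these disks):

```shell
# Compare the Events counter across all presumed RAID members (sda3..sdh3).
# If all eight values match, the array members are in sync.
if command -v mdadm >/dev/null && [ -b /dev/sda3 ]; then
    for d in /dev/sd[a-h]3; do
        printf '%s: %s\n' "$d" \
            "$(mdadm --examine "$d" | awk -F': *' '/Events/ {print $2}')"
    done
fi
```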
The OS is the currently latest 4.2.6.
QNAP support hasn't replied after a week.
Anyone have an idea?
One more thing .. this post here is useless:
https://helpdesk.qnap.com/inde…n-md0-neu-assembeln-mdadm
It didn't get me anywhere:
Usage: config_util input
input=0: Check if any HD existed.
input=1: Mirror ROOT partition.
input=2: Mirror Swap Space (not yet).
input=4: Mirror RFS_EXT partition.
[~] # config_util 1
Start to mirror ROOT part...
config_util: ret=-1, /dev/sda1 CANNOT be mounted on /mnt/HDA_ROOT.
config_util: ret=-1, /dev/sdb1 CANNOT be mounted on /mnt/HDB_ROOT.
config_util: ret=-1, /dev/sdc1 CANNOT be mounted on /mnt/HDC_ROOT.
config_util: ret=-1, /dev/sdd1 CANNOT be mounted on /mnt/HDD_ROOT.
config_util: ret=-1, /dev/sde1 CANNOT be mounted on /mnt/HDE_ROOT.
config_util: ret=-1, /dev/sdf1 CANNOT be mounted on /mnt/HDF_ROOT.
config_util: ret=-1, /dev/sdg1 CANNOT be mounted on /mnt/HDG_ROOT.
config_util: ret=-1, /dev/sdh1 CANNOT be mounted on /mnt/HDH_ROOT.
config_util: No valid HD exists.
Mirror of ROOT failed
[~] # config_util 4
Start to mirror RFS_EXT part...
config_util: HD1 is TS-NASX86.
config_util: HD2 is TS-NASX86.
config_util: HD3 is TS-NASX86.
config_util: HD4 is TS-NASX86.
config_util: HD5 is TS-NASX86.
config_util: HD6 is TS-NASX86.
config_util: HD7 is TS-NASX86.
config_util: HD8 is TS-NASX86.
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: device /dev/sda4 already active - cannot assemble it
mdadm: fail to stop array /dev/sda4: Device or resource busy
config_util: Cannot start raid volume /dev/sda4
mdadm: fail to stop array /dev/sda4: Device or resource busy
mdadm: another array by this name is already running.
Mirror of RFS_EXT failed
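The "already active - cannot assemble it" / "Device or resource busy" loop suggests a half-started array is still holding the member devices. The generic mdadm way out is to stop the array first and only then reassemble it, roughly like this (a sketch only; device names assumed from the output above, and the /mnt/HDA_ROOT guard keeps it from firing on a non-QNAP box):

```shell
# Stop the half-assembled data array, then try a clean assemble from
# all eight members. Run this only on the NAS itself.
if [ -d /mnt/HDA_ROOT ] && command -v mdadm >/dev/null; then
    mdadm --stop /dev/md0
    mdadm --assemble /dev/md0 /dev/sd[a-h]3
fi
```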
One more thing .. the script
storage_boot_init 2
ran through and gave me the following output:
mdadm: /dev/md0 not identified in config file.
mdadm: stopped /dev/md0
mdadm: /dev/md0 has been started with 8 drives.
storage_boot_init.c: Start raid device /dev/md0 successfully
md0 : active raid5 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
storage_boot_init.c: /dev/md0 is active.
storage_boot_init.c: Check filesystem on /dev/md0.
storage_boot_init.c: Cannot mount /dev/md0.
storage_boot_init.c: check_last_degrade_error...
So where do I check that degrade error?? There's nothing in /var/log ...
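Note that storage_boot_init did assemble md0 with all 8 drives, so the array itself comes up; it is the filesystem on top that refuses to mount. The kernel log usually states why a mount failed, and e2fsck can be run in pure read-only mode first. A hedged sketch (assuming an ext3/ext4 filesystem on /dev/md0; nothing here writes to the array):

```shell
# See why the last mount attempt failed (the ext driver logs the reason):
dmesg 2>/dev/null | tail -n 20

# Read-only filesystem check: -n answers "no" to every prompt, so the
# filesystem on the array is not modified.
if command -v e2fsck >/dev/null && [ -b /dev/md0 ]; then
    e2fsck -n /dev/md0
fi
```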