Hi,
since I'm unfortunately no longer getting any help from QNAP support for this model, I'm trying my luck here.
After a reboot, the disks are no longer mounted. I have already tried to fix it by rebooting several times, and I also updated the firmware to 4.2.6 (20181026), but unfortunately without success.
From other threads I've seen that the following output can be helpful for troubleshooting.
So here is my output; I'd be very grateful for any help getting the volume mounted again.
Best regards,
John
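For reference, these are the diagnostic commands from the other threads that produced the output below, collected in one place. As far as I understand, they are all read-only and don't write anything to the disks:

```shell
fdisk -l                      # partition tables of all disks
cat /proc/mdstat              # status of the md RAID arrays
cat /proc/mounts              # what is actually mounted right now
mdadm -E /dev/sd[abcdefgh]3   # RAID superblocks of the data partitions
dmesg                         # kernel log since boot
```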
Code
fdisk -l
Disk /dev/sdd: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdc: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdb: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sda: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/sda4 doesn't contain a valid partition table
Disk /dev/sdg: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdh: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdh1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdf: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sde: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes
Device Boot Start End Blocks Id System
/dev/sdx1 1 17 2160 83 Linux
/dev/sdx2 18 1910 242304 83 Linux
/dev/sdx3 1911 3803 242304 83 Linux
/dev/sdx4 3804 3936 17024 5 Extended
/dev/sdx5 3804 3868 8304 83 Linux
/dev/sdx6 3869 3936 8688 83 Linux
Disk /dev/md9: 542 MB, 542834688 bytes
2 heads, 4 sectors/track, 132528 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md9 doesn't contain a valid partition table
Disk /dev/md8: 542 MB, 542851072 bytes
2 heads, 4 sectors/track, 132532 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md8 doesn't contain a valid partition table
Disk /dev/md0: 63999.6 GB, 63999653183488 bytes
2 heads, 4 sectors/track, -1 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
Code
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid0 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
62499661312 blocks super 1.0 64k chunks
md8 : active raid1 sdh2[8](S) sdg2[7](S) sdf2[6](S) sde2[5](S) sdd2[4](S) sdc2[3](S) sdb2[2] sda2[0]
530128 blocks super 1.0 [2/2] [UU]
md13 : active raid1 sda4[0] sdd4[5] sde4[6] sdf4[7] sdg4[8] sdh4[9] sdc4[4] sdb4[3]
458880 blocks super 1.0 [8/8] [UUUUUUUU]
bitmap: 0/8 pages [0KB], 32KB chunk
md9 : active raid1 sda1[0] sdh1[14] sdg1[13] sdf1[12] sde1[11] sdd1[10] sdc1[9] sdb1[8]
530112 blocks super 1.0 [8/8] [UUUUUUUU]
bitmap: 0/9 pages [0KB], 32KB chunk
unused devices: <none>
Code
cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext2 rw,relatime,errors=continue 0 0
/proc /proc proc rw,relatime 0 0
none /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
sysfs /sys sysfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,relatime,size=65536k 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
tmpfs /share tmpfs rw,relatime,size=16384k 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda4 /mnt/ext ext3 rw,relatime,errors=continue,barrier=1,data=writeback 0 0
/dev/md9 /mnt/HDA_ROOT ext3 rw,relatime,errors=continue,barrier=1,data=ordered 0 0
tmpfs /samba tmpfs rw,relatime,size=65536k 0 0
tmpfs /mnt/rf/nd tmpfs rw,relatime,size=1024k 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
Code
mdadm -E /dev/sd[abcdefgh]3
/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 0a6c7248:800beb3a:9973163b:dac9abc0
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 88a60ea5 - correct
Events : 0
Chunk Size : 64K
Array Slot : 0 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : Uuuuuuuu
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 3839375e:92d99ea7:1f061eba:10682482
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 4b9edaa3 - correct
Events : 0
Chunk Size : 64K
Array Slot : 1 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uUuuuuuu
/dev/sdc3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 93fb2f47:47cac6ae:47ca5ff1:e6feeb18
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 9c8e8b2 - correct
Events : 0
Chunk Size : 64K
Array Slot : 2 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuUuuuuu
/dev/sdd3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 2946c993:9c7d3e39:991d4f2a:c2891439
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 39f1c4cb - correct
Events : 0
Chunk Size : 64K
Array Slot : 3 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuuUuuuu
/dev/sde3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 346973e5:c8ae2114:611ffc1f:7d10984e
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 71afa186 - correct
Events : 0
Chunk Size : 64K
Array Slot : 4 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuuuUuuu
/dev/sdf3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 0cf94d24:e0193e58:a0de7d7f:1bfa2de1
Update Time : Mon Jan 14 04:34:59 2002
Checksum : e6be4554 - correct
Events : 0
Chunk Size : 64K
Array Slot : 5 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuuuuUuu
/dev/sdg3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : a370df16:f7bbf1d3:9696fdd0:3e6c5ef4
Update Time : Mon Jan 14 04:34:59 2002
Checksum : b9b3891d - correct
Events : 0
Chunk Size : 64K
Array Slot : 6 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuuuuuUu
/dev/sdh3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 865dae76:37c6f445:7f5ab627:0a53cabc
Name : 0
Creation Time : Mon Jan 14 04:34:59 2002
Raid Level : raid0
Raid Devices : 8
Used Dev Size : 15624915368 (7450.54 GiB 7999.96 GB)
Used Size : 0
Super Offset : 15624915368 sectors
State : active
Device UUID : 0696ec6d:dbdc2ab4:ff12b6e6:83e6625d
Update Time : Mon Jan 14 04:34:59 2002
Checksum : 6fb6c613 - correct
Events : 0
Chunk Size : 64K
Array Slot : 7 (0, 1, 2, 3, 4, 5, 6, 7)
Array State : uuuuuuuU
Code
dmesg
l: driver (lke_9.2.0 QNAP, LBD=OFF) loaded at ffffffffa0189000
[ 137.692150] ufsd: module license 'Commercial product' taints kernel.
[ 137.696720] Disabling lock debugging due to kernel taint
[ 137.702655] ufsd: driver (lke_9.2.0 QNAP, build_host("BuildServer36"), acl, ioctl, bdi, sd2(0), fua, bz, rsrc) loaded at ffffffffa0194000
[ 137.702663] NTFS support included
[ 137.702666] Hfs+/HfsJ support included
[ 137.702668] optimized: speed
[ 137.702671] Build_for__QNAP_Atom_x86_64_k3.4.6_2014-09-17_lke_9.2.0_r245986_b9
[ 137.702675]
[ 137.761171] fnotify: Load file notify kernel module.
[ 138.856490] usbcore: registered new interface driver snd-usb-audio
[ 138.867986] usbcore: registered new interface driver snd-usb-caiaq
[ 138.880655] Linux video capture interface: v2.00
[ 138.902779] usbcore: registered new interface driver uvcvideo
[ 138.907994] USB Video Class driver (1.1.1)
[ 139.068078] 8021q: 802.1Q VLAN Support v1.8
[ 140.491817] 8021q: adding VLAN 0 to HW filter on device eth0
[ 140.578818] 8021q: adding VLAN 0 to HW filter on device eth1
[ 143.797151] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 152.299403] kjournald starting. Commit interval 5 seconds
[ 152.307456] EXT3-fs (md9): using internal journal
[ 152.311397] EXT3-fs (md9): mounted filesystem with ordered data mode
[ 155.703398] md: bind<sda2>
[ 155.708983] md/raid1:md8: active with 1 out of 1 mirrors
[ 155.713346] md8: detected capacity change from 0 to 542851072
[ 156.722970] md8: unknown partition table
[ 158.811473] Adding 530124k swap on /dev/md8. Priority:-1 extents:1 across:530124k
[ 161.466494] md: bind<sdb2>
[ 161.486429] RAID1 conf printout:
[ 161.486438] --- wd:1 rd:2
[ 161.486445] disk 0, wo:0, o:1, dev:sda2
[ 161.486452] disk 1, wo:1, o:1, dev:sdb2
[ 161.486582] md: recovery of RAID array md8
[ 161.491836] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 161.497094] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 161.502526] md: using 128k window, over a total of 530128k.
[ 163.545996] md: bind<sdc2>
[ 165.640577] md: bind<sdd2>
[ 167.181781] md: md8: recovery done.
[ 167.232868] RAID1 conf printout:
[ 167.232878] --- wd:2 rd:2
[ 167.232886] disk 0, wo:0, o:1, dev:sda2
[ 167.232893] disk 1, wo:0, o:1, dev:sdb2
[ 167.249507] RAID1 conf printout:
[ 167.249514] --- wd:2 rd:2
[ 167.249519] disk 0, wo:0, o:1, dev:sda2
[ 167.249524] disk 1, wo:0, o:1, dev:sdb2
[ 167.249528] RAID1 conf printout:
[ 167.249531] --- wd:2 rd:2
[ 167.249536] disk 0, wo:0, o:1, dev:sda2
[ 167.249540] disk 1, wo:0, o:1, dev:sdb2
[ 167.734751] md: bind<sde2>
[ 167.765332] RAID1 conf printout:
[ 167.765340] --- wd:2 rd:2
[ 167.765346] disk 0, wo:0, o:1, dev:sda2
[ 167.765351] disk 1, wo:0, o:1, dev:sdb2
[ 167.765355] RAID1 conf printout:
[ 167.765358] --- wd:2 rd:2
[ 167.765363] disk 0, wo:0, o:1, dev:sda2
[ 167.765367] disk 1, wo:0, o:1, dev:sdb2
[ 167.765371] RAID1 conf printout:
[ 167.765375] --- wd:2 rd:2
[ 167.765379] disk 0, wo:0, o:1, dev:sda2
[ 167.765384] disk 1, wo:0, o:1, dev:sdb2
[ 169.838562] md: bind<sdf2>
[ 169.874563] RAID1 conf printout:
[ 169.874572] --- wd:2 rd:2
[ 169.874581] disk 0, wo:0, o:1, dev:sda2
[ 169.874588] disk 1, wo:0, o:1, dev:sdb2
[ 169.874594] RAID1 conf printout:
[ 169.874598] --- wd:2 rd:2
[ 169.874604] disk 0, wo:0, o:1, dev:sda2
[ 169.874611] disk 1, wo:0, o:1, dev:sdb2
[ 169.874615] RAID1 conf printout:
[ 169.874620] --- wd:2 rd:2
[ 169.874626] disk 0, wo:0, o:1, dev:sda2
[ 169.874632] disk 1, wo:0, o:1, dev:sdb2
[ 169.874637] RAID1 conf printout:
[ 169.874642] --- wd:2 rd:2
[ 169.874648] disk 0, wo:0, o:1, dev:sda2
[ 169.874655] disk 1, wo:0, o:1, dev:sdb2
[ 171.931513] md: bind<sdg2>
[ 171.953621] RAID1 conf printout:
[ 171.953629] --- wd:2 rd:2
[ 171.953636] disk 0, wo:0, o:1, dev:sda2
[ 171.953641] disk 1, wo:0, o:1, dev:sdb2
[ 171.953645] RAID1 conf printout:
[ 171.953648] --- wd:2 rd:2
[ 171.953653] disk 0, wo:0, o:1, dev:sda2
[ 171.953658] disk 1, wo:0, o:1, dev:sdb2
[ 171.953661] RAID1 conf printout:
[ 171.953665] --- wd:2 rd:2
[ 171.953670] disk 0, wo:0, o:1, dev:sda2
[ 171.953674] disk 1, wo:0, o:1, dev:sdb2
[ 171.953678] RAID1 conf printout:
[ 171.953682] --- wd:2 rd:2
[ 171.953686] disk 0, wo:0, o:1, dev:sda2
[ 171.953691] disk 1, wo:0, o:1, dev:sdb2
[ 171.953695] RAID1 conf printout:
[ 171.953699] --- wd:2 rd:2
[ 171.953703] disk 0, wo:0, o:1, dev:sda2
[ 171.953708] disk 1, wo:0, o:1, dev:sdb2
[ 174.021466] md: bind<sdh2>
[ 174.048638] RAID1 conf printout:
[ 174.048647] --- wd:2 rd:2
[ 174.048655] disk 0, wo:0, o:1, dev:sda2
[ 174.048662] disk 1, wo:0, o:1, dev:sdb2
[ 174.048668] RAID1 conf printout:
[ 174.048673] --- wd:2 rd:2
[ 174.048679] disk 0, wo:0, o:1, dev:sda2
[ 174.048685] disk 1, wo:0, o:1, dev:sdb2
[ 174.048690] RAID1 conf printout:
[ 174.048694] --- wd:2 rd:2
[ 174.048700] disk 0, wo:0, o:1, dev:sda2
[ 174.048706] disk 1, wo:0, o:1, dev:sdb2
[ 174.048711] RAID1 conf printout:
[ 174.048716] --- wd:2 rd:2
[ 174.048722] disk 0, wo:0, o:1, dev:sda2
[ 174.048728] disk 1, wo:0, o:1, dev:sdb2
[ 174.048733] RAID1 conf printout:
[ 174.048738] --- wd:2 rd:2
[ 174.048744] disk 0, wo:0, o:1, dev:sda2
[ 174.048751] disk 1, wo:0, o:1, dev:sdb2
[ 174.048756] RAID1 conf printout:
[ 174.048760] --- wd:2 rd:2
[ 174.048766] disk 0, wo:0, o:1, dev:sda2
[ 174.048773] disk 1, wo:0, o:1, dev:sdb2
[ 211.202455] md: md0 stopped.
[ 211.216353] md: md0 stopped.
[ 211.354946] md: bind<sdb3>
[ 211.358739] md: bind<sdc3>
[ 211.362418] md: bind<sdd3>
[ 211.366062] md: bind<sde3>
[ 211.369617] md: bind<sdf3>
[ 211.373402] md: bind<sdg3>
[ 211.376823] md: bind<sdh3>
[ 211.380090] md: bind<sda3>
[ 211.384366] md/raid0:md0: md_size is 124999322624 sectors.
[ 211.387287] md: RAID0 configuration for md0 - 1 zone
[ 211.390079] md: zone0=[sda3/sdb3/sdc3/sdd3/sde3/sdf3/sdg3/sdh3]
[ 211.392991] zone-offset= 0KB, device-offset= 0KB, size=62499661312KB
[ 211.395937]
[ 211.398839] md0: detected capacity change from 0 to 63999653183488
[ 212.469317] md0: unknown partition table
[ 214.680787] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[ 214.680793] Contact [email='linux-ext4@vger.kernel.org'][/email] if you think we should keep it.
[ 214.680796]
[ 216.548955] EXT4-fs (md0): ext4_check_descriptors: Checksum for group 10496 failed (2478!=24900)
[ 216.552106] EXT4-fs (md0): group descriptors corrupted!
[ 244.523095] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 271.094567] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[ 291.896740] rule type=2, num=0
[ 292.014391] Loading iSCSI transport class v2.0-871.
[ 292.036147] iscsi: registered transport (tcp)
[ 292.063844] iscsid (9728): /proc/9728/oom_adj is deprecated, please use /proc/9728/oom_score_adj instead.
[ 294.127975] tun: Universal TUN/TAP device driver, 1.6
[ 294.132349] tun: (C) 1999-2004 Max Krasnyansky <[email='maxk@qualcomm.com'][/email]>
[ 294.148980] PPP generic driver version 2.4.2
[ 294.192006] nf_conntrack version 0.5.0 (7911 buckets, 31644 max)
[ 294.250832] PPP MPPE Compression module registered
[ 294.268942] PPP BSD Compression module registered
[ 294.283915] PPP Deflate Compression module registered
[ 297.723188] sysRequest.cgi[10955]: segfault at 18 ip 00000000f6faac2e sp 00000000ff809e94 error 6 in libc-2.6.1.so[f6f21000+12d000]
[ 302.533154] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 302.537238] NFSD: starting 90-second grace period