An update to 5.0.1.2277 unfortunately brought no improvement.
Posts by Circlar
-
7. Additionally, I ran:
md_checker
Code
[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...

RAID metadata found!
UUID:           a3323a86:5769e9a7:3d0e7f40:9856cff1
Level:          raid5
Devices:        4
Name:           md1
Chunk Size:     512K
md Version:     1.0
Creation Time:  Jan 27 02:02:54 2018
Status:         ONLINE (md1) [UUUU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1        /dev/sdh3   0   Active   Jan 30 22:12:54 2023     1042   AAAA
 NAS_HOST       2        /dev/sdg3   1   Active   Jan 30 22:12:54 2023     1042   AAAA
 NAS_HOST       3        /dev/sdf3   2   Active   Jan 30 22:12:54 2023     1042   AAAA
 NAS_HOST       4        /dev/sde3   3   Active   Jan 30 22:12:54 2023     1042   AAAA
===============================================================================================

RAID metadata found!
UUID:           5c889c4e:fa2f8b44:3537761f:30ca051c
Level:          raid0
Devices:        2
Name:           md4
Chunk Size:     512K
md Version:     1.0
Creation Time:  Sep 28 21:18:12 2021
Status:         ONLINE (md4) raid0
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       7        /dev/sdc3   0   Active   Sep 28 21:18:12 2021        0   AA
 NAS_HOST       8        /dev/sdd3   1   Active   Sep 28 21:18:12 2021        0   AA
===============================================================================================

RAID metadata found!
UUID:           3e640ee1:5e84438a:ac6e37ca:9bfbfddc
Level:          raid1
Devices:        1
Name:           md2
Chunk Size:     -
md Version:     1.0
Creation Time:  Mar 8 21:46:44 2019
Status:         ONLINE (md2) [U]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST      C1        /dev/sdb3   0   Active   Jan 30 22:17:32 2023      630   A
===============================================================================================

RAID metadata found!
UUID:           dc41d3dd:fa04cf1a:0ac169a2:b8490995
Level:          raid1
Devices:        1
Name:           md3
Chunk Size:     -
md Version:     1.0
Creation Time:  Feb 11 21:11:43 2022
Status:         ONLINE (md3) [U]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST      C2        /dev/sda3   0   Active   Jan 30 22:14:31 2023       24   A
===============================================================================================
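All four members of the RAID 5 report the same event count (1042) and array state AAAA, so the md layer itself looks intact. For a cross-check with the standard md tools (plain Linux commands, nothing QNAP-specific), something like this should work:
Code
# Kernel view of all assembled arrays:
cat /proc/mdstat
# Detailed view of the data array (name taken from the md_checker output):
mdadm --detail /dev/md1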
and pvscan, vgscan, lvmdiskscan:
Code
[~] # pvscan
  PV /dev/md3     VG vg256   lvm2 [192.67 GiB / 0    free]
  PV /dev/drbd2   VG vg288   lvm2 [214.07 GiB / 0    free]
  PV /dev/drbd1   VG vg1     lvm2 [ 27.26 TiB / 0    free]
  PV /dev/drbd4   VG vg2     lvm2 [ 32.72 TiB / 0    free]
  Total: 4 [60.38 TiB] / in use: 4 [60.38 TiB] / in no VG: 0 [0   ]
[~] # vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg256" using metadata type lvm2
  Found volume group "vg288" using metadata type lvm2
  Found volume group "vg1" using metadata type lvm2
  Found volume group "vg2" using metadata type lvm2
[~] # lvmdiskscan
  /dev/md256  [ 517.69 MiB]
  /dev/md1    [  27.26 TiB] LVM physical volume
  WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/md1 and /dev/drbd1
  Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, replacing /dev/md1
  /dev/drbd1  [  27.26 TiB] LVM physical volume
  /dev/md2    [ 214.08 GiB] LVM physical volume
  WARNING: duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT is being used from both devices /dev/md2 and /dev/drbd2
  Found duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT: using /dev/drbd2 not /dev/md2
  Using duplicate PV /dev/drbd2 from subsystem DRBD, replacing /dev/md2
  /dev/drbd2  [ 214.08 GiB] LVM physical volume
  /dev/md3    [ 192.67 GiB] LVM physical volume
  /dev/md4    [  32.72 TiB] LVM physical volume
  WARNING: duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P is being used from both devices /dev/md4 and /dev/drbd4
  Found duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P: using /dev/drbd4 not /dev/md4
  Using duplicate PV /dev/drbd4 from subsystem DRBD, replacing /dev/md4
  /dev/drbd4  [  32.72 TiB] LVM physical volume
  /dev/md9    [ 517.62 MiB]
  /dev/md13   [ 448.12 MiB]
  /dev/md321  [   7.90 GiB]
  /dev/md322  [   6.90 GiB]
  0 disks
  5 partitions
  0 LVM physical volume whole disks
  7 LVM physical volumes
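The duplicate-PV warnings look alarming but appear to be normal on QTS: the system layers DRBD devices on top of the md arrays, so LVM sees the same PV signature twice and deliberately prefers /dev/drbdX. For a one-off query that silences them, the raw md members can be filtered out (standard LVM filter syntax; whether QTS would persist such a filter is an assumption):
Code
# Ignore the raw md members for this call only, so just the DRBD
# devices are scanned (the regex is a placeholder matching md1/md2/md4):
pvs --config 'devices { filter = [ "r|^/dev/md[124]$|", "a|.*|" ] }' -o pv_name,vg_name,pv_uuid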
8. Output of [~] # cat /etc/volume.conf:
Code
[~] # cat /etc/volume.conf
[Global]
volBitmap = 0x1e
pd_00000001_3_Vol_Bitmap = 0x2
pd_00000002_3_Vol_Bitmap = 0x2
pd_00000003_3_Vol_Bitmap = 0x2
pd_00000004_3_Vol_Bitmap = 0x2
pd_00170001_3_Vol_Bitmap = 0x4
pd_00000007_3_Vol_Bitmap = 0x8
pd_00000008_3_Vol_Bitmap = 0x8
pd_00170002_3_Vol_Bitmap = 4
[VOL_1]
volId = 1
volName =
raidId = 1
raidName = /dev/md1
encryption = no
ssdCache = no
unclean = no
need_rehash = no
creating = no
mappingName = /dev/mapper/cachedev1
qnapResize = no
delayAlloc = yes
privilege = no
readOnly = no
writeCache = yes
invisible = no
raidLevel = 5
partNo = 3
status = -3
filesystem = 18
internal = 1
time = Mon Jan 30 22:58:43 2023
volType = 1
baseId = 1
baseName = /dev/mapper/cachedev1
[VOL_2]
volId = 2
volName = SSD
raidId = 2
raidName = /dev/md2
encryption = no
ssdCache = no
unclean = no
need_rehash = no
creating = no
mappingName = /dev/mapper/cachedev2
qnapResize = no
delayAlloc = yes
privilege = no
readOnly = no
writeCache = yes
invisible = no
raidLevel = 1
partNo = 3
status = 0
filesystem = 9
internal = 1
time = Mon Jan 30 22:58:43 2023
volType = 1
baseId = 2
baseName = /dev/mapper/cachedev2
inodeRatio = 32768
inodeCount = 6942720
fsFeature = 0x3
[VOL_3]
volId = 3
volName = Backup
raidId = 4
raidName = /dev/md4
encryption = yes
ssdCache = no
unclean = no
need_rehash = no
creating = no
mappingName = /dev/mapper/ce_cachedev3
qnapResize = no
delayAlloc = yes
privilege = no
readOnly = no
writeCache = yes
invisible = no
raidLevel = 0
partNo = 3
status = 0
filesystem = 9
internal = 1
time = Mon Jan 30 22:58:43 2023
volType = 1
baseId = 3
baseName = /dev/mapper/cachedev3
inodeRatio = 65536
inodeCount = 545560576
fsFeature = 0x3
[VOL_4]
volId = 4
volName = 2018101900159
raidId = -1
raidName = /dev/sda3
encryption = no
mappingName = /dev/sda3
qnapResize = no
delayAlloc = yes
privilege = no
readOnly = no
writeCache = yes
invisible = no
raidLevel = -2
partNo = 3
isGlobalSpare = no
diskId = 0x00170002
status = -1
filesystem = 18
internal = 1
time = Mon Jan 30 22:58:46 2023
unclean = no
need_rehash = no
creating = no
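Notably, VOL_1 (the volume on the affected pool) reports status = -3, while the healthy volumes show status = 0. A single field can be pulled out with QTS's getcfg helper; the option order below is from memory and may differ between firmware versions:
Code
# Read one field from a QTS-style config file:
getcfg VOL_1 status -f /etc/volume.conf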
9. Output of [~] # cat /etc/config/raid.conf:
Code
[~] # cat /etc/config/raid.conf
[Global]
raidBitmap = 0x16
pd_5000CCA266D967C7_Raid_Bitmap = 0x2
pd_5000CCA266DC30D2_Raid_Bitmap = 0x2
pd_5000CCA266DA9ABB_Raid_Bitmap = 0x2
pd_5000CCA266DB035E_Raid_Bitmap = 0x2
pd_5000CCA284D175D0_Raid_Bitmap = 0x10
pd_5000CCA284F42EF4_Raid_Bitmap = 0x10
pd_2018101000043_Raid_Bitmap = 0x4
[RAID_1]
uuid = a3323a86:5769e9a7:3d0e7f40:9856cff1
id = 1
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md1
raidLevel = 5
internal = 1
mdBitmap = 0
chunkSize = 512
readAhead = 0
stripeCacheSize = 0
speedLimitMax = 0
speedLimitMin = 0
data_0 = 1, 5000CCA266D967C7
data_1 = 2, 5000CCA266DC30D2
data_2 = 3, 5000CCA266DA9ABB
data_3 = 4, 5000CCA266DB035E
dataBitmap = F
scrubStatus = 1
[RAID_4]
uuid = 5c889c4e:fa2f8b44:3537761f:30ca051c
id = 4
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md4
raidLevel = 0
internal = 1
mdBitmap = 0
chunkSize = 512
readAhead = 0
stripeCacheSize = 0
speedLimitMax = 0
speedLimitMin = 0
data_0 = 7, 5000CCA284D175D0
data_1 = 8, 5000CCA284F42EF4
dataBitmap = 3
scrubStatus = 1
[RAID_2]
uuid = 3e640ee1:5e84438a:ac6e37ca:9bfbfddc
id = 2
partNo = 3
aggreMember = no
readonly = no
legacy = no
version2 = yes
overProvisioning = 0
deviceName = /dev/md2
raidLevel = 1
internal = 1
mdBitmap = 0
chunkSize = 0
readAhead = 0
stripeCacheSize = 0
speedLimitMax = 0
speedLimitMin = 0
data_0 = 170001, 2018101000043
dataBitmap = 1
scrubStatus = 1
[~] #
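raid.conf matches the md_checker output: RAID_1 lists all four members and dataBitmap = F (all four bits set). To rule out a stale member, the on-disk superblocks can be compared directly (standard mdadm; device names taken from the output in step 7):
Code
# Event counts and array states should agree across all four members:
for d in /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3; do
    mdadm --examine "$d" | grep -E 'Events|Array State'
done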
10. I also tried:
Code
[~] # storage_util --volume_scan
do_scan_raid=1
sh: /sys/block/dm-6/dm/pool/tier/relocation_rate: Permission denied
All without success.
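The "Permission denied" comes from a sysfs write inside the helper script and is probably harmless; the kernel log around the failed pool activation is likely more informative. A quick, generic place to look:
Code
# Device-mapper / thin-pool errors around the activation attempt:
dmesg | grep -iE 'thin|dm-|md1' | tail -n 50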
-
Hello dear forum,
Since yesterday, my QNAP 880EC has been showing the error "Entladen" (unloaded) for storage pool 1.
The NAS itself had restarted on its own at some point; the cause is unknown. What was striking is that it shut down its functions in stages: first the web interface stopped responding, then the network shares became unreachable. A VM via QVS kept running at first, and media access via Plex also still worked initially. Then, in the middle of the night, came the restart.
Could you please help me?
Type: RAID 5 with encryption
FW: QTS 5.0.1.2248
The log showed:
Here is what I have tried:
1. lvchange -ay
Code
[~] # lvchange -ay /dev/vg1/tp1
  WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/drbd1 and /dev/md1
  Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  Check of pool vg1/tp1 failed (status:1). Manual repair required!
[~] # lvchange -ay /dev/vg1/lv1
  WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/drbd1 and /dev/md1
  Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  Check of pool vg1/tp1 failed (status:1). Manual repair required!
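The status:1 is apparently the exit code of the thin_check run that LVM performs on the pool's metadata before activating it, i.e. the thin-pool metadata is what is damaged. A sketch for inspecting it by hand; the device-mapper name is an assumption, and on some LVM builds hidden LVs cannot be activated directly, in which case the documented metadata-swap procedure is needed instead:
Code
# List the pool's hidden metadata/data sub-LVs:
lvs -a -o lv_name,lv_size,lv_attr vg1
# If the metadata LV can be brought up and thin_check is installed,
# run the checker on it directly:
thin_check /dev/mapper/vg1-tp1_tmeta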
2. Then vgcfgrestore --list vg1
3. I picked the last entry from Jan 7; the others were created after the problem occurred.
vgcfgrestore --force -f /etc/config/lvm/archive/vg1_00195-640727086.vg vg1
Code
[~] # vgcfgrestore --force -f /etc/config/lvm/archive/vg1_00195-640727086.vg vg1
  WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/drbd1 and /dev/md1
  Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  WARNING: duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT: using /dev/drbd2 not /dev/md2
  Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
  WARNING: duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P is being used from both devices /dev/drbd4 and /dev/md4
  Found duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P: using /dev/drbd4 not /dev/md4
  Using duplicate PV /dev/drbd4 from subsystem DRBD, ignoring /dev/md4
  WARNING: Forced restore of Volume Group vg1 with thin volumes.
  WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/drbd1 and /dev/md1
  Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  Restored volume group vg1
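Worth noting: vgcfgrestore only rewrites the LVM metadata in the PV headers; it cannot repair the thin-pool metadata itself, which is why the activation in step 4 fails the same way. Before further attempts it may be wise to snapshot the current state (standard LVM; the target path is an arbitrary choice):
Code
# Keep a copy of the VG metadata exactly as it is right now:
vgcfgbackup -f /tmp/vg1_current.vg vg1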
4. Then lvchange -ay again
Code
[~] # lvchange -ay /dev/vg1/tp1
  Check of pool vg1/tp1 failed (status:1). Manual repair required!
[~] # lvchange -ay /dev/vg1/lv1
  Check of pool vg1/tp1 failed (status:1). Manual repair required!
5. Then:
Code
[~] # lvconvert --repair vg1/tp1
  Volume group "vg1" has insufficient free space (0 extents): 16384 required.
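lvconvert --repair needs room inside the VG for a fresh metadata LV (16384 extents here), and vg1 has 0 extents free. A common workaround is to extend the VG temporarily with a spare device; /dev/sdX3 below is a placeholder and must not be one of the RAID members (a sketch only, untested on QTS):
Code
# Temporarily add a spare device so --repair can allocate the new metadata LV:
pvcreate /dev/sdX3      # placeholder device -- double-check before running!
vgextend vg1 /dev/sdX3
lvconvert --repair vg1/tp1
# Afterwards, once the pool is repaired:
#   vgreduce vg1 /dev/sdX3 && pvremove /dev/sdX3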
6. Then:
Code
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0
Detect disk(8, 48)...
dev_count ++ = 1
Detect disk(8, 16)...
dev_count ++ = 2
Detect disk(8, 128)...
ignore non-root enclosure disk(8, 128).
Detect disk(8, 96)...
dev_count ++ = 3
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
dev_count ++ = 4
Detect disk(8, 32)...
dev_count ++ = 5
Detect disk(8, 0)...
dev_count ++ = 6
Detect disk(8, 112)...
dev_count ++ = 7
Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(8, 128)...
ignore non-root enclosure disk(8, 128).
Detect disk(8, 96)...
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
Detect disk(8, 32)...
Detect disk(8, 0)...
Detect disk(8, 112)...
sys_startup_p2:got called count = -1
WARNING: duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze is being used from both devices /dev/drbd1 and /dev/md1
Found duplicate PV EeVNEwI61Q6bXdxWzo5ZO8097iGPfpze: using /dev/drbd1 not /dev/md1
Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
LV Status              NOT available
[... the "LV Status NOT available" line repeats for every LV ...]
WARNING: duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P is being used from both devices /dev/drbd4 and /dev/md4
Found duplicate PV PRTIlW4Dbr5lYR0N3RoCOIPSP8sGjn7P: using /dev/drbd4 not /dev/md4
Using duplicate PV /dev/drbd4 from subsystem DRBD, ignoring /dev/md4
WARNING: duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT is being used from both devices /dev/drbd2 and /dev/md2
Found duplicate PV iQ9oniCQV2ea5OWidl0CNJpdixIB3yyT: using /dev/drbd2 not /dev/md2
Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
/dev/drbd4(53248)
sh: /sys/block/dm-6/dm/pool/tier/relocation_rate: Permission denied
Done
Without success.
-
Hello,
is it possible to disable the power button on the NAS?
Best regards
-
I can only confirm the bug; since 4.2 I have had the same problem with a VM running Win 8.1. The only thing that helps is shutting the VM down, powering it off, and then starting it again. A plain restart does not help.