Hi dr_mike,
you're right. But I "dug" a bit deeper and found the following information.
[/] # vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1" using metadata type lvm2
[/] # vgdisplay
--- Volume group ---
VG Name               vg1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  281
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                8
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               10.89 TiB
PE Size               4.00 MiB
Total PE              2854293
Alloc PE / Size       2854293 / 10.89 TiB
Free  PE / Size       0 / 0
VG UUID               Im3vAK-81pH-vL0T-j7Cf-w2ni-5r9P-cTt2Ci
[/] # lvdisplay
--- Logical volume ---
LV Path                /dev/vg1/lv544
LV Name                lv544
VG Name                vg1
LV UUID                blxtBz-06zO-ts3y-2YKu-RFJg-07Ie-aAUWRE
LV Write Access        read/write
LV Creation host, time NAS002AB5, 2016-08-01 16:24:17 +0100
LV Status              NOT available
LV Size                111.49 GiB
Current LE             28542
Segments               2
Allocation             inherit
Read ahead sectors     8192
--- Logical volume ---
LV Name                tp1
VG Name                vg1
LV UUID                yN02SQ-HgLX-mPVj-50no-t0zK-7709-hkMuuY
LV Write Access        read/write
LV Creation host, time NAS002AB5, 2016-08-01 16:24:18 +0100
LV Pool transaction ID 68
LV Pool metadata       tp1_tmeta
LV Pool data           tp1_tierdata_0
LV Pool chunk size     512.00 KiB
LV Zero new blocks     no
LV Status              NOT available
LV Size                10.71 TiB
Allocated pool chunks  0
Current LE             2808682
Segments               1
Allocation             inherit
Read ahead sectors     auto
--- Logical volume ---
LV Path                /dev/vg1/lv1
LV Name                lv1
VG Name                vg1
LV UUID                h0gAc7-84Op-HxFG-RZpl-5wo5-m7v5-weIor2
LV Write Access        read/write
LV Creation host, time NAS002AB5, 2016-08-01 16:29:10 +0100
LV Pool name           tp1
LV Thin device ID      1
LV Status              NOT available
LV Size                8.16 TiB
Mapped sectors         0
Current LE             2138996
Segments               1
Allocation             inherit
Read ahead sectors     8192
--- Logical volume ---
LV Path                /dev/vg1/lv1312
LV Name                lv1312
VG Name                vg1
LV UUID                Qjfuy1-wjZb-A9Z5-VpqN-fQAE-SJyk-HJTDXW
LV Write Access        read/write
LV Creation host, time NAS01, 2016-08-07 17:38:59 +0100
LV Status              NOT available
LV Size                1.11 GiB
Current LE             285
Segments               1
Allocation             inherit
Read ahead sectors     8192
--- Logical volume ---
LV Path                /dev/vg1/lv2
LV Name                lv2
VG Name                vg1
LV UUID                T9IS63-edbL-gXi2-2W0x-rysd-TDTa-omEBzG
LV Write Access        read/write
LV Creation host, time NAS01, 2016-08-07 18:48:33 +0100
LV Pool name           tp1
LV Thin device ID      2
LV Status              NOT available
LV Size                1.46 TiB
Mapped sectors         0
Current LE             384000
Segments               1
Allocation             inherit
Read ahead sectors     8192
--- Logical volume ---
LV Path                /dev/vg1/snap10001
LV Name                snap10001
VG Name                vg1
LV UUID                Ow4Hbi-CmYl-mBep-04X9-RR72-WpsO-hgAbi3
LV Write Access        read/write
LV Creation host, time NAS01, 2016-09-18 01:00:03 +0100
LV Pool name           tp1
LV Thin device ID      3
LV Thin origin name    lv1
LV Status              NOT available
LV Size                8.16 TiB
Mapped sectors         0
Current LE             2138996
Segments               1
Allocation             inherit
Read ahead sectors     auto
[/] # pvscan
PV /dev/md0   VG vg1   lvm2 [10.89 TiB / 0    free]
Total: 1 [10.89 TiB] / in use: 1 [10.89 TiB] / in no VG: 0 [0   ]
[/] # lvscan
inactive          '/dev/vg1/lv544' [111.49 GiB] inherit
inactive          '/dev/vg1/tp1' [10.71 TiB] inherit
inactive          '/dev/vg1/lv1' [8.16 TiB] inherit
inactive          '/dev/vg1/lv1312' [1.11 GiB] inherit
inactive          '/dev/vg1/lv2' [1.46 TiB] inherit
inactive          '/dev/vg1/snap10001' [8.16 TiB] inherit
inactive          '/dev/vg1/snap20001' [1.46 TiB] inherit
inactive          '/dev/vg1/snap10002' [8.16 TiB] inherit
From here on it gets more interesting. The volumes still seem to exist. With the command vgchange -ay I can set them all to active, albeit with these minor problems:
[/] # vgchange -ay
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Failed to deactivate vg1-tp1-tpool
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Failed to deactivate vg1-tp1-tpool
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Failed to deactivate vg1-tp1-tpool
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Failed to deactivate vg1-tp1-tpool
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Failed to deactivate vg1-tp1-tpool
Thin pool transaction_id=69, while expected: 67.
Unable to deactivate open vg1-tp1_tmeta (253:1)
Unable to deactivate open vg1-tp1_tierdata_2 (253:4)
Unable to deactivate open vg1-tp1_tierdata_1 (253:3)
Unable to deactivate open vg1-tp1_tierdata_0 (253:2)
Failed to deactivate vg1-tp1-tpool
8 logical volume(s) in volume group "vg1" now active
[/] # lvscan
ACTIVE            '/dev/vg1/lv544' [111.49 GiB] inherit
ACTIVE            '/dev/vg1/tp1' [10.71 TiB] inherit
ACTIVE            '/dev/vg1/lv1' [8.16 TiB] inherit
ACTIVE            '/dev/vg1/lv1312' [1.11 GiB] inherit
ACTIVE            '/dev/vg1/lv2' [1.46 TiB] inherit
ACTIVE            '/dev/vg1/snap10001' [8.16 TiB] inherit
ACTIVE            '/dev/vg1/snap20001' [1.46 TiB] inherit
ACTIVE            '/dev/vg1/snap10002' [8.16 TiB] inherit
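The repeated "Thin pool transaction_id=69, while expected: 67" message suggests the thin-pool metadata on disk and LVM's own metadata have drifted apart. Before copying anything, one commonly suggested (and not risk-free) approach is to let LVM rebuild the pool metadata. A minimal sketch, assuming the pool really is vg1/tp1 as shown above; note that QNAP's tiered tp1_tierdata_* layout may not behave exactly like stock LVM, so save the current state first:

```shell
#!/bin/sh
# Hedged sketch of a thin-pool metadata repair; names taken from the
# output above. Run only after saving the current LVM metadata.

# 1. Save a copy of the current VG metadata, just in case.
vgcfgbackup -f /tmp/vg1-before-repair.cfg vg1

# 2. The pool must be inactive before it can be repaired.
vgchange -an vg1

# 3. Rebuild the pool metadata; the old tmeta is kept as vg1/tp1_meta0.
lvconvert --repair vg1/tp1

# 4. Reactivate and inspect.
vgchange -ay vg1
lvs -a vg1
```

If lvconvert refuses because the pool cannot be deactivated (as in the output above), a reboot without the QTS services holding the devices open may be needed first.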
If I now try to mount the volume /dev/vg1/tp1, I only get the following error:
[/] # mount /dev/vg1/tp1 /mnt/myvol
mount: special device /dev/vg1/tp1 does not exist
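That particular error is expected even on a healthy system: tp1 is the thin *pool* itself, not a filesystem, so there is no mountable device node for it. The data lives on the thin volumes carved out of the pool (lv1 and lv2 in the lvdisplay output). A sketch of mounting one of them read-only, assuming the filesystem on it is intact; on QTS the volume may also sit behind an extra device-mapper layer (e.g. a cache device), in which case the path differs:

```shell
#!/bin/sh
# Mount a thin volume (not the pool) read-only, so nothing gets
# written to a possibly damaged filesystem. Paths are assumptions.
mkdir -p /mnt/myvol
mount -o ro /dev/vg1/lv1 /mnt/myvol
ls /mnt/myvol
```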
[/] # cat /proc/mounts
none / tmpfs rw,relatime,size=204800k,mode=755 0 0
devtmpfs /dev devtmpfs rw,relatime,size=1968316k,nr_inodes=492079,mode=755 0 0
/proc /proc proc rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
sysfs /sys sysfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,relatime,size=65536k 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
tmpfs /share tmpfs rw,relatime,size=16384k 0 0
cgroup_root /sys/fs/cgroup tmpfs rw,relatime 0 0
none /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
[/] # cat /proc/partitions
major minor  #blocks  name
   8        0 3907018584 sda
   8        1     530125 sda1
   8        2     530142 sda2
   8        3 3897063763 sda3
   8        4     530144 sda4
   8        5    8353796 sda5
   8       16 3907018584 sdb
   8       17     530125 sdb1
   8       18     530142 sdb2
   8       19 3897063763 sdb3
   8       20     530144 sdb4
   8       21    8353796 sdb5
   8       32 3907018584 sdc
   8       33     530125 sdc1
   8       34     530142 sdc2
   8       35 3905449693 sdc3
   8       36     498012 sdc4
   8       48 3907018584 sdd
   8       49     530125 sdd1
   8       50     530142 sdd2
   8       51 3897063763 sdd3
   8       52     530144 sdd4
   8       53    8353796 sdd5
   8       64     503808 sde
   8       65       2160 sde1
   8       66     242304 sde2
   8       67     242304 sde3
   8       68          1 sde4
   8       69       8304 sde5
   8       70       8688 sde6
   9      256     530112 md256
   9        0 11691190272 md0
 253        0  116908032 dm-0
 253        1   67108864 dm-1
 253        2       4096 dm-2
 253        3       4096 dm-3
 253        4 11505999872 dm-4
 253        5 11504361472 dm-5
 253        8    1167360 dm-8
[/] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
11691190272 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
I'm not really trying to get QTS running again. I would just like to try to copy the data off and then start from scratch.
One idea of mine (no idea whether this would work) ...
Shut the NAS down
Pull all disks
Restart the NAS
Swap disk no. 1 for an old spare disk
Initialize the NAS with it (QTS would be back)
Re-insert disks 2, 3 and 4
Hope the NAS recognizes the disks with the volumes and mounts them
Then back everything up
And then start from scratch (initialize the NAS with 4x 4 TB, rebuild the RAID, create the volumes, etc.)
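If the NAS (or a Linux PC with the disks attached) does see the volumes again, the "back everything up" step could be a plain rsync to an external disk. A sketch with placeholder paths (/mnt/myvol as the read-only mounted data volume, /mnt/backup as the target disk; both are assumptions):

```shell
#!/bin/sh
# Copy everything off the recovered volume, preserving permissions,
# hard links and extended attributes; both paths are placeholders.
rsync -aHAX --progress /mnt/myvol/ /mnt/backup/

# Afterwards, compare the copy against the source (dry run, checksums).
rsync -naHAXc --delete /mnt/myvol/ /mnt/backup/
```

The checksum dry run costs time on terabytes of data, but it tells you before wiping the RAID whether the copy really matches.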