mdadm /dev/md2 -AfR /dev/sdd3
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md2 has been started with 1 drive.
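For context, `-AfR` is the short form of `--assemble --force --run`: force-assemble the array even from a possibly out-of-date member, and start it although it is degraded. A minimal sketch (guarded so it is a no-op on machines without the post's devices) of verifying the array state before touching LVM:

```shell
# Sketch, assuming the post's device names (/dev/md2, /dev/sdd3):
# -A assemble, -f force (accept an out-of-date member), -R run degraded.
if [ -b /dev/sdd3 ]; then           # guard: only meaningful on the affected NAS
    mdadm /dev/md2 -AfR /dev/sdd3
    grep -A1 '^md2' /proc/mdstat    # degraded RAID1 shows e.g. "[2/1] [_U]"
    mdadm --detail --test /dev/md2  # exit code 1 = running but degraded
fi
```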
  PV /dev/md2   VG vg2   lvm2 [922.02 GiB / 0    free]
  PV /dev/md1   VG vg1   lvm2 [5.44 TiB / 0    free]
  Total: 2 [6.34 TiB] / in use: 2 [6.34 TiB] / in no VG: 0 [0   ]
  Reading all physical volumes.  This may take a while...
  Found volume group "vg2" using metadata type lvm2
  Found volume group "vg1" using metadata type lvm2
  inactive          '/dev/vg2/lv545' [9.22 GiB] inherit
  inactive          '/dev/vg2/tp2' [896.80 GiB] inherit
  inactive          '/dev/vg2/lv2' [890.93 GiB] inherit
  inactive          '/dev/vg1/lv544' [20.00 GiB] inherit
  ACTIVE            '/dev/vg1/tp1' [5.40 TiB] inherit
  ACTIVE            '/dev/vg1/lv1' [5.40 TiB] inherit
  /dev/md256 [     517.69 MiB]
  /dev/md1   [       5.44 TiB] LVM physical volume
  /dev/md2   [     922.02 GiB] LVM physical volume
  /dev/md9   [     517.62 MiB]
  /dev/md13  [     448.12 MiB]
  0 disks
  3 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes
3 logical volume(s) in volume group "vg2" now active
lvchange -a y /dev/mapper/vg2-lv2
lvchange -a n /dev/mapper/vg2-lv545
  Device does not exist.
  Command failed
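The "Device does not exist." failure is plausible here: `lv545` was still inactive, so its `/dev/mapper/vg2-lv545` node had never been created, and the command cannot resolve that path. LVM also accepts the plain `VG/LV` form, which works regardless of activation state. A small sketch (the `mapper_to_lv` helper is hypothetical, not from the post) that converts a device-mapper name back into that form:

```shell
# Hypothetical helper: turn /dev/mapper/<vg>-<lv> back into <vg>/<lv>.
# device-mapper doubles any literal '-' inside VG/LV names, so a single
# remaining '-' is the separator between volume group and logical volume.
mapper_to_lv() {
    name=${1#/dev/mapper/}
    # replace the first single '-' (not adjacent to another '-') with '/'
    printf '%s\n' "$name" | sed 's,\([^-]\)-\([^-]\),\1/\2,'
}

mapper_to_lv /dev/mapper/vg2-lv545   # -> vg2/lv545
# lvchange -a n "$(mapper_to_lv /dev/mapper/vg2-lv545)"   # needs root + the VG present
```

Names with a single literal dash inside the VG or LV part would need the proper de-escaping rules; the sed line above only handles the simple names seen in this log.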
Setting activation/monitoring to 1
Processing: vgmknodes -vvv vg2
O_DIRECT will be used
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Setting global/prioritise_write_locks to 1
dm version OF [16384] (*1)
dm mknodes vg2-tp2_tdata NF [16384] (*1)
vg2-tp2_tdata: Stacking NODE_ADD (253,8) 0:0 0600
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
dm mknodes vg2-tp2-tpool NF [16384] (*1)
vg2-tp2-tpool: Stacking NODE_ADD (253,9) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
dm mknodes vg2-tp2_tmeta NF [16384] (*1)
vg2-tp2_tmeta: Stacking NODE_ADD (253,7) 0:0 0600
dm mknodes cachedev1 NF [16384] (*1)
cachedev1: Stacking NODE_ADD (253,0) 0:0 0600
dm mknodes vg1-lv1 NF [16384] (*1)
vg1-lv1: Stacking NODE_ADD (253,5) 0:0 0600
dm mknodes vg1-tp1 NF [16384] (*1)
vg1-tp1: Stacking NODE_ADD (253,4) 0:0 0600
dm mknodes vg1-tp1-tpool NF [16384] (*1)
vg1-tp1-tpool: Stacking NODE_ADD (253,3) 0:0 0600
dm mknodes vg1-tp1_tdata NF [16384] (*1)
vg1-tp1_tdata: Stacking NODE_ADD (253,2) 0:0 0600
dm mknodes vg1-tp1_tmeta NF [16384] (*1)
vg1-tp1_tmeta: Stacking NODE_ADD (253,1) 0:0 0600
dm names OF [16384] (*1)
dm mknodes cachedev1 NF [16384] (*1)
cachedev1: Stacking NODE_ADD (253,0) 0:0 0600
dm mknodes vg1-lv1 NF [16384] (*1)
vg1-lv1: Stacking NODE_ADD (253,5) 0:0 0600
dm mknodes vg2-tp2-tpool NF [16384] (*1)
vg2-tp2-tpool: Stacking NODE_ADD (253,9) 0:0 0600
dm mknodes vg2-tp2_tdata NF [16384] (*1)
vg2-tp2_tdata: Stacking NODE_ADD (253,8) 0:0 0600
dm mknodes vg1-tp1 NF [16384] (*1)
vg1-tp1: Stacking NODE_ADD (253,4) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
dm mknodes vg2-tp2_tmeta NF [16384] (*1)
vg2-tp2_tmeta: Stacking NODE_ADD (253,7) 0:0 0600
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
dm mknodes vg1-tp1-tpool NF [16384] (*1)
vg1-tp1-tpool: Stacking NODE_ADD (253,3) 0:0 0600
dm mknodes vg1-tp1_tdata NF [16384] (*1)
vg1-tp1_tdata: Stacking NODE_ADD (253,2) 0:0 0600
dm mknodes vg1-tp1_tmeta NF [16384] (*1)
vg1-tp1_tmeta: Stacking NODE_ADD (253,1) 0:0 0600
Syncing device names
vg2-tp2_tdata: Processing NODE_ADD (253,8) 0:0 0600
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
vg2-tp2-tpool: Processing NODE_ADD (253,9) 0:0 0600
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
vg2-tp2_tmeta: Processing NODE_ADD (253,7) 0:0 0600
cachedev1: Processing NODE_ADD (253,0) 0:0 0600
vg1-lv1: Processing NODE_ADD (253,5) 0:0 0600
vg1-tp1: Processing NODE_ADD (253,4) 0:0 0600
vg1-tp1-tpool: Processing NODE_ADD (253,3) 0:0 0600
vg1-tp1_tdata: Processing NODE_ADD (253,2) 0:0 0600
vg1-tp1_tmeta: Processing NODE_ADD (253,1) 0:0 0600
cachedev1: Processing NODE_ADD (253,0) 0:0 0600
vg1-lv1: Processing NODE_ADD (253,5) 0:0 0600
vg2-tp2-tpool: Processing NODE_ADD (253,9) 0:0 0600
vg2-tp2_tdata: Processing NODE_ADD (253,8) 0:0 0600
vg1-tp1: Processing NODE_ADD (253,4) 0:0 0600
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
vg2-tp2_tmeta: Processing NODE_ADD (253,7) 0:0 0600
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
vg1-tp1-tpool: Processing NODE_ADD (253,3) 0:0 0600
vg1-tp1_tdata: Processing NODE_ADD (253,2) 0:0 0600
vg1-tp1_tmeta: Processing NODE_ADD (253,1) 0:0 0600
Using logical volume(s) on command line
Locking /var/lock/lvm/V_vg2 RB
_do_flock /var/lock/lvm/V_vg2:aux WB
_undo_flock /var/lock/lvm/V_vg2:aux
_do_flock /var/lock/lvm/V_vg2 RB
Opened /dev/md256 RO O_DIRECT
/dev/md256: block size is 4096 bytes
/dev/md256: No label detected
Closed /dev/md256
dm status (253:0) OF [16384] (*1)
/dev/mapper/cachedev1: Skipping (regex)
Opened /dev/md1 RO O_DIRECT
/dev/md1: block size is 4096 bytes
/dev/md1: lvm2 label detected at sector 1
lvmcache: /dev/md1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
/dev/md1: Found metadata at 13312 size 2297 (in area at 4096 size 1044480) for vg1 (PvvrkF-fpg6-RUCZ-ZA1j-IjcZ-RPju-da10L5)
lvmcache: /dev/md1: now in VG vg1 with 1 mdas
lvmcache: /dev/md1: setting vg1 VGID to PvvrkFfpg6RUCZZA1jIjcZRPjuda10L5
lvmcache: /dev/md1: VG vg1: Set creation host to NASDF4F14.
Closed /dev/md1
dm status (253:1) OF [16384] (*1)
/dev/mapper/vg1-tp1_tmeta: Reserved internal LV device vg1/tp1_tmeta not usable.
/dev/mapper/vg1-tp1_tmeta: Skipping unusable device
Opened /dev/md2 RO O_DIRECT
/dev/md2: block size is 4096 bytes
/dev/md2: lvm2 label detected at sector 1
lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
/dev/md2: Found metadata at 12800 size 2249 (in area at 4096 size 1044480) for vg2 (aAESo6-R24R-51By-Qh71-wzzo-c5r6-7w2RNQ)
lvmcache: /dev/md2: now in VG vg2 with 1 mdas
lvmcache: /dev/md2: setting vg2 VGID to aAESo6R24R51ByQh71wzzoc5r67w2RNQ
lvmcache: /dev/md2: VG vg2: Set creation host to nas.
dm status (253:2) OF [16384] (*1)
/dev/mapper/vg1-tp1_tdata: Reserved internal LV device vg1/tp1_tdata not usable.
/dev/mapper/vg1-tp1_tdata: Skipping unusable device
dm status (253:3) OF [16384] (*1)
/dev/mapper/vg1-tp1-tpool: Reserved internal LV device vg1/tp1-tpool not usable.
/dev/mapper/vg1-tp1-tpool: Skipping unusable device
dm status (253:4) OF [16384] (*1)
/dev/vg1/tp1: Skipping (regex)
dm status (253:5) OF [16384] (*1)
/dev/vg1/lv1: Skipping (regex)
Opened /dev/md9 RO O_DIRECT
/dev/md9: block size is 4096 bytes
/dev/md9: No label detected
Closed /dev/md9
Opened /dev/md13 RO O_DIRECT
/dev/md13: block size is 4096 bytes
/dev/md13: No label detected
Closed /dev/md13
Using cached label for /dev/md2
Allocated VG vg2 at 0x8180138.
Using cached label for /dev/md2
Adding tp2:0 as an user of tp2_tmeta
Stack tp2:0[0] on LV tp2_tdata:0
Adding tp2:0 as an user of tp2_tdata
Adding lv2:0 as an user of tp2
Read vg2 metadata (6) from /dev/md2 at 12800 size 2249
/dev/md2 0: 0 2360: lv545(0:0)
/dev/md2 1: 2360 229580: tp2_tdata(0:0)
/dev/md2 2: 231940 4096: tp2_tmeta(0:0)
dm mknodes vg2-lv545 NF [16384] (*1)
vg2-lv545: Stacking NODE_DEL
Syncing device names
vg2-lv545: Processing NODE_DEL
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
Removing /dev/vg2/tp2
Linking /dev/vg2/tp2 -> /dev/mapper/vg2-tp2
Syncing device names
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
Removing /dev/vg2/lv2
Linking /dev/vg2/lv2 -> /dev/mapper/vg2-lv2
Syncing device names
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
Syncing device names
Unlocking /var/lock/lvm/V_vg2
_undo_flock /var/lock/lvm/V_vg2
Closed /dev/md2
Freeing VG vg2 at 0x8180138.
And from now on I'm not rebooting anymore.... I swear!