Mooorning!
Let me put it this way... that didn't really work... :-|
Do you have any more ideas??
I then didn't even try anything further...
drwxr-xr-x 2 admin administ 180 Sep 13 12:21 ./
drwxr-xr-x 13 admin administ 20360 Sep 14 03:00 ../
brw------- 1 admin administ 253, 0 Sep 13 12:21 cachedev1
crw------- 1 admin administ 10, 236 Sep 13 12:23 control
brw------- 1 admin administ 253, 5 Sep 13 12:20 vg1-lv1
brw------- 1 admin administ 253, 4 Sep 13 12:20 vg1-tp1
brw------- 1 admin administ 253, 3 Sep 13 12:20 vg1-tp1-tpool
brw------- 1 admin administ 253, 2 Sep 13 12:20 vg1-tp1_tdata
brw------- 1 admin administ 253, 1 Sep 13 12:20 vg1-tp1_tmeta
drwxr-xr-x 2 admin administ 80 Sep 13 12:21 vg1/
crw------- 1 admin administ 10, 63 Sep 13 12:23 vga_arbiter
# Generated by LVM2 version 2.02.96(2) (2012-06-08): Sat Sep 13 10:20:58 2014
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgscan'"
creation_host = "nas" # Linux nas 3.4.6 #1 SMP Thu Jun 12 17:15:43 CST 2014 x86_64
creation_time = 1410596458 # Sat Sep 13 10:20:58 2014
vg2 {
id = "aAESo6-R24R-51By-Qh71-wzzo-c5r6-7w2RNQ"
seqno = 6
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "wMXJsS-2a0W-tbRp-sYQ0-BgbZ-3Yrn-g6PPNs"
device = "/dev/md2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1933615232 # 922.02 Gigabytes
pe_start = 2048
pe_count = 236036 # 922.016 Gigabytes
}
}
logical_volumes {
lv545 {
id = "3cTDvr-qi1o-Ni9x-PxM5-dZb4-7JHv-cvt799"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "nas"
creation_time = 1410100197 # 2014-09-07 16:29:57 +0200
read_ahead = 4096
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2360 # 9.21875 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
tp2 {
id = "oKeIrT-YPMV-FZ0q-HPsF-5LVU-aJpT-bVQpEO"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "nas"
creation_time = 1410100203 # 2014-09-07 16:30:03 +0200
segment_count = 1
segment1 {
start_extent = 0
extent_count = 229580 # 896.797 Gigabytes
type = "thin-pool"
metadata = "tp2_tmeta"
pool = "tp2_tdata"
transaction_id = 1
chunk_size = 2048 # 1024 Kilobytes
zero_new_blocks = 1
}
}
lv2 {
id = "SbloUk-55yp-JPR2-dNzf-SUFR-8RmF-2GFaGb"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "nas"
creation_time = 1410100205 # 2014-09-07 16:30:05 +0200
read_ahead = 4096
segment_count = 1
segment1 {
start_extent = 0
extent_count = 228079 # 890.934 Gigabytes
type = "thick"
thin_pool = "tp2"
transaction_id = 0
device_id = 1
}
}
tp2_tmeta {
id = "BJDTX4-mBL5-QbnJ-wpyN-om9C-GS6C-QgxWEN"
status = ["READ", "WRITE"]
flags = []
creation_host = "nas"
creation_time = 1410100203 # 2014-09-07 16:30:03 +0200
segment_count = 1
segment1 {
start_extent = 0
extent_count = 4096 # 16 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 231940
]
}
}
tp2_tdata {
id = "cM2A5C-BMxJ-lQsI-XILj-prBI-IpsU-rHBRRg"
status = ["READ", "WRITE"]
flags = []
creation_host = "nas"
creation_time = 1410100203 # 2014-09-07 16:30:03 +0200
segment_count = 1
segment1 {
start_extent = 0
extent_count = 229580 # 896.797 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2360
]
}
}
}
}
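As a quick sanity check of the metadata dump above: `extent_size` is given in 512-byte sectors, so one extent is 4 MiB, and the extent counts reproduce the sizes noted in the comments. A minimal arithmetic check:

```shell
# extent_size = 8192 sectors of 512 bytes => 4 MiB per extent
echo "$((8192 * 512 / 1024 / 1024)) MiB per extent"

# lv2: 228079 extents * 4 MiB => the "890.934 Gigabytes" comment
awk 'BEGIN { printf "lv2: %.3f GiB\n", 228079 * 4 / 1024 }'

# tp2: 229580 extents * 4 MiB => the "896.797 Gigabytes" comment
awk 'BEGIN { printf "tp2: %.3f GiB\n", 229580 * 4 / 1024 }'
```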
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sda3[0] sdc3[2] sdb3[1]
      5840623232 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sda4[0] sdd4[24] sdc4[2] sdb4[1]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sda1[0] sdd1[24] sdc1[2] sdb1[1]
      530048 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
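Side note: md1's block count in the mdstat output (1 KiB blocks) lines up with the 5.44 TiB that LVM later reports for the vg1 physical volume /dev/md1:

```shell
# 5840623232 blocks of 1 KiB => size of /dev/md1 in TiB
awk 'BEGIN { printf "/dev/md1: %.2f TiB\n", 5840623232 / 1024^3 }'
```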
(I've abbreviated the first block with all the "Not a block device" lines and the block with loop176-loop254!)
/dev/vcsa5: Not a block device
/dev/vcsa6: Not a block device
/dev/vg1/lv1: Already in device cache
/dev/vg1/tp1: Already in device cache
/dev/vga_arbiter: Not a block device
/dev/video: Not a block device
/dev/video0: Not a block device
/dev/video1: Not a block device
/dev/video2: Not a block device
/dev/video3: Not a block device
/dev/video4: Not a block device
Opened /dev/loop175 RO O_DIRECT
/dev/loop175: size is 0 sectors
/dev/loop175: Skipping: Too small to hold a PV
Closed /dev/loop175
/dev/fbsnap175: Skipping (sysfs)
/dev/fbdisk175: Skipping (sysfs)
Opened /dev/loop255 RO O_DIRECT
/dev/loop255: size is 0 sectors
/dev/loop255: Skipping: Too small to hold a PV
Closed /dev/loop255
/dev/fbsnap255: Skipping (sysfs)
/dev/fbdisk255: Skipping (sysfs)
Volume group "vg2" not found
Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
Syncing device names
Unlocking /var/lock/lvm/V_vg2
_undo_flock /var/lock/lvm/V_vg2
Allocated VG (null) at 0x81a6f18.
Failed to vg_read vg2
Freeing VG (null) at 0x81a6f18.
Skipping volume group vg2
Great of you to reboot every single time without saying a word. That way I needn't bother putting in any effort, only to wonder why nothing works at all.
Quote from "Vossen": "Let me put it this way... that didn't really work... :-|"
And I know why. :cry:
Damn it!
Please forgive me! It was such a casual, unremarkable reboot that I didn't even think to mention it...
Even though the error lay in that very reboot...
How can I make up for it now?
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md2 has been started with 1 drive.
PV /dev/md2   VG vg2   lvm2 [922.02 GiB / 0 free]
PV /dev/md1   VG vg1   lvm2 [5.44 TiB / 0 free]
Total: 2 [6.34 TiB] / in use: 2 [6.34 TiB] / in no VG: 0 [0 ]
Reading all physical volumes. This may take a while...
Found volume group "vg2" using metadata type lvm2
Found volume group "vg1" using metadata type lvm2
inactive '/dev/vg2/lv545' [9.22 GiB] inherit
inactive '/dev/vg2/tp2' [896.80 GiB] inherit
inactive '/dev/vg2/lv2' [890.93 GiB] inherit
inactive '/dev/vg1/lv544' [20.00 GiB] inherit
ACTIVE   '/dev/vg1/tp1' [5.40 TiB] inherit
ACTIVE   '/dev/vg1/lv1' [5.40 TiB] inherit
/dev/md256 [ 517.69 MiB]
/dev/md1   [   5.44 TiB] LVM physical volume
/dev/md2   [ 922.02 GiB] LVM physical volume
/dev/md9   [ 517.62 MiB]
/dev/md13  [ 448.12 MiB]
0 disks
3 partitions
0 LVM physical volume whole disks
2 LVM physical volumes
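The lvscan output shows all of vg2's LVs as inactive. The usual way out would be to activate the VG and recreate its device nodes; a sketch (not a command sequence quoted from this thread), echoed as a dry run here because the real commands act on the device mapper:

```shell
# Dry run: activate vg2 and recreate its /dev/mapper nodes.
# Remove the 'echo "would run:"' wrapper to actually run this
# on the NAS as root.
for cmd in "vgchange -ay vg2" "vgmknodes vg2"; do
    echo "would run: $cmd"
done
```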
Setting activation/monitoring to 1
Processing: vgmknodes -vvv vg2
O_DIRECT will be used
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Setting global/prioritise_write_locks to 1
dm version OF [16384] (*1)
dm mknodes vg2-tp2_tdata NF [16384] (*1)
vg2-tp2_tdata: Stacking NODE_ADD (253,8) 0:0 0600
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
dm mknodes vg2-tp2-tpool NF [16384] (*1)
vg2-tp2-tpool: Stacking NODE_ADD (253,9) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
dm mknodes vg2-tp2_tmeta NF [16384] (*1)
vg2-tp2_tmeta: Stacking NODE_ADD (253,7) 0:0 0600
dm mknodes cachedev1 NF [16384] (*1)
cachedev1: Stacking NODE_ADD (253,0) 0:0 0600
dm mknodes vg1-lv1 NF [16384] (*1)
vg1-lv1: Stacking NODE_ADD (253,5) 0:0 0600
dm mknodes vg1-tp1 NF [16384] (*1)
vg1-tp1: Stacking NODE_ADD (253,4) 0:0 0600
dm mknodes vg1-tp1-tpool NF [16384] (*1)
vg1-tp1-tpool: Stacking NODE_ADD (253,3) 0:0 0600
dm mknodes vg1-tp1_tdata NF [16384] (*1)
vg1-tp1_tdata: Stacking NODE_ADD (253,2) 0:0 0600
dm mknodes vg1-tp1_tmeta NF [16384] (*1)
vg1-tp1_tmeta: Stacking NODE_ADD (253,1) 0:0 0600
dm names OF [16384] (*1)
dm mknodes cachedev1 NF [16384] (*1)
cachedev1: Stacking NODE_ADD (253,0) 0:0 0600
dm mknodes vg1-lv1 NF [16384] (*1)
vg1-lv1: Stacking NODE_ADD (253,5) 0:0 0600
dm mknodes vg2-tp2-tpool NF [16384] (*1)
vg2-tp2-tpool: Stacking NODE_ADD (253,9) 0:0 0600
dm mknodes vg2-tp2_tdata NF [16384] (*1)
vg2-tp2_tdata: Stacking NODE_ADD (253,8) 0:0 0600
dm mknodes vg1-tp1 NF [16384] (*1)
vg1-tp1: Stacking NODE_ADD (253,4) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
dm mknodes vg2-tp2_tmeta NF [16384] (*1)
vg2-tp2_tmeta: Stacking NODE_ADD (253,7) 0:0 0600
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
dm mknodes vg1-tp1-tpool NF [16384] (*1)
vg1-tp1-tpool: Stacking NODE_ADD (253,3) 0:0 0600
dm mknodes vg1-tp1_tdata NF [16384] (*1)
vg1-tp1_tdata: Stacking NODE_ADD (253,2) 0:0 0600
dm mknodes vg1-tp1_tmeta NF [16384] (*1)
vg1-tp1_tmeta: Stacking NODE_ADD (253,1) 0:0 0600
Syncing device names
vg2-tp2_tdata: Processing NODE_ADD (253,8) 0:0 0600
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
vg2-tp2-tpool: Processing NODE_ADD (253,9) 0:0 0600
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
vg2-tp2_tmeta: Processing NODE_ADD (253,7) 0:0 0600
cachedev1: Processing NODE_ADD (253,0) 0:0 0600
vg1-lv1: Processing NODE_ADD (253,5) 0:0 0600
vg1-tp1: Processing NODE_ADD (253,4) 0:0 0600
vg1-tp1-tpool: Processing NODE_ADD (253,3) 0:0 0600
vg1-tp1_tdata: Processing NODE_ADD (253,2) 0:0 0600
vg1-tp1_tmeta: Processing NODE_ADD (253,1) 0:0 0600
cachedev1: Processing NODE_ADD (253,0) 0:0 0600
vg1-lv1: Processing NODE_ADD (253,5) 0:0 0600
vg2-tp2-tpool: Processing NODE_ADD (253,9) 0:0 0600
vg2-tp2_tdata: Processing NODE_ADD (253,8) 0:0 0600
vg1-tp1: Processing NODE_ADD (253,4) 0:0 0600
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
vg2-tp2_tmeta: Processing NODE_ADD (253,7) 0:0 0600
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
vg1-tp1-tpool: Processing NODE_ADD (253,3) 0:0 0600
vg1-tp1_tdata: Processing NODE_ADD (253,2) 0:0 0600
vg1-tp1_tmeta: Processing NODE_ADD (253,1) 0:0 0600
Using logical volume(s) on command line
Locking /var/lock/lvm/V_vg2 RB
_do_flock /var/lock/lvm/V_vg2:aux WB
_undo_flock /var/lock/lvm/V_vg2:aux
_do_flock /var/lock/lvm/V_vg2 RB
Opened /dev/md256 RO O_DIRECT
/dev/md256: block size is 4096 bytes
/dev/md256: No label detected
Closed /dev/md256
dm status (253:0) OF [16384] (*1)
/dev/mapper/cachedev1: Skipping (regex)
Opened /dev/md1 RO O_DIRECT
/dev/md1: block size is 4096 bytes
/dev/md1: lvm2 label detected at sector 1
lvmcache: /dev/md1: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
/dev/md1: Found metadata at 13312 size 2297 (in area at 4096 size 1044480) for vg1 (PvvrkF-fpg6-RUCZ-ZA1j-IjcZ-RPju-da10L5)
lvmcache: /dev/md1: now in VG vg1 with 1 mdas
lvmcache: /dev/md1: setting vg1 VGID to PvvrkFfpg6RUCZZA1jIjcZRPjuda10L5
lvmcache: /dev/md1: VG vg1: Set creation host to NASDF4F14.
Closed /dev/md1
dm status (253:1) OF [16384] (*1)
/dev/mapper/vg1-tp1_tmeta: Reserved internal LV device vg1/tp1_tmeta not usable.
/dev/mapper/vg1-tp1_tmeta: Skipping unusable device
Opened /dev/md2 RO O_DIRECT
/dev/md2: block size is 4096 bytes
/dev/md2: lvm2 label detected at sector 1
lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
/dev/md2: Found metadata at 12800 size 2249 (in area at 4096 size 1044480) for vg2 (aAESo6-R24R-51By-Qh71-wzzo-c5r6-7w2RNQ)
lvmcache: /dev/md2: now in VG vg2 with 1 mdas
lvmcache: /dev/md2: setting vg2 VGID to aAESo6R24R51ByQh71wzzoc5r67w2RNQ
lvmcache: /dev/md2: VG vg2: Set creation host to nas.
dm status (253:2) OF [16384] (*1)
/dev/mapper/vg1-tp1_tdata: Reserved internal LV device vg1/tp1_tdata not usable.
/dev/mapper/vg1-tp1_tdata: Skipping unusable device
dm status (253:3) OF [16384] (*1)
/dev/mapper/vg1-tp1-tpool: Reserved internal LV device vg1/tp1-tpool not usable.
/dev/mapper/vg1-tp1-tpool: Skipping unusable device
dm status (253:4) OF [16384] (*1)
/dev/vg1/tp1: Skipping (regex)
dm status (253:5) OF [16384] (*1)
/dev/vg1/lv1: Skipping (regex)
Opened /dev/md9 RO O_DIRECT
/dev/md9: block size is 4096 bytes
/dev/md9: No label detected
Closed /dev/md9
Opened /dev/md13 RO O_DIRECT
/dev/md13: block size is 4096 bytes
/dev/md13: No label detected
Closed /dev/md13
Using cached label for /dev/md2
Allocated VG vg2 at 0x8180138.
Using cached label for /dev/md2
Adding tp2:0 as an user of tp2_tmeta
Stack tp2:0[0] on LV tp2_tdata:0
Adding tp2:0 as an user of tp2_tdata
Adding lv2:0 as an user of tp2
Read vg2 metadata (6) from /dev/md2 at 12800 size 2249
/dev/md2 0: 0 2360: lv545(0:0)
/dev/md2 1: 2360 229580: tp2_tdata(0:0)
/dev/md2 2: 231940 4096: tp2_tmeta(0:0)
dm mknodes vg2-lv545 NF [16384] (*1)
vg2-lv545: Stacking NODE_DEL
Syncing device names
vg2-lv545: Processing NODE_DEL
dm mknodes vg2-tp2 NF [16384] (*1)
vg2-tp2: Stacking NODE_ADD (253,10) 0:0 0600
Removing /dev/vg2/tp2
Linking /dev/vg2/tp2 -> /dev/mapper/vg2-tp2
Syncing device names
vg2-tp2: Processing NODE_ADD (253,10) 0:0 0600
dm mknodes vg2-lv2 NF [16384] (*1)
vg2-lv2: Stacking NODE_ADD (253,11) 0:0 0600
Removing /dev/vg2/lv2
Linking /dev/vg2/lv2 -> /dev/mapper/vg2-lv2
Syncing device names
vg2-lv2: Processing NODE_ADD (253,11) 0:0 0600
Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
Syncing device names
Unlocking /var/lock/lvm/V_vg2
_undo_flock /var/lock/lvm/V_vg2
Closed /dev/md2
Freeing VG vg2 at 0x8180138.
And from now on, no more reboots... I swear!
drwxr-xr-x 2 admin administ 280 Sep 16 02:58 ./
drwxr-xr-x 16 admin administ 20560 Sep 16 03:00 ../
brw------- 1 admin administ 253, 0 Sep 14 23:35 cachedev1
crw------- 1 admin administ 10, 236 Sep 14 23:38 control
brw------- 1 admin administ 253, 5 Sep 14 23:35 vg1-lv1
brw------- 1 admin administ 253, 4 Sep 14 23:35 vg1-tp1
brw------- 1 admin administ 253, 3 Sep 14 23:35 vg1-tp1-tpool
brw------- 1 admin administ 253, 2 Sep 14 23:35 vg1-tp1_tdata
brw------- 1 admin administ 253, 1 Sep 14 23:35 vg1-tp1_tmeta
lrwxrwxrwx 1 admin administ 8 Sep 16 02:57 vg2-lv2 -> ../dm-11
lrwxrwxrwx 1 admin administ 8 Sep 16 02:57 vg2-tp2 -> ../dm-10
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2-tpool -> ../dm-9
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2_tdata -> ../dm-8
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2_tmeta -> ../dm-7
Changing old config name...
Reinitialing...
Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 32)...
Detect disk(8, 48)...
Detect disk(8, 64)...
ignore non-root enclosure disk(8, 64).
sys_startup_p2:got called count = -1
Done.
[Global]
ssdCacheBitmap=0x6
cgBitmap=0x1
[SSDCache_1]
ssdCacheId=1
ssdCacheName=/dev/mapper/cachedev1
qdmId=1
lvId=1
groupId=0
uuid=debd0b9e-6e94-44be-9c88-b4251e6448a1
flag=0x1
enabled=0
reserved=0
[CG_0]
groupId=0
groupName=CG0
lvId=-12
replaceAlgorithm=LRU
bypass_threshold=16384
flag=0x0
enabled=0
member_0=1
memberBitmap=0x3
member_1=2
[SSDCache_2]
ssdCacheId=2
ssdCacheName=/dev/mapper/cachedev2
qdmId=2
lvId=2
groupId=0
uuid=d5eb97b9-3da1-47de-bc4e-c6753e2fcf9f
flag=0x1
enabled=0
reserved=0
[~] # cat /etc/config/raid.conf
[Global]
raidBitmap=0x6
pd_50014EE25F0BA283_Raid_Bitmap=0x2
pd_50014EE209B66FA0_Raid_Bitmap=0x2
pd_50014EE65A1E5207_Raid_Bitmap=0x2
pd_5000C5002CFB594A_Raid_Bitmap=0x4
[RAID_1]
uuid=4298ea59:c10a73f1:32f246f6:7fdcecb8
id=1
partNo=3
aggreMember=no
readonly=no
legacy=no
version2=yes
deviceName=/dev/md1
raidLevel=5
internal=1
mdBitmap=0
chunkSize=64
readAhead=0
stripeCacheSize=0
speedLimitMax=0
speedLimitMin=0
data_0=1, 50014EE25F0BA283
data_1=2, 50014EE209B66FA0
data_2=3, 50014EE65A1E5207
dataBitmap=7
[RAID_2]
uuid=df130144:30690c74:01a8f9a3:1478ff59
id=2
partNo=3
aggreMember=no
readonly=no
legacy=no
version2=yes
deviceName=/dev/md2
raidLevel=1
internal=1
mdBitmap=0
chunkSize=0
readAhead=0
stripeCacheSize=0
speedLimitMax=0
speedLimitMin=0
data_0=4, 5000C5002CFB594A
dataBitmap=1
[Global]
poolIdBitmap=0x6
member_1_Pool=1
memberIdBitmap=0x6
tpIdBitmap=0x6
member_2_Pool=2
lvIdBitmap=0x30000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006
member_1_LV_Bitmap=0x10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002
member_2_LV_Bitmap=0x20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004
[POOL_1]
poolId=1
tpId=1
poolName=vg1
flag=0x0
threshold=0
uuid=PvvrkF-fpg6-RUCZ-ZA1j-IjcZ-RPju-da10L5
overThreshold=no
member_0=1
memberBitmap=1
[MEMBER_1]
memberId=1
memberName=/dev/md1
uuid=4298ea59:c10a73f1:32f246f6:7fdcecb8
memberUuid=4oe2M3-1ITA-Y8XH-BHK7-uXY1-W1yt-5pCTg0
[TP_1]
tpId=1
tpName=/dev/mapper/vg1-tp1
uuid=RnJ31x-uLQn-YI3g-Tm2y-Qpa6-lzgb-eqSMNl
tpSize=11605745664
metaSize=33554432
[POOL_2]
poolId=2
tpId=2
poolName=vg2
flag=0x0
threshold=0
uuid=aAESo6-R24R-51By-Qh71-wzzo-c5r6-7w2RNQ
overThreshold=no
member_0=2
memberBitmap=1
[MEMBER_2]
memberId=2
memberName=/dev/md2
uuid=df130144:30690c74:01a8f9a3:1478ff59
memberUuid=wMXJsS-2a0W-tbRp-sYQ0-BgbZ-3Yrn-g6PPNs
[TP_2]
tpId=2
tpName=/dev/mapper/vg2-tp2
uuid=oKeIrT-YPMV-FZ0q-HPsF-5LVU-aJpT-bVQpEO
tpSize=1880719360
metaSize=33554432
[LV_1]
lvId=1
poolId=1
layout=0
flag=0x10000
threshold=0
lvName=/dev/mapper/vg1-lv1
uuid=7Bwegg-mp8S-BegV-gyYI-RotV-MxYr-piXKR4
completeFsResize=yes
overThreshold=no
lvSize=11593449472
member_0=1
memberBitmap=1
volName=DataVol1
[LV_544]
lvId=544
poolId=1
layout=0
flag=0x200000
threshold=0
lvName=/dev/mapper/vg1-lv544
uuid=t4ldEn-h7eV-uhSG-NOjL-gdcI-AnNe-52TsDs
completeFsResize=yes
overThreshold=no
lvSize=41943040
member_0=1
memberBitmap=1
volName=
[LV_2]
lvId=2
poolId=2
layout=0
flag=0x10000
threshold=0
lvName=/dev/mapper/vg2-lv2
uuid=SbloUk-55yp-JPR2-dNzf-SUFR-8RmF-2GFaGb
completeFsResize=yes
overThreshold=no
lvSize=1868423168
member_0=2
memberBitmap=1
volName=DataVol2
[LV_545]
lvId=545
poolId=2
layout=0
flag=0x200000
threshold=0
lvName=/dev/mapper/vg2-lv545
uuid=3cTDvr-qi1o-Ni9x-PxM5-dZb4-7JHv-cvt799
completeFsResize=yes
overThreshold=no
lvSize=19333120
member_0=2
memberBitmap=1
volName=
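The tpSize values in this dump are in 512-byte sectors; converted, they match the thin-pool sizes lvscan reported earlier (896.80 GiB for vg2-tp2, 5.40 TiB for vg1-tp1):

```shell
# vg2-tp2: 1880719360 sectors * 512 bytes => ~896.8 GiB
awk 'BEGIN { printf "tp2: %.3f GiB\n", 1880719360 * 512 / 1024^3 }'

# vg1-tp1: 11605745664 sectors * 512 bytes => ~5.40 TiB
awk 'BEGIN { printf "tp1: %.2f TiB\n", 11605745664 * 512 / 1024^4 }'
```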
drwxr-xr-x 2 admin administ 300 Sep 16 20:32 ./
drwxr-xr-x 16 admin administ 20620 Sep 16 20:32 ../
brw------- 1 admin administ 253, 0 Sep 14 23:35 cachedev1
lrwxrwxrwx 1 admin administ 7 Sep 16 15:02 cachedev2 -> ../dm-6
crw------- 1 admin administ 10, 236 Sep 14 23:38 control
brw------- 1 admin administ 253, 5 Sep 14 23:35 vg1-lv1
brw------- 1 admin administ 253, 4 Sep 14 23:35 vg1-tp1
brw------- 1 admin administ 253, 3 Sep 14 23:35 vg1-tp1-tpool
brw------- 1 admin administ 253, 2 Sep 14 23:35 vg1-tp1_tdata
brw------- 1 admin administ 253, 1 Sep 14 23:35 vg1-tp1_tmeta
lrwxrwxrwx 1 admin administ 8 Sep 16 02:57 vg2-lv2 -> ../dm-11
lrwxrwxrwx 1 admin administ 8 Sep 16 02:57 vg2-tp2 -> ../dm-10
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2-tpool -> ../dm-9
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2_tdata -> ../dm-8
lrwxrwxrwx 1 admin administ 7 Sep 16 02:57 vg2-tp2_tmeta -> ../dm-7
All right, now you may reboot. Then please report back on the result.
The reboot just happened.
The first successes of the evening are that both German teams won their Champions League matches.
The second success:
Oh man! How awesome is that???
THANK YOU!!!!
:thumb:
By the way:
I just fished an email from yesterday afternoon out of my spam folder. It was from QNAP support, whom I had asked right at the beginning.
They write, short and to the point:
Quote
Please run the following command set; LVM will then automatically assign the mapping via the device mapper:
/etc/init.d/init.lvm.sh restart
Good luck!
I don't know whether that would have led to success, but I didn't want to keep their answer from you.
In any case, I'm really glad you took the time and had the patience to help me! It's a rotten feeling to know you've backed up all your important data and then can't get at it!
So once again: thank you very, very much!
Quote from "Vossen":
"I don't know whether that would have led to success"
I can't tell you either. The script doesn't know a 'restart' parameter. Besides, I did point you to that script at the end anyway.
However, it contains nothing more than the two commands that are also executed when the NAS starts:
So the disk should already have been mounted at startup. The only difference is that the script first renames the configuration files, and all volumes are already known to the system, which means that running the two commands recreates the configuration files. Whether that also happens when the config files are missing at system startup and the volumes are not yet known to the system, I don't know.
Quote from "Vossen": "So once again: thank you very, very much!"
You're welcome!