I have given up.
Posts by RMaTsm1f
-
-
Oh, that's good news. So that part is clean.
At this point I'd already like to thank you for your support so far; I wouldn't have made it this far on my own.
Now I'm wondering how to move forward with my volume. What can I do to mount it?
In the GUI the volume is shown as unmounted, but under Actions I can no longer select "Decrypt" like before.
-
Hello Mike,
I'm back on track now. The RAID is healthy again.
Code
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sda3[3] sde3[5] sdd3[4] sdc3[2] sdb3[1]
      7774238464 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
md256 : active raid1 sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sda4[0] sdc4[4] sdd4[3] sde4[2] sdb4[1]
      458880 blocks [5/5] [UUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sda1[0] sdc1[4] sdd1[3] sde1[2] sdb1[1]
      530048 blocks [5/5] [UUUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
Code
mdadm -E /dev/sd[abcdef]3
/dev/sda3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 12bc523a:cf06c4ff:0a79082a:b20410fe
  Internal Bitmap : -16 sectors from superblock
  Update Time : Fri Nov 4 08:07:56 2016
  Checksum : 2f8718c3 - correct
  Events : 1490296
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 0
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 7de88da3:f0ec886c:06e13258:830b00b0
  Internal Bitmap : -16 sectors from superblock
  Update Time : Fri Nov 4 08:07:56 2016
  Checksum : e5a19a19 - correct
  Events : 1490296
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 1
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 5eee6d8c:850d18e5:edb2cc95:deffbc9d
  Internal Bitmap : -16 sectors from superblock
  Update Time : Fri Nov 4 08:07:56 2016
  Checksum : 726786f3 - correct
  Events : 1490296
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 2
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 824dca2d:851b64f0:0ad88b1a:b10af9be
  Internal Bitmap : -16 sectors from superblock
  Update Time : Fri Nov 4 08:07:56 2016
  Checksum : c50b23e8 - correct
  Events : 1490296
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 3
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 1bfe81c6:48ccadbe:f4af3f24:f4ee0512
  Internal Bitmap : -16 sectors from superblock
  Update Time : Fri Nov 4 08:07:56 2016
  Bad Block Log : 512 entries available at offset -8 sectors
  Checksum : 88d5416b - correct
  Events : 1490296
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 4
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdf3.
One more question: what does mdadm: No md superblock detected on /dev/sdf3. mean?
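For what it's worth, the wildcard sd[abcdef]3 expands to six partition names, while only five of them are array members; judging by the blkid output later in this thread, sdf carries the ext2 QTS boot partitions rather than RAID data, so mdadm finds no md superblock there. A small stand-in demonstration of the glob expansion (assumption: plain POSIX shell globbing; no real block devices are touched):

```shell
# Create dummy files with the same names the glob would match on the NAS:
cd "$(mktemp -d)"
touch sda3 sdb3 sdc3 sdd3 sde3 sdf3
# The bracket pattern matches all six, so mdadm is also pointed at sdf3
# even though that partition never belonged to the array:
expansion=$(echo sd[abcdef]3)
echo "$expansion"   # prints: sda3 sdb3 sdc3 sdd3 sde3 sdf3
```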
-
Ich habe jetzt verstanden was Du meintest als ich bei der Webrecherche ein Bild von cat /proc/mdstat beim rebuild gesehen habe. Also ist klar, das der rebuild wegen badbocks abbricht. Ich bin jetzt dabei das defekte Device mit ddrescue zu sichern und dann auf die intakte neu Platte zu übertragen. Wenn das geklappt hat melde ich mich wieder.
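For reference, this is roughly the ddrescue invocation I have in mind (assumptions: GNU ddrescue; /dev/sdd is the failing member and /dev/sdX is a placeholder for the new disk). The map file records which areas were read, so interrupted runs can resume. The commands are only printed here; devices must be double-checked before running anything:

```shell
# First pass skips bad areas quickly (-n), second pass retries the
# remaining bad sectors up to 3 times (-r3); -f is needed to write to
# a block device. Printed only, nothing is executed:
cmd1="ddrescue -f -n /dev/sdd /dev/sdX /tmp/sdd.map"
cmd2="ddrescue -f -r3 /dev/sdd /dev/sdX /tmp/sdd.map"
printf '%s\n%s\n' "$cmd1" "$cmd2"
```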
-
Quite honestly, I can't tell anything from cat /proc/mdstat, let alone see any progress.
Code
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[5] sda3[3] sdc3[4] sdf3[2] sdb3[1]
      7774238464 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      bitmap: 10/15 pages [40KB], 65536KB chunk
md256 : active raid1 sdf2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdf4[4] sdc4[3] sdd4[2] sdb4[1] sda4[0]
      458880 blocks [5/5] [UUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[4] sda1[0] sdc1[3] sdd1[2] sdb1[1]
      530048 blocks [5/5] [UUUUU]
      bitmap: 1/65 pages [4KB], 4KB chunk
unused devices: <none>
I'd say the rebuild has been stuck since August. Or is there somewhere else to find that information?
Now the other question: can I kick off the rebuild again?
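For reference, a running rebuild shows up in /proc/mdstat as an extra progress line under the array; in the output above there is none, and [5/4] [UUUU_] only says the array is degraded. A small sketch of reading those markers from a saved copy of the output (assumption: standard grep; on the NAS you would read /proc/mdstat directly):

```shell
# md1 lines from the /proc/mdstat output above, saved as text:
mdstat='md1 : active raid5 sdd3[5] sda3[3] sdc3[4] sdf3[2] sdb3[1]
      7774238464 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]'

# [5/4] means 5 members expected, only 4 up; "_" marks the missing slot:
state=$(echo "$mdstat" | grep -o '\[[0-9]*/[0-9]*\]')
echo "$state"   # prints: [5/4]

# An active rebuild would add a line like
#   [>....................]  recovery =  3.1% (.../1943559616) finish=...min
# so the absence of any "recovery" line means nothing is running:
echo "$mdstat" | grep -c 'recovery' || true
```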
-
Can I see anywhere whether the rebuild is running? And if it isn't, can I trigger it again?
Is there any way to find out which physical disk sdd is?
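One common way to map a device letter to a physical drive is via model and serial number, which can then be compared with the label on the drive tray (assumptions: udev populates /dev/disk/by-id on the QTS firmware as on stock Linux, and smartmontools, if installed, would work too):

```shell
# by-id names embed vendor/model/serial and symlink to sdX, so the entry
# ending in "sdd" identifies the physical drive:
id_out=$(ls -l /dev/disk/by-id/ 2>/dev/null | grep 'sdd$' || echo "no by-id entry for sdd here")
echo "$id_out"
# Alternative, if smartmontools is available on the NAS:
#   smartctl -i /dev/sdd | grep -i serial
```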
-
But I had already done that, hadn't I? That's where the errors from post 23 came from. I have one more addition.
Code
mdadm -D /dev/md1
/dev/md1:
  Version : 1.0
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 1943559616 (1853.52 GiB 1990.21 GB)
  Raid Devices : 5
  Total Devices : 5
  Persistence : Superblock is persistent
  Intent Bitmap : Internal
  Update Time : Tue Oct 25 21:08:57 2016
  State : active, degraded
  Active Devices : 4
  Working Devices : 5
  Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric
  Chunk Size : 64K
  Name : 1
  UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Events : 1489540

  Number   Major   Minor   RaidDevice   State
     3       8        3        0        active sync        /dev/sda3
     1       8       19        1        active sync        /dev/sdb3
     2       8       83        2        active sync        /dev/sdf3
     4       8       35        3        active sync        /dev/sdc3
     5       8       51        4        spare rebuilding   /dev/sdd3
Why does it say spare rebuilding here? Especially when the update date for sdd3 is August 28?
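One plausible reading (my assumption, based on how md tracks superblock updates): md bumps an event counter on every array change, and a member whose counter lags far behind is treated as stale and re-added as a rebuilding spare; the August date is simply the last time that member's superblock was written. Using the counters from the mdadm -E output in this thread:

```shell
# Event counters copied from the mdadm -E output in this thread; the
# gap shows how far sdd3 has fallen behind the active members:
events_active=1489538   # sda3 / sdb3 / sdc3 / sdf3
events_sdd3=44682       # sdd3, superblock last updated Sun Aug 28
gap=$(( events_active - events_sdd3 ))
echo "$gap"   # prints: 1444856
```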
-
You're right, it is somehow odd. This is the output of:
Code
mdadm -E /dev/sd[abcdef]3
/dev/sda3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 12bc523a:cf06c4ff:0a79082a:b20410fe
  Internal Bitmap : -16 sectors from superblock
  Update Time : Tue Oct 25 20:18:04 2016
  Checksum : 2f7a83dd - correct
  Events : 1489538
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 0
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 7de88da3:f0ec886c:06e13258:830b00b0
  Internal Bitmap : -16 sectors from superblock
  Update Time : Tue Oct 25 20:18:04 2016
  Checksum : e5950533 - correct
  Events : 1489538
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 1
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 824dca2d:851b64f0:0ad88b1a:b10af9be
  Internal Bitmap : -16 sectors from superblock
  Update Time : Tue Oct 25 20:18:04 2016
  Checksum : c4fe8f02 - correct
  Events : 1489538
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 3
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 1bfe81c6:48ccadbe:f4af3f24:f4ee0512
  Internal Bitmap : -16 sectors from superblock
  Update Time : Sun Aug 28 22:55:30 2016
  Bad Block Log : 512 entries available at offset -8 sectors
  Checksum : 88664e73 - correct
  Events : 44682
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 4
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sde3.
/dev/sdf3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 5eee6d8c:850d18e5:edb2cc95:deffbc9d
  Internal Bitmap : -16 sectors from superblock
  Update Time : Tue Oct 25 20:18:04 2016
  Checksum : 725af20d - correct
  Events : 1489538
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 2
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
I then ran storage_util --sys_startup and storage_util --sys_startup_p2, and now md1 is back (read-only).
Code
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[5] sda3[3] sdc3[4] sdf3[2] sdb3[1]
      7774238464 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      bitmap: 10/15 pages [40KB], 65536KB chunk
md256 : active raid1 sdf2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdf4[4] sdc4[3] sdd4[2] sdb4[1] sda4[0]
      458880 blocks [5/5] [UUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[4] sda1[0] sdc1[3] sdd1[2] sdb1[1]
      530048 blocks [5/5] [UUUUU]
      bitmap: 1/65 pages [4KB], 4KB chunk
unused devices: <none>
However, unlike before, the GUI now shows disk 3 as abnormal.
-
But the outputs say otherwise.
Code
mdadm -E /dev/sda3
/dev/sda3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 12bc523a:cf06c4ff:0a79082a:b20410fe
  Internal Bitmap : -16 sectors from superblock
  Update Time : Wed Oct 19 18:59:41 2016
  Checksum : 2f728875 - correct
  Events : 1489529
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 0
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
[~] # mdadm -E /dev/sdb3
/dev/sdb3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 7de88da3:f0ec886c:06e13258:830b00b0
  Internal Bitmap : -16 sectors from superblock
  Update Time : Wed Oct 19 18:59:41 2016
  Checksum : e58d09cb - correct
  Events : 1489529
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 1
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
[~] # mdadm -E /dev/sdc3
/dev/sdc3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 824dca2d:851b64f0:0ad88b1a:b10af9be
  Internal Bitmap : -16 sectors from superblock
  Update Time : Wed Oct 19 18:59:41 2016
  Checksum : c4f6939a - correct
  Events : 1489529
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 3
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
[~] # mdadm -E /dev/sdd3
/dev/sdd3:
  Magic : a92b4efc
  Version : 1.0
  Feature Map : 0x1
  Array UUID : f2b19cf3:41850704:1635a3bf:59cc2f6b
  Name : 1
  Creation Time : Sun Aug 31 13:59:27 2014
  Raid Level : raid5
  Raid Devices : 5
  Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
  Array Size : 7774238464 (7414.09 GiB 7960.82 GB)
  Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
  Super Offset : 3887119504 sectors
  Unused Space : before=0 sectors, after=256 sectors
  State : clean
  Device UUID : 1bfe81c6:48ccadbe:f4af3f24:f4ee0512
  Internal Bitmap : -16 sectors from superblock
  Update Time : Sun Aug 28 22:55:30 2016
  Bad Block Log : 512 entries available at offset -8 sectors
  Checksum : 88664e73 - correct
  Events : 44682
  Layout : left-symmetric
  Chunk Size : 64K
  Device Role : Active device 4
  Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
[~] # mdadm -E /dev/sde3
mdadm: No md superblock detected on /dev/sde3.
In dmesg, however, I find
How do I resolve this without losing the data?
-
Hello Revan,
as mentioned a few posts earlier, QNAP support has unfortunately stopped helping. So I'm relying on your support.
Best regards
-
So, I've now installed the new disk.
md1 is no longer being assembled.
Code
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md256 : active raid1 sdf2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdf4[4] sdc4[3] sdd4[2] sdb4[1] sda4[0]
      458880 blocks [5/5] [UUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[4] sda1[0] sdc1[3] sdd1[2] sdb1[1]
      530048 blocks [5/5] [UUUUU]
      bitmap: 1/65 pages [4KB], 4KB chunk
unused devices: <none>
How can I start it, and does that make sense?
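Whether it is advisable here I can't say, but the generic way to bring up a stopped md array is mdadm --assemble. A sketch that only prints the command (assumptions: the member list matches the earlier mdadm -E output, and device letters may have shifted after swapping the disk, so they must be verified with mdadm -E first):

```shell
# Build the assemble command from the five data partitions; it is only
# echoed here, so nothing is touched until you run it yourself:
members="/dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3"
cmd="mdadm --assemble --run /dev/md1 $members"
echo "$cmd"
```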
-
I don't know whether that's the cause. But yes, I haven't been able to replace it yet; shipping is dragging on.
-
Well, in the GUI I get:
1. [Pool 1] Started rebuilding with RAID Group 1.
and then, about 5 seconds later:
2. [Pool 1] Rebuilding skipped with RAID Group 1.
This repeats every 5 seconds, and in dmesg I now find these entries every 5 seconds as well:
Code
[ 1372.334085] RAID conf printout:
[ 1372.334092]  --- level:5 rd:5 wd:4
[ 1372.334097]  disk 0, o:1, dev:sda3
[ 1372.334101]  disk 1, o:1, dev:sdb3
[ 1372.334105]  disk 2, o:1, dev:sdc3
[ 1372.334109]  disk 3, o:1, dev:sdd3
[ 1372.334112]  disk 4, o:1, dev:sde3
[ 1372.334115] RAID conf printout:
[ 1372.334118]  --- level:5 rd:5 wd:4
[ 1372.334122]  disk 0, o:1, dev:sda3
[ 1372.334125]  disk 1, o:1, dev:sdb3
[ 1372.334129]  disk 2, o:1, dev:sdc3
[ 1372.334132]  disk 3, o:1, dev:sdd3
[ 1372.334136]  disk 4, o:1, dev:sde3
[ 1372.334222] md: recovery of RAID array md1
[ 1372.334230] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 1372.334236] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 1372.334243] md: Recovering started: md1
[ 1372.334262] md: using 128k window, over a total of 1943559616k.
[ 1372.334268] md: resuming recovery of md1 from checkpoint.
[ 1372.334471] md: md1: recovery done.
[ 1372.336194] md: recovery skipped: md1
-
Hi Mike,
no, I haven't yet. I didn't want to do anything without checking with you first; otherwise I might break something. I'll reboot now.
-
Code
cat /etc/blkid.tab
<device DEVNO="0x0801" TIME="1476824512" UUID="b250bd98-aa69-f25d-1327-fe8d012da042" TYPE="mdraid">/dev/sda1</device>
<device DEVNO="0x0802" TIME="1476824512" UUID="63e5d842-530d-43ae-8f95-604e5e9856a7" TYPE="swap">/dev/sda2</device>
<device DEVNO="0x0803" TIME="1476824512" UUID="UAhDX0-WACj-6MZY-K36b-07jE-vgBU-sAodvT" TYPE="lvm2pv">/dev/sda3</device>
<device DEVNO="0x0804" TIME="1476824512" UUID="4fb9b4ca-355f-a091-1f2a-c662c250d6ea" TYPE="mdraid">/dev/sda4</device>
<device DEVNO="0x0805" TIME="1476824512" UUID="9062b6a2-6b22-4ca9-9168-c88e2243e7b4" TYPE="swap">/dev/sda5</device>
<device DEVNO="0x0811" TIME="1476824513" UUID="b250bd98-aa69-f25d-1327-fe8d012da042" TYPE="mdraid">/dev/sdb1</device>
<device DEVNO="0x0812" TIME="1476824513" UUID="63e5d842-530d-43ae-8f95-604e5e9856a7" TYPE="swap">/dev/sdb2</device>
<device DEVNO="0x0814" TIME="1476824513" UUID="4fb9b4ca-355f-a091-1f2a-c662c250d6ea" TYPE="mdraid">/dev/sdb4</device>
<device DEVNO="0x0815" TIME="1476824513" UUID="9062b6a2-6b22-4ca9-9168-c88e2243e7b4" TYPE="swap">/dev/sdb5</device>
<device DEVNO="0x0821" TIME="1476824513" UUID="b250bd98-aa69-f25d-1327-fe8d012da042" TYPE="mdraid">/dev/sdc1</device>
<device DEVNO="0x0822" TIME="1476824513" UUID="9781f787-06cf-43d4-97ae-fd3462f8eac8" TYPE="swap">/dev/sdc2</device>
<device DEVNO="0x0824" TIME="1476824513" UUID="4fb9b4ca-355f-a091-1f2a-c662c250d6ea" TYPE="mdraid">/dev/sdc4</device>
<device DEVNO="0x0825" TIME="1476824513" UUID="9062b6a2-6b22-4ca9-9168-c88e2243e7b4" TYPE="swap">/dev/sdc5</device>
<device DEVNO="0x0831" TIME="1476824514" UUID="b250bd98-aa69-f25d-1327-fe8d012da042" TYPE="mdraid">/dev/sdd1</device>
<device DEVNO="0x0834" TIME="1476824514" UUID="4fb9b4ca-355f-a091-1f2a-c662c250d6ea" TYPE="mdraid">/dev/sdd4</device>
<device DEVNO="0x0835" TIME="1476824514" UUID="9062b6a2-6b22-4ca9-9168-c88e2243e7b4" TYPE="swap">/dev/sdd5</device>
<device DEVNO="0x0841" TIME="1476824514" UUID="b250bd98-aa69-f25d-1327-fe8d012da042" TYPE="mdraid">/dev/sde1</device>
<device DEVNO="0x0844" TIME="1476824514" UUID="4fb9b4ca-355f-a091-1f2a-c662c250d6ea" TYPE="mdraid">/dev/sde4</device>
<device DEVNO="0x0845" TIME="1476824514" UUID="6df24296-9243-4806-9747-b97461915853" TYPE="swap">/dev/sde5</device>
<device DEVNO="0x0851" TIME="1476824514" UUID="90a34014-e609-4965-b6b4-95f277bfdf23" TYPE="ext2">/dev/sdf1</device>
<device DEVNO="0x0852" TIME="1476824514" LABEL="QTS_BOOT_PART2" UUID="2d2b8fdc-be8f-4e8f-81dc-9b57447bbaa1" TYPE="ext2">/dev/sdf2</device>
<device DEVNO="0x0853" TIME="1476824514" LABEL="QTS_BOOT_PART3" UUID="9342cb19-29fb-4b59-b7b7-e89336f1edf5" TYPE="ext2">/dev/sdf3</device>
<device DEVNO="0x0855" TIME="1476824514" UUID="ba5d6eee-24a9-43bc-b861-8f4ebfbcd2c7" TYPE="ext2">/dev/sdf5</device>
<device DEVNO="0x0856" TIME="1476824514" UUID="c5a5224c-2a7e-46b4-b98c-e87b07fd65f9" TYPE="ext2">/dev/sdf6</device>
<device DEVNO="0x0909" TIME="1476824514" PRI="10" UUID="cf7add2d-b2fc-4ced-aa1f-69305448112d" TYPE="ext3">/dev/md9</device>
<device DEVNO="0x090d" TIME="1476824514" PRI="10" UUID="6a1541fa-437f-443a-b66d-19f29b5c2701" SEC_TYPE="ext2" TYPE="ext3">/dev/md13</device>
<device DEVNO="0x100900" TIME="1476824514" PRI="10" UUID="63e5d842-530d-43ae-8f95-604e5e9856a7" TYPE="swap">/dev/md256</device>
<device DEVNO="0x0901" TIME="1476824514" PRI="10" UUID="UAhDX0-WACj-6MZY-K36b-07jE-vgBU-sAodvT" TYPE="lvm2pv">/dev/md1</device>
What does this mean?
-
-
This is the current status:
Code
cat /etc/config/qlvm.conf
[Global]
poolIdBitmap=0x2
member_1_Pool=1
memberIdBitmap=0x2
tpIdBitmap=0x2
lvIdBitmap=0x10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002
member_1_LV_Bitmap=0x10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002
[POOL_1]
poolId=1
tpId=1
poolName=vg1
flag=0x0
threshold=0
uuid=o2cNmm-HPTE-aBfi-GNKz-Ub8B-3fxf-bbCW3z
overThreshold=no
member_0=1
memberBitmap=1
[MEMBER_1]
memberId=1
memberName=/dev/md1
uuid=f2b19cf3:41850704:1635a3bf:59cc2f6b
memberUuid=UAhDX0-WACj-6MZY-K36b-07jE-vgBU-sAodvT
[TP_1]
tpId=1
tpName=/dev/mapper/vg1-tp1
uuid=oIRzJX-mOPO-XnBX-v60g-zZx4-3qz1-vFs1eB
tpSize=15359434752
metaSize=33554432
metaSpareSize=0
[LV_1]
lvId=1
poolId=1
layout=0
flag=0x20000
threshold=0
lvName=/dev/mapper/vg1-lv1
uuid=K2mTFP-2y4j-Zrnt-5aSA-20b9-flwg-4TYe5L
completeFsResize=yes
overThreshold=no
lvSize=14680064000
member_0=1
memberBitmap=1
volName=
[LV_544]
lvId=544
poolId=1
layout=0
flag=0x200000
threshold=0
lvName=/dev/mapper/vg1-lv544
uuid=qhQpEq-iitN-jDyu-3TEZ-YRxz-ivkM-jBsj2d
completeFsResize=yes
overThreshold=no
lvSize=155484160
member_0=1
memberBitmap=1
volName=
Additionally, the output of /etc/config/ssdcache.conf:
Code
cat /etc/config/ssdcache.conf
[Global]
ssdCacheBitmap=0x2
cgBitmap=0x1
[SSDCache_1]
ssdCacheId=1
ssdCacheName=/dev/mapper/cachedev1
qdmId=1
lvId=1
groupId=0
uuid=0
flag=0x1
enabled=0
reserved=0
[CG_0]
groupId=0
groupName=CG0
lvId=-12
mode=2
replaceAlgorithm=LRU
bypass_threshold=1024
flag=0x0
enabled=0
member_0=1
memberBitmap=1
-
As follows:
Code
ll /dev/mapper/
drwxr-xr-x  2 admin administ     160 Oct 17 00:07 ./
drwxr-xr-x 14 admin administ   19.7k Oct 16 17:50 ../
crw-------  1 admin administ 10, 236 Oct 11 21:39 control
brw-------  1 admin administ 253,  0 Oct 17 00:07 vg1-lv1
brw-------  1 admin administ 253,  4 Oct 11 19:40 vg1-tp1
brw-------  1 admin administ 253,  3 Oct 11 19:40 vg1-tp1-tpool
brw-------  1 admin administ 253,  2 Oct 11 19:40 vg1-tp1_tdata
brw-------  1 admin administ 253,  1 Oct 11 19:40 vg1-tp1_tmeta
-
Thanks again!
vgmknodes -vvv vg1 created the mapper device.
But I have the feeling quite a bit is still missing. Where is that cachedev device, actually, or don't I need it?
I ran the init_lvm script once more. In the GUI, under the storage pool, I now see a volume without a name (-) and the note "unmounted".
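For the remaining pieces, the plain-LVM view may help (assumptions: standard LVM2 tools as on stock Linux; QTS normally runs these steps itself). Given enabled=0 in the ssdcache.conf above, the cachedev1 mapping is plausibly not needed here. The commands are only echoed, not executed:

```shell
# Inspect and activate the thin volume by hand: pvs should show /dev/md1
# as the PV, lvs the thin pool tp1 and volume lv1, and vgchange -ay
# would create the missing /dev/mapper nodes. Printed only:
lvm_cmds=$(for cmd in "pvs" "vgs" "lvs -a vg1" "vgchange -ay vg1"; do
  echo "$cmd"
done)
echo "$lvm_cmds"
```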
-
Of course I tried that right away.
Code
/etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0
Detect disk(8, 16)...
dev_count ++ = 1
Detect disk(8, 32)...
dev_count ++ = 2
Detect disk(8, 48)...
dev_count ++ = 3
Detect disk(8, 64)...
dev_count ++ = 4
Detect disk(8, 80)...
ignore non-root enclosure disk(8, 80).
Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 32)...
Detect disk(8, 48)...
Detect disk(8, 64)...
Detect disk(8, 80)...
ignore non-root enclosure disk(8, 80).
sys_startup_p2:got called count = -1
Done
But still no mapper device:
Code
ll /dev/mapper/
drwxr-xr-x  2 admin administ     140 Oct 16 17:50 ./
drwxr-xr-x 14 admin administ   19.7k Oct 16 17:50 ../
crw-------  1 admin administ 10, 236 Oct 11 21:39 control
brw-------  1 admin administ 253,  4 Oct 11 19:40 vg1-tp1
brw-------  1 admin administ 253,  3 Oct 11 19:40 vg1-tp1-tpool
brw-------  1 admin administ 253,  2 Oct 11 19:40 vg1-tp1_tdata
brw-------  1 admin administ 253,  1 Oct 11 19:40 vg1-tp1_tmeta