Hi there,
my disks look exactly the same in fdisk; I suspect that's because they contain EFI GPT partitions (just a guess).
I don't have any experience with that yet, though. :oops:
[/dev] # fdisk -l

Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn't contain a valid partition table

Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sdd: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sde: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sdf: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdx1               1          17        2160   83  Linux
/dev/sdx2              18        1910      242304   83  Linux
/dev/sdx3            1911        3803      242304   83  Linux
/dev/sdx4            3804        3936       17024    5  Extended
/dev/sdx5            3804        3868        8304   83  Linux
/dev/sdx6            3869        3936        8688   83  Linux
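As background: classic fdisk cannot read GPT partition tables, so on a GPT disk it only shows the protective MBR entry (type `ee  EFI GPT`) and may warn about "invalid" tables. A tool that understands GPT shows the real layout. A minimal sketch (device name /dev/sda assumed, run as root on your own box):

```shell
# fdisk only sees the GPT protective MBR; parted reads the actual GPT
# and lists the real partitions on the disk:
parted /dev/sda print
```

So the fdisk output above is not necessarily a sign of a problem; it is just fdisk's limited view of a GPT disk.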
Oh, and my RAID config is similar, but I don't have a solution for your problem yet.
[/dev] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
      11714790144 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md6 : active raid1 sdf2[2](S) sde2[3](S) sdd2[4](S) sdc2[5](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[0] sda4[5] sdf4[4] sde4[3] sdd4[2] sdc4[1]
      458880 blocks [6/6] [UUUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdd1[0] sda1[5] sdf1[4] sde1[3] sdc1[2] sdb1[1]
      530048 blocks [6/6] [UUUUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk
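For reading that output: `[6/6] [UUUUUU]` means all six members are up; a failed member shows up as `_` in the status field. A small self-contained sketch of a degraded-array check (a here-doc stands in for the real /proc/mdstat so the snippet runs anywhere):

```shell
# Write a sample mdstat excerpt to a temp file (stand-in for /proc/mdstat):
cat <<'EOF' > /tmp/mdstat.sample
md0 : active raid6 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
      11714790144 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
EOF
# A '_' inside the [UU...] field marks a failed member; count such lines.
# (|| true keeps the pipeline alive when grep finds nothing.)
degraded=$(grep -c '_' /tmp/mdstat.sample || true)
echo "lines with failed members: $degraded"   # prints 0 for this healthy sample
```

On the real system you would run the same grep against /proc/mdstat itself.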
Please have a look at your /etc/raidtab; mine looks like this:
raiddev /dev/md0
raid-level 6
nr-raid-disks 6
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
device /dev/sde3
raid-disk 4
device /dev/sdf3
raid-disk 5
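Side note: raidtab is the old raidtools format; systems managed by mdadm keep the equivalent information in mdadm.conf instead. A rough sketch of what the array above would look like there (assumption on my part, not taken from your box; the authoritative line comes from running `mdadm --detail --scan` as root):

```
# /etc/mdadm.conf fragment (sketch, devices as in the raidtab above)
DEVICE /dev/sd[a-f]3
ARRAY /dev/md0 level=raid6 num-devices=6 devices=/dev/sda3,/dev/sdb3,/dev/sdc3,/dev/sdd3,/dev/sde3,/dev/sdf3
```

So if your firmware regenerates one of the two files, comparing both against the actual /proc/mdstat can show where they disagree.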
Compare it with yours! :thumb:
Maybe that already helps.
Regards
pi-bear