Installation fails on pre-partitioned RAID

Post by mshopf » 2020/01/09 15:27:31

I have a server that has its OS on a RAID 1 of two SSDs. I wanted to install CentOS as the new operating system while keeping the old one available, so I shrank the original filesystem, repartitioned the SSDs, and created new RAIDs on the partitions, using v0.90 metadata for the OS arrays to be on the safe side for booting. The old OS boots and works just fine.
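(In case it matters: the shrinking itself followed the usual recipe for a filesystem on an md device, roughly the sequence below. This is only a sketch from memory and assumes an ext-family filesystem; the actual sizes ended up as shown in the fdisk output further down.)
# e2fsck -f /dev/md1
# resize2fs /dev/md1 62G
# mdadm --grow /dev/md1 --size=65011648
# fdisk /dev/sdi   (shrink partition 2, add partitions 3 and 4; same again for /dev/sdj)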

The disk layout of both SSDs is as follows:
# fdisk -l /dev/sdi

Disk /dev/sdi: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders, total 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00008b43

   Device    Boot      Start        End     Blocks  Id  System
/dev/sdi1              2048     4196351    2097152  82  Linux swap / Solaris
/dev/sdi2    *      4196352   134219775   65011712  fd  Linux raid autodetect
/dev/sdi3         134219776   264243199   65011712  fd  Linux raid autodetect
/dev/sdi4         264243200   586072367  160914584  fd  Linux raid autodetect

The first RAID partitions (sdi2+sdj2) form a RAID 1 holding the old OS:
# mdadm -D /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Fri Dec 13 17:58:06 2019
Raid Level : raid1
Array Size : 65011648 (62.00 GiB 66.57 GB)
Used Dev Size : 65011648 (62.00 GiB 66.57 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Jan 9 15:14:56 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : f2d83e21:53b00d3b:04894333:532a878b
Events : 0.1
    Number   Major   Minor   RaidDevice  State
       0       8      130        0       active sync   /dev/sdi2
       1       8      146        1       active sync   /dev/sdj2

I created new RAIDs on the new partitions (abbreviated):
# mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Thu Jan 9 15:36:39 2020
Raid Level : raid1
Array Size : 65011648 (62.00 GiB 66.57 GB)
[...]
UUID : 15d9716d:e9e47889:a6b0c1bc:0e02d2b0 (local to host zuse2)
Events : 0.11
    Number   Major   Minor   RaidDevice  State
       0       8      131        0       active sync   /dev/sdi3
       1       8      147        1       active sync   /dev/sdj3
# mdadm -D /dev/md3 (this one with v1.2 metadata, because it doesn't need to be bootable):
/dev/md3:
Version : 1.2
Creation Time : Thu Jan 9 15:37:11 2020
Raid Level : raid1
Array Size : 160783424 (153.34 GiB 164.64 GB)
[...]
Name : zuse2:sys (local to host zuse2)
UUID : 46a71480:7534c6bf:943a2006:983ac7bd
Events : 1
    Number   Major   Minor   RaidDevice  State
       0       8      132        0       active sync   /dev/sdi4
       1       8      148        1       active sync   /dev/sdj4
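For reference, the new arrays were created with commands along these lines (reconstructed from memory, so the exact options may have differed slightly):
# mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdi3 /dev/sdj3
# mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdi4 /dev/sdj4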


I tried to install CentOS on sdi3+sdj3, both with and without pre-creating a RAID there, with sdi1+sdj1 reserved as swap (not RAID!) and sdi4+sdj4 to be assembled later as an OS-independent data RAID 1.
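For the attempts without a pre-created RAID I removed the array again beforehand, roughly like this (a sketch; --zero-superblock only on the partitions meant for the new OS, of course):
# mdadm --stop /dev/md2
# mdadm --zero-superblock /dev/sdi3 /dev/sdj3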

Installation always fails after booting into X: all I see is a black screen with a mouse pointer. When booting without the SSDs attached I got into the graphical user interface, so graphics itself is working. Installing with the text interface doesn't do any good either.
When switching back to VT1, the text console shows the following error:
ERROR:pygi-argument.c:1006:_pygi_argument_to_object: code should not be reached
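(For anyone who wants to look at the same data: during the installation the logs can be read from a shell on another console, e.g. Ctrl+Alt+F2, where anaconda keeps them under /tmp.)
# less /tmp/anaconda.log
# less /tmp/storage.log
# less /tmp/program.log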

anaconda.log contains the following traceback (all log files attached):
[...]
File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 264, in handle_device
device = helper_class(self, info).run()
File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/partition.py", line 112, in run
self._devicetree._add_device(device)
File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
return m(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/blivet/devicetree.py", line 158, in _add_device
raise ValueError("device is already in tree")
ValueError: device is already in tree

I checked the UUIDs of the partitions, and indeed the member partitions of each (potential) RAID have identical UUIDs - but that seems to be normal (according to what I read):

# blkid /dev/sdi? /dev/sdj?
/dev/sdi1: UUID="42e567fd-92e2-4245-b143-6f6fead66003" TYPE="swap"
/dev/sdi2: UUID="f2d83e21-53b0-0d3b-0489-4333532a878b" TYPE="linux_raid_member"
/dev/sdi3: UUID="15d9716d-e9e4-7889-a6b0-c1bc0e02d2b0" TYPE="linux_raid_member"
/dev/sdi4: UUID="46a71480-7534-c6bf-943a-2006983ac7bd" UUID_SUB="4264cf23-e3f9-99b9-7978-478f5a7c18cf" LABEL="zuse2:sys" TYPE="linux_raid_member"
/dev/sdj1: UUID="823a2345-5b62-4f28-8618-e1184e1c4032" TYPE="swap"
/dev/sdj2: UUID="f2d83e21-53b0-0d3b-0489-4333532a878b" TYPE="linux_raid_member"
/dev/sdj3: UUID="15d9716d-e9e4-7889-a6b0-c1bc0e02d2b0" TYPE="linux_raid_member"
/dev/sdj4: UUID="46a71480-7534-c6bf-943a-2006983ac7bd" UUID_SUB="99872eb9-362f-38bb-6d06-71c0b9d20480" LABEL="zuse2:sys" TYPE="linux_raid_member"
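For cross-checking, the UUIDs can also be read from the md superblocks directly (I can post the full --examine output if it helps):
# mdadm --examine /dev/sdi2 /dev/sdj2 | grep -i uuid
# mdadm --examine /dev/sdi3 /dev/sdj3 | grep -i uuid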


What is happening here, and more importantly, how do I install CentOS on this system? What additional information would be needed to debug the problem?
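In case it helps, I can also post the output of the following commands, taken from a shell in the installer environment (the last one is an attempt to trigger the same blivet device scan outside the GUI; blivet.Blivet().reset() populates the device tree):
# lsblk -o NAME,SIZE,TYPE,FSTYPE,UUID /dev/sdi /dev/sdj
# cat /proc/mdstat
# mdadm --detail --scan
# python3 -c 'import blivet; b = blivet.Blivet(); b.reset()'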
Attachments: anaconda.log


Re: Installation fails on pre-partitioned RAID

Post by mshopf » 2020/01/09 22:44:11

Links to the other log files (X, anaconda, dbus, dnf.librepo.log, ifcfg.log, packaging.log, program.log, storage.log, syslog): http://schorsch.efi.fh-nuernberg.de/pri ... pf/centos/
