Fail to boot after RAID configuration

GioMBG
Posts: 59
Joined: 2012/02/27 00:28:14
Location: Conthey Suisse
Contact:

Fail to boot after RAID configuration

Post by GioMBG » 2023/05/08 13:07:58

Hi all,
I hope this finds you all well.
I have a problem setting up RAID 6 on my new CentOS Stream 8 machine...

Code: Select all

mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 15625747456K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.

[root@stream ~]# mkfs.ext4 /dev/md3
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 31251494912 4k blocks and 1953218560 inodes
Filesystem UUID: 3f13af13-60ac-4d65-888a-a194e0a1fdbb
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 
	2560000000, 3855122432, 5804752896, 12800000000, 17414258688, 
	26985857024

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

mkfs.ext4 /dev/md3
mount /dev/md3 /mnt

vi /etc/fstab
/dev/md3   /mnt   ext4   defaults   0   0
sudo mount -a
(no warnings reported)
Afterwards, on the next reboot, the machine appears as powered on in the Hetzner control panel, but it was impossible to log in...

My doubt is that I have also updated the system... maybe the installonly_limit=3 setting is causing a problem on the SSDs?

Code: Select all

cat /etc/yum.conf
installonly_limit=3
Any suggestions?
Thanks,
Giò
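
For context, the follow-up steps commonly recommended after creating an mdadm array on CentOS/RHEL, so that it is assembled under the same name at every boot, look roughly like this (a sketch only, using the device name from the post above):

Code: Select all

mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it is assembled consistently at boot
dracut -f                                  # rebuild the initramfs for the running kernel
blkid /dev/md3                             # print the filesystem UUID, which is safer to use in fstab than /dev/md3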

TrevorH
Site Admin
Posts: 33216
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Fail to boot after RAID configuration

Post by TrevorH » 2023/05/08 13:37:52

Any suggestions?
Yeah, don't add it to fstab until after you've rebooted and seen if it survives!

I suspect that the problem is that the RAID array is still initializing, and it will be for several hours or days given its size; 15 TB drives take a while to sync...
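
For example, a quick way to check whether the array is still resyncing (device name as shown earlier in the thread) is:

Code: Select all

cat /proc/mdstat          # shows per-array resync progress and an estimated finish time
mdadm --detail /dev/md3   # reports the array state, including any ongoing resync

While the initial resync is running the array is usable, just slower.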
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

GioMBG
Posts: 59
Joined: 2012/02/27 00:28:14
Location: Conthey Suisse
Contact:

Re: Fail to boot after RAID configuration

Post by GioMBG » 2023/05/08 16:18:51

Hi Trevor,
big respect to you, and thanks for replying.
Yeah, don't add it to fstab until after you've rebooted and seen if it survives!
I did as you suggest (of course): I rebooted the machine and, once everything looked "normal", I added it to fstab

Code: Select all

/dev/md3   /mnt   ext4   defaults   0   0
but after that it would not let me log in again: the machine is marked as on in the Hetzner Robot, yet it is impossible to log in.
Question:
ORIGINAL fstab

Code: Select all

[root@stream etc]# cat /etc/fstab
proc /proc proc defaults 0 0
# /dev/md/0
UUID=a5a160c6-f7fb-494b-8bc5-8700080dfcfd none swap sw 0 0
# /dev/md/1
UUID=3de98a26-4fde-4cf7-a503-fbfc4f332c09 /boot ext3 defaults 0 0
# /dev/md/2
UUID=1a33d33e-c742-4f36-a05f-36fc754d8791 / ext4 defaults 0 0
EDITED fstab

Code: Select all

[root@stream etc]# cat /etc/fstab
proc /proc proc defaults 0 0
# /dev/md/0
UUID=a5a160c6-f7fb-494b-8bc5-8700080dfcfd none swap sw 0 0
# /dev/md/1
UUID=3de98a26-4fde-4cf7-a503-fbfc4f332c09 /boot ext3 defaults 0 0
# /dev/md/2
UUID=1a33d33e-c742-4f36-a05f-36fc754d8791 / ext4 defaults 0 0
/dev/md3   /mnt   ext4   defaults   0   0
- sudo mount -a does NOT generate any warnings, so it should be correct, or at least re-bootable, no?
- I can write to /mnt
- lsblk

Code: Select all

[root@stream etc]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdb           8:16   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdc           8:32   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdd           8:48   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sde           8:64   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdf           8:80   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdg           8:96   0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdh           8:112  0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdi           8:128  0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
sdj           8:144  0  14.6T  0 disk  
└─md3         9:3    0 116.4T  0 raid6 /mnt
nvme1n1     259:0    0 894.3G  0 disk  
├─nvme1n1p1 259:1    0     4G  0 part  
│ └─md0       9:0    0     4G  0 raid1 [SWAP]
├─nvme1n1p2 259:2    0     1G  0 part  
│ └─md1       9:1    0  1022M  0 raid1 /boot
└─nvme1n1p3 259:3    0 889.3G  0 part  
  └─md2       9:2    0 889.1G  0 raid1 /
nvme0n1     259:4    0 894.3G  0 disk  
├─nvme0n1p1 259:5    0     4G  0 part  
│ └─md0       9:0    0     4G  0 raid1 [SWAP]
├─nvme0n1p2 259:6    0     1G  0 part  
│ └─md1       9:1    0  1022M  0 raid1 /boot
└─nvme0n1p3 259:7    0 889.3G  0 part  
  └─md2       9:2    0 889.1G  0 raid1 /
[root@stream etc]#
Could it be that the machine is too busy to permit a login?
Or maybe I have to edit fstab in another way?
THANKS x2
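
For illustration, a more defensive fstab entry for a large secondary array, so that a missing or still-initializing device does not stop the boot, would look roughly like this (the UUID is the one printed by mkfs.ext4 above; nofail and x-systemd.device-timeout are standard mount options):

Code: Select all

# mount by filesystem UUID and keep booting even if the array is not ready
UUID=3f13af13-60ac-4d65-888a-a194e0a1fdbb  /mnt  ext4  defaults,nofail,x-systemd.device-timeout=30  0  0

With nofail the system should come up and allow a login even while /dev/md3 is still busy resyncing.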
