Centos 5 / Raid restart? Help

A 5 star hangout for overworked and underpaid system admins.
ramasule
Posts: 5
Joined: 2008/09/29 02:47:46

Centos 5 / Raid restart? Help

Post by ramasule » 2022/04/12 02:43:23

Hello,

My little samba server that could has chugged along in the closet for many years.
I ended up installing SME / E-Smith / Koozali on it.
I backed up my data semi-regularly onto a USB drive, so I'm pretty sure it's all there; however, I'd like to check before the box joins the scrap heap.

On boot I'm getting the error:
Kernel panic - not syncing: attempted to kill init!

Above that it says something about no volume groups being found, which leads me to think that something messed up the RAID.
I believe the faulted drive is /dev/sdb2

If I recall, I had it configured as RAID 5.

I'm trying to follow this guide: https://wiki.koozali.org/Recovering_SME ... lvm_drives . I am currently using the systemrescue USB.

cat /proc/mdstat returns

Code:

cat /proc/mdstat  
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : inactive sda2[0](S) sdd2[3](S) sdc2[2](S) sdb2[1](S)
      5860125952 blocks
       
md1 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      104320 blocks [4/4] [UUUU]



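(All four members of md2 show up as (S) in an inactive array, which usually means the kernel collected the superblocks at boot but refused to start the array, here presumably because one member's metadata is stale. A sketch of the usual first step from a rescue shell, assuming nothing on the array is mounted:)

```shell
# Stop the half-assembled array so the member devices are released,
# then let mdadm rescan the superblocks and attempt a normal assembly.
mdadm --stop /dev/md2
mdadm --assemble --scan --verbose
```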
mdadm -E /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 all show clean:

Code:

 mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 62120fcd:f089dc5e:dc76e3e7:4dcfb726
  Creation Time : Tue Feb  2 22:37:25 2010
     Raid Level : raid1
  Used Dev Size : 104320 (101.88 MiB 106.82 MB)
     Array Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Thu Jan 13 16:42:51 2022
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : d3587fd3 - correct
         Events : 2212


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
mdadm -E on /dev/sda2, /dev/sdc2, and /dev/sdd2 states that /dev/sdb2 is faulted:

Code:

mdadm -E /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 24c5e2b3:1ce971c5:7b9762b2:cca24e1f
  Creation Time : Tue Feb  2 22:37:26 2010
     Raid Level : raid5
  Used Dev Size : 1465031424 (1397.16 GiB 1500.19 GB)
     Array Size : 4395094272 (4191.49 GiB 4500.58 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2

    Update Time : Thu Jan 13 16:48:49 2022
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 3929e382 - correct
         Events : 24506893

         Layout : left-symmetric
     Chunk Size : 256K

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       0        0        1      faulty removed
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       8       50        3      active sync   /dev/sdd2
I'm just not sure how to restart / activate my /dev/md2. (It's /dev/md127 now because I stopped it, or something, while troubleshooting.)


I tried a reassemble and got the following:

mdadm --assemble --run --force --update=resync /dev/md127 /dev/sda2 /dev/sdc2 /dev/sdd2
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
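(The "busy" errors mean the member partitions are still claimed by the inactive array, so it has to be stopped before it can be reassembled. Something like the following should work; this is a sketch, where --force is what overrides the stale event count, and the LVM rescan at the end addresses the "no volume groups found" message:)

```shell
# Release the members, then force-assemble the three good disks
# into a degraded but running array.
mdadm --stop /dev/md127
mdadm --assemble --run --force /dev/md2 /dev/sda2 /dev/sdc2 /dev/sdd2

# With md2 running, the volume group on top of it can be reactivated.
vgscan
vgchange -ay
```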


Anyways thanks for your time,

Ram

ramasule
Posts: 5
Joined: 2008/09/29 02:47:46

Re: Centos 5 / Raid restart? Help

Post by ramasule » 2022/04/12 03:12:38

OK, I stopped the array again, added the three devices, and restarted.

I still got the same error as before (no volume groups), and when I reloaded systemrescue it said md2 was inactive again.

So this time I stopped it, reassembled, and then added sdb2 back into the array, to be greeted with this:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sdb2[4] sda2[0] sdd2[3] sdc2[2]
4395094272 blocks level 5, 256k chunk, algorithm 2 [4/3] [U_UU]
[>....................] recovery = 0.0% (100828/1465031424) finish=10293.6min speed=2371K/sec

10293 minutes?!?!

1 week?!?!

Seems very odd; might be time to just throw this clunker away and get an off-the-shelf NAS, as that's all I use this box for now anyway.
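(For what it's worth, ~2.4 MB/s is close to md's default rebuild floor, which keeps the array responsive rather than indicating a dying disk, and 10293 minutes is indeed about 7 days. On an otherwise idle box the md speed limits can be raised; the numbers below are only example values, in KB/s per device:)

```shell
# Raise md's minimum/maximum resync speed (KB/s per device).
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# Then watch whether the estimated finish time drops.
watch -n 5 cat /proc/mdstat
```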

Ram
