Problem with Raid 10 after power failure.

inertz
Posts: 1
Joined: 2019/03/06 04:23:21

Post by inertz » 2021/06/13 10:21:18

I have one server using software RAID 10. However, after a recent power failure the server no longer boots and is stuck at the grub rescue prompt.

Using the ls command on every device at the grub rescue prompt, GRUB does not detect their contents, so I cannot reinstall it.
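
For completeness, what I tried at the grub rescue prompt was along these lines (retyped from memory, and (hd0,msdos1) is only my guess at the /boot partition):

Code: Select all

grub rescue> ls
grub rescue> ls (hd0,msdos1)/
grub rescue> set prefix=(hd0,msdos1)/grub2
grub rescue> insmod normal

Since ls cannot read any device, the set prefix and insmod normal steps never get anywhere.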

Using rescue mode to at least assemble the RAID also fails:
[Attachment: assemble-raid.jpg — RAID assemble command]
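
In case the screenshot is hard to read, the assemble attempt was essentially the following (retyped, not copied from the image, so treat it as approximate):

Code: Select all

# let mdadm detect and assemble whatever arrays it can find
mdadm --assemble --scan --verbose

It fails as shown in the screenshot above.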
Output of lsblk:

Code: Select all

NAME        FSTYPE          LABEL UUID                                   MOUNTPOINT
sda1        xfs                   fe29ddb2-0a44-4740-b765-d23ab4336dee   /boot
└─sda                                                                    
sdb1        isw_raid_member                                              
└─sdb       isw_raid_member                                              
sdb2        isw_raid_member                                              
└─sdb       isw_raid_member                                              
sdc1        isw_raid_member                                              
└─sdc       isw_raid_member                                              
sdc2        isw_raid_member                                              
└─sdc       isw_raid_member                                              
sdd         isw_raid_member                                              
sde         isw_raid_member                                              
centos-root xfs                   078037f6-03c5-420f-8c7e-7b6962166265   /
└─sda2      LVM2_member           rKiRUb-v32S-v1xQ-exAi-b96u-KWWc-0EAURN 
  └─sda                                                                  
centos-swap swap                  4ca8dbe7-523d-475f-a8d5-3f3d9758b061   [SWAP]
└─sda2      LVM2_member           rKiRUb-v32S-v1xQ-exAi-b96u-KWWc-0EAURN 
  └─sda                                                                  
centos-home xfs                   9575bd3d-b397-4cbd-9af8-347264958212   /home
└─sda2      LVM2_member           rKiRUb-v32S-v1xQ-exAi-b96u-KWWc-0EAURN 
  └─sda                                                                  

Output of mdadm --examine /dev/sdb:

Code: Select all

/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.2.01
    Orig Family : 3b433edb
         Family : 3b433edb
     Generation : 00034ee1
  Creation Time : Unknown
     Attributes : All supported
           UUID : 2107ed16:4510c3f0:16f8ca12:a60b8645
       Checksum : 94c47391 correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk00 Serial : 0702068A00008785
          State : active
             Id : 00000000
    Usable Size : 468851726 (223.57 GiB 240.05 GB)

[VolRaid10]:
       Subarray : 0
           UUID : 1afd8bb1:b4cee853:f06be87c:78890be6
     RAID Level : 10
        Members : 4
          Slots : [UUUU]
    Failed disk : none
      This Slot : 0
    Sector Size : 512
     Array Size : 890863616 (424.80 GiB 456.12 GB)
   Per Dev Size : 445432072 (212.40 GiB 228.06 GB)
  Sector Offset : 0
    Num Stripes : 1739968
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : dirty
     RWH Policy : off
      Volume ID : 1

  Disk01 Serial : 079A150400071953
          State : active
             Id : 00000001
    Usable Size : 468851726 (223.57 GiB 240.05 GB)

  Disk02 Serial : 070206EF00007830
          State : active
             Id : 00000002
    Usable Size : 468851726 (223.57 GiB 240.05 GB)

  Disk03 Serial : 07050DF700047881
          State : active
             Id : 00000003
    Usable Size : 468851726 (223.57 GiB 240.05 GB)
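
The four data disks are Intel IMSM (isw) RAID members, and the metadata above looks intact to me: all four disks are active and no disk is marked failed. The part that worries me is "Dirty State : dirty", which I assume is from the power failure. My understanding from the mdadm man page is that an IMSM set is assembled in two steps, container first and then the volume inside it, possibly with --force on a dirty array, so this is what I plan to try next (a sketch only; the /dev/md/imsm0 name is arbitrary and the device names are from lsblk above):

Code: Select all

# step 1: force-assemble the IMSM container from the four member disks
# (/dev/md/imsm0 is just a name I picked)
mdadm --assemble --force /dev/md/imsm0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# step 2: start the RAID 10 volume(s) held inside the container
mdadm --incremental /dev/md/imsm0

I have not run this yet in case --force makes things worse. Any advice is appreciated.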