RAID rebuild done wrongly on my server
Posted: 2013/08/20 21:19:46
Dear all,
I have a software RAID 1 setup on my hosted server. Recently the sda disk went bad and was replaced, but support said I have to rebuild the RAID myself.
I read what I could and got part of the way through. Now only one of the two arrays (md2) has a device added back, and the other (md3) simply won't accept one. Even the device that was added to md2 can't be removed.
I don't know what to do, please help out.
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sda[0] sdb2[1]
1851110336 blocks [2/2] [UU]
md3 : active raid1 sdb3[1]
101374912 blocks [2/1] [_U]
unused devices:
then:
mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Mon Mar 4 16:43:28 2013
Raid Level : raid1
Array Size : 1851110336 (1765.36 GiB 1895.54 GB)
Used Dev Size : 1851110336 (1765.36 GiB 1895.54 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Aug 20 22:15:00 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 291a1dfb:a478a64c:a4d2adc2:26fd5302
Events : 0.81004
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 18 1 active sync /dev/sdb2
Also:
mdadm -D /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Mon Mar 4 16:43:34 2013
Raid Level : raid1
Array Size : 101374912 (96.68 GiB 103.81 GB)
Used Dev Size : 101374912 (96.68 GiB 103.81 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Tue Aug 20 16:30:46 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 833f335c:91a3b5ba:a4d2adc2:26fd5302
Events : 0.73
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 19 1 active sync /dev/sdb3
then:
cat /boot/grub/grub.conf
default=0
timeout=5
title linux centos6_64
kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 root=/dev/md2 ro
root (hd0,1)
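For what it's worth, I notice that md2 now contains the whole disk /dev/sda while its partner is the partition /dev/sdb2, so maybe I added the wrong device. From the guides I read, I think the rebuild was supposed to go something like this (I'm really not sure, so please correct me):

```shell
# Copy the partition layout from the surviving disk (sdb) onto the new disk (sda),
# since the arrays are built from partitions (sdb2, sdb3), not whole disks
sfdisk -d /dev/sdb | sfdisk /dev/sda

# The whole disk /dev/sda is an active member of md2, so (as I understand it)
# it must be marked failed before mdadm will let it be removed
mdadm /dev/md2 --fail /dev/sda
mdadm /dev/md2 --remove /dev/sda

# Then add the matching partitions back into each array and let them resync
mdadm /dev/md2 --add /dev/sda2
mdadm /dev/md3 --add /dev/sda3
```

Is this the right sequence, or did I already break something by adding the bare /dev/sda?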
Can anybody please tell me what to do?