RAID rebuild done wrongly on my server

dedon2kx
Posts: 4
Joined: 2013/08/20 21:02:48

RAID rebuild done wrongly on my server

Post by dedon2kx » 2013/08/20 21:19:46

Dear all,

I have a software RAID 1 setup on my hosted server. Recently the sda disc went bad and was replaced, but support said I would have to rebuild the RAID myself.
I read what I could and got part of the way through it. Now the new disc has been added to only one of the arrays, the other simply won't accept it, and even from the one it was added to it can't be removed.

I don't know what to do, please help out.

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sda[0] sdb2[1]
1851110336 blocks [2/2] [UU]

md3 : active raid1 sdb3[1]
101374912 blocks [2/1] [_U]

unused devices: <none>

then:

mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Mon Mar 4 16:43:28 2013
Raid Level : raid1
Array Size : 1851110336 (1765.36 GiB 1895.54 GB)
Used Dev Size : 1851110336 (1765.36 GiB 1895.54 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Tue Aug 20 22:15:00 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 291a1dfb:a478a64c:a4d2adc2:26fd5302
Events : 0.81004

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 18 1 active sync /dev/sdb2


Also:
mdadm -D /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Mon Mar 4 16:43:34 2013
Raid Level : raid1
Array Size : 101374912 (96.68 GiB 103.81 GB)
Used Dev Size : 101374912 (96.68 GiB 103.81 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Tue Aug 20 16:30:46 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 833f335c:91a3b5ba:a4d2adc2:26fd5302
Events : 0.73

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 19 1 active sync /dev/sdb3

then:
cat /boot/grub/grub.conf
default=0
timeout=5

title linux centos6_64
kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 root=/dev/md2 ro
root (hd0,1)

Please, can anybody help me out with what to do next?

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

RAID rebuild done wrongly on my server

Post by gerald_clark » 2013/08/20 21:56:09

It looks like you forgot to partition sda and added the whole disc to md2.

Please read and understand before using.
https://raid.wiki.kernel.org/index.php/Linux_Raid
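
Something along these lines (device names taken from your mdstat output, so check them first) will confirm it - the md superblock is sitting on the raw disc instead of on a partition:

[b]mdadm -E /dev/sda[/b] # examine the md superblock on the whole disc
[b]mdadm -E /dev/sdb2[/b] # compare with the partition on the good disc
[b]fdisk -l /dev/sda[/b] # shows whether sda has any partition table at all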

dedon2kx
Posts: 4
Joined: 2013/08/20 21:02:48

Re: RAID rebuild done wrongly on my server

Post by dedon2kx » 2013/08/20 23:14:38

Hi gerald_clark,

Yes, I see now that you are right, but only after the damage has been done.

However, the problem now is that I've been trying to remove md2 from it, but it keeps telling me the device is busy.

So the question is: is reinstalling the server my only option?

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

Re: RAID rebuild done wrongly on my server

Post by gerald_clark » 2013/08/20 23:30:28

Just fail sda.
Then you can partition it and add the partitions to the proper arrays.
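
Roughly like this (double-check the device names against your own mdstat before running anything):

[b]mdadm --manage /dev/md2 --fail /dev/sda[/b]
[b]mdadm --manage /dev/md2 --remove /dev/sda[/b]

Once it is out of md2 the disc is free to be repartitioned.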

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: RAID rebuild done wrongly on my server

Post by TrevorH » 2013/08/21 00:03:34

[quote]
I've been trying to remove md2 from it
[/quote]

Wrong way round: md2 is the array. You want to remove /dev/sda from the array by failing it, then repartition /dev/sda into the correct sized partitions, tag them 0xfd, then add each partition to its relevant /dev/mdX array.
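
For example, something like this would do it, assuming MBR discs and that sda should mirror sdb's layout (verify both assumptions first):

[b]sfdisk -d /dev/sdb | sfdisk /dev/sda[/b] # copies sdb's partition table, 0xfd types included, onto sda (assumes MBR)
[b]sfdisk -l /dev/sda[/b] # check the result
[b]mdadm --manage /dev/md2 --add /dev/sda2[/b]
[b]mdadm --manage /dev/md3 --add /dev/sda3[/b]

If the discs are GPT rather than MBR, sgdisk is the tool to copy the table with instead of sfdisk.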

dedon2kx
Posts: 4
Joined: 2013/08/20 21:02:48

Re: RAID rebuild done wrongly on my server

Post by dedon2kx » 2013/08/21 19:29:29

Thank you gerald_clark, TrevorH,

You guys rock.

I will do that and give you feedback on the result.

dedon2kx
Posts: 4
Joined: 2013/08/20 21:02:48

Re: RAID rebuild done wrongly on my server

Post by dedon2kx » 2013/08/21 20:36:56

Thank you guys again,

I did as you suggested by failing sda on md2 (thanks TrevorH) and then I removed sda from the array. After that I copied the partition table from sdb (after hitting some glitches):
[b]sgdisk --backup=table /dev/sdb[/b]
The operation has completed successfully.
[b]sgdisk --load-backup=table /dev/sda[/b]
Creating new GPT entries.
The operation has completed successfully.
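
(One thing I am not sure about: this clones the GPT verbatim, disc and partition GUIDs included, so [b]sgdisk -G /dev/sda[/b] may be needed afterwards to give the new disc its own random GUIDs.)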

Then, listing again, I had:
[b]cat /proc/mdstat[/b]
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sdb2[1]
1851110336 blocks [2/1] [_U]

md3 : active raid1 sdb3[1]
101374912 blocks [2/1] [_U]

unused devices: <none>

So I re-added sda to the arrays, this time pairing sda2 with sdb2 and sda3 with sdb3 (I think that is how it was before, and since I copied the partition table from sdb it should follow the same layout) :lol:
with commands:
[b]mdadm --manage /dev/md2 --add /dev/sda2[/b]
mdadm: added /dev/sda2
[b]mdadm --manage /dev/md3 --add /dev/sda3[/b]
mdadm: added /dev/sda3
[b]cat /proc/mdstat[/b]
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sda2[2] sdb2[1]
1851110336 blocks [2/1] [_U]
[>....................] recovery = 0.0% (1481920/1851110336) finish=1155.4min speed=26679K/sec

md3 : active raid1 sda3[2] sdb3[1]
101374912 blocks [2/1] [_U]
resync=DELAYED
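
I will keep an eye on the resync with [b]watch cat /proc/mdstat[/b] and [b]mdadm --detail /dev/md2[/b]. Once it finishes I believe I still need to put the bootloader back on the new disc (something like [b]grub-install /dev/sda[/b]), since sda is the one that was swapped out.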
