RAID 5 - unable to mount xfs volume
Re: RAID 5 - unable to mount xfs volume
I am a bit confused. You have a hardware RAID controller but appear to be using mdadm software RAID?
CentOS 8 died a premature death at the end of 2021 - migrate to Rocky/Alma/OEL/Springdale ASAP.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke
-
- Posts: 9
- Joined: 2023/05/04 09:00:29
Re: RAID 5 - unable to mount xfs volume
Yes, that's how the setup was done. On the RAID controller they are set up as RAID 0, then combined in software RAID as RAID 5.
Re: RAID 5 - unable to mount xfs volume
tunk wrote: ↑2023/05/04 13:04:38
You may be able to run smartctl on the RAID disks, something like this:
fdisk -l
smartctl --scan
smartctl -a -d megaraid,YY /dev/sdX
E.g.:
# fdisk -l
....
Disk /dev/sdb: ....
Disk model: PERC Hxxx
....
# smartctl --scan
....
/dev/sdb -d scsi # /dev/sdb, SCSI device
....
/dev/bus/5 -d megaraid,22 # /dev/bus/5 [megaraid_disk_22], SCSI device
....
# smartctl -a -d megaraid,22 /dev/sdb
# smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/sdc -d scsi # /dev/sdc, SCSI device
/dev/sdd -d scsi # /dev/sdd, SCSI device
/dev/sde -d scsi # /dev/sde, SCSI device
/dev/sdf -d scsi # /dev/sdf, SCSI device
/dev/sdg -d scsi # /dev/sdg, SCSI device
/dev/sdh -d scsi # /dev/sdh, SCSI device
/dev/sdi -d scsi # /dev/sdi, SCSI device
/dev/sdj -d scsi # /dev/sdj, SCSI device
/dev/sdk -d scsi # /dev/sdk, SCSI device
/dev/sdl -d scsi # /dev/sdl, SCSI device
/dev/sdm -d scsi # /dev/sdm, SCSI device
/dev/bus/0 -d megaraid,0 # /dev/bus/0 [megaraid_disk_00], SCSI device
/dev/bus/0 -d megaraid,1 # /dev/bus/0 [megaraid_disk_01], SCSI device
/dev/bus/0 -d megaraid,2 # /dev/bus/0 [megaraid_disk_02], SCSI device
/dev/bus/0 -d megaraid,3 # /dev/bus/0 [megaraid_disk_03], SCSI device
/dev/bus/0 -d megaraid,4 # /dev/bus/0 [megaraid_disk_04], SCSI device
/dev/bus/0 -d megaraid,5 # /dev/bus/0 [megaraid_disk_05], SCSI device
/dev/bus/0 -d megaraid,6 # /dev/bus/0 [megaraid_disk_06], SCSI device
/dev/bus/0 -d megaraid,7 # /dev/bus/0 [megaraid_disk_07], SCSI device
/dev/bus/0 -d megaraid,8 # /dev/bus/0 [megaraid_disk_08], SCSI device
/dev/bus/0 -d megaraid,9 # /dev/bus/0 [megaraid_disk_09], SCSI device
/dev/bus/0 -d megaraid,10 # /dev/bus/0 [megaraid_disk_10], SCSI device
/dev/bus/0 -d megaraid,11 # /dev/bus/0 [megaraid_disk_11], SCSI device
/dev/bus/0 -d megaraid,12 # /dev/bus/0 [megaraid_disk_12], SCSI device
smartctl -a -d megaraid,4 /dev/sdb
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1160.53.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate IronWolf
Device Model: ST8000VN0022-2EL112
Serial Number: ZA18RGLV
LU WWN Device Id: 5 000c50 0a445cab7
Firmware Version: SC61
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu May 4 18:06:21 2023 EAT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Status not supported: ATA return descriptor not supported by controller firmware
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
Warning: This result is based on an Attribute check.
See vendor-specific Attribute list for failed Attributes.
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 567) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 753) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x50bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 100 044 044 Pre-fail Always In_the_past 0
3 Spin_Up_Time 0x0003 085 084 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 170
5 Reallocated_Sector_Ct 0x0033 001 001 010 Pre-fail Always FAILING_NOW 0 (0 6)
7 Seek_Error_Rate 0x000f 083 060 045 Pre-fail Always - 200680622
9 Power_On_Hours 0x0032 051 051 000 Old_age Always - 43330 (81 167 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 195
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 5424
188 Command_Timeout 0x0032 100 099 000 Old_age Always - 65537
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 055 050 040 Old_age Always - 45 (Min/Max 21/46)
191 G-Sense_Error_Rate 0x0032 099 099 000 Old_age Always - 3716
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 115
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 307043
194 Temperature_Celsius 0x0022 045 050 000 Old_age Always - 45 (0 19 0 0 0)
195 Hardware_ECC_Recovered 0x001a 100 001 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 37523 (11 42 0)
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 19411709622
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 7039530761194
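The scan output above can be fed back into smartctl to check every slot in one pass. A minimal sketch (the awk field position matches the `smartctl --scan` format shown above; using /dev/sdb as the controller passthrough device is an assumption to adjust):

```shell
#!/bin/sh
# Extract the "megaraid,N" identifiers from `smartctl --scan` output so a
# health check can be run against every slot in one go.
scan_megaraid_slots() {
    # Scan lines look like: /dev/bus/0 -d megaraid,4 # ...
    # Field 3 holds the megaraid identifier.
    awk '$3 ~ /^megaraid,/ { print $3 }'
}

# Usage (as root; /dev/sdb is the passthrough device in this setup):
# smartctl --scan | scan_megaraid_slots | while read -r dev; do
#     echo "== $dev =="
#     smartctl -H -d "$dev" /dev/sdb
# done
```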
Re: RAID 5 - unable to mount xfs volume
elkingsparx wrote:
Yes, that's how the setup was done. On the RAID Controller they are setup as RAID 0, then on the software RAID as RAID 5.
Forgive me if this was done by you, but that sounds totally idiotic. What is the point in paying several hundred currency units for a hardware RAID controller with battery-backed cache and then not using it?
CentOS 8 died a premature death at the end of 2021 - migrate to Rocky/Alma/OEL/Springdale ASAP.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke
Re: RAID 5 - unable to mount xfs volume
TrevorH wrote: ↑2023/05/04 15:27:08
Forgive me if this was done by you but that sounds totally idiotic. What is the point in paying several hundred currency units for a hardware RAID controller with battery backed cache and then not using it!
A consultant did the setup.
Last edited by elkingsparx on 2023/05/25 11:15:54, edited 1 time in total.
Re: RAID 5 - unable to mount xfs volume
Have done a S.M.A.R.T. check on all the drives and it seems there are two failing drives. Not sure if the second drive will hold if I were to replace one and rebuild the RAID.
smartctl -a -d megaraid,8 /dev/sdb
-----------------------------------------------------------------------
Model Family: Seagate IronWolf
Device Model: ST8000VN0022-2EL112
Serial Number: ZA18RGMM
LU WWN Device Id: 5 000c50 0a445bc17
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 077 035 044 Pre-fail Always In_the_past 52006424
3 Spin_Up_Time 0x0003 084 084 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 182
5 Reallocated_Sector_Ct 0x0033 001 001 010 Pre-fail Always FAILING_NOW 0 (0 6)
7 Seek_Error_Rate 0x000f 088 060 045 Pre-fail Always - 600603165
9 Power_On_Hours 0x0032 051 051 000 Old_age Always - 43330 (147 69 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 188
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 60371
188 Command_Timeout 0x0032 100 085 000 Old_age Always - 30065819677
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 057 051 040 Old_age Always - 43 (Min/Max 21/44)
191 G-Sense_Error_Rate 0x0032 098 098 000 Old_age Always - 5067
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 131
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 334635
194 Temperature_Celsius 0x0022 043 049 000 Old_age Always - 43 (0 16 0 0 0)
195 Hardware_ECC_Recovered 0x001a 006 001 000 Old_age Always - 52006424
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 41941 (21 114 0)
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 20610615214
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 7746529107369
SMART Error Log Version: 1
ATA Error Count: 41493 (device log contains only the most recent five errors)
-----------------------------
smartctl -a -d megaraid,4 /dev/sdb
-------------------------------------------------------------------------------------------------------
Model Family: Seagate IronWolf
Device Model: ST8000VN0022-2EL112
Serial Number: ZA18RGLV
LU WWN Device Id: 5 000c50 0a445cab7
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 100 044 044 Pre-fail Always In_the_past 0
3 Spin_Up_Time 0x0003 085 084 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 170
5 Reallocated_Sector_Ct 0x0033 001 001 010 Pre-fail Always FAILING_NOW 0 (0 6)
7 Seek_Error_Rate 0x000f 083 060 045 Pre-fail Always - 200680622
9 Power_On_Hours 0x0032 051 051 000 Old_age Always - 43330 (81 167 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 195
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 5424
188 Command_Timeout 0x0032 100 099 000 Old_age Always - 65537
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 055 050 040 Old_age Always - 45 (Min/Max 21/46)
191 G-Sense_Error_Rate 0x0032 099 099 000 Old_age Always - 3716
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 115
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 307043
194 Temperature_Celsius 0x0022 045 050 000 Old_age Always - 45 (0 19 0 0 0)
195 Hardware_ECC_Recovered 0x001a 100 001 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 37523 (11 42 0)
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 19411709622
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 7039530761194
SMART Error Log Version: 1
ATA Error Count: 14702 (device log contains only the most recent five errors)
Re: RAID 5 - unable to mount xfs volume
elkingsparx wrote:
Not sure if the second drive will hold if I was to replace one and have a rebuild of the RAID.
Depends on your available choices. If this is the only copy of the data on there (i.e. no backups!), then you don't have much choice. If you can mount it all and back it all up, then at least you know you have a copy of the data.
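If the array can still be assembled, a read-only mount that skips XFS log replay gives the best chance of copying the data off before touching any drives. A sketch, assuming the software RAID array is /dev/md0 and /path/to/backup is a placeholder for the real destination:

```shell
#!/bin/sh
# Mount a damaged XFS volume read-only and copy the data off before any
# drive replacement or rebuild is attempted. /dev/md0 and /path/to/backup
# are assumptions; check the real array name with `cat /proc/mdstat`.
MOUNT_OPTS="ro,norecovery"   # read-only, and skip XFS log replay

recover_mount() {
    array="$1"; target="$2"
    mkdir -p "$target"
    mount -o "$MOUNT_OPTS" "$array" "$target"
    # -aHAX preserves hard links, ACLs and extended attributes.
    rsync -aHAX "$target"/ /path/to/backup/
}

# Usage (as root): recover_mount /dev/md0 /mnt/recovery
```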
CentOS 8 died a premature death at the end of 2021 - migrate to Rocky/Alma/OEL/Springdale ASAP.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke
Re: RAID 5 - unable to mount xfs volume
In a RAID 5, at most one disk can fail without data loss.
Your XFS errors and seemingly two bad disks suggest that the RAID is
dead. Time for new disks (the old ones are 5 years old)? RAID 6 plus a hot spare?
And maybe a script that checks the disks daily and sends an e-mail if
there are problems?
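The suggested daily check could be sketched roughly like this (the slot range, the /dev/sdb passthrough device, and delivery via `mail` are all assumptions to adapt):

```shell
#!/bin/sh
# Daily SMART check for cron: scan every megaraid slot and e-mail root
# when any drive reports a failed health check or a failing attribute.

smart_failed() {
    # Succeeds if the smartctl output on stdin indicates a failure.
    grep -qE 'FAILED!|FAILING_NOW'
}

daily_check() {
    report=""
    for slot in $(seq 0 12); do   # slots 0-12, as in the scan above
        out=$(smartctl -H -A -d "megaraid,$slot" /dev/sdb 2>/dev/null)
        if printf '%s\n' "$out" | smart_failed; then
            report="$report megaraid,$slot"
        fi
    done
    [ -n "$report" ] && \
        echo "SMART failures on:$report" | mail -s "SMART alert: $(hostname)" root
}

# Run daily from /etc/cron.d, e.g.:
# 0 6 * * * root /usr/local/sbin/smart-check.sh
# daily_check
```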
Re: RAID 5 - unable to mount xfs volume
Thanks for your recommendations, will implement in the next build.
Re: RAID 5 - unable to mount xfs volume
A comment on the hardware RAID 0 plus software RAID 5 setup.
Quite recently I tried to use twelve 16 TB disks with a PERC H800
controller (RAID 6 + hot spare). It did not work. Finally I read the
specs: the controller has a maximum RAID size of 64 TB. Your H700 is in the
same family and may have the same limitation. If so, the hardware
plus software RAID may have been used to circumvent this limit.
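For reference, the arithmetic behind hitting that limit: RAID 6 usable capacity is (number of disks − 2) × disk size, so twelve 16 TB disks land well beyond a 64 TB per-array cap:

```shell
# RAID 6 usable capacity is (number of disks - 2) * disk size.
disks=12
size_tb=16
usable_tb=$(( (disks - 2) * size_tb ))
echo "${usable_tb} TB usable"   # 160 TB, far beyond a 64 TB array limit
```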