Degraded RAID 1 in CentOS 7


Degraded RAID 1 in CentOS 7

Post by obla4ko » 2014/07/19 17:34:53

Good day!
I have a strange problem: after installing CentOS 7, my RAID becomes degraded.

My steps:
1. Install CentOS 7 from USB (creating RAID 1 during installation).
2. Install dhcp, bind, ntp, etc.
3. Wait 3-4 hours.
4. The system freezes; after a restart I see that the RAID is degraded.

At first I thought the problem was the hard disks, so I connected them to a CentOS 6.5 machine and created a RAID 1 there. That system ran for several days with no degradation.
I also checked smartctl and found no problems; all attributes look normal.
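For reference, the SMART checks were along these lines (a minimal sketch; run for both members):

Code:

# run a long (extended) self-test on each member, then review the results
smartctl -t long /dev/sda
smartctl -t long /dev/sdb
# once the tests finish, dump attributes, error log and self-test log
smartctl --all /dev/sda
smartctl --all /dev/sdb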


Info:

Kernel version:

Code:

Linux c12000 3.10.0-123.4.2.el7.x86_64 #1 SMP Mon Jun 30 16:09:14 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
lspci

Code:

00:00.0 Host bridge: NVIDIA Corporation C55 Host Bridge (rev a2)
00:00.1 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:00.2 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:00.3 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:00.4 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:00.5 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a2)
00:00.6 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:00.7 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.0 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.1 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.2 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.3 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.4 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.5 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:01.6 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:02.0 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:02.1 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:02.2 RAM memory: NVIDIA Corporation C55 Memory Controller (rev a1)
00:03.0 PCI bridge: NVIDIA Corporation C55 PCI Express bridge (rev a1)
00:09.0 RAM memory: NVIDIA Corporation MCP55 Memory Controller (rev a2)
00:0a.0 ISA bridge: NVIDIA Corporation MCP55 LPC Bridge (rev a3)
00:0a.1 SMBus: NVIDIA Corporation MCP55 SMBus (rev a3)
00:0b.0 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a1)
00:0b.1 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a2)
00:0d.0 IDE interface: NVIDIA Corporation MCP55 IDE (rev a1)
00:0e.0 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:0e.1 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:0e.2 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:0f.0 PCI bridge: NVIDIA Corporation MCP55 PCI bridge (rev a2)
00:11.0 Bridge: NVIDIA Corporation MCP55 Ethernet (rev a3)
00:12.0 Bridge: NVIDIA Corporation MCP55 Ethernet (rev a3)
00:13.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3)
01:00.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a2)
02:00.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a2)
02:02.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a2)
03:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
03:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
04:00.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
05:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
05:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
06:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
06:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
07:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
07:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)

Code:

smartctl --all /dev/sda
=== START OF INFORMATION SECTION ===
Device Model:     HGST HUS724020ALA640
Serial Number:    xxx
LU WWN Device Id: 5 000cca 22dd5d5cb
Firmware Version: MF6OAA70
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Jul 19 21:23:31 2014 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED



SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   136   136   054    Pre-fail  Offline      -       80
  3 Spin_Up_Time            0x0007   144   144   024    Pre-fail  Always       -       402 (Average 464)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       30
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   145   145   020    Pre-fail  Offline      -       24
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       212
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       30
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       37
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       37
194 Temperature_Celsius     0x0002   139   139   000    Old_age   Always       -       43 (Min/Max 24/45)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%        98         -
# 2  Extended offline    Completed without error       00%         4         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
smartctl --all /dev/sdb

Code:

=== START OF INFORMATION SECTION ===
Device Model:     HGST HUS724020ALA640
Serial Number:    xxxxx
LU WWN Device Id: 5 000cca 22dcf8d3c
Firmware Version: MF6OAA70
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Jul 19 21:23:36 2014 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   137   137   054    Pre-fail  Offline      -       78
  3 Spin_Up_Time            0x0007   140   140   024    Pre-fail  Always       -       453 (Average 436)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       28
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   142   142   020    Pre-fail  Offline      -       25
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       212
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       28
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       35
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       35
194 Temperature_Celsius     0x0002   139   139   000    Old_age   Always       -       43 (Min/Max 24/49)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%        98         -
# 2  Extended offline    Completed without error       00%         4         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
mdadm version

Code:

mdadm - v3.2.6 - 25th October 2012
mdstat

Code:

cat /proc/mdstat 
Personalities : [raid1] 
md15 : active raid1 sdb5[1]
      1845859712 blocks super 1.1 [2/1] [_U]
      
md11 : active raid1 sdb1[1] sda1[0]
      1023936 blocks super 1.0 [2/2] [UU]
      
md12 : active raid1 sdb2[1] sda2[0]
      4093888 blocks super 1.1 [2/2] [UU]
      
md13 : active raid1 sdb3[1] sda3[0]
      102334336 blocks super 1.1 [2/2] [UU]
      [>....................]  resync =  0.1% (194176/102334336) finish=1648.7min speed=1032K/sec
Sometimes all of the arrays are degraded, sometimes only one (in this example md15 is degraded).
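For reference, putting a dropped member back into the mirror usually looks something like this (a sketch, assuming /dev/sda5 is the partition that fell out of md15; it triggers the resync shown above):

Code:

# remove the failed member if it is still listed, then add it back
mdadm --manage /dev/md15 --remove /dev/sda5
mdadm --manage /dev/md15 --add /dev/sda5
# watch the resync progress
cat /proc/mdstat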

On the one hand, it looks like a problem with the hard disk drives, but on the other hand, there is no such problem under CentOS 6.5.

Does anyone have an idea?


Re: Degraded RAID 1 in CentOS 7

Post by gerald_clark » 2014/07/19 18:11:33

What drives are these? Some green drives are problematic with software RAID.
Are the drives properly aligned on 4K boundaries?


Re: Degraded RAID 1 in CentOS 7

Post by obla4ko » 2014/07/19 18:53:43

2x Hitachi Ultrastar 7K4000. They are positioned as 24x7 drives (http://www.hgst.com/tech/techlib.nsf/te ... 000_ds.pdf).
I also checked the block size:

Code:

[x]# blockdev --getbsz /dev/md11
4096
[x]# blockdev --getbsz /dev/md12
4096
[x]# blockdev --getbsz /dev/md13
4096
[x]# blockdev --getbsz /dev/md15
4096
[x]#
mdadm -D info for each array:

md11

Code:

/dev/md11:
        Version : 1.0
  Creation Time : Thu Jul 17 11:05:17 2014
     Raid Level : raid1
     Array Size : 1023936 (1000.11 MiB 1048.51 MB)
  Used Dev Size : 1023936 (1000.11 MiB 1048.51 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Jul 19 22:46:47 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : c12000:11  (local to host c12000)
           UUID : c69ec837:4f24afe0:5fa413c6:83abcd09
         Events : 27

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
md12

Code:

/dev/md12:
        Version : 1.1
  Creation Time : Thu Jul 17 11:06:13 2014
     Raid Level : raid1
     Array Size : 4093888 (3.90 GiB 4.19 GB)
  Used Dev Size : 4093888 (3.90 GiB 4.19 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Jul 19 21:59:23 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : c12000:12  (local to host c12000)
           UUID : 344d2463:920b8655:e2813124:f630eeec
         Events : 27

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
md13

Code:

/dev/md13:
        Version : 1.1
  Creation Time : Thu Jul 17 11:05:34 2014
     Raid Level : raid1
     Array Size : 102334336 (97.59 GiB 104.79 GB)
  Used Dev Size : 102334336 (97.59 GiB 104.79 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Jul 19 22:50:38 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : c12000:13  (local to host c12000)
           UUID : e8ecc573:fd919479:290ba3bc:75852b71
         Events : 48

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
md15

Code:

/dev/md15:
        Version : 1.1
  Creation Time : Thu Jul 17 11:58:19 2014
     Raid Level : raid1
     Array Size : 1845859712 (1760.35 GiB 1890.16 GB)
  Used Dev Size : 1845859712 (1760.35 GiB 1890.16 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Jul 19 22:50:46 2014
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 32% complete

           Name : c12000:15  (local to host c12000)
           UUID : 56589d7d:d8a13b2b:55f989e8:97f93407
         Events : 14032

    Number   Major   Minor   RaidDevice State
       2       8        5        0      spare rebuilding   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5

md15 is now rebuilding:

Code:

Personalities : [raid1]
md15 : active raid1 sda5[2] sdb5[1]
      1845859712 blocks super 1.1 [2/1] [_U]
      [======>..............]  recovery = 33.1% (612687360/1845859712) finish=148.9min speed=137964K/sec

md11 : active raid1 sdb1[1] sda1[0]
      1023936 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb3[1] sda3[0]
      102334336 blocks super 1.1 [2/2] [UU]

md12 : active raid1 sdb2[1] sda2[0]
      4093888 blocks super 1.1 [2/2] [UU]
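If the resync is crawling (as in the 1032K/sec example above), the md speed limits can be checked and raised temporarily (a sketch; the value is only an example):

Code:

# current per-device resync limits in KB/s
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise the minimum so the resync is not throttled by other I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min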


fdisk information

Code:

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d91b9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2050047     1024000   83  Linux
/dev/sda2         2050048    10242047     4096000   83  Linux
/dev/sda3        10242048   215042047   102400000   83  Linux
/dev/sda4       215042048  3907028991  1845993472    5  Extended
/dev/sda5       215042111  3907028991  1845993440+  83  Linux

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00050ea5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2050047     1024000   83  Linux
/dev/sdb2         2050048    10242047     4096000   83  Linux
/dev/sdb3        10242048   215042047   102400000   83  Linux
/dev/sdb4       215042048  3907024064  1845991008+   5  Extended
/dev/sdb5       215042111  3907024064  1845990977   83  Linux

Disk /dev/md12: 4192 MB, 4192141312 bytes, 8187776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md13: 104.8 GB, 104790360064 bytes, 204668672 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md11: 1048 MB, 1048510464 bytes, 2047872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

df -h

Code:

 df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md13        96G  5.6G   86G   7% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G  104K  3.9G   1% /dev/shm
tmpfs           3.9G  9.3M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/md11       969M  133M  770M  15% /boot
/dev/md15       1.7T   77M  1.7T   1% /DATA

I found this:

Code:

[root@ ~]# mdadm -D /dev/md15 | grep Event
         Events : 14303
[root@ ~]# mdadm -D /dev/md15 | grep Event
         Events : 14305
[root@ ~]# mdadm -D /dev/md15 | grep Event
         Events : 14309
The events counter grows every second... I don't know whether this is normal or not while a rebuild is in progress.
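It may also be worth making sure array events get mailed out, so the next degradation is noticed right away (a sketch, assuming the stock mdmonitor service on CentOS 7; the mail address is only an example):

Code:

# add a MAILADDR line to /etc/mdadm.conf so mdmonitor knows where to send event mail
echo 'MAILADDR root' >> /etc/mdadm.conf
# make sure the monitor daemon is enabled and running
systemctl enable mdmonitor
systemctl start mdmonitor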


Re: Degraded RAID 1 in CentOS 7

Post by gerald_clark » 2014/07/19 18:59:48

sda5 is not aligned.


Re: Degraded RAID 1 in CentOS 7

Post by obla4ko » 2014/07/19 19:13:35

gerald_clark wrote: sda5 is not aligned.
Do you mean the size?

Code:

/dev/sda5       215042111  3907028991  1845993440+  83  Linux
/dev/sdb5       215042111  3907024064  1845990977   83  Linux

Code:

mdadm -D /dev/md15 | grep Size
     Array Size : 1845859712 (1760.35 GiB 1890.16 GB)
  Used Dev Size : 1845859712 (1760.35 GiB 1890.16 GB)

How could that affect the RAID?
As far as I know, when you create a RAID the array size is taken from the smallest member. I.e. with a 1 TB disk and a 2 TB disk, the RAID will only be 1 TB and the remaining space will not be used.
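That can be confirmed from the member superblocks (a sketch; the grep pattern is just an example):

Code:

# each member records its available size and how much of it the array uses
mdadm --examine /dev/sda5 /dev/sdb5 | grep -E 'Avail Dev Size|Used Dev Size'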


Re: Degraded RAID 1 in CentOS 7

Post by gerald_clark » 2014/07/19 19:14:59

It does not start on a 4K boundary.
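A quick way to check is whether each partition's start sector divides evenly by 8 (8 x 512-byte sectors = 4 KiB); a minimal sketch, assuming the members are on sda and sdb:

Code:

# print start sector and 4K alignment for every partition on sda and sdb
for p in /sys/block/sd[ab]/sd*[0-9]; do
    s=$(cat "$p/start")
    printf '%s start=%s aligned=%s\n' "${p##*/}" "$s" "$(( s % 8 == 0 ))"
done
# parted can perform the same check per partition, e.g. for sda5:
parted /dev/sda align-check opt 5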


Re: Degraded RAID 1 in CentOS 7

Post by obla4ko » 2014/07/19 19:17:28

gerald_clark wrote: It does not start on a 4K boundary.
But the disks have 512-byte sectors:

Code:

 smartctl --all /dev/sda | grep Size
Sector Size:      512 bytes logical/physical
[root@c12000 ~]# smartctl --all /dev/sdb | grep Size
Sector Size:      512 bytes logical/physical



Re: Degraded RAID 1 in CentOS 7

Post by TrevorH » 2014/07/19 19:42:33

Are there messages logged in /var/log from when the RAID went degraded? They'd be more interesting from the point of view of discovering why it happened, as anything else is just guesswork.
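Something along these lines should turn them up, assuming default logging (a sketch; the patterns are only examples):

Code:

# look for md/raid/ata events around the time the array degraded
grep -iE 'md1[1-5]|raid1|ata[0-9]' /var/log/messages
# kernel messages from the current boot
dmesg | grep -iE 'md1[1-5]|raid1|ata'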


Re: Degraded RAID 1 in CentOS 7

Post by obla4ko » 2014/07/19 20:02:56

TrevorH wrote: Are there messages logged in /var/log from when the RAID went degraded?
Sorry, there are no reports of degradation in /var/log/messages. I looked there first, too. I have cleared the mail and other logs and am now waiting for the RAID to degrade again...
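To make sure the next freeze leaves something behind, persistent journald storage can be enabled (a sketch; by default CentOS 7 keeps the journal in RAM only, so it is lost on reboot):

Code:

# create the directory journald uses for persistent storage, then restart it
mkdir -p /var/log/journal
systemctl restart systemd-journald
# after the next incident, the previous boot's kernel messages can be read with:
journalctl -k -b -1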


Re: Degraded RAID 1 in CentOS 7

Post by gerald_clark » 2014/07/19 21:48:51

You are mistaken.
Those are advanced-format drives with 4096-byte sectors.
They only emulate 512-byte sectoring, and if partitions are misaligned they do so with a massive performance hit.
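What the kernel and the drive itself report can be cross-checked like this (a sketch; whether this particular HGST model is 512n or 512e is worth verifying against its data sheet):

Code:

# logical vs physical sector size as seen by the kernel
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
# the drive's own identify data
hdparm -I /dev/sda | grep -i 'sector size'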
