Using RAID in Linux


Introduction

RAID (redundant array of independent disks) is a way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive failure.

✨ There are different RAID levels:

👉 RAID Level 0 (Stripe Volume)

Combines unallocated space from two or more disks (up to a maximum of 32) into a single volume
When data is written to the stripe volume, it is distributed evenly across all disks in fixed-size chunks (traditionally described as 64 KB blocks; mdadm defaults to a 512 KB chunk, as the output later shows)
Provides improved performance, but no fault tolerance
Performance improves with more disks

👉 RAID Level 1 (Mirror Volume)

Requires an even number of disks
Mirrors an existing simple volume
Provides fault tolerance
Available disk capacity is half the total disk capacity

👉 RAID Level 5 (Stripe with Parity)

Requires a minimum of three disks
Provides fault tolerance with a single additional disk
Uses parity bits for error checking
Available disk capacity is the total disk capacity minus one disk's capacity

👉 RAID Level 6 (Dual Parity)

Requires a minimum of four disks
Can recover from the failure of up to two disks, addressing the single-disk limit that is a weakness of RAID 5
Uses dual parity bits for error checking
Available disk capacity is the total disk capacity minus two disks' capacity

👉 RAID Level 1+0

Requires a minimum of four disks
Creates RAID 1 mirrored pairs and then stripes across them as RAID 0
Provides excellent reliability and performance, but is less space-efficient
Available disk capacity is half the total disk capacity
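
As a quick worked example of the capacity rules above, suppose each level is built from four 1 TB disks (illustrative figures, separate from the lab below):

RAID 0: 4 TB usable (sum of all disks)
RAID 1 (mirrored pairs): 2 TB usable (half of the total)
RAID 5: 3 TB usable (total minus one disk)
RAID 6: 2 TB usable (total minus two disks)
RAID 1+0: 2 TB usable (half of the total)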

Simple hands-on to test every level

I will be using CentOS 7 installed in a VM.

We can first check,

[root@Linux-1 ~]# rpm -qa | grep mdadm

As there is nothing installed, we will install this package using 'yum',

[root@Linux-1 ~]# yum -y install mdadm

Now,

[root@Linux-1 ~]# rpm -qa | grep mdadm
mdadm-4.1-9.el7_9.x86_64

Raid - 0 Configuration

👉 To test RAID 0, add two new HDDs to the VM

*Since we already have two, we can start right away!

We will first make RAID system partitions for /dev/sdb and /dev/sdc,

[root@Linux-1 ~]# fdisk /dev/sdb

Command (m for help): n

Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

# Checking with p

Command (m for help): p

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   fd  Linux raid

# Same for /dev/sdc

[root@Linux-1 ~]# fdisk /dev/sdc

Command (m for help): n

Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

# Checking with p

Command (m for help): p

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2097151     1047552   fd  Linux raid

We will save and exit using 'w'.

👉 In Linux, a device file is normally required to control a device, but at this point there is no device file for the RAID device we are about to build,
so we create one manually. The command for this is mknod, and its basic form is mknod [device file name] [device file type] [major number] [minor number].
The device file type is b, (c, u), or p: b means a block device, p a FIFO, and c or u a character special file.
The major and minor numbers have no special meaning on their own; device files that play a similar role share the same major number, and the minor number distinguishes the individual devices.
By convention, md devices all use major number 9.

mknod /dev/md0 b 9 0
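
We can verify the node we just made with ls; the leading 'b' and the '9, 0' pair match the block type, major number, and minor number passed to mknod (illustrative output):

[root@Linux-1 ~]# ls -l /dev/md0
brw-r--r--. 1 root root 9, 0 Jan 25 12:40 /dev/md0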

Now, we will use the 'mdadm' command to create the RAID device,

[root@Linux-1 ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Fail to create md0 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
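
As an optional check, /proc/mdstat gives a one-glance view of the running array; output along these lines (member order may differ):

[root@Linux-1 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      2091008 blocks super 1.2 512k chunks

unused devices: <none>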

This writes the RAID 0 information into the device file we created above.

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=Linux-1:0 UUID=d35b06b8:3ba0c441:8ba52bb1:02fa155d

Checking after the information is written, we can confirm that the device's UUID and other details are displayed based on what we entered,

[root@Linux-1 ~]# mdadm --query --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 25 12:41:04 2023
        Raid Level : raid0
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 12:41:04 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : Linux-1:0  (local to host Linux-1)
              UUID : d35b06b8:3ba0c441:8ba52bb1:02fa155d
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Formatting the RAID device with the xfs file system,

[root@Linux-1 ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=522752, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Finally, we make the /raid0 directory and mount /dev/md0 on it,

[root@Linux-1 ~]# mkdir /raid0
[root@Linux-1 ~]# mount /dev/md0 /raid0
[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md0                          2.0G   33M  2.0G   2% /raid0

We can save the md information into /etc/mdadm.conf, as the device number can change when the system reboots.

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
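
The file now simply holds the ARRAY line from the scan, which mdadm reads when assembling the array at boot (contents as captured above):

[root@Linux-1 ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=Linux-1:0 UUID=d35b06b8:3ba0c441:8ba52bb1:02fa155d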

Now, let's configure auto mount,

[root@Linux-1 ~]# vi /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0

/dev/md0        /raid0          xfs     defaults        0 0

Now, we can reboot the system to check if the auto mount works.
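
Before rebooting, the new fstab entry can also be sanity-checked in place by unmounting and letting mount -a re-read the file; a quick sketch:

[root@Linux-1 ~]# umount /raid0
[root@Linux-1 ~]# mount -a
[root@Linux-1 ~]# df -h | grep raid0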

Raid - 1 Configuration

👉 Revert the VM snapshot to its initial state!!

We will first add three 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, and sdd using the fdisk command, as we did above.

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd

We will use the mknod command here,

[root@Linux-1 ~]# mknod /dev/md1 b 9 1

Now we will use the mdadm command.

[root@Linux-1 ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? Y
mdadm: Fail to create md1 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Because RAID 1 stores duplicated data, the /boot area should not normally be configured on RAID.
In other words, boot-related data is not a good fit for a RAID 1 device (as the warning above notes, the boot loader must understand md/v1.x metadata for this to work).
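
If a mirrored /boot were genuinely required, the warning above points at the older 0.90 metadata format, which sits at the end of the device rather than the start; a minimal sketch, for illustration only:

[root@Linux-1 ~]# mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1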

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 name=Linux-1:1 UUID=91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:10:23 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Formatting using the xfs file system,

[root@Linux-1 ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=261632, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Finally, we will mount the /dev/md1 partition,

[root@Linux-1 ~]# mkdir /raid1
[root@Linux-1 ~]# mount /dev/md1 /raid1

We can confirm,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md1                         1019M   33M  987M   4% /raid1

To save the md information to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Configuring auto mount,

[root@Linux-1 ~]# vi /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0

/dev/md1        /raid1          xfs     defaults        0 0

Now, we can reboot the system to check if the auto mount works.

We can now test whether our RAID 1 is working. We will power off the Linux system and remove one hard disk from the VM (remove the disk holding one of the RAID 1 partitions).

This leaves the RAID 1 array degraded, so we will use a new 1 GB HDD to restore redundancy.
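
As an alternative to pulling the virtual disk, a comparable degraded state can be produced purely in software by marking a member faulty and removing it; a sketch, assuming /dev/sdc1 is the member to fail:

[root@Linux-1 ~]# mdadm /dev/md1 -f /dev/sdc1
[root@Linux-1 ~]# mdadm /dev/md1 -r /dev/sdc1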

[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:29:00 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1

💡 One thing to watch out for here: once a physical HDD is deleted, checking with fdisk -l /dev/sd* shows that the disks' letter names have changed.
We went from three HDDs to two. They were previously named /dev/sdb, /dev/sdc, and /dev/sdd, but now that one disk (/dev/sdc) has been deleted,
they appear as /dev/sdb and /dev/sdc rather than /dev/sdb and /dev/sdd. When the system boots, it does not leave the deleted HDD's name vacant;
the disks after the deleted one are renamed sequentially.

🚫 Because we removed the HDD, it is shown in the Removed state.
In practice, however, an HDD is rarely physically removed; far more often the disk develops a fault, and the device is then said to be in the Failed state.
When a device is Failed, the failed disk must first be removed from the md device before the recovery can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md1 (release the mount)
  2. mdadm /dev/md1 -r /dev/sdc1 (remove the failed device from the md device)
  3. Proceed with the recovery, as shown below
[root@Linux-1 ~]# mdadm /dev/md1 --add /dev/sdc1
mdadm: added /dev/sdc1
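
While the mirror resynchronizes onto the new member, /proc/mdstat shows live progress; illustrative output (percentage and speed will differ):

[root@Linux-1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[2] sdb1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      [===========>.........]  recovery = 58.3% (610048/1046528) finish=0.1min speed=203349K/sec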

Now if we check,

[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:48:40 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1

For the final step,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Raid - 5 Configuration

👉 Revert the VM snapshot to its initial state!!

We will first add five 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, and sdf using the fdisk command, as before.

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md5 b 9 5

Using the mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md5 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md5 metadata=1.2 name=Linux-1:5 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=8, agsize=98048 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=784128, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid5
[root@Linux-1 ~]# mount /dev/md5 /raid5

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md5                          3.0G   33M  3.0G   2% /raid5

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Configuring the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md5        /raid5          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

Raid - 5 Recovery

[root@Linux-1 ~]# halt

๐Ÿ‘‰ ์ด ์ž‘์—…์œผ๋กœ ์ธํ•˜์—ฌ ๊ธฐ์กด์— RAID 5๋ฒˆ์— ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๊ฒŒ ๋  ๊ฒƒ์ด๋ฉฐ, ์šฐ๋ฆฌ๋Š” ์ƒˆ๋กœ์šด HDD 1GB๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ณต๊ตฌ ์ž‘์—…์„ ์ง„ํ–‰ํ•œ๋‹ค

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 12:45:34 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 12:59:29 2017
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:5  (local to host RAID)
           UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       -       0        0        3      removed

👉 Because we removed the HDD, it is shown in the Removed state. In practice, however, an HDD is rarely physically removed; far more often the disk develops a fault, and the device is then said to be in the Failed state.

When a device is Failed, the failed disk must first be removed from the md device before the recovery can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md5 (release the mount)
  2. mdadm /dev/md5 -r /dev/sdb1 (remove the failed device from the md device)
  3. Proceed with the recovery, as shown below
[root@Linux-1 ~]# mdadm /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1

The recovery disk added above is assigned to the md5 device as the rebuild target, and we can watch the rebuild progress.

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 13:51:00 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 13:55:40 2017
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 95% complete

           Name : RAID:5  (local to host RAID)
           UUID : 5b78e0c0:648d86dd:9fa5f44d:fea935de
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      spare rebuilding   /dev/sde1
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 12:45:34 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 13:02:36 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:5  (local to host RAID)
           UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Raid - 6 Configuration

👉 Revert the VM snapshot to its initial state!!

We will first add six 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, sdf, and sdg.

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md6 b 9 6

Using the mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md6 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md6 metadata=1.2 name=Linux-1:6 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid6
[root@Linux-1 ~]# mount /dev/md6 /raid6

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md6                          2.0G   33M  2.0G   2% /raid6

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Configuring the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md6        /raid6          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

Raid - 6 Recovery

[root@Linux-1 ~]# halt

๐Ÿ‘‰ ์ด ์ž‘์—…์œผ๋กœ ์ธํ•˜์—ฌ ๊ธฐ์กด์— Raid 6๋ฒˆ์— ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๊ฒŒ ๋  ๊ฒƒ์ด๋ฉฐ, ์šฐ๋ฆฌ๋Š” HDD 1GB * 2๊ฐœ๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ณต๊ตฌ ์ž‘์—…์„ ์ง„ํ–‰ํ•œ๋‹ค.

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
        Version : 1.2
  Creation Time : Fri Jun 23 14:02:07 2017
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 14:09:38 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:6  (local to host RAID)
           UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed
       -       0        0        3      removed

👉 Because we removed the HDDs, they are shown in the Removed state.
In practice, however, an HDD is rarely physically removed; far more often the disk develops a fault, and the device is then said to be in the Failed state. When a device is Failed, the failed disk must first be removed from the md device before the recovery can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md6 (release the mount)
  2. mdadm /dev/md6 -r /dev/sdb1 (remove the failed device from the md device)
  3. Proceed with the recovery, as shown below
[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sdd1
mdadm: added /dev/sdd1

[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sde1
mdadm: added /dev/sde1

Using the recovery HDDs, the rebuild proceeds,

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
        Version : 1.2
  Creation Time : Fri Jun 23 14:02:07 2017
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 14:14:09 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:6  (local to host RAID)
           UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
         Events : 58

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

Checking the state after the recovery is complete,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Raid - 1+0 Configuration

👉 Revert the VM snapshot to its initial state!!

We will first add six 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, sdf, and sdg.

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md10 b 9 10

Using the mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md10 metadata=1.2 name=RAID:10 UUID=3d4080a1:2669cb55:1411317c:dcdf8fbd

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md10
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10              isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid10
[root@Linux-1 ~]# mount /dev/md10 /raid10

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md10                          2.0G   33M  2.0G   2% /raid10

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Configuring the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md10        /raid10          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

Raid - 1+0 Recovery

[root@Linux-1 ~]# halt

👉 With RAID 1+0, forcibly deleting a virtual disk makes the MD device disappear, so instead we forcibly mark disks as Failed and then carry out the recovery.
One caution: never fail both HDDs of the same mirror set at the same time; fail only one disk per set.

( Failing /dev/sdb and /dev/sde works. )
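
To double-check which half of each mirror a partition belongs to before failing anything, the set-A / set-B column of the detail output can be filtered; a quick sketch (same output as shown earlier):

[root@Linux-1 ~]# mdadm --query --detail /dev/md10 | grep 'set-'
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1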

💡 With the recoverable levels RAID 1, RAID 5, and RAID 6, the system can still boot after an HDD is removed, but with RAID 0 and RAID 1+0 it cannot.
To verify this: configure RAID 1, 5, or 6, set up the mount, create data on each device, delete one or two HDDs, and reboot; the data can still be recovered and is displayed normally.

[root@Linux-1 ~]# umount /dev/md10
[root@Linux-1 ~]# mdadm /dev/md10 -f /dev/sdb1 /dev/sde1
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:08:31 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       -       0        0        3      removed

       0       8       17        -      faulty   /dev/sdb1
       3       8       65        -      faulty   /dev/sde1 
[root@Linux-1 ~]# mdadm /dev/md10 -r /dev/sdb1 /dev/sde1 
mdadm: hot removed /dev/sdb1 from /dev/md10
mdadm: hot removed /dev/sde1 from /dev/md10
[root@Linux-1 ~]# reboot
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:13:57 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 21

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       -       0        0        3      removed
[root@Linux-1 ~]# mdadm /dev/md10 --add /dev/sdf1 /dev/sdg1
mdadm: added /dev/sdf1
mdadm: added /dev/sdg1
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:29:22 2017
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 48

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync set-A   /dev/sdg1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       4       8       81        3      active sync set-B   /dev/sdf1

Checking the state after the recovery is complete,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Deleting the RAID array

👉 After this hands-on, I needed to delete the RAID system that we configured.

  1. Delete the mount-related information

    [root@Linux-1 ~]# umount /dev/md10
    [root@Linux-1 ~]# vi /etc/fstab
    
  2. Delete the md device

    [root@Linux-1 ~]# mdadm -S /dev/md10
    mdadm: stopped /dev/md10
    
  3. Reset (zero) the superblocks of the partitions used by the md device

    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdb1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdc1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdd1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sde1
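
    To confirm the cleanup worked, --examine should no longer find an md superblock on any of the partitions; an illustrative check:

    [root@Linux-1 ~]# mdadm --examine /dev/sdb1
    mdadm: No md superblock detected on /dev/sdb1.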
    

In this post, I discussed what RAID is and how to set up each level. I also covered how to remove a failed device and rebuild the array from a replacement disk.

