Damaging and Repairing a RAID Array
1. Mark /dev/sdb as faulty to simulate a failed disk, then confirm the array status:

[root@linuxprobe ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@linuxprobe ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 8 08:11:00 2017
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Fri May 8 08:27:18 2017
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : linuxprobe.com:0 (local to host linuxprobe.com)
           UUID : f2993bbd:99c1eb63:bd61d4d4:3f06c3b0
         Events : 21
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       0       8       16        -      faulty        /dev/sdb
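The `State : clean, degraded` line in the output above is the key health indicator, and it is easy to check from a script. The sketch below is a minimal illustration, not part of the original procedure: the `check_state` helper name is an assumption, and instead of querying a live array it parses a here-document containing the same lines shown above. On a real system you would pipe `mdadm -D /dev/md0` into it.

```shell
# Minimal sketch: extract the State field from `mdadm -D` output.
# check_state is a hypothetical helper; on a live system you would run:
#   mdadm -D /dev/md0 | check_state
check_state() {
    # mdadm prints "          State : clean, degraded"; split on ": "
    awk -F': ' '/^ *State :/ {print $2}'
}

# Feed the output shown in the text above via a here-document.
state=$(check_state <<'EOF'
          State : clean, degraded
 Active Devices : 3
 Failed Devices : 1
EOF
)
echo "Array state: $state"
case "$state" in
    *degraded*) echo "WARNING: array is degraded" ;;
    *)          echo "array is healthy" ;;
esac
```

With the sample input above this reports the degraded state, matching what `mdadm -D` shows after /dev/sdb was failed.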
2. Because a RAID 10 array tolerates the failure of one disk within a RAID 1 mirror pair without losing service, you can still create and delete files in the /RAID directory normally at this point. After purchasing a new hard disk, simply use the mdadm command to restore the array; however, because the disks here are simulated by a virtual machine, a reboot is required before the new disk can be added to the RAID array.

[root@linuxprobe ~]# umount /RAID
[root@linuxprobe ~]# mdadm /dev/md0 -a /dev/sdb
[root@linuxprobe ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jan 30 00:08:56 2017
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Mon Jan 30 00:19:53 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : localhost.localdomain:0 (local to host localhost.localdomain)
           UUID : d3491c05:cfc81ca0:32489f04:716a2cf0
         Events : 56
    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
[root@linuxprobe ~]# mount -a
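After `mdadm /dev/md0 -a /dev/sdb`, the array does not return to `clean` instantly: the kernel resynchronizes the new disk in the background, and /proc/mdstat reports the progress. The sketch below is an illustration only: the `progress` helper and the sample recovery line are assumptions modeled on typical /proc/mdstat output, not taken from this exercise. On a live system you could watch the real file, e.g. `watch cat /proc/mdstat`.

```shell
# Minimal sketch: pull the recovery percentage out of a /proc/mdstat line.
# progress is a hypothetical helper; on a live system you might run:
#   grep -o '[0-9.]*%' /proc/mdstat
progress() {
    grep -o '[0-9.]*%'
}

# Sample line in the style of /proc/mdstat during a rebuild (illustrative values).
sample='[>....................]  recovery =  3.7% (782336/20954624) finish=2.5min speed=130389K/sec'
echo "rebuild progress: $(printf '%s\n' "$sample" | progress)"
```

Once the resync finishes, `mdadm -D /dev/md0` shows `State : clean` again, as in the output above, and the filesystem can be remounted with `mount -a`.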