I'm seeing the same thing with RAID 10:
lsof | grep md1
md1_raid1   198 root  cwd   DIR      9,1     4096      2 /
md1_raid1   198 root  rtd   DIR      9,1     4096      2 /
md1_raid1   198 root  txt   unknown                      /proc/198/exe
jbd2/md1-   212 root  cwd   DIR      9,1     4096      2 /
jbd2/md1-   212 root  rtd   DIR      9,1     4096      2 /
jbd2/md1-   212 root  txt   unknown                      /proc/212/exe
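Those lsof hits are just the md driver and ext4 journal kernel threads for md1, not userspace processes holding the array; the "txt unknown /proc/PID/exe" entries are normal for kernel threads. A quick way to confirm that (PIDs taken from the output above, yours will differ) is to check that their parent is kthreadd, PID 2:

ps -o pid,ppid,comm -p 198,212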
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdb1[1] sdc1[2] sda1[0] sdd1[3]
      7805952 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid10 sdc2[2] sda2[0] sdd2[3]
      1945447424 blocks super 1.2 512K chunks 2 near-copies [4/3] [U_UU]
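So md1 is running degraded: slot 1 is empty ([U_UU]), and since sdb1 is still active in md0, the missing member is presumably /dev/sdb2. Before touching the array I'd check what the kernel logged about the drop and whether that partition still carries its member superblock; a minimal check, assuming sdb2 really is the dropped member:

dmesg | grep -iE 'sdb|md1'
mdadm --examine /dev/sdb2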
mdadm -D -v /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Sep  3 17:42:32 2015
     Raid Level : raid10
     Array Size : 1945447424 (1855.32 GiB 1992.14 GB)
  Used Dev Size : 972723712 (927.66 GiB 996.07 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Sep 25 09:17:07 2015
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : storage:1  (local to host storage)
           UUID : 97aea7eb:052d12ee:a91700d6:dbc35dcb
         Events : 234

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
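Assuming the disk itself is healthy and /dev/sdb2 is the dropped member, re-adding it should kick off a resync. A sketch of what I'd try, not a guaranteed fix; if SMART shows the drive failing, replace it instead:

smartctl -a /dev/sdb                 # check the drive's SMART status first
mdadm /dev/md1 --re-add /dev/sdb2    # try a re-add using the existing member superblock
mdadm /dev/md1 --add /dev/sdb2       # if re-add is refused, a plain add does a full rebuild
watch cat /proc/mdstat               # watch recovery progress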