The array ran without problems for half a year, but then it started acting up: in different sessions different disks drop out, even though a surface scan of each one comes back clean.
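For reference, the surface check amounted to roughly the following (a sketch; smartctl kicks off the drive's own long self-test, and badblocks does a non-destructive read-only pass):

sudo smartctl -t long /dev/sdd      # start the drive's extended self-test
sudo smartctl -a /dev/sdd           # read the result once the test finishes

sudo badblocks -sv /dev/sdd         # read-only surface pass with progress output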
sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 24 18:33:25 2013
Raid Level : raid5
Array Size : 8790401472 (8383.18 GiB 9001.37 GB)
Used Dev Size : 2930133824 (2794.39 GiB 3000.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : /var/md0_intent
Update Time : Tue Jun 10 14:07:08 2014
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : home:0 (local to host home)
UUID : 980e5006:c88fb1dd:8225b599:6f48b251
Events : 72976
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       0        0        2      removed
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty spare   /dev/sdd1
sudo mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 980e5006:c88fb1dd:8225b599:6f48b251
Name : home:0 (local to host home)
Creation Time : Sat Aug 24 18:33:25 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790401472 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267648 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b46f7141:b11aa224:2ff52d60:ead818fe
Update Time : Tue Jun 10 14:07:08 2014
Checksum : 270e87a - correct
Events : 72976
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AA.A ('A' == active, '.' == missing)
sudo mdadm -E /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 980e5006:c88fb1dd:8225b599:6f48b251
Name : home:0 (local to host home)
Creation Time : Sat Aug 24 18:33:25 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790401472 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267648 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d9595226:24ac120a:10513a5a:8038f950
Update Time : Tue Jun 10 14:07:08 2014
Checksum : 19af1f89 - correct
Events : 72976
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AA.A ('A' == active, '.' == missing)
sudo mdadm -E /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 980e5006:c88fb1dd:8225b599:6f48b251
Name : home:0 (local to host home)
Creation Time : Sat Aug 24 18:33:25 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790401472 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267648 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : de9ca131:ecebd69b:344affa5:9742df1b
Update Time : Mon Jun 9 23:00:21 2014
Checksum : cd6bcae4 - correct
Events : 71397
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)
sudo mdadm -E /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 980e5006:c88fb1dd:8225b599:6f48b251
Name : home:0 (local to host home)
Creation Time : Sat Aug 24 18:33:25 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790401472 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267648 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : e92afca6:c6af40dd:061d3624:e8009e6d
Update Time : Tue Jun 10 14:07:08 2014
Checksum : 5427889c - correct
Events : 72976
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)
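To compare the superblocks at a glance, roughly this loop works (the telling fields are Events and Update Time: /dev/sdd1 has fallen behind at 71397 while the other three are at 72976):

for d in /dev/sd[bcde]1; do
    echo "== $d =="
    sudo mdadm -E "$d" | grep -E 'Update Time|Events|Device Role|Array State'
done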
After a session boots, the array usually comes up with three disks; when I add the fourth one back (commands sketched below), it immediately gets marked as a faulty spare. That said, a forced reassembly did help once:
sudo mdadm --assemble --scan --force
mdadm: forcing event count in /dev/sde1(3) from 58161 upto 71394
mdadm: clearing FAULTY flag for device 0 in /dev/md0 for /dev/sde1
mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 assembled from 4 drives - not enough to start the array.
After that, all four disks showed up as active sync... until the next session.
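For completeness, the manual variant of that recovery, as far as I can reconstruct it (a sketch; device names are taken from the output above, and since the array has an external intent bitmap, a --re-add of the stale member should resync only the chunks marked dirty rather than the whole disk):

sudo mdadm --stop /dev/md0

# force-assemble from the named members, overriding the stale event count
sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# if the stale disk is still left out, try re-adding it;
# a plain --add is what gets it flagged as a faulty spare for me
sudo mdadm /dev/md0 --re-add /dev/sdd1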
How do I cure this? Surely not by re-creating the array? And how big is the risk of losing data if I go that route?
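To be explicit, by re-creating I mean something along these lines (just a sketch of what I am afraid I would have to run; the device order follows the Device Role fields above, and --assume-clean skips the initial sync so the existing data is left alone, but a wrong order, chunk size, or data offset here would be fatal):

sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    --chunk=64 --layout=left-symmetric --metadata=1.2 \
    --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1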