#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can cause
	  a several-second delay in the boot time due to the various
	  synchronisation steps that are part of this process.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.
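
# A minimal .config sketch (illustrative values, not part of this file)
# showing how the options above combine for a software RAID-1 setup: the
# MD core is built in so that MD_AUTODETECT (which requires BLK_DEV_MD=y)
# can assemble arrays at boot, while the other personalities are modules.
#
#   CONFIG_MD=y
#   CONFIG_BLK_DEV_MD=y
#   CONFIG_MD_AUTODETECT=y
#   CONFIG_MD_LINEAR=m
#   CONFIG_MD_RAID0=m
#   CONFIG_MD_RAID1=y
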
config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only the capacity of the smallest
	  device will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.
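
# A worked instance of the capacity formulas above (example figures are
# illustrative, not taken from the help text): with N = 4 drives of
# C = 500 MB each,
#
#   RAID-5: 500 * (4 - 1) = 1500 MB usable, tolerating one drive failure
#   RAID-6: 500 * (4 - 2) = 1000 MB usable, tolerating two drive failures
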
config MULTICORE_RAID456
	bool "RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)"
	depends on MD_RAID456
	depends on SMP
	depends on EXPERIMENTAL
	---help---
	  Enable the raid456 module to dispatch per-stripe raid operations to a
	  thread pool.

	  If unsure, say N.

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  with the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors. It is useful for testing.

	  If unsure, say N.

config BLK_DEV_DM
	tristate "Device mapper support"
	---help---
	  Device-mapper is a low-level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_DEBUG
	boolean "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found at

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes. This is also
	  needed for live data migration tools such as 'pvmove'.

config DM_RAID
	tristate "RAID 4/5/6 target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID4, RAID5 and RAID6 mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging (EXPERIMENTAL)"
	depends on DM_MIRROR && EXPERIMENTAL && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# Nasty syntax, but it makes DM_MULTIPATH independent of SCSI_DH if
	# the latter isn't defined; if it is, DM_MULTIPATH must depend on it.
	# We get a build error if SCSI_DH=m and DM_MULTIPATH=y.
	depends on SCSI_DH || !SCSI_DH
	---help---
	  Allow volume managers to support multipath hardware.
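
# A short note on the "SCSI_DH || !SCSI_DH" idiom above, sketched from the
# usual Kconfig tristate rules (this evaluation table is an added
# illustration, not part of the original comment):
#
#   SCSI_DH=n  ->  n || !n = y   (DM_MULTIPATH may be y or m)
#   SCSI_DH=m  ->  m || !m = m   (DM_MULTIPATH is limited to m)
#   SCSI_DH=y  ->  y || !y = y   (DM_MULTIPATH may be y or m)
#
# The m case is what prevents the SCSI_DH=m / DM_MULTIPATH=y build error.
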
config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Generate udev events for DM events.

endif # MD
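
# A minimal .config sketch (illustrative, not part of this file) for a
# typical LVM2-capable build of the device-mapper side: the core driver is
# built in, with the snapshot and mirror targets (the latter also needed
# for 'pvmove') and dm-crypt as modules.
#
#   CONFIG_BLK_DEV_DM=y
#   CONFIG_DM_SNAPSHOT=m
#   CONFIG_DM_MIRROR=m
#   CONFIG_DM_CRYPT=m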