1# 2# Block device driver configuration 3# 4 5menuconfig MD 6 bool "Multiple devices driver support (RAID and LVM)" 7 depends on BLOCK 8 help 9 Support multiple physical spindles through a single logical device. 10 Required for RAID and logical volume management. 11 12if MD 13 14config BLK_DEV_MD 15 tristate "RAID support" 16 ---help--- 17 This driver lets you combine several hard disk partitions into one 18 logical block device. This can be used to simply append one 19 partition to another one or to combine several redundant hard disks 20 into a RAID1/4/5 device so as to provide protection against hard 21 disk failures. This is called "Software RAID" since the combining of 22 the partitions is done by the kernel. "Hardware RAID" means that the 23 combining is done by a dedicated controller; if you have such a 24 controller, you do not need to say Y here. 25 26 More information about Software RAID on Linux is contained in the 27 Software RAID mini-HOWTO, available from 28 <http://www.tldp.org/docs.html#howto>. There you will also learn 29 where to get the supporting user space utilities raidtools. 30 31 If unsure, say N. 32 33config MD_AUTODETECT 34 bool "Autodetect RAID arrays during kernel boot" 35 depends on BLK_DEV_MD=y 36 default y 37 ---help--- 38 If you say Y here, then the kernel will try to autodetect raid 39 arrays as part of its boot process. 40 41 If you don't use raid and say Y, this autodetection can cause 42 a several-second delay in the boot time due to various 43 synchronisation steps that are part of this step. 44 45 If unsure, say Y. 46 47config MD_LINEAR 48 tristate "Linear (append) mode" 49 depends on BLK_DEV_MD 50 ---help--- 51 If you say Y here, then your multiple devices driver will be able to 52 use the so-called linear mode, i.e. it will combine the hard disk 53 partitions by simply appending one to the other. 54 55 To compile this as a module, choose M here: the module 56 will be called linear. 57 58 If unsure, say Y. 59 60config MD_RAID0 61 tristate "RAID-0 (striping) mode" 62 depends on BLK_DEV_MD 63 ---help--- 64 If you say Y here, then your multiple devices driver will be able to 65 use the so-called raid0 mode, i.e. it will combine the hard disk 66 partitions into one logical device in such a fashion as to fill them 67 up evenly, one chunk here and one chunk there. This will increase 68 the throughput rate if the partitions reside on distinct disks. 69 70 Information about Software RAID on Linux is contained in the 71 Software-RAID mini-HOWTO, available from 72 <http://www.tldp.org/docs.html#howto>. There you will also 73 learn where to get the supporting user space utilities raidtools. 74 75 To compile this as a module, choose M here: the module 76 will be called raid0. 77 78 If unsure, say Y. 79 80config MD_RAID1 81 tristate "RAID-1 (mirroring) mode" 82 depends on BLK_DEV_MD 83 ---help--- 84 A RAID-1 set consists of several disk drives which are exact copies 85 of each other. In the event of a mirror failure, the RAID driver 86 will continue to use the operational mirrors in the set, providing 87 an error free MD (multiple device) to the higher levels of the 88 kernel. In a set with N drives, the available space is the capacity 89 of a single drive, and the set protects against a failure of (N - 1) 90 drives. 91 92 Information about Software RAID on Linux is contained in the 93 Software-RAID mini-HOWTO, available from 94 <http://www.tldp.org/docs.html#howto>. There you will also 95 learn where to get the supporting user space utilities raidtools. 96 97 If you want to use such a RAID-1 set, say Y. To compile this code 98 as a module, choose M here: the module will be called raid1. 99 100 If unsure, say Y. 101 102config MD_RAID10 103 tristate "RAID-10 (mirrored striping) mode" 104 depends on BLK_DEV_MD 105 ---help--- 106 RAID-10 provides a combination of striping (RAID-0) and 107 mirroring (RAID-1) with easier configuration and more flexible 108 layout. 109 Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to 110 be the same size (or at least, only as much as the smallest device 111 will be used). 112 RAID-10 provides a variety of layouts that provide different levels 113 of redundancy and performance. 114 115 RAID-10 requires mdadm-1.7.0 or later, available at: 116 117 ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/ 118 119 If unsure, say Y. 120 121config MD_RAID456 122 tristate "RAID-4/RAID-5/RAID-6 mode" 123 depends on BLK_DEV_MD 124 select RAID6_PQ 125 select ASYNC_MEMCPY 126 select ASYNC_XOR 127 select ASYNC_PQ 128 select ASYNC_RAID6_RECOV 129 ---help--- 130 A RAID-5 set of N drives with a capacity of C MB per drive provides 131 the capacity of C * (N - 1) MB, and protects against a failure 132 of a single drive. For a given sector (row) number, (N - 1) drives 133 contain data sectors, and one drive contains the parity protection. 134 For a RAID-4 set, the parity blocks are present on a single drive, 135 while a RAID-5 set distributes the parity across the drives in one 136 of the available parity distribution methods. 137 138 A RAID-6 set of N drives with a capacity of C MB per drive 139 provides the capacity of C * (N - 2) MB, and protects 140 against a failure of any two drives. For a given sector 141 (row) number, (N - 2) drives contain data sectors, and two 142 drives contains two independent redundancy syndromes. Like 143 RAID-5, RAID-6 distributes the syndromes across the drives 144 in one of the available parity distribution methods. 145 146 Information about Software RAID on Linux is contained in the 147 Software-RAID mini-HOWTO, available from 148 <http://www.tldp.org/docs.html#howto>. There you will also 149 learn where to get the supporting user space utilities raidtools. 150 151 If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To 152 compile this code as a module, choose M here: the module 153 will be called raid456. 154 155 If unsure, say Y. 156 157config MD_MULTIPATH 158 tristate "Multipath I/O support" 159 depends on BLK_DEV_MD 160 help 161 MD_MULTIPATH provides a simple multi-path personality for use 162 the MD framework. It is not under active development. New 163 projects should consider using DM_MULTIPATH which has more 164 features and more testing. 165 166 If unsure, say N. 167 168config MD_FAULTY 169 tristate "Faulty test module for MD" 170 depends on BLK_DEV_MD 171 help 172 The "faulty" module allows for a block device that occasionally returns 173 read or write errors. It is useful for testing. 174 175 In unsure, say N. 176 177config BLK_DEV_DM 178 tristate "Device mapper support" 179 ---help--- 180 Device-mapper is a low level volume manager. It works by allowing 181 people to specify mappings for ranges of logical sectors. Various 182 mapping types are available, in addition people may write their own 183 modules containing custom mappings if they wish. 184 185 Higher level volume managers such as LVM2 use this driver. 186 187 To compile this as a module, choose M here: the module will be 188 called dm-mod. 189 190 If unsure, say N. 191 192config DM_DEBUG 193 boolean "Device mapper debugging support" 194 depends on BLK_DEV_DM 195 ---help--- 196 Enable this for messages that may help debug device-mapper problems. 197 198 If unsure, say N. 199 200config DM_BUFIO 201 tristate 202 depends on BLK_DEV_DM 203 ---help--- 204 This interface allows you to do buffered I/O on a device and acts 205 as a cache, holding recently-read blocks in memory and performing 206 delayed writes. 207 208config DM_BIO_PRISON 209 tristate 210 depends on BLK_DEV_DM 211 ---help--- 212 Some bio locking schemes used by other device-mapper targets 213 including thin provisioning. 214 215source "drivers/md/persistent-data/Kconfig" 216 217config DM_CRYPT 218 tristate "Crypt target support" 219 depends on BLK_DEV_DM 220 select CRYPTO 221 select CRYPTO_CBC 222 ---help--- 223 This device-mapper target allows you to create a device that 224 transparently encrypts the data on it. You'll need to activate 225 the ciphers you're going to use in the cryptoapi configuration. 226 227 Information on how to use dm-crypt can be found on 228 229 <http://www.saout.de/misc/dm-crypt/> 230 231 To compile this code as a module, choose M here: the module will 232 be called dm-crypt. 233 234 If unsure, say N. 235 236config DM_SNAPSHOT 237 tristate "Snapshot target" 238 depends on BLK_DEV_DM 239 ---help--- 240 Allow volume managers to take writable snapshots of a device. 241 242config DM_THIN_PROVISIONING 243 tristate "Thin provisioning target" 244 depends on BLK_DEV_DM 245 select DM_PERSISTENT_DATA 246 select DM_BIO_PRISON 247 ---help--- 248 Provides thin provisioning and snapshots that share a data store. 249 250config DM_DEBUG_BLOCK_STACK_TRACING 251 boolean "Keep stack trace of thin provisioning block lock holders" 252 depends on STACKTRACE_SUPPORT && DM_THIN_PROVISIONING 253 select STACKTRACE 254 ---help--- 255 Enable this for messages that may help debug problems with the 256 block manager locking used by thin provisioning. 257 258 If unsure, say N. 259 260config DM_CACHE 261 tristate "Cache target (EXPERIMENTAL)" 262 depends on BLK_DEV_DM 263 default n 264 select DM_PERSISTENT_DATA 265 select DM_BIO_PRISON 266 ---help--- 267 dm-cache attempts to improve performance of a block device by 268 moving frequently used data to a smaller, higher performance 269 device. Different 'policy' plugins can be used to change the 270 algorithms used to select which blocks are promoted, demoted, 271 cleaned etc. It supports writeback and writethrough modes. 272 273config DM_CACHE_MQ 274 tristate "MQ Cache Policy (EXPERIMENTAL)" 275 depends on DM_CACHE 276 default y 277 ---help--- 278 A cache policy that uses a multiqueue ordered by recent hit 279 count to select which blocks should be promoted and demoted. 280 This is meant to be a general purpose policy. It prioritises 281 reads over writes. 282 283config DM_CACHE_CLEANER 284 tristate "Cleaner Cache Policy (EXPERIMENTAL)" 285 depends on DM_CACHE 286 default y 287 ---help--- 288 A simple cache policy that writes back all data to the 289 origin. Used when decommissioning a dm-cache. 290 291config DM_MIRROR 292 tristate "Mirror target" 293 depends on BLK_DEV_DM 294 ---help--- 295 Allow volume managers to mirror logical volumes, also 296 needed for live data migration tools such as 'pvmove'. 297 298config DM_RAID 299 tristate "RAID 1/4/5/6/10 target" 300 depends on BLK_DEV_DM 301 select MD_RAID1 302 select MD_RAID10 303 select MD_RAID456 304 select BLK_DEV_MD 305 ---help--- 306 A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mappings 307 308 A RAID-5 set of N drives with a capacity of C MB per drive provides 309 the capacity of C * (N - 1) MB, and protects against a failure 310 of a single drive. For a given sector (row) number, (N - 1) drives 311 contain data sectors, and one drive contains the parity protection. 312 For a RAID-4 set, the parity blocks are present on a single drive, 313 while a RAID-5 set distributes the parity across the drives in one 314 of the available parity distribution methods. 315 316 A RAID-6 set of N drives with a capacity of C MB per drive 317 provides the capacity of C * (N - 2) MB, and protects 318 against a failure of any two drives. For a given sector 319 (row) number, (N - 2) drives contain data sectors, and two 320 drives contains two independent redundancy syndromes. Like 321 RAID-5, RAID-6 distributes the syndromes across the drives 322 in one of the available parity distribution methods. 323 324config DM_LOG_USERSPACE 325 tristate "Mirror userspace logging" 326 depends on DM_MIRROR && NET 327 select CONNECTOR 328 ---help--- 329 The userspace logging module provides a mechanism for 330 relaying the dm-dirty-log API to userspace. Log designs 331 which are more suited to userspace implementation (e.g. 332 shared storage logs) or experimental logs can be implemented 333 by leveraging this framework. 334 335config DM_ZERO 336 tristate "Zero target" 337 depends on BLK_DEV_DM 338 ---help--- 339 A target that discards writes, and returns all zeroes for 340 reads. Useful in some recovery situations. 341 342config DM_MULTIPATH 343 tristate "Multipath target" 344 depends on BLK_DEV_DM 345 # nasty syntax but means make DM_MULTIPATH independent 346 # of SCSI_DH if the latter isn't defined but if 347 # it is, DM_MULTIPATH must depend on it. We get a build 348 # error if SCSI_DH=m and DM_MULTIPATH=y 349 depends on SCSI_DH || !SCSI_DH 350 ---help--- 351 Allow volume managers to support multipath hardware. 352 353config DM_MULTIPATH_QL 354 tristate "I/O Path Selector based on the number of in-flight I/Os" 355 depends on DM_MULTIPATH 356 ---help--- 357 This path selector is a dynamic load balancer which selects 358 the path with the least number of in-flight I/Os. 359 360 If unsure, say N. 361 362config DM_MULTIPATH_ST 363 tristate "I/O Path Selector based on the service time" 364 depends on DM_MULTIPATH 365 ---help--- 366 This path selector is a dynamic load balancer which selects 367 the path expected to complete the incoming I/O in the shortest 368 time. 369 370 If unsure, say N. 371 372config DM_DELAY 373 tristate "I/O delaying target" 374 depends on BLK_DEV_DM 375 ---help--- 376 A target that delays reads and/or writes and can send 377 them to different devices. Useful for testing. 378 379 If unsure, say N. 380 381config DM_UEVENT 382 bool "DM uevents" 383 depends on BLK_DEV_DM 384 ---help--- 385 Generate udev events for DM events. 386 387config DM_FLAKEY 388 tristate "Flakey target" 389 depends on BLK_DEV_DM 390 ---help--- 391 A target that intermittently fails I/O for debugging purposes. 392 393config DM_VERITY 394 tristate "Verity target support" 395 depends on BLK_DEV_DM 396 select CRYPTO 397 select CRYPTO_HASH 398 select DM_BUFIO 399 ---help--- 400 This device-mapper target creates a read-only device that 401 transparently validates the data on one underlying device against 402 a pre-generated tree of cryptographic checksums stored on a second 403 device. 404 405 You'll need to activate the digests you're going to use in the 406 cryptoapi configuration. 407 408 To compile this code as a module, choose M here: the module will 409 be called dm-verity. 410 411 If unsure, say N. 412 413endif # MD 414