==================================
DMAengine controller documentation
==================================

Hardware Introduction
=====================

Most of the Slave DMA controllers have the same general principles of
operations.

They have a given number of channels to use for the DMA transfers, and
a given number of request lines.

Requests and channels are pretty much orthogonal: a channel can be
used to serve any of the requests. To simplify, channels are the
entities that will be doing the copy, and requests define which
endpoints are involved.

The request lines actually correspond to physical lines going from the
DMA-eligible devices to the controller itself. Whenever the device
wants to start a transfer, it asserts a DMA request (DRQ) by asserting
that request line.

A very simple DMA controller would only take into account a single
parameter: the transfer size. At each clock cycle, it would transfer a
byte of data from one buffer to another, until the transfer size has
been reached.

That wouldn't work well in the real world, since slave devices might
require a specific number of bits to be transferred in a single
cycle. For example, we may want to transfer as much data as the
physical bus allows to maximize performance when doing a simple memory
copy operation, but our audio device could have a narrower FIFO that
requires data to be written exactly 16 or 24 bits at a time. This is
why most, if not all, DMA controllers can adjust this, using a
parameter called the transfer width.

Moreover, some DMA controllers, whenever the RAM is used as a source
or destination, can group the reads or writes in memory into a buffer,
so instead of having a lot of small memory accesses, which is not
really efficient, you'll get several bigger transfers. This is done
using a parameter called the burst size, which defines how many single
reads/writes it's allowed to do without the controller splitting the
transfer into smaller sub-transfers.

Our theoretical DMA controller would then only be able to do transfers
that involve a single contiguous block of data. However, some of the
transfers we usually have are not contiguous, and we want to copy data
from non-contiguous buffers to a contiguous buffer, which is called
scatter-gather.

DMAEngine, at least for mem2dev transfers, requires support for
scatter-gather. So we're left with two cases here: either we have a
quite simple DMA controller that doesn't support it, and we'll have to
implement it in software, or we have a more advanced DMA controller
that implements scatter-gather in hardware.

The latter are usually programmed using a collection of chunks to
transfer, and whenever the transfer is started, the controller will go
over that collection, doing whatever we programmed there.

This collection is usually either a table or a linked list. You will
then push either the address of the table and its number of elements,
or the first item of the list to one channel of the DMA controller,
and whenever a DRQ is asserted, it will go through the collection to
know where to fetch the data from.

Either way, the format of this collection is completely dependent on
your hardware.
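
As a purely illustrative sketch (no real controller uses this exact
layout, and the names below are made up), one element of such a linked
list could look like the structure below; the fields every chunk needs
are discussed right after it::

    /*
     * Hypothetical in-memory descriptor describing one chunk of a
     * transfer for an imaginary linked-list based controller. The
     * actual layout is entirely hardware specific.
     */
    struct foo_hw_desc {
            u32 src_addr;   /* bus address to read the data from */
            u32 dst_addr;   /* bus address to write the data to */
            u32 size;       /* transfer size of this chunk, in bytes */
            u32 cfg;        /* transfer width, burst size and address
                             * increment flags */
            u32 next;       /* bus address of the next descriptor,
                             * or 0 to stop after this chunk */
    };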

Each DMA controller will require a different structure, but all of
them will require, for every chunk, at least the source and
destination addresses, whether it should increment these addresses or
not and the three parameters we saw earlier: the burst size, the
transfer width and the transfer size.

One last thing: usually, slave devices won't issue DRQs by default,
and you have to enable this in your slave device driver first whenever
you're willing to use DMA.

These were just the general memory-to-memory (also called mem2mem) or
memory-to-device (mem2dev) kinds of transfers. Most devices also
support other kinds of transfers or memory operations that dmaengine
supports, and those will be detailed later in this document.

DMA Support in Linux
====================

Historically, DMA controller drivers have been implemented using the
async TX API, to offload operations such as memory copy, XOR,
cryptography, etc., basically any memory to memory operation.

Over time, the need for memory to device transfers arose, and
dmaengine was extended. Nowadays, the async TX API is written as a
layer on top of dmaengine, and acts as a client. Still, dmaengine
accommodates that API in some cases, and made some design choices to
ensure that it stayed compatible.

For more information on the Async TX API, please look at the relevant
documentation file in Documentation/crypto/async-tx-api.txt.

DMAEngine APIs
==============

``struct dma_device`` Initialization
------------------------------------

Just like any other kernel framework, the whole DMAEngine registration
relies on the driver filling a structure and registering against the
framework. In our case, that structure is ``dma_device``.

The first thing you need to do in your driver is to allocate this
structure. Any of the usual memory allocators will do, but you'll also
need to initialize a few fields in there:

- ``channels``: should be initialized as a list using the
  INIT_LIST_HEAD macro, for example

- ``src_addr_widths``:
  should contain a bitmask of the supported source transfer widths

- ``dst_addr_widths``:
  should contain a bitmask of the supported destination transfer widths

- ``directions``:
  should contain a bitmask of the supported slave directions
  (i.e. excluding mem2mem transfers)

- ``residue_granularity``:
  granularity of the transfer residue reported to dma_set_residue.
  This can be either:

  - Descriptor:
    your device doesn't support any kind of residue
    reporting. The framework will only know that a particular
    transaction descriptor is done.

  - Segment:
    your device is able to report which chunks have been transferred

  - Burst:
    your device is able to report which bursts have been transferred

- ``dev``: should hold the pointer to the ``struct device`` associated
  with your current driver instance.

Supported transaction types
---------------------------

The next thing you need is to set which transaction types your device
(and driver) supports.

Our ``dma_device`` structure has a field called cap_mask that holds
the various types of transactions supported, and you need to modify
this mask using the dma_cap_set function, with various flags depending
on the transaction types you support as an argument.
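
For example, a driver for a slave controller that transfers 32-bit
words to and from device FIFOs could fill these fields at probe time
roughly as follows (``foo`` and ``pdev`` stand for the driver's
private structure and its platform device, and are just placeholders;
the capability flags used here are described in the next section)::

    struct dma_device *dd = &foo->ddev;

    INIT_LIST_HEAD(&dd->channels);

    dd->dev = &pdev->dev;
    dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
    dd->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
    dd->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
    dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

    dma_cap_set(DMA_SLAVE, dd->cap_mask);
    dma_cap_set(DMA_CYCLIC, dd->cap_mask);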

All those capabilities are defined in the ``dma_transaction_type``
enum, in ``include/linux/dmaengine.h``.

Currently, the types available are:

- DMA_MEMCPY

  - The device is able to do memory to memory copies

- DMA_XOR

  - The device is able to perform XOR operations on memory areas

  - Used to accelerate XOR intensive tasks, such as RAID5

- DMA_XOR_VAL

  - The device is able to perform parity checks using the XOR
    algorithm against a memory buffer.

- DMA_PQ

  - The device is able to perform RAID6 P+Q computations, P being a
    simple XOR, and Q being a Reed-Solomon algorithm.

- DMA_PQ_VAL

  - The device is able to perform parity checks using the RAID6 P+Q
    algorithm against a memory buffer.

- DMA_INTERRUPT

  - The device is able to trigger a dummy transfer that will
    generate periodic interrupts

  - Used by the client drivers to register a callback that will be
    called on a regular basis through the DMA controller interrupt

- DMA_PRIVATE

  - The device only supports slave transfers, and as such isn't
    available for async transfers.

- DMA_ASYNC_TX

  - Must not be set by the device, and will be set by the framework
    if needed

  - TODO: What is it about?

- DMA_SLAVE

  - The device can handle device to memory transfers, including
    scatter-gather transfers.

  - While in the mem2mem case we had two distinct types to deal with
    a single chunk to copy or a collection of them, here, we just
    have a single transaction type that is supposed to handle both.

  - If you want to transfer a single contiguous memory buffer,
    simply build a scatter list with only one item.

- DMA_CYCLIC

  - The device can handle cyclic transfers.

  - A cyclic transfer is a transfer where the chunk collection will
    loop over itself, with the last item pointing to the first.

  - It's usually used for audio transfers, where you want to operate
    on a single ring buffer that you will fill with your audio data.

- DMA_INTERLEAVE

  - The device supports interleaved transfers.

  - These transfers can transfer data from a non-contiguous buffer
    to a non-contiguous buffer, as opposed to DMA_SLAVE, which can
    transfer data from a non-contiguous data set to a contiguous
    destination buffer.

  - It's usually used for 2D content transfers, in which case you
    want to transfer a portion of uncompressed data directly to the
    display to display it.

These various types will also affect how the source and destination
addresses change over time.

Addresses pointing to RAM are typically incremented (or decremented)
after each transfer. In case of a ring buffer, they may loop
(DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
are typically fixed.

Device operations
-----------------

Our ``dma_device`` structure also requires a few function pointers in
order to implement the actual logic, now that we described what
operations we were able to perform.

The functions that we have to fill in there, and hence have to
implement, obviously depend on the transaction types you reported as
supported.
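
Continuing the earlier sketch, a slave-only driver could wire these
function pointers up roughly as follows (the ``foo_*`` implementations
are hypothetical and left to the driver); the individual callbacks are
detailed in the list below::

    dd->device_alloc_chan_resources = foo_alloc_chan_resources;
    dd->device_free_chan_resources = foo_free_chan_resources;
    dd->device_prep_slave_sg = foo_prep_slave_sg;
    dd->device_prep_dma_cyclic = foo_prep_dma_cyclic;
    dd->device_config = foo_config;
    dd->device_pause = foo_pause;
    dd->device_resume = foo_resume;
    dd->device_terminate_all = foo_terminate_all;
    dd->device_synchronize = foo_synchronize;
    dd->device_issue_pending = foo_issue_pending;
    dd->device_tx_status = foo_tx_status;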

- ``device_alloc_chan_resources``

- ``device_free_chan_resources``

  - These functions will be called whenever a driver calls
    ``dma_request_channel`` or ``dma_release_channel`` for the
    first/last time on the channel associated to that driver.

  - They are in charge of allocating/freeing all the needed
    resources in order for that channel to be useful for your driver.

  - These functions can sleep.

- ``device_prep_dma_*``

  - These functions match the capabilities you registered
    previously.

  - These functions all take the buffer or the scatterlist relevant
    for the transfer being prepared, and should create a hardware
    descriptor or a list of hardware descriptors from it

  - These functions can be called from an interrupt context

  - Any allocation you might do should be using the GFP_NOWAIT
    flag, in order not to potentially sleep, but without depleting
    the emergency pool either.

  - Drivers should try to pre-allocate any memory they might need
    during the transfer setup at probe time to avoid putting too
    much pressure on the nowait allocator.

  - It should return a unique instance of the
    ``dma_async_tx_descriptor`` structure, which further represents
    this particular transfer.

  - This structure can be initialized using the function
    ``dma_async_tx_descriptor_init``.

  - You'll also need to set two fields in this structure:

    - flags:
      TODO: Can it be modified by the driver itself, or
      should it always be the flags passed in the arguments

    - tx_submit: A pointer to a function you have to implement,
      that is supposed to push the current transaction descriptor to
      a pending queue, waiting for issue_pending to be called.

  - In this structure the function pointer callback_result can be
    initialized in order for the submitter to be notified that a
    transaction has completed. In earlier code the function pointer
    callback was used; however, it does not provide any status for
    the transaction and will be deprecated. The result structure,
    defined as ``dmaengine_result``, that is passed in to
    callback_result has two fields:

    - result: This provides the transfer result defined by
      ``dmaengine_tx_result``. Either success or some error condition.

    - residue: Provides the residue bytes of the transfer for those
      that support residue.

- ``device_issue_pending``

  - Takes the first transaction descriptor in the pending queue,
    and starts the transfer. Whenever that transfer is done, it
    should move to the next transaction in the list.

  - This function can be called in an interrupt context

- ``device_tx_status``

  - Should report the bytes left to go over on the given channel

  - Should only care about the transaction descriptor passed as
    argument, not the currently active one on a given channel

  - The tx_state argument might be NULL

  - Should use dma_set_residue to report it

  - In the case of a cyclic transfer, it should only take into
    account the current period.

  - This function can be called in an interrupt context.
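
  - A minimal sketch, assuming the driver uses the dma_cookie_*
    helpers from ``drivers/dma/dmaengine.h``; ``to_foo_chan()`` and
    ``foo_chan_residue()`` are hypothetical driver helpers, the
    latter computing the bytes left for the descriptor matching the
    given cookie::

      static enum dma_status foo_tx_status(struct dma_chan *chan,
                                           dma_cookie_t cookie,
                                           struct dma_tx_state *txstate)
      {
              struct foo_chan *fchan = to_foo_chan(chan);
              enum dma_status status;
              unsigned long flags;

              /* Completed transactions have no residue left to report */
              status = dma_cookie_status(chan, cookie, txstate);
              if (status == DMA_COMPLETE || !txstate)
                      return status;

              /* Only report the residue of that particular descriptor */
              spin_lock_irqsave(&fchan->lock, flags);
              dma_set_residue(txstate, foo_chan_residue(fchan, cookie));
              spin_unlock_irqrestore(&fchan->lock, flags);

              return status;
      }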

- ``device_config``

  - Reconfigures the channel with the configuration given as argument

  - This command should NOT be applied synchronously, or on any
    currently queued transfers, but only on subsequent ones

  - In this case, the function will receive a ``dma_slave_config``
    structure pointer as an argument, that will detail which
    configuration to use.

  - Even though that structure contains a direction field, this
    field is deprecated in favor of the direction argument given to
    the prep_* functions

  - This call is mandatory for slave operations only. This should NOT
    be set or expected to be set for memcpy operations.
    If a driver supports both, it should use this call for slave
    operations only and not for memcpy ones.

- ``device_pause``

  - Pauses a transfer on the channel

  - This command should operate synchronously on the channel,
    pausing right away the work of the given channel

- ``device_resume``

  - Resumes a transfer on the channel

  - This command should operate synchronously on the channel,
    resuming right away the work of the given channel

- ``device_terminate_all``

  - Aborts all the pending and ongoing transfers on the channel

  - For aborted transfers the complete callback should not be called

  - Can be called from atomic context or from within a complete
    callback of a descriptor. Must not sleep. Drivers must be able
    to handle this correctly.

  - Termination may be asynchronous. The driver does not have to
    wait until the currently active transfer has completely stopped.
    See device_synchronize.

- ``device_synchronize``

  - Must synchronize the termination of a channel to the current
    context.

  - Must make sure that memory for previously submitted
    descriptors is no longer accessed by the DMA controller.

  - Must make sure that all complete callbacks for previously
    submitted descriptors have finished running and none are
    scheduled to run.

  - May sleep.


Misc notes
==========

(stuff that should be documented, but we don't really know
where to put it)

``dma_run_dependencies``

- Should be called at the end of an async TX transfer, and can be
  ignored in the slave transfers case.

- Makes sure that dependent operations are run before marking it
  as complete.

dma_cookie_t

- It's a DMA transaction ID that will increment over time.

- Not really relevant any more since the introduction of ``virt-dma``
  that abstracts it away.

DMA_CTRL_ACK

- If clear, the descriptor cannot be reused by provider until the
  client acknowledges receipt, i.e. has had a chance to establish any
  dependency chains

- This can be acked by invoking async_tx_ack()

- If set, does not mean descriptor can be reused

DMA_CTRL_REUSE

- If set, the descriptor can be reused after being completed. It should
  not be freed by provider if this flag is set.

- The descriptor should be prepared for reuse by invoking
  ``dmaengine_desc_set_reuse()`` which will set DMA_CTRL_REUSE.
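
  For example, from a client driver's point of view, assuming ``desc``
  was returned by one of the ``dmaengine_prep_*()`` calls on a channel
  that advertises descriptor reuse::

      int ret;

      ret = dmaengine_desc_set_reuse(desc);
      if (ret)
              /* the channel doesn't support reusable descriptors */
              return ret;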

- ``dmaengine_desc_set_reuse()`` will succeed only when the channel
  supports reusable descriptors, as exhibited by its capabilities

- As a consequence, if a device driver wants to skip the
  ``dma_map_sg()`` and ``dma_unmap_sg()`` in between 2 transfers,
  because the DMA'd data wasn't used, it can resubmit the transfer
  right after its completion.

- A descriptor can be freed in a few ways

  - Clearing DMA_CTRL_REUSE by invoking
    ``dmaengine_desc_clear_reuse()`` and submitting it for the last txn

  - Explicitly invoking ``dmaengine_desc_free()``; this can succeed
    only when DMA_CTRL_REUSE is already set

  - Terminating the channel

DMA_PREP_CMD

- If set, the client driver tells the DMA controller that the data
  passed to the DMA API is command data.

- Interpretation of command data is DMA controller specific. It can be
  used for issuing commands to other peripherals/register reads/register
  writes for which the descriptor should be in a different format from
  normal data descriptors.

General Design Notes
====================

Most of the DMAEngine drivers you'll see are based on a similar design
that handles the end of transfer interrupts in the handler, but defers
most work to a tasklet, including the start of a new transfer whenever
the previous transfer ended.

This is a rather inefficient design though, because the inter-transfer
latency will be not only the interrupt latency, but also the
scheduling latency of the tasklet, which will leave the channel idle
in between and slow down the global transfer rate.

You should avoid this kind of practice, and instead of electing a new
transfer in your tasklet, move that part to the interrupt handler in
order to have a shorter idle window (that we can't really avoid
anyway).

Glossary
========

- Burst: A number of consecutive read or write operations that
  can be queued to buffers before being flushed to memory.

- Chunk: A contiguous collection of bursts

- Transfer: A collection of chunks (be it contiguous or not)