The request lines are physical lines going from the DMA-eligible
devices to the controller itself. Whenever a device wants to start a
transfer, it asserts its request line.

A very simple DMA controller would only take into account a single
parameter: the transfer size. That is not enough in practice, since
slave devices might require a specific number of bits to be
transferred in a single cycle; this is handled by a parameter called
the transfer width.

When RAM is the source or destination, reads and writes can also be
grouped using a parameter called the burst size, that defines how many
single reads/writes the controller may do before splitting the
transfer into smaller sub-transfers.

Such a simple controller would only be able to do transfers that
involve a single contiguous block of data. However, some of the
transfers we usually need copy data from non-contiguous buffers to a
contiguous buffer, which is called scatter-gather.

dmaengine, at least for mem2dev transfers, requires support for
scatter-gather. So we're left with two cases here: either we have a
quite simple DMA controller that doesn't support it and we have to
implement it in software, or we have a more advanced DMA controller
that implements scatter-gather in hardware.

The chunks to transfer are usually described by either a table or a
linked list: you push either the address of the table and its number
of elements, or the first item of the list, to one channel of the DMA
controller, and the controller walks that collection whenever a
request is asserted.
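As an illustration of this kind of per-chunk programming, here is a
purely hypothetical layout of one linked-list item. Every controller
defines its own format, but each item typically carries at least the
source and destination addresses, the transfer size and the
burst/width parameters:

.. code-block:: c

   #include <linux/types.h>

   /*
    * Hypothetical scatter-gather item ("chunk") layout; the field names
    * and sizes are illustration only, not a real controller's format.
    */
   struct foo_hw_lli {
           u32 src_addr;   /* source address of this chunk */
           u32 dst_addr;   /* destination address of this chunk */
           u32 len;        /* transfer size, in bytes */
           u32 ctrl;       /* burst size, transfer width, address increment */
           u32 next_lli;   /* bus address of the next item, 0 to stop */
   };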
These were just the general memory-to-memory (also called mem2mem) or
memory-to-device (mem2dev) kinds of transfers. Most devices also
support other kinds of transfers or memory operations that dmaengine
supports; these are detailed later in this document.

For more information on the Async TX API, please refer to the
documentation file Documentation/crypto/async-tx-api.rst.
``struct dma_device`` Initialization
------------------------------------

Registering against the DMAEngine framework boils down to filling a
``struct dma_device`` and registering it. A few of its fields need to
be initialized:
- ``channels``: should be initialized as a list using the
  INIT_LIST_HEAD macro, for example

- ``src_addr_widths``:
  should contain a bitmask of the supported source transfer widths

- ``dst_addr_widths``:
  should contain a bitmask of the supported destination transfer widths

- ``directions``:
  should contain a bitmask of the supported slave directions
  (i.e. excluding mem2mem transfers)

- ``residue_granularity``:
  granularity of the transfer residue reported to dma_set_residue.
  This can be either:

  - Descriptor: your device doesn't support any kind of residue
    reporting. The framework will only know that a particular
    transaction descriptor is done.

  - Segment: your device is able to report which chunks have been
    transferred

  - Burst: your device is able to report which bursts have been
    transferred

- ``dev``: should hold the pointer to the ``struct device`` associated
  to your current driver instance.
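As a rough, hypothetical sketch of what this looks like in a driver's
probe function (the ``foo_`` names are placeholders, and error
handling, channel setup and callback registration are elided):

.. code-block:: c

   #include <linux/dmaengine.h>
   #include <linux/platform_device.h>

   /* Hypothetical probe: only the dma_device field setup is shown. */
   static int foo_probe(struct platform_device *pdev)
   {
           struct dma_device *dd;

           dd = devm_kzalloc(&pdev->dev, sizeof(*dd), GFP_KERNEL);
           if (!dd)
                   return -ENOMEM;

           dd->dev = &pdev->dev;
           INIT_LIST_HEAD(&dd->channels);

           /* Bitmasks of the supported source/destination transfer widths */
           dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
           dd->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);

           /* Supported slave directions (mem2mem excluded) */
           dd->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);

           /* This imaginary controller only reports per-descriptor residue */
           dd->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;

           /* ... set cap_mask, register channels and callbacks (see below) ... */

           return dma_async_device_register(dd);
   }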
Supported transaction types
---------------------------

The next thing you need is to set which transaction types your device
(and driver) supports, by setting the corresponding capability flags
in the ``cap_mask`` field using the dma_cap_set() function. All those
capabilities are defined in the ``dma_transaction_type`` enum, in
``include/linux/dmaengine.h``. Currently, the types available are:
- DMA_MEMCPY

  - The device is able to do memory to memory copies

  - No matter what the overall size of the combined chunks for source
    and destination is, only as many bytes as the smallest of the two
    will be transmitted. That means the number and size of the
    scatter-gather buffers in both lists need not be the same, and that
    the operation functionally is equivalent to a ``strncpy`` where the
    ``count`` argument equals the smallest total size of the two
    scatter-gather list buffers.

  - It's usually used for copying pixel data between host memory and
    memory-mapped GPU device memory, such as found on modern PCI video
    graphics cards.
- DMA_XOR

  - The device is able to perform XOR operations on memory areas

  - Used to accelerate XOR-intensive tasks, such as RAID5

- DMA_XOR_VAL

  - The device is able to perform parity checks using the XOR
    algorithm against a memory buffer.

- DMA_PQ

  - The device is able to perform RAID6 P+Q computations, P being a
    simple XOR, and Q being a Reed-Solomon algorithm.

- DMA_PQ_VAL

  - The device is able to perform parity checks using the RAID6 P+Q
    algorithm against a memory buffer.

- DMA_MEMSET

  - The device is able to fill memory with the provided pattern

  - The pattern is treated as a single byte signed value.
- DMA_INTERRUPT

  - The device is able to trigger a dummy transfer that will
    generate periodic interrupts

  - Used by client drivers to register a callback that will be
    called on a regular basis through the DMA controller interrupt

- DMA_PRIVATE

  - The device only supports slave transfers, and as such isn't
    available for async transfers.

- DMA_ASYNC_TX

  - Must not be set by the device, and will be set by the framework
    if needed

  - TODO: What is it about?
- DMA_SLAVE

  - The device can handle device to memory transfers, including
    scatter-gather transfers.

  - While in the mem2mem case we had two distinct types to deal
    with a single chunk to copy or a collection of them, here we
    just have a single transaction type that is supposed to handle
    both.

  - If you want to transfer a single contiguous memory buffer,
    simply build a scatter list with only one item.

- DMA_CYCLIC

  - The device can handle cyclic transfers.

  - A cyclic transfer is a transfer where the chunk collection loops
    over itself, with the last item pointing to the first.

  - It's usually used for audio transfers, where you want to operate
    on a single ring buffer that you fill with your audio data.
- DMA_INTERLEAVE

  - The device supports interleaved transfers.

  - These transfers can transfer data from a non-contiguous buffer
    to a non-contiguous buffer, as opposed to DMA_SLAVE that can only
    transfer data from a non-contiguous data set to a contiguous
    destination buffer.

  - It's usually used for 2D content transfers, in which case you
    want to transfer a portion of uncompressed data directly to the
    display.

- DMA_COMPLETION_NO_ORDER

  - The device does not support in-order completion.

  - The driver should return DMA_OUT_OF_ORDER from device_tx_status if
    the device sets this capability.

  - All cookie tracking and checking APIs should be treated as invalid
    if the device exposes this capability.

  - At this point, this is incompatible with the polling option of
    dmatest.

  - If this cap is set, the user is recommended to provide a unique
    identifier for each descriptor sent to the DMA device in order to
    properly track its completion.
- DMA_REPEAT

  - The device supports repeated transfers. A repeated transfer,
    indicated by the DMA_PREP_REPEAT transfer flag, is similar to a
    cyclic transfer in that it gets automatically repeated when it
    ends, but can additionally be replaced by the client.

  - This feature is limited to interleaved transfers; this flag should
    thus not be set if the DMA_INTERLEAVE flag isn't set.

- DMA_LOAD_EOT

  - The device supports replacing repeated transfers at end of transfer
    (EOT) by queuing a new transfer with the DMA_PREP_LOAD_EOT flag
    set.

  - Support for replacing a currently running transfer at another point
    (such as end of burst instead of end of transfer) may be added in
    the future, based on DMA clients' needs.
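As a sketch, assuming the hypothetical ``dd`` device from the earlier
probe example, a slave-only controller supporting cyclic transfers
might advertise its capabilities like this:

.. code-block:: c

   /*
    * Hypothetical helper, continuing the earlier example: advertise the
    * transaction types this slave-only controller can handle.
    */
   static void foo_set_caps(struct dma_device *dd)
   {
           dma_cap_zero(dd->cap_mask);
           dma_cap_set(DMA_SLAVE, dd->cap_mask);   /* dev<->mem, scatter-gather */
           dma_cap_set(DMA_CYCLIC, dd->cap_mask);  /* audio-style ring buffers */
           dma_cap_set(DMA_PRIVATE, dd->cap_mask); /* not available for async_tx */
   }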
Per descriptor metadata support
-------------------------------

Some data movement architectures (DMA controller and peripherals) use
metadata associated with a transaction, which the DMA controller
transfers alongside the payload. The DMAengine framework provides two
generic ways to handle this metadata; a DMA driver can implement
either or both, and the client driver chooses which one to use:

- DESC_METADATA_CLIENT

  The metadata buffer is allocated and provided by the client driver
  and attached to the descriptor. From the DMA driver, the following is
  expected for this mode:

  - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM: the data from the provided
    metadata buffer should be prepared to be sent alongside the
    payload data.

  - DMA_DEV_TO_MEM: on transfer completion, the DMA driver must copy
    the metadata to the client-provided metadata buffer before
    notifying the client about the completion.

- DESC_METADATA_ENGINE

  The metadata buffer is allocated and managed by the DMA driver, and
  the client accesses it through helper callbacks provided by the
  driver:

  - get_metadata_ptr(): should return a pointer to the metadata buffer,
    its maximum size, and the currently used/valid bytes in it.

  - set_metadata_len(): called by the client after it has placed the
    metadata in the buffer, to let the DMA driver know the number of
    valid bytes provided.
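For DESC_METADATA_ENGINE, a provider typically wires these helpers up
through the descriptor's metadata operations. A minimal sketch follows;
the ``foo_*`` names and the embedded descriptor layout are assumptions,
not framework requirements:

.. code-block:: c

   #include <linux/dmaengine.h>

   /* Hypothetical descriptor with a driver-managed metadata area. */
   struct foo_meta_desc {
           struct dma_async_tx_descriptor tx;
           u8 metadata[128];       /* driver-managed metadata buffer */
           size_t metadata_len;    /* valid bytes currently in it */
   };

   static void *foo_get_metadata_ptr(struct dma_async_tx_descriptor *tx,
                                     size_t *payload_len, size_t *max_len)
   {
           struct foo_meta_desc *d = container_of(tx, struct foo_meta_desc, tx);

           *payload_len = d->metadata_len;
           *max_len = sizeof(d->metadata);
           return d->metadata;
   }

   static int foo_set_metadata_len(struct dma_async_tx_descriptor *tx,
                                   size_t payload_len)
   {
           struct foo_meta_desc *d = container_of(tx, struct foo_meta_desc, tx);

           if (payload_len > sizeof(d->metadata))
                   return -EINVAL;

           d->metadata_len = payload_len;
           return 0;
   }

   /* Assigned to the descriptor's metadata_ops pointer in the prep_* callback. */
   static struct dma_descriptor_metadata_ops foo_metadata_ops = {
           .get_ptr = foo_get_metadata_ptr,
           .set_len = foo_set_metadata_len,
   };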
Device operations
-----------------

Our ``struct dma_device`` also requires a few function pointers in
order to implement the actual logic. The functions you have to fill
in, and hence implement, obviously depend on the transaction types you
reported as supported.
- ``device_alloc_chan_resources``

- ``device_free_chan_resources``

  - These functions will be called whenever a driver calls
    ``dma_request_channel`` or ``dma_release_channel`` for the
    first/last time on the channel associated to that driver.

  - They are in charge of allocating/freeing all the needed
    resources in order for that channel to be useful for your driver.

  - These functions can sleep.
- ``device_prep_dma_*``

  - These functions match the capabilities you registered previously.

  - These functions all take the buffer or the scatterlist relevant
    for the transfer being prepared, and should create a hardware
    descriptor or a list of hardware descriptors from it.

  - These functions can be called from an interrupt context.

  - Any allocation you might do should be using the GFP_NOWAIT
    flag, in order not to potentially sleep, but without depleting
    the emergency pool either.

  - Drivers should try to pre-allocate any memory they might need
    during the transfer setup at probe time to avoid putting too
    much pressure on the nowait allocator.

  - It should return a unique instance of the
    ``struct dma_async_tx_descriptor``, that further represents this
    particular transfer.

  - This structure can be initialized using the function
    ``dma_async_tx_descriptor_init()``.

  - You'll also need to set two fields in this structure:

    - flags:
      TODO: Can it be modified by the driver itself, or
      should it always be the flags passed in the arguments?

    - tx_submit: A pointer to a function you have to implement,
      that is supposed to push the current transaction descriptor to a
      pending queue, waiting for issue_pending to be called.

  - In this structure, the function pointer callback_result can be
    initialized in order for the submitter to be notified that a
    transaction has completed. In earlier code the function pointer
    callback was used instead; it does not provide any status for the
    transaction and will be deprecated. The result structure,
    ``dmaengine_result``, that is passed to callback_result has two
    fields:

    - result: This provides the transfer result defined by
      ``dmaengine_tx_result``: either success or some error condition.

    - residue: Provides the residue bytes of the transfer for those
      that support residue.
- ``device_issue_pending``

  - Takes the first transaction descriptor in the pending queue,
    and starts the transfer. Whenever that transfer is done, it
    should move to the next transaction in the list.

  - This function can be called in an interrupt context.
- ``device_tx_status``

  - Should report the bytes left to go over on the given channel.

  - Should only care about the transaction descriptor passed as an
    argument, not the currently active one on the given channel.

  - The tx_state argument might be NULL.

  - Should use dma_set_residue to report the residue (see the sketch
    after this list).

  - In the case of a cyclic transfer, it should only take into
    account the total size of the cyclic buffer.

  - Should return DMA_OUT_OF_ORDER if the device does not support
    in-order completion and is completing the operation out of order.

  - This function can be called in an interrupt context.
- ``device_config``

  - Reconfigures the channel with the configuration given as an
    argument.

  - This command should NOT be applied synchronously, nor to any
    currently queued transfers, but only to subsequent ones.

  - The function will receive a ``dma_slave_config`` structure pointer
    as an argument, detailing which configuration to use.

  - Even though that structure contains a direction field, this
    field is deprecated in favor of the direction argument given to
    the prep_* functions.

  - This call is mandatory for slave operations only. It should NOT be
    set or expected to be set for memcpy operations. If a driver
    supports both, it should use this call for slave operations only
    and not for memcpy ones.
- ``device_pause``

  - Pauses a transfer on the channel.

  - This command should operate synchronously on the channel,
    pausing the work of the given channel right away.

- ``device_resume``

  - Resumes a transfer on the channel.

  - This command should operate synchronously on the channel,
    resuming the work of the given channel right away.
- ``device_terminate_all``

  - Aborts all the pending and ongoing transfers on the channel.

  - For aborted transfers, the complete callback should not be called.

  - Can be called from atomic context or from within a complete
    callback of a descriptor. Must not sleep. Drivers must be able
    to handle this correctly.

  - Termination may be asynchronous. The driver does not have to
    wait until the currently active transfer has completely stopped.
    See ``device_synchronize``.
- ``device_synchronize``

  - Must synchronize the termination of a channel to the current
    context.

  - Must make sure that memory for previously submitted
    descriptors is no longer accessed by the DMA controller.

  - Must make sure that all complete callbacks for previously
    submitted descriptors have finished running and none are
    scheduled to run.

  - May sleep.
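To make the descriptor lifecycle more concrete, here is a rough sketch
of the submit/issue/status trio for a hypothetical ``foo`` driver. The
``foo_*``/``to_foo_*`` helpers and the driver-private channel and
descriptor structures are assumptions; ``dma_cookie_assign()`` and
``dma_cookie_status()`` are the cookie helpers from the drivers/dma
private ``dmaengine.h`` header:

.. code-block:: c

   #include <linux/dmaengine.h>
   #include <linux/spinlock.h>
   #include "dmaengine.h"  /* drivers/dma private header: cookie helpers */

   /*
    * to_foo_chan(), to_foo_desc(), foo_start_next(), foo_bytes_left() and
    * the foo_chan/foo_desc structures are hypothetical.
    */

   static dma_cookie_t foo_tx_submit(struct dma_async_tx_descriptor *tx)
   {
           struct foo_chan *fc = to_foo_chan(tx->chan);
           struct foo_desc *fd = to_foo_desc(tx);
           dma_cookie_t cookie;
           unsigned long flags;

           spin_lock_irqsave(&fc->lock, flags);
           cookie = dma_cookie_assign(tx);
           /* Only queue the descriptor: the hardware is kicked from
            * device_issue_pending(), not from here. */
           list_add_tail(&fd->node, &fc->pending);
           spin_unlock_irqrestore(&fc->lock, flags);

           return cookie;
   }

   static void foo_issue_pending(struct dma_chan *chan)
   {
           struct foo_chan *fc = to_foo_chan(chan);
           unsigned long flags;

           spin_lock_irqsave(&fc->lock, flags);
           if (!fc->active && !list_empty(&fc->pending))
                   foo_start_next(fc);
           spin_unlock_irqrestore(&fc->lock, flags);
   }

   static enum dma_status foo_tx_status(struct dma_chan *chan,
                                        dma_cookie_t cookie,
                                        struct dma_tx_state *txstate)
   {
           enum dma_status status;

           status = dma_cookie_status(chan, cookie, txstate);
           if (status == DMA_COMPLETE || !txstate)
                   return status;

           /* Only the descriptor matching @cookie matters here */
           dma_set_residue(txstate, foo_bytes_left(chan, cookie));
           return status;
   }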
Misc notes
==========

``dma_run_dependencies``

- Should be called at the end of an async TX transfer, and can be
  ignored in the slave transfers case.

- Makes sure that dependent operations are run before marking it
  as complete.

dma_cookie_t

- It's a DMA transaction ID that will increment over time.

- Not really relevant any more since the introduction of ``virt-dma``,
  which abstracts it away.

DMA_CTRL_ACK

- If clear, the descriptor cannot be reused by the provider until the
  client acknowledges receipt, i.e. has a chance to establish any
  dependency chains.

- This can be acked by invoking async_tx_ack().

- If set, it does not mean the descriptor can be reused.
DMA_CTRL_REUSE

- If set, the descriptor can be reused after being completed. It should
  not be freed by the provider if this flag is set.

- The descriptor should be prepared for reuse by invoking
  ``dmaengine_desc_set_reuse()``, which will set DMA_CTRL_REUSE.

- ``dmaengine_desc_set_reuse()`` will succeed only when the channel
  supports reusable descriptors, as exhibited by its capabilities.

- As a consequence, if a device driver wants to skip the
  ``dma_map_sg()`` and ``dma_unmap_sg()`` in between two transfers,
  because the DMA'd data wasn't used, it can resubmit the transfer
  right after its completion (see the client-side sketch at the end of
  this section).

- A descriptor can be freed in a few ways:

  - Clearing DMA_CTRL_REUSE by invoking
    ``dmaengine_desc_clear_reuse()`` and submitting for the last
    transaction

  - Explicitly invoking ``dmaengine_desc_free()``, which can succeed
    only when DMA_CTRL_REUSE is already set

  - Terminating the channel
- DMA_PREP_CMD

  - If set, the client driver tells the DMA controller that the data
    passed to the DMA API is command data.

  - Interpretation of command data is DMA controller specific. It can
    be used for issuing commands to other peripherals, or for register
    reads or writes for which the descriptor should be in a different
    format from normal data descriptors.

- DMA_PREP_REPEAT

  - If set, the transfer will be automatically repeated when it ends
    until a new transfer is queued on the same channel with the
    DMA_PREP_LOAD_EOT flag. If the next transfer to be queued on the
    channel does not have the DMA_PREP_LOAD_EOT flag set, the current
    transfer will be repeated until the client terminates all
    transfers.

  - This flag is only supported if the channel reports the DMA_REPEAT
    capability.

- DMA_PREP_LOAD_EOT

  - If set, the transfer will replace the transfer currently being
    executed at the end of that transfer.

  - This is the default behaviour for non-repeated transfers;
    specifying DMA_PREP_LOAD_EOT for non-repeated transfers will thus
    make no difference.

  - When using repeated transfers, DMA clients will usually need to set
    the DMA_PREP_LOAD_EOT flag on all transfers, otherwise the channel
    will keep repeating the last repeated transfer and ignore the new
    transfers being queued. Failure to set DMA_PREP_LOAD_EOT will
    appear as if the channel was stuck on the previous transfer.

  - This flag is only supported if the channel reports the DMA_LOAD_EOT
    capability.
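The DMA_CTRL_REUSE flow mentioned above can be sketched from the client
side as follows. ``foo_submit_reusable()`` is hypothetical; ``chan`` is
assumed to be a previously requested slave channel whose controller
advertises reusable descriptors:

.. code-block:: c

   #include <linux/dmaengine.h>

   static int foo_submit_reusable(struct dma_chan *chan, dma_addr_t buf,
                                  size_t len)
   {
           struct dma_async_tx_descriptor *desc;
           int ret;

           desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                              DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
           if (!desc)
                   return -ENOMEM;

           /* Fails if the channel does not support reusable descriptors */
           ret = dmaengine_desc_set_reuse(desc);
           if (ret)
                   return ret;

           dmaengine_submit(desc);
           dma_async_issue_pending(chan);

           /*
            * After completion the same descriptor may be submitted again;
            * once done with it for good, release it with
            * dmaengine_desc_free(), or clear the reuse flag and submit it
            * one last time.
            */
           return 0;
   }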
General Design Notes
====================

Most DMAEngine drivers handle the end-of-transfer interrupt in the
handler, but defer most work, including the start of a new transfer,
to a tasklet. This is a rather inefficient design though, because the
inter-transfer latency will be not only the interrupt latency, but also
the scheduling latency of the tasklet, which will leave the channel
idle in between and slow down the global transfer rate. You should
avoid this kind of practice and, instead of electing a new transfer in
your tasklet, move that part to the interrupt handler in order to have
a shorter idle window.
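A minimal sketch of that approach for a hypothetical ``foo`` channel,
where the next queued descriptor is started directly from the interrupt
handler (``struct foo_chan``, ``struct foo_desc`` and ``foo_hw_start()``
are assumptions; ``dma_cookie_complete()`` comes from the drivers/dma
private header):

.. code-block:: c

   #include <linux/interrupt.h>
   #include <linux/dmaengine.h>
   #include "dmaengine.h"  /* drivers/dma private header: cookie helpers */

   static irqreturn_t foo_irq_handler(int irq, void *data)
   {
           struct foo_chan *fc = data;     /* hypothetical channel state */
           unsigned long flags;

           spin_lock_irqsave(&fc->lock, flags);

           if (fc->active) {
                   dma_cookie_complete(&fc->active->tx);
                   /* Invoke the descriptor callback outside the lock, for
                    * instance after starting the next transfer below. */
                   fc->done = fc->active;
                   fc->active = NULL;
           }

           /* Start the next queued descriptor right away to keep the
            * channel busy instead of waiting for a tasklet. */
           if (!list_empty(&fc->pending)) {
                   fc->active = list_first_entry(&fc->pending,
                                                 struct foo_desc, node);
                   list_del(&fc->active->node);
                   foo_hw_start(fc, fc->active);
           }

           spin_unlock_irqrestore(&fc->lock, flags);

           return IRQ_HANDLED;
   }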
Glossary
========

- Burst: A number of consecutive read or write operations that
  can be queued to buffers before being flushed to memory.

- Chunk: A contiguous collection of bursts

- Transfer: A collection of chunks (be it contiguous or not)