=========================
Dynamic DMA mapping Guide
=========================
This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.
CPU and DMA addresses
=====================

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space. For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::
                 CPU                   CPU                  Bus
               Virtual              Physical             Address
               Address              Address               Space
                Space                Space

              +-------+             +------+             +------+
              |       |             |MMIO  |   Offset    |      |
              |       |  Virtual    |Memory|   applied   |      |
            C +-------+ --------> B +------+ ----------> +------+ A
              |       |  mapping    |      |   by host   |      |
    +-----+   |       |             |      |   bridge    |      |   +--------+
    |     |   |       |             +------+             |      |   |        |
    | CPU |   |       |             | RAM  |             |      |   | Device |
    |     |   |       |             |      |             |      |   |        |
    +-----+   +-------+             +------+             +------+   +--------+
              |       |  Virtual    |Buffer|   Mapping   |      |
            X +-------+ --------> Y +------+ <---------- +------+ Z
              |       |  mapping    | RAM  |   by IOMMU
              |       |             |      |
              |       |             |      |
              +-------+             +------+
Instead, the driver can give a virtual address X to an interface like
dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system RAM.
So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.
First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
rule that you seem to abide by anyway, but it never hurts to make it
explicit.
This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one in
the same cache line, and one of them could be overwritten.)
DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32-bits of DMA
addressing. For a 64-bit capable device, this needs to be increased, and for
a device with limitations, it needs to be decreased.
Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.
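The query is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, the two queries can instead be
performed separately: dma_set_mask() for streaming mappings and
dma_set_coherent_mask() for consistent allocations::

        int dma_set_mask(struct device *dev, u64 mask);
        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is
a bit mask describing which bits of an address your device supports.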
Often the device struct of your device is embedded in the bus-specific
device struct of your device. For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device struct
of your device).

These calls usually return zero to indicate that your device can perform
DMA properly on the machine given the address mask you provided, but they
might return an error if the mask is too small to be supportable on the
given system. If it returns non-zero, your device cannot perform DMA
properly on this platform, and attempting to do so will result in undefined
behavior. You must not use DMA on this device unless the dma_set_mask
family of functions has returned success.
When you get an error from one of these calls, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when either of these occurs, so that if a user reports bad performance
or an undetected device, you can ask for the kernel messages to find out
exactly why.
The standard 64-bit addressing device would do something like this::
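        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }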
If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::
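        if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask will always be able to set the same or a smaller mask
as the streaming mask.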
Finally, if your device can only drive the low 24-bits of address you
might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }
If your device supports multiple functions (for example sound recording
and playback) which have different DMA addressing limitations, you may
wish to probe each mask and only provide the features with the limits.
Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

A sound card was used as an example here because this genre of PC device
seems to be littered with ISA chips given a PCI front end, and thus
retaining the 16MB DMA addressing limitations of ISA.
Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".
  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store to
  memory is immediately visible to the device, and vice versa.
  Consistent mappings guarantee this.
  Consistent DMA memory does not preclude the usage of proper memory
  barriers. The CPU may reorder stores to consistent memory just as it
  may normal memory. Example: if it is important for the device to see
  the first word of a descriptor updated before the second, you must do
  something like::

        desc->word0 = address;
        wmb();
        desc->word1 = DESC_VALID;

  in order to get correct behavior on all platforms.
- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.
The interfaces for using this type of mapping were designed in such a way
that an implementation can make whatever performance optimizations the
hardware allows. To this end, when using such mappings you must be
explicit about what you want to happen.
Neither type of DMA mapping has alignment restrictions that come from the
underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
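Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions, you
should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device pointer, size is the length of the region
you want to allocate in bytes, and gfp is a standard GFP_* allocation
flag. This routine will allocate RAM for that region, so it acts
similarly to __get_free_pages() (but takes size instead of a page order).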
The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable. Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask(). This is true of the
dma_pool interface as well.
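To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the allocation call, and cpu_addr
and dma_handle are the values dma_alloc_coherent() returned to you.

If your driver needs lots of smaller memory regions, you can write custom
code to subdivide pages returned by dma_alloc_coherent(), or you can use
the dma_pool API to do that. A dma_pool is like a kmem_cache, but it uses
dma_alloc_coherent(), not __get_free_pages(). Create one like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);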
The "name" is for diagnostics (like a kmem_cache name); dev and size are
as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries.
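Allocate from the pool, free back to it, and destroy it like this (the
flags are GFP_KERNEL if blocking is permitted, GFP_ATOMIC otherwise)::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
        ...
        dma_pool_free(pool, cpu_addr, dma_handle);
        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated from a
pool before you destroy it.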
DMA Direction
=============

The interfaces described in subsequent portions of this document take a
DMA direction argument, which is an integer that takes on one of the
values DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, or DMA_NONE.
You should provide the exact DMA direction if you know it: DMA_TO_DEVICE
means "from main memory to the device" and DMA_FROM_DEVICE means "from
the device to main memory". It is the direction in which the data moves
during the DMA transfer.
The value DMA_NONE is to be used for debugging. One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your direction
tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space, and can report in the kernel logs when the DMA
controller hardware detects a violation of the permission setting.
Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);
You should call dma_mapping_error() as dma_map_single() could fail and return
error. Doing so will ensure that the mapping code will work correctly on all
DMA implementations without any dependency on the specifics of the underlying
implementation. Using the returned address without checking for errors could
result in failures ranging from panics to silent data corruption. The same
applies to dma_map_page() as well.

Similarly, with dma_unmap_single(), call it when the DMA activity is
finished, e.g., from the interrupt which told you that the DMA transfer
is done.
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
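To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished. Note: the 'nents'
argument to the dma_unmap_sg call must be the same one you passed into
the dma_map_sg call, it should NOT be the 'count' value returned from
the dma_map_sg call.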
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.
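So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.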
Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::
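        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate. As with dma_unmap_sg(), the 'nents' argument to
dma_sync_sg_for_cpu() and dma_sync_sg_for_device() must be the same as
that passed to dma_map_sg(), NOT the count it returned.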
After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.
Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }
        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data. But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to a
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }
Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple page mapping attempt. These examples are
  applicable to dma_map_page() as well.

Example 1, unwinding in reverse order with distinct error labels::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2, if buffers are allocated in a loop, unmap all mapped buffers
when a mapping error is detected in the middle::

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {
                ...
                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {
                ...
                dma_unmap_single(dev, array[i], size, direction);
        }
Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
Platform Issues
===============

Architectures must ensure that kmalloc'ed buffers are
DMA-safe. Drivers and subsystems depend on it. If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory),
ARCH_DMA_MINALIGN must be set so that the memory allocator
makes sure that kmalloc'ed buffers don't share a cache line with
others. See arch/arm/include/asm/cache.h as an example.

Note that ARCH_DMA_MINALIGN is about DMA memory alignment
constraints. You don't need to worry about the architecture data
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
Closing
=======

This document, and the API itself, would not be in its current form
without the feedback and suggestions from numerous individuals,
including::

        David Mosberger-Tang <davidm@hpl.hp.com>