============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a gentler introduction to
the API (and actual examples), see :doc:`/core-api/dma-api-howto`.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
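
Putting the two calls together, here is a minimal sketch of the usual
lifecycle: allocate a coherent region at probe time and free it at remove
time.  The ``foo_*`` names and the single-page region size are hypothetical::

	struct foo_ring {
		void		*vaddr;	/* CPU address of the region */
		dma_addr_t	dma;	/* device address of the region */
		size_t		size;
	};

	static int foo_alloc_ring(struct device *dev, struct foo_ring *ring)
	{
		ring->size = PAGE_SIZE;
		ring->vaddr = dma_alloc_coherent(dev, ring->size,
						 &ring->dma, GFP_KERNEL);
		if (!ring->vaddr)
			return -ENOMEM;
		return 0;
	}

	static void foo_free_ring(struct device *dev, struct foo_ring *ring)
	{
		dma_free_coherent(dev, ring->size, ring->vaddr, ring->dma);
	}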


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


::

	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t boundary);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for boundary; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

	void *
	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
			dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

	void *
	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		       dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values:  an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

	void
	dma_pool_free(struct dma_pool *pool, void *vaddr,
		      dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

	void
	dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
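
To illustrate how these calls fit together, here is a hedged sketch for a
hypothetical device whose descriptors are 64 bytes, 64-byte aligned, and
must not cross 4096-byte boundaries::

	struct dma_pool *pool;
	dma_addr_t desc_dma;
	void *desc;

	pool = dma_pool_create("foo-desc", dev, 64, 64, 4096);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand desc_dma to the device, use desc from the CPU ... */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);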


Part Ic - DMA addressing limitations
------------------------------------

::

	int
	dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.
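
A common probe-time pattern, shown here as a hedged sketch, is to ask for
wide addressing first and fall back to a narrower mask if the platform
rejects it::

	/* prefer 64-bit DMA addressing, fall back to 32-bit */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_err(dev, "no usable DMA addressing\n");
		return -EIO;
	}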

::

	int
	dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	u64
	dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
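
For example (a hedged sketch; ``use_small_descriptors`` is a hypothetical
driver flag)::

	if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32) &&
	    !dma_set_mask(dev, DMA_BIT_MASK(32)))
		/* all of memory is below 4GB; 32-bit descriptors suffice */
		use_small_descriptors = true;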

::

	size_t
	dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device.  The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.
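
A hedged sketch of honoring this limit when sizing a transfer (``len`` is
a hypothetical transfer length)::

	size_t max_len = dma_max_mapping_size(dev);

	len = min(len, max_len);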

::

	bool
	dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership.  Returns %false if those calls can be skipped.
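
Drivers on hot paths sometimes cache this result so per-buffer syncs can be
skipped on cache-coherent setups; a hedged sketch, with ``priv`` as a
hypothetical driver-private structure::

	priv->need_sync = dma_need_sync(dev, priv->ring_dma);

	/* later, on the hot path */
	if (priv->need_sync)
		dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);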

::

	unsigned long
	dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary.  If the device cannot merge any of the
DMA address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

	dma_addr_t
	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The dma_API uses a strongly typed enumerator for its direction:

======================= =============================================
DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known
======================= =============================================

.. note::

	Not all memory regions in a machine can be mapped by this API.
	Further, contiguous kernel virtual space may not be contiguous as
	physical memory.  Since this API does not provide any scatter/gather
	capability, it will fail if the user tries to map a non-physically
	contiguous piece of memory.  For this reason, memory to be mapped by
	this API should be obtained from sources which guarantee it to be
	physically contiguous (like kmalloc).

	Further, the DMA address of the memory must be within the
	dma_mask of the device (the dma_mask is a bit mask of the
	addressable region for the device, i.e., if the DMA address of
	the memory ANDed with the dma_mask is still equal to the DMA
	address, then the device can perform DMA to the memory).  To
	ensure that the memory allocated by kmalloc is within the dma_mask,
	the driver may specify various platform-dependent flags to restrict
	the DMA address range of the allocation (e.g., on x86, GFP_DMA
	guarantees to be within the first 16MB of available DMA addresses,
	as required by ISA devices).

	Note also that the above constraints on physical contiguity and
	dma_mask may not apply if the platform has an IOMMU (a device which
	maps an I/O DMA address to a physical memory address).  However, to be
	portable, device driver writers may *not* assume that such an IOMMU
	exists.

.. warning::

	Memory coherency operates at a granularity called the cache
	line width.  In order for memory mapped by this API to operate
	correctly, the mapped region must begin exactly on a cache line
	boundary and end exactly on one (to prevent two separately mapped
	regions from sharing a single cache line).  Since the cache line size
	may not be known at compile time, the API will not enforce this
	requirement.  Therefore, it is recommended that driver writers who
	don't take special care to determine the cache line size at run time
	only map virtual regions that begin and end on page boundaries (which
	are guaranteed also to be cache line boundaries).

	DMA_TO_DEVICE synchronisation must be done after the last modification
	of the memory region by the software and before it is handed off to
	the device.  Once this primitive is used, memory covered by this
	primitive should be treated as read-only by the device.  If the device
	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
	below).

	DMA_FROM_DEVICE synchronisation must be done before the driver
	accesses data that may be changed by the device.  This memory should
	be treated as read-only by the driver.  If the driver needs to write
	to it at any point, it should be DMA_BIDIRECTIONAL (see below).

	DMA_BIDIRECTIONAL requires special handling: it means that the driver
	isn't sure if the memory was modified before being handed off to the
	device and also isn't sure if the device will also modify it.  Thus,
	you must always sync bidirectional memory twice: once before the
	memory is handed off to the device (to make sure all memory changes
	are flushed from the processor) and once before the data may be
	accessed after being used by the device (to make sure any processor
	cache lines are updated with data that the device may have changed).

::

	void
	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
			 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters passed in
must be identical to those passed to (and returned by) the mapping
API.
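
As a hedged sketch of the typical pattern (``dev``, ``len`` and the buffer
are hypothetical), a driver maps a kmalloc'ed buffer for a transfer to the
device, checks for a mapping error (see dma_mapping_error() below), and
unmaps when the transfer is done::

	void *buf = kmalloc(len, GFP_KERNEL);
	dma_addr_t dma;

	if (!buf)
		return -ENOMEM;
	/* ... fill buf with the data the device should read ... */

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... point the device at dma and start the transfer ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	kfree(buf);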

::

	dma_addr_t
	dma_map_page(struct device *dev, struct page *page,
		     unsigned long offset, size_t size,
		     enum dma_data_direction direction)

	void
	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
		       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

	dma_addr_t
	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
			 enum dma_data_direction dir, unsigned long attrs)

	void
	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.
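
For illustration only, here is a hedged sketch of mapping another device's
MMIO region (for example for peer-to-peer transfers), assuming ``res`` is a
hypothetical struct resource describing that region::

	dma_addr_t dma;

	dma = dma_map_resource(dev, res->start, resource_size(res),
			       DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma))
		return -EIO;

	/* ... program the DMA engine with dma ... */

	dma_unmap_resource(dev, dma, resource_size(res),
			   DMA_BIDIRECTIONAL, 0);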

::

	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

::

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that an sg list cannot be mapped again once it has been
mapped.  The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
				size_t size,
				enum dma_data_direction direction)

	void
	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
				   size_t size,
				   enum dma_data_direction direction)

	void
	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			    int nents,
			    enum dma_data_direction direction)

	void
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			       int nents,
			       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().
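
To make the ownership rules concrete, here is a hedged sketch of reusing
one DMA_FROM_DEVICE mapping across several receive operations; ``more_work``
and ``process()`` are hypothetical stand-ins for driver logic::

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	while (more_work) {
		/* hand the buffer (back) to the device */
		dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);

		/* ... wait for the device to finish its DMA ... */

		/* reclaim ownership before the CPU touches the data */
		dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
		process(buf, len);
	}

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);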

::

	dma_addr_t
	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
			     enum dma_data_direction dir,
			     unsigned long attrs)

	void
	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)

	int
	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			 int nents, enum dma_data_direction dir,
			 unsigned long attrs)

	void
	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
			   int nents, enum dma_data_direction dir,
			   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in :doc:`/core-api/dma-attributes`.

If dma_attrs is 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

	#include <linux/dma-mapping.h>
	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
	 * documented in Documentation/core-api/dma-attributes.rst */
	...

		unsigned long attr = 0;

		attr |= DMA_ATTR_FOO;
		....
		n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
		....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

	void whizco_dma_map_sg_attrs(struct device *dev,
				     struct scatterlist *sgl, int nents,
				     enum dma_data_direction dir,
				     unsigned long attrs)
	{
		....
		if (attrs & DMA_ATTR_FOO)
			/* twizzle the frobnozzle */
		....
	}


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow you to allocate pages that are guaranteed to be DMA
addressable by the passed in device, but which need explicit management
of memory ownership for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

	void *
	dma_alloc_noncoherent(struct device *dev, size_t size,
			dma_addr_t *dma_handle, enum dma_data_direction dir,
			gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the allocated region (in the processor's virtual address
space) or NULL if the allocation failed.  The returned memory may or may not
be in the kernel direct mapping.  Drivers must not call virt_to_page on
the returned memory region.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the
device; see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.
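
A minimal hedged sketch, assuming a hypothetical device that fills the
buffer and a CPU that then parses it::

	void *buf;
	dma_addr_t dma;

	buf = dma_alloc_noncoherent(dev, size, &dma, DMA_FROM_DEVICE,
				    GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... let the device DMA into the region at dma ... */

	dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);
	/* the CPU may now read buf */

	dma_free_noncoherent(dev, size, buf, dma, DMA_FROM_DEVICE);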

::

	void
	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

::

	struct page *
	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
			enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed.  The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the
device; see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_pages(struct device *dev, size_t size, struct page *page,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages().  page must be the pointer returned by
dma_alloc_pages().
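
As a hedged sketch, page-based allocation looks much like the non-coherent
variant above, with the CPU address obtained through the struct page::

	struct page *page;
	dma_addr_t dma;
	void *buf;

	page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	buf = page_address(page);

	/* ... use buf and dma, following the usual sync rules ... */

	dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);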

::

	int
	dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

	This API may return a number *larger* than the actual cache
	line, but it will guarantee that one or more cache lines fit exactly
	into the width returned by this call.  It will also always be a power
	of two for easy alignment.
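
For instance (a hedged sketch; ``len`` is hypothetical), a driver could
round a buffer length up to this granularity before mapping it for
streaming DMA::

	size_t aligned_len = ALIGN(len, dma_get_cache_alignment());
	void *buf = kmalloc(aligned_len, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;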


Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints.  For example, DMA
addresses must be released with the corresponding function and with the
same size.  With the advent of hardware IOMMUs it becomes more and more
important that drivers do not violate those constraints.  In the worst
case such a violation can result in anything from data corruption to
destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code
can be compiled into the kernel; it will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling
this option has a performance impact.  Do not enable it in production
kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.  An example warning message may look like this::

	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
		check_unmap+0x203/0x490()
	Hardware name:
	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
		function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
	Modules linked in: nfsd exportfs bridge stp llc r8169
	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
	Call Trace:
	<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
	[<ffffffff80647b70>] _spin_unlock+0x10/0x30
	[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
	[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
	[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
	[<ffffffff80252f96>] queue_work+0x56/0x60
	[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
	[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
	[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
	[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
	[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
	[<ffffffff803c7ea3>] check_unmap+0x203/0x490
	[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
	[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
	[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
	[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
	[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
	[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
	[<ffffffff8020c093>] ret_from_intr+0x0/0xa
	<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device, including a
stacktrace of the DMA-API call which caused this warning.

By default, only the first error will result in a warning message.  All
other errors will only be counted silently.  This limitation exists to
prevent the code from flooding your kernel log.  To support debugging a
device driver, this can be disabled via debugfs.  See the debugfs interface
documentation below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors		This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

dma-api/disabled		This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

dma-api/dump			This read-only file contains current DMA
				mappings.

dma-api/error_count		This file is read-only and shows the total
				number of errors found.

dma-api/num_errors		The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

dma-api/min_free_entries	This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will attempt to increase
				nr_total_entries to compensate.

dma-api/num_free_entries	The current number of free dma_debug_entries
				in the allocator.

dma-api/nr_total_entries	The total number of dma_debug_entries in the
				allocator, both free and used.

dma-api/driver_filter		You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand.  65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default.  Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested.  The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated.  This is to indicate that a
larger preallocation size may be appropriate, or, if it happens continually,
that a driver may be leaking mappings.

::

	void
	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() is used to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces.  This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
been called by the driver.  When the driver does the unmap,
debug_dma_unmap() checks the flag and, if it is still set, prints a warning
message that includes the call trace leading up to the unmap.  This
interface can be called from dma_mapping_error() routines to enable DMA
mapping error check debugging.