mapping.c: before 8be98d2f2a0a262f8bf8a0bc1fdf522b3c7aab17, after a61cb6017df0a9be072a35259e6e9ae7aa0ef6b3
Unchanged context (identical in both revisions):

```c
// SPDX-License-Identifier: GPL-2.0
/*
 * arch-independent dma-mapping routines
 *
 * Copyright (c) 2006 SUSE Linux Products GmbH
 * Copyright (c) 2006 Tejun Heo <teheo@suse.de>
 */
#include <linux/memblock.h> /* for max_pfn */
```

--- 163 unchanged lines hidden ---

```c
            arch_dma_unmap_page_direct(dev, addr + size))
                dma_direct_unmap_page(dev, addr, size, dir, attrs);
        else if (ops->unmap_page)
                ops->unmap_page(dev, addr, size, dir, attrs);
        debug_dma_unmap_page(dev, addr, size, dir);
}
EXPORT_SYMBOL(dma_unmap_page_attrs);
```
```diff
-/*
- * dma_maps_sg_attrs returns 0 on error and > 0 on success.
- * It should never return a value < 0.
- */
-int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
-                enum dma_data_direction dir, unsigned long attrs)
+static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
+                int nents, enum dma_data_direction dir, unsigned long attrs)
 {
         const struct dma_map_ops *ops = get_dma_ops(dev);
         int ents;
 
         BUG_ON(!valid_dma_direction(dir));
 
         if (WARN_ON_ONCE(!dev->dma_mask))
                 return 0;
 
         if (dma_map_direct(dev, ops) ||
             arch_dma_map_sg_direct(dev, sg, nents))
                 ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
         else
                 ents = ops->map_sg(dev, sg, nents, dir, attrs);
-        BUG_ON(ents < 0);
-        debug_dma_map_sg(dev, sg, nents, ents, dir);
 
+        if (ents > 0)
+                debug_dma_map_sg(dev, sg, nents, ents, dir);
+        else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
+                              ents != -EIO))
+                return -EIO;
+
         return ents;
 }
```
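The refactored helper no longer BUG()s on a negative result from dma_direct_map_sg() or ops->map_sg(): -EINVAL, -ENOMEM and -EIO are passed through to the caller, any other negative value triggers a one-time warning and is reported as -EIO, and debug tracking only runs for successful mappings. Purely for illustration, here is a minimal sketch of the return convention a ->map_sg implementation is expected to follow under this contract; the callback name and the per-segment "mapping" are hypothetical, only the signature and the error codes come from the code above.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical dma_map_ops::map_sg callback, shown only to illustrate the
 * return convention __dma_map_sg_attrs() relies on: the number of mapped
 * entries on success, or -EINVAL/-ENOMEM/-EIO on failure (never 0, never
 * an arbitrary negative value).
 */
static int example_map_sg(struct device *dev, struct scatterlist *sgl,
                          int nents, enum dma_data_direction dir,
                          unsigned long attrs)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                /* Placeholder for the real per-segment mapping work. */
                sg_dma_address(sg) = (dma_addr_t)sg_phys(sg);
                sg_dma_len(sg) = sg->length;
                if (sg_dma_address(sg) == DMA_MAPPING_ERROR)
                        return -EIO;    /* no better error code available */
        }
        return nents;                   /* every segment was mapped */
}
```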
```diff
+
+/**
+ * dma_map_sg_attrs - Map the given buffer for DMA
+ * @dev: The device for which to perform the DMA operation
+ * @sg: The sg_table object describing the buffer
+ * @nents: Number of entries to map
+ * @dir: DMA direction
+ * @attrs: Optional DMA attributes for the map operation
+ *
+ * Maps a buffer described by a scatterlist passed in the sg argument with
+ * nents segments for the @dir DMA operation by the @dev device.
+ *
+ * Returns the number of mapped entries (which can be less than nents)
+ * on success. Zero is returned for any error.
+ *
+ * dma_unmap_sg_attrs() should be used to unmap the buffer with the
+ * original sg and original nents (not the value returned by this function).
+ */
+unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
+                int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+        int ret;
+
+        ret = __dma_map_sg_attrs(dev, sg, nents, dir, attrs);
+        if (ret < 0)
+                return 0;
+        return ret;
+}
 EXPORT_SYMBOL(dma_map_sg_attrs);
```
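The exported dma_map_sg_attrs() keeps the historical contract for drivers: it returns the number of mapped entries (which can be less than nents, e.g. when an IOMMU coalesces segments) or 0 on any error, and the buffer must be unmapped with the original scatterlist and nents. A minimal caller sketch, assuming a hypothetical driver with an already-built scatterlist; the function and variable names are placeholders, not from mapping.c.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical caller: my_dev, sgl and nents are placeholders. */
static int example_map_with_sg_attrs(struct device *my_dev,
                                     struct scatterlist *sgl, int nents)
{
        unsigned int mapped;

        mapped = dma_map_sg_attrs(my_dev, sgl, nents, DMA_TO_DEVICE, 0);
        if (!mapped)            /* 0 means failure; no errno is available here */
                return -ENOMEM;

        /*
         * ... program the device with the first 'mapped' entries; 'mapped'
         * may be smaller than 'nents' ...
         */

        /* Unmap with the original sgl and nents, not with 'mapped'. */
        dma_unmap_sg_attrs(my_dev, sgl, nents, DMA_TO_DEVICE, 0);
        return 0;
}
```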
```diff
 
+/**
+ * dma_map_sgtable - Map the given buffer for DMA
+ * @dev: The device for which to perform the DMA operation
+ * @sgt: The sg_table object describing the buffer
+ * @dir: DMA direction
+ * @attrs: Optional DMA attributes for the map operation
+ *
+ * Maps a buffer described by a scatterlist stored in the given sg_table
+ * object for the @dir DMA operation by the @dev device. After success, the
+ * ownership for the buffer is transferred to the DMA domain. One has to
+ * call dma_sync_sgtable_for_cpu() or dma_unmap_sgtable() to move the
+ * ownership of the buffer back to the CPU domain before touching the
+ * buffer by the CPU.
+ *
+ * Returns 0 on success or a negative error code on error. The following
+ * error codes are supported with the given meaning:
+ *
+ *   -EINVAL - An invalid argument, unaligned access or other error
+ *             in usage. Will not succeed if retried.
+ *   -ENOMEM - Insufficient resources (like memory or IOVA space) to
+ *             complete the mapping. Should succeed if retried later.
+ *   -EIO    - Legacy error code with an unknown meaning. eg. this is
+ *             returned if a lower level call returned DMA_MAPPING_ERROR.
+ */
+int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
+                enum dma_data_direction dir, unsigned long attrs)
+{
+        int nents;
+
+        nents = __dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
+        if (nents < 0)
+                return nents;
+        sgt->nents = nents;
+        return 0;
+}
+EXPORT_SYMBOL_GPL(dma_map_sgtable);
+
```
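dma_map_sgtable() is the variant that exposes the new error codes: on success it stores the mapped entry count in sgt->nents and returns 0, on failure it returns one of the negative errnos described in the kernel-doc above, so a caller can treat -ENOMEM as retryable. A caller sketch under the same hypothetical-driver assumptions as before; dma_unmap_sgtable() is the documented counterpart for releasing the mapping.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical caller working on an already-populated sg_table. */
static int example_map_with_sgtable(struct device *my_dev, struct sg_table *sgt)
{
        int ret;

        ret = dma_map_sgtable(my_dev, sgt, DMA_BIDIRECTIONAL, 0);
        if (ret)
                return ret;     /* -ENOMEM may succeed if retried later, -EINVAL will not */

        /*
         * The device owns the buffer now; sgt->nents holds the number of
         * mapped entries to program into the hardware.
         */

        /* Move ownership back to the CPU before the CPU touches the data. */
        dma_unmap_sgtable(my_dev, sgt, DMA_BIDIRECTIONAL, 0);
        return 0;
}
```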
Unchanged context (identical in both revisions):

```c
void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
                int nents, enum dma_data_direction dir,
                unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        BUG_ON(!valid_dma_direction(dir));
        debug_dma_unmap_sg(dev, sg, nents, dir);
```

--- 524 unchanged lines hidden ---