================================
Devres - Managed Device Resource
================================

Tejun Heo <teheo@suse.de>

First draft	10 January 2007

.. contents

   1. Intro                        : Huh? Devres?
   2. Devres                       : Devres in a nutshell
   3. Devres Group                 : Group devres'es and release them together
   4. Details                      : Life time rules, calling context, ...
   5. Overhead                     : How much do we have to pay for this?
   6. List of managed interfaces   : Currently implemented managed interfaces


1. Intro
--------

devres came up while trying to convert libata to use iomap. Each
iomapped address should be kept and unmapped on driver detach. For
example, a plain SFF ATA controller (that is, good old PCI IDE) in
native mode makes use of 5 PCI BARs and all of them should be
maintained.

As with many other device drivers, libata low level drivers have
their share of bugs in the ->remove and ->probe failure paths. Well,
yes, that's probably because libata low level driver developers are a
lazy bunch, but aren't all low level driver developers? After
spending a day fiddling with braindamaged hardware that has no
documentation, or braindamaged documentation, if it's finally
working, well, it's working.

For one reason or another, low level drivers don't receive as much
attention or testing as core code, and bugs on driver detach or
initialization failure don't happen often enough to be noticeable.
The init failure path is worse because it's much less travelled yet
needs to handle multiple entry points.

So, many low level drivers end up leaking resources on driver detach
and having a half-broken failure path in ->probe() which leaks
resources or even oopses when a failure occurs. iomap adds more to
this mix. So do msi and msix.


2. Devres
---------

devres is basically a linked list of arbitrarily sized memory areas
associated with a struct device. Each devres entry is associated with
a release function. A devres can be released in several ways. No
matter what, all devres entries are released on driver detach. On
release, the associated release function is invoked and then the
devres entry is freed.

Managed interfaces are created for resources commonly used by device
drivers using devres. For example, coherent DMA memory is acquired
using dma_alloc_coherent(). The managed version is called
dmam_alloc_coherent(). It is identical to dma_alloc_coherent() except
that the DMA memory allocated with it is managed and will be
automatically released on driver detach. The implementation looks
like the following::

  struct dma_devres {
          size_t size;
          void *vaddr;
          dma_addr_t dma_handle;
  };

  static void dmam_coherent_release(struct device *dev, void *res)
  {
          struct dma_devres *this = res;

          dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
  }

  dmam_alloc_coherent(dev, size, dma_handle, gfp)
  {
          struct dma_devres *dr;
          void *vaddr;

          dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
          ...

          /* alloc DMA memory as usual */
          vaddr = dma_alloc_coherent(...);
          ...

          /* record size, vaddr, dma_handle in dr */
          dr->vaddr = vaddr;
          ...

          devres_add(dev, dr);

          return vaddr;
  }
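
A driver does not have to wait for detach, though; a managed resource
can also be released early. As a rough sketch (the dmam_match() helper
below is illustrative rather than the exact in-tree code),
dmam_free_coherent() can free the memory itself and then drop the
matching devres entry with devres_destroy(), which removes and frees
the entry without invoking its release function::

  static int dmam_match(struct device *dev, void *res, void *match_data)
  {
          struct dma_devres *this = res, *match = match_data;

          /* identify the entry by the vaddr recorded at allocation time */
          return this->vaddr == match->vaddr;
  }

  void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
                          dma_addr_t dma_handle)
  {
          struct dma_devres match_data = { size, vaddr, dma_handle };

          /* free the DMA memory as usual ... */
          dma_free_coherent(dev, size, vaddr, dma_handle);

          /*
           * ... then destroy the bookkeeping entry so that the release
           * function is not called again on driver detach.
           */
          devres_destroy(dev, dmam_coherent_release, dmam_match, &match_data);
  }
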
If a driver uses dmam_alloc_coherent(), the area is guaranteed to be
freed whether initialization fails half-way or the device gets
detached. If most resources are acquired using managed interfaces, a
driver can have much simpler init and exit code. The init path
basically looks like the following::

  my_init_one()
  {
          struct mydev *d;

          d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
          if (!d)
                  return -ENOMEM;

          d->ring = dmam_alloc_coherent(...);
          if (!d->ring)
                  return -ENOMEM;

          if (check something)
                  return -EINVAL;
          ...

          return register_to_upper_layer(d);
  }

And the exit path::

  my_remove_one()
  {
          unregister_from_upper_layer(d);
          shutdown_my_hardware();
  }

As shown above, low level drivers can be simplified a lot by using
devres. Complexity is shifted from the less maintained low level
drivers to the better maintained higher layer. Also, as the init
failure path is shared with the exit path, both get more testing.

Note, though, that when converting existing calls or assignments to
their managed devm_* versions, it is still up to you to check whether
internal operations, such as memory allocation, have failed. Managed
resources pertain to the freeing of those resources *only*; all other
checks are still on you. In some cases this may mean introducing
checks that were not necessary before moving to the managed devm_*
calls.


3. Devres group
---------------

Devres entries can be grouped using devres groups. When a group is
released, all contained normal devres entries and properly nested
groups are released. One usage is to roll back a series of acquired
resources on failure. For example::

  if (!devres_open_group(dev, NULL, GFP_KERNEL))
          return -ENOMEM;

  acquire A;
  if (failed)
          goto err;

  acquire B;
  if (failed)
          goto err;
  ...

  devres_remove_group(dev, NULL);
  return 0;

 err:
  devres_release_group(dev, NULL);
  return err_code;

As resource acquisition failure usually means probe failure,
constructs like the above are usually useful in midlayer drivers
(e.g. the libata core layer) where an interface function shouldn't
have side effects on failure. For LLDs, just returning an error code
suffices in most cases.

Each group is identified by `void *id`. It can either be explicitly
specified by the @id argument to devres_open_group() or automatically
created by passing NULL as @id as in the above example. In both
cases, devres_open_group() returns the group's id. The returned id
can be passed to other devres functions to select the target group.
If NULL is given to those functions, the latest open group is
selected.

For example, you can do something like the following::

  int my_midlayer_create_something()
  {
          if (!devres_open_group(dev, my_midlayer_create_something, GFP_KERNEL))
                  return -ENOMEM;

          ...

          devres_close_group(dev, my_midlayer_create_something);
          return 0;
  }

  void my_midlayer_destroy_something()
  {
          devres_release_group(dev, my_midlayer_create_something);
  }


4. Details
----------

The lifetime of a devres entry begins on devres allocation and
finishes when it is released or destroyed (removed and freed) - no
reference counting.

The devres core guarantees atomicity to all basic devres operations
and has support for single-instance devres types (atomic
lookup-and-add-if-not-found). Other than that, synchronizing
concurrent accesses to allocated devres data is the caller's
responsibility. This is usually a non-issue because bus ops and
resource allocations already do the job.
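
The lookup-and-add-if-not-found primitive is devres_get(): it either
returns the data of an already registered entry with the same release
function or atomically installs the new one. A minimal sketch of a
single-instance devres type (struct my_state and the helper names are
hypothetical)::

  struct my_state {
          /* per-device state that should exist exactly once */
          unsigned long flags;
  };

  static void my_state_release(struct device *dev, void *res)
  {
          /* nothing to tear down in this sketch */
  }

  static struct my_state *my_state_get(struct device *dev)
  {
          struct my_state *new;

          new = devres_alloc(my_state_release, sizeof(*new), GFP_KERNEL);
          if (!new)
                  return NULL;

          /*
           * Atomically return the existing instance or install @new as
           * the single instance. If an instance already exists, @new
           * is freed by devres_get() itself.
           */
          return devres_get(dev, new, NULL, NULL);
  }
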
For an in-tree example of a single-instance devres type, see
pcim_iomap_table() in lib/devres.c.

All devres interface functions can be called from any context,
including atomic context, as long as the right gfp mask is given.
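
To illustrate, a managed allocation made from a context that cannot
sleep just passes a non-sleeping gfp mask. A minimal, contrived sketch
(struct my_priv and the calling convention are hypothetical)::

  struct my_priv {
          void *scratch;
  };

  /* caller holds a spinlock or runs in irq context, so no sleeping */
  static int my_alloc_scratch_atomic(struct device *dev, struct my_priv *p)
  {
          /* same call as usual, only the gfp mask differs */
          p->scratch = devm_kzalloc(dev, 256, GFP_ATOMIC);
          if (!p->scratch)
                  return -ENOMEM;

          return 0;
  }
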

5. Overhead
-----------

Each devres bookkeeping info is allocated together with the requested
data area. With the debug option turned off, the bookkeeping info
occupies 16 bytes on 32bit machines and 24 bytes on 64bit machines
(three pointers rounded up to ull alignment). If a singly linked list
is used, it can be reduced to two pointers (8 bytes on 32bit, 16 bytes
on 64bit).

Each devres group occupies 8 pointers. It can be reduced to 6 if a
singly linked list is used.

The memory space overhead on an ahci controller with two ports is
between 300 and 400 bytes on a 32bit machine after a naive conversion
(we can certainly invest a bit more effort into the libata core
layer).


6. List of managed interfaces
-----------------------------

CLOCK
  devm_clk_get()
  devm_clk_get_optional()
  devm_clk_put()
  devm_clk_bulk_get()
  devm_clk_bulk_get_all()
  devm_clk_bulk_get_optional()
  devm_get_clk_from_child()
  devm_clk_hw_register()
  devm_of_clk_add_hw_provider()
  devm_clk_hw_register_clkdev()

DMA
  dmaenginem_async_device_register()
  dmam_alloc_coherent()
  dmam_alloc_attrs()
  dmam_free_coherent()
  dmam_pool_create()
  dmam_pool_destroy()

DRM
  devm_drm_dev_alloc()

GPIO
  devm_gpiod_get()
  devm_gpiod_get_array()
  devm_gpiod_get_array_optional()
  devm_gpiod_get_index()
  devm_gpiod_get_index_optional()
  devm_gpiod_get_optional()
  devm_gpiod_put()
  devm_gpiod_unhinge()
  devm_gpiochip_add_data()
  devm_gpio_request()
  devm_gpio_request_one()

I2C
  devm_i2c_add_adapter()
  devm_i2c_new_dummy_device()

IIO
  devm_iio_device_alloc()
  devm_iio_device_register()
  devm_iio_dmaengine_buffer_setup()
  devm_iio_kfifo_buffer_setup()
  devm_iio_kfifo_buffer_setup_ext()
  devm_iio_map_array_register()
  devm_iio_triggered_buffer_setup()
  devm_iio_triggered_buffer_setup_ext()
  devm_iio_trigger_alloc()
  devm_iio_trigger_register()
  devm_iio_channel_get()
  devm_iio_channel_get_all()
  devm_iio_hw_consumer_alloc()
  devm_fwnode_iio_channel_get_by_name()

INPUT
  devm_input_allocate_device()

IO region
  devm_release_mem_region()
  devm_release_region()
  devm_release_resource()
  devm_request_mem_region()
  devm_request_free_mem_region()
  devm_request_region()
  devm_request_resource()

IOMAP
  devm_ioport_map()
  devm_ioport_unmap()
  devm_ioremap()
  devm_ioremap_uc()
  devm_ioremap_wc()
  devm_ioremap_resource() : checks resource, requests memory region, ioremaps
  devm_ioremap_resource_wc()
  devm_platform_ioremap_resource() : calls devm_ioremap_resource() for platform device
  devm_platform_ioremap_resource_byname()
  devm_platform_get_and_ioremap_resource()
  devm_iounmap()
  pcim_iomap()
  pcim_iomap_regions() : do request_region() and iomap() on multiple BARs
  pcim_iomap_table() : array of mapped addresses indexed by BAR
  pcim_iounmap()

IRQ
  devm_free_irq()
  devm_request_any_context_irq()
  devm_request_irq()
  devm_request_threaded_irq()
  devm_irq_alloc_descs()
  devm_irq_alloc_desc()
  devm_irq_alloc_desc_at()
  devm_irq_alloc_desc_from()
  devm_irq_alloc_descs_from()
  devm_irq_alloc_generic_chip()
  devm_irq_setup_generic_chip()
  devm_irq_domain_create_sim()

LED
  devm_led_classdev_register()
  devm_led_classdev_register_ext()
  devm_led_classdev_unregister()
  devm_led_trigger_register()
  devm_of_led_get()

MDIO
  devm_mdiobus_alloc()
  devm_mdiobus_alloc_size()
  devm_mdiobus_register()
  devm_of_mdiobus_register()

MEM
  devm_free_pages()
  devm_get_free_pages()
  devm_kasprintf()
  devm_kcalloc()
  devm_kfree()
  devm_kmalloc()
  devm_kmalloc_array()
  devm_kmemdup()
  devm_krealloc()
  devm_krealloc_array()
  devm_kstrdup()
  devm_kstrdup_const()
  devm_kvasprintf()
  devm_kzalloc()

MFD
  devm_mfd_add_devices()

MUX
  devm_mux_chip_alloc()
  devm_mux_chip_register()
  devm_mux_control_get()
  devm_mux_state_get()

NET
  devm_alloc_etherdev()
  devm_alloc_etherdev_mqs()
  devm_register_netdev()

PER-CPU MEM
  devm_alloc_percpu()
  devm_free_percpu()

PCI
  devm_pci_alloc_host_bridge() : managed PCI host bridge allocation
  devm_pci_remap_cfgspace() : ioremap PCI configuration space
  devm_pci_remap_cfg_resource() : ioremap PCI configuration space resource
  pcim_enable_device() : after success, all PCI ops become managed
  pcim_pin_device() : keep PCI device enabled after release

PHY
  devm_usb_get_phy()
  devm_usb_get_phy_by_node()
  devm_usb_get_phy_by_phandle()
  devm_usb_put_phy()

PINCTRL
  devm_pinctrl_get()
  devm_pinctrl_put()
  devm_pinctrl_get_select()
  devm_pinctrl_register()
  devm_pinctrl_register_and_init()
  devm_pinctrl_unregister()

POWER
  devm_reboot_mode_register()
  devm_reboot_mode_unregister()

PWM
  devm_pwmchip_add()
  devm_pwm_get()
  devm_fwnode_pwm_get()

REGULATOR
  devm_regulator_bulk_register_supply_alias()
  devm_regulator_bulk_get()
  devm_regulator_bulk_get_const()
  devm_regulator_bulk_get_enable()
  devm_regulator_bulk_put()
  devm_regulator_get()
  devm_regulator_get_enable()
  devm_regulator_get_enable_optional()
  devm_regulator_get_exclusive()
  devm_regulator_get_optional()
  devm_regulator_irq_helper()
  devm_regulator_put()
  devm_regulator_register()
  devm_regulator_register_notifier()
  devm_regulator_register_supply_alias()
  devm_regulator_unregister_notifier()

RESET
  devm_reset_control_get()
  devm_reset_controller_register()

RTC
  devm_rtc_device_register()
  devm_rtc_allocate_device()
  devm_rtc_register_device()
  devm_rtc_nvmem_register()

SERDEV
  devm_serdev_device_open()

SLAVE DMA ENGINE
  devm_acpi_dma_controller_register()
  devm_acpi_dma_controller_free()

SPI
  devm_spi_alloc_master()
  devm_spi_alloc_slave()
  devm_spi_register_master()

WATCHDOG
  devm_watchdog_register_device()
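
To put a few of the interfaces above together, here is a minimal
sketch of a platform driver probe() built almost entirely from managed
calls; the device, its register layout, the interrupt handler and the
foo_* names are hypothetical::

  #include <linux/clk.h>
  #include <linux/err.h>
  #include <linux/interrupt.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>

  struct foo_priv {
          void __iomem *regs;
          struct clk *clk;
  };

  static irqreturn_t foo_irq(int irq, void *data)
  {
          /* acknowledge the (hypothetical) hardware here */
          return IRQ_HANDLED;
  }

  static int foo_probe(struct platform_device *pdev)
  {
          struct device *dev = &pdev->dev;
          struct foo_priv *priv;
          int irq, ret;

          /* freed automatically on probe failure or driver detach */
          priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
          if (!priv)
                  return -ENOMEM;

          /* managed request_mem_region() + ioremap() of memory resource 0 */
          priv->regs = devm_platform_ioremap_resource(pdev, 0);
          if (IS_ERR(priv->regs))
                  return PTR_ERR(priv->regs);

          /* clk_put() happens automatically on detach */
          priv->clk = devm_clk_get(dev, NULL);
          if (IS_ERR(priv->clk))
                  return PTR_ERR(priv->clk);

          irq = platform_get_irq(pdev, 0);
          if (irq < 0)
                  return irq;

          /* free_irq() happens automatically on detach */
          ret = devm_request_irq(dev, irq, foo_irq, 0, "foo", priv);
          if (ret)
                  return ret;

          platform_set_drvdata(pdev, priv);
          return 0;
  }

Note that no unwind labels are needed: everything acquired through the
managed calls is released automatically, in reverse order, when a later
step fails or when the driver is detached.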