blk-ioc.c: 90b627f5426ce144cdd4ea585d1f7812359a1a6a (old) -> 5ef1630586317e92c9ebd7b4ce48f393b7ff790f (new)
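At a glance: this diff introduces a CONFIG_BLK_ICQ build option so the io_cq (icq) machinery is compiled only when an I/O scheduler that uses it is enabled. ioc_clear_queue() moves inside the new guarded region, static inline no-op stubs cover the disabled case, and the icq-specific field initialization in alloc_io_context() is likewise guarded. The Kconfig side is not shown here; presumably BLK_ICQ is a hidden bool that icq-using schedulers (BFQ being the in-tree user) select. A sketch of the resulting file layout, as read from the hunks below:

/* Guarded regions of blk-ioc.c after this change (sketch). */
#ifdef CONFIG_BLK_ICQ
/* get_io_context(), icq teardown (ioc_destroy_icq, ioc_release_fn),
 * ioc_exit_icqs(), ioc_delay_free(), ioc_clear_queue() */
#else /* CONFIG_BLK_ICQ */
/* static inline no-op stubs for ioc_exit_icqs() and ioc_delay_free() */
#endif

/* Always built: put_io_context(), alloc_io_context() (icq fields guarded),
 * set_task_ioprio(), __copy_io(), blk_ioc_init(). */

#ifdef CONFIG_BLK_ICQ
/* ioc_lookup_icq() ... ioc_find_get_icq() */
#endif /* CONFIG_BLK_ICQ */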
// SPDX-License-Identifier: GPL-2.0
/*
 * Functions related to io context handling
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/bio.h>

--- 5 unchanged lines hidden ---

#include "blk.h"
#include "blk-mq-sched.h"

/*
 * For io context allocations
 */
static struct kmem_cache *iocontext_cachep;
+#ifdef CONFIG_BLK_ICQ
/**
 * get_io_context - increment reference count to io_context
 * @ioc: io_context to get
 *
 * Increment reference count to @ioc.
 */
static void get_io_context(struct io_context *ioc)
{

--- 128 unchanged lines hidden ---

		spin_unlock_irqrestore(&ioc->lock, flags);
		return true;
	}
	spin_unlock_irqrestore(&ioc->lock, flags);
	return false;
}

+/**
+ * ioc_clear_queue - break any ioc association with the specified queue
+ * @q: request_queue being cleared
+ *
+ * Walk @q->icq_list and exit all io_cq's.
+ */
+void ioc_clear_queue(struct request_queue *q)
+{
+	LIST_HEAD(icq_list);
+
+	spin_lock_irq(&q->queue_lock);
+	list_splice_init(&q->icq_list, &icq_list);
+	spin_unlock_irq(&q->queue_lock);
+
+	rcu_read_lock();
+	while (!list_empty(&icq_list)) {
+		struct io_cq *icq =
+			list_entry(icq_list.next, struct io_cq, q_node);
+
+		spin_lock_irq(&icq->ioc->lock);
+		if (!(icq->flags & ICQ_DESTROYED))
+			ioc_destroy_icq(icq);
+		spin_unlock_irq(&icq->ioc->lock);
+	}
+	rcu_read_unlock();
+}
+#else /* CONFIG_BLK_ICQ */
+static inline void ioc_exit_icqs(struct io_context *ioc)
+{
+}
+static inline bool ioc_delay_free(struct io_context *ioc)
+{
+	return false;
+}
+#endif /* CONFIG_BLK_ICQ */
+
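The #else branch just added is the usual compile-out idiom: static inline no-ops keep the always-built callers free of #ifdefs, and the compiler folds the dead branches away. A self-contained toy illustration of the pattern, with hypothetical names (CONFIG_FEATURE_ICQ, feature_exit, feature_delay_free are made up for the demo):

#include <stdbool.h>
#include <stdio.h>

/* #define CONFIG_FEATURE_ICQ 1 */	/* toggle to compare both builds */

#ifdef CONFIG_FEATURE_ICQ
static void feature_exit(void) { printf("tearing down per-queue state\n"); }
static bool feature_delay_free(void) { return true; }
#else /* stubs: the callers below need no #ifdef of their own */
static inline void feature_exit(void) { }
static inline bool feature_delay_free(void) { return false; }
#endif

int main(void)
{
	feature_exit();			/* compiles to nothing when off */
	if (!feature_delay_free())	/* constant-folded when off */
		printf("freeing immediately\n");
	return 0;
}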
/**
 * put_io_context - put a reference of io_context
 * @ioc: io_context to put
 *
 * Decrement reference count of @ioc and release it if the count reaches
 * zero.
 */
void put_io_context(struct io_context *ioc)
{

--- 14 unchanged lines hidden ---

	task_unlock(task);

	if (atomic_dec_and_test(&ioc->active_ref)) {
		ioc_exit_icqs(ioc);
		put_io_context(ioc);
	}
}

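Two counters are in play in this always-built region: refcount governs the io_context's memory lifetime, while active_ref counts live users; the tail visible above drops active_ref and, on the last drop, exits the icqs and puts the final reference. A minimal userspace analogue of the dec-and-test release idiom (illustration only; the kernel side uses atomic_dec_and_test(), as in the hunk above):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_long refcount;
};

static void obj_put(struct obj *o)
{
	/* fetch_sub returns the old value: the putter who saw 1 frees. */
	if (atomic_fetch_sub(&o->refcount, 1) == 1) {
		printf("last reference dropped, freeing\n");
		free(o);
	}
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	atomic_init(&o->refcount, 2);	/* two holders, as after a get */
	obj_put(o);			/* one reference remains */
	obj_put(o);			/* last put frees */
	return 0;
}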
-/**
- * ioc_clear_queue - break any ioc association with the specified queue
- * @q: request_queue being cleared
- *
- * Walk @q->icq_list and exit all io_cq's.
- */
-void ioc_clear_queue(struct request_queue *q)
-{
-	LIST_HEAD(icq_list);
-
-	spin_lock_irq(&q->queue_lock);
-	list_splice_init(&q->icq_list, &icq_list);
-	spin_unlock_irq(&q->queue_lock);
-
-	rcu_read_lock();
-	while (!list_empty(&icq_list)) {
-		struct io_cq *icq =
-			list_entry(icq_list.next, struct io_cq, q_node);
-
-		spin_lock_irq(&icq->ioc->lock);
-		if (!(icq->flags & ICQ_DESTROYED))
-			ioc_destroy_icq(icq);
-		spin_unlock_irq(&icq->ioc->lock);
-	}
-	rcu_read_unlock();
-}
-
static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
{
	struct io_context *ioc;

	ioc = kmem_cache_alloc_node(iocontext_cachep, gfp_flags | __GFP_ZERO,
				    node);
	if (unlikely(!ioc))
		return NULL;

	atomic_long_set(&ioc->refcount, 1);
	atomic_set(&ioc->active_ref, 1);
+#ifdef CONFIG_BLK_ICQ
	spin_lock_init(&ioc->lock);
	INIT_RADIX_TREE(&ioc->icq_tree, GFP_ATOMIC);
	INIT_HLIST_HEAD(&ioc->icq_list);
	INIT_WORK(&ioc->release_work, ioc_release_fn);
+#endif
	return ioc;
}

int set_task_ioprio(struct task_struct *task, int ioprio)
{
	int err;
	const struct cred *cred = current_cred(), *tcred;

--- 49 unchanged lines hidden ---

		if (!tsk->io_context)
			return -ENOMEM;
		tsk->io_context->ioprio = ioc->ioprio;
	}

	return 0;
}
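With CONFIG_BLK_ICQ disabled, the two one-line hunks above reduce alloc_io_context() to a slab allocation plus the two reference counts; after preprocessing it is effectively:

static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
{
	struct io_context *ioc;

	ioc = kmem_cache_alloc_node(iocontext_cachep, gfp_flags | __GFP_ZERO,
				    node);
	if (unlikely(!ioc))
		return NULL;

	atomic_long_set(&ioc->refcount, 1);
	atomic_set(&ioc->active_ref, 1);
	return ioc;
}

set_task_ioprio() above is the kernel backend of the ioprio_set(2) syscall. A small userspace program that exercises this path; the constants are copied from the uapi header (best-effort class = 2, 13-bit class shift), which is stable ABI:

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_BE		2
#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_PRIO_VALUE(cl, data)	(((cl) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
	/* Best-effort class, level 4, applied to the calling process. */
	int prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4);

	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio) < 0) {
		perror("ioprio_set");
		return 1;
	}
	printf("I/O priority set to be/4\n");
	return 0;
}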
+#ifdef CONFIG_BLK_ICQ
/**
 * ioc_lookup_icq - lookup io_cq from ioc
 * @q: the associated request_queue
 *
 * Look up io_cq associated with @ioc - @q pair from @ioc. Must be called
 * with @q->queue_lock held.
 */
struct io_cq *ioc_lookup_icq(struct request_queue *q)

--- 112 unchanged lines hidden ---

		if (!icq) {
			put_io_context(ioc);
			return NULL;
		}
	}
	return icq;
}
EXPORT_SYMBOL_GPL(ioc_find_get_icq);
+#endif /* CONFIG_BLK_ICQ */
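The declarations for the functions guarded above live in block/blk.h, which is not part of this diff; the companion header change would presumably mirror the gating, stubbing ioc_clear_queue() for its always-built callers in the queue-teardown path. A sketch under that assumption:

/* Assumed shape of the block/blk.h counterpart (not shown in this diff). */
#ifdef CONFIG_BLK_ICQ
struct io_cq *ioc_find_get_icq(struct request_queue *q);
struct io_cq *ioc_lookup_icq(struct request_queue *q);
void ioc_clear_queue(struct request_queue *q);
#else /* CONFIG_BLK_ICQ */
static inline void ioc_clear_queue(struct request_queue *q)
{
}
#endif /* CONFIG_BLK_ICQ */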

static int __init blk_ioc_init(void)
{
	iocontext_cachep = kmem_cache_create("blkdev_ioc",
			sizeof(struct io_context), 0, SLAB_PANIC, NULL);
	return 0;
}
subsys_initcall(blk_ioc_init);
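SLAB_PANIC tells kmem_cache_create() to panic rather than return NULL, which is why blk_ioc_init() needs no failure handling; spelled out without the flag, the equivalent would be:

/* Equivalent without SLAB_PANIC (illustration only). */
static int __init blk_ioc_init(void)
{
	iocontext_cachep = kmem_cache_create("blkdev_ioc",
			sizeof(struct io_context), 0, 0, NULL);
	if (!iocontext_cachep)
		panic("blkdev_ioc: slab cache creation failed");
	return 0;
}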