/openbmc/linux/block/kyber-iosched.c
  185   unsigned int batching;    (member)
  505   khd->batching = 0;    in kyber_init_hctx()
  776   khd->batching++;    in kyber_dispatch_cur_domain()
  789   khd->batching++;    in kyber_dispatch_cur_domain()
  816   if (khd->batching < kyber_batch_size[khd->cur_domain]) {    in kyber_dispatch_request()
  831   khd->batching = 0;    in kyber_dispatch_request()
  983   seq_printf(m, "%u\n", khd->batching);    in kyber_batching_show()
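
The kyber-iosched.c hits above outline kyber's dispatch batching: a per-hctx
"batching" counter starts at zero, is incremented for each request served from
the current scheduling domain, is checked against the per-domain
kyber_batch_size[] limit, and is reset when a new batch begins. The following
standalone sketch illustrates only that counter pattern; the demo_* names,
NUM_DOMAINS, and the batch sizes are assumptions made for the illustration,
not the kernel's actual code.

    /*
     * Sketch of per-domain dispatch batching (assumed names, not kyber's).
     * Keep serving the current domain until its batch size is reached, then
     * move to the next domain and start a new batch.
     */
    #include <stdio.h>

    #define NUM_DOMAINS 3

    /* Per-domain batch limits; values are illustrative. */
    static const unsigned int batch_size[NUM_DOMAINS] = { 4, 2, 1 };

    struct demo_hctx {
        unsigned int cur_domain;
        unsigned int batching;  /* requests dispatched in the current batch */
    };

    /* Dispatch one request and return the domain it was taken from. */
    static unsigned int demo_dispatch(struct demo_hctx *h)
    {
        if (h->batching < batch_size[h->cur_domain]) {
            /* Still within the current batch: extend it. */
            h->batching++;
        } else {
            /* Batch exhausted: move on; this request opens the new batch. */
            h->cur_domain = (h->cur_domain + 1) % NUM_DOMAINS;
            h->batching = 1;
        }
        return h->cur_domain;
    }

    int main(void)
    {
        struct demo_hctx h = { .cur_domain = 0, .batching = 0 };

        for (int i = 0; i < 10; i++) {
            unsigned int domain = demo_dispatch(&h);

            printf("request %2d -> domain %u (batching=%u)\n",
                   i, domain, h.batching);
        }
        return 0;
    }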

/openbmc/linux/block/mq-deadline.c
  91    unsigned int batching;  /* number of sequential requests made */    (member)
  468   if (rq && dd->batching < dd->fifo_batch) {    in __dd_dispatch_request()
  536   dd->batching = 0;    in __dd_dispatch_request()
  545   dd->batching++;    in __dd_dispatch_request()
  1115  seq_printf(m, "%u\n", dd->batching);    in deadline_batching_show()
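
The mq-deadline.c hits show the fifo_batch mechanism: while a suitable next
request exists and dd->batching is still below dd->fifo_batch, the scheduler
keeps extending the current batch of sequential requests; otherwise it resets
the counter and starts a new batch (the real driver also reconsiders expired
FIFO requests at that point). The small program below illustrates just that
counter test; the demo_* names and the fifo_batch value of 3 are assumptions
for the sketch, not the driver's layout.

    /*
     * Sketch of fifo_batch-style batching (assumed names, not mq-deadline's).
     * Sequential requests extend the current batch until the limit is hit,
     * after which the counter is reset and a fresh batch begins.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct demo_dd {
        unsigned int batching;    /* sequential requests in this batch */
        unsigned int fifo_batch;  /* maximum batch size */
    };

    /* True if the next sequential request may join the current batch. */
    static bool demo_may_batch(const struct demo_dd *dd, bool have_next_rq)
    {
        return have_next_rq && dd->batching < dd->fifo_batch;
    }

    static void demo_dispatch(struct demo_dd *dd, bool have_next_rq)
    {
        if (!demo_may_batch(dd, have_next_rq)) {
            /* Limit reached (or no sequential request): start a new batch. */
            dd->batching = 0;
            printf("starting a new batch\n");
        }
        dd->batching++;
        printf("dispatched, batching=%u/%u\n", dd->batching, dd->fifo_batch);
    }

    int main(void)
    {
        struct demo_dd dd = { .batching = 0, .fifo_batch = 3 };

        for (int i = 0; i < 8; i++)
            demo_dispatch(&dd, true);
        return 0;
    }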

/openbmc/linux/kernel/rcu/Kconfig
  325   thus defeating the 32-callback batching used to amortize the
  329   jiffy, and overrides the 32-callback batching if this limit

/openbmc/linux/Documentation/trace/events-kmem.rst
  66    callers should be batching their activities.

/openbmc/linux/tools/memory-model/Documentation/simple.txt
  64    single-threaded grace-period processing is use of batching, where all
  67    it more efficient. Nor is RCU unique: Similar batching optimizations

/openbmc/linux/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst
  258   This batching is controlled by a sequence counter named
  497   batching, so that a single grace-period operation can serve numerous
  520   permits much higher degrees of batching, and thus much lower per-request
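
The Expedited-Grace-Periods.rst hits describe grace-period batching driven by
a sequence counter: each request snapshots the counter value that the next
complete grace period will reach, and any grace period advancing the counter
past that snapshot satisfies the request, so one grace-period operation can
serve many concurrent requests. The toy below shows the snapshot/compare idea
in single-threaded form; the demo_* names and the simple odd/even counter
encoding are simplifications, not the kernel's rcu_seq code.

    /*
     * Toy sequence-counter batching (assumed names and encoding).
     * gp_seq is even when idle and odd while a grace period is in progress.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long gp_seq;

    /* Counter value at which "my" grace period is guaranteed to be done. */
    static unsigned long demo_seq_snap(void)
    {
        return (gp_seq + 3) & ~1UL;  /* skip past any grace period in flight */
    }

    static bool demo_seq_done(unsigned long snap)
    {
        return gp_seq >= snap;
    }

    static void demo_do_grace_period(void)
    {
        gp_seq++;  /* grace period starts */
        /* ... a real implementation waits for all readers here ... */
        gp_seq++;  /* grace period ends */
    }

    int main(void)
    {
        /* Three requests arrive before any grace period has run... */
        unsigned long a = demo_seq_snap();
        unsigned long b = demo_seq_snap();
        unsigned long c = demo_seq_snap();

        /* ...and one grace-period operation serves all of them. */
        demo_do_grace_period();
        printf("a=%d b=%d c=%d (gp_seq=%lu)\n",
               demo_seq_done(a), demo_seq_done(b), demo_seq_done(c), gp_seq);
        return 0;
    }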

/openbmc/linux/Documentation/networking/kcm.rst
  245   Message batching

/openbmc/linux/Documentation/networking/napi.rst
  186   In most scenarios batching happens due to IRQ coalescing which is done

/openbmc/linux/Documentation/scheduler/sched-design-CFS.rst
  100   "server" (i.e., good batching) workloads. It defaults to a setting suitable

/openbmc/qemu/docs/interop/vhost-user.rst
  775   * Only available when batching is used for submitting */
  825   #. Steps 1,2,3 may be performed repeatedly if batching is possible
  967   #. Steps 1,2,3,4 may be performed repeatedly if batching is possible

/openbmc/openbmc/meta-security/recipes-ids/suricata/files/suricata.yaml
  498   # The timeout limit for batching of packets in microseconds.
  499   batching-timeout: 2000
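
The suricata.yaml hit sets a packet batching timeout of 2000 microseconds.
A common way to implement such a knob is to flush a batch either when it
reaches a size limit or when its oldest queued packet has waited longer than
the timeout, so light traffic is not held back indefinitely. The sketch below
shows that generic count-or-timeout pattern; it is not Suricata's code, and
BATCH_MAX plus the helper names are made up (only the 2000 microsecond figure
comes from the configuration above).

    /*
     * Generic count-or-timeout batching sketch (assumed names and sizes).
     * Note: a real capture loop would also check the timeout while idle;
     * here the check only happens when a new packet is enqueued.
     */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <time.h>

    #define BATCH_MAX        32      /* illustrative batch size limit */
    #define BATCH_TIMEOUT_US 2000    /* matches batching-timeout above */

    struct batch {
        int count;
        struct timespec first;  /* arrival time of the oldest queued packet */
    };

    static long elapsed_us(const struct timespec *since)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - since->tv_sec) * 1000000L +
               (now.tv_nsec - since->tv_nsec) / 1000L;
    }

    static void flush(struct batch *b, const char *why)
    {
        printf("flushing %d packet(s): %s\n", b->count, why);
        b->count = 0;
    }

    /* Called for every received packet. */
    static void enqueue(struct batch *b)
    {
        if (b->count == 0)
            clock_gettime(CLOCK_MONOTONIC, &b->first);
        b->count++;

        if (b->count >= BATCH_MAX)
            flush(b, "batch full");
        else if (elapsed_us(&b->first) >= BATCH_TIMEOUT_US)
            flush(b, "batching timeout expired");
    }

    int main(void)
    {
        struct batch b = { 0 };

        for (int i = 0; i < 100; i++)
            enqueue(&b);
        if (b.count)
            flush(&b, "end of input");
        return 0;
    }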

/openbmc/linux/Documentation/RCU/RTFP.txt
  36    this paper helped inspire the update-side batching used in the later
  2382  RCU updates, RCU grace-period batching, update overhead,
  2663  RCU updates, RCU grace-period batching, update overhead,

/openbmc/linux/Documentation/RCU/whatisRCU.rst
  372   implementations of the RCU infrastructure make heavy use of batching in

/openbmc/linux/Documentation/RCU/Design/Requirements/Requirements.rst
  1217  synchronize_rcu() are required to use batching optimizations so that
  1560  requirement is another factor driving batching of grace periods, but it
  1577  complete more quickly, but at the cost of restricting RCU's batching
  2280  must *decrease* the per-operation overhead, witness the batching

/openbmc/linux/Documentation/admin-guide/kernel-parameters.txt
  5390  callback batching for call_rcu_tasks().
  5392  of zero will disable batching. Batching is
  5397  Rude asynchronous callback batching for
  5400  disable batching. Batching is always disabled
  5405  Trace asynchronous callback batching for
  5408  disable batching. Batching is always disabled

/openbmc/linux/drivers/scsi/aic7xxx/aic79xx.seq
  365   * our batching and round-robin selection scheme

/openbmc/linux/Documentation/filesystems/xfs-delayed-logging-design.rst
  357   in memory - batching them, if you like - to minimise the impact of the log IO on