/* SPDX-License-Identifier: GPL-2.0
 *
 * IO cost model based controller.
 *
 * Copyright (C) 2019 Tejun Heo <tj@kernel.org>
 * Copyright (C) 2019 Andy Newell <newella@fb.com>
 * Copyright (C) 2019 Facebook
 *
 * One challenge of controlling IO resources is the lack of a trivially
 * observable cost metric. This is distinguished from CPU and memory where
 * wallclock time and the number of bytes can serve as accurate enough
 * approximations.
 *
 * Bandwidth and iops are the most commonly used metrics for IO devices but
 * depending on the type and specifics of the device, different IO patterns
 * easily lead to multiple orders of magnitude variations rendering them
 * useless for the purpose of IO capacity distribution. While on-device
 * time, with a lot of crutches, could serve as a useful approximation for
 * non-queued rotational devices, this is no longer viable with modern
 * devices, even the rotational ones.
 *
 * While there is no cost metric we can trivially observe, it isn't a
 * complete mystery. For example, on a rotational device, seek cost
 * dominates while a contiguous transfer contributes a smaller amount
 * proportional to the size. If we can characterize at least the relative
 * costs of these different types of IOs, it should be possible to
 * implement a reasonable work-conserving proportional IO resource
 * distribution.
 *
 * 1. IO Cost Model
 *
 * The IO cost model estimates the cost of an IO given its basic parameters
 * and history (e.g. the end sector of the last IO). The cost is measured
 * in device time. If a given IO is estimated to cost 10ms, the device
 * should be able to process ~100 of those IOs in a second.
 *
 * Currently, there's only one builtin cost model - linear. Each IO is
 * classified as sequential or random and given a base cost accordingly.
 * On top of that, a size cost proportional to the length of the IO is
 * added. While simple, this model captures the operational
 * characteristics of a wide variety of devices well enough. Default
 * parameters for several different classes of devices are provided and the
 * parameters can be configured from userspace via
 * /sys/fs/cgroup/io.cost.model.
 *
 * If needed, tools/cgroup/iocost_coef_gen.py can be used to generate
 * device-specific coefficients.
 *
 * 2. Control Strategy
 *
 * The device virtual time (vtime) is used as the primary control metric.
 * The control strategy is composed of the following three parts.
 *
 * 2-1. Vtime Distribution
 *
 * When a cgroup becomes active in terms of IOs, its hierarchical share is
 * calculated. Please consider the following hierarchy where the numbers
 * inside parentheses denote the configured weights.
 *
 *          root
 *        /       \
 *     A (w:100)  B (w:300)
 *     /       \
 *  A0 (w:100)  A1 (w:100)
 *
 * If B is idle and only A0 and A1 are actively issuing IOs, as the two are
 * of equal weight, each gets 50% share. If then B starts issuing IOs, B
 * gets 300/(100+300) or 75% share, and A0 and A1 equally split the rest,
 * 12.5% each. The distribution mechanism only cares about these flattened
 * shares. They're called hweights (hierarchical weights) and always add
 * up to 1 (WEIGHT_ONE).
 *
 * A given cgroup's vtime runs slower in inverse proportion to its hweight.
 * For example, with 12.5% weight, A0's time runs 8 times slower (100/12.5)
 * against the device vtime - an IO which takes 10ms on the underlying
 * device is considered to take 80ms on A0.
 *
 * This constitutes the basis of IO capacity distribution. Each cgroup's
 * vtime is running at a rate determined by its hweight. A cgroup tracks
 * the vtime consumed by past IOs and can issue a new IO if doing so
 * wouldn't outrun the current device vtime. Otherwise, the IO is
 * suspended until the vtime has progressed enough to cover it.
 *
 * 2-2. Vrate Adjustment
 *
 * It's unrealistic to expect the cost model to be perfect. There are too
 * many devices and even on the same device the overall performance
 * fluctuates depending on numerous factors such as IO mixture and device
 * internal garbage collection. The controller needs to adapt dynamically.
 *
 * This is achieved by adjusting the overall IO rate according to how busy
 * the device is. If the device becomes overloaded, we're sending down too
 * many IOs and should generally slow down. If there are waiting issuers
 * but the device isn't saturated, we're issuing too few and should
 * generally speed up.
 *
 * To slow down, we lower the vrate - the rate at which the device vtime
 * passes compared to the wall clock. For example, if the vtime is running
 * at the vrate of 75%, all cgroups added up would only be able to issue
 * 750ms worth of IOs per second, and vice-versa for speeding up.
 *
 * Device busyness is determined using two criteria - rq wait and
 * completion latencies.
 *
 * When a device gets saturated, the on-device and then the request queues
 * fill up and a bio which is ready to be issued has to wait for a request
 * to become available. When this delay becomes noticeable, it's a clear
 * indication that the device is saturated and we lower the vrate. This
 * saturation signal is fairly conservative as it only triggers when both
 * hardware and software queues are filled up, and is used as the default
 * busy signal.
 *
 * As devices can have deep queues and be unfair in how the queued commands
 * are executed, solely depending on rq wait may not result in satisfactory
 * control quality. For a better control quality, completion latency QoS
 * parameters can be configured so that the device is considered saturated
 * if the N'th percentile completion latency rises above the set point.
 *
 * The completion latency requirements are a function of both the
 * underlying device characteristics and the desired IO latency quality of
 * service. There is an inherent trade-off - the tighter the latency QoS,
 * the higher the bandwidth loss. Latency QoS is disabled by default
 * and can be set through /sys/fs/cgroup/io.cost.qos.
 *
 * 2-3. Work Conservation
 *
 * Imagine two cgroups A and B with equal weights. A is issuing a small IO
 * periodically while B is sending out enough parallel IOs to saturate the
 * device on its own. Let's say A's usage amounts to 100ms worth of IO
 * cost per second, i.e., 10% of the device capacity. The naive
 * distribution of half and half would lead to 60% utilization of the
 * device, a significant reduction in the total amount of work done
 * compared to free-for-all competition. This is too high a cost to pay
 * for IO control.
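 *
 * To spell out the arithmetic of that example: under the naive
 * half-and-half split, A is capped at 50% but only uses 10% while B is
 * capped at its 50% share, so total utilization is 10% + 50% = 60%,
 * versus ~100% when B can soak up the capacity that A leaves unused.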
 *
 * To conserve the total amount of work done, we keep track of how much
 * each active cgroup is actually using and yield part of its weight if
 * there are other cgroups which can make use of it. In the above case,
 * A's weight will be lowered so that it hovers above the actual usage and
 * B would be able to use the rest.
 *
 * As we don't want to penalize a cgroup for donating its weight, the
 * surplus weight adjustment factors in a margin and has an immediate
 * snapback mechanism in case the cgroup needs more IO vtime for itself.
 *
 * Note that adjusting down surplus weights has the same effects as
 * accelerating vtime for other cgroups and work conservation can also be
 * implemented by adjusting vrate dynamically. However, squaring away who
 * can donate how much and who should take it back requires hweight
 * propagation anyway, which makes it easier to implement and understand
 * as a separate mechanism.
 *
 * 3. Monitoring
 *
 * Instead of debugfs or other clumsy monitoring mechanisms, this
 * controller uses a drgn based monitoring script -
 * tools/cgroup/iocost_monitor.py. For details on drgn, please see
 * https://github.com/osandov/drgn. The output looks like the following.
 *
 *  sdb RUN   per=300ms cur_per=234.218:v203.695 busy= +1 vrate= 62.12%
 *                 active      weight      hweight% inflt% dbt  delay usages%
 *  test/a              *    50/   50  33.33/ 33.33  27.65   2  0*041 033:033:033
 *  test/b              *   100/  100  66.67/ 66.67  17.56   0  0*000 066:079:077
 *
 *  - per     : Timer period
 *  - cur_per : Internal wall and device vtime clock
 *  - vrate   : Device virtual time rate against wall clock
 *  - weight  : Surplus-adjusted and configured weights
 *  - hweight : Surplus-adjusted and configured hierarchical weights
 *  - inflt   : The percentage of in-flight IO cost at the end of last period
 *  - delay   : Deferred issuer delay induction level and duration
 *  - usages  : Usage history
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/time64.h>
#include <linux/parser.h>
#include <linux/sched/signal.h>
#include <linux/blk-cgroup.h>
#include <asm/local.h>
#include <asm/local64.h>
#include "blk-rq-qos.h"
#include "blk-stat.h"
#include "blk-wbt.h"

#ifdef CONFIG_TRACEPOINTS

/* copied from TRACE_CGROUP_PATH, see cgroup-internal.h */
#define TRACE_IOCG_PATH_LEN 1024
static DEFINE_SPINLOCK(trace_iocg_path_lock);
static char trace_iocg_path[TRACE_IOCG_PATH_LEN];

#define TRACE_IOCG_PATH(type, iocg, ...)					\
	do {									\
		unsigned long flags;						\
		if (trace_iocost_##type##_enabled()) {				\
			spin_lock_irqsave(&trace_iocg_path_lock, flags);	\
			cgroup_path(iocg_to_blkg(iocg)->blkcg->css.cgroup,	\
				    trace_iocg_path, TRACE_IOCG_PATH_LEN);	\
			trace_iocost_##type(iocg, trace_iocg_path,		\
					    ##__VA_ARGS__);			\
			spin_unlock_irqrestore(&trace_iocg_path_lock, flags);	\
		}								\
	} while (0)

#else	/* CONFIG_TRACEPOINTS */
#define TRACE_IOCG_PATH(type, iocg, ...)	do { } while (0)
#endif	/* CONFIG_TRACEPOINTS */

enum {
	MILLION			= 1000000,

	/* timer period is calculated from latency requirements, bound it */
	MIN_PERIOD		= USEC_PER_MSEC,
	MAX_PERIOD		= USEC_PER_SEC,

	/*
	 * iocg->vtime is targeted at 50% behind the device vtime, which
	 * serves as its IO credit buffer.
	 * Surplus weight adjustment is immediately canceled if the vtime
	 * margin runs below 10%.
	 */
	MARGIN_MIN_PCT		= 10,
	MARGIN_LOW_PCT		= 20,
	MARGIN_TARGET_PCT	= 50,

	INUSE_ADJ_STEP_PCT	= 25,

	/* Have some play in timer operations */
	TIMER_SLACK_PCT		= 1,

	/* 1/64k is granular enough and can easily be handled w/ u32 */
	WEIGHT_ONE		= 1 << 16,

	/*
	 * As vtime is used to calculate the cost of each IO, it needs to
	 * be fairly high precision. For example, it should be able to
	 * represent the cost of a single page worth of discard with
	 * sufficient accuracy. At the same time, it should be able to
	 * represent reasonably long enough durations to be useful and
	 * convenient during operation.
	 *
	 * 1s worth of vtime is 2^37. This gives us both sub-nanosecond
	 * granularity and days of wrap-around time even at extreme vrates.
	 */
	VTIME_PER_SEC_SHIFT	= 37,
	VTIME_PER_SEC		= 1LLU << VTIME_PER_SEC_SHIFT,
	VTIME_PER_USEC		= VTIME_PER_SEC / USEC_PER_SEC,
	VTIME_PER_NSEC		= VTIME_PER_SEC / NSEC_PER_SEC,

	/* bound vrate adjustments within two orders of magnitude */
	VRATE_MIN_PPM		= 10000,	/* 1% */
	VRATE_MAX_PPM		= 100000000,	/* 10000% */

	VRATE_MIN		= VTIME_PER_USEC * VRATE_MIN_PPM / MILLION,
	VRATE_CLAMP_ADJ_PCT	= 4,

	/* if IOs end up waiting for requests, issue less */
	RQ_WAIT_BUSY_PCT	= 5,

	/* unbusy hysteresis */
	UNBUSY_THR_PCT		= 75,

	/*
	 * The effect of delay is indirect and non-linear and a huge amount of
	 * future debt can accumulate abruptly while unthrottled. Linearly
	 * scale up delay as debt is going up and then let it decay
	 * exponentially. This gives us quick ramp ups while delay is
	 * accumulating and long tails which can help reduce the frequency of
	 * debt explosions on unthrottle. The parameters are experimentally
	 * determined.
	 *
	 * The delay mechanism provides adequate protection and behavior in
	 * many cases. However, this is far from ideal and falls short on both
	 * fronts. The debtors are often throttled too harshly, costing a
	 * significant level of fairness and possibly total work, while the
	 * protection against their impacts on the system can be choppy and
	 * unreliable.
	 *
	 * The shortcoming primarily stems from the fact that, unlike for page
	 * cache, the kernel doesn't have a well-defined back-pressure
	 * propagation mechanism and policies for anonymous memory. Fully
	 * addressing this issue will likely require substantial improvements
	 * in the area.
	 */
	MIN_DELAY_THR_PCT	= 500,
	MAX_DELAY_THR_PCT	= 25000,
	MIN_DELAY		= 250,
	MAX_DELAY		= 250 * USEC_PER_MSEC,

	/* halve debts if avg usage over 100ms is under 50% */
	DFGV_USAGE_PCT		= 50,
	DFGV_PERIOD		= 100 * USEC_PER_MSEC,

	/* don't let cmds which take a very long time pin lagging for too long */
	MAX_LAGGING_PERIODS	= 10,

	/* switch iff the conditions are met for longer than this */
	AUTOP_CYCLE_NSEC	= 10LLU * NSEC_PER_SEC,

	/*
	 * Count IO size in 4k pages. The 12bit shift helps keep the
	 * size-proportional components of the cost calculation within a
	 * similar number of digits as the per-IO cost components.
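	 *
	 * As a purely illustrative example: a 64KiB IO spans
	 * 65536 >> IOC_PAGE_SHIFT = 16 pages, so its size-proportional cost
	 * works out to roughly 16 times the per-page coefficient on top of
	 * the per-IO base cost.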
304 */ 305 IOC_PAGE_SHIFT = 12, 306 IOC_PAGE_SIZE = 1 << IOC_PAGE_SHIFT, 307 IOC_SECT_TO_PAGE_SHIFT = IOC_PAGE_SHIFT - SECTOR_SHIFT, 308 309 /* if apart further than 16M, consider randio for linear model */ 310 LCOEF_RANDIO_PAGES = 4096, 311 }; 312 313 enum ioc_running { 314 IOC_IDLE, 315 IOC_RUNNING, 316 IOC_STOP, 317 }; 318 319 /* io.cost.qos controls including per-dev enable of the whole controller */ 320 enum { 321 QOS_ENABLE, 322 QOS_CTRL, 323 NR_QOS_CTRL_PARAMS, 324 }; 325 326 /* io.cost.qos params */ 327 enum { 328 QOS_RPPM, 329 QOS_RLAT, 330 QOS_WPPM, 331 QOS_WLAT, 332 QOS_MIN, 333 QOS_MAX, 334 NR_QOS_PARAMS, 335 }; 336 337 /* io.cost.model controls */ 338 enum { 339 COST_CTRL, 340 COST_MODEL, 341 NR_COST_CTRL_PARAMS, 342 }; 343 344 /* builtin linear cost model coefficients */ 345 enum { 346 I_LCOEF_RBPS, 347 I_LCOEF_RSEQIOPS, 348 I_LCOEF_RRANDIOPS, 349 I_LCOEF_WBPS, 350 I_LCOEF_WSEQIOPS, 351 I_LCOEF_WRANDIOPS, 352 NR_I_LCOEFS, 353 }; 354 355 enum { 356 LCOEF_RPAGE, 357 LCOEF_RSEQIO, 358 LCOEF_RRANDIO, 359 LCOEF_WPAGE, 360 LCOEF_WSEQIO, 361 LCOEF_WRANDIO, 362 NR_LCOEFS, 363 }; 364 365 enum { 366 AUTOP_INVALID, 367 AUTOP_HDD, 368 AUTOP_SSD_QD1, 369 AUTOP_SSD_DFL, 370 AUTOP_SSD_FAST, 371 }; 372 373 struct ioc_params { 374 u32 qos[NR_QOS_PARAMS]; 375 u64 i_lcoefs[NR_I_LCOEFS]; 376 u64 lcoefs[NR_LCOEFS]; 377 u32 too_fast_vrate_pct; 378 u32 too_slow_vrate_pct; 379 }; 380 381 struct ioc_margins { 382 s64 min; 383 s64 low; 384 s64 target; 385 }; 386 387 struct ioc_missed { 388 local_t nr_met; 389 local_t nr_missed; 390 u32 last_met; 391 u32 last_missed; 392 }; 393 394 struct ioc_pcpu_stat { 395 struct ioc_missed missed[2]; 396 397 local64_t rq_wait_ns; 398 u64 last_rq_wait_ns; 399 }; 400 401 /* per device */ 402 struct ioc { 403 struct rq_qos rqos; 404 405 bool enabled; 406 407 struct ioc_params params; 408 struct ioc_margins margins; 409 u32 period_us; 410 u32 timer_slack_ns; 411 u64 vrate_min; 412 u64 vrate_max; 413 414 spinlock_t lock; 415 struct timer_list timer; 416 struct list_head active_iocgs; /* active cgroups */ 417 struct ioc_pcpu_stat __percpu *pcpu_stat; 418 419 enum ioc_running running; 420 atomic64_t vtime_rate; 421 u64 vtime_base_rate; 422 s64 vtime_err; 423 424 seqcount_spinlock_t period_seqcount; 425 u64 period_at; /* wallclock starttime */ 426 u64 period_at_vtime; /* vtime starttime */ 427 428 atomic64_t cur_period; /* inc'd each period */ 429 int busy_level; /* saturation history */ 430 431 bool weights_updated; 432 atomic_t hweight_gen; /* for lazy hweights */ 433 434 /* debt forgivness */ 435 u64 dfgv_period_at; 436 u64 dfgv_period_rem; 437 u64 dfgv_usage_us_sum; 438 439 u64 autop_too_fast_at; 440 u64 autop_too_slow_at; 441 int autop_idx; 442 bool user_qos_params:1; 443 bool user_cost_model:1; 444 }; 445 446 struct iocg_pcpu_stat { 447 local64_t abs_vusage; 448 }; 449 450 struct iocg_stat { 451 u64 usage_us; 452 u64 wait_us; 453 u64 indebt_us; 454 u64 indelay_us; 455 }; 456 457 /* per device-cgroup pair */ 458 struct ioc_gq { 459 struct blkg_policy_data pd; 460 struct ioc *ioc; 461 462 /* 463 * A iocg can get its weight from two sources - an explicit 464 * per-device-cgroup configuration or the default weight of the 465 * cgroup. `cfg_weight` is the explicit per-device-cgroup 466 * configuration. `weight` is the effective considering both 467 * sources. 468 * 469 * When an idle cgroup becomes active its `active` goes from 0 to 470 * `weight`. `inuse` is the surplus adjusted active weight. 
471 * `active` and `inuse` are used to calculate `hweight_active` and 472 * `hweight_inuse`. 473 * 474 * `last_inuse` remembers `inuse` while an iocg is idle to persist 475 * surplus adjustments. 476 * 477 * `inuse` may be adjusted dynamically during period. `saved_*` are used 478 * to determine and track adjustments. 479 */ 480 u32 cfg_weight; 481 u32 weight; 482 u32 active; 483 u32 inuse; 484 485 u32 last_inuse; 486 s64 saved_margin; 487 488 sector_t cursor; /* to detect randio */ 489 490 /* 491 * `vtime` is this iocg's vtime cursor which progresses as IOs are 492 * issued. If lagging behind device vtime, the delta represents 493 * the currently available IO budget. If running ahead, the 494 * overage. 495 * 496 * `vtime_done` is the same but progressed on completion rather 497 * than issue. The delta behind `vtime` represents the cost of 498 * currently in-flight IOs. 499 */ 500 atomic64_t vtime; 501 atomic64_t done_vtime; 502 u64 abs_vdebt; 503 504 /* current delay in effect and when it started */ 505 u64 delay; 506 u64 delay_at; 507 508 /* 509 * The period this iocg was last active in. Used for deactivation 510 * and invalidating `vtime`. 511 */ 512 atomic64_t active_period; 513 struct list_head active_list; 514 515 /* see __propagate_weights() and current_hweight() for details */ 516 u64 child_active_sum; 517 u64 child_inuse_sum; 518 u64 child_adjusted_sum; 519 int hweight_gen; 520 u32 hweight_active; 521 u32 hweight_inuse; 522 u32 hweight_donating; 523 u32 hweight_after_donation; 524 525 struct list_head walk_list; 526 struct list_head surplus_list; 527 528 struct wait_queue_head waitq; 529 struct hrtimer waitq_timer; 530 531 /* timestamp at the latest activation */ 532 u64 activated_at; 533 534 /* statistics */ 535 struct iocg_pcpu_stat __percpu *pcpu_stat; 536 struct iocg_stat local_stat; 537 struct iocg_stat desc_stat; 538 struct iocg_stat last_stat; 539 u64 last_stat_abs_vusage; 540 u64 usage_delta_us; 541 u64 wait_since; 542 u64 indebt_since; 543 u64 indelay_since; 544 545 /* this iocg's depth in the hierarchy and ancestors including self */ 546 int level; 547 struct ioc_gq *ancestors[]; 548 }; 549 550 /* per cgroup */ 551 struct ioc_cgrp { 552 struct blkcg_policy_data cpd; 553 unsigned int dfl_weight; 554 }; 555 556 struct ioc_now { 557 u64 now_ns; 558 u64 now; 559 u64 vnow; 560 u64 vrate; 561 }; 562 563 struct iocg_wait { 564 struct wait_queue_entry wait; 565 struct bio *bio; 566 u64 abs_cost; 567 bool committed; 568 }; 569 570 struct iocg_wake_ctx { 571 struct ioc_gq *iocg; 572 u32 hw_inuse; 573 s64 vbudget; 574 }; 575 576 static const struct ioc_params autop[] = { 577 [AUTOP_HDD] = { 578 .qos = { 579 [QOS_RLAT] = 250000, /* 250ms */ 580 [QOS_WLAT] = 250000, 581 [QOS_MIN] = VRATE_MIN_PPM, 582 [QOS_MAX] = VRATE_MAX_PPM, 583 }, 584 .i_lcoefs = { 585 [I_LCOEF_RBPS] = 174019176, 586 [I_LCOEF_RSEQIOPS] = 41708, 587 [I_LCOEF_RRANDIOPS] = 370, 588 [I_LCOEF_WBPS] = 178075866, 589 [I_LCOEF_WSEQIOPS] = 42705, 590 [I_LCOEF_WRANDIOPS] = 378, 591 }, 592 }, 593 [AUTOP_SSD_QD1] = { 594 .qos = { 595 [QOS_RLAT] = 25000, /* 25ms */ 596 [QOS_WLAT] = 25000, 597 [QOS_MIN] = VRATE_MIN_PPM, 598 [QOS_MAX] = VRATE_MAX_PPM, 599 }, 600 .i_lcoefs = { 601 [I_LCOEF_RBPS] = 245855193, 602 [I_LCOEF_RSEQIOPS] = 61575, 603 [I_LCOEF_RRANDIOPS] = 6946, 604 [I_LCOEF_WBPS] = 141365009, 605 [I_LCOEF_WSEQIOPS] = 33716, 606 [I_LCOEF_WRANDIOPS] = 26796, 607 }, 608 }, 609 [AUTOP_SSD_DFL] = { 610 .qos = { 611 [QOS_RLAT] = 25000, /* 25ms */ 612 [QOS_WLAT] = 25000, 613 [QOS_MIN] = VRATE_MIN_PPM, 614 [QOS_MAX] = 
VRATE_MAX_PPM, 615 }, 616 .i_lcoefs = { 617 [I_LCOEF_RBPS] = 488636629, 618 [I_LCOEF_RSEQIOPS] = 8932, 619 [I_LCOEF_RRANDIOPS] = 8518, 620 [I_LCOEF_WBPS] = 427891549, 621 [I_LCOEF_WSEQIOPS] = 28755, 622 [I_LCOEF_WRANDIOPS] = 21940, 623 }, 624 .too_fast_vrate_pct = 500, 625 }, 626 [AUTOP_SSD_FAST] = { 627 .qos = { 628 [QOS_RLAT] = 5000, /* 5ms */ 629 [QOS_WLAT] = 5000, 630 [QOS_MIN] = VRATE_MIN_PPM, 631 [QOS_MAX] = VRATE_MAX_PPM, 632 }, 633 .i_lcoefs = { 634 [I_LCOEF_RBPS] = 3102524156LLU, 635 [I_LCOEF_RSEQIOPS] = 724816, 636 [I_LCOEF_RRANDIOPS] = 778122, 637 [I_LCOEF_WBPS] = 1742780862LLU, 638 [I_LCOEF_WSEQIOPS] = 425702, 639 [I_LCOEF_WRANDIOPS] = 443193, 640 }, 641 .too_slow_vrate_pct = 10, 642 }, 643 }; 644 645 /* 646 * vrate adjust percentages indexed by ioc->busy_level. We adjust up on 647 * vtime credit shortage and down on device saturation. 648 */ 649 static u32 vrate_adj_pct[] = 650 { 0, 0, 0, 0, 651 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 652 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 653 4, 4, 4, 4, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8, 16 }; 654 655 static struct blkcg_policy blkcg_policy_iocost; 656 657 /* accessors and helpers */ 658 static struct ioc *rqos_to_ioc(struct rq_qos *rqos) 659 { 660 return container_of(rqos, struct ioc, rqos); 661 } 662 663 static struct ioc *q_to_ioc(struct request_queue *q) 664 { 665 return rqos_to_ioc(rq_qos_id(q, RQ_QOS_COST)); 666 } 667 668 static const char *q_name(struct request_queue *q) 669 { 670 if (blk_queue_registered(q)) 671 return kobject_name(q->kobj.parent); 672 else 673 return "<unknown>"; 674 } 675 676 static const char __maybe_unused *ioc_name(struct ioc *ioc) 677 { 678 return q_name(ioc->rqos.q); 679 } 680 681 static struct ioc_gq *pd_to_iocg(struct blkg_policy_data *pd) 682 { 683 return pd ? container_of(pd, struct ioc_gq, pd) : NULL; 684 } 685 686 static struct ioc_gq *blkg_to_iocg(struct blkcg_gq *blkg) 687 { 688 return pd_to_iocg(blkg_to_pd(blkg, &blkcg_policy_iocost)); 689 } 690 691 static struct blkcg_gq *iocg_to_blkg(struct ioc_gq *iocg) 692 { 693 return pd_to_blkg(&iocg->pd); 694 } 695 696 static struct ioc_cgrp *blkcg_to_iocc(struct blkcg *blkcg) 697 { 698 return container_of(blkcg_to_cpd(blkcg, &blkcg_policy_iocost), 699 struct ioc_cgrp, cpd); 700 } 701 702 /* 703 * Scale @abs_cost to the inverse of @hw_inuse. The lower the hierarchical 704 * weight, the more expensive each IO. Must round up. 705 */ 706 static u64 abs_cost_to_cost(u64 abs_cost, u32 hw_inuse) 707 { 708 return DIV64_U64_ROUND_UP(abs_cost * WEIGHT_ONE, hw_inuse); 709 } 710 711 /* 712 * The inverse of abs_cost_to_cost(). Must round up. 
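 *
 * For example (illustrative numbers): with hw_inuse at WEIGHT_ONE / 2, a
 * 50% hierarchical share, abs_cost_to_cost() turns 10ms worth of absolute
 * cost into 20ms of vtime, and this function maps that 20ms of vtime back
 * to the 10ms of absolute cost.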
713 */ 714 static u64 cost_to_abs_cost(u64 cost, u32 hw_inuse) 715 { 716 return DIV64_U64_ROUND_UP(cost * hw_inuse, WEIGHT_ONE); 717 } 718 719 static void iocg_commit_bio(struct ioc_gq *iocg, struct bio *bio, 720 u64 abs_cost, u64 cost) 721 { 722 struct iocg_pcpu_stat *gcs; 723 724 bio->bi_iocost_cost = cost; 725 atomic64_add(cost, &iocg->vtime); 726 727 gcs = get_cpu_ptr(iocg->pcpu_stat); 728 local64_add(abs_cost, &gcs->abs_vusage); 729 put_cpu_ptr(gcs); 730 } 731 732 static void iocg_lock(struct ioc_gq *iocg, bool lock_ioc, unsigned long *flags) 733 { 734 if (lock_ioc) { 735 spin_lock_irqsave(&iocg->ioc->lock, *flags); 736 spin_lock(&iocg->waitq.lock); 737 } else { 738 spin_lock_irqsave(&iocg->waitq.lock, *flags); 739 } 740 } 741 742 static void iocg_unlock(struct ioc_gq *iocg, bool unlock_ioc, unsigned long *flags) 743 { 744 if (unlock_ioc) { 745 spin_unlock(&iocg->waitq.lock); 746 spin_unlock_irqrestore(&iocg->ioc->lock, *flags); 747 } else { 748 spin_unlock_irqrestore(&iocg->waitq.lock, *flags); 749 } 750 } 751 752 #define CREATE_TRACE_POINTS 753 #include <trace/events/iocost.h> 754 755 static void ioc_refresh_margins(struct ioc *ioc) 756 { 757 struct ioc_margins *margins = &ioc->margins; 758 u32 period_us = ioc->period_us; 759 u64 vrate = ioc->vtime_base_rate; 760 761 margins->min = (period_us * MARGIN_MIN_PCT / 100) * vrate; 762 margins->low = (period_us * MARGIN_LOW_PCT / 100) * vrate; 763 margins->target = (period_us * MARGIN_TARGET_PCT / 100) * vrate; 764 } 765 766 /* latency Qos params changed, update period_us and all the dependent params */ 767 static void ioc_refresh_period_us(struct ioc *ioc) 768 { 769 u32 ppm, lat, multi, period_us; 770 771 lockdep_assert_held(&ioc->lock); 772 773 /* pick the higher latency target */ 774 if (ioc->params.qos[QOS_RLAT] >= ioc->params.qos[QOS_WLAT]) { 775 ppm = ioc->params.qos[QOS_RPPM]; 776 lat = ioc->params.qos[QOS_RLAT]; 777 } else { 778 ppm = ioc->params.qos[QOS_WPPM]; 779 lat = ioc->params.qos[QOS_WLAT]; 780 } 781 782 /* 783 * We want the period to be long enough to contain a healthy number 784 * of IOs while short enough for granular control. Define it as a 785 * multiple of the latency target. Ideally, the multiplier should 786 * be scaled according to the percentile so that it would nominally 787 * contain a certain number of requests. Let's be simpler and 788 * scale it linearly so that it's 2x >= pct(90) and 10x at pct(50). 789 */ 790 if (ppm) 791 multi = max_t(u32, (MILLION - ppm) / 50000, 2); 792 else 793 multi = 2; 794 period_us = multi * lat; 795 period_us = clamp_t(u32, period_us, MIN_PERIOD, MAX_PERIOD); 796 797 /* calculate dependent params */ 798 ioc->period_us = period_us; 799 ioc->timer_slack_ns = div64_u64( 800 (u64)period_us * NSEC_PER_USEC * TIMER_SLACK_PCT, 801 100); 802 ioc_refresh_margins(ioc); 803 } 804 805 static int ioc_autop_idx(struct ioc *ioc) 806 { 807 int idx = ioc->autop_idx; 808 const struct ioc_params *p = &autop[idx]; 809 u32 vrate_pct; 810 u64 now_ns; 811 812 /* rotational? 
*/ 813 if (!blk_queue_nonrot(ioc->rqos.q)) 814 return AUTOP_HDD; 815 816 /* handle SATA SSDs w/ broken NCQ */ 817 if (blk_queue_depth(ioc->rqos.q) == 1) 818 return AUTOP_SSD_QD1; 819 820 /* use one of the normal ssd sets */ 821 if (idx < AUTOP_SSD_DFL) 822 return AUTOP_SSD_DFL; 823 824 /* if user is overriding anything, maintain what was there */ 825 if (ioc->user_qos_params || ioc->user_cost_model) 826 return idx; 827 828 /* step up/down based on the vrate */ 829 vrate_pct = div64_u64(ioc->vtime_base_rate * 100, VTIME_PER_USEC); 830 now_ns = ktime_get_ns(); 831 832 if (p->too_fast_vrate_pct && p->too_fast_vrate_pct <= vrate_pct) { 833 if (!ioc->autop_too_fast_at) 834 ioc->autop_too_fast_at = now_ns; 835 if (now_ns - ioc->autop_too_fast_at >= AUTOP_CYCLE_NSEC) 836 return idx + 1; 837 } else { 838 ioc->autop_too_fast_at = 0; 839 } 840 841 if (p->too_slow_vrate_pct && p->too_slow_vrate_pct >= vrate_pct) { 842 if (!ioc->autop_too_slow_at) 843 ioc->autop_too_slow_at = now_ns; 844 if (now_ns - ioc->autop_too_slow_at >= AUTOP_CYCLE_NSEC) 845 return idx - 1; 846 } else { 847 ioc->autop_too_slow_at = 0; 848 } 849 850 return idx; 851 } 852 853 /* 854 * Take the followings as input 855 * 856 * @bps maximum sequential throughput 857 * @seqiops maximum sequential 4k iops 858 * @randiops maximum random 4k iops 859 * 860 * and calculate the linear model cost coefficients. 861 * 862 * *@page per-page cost 1s / (@bps / 4096) 863 * *@seqio base cost of a seq IO max((1s / @seqiops) - *@page, 0) 864 * @randiops base cost of a rand IO max((1s / @randiops) - *@page, 0) 865 */ 866 static void calc_lcoefs(u64 bps, u64 seqiops, u64 randiops, 867 u64 *page, u64 *seqio, u64 *randio) 868 { 869 u64 v; 870 871 *page = *seqio = *randio = 0; 872 873 if (bps) 874 *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC, 875 DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE)); 876 877 if (seqiops) { 878 v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, seqiops); 879 if (v > *page) 880 *seqio = v - *page; 881 } 882 883 if (randiops) { 884 v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, randiops); 885 if (v > *page) 886 *randio = v - *page; 887 } 888 } 889 890 static void ioc_refresh_lcoefs(struct ioc *ioc) 891 { 892 u64 *u = ioc->params.i_lcoefs; 893 u64 *c = ioc->params.lcoefs; 894 895 calc_lcoefs(u[I_LCOEF_RBPS], u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS], 896 &c[LCOEF_RPAGE], &c[LCOEF_RSEQIO], &c[LCOEF_RRANDIO]); 897 calc_lcoefs(u[I_LCOEF_WBPS], u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS], 898 &c[LCOEF_WPAGE], &c[LCOEF_WSEQIO], &c[LCOEF_WRANDIO]); 899 } 900 901 static bool ioc_refresh_params(struct ioc *ioc, bool force) 902 { 903 const struct ioc_params *p; 904 int idx; 905 906 lockdep_assert_held(&ioc->lock); 907 908 idx = ioc_autop_idx(ioc); 909 p = &autop[idx]; 910 911 if (idx == ioc->autop_idx && !force) 912 return false; 913 914 if (idx != ioc->autop_idx) 915 atomic64_set(&ioc->vtime_rate, VTIME_PER_USEC); 916 917 ioc->autop_idx = idx; 918 ioc->autop_too_fast_at = 0; 919 ioc->autop_too_slow_at = 0; 920 921 if (!ioc->user_qos_params) 922 memcpy(ioc->params.qos, p->qos, sizeof(p->qos)); 923 if (!ioc->user_cost_model) 924 memcpy(ioc->params.i_lcoefs, p->i_lcoefs, sizeof(p->i_lcoefs)); 925 926 ioc_refresh_period_us(ioc); 927 ioc_refresh_lcoefs(ioc); 928 929 ioc->vrate_min = DIV64_U64_ROUND_UP((u64)ioc->params.qos[QOS_MIN] * 930 VTIME_PER_USEC, MILLION); 931 ioc->vrate_max = div64_u64((u64)ioc->params.qos[QOS_MAX] * 932 VTIME_PER_USEC, MILLION); 933 934 return true; 935 } 936 937 /* 938 * When an iocg accumulates too much vtime or gets deactivated, we throw away 939 * some 
vtime, which lowers the overall device utilization. As the exact amount 940 * which is being thrown away is known, we can compensate by accelerating the 941 * vrate accordingly so that the extra vtime generated in the current period 942 * matches what got lost. 943 */ 944 static void ioc_refresh_vrate(struct ioc *ioc, struct ioc_now *now) 945 { 946 s64 pleft = ioc->period_at + ioc->period_us - now->now; 947 s64 vperiod = ioc->period_us * ioc->vtime_base_rate; 948 s64 vcomp, vcomp_min, vcomp_max; 949 950 lockdep_assert_held(&ioc->lock); 951 952 /* we need some time left in this period */ 953 if (pleft <= 0) 954 goto done; 955 956 /* 957 * Calculate how much vrate should be adjusted to offset the error. 958 * Limit the amount of adjustment and deduct the adjusted amount from 959 * the error. 960 */ 961 vcomp = -div64_s64(ioc->vtime_err, pleft); 962 vcomp_min = -(ioc->vtime_base_rate >> 1); 963 vcomp_max = ioc->vtime_base_rate; 964 vcomp = clamp(vcomp, vcomp_min, vcomp_max); 965 966 ioc->vtime_err += vcomp * pleft; 967 968 atomic64_set(&ioc->vtime_rate, ioc->vtime_base_rate + vcomp); 969 done: 970 /* bound how much error can accumulate */ 971 ioc->vtime_err = clamp(ioc->vtime_err, -vperiod, vperiod); 972 } 973 974 static void ioc_adjust_base_vrate(struct ioc *ioc, u32 rq_wait_pct, 975 int nr_lagging, int nr_shortages, 976 int prev_busy_level, u32 *missed_ppm) 977 { 978 u64 vrate = ioc->vtime_base_rate; 979 u64 vrate_min = ioc->vrate_min, vrate_max = ioc->vrate_max; 980 981 if (!ioc->busy_level || (ioc->busy_level < 0 && nr_lagging)) { 982 if (ioc->busy_level != prev_busy_level || nr_lagging) 983 trace_iocost_ioc_vrate_adj(ioc, atomic64_read(&ioc->vtime_rate), 984 missed_ppm, rq_wait_pct, 985 nr_lagging, nr_shortages); 986 987 return; 988 } 989 990 /* rq_wait signal is always reliable, ignore user vrate_min */ 991 if (rq_wait_pct > RQ_WAIT_BUSY_PCT) 992 vrate_min = VRATE_MIN; 993 994 /* 995 * If vrate is out of bounds, apply clamp gradually as the 996 * bounds can change abruptly. Otherwise, apply busy_level 997 * based adjustment. 998 */ 999 if (vrate < vrate_min) { 1000 vrate = div64_u64(vrate * (100 + VRATE_CLAMP_ADJ_PCT), 100); 1001 vrate = min(vrate, vrate_min); 1002 } else if (vrate > vrate_max) { 1003 vrate = div64_u64(vrate * (100 - VRATE_CLAMP_ADJ_PCT), 100); 1004 vrate = max(vrate, vrate_max); 1005 } else { 1006 int idx = min_t(int, abs(ioc->busy_level), 1007 ARRAY_SIZE(vrate_adj_pct) - 1); 1008 u32 adj_pct = vrate_adj_pct[idx]; 1009 1010 if (ioc->busy_level > 0) 1011 adj_pct = 100 - adj_pct; 1012 else 1013 adj_pct = 100 + adj_pct; 1014 1015 vrate = clamp(DIV64_U64_ROUND_UP(vrate * adj_pct, 100), 1016 vrate_min, vrate_max); 1017 } 1018 1019 trace_iocost_ioc_vrate_adj(ioc, vrate, missed_ppm, rq_wait_pct, 1020 nr_lagging, nr_shortages); 1021 1022 ioc->vtime_base_rate = vrate; 1023 ioc_refresh_margins(ioc); 1024 } 1025 1026 /* take a snapshot of the current [v]time and vrate */ 1027 static void ioc_now(struct ioc *ioc, struct ioc_now *now) 1028 { 1029 unsigned seq; 1030 1031 now->now_ns = ktime_get(); 1032 now->now = ktime_to_us(now->now_ns); 1033 now->vrate = atomic64_read(&ioc->vtime_rate); 1034 1035 /* 1036 * The current vtime is 1037 * 1038 * vtime at period start + (wallclock time since the start) * vrate 1039 * 1040 * As a consistent snapshot of `period_at_vtime` and `period_at` is 1041 * needed, they're seqcount protected. 
1042 */ 1043 do { 1044 seq = read_seqcount_begin(&ioc->period_seqcount); 1045 now->vnow = ioc->period_at_vtime + 1046 (now->now - ioc->period_at) * now->vrate; 1047 } while (read_seqcount_retry(&ioc->period_seqcount, seq)); 1048 } 1049 1050 static void ioc_start_period(struct ioc *ioc, struct ioc_now *now) 1051 { 1052 WARN_ON_ONCE(ioc->running != IOC_RUNNING); 1053 1054 write_seqcount_begin(&ioc->period_seqcount); 1055 ioc->period_at = now->now; 1056 ioc->period_at_vtime = now->vnow; 1057 write_seqcount_end(&ioc->period_seqcount); 1058 1059 ioc->timer.expires = jiffies + usecs_to_jiffies(ioc->period_us); 1060 add_timer(&ioc->timer); 1061 } 1062 1063 /* 1064 * Update @iocg's `active` and `inuse` to @active and @inuse, update level 1065 * weight sums and propagate upwards accordingly. If @save, the current margin 1066 * is saved to be used as reference for later inuse in-period adjustments. 1067 */ 1068 static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse, 1069 bool save, struct ioc_now *now) 1070 { 1071 struct ioc *ioc = iocg->ioc; 1072 int lvl; 1073 1074 lockdep_assert_held(&ioc->lock); 1075 1076 inuse = clamp_t(u32, inuse, 1, active); 1077 1078 iocg->last_inuse = iocg->inuse; 1079 if (save) 1080 iocg->saved_margin = now->vnow - atomic64_read(&iocg->vtime); 1081 1082 if (active == iocg->active && inuse == iocg->inuse) 1083 return; 1084 1085 for (lvl = iocg->level - 1; lvl >= 0; lvl--) { 1086 struct ioc_gq *parent = iocg->ancestors[lvl]; 1087 struct ioc_gq *child = iocg->ancestors[lvl + 1]; 1088 u32 parent_active = 0, parent_inuse = 0; 1089 1090 /* update the level sums */ 1091 parent->child_active_sum += (s32)(active - child->active); 1092 parent->child_inuse_sum += (s32)(inuse - child->inuse); 1093 /* apply the udpates */ 1094 child->active = active; 1095 child->inuse = inuse; 1096 1097 /* 1098 * The delta between inuse and active sums indicates that 1099 * much of weight is being given away. Parent's inuse 1100 * and active should reflect the ratio. 1101 */ 1102 if (parent->child_active_sum) { 1103 parent_active = parent->weight; 1104 parent_inuse = DIV64_U64_ROUND_UP( 1105 parent_active * parent->child_inuse_sum, 1106 parent->child_active_sum); 1107 } 1108 1109 /* do we need to keep walking up? */ 1110 if (parent_active == parent->active && 1111 parent_inuse == parent->inuse) 1112 break; 1113 1114 active = parent_active; 1115 inuse = parent_inuse; 1116 } 1117 1118 ioc->weights_updated = true; 1119 } 1120 1121 static void commit_weights(struct ioc *ioc) 1122 { 1123 lockdep_assert_held(&ioc->lock); 1124 1125 if (ioc->weights_updated) { 1126 /* paired with rmb in current_hweight(), see there */ 1127 smp_wmb(); 1128 atomic_inc(&ioc->hweight_gen); 1129 ioc->weights_updated = false; 1130 } 1131 } 1132 1133 static void propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse, 1134 bool save, struct ioc_now *now) 1135 { 1136 __propagate_weights(iocg, active, inuse, save, now); 1137 commit_weights(iocg->ioc); 1138 } 1139 1140 static void current_hweight(struct ioc_gq *iocg, u32 *hw_activep, u32 *hw_inusep) 1141 { 1142 struct ioc *ioc = iocg->ioc; 1143 int lvl; 1144 u32 hwa, hwi; 1145 int ioc_gen; 1146 1147 /* hot path - if uptodate, use cached */ 1148 ioc_gen = atomic_read(&ioc->hweight_gen); 1149 if (ioc_gen == iocg->hweight_gen) 1150 goto out; 1151 1152 /* 1153 * Paired with wmb in commit_weights(). If we saw the updated 1154 * hweight_gen, all the weight updates from __propagate_weights() are 1155 * visible too. 
1156 * 1157 * We can race with weight updates during calculation and get it 1158 * wrong. However, hweight_gen would have changed and a future 1159 * reader will recalculate and we're guaranteed to discard the 1160 * wrong result soon. 1161 */ 1162 smp_rmb(); 1163 1164 hwa = hwi = WEIGHT_ONE; 1165 for (lvl = 0; lvl <= iocg->level - 1; lvl++) { 1166 struct ioc_gq *parent = iocg->ancestors[lvl]; 1167 struct ioc_gq *child = iocg->ancestors[lvl + 1]; 1168 u64 active_sum = READ_ONCE(parent->child_active_sum); 1169 u64 inuse_sum = READ_ONCE(parent->child_inuse_sum); 1170 u32 active = READ_ONCE(child->active); 1171 u32 inuse = READ_ONCE(child->inuse); 1172 1173 /* we can race with deactivations and either may read as zero */ 1174 if (!active_sum || !inuse_sum) 1175 continue; 1176 1177 active_sum = max_t(u64, active, active_sum); 1178 hwa = div64_u64((u64)hwa * active, active_sum); 1179 1180 inuse_sum = max_t(u64, inuse, inuse_sum); 1181 hwi = div64_u64((u64)hwi * inuse, inuse_sum); 1182 } 1183 1184 iocg->hweight_active = max_t(u32, hwa, 1); 1185 iocg->hweight_inuse = max_t(u32, hwi, 1); 1186 iocg->hweight_gen = ioc_gen; 1187 out: 1188 if (hw_activep) 1189 *hw_activep = iocg->hweight_active; 1190 if (hw_inusep) 1191 *hw_inusep = iocg->hweight_inuse; 1192 } 1193 1194 /* 1195 * Calculate the hweight_inuse @iocg would get with max @inuse assuming all the 1196 * other weights stay unchanged. 1197 */ 1198 static u32 current_hweight_max(struct ioc_gq *iocg) 1199 { 1200 u32 hwm = WEIGHT_ONE; 1201 u32 inuse = iocg->active; 1202 u64 child_inuse_sum; 1203 int lvl; 1204 1205 lockdep_assert_held(&iocg->ioc->lock); 1206 1207 for (lvl = iocg->level - 1; lvl >= 0; lvl--) { 1208 struct ioc_gq *parent = iocg->ancestors[lvl]; 1209 struct ioc_gq *child = iocg->ancestors[lvl + 1]; 1210 1211 child_inuse_sum = parent->child_inuse_sum + inuse - child->inuse; 1212 hwm = div64_u64((u64)hwm * inuse, child_inuse_sum); 1213 inuse = DIV64_U64_ROUND_UP(parent->active * child_inuse_sum, 1214 parent->child_active_sum); 1215 } 1216 1217 return max_t(u32, hwm, 1); 1218 } 1219 1220 static void weight_updated(struct ioc_gq *iocg, struct ioc_now *now) 1221 { 1222 struct ioc *ioc = iocg->ioc; 1223 struct blkcg_gq *blkg = iocg_to_blkg(iocg); 1224 struct ioc_cgrp *iocc = blkcg_to_iocc(blkg->blkcg); 1225 u32 weight; 1226 1227 lockdep_assert_held(&ioc->lock); 1228 1229 weight = iocg->cfg_weight ?: iocc->dfl_weight; 1230 if (weight != iocg->weight && iocg->active) 1231 propagate_weights(iocg, weight, iocg->inuse, true, now); 1232 iocg->weight = weight; 1233 } 1234 1235 static bool iocg_activate(struct ioc_gq *iocg, struct ioc_now *now) 1236 { 1237 struct ioc *ioc = iocg->ioc; 1238 u64 last_period, cur_period; 1239 u64 vtime, vtarget; 1240 int i; 1241 1242 /* 1243 * If seem to be already active, just update the stamp to tell the 1244 * timer that we're still active. We don't mind occassional races. 
1245 */ 1246 if (!list_empty(&iocg->active_list)) { 1247 ioc_now(ioc, now); 1248 cur_period = atomic64_read(&ioc->cur_period); 1249 if (atomic64_read(&iocg->active_period) != cur_period) 1250 atomic64_set(&iocg->active_period, cur_period); 1251 return true; 1252 } 1253 1254 /* racy check on internal node IOs, treat as root level IOs */ 1255 if (iocg->child_active_sum) 1256 return false; 1257 1258 spin_lock_irq(&ioc->lock); 1259 1260 ioc_now(ioc, now); 1261 1262 /* update period */ 1263 cur_period = atomic64_read(&ioc->cur_period); 1264 last_period = atomic64_read(&iocg->active_period); 1265 atomic64_set(&iocg->active_period, cur_period); 1266 1267 /* already activated or breaking leaf-only constraint? */ 1268 if (!list_empty(&iocg->active_list)) 1269 goto succeed_unlock; 1270 for (i = iocg->level - 1; i > 0; i--) 1271 if (!list_empty(&iocg->ancestors[i]->active_list)) 1272 goto fail_unlock; 1273 1274 if (iocg->child_active_sum) 1275 goto fail_unlock; 1276 1277 /* 1278 * Always start with the target budget. On deactivation, we throw away 1279 * anything above it. 1280 */ 1281 vtarget = now->vnow - ioc->margins.target; 1282 vtime = atomic64_read(&iocg->vtime); 1283 1284 atomic64_add(vtarget - vtime, &iocg->vtime); 1285 atomic64_add(vtarget - vtime, &iocg->done_vtime); 1286 vtime = vtarget; 1287 1288 /* 1289 * Activate, propagate weight and start period timer if not 1290 * running. Reset hweight_gen to avoid accidental match from 1291 * wrapping. 1292 */ 1293 iocg->hweight_gen = atomic_read(&ioc->hweight_gen) - 1; 1294 list_add(&iocg->active_list, &ioc->active_iocgs); 1295 1296 propagate_weights(iocg, iocg->weight, 1297 iocg->last_inuse ?: iocg->weight, true, now); 1298 1299 TRACE_IOCG_PATH(iocg_activate, iocg, now, 1300 last_period, cur_period, vtime); 1301 1302 iocg->activated_at = now->now; 1303 1304 if (ioc->running == IOC_IDLE) { 1305 ioc->running = IOC_RUNNING; 1306 ioc->dfgv_period_at = now->now; 1307 ioc->dfgv_period_rem = 0; 1308 ioc_start_period(ioc, now); 1309 } 1310 1311 succeed_unlock: 1312 spin_unlock_irq(&ioc->lock); 1313 return true; 1314 1315 fail_unlock: 1316 spin_unlock_irq(&ioc->lock); 1317 return false; 1318 } 1319 1320 static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now) 1321 { 1322 struct ioc *ioc = iocg->ioc; 1323 struct blkcg_gq *blkg = iocg_to_blkg(iocg); 1324 u64 tdelta, delay, new_delay; 1325 s64 vover, vover_pct; 1326 u32 hwa; 1327 1328 lockdep_assert_held(&iocg->waitq.lock); 1329 1330 /* calculate the current delay in effect - 1/2 every second */ 1331 tdelta = now->now - iocg->delay_at; 1332 if (iocg->delay) 1333 delay = iocg->delay >> div64_u64(tdelta, USEC_PER_SEC); 1334 else 1335 delay = 0; 1336 1337 /* calculate the new delay from the debt amount */ 1338 current_hweight(iocg, &hwa, NULL); 1339 vover = atomic64_read(&iocg->vtime) + 1340 abs_cost_to_cost(iocg->abs_vdebt, hwa) - now->vnow; 1341 vover_pct = div64_s64(100 * vover, 1342 ioc->period_us * ioc->vtime_base_rate); 1343 1344 if (vover_pct <= MIN_DELAY_THR_PCT) 1345 new_delay = 0; 1346 else if (vover_pct >= MAX_DELAY_THR_PCT) 1347 new_delay = MAX_DELAY; 1348 else 1349 new_delay = MIN_DELAY + 1350 div_u64((MAX_DELAY - MIN_DELAY) * 1351 (vover_pct - MIN_DELAY_THR_PCT), 1352 MAX_DELAY_THR_PCT - MIN_DELAY_THR_PCT); 1353 1354 /* pick the higher one and apply */ 1355 if (new_delay > delay) { 1356 iocg->delay = new_delay; 1357 iocg->delay_at = now->now; 1358 delay = new_delay; 1359 } 1360 1361 if (delay >= MIN_DELAY) { 1362 if (!iocg->indelay_since) 1363 iocg->indelay_since = now->now; 1364 
blkcg_set_delay(blkg, delay * NSEC_PER_USEC); 1365 return true; 1366 } else { 1367 if (iocg->indelay_since) { 1368 iocg->local_stat.indelay_us += now->now - iocg->indelay_since; 1369 iocg->indelay_since = 0; 1370 } 1371 iocg->delay = 0; 1372 blkcg_clear_delay(blkg); 1373 return false; 1374 } 1375 } 1376 1377 static void iocg_incur_debt(struct ioc_gq *iocg, u64 abs_cost, 1378 struct ioc_now *now) 1379 { 1380 struct iocg_pcpu_stat *gcs; 1381 1382 lockdep_assert_held(&iocg->ioc->lock); 1383 lockdep_assert_held(&iocg->waitq.lock); 1384 WARN_ON_ONCE(list_empty(&iocg->active_list)); 1385 1386 /* 1387 * Once in debt, debt handling owns inuse. @iocg stays at the minimum 1388 * inuse donating all of it share to others until its debt is paid off. 1389 */ 1390 if (!iocg->abs_vdebt && abs_cost) { 1391 iocg->indebt_since = now->now; 1392 propagate_weights(iocg, iocg->active, 0, false, now); 1393 } 1394 1395 iocg->abs_vdebt += abs_cost; 1396 1397 gcs = get_cpu_ptr(iocg->pcpu_stat); 1398 local64_add(abs_cost, &gcs->abs_vusage); 1399 put_cpu_ptr(gcs); 1400 } 1401 1402 static void iocg_pay_debt(struct ioc_gq *iocg, u64 abs_vpay, 1403 struct ioc_now *now) 1404 { 1405 lockdep_assert_held(&iocg->ioc->lock); 1406 lockdep_assert_held(&iocg->waitq.lock); 1407 1408 /* make sure that nobody messed with @iocg */ 1409 WARN_ON_ONCE(list_empty(&iocg->active_list)); 1410 WARN_ON_ONCE(iocg->inuse > 1); 1411 1412 iocg->abs_vdebt -= min(abs_vpay, iocg->abs_vdebt); 1413 1414 /* if debt is paid in full, restore inuse */ 1415 if (!iocg->abs_vdebt) { 1416 iocg->local_stat.indebt_us += now->now - iocg->indebt_since; 1417 iocg->indebt_since = 0; 1418 1419 propagate_weights(iocg, iocg->active, iocg->last_inuse, 1420 false, now); 1421 } 1422 } 1423 1424 static int iocg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode, 1425 int flags, void *key) 1426 { 1427 struct iocg_wait *wait = container_of(wq_entry, struct iocg_wait, wait); 1428 struct iocg_wake_ctx *ctx = (struct iocg_wake_ctx *)key; 1429 u64 cost = abs_cost_to_cost(wait->abs_cost, ctx->hw_inuse); 1430 1431 ctx->vbudget -= cost; 1432 1433 if (ctx->vbudget < 0) 1434 return -1; 1435 1436 iocg_commit_bio(ctx->iocg, wait->bio, wait->abs_cost, cost); 1437 1438 /* 1439 * autoremove_wake_function() removes the wait entry only when it 1440 * actually changed the task state. We want the wait always 1441 * removed. Remove explicitly and use default_wake_function(). 1442 */ 1443 list_del_init(&wq_entry->entry); 1444 wait->committed = true; 1445 1446 default_wake_function(wq_entry, mode, flags, key); 1447 return 0; 1448 } 1449 1450 /* 1451 * Calculate the accumulated budget, pay debt if @pay_debt and wake up waiters 1452 * accordingly. When @pay_debt is %true, the caller must be holding ioc->lock in 1453 * addition to iocg->waitq.lock. 
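 *
 * A sketch of the flow below: the budget is how far the iocg's vtime lags
 * behind the device vtime. When @pay_debt, that budget is first converted
 * back to absolute cost (cost_to_abs_cost() with the active hweight) and
 * applied against abs_vdebt; only the budget remaining after the debt
 * payment is handed to the wake-up loop for the waiting bios.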
1454 */ 1455 static void iocg_kick_waitq(struct ioc_gq *iocg, bool pay_debt, 1456 struct ioc_now *now) 1457 { 1458 struct ioc *ioc = iocg->ioc; 1459 struct iocg_wake_ctx ctx = { .iocg = iocg }; 1460 u64 vshortage, expires, oexpires; 1461 s64 vbudget; 1462 u32 hwa; 1463 1464 lockdep_assert_held(&iocg->waitq.lock); 1465 1466 current_hweight(iocg, &hwa, NULL); 1467 vbudget = now->vnow - atomic64_read(&iocg->vtime); 1468 1469 /* pay off debt */ 1470 if (pay_debt && iocg->abs_vdebt && vbudget > 0) { 1471 u64 abs_vbudget = cost_to_abs_cost(vbudget, hwa); 1472 u64 abs_vpay = min_t(u64, abs_vbudget, iocg->abs_vdebt); 1473 u64 vpay = abs_cost_to_cost(abs_vpay, hwa); 1474 1475 lockdep_assert_held(&ioc->lock); 1476 1477 atomic64_add(vpay, &iocg->vtime); 1478 atomic64_add(vpay, &iocg->done_vtime); 1479 iocg_pay_debt(iocg, abs_vpay, now); 1480 vbudget -= vpay; 1481 } 1482 1483 if (iocg->abs_vdebt || iocg->delay) 1484 iocg_kick_delay(iocg, now); 1485 1486 /* 1487 * Debt can still be outstanding if we haven't paid all yet or the 1488 * caller raced and called without @pay_debt. Shouldn't wake up waiters 1489 * under debt. Make sure @vbudget reflects the outstanding amount and is 1490 * not positive. 1491 */ 1492 if (iocg->abs_vdebt) { 1493 s64 vdebt = abs_cost_to_cost(iocg->abs_vdebt, hwa); 1494 vbudget = min_t(s64, 0, vbudget - vdebt); 1495 } 1496 1497 /* 1498 * Wake up the ones which are due and see how much vtime we'll need for 1499 * the next one. As paying off debt restores hw_inuse, it must be read 1500 * after the above debt payment. 1501 */ 1502 ctx.vbudget = vbudget; 1503 current_hweight(iocg, NULL, &ctx.hw_inuse); 1504 1505 __wake_up_locked_key(&iocg->waitq, TASK_NORMAL, &ctx); 1506 1507 if (!waitqueue_active(&iocg->waitq)) { 1508 if (iocg->wait_since) { 1509 iocg->local_stat.wait_us += now->now - iocg->wait_since; 1510 iocg->wait_since = 0; 1511 } 1512 return; 1513 } 1514 1515 if (!iocg->wait_since) 1516 iocg->wait_since = now->now; 1517 1518 if (WARN_ON_ONCE(ctx.vbudget >= 0)) 1519 return; 1520 1521 /* determine next wakeup, add a timer margin to guarantee chunking */ 1522 vshortage = -ctx.vbudget; 1523 expires = now->now_ns + 1524 DIV64_U64_ROUND_UP(vshortage, ioc->vtime_base_rate) * 1525 NSEC_PER_USEC; 1526 expires += ioc->timer_slack_ns; 1527 1528 /* if already active and close enough, don't bother */ 1529 oexpires = ktime_to_ns(hrtimer_get_softexpires(&iocg->waitq_timer)); 1530 if (hrtimer_is_queued(&iocg->waitq_timer) && 1531 abs(oexpires - expires) <= ioc->timer_slack_ns) 1532 return; 1533 1534 hrtimer_start_range_ns(&iocg->waitq_timer, ns_to_ktime(expires), 1535 ioc->timer_slack_ns, HRTIMER_MODE_ABS); 1536 } 1537 1538 static enum hrtimer_restart iocg_waitq_timer_fn(struct hrtimer *timer) 1539 { 1540 struct ioc_gq *iocg = container_of(timer, struct ioc_gq, waitq_timer); 1541 bool pay_debt = READ_ONCE(iocg->abs_vdebt); 1542 struct ioc_now now; 1543 unsigned long flags; 1544 1545 ioc_now(iocg->ioc, &now); 1546 1547 iocg_lock(iocg, pay_debt, &flags); 1548 iocg_kick_waitq(iocg, pay_debt, &now); 1549 iocg_unlock(iocg, pay_debt, &flags); 1550 1551 return HRTIMER_NORESTART; 1552 } 1553 1554 static void ioc_lat_stat(struct ioc *ioc, u32 *missed_ppm_ar, u32 *rq_wait_pct_p) 1555 { 1556 u32 nr_met[2] = { }; 1557 u32 nr_missed[2] = { }; 1558 u64 rq_wait_ns = 0; 1559 int cpu, rw; 1560 1561 for_each_online_cpu(cpu) { 1562 struct ioc_pcpu_stat *stat = per_cpu_ptr(ioc->pcpu_stat, cpu); 1563 u64 this_rq_wait_ns; 1564 1565 for (rw = READ; rw <= WRITE; rw++) { 1566 u32 this_met = 
local_read(&stat->missed[rw].nr_met); 1567 u32 this_missed = local_read(&stat->missed[rw].nr_missed); 1568 1569 nr_met[rw] += this_met - stat->missed[rw].last_met; 1570 nr_missed[rw] += this_missed - stat->missed[rw].last_missed; 1571 stat->missed[rw].last_met = this_met; 1572 stat->missed[rw].last_missed = this_missed; 1573 } 1574 1575 this_rq_wait_ns = local64_read(&stat->rq_wait_ns); 1576 rq_wait_ns += this_rq_wait_ns - stat->last_rq_wait_ns; 1577 stat->last_rq_wait_ns = this_rq_wait_ns; 1578 } 1579 1580 for (rw = READ; rw <= WRITE; rw++) { 1581 if (nr_met[rw] + nr_missed[rw]) 1582 missed_ppm_ar[rw] = 1583 DIV64_U64_ROUND_UP((u64)nr_missed[rw] * MILLION, 1584 nr_met[rw] + nr_missed[rw]); 1585 else 1586 missed_ppm_ar[rw] = 0; 1587 } 1588 1589 *rq_wait_pct_p = div64_u64(rq_wait_ns * 100, 1590 ioc->period_us * NSEC_PER_USEC); 1591 } 1592 1593 /* was iocg idle this period? */ 1594 static bool iocg_is_idle(struct ioc_gq *iocg) 1595 { 1596 struct ioc *ioc = iocg->ioc; 1597 1598 /* did something get issued this period? */ 1599 if (atomic64_read(&iocg->active_period) == 1600 atomic64_read(&ioc->cur_period)) 1601 return false; 1602 1603 /* is something in flight? */ 1604 if (atomic64_read(&iocg->done_vtime) != atomic64_read(&iocg->vtime)) 1605 return false; 1606 1607 return true; 1608 } 1609 1610 /* 1611 * Call this function on the target leaf @iocg's to build pre-order traversal 1612 * list of all the ancestors in @inner_walk. The inner nodes are linked through 1613 * ->walk_list and the caller is responsible for dissolving the list after use. 1614 */ 1615 static void iocg_build_inner_walk(struct ioc_gq *iocg, 1616 struct list_head *inner_walk) 1617 { 1618 int lvl; 1619 1620 WARN_ON_ONCE(!list_empty(&iocg->walk_list)); 1621 1622 /* find the first ancestor which hasn't been visited yet */ 1623 for (lvl = iocg->level - 1; lvl >= 0; lvl--) { 1624 if (!list_empty(&iocg->ancestors[lvl]->walk_list)) 1625 break; 1626 } 1627 1628 /* walk down and visit the inner nodes to get pre-order traversal */ 1629 while (++lvl <= iocg->level - 1) { 1630 struct ioc_gq *inner = iocg->ancestors[lvl]; 1631 1632 /* record traversal order */ 1633 list_add_tail(&inner->walk_list, inner_walk); 1634 } 1635 } 1636 1637 /* collect per-cpu counters and propagate the deltas to the parent */ 1638 static void iocg_flush_stat_one(struct ioc_gq *iocg, struct ioc_now *now) 1639 { 1640 struct ioc *ioc = iocg->ioc; 1641 struct iocg_stat new_stat; 1642 u64 abs_vusage = 0; 1643 u64 vusage_delta; 1644 int cpu; 1645 1646 lockdep_assert_held(&iocg->ioc->lock); 1647 1648 /* collect per-cpu counters */ 1649 for_each_possible_cpu(cpu) { 1650 abs_vusage += local64_read( 1651 per_cpu_ptr(&iocg->pcpu_stat->abs_vusage, cpu)); 1652 } 1653 vusage_delta = abs_vusage - iocg->last_stat_abs_vusage; 1654 iocg->last_stat_abs_vusage = abs_vusage; 1655 1656 iocg->usage_delta_us = div64_u64(vusage_delta, ioc->vtime_base_rate); 1657 iocg->local_stat.usage_us += iocg->usage_delta_us; 1658 1659 /* propagate upwards */ 1660 new_stat.usage_us = 1661 iocg->local_stat.usage_us + iocg->desc_stat.usage_us; 1662 new_stat.wait_us = 1663 iocg->local_stat.wait_us + iocg->desc_stat.wait_us; 1664 new_stat.indebt_us = 1665 iocg->local_stat.indebt_us + iocg->desc_stat.indebt_us; 1666 new_stat.indelay_us = 1667 iocg->local_stat.indelay_us + iocg->desc_stat.indelay_us; 1668 1669 /* propagate the deltas to the parent */ 1670 if (iocg->level > 0) { 1671 struct iocg_stat *parent_stat = 1672 &iocg->ancestors[iocg->level - 1]->desc_stat; 1673 1674 parent_stat->usage_us += 1675 
new_stat.usage_us - iocg->last_stat.usage_us; 1676 parent_stat->wait_us += 1677 new_stat.wait_us - iocg->last_stat.wait_us; 1678 parent_stat->indebt_us += 1679 new_stat.indebt_us - iocg->last_stat.indebt_us; 1680 parent_stat->indelay_us += 1681 new_stat.indelay_us - iocg->last_stat.indelay_us; 1682 } 1683 1684 iocg->last_stat = new_stat; 1685 } 1686 1687 /* get stat counters ready for reading on all active iocgs */ 1688 static void iocg_flush_stat(struct list_head *target_iocgs, struct ioc_now *now) 1689 { 1690 LIST_HEAD(inner_walk); 1691 struct ioc_gq *iocg, *tiocg; 1692 1693 /* flush leaves and build inner node walk list */ 1694 list_for_each_entry(iocg, target_iocgs, active_list) { 1695 iocg_flush_stat_one(iocg, now); 1696 iocg_build_inner_walk(iocg, &inner_walk); 1697 } 1698 1699 /* keep flushing upwards by walking the inner list backwards */ 1700 list_for_each_entry_safe_reverse(iocg, tiocg, &inner_walk, walk_list) { 1701 iocg_flush_stat_one(iocg, now); 1702 list_del_init(&iocg->walk_list); 1703 } 1704 } 1705 1706 /* 1707 * Determine what @iocg's hweight_inuse should be after donating unused 1708 * capacity. @hwm is the upper bound and used to signal no donation. This 1709 * function also throws away @iocg's excess budget. 1710 */ 1711 static u32 hweight_after_donation(struct ioc_gq *iocg, u32 old_hwi, u32 hwm, 1712 u32 usage, struct ioc_now *now) 1713 { 1714 struct ioc *ioc = iocg->ioc; 1715 u64 vtime = atomic64_read(&iocg->vtime); 1716 s64 excess, delta, target, new_hwi; 1717 1718 /* debt handling owns inuse for debtors */ 1719 if (iocg->abs_vdebt) 1720 return 1; 1721 1722 /* see whether minimum margin requirement is met */ 1723 if (waitqueue_active(&iocg->waitq) || 1724 time_after64(vtime, now->vnow - ioc->margins.min)) 1725 return hwm; 1726 1727 /* throw away excess above target */ 1728 excess = now->vnow - vtime - ioc->margins.target; 1729 if (excess > 0) { 1730 atomic64_add(excess, &iocg->vtime); 1731 atomic64_add(excess, &iocg->done_vtime); 1732 vtime += excess; 1733 ioc->vtime_err -= div64_u64(excess * old_hwi, WEIGHT_ONE); 1734 } 1735 1736 /* 1737 * Let's say the distance between iocg's and device's vtimes as a 1738 * fraction of period duration is delta. Assuming that the iocg will 1739 * consume the usage determined above, we want to determine new_hwi so 1740 * that delta equals MARGIN_TARGET at the end of the next period. 1741 * 1742 * We need to execute usage worth of IOs while spending the sum of the 1743 * new budget (1 - MARGIN_TARGET) and the leftover from the last period 1744 * (delta): 1745 * 1746 * usage = (1 - MARGIN_TARGET + delta) * new_hwi 1747 * 1748 * Therefore, the new_hwi is: 1749 * 1750 * new_hwi = usage / (1 - MARGIN_TARGET + delta) 1751 */ 1752 delta = div64_s64(WEIGHT_ONE * (now->vnow - vtime), 1753 now->vnow - ioc->period_at_vtime); 1754 target = WEIGHT_ONE * MARGIN_TARGET_PCT / 100; 1755 new_hwi = div64_s64(WEIGHT_ONE * usage, WEIGHT_ONE - target + delta); 1756 1757 return clamp_t(s64, new_hwi, 1, hwm); 1758 } 1759 1760 /* 1761 * For work-conservation, an iocg which isn't using all of its share should 1762 * donate the leftover to other iocgs. There are two ways to achieve this - 1. 1763 * bumping up vrate accordingly 2. lowering the donating iocg's inuse weight. 
 *
 * #1 is mathematically simpler but has the drawback of requiring synchronous
 * global hweight_inuse updates when idle iocgs get activated or inuse weights
 * change due to donation snapbacks as it has the possibility of grossly
 * overshooting what's allowed by the model and vrate.
 *
 * #2 is inherently safe with local operations. The donating iocg can easily
 * snap back to higher weights when needed without worrying about impacts on
 * other nodes as the impacts will be inherently correct. This also makes idle
 * iocg activations safe. The only effect activations have is decreasing
 * hweight_inuse of others, the right solution to which is for those iocgs to
 * snap back to higher weights.
 *
 * So, we go with #2. The challenge is calculating how each donating iocg's
 * inuse should be adjusted to achieve the target donation amounts. This is
 * done using Andy's method described in the following pdf.
 *
 * https://drive.google.com/file/d/1PsJwxPFtjUnwOY1QJ5AeICCcsL7BM3bo
 *
 * Given the weights and target after-donation hweight_inuse values, Andy's
 * method determines what the proportional distribution should look like at
 * each sibling level to maintain the relative relationship between all
 * non-donating pairs. To roughly summarize, it divides the tree into donating
 * and non-donating parts, calculates the global donation rate which is used
 * to determine the target hweight_inuse for each node, and then derives
 * per-level proportions.
 *
 * The following pdf shows that the global distribution calculated this way
 * can be achieved by scaling inuse weights of donating leaves and propagating
 * the adjustments upwards proportionally.
 *
 * https://drive.google.com/file/d/1vONz1-fzVO7oY5DXXsLjSxEtYYQbOvsE
 *
 * Combining the above two, we can determine how each leaf iocg's inuse should
 * be adjusted to achieve the target donation.
 *
 * https://drive.google.com/file/d/1WcrltBOSPN0qXVdBgnKm4mdp9FhuEFQN
 *
 * The inline comments use symbols from the last pdf.
 *
 * b is the sum of the absolute budgets in the subtree. 1 for the root node.
 * f is the sum of the absolute budgets of non-donating nodes in the subtree.
 * t is the sum of the absolute budgets of donating nodes in the subtree.
 * w is the weight of the node. w = w_f + w_t
 * w_f is the non-donating portion of w. w_f = w * f / b
 * w_t is the donating portion of w. w_t = w * t / b
 * s is the sum of all sibling weights. s = Sum(w) for siblings
 * s_f and s_t are the non-donating and donating portions of s.
 *
 * Subscript p denotes the parent's counterpart and ' the adjusted value - e.g.
 * w_pt is the donating portion of the parent's weight and w'_pt the same value
 * after adjustments. Subscript r denotes the root node's values.
 */
static void transfer_surpluses(struct list_head *surpluses, struct ioc_now *now)
{
	LIST_HEAD(over_hwa);
	LIST_HEAD(inner_walk);
	struct ioc_gq *iocg, *tiocg, *root_iocg;
	u32 after_sum, over_sum, over_target, gamma;

	/*
	 * It's pretty unlikely but possible for the total sum of
	 * hweight_after_donation's to be higher than WEIGHT_ONE, which will
	 * confuse the following calculations. If such a condition is detected,
	 * scale down everyone over its full share equally to keep the sum
	 * below WEIGHT_ONE.
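	 *
	 * As a rough illustration (with made-up numbers): if the
	 * hweight_after_donation's add up to 110% of WEIGHT_ONE, the ~10%
	 * excess is deducted proportionally from only those iocgs whose
	 * hweight_after_donation exceeds their current hweight_active (the
	 * over_hwa list below), leaving everyone else untouched.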
1830 */ 1831 after_sum = 0; 1832 over_sum = 0; 1833 list_for_each_entry(iocg, surpluses, surplus_list) { 1834 u32 hwa; 1835 1836 current_hweight(iocg, &hwa, NULL); 1837 after_sum += iocg->hweight_after_donation; 1838 1839 if (iocg->hweight_after_donation > hwa) { 1840 over_sum += iocg->hweight_after_donation; 1841 list_add(&iocg->walk_list, &over_hwa); 1842 } 1843 } 1844 1845 if (after_sum >= WEIGHT_ONE) { 1846 /* 1847 * The delta should be deducted from the over_sum, calculate 1848 * target over_sum value. 1849 */ 1850 u32 over_delta = after_sum - (WEIGHT_ONE - 1); 1851 WARN_ON_ONCE(over_sum <= over_delta); 1852 over_target = over_sum - over_delta; 1853 } else { 1854 over_target = 0; 1855 } 1856 1857 list_for_each_entry_safe(iocg, tiocg, &over_hwa, walk_list) { 1858 if (over_target) 1859 iocg->hweight_after_donation = 1860 div_u64((u64)iocg->hweight_after_donation * 1861 over_target, over_sum); 1862 list_del_init(&iocg->walk_list); 1863 } 1864 1865 /* 1866 * Build pre-order inner node walk list and prepare for donation 1867 * adjustment calculations. 1868 */ 1869 list_for_each_entry(iocg, surpluses, surplus_list) { 1870 iocg_build_inner_walk(iocg, &inner_walk); 1871 } 1872 1873 root_iocg = list_first_entry(&inner_walk, struct ioc_gq, walk_list); 1874 WARN_ON_ONCE(root_iocg->level > 0); 1875 1876 list_for_each_entry(iocg, &inner_walk, walk_list) { 1877 iocg->child_adjusted_sum = 0; 1878 iocg->hweight_donating = 0; 1879 iocg->hweight_after_donation = 0; 1880 } 1881 1882 /* 1883 * Propagate the donating budget (b_t) and after donation budget (b'_t) 1884 * up the hierarchy. 1885 */ 1886 list_for_each_entry(iocg, surpluses, surplus_list) { 1887 struct ioc_gq *parent = iocg->ancestors[iocg->level - 1]; 1888 1889 parent->hweight_donating += iocg->hweight_donating; 1890 parent->hweight_after_donation += iocg->hweight_after_donation; 1891 } 1892 1893 list_for_each_entry_reverse(iocg, &inner_walk, walk_list) { 1894 if (iocg->level > 0) { 1895 struct ioc_gq *parent = iocg->ancestors[iocg->level - 1]; 1896 1897 parent->hweight_donating += iocg->hweight_donating; 1898 parent->hweight_after_donation += iocg->hweight_after_donation; 1899 } 1900 } 1901 1902 /* 1903 * Calculate inner hwa's (b) and make sure the donation values are 1904 * within the accepted ranges as we're doing low res calculations with 1905 * roundups. 1906 */ 1907 list_for_each_entry(iocg, &inner_walk, walk_list) { 1908 if (iocg->level) { 1909 struct ioc_gq *parent = iocg->ancestors[iocg->level - 1]; 1910 1911 iocg->hweight_active = DIV64_U64_ROUND_UP( 1912 (u64)parent->hweight_active * iocg->active, 1913 parent->child_active_sum); 1914 1915 } 1916 1917 iocg->hweight_donating = min(iocg->hweight_donating, 1918 iocg->hweight_active); 1919 iocg->hweight_after_donation = min(iocg->hweight_after_donation, 1920 iocg->hweight_donating - 1); 1921 if (WARN_ON_ONCE(iocg->hweight_active <= 1 || 1922 iocg->hweight_donating <= 1 || 1923 iocg->hweight_after_donation == 0)) { 1924 pr_warn("iocg: invalid donation weights in "); 1925 pr_cont_cgroup_path(iocg_to_blkg(iocg)->blkcg->css.cgroup); 1926 pr_cont(": active=%u donating=%u after=%u\n", 1927 iocg->hweight_active, iocg->hweight_donating, 1928 iocg->hweight_after_donation); 1929 } 1930 } 1931 1932 /* 1933 * Calculate the global donation rate (gamma) - the rate to adjust 1934 * non-donating budgets by. 1935 * 1936 * No need to use 64bit multiplication here as the first operand is 1937 * guaranteed to be smaller than WEIGHT_ONE (1<<16). 
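	 *
	 * To get a feel for the numbers (made-up example): if the donating
	 * nodes hold 40% of the root budget (t_r) and will keep 25% after
	 * donation (t_r'), the non-donating 60% has to grow into 75%, i.e.
	 * every non-donating budget is scaled up by 0.75 / 0.60 = 1.25.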
1938 * 1939 * We know that there are beneficiary nodes and the sum of the donating 1940 * hweights can't be whole; however, due to the round-ups during hweight 1941 * calculations, root_iocg->hweight_donating might still end up equal to 1942 * or greater than whole. Limit the range when calculating the divider. 1943 * 1944 * gamma = (1 - t_r') / (1 - t_r) 1945 */ 1946 gamma = DIV_ROUND_UP( 1947 (WEIGHT_ONE - root_iocg->hweight_after_donation) * WEIGHT_ONE, 1948 WEIGHT_ONE - min_t(u32, root_iocg->hweight_donating, WEIGHT_ONE - 1)); 1949 1950 /* 1951 * Calculate adjusted hwi, child_adjusted_sum and inuse for the inner 1952 * nodes. 1953 */ 1954 list_for_each_entry(iocg, &inner_walk, walk_list) { 1955 struct ioc_gq *parent; 1956 u32 inuse, wpt, wptp; 1957 u64 st, sf; 1958 1959 if (iocg->level == 0) { 1960 /* adjusted weight sum for 1st level: s' = s * b_pf / b'_pf */ 1961 iocg->child_adjusted_sum = DIV64_U64_ROUND_UP( 1962 iocg->child_active_sum * (WEIGHT_ONE - iocg->hweight_donating), 1963 WEIGHT_ONE - iocg->hweight_after_donation); 1964 continue; 1965 } 1966 1967 parent = iocg->ancestors[iocg->level - 1]; 1968 1969 /* b' = gamma * b_f + b_t' */ 1970 iocg->hweight_inuse = DIV64_U64_ROUND_UP( 1971 (u64)gamma * (iocg->hweight_active - iocg->hweight_donating), 1972 WEIGHT_ONE) + iocg->hweight_after_donation; 1973 1974 /* w' = s' * b' / b'_p */ 1975 inuse = DIV64_U64_ROUND_UP( 1976 (u64)parent->child_adjusted_sum * iocg->hweight_inuse, 1977 parent->hweight_inuse); 1978 1979 /* adjusted weight sum for children: s' = s_f + s_t * w'_pt / w_pt */ 1980 st = DIV64_U64_ROUND_UP( 1981 iocg->child_active_sum * iocg->hweight_donating, 1982 iocg->hweight_active); 1983 sf = iocg->child_active_sum - st; 1984 wpt = DIV64_U64_ROUND_UP( 1985 (u64)iocg->active * iocg->hweight_donating, 1986 iocg->hweight_active); 1987 wptp = DIV64_U64_ROUND_UP( 1988 (u64)inuse * iocg->hweight_after_donation, 1989 iocg->hweight_inuse); 1990 1991 iocg->child_adjusted_sum = sf + DIV64_U64_ROUND_UP(st * wptp, wpt); 1992 } 1993 1994 /* 1995 * All inner nodes now have ->hweight_inuse and ->child_adjusted_sum and 1996 * we can finally determine leaf adjustments. 1997 */ 1998 list_for_each_entry(iocg, surpluses, surplus_list) { 1999 struct ioc_gq *parent = iocg->ancestors[iocg->level - 1]; 2000 u32 inuse; 2001 2002 /* 2003 * In-debt iocgs participated in the donation calculation with 2004 * the minimum target hweight_inuse. Configuring inuse 2005 * accordingly would work fine but debt handling expects 2006 * @iocg->inuse stay at the minimum and we don't wanna 2007 * interfere. 2008 */ 2009 if (iocg->abs_vdebt) { 2010 WARN_ON_ONCE(iocg->inuse > 1); 2011 continue; 2012 } 2013 2014 /* w' = s' * b' / b'_p, note that b' == b'_t for donating leaves */ 2015 inuse = DIV64_U64_ROUND_UP( 2016 parent->child_adjusted_sum * iocg->hweight_after_donation, 2017 parent->hweight_inuse); 2018 2019 TRACE_IOCG_PATH(inuse_transfer, iocg, now, 2020 iocg->inuse, inuse, 2021 iocg->hweight_inuse, 2022 iocg->hweight_after_donation); 2023 2024 __propagate_weights(iocg, iocg->active, inuse, true, now); 2025 } 2026 2027 /* walk list should be dissolved after use */ 2028 list_for_each_entry_safe(iocg, tiocg, &inner_walk, walk_list) 2029 list_del_init(&iocg->walk_list); 2030 } 2031 2032 /* 2033 * A low weight iocg can amass a large amount of debt, for example, when 2034 * anonymous memory gets reclaimed aggressively. If the system has a lot of 2035 * memory paired with a slow IO device, the debt can span multiple seconds or 2036 * more. 
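 * As a rough, made-up illustration: with hweight_inuse pinned around 1%, half
 * a second worth of absolute debt translates to something like 50 seconds of
 * the iocg's own vtime before it is fully paid back.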
If there are no other subsequent IO issuers, the in-debt iocg may end 2037 * up blocked paying its debt while the IO device is idle. 2038 * 2039 * The following protects against such cases. If the device has been 2040 * sufficiently idle for a while, the debts are halved and delays are 2041 * recalculated. 2042 */ 2043 static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors, 2044 struct ioc_now *now) 2045 { 2046 struct ioc_gq *iocg; 2047 u64 dur, usage_pct, nr_cycles; 2048 2049 /* if no debtor, reset the cycle */ 2050 if (!nr_debtors) { 2051 ioc->dfgv_period_at = now->now; 2052 ioc->dfgv_period_rem = 0; 2053 ioc->dfgv_usage_us_sum = 0; 2054 return; 2055 } 2056 2057 /* 2058 * Debtors can pass through a lot of writes choking the device and we 2059 * don't want to be forgiving debts while the device is struggling from 2060 * write bursts. If we're missing latency targets, consider the device 2061 * fully utilized. 2062 */ 2063 if (ioc->busy_level > 0) 2064 usage_us_sum = max_t(u64, usage_us_sum, ioc->period_us); 2065 2066 ioc->dfgv_usage_us_sum += usage_us_sum; 2067 if (time_before64(now->now, ioc->dfgv_period_at + DFGV_PERIOD)) 2068 return; 2069 2070 /* 2071 * At least DFGV_PERIOD has passed since the last period. Calculate the 2072 * average usage and reset the period counters. 2073 */ 2074 dur = now->now - ioc->dfgv_period_at; 2075 usage_pct = div64_u64(100 * ioc->dfgv_usage_us_sum, dur); 2076 2077 ioc->dfgv_period_at = now->now; 2078 ioc->dfgv_usage_us_sum = 0; 2079 2080 /* if was too busy, reset everything */ 2081 if (usage_pct > DFGV_USAGE_PCT) { 2082 ioc->dfgv_period_rem = 0; 2083 return; 2084 } 2085 2086 /* 2087 * Usage is lower than threshold. Let's forgive some debts. Debt 2088 * forgiveness runs off of the usual ioc timer but its period usually 2089 * doesn't match ioc's. Compensate the difference by performing the 2090 * reduction as many times as would fit in the duration since the last 2091 * run and carrying over the left-over duration in @ioc->dfgv_period_rem 2092 * - if ioc period is 75% of DFGV_PERIOD, one out of three consecutive 2093 * reductions is doubled. 2094 */ 2095 nr_cycles = dur + ioc->dfgv_period_rem; 2096 ioc->dfgv_period_rem = do_div(nr_cycles, DFGV_PERIOD); 2097 2098 list_for_each_entry(iocg, &ioc->active_iocgs, active_list) { 2099 u64 __maybe_unused old_debt, __maybe_unused old_delay; 2100 2101 if (!iocg->abs_vdebt && !iocg->delay) 2102 continue; 2103 2104 spin_lock(&iocg->waitq.lock); 2105 2106 old_debt = iocg->abs_vdebt; 2107 old_delay = iocg->delay; 2108 2109 if (iocg->abs_vdebt) 2110 iocg->abs_vdebt = iocg->abs_vdebt >> nr_cycles ?: 1; 2111 if (iocg->delay) 2112 iocg->delay = iocg->delay >> nr_cycles ?: 1; 2113 2114 iocg_kick_waitq(iocg, true, now); 2115 2116 TRACE_IOCG_PATH(iocg_forgive_debt, iocg, now, usage_pct, 2117 old_debt, iocg->abs_vdebt, 2118 old_delay, iocg->delay); 2119 2120 spin_unlock(&iocg->waitq.lock); 2121 } 2122 } 2123 2124 /* 2125 * Check the active iocgs' state to avoid oversleeping and deactive 2126 * idle iocgs. 2127 * 2128 * Since waiters determine the sleep durations based on the vrate 2129 * they saw at the time of sleep, if vrate has increased, some 2130 * waiters could be sleeping for too long. Wake up tardy waiters 2131 * which should have woken up in the last period and expire idle 2132 * iocgs. 
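 * (For example, a waiter which computed a 20ms sleep while vrate was at 50%
 * only needs ~10ms of wall time once vrate is back at 100% - the walk below
 * wakes such waiters instead of letting them oversleep.)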
2133 */ 2134 static int ioc_check_iocgs(struct ioc *ioc, struct ioc_now *now) 2135 { 2136 int nr_debtors = 0; 2137 struct ioc_gq *iocg, *tiocg; 2138 2139 list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) { 2140 if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt && 2141 !iocg->delay && !iocg_is_idle(iocg)) 2142 continue; 2143 2144 spin_lock(&iocg->waitq.lock); 2145 2146 /* flush wait and indebt stat deltas */ 2147 if (iocg->wait_since) { 2148 iocg->local_stat.wait_us += now->now - iocg->wait_since; 2149 iocg->wait_since = now->now; 2150 } 2151 if (iocg->indebt_since) { 2152 iocg->local_stat.indebt_us += 2153 now->now - iocg->indebt_since; 2154 iocg->indebt_since = now->now; 2155 } 2156 if (iocg->indelay_since) { 2157 iocg->local_stat.indelay_us += 2158 now->now - iocg->indelay_since; 2159 iocg->indelay_since = now->now; 2160 } 2161 2162 if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt || 2163 iocg->delay) { 2164 /* might be oversleeping vtime / hweight changes, kick */ 2165 iocg_kick_waitq(iocg, true, now); 2166 if (iocg->abs_vdebt || iocg->delay) 2167 nr_debtors++; 2168 } else if (iocg_is_idle(iocg)) { 2169 /* no waiter and idle, deactivate */ 2170 u64 vtime = atomic64_read(&iocg->vtime); 2171 s64 excess; 2172 2173 /* 2174 * @iocg has been inactive for a full duration and will 2175 * have a high budget. Account anything above target as 2176 * error and throw away. On reactivation, it'll start 2177 * with the target budget. 2178 */ 2179 excess = now->vnow - vtime - ioc->margins.target; 2180 if (excess > 0) { 2181 u32 old_hwi; 2182 2183 current_hweight(iocg, NULL, &old_hwi); 2184 ioc->vtime_err -= div64_u64(excess * old_hwi, 2185 WEIGHT_ONE); 2186 } 2187 2188 __propagate_weights(iocg, 0, 0, false, now); 2189 list_del_init(&iocg->active_list); 2190 } 2191 2192 spin_unlock(&iocg->waitq.lock); 2193 } 2194 2195 commit_weights(ioc); 2196 return nr_debtors; 2197 } 2198 2199 static void ioc_timer_fn(struct timer_list *timer) 2200 { 2201 struct ioc *ioc = container_of(timer, struct ioc, timer); 2202 struct ioc_gq *iocg, *tiocg; 2203 struct ioc_now now; 2204 LIST_HEAD(surpluses); 2205 int nr_debtors, nr_shortages = 0, nr_lagging = 0; 2206 u64 usage_us_sum = 0; 2207 u32 ppm_rthr = MILLION - ioc->params.qos[QOS_RPPM]; 2208 u32 ppm_wthr = MILLION - ioc->params.qos[QOS_WPPM]; 2209 u32 missed_ppm[2], rq_wait_pct; 2210 u64 period_vtime; 2211 int prev_busy_level; 2212 2213 /* how were the latencies during the period? */ 2214 ioc_lat_stat(ioc, missed_ppm, &rq_wait_pct); 2215 2216 /* take care of active iocgs */ 2217 spin_lock_irq(&ioc->lock); 2218 2219 ioc_now(ioc, &now); 2220 2221 period_vtime = now.vnow - ioc->period_at_vtime; 2222 if (WARN_ON_ONCE(!period_vtime)) { 2223 spin_unlock_irq(&ioc->lock); 2224 return; 2225 } 2226 2227 nr_debtors = ioc_check_iocgs(ioc, &now); 2228 2229 /* 2230 * Wait and indebt stat are flushed above and the donation calculation 2231 * below needs updated usage stat. Let's bring stat up-to-date. 2232 */ 2233 iocg_flush_stat(&ioc->active_iocgs, &now); 2234 2235 /* calc usage and see whether some weights need to be moved around */ 2236 list_for_each_entry(iocg, &ioc->active_iocgs, active_list) { 2237 u64 vdone, vtime, usage_us; 2238 u32 hw_active, hw_inuse; 2239 2240 /* 2241 * Collect unused and wind vtime closer to vnow to prevent 2242 * iocgs from accumulating a large amount of budget. 
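		 * Otherwise an iocg which stayed mostly idle could wake up with
		 * several periods' worth of headroom and burst well past its
		 * share before the controller can react.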
2243 */ 2244 vdone = atomic64_read(&iocg->done_vtime); 2245 vtime = atomic64_read(&iocg->vtime); 2246 current_hweight(iocg, &hw_active, &hw_inuse); 2247 2248 /* 2249 * Latency QoS detection doesn't account for IOs which are 2250 * in-flight for longer than a period. Detect them by 2251 * comparing vdone against period start. If lagging behind 2252 * IOs from past periods, don't increase vrate. 2253 */ 2254 if ((ppm_rthr != MILLION || ppm_wthr != MILLION) && 2255 !atomic_read(&iocg_to_blkg(iocg)->use_delay) && 2256 time_after64(vtime, vdone) && 2257 time_after64(vtime, now.vnow - 2258 MAX_LAGGING_PERIODS * period_vtime) && 2259 time_before64(vdone, now.vnow - period_vtime)) 2260 nr_lagging++; 2261 2262 /* 2263 * Determine absolute usage factoring in in-flight IOs to avoid 2264 * high-latency completions appearing as idle. 2265 */ 2266 usage_us = iocg->usage_delta_us; 2267 usage_us_sum += usage_us; 2268 2269 /* see whether there's surplus vtime */ 2270 WARN_ON_ONCE(!list_empty(&iocg->surplus_list)); 2271 if (hw_inuse < hw_active || 2272 (!waitqueue_active(&iocg->waitq) && 2273 time_before64(vtime, now.vnow - ioc->margins.low))) { 2274 u32 hwa, old_hwi, hwm, new_hwi, usage; 2275 u64 usage_dur; 2276 2277 if (vdone != vtime) { 2278 u64 inflight_us = DIV64_U64_ROUND_UP( 2279 cost_to_abs_cost(vtime - vdone, hw_inuse), 2280 ioc->vtime_base_rate); 2281 2282 usage_us = max(usage_us, inflight_us); 2283 } 2284 2285 /* convert to hweight based usage ratio */ 2286 if (time_after64(iocg->activated_at, ioc->period_at)) 2287 usage_dur = max_t(u64, now.now - iocg->activated_at, 1); 2288 else 2289 usage_dur = max_t(u64, now.now - ioc->period_at, 1); 2290 2291 usage = clamp_t(u32, 2292 DIV64_U64_ROUND_UP(usage_us * WEIGHT_ONE, 2293 usage_dur), 2294 1, WEIGHT_ONE); 2295 2296 /* 2297 * Already donating or accumulated enough to start. 2298 * Determine the donation amount. 2299 */ 2300 current_hweight(iocg, &hwa, &old_hwi); 2301 hwm = current_hweight_max(iocg); 2302 new_hwi = hweight_after_donation(iocg, old_hwi, hwm, 2303 usage, &now); 2304 if (new_hwi < hwm) { 2305 iocg->hweight_donating = hwa; 2306 iocg->hweight_after_donation = new_hwi; 2307 list_add(&iocg->surplus_list, &surpluses); 2308 } else { 2309 TRACE_IOCG_PATH(inuse_shortage, iocg, &now, 2310 iocg->inuse, iocg->active, 2311 iocg->hweight_inuse, new_hwi); 2312 2313 __propagate_weights(iocg, iocg->active, 2314 iocg->active, true, &now); 2315 nr_shortages++; 2316 } 2317 } else { 2318 /* genuinely short on vtime */ 2319 nr_shortages++; 2320 } 2321 } 2322 2323 if (!list_empty(&surpluses) && nr_shortages) 2324 transfer_surpluses(&surpluses, &now); 2325 2326 commit_weights(ioc); 2327 2328 /* surplus list should be dissolved after use */ 2329 list_for_each_entry_safe(iocg, tiocg, &surpluses, surplus_list) 2330 list_del_init(&iocg->surplus_list); 2331 2332 /* 2333 * If q is getting clogged or we're missing too much, we're issuing 2334 * too much IO and should lower vtime rate. If we're not missing 2335 * and experiencing shortages but not surpluses, we're too stingy 2336 * and should increase vtime rate. 
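	 * busy_level winds up or down one step per period once past zero and
	 * snaps back toward zero when the signal flips, so a condition
	 * generally has to persist across periods before the vrate is pushed
	 * very far.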
2337 */ 2338 prev_busy_level = ioc->busy_level; 2339 if (rq_wait_pct > RQ_WAIT_BUSY_PCT || 2340 missed_ppm[READ] > ppm_rthr || 2341 missed_ppm[WRITE] > ppm_wthr) { 2342 /* clearly missing QoS targets, slow down vrate */ 2343 ioc->busy_level = max(ioc->busy_level, 0); 2344 ioc->busy_level++; 2345 } else if (rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 && 2346 missed_ppm[READ] <= ppm_rthr * UNBUSY_THR_PCT / 100 && 2347 missed_ppm[WRITE] <= ppm_wthr * UNBUSY_THR_PCT / 100) { 2348 /* QoS targets are being met with >25% margin */ 2349 if (nr_shortages) { 2350 /* 2351 * We're throttling while the device has spare 2352 * capacity. If vrate was being slowed down, stop. 2353 */ 2354 ioc->busy_level = min(ioc->busy_level, 0); 2355 2356 /* 2357 * If there are IOs spanning multiple periods, wait 2358 * them out before pushing the device harder. 2359 */ 2360 if (!nr_lagging) 2361 ioc->busy_level--; 2362 } else { 2363 /* 2364 * Nobody is being throttled and the users aren't 2365 * issuing enough IOs to saturate the device. We 2366 * simply don't know how close the device is to 2367 * saturation. Coast. 2368 */ 2369 ioc->busy_level = 0; 2370 } 2371 } else { 2372 /* inside the hysterisis margin, we're good */ 2373 ioc->busy_level = 0; 2374 } 2375 2376 ioc->busy_level = clamp(ioc->busy_level, -1000, 1000); 2377 2378 ioc_adjust_base_vrate(ioc, rq_wait_pct, nr_lagging, nr_shortages, 2379 prev_busy_level, missed_ppm); 2380 2381 ioc_refresh_params(ioc, false); 2382 2383 ioc_forgive_debts(ioc, usage_us_sum, nr_debtors, &now); 2384 2385 /* 2386 * This period is done. Move onto the next one. If nothing's 2387 * going on with the device, stop the timer. 2388 */ 2389 atomic64_inc(&ioc->cur_period); 2390 2391 if (ioc->running != IOC_STOP) { 2392 if (!list_empty(&ioc->active_iocgs)) { 2393 ioc_start_period(ioc, &now); 2394 } else { 2395 ioc->busy_level = 0; 2396 ioc->vtime_err = 0; 2397 ioc->running = IOC_IDLE; 2398 } 2399 2400 ioc_refresh_vrate(ioc, &now); 2401 } 2402 2403 spin_unlock_irq(&ioc->lock); 2404 } 2405 2406 static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime, 2407 u64 abs_cost, struct ioc_now *now) 2408 { 2409 struct ioc *ioc = iocg->ioc; 2410 struct ioc_margins *margins = &ioc->margins; 2411 u32 __maybe_unused old_inuse = iocg->inuse, __maybe_unused old_hwi; 2412 u32 hwi, adj_step; 2413 s64 margin; 2414 u64 cost, new_inuse; 2415 2416 current_hweight(iocg, NULL, &hwi); 2417 old_hwi = hwi; 2418 cost = abs_cost_to_cost(abs_cost, hwi); 2419 margin = now->vnow - vtime - cost; 2420 2421 /* debt handling owns inuse for debtors */ 2422 if (iocg->abs_vdebt) 2423 return cost; 2424 2425 /* 2426 * We only increase inuse during period and do so if the margin has 2427 * deteriorated since the previous adjustment. 2428 */ 2429 if (margin >= iocg->saved_margin || margin >= margins->low || 2430 iocg->inuse == iocg->active) 2431 return cost; 2432 2433 spin_lock_irq(&ioc->lock); 2434 2435 /* we own inuse only when @iocg is in the normal active state */ 2436 if (iocg->abs_vdebt || list_empty(&iocg->active_list)) { 2437 spin_unlock_irq(&ioc->lock); 2438 return cost; 2439 } 2440 2441 /* 2442 * Bump up inuse till @abs_cost fits in the existing budget. 2443 * adj_step must be determined after acquiring ioc->lock - we might 2444 * have raced and lost to another thread for activation and could 2445 * be reading 0 iocg->active before ioc->lock which will lead to 2446 * infinite loop. 
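	 * (With a stale 0 there, adj_step would also compute to 0 and the loop
	 * below could never raise inuse enough to terminate.)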
2447 */ 2448 new_inuse = iocg->inuse; 2449 adj_step = DIV_ROUND_UP(iocg->active * INUSE_ADJ_STEP_PCT, 100); 2450 do { 2451 new_inuse = new_inuse + adj_step; 2452 propagate_weights(iocg, iocg->active, new_inuse, true, now); 2453 current_hweight(iocg, NULL, &hwi); 2454 cost = abs_cost_to_cost(abs_cost, hwi); 2455 } while (time_after64(vtime + cost, now->vnow) && 2456 iocg->inuse != iocg->active); 2457 2458 spin_unlock_irq(&ioc->lock); 2459 2460 TRACE_IOCG_PATH(inuse_adjust, iocg, now, 2461 old_inuse, iocg->inuse, old_hwi, hwi); 2462 2463 return cost; 2464 } 2465 2466 static void calc_vtime_cost_builtin(struct bio *bio, struct ioc_gq *iocg, 2467 bool is_merge, u64 *costp) 2468 { 2469 struct ioc *ioc = iocg->ioc; 2470 u64 coef_seqio, coef_randio, coef_page; 2471 u64 pages = max_t(u64, bio_sectors(bio) >> IOC_SECT_TO_PAGE_SHIFT, 1); 2472 u64 seek_pages = 0; 2473 u64 cost = 0; 2474 2475 switch (bio_op(bio)) { 2476 case REQ_OP_READ: 2477 coef_seqio = ioc->params.lcoefs[LCOEF_RSEQIO]; 2478 coef_randio = ioc->params.lcoefs[LCOEF_RRANDIO]; 2479 coef_page = ioc->params.lcoefs[LCOEF_RPAGE]; 2480 break; 2481 case REQ_OP_WRITE: 2482 coef_seqio = ioc->params.lcoefs[LCOEF_WSEQIO]; 2483 coef_randio = ioc->params.lcoefs[LCOEF_WRANDIO]; 2484 coef_page = ioc->params.lcoefs[LCOEF_WPAGE]; 2485 break; 2486 default: 2487 goto out; 2488 } 2489 2490 if (iocg->cursor) { 2491 seek_pages = abs(bio->bi_iter.bi_sector - iocg->cursor); 2492 seek_pages >>= IOC_SECT_TO_PAGE_SHIFT; 2493 } 2494 2495 if (!is_merge) { 2496 if (seek_pages > LCOEF_RANDIO_PAGES) { 2497 cost += coef_randio; 2498 } else { 2499 cost += coef_seqio; 2500 } 2501 } 2502 cost += pages * coef_page; 2503 out: 2504 *costp = cost; 2505 } 2506 2507 static u64 calc_vtime_cost(struct bio *bio, struct ioc_gq *iocg, bool is_merge) 2508 { 2509 u64 cost; 2510 2511 calc_vtime_cost_builtin(bio, iocg, is_merge, &cost); 2512 return cost; 2513 } 2514 2515 static void calc_size_vtime_cost_builtin(struct request *rq, struct ioc *ioc, 2516 u64 *costp) 2517 { 2518 unsigned int pages = blk_rq_stats_sectors(rq) >> IOC_SECT_TO_PAGE_SHIFT; 2519 2520 switch (req_op(rq)) { 2521 case REQ_OP_READ: 2522 *costp = pages * ioc->params.lcoefs[LCOEF_RPAGE]; 2523 break; 2524 case REQ_OP_WRITE: 2525 *costp = pages * ioc->params.lcoefs[LCOEF_WPAGE]; 2526 break; 2527 default: 2528 *costp = 0; 2529 } 2530 } 2531 2532 static u64 calc_size_vtime_cost(struct request *rq, struct ioc *ioc) 2533 { 2534 u64 cost; 2535 2536 calc_size_vtime_cost_builtin(rq, ioc, &cost); 2537 return cost; 2538 } 2539 2540 static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio) 2541 { 2542 struct blkcg_gq *blkg = bio->bi_blkg; 2543 struct ioc *ioc = rqos_to_ioc(rqos); 2544 struct ioc_gq *iocg = blkg_to_iocg(blkg); 2545 struct ioc_now now; 2546 struct iocg_wait wait; 2547 u64 abs_cost, cost, vtime; 2548 bool use_debt, ioc_locked; 2549 unsigned long flags; 2550 2551 /* bypass IOs if disabled or for root cgroup */ 2552 if (!ioc->enabled || !iocg->level) 2553 return; 2554 2555 /* calculate the absolute vtime cost */ 2556 abs_cost = calc_vtime_cost(bio, iocg, false); 2557 if (!abs_cost) 2558 return; 2559 2560 if (!iocg_activate(iocg, &now)) 2561 return; 2562 2563 iocg->cursor = bio_end_sector(bio); 2564 vtime = atomic64_read(&iocg->vtime); 2565 cost = adjust_inuse_and_calc_cost(iocg, vtime, abs_cost, &now); 2566 2567 /* 2568 * If no one's waiting and within budget, issue right away. The 2569 * tests are racy but the races aren't systemic - we only miss once 2570 * in a while which is fine. 
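	 * (Losing such a race only means that the occasional bio takes the
	 * locked slow path below or slips in slightly ahead of the waiters;
	 * its cost is charged either way.)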
2571 */ 2572 if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt && 2573 time_before_eq64(vtime + cost, now.vnow)) { 2574 iocg_commit_bio(iocg, bio, abs_cost, cost); 2575 return; 2576 } 2577 2578 /* 2579 * We're over budget. This can be handled in two ways. IOs which may 2580 * cause priority inversions are punted to @ioc->aux_iocg and charged as 2581 * debt. Otherwise, the issuer is blocked on @iocg->waitq. Debt handling 2582 * requires @ioc->lock, waitq handling @iocg->waitq.lock. Determine 2583 * whether debt handling is needed and acquire locks accordingly. 2584 */ 2585 use_debt = bio_issue_as_root_blkg(bio) || fatal_signal_pending(current); 2586 ioc_locked = use_debt || READ_ONCE(iocg->abs_vdebt); 2587 retry_lock: 2588 iocg_lock(iocg, ioc_locked, &flags); 2589 2590 /* 2591 * @iocg must stay activated for debt and waitq handling. Deactivation 2592 * is synchronized against both ioc->lock and waitq.lock and we won't 2593 * get deactivated as long as we're waiting or has debt, so we're good 2594 * if we're activated here. In the unlikely cases that we aren't, just 2595 * issue the IO. 2596 */ 2597 if (unlikely(list_empty(&iocg->active_list))) { 2598 iocg_unlock(iocg, ioc_locked, &flags); 2599 iocg_commit_bio(iocg, bio, abs_cost, cost); 2600 return; 2601 } 2602 2603 /* 2604 * We're over budget. If @bio has to be issued regardless, remember 2605 * the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay 2606 * off the debt before waking more IOs. 2607 * 2608 * This way, the debt is continuously paid off each period with the 2609 * actual budget available to the cgroup. If we just wound vtime, we 2610 * would incorrectly use the current hw_inuse for the entire amount 2611 * which, for example, can lead to the cgroup staying blocked for a 2612 * long time even with substantially raised hw_inuse. 2613 * 2614 * An iocg with vdebt should stay online so that the timer can keep 2615 * deducting its vdebt and [de]activate use_delay mechanism 2616 * accordingly. We don't want to race against the timer trying to 2617 * clear them and leave @iocg inactive w/ dangling use_delay heavily 2618 * penalizing the cgroup and its descendants. 2619 */ 2620 if (use_debt) { 2621 iocg_incur_debt(iocg, abs_cost, &now); 2622 if (iocg_kick_delay(iocg, &now)) 2623 blkcg_schedule_throttle(rqos->q, 2624 (bio->bi_opf & REQ_SWAP) == REQ_SWAP); 2625 iocg_unlock(iocg, ioc_locked, &flags); 2626 return; 2627 } 2628 2629 /* guarantee that iocgs w/ waiters have maximum inuse */ 2630 if (!iocg->abs_vdebt && iocg->inuse != iocg->active) { 2631 if (!ioc_locked) { 2632 iocg_unlock(iocg, false, &flags); 2633 ioc_locked = true; 2634 goto retry_lock; 2635 } 2636 propagate_weights(iocg, iocg->active, iocg->active, true, 2637 &now); 2638 } 2639 2640 /* 2641 * Append self to the waitq and schedule the wakeup timer if we're 2642 * the first waiter. The timer duration is calculated based on the 2643 * current vrate. vtime and hweight changes can make it too short 2644 * or too long. Each wait entry records the absolute cost it's 2645 * waiting for to allow re-evaluation using a custom wait entry. 2646 * 2647 * If too short, the timer simply reschedules itself. If too long, 2648 * the period timer will notice and trigger wakeups. 2649 * 2650 * All waiters are on iocg->waitq and the wait states are 2651 * synchronized using waitq.lock. 
2652 */ 2653 init_waitqueue_func_entry(&wait.wait, iocg_wake_fn); 2654 wait.wait.private = current; 2655 wait.bio = bio; 2656 wait.abs_cost = abs_cost; 2657 wait.committed = false; /* will be set true by waker */ 2658 2659 __add_wait_queue_entry_tail(&iocg->waitq, &wait.wait); 2660 iocg_kick_waitq(iocg, ioc_locked, &now); 2661 2662 iocg_unlock(iocg, ioc_locked, &flags); 2663 2664 while (true) { 2665 set_current_state(TASK_UNINTERRUPTIBLE); 2666 if (wait.committed) 2667 break; 2668 io_schedule(); 2669 } 2670 2671 /* waker already committed us, proceed */ 2672 finish_wait(&iocg->waitq, &wait.wait); 2673 } 2674 2675 static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq, 2676 struct bio *bio) 2677 { 2678 struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg); 2679 struct ioc *ioc = iocg->ioc; 2680 sector_t bio_end = bio_end_sector(bio); 2681 struct ioc_now now; 2682 u64 vtime, abs_cost, cost; 2683 unsigned long flags; 2684 2685 /* bypass if disabled or for root cgroup */ 2686 if (!ioc->enabled || !iocg->level) 2687 return; 2688 2689 abs_cost = calc_vtime_cost(bio, iocg, true); 2690 if (!abs_cost) 2691 return; 2692 2693 ioc_now(ioc, &now); 2694 2695 vtime = atomic64_read(&iocg->vtime); 2696 cost = adjust_inuse_and_calc_cost(iocg, vtime, abs_cost, &now); 2697 2698 /* update cursor if backmerging into the request at the cursor */ 2699 if (blk_rq_pos(rq) < bio_end && 2700 blk_rq_pos(rq) + blk_rq_sectors(rq) == iocg->cursor) 2701 iocg->cursor = bio_end; 2702 2703 /* 2704 * Charge if there's enough vtime budget and the existing request has 2705 * cost assigned. 2706 */ 2707 if (rq->bio && rq->bio->bi_iocost_cost && 2708 time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) { 2709 iocg_commit_bio(iocg, bio, abs_cost, cost); 2710 return; 2711 } 2712 2713 /* 2714 * Otherwise, account it as debt if @iocg is online, which it should 2715 * be for the vast majority of cases. See debt handling in 2716 * ioc_rqos_throttle() for details. 
2717 */ 2718 spin_lock_irqsave(&ioc->lock, flags); 2719 spin_lock(&iocg->waitq.lock); 2720 2721 if (likely(!list_empty(&iocg->active_list))) { 2722 iocg_incur_debt(iocg, abs_cost, &now); 2723 if (iocg_kick_delay(iocg, &now)) 2724 blkcg_schedule_throttle(rqos->q, 2725 (bio->bi_opf & REQ_SWAP) == REQ_SWAP); 2726 } else { 2727 iocg_commit_bio(iocg, bio, abs_cost, cost); 2728 } 2729 2730 spin_unlock(&iocg->waitq.lock); 2731 spin_unlock_irqrestore(&ioc->lock, flags); 2732 } 2733 2734 static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio) 2735 { 2736 struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg); 2737 2738 if (iocg && bio->bi_iocost_cost) 2739 atomic64_add(bio->bi_iocost_cost, &iocg->done_vtime); 2740 } 2741 2742 static void ioc_rqos_done(struct rq_qos *rqos, struct request *rq) 2743 { 2744 struct ioc *ioc = rqos_to_ioc(rqos); 2745 struct ioc_pcpu_stat *ccs; 2746 u64 on_q_ns, rq_wait_ns, size_nsec; 2747 int pidx, rw; 2748 2749 if (!ioc->enabled || !rq->alloc_time_ns || !rq->start_time_ns) 2750 return; 2751 2752 switch (req_op(rq) & REQ_OP_MASK) { 2753 case REQ_OP_READ: 2754 pidx = QOS_RLAT; 2755 rw = READ; 2756 break; 2757 case REQ_OP_WRITE: 2758 pidx = QOS_WLAT; 2759 rw = WRITE; 2760 break; 2761 default: 2762 return; 2763 } 2764 2765 on_q_ns = ktime_get_ns() - rq->alloc_time_ns; 2766 rq_wait_ns = rq->start_time_ns - rq->alloc_time_ns; 2767 size_nsec = div64_u64(calc_size_vtime_cost(rq, ioc), VTIME_PER_NSEC); 2768 2769 ccs = get_cpu_ptr(ioc->pcpu_stat); 2770 2771 if (on_q_ns <= size_nsec || 2772 on_q_ns - size_nsec <= ioc->params.qos[pidx] * NSEC_PER_USEC) 2773 local_inc(&ccs->missed[rw].nr_met); 2774 else 2775 local_inc(&ccs->missed[rw].nr_missed); 2776 2777 local64_add(rq_wait_ns, &ccs->rq_wait_ns); 2778 2779 put_cpu_ptr(ccs); 2780 } 2781 2782 static void ioc_rqos_queue_depth_changed(struct rq_qos *rqos) 2783 { 2784 struct ioc *ioc = rqos_to_ioc(rqos); 2785 2786 spin_lock_irq(&ioc->lock); 2787 ioc_refresh_params(ioc, false); 2788 spin_unlock_irq(&ioc->lock); 2789 } 2790 2791 static void ioc_rqos_exit(struct rq_qos *rqos) 2792 { 2793 struct ioc *ioc = rqos_to_ioc(rqos); 2794 2795 blkcg_deactivate_policy(rqos->q, &blkcg_policy_iocost); 2796 2797 spin_lock_irq(&ioc->lock); 2798 ioc->running = IOC_STOP; 2799 spin_unlock_irq(&ioc->lock); 2800 2801 del_timer_sync(&ioc->timer); 2802 free_percpu(ioc->pcpu_stat); 2803 kfree(ioc); 2804 } 2805 2806 static struct rq_qos_ops ioc_rqos_ops = { 2807 .throttle = ioc_rqos_throttle, 2808 .merge = ioc_rqos_merge, 2809 .done_bio = ioc_rqos_done_bio, 2810 .done = ioc_rqos_done, 2811 .queue_depth_changed = ioc_rqos_queue_depth_changed, 2812 .exit = ioc_rqos_exit, 2813 }; 2814 2815 static int blk_iocost_init(struct request_queue *q) 2816 { 2817 struct ioc *ioc; 2818 struct rq_qos *rqos; 2819 int i, cpu, ret; 2820 2821 ioc = kzalloc(sizeof(*ioc), GFP_KERNEL); 2822 if (!ioc) 2823 return -ENOMEM; 2824 2825 ioc->pcpu_stat = alloc_percpu(struct ioc_pcpu_stat); 2826 if (!ioc->pcpu_stat) { 2827 kfree(ioc); 2828 return -ENOMEM; 2829 } 2830 2831 for_each_possible_cpu(cpu) { 2832 struct ioc_pcpu_stat *ccs = per_cpu_ptr(ioc->pcpu_stat, cpu); 2833 2834 for (i = 0; i < ARRAY_SIZE(ccs->missed); i++) { 2835 local_set(&ccs->missed[i].nr_met, 0); 2836 local_set(&ccs->missed[i].nr_missed, 0); 2837 } 2838 local64_set(&ccs->rq_wait_ns, 0); 2839 } 2840 2841 rqos = &ioc->rqos; 2842 rqos->id = RQ_QOS_COST; 2843 rqos->ops = &ioc_rqos_ops; 2844 rqos->q = q; 2845 2846 spin_lock_init(&ioc->lock); 2847 timer_setup(&ioc->timer, ioc_timer_fn, 0); 2848 
INIT_LIST_HEAD(&ioc->active_iocgs); 2849 2850 ioc->running = IOC_IDLE; 2851 ioc->vtime_base_rate = VTIME_PER_USEC; 2852 atomic64_set(&ioc->vtime_rate, VTIME_PER_USEC); 2853 seqcount_spinlock_init(&ioc->period_seqcount, &ioc->lock); 2854 ioc->period_at = ktime_to_us(ktime_get()); 2855 atomic64_set(&ioc->cur_period, 0); 2856 atomic_set(&ioc->hweight_gen, 0); 2857 2858 spin_lock_irq(&ioc->lock); 2859 ioc->autop_idx = AUTOP_INVALID; 2860 ioc_refresh_params(ioc, true); 2861 spin_unlock_irq(&ioc->lock); 2862 2863 rq_qos_add(q, rqos); 2864 ret = blkcg_activate_policy(q, &blkcg_policy_iocost); 2865 if (ret) { 2866 rq_qos_del(q, rqos); 2867 free_percpu(ioc->pcpu_stat); 2868 kfree(ioc); 2869 return ret; 2870 } 2871 return 0; 2872 } 2873 2874 static struct blkcg_policy_data *ioc_cpd_alloc(gfp_t gfp) 2875 { 2876 struct ioc_cgrp *iocc; 2877 2878 iocc = kzalloc(sizeof(struct ioc_cgrp), gfp); 2879 if (!iocc) 2880 return NULL; 2881 2882 iocc->dfl_weight = CGROUP_WEIGHT_DFL * WEIGHT_ONE; 2883 return &iocc->cpd; 2884 } 2885 2886 static void ioc_cpd_free(struct blkcg_policy_data *cpd) 2887 { 2888 kfree(container_of(cpd, struct ioc_cgrp, cpd)); 2889 } 2890 2891 static struct blkg_policy_data *ioc_pd_alloc(gfp_t gfp, struct request_queue *q, 2892 struct blkcg *blkcg) 2893 { 2894 int levels = blkcg->css.cgroup->level + 1; 2895 struct ioc_gq *iocg; 2896 2897 iocg = kzalloc_node(struct_size(iocg, ancestors, levels), gfp, q->node); 2898 if (!iocg) 2899 return NULL; 2900 2901 iocg->pcpu_stat = alloc_percpu_gfp(struct iocg_pcpu_stat, gfp); 2902 if (!iocg->pcpu_stat) { 2903 kfree(iocg); 2904 return NULL; 2905 } 2906 2907 return &iocg->pd; 2908 } 2909 2910 static void ioc_pd_init(struct blkg_policy_data *pd) 2911 { 2912 struct ioc_gq *iocg = pd_to_iocg(pd); 2913 struct blkcg_gq *blkg = pd_to_blkg(&iocg->pd); 2914 struct ioc *ioc = q_to_ioc(blkg->q); 2915 struct ioc_now now; 2916 struct blkcg_gq *tblkg; 2917 unsigned long flags; 2918 2919 ioc_now(ioc, &now); 2920 2921 iocg->ioc = ioc; 2922 atomic64_set(&iocg->vtime, now.vnow); 2923 atomic64_set(&iocg->done_vtime, now.vnow); 2924 atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period)); 2925 INIT_LIST_HEAD(&iocg->active_list); 2926 INIT_LIST_HEAD(&iocg->walk_list); 2927 INIT_LIST_HEAD(&iocg->surplus_list); 2928 iocg->hweight_active = WEIGHT_ONE; 2929 iocg->hweight_inuse = WEIGHT_ONE; 2930 2931 init_waitqueue_head(&iocg->waitq); 2932 hrtimer_init(&iocg->waitq_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); 2933 iocg->waitq_timer.function = iocg_waitq_timer_fn; 2934 2935 iocg->level = blkg->blkcg->css.cgroup->level; 2936 2937 for (tblkg = blkg; tblkg; tblkg = tblkg->parent) { 2938 struct ioc_gq *tiocg = blkg_to_iocg(tblkg); 2939 iocg->ancestors[tiocg->level] = tiocg; 2940 } 2941 2942 spin_lock_irqsave(&ioc->lock, flags); 2943 weight_updated(iocg, &now); 2944 spin_unlock_irqrestore(&ioc->lock, flags); 2945 } 2946 2947 static void ioc_pd_free(struct blkg_policy_data *pd) 2948 { 2949 struct ioc_gq *iocg = pd_to_iocg(pd); 2950 struct ioc *ioc = iocg->ioc; 2951 unsigned long flags; 2952 2953 if (ioc) { 2954 spin_lock_irqsave(&ioc->lock, flags); 2955 2956 if (!list_empty(&iocg->active_list)) { 2957 struct ioc_now now; 2958 2959 ioc_now(ioc, &now); 2960 propagate_weights(iocg, 0, 0, false, &now); 2961 list_del_init(&iocg->active_list); 2962 } 2963 2964 WARN_ON_ONCE(!list_empty(&iocg->walk_list)); 2965 WARN_ON_ONCE(!list_empty(&iocg->surplus_list)); 2966 2967 spin_unlock_irqrestore(&ioc->lock, flags); 2968 2969 hrtimer_cancel(&iocg->waitq_timer); 2970 } 2971 
free_percpu(iocg->pcpu_stat); 2972 kfree(iocg); 2973 } 2974 2975 static size_t ioc_pd_stat(struct blkg_policy_data *pd, char *buf, size_t size) 2976 { 2977 struct ioc_gq *iocg = pd_to_iocg(pd); 2978 struct ioc *ioc = iocg->ioc; 2979 size_t pos = 0; 2980 2981 if (!ioc->enabled) 2982 return 0; 2983 2984 if (iocg->level == 0) { 2985 unsigned vp10k = DIV64_U64_ROUND_CLOSEST( 2986 ioc->vtime_base_rate * 10000, 2987 VTIME_PER_USEC); 2988 pos += scnprintf(buf + pos, size - pos, " cost.vrate=%u.%02u", 2989 vp10k / 100, vp10k % 100); 2990 } 2991 2992 pos += scnprintf(buf + pos, size - pos, " cost.usage=%llu", 2993 iocg->last_stat.usage_us); 2994 2995 if (blkcg_debug_stats) 2996 pos += scnprintf(buf + pos, size - pos, 2997 " cost.wait=%llu cost.indebt=%llu cost.indelay=%llu", 2998 iocg->last_stat.wait_us, 2999 iocg->last_stat.indebt_us, 3000 iocg->last_stat.indelay_us); 3001 3002 return pos; 3003 } 3004 3005 static u64 ioc_weight_prfill(struct seq_file *sf, struct blkg_policy_data *pd, 3006 int off) 3007 { 3008 const char *dname = blkg_dev_name(pd->blkg); 3009 struct ioc_gq *iocg = pd_to_iocg(pd); 3010 3011 if (dname && iocg->cfg_weight) 3012 seq_printf(sf, "%s %u\n", dname, iocg->cfg_weight / WEIGHT_ONE); 3013 return 0; 3014 } 3015 3016 3017 static int ioc_weight_show(struct seq_file *sf, void *v) 3018 { 3019 struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); 3020 struct ioc_cgrp *iocc = blkcg_to_iocc(blkcg); 3021 3022 seq_printf(sf, "default %u\n", iocc->dfl_weight / WEIGHT_ONE); 3023 blkcg_print_blkgs(sf, blkcg, ioc_weight_prfill, 3024 &blkcg_policy_iocost, seq_cft(sf)->private, false); 3025 return 0; 3026 } 3027 3028 static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf, 3029 size_t nbytes, loff_t off) 3030 { 3031 struct blkcg *blkcg = css_to_blkcg(of_css(of)); 3032 struct ioc_cgrp *iocc = blkcg_to_iocc(blkcg); 3033 struct blkg_conf_ctx ctx; 3034 struct ioc_now now; 3035 struct ioc_gq *iocg; 3036 u32 v; 3037 int ret; 3038 3039 if (!strchr(buf, ':')) { 3040 struct blkcg_gq *blkg; 3041 3042 if (!sscanf(buf, "default %u", &v) && !sscanf(buf, "%u", &v)) 3043 return -EINVAL; 3044 3045 if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX) 3046 return -EINVAL; 3047 3048 spin_lock(&blkcg->lock); 3049 iocc->dfl_weight = v * WEIGHT_ONE; 3050 hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { 3051 struct ioc_gq *iocg = blkg_to_iocg(blkg); 3052 3053 if (iocg) { 3054 spin_lock_irq(&iocg->ioc->lock); 3055 ioc_now(iocg->ioc, &now); 3056 weight_updated(iocg, &now); 3057 spin_unlock_irq(&iocg->ioc->lock); 3058 } 3059 } 3060 spin_unlock(&blkcg->lock); 3061 3062 return nbytes; 3063 } 3064 3065 ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, buf, &ctx); 3066 if (ret) 3067 return ret; 3068 3069 iocg = blkg_to_iocg(ctx.blkg); 3070 3071 if (!strncmp(ctx.body, "default", 7)) { 3072 v = 0; 3073 } else { 3074 if (!sscanf(ctx.body, "%u", &v)) 3075 goto einval; 3076 if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX) 3077 goto einval; 3078 } 3079 3080 spin_lock(&iocg->ioc->lock); 3081 iocg->cfg_weight = v * WEIGHT_ONE; 3082 ioc_now(iocg->ioc, &now); 3083 weight_updated(iocg, &now); 3084 spin_unlock(&iocg->ioc->lock); 3085 3086 blkg_conf_finish(&ctx); 3087 return nbytes; 3088 3089 einval: 3090 blkg_conf_finish(&ctx); 3091 return -EINVAL; 3092 } 3093 3094 static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd, 3095 int off) 3096 { 3097 const char *dname = blkg_dev_name(pd->blkg); 3098 struct ioc *ioc = pd_to_iocg(pd)->ioc; 3099 3100 if (!dname) 3101 return 0; 3102 3103 
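	/*
	 * The line printed below comes out looking roughly like the following
	 * (illustrative values only):
	 *
	 *   8:16 enable=1 ctrl=auto rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=150.00
	 */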
seq_printf(sf, "%s enable=%d ctrl=%s rpct=%u.%02u rlat=%u wpct=%u.%02u wlat=%u min=%u.%02u max=%u.%02u\n", 3104 dname, ioc->enabled, ioc->user_qos_params ? "user" : "auto", 3105 ioc->params.qos[QOS_RPPM] / 10000, 3106 ioc->params.qos[QOS_RPPM] % 10000 / 100, 3107 ioc->params.qos[QOS_RLAT], 3108 ioc->params.qos[QOS_WPPM] / 10000, 3109 ioc->params.qos[QOS_WPPM] % 10000 / 100, 3110 ioc->params.qos[QOS_WLAT], 3111 ioc->params.qos[QOS_MIN] / 10000, 3112 ioc->params.qos[QOS_MIN] % 10000 / 100, 3113 ioc->params.qos[QOS_MAX] / 10000, 3114 ioc->params.qos[QOS_MAX] % 10000 / 100); 3115 return 0; 3116 } 3117 3118 static int ioc_qos_show(struct seq_file *sf, void *v) 3119 { 3120 struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); 3121 3122 blkcg_print_blkgs(sf, blkcg, ioc_qos_prfill, 3123 &blkcg_policy_iocost, seq_cft(sf)->private, false); 3124 return 0; 3125 } 3126 3127 static const match_table_t qos_ctrl_tokens = { 3128 { QOS_ENABLE, "enable=%u" }, 3129 { QOS_CTRL, "ctrl=%s" }, 3130 { NR_QOS_CTRL_PARAMS, NULL }, 3131 }; 3132 3133 static const match_table_t qos_tokens = { 3134 { QOS_RPPM, "rpct=%s" }, 3135 { QOS_RLAT, "rlat=%u" }, 3136 { QOS_WPPM, "wpct=%s" }, 3137 { QOS_WLAT, "wlat=%u" }, 3138 { QOS_MIN, "min=%s" }, 3139 { QOS_MAX, "max=%s" }, 3140 { NR_QOS_PARAMS, NULL }, 3141 }; 3142 3143 static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input, 3144 size_t nbytes, loff_t off) 3145 { 3146 struct block_device *bdev; 3147 struct ioc *ioc; 3148 u32 qos[NR_QOS_PARAMS]; 3149 bool enable, user; 3150 char *p; 3151 int ret; 3152 3153 bdev = blkcg_conf_open_bdev(&input); 3154 if (IS_ERR(bdev)) 3155 return PTR_ERR(bdev); 3156 3157 ioc = q_to_ioc(bdev->bd_disk->queue); 3158 if (!ioc) { 3159 ret = blk_iocost_init(bdev->bd_disk->queue); 3160 if (ret) 3161 goto err; 3162 ioc = q_to_ioc(bdev->bd_disk->queue); 3163 } 3164 3165 spin_lock_irq(&ioc->lock); 3166 memcpy(qos, ioc->params.qos, sizeof(qos)); 3167 enable = ioc->enabled; 3168 user = ioc->user_qos_params; 3169 spin_unlock_irq(&ioc->lock); 3170 3171 while ((p = strsep(&input, " \t\n"))) { 3172 substring_t args[MAX_OPT_ARGS]; 3173 char buf[32]; 3174 int tok; 3175 s64 v; 3176 3177 if (!*p) 3178 continue; 3179 3180 switch (match_token(p, qos_ctrl_tokens, args)) { 3181 case QOS_ENABLE: 3182 match_u64(&args[0], &v); 3183 enable = v; 3184 continue; 3185 case QOS_CTRL: 3186 match_strlcpy(buf, &args[0], sizeof(buf)); 3187 if (!strcmp(buf, "auto")) 3188 user = false; 3189 else if (!strcmp(buf, "user")) 3190 user = true; 3191 else 3192 goto einval; 3193 continue; 3194 } 3195 3196 tok = match_token(p, qos_tokens, args); 3197 switch (tok) { 3198 case QOS_RPPM: 3199 case QOS_WPPM: 3200 if (match_strlcpy(buf, &args[0], sizeof(buf)) >= 3201 sizeof(buf)) 3202 goto einval; 3203 if (cgroup_parse_float(buf, 2, &v)) 3204 goto einval; 3205 if (v < 0 || v > 10000) 3206 goto einval; 3207 qos[tok] = v * 100; 3208 break; 3209 case QOS_RLAT: 3210 case QOS_WLAT: 3211 if (match_u64(&args[0], &v)) 3212 goto einval; 3213 qos[tok] = v; 3214 break; 3215 case QOS_MIN: 3216 case QOS_MAX: 3217 if (match_strlcpy(buf, &args[0], sizeof(buf)) >= 3218 sizeof(buf)) 3219 goto einval; 3220 if (cgroup_parse_float(buf, 2, &v)) 3221 goto einval; 3222 if (v < 0) 3223 goto einval; 3224 qos[tok] = clamp_t(s64, v * 100, 3225 VRATE_MIN_PPM, VRATE_MAX_PPM); 3226 break; 3227 default: 3228 goto einval; 3229 } 3230 user = true; 3231 } 3232 3233 if (qos[QOS_MIN] > qos[QOS_MAX]) 3234 goto einval; 3235 3236 spin_lock_irq(&ioc->lock); 3237 3238 if (enable) { 3239 blk_stat_enable_accounting(ioc->rqos.q); 
3240 blk_queue_flag_set(QUEUE_FLAG_RQ_ALLOC_TIME, ioc->rqos.q); 3241 ioc->enabled = true; 3242 } else { 3243 blk_queue_flag_clear(QUEUE_FLAG_RQ_ALLOC_TIME, ioc->rqos.q); 3244 ioc->enabled = false; 3245 } 3246 3247 if (user) { 3248 memcpy(ioc->params.qos, qos, sizeof(qos)); 3249 ioc->user_qos_params = true; 3250 } else { 3251 ioc->user_qos_params = false; 3252 } 3253 3254 ioc_refresh_params(ioc, true); 3255 spin_unlock_irq(&ioc->lock); 3256 3257 blkdev_put_no_open(bdev); 3258 return nbytes; 3259 einval: 3260 ret = -EINVAL; 3261 err: 3262 blkdev_put_no_open(bdev); 3263 return ret; 3264 } 3265 3266 static u64 ioc_cost_model_prfill(struct seq_file *sf, 3267 struct blkg_policy_data *pd, int off) 3268 { 3269 const char *dname = blkg_dev_name(pd->blkg); 3270 struct ioc *ioc = pd_to_iocg(pd)->ioc; 3271 u64 *u = ioc->params.i_lcoefs; 3272 3273 if (!dname) 3274 return 0; 3275 3276 seq_printf(sf, "%s ctrl=%s model=linear " 3277 "rbps=%llu rseqiops=%llu rrandiops=%llu " 3278 "wbps=%llu wseqiops=%llu wrandiops=%llu\n", 3279 dname, ioc->user_cost_model ? "user" : "auto", 3280 u[I_LCOEF_RBPS], u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS], 3281 u[I_LCOEF_WBPS], u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]); 3282 return 0; 3283 } 3284 3285 static int ioc_cost_model_show(struct seq_file *sf, void *v) 3286 { 3287 struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); 3288 3289 blkcg_print_blkgs(sf, blkcg, ioc_cost_model_prfill, 3290 &blkcg_policy_iocost, seq_cft(sf)->private, false); 3291 return 0; 3292 } 3293 3294 static const match_table_t cost_ctrl_tokens = { 3295 { COST_CTRL, "ctrl=%s" }, 3296 { COST_MODEL, "model=%s" }, 3297 { NR_COST_CTRL_PARAMS, NULL }, 3298 }; 3299 3300 static const match_table_t i_lcoef_tokens = { 3301 { I_LCOEF_RBPS, "rbps=%u" }, 3302 { I_LCOEF_RSEQIOPS, "rseqiops=%u" }, 3303 { I_LCOEF_RRANDIOPS, "rrandiops=%u" }, 3304 { I_LCOEF_WBPS, "wbps=%u" }, 3305 { I_LCOEF_WSEQIOPS, "wseqiops=%u" }, 3306 { I_LCOEF_WRANDIOPS, "wrandiops=%u" }, 3307 { NR_I_LCOEFS, NULL }, 3308 }; 3309 3310 static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input, 3311 size_t nbytes, loff_t off) 3312 { 3313 struct block_device *bdev; 3314 struct ioc *ioc; 3315 u64 u[NR_I_LCOEFS]; 3316 bool user; 3317 char *p; 3318 int ret; 3319 3320 bdev = blkcg_conf_open_bdev(&input); 3321 if (IS_ERR(bdev)) 3322 return PTR_ERR(bdev); 3323 3324 ioc = q_to_ioc(bdev->bd_disk->queue); 3325 if (!ioc) { 3326 ret = blk_iocost_init(bdev->bd_disk->queue); 3327 if (ret) 3328 goto err; 3329 ioc = q_to_ioc(bdev->bd_disk->queue); 3330 } 3331 3332 spin_lock_irq(&ioc->lock); 3333 memcpy(u, ioc->params.i_lcoefs, sizeof(u)); 3334 user = ioc->user_cost_model; 3335 spin_unlock_irq(&ioc->lock); 3336 3337 while ((p = strsep(&input, " \t\n"))) { 3338 substring_t args[MAX_OPT_ARGS]; 3339 char buf[32]; 3340 int tok; 3341 u64 v; 3342 3343 if (!*p) 3344 continue; 3345 3346 switch (match_token(p, cost_ctrl_tokens, args)) { 3347 case COST_CTRL: 3348 match_strlcpy(buf, &args[0], sizeof(buf)); 3349 if (!strcmp(buf, "auto")) 3350 user = false; 3351 else if (!strcmp(buf, "user")) 3352 user = true; 3353 else 3354 goto einval; 3355 continue; 3356 case COST_MODEL: 3357 match_strlcpy(buf, &args[0], sizeof(buf)); 3358 if (strcmp(buf, "linear")) 3359 goto einval; 3360 continue; 3361 } 3362 3363 tok = match_token(p, i_lcoef_tokens, args); 3364 if (tok == NR_I_LCOEFS) 3365 goto einval; 3366 if (match_u64(&args[0], &v)) 3367 goto einval; 3368 u[tok] = v; 3369 user = true; 3370 } 3371 3372 spin_lock_irq(&ioc->lock); 3373 if (user) { 3374 
memcpy(ioc->params.i_lcoefs, u, sizeof(u)); 3375 ioc->user_cost_model = true; 3376 } else { 3377 ioc->user_cost_model = false; 3378 } 3379 ioc_refresh_params(ioc, true); 3380 spin_unlock_irq(&ioc->lock); 3381 3382 blkdev_put_no_open(bdev); 3383 return nbytes; 3384 3385 einval: 3386 ret = -EINVAL; 3387 err: 3388 blkdev_put_no_open(bdev); 3389 return ret; 3390 } 3391 3392 static struct cftype ioc_files[] = { 3393 { 3394 .name = "weight", 3395 .flags = CFTYPE_NOT_ON_ROOT, 3396 .seq_show = ioc_weight_show, 3397 .write = ioc_weight_write, 3398 }, 3399 { 3400 .name = "cost.qos", 3401 .flags = CFTYPE_ONLY_ON_ROOT, 3402 .seq_show = ioc_qos_show, 3403 .write = ioc_qos_write, 3404 }, 3405 { 3406 .name = "cost.model", 3407 .flags = CFTYPE_ONLY_ON_ROOT, 3408 .seq_show = ioc_cost_model_show, 3409 .write = ioc_cost_model_write, 3410 }, 3411 {} 3412 }; 3413 3414 static struct blkcg_policy blkcg_policy_iocost = { 3415 .dfl_cftypes = ioc_files, 3416 .cpd_alloc_fn = ioc_cpd_alloc, 3417 .cpd_free_fn = ioc_cpd_free, 3418 .pd_alloc_fn = ioc_pd_alloc, 3419 .pd_init_fn = ioc_pd_init, 3420 .pd_free_fn = ioc_pd_free, 3421 .pd_stat_fn = ioc_pd_stat, 3422 }; 3423 3424 static int __init ioc_init(void) 3425 { 3426 return blkcg_policy_register(&blkcg_policy_iocost); 3427 } 3428 3429 static void __exit ioc_exit(void) 3430 { 3431 blkcg_policy_unregister(&blkcg_policy_iocost); 3432 } 3433 3434 module_init(ioc_init); 3435 module_exit(ioc_exit); 3436