/*
 * Copyright © 2015-2016 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *   Robert Bragg <robert@sixbynine.org>
 */


/**
 * DOC: i915 Perf Overview
 *
 * Gen graphics supports a large number of performance counters that can help
 * driver and application developers understand and optimize their use of the
 * GPU.
 *
 * This i915 perf interface enables userspace to configure and open a file
 * descriptor representing a stream of GPU metrics which can then be read() as
 * a stream of sample records.
 *
 * The interface is particularly suited to exposing buffered metrics that are
 * captured by DMA from the GPU, unsynchronized with and unrelated to the CPU.
 *
 * Streams representing a single context are accessible to applications with a
 * corresponding drm file descriptor, such that OpenGL can use the interface
 * without special privileges. Access to system-wide metrics requires root
 * privileges by default, unless changed via the dev.i915.perf_stream_paranoid
 * sysctl option.
 */

/**
 * DOC: i915 Perf History and Comparison with Core Perf
 *
 * The interface was initially inspired by the core Perf infrastructure but
 * some notable differences are:
 *
 * i915 perf file descriptors represent a "stream" instead of an "event"; where
 * a perf event primarily corresponds to a single 64bit value, while a stream
 * might sample sets of tightly-coupled counters, depending on the
 * configuration. For example the Gen OA unit isn't designed to support
 * orthogonal configurations of individual counters; it's configured for a set
 * of related counters. Samples for an i915 perf stream capturing OA metrics
 * will include a set of counter values packed in a compact HW specific format.
 * The OA unit supports a number of different packing formats which can be
 * selected by the user opening the stream. Perf has support for grouping
 * events, but each event in the group is configured, validated and
 * authenticated individually with separate system calls.
 *
 * i915 perf stream configurations are provided as an array of u64 (key,value)
 * pairs, instead of a fixed struct with multiple miscellaneous config members,
 * interleaved with event-type specific members.
 *
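 * For example, opening a periodic OA stream from userspace might look
 * something like the sketch below (illustrative only, error handling elided;
 * `metrics_set_id` and `oa_exponent` are assumed to have been chosen by the
 * caller, and the uapi lives in include/uapi/drm/i915_drm.h)::
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, oa_exponent,
 *	};
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 *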
 * i915 perf doesn't support exposing metrics via an mmap'd circular buffer.
 * The supported metrics are being written to memory by the GPU unsynchronized
 * with the CPU, using HW specific packing formats for counter sets. Sometimes
 * the constraints on HW configuration require reports to be filtered before it
 * would be acceptable to expose them to unprivileged applications - to hide
 * the metrics of other processes/contexts. For these use cases a read() based
 * interface is a good fit, and provides an opportunity to filter data as it
 * gets copied from the GPU mapped buffers to userspace buffers.
 *
 *
 * Issues hit with first prototype based on Core Perf
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
 * The first prototype of this driver was based on the core perf
 * infrastructure, and while we did make that mostly work, with some changes to
 * perf, we found we were breaking or working around too many assumptions baked
 * into perf's currently cpu centric design.
 *
 * In the end we didn't see a clear benefit to making perf's implementation and
 * interface more complex by changing design assumptions while we knew we still
 * wouldn't be able to use any existing perf based userspace tools.
 *
 * Also considering the Gen specific nature of the Observability hardware and
 * how userspace will sometimes need to combine i915 perf OA metrics with
 * side-band OA data captured via MI_REPORT_PERF_COUNT commands, we're
 * expecting the interface to be used by a platform specific userspace such as
 * OpenGL or tools. This is to say, we aren't inherently missing out on having
 * a standard vendor/architecture agnostic interface by not using perf.
 *
 *
 * For posterity, in case we might re-visit trying to adapt core perf to be
 * better suited to exposing i915 metrics these were the main pain points we
 * hit:
 *
 * - The perf based OA PMU driver broke some significant design assumptions:
 *
 *   Existing perf pmus are used for profiling work on a cpu and we were
 *   introducing the idea of _IS_DEVICE pmus with different security
 *   implications, the need to fake cpu-related data (such as user/kernel
 *   registers) to fit with perf's current design, and adding _DEVICE records
 *   as a way to forward device-specific status records.
 *
 *   The OA unit writes reports of counters into a circular buffer, without
 *   involvement from the CPU, making our PMU driver the first of its kind.
 *
 *   Given the way we were periodically forwarding data from the GPU-mapped OA
 *   buffer to perf's buffer, those bursts of sample writes looked to perf like
 *   we were sampling too fast and so we had to subvert its throttling checks.
 *
 *   Perf supports groups of counters and allows those to be read via
 *   transactions internally but transactions currently seem designed to be
 *   explicitly initiated from the cpu (say in response to a userspace read())
 *   and while we could pull a report out of the OA buffer we can't
 *   trigger a report from the cpu on demand.
 *
 *   Related to being report based; the OA counters are configured in HW as a
 *   set while perf generally expects counter configurations to be orthogonal.
 *   Although counters can be associated with a group leader as they are
 *   opened, there's no clear precedent for being able to provide group-wide
 *   configuration attributes (for example we want to let userspace choose the
 *   OA unit report format used to capture all counters in a set, or specify a
 *   GPU context to filter metrics on). We avoided using perf's grouping
 *   feature and forwarded OA reports to userspace via perf's 'raw' sample
 *   field. This suited our userspace well considering how coupled the counters
 *   are when dealing with normalizing. It would be inconvenient to split
 *   counters up into separate events, only to require userspace to recombine
 *   them. For Mesa it's also convenient to be forwarded raw, periodic reports
 *   for combining with the side-band raw reports it captures using
 *   MI_REPORT_PERF_COUNT commands.
 *
 * - As a side note on perf's grouping feature; there was also some concern
 *   that using PERF_FORMAT_GROUP as a way to pack together counter values
 *   would quite drastically inflate our sample sizes, which would likely
 *   lower the effective sampling resolutions we could use when the available
 *   memory bandwidth is limited.
 *
 *   With the OA unit's report formats, counters are packed together as 32
 *   or 40bit values, with the largest report size being 256 bytes.
 *
 *   PERF_FORMAT_GROUP values are 64bit, but there doesn't appear to be a
 *   documented ordering to the values, implying PERF_FORMAT_ID must also be
 *   used to add a 64bit ID before each value; giving 16 bytes per counter.
 *
 *   Related to counter orthogonality; we can't time share the OA unit, while
 *   event scheduling is a central design idea within perf for allowing
 *   userspace to open + enable more events than can be configured in HW at any
 *   one time. The OA unit is not designed to allow re-configuration while in
 *   use. We can't reconfigure the OA unit without losing internal OA unit
 *   state which we can't access explicitly to save and restore. Reconfiguring
 *   the OA unit is also relatively slow, involving ~100 register writes. From
 *   userspace Mesa also depends on a stable OA configuration when emitting
 *   MI_REPORT_PERF_COUNT commands and importantly the OA unit can't be
 *   disabled while there are outstanding MI_RPC commands lest we hang the
 *   command streamer.
 *
 *   The contents of sample records aren't extensible by device drivers (i.e.
 *   the sample_type bits). As an example; Sourab Gupta had been looking to
 *   attach GPU timestamps to our OA samples. We were shoehorning OA reports
 *   into sample records by using the 'raw' field, but it's tricky to pack more
 *   than one thing into this field because events/core.c currently only lets a
 *   pmu give a single raw data pointer plus len which will be copied into the
 *   ring buffer. To include more than the OA report we'd have to copy the
 *   report into an intermediate larger buffer. I'd been considering allowing a
 *   vector of data+len values to be specified for copying the raw data, but
 *   it felt like a kludge to be using the raw field for this purpose.
 *
 * - It felt like our perf based PMU was making some technical compromises
 *   just for the sake of using perf:
 *
 *   perf_event_open() requires events to either relate to a pid or a specific
 *   cpu core, while our device pmu related to neither.
 *   Events opened with a pid will be automatically enabled/disabled according
 *   to the scheduling of that process - so not appropriate for us. When an
 *   event is related to a cpu id, perf ensures pmu methods will be invoked
 *   via an inter process interrupt on that core. To avoid invasive changes
 *   our userspace opened OA perf events for a specific cpu. This was workable
 *   but it meant the majority of the OA driver ran in atomic context,
 *   including all OA report forwarding, which wasn't really necessary in our
 *   case and seemed to make our locking requirements somewhat complex as we
 *   handled the interaction with the rest of the i915 driver.
 */

#include <linux/anon_inodes.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/uuid.h>

#include "gem/i915_gem_context.h"
#include "gem/i915_gem_internal.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_regs.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_execlists_submission.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_clock_utils.h"
#include "gt/intel_gt_mcr.h"
#include "gt/intel_gt_regs.h"
#include "gt/intel_lrc.h"
#include "gt/intel_lrc_reg.h"
#include "gt/intel_rc6.h"
#include "gt/intel_ring.h"
#include "gt/uc/intel_guc_slpc.h"

#include "i915_drv.h"
#include "i915_file_private.h"
#include "i915_perf.h"
#include "i915_perf_oa_regs.h"
#include "i915_reg.h"

/* HW requires this to be a power of two, between 128k and 16M, though driver
 * is currently generally designed assuming the largest 16M size is used such
 * that the overflow cases are unlikely in normal operation.
 */
#define OA_BUFFER_SIZE		SZ_16M

#define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))

/**
 * DOC: OA Tail Pointer Race
 *
 * There's a HW race condition between OA unit tail pointer register updates and
 * writes to memory whereby the tail pointer can sometimes get ahead of what's
 * been written out to the OA buffer so far (in terms of what's visible to the
 * CPU).
 *
 * Although this can be observed explicitly while copying reports to userspace
 * by checking for a zeroed report-id field in tail reports, we want to account
 * for this earlier, as part of oa_buffer_check_unlocked(), to avoid lots of
 * redundant read() attempts.
 *
 * We work around this issue in oa_buffer_check_unlocked() by reading the
 * reports in the OA buffer, starting from the tail reported by the HW until we
 * find a report with its first 2 dwords not 0, meaning its previous report is
 * completely in memory and ready to be read. Those dwords are also set to 0
 * once read and the whole buffer is cleared upon OA buffer initialization. The
 * first dword is the reason for this report while the second is the timestamp,
 * making the chances of having those 2 fields at 0 fairly unlikely. A more
 * detailed explanation is available in oa_buffer_check_unlocked().
 *
 * Most of the implementation details for this workaround are in
 * oa_buffer_check_unlocked() and _append_oa_reports().
 *
 * Note for posterity: previously the driver used to define an effective tail
 * pointer that lagged the real pointer by a 'tail margin' measured in bytes
 * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
 * This was flawed considering that the OA unit may also automatically generate
 * non-periodic reports (such as on context switch) or the OA unit may be
 * enabled without any periodic sampling.
 */
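/*
 * To illustrate the circular buffer arithmetic used by the workaround above
 * (a worked example, not taken from the PRM): OA_TAKEN() computes the number
 * of bytes between two buffer offsets, accounting for wrap-around. With the
 * 16M buffer, a head of (OA_BUFFER_SIZE - 64) and a tail of 192 gives
 * (192 - (OA_BUFFER_SIZE - 64)) & (OA_BUFFER_SIZE - 1) == 256 bytes of
 * landed reports available to read.
 */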
#define OA_TAIL_MARGIN_NSEC	100000ULL
#define INVALID_TAIL_PTR	0xffffffff

/* The default frequency for checking whether the OA unit has written new
 * reports to the circular OA buffer...
 */
#define DEFAULT_POLL_FREQUENCY_HZ 200
#define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)

/* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
static u32 i915_perf_stream_paranoid = true;

/* The maximum exponent the hardware accepts is 63 (essentially it selects one
 * of the 64bit timestamp bits to trigger reports from) but there's currently
 * no known use case for sampling as infrequently as once per 47 thousand years.
 *
 * Since the timestamps included in OA reports are only 32bits it seems
 * reasonable to limit the OA exponent where it's still possible to account for
 * overflow in OA report timestamps.
 */
#define OA_EXPONENT_MAX 31

#define INVALID_CTX_ID 0xffffffff

/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
#define OAREPORT_REASON_MASK           0x3f
#define OAREPORT_REASON_MASK_EXTENDED  0x7f
#define OAREPORT_REASON_SHIFT          19
#define OAREPORT_REASON_TIMER          (1<<0)
#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
#define OAREPORT_REASON_CLK_RATIO      (1<<5)

#define HAS_MI_SET_PREDICATE(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))

/* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
 *
 * The highest sampling frequency we can theoretically program the OA unit
 * with is always half the timestamp frequency: e.g. 6.25MHz for Haswell.
 *
 * Initialized just before we register the sysctl parameter.
 */
static int oa_sample_rate_hard_limit;

/* Theoretically we can program the OA unit to sample every 160ns but don't
 * allow that by default unless root...
 *
 * The default threshold of 100000Hz is based on perf's similar
 * kernel.perf_event_max_sample_rate sysctl parameter.
 */
static u32 i915_oa_max_sample_rate = 100000;

/* XXX: beware if future OA HW adds new report formats that the current
 * code assumes all reports have a power-of-two size and ~(size - 1) can
 * be used as a mask to align the OA tail pointer.
 */
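/*
 * Each entry below pairs a HW format select value with the report size in
 * bytes; the OAM entries additionally carry a report type and header width.
 * Note (an observation, not from the PRM) that the 192 byte OAM format is
 * not a power of two, which is one reason the report copying paths have to
 * cope with partially landed and wrapped reports.
 */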
static const struct i915_oa_format oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A13]		    = { 0, 64 },
	[I915_OA_FORMAT_A29]		    = { 1, 128 },
	[I915_OA_FORMAT_A13_B8_C8]	    = { 2, 128 },
	/* A29_B8_C8 Disallowed as 192 bytes doesn't factor into buffer size */
	[I915_OA_FORMAT_B4_C8]		    = { 4, 64 },
	[I915_OA_FORMAT_A45_B8_C8]	    = { 5, 256 },
	[I915_OA_FORMAT_B4_C8_A16]	    = { 6, 128 },
	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
	[I915_OA_FORMAT_A12]		    = { 0, 64 },
	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
	[I915_OAR_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
	[I915_OA_FORMAT_A24u40_A14u32_B8_C8] = { 5, 256 },
	[I915_OAM_FORMAT_MPEC8u64_B8_C8]    = { 1, 192, TYPE_OAM, HDR_64_BIT },
	[I915_OAM_FORMAT_MPEC8u32_B8_C8]    = { 2, 128, TYPE_OAM, HDR_64_BIT },
};

static const u32 mtl_oa_base[] = {
	[PERF_GROUP_OAM_SAMEDIA_0] = 0x393000,
};

#define SAMPLE_OA_REPORT      (1<<0)

/**
 * struct perf_open_properties - for validated properties given to open a stream
 * @sample_flags: `DRM_I915_PERF_PROP_SAMPLE_*` properties are tracked as flags
 * @single_context: Whether a single or all gpu contexts should be monitored
 * @hold_preemption: Whether the preemption is disabled for the filtered
 *                   context
 * @ctx_handle: A gem ctx handle for use with @single_context
 * @metrics_set: An ID for an OA unit metric set advertised via sysfs
 * @oa_format: An OA unit HW report format
 * @oa_periodic: Whether to enable periodic OA unit sampling
 * @oa_period_exponent: The OA unit sampling period is derived from this
 * @engine: The engine (typically rcs0) being monitored by the OA unit
 * @has_sseu: Whether @sseu was specified by userspace
 * @sseu: internal SSEU configuration computed either from the userspace
 *        specified configuration in the opening parameters or a default value
 *        (see get_default_sseu_config())
 * @poll_oa_period: The period in nanoseconds at which the CPU will check for OA
 *                  data availability
 *
 * As read_properties_unlocked() enumerates and validates the properties given
 * to open a stream of metrics the configuration is built up in the structure
 * which starts out zero initialized.
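 *
 * Note: %SAMPLE_OA_REPORT (set via the `DRM_I915_PERF_PROP_SAMPLE_OA`
 * property) is currently the only sample flag defined above for
 * @sample_flags.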
 */
struct perf_open_properties {
	u32 sample_flags;

	u64 single_context:1;
	u64 hold_preemption:1;
	u64 ctx_handle;

	/* OA sampling state */
	int metrics_set;
	int oa_format;
	bool oa_periodic;
	int oa_period_exponent;

	struct intel_engine_cs *engine;

	bool has_sseu;
	struct intel_sseu sseu;

	u64 poll_oa_period;
};

struct i915_oa_config_bo {
	struct llist_node node;

	struct i915_oa_config *oa_config;
	struct i915_vma *vma;
};

static struct ctl_table_header *sysctl_header;

static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer);

void i915_oa_config_release(struct kref *ref)
{
	struct i915_oa_config *oa_config =
		container_of(ref, typeof(*oa_config), ref);

	kfree(oa_config->flex_regs);
	kfree(oa_config->b_counter_regs);
	kfree(oa_config->mux_regs);

	kfree_rcu(oa_config, rcu);
}

struct i915_oa_config *
i915_perf_get_oa_config(struct i915_perf *perf, int metrics_set)
{
	struct i915_oa_config *oa_config;

	rcu_read_lock();
	oa_config = idr_find(&perf->metrics_idr, metrics_set);
	if (oa_config)
		oa_config = i915_oa_config_get(oa_config);
	rcu_read_unlock();

	return oa_config;
}

static void free_oa_config_bo(struct i915_oa_config_bo *oa_bo)
{
	i915_oa_config_put(oa_bo->oa_config);
	i915_vma_put(oa_bo->vma);
	kfree(oa_bo);
}

static inline const
struct i915_perf_regs *__oa_regs(struct i915_perf_stream *stream)
{
	return &stream->engine->oa_group->regs;
}

static u32 gen12_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, __oa_regs(stream)->oa_tail_ptr) &
	       GEN12_OAG_OATAILPTR_MASK;
}

static u32 gen8_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
}

static u32 gen7_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
}

#define oa_report_header_64bit(__s) \
	((__s)->oa_buffer.format->header == HDR_64_BIT)

static u64 oa_report_id(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ? *(u64 *)report : *(u32 *)report;
}

static u64 oa_report_reason(struct i915_perf_stream *stream, void *report)
{
	return (oa_report_id(stream, report) >> OAREPORT_REASON_SHIFT) &
	       (GRAPHICS_VER(stream->perf->i915) == 12 ?
		OAREPORT_REASON_MASK_EXTENDED :
		OAREPORT_REASON_MASK);
}

static void oa_report_id_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)report = 0;
	else
		*report = 0;
}

static bool oa_report_ctx_invalid(struct i915_perf_stream *stream, void *report)
{
	return !(oa_report_id(stream, report) &
		 stream->perf->gen8_valid_ctx_bit) &&
	       GRAPHICS_VER(stream->perf->i915) <= 11;
}

static u64 oa_timestamp(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ?
	       *((u64 *)report + 1) :
	       *((u32 *)report + 1);
}

static void oa_timestamp_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)&report[2] = 0;
	else
		report[1] = 0;
}

static u32 oa_context_id(struct i915_perf_stream *stream, u32 *report)
{
	u32 ctx_id = oa_report_header_64bit(stream) ? report[4] : report[2];

	return ctx_id & stream->specific_ctx_id_mask;
}

static void oa_context_id_squash(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		report[4] = INVALID_CTX_ID;
	else
		report[2] = INVALID_CTX_ID;
}

/**
 * oa_buffer_check_unlocked - check for data and update tail ptr state
 * @stream: i915 stream instance
 *
 * This is either called via fops (for blocking reads in user ctx) or the poll
 * check hrtimer (atomic ctx) to check the OA buffer tail pointer and check
 * if there is data available for userspace to read.
 *
 * This function is central to providing a workaround for the OA unit tail
 * pointer having a race with respect to what data is visible to the CPU.
 * It is responsible for reading tail pointers from the hardware and giving
 * the pointers time to 'age' before they are made available for reading.
 * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
 *
 * Besides returning true when there is data available to read() this function
 * also updates the tail, aging_tail and aging_timestamp in the oa_buffer
 * object.
 *
 * Note: It's safe to read OA config state here unlocked, assuming that this is
 * only called while the stream is enabled, while the global OA configuration
 * can't be modified.
 *
 * Returns: %true if the OA buffer contains data, else %false
 */
static bool oa_buffer_check_unlocked(struct i915_perf_stream *stream)
{
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	int report_size = stream->oa_buffer.format->size;
	unsigned long flags;
	bool pollin;
	u32 hw_tail;
	u64 now;
	u32 partial_report_size;

	/* We have to consider the (unlikely) possibility that read() errors
	 * could result in an OA buffer reset which might reset the head and
	 * tail state.
	 */
	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	hw_tail = stream->perf->ops.oa_hw_tail_read(stream);

	/* The tail pointer increases in 64 byte increments, not in report_size
	 * steps. Also the report size may not be a power of 2. Compute the
	 * potentially partially landed report in the OA buffer.
	 */
	partial_report_size = OA_TAKEN(hw_tail, stream->oa_buffer.tail);
	partial_report_size %= report_size;

	/* Subtract partial amount off the tail */
	hw_tail = gtt_offset + OA_TAKEN(hw_tail, partial_report_size);

	now = ktime_get_mono_fast_ns();

	if (hw_tail == stream->oa_buffer.aging_tail &&
	    (now - stream->oa_buffer.aging_timestamp) > OA_TAIL_MARGIN_NSEC) {
		/* If the HW tail hasn't moved since the last check and the HW
		 * tail has been aging for long enough, declare it the new
		 * tail.
		 */
		stream->oa_buffer.tail = stream->oa_buffer.aging_tail;
	} else {
		u32 head, tail, aged_tail;

		/* NB: The head we observe here might effectively be a little
		 * out of date. If a read() is in progress, the head could be
		 * anywhere between this head and stream->oa_buffer.tail.
		 */
		head = stream->oa_buffer.head - gtt_offset;
		aged_tail = stream->oa_buffer.tail - gtt_offset;

		hw_tail -= gtt_offset;
		tail = hw_tail;

		/* Walk the stream backward until we find a report with report
		 * id and timestamp not at 0. Since the circular buffer pointers
		 * progress by increments of 64 bytes and that reports can be up
		 * to 256 bytes long, we can't tell whether a report has fully
		 * landed in memory before the report id and timestamp of the
		 * following report have effectively landed.
		 *
		 * This is assuming that the writes of the OA unit land in
		 * memory in the order they were written to.
		 * If not : (╯°□°)╯︵ ┻━┻
		 */
		while (OA_TAKEN(tail, aged_tail) >= report_size) {
			void *report = stream->oa_buffer.vaddr + tail;

			if (oa_report_id(stream, report) ||
			    oa_timestamp(stream, report))
				break;

			tail = (tail - report_size) & (OA_BUFFER_SIZE - 1);
		}

		if (OA_TAKEN(hw_tail, tail) > report_size &&
		    __ratelimit(&stream->perf->tail_pointer_race))
			drm_notice(&stream->uncore->i915->drm,
				   "unlanded report(s) head=0x%x tail=0x%x hw_tail=0x%x\n",
				   head, tail, hw_tail);

		stream->oa_buffer.tail = gtt_offset + tail;
		stream->oa_buffer.aging_tail = gtt_offset + hw_tail;
		stream->oa_buffer.aging_timestamp = now;
	}

	pollin = OA_TAKEN(stream->oa_buffer.tail - gtt_offset,
			  stream->oa_buffer.head - gtt_offset) >= report_size;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	return pollin;
}

/**
 * append_oa_status - Appends a status record to a userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @type: The kind of status to report to userspace
 *
 * Writes a status record (such as `DRM_I915_PERF_RECORD_OA_REPORT_LOST`)
 * into the userspace read() buffer.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_status(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    enum drm_i915_perf_record_type type)
{
	struct drm_i915_perf_record_header header = { type, 0, sizeof(header) };

	if ((count - *offset) < header.size)
		return -ENOSPC;

	if (copy_to_user(buf + *offset, &header, sizeof(header)))
		return -EFAULT;

	(*offset) += header.size;

	return 0;
}

/**
 * append_oa_sample - Copies single OA report into userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @report: A single OA report to (optionally) include as part of the sample
 *
 * The contents of a sample are configured through `DRM_I915_PERF_PROP_SAMPLE_*`
 * properties when opening a stream, tracked as `stream->sample_flags`. This
 * function copies the requested components of a single sample to the given
 * read() @buf.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
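 *
 * Note: with report formats whose size doesn't evenly divide OA_BUFFER_SIZE,
 * a report may wrap past the end of the OA buffer, in which case it is
 * copied out to userspace in two chunks (see the report_size_partial
 * handling below).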
 */
static int append_oa_sample(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    const u8 *report)
{
	int report_size = stream->oa_buffer.format->size;
	struct drm_i915_perf_record_header header;
	int report_size_partial;
	u8 *oa_buf_end;

	header.type = DRM_I915_PERF_RECORD_SAMPLE;
	header.pad = 0;
	header.size = stream->sample_size;

	if ((count - *offset) < header.size)
		return -ENOSPC;

	buf += *offset;
	if (copy_to_user(buf, &header, sizeof(header)))
		return -EFAULT;
	buf += sizeof(header);

	oa_buf_end = stream->oa_buffer.vaddr + OA_BUFFER_SIZE;
	report_size_partial = oa_buf_end - report;

	if (report_size_partial < report_size) {
		if (copy_to_user(buf, report, report_size_partial))
			return -EFAULT;
		buf += report_size_partial;

		if (copy_to_user(buf, stream->oa_buffer.vaddr,
				 report_size - report_size_partial))
			return -EFAULT;
	} else if (copy_to_user(buf, report, report_size)) {
		return -EFAULT;
	}

	(*offset) += header.size;

	return 0;
}

/**
 * gen8_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen8_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/*
	 * NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/*
	 * An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size.
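	 *
	 * Note (an observation): unlike the Gen 7 variant below we don't also
	 * warn on pointers that aren't a multiple of the report size, since
	 * report sizes on newer platforms aren't necessarily a power of two
	 * and need not evenly divide the buffer size.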
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE ||
			  tail > OA_BUFFER_SIZE,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;


	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;
		u32 ctx_id;
		u64 reason;

		/*
		 * The reason field includes flags identifying what
		 * triggered this specific report (mostly timer
		 * triggered or e.g. due to a context switch).
		 *
		 * In MMIO triggered reports, some platforms do not set the
		 * reason bit in this field and it is valid to have a reason
		 * field of zero.
		 */
		reason = oa_report_reason(stream, report);
		ctx_id = oa_context_id(stream, report32);

		/*
		 * Squash whatever is in the CTX_ID field if it's marked as
		 * invalid to be sure we avoid false-positive, single-context
		 * filtering below...
		 *
		 * Note that we don't clear the valid_ctx_bit so userspace can
		 * understand that the ID has been squashed by the kernel.
		 */
		if (oa_report_ctx_invalid(stream, report)) {
			ctx_id = INVALID_CTX_ID;
			oa_context_id_squash(stream, report32);
		}

		/*
		 * NB: For Gen 8 the OA unit no longer supports clock gating
		 * off for a specific context and the kernel can't securely
		 * stop the counters from updating as system-wide / global
		 * values.
		 *
		 * Automatic reports now include a context ID so reports can be
		 * filtered on the cpu but it's not worth trying to
		 * automatically subtract/hide counter progress for other
		 * contexts while filtering since we can't stop userspace
		 * issuing MI_REPORT_PERF_COUNT commands which would still
		 * provide a side-band view of the real values.
		 *
		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
		 * to normalize counters for a single filtered context then it
		 * needs to be forwarded bookend context-switch reports so that
		 * it can track switches in between MI_REPORT_PERF_COUNT
		 * commands and can itself subtract/ignore the progress of
		 * counters associated with other contexts. Note that the
		 * hardware automatically triggers reports when switching to a
		 * new context which are tagged with the ID of the newly active
		 * context. To avoid the complexity (and likely fragility) of
		 * reading ahead while parsing reports to try and minimize
		 * forwarding redundant context switch reports (i.e. between
		 * other, unrelated contexts) we simply elect to forward them
		 * all.
		 *
		 * We don't rely solely on the reason field to identify context
		 * switches since it's not-uncommon for periodic samples to
		 * identify a switch before any 'context switch' report.
		 */
		if (!stream->ctx ||
		    stream->specific_ctx_id == ctx_id ||
		    stream->oa_buffer.last_ctx_id == stream->specific_ctx_id ||
		    reason & OAREPORT_REASON_CTX_SWITCH) {

			/*
			 * While filtering for a single context we avoid
			 * leaking the IDs of other contexts.
			 */
			if (stream->ctx &&
			    stream->specific_ctx_id != ctx_id) {
				oa_context_id_squash(stream, report32);
			}

			ret = append_oa_sample(stream, buf, count, offset,
					       report);
			if (ret)
				break;

			stream->oa_buffer.last_ctx_id = ctx_id;
		}

		if (is_power_of_2(report_size)) {
			/*
			 * Clear out the report id and timestamp as a means
			 * to detect unlanded reports.
			 */
			oa_report_id_clear(stream, report32);
			oa_timestamp_clear(stream, report32);
		} else {
			/* Zero out the entire report */
			memset(report32, 0, report_size);
		}
	}

	if (start_offset != *offset) {
		i915_reg_t oaheadptr;

		oaheadptr = GRAPHICS_VER(stream->perf->i915) == 12 ?
			    __oa_regs(stream)->oa_head_ptr :
			    GEN8_OAHEADPTR;

		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/*
		 * We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;
		intel_uncore_write(uncore, oaheadptr,
				   head & GEN12_OAG_OAHEADPTR_MASK);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}

/**
 * gen8_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks OA unit status registers and if necessary appends corresponding
 * status records for userspace (such as for a buffer full condition) and then
 * initiates appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * NB: some data may be successfully copied to the userspace buffer
 * even if an error is returned, and this is reflected in the
 * updated @offset.
 *
 * Returns: zero on success or a negative error code
 */
static int gen8_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus;
	i915_reg_t oastatus_reg;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus_reg = GRAPHICS_VER(stream->perf->i915) == 12 ?
		       __oa_regs(stream)->oa_status :
		       GEN8_OASTATUS;

	oastatus = intel_uncore_read(uncore, oastatus_reg);

	/*
	 * We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * Although theoretically we could handle this more gracefully
	 * sometimes, some Gens don't correctly suppress certain
	 * automatically triggered reports in this condition and so we
	 * have to assume that old reports are now being trampled
	 * over.
	 *
	 * Considering how we don't currently give userspace control
	 * over the OA buffer size and always configure a large 16MB
	 * buffer, then a buffer overflow does anyway likely indicate
	 * that something has gone quite badly wrong.
	 */
	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		/*
		 * Note: .oa_enable() is expected to re-init the oabuffer and
		 * reset GEN8_OASTATUS for us
		 */
		oastatus = intel_uncore_read(uncore, oastatus_reg);
	}

	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;

		intel_uncore_rmw(uncore, oastatus_reg,
				 GEN8_OASTATUS_COUNTER_OVERFLOW |
				 GEN8_OASTATUS_REPORT_LOST,
				 IS_GRAPHICS_VER(uncore->i915, 8, 11) ?
				 (GEN8_OASTATUS_HEAD_POINTER_WRAP |
				  GEN8_OASTATUS_TAIL_POINTER_WRAP) : 0);
	}

	return gen8_append_oa_reports(stream, buf, count, offset);
}

/**
 * gen7_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen7_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/* An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size (notably also
	 * all a power of two).
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE || head % report_size ||
			  tail > OA_BUFFER_SIZE || tail % report_size,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;


	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;

		/* All the report sizes factor neatly into the buffer
		 * size so we never expect to see a report split
		 * between the beginning and end of the buffer.
		 *
		 * Given the initial alignment check a misalignment
		 * here would imply a driver bug that would result
		 * in an overrun.
		 */
		if (drm_WARN_ON(&uncore->i915->drm,
				(OA_BUFFER_SIZE - head) < report_size)) {
			drm_err(&uncore->i915->drm,
				"Spurious OA head ptr: non-integral report offset\n");
			break;
		}

		/* The report-ID field for periodic samples includes
		 * some undocumented flags related to what triggered
		 * the report and is never expected to be zero so we
		 * can check that the report isn't invalid before
		 * copying it to userspace...
		 */
		if (report32[0] == 0) {
			if (__ratelimit(&stream->perf->spurious_report_rs))
				drm_notice(&uncore->i915->drm,
					   "Skipping spurious, invalid OA report\n");
			continue;
		}

		ret = append_oa_sample(stream, buf, count, offset, report);
		if (ret)
			break;

		/* Clear out the first 2 dwords as a means to detect unlanded
		 * reports.
		 */
		report32[0] = 0;
		report32[1] = 0;
	}

	if (start_offset != *offset) {
		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/* We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;

		intel_uncore_write(uncore, GEN7_OASTATUS2,
				   (head & GEN7_OASTATUS2_HEAD_MASK) |
				   GEN7_OASTATUS2_MEM_SELECT_GGTT);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}

/**
 * gen7_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks Gen 7 specific OA unit status registers and if necessary appends
 * corresponding status records for userspace (such as for a buffer full
 * condition) and then initiates appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int gen7_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	/* XXX: On Haswell we don't have a safe way to clear oastatus1
	 * bits while the OA unit is enabled (while the tail pointer
	 * may be updated asynchronously) so we ignore status bits
	 * that have already been reported to userspace.
	 */
	oastatus1 &= ~stream->perf->gen7_latched_oastatus1;

	/* We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * - The status can be interpreted to mean that the buffer is
	 *   currently full (with a higher precedence than OA_TAKEN()
	 *   which will start to report a near-empty buffer after an
	 *   overflow) but it's awkward that we can't clear the status
	 *   on Haswell, so without a reset we won't be able to catch
	 *   the state again.
	 *
	 * - Since it also implies the HW has started overwriting old
	 *   reports it may also affect our sanity checks for invalid
	 *   reports when copying to userspace that assume new reports
	 *   are being written to cleared memory.
	 *
	 * - In the future we may want to introduce a flight recorder
	 *   mode where the driver will automatically maintain a safe
	 *   guard band between head/tail, avoiding this overflow
	 *   condition, but we avoid the added driver complexity for
	 *   now.
	 */
	if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);
	}

	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;
		stream->perf->gen7_latched_oastatus1 |=
			GEN7_OASTATUS1_REPORT_LOST;
	}

	return gen7_append_oa_reports(stream, buf, count, offset);
}

/**
 * i915_oa_wait_unlocked - handles blocking IO until OA data available
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Called when userspace tries to read() from a blocking stream FD opened
 * for OA metrics. It waits until the hrtimer callback finds a non-empty
 * OA buffer and wakes us.
 *
 * Note: it's acceptable to have this return with some false positives
 * since any subsequent read handling will return -EAGAIN if there isn't
 * really data ready for userspace yet.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
{
	/* We would wait indefinitely if periodic sampling is not enabled */
	if (!stream->periodic)
		return -EIO;

	return wait_event_interruptible(stream->poll_wq,
					oa_buffer_check_unlocked(stream));
}

/**
 * i915_oa_poll_wait - call poll_wait() for an OA stream poll()
 * @stream: An i915-perf stream opened for OA metrics
 * @file: An i915 perf stream file
 * @wait: poll() state table
 *
 * For handling userspace polling on an i915 perf stream opened for OA metrics,
 * this starts a poll_wait with the wait queue that our hrtimer callback wakes
 * when it sees data ready to read in the circular OA buffer.
 */
static void i915_oa_poll_wait(struct i915_perf_stream *stream,
			      struct file *file,
			      poll_table *wait)
{
	poll_wait(file, &stream->poll_wq, wait);
}

/**
 * i915_oa_read - just calls through to &i915_oa_ops->read
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	return stream->perf->ops.read(stream, buf, count, offset);
}

static struct intel_context *oa_pin_context(struct i915_perf_stream *stream)
{
	struct i915_gem_engines_iter it;
	struct i915_gem_context *ctx = stream->ctx;
	struct intel_context *ce;
	struct i915_gem_ww_ctx ww;
	int err = -ENODEV;

	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
		if (ce->engine != stream->engine) /* first match! */
			continue;

		err = 0;
		break;
	}
	i915_gem_context_unlock_engines(ctx);

	if (err)
		return ERR_PTR(err);

	i915_gem_ww_ctx_init(&ww, true);
retry:
	/*
	 * As the ID is the gtt offset of the context's vma we
	 * pin the vma to ensure the ID remains fixed.
	 */
	err = intel_context_pin_ww(ce, &ww);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);

	if (err)
		return ERR_PTR(err);

	stream->pinned_ctx = ce;
	return stream->pinned_ctx;
}

static int
__store_reg_to_mem(struct i915_request *rq, i915_reg_t reg, u32 ggtt_offset)
{
	u32 *cs, cmd;

	cmd = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
	if (GRAPHICS_VER(rq->engine->i915) >= 8)
		cmd++;

	cs = intel_ring_begin(rq, 4);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	*cs++ = cmd;
	*cs++ = i915_mmio_reg_offset(reg);
	*cs++ = ggtt_offset;
	*cs++ = 0;

	intel_ring_advance(rq, cs);

	return 0;
}

static int
__read_reg(struct intel_context *ce, i915_reg_t reg, u32 ggtt_offset)
{
	struct i915_request *rq;
	int err;

	rq = i915_request_create(ce);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	i915_request_get(rq);

	err = __store_reg_to_mem(rq, reg, ggtt_offset);

	i915_request_add(rq);
	if (!err && i915_request_wait(rq, 0, HZ / 2) < 0)
		err = -ETIME;

	i915_request_put(rq);

	return err;
}

static int
gen12_guc_sw_ctx_id(struct intel_context *ce, u32 *ctx_id)
{
	struct i915_vma *scratch;
	u32 *val;
	int err;

	scratch = __vm_create_scratch_for_read_pinned(&ce->engine->gt->ggtt->vm, 4);
	if (IS_ERR(scratch))
		return PTR_ERR(scratch);

	err = i915_vma_sync(scratch);
	if (err)
		goto err_scratch;

	err = __read_reg(ce, RING_EXECLIST_STATUS_HI(ce->engine->mmio_base),
			 i915_ggtt_offset(scratch));
	if (err)
		goto err_scratch;

	val = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
	if (IS_ERR(val)) {
		err = PTR_ERR(val);
		goto err_scratch;
	}

	*ctx_id = *val;
	i915_gem_object_unpin_map(scratch->obj);

err_scratch:
	i915_vma_unpin_and_release(&scratch, 0);
	return err;
}

/*
 * For execlist mode of submission, pick an unused context id
 * 0 - (NUM_CONTEXT_TAG - 1) are used by other contexts
 * XXX_MAX_CONTEXT_HW_ID is used by idle context
 *
 * For GuC mode of submission read context id from the upper dword of the
 * EXECLIST_STATUS register. Note that we read this value only once and expect
 * that the value stays fixed for the entire OA use case. There are cases where
 * GuC KMD implementation may deregister a context to reuse its context id, but
 * we prevent that from happening to the OA context by pinning it.
 */
static int gen12_get_render_context_id(struct i915_perf_stream *stream)
{
	u32 ctx_id, mask;
	int ret;

	if (intel_engine_uses_guc(stream->engine)) {
		ret = gen12_guc_sw_ctx_id(stream->pinned_ctx, &ctx_id);
		if (ret)
			return ret;

		mask = ((1U << GEN12_GUC_SW_CTX_ID_WIDTH) - 1) <<
			(GEN12_GUC_SW_CTX_ID_SHIFT - 32);
	} else if (GRAPHICS_VER_FULL(stream->engine->i915) >= IP_VER(12, 50)) {
		ctx_id = (XEHP_MAX_CONTEXT_HW_ID - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << XEHP_SW_CTX_ID_WIDTH) - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);
	} else {
		ctx_id = (GEN12_MAX_CONTEXT_HW_ID - 1) <<
			 (GEN11_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) <<
			(GEN11_SW_CTX_ID_SHIFT - 32);
	}
	stream->specific_ctx_id = ctx_id & mask;
	stream->specific_ctx_id_mask = mask;

	return 0;
}

static bool oa_find_reg_in_lri(u32 *state, u32 reg, u32 *offset, u32 end)
{
	u32 idx = *offset;
	u32 len = min(MI_LRI_LEN(state[idx]) + idx, end);
	bool found = false;

	idx++;
	for (; idx < len; idx += 2) {
		if (state[idx] == reg) {
			found = true;
			break;
		}
	}

	*offset = idx;
	return found;
}

static u32 oa_context_image_offset(struct intel_context *ce, u32 reg)
{
	u32 offset, len = (ce->engine->context_size - PAGE_SIZE) / 4;
	u32 *state = ce->lrc_reg_state;

	if (drm_WARN_ON(&ce->engine->i915->drm, !state))
		return U32_MAX;

	for (offset = 0; offset < len; ) {
		if (IS_MI_LRI_CMD(state[offset])) {
			/*
			 * We expect reg-value pairs in MI_LRI command, so
			 * MI_LRI_LEN() should be even, if not, issue a warning.
			 */
			drm_WARN_ON(&ce->engine->i915->drm,
				    MI_LRI_LEN(state[offset]) & 0x1);

			if (oa_find_reg_in_lri(state, reg, &offset, len))
				break;
		} else {
			offset++;
		}
	}

	return offset < len ? offset : U32_MAX;
}

static int set_oa_ctx_ctrl_offset(struct intel_context *ce)
{
	i915_reg_t reg = GEN12_OACTXCONTROL(ce->engine->mmio_base);
	struct i915_perf *perf = &ce->engine->i915->perf;
	u32 offset = perf->ctx_oactxctrl_offset;

	/* Do this only once. Failure is stored as offset of U32_MAX */
	if (offset)
		goto exit;

	offset = oa_context_image_offset(ce, i915_mmio_reg_offset(reg));
	perf->ctx_oactxctrl_offset = offset;

	drm_dbg(&ce->engine->i915->drm,
		"%s oa ctx control at 0x%08x dword offset\n",
		ce->engine->name, offset);

exit:
	return offset && offset != U32_MAX ? 0 : -ENODEV;
}

static bool engine_supports_mi_query(struct intel_engine_cs *engine)
{
	return engine->class == RENDER_CLASS;
}

/**
 * oa_get_render_ctx_id - determine and hold ctx hw id
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Determine the render context hw id, and ensure it remains fixed for the
 * lifetime of the stream. This ensures that we don't have to worry about
 * updating the context ID in OACONTROL on the fly.
 *
 * Returns: zero on success or a negative error code
 */
static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
{
	struct intel_context *ce;
	int ret = 0;

	ce = oa_pin_context(stream);
	if (IS_ERR(ce))
		return PTR_ERR(ce);

	if (engine_supports_mi_query(stream->engine) &&
	    HAS_LOGICAL_RING_CONTEXTS(stream->perf->i915)) {
		/*
		 * We are enabling perf query here. If we don't find the context
		 * offset here, just return an error.
		 */
		ret = set_oa_ctx_ctrl_offset(ce);
		if (ret) {
			intel_context_unpin(ce);
			drm_err(&stream->perf->i915->drm,
				"Enabling perf query failed for %s\n",
				stream->engine->name);
			return ret;
		}
	}

	switch (GRAPHICS_VER(ce->engine->i915)) {
	case 7: {
		/*
		 * On Haswell we don't do any post processing of the reports
		 * and don't need to use the mask.
		 */
		stream->specific_ctx_id = i915_ggtt_offset(ce->state);
		stream->specific_ctx_id_mask = 0;
		break;
	}

	case 8:
	case 9:
		if (intel_engine_uses_guc(ce->engine)) {
			/*
			 * When using GuC, the context descriptor we write in
			 * i915 is read by GuC and rewritten before it's
			 * actually written into the hardware. The LRCA is
			 * what is put into the context id field of the
			 * context descriptor by GuC. Because it's aligned to
			 * a page, the lower 12bits are always at 0 and
			 * dropped by GuC. They won't be part of the context
			 * ID in the OA reports, so squash those lower bits.
			 */
			stream->specific_ctx_id = ce->lrc.lrca >> 12;

			/*
			 * GuC uses the top bit to signal proxy submission, so
			 * ignore that bit.
			 */
			stream->specific_ctx_id_mask =
				(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
		} else {
			stream->specific_ctx_id_mask =
				(1U << GEN8_CTX_ID_WIDTH) - 1;
			stream->specific_ctx_id = stream->specific_ctx_id_mask;
		}
		break;

	case 11:
	case 12:
		ret = gen12_get_render_context_id(stream);
		break;

	default:
		MISSING_CASE(GRAPHICS_VER(ce->engine->i915));
	}

	ce->tag = stream->specific_ctx_id;

	drm_dbg(&stream->perf->i915->drm,
		"filtering on ctx_id=0x%x ctx_id_mask=0x%x\n",
		stream->specific_ctx_id,
		stream->specific_ctx_id_mask);

	return ret;
}

/**
 * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_id releases hold
 * @stream: An i915-perf stream opened for OA metrics
 *
 * In case anything needed doing to ensure the context HW ID would remain valid
 * for the lifetime of the stream, then that can be undone here.
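 *
 * In practice this unpins the context pinned by oa_get_render_ctx_id() and
 * resets the stream's ctx id state.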
1616 */ 1617 static void oa_put_render_ctx_id(struct i915_perf_stream *stream) 1618 { 1619 struct intel_context *ce; 1620 1621 ce = fetch_and_zero(&stream->pinned_ctx); 1622 if (ce) { 1623 ce->tag = 0; /* recomputed on next submission after parking */ 1624 intel_context_unpin(ce); 1625 } 1626 1627 stream->specific_ctx_id = INVALID_CTX_ID; 1628 stream->specific_ctx_id_mask = 0; 1629 } 1630 1631 static void 1632 free_oa_buffer(struct i915_perf_stream *stream) 1633 { 1634 i915_vma_unpin_and_release(&stream->oa_buffer.vma, 1635 I915_VMA_RELEASE_MAP); 1636 1637 stream->oa_buffer.vaddr = NULL; 1638 } 1639 1640 static void 1641 free_oa_configs(struct i915_perf_stream *stream) 1642 { 1643 struct i915_oa_config_bo *oa_bo, *tmp; 1644 1645 i915_oa_config_put(stream->oa_config); 1646 llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node) 1647 free_oa_config_bo(oa_bo); 1648 } 1649 1650 static void 1651 free_noa_wait(struct i915_perf_stream *stream) 1652 { 1653 i915_vma_unpin_and_release(&stream->noa_wait, 0); 1654 } 1655 1656 static bool engine_supports_oa(const struct intel_engine_cs *engine) 1657 { 1658 return engine->oa_group; 1659 } 1660 1661 static bool engine_supports_oa_format(struct intel_engine_cs *engine, int type) 1662 { 1663 return engine->oa_group && engine->oa_group->type == type; 1664 } 1665 1666 static void i915_oa_stream_destroy(struct i915_perf_stream *stream) 1667 { 1668 struct i915_perf *perf = stream->perf; 1669 struct intel_gt *gt = stream->engine->gt; 1670 struct i915_perf_group *g = stream->engine->oa_group; 1671 1672 if (WARN_ON(stream != g->exclusive_stream)) 1673 return; 1674 1675 /* 1676 * Unset exclusive_stream first, it will be checked while disabling 1677 * the metric set on gen8+. 1678 * 1679 * See i915_oa_init_reg_state() and lrc_configure_all_contexts() 1680 */ 1681 WRITE_ONCE(g->exclusive_stream, NULL); 1682 perf->ops.disable_metric_set(stream); 1683 1684 free_oa_buffer(stream); 1685 1686 /* 1687 * Wa_16011777198:dg2: Unset the override of GUCRC mode to enable rc6. 1688 */ 1689 if (stream->override_gucrc) 1690 drm_WARN_ON(>->i915->drm, 1691 intel_guc_slpc_unset_gucrc_mode(>->uc.guc.slpc)); 1692 1693 intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); 1694 intel_engine_pm_put(stream->engine); 1695 1696 if (stream->ctx) 1697 oa_put_render_ctx_id(stream); 1698 1699 free_oa_configs(stream); 1700 free_noa_wait(stream); 1701 1702 if (perf->spurious_report_rs.missed) { 1703 drm_notice(>->i915->drm, 1704 "%d spurious OA report notices suppressed due to ratelimiting\n", 1705 perf->spurious_report_rs.missed); 1706 } 1707 } 1708 1709 static void gen7_init_oa_buffer(struct i915_perf_stream *stream) 1710 { 1711 struct intel_uncore *uncore = stream->uncore; 1712 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1713 unsigned long flags; 1714 1715 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1716 1717 /* Pre-DevBDW: OABUFFER must be set with counters off, 1718 * before OASTATUS1, but after OASTATUS2 1719 */ 1720 intel_uncore_write(uncore, GEN7_OASTATUS2, /* head */ 1721 gtt_offset | GEN7_OASTATUS2_MEM_SELECT_GGTT); 1722 stream->oa_buffer.head = gtt_offset; 1723 1724 intel_uncore_write(uncore, GEN7_OABUFFER, gtt_offset); 1725 1726 intel_uncore_write(uncore, GEN7_OASTATUS1, /* tail */ 1727 gtt_offset | OABUFFER_SIZE_16M); 1728 1729 /* Mark that we need updated tail pointers to read from... 
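 *
 * (A note on what this achieves, as a reading aid rather than anything
 * stated by the hardware docs: setting aging_tail to INVALID_TAIL_PTR
 * below forces the read side to re-derive a fresh tail from the
 * hardware instead of trusting a stale snapshot, and a freshly read
 * tail is then aged before being used, because report writes land in
 * memory asynchronously with respect to the tail register advancing.)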
*/ 1730 stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1731 stream->oa_buffer.tail = gtt_offset; 1732 1733 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1734 1735 /* On Haswell we have to track which OASTATUS1 flags we've 1736 * already seen since they can't be cleared while periodic 1737 * sampling is enabled. 1738 */ 1739 stream->perf->gen7_latched_oastatus1 = 0; 1740 1741 /* NB: although the OA buffer will initially be allocated 1742 * zeroed via shmfs (and so this memset is redundant when 1743 * first allocating), we may re-init the OA buffer, either 1744 * when re-enabling a stream or in error/reset paths. 1745 * 1746 * The reason we clear the buffer for each re-init is for the 1747 * sanity check in gen7_append_oa_reports() that looks at the 1748 * report-id field to make sure it's non-zero which relies on 1749 * the assumption that new reports are being written to zeroed 1750 * memory... 1751 */ 1752 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1753 } 1754 1755 static void gen8_init_oa_buffer(struct i915_perf_stream *stream) 1756 { 1757 struct intel_uncore *uncore = stream->uncore; 1758 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1759 unsigned long flags; 1760 1761 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1762 1763 intel_uncore_write(uncore, GEN8_OASTATUS, 0); 1764 intel_uncore_write(uncore, GEN8_OAHEADPTR, gtt_offset); 1765 stream->oa_buffer.head = gtt_offset; 1766 1767 intel_uncore_write(uncore, GEN8_OABUFFER_UDW, 0); 1768 1769 /* 1770 * PRM says: 1771 * 1772 * "This MMIO must be set before the OATAILPTR 1773 * register and after the OAHEADPTR register. This is 1774 * to enable proper functionality of the overflow 1775 * bit." 1776 */ 1777 intel_uncore_write(uncore, GEN8_OABUFFER, gtt_offset | 1778 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); 1779 intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK); 1780 1781 /* Mark that we need updated tail pointers to read from... */ 1782 stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1783 stream->oa_buffer.tail = gtt_offset; 1784 1785 /* 1786 * Reset state used to recognise context switches, affecting which 1787 * reports we will forward to userspace while filtering for a single 1788 * context. 1789 */ 1790 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; 1791 1792 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1793 1794 /* 1795 * NB: although the OA buffer will initially be allocated 1796 * zeroed via shmfs (and so this memset is redundant when 1797 * first allocating), we may re-init the OA buffer, either 1798 * when re-enabling a stream or in error/reset paths. 1799 * 1800 * The reason we clear the buffer for each re-init is for the 1801 * sanity check in gen8_append_oa_reports() that looks at the 1802 * reason field to make sure it's non-zero which relies on 1803 * the assumption that new reports are being written to zeroed 1804 * memory... 
1805 */ 1806 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1807 } 1808 1809 static void gen12_init_oa_buffer(struct i915_perf_stream *stream) 1810 { 1811 struct intel_uncore *uncore = stream->uncore; 1812 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1813 unsigned long flags; 1814 1815 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1816 1817 intel_uncore_write(uncore, __oa_regs(stream)->oa_status, 0); 1818 intel_uncore_write(uncore, __oa_regs(stream)->oa_head_ptr, 1819 gtt_offset & GEN12_OAG_OAHEADPTR_MASK); 1820 stream->oa_buffer.head = gtt_offset; 1821 1822 /* 1823 * PRM says: 1824 * 1825 * "This MMIO must be set before the OATAILPTR 1826 * register and after the OAHEADPTR register. This is 1827 * to enable proper functionality of the overflow 1828 * bit." 1829 */ 1830 intel_uncore_write(uncore, __oa_regs(stream)->oa_buffer, gtt_offset | 1831 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); 1832 intel_uncore_write(uncore, __oa_regs(stream)->oa_tail_ptr, 1833 gtt_offset & GEN12_OAG_OATAILPTR_MASK); 1834 1835 /* Mark that we need updated tail pointers to read from... */ 1836 stream->oa_buffer.aging_tail = INVALID_TAIL_PTR; 1837 stream->oa_buffer.tail = gtt_offset; 1838 1839 /* 1840 * Reset state used to recognise context switches, affecting which 1841 * reports we will forward to userspace while filtering for a single 1842 * context. 1843 */ 1844 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; 1845 1846 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1847 1848 /* 1849 * NB: although the OA buffer will initially be allocated 1850 * zeroed via shmfs (and so this memset is redundant when 1851 * first allocating), we may re-init the OA buffer, either 1852 * when re-enabling a stream or in error/reset paths. 1853 * 1854 * The reason we clear the buffer for each re-init is for the 1855 * sanity check in gen8_append_oa_reports() that looks at the 1856 * reason field to make sure it's non-zero which relies on 1857 * the assumption that new reports are being written to zeroed 1858 * memory... 1859 */ 1860 memset(stream->oa_buffer.vaddr, 0, 1861 stream->oa_buffer.vma->size); 1862 } 1863 1864 static int alloc_oa_buffer(struct i915_perf_stream *stream) 1865 { 1866 struct drm_i915_private *i915 = stream->perf->i915; 1867 struct intel_gt *gt = stream->engine->gt; 1868 struct drm_i915_gem_object *bo; 1869 struct i915_vma *vma; 1870 int ret; 1871 1872 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.vma)) 1873 return -ENODEV; 1874 1875 BUILD_BUG_ON_NOT_POWER_OF_2(OA_BUFFER_SIZE); 1876 BUILD_BUG_ON(OA_BUFFER_SIZE < SZ_128K || OA_BUFFER_SIZE > SZ_16M); 1877 1878 bo = i915_gem_object_create_shmem(stream->perf->i915, OA_BUFFER_SIZE); 1879 if (IS_ERR(bo)) { 1880 drm_err(&i915->drm, "Failed to allocate OA buffer\n"); 1881 return PTR_ERR(bo); 1882 } 1883 1884 i915_gem_object_set_cache_coherency(bo, I915_CACHE_LLC); 1885 1886 /* PreHSW required 512K alignment, HSW requires 16M */ 1887 vma = i915_vma_instance(bo, >->ggtt->vm, NULL); 1888 if (IS_ERR(vma)) { 1889 ret = PTR_ERR(vma); 1890 goto err_unref; 1891 } 1892 1893 /* 1894 * PreHSW required 512K alignment. 1895 * HSW and onwards, align to requested size of OA buffer. 
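 *
 * (A sketch of why the size-aligned pin is convenient, assuming the
 * power-of-two OA_BUFFER_SIZE enforced by the BUILD_BUG_ON above: with
 * the buffer base aligned to its own size, a hardware head/tail GGTT
 * address reduces to an offset within the buffer with a single mask,
 *
 *	u32 offset_in_buf = (hw_tail - gtt_offset) & (OA_BUFFER_SIZE - 1);
 *
 * rather than any divide/modulo arithmetic.)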
1896 */ 1897 ret = i915_vma_pin(vma, 0, SZ_16M, PIN_GLOBAL | PIN_HIGH); 1898 if (ret) { 1899 drm_err(>->i915->drm, "Failed to pin OA buffer %d\n", ret); 1900 goto err_unref; 1901 } 1902 1903 stream->oa_buffer.vma = vma; 1904 1905 stream->oa_buffer.vaddr = 1906 i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB); 1907 if (IS_ERR(stream->oa_buffer.vaddr)) { 1908 ret = PTR_ERR(stream->oa_buffer.vaddr); 1909 goto err_unpin; 1910 } 1911 1912 return 0; 1913 1914 err_unpin: 1915 __i915_vma_unpin(vma); 1916 1917 err_unref: 1918 i915_gem_object_put(bo); 1919 1920 stream->oa_buffer.vaddr = NULL; 1921 stream->oa_buffer.vma = NULL; 1922 1923 return ret; 1924 } 1925 1926 static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs, 1927 bool save, i915_reg_t reg, u32 offset, 1928 u32 dword_count) 1929 { 1930 u32 cmd; 1931 u32 d; 1932 1933 cmd = save ? MI_STORE_REGISTER_MEM : MI_LOAD_REGISTER_MEM; 1934 cmd |= MI_SRM_LRM_GLOBAL_GTT; 1935 if (GRAPHICS_VER(stream->perf->i915) >= 8) 1936 cmd++; 1937 1938 for (d = 0; d < dword_count; d++) { 1939 *cs++ = cmd; 1940 *cs++ = i915_mmio_reg_offset(reg) + 4 * d; 1941 *cs++ = i915_ggtt_offset(stream->noa_wait) + offset + 4 * d; 1942 *cs++ = 0; 1943 } 1944 1945 return cs; 1946 } 1947 1948 static int alloc_noa_wait(struct i915_perf_stream *stream) 1949 { 1950 struct drm_i915_private *i915 = stream->perf->i915; 1951 struct intel_gt *gt = stream->engine->gt; 1952 struct drm_i915_gem_object *bo; 1953 struct i915_vma *vma; 1954 const u64 delay_ticks = 0xffffffffffffffff - 1955 intel_gt_ns_to_clock_interval(to_gt(stream->perf->i915), 1956 atomic64_read(&stream->perf->noa_programming_delay)); 1957 const u32 base = stream->engine->mmio_base; 1958 #define CS_GPR(x) GEN8_RING_CS_GPR(base, x) 1959 u32 *batch, *ts0, *cs, *jump; 1960 struct i915_gem_ww_ctx ww; 1961 int ret, i; 1962 enum { 1963 START_TS, 1964 NOW_TS, 1965 DELTA_TS, 1966 JUMP_PREDICATE, 1967 DELTA_TARGET, 1968 N_CS_GPR 1969 }; 1970 i915_reg_t mi_predicate_result = HAS_MI_SET_PREDICATE(i915) ? 1971 MI_PREDICATE_RESULT_2_ENGINE(base) : 1972 MI_PREDICATE_RESULT_1(RENDER_RING_BASE); 1973 1974 /* 1975 * gt->scratch was being used to save/restore the GPR registers, but on 1976 * MTL the scratch uses stolen lmem. An MI_SRM to this memory region 1977 * causes an engine hang. Instead allocate an additional page here to 1978 * save/restore GPR registers 1979 */ 1980 bo = i915_gem_object_create_internal(i915, 8192); 1981 if (IS_ERR(bo)) { 1982 drm_err(&i915->drm, 1983 "Failed to allocate NOA wait batchbuffer\n"); 1984 return PTR_ERR(bo); 1985 } 1986 1987 i915_gem_ww_ctx_init(&ww, true); 1988 retry: 1989 ret = i915_gem_object_lock(bo, &ww); 1990 if (ret) 1991 goto out_ww; 1992 1993 /* 1994 * We pin in GGTT because we jump into this buffer now because 1995 * multiple OA config BOs will have a jump to this address and it 1996 * needs to be fixed during the lifetime of the i915/perf stream. 1997 */ 1998 vma = i915_vma_instance(bo, >->ggtt->vm, NULL); 1999 if (IS_ERR(vma)) { 2000 ret = PTR_ERR(vma); 2001 goto out_ww; 2002 } 2003 2004 ret = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH); 2005 if (ret) 2006 goto out_ww; 2007 2008 batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB); 2009 if (IS_ERR(batch)) { 2010 ret = PTR_ERR(batch); 2011 goto err_unpin; 2012 } 2013 2014 stream->noa_wait = vma; 2015 2016 #define GPR_SAVE_OFFSET 4096 2017 #define PREDICATE_SAVE_OFFSET 4160 2018 2019 /* Save registers. 
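 *
 * (Layout sketch, illustrating the offsets chosen above rather than
 * anything mandated by the hardware: page 0 of the 8KiB BO holds the
 * batch itself, page 1 is scratch space -- each 64b CS_GPR(i) is saved
 * as two dwords at GPR_SAVE_OFFSET + 8 * i, and the predicate register
 * is saved at PREDICATE_SAVE_OFFSET.)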
 */
	for (i = 0; i < N_CS_GPR; i++)
		cs = save_restore_register(
			stream, cs, true /* save */, CS_GPR(i),
			GPR_SAVE_OFFSET + 8 * i, 2);
	cs = save_restore_register(
		stream, cs, true /* save */, mi_predicate_result,
		PREDICATE_SAVE_OFFSET, 1);

	/* First timestamp snapshot location. */
	ts0 = cs;

	/*
	 * Initial snapshot of the timestamp register to implement the wait.
	 * We work with 32b values, so clear out the top 32b bits of the
	 * register because the ALU works with 64b values.
	 */
	*cs++ = MI_LOAD_REGISTER_IMM(1);
	*cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)) + 4;
	*cs++ = 0;
	*cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
	*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base));
	*cs++ = i915_mmio_reg_offset(CS_GPR(START_TS));

	/*
	 * This is the location we're going to jump back into until the
	 * required amount of time has passed.
	 */
	jump = cs;

	/*
	 * Take another snapshot of the timestamp register. Take care to clear
	 * up the top 32bits of CS_GPR(1) as we're using it for other
	 * operations below.
	 */
	*cs++ = MI_LOAD_REGISTER_IMM(1);
	*cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)) + 4;
	*cs++ = 0;
	*cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
	*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base));
	*cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS));

	/*
	 * Do a diff between the 2 timestamps and store the result back into
	 * CS_GPR(1).
	 */
	*cs++ = MI_MATH(5);
	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS));
	*cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS));
	*cs++ = MI_MATH_SUB;
	*cs++ = MI_MATH_STORE(MI_MATH_REG(DELTA_TS), MI_MATH_REG_ACCU);
	*cs++ = MI_MATH_STORE(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF);

	/*
	 * Transfer the carry flag (set to 1 if ts1 < ts0, meaning the
	 * timestamp has rolled over the 32bits) into the predicate register
	 * to be used for the predicated jump.
	 */
	*cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
	*cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE));
	*cs++ = i915_mmio_reg_offset(mi_predicate_result);

	if (HAS_MI_SET_PREDICATE(i915))
		*cs++ = MI_SET_PREDICATE | 1;

	/* Restart from the beginning if we had timestamps roll over. */
	*cs++ = (GRAPHICS_VER(i915) < 8 ?
		 MI_BATCH_BUFFER_START :
		 MI_BATCH_BUFFER_START_GEN8) |
		MI_BATCH_PREDICATE;
	*cs++ = i915_ggtt_offset(vma) + (ts0 - batch) * 4;
	*cs++ = 0;

	if (HAS_MI_SET_PREDICATE(i915))
		*cs++ = MI_SET_PREDICATE;

	/*
	 * Now add the diff between the two previous timestamps to:
	 *
	 *	((1 << 64) - 1) - delay (expressed in timestamp ticks)
	 *
	 * When the Carry Flag contains 1 this means the elapsed time is
	 * longer than the expected delay, and we can exit the wait loop.
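 *
 * Worked example with illustrative numbers only: for a requested delay
 * of 3 ticks, DELTA_TARGET is loaded below with
 * delay_ticks = (2^64 - 1) - 3 = 0xfffffffffffffffc. An elapsed
 * DELTA_TS of 2 makes the ADD produce 0xfffffffffffffffe with no carry
 * (CF = 0, keep looping), while an elapsed DELTA_TS of 4 wraps the ADD
 * to 0 with CF = 1, so the inverted value stored by MI_MATH_STOREINV
 * clears the predicate and lets execution fall out of the loop.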
2101 */ 2102 *cs++ = MI_LOAD_REGISTER_IMM(2); 2103 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)); 2104 *cs++ = lower_32_bits(delay_ticks); 2105 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)) + 4; 2106 *cs++ = upper_32_bits(delay_ticks); 2107 2108 *cs++ = MI_MATH(4); 2109 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(DELTA_TS)); 2110 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(DELTA_TARGET)); 2111 *cs++ = MI_MATH_ADD; 2112 *cs++ = MI_MATH_STOREINV(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF); 2113 2114 *cs++ = MI_ARB_CHECK; 2115 2116 /* 2117 * Transfer the result into the predicate register to be used for the 2118 * predicated jump. 2119 */ 2120 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); 2121 *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE)); 2122 *cs++ = i915_mmio_reg_offset(mi_predicate_result); 2123 2124 if (HAS_MI_SET_PREDICATE(i915)) 2125 *cs++ = MI_SET_PREDICATE | 1; 2126 2127 /* Predicate the jump. */ 2128 *cs++ = (GRAPHICS_VER(i915) < 8 ? 2129 MI_BATCH_BUFFER_START : 2130 MI_BATCH_BUFFER_START_GEN8) | 2131 MI_BATCH_PREDICATE; 2132 *cs++ = i915_ggtt_offset(vma) + (jump - batch) * 4; 2133 *cs++ = 0; 2134 2135 if (HAS_MI_SET_PREDICATE(i915)) 2136 *cs++ = MI_SET_PREDICATE; 2137 2138 /* Restore registers. */ 2139 for (i = 0; i < N_CS_GPR; i++) 2140 cs = save_restore_register( 2141 stream, cs, false /* restore */, CS_GPR(i), 2142 GPR_SAVE_OFFSET + 8 * i, 2); 2143 cs = save_restore_register( 2144 stream, cs, false /* restore */, mi_predicate_result, 2145 PREDICATE_SAVE_OFFSET, 1); 2146 2147 /* And return to the ring. */ 2148 *cs++ = MI_BATCH_BUFFER_END; 2149 2150 GEM_BUG_ON(cs - batch > PAGE_SIZE / sizeof(*batch)); 2151 2152 i915_gem_object_flush_map(bo); 2153 __i915_gem_object_release_map(bo); 2154 2155 goto out_ww; 2156 2157 err_unpin: 2158 i915_vma_unpin_and_release(&vma, 0); 2159 out_ww: 2160 if (ret == -EDEADLK) { 2161 ret = i915_gem_ww_ctx_backoff(&ww); 2162 if (!ret) 2163 goto retry; 2164 } 2165 i915_gem_ww_ctx_fini(&ww); 2166 if (ret) 2167 i915_gem_object_put(bo); 2168 return ret; 2169 } 2170 2171 static u32 *write_cs_mi_lri(u32 *cs, 2172 const struct i915_oa_reg *reg_data, 2173 u32 n_regs) 2174 { 2175 u32 i; 2176 2177 for (i = 0; i < n_regs; i++) { 2178 if ((i % MI_LOAD_REGISTER_IMM_MAX_REGS) == 0) { 2179 u32 n_lri = min_t(u32, 2180 n_regs - i, 2181 MI_LOAD_REGISTER_IMM_MAX_REGS); 2182 2183 *cs++ = MI_LOAD_REGISTER_IMM(n_lri); 2184 } 2185 *cs++ = i915_mmio_reg_offset(reg_data[i].addr); 2186 *cs++ = reg_data[i].value; 2187 } 2188 2189 return cs; 2190 } 2191 2192 static int num_lri_dwords(int num_regs) 2193 { 2194 int count = 0; 2195 2196 if (num_regs > 0) { 2197 count += DIV_ROUND_UP(num_regs, MI_LOAD_REGISTER_IMM_MAX_REGS); 2198 count += num_regs * 2; 2199 } 2200 2201 return count; 2202 } 2203 2204 static struct i915_oa_config_bo * 2205 alloc_oa_config_buffer(struct i915_perf_stream *stream, 2206 struct i915_oa_config *oa_config) 2207 { 2208 struct drm_i915_gem_object *obj; 2209 struct i915_oa_config_bo *oa_bo; 2210 struct i915_gem_ww_ctx ww; 2211 size_t config_length = 0; 2212 u32 *cs; 2213 int err; 2214 2215 oa_bo = kzalloc(sizeof(*oa_bo), GFP_KERNEL); 2216 if (!oa_bo) 2217 return ERR_PTR(-ENOMEM); 2218 2219 config_length += num_lri_dwords(oa_config->mux_regs_len); 2220 config_length += num_lri_dwords(oa_config->b_counter_regs_len); 2221 config_length += num_lri_dwords(oa_config->flex_regs_len); 2222 config_length += 3; /* MI_BATCH_BUFFER_START */ 2223 config_length = ALIGN(sizeof(u32) * config_length, I915_GTT_PAGE_SIZE); 2224 2225 obj = 
i915_gem_object_create_shmem(stream->perf->i915, config_length); 2226 if (IS_ERR(obj)) { 2227 err = PTR_ERR(obj); 2228 goto err_free; 2229 } 2230 2231 i915_gem_ww_ctx_init(&ww, true); 2232 retry: 2233 err = i915_gem_object_lock(obj, &ww); 2234 if (err) 2235 goto out_ww; 2236 2237 cs = i915_gem_object_pin_map(obj, I915_MAP_WB); 2238 if (IS_ERR(cs)) { 2239 err = PTR_ERR(cs); 2240 goto out_ww; 2241 } 2242 2243 cs = write_cs_mi_lri(cs, 2244 oa_config->mux_regs, 2245 oa_config->mux_regs_len); 2246 cs = write_cs_mi_lri(cs, 2247 oa_config->b_counter_regs, 2248 oa_config->b_counter_regs_len); 2249 cs = write_cs_mi_lri(cs, 2250 oa_config->flex_regs, 2251 oa_config->flex_regs_len); 2252 2253 /* Jump into the active wait. */ 2254 *cs++ = (GRAPHICS_VER(stream->perf->i915) < 8 ? 2255 MI_BATCH_BUFFER_START : 2256 MI_BATCH_BUFFER_START_GEN8); 2257 *cs++ = i915_ggtt_offset(stream->noa_wait); 2258 *cs++ = 0; 2259 2260 i915_gem_object_flush_map(obj); 2261 __i915_gem_object_release_map(obj); 2262 2263 oa_bo->vma = i915_vma_instance(obj, 2264 &stream->engine->gt->ggtt->vm, 2265 NULL); 2266 if (IS_ERR(oa_bo->vma)) { 2267 err = PTR_ERR(oa_bo->vma); 2268 goto out_ww; 2269 } 2270 2271 oa_bo->oa_config = i915_oa_config_get(oa_config); 2272 llist_add(&oa_bo->node, &stream->oa_config_bos); 2273 2274 out_ww: 2275 if (err == -EDEADLK) { 2276 err = i915_gem_ww_ctx_backoff(&ww); 2277 if (!err) 2278 goto retry; 2279 } 2280 i915_gem_ww_ctx_fini(&ww); 2281 2282 if (err) 2283 i915_gem_object_put(obj); 2284 err_free: 2285 if (err) { 2286 kfree(oa_bo); 2287 return ERR_PTR(err); 2288 } 2289 return oa_bo; 2290 } 2291 2292 static struct i915_vma * 2293 get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) 2294 { 2295 struct i915_oa_config_bo *oa_bo; 2296 2297 /* 2298 * Look for the buffer in the already allocated BOs attached 2299 * to the stream. 
2300 */ 2301 llist_for_each_entry(oa_bo, stream->oa_config_bos.first, node) { 2302 if (oa_bo->oa_config == oa_config && 2303 memcmp(oa_bo->oa_config->uuid, 2304 oa_config->uuid, 2305 sizeof(oa_config->uuid)) == 0) 2306 goto out; 2307 } 2308 2309 oa_bo = alloc_oa_config_buffer(stream, oa_config); 2310 if (IS_ERR(oa_bo)) 2311 return ERR_CAST(oa_bo); 2312 2313 out: 2314 return i915_vma_get(oa_bo->vma); 2315 } 2316 2317 static int 2318 emit_oa_config(struct i915_perf_stream *stream, 2319 struct i915_oa_config *oa_config, 2320 struct intel_context *ce, 2321 struct i915_active *active) 2322 { 2323 struct i915_request *rq; 2324 struct i915_vma *vma; 2325 struct i915_gem_ww_ctx ww; 2326 int err; 2327 2328 vma = get_oa_vma(stream, oa_config); 2329 if (IS_ERR(vma)) 2330 return PTR_ERR(vma); 2331 2332 i915_gem_ww_ctx_init(&ww, true); 2333 retry: 2334 err = i915_gem_object_lock(vma->obj, &ww); 2335 if (err) 2336 goto err; 2337 2338 err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH); 2339 if (err) 2340 goto err; 2341 2342 intel_engine_pm_get(ce->engine); 2343 rq = i915_request_create(ce); 2344 intel_engine_pm_put(ce->engine); 2345 if (IS_ERR(rq)) { 2346 err = PTR_ERR(rq); 2347 goto err_vma_unpin; 2348 } 2349 2350 if (!IS_ERR_OR_NULL(active)) { 2351 /* After all individual context modifications */ 2352 err = i915_request_await_active(rq, active, 2353 I915_ACTIVE_AWAIT_ACTIVE); 2354 if (err) 2355 goto err_add_request; 2356 2357 err = i915_active_add_request(active, rq); 2358 if (err) 2359 goto err_add_request; 2360 } 2361 2362 err = i915_vma_move_to_active(vma, rq, 0); 2363 if (err) 2364 goto err_add_request; 2365 2366 err = rq->engine->emit_bb_start(rq, 2367 i915_vma_offset(vma), 0, 2368 I915_DISPATCH_SECURE); 2369 if (err) 2370 goto err_add_request; 2371 2372 err_add_request: 2373 i915_request_add(rq); 2374 err_vma_unpin: 2375 i915_vma_unpin(vma); 2376 err: 2377 if (err == -EDEADLK) { 2378 err = i915_gem_ww_ctx_backoff(&ww); 2379 if (!err) 2380 goto retry; 2381 } 2382 2383 i915_gem_ww_ctx_fini(&ww); 2384 i915_vma_put(vma); 2385 return err; 2386 } 2387 2388 static struct intel_context *oa_context(struct i915_perf_stream *stream) 2389 { 2390 return stream->pinned_ctx ?: stream->engine->kernel_context; 2391 } 2392 2393 static int 2394 hsw_enable_metric_set(struct i915_perf_stream *stream, 2395 struct i915_active *active) 2396 { 2397 struct intel_uncore *uncore = stream->uncore; 2398 2399 /* 2400 * PRM: 2401 * 2402 * OA unit is using “crclk” for its functionality. When trunk 2403 * level clock gating takes place, OA clock would be gated, 2404 * unable to count the events from non-render clock domain. 2405 * Render clock gating must be disabled when OA is enabled to 2406 * count the events from non-render domain. Unit level clock 2407 * gating for RCS should also be disabled. 
2408 */ 2409 intel_uncore_rmw(uncore, GEN7_MISCCPCTL, 2410 GEN7_DOP_CLOCK_GATE_ENABLE, 0); 2411 intel_uncore_rmw(uncore, GEN6_UCGCTL1, 2412 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); 2413 2414 return emit_oa_config(stream, 2415 stream->oa_config, oa_context(stream), 2416 active); 2417 } 2418 2419 static void hsw_disable_metric_set(struct i915_perf_stream *stream) 2420 { 2421 struct intel_uncore *uncore = stream->uncore; 2422 2423 intel_uncore_rmw(uncore, GEN6_UCGCTL1, 2424 GEN6_CSUNIT_CLOCK_GATE_DISABLE, 0); 2425 intel_uncore_rmw(uncore, GEN7_MISCCPCTL, 2426 0, GEN7_DOP_CLOCK_GATE_ENABLE); 2427 2428 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); 2429 } 2430 2431 static u32 oa_config_flex_reg(const struct i915_oa_config *oa_config, 2432 i915_reg_t reg) 2433 { 2434 u32 mmio = i915_mmio_reg_offset(reg); 2435 int i; 2436 2437 /* 2438 * This arbitrary default will select the 'EU FPU0 Pipeline 2439 * Active' event. In the future it's anticipated that there 2440 * will be an explicit 'No Event' we can select, but not yet... 2441 */ 2442 if (!oa_config) 2443 return 0; 2444 2445 for (i = 0; i < oa_config->flex_regs_len; i++) { 2446 if (i915_mmio_reg_offset(oa_config->flex_regs[i].addr) == mmio) 2447 return oa_config->flex_regs[i].value; 2448 } 2449 2450 return 0; 2451 } 2452 /* 2453 * NB: It must always remain pointer safe to run this even if the OA unit 2454 * has been disabled. 2455 * 2456 * It's fine to put out-of-date values into these per-context registers 2457 * in the case that the OA unit has been disabled. 2458 */ 2459 static void 2460 gen8_update_reg_state_unlocked(const struct intel_context *ce, 2461 const struct i915_perf_stream *stream) 2462 { 2463 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset; 2464 u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; 2465 /* The MMIO offsets for Flex EU registers aren't contiguous */ 2466 static const i915_reg_t flex_regs[] = { 2467 EU_PERF_CNTL0, 2468 EU_PERF_CNTL1, 2469 EU_PERF_CNTL2, 2470 EU_PERF_CNTL3, 2471 EU_PERF_CNTL4, 2472 EU_PERF_CNTL5, 2473 EU_PERF_CNTL6, 2474 }; 2475 u32 *reg_state = ce->lrc_reg_state; 2476 int i; 2477 2478 reg_state[ctx_oactxctrl + 1] = 2479 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | 2480 (stream->periodic ? 
GEN8_OA_TIMER_ENABLE : 0) | 2481 GEN8_OA_COUNTER_RESUME; 2482 2483 for (i = 0; i < ARRAY_SIZE(flex_regs); i++) 2484 reg_state[ctx_flexeu0 + i * 2 + 1] = 2485 oa_config_flex_reg(stream->oa_config, flex_regs[i]); 2486 } 2487 2488 struct flex { 2489 i915_reg_t reg; 2490 u32 offset; 2491 u32 value; 2492 }; 2493 2494 static int 2495 gen8_store_flex(struct i915_request *rq, 2496 struct intel_context *ce, 2497 const struct flex *flex, unsigned int count) 2498 { 2499 u32 offset; 2500 u32 *cs; 2501 2502 cs = intel_ring_begin(rq, 4 * count); 2503 if (IS_ERR(cs)) 2504 return PTR_ERR(cs); 2505 2506 offset = i915_ggtt_offset(ce->state) + LRC_STATE_OFFSET; 2507 do { 2508 *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; 2509 *cs++ = offset + flex->offset * sizeof(u32); 2510 *cs++ = 0; 2511 *cs++ = flex->value; 2512 } while (flex++, --count); 2513 2514 intel_ring_advance(rq, cs); 2515 2516 return 0; 2517 } 2518 2519 static int 2520 gen8_load_flex(struct i915_request *rq, 2521 struct intel_context *ce, 2522 const struct flex *flex, unsigned int count) 2523 { 2524 u32 *cs; 2525 2526 GEM_BUG_ON(!count || count > 63); 2527 2528 cs = intel_ring_begin(rq, 2 * count + 2); 2529 if (IS_ERR(cs)) 2530 return PTR_ERR(cs); 2531 2532 *cs++ = MI_LOAD_REGISTER_IMM(count); 2533 do { 2534 *cs++ = i915_mmio_reg_offset(flex->reg); 2535 *cs++ = flex->value; 2536 } while (flex++, --count); 2537 *cs++ = MI_NOOP; 2538 2539 intel_ring_advance(rq, cs); 2540 2541 return 0; 2542 } 2543 2544 static int gen8_modify_context(struct intel_context *ce, 2545 const struct flex *flex, unsigned int count) 2546 { 2547 struct i915_request *rq; 2548 int err; 2549 2550 rq = intel_engine_create_kernel_request(ce->engine); 2551 if (IS_ERR(rq)) 2552 return PTR_ERR(rq); 2553 2554 /* Serialise with the remote context */ 2555 err = intel_context_prepare_remote_request(ce, rq); 2556 if (err == 0) 2557 err = gen8_store_flex(rq, ce, flex, count); 2558 2559 i915_request_add(rq); 2560 return err; 2561 } 2562 2563 static int 2564 gen8_modify_self(struct intel_context *ce, 2565 const struct flex *flex, unsigned int count, 2566 struct i915_active *active) 2567 { 2568 struct i915_request *rq; 2569 int err; 2570 2571 intel_engine_pm_get(ce->engine); 2572 rq = i915_request_create(ce); 2573 intel_engine_pm_put(ce->engine); 2574 if (IS_ERR(rq)) 2575 return PTR_ERR(rq); 2576 2577 if (!IS_ERR_OR_NULL(active)) { 2578 err = i915_active_add_request(active, rq); 2579 if (err) 2580 goto err_add_request; 2581 } 2582 2583 err = gen8_load_flex(rq, ce, flex, count); 2584 if (err) 2585 goto err_add_request; 2586 2587 err_add_request: 2588 i915_request_add(rq); 2589 return err; 2590 } 2591 2592 static int gen8_configure_context(struct i915_perf_stream *stream, 2593 struct i915_gem_context *ctx, 2594 struct flex *flex, unsigned int count) 2595 { 2596 struct i915_gem_engines_iter it; 2597 struct intel_context *ce; 2598 int err = 0; 2599 2600 for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { 2601 GEM_BUG_ON(ce == ce->engine->kernel_context); 2602 2603 if (ce->engine->class != RENDER_CLASS) 2604 continue; 2605 2606 /* Otherwise OA settings will be set upon first use */ 2607 if (!intel_context_pin_if_active(ce)) 2608 continue; 2609 2610 flex->value = intel_sseu_make_rpcs(ce->engine->gt, &ce->sseu); 2611 err = gen8_modify_context(ce, flex, count); 2612 2613 intel_context_unpin(ce); 2614 if (err) 2615 break; 2616 } 2617 i915_gem_context_unlock_engines(ctx); 2618 2619 return err; 2620 } 2621 2622 static int gen12_configure_oar_context(struct i915_perf_stream *stream, 2623 
struct i915_active *active) 2624 { 2625 int err; 2626 struct intel_context *ce = stream->pinned_ctx; 2627 u32 format = stream->oa_buffer.format->format; 2628 u32 offset = stream->perf->ctx_oactxctrl_offset; 2629 struct flex regs_context[] = { 2630 { 2631 GEN8_OACTXCONTROL, 2632 offset + 1, 2633 active ? GEN8_OA_COUNTER_RESUME : 0, 2634 }, 2635 }; 2636 /* Offsets in regs_lri are not used since this configuration is only 2637 * applied using LRI. Initialize the correct offsets for posterity. 2638 */ 2639 #define GEN12_OAR_OACONTROL_OFFSET 0x5B0 2640 struct flex regs_lri[] = { 2641 { 2642 GEN12_OAR_OACONTROL, 2643 GEN12_OAR_OACONTROL_OFFSET + 1, 2644 (format << GEN12_OAR_OACONTROL_COUNTER_FORMAT_SHIFT) | 2645 (active ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) 2646 }, 2647 { 2648 RING_CONTEXT_CONTROL(ce->engine->mmio_base), 2649 CTX_CONTEXT_CONTROL, 2650 _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE, 2651 active ? 2652 GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE : 2653 0) 2654 }, 2655 }; 2656 2657 /* Modify the context image of pinned context with regs_context */ 2658 err = intel_context_lock_pinned(ce); 2659 if (err) 2660 return err; 2661 2662 err = gen8_modify_context(ce, regs_context, 2663 ARRAY_SIZE(regs_context)); 2664 intel_context_unlock_pinned(ce); 2665 if (err) 2666 return err; 2667 2668 /* Apply regs_lri using LRI with pinned context */ 2669 return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), active); 2670 } 2671 2672 /* 2673 * Manages updating the per-context aspects of the OA stream 2674 * configuration across all contexts. 2675 * 2676 * The awkward consideration here is that OACTXCONTROL controls the 2677 * exponent for periodic sampling which is primarily used for system 2678 * wide profiling where we'd like a consistent sampling period even in 2679 * the face of context switches. 2680 * 2681 * Our approach of updating the register state context (as opposed to 2682 * say using a workaround batch buffer) ensures that the hardware 2683 * won't automatically reload an out-of-date timer exponent even 2684 * transiently before a WA BB could be parsed. 2685 * 2686 * This function needs to: 2687 * - Ensure the currently running context's per-context OA state is 2688 * updated 2689 * - Ensure that all existing contexts will have the correct per-context 2690 * OA state if they are scheduled for use. 2691 * - Ensure any new contexts will be initialized with the correct 2692 * per-context OA state. 2693 * 2694 * Note: it's only the RCS/Render context that has any OA state. 2695 * Note: the first flex register passed must always be R_PWR_CLK_STATE 2696 */ 2697 static int 2698 oa_configure_all_contexts(struct i915_perf_stream *stream, 2699 struct flex *regs, 2700 size_t num_regs, 2701 struct i915_active *active) 2702 { 2703 struct drm_i915_private *i915 = stream->perf->i915; 2704 struct intel_engine_cs *engine; 2705 struct intel_gt *gt = stream->engine->gt; 2706 struct i915_gem_context *ctx, *cn; 2707 int err; 2708 2709 lockdep_assert_held(>->perf.lock); 2710 2711 /* 2712 * The OA register config is setup through the context image. This image 2713 * might be written to by the GPU on context switch (in particular on 2714 * lite-restore). This means we can't safely update a context's image, 2715 * if this context is scheduled/submitted to run on the GPU. 2716 * 2717 * We could emit the OA register config through the batch buffer but 2718 * this might leave small interval of time where the OA unit is 2719 * configured at an invalid sampling period. 
2720 * 2721 * Note that since we emit all requests from a single ring, there 2722 * is still an implicit global barrier here that may cause a high 2723 * priority context to wait for an otherwise independent low priority 2724 * context. Contexts idle at the time of reconfiguration are not 2725 * trapped behind the barrier. 2726 */ 2727 spin_lock(&i915->gem.contexts.lock); 2728 list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) { 2729 if (!kref_get_unless_zero(&ctx->ref)) 2730 continue; 2731 2732 spin_unlock(&i915->gem.contexts.lock); 2733 2734 err = gen8_configure_context(stream, ctx, regs, num_regs); 2735 if (err) { 2736 i915_gem_context_put(ctx); 2737 return err; 2738 } 2739 2740 spin_lock(&i915->gem.contexts.lock); 2741 list_safe_reset_next(ctx, cn, link); 2742 i915_gem_context_put(ctx); 2743 } 2744 spin_unlock(&i915->gem.contexts.lock); 2745 2746 /* 2747 * After updating all other contexts, we need to modify ourselves. 2748 * If we don't modify the kernel_context, we do not get events while 2749 * idle. 2750 */ 2751 for_each_uabi_engine(engine, i915) { 2752 struct intel_context *ce = engine->kernel_context; 2753 2754 if (engine->class != RENDER_CLASS) 2755 continue; 2756 2757 regs[0].value = intel_sseu_make_rpcs(engine->gt, &ce->sseu); 2758 2759 err = gen8_modify_self(ce, regs, num_regs, active); 2760 if (err) 2761 return err; 2762 } 2763 2764 return 0; 2765 } 2766 2767 static int 2768 gen12_configure_all_contexts(struct i915_perf_stream *stream, 2769 const struct i915_oa_config *oa_config, 2770 struct i915_active *active) 2771 { 2772 struct flex regs[] = { 2773 { 2774 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE), 2775 CTX_R_PWR_CLK_STATE, 2776 }, 2777 }; 2778 2779 if (stream->engine->class != RENDER_CLASS) 2780 return 0; 2781 2782 return oa_configure_all_contexts(stream, 2783 regs, ARRAY_SIZE(regs), 2784 active); 2785 } 2786 2787 static int 2788 lrc_configure_all_contexts(struct i915_perf_stream *stream, 2789 const struct i915_oa_config *oa_config, 2790 struct i915_active *active) 2791 { 2792 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset; 2793 /* The MMIO offsets for Flex EU registers aren't contiguous */ 2794 const u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; 2795 #define ctx_flexeuN(N) (ctx_flexeu0 + 2 * (N) + 1) 2796 struct flex regs[] = { 2797 { 2798 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE), 2799 CTX_R_PWR_CLK_STATE, 2800 }, 2801 { 2802 GEN8_OACTXCONTROL, 2803 ctx_oactxctrl + 1, 2804 }, 2805 { EU_PERF_CNTL0, ctx_flexeuN(0) }, 2806 { EU_PERF_CNTL1, ctx_flexeuN(1) }, 2807 { EU_PERF_CNTL2, ctx_flexeuN(2) }, 2808 { EU_PERF_CNTL3, ctx_flexeuN(3) }, 2809 { EU_PERF_CNTL4, ctx_flexeuN(4) }, 2810 { EU_PERF_CNTL5, ctx_flexeuN(5) }, 2811 { EU_PERF_CNTL6, ctx_flexeuN(6) }, 2812 }; 2813 #undef ctx_flexeuN 2814 int i; 2815 2816 regs[1].value = 2817 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | 2818 (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) | 2819 GEN8_OA_COUNTER_RESUME; 2820 2821 for (i = 2; i < ARRAY_SIZE(regs); i++) 2822 regs[i].value = oa_config_flex_reg(oa_config, regs[i].reg); 2823 2824 return oa_configure_all_contexts(stream, 2825 regs, ARRAY_SIZE(regs), 2826 active); 2827 } 2828 2829 static int 2830 gen8_enable_metric_set(struct i915_perf_stream *stream, 2831 struct i915_active *active) 2832 { 2833 struct intel_uncore *uncore = stream->uncore; 2834 struct i915_oa_config *oa_config = stream->oa_config; 2835 int ret; 2836 2837 /* 2838 * We disable slice/unslice clock ratio change reports on SKL since 2839 * they are too noisy. 
The HW generates a lot of redundant reports
 * where the ratio hasn't really changed, causing a lot of redundant
 * work for processes and increasing the chances we'll hit buffer
 * overruns.
 *
 * Although we don't currently use the 'disable overrun' OABUFFER
 * feature, it's worth noting that clock ratio reports have to be
 * disabled before considering using that feature since the HW doesn't
 * correctly block these reports.
 *
 * Currently none of the high-level metrics we have depend on knowing
 * this ratio to normalize.
 *
 * Note: This register is not power context saved and restored, but
 * that's OK considering that we disable RC6 while the OA unit is
 * enabled.
 *
 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
 * be read back from automatically triggered reports, as part of the
 * RPT_ID field.
 */
	if (IS_GRAPHICS_VER(stream->perf->i915, 9, 11)) {
		intel_uncore_write(uncore, GEN8_OA_DEBUG,
				   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
						      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
	}

	/*
	 * Update all contexts prior to writing the mux configurations as we
	 * need to make sure all slices/subslices are ON before writing to NOA
	 * registers.
	 */
	ret = lrc_configure_all_contexts(stream, oa_config, active);
	if (ret)
		return ret;

	return emit_oa_config(stream,
			      stream->oa_config, oa_context(stream),
			      active);
}

static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream)
{
	return _MASKED_FIELD(GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS,
			     (stream->sample_flags & SAMPLE_OA_REPORT) ?
			     0 : GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS);
}

static int
gen12_enable_metric_set(struct i915_perf_stream *stream,
			struct i915_active *active)
{
	struct drm_i915_private *i915 = stream->perf->i915;
	struct intel_uncore *uncore = stream->uncore;
	struct i915_oa_config *oa_config = stream->oa_config;
	bool periodic = stream->periodic;
	u32 period_exponent = stream->period_exponent;
	u32 sqcnt1;
	int ret;

	/*
	 * Wa_1508761755:xehpsdv, dg2
	 * EU NOA signals behave incorrectly if EU clock gating is enabled.
	 * Disable thread stall DOP gating and EU DOP gating.
	 */
	if (IS_XEHPSDV(i915) || IS_DG2(i915)) {
		intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN,
					     _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE));
		intel_uncore_write(uncore, GEN7_ROW_CHICKEN2,
				   _MASKED_BIT_ENABLE(GEN12_DISABLE_DOP_GATING));
	}

	intel_uncore_write(uncore, __oa_regs(stream)->oa_debug,
			   /* Disable clk ratio reports, like previous Gens. */
			   _MASKED_BIT_ENABLE(GEN12_OAG_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
					      GEN12_OAG_OA_DEBUG_INCLUDE_CLK_RATIO) |
			   /*
			    * If the user didn't require OA reports, instruct
			    * the hardware not to emit ctx switch reports.
			    */
			   oag_report_ctx_switches(stream));

	intel_uncore_write(uncore, __oa_regs(stream)->oa_ctx_ctrl, periodic ?
			   (GEN12_OAG_OAGLBCTXCTRL_COUNTER_RESUME |
			    GEN12_OAG_OAGLBCTXCTRL_TIMER_ENABLE |
			    (period_exponent << GEN12_OAG_OAGLBCTXCTRL_TIMER_PERIOD_SHIFT))
			    : 0);

	/*
	 * Initialize Super Queue Internal Cnt Register.
	 * Set PMON Enable in order to collect valid metrics.
	 * Enable bytes per clock reporting in OA for XEHPSDV onward.
2931 */ 2932 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE | 2933 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0); 2934 2935 intel_uncore_rmw(uncore, GEN12_SQCNT1, 0, sqcnt1); 2936 2937 /* 2938 * Update all contexts prior writing the mux configurations as we need 2939 * to make sure all slices/subslices are ON before writing to NOA 2940 * registers. 2941 */ 2942 ret = gen12_configure_all_contexts(stream, oa_config, active); 2943 if (ret) 2944 return ret; 2945 2946 /* 2947 * For Gen12, performance counters are context 2948 * saved/restored. Only enable it for the context that 2949 * requested this. 2950 */ 2951 if (stream->ctx) { 2952 ret = gen12_configure_oar_context(stream, active); 2953 if (ret) 2954 return ret; 2955 } 2956 2957 return emit_oa_config(stream, 2958 stream->oa_config, oa_context(stream), 2959 active); 2960 } 2961 2962 static void gen8_disable_metric_set(struct i915_perf_stream *stream) 2963 { 2964 struct intel_uncore *uncore = stream->uncore; 2965 2966 /* Reset all contexts' slices/subslices configurations. */ 2967 lrc_configure_all_contexts(stream, NULL, NULL); 2968 2969 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); 2970 } 2971 2972 static void gen11_disable_metric_set(struct i915_perf_stream *stream) 2973 { 2974 struct intel_uncore *uncore = stream->uncore; 2975 2976 /* Reset all contexts' slices/subslices configurations. */ 2977 lrc_configure_all_contexts(stream, NULL, NULL); 2978 2979 /* Make sure we disable noa to save power. */ 2980 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); 2981 } 2982 2983 static void gen12_disable_metric_set(struct i915_perf_stream *stream) 2984 { 2985 struct intel_uncore *uncore = stream->uncore; 2986 struct drm_i915_private *i915 = stream->perf->i915; 2987 u32 sqcnt1; 2988 2989 /* 2990 * Wa_1508761755:xehpsdv, dg2 2991 * Enable thread stall DOP gating and EU DOP gating. 2992 */ 2993 if (IS_XEHPSDV(i915) || IS_DG2(i915)) { 2994 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN, 2995 _MASKED_BIT_DISABLE(STALL_DOP_GATING_DISABLE)); 2996 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2, 2997 _MASKED_BIT_DISABLE(GEN12_DISABLE_DOP_GATING)); 2998 } 2999 3000 /* Reset all contexts' slices/subslices configurations. */ 3001 gen12_configure_all_contexts(stream, NULL, NULL); 3002 3003 /* disable the context save/restore or OAR counters */ 3004 if (stream->ctx) 3005 gen12_configure_oar_context(stream, NULL); 3006 3007 /* Make sure we disable noa to save power. */ 3008 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); 3009 3010 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE | 3011 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0); 3012 3013 /* Reset PMON Enable to save power. */ 3014 intel_uncore_rmw(uncore, GEN12_SQCNT1, sqcnt1, 0); 3015 } 3016 3017 static void gen7_oa_enable(struct i915_perf_stream *stream) 3018 { 3019 struct intel_uncore *uncore = stream->uncore; 3020 struct i915_gem_context *ctx = stream->ctx; 3021 u32 ctx_id = stream->specific_ctx_id; 3022 bool periodic = stream->periodic; 3023 u32 period_exponent = stream->period_exponent; 3024 u32 report_format = stream->oa_buffer.format->format; 3025 3026 /* 3027 * Reset buf pointers so we don't forward reports from before now. 3028 * 3029 * Think carefully if considering trying to avoid this, since it 3030 * also ensures status flags and the buffer itself are cleared 3031 * in error paths, and we have checks for invalid reports based 3032 * on the assumption that certain fields are written to zeroed 3033 * memory which this helps maintains. 
3034 */ 3035 gen7_init_oa_buffer(stream); 3036 3037 intel_uncore_write(uncore, GEN7_OACONTROL, 3038 (ctx_id & GEN7_OACONTROL_CTX_MASK) | 3039 (period_exponent << 3040 GEN7_OACONTROL_TIMER_PERIOD_SHIFT) | 3041 (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) | 3042 (report_format << GEN7_OACONTROL_FORMAT_SHIFT) | 3043 (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) | 3044 GEN7_OACONTROL_ENABLE); 3045 } 3046 3047 static void gen8_oa_enable(struct i915_perf_stream *stream) 3048 { 3049 struct intel_uncore *uncore = stream->uncore; 3050 u32 report_format = stream->oa_buffer.format->format; 3051 3052 /* 3053 * Reset buf pointers so we don't forward reports from before now. 3054 * 3055 * Think carefully if considering trying to avoid this, since it 3056 * also ensures status flags and the buffer itself are cleared 3057 * in error paths, and we have checks for invalid reports based 3058 * on the assumption that certain fields are written to zeroed 3059 * memory which this helps maintains. 3060 */ 3061 gen8_init_oa_buffer(stream); 3062 3063 /* 3064 * Note: we don't rely on the hardware to perform single context 3065 * filtering and instead filter on the cpu based on the context-id 3066 * field of reports 3067 */ 3068 intel_uncore_write(uncore, GEN8_OACONTROL, 3069 (report_format << GEN8_OA_REPORT_FORMAT_SHIFT) | 3070 GEN8_OA_COUNTER_ENABLE); 3071 } 3072 3073 static void gen12_oa_enable(struct i915_perf_stream *stream) 3074 { 3075 const struct i915_perf_regs *regs; 3076 u32 val; 3077 3078 /* 3079 * If we don't want OA reports from the OA buffer, then we don't even 3080 * need to program the OAG unit. 3081 */ 3082 if (!(stream->sample_flags & SAMPLE_OA_REPORT)) 3083 return; 3084 3085 gen12_init_oa_buffer(stream); 3086 3087 regs = __oa_regs(stream); 3088 val = (stream->oa_buffer.format->format << regs->oa_ctrl_counter_format_shift) | 3089 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE; 3090 3091 intel_uncore_write(stream->uncore, regs->oa_ctrl, val); 3092 } 3093 3094 /** 3095 * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream 3096 * @stream: An i915 perf stream opened for OA metrics 3097 * 3098 * [Re]enables hardware periodic sampling according to the period configured 3099 * when opening the stream. This also starts a hrtimer that will periodically 3100 * check for data in the circular OA buffer for notifying userspace (e.g. 3101 * during a read() or poll()). 
3102 */ 3103 static void i915_oa_stream_enable(struct i915_perf_stream *stream) 3104 { 3105 stream->pollin = false; 3106 3107 stream->perf->ops.oa_enable(stream); 3108 3109 if (stream->sample_flags & SAMPLE_OA_REPORT) 3110 hrtimer_start(&stream->poll_check_timer, 3111 ns_to_ktime(stream->poll_oa_period), 3112 HRTIMER_MODE_REL_PINNED); 3113 } 3114 3115 static void gen7_oa_disable(struct i915_perf_stream *stream) 3116 { 3117 struct intel_uncore *uncore = stream->uncore; 3118 3119 intel_uncore_write(uncore, GEN7_OACONTROL, 0); 3120 if (intel_wait_for_register(uncore, 3121 GEN7_OACONTROL, GEN7_OACONTROL_ENABLE, 0, 3122 50)) 3123 drm_err(&stream->perf->i915->drm, 3124 "wait for OA to be disabled timed out\n"); 3125 } 3126 3127 static void gen8_oa_disable(struct i915_perf_stream *stream) 3128 { 3129 struct intel_uncore *uncore = stream->uncore; 3130 3131 intel_uncore_write(uncore, GEN8_OACONTROL, 0); 3132 if (intel_wait_for_register(uncore, 3133 GEN8_OACONTROL, GEN8_OA_COUNTER_ENABLE, 0, 3134 50)) 3135 drm_err(&stream->perf->i915->drm, 3136 "wait for OA to be disabled timed out\n"); 3137 } 3138 3139 static void gen12_oa_disable(struct i915_perf_stream *stream) 3140 { 3141 struct intel_uncore *uncore = stream->uncore; 3142 3143 intel_uncore_write(uncore, __oa_regs(stream)->oa_ctrl, 0); 3144 if (intel_wait_for_register(uncore, 3145 __oa_regs(stream)->oa_ctrl, 3146 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE, 0, 3147 50)) 3148 drm_err(&stream->perf->i915->drm, 3149 "wait for OA to be disabled timed out\n"); 3150 3151 intel_uncore_write(uncore, GEN12_OA_TLB_INV_CR, 1); 3152 if (intel_wait_for_register(uncore, 3153 GEN12_OA_TLB_INV_CR, 3154 1, 0, 3155 50)) 3156 drm_err(&stream->perf->i915->drm, 3157 "wait for OA tlb invalidate timed out\n"); 3158 } 3159 3160 /** 3161 * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream 3162 * @stream: An i915 perf stream opened for OA metrics 3163 * 3164 * Stops the OA unit from periodically writing counter reports into the 3165 * circular OA buffer. This also stops the hrtimer that periodically checks for 3166 * data in the circular OA buffer, for notifying userspace. 
3167 */ 3168 static void i915_oa_stream_disable(struct i915_perf_stream *stream) 3169 { 3170 stream->perf->ops.oa_disable(stream); 3171 3172 if (stream->sample_flags & SAMPLE_OA_REPORT) 3173 hrtimer_cancel(&stream->poll_check_timer); 3174 } 3175 3176 static const struct i915_perf_stream_ops i915_oa_stream_ops = { 3177 .destroy = i915_oa_stream_destroy, 3178 .enable = i915_oa_stream_enable, 3179 .disable = i915_oa_stream_disable, 3180 .wait_unlocked = i915_oa_wait_unlocked, 3181 .poll_wait = i915_oa_poll_wait, 3182 .read = i915_oa_read, 3183 }; 3184 3185 static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream) 3186 { 3187 struct i915_active *active; 3188 int err; 3189 3190 active = i915_active_create(); 3191 if (!active) 3192 return -ENOMEM; 3193 3194 err = stream->perf->ops.enable_metric_set(stream, active); 3195 if (err == 0) 3196 __i915_active_wait(active, TASK_UNINTERRUPTIBLE); 3197 3198 i915_active_put(active); 3199 return err; 3200 } 3201 3202 static void 3203 get_default_sseu_config(struct intel_sseu *out_sseu, 3204 struct intel_engine_cs *engine) 3205 { 3206 const struct sseu_dev_info *devinfo_sseu = &engine->gt->info.sseu; 3207 3208 *out_sseu = intel_sseu_from_device_info(devinfo_sseu); 3209 3210 if (GRAPHICS_VER(engine->i915) == 11) { 3211 /* 3212 * We only need subslice count so it doesn't matter which ones 3213 * we select - just turn off low bits in the amount of half of 3214 * all available subslices per slice. 3215 */ 3216 out_sseu->subslice_mask = 3217 ~(~0 << (hweight8(out_sseu->subslice_mask) / 2)); 3218 out_sseu->slice_mask = 0x1; 3219 } 3220 } 3221 3222 static int 3223 get_sseu_config(struct intel_sseu *out_sseu, 3224 struct intel_engine_cs *engine, 3225 const struct drm_i915_gem_context_param_sseu *drm_sseu) 3226 { 3227 if (drm_sseu->engine.engine_class != engine->uabi_class || 3228 drm_sseu->engine.engine_instance != engine->uabi_instance) 3229 return -EINVAL; 3230 3231 return i915_gem_user_to_context_sseu(engine->gt, drm_sseu, out_sseu); 3232 } 3233 3234 /* 3235 * OA timestamp frequency = CS timestamp frequency in most platforms. On some 3236 * platforms OA unit ignores the CTC_SHIFT and the 2 timestamps differ. In such 3237 * cases, return the adjusted CS timestamp frequency to the user. 3238 */ 3239 u32 i915_perf_oa_timestamp_frequency(struct drm_i915_private *i915) 3240 { 3241 /* 3242 * Wa_18013179988:dg2 3243 * Wa_14015846243:mtl 3244 */ 3245 if (IS_DG2(i915) || IS_METEORLAKE(i915)) { 3246 intel_wakeref_t wakeref; 3247 u32 reg, shift; 3248 3249 with_intel_runtime_pm(to_gt(i915)->uncore->rpm, wakeref) 3250 reg = intel_uncore_read(to_gt(i915)->uncore, RPM_CONFIG0); 3251 3252 shift = REG_FIELD_GET(GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_MASK, 3253 reg); 3254 3255 return to_gt(i915)->clock_frequency << (3 - shift); 3256 } 3257 3258 return to_gt(i915)->clock_frequency; 3259 } 3260 3261 /** 3262 * i915_oa_stream_init - validate combined props for OA stream and init 3263 * @stream: An i915 perf stream 3264 * @param: The open parameters passed to `DRM_I915_PERF_OPEN` 3265 * @props: The property state that configures stream (individually validated) 3266 * 3267 * While read_properties_unlocked() validates properties in isolation it 3268 * doesn't ensure that the combination necessarily makes sense. 3269 * 3270 * At this point it has been determined that userspace wants a stream of 3271 * OA metrics, but still we need to further validate the combined 3272 * properties are OK. 
3273 * 3274 * If the configuration makes sense then we can allocate memory for 3275 * a circular OA buffer and apply the requested metric set configuration. 3276 * 3277 * Returns: zero on success or a negative error code. 3278 */ 3279 static int i915_oa_stream_init(struct i915_perf_stream *stream, 3280 struct drm_i915_perf_open_param *param, 3281 struct perf_open_properties *props) 3282 { 3283 struct drm_i915_private *i915 = stream->perf->i915; 3284 struct i915_perf *perf = stream->perf; 3285 struct i915_perf_group *g; 3286 struct intel_gt *gt; 3287 int ret; 3288 3289 if (!props->engine) { 3290 drm_dbg(&stream->perf->i915->drm, 3291 "OA engine not specified\n"); 3292 return -EINVAL; 3293 } 3294 gt = props->engine->gt; 3295 g = props->engine->oa_group; 3296 3297 /* 3298 * If the sysfs metrics/ directory wasn't registered for some 3299 * reason then don't let userspace try their luck with config 3300 * IDs 3301 */ 3302 if (!perf->metrics_kobj) { 3303 drm_dbg(&stream->perf->i915->drm, 3304 "OA metrics weren't advertised via sysfs\n"); 3305 return -EINVAL; 3306 } 3307 3308 if (!(props->sample_flags & SAMPLE_OA_REPORT) && 3309 (GRAPHICS_VER(perf->i915) < 12 || !stream->ctx)) { 3310 drm_dbg(&stream->perf->i915->drm, 3311 "Only OA report sampling supported\n"); 3312 return -EINVAL; 3313 } 3314 3315 if (!perf->ops.enable_metric_set) { 3316 drm_dbg(&stream->perf->i915->drm, 3317 "OA unit not supported\n"); 3318 return -ENODEV; 3319 } 3320 3321 /* 3322 * To avoid the complexity of having to accurately filter 3323 * counter reports and marshal to the appropriate client 3324 * we currently only allow exclusive access 3325 */ 3326 if (g->exclusive_stream) { 3327 drm_dbg(&stream->perf->i915->drm, 3328 "OA unit already in use\n"); 3329 return -EBUSY; 3330 } 3331 3332 if (!props->oa_format) { 3333 drm_dbg(&stream->perf->i915->drm, 3334 "OA report format not specified\n"); 3335 return -EINVAL; 3336 } 3337 3338 stream->engine = props->engine; 3339 stream->uncore = stream->engine->gt->uncore; 3340 3341 stream->sample_size = sizeof(struct drm_i915_perf_record_header); 3342 3343 stream->oa_buffer.format = &perf->oa_formats[props->oa_format]; 3344 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.format->size == 0)) 3345 return -EINVAL; 3346 3347 stream->sample_flags = props->sample_flags; 3348 stream->sample_size += stream->oa_buffer.format->size; 3349 3350 stream->hold_preemption = props->hold_preemption; 3351 3352 stream->periodic = props->oa_periodic; 3353 if (stream->periodic) 3354 stream->period_exponent = props->oa_period_exponent; 3355 3356 if (stream->ctx) { 3357 ret = oa_get_render_ctx_id(stream); 3358 if (ret) { 3359 drm_dbg(&stream->perf->i915->drm, 3360 "Invalid context id to filter with\n"); 3361 return ret; 3362 } 3363 } 3364 3365 ret = alloc_noa_wait(stream); 3366 if (ret) { 3367 drm_dbg(&stream->perf->i915->drm, 3368 "Unable to allocate NOA wait batch buffer\n"); 3369 goto err_noa_wait_alloc; 3370 } 3371 3372 stream->oa_config = i915_perf_get_oa_config(perf, props->metrics_set); 3373 if (!stream->oa_config) { 3374 drm_dbg(&stream->perf->i915->drm, 3375 "Invalid OA config id=%i\n", props->metrics_set); 3376 ret = -EINVAL; 3377 goto err_config; 3378 } 3379 3380 /* PRM - observability performance counters: 3381 * 3382 * OACONTROL, performance counter enable, note: 3383 * 3384 * "When this bit is set, in order to have coherent counts, 3385 * RC6 power state and trunk clock gating must be disabled. 
3386 * This can be achieved by programming MMIO registers as 3387 * 0xA094=0 and 0xA090[31]=1" 3388 * 3389 * In our case we are expecting that taking pm + FORCEWAKE 3390 * references will effectively disable RC6. 3391 */ 3392 intel_engine_pm_get(stream->engine); 3393 intel_uncore_forcewake_get(stream->uncore, FORCEWAKE_ALL); 3394 3395 /* 3396 * Wa_16011777198:dg2: GuC resets render as part of the Wa. This causes 3397 * OA to lose the configuration state. Prevent this by overriding GUCRC 3398 * mode. 3399 */ 3400 if (intel_uc_uses_guc_rc(>->uc) && 3401 (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) || 3402 IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0))) { 3403 ret = intel_guc_slpc_override_gucrc_mode(>->uc.guc.slpc, 3404 SLPC_GUCRC_MODE_GUCRC_NO_RC6); 3405 if (ret) { 3406 drm_dbg(&stream->perf->i915->drm, 3407 "Unable to override gucrc mode\n"); 3408 goto err_gucrc; 3409 } 3410 3411 stream->override_gucrc = true; 3412 } 3413 3414 ret = alloc_oa_buffer(stream); 3415 if (ret) 3416 goto err_oa_buf_alloc; 3417 3418 stream->ops = &i915_oa_stream_ops; 3419 3420 stream->engine->gt->perf.sseu = props->sseu; 3421 WRITE_ONCE(g->exclusive_stream, stream); 3422 3423 ret = i915_perf_stream_enable_sync(stream); 3424 if (ret) { 3425 drm_dbg(&stream->perf->i915->drm, 3426 "Unable to enable metric set\n"); 3427 goto err_enable; 3428 } 3429 3430 drm_dbg(&stream->perf->i915->drm, 3431 "opening stream oa config uuid=%s\n", 3432 stream->oa_config->uuid); 3433 3434 hrtimer_init(&stream->poll_check_timer, 3435 CLOCK_MONOTONIC, HRTIMER_MODE_REL); 3436 stream->poll_check_timer.function = oa_poll_check_timer_cb; 3437 init_waitqueue_head(&stream->poll_wq); 3438 spin_lock_init(&stream->oa_buffer.ptr_lock); 3439 mutex_init(&stream->lock); 3440 3441 return 0; 3442 3443 err_enable: 3444 WRITE_ONCE(g->exclusive_stream, NULL); 3445 perf->ops.disable_metric_set(stream); 3446 3447 free_oa_buffer(stream); 3448 3449 err_oa_buf_alloc: 3450 if (stream->override_gucrc) 3451 intel_guc_slpc_unset_gucrc_mode(>->uc.guc.slpc); 3452 3453 err_gucrc: 3454 intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); 3455 intel_engine_pm_put(stream->engine); 3456 3457 free_oa_configs(stream); 3458 3459 err_config: 3460 free_noa_wait(stream); 3461 3462 err_noa_wait_alloc: 3463 if (stream->ctx) 3464 oa_put_render_ctx_id(stream); 3465 3466 return ret; 3467 } 3468 3469 void i915_oa_init_reg_state(const struct intel_context *ce, 3470 const struct intel_engine_cs *engine) 3471 { 3472 struct i915_perf_stream *stream; 3473 3474 if (engine->class != RENDER_CLASS) 3475 return; 3476 3477 /* perf.exclusive_stream serialised by lrc_configure_all_contexts() */ 3478 stream = READ_ONCE(engine->oa_group->exclusive_stream); 3479 if (stream && GRAPHICS_VER(stream->perf->i915) < 12) 3480 gen8_update_reg_state_unlocked(ce, stream); 3481 } 3482 3483 /** 3484 * i915_perf_read - handles read() FOP for i915 perf stream FDs 3485 * @file: An i915 perf stream file 3486 * @buf: destination buffer given by userspace 3487 * @count: the number of bytes userspace wants to read 3488 * @ppos: (inout) file seek position (unused) 3489 * 3490 * The entry point for handling a read() on a stream file descriptor from 3491 * userspace. Most of the work is left to the i915_perf_read_locked() and 3492 * &i915_perf_stream_ops->read but to save having stream implementations (of 3493 * which we might have multiple later) we handle blocking read here. 
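 *
 * Seen from userspace, those semantics make for an ordinary read loop.
 * A sketch (the buffer size is an arbitrary example; it only needs to
 * hold at least one complete sample record, and both helpers below are
 * hypothetical):
 *
 *	uint8_t buf[256 * 1024];
 *	ssize_t n = read(stream_fd, buf, sizeof(buf));
 *
 *	if (n < 0 && errno == EAGAIN)
 *		wait_and_retry();	// O_NONBLOCK and no data yet
 *	else if (n < 0 && errno == ENOSPC)
 *		read_again_soon();	// more data pending than buf could hold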
3494 * 3495 * We can also consistently treat trying to read from a disabled stream 3496 * as an IO error so implementations can assume the stream is enabled 3497 * while reading. 3498 * 3499 * Returns: The number of bytes copied or a negative error code on failure. 3500 */ 3501 static ssize_t i915_perf_read(struct file *file, 3502 char __user *buf, 3503 size_t count, 3504 loff_t *ppos) 3505 { 3506 struct i915_perf_stream *stream = file->private_data; 3507 size_t offset = 0; 3508 int ret; 3509 3510 /* To ensure it's handled consistently we simply treat all reads of a 3511 * disabled stream as an error. In particular it might otherwise lead 3512 * to a deadlock for blocking file descriptors... 3513 */ 3514 if (!stream->enabled || !(stream->sample_flags & SAMPLE_OA_REPORT)) 3515 return -EIO; 3516 3517 if (!(file->f_flags & O_NONBLOCK)) { 3518 /* There's the small chance of false positives from 3519 * stream->ops->wait_unlocked. 3520 * 3521 * E.g. with single context filtering since we only wait until 3522 * oabuffer has >= 1 report we don't immediately know whether 3523 * any reports really belong to the current context 3524 */ 3525 do { 3526 ret = stream->ops->wait_unlocked(stream); 3527 if (ret) 3528 return ret; 3529 3530 mutex_lock(&stream->lock); 3531 ret = stream->ops->read(stream, buf, count, &offset); 3532 mutex_unlock(&stream->lock); 3533 } while (!offset && !ret); 3534 } else { 3535 mutex_lock(&stream->lock); 3536 ret = stream->ops->read(stream, buf, count, &offset); 3537 mutex_unlock(&stream->lock); 3538 } 3539 3540 /* We allow the poll checking to sometimes report false positive EPOLLIN 3541 * events where we might actually report EAGAIN on read() if there's 3542 * not really any data available. In this situation though we don't 3543 * want to enter a busy loop between poll() reporting a EPOLLIN event 3544 * and read() returning -EAGAIN. Clearing the oa.pollin state here 3545 * effectively ensures we back off until the next hrtimer callback 3546 * before reporting another EPOLLIN event. 3547 * The exception to this is if ops->read() returned -ENOSPC which means 3548 * that more OA data is available than could fit in the user provided 3549 * buffer. In this case we want the next poll() call to not block. 3550 */ 3551 if (ret != -ENOSPC) 3552 stream->pollin = false; 3553 3554 /* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, ... */ 3555 return offset ?: (ret ?: -EAGAIN); 3556 } 3557 3558 static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer) 3559 { 3560 struct i915_perf_stream *stream = 3561 container_of(hrtimer, typeof(*stream), poll_check_timer); 3562 3563 if (oa_buffer_check_unlocked(stream)) { 3564 stream->pollin = true; 3565 wake_up(&stream->poll_wq); 3566 } 3567 3568 hrtimer_forward_now(hrtimer, 3569 ns_to_ktime(stream->poll_oa_period)); 3570 3571 return HRTIMER_RESTART; 3572 } 3573 3574 /** 3575 * i915_perf_poll_locked - poll_wait() with a suitable wait queue for stream 3576 * @stream: An i915 perf stream 3577 * @file: An i915 perf stream file 3578 * @wait: poll() state table 3579 * 3580 * For handling userspace polling on an i915 perf stream, this calls through to 3581 * &i915_perf_stream_ops->poll_wait to call poll_wait() with a wait queue that 3582 * will be woken for new stream data. 
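 *
 * As a non-authoritative usage sketch, a userspace consumer would
 * typically pair poll() with read() on the stream fd returned by
 * DRM_IOCTL_I915_PERF_OPEN (stream_fd and buf are assumed to come from
 * the caller; error handling elided):
 *
 *	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
 *
 *	while (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
 *		ssize_t n = read(stream_fd, buf, sizeof(buf));
 *
 *		if (n < 0 && errno != EAGAIN)
 *			break;
 *		(buf now holds a run of drm_i915_perf_record_header
 *		 prefixed records to parse)
 *	}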
3583 * 3584 * Returns: any poll events that are ready without sleeping 3585 */ 3586 static __poll_t i915_perf_poll_locked(struct i915_perf_stream *stream, 3587 struct file *file, 3588 poll_table *wait) 3589 { 3590 __poll_t events = 0; 3591 3592 stream->ops->poll_wait(stream, file, wait); 3593 3594 /* Note: we don't explicitly check whether there's something to read 3595 * here since this path may be very hot depending on what else 3596 * userspace is polling, or on the timeout in use. We rely solely on 3597 * the hrtimer/oa_poll_check_timer_cb to notify us when there are 3598 * samples to read. 3599 */ 3600 if (stream->pollin) 3601 events |= EPOLLIN; 3602 3603 return events; 3604 } 3605 3606 /** 3607 * i915_perf_poll - call poll_wait() with a suitable wait queue for stream 3608 * @file: An i915 perf stream file 3609 * @wait: poll() state table 3610 * 3611 * For handling userspace polling on an i915 perf stream, this ensures 3612 * poll_wait() gets called with a wait queue that will be woken for new stream 3613 * data. 3614 * 3615 * Note: Implementation deferred to i915_perf_poll_locked() 3616 * 3617 * Returns: any poll events that are ready without sleeping 3618 */ 3619 static __poll_t i915_perf_poll(struct file *file, poll_table *wait) 3620 { 3621 struct i915_perf_stream *stream = file->private_data; 3622 __poll_t ret; 3623 3624 mutex_lock(&stream->lock); 3625 ret = i915_perf_poll_locked(stream, file, wait); 3626 mutex_unlock(&stream->lock); 3627 3628 return ret; 3629 } 3630 3631 /** 3632 * i915_perf_enable_locked - handle `I915_PERF_IOCTL_ENABLE` ioctl 3633 * @stream: A disabled i915 perf stream 3634 * 3635 * [Re]enables the associated capture of data for this stream. 3636 * 3637 * If a stream was previously enabled then there's currently no intention 3638 * to provide userspace any guarantee about the preservation of previously 3639 * buffered data. 3640 */ 3641 static void i915_perf_enable_locked(struct i915_perf_stream *stream) 3642 { 3643 if (stream->enabled) 3644 return; 3645 3646 /* Allow stream->ops->enable() to refer to this */ 3647 stream->enabled = true; 3648 3649 if (stream->ops->enable) 3650 stream->ops->enable(stream); 3651 3652 if (stream->hold_preemption) 3653 intel_context_set_nopreempt(stream->pinned_ctx); 3654 } 3655 3656 /** 3657 * i915_perf_disable_locked - handle `I915_PERF_IOCTL_DISABLE` ioctl 3658 * @stream: An enabled i915 perf stream 3659 * 3660 * Disables the associated capture of data for this stream. 3661 * 3662 * The intention is that disabling and re-enabling a stream will ideally be 3663 * cheaper than destroying and re-opening a stream with the same configuration, 3664 * though there are no formal guarantees about what state or buffered data 3665 * must be retained between disabling and re-enabling a stream. 3666 * 3667 * Note: while a stream is disabled it's considered an error for userspace 3668 * to attempt to read from the stream (-EIO).
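 *
 * A minimal sketch of how userspace might toggle capture around a
 * region of interest (stream_fd is a hypothetical open stream fd; these
 * ioctls take no argument):
 *
 *	ioctl(stream_fd, I915_PERF_IOCTL_DISABLE, 0);
 *	(... reconfigure, e.g. via I915_PERF_IOCTL_CONFIG, or pause ...)
 *	ioctl(stream_fd, I915_PERF_IOCTL_ENABLE, 0);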
3669 */ 3670 static void i915_perf_disable_locked(struct i915_perf_stream *stream) 3671 { 3672 if (!stream->enabled) 3673 return; 3674 3675 /* Allow stream->ops->disable() to refer to this */ 3676 stream->enabled = false; 3677 3678 if (stream->hold_preemption) 3679 intel_context_clear_nopreempt(stream->pinned_ctx); 3680 3681 if (stream->ops->disable) 3682 stream->ops->disable(stream); 3683 } 3684 3685 static long i915_perf_config_locked(struct i915_perf_stream *stream, 3686 unsigned long metrics_set) 3687 { 3688 struct i915_oa_config *config; 3689 long ret = stream->oa_config->id; 3690 3691 config = i915_perf_get_oa_config(stream->perf, metrics_set); 3692 if (!config) 3693 return -EINVAL; 3694 3695 if (config != stream->oa_config) { 3696 int err; 3697 3698 /* 3699 * If OA is bound to a specific context, emit the 3700 * reconfiguration inline from that context. The update 3701 * will then be ordered with respect to submission on that 3702 * context. 3703 * 3704 * When set globally, we use a low priority kernel context, 3705 * so it will effectively take effect when idle. 3706 */ 3707 err = emit_oa_config(stream, config, oa_context(stream), NULL); 3708 if (!err) 3709 config = xchg(&stream->oa_config, config); 3710 else 3711 ret = err; 3712 } 3713 3714 i915_oa_config_put(config); 3715 3716 return ret; 3717 } 3718 3719 /** 3720 * i915_perf_ioctl_locked - support ioctl() usage with i915 perf stream FDs 3721 * @stream: An i915 perf stream 3722 * @cmd: the ioctl request 3723 * @arg: the ioctl data 3724 * 3725 * Returns: zero on success or a negative error code. Returns -EINVAL for 3726 * an unknown ioctl request. 3727 */ 3728 static long i915_perf_ioctl_locked(struct i915_perf_stream *stream, 3729 unsigned int cmd, 3730 unsigned long arg) 3731 { 3732 switch (cmd) { 3733 case I915_PERF_IOCTL_ENABLE: 3734 i915_perf_enable_locked(stream); 3735 return 0; 3736 case I915_PERF_IOCTL_DISABLE: 3737 i915_perf_disable_locked(stream); 3738 return 0; 3739 case I915_PERF_IOCTL_CONFIG: 3740 return i915_perf_config_locked(stream, arg); 3741 } 3742 3743 return -EINVAL; 3744 } 3745 3746 /** 3747 * i915_perf_ioctl - support ioctl() usage with i915 perf stream FDs 3748 * @file: An i915 perf stream file 3749 * @cmd: the ioctl request 3750 * @arg: the ioctl data 3751 * 3752 * Implementation deferred to i915_perf_ioctl_locked(). 3753 * 3754 * Returns: zero on success or a negative error code. Returns -EINVAL for 3755 * an unknown ioctl request. 3756 */ 3757 static long i915_perf_ioctl(struct file *file, 3758 unsigned int cmd, 3759 unsigned long arg) 3760 { 3761 struct i915_perf_stream *stream = file->private_data; 3762 long ret; 3763 3764 mutex_lock(&stream->lock); 3765 ret = i915_perf_ioctl_locked(stream, cmd, arg); 3766 mutex_unlock(&stream->lock); 3767 3768 return ret; 3769 } 3770 3771 /** 3772 * i915_perf_destroy_locked - destroy an i915 perf stream 3773 * @stream: An i915 perf stream 3774 * 3775 * Frees all resources associated with the given i915 perf @stream, disabling 3776 * any associated data capture in the process. 3777 * 3778 * Note: The &gt->perf.lock mutex has been taken to serialize 3779 * with any non-file-operation driver hooks.
3780 */ 3781 static void i915_perf_destroy_locked(struct i915_perf_stream *stream) 3782 { 3783 if (stream->enabled) 3784 i915_perf_disable_locked(stream); 3785 3786 if (stream->ops->destroy) 3787 stream->ops->destroy(stream); 3788 3789 if (stream->ctx) 3790 i915_gem_context_put(stream->ctx); 3791 3792 kfree(stream); 3793 } 3794 3795 /** 3796 * i915_perf_release - handles userspace close() of a stream file 3797 * @inode: anonymous inode associated with file 3798 * @file: An i915 perf stream file 3799 * 3800 * Cleans up any resources associated with an open i915 perf stream file. 3801 * 3802 * NB: close() can't really fail from the userspace point of view. 3803 * 3804 * Returns: zero on success or a negative error code. 3805 */ 3806 static int i915_perf_release(struct inode *inode, struct file *file) 3807 { 3808 struct i915_perf_stream *stream = file->private_data; 3809 struct i915_perf *perf = stream->perf; 3810 struct intel_gt *gt = stream->engine->gt; 3811 3812 /* 3813 * Within this call, we know that the fd is being closed and we have no 3814 * other user of stream->lock. Use the perf lock to destroy the stream 3815 * here. 3816 */ 3817 mutex_lock(&gt->perf.lock); 3818 i915_perf_destroy_locked(stream); 3819 mutex_unlock(&gt->perf.lock); 3820 3821 /* Release the reference the perf stream kept on the driver. */ 3822 drm_dev_put(&perf->i915->drm); 3823 3824 return 0; 3825 } 3826 3827 3828 static const struct file_operations fops = { 3829 .owner = THIS_MODULE, 3830 .llseek = no_llseek, 3831 .release = i915_perf_release, 3832 .poll = i915_perf_poll, 3833 .read = i915_perf_read, 3834 .unlocked_ioctl = i915_perf_ioctl, 3835 /* Our ioctls have no arguments, so it's safe to use the same function 3836 * to handle 32-bit compatibility. 3837 */ 3838 .compat_ioctl = i915_perf_ioctl, 3839 }; 3840 3841 3842 /** 3843 * i915_perf_open_ioctl_locked - DRM ioctl() for userspace to open a stream FD 3844 * @perf: i915 perf instance 3845 * @param: The open parameters passed to `DRM_I915_PERF_OPEN` 3846 * @props: individually validated u64 property value pairs 3847 * @file: drm file 3848 * 3849 * See i915_perf_open_ioctl() for interface details. 3850 * 3851 * Implements further stream config validation and stream initialization on 3852 * behalf of i915_perf_open_ioctl() with the &gt->perf.lock mutex 3853 * taken to serialize with any non-file-operation driver hooks. 3854 * 3855 * Note: at this point the @props have only been validated in isolation and 3856 * it's still necessary to validate that the combination of properties makes 3857 * sense. 3858 * 3859 * In the case where userspace is interested in OA unit metrics then further 3860 * config validation and stream initialization details will be handled by 3861 * i915_oa_stream_init(). The code here should only validate config state that 3862 * will be relevant to all stream types / backends. 3863 * 3864 * Returns: a newly opened i915 perf stream file descriptor or a negative error code on failure.
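 *
 * For orientation, the corresponding userspace open call is roughly the
 * following sketch (props is a flat (key, value) u64 array as validated
 * by read_properties_unlocked(); drm_fd is assumed open):
 *
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(props) / (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)props,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);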
3865 */ 3866 static int 3867 i915_perf_open_ioctl_locked(struct i915_perf *perf, 3868 struct drm_i915_perf_open_param *param, 3869 struct perf_open_properties *props, 3870 struct drm_file *file) 3871 { 3872 struct i915_gem_context *specific_ctx = NULL; 3873 struct i915_perf_stream *stream = NULL; 3874 unsigned long f_flags = 0; 3875 bool privileged_op = true; 3876 int stream_fd; 3877 int ret; 3878 3879 if (props->single_context) { 3880 u32 ctx_handle = props->ctx_handle; 3881 struct drm_i915_file_private *file_priv = file->driver_priv; 3882 3883 specific_ctx = i915_gem_context_lookup(file_priv, ctx_handle); 3884 if (IS_ERR(specific_ctx)) { 3885 drm_dbg(&perf->i915->drm, 3886 "Failed to look up context with ID %u for opening perf stream\n", 3887 ctx_handle); 3888 ret = PTR_ERR(specific_ctx); 3889 goto err; 3890 } 3891 } 3892 3893 /* 3894 * On Haswell the OA unit supports clock gating off for a specific 3895 * context and in this mode there's no visibility of metrics for the 3896 * rest of the system, which we consider acceptable for a 3897 * non-privileged client. 3898 * 3899 * For Gen8->11 the OA unit no longer supports clock gating off for a 3900 * specific context and the kernel can't securely stop the counters 3901 * from updating as system-wide / global values. Even though we can 3902 * filter reports based on the included context ID we can't block 3903 * clients from seeing the raw / global counter values via 3904 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to 3905 * enable the OA unit by default. 3906 * 3907 * For Gen12+ we gain a new OAR unit that only monitors the RCS on a 3908 * per context basis. So we can relax requirements there if the user 3909 * doesn't request global stream access (i.e. query based sampling 3910 * using MI_REPORT_PERF_COUNT). 3911 */ 3912 if (IS_HASWELL(perf->i915) && specific_ctx) 3913 privileged_op = false; 3914 else if (GRAPHICS_VER(perf->i915) == 12 && specific_ctx && 3915 (props->sample_flags & SAMPLE_OA_REPORT) == 0) 3916 privileged_op = false; 3917 3918 if (props->hold_preemption) { 3919 if (!props->single_context) { 3920 drm_dbg(&perf->i915->drm, 3921 "preemption disable with no context\n"); 3922 ret = -EINVAL; 3923 goto err; 3924 } 3925 privileged_op = true; 3926 } 3927 3928 /* 3929 * Asking for SSEU configuration is a privileged operation. 3930 */ 3931 if (props->has_sseu) 3932 privileged_op = true; 3933 else 3934 get_default_sseu_config(&props->sseu, props->engine); 3935 3936 /* Similar to perf's kernel.perf_paranoid_cpu sysctl option 3937 * we check a dev.i915.perf_stream_paranoid sysctl option 3938 * to determine if it's ok to access system wide OA counters 3939 * without CAP_PERFMON or CAP_SYS_ADMIN privileges. 3940 */ 3941 if (privileged_op && 3942 i915_perf_stream_paranoid && !perfmon_capable()) { 3943 drm_dbg(&perf->i915->drm, 3944 "Insufficient privileges to open i915 perf stream\n"); 3945 ret = -EACCES; 3946 goto err_ctx; 3947 } 3948 3949 stream = kzalloc(sizeof(*stream), GFP_KERNEL); 3950 if (!stream) { 3951 ret = -ENOMEM; 3952 goto err_ctx; 3953 } 3954 3955 stream->perf = perf; 3956 stream->ctx = specific_ctx; 3957 stream->poll_oa_period = props->poll_oa_period; 3958 3959 ret = i915_oa_stream_init(stream, param, props); 3960 if (ret) 3961 goto err_alloc; 3962 3963 /* we avoid simply assigning stream->sample_flags = props->sample_flags 3964 * to have _stream_init check the combination of sample flags more 3965 * thoroughly, but still this is the expected result at this point.
3966 */ 3967 if (WARN_ON(stream->sample_flags != props->sample_flags)) { 3968 ret = -ENODEV; 3969 goto err_flags; 3970 } 3971 3972 if (param->flags & I915_PERF_FLAG_FD_CLOEXEC) 3973 f_flags |= O_CLOEXEC; 3974 if (param->flags & I915_PERF_FLAG_FD_NONBLOCK) 3975 f_flags |= O_NONBLOCK; 3976 3977 stream_fd = anon_inode_getfd("[i915_perf]", &fops, stream, f_flags); 3978 if (stream_fd < 0) { 3979 ret = stream_fd; 3980 goto err_flags; 3981 } 3982 3983 if (!(param->flags & I915_PERF_FLAG_DISABLED)) 3984 i915_perf_enable_locked(stream); 3985 3986 /* Take a reference on the driver that will be kept with stream_fd 3987 * until its release. 3988 */ 3989 drm_dev_get(&perf->i915->drm); 3990 3991 return stream_fd; 3992 3993 err_flags: 3994 if (stream->ops->destroy) 3995 stream->ops->destroy(stream); 3996 err_alloc: 3997 kfree(stream); 3998 err_ctx: 3999 if (specific_ctx) 4000 i915_gem_context_put(specific_ctx); 4001 err: 4002 return ret; 4003 } 4004 4005 static u64 oa_exponent_to_ns(struct i915_perf *perf, int exponent) 4006 { 4007 u64 nom = (2ULL << exponent) * NSEC_PER_SEC; 4008 u32 den = i915_perf_oa_timestamp_frequency(perf->i915); 4009 4010 return div_u64(nom + den - 1, den); 4011 } 4012 4013 static __always_inline bool 4014 oa_format_valid(struct i915_perf *perf, enum drm_i915_oa_format format) 4015 { 4016 return test_bit(format, perf->format_mask); 4017 } 4018 4019 static __always_inline void 4020 oa_format_add(struct i915_perf *perf, enum drm_i915_oa_format format) 4021 { 4022 __set_bit(format, perf->format_mask); 4023 } 4024 4025 /** 4026 * read_properties_unlocked - validate + copy userspace stream open properties 4027 * @perf: i915 perf instance 4028 * @uprops: The array of u64 key value pairs given by userspace 4029 * @n_props: The number of key value pairs expected in @uprops 4030 * @props: The stream configuration built up while validating properties 4031 * 4032 * Note this function only validates properties in isolation; it doesn't 4033 * validate that the combination of properties makes sense or that all 4034 * properties necessary for a particular kind of stream have been set. 4035 * 4036 * Note that there currently aren't any ordering requirements for properties so 4037 * we shouldn't validate or assume anything about ordering here. This doesn't 4038 * rule out defining new properties with ordering requirements in the future. 4039 */ 4040 static int read_properties_unlocked(struct i915_perf *perf, 4041 u64 __user *uprops, 4042 u32 n_props, 4043 struct perf_open_properties *props) 4044 { 4045 struct drm_i915_gem_context_param_sseu user_sseu; 4046 const struct i915_oa_format *f; 4047 u64 __user *uprop = uprops; 4048 bool config_instance = false; 4049 bool config_class = false; 4050 bool config_sseu = false; 4051 u8 class, instance; 4052 u32 i; 4053 int ret; 4054 4055 memset(props, 0, sizeof(struct perf_open_properties)); 4056 props->poll_oa_period = DEFAULT_POLL_PERIOD_NS; 4057 4058 /* Considering that ID = 0 is reserved and assuming that we don't 4059 * (currently) expect any configurations to ever specify duplicate 4060 * values for a particular property ID then the last _PROP_MAX value is 4061 * one greater than the maximum number of properties we expect to get 4062 * from userspace.
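 *
 * For orientation, the array this function parses is laid out by
 * userspace roughly like the following sketch (illustrative values
 * only, not a complete or authoritative configuration; config_id and
 * oa_exponent are chosen by the caller):
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, config_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, oa_exponent,
 *	};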
4063 */ 4064 if (!n_props || n_props >= DRM_I915_PERF_PROP_MAX) { 4065 drm_dbg(&perf->i915->drm, 4066 "Invalid number of i915 perf properties given\n"); 4067 return -EINVAL; 4068 } 4069 4070 /* Defaults when class:instance is not passed */ 4071 class = I915_ENGINE_CLASS_RENDER; 4072 instance = 0; 4073 4074 for (i = 0; i < n_props; i++) { 4075 u64 oa_period, oa_freq_hz; 4076 u64 id, value; 4077 4078 ret = get_user(id, uprop); 4079 if (ret) 4080 return ret; 4081 4082 ret = get_user(value, uprop + 1); 4083 if (ret) 4084 return ret; 4085 4086 if (id == 0 || id >= DRM_I915_PERF_PROP_MAX) { 4087 drm_dbg(&perf->i915->drm, 4088 "Unknown i915 perf property ID\n"); 4089 return -EINVAL; 4090 } 4091 4092 switch ((enum drm_i915_perf_property_id)id) { 4093 case DRM_I915_PERF_PROP_CTX_HANDLE: 4094 props->single_context = 1; 4095 props->ctx_handle = value; 4096 break; 4097 case DRM_I915_PERF_PROP_SAMPLE_OA: 4098 if (value) 4099 props->sample_flags |= SAMPLE_OA_REPORT; 4100 break; 4101 case DRM_I915_PERF_PROP_OA_METRICS_SET: 4102 if (value == 0) { 4103 drm_dbg(&perf->i915->drm, 4104 "Unknown OA metric set ID\n"); 4105 return -EINVAL; 4106 } 4107 props->metrics_set = value; 4108 break; 4109 case DRM_I915_PERF_PROP_OA_FORMAT: 4110 if (value == 0 || value >= I915_OA_FORMAT_MAX) { 4111 drm_dbg(&perf->i915->drm, 4112 "Out-of-range OA report format %llu\n", 4113 value); 4114 return -EINVAL; 4115 } 4116 if (!oa_format_valid(perf, value)) { 4117 drm_dbg(&perf->i915->drm, 4118 "Unsupported OA report format %llu\n", 4119 value); 4120 return -EINVAL; 4121 } 4122 props->oa_format = value; 4123 break; 4124 case DRM_I915_PERF_PROP_OA_EXPONENT: 4125 if (value > OA_EXPONENT_MAX) { 4126 drm_dbg(&perf->i915->drm, 4127 "OA timer exponent too high (> %u)\n", 4128 OA_EXPONENT_MAX); 4129 return -EINVAL; 4130 } 4131 4132 /* Theoretically we can program the OA unit to sample 4133 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns 4134 * for BXT. We don't allow such high sampling 4135 * frequencies by default unless root. 4136 */ 4137 4138 BUILD_BUG_ON(sizeof(oa_period) != 8); 4139 oa_period = oa_exponent_to_ns(perf, value); 4140 4141 /* This check is primarily to ensure that oa_period <= 4142 * UINT32_MAX (before passing to do_div which only 4143 * accepts a u32 denominator), but we can also skip 4144 * checking anything < 1Hz which implicitly can't be 4145 * limited via an integer oa_max_sample_rate. 
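 *
 * As a worked example (assuming a 19.2MHz OA timestamp frequency, as
 * found on some recent platforms), oa_exponent_to_ns() above gives a
 * sampling period of (2 << 5) / 19.2e6 ~= 3333ns for exponent 5, i.e.
 * roughly a 300KHz sample rate.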
4146 */ 4147 if (oa_period <= NSEC_PER_SEC) { 4148 u64 tmp = NSEC_PER_SEC; 4149 do_div(tmp, oa_period); 4150 oa_freq_hz = tmp; 4151 } else 4152 oa_freq_hz = 0; 4153 4154 if (oa_freq_hz > i915_oa_max_sample_rate && !perfmon_capable()) { 4155 drm_dbg(&perf->i915->drm, 4156 "OA exponent would exceed the max sampling frequency (sysctl dev.i915.oa_max_sample_rate) %uHz without CAP_PERFMON or CAP_SYS_ADMIN privileges\n", 4157 i915_oa_max_sample_rate); 4158 return -EACCES; 4159 } 4160 4161 props->oa_periodic = true; 4162 props->oa_period_exponent = value; 4163 break; 4164 case DRM_I915_PERF_PROP_HOLD_PREEMPTION: 4165 props->hold_preemption = !!value; 4166 break; 4167 case DRM_I915_PERF_PROP_GLOBAL_SSEU: { 4168 if (GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 50)) { 4169 drm_dbg(&perf->i915->drm, 4170 "SSEU config not supported on gfx %x\n", 4171 GRAPHICS_VER_FULL(perf->i915)); 4172 return -ENODEV; 4173 } 4174 4175 if (copy_from_user(&user_sseu, 4176 u64_to_user_ptr(value), 4177 sizeof(user_sseu))) { 4178 drm_dbg(&perf->i915->drm, 4179 "Unable to copy global sseu parameter\n"); 4180 return -EFAULT; 4181 } 4182 config_sseu = true; 4183 break; 4184 } 4185 case DRM_I915_PERF_PROP_POLL_OA_PERIOD: 4186 if (value < 100000 /* 100us */) { 4187 drm_dbg(&perf->i915->drm, 4188 "OA availability timer too small (%lluns < 100us)\n", 4189 value); 4190 return -EINVAL; 4191 } 4192 props->poll_oa_period = value; 4193 break; 4194 case DRM_I915_PERF_PROP_OA_ENGINE_CLASS: 4195 class = (u8)value; 4196 config_class = true; 4197 break; 4198 case DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE: 4199 instance = (u8)value; 4200 config_instance = true; 4201 break; 4202 default: 4203 MISSING_CASE(id); 4204 return -EINVAL; 4205 } 4206 4207 uprop += 2; 4208 } 4209 4210 if ((config_class && !config_instance) || 4211 (config_instance && !config_class)) { 4212 drm_dbg(&perf->i915->drm, 4213 "OA engine-class and engine-instance parameters must be passed together\n"); 4214 return -EINVAL; 4215 } 4216 4217 props->engine = intel_engine_lookup_user(perf->i915, class, instance); 4218 if (!props->engine) { 4219 drm_dbg(&perf->i915->drm, 4220 "OA engine class and instance invalid %d:%d\n", 4221 class, instance); 4222 return -EINVAL; 4223 } 4224 4225 if (!engine_supports_oa(props->engine)) { 4226 drm_dbg(&perf->i915->drm, 4227 "Engine not supported by OA %d:%d\n", 4228 class, instance); 4229 return -EINVAL; 4230 } 4231 4232 /* 4233 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media 4234 * C6 disable in BIOS. Fail if Media C6 is enabled on steppings where OAM 4235 * does not work as expected. 
4236 */ 4237 if (IS_MTL_MEDIA_STEP(props->engine->i915, STEP_A0, STEP_C0) && 4238 props->engine->oa_group->type == TYPE_OAM && 4239 intel_check_bios_c6_setup(&props->engine->gt->rc6)) { 4240 drm_dbg(&perf->i915->drm, 4241 "OAM requires media C6 to be disabled in BIOS\n"); 4242 return -EINVAL; 4243 } 4244 4245 i = array_index_nospec(props->oa_format, I915_OA_FORMAT_MAX); 4246 f = &perf->oa_formats[i]; 4247 if (!engine_supports_oa_format(props->engine, f->type)) { 4248 drm_dbg(&perf->i915->drm, 4249 "Invalid OA format %d for class %d\n", 4250 f->type, props->engine->class); 4251 return -EINVAL; 4252 } 4253 4254 if (config_sseu) { 4255 ret = get_sseu_config(&props->sseu, props->engine, &user_sseu); 4256 if (ret) { 4257 drm_dbg(&perf->i915->drm, 4258 "Invalid SSEU configuration\n"); 4259 return ret; 4260 } 4261 props->has_sseu = true; 4262 } 4263 4264 return 0; 4265 } 4266 4267 /** 4268 * i915_perf_open_ioctl - DRM ioctl() for userspace to open a stream FD 4269 * @dev: drm device 4270 * @data: ioctl data copied from userspace (unvalidated) 4271 * @file: drm file 4272 * 4273 * Validates the stream open parameters given by userspace including flags 4274 * and an array of u64 key, value pair properties. 4275 * 4276 * Very little is assumed up front about the nature of the stream being 4277 * opened (for instance we don't assume it's for periodic OA unit metrics). An 4278 * i915-perf stream is expected to be a suitable interface for other forms of 4279 * buffered data written by the GPU besides periodic OA metrics. 4280 * 4281 * Note we copy the properties from userspace outside of the i915 perf 4282 * mutex to avoid an awkward lockdep with mmap_lock. 4283 * 4284 * Most of the implementation details are handled by 4285 * i915_perf_open_ioctl_locked() after taking the &gt->perf.lock 4286 * mutex for serializing with any non-file-operation driver hooks. 4287 * 4288 * Return: A newly opened i915 Perf stream file descriptor or negative 4289 * error code on failure. 4290 */ 4291 int i915_perf_open_ioctl(struct drm_device *dev, void *data, 4292 struct drm_file *file) 4293 { 4294 struct i915_perf *perf = &to_i915(dev)->perf; 4295 struct drm_i915_perf_open_param *param = data; 4296 struct intel_gt *gt; 4297 struct perf_open_properties props; 4298 u32 known_open_flags; 4299 int ret; 4300 4301 if (!perf->i915) { 4302 drm_dbg(&perf->i915->drm, 4303 "i915 perf interface not available for this system\n"); 4304 return -ENOTSUPP; 4305 } 4306 4307 known_open_flags = I915_PERF_FLAG_FD_CLOEXEC | 4308 I915_PERF_FLAG_FD_NONBLOCK | 4309 I915_PERF_FLAG_DISABLED; 4310 if (param->flags & ~known_open_flags) { 4311 drm_dbg(&perf->i915->drm, 4312 "Unknown drm_i915_perf_open_param flag\n"); 4313 return -EINVAL; 4314 } 4315 4316 ret = read_properties_unlocked(perf, 4317 u64_to_user_ptr(param->properties_ptr), 4318 param->num_properties, 4319 &props); 4320 if (ret) 4321 return ret; 4322 4323 gt = props.engine->gt; 4324 4325 mutex_lock(&gt->perf.lock); 4326 ret = i915_perf_open_ioctl_locked(perf, param, &props, file); 4327 mutex_unlock(&gt->perf.lock); 4328 4329 return ret; 4330 } 4331 4332 /** 4333 * i915_perf_register - exposes i915-perf to userspace 4334 * @i915: i915 device instance 4335 * 4336 * In particular OA metric sets are advertised under a sysfs metrics/ 4337 * directory allowing userspace to enumerate valid IDs that can be 4338 * used to open an i915-perf stream.
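 *
 * For example, after registration a config added via
 * DRM_IOCTL_I915_PERF_ADD_CONFIG would typically be visible to
 * userspace as (sketch, assuming the device is card0):
 *
 *	/sys/class/drm/card0/metrics/<uuid>/id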
4339 */ 4340 void i915_perf_register(struct drm_i915_private *i915) 4341 { 4342 struct i915_perf *perf = &i915->perf; 4343 struct intel_gt *gt = to_gt(i915); 4344 4345 if (!perf->i915) 4346 return; 4347 4348 /* To be sure we're synchronized with an attempted 4349 * i915_perf_open_ioctl(); considering that we register after 4350 * being exposed to userspace. 4351 */ 4352 mutex_lock(&gt->perf.lock); 4353 4354 perf->metrics_kobj = 4355 kobject_create_and_add("metrics", 4356 &i915->drm.primary->kdev->kobj); 4357 4358 mutex_unlock(&gt->perf.lock); 4359 } 4360 4361 /** 4362 * i915_perf_unregister - hide i915-perf from userspace 4363 * @i915: i915 device instance 4364 * 4365 * i915-perf state cleanup is split up into an 'unregister' and 4366 * 'deinit' phase where the interface is first hidden from 4367 * userspace by i915_perf_unregister() before cleaning up 4368 * remaining state in i915_perf_fini(). 4369 */ 4370 void i915_perf_unregister(struct drm_i915_private *i915) 4371 { 4372 struct i915_perf *perf = &i915->perf; 4373 4374 if (!perf->metrics_kobj) 4375 return; 4376 4377 kobject_put(perf->metrics_kobj); 4378 perf->metrics_kobj = NULL; 4379 } 4380 4381 static bool gen8_is_valid_flex_addr(struct i915_perf *perf, u32 addr) 4382 { 4383 static const i915_reg_t flex_eu_regs[] = { 4384 EU_PERF_CNTL0, 4385 EU_PERF_CNTL1, 4386 EU_PERF_CNTL2, 4387 EU_PERF_CNTL3, 4388 EU_PERF_CNTL4, 4389 EU_PERF_CNTL5, 4390 EU_PERF_CNTL6, 4391 }; 4392 int i; 4393 4394 for (i = 0; i < ARRAY_SIZE(flex_eu_regs); i++) { 4395 if (i915_mmio_reg_offset(flex_eu_regs[i]) == addr) 4396 return true; 4397 } 4398 return false; 4399 } 4400 4401 static bool reg_in_range_table(u32 addr, const struct i915_range *table) 4402 { 4403 while (table->start || table->end) { 4404 if (addr >= table->start && addr <= table->end) 4405 return true; 4406 4407 table++; 4408 } 4409 4410 return false; 4411 } 4412 4413 #define REG_EQUAL(addr, mmio) \ 4414 ((addr) == i915_mmio_reg_offset(mmio)) 4415 4416 static const struct i915_range gen7_oa_b_counters[] = { 4417 { .start = 0x2710, .end = 0x272c }, /* OASTARTTRIG[1-8] */ 4418 { .start = 0x2740, .end = 0x275c }, /* OAREPORTTRIG[1-8] */ 4419 { .start = 0x2770, .end = 0x27ac }, /* OACEC[0-7][0-1] */ 4420 {} 4421 }; 4422 4423 static const struct i915_range gen12_oa_b_counters[] = { 4424 { .start = 0x2b2c, .end = 0x2b2c }, /* GEN12_OAG_OA_PESS */ 4425 { .start = 0xd900, .end = 0xd91c }, /* GEN12_OAG_OASTARTTRIG[1-8] */ 4426 { .start = 0xd920, .end = 0xd93c }, /* GEN12_OAG_OAREPORTTRIG1[1-8] */ 4427 { .start = 0xd940, .end = 0xd97c }, /* GEN12_OAG_CEC[0-7][0-1] */ 4428 { .start = 0xdc00, .end = 0xdc3c }, /* GEN12_OAG_SCEC[0-7][0-1] */ 4429 { .start = 0xdc40, .end = 0xdc40 }, /* GEN12_OAG_SPCTR_CNF */ 4430 { .start = 0xdc44, .end = 0xdc44 }, /* GEN12_OAA_DBG_REG */ 4431 {} 4432 }; 4433 4434 static const struct i915_range mtl_oam_b_counters[] = { 4435 { .start = 0x393000, .end = 0x39301c }, /* GEN12_OAM_STARTTRIG1[1-8] */ 4436 { .start = 0x393020, .end = 0x39303c }, /* GEN12_OAM_REPORTTRIG1[1-8] */ 4437 { .start = 0x393040, .end = 0x39307c }, /* GEN12_OAM_CEC[0-7][0-1] */ 4438 { .start = 0x393200, .end = 0x39323C }, /* MPES[0-7] */ 4439 {} 4440 }; 4441 4442 static const struct i915_range xehp_oa_b_counters[] = { 4443 { .start = 0xdc48, .end = 0xdc48 }, /* OAA_ENABLE_REG */ 4444 { .start = 0xdd00, .end = 0xdd48 }, /* OAG_LCE0_0 - OAA_LENABLE_REG */ {} /* reg_in_range_table() requires a zeroed sentinel */ 4445 }; 4446 4447 static const struct i915_range gen7_oa_mux_regs[] = { 4448 { .start = 0x91b8, .end = 0x91cc }, /* OA_PERFCNT[1-2], OA_PERFMATRIX */ 4449 { .start =
0x9800, .end = 0x9888 }, /* MICRO_BP0_0 - NOA_WRITE */ 4450 { .start = 0xe180, .end = 0xe180 }, /* HALF_SLICE_CHICKEN2 */ 4451 {} 4452 }; 4453 4454 static const struct i915_range hsw_oa_mux_regs[] = { 4455 { .start = 0x09e80, .end = 0x09ea4 }, /* HSW_MBVID2_NOA[0-9] */ 4456 { .start = 0x09ec0, .end = 0x09ec0 }, /* HSW_MBVID2_MISR0 */ 4457 { .start = 0x25100, .end = 0x2ff90 }, 4458 {} 4459 }; 4460 4461 static const struct i915_range chv_oa_mux_regs[] = { 4462 { .start = 0x182300, .end = 0x1823a4 }, 4463 {} 4464 }; 4465 4466 static const struct i915_range gen8_oa_mux_regs[] = { 4467 { .start = 0x0d00, .end = 0x0d2c }, /* RPM_CONFIG[0-1], NOA_CONFIG[0-8] */ 4468 { .start = 0x20cc, .end = 0x20cc }, /* WAIT_FOR_RC6_EXIT */ 4469 {} 4470 }; 4471 4472 static const struct i915_range gen11_oa_mux_regs[] = { 4473 { .start = 0x91c8, .end = 0x91dc }, /* OA_PERFCNT[3-4] */ 4474 {} 4475 }; 4476 4477 static const struct i915_range gen12_oa_mux_regs[] = { 4478 { .start = 0x0d00, .end = 0x0d04 }, /* RPM_CONFIG[0-1] */ 4479 { .start = 0x0d0c, .end = 0x0d2c }, /* NOA_CONFIG[0-8] */ 4480 { .start = 0x9840, .end = 0x9840 }, /* GDT_CHICKEN_BITS */ 4481 { .start = 0x9884, .end = 0x9888 }, /* NOA_WRITE */ 4482 { .start = 0x20cc, .end = 0x20cc }, /* WAIT_FOR_RC6_EXIT */ 4483 {} 4484 }; 4485 4486 /* 4487 * Ref: 14010536224: 4488 * 0x20cc is repurposed on MTL, so use a separate array for MTL. 4489 */ 4490 static const struct i915_range mtl_oa_mux_regs[] = { 4491 { .start = 0x0d00, .end = 0x0d04 }, /* RPM_CONFIG[0-1] */ 4492 { .start = 0x0d0c, .end = 0x0d2c }, /* NOA_CONFIG[0-8] */ 4493 { .start = 0x9840, .end = 0x9840 }, /* GDT_CHICKEN_BITS */ 4494 { .start = 0x9884, .end = 0x9888 }, /* NOA_WRITE */ 4495 { .start = 0x38d100, .end = 0x38d114}, /* VISACTL */ 4496 {} 4497 }; 4498 4499 static bool gen7_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr) 4500 { 4501 return reg_in_range_table(addr, gen7_oa_b_counters); 4502 } 4503 4504 static bool gen8_is_valid_mux_addr(struct i915_perf *perf, u32 addr) 4505 { 4506 return reg_in_range_table(addr, gen7_oa_mux_regs) || 4507 reg_in_range_table(addr, gen8_oa_mux_regs); 4508 } 4509 4510 static bool gen11_is_valid_mux_addr(struct i915_perf *perf, u32 addr) 4511 { 4512 return reg_in_range_table(addr, gen7_oa_mux_regs) || 4513 reg_in_range_table(addr, gen8_oa_mux_regs) || 4514 reg_in_range_table(addr, gen11_oa_mux_regs); 4515 } 4516 4517 static bool hsw_is_valid_mux_addr(struct i915_perf *perf, u32 addr) 4518 { 4519 return reg_in_range_table(addr, gen7_oa_mux_regs) || 4520 reg_in_range_table(addr, hsw_oa_mux_regs); 4521 } 4522 4523 static bool chv_is_valid_mux_addr(struct i915_perf *perf, u32 addr) 4524 { 4525 return reg_in_range_table(addr, gen7_oa_mux_regs) || 4526 reg_in_range_table(addr, chv_oa_mux_regs); 4527 } 4528 4529 static bool gen12_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr) 4530 { 4531 return reg_in_range_table(addr, gen12_oa_b_counters); 4532 } 4533 4534 static bool mtl_is_valid_oam_b_counter_addr(struct i915_perf *perf, u32 addr) 4535 { 4536 if (HAS_OAM(perf->i915) && 4537 GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 70)) 4538 return reg_in_range_table(addr, mtl_oam_b_counters); 4539 4540 return false; 4541 } 4542 4543 static bool xehp_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr) 4544 { 4545 return reg_in_range_table(addr, xehp_oa_b_counters) || 4546 reg_in_range_table(addr, gen12_oa_b_counters) || 4547 mtl_is_valid_oam_b_counter_addr(perf, addr); 4548 } 4549 4550 static bool gen12_is_valid_mux_addr(struct i915_perf *perf, u32 addr) 
4551 { 4552 if (IS_METEORLAKE(perf->i915)) 4553 return reg_in_range_table(addr, mtl_oa_mux_regs); 4554 else 4555 return reg_in_range_table(addr, gen12_oa_mux_regs); 4556 } 4557 4558 static u32 mask_reg_value(u32 reg, u32 val) 4559 { 4560 /* HALF_SLICE_CHICKEN2 is programmed with the 4561 * WaDisableSTUnitPowerOptimization workaround. Make sure the value 4562 * programmed by userspace doesn't change this. 4563 */ 4564 if (REG_EQUAL(reg, HALF_SLICE_CHICKEN2)) 4565 val = val & ~_MASKED_BIT_ENABLE(GEN8_ST_PO_DISABLE); 4566 4567 /* WAIT_FOR_RC6_EXIT has only one bit fulfilling the function 4568 * indicated by its name and a bunch of selection fields used by OA 4569 * configs. 4570 */ 4571 if (REG_EQUAL(reg, WAIT_FOR_RC6_EXIT)) 4572 val = val & ~_MASKED_BIT_ENABLE(HSW_WAIT_FOR_RC6_EXIT_ENABLE); 4573 4574 return val; 4575 } 4576 4577 static struct i915_oa_reg *alloc_oa_regs(struct i915_perf *perf, 4578 bool (*is_valid)(struct i915_perf *perf, u32 addr), 4579 u32 __user *regs, 4580 u32 n_regs) 4581 { 4582 struct i915_oa_reg *oa_regs; 4583 int err; 4584 u32 i; 4585 4586 if (!n_regs) 4587 return NULL; 4588 4589 /* No is_valid function means we're not allowing any register to be programmed. */ 4590 GEM_BUG_ON(!is_valid); 4591 if (!is_valid) 4592 return ERR_PTR(-EINVAL); 4593 4594 oa_regs = kmalloc_array(n_regs, sizeof(*oa_regs), GFP_KERNEL); 4595 if (!oa_regs) 4596 return ERR_PTR(-ENOMEM); 4597 4598 for (i = 0; i < n_regs; i++) { 4599 u32 addr, value; 4600 4601 err = get_user(addr, regs); 4602 if (err) 4603 goto addr_err; 4604 4605 if (!is_valid(perf, addr)) { 4606 drm_dbg(&perf->i915->drm, 4607 "Invalid oa_reg address: %X\n", addr); 4608 err = -EINVAL; 4609 goto addr_err; 4610 } 4611 4612 err = get_user(value, regs + 1); 4613 if (err) 4614 goto addr_err; 4615 4616 oa_regs[i].addr = _MMIO(addr); 4617 oa_regs[i].value = mask_reg_value(addr, value); 4618 4619 regs += 2; 4620 } 4621 4622 return oa_regs; 4623 4624 addr_err: 4625 kfree(oa_regs); 4626 return ERR_PTR(err); 4627 } 4628 4629 static ssize_t show_dynamic_id(struct kobject *kobj, 4630 struct kobj_attribute *attr, 4631 char *buf) 4632 { 4633 struct i915_oa_config *oa_config = 4634 container_of(attr, typeof(*oa_config), sysfs_metric_id); 4635 4636 return sprintf(buf, "%d\n", oa_config->id); 4637 } 4638 4639 static int create_dynamic_oa_sysfs_entry(struct i915_perf *perf, 4640 struct i915_oa_config *oa_config) 4641 { 4642 sysfs_attr_init(&oa_config->sysfs_metric_id.attr); 4643 oa_config->sysfs_metric_id.attr.name = "id"; 4644 oa_config->sysfs_metric_id.attr.mode = S_IRUGO; 4645 oa_config->sysfs_metric_id.show = show_dynamic_id; 4646 oa_config->sysfs_metric_id.store = NULL; 4647 4648 oa_config->attrs[0] = &oa_config->sysfs_metric_id.attr; 4649 oa_config->attrs[1] = NULL; 4650 4651 oa_config->sysfs_metric.name = oa_config->uuid; 4652 oa_config->sysfs_metric.attrs = oa_config->attrs; 4653 4654 return sysfs_create_group(perf->metrics_kobj, 4655 &oa_config->sysfs_metric); 4656 } 4657 4658 /** 4659 * i915_perf_add_config_ioctl - DRM ioctl() for userspace to add a new OA config 4660 * @dev: drm device 4661 * @data: ioctl data (pointer to struct drm_i915_perf_oa_config) copied from 4662 * userspace (unvalidated) 4663 * @file: drm file 4664 * 4665 * Validates the submitted OA registers to be saved into a new OA config that 4666 * can then be used for programming the OA unit and its NOA network. 4667 * 4668 * Returns: A newly allocated config number to be used with the perf open ioctl 4669 * or a negative error code on failure.
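 *
 * A rough sketch of the userspace side, assuming mux_regs points at
 * n_mux (address, value) u32 pairs and uuid_str is a 36 character UUID
 * string (illustrative only, boolean/flex registers elided):
 *
 *	struct drm_i915_perf_oa_config config = {};
 *	int config_id;
 *
 *	memcpy(config.uuid, uuid_str, sizeof(config.uuid));
 *	config.n_mux_regs = n_mux;
 *	config.mux_regs_ptr = (uintptr_t)mux_regs;
 *	config_id = ioctl(drm_fd, DRM_IOCTL_I915_PERF_ADD_CONFIG, &config);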
4670 */ 4671 int i915_perf_add_config_ioctl(struct drm_device *dev, void *data, 4672 struct drm_file *file) 4673 { 4674 struct i915_perf *perf = &to_i915(dev)->perf; 4675 struct drm_i915_perf_oa_config *args = data; 4676 struct i915_oa_config *oa_config, *tmp; 4677 struct i915_oa_reg *regs; 4678 int err, id; 4679 4680 if (!perf->i915) { 4681 drm_dbg(&perf->i915->drm, 4682 "i915 perf interface not available for this system\n"); 4683 return -ENOTSUPP; 4684 } 4685 4686 if (!perf->metrics_kobj) { 4687 drm_dbg(&perf->i915->drm, 4688 "OA metrics weren't advertised via sysfs\n"); 4689 return -EINVAL; 4690 } 4691 4692 if (i915_perf_stream_paranoid && !perfmon_capable()) { 4693 drm_dbg(&perf->i915->drm, 4694 "Insufficient privileges to add i915 OA config\n"); 4695 return -EACCES; 4696 } 4697 4698 if ((!args->mux_regs_ptr || !args->n_mux_regs) && 4699 (!args->boolean_regs_ptr || !args->n_boolean_regs) && 4700 (!args->flex_regs_ptr || !args->n_flex_regs)) { 4701 drm_dbg(&perf->i915->drm, 4702 "No OA registers given\n"); 4703 return -EINVAL; 4704 } 4705 4706 oa_config = kzalloc(sizeof(*oa_config), GFP_KERNEL); 4707 if (!oa_config) { 4708 drm_dbg(&perf->i915->drm, 4709 "Failed to allocate memory for the OA config\n"); 4710 return -ENOMEM; 4711 } 4712 4713 oa_config->perf = perf; 4714 kref_init(&oa_config->ref); 4715 4716 if (!uuid_is_valid(args->uuid)) { 4717 drm_dbg(&perf->i915->drm, 4718 "Invalid uuid format for OA config\n"); 4719 err = -EINVAL; 4720 goto reg_err; 4721 } 4722 4723 /* Last character in oa_config->uuid will be 0 because oa_config was 4724 * allocated with kzalloc(). 4725 */ 4726 memcpy(oa_config->uuid, args->uuid, sizeof(args->uuid)); 4727 4728 oa_config->mux_regs_len = args->n_mux_regs; 4729 regs = alloc_oa_regs(perf, 4730 perf->ops.is_valid_mux_reg, 4731 u64_to_user_ptr(args->mux_regs_ptr), 4732 args->n_mux_regs); 4733 4734 if (IS_ERR(regs)) { 4735 drm_dbg(&perf->i915->drm, 4736 "Failed to create OA config for mux_regs\n"); 4737 err = PTR_ERR(regs); 4738 goto reg_err; 4739 } 4740 oa_config->mux_regs = regs; 4741 4742 oa_config->b_counter_regs_len = args->n_boolean_regs; 4743 regs = alloc_oa_regs(perf, 4744 perf->ops.is_valid_b_counter_reg, 4745 u64_to_user_ptr(args->boolean_regs_ptr), 4746 args->n_boolean_regs); 4747 4748 if (IS_ERR(regs)) { 4749 drm_dbg(&perf->i915->drm, 4750 "Failed to create OA config for b_counter_regs\n"); 4751 err = PTR_ERR(regs); 4752 goto reg_err; 4753 } 4754 oa_config->b_counter_regs = regs; 4755 4756 if (GRAPHICS_VER(perf->i915) < 8) { 4757 if (args->n_flex_regs != 0) { 4758 err = -EINVAL; 4759 goto reg_err; 4760 } 4761 } else { 4762 oa_config->flex_regs_len = args->n_flex_regs; 4763 regs = alloc_oa_regs(perf, 4764 perf->ops.is_valid_flex_reg, 4765 u64_to_user_ptr(args->flex_regs_ptr), 4766 args->n_flex_regs); 4767 4768 if (IS_ERR(regs)) { 4769 drm_dbg(&perf->i915->drm, 4770 "Failed to create OA config for flex_regs\n"); 4771 err = PTR_ERR(regs); 4772 goto reg_err; 4773 } 4774 oa_config->flex_regs = regs; 4775 } 4776 4777 err = mutex_lock_interruptible(&perf->metrics_lock); 4778 if (err) 4779 goto reg_err; 4780 4781 /* We shouldn't have too many configs, so this iteration shouldn't be 4782 * too costly.
4783 */ 4784 idr_for_each_entry(&perf->metrics_idr, tmp, id) { 4785 if (!strcmp(tmp->uuid, oa_config->uuid)) { 4786 drm_dbg(&perf->i915->drm, 4787 "OA config already exists with this uuid\n"); 4788 err = -EADDRINUSE; 4789 goto sysfs_err; 4790 } 4791 } 4792 4793 err = create_dynamic_oa_sysfs_entry(perf, oa_config); 4794 if (err) { 4795 drm_dbg(&perf->i915->drm, 4796 "Failed to create sysfs entry for OA config\n"); 4797 goto sysfs_err; 4798 } 4799 4800 /* Config id 0 is invalid, id 1 for kernel stored test config. */ 4801 oa_config->id = idr_alloc(&perf->metrics_idr, 4802 oa_config, 2, 4803 0, GFP_KERNEL); 4804 if (oa_config->id < 0) { 4805 drm_dbg(&perf->i915->drm, 4806 "Failed to allocate an ID for OA config\n"); 4807 err = oa_config->id; 4808 goto sysfs_err; 4809 } 4810 id = oa_config->id; 4811 4812 drm_dbg(&perf->i915->drm, 4813 "Added config %s id=%i\n", oa_config->uuid, oa_config->id); 4814 mutex_unlock(&perf->metrics_lock); 4815 4816 return id; 4817 4818 sysfs_err: 4819 mutex_unlock(&perf->metrics_lock); 4820 reg_err: 4821 i915_oa_config_put(oa_config); 4822 drm_dbg(&perf->i915->drm, 4823 "Failed to add new OA config\n"); 4824 return err; 4825 } 4826 4827 /** 4828 * i915_perf_remove_config_ioctl - DRM ioctl() for userspace to remove an OA config 4829 * @dev: drm device 4830 * @data: ioctl data (pointer to u64 integer) copied from userspace 4831 * @file: drm file 4832 * 4833 * Configs can be removed while being used; they will stop appearing in sysfs 4834 * and their content will be freed when the stream using the config is closed. 4835 * 4836 * Returns: 0 on success or a negative error code on failure. 4837 */ 4838 int i915_perf_remove_config_ioctl(struct drm_device *dev, void *data, 4839 struct drm_file *file) 4840 { 4841 struct i915_perf *perf = &to_i915(dev)->perf; 4842 u64 *arg = data; 4843 struct i915_oa_config *oa_config; 4844 int ret; 4845 4846 if (!perf->i915) { 4847 drm_dbg(&perf->i915->drm, 4848 "i915 perf interface not available for this system\n"); 4849 return -ENOTSUPP; 4850 } 4851 4852 if (i915_perf_stream_paranoid && !perfmon_capable()) { 4853 drm_dbg(&perf->i915->drm, 4854 "Insufficient privileges to remove i915 OA config\n"); 4855 return -EACCES; 4856 } 4857 4858 ret = mutex_lock_interruptible(&perf->metrics_lock); 4859 if (ret) 4860 return ret; 4861 4862 oa_config = idr_find(&perf->metrics_idr, *arg); 4863 if (!oa_config) { 4864 drm_dbg(&perf->i915->drm, 4865 "Failed to remove unknown OA config\n"); 4866 ret = -ENOENT; 4867 goto err_unlock; 4868 } 4869 4870 GEM_BUG_ON(*arg != oa_config->id); 4871 4872 sysfs_remove_group(perf->metrics_kobj, &oa_config->sysfs_metric); 4873 4874 idr_remove(&perf->metrics_idr, *arg); 4875 4876 mutex_unlock(&perf->metrics_lock); 4877 4878 drm_dbg(&perf->i915->drm, 4879 "Removed config %s id=%i\n", oa_config->uuid, oa_config->id); 4880 4881 i915_oa_config_put(oa_config); 4882 4883 return 0; 4884 4885 err_unlock: 4886 mutex_unlock(&perf->metrics_lock); 4887 return ret; 4888 } 4889 4890 static struct ctl_table oa_table[] = { 4891 { 4892 .procname = "perf_stream_paranoid", 4893 .data = &i915_perf_stream_paranoid, 4894 .maxlen = sizeof(i915_perf_stream_paranoid), 4895 .mode = 0644, 4896 .proc_handler = proc_dointvec_minmax, 4897 .extra1 = SYSCTL_ZERO, 4898 .extra2 = SYSCTL_ONE, 4899 }, 4900 { 4901 .procname = "oa_max_sample_rate", 4902 .data = &i915_oa_max_sample_rate, 4903 .maxlen = sizeof(i915_oa_max_sample_rate), 4904 .mode = 0644, 4905 .proc_handler = proc_dointvec_minmax, 4906 .extra1 = SYSCTL_ZERO, 4907 .extra2 = &oa_sample_rate_hard_limit,
4908 }, 4909 {} 4910 }; 4911 4912 static u32 num_perf_groups_per_gt(struct intel_gt *gt) 4913 { 4914 return 1; 4915 } 4916 4917 static u32 __oam_engine_group(struct intel_engine_cs *engine) 4918 { 4919 if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 70)) { 4920 /* 4921 * There's 1 SAMEDIA gt and 1 OAM per SAMEDIA gt. All media slices 4922 * within the gt use the same OAM. All MTL SKUs list 1 SA MEDIA. 4923 */ 4924 drm_WARN_ON(&engine->i915->drm, 4925 engine->gt->type != GT_MEDIA); 4926 4927 return PERF_GROUP_OAM_SAMEDIA_0; 4928 } 4929 4930 return PERF_GROUP_INVALID; 4931 } 4932 4933 static u32 __oa_engine_group(struct intel_engine_cs *engine) 4934 { 4935 switch (engine->class) { 4936 case RENDER_CLASS: 4937 return PERF_GROUP_OAG; 4938 4939 case VIDEO_DECODE_CLASS: 4940 case VIDEO_ENHANCEMENT_CLASS: 4941 return __oam_engine_group(engine); 4942 4943 default: 4944 return PERF_GROUP_INVALID; 4945 } 4946 } 4947 4948 static struct i915_perf_regs __oam_regs(u32 base) 4949 { 4950 return (struct i915_perf_regs) { 4951 base, 4952 GEN12_OAM_HEAD_POINTER(base), 4953 GEN12_OAM_TAIL_POINTER(base), 4954 GEN12_OAM_BUFFER(base), 4955 GEN12_OAM_CONTEXT_CONTROL(base), 4956 GEN12_OAM_CONTROL(base), 4957 GEN12_OAM_DEBUG(base), 4958 GEN12_OAM_STATUS(base), 4959 GEN12_OAM_CONTROL_COUNTER_FORMAT_SHIFT, 4960 }; 4961 } 4962 4963 static struct i915_perf_regs __oag_regs(void) 4964 { 4965 return (struct i915_perf_regs) { 4966 0, 4967 GEN12_OAG_OAHEADPTR, 4968 GEN12_OAG_OATAILPTR, 4969 GEN12_OAG_OABUFFER, 4970 GEN12_OAG_OAGLBCTXCTRL, 4971 GEN12_OAG_OACONTROL, 4972 GEN12_OAG_OA_DEBUG, 4973 GEN12_OAG_OASTATUS, 4974 GEN12_OAG_OACONTROL_OA_COUNTER_FORMAT_SHIFT, 4975 }; 4976 } 4977 4978 static void oa_init_groups(struct intel_gt *gt) 4979 { 4980 int i, num_groups = gt->perf.num_perf_groups; 4981 4982 for (i = 0; i < num_groups; i++) { 4983 struct i915_perf_group *g = &gt->perf.group[i]; 4984 4985 /* Fused off engines can result in a group with num_engines == 0 */ 4986 if (g->num_engines == 0) 4987 continue; 4988 4989 if (i == PERF_GROUP_OAG && gt->type != GT_MEDIA) { 4990 g->regs = __oag_regs(); 4991 g->type = TYPE_OAG; 4992 } else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70)) { 4993 g->regs = __oam_regs(mtl_oa_base[i]); 4994 g->type = TYPE_OAM; 4995 } 4996 } 4997 } 4998 4999 static int oa_init_gt(struct intel_gt *gt) 5000 { 5001 u32 num_groups = num_perf_groups_per_gt(gt); 5002 struct intel_engine_cs *engine; 5003 struct i915_perf_group *g; 5004 intel_engine_mask_t tmp; 5005 5006 g = kcalloc(num_groups, sizeof(*g), GFP_KERNEL); 5007 if (!g) 5008 return -ENOMEM; 5009 5010 for_each_engine_masked(engine, gt, ALL_ENGINES, tmp) { 5011 u32 index = __oa_engine_group(engine); 5012 5013 engine->oa_group = NULL; 5014 if (index < num_groups) { 5015 g[index].num_engines++; 5016 engine->oa_group = &g[index]; 5017 } 5018 } 5019 5020 gt->perf.num_perf_groups = num_groups; 5021 gt->perf.group = g; 5022 5023 oa_init_groups(gt); 5024 5025 return 0; 5026 } 5027 5028 static int oa_init_engine_groups(struct i915_perf *perf) 5029 { 5030 struct intel_gt *gt; 5031 int i, ret; 5032 5033 for_each_gt(gt, perf->i915, i) { 5034 ret = oa_init_gt(gt); 5035 if (ret) 5036 return ret; 5037 } 5038 5039 return 0; 5040 } 5041 5042 static void oa_init_supported_formats(struct i915_perf *perf) 5043 { 5044 struct drm_i915_private *i915 = perf->i915; 5045 enum intel_platform platform = INTEL_INFO(i915)->platform; 5046 5047 switch (platform) { 5048 case INTEL_HASWELL: 5049 oa_format_add(perf, I915_OA_FORMAT_A13); 5050
oa_format_add(perf, I915_OA_FORMAT_A29); 5052 oa_format_add(perf, I915_OA_FORMAT_A13_B8_C8); 5053 oa_format_add(perf, I915_OA_FORMAT_B4_C8); 5054 oa_format_add(perf, I915_OA_FORMAT_A45_B8_C8); 5055 oa_format_add(perf, I915_OA_FORMAT_B4_C8_A16); 5056 oa_format_add(perf, I915_OA_FORMAT_C4_B8); 5057 break; 5058 5059 case INTEL_BROADWELL: 5060 case INTEL_CHERRYVIEW: 5061 case INTEL_SKYLAKE: 5062 case INTEL_BROXTON: 5063 case INTEL_KABYLAKE: 5064 case INTEL_GEMINILAKE: 5065 case INTEL_COFFEELAKE: 5066 case INTEL_COMETLAKE: 5067 case INTEL_ICELAKE: 5068 case INTEL_ELKHARTLAKE: 5069 case INTEL_JASPERLAKE: 5070 case INTEL_TIGERLAKE: 5071 case INTEL_ROCKETLAKE: 5072 case INTEL_DG1: 5073 case INTEL_ALDERLAKE_S: 5074 case INTEL_ALDERLAKE_P: 5075 oa_format_add(perf, I915_OA_FORMAT_A12); 5076 oa_format_add(perf, I915_OA_FORMAT_A12_B8_C8); 5077 oa_format_add(perf, I915_OA_FORMAT_A32u40_A4u32_B8_C8); 5078 oa_format_add(perf, I915_OA_FORMAT_C4_B8); 5079 break; 5080 5081 case INTEL_DG2: 5082 oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8); 5083 oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8); 5084 break; 5085 5086 case INTEL_METEORLAKE: 5087 oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8); 5088 oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8); 5089 oa_format_add(perf, I915_OAM_FORMAT_MPEC8u64_B8_C8); 5090 oa_format_add(perf, I915_OAM_FORMAT_MPEC8u32_B8_C8); 5091 break; 5092 5093 default: 5094 MISSING_CASE(platform); 5095 } 5096 } 5097 5098 static void i915_perf_init_info(struct drm_i915_private *i915) 5099 { 5100 struct i915_perf *perf = &i915->perf; 5101 5102 switch (GRAPHICS_VER(i915)) { 5103 case 8: 5104 perf->ctx_oactxctrl_offset = 0x120; 5105 perf->ctx_flexeu0_offset = 0x2ce; 5106 perf->gen8_valid_ctx_bit = BIT(25); 5107 break; 5108 case 9: 5109 perf->ctx_oactxctrl_offset = 0x128; 5110 perf->ctx_flexeu0_offset = 0x3de; 5111 perf->gen8_valid_ctx_bit = BIT(16); 5112 break; 5113 case 11: 5114 perf->ctx_oactxctrl_offset = 0x124; 5115 perf->ctx_flexeu0_offset = 0x78e; 5116 perf->gen8_valid_ctx_bit = BIT(16); 5117 break; 5118 case 12: 5119 /* 5120 * Calculate offset at runtime in oa_pin_context for gen12 and 5121 * cache the value in perf->ctx_oactxctrl_offset. 5122 */ 5123 break; 5124 default: 5125 MISSING_CASE(GRAPHICS_VER(i915)); 5126 } 5127 } 5128 5129 /** 5130 * i915_perf_init - initialize i915-perf state on module bind 5131 * @i915: i915 device instance 5132 * 5133 * Initializes i915-perf state without exposing anything to userspace. 5134 * 5135 * Note: i915-perf initialization is split into an 'init' and 'register' 5136 * phase with the i915_perf_register() exposing state to userspace.
5137 */ 5138 int i915_perf_init(struct drm_i915_private *i915) 5139 { 5140 struct i915_perf *perf = &i915->perf; 5141 5142 perf->oa_formats = oa_formats; 5143 if (IS_HASWELL(i915)) { 5144 perf->ops.is_valid_b_counter_reg = gen7_is_valid_b_counter_addr; 5145 perf->ops.is_valid_mux_reg = hsw_is_valid_mux_addr; 5146 perf->ops.is_valid_flex_reg = NULL; 5147 perf->ops.enable_metric_set = hsw_enable_metric_set; 5148 perf->ops.disable_metric_set = hsw_disable_metric_set; 5149 perf->ops.oa_enable = gen7_oa_enable; 5150 perf->ops.oa_disable = gen7_oa_disable; 5151 perf->ops.read = gen7_oa_read; 5152 perf->ops.oa_hw_tail_read = gen7_oa_hw_tail_read; 5153 } else if (HAS_LOGICAL_RING_CONTEXTS(i915)) { 5154 /* Note that although we could theoretically also support the 5155 * legacy ringbuffer mode on BDW (and earlier iterations of 5156 * this driver, before upstreaming did this) it didn't seem 5157 * worth the complexity to maintain now that BDW+ enable 5158 * execlist mode by default. 5159 */ 5160 perf->ops.read = gen8_oa_read; 5161 i915_perf_init_info(i915); 5162 5163 if (IS_GRAPHICS_VER(i915, 8, 9)) { 5164 perf->ops.is_valid_b_counter_reg = 5165 gen7_is_valid_b_counter_addr; 5166 perf->ops.is_valid_mux_reg = 5167 gen8_is_valid_mux_addr; 5168 perf->ops.is_valid_flex_reg = 5169 gen8_is_valid_flex_addr; 5170 5171 if (IS_CHERRYVIEW(i915)) { 5172 perf->ops.is_valid_mux_reg = 5173 chv_is_valid_mux_addr; 5174 } 5175 5176 perf->ops.oa_enable = gen8_oa_enable; 5177 perf->ops.oa_disable = gen8_oa_disable; 5178 perf->ops.enable_metric_set = gen8_enable_metric_set; 5179 perf->ops.disable_metric_set = gen8_disable_metric_set; 5180 perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read; 5181 } else if (GRAPHICS_VER(i915) == 11) { 5182 perf->ops.is_valid_b_counter_reg = 5183 gen7_is_valid_b_counter_addr; 5184 perf->ops.is_valid_mux_reg = 5185 gen11_is_valid_mux_addr; 5186 perf->ops.is_valid_flex_reg = 5187 gen8_is_valid_flex_addr; 5188 5189 perf->ops.oa_enable = gen8_oa_enable; 5190 perf->ops.oa_disable = gen8_oa_disable; 5191 perf->ops.enable_metric_set = gen8_enable_metric_set; 5192 perf->ops.disable_metric_set = gen11_disable_metric_set; 5193 perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read; 5194 } else if (GRAPHICS_VER(i915) == 12) { 5195 perf->ops.is_valid_b_counter_reg = 5196 HAS_OA_SLICE_CONTRIB_LIMITS(i915) ? 5197 xehp_is_valid_b_counter_addr : 5198 gen12_is_valid_b_counter_addr; 5199 perf->ops.is_valid_mux_reg = 5200 gen12_is_valid_mux_addr; 5201 perf->ops.is_valid_flex_reg = 5202 gen8_is_valid_flex_addr; 5203 5204 perf->ops.oa_enable = gen12_oa_enable; 5205 perf->ops.oa_disable = gen12_oa_disable; 5206 perf->ops.enable_metric_set = gen12_enable_metric_set; 5207 perf->ops.disable_metric_set = gen12_disable_metric_set; 5208 perf->ops.oa_hw_tail_read = gen12_oa_hw_tail_read; 5209 } 5210 } 5211 5212 if (perf->ops.enable_metric_set) { 5213 struct intel_gt *gt; 5214 int i, ret; 5215 5216 for_each_gt(gt, i915, i) 5217 mutex_init(&gt->perf.lock); 5218 5219 /* Choose a representative limit */ 5220 oa_sample_rate_hard_limit = to_gt(i915)->clock_frequency / 2; 5221 5222 mutex_init(&perf->metrics_lock); 5223 idr_init_base(&perf->metrics_idr, 1); 5224 5225 /* We set up some ratelimit state to potentially throttle any 5226 * _NOTES about spurious, invalid OA reports which we don't 5227 * forward to userspace. 5228 * 5229 * We print a _NOTE about any throttling when closing the 5230 * stream instead of waiting until driver _fini which no one 5231 * would ever see.
5232 * 5233 * Using the same limiting factors as printk_ratelimit() 5234 */ 5235 ratelimit_state_init(&perf->spurious_report_rs, 5 * HZ, 10); 5236 /* Since we use a DRM_NOTE for spurious reports it would be 5237 * inconsistent to let __ratelimit() automatically print a 5238 * warning for throttling. 5239 */ 5240 ratelimit_set_flags(&perf->spurious_report_rs, 5241 RATELIMIT_MSG_ON_RELEASE); 5242 5243 ratelimit_state_init(&perf->tail_pointer_race, 5244 5 * HZ, 10); 5245 ratelimit_set_flags(&perf->tail_pointer_race, 5246 RATELIMIT_MSG_ON_RELEASE); 5247 5248 atomic64_set(&perf->noa_programming_delay, 5249 500 * 1000 /* 500us */); 5250 5251 perf->i915 = i915; 5252 5253 ret = oa_init_engine_groups(perf); 5254 if (ret) { 5255 drm_err(&i915->drm, 5256 "OA initialization failed %d\n", ret); 5257 return ret; 5258 } 5259 5260 oa_init_supported_formats(perf); 5261 } 5262 5263 return 0; 5264 } 5265 5266 static int destroy_config(int id, void *p, void *data) 5267 { 5268 i915_oa_config_put(p); 5269 return 0; 5270 } 5271 5272 int i915_perf_sysctl_register(void) 5273 { 5274 sysctl_header = register_sysctl("dev/i915", oa_table); 5275 return 0; 5276 } 5277 5278 void i915_perf_sysctl_unregister(void) 5279 { 5280 unregister_sysctl_table(sysctl_header); 5281 } 5282 5283 /** 5284 * i915_perf_fini - Counterpart to i915_perf_init() 5285 * @i915: i915 device instance 5286 */ 5287 void i915_perf_fini(struct drm_i915_private *i915) 5288 { 5289 struct i915_perf *perf = &i915->perf; 5290 struct intel_gt *gt; 5291 int i; 5292 5293 if (!perf->i915) 5294 return; 5295 5296 for_each_gt(gt, perf->i915, i) 5297 kfree(gt->perf.group); 5298 5299 idr_for_each(&perf->metrics_idr, destroy_config, perf); 5300 idr_destroy(&perf->metrics_idr); 5301 5302 memset(&perf->ops, 0, sizeof(perf->ops)); 5303 perf->i915 = NULL; 5304 } 5305 5306 /** 5307 * i915_perf_ioctl_version - Version of the i915-perf subsystem * @i915: i915 device instance 5308 * 5309 * This version number is used by userspace to detect available features. 5310 */ 5311 int i915_perf_ioctl_version(struct drm_i915_private *i915) 5312 { 5313 /* 5314 * 1: Initial version 5315 * I915_PERF_IOCTL_ENABLE 5316 * I915_PERF_IOCTL_DISABLE 5317 * 5318 * 2: Added runtime modification of OA config. 5319 * I915_PERF_IOCTL_CONFIG 5320 * 5321 * 3: Add DRM_I915_PERF_PROP_HOLD_PREEMPTION parameter to hold 5322 * preemption on a particular context so that performance data is 5323 * accessible from a delta of MI_RPC reports without looking at the 5324 * OA buffer. 5325 * 5326 * 4: Add DRM_I915_PERF_PROP_ALLOWED_SSEU to limit what contexts can 5327 * be run for the duration of the performance recording based on 5328 * their SSEU configuration. 5329 * 5330 * 5: Add DRM_I915_PERF_PROP_POLL_OA_PERIOD parameter that controls the 5331 * interval for the hrtimer used to check for OA data. 5332 * 5333 * 6: Add DRM_I915_PERF_PROP_OA_ENGINE_CLASS and 5334 * DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE 5335 * 5336 * 7: Add support for video decode and enhancement classes. 5337 */ 5338 5339 /* 5340 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media 5341 * C6 disable in BIOS. If Media C6 is enabled in BIOS, return version 6 5342 * to indicate that OA media is not supported.
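 *
 * Userspace reads this version through the I915_PARAM_PERF_REVISION
 * getparam; as a rough sketch (drm_fd assumed open):
 *
 *	int perf_rev = 0;
 *	struct drm_i915_getparam gp = {
 *		.param = I915_PARAM_PERF_REVISION,
 *		.value = &perf_rev,
 *	};
 *
 *	ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp);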
5343 */ 5344 if (IS_MTL_MEDIA_STEP(i915, STEP_A0, STEP_C0)) { 5345 struct intel_gt *gt; 5346 int i; 5347 5348 for_each_gt(gt, i915, i) { 5349 if (gt->type == GT_MEDIA && 5350 intel_check_bios_c6_setup(&gt->rc6)) 5351 return 6; 5352 } 5353 } 5354 5355 return 7; 5356 } 5357 5358 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) 5359 #include "selftests/i915_perf.c" 5360 #endif 5361