/*
 * Copyright © 2015-2016 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *   Robert Bragg <robert@sixbynine.org>
 */


/**
 * DOC: i915 Perf Overview
 *
 * Gen graphics supports a large number of performance counters that can help
 * driver and application developers understand and optimize their use of the
 * GPU.
 *
 * This i915 perf interface enables userspace to configure and open a file
 * descriptor representing a stream of GPU metrics which can then be read() as
 * a stream of sample records.
 *
 * The interface is particularly suited to exposing buffered metrics that are
 * captured by DMA from the GPU, unsynchronized with and unrelated to the CPU.
 *
 * Streams representing a single context are accessible to applications with a
 * corresponding drm file descriptor, such that OpenGL can use the interface
 * without special privileges. Access to system-wide metrics requires root
 * privileges by default, unless changed via the dev.i915.perf_stream_paranoid
 * sysctl option.
 */
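/*
 * Illustrative sketch of opening a periodic, system-wide OA stream from
 * userspace (not part of this driver): it assumes an already opened DRM fd,
 * a metrics_set_id discovered via sysfs, and uses only the uAPI declared in
 * include/uapi/drm/i915_drm.h:
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
 *	};
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 *
 * The returned fd can then be read() for a stream of records, each prefixed
 * by a struct drm_i915_perf_record_header.
 */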
/**
 * DOC: i915 Perf History and Comparison with Core Perf
 *
 * The interface was initially inspired by the core Perf infrastructure but
 * some notable differences are:
 *
 * i915 perf file descriptors represent a "stream" instead of an "event"; where
 * a perf event primarily corresponds to a single 64bit value, while a stream
 * might sample sets of tightly-coupled counters, depending on the
 * configuration. For example the Gen OA unit isn't designed to support
 * orthogonal configurations of individual counters; it's configured for a set
 * of related counters. Samples for an i915 perf stream capturing OA metrics
 * will include a set of counter values packed in a compact HW specific format.
 * The OA unit supports a number of different packing formats which can be
 * selected by the user opening the stream. Perf has support for grouping
 * events, but each event in the group is configured, validated and
 * authenticated individually with separate system calls.
 *
 * i915 perf stream configurations are provided as an array of u64 (key,value)
 * pairs, instead of a fixed struct with multiple miscellaneous config members,
 * interleaved with event-type specific members.
 *
 * i915 perf doesn't support exposing metrics via an mmap'd circular buffer.
 * The supported metrics are being written to memory by the GPU unsynchronized
 * with the CPU, using HW specific packing formats for counter sets. Sometimes
 * the constraints on HW configuration require reports to be filtered before it
 * would be acceptable to expose them to unprivileged applications - to hide
 * the metrics of other processes/contexts. For these use cases a read() based
 * interface is a good fit, and provides an opportunity to filter data as it
 * gets copied from the GPU mapped buffers to userspace buffers.
 *
 *
 * Issues hit with first prototype based on Core Perf
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
 * The first prototype of this driver was based on the core perf
 * infrastructure, and while we did make that mostly work, with some changes to
 * perf, we found we were breaking or working around too many assumptions baked
 * into perf's currently cpu centric design.
 *
 * In the end we didn't see a clear benefit to making perf's implementation and
 * interface more complex by changing design assumptions while we knew we still
 * wouldn't be able to use any existing perf based userspace tools.
 *
 * Also considering the Gen specific nature of the Observability hardware and
 * how userspace will sometimes need to combine i915 perf OA metrics with
 * side-band OA data captured via MI_REPORT_PERF_COUNT commands; we're
 * expecting the interface to be used by a platform specific userspace such as
 * OpenGL or tools. This is to say; we aren't inherently missing out on having
 * a standard vendor/architecture agnostic interface by not using perf.
 *
 *
 * For posterity, in case we might re-visit trying to adapt core perf to be
 * better suited to exposing i915 metrics these were the main pain points we
 * hit:
 *
 * - The perf based OA PMU driver broke some significant design assumptions:
 *
 *   Existing perf pmus are used for profiling work on a cpu and we were
 *   introducing the idea of _IS_DEVICE pmus with different security
 *   implications, the need to fake cpu-related data (such as user/kernel
 *   registers) to fit with perf's current design, and adding _DEVICE records
 *   as a way to forward device-specific status records.
 *
 *   The OA unit writes reports of counters into a circular buffer, without
 *   involvement from the CPU, making our PMU driver the first of a kind.
 *
 *   Given the way we were periodically forwarding data from the GPU-mapped OA
 *   buffer to perf's buffer, those bursts of sample writes looked to perf like
 *   we were sampling too fast and so we had to subvert its throttling checks.
 *
 *   Perf supports groups of counters and allows those to be read via
 *   transactions internally but transactions currently seem designed to be
 *   explicitly initiated from the cpu (say in response to a userspace read())
 *   and while we could pull a report out of the OA buffer we can't
 *   trigger a report from the cpu on demand.
 *
 *   Related to being report based; the OA counters are configured in HW as a
 *   set while perf generally expects counter configurations to be orthogonal.
 *   Although counters can be associated with a group leader as they are
 *   opened, there's no clear precedent for being able to provide group-wide
 *   configuration attributes (for example we want to let userspace choose the
 *   OA unit report format used to capture all counters in a set, or specify a
 *   GPU context to filter metrics on). We avoided using perf's grouping
 *   feature and forwarded OA reports to userspace via perf's 'raw' sample
 *   field. This suited our userspace well considering how coupled the counters
 *   are when dealing with normalizing. It would be inconvenient to split
 *   counters up into separate events, only to require userspace to recombine
 *   them. For Mesa it's also convenient to be forwarded raw, periodic reports
 *   for combining with the side-band raw reports it captures using
 *   MI_REPORT_PERF_COUNT commands.
 *
 * - As a side note on perf's grouping feature; there was also some concern
 *   that using PERF_FORMAT_GROUP as a way to pack together counter values
 *   would quite drastically inflate our sample sizes, which would likely
 *   lower the effective sampling resolutions we could use when the available
 *   memory bandwidth is limited.
 *
 *   With the OA unit's report formats, counters are packed together as 32
 *   or 40bit values, with the largest report size being 256 bytes.
 *
 *   PERF_FORMAT_GROUP values are 64bit, but there doesn't appear to be a
 *   documented ordering to the values, implying PERF_FORMAT_ID must also be
 *   used to add a 64bit ID before each value; giving 16 bytes per counter.
 *
 * - Related to counter orthogonality; we can't time share the OA unit, while
 *   event scheduling is a central design idea within perf for allowing
 *   userspace to open + enable more events than can be configured in HW at any
 *   one time. The OA unit is not designed to allow re-configuration while in
 *   use. We can't reconfigure the OA unit without losing internal OA unit
 *   state which we can't access explicitly to save and restore. Reconfiguring
 *   the OA unit is also relatively slow, involving ~100 register writes. From
 *   userspace Mesa also depends on a stable OA configuration when emitting
 *   MI_REPORT_PERF_COUNT commands and importantly the OA unit can't be
 *   disabled while there are outstanding MI_RPC commands lest we hang the
 *   command streamer.
 *
 * - The contents of sample records aren't extensible by device drivers (i.e.
 *   the sample_type bits). As an example; Sourab Gupta had been looking to
 *   attach GPU timestamps to our OA samples. We were shoehorning OA reports
 *   into sample records by using the 'raw' field, but it's tricky to pack more
 *   than one thing into this field because events/core.c currently only lets a
 *   pmu give a single raw data pointer plus len which will be copied into the
 *   ring buffer. To include more than the OA report we'd have to copy the
 *   report into an intermediate larger buffer. I'd been considering allowing a
 *   vector of data+len values to be specified for copying the raw data, but
 *   it felt like a kludge to be using the raw field for this purpose.
 *
 * - It felt like our perf based PMU was making some technical compromises
 *   just for the sake of using perf:
 *
 *   perf_event_open() requires events to either relate to a pid or a specific
 *   cpu core, while our device pmu related to neither. Events opened with a
 *   pid will be automatically enabled/disabled according to the scheduling of
 *   that process - so not appropriate for us. When an event is related to a
 *   cpu id, perf ensures pmu methods will be invoked via an inter process
 *   interrupt on that core. To avoid invasive changes our userspace opened OA
 *   perf events for a specific cpu. This was workable but it meant the
 *   majority of the OA driver ran in atomic context, including all OA report
 *   forwarding, which wasn't really necessary in our case and seemed to make
 *   our locking requirements somewhat complex as we handled the interaction
 *   with the rest of the i915 driver.
 */

#include <linux/anon_inodes.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/uuid.h>

#include "gem/i915_gem_context.h"
#include "gem/i915_gem_internal.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_regs.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_execlists_submission.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_clock_utils.h"
#include "gt/intel_gt_mcr.h"
#include "gt/intel_gt_regs.h"
#include "gt/intel_lrc.h"
#include "gt/intel_lrc_reg.h"
#include "gt/intel_rc6.h"
#include "gt/intel_ring.h"
#include "gt/uc/intel_guc_slpc.h"

#include "i915_drv.h"
#include "i915_file_private.h"
#include "i915_perf.h"
#include "i915_perf_oa_regs.h"
#include "i915_reg.h"

/* HW requires this to be a power of two, between 128k and 16M, though driver
 * is currently generally designed assuming the largest 16M size is used such
 * that the overflow cases are unlikely in normal operation.
 */
#define OA_BUFFER_SIZE		SZ_16M

#define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))

/**
 * DOC: OA Tail Pointer Race
 *
 * There's a HW race condition between OA unit tail pointer register updates and
 * writes to memory whereby the tail pointer can sometimes get ahead of what's
 * been written out to the OA buffer so far (in terms of what's visible to the
 * CPU).
 *
 * Although this can be observed explicitly while copying reports to userspace
 * by checking for a zeroed report-id field in tail reports, we want to account
 * for this earlier, as part of oa_buffer_check_unlocked(), to avoid lots of
 * redundant read() attempts.
 *
 * We work around this issue in oa_buffer_check_unlocked() by reading the
 * reports in the OA buffer, starting from the tail reported by the HW, until
 * we find a report with its first 2 dwords not 0, meaning the preceding report
 * has completely landed in memory and is ready to be read. Those dwords are
 * also set to 0 once read and the whole buffer is cleared upon OA buffer
 * initialization. The first dword is the reason for this report while the
 * second is the timestamp, making the chances of having those 2 fields at 0
 * fairly unlikely. A more detailed explanation is available in
 * oa_buffer_check_unlocked().
 *
 * Most of the implementation details for this workaround are in
 * oa_buffer_check_unlocked() and _append_oa_reports().
 *
 * Note for posterity: previously the driver used to define an effective tail
 * pointer that lagged the real pointer by a 'tail margin' measured in bytes
 * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
 * This was flawed considering that the OA unit may also automatically generate
 * non-periodic reports (such as on context switch) or the OA unit may be
 * enabled without any periodic sampling.
 */
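/*
 * Worked example of the modular arithmetic in OA_TAKEN() above (illustrative
 * only): with OA_BUFFER_SIZE = 16M the mask is 0xffffff, so for a tail that
 * has wrapped past the end of the buffer while the head has not:
 *
 *	OA_TAKEN(0x000020, 0xffffe0) == (0x000020 - 0xffffe0) & 0xffffff
 *				     == 0x40
 *
 * i.e. 64 bytes are available to read, spanning the wrap at the end of the
 * buffer.
 */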
#define OA_TAIL_MARGIN_NSEC	100000ULL
#define INVALID_TAIL_PTR	0xffffffff

/* The default frequency for checking whether the OA unit has written new
 * reports to the circular OA buffer...
 */
#define DEFAULT_POLL_FREQUENCY_HZ 200
#define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)

/* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
static u32 i915_perf_stream_paranoid = true;

/* The maximum exponent the hardware accepts is 63 (essentially it selects one
 * of the 64bit timestamp bits to trigger reports from) but there's currently
 * no known use case for sampling as infrequently as once per 47 thousand years.
 *
 * Since the timestamps included in OA reports are only 32bits it seems
 * reasonable to limit the OA exponent where it's still possible to account for
 * overflow in OA report timestamps.
 */
#define OA_EXPONENT_MAX 31
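/*
 * Worked example (illustrative, using figures quoted in the comments below):
 * a periodic report is triggered each time the selected timestamp bit
 * toggles, so an exponent of n gives a sampling period of 2^(n + 1)
 * timestamp ticks. With Haswell's 12.5MHz timestamp frequency, exponent 0
 * samples every 2 ticks = 160ns (the 6.25MHz limit mentioned below), while
 * OA_EXPONENT_MAX = 31 gives 2^32 ticks, or roughly 343 seconds, by which
 * point the 32bit timestamps in the reports would themselves have wrapped.
 */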
#define INVALID_CTX_ID 0xffffffff

/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
#define OAREPORT_REASON_MASK           0x3f
#define OAREPORT_REASON_MASK_EXTENDED  0x7f
#define OAREPORT_REASON_SHIFT          19
#define OAREPORT_REASON_TIMER          (1<<0)
#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
#define OAREPORT_REASON_CLK_RATIO      (1<<5)

#define HAS_MI_SET_PREDICATE(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))

/* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
 *
 * The highest sampling frequency we can theoretically program the OA unit
 * with is always half the timestamp frequency: E.g. 6.25Mhz for Haswell.
 *
 * Initialized just before we register the sysctl parameter.
 */
static int oa_sample_rate_hard_limit;

/* Theoretically we can program the OA unit to sample every 160ns but don't
 * allow that by default unless root...
 *
 * The default threshold of 100000Hz is based on perf's similar
 * kernel.perf_event_max_sample_rate sysctl parameter.
 */
static u32 i915_oa_max_sample_rate = 100000;

/* XXX: beware if future OA HW adds new report formats that the current
 * code assumes all reports have a power-of-two size and ~(size - 1) can
 * be used as a mask to align the OA tail pointer.
 */
static const struct i915_oa_format oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A13]		     = { 0, 64 },
	[I915_OA_FORMAT_A29]		     = { 1, 128 },
	[I915_OA_FORMAT_A13_B8_C8]	     = { 2, 128 },
	/* A29_B8_C8 Disallowed as 192 bytes doesn't factor into buffer size */
	[I915_OA_FORMAT_B4_C8]		     = { 4, 64 },
	[I915_OA_FORMAT_A45_B8_C8]	     = { 5, 256 },
	[I915_OA_FORMAT_B4_C8_A16]	     = { 6, 128 },
	[I915_OA_FORMAT_C4_B8]		     = { 7, 64 },
	[I915_OA_FORMAT_A12]		     = { 0, 64 },
	[I915_OA_FORMAT_A12_B8_C8]	     = { 2, 128 },
	[I915_OA_FORMAT_A32u40_A4u32_B8_C8]  = { 5, 256 },
	[I915_OAR_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
	[I915_OA_FORMAT_A24u40_A14u32_B8_C8] = { 5, 256 },
	[I915_OAM_FORMAT_MPEC8u64_B8_C8]     = { 1, 192, TYPE_OAM, HDR_64_BIT },
	[I915_OAM_FORMAT_MPEC8u32_B8_C8]     = { 2, 128, TYPE_OAM, HDR_64_BIT },
};

static const u32 mtl_oa_base[] = {
	[PERF_GROUP_OAM_SAMEDIA_0] = 0x393000,
};

#define SAMPLE_OA_REPORT	(1<<0)

/**
 * struct perf_open_properties - for validated properties given to open a stream
 * @sample_flags: `DRM_I915_PERF_PROP_SAMPLE_*` properties are tracked as flags
 * @single_context: Whether a single or all gpu contexts should be monitored
 * @hold_preemption: Whether preemption is disabled for the filtered context
 * @ctx_handle: A gem ctx handle for use with @single_context
 * @metrics_set: An ID for an OA unit metric set advertised via sysfs
 * @oa_format: An OA unit HW report format
 * @oa_periodic: Whether to enable periodic OA unit sampling
 * @oa_period_exponent: The OA unit sampling period is derived from this
 * @engine: The engine (typically rcs0) being monitored by the OA unit
 * @has_sseu: Whether @sseu was specified by userspace
 * @sseu: internal SSEU configuration computed either from the userspace
 *        specified configuration in the opening parameters or a default value
 *        (see get_default_sseu_config())
 * @poll_oa_period: The period in nanoseconds at which the CPU will check for OA
 *                  data availability
 *
 * As read_properties_unlocked() enumerates and validates the properties given
 * to open a stream of metrics the configuration is built up in the structure
 * which starts out zero initialized.
 */
struct perf_open_properties {
	u32 sample_flags;

	u64 single_context:1;
	u64 hold_preemption:1;
	u64 ctx_handle;

	/* OA sampling state */
	int metrics_set;
	int oa_format;
	bool oa_periodic;
	int oa_period_exponent;

	struct intel_engine_cs *engine;

	bool has_sseu;
	struct intel_sseu sseu;

	u64 poll_oa_period;
};

struct i915_oa_config_bo {
	struct llist_node node;

	struct i915_oa_config *oa_config;
	struct i915_vma *vma;
};

static struct ctl_table_header *sysctl_header;

static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer);

void i915_oa_config_release(struct kref *ref)
{
	struct i915_oa_config *oa_config =
		container_of(ref, typeof(*oa_config), ref);

	kfree(oa_config->flex_regs);
	kfree(oa_config->b_counter_regs);
	kfree(oa_config->mux_regs);

	kfree_rcu(oa_config, rcu);
}

struct i915_oa_config *
i915_perf_get_oa_config(struct i915_perf *perf, int metrics_set)
{
	struct i915_oa_config *oa_config;

	rcu_read_lock();
	oa_config = idr_find(&perf->metrics_idr, metrics_set);
	if (oa_config)
		oa_config = i915_oa_config_get(oa_config);
	rcu_read_unlock();

	return oa_config;
}

static void free_oa_config_bo(struct i915_oa_config_bo *oa_bo)
{
	i915_oa_config_put(oa_bo->oa_config);
	i915_vma_put(oa_bo->vma);
	kfree(oa_bo);
}

static inline const
struct i915_perf_regs *__oa_regs(struct i915_perf_stream *stream)
{
	return &stream->engine->oa_group->regs;
}

static u32 gen12_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, __oa_regs(stream)->oa_tail_ptr) &
	       GEN12_OAG_OATAILPTR_MASK;
}

static u32 gen8_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
}

static u32 gen7_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
}

#define oa_report_header_64bit(__s) \
	((__s)->oa_buffer.format->header == HDR_64_BIT)
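/*
 * Summary of the report header layout implied by the accessors below (an
 * informal aid, not lifted from the PRM): formats with a 32bit header keep
 * the report id (including the reason bits) in dword 0, the timestamp in
 * dword 1 and the context ID in dword 2, while HDR_64_BIT formats widen the
 * report id to dwords 0-1 and the timestamp to dwords 2-3, moving the
 * context ID to dword 4.
 */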
static u64 oa_report_id(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ? *(u64 *)report : *(u32 *)report;
}

static u64 oa_report_reason(struct i915_perf_stream *stream, void *report)
{
	return (oa_report_id(stream, report) >> OAREPORT_REASON_SHIFT) &
	       (GRAPHICS_VER(stream->perf->i915) == 12 ?
		OAREPORT_REASON_MASK_EXTENDED :
		OAREPORT_REASON_MASK);
}

static void oa_report_id_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)report = 0;
	else
		*report = 0;
}

static bool oa_report_ctx_invalid(struct i915_perf_stream *stream, void *report)
{
	return !(oa_report_id(stream, report) &
		 stream->perf->gen8_valid_ctx_bit) &&
	       GRAPHICS_VER(stream->perf->i915) <= 11;
}

static u64 oa_timestamp(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ?
	       *((u64 *)report + 1) :
	       *((u32 *)report + 1);
}

static void oa_timestamp_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)&report[2] = 0;
	else
		report[1] = 0;
}

static u32 oa_context_id(struct i915_perf_stream *stream, u32 *report)
{
	u32 ctx_id = oa_report_header_64bit(stream) ? report[4] : report[2];

	return ctx_id & stream->specific_ctx_id_mask;
}

static void oa_context_id_squash(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		report[4] = INVALID_CTX_ID;
	else
		report[2] = INVALID_CTX_ID;
}

/**
 * oa_buffer_check_unlocked - check for data and update tail ptr state
 * @stream: i915 stream instance
 *
 * This is either called via fops (for blocking reads in user ctx) or the poll
 * check hrtimer (atomic ctx) to check the OA buffer tail pointer and check
 * if there is data available for userspace to read.
 *
 * This function is central to providing a workaround for the OA unit tail
 * pointer having a race with respect to what data is visible to the CPU.
 * It is responsible for reading tail pointers from the hardware and giving
 * the pointers time to 'age' before they are made available for reading.
 * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
 *
 * Besides returning true when there is data available to read() this function
 * also updates the tail in the oa_buffer object.
 *
 * Note: It's safe to read OA config state here unlocked, assuming that this is
 * only called while the stream is enabled, while the global OA configuration
 * can't be modified.
 *
 * Returns: %true if the OA buffer contains data, else %false
 */
static bool oa_buffer_check_unlocked(struct i915_perf_stream *stream)
{
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	int report_size = stream->oa_buffer.format->size;
	u32 head, tail, read_tail;
	unsigned long flags;
	bool pollin;
	u32 hw_tail;
	u32 partial_report_size;

	/* We have to consider the (unlikely) possibility that read() errors
	 * could result in an OA buffer reset which might reset the head and
	 * tail state.
	 */
	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	hw_tail = stream->perf->ops.oa_hw_tail_read(stream);

	/* The tail pointer increases in 64 byte increments, not in report_size
	 * steps. Also the report size may not be a power of 2. Compute
	 * potentially partially landed report in the OA buffer
	 */
	partial_report_size = OA_TAKEN(hw_tail, stream->oa_buffer.tail);
	partial_report_size %= report_size;

	/* Subtract partial amount off the tail */
	hw_tail = OA_TAKEN(hw_tail, partial_report_size);

	/* NB: The head we observe here might effectively be a little
	 * out of date. If a read() is in progress, the head could be
	 * anywhere between this head and stream->oa_buffer.tail.
	 */
	head = stream->oa_buffer.head - gtt_offset;
	read_tail = stream->oa_buffer.tail - gtt_offset;

	tail = hw_tail;

	/* Walk the stream backward until we find a report with report
	 * id and timestamp not at 0. Since the circular buffer pointers
	 * progress by increments of 64 bytes and that reports can be up
	 * to 256 bytes long, we can't tell whether a report has fully
	 * landed in memory before the report id and timestamp of the
	 * following report have effectively landed.
	 *
	 * This is assuming that the writes of the OA unit land in
	 * memory in the order they were issued.
	 * If not : (╯°□°)╯︵ ┻━┻
	 */
	while (OA_TAKEN(tail, read_tail) >= report_size) {
		void *report = stream->oa_buffer.vaddr + tail;

		if (oa_report_id(stream, report) ||
		    oa_timestamp(stream, report))
			break;

		tail = (tail - report_size) & (OA_BUFFER_SIZE - 1);
	}

	if (OA_TAKEN(hw_tail, tail) > report_size &&
	    __ratelimit(&stream->perf->tail_pointer_race))
		drm_notice(&stream->uncore->i915->drm,
			   "unlanded report(s) head=0x%x tail=0x%x hw_tail=0x%x\n",
			   head, tail, hw_tail);

	stream->oa_buffer.tail = gtt_offset + tail;

	pollin = OA_TAKEN(stream->oa_buffer.tail,
			  stream->oa_buffer.head) >= report_size;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	return pollin;
}

/**
 * append_oa_status - Appends a status record to a userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @type: The kind of status to report to userspace
 *
 * Writes a status record (such as `DRM_I915_PERF_RECORD_OA_REPORT_LOST`)
 * into the userspace read() buffer.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_status(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    enum drm_i915_perf_record_type type)
{
	struct drm_i915_perf_record_header header = { type, 0, sizeof(header) };

	if ((count - *offset) < header.size)
		return -ENOSPC;

	if (copy_to_user(buf + *offset, &header, sizeof(header)))
		return -EFAULT;

	(*offset) += header.size;

	return 0;
}
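/*
 * Illustrative userspace sketch (not part of this driver) of walking the
 * records appended by the helpers here and below; every record begins with a
 * struct drm_i915_perf_record_header and hdr->size covers the header plus
 * payload, so records can be skipped generically (process_oa_report() and
 * resynchronize() are hypothetical helpers):
 *
 *	uint8_t buf[4096];
 *	ssize_t len = read(stream_fd, buf, sizeof(buf));
 *	size_t i = 0;
 *
 *	while (len > 0 &&
 *	       i + sizeof(struct drm_i915_perf_record_header) <= (size_t)len) {
 *		const struct drm_i915_perf_record_header *hdr =
 *			(const struct drm_i915_perf_record_header *)(buf + i);
 *
 *		if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE)
 *			process_oa_report(buf + i + sizeof(*hdr));
 *		else if (hdr->type == DRM_I915_PERF_RECORD_OA_BUFFER_LOST)
 *			resynchronize();
 *
 *		i += hdr->size;
 *	}
 */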
/**
 * append_oa_sample - Copies single OA report into userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @report: A single OA report to (optionally) include as part of the sample
 *
 * The contents of a sample are configured through `DRM_I915_PERF_PROP_SAMPLE_*`
 * properties when opening a stream, tracked as `stream->sample_flags`. This
 * function copies the requested components of a single sample to the given
 * read() @buf.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_sample(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    const u8 *report)
{
	int report_size = stream->oa_buffer.format->size;
	struct drm_i915_perf_record_header header;
	int report_size_partial;
	u8 *oa_buf_end;

	header.type = DRM_I915_PERF_RECORD_SAMPLE;
	header.pad = 0;
	header.size = stream->sample_size;

	if ((count - *offset) < header.size)
		return -ENOSPC;

	buf += *offset;
	if (copy_to_user(buf, &header, sizeof(header)))
		return -EFAULT;
	buf += sizeof(header);

	oa_buf_end = stream->oa_buffer.vaddr + OA_BUFFER_SIZE;
	report_size_partial = oa_buf_end - report;

	if (report_size_partial < report_size) {
		if (copy_to_user(buf, report, report_size_partial))
			return -EFAULT;
		buf += report_size_partial;

		if (copy_to_user(buf, stream->oa_buffer.vaddr,
				 report_size - report_size_partial))
			return -EFAULT;
	} else if (copy_to_user(buf, report, report_size)) {
		return -EFAULT;
	}

	(*offset) += header.size;

	return 0;
}

/**
 * gen8_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen8_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/*
	 * NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/*
	 * An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size.
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE ||
			  tail > OA_BUFFER_SIZE,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;


	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;
		u32 ctx_id;
		u64 reason;

		/*
		 * The reason field includes flags identifying what
		 * triggered this specific report (mostly timer
		 * triggered or e.g. due to a context switch).
		 *
		 * In MMIO triggered reports, some platforms do not set the
		 * reason bit in this field and it is valid to have a reason
		 * field of zero.
		 */
		reason = oa_report_reason(stream, report);
		ctx_id = oa_context_id(stream, report32);

		/*
		 * Squash whatever is in the CTX_ID field if it's marked as
		 * invalid to be sure we avoid false-positive, single-context
		 * filtering below...
		 *
		 * Note that we don't clear the valid_ctx_bit so userspace can
		 * understand that the ID has been squashed by the kernel.
		 */
		if (oa_report_ctx_invalid(stream, report)) {
			ctx_id = INVALID_CTX_ID;
			oa_context_id_squash(stream, report32);
		}

		/*
		 * NB: For Gen 8 the OA unit no longer supports clock gating
		 * off for a specific context and the kernel can't securely
		 * stop the counters from updating as system-wide / global
		 * values.
		 *
		 * Automatic reports now include a context ID so reports can be
		 * filtered on the cpu but it's not worth trying to
		 * automatically subtract/hide counter progress for other
		 * contexts while filtering since we can't stop userspace
		 * issuing MI_REPORT_PERF_COUNT commands which would still
		 * provide a side-band view of the real values.
		 *
		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
		 * to normalize counters for a single filtered context, it
		 * needs to be forwarded bookend context-switch reports so that
		 * it can track switches in between MI_REPORT_PERF_COUNT
		 * commands and can itself subtract/ignore the progress of
		 * counters associated with other contexts. Note that the
		 * hardware automatically triggers reports when switching to a
		 * new context which are tagged with the ID of the newly active
		 * context. To avoid the complexity (and likely fragility) of
		 * reading ahead while parsing reports to try and minimize
		 * forwarding redundant context switch reports (i.e. between
		 * other, unrelated contexts) we simply elect to forward them
		 * all.
		 *
		 * We don't rely solely on the reason field to identify context
		 * switches since it's not-uncommon for periodic samples to
		 * identify a switch before any 'context switch' report.
		 */
		if (!stream->ctx ||
		    stream->specific_ctx_id == ctx_id ||
		    stream->oa_buffer.last_ctx_id == stream->specific_ctx_id ||
		    reason & OAREPORT_REASON_CTX_SWITCH) {

			/*
			 * While filtering for a single context we avoid
			 * leaking the IDs of other contexts.
			 */
			if (stream->ctx &&
			    stream->specific_ctx_id != ctx_id) {
				oa_context_id_squash(stream, report32);
			}

			ret = append_oa_sample(stream, buf, count, offset,
					       report);
			if (ret)
				break;

			stream->oa_buffer.last_ctx_id = ctx_id;
		}

		if (is_power_of_2(report_size)) {
			/*
			 * Clear out the report id and timestamp as a means
			 * to detect unlanded reports.
			 */
			oa_report_id_clear(stream, report32);
			oa_timestamp_clear(stream, report32);
		} else {
			/* Zero out the entire report */
			memset(report32, 0, report_size);
		}
	}

	if (start_offset != *offset) {
		i915_reg_t oaheadptr;

		oaheadptr = GRAPHICS_VER(stream->perf->i915) == 12 ?
			    __oa_regs(stream)->oa_head_ptr :
			    GEN8_OAHEADPTR;

		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/*
		 * We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;
		intel_uncore_write(uncore, oaheadptr,
				   head & GEN12_OAG_OAHEADPTR_MASK);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}

/**
 * gen8_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks OA unit status registers and if necessary appends corresponding
 * status records for userspace (such as for a buffer full condition) and then
 * initiates appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * NB: some data may be successfully copied to the userspace buffer
 * even if an error is returned, and this is reflected in the
 * updated @offset.
 *
 * Returns: zero on success or a negative error code
 */
static int gen8_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus;
	i915_reg_t oastatus_reg;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus_reg = GRAPHICS_VER(stream->perf->i915) == 12 ?
		       __oa_regs(stream)->oa_status :
		       GEN8_OASTATUS;

	oastatus = intel_uncore_read(uncore, oastatus_reg);

	/*
	 * We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * Although theoretically we could handle this more gracefully
	 * sometimes, some Gens don't correctly suppress certain
	 * automatically triggered reports in this condition and so we
	 * have to assume that old reports are now being trampled
	 * over.
	 *
	 * Considering how we don't currently give userspace control
	 * over the OA buffer size and always configure a large 16MB
	 * buffer, then a buffer overflow does anyway likely indicate
	 * that something has gone quite badly wrong.
	 */
	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		/*
		 * Note: .oa_enable() is expected to re-init the oabuffer and
		 * reset GEN8_OASTATUS for us
		 */
		oastatus = intel_uncore_read(uncore, oastatus_reg);
	}

	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;

		intel_uncore_rmw(uncore, oastatus_reg,
				 GEN8_OASTATUS_COUNTER_OVERFLOW |
				 GEN8_OASTATUS_REPORT_LOST,
				 IS_GRAPHICS_VER(uncore->i915, 8, 11) ?
				 (GEN8_OASTATUS_HEAD_POINTER_WRAP |
				  GEN8_OASTATUS_TAIL_POINTER_WRAP) : 0);
	}

	return gen8_append_oa_reports(stream, buf, count, offset);
}

/**
 * gen7_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen7_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/* An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size (notably also
	 * all a power of two).
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE || head % report_size ||
			  tail > OA_BUFFER_SIZE || tail % report_size,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;


	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;

		/* All the report sizes factor neatly into the buffer
		 * size so we never expect to see a report split
		 * between the beginning and end of the buffer.
		 *
		 * Given the initial alignment check a misalignment
		 * here would imply a driver bug that would result
		 * in an overrun.
		 */
		if (drm_WARN_ON(&uncore->i915->drm,
				(OA_BUFFER_SIZE - head) < report_size)) {
			drm_err(&uncore->i915->drm,
				"Spurious OA head ptr: non-integral report offset\n");
			break;
		}

		/* The report-ID field for periodic samples includes
		 * some undocumented flags related to what triggered
		 * the report and is never expected to be zero so we
		 * can check that the report isn't invalid before
		 * copying it to userspace...
		 */
		if (report32[0] == 0) {
			if (__ratelimit(&stream->perf->spurious_report_rs))
				drm_notice(&uncore->i915->drm,
					   "Skipping spurious, invalid OA report\n");
			continue;
		}

		ret = append_oa_sample(stream, buf, count, offset, report);
		if (ret)
			break;

		/* Clear out the first 2 dwords as a means to detect unlanded
		 * reports.
		 */
		report32[0] = 0;
		report32[1] = 0;
	}

	if (start_offset != *offset) {
		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/* We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;

		intel_uncore_write(uncore, GEN7_OASTATUS2,
				   (head & GEN7_OASTATUS2_HEAD_MASK) |
				   GEN7_OASTATUS2_MEM_SELECT_GGTT);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}

/**
 * gen7_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks Gen 7 specific OA unit status registers and if necessary appends
 * corresponding status records for userspace (such as for a buffer full
 * condition) and then initiates appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int gen7_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	/* XXX: On Haswell we don't have a safe way to clear oastatus1
	 * bits while the OA unit is enabled (while the tail pointer
	 * may be updated asynchronously) so we ignore status bits
	 * that have already been reported to userspace.
	 */
	oastatus1 &= ~stream->perf->gen7_latched_oastatus1;

	/* We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * - The status can be interpreted to mean that the buffer is
	 *   currently full (with a higher precedence than OA_TAKEN()
	 *   which will start to report a near-empty buffer after an
	 *   overflow) but it's awkward that we can't clear the status
	 *   on Haswell, so without a reset we won't be able to catch
	 *   the state again.
	 *
	 * - Since it also implies the HW has started overwriting old
	 *   reports it may also affect our sanity checks for invalid
	 *   reports when copying to userspace that assume new reports
	 *   are being written to cleared memory.
	 *
	 * - In the future we may want to introduce a flight recorder
	 *   mode where the driver will automatically maintain a safe
	 *   guard band between head/tail, avoiding this overflow
	 *   condition, but we avoid the added driver complexity for
	 *   now.
	 */
	if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);
	}

	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;
		stream->perf->gen7_latched_oastatus1 |=
			GEN7_OASTATUS1_REPORT_LOST;
	}

	return gen7_append_oa_reports(stream, buf, count, offset);
}

/**
 * i915_oa_wait_unlocked - handles blocking IO until OA data available
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Called when userspace tries to read() from a blocking stream FD opened
 * for OA metrics. It waits until the hrtimer callback finds a non-empty
 * OA buffer and wakes us.
 *
 * Note: it's acceptable to have this return with some false positives
 * since any subsequent read handling will return -EAGAIN if there isn't
 * really data ready for userspace yet.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
{
	/* We would wait indefinitely if periodic sampling is not enabled */
	if (!stream->periodic)
		return -EIO;

	return wait_event_interruptible(stream->poll_wq,
					oa_buffer_check_unlocked(stream));
}
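/*
 * Illustrative userspace sketch (not part of this driver): a stream opened
 * with I915_PERF_FLAG_FD_NONBLOCK can be multiplexed with poll(), which is
 * serviced by the poll_wq that the helpers below and the hrtimer callback
 * share:
 *
 *	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
 *
 *	if (poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLIN))
 *		len = read(stream_fd, buf, sizeof(buf));
 *
 * A blocking stream can instead just call read(), sleeping in
 * i915_oa_wait_unlocked() above until the hrtimer callback sees data in the
 * OA buffer.
 */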
/**
 * i915_oa_poll_wait - call poll_wait() for an OA stream poll()
 * @stream: An i915-perf stream opened for OA metrics
 * @file: An i915 perf stream file
 * @wait: poll() state table
 *
 * For handling userspace polling on an i915 perf stream opened for OA metrics,
 * this starts a poll_wait with the wait queue that our hrtimer callback wakes
 * when it sees data ready to read in the circular OA buffer.
 */
static void i915_oa_poll_wait(struct i915_perf_stream *stream,
			      struct file *file,
			      poll_table *wait)
{
	poll_wait(file, &stream->poll_wq, wait);
}

/**
 * i915_oa_read - just calls through to &i915_oa_ops->read
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	return stream->perf->ops.read(stream, buf, count, offset);
}

static struct intel_context *oa_pin_context(struct i915_perf_stream *stream)
{
	struct i915_gem_engines_iter it;
	struct i915_gem_context *ctx = stream->ctx;
	struct intel_context *ce;
	struct i915_gem_ww_ctx ww;
	int err = -ENODEV;

	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
		if (ce->engine != stream->engine) /* first match! */
			continue;

		err = 0;
		break;
	}
	i915_gem_context_unlock_engines(ctx);

	if (err)
		return ERR_PTR(err);

	i915_gem_ww_ctx_init(&ww, true);
retry:
	/*
	 * As the ID is the gtt offset of the context's vma we
	 * pin the vma to ensure the ID remains fixed.
	 */
	err = intel_context_pin_ww(ce, &ww);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);

	if (err)
		return ERR_PTR(err);

	stream->pinned_ctx = ce;
	return stream->pinned_ctx;
}

static int
__store_reg_to_mem(struct i915_request *rq, i915_reg_t reg, u32 ggtt_offset)
{
	u32 *cs, cmd;

	cmd = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
	if (GRAPHICS_VER(rq->engine->i915) >= 8)
		cmd++;

	cs = intel_ring_begin(rq, 4);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	*cs++ = cmd;
	*cs++ = i915_mmio_reg_offset(reg);
	*cs++ = ggtt_offset;
	*cs++ = 0;

	intel_ring_advance(rq, cs);

	return 0;
}

static int
__read_reg(struct intel_context *ce, i915_reg_t reg, u32 ggtt_offset)
{
	struct i915_request *rq;
	int err;

	rq = i915_request_create(ce);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	i915_request_get(rq);

	err = __store_reg_to_mem(rq, reg, ggtt_offset);

	i915_request_add(rq);
	if (!err && i915_request_wait(rq, 0, HZ / 2) < 0)
		err = -ETIME;

	i915_request_put(rq);

	return err;
}

static int
gen12_guc_sw_ctx_id(struct intel_context *ce, u32 *ctx_id)
{
	struct i915_vma *scratch;
	u32 *val;
	int err;

	scratch = __vm_create_scratch_for_read_pinned(&ce->engine->gt->ggtt->vm, 4);
	if (IS_ERR(scratch))
		return PTR_ERR(scratch);

	err = i915_vma_sync(scratch);
	if (err)
		goto err_scratch;

	err = __read_reg(ce, RING_EXECLIST_STATUS_HI(ce->engine->mmio_base),
			 i915_ggtt_offset(scratch));
	if (err)
		goto err_scratch;

	val = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
	if (IS_ERR(val)) {
		err = PTR_ERR(val);
		goto err_scratch;
	}

	*ctx_id = *val;
	i915_gem_object_unpin_map(scratch->obj);

err_scratch:
	i915_vma_unpin_and_release(&scratch, 0);
	return err;
}

/*
 * For execlist mode of submission, pick an unused context id
 * 0 - (NUM_CONTEXT_TAG - 1) are used by other contexts
 * XXX_MAX_CONTEXT_HW_ID is used by idle context
 *
 * For GuC mode of submission read context id from the upper dword of the
 * EXECLIST_STATUS register. Note that we read this value only once and expect
 * that the value stays fixed for the entire OA use case. There are cases where
 * GuC KMD implementation may deregister a context to reuse its context id, but
 * we prevent that from happening to the OA context by pinning it.
 */
static int gen12_get_render_context_id(struct i915_perf_stream *stream)
{
	u32 ctx_id, mask;
	int ret;

	if (intel_engine_uses_guc(stream->engine)) {
		ret = gen12_guc_sw_ctx_id(stream->pinned_ctx, &ctx_id);
		if (ret)
			return ret;

		mask = ((1U << GEN12_GUC_SW_CTX_ID_WIDTH) - 1) <<
			(GEN12_GUC_SW_CTX_ID_SHIFT - 32);
	} else if (GRAPHICS_VER_FULL(stream->engine->i915) >= IP_VER(12, 50)) {
		ctx_id = (XEHP_MAX_CONTEXT_HW_ID - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << XEHP_SW_CTX_ID_WIDTH) - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);
	} else {
		ctx_id = (GEN12_MAX_CONTEXT_HW_ID - 1) <<
			 (GEN11_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) <<
			(GEN11_SW_CTX_ID_SHIFT - 32);
	}
	stream->specific_ctx_id = ctx_id & mask;
	stream->specific_ctx_id_mask = mask;

	return 0;
}

static bool oa_find_reg_in_lri(u32 *state, u32 reg, u32 *offset, u32 end)
{
	u32 idx = *offset;
	u32 len = min(MI_LRI_LEN(state[idx]) + idx, end);
	bool found = false;

	idx++;
	for (; idx < len; idx += 2) {
		if (state[idx] == reg) {
			found = true;
			break;
		}
	}

	*offset = idx;
	return found;
}

static u32 oa_context_image_offset(struct intel_context *ce, u32 reg)
{
	u32 offset, len = (ce->engine->context_size - PAGE_SIZE) / 4;
	u32 *state = ce->lrc_reg_state;

	if (drm_WARN_ON(&ce->engine->i915->drm, !state))
		return U32_MAX;

	for (offset = 0; offset < len; ) {
		if (IS_MI_LRI_CMD(state[offset])) {
			/*
			 * We expect reg-value pairs in MI_LRI command, so
			 * MI_LRI_LEN() should be even, if not, issue a warning.
			 */
			drm_WARN_ON(&ce->engine->i915->drm,
				    MI_LRI_LEN(state[offset]) & 0x1);

			if (oa_find_reg_in_lri(state, reg, &offset, len))
				break;
		} else {
			offset++;
		}
	}

	return offset < len ? offset : U32_MAX;
}

static int set_oa_ctx_ctrl_offset(struct intel_context *ce)
{
	i915_reg_t reg = GEN12_OACTXCONTROL(ce->engine->mmio_base);
	struct i915_perf *perf = &ce->engine->i915->perf;
	u32 offset = perf->ctx_oactxctrl_offset;

	/* Do this only once. Failure is stored as offset of U32_MAX */
	if (offset)
		goto exit;

	offset = oa_context_image_offset(ce, i915_mmio_reg_offset(reg));
	perf->ctx_oactxctrl_offset = offset;

	drm_dbg(&ce->engine->i915->drm,
		"%s oa ctx control at 0x%08x dword offset\n",
		ce->engine->name, offset);

exit:
	return offset && offset != U32_MAX ? 0 : -ENODEV;
}
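/*
 * Illustrative shape of the context image fragment that
 * oa_context_image_offset() scans (a sketch, not dumped from real hardware):
 * an MI_LRI header dword is followed by (register offset, value) pairs, so
 * when oa_find_reg_in_lri() matches state[idx] against the register offset,
 * idx + 1 holds that register's saved value:
 *
 *	state[n + 0] = MI_LOAD_REGISTER_IMM(2)
 *	state[n + 1] = i915_mmio_reg_offset(GEN12_OACTXCONTROL(mmio_base))
 *	state[n + 2] = <saved OACTXCONTROL value>
 *	state[n + 3] = <another register offset>
 *	state[n + 4] = <its saved value>
 */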
static bool engine_supports_mi_query(struct intel_engine_cs *engine)
{
	return engine->class == RENDER_CLASS;
}

/**
 * oa_get_render_ctx_id - determine and hold ctx hw id
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Determine the render context hw id, and ensure it remains fixed for the
 * lifetime of the stream. This ensures that we don't have to worry about
 * updating the context ID in OACONTROL on the fly.
 *
 * Returns: zero on success or a negative error code
 */
static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
{
	struct intel_context *ce;
	int ret = 0;

	ce = oa_pin_context(stream);
	if (IS_ERR(ce))
		return PTR_ERR(ce);

	if (engine_supports_mi_query(stream->engine) &&
	    HAS_LOGICAL_RING_CONTEXTS(stream->perf->i915)) {
		/*
		 * We are enabling perf query here. If we don't find the context
		 * offset here, just return an error.
		 */
		ret = set_oa_ctx_ctrl_offset(ce);
		if (ret) {
			intel_context_unpin(ce);
			drm_err(&stream->perf->i915->drm,
				"Enabling perf query failed for %s\n",
				stream->engine->name);
			return ret;
		}
	}

	switch (GRAPHICS_VER(ce->engine->i915)) {
	case 7: {
		/*
		 * On Haswell we don't do any post processing of the reports
		 * and don't need to use the mask.
		 */
		stream->specific_ctx_id = i915_ggtt_offset(ce->state);
		stream->specific_ctx_id_mask = 0;
		break;
	}

	case 8:
	case 9:
		if (intel_engine_uses_guc(ce->engine)) {
			/*
			 * When using GuC, the context descriptor we write in
			 * i915 is read by GuC and rewritten before it's
			 * actually written into the hardware. The LRCA is
			 * what is put into the context id field of the
			 * context descriptor by GuC. Because it's aligned to
			 * a page, the lower 12bits are always at 0 and
			 * dropped by GuC. They won't be part of the context
			 * ID in the OA reports, so squash those lower bits.
			 */
			stream->specific_ctx_id = ce->lrc.lrca >> 12;

			/*
			 * GuC uses the top bit to signal proxy submission, so
			 * ignore that bit.
			 */
			stream->specific_ctx_id_mask =
				(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
		} else {
			stream->specific_ctx_id_mask =
				(1U << GEN8_CTX_ID_WIDTH) - 1;
			stream->specific_ctx_id = stream->specific_ctx_id_mask;
		}
		break;

	case 11:
	case 12:
		ret = gen12_get_render_context_id(stream);
		break;

	default:
		MISSING_CASE(GRAPHICS_VER(ce->engine->i915));
	}

	ce->tag = stream->specific_ctx_id;

	drm_dbg(&stream->perf->i915->drm,
		"filtering on ctx_id=0x%x ctx_id_mask=0x%x\n",
		stream->specific_ctx_id,
		stream->specific_ctx_id_mask);

	return ret;
}

/**
 * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_id releases hold
 * @stream: An i915-perf stream opened for OA metrics
 *
 * In case anything needed doing to ensure the context HW ID would remain valid
 * for the lifetime of the stream, then that can be undone here.
1599 */ 1600 static void oa_put_render_ctx_id(struct i915_perf_stream *stream) 1601 { 1602 struct intel_context *ce; 1603 1604 ce = fetch_and_zero(&stream->pinned_ctx); 1605 if (ce) { 1606 ce->tag = 0; /* recomputed on next submission after parking */ 1607 intel_context_unpin(ce); 1608 } 1609 1610 stream->specific_ctx_id = INVALID_CTX_ID; 1611 stream->specific_ctx_id_mask = 0; 1612 } 1613 1614 static void 1615 free_oa_buffer(struct i915_perf_stream *stream) 1616 { 1617 i915_vma_unpin_and_release(&stream->oa_buffer.vma, 1618 I915_VMA_RELEASE_MAP); 1619 1620 stream->oa_buffer.vaddr = NULL; 1621 } 1622 1623 static void 1624 free_oa_configs(struct i915_perf_stream *stream) 1625 { 1626 struct i915_oa_config_bo *oa_bo, *tmp; 1627 1628 i915_oa_config_put(stream->oa_config); 1629 llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node) 1630 free_oa_config_bo(oa_bo); 1631 } 1632 1633 static void 1634 free_noa_wait(struct i915_perf_stream *stream) 1635 { 1636 i915_vma_unpin_and_release(&stream->noa_wait, 0); 1637 } 1638 1639 static bool engine_supports_oa(const struct intel_engine_cs *engine) 1640 { 1641 return engine->oa_group; 1642 } 1643 1644 static bool engine_supports_oa_format(struct intel_engine_cs *engine, int type) 1645 { 1646 return engine->oa_group && engine->oa_group->type == type; 1647 } 1648 1649 static void i915_oa_stream_destroy(struct i915_perf_stream *stream) 1650 { 1651 struct i915_perf *perf = stream->perf; 1652 struct intel_gt *gt = stream->engine->gt; 1653 struct i915_perf_group *g = stream->engine->oa_group; 1654 1655 if (WARN_ON(stream != g->exclusive_stream)) 1656 return; 1657 1658 /* 1659 * Unset exclusive_stream first, as it will be checked while disabling 1660 * the metric set on gen8+. 1661 * 1662 * See i915_oa_init_reg_state() and lrc_configure_all_contexts() 1663 */ 1664 WRITE_ONCE(g->exclusive_stream, NULL); 1665 perf->ops.disable_metric_set(stream); 1666 1667 free_oa_buffer(stream); 1668 1669 /* 1670 * Wa_16011777198:dg2: Unset the override of GUCRC mode to enable rc6. 1671 */ 1672 if (stream->override_gucrc) 1673 drm_WARN_ON(&gt->i915->drm, 1674 intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc)); 1675 1676 intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); 1677 intel_engine_pm_put(stream->engine); 1678 1679 if (stream->ctx) 1680 oa_put_render_ctx_id(stream); 1681 1682 free_oa_configs(stream); 1683 free_noa_wait(stream); 1684 1685 if (perf->spurious_report_rs.missed) { 1686 drm_notice(&gt->i915->drm, 1687 "%d spurious OA report notices suppressed due to ratelimiting\n", 1688 perf->spurious_report_rs.missed); 1689 } 1690 } 1691 1692 static void gen7_init_oa_buffer(struct i915_perf_stream *stream) 1693 { 1694 struct intel_uncore *uncore = stream->uncore; 1695 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1696 unsigned long flags; 1697 1698 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1699 1700 /* Pre-DevBDW: OABUFFER must be set with counters off, 1701 * before OASTATUS1, but after OASTATUS2 1702 */ 1703 intel_uncore_write(uncore, GEN7_OASTATUS2, /* head */ 1704 gtt_offset | GEN7_OASTATUS2_MEM_SELECT_GGTT); 1705 stream->oa_buffer.head = gtt_offset; 1706 1707 intel_uncore_write(uncore, GEN7_OABUFFER, gtt_offset); 1708 1709 intel_uncore_write(uncore, GEN7_OASTATUS1, /* tail */ 1710 gtt_offset | OABUFFER_SIZE_16M); 1711 1712 /* Mark that we need updated tail pointers to read from...
*/ 1713 stream->oa_buffer.tail = gtt_offset; 1714 1715 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1716 1717 /* On Haswell we have to track which OASTATUS1 flags we've 1718 * already seen since they can't be cleared while periodic 1719 * sampling is enabled. 1720 */ 1721 stream->perf->gen7_latched_oastatus1 = 0; 1722 1723 /* NB: although the OA buffer will initially be allocated 1724 * zeroed via shmfs (and so this memset is redundant when 1725 * first allocating), we may re-init the OA buffer, either 1726 * when re-enabling a stream or in error/reset paths. 1727 * 1728 * The reason we clear the buffer for each re-init is for the 1729 * sanity check in gen7_append_oa_reports() that looks at the 1730 * report-id field to make sure it's non-zero which relies on 1731 * the assumption that new reports are being written to zeroed 1732 * memory... 1733 */ 1734 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1735 } 1736 1737 static void gen8_init_oa_buffer(struct i915_perf_stream *stream) 1738 { 1739 struct intel_uncore *uncore = stream->uncore; 1740 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1741 unsigned long flags; 1742 1743 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1744 1745 intel_uncore_write(uncore, GEN8_OASTATUS, 0); 1746 intel_uncore_write(uncore, GEN8_OAHEADPTR, gtt_offset); 1747 stream->oa_buffer.head = gtt_offset; 1748 1749 intel_uncore_write(uncore, GEN8_OABUFFER_UDW, 0); 1750 1751 /* 1752 * PRM says: 1753 * 1754 * "This MMIO must be set before the OATAILPTR 1755 * register and after the OAHEADPTR register. This is 1756 * to enable proper functionality of the overflow 1757 * bit." 1758 */ 1759 intel_uncore_write(uncore, GEN8_OABUFFER, gtt_offset | 1760 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); 1761 intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK); 1762 1763 /* Mark that we need updated tail pointers to read from... */ 1764 stream->oa_buffer.tail = gtt_offset; 1765 1766 /* 1767 * Reset state used to recognise context switches, affecting which 1768 * reports we will forward to userspace while filtering for a single 1769 * context. 1770 */ 1771 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; 1772 1773 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1774 1775 /* 1776 * NB: although the OA buffer will initially be allocated 1777 * zeroed via shmfs (and so this memset is redundant when 1778 * first allocating), we may re-init the OA buffer, either 1779 * when re-enabling a stream or in error/reset paths. 1780 * 1781 * The reason we clear the buffer for each re-init is for the 1782 * sanity check in gen8_append_oa_reports() that looks at the 1783 * reason field to make sure it's non-zero which relies on 1784 * the assumption that new reports are being written to zeroed 1785 * memory... 1786 */ 1787 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); 1788 } 1789 1790 static void gen12_init_oa_buffer(struct i915_perf_stream *stream) 1791 { 1792 struct intel_uncore *uncore = stream->uncore; 1793 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); 1794 unsigned long flags; 1795 1796 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); 1797 1798 intel_uncore_write(uncore, __oa_regs(stream)->oa_status, 0); 1799 intel_uncore_write(uncore, __oa_regs(stream)->oa_head_ptr, 1800 gtt_offset & GEN12_OAG_OAHEADPTR_MASK); 1801 stream->oa_buffer.head = gtt_offset; 1802 1803 /* 1804 * PRM says: 1805 * 1806 * "This MMIO must be set before the OATAILPTR 1807 * register and after the OAHEADPTR register. 
This is 1808 * to enable proper functionality of the overflow 1809 * bit." 1810 */ 1811 intel_uncore_write(uncore, __oa_regs(stream)->oa_buffer, gtt_offset | 1812 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); 1813 intel_uncore_write(uncore, __oa_regs(stream)->oa_tail_ptr, 1814 gtt_offset & GEN12_OAG_OATAILPTR_MASK); 1815 1816 /* Mark that we need updated tail pointers to read from... */ 1817 stream->oa_buffer.tail = gtt_offset; 1818 1819 /* 1820 * Reset state used to recognise context switches, affecting which 1821 * reports we will forward to userspace while filtering for a single 1822 * context. 1823 */ 1824 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; 1825 1826 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); 1827 1828 /* 1829 * NB: although the OA buffer will initially be allocated 1830 * zeroed via shmfs (and so this memset is redundant when 1831 * first allocating), we may re-init the OA buffer, either 1832 * when re-enabling a stream or in error/reset paths. 1833 * 1834 * The reason we clear the buffer for each re-init is for the 1835 * sanity check in gen8_append_oa_reports() that looks at the 1836 * reason field to make sure it's non-zero which relies on 1837 * the assumption that new reports are being written to zeroed 1838 * memory... 1839 */ 1840 memset(stream->oa_buffer.vaddr, 0, 1841 stream->oa_buffer.vma->size); 1842 } 1843 1844 static int alloc_oa_buffer(struct i915_perf_stream *stream) 1845 { 1846 struct drm_i915_private *i915 = stream->perf->i915; 1847 struct intel_gt *gt = stream->engine->gt; 1848 struct drm_i915_gem_object *bo; 1849 struct i915_vma *vma; 1850 int ret; 1851 1852 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.vma)) 1853 return -ENODEV; 1854 1855 BUILD_BUG_ON_NOT_POWER_OF_2(OA_BUFFER_SIZE); 1856 BUILD_BUG_ON(OA_BUFFER_SIZE < SZ_128K || OA_BUFFER_SIZE > SZ_16M); 1857 1858 bo = i915_gem_object_create_shmem(stream->perf->i915, OA_BUFFER_SIZE); 1859 if (IS_ERR(bo)) { 1860 drm_err(&i915->drm, "Failed to allocate OA buffer\n"); 1861 return PTR_ERR(bo); 1862 } 1863 1864 i915_gem_object_set_cache_coherency(bo, I915_CACHE_LLC); 1865 1866 /* PreHSW required 512K alignment, HSW requires 16M */ 1867 vma = i915_vma_instance(bo, &gt->ggtt->vm, NULL); 1868 if (IS_ERR(vma)) { 1869 ret = PTR_ERR(vma); 1870 goto err_unref; 1871 } 1872 1873 /* 1874 * PreHSW required 512K alignment. 1875 * HSW and onwards, align to requested size of OA buffer. 1876 */ 1877 ret = i915_vma_pin(vma, 0, SZ_16M, PIN_GLOBAL | PIN_HIGH); 1878 if (ret) { 1879 drm_err(&gt->i915->drm, "Failed to pin OA buffer %d\n", ret); 1880 goto err_unref; 1881 } 1882 1883 stream->oa_buffer.vma = vma; 1884 1885 stream->oa_buffer.vaddr = 1886 i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB); 1887 if (IS_ERR(stream->oa_buffer.vaddr)) { 1888 ret = PTR_ERR(stream->oa_buffer.vaddr); 1889 goto err_unpin; 1890 } 1891 1892 return 0; 1893 1894 err_unpin: 1895 __i915_vma_unpin(vma); 1896 1897 err_unref: 1898 i915_gem_object_put(bo); 1899 1900 stream->oa_buffer.vaddr = NULL; 1901 stream->oa_buffer.vma = NULL; 1902 1903 return ret; 1904 } 1905 1906 static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs, 1907 bool save, i915_reg_t reg, u32 offset, 1908 u32 dword_count) 1909 { 1910 u32 cmd; 1911 u32 d; 1912 1913 cmd = save ?
MI_STORE_REGISTER_MEM : MI_LOAD_REGISTER_MEM; 1914 cmd |= MI_SRM_LRM_GLOBAL_GTT; 1915 if (GRAPHICS_VER(stream->perf->i915) >= 8) 1916 cmd++; 1917 1918 for (d = 0; d < dword_count; d++) { 1919 *cs++ = cmd; 1920 *cs++ = i915_mmio_reg_offset(reg) + 4 * d; 1921 *cs++ = i915_ggtt_offset(stream->noa_wait) + offset + 4 * d; 1922 *cs++ = 0; 1923 } 1924 1925 return cs; 1926 } 1927 1928 static int alloc_noa_wait(struct i915_perf_stream *stream) 1929 { 1930 struct drm_i915_private *i915 = stream->perf->i915; 1931 struct intel_gt *gt = stream->engine->gt; 1932 struct drm_i915_gem_object *bo; 1933 struct i915_vma *vma; 1934 const u64 delay_ticks = 0xffffffffffffffff - 1935 intel_gt_ns_to_clock_interval(to_gt(stream->perf->i915), 1936 atomic64_read(&stream->perf->noa_programming_delay)); 1937 const u32 base = stream->engine->mmio_base; 1938 #define CS_GPR(x) GEN8_RING_CS_GPR(base, x) 1939 u32 *batch, *ts0, *cs, *jump; 1940 struct i915_gem_ww_ctx ww; 1941 int ret, i; 1942 enum { 1943 START_TS, 1944 NOW_TS, 1945 DELTA_TS, 1946 JUMP_PREDICATE, 1947 DELTA_TARGET, 1948 N_CS_GPR 1949 }; 1950 i915_reg_t mi_predicate_result = HAS_MI_SET_PREDICATE(i915) ? 1951 MI_PREDICATE_RESULT_2_ENGINE(base) : 1952 MI_PREDICATE_RESULT_1(RENDER_RING_BASE); 1953 1954 /* 1955 * gt->scratch was being used to save/restore the GPR registers, but on 1956 * MTL the scratch uses stolen lmem. An MI_SRM to this memory region 1957 * causes an engine hang. Instead allocate an additional page here to 1958 * save/restore the GPR registers. 1959 */ 1960 bo = i915_gem_object_create_internal(i915, 8192); 1961 if (IS_ERR(bo)) { 1962 drm_err(&i915->drm, 1963 "Failed to allocate NOA wait batchbuffer\n"); 1964 return PTR_ERR(bo); 1965 } 1966 1967 i915_gem_ww_ctx_init(&ww, true); 1968 retry: 1969 ret = i915_gem_object_lock(bo, &ww); 1970 if (ret) 1971 goto out_ww; 1972 1973 /* 1974 * We pin in GGTT because multiple OA config BOs will have a jump to 1975 * this address, and it needs to stay fixed for the lifetime of the 1976 * i915/perf stream. 1977 */ 1978 vma = i915_vma_instance(bo, &gt->ggtt->vm, NULL); 1979 if (IS_ERR(vma)) { 1980 ret = PTR_ERR(vma); 1981 goto out_ww; 1982 } 1983 1984 ret = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH); 1985 if (ret) 1986 goto out_ww; 1987 1988 batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB); 1989 if (IS_ERR(batch)) { 1990 ret = PTR_ERR(batch); 1991 goto err_unpin; 1992 } 1993 1994 stream->noa_wait = vma; 1995 1996 #define GPR_SAVE_OFFSET 4096 1997 #define PREDICATE_SAVE_OFFSET 4160 1998 1999 /* Save registers. */ 2000 for (i = 0; i < N_CS_GPR; i++) 2001 cs = save_restore_register( 2002 stream, cs, true /* save */, CS_GPR(i), 2003 GPR_SAVE_OFFSET + 8 * i, 2); 2004 cs = save_restore_register( 2005 stream, cs, true /* save */, mi_predicate_result, 2006 PREDICATE_SAVE_OFFSET, 1); 2007 2008 /* First timestamp snapshot location. */ 2009 ts0 = cs; 2010 2011 /* 2012 * Initial snapshot of the timestamp register to implement the wait. 2013 * We work with 32-bit values, so clear out the top 32 bits of the 2014 * register because the ALU works in 64 bits. 2015 */ 2016 *cs++ = MI_LOAD_REGISTER_IMM(1); 2017 *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)) + 4; 2018 *cs++ = 0; 2019 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); 2020 *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base)); 2021 *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)); 2022 2023 /* 2024 * This is the location we're going to jump back into until the 2025 * required amount of time has passed.
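 *
 * In rough pseudo-C, the loop emitted between here and the final restore
 * behaves as sketched below (illustrative only; `delay' stands for the
 * programming delay converted to clock ticks, and delay_ticks above holds
 * ~0ull - delay):
 *
 *	do {
 *		now = lower_32_bits(RING_TIMESTAMP);
 *		delta = now - start;
 *		if (now < start)	// carry set: timestamp wrapped,
 *			goto ts0;	// take a fresh start snapshot
 *	} while (delta <= delay);	// tested via the carry flag of
 *					// delta + (~0ull - delay)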
2026 */ 2027 jump = cs; 2028 2029 /* 2030 * Take another snapshot of the timestamp register. Take care to clear 2031 * out the top 32 bits of CS_GPR(NOW_TS) as we're using it for other 2032 * operations below. 2033 */ 2034 *cs++ = MI_LOAD_REGISTER_IMM(1); 2035 *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)) + 4; 2036 *cs++ = 0; 2037 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); 2038 *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base)); 2039 *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)); 2040 2041 /* 2042 * Do a diff between the 2 timestamps and store the result back into 2043 * CS_GPR(DELTA_TS). 2044 */ 2045 *cs++ = MI_MATH(5); 2046 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS)); 2047 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS)); 2048 *cs++ = MI_MATH_SUB; 2049 *cs++ = MI_MATH_STORE(MI_MATH_REG(DELTA_TS), MI_MATH_REG_ACCU); 2050 *cs++ = MI_MATH_STORE(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF); 2051 2052 /* 2053 * Transfer the carry flag (set to 1 if ts1 < ts0, meaning the 2054 * timestamp has rolled over the 32 bits) into the predicate register 2055 * to be used for the predicated jump. 2056 */ 2057 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); 2058 *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE)); 2059 *cs++ = i915_mmio_reg_offset(mi_predicate_result); 2060 2061 if (HAS_MI_SET_PREDICATE(i915)) 2062 *cs++ = MI_SET_PREDICATE | 1; 2063 2064 /* Restart from the beginning if we had timestamps roll over. */ 2065 *cs++ = (GRAPHICS_VER(i915) < 8 ? 2066 MI_BATCH_BUFFER_START : 2067 MI_BATCH_BUFFER_START_GEN8) | 2068 MI_BATCH_PREDICATE; 2069 *cs++ = i915_ggtt_offset(vma) + (ts0 - batch) * 4; 2070 *cs++ = 0; 2071 2072 if (HAS_MI_SET_PREDICATE(i915)) 2073 *cs++ = MI_SET_PREDICATE; 2074 2075 /* 2076 * Now take the diff between the two previous timestamps and add it to: 2077 * ((1 << 64) - 1) - delay (the value precomputed in delay_ticks) 2078 * 2079 * When the Carry Flag contains 1 this means the elapsed time is 2080 * longer than the expected delay, and we can exit the wait loop. 2081 */ 2082 *cs++ = MI_LOAD_REGISTER_IMM(2); 2083 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)); 2084 *cs++ = lower_32_bits(delay_ticks); 2085 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)) + 4; 2086 *cs++ = upper_32_bits(delay_ticks); 2087 2088 *cs++ = MI_MATH(4); 2089 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(DELTA_TS)); 2090 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(DELTA_TARGET)); 2091 *cs++ = MI_MATH_ADD; 2092 *cs++ = MI_MATH_STOREINV(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF); 2093 2094 *cs++ = MI_ARB_CHECK; 2095 2096 /* 2097 * Transfer the result into the predicate register to be used for the 2098 * predicated jump. 2099 */ 2100 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); 2101 *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE)); 2102 *cs++ = i915_mmio_reg_offset(mi_predicate_result); 2103 2104 if (HAS_MI_SET_PREDICATE(i915)) 2105 *cs++ = MI_SET_PREDICATE | 1; 2106 2107 /* Predicate the jump. */ 2108 *cs++ = (GRAPHICS_VER(i915) < 8 ? 2109 MI_BATCH_BUFFER_START : 2110 MI_BATCH_BUFFER_START_GEN8) | 2111 MI_BATCH_PREDICATE; 2112 *cs++ = i915_ggtt_offset(vma) + (jump - batch) * 4; 2113 *cs++ = 0; 2114 2115 if (HAS_MI_SET_PREDICATE(i915)) 2116 *cs++ = MI_SET_PREDICATE; 2117 2118 /* Restore registers. */ 2119 for (i = 0; i < N_CS_GPR; i++) 2120 cs = save_restore_register( 2121 stream, cs, false /* restore */, CS_GPR(i), 2122 GPR_SAVE_OFFSET + 8 * i, 2); 2123 cs = save_restore_register( 2124 stream, cs, false /* restore */, mi_predicate_result, 2125 PREDICATE_SAVE_OFFSET, 1); 2126 2127 /* And return to the ring.
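 * Before doing so, the GPR and predicate restores just above undo the
 * saves at the top of the batch, so the wait loop leaves no CS register
 * state behind for whatever executes next on the ring.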
*/ 2128 *cs++ = MI_BATCH_BUFFER_END; 2129 2130 GEM_BUG_ON(cs - batch > PAGE_SIZE / sizeof(*batch)); 2131 2132 i915_gem_object_flush_map(bo); 2133 __i915_gem_object_release_map(bo); 2134 2135 goto out_ww; 2136 2137 err_unpin: 2138 i915_vma_unpin_and_release(&vma, 0); 2139 out_ww: 2140 if (ret == -EDEADLK) { 2141 ret = i915_gem_ww_ctx_backoff(&ww); 2142 if (!ret) 2143 goto retry; 2144 } 2145 i915_gem_ww_ctx_fini(&ww); 2146 if (ret) 2147 i915_gem_object_put(bo); 2148 return ret; 2149 } 2150 2151 static u32 *write_cs_mi_lri(u32 *cs, 2152 const struct i915_oa_reg *reg_data, 2153 u32 n_regs) 2154 { 2155 u32 i; 2156 2157 for (i = 0; i < n_regs; i++) { 2158 if ((i % MI_LOAD_REGISTER_IMM_MAX_REGS) == 0) { 2159 u32 n_lri = min_t(u32, 2160 n_regs - i, 2161 MI_LOAD_REGISTER_IMM_MAX_REGS); 2162 2163 *cs++ = MI_LOAD_REGISTER_IMM(n_lri); 2164 } 2165 *cs++ = i915_mmio_reg_offset(reg_data[i].addr); 2166 *cs++ = reg_data[i].value; 2167 } 2168 2169 return cs; 2170 } 2171 2172 static int num_lri_dwords(int num_regs) 2173 { 2174 int count = 0; 2175 2176 if (num_regs > 0) { 2177 count += DIV_ROUND_UP(num_regs, MI_LOAD_REGISTER_IMM_MAX_REGS); 2178 count += num_regs * 2; 2179 } 2180 2181 return count; 2182 } 2183 2184 static struct i915_oa_config_bo * 2185 alloc_oa_config_buffer(struct i915_perf_stream *stream, 2186 struct i915_oa_config *oa_config) 2187 { 2188 struct drm_i915_gem_object *obj; 2189 struct i915_oa_config_bo *oa_bo; 2190 struct i915_gem_ww_ctx ww; 2191 size_t config_length = 0; 2192 u32 *cs; 2193 int err; 2194 2195 oa_bo = kzalloc(sizeof(*oa_bo), GFP_KERNEL); 2196 if (!oa_bo) 2197 return ERR_PTR(-ENOMEM); 2198 2199 config_length += num_lri_dwords(oa_config->mux_regs_len); 2200 config_length += num_lri_dwords(oa_config->b_counter_regs_len); 2201 config_length += num_lri_dwords(oa_config->flex_regs_len); 2202 config_length += 3; /* MI_BATCH_BUFFER_START */ 2203 config_length = ALIGN(sizeof(u32) * config_length, I915_GTT_PAGE_SIZE); 2204 2205 obj = i915_gem_object_create_shmem(stream->perf->i915, config_length); 2206 if (IS_ERR(obj)) { 2207 err = PTR_ERR(obj); 2208 goto err_free; 2209 } 2210 2211 i915_gem_ww_ctx_init(&ww, true); 2212 retry: 2213 err = i915_gem_object_lock(obj, &ww); 2214 if (err) 2215 goto out_ww; 2216 2217 cs = i915_gem_object_pin_map(obj, I915_MAP_WB); 2218 if (IS_ERR(cs)) { 2219 err = PTR_ERR(cs); 2220 goto out_ww; 2221 } 2222 2223 cs = write_cs_mi_lri(cs, 2224 oa_config->mux_regs, 2225 oa_config->mux_regs_len); 2226 cs = write_cs_mi_lri(cs, 2227 oa_config->b_counter_regs, 2228 oa_config->b_counter_regs_len); 2229 cs = write_cs_mi_lri(cs, 2230 oa_config->flex_regs, 2231 oa_config->flex_regs_len); 2232 2233 /* Jump into the active wait. */ 2234 *cs++ = (GRAPHICS_VER(stream->perf->i915) < 8 ? 
2235 MI_BATCH_BUFFER_START : 2236 MI_BATCH_BUFFER_START_GEN8); 2237 *cs++ = i915_ggtt_offset(stream->noa_wait); 2238 *cs++ = 0; 2239 2240 i915_gem_object_flush_map(obj); 2241 __i915_gem_object_release_map(obj); 2242 2243 oa_bo->vma = i915_vma_instance(obj, 2244 &stream->engine->gt->ggtt->vm, 2245 NULL); 2246 if (IS_ERR(oa_bo->vma)) { 2247 err = PTR_ERR(oa_bo->vma); 2248 goto out_ww; 2249 } 2250 2251 oa_bo->oa_config = i915_oa_config_get(oa_config); 2252 llist_add(&oa_bo->node, &stream->oa_config_bos); 2253 2254 out_ww: 2255 if (err == -EDEADLK) { 2256 err = i915_gem_ww_ctx_backoff(&ww); 2257 if (!err) 2258 goto retry; 2259 } 2260 i915_gem_ww_ctx_fini(&ww); 2261 2262 if (err) 2263 i915_gem_object_put(obj); 2264 err_free: 2265 if (err) { 2266 kfree(oa_bo); 2267 return ERR_PTR(err); 2268 } 2269 return oa_bo; 2270 } 2271 2272 static struct i915_vma * 2273 get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) 2274 { 2275 struct i915_oa_config_bo *oa_bo; 2276 2277 /* 2278 * Look for the buffer in the already allocated BOs attached 2279 * to the stream. 2280 */ 2281 llist_for_each_entry(oa_bo, stream->oa_config_bos.first, node) { 2282 if (oa_bo->oa_config == oa_config && 2283 memcmp(oa_bo->oa_config->uuid, 2284 oa_config->uuid, 2285 sizeof(oa_config->uuid)) == 0) 2286 goto out; 2287 } 2288 2289 oa_bo = alloc_oa_config_buffer(stream, oa_config); 2290 if (IS_ERR(oa_bo)) 2291 return ERR_CAST(oa_bo); 2292 2293 out: 2294 return i915_vma_get(oa_bo->vma); 2295 } 2296 2297 static int 2298 emit_oa_config(struct i915_perf_stream *stream, 2299 struct i915_oa_config *oa_config, 2300 struct intel_context *ce, 2301 struct i915_active *active) 2302 { 2303 struct i915_request *rq; 2304 struct i915_vma *vma; 2305 struct i915_gem_ww_ctx ww; 2306 int err; 2307 2308 vma = get_oa_vma(stream, oa_config); 2309 if (IS_ERR(vma)) 2310 return PTR_ERR(vma); 2311 2312 i915_gem_ww_ctx_init(&ww, true); 2313 retry: 2314 err = i915_gem_object_lock(vma->obj, &ww); 2315 if (err) 2316 goto err; 2317 2318 err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH); 2319 if (err) 2320 goto err; 2321 2322 intel_engine_pm_get(ce->engine); 2323 rq = i915_request_create(ce); 2324 intel_engine_pm_put(ce->engine); 2325 if (IS_ERR(rq)) { 2326 err = PTR_ERR(rq); 2327 goto err_vma_unpin; 2328 } 2329 2330 if (!IS_ERR_OR_NULL(active)) { 2331 /* After all individual context modifications */ 2332 err = i915_request_await_active(rq, active, 2333 I915_ACTIVE_AWAIT_ACTIVE); 2334 if (err) 2335 goto err_add_request; 2336 2337 err = i915_active_add_request(active, rq); 2338 if (err) 2339 goto err_add_request; 2340 } 2341 2342 err = i915_vma_move_to_active(vma, rq, 0); 2343 if (err) 2344 goto err_add_request; 2345 2346 err = rq->engine->emit_bb_start(rq, 2347 i915_vma_offset(vma), 0, 2348 I915_DISPATCH_SECURE); 2349 if (err) 2350 goto err_add_request; 2351 2352 err_add_request: 2353 i915_request_add(rq); 2354 err_vma_unpin: 2355 i915_vma_unpin(vma); 2356 err: 2357 if (err == -EDEADLK) { 2358 err = i915_gem_ww_ctx_backoff(&ww); 2359 if (!err) 2360 goto retry; 2361 } 2362 2363 i915_gem_ww_ctx_fini(&ww); 2364 i915_vma_put(vma); 2365 return err; 2366 } 2367 2368 static struct intel_context *oa_context(struct i915_perf_stream *stream) 2369 { 2370 return stream->pinned_ctx ?: stream->engine->kernel_context; 2371 } 2372 2373 static int 2374 hsw_enable_metric_set(struct i915_perf_stream *stream, 2375 struct i915_active *active) 2376 { 2377 struct intel_uncore *uncore = stream->uncore; 2378 2379 /* 2380 * PRM: 2381 * 2382 * OA unit is 
using “crclk” for its functionality. When trunk 2383 * level clock gating takes place, OA clock would be gated, 2384 * unable to count the events from non-render clock domain. 2385 * Render clock gating must be disabled when OA is enabled to 2386 * count the events from non-render domain. Unit level clock 2387 * gating for RCS should also be disabled. 2388 */ 2389 intel_uncore_rmw(uncore, GEN7_MISCCPCTL, 2390 GEN7_DOP_CLOCK_GATE_ENABLE, 0); 2391 intel_uncore_rmw(uncore, GEN6_UCGCTL1, 2392 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); 2393 2394 return emit_oa_config(stream, 2395 stream->oa_config, oa_context(stream), 2396 active); 2397 } 2398 2399 static void hsw_disable_metric_set(struct i915_perf_stream *stream) 2400 { 2401 struct intel_uncore *uncore = stream->uncore; 2402 2403 intel_uncore_rmw(uncore, GEN6_UCGCTL1, 2404 GEN6_CSUNIT_CLOCK_GATE_DISABLE, 0); 2405 intel_uncore_rmw(uncore, GEN7_MISCCPCTL, 2406 0, GEN7_DOP_CLOCK_GATE_ENABLE); 2407 2408 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); 2409 } 2410 2411 static u32 oa_config_flex_reg(const struct i915_oa_config *oa_config, 2412 i915_reg_t reg) 2413 { 2414 u32 mmio = i915_mmio_reg_offset(reg); 2415 int i; 2416 2417 /* 2418 * This arbitrary default will select the 'EU FPU0 Pipeline 2419 * Active' event. In the future it's anticipated that there 2420 * will be an explicit 'No Event' we can select, but not yet... 2421 */ 2422 if (!oa_config) 2423 return 0; 2424 2425 for (i = 0; i < oa_config->flex_regs_len; i++) { 2426 if (i915_mmio_reg_offset(oa_config->flex_regs[i].addr) == mmio) 2427 return oa_config->flex_regs[i].value; 2428 } 2429 2430 return 0; 2431 } 2432 /* 2433 * NB: It must always remain pointer safe to run this even if the OA unit 2434 * has been disabled. 2435 * 2436 * It's fine to put out-of-date values into these per-context registers 2437 * in the case that the OA unit has been disabled. 2438 */ 2439 static void 2440 gen8_update_reg_state_unlocked(const struct intel_context *ce, 2441 const struct i915_perf_stream *stream) 2442 { 2443 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset; 2444 u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; 2445 /* The MMIO offsets for Flex EU registers aren't contiguous */ 2446 static const i915_reg_t flex_regs[] = { 2447 EU_PERF_CNTL0, 2448 EU_PERF_CNTL1, 2449 EU_PERF_CNTL2, 2450 EU_PERF_CNTL3, 2451 EU_PERF_CNTL4, 2452 EU_PERF_CNTL5, 2453 EU_PERF_CNTL6, 2454 }; 2455 u32 *reg_state = ce->lrc_reg_state; 2456 int i; 2457 2458 reg_state[ctx_oactxctrl + 1] = 2459 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | 2460 (stream->periodic ? 
GEN8_OA_TIMER_ENABLE : 0) | 2461 GEN8_OA_COUNTER_RESUME; 2462 2463 for (i = 0; i < ARRAY_SIZE(flex_regs); i++) 2464 reg_state[ctx_flexeu0 + i * 2 + 1] = 2465 oa_config_flex_reg(stream->oa_config, flex_regs[i]); 2466 } 2467 2468 struct flex { 2469 i915_reg_t reg; 2470 u32 offset; 2471 u32 value; 2472 }; 2473 2474 static int 2475 gen8_store_flex(struct i915_request *rq, 2476 struct intel_context *ce, 2477 const struct flex *flex, unsigned int count) 2478 { 2479 u32 offset; 2480 u32 *cs; 2481 2482 cs = intel_ring_begin(rq, 4 * count); 2483 if (IS_ERR(cs)) 2484 return PTR_ERR(cs); 2485 2486 offset = i915_ggtt_offset(ce->state) + LRC_STATE_OFFSET; 2487 do { 2488 *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; 2489 *cs++ = offset + flex->offset * sizeof(u32); 2490 *cs++ = 0; 2491 *cs++ = flex->value; 2492 } while (flex++, --count); 2493 2494 intel_ring_advance(rq, cs); 2495 2496 return 0; 2497 } 2498 2499 static int 2500 gen8_load_flex(struct i915_request *rq, 2501 struct intel_context *ce, 2502 const struct flex *flex, unsigned int count) 2503 { 2504 u32 *cs; 2505 2506 GEM_BUG_ON(!count || count > 63); 2507 2508 cs = intel_ring_begin(rq, 2 * count + 2); 2509 if (IS_ERR(cs)) 2510 return PTR_ERR(cs); 2511 2512 *cs++ = MI_LOAD_REGISTER_IMM(count); 2513 do { 2514 *cs++ = i915_mmio_reg_offset(flex->reg); 2515 *cs++ = flex->value; 2516 } while (flex++, --count); 2517 *cs++ = MI_NOOP; 2518 2519 intel_ring_advance(rq, cs); 2520 2521 return 0; 2522 } 2523 2524 static int gen8_modify_context(struct intel_context *ce, 2525 const struct flex *flex, unsigned int count) 2526 { 2527 struct i915_request *rq; 2528 int err; 2529 2530 rq = intel_engine_create_kernel_request(ce->engine); 2531 if (IS_ERR(rq)) 2532 return PTR_ERR(rq); 2533 2534 /* Serialise with the remote context */ 2535 err = intel_context_prepare_remote_request(ce, rq); 2536 if (err == 0) 2537 err = gen8_store_flex(rq, ce, flex, count); 2538 2539 i915_request_add(rq); 2540 return err; 2541 } 2542 2543 static int 2544 gen8_modify_self(struct intel_context *ce, 2545 const struct flex *flex, unsigned int count, 2546 struct i915_active *active) 2547 { 2548 struct i915_request *rq; 2549 int err; 2550 2551 intel_engine_pm_get(ce->engine); 2552 rq = i915_request_create(ce); 2553 intel_engine_pm_put(ce->engine); 2554 if (IS_ERR(rq)) 2555 return PTR_ERR(rq); 2556 2557 if (!IS_ERR_OR_NULL(active)) { 2558 err = i915_active_add_request(active, rq); 2559 if (err) 2560 goto err_add_request; 2561 } 2562 2563 err = gen8_load_flex(rq, ce, flex, count); 2564 if (err) 2565 goto err_add_request; 2566 2567 err_add_request: 2568 i915_request_add(rq); 2569 return err; 2570 } 2571 2572 static int gen8_configure_context(struct i915_perf_stream *stream, 2573 struct i915_gem_context *ctx, 2574 struct flex *flex, unsigned int count) 2575 { 2576 struct i915_gem_engines_iter it; 2577 struct intel_context *ce; 2578 int err = 0; 2579 2580 for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { 2581 GEM_BUG_ON(ce == ce->engine->kernel_context); 2582 2583 if (ce->engine->class != RENDER_CLASS) 2584 continue; 2585 2586 /* Otherwise OA settings will be set upon first use */ 2587 if (!intel_context_pin_if_active(ce)) 2588 continue; 2589 2590 flex->value = intel_sseu_make_rpcs(ce->engine->gt, &ce->sseu); 2591 err = gen8_modify_context(ce, flex, count); 2592 2593 intel_context_unpin(ce); 2594 if (err) 2595 break; 2596 } 2597 i915_gem_context_unlock_engines(ctx); 2598 2599 return err; 2600 } 2601 2602 static int gen12_configure_oar_context(struct i915_perf_stream *stream, 2603 
struct i915_active *active) 2604 { 2605 int err; 2606 struct intel_context *ce = stream->pinned_ctx; 2607 u32 format = stream->oa_buffer.format->format; 2608 u32 offset = stream->perf->ctx_oactxctrl_offset; 2609 struct flex regs_context[] = { 2610 { 2611 GEN8_OACTXCONTROL, 2612 offset + 1, 2613 active ? GEN8_OA_COUNTER_RESUME : 0, 2614 }, 2615 }; 2616 /* Offsets in regs_lri are not used since this configuration is only 2617 * applied using LRI. Initialize the correct offsets for posterity. 2618 */ 2619 #define GEN12_OAR_OACONTROL_OFFSET 0x5B0 2620 struct flex regs_lri[] = { 2621 { 2622 GEN12_OAR_OACONTROL, 2623 GEN12_OAR_OACONTROL_OFFSET + 1, 2624 (format << GEN12_OAR_OACONTROL_COUNTER_FORMAT_SHIFT) | 2625 (active ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) 2626 }, 2627 { 2628 RING_CONTEXT_CONTROL(ce->engine->mmio_base), 2629 CTX_CONTEXT_CONTROL, 2630 _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE, 2631 active ? 2632 GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE : 2633 0) 2634 }, 2635 }; 2636 2637 /* Modify the context image of the pinned context with regs_context */ 2638 err = intel_context_lock_pinned(ce); 2639 if (err) 2640 return err; 2641 2642 err = gen8_modify_context(ce, regs_context, 2643 ARRAY_SIZE(regs_context)); 2644 intel_context_unlock_pinned(ce); 2645 if (err) 2646 return err; 2647 2648 /* Apply regs_lri using LRI with the pinned context */ 2649 return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), active); 2650 } 2651 2652 /* 2653 * Manages updating the per-context aspects of the OA stream 2654 * configuration across all contexts. 2655 * 2656 * The awkward consideration here is that OACTXCONTROL controls the 2657 * exponent for periodic sampling which is primarily used for system 2658 * wide profiling where we'd like a consistent sampling period even in 2659 * the face of context switches. 2660 * 2661 * Our approach of updating the register state context (as opposed to 2662 * say using a workaround batch buffer) ensures that the hardware 2663 * won't automatically reload an out-of-date timer exponent even 2664 * transiently before a WA BB could be parsed. 2665 * 2666 * This function needs to: 2667 * - Ensure the currently running context's per-context OA state is 2668 * updated 2669 * - Ensure that all existing contexts will have the correct per-context 2670 * OA state if they are scheduled for use. 2671 * - Ensure any new contexts will be initialized with the correct 2672 * per-context OA state. 2673 * 2674 * Note: it's only the RCS/Render context that has any OA state. 2675 * Note: the first flex register passed must always be R_PWR_CLK_STATE 2676 */ 2677 static int 2678 oa_configure_all_contexts(struct i915_perf_stream *stream, 2679 struct flex *regs, 2680 size_t num_regs, 2681 struct i915_active *active) 2682 { 2683 struct drm_i915_private *i915 = stream->perf->i915; 2684 struct intel_engine_cs *engine; 2685 struct intel_gt *gt = stream->engine->gt; 2686 struct i915_gem_context *ctx, *cn; 2687 int err; 2688 2689 lockdep_assert_held(&gt->perf.lock); 2690 2691 /* 2692 * The OA register config is set up through the context image. This image 2693 * might be written to by the GPU on context switch (in particular on 2694 * lite-restore). This means we can't safely update a context's image 2695 * if this context is scheduled/submitted to run on the GPU. 2696 * 2697 * We could emit the OA register config through the batch buffer but 2698 * this might leave a small interval of time where the OA unit is 2699 * configured at an invalid sampling period.
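 *
 * The list walk below therefore uses the usual drop-the-lock iteration
 * pattern so that we can sleep while modifying each context image. As a
 * sketch of the code that follows (error handling elided):
 *
 *	spin_lock(&i915->gem.contexts.lock);
 *	list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) {
 *		if (!kref_get_unless_zero(&ctx->ref))
 *			continue;			// ctx already dying
 *		spin_unlock(&i915->gem.contexts.lock);	// drop to sleep
 *		gen8_configure_context(stream, ctx, regs, num_regs);
 *		spin_lock(&i915->gem.contexts.lock);
 *		list_safe_reset_next(ctx, cn, link);	// revalidate cursor
 *		i915_gem_context_put(ctx);
 *	}
 *	spin_unlock(&i915->gem.contexts.lock);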
2700 * 2701 * Note that since we emit all requests from a single ring, there 2702 * is still an implicit global barrier here that may cause a high 2703 * priority context to wait for an otherwise independent low priority 2704 * context. Contexts idle at the time of reconfiguration are not 2705 * trapped behind the barrier. 2706 */ 2707 spin_lock(&i915->gem.contexts.lock); 2708 list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) { 2709 if (!kref_get_unless_zero(&ctx->ref)) 2710 continue; 2711 2712 spin_unlock(&i915->gem.contexts.lock); 2713 2714 err = gen8_configure_context(stream, ctx, regs, num_regs); 2715 if (err) { 2716 i915_gem_context_put(ctx); 2717 return err; 2718 } 2719 2720 spin_lock(&i915->gem.contexts.lock); 2721 list_safe_reset_next(ctx, cn, link); 2722 i915_gem_context_put(ctx); 2723 } 2724 spin_unlock(&i915->gem.contexts.lock); 2725 2726 /* 2727 * After updating all other contexts, we need to modify ourselves. 2728 * If we don't modify the kernel_context, we do not get events while 2729 * idle. 2730 */ 2731 for_each_uabi_engine(engine, i915) { 2732 struct intel_context *ce = engine->kernel_context; 2733 2734 if (engine->class != RENDER_CLASS) 2735 continue; 2736 2737 regs[0].value = intel_sseu_make_rpcs(engine->gt, &ce->sseu); 2738 2739 err = gen8_modify_self(ce, regs, num_regs, active); 2740 if (err) 2741 return err; 2742 } 2743 2744 return 0; 2745 } 2746 2747 static int 2748 gen12_configure_all_contexts(struct i915_perf_stream *stream, 2749 const struct i915_oa_config *oa_config, 2750 struct i915_active *active) 2751 { 2752 struct flex regs[] = { 2753 { 2754 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE), 2755 CTX_R_PWR_CLK_STATE, 2756 }, 2757 }; 2758 2759 if (stream->engine->class != RENDER_CLASS) 2760 return 0; 2761 2762 return oa_configure_all_contexts(stream, 2763 regs, ARRAY_SIZE(regs), 2764 active); 2765 } 2766 2767 static int 2768 lrc_configure_all_contexts(struct i915_perf_stream *stream, 2769 const struct i915_oa_config *oa_config, 2770 struct i915_active *active) 2771 { 2772 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset; 2773 /* The MMIO offsets for Flex EU registers aren't contiguous */ 2774 const u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; 2775 #define ctx_flexeuN(N) (ctx_flexeu0 + 2 * (N) + 1) 2776 struct flex regs[] = { 2777 { 2778 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE), 2779 CTX_R_PWR_CLK_STATE, 2780 }, 2781 { 2782 GEN8_OACTXCONTROL, 2783 ctx_oactxctrl + 1, 2784 }, 2785 { EU_PERF_CNTL0, ctx_flexeuN(0) }, 2786 { EU_PERF_CNTL1, ctx_flexeuN(1) }, 2787 { EU_PERF_CNTL2, ctx_flexeuN(2) }, 2788 { EU_PERF_CNTL3, ctx_flexeuN(3) }, 2789 { EU_PERF_CNTL4, ctx_flexeuN(4) }, 2790 { EU_PERF_CNTL5, ctx_flexeuN(5) }, 2791 { EU_PERF_CNTL6, ctx_flexeuN(6) }, 2792 }; 2793 #undef ctx_flexeuN 2794 int i; 2795 2796 regs[1].value = 2797 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | 2798 (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) | 2799 GEN8_OA_COUNTER_RESUME; 2800 2801 for (i = 2; i < ARRAY_SIZE(regs); i++) 2802 regs[i].value = oa_config_flex_reg(oa_config, regs[i].reg); 2803 2804 return oa_configure_all_contexts(stream, 2805 regs, ARRAY_SIZE(regs), 2806 active); 2807 } 2808 2809 static int 2810 gen8_enable_metric_set(struct i915_perf_stream *stream, 2811 struct i915_active *active) 2812 { 2813 struct intel_uncore *uncore = stream->uncore; 2814 struct i915_oa_config *oa_config = stream->oa_config; 2815 int ret; 2816 2817 /* 2818 * We disable slice/unslice clock ratio change reports on SKL since 2819 * they are too noisy. 
The HW generates a lot of redundant reports 2820 * where the ratio hasn't really changed, causing a lot of redundant 2821 * work for processes and increasing the chances we'll hit buffer 2822 * overruns. 2823 * 2824 * Although we don't currently use the 'disable overrun' OABUFFER 2825 * feature, it's worth noting that clock ratio reports have to be 2826 * disabled before considering using that feature, since the HW doesn't 2827 * correctly block these reports. 2828 * 2829 * Currently none of the high-level metrics we have depend on knowing 2830 * this ratio to normalize. 2831 * 2832 * Note: This register is not power context saved and restored, but 2833 * that's OK considering that we disable RC6 while the OA unit is 2834 * enabled. 2835 * 2836 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to 2837 * be read back from automatically triggered reports, as part of the 2838 * RPT_ID field. 2839 */ 2840 if (IS_GRAPHICS_VER(stream->perf->i915, 9, 11)) { 2841 intel_uncore_write(uncore, GEN8_OA_DEBUG, 2842 _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS | 2843 GEN9_OA_DEBUG_INCLUDE_CLK_RATIO)); 2844 } 2845 2846 /* 2847 * Update all contexts prior to writing the mux configurations as we need 2848 * to make sure all slices/subslices are ON before writing to NOA 2849 * registers. 2850 */ 2851 ret = lrc_configure_all_contexts(stream, oa_config, active); 2852 if (ret) 2853 return ret; 2854 2855 return emit_oa_config(stream, 2856 stream->oa_config, oa_context(stream), 2857 active); 2858 } 2859 2860 static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream) 2861 { 2862 return _MASKED_FIELD(GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS, 2863 (stream->sample_flags & SAMPLE_OA_REPORT) ? 2864 0 : GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS); 2865 } 2866 2867 static int 2868 gen12_enable_metric_set(struct i915_perf_stream *stream, 2869 struct i915_active *active) 2870 { 2871 struct drm_i915_private *i915 = stream->perf->i915; 2872 struct intel_uncore *uncore = stream->uncore; 2873 struct i915_oa_config *oa_config = stream->oa_config; 2874 bool periodic = stream->periodic; 2875 u32 period_exponent = stream->period_exponent; 2876 u32 sqcnt1; 2877 int ret; 2878 2879 /* 2880 * Wa_1508761755:xehpsdv, dg2 2881 * EU NOA signals behave incorrectly if EU clock gating is enabled. 2882 * Disable thread stall DOP gating and EU DOP gating. 2883 */ 2884 if (IS_XEHPSDV(i915) || IS_DG2(i915)) { 2885 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN, 2886 _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE)); 2887 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2, 2888 _MASKED_BIT_ENABLE(GEN12_DISABLE_DOP_GATING)); 2889 } 2890 2891 intel_uncore_write(uncore, __oa_regs(stream)->oa_debug, 2892 /* Disable clk ratio reports, like previous Gens. */ 2893 _MASKED_BIT_ENABLE(GEN12_OAG_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS | 2894 GEN12_OAG_OA_DEBUG_INCLUDE_CLK_RATIO) | 2895 /* 2896 * If the user didn't require OA reports, instruct 2897 * the hardware not to emit ctx switch reports. 2898 */ 2899 oag_report_ctx_switches(stream)); 2900 2901 intel_uncore_write(uncore, __oa_regs(stream)->oa_ctx_ctrl, periodic ? 2902 (GEN12_OAG_OAGLBCTXCTRL_COUNTER_RESUME | 2903 GEN12_OAG_OAGLBCTXCTRL_TIMER_ENABLE | 2904 (period_exponent << GEN12_OAG_OAGLBCTXCTRL_TIMER_PERIOD_SHIFT)) 2905 : 0); 2906 2907 /* 2908 * Initialize the Super Queue Internal Cnt Register. 2909 * Set PMON Enable in order to collect valid metrics. 2910 * Enable bytes per clock reporting in OA for XEHPSDV onward.
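 *
 * The sqcnt1 bits set just below are cleared again by
 * gen12_disable_metric_set(), so PMON (and BPC reporting where present)
 * only stays enabled for the lifetime of the stream.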
2911 */ 2912 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE | 2913 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0); 2914 2915 intel_uncore_rmw(uncore, GEN12_SQCNT1, 0, sqcnt1); 2916 2917 /* 2918 * Update all contexts prior to writing the mux configurations as we need 2919 * to make sure all slices/subslices are ON before writing to NOA 2920 * registers. 2921 */ 2922 ret = gen12_configure_all_contexts(stream, oa_config, active); 2923 if (ret) 2924 return ret; 2925 2926 /* 2927 * For Gen12, performance counters are context 2928 * saved/restored. Only enable them for the context that 2929 * requested this. 2930 */ 2931 if (stream->ctx) { 2932 ret = gen12_configure_oar_context(stream, active); 2933 if (ret) 2934 return ret; 2935 } 2936 2937 return emit_oa_config(stream, 2938 stream->oa_config, oa_context(stream), 2939 active); 2940 } 2941 2942 static void gen8_disable_metric_set(struct i915_perf_stream *stream) 2943 { 2944 struct intel_uncore *uncore = stream->uncore; 2945 2946 /* Reset all contexts' slices/subslices configurations. */ 2947 lrc_configure_all_contexts(stream, NULL, NULL); 2948 2949 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); 2950 } 2951 2952 static void gen11_disable_metric_set(struct i915_perf_stream *stream) 2953 { 2954 struct intel_uncore *uncore = stream->uncore; 2955 2956 /* Reset all contexts' slices/subslices configurations. */ 2957 lrc_configure_all_contexts(stream, NULL, NULL); 2958 2959 /* Make sure we disable noa to save power. */ 2960 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); 2961 } 2962 2963 static void gen12_disable_metric_set(struct i915_perf_stream *stream) 2964 { 2965 struct intel_uncore *uncore = stream->uncore; 2966 struct drm_i915_private *i915 = stream->perf->i915; 2967 u32 sqcnt1; 2968 2969 /* 2970 * Wa_1508761755:xehpsdv, dg2 2971 * Enable thread stall DOP gating and EU DOP gating. 2972 */ 2973 if (IS_XEHPSDV(i915) || IS_DG2(i915)) { 2974 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN, 2975 _MASKED_BIT_DISABLE(STALL_DOP_GATING_DISABLE)); 2976 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2, 2977 _MASKED_BIT_DISABLE(GEN12_DISABLE_DOP_GATING)); 2978 } 2979 2980 /* Reset all contexts' slices/subslices configurations. */ 2981 gen12_configure_all_contexts(stream, NULL, NULL); 2982 2983 /* Disable the context save/restore of OAR counters. */ 2984 if (stream->ctx) 2985 gen12_configure_oar_context(stream, NULL); 2986 2987 /* Make sure we disable noa to save power. */ 2988 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); 2989 2990 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE | 2991 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0); 2992 2993 /* Reset PMON Enable to save power. */ 2994 intel_uncore_rmw(uncore, GEN12_SQCNT1, sqcnt1, 0); 2995 } 2996 2997 static void gen7_oa_enable(struct i915_perf_stream *stream) 2998 { 2999 struct intel_uncore *uncore = stream->uncore; 3000 struct i915_gem_context *ctx = stream->ctx; 3001 u32 ctx_id = stream->specific_ctx_id; 3002 bool periodic = stream->periodic; 3003 u32 period_exponent = stream->period_exponent; 3004 u32 report_format = stream->oa_buffer.format->format; 3005 3006 /* 3007 * Reset buf pointers so we don't forward reports from before now. 3008 * 3009 * Think carefully if considering trying to avoid this, since it 3010 * also ensures status flags and the buffer itself are cleared 3011 * in error paths, and we have checks for invalid reports based 3012 * on the assumption that certain fields are written to zeroed 3013 * memory, which this helps maintain.
3014 */ 3015 gen7_init_oa_buffer(stream); 3016 3017 intel_uncore_write(uncore, GEN7_OACONTROL, 3018 (ctx_id & GEN7_OACONTROL_CTX_MASK) | 3019 (period_exponent << 3020 GEN7_OACONTROL_TIMER_PERIOD_SHIFT) | 3021 (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) | 3022 (report_format << GEN7_OACONTROL_FORMAT_SHIFT) | 3023 (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) | 3024 GEN7_OACONTROL_ENABLE); 3025 } 3026 3027 static void gen8_oa_enable(struct i915_perf_stream *stream) 3028 { 3029 struct intel_uncore *uncore = stream->uncore; 3030 u32 report_format = stream->oa_buffer.format->format; 3031 3032 /* 3033 * Reset buf pointers so we don't forward reports from before now. 3034 * 3035 * Think carefully if considering trying to avoid this, since it 3036 * also ensures status flags and the buffer itself are cleared 3037 * in error paths, and we have checks for invalid reports based 3038 * on the assumption that certain fields are written to zeroed 3039 * memory, which this helps maintain. 3040 */ 3041 gen8_init_oa_buffer(stream); 3042 3043 /* 3044 * Note: we don't rely on the hardware to perform single context 3045 * filtering and instead filter on the cpu based on the context-id 3046 * field of reports. 3047 */ 3048 intel_uncore_write(uncore, GEN8_OACONTROL, 3049 (report_format << GEN8_OA_REPORT_FORMAT_SHIFT) | 3050 GEN8_OA_COUNTER_ENABLE); 3051 } 3052 3053 static void gen12_oa_enable(struct i915_perf_stream *stream) 3054 { 3055 const struct i915_perf_regs *regs; 3056 u32 val; 3057 3058 /* 3059 * If we don't want OA reports from the OA buffer, then we don't even 3060 * need to program the OAG unit. 3061 */ 3062 if (!(stream->sample_flags & SAMPLE_OA_REPORT)) 3063 return; 3064 3065 gen12_init_oa_buffer(stream); 3066 3067 regs = __oa_regs(stream); 3068 val = (stream->oa_buffer.format->format << regs->oa_ctrl_counter_format_shift) | 3069 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE; 3070 3071 intel_uncore_write(stream->uncore, regs->oa_ctrl, val); 3072 } 3073 3074 /** 3075 * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream 3076 * @stream: An i915 perf stream opened for OA metrics 3077 * 3078 * [Re]enables hardware periodic sampling according to the period configured 3079 * when opening the stream. This also starts a hrtimer that will periodically 3080 * check for data in the circular OA buffer for notifying userspace (e.g. 3081 * during a read() or poll()).
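 *
 * From userspace the corresponding sequence is simply (an illustrative
 * sketch; error handling elided, and note the stream starts enabled when
 * opened without I915_PERF_FLAG_DISABLED):
 *
 *	int fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 *	ssize_t n;
 *
 *	ioctl(fd, I915_PERF_IOCTL_ENABLE, 0);
 *	while ((n = read(fd, buf, sizeof(buf))) > 0)
 *		;	// parse drm_i915_perf_record_header records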
3082 */ 3083 static void i915_oa_stream_enable(struct i915_perf_stream *stream) 3084 { 3085 stream->pollin = false; 3086 3087 stream->perf->ops.oa_enable(stream); 3088 3089 if (stream->sample_flags & SAMPLE_OA_REPORT) 3090 hrtimer_start(&stream->poll_check_timer, 3091 ns_to_ktime(stream->poll_oa_period), 3092 HRTIMER_MODE_REL_PINNED); 3093 } 3094 3095 static void gen7_oa_disable(struct i915_perf_stream *stream) 3096 { 3097 struct intel_uncore *uncore = stream->uncore; 3098 3099 intel_uncore_write(uncore, GEN7_OACONTROL, 0); 3100 if (intel_wait_for_register(uncore, 3101 GEN7_OACONTROL, GEN7_OACONTROL_ENABLE, 0, 3102 50)) 3103 drm_err(&stream->perf->i915->drm, 3104 "wait for OA to be disabled timed out\n"); 3105 } 3106 3107 static void gen8_oa_disable(struct i915_perf_stream *stream) 3108 { 3109 struct intel_uncore *uncore = stream->uncore; 3110 3111 intel_uncore_write(uncore, GEN8_OACONTROL, 0); 3112 if (intel_wait_for_register(uncore, 3113 GEN8_OACONTROL, GEN8_OA_COUNTER_ENABLE, 0, 3114 50)) 3115 drm_err(&stream->perf->i915->drm, 3116 "wait for OA to be disabled timed out\n"); 3117 } 3118 3119 static void gen12_oa_disable(struct i915_perf_stream *stream) 3120 { 3121 struct intel_uncore *uncore = stream->uncore; 3122 3123 intel_uncore_write(uncore, __oa_regs(stream)->oa_ctrl, 0); 3124 if (intel_wait_for_register(uncore, 3125 __oa_regs(stream)->oa_ctrl, 3126 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE, 0, 3127 50)) 3128 drm_err(&stream->perf->i915->drm, 3129 "wait for OA to be disabled timed out\n"); 3130 3131 intel_uncore_write(uncore, GEN12_OA_TLB_INV_CR, 1); 3132 if (intel_wait_for_register(uncore, 3133 GEN12_OA_TLB_INV_CR, 3134 1, 0, 3135 50)) 3136 drm_err(&stream->perf->i915->drm, 3137 "wait for OA tlb invalidate timed out\n"); 3138 } 3139 3140 /** 3141 * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream 3142 * @stream: An i915 perf stream opened for OA metrics 3143 * 3144 * Stops the OA unit from periodically writing counter reports into the 3145 * circular OA buffer. This also stops the hrtimer that periodically checks for 3146 * data in the circular OA buffer, for notifying userspace. 
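 *
 * Note that while the stream is disabled, read() on the stream fd fails
 * with -EIO (see i915_perf_read()), so userspace shouldn't expect buffered
 * data to remain readable across `I915_PERF_IOCTL_DISABLE`.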
3147 */ 3148 static void i915_oa_stream_disable(struct i915_perf_stream *stream) 3149 { 3150 stream->perf->ops.oa_disable(stream); 3151 3152 if (stream->sample_flags & SAMPLE_OA_REPORT) 3153 hrtimer_cancel(&stream->poll_check_timer); 3154 } 3155 3156 static const struct i915_perf_stream_ops i915_oa_stream_ops = { 3157 .destroy = i915_oa_stream_destroy, 3158 .enable = i915_oa_stream_enable, 3159 .disable = i915_oa_stream_disable, 3160 .wait_unlocked = i915_oa_wait_unlocked, 3161 .poll_wait = i915_oa_poll_wait, 3162 .read = i915_oa_read, 3163 }; 3164 3165 static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream) 3166 { 3167 struct i915_active *active; 3168 int err; 3169 3170 active = i915_active_create(); 3171 if (!active) 3172 return -ENOMEM; 3173 3174 err = stream->perf->ops.enable_metric_set(stream, active); 3175 if (err == 0) 3176 __i915_active_wait(active, TASK_UNINTERRUPTIBLE); 3177 3178 i915_active_put(active); 3179 return err; 3180 } 3181 3182 static void 3183 get_default_sseu_config(struct intel_sseu *out_sseu, 3184 struct intel_engine_cs *engine) 3185 { 3186 const struct sseu_dev_info *devinfo_sseu = &engine->gt->info.sseu; 3187 3188 *out_sseu = intel_sseu_from_device_info(devinfo_sseu); 3189 3190 if (GRAPHICS_VER(engine->i915) == 11) { 3191 /* 3192 * We only need subslice count so it doesn't matter which ones 3193 * we select - just turn off low bits in the amount of half of 3194 * all available subslices per slice. 3195 */ 3196 out_sseu->subslice_mask = 3197 ~(~0 << (hweight8(out_sseu->subslice_mask) / 2)); 3198 out_sseu->slice_mask = 0x1; 3199 } 3200 } 3201 3202 static int 3203 get_sseu_config(struct intel_sseu *out_sseu, 3204 struct intel_engine_cs *engine, 3205 const struct drm_i915_gem_context_param_sseu *drm_sseu) 3206 { 3207 if (drm_sseu->engine.engine_class != engine->uabi_class || 3208 drm_sseu->engine.engine_instance != engine->uabi_instance) 3209 return -EINVAL; 3210 3211 return i915_gem_user_to_context_sseu(engine->gt, drm_sseu, out_sseu); 3212 } 3213 3214 /* 3215 * OA timestamp frequency = CS timestamp frequency in most platforms. On some 3216 * platforms OA unit ignores the CTC_SHIFT and the 2 timestamps differ. In such 3217 * cases, return the adjusted CS timestamp frequency to the user. 3218 */ 3219 u32 i915_perf_oa_timestamp_frequency(struct drm_i915_private *i915) 3220 { 3221 /* 3222 * Wa_18013179988:dg2 3223 * Wa_14015846243:mtl 3224 */ 3225 if (IS_DG2(i915) || IS_METEORLAKE(i915)) { 3226 intel_wakeref_t wakeref; 3227 u32 reg, shift; 3228 3229 with_intel_runtime_pm(to_gt(i915)->uncore->rpm, wakeref) 3230 reg = intel_uncore_read(to_gt(i915)->uncore, RPM_CONFIG0); 3231 3232 shift = REG_FIELD_GET(GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_MASK, 3233 reg); 3234 3235 return to_gt(i915)->clock_frequency << (3 - shift); 3236 } 3237 3238 return to_gt(i915)->clock_frequency; 3239 } 3240 3241 /** 3242 * i915_oa_stream_init - validate combined props for OA stream and init 3243 * @stream: An i915 perf stream 3244 * @param: The open parameters passed to `DRM_I915_PERF_OPEN` 3245 * @props: The property state that configures stream (individually validated) 3246 * 3247 * While read_properties_unlocked() validates properties in isolation it 3248 * doesn't ensure that the combination necessarily makes sense. 3249 * 3250 * At this point it has been determined that userspace wants a stream of 3251 * OA metrics, but still we need to further validate the combined 3252 * properties are OK. 
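 *
 * For example, properties that validate individually can still be rejected
 * in combination here: omitting the OA report format, requesting a stream
 * without OA report sampling where that isn't supported, or opening against
 * an OA unit that already has an exclusive_stream all fail below.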
3253 * 3254 * If the configuration makes sense then we can allocate memory for 3255 * a circular OA buffer and apply the requested metric set configuration. 3256 * 3257 * Returns: zero on success or a negative error code. 3258 */ 3259 static int i915_oa_stream_init(struct i915_perf_stream *stream, 3260 struct drm_i915_perf_open_param *param, 3261 struct perf_open_properties *props) 3262 { 3263 struct drm_i915_private *i915 = stream->perf->i915; 3264 struct i915_perf *perf = stream->perf; 3265 struct i915_perf_group *g; 3266 struct intel_gt *gt; 3267 int ret; 3268 3269 if (!props->engine) { 3270 drm_dbg(&stream->perf->i915->drm, 3271 "OA engine not specified\n"); 3272 return -EINVAL; 3273 } 3274 gt = props->engine->gt; 3275 g = props->engine->oa_group; 3276 3277 /* 3278 * If the sysfs metrics/ directory wasn't registered for some 3279 * reason then don't let userspace try their luck with config 3280 * IDs 3281 */ 3282 if (!perf->metrics_kobj) { 3283 drm_dbg(&stream->perf->i915->drm, 3284 "OA metrics weren't advertised via sysfs\n"); 3285 return -EINVAL; 3286 } 3287 3288 if (!(props->sample_flags & SAMPLE_OA_REPORT) && 3289 (GRAPHICS_VER(perf->i915) < 12 || !stream->ctx)) { 3290 drm_dbg(&stream->perf->i915->drm, 3291 "Only OA report sampling supported\n"); 3292 return -EINVAL; 3293 } 3294 3295 if (!perf->ops.enable_metric_set) { 3296 drm_dbg(&stream->perf->i915->drm, 3297 "OA unit not supported\n"); 3298 return -ENODEV; 3299 } 3300 3301 /* 3302 * To avoid the complexity of having to accurately filter 3303 * counter reports and marshal to the appropriate client 3304 * we currently only allow exclusive access 3305 */ 3306 if (g->exclusive_stream) { 3307 drm_dbg(&stream->perf->i915->drm, 3308 "OA unit already in use\n"); 3309 return -EBUSY; 3310 } 3311 3312 if (!props->oa_format) { 3313 drm_dbg(&stream->perf->i915->drm, 3314 "OA report format not specified\n"); 3315 return -EINVAL; 3316 } 3317 3318 stream->engine = props->engine; 3319 stream->uncore = stream->engine->gt->uncore; 3320 3321 stream->sample_size = sizeof(struct drm_i915_perf_record_header); 3322 3323 stream->oa_buffer.format = &perf->oa_formats[props->oa_format]; 3324 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.format->size == 0)) 3325 return -EINVAL; 3326 3327 stream->sample_flags = props->sample_flags; 3328 stream->sample_size += stream->oa_buffer.format->size; 3329 3330 stream->hold_preemption = props->hold_preemption; 3331 3332 stream->periodic = props->oa_periodic; 3333 if (stream->periodic) 3334 stream->period_exponent = props->oa_period_exponent; 3335 3336 if (stream->ctx) { 3337 ret = oa_get_render_ctx_id(stream); 3338 if (ret) { 3339 drm_dbg(&stream->perf->i915->drm, 3340 "Invalid context id to filter with\n"); 3341 return ret; 3342 } 3343 } 3344 3345 ret = alloc_noa_wait(stream); 3346 if (ret) { 3347 drm_dbg(&stream->perf->i915->drm, 3348 "Unable to allocate NOA wait batch buffer\n"); 3349 goto err_noa_wait_alloc; 3350 } 3351 3352 stream->oa_config = i915_perf_get_oa_config(perf, props->metrics_set); 3353 if (!stream->oa_config) { 3354 drm_dbg(&stream->perf->i915->drm, 3355 "Invalid OA config id=%i\n", props->metrics_set); 3356 ret = -EINVAL; 3357 goto err_config; 3358 } 3359 3360 /* PRM - observability performance counters: 3361 * 3362 * OACONTROL, performance counter enable, note: 3363 * 3364 * "When this bit is set, in order to have coherent counts, 3365 * RC6 power state and trunk clock gating must be disabled. 
This can be achieved by programming MMIO registers as 3366 * 0xA094=0 and 0xA090[31]=1" 3367 * 3368 * In our case we are expecting that taking pm + FORCEWAKE 3369 * references will effectively disable RC6. 3370 */ 3371 3372 intel_engine_pm_get(stream->engine); 3373 intel_uncore_forcewake_get(stream->uncore, FORCEWAKE_ALL); 3374 3375 /* 3376 * Wa_16011777198:dg2: GuC resets render as part of the Wa. This causes 3377 * OA to lose the configuration state. Prevent this by overriding GUCRC 3378 * mode. 3379 */ 3380 if (intel_uc_uses_guc_rc(&gt->uc) && 3381 (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) || 3382 IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0))) { 3383 ret = intel_guc_slpc_override_gucrc_mode(&gt->uc.guc.slpc, 3384 SLPC_GUCRC_MODE_GUCRC_NO_RC6); 3385 if (ret) { 3386 drm_dbg(&stream->perf->i915->drm, 3387 "Unable to override gucrc mode\n"); 3388 goto err_gucrc; 3389 } 3390 3391 stream->override_gucrc = true; 3392 } 3393 3394 ret = alloc_oa_buffer(stream); 3395 if (ret) 3396 goto err_oa_buf_alloc; 3397 3398 stream->ops = &i915_oa_stream_ops; 3399 3400 stream->engine->gt->perf.sseu = props->sseu; 3401 WRITE_ONCE(g->exclusive_stream, stream); 3402 3403 ret = i915_perf_stream_enable_sync(stream); 3404 if (ret) { 3405 drm_dbg(&stream->perf->i915->drm, 3406 "Unable to enable metric set\n"); 3407 goto err_enable; 3408 } 3409 3410 drm_dbg(&stream->perf->i915->drm, 3411 "opening stream oa config uuid=%s\n", 3412 stream->oa_config->uuid); 3413 3414 hrtimer_init(&stream->poll_check_timer, 3415 CLOCK_MONOTONIC, HRTIMER_MODE_REL); 3416 stream->poll_check_timer.function = oa_poll_check_timer_cb; 3417 init_waitqueue_head(&stream->poll_wq); 3418 spin_lock_init(&stream->oa_buffer.ptr_lock); 3419 mutex_init(&stream->lock); 3420 3421 return 0; 3422 3423 err_enable: 3424 WRITE_ONCE(g->exclusive_stream, NULL); 3425 perf->ops.disable_metric_set(stream); 3426 3427 free_oa_buffer(stream); 3428 3429 err_oa_buf_alloc: 3430 if (stream->override_gucrc) 3431 intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc); 3432 3433 err_gucrc: 3434 intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); 3435 intel_engine_pm_put(stream->engine); 3436 3437 free_oa_configs(stream); 3438 3439 err_config: 3440 free_noa_wait(stream); 3441 3442 err_noa_wait_alloc: 3443 if (stream->ctx) 3444 oa_put_render_ctx_id(stream); 3445 3446 return ret; 3447 } 3448 3449 void i915_oa_init_reg_state(const struct intel_context *ce, 3450 const struct intel_engine_cs *engine) 3451 { 3452 struct i915_perf_stream *stream; 3453 3454 if (engine->class != RENDER_CLASS) 3455 return; 3456 3457 /* perf.exclusive_stream serialised by lrc_configure_all_contexts() */ 3458 stream = READ_ONCE(engine->oa_group->exclusive_stream); 3459 if (stream && GRAPHICS_VER(stream->perf->i915) < 12) 3460 gen8_update_reg_state_unlocked(ce, stream); 3461 } 3462 3463 /** 3464 * i915_perf_read - handles read() FOP for i915 perf stream FDs 3465 * @file: An i915 perf stream file 3466 * @buf: destination buffer given by userspace 3467 * @count: the number of bytes userspace wants to read 3468 * @ppos: (inout) file seek position (unused) 3469 * 3470 * The entry point for handling a read() on a stream file descriptor from 3471 * userspace. Most of the work is left to the i915_perf_read_locked() and 3472 * &i915_perf_stream_ops->read but to save having stream implementations (of 3473 * which we might have multiple later) we handle blocking read here.
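 *
 * For a non-blocking stream fd a (hypothetical) consumer would typically
 * pair this with poll(), retrying on the false-positive wakeups described
 * below:
 *
 *	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
 *
 *	while (poll(&pfd, 1, -1) >= 0) {
 *		ssize_t n = read(stream_fd, buf, sizeof(buf));
 *		if (n < 0 && errno == EAGAIN)
 *			continue;	// no data really available yet
 *		...			// consume n bytes of records
 *	}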
/**
 * i915_perf_read - handles read() FOP for i915 perf stream FDs
 * @file: An i915 perf stream file
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @ppos: (inout) file seek position (unused)
 *
 * The entry point for handling a read() on a stream file descriptor from
 * userspace. Most of the work is left to the i915_perf_read_locked() and
 * &i915_perf_stream_ops->read but to save having stream implementations (of
 * which we might have multiple later) we handle blocking read here.
 *
 * We can also consistently treat trying to read from a disabled stream
 * as an IO error so implementations can assume the stream is enabled
 * while reading.
 *
 * Returns: The number of bytes copied or a negative error code on failure.
 */
static ssize_t i915_perf_read(struct file *file,
			      char __user *buf,
			      size_t count,
			      loff_t *ppos)
{
	struct i915_perf_stream *stream = file->private_data;
	size_t offset = 0;
	int ret;

	/* To ensure it's handled consistently we simply treat all reads of a
	 * disabled stream as an error. In particular it might otherwise lead
	 * to a deadlock for blocking file descriptors...
	 */
	if (!stream->enabled || !(stream->sample_flags & SAMPLE_OA_REPORT))
		return -EIO;

	if (!(file->f_flags & O_NONBLOCK)) {
		/* There's the small chance of false positives from
		 * stream->ops->wait_unlocked.
		 *
		 * E.g. with single context filtering since we only wait until
		 * oabuffer has >= 1 report we don't immediately know whether
		 * any reports really belong to the current context
		 */
		do {
			ret = stream->ops->wait_unlocked(stream);
			if (ret)
				return ret;

			mutex_lock(&stream->lock);
			ret = stream->ops->read(stream, buf, count, &offset);
			mutex_unlock(&stream->lock);
		} while (!offset && !ret);
	} else {
		mutex_lock(&stream->lock);
		ret = stream->ops->read(stream, buf, count, &offset);
		mutex_unlock(&stream->lock);
	}

	/* We allow the poll checking to sometimes report false positive EPOLLIN
	 * events where we might actually report EAGAIN on read() if there's
	 * not really any data available. In this situation though we don't
	 * want to enter a busy loop between poll() reporting an EPOLLIN event
	 * and read() returning -EAGAIN. Clearing the oa.pollin state here
	 * effectively ensures we back off until the next hrtimer callback
	 * before reporting another EPOLLIN event.
	 * The exception to this is if ops->read() returned -ENOSPC which means
	 * that more OA data is available than could fit in the user provided
	 * buffer. In this case we want the next poll() call to not block.
	 */
	if (ret != -ENOSPC)
		stream->pollin = false;

	/* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, ... */
	return offset ?: (ret ?: -EAGAIN);
}

static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
{
	struct i915_perf_stream *stream =
		container_of(hrtimer, typeof(*stream), poll_check_timer);

	if (oa_buffer_check_unlocked(stream)) {
		stream->pollin = true;
		wake_up(&stream->poll_wq);
	}

	hrtimer_forward_now(hrtimer,
			    ns_to_ktime(stream->poll_oa_period));

	return HRTIMER_RESTART;
}
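/*
 * A hedged sketch of how userspace might pair poll() with the read side
 * above; wakeups are paced by the hrtimer just defined, so EPOLLIN means
 * "probably readable" rather than "one report per wakeup":
 *
 *	#include <errno.h>
 *	#include <poll.h>
 *	#include <unistd.h>
 *
 *	static ssize_t wait_and_read(int stream_fd, void *buf, size_t len)
 *	{
 *		struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
 *		ssize_t n;
 *
 *		if (poll(&pfd, 1, -1) < 0)
 *			return -errno;
 *
 *		// a spurious wakeup can still yield EAGAIN here
 *		n = read(stream_fd, buf, len);
 *		return n < 0 ? -errno : n;
 *	}
 */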
/**
 * i915_perf_poll_locked - poll_wait() with a suitable wait queue for stream
 * @stream: An i915 perf stream
 * @file: An i915 perf stream file
 * @wait: poll() state table
 *
 * For handling userspace polling on an i915 perf stream, this calls through to
 * &i915_perf_stream_ops->poll_wait to call poll_wait() with a wait queue that
 * will be woken for new stream data.
 *
 * Returns: any poll events that are ready without sleeping
 */
static __poll_t i915_perf_poll_locked(struct i915_perf_stream *stream,
				      struct file *file,
				      poll_table *wait)
{
	__poll_t events = 0;

	stream->ops->poll_wait(stream, file, wait);

	/* Note: we don't explicitly check whether there's something to read
	 * here since this path may be very hot depending on what else
	 * userspace is polling, or on the timeout in use. We rely solely on
	 * the hrtimer/oa_poll_check_timer_cb to notify us when there are
	 * samples to read.
	 */
	if (stream->pollin)
		events |= EPOLLIN;

	return events;
}

/**
 * i915_perf_poll - call poll_wait() with a suitable wait queue for stream
 * @file: An i915 perf stream file
 * @wait: poll() state table
 *
 * For handling userspace polling on an i915 perf stream, this ensures
 * poll_wait() gets called with a wait queue that will be woken for new stream
 * data.
 *
 * Note: Implementation deferred to i915_perf_poll_locked()
 *
 * Returns: any poll events that are ready without sleeping
 */
static __poll_t i915_perf_poll(struct file *file, poll_table *wait)
{
	struct i915_perf_stream *stream = file->private_data;
	__poll_t ret;

	mutex_lock(&stream->lock);
	ret = i915_perf_poll_locked(stream, file, wait);
	mutex_unlock(&stream->lock);

	return ret;
}

/**
 * i915_perf_enable_locked - handle `I915_PERF_IOCTL_ENABLE` ioctl
 * @stream: A disabled i915 perf stream
 *
 * [Re]enables the associated capture of data for this stream.
 *
 * If a stream was previously enabled then there's currently no intention
 * to provide userspace any guarantee about the preservation of previously
 * buffered data.
 */
static void i915_perf_enable_locked(struct i915_perf_stream *stream)
{
	if (stream->enabled)
		return;

	/* Allow stream->ops->enable() to refer to this */
	stream->enabled = true;

	if (stream->ops->enable)
		stream->ops->enable(stream);

	if (stream->hold_preemption)
		intel_context_set_nopreempt(stream->pinned_ctx);
}
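/*
 * From userspace the enable/disable pair can bracket a region of
 * interest without paying the cost of re-opening the stream, e.g.
 * (sketch, assuming a stream opened with I915_PERF_FLAG_DISABLED):
 *
 *	ioctl(stream_fd, I915_PERF_IOCTL_ENABLE, 0);
 *	// ... submit and time the workload ...
 *	ioctl(stream_fd, I915_PERF_IOCTL_DISABLE, 0);
 */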
/**
 * i915_perf_disable_locked - handle `I915_PERF_IOCTL_DISABLE` ioctl
 * @stream: An enabled i915 perf stream
 *
 * Disables the associated capture of data for this stream.
 *
 * The intention is that disabling and re-enabling a stream will ideally be
 * cheaper than destroying and re-opening a stream with the same configuration,
 * though there are no formal guarantees about what state or buffered data
 * must be retained between disabling and re-enabling a stream.
 *
 * Note: while a stream is disabled it's considered an error for userspace
 * to attempt to read from the stream (-EIO).
 */
static void i915_perf_disable_locked(struct i915_perf_stream *stream)
{
	if (!stream->enabled)
		return;

	/* Allow stream->ops->disable() to refer to this */
	stream->enabled = false;

	if (stream->hold_preemption)
		intel_context_clear_nopreempt(stream->pinned_ctx);

	if (stream->ops->disable)
		stream->ops->disable(stream);
}

static long i915_perf_config_locked(struct i915_perf_stream *stream,
				    unsigned long metrics_set)
{
	struct i915_oa_config *config;
	long ret = stream->oa_config->id;

	config = i915_perf_get_oa_config(stream->perf, metrics_set);
	if (!config)
		return -EINVAL;

	if (config != stream->oa_config) {
		int err;

		/*
		 * If OA is bound to a specific context, emit the
		 * reconfiguration inline from that context. The update
		 * will then be ordered with respect to submission on that
		 * context.
		 *
		 * When set globally, we use a low priority kernel context,
		 * so it will effectively take effect when idle.
		 */
		err = emit_oa_config(stream, config, oa_context(stream), NULL);
		if (!err)
			config = xchg(&stream->oa_config, config);
		else
			ret = err;
	}

	i915_oa_config_put(config);

	return ret;
}

/**
 * i915_perf_ioctl_locked - support ioctl() usage with i915 perf stream FDs
 * @stream: An i915 perf stream
 * @cmd: the ioctl request
 * @arg: the ioctl data
 *
 * Returns: zero on success or a negative error code. Returns -EINVAL for
 * an unknown ioctl request.
 */
static long i915_perf_ioctl_locked(struct i915_perf_stream *stream,
				   unsigned int cmd,
				   unsigned long arg)
{
	switch (cmd) {
	case I915_PERF_IOCTL_ENABLE:
		i915_perf_enable_locked(stream);
		return 0;
	case I915_PERF_IOCTL_DISABLE:
		i915_perf_disable_locked(stream);
		return 0;
	case I915_PERF_IOCTL_CONFIG:
		return i915_perf_config_locked(stream, arg);
	}

	return -EINVAL;
}

/**
 * i915_perf_ioctl - support ioctl() usage with i915 perf stream FDs
 * @file: An i915 perf stream file
 * @cmd: the ioctl request
 * @arg: the ioctl data
 *
 * Implementation deferred to i915_perf_ioctl_locked().
 *
 * Returns: zero on success or a negative error code. Returns -EINVAL for
 * an unknown ioctl request.
 */
static long i915_perf_ioctl(struct file *file,
			    unsigned int cmd,
			    unsigned long arg)
{
	struct i915_perf_stream *stream = file->private_data;
	long ret;

	mutex_lock(&stream->lock);
	ret = i915_perf_ioctl_locked(stream, cmd, arg);
	mutex_unlock(&stream->lock);

	return ret;
}
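/*
 * As an aside, the CONFIG ioctl handled above lets tools switch metric
 * sets on a live stream (interface version >= 2). A hypothetical caller,
 * where new_metrics_set_id came from sysfs or the add-config ioctl:
 *
 *	long prev = ioctl(stream_fd, I915_PERF_IOCTL_CONFIG,
 *			  (unsigned long)new_metrics_set_id);
 *
 * On success the ID of the configuration in place when the call was made
 * is returned, so the caller can restore it afterwards.
 */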
/**
 * i915_perf_destroy_locked - destroy an i915 perf stream
 * @stream: An i915 perf stream
 *
 * Frees all resources associated with the given i915 perf @stream, disabling
 * any associated data capture in the process.
 *
 * Note: The &gt->perf.lock mutex has been taken to serialize
 * with any non-file-operation driver hooks.
 */
static void i915_perf_destroy_locked(struct i915_perf_stream *stream)
{
	if (stream->enabled)
		i915_perf_disable_locked(stream);

	if (stream->ops->destroy)
		stream->ops->destroy(stream);

	if (stream->ctx)
		i915_gem_context_put(stream->ctx);

	kfree(stream);
}

/**
 * i915_perf_release - handles userspace close() of a stream file
 * @inode: anonymous inode associated with file
 * @file: An i915 perf stream file
 *
 * Cleans up any resources associated with an open i915 perf stream file.
 *
 * NB: close() can't really fail from the userspace point of view.
 *
 * Returns: zero on success or a negative error code.
 */
static int i915_perf_release(struct inode *inode, struct file *file)
{
	struct i915_perf_stream *stream = file->private_data;
	struct i915_perf *perf = stream->perf;
	struct intel_gt *gt = stream->engine->gt;

	/*
	 * Within this call, we know that the fd is being closed and we have no
	 * other user of stream->lock. Use the perf lock to destroy the stream
	 * here.
	 */
	mutex_lock(&gt->perf.lock);
	i915_perf_destroy_locked(stream);
	mutex_unlock(&gt->perf.lock);

	/* Release the reference the perf stream kept on the driver. */
	drm_dev_put(&perf->i915->drm);

	return 0;
}


static const struct file_operations fops = {
	.owner		= THIS_MODULE,
	.llseek		= no_llseek,
	.release	= i915_perf_release,
	.poll		= i915_perf_poll,
	.read		= i915_perf_read,
	.unlocked_ioctl	= i915_perf_ioctl,
	/* Our ioctls have no arguments, so it's safe to use the same function
	 * to handle 32bits compatibility.
	 */
	.compat_ioctl   = i915_perf_ioctl,
};
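/*
 * To make the open path below concrete, a hypothetical userspace caller
 * might request a periodic system-wide OA stream like this; metrics_set_id
 * and the report format are placeholders that depend on the platform:
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT,
 *			I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
 *	};
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) /
 *				  (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 */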
/**
 * i915_perf_open_ioctl_locked - DRM ioctl() for userspace to open a stream FD
 * @perf: i915 perf instance
 * @param: The open parameters passed to `DRM_I915_PERF_OPEN`
 * @props: individually validated u64 property value pairs
 * @file: drm file
 *
 * See i915_perf_open_ioctl() for interface details.
 *
 * Implements further stream config validation and stream initialization on
 * behalf of i915_perf_open_ioctl() with the &gt->perf.lock mutex
 * taken to serialize with any non-file-operation driver hooks.
 *
 * Note: at this point the @props have only been validated in isolation and
 * it's still necessary to validate that the combination of properties makes
 * sense.
 *
 * In the case where userspace is interested in OA unit metrics then further
 * config validation and stream initialization details will be handled by
 * i915_oa_stream_init(). The code here should only validate config state that
 * will be relevant to all stream types / backends.
 *
 * Returns: A newly opened i915 perf stream file descriptor or a negative
 * error code.
 */
static int
i915_perf_open_ioctl_locked(struct i915_perf *perf,
			    struct drm_i915_perf_open_param *param,
			    struct perf_open_properties *props,
			    struct drm_file *file)
{
	struct i915_gem_context *specific_ctx = NULL;
	struct i915_perf_stream *stream = NULL;
	unsigned long f_flags = 0;
	bool privileged_op = true;
	int stream_fd;
	int ret;

	if (props->single_context) {
		u32 ctx_handle = props->ctx_handle;
		struct drm_i915_file_private *file_priv = file->driver_priv;

		specific_ctx = i915_gem_context_lookup(file_priv, ctx_handle);
		if (IS_ERR(specific_ctx)) {
			drm_dbg(&perf->i915->drm,
				"Failed to look up context with ID %u for opening perf stream\n",
				ctx_handle);
			ret = PTR_ERR(specific_ctx);
			goto err;
		}
	}

	/*
	 * On Haswell the OA unit supports clock gating off for a specific
	 * context and in this mode there's no visibility of metrics for the
	 * rest of the system, which we consider acceptable for a
	 * non-privileged client.
	 *
	 * For Gen8->11 the OA unit no longer supports clock gating off for a
	 * specific context and the kernel can't securely stop the counters
	 * from updating as system-wide / global values. Even though we can
	 * filter reports based on the included context ID we can't block
	 * clients from seeing the raw / global counter values via
	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
	 * enable the OA unit by default.
	 *
	 * For Gen12+ we gain a new OAR unit that only monitors the RCS on a
	 * per context basis. So we can relax requirements there if the user
	 * doesn't request global stream access (i.e. query based sampling
	 * using MI_REPORT_PERF_COUNT).
	 */
	if (IS_HASWELL(perf->i915) && specific_ctx)
		privileged_op = false;
	else if (GRAPHICS_VER(perf->i915) == 12 && specific_ctx &&
		 (props->sample_flags & SAMPLE_OA_REPORT) == 0)
		privileged_op = false;

	if (props->hold_preemption) {
		if (!props->single_context) {
			drm_dbg(&perf->i915->drm,
				"preemption disable with no context\n");
			ret = -EINVAL;
			goto err;
		}
		privileged_op = true;
	}

	/*
	 * Asking for SSEU configuration is a privileged operation.
	 */
	if (props->has_sseu)
		privileged_op = true;
	else
		get_default_sseu_config(&props->sseu, props->engine);

	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
	 * we check a dev.i915.perf_stream_paranoid sysctl option
	 * to determine if it's ok to access system wide OA counters
	 * without CAP_PERFMON or CAP_SYS_ADMIN privileges.
	 */
	if (privileged_op &&
	    i915_perf_stream_paranoid && !perfmon_capable()) {
		drm_dbg(&perf->i915->drm,
			"Insufficient privileges to open i915 perf stream\n");
		ret = -EACCES;
		goto err_ctx;
	}

	stream = kzalloc(sizeof(*stream), GFP_KERNEL);
	if (!stream) {
		ret = -ENOMEM;
		goto err_ctx;
	}

	stream->perf = perf;
	stream->ctx = specific_ctx;
	stream->poll_oa_period = props->poll_oa_period;

	ret = i915_oa_stream_init(stream, param, props);
	if (ret)
		goto err_alloc;

	/* we avoid simply assigning stream->sample_flags = props->sample_flags
	 * to have _stream_init check the combination of sample flags more
	 * thoroughly, but still this is the expected result at this point.
	 */
	if (WARN_ON(stream->sample_flags != props->sample_flags)) {
		ret = -ENODEV;
		goto err_flags;
	}

	if (param->flags & I915_PERF_FLAG_FD_CLOEXEC)
		f_flags |= O_CLOEXEC;
	if (param->flags & I915_PERF_FLAG_FD_NONBLOCK)
		f_flags |= O_NONBLOCK;

	stream_fd = anon_inode_getfd("[i915_perf]", &fops, stream, f_flags);
	if (stream_fd < 0) {
		ret = stream_fd;
		goto err_flags;
	}

	if (!(param->flags & I915_PERF_FLAG_DISABLED))
		i915_perf_enable_locked(stream);

	/* Take a reference on the driver that will be kept with stream_fd
	 * until its release.
	 */
	drm_dev_get(&perf->i915->drm);

	return stream_fd;

err_flags:
	if (stream->ops->destroy)
		stream->ops->destroy(stream);
err_alloc:
	kfree(stream);
err_ctx:
	if (specific_ctx)
		i915_gem_context_put(specific_ctx);
err:
	return ret;
}

static u64 oa_exponent_to_ns(struct i915_perf *perf, int exponent)
{
	u64 nom = (2ULL << exponent) * NSEC_PER_SEC;
	u32 den = i915_perf_oa_timestamp_frequency(perf->i915);

	return div_u64(nom + den - 1, den);
}

static __always_inline bool
oa_format_valid(struct i915_perf *perf, enum drm_i915_oa_format format)
{
	return test_bit(format, perf->format_mask);
}

static __always_inline void
oa_format_add(struct i915_perf *perf, enum drm_i915_oa_format format)
{
	__set_bit(format, perf->format_mask);
}
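/*
 * A worked example of the exponent mapping above: the OA unit samples
 * every 2^(exponent + 1) timestamp periods, which is the (2ULL << exponent)
 * term in oa_exponent_to_ns(). Assuming the 12.5 MHz timestamp frequency
 * of Haswell (an 80ns period), exponent 0 samples every 160ns while
 * exponent 16 samples roughly every 10.5ms.
 */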
/**
 * read_properties_unlocked - validate + copy userspace stream open properties
 * @perf: i915 perf instance
 * @uprops: The array of u64 key value pairs given by userspace
 * @n_props: The number of key value pairs expected in @uprops
 * @props: The stream configuration built up while validating properties
 *
 * Note this function only validates properties in isolation; it doesn't
 * validate that the combination of properties makes sense or that all
 * properties necessary for a particular kind of stream have been set.
 *
 * Note that there currently aren't any ordering requirements for properties so
 * we shouldn't validate or assume anything about ordering here. This doesn't
 * rule out defining new properties with ordering requirements in the future.
 */
static int read_properties_unlocked(struct i915_perf *perf,
				    u64 __user *uprops,
				    u32 n_props,
				    struct perf_open_properties *props)
{
	struct drm_i915_gem_context_param_sseu user_sseu;
	const struct i915_oa_format *f;
	u64 __user *uprop = uprops;
	bool config_instance = false;
	bool config_class = false;
	bool config_sseu = false;
	u8 class, instance;
	u32 i;
	int ret;

	memset(props, 0, sizeof(struct perf_open_properties));
	props->poll_oa_period = DEFAULT_POLL_PERIOD_NS;

	/* Considering that ID = 0 is reserved and assuming that we don't
	 * (currently) expect any configurations to ever specify duplicate
	 * values for a particular property ID then the last _PROP_MAX value is
	 * one greater than the maximum number of properties we expect to get
	 * from userspace.
	 */
	if (!n_props || n_props >= DRM_I915_PERF_PROP_MAX) {
		drm_dbg(&perf->i915->drm,
			"Invalid number of i915 perf properties given\n");
		return -EINVAL;
	}

	/* Defaults when class:instance is not passed */
	class = I915_ENGINE_CLASS_RENDER;
	instance = 0;

	for (i = 0; i < n_props; i++) {
		u64 oa_period, oa_freq_hz;
		u64 id, value;

		ret = get_user(id, uprop);
		if (ret)
			return ret;

		ret = get_user(value, uprop + 1);
		if (ret)
			return ret;

		if (id == 0 || id >= DRM_I915_PERF_PROP_MAX) {
			drm_dbg(&perf->i915->drm,
				"Unknown i915 perf property ID\n");
			return -EINVAL;
		}

		switch ((enum drm_i915_perf_property_id)id) {
		case DRM_I915_PERF_PROP_CTX_HANDLE:
			props->single_context = 1;
			props->ctx_handle = value;
			break;
		case DRM_I915_PERF_PROP_SAMPLE_OA:
			if (value)
				props->sample_flags |= SAMPLE_OA_REPORT;
			break;
		case DRM_I915_PERF_PROP_OA_METRICS_SET:
			if (value == 0) {
				drm_dbg(&perf->i915->drm,
					"Unknown OA metric set ID\n");
				return -EINVAL;
			}
			props->metrics_set = value;
			break;
		case DRM_I915_PERF_PROP_OA_FORMAT:
			if (value == 0 || value >= I915_OA_FORMAT_MAX) {
				drm_dbg(&perf->i915->drm,
					"Out-of-range OA report format %llu\n",
					value);
				return -EINVAL;
			}
			if (!oa_format_valid(perf, value)) {
				drm_dbg(&perf->i915->drm,
					"Unsupported OA report format %llu\n",
					value);
				return -EINVAL;
			}
			props->oa_format = value;
			break;
		case DRM_I915_PERF_PROP_OA_EXPONENT:
			if (value > OA_EXPONENT_MAX) {
				drm_dbg(&perf->i915->drm,
					"OA timer exponent too high (> %u)\n",
					OA_EXPONENT_MAX);
				return -EINVAL;
			}

			/* Theoretically we can program the OA unit to sample
			 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
			 * for BXT. We don't allow such high sampling
			 * frequencies by default unless root.
			 */

			BUILD_BUG_ON(sizeof(oa_period) != 8);
			oa_period = oa_exponent_to_ns(perf, value);

			/* This check is primarily to ensure that oa_period <=
			 * UINT32_MAX (before passing to do_div which only
			 * accepts a u32 denominator), but we can also skip
			 * checking anything < 1Hz which implicitly can't be
			 * limited via an integer oa_max_sample_rate.
			 */
			if (oa_period <= NSEC_PER_SEC) {
				u64 tmp = NSEC_PER_SEC;
				do_div(tmp, oa_period);
				oa_freq_hz = tmp;
			} else
				oa_freq_hz = 0;

			if (oa_freq_hz > i915_oa_max_sample_rate && !perfmon_capable()) {
				drm_dbg(&perf->i915->drm,
					"OA exponent would exceed the max sampling frequency (sysctl dev.i915.oa_max_sample_rate) %uHz without CAP_PERFMON or CAP_SYS_ADMIN privileges\n",
					i915_oa_max_sample_rate);
				return -EACCES;
			}

			props->oa_periodic = true;
			props->oa_period_exponent = value;
			break;
		case DRM_I915_PERF_PROP_HOLD_PREEMPTION:
			props->hold_preemption = !!value;
			break;
		case DRM_I915_PERF_PROP_GLOBAL_SSEU: {
			if (GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 50)) {
				drm_dbg(&perf->i915->drm,
					"SSEU config not supported on gfx %x\n",
					GRAPHICS_VER_FULL(perf->i915));
				return -ENODEV;
			}

			if (copy_from_user(&user_sseu,
					   u64_to_user_ptr(value),
					   sizeof(user_sseu))) {
				drm_dbg(&perf->i915->drm,
					"Unable to copy global sseu parameter\n");
				return -EFAULT;
			}
			config_sseu = true;
			break;
		}
		case DRM_I915_PERF_PROP_POLL_OA_PERIOD:
			if (value < 100000 /* 100us */) {
				drm_dbg(&perf->i915->drm,
					"OA availability timer too small (%lluns < 100us)\n",
					value);
				return -EINVAL;
			}
			props->poll_oa_period = value;
			break;
		case DRM_I915_PERF_PROP_OA_ENGINE_CLASS:
			class = (u8)value;
			config_class = true;
			break;
		case DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE:
			instance = (u8)value;
			config_instance = true;
			break;
		default:
			MISSING_CASE(id);
			return -EINVAL;
		}

		uprop += 2;
	}

	if ((config_class && !config_instance) ||
	    (config_instance && !config_class)) {
		drm_dbg(&perf->i915->drm,
			"OA engine-class and engine-instance parameters must be passed together\n");
		return -EINVAL;
	}

	props->engine = intel_engine_lookup_user(perf->i915, class, instance);
	if (!props->engine) {
		drm_dbg(&perf->i915->drm,
			"OA engine class and instance invalid %d:%d\n",
			class, instance);
		return -EINVAL;
	}

	if (!engine_supports_oa(props->engine)) {
		drm_dbg(&perf->i915->drm,
			"Engine not supported by OA %d:%d\n",
			class, instance);
		return -EINVAL;
	}

	/*
	 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media
	 * C6 disable in BIOS. Fail if Media C6 is enabled on steppings where OAM
	 * does not work as expected.
	 */
	if (IS_MTL_MEDIA_STEP(props->engine->i915, STEP_A0, STEP_C0) &&
	    props->engine->oa_group->type == TYPE_OAM &&
	    intel_check_bios_c6_setup(&props->engine->gt->rc6)) {
		drm_dbg(&perf->i915->drm,
			"OAM requires media C6 to be disabled in BIOS\n");
		return -EINVAL;
	}

	i = array_index_nospec(props->oa_format, I915_OA_FORMAT_MAX);
	f = &perf->oa_formats[i];
	if (!engine_supports_oa_format(props->engine, f->type)) {
		drm_dbg(&perf->i915->drm,
			"Invalid OA format %d for class %d\n",
			f->type, props->engine->class);
		return -EINVAL;
	}

	if (config_sseu) {
		ret = get_sseu_config(&props->sseu, props->engine, &user_sseu);
		if (ret) {
			drm_dbg(&perf->i915->drm,
				"Invalid SSEU configuration\n");
			return ret;
		}
		props->has_sseu = true;
	}

	return 0;
}

/**
 * i915_perf_open_ioctl - DRM ioctl() for userspace to open a stream FD
 * @dev: drm device
 * @data: ioctl data copied from userspace (unvalidated)
 * @file: drm file
 *
 * Validates the stream open parameters given by userspace including flags
 * and an array of u64 key, value pair properties.
 *
 * Very little is assumed up front about the nature of the stream being
 * opened (for instance we don't assume it's for periodic OA unit metrics). An
 * i915-perf stream is expected to be a suitable interface for other forms of
 * buffered data written by the GPU besides periodic OA metrics.
 *
 * Note we copy the properties from userspace outside of the i915 perf
 * mutex to avoid an awkward lockdep with mmap_lock.
 *
 * Most of the implementation details are handled by
 * i915_perf_open_ioctl_locked() after taking the &gt->perf.lock
 * mutex for serializing with any non-file-operation driver hooks.
 *
 * Return: A newly opened i915 Perf stream file descriptor or negative
 * error code on failure.
 */
int i915_perf_open_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file)
{
	struct i915_perf *perf = &to_i915(dev)->perf;
	struct drm_i915_perf_open_param *param = data;
	struct intel_gt *gt;
	struct perf_open_properties props;
	u32 known_open_flags;
	int ret;

	if (!perf->i915) {
		drm_dbg(&perf->i915->drm,
			"i915 perf interface not available for this system\n");
		return -ENOTSUPP;
	}

	known_open_flags = I915_PERF_FLAG_FD_CLOEXEC |
			   I915_PERF_FLAG_FD_NONBLOCK |
			   I915_PERF_FLAG_DISABLED;
	if (param->flags & ~known_open_flags) {
		drm_dbg(&perf->i915->drm,
			"Unknown drm_i915_perf_open_param flag\n");
		return -EINVAL;
	}

	ret = read_properties_unlocked(perf,
				       u64_to_user_ptr(param->properties_ptr),
				       param->num_properties,
				       &props);
	if (ret)
		return ret;

	gt = props.engine->gt;

	mutex_lock(&gt->perf.lock);
	ret = i915_perf_open_ioctl_locked(perf, param, &props, file);
	mutex_unlock(&gt->perf.lock);

	return ret;
}
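/*
 * The metrics/ directory registered below is how userspace resolves a
 * config UUID to the u64 ID used with DRM_I915_PERF_PROP_OA_METRICS_SET.
 * A sketch, assuming the device is card0:
 *
 *	#include <inttypes.h>
 *	#include <stdio.h>
 *
 *	static uint64_t metrics_set_id(const char *uuid)
 *	{
 *		char path[128];
 *		uint64_t id = 0;
 *		FILE *f;
 *
 *		snprintf(path, sizeof(path),
 *			 "/sys/class/drm/card0/metrics/%s/id", uuid);
 *		f = fopen(path, "r");
 *		if (f) {
 *			if (fscanf(f, "%" SCNu64, &id) != 1)
 *				id = 0; // 0 is reserved, i.e. not found
 *			fclose(f);
 *		}
 *		return id;
 *	}
 */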
 */
void i915_perf_register(struct drm_i915_private *i915)
{
	struct i915_perf *perf = &i915->perf;
	struct intel_gt *gt = to_gt(i915);

	if (!perf->i915)
		return;

	/* To be sure we're synchronized with an attempted
	 * i915_perf_open_ioctl(); considering that we register after
	 * being exposed to userspace.
	 */
	mutex_lock(&gt->perf.lock);

	perf->metrics_kobj =
		kobject_create_and_add("metrics",
				       &i915->drm.primary->kdev->kobj);

	mutex_unlock(&gt->perf.lock);
}

/**
 * i915_perf_unregister - hide i915-perf from userspace
 * @i915: i915 device instance
 *
 * i915-perf state cleanup is split up into an 'unregister' and
 * 'deinit' phase where the interface is first hidden from
 * userspace by i915_perf_unregister() before cleaning up
 * remaining state in i915_perf_fini().
 */
void i915_perf_unregister(struct drm_i915_private *i915)
{
	struct i915_perf *perf = &i915->perf;

	if (!perf->metrics_kobj)
		return;

	kobject_put(perf->metrics_kobj);
	perf->metrics_kobj = NULL;
}

static bool gen8_is_valid_flex_addr(struct i915_perf *perf, u32 addr)
{
	static const i915_reg_t flex_eu_regs[] = {
		EU_PERF_CNTL0,
		EU_PERF_CNTL1,
		EU_PERF_CNTL2,
		EU_PERF_CNTL3,
		EU_PERF_CNTL4,
		EU_PERF_CNTL5,
		EU_PERF_CNTL6,
	};
	int i;

	for (i = 0; i < ARRAY_SIZE(flex_eu_regs); i++) {
		if (i915_mmio_reg_offset(flex_eu_regs[i]) == addr)
			return true;
	}
	return false;
}

static bool reg_in_range_table(u32 addr, const struct i915_range *table)
{
	while (table->start || table->end) {
		if (addr >= table->start && addr <= table->end)
			return true;

		table++;
	}

	return false;
}

#define REG_EQUAL(addr, mmio) \
	((addr) == i915_mmio_reg_offset(mmio))

static const struct i915_range gen7_oa_b_counters[] = {
	{ .start = 0x2710, .end = 0x272c },	/* OASTARTTRIG[1-8] */
	{ .start = 0x2740, .end = 0x275c },	/* OAREPORTTRIG[1-8] */
	{ .start = 0x2770, .end = 0x27ac },	/* OACEC[0-7][0-1] */
	{}
};

static const struct i915_range gen12_oa_b_counters[] = {
	{ .start = 0x2b2c, .end = 0x2b2c },	/* GEN12_OAG_OA_PESS */
	{ .start = 0xd900, .end = 0xd91c },	/* GEN12_OAG_OASTARTTRIG[1-8] */
	{ .start = 0xd920, .end = 0xd93c },	/* GEN12_OAG_OAREPORTTRIG1[1-8] */
	{ .start = 0xd940, .end = 0xd97c },	/* GEN12_OAG_CEC[0-7][0-1] */
	{ .start = 0xdc00, .end = 0xdc3c },	/* GEN12_OAG_SCEC[0-7][0-1] */
	{ .start = 0xdc40, .end = 0xdc40 },	/* GEN12_OAG_SPCTR_CNF */
	{ .start = 0xdc44, .end = 0xdc44 },	/* GEN12_OAA_DBG_REG */
	{}
};

static const struct i915_range mtl_oam_b_counters[] = {
	{ .start = 0x393000, .end = 0x39301c },	/* GEN12_OAM_STARTTRIG1[1-8] */
	{ .start = 0x393020, .end = 0x39303c },	/* GEN12_OAM_REPORTTRIG1[1-8] */
	{ .start = 0x393040, .end = 0x39307c },	/* GEN12_OAM_CEC[0-7][0-1] */
	{ .start = 0x393200, .end = 0x39323C },	/* MPES[0-7] */
	{}
};

static const struct i915_range xehp_oa_b_counters[] = {
	{ .start = 0xdc48, .end = 0xdc48 },	/* OAA_ENABLE_REG */
	{ .start = 0xdd00, .end = 0xdd48 },	/* OAG_LCE0_0 - OAA_LENABLE_REG */
	{},	/* terminator: reg_in_range_table() stops at a zeroed entry */
};

static const struct i915_range gen7_oa_mux_regs[] = {
	{ .start = 0x91b8, .end = 0x91cc },	/* OA_PERFCNT[1-2], OA_PERFMATRIX */
	{ .start = 0x9800, .end = 0x9888 },	/* MICRO_BP0_0 - NOA_WRITE */
	{ .start = 0xe180, .end = 0xe180 },	/* HALF_SLICE_CHICKEN2 */
	{}
};

static const struct i915_range hsw_oa_mux_regs[] = {
	{ .start = 0x09e80, .end = 0x09ea4 },	/* HSW_MBVID2_NOA[0-9] */
	{ .start = 0x09ec0, .end = 0x09ec0 },	/* HSW_MBVID2_MISR0 */
	{ .start = 0x25100, .end = 0x2ff90 },
	{}
};

static const struct i915_range chv_oa_mux_regs[] = {
	{ .start = 0x182300, .end = 0x1823a4 },
	{}
};

static const struct i915_range gen8_oa_mux_regs[] = {
	{ .start = 0x0d00, .end = 0x0d2c },	/* RPM_CONFIG[0-1], NOA_CONFIG[0-8] */
	{ .start = 0x20cc, .end = 0x20cc },	/* WAIT_FOR_RC6_EXIT */
	{}
};

static const struct i915_range gen11_oa_mux_regs[] = {
	{ .start = 0x91c8, .end = 0x91dc },	/* OA_PERFCNT[3-4] */
	{}
};

static const struct i915_range gen12_oa_mux_regs[] = {
	{ .start = 0x0d00, .end = 0x0d04 },	/* RPM_CONFIG[0-1] */
	{ .start = 0x0d0c, .end = 0x0d2c },	/* NOA_CONFIG[0-8] */
	{ .start = 0x9840, .end = 0x9840 },	/* GDT_CHICKEN_BITS */
	{ .start = 0x9884, .end = 0x9888 },	/* NOA_WRITE */
	{ .start = 0x20cc, .end = 0x20cc },	/* WAIT_FOR_RC6_EXIT */
	{}
};

/*
 * Ref: 14010536224:
 * 0x20cc is repurposed on MTL, so use a separate array for MTL.
 */
static const struct i915_range mtl_oa_mux_regs[] = {
	{ .start = 0x0d00, .end = 0x0d04 },	/* RPM_CONFIG[0-1] */
	{ .start = 0x0d0c, .end = 0x0d2c },	/* NOA_CONFIG[0-8] */
	{ .start = 0x9840, .end = 0x9840 },	/* GDT_CHICKEN_BITS */
	{ .start = 0x9884, .end = 0x9888 },	/* NOA_WRITE */
	{ .start = 0x38d100, .end = 0x38d114 },	/* VISACTL */
	{}
};

static bool gen7_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen7_oa_b_counters);
}

static bool gen8_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen7_oa_mux_regs) ||
		reg_in_range_table(addr, gen8_oa_mux_regs);
}

static bool gen11_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen7_oa_mux_regs) ||
		reg_in_range_table(addr, gen8_oa_mux_regs) ||
		reg_in_range_table(addr, gen11_oa_mux_regs);
}

static bool hsw_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen7_oa_mux_regs) ||
		reg_in_range_table(addr, hsw_oa_mux_regs);
}

static bool chv_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen7_oa_mux_regs) ||
		reg_in_range_table(addr, chv_oa_mux_regs);
}

static bool gen12_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, gen12_oa_b_counters);
}

static bool mtl_is_valid_oam_b_counter_addr(struct i915_perf *perf, u32 addr)
{
	if (HAS_OAM(perf->i915) &&
	    GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 70))
		return reg_in_range_table(addr, mtl_oam_b_counters);

	return false;
}

static bool xehp_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
{
	return reg_in_range_table(addr, xehp_oa_b_counters) ||
		reg_in_range_table(addr, gen12_oa_b_counters) ||
		mtl_is_valid_oam_b_counter_addr(perf, addr);
}

static bool gen12_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
{
	if (IS_METEORLAKE(perf->i915))
		return reg_in_range_table(addr, mtl_oa_mux_regs);
	else
		return reg_in_range_table(addr, gen12_oa_mux_regs);
}

static u32 mask_reg_value(u32 reg, u32 val)
{
	/* HALF_SLICE_CHICKEN2 is programmed with the
	 * WaDisableSTUnitPowerOptimization workaround. Make sure the value
	 * programmed by userspace doesn't change this.
	 */
	if (REG_EQUAL(reg, HALF_SLICE_CHICKEN2))
		val = val & ~_MASKED_BIT_ENABLE(GEN8_ST_PO_DISABLE);

	/* WAIT_FOR_RC6_EXIT has only one bit fulfilling the function
	 * indicated by its name and a bunch of selection fields used by OA
	 * configs.
	 */
	if (REG_EQUAL(reg, WAIT_FOR_RC6_EXIT))
		val = val & ~_MASKED_BIT_ENABLE(HSW_WAIT_FOR_RC6_EXIT_ENABLE);

	return val;
}

static struct i915_oa_reg *alloc_oa_regs(struct i915_perf *perf,
					 bool (*is_valid)(struct i915_perf *perf, u32 addr),
					 u32 __user *regs,
					 u32 n_regs)
{
	struct i915_oa_reg *oa_regs;
	int err;
	u32 i;

	if (!n_regs)
		return NULL;

	/* No is_valid function means we're not allowing any register to be programmed. */
	GEM_BUG_ON(!is_valid);
	if (!is_valid)
		return ERR_PTR(-EINVAL);

	oa_regs = kmalloc_array(n_regs, sizeof(*oa_regs), GFP_KERNEL);
	if (!oa_regs)
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < n_regs; i++) {
		u32 addr, value;

		err = get_user(addr, regs);
		if (err)
			goto addr_err;

		if (!is_valid(perf, addr)) {
			drm_dbg(&perf->i915->drm,
				"Invalid oa_reg address: %X\n", addr);
			err = -EINVAL;
			goto addr_err;
		}

		err = get_user(value, regs + 1);
		if (err)
			goto addr_err;

		oa_regs[i].addr = _MMIO(addr);
		oa_regs[i].value = mask_reg_value(addr, value);

		regs += 2;
	}

	return oa_regs;

addr_err:
	kfree(oa_regs);
	return ERR_PTR(err);
}

static ssize_t show_dynamic_id(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       char *buf)
{
	struct i915_oa_config *oa_config =
		container_of(attr, typeof(*oa_config), sysfs_metric_id);

	return sprintf(buf, "%d\n", oa_config->id);
}

static int create_dynamic_oa_sysfs_entry(struct i915_perf *perf,
					 struct i915_oa_config *oa_config)
{
	sysfs_attr_init(&oa_config->sysfs_metric_id.attr);
	oa_config->sysfs_metric_id.attr.name = "id";
	oa_config->sysfs_metric_id.attr.mode = S_IRUGO;
	oa_config->sysfs_metric_id.show = show_dynamic_id;
	oa_config->sysfs_metric_id.store = NULL;

	oa_config->attrs[0] = &oa_config->sysfs_metric_id.attr;
	oa_config->attrs[1] = NULL;

	oa_config->sysfs_metric.name = oa_config->uuid;
	oa_config->sysfs_metric.attrs = oa_config->attrs;

	return sysfs_create_group(perf->metrics_kobj,
				  &oa_config->sysfs_metric);
}
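/*
 * A hypothetical userspace registration of a dynamic OA config, for
 * illustration only; the uuid and the register (address, value) pairs are
 * placeholders, not a real metric set:
 *
 *	uint32_t mux_regs[] = { 0x9888, 0x10800000 }; // one (addr, value) pair
 *	struct drm_i915_perf_oa_config config = {
 *		.n_mux_regs = 1,
 *		.mux_regs_ptr = (uintptr_t)mux_regs,
 *	};
 *	memcpy(config.uuid, "01234567-0123-0123-0123-0123456789ab",
 *	       sizeof(config.uuid));
 *	int id = ioctl(drm_fd, DRM_IOCTL_I915_PERF_ADD_CONFIG, &config);
 *
 * The returned ID is what later feeds DRM_I915_PERF_PROP_OA_METRICS_SET
 * or I915_PERF_IOCTL_CONFIG.
 */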
/**
 * i915_perf_add_config_ioctl - DRM ioctl() for userspace to add a new OA config
 * @dev: drm device
 * @data: ioctl data (pointer to struct drm_i915_perf_oa_config) copied from
 *        userspace (unvalidated)
 * @file: drm file
 *
 * Validates the submitted OA registers to be saved into a new OA config that
 * can then be used for programming the OA unit and its NOA network.
 *
 * Returns: A new allocated config number to be used with the perf open ioctl
 * or a negative error code on failure.
 */
int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file)
{
	struct i915_perf *perf = &to_i915(dev)->perf;
	struct drm_i915_perf_oa_config *args = data;
	struct i915_oa_config *oa_config, *tmp;
	struct i915_oa_reg *regs;
	int err, id;

	if (!perf->i915) {
		drm_dbg(&perf->i915->drm,
			"i915 perf interface not available for this system\n");
		return -ENOTSUPP;
	}

	if (!perf->metrics_kobj) {
		drm_dbg(&perf->i915->drm,
			"OA metrics weren't advertised via sysfs\n");
		return -EINVAL;
	}

	if (i915_perf_stream_paranoid && !perfmon_capable()) {
		drm_dbg(&perf->i915->drm,
			"Insufficient privileges to add i915 OA config\n");
		return -EACCES;
	}

	if ((!args->mux_regs_ptr || !args->n_mux_regs) &&
	    (!args->boolean_regs_ptr || !args->n_boolean_regs) &&
	    (!args->flex_regs_ptr || !args->n_flex_regs)) {
		drm_dbg(&perf->i915->drm,
			"No OA registers given\n");
		return -EINVAL;
	}

	oa_config = kzalloc(sizeof(*oa_config), GFP_KERNEL);
	if (!oa_config) {
		drm_dbg(&perf->i915->drm,
			"Failed to allocate memory for the OA config\n");
		return -ENOMEM;
	}

	oa_config->perf = perf;
	kref_init(&oa_config->ref);

	if (!uuid_is_valid(args->uuid)) {
		drm_dbg(&perf->i915->drm,
			"Invalid uuid format for OA config\n");
		err = -EINVAL;
		goto reg_err;
	}

	/* Last character in oa_config->uuid will be 0 because oa_config is
	 * kzalloc.
	 */
	memcpy(oa_config->uuid, args->uuid, sizeof(args->uuid));

	oa_config->mux_regs_len = args->n_mux_regs;
	regs = alloc_oa_regs(perf,
			     perf->ops.is_valid_mux_reg,
			     u64_to_user_ptr(args->mux_regs_ptr),
			     args->n_mux_regs);

	if (IS_ERR(regs)) {
		drm_dbg(&perf->i915->drm,
			"Failed to create OA config for mux_regs\n");
		err = PTR_ERR(regs);
		goto reg_err;
	}
	oa_config->mux_regs = regs;

	oa_config->b_counter_regs_len = args->n_boolean_regs;
	regs = alloc_oa_regs(perf,
			     perf->ops.is_valid_b_counter_reg,
			     u64_to_user_ptr(args->boolean_regs_ptr),
			     args->n_boolean_regs);

	if (IS_ERR(regs)) {
		drm_dbg(&perf->i915->drm,
			"Failed to create OA config for b_counter_regs\n");
		err = PTR_ERR(regs);
		goto reg_err;
	}
	oa_config->b_counter_regs = regs;

	if (GRAPHICS_VER(perf->i915) < 8) {
		if (args->n_flex_regs != 0) {
			err = -EINVAL;
			goto reg_err;
		}
	} else {
		oa_config->flex_regs_len = args->n_flex_regs;
		regs = alloc_oa_regs(perf,
				     perf->ops.is_valid_flex_reg,
				     u64_to_user_ptr(args->flex_regs_ptr),
				     args->n_flex_regs);

		if (IS_ERR(regs)) {
			drm_dbg(&perf->i915->drm,
				"Failed to create OA config for flex_regs\n");
			err = PTR_ERR(regs);
			goto reg_err;
		}
		oa_config->flex_regs = regs;
	}

	err = mutex_lock_interruptible(&perf->metrics_lock);
	if (err)
		goto reg_err;

	/* We shouldn't have too many configs, so this iteration shouldn't be
	 * too costly.
	 */
	idr_for_each_entry(&perf->metrics_idr, tmp, id) {
		if (!strcmp(tmp->uuid, oa_config->uuid)) {
			drm_dbg(&perf->i915->drm,
				"OA config already exists with this uuid\n");
			err = -EADDRINUSE;
			goto sysfs_err;
		}
	}

	err = create_dynamic_oa_sysfs_entry(perf, oa_config);
	if (err) {
		drm_dbg(&perf->i915->drm,
			"Failed to create sysfs entry for OA config\n");
		goto sysfs_err;
	}

	/* Config id 0 is invalid, id 1 for kernel stored test config. */
	oa_config->id = idr_alloc(&perf->metrics_idr,
				  oa_config, 2,
				  0, GFP_KERNEL);
	if (oa_config->id < 0) {
		drm_dbg(&perf->i915->drm,
			"Failed to allocate an ID for OA config\n");
		err = oa_config->id;
		goto sysfs_err;
	}
	id = oa_config->id;

	drm_dbg(&perf->i915->drm,
		"Added config %s id=%i\n", oa_config->uuid, oa_config->id);
	mutex_unlock(&perf->metrics_lock);

	return id;

sysfs_err:
	mutex_unlock(&perf->metrics_lock);
reg_err:
	i915_oa_config_put(oa_config);
	drm_dbg(&perf->i915->drm,
		"Failed to add new OA config\n");
	return err;
}

/**
 * i915_perf_remove_config_ioctl - DRM ioctl() for userspace to remove an OA config
 * @dev: drm device
 * @data: ioctl data (pointer to u64 integer) copied from userspace
 * @file: drm file
 *
 * Configs can be removed while being used; they will stop appearing in sysfs
 * and their content will be freed when the stream using the config is closed.
 *
 * Returns: 0 on success or a negative error code on failure.
 */
int i915_perf_remove_config_ioctl(struct drm_device *dev, void *data,
				  struct drm_file *file)
{
	struct i915_perf *perf = &to_i915(dev)->perf;
	u64 *arg = data;
	struct i915_oa_config *oa_config;
	int ret;

	if (!perf->i915) {
		drm_dbg(&perf->i915->drm,
			"i915 perf interface not available for this system\n");
		return -ENOTSUPP;
	}

	if (i915_perf_stream_paranoid && !perfmon_capable()) {
		drm_dbg(&perf->i915->drm,
			"Insufficient privileges to remove i915 OA config\n");
		return -EACCES;
	}

	ret = mutex_lock_interruptible(&perf->metrics_lock);
	if (ret)
		return ret;

	oa_config = idr_find(&perf->metrics_idr, *arg);
	if (!oa_config) {
		drm_dbg(&perf->i915->drm,
			"Failed to remove unknown OA config\n");
		ret = -ENOENT;
		goto err_unlock;
	}

	GEM_BUG_ON(*arg != oa_config->id);

	sysfs_remove_group(perf->metrics_kobj, &oa_config->sysfs_metric);

	idr_remove(&perf->metrics_idr, *arg);

	mutex_unlock(&perf->metrics_lock);

	drm_dbg(&perf->i915->drm,
		"Removed config %s id=%i\n", oa_config->uuid, oa_config->id);

	i915_oa_config_put(oa_config);

	return 0;

err_unlock:
	mutex_unlock(&perf->metrics_lock);
	return ret;
}
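/*
 * The matching removal from userspace passes the config ID by pointer
 * (sketch, using an id previously returned by the add ioctl):
 *
 *	uint64_t config_id = id;
 *	ioctl(drm_fd, DRM_IOCTL_I915_PERF_REMOVE_CONFIG, &config_id);
 */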
static struct ctl_table oa_table[] = {
	{
	 .procname = "perf_stream_paranoid",
	 .data = &i915_perf_stream_paranoid,
	 .maxlen = sizeof(i915_perf_stream_paranoid),
	 .mode = 0644,
	 .proc_handler = proc_dointvec_minmax,
	 .extra1 = SYSCTL_ZERO,
	 .extra2 = SYSCTL_ONE,
	 },
	{
	 .procname = "oa_max_sample_rate",
	 .data = &i915_oa_max_sample_rate,
	 .maxlen = sizeof(i915_oa_max_sample_rate),
	 .mode = 0644,
	 .proc_handler = proc_dointvec_minmax,
	 .extra1 = SYSCTL_ZERO,
	 .extra2 = &oa_sample_rate_hard_limit,
	 },
	{}
};

static u32 num_perf_groups_per_gt(struct intel_gt *gt)
{
	return 1;
}

static u32 __oam_engine_group(struct intel_engine_cs *engine)
{
	if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 70)) {
		/*
		 * There's 1 SAMEDIA gt and 1 OAM per SAMEDIA gt. All media slices
		 * within the gt use the same OAM. All MTL SKUs list 1 SA MEDIA.
		 */
		drm_WARN_ON(&engine->i915->drm,
			    engine->gt->type != GT_MEDIA);

		return PERF_GROUP_OAM_SAMEDIA_0;
	}

	return PERF_GROUP_INVALID;
}

static u32 __oa_engine_group(struct intel_engine_cs *engine)
{
	switch (engine->class) {
	case RENDER_CLASS:
		return PERF_GROUP_OAG;

	case VIDEO_DECODE_CLASS:
	case VIDEO_ENHANCEMENT_CLASS:
		return __oam_engine_group(engine);

	default:
		return PERF_GROUP_INVALID;
	}
}

static struct i915_perf_regs __oam_regs(u32 base)
{
	return (struct i915_perf_regs) {
		base,
		GEN12_OAM_HEAD_POINTER(base),
		GEN12_OAM_TAIL_POINTER(base),
		GEN12_OAM_BUFFER(base),
		GEN12_OAM_CONTEXT_CONTROL(base),
		GEN12_OAM_CONTROL(base),
		GEN12_OAM_DEBUG(base),
		GEN12_OAM_STATUS(base),
		GEN12_OAM_CONTROL_COUNTER_FORMAT_SHIFT,
	};
}

static struct i915_perf_regs __oag_regs(void)
{
	return (struct i915_perf_regs) {
		0,
		GEN12_OAG_OAHEADPTR,
		GEN12_OAG_OATAILPTR,
		GEN12_OAG_OABUFFER,
		GEN12_OAG_OAGLBCTXCTRL,
		GEN12_OAG_OACONTROL,
		GEN12_OAG_OA_DEBUG,
		GEN12_OAG_OASTATUS,
		GEN12_OAG_OACONTROL_OA_COUNTER_FORMAT_SHIFT,
	};
}

static void oa_init_groups(struct intel_gt *gt)
{
	int i, num_groups = gt->perf.num_perf_groups;

	for (i = 0; i < num_groups; i++) {
		struct i915_perf_group *g = &gt->perf.group[i];

		/* Fused off engines can result in a group with num_engines == 0 */
		if (g->num_engines == 0)
			continue;

		if (i == PERF_GROUP_OAG && gt->type != GT_MEDIA) {
			g->regs = __oag_regs();
			g->type = TYPE_OAG;
		} else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70)) {
			g->regs = __oam_regs(mtl_oa_base[i]);
			g->type = TYPE_OAM;
		}
	}
}

static int oa_init_gt(struct intel_gt *gt)
{
	u32 num_groups = num_perf_groups_per_gt(gt);
	struct intel_engine_cs *engine;
	struct i915_perf_group *g;
	intel_engine_mask_t tmp;

	g = kcalloc(num_groups, sizeof(*g), GFP_KERNEL);
	if (!g)
		return -ENOMEM;

	for_each_engine_masked(engine, gt, ALL_ENGINES, tmp) {
		u32 index = __oa_engine_group(engine);

		engine->oa_group = NULL;
		if (index < num_groups) {
			g[index].num_engines++;
			engine->oa_group = &g[index];
		}
	}

	gt->perf.num_perf_groups = num_groups;
	gt->perf.group = g;

	oa_init_groups(gt);

	return 0;
}

static int oa_init_engine_groups(struct i915_perf *perf)
{
	struct intel_gt *gt;
	int i, ret;

	for_each_gt(gt, perf->i915, i) {
		ret = oa_init_gt(gt);
		if (ret)
			return ret;
	}

	return 0;
}

static void oa_init_supported_formats(struct i915_perf *perf)
{
	struct drm_i915_private *i915 = perf->i915;
	enum intel_platform platform = INTEL_INFO(i915)->platform;

	switch (platform) {
	case INTEL_HASWELL:
		oa_format_add(perf, I915_OA_FORMAT_A13);
		oa_format_add(perf, I915_OA_FORMAT_A29);
		oa_format_add(perf, I915_OA_FORMAT_A13_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_B4_C8);
		oa_format_add(perf, I915_OA_FORMAT_A45_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_B4_C8_A16);
		oa_format_add(perf, I915_OA_FORMAT_C4_B8);
		break;

	case INTEL_BROADWELL:
	case INTEL_CHERRYVIEW:
	case INTEL_SKYLAKE:
	case INTEL_BROXTON:
	case INTEL_KABYLAKE:
	case INTEL_GEMINILAKE:
	case INTEL_COFFEELAKE:
	case INTEL_COMETLAKE:
	case INTEL_ICELAKE:
	case INTEL_ELKHARTLAKE:
	case INTEL_JASPERLAKE:
	case INTEL_TIGERLAKE:
	case INTEL_ROCKETLAKE:
	case INTEL_DG1:
	case INTEL_ALDERLAKE_S:
	case INTEL_ALDERLAKE_P:
		oa_format_add(perf, I915_OA_FORMAT_A12);
		oa_format_add(perf, I915_OA_FORMAT_A12_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_A32u40_A4u32_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_C4_B8);
		break;

	case INTEL_DG2:
		oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8);
		break;

	case INTEL_METEORLAKE:
		oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8);
		oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8);
		oa_format_add(perf, I915_OAM_FORMAT_MPEC8u64_B8_C8);
		oa_format_add(perf, I915_OAM_FORMAT_MPEC8u32_B8_C8);
		break;

	default:
		MISSING_CASE(platform);
	}
}

static void i915_perf_init_info(struct drm_i915_private *i915)
{
	struct i915_perf *perf = &i915->perf;

	switch (GRAPHICS_VER(i915)) {
	case 8:
		perf->ctx_oactxctrl_offset = 0x120;
		perf->ctx_flexeu0_offset = 0x2ce;
		perf->gen8_valid_ctx_bit = BIT(25);
		break;
	case 9:
		perf->ctx_oactxctrl_offset = 0x128;
		perf->ctx_flexeu0_offset = 0x3de;
		perf->gen8_valid_ctx_bit = BIT(16);
		break;
	case 11:
		perf->ctx_oactxctrl_offset = 0x124;
		perf->ctx_flexeu0_offset = 0x78e;
		perf->gen8_valid_ctx_bit = BIT(16);
		break;
	case 12:
		/*
		 * Calculate offset at runtime in oa_pin_context for gen12 and
		 * cache the value in perf->ctx_oactxctrl_offset.
		 */
		break;
	default:
		MISSING_CASE(GRAPHICS_VER(i915));
	}
}

/**
 * i915_perf_init - initialize i915-perf state on module bind
 * @i915: i915 device instance
 *
 * Initializes i915-perf state without exposing anything to userspace.
 *
 * Note: i915-perf initialization is split into an 'init' and 'register'
 * phase with the i915_perf_register() exposing state to userspace.
 */
int i915_perf_init(struct drm_i915_private *i915)
{
	struct i915_perf *perf = &i915->perf;

	perf->oa_formats = oa_formats;
	if (IS_HASWELL(i915)) {
		perf->ops.is_valid_b_counter_reg = gen7_is_valid_b_counter_addr;
		perf->ops.is_valid_mux_reg = hsw_is_valid_mux_addr;
		perf->ops.is_valid_flex_reg = NULL;
		perf->ops.enable_metric_set = hsw_enable_metric_set;
		perf->ops.disable_metric_set = hsw_disable_metric_set;
		perf->ops.oa_enable = gen7_oa_enable;
		perf->ops.oa_disable = gen7_oa_disable;
		perf->ops.read = gen7_oa_read;
		perf->ops.oa_hw_tail_read = gen7_oa_hw_tail_read;
	} else if (HAS_LOGICAL_RING_CONTEXTS(i915)) {
		/* Note that although we could theoretically also support the
		 * legacy ringbuffer mode on BDW (and earlier iterations of
		 * this driver, before upstreaming did this) it didn't seem
		 * worth the complexity to maintain now that BDW+ enable
		 * execlist mode by default.
		 */
		perf->ops.read = gen8_oa_read;
		i915_perf_init_info(i915);

		if (IS_GRAPHICS_VER(i915, 8, 9)) {
			perf->ops.is_valid_b_counter_reg =
				gen7_is_valid_b_counter_addr;
			perf->ops.is_valid_mux_reg =
				gen8_is_valid_mux_addr;
			perf->ops.is_valid_flex_reg =
				gen8_is_valid_flex_addr;

			if (IS_CHERRYVIEW(i915)) {
				perf->ops.is_valid_mux_reg =
					chv_is_valid_mux_addr;
			}

			perf->ops.oa_enable = gen8_oa_enable;
			perf->ops.oa_disable = gen8_oa_disable;
			perf->ops.enable_metric_set = gen8_enable_metric_set;
			perf->ops.disable_metric_set = gen8_disable_metric_set;
			perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read;
		} else if (GRAPHICS_VER(i915) == 11) {
			perf->ops.is_valid_b_counter_reg =
				gen7_is_valid_b_counter_addr;
			perf->ops.is_valid_mux_reg =
				gen11_is_valid_mux_addr;
			perf->ops.is_valid_flex_reg =
				gen8_is_valid_flex_addr;

			perf->ops.oa_enable = gen8_oa_enable;
			perf->ops.oa_disable = gen8_oa_disable;
			perf->ops.enable_metric_set = gen8_enable_metric_set;
			perf->ops.disable_metric_set = gen11_disable_metric_set;
			perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read;
		} else if (GRAPHICS_VER(i915) == 12) {
			perf->ops.is_valid_b_counter_reg =
				HAS_OA_SLICE_CONTRIB_LIMITS(i915) ?
				xehp_is_valid_b_counter_addr :
				gen12_is_valid_b_counter_addr;
			perf->ops.is_valid_mux_reg =
				gen12_is_valid_mux_addr;
			perf->ops.is_valid_flex_reg =
				gen8_is_valid_flex_addr;

			perf->ops.oa_enable = gen12_oa_enable;
			perf->ops.oa_disable = gen12_oa_disable;
			perf->ops.enable_metric_set = gen12_enable_metric_set;
			perf->ops.disable_metric_set = gen12_disable_metric_set;
			perf->ops.oa_hw_tail_read = gen12_oa_hw_tail_read;
		}
	}

	if (perf->ops.enable_metric_set) {
		struct intel_gt *gt;
		int i, ret;

		for_each_gt(gt, i915, i)
			mutex_init(&gt->perf.lock);

		/* Choose a representative limit */
		oa_sample_rate_hard_limit = to_gt(i915)->clock_frequency / 2;

		mutex_init(&perf->metrics_lock);
		idr_init_base(&perf->metrics_idr, 1);

		/* We set up some ratelimit state to potentially throttle any
		 * _NOTES about spurious, invalid OA reports which we don't
		 * forward to userspace.
		 *
		 * We print a _NOTE about any throttling when closing the
		 * stream instead of waiting until driver _fini which no one
		 * would ever see.
		 *
		 * Using the same limiting factors as printk_ratelimit()
		 */
		ratelimit_state_init(&perf->spurious_report_rs, 5 * HZ, 10);
		/* Since we use a DRM_NOTE for spurious reports it would be
		 * inconsistent to let __ratelimit() automatically print a
		 * warning for throttling.
		 */
		ratelimit_set_flags(&perf->spurious_report_rs,
				    RATELIMIT_MSG_ON_RELEASE);

		ratelimit_state_init(&perf->tail_pointer_race,
				     5 * HZ, 10);
		ratelimit_set_flags(&perf->tail_pointer_race,
				    RATELIMIT_MSG_ON_RELEASE);

		atomic64_set(&perf->noa_programming_delay,
			     500 * 1000 /* 500us */);

		perf->i915 = i915;

		ret = oa_init_engine_groups(perf);
		if (ret) {
			drm_err(&i915->drm,
				"OA initialization failed %d\n", ret);
			return ret;
		}

		oa_init_supported_formats(perf);
	}

	return 0;
}

static int destroy_config(int id, void *p, void *data)
{
	i915_oa_config_put(p);
	return 0;
}

int i915_perf_sysctl_register(void)
{
	sysctl_header = register_sysctl("dev/i915", oa_table);
	return 0;
}

void i915_perf_sysctl_unregister(void)
{
	unregister_sysctl_table(sysctl_header);
}

/**
 * i915_perf_fini - Counterpart to i915_perf_init()
 * @i915: i915 device instance
 */
void i915_perf_fini(struct drm_i915_private *i915)
{
	struct i915_perf *perf = &i915->perf;
	struct intel_gt *gt;
	int i;

	if (!perf->i915)
		return;

	for_each_gt(gt, perf->i915, i)
		kfree(gt->perf.group);

	idr_for_each(&perf->metrics_idr, destroy_config, perf);
	idr_destroy(&perf->metrics_idr);

	memset(&perf->ops, 0, sizeof(perf->ops));
	perf->i915 = NULL;
}

/**
 * i915_perf_ioctl_version - Version of the i915-perf subsystem
 * @i915: The i915 device
 *
 * This version number is used by userspace to detect available features.
 */
int i915_perf_ioctl_version(struct drm_i915_private *i915)
{
	/*
	 * 1: Initial version
	 *   I915_PERF_IOCTL_ENABLE
	 *   I915_PERF_IOCTL_DISABLE
	 *
	 * 2: Added runtime modification of OA config.
	 *   I915_PERF_IOCTL_CONFIG
	 *
	 * 3: Add DRM_I915_PERF_PROP_HOLD_PREEMPTION parameter to hold
	 *    preemption on a particular context so that performance data is
	 *    accessible from a delta of MI_RPC reports without looking at the
	 *    OA buffer.
	 *
	 * 4: Add DRM_I915_PERF_PROP_ALLOWED_SSEU to limit what contexts can
	 *    be run for the duration of the performance recording based on
	 *    their SSEU configuration.
	 *
	 * 5: Add DRM_I915_PERF_PROP_POLL_OA_PERIOD parameter that controls the
	 *    interval for the hrtimer used to check for OA data.
	 *
	 * 6: Add DRM_I915_PERF_PROP_OA_ENGINE_CLASS and
	 *    DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE
	 *
	 * 7: Add support for video decode and enhancement classes.
	 */

	/*
	 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media
	 * C6 disable in BIOS. If Media C6 is enabled in BIOS, return version 6
	 * to indicate that OA media is not supported.
	 */
	if (IS_MTL_MEDIA_STEP(i915, STEP_A0, STEP_C0)) {
		struct intel_gt *gt;
		int i;

		for_each_gt(gt, i915, i) {
			if (gt->type == GT_MEDIA &&
			    intel_check_bios_c6_setup(&gt->rc6))
				return 6;
		}
	}

	return 7;
}

#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/i915_perf.c"
#endif