
Searched hist:"6 c1c07b33eb093e5a2a313ece89baa596ba6135e" (Results 1 – 1 of 1) sorted by relevance

/openbmc/linux/arch/x86/events/intel/
core.c: diff 6c1c07b33eb093e5a2a313ece89baa596ba6135e Tue Jan 21 12:13:38 CST 2020 Kan Liang <kan.liang@linux.intel.com> perf/x86/intel: Avoid unnecessary PEBS_ENABLE MSR access in PMI

The perf PMI handler, intel_pmu_handle_irq(), currently does
unnecessary accesses to the PEBS_ENABLE MSR in
__intel_pmu_enable/disable_all() when PEBS is enabled.
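
For illustration, here is a condensed sketch of the pre-patch path the
message refers to. It is not the literal core.c code; the MSR and field
names (MSR_CORE_PERF_GLOBAL_CTRL, MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled)
are real, but the function bodies are simplified to show where the extra
accesses come from:

  /* Simplified sketch of the pre-patch behaviour, not the upstream code. */
  static void __intel_pmu_disable_all(void)
  {
          struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

          /* Stopping the counters only needs GLOBAL_CTRL... */
          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);

          /* ...but an extra WRMSR is paid whenever PEBS is active. */
          if (cpuc->pebs_enabled)
                  wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
  }

  static void __intel_pmu_enable_all(int added, bool pmi)
  {
          struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

          /* ...and a second one when the handler re-enables everything. */
          if (cpuc->pebs_enabled)
                  wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);

          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);
  }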

When entering the handler, the global ctrl MSR is explicitly disabled, so
none of the counters count anymore; whether PEBS is enabled or not makes
no difference inside the PMI handler.
Furthermore, in most cases cpuc->pebs_enabled is not changed in the PMI,
so the PEBS status stays the same and the PEBS_ENABLE MSR does not need
to be rewritten when exiting the handler either.
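
Conceptually, then, the common PMI path only needs the two GLOBAL_CTRL
writes. A hedged sketch of that idea (simplified, not the exact patch;
pmu_enabled is an illustrative local here):

  /* Sketch of the idea only, not the literal patch. */
  u64 pmu_enabled = cpuc->enabled;

  cpuc->enabled = 0;
  /* Disabling GLOBAL_CTRL is enough to stop all counters in the PMI. */
  wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);

  /* ... process the overflow status, drain the PEBS buffer, etc. ... */

  cpuc->enabled = pmu_enabled;
  if (pmu_enabled)
          /* Re-enabling likewise leaves MSR_IA32_PEBS_ENABLE untouched. */
          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);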

PMI throttling, however, may change the PEBS status during the PMI
handler: x86_pmu_stop() ends up in intel_pmu_pebs_disable(), which can
update cpuc->pebs_enabled, but MSR_IA32_PEBS_ENABLE is not updated at the
same time because cpuc->enabled has been forced to 0. The patch
explicitly updates MSR_IA32_PEBS_ENABLE for this case.
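
A hedged sketch of that correction (variable naming is illustrative):
snapshot cpuc->pebs_enabled on entering the handler and, if the throttle
path changed it, write the MSR explicitly before returning, since
intel_pmu_pebs_disable() skipped the WRMSR while cpuc->enabled was 0:

  /* Illustrative sketch of the throttle-case fix, not the exact diff. */
  u64 pebs_enabled = cpuc->pebs_enabled;    /* snapshot at PMI entry */

  /*
   * ... PMI processing: throttling calls x86_pmu_stop(), which reaches
   * intel_pmu_pebs_disable() and may clear bits in cpuc->pebs_enabled,
   * but the WRMSR is skipped because cpuc->enabled is 0 here ...
   */

  /* Before leaving the handler, sync the MSR if the state changed. */
  if (pebs_enabled != cpuc->pebs_enabled)
          wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);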

Use ftrace to measure the duration of intel_pmu_handle_irq() on BDX.
#perf record -e cycles:P -- ./tchain_edit

The average duration of intel_pmu_handle_irq():

Without the patch 1.144 us
With the patch 1.025 us

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20200121181338.3234-1-kan.liang@linux.intel.com