/openbmc/linux/drivers/crypto/intel/qat/ |
H A D | Kconfig |
  24 for accelerating crypto and compression workloads.
  35 for accelerating crypto and compression workloads.
  46 for accelerating crypto and compression workloads.
  57 for accelerating crypto and compression workloads.
  70 Virtual Function for accelerating crypto and compression workloads.
  82 Virtual Function for accelerating crypto and compression workloads.
  94 Virtual Function for accelerating crypto and compression workloads.
|
/openbmc/linux/Documentation/mm/damon/ |
H A D | index.rst |
  16 of the size of target workloads).
  20 users who have special information about their workloads can write personalized
  21 applications for better understanding and optimizations of their workloads and
|
/openbmc/linux/tools/perf/tests/shell/lib/ |
H A D | perf_metric_validation.py |
  19 self.workloads = [x for x in workload.split(",") if x]
  316 for i in range(0, len(self.workloads)):
  320 alldata.append({"Workload": self.workloads[i], "Report": data})
  327 …allres = [{"Workload": self.workloads[i], "Results": self.allresults[i]} for i in range(0, len(sel…
  400 workload = self.workloads[self.wlidx]
  517 for i in range(0, len(self.workloads)):
  520 self.collect_perf(self.workloads[i])
  536 print("Workload: ", self.workloads[i])
|
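The perf_metric_validation.py hits above all revolve around one pattern: split a comma-separated workload string into a list, then iterate over it to collect per-workload results. A minimal standalone sketch of that pattern, assuming a hypothetical `Validator` class whose `collect_perf()` body is an illustrative stand-in for the real tool's perf invocation:

```python
# Sketch of the workload-list pattern seen in perf_metric_validation.py.
# The Validator class and collect_perf() body are illustrative stand-ins,
# not the real tool's implementation.

class Validator:
    def __init__(self, workload: str):
        # Split "a,b," into ["a", "b"], dropping empty entries (as in hit 19).
        self.workloads = [x for x in workload.split(",") if x]

    def collect_perf(self, wl: str) -> dict:
        # Stand-in for running perf against one workload and parsing its report.
        return {"ran": True}

    def run(self) -> list:
        # Iterate the workloads and tag each result set, as in hits 316-327.
        allres = []
        for i in range(0, len(self.workloads)):
            data = self.collect_perf(self.workloads[i])
            allres.append({"Workload": self.workloads[i], "Results": data})
        return allres


v = Validator("wl-a,wl-b,")
print([r["Workload"] for r in v.run()])  # → ['wl-a', 'wl-b']
```

The trailing-comma case shows why the `if x` filter in the list comprehension matters: without it, an empty workload name would be run.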
/openbmc/linux/drivers/accel/qaic/ |
H A D | Kconfig |
  15 designed to accelerate Deep Learning inference workloads.
  18 for users to submit workloads to the devices.
|
/openbmc/linux/drivers/accel/habanalabs/ |
H A D | Kconfig |
  18 designed to accelerate Deep Learning inference and training workloads.
  21 the user to submit workloads to the devices.
|
/openbmc/linux/Documentation/driver-api/ |
H A D | dma-buf.rst |
  276 randomly hangs workloads until the timeout kicks in. Workloads, which from
  289 workloads. This also means no implicit fencing for shared buffers in these
  311 faults on GPUs are limited to pure compute workloads.
  327 - Compute workloads can always be preempted, even when a page fault is pending
  330 - DMA fence workloads and workloads which need page fault handling have
  333 reservations for DMA fence workloads.
  336 hardware resources for DMA fence workloads when they are in-flight. This must
  341 all workloads must be flushed from the GPU when switching between jobs
  345 made visible anywhere in the system, all compute workloads must be preempted
  356 Note that workloads that run on independent hardware like copy engines or other
  [all …]
|
/openbmc/linux/Documentation/accel/qaic/ |
H A D | aic100.rst |
  13 inference workloads. They are AI accelerators.
  16 (x8). An individual SoC on a card can have up to 16 NSPs for running workloads.
  20 performance. AIC100 cards are multi-user capable and able to execute workloads
  81 the processors that run the workloads on AIC100. Each NSP is a Qualcomm Hexagon
  84 one workload, AIC100 is limited to 16 concurrent workloads. Workload
  92 in and out of workloads. AIC100 has one of these. The DMA Bridge has 16
  102 This DDR is used to store workloads, data for the workloads, and is used by the
  113 for generic compute workloads.
  159 ready to process workloads.
  238 other workloads.
  [all …]
|
/openbmc/linux/Documentation/timers/ |
H A D | no_hz.rst |
  26 workloads, you will normally -not- want this option.
  39 right approach, for example, in heavy workloads with lots of tasks
  42 hundreds of microseconds). For these types of workloads, scheduling
  56 are running light workloads, you should therefore read the following
  118 computationally intensive short-iteration workloads: If any CPU is
  231 aggressive real-time workloads, which have the option of disabling
  233 some workloads will no doubt want to use adaptive ticks to
  235 options for these workloads:
  255 workloads, which have few such transitions. Careful benchmarking
  256 will be required to determine whether or not other workloads
|
/openbmc/linux/drivers/cpuidle/ |
H A D | Kconfig |
  33 Some workloads benefit from using it and it generally should be safe
  45 Some virtualized workloads benefit from using it.
|
/openbmc/linux/drivers/crypto/cavium/nitrox/ |
H A D | Kconfig | 18 for accelerating crypto workloads.
|
/openbmc/linux/drivers/infiniband/hw/mana/ |
H A D | Kconfig | 8 for workloads (e.g. DPDK, MPI etc) that uses RDMA verbs to directly
|
/openbmc/linux/Documentation/admin-guide/ |
H A D | workload-tracing.rst |
  34 to evaluate safety considerations. We use strace tool to trace workloads.
  67 We used strace to trace the perf, stress-ng, paxtest workloads to illustrate
  69 be applied to trace other workloads.
  101 paxtest workloads to show how to analyze a workload and identify Linux
  102 subsystems used by these workloads. Let's start with an overview of these
  103 three workloads to get a better understanding of what they do and how to
  173 by three workloads we have chose for this analysis.
  312 Tracing workloads
  315 Now that we understand the workloads, let's start tracing them.
  595 information on the resources in use by workloads using strace.
|
/openbmc/linux/tools/perf/tests/ |
H A D | builtin-test.c |
  135 static struct test_workload *workloads[] = { variable
  506 for (i = 0; i < ARRAY_SIZE(workloads); i++) { in run_workload()
  507 twl = workloads[i]; in run_workload()
|
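The builtin-test.c hits show perf's table-driven dispatch: a static `workloads[]` array that `run_workload()` scans linearly. As a hedged Python analog of that registry pattern (the entry names and bodies here are illustrative, not perf's actual workload set), the same shape looks like:

```python
# Table-driven workload dispatch, analogous to the static workloads[] array
# scanned in builtin-test.c's run_workload(). Entries are illustrative.

def noploop() -> int:
    # Stand-in workload body; the real ones exercise specific CPU behavior.
    return 0

def leafloop() -> int:
    return 0

# Each entry pairs a name with a callable, like a struct with a name field
# and a function pointer.
workloads = [
    ("noploop", noploop),
    ("leafloop", leafloop),
]

def run_workload(name: str) -> int:
    # Linear scan, mirroring the for-loop over ARRAY_SIZE(workloads).
    for wl_name, func in workloads:
        if wl_name == name:
            return func()
    return -1  # unknown workload name

print(run_workload("noploop"))   # → 0
print(run_workload("missing"))   # → -1
```

A linear scan is fine here because the table is tiny and built at compile time; a dict would be the idiomatic Python choice for larger registries.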
H A D | Build | 79 perf-y += workloads/
|
/openbmc/openbmc/meta-openembedded/meta-oe/recipes-extended/haveged/ |
H A D | haveged_1.9.18.bb | 2 …entropy conditions in the Linux random device that can occur under some workloads, especially on h…
|
/openbmc/linux/Documentation/tools/rtla/ |
H A D | common_timerlat_options.rst | 32 Set timerlat to run without a workload, and then dispatches user-space workloads
|
/openbmc/linux/drivers/gpu/drm/i915/gvt/ |
H A D | scheduler.c |
  1330 kmem_cache_destroy(s->workloads); in intel_vgpu_clean_submission()
  1422 s->workloads = kmem_cache_create_usercopy("gvt-g_vgpu_workload", in intel_vgpu_setup_submission()
  1429 if (!s->workloads) { in intel_vgpu_setup_submission()
  1538 kmem_cache_free(s->workloads, workload); in intel_vgpu_destroy_workload()
  1547 workload = kmem_cache_zalloc(s->workloads, GFP_KERNEL); in alloc_workload()
  1721 kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
  1735 kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
  1746 kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
|
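The scheduler.c hits trace a complete slab-cache lifecycle: `kmem_cache_create_usercopy()` at submission setup, `kmem_cache_zalloc()`/`kmem_cache_free()` per workload object, and `kmem_cache_destroy()` at teardown. As a rough userspace analog, assuming a hypothetical object pool (not the kernel API), the same create/alloc/free/destroy shape is:

```python
# Userspace analog of the slab-cache lifecycle seen in the gvt scheduler hits:
# one cache per vGPU, zeroed allocations, explicit free and destroy.
# ObjectCache is a hypothetical illustration, not a kernel interface.

class ObjectCache:
    def __init__(self, name: str, obj_size: int):
        self.name = name
        self.obj_size = obj_size
        self._free = []   # recycled buffers, like a slab's free objects
        self.live = 0     # outstanding allocations

    def zalloc(self) -> bytearray:
        # Like kmem_cache_zalloc(): return a zeroed object, reusing a
        # recycled buffer when one is available.
        buf = self._free.pop() if self._free else bytearray(self.obj_size)
        for i in range(len(buf)):
            buf[i] = 0
        self.live += 1
        return buf

    def free(self, obj: bytearray) -> None:
        # Like kmem_cache_free(): return the object to the cache.
        self._free.append(obj)
        self.live -= 1

    def destroy(self) -> None:
        # Like kmem_cache_destroy(): only valid once nothing is in flight.
        assert self.live == 0, "objects still allocated"
        self._free.clear()


cache = ObjectCache("gvt-g_vgpu_workload", 256)
w = cache.zalloc()   # per-workload allocation
cache.free(w)        # error paths and teardown free back to the cache
cache.destroy()      # submission cleanup destroys the whole cache
```

The repeated `kmem_cache_free()` hits at 1721/1735/1746 reflect this discipline: every error path in workload creation must return the object to the cache before bailing out.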
/openbmc/linux/Documentation/accounting/ |
H A D | psi.rst |
  10 When CPU, memory or IO devices are contended, workloads experience
  19 such resource crunches and the time impact it has on complex workloads
  23 scarcity aids users in sizing workloads to hardware--or provisioning
|
/openbmc/linux/security/ |
H A D | Kconfig.hardening |
  177 sees a 1% slowdown, other systems and workloads may vary and you
  217 your workloads.
  238 workloads have measured as high as 7%.
  256 synthetic workloads have measured as high as 8%.
  276 workloads. Image size growth depends on architecture, and should
|
/openbmc/linux/Documentation/admin-guide/pm/ |
H A D | intel_uncore_frequency_scaling.rst |
  23 Users may have some latency sensitive workloads where they do not want any
  24 change to uncore frequency. Also, users may have workloads which require
|
/openbmc/qemu/docs/ |
H A D | xbzrle.txt |
  7 workloads that are typical of large enterprise applications such as SAP ERP
  55 XBZRLE has a sustained bandwidth of 2-2.5 GB/s for typical workloads making it
|
/openbmc/linux/Documentation/filesystems/ext4/ |
H A D | orphan.rst | 18 global single linked list is a scalability bottleneck for workloads that result
|
/openbmc/linux/Documentation/scheduler/ |
H A D | sched-design-CFS.rst |
  100 "server" (i.e., good batching) workloads. It defaults to a setting suitable
  101 for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
  109 than the previous vanilla scheduler: both types of workloads are isolated much
|
/openbmc/linux/drivers/cpufreq/ |
H A D | Kconfig.x86 |
  190 the CPUs' workloads are. CPU-bound workloads will be more sensitive
  192 workloads will be less sensitive -- they will not necessarily perform
|
/openbmc/linux/Documentation/driver-api/md/ |
H A D | raid5-cache.rst |
  58 completely avoid the overhead, so it's very helpful for some workloads. A
  74 mode depending on the workloads. It's recommended to use a cache disk with at
|