/openbmc/linux/arch/arm64/boot/dts/microchip/ |
H A D | sparx5_pcb135_board.dtsi | 376 microchip,bandwidth = <1000>; 383 microchip,bandwidth = <1000>; 390 microchip,bandwidth = <1000>; 397 microchip,bandwidth = <1000>; 404 microchip,bandwidth = <1000>; 411 microchip,bandwidth = <1000>; 418 microchip,bandwidth = <1000>; 425 microchip,bandwidth = <1000>; 432 microchip,bandwidth = <1000>; 439 microchip,bandwidth = <1000>; [all …]
|
/openbmc/linux/drivers/firewire/ |
H A D | core-iso.c | 5 * - Isochronous bus resource management (channels, bandwidth), client side 201 * Isochronous bus resource management (channels, bandwidth), client side 205 int bandwidth, bool allocate) in manage_bandwidth() argument 216 new = allocate ? old - bandwidth : old + bandwidth; in manage_bandwidth() 227 /* A generation change frees all bandwidth. */ in manage_bandwidth() 228 return allocate ? -EAGAIN : bandwidth; in manage_bandwidth() 232 return bandwidth; in manage_bandwidth() 308 * fw_iso_resource_manage() - Allocate or deallocate a channel and/or bandwidth 313 * @bandwidth: pointer for returning bandwidth allocation result 316 * In parameters: card, generation, channels_mask, bandwidth, allocate [all …]
|
/openbmc/linux/sound/firewire/ |
H A D | iso-resources.c | 55 /* convert to bandwidth units (quadlets at S1600 = bytes at S400) */ in packet_bandwidth() 69 * 88.3 + N * 24.3 in bandwidth units. in current_bandwidth_overhead() 91 * fw_iso_resources_allocate - allocate isochronous channel and bandwidth 96 * This function allocates one isochronous channel and enough bandwidth for the 109 int bandwidth, channel, err; in fw_iso_resources_allocate() local 114 r->bandwidth = packet_bandwidth(max_payload_bytes, speed); in fw_iso_resources_allocate() 128 bandwidth = r->bandwidth + r->bandwidth_overhead; in fw_iso_resources_allocate() 130 &channel, &bandwidth, true); in fw_iso_resources_allocate() 167 int bandwidth, channel; in fw_iso_resources_update() local 181 bandwidth = r->bandwidth + r->bandwidth_overhead; in fw_iso_resources_update() [all …]
|
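The `iso-resources.c` snippet above notes that IEEE 1394 bandwidth is accounted in units where one quadlet at S1600 takes the same wire time as one byte at S400. A minimal sketch of that unit conversion, assuming the standard speed codes (the function name and rounding behavior here are illustrative, not the kernel's):

```python
# Speed codes as in the IEEE 1394 / linux firewire convention:
# each step doubles the wire speed, S100 = 100 Mb/s.
SCODE_100, SCODE_200, SCODE_400, SCODE_800, SCODE_1600 = range(5)

def payload_bandwidth_units(payload_bytes: int, scode: int) -> int:
    """Convert a payload size at a given speed into bandwidth-allocation
    units (time to send one quadlet, i.e. 4 bytes, at S1600).

    Wire time is proportional to bytes / 2**scode, so scaling to
    S1600-quadlet units is a shift by (SCODE_1600 - scode) minus the
    4-bytes-per-quadlet factor.
    """
    return (payload_bytes << (SCODE_1600 - scode)) >> 2
```

At S400 the unit count equals the byte count, matching the comment in the snippet; header overhead per packet would be added on top of this in a real allocator.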
/openbmc/qemu/tests/migration/guestperf/ |
H A D | comparison.py | 42 # Looking at use of post-copy in relation to bandwidth 44 Comparison("post-copy-bandwidth", scenarios = [ 46 post_copy=True, bandwidth=12), 48 post_copy=True, bandwidth=37), 50 post_copy=True, bandwidth=125), 52 post_copy=True, bandwidth=1250), 54 post_copy=True, bandwidth=12500), 84 # Looking at use of auto-converge in relation to bandwidth 86 Comparison("auto-converge-bandwidth", scenarios = [ 88 auto_converge=True, bandwidth=12), [all …]
|
/openbmc/linux/drivers/thunderbolt/ |
H A D | tunnel.h | 32 * @maximum_bandwidth: Returns maximum possible bandwidth for this tunnel 33 * @allocated_bandwidth: Return how much bandwidth is allocated for the tunnel 34 * @alloc_bandwidth: Change tunnel bandwidth allocation 35 * @consumed_bandwidth: Return how much bandwidth the tunnel consumes 36 * @release_unused_bandwidth: Release all unused bandwidth 37 * @reclaim_available_bandwidth: Reclaim back available bandwidth 40 * @max_up: Maximum upstream bandwidth (Mb/s) available for the tunnel. 41 * Only set if the bandwidth needs to be limited. 42 * @max_down: Maximum downstream bandwidth (Mb/s) available for the tunnel. 43 * Only set if the bandwidth needs to be limited. [all …]
|
H A D | tb.c | 22 * Minimum bandwidth (in Mb/s) that is needed in the single transmitter/receiver 23 * direction. This is 40G - 10% guard band bandwidth. 28 * Threshold bandwidth (in Mb/s) that is used to switch the links to 55 * @groups: Bandwidth groups used in this domain. 100 tb_port_dbg(in, "attached to bandwidth group %d\n", group->index); in tb_bandwidth_group_attach_port() 149 tb_port_warn(in, "no available bandwidth groups\n"); in tb_attach_bandwidth_group() 180 tb_port_dbg(in, "detached from bandwidth group %d\n", group->index); in tb_detach_bandwidth_group() 653 * tb_consumed_usb3_pcie_bandwidth() - Consumed USB3/PCIe bandwidth over a single link 657 * @port: USB4 port the consumed bandwidth is calculated 658 * @consumed_up: Consumed upstream bandwidth (Mb/s) [all …]

|
H A D | tunnel.c | 65 * Reserve additional bandwidth for USB 3.x and PCIe bulk traffic 81 "enable bandwidth allocation mode if supported (default: true)"); 383 * tb_tunnel_reserved_pci() - Amount of bandwidth to reserve for PCIe 385 * @reserved_up: Upstream bandwidth in Mb/s to reserve 386 * @reserved_down: Downstream bandwidth in Mb/s to reserve 389 * bandwidth needs to be left in reserve for possible PCIe bulk traffic. 666 "DP IN maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n", in tb_dp_xchg_caps() 670 * If the tunnel bandwidth is limited (max_bw is set) then see in tb_dp_xchg_caps() 671 * if we need to reduce bandwidth to fit there. in tb_dp_xchg_caps() 677 "DP OUT maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n", in tb_dp_xchg_caps() [all …]
|
/openbmc/linux/drivers/net/ethernet/intel/ixgbe/ |
H A D | ixgbe_dcb.h | 26 /* Error in bandwidth group allocation */ 28 /* Error in traffic class bandwidth allocation */ 32 /* Link strict traffic class has non zero bandwidth */ 34 /* Link strict bandwidth group has non zero bandwidth */ 36 /* Traffic class has zero bandwidth */ 73 /* Traffic class bandwidth allocation per direction */ 75 u8 bwg_id; /* Bandwidth Group (BWG) ID */ 76 u8 bwg_percent; /* % of BWG's bandwidth */ 77 u8 link_percent; /* % of link bandwidth */ 114 u32 link_speed; /* For bandwidth allocation validation purpose */
|
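The `ixgbe_dcb.h` entry lists the DCB configuration errors: bandwidth groups must consume the whole link, and traffic classes within a group must consume the whole group. A sketch of that validation, with hypothetical names and return codes (the real driver uses `DCB_ERR_*` constants and a richer config struct):

```python
from collections import defaultdict

def check_dcb_allocation(tcs, group_link_percent):
    """Validate a DCB bandwidth allocation.

    tcs: list of (bwg_id, bwg_percent) pairs, one per traffic class;
         bwg_percent is the class's share of its bandwidth group.
    group_link_percent: {bwg_id: percent of total link bandwidth}.
    """
    per_group = defaultdict(int)
    for bwg_id, pct in tcs:
        per_group[bwg_id] += pct
    # Classes inside each group must account for exactly 100% of the group.
    if any(total != 100 for total in per_group.values()):
        return "ERR_TC_BW"
    # Groups together must account for exactly 100% of the link.
    if sum(group_link_percent.values()) != 100:
        return "ERR_BWG"
    return "OK"
```

This mirrors two of the error conditions quoted above; the link-strict checks (strict classes must carry zero bandwidth) would be additional tests of the same shape.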
/openbmc/linux/Documentation/scheduler/ |
H A D | sched-bwc.rst | 2 CFS Bandwidth Control 6 This document only discusses CPU bandwidth control for SCHED_NORMAL. 9 CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the 10 specification of the maximum CPU bandwidth available to a group or hierarchy. 12 The bandwidth allowed for a group is specified using a quota and period. Within 21 cfs_quota units at each period boundary. As threads consume this bandwidth it 30 Traditional (UP-EDF) bandwidth control is something like: 89 bandwidth restriction in place, such a group is described as an unconstrained 90 bandwidth group. This represents the traditional work-conserving behavior for 94 enact the specified bandwidth limit. The minimum quota allowed for the quota or [all …]
|
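The `sched-bwc.rst` snippet describes the quota/period model: a group may consume up to `quota` microseconds of CPU time per `period`, and a group with no quota set is unconstrained. The resulting CPU limit is simple arithmetic, sketched here (the `None`-means-unlimited convention is an assumption for illustration; the interface itself writes "max"):

```python
def cfs_cpu_limit(quota_us, period_us):
    """Maximum CPU fraction a group may consume per period.

    A result > 1.0 means the group may use more than one full CPU's
    worth of time, spread across the CPUs of the system.
    """
    if quota_us is None:  # unconstrained, traditional work-conserving mode
        return float("inf")
    return quota_us / period_us
```

For example, quota 50000us with period 100000us caps the group at half a CPU, while quota 200000us with the same period allows two CPUs' worth of runtime.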
H A D | sched-deadline.rst | 11 2.2 Bandwidth reclaiming 17 4. Bandwidth management 42 algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS) 62 "admission control" strategy (see Section "4. Bandwidth management") is used 67 interference between different tasks (bandwidth isolation), while the EDF[1] 125 2.2 Bandwidth reclaiming 128 Bandwidth reclaiming for deadline tasks is based on the GRUB (Greedy 129 Reclamation of Unused Bandwidth) algorithm [15, 16, 17] and it is enabled 164 bandwidth cannot be immediately reclaimed without breaking the 167 the 0-lag time, when the task's bandwidth can be reclaimed without [all …]
|
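The `sched-deadline.rst` entry refers to the admission-control strategy of Section "4. Bandwidth management": each task reserves `runtime/period` of a CPU, and new tasks are rejected when total utilization would exceed a bound. A minimal utilization-based sketch (the default bound here is illustrative; the kernel's actual limit is configurable via the rt bandwidth knobs):

```python
def admit(tasks, limit=0.95):
    """Utilization-based admission test for deadline tasks.

    tasks: list of (runtime, period) pairs in the same time unit.
    Returns True if the total bandwidth fits under the limit.
    """
    return sum(runtime / period for runtime, period in tasks) <= limit
```

This is the bandwidth-isolation guarantee in miniature: once admitted, each task's CBS reservation cannot be starved by the others.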
H A D | sched-rt-group.rst | 43 the amount of bandwidth (eg. CPU time) being constant. In order to schedule 90 The scheduling period that is equivalent to 100% CPU bandwidth 95 processes. With CONFIG_RT_GROUP_SCHED it signifies the total bandwidth 114 By default all bandwidth is assigned to the root group and new groups get the 116 want to assign bandwidth to another group, reduce the root group's bandwidth 120 bandwidth to the group before it will accept realtime tasks. Therefore you will 130 CPU bandwidth to task groups. 158 Consider two sibling groups A and B; both have 50% bandwidth, but A's
|
/openbmc/linux/Documentation/admin-guide/perf/ |
H A D | hisi-pcie-pmu.rst | 6 bandwidth, latency, bus utilization and buffer occupancy data of PCIe. 75 "bdf" filter can only be used in bandwidth events, target Endpoint is 76 selected by configuring BDF to "bdf". Counter only counts the bandwidth of 90 only be used in bandwidth events. 104 "thr_mode". This filter can only be used in bandwidth events. 116 When counting bandwidth, the data can be composed of certain parts of TLP 120 - 2'b01: Bandwidth of TLP payloads 121 - 2'b10: Bandwidth of TLP headers 122 - 2'b11: Bandwidth of both TLP payloads and headers 124 For example, "len_mode=2" means only counting the bandwidth of TLP headers [all …]
|
H A D | alibaba_pmu.rst | 30 - Group 1: PMU Bandwidth Counters. This group has 8 counters that are used 54 interface, we could calculate the bandwidth. Example usage of counting memory 55 data bandwidth:: 91 Example usage of counting all memory read/write bandwidth by metric:: 96 The average DRAM bandwidth can be calculated as follows: 98 - Read Bandwidth = perf_hif_rd * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle 99 - Write Bandwidth = (perf_hif_wr + perf_hif_rmw) * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle
|
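The `alibaba_pmu.rst` entry states the DRAM bandwidth formulas directly: read bandwidth is `perf_hif_rd * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle`, and write bandwidth adds the read-modify-write counter. Encoded as-is (units depend on how the counters and frequency are sampled, which the snippet leaves open):

```python
def dram_bandwidth(perf_hif_rd, perf_hif_wr, perf_hif_rmw,
                   ddrc_width, ddrc_freq, ddrc_cycle):
    """Average DRAM read/write bandwidth per the formulas quoted above."""
    scale = ddrc_width * ddrc_freq / ddrc_cycle
    read_bw = perf_hif_rd * scale
    write_bw = (perf_hif_wr + perf_hif_rmw) * scale
    return read_bw, write_bw
```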
/openbmc/linux/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ |
H A D | cake.json | 18 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate non… 27 "name": "Create CAKE with bandwidth limit", 38 "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake bandwidth 1000", 41 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth 1Kbit diffserv3 triple-isolate nonat n… 64 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited autorate-ingress diffserv3 t… 87 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate non… 110 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited besteffort triple-isolate no… 133 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv8 triple-isolate non… 156 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv4 triple-isolate non… 179 …"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 flowblind nonat no… [all …]
|
H A D | choke.json | 15 "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000", 38 …"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min 100", 61 …"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 max 900", 84 … "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 ecn", 107 …"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 burst 10… 129 "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000" 152 "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000" 154 …"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min … 176 "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000" 178 …"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min 1…
|
/openbmc/linux/drivers/usb/host/ |
H A D | xhci-mtk.h | 28 * bandwidth to it. 34 * @fs_bus_bw: array to keep track of bandwidth already used for FS 43 * struct mu3h_sch_bw_info: schedule information for bandwidth domain 45 * @bus_bw: array to keep track of bandwidth already used at each uframes 47 * treat a HS root port as a bandwidth domain, but treat a SS root port as 48 * two bandwidth domains, one for IN eps and another for OUT eps. 61 * @bw_cost_per_microframe: bandwidth cost per microframe 63 * @endpoint: linked into bandwidth domain which it belongs to 65 * @bw_info: bandwidth domain which this endpoint belongs 70 * @allocated: the bandwidth is already allocated from bus_bw
|
/openbmc/linux/tools/perf/pmu-events/arch/powerpc/power9/ |
H A D | nest_metrics.json | 31 "MetricGroup" : "memory-bandwidth", 37 "MetricGroup" : "memory-bandwidth", 43 "MetricGroup" : "memory-bandwidth", 49 "MetricGroup" : "memory-bandwidth", 59 "MetricName" : "Memory-bandwidth-MCS", 60 "MetricGroup" : "memory-bandwidth",
|
/openbmc/linux/drivers/gpu/drm/bridge/ |
H A D | cros-ec-anx7688.c | 61 /* Read both regs 0x85 (bandwidth) and 0x86 (lane count). */ in cros_ec_anx7688_bridge_mode_fixup() 64 DRM_ERROR("Failed to read bandwidth/lane count\n"); in cros_ec_anx7688_bridge_mode_fixup() 70 /* Maximum 0x19 bandwidth (6.75 Gbps Turbo mode), 2 lanes */ in cros_ec_anx7688_bridge_mode_fixup() 72 DRM_ERROR("Invalid bandwidth/lane count (%02x/%d)\n", dpbw, in cros_ec_anx7688_bridge_mode_fixup() 77 /* Compute available bandwidth (kHz) */ in cros_ec_anx7688_bridge_mode_fixup() 80 /* Required bandwidth (8 bpc, kHz) */ in cros_ec_anx7688_bridge_mode_fixup() 83 DRM_DEBUG_KMS("DP bandwidth: %d kHz (%02x/%d); mode requires %d Khz\n", in cros_ec_anx7688_bridge_mode_fixup() 87 DRM_ERROR("Bandwidth/lane count are 0, not rejecting modes\n"); in cros_ec_anx7688_bridge_mode_fixup() 148 /* FW version >= 0.85 supports bandwidth/lane count registers */ in cros_ec_anx7688_bridge_probe()
|
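The `cros-ec-anx7688.c` snippet compares available DP link bandwidth against what a mode requires at 8 bpc. A sketch of that check, assuming the usual DP conventions (link-bandwidth register in 0.27 Gb/s units, 8b/10b coding overhead, 3 color components per pixel); the exact constants in the driver may differ:

```python
def mode_fits(dpbw, lane_count, mode_clock_khz, bpc=8):
    """Return True if a display mode fits the DP link.

    dpbw: link-bandwidth code in 0.27 Gb/s units
          (e.g. 0x0a = 2.7 Gb/s, 0x19 = 6.75 Gb/s "Turbo" mode).
    mode_clock_khz: pixel clock of the mode in kHz.
    """
    # Raw link rate across all lanes, minus 8b/10b coding overhead.
    total_khz = dpbw * lane_count * 270000 * 8 // 10
    # Each pixel carries 3 components of `bpc` bits.
    required_khz = mode_clock_khz * bpc * 3
    return required_khz <= total_khz
```

A 1080p60 mode (148500 kHz pixel clock) comfortably fits the maximum 0x19/2-lane configuration mentioned in the snippet.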
/openbmc/linux/Documentation/arch/x86/ |
H A D | resctrl.rst | 25 MBM (Memory Bandwidth Monitoring) "cqm_mbm_total", "cqm_mbm_local" 26 MBA (Memory Bandwidth Allocation) "mba" 27 SMBA (Slow Memory Bandwidth Allocation) "" 28 BMEC (Bandwidth Monitoring Event Configuration) "" 48 bandwidth in MBps 128 Memory bandwidth(MB) subdirectory contains the following files 132 The minimum memory bandwidth percentage which 136 The granularity in which the memory bandwidth 140 available bandwidth control steps are: 151 request different memory bandwidth percentages: [all …]
|
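The `resctrl.rst` entry describes the MBA interface's minimum bandwidth percentage and granularity: the hardware only enforces steps that are multiples of `bw_gran`, no lower than `min_bw`. A sketch of how a requested percentage maps to an enforceable one (round-to-nearest is an assumption here; consult the resctrl documentation for the exact rounding rule):

```python
def effective_mba_percent(request, min_bw, bw_gran):
    """Map a requested memory-bandwidth percentage onto a hardware step."""
    if request < min_bw:
        return min_bw
    # Hardware steps are multiples of bw_gran, capped at 100%.
    step = round(request / bw_gran) * bw_gran
    return max(min_bw, min(100, step))
```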
/openbmc/linux/Documentation/devicetree/bindings/net/ |
H A D | microchip,sparx5-switch.yaml | 100 microchip,bandwidth: 101 description: Specifies bandwidth in Mbit/s allocated to the port. 118 - microchip,bandwidth 156 microchip,bandwidth = <1000>; 165 microchip,bandwidth = <25000>; 174 microchip,bandwidth = <25000>; 183 microchip,bandwidth = <25000>; 192 microchip,bandwidth = <25000>; 202 microchip,bandwidth = <1000>;
|
/openbmc/linux/include/linux/ |
H A D | resctrl.h | 61 * @mbm_total: saved state for MBM total bandwidth 62 * @mbm_local: saved state for MBM local bandwidth 110 * enum membw_throttle_mode - System's memory bandwidth throttling mode 112 * @THREAD_THROTTLE_MAX: Memory bandwidth is throttled at the core 113 * always using smallest bandwidth percentage 115 * @THREAD_THROTTLE_PER_THREAD: Memory bandwidth is throttled at the thread 124 * struct resctrl_membw - Memory bandwidth allocation related data 125 * @min_bw: Minimum memory bandwidth percentage user can request 126 * @bw_gran: Granularity at which the memory bandwidth is allocated 129 * @throttle_mode: Bandwidth throttling mode when threads request [all …]
|
/openbmc/linux/tools/perf/pmu-events/arch/x86/jaketown/ |
H A D | uncore-interconnect.json | 363 …nes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latenc… 475 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 484 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 493 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 502 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 511 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 520 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 529 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 538 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… 547 …bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwid… [all …]
|
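The `uncore-interconnect.json` descriptions repeat the same computation: QPI link bandwidth is `flits * 80b / time`, and because every flit is 80 bits including protocol overhead, this is higher than the data bandwidth. As a one-liner (the Gb/s scaling is illustrative):

```python
def qpi_link_bandwidth_gbps(flits, seconds):
    """Raw QPI link bandwidth from a flit count over an interval.

    Each flit occupies 80 bits on the wire regardless of content, so
    this includes header/protocol flits and is NOT data bandwidth.
    """
    return flits * 80 / seconds / 1e9
```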
/openbmc/linux/arch/x86/include/asm/ |
H A D | vmware.h | 12 * number to distinguish between high- and low bandwidth versions. 15 * bandwidth mode and transfer direction. The flags should be loaded 31 /* The low bandwidth call. The low word of edx is presumed clear. */ 39 * The high bandwidth out call. The low word of edx is presumed to have the 49 * The high bandwidth in call. The low word of edx is presumed to have the
|
/openbmc/linux/arch/x86/kernel/cpu/resctrl/ |
H A D | monitor.c | 427 * Supporting function to calculate the memory bandwidth 428 * and delta bandwidth in MBps. The chunks value previously read by 488 * adjust the bandwidth percentage values via the IA32_MBA_THRTL_MSRs so 491 * current bandwidth(cur_bw) < user specified bandwidth(user_bw) 493 * This uses the MBM counters to measure the bandwidth and MBA throttle 494 * MSRs to control the bandwidth for a particular rdtgrp. It builds on the 498 * timer. Having 1s interval makes the calculation of bandwidth simpler. 500 * Although MBA's goal is to restrict the bandwidth to a maximum, there may 501 * be a need to increase the bandwidth to avoid unnecessarily restricting 504 * Since MBA controls the L2 external bandwidth whereas MBM measures the [all …]
|
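The `monitor.c` comments describe the MBA software controller: MBM counters measure actual bandwidth once per second, and the MBA throttle percentage is stepped down while the measurement exceeds the user target, or stepped back up when there is headroom. One controller step, heavily simplified (the kernel's version also tracks delta bandwidth to avoid oscillation; the step size and bounds here are illustrative):

```python
def next_throttle(cur_bw, user_bw, cur_pct, bw_gran=10, min_pct=10):
    """One step of a feedback controller for the MBA throttle percentage.

    cur_bw: bandwidth measured via MBM over the last interval (MBps).
    user_bw: bandwidth target requested by the user (MBps).
    cur_pct: current MBA throttle percentage.
    """
    if cur_bw > user_bw and cur_pct - bw_gran >= min_pct:
        return cur_pct - bw_gran  # over target: throttle harder
    if cur_bw < user_bw and cur_pct + bw_gran <= 100:
        return cur_pct + bw_gran  # headroom: relax the throttle
    return cur_pct
```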
/openbmc/linux/drivers/media/tuners/ |
H A D | si2157.c | 450 u32 bandwidth; in si2157_set_params() local 463 bandwidth = 1700000; in si2157_set_params() 466 bandwidth = 6000000; in si2157_set_params() 469 bandwidth = 6100000; in si2157_set_params() 472 bandwidth = 7000000; in si2157_set_params() 475 bandwidth = 8000000; in si2157_set_params() 555 dev->bandwidth = bandwidth; in si2157_set_params() 562 dev->bandwidth = 0; in si2157_set_params() 577 u32 bandwidth = 0; in si2157_set_analog_params() local 602 * bandwidth = 1700000; //best can do for FM, AGC will be a mess though in si2157_set_analog_params() [all …]
|
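The `si2157.c` snippet shows the tuner selecting one of a fixed set of channel-bandwidth filters (1.7, 6, 6.1, 7, 8 MHz) from the requested DVB bandwidth. A sketch of that mapping, rounding up to the nearest supported filter (the exact rounding and the 6.1 MHz case's trigger condition are assumptions based on the values quoted above):

```python
# Filter bandwidths visible in the snippet, in Hz, ascending.
SUPPORTED_HZ = (1700000, 6000000, 6100000, 7000000, 8000000)

def si2157_filter_hz(bandwidth_hz):
    """Pick the smallest supported filter that covers the request;
    anything wider than 8 MHz falls back to the widest filter."""
    for bw in SUPPORTED_HZ:
        if bandwidth_hz <= bw:
            return bw
    return SUPPORTED_HZ[-1]
```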