.. _psi:

================================
PSI - Pressure Stall Information
================================

:Date: April, 2018
:Author: Johannes Weiner <hannes@cmpxchg.org>

When CPU, memory or IO devices are contended, workloads experience
latency spikes, throughput losses, and run the risk of OOM kills.

Without an accurate measure of such contention, users are forced to
either play it safe and under-utilize their hardware resources, or
roll the dice and frequently suffer the disruptions resulting from
excessive overcommit.

The psi feature identifies and quantifies the disruptions caused by
such resource crunches and the time impact they have on complex
workloads or even entire systems.

Having an accurate measure of productivity losses caused by resource
scarcity aids users in sizing workloads to hardware--or provisioning
hardware according to workload demand.

As psi aggregates this information in realtime, systems can be managed
dynamically using techniques such as load shedding, migrating jobs to
other systems or data centers, or strategically pausing or killing low
priority or restartable batch jobs.

This allows maximizing hardware utilization without sacrificing
workload health or risking major disruptions such as OOM kills.

Pressure interface
==================

Pressure information for each resource is exported through the
respective file in /proc/pressure/ -- cpu, memory, and io.

The format is as such::

	some avg10=0.00 avg60=0.00 avg300=0.00 total=0
	full avg10=0.00 avg60=0.00 avg300=0.00 total=0

The "some" line indicates the share of time in which at least some
tasks are stalled on a given resource.

The "full" line indicates the share of time in which all non-idle
tasks are stalled on a given resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends
extended time in this state is considered to be thrashing. This has
severe impact on performance, and it's useful to distinguish this
situation from a state where some tasks are stalled but the CPU is
still doing productive work. As such, time spent in this subset of the
stall state is tracked separately and exported in the "full" averages.

CPU full is undefined at the system level, but has been reported
since 5.13, so it is set to zero for backward compatibility.

The ratios (in %) are tracked as recent trends over ten, sixty, and
three hundred second windows, which gives insight into short term events
as well as medium and long term trends. The total absolute stall time
(in us) is tracked and exported as well, to allow detection of latency
spikes which wouldn't necessarily make a dent in the time averages,
or to average trends over custom time frames.

Monitoring for pressure thresholds
==================================

Users can register triggers and use poll() to be woken up when resource
pressure exceeds certain thresholds.

A trigger describes the maximum cumulative stall time over a specific
time window, e.g. 100ms of total stall time within any 500ms window to
generate a wakeup event.

To register a trigger, the user has to open the psi interface file
under /proc/pressure/ representing the resource to be monitored and
write the desired threshold and time window. The open file descriptor
should be used to wait for trigger events using select(), poll() or
epoll(). The following format is used::

	<some|full> <stall amount in us> <time window in us>

For example, writing "some 150000 1000000" into /proc/pressure/memory
would add a 150ms threshold for partial memory stall measured within a
1sec time window. Writing "full 50000 1000000" into /proc/pressure/io
would add a 50ms threshold for full io stall measured within a 1sec
time window.

Triggers can be set on more than one psi metric, and more than one
trigger can be specified for the same psi metric. However, each trigger
needs a separate file descriptor to be polled independently of the
others, so a separate open() syscall should be made for each trigger,
even when opening the same psi interface file. Write operations to a
file descriptor with an already existing psi trigger will fail with
EBUSY.

Monitors activate only when the system enters a stall state for the
monitored psi metric and deactivate upon exit from the stall state.
While the system is in the stall state, psi signal growth is monitored
at a rate of 10 times per tracking window.

The kernel accepts window sizes ranging from 500ms to 10s, so the
minimum monitoring update interval is 50ms and the maximum is 1s. The
lower limit prevents overly frequent polling; the upper limit is chosen
as a point beyond which monitors are most likely not needed and psi
averages can be used instead.

Unprivileged users can also create monitors, with the only limitation
that the window size must be a multiple of 2s, in order to prevent
excessive resource usage.

When activated, a psi monitor stays active for at least the duration of
one tracking window to avoid repeated activations/deactivations when the
system is bouncing in and out of the stall state.

Notifications to the userspace are rate-limited to one per tracking
window.

The trigger will de-register when the file descriptor used to define
the trigger is closed.

Userspace monitor usage example
===============================

::

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <poll.h>
  #include <string.h>
  #include <unistd.h>

  /*
   * Monitor memory partial stall with 1s tracking window size
   * and 150ms threshold.
   */
  int main() {
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;
	int n;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		printf("/proc/pressure/memory open error: %s\n",
			strerror(errno));
		return 1;
	}
	fds.events = POLLPRI;

	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		printf("/proc/pressure/memory write error: %s\n",
			strerror(errno));
		return 1;
	}

	printf("waiting for events...\n");
	while (1) {
		n = poll(&fds, 1, -1);
		if (n < 0) {
			printf("poll error: %s\n", strerror(errno));
			return 1;
		}
		if (fds.revents & POLLERR) {
			printf("got POLLERR, event source is gone\n");
			return 0;
		}
		if (fds.revents & POLLPRI) {
			printf("event triggered!\n");
		} else {
			printf("unknown event received: 0x%x\n", fds.revents);
			return 1;
		}
	}

	return 0;
  }

Cgroup2 interface
=================

In a system with a CONFIG_CGROUPS=y kernel and the cgroup2 filesystem
mounted, pressure stall information is also tracked for tasks grouped
into cgroups. Each subdirectory in the cgroupfs mountpoint contains
cpu.pressure, memory.pressure, and io.pressure files; the format is
the same as the /proc/pressure/ files.

Per-cgroup psi monitors can be specified and used the same way as
system-wide ones.
