Lines Matching refs:cache

43 	Enable code/data prioritization in L3 cache allocations.
45 Enable code/data prioritization in L2 cache allocations.
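These two lines describe the "cdp" and "cdpl2" resctrl mount options. A minimal sketch of enabling them, assuming the platform supports CDP at both cache levels and the conventional mount point is used::

  # mount -t resctrl -o cdp,cdpl2 resctrl /sys/fs/resctrl
  # cat /sys/fs/resctrl/schemata

With CDP active the affected cache resource is presented as separate CODE and DATA schemata entries (e.g. L3CODE and L3DATA).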
54 pseudo-locking is a unique way of using cache control to "pin" or
55 "lock" data in the cache. Details can be found in
91 setting up exclusive cache partitions. Note that
93 own settings for cache use which can override
339 cache pseudo-locked region is created by first writing
340 "pseudo-locksetup" to the "mode" file before writing the cache
386 Notes on cache occupancy monitoring and control
389 this only affects *new* cache allocations by the task. E.g. you may have
390 a task in a monitor group showing 3 MB of cache occupancy. If you move
393 the new group zero. When the task accesses locations still in cache from
395 you will likely see the occupancy in the old group go down as cache lines
397 the task accesses memory and loads into the cache are counted based on
400 The same applies to cache allocation control. Moving a task to a group
401 with a smaller cache partition will not evict any cache lines. The
415 the RMID is still tagged with the cache lines of the previous user of the RMID.
416 Hence such RMIDs are placed on a limbo list and checked periodically to see whether the cache
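The occupancy threshold below which a limbo RMID is considered free again is exposed through the monitoring info directory. A sketch, assuming L3 monitoring is present; the byte values shown are illustrative::

  # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
  68608
  # echo 131072 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy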
432 On current generation systems there is one L3 cache per socket and L2
435 caches on a socket, multiple cores could share an L2 cache. So instead
437 a resource we use a "Cache ID". At a given cache level this will be a
440 CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
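A quick way to read a cache ID from that path, as a sketch (index3 is typically the L3 cache on x86; the neighbouring "level" file confirms which level an index directory refers to, and the values shown are illustrative)::

  # cat /sys/devices/system/cpu/cpu0/cache/index3/level
  3
  # cat /sys/devices/system/cpu/cpu0/cache/index3/id
  0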
444 For cache resources we describe the portion of the cache that is available
446 by each cpu model (and may be different for different cache levels). It
452 of the capacity of the cache. You could partition the cache into four
545 Memory b/w domain is L3 cache.
553 Memory bandwidth domain is L3 cache.
571 The bandwidth domain for slow memory is L3 cache. Its schemata file
596 When writing to the file, you need to specify which cache id you wish to
599 For example, to allocate a 2GB/s limit on the first cache id:
617 For example, to allocate an 8GB/s limit on the first cache id:
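A hedged sketch of such writes, assuming the AMD encoding in which the MB value is given in multiples of one eighth GB/s (16 for 2 GB/s) and that the slow memory resource appears as "SMBA" in the schemata with the same granularity (64 for 8 GB/s); the group name "p0" and cache id 0 are illustrative::

  # echo "MB:0=16" > /sys/fs/resctrl/p0/schemata
  # echo "SMBA:0=64" > /sys/fs/resctrl/p0/schemata
  # cat /sys/fs/resctrl/p0/schemata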
634 CAT enables a user to specify the amount of cache space that an
637 allocated area on a cache hit. With cache pseudo-locking, data can be
638 preloaded into a reserved portion of cache that no application can
639 fill, and from that point on will only serve cache hits. The cache
644 The creation of a cache pseudo-locked region is triggered by a request
646 to be pseudo-locked. The cache pseudo-locked region is created as follows:
649 from the user of the cache region that will contain the pseudo-locked
651 on the system and no future overlap with this cache region is allowed
653 - Create a contiguous region of memory of the same size as the cache
655 - Flush the cache, disable hardware prefetchers, disable preemption.
657 it into the cache.
659 - At this point the closid CLOSNEW can be released - the cache
661 any CAT allocation. Even though the cache pseudo-locked region will from
664 the region continues to serve cache hits.
665 - The contiguous region of memory loaded into the cache is exposed to
669 in the cache via carefully configuring the CAT feature and controlling
671 cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
672 "locked" data from cache. Power management C-states may shrink or
673 power off cache. Deeper C-states will automatically be restricted on
678 with the cache on which the pseudo-locked region resides. A sanity check
680 unless it runs with affinity to cores associated with the cache on which the
688 of cache that should be dedicated to pseudo-locking. At this time an
690 cache portion, and exposed as a character device.
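A minimal sketch of the interface side of this sequence; the group name "newlock", the L2 cache id 1 and CBM 0x3 are illustrative, and the paths assume the default resctrl mount point::

  # cd /sys/fs/resctrl
  # mkdir newlock
  # echo pseudo-locksetup > newlock/mode
  # echo "L2:1=0x3" > newlock/schemata
  # cat newlock/mode
  pseudo-locked
  # ls /dev/pseudo_lock/
  newlock

Writing the schemata is the step that triggers the kernel to load the matching memory region into the cache and expose it as the character device.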
710 An example of cache pseudo-locked region creation and usage can be found below.
718 location is present in the cache. The pseudo-locking debugging interface uses
719 the tracing infrastructure to provide two ways to measure cache residency of
726 are disabled. This also provides a substitute visualization of cache
729 available. Depending on the levels of cache on the system the pseudo_lock_l2
743 writing "2" to the pseudo_lock_measure file will trigger the L2 cache
744 residency (cache hits and misses) measurement captured in the
747 writing "3" to the pseudo_lock_measure file will trigger the L3 cache
748 residency (cache hits and misses) measurement captured in the
788 Example of cache hits/misses debugging
791 cache of a platform. Here is how we can obtain details of the cache hits
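A sketch of triggering the L2 residency measurement through the pseudo_lock_measure file, assuming debugfs and tracefs are mounted at their usual locations and the pseudo-locked region is named "newlock"::

  # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
  # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
  # cat /sys/kernel/tracing/trace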
818 On a two socket machine (one L3 cache per socket) with just four bits
819 for cache bit masks, minimum b/w of 10% with a memory bandwidth
833 "lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
834 Tasks in group "p1" use the "lower" 50% of cache on both sockets.
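A sketch of schemata writes consistent with that description, assuming 4-bit capacity masks (0x3 selects the lower half, 0xc the upper half) and the group names "p0" and "p1"::

  # mkdir /sys/fs/resctrl/p0 /sys/fs/resctrl/p1
  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata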
839 Note that unlike cache masks, memory b/w cannot specify whether these
861 of L3 cache on socket 0.
868 50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
874 it access to the "top" 25% of the cache on socket 0.
889 Ditto for the second real time task (with the remaining 25% of cache)::
924 50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
930 to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
938 kernel and the tasks running there get 50% of the cache. They should
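As a sketch of how such a split could be written, assuming 4-bit cache masks and percentage-based memory bandwidth; the default group keeps the lower half while a hypothetical group "p1" gets the "top" half, and each resource is written separately as a plain partial schemata update::

  # echo "L3:0=3" > /sys/fs/resctrl/schemata
  # echo "MB:0=50" > /sys/fs/resctrl/schemata
  # mkdir /sys/fs/resctrl/p1
  # echo "L3:0=c" > /sys/fs/resctrl/p1/schemata
  # echo "MB:0=50" > /sys/fs/resctrl/p1/schemata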
948 mode allowing sharing of their cache allocations. If one resource group
949 configures a cache allocation then nothing prevents another resource group
953 system with two L2 cache instances that can be configured with an 8-bit
955 25% of each cache instance.
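A sketch of the steps that description implies, assuming an 8-bit CBM in which 0x3 covers 25%, and that the exclusivity check also counts overlap with the default group, whose mask therefore has to be shrunk first; the group name "p0" is illustrative::

  # cd /sys/fs/resctrl
  # echo "L2:0=0xfc;1=0xfc" > schemata
  # mkdir p0
  # echo "L2:0=0x3;1=0x3" > p0/schemata
  # echo exclusive > p0/mode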
962 cache::
1002 The bit_usage will reflect how the cache is used::
1016 Lock a portion of L2 cache from cache id 1 using CBM 0x3. Pseudo-locked
1056 * Example code to access one page of pseudo-locked cache region
1125 As an example, the allocation of an exclusive reservation of L3 cache
1245 On a two socket machine (one L3 cache per socket) with just four bits
1246 for cache bit masks::
1260 "lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
1261 Tasks in group "p1" use the "lower" 50% of cache on both sockets.
1289 On a two socket machine (one L3 cache per socket)::
1315 This can also be used to profile jobs' cache size footprint before being
1346 and non real time tasks on other cpus. We want to monitor the cache
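A sketch of the monitoring side of such a setup, assuming L3 occupancy monitoring is available; the group name, CPU range and cache id directory are illustrative, as is the byte count shown::

  # cd /sys/fs/resctrl
  # mkdir p1
  # echo 4-7 > p1/cpus_list
  # cat p1/mon_data/mon_L3_00/llc_occupancy
  1572864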