/openbmc/qemu/tests/qemu-iotests/

142:
   3: # Test for configuring cache modes of arbitrary nodes (requires O_DIRECT)
  43: # We test all cache modes anyway, but O_DIRECT needs to be supported
  51: if ! test -t 0; then
  57: ) | $QEMU -nographic -monitor stdio -nodefaults "$@"
  70: _make_test_img -b "$TEST_IMG.base" $size -F $IMGFMT
  73: echo === Simple test for all cache modes ===
  76: run_qemu -drive file="$TEST_IMG",cache=none
  77: run_qemu -drive file="$TEST_IMG",cache=directsync
  78: run_qemu -drive file="$TEST_IMG",cache=writeback
  79: run_qemu -drive file="$TEST_IMG",cache=writethrough
  [all …]

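For context on what test 142 exercises: each -drive cache= shortcut is documented as a combination of three underlying boolean options. A minimal sketch of that mapping in C (the struct and table are illustrative, not QEMU source):

    #include <stdbool.h>

    /* Mapping of the five cache= shortcuts onto QEMU's three underlying
     * boolean options, per the qemu-options documentation.  Illustrative
     * only; this table is not part of the QEMU sources. */
    struct cache_mode {
        const char *name;
        bool writeback;  /* cache.writeback: complete writes from host cache   */
        bool direct;     /* cache.direct:    bypass host page cache (O_DIRECT) */
        bool no_flush;   /* cache.no-flush:  ignore guest flush requests       */
    };

    static const struct cache_mode cache_modes[] = {
        { "writeback",    true,  false, false },
        { "none",         true,  true,  false },
        { "writethrough", false, false, false },
        { "directsync",   false, true,  false },
        { "unsafe",       true,  false, true  },
    };

This is also why the test requires O_DIRECT support: the none and directsync rows set cache.direct.
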
186.out:
   6: Testing: -device floppy
   7: QEMU X.Y.Z monitor - type 'help' for more information
   8: (qemu) info block
   9: /machine/peripheral-anon/device[1]: [not inserted]
  10:     Attached to: /machine/peripheral-anon/device[N]
  14: Testing: -device floppy,id=qdev_id
  15: QEMU X.Y.Z monitor - type 'help' for more information
  16: (qemu) info block
  22: Testing: -device ide-cd
  23: QEMU X.Y.Z monitor - type 'help' for more information
  [all …]

026.out.nocache:
  11: Event: l1_update; errno: 5; imm: off; once: on; write -b
  17: qemu-io: Failed to flush the L2 table cache: Input/output error
  18: qemu-io: Failed to flush the refcount block cache: Input/output error
  23: Event: l1_update; errno: 5; imm: off; once: off; write -b
  24: qemu-io: Failed to flush the L2 table cache: Input/output error
  25: qemu-io: Failed to flush the refcount block cache: Input/output error
  35: Event: l1_update; errno: 28; imm: off; once: on; write -b
  41: qemu-io: Failed to flush the L2 table cache: No space left on device
  42: qemu-io: Failed to flush the refcount block cache: No space left on device
  47: Event: l1_update; errno: 28; imm: off; once: off; write -b
  [all …]

026.out:
  11: Event: l1_update; errno: 5; imm: off; once: on; write -b
  17: qemu-io: Failed to flush the L2 table cache: Input/output error
  18: qemu-io: Failed to flush the refcount block cache: Input/output error
  23: Event: l1_update; errno: 5; imm: off; once: off; write -b
  24: qemu-io: Failed to flush the L2 table cache: Input/output error
  25: qemu-io: Failed to flush the refcount block cache: Input/output error
  35: Event: l1_update; errno: 28; imm: off; once: on; write -b
  41: qemu-io: Failed to flush the L2 table cache: No space left on device
  42: qemu-io: Failed to flush the refcount block cache: No space left on device
  47: Event: l1_update; errno: 28; imm: off; once: off; write -b
  [all …]

142.out:
   6: === Simple test for all cache modes ===
   8: Testing: -drive file=TEST_DIR/t.qcow2,cache=none
   9: QEMU X.Y.Z monitor - type 'help' for more information
  12: Testing: -drive file=TEST_DIR/t.qcow2,cache=directsync
  13: QEMU X.Y.Z monitor - type 'help' for more information
  16: Testing: -drive file=TEST_DIR/t.qcow2,cache=writeback
  17: QEMU X.Y.Z monitor - type 'help' for more information
  20: Testing: -drive file=TEST_DIR/t.qcow2,cache=writethrough
  21: QEMU X.Y.Z monitor - type 'help' for more information
  24: Testing: -drive file=TEST_DIR/t.qcow2,cache=unsafe
  [all …]

/openbmc/linux/Documentation/admin-guide/device-mapper/

cache.rst:
   2: Cache
   8: dm-cache is a device mapper target written by Joe Thornber, Heinz
  11: It aims to improve performance of a block device (eg, a spindle) by
  15: This device-mapper solution allows us to insert this caching at
  17: a thin-provisioning pool. Caching solutions that are integrated more
  20: The target reuses the metadata library used in the thin-provisioning
  23: The decision as to what data to migrate and when is left to a plug-in
  32: Movement of the primary copy of a logical block from one
  39: The origin device always contains a copy of the logical block, which
  40: may be out of date or kept in sync with the copy on the cache device
  [all …]

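The plug-in policy split that cache.rst describes can be pictured with a short sketch; every name below is hypothetical, and the real target (dm-cache-target.c, excerpted later in this listing) is far more involved:

    #include <stdbool.h>

    typedef unsigned long long oblock_t;  /* logical block number on the origin */
    typedef unsigned long long cblock_t;  /* block number on the cache device   */

    /* The policy answers one question for the target: is this origin
     * block currently cached, and if so, where? */
    struct cache_policy {
        bool (*lookup)(struct cache_policy *p, oblock_t oblock, cblock_t *cblock);
    };

    /* Hypothetical I/O helpers, declared only to make the sketch complete. */
    extern void submit_to_cache_device(cblock_t cblock);
    extern void submit_to_origin_device(oblock_t oblock);

    static void remap_read(struct cache_policy *policy, oblock_t oblock)
    {
        cblock_t cblock;

        if (policy->lookup(policy, oblock, &cblock))
            submit_to_cache_device(cblock);   /* hit: fast device  */
        else
            submit_to_origin_device(oblock);  /* miss: slow origin */
    }
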
cache-policies.rst:
  21: doesn't update states (eg, hit counts) for a block more than once
  26: Overview of supplied cache replacement policies
  30: ---------------
  43: ---------------------------
  47: The stochastic multi-queue (smq) policy addresses some of the problems
  55: DM table that is using the cache target. Doing so will cause all of the
  56: mq policy's hints to be dropped. Also, performance of the cache may
  63: The mq policy used a lot of memory; 88 bytes per cache block on a 64
  67: pointers. It avoids storing an explicit hit count for each block. It
  68: has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
  [all …]

writecache.rst:
   6: doesn't cache reads because reads are supposed to be cached in page cache
  14: 1. type of the cache device - "p" or "s"
  15:    - p - persistent memory
  16:    - s - SSD
  18: 3. the cache device
  19: 4. block size (4096 is recommended; the maximum block size is the page
  25: offset from the start of cache device in 512-byte sectors
  45: applicable only to persistent memory - use the FUA flag
  49: applicable only to persistent memory - don't use the FUA
  53: - some underlying devices perform better with fua, some
  [all …]

/openbmc/linux/fs/squashfs/

cache.c:
   1: // SPDX-License-Identifier: GPL-2.0-or-later
   3:  * Squashfs - a compressed read only filesystem for Linux
   8:  * cache.c
  15:  * This file implements a generic cache implementation used for both caches,
  16:  * plus functions layered ontop of the generic cache implementation to
  19:  * To avoid out of memory and fragmentation issues with vmalloc the cache
  22:  * It should be noted that the cache is not used for file datablocks, these
  23:  * are decompressed and cached in the page-cache in the normal way. The
  24:  * cache is only used to temporarily cache fragment and metadata blocks
  29:  * have been packed with it, these because of locality-of-reference may be read
  [all …]

file.c:
   1: // SPDX-License-Identifier: GPL-2.0-or-later
   3:  * Squashfs - a compressed read only filesystem for Linux
  14:  * compressed fragment block (tail-end packed block). The compressed size
  15:  * of each datablock is stored in a block list contained within the
  19:  * larger), the code implements an index cache that caches the mapping from
  20:  * block index to datablock location on disk.
  22:  * The index cache allows Squashfs to handle large files (up to 1.75 TiB) while
  23:  * retaining a simple and space-efficient block list on disk. The cache
  26:  * The index cache is designed to be memory efficient, and by default uses
  45:  * Locate cache slot in range [offset, index] for specified inode. If
  [all …]

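The index cache that file.c describes amounts to memoizing points along the sequential block-list walk; a hedged sketch of the idea (the stride, names, and types are assumptions, not Squashfs internals):

    /* Record the block-list position reached for every Nth block index,
     * so a later lookup can resume from the nearest recorded point
     * instead of rescanning the list from the start. */
    #define INDEX_STRIDE 128

    struct meta_entry {
        long long block_index;    /* datablock index within the file     */
        long long list_location;  /* matching position in the block list */
    };

    /* Return the best cached starting point at or before 'wanted'. */
    static struct meta_entry *nearest_entry(struct meta_entry *cache,
                                            int n, long long wanted)
    {
        struct meta_entry *best = NULL;
        int i;

        for (i = 0; i < n; i++)
            if (cache[i].block_index <= wanted &&
                (!best || cache[i].block_index > best->block_index))
                best = &cache[i];
        return best;
    }
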
/openbmc/linux/fs/btrfs/

block-group.c:
   1: // SPDX-License-Identifier: GPL-2.0
   7: #include "block-group.h"
   8: #include "space-info.h"
   9: #include "disk-io.h"
  10: #include "free-space-cache.h"
  11: #include "free-space-tree.h"
  14: #include "ref-verify.h"
  16: #include "tree-log.h"
  17: #include "delalloc-space.h"
  23: #include "extent-tree.h"
  [all …]

block-group.h:
   1: /* SPDX-License-Identifier: GPL-2.0 */
   6: #include "free-space-cache.h"
  60: /* Block group flags set at runtime */
  69: /* Does the block group need to be added to the free space tree? */
  71: /* Indicate that the block group is placed on a sequential zone */
  74:  * Indicate that block group is in the list of new block groups of a
 117:  * The last committed used bytes of this block group, if the above @used
 118:  * is still the same as @commit_used, we don't need to update block
 119:  * group item of this block group.
 123:  * If the free space extent count exceeds this number, convert the block
  [all …]

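The @used/@commit_used comment in block-group.h describes a write-avoidance check; a simplified sketch of it (the struct and helper names here are assumptions, not btrfs code):

    struct block_group_sketch {
        unsigned long long used;         /* current used bytes            */
        unsigned long long commit_used;  /* used bytes at the last commit */
    };

    extern void write_block_group_item(struct block_group_sketch *bg); /* hypothetical */

    static void maybe_update_item(struct block_group_sketch *bg)
    {
        /* Nothing changed since the last commit: the on-disk block
         * group item is already current, so skip the metadata write. */
        if (bg->used == bg->commit_used)
            return;

        write_block_group_item(bg);
        bg->commit_used = bg->used;
    }
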
/openbmc/linux/Documentation/admin-guide/

bcache.rst:
   2: A block layer cache (bcache)
   6: nice if you could use them as cache... Hence bcache.
  11: This is the git repository of bcache-tools:
  12: https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/
  17: It's designed around the performance characteristics of SSDs - it only allocates
  18: in erase block sized buckets, and it uses a hybrid btree/log to track cached
  20: designed to avoid random writes at all costs; it fills up an erase block
  25: great lengths to protect your data - it reliably handles unclean shutdown. (It
  29: Writeback caching can use most of the cache for buffering writes - writing
  36: average is above the cutoff it will skip all IO from that task - instead of
  [all …]

/openbmc/linux/include/linux/

sysv_fs.h:
   1: /* SPDX-License-Identifier: GPL-2.0 */
  16: /* Block numbers are 24 bit, sometimes stored in 32 bit.
  17:    On Coherent FS, they are always stored in PDP-11 manner: the least
  21: /* 0 is non-existent */
  26: /* Xenix super-block data on disk */
  27: #define XENIX_NICINOD 100 /* number of inode cache entries */
  28: #define XENIX_NICFREE 100 /* number of free block list chunk entries */
  32: /* the start of the free block list: */
  34: sysv_zone_t s_free[XENIX_NICFREE]; /* first free block list chunk */
  35: /* the cache of free inodes: */
  [all …]

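The PDP-11 storage order mentioned in sysv_fs.h keeps a 32-bit value as two 16-bit little-endian halves with the most significant half first; converting it on a little-endian host is a half-swap, in the spirit of the PDP_swab() helper in fs/sysv (this standalone version is a sketch):

    #include <stdint.h>

    /* Swap the 16-bit halves of a PDP-11 ("middle-endian") 32-bit value,
     * assuming it was read as-is on a little-endian host. */
    static uint32_t pdp11_to_host32(uint32_t x)
    {
        return (x << 16) | (x >> 16);
    }
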
/openbmc/linux/drivers/md/

dm-cache-target.c:
   1: // SPDX-License-Identifier: GPL-2.0-only
   9: #include "dm-bio-prison-v2.h"
  10: #include "dm-bio-record.h"
  11: #include "dm-cache-metadata.h"
  12: #include "dm-io-tracker.h"
  13: #include "dm-cache-background-tracker.h"
  15: #include <linux/dm-io.h>
  16: #include <linux/dm-kcopyd.h>
  25: #define DM_MSG_PREFIX "cache"
  28: 	"A percentage of time allocated for copying to and/or from cache");
  [all …]

/openbmc/linux/arch/riscv/boot/dts/sifive/

fu540-c000.dtsi:
   1: // SPDX-License-Identifier: (GPL-2.0 OR MIT)
   2: /* Copyright (c) 2018-2019 SiFive, Inc */
   4: /dts-v1/;
   6: #include <dt-bindings/clock/sifive-fu540-prci.h>
   9: #address-cells = <2>;
  10: #size-cells = <2>;
  11: compatible = "sifive,fu540-c000", "sifive,fu540";
  23: #address-cells = <1>;
  24: #size-cells = <0>;
  28: i-cache-block-size = <64>;
  [all …]

fu740-c000.dtsi:
   1: // SPDX-License-Identifier: (GPL-2.0 OR MIT)
   4: /dts-v1/;
   6: #include <dt-bindings/clock/sifive-fu740-prci.h>
   9: #address-cells = <2>;
  10: #size-cells = <2>;
  11: compatible = "sifive,fu740-c000", "sifive,fu740";
  23: #address-cells = <1>;
  24: #size-cells = <0>;
  28: i-cache-block-size = <64>;
  29: i-cache-sets = <128>;
  [all …]

/openbmc/qemu/contrib/plugins/

cache.c:
   5:  * See the COPYING file in the top-level directory.
  12: #include <qemu-plugin.h>
  37:  * A CacheSet is a set of cache blocks. A memory block that maps to a set can be
  38:  * put in any of the blocks inside the set. The number of block per set is
  41:  * Each block contains the stored tag and a valid bit. Since this is not
  43:  * whether a block is in the cache or not by searching for its tag.
  45:  * In order to search for memory data in the cache, the set identifier and tag
  49:  * An address is logically divided into three portions: The block offset,
  52:  * The set number is used to identify the set in which the block may exist.
  81: } Cache;
  [all …]

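The three-way address split the plugin comment describes is easy to make concrete for a power-of-two geometry; the constants below are examples, not the plugin's defaults:

    #include <stdint.h>

    enum {
        BLKSIZE  = 64,   /* cache block (line) size in bytes */
        NUM_SETS = 256,  /* number of sets                   */
    };

    /* Low bits: byte offset inside the block. */
    static uint64_t block_offset(uint64_t addr) { return addr & (BLKSIZE - 1); }

    /* Middle bits: which set the block can live in. */
    static uint64_t set_index(uint64_t addr) { return (addr / BLKSIZE) & (NUM_SETS - 1); }

    /* Remaining high bits: the tag stored alongside the valid bit. */
    static uint64_t addr_tag(uint64_t addr) { return addr / ((uint64_t)BLKSIZE * NUM_SETS); }

A lookup then scans only the blocks of set_index(addr), comparing each stored tag against addr_tag(addr).
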
/openbmc/qemu/tests/qemu-iotests/tests/

block-status-cache:
   4: # Test cases for the block-status cache.
  36:     def setUp(self) -> None:
  37:         """Just create an empty image with a read-only NBD server on it"""
  38:         qemu_img_create('-f', iotests.imgfmt, test_img, str(image_size))
  40:         # Pass --allocation-depth to enable the qemu:allocation-depth context,
  41:         # which we are going to query to provoke a block-status inquiry with
  43:         assert qemu_nbd(f'--socket={nbd_sock}',
  44:                         f'--format={iotests.imgfmt}',
  45:                         '--persistent',
  46:                         '--allocation-depth',
  [all …]

/openbmc/u-boot/drivers/block/

Kconfig:
   2: 	bool "Support block devices"
   6: 	  Enable support for block devices, such as SCSI, MMC and USB
   7: 	  flash sticks. These provide a block-level interface which permits
   8: 	  reading, writing and (in some cases) erasing blocks. Block
  10: 	  be partitioned into several areas, called 'partitions' in U-Boot.
  14: 	bool "Enable Legacy Block Device"
  16: 	  Some devices require block support whether or not DM is enabled
  19: 	bool "Support block devices in SPL"
  23: 	  Enable support for block devices, such as SCSI, MMC and USB
  24: 	  flash sticks. These provide a block-level interface which permits
  [all …]

/openbmc/u-boot/arch/x86/lib/

mrccache.c:
   1: // SPDX-License-Identifier: GPL-2.0
  21: 		struct mrc_data_container *cache)
  24: 	u32 mrc_size = sizeof(*cache) + cache->data_size;
  25: 	u8 *region_ptr = (u8 *)cache;
  27: 	if (mrc_size & (MRC_DATA_ALIGN - 1UL)) {
  28: 		mrc_size &= ~(MRC_DATA_ALIGN - 1UL);
  37: static int is_mrc_cache(struct mrc_data_container *cache)
  39: 	return cache && (cache->signature == MRC_DATA_SIGNATURE);
  44: 	struct mrc_data_container *cache, *next;
  48: 	base_addr = entry->base + entry->offset;
  [all …]

/openbmc/linux/Documentation/filesystems/

squashfs.rst:
   1: .. SPDX-License-Identifier: GPL-2.0
   7: Squashfs is a compressed read-only filesystem for Linux.
  11: minimise data overhead. Block sizes greater than 4K are supported up to a
  12: maximum of 1Mbytes (default block size 128K).
  14: Squashfs is intended for general read-only filesystem use, for archival
  16: block device/memory systems (e.g. embedded systems) where low overhead is
  19: Mailing list: squashfs-devel@lists.sourceforge.net
  23: ----------------------
  35: Max block size               1 MiB    4 KiB
  39: Tail-end packing (fragments) yes      no
  [all …]

/openbmc/linux/fs/nilfs2/

alloc.c:
   1: // SPDX-License-Identifier: GPL-2.0+
   5:  * Copyright (C) 2006-2008 Nippon Telegraph and Telephone Corporation.
  21:  * nilfs_palloc_groups_per_desc_block - get the number of groups that a group
  22:  * descriptor block can maintain
  33:  * nilfs_palloc_groups_count - get maximum number of groups
  39: 	return 1UL << (BITS_PER_LONG - (inode->i_blkbits + 3 /* log2(8) */));
  43:  * nilfs_palloc_init_blockgroup - initialize private variables for allocator
  51: 	mi->mi_bgl = kmalloc(sizeof(*mi->mi_bgl), GFP_NOFS);
  52: 	if (!mi->mi_bgl)
  53: 		return -ENOMEM;
  [all …]

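A worked reading of the formula in nilfs_palloc_groups_count(), assuming 4 KiB blocks on a 64-bit kernel (the derivation is an inference from the excerpt, not quoted documentation):

    #include <stdio.h>

    int main(void)
    {
        /* Each group is described by a one-block bitmap, so one group
         * covers blocksize * 8 = 1 << (i_blkbits + 3) entries.  Entry
         * numbers are unsigned longs, which caps the group count at
         * 1UL << (BITS_PER_LONG - (i_blkbits + 3)). */
        unsigned int  blkbits           = 12;                          /* 4096-byte blocks */
        unsigned long entries_per_group = 1UL << (blkbits + 3);        /* 32768 */
        unsigned long max_groups        = 1UL << (64 - (blkbits + 3)); /* 2^49  */

        printf("%lu entries/group, %lu groups max\n",
               entries_per_group, max_groups);
        return 0;
    }
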
/openbmc/linux/arch/riscv/boot/dts/thead/

th1520.dtsi:
   1: // SPDX-License-Identifier: (GPL-2.0 OR MIT)
   7: #include <dt-bindings/interrupt-controller/irq.h>
  11: #address-cells = <2>;
  12: #size-cells = <2>;
  15: #address-cells = <1>;
  16: #size-cells = <0>;
  17: timebase-frequency = <3000000>;
  24: i-cache-block-size = <64>;
  25: i-cache-size = <65536>;
  26: i-cache-sets = <512>;
  [all …]

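The i-cache-* properties in these device trees follow the standard RISC-V CPU binding; a hedged sketch of how a driver could consume them (of_property_read_u32() is the usual OF accessor; deriving the way count assumes a plain set-associative cache):

    #include <linux/of.h>
    #include <linux/printk.h>

    static void report_icache_geometry(const struct device_node *cpu)
    {
        u32 block = 0, sets = 0, size = 0;

        of_property_read_u32(cpu, "i-cache-block-size", &block);
        of_property_read_u32(cpu, "i-cache-sets", &sets);
        of_property_read_u32(cpu, "i-cache-size", &size);

        /* With the TH1520 values above: 65536 / (512 * 64) = 2 ways. */
        if (block && sets && size)
            pr_info("i-cache: %u-way, %u sets, %u-byte blocks\n",
                    size / (sets * block), sets, block);
    }
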
/openbmc/linux/Documentation/block/

writeback_cache_control.rst:
   2: Explicit volatile write back cache control
   6: ------------
  10: operating system before data actually has hit the non-volatile storage. This
  12: system needs to force data out to the non-volatile storage when it performs
  15: The Linux block layer provides two simple mechanisms that let filesystems
  17: a forced cache flush, and the Force Unit Access (FUA) flag for requests.
  20: Explicit cache flushes
  21: ----------------------
  24: the filesystem and will make sure the volatile cache of the storage device
  26: guarantees that previously completed write requests are on non-volatile
  [all …]

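A hedged sketch of the two mechanisms the document names, using the standard block-layer flags and helper (kernel API details vary by version; the one-argument blkdev_issue_flush() is the recent form):

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /* Tag a write so the device's volatile cache is flushed first
     * (REQ_PREFLUSH) and the write itself reaches stable media before
     * completion is reported (REQ_FUA). */
    static void submit_integrity_write(struct bio *bio)
    {
        bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;
        submit_bio(bio);
    }

    /* Or issue an explicit, empty cache flush on its own. */
    static int flush_volatile_cache(struct block_device *bdev)
    {
        return blkdev_issue_flush(bdev);
    }
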