Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14 |
|
#
e97ebc2a |
| 25-Jan-2024 |
Filipe Manana <fdmanana@suse.com> |
btrfs: add and use helper to check if block group is used
commit 1693d5442c458ae8d5b0d58463b873cd879569ed upstream.
Add a helper function to determine if a block group is being used and make use of it at btrfs_delete_unused_bgs(). This helper will also be used in future code changes.
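A minimal sketch of what such a helper looks like, going by the description above (the exact upstream body may differ slightly); the caller is expected to hold the block group's spinlock:

static bool btrfs_is_block_group_used(const struct btrfs_block_group *bg)
{
	lockdep_assert_held(&bg->lock);

	/* Used, reserved or pinned bytes all mean the group is in use. */
	return (bg->used > 0 || bg->reserved > 0 || bg->pinned > 0);
}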
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
Revision tags: v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37 |
|
#
3b9f0995 |
| 30-Jun-2023 |
Filipe Manana <fdmanana@suse.com> |
btrfs: rename add_new_free_space() to btrfs_add_new_free_space()
Since add_new_free_space() is exported and used outside block-group.c, rename it to include the 'btrfs_' prefix.
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
fc1f91b9 |
| 21-Jul-2023 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: wait for actual caching progress during allocation
Recently we've been having mysterious hangs while running generic/475 on the CI system. This turned out to be something like this:
Task 1
dmsetup suspend --nolockfs
-> __dm_suspend
 -> dm_wait_for_completion
  -> dm_wait_for_bios_completion
   -> Unable to complete because of IO's on a plug in Task 2

Task 2
wb_workfn
-> wb_writeback
 -> blk_start_plug
  -> writeback_sb_inodes
   -> Infinite loop unable to make an allocation

Task 3
cache_block_group
-> read_extent_buffer_pages
 -> Waiting for IO to complete that can't be submitted because Task 1 suspended the DM device
The problem here is that we need Task 2 to be scheduled completely for the blk plug to flush. Normally this would happen, we normally wait for the block group caching to finish (Task 3), and this schedule would result in the block plug flushing.
However if there's enough free space available from the current caching to satisfy the allocation we won't actually wait for the caching to complete. This check however just checks that we have enough space, not that we can make the allocation. In this particular case we were trying to allocate 9MiB, and we had 10MiB of free space, but we didn't have 9MiB of contiguous space to allocate, and thus the allocation failed and we looped.
We specifically don't cycle through the FFE loop until we stop finding cached block groups because we don't want to allocate new block groups just because we're caching, so we short circuit the normal loop once we hit LOOP_CACHING_WAIT and we found a caching block group.
This is normally fine, except in this particular case where the caching thread can't make progress because the DM device has been suspended.
Fix this by not only waiting for the free space to be >= the amount of space we want to allocate, but also for some progress in caching from the time we start waiting. This will keep us from busy looping when the caching is taking a while but still theoretically has enough space for us to allocate from, and fixes this particular case by forcing us to actually sleep and wait for forward progress, which will flush the plug.
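A sketch of the reworked wait, assuming the caching control gains an atomic progress counter (names here are illustrative): sleep until caching is done, or until there is both enough free space and forward progress since we started waiting:

static void wait_block_group_cache_progress(struct btrfs_block_group *cache,
					    u64 num_bytes)
{
	struct btrfs_caching_ctl *ctl = btrfs_get_caching_control(cache);
	int progress;

	if (!ctl)
		return;	/* caching already finished */

	progress = atomic_read(&ctl->progress);
	wait_event(ctl->wait, btrfs_block_group_done(cache) ||
		   (progress != atomic_read(&ctl->progress) &&
		    cache->free_space_ctl->free_space >= num_bytes));

	btrfs_put_caching_control(ctl);
}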
With this fix we're no longer hanging with generic/475.
CC: stable@vger.kernel.org # 6.1+ Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
d8ccbd21 |
| 30-Jun-2023 |
Filipe Manana <fdmanana@suse.com> |
btrfs: remove BUG_ON()'s in add_new_free_space()
At add_new_free_space() we have these BUG_ON()'s that are there to deal with any failure to add free space to the in memory free space cache. Such failures are mostly -ENOMEM that should be very rare. However there's no need to have these BUG_ON()'s, we can just return any error to the caller and all callers and their upper call chain are already dealing with errors.
So just make add_new_free_space() return any errors, while removing the BUG_ON()'s, and returning the total amount of added free space to an optional u64 pointer argument.
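A hedged sketch of the resulting contract (the out-parameter name is assumed): errors propagate to the caller instead of crashing, and the added amount is reported through an optional pointer:

int add_new_free_space(struct btrfs_block_group *block_group,
		       u64 start, u64 end, u64 *total_added_ret)
{
	int ret;

	if (total_added_ret)
		*total_added_ret = 0;

	/* ... for each free range found between start and end ... */
	ret = btrfs_add_free_space_async_trimmed(block_group, start,
						 end - start);
	if (ret)
		return ret;	/* previously: BUG_ON(ret) */
	if (total_added_ret)
		*total_added_ret += end - start;

	return 0;
}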
Reported-by: syzbot+3ba856e07b7127889d8c@syzkaller.appspotmail.com Link: https://lore.kernel.org/linux-btrfs/000000000000e9cb8305ff4e8327@google.com/ Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
0657b20c |
| 28-Jun-2023 |
Filipe Manana <fdmanana@suse.com> |
btrfs: fix use-after-free of new block group that became unused
If a task creates a new block group and that block group becomes unused before we finish its creation, at btrfs_create_pending_block_groups(), then when btrfs_mark_bg_unused() is called against the block group, we assume that the block group is currently in the list of block groups to reclaim, and we move it out of the list of new block groups and into the list of unused block groups. This has two consequences:
1) We move it out of the list of new block groups associated to the current transaction. So the block group creation is not finished and if we attempt to delete the bg because it's unused, we will not find the block group item in the extent tree (or the new block group tree), its device extent items in the device tree etc, resulting in the deletion to fail due to the missing items;
2) We don't increment the reference count on the block group when we move it to the list of unused block groups, because we assumed the block group was on the list of block groups to reclaim, and in that case it already has the correct reference count. However the block group was on the list of new block groups, in which case no extra reference was taken because it's local to the current task. This later results in doing an extra reference count decrement when removing the block group from the unused list, eventually leading the reference count to 0.
This second case was caught when running generic/297 from fstests, which produced the following assertion failure and stack trace:
[589.559] assertion failed: refcount_read(&block_group->refs) == 1, in fs/btrfs/block-group.c:4299
[589.559] ------------[ cut here ]------------
[589.559] kernel BUG at fs/btrfs/block-group.c:4299!
[589.560] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[589.560] CPU: 8 PID: 2819134 Comm: umount Tainted: G W 6.4.0-rc6-btrfs-next-134+ #1
[589.560] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
[589.560] RIP: 0010:btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.561] Code: 68 62 da c0 (...)
[589.561] RSP: 0018:ffffa55a8c3b3d98 EFLAGS: 00010246
[589.561] RAX: 0000000000000058 RBX: ffff8f030d7f2000 RCX: 0000000000000000
[589.562] RDX: 0000000000000000 RSI: ffffffff953f0878 RDI: 00000000ffffffff
[589.562] RBP: ffff8f030d7f2088 R08: 0000000000000000 R09: ffffa55a8c3b3c50
[589.562] R10: 0000000000000001 R11: 0000000000000001 R12: ffff8f05850b4c00
[589.562] R13: ffff8f030d7f2090 R14: ffff8f05850b4cd8 R15: dead000000000100
[589.563] FS: 00007f497fd2e840(0000) GS:ffff8f09dfc00000(0000) knlGS:0000000000000000
[589.563] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[589.563] CR2: 00007f497ff8ec10 CR3: 0000000271472006 CR4: 0000000000370ee0
[589.563] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[589.564] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[589.564] Call Trace:
[589.564]  <TASK>
[589.565]  ? __die_body+0x1b/0x60
[589.565]  ? die+0x39/0x60
[589.565]  ? do_trap+0xeb/0x110
[589.565]  ? btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.566]  ? do_error_trap+0x6a/0x90
[589.566]  ? btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.566]  ? exc_invalid_op+0x4e/0x70
[589.566]  ? btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.567]  ? asm_exc_invalid_op+0x16/0x20
[589.567]  ? btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.567]  ? btrfs_free_block_groups+0x449/0x4a0 [btrfs]
[589.567]  close_ctree+0x35d/0x560 [btrfs]
[589.568]  ? fsnotify_sb_delete+0x13e/0x1d0
[589.568]  ? dispose_list+0x3a/0x50
[589.568]  ? evict_inodes+0x151/0x1a0
[589.568]  generic_shutdown_super+0x73/0x1a0
[589.569]  kill_anon_super+0x14/0x30
[589.569]  btrfs_kill_super+0x12/0x20 [btrfs]
[589.569]  deactivate_locked_super+0x2e/0x70
[589.569]  cleanup_mnt+0x104/0x160
[589.570]  task_work_run+0x56/0x90
[589.570]  exit_to_user_mode_prepare+0x160/0x170
[589.570]  syscall_exit_to_user_mode+0x22/0x50
[589.570]  ? __x64_sys_umount+0x12/0x20
[589.571]  do_syscall_64+0x48/0x90
[589.571]  entry_SYSCALL_64_after_hwframe+0x72/0xdc
[589.571] RIP: 0033:0x7f497ff0a567
[589.571] Code: af 98 0e (...)
[589.572] RSP: 002b:00007ffc98347358 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[589.572] RAX: 0000000000000000 RBX: 00007f49800b8264 RCX: 00007f497ff0a567
[589.572] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000557f558abfa0
[589.573] RBP: 0000557f558a6ba0 R08: 0000000000000000 R09: 00007ffc98346100
[589.573] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[589.573] R13: 0000557f558abfa0 R14: 0000557f558a6cb0 R15: 0000557f558a6dd0
[589.573]  </TASK>
[589.574] Modules linked in: dm_snapshot dm_thin_pool (...)
[589.576] ---[ end trace 0000000000000000 ]---
Fix this by adding a runtime flag to the block group to tell that the block group is still in the list of new block groups, and therefore it should not be moved to the list of unused block groups, at btrfs_mark_bg_unused(), until the flag is cleared, when we finish the creation of the block group at btrfs_create_pending_block_groups().
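A sketch of the fixed btrfs_mark_bg_unused(), with the flag name assumed for illustration: a block group still flagged as new keeps its current list membership until its creation finishes:

void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
{
	struct btrfs_fs_info *fs_info = bg->fs_info;

	spin_lock(&fs_info->unused_bgs_lock);
	if (list_empty(&bg->bg_list)) {
		btrfs_get_block_group(bg);
		list_add_tail(&bg->bg_list, &fs_info->unused_bgs);
	} else if (!test_bit(BLOCK_GROUP_FLAG_NEW, &bg->runtime_flags)) {
		/* On the reclaim list: the reference count is already right. */
		list_move_tail(&bg->bg_list, &fs_info->unused_bgs);
	}
	spin_unlock(&fs_info->unused_bgs_lock);
}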
Fixes: a9f189716cf1 ("btrfs: move out now unused BG from the reclaim list") CC: stable@vger.kernel.org # 5.15+ Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33 |
|
#
aadb164b |
| 06-Jun-2023 |
Filipe Manana <fdmanana@suse.com> |
btrfs: update documentation for a block group's bg_list member
Currently we are only documenting two uses of the bg_list member of a block group, but there are two more:
1) To track deleted block groups for discard purposes, introduced in commit e33e17ee1098 ("btrfs: add missing discards when unpinning extents with -o discard");
2) To track block groups for automatic reclaim, introduced more recently by commit 18bb8bbf13c1 ("btrfs: zoned: automatically reclaim zones")
So document those two other use cases.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22 |
|
#
e9255d6c |
| 29-Mar-2023 |
Qu Wenruo <wqu@suse.com> |
btrfs: scrub: remove the old scrub recheck code
The old scrub code has a different entrance to verify the content, and since we have removed the writeback path, now we can start removing the re-check part, including:
- scrub_recover structure
- scrub_sector::recover member
- function scrub_setup_recheck_block()
- function scrub_recheck_block()
- function scrub_recheck_block_checksum()
- function scrub_repair_block_group_good_copy()
- function scrub_repair_sector_from_good_copy()
- function scrub_is_page_on_raid56()
- function full_stripe_lock()
- function search_full_stripe_lock()
- function get_full_stripe_logical()
- function insert_full_stripe_lock()
- function lock_full_stripe()
- function unlock_full_stripe()
- btrfs_block_group::full_stripe_locks_root member
- btrfs_full_stripe_locks_tree structure
This infrastructure is there to ensure that RAID56 scrub properly handles recovery and P/Q scrub.
This is no longer needed: before P/Q scrub we will wait for all the involved data stripes to be scrubbed first, and the RAID56 code has an internal lock to ensure there is no race within the same full stripe.
- function scrub_print_warning()
- function scrub_get_recover()
- function scrub_put_recover()
- function scrub_handle_errored_block()
- function scrub_setup_recheck_block()
- function scrub_bio_wait_endio()
- function scrub_submit_raid56_bio_wait()
- function scrub_recheck_block_on_raid56()
- function scrub_recheck_block()
- function scrub_recheck_block_checksum()
- function scrub_repair_block_from_good_copy()
- function scrub_repair_sector_from_good_copy()
And two more functions exported temporarily for later cleanup:
- alloc_scrub_sector() - alloc_scrub_block()
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.21 |
|
#
5758d1bd |
| 21-Mar-2023 |
Filipe Manana <fdmanana@suse.com> |
btrfs: remove bytes_used argument from btrfs_make_block_group()
The only caller of btrfs_make_block_group() always passes 0 as the value for the bytes_used argument, so remove it.
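The shape of the change, as a before/after of the declaration (sketch):

/* Before: */
struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *trans,
						 u64 bytes_used, u64 type,
						 u64 chunk_offset, u64 size);

/* After: bytes_used was always 0, so it is gone. */
struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *trans,
						 u64 type, u64 chunk_offset,
						 u64 size);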
Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13 |
|
#
1eb82ef8 |
| 12-Dec-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: remove the bdev argument to btrfs_rmap_block
The only user in the zoned remap code is gone now, so remove the argument.
Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
cb0922f2 |
| 15-Dec-2022 |
Boris Burkov <boris@bur.io> |
btrfs: don't use size classes for zoned file systems
When a file system has ZNS devices which are constrained by a maximum number of active block groups, then not being able to use all the block groups for every allocation is not ideal, and could cause us to loop a ton with mixed size allocations.
In general, since zoned doesn't write into gaps behind where block groups are writing, it is not susceptible to the same sort of fragmentation that size classes are designed to solve, so we can skip size classes for zoned file systems in general, even though there would probably be no harm for SMR devices.
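A minimal sketch of the gate, assuming a predicate helper along these lines (the real one may apply further conditions, e.g. restricting size classes to data-only block groups):

bool btrfs_block_group_should_use_size_class(struct btrfs_block_group *bg)
{
	/* Zoned filesystems opt out of size class bookkeeping entirely. */
	if (btrfs_is_zoned(bg->fs_info))
		return false;

	return true;
}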
Signed-off-by: Boris Burkov <boris@bur.io> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
52bb7a21 |
| 15-Dec-2022 |
Boris Burkov <boris@bur.io> |
btrfs: introduce size class to block group allocator
The aim of this patch is to reduce the fragmentation of block groups under certain unhappy workloads. It is particularly effective when the size of extents correlates with their lifetime, which is something we have observed causing fragmentation in the fleet at Meta.
This patch categorizes extents into size classes:
- x < 128KiB: "small"
- 128KiB < x < 8MiB: "medium"
- x > 8MiB: "large"
and as much as possible reduces allocations of extents into block groups that don't match the size class. This takes advantage of any (possible) correlation between size and lifetime and also leaves behind predictable re-usable gaps when extents are freed; small writes don't gum up bigger holes.
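A sketch of the bucketing described above (the exact boundary handling, inclusive versus exclusive, is illustrative):

enum btrfs_block_group_size_class {
	BTRFS_BG_SZ_NONE,	/* block group not yet classified */
	BTRFS_BG_SZ_SMALL,	/* x < 128KiB */
	BTRFS_BG_SZ_MEDIUM,	/* 128KiB .. 8MiB */
	BTRFS_BG_SZ_LARGE,	/* larger than 8MiB */
};

static enum btrfs_block_group_size_class calc_size_class(u64 size)
{
	if (size <= SZ_128K)
		return BTRFS_BG_SZ_SMALL;
	if (size <= SZ_8M)
		return BTRFS_BG_SZ_MEDIUM;
	return BTRFS_BG_SZ_LARGE;
}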
Size classes are implemented in the following way:
- Mark each new block group with a size class of the first allocation that goes into it.
- Add two new passes to ffe: "unset size class" and "wrong size class". First, try only matching block groups, then try unset ones, then allow allocation of new ones, and finally allow mismatched block groups.
- Filtering is done just by skipping inappropriate ones, there is no special size class indexing.
Other solutions I considered were:
- A best fit allocator with an rb-tree. This worked well, as small writes didn't leak big holes from large freed extents, but led to regressions in ffe and write performance due to lock contention on the rb-tree with every allocation possibly updating it in parallel. Perhaps something clever could be done to do the updates in the background while being "right enough".
- A fixed size "working set". This prevents freeing an extent drastically changing where writes currently land, and seems like a good option too. Doesn't take advantage of size in any way.
- The same size class idea, but implemented with xarray marks. This turned out to be slower than looping the linked list and skipping wrong block groups, and is also less flexible since we must have only 3 size classes (max #marks). With the current approach we can have as many as we like.
Performance testing was done via: https://github.com/josefbacik/fsperf
Of particular relevance are the new fragmentation specific tests.
A brief summary of the testing results:
- Neutral results on existing tests. There are some minor regressions and improvements here and there, but nothing that truly stands out as notable.
- Improvement on new tests where size class and extent lifetime are correlated. Fragmentation in these cases is completely eliminated and write performance is generally a little better. There is also significant improvement where extent sizes are just a bit larger than the size class boundaries.
- Regression on one new test: where the allocations are sized intentionally a hair under the borders of the size classes. Results are neutral on the test that intentionally attacks this new scheme by mixing extent size and lifetime.
The full dump of the performance results can be found here: https://bur.io/fsperf/size-class-2022-11-15.txt (there are ANSI escape codes, so best to curl and view in terminal)
Here is a snippet from the full results for a new test which mixes buffered writes appending to a long lived set of files and large short lived fallocates:
bufferedappendvsfallocate results

metric                     baseline       current        stdev          diff
======================================================================================
avg_commit_ms              31.13          29.20          2.67           -6.22%
bg_count                   14             15.60          0              11.43%
commits                    11.10          12.20          0.32           9.91%
elapsed                    27.30          26.40          2.98           -3.30%
end_state_mount_ns         11122551.90    10635118.90    851143.04      -4.38%
end_state_umount_ns        1.36e+09       1.35e+09       12248056.65    -1.07%
find_free_extent_calls     116244.30      114354.30      964.56         -1.63%
find_free_extent_ns_max    599507.20      1047168.20     103337.08      74.67%
find_free_extent_ns_mean   3607.19        3672.11        101.20         1.80%
find_free_extent_ns_min    500            512            6.67           2.40%
find_free_extent_ns_p50    2848           2876           37.65          0.98%
find_free_extent_ns_p95    4916           5000           75.45          1.71%
find_free_extent_ns_p99    20734.49       20920.48       1670.93        0.90%
frag_pct_max               61.67          0              8.05           -100.00%
frag_pct_mean              43.59          0              6.10           -100.00%
frag_pct_min               25.91          0              16.60          -100.00%
frag_pct_p50               42.53          0              7.25           -100.00%
frag_pct_p95               61.67          0              8.05           -100.00%
frag_pct_p99               61.67          0              8.05           -100.00%
fragmented_bg_count        6.10           0              1.45           -100.00%
max_commit_ms              49.80          46             5.37           -7.63%
sys_cpu                    2.59           2.62           0.29           1.39%
write_bw_bytes             1.62e+08       1.68e+08       17975843.50    3.23%
write_clat_ns_mean         57426.39       54475.95       2292.72        -5.14%
write_clat_ns_p50          46950.40       42905.60       2101.35        -8.62%
write_clat_ns_p99          148070.40      143769.60      2115.17        -2.90%
write_io_kbytes            4194304        4194304        0              0.00%
write_iops                 2476.15        2556.10        274.29         3.23%
write_lat_ns_max           2101667.60     2251129.50     370556.59      7.11%
write_lat_ns_mean          59374.91       55682.00       2523.09        -6.22%
write_lat_ns_min           17353.10       16250          1646.08        -6.36%
There are some mixed improvements/regressions in most metrics along with an elimination of fragmentation in this workload.
On the balance, the drastic 1->0 improvement in the happy cases seems worth the mix of regressions and improvements we do observe.
Some considerations for future work:
- Experimenting with more size classes
- More hinting/search ordering work to approximate a best-fit allocator
Signed-off-by: Boris Burkov <boris@bur.io> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77 |
|
#
961f5b8b |
| 31-Oct-2022 |
David Sterba <dsterba@suse.com> |
btrfs: convert btrfs_block_group::seq_zone to runtime flag
In zoned mode the sequential status of a zone can also be tracked in the runtime flags of the block group.
Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
0d7764ff |
| 31-Oct-2022 |
David Sterba <dsterba@suse.com> |
btrfs: convert btrfs_block_group::needs_free_space to runtime flag
We already have flags in the block group to track various status bits, convert needs_free_space as well and reduce the size of btrfs_block_group.
Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68 |
|
#
7248e0ce |
| 09-Sep-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: skip update of block group item if used bytes are the same
[BACKGROUND]
When committing a transaction, we will update block group items for all dirty block groups.
But in fact, dirty block groups don't always need to update their block group items. It's pretty common to have a metadata block group which experienced several COW operations, but still has the same amount of used bytes.
In that case, we may unnecessarily COW a tree block doing nothing.
[ENHANCEMENT]
This patch will introduce btrfs_block_group::commit_used member to remember the last used bytes, and use that new member to skip unnecessary block group item update.
This would be more common for large filesystems, where metadata block group can be as large as 1GiB, containing at most 64K metadata items.
In that case, if COW added and then deleted one metadata item near the end of the block group, then it's completely possible we don't need to touch the block group item at all.
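A sketch of the skip in the block group item update path, using the commit_used member described above (surrounding code elided): read the used bytes under the spinlock for a consistent view, and bail out early when nothing changed since the last commit:

static int update_block_group_item(struct btrfs_trans_handle *trans,
				   struct btrfs_block_group *cache)
{
	u64 used;

	spin_lock(&cache->lock);
	used = cache->used;
	if (cache->commit_used == used) {
		spin_unlock(&cache->lock);
		return 0;	/* no change, skip COWing the bg item */
	}
	cache->commit_used = used;
	spin_unlock(&cache->lock);

	/* ... locate and update the on-disk block group item ... */
	return 0;
}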
[BENCHMARK]
The change itself has quite a high chance (20~80%) of skipping block group item updates in a lot of workloads.
As a result, less time is spent on btrfs_write_dirty_block_groups(), and the execution time of the critical section of btrfs_commit_transaction() is reduced overall.
Here comes a fio command, which will do random writes in 4K block size, causing a very heavy metadata updates.
fio --filename=$mnt/file --size=512M --rw=randwrite --direct=1 --bs=4k \
    --ioengine=libaio --iodepth=64 --runtime=300 --numjobs=4 \
    --name=random_write --fallocate=none --time_based --fsync_on_close=1
The file size (512M) and number of threads (4) means 2GiB file size in total, but during the full 300s run time, my dedicated SATA SSD is able to write around 20~25GiB, which is over 10 times the file size.
Thus after we fill the initial 2G, we should not cause much block group item updates.
Please note, the fio numbers by themselves don't have much change, but if we look deeper, there is some reduced execution time, especially for the critical section of btrfs_commit_transaction().
I added extra trace_printk() to measure the following per-transaction execution time:
- Critical section of btrfs_commit_transaction() By re-using the existing update_commit_stats() function, which has already calculated the interval correctly.
- The while() loop for btrfs_write_dirty_block_groups() Although this includes the execution time of btrfs_run_delayed_refs(), it should still be representative overall.
Both results involve transid 7~30, i.e. the same number of committed transactions.
The result looks like this:
                      | Before      | After       | Diff
----------------------+-------------+-------------+--------
Transaction interval  | 229247198.5 | 215016933.6 | -6.2%
Block group interval  | 23133.33333 | 18970.83333 | -18.0%
The change in block group item updates is more obvious, as skipped block group item updates also mean less delayed refs.
And the overall execution time for that block group update loop is pretty small, thus we can assume the extent tree is already mostly cached. If we could skip an uncached tree block, the change would be more obvious.
Unfortunately the overall reduction in commit transaction critical section is much smaller, as the block group item updates loop is not really the major part, at least not for the above fio script.
But we still have an observable reduction in the critical section.
Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
06d61cb1 |
| 14-Sep-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: move btrfs_should_fragment_free_space into block-group.c
This function uses functions that are not defined in block-group.h, move it into block-group.c in order to keep the header clean.
Reviewed-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
c29abab4 |
| 14-Sep-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: move btrfs_full_stripe_locks_tree into block-group.h
This is actually embedded in struct btrfs_block_group, so move this definition to block-group.h, and then open-code the init of the tree where we init the rest of the block group instead of using a helper.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
16708a88 |
| 14-Sep-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: move btrfs_caching_type to block-group.h
This is a block group related definition, move it into block-group.h.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61 |
|
#
48ff7083 |
| 16-Aug-2022 |
Omar Sandoval <osandov@fb.com> |
btrfs: get rid of block group caching progress logic
struct btrfs_caching_ctl::progress and struct btrfs_block_group::last_byte_to_unpin were previously needed to ensure that unpin_extent_range() didn't return a range to the free space cache before the caching thread had a chance to cache that range. However, the commit "btrfs: fix space cache corruption and potential double allocations" made it so that we always synchronously cache the block group at the time that we pin the extent, so this machinery is no longer necessary.
Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56 |
|
#
527c490f |
| 15-Jul-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: delete btrfs_wait_space_cache_v1_finished
We used to use this in a few spots, but now we only use it directly inside of block-group.c, so remove the helper and just open code it where we were using it.
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
7b9c293b |
| 15-Jul-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL
This is used mostly to determine if we need to look at the caching ctl list and clean up any references to this block group. However we never clear this flag, specifically because we need to know if we still have to remove a caching ctl for this block group. This is in the remove block group path, which isn't a fast path, so the optimization doesn't really matter; simplify this logic and remove the flag.
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
3349b57f |
| 15-Jul-2022 |
Josef Bacik <josef@toxicpanda.com> |
btrfs: convert block group bit field to use bit helpers
We use a bit field in the btrfs_block_group for different flags, however this is awkward because we have to hold the block_group->lock for any modification of any of these fields, and it makes the code clunky for a few of these flags. Convert these to a proper flags setup so we can utilize the bit helpers.
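A sketch of the conversion: the boolean bit fields become bit numbers in a single unsigned long, so the atomic set_bit()/test_bit()/clear_bit() helpers can be used without taking block_group->lock (the flag names shown are examples of the kind that were converted):

enum btrfs_block_group_flags {
	BLOCK_GROUP_FLAG_IREF,
	BLOCK_GROUP_FLAG_REMOVED,
	BLOCK_GROUP_FLAG_TO_COPY,
};

/* In struct btrfs_block_group: unsigned long runtime_flags; */

/* Typical usage: */
set_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags);
if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags))
	return;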
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
ced8ecf0 |
| 23-Aug-2022 |
Omar Sandoval <osandov@fb.com> |
btrfs: fix space cache corruption and potential double allocations
When testing space_cache v2 on a large set of machines, we encountered a few symptoms:
1. "unable to add free space :-17" (EEXIST) errors.
2. Missing free space info items, sometimes caught with a "missing free space info for X" error.
3. Double-accounted space: ranges that were allocated in the extent tree and also marked as free in the free space tree, ranges that were marked as allocated twice in the extent tree, or ranges that were marked as free twice in the free space tree. If the latter made it onto disk, the next reboot would hit the BUG_ON() in add_new_free_space().
4. On some hosts with no on-disk corruption or error messages, the in-memory space cache (dumped with drgn) disagreed with the free space tree.
All of these symptoms have the same underlying cause: a race between caching the free space for a block group and returning free space to the in-memory space cache for pinned extents causes us to double-add a free range to the space cache. This race exists when free space is cached from the free space tree (space_cache=v2) or the extent tree (nospace_cache, or space_cache=v1 if the cache needs to be regenerated). struct btrfs_block_group::last_byte_to_unpin and struct btrfs_block_group::progress are supposed to protect against this race, but commit d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when waiting for a transaction commit") subtly broke this by allowing multiple transactions to be unpinning extents at the same time.
Specifically, the race is as follows:
1. An extent is deleted from an uncached block group in transaction A.
2. btrfs_commit_transaction() is called for transaction A.
3. btrfs_run_delayed_refs() -> __btrfs_free_extent() runs the delayed ref for the deleted extent.
4. __btrfs_free_extent() -> do_free_extent_accounting() -> add_to_free_space_tree() adds the deleted extent back to the free space tree.
5. do_free_extent_accounting() -> btrfs_update_block_group() -> btrfs_cache_block_group() queues up the block group to get cached. block_group->progress is set to block_group->start.
6. btrfs_commit_transaction() for transaction A calls switch_commit_roots(). It sets block_group->last_byte_to_unpin to block_group->progress, which is block_group->start because the block group hasn't been cached yet.
7. The caching thread gets to our block group. Since the commit roots were already switched, load_free_space_tree() sees the deleted extent as free and adds it to the space cache. It finishes caching and sets block_group->progress to U64_MAX.
8. btrfs_commit_transaction() advances transaction A to TRANS_STATE_SUPER_COMMITTED.
9. fsync calls btrfs_commit_transaction() for transaction B. Since transaction A is already in TRANS_STATE_SUPER_COMMITTED and the commit is for fsync, it advances.
10. btrfs_commit_transaction() for transaction B calls switch_commit_roots(). This time, the block group has already been cached, so it sets block_group->last_byte_to_unpin to U64_MAX.
11. btrfs_commit_transaction() for transaction A calls btrfs_finish_extent_commit(), which calls unpin_extent_range() for the deleted extent. It sees last_byte_to_unpin set to U64_MAX (by transaction B!), so it adds the deleted extent to the space cache again!
This explains all of our symptoms above:
* If the sequence of events is exactly as described above, when the free space is re-added in step 11, it will fail with EEXIST.
* If another thread reallocates the deleted extent in between steps 7 and 11, then step 11 will silently re-add that space to the space cache as free even though it is actually allocated. Then, if that space is allocated *again*, the free space tree will be corrupted (namely, the wrong item will be deleted).
* If we don't catch this free space tree corruption, it will continue to get worse as extents are deleted and reallocated.
The v1 space_cache is synchronously loaded when an extent is deleted (btrfs_update_block_group() with alloc=0 calls btrfs_cache_block_group() with load_cache_only=1), so it is not normally affected by this bug. However, as noted above, if we fail to load the space cache, we will fall back to caching from the extent tree and may hit this bug.
The easiest fix for this race is to also make caching from the free space tree or extent tree synchronous. Josef tested this and found no performance regressions.
A few extra changes fall out of this change. Namely, this fix does the following, with step 2 being the crucial fix:
1. Factor btrfs_caching_ctl_wait_done() out of btrfs_wait_block_group_cache_done() to allow waiting on a caching_ctl that we already hold a reference to.
2. Change the call in btrfs_cache_block_group() of btrfs_wait_space_cache_v1_finished() to btrfs_caching_ctl_wait_done(), which makes us wait regardless of the space_cache option.
3. Delete the now unused btrfs_wait_space_cache_v1_finished() and space_cache_v1_done().
4. Change btrfs_cache_block_group()'s `int load_cache_only` parameter to `bool wait` to more accurately describe its new meaning.
5. Change a few callers which had a separate call to btrfs_wait_block_group_cache_done() to use wait = true instead.
6. Make btrfs_wait_block_group_cache_done() static now that it's not used outside of block-group.c anymore.
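A sketch of steps 1 and 2 above, the crucial part of the fix: a factored-out wait helper that btrfs_cache_block_group() calls regardless of the space_cache option (error handling simplified):

static int btrfs_caching_ctl_wait_done(struct btrfs_block_group *cache,
				       struct btrfs_caching_ctl *caching_ctl)
{
	wait_event(caching_ctl->wait, btrfs_block_group_done(cache));
	return cache->cached == BTRFS_CACHE_ERROR ? -EIO : 0;
}

int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
{
	struct btrfs_caching_ctl *caching_ctl;
	int ret = 0;

	/* ... start the caching thread if it is not running yet ... */

	caching_ctl = btrfs_get_caching_control(cache);
	if (!caching_ctl)
		return 0;	/* already cached */
	if (wait)
		ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
	btrfs_put_caching_control(caching_ctl);
	return ret;
}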
Fixes: d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when waiting for a transaction commit") CC: stable@vger.kernel.org # 5.12+ Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46 |
|
#
343d8a30 |
| 07-Jun-2022 |
Naohiro Aota <naohiro.aota@wdc.com> |
btrfs: zoned: prevent allocation from previous data relocation BG
After commit 5f0addf7b890 ("btrfs: zoned: use dedicated lock for data relocation"), we observe IO errors on e.g. btrfs/232 like below.
[09.0][T4038707] WARNING: CPU: 3 PID: 4038707 at fs/btrfs/extent-tree.c:2381 btrfs_cross_ref_exist+0xfc/0x120 [btrfs]
<snip>
[09.9][T4038707] Call Trace:
[09.5][T4038707]  <TASK>
[09.3][T4038707]  run_delalloc_nocow+0x7f1/0x11a0 [btrfs]
[09.6][T4038707]  ? test_range_bit+0x174/0x320 [btrfs]
[09.2][T4038707]  ? fallback_to_cow+0x980/0x980 [btrfs]
[09.3][T4038707]  ? find_lock_delalloc_range+0x33e/0x3e0 [btrfs]
[09.5][T4038707]  btrfs_run_delalloc_range+0x445/0x1320 [btrfs]
[09.2][T4038707]  ? test_range_bit+0x320/0x320 [btrfs]
[09.4][T4038707]  ? lock_downgrade+0x6a0/0x6a0
[09.2][T4038707]  ? orc_find.part.0+0x1ed/0x300
[09.5][T4038707]  ? __module_address.part.0+0x25/0x300
[09.0][T4038707]  writepage_delalloc+0x159/0x310 [btrfs]
<snip>
[09.4][ C3] sd 10:0:1:0: [sde] tag#2620 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[09.5][ C3] sd 10:0:1:0: [sde] tag#2620 Sense Key : Illegal Request [current]
[09.9][ C3] sd 10:0:1:0: [sde] tag#2620 Add. Sense: Unaligned write command
[09.5][ C3] sd 10:0:1:0: [sde] tag#2620 CDB: Write(16) 8a 00 00 00 00 00 02 f3 63 87 00 00 00 2c 00 00
[09.4][ C3] critical target error, dev sde, sector 396041272 op 0x1:(WRITE) flags 0x800 phys_seg 3 prio class 0
[09.9][ C3] BTRFS error (device dm-1): bdev /dev/mapper/dml_102_2 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
The IO errors occur when we allocate a regular extent in previous data relocation block group.
On zoned btrfs, we use a dedicated block group to relocate a data extent. Thus, we allocate relocating data extents (pre-alloc) only from the dedicated block group and vice versa. Once the free space in the dedicated block group gets tight, a relocating extent may not fit into the block group. In that case, we need to switch the dedicated block group to the next one. Then, the previous one is freed up for allocating a regular extent. The BG no longer has enough room for the relocating extent, but there is still room to allocate a smaller extent. Now the problem happens: by allocating a regular extent while nocow IOs for the relocation are still on-going, we issue WRITE IOs (for relocation) and ZONE APPEND IOs (for the regular writes) at the same time. The mixed IOs confuse the write pointer and give rise to the unaligned write errors.
This commit introduces a new bit 'zoned_data_reloc_ongoing' to the btrfs_block_group. We set this bit before releasing the dedicated block group, and no extents are allocated from a block group having this bit set. This bit is similar to setting block_group->ro, but differs from it by allowing nocow writes to start.
Once all the nocow IO for relocation is done (hooked from btrfs_finish_ordered_io), we reset the bit to release the block group for further allocation.
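A sketch of the allocator-side check, using the bit named in the text (the surrounding helper is hypothetical): a block group with relocation nocow IO still in flight is skipped, much like a read-only one, except nocow writes may still start against it:

static bool can_allocate_from(const struct btrfs_block_group *block_group)
{
	if (block_group->ro)
		return false;
	/* Previous data relocation BG with nocow IO still on-going. */
	if (block_group->zoned_data_reloc_ongoing)
		return false;
	return true;
}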
Fixes: c2707a255623 ("btrfs: zoned: add a dedicated data relocation block group") CC: stable@vger.kernel.org # 5.16+ Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38 |
|
#
56fbb0a4 |
| 03-May-2022 |
Naohiro Aota <naohiro.aota@wdc.com> |
btrfs: zoned: properly finish block group on metadata write
Commit be1a1d7a5d24 ("btrfs: zoned: finish fully written block group") introduced zone finishing code both for data and metadata end_io paths. However, the metadata side is not working as it should. First, it compares the logical address (eb->start + eb->len) with the offset within a block group (cache->zone_capacity) in submit_eb_page(). That essentially disabled zone finishing on the metadata end_io path.
Furthermore, fixing the issue above revealed we cannot call btrfs_zone_finish_endio() in end_extent_buffer_writeback(). We cannot call btrfs_lookup_block_group(), which requires taking a spin lock, inside the end_io context.
Introduce btrfs_schedule_zone_finish_bg() to wait for the extent buffer writeback and do the zone finish IO in a workqueue.
Also, drop EXTENT_BUFFER_ZONE_FINISH as it is no longer used.
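A sketch of the deferral, with assumed field names: grab references for the work item, remember the last extent buffer, and let a workqueue (where sleeping and taking locks are allowed) wait for the writeback and finish the zone:

void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
				   struct extent_buffer *eb)
{
	/* References for the work item; dropped by the work function. */
	btrfs_get_block_group(bg);
	atomic_inc(&eb->refs);
	bg->last_eb = eb;

	INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn);
	queue_work(system_unbound_wq, &bg->zone_finish_work);
}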
Fixes: be1a1d7a5d24 ("btrfs: zoned: finish fully written block group") CC: stable@vger.kernel.org # 5.16+ Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.37, v5.15.36, v5.15.35, v5.15.34 |
|
#
2306e83e |
| 13-Apr-2022 |
Filipe Manana <fdmanana@suse.com> |
btrfs: avoid double search for block group during NOCOW writes
When doing a NOCOW write, either through direct IO or buffered IO, we do two lookups for the block group that contains the target extent: once when we call btrfs_inc_nocow_writers() and then later again when we call btrfs_dec_nocow_writers() after creating the ordered extent.
The lookups require taking a lock and navigating the red black tree used to track all block groups, which can take a non-negligible amount of time for a large filesystem with thousands of block groups, as well as lock contention and cache line bouncing.
Improve on this by having a single block group search: making btrfs_inc_nocow_writers() return the block group to its caller and then have the caller pass that block group to btrfs_dec_nocow_writers().
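A sketch of the new calling convention (return type per the description above; the error handling is illustrative): one lookup, with the block group handed from the inc side to the dec side:

static int do_nocow_write(struct btrfs_fs_info *fs_info, u64 disk_bytenr)
{
	struct btrfs_block_group *bg;

	/* Single search of the block group tree. */
	bg = btrfs_inc_nocow_writers(fs_info, disk_bytenr);
	if (!bg)
		return -EAGAIN;	/* illustrative: fall back to COW */

	/* ... create the ordered extent for the NOCOW write ... */

	/* Reuse the block group instead of searching the tree again. */
	btrfs_dec_nocow_writers(bg);
	return 0;
}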
This is part of a patchset comprised of the following patches:
btrfs: remove search start argument from first_logical_byte()
btrfs: use rbtree with leftmost node cached for tracking lowest block group
btrfs: use a read/write lock for protecting the block groups tree
btrfs: return block group directly at btrfs_next_block_group()
btrfs: avoid double search for block group during NOCOW writes
The following test was used to test these changes from a performance perspective:
$ cat test.sh
#!/bin/bash

modprobe null_blk nr_devices=0

NULL_DEV_PATH=/sys/kernel/config/nullb/nullb0
mkdir $NULL_DEV_PATH
if [ $? -ne 0 ]; then
    echo "Failed to create nullb0 directory."
    exit 1
fi
echo 2 > $NULL_DEV_PATH/submit_queues
echo 16384 > $NULL_DEV_PATH/size # 16G
echo 1 > $NULL_DEV_PATH/memory_backed
echo 1 > $NULL_DEV_PATH/power

DEV=/dev/nullb0
MNT=/mnt/nullb0
LOOP_MNT="$MNT/loop"
MOUNT_OPTIONS="-o ssd -o nodatacow"
MKFS_OPTIONS="-R free-space-tree -O no-holes"

cat <<EOF > /tmp/fio-job.ini
[io_uring_writes]
rw=randwrite
fsync=0
fallocate=posix
group_reporting=1
direct=1
ioengine=io_uring
iodepth=64
bs=64k
filesize=1g
runtime=300
time_based
directory=$LOOP_MNT
numjobs=8
thread
EOF

echo performance | \
    tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

echo
echo "Using config:"
echo
cat /tmp/fio-job.ini
echo

umount $MNT &> /dev/null
mkfs.btrfs -f $MKFS_OPTIONS $DEV &> /dev/null
mount $MOUNT_OPTIONS $DEV $MNT

mkdir $LOOP_MNT

truncate -s 4T $MNT/loopfile
mkfs.btrfs -f $MKFS_OPTIONS $MNT/loopfile &> /dev/null
mount $MOUNT_OPTIONS $MNT/loopfile $LOOP_MNT

# Trigger the allocation of about 3500 data block groups, without
# actually consuming space on underlying filesystem, just to make
# the tree of block group large.
fallocate -l 3500G $LOOP_MNT/filler

fio /tmp/fio-job.ini

umount $LOOP_MNT
umount $MNT

echo 0 > $NULL_DEV_PATH/power
rmdir $NULL_DEV_PATH
The test was run on a non-debug kernel (Debian's default kernel config); the results were the following.
Before patchset:
WRITE: bw=1455MiB/s (1526MB/s), 1455MiB/s-1455MiB/s (1526MB/s-1526MB/s), io=426GiB (458GB), run=300006-300006msec
After patchset:
WRITE: bw=1503MiB/s (1577MB/s), 1503MiB/s-1503MiB/s (1577MB/s-1577MB/s), io=440GiB (473GB), run=300006-300006msec
+3.3% write throughput and +3.3% IO done in the same time period.
The test has somewhat limited coverage scope, as with only NOCOW writes we get less contention on the red black tree of block groups, since we don't have the extra contention caused by COW writes, namely when allocating data extents, pinning and unpinning data extents; but on the other hand there's access to the tree in the NOCOW path, when incrementing a block group's number of NOCOW writers.
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|