Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36 |
|
#
3a3c7a7f |
| 28-Jun-2023 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: remove unused BTRFS_RBIO_REBUILD_MISSING
Commit aca43fe839e4 ("btrfs: remove unused raid56 functions which were dedicated for scrub") removed the special handling of RAID56 scrub for a missing device.
Since scrub now goes through full mirror_num based recovery, if it hits a missing device in RAID56 it just tries the next mirror, which goes through the BTRFS_RBIO_READ_REBUILD operation.
This means there is no longer any use of the BTRFS_RBIO_REBUILD_MISSING operation and we can safely remove it.
Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24 |
|
#
94ead93e |
| 13-Apr-2023 |
Qu Wenruo <wqu@suse.com> |
btrfs: scrub: use recovered data stripes as cache to avoid unnecessary read
For P/Q stripe scrub, we have quite some duplicated read IO:
- Data stripes read for verification. This is triggered by scrub_submit_initial_read() inside scrub_raid56_parity_stripe().
- Data stripes read (again) for P/Q stripe verification. This is triggered by scrub_assemble_read_bios() from scrub_rbio().
Although we could hit the rbio cache and avoid the unnecessary read, the chance is very low, as scrub can easily flush the whole rbio cache.
This means that, even in the best case scenario, when we're just scrubbing a single P/Q stripe, we read the data stripes twice. If we need to recover some data stripes, the same data stripes get read again and again.
However, before we call raid56_parity_submit_scrub_rbio() we already have all the data stripes repaired and their contents ready to use. But the RAID56 layer is unaware of the scrub cache, thus it still needs to re-read the data stripes.
To avoid such cache misses, this patch will:
- Introduce a new helper, raid56_parity_cache_data_pages(). This function grabs the pages from an array, copies their content into the rbio, and marks all the involved sectors uptodate (see the sketch after this list).
The page copy is unavoidable because the cache pages of the rbio are all self-managed, thus we can not utilize outside pages without screwing up their lifespan.
- Use the repaired data stripes as cache inside scrub_raid56_parity_stripe().
By this, we ensure all the data sectors of the scrub rbio are already uptodate, and there is no need to read them again from disk.
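A minimal sketch of what such a caching helper can look like. The rbio field names (stripe_pages[], stripe_sectors[], bioc->full_stripe_logical) are assumptions based on this era of raid56.c; the real helper may differ in its details:

    /* Hedged sketch: copy already-repaired data pages into the rbio's own
     * cache pages and mark the covered sectors uptodate, so the P/Q scrub
     * does not re-read them from disk. */
    void raid56_parity_cache_data_pages(struct btrfs_raid_bio *rbio,
                                        struct page **data_pages,
                                        u64 data_logical)
    {
            struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;
            const u64 offset = data_logical - rbio->bioc->full_stripe_logical;
            const int page_index = offset >> PAGE_SHIFT;
            const int sector_index = offset >> fs_info->sectorsize_bits;
            const int nr_sectors = BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits;
            int i;

            /* The rbio cache pages are self-managed, so copy instead of
             * stealing the caller's pages. */
            for (i = 0; i < (BTRFS_STRIPE_LEN >> PAGE_SHIFT); i++)
                    memcpy_page(rbio->stripe_pages[page_index + i], 0,
                                data_pages[i], 0, PAGE_SIZE);

            /* Every sector of this data stripe is now uptodate. */
            for (i = 0; i < nr_sectors; i++)
                    rbio->stripe_sectors[sector_index + i].uptodate = true;
    }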
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
aca43fe8 |
| 12-Apr-2023 |
Qu Wenruo <wqu@suse.com> |
btrfs: remove unused raid56 functions which were dedicated for scrub
Since the scrub rework, the following RAID56 functions are no longer called:
- raid56_add_scrub_pages()
- raid56_alloc_missing_rbio()
- raid56_submit_missing_rbio()
Those functions were all utilized by scrub to handle the missing device cases for RAID56.
However the new scrub code handles them in a completely different way:
- If it's a data stripe, go through the recovery path via btrfs_submit_bio().
- If it's a P/Q stripe, it is handled through raid56_parity_submit_scrub_rbio(), and that function handles dev-replace and repair properly.
Thus we can safely remove those functions.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.23, v6.1.22, v6.1.21 |
|
#
4886ff7b |
| 19-Mar-2023 |
Qu Wenruo <wqu@suse.com> |
btrfs: introduce a new helper to submit write bio for repair
Both scrub and read-repair utilize a special kind of repair write that:
- Only writes back to a single device. Even for read-repair on RAID56, we only update the corrupted data stripe itself, without triggering the full RMW path.
- Requires a valid @mirror_num. For the RAID56 case, only @mirror_num == 1 is valid. For non-RAID56 cases, we need @mirror_num to locate our stripe.
- Needs no data csum generation.
These two call sites still have some differences though:
- Read-repair uses a plain bio. It doesn't need a full btrfs_bio, and goes through submit_bio_wait().
- The new scrub repair uses a btrfs_bio, to simplify both the read and write paths.
So this patch will:
- Introduce a common helper, btrfs_map_repair_block(). Due to the single device nature, we can use an on-stack btrfs_io_stripe to pass the device and its physical bytenr.
- Introduce a new interface, btrfs_submit_repair_bio(), for the incoming scrub code (see the sketch after this list).
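A rough sketch of how the read-repair style write can be driven with the mapping helper. btrfs_map_repair_block() is named by the commit, but its exact signature here is an assumption, and the wrapper function is hypothetical:

    /* Illustrative only: write one repaired block back to the single
     * device holding mirror @mirror_num, with no csum generation or RMW. */
    static int repair_one_block_sync(struct btrfs_fs_info *fs_info,
                                     u64 logical, u32 length, int mirror_num,
                                     struct page *page, unsigned int pgoff)
    {
            struct btrfs_io_stripe smap = {};       /* on-stack: one device */
            struct bio_vec bvec;
            struct bio bio;
            int ret;

            /* Resolve @logical + @mirror_num to a device + physical bytenr. */
            ret = btrfs_map_repair_block(fs_info, &smap, logical, length,
                                         mirror_num);
            if (ret < 0)
                    return ret;

            bio_init(&bio, smap.dev->bdev, &bvec, 1, REQ_OP_WRITE | REQ_SYNC);
            bio.bi_iter.bi_sector = smap.physical >> SECTOR_SHIFT;
            __bio_add_page(&bio, page, length, pgoff);
            ret = submit_bio_wait(&bio);
            bio_uninit(&bio);
            return ret;
    }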
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7 |
|
#
67da05b3 |
| 17-Jan-2023 |
Colin Ian King <colin.i.king@gmail.com> |
btrfs: fix spelling mistakes found using codespell
There are quite a few spelling mistakes, as found using codespell. Fix them.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79 |
|
#
c5a41562 |
| 13-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: prepare data checksums for later RMW verification
This is for later data checksum verification at RMW time.
This patch will try to allocate the needed memory for a locked rbio if the rbio is for data exclusively (we don't want to handle mixed bg yet). The memory will be released when the rbio is finished.
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.0.8, v5.15.78 |
|
#
ad3daf1c |
| 07-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: remove the old error tracking system
Since all the recovery paths have been migrated to the new error bitmap based system, we can remove the old stripe number based system.
This cleanup involves one behavior change:
- Rebuild rbios can no longer be merged. Previously a rebuild rbio (caused by a retry after a data csum mismatch) could be merged if the error happened in the same stripe.
But with the new error bitmap based solution, it's much harder to compare error bitmaps.
So here we just don't merge rebuild rbios at all. This may introduce some performance impact in extreme corner cases, but we're willing to take it.
Other than that, this patch will clean up the following members:
- rbio::faila and rbio::failb. They will be replaced by a per-vertical-stripe check, which is more accurate.
- rbio::error. It will be replaced by a per-vertical-stripe error bitmap check.
- Allow get_rbio_vertical_errors() to accept NULL pointers for @faila and @failb, as some call sites only want to check whether we have errors beyond the tolerance (see the sketch after this list).
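A sketch of what such a per-vertical-stripe check can look like, assuming the bit layout described in the error_bitmap commit below (bit = stripe_nr * stripe_nsectors + sector_nr) and the rbio field names of this era:

    /* Count errors in one vertical stripe; @faila/@failb may be NULL when
     * the caller only cares whether the count exceeds the tolerance. */
    static int get_rbio_vertical_errors(struct btrfs_raid_bio *rbio,
                                        int sector_nr, int *faila, int *failb)
    {
            int stripe_nr;
            int found_errors = 0;

            if (faila)
                    *faila = -1;
            if (failb)
                    *failb = -1;

            for (stripe_nr = 0; stripe_nr < rbio->real_stripes; stripe_nr++) {
                    int bit = stripe_nr * rbio->stripe_nsectors + sector_nr;

                    if (!test_bit(bit, rbio->error_bitmap))
                            continue;
                    found_errors++;
                    /* Record at most two failed stripes for P/Q recovery. */
                    if (faila && *faila < 0)
                            *faila = stripe_nr;
                    else if (failb && *failb < 0)
                            *failb = stripe_nr;
            }
            return found_errors;
    }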
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
2942a50d |
| 07-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: introduce btrfs_raid_bio::error_bitmap
Currently btrfs raid56 uses btrfs_raid_bio::faila and failb to indicate which stripe(s) had IO errors.
But that has some problems:
- If one sector fails its csum check, the whole stripe where the corruption is will be marked as errored. This can reduce the chance that we recover, like this:

             0   4K  8K
    Data 1   |XX|   |
    Data 2   |   |XX|
    Parity   |   |   |

In the above case, 0~4K in data 1 should be recovered using data 2 and parity, while 4K~8K in data 2 should be recovered using data 1 and parity.
Currently if we trigger a read on 0~4K of data 1, we will also recover 4K~8K of data 1 using the corrupted data 2 and parity, causing a wrong result in the rbio cache.
- Harder to expand for a future M-N scheme, as we're limited to just faila/b, i.e. two corruptions.
- Harder to expand to handle extra csum errors. This can be problematic if we start to do csum verification.
This patch will introduce an extra @error_bitmap, where one bit represents an error that happened on that sector.
The choice to introduce a new error bitmap rather than reusing sector_ptr is to avoid extra searching between rbio::stripe_sectors[] and rbio::bio_sectors[]. Since we can submit bios using sectors from both arrays, doing a proper search on both arrays would be more complex.
Although the new bitmap will take extra memory, later we can remove things like @error and faila/b to save some memory.
Currently the new error bitmap and the faila/b mechanism coexist; the error bitmap is only updated at endio time and at the recovery entrance (see the sketch below).
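A sketch of the endio-time update, close in spirit to the pattern described above; get_bio_sector_nr() (mapping a bio to the index of its first sector inside the full stripe) is assumed from the surrounding code:

    /* On a failed bio, set one bit per covered sector in the error bitmap. */
    static void rbio_update_error_bitmap(struct btrfs_raid_bio *rbio,
                                         struct bio *bio)
    {
            int total_sector_nr = get_bio_sector_nr(rbio, bio);
            u32 bio_size = 0;
            struct bio_vec *bvec;
            struct bvec_iter_all iter_all;

            bio_for_each_segment_all(bvec, bio, iter_all)
                    bio_size += bvec->bv_len;

            bitmap_set(rbio->error_bitmap, total_sector_nr,
                       bio_size >> rbio->bioc->fs_info->sectorsize_bits);
    }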
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v6.0.7, v5.15.77 |
|
#
1a1a2851 |
| 01-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: remove the unused endio_raid56_workers and btrfs_raid_bio::end_io_work
Since we have switched all raid56 workload to submit-and-wait method, there is no use for btrfs_fs_info::endio_raid56_workers workqueue and btrfs_raid_bio::end_io_work.
Remove them to save some memory.
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
93723095 |
| 01-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: switch write path to rmw_rbio()
This includes the following changes:
- Implement new raid_unplug() functions. Now we don't need a workqueue to run the plug, as all our work is just queueing an rmw_rbio_work() call, which can be executed without sleeping.
- Implement an rmw_rbio_work_locked() helper. This is for unlock_stripe(), which already holds the full stripe lock (see the sketch after this list).
- Remove all the old functions. The list below already shows how complex the old code was, as we ended up removing the following functions:
* rmw_work()
* validate_rbio_for_rmw()
* raid56_rmw_end_io_work()
* raid56_rmw_stripe()
* full_stripe_write()
* partial_stripe_write()
* __raid56_parity_write()
* run_plug()
* unplug_work()
* btrfs_raid_unplug()
* __raid56_parity_recover()
* raid_recover_end_io_work()
- Unexport rmw_rbio()
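A sketch of the two work wrappers named above, assuming lock_stripe_add() returns 0 when the full stripe lock was acquired (as in the raid56.c of this era):

    static void rmw_rbio_work(struct work_struct *work)
    {
            struct btrfs_raid_bio *rbio = container_of(work,
                            struct btrfs_raid_bio, work);

            /* Only run the RMW if we got the full stripe lock. */
            if (lock_stripe_add(rbio) == 0)
                    rmw_rbio(rbio);
    }

    static void rmw_rbio_work_locked(struct work_struct *work)
    {
            /* The caller (unlock_stripe()) already holds the lock. */
            struct btrfs_raid_bio *rbio = container_of(work,
                            struct btrfs_raid_bio, work);

            rmw_rbio(rbio);
    }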
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
5eb30ee2 |
| 01-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: introduce the main entrance for RMW path
The new entrance will be called rmw_rbio(); it has a streamlined workflow using the submit-and-wait method.
Thus there will be no weird jumps between tons of functions, making it way more reader friendly and later expansion easier; as it's now a straight workflow, the timing is much clearer (see the sketch below).
Unfortunately we can not yet migrate the RMW path to this new entrance, as we still need extra work to address the plug and the unlock_stripe() function.
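A minimal sketch of the submit-and-wait idea, assuming an io_wait waitqueue and a stripes_pending counter on the rbio (names taken from the raid56 code of this era; details may differ):

    static void submit_read_wait_bio_list(struct btrfs_raid_bio *rbio,
                                          struct bio_list *bio_list)
    {
            struct bio *bio;

            while ((bio = bio_list_pop(bio_list)) != NULL)
                    submit_bio(bio);

            /* Block right here instead of resuming in an endio handler. */
            wait_event(rbio->io_wait,
                       atomic_read(&rbio->stripes_pending) == 0);
    }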
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
d817ce35 |
| 01-Nov-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: switch recovery path to a single function
Currently btrfs uses end_io functions to jump between different stages of recovery.
For example, we go through the following functions:
- raid56_bio_end_io(). This handles the read for all the sectors (except the missing device).
- __raid_recover_end_io(). This does the real work; it's called inside the delayed work function raid_recover_end_io_work().
This one recovery path involves at least 3 different functions, which is a big burden for readers.
This patch will change the behavior by:
- Introducing a unified recovery entrance, recover_rbio().
- Using the submit-and-wait method, so the workflow is not interrupted by endio function jumps. This doesn't bring a performance change, but reduces the burden on reviewers.
- Running the main function in the rmw_workers workqueue. Now raid56_parity_recover() only needs to set up the work and queue it using start_async_work() (see the sketch after this list).
Now readers only need to follow one function jump (start_async_work()) to find the main entrance of the recovery path.
Furthermore, the recover_rbio() function can easily be reused by other paths.
The old recovery path is still utilized by the degraded write path. It will be cleaned up when we have migrated the write path.
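The queueing helper itself is tiny. A sketch matching the start_async_work() pattern of this era (field names assumed), followed by the slimmed-down recovery entrance with error handling elided:

    static void start_async_work(struct btrfs_raid_bio *rbio,
                                 work_func_t work_func)
    {
            INIT_WORK(&rbio->work, work_func);
            queue_work(rbio->bioc->fs_info->rmw_workers, &rbio->work);
    }

    /* raid56_parity_recover() now boils down to setup + queue. */
    void raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc,
                               int mirror_num)
    {
            struct btrfs_raid_bio *rbio;

            rbio = alloc_rbio(bioc->fs_info, bioc); /* error handling elided */
            rbio->operation = BTRFS_RBIO_READ_REBUILD;
            bio_list_add(&rbio->bio_list, bio);
            start_async_work(rbio, recover_rbio_work);
    }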
Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60 |
|
#
f1c29379 |
| 06-Aug-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: properly abstract the parity raid bio handling
The parity raid write/recover functionality is currently not very well abstracted from the bio submission and completion handling in volumes.c:
- the raid56 code directly completes the original btrfs_bio fed into btrfs_submit_bio instead of dispatching back to volumes.c
- the raid56 code consumes the bioc and bio_counter references taken by volumes.c, which also leads to special casing of the calls from the scrub code into the raid56 code
To fix this up, supply a bi_end_io handler that calls back into the volumes.c machinery, which then puts the bioc, decrements the bio_counter, and completes the original bio; and update the scrub code to also take ownership of the bioc and bio_counter in all cases (see the sketch below).
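A hedged sketch of the idea (not the literal patch): the raid56 bio's bi_end_io points back into volumes.c, which drops the references it took before completing to the upper layer. The bioc fields used to stash the original completion (end_io/private here) are assumptions:

    static void btrfs_raid56_end_io(struct bio *bio)
    {
            struct btrfs_io_context *bioc = bio->bi_private;

            /* Drop the bio_counter reference taken at submission time. */
            btrfs_bio_counter_dec(bioc->fs_info);

            /* Restore and invoke the original completion. */
            bio->bi_end_io = bioc->end_io;          /* assumed field */
            bio->bi_private = bioc->private;        /* assumed field */
            bio->bi_end_io(bio);

            /* Finally drop the last bioc reference. */
            btrfs_put_bioc(bioc);
    }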
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49 |
|
#
6065fd95 |
| 17-Jun-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: do not return errors from raid56_parity_recover
Always consume the bio and call the end_io handler on error instead of returning an error and letting the caller handle it. This matches what the block layer submission does and avoids any confusion on who needs to handle errors.
Also use the proper bool type for the generic_io argument.
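A sketch of the error handling change (argument list abridged, only the alloc_rbio() error path shown): on failure the bio is marked and completed, exactly like block layer submission does, so the caller never sees an error return:

    void raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc,
                               int mirror_num, bool generic_io)
    {
            struct btrfs_raid_bio *rbio;

            rbio = alloc_rbio(bioc->fs_info, bioc);
            if (IS_ERR(rbio)) {
                    bio->bi_status = errno_to_blk_status(PTR_ERR(rbio));
                    bio_endio(bio); /* consumed; no error returned */
                    return;
            }
            /* ... normal recovery setup continues ... */
    }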
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
31683f4a |
| 17-Jun-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: do not return errors from raid56_parity_write
Always consume the bio and call the end_io handler on error instead of returning an error and letting the caller handle it. This matches what the block layer submission does and avoids any confusion on who needs to handle errors.
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
ff18a4af |
| 17-Jun-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: raid56: use fixed stripe length everywhere
The raid56 code assumes the fixed stripe length BTRFS_STRIPE_LEN, yet there are functions passing it as an argument, which is not necessary. The fixed value has been used for a long time, and though the stripe length should be configurable via the super block member stripesize, this hasn't been implemented and would require more changes, so we don't need to keep this code around until then.
Partially based on a patch from Qu Wenruo.
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Christoph Hellwig <hch@lst.de> [ update changelog ] Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40 |
|
#
0b30f719 |
| 13-May-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: use btrfs_raid_array to calculate number of parity stripes
Use the raid table instead of hard-coded values, and rename the helper as it is now exported. This could make later extensions of RAID56 based profiles easier (see the sketch below).
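The helper after the rename plausibly reads the parity count straight from the raid table; this shape is reconstructed from memory of the btrfs raid table and may differ from the patch:

    static inline int btrfs_nr_parity_stripes(u64 type)
    {
            enum btrfs_raid_types index = btrfs_bg_flags_to_raid_index(type);

            /* nparity is 1 for RAID5, 2 for RAID6, 0 for everything else. */
            return btrfs_raid_array[index].nparity;
    }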
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
d34e123d |
| 26-May-2022 |
Christoph Hellwig <hch@lst.de> |
btrfs: defer I/O completion based on the btrfs_raid_bio
Instead of attaching an extra allocation and an indirect call to each low-level bio issued by the RAID code, add a work_struct to struct btrfs_raid_bio and only defer the per-rbio completion action. The per-bio actions for all the I/Os are trivial and can be safely done from interrupt context.
As a nice side effect this also allows sharing the boilerplate code for the per-bio completions (see the sketch below).
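A sketch of the shared per-bio completion this enables; the error bookkeeping helper and the field/workqueue names (end_io_work, endio_raid56_workers) are assumed from this era of the code:

    static void raid56_bio_end_io(struct bio *bio)
    {
            struct btrfs_raid_bio *rbio = bio->bi_private;

            if (bio->bi_status)
                    fail_bio_stripe(rbio, bio);     /* cheap, IRQ-safe */
            bio_put(bio);

            /* Defer the heavyweight per-rbio step to process context. */
            if (atomic_dec_and_test(&rbio->stripes_pending))
                    queue_work(rbio->bioc->fs_info->endio_raid56_workers,
                               &rbio->end_io_work);
    }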
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
b8bea09a |
| 01-Jun-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: add trace event for submitted RAID56 bio
Add tracepoints for better insight into how the RAID56 data is submitted.
The output looks like this: (trace event header and UUID skipped)
raid56_read_partial: full_stripe=389152768 devid=3 type=DATA1 offset=32768 opf=0x0 physical=323059712 len=32768
raid56_read_partial: full_stripe=389152768 devid=1 type=DATA2 offset=0 opf=0x0 physical=67174400 len=65536
raid56_write_stripe: full_stripe=389152768 devid=3 type=DATA1 offset=0 opf=0x1 physical=323026944 len=32768
raid56_write_stripe: full_stripe=389152768 devid=2 type=PQ1 offset=0 opf=0x1 physical=323026944 len=32768
The above debug output is from a 32K data write into an empty RAID56 data chunk.
Some explanation on the event output:
full_stripe: the logical bytenr of the full stripe
devid:       btrfs devid
type:        raid stripe type (DATA1: the first data stripe; DATA2: the second data stripe; PQ1: the P stripe; PQ2: the Q stripe)
offset:      the offset inside the stripe
opf:         the bio op type
physical:    the physical offset the bio is for
len:         the length of the bio
The first two lines are from partial RMW read, which is reading the remaining data stripes from disks.
The last two lines are for full stripe RMW write, which is writing the involved two 16K stripes (one for DATA1 stripe, one for P stripe). The stripe for DATA2 doesn't need to be written.
There are 5 types of trace events:
- raid56_read_partial: read remaining data for the regular read/write path.
- raid56_write_stripe: write the modified stripes for the regular read/write path.
- raid56_scrub_read_recover: read remaining data for the scrub recovery path.
- raid56_scrub_write_stripe: write the modified stripes for the scrub path.
- raid56_scrub_read: read remaining data for the scrub path.
Also, since the trace events are included in super.c, we have to export the needed structure definitions to 'raid56.h' and include that header in super.c, or we're unable to access those members.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> [ reformat comments ] Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33 |
|
#
6346f6bf |
| 01-Apr-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: raid56: make raid56_add_scrub_pages() subpage compatible
This requires one extra parameter @pgoff for the function.
In the current code base, scrub is still one page per sector, thus the new parameter will always be 0.
Taking full advantage of this requires the extra subpage scrub optimization code.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
cc353a8b |
| 12-Apr-2022 |
Qu Wenruo <wqu@suse.com> |
btrfs: reduce width for stripe_len from u64 to u32
Currently btrfs uses fixed stripe length (64K), thus u32 is wide enough for the usage.
Furthermore, even if in the future we choose to enlarge the stripe length, I don't believe we would want stripes as large as 4G or larger.
So this patch reduces the width of all in-memory structures and parameters; this involves:
- RAID56 related function argument lists. This allows us to do direct division related to stripe_len, although we will use bit shifts to replace the division anyway.
- btrfs_io_geometry structure. This involves one change to simplify the calculation of both @stripe_nr and @stripe_offset using div64_u64_rem() (see the sketch after this list), and adds an extra sanity check to make sure @stripe_offset is always small enough for u32.
This saves 8 bytes for the structure.
- map_lookup structure. This converts @stripe_len to u32, which saves 8 bytes (4 bytes saved directly, plus a 4-byte hole removed).
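A sketch of the div64_u64_rem() based calculation mentioned above; the wrapper function and variable names are illustrative:

    #include <linux/math64.h>

    static u64 calc_stripe_geometry(u64 offset, u32 stripe_len,
                                    u32 *stripe_offset)
    {
            u64 rem;
            /* One division yields both the stripe number and the offset. */
            u64 stripe_nr = div64_u64_rem(offset, stripe_len, &rem);

            /* stripe_len is at most 64K, so the remainder fits in a u32. */
            ASSERT(rem < U32_MAX);
            *stripe_offset = rem;
            return stripe_nr;
    }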
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8 |
|
#
6a258d72 |
| 23-Sep-2021 |
Qu Wenruo <wqu@suse.com> |
btrfs: remove btrfs_raid_bio::fs_info member
We can grab fs_info reliably from btrfs_raid_bio::bioc, as the bioc is always passed into alloc_rbio(), and only gets released when the raid bio is released.
Remove the btrfs_raid_bio::fs_info member, and clean up all the @fs_info parameters for alloc_rbio() callers.
Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
Revision tags: v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65 |
|
#
4c664611 |
| 15-Sep-2021 |
Qu Wenruo <wqu@suse.com> |
btrfs: rename btrfs_bio to btrfs_io_context
The structure btrfs_bio is used by two different sites:
- bio->bi_private for mirror based profiles. For those profiles (SINGLE/DUP/RAID1*/RAID10), this structure records how many mirrors are still pending, and saves the original endio function of the bio.
- RAID56 code. In that case, RAID56 only utilizes the stripes info, and no longer uses it to track the pending mirrors.
So btrfs_bio is not always bound to a bio, and contains more info for the IO context, thus renaming it makes the naming less confusing.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
|
#
d2faf8ed |
| 15-Sep-2021 |
Qu Wenruo <wqu@suse.com> |
btrfs: rename btrfs_bio to btrfs_io_context
[ Upstream commit 4c6646117912397d026d70c04d92ec1599522e9f ]
The structure btrfs_bio is used by two different sites:
- bio->bi_private for mirror based profiles. For those profiles (SINGLE/DUP/RAID1*/RAID10), this structure records how many mirrors are still pending, and saves the original endio function of the bio.
- RAID56 code. In that case, RAID56 only utilizes the stripes info, and no longer uses it to track the pending mirrors.
So btrfs_bio is not always bound to a bio, and contains more info for the IO context, thus renaming it makes the naming less confusing.
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
|