# fd3b6975 | 04-Oct-2021 | Guoqing Jiang <guoqing.jiang@linux.dev>
md/raid1: only allocate write behind bio for WriteMostly device
Commit 6607cd319b6b91bff94e90f798a61c031650b514 ("raid1: ensure write behind bio has less than BIO_MAX_VECS sectors") tried to guarantee that the size of the behind bio is not bigger than BIO_MAX_VECS sectors.
Unfortunately the same call trace [1] could still happen, since an array can enable write-behind without having any write-mostly device.
To match the mdadm manpage (which says "write-behind is only attempted on drives marked as write-mostly"), we need to check the WriteMostly flag to avoid such unexpected behavior.
[1]. https://bugzilla.kernel.org/show_bug.cgi?id=213181#c25
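In rough sketch form (the write_behind variable and the raid1_write_request() context follow the upstream code, but the hunk below is an approximation, not the verbatim patch), the idea is to note during rdev iteration whether any device is WriteMostly, and only then take the write-behind sizing path:

    bool write_behind = false;

    for (i = 0; i < disks; i++) {
            struct md_rdev *rdev = conf->mirrors[i].rdev;

            /* Write-behind is only attempted on WriteMostly devices. */
            if (rdev && test_bit(WriteMostly, &rdev->flags))
                    write_behind = true;
    }

    /* Clamp the split size only when a behind bio can actually be
     * allocated later. */
    if (write_behind && bitmap)
            max_sectors = min_t(int, max_sectors,
                                BIO_MAX_VECS * (PAGE_SIZE >> 9));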
Cc: stable@vger.kernel.org # v5.12+
Cc: Jens Stutte <jens@chianterastutte.eu>
Reported-by: Jens Stutte <jens@chianterastutte.eu>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
# 3400aa7e | 03-Jan-2022 | Song Liu <song@kernel.org>
md/raid1: fix missing bitmap update w/o WriteMostly devices
commit 46669e8616c649c71c4cfcd712fd3d107e771380 upstream.
Commit [1] causes missing bitmap updates when there aren't any WriteMostly devices.
Detailed steps to reproduce, by Norbert (which somehow didn't make it to lore):
# setup md10 (raid1) with two drives (1 GByte sparse files)
dd if=/dev/zero of=disk1 bs=1024k seek=1024 count=0
dd if=/dev/zero of=disk2 bs=1024k seek=1024 count=0

losetup /dev/loop11 disk1
losetup /dev/loop12 disk2

mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/loop11 /dev/loop12

# add bitmap (aka write-intent log)
mdadm /dev/md10 --grow --bitmap=internal

echo check > /sys/block/md10/md/sync_action

root:# cat /sys/block/md10/md/mismatch_cnt
0
root:#

# remove member drive disk2 (loop12)
mdadm /dev/md10 -f loop12 ; mdadm /dev/md10 -r loop12

# modify degraded md device
dd if=/dev/urandom of=/dev/md10 bs=512 count=1

# no blocks recorded as out of sync on the remaining member disk1/loop11
root:# mdadm -X /dev/loop11 | grep Bitmap
Bitmap : 16 bits (chunks), 0 dirty (0.0%)
root:#

# re-add disk2, nothing synced because of empty bitmap
mdadm /dev/md10 --re-add /dev/loop12

# check integrity again
echo check > /sys/block/md10/md/sync_action

# disk1 and disk2 are no longer in sync, reads return different data
root:# cat /sys/block/md10/md/mismatch_cnt
128
root:#

# clean up
mdadm -S /dev/md10
losetup -d /dev/loop11
losetup -d /dev/loop12
rm disk1 disk2
Fix this by moving the WriteMostly check to the if condition for alloc_behind_master_bio().
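In sketch form (assuming the first_clone block of raid1_write_request(), with rdev provided by the surrounding loop; an approximation, not the exact hunk), md_bitmap_startwrite() now runs for the first clone unconditionally, and only the behind-bio allocation stays gated on WriteMostly:

    if (first_clone) {
            /* do behind I/O? Only for WriteMostly devices, but the
             * bitmap must be updated either way. */
            if (bitmap &&
                (atomic_read(&bitmap->behind_writes) <
                 mddev->bitmap_info.max_write_behind) &&
                !waitqueue_active(&bitmap->behind_wait) &&
                test_bit(WriteMostly, &rdev->flags))
                    alloc_behind_master_bio(r1_bio, bio);

            md_bitmap_startwrite(bitmap, r1_bio->sector, r1_bio->sectors,
                                 test_bit(R1BIO_BehindIO, &r1_bio->state));
            first_clone = 0;
    }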
[1] commit fd3b6975e9c1 ("md/raid1: only allocate write behind bio for WriteMostly device")

Fixes: fd3b6975e9c1 ("md/raid1: only allocate write behind bio for WriteMostly device")
Cc: stable@vger.kernel.org # v5.12+
Cc: Guoqing Jiang <guoqing.jiang@linux.dev>
Cc: Jens Axboe <axboe@kernel.dk>
Reported-by: Norbert Warmuth <nwarmuth@t-online.de>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# 3c0049a0 | 04-Oct-2021 | Guoqing Jiang <guoqing.jiang@linux.dev>
md/raid1: only allocate write behind bio for WriteMostly device
commit fd3b6975e9c11c4fa00965f82a0bfbb3b7b44101 upstream.
Commit 6607cd319b6b91bff94e90f798a61c031650b514 ("raid1: ensure write behind bio has less than BIO_MAX_VECS sectors") tried to guarantee that the size of the behind bio is not bigger than BIO_MAX_VECS sectors.
Unfortunately the same call trace [1] could still happen, since an array can enable write-behind without having any write-mostly device.
To match the mdadm manpage (which says "write-behind is only attempted on drives marked as write-mostly"), we need to check the WriteMostly flag to avoid such unexpected behavior.
[1]. https://bugzilla.kernel.org/show_bug.cgi?id=213181#c25
Cc: stable@vger.kernel.org # v5.12+
Cc: Jens Stutte <jens@chianterastutte.eu>
Reported-by: Jens Stutte <jens@chianterastutte.eu>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Revision tags: v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61

# 6607cd31 | 23-Aug-2021 | Guoqing Jiang <jiangguoqing@kylinos.cn>
raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
We can't split a write behind bio with more than BIO_MAX_VECS sectors, otherwise the call trace below is triggered because we could allocate an oversized write behind bio later.
[ 8.097936] bvec_alloc+0x90/0xc0
[ 8.098934] bio_alloc_bioset+0x1b3/0x260
[ 8.099959] raid1_make_request+0x9ce/0xc50 [raid1]
[ 8.100988] ? __bio_clone_fast+0xa8/0xe0
[ 8.102008] md_handle_request+0x158/0x1d0 [md_mod]
[ 8.103050] md_submit_bio+0xcd/0x110 [md_mod]
[ 8.104084] submit_bio_noacct+0x139/0x530
[ 8.105127] submit_bio+0x78/0x1d0
[ 8.106163] ext4_io_submit+0x48/0x60 [ext4]
[ 8.107242] ext4_writepages+0x652/0x1170 [ext4]
[ 8.108300] ? do_writepages+0x41/0x100
[ 8.109338] ? __ext4_mark_inode_dirty+0x240/0x240 [ext4]
[ 8.110406] do_writepages+0x41/0x100
[ 8.111450] __filemap_fdatawrite_range+0xc5/0x100
[ 8.112513] file_write_and_wait_range+0x61/0xb0
[ 8.113564] ext4_sync_file+0x73/0x370 [ext4]
[ 8.114607] __x64_sys_fsync+0x33/0x60
[ 8.115635] do_syscall_64+0x33/0x40
[ 8.116670] entry_SYSCALL_64_after_hwframe+0x44/0xae
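The shape of the guard is roughly the following (a sketch assuming the split-size computation in raid1_write_request(); PAGE_SIZE >> 9 converts pages to sectors):

    /* alloc_behind_master_bio() copies the payload one page at a
     * time, so the behind bio can hold at most BIO_MAX_VECS pages.
     * Limit the split size accordingly. */
    if (bitmap)
            max_sectors = min_t(int, max_sectors,
                                BIO_MAX_VECS * (PAGE_SIZE >> 9));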
Thanks for the comment from Christoph.
[1]. https://bugs.archlinux.org/task/70992
Cc: stable@vger.kernel.org # v5.12+
Reported-by: Jens Stutte <jens@chianterastutte.eu>
Tested-by: Jens Stutte <jens@chianterastutte.eu>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49

# 5ba03936 | 28-Jun-2021 | Wei Shuyu <wsy@dogben.com>
md/raid10: properly indicate failure when ending a failed write request
Similar to [1], this patch fixes the same bug in raid10. Also clean up the comments.
[1] commit 2417b9869b81 ("md/raid1: properly indicate failure when ending a failed write request")

Cc: stable@vger.kernel.org
Fixes: 7cee6d4e6035 ("md/raid10: end bio when the device faulty")
Signed-off-by: Wei Shuyu <wsy@dogben.com>
Acked-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <song@kernel.org>
Revision tags: v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40

# a0159832 | 25-May-2021 | Guoqing Jiang <jgq516@gmail.com>
md/raid1: enable io accounting
For raid1, we record the start time between split bio and clone bio, and finish the accounting in the final endio.
Also introduce start_time in r1bio accordingly.
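A minimal sketch of the mechanism (using the generic bio accounting helpers available at the time; start_time is the field the commit introduces in r1bio, and the placement is approximated):

    /* On submission, once the bio has been split/cloned: */
    r1_bio->start_time = bio_start_io_acct(bio);

    /* In the final endio for the r1bio: */
    bio_end_io_acct(bio, r1_bio->start_time);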
Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <song@kernel.org>
# 9b8ae7b9 | 25-May-2021 | Guoqing Jiang <jgq516@gmail.com>
md/raid1: rename print_msg with r1bio_existed
The caller of raid1_read_request can pass either NULL or a valid pointer for "struct r1bio *r1_bio", so the variable actually indicates whether r1_bio exists or not.
Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <song@kernel.org>
Revision tags: v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31

# 2417b986 | 15-Apr-2021 | Paul Clements <paul.clements@us.sios.com>
md/raid1: properly indicate failure when ending a failed write request
This patch addresses a data corruption bug in raid1 arrays using bitmaps. Without this fix, the bitmap bits for the failed I/O end up being cleared.
Since we are in the failure leg of raid1_end_write_request, the request either needs to be retried (R1BIO_WriteError) or failed (R1BIO_Degraded).
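In sketch form (assuming the failure leg of raid1_end_write_request(); an approximation, not the exact hunk):

    if (!test_bit(Faulty, &rdev->flags)) {
            /* Device still usable: retry this write later. */
            set_bit(R1BIO_WriteError, &r1_bio->state);
    } else {
            /* Device is gone: record the request as degraded so the
             * bitmap bits for this I/O are not cleared as if the
             * write had succeeded. */
            set_bit(R1BIO_Degraded, &r1_bio->state);
    }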
Fixes: eeba6809d8d5 ("md/raid1: end bio when the device faulty")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Paul Clements <paul.clements@us.sios.com>
Signed-off-by: Song Liu <song@kernel.org>
Revision tags: v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14

# a78f18da | 26-Jan-2021 | Christoph Hellwig <hch@lst.de>
md: remove bio_alloc_mddev
bio_alloc_mddev is never called with a NULL mddev, and ->bio_set is initialized in md_run, so it must always be initialized as well. Just open code the remaining call to bio_alloc_bioset.
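Sketched out (assuming the three-argument bio_alloc_bioset() signature of that era), a call site changes roughly like this:

    /* Before: wrapper that also tolerated a NULL mddev. */
    bio = bio_alloc_mddev(GFP_NOIO, nr_iovecs, mddev);

    /* After: mddev and mddev->bio_set are known to be valid, so
     * call bio_alloc_bioset() directly. */
    bio = bio_alloc_bioset(GFP_NOIO, nr_iovecs, &mddev->bio_set);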
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Acked-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
# 309dca30 | 24-Jan-2021 | Christoph Hellwig <hch@lst.de>
block: store a block_device pointer in struct bio
Replace the gendisk pointer in struct bio with a pointer to the newly improved struct block_device. From that the gendisk can be trivially accessed with an extra indirection, but it also allows us to directly look up all information related to partition remapping.
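For example (a sketch; bi_bdev and bd_disk are the fields this change introduces or relies on):

    struct gendisk *disk;

    /* Before: the disk hung directly off the bio. */
    disk = bio->bi_disk;

    /* After: a single block_device pointer carries the partition
     * info, with the gendisk one indirection away. */
    disk = bio->bi_bdev->bd_disk;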
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Revision tags: v5.10

# 1c02fca6 | 03-Dec-2020 | Christoph Hellwig <hch@lst.de>
block: remove the request_queue argument to the block_bio_remap tracepoint
The request_queue can trivially be derived from the bio.
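For instance (a sketch; at the time of this change struct bio still carried a bi_disk pointer):

    /* Instead of passing q alongside the bio, derive it: */
    struct request_queue *q = bio->bi_disk->queue;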
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
# a786282b | 28-Jun-2021 | Wei Shuyu <wsy@dogben.com>
md/raid10: properly indicate failure when ending a failed write request
commit 5ba03936c05584b6f6f79be5ebe7e5036c1dd252 upstream.
Similar to [1], this patch fixes the same bug in raid10. Also clean up the comments.
[1] commit 2417b9869b81 ("md/raid1: properly indicate failure when ending a failed write request")

Cc: stable@vger.kernel.org
Fixes: 7cee6d4e6035 ("md/raid10: end bio when the device faulty")
Signed-off-by: Wei Shuyu <wsy@dogben.com>
Acked-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# 661061a4 | 15-Apr-2021 | Paul Clements <paul.clements@us.sios.com>
md/raid1: properly indicate failure when ending a failed write request
commit 2417b9869b81882ab90fd5ed1081a1cb2d4db1dd upstream.
This patch addresses a data corruption bug in raid1 arrays using bitmaps. Without this fix, the bitmap bits for the failed I/O end up being cleared.
Since we are in the failure leg of raid1_end_write_request, the request either needs to be retried (R1BIO_WriteError) or failed (R1BIO_Degraded).
Fixes: eeba6809d8d5 ("md/raid1: end bio when the device faulty")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Paul Clements <paul.clements@us.sios.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51

# 21cf8661 | 01-Jul-2020 | Christoph Hellwig <hch@lst.de>
writeback: remove bdi->congested_fn
Except for pktdvd, the only places setting congested bits are file systems that allocate their own backing_dev_info structures. And pktdvd is a deprecated driver that isn't useful in a stacked setup either. So remove the dead congested_fn stacking infrastructure.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
Acked-by: David Sterba <dsterba@suse.com>
[axboe: fixup unused variables in bcache/request.c]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
# ed00aabd | 01-Jul-2020 | Christoph Hellwig <hch@lst.de>
block: rename generic_make_request to submit_bio_noacct
generic_make_request has always been very confusingly misnamed, so rename it to submit_bio_noacct to make it clear that it is submit_bio minus accounting and a few checks.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Revision tags: v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16

# c91114c2 | 27-Jan-2020 | David Jeffery <djeffery@redhat.com>
md/raid1: release pending accounting for an I/O only after write-behind is also finished
When using RAID1 and write-behind, md can deadlock when errors occur. With write-behind, r1bio structs can be accounted by raid1 as queued but not counted as pending. The pending count is dropped when the original bio is returned complete, but write-behind for the r1bio may still be active.
This breaks the accounting used in some conditions to know when the raid1 md device has reached an idle state, and can result in calls to freeze_array deadlocking: freeze_array will never complete because a negative "unqueued" value is calculated when the queued count is larger than the pending count.
To properly account for write-behind, move the call to allow_barrier from call_bio_endio to raid_end_bio_io. When using write-behind, md can call call_bio_endio before all write-behind I/O is complete. Using raid_end_bio_io as the point to call allow_barrier releases the pending count at a point where all I/O for an r1bio, even write-behind, is done.
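In sketch form (function names from the commit text; the body is an approximation of the surrounding code, not the verbatim patch):

    static void raid_end_bio_io(struct r1bio *r1_bio)
    {
            struct r1conf *conf = r1_bio->mddev->private;

            /* If nobody has done the final endio yet, do it now. */
            if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state))
                    call_bio_endio(r1_bio);

            /* Wake up any resync thread waiting for the device to go
             * idle: all I/O, even write-behind writes, is done. */
            allow_barrier(conf, r1_bio->sector);

            free_r1bio(r1_bio);
    }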
Signed-off-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7

# d0d2d8ba | 23-Dec-2019 | Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
md/raid1: introduce wait_for_serialization
Previously, we called check_and_add_serial when serialization was enabled for write IO, but it could allocate and free memory back and forth.
Now, let's just get an element from the memory pool with the new function, then insert the node into the rb tree if no collision happens.
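The shape of the new helper is roughly the following (wait_for_serialization, check_and_add_serial and the serial_in_rdev fields come from this series; the body is approximated):

    static void wait_for_serialization(struct md_rdev *rdev,
                                       struct r1bio *r1_bio)
    {
            int idx = sector_to_idx(r1_bio->sector);
            struct serial_in_rdev *serial = &rdev->serial[idx];

            /* check_and_add_serial() takes a node from the mempool
             * and inserts it into the rb tree; sleep until that
             * succeeds, i.e. until no overlapping write remains. */
            spin_lock_irq(&serial->serial_lock);
            wait_event_lock_irq(serial->serial_io_wait,
                                check_and_add_serial(rdev, r1_bio,
                                                     serial, idx) == 0,
                                serial->serial_lock);
            spin_unlock_irq(&serial->serial_lock);
    }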
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
# 025471f9 | 23-Dec-2019 | Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
md/raid1: use bucket based mechanism for IO serialization
Since raid1 already uses a bucket based mechanism to reduce the conflict between write IO and resync IO, it is possible to speed up IO serialization by referring to the same mechanism.
To align with the barrier bucket mechanism, we create arrays (with the same number, BARRIER_BUCKETS_NR) for the spinlock, rb tree and waitqueue. Then we can reduce lock competition with multiple spinlocks, boost search performance with multiple rb trees, and also reduce the thundering herd problem with multiple waitqueues.
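Roughly (sector_to_idx() and BARRIER_BUCKETS_NR already exist for the barrier buckets; the rest is a sketch), a write only ever touches the bucket its sector hashes to:

    int idx = sector_to_idx(r1_bio->sector);
    struct serial_in_rdev *serial = &rdev->serial[idx];

    /* Lock, rb tree and waitqueue are all per bucket, so contention
     * stays within one of the BARRIER_BUCKETS_NR buckets. */
    spin_lock_irq(&serial->serial_lock);
    /* ... insert into or search serial->serial_rb here ... */
    spin_unlock_irq(&serial->serial_lock);
    wake_up(&serial->serial_io_wait);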
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
# 69b00b5b | 23-Dec-2019 | Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
md: introduce a new struct for IO serialization
Obviously, IO serialization can degrade performance a lot. In order to reduce the degradation, an rb interval tree is added in raid1 to speed up the collision check.
So an rb root is needed in md_rdev; then abstract all the serialization related members into a new struct (serial_in_rdev) and embed it into md_rdev.
Of course, we need to free the struct when it is no longer needed, so rdev/rdevs_uninit_serial are added accordingly. They should be called when the memory pool is destroyed or when memory can't be allocated.
And we need to consider calling mddev_destroy_serial_pool in case serialize_policy/write-behind is disabled, the bitmap is destroyed, or in __md_stop_writes.
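A sketch of the abstraction (field names as described by this series; exact layout approximated):

    /* All serialization state for one barrier bucket, embedded in
     * struct md_rdev as an array of BARRIER_BUCKETS_NR entries. */
    struct serial_in_rdev {
            struct rb_root_cached serial_rb;  /* interval tree of in-flight writes */
            spinlock_t serial_lock;
            wait_queue_head_t serial_io_wait; /* writers waiting out a collision */
    };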
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
# 69df9cfc | 23-Dec-2019 | Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
raid1: serialize the overlap write
Before dispatching a write bio, a raid1 array which enables serialize_policy needs to check whether an overlap exists between this bio and previous in-flight bios. If there is overlap, it has to wait until the collision has disappeared.
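The call site amounts to something like this (CollisionCheck and wait_for_serialization come from this series; the placement is approximated):

    /* Before handing the write to this rdev: */
    if (test_bit(CollisionCheck, &rdev->flags))
            /* Sleeps until no in-flight write overlaps
             * [r1_bio->sector, r1_bio->sector + r1_bio->sectors). */
            wait_for_serialization(rdev, r1_bio);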
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
# 404659cf | 23-Dec-2019 | Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
md: rename wb stuffs
Previously, wb_info_pool and wb_list were introduced to address a potential data inconsistency issue for write-behind devices.
Now rename them to serialization related names, since the same mechanism will be used to address the reordered overlap write issue for raid1.
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.4.6, v5.4.5, v5.4.4, v5.4.3

# 028288df | 09-Dec-2019 | Zhiqiang Liu <liuzhiqiang26@huawei.com>
md: raid1: check rdev before reference in raid1_sync_request func
In the raid1_sync_request func, rdev should be checked before being referenced.
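The general shape of the fix (a sketch, not the exact hunk):

    struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);

    /* rdev may be NULL if the member device was removed; test it
     * before dereferencing any of its fields. */
    if (rdev == NULL || test_bit(Faulty, &rdev->flags))
            continue;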
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8

# 5fa4f8ba | 25-Oct-2019 | Hannes Reinecke <hare@suse.de>
md/raid1: avoid soft lockup under high load
As all I/O is being pushed through a kernel thread, the softlockup watchdog might be triggered under high load.
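A sketch of the fix's shape (assuming the loop that flushes the pending write list from the raid1d kernel thread; generic_make_request was the submission path at the time):

    while (bio) { /* submit pending writes */
            struct bio *next = bio->bi_next;

            bio->bi_next = NULL;
            generic_make_request(bio);
            bio = next;
            /* Give the scheduler a chance under high load so the
             * softlockup watchdog does not fire. */
            cond_resched();
    }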
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1

# 775d7831 | 16-Sep-2019 | David Jeffery <djeffery@redhat.com>
md: improve handling of bio with REQ_PREFLUSH in md_flush_request()
If pers->make_request fails in md_flush_request(), the bio is lost. To fix this, pass back a bool to indicate whether the original make_request call should continue to handle the I/O, instead of assuming the flush logic will push it to completion.
Convert md_flush_request to return a bool and no longer call the raid driver's make_request function. If the return is true, then the md flush logic has completed, or will complete, the bio and the md make_request call is done. If false, then the md make_request function needs to keep processing it like a normal bio. Let the original call to md_handle_request handle any need to retry sending the bio to the raid driver's make_request function should it be needed.
Also mark md_flush_request and the make_request function pointer as __must_check to issue warnings should these critical return values be ignored.
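The resulting call pattern in a raid driver looks roughly like this (a sketch of the contract, not the verbatim raid1 hunk; raid1_handle_normal_request is a hypothetical stand-in for the normal read/write path):

    static bool raid1_make_request(struct mddev *mddev, struct bio *bio)
    {
            if (unlikely(bio->bi_opf & REQ_PREFLUSH)
                && md_flush_request(mddev, bio))
                    return true; /* flush logic owns the bio now */

            /* md_flush_request() returned false: process the bio as
             * a normal request (hypothetical helper). */
            return raid1_handle_normal_request(mddev, bio);
    }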
Fixes: 2bc13b83e629 ("md: batch flush requests.")
Cc: stable@vger.kernel.org # v4.19+
Cc: NeilBrown <neilb@suse.com>
Signed-off-by: David Jeffery <djeffery@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Revision tags: v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12

# 07f1a685 | 03-Sep-2019 | Yufen Yu <yuyufen@huawei.com>
md/raid1: fail run raid1 array when active disk less than one
When running the following test case:

mdadm -CR /dev/md1 -l 1 -n 4 /dev/sd[a-d] --assume-clean --bitmap=internal
mdadm -S /dev/md1
mdadm -A /dev/md1 /dev/sd[b-c] --run --force

mdadm --zero /dev/sda
mdadm /dev/md1 -a /dev/sda

echo offline > /sys/block/sdc/device/state
echo offline > /sys/block/sdb/device/state
sleep 5
mdadm -S /dev/md1

echo running > /sys/block/sdb/device/state
echo running > /sys/block/sdc/device/state
mdadm -A /dev/md1 /dev/sd[a-c] --run --force

mdadm run fails with kernel messages as follows:

[ 172.986064] md: kicking non-fresh sdb from array!
[ 173.004210] md: kicking non-fresh sdc from array!
[ 173.022383] md/raid1:md1: active with 0 out of 4 mirrors
[ 173.022406] md1: failed to create bitmap (-5)
In fact, when the number of active disks in the raid1 array is less than one, we need to return failure from raid1_run().
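The check amounts to something like this in raid1_run() (a sketch; the error path is approximated):

    /* RAID1 needs at least one active disk. */
    if (conf->raid_disks - mddev->degraded < 1) {
            ret = -EINVAL;
            goto abort;
    }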
Reviewed-by: NeilBrown <neilb@suse.de>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>