Revision tags: v4.4.30, v4.4.29

# 7449f699 | 28-Oct-2016 | Tomasz Majchrzak <tomasz.majchrzak@intel.com>

raid1: handle read error also in readonly mode
If a write is the first operation on a disk and it happens not to be aligned to the page size, the block layer sends a read request first. If the read fails, the disk is marked as failed, since no attempt is made to fix the error while the array is in auto-readonly mode. Similarly, the disk is marked as failed for a read-only array.
Take the same approach as in raid10: don't fail the disk if the array is in readonly or auto-readonly mode. Try to redirect the request first and, if that is unsuccessful, return a read error.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>

Revision tags: v4.4.28, v4.4.27, v4.7.10, openbmc-4.4-20161021-1, v4.7.9, v4.4.26, v4.7.8, v4.4.25, v4.4.24, v4.7.7

# e3f948cd | 06-Oct-2016 | Shaohua Li <shli@fb.com>

RAID1: ignore discard error
If a write error occurs, raid1 will try to rewrite the bio in smaller chunks. If the rewrite fails, raid1 will record the error in the bad block log. narrow_write_error always uses WRITE for the bio, but it could actually be a discard. Since a discard bio has no payload, writing it out causes various problems. A discard error isn't fatal, though, so we can safely ignore it, which is what this patch does.
This issue has existed since discard support was added, but it is only exposed by the recent arbitrary bio size feature.
Reported-and-tested-by: Sitsofe Wheeler <sitsofe@gmail.com> Cc: stable@vger.kernel.org (v3.6) Signed-off-by: Shaohua Li <shli@fb.com>
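
A minimal sketch of the early-out described above, assuming a helper in the narrow_write_error() path (not the actual hunk):

    #include <linux/bio.h>

    /* Sketch: a discard bio has no payload to re-issue in smaller chunks,
     * and a failed discard is harmless, so treat it as handled instead of
     * recording bad blocks. */
    static bool handled_as_discard(struct bio *bio)
    {
            if (bio_op(bio) == REQ_OP_DISCARD)
                    return true;    /* nothing to rewrite, error is ignorable */
            return false;           /* real write: retry in badblock-sized chunks */
    }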

Revision tags: v4.8, v4.4.23, v4.7.6, v4.7.5, v4.4.22

# 491221f8 | 22-Sep-2016 | Guoqing Jiang <gqjiang@suse.com>

block: export bio_free_pages to other modules
bio_free_pages was introduced in commit 1dfa0f68c040 ("block: add a helper to free bio bounce buffer pages"); once exported, the function can be reused in other modules.
Cc: Christoph Hellwig <hch@infradead.org> Cc: Jens Axboe <axboe@fb.com> Cc: Mike Snitzer <snitzer@redhat.com> Cc: Shaohua Li <shli@fb.com> Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Acked-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
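
A hedged example of how a module could reuse the newly exported helper in a completion handler (names are illustrative, not the md code):

    #include <linux/bio.h>

    /* Illustrative endio: release the pages this module attached to a bio. */
    static void example_end_write(struct bio *bio)
    {
            bio_free_pages(bio);    /* frees every page referenced by the bvecs */
            bio_put(bio);
    }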

Revision tags: v4.4.21, v4.7.4, v4.7.3, v4.4.20, v4.7.2, v4.4.19, openbmc-4.4-20160819-1, v4.7.1, v4.4.18, v4.4.17

# 1eff9d32 | 05-Aug-2016 | Jens Axboe <axboe@fb.com>

block: rename bio bi_rw to bi_opf
Since commit 63a4cc24867d, bio->bi_rw contains flags in the lower portion and the op code in the higher portions. This means that old code that relies on manually setting bi_rw is most likely broken. Instead of letting that brokenness linger, rename the member to force old and out-of-tree code to break at compile time instead of at runtime.
No intended functional changes in this commit.
Signed-off-by: Jens Axboe <axboe@fb.com>
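
A small before/after sketch of what the rename forces on callers (fragment assumed, not taken from the patch):

    #include <linux/bio.h>

    /* Direct bi_rw assignments no longer compile; the op and flags are set
     * through the accessor and end up in bi_opf. */
    static void submit_readahead(struct bio *bio)
    {
            /* old, now a compile error: bio->bi_rw = READ | REQ_RAHEAD; */
            bio_set_op_attrs(bio, REQ_OP_READ, REQ_RAHEAD);
            submit_bio(bio);
    }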

Revision tags: openbmc-4.4-20160804-1, v4.4.16, v4.7, openbmc-4.4-20160722-1, openbmc-20160722-1

# 70246286 | 19-Jul-2016 | Christoph Hellwig <hch@lst.de>

block: get rid of bio_rw and READA
These two are confusing leftovers of the old world order, combining values of the REQ_OP_ and REQ_ namespaces. For callers that don't special-case, we mostly just replace bi_rw with bio_data_dir or op_is_write, except for the few cases where a switch over the REQ_OP_ values makes more sense. Any check for READA is replaced with an explicit check for REQ_RAHEAD. Also remove the READA alias for REQ_RAHEAD.
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Mike Christie <mchristi@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
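
Roughly, the mechanical replacements look like this (sketch only; the post-rename field name bi_opf is used, even though the tree at the time still had bi_rw):

    #include <linux/bio.h>

    static void classify_bio(struct bio *bio)
    {
            bool is_write = (bio_data_dir(bio) == WRITE);   /* was: bio_rw(bio) == WRITE */
            bool is_ra    = (bio->bi_opf & REQ_RAHEAD);     /* was: checking READA */

            if (is_ra && !is_write)
                    pr_debug("read-ahead bio: may be failed without retrying\n");
    }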

Revision tags: openbmc-20160713-1, v4.4.15, v4.6.4, v4.6.3, v4.4.14, v4.6.2, v4.4.13, openbmc-20160606-1

# d787be40 | 02-Jun-2016 | NeilBrown <neilb@suse.com>

md: reduce the number of synchronize_rcu() calls when multiple devices fail.
Every time a device is removed with ->hot_remove_disk(), a synchronize_rcu() call is made, which can delay several milliseconds in some cases. If lots of devices fail at once - as could happen with a large RAID10 where one set of devices is removed all at once - these delays can add up to be very inconvenient.
As failure is not reversible, we can check for that first, setting a separate flag if it is found, and then call synchronize_rcu() once for all the flagged devices. The ->hot_remove_disk() function can then skip the synchronize_rcu() step if the flag is set.
fix build error(Shaohua) Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
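
A rough sketch of the batching pattern, with a hypothetical stand-in for the per-rdev flag the patch adds:

    #include <linux/rcupdate.h>
    #include "md.h"                         /* struct mddev, md_rdev, rdev_for_each, Faulty */

    #define SKETCH_REMOVE_SYNCED 30         /* stand-in bit, not the real flag name */

    static void batch_remove_failed(struct mddev *mddev)
    {
            struct md_rdev *rdev;
            bool flagged = false;

            /* Pass 1: mark every device that is already Faulty and idle. */
            rdev_for_each(rdev, mddev)
                    if (test_bit(Faulty, &rdev->flags) &&
                        atomic_read(&rdev->nr_pending) == 0) {
                            set_bit(SKETCH_REMOVE_SYNCED, &rdev->flags);
                            flagged = true;
                    }

            if (flagged)
                    synchronize_rcu();      /* one grace period covers all of them */

            /* Pass 2: ->hot_remove_disk() now skips its own synchronize_rcu()
             * whenever the flag is set. */
    }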

# 707a6a42 | 02-Jun-2016 | NeilBrown <neilb@suse.com>

md/raid1: add rcu protection to rdev in fix_read_error
Since remove_and_add_spares() was added to hot_remove_disk() it has been possible for an rdev to be hot-removed while fix_read_error() was running, so we need to be more careful, and take a reference to the rdev while performing IO.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
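
The protection being added has roughly this shape (abbreviated sketch; sector handling and retry logic omitted):

    #include <linux/rcupdate.h>
    #include "raid1.h"      /* struct r1conf */

    static void fix_read_error_sketch(struct r1conf *conf, int disk)
    {
            struct md_rdev *rdev;

            rcu_read_lock();
            rdev = rcu_dereference(conf->mirrors[disk].rdev);
            if (rdev && test_bit(In_sync, &rdev->flags)) {
                    atomic_inc(&rdev->nr_pending);  /* hot-remove must now wait for us */
                    rcu_read_unlock();

                    /* ... synchronous re-read / overwrite of the bad sectors ... */

                    rdev_dec_pending(rdev, conf->mddev);
            } else {
                    rcu_read_unlock();
            }
    }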

# 854abd75 | 02-Jun-2016 | NeilBrown <neilb@suse.com>

md/raid1: small code cleanup in end_sync_write
'mirror' is only used to find 'rdev', several times. So just find 'rdev' once, and use it instead.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>

# e5872d58 | 02-Jun-2016 | NeilBrown <neilb@suse.com>

md/raid1: small cleanup in raid1_end_read/write_request
Both functions use conf->mirrors[mirror].rdev several times, so improve readability by storing this in a local variable.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>

# 414e6b9a | 02-Jun-2016 | NeilBrown <neilb@suse.com>

md/raid1, raid10: don't recheck "Faulty" flag in read-balance.
Re-checking the faulty flag here brings no value. The comment about "risk" refers to the risk that the device could be in the process of being removed by ->hot_remove_disk(). However providing that the ->nr_pending count is incremented inside an rcu_read_locked() region, there is no risk of that happening.
This is because the rdev pointer (in the personalities array) is set to NULL before synchronize_rcu(), and ->nr_pending is tested afterwards. If the rcu_read_locked region happens before the synchronize_rcu(), the test will see that nr_pending has been incremented. If it happens afterwards, the rdev pointer will be NULL so there is nothing to increment.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>

# 7ac50447 | 13-Jun-2016 | Tomasz Majchrzak <tomasz.majchrzak@intel.com>

raid1/raid10: slow down resync if there is non-resync activity pending
A performance drop of mkfs has been observed on RAID10 during resync since commit 09314799e4f0 ("md: remove 'go_faster' option from ->sync_request()"). Resync sends so many IOs that it slows down non-resync IOs significantly (several times over). Add a short delay to resync. The previous long sleep (1s) has proven unnecessary; even a very short delay restores performance.
The change is also applied to raid1. The problem has not been observed on raid1, but it shares the barrier code with raid10, so it might be an issue for some setups too.
Suggested-by: NeilBrown <neilb@suse.com> Link: http://lkml.kernel.org/r/20160609134555.GA9104@proton.igk.intel.com Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
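
A sketch of the throttle in the resync loop; the exact delay primitive used by the patch may differ:

    #include <linux/sched.h>
    #include "raid1.h"      /* struct r1conf */

    static void throttle_resync(struct r1conf *conf)
    {
            /* If normal IO is queued behind the resync barrier, yield briefly
             * so it is not starved; about one jiffy proved to be enough. */
            if (conf->nr_waiting)
                    schedule_timeout_uninterruptible(1);
    }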

# 288dab8a | 09-Jun-2016 | Christoph Hellwig <hch@lst.de>

block: add a separate operation type for secure erase
Add a separate operation type instead of overloading the discard support with the REQ_SECURE flag. Use the opportunity to rename the queue flag as well, and remove the dead checks for this flag in the RAID 1 and RAID 10 drivers, which don't claim support for secure erase.
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>

# 28a8f0d3 | 05-Jun-2016 | Mike Christie <mchristi@redhat.com>

block, drivers, fs: rename REQ_FLUSH to REQ_PREFLUSH
To avoid confusion between REQ_OP_FLUSH, which is handled by request_fn drivers, and upper layers requesting the block layer perform a flush sequence along with possibly a WRITE, this patch renames REQ_FLUSH to REQ_PREFLUSH.
Signed-off-by: Mike Christie <mchristi@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>

# 796a5cf0 | 05-Jun-2016 | Mike Christie <mchristi@redhat.com>

md: use bio op accessors
Separate the op from the rq_flag_bits and have md set/get the bio using bio_set_op_attrs/bio_op.
Signed-off-by: Mike Christie <mchristi@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>
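
The conversion style, as a hedged fragment (not an actual hunk from the patch):

    #include <linux/bio.h>

    static void mark_as_write(struct bio *bio)
    {
            bio_set_op_attrs(bio, REQ_OP_WRITE, 0);         /* was: bio->bi_rw = WRITE; */
    }

    static bool is_discard(struct bio *bio)
    {
            return bio_op(bio) == REQ_OP_DISCARD;           /* read back via the accessor */
    }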

# 4e49ea4a | 05-Jun-2016 | Mike Christie <mchristi@redhat.com>

block/fs/drivers: remove rw argument from submit_bio
This has callers of submit_bio/submit_bio_wait set bio->bi_rw instead of passing it in, which matches how generic_make_request is used and how we set the other bio fields.
Signed-off-by: Mike Christie <mchristi@redhat.com>
Fixed up fs/ext4/crypto.c
Signed-off-by: Jens Axboe <axboe@fb.com>
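
The calling-convention change, roughly (the modern field name is used here; the tree at the time still called it bi_rw):

    #include <linux/bio.h>

    static void submit_write(struct bio *bio)
    {
            /* old: submit_bio(WRITE, bio); */
            bio->bi_opf = REQ_OP_WRITE;     /* caller now owns the op/flags */
            submit_bio(bio);
    }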

Revision tags: v4.6.1, v4.4.12, openbmc-20160521-1, v4.4.11, openbmc-20160518-1, v4.6, v4.4.10, openbmc-20160511-1, openbmc-20160505-1, v4.4.9

# 85ad1d13 | 03-May-2016 | Guoqing Jiang <gqjiang@suse.com>

md: set MD_CHANGE_PENDING in an atomic region
Some code waits for a metadata update by:
1. flagging that it is needed (MD_CHANGE_DEVS or MD_CHANGE_CLEAN)
2. setting MD_CHANGE_PENDING and waking the management thread
3. waiting for MD_CHANGE_PENDING to be cleared
If the first two are done without locking, the code in md_update_sb() which checks if it needs to repeat might test if an update is needed before step 1, then clear MD_CHANGE_PENDING after step 2, resulting in the wait returning early.
So make sure all places that set MD_CHANGE_PENDING are atomic, and bit_clear_unless (suggested by Neil) is introduced for the purpose.
Cc: Martin Kepplinger <martink@posteo.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: <linux-kernel@vger.kernel.org> Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
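
A hedged sketch of the two sides of the pattern, using the bit_clear_unless() helper the series introduces (flag names are the MD_CHANGE_* bits of that era):

    #include <linux/bitops.h>
    #include "md.h"         /* struct mddev, MD_CHANGE_* bit numbers */

    /* Setter side: flag the change and PENDING together, atomically. */
    static void flag_sb_change(struct mddev *mddev)
    {
            set_mask_bits(&mddev->flags, 0,
                          BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
    }

    /* md_update_sb() side: drop PENDING only if no new change was flagged
     * in the meantime; otherwise the superblock write is repeated. */
    static bool sb_update_done(struct mddev *mddev)
    {
            return bit_clear_unless(&mddev->flags, BIT(MD_CHANGE_PENDING),
                                    BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_CLEAN));
    }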

Revision tags: v4.4.8, v4.4.7, openbmc-20160329-2, openbmc-20160329-1

# 816b0acf | 21-Mar-2016 | Wei Fang <fangwei1@huawei.com>

md:raid1: fix a dead loop when read from a WriteMostly disk
If first_bad == this_sector when we get the WriteMostly disk in read_balance(), a valid disk will be returned with zero max_sectors. This leads to an infinite loop in make_request(), and OOM will happen because of the endless allocation of struct bio.
Since we can't get data from this disk in this case, continue on to another disk.
Signed-off-by: Wei Fang <fangwei1@huawei.com> Signed-off-by: Shaohua Li <shli@fb.com>
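
The essence of the fix as a sketch (names assumed; the real check lives inside read_balance()):

    #include <linux/types.h>

    /* If the bad range starts at (or before) the requested sector, this disk
     * can serve zero sectors and must be skipped, or make_request() keeps
     * splitting a zero-length bio forever. */
    static bool disk_has_good_sectors(sector_t this_sector, sector_t first_bad)
    {
            return first_bad > this_sector;
    }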

Revision tags: openbmc-20160321-1, v4.4.6, v4.5, v4.4.5, v4.4.4

# ccfc7bf1 | 29-Feb-2016 | Nate Dailey <nate.dailey@stratus.com>

raid1: include bio_end_io_list in nr_queued to prevent freeze_array hang
If raid1d is handling a mix of read and write errors, handle_read_error's call to freeze_array can get stuck.
This can happen because, though the bio_end_io_list is initially drained, writes can be added to it via handle_write_finished as the retry_list is processed. These writes contribute to nr_pending but are not included in nr_queued.
If a later entry on the retry_list triggers a call to handle_read_error, freeze_array hangs waiting for nr_pending == nr_queued + extra. The writes on the bio_end_io_list aren't included in nr_queued so the condition will never be satisfied.
To prevent the hang, include bio_end_io_list writes in nr_queued.
There's probably a better way to handle decrementing nr_queued, but this seemed like the safest way to avoid breaking surrounding code.
I'm happy to supply the script I used to repro this hang.
Fixes: 55ce74d4bfe1b(md/raid1: ensure device failure recorded before write request returns.) Cc: stable@vger.kernel.org (v4.3+) Signed-off-by: Nate Dailey <nate.dailey@stratus.com> Signed-off-by: Shaohua Li <shli@fb.com>
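
A simplified sketch of the wait named above (the real freeze_array() also flushes pending writes while waiting):

    #include <linux/wait.h>
    #include "raid1.h"      /* struct r1conf */

    static void freeze_array_sketch(struct r1conf *conf, int extra)
    {
            spin_lock_irq(&conf->resync_lock);
            conf->array_frozen = 1;
            /* Hangs unless bio_end_io_list entries are counted in nr_queued. */
            wait_event_lock_irq(conf->wait_barrier,
                                conf->nr_pending == conf->nr_queued + extra,
                                conf->resync_lock);
            spin_unlock_irq(&conf->resync_lock);
    }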

# b3c95b42 | 14-Mar-2016 | Guoqing Jiang <gqjiang@suse.com>

md/raid1: remove unnecessary BUG_ON
Since bitmap_start_sync will not return until sync_blocks is at least PAGE_SIZE>>9, the BUG_ON is not needed anymore.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>

Revision tags: v4.4.3, openbmc-20160222-1, v4.4.2, openbmc-20160212-1, openbmc-20160210-1, openbmc-20160202-2, openbmc-20160202-1, v4.4.1, openbmc-20160127-1

# 849674e4 | 20-Jan-2016 | Shaohua Li <shli@fb.com>

MD: rename some functions
These short function names are hard to search. Rename them to make vim happy.
Signed-off-by: Shaohua Li <shli@fb.com>

Revision tags: openbmc-20160120-1

# 1501efad | 13-Jan-2016 | Dan Williams <dan.j.williams@intel.com>

md/raid: only permit hot-add of compatible integrity profiles
It is not safe for an integrity profile to be changed while i/o is in-flight in the queue. Prevent adding new disks or otherwise online spares to an array if the device has an incompatible integrity profile.
The original change to the blk_integrity_unregister implementation in md, commit c7bfced9a671 ("md: suspend i/o during runtime blk_integrity_unregister"), introduced an immediate hang regression.
This policy of disallowing changes to the integrity profile once one has been established is shared with DM.
Here is an abbreviated log from a test run that:
1/ Creates a degraded raid1 with an integrity-enabled device (pmem0s) [ 59.076127]
2/ Tries to add an integrity-disabled device (pmem1m) [ 90.489209]
3/ Retries with an integrity-enabled device (pmem1s) [ 205.671277]
[ 59.076127] md/raid1:md0: active with 1 out of 2 mirrors
[ 59.078302] md: data integrity enabled on md0
[..]
[ 90.489209] md0: incompatible integrity profile for pmem1m
[..]
[ 205.671277] md: super_written gets error=-5
[ 205.677386] md/raid1:md0: Disk failure on pmem1m, disabling device.
[ 205.677386] md/raid1:md0: Operation continuing on 1 devices.
[ 205.683037] RAID1 conf printout:
[ 205.684699] --- wd:1 rd:2
[ 205.685972] disk 0, wo:0, o:1, dev:pmem0s
[ 205.687562] disk 1, wo:1, o:1, dev:pmem1s
[ 205.691717] md: recovery of RAID array md0
Fixes: c7bfced9a671 ("md: suspend i/o during runtime blk_integrity_unregister") Cc: <stable@vger.kernel.org> Cc: Mike Snitzer <snitzer@redhat.com> Reported-by: NeilBrown <neilb@suse.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: NeilBrown <neilb@suse.com>

Revision tags: v4.4, openbmc-20151217-1, openbmc-20151210-1, openbmc-20151202-1, openbmc-20151123-1, openbmc-20151118-1, openbmc-20151104-1, v4.3, openbmc-20151102-1, openbmc-20151028-1

# 28c1b9fd | 22-Oct-2015 | Goldwyn Rodrigues <rgoldwyn@suse.com>

md-cluster: Call update_raid_disks() if another node --grow's raid_disks
To incorporate --grow feature executed on one node, other nodes need to acknowledge the change in number of disks. Call update_raid_disks() to update internal data structures.
This leads to a call to check_reshape() -> md_allow_write() -> md_update_sb(), which results in a deadlock. md_allow_write() updates the superblock so memory can be allocated safely (allocation might trigger writeback, which might write to raid1); this is not required for md with a bitmap.
In the clustered case, we don't perform md_update_sb() in md_allow_write(), but in do_md_run(). Also we disable safemode for clustered mode.
mddev->recovery_cp need not be set in check_sb_changes() because this is required only when a node reads another node's bitmap. mddev->recovery_cp (which is read from sb->resync_offset), is set only if mddev is in_sync. Since we disabled safemode, in_sync is set to zero. In a clustered environment, the MD may not be in sync because another node could be writing to it. So make sure that in_sync is not set in case of clustered node in __md_stop_writes().
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com> Signed-off-by: NeilBrown <neilb@suse.com>

# bd8688a1 | 24-Oct-2015 | NeilBrown <neilb@suse.com>

md/raid1: don't clear bitmap bit when bad-block-list write fails.
When a write fails and a bad-block-list is present, we can update the bad-block-list instead of writing the data. If this succeeds then it is OK to clear the relevant bitmap bit, as no further 'sync' of the block is needed.
However if writing the bad-block-list fails then we need to treat the write as failed and particularly must not clear the bitmap bit. Otherwise the device can be re-added (after any hardware connection issues are resolved) and because the relevant bit in the bitmap is clear, that block will not be resynced. This leads to data corruption.
We already delay the final bio_endio() on the write until the bad-block-list is written so that when the write returns: either that data is safe, the bad-block record is safe, or the fact that the device is faulty is safe. However we *don't* delay the clearing of the bitmap, so the bitmap bit can be recorded as cleared before we know if the bad-block-list was written safely.
So: delay that until the write really is safe. i.e. move the call to close_write() until just before calling bio_endio(), and recheck the 'is array degraded' status before making that call.
This bug goes back to v3.1 when bad-block-lists were introduced, though it only affects arrays created with mdadm-3.3 or later as only those have bad-block lists.
Backports will require at least Commit: 55ce74d4bfe1 ("md/raid1: ensure device failure recorded before write request returns.") as well. I'll send that to 'stable' separately.
Note that of the two tests of R1BIO_WriteError that this patch adds, the first is certain to fail and the second is certain to succeed. However doing it this way makes the patch more obviously correct. I will tidy the code up in a future merge window.
Reported-and-tested-by: Nate Dailey <nate.dailey@stratus.com> Cc: Jes Sorensen <Jes.Sorensen@redhat.com> Fixes: cd5ff9a16f08 ("md/raid1: Handle write errors by updating badblock log.") Signed-off-by: NeilBrown <neilb@suse.com>

# c7bfced9 | 21-Oct-2015 | Dan Williams <dan.j.williams@intel.com>

md: suspend i/o during runtime blk_integrity_unregister
Synchronize pending i/o against a change in the integrity profile to avoid the possibility of spurious integrity errors. Given linear_add() is suspending the mddev before manipulating the mddev, do the same for the other personalities.
Acked-by: NeilBrown <neilb@suse.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
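
A sketch of the quiesce pattern being generalized, assuming the md suspend/resume helpers (not an actual hunk from the patch):

    #include <linux/blkdev.h>
    #include "md.h"         /* struct mddev, mddev_suspend/mddev_resume */

    static void teardown_integrity(struct mddev *mddev)
    {
            mddev_suspend(mddev);                           /* no bio in flight */
            blk_integrity_unregister(mddev->gendisk);
            mddev_resume(mddev);
    }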

# 203d27b0 | 20-Oct-2015 | Jes Sorensen <Jes.Sorensen@redhat.com>

md/raid1: submit_bio_wait() returns 0 on success
This was introduced with 9e882242c6193ae6f416f2d8d8db0d9126bd996b which changed the return value of submit_bio_wait() to return != 0 on error, but didn't update the caller accordingly.
Fixes: 9e882242c6 ("block: Add submit_bio_wait(), remove from md") Cc: stable@vger.kernel.org (v3.10) Reported-by: Bill Kuzeja <William.Kuzeja@stratus.com> Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: NeilBrown <neilb@suse.com>
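
A hedged sketch of the corrected check (the modern one-argument submit_bio_wait() is shown; the 2015 tree still passed the rw flag separately):

    #include <linux/bio.h>

    /* submit_bio_wait() returns 0 on success and a negative errno on error,
     * so a return of 0 must not be treated as failure. */
    static bool rewrite_succeeded(struct bio *wbio)
    {
            return submit_bio_wait(wbio) >= 0;
    }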