#
6907dc49 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: avoid extra memory allocation from mempool c->fill_iter
Mempool c->fill_iter is used to allocate memory for struct btree_iter in bch_btree_node_read_done() to iterate all keys of a read-in btree node.
The allocation size is defined in bch_cache_set_alloc() by,
	mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size)
where iter_size is defined by the calculation,
	(sb->bucket_size / sb->block_size + 1) * sizeof(struct btree_iter_set)
For a 16-bit bucket_size the calculation is fine, but now that the bucket size is extended to 32 bits it can be as large as 2GB. By the above calculation, iter_size can then be 2048 pages (order 11 is still accepted by the buddy allocator).
But the size that actually holds the bkeys in a meta data bucket is already limited by meta_bucket_pages(), which is at most 16MB. If sb->bucket_size in the above calculation is replaced by meta_bucket_pages() * PAGE_SECTORS, the result is 16 pages, which is large enough for the mempool allocation of struct btree_iter.
In the worst case, every allocation from mempool c->fill_iter therefore wastes up to 4080 pages that are never used. This patch uses meta_bucket_pages() * PAGE_SECTORS to calculate iter_size in bch_cache_set_alloc(), to avoid the extra memory allocation from mempool c->fill_iter.
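A minimal sketch of the iter_size change; only the two expressions quoted above come from the message, the surrounding context of bch_cache_set_alloc() is illustrative:

	/* before: sized from the raw (now up to 32-bit) bucket size,
	 * which can demand an order-11 mempool element */
	iter_size = (sb->bucket_size / sb->block_size + 1) *
		    sizeof(struct btree_iter_set);

	/* after: sized from the meta data bucket, already capped by
	 * meta_bucket_pages(), so at most ~16 pages here */
	iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) /
		     sb->block_size + 1) * sizeof(struct btree_iter_set);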
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
ffa47032 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: add bucket_size_hi into struct cache_sb_disk for large bucket
The large bucket feature is to extend bucket_size from 16bit to 32bit.
When creating a cache device on a zoned device (e.g. a zoned NVMe SSD), making a single bucket cover one or more zones of the zoned device is the simplest way for bcache to support a zoned device as cache.
But the current maximum bucket size is 16MB while a typical zone size of a zoned device is 256MB; this is the major motivation to extend the bucket size to a larger bit width.
This patch is the basic and first change to support large bucket size, the major changes it makes are,
- Add BCH_FEATURE_INCOMPAT_LARGE_BUCKET for the large bucket feature; INCOMPAT means it introduces an incompatible on-disk format change.
- Add BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET) routines.
- Add __le16 bucket_size_hi into struct cache_sb_disk at offset 0x8d0 in the on-disk super block format.
- For the in-memory super block struct cache_sb, extend member bucket_size from __u16 to __u32.
- Add get_bucket_size() to combine bucket_size and bucket_size_hi from struct cache_sb_disk into an unsigned int value.
Since we already have the large bucket size helpers meta_bucket_pages(), meta_bucket_bytes() and alloc_meta_bucket_pages(), they make sure that when bucket size > 8MB, the memory allocation for a bcache meta data bucket won't fail no matter how large the bucket size is extended. So the meta data buckets are handled properly when the bucket size width increases from 16 bits to 32 bits, and we don't need to worry about them.
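A sketch of how the new field could be combined, assuming the feature-check helper name generated by the BCH_FEATURE_INCOMPAT_FUNCS() convention above; treat the details as illustrative, not the verbatim patch:

	static inline unsigned int get_bucket_size(struct cache_sb *sb,
						   struct cache_sb_disk *s)
	{
		/* low 16 bits: the pre-existing on-disk field */
		unsigned int bucket_size = le16_to_cpu(s->bucket_size);

		/* high 16 bits: only present when the incompat feature
		 * bit is set on a new-version super block */
		if (bch_has_feature_large_bucket(sb))
			bucket_size |= le16_to_cpu(s->bucket_size_hi) << 16;

		return bucket_size;
	}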
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
f9c32a5a |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: handle btree node memory allocation properly for bucket size > 8MB
Currently a bcache internal btree node occupies a whole bucket. When loading the btree node from the cache device into memory, mca_data_alloc() will call bch_btree_keys_alloc() to allocate memory for the whole bucket size, and ilog2(b->c->btree_pages) is sent to bch_btree_keys_alloc() as the parameter 'page_order'.
c->btree_pages is set to bucket_pages() in bch_cache_set_alloc(); for bucket size > 8MB, ilog2(b->c->btree_pages) is 12 for a 4KB page size. By default the maximum page order __get_free_pages() accepts is MAX_ORDER (11), so in this condition bch_btree_keys_alloc() will always fail.
Because other over-page-order allocation failures already fail the cache device registration, such btree node allocation failure wasn't observed during runtime. Once the other blocking page allocation failures for bucket size > 8MB are addressed, this btree node allocation issue may trigger a potential risk, e.g. an infinite dead-loop retrying btree node allocation after failure.
This patch fixes the potential problem by setting c->btree_pages to meta_bucket_pages() in bch_cache_set_alloc(). When bucket size > 8MB, meta_bucket_pages() will always return a number which won't exceed the maximum page order of the buddy allocator.
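In spirit this is a one-line change; a sketch, with the exact argument form in bch_cache_set_alloc() treated as illustrative:

	/* before: order can reach 12 for bucket size > 8MB
	 *   c->btree_pages = bucket_pages(sb);
	 * after: capped so ilog2(c->btree_pages) stays within the
	 * buddy allocator's maximum order */
	c->btree_pages = meta_bucket_pages(sb);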
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
c954ac8d |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: handle cache prio_buckets and disk_buckets properly for bucket size > 8MB
Similar to c->uuids, struct cache's prio_buckets and disk_buckets also have potential memory allocation failures during cache registration if the bucket size > 8MB.
ca->prio_buckets can be stored on the cache device in multiple buckets. Its in-memory space is allocated via the kzalloc() interface, but normally ends up allocated by alloc_pages() because the size > KMALLOC_MAX_CACHE_SIZE.
So the allocation of ca->prio_buckets has the MAX_ORDER restriction too. If the bucket size > 8MB, by default the page allocator will fail because the page order > 11 (the default MAX_ORDER value). ca->prio_buckets should also use meta_bucket_bytes() and meta_bucket_pages() to decide its memory size, and use alloc_meta_bucket_pages() to allocate pages, to avoid the allocation failure during cache set registration when bucket size > 8MB.
ca->disk_buckets is a single-bucket-sized memory buffer. It is used to iterate each bucket of ca->prio_buckets, compose a bio based on the memory of ca->disk_buckets, then write the ca->disk_buckets memory to the cache disk one-by-one for each bucket of ca->prio_buckets. ca->disk_buckets should have an in-memory size of exactly meta_bucket_pages() pages, which is the size ca->prio_buckets will be stored into in each on-disk bucket.
This patch fixes the above issues and handles cache's prio_buckets and disk_buckets properly for bucket sizes larger than 8MB.
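A sketch of the allocation side, assuming the helper names above; the prios_per_bucket() form and error handling are illustrative, not the verbatim patch:

	/* size a prio bucket by the meta bucket size, not the raw
	 * bucket size, so derived allocations stay within MAX_ORDER */
	#define prios_per_bucket(ca)						\
		((meta_bucket_bytes(&(ca)->sb) - sizeof(struct prio_set)) /	\
		 sizeof(struct bucket_disk))

	/* the single-bucket staging buffer, allocated like other
	 * meta data buckets */
	ca->disk_buckets = alloc_meta_bucket_pages(GFP_KERNEL, &ca->sb);
	if (!ca->disk_buckets)
		goto err;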
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
21e478dd |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: handle c->uuids properly for bucket size > 8MB
Bcache allocates a whole bucket to store c->uuids on the cache device, and allocates continuous pages to store it in memory. When the bucket size exceeds the maximum number of allocatable continuous pages, bch_cache_set_alloc() will fail and the cache device registration will fail.
This patch allocates c->uuids by alloc_meta_bucket_pages(), and uses ilog2(meta_bucket_pages(c)) to indicate the order of the c->uuids pages when freeing them. When writing c->uuids to the cache device, its size is decided by meta_bucket_pages(c) * PAGE_SECTORS. Now c->uuids is properly handled for bucket size > 8MB.
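A sketch of the matched allocate/free pair described above (surrounding context is illustrative):

	/* allocation, e.g. in bch_cache_set_alloc(): capped meta
	 * bucket pages instead of raw bucket pages */
	c->uuids = alloc_meta_bucket_pages(GFP_KERNEL, sb);

	/* freeing must use the same order the allocation used */
	free_pages((unsigned long) c->uuids,
		   ilog2(meta_bucket_pages(&c->sb)));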
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
de1fafab |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: introduce meta_bucket_pages() related helper routines
Currently in-memory meta data like c->uuids or c->disk_buckets is allocated by alloc_bucket_pages(). The macro alloc_bucket_pages() calls __get_free_pages() to allocate continuous pages with the order indicated by ilog2(bucket_pages(c)),
	#define alloc_bucket_pages(gfp, c)			\
		((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
The maximum order is defined as MAX_ORDER, whose default value is 11 (and can be overridden by CONFIG_FORCE_MAX_ZONEORDER). In bcache code the maximum bucket size width is 16 bits, restricted both by the KEY_SIZE width and the bucket_size width in struct cache_sb_disk. The maximum 16-bit-wide, power-of-2 value is (1<<15) in units of sectors (512 bytes). This means the maximum bucket size is (1<<24) bytes, a.k.a. 4096 pages.
When the bucket size is set to the maximum permitted value, ilog2(4096) is 12, which exceeds the default maximum order __get_free_pages() can accept. The failed page allocation will fail the cache set registration procedure and print a kernel oops message for the exceeded page order.
This patch introduces the meta_bucket_pages(), meta_bucket_bytes(), and alloc_meta_bucket_pages() helper routines. meta_bucket_pages() indicates the maximum pages that can be allocated to a meta data bucket, meta_bucket_bytes() indicates the corresponding maximum bytes, and alloc_meta_bucket_pages() does the page allocation for a meta bucket. Because meta_bucket_pages() chooses the smaller value among the bucket size and MAX_ORDER_NR_PAGES, it still works when MAX_ORDER is overridden by CONFIG_FORCE_MAX_ZONEORDER.
Following patches will use these helper routines to decide the maximum pages that can be allocated for different meta data buckets. If the bucket size is larger than meta_bucket_bytes(), the bcache registration can still succeed, just the space beyond meta_bucket_bytes() inside the bucket is wasted. Compared with failing bcache registration for large bucket sizes, wasting some space for meta data buckets is acceptable at this moment.
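A sketch of the three helpers as described above; treat the exact definitions as illustrative, the cap comes from MAX_ORDER_NR_PAGES:

	static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
	{
		/* the smaller of the raw bucket pages and the largest
		 * continuous allocation the buddy allocator permits */
		return min_t(unsigned int, bucket_pages(sb),
			     MAX_ORDER_NR_PAGES);
	}

	static inline unsigned int meta_bucket_bytes(struct cache_sb *sb)
	{
		return meta_bucket_pages(sb) << PAGE_SHIFT;
	}

	/* __GFP_COMP per the compound-page patch further down this log */
	#define alloc_meta_bucket_pages(gfp, sb)			\
		((void *) __get_free_pages(__GFP_COMP|__GFP_ZERO|(gfp),	\
					   ilog2(meta_bucket_pages(sb))))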
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
198efa35 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: move bucket related code into read_super_common()
Setting sb->first_bucket and checking sb->keys are indeed only for cache devices; it does not make sense to do them in read_super() for backing devices too.
This patch moves the related code piece into read_super_common(), which is explicitly for cache devices, and avoids the confusion.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
d721a43f |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: increase super block version for cache device and backing device
The newly added super block versions BCACHE_SB_VERSION_BDEV_WITH_FEATURES (5) and BCACHE_SB_VERSION_CDEV_WITH_FEATURES (6) are for the feature set bits.
Devices with a super block version equal to the new version will have three new members for the feature set bits in the on-disk super block,
	__le64 feature_compat;
	__le64 feature_incompat;
	__le64 feature_ro_compat;
They are used for future new features which may introduce on-disk format changes, and avoid unnecessary super block version increases.
The very basic features handling code skeleton is also initialized in this patch.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
117f636e |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: fix super block seq numbers comparison in register_cache_set()
In register_cache_set(), c is a pointer to struct cache_set, and ca is a pointer to struct cache. If ca->sb.seq > c->sb.seq, it means this registering cache has an up-to-date version and other members, so the in-memory version and other members should be updated to the newer values.
But the current implementation only lets a cache set have a single cache device, so the above assumption works well except for one special case. The exception is when a cache device is newly created and both ca->sb.seq and c->sb.seq are 0, because the super block has never been flushed out yet. At the location of the following if() check,
2156         if (ca->sb.seq > c->sb.seq) {
2157                 c->sb.version = ca->sb.version;
2158                 memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
2159                 c->sb.flags = ca->sb.flags;
2160                 c->sb.seq = ca->sb.seq;
2161                 pr_debug("set version = %llu\n", c->sb.version);
2162         }
c->sb.version is not initialized yet and has value 0. When ca->sb.seq is 0, the if() check fails (because both values are 0), and the cache set's version, set_uuid, flags and seq won't be updated.
The above problem is hidden in the current code, because the bucket size is compatible among the different super block versions. The next time the cache set runs, ca->sb.seq will be larger than 0 and the cache set super block version will be updated properly.
But if the large bucket feature is enabled, sb->bucket_size holds only the low 16 bits of the bucket size. For a power-of-2 value, when the actual bucket size exceeds the 16-bit width, sb->bucket_size will always be 0. Then read_super_common() will fail because the is_power_of_2(sb->bucket_size) check is false. This is how the long-hidden bug is triggered.
This patch modifies the if() check in the following way,
2156         if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
Then the cache set's version, set_uuid, flags and seq will always be updated correctly, including for a newly created cache device.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
a42d3c64 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: disassemble the big if() checks in bch_cache_set_alloc()
In bch_cache_set_alloc() there is a big if() check combining 11 conditions together. When this big if() statement fails, it is difficult to tell exactly which condition failed.
This patch disassembles this big if() check into 11 single if() checks, which makes debugging easier, as shown in the sketch below.
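An illustrative sketch of the pattern only; the real conditions are the bcache-specific allocations in bch_cache_set_alloc():

	/* before: one combined check gives no hint which step failed */
	if (!init_a(c) || !init_b(c) || !init_c(c))
		goto err;

	/* after: one check per step, so a failure pinpoints the step */
	if (!init_a(c))
		goto err;
	if (!init_b(c))
		goto err;
	if (!init_c(c))
		goto err;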
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
c557a5f7 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: add more accurate error information in read_super_common()
An improperly set bucket or block size will trigger an error in read_super_common(). For large bucket sizes, a more accurate error message for an invalid bucket or block size is necessary.
This patch disassembles the combined if() checks into multiple single if() checks, and provides a more accurate error message for each check failure condition.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
5b21403c |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: add read_super_common() to read major part of super block
Later patches will introduce feature set bits to the on-disk super block and increase the super block version. The current code in read_super(), which reads the common part of the super block for version BCACHE_SB_VERSION_CDEV and version BCACHE_SB_VERSION_CDEV_WITH_UUID, will be shared with the new version.
Therefore this patch moves the reusable part into read_super_common(); this preparation patch will make later patches simpler and let them focus only on the new feature set bits.
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
65f0f017 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: avoid nr_stripes overflow in bcache_device_init()
For some block devices with large capacity (e.g. 8TB) but a small io_opt size (e.g. 8 sectors), in bcache_device_init() the stripes number calculated by,
	DIV_ROUND_UP_ULL(sectors, d->stripe_size);
might overflow the unsigned int bcache_device->nr_stripes.
This patch uses a uint64_t variable to store the DIV_ROUND_UP_ULL() result and, after the value is checked to fit in the unsigned int range, sets it to bcache_device->nr_stripes. Then the overflow is avoided.
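A sketch of the guard described above; the error handling and the exact upper bound are illustrative:

	uint64_t n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);

	/* reject before narrowing to the unsigned int member */
	if (!n || n > UINT_MAX) {
		pr_err("nr_stripes too large or invalid: %llu\n",
		       (unsigned long long) n);
		return -ENOMEM;
	}
	d->nr_stripes = n;	/* now known to fit */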
Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
Signed-off-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
5fe48867 |
| 25-Jul-2020 |
Coly Li <colyli@suse.de> |
bcache: allocate meta data pages as compound pages
Some bcache meta data is allocated as multiple pages, and those pages are used as bio bv_page for I/Os to the cache device, for example cache_set->uuids, cache->disk_buckets, journal_write->data, and bset_tree->data.
For such meta data memory, all the allocated pages should be treated as a single memory block. Then the memory management and underlying I/O code can handle them more cleanly.
This patch adds the __GFP_COMP flag to all the locations allocating >0 order pages for the above mentioned meta data. Their pages are now treated as compound pages.
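A sketch of the change at one such location, the alloc_bucket_pages() macro; the set of touched locations is per the message, the exact form is illustrative:

	/* same order->pages allocation, now returned as one compound
	 * page so the whole block is managed as a single unit */
	#define alloc_bucket_pages(gfp, c)				\
		((void *) __get_free_pages(__GFP_COMP|__GFP_ZERO|(gfp),	\
					   ilog2(bucket_pages(c))))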
Signed-off-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
6acd193b |
| 25-Jul-2020 |
Jean Delvare <jdelvare@suse.de> |
bcache: Fix typo in Kconfig name
registraion -> registration
Fixes: 0c8d3fceade2 ("bcache: configure the asynchronous registertion to be experimental")
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Reviewed-by: Coly Li <colyli@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
Revision tags: v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51 |
|
#
21cf8661 |
| 01-Jul-2020 |
Christoph Hellwig <hch@lst.de> |
writeback: remove bdi->congested_fn
Except for pktdvd, the only places setting congested bits are file systems that allocate their own backing_dev_info structures. And pktdvd is a deprecated driver that isn't useful in stacked setups either. So remove the dead congested_fn stacking infrastructure.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
Acked-by: David Sterba <dsterba@suse.com>
[axboe: fixup unused variables in bcache/request.c]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
c62b37d9 |
| 01-Jul-2020 |
Christoph Hellwig <hch@lst.de> |
block: move ->make_request_fn to struct block_device_operations
The make_request_fn is a little weird in that it sits directly in struct request_queue instead of an operation vector. Replace it with a block_device_operations method called submit_bio (which describes much better what it does). Also remove the request_queue argument to it, as the queue can be derived pretty trivially from the bio.
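A sketch of the new shape from a driver's point of view; the driver names and body are illustrative, while the method placement and the queue-from-bio derivation follow the description above:

	static blk_qc_t example_submit_bio(struct bio *bio)
	{
		/* the queue is no longer an argument: derive it from the bio */
		struct request_queue *q = bio->bi_disk->queue;

		if (blk_queue_dying(q)) {
			bio_io_error(bio);
			return BLK_QC_T_NONE;
		}
		/* ... process or remap the bio ... */
		bio_endio(bio);		/* illustrative completion */
		return BLK_QC_T_NONE;
	}

	static const struct block_device_operations example_fops = {
		.owner		= THIS_MODULE,
		.submit_bio	= example_submit_bio,
	};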
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
987a0ef8 |
| 01-Jul-2020 |
Christoph Hellwig <hch@lst.de> |
bcache: stop setting ->queuedata
Nothing in bcache actually uses the ->queuedata field.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
Revision tags: v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47 |
|
#
4b25bbf5 |
| 14-Jun-2020 |
Coly Li <colyli@suse.de> |
bcache: pr_info() format clean up in bcache_device_init()
scripts/checkpatch.pl reports the following warning for the patch ("bcache: check and adjust logical block size for backing devices"),
	WARNING: quoted string split across lines
	#146: FILE: drivers/md/bcache/super.c:896:
	+	pr_info("%s: sb/logical block size (%u) greater than page size "
	+		"(%lu) falling back to device logical block size (%u)",
There are two things to fix up,
- The kernel message print should be in a single line.
- pr_info() won't automatically add a new line since v5.8, so a '\n' should be added.
This patch just does the above cleanup in bcache_device_init().
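A sketch of the cleaned-up call; the argument names are illustrative, the single-line format string and explicit '\n' are the point:

	pr_info("%s: sb/logical block size (%u) greater than page size (%lu) falling back to device logical block size (%u)\n",
		d->disk->disk_name, logical_block_size, PAGE_SIZE,
		backing_block_size);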
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
ee4a36f4 |
| 14-Jun-2020 |
Coly Li <colyli@suse.de> |
bcache: use delayed kworker for asynchronous devices registration
This patch changes the asynchronous registration kworker to a delayed kworker. There is a (very small) probability that queue_work() queues the async registration kworker on the same CPU, and then the process writing the sysfs interface to register the bcache device may not return immediately. queue_delayed_work() in this patch delays the kworker by 10 jiffies before inserting it into the run queue, which makes sure the registering process always returns to user space in time.
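A sketch of the switch, with the container and handler names treated as illustrative:

	/* before: INIT_WORK(...) + queue_work(system_wq, ...) could run
	 * on the same CPU before the sysfs write returns */
	INIT_DELAYED_WORK(&args->reg_work, register_device_async);

	/* give the registering process a 10-jiffy head start back to
	 * user space before the worker is inserted into the run queue */
	queue_delayed_work(system_wq, &args->reg_work, 10);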
Fixes: 9e23ccf8f0a22 ("bcache: asynchronous devices registration")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
dcacbc12 |
| 14-Jun-2020 |
Mauricio Faria de Oliveira <mfo@canonical.com> |
bcache: check and adjust logical block size for backing devices
It's possible for a block driver to set logical block size to a value greater than page size incorrectly; e.g. bcache takes the value from the superblock, set by the user w/ make-bcache.
This causes a BUG/NULL pointer dereference in the path:
__blkdev_get()
-> set_init_blocksize()               // set i_blkbits based on ...
   -> bdev_logical_block_size()
      -> queue_logical_block_size()   // ... this value
-> bdev_disk_changed()
   ...
   -> blkdev_readpage()
      -> block_read_full_page()
         -> create_page_buffers()     // size = 1 << i_blkbits
            -> create_empty_buffers() // give size/take pointer
               -> alloc_page_buffers()   // return NULL .. BUG!
Because alloc_page_buffers() is called with size > PAGE_SIZE, it initializes head = NULL, skips the loop, and returns head; then create_empty_buffers() gets (and uses) the NULL pointer.
This has been around longer than commit ad6bf88a6c19 ("block: fix an integer overflow in logical block size"); however, it increased the range of values that can trigger the issue.
Previously only 8k/16k/32k (on x86/4k page size) would trigger it, as greater values overflow the unsigned short to zero, and queue_logical_block_size() would then use the default of 512.
Now the range with unsigned int is much larger, and users with the 512k value, which happened to be zeroed previously and work fine, started to hit this issue -- as the zero is gone, and queue_logical_block_size() does return 512k (> PAGE_SIZE).
Fix this by checking the bcache device's logical block size and, if it's greater than page size, falling back to the backing/cached device's logical block size.
This doesn't affect cache devices, as those are still checked for block/page size in read_super(); only the backing/cached devices are not.
Apparently it's a regression from commit 2903381fce71 ("bcache: Take data offset from the bdev superblock."), moving the check into BCACHE_SB_VERSION_CDEV only. Now that we have superblocks of backing devices out there with this larger value, we cannot refuse to load them (i.e., have a similar check in _BDEV.)
Ideally perhaps bcache should use all values from the backing device (physical/logical/io_min block size)? But for now just fix the problematic case.
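A sketch of that check, assuming it lives in bcache_device_init() with illustrative variable names:

	if (q->limits.logical_block_size > PAGE_SIZE && cached_bdev) {
		/* the pr_info() notice shown in the dmesg output below */
		pr_info("%s: sb/logical block size (%u) greater than page size (%lu) falling back to device logical block size (%u)\n",
			d->disk->disk_name, q->limits.logical_block_size,
			PAGE_SIZE, bdev_logical_block_size(cached_bdev));

		/* cache devices are validated in read_super(); only
		 * backing devices take this fallback */
		blk_queue_logical_block_size(q,
				bdev_logical_block_size(cached_bdev));
	}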
Test-case:
  # IMG=/root/disk.img
  # dd if=/dev/zero of=$IMG bs=1 count=0 seek=1G
  # DEV=$(losetup --find --show $IMG)
  # make-bcache --bdev $DEV --block 8k
    < see dmesg >
Before:
  # uname -r
  5.7.0-rc7
[   55.944046] BUG: kernel NULL pointer dereference, address: 0000000000000000
...
[   55.949742] CPU: 3 PID: 610 Comm: bcache-register Not tainted 5.7.0-rc7 #4
...
[   55.952281] RIP: 0010:create_empty_buffers+0x1a/0x100
...
[   55.966434] Call Trace:
[   55.967021]  create_page_buffers+0x48/0x50
[   55.967834]  block_read_full_page+0x49/0x380
[   55.972181]  do_read_cache_page+0x494/0x610
[   55.974780]  read_part_sector+0x2d/0xaa
[   55.975558]  read_lba+0x10e/0x1e0
[   55.977904]  efi_partition+0x120/0x5a6
[   55.980227]  blk_add_partitions+0x161/0x390
[   55.982177]  bdev_disk_changed+0x61/0xd0
[   55.982961]  __blkdev_get+0x350/0x490
[   55.983715]  __device_add_disk+0x318/0x480
[   55.984539]  bch_cached_dev_run+0xc5/0x270
[   55.986010]  register_bcache.cold+0x122/0x179
[   55.987628]  kernfs_fop_write+0xbc/0x1a0
[   55.988416]  vfs_write+0xb1/0x1a0
[   55.989134]  ksys_write+0x5a/0xd0
[   55.989825]  do_syscall_64+0x43/0x140
[   55.990563]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   55.991519] RIP: 0033:0x7f7d60ba3154
...
After:
  # uname -r
  5.7.0.bcachelbspgsz
[   31.672460] bcache: bcache_device_init() bcache0: sb/logical block size (8192) greater than page size (4096) falling back to device logical block size (512)
[   31.675133] bcache: register_bdev() registered backing device loop0
  # grep ^ /sys/block/bcache0/queue/*_block_size
  /sys/block/bcache0/queue/logical_block_size:512
  /sys/block/bcache0/queue/physical_block_size:8192
Reported-by: Ryan Finnie <ryan@finnie.org>
Reported-by: Sebastian Marsching <sebastian@marsching.com>
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
Revision tags: v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43 |
|
#
0c8d3fce |
| 26-May-2020 |
Coly Li <colyli@suse.de> |
bcache: configure the asynchronous registration to be experimental
In order to avoid the experimental async registration interface being treated as a new kernel ABI for common users, this patch makes it an experimental kernel config option, BCACHE_ASYNC_REGISTRAION.
This interface is for extremely large cached data situations, to make sure the bcache device can always be created without the udev timeout issue. For normal users, async and sync registration make no difference.
In future when we decide to use the asynchronous registration as default behavior, this experimental interface may be removed.
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
9e23ccf8 |
| 26-May-2020 |
Coly Li <colyli@suse.de> |
bcache: asynchronous devices registration
When there is a lot of data cached on the cache device, the bcache internal btree can take a very long time to validate during the backing device and cache device registration. In my test, it may take 55+ minutes to check all the internal btree nodes.
The problem is that the registration is invoked by udev rules and udevd has a 180-second timeout by default. If the btree node checking time is longer than the udevd timeout, the registering process will be killed by udevd with SIGKILL. If the registering process has a pending signal, creating a kthread for bcache will fail and the device registration will fail. The result is that, for a bcache device which caches a lot of data on the cache device, the bcache device node like /dev/bcache<N> won't always be created, due to the very long btree checking time.
A solution to avoid the udevd 180-second timeout is to register devices in an asynchronous way. That is, after writing the cache or backing device path into /sys/fs/bcache/register_async, the kernel code will create a kworker and move all the btree node checking (for a cache device) or dirty data counting (for a cached device) into the kworker context. The kworker is then scheduled on system_wq and the registration code just returns to the user space udev rule task. With this asynchronous approach, the udev task for the bcache rule completes in seconds; no matter how much time is spent in the kworker context, it won't be killed by udevd for a timeout.
After all the checking and counting are done asynchronously in the kworker, the bcache device will eventually be created successfully.
This patch does the above change and adds a register sysfs file /sys/fs/bcache/register_async. Writing the registering device path into this sysfs file will do the asynchronous registration; a sketch of the hand-off follows below.
The register_async interface is for very rare conditions and won't be used by common users. In future I plan to make the asynchronous registration the default behavior, which depends on feedback for this patch.
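A sketch of the hand-off, with the container struct and worker names treated as illustrative:

	struct async_reg_args {
		struct work_struct reg_work;
		char *path;		/* device path from the sysfs write */
		struct cache_sb *sb;	/* super block read so far */
	};

	/* in the register_async sysfs write handler: queue the slow
	 * btree checking / dirty data counting and return immediately */
	INIT_WORK(&args->reg_work, register_device_worker);
	queue_work(system_wq, &args->reg_work);
	/* udev's task is back in user space in seconds; the kworker
	 * finishes the registration and creates /dev/bcache<N> later */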
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
86da9f73 |
| 26-May-2020 |
Coly Li <colyli@suse.de> |
bcache: fix refcount underflow in bcache_device_free()
The problematic code piece in bcache_device_free() is,
785 static void bcache_device_free(struct bcache_device *d)
786 {
787         struct gendisk *disk = d->disk;
    [snipped]
799         if (disk) {
800                 if (disk->flags & GENHD_FL_UP)
801                         del_gendisk(disk);
802
803                 if (disk->queue)
804                         blk_cleanup_queue(disk->queue);
805
806                 ida_simple_remove(&bcache_device_idx,
807                                   first_minor_to_idx(disk->first_minor));
808                 put_disk(disk);
809         }
    [snipped]
816 }
At line 808, put_disk(disk) may underflow the kobject refcount of 'disk'.
Here is how to reproduce the issue,
- Attach the backing device to a cache device and do random writes to make the cache dirty.
- Stop the bcache device while the cache device has dirty data for the backing device.
- Only register the backing device back, NOT the cache device.
- The bcache device node /dev/bcache0 won't show up, because the backing device waits for the cache device to show up for the missing dirty data.
- Now echo 1 into /sys/fs/bcache/pendings_cleanup, to stop the pending backing device.
- After the pending backing device is stopped, use 'dmesg' to check the kernel message: a use-after-free warning from KASAN reports that the refcount of the kobject linked to the 'disk' underflowed.
The refcount dropped at line 808 in the above code piece is taken by add_disk(d->disk) in bch_cached_dev_run(). But in the above condition the cache device is not registered, bch_cached_dev_run() has no chance to be called, and the refcount is never taken. Calling put_disk() on a gendisk kobject whose refcount was never taken triggers the underflow warning.
This patch checks whether GENHD_FL_UP is set in disk->flags; if it is not set then the bcache device was never added, so put_disk() is not called and the underflow issue is avoided.
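A sketch of the fixed cleanup path, close to the quoted code above (treat details as illustrative):

	if (disk) {
		/* GENHD_FL_UP tells us whether add_disk() ever ran */
		bool disk_added = (disk->flags & GENHD_FL_UP) != 0;

		if (disk_added)
			del_gendisk(disk);

		if (disk->queue)
			blk_cleanup_queue(disk->queue);

		ida_simple_remove(&bcache_device_idx,
				  first_minor_to_idx(disk->first_minor));
		/* only drop the reference add_disk() took */
		if (disk_added)
			put_disk(disk);
	}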
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
#
46f5aa88 |
| 26-May-2020 |
Joe Perches <joe@perches.com> |
bcache: Convert pr_<level> uses to a more typical style
Remove the trailing newline from the define of pr_fmt and add newlines to the uses.
Miscellanea:
o Convert bch_bkey_dump from multiple uses of pr_err to pr_cont, as the earlier conversion was done inappropriately, causing multiple lines to be emitted where only a single output line was desired
o Use the vsprintf extension %pV in bch_cache_set_error to avoid multiple-line output where only a single line of output was desired
o Coalesce formats
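A sketch of the %pV pattern in bch_cache_set_error (the surrounding code is illustrative), which lets one pr_err emit the caller-supplied format atomically as a single line:

	struct va_format vaf;
	va_list args;

	va_start(args, fmt);
	vaf.fmt = fmt;
	vaf.va = &args;
	/* %pV expands the caller's format + args inside this one call */
	pr_err("error on %pU: %pV, disabling caching\n",
	       c->sb.set_uuid, &vaf);
	va_end(args);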
Fixes: 6ae63e3501c4 ("bcache: replace printk() by pr_*() routines")
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|