
Searched hist:"598162d0" (Results 1 – 2 of 2) sorted by relevance

/openbmc/linux/fs/erofs/
decompressor.c 598162d0 Tue Apr 06 23:39:26 CDT 2021 Gao Xiang <hsiangkao@redhat.com> erofs: support decompress big pcluster for lz4 backend

Prior to big pcluster, there was only one compressed page, so it was
easy to map. However, when big pcluster is enabled, more work
needs to be done to handle multiple compressed pages. In detail,

- (maptype 0) if there is only one compressed page and no inplace
I/O copy is needed, just map it directly as before;

- (maptype 1) if there are multiple compressed pages and no inplace
I/O copy is needed, vmap such compressed pages instead;

- (maptype 2) if inplace I/O needs to be copied, use per-CPU
buffers for decompression.

Another thing is how to detect whether inplace decompression is
feasible (it's still quite easy for non-big pclusters): apart from the
inplace margin calculation, the inplace I/O page reuse order also
needs to be considered for each compressed page. Currently, if the
compressed page is the xth page, it shouldn't be reused as any of the
output pages [0 ... nrpages_out - nrpages_in + x]; otherwise a full
copy will be triggered.

Although there are some extra optimization ideas for this, I'd like
to make big pcluster work correctly first; it can obviously be
optimized further later, since this has nothing to do with the on-disk
format at all.

Link: https://lore.kernel.org/r/20210407043927.10623-10-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
internal.h 598162d0 Tue Apr 06 23:39:26 CDT 2021 Gao Xiang <hsiangkao@redhat.com> erofs: support decompress big pcluster for lz4 backend