History log of /openbmc/linux/drivers/mtd/ubi/wl.c (Results 1 – 25 of 283)
Revision  Date  Author  Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16
# f773f0a3 03-Mar-2023 ZhaoLong Wang <wangzhaolong1@huawei.com>

ubi: Fix deadlock caused by recursively holding work_sem

During processing by the bgt, if sync_erase() returns -EBUSY or some
other error code in __erase_worker(), schedule_erase() is called again,
which takes down_read(ubi->work_sem) a second time and may then block
on down_write(ubi->work_sem) in ubi_update_fastmap(), causing a
deadlock.

ubi bgt                                  other task
do_work
  down_read(&ubi->work_sem)              ubi_update_fastmap
    erase_worker                           down_write(&ubi->work_sem)
      __erase_worker                         # blocked by down_read
        schedule_erase
          schedule_ubi_work
            down_read(&ubi->work_sem)

Fix this by passing 'true' for the @nested parameter of schedule_erase()
to avoid recursively acquiring down_read(&ubi->work_sem).

Also fix the incorrect comment about the @nested parameter of
schedule_erase(): when down_write(ubi->work_sem) is held, @nested must
also be true.
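
For illustration, a minimal sketch of the @nested idea (simplified from
the description above; the exact wl.c helpers are assumptions): when the
caller already holds ubi->work_sem for reading, the work is queued
without taking the semaphore again.

/* Sketch only -- not the exact wl.c code. */
static void schedule_erase_sketch(struct ubi_device *ubi,
				  struct ubi_work *wrk, bool nested)
{
	if (nested) {
		/*
		 * Caller (e.g. the bgt inside do_work()) already holds
		 * ubi->work_sem for reading; taking it again here could
		 * deadlock against down_write() in ubi_update_fastmap().
		 */
		__schedule_ubi_work(ubi, wrk);
	} else {
		down_read(&ubi->work_sem);
		__schedule_ubi_work(ubi, wrk);
		up_read(&ubi->work_sem);
	}
}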

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217093
Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: ZhaoLong Wang <wangzhaolong1@huawei.com>
Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6
# 0b3bc49c 13-Jan-2023 Randy Dunlap <rdunlap@infradead.org>

ubi: use correct names in function kernel-doc comments

Fix kernel-doc warnings by using the correct function names in
their kernel-doc notation:

drivers/mtd/ubi/eba.c:72: warning: expecting prototype for next_sqnum(). Prototype was for ubi_next_sqnum() instead
drivers/mtd/ubi/wl.c:176: warning: expecting prototype for wl_tree_destroy(). Prototype was for wl_entry_destroy() instead
drivers/mtd/ubi/misc.c:24: warning: expecting prototype for calc_data_len(). Prototype was for ubi_calc_data_len() instead
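
For illustration, a hedged sketch of the kind of fix involved (the exact
comment text in wl.c is an assumption): the name in the kernel-doc block
must match the function that follows it.

/*
 * Sketch only. scripts/kernel-doc compares the name after the "*" with
 * the prototype below; a mismatch produces the "expecting prototype"
 * warnings quoted above.
 */
/**
 * wl_entry_destroy - destroy a wear-leveling entry.
 * @ubi: UBI device description object
 * @e: the wear-leveling entry to free
 */
static void wl_entry_destroy(struct ubi_device *ubi, struct ubi_wl_entry *e);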

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Vignesh Raghavendra <vigneshr@ti.com>
Cc: linux-mtd@lists.infradead.org
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47
# 4d57a733 13-Jun-2022 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: ubi_wl_put_peb: Fix infinite loop when wear-leveling work failed

The following process will trigger an infinite loop in ubi_wl_put_peb():

ubifs_bgt                            ubi_bgt
ubifs_leb_unmap
  ubi_leb_unmap
    ubi_eba_unmap_leb
      ubi_wl_put_peb                 wear_leveling_worker
                                       e1 = rb_entry(rb_first(&ubi->used))
                                       e2 = get_peb_for_wl(ubi)
                                       ubi_io_read_vid_hdr // return err (flash fault)
                                       out_error:
                                         ubi->move_from = ubi->move_to = NULL
                                         wl_entry_destroy(ubi, e1)
                                           ubi->lookuptbl[e->pnum] = NULL
      retry:
        e = ubi->lookuptbl[pnum];    // return NULL
        if (e == ubi->move_from) {   // NULL == NULL gets true
          goto retry;                // infinite loop !!!

$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM COMMAND
7676 root 20 0 0 0 0 R 100.0 0.0 ubifs_bgt0_0

Fix it by:
1) Letting ubi_wl_put_peb() return directly if the wear-leveling entry
   has been removed from 'ubi->lookuptbl'.
2) Using 'ubi->wl_lock' to protect wl entry deletion, preventing a
   use-after-free of the wl entry in ubi_wl_put_peb().
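
A minimal sketch of the first part of the fix, using the lookuptbl and
move_from names quoted above (the exact control flow of
ubi_wl_put_peb() is an assumption): bail out when the entry is already
gone instead of retrying forever.

	/* Sketch only. */
retry:
	spin_lock(&ubi->wl_lock);
	e = ubi->lookuptbl[pnum];
	if (!e) {
		/*
		 * The wl entry was already destroyed by a failed
		 * wear-leveling job; there is nothing left to put back,
		 * so return instead of spinning on
		 * "e == ubi->move_from" (NULL == NULL).
		 */
		spin_unlock(&ubi->wl_lock);
		return 0;
	}
	if (e == ubi->move_from) {
		/* PEB is still being moved, wait and try again. */
		spin_unlock(&ubi->wl_lock);
		cond_resched();
		goto retry;
	}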

Fetch a reproducer in [Link].

Fixes: 43f9b25a9cdd7b1 ("UBI: bugfix: protect from volume removal")
Fixes: ee59ba8b064f692 ("UBI: Fix stale pointers in ubi->lookuptbl")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216111
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# a240bc5c 30-Jul-2022 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()

Wear-leveling entry could be freed in error path, which may be accessed
again in eraseblk_count_seq_show(), for example:

__erase_worker                            eraseblk_count_seq_show
                                            wl = ubi->lookuptbl[*block_number]
                                            if (wl)
  wl_entry_destroy
    ubi->lookuptbl[e->pnum] = NULL
    kmem_cache_free(ubi_wl_entry_slab, e)
                                              erase_count = wl->ec // UAF!

Updating and accessing wear-leveling entries in ubi->lookuptbl should be
protected by ubi->wl_lock. Fix it by taking ubi->wl_lock to serialize wl
entry access between wl_entry_destroy() and eraseblk_count_seq_show().
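
A hedged sketch of the reader side (the debugfs file is quoted above,
but the exact code is an assumption): copy what is needed out of the
entry while holding ubi->wl_lock, so a concurrent wl_entry_destroy()
cannot free it underneath the reader.

	/* Sketch only -- eraseblk_count_seq_show()-style access. */
	spin_lock(&ubi->wl_lock);
	wl = ubi->lookuptbl[*block_number];
	if (wl)
		erase_count = wl->ec; /* safe: destroy path also takes wl_lock */
	spin_unlock(&ubi->wl_lock);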

Fetch a reproducer in [Link].

Link: https://bugzilla.kernel.org/show_bug.cgi?id=216305
Fixes: 7bccd12d27b7e3 ("ubi: Add debugfs file for tracking PEB state")
Fixes: 801c135ce73d5d ("UBI: Unsorted Block Images")
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# b58b2528 03-Jul-2022 Zhang Jiaming <jiaming@nfschina.com>

ubi: fastmap: Fix typo in comments

There is a typo ('dont't') in the comments.
Fix it.

Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# ec1f97f5 10-Aug-2022 Jilin Yuan <yuanjilin@cdjrlc.com>

ubi: Fix repeated words in comments

Delete the redundant word 'a'.
Delete the redundant word 'the'.

Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39
# 14072ee3 10-May-2022 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: fastmap: Check wl_pool for free peb before wear leveling

UBI fetches a free PEB from wl_pool during wear leveling, so UBI should
check wl_pool's empty status before wear leveling. Otherwise, UBI will
miss wear-leveling chances when free PEBs run out.
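
A minimal sketch of such a check (field names follow UBI's fastmap pool
structure; the exact condition is an assumption):

	/* Sketch only -- is a free PEB available for wear leveling? */
	struct ubi_fm_pool *pool = &ubi->fm_wl_pool;

	if (pool->used == pool->size)
		return 0; /* wl_pool is empty, skip wear leveling for now */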

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# d09e9a2b 10-May-2022 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty

There are at least 6 PEBs reserved on a UBI device:
1. EBA_RESERVED_PEBS[1]
2. WL_RESERVED_PEBS[1]
3. UBI_LAYOUT_VOLUME_EBS[2]
4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS +
WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1])
free PEBs. Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs")
and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering
wear level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1]
- FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB to fill the pool and wl_pool; after
filling the pool, wl_pool is always empty. So UBI can get stuck in an
infinite loop:

ubi_thread                          system_wq

wear_leveling_worker          <---- scheduled again (loop)
  get_peb_for_wl
    // fm_wl_pool, used = size = 0
    schedule_work(&ubi->fm_work)

                                    update_fastmap_work_fn
                                      ubi_update_fastmap
                                        ubi_refill_pools
                                        // ubi->free_count - ubi->beb_rsvd_pebs < 5
                                        // wl_pool is not filled with any PEBs
                                        schedule_erase(old_fm_anchor)
                                        ubi_ensure_anchor_pebs
                                          __schedule_ubi_work(wear_leveling_worker)

                                        __erase_worker
                                          ensure_wear_leveling
                                            __schedule_ubi_work(wear_leveling_worker)
                                              // schedules wear_leveling_worker again -> loop

This causes high CPU usage in ubi_bgt:
top - 12:10:42 up 5 min, 2 users, load average: 1.76, 0.68, 0.27
Tasks: 123 total, 3 running, 54 sleeping, 0 stopped, 0 zombie

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1589 root 20 0 0 0 0 R 45.0 0.0 0:38.86 ubi_bgt0d
319 root 20 0 0 0 0 I 15.2 0.0 0:15.29 kworker/0:3-eve
371 root 20 0 0 0 0 I 14.9 0.0 0:12.85 kworker/3:3-eve
20 root 20 0 0 0 0 I 11.3 0.0 0:05.33 kworker/1:0-eve
202 root 20 0 0 0 0 I 11.3 0.0 0:04.93 kworker/2:3-eve

In commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering
wear level rules"), there are three key changes:
1) Choose the fastmap anchor when the most free PEBs are available.
2) Enable anchor move within the anchor area again as it is useful
for distributing wear.
3) Import a candidate fm anchor and check this PEB's erase count during
wear leveling. If the wear leveling limit is exceeded, use the used
anchor area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; the fm_anchor PEB's erase count can
be checked during wear leveling instead. Fix it by:
1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
2) Preferentially filling one free PEB into fm_wl_pool when
   ubi->free_count > ubi->beb_rsvd_pebs, and only then trying to reserve
   enough free PEBs for fastmap non-anchor PEBs.
Then there is at least 1 PEB in the pool and 1 PEB in wl_pool after
calling ubi_refill_pools() once all erase work is done.
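
A hedged sketch of the refill priority described in point 2) (simplified;
wl_get_wle() and the exact ubi_refill_pools() flow are assumptions):

	/* Sketch only -- give fm_wl_pool one PEB first, if we can spare it. */
	struct ubi_fm_pool *wl_pool = &ubi->fm_wl_pool;
	struct ubi_fm_pool *pool = &ubi->fm_pool;
	struct ubi_wl_entry *e;

	spin_lock(&ubi->wl_lock);

	if (ubi->free_count > ubi->beb_rsvd_pebs &&
	    wl_pool->size < wl_pool->max_size) {
		e = wl_get_wle(ubi);		/* assumed helper */
		if (e)
			wl_pool->pebs[wl_pool->size++] = e->pnum;
	}

	/* ... then fill the normal pool with whatever is left. */
	while (pool->size < pool->max_size &&
	       ubi->free_count > ubi->beb_rsvd_pebs) {
		e = wl_get_wle(ubi);
		if (!e)
			break;
		pool->pebs[pool->size++] = e->pnum;
	}

	spin_unlock(&ubi->wl_lock);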

Fetch a reproducer in [Link].

Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# f61b9c87 10-May-2022 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty

[ Upstream commit d09e9a2bddba6c48e0fddb16c4383172ac593251 ]

There are at least 6 PEBs reserved on a UBI device:
1. EBA_RESERVED_PEBS[1]
2. WL_RESERVED_PEBS[1]
3. UBI_LAYOUT_VOLUME_EBS[2]
4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS +
WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1])
free PEBs. Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs")
and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering
wear level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1]
- FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB to fill the pool and wl_pool; after
filling the pool, wl_pool is always empty. So UBI can get stuck in an
infinite loop:

ubi_thread                          system_wq

wear_leveling_worker          <---- scheduled again (loop)
  get_peb_for_wl
    // fm_wl_pool, used = size = 0
    schedule_work(&ubi->fm_work)

                                    update_fastmap_work_fn
                                      ubi_update_fastmap
                                        ubi_refill_pools
                                        // ubi->free_count - ubi->beb_rsvd_pebs < 5
                                        // wl_pool is not filled with any PEBs
                                        schedule_erase(old_fm_anchor)
                                        ubi_ensure_anchor_pebs
                                          __schedule_ubi_work(wear_leveling_worker)

                                        __erase_worker
                                          ensure_wear_leveling
                                            __schedule_ubi_work(wear_leveling_worker)
                                              // schedules wear_leveling_worker again -> loop

This causes high CPU usage in ubi_bgt:
top - 12:10:42 up 5 min, 2 users, load average: 1.76, 0.68, 0.27
Tasks: 123 total, 3 running, 54 sleeping, 0 stopped, 0 zombie

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1589 root 20 0 0 0 0 R 45.0 0.0 0:38.86 ubi_bgt0d
319 root 20 0 0 0 0 I 15.2 0.0 0:15.29 kworker/0:3-eve
371 root 20 0 0 0 0 I 14.9 0.0 0:12.85 kworker/3:3-eve
20 root 20 0 0 0 0 I 11.3 0.0 0:05.33 kworker/1:0-eve
202 root 20 0 0 0 0 I 11.3 0.0 0:04.93 kworker/2:3-eve

In commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering
wear level rules"), there are three key changes:
1) Choose the fastmap anchor when the most free PEBs are available.
2) Enable anchor move within the anchor area again as it is useful
for distributing wear.
3) Import a candidate fm anchor and check this PEB's erase count during
wear leveling. If the wear leveling limit is exceeded, use the used
anchor area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; the fm_anchor PEB's erase count can
be checked during wear leveling instead. Fix it by:
1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
2) Preferentially filling one free PEB into fm_wl_pool when
   ubi->free_count > ubi->beb_rsvd_pebs, and only then trying to reserve
   enough free PEBs for fastmap non-anchor PEBs.
Then there is at least 1 PEB in the pool and 1 PEB in wl_pool after
calling ubi_refill_pools() once all erase work is done.

Fetch a reproducer in [Link].

Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Sasha Levin <sashal@kernel.org>


Revision tags: v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10
# ab4e4de9 09-Nov-2020 Lee Jones <lee.jones@linaro.org>

mtd: ubi: wl: Fix a couple of kernel-doc issues

Fixes the following W=1 kernel build warning(s):

drivers/mtd/ubi/wl.c:584: warning: Function parameter or member 'nested' not described in 'schedule_erase'
drivers/mtd/ubi/wl.c:1075: warning: Excess function parameter 'shutdown' description in '__erase_worker'

Cc: Richard Weinberger <richard@nod.at>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Vignesh Raghavendra <vigneshr@ti.com>
Cc: linux-mtd@lists.infradead.org
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20201109182206.3037326-13-lee.jones@linaro.org



Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44
# d005f8c6 01-Jun-2020 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: check kthread_should_stop() after the setting of task state

A detach hang is possible when a race occurs between the detach process
and the ubi background thread. The following sequence outlines the race:

ubi thread: if (list_empty(&ubi->works)...

ubi detach: set_bit(KTHREAD_SHOULD_STOP, &kthread->flags)
              => by kthread_stop()
            wake_up_process()
              => ubi thread is still running, so 0 is returned

ubi thread: set_current_state(TASK_INTERRUPTIBLE)
            schedule()
              => ubi thread will never be scheduled again

ubi detach: wait_for_completion()
              => hung task!

To fix that, we need to check kthread_should_stop() after we set the
task state, so the ubi thread will either see the stop bit and exit or
the task state is reset to runnable such that it isn't scheduled out
indefinitely.
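
A hedged sketch of the resulting ordering in the background-thread loop
(the surrounding code is simplified; only the ordering matters):

	/* Sketch only. */
	for (;;) {
		spin_lock(&ubi->wl_lock);
		if (list_empty(&ubi->works) || !ubi->thread_enabled) {
			set_current_state(TASK_INTERRUPTIBLE);
			spin_unlock(&ubi->wl_lock);

			/*
			 * Re-check after setting the task state: if
			 * kthread_stop() set the stop bit between the
			 * list_empty() check and here, we must not sleep.
			 */
			if (kthread_should_stop()) {
				set_current_state(TASK_RUNNING);
				break;
			}
			schedule();
			continue;
		}
		spin_unlock(&ubi->wl_lock);
		/* ... process one pending work item ... */
	}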

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Cc: <stable@vger.kernel.org>
Fixes: 801c135ce73d5df1ca ("UBI: Unsorted Block Images")
Reported-by: syzbot+853639d0cb16c31c7a14@syzkaller.appspotmail.com
Signed-off-by: Richard Weinberger <richard@nod.at>



# 3b185255 07-Jul-2020 Zhihao Cheng <chengzhihao1@huawei.com>

ubi: fastmap: Don't produce the initial next anchor PEB when fastmap is disabled

The following process triggers a memleak caused by forgetting to release the
initial next anchor PEB (CONFIG_MTD_UBI_FASTMAP is disabled):
1. attach -> __erase_worker -> produce the initial next anchor PEB
2. detach -> ubi_fastmap_close (Do nothing, it should have released the
initial next anchor PEB)

Don't produce the initial next anchor PEB in __erase_worker() when fastmap
is disabled.
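
A hedged sketch of the guard (the exact condition used in
__erase_worker() is an assumption; fm_disabled and UBI_FM_MAX_START are
existing UBI names):

	/* Sketch only -- only reserve the next anchor PEB when fastmap
	 * is actually in use. */
	if (!ubi->fm_disabled && !ubi->fm_next_anchor &&
	    e->pnum < UBI_FM_MAX_START) {
		ubi->fm_next_anchor = e;
		return 0;
	}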

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Suggested-by: Sascha Hauer <s.hauer@pengutronix.de>
Fixes: f9c34bb529975fe ("ubi: Fix producing anchor PEBs")
Reported-by: syzbot+d9aab50b1154e3d163f5@syzkaller.appspotmail.com
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12
# 4b68bf9a 13-Jan-2020 Arne Edholm <arne.edholm@axis.com>

ubi: Select fastmap anchor PEBs considering wear level rules

There is a risk that the fastmap anchor PEB alternates between just two
PEBs: the current anchor and the previous anchor that was just deleted.
As the fastmap pools get the first take on free PEBs, the pools may
leave no free PEBs to be selected as the new anchor, resulting in the
two PEBs alternating. If the anchor PEBs get a high erase count they
will not be used by the pools but remain in ubi->free, further
increasing the likelihood that they will be used as anchors.

Getting stuck using only a couple of PEBs continuously will result in an
uneven wear, eventually leading to failure.

To fix this:

- Choose the fastmap anchor when the most free PEBs are available. This is
during rebuilding of the fastmap pools, after the unused pool PEBs are
added to ubi->free but before the pools are populated again from the
free PEBs. Also reserve an additional second best PEB as a candidate
for the next time the fast map anchor is updated. If a better PEB is
found the next time the fast map anchor is updated, the candidate is
made available for building the pools.

- Enable anchor move within the anchor area again as it is useful for
distributing wear.

- The anchor candidate for the next fastmap update is the most suited free
PEB. Check this PEB's erase count during wear leveling. If the wear
leveling limit is exceeded, the PEB is considered unsuitable for now. As
all other non used anchor area PEBs should be even worse, free up the
used anchor area PEB with the lowest erase count.

Signed-off-by: Arne Edholm <arne.edholm@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# 294a8dbe 10-Feb-2020 Hou Tao <houtao1@huawei.com>

ubi: fastmap: Only produce the initial anchor PEB when fastmap is used

Don't produce the initial anchor PEB when the ubi device is read-only
or fastmap is disabled, otherwise the resulting PEB will be unusable
by any volume.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14
# 770aa73d 27-Nov-2019 YueHaibing <yuehaibing@huawei.com>

ubi: wl: Remove set but not used variable 'prev_e'

Fixes gcc '-Wunused-but-set-variable' warning:

drivers/mtd/ubi/wl.c: In function 'find_wl_entry':
drivers/mtd/ubi/wl.c:322:27: warning:
variable 'prev_e' set but not used [-Wunused-but-set-variable]

It's not used any more now, so remove it.

Fixes: f9c34bb52997 ("ubi: Fix producing anchor PEBs")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9
# f9c34bb5 05-Nov-2019 Sascha Hauer <s.hauer@pengutronix.de>

ubi: Fix producing anchor PEBs

When a new fastmap is about to be written, UBI must make sure it has a
free block available for a fastmap anchor. For this, ubi_update_fastmap()
calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deabbc
("ubi: Fix races around ubi_refill_pools()"): with that commit the
wear-leveling code is blocked and can no longer produce free PEBs. UBI
then more often than not falls back to writing the new fastmap anchor to
the same block it was already on, which means the same erase block gets
erased during each fastmap write and wears out quite fast.

As the locking prevents us from producing the anchor PEB when we
actually need it, this patch changes the strategy for creating the
anchor PEB. We no longer create it on demand right before we want to
write a fastmap, but instead we create an anchor PEB right after we have
written a fastmap. This gives us enough time to produce a new anchor PEB
before it is needed. To make sure we have an anchor PEB for the very
first fastmap write we call ubi_ensure_anchor_pebs() during
initialisation as well.

Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.3.8, v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3
# 8596813a 25-Jul-2019 Richard Weinberger <richard@nod.at>

ubi: Don't do anchor move within fastmap area

To make sure that Fastmap can use a PEB within the first 64
PEBs, UBI moves blocks away from that area.
It uses regular wear-leveling for that job.

An anchor move can be triggered if no PEB is free in this area
or because of anticipation. In the latter case it can happen
that UBI decides to move a block but finds a free PEB
within the same area.
This case is in vain and only increases erase counters.

Catch this case and cancel wear-leveling if this happens.
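
A minimal sketch of the cancellation (the exact test in
wear_leveling_worker() is an assumption; UBI_FM_MAX_START bounds the
fastmap anchor area):

	/* Sketch only -- cancel an anchor move that stays inside the
	 * fastmap area: it gains nothing and just burns an erase cycle. */
	if (e1->pnum < UBI_FM_MAX_START && e2->pnum < UBI_FM_MAX_START)
		goto out_cancel;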

Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8, v5.1.7, v5.1.6
# 1a59d1b8 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not write to the free software foundation inc
59 temple place suite 330 boston ma 02111 1307 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1334 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>



Revision tags: v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0
# 04d37e5a 02-Mar-2019 Gustavo A. R. Silva <gustavo@embeddedor.com>

ubi: wl: Fix uninitialized variable

There is a potential execution path in which variable *err*
is compared against UBI_IO_BITFLIPS without being properly
initialized previously.

Fix this by initializing variable *err* to 0.

Addresses-Coverity-ID: 1477298 ("Uninitialized scalar variable")
Fixes: 663586c0a892 ("ubi: Expose the bitrot interface")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



# 5578e48e 27-Feb-2019 Dan Carpenter <dan.carpenter@oracle.com>

ubi: wl: Silence uninitialized variable warning

This condition needs to be flipped around because "err" is uninitialized
when "force" is set. The Smatch static analysis tool complains and
UBSan will also complain at runtime.
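
A hedged sketch of the kind of reordering meant here (the surrounding
code is an assumption): evaluate the operand that is always valid first,
so the possibly-uninitialized one is short-circuited away.

	/* Before (sketch): "err" may be read even though it was never
	 * assigned on the "force" path. */
	if (err == UBI_IO_BITFLIPS || force)
		scrub = true;

	/* After (sketch): "err" is only evaluated when "force" is not set,
	 * i.e. on paths where it has been assigned. */
	if (force || err == UBI_IO_BITFLIPS)
		scrub = true;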

Fixes: 663586c0a892 ("ubi: Expose the bitrot interface")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10, v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3, v4.18.19, v4.19.2, v4.18.18
# 663586c0 07-Nov-2018 Richard Weinberger <richard@nod.at>

ubi: Expose the bitrot interface

Using UBI_IOCRPEB and UBI_IOCSPEB userspace can force
reading and scrubbing of PEBs.

In case of bitflips UBI will automatically take action
and move data to a different PEB.
This interface allows a daemon to foster your NAND.
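
A hedged userspace sketch of how such a daemon might drive this
interface (the device path, open flags and error handling are
assumptions; the ioctls take a PEB number):

/* Sketch only. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/ubi-user.h>	/* UBI_IOCRPEB, UBI_IOCSPEB */

static int check_peb(const char *ubi_dev, int pnum)
{
	int fd, ret;

	fd = open(ubi_dev, O_RDONLY);	/* e.g. "/dev/ubi0" (assumed path) */
	if (fd < 0)
		return -1;

	/* Force a read of PEB "pnum"; on bitflips UBI moves the data to a
	 * different PEB by itself. UBI_IOCSPEB would force scrubbing. */
	ret = ioctl(fd, UBI_IOCRPEB, &pnum);

	close(fd);
	return ret;
}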

Signed-off-by: Richard Weinberger <richard@nod.at>



# b32b78f8 07-Nov-2018 Richard Weinberger <richard@nod.at>

ubi: Introduce in_pq()

This function works like in_wl_tree() but checks whether an ubi_wl_entry
is currently in the protection queue.
We need this function to query the current state of an ubi_wl_entry.
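
A hedged sketch of what such a helper can look like, given that the
protection queue is an array of lists indexed by protection "age"
(field names are assumptions based on the description above):

/* Sketch only -- is @e currently on the protection queue? */
static bool in_pq(const struct ubi_device *ubi, struct ubi_wl_entry *e)
{
	struct ubi_wl_entry *p;
	int i;

	for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i)
		list_for_each_entry(p, &ubi->pq[i], u.list)
			if (p == e)
				return true;

	return false;
}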

Signed-off-by: Richard Weinberger <richard@nod.at>



Revision tags: v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9, v4.18.7, v4.18.6, v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12, v4.17.11, v4.17.10, v4.17.9, v4.17.8, v4.17.7, v4.17.6, v4.17.5, v4.17.4, v4.17.3, v4.17.2
# 6396bb22 12-Jun-2018 Kees Cook <keescook@chromium.org>

treewide: kzalloc() -> kcalloc()

The kzalloc() function has a 2-factor argument form, kcalloc(). This
patch replaces cases of:

kzalloc(a * b, gfp)

with:
kcalloc(a, b, gfp)

as well as handling cases of:

kzalloc(a * b * c, gfp)

with:

kzalloc(array3_size(a, b, c), gfp)

as it's slightly less ugly than:

kzalloc_array(array_size(a, b), c, gfp)

This does, however, attempt to ignore constant size factors like:

kzalloc(4 * 1024, gfp)

though any constants defined via macros get caught up in the conversion.

Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.
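
For context, a small hedged example of the transformation (the variable
names are made up): kcalloc() checks the multiplication for overflow,
whereas an open-coded kzalloc(a * b, ...) silently wraps.

	/* Sketch only. */
	struct ubi_wl_entry **tbl;
	size_t num_pebs = 128;	/* example count */

	/* before: an overflow in the multiplication goes unnoticed */
	tbl = kzalloc(num_pebs * sizeof(*tbl), GFP_KERNEL);

	/* after: kcalloc() returns NULL if num_pebs * sizeof(*tbl) overflows */
	tbl = kcalloc(num_pebs, sizeof(*tbl), GFP_KERNEL);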

The Coccinelle script used for this was:

// Fix redundant parens around sizeof().
@@
type TYPE;
expression THING, E;
@@

(
kzalloc(
- (sizeof(TYPE)) * E
+ sizeof(TYPE) * E
, ...)
|
kzalloc(
- (sizeof(THING)) * E
+ sizeof(THING) * E
, ...)
)

// Drop single-byte sizes and redundant parens.
@@
expression COUNT;
typedef u8;
typedef __u8;
@@

(
kzalloc(
- sizeof(u8) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(__u8) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(char) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(unsigned char) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(u8) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(__u8) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(char) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(unsigned char) * COUNT
+ COUNT
, ...)
)

// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@

(
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (COUNT_ID)
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * COUNT_ID
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (COUNT_CONST)
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * COUNT_CONST
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (COUNT_ID)
+ COUNT_ID, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * COUNT_ID
+ COUNT_ID, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (COUNT_CONST)
+ COUNT_CONST, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * COUNT_CONST
+ COUNT_CONST, sizeof(THING)
, ...)
)

// 2-factor product, only identifiers.
@@
identifier SIZE, COUNT;
@@

- kzalloc
+ kcalloc
(
- SIZE * COUNT
+ COUNT, SIZE
, ...)

// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@

(
kzalloc(
- sizeof(TYPE) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(THING) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
)

// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@

(
kzalloc(
- sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kzalloc(
- sizeof(THING1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(THING1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
)

// 3-factor product, only identifiers, with redundant parens removed.
@@
identifier STRIDE, SIZE, COUNT;
@@

(
kzalloc(
- (COUNT) * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
)

// Any remaining multi-factor products, first at least 3-factor products,
// when they're not all constants...
@@
expression E1, E2, E3;
constant C1, C2, C3;
@@

(
kzalloc(C1 * C2 * C3, ...)
|
kzalloc(
- (E1) * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- (E1) * (E2) * E3
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- (E1) * (E2) * (E3)
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- E1 * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
)

// And then all remaining 2 factors products when they're not all constants,
// keeping sizeof() as the second factor argument.
@@
expression THING, E1, E2;
type TYPE;
constant C1, C2, C3;
@@

(
kzalloc(sizeof(THING) * C2, ...)
|
kzalloc(sizeof(TYPE) * C2, ...)
|
kzalloc(C1 * C2 * C3, ...)
|
kzalloc(C1 * C2, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (E2)
+ E2, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * E2
+ E2, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (E2)
+ E2, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * E2
+ E2, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- (E1) * E2
+ E1, E2
, ...)
|
- kzalloc
+ kcalloc
(
- (E1) * (E2)
+ E1, E2
, ...)
|
- kzalloc
+ kcalloc
(
- E1 * E2
+ E1, E2
, ...)
)

Signed-off-by: Kees Cook <keescook@chromium.org>



Revision tags: v4.17.1, v4.17
# 6e7d8016 16-May-2018 Richard Weinberger <richard@nod.at>

ubi: fastmap: Cancel work upon detach

Ben Hutchings pointed out that 29b7a6fa1ec0 ("ubi: fastmap: Don't flush
fastmap work on detach") does not really fix the problem, it just
reduces the risk of hitting the race window where fastmap work races
against free()'ing ubi->volumes[].

The correct approach is making sure that no more fastmap work is in
progress before we free ubi data structures.
So we cancel fastmap work right after the ubi background thread is
stopped.
By setting ubi->thread_enabled to zero we make sure that no further work
tries to wake the thread.
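
A hedged sketch of the shutdown ordering described above (where exactly
this lives in the detach path is an assumption):

	/* Sketch only -- stop the bgt first, then make sure no fastmap
	 * work can run or be scheduled afterwards. */
	ubi->thread_enabled = 0;	/* nothing may wake the thread anymore */
	kthread_stop(ubi->bgt_thread);

#ifdef CONFIG_MTD_UBI_FASTMAP
	cancel_work_sync(&ubi->fm_work);	/* wait for in-flight fastmap work */
#endif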

Fixes: 29b7a6fa1ec0 ("ubi: fastmap: Don't flush fastmap work on detach")
Fixes: 74cdaf24004a ("UBI: Fastmap: Fix memory leaks while closing the WL sub-system")
Cc: stable@vger.kernel.org
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Cc: Martin Townsend <mtownsend1973@gmail.com>

Signed-off-by: Richard Weinberger <richard@nod.at>


