Name              Date          Size        #Lines    LOC
Kconfig           07-Mar-2021    5.5 KiB       186     156
LICENCE           07-Mar-2021    1.4 KiB        31      23
Makefile          07-Mar-2021    786            23      14
README.Locking    07-Mar-2021    6.9 KiB       170     128
acl.c             05-Apr-2024    6.8 KiB       314     269
acl.h             05-Apr-2024    1,007          44      25
background.c      05-Apr-2024    4.2 KiB       166     115
build.c           05-Apr-2024   12.6 KiB       435     281
compr.c           05-Apr-2024   11.7 KiB       427     317
compr.h           05-Apr-2024    3.6 KiB       120      91
compr_lzo.c       07-Mar-2021    2.2 KiB       111      77
compr_rtime.c     16-Dec-2024    2.9 KiB       137      91
compr_rubin.c     07-Mar-2021    8.7 KiB       453     336
compr_zlib.c      07-Mar-2021    5.5 KiB       223     166
debug.c           07-Mar-2021   25.6 KiB       867     713
debug.h           24-May-2022    8.3 KiB       277     206
dir.c             05-Apr-2024   23.1 KiB       885     612
erase.c           11-Dec-2024   13.4 KiB       486     382
file.c            05-Apr-2024    9.4 KiB       343     246
fs.c              05-Apr-2024   19.5 KiB       742     544
gc.c              05-Apr-2024   44 KiB       1,407   1,018
ioctl.c           07-Mar-2021    557            23       6
jffs2_fs_i.h      05-Apr-2024    1.6 KiB        57      18
jffs2_fs_sb.h     08-Mar-2021    5.7 KiB       164     104
malloc.c          07-Mar-2021    7.3 KiB       316     257
nodelist.c        07-Mar-2021   21.3 KiB       756     532
nodelist.h        24-May-2022   17.9 KiB       485     322
nodemgmt.c        07-Mar-2021   28.4 KiB       884     605
os-linux.h        05-Apr-2024    7.4 KiB       198     142
read.c            07-Mar-2021    6.7 KiB       229     184
readinode.c       08-Mar-2021   43.2 KiB     1,448     989
scan.c            24-May-2022   35.3 KiB     1,183     922
security.c        05-Apr-2024    2 KiB          74      53
summary.c         08-Mar-2021   23.7 KiB       878     663
summary.h         24-May-2022    6.4 KiB       214     163
super.c           12-Jul-2024   10.7 KiB       443     336
symlink.c         07-Mar-2021    417            20       7
wbuf.c            05-Apr-2024   36.9 KiB     1,351     946
write.c           07-Mar-2021   21 KiB         724     530
writev.c          07-Mar-2021    1.1 KiB        52      34
xattr.c           13-Jun-2024   38.3 KiB     1,355   1,070
xattr.h           05-Apr-2024    4.1 KiB       128      93
xattr_trusted.c   05-Apr-2024    1.2 KiB        48      32
xattr_user.c      05-Apr-2024    1.1 KiB        42      27

README.Locking

	JFFS2 LOCKING DOCUMENTATION
	---------------------------

This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.

	alloc_sem
	---------

The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.

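In outline, the garbage collector's use of the mutex looks like the
sketch below. This is illustrative only; the real function is
jffs2_garbage_collect_pass() and its body is elided here.

	static int gc_pass_sketch(struct jffs2_sb_info *c)
	{
		/* No other write activity can start once this is held. */
		mutex_lock(&c->alloc_sem);

		/* ... pick an eraseblock and move its live nodes out ... */

		/* Writes may proceed again. */
		mutex_unlock(&c->alloc_sem);
		return 0;
	}
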
When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until the new nodes are linked in we ensure that any data in the
write-buffer at that point is part of the new node, not just something
that was written afterwards. Hence, we can ensure the newly-obsoleted
nodes don't actually get erased until the write-buffer has been flushed
to the medium.

With the introduction of NAND flash support and the write-buffer,
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.

Ordering constraints: See f->sem.


	File Mutex f->sem
	---------------------

This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.

The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage-collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock, unless we played games with unlocking the i_sem
before calling the space allocation functions.

Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.

Ordering constraints:

	1. Never attempt to allocate space or lock alloc_sem with
	   any f->sem held (see the sketch below).
	2. Never attempt to lock two file mutexes in one thread.
	   No ordering rules have been made for doing so.
	3. Never lock a page cache page with f->sem held.

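A sketch of the ordering rule 1 implies for an ordinary write path
follows. It is illustrative only: the jffs2_reserve_space() argument
list is simplified and the actual write is elided.

	static int ordered_write_sketch(struct jffs2_sb_info *c,
					struct jffs2_inode_info *f)
	{
		uint32_t alloclen;
		int ret;

		/* Reserve space first; this takes c->alloc_sem. */
		ret = jffs2_reserve_space(c, 128, &alloclen,
					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
		if (ret)
			return ret;

		/* Only now may f->sem be taken (rule 1). */
		mutex_lock(&f->sem);
		/* ... write the new node and link it into f's lists ... */
		mutex_unlock(&f->sem);

		/* Releases c->alloc_sem. */
		jffs2_complete_reservation(c);
		return 0;
	}
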

	erase_completion_lock spinlock
	------------------------------

This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.

As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().

Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. (->flash_offset & 1) == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.

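A sketch of such a walk is shown below, assuming the list is reached
via f->inocache->nodes and linked by next_in_ino as in the current
sources; list termination and the per-node work are simplified.

	static void walk_inode_nodes_sketch(struct jffs2_sb_info *c,
					    struct jffs2_inode_info *f)
	{
		struct jffs2_raw_node_ref *ref;

		spin_lock(&c->erase_completion_lock);
		/* Termination is simplified; see the real walkers in the sources. */
		for (ref = f->inocache->nodes; ref; ref = ref->next_in_ino) {
			if (ref_obsolete(ref))
				continue;	/* may be freed once the lock is dropped */

			/*
			 * 'ref' is a valid node, so it is pinned by f->sem; the
			 * spinlock may be dropped here temporarily (e.g. in order
			 * to sleep) and re-taken before walking on from 'ref'.
			 */
		}
		spin_unlock(&c->erase_completion_lock);
	}
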
The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.

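Both sides of that handshake, heavily condensed (waiting for the
thread to finish exiting is omitted):

	static void stop_gc_thread_sketch(struct jffs2_sb_info *c)
	{
		spin_lock(&c->erase_completion_lock);
		if (c->gc_task)
			send_sig(SIGKILL, c->gc_task, 1);
		spin_unlock(&c->erase_completion_lock);
		/* ... then wait for the thread to announce its exit ... */
	}

	static void gc_thread_exit_sketch(struct jffs2_sb_info *c)
	{
		spin_lock(&c->erase_completion_lock);
		c->gc_task = NULL;	/* no further signals will be aimed at us */
		spin_unlock(&c->erase_completion_lock);
		/* ... notify the killer and exit ... */
	}
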

	inocache_lock spinlock
	----------------------

This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has a
corresponding jffs2_inode_cache object). So the inocache_lock
has to be held while walking the c->inocache_list hash buckets.

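A lookup therefore has roughly the shape below. The hash-bucket
details (c->inocache_hashsize, the ->next and ->ino fields, the sort
order within a bucket) follow the current sources but are incidental
to the locking rule.

	static struct jffs2_inode_cache *inocache_lookup_sketch(struct jffs2_sb_info *c,
								 uint32_t ino)
	{
		struct jffs2_inode_cache *ic;

		spin_lock(&c->inocache_lock);
		for (ic = c->inocache_list[ino % c->inocache_hashsize];
		     ic && ic->ino < ino; ic = ic->next)
			;
		if (ic && ic->ino != ino)
			ic = NULL;
		spin_unlock(&c->inocache_lock);

		/* The returned pointer is only stable if something else pins
		   the object - see the note on f->sem below. */
		return ic;
	}
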
This spinlock also covers allocation of new inode numbers, which is
currently just '++c->highest_ino', but might one day get more complicated
if we need to deal with wrapping after 4 billion inode numbers are used.

Note that f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed, so it may be accessed without taking the
inocache_lock spinlock.

Ordering constraints:

	If both erase_completion_lock and inocache_lock are needed, the
	c->erase_completion_lock has to be acquired first.


	erase_free_sem
	--------------

This mutex is only used by the erase code which frees obsolete node
references and the jffs2_garbage_collect_deletion_dirent() function.
On NAND flash the latter function must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.

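Reduced to its outline, the locking shape of the two users is as
follows (all of the real work is elided):

	/* Erase side: freeing obsolete node references. */
	static void erase_free_refs_sketch(struct jffs2_sb_info *c)
	{
		mutex_lock(&c->erase_free_sem);
		/* ... free the jffs2_raw_node_ref structures of the erased block ... */
		mutex_unlock(&c->erase_free_sem);
	}

	/* GC side, as in jffs2_garbage_collect_deletion_dirent(). */
	static void deletion_dirent_check_sketch(struct jffs2_sb_info *c)
	{
		mutex_lock(&c->erase_free_sem);
		/* ... read the obsolete nodes from flash (may sleep); the refs
		 * cannot be freed underneath us while the mutex is held ... */
		mutex_unlock(&c->erase_free_sem);
	}
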
Suggestions for alternative solutions to this problem would be welcomed.


	wbuf_sem
	--------

This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by
the buffer.

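The usual split between readers and writers of the buffer looks
roughly as follows. The field names (c->wbuf, c->wbuf_ofs, c->wbuf_len)
follow the sources; the bodies are elided.

	/* Anything that modifies the buffer, or the flash range it covers,
	 * takes the semaphore for writing. */
	static void flush_wbuf_sketch(struct jffs2_sb_info *c)
	{
		down_write(&c->wbuf_sem);
		/* ... pad and write out c->wbuf; update c->wbuf_ofs, c->wbuf_len ... */
		up_write(&c->wbuf_sem);
	}

	/* A read that may overlap the buffered range needs it only for reading. */
	static void read_through_wbuf_sketch(struct jffs2_sb_info *c)
	{
		down_read(&c->wbuf_sem);
		/* ... copy any part of the requested range that falls inside
		 * [c->wbuf_ofs, c->wbuf_ofs + c->wbuf_len) from c->wbuf ... */
		up_read(&c->wbuf_sem);
	}
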
Ordering constraints:
	Lock wbuf_sem last, after the alloc_sem and/or f->sem.


	c->xattr_sem
	------------

This read/write semaphore protects against concurrent access to the
xattr-related objects, which include data in the superblock and ic->xref.
On read-only paths it is sufficient to hold the read semaphore; the
write semaphore must be held when creating, updating or deleting any
xattr-related object.

Once xattr_sem is released there is no guarantee that those objects
still exist. Code which finds, while holding only the read semaphore,
that it needs to update such an object must therefore often retry the
whole operation. For example, do_jffs2_getxattr() first holds the read
semaphore to scan the xref and xdatum; if the name/value pair then has
to be loaded from the medium, it releases the read semaphore and
retries the process while holding the write semaphore.

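In outline, the retry pattern described above looks like this;
'need_load' is a placeholder for the real test that the name/value
pair has to be read from the medium.

	static void xattr_lookup_sketch(struct jffs2_sb_info *c)
	{
		bool need_load;

		down_read(&c->xattr_sem);
		/* ... scan the xref and xdatum objects ... */
		need_load = true;	/* placeholder for the real test */
		up_read(&c->xattr_sem);

		if (need_load) {
			down_write(&c->xattr_sem);
			/* ... retry the scan - objects seen under the read semaphore
			 * may be gone by now - and load the name/value pair ... */
			up_write(&c->xattr_sem);
		}
	}
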
Ordering constraints:
	Lock xattr_sem last, after the alloc_sem.
