1.. SPDX-License-Identifier: GPL-2.0
2.. _xfs_online_fsck_design:
3
4..
5        Mapping of heading styles within this document:
6        Heading 1 uses "====" above and below
7        Heading 2 uses "===="
8        Heading 3 uses "----"
9        Heading 4 uses "````"
10        Heading 5 uses "^^^^"
11        Heading 6 uses "~~~~"
12        Heading 7 uses "...."
13
14        Sections are manually numbered because apparently that's what everyone
15        does in the kernel.
16
17======================
18XFS Online Fsck Design
19======================
20
21This document captures the design of the online filesystem check feature for
22XFS.
23The purpose of this document is threefold:
24
25- To help kernel distributors understand exactly what the XFS online fsck
26  feature is, and issues about which they should be aware.
27
28- To help people reading the code to familiarize themselves with the relevant
29  concepts and design points before they start digging into the code.
30
31- To help developers maintaining the system by capturing the reasons
32  supporting higher level decision making.
33
34As the online fsck code is merged, the links in this document to topic branches
35will be replaced with links to code.
36
37This document is licensed under the terms of the GNU Public License, v2.
38The primary author is Darrick J. Wong.
39
40This design document is split into seven parts.
41Part 1 defines what fsck tools are and the motivations for writing a new one.
42Parts 2 and 3 present a high level overview of how online fsck process works
43and how it is tested to ensure correct functionality.
44Part 4 discusses the user interface and the intended usage modes of the new
45program.
46Parts 5 and 6 show off the high level components and how they fit together, and
47then present case studies of how each repair function actually works.
48Part 7 sums up what has been discussed so far and speculates about what else
49might be built atop online fsck.
50
51.. contents:: Table of Contents
52   :local:
53
541. What is a Filesystem Check?
55==============================
56
57A Unix filesystem has four main responsibilities:
58
59- Provide a hierarchy of names through which application programs can associate
60  arbitrary blobs of data for any length of time,
61
62- Virtualize physical storage media across those names, and
63
64- Retrieve the named data blobs at any time.
65
66- Examine resource usage.
67
68Metadata directly supporting these functions (e.g. files, directories, space
69mappings) are sometimes called primary metadata.
70Secondary metadata (e.g. reverse mapping and directory parent pointers) support
71operations internal to the filesystem, such as internal consistency checking
72and reorganization.
73Summary metadata, as the name implies, condense information contained in
74primary metadata for performance reasons.
75
76The filesystem check (fsck) tool examines all the metadata in a filesystem
77to look for errors.
78In addition to looking for obvious metadata corruptions, fsck also
79cross-references different types of metadata records with each other to look
80for inconsistencies.
81People do not like losing data, so most fsck tools also contains some ability
82to correct any problems found.
83As a word of caution -- the primary goal of most Linux fsck tools is to restore
84the filesystem metadata to a consistent state, not to maximize the data
85recovered.
86That precedent will not be challenged here.
87
88Filesystems of the 20th century generally lacked any redundancy in the ondisk
89format, which means that fsck can only respond to errors by erasing files until
90errors are no longer detected.
91More recent filesystem designs contain enough redundancy in their metadata that
92it is now possible to regenerate data structures when non-catastrophic errors
93occur; this capability aids both strategies.
94
95+--------------------------------------------------------------------------+
96| **Note**:                                                                |
97+--------------------------------------------------------------------------+
98| System administrators avoid data loss by increasing the number of        |
99| separate storage systems through the creation of backups; and they avoid |
100| downtime by increasing the redundancy of each storage system through the |
101| creation of RAID arrays.                                                 |
102| fsck tools address only the first problem.                               |
103+--------------------------------------------------------------------------+
104
105TLDR; Show Me the Code!
106-----------------------
107
108Code is posted to the kernel.org git trees as follows:
109`kernel changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-symlink>`_,
110`userspace changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-media-scan-service>`_, and
111`QA test changes <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=repair-dirs>`_.
112Each kernel patchset adding an online repair function will use the same branch
113name across the kernel, xfsprogs, and fstests git repos.
114
115Existing Tools
116--------------
117
118The online fsck tool described here will be the third tool in the history of
119XFS (on Linux) to check and repair filesystems.
120Two programs precede it:
121
122The first program, ``xfs_check``, was created as part of the XFS debugger
123(``xfs_db``) and can only be used with unmounted filesystems.
124It walks all metadata in the filesystem looking for inconsistencies in the
125metadata, though it lacks any ability to repair what it finds.
126Due to its high memory requirements and inability to repair things, this
127program is now deprecated and will not be discussed further.
128
129The second program, ``xfs_repair``, was created to be faster and more robust
130than the first program.
131Like its predecessor, it can only be used with unmounted filesystems.
132It uses extent-based in-memory data structures to reduce memory consumption,
133and tries to schedule readahead IO appropriately to reduce I/O waiting time
134while it scans the metadata of the entire filesystem.
135The most important feature of this tool is its ability to respond to
136inconsistencies in file metadata and directory tree by erasing things as needed
137to eliminate problems.
138Space usage metadata are rebuilt from the observed file metadata.
139
140Problem Statement
141-----------------
142
143The current XFS tools leave several problems unsolved:
144
1451. **User programs** suddenly **lose access** to the filesystem when unexpected
146   shutdowns occur as a result of silent corruptions in the metadata.
147   These occur **unpredictably** and often without warning.
148
1492. **Users** experience a **total loss of service** during the recovery period
150   after an **unexpected shutdown** occurs.
151
1523. **Users** experience a **total loss of service** if the filesystem is taken
153   offline to **look for problems** proactively.
154
1554. **Data owners** cannot **check the integrity** of their stored data without
156   reading all of it.
157   This may expose them to substantial billing costs when a linear media scan
158   performed by the storage system administrator might suffice.
159
1605. **System administrators** cannot **schedule** a maintenance window to deal
161   with corruptions if they **lack the means** to assess filesystem health
162   while the filesystem is online.
163
1646. **Fleet monitoring tools** cannot **automate periodic checks** of filesystem
165   health when doing so requires **manual intervention** and downtime.
166
1677. **Users** can be tricked into **doing things they do not desire** when
168   malicious actors **exploit quirks of Unicode** to place misleading names
169   in directories.
170
171Given this definition of the problems to be solved and the actors who would
172benefit, the proposed solution is a third fsck tool that acts on a running
173filesystem.
174
175This new third program has three components: an in-kernel facility to check
176metadata, an in-kernel facility to repair metadata, and a userspace driver
177program to drive fsck activity on a live filesystem.
178``xfs_scrub`` is the name of the driver program.
179The rest of this document presents the goals and use cases of the new fsck
180tool, describes its major design points in connection to those goals, and
181discusses the similarities and differences with existing tools.
182
183+--------------------------------------------------------------------------+
184| **Note**:                                                                |
185+--------------------------------------------------------------------------+
186| Throughout this document, the existing offline fsck tool can also be     |
187| referred to by its current name "``xfs_repair``".                        |
188| The userspace driver program for the new online fsck tool can be         |
189| referred to as "``xfs_scrub``".                                          |
190| The kernel portion of online fsck that validates metadata is called      |
191| "online scrub", and portion of the kernel that fixes metadata is called  |
192| "online repair".                                                         |
193+--------------------------------------------------------------------------+
194
195The naming hierarchy is broken up into objects known as directories and files
196and the physical space is split into pieces known as allocation groups.
197Sharding enables better performance on highly parallel systems and helps to
198contain the damage when corruptions occur.
199The division of the filesystem into principal objects (allocation groups and
200inodes) means that there are ample opportunities to perform targeted checks and
201repairs on a subset of the filesystem.
202
203While this is going on, other parts continue processing IO requests.
204Even if a piece of filesystem metadata can only be regenerated by scanning the
205entire system, the scan can still be done in the background while other file
206operations continue.
207
208In summary, online fsck takes advantage of resource sharding and redundant
209metadata to enable targeted checking and repair operations while the system
210is running.
211This capability will be coupled to automatic system management so that
212autonomous self-healing of XFS maximizes service availability.
213
2142. Theory of Operation
215======================
216
217Because it is necessary for online fsck to lock and scan live metadata objects,
218online fsck consists of three separate code components.
219The first is the userspace driver program ``xfs_scrub``, which is responsible
220for identifying individual metadata items, scheduling work items for them,
221reacting to the outcomes appropriately, and reporting results to the system
222administrator.
223The second and third are in the kernel, which implements functions to check
224and repair each type of online fsck work item.
225
226+------------------------------------------------------------------+
227| **Note**:                                                        |
228+------------------------------------------------------------------+
229| For brevity, this document shortens the phrase "online fsck work |
230| item" to "scrub item".                                           |
231+------------------------------------------------------------------+
232
233Scrub item types are delineated in a manner consistent with the Unix design
234philosophy, which is to say that each item should handle one aspect of a
235metadata structure, and handle it well.
236
237Scope
238-----
239
240In principle, online fsck should be able to check and to repair everything that
241the offline fsck program can handle.
242However, online fsck cannot be running 100% of the time, which means that
243latent errors may creep in after a scrub completes.
244If these errors cause the next mount to fail, offline fsck is the only
245solution.
246This limitation means that maintenance of the offline fsck tool will continue.
247A second limitation of online fsck is that it must follow the same resource
248sharing and lock acquisition rules as the regular filesystem.
249This means that scrub cannot take *any* shortcuts to save time, because doing
250so could lead to concurrency problems.
251In other words, online fsck is not a complete replacement for offline fsck, and
252a complete run of online fsck may take longer than online fsck.
253However, both of these limitations are acceptable tradeoffs to satisfy the
254different motivations of online fsck, which are to **minimize system downtime**
255and to **increase predictability of operation**.
256
257.. _scrubphases:
258
259Phases of Work
260--------------
261
262The userspace driver program ``xfs_scrub`` splits the work of checking and
263repairing an entire filesystem into seven phases.
264Each phase concentrates on checking specific types of scrub items and depends
265on the success of all previous phases.
266The seven phases are as follows:
267
2681. Collect geometry information about the mounted filesystem and computer,
269   discover the online fsck capabilities of the kernel, and open the
270   underlying storage devices.
271
2722. Check allocation group metadata, all realtime volume metadata, and all quota
273   files.
274   Each metadata structure is scheduled as a separate scrub item.
275   If corruption is found in the inode header or inode btree and ``xfs_scrub``
276   is permitted to perform repairs, then those scrub items are repaired to
277   prepare for phase 3.
278   Repairs are implemented by using the information in the scrub item to
279   resubmit the kernel scrub call with the repair flag enabled; this is
280   discussed in the next section.
281   Optimizations and all other repairs are deferred to phase 4.
282
2833. Check all metadata of every file in the filesystem.
284   Each metadata structure is also scheduled as a separate scrub item.
285   If repairs are needed and ``xfs_scrub`` is permitted to perform repairs,
286   and there were no problems detected during phase 2, then those scrub items
287   are repaired immediately.
288   Optimizations, deferred repairs, and unsuccessful repairs are deferred to
289   phase 4.
290
2914. All remaining repairs and scheduled optimizations are performed during this
292   phase, if the caller permits them.
293   Before starting repairs, the summary counters are checked and any necessary
294   repairs are performed so that subsequent repairs will not fail the resource
295   reservation step due to wildly incorrect summary counters.
296   Unsuccesful repairs are requeued as long as forward progress on repairs is
297   made somewhere in the filesystem.
298   Free space in the filesystem is trimmed at the end of phase 4 if the
299   filesystem is clean.
300
3015. By the start of this phase, all primary and secondary filesystem metadata
302   must be correct.
303   Summary counters such as the free space counts and quota resource counts
304   are checked and corrected.
305   Directory entry names and extended attribute names are checked for
306   suspicious entries such as control characters or confusing Unicode sequences
307   appearing in names.
308
3096. If the caller asks for a media scan, read all allocated and written data
310   file extents in the filesystem.
311   The ability to use hardware-assisted data file integrity checking is new
312   to online fsck; neither of the previous tools have this capability.
313   If media errors occur, they will be mapped to the owning files and reported.
314
3157. Re-check the summary counters and presents the caller with a summary of
316   space usage and file counts.
317
318This allocation of responsibilities will be :ref:`revisited <scrubcheck>`
319later in this document.
320
321Steps for Each Scrub Item
322-------------------------
323
324The kernel scrub code uses a three-step strategy for checking and repairing
325the one aspect of a metadata object represented by a scrub item:
326
3271. The scrub item of interest is checked for corruptions; opportunities for
328   optimization; and for values that are directly controlled by the system
329   administrator but look suspicious.
330   If the item is not corrupt or does not need optimization, resource are
331   released and the positive scan results are returned to userspace.
332   If the item is corrupt or could be optimized but the caller does not permit
333   this, resources are released and the negative scan results are returned to
334   userspace.
335   Otherwise, the kernel moves on to the second step.
336
3372. The repair function is called to rebuild the data structure.
338   Repair functions generally choose rebuild a structure from other metadata
339   rather than try to salvage the existing structure.
340   If the repair fails, the scan results from the first step are returned to
341   userspace.
342   Otherwise, the kernel moves on to the third step.
343
3443. In the third step, the kernel runs the same checks over the new metadata
345   item to assess the efficacy of the repairs.
346   The results of the reassessment are returned to userspace.
347
348Classification of Metadata
349--------------------------
350
351Each type of metadata object (and therefore each type of scrub item) is
352classified as follows:
353
354Primary Metadata
355````````````````
356
357Metadata structures in this category should be most familiar to filesystem
358users either because they are directly created by the user or they index
359objects created by the user
360Most filesystem objects fall into this class:
361
362- Free space and reference count information
363
364- Inode records and indexes
365
366- Storage mapping information for file data
367
368- Directories
369
370- Extended attributes
371
372- Symbolic links
373
374- Quota limits
375
376Scrub obeys the same rules as regular filesystem accesses for resource and lock
377acquisition.
378
379Primary metadata objects are the simplest for scrub to process.
380The principal filesystem object (either an allocation group or an inode) that
381owns the item being scrubbed is locked to guard against concurrent updates.
382The check function examines every record associated with the type for obvious
383errors and cross-references healthy records against other metadata to look for
384inconsistencies.
385Repairs for this class of scrub item are simple, since the repair function
386starts by holding all the resources acquired in the previous step.
387The repair function scans available metadata as needed to record all the
388observations needed to complete the structure.
389Next, it stages the observations in a new ondisk structure and commits it
390atomically to complete the repair.
391Finally, the storage from the old data structure are carefully reaped.
392
393Because ``xfs_scrub`` locks a primary object for the duration of the repair,
394this is effectively an offline repair operation performed on a subset of the
395filesystem.
396This minimizes the complexity of the repair code because it is not necessary to
397handle concurrent updates from other threads, nor is it necessary to access
398any other part of the filesystem.
399As a result, indexed structures can be rebuilt very quickly, and programs
400trying to access the damaged structure will be blocked until repairs complete.
401The only infrastructure needed by the repair code are the staging area for
402observations and a means to write new structures to disk.
403Despite these limitations, the advantage that online repair holds is clear:
404targeted work on individual shards of the filesystem avoids total loss of
405service.
406
407This mechanism is described in section 2.1 ("Off-Line Algorithm") of
408V. Srinivasan and M. J. Carey, `"Performance of On-Line Index Construction
409Algorithms" <https://minds.wisconsin.edu/bitstream/handle/1793/59524/TR1047.pdf>`_,
410*Extending Database Technology*, pp. 293-309, 1992.
411
412Most primary metadata repair functions stage their intermediate results in an
413in-memory array prior to formatting the new ondisk structure, which is very
414similar to the list-based algorithm discussed in section 2.3 ("List-Based
415Algorithms") of Srinivasan.
416However, any data structure builder that maintains a resource lock for the
417duration of the repair is *always* an offline algorithm.
418
419.. _secondary_metadata:
420
421Secondary Metadata
422``````````````````
423
424Metadata structures in this category reflect records found in primary metadata,
425but are only needed for online fsck or for reorganization of the filesystem.
426
427Secondary metadata include:
428
429- Reverse mapping information
430
431- Directory parent pointers
432
433This class of metadata is difficult for scrub to process because scrub attaches
434to the secondary object but needs to check primary metadata, which runs counter
435to the usual order of resource acquisition.
436Frequently, this means that full filesystems scans are necessary to rebuild the
437metadata.
438Check functions can be limited in scope to reduce runtime.
439Repairs, however, require a full scan of primary metadata, which can take a
440long time to complete.
441Under these conditions, ``xfs_scrub`` cannot lock resources for the entire
442duration of the repair.
443
444Instead, repair functions set up an in-memory staging structure to store
445observations.
446Depending on the requirements of the specific repair function, the staging
447index will either have the same format as the ondisk structure or a design
448specific to that repair function.
449The next step is to release all locks and start the filesystem scan.
450When the repair scanner needs to record an observation, the staging data are
451locked long enough to apply the update.
452While the filesystem scan is in progress, the repair function hooks the
453filesystem so that it can apply pending filesystem updates to the staging
454information.
455Once the scan is done, the owning object is re-locked, the live data is used to
456write a new ondisk structure, and the repairs are committed atomically.
457The hooks are disabled and the staging staging area is freed.
458Finally, the storage from the old data structure are carefully reaped.
459
460Introducing concurrency helps online repair avoid various locking problems, but
461comes at a high cost to code complexity.
462Live filesystem code has to be hooked so that the repair function can observe
463updates in progress.
464The staging area has to become a fully functional parallel structure so that
465updates can be merged from the hooks.
466Finally, the hook, the filesystem scan, and the inode locking model must be
467sufficiently well integrated that a hook event can decide if a given update
468should be applied to the staging structure.
469
470In theory, the scrub implementation could apply these same techniques for
471primary metadata, but doing so would make it massively more complex and less
472performant.
473Programs attempting to access the damaged structures are not blocked from
474operation, which may cause application failure or an unplanned filesystem
475shutdown.
476
477Inspiration for the secondary metadata repair strategy was drawn from section
4782.4 of Srinivasan above, and sections 2 ("NSF: Inded Build Without Side-File")
479and 3.1.1 ("Duplicate Key Insert Problem") in C. Mohan, `"Algorithms for
480Creating Indexes for Very Large Tables Without Quiescing Updates"
481<https://dl.acm.org/doi/10.1145/130283.130337>`_, 1992.
482
483The sidecar index mentioned above bears some resemblance to the side file
484method mentioned in Srinivasan and Mohan.
485Their method consists of an index builder that extracts relevant record data to
486build the new structure as quickly as possible; and an auxiliary structure that
487captures all updates that would be committed to the index by other threads were
488the new index already online.
489After the index building scan finishes, the updates recorded in the side file
490are applied to the new index.
491To avoid conflicts between the index builder and other writer threads, the
492builder maintains a publicly visible cursor that tracks the progress of the
493scan through the record space.
494To avoid duplication of work between the side file and the index builder, side
495file updates are elided when the record ID for the update is greater than the
496cursor position within the record ID space.
497
498To minimize changes to the rest of the codebase, XFS online repair keeps the
499replacement index hidden until it's completely ready to go.
500In other words, there is no attempt to expose the keyspace of the new index
501while repair is running.
502The complexity of such an approach would be very high and perhaps more
503appropriate to building *new* indices.
504
505**Future Work Question**: Can the full scan and live update code used to
506facilitate a repair also be used to implement a comprehensive check?
507
508*Answer*: In theory, yes.  Check would be much stronger if each scrub function
509employed these live scans to build a shadow copy of the metadata and then
510compared the shadow records to the ondisk records.
511However, doing that is a fair amount more work than what the checking functions
512do now.
513The live scans and hooks were developed much later.
514That in turn increases the runtime of those scrub functions.
515
516Summary Information
517```````````````````
518
519Metadata structures in this last category summarize the contents of primary
520metadata records.
521These are often used to speed up resource usage queries, and are many times
522smaller than the primary metadata which they represent.
523
524Examples of summary information include:
525
526- Summary counts of free space and inodes
527
528- File link counts from directories
529
530- Quota resource usage counts
531
532Check and repair require full filesystem scans, but resource and lock
533acquisition follow the same paths as regular filesystem accesses.
534
535The superblock summary counters have special requirements due to the underlying
536implementation of the incore counters, and will be treated separately.
537Check and repair of the other types of summary counters (quota resource counts
538and file link counts) employ the same filesystem scanning and hooking
539techniques as outlined above, but because the underlying data are sets of
540integer counters, the staging data need not be a fully functional mirror of the
541ondisk structure.
542
543Inspiration for quota and file link count repair strategies were drawn from
544sections 2.12 ("Online Index Operations") through 2.14 ("Incremental View
545Maintenace") of G.  Graefe, `"Concurrent Queries and Updates in Summary Views
546and Their Indexes"
547<http://www.odbms.org/wp-content/uploads/2014/06/Increment-locks.pdf>`_, 2011.
548
549Since quotas are non-negative integer counts of resource usage, online
550quotacheck can use the incremental view deltas described in section 2.14 to
551track pending changes to the block and inode usage counts in each transaction,
552and commit those changes to a dquot side file when the transaction commits.
553Delta tracking is necessary for dquots because the index builder scans inodes,
554whereas the data structure being rebuilt is an index of dquots.
555Link count checking combines the view deltas and commit step into one because
556it sets attributes of the objects being scanned instead of writing them to a
557separate data structure.
558Each online fsck function will be discussed as case studies later in this
559document.
560
561Risk Management
562---------------
563
564During the development of online fsck, several risk factors were identified
565that may make the feature unsuitable for certain distributors and users.
566Steps can be taken to mitigate or eliminate those risks, though at a cost to
567functionality.
568
569- **Decreased performance**: Adding metadata indices to the filesystem
570  increases the time cost of persisting changes to disk, and the reverse space
571  mapping and directory parent pointers are no exception.
572  System administrators who require the maximum performance can disable the
573  reverse mapping features at format time, though this choice dramatically
574  reduces the ability of online fsck to find inconsistencies and repair them.
575
576- **Incorrect repairs**: As with all software, there might be defects in the
577  software that result in incorrect repairs being written to the filesystem.
578  Systematic fuzz testing (detailed in the next section) is employed by the
579  authors to find bugs early, but it might not catch everything.
580  The kernel build system provides Kconfig options (``CONFIG_XFS_ONLINE_SCRUB``
581  and ``CONFIG_XFS_ONLINE_REPAIR``) to enable distributors to choose not to
582  accept this risk.
583  The xfsprogs build system has a configure option (``--enable-scrub=no``) that
584  disables building of the ``xfs_scrub`` binary, though this is not a risk
585  mitigation if the kernel functionality remains enabled.
586
587- **Inability to repair**: Sometimes, a filesystem is too badly damaged to be
588  repairable.
589  If the keyspaces of several metadata indices overlap in some manner but a
590  coherent narrative cannot be formed from records collected, then the repair
591  fails.
592  To reduce the chance that a repair will fail with a dirty transaction and
593  render the filesystem unusable, the online repair functions have been
594  designed to stage and validate all new records before committing the new
595  structure.
596
597- **Misbehavior**: Online fsck requires many privileges -- raw IO to block
598  devices, opening files by handle, ignoring Unix discretionary access control,
599  and the ability to perform administrative changes.
600  Running this automatically in the background scares people, so the systemd
601  background service is configured to run with only the privileges required.
602  Obviously, this cannot address certain problems like the kernel crashing or
603  deadlocking, but it should be sufficient to prevent the scrub process from
604  escaping and reconfiguring the system.
605  The cron job does not have this protection.
606
607- **Fuzz Kiddiez**: There are many people now who seem to think that running
608  automated fuzz testing of ondisk artifacts to find mischevious behavior and
609  spraying exploit code onto the public mailing list for instant zero-day
610  disclosure is somehow of some social benefit.
611  In the view of this author, the benefit is realized only when the fuzz
612  operators help to **fix** the flaws, but this opinion apparently is not
613  widely shared among security "researchers".
614  The XFS maintainers' continuing ability to manage these events presents an
615  ongoing risk to the stability of the development process.
616  Automated testing should front-load some of the risk while the feature is
617  considered EXPERIMENTAL.
618
619Many of these risks are inherent to software programming.
620Despite this, it is hoped that this new functionality will prove useful in
621reducing unexpected downtime.
622
6233. Testing Plan
624===============
625
626As stated before, fsck tools have three main goals:
627
6281. Detect inconsistencies in the metadata;
629
6302. Eliminate those inconsistencies; and
631
6323. Minimize further loss of data.
633
634Demonstrations of correct operation are necessary to build users' confidence
635that the software behaves within expectations.
636Unfortunately, it was not really feasible to perform regular exhaustive testing
637of every aspect of a fsck tool until the introduction of low-cost virtual
638machines with high-IOPS storage.
639With ample hardware availability in mind, the testing strategy for the online
640fsck project involves differential analysis against the existing fsck tools and
641systematic testing of every attribute of every type of metadata object.
642Testing can be split into four major categories, as discussed below.
643
644Integrated Testing with fstests
645-------------------------------
646
647The primary goal of any free software QA effort is to make testing as
648inexpensive and widespread as possible to maximize the scaling advantages of
649community.
650In other words, testing should maximize the breadth of filesystem configuration
651scenarios and hardware setups.
652This improves code quality by enabling the authors of online fsck to find and
653fix bugs early, and helps developers of new features to find integration
654issues earlier in their development effort.
655
656The Linux filesystem community shares a common QA testing suite,
657`fstests <https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/>`_, for
658functional and regression testing.
659Even before development work began on online fsck, fstests (when run on XFS)
660would run both the ``xfs_check`` and ``xfs_repair -n`` commands on the test and
661scratch filesystems between each test.
662This provides a level of assurance that the kernel and the fsck tools stay in
663alignment about what constitutes consistent metadata.
664During development of the online checking code, fstests was modified to run
665``xfs_scrub -n`` between each test to ensure that the new checking code
666produces the same results as the two existing fsck tools.
667
668To start development of online repair, fstests was modified to run
669``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
670This ensures that offline repair does not crash, leave a corrupt filesystem
671after it exists, or trigger complaints from the online check.
672This also established a baseline for what can and cannot be repaired offline.
673To complete the first phase of development of online repair, fstests was
674modified to be able to run ``xfs_scrub`` in a "force rebuild" mode.
675This enables a comparison of the effectiveness of online repair as compared to
676the existing offline repair tools.
677
678General Fuzz Testing of Metadata Blocks
679---------------------------------------
680
681XFS benefits greatly from having a very robust debugging tool, ``xfs_db``.
682
683Before development of online fsck even began, a set of fstests were created
684to test the rather common fault that entire metadata blocks get corrupted.
685This required the creation of fstests library code that can create a filesystem
686containing every possible type of metadata object.
687Next, individual test cases were created to create a test filesystem, identify
688a single block of a specific type of metadata object, trash it with the
689existing ``blocktrash`` command in ``xfs_db``, and test the reaction of a
690particular metadata validation strategy.
691
692This earlier test suite enabled XFS developers to test the ability of the
693in-kernel validation functions and the ability of the offline fsck tool to
694detect and eliminate the inconsistent metadata.
695This part of the test suite was extended to cover online fsck in exactly the
696same manner.
697
698In other words, for a given fstests filesystem configuration:
699
700* For each metadata object existing on the filesystem:
701
702  * Write garbage to it
703
704  * Test the reactions of:
705
706    1. The kernel verifiers to stop obviously bad metadata
707    2. Offline repair (``xfs_repair``) to detect and fix
708    3. Online repair (``xfs_scrub``) to detect and fix
709
710Targeted Fuzz Testing of Metadata Records
711-----------------------------------------
712
713The testing plan for online fsck includes extending the existing fs testing
714infrastructure to provide a much more powerful facility: targeted fuzz testing
715of every metadata field of every metadata object in the filesystem.
716``xfs_db`` can modify every field of every metadata structure in every
717block in the filesystem to simulate the effects of memory corruption and
718software bugs.
719Given that fstests already contains the ability to create a filesystem
720containing every metadata format known to the filesystem, ``xfs_db`` can be
721used to perform exhaustive fuzz testing!
722
723For a given fstests filesystem configuration:
724
725* For each metadata object existing on the filesystem...
726
727  * For each record inside that metadata object...
728
729    * For each field inside that record...
730
731      * For each conceivable type of transformation that can be applied to a bit field...
732
733        1. Clear all bits
734        2. Set all bits
735        3. Toggle the most significant bit
736        4. Toggle the middle bit
737        5. Toggle the least significant bit
738        6. Add a small quantity
739        7. Subtract a small quantity
740        8. Randomize the contents
741
742        * ...test the reactions of:
743
744          1. The kernel verifiers to stop obviously bad metadata
745          2. Offline checking (``xfs_repair -n``)
746          3. Offline repair (``xfs_repair``)
747          4. Online checking (``xfs_scrub -n``)
748          5. Online repair (``xfs_scrub``)
749          6. Both repair tools (``xfs_scrub`` and then ``xfs_repair`` if online repair doesn't succeed)
750
751This is quite the combinatoric explosion!
752
753Fortunately, having this much test coverage makes it easy for XFS developers to
754check the responses of XFS' fsck tools.
755Since the introduction of the fuzz testing framework, these tests have been
756used to discover incorrect repair code and missing functionality for entire
757classes of metadata objects in ``xfs_repair``.
758The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
759confirming that ``xfs_repair`` could detect at least as many corruptions as
760the older tool.
761
762These tests have been very valuable for ``xfs_scrub`` in the same ways -- they
763allow the online fsck developers to compare online fsck against offline fsck,
764and they enable XFS developers to find deficiencies in the code base.
765
766Proposed patchsets include
767`general fuzzer improvements
768<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=fuzzer-improvements>`_,
769`fuzzing baselines
770<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=fuzz-baseline>`_,
771and `improvements in fuzz testing comprehensiveness
772<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=more-fuzz-testing>`_.
773
774Stress Testing
775--------------
776
777A unique requirement to online fsck is the ability to operate on a filesystem
778concurrently with regular workloads.
779Although it is of course impossible to run ``xfs_scrub`` with *zero* observable
780impact on the running system, the online repair code should never introduce
781inconsistencies into the filesystem metadata, and regular workloads should
782never notice resource starvation.
783To verify that these conditions are being met, fstests has been enhanced in
784the following ways:
785
786* For each scrub item type, create a test to exercise checking that item type
787  while running ``fsstress``.
788* For each scrub item type, create a test to exercise repairing that item type
789  while running ``fsstress``.
790* Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
791  filesystem doesn't cause problems.
792* Race ``fsstress`` and ``xfs_scrub`` in force-rebuild mode to ensure that
793  force-repairing the whole filesystem doesn't cause problems.
794* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
795  freezing and thawing the filesystem.
796* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
797  remounting the filesystem read-only and read-write.
798* The same, but running ``fsx`` instead of ``fsstress``.  (Not done yet?)
799
800Success is defined by the ability to run all of these tests without observing
801any unexpected filesystem shutdowns due to corrupted metadata, kernel hang
802check warnings, or any other sort of mischief.
803
804Proposed patchsets include `general stress testing
805<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=race-scrub-and-mount-state-changes>`_
806and the `evolution of existing per-function stress testing
807<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/log/?h=refactor-scrub-stress>`_.
808
8094. User Interface
810=================
811
812The primary user of online fsck is the system administrator, just like offline
813repair.
814Online fsck presents two modes of operation to administrators:
815A foreground CLI process for online fsck on demand, and a background service
816that performs autonomous checking and repair.
817
818Checking on Demand
819------------------
820
821For administrators who want the absolute freshest information about the
822metadata in a filesystem, ``xfs_scrub`` can be run as a foreground process on
823a command line.
824The program checks every piece of metadata in the filesystem while the
825administrator waits for the results to be reported, just like the existing
826``xfs_repair`` tool.
827Both tools share a ``-n`` option to perform a read-only scan, and a ``-v``
828option to increase the verbosity of the information reported.
829
830A new feature of ``xfs_scrub`` is the ``-x`` option, which employs the error
831correction capabilities of the hardware to check data file contents.
832The media scan is not enabled by default because it may dramatically increase
833program runtime and consume a lot of bandwidth on older storage hardware.
834
835The output of a foreground invocation is captured in the system log.
836
837The ``xfs_scrub_all`` program walks the list of mounted filesystems and
838initiates ``xfs_scrub`` for each of them in parallel.
839It serializes scans for any filesystems that resolve to the same top level
840kernel block device to prevent resource overconsumption.
841
842Background Service
843------------------
844
845To reduce the workload of system administrators, the ``xfs_scrub`` package
846provides a suite of `systemd <https://systemd.io/>`_ timers and services that
847run online fsck automatically on weekends by default.
848The background service configures scrub to run with as little privilege as
849possible, the lowest CPU and IO priority, and in a CPU-constrained single
850threaded mode.
851This can be tuned by the systemd administrator at any time to suit the latency
852and throughput requirements of customer workloads.
853
854The output of the background service is also captured in the system log.
855If desired, reports of failures (either due to inconsistencies or mere runtime
856errors) can be emailed automatically by setting the ``EMAIL_ADDR`` environment
857variable in the following service files:
858
859* ``xfs_scrub_fail@.service``
860* ``xfs_scrub_media_fail@.service``
861* ``xfs_scrub_all_fail.service``
862
863The decision to enable the background scan is left to the system administrator.
864This can be done by enabling either of the following services:
865
866* ``xfs_scrub_all.timer`` on systemd systems
867* ``xfs_scrub_all.cron`` on non-systemd systems
868
869This automatic weekly scan is configured out of the box to perform an
870additional media scan of all file data once per month.
871This is less foolproof than, say, storing file data block checksums, but much
872more performant if application software provides its own integrity checking,
873redundancy can be provided elsewhere above the filesystem, or the storage
874device's integrity guarantees are deemed sufficient.
875
876The systemd unit file definitions have been subjected to a security audit
877(as of systemd 249) to ensure that the xfs_scrub processes have as little
878access to the rest of the system as possible.
879This was performed via ``systemd-analyze security``, after which privileges
880were restricted to the minimum required, sandboxing was set up to the maximal
881extent possible with sandboxing and system call filtering; and access to the
882filesystem tree was restricted to the minimum needed to start the program and
883access the filesystem being scanned.
884The service definition files restrict CPU usage to 80% of one CPU core, and
885apply as nice of a priority to IO and CPU scheduling as possible.
886This measure was taken to minimize delays in the rest of the filesystem.
887No such hardening has been performed for the cron job.
888
889Proposed patchset:
890`Enabling the xfs_scrub background service
891<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-media-scan-service>`_.
892
893Health Reporting
894----------------
895
896XFS caches a summary of each filesystem's health status in memory.
897The information is updated whenever ``xfs_scrub`` is run, or whenever
898inconsistencies are detected in the filesystem metadata during regular
899operations.
900System administrators should use the ``health`` command of ``xfs_spaceman`` to
901download this information into a human-readable format.
902If problems have been observed, the administrator can schedule a reduced
903service window to run the online repair tool to correct the problem.
904Failing that, the administrator can decide to schedule a maintenance window to
905run the traditional offline repair tool to correct the problem.
906
907**Future Work Question**: Should the health reporting integrate with the new
908inotify fs error notification system?
909Would it be helpful for sysadmins to have a daemon to listen for corruption
910notifications and initiate a repair?
911
912*Answer*: These questions remain unanswered, but should be a part of the
913conversation with early adopters and potential downstream users of XFS.
914
915Proposed patchsets include
916`wiring up health reports to correction returns
917<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=corruption-health-reports>`_
918and
919`preservation of sickness info during memory reclaim
920<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=indirect-health-reporting>`_.
921
9225. Kernel Algorithms and Data Structures
923========================================
924
925This section discusses the key algorithms and data structures of the kernel
926code that provide the ability to check and repair metadata while the system
927is running.
928The first chapters in this section reveal the pieces that provide the
929foundation for checking metadata.
930The remainder of this section presents the mechanisms through which XFS
931regenerates itself.
932
933Self Describing Metadata
934------------------------
935
936Starting with XFS version 5 in 2012, XFS updated the format of nearly every
937ondisk block header to record a magic number, a checksum, a universally
938"unique" identifier (UUID), an owner code, the ondisk address of the block,
939and a log sequence number.
940When loading a block buffer from disk, the magic number, UUID, owner, and
941ondisk address confirm that the retrieved block matches the specific owner of
942the current filesystem, and that the information contained in the block is
943supposed to be found at the ondisk address.
944The first three components enable checking tools to disregard alleged metadata
945that doesn't belong to the filesystem, and the fourth component enables the
946filesystem to detect lost writes.
947
948Whenever a file system operation modifies a block, the change is submitted
949to the log as part of a transaction.
950The log then processes these transactions marking them done once they are
951safely persisted to storage.
952The logging code maintains the checksum and the log sequence number of the last
953transactional update.
954Checksums are useful for detecting torn writes and other discrepancies that can
955be introduced between the computer and its storage devices.
956Sequence number tracking enables log recovery to avoid applying out of date
957log updates to the filesystem.
958
959These two features improve overall runtime resiliency by providing a means for
960the filesystem to detect obvious corruption when reading metadata blocks from
961disk, but these buffer verifiers cannot provide any consistency checking
962between metadata structures.
963
964For more information, please see the documentation for
965Documentation/filesystems/xfs-self-describing-metadata.rst
966
967Reverse Mapping
968---------------
969
970The original design of XFS (circa 1993) is an improvement upon 1980s Unix
971filesystem design.
972In those days, storage density was expensive, CPU time was scarce, and
973excessive seek time could kill performance.
974For performance reasons, filesystem authors were reluctant to add redundancy to
975the filesystem, even at the cost of data integrity.
976Filesystems designers in the early 21st century choose different strategies to
977increase internal redundancy -- either storing nearly identical copies of
978metadata, or more space-efficient encoding techniques.
979
980For XFS, a different redundancy strategy was chosen to modernize the design:
981a secondary space usage index that maps allocated disk extents back to their
982owners.
983By adding a new index, the filesystem retains most of its ability to scale
984well to heavily threaded workloads involving large datasets, since the primary
985file metadata (the directory tree, the file block map, and the allocation
986groups) remain unchanged.
987Like any system that improves redundancy, the reverse-mapping feature increases
988overhead costs for space mapping activities.
989However, it has two critical advantages: first, the reverse index is key to
990enabling online fsck and other requested functionality such as free space
991defragmentation, better media failure reporting, and filesystem shrinking.
992Second, the different ondisk storage format of the reverse mapping btree
993defeats device-level deduplication because the filesystem requires real
994redundancy.
995
996+--------------------------------------------------------------------------+
997| **Sidebar**:                                                             |
998+--------------------------------------------------------------------------+
999| A criticism of adding the secondary index is that it does nothing to     |
1000| improve the robustness of user data storage itself.                      |
1001| This is a valid point, but adding a new index for file data block        |
1002| checksums increases write amplification by turning data overwrites into  |
1003| copy-writes, which age the filesystem prematurely.                       |
1004| In keeping with thirty years of precedent, users who want file data      |
1005| integrity can supply as powerful a solution as they require.             |
1006| As for metadata, the complexity of adding a new secondary index of space |
1007| usage is much less than adding volume management and storage device      |
1008| mirroring to XFS itself.                                                 |
1009| Perfection of RAID and volume management are best left to existing       |
1010| layers in the kernel.                                                    |
1011+--------------------------------------------------------------------------+
1012
1013The information captured in a reverse space mapping record is as follows:
1014
1015.. code-block:: c
1016
1017	struct xfs_rmap_irec {
1018	    xfs_agblock_t    rm_startblock;   /* extent start block */
1019	    xfs_extlen_t     rm_blockcount;   /* extent length */
1020	    uint64_t         rm_owner;        /* extent owner */
1021	    uint64_t         rm_offset;       /* offset within the owner */
1022	    unsigned int     rm_flags;        /* state flags */
1023	};
1024
1025The first two fields capture the location and size of the physical space,
1026in units of filesystem blocks.
1027The owner field tells scrub which metadata structure or file inode have been
1028assigned this space.
1029For space allocated to files, the offset field tells scrub where the space was
1030mapped within the file fork.
1031Finally, the flags field provides extra information about the space usage --
1032is this an attribute fork extent?  A file mapping btree extent?  Or an
1033unwritten data extent?
1034
1035Online filesystem checking judges the consistency of each primary metadata
1036record by comparing its information against all other space indices.
1037The reverse mapping index plays a key role in the consistency checking process
1038because it contains a centralized alternate copy of all space allocation
1039information.
1040Program runtime and ease of resource acquisition are the only real limits to
1041what online checking can consult.
1042For example, a file data extent mapping can be checked against:
1043
1044* The absence of an entry in the free space information.
1045* The absence of an entry in the inode index.
1046* The absence of an entry in the reference count data if the file is not
1047  marked as having shared extents.
1048* The correspondence of an entry in the reverse mapping information.
1049
1050There are several observations to make about reverse mapping indices:
1051
10521. Reverse mappings can provide a positive affirmation of correctness if any of
1053   the above primary metadata are in doubt.
1054   The checking code for most primary metadata follows a path similar to the
1055   one outlined above.
1056
10572. Proving the consistency of secondary metadata with the primary metadata is
1058   difficult because that requires a full scan of all primary space metadata,
1059   which is very time intensive.
1060   For example, checking a reverse mapping record for a file extent mapping
1061   btree block requires locking the file and searching the entire btree to
1062   confirm the block.
1063   Instead, scrub relies on rigorous cross-referencing during the primary space
1064   mapping structure checks.
1065
10663. Consistency scans must use non-blocking lock acquisition primitives if the
1067   required locking order is not the same order used by regular filesystem
1068   operations.
1069   For example, if the filesystem normally takes a file ILOCK before taking
1070   the AGF buffer lock but scrub wants to take a file ILOCK while holding
1071   an AGF buffer lock, scrub cannot block on that second acquisition.
1072   This means that forward progress during this part of a scan of the reverse
1073   mapping data cannot be guaranteed if system load is heavy.
1074
1075In summary, reverse mappings play a key role in reconstruction of primary
1076metadata.
1077The details of how these records are staged, written to disk, and committed
1078into the filesystem are covered in subsequent sections.
1079
1080Checking and Cross-Referencing
1081------------------------------
1082
1083The first step of checking a metadata structure is to examine every record
1084contained within the structure and its relationship with the rest of the
1085system.
1086XFS contains multiple layers of checking to try to prevent inconsistent
1087metadata from wreaking havoc on the system.
1088Each of these layers contributes information that helps the kernel to make
1089three decisions about the health of a metadata structure:
1090
1091- Is a part of this structure obviously corrupt (``XFS_SCRUB_OFLAG_CORRUPT``) ?
1092- Is this structure inconsistent with the rest of the system
1093  (``XFS_SCRUB_OFLAG_XCORRUPT``) ?
1094- Is there so much damage around the filesystem that cross-referencing is not
1095  possible (``XFS_SCRUB_OFLAG_XFAIL``) ?
1096- Can the structure be optimized to improve performance or reduce the size of
1097  metadata (``XFS_SCRUB_OFLAG_PREEN``) ?
1098- Does the structure contain data that is not inconsistent but deserves review
1099  by the system administrator (``XFS_SCRUB_OFLAG_WARNING``) ?
1100
1101The following sections describe how the metadata scrubbing process works.
1102
1103Metadata Buffer Verification
1104````````````````````````````
1105
1106The lowest layer of metadata protection in XFS are the metadata verifiers built
1107into the buffer cache.
1108These functions perform inexpensive internal consistency checking of the block
1109itself, and answer these questions:
1110
1111- Does the block belong to this filesystem?
1112
1113- Does the block belong to the structure that asked for the read?
1114  This assumes that metadata blocks only have one owner, which is always true
1115  in XFS.
1116
1117- Is the type of data stored in the block within a reasonable range of what
1118  scrub is expecting?
1119
1120- Does the physical location of the block match the location it was read from?
1121
1122- Does the block checksum match the data?
1123
1124The scope of the protections here are very limited -- verifiers can only
1125establish that the filesystem code is reasonably free of gross corruption bugs
1126and that the storage system is reasonably competent at retrieval.
1127Corruption problems observed at runtime cause the generation of health reports,
1128failed system calls, and in the extreme case, filesystem shutdowns if the
1129corrupt metadata force the cancellation of a dirty transaction.
1130
1131Every online fsck scrubbing function is expected to read every ondisk metadata
1132block of a structure in the course of checking the structure.
1133Corruption problems observed during a check are immediately reported to
1134userspace as corruption; during a cross-reference, they are reported as a
1135failure to cross-reference once the full examination is complete.
1136Reads satisfied by a buffer already in cache (and hence already verified)
1137bypass these checks.
1138
1139Internal Consistency Checks
1140```````````````````````````
1141
1142After the buffer cache, the next level of metadata protection is the internal
1143record verification code built into the filesystem.
1144These checks are split between the buffer verifiers, the in-filesystem users of
1145the buffer cache, and the scrub code itself, depending on the amount of higher
1146level context required.
1147The scope of checking is still internal to the block.
1148These higher level checking functions answer these questions:
1149
1150- Does the type of data stored in the block match what scrub is expecting?
1151
1152- Does the block belong to the owning structure that asked for the read?
1153
1154- If the block contains records, do the records fit within the block?
1155
1156- If the block tracks internal free space information, is it consistent with
1157  the record areas?
1158
1159- Are the records contained inside the block free of obvious corruptions?
1160
1161Record checks in this category are more rigorous and more time-intensive.
1162For example, block pointers and inumbers are checked to ensure that they point
1163within the dynamically allocated parts of an allocation group and within
1164the filesystem.
1165Names are checked for invalid characters, and flags are checked for invalid
1166combinations.
1167Other record attributes are checked for sensible values.
1168Btree records spanning an interval of the btree keyspace are checked for
1169correct order and lack of mergeability (except for file fork mappings).
1170For performance reasons, regular code may skip some of these checks unless
1171debugging is enabled or a write is about to occur.
1172Scrub functions, of course, must check all possible problems.
1173
1174Validation of Userspace-Controlled Record Attributes
1175````````````````````````````````````````````````````
1176
1177Various pieces of filesystem metadata are directly controlled by userspace.
1178Because of this nature, validation work cannot be more precise than checking
1179that a value is within the possible range.
1180These fields include:
1181
1182- Superblock fields controlled by mount options
1183- Filesystem labels
1184- File timestamps
1185- File permissions
1186- File size
1187- File flags
1188- Names present in directory entries, extended attribute keys, and filesystem
1189  labels
1190- Extended attribute key namespaces
1191- Extended attribute values
1192- File data block contents
1193- Quota limits
1194- Quota timer expiration (if resource usage exceeds the soft limit)
1195
1196Cross-Referencing Space Metadata
1197````````````````````````````````
1198
1199After internal block checks, the next higher level of checking is
1200cross-referencing records between metadata structures.
1201For regular runtime code, the cost of these checks is considered to be
1202prohibitively expensive, but as scrub is dedicated to rooting out
1203inconsistencies, it must pursue all avenues of inquiry.
1204The exact set of cross-referencing is highly dependent on the context of the
1205data structure being checked.
1206
1207The XFS btree code has keyspace scanning functions that online fsck uses to
1208cross reference one structure with another.
1209Specifically, scrub can scan the key space of an index to determine if that
1210keyspace is fully, sparsely, or not at all mapped to records.
1211For the reverse mapping btree, it is possible to mask parts of the key for the
1212purposes of performing a keyspace scan so that scrub can decide if the rmap
1213btree contains records mapping a certain extent of physical space without the
1214sparsenses of the rest of the rmap keyspace getting in the way.
1215
1216Btree blocks undergo the following checks before cross-referencing:
1217
1218- Does the type of data stored in the block match what scrub is expecting?
1219
1220- Does the block belong to the owning structure that asked for the read?
1221
1222- Do the records fit within the block?
1223
1224- Are the records contained inside the block free of obvious corruptions?
1225
1226- Are the name hashes in the correct order?
1227
1228- Do node pointers within the btree point to valid block addresses for the type
1229  of btree?
1230
1231- Do child pointers point towards the leaves?
1232
1233- Do sibling pointers point across the same level?
1234
1235- For each node block record, does the record key accurate reflect the contents
1236  of the child block?
1237
1238Space allocation records are cross-referenced as follows:
1239
12401. Any space mentioned by any metadata structure are cross-referenced as
1241   follows:
1242
1243   - Does the reverse mapping index list only the appropriate owner as the
1244     owner of each block?
1245
1246   - Are none of the blocks claimed as free space?
1247
1248   - If these aren't file data blocks, are none of the blocks claimed as space
1249     shared by different owners?
1250
12512. Btree blocks are cross-referenced as follows:
1252
1253   - Everything in class 1 above.
1254
1255   - If there's a parent node block, do the keys listed for this block match the
1256     keyspace of this block?
1257
1258   - Do the sibling pointers point to valid blocks?  Of the same level?
1259
1260   - Do the child pointers point to valid blocks?  Of the next level down?
1261
12623. Free space btree records are cross-referenced as follows:
1263
1264   - Everything in class 1 and 2 above.
1265
1266   - Does the reverse mapping index list no owners of this space?
1267
1268   - Is this space not claimed by the inode index for inodes?
1269
1270   - Is it not mentioned by the reference count index?
1271
1272   - Is there a matching record in the other free space btree?
1273
12744. Inode btree records are cross-referenced as follows:
1275
1276   - Everything in class 1 and 2 above.
1277
1278   - Is there a matching record in free inode btree?
1279
1280   - Do cleared bits in the holemask correspond with inode clusters?
1281
1282   - Do set bits in the freemask correspond with inode records with zero link
1283     count?
1284
12855. Inode records are cross-referenced as follows:
1286
1287   - Everything in class 1.
1288
1289   - Do all the fields that summarize information about the file forks actually
1290     match those forks?
1291
1292   - Does each inode with zero link count correspond to a record in the free
1293     inode btree?
1294
12956. File fork space mapping records are cross-referenced as follows:
1296
1297   - Everything in class 1 and 2 above.
1298
1299   - Is this space not mentioned by the inode btrees?
1300
1301   - If this is a CoW fork mapping, does it correspond to a CoW entry in the
1302     reference count btree?
1303
13047. Reference count records are cross-referenced as follows:
1305
1306   - Everything in class 1 and 2 above.
1307
1308   - Within the space subkeyspace of the rmap btree (that is to say, all
1309     records mapped to a particular space extent and ignoring the owner info),
1310     are there the same number of reverse mapping records for each block as the
1311     reference count record claims?
1312
1313Proposed patchsets are the series to find gaps in
1314`refcount btree
1315<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-refcount-gaps>`_,
1316`inode btree
1317<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-inobt-gaps>`_, and
1318`rmap btree
1319<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-rmapbt-gaps>`_ records;
1320to find
1321`mergeable records
1322<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-detect-mergeable-records>`_;
1323and to
1324`improve cross referencing with rmap
1325<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-strengthen-rmap-checking>`_
1326before starting a repair.
1327
1328Checking Extended Attributes
1329````````````````````````````
1330
1331Extended attributes implement a key-value store that enable fragments of data
1332to be attached to any file.
1333Both the kernel and userspace can access the keys and values, subject to
1334namespace and privilege restrictions.
1335Most typically these fragments are metadata about the file -- origins, security
1336contexts, user-supplied labels, indexing information, etc.
1337
1338Names can be as long as 255 bytes and can exist in several different
1339namespaces.
1340Values can be as large as 64KB.
1341A file's extended attributes are stored in blocks mapped by the attr fork.
1342The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
1343Block 0 in the attribute fork is always the top of the structure, but otherwise
1344each of the three types of blocks can be found at any offset in the attr fork.
1345Leaf blocks contain attribute key records that point to the name and the value.
1346Names are always stored elsewhere in the same leaf block.
1347Values that are less than 3/4 the size of a filesystem block are also stored
1348elsewhere in the same leaf block.
1349Remote value blocks contain values that are too large to fit inside a leaf.
1350If the leaf information exceeds a single filesystem block, a dabtree (also
1351rooted at block 0) is created to map hashes of the attribute names to leaf
1352blocks in the attr fork.
1353
1354Checking an extended attribute structure is not so straightfoward due to the
1355lack of separation between attr blocks and index blocks.
1356Scrub must read each block mapped by the attr fork and ignore the non-leaf
1357blocks:
1358
13591. Walk the dabtree in the attr fork (if present) to ensure that there are no
1360   irregularities in the blocks or dabtree mappings that do not point to
1361   attr leaf blocks.
1362
13632. Walk the blocks of the attr fork looking for leaf blocks.
1364   For each entry inside a leaf:
1365
1366   a. Validate that the name does not contain invalid characters.
1367
1368   b. Read the attr value.
1369      This performs a named lookup of the attr name to ensure the correctness
1370      of the dabtree.
1371      If the value is stored in a remote block, this also validates the
1372      integrity of the remote value block.
1373
1374Checking and Cross-Referencing Directories
1375``````````````````````````````````````````
1376
1377The filesystem directory tree is a directed acylic graph structure, with files
1378constituting the nodes, and directory entries (dirents) constituting the edges.
1379Directories are a special type of file containing a set of mappings from a
1380255-byte sequence (name) to an inumber.
1381These are called directory entries, or dirents for short.
1382Each directory file must have exactly one directory pointing to the file.
1383A root directory points to itself.
1384Directory entries point to files of any type.
1385Each non-directory file may have multiple directories point to it.
1386
1387In XFS, directories are implemented as a file containing up to three 32GB
1388partitions.
1389The first partition contains directory entry data blocks.
1390Each data block contains variable-sized records associating a user-provided
1391name with an inumber and, optionally, a file type.
1392If the directory entry data grows beyond one block, the second partition (which
1393exists as post-EOF extents) is populated with a block containing free space
1394information and an index that maps hashes of the dirent names to directory data
1395blocks in the first partition.
1396This makes directory name lookups very fast.
1397If this second partition grows beyond one block, the third partition is
1398populated with a linear array of free space information for faster
1399expansions.
1400If the free space has been separated and the second partition grows again
1401beyond one block, then a dabtree is used to map hashes of dirent names to
1402directory data blocks.
1403
1404Checking a directory is pretty straightfoward:
1405
14061. Walk the dabtree in the second partition (if present) to ensure that there
1407   are no irregularities in the blocks or dabtree mappings that do not point to
1408   dirent blocks.
1409
14102. Walk the blocks of the first partition looking for directory entries.
1411   Each dirent is checked as follows:
1412
1413   a. Does the name contain no invalid characters?
1414
1415   b. Does the inumber correspond to an actual, allocated inode?
1416
1417   c. Does the child inode have a nonzero link count?
1418
1419   d. If a file type is included in the dirent, does it match the type of the
1420      inode?
1421
1422   e. If the child is a subdirectory, does the child's dotdot pointer point
1423      back to the parent?
1424
1425   f. If the directory has a second partition, perform a named lookup of the
1426      dirent name to ensure the correctness of the dabtree.
1427
14283. Walk the free space list in the third partition (if present) to ensure that
1429   the free spaces it describes are really unused.
1430
1431Checking operations involving :ref:`parents <dirparent>` and
1432:ref:`file link counts <nlinks>` are discussed in more detail in later
1433sections.
1434
1435Checking Directory/Attribute Btrees
1436^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1437
1438As stated in previous sections, the directory/attribute btree (dabtree) index
1439maps user-provided names to improve lookup times by avoiding linear scans.
1440Internally, it maps a 32-bit hash of the name to a block offset within the
1441appropriate file fork.
1442
1443The internal structure of a dabtree closely resembles the btrees that record
1444fixed-size metadata records -- each dabtree block contains a magic number, a
1445checksum, sibling pointers, a UUID, a tree level, and a log sequence number.
1446The format of leaf and node records are the same -- each entry points to the
1447next level down in the hierarchy, with dabtree node records pointing to dabtree
1448leaf blocks, and dabtree leaf records pointing to non-dabtree blocks elsewhere
1449in the fork.
1450
1451Checking and cross-referencing the dabtree is very similar to what is done for
1452space btrees:
1453
1454- Does the type of data stored in the block match what scrub is expecting?
1455
1456- Does the block belong to the owning structure that asked for the read?
1457
1458- Do the records fit within the block?
1459
1460- Are the records contained inside the block free of obvious corruptions?
1461
1462- Are the name hashes in the correct order?
1463
1464- Do node pointers within the dabtree point to valid fork offsets for dabtree
1465  blocks?
1466
1467- Do leaf pointers within the dabtree point to valid fork offsets for directory
1468  or attr leaf blocks?
1469
1470- Do child pointers point towards the leaves?
1471
1472- Do sibling pointers point across the same level?
1473
1474- For each dabtree node record, does the record key accurate reflect the
1475  contents of the child dabtree block?
1476
1477- For each dabtree leaf record, does the record key accurate reflect the
1478  contents of the directory or attr block?
1479
1480Cross-Referencing Summary Counters
1481``````````````````````````````````
1482
1483XFS maintains three classes of summary counters: available resources, quota
1484resource usage, and file link counts.
1485
1486In theory, the amount of available resources (data blocks, inodes, realtime
1487extents) can be found by walking the entire filesystem.
1488This would make for very slow reporting, so a transactional filesystem can
1489maintain summaries of this information in the superblock.
1490Cross-referencing these values against the filesystem metadata should be a
1491simple matter of walking the free space and inode metadata in each AG and the
1492realtime bitmap, but there are complications that will be discussed in
1493:ref:`more detail <fscounters>` later.
1494
1495:ref:`Quota usage <quotacheck>` and :ref:`file link count <nlinks>`
1496checking are sufficiently complicated to warrant separate sections.
1497
1498Post-Repair Reverification
1499``````````````````````````
1500
1501After performing a repair, the checking code is run a second time to validate
1502the new structure, and the results of the health assessment are recorded
1503internally and returned to the calling process.
1504This step is critical for enabling system administrator to monitor the status
1505of the filesystem and the progress of any repairs.
1506For developers, it is a useful means to judge the efficacy of error detection
1507and correction in the online and offline checking tools.
1508
1509Eventual Consistency vs. Online Fsck
1510------------------------------------
1511
1512Complex operations can make modifications to multiple per-AG data structures
1513with a chain of transactions.
1514These chains, once committed to the log, are restarted during log recovery if
1515the system crashes while processing the chain.
1516Because the AG header buffers are unlocked between transactions within a chain,
1517online checking must coordinate with chained operations that are in progress to
1518avoid incorrectly detecting inconsistencies due to pending chains.
1519Furthermore, online repair must not run when operations are pending because
1520the metadata are temporarily inconsistent with each other, and rebuilding is
1521not possible.
1522
1523Only online fsck has this requirement of total consistency of AG metadata, and
1524should be relatively rare as compared to filesystem change operations.
1525Online fsck coordinates with transaction chains as follows:
1526
1527* For each AG, maintain a count of intent items targetting that AG.
1528  The count should be bumped whenever a new item is added to the chain.
1529  The count should be dropped when the filesystem has locked the AG header
1530  buffers and finished the work.
1531
1532* When online fsck wants to examine an AG, it should lock the AG header
1533  buffers to quiesce all transaction chains that want to modify that AG.
1534  If the count is zero, proceed with the checking operation.
1535  If it is nonzero, cycle the buffer locks to allow the chain to make forward
1536  progress.
1537
1538This may lead to online fsck taking a long time to complete, but regular
1539filesystem updates take precedence over background checking activity.
1540Details about the discovery of this situation are presented in the
1541:ref:`next section <chain_coordination>`, and details about the solution
1542are presented :ref:`after that<intent_drains>`.
1543
1544.. _chain_coordination:
1545
1546Discovery of the Problem
1547````````````````````````
1548
1549Midway through the development of online scrubbing, the fsstress tests
1550uncovered a misinteraction between online fsck and compound transaction chains
1551created by other writer threads that resulted in false reports of metadata
1552inconsistency.
1553The root cause of these reports is the eventual consistency model introduced by
1554the expansion of deferred work items and compound transaction chains when
1555reverse mapping and reflink were introduced.
1556
1557Originally, transaction chains were added to XFS to avoid deadlocks when
1558unmapping space from files.
1559Deadlock avoidance rules require that AGs only be locked in increasing order,
1560which makes it impossible (say) to use a single transaction to free a space
1561extent in AG 7 and then try to free a now superfluous block mapping btree block
1562in AG 3.
1563To avoid these kinds of deadlocks, XFS creates Extent Freeing Intent (EFI) log
1564items to commit to freeing some space in one transaction while deferring the
1565actual metadata updates to a fresh transaction.
1566The transaction sequence looks like this:
1567
15681. The first transaction contains a physical update to the file's block mapping
1569   structures to remove the mapping from the btree blocks.
1570   It then attaches to the in-memory transaction an action item to schedule
1571   deferred freeing of space.
1572   Concretely, each transaction maintains a list of ``struct
1573   xfs_defer_pending`` objects, each of which maintains a list of ``struct
1574   xfs_extent_free_item`` objects.
1575   Returning to the example above, the action item tracks the freeing of both
1576   the unmapped space from AG 7 and the block mapping btree (BMBT) block from
1577   AG 3.
1578   Deferred frees recorded in this manner are committed in the log by creating
1579   an EFI log item from the ``struct xfs_extent_free_item`` object and
1580   attaching the log item to the transaction.
1581   When the log is persisted to disk, the EFI item is written into the ondisk
1582   transaction record.
1583   EFIs can list up to 16 extents to free, all sorted in AG order.
1584
15852. The second transaction contains a physical update to the free space btrees
1586   of AG 3 to release the former BMBT block and a second physical update to the
1587   free space btrees of AG 7 to release the unmapped file space.
1588   Observe that the the physical updates are resequenced in the correct order
1589   when possible.
1590   Attached to the transaction is a an extent free done (EFD) log item.
1591   The EFD contains a pointer to the EFI logged in transaction #1 so that log
1592   recovery can tell if the EFI needs to be replayed.
1593
1594If the system goes down after transaction #1 is written back to the filesystem
1595but before #2 is committed, a scan of the filesystem metadata would show
1596inconsistent filesystem metadata because there would not appear to be any owner
1597of the unmapped space.
1598Happily, log recovery corrects this inconsistency for us -- when recovery finds
1599an intent log item but does not find a corresponding intent done item, it will
1600reconstruct the incore state of the intent item and finish it.
1601In the example above, the log must replay both frees described in the recovered
1602EFI to complete the recovery phase.
1603
1604There are subtleties to XFS' transaction chaining strategy to consider:
1605
1606* Log items must be added to a transaction in the correct order to prevent
1607  conflicts with principal objects that are not held by the transaction.
1608  In other words, all per-AG metadata updates for an unmapped block must be
1609  completed before the last update to free the extent, and extents should not
1610  be reallocated until that last update commits to the log.
1611
1612* AG header buffers are released between each transaction in a chain.
1613  This means that other threads can observe an AG in an intermediate state,
1614  but as long as the first subtlety is handled, this should not affect the
1615  correctness of filesystem operations.
1616
1617* Unmounting the filesystem flushes all pending work to disk, which means that
1618  offline fsck never sees the temporary inconsistencies caused by deferred
1619  work item processing.
1620
1621In this manner, XFS employs a form of eventual consistency to avoid deadlocks
1622and increase parallelism.
1623
1624During the design phase of the reverse mapping and reflink features, it was
1625decided that it was impractical to cram all the reverse mapping updates for a
1626single filesystem change into a single transaction because a single file
1627mapping operation can explode into many small updates:
1628
1629* The block mapping update itself
1630* A reverse mapping update for the block mapping update
1631* Fixing the freelist
1632* A reverse mapping update for the freelist fix
1633
1634* A shape change to the block mapping btree
1635* A reverse mapping update for the btree update
1636* Fixing the freelist (again)
1637* A reverse mapping update for the freelist fix
1638
1639* An update to the reference counting information
1640* A reverse mapping update for the refcount update
1641* Fixing the freelist (a third time)
1642* A reverse mapping update for the freelist fix
1643
1644* Freeing any space that was unmapped and not owned by any other file
1645* Fixing the freelist (a fourth time)
1646* A reverse mapping update for the freelist fix
1647
1648* Freeing the space used by the block mapping btree
1649* Fixing the freelist (a fifth time)
1650* A reverse mapping update for the freelist fix
1651
1652Free list fixups are not usually needed more than once per AG per transaction
1653chain, but it is theoretically possible if space is very tight.
1654For copy-on-write updates this is even worse, because this must be done once to
1655remove the space from a staging area and again to map it into the file!
1656
1657To deal with this explosion in a calm manner, XFS expands its use of deferred
1658work items to cover most reverse mapping updates and all refcount updates.
1659This reduces the worst case size of transaction reservations by breaking the
1660work into a long chain of small updates, which increases the degree of eventual
1661consistency in the system.
1662Again, this generally isn't a problem because XFS orders its deferred work
1663items carefully to avoid resource reuse conflicts between unsuspecting threads.
1664
1665However, online fsck changes the rules -- remember that although physical
1666updates to per-AG structures are coordinated by locking the buffers for AG
1667headers, buffer locks are dropped between transactions.
1668Once scrub acquires resources and takes locks for a data structure, it must do
1669all the validation work without releasing the lock.
1670If the main lock for a space btree is an AG header buffer lock, scrub may have
1671interrupted another thread that is midway through finishing a chain.
1672For example, if a thread performing a copy-on-write has completed a reverse
1673mapping update but not the corresponding refcount update, the two AG btrees
1674will appear inconsistent to scrub and an observation of corruption will be
1675recorded.  This observation will not be correct.
1676If a repair is attempted in this state, the results will be catastrophic!
1677
1678Several other solutions to this problem were evaluated upon discovery of this
1679flaw and rejected:
1680
16811. Add a higher level lock to allocation groups and require writer threads to
1682   acquire the higher level lock in AG order before making any changes.
1683   This would be very difficult to implement in practice because it is
1684   difficult to determine which locks need to be obtained, and in what order,
1685   without simulating the entire operation.
1686   Performing a dry run of a file operation to discover necessary locks would
1687   make the filesystem very slow.
1688
16892. Make the deferred work coordinator code aware of consecutive intent items
1690   targeting the same AG and have it hold the AG header buffers locked across
1691   the transaction roll between updates.
1692   This would introduce a lot of complexity into the coordinator since it is
1693   only loosely coupled with the actual deferred work items.
1694   It would also fail to solve the problem because deferred work items can
1695   generate new deferred subtasks, but all subtasks must be complete before
1696   work can start on a new sibling task.
1697
16983. Teach online fsck to walk all transactions waiting for whichever lock(s)
1699   protect the data structure being scrubbed to look for pending operations.
1700   The checking and repair operations must factor these pending operations into
1701   the evaluations being performed.
1702   This solution is a nonstarter because it is *extremely* invasive to the main
1703   filesystem.
1704
1705.. _intent_drains:
1706
1707Intent Drains
1708`````````````
1709
1710Online fsck uses an atomic intent item counter and lock cycling to coordinate
1711with transaction chains.
1712There are two key properties to the drain mechanism.
1713First, the counter is incremented when a deferred work item is *queued* to a
1714transaction, and it is decremented after the associated intent done log item is
1715*committed* to another transaction.
1716The second property is that deferred work can be added to a transaction without
1717holding an AG header lock, but per-AG work items cannot be marked done without
1718locking that AG header buffer to log the physical updates and the intent done
1719log item.
1720The first property enables scrub to yield to running transaction chains, which
1721is an explicit deprioritization of online fsck to benefit file operations.
1722The second property of the drain is key to the correct coordination of scrub,
1723since scrub will always be able to decide if a conflict is possible.
1724
1725For regular filesystem code, the drain works as follows:
1726
17271. Call the appropriate subsystem function to add a deferred work item to a
1728   transaction.
1729
17302. The function calls ``xfs_defer_drain_bump`` to increase the counter.
1731
17323. When the deferred item manager wants to finish the deferred work item, it
1733   calls ``->finish_item`` to complete it.
1734
17354. The ``->finish_item`` implementation logs some changes and calls
1736   ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any threads
1737   waiting on the drain.
1738
17395. The subtransaction commits, which unlocks the resource associated with the
1740   intent item.
1741
1742For scrub, the drain works as follows:
1743
17441. Lock the resource(s) associated with the metadata being scrubbed.
1745   For example, a scan of the refcount btree would lock the AGI and AGF header
1746   buffers.
1747
17482. If the counter is zero (``xfs_defer_drain_busy`` returns false), there are no
1749   chains in progress and the operation may proceed.
1750
17513. Otherwise, release the resources grabbed in step 1.
1752
17534. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``), then go
1754   back to step 1 unless a signal has been caught.
1755
1756To avoid polling in step 4, the drain provides a waitqueue for scrub threads to
1757be woken up whenever the intent count drops to zero.
1758
1759The proposed patchset is the
1760`scrub intent drain series
1761<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-drain-intents>`_.
1762
1763.. _jump_labels:
1764
1765Static Keys (aka Jump Label Patching)
1766`````````````````````````````````````
1767
1768Online fsck for XFS separates the regular filesystem from the checking and
1769repair code as much as possible.
1770However, there are a few parts of online fsck (such as the intent drains, and
1771later, live update hooks) where it is useful for the online fsck code to know
1772what's going on in the rest of the filesystem.
1773Since it is not expected that online fsck will be constantly running in the
1774background, it is very important to minimize the runtime overhead imposed by
1775these hooks when online fsck is compiled into the kernel but not actively
1776running on behalf of userspace.
1777Taking locks in the hot path of a writer thread to access a data structure only
1778to find that no further action is necessary is expensive -- on the author's
1779computer, this have an overhead of 40-50ns per access.
1780Fortunately, the kernel supports dynamic code patching, which enables XFS to
1781replace a static branch to hook code with ``nop`` sleds when online fsck isn't
1782running.
1783This sled has an overhead of however long it takes the instruction decoder to
1784skip past the sled, which seems to be on the order of less than 1ns and
1785does not access memory outside of instruction fetching.
1786
1787When online fsck enables the static key, the sled is replaced with an
1788unconditional branch to call the hook code.
1789The switchover is quite expensive (~22000ns) but is paid entirely by the
1790program that invoked online fsck, and can be amortized if multiple threads
1791enter online fsck at the same time, or if multiple filesystems are being
1792checked at the same time.
1793Changing the branch direction requires taking the CPU hotplug lock, and since
1794CPU initialization requires memory allocation, online fsck must be careful not
1795to change a static key while holding any locks or resources that could be
1796accessed in the memory reclaim paths.
1797To minimize contention on the CPU hotplug lock, care should be taken not to
1798enable or disable static keys unnecessarily.
1799
1800Because static keys are intended to minimize hook overhead for regular
1801filesystem operations when xfs_scrub is not running, the intended usage
1802patterns are as follows:
1803
1804- The hooked part of XFS should declare a static-scoped static key that
1805  defaults to false.
1806  The ``DEFINE_STATIC_KEY_FALSE`` macro takes care of this.
1807  The static key itself should be declared as a ``static`` variable.
1808
1809- When deciding to invoke code that's only used by scrub, the regular
1810  filesystem should call the ``static_branch_unlikely`` predicate to avoid the
1811  scrub-only hook code if the static key is not enabled.
1812
1813- The regular filesystem should export helper functions that call
1814  ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
1815  static key.
1816  Wrapper functions make it easy to compile out the relevant code if the kernel
1817  distributor turns off online fsck at build time.
1818
1819- Scrub functions wanting to turn on scrub-only XFS functionality should call
1820  the ``xchk_fsgates_enable`` from the setup function to enable a specific
1821  hook.
1822  This must be done before obtaining any resources that are used by memory
1823  reclaim.
1824  Callers had better be sure they really need the functionality gated by the
1825  static key; the ``TRY_HARDER`` flag is useful here.
1826
1827Online scrub has resource acquisition helpers (e.g. ``xchk_perag_lock``) to
1828handle locking AGI and AGF buffers for all scrubber functions.
1829If it detects a conflict between scrub and the running transactions, it will
1830try to wait for intents to complete.
1831If the caller of the helper has not enabled the static key, the helper will
1832return -EDEADLOCK, which should result in the scrub being restarted with the
1833``TRY_HARDER`` flag set.
1834The scrub setup function should detect that flag, enable the static key, and
1835try the scrub again.
1836Scrub teardown disables all static keys obtained by ``xchk_fsgates_enable``.
1837
1838For more information, please see the kernel documentation of
1839Documentation/staging/static-keys.rst.
1840
1841.. _xfile:
1842
1843Pageable Kernel Memory
1844----------------------
1845
1846Some online checking functions work by scanning the filesystem to build a
1847shadow copy of an ondisk metadata structure in memory and comparing the two
1848copies.
1849For online repair to rebuild a metadata structure, it must compute the record
1850set that will be stored in the new structure before it can persist that new
1851structure to disk.
1852Ideally, repairs complete with a single atomic commit that introduces
1853a new data structure.
1854To meet these goals, the kernel needs to collect a large amount of information
1855in a place that doesn't require the correct operation of the filesystem.
1856
1857Kernel memory isn't suitable because:
1858
1859* Allocating a contiguous region of memory to create a C array is very
1860  difficult, especially on 32-bit systems.
1861
1862* Linked lists of records introduce double pointer overhead which is very high
1863  and eliminate the possibility of indexed lookups.
1864
1865* Kernel memory is pinned, which can drive the system into OOM conditions.
1866
1867* The system might not have sufficient memory to stage all the information.
1868
1869At any given time, online fsck does not need to keep the entire record set in
1870memory, which means that individual records can be paged out if necessary.
1871Continued development of online fsck demonstrated that the ability to perform
1872indexed data storage would also be very useful.
1873Fortunately, the Linux kernel already has a facility for byte-addressable and
1874pageable storage: tmpfs.
1875In-kernel graphics drivers (most notably i915) take advantage of tmpfs files
1876to store intermediate data that doesn't need to be in memory at all times, so
1877that usage precedent is already established.
1878Hence, the ``xfile`` was born!
1879
1880+--------------------------------------------------------------------------+
1881| **Historical Sidebar**:                                                  |
1882+--------------------------------------------------------------------------+
1883| The first edition of online repair inserted records into a new btree as  |
1884| it found them, which failed because filesystem could shut down with a    |
1885| built data structure, which would be live after recovery finished.       |
1886|                                                                          |
1887| The second edition solved the half-rebuilt structure problem by storing  |
1888| everything in memory, but frequently ran the system out of memory.       |
1889|                                                                          |
1890| The third edition solved the OOM problem by using linked lists, but the  |
1891| memory overhead of the list pointers was extreme.                        |
1892+--------------------------------------------------------------------------+
1893
1894xfile Access Models
1895```````````````````
1896
1897A survey of the intended uses of xfiles suggested these use cases:
1898
18991. Arrays of fixed-sized records (space management btrees, directory and
1900   extended attribute entries)
1901
19022. Sparse arrays of fixed-sized records (quotas and link counts)
1903
19043. Large binary objects (BLOBs) of variable sizes (directory and extended
1905   attribute names and values)
1906
19074. Staging btrees in memory (reverse mapping btrees)
1908
19095. Arbitrary contents (realtime space management)
1910
1911To support the first four use cases, high level data structures wrap the xfile
1912to share functionality between online fsck functions.
1913The rest of this section discusses the interfaces that the xfile presents to
1914four of those five higher level data structures.
1915The fifth use case is discussed in the :ref:`realtime summary <rtsummary>` case
1916study.
1917
1918The most general storage interface supported by the xfile enables the reading
1919and writing of arbitrary quantities of data at arbitrary offsets in the xfile.
1920This capability is provided by ``xfile_pread`` and ``xfile_pwrite`` functions,
1921which behave similarly to their userspace counterparts.
1922XFS is very record-based, which suggests that the ability to load and store
1923complete records is important.
1924To support these cases, a pair of ``xfile_obj_load`` and ``xfile_obj_store``
1925functions are provided to read and persist objects into an xfile.
1926They are internally the same as pread and pwrite, except that they treat any
1927error as an out of memory error.
1928For online repair, squashing error conditions in this manner is an acceptable
1929behavior because the only reaction is to abort the operation back to userspace.
1930All five xfile usecases can be serviced by these four functions.
1931
1932However, no discussion of file access idioms is complete without answering the
1933question, "But what about mmap?"
1934It is convenient to access storage directly with pointers, just like userspace
1935code does with regular memory.
1936Online fsck must not drive the system into OOM conditions, which means that
1937xfiles must be responsive to memory reclamation.
1938tmpfs can only push a pagecache folio to the swap cache if the folio is neither
1939pinned nor locked, which means the xfile must not pin too many folios.
1940
1941Short term direct access to xfile contents is done by locking the pagecache
1942folio and mapping it into kernel address space.
1943Programmatic access (e.g. pread and pwrite) uses this mechanism.
1944Folio locks are not supposed to be held for long periods of time, so long
1945term direct access to xfile contents is done by bumping the folio refcount,
1946mapping it into kernel address space, and dropping the folio lock.
1947These long term users *must* be responsive to memory reclaim by hooking into
1948the shrinker infrastructure to know when to release folios.
1949
1950The ``xfile_get_page`` and ``xfile_put_page`` functions are provided to
1951retrieve the (locked) folio that backs part of an xfile and to release it.
1952The only code to use these folio lease functions are the xfarray
1953:ref:`sorting<xfarray_sort>` algorithms and the :ref:`in-memory
1954btrees<xfbtree>`.
1955
1956xfile Access Coordination
1957`````````````````````````
1958
1959For security reasons, xfiles must be owned privately by the kernel.
1960They are marked ``S_PRIVATE`` to prevent interference from the security system,
1961must never be mapped into process file descriptor tables, and their pages must
1962never be mapped into userspace processes.
1963
1964To avoid locking recursion issues with the VFS, all accesses to the shmfs file
1965are performed by manipulating the page cache directly.
1966xfile writers call the ``->write_begin`` and ``->write_end`` functions of the
1967xfile's address space to grab writable pages, copy the caller's buffer into the
1968page, and release the pages.
1969xfile readers call ``shmem_read_mapping_page_gfp`` to grab pages directly
1970before copying the contents into the caller's buffer.
1971In other words, xfiles ignore the VFS read and write code paths to avoid
1972having to create a dummy ``struct kiocb`` and to avoid taking inode and
1973freeze locks.
1974tmpfs cannot be frozen, and xfiles must not be exposed to userspace.
1975
1976If an xfile is shared between threads to stage repairs, the caller must provide
1977its own locks to coordinate access.
1978For example, if a scrub function stores scan results in an xfile and needs
1979other threads to provide updates to the scanned data, the scrub function must
1980provide a lock for all threads to share.
1981
1982.. _xfarray:
1983
1984Arrays of Fixed-Sized Records
1985`````````````````````````````
1986
1987In XFS, each type of indexed space metadata (free space, inodes, reference
1988counts, file fork space, and reverse mappings) consists of a set of fixed-size
1989records indexed with a classic B+ tree.
1990Directories have a set of fixed-size dirent records that point to the names,
1991and extended attributes have a set of fixed-size attribute keys that point to
1992names and values.
1993Quota counters and file link counters index records with numbers.
1994During a repair, scrub needs to stage new records during the gathering step and
1995retrieve them during the btree building step.
1996
1997Although this requirement can be satisfied by calling the read and write
1998methods of the xfile directly, it is simpler for callers for there to be a
1999higher level abstraction to take care of computing array offsets, to provide
2000iterator functions, and to deal with sparse records and sorting.
2001The ``xfarray`` abstraction presents a linear array for fixed-size records atop
2002the byte-accessible xfile.
2003
2004.. _xfarray_access_patterns:
2005
2006Array Access Patterns
2007^^^^^^^^^^^^^^^^^^^^^
2008
2009Array access patterns in online fsck tend to fall into three categories.
2010Iteration of records is assumed to be necessary for all cases and will be
2011covered in the next section.
2012
2013The first type of caller handles records that are indexed by position.
2014Gaps may exist between records, and a record may be updated multiple times
2015during the collection step.
2016In other words, these callers want a sparse linearly addressed table file.
2017The typical use case are quota records or file link count records.
2018Access to array elements is performed programmatically via ``xfarray_load`` and
2019``xfarray_store`` functions, which wrap the similarly-named xfile functions to
2020provide loading and storing of array elements at arbitrary array indices.
2021Gaps are defined to be null records, and null records are defined to be a
2022sequence of all zero bytes.
2023Null records are detected by calling ``xfarray_element_is_null``.
2024They are created either by calling ``xfarray_unset`` to null out an existing
2025record or by never storing anything to an array index.
2026
2027The second type of caller handles records that are not indexed by position
2028and do not require multiple updates to a record.
2029The typical use case here is rebuilding space btrees and key/value btrees.
2030These callers can add records to the array without caring about array indices
2031via the ``xfarray_append`` function, which stores a record at the end of the
2032array.
2033For callers that require records to be presentable in a specific order (e.g.
2034rebuilding btree data), the ``xfarray_sort`` function can arrange the sorted
2035records; this function will be covered later.
2036
2037The third type of caller is a bag, which is useful for counting records.
2038The typical use case here is constructing space extent reference counts from
2039reverse mapping information.
2040Records can be put in the bag in any order, they can be removed from the bag
2041at any time, and uniqueness of records is left to callers.
2042The ``xfarray_store_anywhere`` function is used to insert a record in any
2043null record slot in the bag; and the ``xfarray_unset`` function removes a
2044record from the bag.
2045
2046The proposed patchset is the
2047`big in-memory array
2048<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=big-array>`_.
2049
2050Iterating Array Elements
2051^^^^^^^^^^^^^^^^^^^^^^^^
2052
2053Most users of the xfarray require the ability to iterate the records stored in
2054the array.
2055Callers can probe every possible array index with the following:
2056
2057.. code-block:: c
2058
2059	xfarray_idx_t i;
2060	foreach_xfarray_idx(array, i) {
2061	    xfarray_load(array, i, &rec);
2062
2063	    /* do something with rec */
2064	}
2065
2066All users of this idiom must be prepared to handle null records or must already
2067know that there aren't any.
2068
2069For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
2070function ignores indices in the xfarray that have never been written to by
2071calling ``xfile_seek_data`` (which internally uses ``SEEK_DATA``) to skip areas
2072of the array that are not populated with memory pages.
2073Once it finds a page, it will skip the zeroed areas of the page.
2074
2075.. code-block:: c
2076
2077	xfarray_idx_t i = XFARRAY_CURSOR_INIT;
2078	while ((ret = xfarray_iter(array, &i, &rec)) == 1) {
2079	    /* do something with rec */
2080	}
2081
2082.. _xfarray_sort:
2083
2084Sorting Array Elements
2085^^^^^^^^^^^^^^^^^^^^^^
2086
2087During the fourth demonstration of online repair, a community reviewer remarked
2088that for performance reasons, online repair ought to load batches of records
2089into btree record blocks instead of inserting records into a new btree one at a
2090time.
2091The btree insertion code in XFS is responsible for maintaining correct ordering
2092of the records, so naturally the xfarray must also support sorting the record
2093set prior to bulk loading.
2094
2095Case Study: Sorting xfarrays
2096~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2097
2098The sorting algorithm used in the xfarray is actually a combination of adaptive
2099quicksort and a heapsort subalgorithm in the spirit of
2100`Sedgewick <https://algs4.cs.princeton.edu/23quicksort/>`_ and
2101`pdqsort <https://github.com/orlp/pdqsort>`_, with customizations for the Linux
2102kernel.
2103To sort records in a reasonably short amount of time, ``xfarray`` takes
2104advantage of the binary subpartitioning offered by quicksort, but it also uses
2105heapsort to hedge aginst performance collapse if the chosen quicksort pivots
2106are poor.
2107Both algorithms are (in general) O(n * lg(n)), but there is a wide performance
2108gulf between the two implementations.
2109
2110The Linux kernel already contains a reasonably fast implementation of heapsort.
2111It only operates on regular C arrays, which limits the scope of its usefulness.
2112There are two key places where the xfarray uses it:
2113
2114* Sorting any record subset backed by a single xfile page.
2115
2116* Loading a small number of xfarray records from potentially disparate parts
2117  of the xfarray into a memory buffer, and sorting the buffer.
2118
2119In other words, ``xfarray`` uses heapsort to constrain the nested recursion of
2120quicksort, thereby mitigating quicksort's worst runtime behavior.
2121
2122Choosing a quicksort pivot is a tricky business.
2123A good pivot splits the set to sort in half, leading to the divide and conquer
2124behavior that is crucial to  O(n * lg(n)) performance.
2125A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
2126runtime.
2127The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
2128records into a memory buffer and using the kernel heapsort to identify the
2129median of the nine.
2130
2131Most modern quicksort implementations employ Tukey's "ninther" to select a
2132pivot from a classic C array.
2133Typical ninther implementations pick three unique triads of records, sort each
2134of the triads, and then sort the middle value of each triad to determine the
2135ninther value.
2136As stated previously, however, xfile accesses are not entirely cheap.
2137It turned out to be much more performant to read the nine elements into a
2138memory buffer, run the kernel's in-memory heapsort on the buffer, and choose
2139the 4th element of that buffer as the pivot.
2140Tukey's ninthers are described in J. W. Tukey, `The ninther, a technique for
2141low-effort robust (resistant) location in large samples`, in *Contributions to
2142Survey Sampling and Applied Statistics*, edited by H. David, (Academic Press,
21431978), pp. 251–257.
2144
2145The partitioning of quicksort is fairly textbook -- rearrange the record
2146subset around the pivot, then set up the current and next stack frames to
2147sort with the larger and the smaller halves of the pivot, respectively.
2148This keeps the stack space requirements to log2(record count).
2149
2150As a final performance optimization, the hi and lo scanning phase of quicksort
2151keeps examined xfile pages mapped in the kernel for as long as possible to
2152reduce map/unmap cycles.
2153Surprisingly, this reduces overall sort runtime by nearly half again after
2154accounting for the application of heapsort directly onto xfile pages.
2155
2156.. _xfblob:
2157
2158Blob Storage
2159````````````
2160
2161Extended attributes and directories add an additional requirement for staging
2162records: arbitrary byte sequences of finite length.
2163Each directory entry record needs to store entry name,
2164and each extended attribute needs to store both the attribute name and value.
2165The names, keys, and values can consume a large amount of memory, so the
2166``xfblob`` abstraction was created to simplify management of these blobs
2167atop an xfile.
2168
2169Blob arrays provide ``xfblob_load`` and ``xfblob_store`` functions to retrieve
2170and persist objects.
2171The store function returns a magic cookie for every object that it persists.
2172Later, callers provide this cookie to the ``xblob_load`` to recall the object.
2173The ``xfblob_free`` function frees a specific blob, and the ``xfblob_truncate``
2174function frees them all because compaction is not needed.
2175
2176The details of repairing directories and extended attributes will be discussed
2177in a subsequent section about atomic extent swapping.
2178However, it should be noted that these repair functions only use blob storage
2179to cache a small number of entries before adding them to a temporary ondisk
2180file, which is why compaction is not required.
2181
2182The proposed patchset is at the start of the
2183`extended attribute repair
2184<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-xattrs>`_ series.
2185
2186.. _xfbtree:
2187
2188In-Memory B+Trees
2189`````````````````
2190
2191The chapter about :ref:`secondary metadata<secondary_metadata>` mentioned that
2192checking and repairing of secondary metadata commonly requires coordination
2193between a live metadata scan of the filesystem and writer threads that are
2194updating that metadata.
2195Keeping the scan data up to date requires requires the ability to propagate
2196metadata updates from the filesystem into the data being collected by the scan.
2197This *can* be done by appending concurrent updates into a separate log file and
2198applying them before writing the new metadata to disk, but this leads to
2199unbounded memory consumption if the rest of the system is very busy.
2200Another option is to skip the side-log and commit live updates from the
2201filesystem directly into the scan data, which trades more overhead for a lower
2202maximum memory requirement.
2203In both cases, the data structure holding the scan results must support indexed
2204access to perform well.
2205
2206Given that indexed lookups of scan data is required for both strategies, online
2207fsck employs the second strategy of committing live updates directly into
2208scan data.
2209Because xfarrays are not indexed and do not enforce record ordering, they
2210are not suitable for this task.
2211Conveniently, however, XFS has a library to create and maintain ordered reverse
2212mapping records: the existing rmap btree code!
2213If only there was a means to create one in memory.
2214
2215Recall that the :ref:`xfile <xfile>` abstraction represents memory pages as a
2216regular file, which means that the kernel can create byte or block addressable
2217virtual address spaces at will.
2218The XFS buffer cache specializes in abstracting IO to block-oriented  address
2219spaces, which means that adaptation of the buffer cache to interface with
2220xfiles enables reuse of the entire btree library.
2221Btrees built atop an xfile are collectively known as ``xfbtrees``.
2222The next few sections describe how they actually work.
2223
2224The proposed patchset is the
2225`in-memory btree
2226<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=in-memory-btrees>`_
2227series.
2228
2229Using xfiles as a Buffer Cache Target
2230^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2231
2232Two modifications are necessary to support xfiles as a buffer cache target.
2233The first is to make it possible for the ``struct xfs_buftarg`` structure to
2234host the ``struct xfs_buf`` rhashtable, because normally those are held by a
2235per-AG structure.
2236The second change is to modify the buffer ``ioapply`` function to "read" cached
2237pages from the xfile and "write" cached pages back to the xfile.
2238Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
2239since the xfile does not provide any locking on its own.
2240With this adaptation in place, users of the xfile-backed buffer cache use
2241exactly the same APIs as users of the disk-backed buffer cache.
2242The separation between xfile and buffer cache implies higher memory usage since
2243they do not share pages, but this property could some day enable transactional
2244updates to an in-memory btree.
2245Today, however, it simply eliminates the need for new code.
2246
2247Space Management with an xfbtree
2248^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2249
2250Space management for an xfile is very simple -- each btree block is one memory
2251page in size.
2252These blocks use the same header format as an on-disk btree, but the in-memory
2253block verifiers ignore the checksums, assuming that xfile memory is no more
2254corruption-prone than regular DRAM.
2255Reusing existing code here is more important than absolute memory efficiency.
2256
2257The very first block of an xfile backing an xfbtree contains a header block.
2258The header describes the owner, height, and the block number of the root
2259xfbtree block.
2260
2261To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
2262If there are no gaps, create one by extending the length of the xfile.
2263Preallocate space for the block with ``xfile_prealloc``, and hand back the
2264location.
2265To free an xfbtree block, use ``xfile_discard`` (which internally uses
2266``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
2267
2268Populating an xfbtree
2269^^^^^^^^^^^^^^^^^^^^^
2270
2271An online fsck function that wants to create an xfbtree should proceed as
2272follows:
2273
22741. Call ``xfile_create`` to create an xfile.
2275
22762. Call ``xfs_alloc_memory_buftarg`` to create a buffer cache target structure
2277   pointing to the xfile.
2278
22793. Pass the buffer cache target, buffer ops, and other information to
2280   ``xfbtree_create`` to write an initial tree header and root block to the
2281   xfile.
2282   Each btree type should define a wrapper that passes necessary arguments to
2283   the creation function.
2284   For example, rmap btrees define ``xfs_rmapbt_mem_create`` to take care of
2285   all the necessary details for callers.
2286   A ``struct xfbtree`` object will be returned.
2287
22884. Pass the xfbtree object to the btree cursor creation function for the
2289   btree type.
2290   Following the example above, ``xfs_rmapbt_mem_cursor`` takes care of this
2291   for callers.
2292
22935. Pass the btree cursor to the regular btree functions to make queries against
2294   and to update the in-memory btree.
2295   For example, a btree cursor for an rmap xfbtree can be passed to the
2296   ``xfs_rmap_*`` functions just like any other btree cursor.
2297   See the :ref:`next section<xfbtree_commit>` for information on dealing with
2298   xfbtree updates that are logged to a transaction.
2299
23006. When finished, delete the btree cursor, destroy the xfbtree object, free the
2301   buffer target, and the destroy the xfile to release all resources.
2302
2303.. _xfbtree_commit:
2304
2305Committing Logged xfbtree Buffers
2306^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2307
2308Although it is a clever hack to reuse the rmap btree code to handle the staging
2309structure, the ephemeral nature of the in-memory btree block storage presents
2310some challenges of its own.
2311The XFS transaction manager must not commit buffer log items for buffers backed
2312by an xfile because the log format does not understand updates for devices
2313other than the data device.
2314An ephemeral xfbtree probably will not exist by the time the AIL checkpoints
2315log transactions back into the filesystem, and certainly won't exist during
2316log recovery.
2317For these reasons, any code updating an xfbtree in transaction context must
2318remove the buffer log items from the transaction and write the updates into the
2319backing xfile before committing or cancelling the transaction.
2320
2321The ``xfbtree_trans_commit`` and ``xfbtree_trans_cancel`` functions implement
2322this functionality as follows:
2323
23241. Find each buffer log item whose buffer targets the xfile.
2325
23262. Record the dirty/ordered status of the log item.
2327
23283. Detach the log item from the buffer.
2329
23304. Queue the buffer to a special delwri list.
2331
23325. Clear the transaction dirty flag if the only dirty log items were the ones
2333   that were detached in step 3.
2334
23356. Submit the delwri list to commit the changes to the xfile, if the updates
2336   are being committed.
2337
2338After removing xfile logged buffers from the transaction in this manner, the
2339transaction can be committed or cancelled.
2340
2341Bulk Loading of Ondisk B+Trees
2342------------------------------
2343
2344As mentioned previously, early iterations of online repair built new btree
2345structures by creating a new btree and adding observations individually.
2346Loading a btree one record at a time had a slight advantage of not requiring
2347the incore records to be sorted prior to commit, but was very slow and leaked
2348blocks if the system went down during a repair.
2349Loading records one at a time also meant that repair could not control the
2350loading factor of the blocks in the new btree.
2351
2352Fortunately, the venerable ``xfs_repair`` tool had a more efficient means for
2353rebuilding a btree index from a collection of records -- bulk btree loading.
2354This was implemented rather inefficiently code-wise, since ``xfs_repair``
2355had separate copy-pasted implementations for each btree type.
2356
2357To prepare for online fsck, each of the four bulk loaders were studied, notes
2358were taken, and the four were refactored into a single generic btree bulk
2359loading mechanism.
2360Those notes in turn have been refreshed and are presented below.
2361
2362Geometry Computation
2363````````````````````
2364
2365The zeroth step of bulk loading is to assemble the entire record set that will
2366be stored in the new btree, and sort the records.
2367Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
2368btree from the record set, the type of btree, and any load factor preferences.
2369This information is required for resource reservation.
2370
2371First, the geometry computation computes the minimum and maximum records that
2372will fit in a leaf block from the size of a btree block and the size of the
2373block header.
2374Roughly speaking, the maximum number of records is::
2375
2376        maxrecs = (block_size - header_size) / record_size
2377
2378The XFS design specifies that btree blocks should be merged when possible,
2379which means the minimum number of records is half of maxrecs::
2380
2381        minrecs = maxrecs / 2
2382
2383The next variable to determine is the desired loading factor.
2384This must be at least minrecs and no more than maxrecs.
2385Choosing minrecs is undesirable because it wastes half the block.
2386Choosing maxrecs is also undesirable because adding a single record to each
2387newly rebuilt leaf block will cause a tree split, which causes a noticeable
2388drop in performance immediately afterwards.
2389The default loading factor was chosen to be 75% of maxrecs, which provides a
2390reasonably compact structure without any immediate split penalties::
2391
2392        default_load_factor = (maxrecs + minrecs) / 2
2393
2394If space is tight, the loading factor will be set to maxrecs to try to avoid
2395running out of space::
2396
2397        leaf_load_factor = enough space ? default_load_factor : maxrecs
2398
2399Load factor is computed for btree node blocks using the combined size of the
2400btree key and pointer as the record size::
2401
2402        maxrecs = (block_size - header_size) / (key_size + ptr_size)
2403        minrecs = maxrecs / 2
2404        node_load_factor = enough space ? default_load_factor : maxrecs
2405
2406Once that's done, the number of leaf blocks required to store the record set
2407can be computed as::
2408
2409        leaf_blocks = ceil(record_count / leaf_load_factor)
2410
2411The number of node blocks needed to point to the next level down in the tree
2412is computed as::
2413
2414        n_blocks = (n == 0 ? leaf_blocks : node_blocks[n])
2415        node_blocks[n + 1] = ceil(n_blocks / node_load_factor)
2416
2417The entire computation is performed recursively until the current level only
2418needs one block.
2419The resulting geometry is as follows:
2420
2421- For AG-rooted btrees, this level is the root level, so the height of the new
2422  tree is ``level + 1`` and the space needed is the summation of the number of
2423  blocks on each level.
2424
2425- For inode-rooted btrees where the records in the top level do not fit in the
2426  inode fork area, the height is ``level + 2``, the space needed is the
2427  summation of the number of blocks on each level, and the inode fork points to
2428  the root block.
2429
2430- For inode-rooted btrees where the records in the top level can be stored in
2431  the inode fork area, then the root block can be stored in the inode, the
2432  height is ``level + 1``, and the space needed is one less than the summation
2433  of the number of blocks on each level.
2434  This only becomes relevant when non-bmap btrees gain the ability to root in
2435  an inode, which is a future patchset and only included here for completeness.
2436
2437.. _newbt:
2438
2439Reserving New B+Tree Blocks
2440```````````````````````````
2441
2442Once repair knows the number of blocks needed for the new btree, it allocates
2443those blocks using the free space information.
2444Each reserved extent is tracked separately by the btree builder state data.
2445To improve crash resilience, the reservation code also logs an Extent Freeing
2446Intent (EFI) item in the same transaction as each space allocation and attaches
2447its in-memory ``struct xfs_extent_free_item`` object to the space reservation.
2448If the system goes down, log recovery will use the unfinished EFIs to free the
2449unused space, the free space, leaving the filesystem unchanged.
2450
2451Each time the btree builder claims a block for the btree from a reserved
2452extent, it updates the in-memory reservation to reflect the claimed space.
2453Block reservation tries to allocate as much contiguous space as possible to
2454reduce the number of EFIs in play.
2455
2456While repair is writing these new btree blocks, the EFIs created for the space
2457reservations pin the tail of the ondisk log.
2458It's possible that other parts of the system will remain busy and push the head
2459of the log towards the pinned tail.
2460To avoid livelocking the filesystem, the EFIs must not pin the tail of the log
2461for too long.
2462To alleviate this problem, the dynamic relogging capability of the deferred ops
2463mechanism is reused here to commit a transaction at the log head containing an
2464EFD for the old EFI and new EFI at the head.
2465This enables the log to release the old EFI to keep the log moving forwards.
2466
2467EFIs have a role to play during the commit and reaping phases; please see the
2468next section and the section about :ref:`reaping<reaping>` for more details.
2469
2470Proposed patchsets are the
2471`bitmap rework
2472<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-bitmap-rework>`_
2473and the
2474`preparation for bulk loading btrees
2475<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-prep-for-bulk-loading>`_.
2476
2477
2478Writing the New Tree
2479````````````````````
2480
2481This part is pretty simple -- the btree builder (``xfs_btree_bulkload``) claims
2482a block from the reserved list, writes the new btree block header, fills the
2483rest of the block with records, and adds the new leaf block to a list of
2484written blocks::
2485
2486  ┌────┐
2487  │leaf│
2488  │RRR │
2489  └────┘
2490
2491Sibling pointers are set every time a new block is added to the level::
2492
2493  ┌────┐ ┌────┐ ┌────┐ ┌────┐
2494  │leaf│→│leaf│→│leaf│→│leaf│
2495  │RRR │←│RRR │←│RRR │←│RRR │
2496  └────┘ └────┘ └────┘ └────┘
2497
2498When it finishes writing the record leaf blocks, it moves on to the node
2499blocks
2500To fill a node block, it walks each block in the next level down in the tree
2501to compute the relevant keys and write them into the parent node::
2502
2503      ┌────┐       ┌────┐
2504      │node│──────→│node│
2505      │PP  │←──────│PP  │
2506      └────┘       └────┘
2507      ↙   ↘         ↙   ↘
2508  ┌────┐ ┌────┐ ┌────┐ ┌────┐
2509  │leaf│→│leaf│→│leaf│→│leaf│
2510  │RRR │←│RRR │←│RRR │←│RRR │
2511  └────┘ └────┘ └────┘ └────┘
2512
2513When it reaches the root level, it is ready to commit the new btree!::
2514
2515          ┌─────────┐
2516          │  root   │
2517          │   PP    │
2518          └─────────┘
2519          ↙         ↘
2520      ┌────┐       ┌────┐
2521      │node│──────→│node│
2522      │PP  │←──────│PP  │
2523      └────┘       └────┘
2524      ↙   ↘         ↙   ↘
2525  ┌────┐ ┌────┐ ┌────┐ ┌────┐
2526  │leaf│→│leaf│→│leaf│→│leaf│
2527  │RRR │←│RRR │←│RRR │←│RRR │
2528  └────┘ └────┘ └────┘ └────┘
2529
2530The first step to commit the new btree is to persist the btree blocks to disk
2531synchronously.
2532This is a little complicated because a new btree block could have been freed
2533in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
2534remove the (stale) buffer from the AIL list before it can write the new blocks
2535to disk.
2536Blocks are queued for IO using a delwri list and written in one large batch
2537with ``xfs_buf_delwri_submit``.
2538
2539Once the new blocks have been persisted to disk, control returns to the
2540individual repair function that called the bulk loader.
2541The repair function must log the location of the new root in a transaction,
2542clean up the space reservations that were made for the new btree, and reap the
2543old metadata blocks:
2544
25451. Commit the location of the new btree root.
2546
25472. For each incore reservation:
2548
2549   a. Log Extent Freeing Done (EFD) items for all the space that was consumed
2550      by the btree builder.  The new EFDs must point to the EFIs attached to
2551      the reservation to prevent log recovery from freeing the new blocks.
2552
2553   b. For unclaimed portions of incore reservations, create a regular deferred
2554      extent free work item to be free the unused space later in the
2555      transaction chain.
2556
2557   c. The EFDs and EFIs logged in steps 2a and 2b must not overrun the
2558      reservation of the committing transaction.
2559      If the btree loading code suspects this might be about to happen, it must
2560      call ``xrep_defer_finish`` to clear out the deferred work and obtain a
2561      fresh transaction.
2562
25633. Clear out the deferred work a second time to finish the commit and clean
2564   the repair transaction.
2565
2566The transaction rolling in steps 2c and 3 represent a weakness in the repair
2567algorithm, because a log flush and a crash before the end of the reap step can
2568result in space leaking.
2569Online repair functions minimize the chances of this occuring by using very
2570large transactions, which each can accomodate many thousands of block freeing
2571instructions.
2572Repair moves on to reaping the old blocks, which will be presented in a
2573subsequent :ref:`section<reaping>` after a few case studies of bulk loading.
2574
2575Case Study: Rebuilding the Inode Index
2576^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2577
2578The high level process to rebuild the inode index btree is:
2579
25801. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
2581   records from the inode chunk information and a bitmap of the old inode btree
2582   blocks.
2583
25842. Append the records to an xfarray in inode order.
2585
25863. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2587   of blocks needed for the inode btree.
2588   If the free space inode btree is enabled, call it again to estimate the
2589   geometry of the finobt.
2590
25914. Allocate the number of blocks computed in the previous step.
2592
25935. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2594   generate the internal node blocks.
2595   If the free space inode btree is enabled, call it again to load the finobt.
2596
25976. Commit the location of the new btree root block(s) to the AGI.
2598
25997. Reap the old btree blocks using the bitmap created in step 1.
2600
2601Details are as follows.
2602
2603The inode btree maps inumbers to the ondisk location of the associated
2604inode records, which means that the inode btrees can be rebuilt from the
2605reverse mapping information.
2606Reverse mapping records with an owner of ``XFS_RMAP_OWN_INOBT`` marks the
2607location of the old inode btree blocks.
2608Each reverse mapping record with an owner of ``XFS_RMAP_OWN_INODES`` marks the
2609location of at least one inode cluster buffer.
2610A cluster is the smallest number of ondisk inodes that can be allocated or
2611freed in a single transaction; it is never smaller than 1 fs block or 4 inodes.
2612
2613For the space represented by each inode cluster, ensure that there are no
2614records in the free space btrees nor any records in the reference count btree.
2615If there are, the space metadata inconsistencies are reason enough to abort the
2616operation.
2617Otherwise, read each cluster buffer to check that its contents appear to be
2618ondisk inodes and to decide if the file is allocated
2619(``xfs_dinode.i_mode != 0``) or free (``xfs_dinode.i_mode == 0``).
2620Accumulate the results of successive inode cluster buffer reads until there is
2621enough information to fill a single inode chunk record, which is 64 consecutive
2622numbers in the inumber keyspace.
2623If the chunk is sparse, the chunk record may include holes.
2624
2625Once the repair function accumulates one chunk's worth of data, it calls
2626``xfarray_append`` to add the inode btree record to the xfarray.
2627This xfarray is walked twice during the btree creation step -- once to populate
2628the inode btree with all inode chunk records, and a second time to populate the
2629free inode btree with records for chunks that have free non-sparse inodes.
2630The number of records for the inode btree is the number of xfarray records,
2631but the record count for the free inode btree has to be computed as inode chunk
2632records are stored in the xfarray.
2633
2634The proposed patchset is the
2635`AG btree repair
2636<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
2637series.
2638
2639Case Study: Rebuilding the Space Reference Counts
2640^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2641
2642Reverse mapping records are used to rebuild the reference count information.
2643Reference counts are required for correct operation of copy on write for shared
2644file data.
2645Imagine the reverse mapping entries as rectangles representing extents of
2646physical blocks, and that the rectangles can be laid down to allow them to
2647overlap each other.
2648From the diagram below, it is apparent that a reference count record must start
2649or end wherever the height of the stack changes.
2650In other words, the record emission stimulus is level-triggered::
2651
2652                        █    ███
2653              ██      █████ ████   ███        ██████
2654        ██   ████     ███████████ ████     █████████
2655        ████████████████████████████████ ███████████
2656        ^ ^  ^^ ^^    ^ ^^ ^^^  ^^^^  ^ ^^ ^  ^     ^
2657        2 1  23 21    3 43 234  2123  1 01 2  3     0
2658
2659The ondisk reference count btree does not store the refcount == 0 cases because
2660the free space btree already records which blocks are free.
2661Extents being used to stage copy-on-write operations should be the only records
2662with refcount == 1.
2663Single-owner file blocks aren't recorded in either the free space or the
2664reference count btrees.
2665
2666The high level process to rebuild the reference count btree is:
2667
26681. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
2669   records for any space having more than one reverse mapping and add them to
2670   the xfarray.
2671   Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
2672   because these are extents allocated to stage a copy on write operation and
2673   are tracked in the refcount btree.
2674
2675   Use any records owned by ``XFS_RMAP_OWN_REFC`` to create a bitmap of old
2676   refcount btree blocks.
2677
26782. Sort the records in physical extent order, putting the CoW staging extents
2679   at the end of the xfarray.
2680   This matches the sorting order of records in the refcount btree.
2681
26823. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2683   of blocks needed for the new tree.
2684
26854. Allocate the number of blocks computed in the previous step.
2686
26875. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2688   generate the internal node blocks.
2689
26906. Commit the location of new btree root block to the AGF.
2691
26927. Reap the old btree blocks using the bitmap created in step 1.
2693
2694Details are as follows; the same algorithm is used by ``xfs_repair`` to
2695generate refcount information from reverse mapping records.
2696
2697- Until the reverse mapping btree runs out of records:
2698
2699  - Retrieve the next record from the btree and put it in a bag.
2700
2701  - Collect all records with the same starting block from the btree and put
2702    them in the bag.
2703
2704  - While the bag isn't empty:
2705
2706    - Among the mappings in the bag, compute the lowest block number where the
2707      reference count changes.
2708      This position will be either the starting block number of the next
2709      unprocessed reverse mapping or the next block after the shortest mapping
2710      in the bag.
2711
2712    - Remove all mappings from the bag that end at this position.
2713
2714    - Collect all reverse mappings that start at this position from the btree
2715      and put them in the bag.
2716
2717    - If the size of the bag changed and is greater than one, create a new
2718      refcount record associating the block number range that we just walked to
2719      the size of the bag.
2720
2721The bag-like structure in this case is a type 2 xfarray as discussed in the
2722:ref:`xfarray access patterns<xfarray_access_patterns>` section.
2723Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
2724removed via ``xfarray_unset``.
2725Bag members are examined through ``xfarray_iter`` loops.
2726
2727The proposed patchset is the
2728`AG btree repair
2729<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
2730series.
2731
2732Case Study: Rebuilding File Fork Mapping Indices
2733^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2734
2735The high level process to rebuild a data/attr fork mapping btree is:
2736
27371. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
2738   records from the reverse mapping records for that inode and fork.
2739   Append these records to an xfarray.
2740   Compute the bitmap of the old bmap btree blocks from the ``BMBT_BLOCK``
2741   records.
2742
27432. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2744   of blocks needed for the new tree.
2745
27463. Sort the records in file offset order.
2747
27484. If the extent records would fit in the inode fork immediate area, commit the
2749   records to that immediate area and skip to step 8.
2750
27515. Allocate the number of blocks computed in the previous step.
2752
27536. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2754   generate the internal node blocks.
2755
27567. Commit the new btree root block to the inode fork immediate area.
2757
27588. Reap the old btree blocks using the bitmap created in step 1.
2759
2760There are some complications here:
2761First, it's possible to move the fork offset to adjust the sizes of the
2762immediate areas if the data and attr forks are not both in BMBT format.
2763Second, if there are sufficiently few fork mappings, it may be possible to use
2764EXTENTS format instead of BMBT, which may require a conversion.
2765Third, the incore extent map must be reloaded carefully to avoid disturbing
2766any delayed allocation extents.
2767
2768The proposed patchset is the
2769`file mapping repair
2770<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-file-mappings>`_
2771series.
2772
2773.. _reaping:
2774
2775Reaping Old Metadata Blocks
2776---------------------------
2777
2778Whenever online fsck builds a new data structure to replace one that is
2779suspect, there is a question of how to find and dispose of the blocks that
2780belonged to the old structure.
2781The laziest method of course is not to deal with them at all, but this slowly
2782leads to service degradations as space leaks out of the filesystem.
2783Hopefully, someone will schedule a rebuild of the free space information to
2784plug all those leaks.
2785Offline repair rebuilds all space metadata after recording the usage of
2786the files and directories that it decides not to clear, hence it can build new
2787structures in the discovered free space and avoid the question of reaping.
2788
2789As part of a repair, online fsck relies heavily on the reverse mapping records
2790to find space that is owned by the corresponding rmap owner yet truly free.
2791Cross referencing rmap records with other rmap records is necessary because
2792there may be other data structures that also think they own some of those
2793blocks (e.g. crosslinked trees).
2794Permitting the block allocator to hand them out again will not push the system
2795towards consistency.
2796
2797For space metadata, the process of finding extents to dispose of generally
2798follows this format:
2799
28001. Create a bitmap of space used by data structures that must be preserved.
2801   The space reservations used to create the new metadata can be used here if
2802   the same rmap owner code is used to denote all of the objects being rebuilt.
2803
28042. Survey the reverse mapping data to create a bitmap of space owned by the
2805   same ``XFS_RMAP_OWN_*`` number for the metadata that is being preserved.
2806
28073. Use the bitmap disunion operator to subtract (1) from (2).
2808   The remaining set bits represent candidate extents that could be freed.
2809   The process moves on to step 4 below.
2810
2811Repairs for file-based metadata such as extended attributes, directories,
2812symbolic links, quota files and realtime bitmaps are performed by building a
2813new structure attached to a temporary file and swapping the forks.
2814Afterward, the mappings in the old file fork are the candidate blocks for
2815disposal.
2816
2817The process for disposing of old extents is as follows:
2818
28194. For each candidate extent, count the number of reverse mapping records for
2820   the first block in that extent that do not have the same rmap owner for the
2821   data structure being repaired.
2822
2823   - If zero, the block has a single owner and can be freed.
2824
2825   - If not, the block is part of a crosslinked structure and must not be
2826     freed.
2827
28285. Starting with the next block in the extent, figure out how many more blocks
2829   have the same zero/nonzero other owner status as that first block.
2830
28316. If the region is crosslinked, delete the reverse mapping entry for the
2832   structure being repaired and move on to the next region.
2833
28347. If the region is to be freed, mark any corresponding buffers in the buffer
2835   cache as stale to prevent log writeback.
2836
28378. Free the region and move on.
2838
2839However, there is one complication to this procedure.
2840Transactions are of finite size, so the reaping process must be careful to roll
2841the transactions to avoid overruns.
2842Overruns come from two sources:
2843
2844a. EFIs logged on behalf of space that is no longer occupied
2845
2846b. Log items for buffer invalidations
2847
2848This is also a window in which a crash during the reaping process can leak
2849blocks.
2850As stated earlier, online repair functions use very large transactions to
2851minimize the chances of this occurring.
2852
2853The proposed patchset is the
2854`preparation for bulk loading btrees
2855<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-prep-for-bulk-loading>`_
2856series.
2857
2858Case Study: Reaping After a Regular Btree Repair
2859````````````````````````````````````````````````
2860
2861Old reference count and inode btrees are the easiest to reap because they have
2862rmap records with special owner codes: ``XFS_RMAP_OWN_REFC`` for the refcount
2863btree, and ``XFS_RMAP_OWN_INOBT`` for the inode and free inode btrees.
2864Creating a list of extents to reap the old btree blocks is quite simple,
2865conceptually:
2866
28671. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.
2868
28692. For each reverse mapping record with an rmap owner corresponding to the
2870   metadata structure being rebuilt, set the corresponding range in a bitmap.
2871
28723. Walk the current data structures that have the same rmap owner.
2873   For each block visited, clear that range in the above bitmap.
2874
28754. Each set bit in the bitmap represents a block that could be a block from the
2876   old data structures and hence is a candidate for reaping.
2877   In other words, ``(rmap_records_owned_by & ~blocks_reachable_by_walk)``
2878   are the blocks that might be freeable.
2879
2880If it is possible to maintain the AGF lock throughout the repair (which is the
2881common case), then step 2 can be performed at the same time as the reverse
2882mapping record walk that creates the records for the new btree.
2883
2884Case Study: Rebuilding the Free Space Indices
2885`````````````````````````````````````````````
2886
2887The high level process to rebuild the free space indices is:
2888
28891. Walk the reverse mapping records to generate ``struct xfs_alloc_rec_incore``
2890   records from the gaps in the reverse mapping btree.
2891
28922. Append the records to an xfarray.
2893
28943. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2895   of blocks needed for each new tree.
2896
28974. Allocate the number of blocks computed in the previous step from the free
2898   space information collected.
2899
29005. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2901   generate the internal node blocks for the free space by length index.
2902   Call it again for the free space by block number index.
2903
29046. Commit the locations of the new btree root blocks to the AGF.
2905
29067. Reap the old btree blocks by looking for space that is not recorded by the
2907   reverse mapping btree, the new free space btrees, or the AGFL.
2908
2909Repairing the free space btrees has three key complications over a regular
2910btree repair:
2911
2912First, free space is not explicitly tracked in the reverse mapping records.
2913Hence, the new free space records must be inferred from gaps in the physical
2914space component of the keyspace of the reverse mapping btree.
2915
2916Second, free space repairs cannot use the common btree reservation code because
2917new blocks are reserved out of the free space btrees.
2918This is impossible when repairing the free space btrees themselves.
2919However, repair holds the AGF buffer lock for the duration of the free space
2920index reconstruction, so it can use the collected free space information to
2921supply the blocks for the new free space btrees.
2922It is not necessary to back each reserved extent with an EFI because the new
2923free space btrees are constructed in what the ondisk filesystem thinks is
2924unowned space.
2925However, if reserving blocks for the new btrees from the collected free space
2926information changes the number of free space records, repair must re-estimate
2927the new free space btree geometry with the new record count until the
2928reservation is sufficient.
2929As part of committing the new btrees, repair must ensure that reverse mappings
2930are created for the reserved blocks and that unused reserved blocks are
2931inserted into the free space btrees.
2932Deferrred rmap and freeing operations are used to ensure that this transition
2933is atomic, similar to the other btree repair functions.
2934
2935Third, finding the blocks to reap after the repair is not overly
2936straightforward.
2937Blocks for the free space btrees and the reverse mapping btrees are supplied by
2938the AGFL.
2939Blocks put onto the AGFL have reverse mapping records with the owner
2940``XFS_RMAP_OWN_AG``.
2941This ownership is retained when blocks move from the AGFL into the free space
2942btrees or the reverse mapping btrees.
2943When repair walks reverse mapping records to synthesize free space records, it
2944creates a bitmap (``ag_owner_bitmap``) of all the space claimed by
2945``XFS_RMAP_OWN_AG`` records.
2946The repair context maintains a second bitmap corresponding to the rmap btree
2947blocks and the AGFL blocks (``rmap_agfl_bitmap``).
2948When the walk is complete, the bitmap disunion operation ``(ag_owner_bitmap &
2949~rmap_agfl_bitmap)`` computes the extents that are used by the old free space
2950btrees.
2951These blocks can then be reaped using the methods outlined above.
2952
2953The proposed patchset is the
2954`AG btree repair
2955<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
2956series.
2957
2958.. _rmap_reap:
2959
2960Case Study: Reaping After Repairing Reverse Mapping Btrees
2961``````````````````````````````````````````````````````````
2962
2963Old reverse mapping btrees are less difficult to reap after a repair.
2964As mentioned in the previous section, blocks on the AGFL, the two free space
2965btree blocks, and the reverse mapping btree blocks all have reverse mapping
2966records with ``XFS_RMAP_OWN_AG`` as the owner.
2967The full process of gathering reverse mapping records and building a new btree
2968are described in the case study of
2969:ref:`live rebuilds of rmap data <rmap_repair>`, but a crucial point from that
2970discussion is that the new rmap btree will not contain any records for the old
2971rmap btree, nor will the old btree blocks be tracked in the free space btrees.
2972The list of candidate reaping blocks is computed by setting the bits
2973corresponding to the gaps in the new rmap btree records, and then clearing the
2974bits corresponding to extents in the free space btrees and the current AGFL
2975blocks.
2976The result ``(new_rmapbt_gaps & ~(agfl | bnobt_records))`` are reaped using the
2977methods outlined above.
2978
2979The rest of the process of rebuildng the reverse mapping btree is discussed
2980in a separate :ref:`case study<rmap_repair>`.
2981
2982The proposed patchset is the
2983`AG btree repair
2984<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-ag-btrees>`_
2985series.
2986
2987Case Study: Rebuilding the AGFL
2988```````````````````````````````
2989
2990The allocation group free block list (AGFL) is repaired as follows:
2991
29921. Create a bitmap for all the space that the reverse mapping data claims is
2993   owned by ``XFS_RMAP_OWN_AG``.
2994
29952. Subtract the space used by the two free space btrees and the rmap btree.
2996
29973. Subtract any space that the reverse mapping data claims is owned by any
2998   other owner, to avoid re-adding crosslinked blocks to the AGFL.
2999
30004. Once the AGFL is full, reap any blocks leftover.
3001
30025. The next operation to fix the freelist will right-size the list.
3003
3004See `fs/xfs/scrub/agheader_repair.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/xfs/scrub/agheader_repair.c>`_ for more details.
3005
3006Inode Record Repairs
3007--------------------
3008
3009Inode records must be handled carefully, because they have both ondisk records
3010("dinodes") and an in-memory ("cached") representation.
3011There is a very high potential for cache coherency issues if online fsck is not
3012careful to access the ondisk metadata *only* when the ondisk metadata is so
3013badly damaged that the filesystem cannot load the in-memory representation.
3014When online fsck wants to open a damaged file for scrubbing, it must use
3015specialized resource acquisition functions that return either the in-memory
3016representation *or* a lock on whichever object is necessary to prevent any
3017update to the ondisk location.
3018
3019The only repairs that should be made to the ondisk inode buffers are whatever
3020is necessary to get the in-core structure loaded.
3021This means fixing whatever is caught by the inode cluster buffer and inode fork
3022verifiers, and retrying the ``iget`` operation.
3023If the second ``iget`` fails, the repair has failed.
3024
3025Once the in-memory representation is loaded, repair can lock the inode and can
3026subject it to comprehensive checks, repairs, and optimizations.
3027Most inode attributes are easy to check and constrain, or are user-controlled
3028arbitrary bit patterns; these are both easy to fix.
3029Dealing with the data and attr fork extent counts and the file block counts is
3030more complicated, because computing the correct value requires traversing the
3031forks, or if that fails, leaving the fields invalid and waiting for the fork
3032fsck functions to run.
3033
3034The proposed patchset is the
3035`inode
3036<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-inodes>`_
3037repair series.
3038
3039Quota Record Repairs
3040--------------------
3041
3042Similar to inodes, quota records ("dquots") also have both ondisk records and
3043an in-memory representation, and hence are subject to the same cache coherency
3044issues.
3045Somewhat confusingly, both are known as dquots in the XFS codebase.
3046
3047The only repairs that should be made to the ondisk quota record buffers are
3048whatever is necessary to get the in-core structure loaded.
3049Once the in-memory representation is loaded, the only attributes needing
3050checking are obviously bad limits and timer values.
3051
3052Quota usage counters are checked, repaired, and discussed separately in the
3053section about :ref:`live quotacheck <quotacheck>`.
3054
3055The proposed patchset is the
3056`quota
3057<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quota>`_
3058repair series.
3059
3060.. _fscounters:
3061
3062Freezing to Fix Summary Counters
3063--------------------------------
3064
3065Filesystem summary counters track availability of filesystem resources such
3066as free blocks, free inodes, and allocated inodes.
3067This information could be compiled by walking the free space and inode indexes,
3068but this is a slow process, so XFS maintains a copy in the ondisk superblock
3069that should reflect the ondisk metadata, at least when the filesystem has been
3070unmounted cleanly.
3071For performance reasons, XFS also maintains incore copies of those counters,
3072which are key to enabling resource reservations for active transactions.
3073Writer threads reserve the worst-case quantities of resources from the
3074incore counter and give back whatever they don't use at commit time.
3075It is therefore only necessary to serialize on the superblock when the
3076superblock is being committed to disk.
3077
3078The lazy superblock counter feature introduced in XFS v5 took this even further
3079by training log recovery to recompute the summary counters from the AG headers,
3080which eliminated the need for most transactions even to touch the superblock.
3081The only time XFS commits the summary counters is at filesystem unmount.
3082To reduce contention even further, the incore counter is implemented as a
3083percpu counter, which means that each CPU is allocated a batch of blocks from a
3084global incore counter and can satisfy small allocations from the local batch.
3085
3086The high-performance nature of the summary counters makes it difficult for
3087online fsck to check them, since there is no way to quiesce a percpu counter
3088while the system is running.
3089Although online fsck can read the filesystem metadata to compute the correct
3090values of the summary counters, there's no way to hold the value of a percpu
3091counter stable, so it's quite possible that the counter will be out of date by
3092the time the walk is complete.
3093Earlier versions of online scrub would return to userspace with an incomplete
3094scan flag, but this is not a satisfying outcome for a system administrator.
3095For repairs, the in-memory counters must be stabilized while walking the
3096filesystem metadata to get an accurate reading and install it in the percpu
3097counter.
3098
3099To satisfy this requirement, online fsck must prevent other programs in the
3100system from initiating new writes to the filesystem, it must disable background
3101garbage collection threads, and it must wait for existing writer programs to
3102exit the kernel.
3103Once that has been established, scrub can walk the AG free space indexes, the
3104inode btrees, and the realtime bitmap to compute the correct value of all
3105four summary counters.
3106This is very similar to a filesystem freeze, though not all of the pieces are
3107necessary:
3108
3109- The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
3110  prevent other threads from thawing the filesystem, or other scrub threads
3111  from initiating another fscounters freeze.
3112
3113- It does not quiesce the log.
3114
3115With this code in place, it is now possible to pause the filesystem for just
3116long enough to check and correct the summary counters.
3117
3118+--------------------------------------------------------------------------+
3119| **Historical Sidebar**:                                                  |
3120+--------------------------------------------------------------------------+
3121| The initial implementation used the actual VFS filesystem freeze         |
3122| mechanism to quiesce filesystem activity.                                |
3123| With the filesystem frozen, it is possible to resolve the counter values |
3124| with exact precision, but there are many problems with calling the VFS   |
3125| methods directly:                                                        |
3126|                                                                          |
3127| - Other programs can unfreeze the filesystem without our knowledge.      |
3128|   This leads to incorrect scan results and incorrect repairs.            |
3129|                                                                          |
3130| - Adding an extra lock to prevent others from thawing the filesystem     |
3131|   required the addition of a ``->freeze_super`` function to wrap         |
3132|   ``freeze_fs()``.                                                       |
3133|   This in turn caused other subtle problems because it turns out that    |
3134|   the VFS ``freeze_super`` and ``thaw_super`` functions can drop the     |
3135|   last reference to the VFS superblock, and any subsequent access        |
3136|   becomes a UAF bug!                                                     |
3137|   This can happen if the filesystem is unmounted while the underlying    |
3138|   block device has frozen the filesystem.                                |
3139|   This problem could be solved by grabbing extra references to the       |
3140|   superblock, but it felt suboptimal given the other inadequacies of     |
3141|   this approach.                                                         |
3142|                                                                          |
3143| - The log need not be quiesced to check the summary counters, but a VFS  |
3144|   freeze initiates one anyway.                                           |
3145|   This adds unnecessary runtime to live fscounter fsck operations.       |
3146|                                                                          |
3147| - Quiescing the log means that XFS flushes the (possibly incorrect)      |
3148|   counters to disk as part of cleaning the log.                          |
3149|                                                                          |
3150| - A bug in the VFS meant that freeze could complete even when            |
3151|   sync_filesystem fails to flush the filesystem and returns an error.    |
3152|   This bug was fixed in Linux 5.17.                                      |
3153+--------------------------------------------------------------------------+
3154
3155The proposed patchset is the
3156`summary counter cleanup
3157<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-fscounters>`_
3158series.
3159
3160Full Filesystem Scans
3161---------------------
3162
3163Certain types of metadata can only be checked by walking every file in the
3164entire filesystem to record observations and comparing the observations against
3165what's recorded on disk.
3166Like every other type of online repair, repairs are made by writing those
3167observations to disk in a replacement structure and committing it atomically.
3168However, it is not practical to shut down the entire filesystem to examine
3169hundreds of billions of files because the downtime would be excessive.
3170Therefore, online fsck must build the infrastructure to manage a live scan of
3171all the files in the filesystem.
3172There are two questions that need to be solved to perform a live walk:
3173
3174- How does scrub manage the scan while it is collecting data?
3175
3176- How does the scan keep abreast of changes being made to the system by other
3177  threads?
3178
3179.. _iscan:
3180
3181Coordinated Inode Scans
3182```````````````````````
3183
3184In the original Unix filesystems of the 1970s, each directory entry contained
3185an index number (*inumber*) which was used as an index into on ondisk array
3186(*itable*) of fixed-size records (*inodes*) describing a file's attributes and
3187its data block mapping.
3188This system is described by J. Lions, `"inode (5659)"
3189<http://www.lemis.com/grog/Documentation/Lions/>`_ in *Lions' Commentary on
3190UNIX, 6th Edition*, (Dept. of Computer Science, the University of New South
3191Wales, November 1977), pp. 18-2; and later by D. Ritchie and K. Thompson,
3192`"Implementation of the File System"
3193<https://archive.org/details/bstj57-6-1905/page/n8/mode/1up>`_, from *The UNIX
3194Time-Sharing System*, (The Bell System Technical Journal, July 1978), pp.
31951913-4.
3196
3197XFS retains most of this design, except now inumbers are search keys over all
3198the space in the data section filesystem.
3199They form a continuous keyspace that can be expressed as a 64-bit integer,
3200though the inodes themselves are sparsely distributed within the keyspace.
3201Scans proceed in a linear fashion across the inumber keyspace, starting from
3202``0x0`` and ending at ``0xFFFFFFFFFFFFFFFF``.
3203Naturally, a scan through a keyspace requires a scan cursor object to track the
3204scan progress.
3205Because this keyspace is sparse, this cursor contains two parts.
3206The first part of this scan cursor object tracks the inode that will be
3207examined next; call this the examination cursor.
3208Somewhat less obviously, the scan cursor object must also track which parts of
3209the keyspace have already been visited, which is critical for deciding if a
3210concurrent filesystem update needs to be incorporated into the scan data.
3211Call this the visited inode cursor.
3212
3213Advancing the scan cursor is a multi-step process encapsulated in
3214``xchk_iscan_iter``:
3215
32161. Lock the AGI buffer of the AG containing the inode pointed to by the visited
3217   inode cursor.
3218   This guarantee that inodes in this AG cannot be allocated or freed while
3219   advancing the cursor.
3220
32212. Use the per-AG inode btree to look up the next inumber after the one that
3222   was just visited, since it may not be keyspace adjacent.
3223
32243. If there are no more inodes left in this AG:
3225
3226   a. Move the examination cursor to the point of the inumber keyspace that
3227      corresponds to the start of the next AG.
3228
3229   b. Adjust the visited inode cursor to indicate that it has "visited" the
3230      last possible inode in the current AG's inode keyspace.
3231      XFS inumbers are segmented, so the cursor needs to be marked as having
3232      visited the entire keyspace up to just before the start of the next AG's
3233      inode keyspace.
3234
3235   c. Unlock the AGI and return to step 1 if there are unexamined AGs in the
3236      filesystem.
3237
3238   d. If there are no more AGs to examine, set both cursors to the end of the
3239      inumber keyspace.
3240      The scan is now complete.
3241
32424. Otherwise, there is at least one more inode to scan in this AG:
3243
3244   a. Move the examination cursor ahead to the next inode marked as allocated
3245      by the inode btree.
3246
3247   b. Adjust the visited inode cursor to point to the inode just prior to where
3248      the examination cursor is now.
3249      Because the scanner holds the AGI buffer lock, no inodes could have been
3250      created in the part of the inode keyspace that the visited inode cursor
3251      just advanced.
3252
32535. Get the incore inode for the inumber of the examination cursor.
3254   By maintaining the AGI buffer lock until this point, the scanner knows that
3255   it was safe to advance the examination cursor across the entire keyspace,
3256   and that it has stabilized this next inode so that it cannot disappear from
3257   the filesystem until the scan releases the incore inode.
3258
32596. Drop the AGI lock and return the incore inode to the caller.
3260
3261Online fsck functions scan all files in the filesystem as follows:
3262
32631. Start a scan by calling ``xchk_iscan_start``.
3264
32652. Advance the scan cursor (``xchk_iscan_iter``) to get the next inode.
3266   If one is provided:
3267
3268   a. Lock the inode to prevent updates during the scan.
3269
3270   b. Scan the inode.
3271
3272   c. While still holding the inode lock, adjust the visited inode cursor
3273      (``xchk_iscan_mark_visited``) to point to this inode.
3274
3275   d. Unlock and release the inode.
3276
32778. Call ``xchk_iscan_teardown`` to complete the scan.
3278
3279There are subtleties with the inode cache that complicate grabbing the incore
3280inode for the caller.
3281Obviously, it is an absolute requirement that the inode metadata be consistent
3282enough to load it into the inode cache.
3283Second, if the incore inode is stuck in some intermediate state, the scan
3284coordinator must release the AGI and push the main filesystem to get the inode
3285back into a loadable state.
3286
3287The proposed patches are the
3288`inode scanner
3289<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iscan>`_
3290series.
3291The first user of the new functionality is the
3292`online quotacheck
3293<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quotacheck>`_
3294series.
3295
3296Inode Management
3297````````````````
3298
3299In regular filesystem code, references to allocated XFS incore inodes are
3300always obtained (``xfs_iget``) outside of transaction context because the
3301creation of the incore context for an existing file does not require metadata
3302updates.
3303However, it is important to note that references to incore inodes obtained as
3304part of file creation must be performed in transaction context because the
3305filesystem must ensure the atomicity of the ondisk inode btree index updates
3306and the initialization of the actual ondisk inode.
3307
3308References to incore inodes are always released (``xfs_irele``) outside of
3309transaction context because there are a handful of activities that might
3310require ondisk updates:
3311
3312- The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
3313  release.
3314
3315- Speculative preallocations need to be unreserved.
3316
3317- An unlinked file may have lost its last reference, in which case the entire
3318  file must be inactivated, which involves releasing all of its resources in
3319  the ondisk metadata and freeing the inode.
3320
3321These activities are collectively called inode inactivation.
3322Inactivation has two parts -- the VFS part, which initiates writeback on all
3323dirty file pages, and the XFS part, which cleans up XFS-specific information
3324and frees the inode if it was unlinked.
3325If the inode is unlinked (or unconnected after a file handle operation), the
3326kernel drops the inode into the inactivation machinery immediately.
3327
3328During normal operation, resource acquisition for an update follows this order
3329to avoid deadlocks:
3330
33311. Inode reference (``iget``).
3332
33332. Filesystem freeze protection, if repairing (``mnt_want_write_file``).
3334
33353. Inode ``IOLOCK`` (VFS ``i_rwsem``) lock to control file IO.
3336
33374. Inode ``MMAPLOCK`` (page cache ``invalidate_lock``) lock for operations that
3338   can update page cache mappings.
3339
33405. Log feature enablement.
3341
33426. Transaction log space grant.
3343
33447. Space on the data and realtime devices for the transaction.
3345
33468. Incore dquot references, if a file is being repaired.
3347   Note that they are not locked, merely acquired.
3348
33499. Inode ``ILOCK`` for file metadata updates.
3350
335110. AG header buffer locks / Realtime metadata inode ILOCK.
3352
335311. Realtime metadata buffer locks, if applicable.
3354
335512. Extent mapping btree blocks, if applicable.
3356
3357Resources are often released in the reverse order, though this is not required.
3358However, online fsck differs from regular XFS operations because it may examine
3359an object that normally is acquired in a later stage of the locking order, and
3360then decide to cross-reference the object with an object that is acquired
3361earlier in the order.
3362The next few sections detail the specific ways in which online fsck takes care
3363to avoid deadlocks.
3364
3365iget and irele During a Scrub
3366^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3367
3368An inode scan performed on behalf of a scrub operation runs in transaction
3369context, and possibly with resources already locked and bound to it.
3370This isn't much of a problem for ``iget`` since it can operate in the context
3371of an existing transaction, as long as all of the bound resources are acquired
3372before the inode reference in the regular filesystem.
3373
3374When the VFS ``iput`` function is given a linked inode with no other
3375references, it normally puts the inode on an LRU list in the hope that it can
3376save time if another process re-opens the file before the system runs out
3377of memory and frees it.
3378Filesystem callers can short-circuit the LRU process by setting a ``DONTCACHE``
3379flag on the inode to cause the kernel to try to drop the inode into the
3380inactivation machinery immediately.
3381
3382In the past, inactivation was always done from the process that dropped the
3383inode, which was a problem for scrub because scrub may already hold a
3384transaction, and XFS does not support nesting transactions.
3385On the other hand, if there is no scrub transaction, it is desirable to drop
3386otherwise unused inodes immediately to avoid polluting caches.
3387To capture these nuances, the online fsck code has a separate ``xchk_irele``
3388function to set or clear the ``DONTCACHE`` flag to get the required release
3389behavior.
3390
3391Proposed patchsets include fixing
3392`scrub iget usage
3393<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iget-fixes>`_ and
3394`dir iget usage
3395<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-dir-iget-fixes>`_.
3396
3397.. _ilocking:
3398
3399Locking Inodes
3400^^^^^^^^^^^^^^
3401
3402In regular filesystem code, the VFS and XFS will acquire multiple IOLOCK locks
3403in a well-known order: parent → child when updating the directory tree, and
3404in numerical order of the addresses of their ``struct inode`` object otherwise.
3405For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
3406faults.
3407If two MMAPLOCKs must be acquired, they are acquired in numerical order of
3408the addresses of their ``struct address_space`` objects.
3409Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must be
3410acquired before transactions are allocated.
3411If two ILOCKs must be acquired, they are acquired in inumber order.
3412
3413Inode lock acquisition must be done carefully during a coordinated inode scan.
3414Online fsck cannot abide these conventions, because for a directory tree
3415scanner, the scrub process holds the IOLOCK of the file being scanned and it
3416needs to take the IOLOCK of the file at the other end of the directory link.
3417If the directory tree is corrupt because it contains a cycle, ``xfs_scrub``
3418cannot use the regular inode locking functions and avoid becoming trapped in an
3419ABBA deadlock.
3420
3421Solving both of these problems is straightforward -- any time online fsck
3422needs to take a second lock of the same class, it uses trylock to avoid an ABBA
3423deadlock.
3424If the trylock fails, scrub drops all inode locks and use trylock loops to
3425(re)acquire all necessary resources.
3426Trylock loops enable scrub to check for pending fatal signals, which is how
3427scrub avoids deadlocking the filesystem or becoming an unresponsive process.
3428However, trylock loops means that online fsck must be prepared to measure the
3429resource being scrubbed before and after the lock cycle to detect changes and
3430react accordingly.
3431
3432.. _dirparent:
3433
3434Case Study: Finding a Directory Parent
3435^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3436
3437Consider the directory parent pointer repair code as an example.
3438Online fsck must verify that the dotdot dirent of a directory points up to a
3439parent directory, and that the parent directory contains exactly one dirent
3440pointing down to the child directory.
3441Fully validating this relationship (and repairing it if possible) requires a
3442walk of every directory on the filesystem while holding the child locked, and
3443while updates to the directory tree are being made.
3444The coordinated inode scan provides a way to walk the filesystem without the
3445possibility of missing an inode.
3446The child directory is kept locked to prevent updates to the dotdot dirent, but
3447if the scanner fails to lock a parent, it can drop and relock both the child
3448and the prospective parent.
3449If the dotdot entry changes while the directory is unlocked, then a move or
3450rename operation must have changed the child's parentage, and the scan can
3451exit early.
3452
3453The proposed patchset is the
3454`directory repair
3455<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-dirs>`_
3456series.
3457
3458.. _fshooks:
3459
3460Filesystem Hooks
3461`````````````````
3462
3463The second piece of support that online fsck functions need during a full
3464filesystem scan is the ability to stay informed about updates being made by
3465other threads in the filesystem, since comparisons against the past are useless
3466in a dynamic environment.
3467Two pieces of Linux kernel infrastructure enable online fsck to monitor regular
3468filesystem operations: filesystem hooks and :ref:`static keys<jump_labels>`.
3469
3470Filesystem hooks convey information about an ongoing filesystem operation to
3471a downstream consumer.
3472In this case, the downstream consumer is always an online fsck function.
3473Because multiple fsck functions can run in parallel, online fsck uses the Linux
3474notifier call chain facility to dispatch updates to any number of interested
3475fsck processes.
3476Call chains are a dynamic list, which means that they can be configured at
3477run time.
3478Because these hooks are private to the XFS module, the information passed along
3479contains exactly what the checking function needs to update its observations.
3480
3481The current implementation of XFS hooks uses SRCU notifier chains to reduce the
3482impact to highly threaded workloads.
3483Regular blocking notifier chains use a rwsem and seem to have a much lower
3484overhead for single-threaded applications.
3485However, it may turn out that the combination of blocking chains and static
3486keys are a more performant combination; more study is needed here.
3487
3488The following pieces are necessary to hook a certain point in the filesystem:
3489
3490- A ``struct xfs_hooks`` object must be embedded in a convenient place such as
3491  a well-known incore filesystem object.
3492
3493- Each hook must define an action code and a structure containing more context
3494  about the action.
3495
3496- Hook providers should provide appropriate wrapper functions and structs
3497  around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
3498  checking to ensure correct usage.
3499
3500- A callsite in the regular filesystem code must be chosen to call
3501  ``xfs_hooks_call`` with the action code and data structure.
3502  This place should be adjacent to (and not earlier than) the place where
3503  the filesystem update is committed to the transaction.
3504  In general, when the filesystem calls a hook chain, it should be able to
3505  handle sleeping and should not be vulnerable to memory reclaim or locking
3506  recursion.
3507  However, the exact requirements are very dependent on the context of the hook
3508  caller and the callee.
3509
3510- The online fsck function should define a structure to hold scan data, a lock
3511  to coordinate access to the scan data, and a ``struct xfs_hook`` object.
3512  The scanner function and the regular filesystem code must acquire resources
3513  in the same order; see the next section for details.
3514
3515- The online fsck code must contain a C function to catch the hook action code
3516  and data structure.
3517  If the object being updated has already been visited by the scan, then the
3518  hook information must be applied to the scan data.
3519
3520- Prior to unlocking inodes to start the scan, online fsck must call
3521  ``xfs_hooks_setup`` to initialize the ``struct xfs_hook``, and
3522  ``xfs_hooks_add`` to enable the hook.
3523
3524- Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan is
3525  complete.
3526
3527The number of hooks should be kept to a minimum to reduce complexity.
3528Static keys are used to reduce the overhead of filesystem hooks to nearly
3529zero when online fsck is not running.
3530
3531.. _liveupdate:
3532
3533Live Updates During a Scan
3534``````````````````````````
3535
3536The code paths of the online fsck scanning code and the :ref:`hooked<fshooks>`
3537filesystem code look like this::
3538
3539            other program
35403541            inode lock ←────────────────────┐
3542                  ↓                         │
3543            AG header lock                  │
3544                  ↓                         │
3545            filesystem function             │
3546                  ↓                         │
3547            notifier call chain             │    same
3548                  ↓                         ├─── inode
3549            scrub hook function             │    lock
3550                  ↓                         │
3551            scan data mutex ←──┐    same    │
3552                  ↓            ├─── scan    │
3553            update scan data   │    lock    │
3554                  ↑            │            │
3555            scan data mutex ←──┘            │
3556                  ↑                         │
3557            inode lock ←────────────────────┘
35583559            scrub function
35603561            inode scanner
35623563            xfs_scrub
3564
3565These rules must be followed to ensure correct interactions between the
3566checking code and the code making an update to the filesystem:
3567
3568- Prior to invoking the notifier call chain, the filesystem function being
3569  hooked must acquire the same lock that the scrub scanning function acquires
3570  to scan the inode.
3571
3572- The scanning function and the scrub hook function must coordinate access to
3573  the scan data by acquiring a lock on the scan data.
3574
3575- Scrub hook function must not add the live update information to the scan
3576  observations unless the inode being updated has already been scanned.
3577  The scan coordinator has a helper predicate (``xchk_iscan_want_live_update``)
3578  for this.
3579
3580- Scrub hook functions must not change the caller's state, including the
3581  transaction that it is running.
3582  They must not acquire any resources that might conflict with the filesystem
3583  function being hooked.
3584
3585- The hook function can abort the inode scan to avoid breaking the other rules.
3586
3587The inode scan APIs are pretty simple:
3588
3589- ``xchk_iscan_start`` starts a scan
3590
3591- ``xchk_iscan_iter`` grabs a reference to the next inode in the scan or
3592  returns zero if there is nothing left to scan
3593
3594- ``xchk_iscan_want_live_update`` to decide if an inode has already been
3595  visited in the scan.
3596  This is critical for hook functions to decide if they need to update the
3597  in-memory scan information.
3598
3599- ``xchk_iscan_mark_visited`` to mark an inode as having been visited in the
3600  scan
3601
3602- ``xchk_iscan_teardown`` to finish the scan
3603
3604This functionality is also a part of the
3605`inode scanner
3606<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-iscan>`_
3607series.
3608
3609.. _quotacheck:
3610
3611Case Study: Quota Counter Checking
3612^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3613
3614It is useful to compare the mount time quotacheck code to the online repair
3615quotacheck code.
3616Mount time quotacheck does not have to contend with concurrent operations, so
3617it does the following:
3618
36191. Make sure the ondisk dquots are in good enough shape that all the incore
3620   dquots will actually load, and zero the resource usage counters in the
3621   ondisk buffer.
3622
36232. Walk every inode in the filesystem.
3624   Add each file's resource usage to the incore dquot.
3625
36263. Walk each incore dquot.
3627   If the incore dquot is not being flushed, add the ondisk buffer backing the
3628   incore dquot to a delayed write (delwri) list.
3629
36304. Write the buffer list to disk.
3631
3632Like most online fsck functions, online quotacheck can't write to regular
3633filesystem objects until the newly collected metadata reflect all filesystem
3634state.
3635Therefore, online quotacheck records file resource usage to a shadow dquot
3636index implemented with a sparse ``xfarray``, and only writes to the real dquots
3637once the scan is complete.
3638Handling transactional updates is tricky because quota resource usage updates
3639are handled in phases to minimize contention on dquots:
3640
36411. The inodes involved are joined and locked to a transaction.
3642
36432. For each dquot attached to the file:
3644
3645   a. The dquot is locked.
3646
3647   b. A quota reservation is added to the dquot's resource usage.
3648      The reservation is recorded in the transaction.
3649
3650   c. The dquot is unlocked.
3651
36523. Changes in actual quota usage are tracked in the transaction.
3653
36544. At transaction commit time, each dquot is examined again:
3655
3656   a. The dquot is locked again.
3657
3658   b. Quota usage changes are logged and unused reservation is given back to
3659      the dquot.
3660
3661   c. The dquot is unlocked.
3662
3663For online quotacheck, hooks are placed in steps 2 and 4.
3664The step 2 hook creates a shadow version of the transaction dquot context
3665(``dqtrx``) that operates in a similar manner to the regular code.
3666The step 4 hook commits the shadow ``dqtrx`` changes to the shadow dquots.
3667Notice that both hooks are called with the inode locked, which is how the
3668live update coordinates with the inode scanner.
3669
3670The quotacheck scan looks like this:
3671
36721. Set up a coordinated inode scan.
3673
36742. For each inode returned by the inode scan iterator:
3675
3676   a. Grab and lock the inode.
3677
3678   b. Determine that inode's resource usage (data blocks, inode counts,
3679      realtime blocks) and add that to the shadow dquots for the user, group,
3680      and project ids associated with the inode.
3681
3682   c. Unlock and release the inode.
3683
36843. For each dquot in the system:
3685
3686   a. Grab and lock the dquot.
3687
3688   b. Check the dquot against the shadow dquots created by the scan and updated
3689      by the live hooks.
3690
3691Live updates are key to being able to walk every quota record without
3692needing to hold any locks for a long duration.
3693If repairs are desired, the real and shadow dquots are locked and their
3694resource counts are set to the values in the shadow dquot.
3695
3696The proposed patchset is the
3697`online quotacheck
3698<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-quotacheck>`_
3699series.
3700
3701.. _nlinks:
3702
3703Case Study: File Link Count Checking
3704^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3705
3706File link count checking also uses live update hooks.
3707The coordinated inode scanner is used to visit all directories on the
3708filesystem, and per-file link count records are stored in a sparse ``xfarray``
3709indexed by inumber.
3710During the scanning phase, each entry in a directory generates observation
3711data as follows:
3712
37131. If the entry is a dotdot (``'..'``) entry of the root directory, the
3714   directory's parent link count is bumped because the root directory's dotdot
3715   entry is self referential.
3716
37172. If the entry is a dotdot entry of a subdirectory, the parent's backref
3718   count is bumped.
3719
37203. If the entry is neither a dot nor a dotdot entry, the target file's parent
3721   count is bumped.
3722
37234. If the target is a subdirectory, the parent's child link count is bumped.
3724
3725A crucial point to understand about how the link count inode scanner interacts
3726with the live update hooks is that the scan cursor tracks which *parent*
3727directories have been scanned.
3728In other words, the live updates ignore any update about ``A → B`` when A has
3729not been scanned, even if B has been scanned.
3730Furthermore, a subdirectory A with a dotdot entry pointing back to B is
3731accounted as a backref counter in the shadow data for A, since child dotdot
3732entries affect the parent's link count.
3733Live update hooks are carefully placed in all parts of the filesystem that
3734create, change, or remove directory entries, since those operations involve
3735bumplink and droplink.
3736
3737For any file, the correct link count is the number of parents plus the number
3738of child subdirectories.
3739Non-directories never have children of any kind.
3740The backref information is used to detect inconsistencies in the number of
3741links pointing to child subdirectories and the number of dotdot entries
3742pointing back.
3743
3744After the scan completes, the link count of each file can be checked by locking
3745both the inode and the shadow data, and comparing the link counts.
3746A second coordinated inode scan cursor is used for comparisons.
3747Live updates are key to being able to walk every inode without needing to hold
3748any locks between inodes.
3749If repairs are desired, the inode's link count is set to the value in the
3750shadow information.
3751If no parents are found, the file must be :ref:`reparented <orphanage>` to the
3752orphanage to prevent the file from being lost forever.
3753
3754The proposed patchset is the
3755`file link count repair
3756<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=scrub-nlinks>`_
3757series.
3758
3759.. _rmap_repair:
3760
3761Case Study: Rebuilding Reverse Mapping Records
3762^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3763
3764Most repair functions follow the same pattern: lock filesystem resources,
3765walk the surviving ondisk metadata looking for replacement metadata records,
3766and use an :ref:`in-memory array <xfarray>` to store the gathered observations.
3767The primary advantage of this approach is the simplicity and modularity of the
3768repair code -- code and data are entirely contained within the scrub module,
3769do not require hooks in the main filesystem, and are usually the most efficient
3770in memory use.
3771A secondary advantage of this repair approach is atomicity -- once the kernel
3772decides a structure is corrupt, no other threads can access the metadata until
3773the kernel finishes repairing and revalidating the metadata.
3774
3775For repairs going on within a shard of the filesystem, these advantages
3776outweigh the delays inherent in locking the shard while repairing parts of the
3777shard.
3778Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
3779btree repair strategy because it must scan every space mapping of every fork of
3780every file in the filesystem, and the filesystem cannot stop.
3781Therefore, rmap repair foregoes atomicity between scrub and repair.
3782It combines a :ref:`coordinated inode scanner <iscan>`, :ref:`live update hooks
3783<liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete the
3784scan for reverse mapping records.
3785
37861. Set up an xfbtree to stage rmap records.
3787
37882. While holding the locks on the AGI and AGF buffers acquired during the
3789   scrub, generate reverse mappings for all AG metadata: inodes, btrees, CoW
3790   staging extents, and the internal log.
3791
37923. Set up an inode scanner.
3793
37944. Hook into rmap updates for the AG being repaired so that the live scan data
3795   can receive updates to the rmap btree from the rest of the filesystem during
3796   the file scan.
3797
37985. For each space mapping found in either fork of each file scanned,
3799   decide if the mapping matches the AG of interest.
3800   If so:
3801
3802   a. Create a btree cursor for the in-memory btree.
3803
3804   b. Use the rmap code to add the record to the in-memory btree.
3805
3806   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3807      xfbtree changes to the xfile.
3808
38096. For each live update received via the hook, decide if the owner has already
3810   been scanned.
3811   If so, apply the live update into the scan data:
3812
3813   a. Create a btree cursor for the in-memory btree.
3814
3815   b. Replay the operation into the in-memory btree.
3816
3817   c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3818      xfbtree changes to the xfile.
3819      This is performed with an empty transaction to avoid changing the
3820      caller's state.
3821
38227. When the inode scan finishes, create a new scrub transaction and relock the
3823   two AG headers.
3824
38258. Compute the new btree geometry using the number of rmap records in the
3826   shadow btree, like all other btree rebuilding functions.
3827
38289. Allocate the number of blocks computed in the previous step.
3829
383010. Perform the usual btree bulk loading and commit to install the new rmap
3831    btree.
3832
383311. Reap the old rmap btree blocks as discussed in the case study about how
3834    to :ref:`reap after rmap btree repair <rmap_reap>`.
3835
383612. Free the xfbtree now that it not needed.
3837
3838The proposed patchset is the
3839`rmap repair
3840<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-rmap-btree>`_
3841series.
3842
3843Staging Repairs with Temporary Files on Disk
3844--------------------------------------------
3845
3846XFS stores a substantial amount of metadata in file forks: directories,
3847extended attributes, symbolic link targets, free space bitmaps and summary
3848information for the realtime volume, and quota records.
3849File forks map 64-bit logical file fork space extents to physical storage space
3850extents, similar to how a memory management unit maps 64-bit virtual addresses
3851to physical memory addresses.
3852Therefore, file-based tree structures (such as directories and extended
3853attributes) use blocks mapped in the file fork offset address space that point
3854to other blocks mapped within that same address space, and file-based linear
3855structures (such as bitmaps and quota records) compute array element offsets in
3856the file fork offset address space.
3857
3858Because file forks can consume as much space as the entire filesystem, repairs
3859cannot be staged in memory, even when a paging scheme is available.
3860Therefore, online repair of file-based metadata createas a temporary file in
3861the XFS filesystem, writes a new structure at the correct offsets into the
3862temporary file, and atomically swaps the fork mappings (and hence the fork
3863contents) to commit the repair.
3864Once the repair is complete, the old fork can be reaped as necessary; if the
3865system goes down during the reap, the iunlink code will delete the blocks
3866during log recovery.
3867
3868**Note**: All space usage and inode indices in the filesystem *must* be
3869consistent to use a temporary file safely!
3870This dependency is the reason why online repair can only use pageable kernel
3871memory to stage ondisk space usage information.
3872
3873Swapping metadata extents with a temporary file requires the owner field of the
3874block headers to match the file being repaired and not the temporary file.  The
3875directory, extended attribute, and symbolic link functions were all modified to
3876allow callers to specify owner numbers explicitly.
3877
3878There is a downside to the reaping process -- if the system crashes during the
3879reap phase and the fork extents are crosslinked, the iunlink processing will
3880fail because freeing space will find the extra reverse mappings and abort.
3881
3882Temporary files created for repair are similar to ``O_TMPFILE`` files created
3883by userspace.
3884They are not linked into a directory and the entire file will be reaped when
3885the last reference to the file is lost.
3886The key differences are that these files must have no access permission outside
3887the kernel at all, they must be specially marked to prevent them from being
3888opened by handle, and they must never be linked into the directory tree.
3889
3890+--------------------------------------------------------------------------+
3891| **Historical Sidebar**:                                                  |
3892+--------------------------------------------------------------------------+
3893| In the initial iteration of file metadata repair, the damaged metadata   |
3894| blocks would be scanned for salvageable data; the extents in the file    |
3895| fork would be reaped; and then a new structure would be built in its     |
3896| place.                                                                   |
3897| This strategy did not survive the introduction of the atomic repair      |
3898| requirement expressed earlier in this document.                          |
3899|                                                                          |
3900| The second iteration explored building a second structure at a high      |
3901| offset in the fork from the salvage data, reaping the old extents, and   |
3902| using a ``COLLAPSE_RANGE`` operation to slide the new extents into       |
3903| place.                                                                   |
3904|                                                                          |
3905| This had many drawbacks:                                                 |
3906|                                                                          |
3907| - Array structures are linearly addressed, and the regular filesystem    |
3908|   codebase does not have the concept of a linear offset that could be    |
3909|   applied to the record offset computation to build an alternate copy.   |
3910|                                                                          |
3911| - Extended attributes are allowed to use the entire attr fork offset     |
3912|   address space.                                                         |
3913|                                                                          |
3914| - Even if repair could build an alternate copy of a data structure in a  |
3915|   different part of the fork address space, the atomic repair commit     |
3916|   requirement means that online repair would have to be able to perform  |
3917|   a log assisted ``COLLAPSE_RANGE`` operation to ensure that the old     |
3918|   structure was completely replaced.                                     |
3919|                                                                          |
3920| - A crash after construction of the secondary tree but before the range  |
3921|   collapse would leave unreachable blocks in the file fork.              |
3922|   This would likely confuse things further.                              |
3923|                                                                          |
3924| - Reaping blocks after a repair is not a simple operation, and           |
3925|   initiating a reap operation from a restarted range collapse operation  |
3926|   during log recovery is daunting.                                       |
3927|                                                                          |
3928| - Directory entry blocks and quota records record the file fork offset   |
3929|   in the header area of each block.                                      |
3930|   An atomic range collapse operation would have to rewrite this part of  |
3931|   each block header.                                                     |
3932|   Rewriting a single field in block headers is not a huge problem, but   |
3933|   it's something to be aware of.                                         |
3934|                                                                          |
3935| - Each block in a directory or extended attributes btree index contains  |
3936|   sibling and child block pointers.                                      |
3937|   Were the atomic commit to use a range collapse operation, each block   |
3938|   would have to be rewritten very carefully to preserve the graph        |
3939|   structure.                                                             |
3940|   Doing this as part of a range collapse means rewriting a large number  |
3941|   of blocks repeatedly, which is not conducive to quick repairs.         |
3942|                                                                          |
3943| This lead to the introduction of temporary file staging.                 |
3944+--------------------------------------------------------------------------+
3945
3946Using a Temporary File
3947``````````````````````
3948
3949Online repair code should use the ``xrep_tempfile_create`` function to create a
3950temporary file inside the filesystem.
3951This allocates an inode, marks the in-core inode private, and attaches it to
3952the scrub context.
3953These files are hidden from userspace, may not be added to the directory tree,
3954and must be kept private.
3955
3956Temporary files only use two inode locks: the IOLOCK and the ILOCK.
3957The MMAPLOCK is not needed here, because there must not be page faults from
3958userspace for data fork blocks.
3959The usage patterns of these two locks are the same as for any other XFS file --
3960access to file data are controlled via the IOLOCK, and access to file metadata
3961are controlled via the ILOCK.
3962Locking helpers are provided so that the temporary file and its lock state can
3963be cleaned up by the scrub context.
3964To comply with the nested locking strategy laid out in the :ref:`inode
3965locking<ilocking>` section, it is recommended that scrub functions use the
3966xrep_tempfile_ilock*_nowait lock helpers.
3967
3968Data can be written to a temporary file by two means:
3969
39701. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
3971   temporary file from an xfile.
3972
39732. The regular directory, symbolic link, and extended attribute functions can
3974   be used to write to the temporary file.
3975
3976Once a good copy of a data file has been constructed in a temporary file, it
3977must be conveyed to the file being repaired, which is the topic of the next
3978section.
3979
3980The proposed patches are in the
3981`repair temporary files
3982<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-tempfiles>`_
3983series.
3984
3985Atomic Extent Swapping
3986----------------------
3987
3988Once repair builds a temporary file with a new data structure written into
3989it, it must commit the new changes into the existing file.
3990It is not possible to swap the inumbers of two files, so instead the new
3991metadata must replace the old.
3992This suggests the need for the ability to swap extents, but the existing extent
3993swapping code used by the file defragmenting tool ``xfs_fsr`` is not sufficient
3994for online repair because:
3995
3996a. When the reverse-mapping btree is enabled, the swap code must keep the
3997   reverse mapping information up to date with every exchange of mappings.
3998   Therefore, it can only exchange one mapping per transaction, and each
3999   transaction is independent.
4000
4001b. Reverse-mapping is critical for the operation of online fsck, so the old
4002   defragmentation code (which swapped entire extent forks in a single
4003   operation) is not useful here.
4004
4005c. Defragmentation is assumed to occur between two files with identical
4006   contents.
4007   For this use case, an incomplete exchange will not result in a user-visible
4008   change in file contents, even if the operation is interrupted.
4009
4010d. Online repair needs to swap the contents of two files that are by definition
4011   *not* identical.
4012   For directory and xattr repairs, the user-visible contents might be the
4013   same, but the contents of individual blocks may be very different.
4014
4015e. Old blocks in the file may be cross-linked with another structure and must
4016   not reappear if the system goes down mid-repair.
4017
4018These problems are overcome by creating a new deferred operation and a new type
4019of log intent item to track the progress of an operation to exchange two file
4020ranges.
4021The new deferred operation type chains together the same transactions used by
4022the reverse-mapping extent swap code.
4023The new log item records the progress of the exchange to ensure that once an
4024exchange begins, it will always run to completion, even there are
4025interruptions.
4026The new ``XFS_SB_FEAT_INCOMPAT_LOG_ATOMIC_SWAP`` log-incompatible feature flag
4027in the superblock protects these new log item records from being replayed on
4028old kernels.
4029
4030The proposed patchset is the
4031`atomic extent swap
4032<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=atomic-file-updates>`_
4033series.
4034
4035+--------------------------------------------------------------------------+
4036| **Sidebar: Using Log-Incompatible Feature Flags**                        |
4037+--------------------------------------------------------------------------+
4038| Starting with XFS v5, the superblock contains a                          |
4039| ``sb_features_log_incompat`` field to indicate that the log contains     |
4040| records that might not readable by all kernels that could mount this     |
4041| filesystem.                                                              |
4042| In short, log incompat features protect the log contents against kernels |
4043| that will not understand the contents.                                   |
4044| Unlike the other superblock feature bits, log incompat bits are          |
4045| ephemeral because an empty (clean) log does not need protection.         |
4046| The log cleans itself after its contents have been committed into the    |
4047| filesystem, either as part of an unmount or because the system is        |
4048| otherwise idle.                                                          |
4049| Because upper level code can be working on a transaction at the same     |
4050| time that the log cleans itself, it is necessary for upper level code to |
4051| communicate to the log when it is going to use a log incompatible        |
4052| feature.                                                                 |
4053|                                                                          |
4054| The log coordinates access to incompatible features through the use of   |
4055| one ``struct rw_semaphore`` for each feature.                            |
4056| The log cleaning code tries to take this rwsem in exclusive mode to      |
4057| clear the bit; if the lock attempt fails, the feature bit remains set.   |
4058| Filesystem code signals its intention to use a log incompat feature in a |
4059| transaction by calling ``xlog_use_incompat_feat``, which takes the rwsem |
4060| in shared mode.                                                          |
4061| The code supporting a log incompat feature should create wrapper         |
4062| functions to obtain the log feature and call                             |
4063| ``xfs_add_incompat_log_feature`` to set the feature bits in the primary  |
4064| superblock.                                                              |
4065| The superblock update is performed transactionally, so the wrapper to    |
4066| obtain log assistance must be called just prior to the creation of the   |
4067| transaction that uses the functionality.                                 |
4068| For a file operation, this step must happen after taking the IOLOCK      |
4069| and the MMAPLOCK, but before allocating the transaction.                 |
4070| When the transaction is complete, the ``xlog_drop_incompat_feat``        |
4071| function is called to release the feature.                               |
4072| The feature bit will not be cleared from the superblock until the log    |
4073| becomes clean.                                                           |
4074|                                                                          |
4075| Log-assisted extended attribute updates and atomic extent swaps both use |
4076| log incompat features and provide convenience wrappers around the        |
4077| functionality.                                                           |
4078+--------------------------------------------------------------------------+
4079
4080Mechanics of an Atomic Extent Swap
4081``````````````````````````````````
4082
4083Swapping entire file forks is a complex task.
4084The goal is to exchange all file fork mappings between two file fork offset
4085ranges.
4086There are likely to be many extent mappings in each fork, and the edges of
4087the mappings aren't necessarily aligned.
4088Furthermore, there may be other updates that need to happen after the swap,
4089such as exchanging file sizes, inode flags, or conversion of fork data to local
4090format.
4091This is roughly the format of the new deferred extent swap work item:
4092
4093.. code-block:: c
4094
4095	struct xfs_swapext_intent {
4096	    /* Inodes participating in the operation. */
4097	    struct xfs_inode    *sxi_ip1;
4098	    struct xfs_inode    *sxi_ip2;
4099
4100	    /* File offset range information. */
4101	    xfs_fileoff_t       sxi_startoff1;
4102	    xfs_fileoff_t       sxi_startoff2;
4103	    xfs_filblks_t       sxi_blockcount;
4104
4105	    /* Set these file sizes after the operation, unless negative. */
4106	    xfs_fsize_t         sxi_isize1;
4107	    xfs_fsize_t         sxi_isize2;
4108
4109	    /* XFS_SWAP_EXT_* log operation flags */
4110	    uint64_t            sxi_flags;
4111	};
4112
4113The new log intent item contains enough information to track two logical fork
4114offset ranges: ``(inode1, startoff1, blockcount)`` and ``(inode2, startoff2,
4115blockcount)``.
4116Each step of a swap operation exchanges the largest file range mapping possible
4117from one file to the other.
4118After each step in the swap operation, the two startoff fields are incremented
4119and the blockcount field is decremented to reflect the progress made.
4120The flags field captures behavioral parameters such as swapping the attr fork
4121instead of the data fork and other work to be done after the extent swap.
4122The two isize fields are used to swap the file size at the end of the operation
4123if the file data fork is the target of the swap operation.
4124
4125When the extent swap is initiated, the sequence of operations is as follows:
4126
41271. Create a deferred work item for the extent swap.
4128   At the start, it should contain the entirety of the file ranges to be
4129   swapped.
4130
41312. Call ``xfs_defer_finish`` to process the exchange.
4132   This is encapsulated in ``xrep_tempswap_contents`` for scrub operations.
4133   This will log an extent swap intent item to the transaction for the deferred
4134   extent swap work item.
4135
41363. Until ``sxi_blockcount`` of the deferred extent swap work item is zero,
4137
4138   a. Read the block maps of both file ranges starting at ``sxi_startoff1`` and
4139      ``sxi_startoff2``, respectively, and compute the longest extent that can
4140      be swapped in a single step.
4141      This is the minimum of the two ``br_blockcount`` s in the mappings.
4142      Keep advancing through the file forks until at least one of the mappings
4143      contains written blocks.
4144      Mutual holes, unwritten extents, and extent mappings to the same physical
4145      space are not exchanged.
4146
4147      For the next few steps, this document will refer to the mapping that came
4148      from file 1 as "map1", and the mapping that came from file 2 as "map2".
4149
4150   b. Create a deferred block mapping update to unmap map1 from file 1.
4151
4152   c. Create a deferred block mapping update to unmap map2 from file 2.
4153
4154   d. Create a deferred block mapping update to map map1 into file 2.
4155
4156   e. Create a deferred block mapping update to map map2 into file 1.
4157
4158   f. Log the block, quota, and extent count updates for both files.
4159
4160   g. Extend the ondisk size of either file if necessary.
4161
4162   h. Log an extent swap done log item for the extent swap intent log item
4163      that was read at the start of step 3.
4164
4165   i. Compute the amount of file range that has just been covered.
4166      This quantity is ``(map1.br_startoff + map1.br_blockcount -
4167      sxi_startoff1)``, because step 3a could have skipped holes.
4168
4169   j. Increase the starting offsets of ``sxi_startoff1`` and ``sxi_startoff2``
4170      by the number of blocks computed in the previous step, and decrease
4171      ``sxi_blockcount`` by the same quantity.
4172      This advances the cursor.
4173
4174   k. Log a new extent swap intent log item reflecting the advanced state of
4175      the work item.
4176
4177   l. Return the proper error code (EAGAIN) to the deferred operation manager
4178      to inform it that there is more work to be done.
4179      The operation manager completes the deferred work in steps 3b-3e before
4180      moving back to the start of step 3.
4181
41824. Perform any post-processing.
4183   This will be discussed in more detail in subsequent sections.
4184
4185If the filesystem goes down in the middle of an operation, log recovery will
4186find the most recent unfinished extent swap log intent item and restart from
4187there.
4188This is how extent swapping guarantees that an outside observer will either see
4189the old broken structure or the new one, and never a mismash of both.
4190
4191Preparation for Extent Swapping
4192```````````````````````````````
4193
4194There are a few things that need to be taken care of before initiating an
4195atomic extent swap operation.
4196First, regular files require the page cache to be flushed to disk before the
4197operation begins, and directio writes to be quiesced.
4198Like any filesystem operation, extent swapping must determine the maximum
4199amount of disk space and quota that can be consumed on behalf of both files in
4200the operation, and reserve that quantity of resources to avoid an unrecoverable
4201out of space failure once it starts dirtying metadata.
4202The preparation step scans the ranges of both files to estimate:
4203
4204- Data device blocks needed to handle the repeated updates to the fork
4205  mappings.
4206- Change in data and realtime block counts for both files.
4207- Increase in quota usage for both files, if the two files do not share the
4208  same set of quota ids.
4209- The number of extent mappings that will be added to each file.
4210- Whether or not there are partially written realtime extents.
4211  User programs must never be able to access a realtime file extent that maps
4212  to different extents on the realtime volume, which could happen if the
4213  operation fails to run to completion.
4214
4215The need for precise estimation increases the run time of the swap operation,
4216but it is very important to maintain correct accounting.
4217The filesystem must not run completely out of free space, nor can the extent
4218swap ever add more extent mappings to a fork than it can support.
4219Regular users are required to abide the quota limits, though metadata repairs
4220may exceed quota to resolve inconsistent metadata elsewhere.
4221
4222Special Features for Swapping Metadata File Extents
4223```````````````````````````````````````````````````
4224
4225Extended attributes, symbolic links, and directories can set the fork format to
4226"local" and treat the fork as a literal area for data storage.
4227Metadata repairs must take extra steps to support these cases:
4228
4229- If both forks are in local format and the fork areas are large enough, the
4230  swap is performed by copying the incore fork contents, logging both forks,
4231  and committing.
4232  The atomic extent swap mechanism is not necessary, since this can be done
4233  with a single transaction.
4234
4235- If both forks map blocks, then the regular atomic extent swap is used.
4236
4237- Otherwise, only one fork is in local format.
4238  The contents of the local format fork are converted to a block to perform the
4239  swap.
4240  The conversion to block format must be done in the same transaction that
4241  logs the initial extent swap intent log item.
4242  The regular atomic extent swap is used to exchange the mappings.
4243  Special flags are set on the swap operation so that the transaction can be
4244  rolled one more time to convert the second file's fork back to local format
4245  so that the second file will be ready to go as soon as the ILOCK is dropped.
4246
4247Extended attributes and directories stamp the owning inode into every block,
4248but the buffer verifiers do not actually check the inode number!
4249Although there is no verification, it is still important to maintain
4250referential integrity, so prior to performing the extent swap, online repair
4251builds every block in the new data structure with the owner field of the file
4252being repaired.
4253
4254After a successful swap operation, the repair operation must reap the old fork
4255blocks by processing each fork mapping through the standard :ref:`file extent
4256reaping <reaping>` mechanism that is done post-repair.
4257If the filesystem should go down during the reap part of the repair, the
4258iunlink processing at the end of recovery will free both the temporary file and
4259whatever blocks were not reaped.
4260However, this iunlink processing omits the cross-link detection of online
4261repair, and is not completely foolproof.
4262
4263Swapping Temporary File Extents
4264```````````````````````````````
4265
4266To repair a metadata file, online repair proceeds as follows:
4267
42681. Create a temporary repair file.
4269
42702. Use the staging data to write out new contents into the temporary repair
4271   file.
4272   The same fork must be written to as is being repaired.
4273
42743. Commit the scrub transaction, since the swap estimation step must be
4275   completed before transaction reservations are made.
4276
42774. Call ``xrep_tempswap_trans_alloc`` to allocate a new scrub transaction with
4278   the appropriate resource reservations, locks, and fill out a ``struct
4279   xfs_swapext_req`` with the details of the swap operation.
4280
42815. Call ``xrep_tempswap_contents`` to swap the contents.
4282
42836. Commit the transaction to complete the repair.
4284
4285.. _rtsummary:
4286
4287Case Study: Repairing the Realtime Summary File
4288^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4289
4290In the "realtime" section of an XFS filesystem, free space is tracked via a
4291bitmap, similar to Unix FFS.
4292Each bit in the bitmap represents one realtime extent, which is a multiple of
4293the filesystem block size between 4KiB and 1GiB in size.
4294The realtime summary file indexes the number of free extents of a given size to
4295the offset of the block within the realtime free space bitmap where those free
4296extents begin.
4297In other words, the summary file helps the allocator find free extents by
4298length, similar to what the free space by count (cntbt) btree does for the data
4299section.
4300
4301The summary file itself is a flat file (with no block headers or checksums!)
4302partitioned into ``log2(total rt extents)`` sections containing enough 32-bit
4303counters to match the number of blocks in the rt bitmap.
4304Each counter records the number of free extents that start in that bitmap block
4305and can satisfy a power-of-two allocation request.
4306
4307To check the summary file against the bitmap:
4308
43091. Take the ILOCK of both the realtime bitmap and summary files.
4310
43112. For each free space extent recorded in the bitmap:
4312
4313   a. Compute the position in the summary file that contains a counter that
4314      represents this free extent.
4315
4316   b. Read the counter from the xfile.
4317
4318   c. Increment it, and write it back to the xfile.
4319
43203. Compare the contents of the xfile against the ondisk file.
4321
4322To repair the summary file, write the xfile contents into the temporary file
4323and use atomic extent swap to commit the new contents.
4324The temporary file is then reaped.
4325
4326The proposed patchset is the
4327`realtime summary repair
4328<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-rtsummary>`_
4329series.
4330
4331Case Study: Salvaging Extended Attributes
4332^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4333
4334In XFS, extended attributes are implemented as a namespaced name-value store.
4335Values are limited in size to 64KiB, but there is no limit in the number of
4336names.
4337The attribute fork is unpartitioned, which means that the root of the attribute
4338structure is always in logical block zero, but attribute leaf blocks, dabtree
4339index blocks, and remote value blocks are intermixed.
4340Attribute leaf blocks contain variable-sized records that associate
4341user-provided names with the user-provided values.
4342Values larger than a block are allocated separate extents and written there.
4343If the leaf information expands beyond a single block, a directory/attribute
4344btree (``dabtree``) is created to map hashes of attribute names to entries
4345for fast lookup.
4346
4347Salvaging extended attributes is done as follows:
4348
43491. Walk the attr fork mappings of the file being repaired to find the attribute
4350   leaf blocks.
4351   When one is found,
4352
4353   a. Walk the attr leaf block to find candidate keys.
4354      When one is found,
4355
4356      1. Check the name for problems, and ignore the name if there are.
4357
4358      2. Retrieve the value.
4359         If that succeeds, add the name and value to the staging xfarray and
4360         xfblob.
4361
43622. If the memory usage of the xfarray and xfblob exceed a certain amount of
4363   memory or there are no more attr fork blocks to examine, unlock the file and
4364   add the staged extended attributes to the temporary file.
4365
43663. Use atomic extent swapping to exchange the new and old extended attribute
4367   structures.
4368   The old attribute blocks are now attached to the temporary file.
4369
43704. Reap the temporary file.
4371
4372The proposed patchset is the
4373`extended attribute repair
4374<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-xattrs>`_
4375series.
4376
4377Fixing Directories
4378------------------
4379
4380Fixing directories is difficult with currently available filesystem features,
4381since directory entries are not redundant.
4382The offline repair tool scans all inodes to find files with nonzero link count,
4383and then it scans all directories to establish parentage of those linked files.
4384Damaged files and directories are zapped, and files with no parent are
4385moved to the ``/lost+found`` directory.
4386It does not try to salvage anything.
4387
4388The best that online repair can do at this time is to read directory data
4389blocks and salvage any dirents that look plausible, correct link counts, and
4390move orphans back into the directory tree.
4391The salvage process is discussed in the case study at the end of this section.
4392The :ref:`file link count fsck <nlinks>` code takes care of fixing link counts
4393and moving orphans to the ``/lost+found`` directory.
4394
4395Case Study: Salvaging Directories
4396`````````````````````````````````
4397
4398Unlike extended attributes, directory blocks are all the same size, so
4399salvaging directories is straightforward:
4400
44011. Find the parent of the directory.
4402   If the dotdot entry is not unreadable, try to confirm that the alleged
4403   parent has a child entry pointing back to the directory being repaired.
4404   Otherwise, walk the filesystem to find it.
4405
44062. Walk the first partition of data fork of the directory to find the directory
4407   entry data blocks.
4408   When one is found,
4409
4410   a. Walk the directory data block to find candidate entries.
4411      When an entry is found:
4412
4413      i. Check the name for problems, and ignore the name if there are.
4414
4415      ii. Retrieve the inumber and grab the inode.
4416          If that succeeds, add the name, inode number, and file type to the
4417          staging xfarray and xblob.
4418
44193. If the memory usage of the xfarray and xfblob exceed a certain amount of
4420   memory or there are no more directory data blocks to examine, unlock the
4421   directory and add the staged dirents into the temporary directory.
4422   Truncate the staging files.
4423
44244. Use atomic extent swapping to exchange the new and old directory structures.
4425   The old directory blocks are now attached to the temporary file.
4426
44275. Reap the temporary file.
4428
4429**Future Work Question**: Should repair revalidate the dentry cache when
4430rebuilding a directory?
4431
4432*Answer*: Yes, it should.
4433
4434In theory it is necessary to scan all dentry cache entries for a directory to
4435ensure that one of the following apply:
4436
44371. The cached dentry reflects an ondisk dirent in the new directory.
4438
44392. The cached dentry no longer has a corresponding ondisk dirent in the new
4440   directory and the dentry can be purged from the cache.
4441
44423. The cached dentry no longer has an ondisk dirent but the dentry cannot be
4443   purged.
4444   This is the problem case.
4445
4446Unfortunately, the current dentry cache design doesn't provide a means to walk
4447every child dentry of a specific directory, which makes this a hard problem.
4448There is no known solution.
4449
4450The proposed patchset is the
4451`directory repair
4452<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-dirs>`_
4453series.
4454
4455Parent Pointers
4456```````````````
4457
4458A parent pointer is a piece of file metadata that enables a user to locate the
4459file's parent directory without having to traverse the directory tree from the
4460root.
4461Without them, reconstruction of directory trees is hindered in much the same
4462way that the historic lack of reverse space mapping information once hindered
4463reconstruction of filesystem space metadata.
4464The parent pointer feature, however, makes total directory reconstruction
4465possible.
4466
4467XFS parent pointers include the dirent name and location of the entry within
4468the parent directory.
4469In other words, child files use extended attributes to store pointers to
4470parents in the form ``(parent_inum, parent_gen, dirent_pos) → (dirent_name)``.
4471The directory checking process can be strengthened to ensure that the target of
4472each dirent also contains a parent pointer pointing back to the dirent.
4473Likewise, each parent pointer can be checked by ensuring that the target of
4474each parent pointer is a directory and that it contains a dirent matching
4475the parent pointer.
4476Both online and offline repair can use this strategy.
4477
4478**Note**: The ondisk format of parent pointers is not yet finalized.
4479
4480+--------------------------------------------------------------------------+
4481| **Historical Sidebar**:                                                  |
4482+--------------------------------------------------------------------------+
4483| Directory parent pointers were first proposed as an XFS feature more     |
4484| than a decade ago by SGI.                                                |
4485| Each link from a parent directory to a child file is mirrored with an    |
4486| extended attribute in the child that could be used to identify the       |
4487| parent directory.                                                        |
4488| Unfortunately, this early implementation had major shortcomings and was  |
4489| never merged into Linux XFS:                                             |
4490|                                                                          |
4491| 1. The XFS codebase of the late 2000s did not have the infrastructure to |
4492|    enforce strong referential integrity in the directory tree.           |
4493|    It did not guarantee that a change in a forward link would always be  |
4494|    followed up with the corresponding change to the reverse links.       |
4495|                                                                          |
4496| 2. Referential integrity was not integrated into offline repair.         |
4497|    Checking and repairs were performed on mounted filesystems without    |
4498|    taking any kernel or inode locks to coordinate access.                |
4499|    It is not clear how this actually worked properly.                    |
4500|                                                                          |
4501| 3. The extended attribute did not record the name of the directory entry |
4502|    in the parent, so the SGI parent pointer implementation cannot be     |
4503|    used to reconnect the directory tree.                                 |
4504|                                                                          |
4505| 4. Extended attribute forks only support 65,536 extents, which means     |
4506|    that parent pointer attribute creation is likely to fail at some      |
4507|    point before the maximum file link count is achieved.                 |
4508|                                                                          |
4509| The original parent pointer design was too unstable for something like   |
4510| a file system repair to depend on.                                       |
4511| Allison Henderson, Chandan Babu, and Catherine Hoang are working on a    |
4512| second implementation that solves all shortcomings of the first.         |
4513| During 2022, Allison introduced log intent items to track physical       |
4514| manipulations of the extended attribute structures.                      |
4515| This solves the referential integrity problem by making it possible to   |
4516| commit a dirent update and a parent pointer update in the same           |
4517| transaction.                                                             |
4518| Chandan increased the maximum extent counts of both data and attribute   |
4519| forks, thereby ensuring that the extended attribute structure can grow   |
4520| to handle the maximum hardlink count of any file.                        |
4521+--------------------------------------------------------------------------+
4522
4523Case Study: Repairing Directories with Parent Pointers
4524^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4525
4526Directory rebuilding uses a :ref:`coordinated inode scan <iscan>` and
4527a :ref:`directory entry live update hook <liveupdate>` as follows:
4528
45291. Set up a temporary directory for generating the new directory structure,
4530   an xfblob for storing entry names, and an xfarray for stashing directory
4531   updates.
4532
45332. Set up an inode scanner and hook into the directory entry code to receive
4534   updates on directory operations.
4535
45363. For each parent pointer found in each file scanned, decide if the parent
4537   pointer references the directory of interest.
4538   If so:
4539
4540   a. Stash an addname entry for this dirent in the xfarray for later.
4541
4542   b. When finished scanning that file, flush the stashed updates to the
4543      temporary directory.
4544
45454. For each live directory update received via the hook, decide if the child
4546   has already been scanned.
4547   If so:
4548
4549   a. Stash an addname or removename entry for this dirent update in the
4550      xfarray for later.
4551      We cannot write directly to the temporary directory because hook
4552      functions are not allowed to modify filesystem metadata.
4553      Instead, we stash updates in the xfarray and rely on the scanner thread
4554      to apply the stashed updates to the temporary directory.
4555
45565. When the scan is complete, atomically swap the contents of the temporary
4557   directory and the directory being repaired.
4558   The temporary directory now contains the damaged directory structure.
4559
45606. Reap the temporary directory.
4561
45627. Update the dirent position field of parent pointers as necessary.
4563   This may require the queuing of a substantial number of xattr log intent
4564   items.
4565
4566The proposed patchset is the
4567`parent pointers directory repair
4568<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=pptrs-online-dir-repair>`_
4569series.
4570
4571**Unresolved Question**: How will repair ensure that the ``dirent_pos`` fields
4572match in the reconstructed directory?
4573
4574*Answer*: There are a few ways to solve this problem:
4575
45761. The field could be designated advisory, since the other three values are
4577   sufficient to find the entry in the parent.
4578   However, this makes indexed key lookup impossible while repairs are ongoing.
4579
45802. We could allow creating directory entries at specified offsets, which solves
4581   the referential integrity problem but runs the risk that dirent creation
4582   will fail due to conflicts with the free space in the directory.
4583
4584   These conflicts could be resolved by appending the directory entry and
4585   amending the xattr code to support updating an xattr key and reindexing the
4586   dabtree, though this would have to be performed with the parent directory
4587   still locked.
4588
45893. Same as above, but remove the old parent pointer entry and add a new one
4590   atomically.
4591
45924. Change the ondisk xattr format to ``(parent_inum, name) → (parent_gen)``,
4593   which would provide the attr name uniqueness that we require, without
4594   forcing repair code to update the dirent position.
4595   Unfortunately, this requires changes to the xattr code to support attr
4596   names as long as 263 bytes.
4597
45985. Change the ondisk xattr format to ``(parent_inum, hash(name)) →
4599   (name, parent_gen)``.
4600   If the hash is sufficiently resistant to collisions (e.g. sha256) then
4601   this should provide the attr name uniqueness that we require.
4602   Names shorter than 247 bytes could be stored directly.
4603
4604Discussion is ongoing under the `parent pointers patch deluge
4605<https://www.spinics.net/lists/linux-xfs/msg69397.html>`_.
4606
4607Case Study: Repairing Parent Pointers
4608^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4609
4610Online reconstruction of a file's parent pointer information works similarly to
4611directory reconstruction:
4612
46131. Set up a temporary file for generating a new extended attribute structure,
4614   an `xfblob<xfblob>` for storing parent pointer names, and an xfarray for
4615   stashing parent pointer updates.
4616
46172. Set up an inode scanner and hook into the directory entry code to receive
4618   updates on directory operations.
4619
46203. For each directory entry found in each directory scanned, decide if the
4621   dirent references the file of interest.
4622   If so:
4623
4624   a. Stash an addpptr entry for this parent pointer in the xfblob and xfarray
4625      for later.
4626
4627   b. When finished scanning the directory, flush the stashed updates to the
4628      temporary directory.
4629
46304. For each live directory update received via the hook, decide if the parent
4631   has already been scanned.
4632   If so:
4633
4634   a. Stash an addpptr or removepptr entry for this dirent update in the
4635      xfarray for later.
4636      We cannot write parent pointers directly to the temporary file because
4637      hook functions are not allowed to modify filesystem metadata.
4638      Instead, we stash updates in the xfarray and rely on the scanner thread
4639      to apply the stashed parent pointer updates to the temporary file.
4640
46415. Copy all non-parent pointer extended attributes to the temporary file.
4642
46436. When the scan is complete, atomically swap the attribute fork of the
4644   temporary file and the file being repaired.
4645   The temporary file now contains the damaged extended attribute structure.
4646
46477. Reap the temporary file.
4648
4649The proposed patchset is the
4650`parent pointers repair
4651<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=pptrs-online-parent-repair>`_
4652series.
4653
4654Digression: Offline Checking of Parent Pointers
4655^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4656
4657Examining parent pointers in offline repair works differently because corrupt
4658files are erased long before directory tree connectivity checks are performed.
4659Parent pointer checks are therefore a second pass to be added to the existing
4660connectivity checks:
4661
46621. After the set of surviving files has been established (i.e. phase 6),
4663   walk the surviving directories of each AG in the filesystem.
4664   This is already performed as part of the connectivity checks.
4665
46662. For each directory entry found, record the name in an xfblob, and store
4667   ``(child_ag_inum, parent_inum, parent_gen, dirent_pos)`` tuples in a
4668   per-AG in-memory slab.
4669
46703. For each AG in the filesystem,
4671
4672   a. Sort the per-AG tuples in order of child_ag_inum, parent_inum, and
4673      dirent_pos.
4674
4675   b. For each inode in the AG,
4676
4677      1. Scan the inode for parent pointers.
4678         Record the names in a per-file xfblob, and store ``(parent_inum,
4679         parent_gen, dirent_pos)`` tuples in a per-file slab.
4680
4681      2. Sort the per-file tuples in order of parent_inum, and dirent_pos.
4682
4683      3. Position one slab cursor at the start of the inode's records in the
4684         per-AG tuple slab.
4685         This should be trivial since the per-AG tuples are in child inumber
4686         order.
4687
4688      4. Position a second slab cursor at the start of the per-file tuple slab.
4689
4690      5. Iterate the two cursors in lockstep, comparing the parent_ino and
4691         dirent_pos fields of the records under each cursor.
4692
4693         a. Tuples in the per-AG list but not the per-file list are missing and
4694            need to be written to the inode.
4695
4696         b. Tuples in the per-file list but not the per-AG list are dangling
4697            and need to be removed from the inode.
4698
4699         c. For tuples in both lists, update the parent_gen and name components
4700            of the parent pointer if necessary.
4701
47024. Move on to examining link counts, as we do today.
4703
4704The proposed patchset is the
4705`offline parent pointers repair
4706<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=pptrs-repair>`_
4707series.
4708
4709Rebuilding directories from parent pointers in offline repair is very
4710challenging because it currently uses a single-pass scan of the filesystem
4711during phase 3 to decide which files are corrupt enough to be zapped.
4712This scan would have to be converted into a multi-pass scan:
4713
47141. The first pass of the scan zaps corrupt inodes, forks, and attributes
4715   much as it does now.
4716   Corrupt directories are noted but not zapped.
4717
47182. The next pass records parent pointers pointing to the directories noted
4719   as being corrupt in the first pass.
4720   This second pass may have to happen after the phase 4 scan for duplicate
4721   blocks, if phase 4 is also capable of zapping directories.
4722
47233. The third pass resets corrupt directories to an empty shortform directory.
4724   Free space metadata has not been ensured yet, so repair cannot yet use the
4725   directory building code in libxfs.
4726
47274. At the start of phase 6, space metadata have been rebuilt.
4728   Use the parent pointer information recorded during step 2 to reconstruct
4729   the dirents and add them to the now-empty directories.
4730
4731This code has not yet been constructed.
4732
4733.. _orphanage:
4734
4735The Orphanage
4736-------------
4737
4738Filesystems present files as a directed, and hopefully acyclic, graph.
4739In other words, a tree.
4740The root of the filesystem is a directory, and each entry in a directory points
4741downwards either to more subdirectories or to non-directory files.
4742Unfortunately, a disruption in the directory graph pointers result in a
4743disconnected graph, which makes files impossible to access via regular path
4744resolution.
4745
4746Without parent pointers, the directory parent pointer online scrub code can
4747detect a dotdot entry pointing to a parent directory that doesn't have a link
4748back to the child directory and the file link count checker can detect a file
4749that isn't pointed to by any directory in the filesystem.
4750If such a file has a positive link count, the file is an orphan.
4751
4752With parent pointers, directories can be rebuilt by scanning parent pointers
4753and parent pointers can be rebuilt by scanning directories.
4754This should reduce the incidence of files ending up in ``/lost+found``.
4755
4756When orphans are found, they should be reconnected to the directory tree.
4757Offline fsck solves the problem by creating a directory ``/lost+found`` to
4758serve as an orphanage, and linking orphan files into the orphanage by using the
4759inumber as the name.
4760Reparenting a file to the orphanage does not reset any of its permissions or
4761ACLs.
4762
4763This process is more involved in the kernel than it is in userspace.
4764The directory and file link count repair setup functions must use the regular
4765VFS mechanisms to create the orphanage directory with all the necessary
4766security attributes and dentry cache entries, just like a regular directory
4767tree modification.
4768
4769Orphaned files are adopted by the orphanage as follows:
4770
47711. Call ``xrep_orphanage_try_create`` at the start of the scrub setup function
4772   to try to ensure that the lost and found directory actually exists.
4773   This also attaches the orphanage directory to the scrub context.
4774
47752. If the decision is made to reconnect a file, take the IOLOCK of both the
4776   orphanage and the file being reattached.
4777   The ``xrep_orphanage_iolock_two`` function follows the inode locking
4778   strategy discussed earlier.
4779
47803. Call ``xrep_orphanage_compute_blkres`` and ``xrep_orphanage_compute_name``
4781   to compute the new name in the orphanage and the block reservation required.
4782
47834. Use ``xrep_orphanage_adoption_prep`` to reserve resources to the repair
4784   transaction.
4785
47865. Call ``xrep_orphanage_adopt`` to reparent the orphaned file into the lost
4787   and found, and update the kernel dentry cache.
4788
4789The proposed patches are in the
4790`orphanage adoption
4791<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-orphanage>`_
4792series.
4793
47946. Userspace Algorithms and Data Structures
4795===========================================
4796
4797This section discusses the key algorithms and data structures of the userspace
4798program, ``xfs_scrub``, that provide the ability to drive metadata checks and
4799repairs in the kernel, verify file data, and look for other potential problems.
4800
4801.. _scrubcheck:
4802
4803Checking Metadata
4804-----------------
4805
4806Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
4807That structure follows naturally from the data dependencies designed into the
4808filesystem from its beginnings in 1993.
4809In XFS, there are several groups of metadata dependencies:
4810
4811a. Filesystem summary counts depend on consistency within the inode indices,
4812   the allocation group space btrees, and the realtime volume space
4813   information.
4814
4815b. Quota resource counts depend on consistency within the quota file data
4816   forks, inode indices, inode records, and the forks of every file on the
4817   system.
4818
4819c. The naming hierarchy depends on consistency within the directory and
4820   extended attribute structures.
4821   This includes file link counts.
4822
4823d. Directories, extended attributes, and file data depend on consistency within
4824   the file forks that map directory and extended attribute data to physical
4825   storage media.
4826
4827e. The file forks depends on consistency within inode records and the space
4828   metadata indices of the allocation groups and the realtime volume.
4829   This includes quota and realtime metadata files.
4830
4831f. Inode records depends on consistency within the inode metadata indices.
4832
4833g. Realtime space metadata depend on the inode records and data forks of the
4834   realtime metadata inodes.
4835
4836h. The allocation group metadata indices (free space, inodes, reference count,
4837   and reverse mapping btrees) depend on consistency within the AG headers and
4838   between all the AG metadata btrees.
4839
4840i. ``xfs_scrub`` depends on the filesystem being mounted and kernel support
4841   for online fsck functionality.
4842
4843Therefore, a metadata dependency graph is a convenient way to schedule checking
4844operations in the ``xfs_scrub`` program:
4845
4846- Phase 1 checks that the provided path maps to an XFS filesystem and detect
4847  the kernel's scrubbing abilities, which validates group (i).
4848
4849- Phase 2 scrubs groups (g) and (h) in parallel using a threaded workqueue.
4850
4851- Phase 3 scans inodes in parallel.
4852  For each inode, groups (f), (e), and (d) are checked, in that order.
4853
4854- Phase 4 repairs everything in groups (i) through (d) so that phases 5 and 6
4855  may run reliably.
4856
4857- Phase 5 starts by checking groups (b) and (c) in parallel before moving on
4858  to checking names.
4859
4860- Phase 6 depends on groups (i) through (b) to find file data blocks to verify,
4861  to read them, and to report which blocks of which files are affected.
4862
4863- Phase 7 checks group (a), having validated everything else.
4864
4865Notice that the data dependencies between groups are enforced by the structure
4866of the program flow.
4867
4868Parallel Inode Scans
4869--------------------
4870
4871An XFS filesystem can easily contain hundreds of millions of inodes.
4872Given that XFS targets installations with large high-performance storage,
4873it is desirable to scrub inodes in parallel to minimize runtime, particularly
4874if the program has been invoked manually from a command line.
4875This requires careful scheduling to keep the threads as evenly loaded as
4876possible.
4877
4878Early iterations of the ``xfs_scrub`` inode scanner naïvely created a single
4879workqueue and scheduled a single workqueue item per AG.
4880Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``) to find
4881inode chunks and then called bulkstat (``XFS_IOC_BULKSTAT``) to gather enough
4882information to construct file handles.
4883The file handle was then passed to a function to generate scrub items for each
4884metadata object of each inode.
4885This simple algorithm leads to thread balancing problems in phase 3 if the
4886filesystem contains one AG with a few large sparse files and the rest of the
4887AGs contain many smaller files.
4888The inode scan dispatch function was not sufficiently granular; it should have
4889been dispatching at the level of individual inodes, or, to constrain memory
4890consumption, inode btree records.
4891
4892Thanks to Dave Chinner, bounded workqueues in userspace enable ``xfs_scrub`` to
4893avoid this problem with ease by adding a second workqueue.
4894Just like before, the first workqueue is seeded with one workqueue item per AG,
4895and it uses INUMBERS to find inode btree chunks.
4896The second workqueue, however, is configured with an upper bound on the number
4897of items that can be waiting to be run.
4898Each inode btree chunk found by the first workqueue's workers are queued to the
4899second workqueue, and it is this second workqueue that queries BULKSTAT,
4900creates a file handle, and passes it to a function to generate scrub items for
4901each metadata object of each inode.
4902If the second workqueue is too full, the workqueue add function blocks the
4903first workqueue's workers until the backlog eases.
4904This doesn't completely solve the balancing problem, but reduces it enough to
4905move on to more pressing issues.
4906
4907The proposed patchsets are the scrub
4908`performance tweaks
4909<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-performance-tweaks>`_
4910and the
4911`inode scan rebalance
4912<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-iscan-rebalance>`_
4913series.
4914
4915.. _scrubrepair:
4916
4917Scheduling Repairs
4918------------------
4919
4920During phase 2, corruptions and inconsistencies reported in any AGI header or
4921inode btree are repaired immediately, because phase 3 relies on proper
4922functioning of the inode indices to find inodes to scan.
4923Failed repairs are rescheduled to phase 4.
4924Problems reported in any other space metadata are deferred to phase 4.
4925Optimization opportunities are always deferred to phase 4, no matter their
4926origin.
4927
4928During phase 3, corruptions and inconsistencies reported in any part of a
4929file's metadata are repaired immediately if all space metadata were validated
4930during phase 2.
4931Repairs that fail or cannot be repaired immediately are scheduled for phase 4.
4932
4933In the original design of ``xfs_scrub``, it was thought that repairs would be
4934so infrequent that the ``struct xfs_scrub_metadata`` objects used to
4935communicate with the kernel could also be used as the primary object to
4936schedule repairs.
4937With recent increases in the number of optimizations possible for a given
4938filesystem object, it became much more memory-efficient to track all eligible
4939repairs for a given filesystem object with a single repair item.
4940Each repair item represents a single lockable object -- AGs, metadata files,
4941individual inodes, or a class of summary information.
4942
4943Phase 4 is responsible for scheduling a lot of repair work in as quick a
4944manner as is practical.
4945The :ref:`data dependencies <scrubcheck>` outlined earlier still apply, which
4946means that ``xfs_scrub`` must try to complete the repair work scheduled by
4947phase 2 before trying repair work scheduled by phase 3.
4948The repair process is as follows:
4949
49501. Start a round of repair with a workqueue and enough workers to keep the CPUs
4951   as busy as the user desires.
4952
4953   a. For each repair item queued by phase 2,
4954
4955      i.   Ask the kernel to repair everything listed in the repair item for a
4956           given filesystem object.
4957
4958      ii.  Make a note if the kernel made any progress in reducing the number
4959           of repairs needed for this object.
4960
4961      iii. If the object no longer requires repairs, revalidate all metadata
4962           associated with this object.
4963           If the revalidation succeeds, drop the repair item.
4964           If not, requeue the item for more repairs.
4965
4966   b. If any repairs were made, jump back to 1a to retry all the phase 2 items.
4967
4968   c. For each repair item queued by phase 3,
4969
4970      i.   Ask the kernel to repair everything listed in the repair item for a
4971           given filesystem object.
4972
4973      ii.  Make a note if the kernel made any progress in reducing the number
4974           of repairs needed for this object.
4975
4976      iii. If the object no longer requires repairs, revalidate all metadata
4977           associated with this object.
4978           If the revalidation succeeds, drop the repair item.
4979           If not, requeue the item for more repairs.
4980
4981   d. If any repairs were made, jump back to 1c to retry all the phase 3 items.
4982
49832. If step 1 made any repair progress of any kind, jump back to step 1 to start
4984   another round of repair.
4985
49863. If there are items left to repair, run them all serially one more time.
4987   Complain if the repairs were not successful, since this is the last chance
4988   to repair anything.
4989
4990Corruptions and inconsistencies encountered during phases 5 and 7 are repaired
4991immediately.
4992Corrupt file data blocks reported by phase 6 cannot be recovered by the
4993filesystem.
4994
4995The proposed patchsets are the
4996`repair warning improvements
4997<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-better-repair-warnings>`_,
4998refactoring of the
4999`repair data dependency
5000<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-data-deps>`_
5001and
5002`object tracking
5003<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-object-tracking>`_,
5004and the
5005`repair scheduling
5006<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-scheduling>`_
5007improvement series.
5008
5009Checking Names for Confusable Unicode Sequences
5010-----------------------------------------------
5011
5012If ``xfs_scrub`` succeeds in validating the filesystem metadata by the end of
5013phase 4, it moves on to phase 5, which checks for suspicious looking names in
5014the filesystem.
5015These names consist of the filesystem label, names in directory entries, and
5016the names of extended attributes.
5017Like most Unix filesystems, XFS imposes the sparest of constraints on the
5018contents of a name:
5019
5020- Slashes and null bytes are not allowed in directory entries.
5021
5022- Null bytes are not allowed in userspace-visible extended attributes.
5023
5024- Null bytes are not allowed in the filesystem label.
5025
5026Directory entries and attribute keys store the length of the name explicitly
5027ondisk, which means that nulls are not name terminators.
5028For this section, the term "naming domain" refers to any place where names are
5029presented together -- all the names in a directory, or all the attributes of a
5030file.
5031
5032Although the Unix naming constraints are very permissive, the reality of most
5033modern-day Linux systems is that programs work with Unicode character code
5034points to support international languages.
5035These programs typically encode those code points in UTF-8 when interfacing
5036with the C library because the kernel expects null-terminated names.
5037In the common case, therefore, names found in an XFS filesystem are actually
5038UTF-8 encoded Unicode data.
5039
5040To maximize its expressiveness, the Unicode standard defines separate control
5041points for various characters that render similarly or identically in writing
5042systems around the world.
5043For example, the character "Cyrillic Small Letter A" U+0430 "а" often renders
5044identically to "Latin Small Letter A" U+0061 "a".
5045
5046The standard also permits characters to be constructed in multiple ways --
5047either by using a defined code point, or by combining one code point with
5048various combining marks.
5049For example, the character "Angstrom Sign U+212B "Å" can also be expressed
5050as "Latin Capital Letter A" U+0041 "A" followed by "Combining Ring Above"
5051U+030A "◌̊".
5052Both sequences render identically.
5053
5054Like the standards that preceded it, Unicode also defines various control
5055characters to alter the presentation of text.
5056For example, the character "Right-to-Left Override" U+202E can trick some
5057programs into rendering "moo\\xe2\\x80\\xaegnp.txt" as "mootxt.png".
5058A second category of rendering problems involves whitespace characters.
5059If the character "Zero Width Space" U+200B is encountered in a file name, the
5060name will render identically to a name that does not have the zero width
5061space.
5062
5063If two names within a naming domain have different byte sequences but render
5064identically, a user may be confused by it.
5065The kernel, in its indifference to upper level encoding schemes, permits this.
5066Most filesystem drivers persist the byte sequence names that are given to them
5067by the VFS.
5068
5069Techniques for detecting confusable names are explained in great detail in
5070sections 4 and 5 of the
5071`Unicode Security Mechanisms <https://unicode.org/reports/tr39/>`_
5072document.
5073When ``xfs_scrub`` detects UTF-8 encoding in use on a system, it uses the
5074Unicode normalization form NFD in conjunction with the confusable name
5075detection component of
5076`libicu <https://github.com/unicode-org/icu>`_
5077to identify names with a directory or within a file's extended attributes that
5078could be confused for each other.
5079Names are also checked for control characters, non-rendering characters, and
5080mixing of bidirectional characters.
5081All of these potential issues are reported to the system administrator during
5082phase 5.
5083
5084Media Verification of File Data Extents
5085---------------------------------------
5086
5087The system administrator can elect to initiate a media scan of all file data
5088blocks.
5089This scan after validation of all filesystem metadata (except for the summary
5090counters) as phase 6.
5091The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space map
5092to find areas that are allocated to file data fork extents.
5093Gaps betweeen data fork extents that are smaller than 64k are treated as if
5094they were data fork extents to reduce the command setup overhead.
5095When the space map scan accumulates a region larger than 32MB, a media
5096verification request is sent to the disk as a directio read of the raw block
5097device.
5098
5099If the verification read fails, ``xfs_scrub`` retries with single-block reads
5100to narrow down the failure to the specific region of the media and recorded.
5101When it has finished issuing verification requests, it again uses the space
5102mapping ioctl to map the recorded media errors back to metadata structures
5103and report what has been lost.
5104For media errors in blocks owned by files, parent pointers can be used to
5105construct file paths from inode numbers for user-friendly reporting.
5106
51077. Conclusion and Future Work
5108=============================
5109
5110It is hoped that the reader of this document has followed the designs laid out
5111in this document and now has some familiarity with how XFS performs online
5112rebuilding of its metadata indices, and how filesystem users can interact with
5113that functionality.
5114Although the scope of this work is daunting, it is hoped that this guide will
5115make it easier for code readers to understand what has been built, for whom it
5116has been built, and why.
5117Please feel free to contact the XFS mailing list with questions.
5118
5119FIEXCHANGE_RANGE
5120----------------
5121
5122As discussed earlier, a second frontend to the atomic extent swap mechanism is
5123a new ioctl call that userspace programs can use to commit updates to files
5124atomically.
5125This frontend has been out for review for several years now, though the
5126necessary refinements to online repair and lack of customer demand mean that
5127the proposal has not been pushed very hard.
5128
5129Extent Swapping with Regular User Files
5130```````````````````````````````````````
5131
5132As mentioned earlier, XFS has long had the ability to swap extents between
5133files, which is used almost exclusively by ``xfs_fsr`` to defragment files.
5134The earliest form of this was the fork swap mechanism, where the entire
5135contents of data forks could be exchanged between two files by exchanging the
5136raw bytes in each inode fork's immediate area.
5137When XFS v5 came along with self-describing metadata, this old mechanism grew
5138some log support to continue rewriting the owner fields of BMBT blocks during
5139log recovery.
5140When the reverse mapping btree was later added to XFS, the only way to maintain
5141the consistency of the fork mappings with the reverse mapping index was to
5142develop an iterative mechanism that used deferred bmap and rmap operations to
5143swap mappings one at a time.
5144This mechanism is identical to steps 2-3 from the procedure above except for
5145the new tracking items, because the atomic extent swap mechanism is an
5146iteration of an existing mechanism and not something totally novel.
5147For the narrow case of file defragmentation, the file contents must be
5148identical, so the recovery guarantees are not much of a gain.
5149
5150Atomic extent swapping is much more flexible than the existing swapext
5151implementations because it can guarantee that the caller never sees a mix of
5152old and new contents even after a crash, and it can operate on two arbitrary
5153file fork ranges.
5154The extra flexibility enables several new use cases:
5155
5156- **Atomic commit of file writes**: A userspace process opens a file that it
5157  wants to update.
5158  Next, it opens a temporary file and calls the file clone operation to reflink
5159  the first file's contents into the temporary file.
5160  Writes to the original file should instead be written to the temporary file.
5161  Finally, the process calls the atomic extent swap system call
5162  (``FIEXCHANGE_RANGE``) to exchange the file contents, thereby committing all
5163  of the updates to the original file, or none of them.
5164
5165.. _swapext_if_unchanged:
5166
5167- **Transactional file updates**: The same mechanism as above, but the caller
5168  only wants the commit to occur if the original file's contents have not
5169  changed.
5170  To make this happen, the calling process snapshots the file modification and
5171  change timestamps of the original file before reflinking its data to the
5172  temporary file.
5173  When the program is ready to commit the changes, it passes the timestamps
5174  into the kernel as arguments to the atomic extent swap system call.
5175  The kernel only commits the changes if the provided timestamps match the
5176  original file.
5177
5178- **Emulation of atomic block device writes**: Export a block device with a
5179  logical sector size matching the filesystem block size to force all writes
5180  to be aligned to the filesystem block size.
5181  Stage all writes to a temporary file, and when that is complete, call the
5182  atomic extent swap system call with a flag to indicate that holes in the
5183  temporary file should be ignored.
5184  This emulates an atomic device write in software, and can support arbitrary
5185  scattered writes.
5186
5187Vectorized Scrub
5188----------------
5189
5190As it turns out, the :ref:`refactoring <scrubrepair>` of repair items mentioned
5191earlier was a catalyst for enabling a vectorized scrub system call.
5192Since 2018, the cost of making a kernel call has increased considerably on some
5193systems to mitigate the effects of speculative execution attacks.
5194This incentivizes program authors to make as few system calls as possible to
5195reduce the number of times an execution path crosses a security boundary.
5196
5197With vectorized scrub, userspace pushes to the kernel the identity of a
5198filesystem object, a list of scrub types to run against that object, and a
5199simple representation of the data dependencies between the selected scrub
5200types.
5201The kernel executes as much of the caller's plan as it can until it hits a
5202dependency that cannot be satisfied due to a corruption, and tells userspace
5203how much was accomplished.
5204It is hoped that ``io_uring`` will pick up enough of this functionality that
5205online fsck can use that instead of adding a separate vectored scrub system
5206call to XFS.
5207
5208The relevant patchsets are the
5209`kernel vectorized scrub
5210<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=vectorized-scrub>`_
5211and
5212`userspace vectorized scrub
5213<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=vectorized-scrub>`_
5214series.
5215
5216Quality of Service Targets for Scrub
5217------------------------------------
5218
5219One serious shortcoming of the online fsck code is that the amount of time that
5220it can spend in the kernel holding resource locks is basically unbounded.
5221Userspace is allowed to send a fatal signal to the process which will cause
5222``xfs_scrub`` to exit when it reaches a good stopping point, but there's no way
5223for userspace to provide a time budget to the kernel.
5224Given that the scrub codebase has helpers to detect fatal signals, it shouldn't
5225be too much work to allow userspace to specify a timeout for a scrub/repair
5226operation and abort the operation if it exceeds budget.
5227However, most repair functions have the property that once they begin to touch
5228ondisk metadata, the operation cannot be cancelled cleanly, after which a QoS
5229timeout is no longer useful.
5230
5231Defragmenting Free Space
5232------------------------
5233
5234Over the years, many XFS users have requested the creation of a program to
5235clear a portion of the physical storage underlying a filesystem so that it
5236becomes a contiguous chunk of free space.
5237Call this free space defragmenter ``clearspace`` for short.
5238
5239The first piece the ``clearspace`` program needs is the ability to read the
5240reverse mapping index from userspace.
5241This already exists in the form of the ``FS_IOC_GETFSMAP`` ioctl.
5242The second piece it needs is a new fallocate mode
5243(``FALLOC_FL_MAP_FREE_SPACE``) that allocates the free space in a region and
5244maps it to a file.
5245Call this file the "space collector" file.
5246The third piece is the ability to force an online repair.
5247
5248To clear all the metadata out of a portion of physical storage, clearspace
5249uses the new fallocate map-freespace call to map any free space in that region
5250to the space collector file.
5251Next, clearspace finds all metadata blocks in that region by way of
5252``GETFSMAP`` and issues forced repair requests on the data structure.
5253This often results in the metadata being rebuilt somewhere that is not being
5254cleared.
5255After each relocation, clearspace calls the "map free space" function again to
5256collect any newly freed space in the region being cleared.
5257
5258To clear all the file data out of a portion of the physical storage, clearspace
5259uses the FSMAP information to find relevant file data blocks.
5260Having identified a good target, it uses the ``FICLONERANGE`` call on that part
5261of the file to try to share the physical space with a dummy file.
5262Cloning the extent means that the original owners cannot overwrite the
5263contents; any changes will be written somewhere else via copy-on-write.
5264Clearspace makes its own copy of the frozen extent in an area that is not being
5265cleared, and uses ``FIEDEUPRANGE`` (or the :ref:`atomic extent swap
5266<swapext_if_unchanged>` feature) to change the target file's data extent
5267mapping away from the area being cleared.
5268When all other mappings have been moved, clearspace reflinks the space into the
5269space collector file so that it becomes unavailable.
5270
5271There are further optimizations that could apply to the above algorithm.
5272To clear a piece of physical storage that has a high sharing factor, it is
5273strongly desirable to retain this sharing factor.
5274In fact, these extents should be moved first to maximize sharing factor after
5275the operation completes.
5276To make this work smoothly, clearspace needs a new ioctl
5277(``FS_IOC_GETREFCOUNTS``) to report reference count information to userspace.
5278With the refcount information exposed, clearspace can quickly find the longest,
5279most shared data extents in the filesystem, and target them first.
5280
5281**Future Work Question**: How might the filesystem move inode chunks?
5282
5283*Answer*: To move inode chunks, Dave Chinner constructed a prototype program
5284that creates a new file with the old contents and then locklessly runs around
5285the filesystem updating directory entries.
5286The operation cannot complete if the filesystem goes down.
5287That problem isn't totally insurmountable: create an inode remapping table
5288hidden behind a jump label, and a log item that tracks the kernel walking the
5289filesystem to update directory entries.
5290The trouble is, the kernel can't do anything about open files, since it cannot
5291revoke them.
5292
5293**Future Work Question**: Can static keys be used to minimize the cost of
5294supporting ``revoke()`` on XFS files?
5295
5296*Answer*: Yes.
5297Until the first revocation, the bailout code need not be in the call path at
5298all.
5299
5300The relevant patchsets are the
5301`kernel freespace defrag
5302<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=defrag-freespace>`_
5303and
5304`userspace freespace defrag
5305<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=defrag-freespace>`_
5306series.
5307
5308Shrinking Filesystems
5309---------------------
5310
5311Removing the end of the filesystem ought to be a simple matter of evacuating
5312the data and metadata at the end of the filesystem, and handing the freed space
5313to the shrink code.
5314That requires an evacuation of the space at end of the filesystem, which is a
5315use of free space defragmentation!
5316