Lines Matching full:buffers

84 * Returns whether the folio has dirty or writeback buffers. If all the buffers
86 * any of the buffers are locked, it is assumed they are locked for IO.
181 * But it's the page lock which protects the buffers. To get around this,
223 /* we might be here because some of the buffers on this page are in __find_get_block_slow()
226 * elsewhere, don't buffer_error if we had some unmapped buffers in __find_get_block_slow()
286 * If all of the buffers are uptodate then we can set the page in end_buffer_async_read()
427 * If a page's buffers are under async read-in (end_buffer_async_read
429 * control could lock one of the buffers after it has completed
430 * but while some of the other buffers have not completed. This
435 * The page comes unlocked when it has no locked buffer_async buffers
439 * the buffers.
476 * management of a list of dependent buffers at ->i_mapping->private_list.
478 * Locking is a little subtle: try_to_free_buffers() will remove buffers
481 * at the time, not against the S_ISREG file which depends on those buffers.
483 * which backs the buffers. Which is different from the address_space
484 * against which the buffers are listed. So for a particular address_space,
489 * Which introduces a requirement: all buffers on an address_space's
492 * address_spaces which do not place buffers at ->private_list via these
503 * mark_buffer_dirty_fsync() to clearly define why those buffers are being
510 * that buffers are taken *off* the old inode's list when they are freed
537 * as you dirty the buffers, and then use osync_inode_buffers to wait for
538 * completion. Any other dirty buffers which are not yet queued for
567 * sync_mapping_buffers - write out & wait upon a mapping's "associated" buffers
568 * @mapping: the mapping which wants those buffers written
570 * Starts I/O against the buffers at mapping->private_list, and waits upon
574 * @mapping is a file or directory which needs those buffers to be written for
599 * filesystems which track all non-inode metadata in the buffers list
624 /* check and advance again to catch errors after syncing out buffers */ in generic_buffers_fsync_noflush()
642 * filesystems which track all non-inode metadata in the buffers list
703 * If the page has buffers, the uptodate buffers are set dirty, to preserve
704 * dirty-state coherency between the page and the buffers. If the page does
705 * not have buffers then when they are later attached they will all be set
708 * The buffers are dirtied before the page is dirtied. There's a small race
711 * before the buffers, a concurrent writepage caller could clear the page dirty
712 * bit, see a bunch of clean buffers and we'd end up with dirty buffers/clean
716 * page's buffer list. Also use this to protect against clean buffers being
758 * Write out and wait upon a list of buffers.
761 * initially dirty buffers get waited on, but that any subsequently
762 * dirtied buffers don't. After all, we don't want fsync to last
765 * Do this in two main stages: first we copy dirty buffers to a
773 * the osync code to catch these locked, dirty buffers without requeuing
774 * any newly dirty buffers for write.
856 * Invalidate any and all dirty buffers on a given inode. We are
858 * done a sync(). Just drop the buffers from the inode list.
861 * assumes that all the buffers are against the blockdev. Not true
880 * Remove any clean buffers from the inode's buffer list. This is called
881 * when we're trying to free the inode itself. Those buffers can pin it.
883 * Returns true if all buffers were removed.
909 * Create the appropriate buffers when given a folio for data area and
911 * follow the buffers created. Return NULL if unable to create more
912 * buffers.
1001 * Initialise the state of a blockdev folio's buffers.
1075 * Link the folio to the buffers and initialise them. Take the in grow_dev_page()
1093 * Create buffers for the specified block device block's page. If
1094 * that page was dirty, the buffers are set dirty also.
1117 /* Create a page with the proper size buffers. */ in grow_buffers()
1152 * The relationship between dirty buffers and dirty pages:
1154 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1157 * At all times, the dirtiness of the buffers represents the dirtiness of
1158 * subsections of the page. If the page has buffers, the page dirty bit is
1161 * When a page is set dirty in its entirety, all its buffers are marked dirty
1162 * (if the page has buffers).
1165 * buffers are not.
1167 * Also. When blockdev buffers are explicitly read with bread(), they
1169 * uptodate - even if all of its buffers are uptodate. A subsequent
1171 * buffers, will set the folio uptodate and will perform no I/O.
1235 * Decrement a buffer_head's reference count. If all buffers against a page
1237 * and unlocked then try_to_free_buffers() may strip the buffers from the page
1238 * in preparation for freeing it (sometimes, rarely, buffers are removed from
1239 * a page but it ends up not being freed, and buffers may later be reattached).
1290 * The bhs[] array is sorted - newest buffer is at bhs[0]. Buffers have their
1583 * block_invalidate_folio() does not have to release all buffers, but it must
1627 * We release buffers only if the entire folio is being invalidated. in block_invalidate_folio()
1639 * We attach and possibly dirty the buffers atomically wrt
1681 * clean_bdev_aliases: clean a range of buffers in block device
1682 * @bdev: Block device to clean buffers in
1696 * writeout I/O going on against recently-freed buffers. We don't wait on that
1722 * to pin buffers here since we can afford to sleep and in clean_bdev_aliases()
1794 * While block_write_full_page is writing back the dirty buffers under
1795 * the page lock, whoever dirtied the buffers may decide to clean them
1826 * here, and the (potentially unmapped) buffers may become dirty at in __block_write_full_folio()
1830 * Buffers outside i_size may be dirtied by block_dirty_folio; in __block_write_full_folio()
1842 * Get all the dirty buffers mapped to disk addresses and in __block_write_full_folio()
1848 * mapped buffers outside i_size will occur, because in __block_write_full_folio()
1898 * The folio and its buffers are protected by the writeback flag, in __block_write_full_folio()
1918 * The folio was marked dirty, but the buffers were in __block_write_full_folio()
1939 /* Recovery: lock and submit the mapped buffers */ in __block_write_full_folio()
1972 * If a folio has any new buffers, zero them out here, and mark them uptodate
2204 * If this is a partial write which happened to make all buffers in __block_commit_write()
2251 * The buffers that were written will now be uptodate, so in block_write_end()
2316 * block_is_partially_uptodate checks whether buffers within a folio are
2319 * Returns true if all buffers which correspond to the specified part
2430 * All buffers are uptodate - we can set the folio uptodate in block_read_full_folio()
2439 /* Stage two: lock the buffers */ in block_read_full_folio()
2889 * try_to_free_buffers() checks if all the buffers on this particular folio
2895 * If the folio is dirty but all the buffers are clean then we need to
2897 * may be against a block device, and a later reattachment of buffers
2898 * to a dirty folio will set *all* buffers dirty. Which would corrupt
2901 * The same applies to regular filesystem folios: if all the buffers are
2960 * If the filesystem writes its buffers by hand (eg ext3) in try_to_free_buffers()
2961 * then we can have clean buffers against a dirty folio. We in try_to_free_buffers()
2966 * the folio's buffers clean. We discover that here and clean in try_to_free_buffers()
3109 * __bh_read_batch - Submit read for a batch of unlocked buffers