Lines Matching full:we

26  *   1) space_info.  This is the ultimate arbiter of how much space we can use.
29 * reservations we care about total_bytes - SUM(space_info->bytes_) when
34 * metadata reservation we have. You can see the comment in the block_rsv
38 * 3) btrfs_calc*_size. These are the worst case calculations we used based
39 * on the number of items we will want to modify. We have one for changing
40 * items, and one for inserting new items. Generally we use these helpers to
46 * We call into either btrfs_reserve_data_bytes() or
47 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
48 * num_bytes we want to reserve.
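
To make the worst-case math concrete, here is a minimal standalone sketch of those two helpers. The BTRFS_MAX_LEVEL bound and the doubling for inserts mirror the in-tree btrfs_calc_metadata_size() and btrfs_calc_insert_metadata_size(); the 16KiB nodesize is just an example value.

#include <stdint.h>
#include <stdio.h>

#define BTRFS_MAX_LEVEL 8	/* maximum height of a btrfs b-tree */

/* Changing an existing item may COW one tree block at every level of
 * the path from the root to the leaf. */
static uint64_t calc_metadata_size(uint64_t nodesize, unsigned int num_items)
{
	return nodesize * BTRFS_MAX_LEVEL * num_items;
}

/* Inserting a new item may additionally split a block at every level,
 * hence the extra factor of two. */
static uint64_t calc_insert_metadata_size(uint64_t nodesize,
					  unsigned int num_items)
{
	return nodesize * BTRFS_MAX_LEVEL * 2 * num_items;
}

int main(void)
{
	uint64_t nodesize = 16384;	/* example: 16KiB nodes */

	printf("insert one item: %llu bytes\n",
	       (unsigned long long)calc_insert_metadata_size(nodesize, 1));
	printf("update one item: %llu bytes\n",
	       (unsigned long long)calc_metadata_size(nodesize, 1));
	return 0;
}
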
65 * Assume we are unable to simply make the reservation because we do not have
88 * Check if ->bytes == 0; if so, we got our reservation and we can carry
89 * on; if not, return the appropriate error (ENOSPC, but can be EINTR if we
94 * Same as the above, except we add ourselves to the
95 * space_info->priority_tickets, and we do not use ticket->wait; we simply
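
A toy model of the priority path described here, with made-up names (struct ticket, flush_one_state) and a stubbed flush that pretends a late state frees enough space; the real code walks an array of flush states and re-checks ticket->bytes after each one.

#include <stdint.h>
#include <stdio.h>

#define NR_FLUSH_STATES 11	/* illustrative count of flush states */

/* Toy ticket: ->bytes drops to 0 once the reservation is granted. */
struct ticket {
	uint64_t bytes;
	int error;
};

/* Stub for the per-state flush work: pretend a late state frees space. */
static void flush_one_state(struct ticket *t, int state)
{
	if (state >= 8)
		t->bytes = 0;
}

/* Priority tickets don't sleep on ticket->wait; they walk the flush
 * states themselves and re-check the ticket after each one. */
static int run_priority_ticket(struct ticket *t)
{
	for (int state = 0; state < NR_FLUSH_STATES; state++) {
		flush_one_state(t, state);
		if (t->bytes == 0)
			return 0;	/* granted */
	}
	return t->error ? t->error : -28;	/* -ENOSPC */
}

int main(void)
{
	struct ticket t = { .bytes = 1 << 20, .error = 0 };

	printf("priority ticket -> %d\n", run_priority_ticket(&t));
	return 0;
}
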
101 * Generally speaking we will have two cases for each state, a "nice" state
102 * and an "ALL THE THINGS" state. In btrfs we delay a lot of work in order to
106 * reclaim space so we can make new reservations.
110 * for example, we would update the inode item at write time to update the
112 * isize or bytes. We keep these delayed items to coalesce these operations
118 * for delayed allocation. We can reclaim some of this space simply by
119 * running delalloc, but usually we need to wait for ordered extents to
123 * We have a block reserve for the outstanding delayed refs space, and every
125 * to reclaim space, but we want to hold this until the end because COW can
126 * churn a lot and we can avoid making some extent tree modifications if we
130 * We will skip this the first time through space reservation, because of
131 * overcommit and we don't want to have a lot of useless metadata space when
135 * If we're freeing inodes we're likely freeing checksums, file extent
140 * This will commit the transaction. Historically we had a lot of logic
141 * surrounding whether or not we'd commit the transaction, but this was born
142 * out of a pre-tickets era where we could end up committing the transaction
144 * ticketing system we know if we're not making progress and can error
150 * Because we hold so many reservations for metadata we will allow you to
155 * You can see the current logic for when we allow overcommit in
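
For orientation, a sketch of how those paired "nice" / "ALL THE THINGS" states are commonly laid out in escalation order; the names match the kernel's flush state enum, but exact membership and order vary by version.

/* Approximate shape of the metadata flush states. The _NR variants are
 * the "nice" versions that only process a limited number of items; the
 * bare versions are the "ALL THE THINGS" versions. */
enum btrfs_flush_state_model {
	FLUSH_DELAYED_ITEMS_NR = 1,
	FLUSH_DELAYED_ITEMS,
	FLUSH_DELAYED_REFS_NR,
	FLUSH_DELAYED_REFS,
	FLUSH_DELALLOC,
	FLUSH_DELALLOC_WAIT,
	FLUSH_DELALLOC_FULL,
	ALLOC_CHUNK,
	ALLOC_CHUNK_FORCE,
	RUN_DELAYED_IPUTS,
	COMMIT_TRANS,
};
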
176 * after adding space to the filesystem, we need to clear the full flags
360 * If we have dup, raid1 or raid10 then only half of the free in calc_available_free_space()
362 * doesn't include the parity drive, so we don't have to in calc_available_free_space()
369 * If we aren't flushing all things, let us overcommit up to in calc_available_free_space()
370 * half of the space. If we can flush, don't let us overcommit in calc_available_free_space()
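
A standalone sketch of the two ideas above: the profile factor and the overcommit test. The halving for mirrored profiles and the used + bytes < total + avail comparison follow the described logic; the overcommit fractions are illustrative, not the exact in-tree divisors.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t available_free_space(uint64_t free_bytes, bool mirrored,
				     bool can_flush_all)
{
	uint64_t avail = free_bytes;

	/* DUP/RAID1/RAID10 write every byte twice, so only half of the
	 * raw free space is really usable. Parity profiles already
	 * exclude the parity drives from free_bytes. */
	if (mirrored)
		avail >>= 1;

	/* Allow only a fraction of that for overcommit; be more
	 * conservative when we can flush (fractions illustrative). */
	if (can_flush_all)
		avail >>= 3;
	else
		avail >>= 1;
	return avail;
}

/* The overcommit test itself: the reservation may exceed what is
 * physically allocated as long as it fits in total + avail. */
static bool can_overcommit(uint64_t used, uint64_t num_bytes,
			   uint64_t total_bytes, uint64_t avail)
{
	return used + num_bytes < total_bytes + avail;
}

int main(void)
{
	uint64_t avail = available_free_space(8ULL << 30, true, false);

	printf("overcommit ok: %d\n",
	       can_overcommit(1ULL << 30, 1ULL << 20, 1ULL << 30, avail));
	return 0;
}
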
410 * This is for space we already have accounted in space_info->bytes_may_use, so
411 * basically when we're returning space from block_rsv's.
592 /* Calculate the number of pages we need to flush for space reservation */ in shrink_delalloc()
597 * to_reclaim is set to however much metadata we need to in shrink_delalloc()
599 * exactly. What we really want to do is reclaim full inode's in shrink_delalloc()
601 * here. We will take a fraction of the delalloc bytes for our in shrink_delalloc()
603 * the amount we write to cover an entire dirty extent, which in shrink_delalloc()
615 * If we are doing more ordered than delalloc we need to just wait on in shrink_delalloc()
616 * ordered extents, otherwise we'll waste time trying to flush delalloc in shrink_delalloc()
617 * that likely won't give us the space back we need. in shrink_delalloc()
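
A small sketch of that decision, with hypothetical helper names standing in for the real writeback and ordered-extent waiting:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the two reclaim actions. */
static void wait_ordered_extents(unsigned long nr_items)
{
	printf("waiting on %lu ordered extents\n", nr_items);
}

static void start_delalloc_writeback(unsigned long nr_items)
{
	printf("writing back delalloc for %lu items\n", nr_items);
}

/* If most outstanding bytes already sit in ordered extents, flushing
 * more delalloc only creates more ordered extents, so wait on the
 * ordered extents instead. */
static void shrink_delalloc_step(uint64_t delalloc_bytes,
				 uint64_t ordered_bytes,
				 unsigned long items)
{
	if (ordered_bytes > delalloc_bytes)
		wait_ordered_extents(items);
	else
		start_delalloc_writeback(items);
}

int main(void)
{
	shrink_delalloc_step(4 << 20, 16 << 20, 32);	/* ordered-heavy */
	shrink_delalloc_step(16 << 20, 4 << 20, 32);	/* delalloc-heavy */
	return 0;
}
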
631 * We need to make sure any outstanding async pages are now in shrink_delalloc()
632 * processed before we continue. This is because things like in shrink_delalloc()
634 * marked clean. We don't use filemap_fdatawrite() for flushing in shrink_delalloc()
635 * because we want to control how many pages we write out at a in shrink_delalloc()
636 * time, thus this is the only safe way to make sure we've in shrink_delalloc()
640 * This exists because we do not want to wait for each in shrink_delalloc()
641 * individual inode to finish its async work; we simply want to in shrink_delalloc()
643 * for all of the async work to catch up. Once we're done with in shrink_delalloc()
644 * that we know we'll have ordered extents for everything and we in shrink_delalloc()
645 * can decide if we wait for that or not. in shrink_delalloc()
647 * If we choose to replace this in the future, make absolutely in shrink_delalloc()
656 * We don't want to wait forever; if we wrote fewer pages in this in shrink_delalloc()
657 * loop than we have outstanding, only wait for that number of in shrink_delalloc()
658 * pages, otherwise we can wait for all async pages to finish in shrink_delalloc()
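
A sketch of the bounded wait target described above; the kernel then effectively waits until the outstanding async page count drops to this target, but the helper name and types here are illustrative.

#include <stdint.h>
#include <stdio.h>

/* If we queued fewer pages this loop than are already outstanding,
 * only wait for our share to complete; otherwise wait for everything. */
static uint64_t async_pages_wait_target(uint64_t outstanding,
					uint64_t written_this_loop)
{
	if (outstanding > written_this_loop)
		return outstanding - written_this_loop;
	return 0;	/* wait for all async pages to finish */
}

int main(void)
{
	printf("target: %llu\n",
	       (unsigned long long)async_pages_wait_target(1024, 256));
	printf("target: %llu\n",
	       (unsigned long long)async_pages_wait_target(128, 256));
	return 0;
}
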
679 * If we are flushing for preemption, we just want a one-shot of delalloc in shrink_delalloc()
680 * flushing so we can stop flushing if we decide we don't need in shrink_delalloc()
775 * If we have pending delayed iputs then we could free up a in flush_space()
776 * bunch of pinned space, so make sure we run the iputs before in flush_space()
777 * we do our pinned bytes check below. in flush_space()
785 * We don't want to start a new transaction, just attach to the in flush_space()
787 * happening at the moment. Note: we don't use a nostart join in flush_space()
825 * We may be flushing because suddenly we have less space than we had in btrfs_calc_reclaim_metadata_size()
826 * before, and now we're well over-committed based on our current free in btrfs_calc_reclaim_metadata_size()
827 * space. If that's the case add in our overage so we make sure to put in btrfs_calc_reclaim_metadata_size()
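
A sketch of that overage adjustment, assuming illustrative names rather than the in-tree signature:

#include <stdint.h>
#include <stdio.h>

/* If we're overcommitted (more bytes accounted than total plus what
 * overcommit allows), add the overage to the reclaim target so that
 * flushing brings us back under the limit. */
static uint64_t add_overcommit_overage(uint64_t to_reclaim, uint64_t used,
				       uint64_t total_bytes, uint64_t avail)
{
	if (used > total_bytes + avail)
		to_reclaim += used - (total_bytes + avail);
	return to_reclaim;
}

int main(void)
{
	printf("to_reclaim: %llu bytes\n",
	       (unsigned long long)add_overcommit_overage(64 << 20,
							  3ULL << 30,
							  2ULL << 30,
							  512ULL << 20));
	return 0;
}
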
848 /* If we're just plain full then async reclaim just slows us down. */ in need_preemptive_reclaim()
860 * 128MiB is 1/4 of the maximum global rsv size. If we have less than in need_preemptive_reclaim()
862 * we don't have a lot of things that need flushing. in need_preemptive_reclaim()
868 * We have tickets queued, bail so we don't compete with the async in need_preemptive_reclaim()
875 * If we have over half of the free space occupied by reservations or in need_preemptive_reclaim()
876 * pinned then we want to start flushing. in need_preemptive_reclaim()
878 * We do not do the traditional thing here, which is to say in need_preemptive_reclaim()
883 * because this doesn't quite work how we want. If we had more than 50% in need_preemptive_reclaim()
884 * of the space_info used by bytes_used and we had 0 available we'd just in need_preemptive_reclaim()
885 * constantly run the background flusher. Instead we want it to kick in in need_preemptive_reclaim()
900 * much delalloc we need for the background flusher to kick in. in need_preemptive_reclaim()
914 * If we have more ordered bytes than delalloc bytes then we're either in need_preemptive_reclaim()
915 * doing a lot of DIO, or we simply don't have a lot of delalloc waiting in need_preemptive_reclaim()
919 * nothing, if our reservations are tied up in ordered extents we'll in need_preemptive_reclaim()
924 * block reserves that we would actually be able to directly reclaim in need_preemptive_reclaim()
925 * from. In this case if we're heavy on metadata operations this will in need_preemptive_reclaim()
930 * We want to make sure we truly are maxed out on ordered however, so in need_preemptive_reclaim()
931 * cut ordered in half, and if it's still higher than delalloc then we in need_preemptive_reclaim()
932 * can keep flushing. This is to avoid the case where we start in need_preemptive_reclaim()
933 * flushing, and now delalloc == ordered and we stop preemptively in need_preemptive_reclaim()
934 * flushing when we could still have several gigs of delalloc to flush. in need_preemptive_reclaim()
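
A sketch of the halving heuristic; the function name and exact comparison are illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Halve the ordered byte count and only treat the system as "ordered
 * bound" (stop preemptive delalloc flushing) if it still exceeds
 * delalloc. This keeps flushing going while gigabytes of delalloc
 * remain, instead of stopping the moment ordered == delalloc. */
static bool ordered_bound(uint64_t ordered_bytes, uint64_t delalloc_bytes)
{
	return (ordered_bytes >> 1) >= delalloc_bytes;
}

int main(void)
{
	printf("%d\n", ordered_bound(8ULL << 30, 1ULL << 30));	/* 1: stop */
	printf("%d\n", ordered_bound(2ULL << 30, 3ULL << 30));	/* 0: keep */
	return 0;
}
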
980 * maybe_fail_all_tickets - we've exhausted our flushing, start failing tickets
982 * @space_info: the space info we were flushing
984 * We call this when we've exhausted our flushing ability and haven't made
986 * order, so if there is a large ticket first and then smaller ones we could
1028 * We're just throwing tickets away, so more flushing may not in maybe_fail_all_tickets()
1029 * trip over btrfs_try_granting_tickets; we need to call it in maybe_fail_all_tickets()
1030 * here to see if we can make progress with the next ticket in in maybe_fail_all_tickets()
1040 * This is for normal flushers, we can wait all goddamned day if we want to. We
1041 * will loop and continuously try to flush as long as we are making progress.
1042 * We count progress as clearing off tickets each time we have to loop.
1087 * We do not want to empty the system of delalloc unless we're in btrfs_async_reclaim_metadata_space()
1089 * logic before we start doing a FLUSH_DELALLOC_FULL. in btrfs_async_reclaim_metadata_space()
1095 * We don't want to force a chunk allocation until we've tried in btrfs_async_reclaim_metadata_space()
1096 * pretty hard to reclaim space. Think of the case where we in btrfs_async_reclaim_metadata_space()
1098 * to reclaim. We would rather use that than possibly create a in btrfs_async_reclaim_metadata_space()
1102 * around then we can force a chunk allocation. in btrfs_async_reclaim_metadata_space()
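
The overall shape of that loop, as a self-contained simulation with stubbed tickets and reclaim; only the control flow (keep looping while each full state cycle clears a ticket, otherwise fail the rest) is meant to track the real code.

#include <stdbool.h>
#include <stdio.h>

static int pending_tickets = 3;
static int reclaimable_cycles = 2;	/* cycles that still free space */

/* One pass through all flush states; true if a ticket was granted. */
static bool flush_state_cycle(void)
{
	if (reclaimable_cycles > 0) {
		reclaimable_cycles--;
		pending_tickets--;
		return true;
	}
	return false;
}

static void fail_all_tickets(void)
{
	printf("failing %d tickets with -ENOSPC\n", pending_tickets);
	pending_tickets = 0;
}

/* Keep cycling through the flush states for as long as each cycle
 * clears at least one ticket; once a cycle makes no progress, fail
 * the tickets that remain. */
int main(void)
{
	while (pending_tickets > 0) {
		if (!flush_state_cycle())
			fail_all_tickets();
	}
	return 0;
}
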
1125 * This handles pre-flushing of metadata space before we get to the point that
1126 * we need to start blocking threads on tickets. The logic here is different
1128 * much we need to flush; instead it attempts to keep us below the 80% full
1160 * We don't have a precise counter for the metadata being in btrfs_preempt_reclaim_metadata_space()
1161 * reserved for delalloc, so we'll approximate it by subtracting in btrfs_preempt_reclaim_metadata_space()
1163 * amount is higher than the individual reserves, then we can in btrfs_preempt_reclaim_metadata_space()
1174 * We don't want to include the global_rsv in our calculation, in btrfs_preempt_reclaim_metadata_space()
1175 * because that's space we can't touch. Subtract it from the in btrfs_preempt_reclaim_metadata_space()
1181 * We really want to avoid flushing delalloc too much, as it in btrfs_preempt_reclaim_metadata_space()
1205 * We don't want to reclaim everything, just a portion, so scale in btrfs_preempt_reclaim_metadata_space()
1217 /* We only went through once, back off our clamping. */ in btrfs_preempt_reclaim_metadata_space()
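
A sketch of the approximation: treat everything in bytes_may_use that no tracked reserve accounts for as delalloc reservation, leave the global reserve out, and only reclaim a portion per pass. Field names and the scale factor are illustrative.

#include <stdint.h>
#include <stdio.h>

/* Approximate the metadata reserved for delalloc: everything counted
 * in bytes_may_use that isn't attributable to a tracked block reserve.
 * The global reserve is excluded entirely since we can't touch it. */
static uint64_t approx_delalloc_reserve(uint64_t bytes_may_use,
					uint64_t delayed_refs_rsv,
					uint64_t delayed_items_rsv,
					uint64_t trans_rsv,
					uint64_t global_rsv)
{
	uint64_t tracked = delayed_refs_rsv + delayed_items_rsv +
			   trans_rsv + global_rsv;

	return bytes_may_use > tracked ? bytes_may_use - tracked : 0;
}

int main(void)
{
	uint64_t to_reclaim = approx_delalloc_reserve(512 << 20, 32 << 20,
						      16 << 20, 64 << 20,
						      128 << 20);

	/* Only reclaim a portion per pass, not everything at once. */
	to_reclaim >>= 2;	/* scale factor illustrative */
	printf("reclaim %llu bytes this pass\n",
	       (unsigned long long)to_reclaim);
	return 0;
}
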
1228 * 1) compression is on and we allocate less space than we reserved
1229 * 2) we are overwriting existing space
1236 * For #2 this is trickier. Once the ordered extent runs we will drop the
1237 * extent in the range we are overwriting, which creates a delayed ref for
1242 * If we are freeing inodes, we want to make sure all delayed iputs have
1249 * This is where we reclaim all of the pinned space generated by running the
1253 * For data we start with alloc chunk force; however, we could have been full
1255 * so if we now have space to allocate do the force chunk allocation.
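
The data-side order described here, sketched as a state list; the state names match the kernel's, but treat the exact array as approximate across versions.

enum data_flush_state {
	FLUSH_DELALLOC_FULL,
	RUN_DELAYED_IPUTS,
	COMMIT_TRANS,
	ALLOC_CHUNK_FORCE,
};

static const enum data_flush_state data_flush_states[] = {
	FLUSH_DELALLOC_FULL,	/* write out and wait on all delalloc */
	RUN_DELAYED_IPUTS,	/* freed inodes release pinned space */
	COMMIT_TRANS,		/* commit to unpin what the run pinned */
	ALLOC_CHUNK_FORCE,	/* reclaim may have opened room for a chunk */
};
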
1380 * because we may have only satisfied the priority tickets and still in priority_reclaim_metadata_space()
1381 * left non-priority tickets on the list. We would then have in priority_reclaim_metadata_space()
1402 * Attempt to steal from the global rsv if we can, except if the fs was in priority_reclaim_metadata_space()
1405 * success to the caller if we can steal from the global rsv - this is in priority_reclaim_metadata_space()
1418 * We must run try_granting_tickets here because we could be a large in priority_reclaim_metadata_space()
1432 /* We could have been granted before we got here. */ in priority_reclaim_data_space()
1467 * Delete us from the list. After we unlock the space in wait_reserve_ticket()
1468 * info, we don't want the async reclaim job to reserve in wait_reserve_ticket()
1496 * @flush: how much we can flush
1536 * Check that we can't have an error set if the reservation succeeded, in handle_reserve_ticket()
1564 * If we're heavy on ordered operations then clamping won't help us. We in maybe_clamp_preempt()
1568 * delayed nodes. If we're already more ordered than delalloc then in maybe_clamp_preempt()
1569 * we're keeping up; otherwise we aren't and should probably clamp. in maybe_clamp_preempt()
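
A sketch of the clamp adjustment described in these two places: clamp up when delalloc outpaces ordered (we are not keeping up), back off when a single pass sufficed. Bounds and names are illustrative.

#include <stdint.h>
#include <stdio.h>

/* The clamp scales how much space must be eaten before preemptive
 * flushing kicks in: a higher clamp means we flush later but bigger. */
struct preempt_state {
	int clamp;	/* 1..8, illustrative bounds */
};

static void maybe_clamp(struct preempt_state *s, uint64_t delalloc,
			uint64_t ordered)
{
	if (delalloc > ordered && s->clamp < 8)
		s->clamp++;	/* not keeping up: clamp harder */
}

static void loop_finished(struct preempt_state *s, int loops)
{
	if (loops == 1 && s->clamp > 1)
		s->clamp--;	/* we only went through once, back off */
}

int main(void)
{
	struct preempt_state s = { .clamp = 1 };

	maybe_clamp(&s, 2ULL << 30, 1ULL << 30);	/* behind on delalloc */
	loop_finished(&s, 1);				/* quick pass */
	printf("clamp = %d\n", s.clamp);
	return 0;
}
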
1595 * @space_info: space info we want to allocate from
1596 * @orig_bytes: number of bytes we want
1597 * @flush: whether or not we can flush to make our reservation
1621 * BTRFS_RESERVE_FLUSH_EVICT, as we could deadlock because those in __reserve_bytes()
1640 * We don't want NO_FLUSH allocations to jump everybody, they can in __reserve_bytes()
1651 * Carry on if we have enough space (short-circuit) OR call in __reserve_bytes()
1652 * can_overcommit() to ensure we can overcommit to continue. in __reserve_bytes()
1663 * Things are dire, we need to make a reservation so we don't abort. We in __reserve_bytes()
1664 * will let this reservation go through as long as we have actual space in __reserve_bytes()
1677 * If we couldn't make a reservation then setup our reservation ticket in __reserve_bytes()
1680 * If we are a priority flusher then we just need to add our ticket to in __reserve_bytes()
1681 * the list and we will do our own flushing further down. in __reserve_bytes()
1698 * We were forced to add a reserve ticket, so in __reserve_bytes()
1719 * We will do the space reservation dance during log replay, in __reserve_bytes()
1720 * which means we won't have fs_info->fs_root set, so don't do in __reserve_bytes()
1721 * the async reclaim as we will panic. in __reserve_bytes()
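
Pulling the above together, a condensed userspace sketch of the __reserve_bytes() decision ladder; locking, the real ticket machinery, and the error paths are omitted, and all names besides the flow are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum flush_mode { NO_FLUSH, PRIORITY_FLUSH, ASYNC_FLUSH };

/* Hypothetical overcommit check (see the earlier sketch). */
static bool can_overcommit(uint64_t used, uint64_t bytes, uint64_t total,
			   uint64_t avail)
{
	return used + bytes < total + avail;
}

/* Take the reservation directly if it fits (or overcommit allows it);
 * otherwise queue a ticket and either flush ourselves (priority) or
 * wake the async flusher and sleep. */
static int reserve_bytes(uint64_t *used, uint64_t total, uint64_t avail,
			 uint64_t bytes, enum flush_mode flush)
{
	if (*used + bytes <= total ||
	    can_overcommit(*used, bytes, total, avail)) {
		*used += bytes;	/* accounted as bytes_may_use */
		return 0;
	}
	if (flush == NO_FLUSH)
		return -28;	/* -ENOSPC: never queues a ticket */

	printf("%s ticket queued for %llu bytes\n",
	       flush == PRIORITY_FLUSH ? "priority" : "normal",
	       (unsigned long long)bytes);
	return -28;	/* in this toy model, assume flushing failed */
}

int main(void)
{
	uint64_t used = 900 << 20;

	printf("ret = %d\n", reserve_bytes(&used, 1024 << 20, 0,
					   256 << 20, ASYNC_FLUSH));
	return 0;
}
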
1744 * @block_rsv: block_rsv we're allocating for
1745 * @orig_bytes: number of bytes we want
1746 * @flush: whether or not we can flush to make our reservation
1779 * @bytes: number of bytes we need
1780 * @flush: how we are allowed to flush
1783 * space then we will attempt to flush space as specified by flush.
1806 /* Dump all the space infos when we abort a transaction due to ENOSPC. */
1830 /* It's df, we don't care if it's racy */ in btrfs_account_ro_block_groups_free_space()