Lines Matching full:we

70  * have an existing running transaction: we only make a new transaction
71 * once we have started to commit the old one).
74 * The journal MUST be locked. We don't perform atomic mallocs on the
75 * new transaction and we can't block without protecting against other
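
A minimal sketch of the "allocate outside the lock, install under the lock" pattern the lines above describe, assuming the journal_t fields j_state_lock and j_running_transaction; install_new_transaction() is a hypothetical stand-in for the real setup code, not a kernel function:

    /* Sketch (not the exact kernel code): allocate with GFP_NOFS before
     * taking j_state_lock, then install only if no running transaction
     * appeared in the meantime. */
    transaction_t *new_transaction;

    new_transaction = kzalloc(sizeof(*new_transaction), GFP_NOFS);
    if (!new_transaction)
            return -ENOMEM;

    write_lock(&journal->j_state_lock);
    if (!journal->j_running_transaction)
            install_new_transaction(journal, new_transaction); /* hypothetical helper */
    else
            kfree(new_transaction);  /* someone else installed one first */
    write_unlock(&journal->j_state_lock);
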
179 * We don't call jbd2_might_wait_for_commit() here as there's no in wait_transaction_switching()
202 * Wait until we can add credits for handle to the running transaction. Called
204 * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
208 * value, we need to fake out sparse so it doesn't complain about a
233 * potential buffers requested by this operation, we need to in add_transaction_credits()
240 * then start to commit it: we can then go back and in add_transaction_credits()
272 * We must therefore ensure the necessary space in the journal in add_transaction_credits()
289 /* No reservation? We are done... */ in add_transaction_credits()
294 /* We allow at most half of a transaction to be reserved */ in add_transaction_credits()
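
A rough sketch of the "at most half of a transaction reserved" cap mentioned above, assuming the j_reserved_credits and j_max_transaction_buffers fields of journal_t; the back-out helper is hypothetical and the real add_transaction_credits() additionally waits and retries:

    /* Reserved handles may take at most half of a transaction's credits. */
    int needed = atomic_add_return(rsv_blocks, &journal->j_reserved_credits);

    if (needed > journal->j_max_transaction_buffers / 2) {
            undo_reserved_credits(journal, rsv_blocks); /* hypothetical back-out */
            return 1;                                   /* caller waits and retries */
    }
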
345 * transaction early if there is a high chance we'll need it. If we in start_this_handle()
346 * guess wrong, we'll retry or free the unused transaction. in start_this_handle()
350 * If __GFP_FS is not present, then we may be being called from in start_this_handle()
351 * inside the fs writeback layer, so we MUST NOT fail. in start_this_handle()
364 * We need to hold j_state_lock until t_updates has been incremented, in start_this_handle()
379 * we allow reserved handles to proceed because otherwise commit could in start_this_handle()
406 /* We may have dropped j_state_lock - restart in that case */ in start_this_handle()
417 * We have a handle reserved, so we are allowed to join the T_LOCKED in start_this_handle()
418 * transaction and we don't have to check for transaction size in start_this_handle()
419 * and journal space. But we still have to wait while running in start_this_handle()
526 * @nblocks: number of block buffers we might modify
528 * We make sure that the transaction can guarantee at least nblocks of
529 * modified buffers in the log. We block until the log can guarantee
530 * that much space. Additionally, if rsv_blocks > 0, we also create another
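
The lines above are from the kernel-doc for jbd2_journal_start(). A minimal usage sketch of the usual handle lifecycle; the filesystem-side function my_fs_update_block() and its error handling are illustrative, not from the kernel:

    #include <linux/jbd2.h>
    #include <linux/buffer_head.h>

    static int my_fs_update_block(journal_t *journal, struct buffer_head *bh)
    {
            handle_t *handle;
            int err;

            /* Reserve credits for the one metadata block we intend to dirty. */
            handle = jbd2_journal_start(journal, 1);
            if (IS_ERR(handle))
                    return PTR_ERR(handle);

            err = jbd2_journal_get_write_access(handle, bh);
            if (!err) {
                    /* ... modify bh->b_data under the handle ... */
                    err = jbd2_journal_dirty_metadata(handle, bh);
            }

            jbd2_journal_stop(handle);
            return err;
    }
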
711 * Subtract necessary revoke descriptor blocks from handle credits. We in stop_this_handle()
740 * The scope of the GFP_NOFS context ends here, so we can restore the in stop_this_handle()
760 * credits. We preserve reserved handle if there's any attached to the
772 /* If we've had an abort of any type, don't even think about in jbd2__journal_restart()
788 * TODO: If we use READ_ONCE / WRITE_ONCE for j_commit_request we can in jbd2__journal_restart()
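
A hedged sketch of the restart pattern jbd2__journal_restart() supports: a long-running operation drops its exhausted credits and rejoins the journal with fresh ones. The credit bookkeeping on the caller side (credits_left, credits_needed, new_credits) is illustrative:

    /* Sketch: refresh credits in the middle of a long operation. */
    if (credits_left < credits_needed) {
            err = jbd2_journal_restart(handle, new_credits);
            if (err)
                    return err;     /* e.g. the journal was aborted */
            credits_left = new_credits;
    }
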
829 * jbd2_journal_free_transaction(). This can only happen when we in jbd2_journal_wait_updates()
831 * Hence we should always retrieve the new j_running_transaction in jbd2_journal_wait_updates()
884 * We have now established a barrier against other normal updates, but in jbd2_journal_lock_updates()
885 * we also need to barrier against other jbd2_journal_lock_updates() calls in jbd2_journal_lock_updates()
886 * to make sure that we serialise special journal-locked operations in jbd2_journal_lock_updates()
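
The usual caller-side pairing for the barrier described above (used, for example, around freeze-style operations); a minimal sketch:

    jbd2_journal_lock_updates(journal);   /* wait for handles, block new ones */
    /* ... perform the special journal-wide operation here ... */
    jbd2_journal_unlock_updates(journal); /* allow new handles again */
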
928 /* Fire data frozen trigger just before we copy the data */ in jbd2_freeze_jh_data()
934 * Now that the frozen data is saved off, we need to store any matching in jbd2_freeze_jh_data()
942 * is nothing we need to do. If it is already part of a prior
943 * transaction which we are still committing to disk, then we need to
944 * make sure that we do not overwrite the old copy: we do copy-out to
945 * preserve the copy going to disk. We also account the buffer against
981 /* We now hold the buffer lock so it is safe to query the buffer in do_get_write_access()
986 * Otherwise, it is journaled, and we don't expect dirty buffers in do_get_write_access()
997 * We need to clean the dirty flag and we must do it under the in do_get_write_access()
998 * buffer lock to be sure we don't race with running write-out. in do_get_write_access()
1005 * ever called for it. So we need to set jbddirty bit here to in do_get_write_access()
1037 * If the buffer is not journaled right now, we need to make sure it in do_get_write_access()
1071 * If there is already a copy-out version of this buffer, then we don't in do_get_write_access()
1085 * There is one case we have to be very careful about. If the in do_get_write_access()
1087 * and has NOT made a copy-out, then we cannot modify the buffer in do_get_write_access()
1090 * primary copy is already going to disk then we cannot do copy-out in do_get_write_access()
1103 * past that stage (here we use the fact that BH_Shadow is set under in do_get_write_access()
1105 * point we know the buffer doesn't have BH_Shadow set). in do_get_write_access()
1107 * Subtle point, though: if this is a get_undo_access, then we will be in do_get_write_access()
1109 * committed_data record after the transaction, so we HAVE to force the in do_get_write_access()
1138 * If we are about to journal a buffer, then any revoke pending on it is in do_get_write_access()
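
A very rough sketch of the copy-out decision the do_get_write_access() lines above describe; the real code also deals with BH_Shadow, the buffer and state locks, and allocation retries, so treat this purely as an outline:

    /* If the committing transaction may still write the primary copy,
     * do not modify it in place: freeze a private copy for commit. */
    if (jh->b_transaction == journal->j_committing_transaction &&
        !jh->b_frozen_data) {
            char *frozen = jbd2_alloc(bh->b_size, GFP_NOFS);

            if (!frozen)
                    return -ENOMEM;
            memcpy(frozen, bh->b_data, bh->b_size);
            jh->b_frozen_data = frozen;
    }
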
1163 * RCU protects us from dereferencing freed pages. So the checks we do in jbd2_write_access_granted()
1165 * & reallocated while we work with it. So we have to be careful. When in jbd2_write_access_granted()
1166 * we see jh attached to the running transaction, we know it must stay in jbd2_write_access_granted()
1168 * will be attached to the same bh while we run. However it can in jbd2_write_access_granted()
1170 * just after we get a pointer to it from bh. So we have to be careful in jbd2_write_access_granted()
1171 * and recheck jh still belongs to our bh before we return success. in jbd2_write_access_granted()
1188 * 1) Make sure to fetch b_bh after we did previous checks so that we in jbd2_write_access_granted()
1190 * while we were checking. Paired with implicit barrier in that path. in jbd2_write_access_granted()
1213 * because we're ``write()ing`` a buffer which is also part of a shared mapping.
1228 /* We do not want to get caught playing with fields which the in jbd2_journal_get_write_access()
1239 * (ie. getblk() returned a new buffer and we are going to populate it
1240 * manually rather than reading off disk), then we need to keep the
1242 * data. In this case, we should be able to make the assertion that
1291 * the commit finished, we've filed the buffer for in jbd2_journal_get_create_access()
1292 * checkpointing and marked it dirty. Now we are reallocating in jbd2_journal_get_create_access()
1317 * blocks which contain freed but then revoked metadata. We need in jbd2_journal_get_create_access()
1318 * to cancel the revoke in case we end up freeing it yet again in jbd2_journal_get_create_access()
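
The kernel-doc excerpt above describes jbd2_journal_get_create_access() for buffers obtained via getblk() that will be populated in memory rather than read from disk. A hedged usage sketch; the block number and the out label are illustrative:

    struct buffer_head *bh = sb_getblk(sb, blocknr);

    if (unlikely(!bh))
            return -ENOMEM;

    err = jbd2_journal_get_create_access(handle, bh);
    if (err)
            goto out;

    lock_buffer(bh);
    memset(bh->b_data, 0, bh->b_size);  /* initialize in memory, no disk read */
    set_buffer_uptodate(bh);
    unlock_buffer(bh);

    err = jbd2_journal_dirty_metadata(handle, bh);
    out:
            brelse(bh);
            return err;
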
1337 * this for freeing and allocating space, we have to make sure that we
1339 * since if we overwrote that space we would make the delete
1346 * as we know that the buffer has definitely been committed to disk.
1348 * We never need to know which transaction the committed data is part
1351 * we can discard the old committed data pointer.
1371 * Do this first --- it can drop the journal lock, so we want to in jbd2_journal_get_undo_access()
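
The get_undo_access lines above are about preserving the committed copy of a block, typically an allocation bitmap. A minimal, hedged sketch of the caller side; bitmap_bh is illustrative:

    /* Keep the last-committed bitmap contents around while we clear bits,
     * so a crash cannot expose a delete that was never committed. */
    err = jbd2_journal_get_undo_access(handle, bitmap_bh);
    if (err)
            return err;

    /* ... clear the bits for the blocks being freed in bitmap_bh->b_data ... */

    err = jbd2_journal_dirty_metadata(handle, bitmap_bh);
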
1466 * current committing transaction (in which case we should have frozen
1467 * data present for that commit). In that case, we don't relink the
1482 * We don't grab jh reference here since the buffer must be part in jbd2_journal_dirty_metadata()
1490 * This and the following assertions are unreliable since we may see jh in jbd2_journal_dirty_metadata()
1491 * in an inconsistent state unless we grab the bh_state lock. But this is in jbd2_journal_dirty_metadata()
1554 * I _think_ we're OK here with SMP barriers - a mistaken decision will in jbd2_journal_dirty_metadata()
1555 * result in this test being false, so we go in and take the locks. in jbd2_journal_dirty_metadata()
1607 /* And this case is illegal: we can't reuse another in jbd2_journal_dirty_metadata()
1631 * We can only do the bforget if there are no commits pending against the
1632 * buffer. If the buffer is dirty in the current running transaction we
1678 * The buffer's going from the transaction, so we must drop in jbd2_journal_forget()
1686 /* If we are forgetting a buffer which is already part in jbd2_journal_forget()
1687 * of this transaction, then we can just drop it from in jbd2_journal_forget()
1695 * we only want to drop a reference if this transaction in jbd2_journal_forget()
1702 * We are no longer going to journal this buffer. in jbd2_journal_forget()
1704 * important to the buffer: the delete that we are now in jbd2_journal_forget()
1706 * committing, we can satisfy the buffer's checkpoint. in jbd2_journal_forget()
1708 * So, if we have a checkpoint on the buffer, we should in jbd2_journal_forget()
1710 * we know to remove the checkpoint after we commit. in jbd2_journal_forget()
1726 * (committing) transaction, we can't drop it yet... */ in jbd2_journal_forget()
1728 /* ... but we CAN drop it from the new transaction through in jbd2_journal_forget()
1754 * transaction, we can just drop it now if it has no in jbd2_journal_forget()
1774 * The buffer has still not been written to disk, so we should in jbd2_journal_forget()
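
A hedged sketch of the jbd2_journal_forget() caller pattern the lines above describe, for a metadata block the filesystem is freeing; whether a revoke record is also needed depends on the filesystem, so it appears only as a comment:

    /* The block behind bh is being freed: drop it from this transaction. */
    err = jbd2_journal_forget(handle, bh);
    if (err)
            return err;
    /* Filesystems that may reuse the block as data often also revoke it,
     * e.g. via jbd2_journal_revoke(handle, blocknr, bh). */
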
1800 * There is not much action needed here. We just return any remaining
1802 * complication is that we need to start a commit operation if the
1852 * arrive. It doesn't cost much - we're about to run a commit in jbd2_journal_stop()
1856 * We try and optimize the sleep time against what the in jbd2_journal_stop()
1862 * join the transaction. We achieve this by measuring how in jbd2_journal_stop()
1865 * < commit time then we sleep for the delta and commit. This in jbd2_journal_stop()
1870 * to perform a synchronous write. We do this to detect the in jbd2_journal_stop()
1907 * If the handle is marked SYNC, we need to set another commit in jbd2_journal_stop()
1908 * going! We also want to force a commit if the transaction is too in jbd2_journal_stop()
1932 * committing on us and eventually disappear. So we must not in jbd2_journal_stop()
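
A short sketch of the synchronous-handle behavior mentioned above: setting h_sync before stopping the handle makes jbd2_journal_stop() start a commit and wait for it (this mirrors how ext4 issues sync handles):

    handle->h_sync = 1;              /* ask for a synchronous commit */
    err = jbd2_journal_stop(handle); /* kicks the commit and waits for it */
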
2072 /* Get reference so that buffer cannot be freed before we unlock it */ in jbd2_journal_unfile_buffer()
2092 * This function returns non-zero if we wish try_to_free_buffers()
2093 * to be called. We do this if the page is releasable by try_to_free_buffers().
2094 * We also do it if the page has locked or dirty buffers and the caller wants
2097 * This complicates JBD locking somewhat. We aren't protected by the
2098 * BKL here. We wish to remove the buffer from its committing or
2105 * buffer. So we need to lock against that. jbd2_journal_dirty_data()
2112 * cannot happen because we never reallocate freed data as metadata
2131 * We take our own ref against the journal_head here to avoid in jbd2_journal_try_to_free_buffers()
2160 * checkpoint list we need to record it on this transaction's forget list
2162 * this transaction commits. If the buffer isn't on a checkpoint list, we
2179 * We don't want to write the buffer anymore, clear the in __dispose_buffer()
2180 * bit so that we don't confuse checks in in __dispose_buffer()
2201 * i_size must be updated on disk before we start calling invalidate_folio
2206 * invariant, we can be sure that it is safe to throw away any buffers
2208 * we know that the data will not be needed.
2210 * Note however that we can *not* throw away data belonging to the
2226 * The above applies mainly to ordered data mode. In writeback mode we
2228 * particular we don't guarantee that new dirty data is flushed before
2238 * We're outside-transaction here. Either or both of j_running_transaction
2252 * buffers cannot be stolen by try_to_free_buffers as long as we are in journal_unmap_buffer()
2260 /* OK, we have data buffer in journaled mode */ in journal_unmap_buffer()
2266 * We cannot remove the buffer from checkpoint lists until the in journal_unmap_buffer()
2271 * the buffer will be lost. On the other hand we have to in journal_unmap_buffer()
2280 * Also we have to clear buffer_mapped flag of a truncated buffer in journal_unmap_buffer()
2283 * buffer_head can be reused when the file is extended again. So we end in journal_unmap_buffer()
2291 * has no checkpoint link, then we can zap it: in journal_unmap_buffer()
2292 * it's a writeback-mode buffer so we don't care in journal_unmap_buffer()
2300 /* bdflush has written it. We can drop it now */ in journal_unmap_buffer()
2319 * orphan record which we wrote for this file must have in journal_unmap_buffer()
2320 * passed into commit. We must attach this buffer to in journal_unmap_buffer()
2329 * committed. We can cleanse this buffer */ in journal_unmap_buffer()
2338 * The buffer is committing, we simply cannot touch in journal_unmap_buffer()
2339 * it. If the page is straddling i_size we have to wait in journal_unmap_buffer()
2353 * OK, buffer won't be reachable after truncate. We just clear in journal_unmap_buffer()
2370 * We are writing our own transaction's data, not any in journal_unmap_buffer()
2372 * (remember that we expect the filesystem to have set in journal_unmap_buffer()
2374 * expose the disk blocks we are discarding here.) */ in journal_unmap_buffer()
2436 /* We will potentially be playing with lists other than just the in jbd2_journal_invalidate_folio()
2492 * For metadata buffers, we track dirty bit in buffer_jbddirty in __jbd2_journal_file_buffer()
2493 * instead of buffer_dirty. We should not see a dirty bit set in __jbd2_journal_file_buffer()
2494 * here because we clear it in do_get_write_access but e.g. in __jbd2_journal_file_buffer()
2496 * so we try to gracefully handle that. in __jbd2_journal_file_buffer()
2591 * We set b_transaction here because b_next_transaction will inherit in __jbd2_journal_refile_buffer()
2612 * __jbd2_journal_refile_buffer() with necessary locking added. We take our
2613 * bh reference so that we can safely unlock bh.
2657 /* Is inode already attached where we need it? */ in jbd2_journal_file_inode()
2663 * We only ever set this variable to 1 so the test is safe. Since in jbd2_journal_file_inode()
2664 * t_need_data_flush is likely to be set, we do the test to save some in jbd2_journal_file_inode()
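
The hits above are from the helper that attaches an inode's data to the running transaction for ordered mode. Filesystems reach it through exported wrappers; a hedged caller-side sketch, assuming the jbd2_journal_inode_ranged_write() wrapper of current kernels and illustrative offset/len variables:

    /* Record that this byte range of the inode's data must be written
     * out before the transaction that maps it is allowed to commit. */
    err = jbd2_journal_inode_ranged_write(handle, jinode, offset, len);
    if (err)
            return err;
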
2706 * committing, we cannot discard the data by truncate until we have
2707 * written them. Otherwise if we crashed after the transaction with
2709 * committed, we could see stale data in block A. This function is a
2716 * avoids the race that someone writes new data and we start
2720 * happens in the same transaction as write --- we don't have to write
2734 * enough that the transaction was not committing before we started in jbd2_journal_begin_ordered_truncate()
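
A hedged caller-side sketch of jbd2_journal_begin_ordered_truncate(), matching the description above (ext4-style usage; the follow-up truncate_pagecache() step is illustrative):

    /* Make sure data beyond new_size that the committing transaction still
     * owns gets written before we throw the pages away. */
    err = jbd2_journal_begin_ordered_truncate(journal, jinode, new_size);
    if (err)
            return err;
    truncate_pagecache(inode, new_size);
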