Lines Matching full:we
23 * recover, so we don't allow failure here. Also, we allocate in a context that
24 * we don't want to be issuing transactions from, so we need to tell the
27 * We don't reserve any space for the ticket - we are going to steal whatever
28 * space we require from transactions as they commit. To ensure we reserve all
29 * the space required, we need to set the current reservation of the ticket to
30 * zero so that we know to steal the initial transaction overhead from the
42 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
62 * We can't rely on just the log item being in the CIL, we have to check
80 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt()
140 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate()
141 * counter we use to detect when the current context is nearing in xlog_cil_push_pcp_aggregate()
151 * limit threshold so we can switch to atomic counter aggregation for accurate
168 * We can race with other cpus setting cil_pcpmask. However, we've in xlog_cil_insert_pcp_aggregate()
200 * After the first stage of log recovery is done, we know where the head and
201 * tail of the log are. We need this log initialisation done before we can
204 * Here we allocate a log ticket to track space usage during a CIL push. This
205 * ticket is passed to xlog_write() directly so that we don't slowly leak log
236 * If we do this allocation within xlog_cil_insert_format_items(), it is done
238 * the memory allocation. This means that we have a potential deadlock situation
239 * under low memory conditions when we have lots of dirty metadata pinned in
240 * the CIL and we need a CIL commit to occur to free memory.
242 * To avoid this, we need to move the memory allocation outside the
249 * process, we cannot share the buffer between the transaction commit (which
252 * unreliable, but we most definitely do not want to be allocating and freeing
259 * the incoming modification. Then during the formatting of the item we can swap
260 * the active buffer with the new one if we can't reuse the existing buffer. We
262 * its size is right, otherwise we'll free and reallocate it at that point.
295 * Ordered items need to be tracked but we do not wish to write in xlog_cil_alloc_shadow_bufs()
296 * them. We need a logvec to track the object, but we do not in xlog_cil_alloc_shadow_bufs()
306 * We 64-bit align the length of each iovec so that the start of in xlog_cil_alloc_shadow_bufs()
307 * the next one is naturally aligned. We'll need to account for in xlog_cil_alloc_shadow_bufs()
310 * We also add the xlog_op_header to each region when in xlog_cil_alloc_shadow_bufs()
312 * at this point. Hence we'll need an additional number of bytes in xlog_cil_alloc_shadow_bufs()
324 * that space to ensure we can align it appropriately and not in xlog_cil_alloc_shadow_bufs()
330 * if we have no shadow buffer, or it is too small, we need to in xlog_cil_alloc_shadow_bufs()
336 * We free and allocate here as a realloc would copy in xlog_cil_alloc_shadow_bufs()
337 * unnecessary data. We don't use kvzalloc() for the in xlog_cil_alloc_shadow_bufs()
338 * same reason - we don't need to zero the data area in in xlog_cil_alloc_shadow_bufs()
390 * If there is no old LV, this is the first time we've seen the item in in xfs_cil_prepare_item()
391 * this CIL context and so we need to pin it. If we are replacing the in xfs_cil_prepare_item()
393 * buffer for later freeing. In both cases we are now switching to the in xfs_cil_prepare_item()
412 * CIL, store the sequence number on the log item so we can in xfs_cil_prepare_item()
423 * For delayed logging, we need to hold a formatted buffer containing all the
431 * guaranteed to be large enough for the current modification, but we will only
432 * use that if we can't reuse the existing lv. If we can't reuse the existing
433 * lv, then simply swap it out for the shadow lv. We don't free it - that is
436 * We don't set up region headers during this process; we simply copy the
437 * regions into the flat buffer. We can do this because we still have to do a
439 * ophdrs during the iclog write means that we can support splitting large
443 * Hence what we need to do now is rewrite the vector array to point
444 * to the copied region inside the buffer we just allocated. This allows us to
456 /* Bail out if we didn't find a log item. */ in xlog_cil_insert_format_items()
546 * as well. Remove the amount of space we added to the checkpoint ticket from
568 * We can do this safely because the context can't checkpoint until we in xlog_cil_insert_items()
569 * are done so it doesn't matter exactly how we update the CIL. in xlog_cil_insert_items()
574 * Subtract the space released by intent cancelation from the space we in xlog_cil_insert_items()
575 * consumed so that we remove it from the CIL space and add it back to in xlog_cil_insert_items()
581 * Grab the per-cpu pointer for the CIL before we start any accounting. in xlog_cil_insert_items()
582 * That ensures that we are running with pre-emption disabled and so we in xlog_cil_insert_items()
594 * We need to take the CIL checkpoint unit reservation on the first in xlog_cil_insert_items()
595 * commit into the CIL. Test the XLOG_CIL_EMPTY bit first so we don't in xlog_cil_insert_items()
596 * unnecessarily do an atomic op in the fast path here. We can clear the in xlog_cil_insert_items()
597 * XLOG_CIL_EMPTY bit as we are under the xc_ctx_lock here and that in xlog_cil_insert_items()
605 * Check if we need to steal iclog headers. atomic_read() is not a in xlog_cil_insert_items()
606 * locked atomic operation, so we can check the value before we do any in xlog_cil_insert_items()
607 * real atomic ops in the fast path. If we've already taken the CIL unit in xlog_cil_insert_items()
608 * reservation from this commit, we've already got one iclog header in xlog_cil_insert_items()
609 * space reserved so we have to account for that otherwise we risk in xlog_cil_insert_items()
612 * If the CIL is already at the hard limit, we might need more header in xlog_cil_insert_items()
614 * commit that occurs once we are over the hard limit to ensure the CIL in xlog_cil_insert_items()
617 * This can steal more than we need, but that's OK. in xlog_cil_insert_items()
648 * If we just transitioned over the soft limit, we need to in xlog_cil_insert_items()
663 * We do this here so we only need to take the CIL lock once during in xlog_cil_insert_items()
680 * If we've overrun the reservation, dump the tx details before we move in xlog_cil_insert_items()
711 * Mark all items committed and clear busy extents. We free the log vector
712 * chains in a separate pass so that we unpin the log items as quickly as
723 * If the I/O failed, we're aborting the commit and already shutdown. in xlog_cil_committed()
724 * Wake any commit waiters before aborting the log items so we don't in xlog_cil_committed()
773 * Record the LSN of the iclog we were just granted space to start writing into.
790 * The LSN we need to pass to the log items on transaction in xlog_cil_set_ctx_write_state()
792 * the commit lsn. If we use the commit record lsn then we can in xlog_cil_set_ctx_write_state()
801 * Make sure the metadata we are about to overwrite in the log in xlog_cil_set_ctx_write_state()
812 * Take a reference to the iclog for the context so that we still hold in xlog_cil_set_ctx_write_state()
820 * iclog for an entire commit record, so we can attach the context in xlog_cil_set_ctx_write_state()
821 * callbacks now. This needs to be done before we make the commit_lsn in xlog_cil_set_ctx_write_state()
831 * Now we can record the commit LSN and wake anyone waiting for this in xlog_cil_set_ctx_write_state()
865 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_order_write()
972 * Build a checkpoint transaction header to begin the journal transaction. We
976 * This is the only place we write a transaction header, so we also build the
978 * transaction header. We keep the start record in its own log vector rather
1030 * CIL item reordering compare function. We want to order in ascending ID order,
1031 * but we want to leave items with the same ID in the order they were added to
1032 * the list. This is important for operations like reflink where we log 4 order-
1033 * dependent intents in a single transaction when we overwrite an existing
1051 * the CIL. We don't need the CIL lock here because it's only needed on the
1054 * If a log item is marked with a whiteout, we do not need to write it to the
1055 * journal and so we just move them to the whiteout list for the caller to
1081 /* we don't write ordered log vectors */ in xlog_cil_build_lv_chain()
1109 * If the current sequence is the same as xc_push_seq we need to do a flush. If
1111 * flushed and we don't need to do anything - the caller will wait for it to
1115 * Hence we can allow log forces to run racily and not issue pushes for the
1116 * same sequence twice. If we get a race between multiple pushes for the same
1151 * As we are about to switch to a new, empty CIL context, we no longer in xlog_cil_push_work()
1164 * Check if we've anything to push. If there is nothing, then we don't in xlog_cil_push_work()
1165 * move on to a new sequence number and so we have to be able to push in xlog_cil_push_work()
1182 * We are now going to push this context, so add it to the committing in xlog_cil_push_work()
1183 * list before we do anything else. This ensures that anyone waiting on in xlog_cil_push_work()
1192 * waiting on. If the CIL is not empty, we get put on the committing in xlog_cil_push_work()
1194 * an empty CIL and an unchanged sequence number means we jumped out in xlog_cil_push_work()
1211 * Switch the contexts so we can drop the context lock and move out in xlog_cil_push_work()
1212 * of a shared context. We can't just go straight to the commit record, in xlog_cil_push_work()
1213 * though - we need to synchronise with previous and future commits so in xlog_cil_push_work()
1215 * that we process items during log IO completion in the correct order. in xlog_cil_push_work()
1217 * For example, if we get an EFI in one checkpoint and the EFD in the in xlog_cil_push_work()
1218 * next (e.g. due to log forces), we do not want the checkpoint with in xlog_cil_push_work()
1220 * we must strictly order the commit records of the checkpoints so in xlog_cil_push_work()
1225 * Hence we need to add this context to the committing context list so in xlog_cil_push_work()
1231 * committing list. This also ensures that we can do unlocked checks in xlog_cil_push_work()
1241 * Sort the log vector chain before we add the transaction headers. in xlog_cil_push_work()
1242 * This ensures we always have the transaction headers at the start in xlog_cil_push_work()
1249 * begin the transaction. We need to account for the space used by the in xlog_cil_push_work()
1251 * Add the lvhdr to the head of the lv chain we pass to xlog_write() so in xlog_cil_push_work()
1273 * Grab the ticket from the ctx so we can ungrant it after releasing the in xlog_cil_push_work()
1274 * commit_iclog. The ctx may be freed by the time we return from in xlog_cil_push_work()
1276 * callback run) so we can't reference the ctx after the call to in xlog_cil_push_work()
1283 * to complete before we submit the commit_iclog. We can't use state in xlog_cil_push_work()
1287 * In the latter case, if it's a future iclog and we wait on it, then we in xlog_cil_push_work()
1289 * wakeup until this commit_iclog is written to disk. Hence we use the in xlog_cil_push_work()
1290 * iclog header lsn and compare it to the commit lsn to determine if we in xlog_cil_push_work()
1301 * iclogs older than ic_prev. Hence we only need to wait in xlog_cil_push_work()
1309 * We need to issue a pre-flush so that the ordering for this in xlog_cil_push_work()
1362 * We need to push the CIL every so often so we don't cache more than we can fit in
1376 * The cil won't be empty because we are called while holding the in xlog_cil_push_background()
1377 * context lock so whatever we added to the CIL will still be there. in xlog_cil_push_background()
1382 * We are done if: in xlog_cil_push_background()
1383 * - we haven't used up all the space available yet; or in xlog_cil_push_background()
1384 * - we've already queued up a push; and in xlog_cil_push_background()
1385 * - we're not over the hard limit; and in xlog_cil_push_background()
1388 * If so, we don't need to take the push lock as there's nothing to do. in xlog_cil_push_background()
1405 * Drop the context lock now, we can't hold that if we need to sleep in xlog_cil_push_background()
1406 * because we are over the blocking threshold. The push_lock is still in xlog_cil_push_background()
1413 * If we are well over the space limit, throttle the work that is being in xlog_cil_push_background()
1438 * If the caller is performing a synchronous force, we will flush the workqueue
1443 * If the caller is performing an async push, we need to ensure that the
1444 * checkpoint is fully flushed out of the iclogs when we finish the push. If we
1448 * mechanism. Hence in this case we need to pass a flag to the push work to
1471 * If this is an async flush request, we always need to set the in xlog_cil_push_now()
1480 * If the CIL is empty or we've already pushed the sequence then in xlog_cil_push_now()
1481 * there's no more work that we need to do. in xlog_cil_push_now()
1510 * committed in the current (same) CIL checkpoint, we don't need to write either
1512 * journalled atomically within this checkpoint. As we cannot remove items from
1548 * To do this, we need to format the item, pin it in memory if required and
1549 * account for the space used by the transaction. Once we have done that we
1551 * transaction to the checkpoint context so we carry the busy extents through
1570 * Do all necessary memory allocation before we lock the CIL. in xlog_cil_commit()
1595 * This needs to be done before we drop the CIL context lock because we in xlog_cil_commit()
1597 * to disk. If we don't, then the CIL checkpoint can race with us and in xlog_cil_commit()
1598 * we can run checkpoint completion before we've updated and unlocked in xlog_cil_commit()
1640 * We only need to push if we haven't already pushed the sequence number given.
1641 * Hence the only time we will trigger a push here is if the push sequence is
1644 * We return the current commit lsn to allow the callers to determine if a
1663 * check to see if we need to force out the current context. in xlog_cil_force_seq()
1671 * See if we can find a previous sequence still committing. in xlog_cil_force_seq()
1672 * We need to wait for all previous sequence commits to complete in xlog_cil_force_seq()
1679 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_force_seq()
1704 * Hence by the time we have got here, our sequence may not have been in xlog_cil_force_seq()
1710 * Hence if we don't find the context in the committing list and the in xlog_cil_force_seq()
1714 * it means we haven't yet started the push, because if it had started in xlog_cil_force_seq()
1715 * we would have found the context on the committing list. in xlog_cil_force_seq()
1727 * We detected a shutdown in progress. We need to trigger the log force in xlog_cil_force_seq()
1729 * we are already in a shutdown state. Hence we can't return in xlog_cil_force_seq()
1731 * LSN is already stable), so we return a zero LSN instead. in xlog_cil_force_seq()