Lines Matching full:we
89 * We need to make sure the buffer pointer returned is naturally aligned for the
90 * biggest basic data type we put into it. We have already accounted for this
93 * However, this padding does not get written into the log, and hence we have to
98 * We also add space for the xlog_op_header that describes this region in the
99 * log. This prepends the data region we return to the caller to copy their data
101 * is not 8 byte aligned, we have to be careful to ensure that we align the
102 * start of the buffer such that the region we return to the caller is 8 byte
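
The alignment rule above can be modelled in user space. The 12-byte header below has the shape of an op header (deliberately not a multiple of 8); the struct and macro names are illustrative, not the kernel's:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative 12-byte op header; not the kernel's xlog_op_header. */
    struct op_header {
        uint32_t oh_tid;
        uint32_t oh_len;
        uint8_t  oh_clientid;
        uint8_t  oh_flags;
        uint16_t oh_res2;
    };

    #define ALIGN8(p)  (((p) + 7) & ~(uintptr_t)7)

    int main(void)
    {
        static char buf[256];
        /* 8-byte align the region handed to the caller, with the op
         * header packed immediately in front of it, as described. */
        char *data = (char *)ALIGN8((uintptr_t)buf + sizeof(struct op_header));
        struct op_header *oh = (struct op_header *)(data - sizeof(struct op_header));

        printf("ophdr %p, caller region %p, aligned: %d\n",
               (void *)oh, (void *)data, (uintptr_t)data % 8 == 0);
        return 0;
    }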
256 * Hence when we are woken here, it may be that the head of the in xlog_grant_head_wake()
259 * reservation we require. However, if the AIL has already in xlog_grant_head_wake()
260 * pushed to the target defined by the old log head location, we in xlog_grant_head_wake()
265 * the grant head, we need to push the AIL again to ensure the in xlog_grant_head_wake()
267 * position before we wait for the tail to move again. in xlog_grant_head_wake()
330 * path. Hence any lock will be globally hot if we take it unconditionally on
333 * As tickets are only ever moved on and off head->waiters under head->lock, we
334 * only need to take that lock if we are going to add the ticket to the queue
335 * and sleep. We can avoid taking the lock if the ticket was never added to
336 * head->waiters because the t_queue list head will be empty and we hold the
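
The lock-avoidance described here rests on one invariant: t_queue is only ever modified under head->lock, so an empty list head proves the ticket was never queued. A small user-space model (all names illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    struct list_head { struct list_head *next, *prev; };
    static void list_init(struct list_head *h) { h->next = h->prev = h; }
    static bool list_empty(const struct list_head *h) { return h->next == h; }
    static void list_del_init(struct list_head *e)
    {
        e->next->prev = e->prev;
        e->prev->next = e->next;
        list_init(e);
    }

    struct grant_head {
        pthread_mutex_t  lock;      /* globally hot: avoid when possible */
        struct list_head waiters;
    };
    struct ticket { struct list_head t_queue; };

    void ticket_dequeue(struct grant_head *head, struct ticket *tic)
    {
        /* Never added to head->waiters: skip the hot lock entirely. */
        if (list_empty(&tic->t_queue))
            return;

        pthread_mutex_lock(&head->lock);
        if (!list_empty(&tic->t_queue))   /* re-check under the lock */
            list_del_init(&tic->t_queue);
        pthread_mutex_unlock(&head->lock);
    }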
353 * logspace before us. Wake up the first waiters, if we do not wake in xlog_grant_head_check()
415 * This is a new transaction on the ticket, so we need to change the in xfs_log_regrant()
417 * the log. Just add one to the existing tid so that we can see chains in xfs_log_regrant()
442 * If we are failing, make sure the ticket doesn't have any current in xfs_log_regrant()
443 * reservations. We don't want to add this back when the ticket/ in xfs_log_regrant()
455 * When writes happen to the on-disk log, we don't subtract the length of the
457 * reservation, we prevent over-allocation problems.
499 * If we are failing, make sure the ticket doesn't have any current in xfs_log_reserve()
500 * reservations. We don't want to add this back when the ticket/ in xfs_log_reserve()
510 * space waiters so they can process the newly set shutdown state. We really
511 * don't care what order we process callbacks here because the log is shut down
512 * and so state cannot change on disk anymore. However, we cannot wake waiters
513 * until the callbacks have been processed because we may be in unmount and
514 * we must ensure that all AIL operations the callbacks perform have completed
515 * before we tear down the AIL.
517 * We avoid processing actively referenced iclogs so that we don't run callbacks
552 * If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the
555 * within the iclog. We need to ensure that the log tail does not move beyond
564 * the iclog will get zeroed on activation of the iclog after sync, so we
582 * of the tail LSN into the iclog so we guarantee that the log tail does in xlog_state_release_iclog()
583 * not move between the first time we know that the iclog needs to be in xlog_state_release_iclog()
584 * made stable and when we eventually submit it. in xlog_state_release_iclog()
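
A toy reading of the tail-LSN rule in this excerpt: sample the tail exactly once, at the first moment the iclog is known to need to be stable, and never overwrite it. Names below are hypothetical:

    #include <stdint.h>

    struct iclog_model {
        uint64_t tail_lsn;   /* 0 means not latched yet */
        int      need_fua;   /* must reach stable storage */
    };

    /*
     * Latch the current tail LSN the first time we learn the iclog must
     * be made stable; later calls must not move it, so the on-disk
     * record describes the tail as it was at that first moment.
     */
    void iclog_mark_stable(struct iclog_model *ic, uint64_t current_tail_lsn)
    {
        if (!ic->need_fua) {
            ic->need_fua = 1;
            ic->tail_lsn = current_tail_lsn;
        }
    }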
665 * Now that we have set up the log and its internal geometry in xfs_log_mount()
666 * parameters, we can validate the given log space and drop a critical in xfs_log_mount()
670 * the other log geometry constraints, so we don't have to check those in xfs_log_mount()
673 * Note: For v4 filesystems, we can't just reject the mount if the in xfs_log_mount()
678 * We can, however, reject mounts for V5 format filesystems, as the in xfs_log_mount()
704 * Initialize the AIL now that we have a log. in xfs_log_mount()
736 * Now the log has been fully initialised and we know where our in xfs_log_mount()
737 * space grant counters are, we can initialise the permanent ticket in xfs_log_mount()
758 * If we finish recovery successfully, start the background log work. If we are
759 * not doing recovery, then we have a RO filesystem and we don't need to start
775 * During the second phase of log recovery, we need iget and in xfs_log_mount_finish()
778 * of inodes before we're done replaying log items on those in xfs_log_mount_finish()
780 * so that we don't leak the quota inodes if subsequent mount in xfs_log_mount_finish()
783 * We let all inodes involved in redo item processing end up on in xfs_log_mount_finish()
784 * the LRU instead of being evicted immediately so that if we do in xfs_log_mount_finish()
787 * in log recovery failure. We have to evict the unreferenced in xfs_log_mount_finish()
788 * lru inodes after clearing SB_ACTIVE because we don't in xfs_log_mount_finish()
804 * but we do it unconditionally to make sure we're always in a clean in xfs_log_mount_finish()
824 /* Make sure the log is dead if we're returning failure. */ in xfs_log_mount_finish()
859 * is done before we tear down these buffers.
877 * have been ordered and callbacks run before we are woken here, hence
903 * Write out an unmount record using the ticket provided. We have to account for
966 * At this point, we're unmounting anyway, so there's no point in in xlog_unmount_write()
999 * We just write the magic number now since that particular field isn't
1018 * If we think the summary counters are bad, avoid writing the unmount in xfs_log_unmount_write()
1037 * To do this, we first need to shut down the background log work so it is not
1038 * trying to cover the log as we clean up. We then need to unpin all objects in
1039 * the log so we can then flush them out. Once they have completed their IO and
1040 * run the callbacks removing themselves from the AIL, we can cover the log.
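
The three-step ordering spelled out above, as a compile-able sketch; every helper below is a hypothetical stub standing in for the real machinery:

    #include <stdio.h>

    struct mount { int unused; };

    /* Hypothetical stubs; only the ordering matters. */
    static void stop_background_log_work(struct mount *mp) { (void)mp; puts("1: stop log worker"); }
    static void push_ail_and_wait(struct mount *mp)        { (void)mp; puts("2: unpin and flush AIL"); }
    static void cover_log(struct mount *mp)                { (void)mp; puts("3: write covering records"); }

    void quiesce_log(struct mount *mp)
    {
        stop_background_log_work(mp);  /* no covering races with cleanup */
        push_ail_and_wait(mp);         /* objects complete IO, leave AIL */
        cover_log(mp);                 /* AIL empty: log can be covered  */
    }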
1047 * Clear log incompat features since we're quiescing the log. Report in xfs_log_quiesce()
1067 * XBF_ASYNC flag set, so we need to use a lock/unlock pair to wait for in xfs_log_quiesce()
1089 * During unmount, we need to ensure we flush all the dirty metadata objects
1090 * from the AIL so that the log is empty before we write the unmount record to
1091 * the log. Once this is done, we can tear down the AIL and the log.
1101 * cleaning will have been skipped and so we need to wait in xfs_log_unmount()
1102 * for the iclog to complete shutdown processing before we in xfs_log_unmount()
1136 * Wake up processes waiting for log space after we have moved the log tail.
1168 * Determine if we have a transaction that has gone to disk that needs to be
1171 * we start attempting to cover the log.
1173 * Only if we are then in a state where covering is needed, the caller is
1177 * If there are any items in the AIL or CIL, then we do not want to attempt to
1178 * cover the log as we may be in a situation where there isn't log space
1181 * there's no point in running a dummy transaction at this point because we
1243 * state machine if the log requires covering. Therefore, we must call in xfs_log_cover()
1244 * this function once and use the result until we've issued an sb sync. in xfs_log_cover()
1263 * we found it. in xfs_log_cover()
1276 * We may be holding the log iclog lock upon entering this routine.
1289 * To make sure we always have a valid LSN for the log tail we keep in xlog_assign_tail_lsn_locked()
1323 * wrap the tail, we should blow up. Rather than catch this case here,
1324 * we depend on other ASSERTions in other parts of the code. XXXmiken
1326 * If reservation head is behind the tail, we have a problem. Warn about it,
1330 * shortcut invalidity asserts in this case so that we don't trigger them
1361 * The reservation head is behind the tail. In this case we just want to in xlog_space_left()
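
The free-space arithmetic these excerpts describe is circular-log math over (cycle, block) positions. A self-contained model; the head-behind-tail case warns and reports the full size, one plausible reading of the truncated comment above:

    #include <stdio.h>

    /* Free blocks in a circular log of 'size' blocks. The head may be
     * at most one cycle ahead of the tail. */
    long space_left(long size, int tail_cycle, long tail_block,
                    int head_cycle, long head_block)
    {
        if (head_cycle == tail_cycle)         /* no wrap between them */
            return size - (head_block - tail_block);
        if (head_cycle == tail_cycle + 1)     /* head has wrapped */
            return tail_block - head_block;

        /* Head behind tail: accounting is broken. Warn, and report the
         * full size so writers keep moving instead of wedging. */
        fprintf(stderr, "log space accounting error\n");
        return size;
    }

    int main(void)
    {
        printf("%ld\n", space_left(1000, 3, 700, 4, 200)); /* prints 500 */
        return 0;
    }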
1391 * Race to shutdown the filesystem if we see an error. in xlog_ioend_work()
1402 * Drop the lock to signal that we are done. Nothing references the in xlog_ioend_work()
1405 * unlock as we could race with it being freed. in xlog_ioend_work()
1415 * If the filesystem blocksize is too large, we may need to choose a
1448 * Clear the log incompat flags if we have the opportunity.
1450 * This only happens if we're about to log the second dummy transaction as part
1451 * of covering the log and we can get the log incompat feature usage lock.
1474 * Every sync period we need to unpin all items in the AIL and push them to
1475 * disk. If there is nothing dirty, then we might need to cover the log to
1493 * We cannot use an inode here for this - that will push dirty in xfs_log_worker()
1495 * will prevent log covering from making progress. Hence we in xfs_log_worker()
1599 * done this way so that we can use different sizes for machines in xlog_alloc_log()
1676 * Compute the LSN that we'd need to push the log tail towards in order to have
1733 * Push the tail of the log if we need to do so to maintain the free log space
1734 * thresholds set out by xlog_grant_push_threshold. We may need to adopt a
1735 * policy which pushes on an lsn which is further along in the log once we
1736 * reach the high water mark. In this manner, we would be creating a low water
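
In mainline XFS, xlog_grant_push_threshold computes that target as the tail plus max(the caller's need, a quarter of the log, a 256-block floor); a simplified model of the threshold calculation (numbers in main are illustrative):

    #include <stdio.h>

    #define MAX(a, b)  ((a) > (b) ? (a) : (b))

    /* Blocks past the tail the AIL should be pushed to, or 0 if enough
     * log space is already free. */
    long push_distance(long log_blocks, long free_blocks, long need_blocks)
    {
        long threshold = MAX(need_blocks, log_blocks / 4);

        threshold = MAX(threshold, 256);   /* fixed lower bound */
        if (free_blocks >= threshold)
            return 0;                      /* no push required */
        return threshold;
    }

    int main(void)
    {
        /* quarter-of-log rule wins: prints 2048 */
        printf("%ld\n", push_distance(8192, 1024, 512));
        return 0;
    }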
1881 * We lock the iclogbufs here so that we can serialise against I/O in xlog_write_iclog()
1882 * completion during unmount. We might be processing a shutdown in xlog_write_iclog()
1884 * unmount thread, and hence we need to ensure that completes before in xlog_write_iclog()
1885 * tearing down the iclogbufs. Hence we need to hold the buffer lock in xlog_write_iclog()
1891 * It would seem logical to return EIO here, but we rely on in xlog_write_iclog()
1893 * doing it here. We kick off the state machine and unlock in xlog_write_iclog()
1901 * We use REQ_SYNC | REQ_IDLE here to tell the block layer there are more in xlog_write_iclog()
1916 * For external log devices, we also need to flush the data in xlog_write_iclog()
1919 * but it *must* complete before we issue the external log IO. in xlog_write_iclog()
1921 * If the flush fails, we cannot conclude that past metadata in xlog_write_iclog()
1923 * not possible, hence we must shut down with log IO error to in xlog_write_iclog()
1942 * If this log buffer would straddle the end of the log we will have in xlog_write_iclog()
1943 * to split it up into two bios, so that we can continue at the start. in xlog_write_iclog()
1967 * We need to bump cycle number for the part of the iclog that is
2011 * fashion. Previously, we should have moved the current iclog
2015 * to save away the 1st word of each BBSIZE block into the header. We replace
2019 * we can't have part of a 512 byte block written and part not written. By
2020 * tagging each block, we will know which blocks are valid when recovering
2049 * If we have a ticket, account for the roundoff via the ticket in xlog_sync()
2051 * Otherwise, we have to move grant heads directly. in xlog_sync()
2074 /* Do we need to split this write into 2 parts? */ in xlog_sync()
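
The split is plain wrap-around arithmetic: if the buffer runs past the physical end of the log, write it as two pieces and stamp the wrapped piece with the next cycle number. A toy model (the real code also skips a cycle value that would collide with the header magic):

    #include <stdio.h>

    /* Returns the length of the first piece; any remainder restarts at
     * block 0 with the cycle number bumped. */
    long split_write(long log_size, long blkno, long len,
                     long *wrap_len, int cycle, int *wrap_cycle)
    {
        if (blkno + len <= log_size) {
            *wrap_len = 0;
            *wrap_cycle = cycle;
            return len;                        /* fits in one IO */
        }
        *wrap_len = blkno + len - log_size;    /* wrapped portion */
        *wrap_cycle = cycle + 1;               /* bump for the wrap */
        return log_size - blkno;
    }

    int main(void)
    {
        long wrap; int wcycle;
        long first = split_write(1000, 990, 30, &wrap, 7, &wcycle);
        printf("%ld + %ld @ cycle %d\n", first, wrap, wcycle); /* 10 + 20 @ 8 */
        return 0;
    }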
2300 * length. We write until we cannot fit a full record into the remaining space
2301 * and then stop. We return the log vector that is to be written that cannot
2320 /* walk the logvec, copying until we run out of space in the iclog */ in xlog_write_partial()
2328 * start recovering from the next opheader it finds. Because we in xlog_write_partial()
2334 * opheader, then we need to start afresh with a new iclog. in xlog_write_partial()
2356 /* If we wrote the whole region, move to the next. */ in xlog_write_partial()
2361 * We now have a partially written iovec, but it can span in xlog_write_partial()
2362 * multiple iclogs so we loop here. First we release the iclog in xlog_write_partial()
2363 * we currently have, then we get a new iclog and add a new in xlog_write_partial()
2364 * opheader. Then we continue copying from where we were until in xlog_write_partial()
2365 * we either complete the iovec or fill the iclog. If we in xlog_write_partial()
2366 * complete the iovec, then we increment the index and go right in xlog_write_partial()
2367 * back to the top of the outer loop. If we fill the iclog, we in xlog_write_partial()
2372 * and get a new one before returning to the outer loop. We must in xlog_write_partial()
2373 * always guarantee that we exit this inner loop with at least in xlog_write_partial()
2375 * iclog, hence we cannot just terminate the loop at the end in xlog_write_partial()
2376 * of the continuation. So we loop while there is no in xlog_write_partial()
2382 * Ensure we include the continuation opheader in the in xlog_write_partial()
2383 * space we need in the new iclog by adding that size in xlog_write_partial()
2384 * to the length we require. This continuation opheader in xlog_write_partial()
2386 * consumes hasn't been accounted to the lv we are in xlog_write_partial()
2408 * continuation. Otherwise we're going around again. in xlog_write_partial()
2444 * 2. Check whether we violate the ticket's reservation.
2451 * 3. Find out if we can fit the entire region into this iclog
2471 * we don't really know exactly how much space will be used. As a result,
2472 * we don't update ic_offset until the end when we know exactly how many
2506 * If we have a context pointer, pass it the first iclog we are in xlog_write()
2525 * We have no iclog to release, so just return in xlog_write()
2538 * We've already been guaranteed that the last writes will fit inside in xlog_write()
2540 * those writes accounted to it. Hence we do not need to update the in xlog_write()
2561 * dummy transaction, we can change state into IDLE (the second time in xlog_state_activate_iclog()
2562 * around). Otherwise we should change the state into NEED a dummy. in xlog_state_activate_iclog()
2563 * We don't need to cover the dummy. in xlog_state_activate_iclog()
2570 * We have two dirty iclogs so start over. This could also be in xlog_state_activate_iclog()
2614 * We go to NEED for any non-covering writes. We go to NEED2 if we just in xlog_covered_state()
2615 * wrote the first covering record (DONE). We go to IDLE if we just in xlog_covered_state()
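
The covering state machine these lines walk through can be written down directly. A simplified model; iclogs_changed == 1 means the only dirtied iclog was a covering (dummy) record:

    enum cover_state { COVER_IDLE, COVER_NEED, COVER_DONE, COVER_NEED2, COVER_DONE2 };

    enum cover_state next_cover_state(enum cover_state prev, int iclogs_changed)
    {
        if (iclogs_changed > 1)     /* real work hit the log: start over */
            return COVER_NEED;

        switch (prev) {
        case COVER_IDLE:
            return COVER_IDLE;      /* a lone dummy keeps us covered */
        case COVER_DONE:
            return COVER_NEED2;     /* first covering record written */
        case COVER_DONE2:
            return COVER_IDLE;      /* second covering record written */
        default:
            return COVER_NEED;
        }
    }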
2684 * transactions can be large enough to span many iclogs. We cannot change the
2687 * will prevent recovery from finding the start of the transaction. Hence we
2691 * We have to do this before we drop the icloglock to ensure we are the only one
2694 * If we are moving the last_sync_lsn forwards, we also need to ensure we kick
2696 * target is bound by the current last_sync_lsn value. Hence if we have a large
2699 * freeing space in the log. Hence once we've updated the last_sync_lsn we
2724 * Return true if we need to stop processing, false to continue to the next
2745 * Now that we have an iclog that is in the DONE_SYNC state, do in xlog_state_iodone_process_iclog()
2746 * one more check here to see if we have chased our tail around. in xlog_state_iodone_process_iclog()
2747 * If this is not the lowest lsn iclog, then we will leave it in xlog_state_iodone_process_iclog()
2759 * in the DONE_SYNC state, we skip the rest and just try to in xlog_state_iodone_process_iclog()
2768 * we ran any callbacks, indicating that we dropped the icloglock. We don't need
2858 * If we got an error, either on the first buffer, or in the case of in xlog_state_done_syncing()
2859 * split log writes, on the second, we shut down the file system and in xlog_state_done_syncing()
2869 * iclog buffer, we wake them all, one will get to do the in xlog_state_done_syncing()
2878 * If the head of the in-core log ring is not (ACTIVE or DIRTY), then we must
2879 * sleep. We wait on the flush queue on the head iclog as that should be
2881 * we will wait here and all new writes will sleep until a sync completes.
2958 * If we are the only one writing to this iclog, sync it to in xlog_state_get_iclog_space()
2959 * disk. We need to do an atomic compare and decrement here to in xlog_state_get_iclog_space()
2972 /* Do we have enough room to write the full amount in the remainder in xlog_state_get_iclog_space()
2973 * of this iclog? Or must we continue a write on the next iclog and in xlog_state_get_iclog_space()
2974 * mark this iclog as completely taken? In the case where we switch in xlog_state_get_iclog_space()
2992 * The first cnt-1 times a ticket goes through here we don't need to move the
3016 /* just return if we still have some of the pre-reserved space */ in xfs_log_ticket_regrant()
3031 * All the information we need to make a correct determination of space left
3033 * count should have been decremented to zero. We only need to deal with the
3037 * reservation can be done before we need to ask for more space. The first
3038 * one goes to fill up the first current reservation. Once we run out of
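
The t_cnt arithmetic in these lines, as a toy model: a permanent ticket pre-pays for t_cnt uses, the first t_cnt-1 regrants just consume a count, and only after that does a regrant need fresh grant space (names illustrative, details simplified):

    struct ticket_model {
        int  cnt;        /* remaining pre-reserved uses */
        long unit_res;   /* bytes per use */
        long curr_res;   /* unused bytes from the current use */
    };

    /* Bytes of new grant space a regrant must take. */
    long regrant_bytes(struct ticket_model *tic)
    {
        tic->curr_res = tic->unit_res;   /* refresh the reservation */
        if (tic->cnt > 1) {
            tic->cnt--;
            return 0;                    /* still covered by the pre-pay */
        }
        return tic->unit_res;            /* pre-pay exhausted */
    }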
3057 * If this is a permanent reservation ticket, we may be able to free in xfs_log_ticket_ungrant()
3127 * pmem) or fast async storage because we drop the icloglock to issue the IO.
3157 * we don't guarantee this data will be written out. A change from past
3160 * Basically, we try and perform an intelligent scan of the in-core logs.
3161 * If we determine there is no flushable data, we just return. There is no
3169 * We may sleep if:
3177 * b) when we return from flushing out this iclog, it is still
3204 * If the head is dirty or (active and empty), then we need to in xfs_log_force()
3207 * If the previous iclog is active or dirty we are done. There in xfs_log_force()
3208 * is nothing to sync out. Otherwise, we attach ourselves to the in xfs_log_force()
3214 /* We have exclusive access to this iclog. */ in xfs_log_force()
3224 * Someone else is still writing to this iclog, so we in xfs_log_force()
3226 * gets synced immediately as we may be waiting on it. in xfs_log_force()
3233 * The iclog we are about to wait on may contain the checkpoint pushed in xfs_log_force()
3235 * to disk yet. Like the ACTIVE case above, we need to make sure caches in xfs_log_force()
3291 * We sleep here if we haven't already slept (e.g. this is the in xlog_force_lsn()
3292 * first time we've looked at the correct iclog buf) and the in xlog_force_lsn()
3294 * is that if we are doing sync transactions here, by waiting in xlog_force_lsn()
3295 * for the previous I/O to complete, we can allow a few more in xlog_force_lsn()
3296 * transactions into this iclog before we close it down. in xlog_force_lsn()
3298 * Otherwise, we mark the buffer WANT_SYNC, and bump up the in xlog_force_lsn()
3299 * refcnt so we can release the log (which drops the ref count). in xlog_force_lsn()
3324 * ACTIVE case above, we need to make sure caches are flushed in xlog_force_lsn()
3333 * completes, so we don't need to manipulate caches here at all. in xlog_force_lsn()
3334 * We just need to wait for completion if necessary. in xlog_force_lsn()
3355 * a synchronous log force, we will wait on the iclog with the LSN returned by
3432 * We need to account for all the leadup data and trailer data in xlog_calc_unit_res()
3434 * And then we need to account for the worst case in terms of using in xlog_calc_unit_res()
3459 * the space used for the headers. If we use the iclog size, then we in xlog_calc_unit_res()
3471 * Fundamentally, this means we must pass the entire log vector to in xlog_calc_unit_res()
3480 /* add extra header reservations if we overrun */ in xlog_calc_unit_res()
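
A hedged, simplified version of the worst-case header accounting these lines describe: payload plus one op header per region, plus one record header per iclog the write might span. The constants are illustrative stand-ins, not the on-disk sizes:

    #define OPHDR_SIZE   12     /* illustrative per-region header */
    #define RECHDR_SIZE  512    /* illustrative per-iclog record header */

    long unit_reservation(long payload, int nregions, long iclog_size)
    {
        long bytes   = payload + (long)nregions * OPHDR_SIZE;
        long usable  = iclog_size - RECHDR_SIZE;
        long niclogs = (bytes + usable - 1) / usable;   /* round up */

        /* extra header reservation per iclog we might overrun into */
        return bytes + niclogs * RECHDR_SIZE;
    }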
3541 * the cycles are the same, we can't be overlapping. Otherwise, make sure that
3604 * 2. Make sure we have a good magic number
3605 * 3. Make sure we don't have magic numbers in the data
3720 * Return true if the shutdown cause was a log IO error and we actually shut the
3735 * being shut down. We need to do this first as shutting down the log in xlog_force_shutdown()
3739 * When we are in recovery, there are no transactions to flush, and in xlog_force_shutdown()
3740 * we don't want to touch the log because we don't want to perturb the in xlog_force_shutdown()
3741 * current head/tail for future recovery attempts. Hence we need to in xlog_force_shutdown()
3744 * If we are shutting down due to a log IO error, then we must avoid in xlog_force_shutdown()
3753 * set, then someone else is performing the shutdown and so we are done in xlog_force_shutdown()
3754 * here. This should never happen because we should only ever get called in xlog_force_shutdown()
3758 * cannot change once they hold the log->l_icloglock. Hence we need to in xlog_force_shutdown()
3759 * hold that lock here, even though we use the atomic test_and_set_bit() in xlog_force_shutdown()
3784 * We don't want anybody waiting for log reservations after this. That in xlog_force_shutdown()
3785 * means we have to wake up everybody queued up on reserveq as well as in xlog_force_shutdown()
3786 * writeq. In addition, we make sure in xlog_{re}grant_log_space that in xlog_force_shutdown()
3787 * we don't enqueue anything once the SHUTDOWN flag is set, and this in xlog_force_shutdown()
3844 * resets the in-core LSN. We can't validate in this mode, but in xfs_log_check_lsn()
3874 * Notify the log that we're about to start using a feature that is protected
3885 /* Notify the log that we've finished using log incompat features. */