/openbmc/qemu/docs/devel/migration/ |
H A D | compatibility.rst |
    7: When we do migration, we have two QEMU processes: the source and the
    18: Let's start with a practical example, we start with:
    36: I am going to list the number of combinations that we can have. Let's
    50: These are the easiest ones, we will not talk more about them in this
    53: Now we start with the more interesting cases. Consider the case where
    54: we have the same QEMU version on both sides (qemu-5.2) but we are using
    72: because we have the limitation that qemu-5.1 doesn't know pc-5.2. So
    77: This migration is known as newer to older. We need to make sure
    78: when we are developing 5.2 that we take care not to break
    79: migration to qemu-5.1. Notice that we can't make updates to
    [all …]
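The combinations the compatibility document enumerates follow one rule visible in the snippet: migration works only when both ends know the machine type in use. A toy model of that check (the `KNOWN_MACHINES` table is illustrative, not QEMU data):

```python
# Toy model, not QEMU code: each release supports a set of machine types, and
# a stream produced for machine type M can only be accepted by a QEMU that
# also implements M -- hence pc-5.2 cannot migrate to qemu-5.1.
KNOWN_MACHINES = {
    "qemu-5.1": {"pc-5.0", "pc-5.1"},
    "qemu-5.2": {"pc-5.0", "pc-5.1", "pc-5.2"},
}

def can_migrate(src_qemu: str, dst_qemu: str, machine: str) -> bool:
    """Both sides must implement the machine type for the stream to match."""
    return (machine in KNOWN_MACHINES.get(src_qemu, set())
            and machine in KNOWN_MACHINES.get(dst_qemu, set()))
```

This is why newer-to-older migration (qemu-5.2 to qemu-5.1) is only possible when the guest was started with a machine type the older QEMU already had.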
|
/openbmc/linux/fs/btrfs/ |
H A D | space-info.c |
    26: 1) space_info. This is the ultimate arbiter of how much space we can use.
    29: reservations we care about total_bytes - SUM(space_info->bytes_) when
    34: metadata reservation we have. You can see the comment in the block_rsv
    38: 3) btrfs_calc*_size. These are the worst case calculations we used based
    39: on the number of items we will want to modify. We have one for changing
    40: items, and one for inserting new items. Generally we use these helpers to
    46: We call into either btrfs_reserve_data_bytes() or
    47: btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
    48: num_bytes we want to reserve.
    65: Assume we are unable to simply make the reservation because we do not have
    [all …]
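The arithmetic the space-info comment names, `total_bytes - SUM(space_info->bytes_)`, can be sketched in a few lines (a simplified model with hypothetical counter names, not the btrfs implementation):

```python
# Simplified model of the space_info arbitration described above: space that
# may still be handed out is total_bytes minus the sum of the bytes_* counters.
def space_available(total_bytes, counters):
    """counters stands in for space_info->bytes_used/_reserved/_may_use etc."""
    return total_bytes - sum(counters.values())

def try_reserve(total_bytes, counters, num_bytes):
    """Mimic a reservation: on success, charge the bytes to bytes_may_use."""
    if num_bytes <= space_available(total_bytes, counters):
        counters["bytes_may_use"] += num_bytes
        return True
    return False
```

The point of the model is that every reservation path, data or metadata, is answered by the same subtraction against one shared arbiter.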
|
H A D | delalloc-space.c |
    25: We call into btrfs_reserve_data_bytes() for the user request bytes that
    26: they wish to write. We make this reservation and add it to
    27: space_info->bytes_may_use. We set EXTENT_DELALLOC on the inode io_tree
    29: make a real allocation if we are pre-allocating or doing O_DIRECT.
    32: At writepages()/prealloc/O_DIRECT time we will call into
    33: btrfs_reserve_extent() for some part or all of this range of bytes. We
    37: may allocate a smaller on disk extent than we previously reserved.
    48: This is the simplest case, we haven't completed our operation and we know
    49: how much we reserved, we can simply call
    62: We keep track of two things on a per-inode basis
    [all …]
|
/openbmc/linux/drivers/md/bcache/ |
H A D | journal.h |
    9: never spans two buckets. This means (not implemented yet) we can resize the
    15: We also keep some things in the journal header that are logically part of the
    20: rewritten when we want to move/wear-level the main journal.
    22: Currently, we don't journal BTREE_REPLACE operations - this will hopefully be
    25: moving gc we work around it by flushing the btree to disk before updating the
    35: We track this by maintaining a refcount for every open journal entry, in a
    38: zero, we pop it off - thus, the size of the fifo tells us the number of open
    41: We take a refcount on a journal entry when we add some keys to a journal
    42: entry that we're going to insert (held by struct btree_op), and then when we
    43: insert those keys into the btree the btree write we're setting up takes a
    [all …]
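The refcounted FIFO of open journal entries described above can be sketched as follows (a minimal model of the scheme, not bcache code; the class and method names are invented for illustration):

```python
from collections import deque

class Entry:
    """One open journal entry; refs counts in-flight btree inserts using it."""
    def __init__(self):
        self.refs = 0

class Journal:
    def __init__(self):
        self.open_entries = deque()   # oldest entry at the left

    def new_entry(self):
        e = Entry()
        self.open_entries.append(e)
        return e

    def get(self, e):
        e.refs += 1

    def put(self, e):
        e.refs -= 1
        # Pop fully released entries off the front, in order. An entry behind
        # a still-referenced older one must wait even if its own count is 0,
        # so len(open_entries) is exactly the number of open journal entries.
        while self.open_entries and self.open_entries[0].refs == 0:
            self.open_entries.popleft()
```

The FIFO ordering is the key property: an entry can only be reclaimed once every older entry has also drained.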
|
H A D | bset.h |
    17: We use two different functions for validating bkeys, bch_ptr_invalid and
    27: them on disk, just unnecessary work - so we filter them out when resorting
    30: We can't filter out stale keys when we're resorting, because garbage
    32: unless we're rewriting the btree node those stale keys still exist on disk.
    34: We also implement functions here for removing some number of sectors from the
    44: There could be many of them on disk, but we never allow there to be more than
    45: 4 in memory - we lazily resort as needed.
    47: We implement code here for creating and maintaining auxiliary search trees
    48: (described below) for searching an individual bset, and on top of that we
    62: Since keys are variable length, we can't use a binary search on a bset - we
    [all …]
|
/openbmc/linux/fs/xfs/ |
H A D | xfs_log_cil.c |
    23: recover, so we don't allow failure here. Also, we allocate in a context that
    24: we don't want to be issuing transactions from, so we need to tell the
    27: We don't reserve any space for the ticket - we are going to steal whatever
    28: space we require from transactions as they commit. To ensure we reserve all
    29: the space required, we need to set the current reservation of the ticket to
    30: zero so that we know to steal the initial transaction overhead from the
    42: set the current reservation to zero so we know to steal the basic (in xlog_cil_ticket_alloc())
    62: We can't rely on just the log item being in the CIL, we have to check
    80: current sequence, we're in a new checkpoint. (in xlog_item_in_current_chkpt())
    140: We're in the middle of switching cil contexts. Reset the (in xlog_cil_push_pcp_aggregate())
    [all …]
|
H A D | xfs_discard.c |
    25: We need to walk the filesystem free space and issue discards on the free
    26: space that meet the search criteria (size and location). We cannot issue
    28: still marked as busy. To serialise against extent state changes whilst we are
    29: gathering extents to trim, we must hold the AGF lock to lock out other
    32: However, we cannot just hold the AGF for the entire AG free space walk whilst
    33: we issue discards on each free space that is found. Storage devices can have
    36: extent can take a *long* time. Whilst we are doing this walk, nothing else
    37: can access the AGF, and we can stall transactions and hence the log whilst
    41: Hence we need to take a leaf from the bulkstat playbook. It takes the AGI
    47: We can't do this exactly with free space - once we drop the AGF lock, the
    [all …]
|
H A D | xfs_log_priv.h |
    74: By covering, we mean changing the h_tail_lsn in the last on-disk
    83: might include space beyond the EOF. So if we just push the EOF a
    91: system is idle. We need two dummy transactions because the h_tail_lsn
    103: we are done covering previous transactions.
    104: NEED -- logging has occurred and we need a dummy transaction
    106: DONE -- we were in the NEED state and have committed a dummy
    108: NEED2 -- we detected that a dummy transaction has gone to the
    110: DONE2 -- we committed a dummy transaction when in the NEED2 state.
    112: There are two places where we switch states:
    114: 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2.
    [all …]
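The NEED/DONE/NEED2/DONE2 covering states quoted above form a small state machine; here is a hedged sketch of it (state names come from the snippet, but the transition functions and their triggers are simplified guesses at the full comment's meaning):

```python
# Sketch of the log-covering state machine: two dummy transactions must be
# committed and reach disk, one after the other, before the log is covered.
IDLE, NEED, DONE, NEED2, DONE2 = "IDLE", "NEED", "DONE", "NEED2", "DONE2"

def on_log_activity(state):
    """Any real logging restarts the covering protocol."""
    return NEED

def on_idle_sync(state):
    """xfs_sync sees an idle log: commit a dummy transaction if one is due."""
    if state == NEED:
        return DONE
    if state == NEED2:
        return DONE2
    return state

def on_dummy_on_disk(state):
    """A committed dummy transaction has reached the on-disk log."""
    if state == DONE:
        return NEED2   # a second dummy is still required
    if state == DONE2:
        return IDLE    # both dummies written: the log is covered
    return state
```

Walking one full cycle (NEED, DONE, NEED2, DONE2, IDLE) shows why two dummy transactions are needed rather than one.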
|
H A D | xfs_log.c |
    89: We need to make sure the buffer pointer returned is naturally aligned for the
    90: biggest basic data type we put into it. We have already accounted for this
    93: However, this padding does not get written into the log, and hence we have to
    98: We also add space for the xlog_op_header that describes this region in the
    99: log. This prepends the data region we return to the caller to copy their data
    101: is not 8 byte aligned, we have to be careful to ensure that we align the
    102: start of the buffer such that the region we return to the caller is 8 byte
    256: Hence when we are woken here, it may be that the head of the (in xlog_grant_head_wake())
    259: reservation we require. However, if the AIL has already (in xlog_grant_head_wake())
    260: pushed to the target defined by the old log head location, we (in xlog_grant_head_wake())
    [all …]
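The alignment concern above, placing a header before the data region while keeping the region itself 8-byte aligned, comes down to a standard round-up. A minimal illustration of the arithmetic (not the xfs code, just the idiom it relies on):

```python
# Round a data region's start up to the next 8-byte boundary after leaving
# room for a preceding header, using the usual (x + a-1) & ~(a-1) idiom.
ALIGN = 8

def aligned_start(offset, header_len):
    """Start of the caller-visible region: header first, then pad to 8 bytes."""
    start = offset + header_len
    return (start + ALIGN - 1) & ~(ALIGN - 1)
```

The padding bytes between `offset + header_len` and the returned start are what the comment notes never get written into the log.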
|
/openbmc/linux/net/ipv4/ |
H A D | tcp_vegas.c |
    15: o We do not change the loss detection or recovery mechanisms of
    19: only every-other RTT during slow start, we increase during
    22: we use the rate at which ACKs come back as the "actual"
    24: o To speed convergence to the right rate, we set the cwnd
    25: to achieve the right ("actual") rate when we exit slow start.
    26: o To filter out the noise caused by delayed ACKs, we use the
    55: /* There are several situations when we must "re-start" Vegas:
    60: o when we send a packet and there is no outstanding
    63: In these circumstances we cannot do a Vegas calculation at the
    64: end of the first RTT, because any calculation we do is using
    [all …]
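The "expected versus actual rate" comparison these bullets refer to is the core of Vegas congestion avoidance. A sketch of the textbook rule (not the kernel implementation; alpha and beta here are segment counts and are assumptions, not values read from the file):

```python
# Textbook Vegas: compare the rate the window would achieve on an empty path
# (cwnd / base_rtt) with the measured ACK rate (cwnd / rtt); the difference,
# scaled back by base_rtt, estimates segments sitting queued in the network.
def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    expected = cwnd / base_rtt              # rate with no queueing
    actual = cwnd / rtt                     # rate actually observed via ACKs
    diff = (expected - actual) * base_rtt   # estimated queued segments
    if diff < alpha:
        return cwnd + 1                     # queue too short: grow the window
    if diff > beta:
        return cwnd - 1                     # queue too long: back off
    return cwnd                             # in the target band: hold steady
```

Because the signal is queueing delay rather than loss, the window stabilises without forcing the bottleneck buffer to overflow.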
|
/openbmc/linux/arch/powerpc/mm/nohash/ |
H A D | tlb_low_64e.S |
    91: /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
    93: /* We do the user/kernel test for the PID here along with the RW test
    95: /* We pre-test some combination of permissions to avoid double
    98: * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
    103: * writeable, we will take a new fault later, but that should be
    106: * We also move ESR_ST in _PAGE_DIRTY position
    109: * MAS1 is preset for all we need except for TID that needs to
    137: * We are entered with:
    176: /* Now we build the MAS:
    219: /* We need to check if it was an instruction miss */
    [all …]
|
/openbmc/linux/arch/arm64/kvm/hyp/nvhe/ |
H A D | tlb.c |
    22: We have two requirements: (in __tlb_switch_to_guest())
    25: CPUs, for which a dsb(DOMAIN-st) is what we need, DOMAIN (in __tlb_switch_to_guest())
    30: we trapped to EL2 so that we can mess with the MM (in __tlb_switch_to_guest())
    47: For CPUs that are affected by ARM 1319367, we need to (in __tlb_switch_to_guest())
    48: avoid a host Stage-1 walk while we have the guest's (in __tlb_switch_to_guest())
    50: We're guaranteed that the S1 MMU is enabled, so we can (in __tlb_switch_to_guest())
    62: ensuring that we always have an ISB, but not two ISBs back (in __tlb_switch_to_guest())
    90: We could do so much better if we had the VA as well. (in __kvm_tlb_flush_vmid_ipa())
    91: Instead, we invalidate Stage-2 for this IPA, and the (in __kvm_tlb_flush_vmid_ipa())
    98: We have to ensure completion of the invalidation at Stage-2, (in __kvm_tlb_flush_vmid_ipa())
    [all …]
|
/openbmc/u-boot/include/configs/ |
H A D | ti_armv7_common.h |
    9: board or even SoC common file, we define a common file to be re-used
    33: We set up defaults based on constraints from the Linux kernel, which should
    34: also be safe elsewhere. We have the default load at 32MB into DDR (for
    37: seen large trees). We say all of this must be within the first 256MB
    39: bootm_size and we only run on platforms with 256MB or more of memory.
    62: we say (for simplicity) that we have 1 bank, always, even when
    63: we have more. We always start at 0x80000000, and we place the
    64: initial stack pointer in our SRAM. Otherwise, we can define
    84: The following are general good-enough settings for U-Boot. We set a
    85: large malloc pool as we generally have a lot of DDR, and we opt for
    [all …]
|
/openbmc/linux/Documentation/filesystems/ |
H A D | xfs-delayed-logging-design.rst |
    15: We begin with an overview of transactions in XFS, followed by describing how
    16: transaction reservations are structured and accounted, and then move into how we
    18: reservations bounds. At this point we need to explain how relogging works. With
    113: individual modification is atomic, the chain is *not atomic*. If we crash half
    140: complete, we can explicitly tag a transaction as synchronous. This will trigger
    145: throughput to the IO latency limitations of the underlying storage. Instead, we
    161: available to write the modification into the journal before we start making
    164: log in the worst case. This means that if we are modifying a btree in the
    165: transaction, we have to reserve enough space to record a full leaf-to-root split
    166: of the btree. As such, the reservations are quite complex because we have to
    [all …]
|
/openbmc/u-boot/lib/libfdt/ |
H A D | fdt_region.c |
    115: /* Should we merge with previous? */ (in fdt_find_regions())
    155: The region is added if there is space, but in any case we increment the
    156: count. If permitted, and the new region overlaps the last one, we merge
    197: fdt_add_alias_regions() - Add regions covering the aliases that we want
    201: aliases are special in that we generally want to include those which
    204: In fact we want to include only aliases for those nodes still included in
    208: This function scans the aliases and adds regions for those which we want
    217: @return number of regions after processing, or -FDT_ERR_NOSPACE if we did
    218: not have enough room in the regions table for the regions we wanted to add.
    232: Find the next node so that we know where the /aliases node ends. We (in fdt_add_alias_regions())
    [all …]
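The add-with-merge behaviour described above ("if the new region overlaps the last one, we merge") can be sketched in a few lines. This is a simplified model with invented names; it omits the real function's fixed-size table and the count-even-when-full behaviour used to report -FDT_ERR_NOSPACE:

```python
# Append a (start, size) region to an ordered list, merging it into the last
# region when the two touch or overlap, as fdt_find_regions' comment describes.
def add_region(regions, start, size):
    if regions and start <= regions[-1][0] + regions[-1][1]:
        prev_start, prev_size = regions[-1]
        end = max(prev_start + prev_size, start + size)
        regions[-1] = (prev_start, end - prev_start)   # merge with previous
    else:
        regions.append((start, size))                  # disjoint: new region
    return len(regions)
```

Merging keeps the region table minimal, which matters when the table has a fixed capacity.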
|
/openbmc/linux/drivers/misc/vmw_vmci/ |
H A D | vmci_route.c |
    33: which comes from the VMX, so we know it is coming from a (in vmci_route())
    36: To avoid inconsistencies, test these once. We will test (in vmci_route())
    37: them again when we do the actual send to ensure that we do (in vmci_route())
    49: If this message already came from a guest then we (in vmci_route())
    57: We must be acting as a guest in order to send to (in vmci_route())
    63: /* And we cannot send if the source is the host context. */ (in vmci_route())
    71: then they probably mean ANY, in which case we (in vmci_route())
    87: If it is not from a guest but we are acting as a (in vmci_route())
    88: guest, then we need to send it down to the host. (in vmci_route())
    89: Note that if we are also acting as a host then this (in vmci_route())
    [all …]
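The fragments above outline a routing decision based on where a datagram came from and which roles (guest, host) the current context is playing. A hypothetical distillation of just those quoted rules (the function shape, return values, and any rule not quoted above are assumptions, not the driver's actual logic):

```python
# Hypothetical routing table built only from the quoted comment fragments:
# guests may not relay to the hypervisor, reaching the hypervisor requires
# acting as a guest, and local traffic while acting as a guest goes down to
# the host.
def route(src_is_guest, acting_as_guest, acting_as_host, dst_is_hypervisor):
    if dst_is_hypervisor:
        if src_is_guest:
            return "error"           # already came from a guest: refuse relay
        if not acting_as_guest:
            return "error"           # must be acting as a guest to send up
        return "to_hypervisor"
    if not src_is_guest and acting_as_guest:
        return "to_host"             # locally generated: send down to host
    if acting_as_host:
        return "deliver_locally"
    return "error"
```

The real function tests these conditions once up front and again at send time, as the comment notes, to avoid inconsistencies.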
|
/openbmc/linux/arch/powerpc/kexec/ |
H A D | core_64.c |
    45: Since we use the kernel fault handlers and paging code to (in machine_kexec_prepare())
    46: handle the virtual mode, we must make sure no destination (in machine_kexec_prepare())
    53: /* We also should not overwrite the tce tables */ (in machine_kexec_prepare())
    86: We rely on kexec_load to create a list that properly (in copy_segments())
    88: We will still crash if the list is wrong, but at least (in copy_segments())
    121: After this call we may not use anything allocated in dynamic (in kexec_copy_flush())
    129: we need to clear the icache for all dest pages sometime, (in kexec_copy_flush())
    146: mb(); /* make sure our irqs are disabled before we say they are */ (in kexec_smp_down())
    153: Now every CPU has IRQs off, we can clear out any pending (in kexec_smp_down())
    171: /* Make sure each CPU has at least made it to the state we need. (in kexec_prepare_cpus_wait())
    [all …]
|
/openbmc/openbmc/poky/meta/recipes-devtools/python/python3/ |
H A D | create_manifest3.py |
    5: packages only when the user needs them, which is why we split upstream python
    17: Such output will be parsed by this script; we will look for each dependency on the
    18: manifest and if we find that another package already includes it, then we will add
    19: that package as an RDEPENDS to the package we are currently checking; in case we don't
    20: find the current dependency on any other package we will add it to the current package
    24: This way we will create a new manifest from the data structure that was built during
    28: There are some caveats which we try to deal with, such as repeated files on different
    100: The JSON format doesn't allow comments so we hack the call to keep the comments using a marker
    109: First pass to get core-package functionality, because we base everything on the fact that core is…
    136: of file that we can't import (directories, binaries, configs) in which case we
    [all …]
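The manifest pass described in lines 17-20 above, find the package that already ships a dependency and record it as an RDEPENDS, otherwise keep the file locally, can be condensed as follows (the manifest shape and function name are assumptions for illustration, not the script's actual data structure):

```python
# Condensed model: for each dependency file a package pulls in, find the
# package that already includes it and add that package to our rdepends;
# files no other package claims are added to the current package itself.
def assign_dependencies(manifest, package, dep_files):
    """manifest: {pkg: {"files": set, "rdepends": set}} (hypothetical shape)."""
    entry = manifest[package]
    for dep in dep_files:
        owner = next((p for p, m in manifest.items()
                      if p != package and dep in m["files"]), None)
        if owner is not None:
            entry["rdepends"].add(owner)   # someone else ships it already
        else:
            entry["files"].add(dep)        # unclaimed: ours now
```

This is the mechanism that lets the split python3 packages pull each other in at runtime instead of duplicating files.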
|
/openbmc/linux/drivers/usb/dwc2/ |
H A D | hcd_queue.c |
    31: /* If we get a NAK, wait this long before retrying */
    120: @num_bits: The number of bits we need per period we want to reserve
    122: @interval: How often we need to be scheduled for the reservation this
    126: the interval or we return failure right away.
    127: @only_one_period: Normally we'll allow picking a start anywhere within the
    128: first interval, since we can still make all repetition
    130: here then we'll return failure if we can't fit within
    133: The idea here is that we want to schedule time for repeating events that all
    138: To keep things "simple", we'll represent our schedule with a bitmap that
    140: but does mean that we need to handle things specially (and non-ideally) if
    [all …]
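The bitmap scheduler these parameter descriptions outline, reserve `num_bits` slots at the same offset in every repetition of `interval`, can be sketched like this (a simplified model of the idea; slot granularity, the `only_one_period` option, and failure handling in the driver are richer than shown):

```python
# Find a start slot such that num_bits consecutive slots are free in every
# repetition of the interval across the bitmap, then claim them all.
# bitmap: list of bools (True = busy) covering several scheduling periods.
def schedule(bitmap, slots_per_period, num_bits, interval):
    total = len(bitmap)
    stride = interval * slots_per_period
    # As the comment says, allow picking a start anywhere in the first interval.
    for start in range(stride - num_bits + 1):
        spots = range(start, total, stride)
        if all(not bitmap[s + i]
               for s in spots for i in range(num_bits)
               if s + num_bits <= total):
            for s in spots:
                if s + num_bits <= total:
                    for i in range(num_bits):
                        bitmap[s + i] = True
            return start
    return -1   # no placement satisfies every repetition
```

Requiring all repetitions to fit before claiming anything is what makes the reservation all-or-nothing, matching "or we return failure right away".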
|
/openbmc/linux/arch/ia64/lib/ |
H A D | copy_user.S |
    8: the boundary. When reading from user space we must catch
    9: faults on loads. When writing to user space we must catch
    11: we don't need to worry about overlapping regions.
    27: - handle the case where we have more than 16 bytes and the alignment
    39: #define COPY_BREAK 16 // we do byte copy below (must be >=16)
    111: // Now we do the byte by byte loop with software pipeline
    128: // At this point we know we have more than 16 bytes to copy
    133: // The basic idea is that we copy byte-by-byte at the head so
    134: // that we can reach 8-byte alignment for both src1 and dst1.
    153: // Optimization. If dst1 is 8-byte aligned (quite common), we don't need
    [all …]
|
/openbmc/linux/drivers/gpu/drm/i915/ |
H A D | i915_request.c |
    73: We could extend the life of a context to beyond that of all (in i915_fence_get_timeline_name())
    75: or we just give them a false name. Since (in i915_fence_get_timeline_name())
    130: freed when the slab cache itself is freed, and so we would get (in i915_fence_release())
    139: We do not hold a reference to the engine here and so have to be (in i915_fence_release())
    140: very careful in what rq->engine we poke. The virtual engine is (in i915_fence_release())
    141: referenced via the rq->context and we released that ref during (in i915_fence_release())
    142: i915_request_retire(), ergo we must not dereference a virtual (in i915_fence_release())
    143: engine here. Not that we would want to, as the only consumer of (in i915_fence_release())
    148: we know that it will have been processed by the HW and will (in i915_fence_release())
    154: power-of-two we assume that rq->engine may still be a virtual (in i915_fence_release())
    [all …]
|
/openbmc/linux/fs/xfs/scrub/ |
H A D | repair.c |
    65: scrub so that we can tell userspace if we fixed the problem. (in xrep_attempt())
    83: We tried harder but still couldn't grab all the resources (in xrep_attempt())
    84: we needed to fix it. The corruption has not been fixed, (in xrep_attempt())
    90: EAGAIN tells the caller to re-scrub, so we cannot return (in xrep_attempt())
    99: Complain about unfixable problems in the filesystem. We don't log
    116: Repair probe -- userspace uses this to probe if we're willing to repair a
    142: Keep the AG header buffers locked while we roll the transaction. (in xrep_roll_ag_trans())
    143: Ensure that both AG buffers are dirty and held when we roll the (in xrep_roll_ag_trans())
    161: Roll the transaction. We still hold the AG header buffers locked (in xrep_roll_ag_trans())
    187: Keep the AG header buffers locked while we complete deferred work (in xrep_defer_finish())
    [all …]
|
/openbmc/linux/kernel/irq/ |
H A D | spurious.c |
    26: We wait here for a poller to finish.
    28: If the poll runs on this CPU, then we yell loudly and return
    32: We wait until the poller is done and then recheck disabled and
    33: action (about to be disabled). Only if it's still active, we return
    86: All handlers must agree on IRQF_SHARED, so we test just the (in try_one_irq())
    209: We need to take desc->lock here. note_interrupt() is called (in __report_bad_irq())
    210: w/o desc->lock held, but IRQ_PROGRESS set. We might race (in __report_bad_irq())
    244: /* We didn't actually handle the IRQ - see if it was misrouted? */ (in try_misrouted_irq())
    249: But for 'irqfixup == 2' we also do it for handled interrupts if (in try_misrouted_irq())
    260: Since we don't get the descriptor lock, "action" can (in try_misrouted_irq())
    [all …]
|
/openbmc/bmcweb/scripts/ |
H A D | generate_schema_collections.py |
    73: Given a root node we want to parse the tree to find all instances of a
    74: specific EntityType. This is a separate routine so that we can rewalk the
    114: Helper function which expects a NavigationProperty to be passed in. We need
    122: We don't want to actually parse this property if it's just an excerpt
    128: We don't want to aggregate JsonSchemas as well as anything under
    140: Do we need to parse this file or another file?
    148: If we contain a collection array then we don't want to add the
    149: name to the path if we're a collection schema
    156: Did we find the top level collection in the current path or
    157: did we previously find it?
    [all …]
|
/openbmc/linux/fs/jbd2/ |
H A D | transaction.c |
    70: have an existing running transaction: we only make a new transaction
    71: once we have started to commit the old one).
    74: The journal MUST be locked. We don't perform atomic mallocs on the
    75: new transaction and we can't block without protecting against other
    179: We don't call jbd2_might_wait_for_commit() here as there's no (in wait_transaction_switching())
    202: Wait until we can add credits for handle to the running transaction. Called
    204: transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
    208: value, we need to fake out sparse so it doesn't complain about a
    233: potential buffers requested by this operation, we need to (in add_transaction_credits())
    240: then start to commit it: we can then go back and (in add_transaction_credits())
    [all …]
|