| /openbmc/qemu/docs/devel/migration/ |
| compatibility.rst | 7 When we do migration, we have two QEMU processes: the source and the 18 Let's start with a practical example, we start with: 36 I am going to list the number of combinations that we can have. Let's 50 These are the easiest ones, so we will not talk more about them in this 53 Now we start with the more interesting cases. Consider the case where 54 we have the same QEMU version on both sides (qemu-5.2) but we are using 72 because we have the limitation that qemu-5.1 doesn't know pc-5.2. So 77 This migration is known as newer to older. We need to make sure 78 that while developing 5.2 we do not break 79 migration to qemu-5.1. Notice that we can't make updates to [all …]
|
| /openbmc/u-boot/include/configs/ |
| ti_armv7_common.h | 9 * board or even SoC common file, we define a common file to be re-used 33 * We set up defaults based on constraints from the Linux kernel, which should 34 * also be safe elsewhere. We have the default load at 32MB into DDR (for 37 * seen large trees). We say all of this must be within the first 256MB 39 * bootm_size and we only run on platforms with 256MB or more of memory. 62 * we say (for simplicity) that we have 1 bank, always, even when 63 * we have more. We always start at 0x80000000, and we place the 64 * initial stack pointer in our SRAM. Otherwise, we can define 84 * The following are general good-enough settings for U-Boot. We set a 85 * large malloc pool as we generally have a lot of DDR, and we opt for [all …]
|
| bur_am335x_common.h | 40 * supports X-MODEM loading via UART, and we leverage this and then use 41 * Y-MODEM to load u-boot.img, when booted over UART. We must also include 50 * we don't need to do it twice. 62 * DDR information. We say (for simplicity) that we have 1 bank, 63 * always, even when we have more. We always start at 0x80000000, 64 * and we place the initial stack pointer in our SRAM. 72 * memory) enough for full U-Boot to be loaded. We also support Falcon 74 * instead, if desired. We make use of the general SPL framework found 75 * under common/spl/. Given our generally common memory map, we set a 80 * We limit our size to the ROM-defined downloaded image area, and use the [all …]
|
| /openbmc/u-boot/lib/libfdt/ |
| fdt_region.c | 115 /* Should we merge with previous? */ in fdt_find_regions() 155 * The region is added if there is space, but in any case we increment the 156 * count. If permitted, and the new region overlaps the last one, we merge 197 * fdt_add_alias_regions() - Add regions covering the aliases that we want 201 * aliases are special in that we generally want to include those which 204 * In fact we want to include only aliases for those nodes still included in 208 * This function scans the aliases and adds regions for those which we want 217 * @return number of regions after processing, or -FDT_ERR_NOSPACE if we did 218 * not have enough room in the regions table for the regions we wanted to add. 232 * Find the next node so that we know where the /aliases node ends. We in fdt_add_alias_regions() [all …]
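The merge-with-previous rule the fdt_region.c excerpts describe can be sketched in Python. This is a hypothetical stand-in for the C helper, not its real signature; it keeps the documented behavior that the count is incremented even when the table is full, so the caller can detect the -FDT_ERR_NOSPACE condition after processing:

```python
def add_region(regions, count, offset, size, max_regions, can_merge=True):
    """Add the region (offset, size) to the region list.

    Illustrative sketch of the behaviour described in fdt_region.c: if
    permitted and the new region overlaps (or touches) the previous one,
    merge them; otherwise append when there is space.  The count is
    incremented in any case, so count > max_regions after processing
    signals that the table was too small.
    """
    if can_merge and regions and regions[-1][0] + regions[-1][1] >= offset:
        # Merge: extend the previous region to cover the new one
        prev_off, prev_size = regions[-1]
        end = max(prev_off + prev_size, offset + size)
        regions[-1] = (prev_off, end - prev_off)
        return count
    if count < max_regions:
        regions.append((offset, size))
    return count + 1
```

Merging adjacent regions keeps the table small when many requested spans are contiguous, which is the common case when covering whole subtrees of a device tree.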
|
| /openbmc/openbmc/poky/meta/recipes-devtools/python/python3/ |
| create_manifest3.py | 5 # packages only when the user needs them, which is why we split upstream python 17 # Such output will be parsed by this script, we will look for each dependency on the 18 # manifest and if we find that another package already includes it, then we will add 19 # that package as an RDEPENDS to the package we are currently checking; in case we don't 20 # find the current dependency on any other package we will add it to the current package 24 # This way we will create a new manifest from the data structure that was built during 28 # There are some caveats which we try to deal with, such as repeated files on different 100 # The JSON format doesn't allow comments so we hack the call to keep the comments using a marker 109 # First pass to get core-package functionality, because we base everything on the fact that core is… 136 # of file that we can't import (directories, binaries, configs) in which case we [all …]
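The manifest-building rule in the comments above (reuse another package's copy of a dependency via RDEPENDS rather than shipping the file twice) reduces to a small lookup. This sketch uses an illustrative data layout and function name, not the script's real API:

```python
def assign_dependency(manifest, pkg, dep_file):
    """If another package in the manifest already includes dep_file, add
    that package as an RDEPENDS of pkg; otherwise pkg must carry the file
    itself.  (Illustrative sketch, not the real create_manifest3.py code.)
    """
    for other, data in manifest.items():
        if other != pkg and dep_file in data["files"]:
            if other not in manifest[pkg]["rdepends"]:
                manifest[pkg]["rdepends"].append(other)
            return
    # No other package ships it, so the current package has to
    manifest[pkg]["files"].append(dep_file)
```

Running this once per discovered dependency yields a manifest where each file lives in exactly one package and everything else reaches it through runtime dependencies.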
|
| get_module_deps3.py | 12 # We can get a log per module, for all the dependencies that were found, but it's messy. 18 # We can get a list of the modules which are currently required to run python 19 # so we run python-core and get its modules, we then import what we need 20 # and check what modules are currently running, if we subtract them from the 21 # modules we had initially, we get the dependencies for the module we imported. 23 # We use importlib to achieve this, so we also need to know what modules importlib needs 30 # We DON'T want the path on our HOST system 78 # We handle the core package (1st pass on create_manifest.py) as a special case 82 # We know this is not the core package, so there must be a difference. 117 # Site-customize is a special case since we (OpenEmbedded) put it there manually
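The technique the comments describe, snapshot the loaded modules, import, then subtract the snapshot, looks roughly like this. The function name is ours, not the script's:

```python
import sys

def module_deps(module_name):
    """Return the set of modules newly loaded by importing module_name.

    Snapshot sys.modules, perform the import, and subtract the snapshot:
    whatever remains was pulled in by that import.  (A sketch of the
    approach get_module_deps3.py describes, not its actual code.)
    """
    before = set(sys.modules)
    __import__(module_name)
    return set(sys.modules) - before
```

Note the diff is empty if the module was already imported, which is why the real script starts from a minimal python-core interpreter state.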
|
| /openbmc/openbmc/poky/meta/recipes-devtools/gcc/ |
| libgcc-initial.inc | 4 # We need a libgcc to build glibc. Traditionally we therefore built 6 # that to build libgcc-initial which is used to build glibc which we can 9 # We were able to drop the glibc dependency from gcc-cross, with two tweaks: 13 # the headers structure has support for it. We can do this with a simple 16 # Once gcc-cross is libc independent, we can use it to build both 19 # libgcc-initial is tricky as we need to imitate the non-threaded and 20 # non-shared case. We can do that by hacking the threading mode back to 22 # libgcc-initial build. We have to create the dummy limits.h to avoid 26 # handler" capable libgcc (libgcc_eh.a). Since we know glibc doesn't need 27 # any exception handler, we can safely symlink to libgcc.a. [all …]
|
| /openbmc/bmcweb/scripts/ |
| generate_schema_collections.py | 74 # Given a root node we want to parse the tree to find all instances of a 75 # specific EntityType. This is a separate routine so that we can rewalk the 115 # Helper function which expects a NavigationProperty to be passed in. We need 123 # We don't want to actually parse this property if it's just an excerpt 129 # We don't want to aggregate JsonSchemas as well as anything under 141 # Do we need to parse this file or another file? 149 # If we contain a collection array then we don't want to add the 150 # name to the path if we're a collection schema 157 # Did we find the top level collection in the current path or 158 # did we previously find it? [all …]
|
| /openbmc/qemu/docs/system/ |
| introduction.rst | 82 For a non-x86 system where we emulate a broad range of machine types, 88 command line to launch VMs, we do want to highlight that there are a 152 In the following example we first define a ``virt`` machine which is a 153 general purpose platform for running Aarch64 guests. We enable 154 virtualisation so we can use KVM inside the emulated guest. As the 155 ``virt`` machine comes with some built-in pflash devices we give them 156 names so we can override the defaults later. 164 We then define the 4 vCPUs using the ``max`` option which gives us all 165 the Arm features QEMU is capable of emulating. We enable a more 167 algorithm. We explicitly specify TCG acceleration even though QEMU [all …]
|
| /openbmc/openbmc/meta-openembedded/meta-oe/recipes-multimedia/libid3tag/libid3tag/ |
| 10_utf16.patch | 22 + /* We were called with a bogus length. It should always 23 + * be an even number. We can deal with this in a few ways: 25 + * - Try and parse as much as we can and 26 + * - return an error if we're called again when we 27 + * already tried to parse everything we can. 28 + * - tell that we parsed it, which is what we do here.
|
| /openbmc/u-boot/board/eets/pdu001/ |
| board.h | 16 * We have two pin mux functions that must exist. First we need I2C0 to 18 * Second, if we want low-level debugging or an early UART (ie. before the 19 * pin controller driver is running), we need one of the UART ports UART0 to 21 * In case of I2C0 access we explicitly don't rely on the ROM but we could 22 * do so as we use the primary mode (mode 0) for I2C0. 25 * However we rely on the ROM to configure the pins of MMC0 (eMMC) as well
|
| /openbmc/qemu/target/hexagon/ |
| README | 2 processor (DSP). We also support Hexagon Vector eXtensions (HVX). HVX 12 We presented an overview of the project at the 2019 KVM Forum. 38 We start with scripts that generate a bunch of include files. This 109 cases this is necessary for correct execution. We can also override for 113 The gen_tcg.h file has any overrides. For example, we could write 118 C semantics are specified only with macros, we can override the default with 125 In gen_tcg.h, we use the shortcode 129 There are also cases where we brute force the TCG code generation. 134 won't fit in a TCGv or TCGv_i64, so we pass TCGv_ptr variables to pass the 158 Notice that we also generate a variable named <operand>_off for each operand of [all …]
|
| /openbmc/qemu/tests/tcg/multiarch/gdbstub/ |
| interrupt.py | 16 Check that, if the thread is resumed, we go back to the same thread when the 20 # Switch to the thread we're going to be running the test in. 26 # While there are cleaner ways to do this, we want to minimize the number of 28 # Ideally, there should be no difference between what we're doing here and 31 # For this to be safe, we only need the prologue of loop() to not have 32 # instructions that may have problems with what we're doing here. We don't 40 # Check whether the thread we're in after the interruption is the same we
|
| /openbmc/openbmc/poky/meta/recipes-devtools/python/python3-hypothesis/ |
| test_rle.py | 18 """This example demonstrates testing a run length encoding scheme. That is, we 39 # By starting off the count at zero we simplify the iteration logic 44 # If you uncomment this line this branch will be skipped and we'll 68 # We use lists of a type that should have a relatively high duplication rate, 69 # otherwise we'd almost never get any runs. 75 """If we encode a sequence and then decode the result, we should get the 78 Otherwise we've done something very wrong. 86 so we need something that tests the compression property of our encoding. 88 In this test we deliberately introduce or extend a run and assert 92 # We use assume to get a valid index into the list. We could also have used
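A minimal run-length codec with the round-trip property this test file relies on (decode(encode(x)) == x) can be written in a few lines; the function names here are ours, not necessarily those in the Hypothesis example:

```python
def encode(seq):
    """Run-length encode seq into a list of (count, value) pairs."""
    pairs = []
    for v in seq:
        if pairs and pairs[-1][1] == v:
            pairs[-1][0] += 1          # extend the current run
        else:
            pairs.append([1, v])       # start a new run
    return [(count, value) for count, value in pairs]

def decode(pairs):
    """Invert encode: expand each (count, value) pair back into a run."""
    return [value for count, value in pairs for _ in range(count)]
```

The round-trip property alone is not enough (the identity function satisfies it too), which is why the test file also checks a compression property: extending a run must not grow the encoded form.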
|
| test_binary_search.py | 21 determined by the invariants it must satisfy, so we can simply test for those 40 # Without this check we will get an index error on the next line when the 45 # Without this check we will miss the case where the insertion point should 46 # be zero: The invariant we maintain in the next section is that lo is 72 # We now know that there is a valid insertion point <= hi and there is no 74 # answer we were seeking 88 # We generate arbitrary lists and turn this into generating sorting lists 92 # We could also do it this way, but that would be a bad idea: 95 # low probability, so we are much better off post-processing values into the 96 # form we want than filtering them out. [all …]
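The invariants such a test checks (nothing before the insertion point is >= value, nothing at or after it is < value) pin down a bisect_left-style search. A minimal version for comparison, with the zero-insertion-point case handled by the loop bounds rather than a separate check:

```python
def binary_search(ls, value):
    """Smallest index i such that inserting value at i keeps ls sorted,
    assuming ls is already sorted (same contract as bisect.bisect_left)."""
    lo, hi = 0, len(ls)          # the insertion point is always in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if ls[mid] < value:
            lo = mid + 1         # invariant: everything left of lo is < value
        else:
            hi = mid             # invariant: ls[hi:] is all >= value
    return lo
```

Because the result is fully determined by those two invariants, a property-based test can assert them directly on arbitrary sorted inputs instead of comparing against a reference implementation.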
|
| /openbmc/qemu/migration/ |
| migration-stats.h | 32 * based on MigrationStats. We change to Stat64 any counter that 38 * Number of bytes that were dirty last time that we synced with 39 * the guest memory. We use that to calculate the downtime. As 40 * the remaining dirty amounts to what we know is still dirty 42 * since we synchronized bitmaps. 50 * Number of times we have synchronized guest bitmaps. 76 * Number of postcopy page faults that we have handled during 93 * Maximum amount of data we can send in a cycle. 118 * This is called when we know we start a new transfer cycle. 134 * Returns how many bytes we have transferred since the beginning of
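The downtime calculation these comments hint at reduces to dividing the bytes still dirty at the last bitmap sync by the transfer rate. A hedged sketch of the arithmetic (the real logic lives in QEMU's migration C code; these names are ours):

```python
def expected_downtime(dirty_bytes_last_sync, transfer_rate):
    """Estimate migration downtime: the time needed to send everything
    that was still dirty at the last bitmap synchronization.
    (Illustrative only, not QEMU's actual implementation.)

    dirty_bytes_last_sync: bytes dirty at the last guest-bitmap sync
    transfer_rate:         sustained migration bandwidth, bytes/second
    """
    if transfer_rate <= 0:
        return float("inf")    # no bandwidth estimate yet
    return dirty_bytes_last_sync / transfer_rate
```

Broadly, migration can enter its final stopped phase once this estimate drops below the configured maximum tolerable downtime.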
|
| /openbmc/u-boot/include/linux/mtd/ |
| flashchip.h | 13 * happens to be in - so we don't have to care whether we're on 2.2, which 63 /* We omit len for now, because when we group them together 64 we insist that they're all of the same size, and the chip size 65 is held in the next level up. If we get more versatile later, 66 it'll make it a damn sight harder to find which chip we want from 67 a given offset, and we'll want to add the per-chip length field 80 wait_queue_head_t wq; /* Wait on here when we're waiting for the chip
|
| /openbmc/phosphor-host-ipmid/docs/ |
| testing.md | 5 For the purposes of this tutorial, we'll be setting up an environment in Docker. 8 same way that others working on the project are. Finally, we can get away with 10 bot, so we have even more confidence that we're running relevant tests the way 36 We also need to put a copy of the project you want to test against here. But 37 you've probably got a copy checked out already, so we're going to make a _git 65 (`/my/dir/for/phosphor-host-ipmid`), so we'll need to mount it when we run. Open 67 find where we call `docker run`, way down at the bottom. Add an additional 159 For this tutorial, we'll be adding some basic unit testing of the struct 199 We'll create the tests in `test/sensorhandler_unittest.cpp`; go ahead and start 210 Let's plan the test cases we care about before we build any additional [all …]
|
| /openbmc/u-boot/tools/patman/ |
| get_maintainer.py | 13 If the script is found we'll return a path to it; else None. 27 """Run get_maintainer.pl on a file if we find it. 29 We look for get_maintainer.pl in the 'scripts' directory at the top of 30 git. If we find it we'll run it. If we don't find get_maintainer.pl 31 then we fail silently.
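The lookup these docstrings describe is small enough to sketch; this is an approximation of the behavior, not patman's exact code:

```python
import os

def find_get_maintainer(git_topdir):
    """Return the path to scripts/get_maintainer.pl at the top of the git
    tree if it exists, else None so the caller can fail silently.
    (Approximation of the behaviour patman's docstrings describe.)
    """
    path = os.path.join(git_topdir, "scripts", "get_maintainer.pl")
    return path if os.path.isfile(path) else None
```

Returning None instead of raising keeps maintainer lookup optional: patch submission still works in trees that don't carry the kernel-style script.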
|
| /openbmc/google-misc/subprojects/ncsid/src/platforms/nemora/portable/ |
| ncsi_fsm.h | 34 * - we cannot DHCP unless the NC-SI connection is up 35 * - we cannot do the OEM L3/L4 NC-SI configuration unless we have a valid 38 * For additional complexity we cannot get DHCP/ARP responses after the host 39 * has loaded the Mellanox NIC driver but we want to be able to periodically 40 * test the NC-SI connection regardless of whether we have network configuration 47 * matches our IP address and dedicated Nemora port so that we can receive 124 // Number of the channel we are currently operating on. (L3L4 SM only) 126 // If true, means the request was sent and we are waiting for response. 129 // The re-start and re-test delays ensure that we can flush the DMA 131 // packet that may have been received shortly after we timed out on [all …]
|
| /openbmc/qemu/docs/devel/ |
| s390-dasd-ipl.rst | 34 the real operating system is loaded into memory and we are ready to hand 49 should contain the needed flags for the operating system we have loaded. The 50 psw's instruction address will point to the location in memory where we want 68 In theory we should merely have to do the following to IPL/boot a guest 79 When we start a channel program we pass the channel subsystem parameters via an 95 it from the disk. So we need to be able to handle this case. 100 Since we are forced to live with prefetch we cannot use the very simple IPL 101 procedure we defined in the preceding section. So we compensate by doing the 112 to read the very next record which will be IPL2. But since we are not reading 113 both IPL1 and IPL2 as part of the same channel program we must manually set [all …]
|
| /openbmc/u-boot/doc/ |
| README.hwconfig | 8 via the `hwconfig' environment variable. Later we could write 17 We can implement this by integrating apt-get[3] into Das 20 2. Since we don't implement a hwconfig command, i.e. we're working 26 3. We support hwconfig options with arguments. For example, 36 internal API and then we can continue improving the user 38 command with bells and whistles. Or not adding, if we feel 46 enabling HW feature X we may need to disable Y, and turn Z
|
| /openbmc/openbmc/meta-openembedded/meta-oe/recipes-devtools/android-tools/android-tools/build/ |
| 0001-Riscv-Add-risc-v-Android-config-header.patch | 63 + * Do we have pthread_setname_np()? 66 + * the same name but different parameters, so we can't use that here.) 71 + * Do we have the futex syscall? 85 + * where we can write to /proc/<pid>/oom_adj to modify the out-of-memory 135 + * Define this if we have localtime_r(). 140 + * Define this if we have gethostbyname_r(). 145 + * Define this if we have ioctl(). 150 + * Define this if we want to use WinSock. 160 + * Define this if we have linux style epoll() 167 + * HAVE_ENDIAN_H -- have endian.h header we can include. [all …]
|
| /openbmc/dbus-sensors/src/nvidia-gpu/ |
| MctpRequester.cpp | 94 // we were handed an endpoint that can't be treated as an MCTP endpoint in processRecvMsg() 102 // we received a message that this handler doesn't support in processRecvMsg() 104 lg2::error("MctpRequester: Message type mismatch. We received {MSG}", in processRecvMsg() 120 // if the received length was greater than our buffer, we would've truncated in processRecvMsg() 128 // we received something from the device, in processRecvMsg() 129 // but we aren't able to parse iid byte in processRecvMsg() 139 // we received a request from a downstream device. in processRecvMsg() 140 // We don't currently support this, drop the packet in processRecvMsg() 151 // we've received a packet that is a response in processRecvMsg() 152 // from a device we've never talked to in processRecvMsg() [all …]
|
| /openbmc/qemu/tests/tcg/ |
| Makefile.target | 5 # These are complicated by the fact we want to build them for guest 6 # systems. This requires knowing what guests we are building and which 7 # ones we have cross-compilers for or docker images with 14 # We only include the host build system for SRC_PATH and we don't 15 # bother with the common rules.mk. We expect the following: 19 # BUILD_STATIC - are we building static binaries 27 # We also accept SPEED=slow to enable slower running tests 29 # We also expect to be in the tests build dir for the FOO-(linux-user|softmmu). 65 # to work around the pipe squashing the status we only pipe the result if 66 # we know it failed and then force failure at the end. [all …]
|