
Searched full:drained (Results 1 – 25 of 39) sorted by relevance


/openbmc/qemu/tests/unit/
test-bdrv-drain.c:475 /* End the drained section */ in test_graph_change_drain_all()
1007 * everything will be drained before we go back down the tree, but in test_co_delete_by_drain()
1215 * PB: It removes B and adds C instead. The subtree of PB is drained, which
1468 * is drained.
1483 * drained section. This means that job_exit() is scheduled
1484 * before the child has left the drained section. Its
1495 * the parent only if it really is still drained because the child is
1496 * drained.
1505 * 1. child attempts to leave its drained section. The call recurses
1508 * 2. parent-node-2 leaves the drained section. Polling in
[all …]
/openbmc/qemu/block/
graph-lock.c:37 * Many write-locked sections are also drained sections. There is a convenience
38 * wrapper bdrv_graph_wrlock_drained() which begins a drained section before
40 * if it also needs to end such a drained section. It needs to be a counter,
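The graph-lock.c snippet above describes why the "did wrlock also begin a drained section?" state must be a counter rather than a flag: the lock/unlock pairs can nest. A minimal standalone model of that bookkeeping (hypothetical names such as `wrlock_drained`, `wrlock_drain_debt`, and `drain_count` are invented for illustration; this is not QEMU's implementation):

```c
#include <assert.h>

/* Toy model: drained sections nest, and wrunlock() must end one
 * only if its matching lock call was the _drained variant. */
static int drain_count;        /* currently active drained sections */
static int wrlock_drain_debt;  /* sections opened by wrlock_drained() */

static void drained_begin(void) { drain_count++; }
static void drained_end(void)   { assert(drain_count > 0); drain_count--; }

static void wrlock_drained(void)
{
    drained_begin();           /* begin a drained section first */
    wrlock_drain_debt++;       /* remember we owe a drained_end() */
    /* ... take the write lock ... */
}

static void wrlock(void)
{
    /* caller is expected to already be drained; nothing to record */
}

static void wrunlock(void)
{
    /* ... release the write lock ... */
    if (wrlock_drain_debt > 0) {
        wrlock_drain_debt--;   /* only end sections we began ourselves */
        drained_end();
    }
}
```

With a plain flag instead of `wrlock_drain_debt`, two nested `wrlock_drained()` calls followed by two `wrunlock()` calls would end only one of the two drained sections.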
stream.c:76 * bdrv_set_backing_hd() requires that all block nodes are drained. Drain in stream_prepare()
throttle-groups.c:227 * it's being drained. Skip the round-robin search and return tgm in next_throttle_token()
638 /* Requests must have been drained */ in throttle_group_detach_aio_context()
block-backend.c:1301 /* Are we currently in a drained section? */
1317 * waits for us to enqueue ourselves before it can leave the drained in blk_wait_while_drained()
2365 * which creates a drained section. Therefore, incrementing such a BB's
2738 * attaching to a BlockDriverState that is drained. Use child instead. */ in blk_root_drained_begin()
mirror.c:756 * mirror_top_bs from now on, so keep it drained. */ in mirror_exit_common()
1212 /* Must be zero because we are drained */ in mirror_run()
1985 * know the job is drained, and the vcpus are stopped, so no write in mirror_start_job()
/openbmc/qemu/tests/qemu-iotests/
032:4 # Test that AIO requests are drained before an image is closed. This used
/openbmc/qemu/
block.c:2447 * new_bs drained when calling bdrv_replace_child_tran() is not a in bdrv_replace_child_abort()
2470 * Both @child->bs and @new_bs (if non-NULL) must be drained. @new_bs must be
2471 * kept drained until the transaction is completed.
2930 * If @new_bs is non-NULL, the parent of @child must already be drained through
2942 * If we want to change the BdrvChild to point to a drained node as its new in bdrv_replace_child_noperm()
2943 * child->bs, we need to make sure that its new parent is drained, too. In in bdrv_replace_child_noperm()
2954 * currently drained. in bdrv_replace_child_noperm()
2957 * this case, we obviously never need to consider the case of a drained in bdrv_replace_child_noperm()
2986 * If the parent was drained through this BdrvChild previously, but new_bs in bdrv_replace_child_noperm()
2987 * is not drained, allow requests to come in only after the new node has in bdrv_replace_child_noperm()
[all …]
/openbmc/qemu/include/block/
graph-lock.h:117 * Similar to bdrv_graph_wrlock, but will begin a drained section before
128 * Also ends the drained section if bdrv_graph_wrlock_drained() was used to lock
block_int-common.h:404 * All block nodes must be drained.
414 * All block nodes must be drained.
934 * will be drained separately, so the drain only needs to be propagated to
1010 * Must be called with the affected block nodes drained.
1062 * True if the parent of this child has been drained by this BdrvChild
1066 * child is entering or leaving a drained section.
blockjob.h:141 * All block nodes must be drained.
block-io.h:411 * ignore the drain request because they will be drained separately (used for
/openbmc/qemu/tests/qemu-iotests/tests/
iothreads-commit-active:68 # drained together with the other ones), but on the same iothread
/openbmc/qemu/backends/
rng-random.c:55 /* We've drained all requests, the fd handler can be reset. */ in entropy_available()
/openbmc/u-boot/arch/arm/lib/
relocate.S:122 * On xscale, icache must be invalidated and write buffers drained,
/openbmc/qemu/python/qemu/machine/
console_socket.py:119 assert not flags, "Cannot pass flags to recv() in drained mode"
/openbmc/qemu/python/qemu/qmp/
util.py:35 Utility function to ensure a StreamWriter is *fully* drained.
/openbmc/qemu/docs/devel/
multiple-iothreads.rst:131 ``bdrv_drained_begin()`` and ``bdrv_drained_end()``, thus creating a "drained
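The docs hit above names the bracket pattern: I/O is quiesced between ``bdrv_drained_begin()`` and ``bdrv_drained_end()``, and the begin side waits until in-flight requests reach zero. A self-contained toy model of that contract (all names here are invented for illustration; real QEMU polls the event loop instead of asserting):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a "drained section": while quiesced, new requests are
 * refused, and a section may only begin once in-flight I/O is zero. */
static int quiesce_count;  /* nesting level of drained sections */
static int in_flight;      /* requests currently being processed */

static bool submit_request(void)
{
    if (quiesce_count > 0) {
        return false;      /* drained: request refused */
    }
    in_flight++;
    /* ... the request's work would run here ... */
    in_flight--;
    return true;
}

static void drained_begin(void)
{
    quiesce_count++;
    /* real code polls the event loop here until in_flight == 0 */
    assert(in_flight == 0);
}

static void drained_end(void)
{
    assert(quiesce_count > 0);
    quiesce_count--;
}
```

The counter (rather than a bool) is what makes nested drained sections safe: an inner ``drained_end()`` cannot accidentally resume I/O that an outer section still expects to be quiesced.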
/openbmc/phosphor-dbus-interfaces/yaml/org/freedesktop/UPower/
Device.interface.yaml:170 "Amount of energy being drained from the source, measured in W. If
/openbmc/qemu/hw/char/
pl011.c:212 * Because the real hardware TX fifo is time-drained at the frame in pl011_loopback_tx()
215 * that could be full at times while being drained at software in pl011_loopback_tx()
/openbmc/qemu/system/
runstate.c:985 * We must cancel all block jobs while the block layer is drained, in qemu_cleanup()
988 * Begin the drained section after vm_shutdown() to avoid requests being in qemu_cleanup()
/openbmc/qemu/rust/hw/char/pl011/src/
device.rs:333 // Because the real hardware TX fifo is time-drained at the frame in loopback_tx()
336 // that could be full at times while being drained at software in loopback_tx()
/openbmc/phosphor-host-ipmid/test/message/
payload.cpp:140 // only the first whole byte should have been 'drained' into p.raw in TEST()
/openbmc/qemu/include/qemu/
main-loop.h:192 * bytes changes outside the event loop (e.g. because a vcpu thread drained the
/openbmc/phosphor-host-ipmid/include/ipmid/
message.hpp:219 * @param wholeBytesOnly - if true, only the whole bytes will be drained
