.. SPDX-License-Identifier: GPL-2.0

VMbus
=====
VMbus is a software construct provided by Hyper-V to guest VMs. It
consists of a control path and common facilities used by synthetic
devices that Hyper-V presents to guest VMs. The control path is
used to offer synthetic devices to the guest VM and, in some cases,
to rescind those devices. The common facilities include software
channels for communicating between the device driver in the guest VM
and the synthetic device implementation that is part of Hyper-V, and
signaling primitives to allow Hyper-V and the guest to interrupt
each other.

VMbus is modeled in Linux as a bus, with the expected /sys/bus/vmbus
entry in a running Linux guest. The VMbus driver (drivers/hv/vmbus_drv.c)
establishes the VMbus control path with the Hyper-V host, then
registers itself as a Linux bus driver. It implements the standard
bus functions for adding and removing devices to/from the bus.

Most synthetic devices offered by Hyper-V have a corresponding Linux
device driver.
These devices include:

* SCSI controller
* NIC
* Graphics frame buffer
* Keyboard
* Mouse
* PCI device pass-thru
* Heartbeat
* Time Sync
* Shutdown
* Memory balloon
* Key/Value Pair (KVP) exchange with Hyper-V
* Hyper-V online backup (a.k.a. VSS)

Guest VMs may have multiple instances of the synthetic SCSI
controller, synthetic NIC, and PCI pass-thru devices. Other
synthetic devices are limited to a single instance per VM. Not
listed above are a small number of synthetic devices offered by
Hyper-V that are used only by Windows guests and for which Linux
does not have a driver.

Hyper-V uses the terms "VSP" and "VSC" in describing synthetic
devices. "VSP" refers to the Hyper-V code that implements a
particular synthetic device, while "VSC" refers to the driver for
the device in the guest VM. For example, the Linux driver for the
synthetic NIC is referred to as "netvsc" and the Linux driver for
the synthetic SCSI controller is "storvsc". These drivers contain
functions with names like "storvsc_connect_to_vsp".

VMbus channels
--------------
An instance of a synthetic device uses VMbus channels to communicate
between the VSP and the VSC. Channels are bi-directional and used
for passing messages. Most synthetic devices use a single channel,
but the synthetic SCSI controller and synthetic NIC may use multiple
channels to achieve higher performance and greater parallelism.

Each channel consists of two ring buffers. These are classic ring
buffers from a university data structures textbook. If the read
and write pointers are equal, the ring buffer is considered to be
empty, so a full ring buffer always has at least one byte unused.
The "in" ring buffer is for messages from the Hyper-V host to the
guest, and the "out" ring buffer is for messages from the guest to
the Hyper-V host. In Linux, the "in" and "out" designations are as
viewed by the guest side. The ring buffers are memory that is
shared between the guest and the host, and they follow the standard
paradigm where the memory is allocated by the guest, with the list
of GPAs that make up the ring buffer communicated to the host. Each
ring buffer consists of a header page (4 Kbytes) with the read and
write indices and some control flags, followed by the memory for the
actual ring. The size of the ring is determined by the VSC in the
guest and is specific to each synthetic device.
The list of GPAs
making up the ring is communicated to the Hyper-V host over the
VMbus control path as a GPA Descriptor List (GPADL). See function
vmbus_establish_gpadl().

Each ring buffer is mapped into contiguous Linux kernel virtual
space in three parts: 1) the 4 Kbyte header page, 2) the memory
that makes up the ring itself, and 3) a second mapping of the memory
that makes up the ring itself. Because (2) and (3) are contiguous
in kernel virtual space, the code that copies data to and from the
ring buffer need not be concerned with ring buffer wrap-around.
Once a copy operation has completed, the read or write index may
need to be reset to point back into the first mapping, but the
actual data copy does not need to be broken into two parts. This
approach also allows complex data structures to be easily accessed
directly in the ring without handling wrap-around.

On arm64 with page sizes > 4 Kbytes, the header page must still be
passed to Hyper-V as a 4 Kbyte area. But the memory for the actual
ring must be aligned to PAGE_SIZE and have a size that is a multiple
of PAGE_SIZE so that the duplicate mapping trick can be done. Hence
a portion of the header page is unused and not communicated to
Hyper-V. This case is handled by vmbus_establish_gpadl().
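The wrap-around-free copy enabled by the duplicate mapping can be
illustrated in user space. The sketch below simulates the trick with
an in-memory file mapped twice, back to back; it is only an analogy
(the kernel builds the duplicate mapping from the ring's pages in
kernel virtual space), and all names here are invented for the
illustration:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Map "size" bytes of an in-memory file twice, back to back, so that
 * a plain memcpy() starting near the end of the ring runs straight
 * into the second mapping instead of needing explicit wrap-around
 * handling. "size" must be a multiple of the system page size.
 */
static uint8_t *ring_map_doubled(size_t size)
{
    int fd = memfd_create("ring-demo", 0);

    if (fd < 0 || ftruncate(fd, size) < 0)
        return NULL;

    /* Reserve 2 * size of address space, then map the same file
     * pages into both halves. A store through either half is
     * visible through the other.
     */
    uint8_t *base = mmap(NULL, 2 * size, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    if (mmap(base, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED ||
        mmap(base + size, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
        return NULL;
    close(fd);
    return base;
}
```

A copy that starts 6 bytes before the end of the ring can then be a
single memcpy(); the bytes that run past the end land at the start
of the ring, and only the write index needs to be folded back into
the first mapping afterwards.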

Hyper-V enforces a limit on the aggregate amount of guest memory
that can be shared with the host via GPADLs. This limit ensures
that a rogue guest can't force the consumption of excessive host
resources. For Windows Server 2019 and later, this limit is
approximately 1280 Mbytes. For versions prior to Windows Server
2019, the limit is approximately 384 Mbytes.

VMbus messages
--------------
All VMbus messages have a standard header that includes the message
length, the offset of the message payload, some flags, and a
transactionID. The portion of the message after the header is
unique to each VSP/VSC pair.

Messages follow one of two patterns:

* Unidirectional: Either side sends a message and does not
  expect a response message
* Request/response: One side (usually the guest) sends a message
  and expects a response

The transactionID (a.k.a. "requestID") is for matching requests &
responses. Some synthetic devices allow multiple requests to be
in-flight simultaneously, so the guest specifies a transactionID
when sending a request. Hyper-V sends back the same transactionID
in the matching response.

Messages passed between the VSP and VSC are control messages.
For
example, a message sent from the storvsc driver might be "execute
this SCSI command". If a message also implies some data transfer
between the guest and the Hyper-V host, the actual data to be
transferred may be embedded with the control message, or it may be
specified as a separate data buffer that the Hyper-V host will
access as a DMA operation. The former case is used when the size of
the data is small and the cost of copying the data to and from the
ring buffer is minimal. For example, time sync messages from the
Hyper-V host to the guest contain the actual time value. When the
data is larger, a separate data buffer is used. In this case, the
control message contains a list of GPAs that describe the data
buffer. For example, the storvsc driver uses this approach to
specify the data buffers to/from which disk I/O is done.

Three functions exist to send VMbus messages:

1. vmbus_sendpacket(): Control-only messages and messages with
   embedded data -- no GPAs
2. vmbus_sendpacket_pagebuffer(): Message with list of GPAs
   identifying data to transfer. An offset and length are
   associated with each GPA so that multiple discontinuous areas
   of guest memory can be targeted.
3. vmbus_sendpacket_mpb_desc(): Message with list of GPAs
   identifying data to transfer.
   A single offset and length is
   associated with the list of GPAs. The GPAs must describe a
   single logical area of guest memory to be targeted.

Historically, Linux guests have trusted Hyper-V to send well-formed
and valid messages, and Linux drivers for synthetic devices did not
fully validate messages. With the introduction of processor
technologies that fully encrypt guest memory and that allow the
guest to not trust the hypervisor (AMD SEV-SNP, Intel TDX), trusting
the Hyper-V host is no longer a valid assumption. The drivers for
VMbus synthetic devices are being updated to fully validate any
values read from memory that is shared with Hyper-V, which includes
messages from VMbus devices. To facilitate such validation,
messages read by the guest from the "in" ring buffer are copied to a
temporary buffer that is not shared with Hyper-V. Validation is
performed in this temporary buffer without the risk of Hyper-V
maliciously modifying the message after it is validated but before
it is used.

VMbus interrupts
----------------
VMbus provides a mechanism for the guest to interrupt the host when
the guest has queued new messages in a ring buffer. The host
expects that the guest will send an interrupt only when an "out"
ring buffer transitions from empty to non-empty.
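A minimal sketch of that rule, using a hypothetical out_ring
structure rather than the real shared ring-buffer header (the actual
decision in Linux also consults flags such as the host's
interrupt-mask, which are omitted here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical guest-side view of one "out" ring buffer. */
struct out_ring {
    uint32_t read_index;   /* advanced by the host (consumer) */
    uint32_t write_index;  /* advanced by the guest (producer) */
    uint32_t size;         /* ring size in bytes */
};

/*
 * Account for a write of "len" bytes (the payload copy itself is
 * omitted) and report whether the host must be interrupted: only a
 * write that moves the ring from empty (read == write) to non-empty
 * qualifies.
 */
static bool ring_write_needs_signal(struct out_ring *r, uint32_t len)
{
    bool was_empty = (r->read_index == r->write_index);

    r->write_index = (r->write_index + len) % r->size;
    return was_empty;
}
```

A second write while the ring is still non-empty returns false: the
host is assumed to still be draining the ring, so no interrupt is
needed.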
If the guest sends
interrupts at other times, the host deems such interrupts to be
unnecessary. If a guest sends an excessive number of unnecessary
interrupts, the host may throttle that guest by suspending its
execution for a few seconds to prevent a denial-of-service attack.

Similarly, the host will interrupt the guest when it sends a new
message on the VMbus control path, or when a VMbus channel "in" ring
buffer transitions from empty to non-empty. Each CPU in the guest
may receive VMbus interrupts, so they are best modeled as per-CPU
interrupts in Linux. This model works well on arm64 where a single
per-CPU IRQ is allocated for VMbus. Since x86/x64 lacks support for
per-CPU IRQs, an x86 interrupt vector is statically allocated (see
HYPERVISOR_CALLBACK_VECTOR) across all CPUs and explicitly coded to
call the VMbus interrupt service routine. These interrupts are
visible in /proc/interrupts on the "HYP" line.

The guest CPU that a VMbus channel will interrupt is selected by the
guest when the channel is created, and the host is informed of that
selection. VMbus devices are broadly grouped into two categories:

1. "Slow" devices that need only one VMbus channel. These devices
   (such as keyboard, mouse, heartbeat, and timesync) generate
   relatively few interrupts.
   Their VMbus channels are all
   assigned to interrupt the VMBUS_CONNECT_CPU, which is always
   CPU 0.

2. "High speed" devices that may use multiple VMbus channels for
   higher parallelism and performance. These devices include the
   synthetic SCSI controller and synthetic NIC. Their VMbus
   channel interrupts are assigned to CPUs that are spread out
   among the available CPUs in the VM so that interrupts on
   multiple channels can be processed in parallel.

The assignment of VMbus channel interrupts to CPUs is done in the
function init_vp_index(). This assignment is done outside of the
normal Linux interrupt affinity mechanism, so the interrupts are
neither "unmanaged" nor "managed" interrupts.

The CPU that a VMbus channel will interrupt can be seen in
/sys/bus/vmbus/devices/<deviceGUID>/channels/<channelRelID>/cpu.
When running on later versions of Hyper-V, the CPU can be changed
by writing a new value to this sysfs entry. Because the interrupt
assignment is done outside of the normal Linux affinity mechanism,
there are no entries in /proc/irq corresponding to individual
VMbus channel interrupts.

An online CPU in a Linux guest may not be taken offline if it has
VMbus channel interrupts assigned to it.
Any such channel
interrupts must first be manually reassigned to another CPU as
described above. When no channel interrupts are assigned to the
CPU, it can be taken offline.

When a guest CPU receives a VMbus interrupt from the host, the
function vmbus_isr() handles the interrupt. It first checks for
channel interrupts by calling vmbus_chan_sched(), which looks at a
bitmap set up by the host to determine which channels have pending
interrupts on this CPU. If multiple channels have pending
interrupts for this CPU, they are processed sequentially. When all
channel interrupts have been processed, vmbus_isr() checks for and
processes any message received on the VMbus control path.

The VMbus channel interrupt handling code is designed to work
correctly even if an interrupt is received on a CPU other than the
CPU assigned to the channel. Specifically, the code does not use
CPU-based exclusion for correctness. In normal operation, Hyper-V
will interrupt the assigned CPU. But when the CPU assigned to a
channel is being changed via sysfs, the guest doesn't know exactly
when Hyper-V will make the transition. The code must work correctly
even if there is a time lag before Hyper-V starts interrupting the
new CPU. See comments in target_cpu_store().
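The bitmap scan performed by vmbus_chan_sched() can be sketched as
follows. A single 64-bit word stands in for the real host-provided
bitmap, and the helper name and collect-into-an-array shape are
invented for the illustration (the real code invokes each channel's
callback as it goes):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: the host sets one bit per channel with a pending
 * interrupt. Walk the bitmap from the lowest set bit upward,
 * recording each pending channel's relID in order; the caller
 * would then service those channels sequentially.
 */
static int sched_pending_channels(uint64_t pending, int *relids, int max)
{
    int handled = 0;

    while (pending && handled < max) {
        relids[handled++] = __builtin_ctzll(pending); /* lowest set bit */
        pending &= pending - 1;                       /* clear that bit */
    }
    return handled; /* number of channels to process */
}
```

For a bitmap of 0x29 (bits 0, 3, and 5 set), the walk yields relIDs
0, 3, and 5 in that order, matching the sequential processing
described above.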

VMbus device creation/deletion
------------------------------
Hyper-V and the Linux guest have a separate message-passing path
that is used for synthetic device creation and deletion. This
path does not use a VMbus channel. See vmbus_post_msg() and
vmbus_on_msg_dpc().

The first step is for the guest to connect to the generic
Hyper-V VMbus mechanism. As part of establishing this connection,
the guest and Hyper-V agree on a VMbus protocol version they will
use. This negotiation allows newer Linux kernels to run on older
Hyper-V versions, and vice versa.

The guest then tells Hyper-V to "send offers". Hyper-V sends an
offer message to the guest for each synthetic device that the VM
is configured to have. Each VMbus device type has a fixed GUID
known as the "class ID", and each VMbus device instance is also
identified by a GUID. The offer message from Hyper-V contains
both GUIDs to uniquely (within the VM) identify the device.
There is one offer message for each device instance, so a VM with
two synthetic NICs will get two offer messages with the NIC
class ID. The ordering of offer messages can vary from boot-to-boot
and must not be assumed to be consistent in Linux code.
Offer
messages may also arrive long after Linux has initially booted
because Hyper-V supports adding devices, such as synthetic NICs,
to running VMs. A new offer message is processed by
vmbus_process_offer(), which indirectly invokes
vmbus_add_channel_work().

Upon receipt of an offer message, the guest identifies the device
type based on the class ID, and invokes the correct driver to set up
the device. Driver/device matching is performed using the standard
Linux mechanism.

The device driver probe function opens the primary VMbus channel to
the corresponding VSP. It allocates guest memory for the channel
ring buffers and shares the ring buffer with the Hyper-V host by
giving the host a list of GPAs for the ring buffer memory. See
vmbus_establish_gpadl().

Once the ring buffer is set up, the device driver and VSP exchange
setup messages via the primary channel. These messages may include
negotiating the device protocol version to be used between the Linux
VSC and the VSP on the Hyper-V host. The setup messages may also
include creating additional VMbus channels, which are somewhat
mis-named as "sub-channels" since they are functionally
equivalent to the primary channel once they are created.

Finally, the device driver may create entries in /dev as with
any device driver.
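Matching an offer to a driver by class ID can be sketched as a table
lookup. The structure and function below are illustrative only (real
matching goes through the driver core via the VMbus bus match
callback, which compares 16-byte guid_t values rather than strings);
the two GUIDs shown are the class IDs used for the synthetic NIC and
synthetic SCSI controller:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch: each guest driver advertises the class ID(s) it handles;
 * an incoming offer is matched against that table. GUIDs are
 * abbreviated to strings here for readability.
 */
struct vmbus_drv_entry {
    const char *class_id; /* fixed per VMbus device type */
    const char *driver;   /* guest VSC that handles it */
};

static const struct vmbus_drv_entry drivers[] = {
    { "f8615163-df3e-46c5-913f-f2d2f965ed0e", "netvsc" },  /* NIC */
    { "ba6163d9-04a1-4d29-b605-72e2ffb1dc7f", "storvsc" }, /* SCSI */
};

static const char *match_offer(const char *offer_class_id)
{
    for (size_t i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++)
        if (strcmp(drivers[i].class_id, offer_class_id) == 0)
            return drivers[i].driver;
    return NULL; /* no guest driver: the offer is not bound */
}
```

An offer whose class ID matches no registered driver simply never
binds, which is how the Windows-only synthetic devices mentioned
earlier behave in a Linux guest.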

The Hyper-V host can send a "rescind" message to the guest to
remove a device that was previously offered. Linux drivers must
handle such a rescind message at any time. Rescinding a device
invokes the device driver "remove" function to cleanly shut
down the device and remove it. Once a synthetic device is
rescinded, neither Hyper-V nor Linux retains any state about
its previous existence. Such a device might be re-added later,
in which case it is treated as an entirely new device. See
vmbus_onoffer_rescind().