Searched hist:"5995a68a6272e4e8f4fe4de82cdc877e650fe8be" (Results 1 – 2 of 2) sorted by relevance
/openbmc/linux/drivers/xen/ |
H A D | xlate_mmu.c | diff 5995a68a6272e4e8f4fe4de82cdc877e650fe8be Tue May 05 10:54:12 CDT 2015 Julien Grall <julien.grall@citrix.com> xen/privcmd: Add support for Linux 64KB page granularity
The hypercall interface (as well as the toolstack) always uses 4KB page granularity. When the toolstack asks to map a series of guest PFNs in a batch, it expects the pages to be mapped contiguously in its virtual memory.
When Linux uses 64KB page granularity, the privcmd driver has to map multiple Xen PFNs into a single Linux page.
Note that this solution works for any Linux page granularity that is a multiple of 4KB.
Signed-off-by: Julien Grall <julien.grall@citrix.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
|
H A D | privcmd.c | diff 5995a68a6272e4e8f4fe4de82cdc877e650fe8be Tue May 05 10:54:12 CDT 2015 Julien Grall <julien.grall@citrix.com> xen/privcmd: Add support for Linux 64KB page granularity
The hypercall interface (as well as the toolstack) always uses 4KB page granularity. When the toolstack asks to map a series of guest PFNs in a batch, it expects the pages to be mapped contiguously in its virtual memory.
When Linux uses 64KB page granularity, the privcmd driver has to map multiple Xen PFNs into a single Linux page.
Note that this solution works for any Linux page granularity that is a multiple of 4KB.
Signed-off-by: Julien Grall <julien.grall@citrix.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
|
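
For illustration, a minimal standalone C sketch of the arithmetic behind this commit: it assumes a 64KB Linux page (LINUX_PAGE_SHIFT = 16) and Xen's fixed 4KB frames, and simply prints which 4KB Xen guest frame backs each 4KB slot of one Linux page. The macro names, the starting gfn value, and the program itself are hypothetical and chosen only for the example; this is not the actual privcmd.c or xlate_mmu.c code.

    /* Minimal sketch, not kernel code: shows how many 4KB Xen frames
     * back one 64KB Linux page and which guest frame lands in each slot. */
    #include <stdio.h>

    #define XEN_PAGE_SHIFT     12                        /* Xen always uses 4KB frames */
    #define XEN_PAGE_SIZE      (1UL << XEN_PAGE_SHIFT)
    #define LINUX_PAGE_SHIFT   16                        /* assumed 64KB Linux pages */
    #define LINUX_PAGE_SIZE    (1UL << LINUX_PAGE_SHIFT)
    #define XEN_PFN_PER_PAGE   (LINUX_PAGE_SIZE / XEN_PAGE_SIZE)

    int main(void)
    {
            unsigned long gfn = 0x1000;  /* hypothetical first guest frame of the batch */
            unsigned long i;

            /* A single 64KB Linux page must be backed by 16 consecutive 4KB
             * Xen frames so the batch appears contiguous to the caller. */
            for (i = 0; i < XEN_PFN_PER_PAGE; i++)
                    printf("Linux page slot %2lu -> Xen gfn 0x%lx (offset 0x%lx)\n",
                           i, gfn + i, i * XEN_PAGE_SIZE);

            return 0;
    }

With 4KB Linux pages the ratio is 1:1 and the loop degenerates to a single iteration, which is why the change only has an effect on configurations (such as arm64 built with 64KB pages) where the Linux page size exceeds Xen's 4KB granularity.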