Searched hist:"465bee1da82e43f18d10c43cc7566d0284ad13a9" (Results 1 – 5 of 5) sorted by relevance
/openbmc/qemu/block/ |
qapi.c | diff 465bee1da82e43f18d10c43cc7566d0284ad13a9 Sat May 17 17:58:19 CDT 2014 Peter Lieven <pl@kamp.de> block: optimize zero writes with bdrv_write_zeroes
This patch tries to optimize zero write requests by automatically using bdrv_write_zeroes when the format supports it.
This significantly speeds up file system initialization and should also speed up any zero-write test used to benchmark backend storage performance.
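A minimal standalone sketch of the idea described above, with invented names (submit_write, backend_write, backend_write_zeroes and detect_zeroes_mode are hypothetical, not QEMU APIs; QEMU's real zero check is buffer_is_zero() and the actual change lives in block.c): if zero detection is enabled and the payload turns out to be all zeroes, the request is routed to a write_zeroes path instead of a plain write.

    /* Illustrative sketch only -- not the actual QEMU code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum detect_zeroes_mode { DETECT_ZEROES_OFF, DETECT_ZEROES_ON, DETECT_ZEROES_UNMAP };

    /* True iff the buffer contains only zero bytes (QEMU's buffer_is_zero()
     * does this with a vectorized implementation; this is the naive version). */
    static bool buf_is_zero(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (buf[i]) {
                return false;
            }
        }
        return true;
    }

    /* Stub backends standing in for the format/protocol drivers. */
    static void backend_write(const uint8_t *buf, size_t len)
    {
        printf("plain write of %zu bytes\n", len);
    }

    static void backend_write_zeroes(size_t len, bool may_unmap)
    {
        printf("write_zeroes of %zu bytes (may_unmap=%d)\n", len, may_unmap);
    }

    /* Core of the optimization: all-zero payloads take the cheap write_zeroes
     * path; in "unmap" mode the backend may additionally deallocate the range. */
    static void submit_write(const uint8_t *buf, size_t len, enum detect_zeroes_mode mode)
    {
        if (mode != DETECT_ZEROES_OFF && buf_is_zero(buf, len)) {
            backend_write_zeroes(len, mode == DETECT_ZEROES_UNMAP);
        } else {
            backend_write(buf, len);
        }
    }

    int main(void)
    {
        uint8_t zeroes[4096] = {0};
        uint8_t data[4096];
        memset(data, 0xab, sizeof(data));

        submit_write(zeroes, sizeof(zeroes), DETECT_ZEROES_UNMAP); /* -> write_zeroes */
        submit_write(data, sizeof(data), DETECT_ZEROES_UNMAP);     /* -> plain write  */
        return 0;
    }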
I ran the following two tests on my internal SSD with a 50G QCOW2 container and on attached iSCSI storage.
a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
QCOW2          [off]      [on]      [unmap]
-----
runtime:       14secs     1.1secs   1.1secs
filesize:      937M       18M       18M

iSCSI          [off]      [on]      [unmap]
----
runtime:       9.3s       0.9s      0.9s
b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
QCOW2          [off]      [on]      [unmap]
-----
runtime:       246secs    18secs    18secs
filesize:      51G        192K      192K
throughput:    203M/s     2.3G/s    2.3G/s

iSCSI*         [off]      [on]      [unmap]
----
runtime:       8mins      45secs    33secs
throughput:    106M/s     1.2G/s    1.6G/s
allocated:     100%       100%      0%

* The storage was connected via a 1Gbit interface. It seems to handle writing zeroes via WRITESAME16 internally very fast.
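The [off] / [on] / [unmap] columns refer to the three zero-detection modes. Since the series also touches blockdev.c and qemu-options.hx, the mode is selected per drive; this corresponds to the detect-zeroes drive option, so a run of test b) with unmapping enabled would look roughly like the following (image path, interface and the rest of the command line are placeholders, and unmap also requires discard=unmap):

    qemu-system-x86_64 ... -drive file=test.qcow2,format=qcow2,if=virtio,discard=unmap,detect-zeroes=unmap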
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
/openbmc/qemu/include/block/ |
block_int.h | diff 465bee1da82e43f18d10c43cc7566d0284ad13a9 Sat May 17 17:58:19 CDT 2014 Peter Lieven <pl@kamp.de> block: optimize zero writes with bdrv_write_zeroes
|
/openbmc/qemu/ |
blockdev.c | diff 465bee1da82e43f18d10c43cc7566d0284ad13a9 Sat May 17 17:58:19 CDT 2014 Peter Lieven <pl@kamp.de> block: optimize zero writes with bdrv_write_zeroes
|
qemu-options.hx | diff 465bee1da82e43f18d10c43cc7566d0284ad13a9 Sat May 17 17:58:19 CDT 2014 Peter Lieven <pl@kamp.de> block: optimize zero writes with bdrv_write_zeroes
|
block.c | diff 465bee1da82e43f18d10c43cc7566d0284ad13a9 Sat May 17 17:58:19 CDT 2014 Peter Lieven <pl@kamp.de> block: optimize zero writes with bdrv_write_zeroes
|