History log of /openbmc/linux/block/blk-map.c (Results 176 – 186 of 186)
Revision    Date    Author    Comments
Revision tags: v2.6.25, v2.6.25-rc9
# f18573ab 11-Apr-2008 FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>

block: move the padding adjustment to blk_rq_map_sg

blk_rq_map_user adjusts bi_size of the last bio. It breaks the rule
that req->data_len (the true data length) is equal to sum(bio). It
broke the scsi command completion code.

commit e97a294ef6938512b655b1abf17656cf2b26f709 was introduced to fix
the above issue. However, the partial completion code doesn't work
with it. The commit is also a layer violation (scsi mid-layer should
not know about the block layer's padding).

This patch moves the padding adjustment to blk_rq_map_sg (suggested by
James). The padding works like the drain buffer. This patch breaks the
rule that req->data_len is equal to sum(sg); however, the drain buffer
already broke it. So this patch just restores the rule that
req->data_len is equal to sum(bio) without breaking anything new.

Now when a low level driver needs padding, blk_rq_map_user and
blk_rq_map_user_iov guarantee there's enough room for padding.
blk_rq_map_sg can safely extend the last entry of a scatter list.

blk_rq_map_sg must extend the last entry of a scatter list only for a
request that got through bio_copy_user_iov. This patch introduces a
new REQ_COPY_USER flag.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
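
A minimal sketch of the blk_rq_map_sg adjustment described above, assuming the
REQ_COPY_USER flag and q->dma_pad_mask from this series and the rq->extra_len
field from the earlier padding patches; the helper name is hypothetical and
this is not the verbatim kernel hunk:

    #include <linux/blkdev.h>
    #include <linux/scatterlist.h>

    /* 'sg' is the last scatterlist entry just built for 'rq'.  Only a
     * request that went through the copy path (REQ_COPY_USER) is known
     * to have room behind its data, so only then is the entry grown to
     * cover the padding. */
    static void pad_last_sg_entry(struct request_queue *q, struct request *rq,
                                  struct scatterlist *sg)
    {
            if ((rq->cmd_flags & REQ_COPY_USER) &&
                (rq->data_len & q->dma_pad_mask)) {
                    unsigned int pad = (q->dma_pad_mask & ~rq->data_len) + 1;

                    sg->length += pad;     /* extend the mapped area */
                    rq->extra_len += pad;  /* account for the extension */
            }
    }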



# afdc1a78 11-Apr-2008 FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>

block: add bio_copy_user_iov support to blk_rq_map_user_iov

With this patch, blk_rq_map_user_iov uses bio_copy_user_iov when a low
level driver needs padding or a buffer in sg_iovec isn't aligned. That
is, it uses temporary kernel buffers instead of mapping user pages
directly.

When a LLD needs padding, later blk_rq_map_sg needs to extend the last
entry of a scatter list. bio_copy_user_iov guarantees that there is
enough space for padding by using temporary kernel buffers instead of
user pages.

blk_rq_map_user_iov needs buffers in sg_iovec to be aligned. The
comment in blk_rq_map_user_iov indicates that drivers/scsi/sg.c also
needs buffers in sg_iovec to be aligned. Actually, drivers/scsi/sg.c
works with unaligned buffers in sg_iovec (it always uses temporary
kernel buffers).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
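
A rough sketch of the copy-versus-map decision described above (the helper
name is made up, and the exact conditions checked in blk_rq_map_user_iov may
differ slightly):

    #include <linux/blkdev.h>
    #include <scsi/sg.h>

    /* Returns nonzero when the request must be bounced through temporary
     * kernel buffers (bio_copy_user_iov) rather than mapping the user
     * pages directly (bio_map_user_iov). */
    static int need_copy_user_iov(struct request_queue *q,
                                  struct sg_iovec *iov, int iov_count)
    {
            int i;

            if (q->dma_pad_mask)            /* LLD wants padding room */
                    return 1;

            for (i = 0; i < iov_count; i++) {
                    unsigned long uaddr = (unsigned long)iov[i].iov_base;

                    if (uaddr & queue_dma_alignment(q))
                            return 1;       /* unaligned user buffer */
            }
            return 0;
    }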



Revision tags: v2.6.25-rc8, v2.6.25-rc7, v2.6.25-rc6, v2.6.25-rc5, v2.6.25-rc4
# 56d94a37 04-Mar-2008 Harvey Harrison <harvey.harrison@gmail.com>

block: fix shadowed variable warning in blk-map.c

Introduced between 2.6.25-rc2 and -rc3
block/blk-map.c:154:14: warning: symbol 'bio' shadows an earlier one
block/blk-map.c:110:13: originally declared here

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>



# bec41940 04-Mar-2008 Adrian Bunk <bunk@kernel.org>

unexport blk_rq_map_user_iov

This patch removes the unused export of blk_rq_map_user_iov.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>



# e3790c7d 04-Mar-2008 Tejun Heo <htejun@gmail.com>

block: separate out padding from alignment

Block layer alignment was used for two different purposes - memory
alignment and padding. This causes problems in lower layers because
drivers which only require memory alignment end up with an adjusted
rq->data_len. Separate out padding so that padding occurs if and only
if the driver explicitly requests it.

Tomo: restore the code to update bio in blk_rq_map_user introduced by
the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa according to
padding alignment.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
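
A short sketch of how a low-level driver configures the two knobs
independently after this change (the setup function is hypothetical;
blk_queue_dma_alignment and the blk_queue_dma_pad helper added here are the
real interfaces):

    #include <linux/blkdev.h>

    static void example_init_queue(struct request_queue *q)
    {
            /* Buffers handed to the controller must be 4-byte aligned
             * in memory ... */
            blk_queue_dma_alignment(q, 0x3);

            /* ... and, for this driver only, transfer lengths should
             * additionally be padded up to a 4-byte multiple. */
            blk_queue_dma_pad(q, 0x3);
    }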



# 7a85f889 04-Mar-2008 FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>

block: restore the meaning of rq->data_len to the true data length

The meaning of rq->data_len was changed from the true data length to
the length of the allocated buffer. This breaks SG_IO friends and
bsg. This patch restores the meaning of rq->data_len to the true data
length and adds rq->extra_len to store an extended length (due to
drain buffer and padding).

This patch also removes the code to update bio in blk_rq_map_user
introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa.
The commit adjusts bio according to memory alignment
(queue_dma_alignment). However, memory alignment is NOT padding
alignment. This adjustment also breaks SG_IO friends and bsg. Padding
alignment needs to be fixed in a proper way (by a separate patch).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
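
Under the restored rule, a minimal sketch of the two sizes a lower layer can
derive from a request (the helper names are hypothetical):

    #include <linux/blkdev.h>

    /* True transfer size, as seen by userland and upper layers. */
    static unsigned int true_data_len(struct request *rq)
    {
            return rq->data_len;
    }

    /* Size of the area actually covered by the bios/scatterlist, i.e.
     * including any drain buffer and padding the block layer added. */
    static unsigned int mapped_area_len(struct request *rq)
    {
            return rq->data_len + rq->extra_len;
    }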



Revision tags: v2.6.25-rc3
# 6b00769f 19-Feb-2008 Tejun Heo <htejun@gmail.com>

block: add request->raw_data_len

With padding and draining moved into it, the block layer may now
extend requests as directed by queue parameters, so a request now has
two sizes: the original request size and the extended size, which
matches the size of the area pointed to by the bios and later by the
sgs. The latter size is what lower layers are primarily interested in
when allocating, filling up DMA tables and setting up the controller.

Both padding and draining extend the data area to accommodate
controller characteristics. As any controller which speaks SCSI can
handle underflows, feeding it a larger data area is safe.

So, this patch makes the primary data length field, request->data_len,
indicate the size of the full data area, and adds a separate length
field, request->raw_data_len, for the unmodified request size. The
latter is used for reporting to higher layers (userland) and wherever
the original request size should be fed to the controller or device.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
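
A sketch of the two request fields as introduced here, with paraphrased
comments (not the full struct request definition; note that 7a85f889 further
up in this log later inverts the convention so that data_len again holds the
true length):

    struct request {
            /* ... */
            unsigned int raw_data_len;  /* original request size, reported
                                         * back to higher layers (userland) */
            unsigned int data_len;      /* extended size: what the bios and
                                         * later the sgs actually cover,
                                         * including padding and drain */
            /* ... */
    };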



# 40b01b9b 19-Feb-2008 Tejun Heo <htejun@gmail.com>

block: update bio according to DMA alignment padding

DMA start address and transfer size alignment for PC requests are
achieved using bio_copy_user() instead of bio_map_user(). This works
because bio_copy_user() always uses full pages and block DMA alignment
isn't allowed to go over PAGE_SIZE.

However, the implementation didn't update the last bio of the request
to make this padding visible to lower layers. This patch makes
blk_rq_map_user() extend the last bio such that it includes the
padding area and the size of the area pointed to by the request is
properly aligned.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
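
A condensed sketch of the adjustment described above, assuming the
2.6.25-era bio->bi_size field and queue_dma_alignment(); the helper name is
hypothetical and this is not the verbatim blk_rq_map_user() hunk:

    #include <linux/blkdev.h>
    #include <linux/bio.h>

    /* After mapping 'len' bytes via the copy path, grow the tail bio so
     * the page padding it already allocated becomes part of the request
     * as seen by lower layers. */
    static void pad_tail_bio(struct request_queue *q, struct request *rq,
                             unsigned long len)
    {
            if (len & queue_dma_alignment(q)) {
                    unsigned int pad =
                            (queue_dma_alignment(q) & ~len) + 1;

                    rq->biotail->bi_size += pad;
                    rq->data_len += pad;
            }
    }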



# 84e9e03c 18-Feb-2008 Jens Axboe <jens.axboe@oracle.com>

block: make blk_rq_map_user() clear ->bio if it unmaps it

That way the interface is symmetric, and calling blk_rq_unmap_user()
on the request won't oops.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
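
A sketch of the caller pattern this makes safe (function name and flow are
illustrative, not taken from the tree):

    #include <linux/blkdev.h>

    static int map_issue_unmap(struct request_queue *q, struct request *rq,
                               void __user *ubuf, unsigned long len)
    {
            int ret = blk_rq_map_user(q, rq, ubuf, len);

            if (!ret) {
                    /* ... issue the request and wait for completion ... */
            }

            /* Runs on both paths: after a failed map, rq->bio is now
             * NULL rather than a dangling pointer, so this unmap is a
             * no-op instead of an oops. */
            blk_rq_unmap_user(rq->bio);
            return ret;
    }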



Revision tags: v2.6.25-rc2, v2.6.25-rc1
# 6728cb0e 31-Jan-2008 Jens Axboe <jens.axboe@oracle.com>

block: make core bits checkpatch compliant

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>


# 86db1e29 29-Jan-2008 Jens Axboe <jens.axboe@oracle.com>

block: continue ll_rw_blk.c splitup

Adds files for barrier handling, rq execution, io context handling,
mapping data to requests, and queue settings.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>


