block: fix queue bounce limit setting
Impact: don't set GFP_DMA in q->bounce_gfp unnecessarily

All DMA address limits are expressed in terms of the last addressable
unit (byte or page) instead of one plus that. However, when determining
bounce_gfp for 64bit machines in blk_queue_bounce_limit(), it compares
the specified limit against 0x100000000UL to determine whether it's
below 4G, ending up falsely setting GFP_DMA in q->bounce_gfp.

As the DMA zone is very small on x86_64, this makes larger SG_IO
transfers very eager to trigger the OOM killer. Fix it. While at it,
rename the parameter to @dma_mask for clarity and convert the comment
to proper winged style.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
This commit is contained in:
parent 25636e282f
commit cd0aca2d55
1 changed file with 11 additions and 9 deletions
@@ -156,26 +156,28 @@ EXPORT_SYMBOL(blk_queue_make_request);
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
- * @dma_addr: bus address limit
+ * @dma_mask: the maximum address the device can handle
  *
  * Description:
  *    Different hardware can have different requirements as to what pages
  *    it can do I/O directly to. A low level driver can call
  *    blk_queue_bounce_limit to have lower memory pages allocated as bounce
- *    buffers for doing I/O to pages residing above @dma_addr.
+ *    buffers for doing I/O to pages residing above @dma_mask.
  **/
-void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
+void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
 {
-	unsigned long b_pfn = dma_addr >> PAGE_SHIFT;
+	unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
 	int dma = 0;
 
 	q->bounce_gfp = GFP_NOIO;
 #if BITS_PER_LONG == 64
-	/* Assume anything <= 4GB can be handled by IOMMU.
-	   Actually some IOMMUs can handle everything, but I don't
-	   know of a way to test this here. */
-	if (b_pfn < (min_t(u64, 0x100000000UL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
+	/*
+	 * Assume anything <= 4GB can be handled by IOMMU.  Actually
+	 * some IOMMUs can handle everything, but I don't know of a
+	 * way to test this here.
+	 */
+	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
 		dma = 1;
 	q->bounce_pfn = max_low_pfn;
 #else