NVMe: Correct sg list setup in nvme_map_user_pages

Our SG list was constructed to always fill the entire first page, even
if that was more than the length of the I/O.  This is probably harmless,
but some IOMMUs might do something bad.

Correcting the first call to sg_set_page() made it look a lot closer to
the sg_set_page() in the loop, so fold the first call to sg_set_page()
into the loop.

Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com>
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
commit d0ba1e497b
parent 6413214c5d
Matthew Wilcox 2011-09-13 17:01:39 -04:00

@@ -996,11 +996,11 @@ static int nvme_map_user_pages(struct nvme_dev *dev, int write,
 	sg = kcalloc(count, sizeof(*sg), GFP_KERNEL);
 	sg_init_table(sg, count);
-	sg_set_page(&sg[0], pages[0], PAGE_SIZE - offset, offset);
-	length -= (PAGE_SIZE - offset);
-	for (i = 1; i < count; i++) {
-		sg_set_page(&sg[i], pages[i], min_t(int, length, PAGE_SIZE), 0);
-		length -= PAGE_SIZE;
+	for (i = 0; i < count; i++) {
+		sg_set_page(&sg[i], pages[i],
+			    min_t(int, length, PAGE_SIZE - offset), offset);
+		length -= (PAGE_SIZE - offset);
+		offset = 0;
 	}
 	err = -ENOMEM;