mm/hugetlb.c: fix virtual address handling in hugetlb fault
handle_mm_fault() passes the faulting address to hugetlb_fault(). This address is not aligned to a hugepage boundary. Most of the hugetlb functions are aware of that and compute the alignment themselves, but some, such as copy_user_huge_page() and clear_huge_page(), do not. This patch makes hugetlb_fault() perform the alignment and pass the aligned address (the base address of the faulted hugepage) to those functions.

[akpm@linux-foundation.org: use &=]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
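As an illustration of the fix, here is a minimal userspace C sketch of the masking arithmetic (the 2 MB hugepage size and the sample address are assumed for the example; in the kernel the mask comes from huge_page_mask(h)):

#include <stdio.h>

int main(void)
{
	/* Assumed 2 MB hugepage size, as on x86-64; illustrative only. */
	unsigned long huge_page_size = 2UL << 20;
	unsigned long huge_page_mask = ~(huge_page_size - 1);

	/* A hypothetical faulting address inside a hugepage. */
	unsigned long address = 0x2000000UL + 0x1234UL;

	/* The operation the patch adds to hugetlb_fault(). */
	address &= huge_page_mask;

	printf("aligned address: %#lx\n", address); /* prints 0x2000000 */
	return 0;
}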
parent ef009b25f4
commit 1e16a539ac

1 changed file with 2 additions and 0 deletions
mm/hugetlb.c
@@ -2640,6 +2640,8 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	static DEFINE_MUTEX(hugetlb_instantiation_mutex);
 	struct hstate *h = hstate_vma(vma);
 
+	address &= huge_page_mask(h);
+
 	ptep = huge_pte_offset(mm, address);
 	if (ptep) {
 		entry = huge_ptep_get(ptep);
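For context, a reduced sketch of why the callee needs an aligned base: a clear_huge_page()-style helper derives each subpage's user virtual address from the address it is given, so an unaligned base would skew every derived address. This is a simplified sketch, not the real function (the kernel version in mm/memory.c has the same shape but also handles preemption points and very large pages):

#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Reduced sketch of a clear_huge_page()-style loop; 'haddr' is
 * expected to be the hugepage-aligned base virtual address, since
 * each subpage's virtual address is computed from it.
 */
static void clear_huge_page_sketch(struct page *page, unsigned long haddr,
				   unsigned int pages_per_huge_page)
{
	unsigned int i;

	for (i = 0; i < pages_per_huge_page; i++)
		clear_user_highpage(page + i, haddr + i * PAGE_SIZE);
}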