kernel-fxtec-pro1x/mm
Hugh Dickins 508034a32b [PATCH] mm: unmap_vmas with inner ptlock
Remove the page_table_lock from around the calls to unmap_vmas, and replace
the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
now safe to descend without page_table_lock.
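
A minimal sketch of the resulting pattern in zap_pte_range (simplified from
mm/memory.c; the real function also handles zap_details, swap entries and the
rss accounting): pte_offset_map_lock takes the per-table lock itself, and
pte_unmap_unlock drops it, so the caller no longer holds page_table_lock.

    static void zap_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
                              unsigned long addr, unsigned long end)
    {
            struct mm_struct *mm = tlb->mm;
            spinlock_t *ptl;
            pte_t *pte;

            /* map the page table and take its lock; no page_table_lock here */
            pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
            do {
                    pte_t ptent = *pte;

                    if (pte_none(ptent))
                            continue;
                    /* ... clear the pte and hand the page to the mmu_gather ... */
            } while (pte++, addr += PAGE_SIZE, addr != end);
            pte_unmap_unlock(pte - 1, ptl);
    }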

Don't attempt fancy locking for hugepages; just take page_table_lock in
unmap_hugepage_range.  That makes zap_hugepage_range, and the hugetlb test in
zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.  Nor
does unmap_vmas have much use for its mm arg now.
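
In mm/hugetlb.c the result is roughly the sketch below (simplified: the real
unmap_hugepage_range also adjusts the rss counter and checks that start and
end are hugepage aligned); the whole walk simply runs under page_table_lock,
with the TLB flushed afterwards.

    void unmap_hugepage_range(struct vm_area_struct *vma,
                              unsigned long start, unsigned long end)
    {
            struct mm_struct *mm = vma->vm_mm;
            unsigned long address;
            pte_t *ptep;
            pte_t pte;

            spin_lock(&mm->page_table_lock);
            for (address = start; address < end; address += HPAGE_SIZE) {
                    ptep = huge_pte_offset(mm, address);
                    if (!ptep)
                            continue;
                    pte = huge_ptep_get_and_clear(mm, address, ptep);
                    if (pte_none(pte))
                            continue;
                    put_page(pte_page(pte));
            }
            spin_unlock(&mm->page_table_lock);
            flush_tlb_range(vma, start, end);
    }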

The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
page_table_lock: if they're implemented at all, they typically come down to
flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
(which we already audited for the mprotect case).
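
For reference, on architectures where these hooks are not empty they come down
to something like the following (a generic sketch, not any one architecture's
exact definition); neither touches the page tables themselves, so calling them
without page_table_lock is fine.

    #define tlb_start_vma(tlb, vma) \
            flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end)

    #define tlb_end_vma(tlb, vma) \
            flush_tlb_range(vma, (vma)->vm_start, (vma)->vm_end)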

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:41 -07:00
bootmem.c
fadvise.c
filemap.c
filemap.h
filemap_xip.c
fremap.c
highmem.c
hugetlb.c [PATCH] mm: unmap_vmas with inner ptlock 2005-10-29 21:40:41 -07:00
internal.h
Kconfig
madvise.c
Makefile
memory.c [PATCH] mm: unmap_vmas with inner ptlock 2005-10-29 21:40:41 -07:00
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c [PATCH] mm: unmap_vmas with inner ptlock 2005-10-29 21:40:41 -07:00
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page-writeback.c
page_alloc.c
page_io.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
sparse.c
swap.c
swap_state.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c
vmscan.c