pgtable*.h is intended for definitions relating to actual pagetables
and their entries, so move all the definitions for
(pte|pmd|pud|pgd)(val)?_t to the appropriate pgtable*.h headers.
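For illustration, the 64-bit flavour of the kind of definitions being moved
looks roughly like this (names as in the x86 headers; the exact destination
header is left as an assumption here):

typedef unsigned long	pteval_t;
typedef unsigned long	pmdval_t;
typedef unsigned long	pudval_t;
typedef unsigned long	pgdval_t;

/* a pte is just a wrapped raw entry value */
typedef struct { pteval_t pte; } pte_t;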
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* commit 'remotes/tip/x86/paravirt': (175 commits)
xen: use direct ops on 64-bit
xen: make direct versions of irq_enable/disable/save/restore to common code
xen: setup percpu data pointers
xen: fix 32-bit build resulting from mmu move
x86/paravirt: return full 64-bit result
x86, percpu: fix kexec with vmlinux
x86/vmi: fix interrupt enable/disable/save/restore calling convention.
x86/paravirt: don't restore second return reg
xen: setup percpu data pointers
x86: split loading percpu segments from loading gdt
x86: pass in cpu number to switch_to_new_gdt()
x86: UV fix uv_flush_send_and_wait()
x86/paravirt: fix missing callee-save call on pud_val
x86/paravirt: use callee-saved convention for pte_val/make_pte/etc
x86/paravirt: implement PVOP_CALL macros for callee-save functions
x86/paravirt: add register-saving thunks to reduce caller register pressure
x86/paravirt: selectively save/restore regs around pvops calls
x86: fix paravirt clobber in entry_64.S
x86/pvops: add a paravirt_ident functions to allow special patching
xen: move remaining mmu-related stuff into mmu.c
...
Conflicts:
arch/x86/mach-voyager/voyager_smp.c
arch/x86/mm/fault.c
- pmd_flags() needs to be available on 2-level pagetables too
- provide a pud_large() wrapper as well
- include page.h - it provides basic types relied on by pgtable.h
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The p?d_page() methods still rely on high-level types and methods:
In file included from arch/x86/kernel/early_printk.c:18:
/home/mingo/tip/arch/x86/include/asm/pgtable.h: In function ‘pmd_page’:
/home/mingo/tip/arch/x86/include/asm/pgtable.h:516: error: implicit declaration of function ‘__pfn_to_section’
/home/mingo/tip/arch/x86/include/asm/pgtable.h:516: error: initialization makes pointer from integer without a cast
/home/mingo/tip/arch/x86/include/asm/pgtable.h:516: error: implicit declaration of function ‘__section_mem_map_addr’
/home/mingo/tip/arch/x86/include/asm/pgtable.h:516: error: return makes pointer from integer without a cast
So convert them to macros and document the type dependency.
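A rough sketch of the macro form, illustrative only; the documenting comment
records why it cannot be a static inline at this point in the include chain:

/*
 * Stuck as a macro: pfn_to_page() ultimately needs __pfn_to_section() /
 * __section_mem_map_addr() from linux/mmzone.h, which is not available
 * from asm/pgtable.h.
 */
#define pmd_page(pmd)	pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)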
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The _none test is done differently for every level of the pagetable.
Standardize them by:
1: Use the native_X_val to extract the raw entry, with no need to go
via paravirt_ops, and
2: Compare with 0 rather than using a boolean !, since they are actually
values and not booleans (see the sketch below).
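A minimal sketch of the standardized form, assuming the usual
native_*_val() accessors:

static inline int pmd_none(pmd_t pmd)
{
	/* Read the raw entry directly -- no paravirt call needed. */
	return (unsigned long)native_pmd_val(pmd) == 0;
}

static inline int pud_none(pud_t pud)
{
	return native_pud_val(pud) == 0;
}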
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Add pgd/pud/pmd_flags, which are analogous to pte_flags, and use them
wherever we only care about testing the flags portion of the
respective entries.
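A hedged sketch of one of the new accessors, modeled on pte_flags()
(PTE_FLAGS_MASK taken to be the complement of the pfn mask):

static inline pmdval_t pmd_flags(pmd_t pmd)
{
	/* Keep only the flag bits; drop the pfn portion. */
	return native_pmd_val(pmd) & PTE_FLAGS_MASK;
}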
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Impact: cleanup
Unify pmd_pfn. Unfortunately it can't be demacroed because it has a
cyclic dependency on linux/mm.h:page_to_nid().
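Whatever the macro-versus-inline form, the computation being unified is just
mask-then-shift; an illustrative sketch:

/* Illustrative only: mask off the flag bits, then shift down to a pfn. */
#define pmd_pfn(pmd)	((pmd_val(pmd) & PTE_PFN_MASK) >> PAGE_SHIFT)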
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
On an x86 system which doesn't support global mappings,
__supported_pte_mask has _PAGE_GLOBAL clear, to make sure it never
appears in the PTE. pfn_pte() and so on will enforce it with:
static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
	return __pte((((phys_addr_t)page_nr << PAGE_SHIFT) |
		      pgprot_val(pgprot)) & __supported_pte_mask);
}
However, we overload _PAGE_GLOBAL with _PAGE_PROTNONE on non-present
ptes to distinguish them from swap entries, so applying
__supported_pte_mask indiscriminately will clear that bit and corrupt
the pte.
I guess the best fix is to only apply __supported_pte_mask to present
ptes. This seems like the right solution to me, as it means we can
completely ignore the issue of overlaps between the present pte bits and
the non-present pte-as-swap entry use of the bits.
__supported_pte_mask contains the set of flags we support on the
current hardware. We also use bits in the pte for things like
logically present ptes with no permissions, and swap entries for
swapped out pages. We should only apply __supported_pte_mask to
present ptes, because otherwise we may destroy other information being
stored in the ptes.
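A minimal sketch of the fix, assuming a small helper along these lines (the
massage_pgprot() name and exact shape are for illustration):

static inline pgprotval_t massage_pgprot(pgprot_t pgprot)
{
	pgprotval_t protval = pgprot_val(pgprot);

	/* Only filter present entries through __supported_pte_mask;
	 * non-present encodings (PROTNONE, swap) are left untouched. */
	if (protval & _PAGE_PRESENT)
		protval &= __supported_pte_mask;

	return protval;
}

static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
	return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
		     massage_pgprot(pgprot));
}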
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
It's not necessary to deconstruct and reconstruct a pte every time its
flags are being updated. Introduce pte_set_flags and pte_clear_flags
to set and clear flags in a pte. This allows the flag manipulation
code to be inlined, and avoids calls via paravirt-ops.
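A minimal sketch of the two helpers, assuming the native accessors; the
point is that the flag update stays on the raw pteval and inlines to a
couple of instructions with no pvop call:

static inline pte_t pte_set_flags(pte_t pte, pteval_t set)
{
	pteval_t v = native_pte_val(pte);

	return native_make_pte(v | set);
}

static inline pte_t pte_clear_flags(pte_t pte, pteval_t clear)
{
	pteval_t v = native_pte_val(pte);

	return native_make_pte(v & ~clear);
}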
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Impact: cleanup
Move the new-memtype-versus-old-memtype "is allowed" check into a header
so that it can be shared by other users. A subsequent patch uses this in
pat.c in the remap_pfn_range() code path. No functionality change in this
patch.
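For illustration, a hedged sketch of such a header-level check (the function
name and the exact allowed/disallowed combinations below are assumptions):

static inline int is_new_memtype_allowed(unsigned long flags,
					 unsigned long new_flags)
{
	/*
	 * Certain returned memtypes are not allowed for a given request:
	 * e.g. a UC- or WC request must not come back as write-back.
	 */
	if ((flags == _PAGE_CACHE_UC_MINUS && new_flags == _PAGE_CACHE_WB) ||
	    (flags == _PAGE_CACHE_WC && new_flags == _PAGE_CACHE_WB))
		return 0;

	return 1;
}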
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Impact: Cleanup and branch hints only.
Move the track and untrack pfn stub routines from memory.c to asm-generic.
Also add unlikely() to the pfnmap-related calls in the fork and exit paths.
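The asm-generic stubs amount to no-op inlines so the generic mm code can
call them unconditionally; the names and signatures below are
approximations for illustration:

#ifndef __HAVE_PFNMAP_TRACKING
static inline int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t *prot,
				    unsigned long pfn, unsigned long size)
{
	return 0;	/* nothing to track without pfnmap tracking support */
}

static inline void untrack_pfn_vma(struct vm_area_struct *vma,
				   unsigned long pfn, unsigned long size)
{
	/* nothing to untrack */
}
#endif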
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Impact: Cleanup - removes a new function in favor of a recently modified older one.
Replace follow_pfnmap_pte in the pat code with follow_phys. follow_phys
also returns the protection, eliminating the need for a pte_pgprot() call.
Using follow_phys likewise eliminates the need for pte_pa().
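A hedged usage sketch (the wrapper name is hypothetical and follow_phys()'s
signature is an assumption here): follow_phys() hands back both the physical
address and the protection, so the pat code no longer has to walk to the pte
and call pte_pgprot()/pte_pa() itself:

static int reserve_vma_phys(struct vm_area_struct *vma,
			    resource_size_t *paddr, unsigned long *prot)
{
	/* follow_phys() returns 0 on success and fills in the physical
	 * address and protection of the mapping at the given address. */
	if (follow_phys(vma, vma->vm_start, 0, prot, paddr))
		return -EINVAL;

	return 0;
}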
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Impact: New mm functionality.
Add pgprot_writecombine. pgprot_writecombine will be aliased to
pgprot_noncached when not supported by the architecture.
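The generic fallback can be as simple as a define guard, sketched here (its
exact placement in the generic header is an assumption):

#ifndef pgprot_writecombine
#define pgprot_writecombine pgprot_noncached
#endif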
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Impact: mm behavior change.
Make pgprot_noncached UC- (uc_minus) instead of strong UC. This brings
pgprot_noncached in line with ioremap_nocache() and all the other
APIs that map a page UC- when UC is requested.
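A hedged sketch of what the x86 definition amounts to after the change:
request UC- (PCD set, PWT clear) rather than strong UC:

#define pgprot_noncached(prot)						\
	((boot_cpu_data.x86 > 3)					\
	 ? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS))		\
	 : (prot))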
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>