If slab tracking is on then build a list of full slabs so that we can verify
the integrity of all slabs and are also able to build a list of alloc/free
callers.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The patch adds PageTail(page) and PageHead(page) to check if a page is the
head or the tail of a compound page. This is done by masking the two bits
describing the state of a compound page and then comparing them. So one
comparison and a branch instead of two bit checks and two branches.
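A minimal sketch of the masking idea, assuming PG_compound marks any compound
page and PG_reclaim doubles as the tail marker (as the related patch below
describes); the mask name is illustrative:

    #define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))

    /* tail pages have both bits set */
    #define PageTail(page) (((page)->flags & PG_head_tail_mask) == PG_head_tail_mask)

    /* head pages have only the compound bit set */
    #define PageHead(page) (((page)->flags & PG_head_tail_mask) == (1L << PG_compound))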
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If we add a new flag so that we can distinguish between the first page and the
tail pages then we can avoid using page->private in the first page.
page->private == page for the first page, so there is no real information in
there.
Freeing up page->private makes the use of compound pages more transparent.
They become more usable like real pages. Right now we have to be careful f.e.
if we are going beyond PAGE_SIZE allocations in the slab on i386 because we
can then no longer use the private field. This is one of the issues that
cause us not to support debugging for page size slabs in SLAB.
Having page->private available for SLUB would allow more meta information in
the page struct. I can probably avoid the 16 bit ints that I have in there
right now.
Also if page->private is available then a compound page may be equipped with
buffer heads. This may free up the way for filesystems to support larger
blocks than page size.
We add PageTail as an alias of PageReclaim. Compound pages cannot currently
be reclaimed. Because of the alias one needs to check PageCompound first.
The RFC for this approach was discussed at
http://marc.info/?t=117574302800001&r=1&w=2
[nacc@us.ibm.com: fix hugetlbfs]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Makes SLUB behave like SLAB in this area to avoid issues.
Throw a stack dump to alert people.
At some point the behavior should be switched back. NULL is no memory as
far as I can tell, and if the user asked for 0 bytes then they need to get no
memory.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is a new slab allocator which was motivated by the complexity of the
existing code in mm/slab.c. It attempts to address a variety of concerns
with the existing implementation.
A. Management of object queues
A particular concern was the complex management of the numerous object
queues in SLAB. SLUB has no such queues. Instead we dedicate a slab for
each allocating CPU and use objects from a slab directly instead of
queueing them up.
B. Storage overhead of object queues
SLAB object queues exist per node, per CPU. The alien cache queue even
has a queue array that contains a queue for each processor on each
node. For very large systems the number of queues and the number of
objects that may be caught in those queues grows exponentially. On our
systems with 1k nodes / processors we have several gigabytes just tied up
for storing references to objects for those queues. This does not include
the objects that could be on those queues. One fears that the whole
memory of the machine could one day be consumed by those queues.
C. SLAB meta data overhead
SLAB has overhead at the beginning of each slab. This means that data
cannot be naturally aligned at the beginning of a slab block. SLUB keeps
all meta data in the corresponding page_struct. Objects can be naturally
aligned in the slab. F.e. a 128 byte object will be aligned at 128 byte
boundaries and can fit tightly into a 4k page with no bytes left over.
SLAB cannot do this.
D. SLAB has a complex cache reaper
SLUB does not need a cache reaper for UP systems. On SMP systems
the per CPU slab may be pushed back into the partial list but that
operation is simple and does not require an iteration over a list
of objects. SLAB expires per CPU, shared and alien object queues
during cache reaping, which may cause strange holdoffs.
E. SLAB has complex NUMA policy layer support
SLUB pushes NUMA policy handling into the page allocator. This means that
allocation is coarser (SLUB does interleave on a page level) but that
situation was also present before 2.6.13. SLAB's application of
policies to individual slab objects is certainly a performance
concern due to the frequent references to memory policies, which may
lead a sequence of objects to come from one node after another. SLUB
will get a slab full of objects from one node and then will switch to
the next.
F. Reduction of the size of partial slab lists
SLAB has per node partial lists. This means that over time a large
number of partial slabs may accumulate on those lists. These can
only be reused if allocations occur on specific nodes. SLUB has a global
pool of partial slabs and will consume slabs from that pool to
decrease fragmentation.
G. Tunables
SLAB has sophisticated tuning abilities for each slab cache. One can
manipulate the queue sizes in detail. However, filling the queues still
requires the use of the spin lock to check out slabs. SLUB has a global
parameter (slub_min_order) for tuning. Increasing the minimum slab
order can decrease the locking overhead. The bigger the slab order the
fewer movements of pages between per CPU and partial lists occur and the
better SLUB will scale.
H. Slab merging
We often have slab caches with similar parameters. SLUB detects those
on boot up and merges them into the corresponding general caches. This
leads to more effective memory use. About 50% of all caches can
be eliminated through slab merging. This will also decrease
slab fragmentation because partially allocated slabs can be filled
up again. Slab merging can be switched off by specifying
slub_nomerge on boot up.
Note that merging can expose heretofore unknown bugs in the kernel
because corrupted objects may now be placed differently and corrupt
different neighboring objects. Enable sanity checks to find those.
I. Diagnostics
The current slab diagnostics are difficult to use and require a
recompilation of the kernel. SLUB contains debugging code that
is always available (but is kept out of the hot code paths).
SLUB diagnostics can be enabled via the "slub_debug" option.
Parameters can be specified to select a single or a group of
slab caches for diagnostics. This means that the system is running
with the usual performance and it is much more likely that
race conditions can be reproduced.
J. Resiliency
If basic sanity checks are on then SLUB is capable of detecting
common error conditions and recovering as best as possible to allow the
system to continue.
K. Tracing
Tracing can be enabled via the slub_debug=T,<slabcache> option
during boot. SLUB will then log all actions on that slabcache
and dump the object contents on free.
L. On demand DMA cache creation
Generally DMA caches are not needed. If a kmalloc is used with
__GFP_DMA then just create this single slabcache that is needed.
For systems that have no ZONE_DMA requirement the support is
completely eliminated.
M. Performance increase
Some benchmarks have shown speed improvements on kernbench in the
range of 5-10%. The locking overhead of SLUB is based on the
underlying base allocation size. If we can reliably allocate
larger order pages then it is possible to increase SLUB
performance much further. The anti-fragmentation patches may
enable further performance increases.
Tested on:
i386 UP + SMP, x86_64 UP + SMP + NUMA emulation, IA64 NUMA + Simulator
SLUB Boot options
slub_nomerge Disable merging of slabs
slub_min_order=x Require a minimum order for slab caches. This
increases the managed chunk size and therefore
reduces meta data and locking overhead.
slub_min_objects=x Minimum objects per slab. Default is 8.
slub_max_order=x Avoid generating slabs larger than order specified.
slub_debug Enable all diagnostics for all caches
slub_debug=<options> Enable selective options for all caches
slub_debug=<o>,<cache> Enable selective options for a certain set of
caches
Available Debug options
F Double Free checking, sanity and resiliency
R Red zoning
P Object / padding poisoning
U Track last free / alloc
T Trace all allocs / frees (only use for individual slabs).
To use SLUB: Apply this patch and then select SLUB as the default slab
allocator.
[hugh@veritas.com: fix an oops-causing locking error]
[akpm@linux-foundation.org: various stupid cleanups and small fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
i386 uses kmalloc to allocate the threadinfo structure, assuming that the
allocations result in a page-aligned allocation. That has worked so
far because SLAB exempts page sized slabs from debugging and aligns them in
special ways that go beyond the restrictions imposed by
ARCH_KMALLOC_MINALIGN valid for other slabs in the kmalloc array.
SLUB also works fine without debugging since page sized allocations neatly
align at page boundaries. However, if debugging is switched on then SLUB
will extend the slab with debug information. The resulting slab is no
longer of page size. It will only be aligned following the requirements
imposed by ARCH_KMALLOC_MINALIGN. As a result the threadinfo structure may
not be page aligned, which makes i386 fail to boot with SLUB debug on.
Replace the calls to kmalloc with calls into the page allocator.
An alternate solution may be to create a custom slab cache where the
alignment is set to PAGE_SIZE. That would allow SLUB debugging to be
applied to the threadinfo structure.
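A minimal sketch of the page-allocator based replacement, assuming the usual
THREAD_SIZE/get_order() helpers (the actual patch may differ in detail):

    #define alloc_thread_info(tsk) \
        ((struct thread_info *)__get_free_pages(GFP_KERNEL, get_order(THREAD_SIZE)))

    #define free_thread_info(info) \
        free_pages((unsigned long)(info), get_order(THREAD_SIZE))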
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rename file_ra_state.prev_page to prev_index and file_ra_state.offset to
prev_offset. Also, the update of prev_index in do_generic_mapping_read() is
now moved closer to the update of prev_offset.
[wfg@mail.ustc.edu.cn: fix it]
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Introduce ra.offset and store in it an offset where the previous read
ended. This way we can detect whether reads are really sequential (and
thus we should not mark the page as accessed repeatedly) or whether they
are random and just happen to be in the same page (and the page should
really be marked accessed again).
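A minimal sketch of the check this enables in do_generic_mapping_read(); the
field names here are illustrative, following the description above:

    /* a read is sequential only if it starts exactly where the previous
     * one ended; otherwise mark the page accessed again */
    if (prev_index != index || prev_offset != offset)
        mark_page_accessed(page);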
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Adds /proc/pid/clear_refs. When any non-zero number is written to this file,
pte_mkold() and ClearPageReferenced() are called for each pte and its
corresponding page, respectively, in that task's VMAs. This file is only
writable by the user who owns the task.
It is now possible to measure _approximately_ how much memory a task is using
by clearing the reference bits with
echo 1 > /proc/pid/clear_refs
and checking the reference count for each VMA from the /proc/pid/smaps output
at a measured time interval. For example, to observe the approximate change
in memory footprint for a task, write a script that clears the references
(echo 1 > /proc/pid/clear_refs), sleeps, and then greps for Pgs_Referenced and
extracts the size in kB. Add the sizes for each VMA together for the total
referenced footprint. Moments later, repeat the process and observe the
difference.
For example, using an efficient Mozilla:
accumulated time     referenced memory
----------------     -----------------
            0 s                 408 kB
            1 s                 408 kB
            2 s                 556 kB
            3 s                1028 kB
            4 s                 872 kB
            5 s                1956 kB
            6 s                 416 kB
            7 s                1560 kB
            8 s                2336 kB
            9 s                1044 kB
           10 s                 416 kB
This is a valuable tool to get an approximate measurement of the memory
footprint for a task.
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: David Rientjes <rientjes@google.com>
[akpm@linux-foundation.org: build fixes]
[mpm@selenic.com: rename for_each_pmd]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If you actually clear the bit, you need to:
+ pte_update_defer(vma->vm_mm, addr, ptep);
The reason is, when updating PTEs, the hypervisor must be notified. Using
atomic operations to do this is fine for all hypervisors I am aware of.
However, for hypervisors which shadow page tables, if these PTE
modifications are not trapped, you need a post-modification call to fulfill
the update of the shadow page table.
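A minimal sketch of the resulting pattern; the surrounding lines are
illustrative only, since this message shows just the added call:

    /* clear the accessed bit in the pte ... */
    ptep_test_and_clear_young(vma, addr, ptep);
    /* ... then let a shadowing hypervisor know about the modification */
    pte_update_defer(vma->vm_mm, addr, ptep);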
Acked-by: Zachary Amsden <zach@vmware.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add ptep_test_and_clear_{dirty,young} to i386. The architecture advertises
that it has them, and there is at least one place where they need to be
called without the page table lock: clearing the accessed bit on write to
/proc/pid/clear_refs.
ptep_clear_flush_{dirty,young} are updated to use the new functions. The
overall net effect to current users of ptep_clear_flush_{dirty,young} is
that we introduce an additional branch.
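A minimal sketch of the updated young variant (dirty is analogous), showing
the one additional branch mentioned above:

    #define ptep_clear_flush_young(vma, address, ptep)                     \
    ({                                                                     \
            int __young = ptep_test_and_clear_young(vma, address, ptep);   \
            if (__young)                                                   \
                    flush_tlb_page(vma, address);                          \
            __young;                                                       \
    })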
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Introduce a macro to suppress the gcc warning about a probable
uninitialized state of a variable.
Example:
- spinlock_t *ptl;
+ spinlock_t *uninitialized_var(ptl);
Not a happy solution, but those warnings are obnoxious.
- Using the usual pointlessly-set-it-to-zero approach wastes several
bytes of text.
- Using a macro means we can (hopefully) do something else if gcc changes
cause the `x = x' hack to stop working
- Using a macro means that people who are worried about hiding true bugs
can easily turn it off.
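The macro itself is just the `x = x' trick wrapped up, roughly:

    /* silences "may be used uninitialized" warnings without emitting code */
    #define uninitialized_var(x) x = x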
Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Generally we work under the assumption that the mem_map array is
contiguous and valid out to a MAX_ORDER_NR_PAGES block of pages, i.e. that if
we have validated any page within this MAX_ORDER_NR_PAGES block we need not
check any other. This is not true when CONFIG_HOLES_IN_ZONE is set and we
must check each and every reference we make from a pfn.
Add a pfn_valid_within() helper which should be used when scanning pages
within a MAX_ORDER_NR_PAGES block when we have already checked the validity
of the block normally with pfn_valid(). This can then be optimised away when
we do not have holes within a MAX_ORDER_NR_PAGES block of pages.
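A minimal sketch of the helper, assuming it compiles away to a constant when
CONFIG_HOLES_IN_ZONE is not set:

    #ifdef CONFIG_HOLES_IN_ZONE
    /* holes may exist inside a MAX_ORDER block: check every pfn */
    #define pfn_valid_within(pfn) pfn_valid(pfn)
    #else
    /* the block has already been validated as a whole */
    #define pfn_valid_within(pfn) (1)
    #endif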
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Architectures that don't support DMA can say so by adding a config NO_DMA
to their Kconfig file. This will prevent compilation of some DMA-specific
driver code. Also dma-mapping-broken.h isn't needed anymore on at least
s390. This avoids compilation and linking of otherwise dead/broken code.
Other architectures that include dma-mapping-broken.h are arm26, h8300,
m68k, m68knommu and v850. If these could be converted as well we could get
rid of the header file.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
"John W. Linville" <linville@tuxdriver.com>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: <James.Bottomley@SteelEye.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <geert@linux-m68k.org>
Cc: <zippel@linux-m68k.org>
Cc: <spyro@f2s.com>
Cc: <uclinux-v850@lsi.nec.co.jp>
Cc: <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ensure pages are uptodate after returning from read_cache_page, which allows
us to cut out most of the filesystem-internal PageUptodate calls.
I didn't have a great look down the call chains, but this appears to fix 7
possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
ecryptfs, 1 in jffs2, and a possible case of cleared data being overwritten
with readpage in block2mtd. All depending on whether the filler is async
and/or can return with a !uptodate page.
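A minimal sketch of the caller-side pattern that can now go away (filler and
data are placeholders):

    page = read_cache_page(mapping, index, filler, data);
    if (IS_ERR(page))
            return PTR_ERR(page);
    /* no longer needed once read_cache_page() returns an uptodate page:
     *   if (!PageUptodate(page)) { page_cache_release(page); return -EIO; }
     */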
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minimum gcc version is 3.2 now. However, with likely profiling, even
modern gcc versions cannot always eliminate the call.
Replace the placeholder functions with the more conventional empty static
inlines, which should be optimal for everyone.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a proper prototype for hugetlb_get_unmapped_area() in
include/linux/hugetlb.h.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: William Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new mm function apply_to_page_range() which applies a given function to
every pte in a given virtual address range in a given mm structure. This is a
generic alternative to cut-and-pasting the Linux idiomatic pagetable walking
code in every place that a sequence of PTEs must be accessed.
Although this interface is intended to be useful in a wide range of
situations, it is currently used specifically by several Xen subsystems, for
example: to ensure that pagetables have been allocated for a virtual address
range, and to construct batched special pagetable update requests to map I/O
memory (in ioremap()).
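A minimal usage sketch; the callback signature follows the description here
and may differ between kernel versions, and mark_one_pte() is a hypothetical
example callback:

    static int mark_one_pte(pte_t *pte, struct page *pmd_page,
                            unsigned long addr, void *data)
    {
            /* operate on a single pte of the range; a non-zero return
             * value aborts the walk */
            return 0;
    }

    static int touch_range(struct mm_struct *mm, unsigned long addr,
                           unsigned long size)
    {
            return apply_to_page_range(mm, addr, size, mark_one_pte, NULL);
    }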
[akpm@linux-foundation.org: fix warning, unpleasantly]
Signed-off-by: Ian Pratt <ian.pratt@xensource.com>
Signed-off-by: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Matt Mackall <mpm@waste.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
At present, the serial core always allows setserial in userspace to change the
port address, irq and base clock of any serial port. That makes sense for
legacy ISA ports, but not for (say) embedded ns16550 compatible serial ports
at peculiar addresses. In these cases, the kernel code configuring the ports
must know exactly where they are, and their clocking arrangements (which can
be unusual on embedded boards). It doesn't make sense for userspace to change
these settings.
Therefore, this patch defines a UPF_FIXED_PORT flag for the uart_port
structure. If this flag is set when the serial port is configured, any
attempts to alter the port's type, io address, irq or base clock with
setserial are ignored.
In addition this patch uses the new flag for on-chip serial ports probed in
arch/powerpc/kernel/legacy_serial.c, and for other hard-wired serial ports
probed by drivers/serial/of_serial.c.
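A minimal sketch of how a board file might mark such a port (the values are
illustrative):

    static struct uart_port fixed_port = {
            .iobase  = 0x3f8,               /* illustrative */
            .irq     = 4,
            .uartclk = 1843200,
            .iotype  = UPIO_PORT,
            /* the serial core now ignores setserial attempts to change
             * the type, I/O address, irq or base clock */
            .flags   = UPF_BOOT_AUTOCONF | UPF_FIXED_PORT,
    };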
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add support for the integrated serial ports of the MIPS RM9122 processor
and its relatives.
The patch also does some whitespace cleanup.
[akpm@linux-foundation.org: cleanups]
Signed-off-by: Thomas Koeller <thomas.koeller@baslerweb.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Serial driver patch for the PMC-Sierra MSP71xx devices.
There are three different fixes:
1 Fix for DesignWare APB THRE errata: In brief, this is a non-standard
16550 in that the THRE interrupt will not re-assert itself simply by
disabling and re-enabling the THRI bit in the IER; it is only re-asserted
if a character is actually sent out.
It appears that the "8250-uart-backup-timer.patch" in the "mm" tree
also fixes it, so we have dropped our initial workaround. This patch now
needs to be applied on top of that "mm" patch.
2 Fix for Busy Detect on LCR write: The DesignWare APB UART has a feature
which causes a new Busy Detect interrupt to be generated if it's busy
when the LCR is written. This fix saves the value of the LCR and
rewrites it after clearing the interrupt.
3 Workaround for interrupt/data concurrency issue: The SoC needs to
ensure that writes that can cause interrupts to be cleared reach the UART
before returning from the ISR. This fix reads a non-destructive register
on the UART so the read transaction completion ensures the previously
queued write transaction has also completed.
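A minimal sketch of the workaround in (3); any harmless register read will
do, UART_LSR is used here purely as an example:

    /* reading the UART back forces the preceding interrupt-clearing write
     * to reach the device before the ISR returns */
    (void)serial_in(up, UART_LSR);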
Signed-off-by: Marc St-Jean <Marc_St-Jean@pmc-sierra.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
PCI drivers have the new_id file in sysfs which allows new IDs to be added
at runtime. The advantage is to avoid re-compilation of a driver that
works for a new device but whose ID table doesn't contain the new device.
This mechanism is only meant for testing; after the driver has been tested
successfully, the ID should be added to the source code so that new revisions
of the kernel automatically detect the device.
The implementation follows the PCI implementation. The interface is documented
in Documentation/pcmcia/driver.txt. Computations should be done in userspace,
so the sysfs string contains the raw structure members for matching.
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This introduces krealloc(), which reallocates memory while keeping the
contents unchanged. The allocator avoids reallocation if the new size
fits the currently used cache. I also added a simple non-optimized version
to mm/slob.c for compatibility.
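A minimal sketch of the simple, non-optimized variant described for
mm/slob.c (names follow the generic slab API; error handling is trimmed):

    void *krealloc(const void *p, size_t new_size, gfp_t flags)
    {
            void *ret;

            if (!p)
                    return kmalloc(new_size, flags);
            if (!new_size) {
                    kfree(p);
                    return NULL;
            }

            ret = kmalloc(new_size, flags);
            if (ret) {
                    /* keep the old contents, up to the smaller size */
                    memcpy(ret, p, min(new_size, ksize(p)));
                    kfree(p);
            }
            return ret;
    }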
[akpm@linux-foundation.org: fix warnings]
Acked-by: Josef Sipek <jsipek@fsl.cs.sunysb.edu>
Acked-by: Matt Mackall <mpm@selenic.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This was broken. It adds complexity, for no good reason. Rather than
separate __pa() and __pa_symbol(), we should deprecate __pa_symbol(),
and preferably __pa() too - and just use "virt_to_phys()" instead, which
is more readable and has nicer semantics.
However, right now, just undo the separation, and make __pa_symbol() be
the exact same as __pa(). That fixes the bugs this patch introduced,
and we can do the fairly obvious cleanups later.
Do the new __phys_addr() function (which is now the actual workhorse for
the unified __pa()/__pa_symbol()) as a real external function; that way
all the potential issues with compile/link-time optimizations of
constant symbol addresses go away, and we can also, if we choose to, add
more sanity-checking of the argument.
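A minimal sketch of the out-of-line helper on x86-64, assuming the usual
split between the direct mapping and the kernel text mapping:

    /* out of line: keeps compile/link-time games with constant symbol
     * addresses out of the picture */
    unsigned long __phys_addr(unsigned long x)
    {
            if (x >= __START_KERNEL_map)
                    return x - __START_KERNEL_map + phys_base;
            return x - PAGE_OFFSET;
    }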
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild: (38 commits)
kconfig: fix mconf segmentation fault
kbuild: enable use of code from a different dir
kconfig: error out if recursive dependencies are found
kbuild: scripts/basic/fixdep segfault on pathological string-o-death
kconfig: correct minor typo in Kconfig warning message.
kconfig: fix path to modules.txt in Kconfig help
usr/Kconfig: fix typo
kernel-doc: alphabetically-sorted entries in index.html of 'htmldocs'
kbuild: be more explicit on missing .config file
kbuild: clarify the creation of the LOCALVERSION_AUTO string.
kbuild: propagate errors from find in scripts/gen_initramfs_list.sh
kconfig: refer to qt3 if we cannot find qt libraries
kbuild: handle compressed cpio initramfs-es
kbuild: ignore section mismatch warning for references from .paravirtprobe to .init.text
kbuild: remove stale comment in modpost.c
kbuild/mkuboot.sh: allow spaces in CROSS_COMPILE
kbuild: fix make mrproper for Documentation/DocBook/man
kbuild: remove kconfig binaries during make mrproper
kconfig/menuconfig: do not hardcode '.config'
kbuild: override build timestamp & version
...
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm: (66 commits)
KVM: Remove unused 'instruction_length'
KVM: Don't require explicit indication of completion of mmio or pio
KVM: Remove extraneous guest entry on mmio read
KVM: SVM: Only save/restore MSRs when needed
KVM: fix an if() condition
KVM: VMX: Add lazy FPU support for VT
KVM: VMX: Properly shadow the CR0 register in the vcpu struct
KVM: Don't complain about cpu erratum AA15
KVM: Lazy FPU support for SVM
KVM: Allow passing 64-bit values to the emulated read/write API
KVM: Per-vcpu statistics
KVM: VMX: Avoid unnecessary vcpu_load()/vcpu_put() cycles
KVM: MMU: Avoid heavy ASSERT at non debug mode.
KVM: VMX: Only save/restore MSR_K6_STAR if necessary
KVM: Fold drivers/kvm/kvm_vmx.h into drivers/kvm/vmx.c
KVM: VMX: Don't switch 64-bit msrs for 32-bit guests
KVM: VMX: Reduce unnecessary saving of host msrs
KVM: Handle guest page faults when emulating mmio
KVM: SVM: Report hardware exit reason to userspace instead of dmesg
KVM: Retry sleeping allocation if atomic allocation fails
...
utrace removes the ptrace_message field in task_struct. Move our use
of this field into a new member in thread_info called "syscall".
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (231 commits)
[PATCH] i386: Don't delete cpu_devs data to identify different x86 types in late_initcall
[PATCH] i386: type may be unused
[PATCH] i386: Some additional chipset register values validation.
[PATCH] i386: Add missing !X86_PAE dependincy to the 2G/2G split.
[PATCH] x86-64: Don't exclude asm-offsets.c in Documentation/dontdiff
[PATCH] i386: avoid redundant preempt_disable in __unlazy_fpu
[PATCH] i386: white space fixes in i387.h
[PATCH] i386: Drop noisy e820 debugging printks
[PATCH] x86-64: Fix allnoconfig error in genapic_flat.c
[PATCH] x86-64: Shut up warnings for vfat compat ioctls on other file systems
[PATCH] x86-64: Share identical video.S between i386 and x86-64
[PATCH] x86-64: Remove CONFIG_REORDER
[PATCH] x86-64: Print type and size correctly for unknown compat ioctls
[PATCH] i386: Remove copy_*_user BUG_ONs for (size < 0)
[PATCH] i386: Little cleanups in smpboot.c
[PATCH] x86-64: Don't enable NUMA for a single node in K8 NUMA scanning
[PATCH] x86: Use RDTSCP for synchronous get_cycles if possible
[PATCH] i386: Add X86_FEATURE_RDTSCP
[PATCH] i386: Implement X86_FEATURE_SYNC_RDTSC on i386
[PATCH] i386: Implement alternative_io for i386
...
Fix up trivial conflict in include/linux/highmem.h manually.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
CC fs/nfs/nfsroot.o
fs/nfs/nfsroot.c:131: error: tokens causes a section type conflict
make[2]: *** [fs/nfs/nfsroot.o] Error 1
This is due to mixing const and non-const content in the same section
which halfway recent gccs absolutely hate. Fixed by dropping the const.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* master.kernel.org:/pub/scm/linux/kernel/git/jejb/voyager-2.6:
[VOYAGER] add smp alternatives
[VOYAGER] Use modern techniques to setup and teardown low identiy mappings.
[VOYAGER] Convert the monitor thread to use the kthread API
[VOYAGER] clockevents driver: bring voyager in to line
[VOYAGER] clockevents: correct boot cpu is zero assumption
[VOYAGER] add smp_call_function_single
The S3C2410_UDC_SETIX() macro is not used and won't be used by the udc
driver, so delete it.
Signed-off-by: Arnaud Patard <arnaud.patard@rtp-net.org>
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Get rid of the 'pio_speed' member of 'ide_drive_t' that was only used by this
driver, by storing the PIO mode timings in 'drive_data' instead. This allows
us to greatly simplify the process of "reloading" the chip's timing register
and do it right in sl82c105_dma_off_quietly(), and to get rid of two extra
arguments to config_for_pio(), which got renamed to sl82c105_tune_pio() and
now returns the selected PIO mode. The ide_config_drive_speed() call is moved
into the tuneproc() method, now called sl82c105_tune_drive(), and the code to
set the drive's 'io_32bit' and 'unmask' flags is in its turn moved to its
proper place in the init_hwif() method.
Also, while at it, rename get_timing_sl82c105() to get_pio_timings() and get
rid of the code in it clamping cycle counts to 32, which was both incorrect
and never executed anyway...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
__ioremap() took a set of page table flags (specifically the cacheable
and bufferable bits) to control the mapping type. However, with
the advent of ARMv6, this is far too limited.
Replace the page table flags with a memory type index, so that the
desired attributes can be selected from the mem_type table.
Finally, to prevent silent miscompilation due to the differing
arguments, rename the __ioremap() and __ioremap_pfn() functions.
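A minimal usage sketch, assuming the renamed helper is __arm_ioremap() and
using a memory type index from the mem_type table:

    static void __iomem *map_device_regs(unsigned long phys)
    {
            /* the old call took raw L_PTE_* page table flags here; now a
             * memory type index selects the attributes */
            return __arm_ioremap(phys, SZ_4K, MT_DEVICE);
    }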
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a cached device type for ioremap_cached(). Group all device memory
types together, and ensure that they all have a "MT_DEVICE" prefix.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add generic IEEE 802.11 definitions.
Signed-off-by: Jiri Benc <jbenc@suse.cz>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
People treating the *_pid fields in netlink as a process ID have caused
endless confusion over the years. The fact that our own netlink.h
does this only adds to the confusion.
So here is a patch to change the comments to refer to it as the port
ID, which hopefully will make it clear what the purpose of the fields
really is.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix:
drivers/serial/8250.c:1837: warning: suggest parentheses around arithmetic in operand of |
due to a macro argument being used without the required parentheses.
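A purely illustrative example of the warning (the names are made up, not the
actual 8250.c macro); with the argument unparenthesized, passing `a | b'
binds to the shift first:

    #define FIELD_BAD(x)   (x << 4)     /* FIELD_BAD(a | b) -> a | (b << 4) */
    #define FIELD_GOOD(x)  ((x) << 4)   /* FIELD_GOOD(a | b) -> (a | b) << 4 */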
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds support for the D-Link DSM-G600 Rev A.
This is an ARM XScale IXP4xx system relatively similar to
the NSLU2 and NAS-100D already supported by mainline. An
important difference is Gigabit Ethernet support using
the Via Velocity chipset.
This patch is the combined work of Michael Westerhof and
Alessandro Zummo, with contributions from Michael-Luke
Jones. This version addresses review comments from rmk
and Deepak Saxena.
Signed-off-by: Michael-Luke Jones <mlj28@cam.ac.uk>
Signed-off-by: Alessandro Zummo <a.zummo@towertech.it>
Signed-off-by: Michael Westerhof <mwester@dls.net>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc: (46 commits)
mmc-omap: Clean up omap set_ios and make MMC_POWER_ON work
mmc-omap: Fix omap to use MMC_POWER_ON
mmc-omap: add missing '\n'
mmc: make tifm_sd_set_dma_data() static
mmc: remove old card states
mmc: support unsafe resume of cards
mmc: separate out reading EXT_CSD
mmc: break apart switch function
MMC: Fix handling of low-voltage cards
MMC: Consolidate voltage definitions
mmc: add bus handler
wbsd: check for data opcode earlier
mmc: Separate out protocol ops
mmc: Move core functions to subdir
mmc: deprecate mmc bus topology
mmc: remove card upon suspend
mmc: allow suspended block driver to be removed
mmc: Flush pending detects on host removal
mmc: Move host and card drivers to subdirs
mmc: Move queue functions to mmc_block
...
* git://git.linux-nfs.org/pub/linux/nfs-2.6: (28 commits)
NFS: Fix a compile glitch on 64-bit systems
NFS: Clean up nfs_create_request comments
spkm3: initialize hash
spkm3: remove bad kfree, unnecessary export
spkm3: fix spkm3's use of hmac
NFS4: invalidate cached acl on setacl
NFS: Fix directory caching problem - with test case and patch.
NFS: Set meaningful value for fattr->time_start in readdirplus results.
NFS: Added support to turn off the NFSv3 READDIRPLUS RPC.
SUNRPC: RPC client should retry with different versions of rpcbind
SUNRPC: remove old portmapper
NFS: switch NFSROOT to use new rpcbind client
SUNRPC: switch the RPC server to use the new rpcbind registration API
SUNRPC: switch socket-based RPC transports to use rpcbind
SUNRPC: introduce rpcbind: replacement for in-kernel portmapper
SUNRPC: Eliminate side effects from rpc_malloc
SUNRPC: RPC buffer size estimates are too large
NLM: Shrink the maximum request size of NLM4 requests
NFS: Use pgoff_t in structures and functions that pass page cache offsets
NFS: Clean up nfs_sync_mapping_wait()
...