Merge the nearly empty C files and move everything from power/ to
kernel/. That way the files are easier to handle.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There is no caller of do_after_copyback() anywhere. Remove it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
A couple of coding style fixes: replace __inline__ with inline and
remove #ifdef __KERNEL__ since the header file isn't exported.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use compare double and swap to implement efficient atomic64 ops for 31 bit.
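For illustration, a hedged C sketch of the retry loop such operations reduce
to (this is not the actual s390 inline assembly; the gcc builtin stands in
for the cds instruction, and the function name is made up):

static long long atomic64_add_return_sketch(long long i, long long *counter)
{
	long long old, new;

	do {
		old = *counter;		/* snapshot the current 64-bit value */
		new = old + i;		/* value we want to install          */
	} while (__sync_val_compare_and_swap(counter, old, new) != old);

	return new;
}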
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Gcc now generates better code than the old inline
assemblies do. The original inline assembly results in:
lr %r1,%r2
sr %r3,%r3
lr %r2,%r1
srdl %r2,16
alr %r2,%r3
alr %r1,%r2
srl %r1,16
xilf %r1,65535
llghr %r2,%r1
br %r14
From the C code gcc generates this:
rll %r1,%r2,16
ar %r1,%r2
srl %r1,16
xilf %r1,65535
llghr %r2,%r1
br %r14
In addition we don't have any static register allocations anymore and
gcc is free to shuffle instructions around for better pipeline usage.
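The code shown above appears to be a 16-bit checksum fold; a plausible C
equivalent (a sketch, not necessarily the exact source) that yields the
rll/ar/srl/xilf sequence is:

static unsigned short csum_fold_sketch(unsigned int sum)
{
	sum += (sum >> 16) + (sum << 16);	/* add a 16-bit-rotated copy (rll + ar)  */
	return (unsigned short) ~(sum >> 16);	/* take the upper half and complement it */
}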
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce the get_clock_monotonic() function, which can be used to get a
(fast) timestamp. Resolution is the same as for get_clock(). The
only difference is that the timestamps are monotonic and don't jump
backward or forward.
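A hypothetical usage sketch (the surrounding code is made up; only
get_clock_monotonic() is from this patch):

	unsigned long long start, delta;

	start = get_clock_monotonic();
	/* ... work to be timed ... */
	delta = get_clock_monotonic() - start;	/* never negative: the clock is monotonic */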
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
All scsw helper functions are very short, and using them shouldn't
result in function calls. Therefore we move them to a separate header
file.
Also saves a lot of EXPORT_SYMBOLs.
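As a sketch of why (helper name and body are hypothetical): a short accessor
defined as static inline in the header is expanded at each call site, so it
costs no function call and needs no EXPORT_SYMBOL for module users.

static inline int scsw_example_flag_set(const union scsw *scsw, unsigned int flag)
{
	return (scsw->cmd.fctl & flag) != 0;
}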
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This patch fixes a null pointer exception caused by the removal of the
'ack()' hook for level interrupts in the Xilinx interrupt driver: a recent
change to the xilinx interrupt controller removed that hook for level irqs.
Signed-off-by: Roderick Colenbrander <thunderbird2k@gmail.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc64: Fix bootup with mcount in some configs.
sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.
Functions invoked early when booting up a cpu can't use
tracing because mcount requires a valid 'current_thread_info()'
and TLB mappings to be set up.
The code path of sun4v_register_mondo_queues --> register_one_mondo
is one such case. sun4v_register_mondo_queues already has the
necessary 'notrace' annotation, but register_one_mondo does not.
Normally register_one_mondo is inlined so the bug doesn't trigger,
but with some config/compiler combinations it won't be, so we
must properly mark it notrace.
While we're here, add 'notrace' annotations to prom_printf and
prom_halt so that early error handling won't have the same problem.
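As a sketch of the annotation (function name and body are hypothetical),
notrace from <linux/compiler.h> prevents the mcount call from being emitted
for the function:

static void notrace early_mondo_register_sketch(unsigned long queue_pa)
{
	/* hypervisor call to register the mondo queue would go here;
	 * must not be traced, since current_thread_info() isn't valid yet */
}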
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Reported-by: Leif Sawyer <lsawyer@gci.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a compromise and a temporary workaround for bootup NMI
watchdog triggers some people see with qla2xxx devices present.
This happens when, for example:
CPU 0 is in the driver init and looping submitting mailbox commands to
load the firmware, then waiting for completion.
CPU 1 is receiving the device interrupts. CPU 1 is where the NMI
watchdog triggers.
CPU 0 is submitting mailbox commands fast enough that by the time CPU
1 returns from the device interrupt handler, a new one is pending.
This sequence runs for more than 5 seconds.
The problematic case is CPU 1's timer interrupt running when the
barrage of device interrupts begin. Then we have:
timer interrupt
  return for softirq checking
  pending, thus enable interrupts
    qla2xxx interrupt
    return
    qla2xxx interrupt
    return
    ... 5+ seconds pass
    final qla2xxx interrupt for fw load
    return
  run timer softirq
  return
At some point in the multi-second qla2xxx interrupt storm we trigger
the NMI watchdog on CPU 1 from the NMI interrupt handler.
The timer softirq, once we get back to running it, is smart enough to
run the timer work enough times to make up for the missed timer
interrupts.
However, the NMI watchdogs (both x86 and sparc) use the timer
interrupt count to notice the cpu is wedged. But in the above
scenario we'll have received only one such timer interrupt by the
time we finally get back to running the timer softirq.
The default watchdog trigger point is only 5 seconds, which is pretty
low (the softwatchdog triggers at 60 seconds). So increase it to 30
seconds for now.
Signed-off-by: David S. Miller <davem@davemloft.net>
I had the codes for L1 D-cache load accesses and misses swapped
around, and the wrong codes for LL-cache accesses and misses.
This corrects them.
Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
LKML-Reference: <19103.8514.709300.585484@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The 32-bit parameters (len and csum) of csum_ipv6_magic() are passed in 64-bit
registers in2 and in4. The high order 32 bits of the registers were never
cleared, and garbage was sometimes calculated into the checksum.
Fix this by clearing the high order 32 bits of these registers.
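As an illustrative C sketch of the idea (the real fix is in the ia64 assembly
of csum_ipv6_magic()): a 32-bit argument handed over in a 64-bit register has
to be zero-extended before it enters the 64-bit sum.

static unsigned long zext32(unsigned long reg)
{
	return reg & 0xffffffffUL;	/* clear bits 32-63 */
}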
Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: Tony Luck <tony.luck@intel.com>
arch/ia64/kernel/dma-mapping.c:14: warning: control reaches end of non-void function
arch/ia64/kernel/dma-mapping.c:14: warning: no return statement in function returning non-void
This warning was introduced by commit 390bd132b2 ("Add dma_debug_init() for ia64").
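A hedged sketch of the kind of change such a warning calls for (the body
shown is illustrative, not necessarily the exact upstream hunk): the initcall
must return a value on every path.

static int __init dma_init(void)
{
	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
	return 0;	/* the missing return statement */
}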
Signed-off-by: Tony Luck <tony.luck@intel.com>
On Tue, Aug 18, 2009 at 01:45:17PM -0400, John David Anglin wrote:
> CC arch/parisc/kernel/traps.o
> arch/parisc/kernel/traps.c: In function 'handle_interruption':
> arch/parisc/kernel/traps.c:535:18: warning: operation on 'regs->iasq[0]'
> may be undefined
Yes - Line 535 should use both [0] and [1].
Reported-by: John David Anglin <dave@hiauly1.hia.nrc.ca>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Update ps3_defconfig.
o Refresh for 2.6.31.
o Remove MTD support.
o Add more HID drivers.
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
On non-PS3, we get:
| kernel BUG at drivers/rtc/rtc-ps3.c:36!
because the rtc-ps3 platform device is registered unconditionally in a kernel
with builtin support for PS3.
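A hedged sketch of the usual guard for this class of bug (the exact fix may
differ): only register the device when actually running on PS3 firmware.

static int __init ps3_rtc_init(void)
{
	struct platform_device *pdev;

	if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
		return -ENODEV;		/* not a PS3: skip rtc-ps3 registration */

	pdev = platform_device_register_simple("rtc-ps3", -1, NULL, 0);
	return IS_ERR(pdev) ? PTR_ERR(pdev) : 0;
}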
Reported-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/m68k/include/asm/pgtable_mm.h:148:1: warning: "pgprot_noncached" redefined
In file included from arch/m68k/include/asm/pgtable_mm.h:138,
from arch/m68k/include/asm/pgtable.h:4,
from include/linux/mm.h:40,
from include/linux/pagemap.h:7,
from include/linux/blkdev.h:12,
from arch/m68k/emu/nfblock.c:17:
include/asm-generic/pgtable.h:133:1: warning: this is the location of the previous definition
pgprot_noncached() should be defined _before_ including asm-generic/pgtable.h
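A minimal illustration of the pattern involved (the arch body shown is
hypothetical): asm-generic/pgtable.h only supplies a fallback when the macro
isn't already defined, so the m68k definition has to come first.

/* arch definition, must come before the generic header */
#define pgprot_noncached(prot)	arch_noncached(prot)	/* hypothetical body */

/* fallback as in include/asm-generic/pgtable.h */
#ifndef pgprot_noncached
#define pgprot_noncached(prot)	(prot)
#endif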
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
arch/m68k/include/asm/motorola_pgalloc.h: In function 'pte_alloc_one':
arch/m68k/include/asm/motorola_pgalloc.h:44: warning: passing argument 1 of 'kunmap' from incompatible pointer type
Also, remove unneeded test for kmap() failure.
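A hedged sketch of the kmap/kunmap pairing in question (hypothetical helper,
not the real pte_alloc_one()): kunmap() takes the struct page that was
kmap()ed, not the virtual address, and kmap() itself cannot fail, so there is
nothing to test.

static void zero_page_via_kmap(struct page *page)
{
	void *addr = kmap(page);	/* no failure check needed */

	memset(addr, 0, PAGE_SIZE);
	kunmap(page);			/* correct argument: the page, not addr */
}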
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
2.6.31-rc7 does not boot on vSMP systems:
[ 8.501108] CPU31: Thermal monitoring enabled (TM1)
[ 8.501127] CPU 31 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
[ 8.650254] CPU31: Intel(R) Xeon(R) CPU E5540 @ 2.53GHz stepping 04
[ 8.710324] Brought up 32 CPUs
[ 8.713916] Total of 32 processors activated (162314.96 BogoMIPS).
[ 8.721489] ERROR: parent span is not a superset of domain->span
[ 8.727686] ERROR: domain->groups does not contain CPU0
[ 8.733091] ERROR: groups don't span domain->span
[ 8.737975] ERROR: domain->cpu_power not set
[ 8.742416]
Ravikiran Thirumalai bisected it to:
| commit 2759c3287d
| x86: don't call read_apic_id if !cpu_has_apic
The problem is that on vSMP systems the CPUID-derived
initial APIC IDs are overlapping, so we need to fall
back on hard_smp_processor_id(), which reads the local
APIC.
Both come from the hardware (influenced by firmware
though) so it's a tough call which one to trust.
Doing the quirk expresses the vSMP property properly
and also does not affect other systems, so we go for
this solution instead of a revert.
Reported-and-Tested-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Shai Fultheim <shai@scalex86.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <4A944D3C.5030100@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Initialize cx before calling xen_cpuid(), in order to suppress the
"may be used uninitialized in this function" warning.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Xen always runs on CPUs which properly support WP enforcement in
privileged mode, so there's no need to test for it.
This also works around a crash reported by Arnd Hannemann, though I
think it's just a band-aid for that case.
Reported-by: Arnd Hannemann <hannemann@nets.rwth-aachen.de>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
When page alloc debugging is not enabled, we essentially accept any
virtual address for linear kernel TLB misses. But with kgdb, kernel
address probing, and other facilities we can try to access arbitrary
crap.
So, make sure the address we miss on will translate to physical memory
that actually exists.
In order to make this work we have to embed the valid address bitmap
into the kernel image. And in order to make that less expensive we
make an adjustment, in that the max physical memory address is
decreased to "1 << 41", even on the chips that support a 42-bit
physical address space. We can do this because bit 41 indicates
"I/O space" and thus covers non-memory ranges.
The result of this is that:
1) kpte_linear_bitmap shrinks from 2K to 1K in size
2) we need 64K more for the valid address bitmap
We can't let the valid address bitmap be dynamically allocated
once we start using it to validate TLB misses, otherwise we have
crazy issues to deal with wrt. recursive TLB misses and such.
If we're in a TLB miss it could be the deepest trap level that's legal
inside of the cpu. So if we TLB miss referencing the bitmap, the cpu
will be out of trap levels and enter RED state.
To guard against out-of-range accesses to the bitmap, we have to check
to make sure no bits in the physical address above bit 40 are set. We
could export and use last_valid_pfn for this check, but that's just an
unnecessary extra memory reference.
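As a C rendering of the idea, heavily simplified and with a hypothetical
per-bit granularity (the real check lives in the TLB miss assembly path):

#define MAX_RAM_BITS		41	/* bit 41 selects I/O space, RAM sits below it */
#define VALID_CHUNK_SHIFT	22	/* hypothetical: one bitmap bit per 4MB chunk  */

static int paddr_is_valid(unsigned long paddr, const unsigned long *valid_bitmap)
{
	if (paddr >> MAX_RAM_BITS)
		return 0;		/* a bit above bit 40 is set: reject outright */
	return test_bit(paddr >> VALID_CHUNK_SHIFT, valid_bitmap);
}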
On the plus side of all this, since we load all of these translations
into the special 4MB mapping TSB, and we check the TSB first for TLB
misses, there should be absolutely no real cost for these new checks
in the TLB miss path.
Reported-by: heyongli@gmail.com
Signed-off-by: David S. Miller <davem@davemloft.net>
* 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
clockevent: Prevent dead lock on clockevents_lock
timers: Drop write permission on /proc/timer_list
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86: Fix build with older binutils and consolidate linker script
x86: Fix an incorrect argument of reserve_bootmem()
x86: add vmlinux.lds to targets in arch/x86/boot/compressed/Makefile
xen: rearrange things to fix stackprotector
x86: make sure load_percpu_segment has no stackprotector
i386: Fix section mismatches for init code with !HOTPLUG_CPU
x86, pat: Allow ISA memory range uncacheable mapping requests
binutils prior to 2.17 can't deal with the currently possible
situation of a new segment following the per-CPU segment, but
that new segment being empty - objcopy misplaces the .bss (and
perhaps also the .brk) sections outside of any segment.
However, the current ordering of sections really just appears
to be the effect of cumulative unrelated changes; re-ordering
things allows us to easily guarantee that the segment following
the per-CPU one is non-empty, and at the same time eliminates the
need for the bogus data.init2 segment.
While touching this code, also use the various data section
helper macros from include/asm-generic/vmlinux.lds.h.
-v2: fix !SMP builds.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: <sam@ravnborg.org>
LKML-Reference: <4A94085D02000078000119A5@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* 'fixes' of git://git.marvell.com/orion:
[ARM] Orion NAND: Make asm volatile avoid GCC pushing ldrd out of the loop
[ARM] Kirkwood: enable eSATA on QNAP TS-219P
[ARM] Kirkwood: __init requires linux/init.h
This line looks suspicious, because if this is true, then the
'flags' parameter of function reserve_bootmem_generic() will be
unused when !CONFIG_NUMA. I don't think this is what we want.
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: akpm@linux-foundation.org
LKML-Reference: <20090821083709.5098.52505.sendpatchset@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Initialize PCI/PCIe on the QNAP TS-119, TS-219 and TS-219P hardware
allowing the use of the discrete eSATA controller connected to the PCIe
bus in the TS-219P.
Signed-off-by: John Holland <john.holland@cellent-fs.de>
Tested-by: Thomas Reitmayr <treitmayr@devbase.at>
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Include linux/init.h for __init to fix this error:
CC [M] drivers/net/wireless/wl12xx/boot.o
In file included from arch/arm/mach-kirkwood/include/mach/gpio.h:13,
from arch/arm/include/asm/gpio.h:5,
from include/linux/gpio.h:7,
from drivers/net/wireless/wl12xx/boot.c:24:
arch/arm/plat-orion/include/plat/gpio.h:32: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘orion_gpio_init’
make[6]: *** [drivers/net/wireless/wl12xx/boot.o] Error 1
make[5]: *** [drivers/net/wireless/wl12xx] Error 2
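A sketch of the fix (the declaration shown is illustrative, not the exact
prototype): the header that uses __init has to pull in linux/init.h itself
instead of relying on its includers.

#include <linux/init.h>			/* provides the __init annotation */

void __init orion_gpio_init(void);	/* now parses even when included early */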
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
setup_arch() unconditionally sets the preferred console to ttyS.
This breaks the use of 3270 devices as the console. Provide a new
function to set the default preferred console for s390. The preferred
console depends on the conmode parameter that is used to switch
between 3270 and 3215 terminal/console mode.
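A hedged sketch of the new helper's shape (function name and the condition
are hypothetical; the real code keys off the parsed conmode setting):

static void __init s390_set_preferred_console(void)
{
	if (console_uses_3270)		/* hypothetical flag derived from conmode= */
		add_preferred_console("tty3270", 0, NULL);
	else
		add_preferred_console("ttyS", 0, NULL);
}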
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
As noted in 83d349f35e ("x86: don't send
an IPI to the empty set of CPU's"), some APIC's will be very unhappy
with an empty destination mask. That commit added a WARN_ON() for that
case, and avoided the resulting problem, but didn't fix the underlying
reason for why those empty mask cases happened.
This fixes that by checking whether the result of 'cpumask_andnot()' with the
current CPU actually has any other CPU's left in the set of CPU's to be
sent a TLB flush, and by not calling down to the IPI code if the mask is
empty.
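A simplified sketch of that check (not the exact upstream hunk; assumes 'mm'
and 'va' are in scope):

	struct cpumask to_flush;

	if (cpumask_andnot(&to_flush, mm_cpumask(mm), cpumask_of(smp_processor_id())))
		flush_tlb_others(&to_flush, mm, va);	/* only when another CPU is left */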
The reason this started happening at all is that we started passing just
the CPU mask pointers around in commit 4595f9620 ("x86: change
flush_tlb_others to take a const struct cpumask"), and when we did that,
the cpumask was no longer thread-local.
Before that commit, flush_tlb_mm() used to create its own copy of
'mm->cpu_vm_mask' and pass that copy down to the low-level flush
routines after having tested that it was not empty. But after changing
it to just pass down the CPU mask pointer, the lower level TLB flush
routines would now get a pointer to that 'mm->cpu_vm_mask', and that
could still change - and become empty - after the test due to other
CPU's having flushed their own TLB's.
See
http://bugzilla.kernel.org/show_bug.cgi?id=13933
for details.
Tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The default_send_IPI_mask_logical() function uses the "flat" APIC mode
to send an IPI to a set of CPU's at once, but if that set happens to be
empty, some older local APIC's will apparently be rather unhappy. So
just warn if a caller gives us an empty mask, and ignore it.
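A simplified sketch of the guard (not the exact upstream hunk; assumes 'mask'
is the destination cpumask):

	if (WARN_ON(cpumask_empty(mask)))
		return;		/* empty destination: don't touch the APIC */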
This fixes a regression in 2.6.30.x, due to commit 4595f9620 ("x86:
change flush_tlb_others to take a const struct cpumask"), documented
here:
http://bugzilla.kernel.org/show_bug.cgi?id=13933
which causes a silent lock-up. It only seems to happen on PPro, P2, P3
and Athlon XP cores. Most developers sadly (or not so sadly, if you're
a developer..) have more modern CPU's. Also, on x86-64 we don't use the
flat APIC mode, so it would never trigger there even if the APIC didn't
like sending an empty IPI mask.
Reported-by: Pavel Vilim <wylda@volny.cz>
Reported-and-tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
Reported-and-tested-by: Martin Rogge <marogge@onlinehome.de>
Cc: Mike Travis <travis@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The absence of vmlinux.lds here keeps .vmlinux.lds.cmd from being
included, which in turn leads to it and all its dependents always
getting rebuilt independent of whether they are already up-to-date.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4A8D84670200007800010D31@vpn.id2.novell.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Make sure the stack-protector segment registers are properly set up
before calling any functions which may have stack-protection compiled
into them.
[ Impact: prevent Xen early-boot crash when stack-protector is enabled ]
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
load_percpu_segment() is used to set up the per-cpu segment registers,
which are also used for -fstack-protector. Make sure that the
load_percpu_segment() function doesn't have stackprotector enabled.
[ Impact: allow percpu setup before calling stack-protected functions ]
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
* 'next' of git://git.monstr.eu/linux-2.6-microblaze:
microblaze: Update Microblaze defconfigs
microblaze: Use klimit instead of _end for memory init
microblaze: Enable ppoll syscall
microblaze: Sane handling of missing timer/intc in device tree
microblaze: use the generic ack_bad_irq implementation
Currently clockevents_notify() is called with interrupts enabled at
some places and interrupts disabled at some other places.
This results in a deadlock in this scenario.
cpu A holds clockevents_lock in clockevents_notify() with irqs enabled
cpu B waits for clockevents_lock in clockevents_notify() with irqs disabled
cpu C does set_mtrr(), which tries to rendezvous all the cpus.
This results in C and A coming to the rendezvous point and waiting
for B. B is stuck forever waiting for the spinlock and thus never
reaches the rendezvous point.
Fix the clockevents code so that clockevents_lock is taken with
interrupts disabled and thus avoid the above deadlock.
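A sketch of the resulting locking rule (simplified; the notifier body is
elided):

void clockevents_notify(unsigned long reason, void *arg)
{
	unsigned long flags;

	spin_lock_irqsave(&clockevents_lock, flags);
	/* ... run the clockevents notifier chain ... */
	spin_unlock_irqrestore(&clockevents_lock, flags);
}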
Also call lapic_timer_propagate_broadcast() on the destination cpu so
that we avoid calling smp_call_function() in the clockevents notifier
chain.
This issue left us wondering if we need to change the MTRR rendezvous
logic to use stop machine logic (instead of smp_call_function) or add
a check in spinlock debug code to see if there are other spinlocks
which get taken under both interrupts enabled and disabled conditions.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
Cc: "Brown Len" <len.brown@intel.com>
LKML-Reference: <1250544899.2709.210.camel@sbs-t61.sc.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>