Add ARM support for the DMA debug infrastructure, which allows DMA
API usage to be debugged.
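As a rough illustration of the kind of misuse dma-debug catches (the
driver fragment below is hypothetical; only the DMA API calls are
real), a mapping that is never unmapped, or unmapped with a mismatched
size or direction, is reported once CONFIG_DMA_API_DEBUG is enabled:

#include <linux/dma-mapping.h>

static int example_tx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... start the transfer and wait for completion ... */

	/* Omitting this unmap, or passing a different size or
	 * direction than was used for the map, is exactly what
	 * dma-debug warns about. */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}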
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a sched_clock() implementation to Versatile Express using the new
sched_clock() infrastructure for extending 32bit counters to full
64-bit nanoseconds.
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow one-shot timer mode to be used with the TWD.  This allows
NOHZ mode to be used on SMP systems using the TWD localtimer.
Tested on Versatile Express and U8500.
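Roughly, supporting one-shot mode means advertising
CLOCK_EVT_FEAT_ONESHOT to the clockevents core and programming the
timer for a single expiry from set_next_event().  The sketch below is
illustrative only (register offsets and names are made up; this is
not the TWD driver):

#include <linux/clockchips.h>
#include <linux/io.h>

#define EXAMPLE_TIMER_COUNTER	0x04	/* hypothetical: current count */
#define EXAMPLE_TIMER_CTRL	0x08	/* hypothetical: enable/IRQ bits */

static void __iomem *example_timer_base;

static int example_set_next_event(unsigned long ticks,
				  struct clock_event_device *evt)
{
	/* One-shot: load the counter once, with auto-reload disabled. */
	writel(ticks, example_timer_base + EXAMPLE_TIMER_COUNTER);
	writel(0x1 | 0x4, example_timer_base + EXAMPLE_TIMER_CTRL);
	return 0;
}

static struct clock_event_device example_clkevt = {
	.name		= "example-localtimer",
	.features	= CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
	.set_next_event	= example_set_next_event,
	/* .set_mode, .rating, .cpumask etc. omitted for brevity */
};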
Tested-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Normally, different ARM platforms have different ways to decode the
IRQ hardware status and demultiplex to the corresponding IRQ handler.
This is highly optimized by the irq_handler macro in entry-armv.S, and
each machine defines its own macro to decode the IRQ number.  However,
this prevents multiple machine classes from being built into a single
kernel.
This can be solved by allowing each machine to specify its own handler
and making the function pointer 'handle_arch_irq' point to it at run
time.  CONFIG_MULTI_IRQ_HANDLER is introduced to allow both solutions
to work.
Compared with the highly optimized irq_handler macro, the new function
must be written with care so as not to lose too much performance.  The
IPI handling on SMP is expected to move into the provided arch IRQ
handler as well.
The assembly code to invoke handle_arch_irq was optimized by Russell
King.
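Sketched in C, the run-time hook looks roughly like this (the
acknowledge register offset and the init function are hypothetical,
and in the kernel the pointer itself is provided by the core entry
code; real machines assign their decoder from their IRQ init code):

#include <linux/init.h>
#include <linux/io.h>
#include <asm/irq.h>
#include <asm/ptrace.h>

/* Set once at boot; the common assembly IRQ entry indirects through
 * it.  Defined here only so the sketch is self-contained. */
void (*handle_arch_irq)(struct pt_regs *regs);

static void __iomem *example_intc_base;	/* hypothetical controller base */

static void example_handle_irq(struct pt_regs *regs)
{
	/* Hypothetical acknowledge register yielding the pending IRQ. */
	unsigned int irqnr = readl(example_intc_base + 0x0c);

	asm_do_IRQ(irqnr, regs);
}

static void __init example_init_irq(void)
{
	/* Instead of a per-machine assembler macro, point the common
	 * entry code at this machine's decoder. */
	handle_arch_irq = example_handle_irq;
}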
Signed-off-by: Eric Miao <eric.miao@canonical.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert versatile platforms to use the new sched_clock() infrastructure
for extending 32bit counters to full 64-bit nanoseconds.
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert orion platforms to use the new sched_clock() infrastructure for
extending 32bit counters to full 64-bit nanoseconds.
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert omap to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert iop platforms to use the new sched_clock() infrastructure for
extending 32bit counters to full 64-bit nanoseconds.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert u300 to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert tegra to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Tested-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert sa1100 to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert pxa to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert mmp to use the new sched_clock() infrastructure for extending
32bit counters to full 64-bit nanoseconds.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert ixp4xx to use the new sched_clock() infrastructure for
extending 32bit counters to full 64-bit nanoseconds.
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide common sched_clock() infrastructure for platforms to use to
create a 64-bit ns based sched_clock() implementation from a counter
running at a non-variable clock rate.
This implementation is based upon maintaining an epoch for the counter
and an epoch for the nanosecond time.  When we desire a sched_clock()
time, we calculate the number of counter ticks since the last epoch
update, convert this to nanoseconds and add it to the epoch
nanoseconds.  We regularly refresh these epochs within the counter
wrap interval: we perform the same calculation as above and store the
new epochs.
We read and write the epochs in such a way that sched_clock() can
easily (and locklessly) detect when an update is in progress, and
repeat the loading of these constants when they're known not to be
stable.  The one caveat is that sched_clock() must not be called in
the middle of an update; we achieve that by disabling IRQs around the
update.
Finally, if the clock rate is known at compile time, the counter to ns
conversion factors can be specified, allowing sched_clock() to be tightly
optimized. We ensure that these factors are correct by providing an
initialization function which performs a run-time check.
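The arithmetic can be sketched in plain C as follows (this is an
illustration of the scheme, not the actual implementation; the counter
read, the locking and the publication of updated epochs are simplified
away):

#include <stdint.h>

static uint32_t epoch_cyc;	/* counter value at the last update */
static uint64_t epoch_ns;	/* nanosecond time at the last update */

/* mult/shift chosen so that ns = cyc * mult >> shift; for a known
 * fixed clock rate these could be compile-time constants. */
static uint32_t mult = 1000;	/* example: 1 MHz counter -> 1000 ns/tick */
static uint32_t shift;

static uint64_t cyc_to_ns(uint32_t cyc)
{
	return ((uint64_t)cyc * mult) >> shift;
}

/* Reader: ticks since the epoch, converted and added to epoch_ns.
 * The 32-bit subtraction handles counter wrap naturally. */
uint64_t sketch_sched_clock(uint32_t now)
{
	return epoch_ns + cyc_to_ns(now - epoch_cyc);
}

/* Periodic update, called from a timer before the counter wraps;
 * the real code publishes the new pair so that concurrent readers
 * can detect a half-finished update and retry. */
void sketch_update_epoch(uint32_t now)
{
	epoch_ns = sketch_sched_clock(now);
	epoch_cyc = now;
}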
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Olof Johansson <olof@lixom.net>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds nanoEngine's PCI support.
Signed-off-by: Marcelo Roberto Jimenez <mroberto@cpti.cetuc.puc-rio.br>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* __fixup_smp_on_up has been modified with support for the
THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split
into halfwords in case of misalignment, since we can't rely on
unaligned accesses working before turning the MMU on.
No attempt is made to optimise the aligned case, since the
number of fixups is typically small, and it seems best to keep
the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support
CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity-check to ALT_UP() to ensure that
the content really is the right size (4 bytes).
(No check is done for ALT_SMP(). Possibly, this could be fixed
by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
ALT_SMP...SMP_UP_B) into two macros. In the first case,
ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
to macro limitations) has not been modified: the affected
instruction (mov) has no 16-bit encoding, so the correct
instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
smp_dmb @ uses W() to ensure 4-byte instructions for ALT_SMP()
This avoids assembly failures due to use of W() inside smp_dmb,
when assembling pure-ARM code in the vectors page.
There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on
(!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now
supported, but only if !BIG_ENDIAN.  (The fixup code for Thumb-2
currently assumes little-endian order.)
Tested using a single generic realview kernel on:
ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
ARM RealView PBX-A9 (SMP)
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores.  However,
the instructions for accessing CP0 and CP1 have changed in PJ4.  Add
more files to support iwmmxt on the PJ4 core.
Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
Because the nwfpe support is unlikely to be used on new platforms
and requires CONFIG_OABI_COMPAT, which is not generally used with
ARMv7+, we shouldn't expect to build nwfpe support into a Thumb-2
kernel.
At present, nwfpe contains assembly code which isn't Thumb-2
compatible, and for now it doesn't appear useful to port this
code.
All ARMv7-A/R platforms necessarily have VFPv3 hardware floating-
point natively, making emulation unnecessary.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This makes sense, because Thumb-2 code can't execute on plain
ARMv6 processors.
This avoids accidentally configuring a broken kernel in cases where
the config would otherwise allow multiple architecture versions to
coexist in the same kernel.
Not adding !CPU_V5 etc., because the chance of anyone trying to
put v5 and v7 in the same kernel is low, and I'm not aware of
any mach which can do this. These could be added later if it
matters.
Note that the rules may need to be refined if support for the
ARM1156J(F)-S processor is later added to the kernel, since this
processor supports the rare ARMv6T2 extensions, which add support
for Thumb-2 and a few other ARMv7 features.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It is kernel-wide policy that options depending on EXPERIMENTAL should
also have '(EXPERIMENTAL)' in their option text, and options with
'(EXPERIMENTAL)' depend on EXPERIMENTAL.
Ensure that all ARM options comply with this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Today, more boards with ARM CPUs have a selectable PCI bus.
This patch makes this more scalable and removes the line continuations
in Kconfig.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Hans Ulli Kroll <ulli.kroll@googlemail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Depending on the compiler version, ARM GCC calls the mcount function
either __gnu_mcount_nc or mcount.
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, the kprobes implementation for ARM only supports the ARM
instruction set, so it only works if CONFIG_THUMB2_KERNEL is not
enabled.
Until kprobes is updated to work with Thumb-2, turning it on will
cause horrible things to happen, so this patch disables it for now.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add CONFIG_CRASH_DUMP configuration option which is used by dump
capture kernels.
Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Factorise some generic infrastructure to assist with looking up struct
clks for the ARM and SH architectures.  As the code is 99% identical,
put the arch-specific allocation code in asm/clkdev.h as an example.
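For reference, a hedged sketch of typical clkdev usage (the clock and
device names are made up): the platform associates an existing struct
clk with a device name and optional connection id, and drivers resolve
it with clk_get().

#include <linux/init.h>
#include <linux/clk.h>
#include <linux/clkdev.h>

/*
 * Platform side: register a lookup for an existing clock.  struct clk
 * itself stays architecture-specific, which is why the allocation
 * hook lives in asm/clkdev.h.
 */
static void __init example_register_uart_clk(struct clk *uart_clk)
{
	struct clk_lookup *cl;

	/* dev_id "example-uart.0" and con_id NULL are made-up examples. */
	cl = clkdev_alloc(uart_clk, NULL, "example-uart.0");
	if (cl)
		clkdev_add(cl);
}

/*
 * Driver side: clkdev matches the lookup registered above against
 * dev_name(dev) and the connection id passed to clk_get().
 */
static struct clk *example_get_uart_clk(struct device *dev)
{
	return clk_get(dev, NULL);
}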
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This just adds ARCH_MSM_SCORPIONMP to allow SMP selection for
MSM. MSM is unique in that it doesn't enable SCU or TWD.
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
Add the options to enable the function graph tracer on ARM. Function
graph tracer support requires frame pointers, so exclude Thumb-2 and
also make sure FRAME_POINTER gets enabled when FUNCTION_GRAPH_TRACER is
used, since FUNCTION_TRACER doesn't "select FRAME_POINTER" when
ARM_UNWIND is used. Therefore, with GCC 4.4.0+, you get plain function
tracing without frame pointers, but you'll need them if you want
function graph tracing.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Presently each one of the CPUs manually selects the same feature set, and
there's a reasonable expectation that none of these will change for
future CPUs in the SH-Mobile / R-Mobile family, so we move those over to
the top-level ARCH_SHMOBILE.
While we're at it, all of the CPUs support optional GPIOs via the PFC,
do not have I/O ports, and expect sparse IRQ, so we bring the
configuration in line across the board.
This more or less brings the ARM-based parts in sync with their SH
counterparts.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>