Factor out some generic infrastructure to assist in looking up struct clks
for the ARM and SH architectures, as the code is 99% identical between the
two. The arch-specific allocation code is kept in asm/clkdev.h as an
example.
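As a rough illustration of the resulting lookup interface (the device,
clock and function names below are made up, not taken from this patch):

static struct clk sample_clk;		/* hypothetical clock, normally with ops/rate set */

static struct clk_lookup sample_lookup = {
	.dev_id	= "sh-sci.0",		/* matched by clk_get() on that device */
	.con_id	= "peripheral_clk",	/* optional connection id */
	.clk	= &sample_clk,
};

void __init sample_clk_lookup_init(void)
{
	clkdev_add(&sample_lookup);	/* clk_get() on sh-sci.0 now resolves */
}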
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
op_name_from_perf_id() currently returns a local variable, which isn't
terribly productive. As we only handle a single PMU case for now, simply
allocate and free the string from the arch init/exit context and have
op_name_from_perf_id() hand back the cached string.
This also takes UTS_MACHINE into account, given that we build for
multiple architectures.
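A sketch of the approach with assumed names (and assuming UTS_MACHINE is
usable as a string define in this context); the "pmu" suffix below merely
stands in for the real PMU name:

static char *sh_oprofile_name;		/* cached once at init */

char *op_name_from_perf_id(void)
{
	return sh_oprofile_name;	/* hand back the cached string */
}

static int __init sh_oprofile_cache_name(void)	/* hypothetical init hook */
{
	sh_oprofile_name = kasprintf(GFP_KERNEL, "%s/%s", UTS_MACHINE, "pmu");
	return sh_oprofile_name ? 0 : -ENOMEM;
}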
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All 1st cut silicon in the wild has been replaced by the 2nd cut, so it's
safe to drop all of the 1st cut references and support.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch converts the remaining sh4-202 clocks
to use clkdev for lookup. The now unused name
and id from struct clk are also removed.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a detect_cpu_and_cache_system() -> cpu_probe() rename, tidies
up the unused return value, and stuffs it under __cpuinit in preparation
for CPU hotplug.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with the lookup aliases in place there is no longer any need to
provide the clock string; kill it off for all legacy CPG CPUs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that dev_name() can be used early, we no longer require a static
string. Kill off all of the superfluous timer names.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Constify struct sysfs_ops.
This is part of the ops structure constification
effort started by Arjan van de Ven et al.
Benefits of this constification:
* prevents modification of data that is shared
(referenced) by many other structure instances
at runtime
* detects/prevents accidental (but not intentional)
modification attempts on archs that enforce
read-only kernel data at runtime
* potentially better optimized code as the compiler
can assume that the const data cannot be changed
* the compiler/linker moves const data into .rodata
and therefore excludes it from false sharing
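For illustration, a sysfs_ops instance that can now live in .rodata (the
attribute handlers here are hypothetical stubs):

static ssize_t example_show(struct kobject *kobj, struct attribute *attr,
			    char *buf)
{
	return 0;	/* stub */
}

static ssize_t example_store(struct kobject *kobj, struct attribute *attr,
			     const char *buf, size_t count)
{
	return count;	/* stub */
}

/* Previously "static struct sysfs_ops"; now the ops table is read-only. */
static const struct sysfs_ops example_sysfs_ops = {
	.show	= example_show,
	.store	= example_store,
};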
Signed-off-by: Emese Revfy <re.emese@gmail.com>
Acked-by: David Teigland <teigland@redhat.com>
Acked-by: Matt Domsch <Matt_Domsch@dell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Both the store queue API and the PMB remapping take unsigned long for
their pgprot flags, which cuts off the extended protection bits. In the
case of the PMB this isn't really a problem, since the cache attribute
bits that we care about are all in the lower 32 bits, but we convert it
to pgprot_t anyway just to be safe. The store queue remapping, on the
other hand, depends on the extended prot bits for enabling userspace
access to the mappings.
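After the conversion the store queue prototype looks roughly like this
(the PMB interface gains an equivalent pgprot_t parameter):

/* Previously the last parameter was "unsigned long flags". */
unsigned long sq_remap(unsigned long phys, unsigned int size,
		       const char *name, pgprot_t prot);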
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The old ctrl in/out routines are non-portable and unsuitable for
cross-platform use. While drivers/sh has already been sanitized, there
is still quite a lot of code that is not. This converts the arch/sh/ bits
over, which permits us to flag the routines as deprecated whilst still
building with -Werror for the architecture code, and to ensure that
future users are not added.
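The conversion itself is mechanical; a typical change looks something like
the following (the register name and bit are made up for the example):

static void example_set_bit(void)
{
	unsigned long tmp;

	/* Before: SH-only accessors, now flagged as deprecated
	 *	tmp = ctrl_inl(EXAMPLE_REG);
	 *	ctrl_outl(tmp | 0x1, EXAMPLE_REG);
	 */

	/* After: generic raw MMIO accessors */
	tmp = __raw_readl(EXAMPLE_REG);
	__raw_writel(tmp | 0x1, EXAMPLE_REG);
}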
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Usually we can look to the CVR to work out whether we have an FPU or not.
Unfortunately not all parts comply with this, so just set the flag
manually for all SH-4 parts and clear it on the only SH-4 that doesn't
have one (SH4-501).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The mass produced cuts use an updated PVR value, add them to the list.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the x86 xstate changes and implements a task_xstate slab
cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.
This also tidies up and consolidates some of the SH-2A/SH-4 FPU
fragmentation. Now fpu state restorers are commonly defined, with the
init_fpu()/fpu_init() mess reworked to follow the x86 convention.
The fpu_init() register initialization has been replaced by xstate setup
followed by writing out to hardware via the standard restore path.
As init_fpu() now performs a slab allocation, a secondary lighter-weight
restorer is also introduced for the context switch.
In the future the DSP state will be rolled in here, too.
More work remains for math emulation and the SH-5 FPU, which presently
uses its own special (UP-only) interfaces.
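A rough sketch of the slab cache setup, following the x86 xstate approach
(struct and function names here are based on the description above and may
not match the code exactly):

#include <linux/slab.h>
#include <asm/processor.h>

struct kmem_cache *task_xstate_cachep;
unsigned int xstate_size;

void __init init_thread_xstate(void)
{
	/* Size the cache for whichever FPU variant this part has;
	 * an FPU-less configuration would size it smaller still. */
	if (boot_cpu_data.flags & CPU_HAS_FPU)
		xstate_size = sizeof(struct sh_fpu_hard_struct);
	else
		xstate_size = sizeof(struct sh_fpu_soft_struct);

	task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
					       __alignof__(unsigned long long),
					       SLAB_PANIC | SLAB_HWCACHE_ALIGN,
					       NULL);
}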
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch breaks out the sh4 scif serial port platform
data from a shared platform device to one platform
device per port. Also, add serial ports to the list of
early platform devices.
While at it, get rid of the R2D ifdef in the processor
code and adjust the defconfigs to use ttySC1.
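For illustration, one port ends up looking roughly like this (the base
address and IRQ numbers below are placeholders, not real sh4 values):

#include <linux/platform_device.h>
#include <linux/serial_sci.h>

static struct plat_sci_port scif0_platform_data = {
	.mapbase	= 0xffe00000,		/* placeholder base */
	.flags		= UPF_BOOT_AUTOCONF,
	.type		= PORT_SCIF,
	.irqs		= { 40, 40, 40, 40 },	/* placeholder IRQs */
};

static struct platform_device scif0_device = {
	.name		= "sh-sci",
	.id		= 0,			/* one device per port */
	.dev		= {
		.platform_data	= &scif0_platform_data,
	},
};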
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
A number of small optimisations to FPU handling, in particular:
- move the task USEDFPU flag from the thread_info flags field (which
is accessed asynchronously to the thread) to a new status field,
which is only accessed by the thread itself. This allows locking to
be removed in most cases, or reduced to simply disabling preemption.
This mimics the i386 behaviour.
- move the modification of regs->sr and thread_info->status flags out
of save_fpu() to __unlazy_fpu(). This gives the compiler a better
chance to optimise things, as well as making save_fpu() symmetrical
with restore_fpu() and init_fpu().
- implement prepare_to_copy(), so that when creating a thread, we can
unlazy the FPU prior to copying the thread data structures.
Also make sure that the FPU is disabled while in the kernel, in
particular while booting, and for newly created kernel threads.
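A minimal sketch of the reworked __unlazy_fpu() described above, using the
new thread_info status field (exact names are assumptions here):

static inline void __unlazy_fpu(struct task_struct *tsk, struct pt_regs *regs)
{
	if (task_thread_info(tsk)->status & TS_USEDFPU) {
		task_thread_info(tsk)->status &= ~TS_USEDFPU;
		save_fpu(tsk);		/* now only saves the registers */
		release_fpu(regs);	/* set SR.FD so the FPU traps again */
	}
}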
In a very artificial benchmark, the execution time for 2500000
context switches was reduced from 50 to 45 seconds.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
SH port of the sLeAZY-fpu feature currently implemented for some
architectures such as i386.
Right now the SH kernel has a 100% lazy FPU behaviour.
This is of course great for applications that have very sporadic or no FPU use.
However, for very frequent FPU users you take an extra trap every context
switch.
The patch below adds a simple heuristic to this code: after 5 consecutive
context switches with FPU use, the lazy behaviour is disabled and the
context gets restored on every context switch.
After 256 switches, this is reset and the 100% lazy behaviour is restored.
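A minimal sketch of the heuristic follows; the names (fpu_counter,
fpu_state_restore()) mirror the i386 implementation and are assumptions
here, not necessarily the exact SH code:

/* Called from the FPU-disabled trap handler once the state has been
 * restored.  fpu_counter is an unsigned char, so it wraps back to 0
 * after 256 uses and the task falls back to fully lazy handling. */
static inline void fpu_mark_used(struct task_struct *tsk)
{
	tsk->fpu_counter++;
}

/* Called from __switch_to(): after 5 consecutive switches with FPU use,
 * preload the FPU state instead of waiting for the trap. */
static inline void fpu_maybe_preload(struct task_struct *next)
{
	if (next->fpu_counter > 5)
		fpu_state_restore(task_pt_regs(next));	/* assumed helper */
}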
Tests with LMbench showed no regression.
I saw a little improvement due to the prefetching (~2%).
The tests below also show that, with this sLeAZY patch, the number of FPU
exceptions is indeed reduced.
To test this, I hacked the lat_ctx benchmark in LMbench to use the FPU a
little more.
sLeAZY implementation
===========================================
switch_to calls            |  79326
sleazy calls               |  42577
do_fpu_state_restore calls |  59232
restore_fpu calls          |  59032
Exceptions: 0x800 (FPU disabled):  16604
100% lazy (default implementation)
===========================================
switch_to calls            |  79690
do_fpu_state_restore calls |  53299
restore_fpu calls          |  53101
Exceptions: 0x800 (FPU disabled):  53273
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is superfluous, as the default CPU type and family are already
established by the initial cpuinfo definition. Given that we are still
able to probe for the CPU family even if we are not able to detect the
subtype, it's preferable to let the probing code fill out what it can and
leave the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a family member to struct sh_cpuinfo, which allows us to fall
back more on the probe routines to work out what sort of subtype we are
running on. This will be used by the CPU cache initialization code in
order to first do family-level initialization, followed by subtype-level
optimizations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the processor platform device setup
functions from __initcall() and sometimes
device_initcall() to arch_initcall().
This makes sure that the platform devices are
registered a bit earlier so the devices are
available when drivers register using initcall
levels earlier than device_initcall().
A good example is the platform devices needed by
i2c-sh_mobile.c, which registers a bit earlier
using subsys_initcall().
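For illustration, the conversion amounts to something like the following
(the device and setup function names are placeholders):

#include <linux/init.h>
#include <linux/platform_device.h>

static struct platform_device example_device = {
	.name	= "example-dev",	/* placeholder */
	.id	= -1,
};

static struct platform_device *sh_example_devices[] __initdata = {
	&example_device,
};

static int __init sh_example_devices_setup(void)
{
	return platform_add_devices(sh_example_devices,
				    ARRAY_SIZE(sh_example_devices));
}
arch_initcall(sh_example_devices_setup);	/* was __initcall()/device_initcall() */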
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up the boot_cpu_data.flags probing on SH-4A. All SH-4A parts
have a few flags in common, which we can simply set outright rather than
having each subtype set the same flags. We can also make assumptions about
cache ways and the validity of PTEA, so this also kills off CPU_HAS_PTEA
as a config option. There was also a bug in the FPU probing, which is now
tidied up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up the L2 probing, as an L2 may or may not be present on a
given part, regardless of whether the core supports one. This converts the
CVR validity checks from BUG_ON()s to simply clearing the CPU_HAS_L2_CACHE
flag and moving on with life.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add the CPU_HAS_L2_CACHE flag to SH7724.
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves out the old legacy CPG clocks to their own file, and converts
over the existing users. With these clocks going away and each CPU
dealing with them on their own, CPUs can gradually move over to the new
interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
For consistent naming, and to allow us to fix up some confusion in the
SH-Mobile clock framework, amongst other places.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds INTC tables for sh4-202 with support
for HUDI, TMU0, TMU1, TMU2, RTC, SCIF and WDT.
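Purely as an illustration of the table shape (the vector numbers below are
placeholders, not values taken from the sh4-202 manual):

enum { UNUSED = 0, HUDI, TMU0, TMU1, TMU2, RTC, SCIF, WDT };

static struct intc_vect vectors[] __initdata = {
	INTC_VECT(HUDI, 0x600),  INTC_VECT(TMU0, 0x400),
	INTC_VECT(TMU1, 0x420),  INTC_VECT(TMU2, 0x440),
	INTC_VECT(RTC, 0x480),   INTC_VECT(SCIF, 0x700),
	INTC_VECT(WDT, 0x560),
};

static DECLARE_INTC_DESC(intc_desc, "sh4-202", vectors, NULL,
			 NULL, NULL, NULL);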
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds TMU platform data for sh4-202. Both clockevent
and clocksource support is enabled.
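Roughly what one channel's platform data looks like; the base address, IRQ
number, offset and rating below are placeholders rather than sh4-202
values:

#include <linux/platform_device.h>
#include <linux/sh_timer.h>

static struct sh_timer_config tmu0_platform_data = {
	.channel_offset		= 0x04,
	.timer_bit		= 0,
	.clk			= "peripheral_clk",
	.clockevent_rating	= 200,	/* or .clocksource_rating */
};

static struct resource tmu0_resources[] = {
	{
		.start	= 0xffd80008,	/* placeholder base */
		.end	= 0xffd80013,
		.flags	= IORESOURCE_MEM,
	}, {
		.start	= 16,		/* placeholder IRQ */
		.flags	= IORESOURCE_IRQ,
	},
};

static struct platform_device tmu0_device = {
	.name		= "sh_tmu",
	.id		= 0,
	.dev		= {
		.platform_data	= &tmu0_platform_data,
	},
	.resource	= tmu0_resources,
	.num_resources	= ARRAY_SIZE(tmu0_resources),
};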
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
CPUs registering on-chip clocks should be using arch_clk_init() with the
new scheme so that the CPUs have the opportunity to establish the
topology prior to the initial root clock rate propagation. This ensures
that CPUs with on-chip clocks that use CLK_ENABLE_ON_INIT are properly
enabled at the initial propagation time, without having to further poke
the root clocks.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up the set_rate hack that the on-chip clocks were abusing to
trigger rate propagation, which is now handled generically.
Additionally, now that CLK_ENABLE_ON_INIT is wired up where it needs to
be for these clocks, the clk_enable() can go away. In some cases this was
bumping up the refcount higher than it should have been.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There is no real distinction in behaviour here: either a clock needs to be
enabled on initialization or it does not. The ALWAYS_ENABLED flag was always
intended to only apply to clocks that were physically always on and could
simply not be disabled at all from software. Unfortunately over time this
was abused and the meaning became a bit blurry.
So, we kill off all of those paths now, as well as the newer NEEDS_INIT
flag, and consolidate on a single CLK_ENABLE_ON_INIT flag. Clocks that
need to be enabled on initialization can set this, and it will purposely
enable them and bump up the refcount.
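As a simple illustration (the clock, ops and parent names are invented),
such a clock just carries the flag in its definition:

static struct clk_ops example_clk_ops;		/* hypothetical ops */
static struct clk example_peripheral_clk;	/* hypothetical parent */

static struct clk example_rtc_clk = {
	.ops	= &example_clk_ops,
	.parent	= &example_peripheral_clk,
	.flags	= CLK_ENABLE_ON_INIT,	/* enabled and refcounted at init */
};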
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Update intc tables and platform data to use one linux irq
per maskable interrupt source instead of keeping the one-to-one
mapping between vectors and linux irqs.
This fixes potential irq masking issues for sh7760 hardware
blocks such as DMAC/TMU2/REF.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds TMU platform data for sh7760. Both clockevent
and clocksource support is enabled.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds TMU platform data for sh775x. Both clockevent
and clocksource support is enabled.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements initial support for the SH-Mobile R2R CPU.
Based on Rev 0.11 of the initial SH7724 hardware manual.
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Forcing direct-mapped worked on certain older 2-way set associative
parts, but was always error prone on 4-way parts. As these are the
norm these days, there is not much point in continuing to support this
mode. Most of the folks that used direct-mapped mode generally just
wanted writethrough caching in the first place..
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds support for extended ASIDs (up to 16 bits) on newer SH-X3 cores
that implement the PTAEX register and respective functionality. Presently
this is only the 65nm SH7786 (the 90nm part only supports legacy 8-bit ASIDs).
The main change is in how the PTE is written out when loading the entry
in to the TLB, as well as in how the TLB entry is selectively flushed.
While SH-X2 extended mode splits out the memory-mapped U and I-TLB data
arrays for extra bits, extended ASID mode splits out the address arrays.
While we don't use the memory-mapped data array access, the address
array accesses are necessary for selective TLB flushes, so these are
newly implemented and replace the generic SH-4 implementation.
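For reference, a minimal sketch of a selective flush through the UTLB
address array in the plain (single-array, 8-bit ASID) layout; the
constants are the generic SH-4 ones and the function name is made up:

static void sketch_flush_utlb_one(unsigned long asid, unsigned long page)
{
	unsigned long addr, data;

	/* Associative write: the MMU locates the matching entry itself. */
	addr = MMU_UTLB_ADDRESS_ARRAY | MMU_PAGE_ASSOC_BIT;
	data = page | asid;	/* VPN in the upper bits, ASID below */
	__raw_writel(data, addr);
}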
With this, TLB flushes in switch_mm() are almost non-existent on newer
parts.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add Suspend-to-disk / swsusp / CONFIG_HIBERNATION support
to the SuperH architecture.
To suspend, use "swapon /dev/sda2; echo disk > /sys/power/state"
To resume, pass "resume=/dev/sda2" on the kernel command line.
The patch "pm: rework includes, remove arch ifdefs V2" is
needed to allow the generic swsusp code to build properly.
Hibernation is not enabled with this patch, though; a patch
setting ARCH_HIBERNATION_POSSIBLE will be submitted later.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds preliminary support for the SH7786 CPU subtype.
While this is a dual-core CPU, only UP is supported for now. L2 cache
support is likewise not yet implemented.
More information on this particular CPU subtype is available at:
http://www.renesas.com/fmwk.jsp?cnt=sh7786_root.jsp&fp=/products/mpumcu/superh_family/sh7780_series/sh7786_group/
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Update intc tables and platform data to use one linux irq
per maskable interrupt source instead of keeping the one-to-one
mapping between vectors and linux irqs.
This fixes potential irq masking issues for sh775x hardware
blocks such as SCI/SCIF/RTC/DMAC/TMU2/REF.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes a bug in the FPU exception handler for the FCNVDS instruction.
To get the register number, the instruction is shifted right by 9, though
it should be shifted right by 8.
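Illustrative only (not the exact kernel code): FCNVDS DRm,FPUL is encoded
as 1111mmm0 10111101, so the even FR-bank index of the source register
occupies bits 8-11 of the instruction word:

static inline unsigned int fcnvds_src_fr(unsigned short finsn)
{
	/* Shifting by 9 halves the field and selects the wrong register. */
	return (finsn >> 8) & 0xf;	/* even FR index of DRm */
}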
More information at ST Linux bugzilla:
https://bugzilla.stlinux.com/show_bug.cgi?id=4892
Signed-off-by: Giuseppe Di Giore <giuseppe.di-giore@st.com>
Signed-off-by: Carmelo Amoroso <carmelo.amoroso@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>