This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the STACK_CHECK() code is called multiple times, although it is
only necessary from the mcount entry. The code still attempts to treat the
nop case as an ftrace path, resulting in superfluous code flow for the case
where ftrace is disabled. Finally, this also fixes up references to a few
undefined symbols when FUNCTION_TRACER=n.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the closest reference to function_trace_stop is within a
CONFIG_STACK_DEBUG block. When this is turned off, the build bails out
with a 'pcrel too far' error. Reorder things a bit to handle the various
combinations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a general CONFIG_MCOUNT in order to permit mcount generation
without ftrace support. This is primarily for allowing platforms to
enable aggressive stack overflow checking without having to enable ftrace
support. Based on the sparc64 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add both dynamic and static function graph tracer support for sh.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Enable kernel stack checking code in both the dynamic ftrace and mcount
code paths. Check the stack to see if it's overflowing and make sure
that the stack pointer contains an address that's either in init_stack
or after the bss.
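Roughly, in C terms, the check amounts to something like the following
(an illustrative sketch only; the real check is done in the SH assembly
entry paths, and the exact symbols used there may differ):

#include <asm/thread_info.h>		/* for THREAD_SIZE */

extern unsigned long init_stack[];	/* the init task's stack */
extern char _ebss[];			/* end of the kernel .bss */

static inline int stack_in_bounds(unsigned long sp)
{
	unsigned long lo = (unsigned long)init_stack;
	unsigned long hi = lo + THREAD_SIZE;

	/* sp must lie within init_stack or above the end of the bss */
	return (sp >= lo && sp < hi) || sp >= (unsigned long)_ebss;
}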
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that I've added TIF_SYSCALL_FTRACE, the thread flags no longer fit
into a single byte. Code testing them now needs to be aware of the upper
and lower bytes.
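In C terms the difference is roughly the following (an illustrative
sketch; the bit numbers and the helper are placeholders, not the real
TIF_* layout):

#define TIF_SYSCALL_TRACE	0	/* placeholder: sits in bits 0-7 */
#define TIF_SYSCALL_FTRACE	8	/* placeholder: spills into bits 8-15 */

#define _TIF_SYSCALL_TRACE	(1UL << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_FTRACE	(1UL << TIF_SYSCALL_FTRACE)
#define _TIF_WORK_SYSCALL_MASK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_FTRACE)

static inline int syscall_work_pending(unsigned long flags)
{
	/* A single byte-wide test (as the old entry code did) only sees
	 * bits 0-7 and would miss _TIF_SYSCALL_FTRACE; both halves of
	 * the low word now have to be checked. */
	unsigned int lo = flags & 0xff;
	unsigned int hi = (flags >> 8) & 0xff;

	return (lo & (_TIF_WORK_SYSCALL_MASK & 0xff)) ||
	       (hi & ((_TIF_WORK_SYSCALL_MASK >> 8) & 0xff));
}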
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Enable HAVE_FUNCTION_TRACE_MCOUNT_TEST and test the value of
function_trace_stop from our assembly code as opposed to using the
generic C function. This should optimise our mcount/ftrace code path.
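Without HAVE_FUNCTION_TRACE_MCOUNT_TEST, the generic code performs the
test in C along roughly these lines before invoking the real trace
function, so doing it in the mcount assembly avoids the extra indirection
(simplified sketch, not the literal ftrace code):

extern int function_trace_stop;

static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
{
	if (function_trace_stop)
		return;			/* tracing suspended, do nothing */

	/* ...otherwise hand off to the registered trace function... */
}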
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In rare circumstances csum_partial() can be called with data which is
not 16 or 32 bit aligned. This has been observed with RPC calls for NFS
file systems, for example. Add support for handling this without resorting
to the misaligned fixup code (which is why this hasn't been seen as a
problem). This mimics the i386 version, which has had this support for
some time.
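For reference, the technique in C looks roughly like the generic
lib/checksum.c approach: peel off the misaligned head bytes, do the bulk
of the work on 32-bit words, then fix up the byte rotation at the end.
The sketch below is illustrative only (little-endian assumed) -- the
actual change is in the SH assembly:

static unsigned int csum_sketch(const unsigned char *buff, int len)
{
	unsigned int result = 0;
	int odd;

	if (len <= 0)
		return 0;

	/* Peel off a leading byte so the pointer is at least 16-bit aligned. */
	odd = 1 & (unsigned long)buff;
	if (odd) {
		result = *buff << 8;	/* little-endian placement */
		len--;
		buff++;
	}

	/* Peel off a leading halfword to reach 32-bit alignment. */
	if (len >= 2 && (2 & (unsigned long)buff)) {
		result += *(const unsigned short *)buff;
		len -= 2;
		buff += 2;
	}

	/* Bulk of the work: 32-bit words with end-around carry. */
	while (len >= 4) {
		unsigned int w = *(const unsigned int *)buff;

		result += w;
		if (result < w)		/* fold the carry straight back in */
			result++;
		buff += 4;
		len -= 4;
	}
	result = (result & 0xffff) + (result >> 16);

	/* Trailing halfword and byte. */
	if (len & 2) {
		result += *(const unsigned short *)buff;
		buff += 2;
	}
	if (len & 1)
		result += *buff;

	/* Fold to 16 bits and undo the rotation from the odd start. */
	result = (result & 0xffff) + (result >> 16);
	result = (result & 0xffff) + (result >> 16);
	if (odd)
		result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);

	return result;
}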
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We can't share code for udivsi3 and udivsi3_i4, because they
have different clobber lists. Copy udivsi3 from gcc-4.1.2.
As shown in arch/sh/lib/udivsi3.S (and -Os.S),
.global __udivsi3_i4i
.global __udivsi3_i4
.global __udivsi3
__udivsi3_i4i:
...
Three symbols share one piece of code, which is actually udivsi3_i4i.
But this results in unwanted code with gcc 4.1.
In gcc, these three are treated as pseudo instructions that have
their own clobber lists apart from the usual calling convention.
According to sh's machine description, the clobber lists are as
follows:
- udivsi3_i4i : t,r1,pr,mach,macl
- udivsi3_i4 : t,r0,r1,r4,r5,pr,dr0,dr2,dr4
- udivsi3 : t,r4,pr
The caller of udivsi3 will be left with a broken r1 and mac*.
gcc-4.1.x and older (at least back to 3.4) generate udivsi3.
ST's gcc-4.1.1 seems to be OK because it has _i4i.
Signed-off-by: Takashi YOSHII <yoshii.takashi@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves in the necessary libgcc bits for SUPERH32 and drops the
libgcc linking for the regular targets. This in turn allows us to rip
out quite a few hacks both in sh_ksyms_32 and arch/sh/Makefile.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The __copy_user function can corrupt the stack if the length of the data
is non-trivial and either of the first two move instructions causes an
exception. This is because the fixup for these two instructions is mapped
to the no_pop case, but these instructions execute after the stack is
pushed.
This change creates an explicit NO_POP exception mapping macro, and uses
it for the two instructions executed in the trivial case where no stack
pushes occur.
More information at ST Linux bugzilla:
https://bugzilla.stlinux.com/show_bug.cgi?id=4824
Signed-off-by: Dylan Reid <dylan_reid@bose.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were doing largely bogus things and using the wrong type for
the address. Bring these in line with the ARM definitions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently these are restricted to SH-3 and SH-4, so we reorder the
ifdefs a bit to let other parts use them as well.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These should be returning a uint32_t, whereas they were erroneously
returning a u64 before. As the registers are only 32 bits wide, the
wider return type doesn't make a lot of sense.
Reported-by: Katsuya MATSUBARA <matsu@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Currently the xloops calculation in ndelay() comes out as 0 when
calculated with HZ=250. Fix up how we do the HZ factoring in order
to get this right for differing HZ values.
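As an illustration of the truncation involved (the constants and names
below are made up and do not reflect the actual arch/sh delay code):
dividing the scale factor down before multiplying by the nanosecond
count rounds it to zero, while multiplying first keeps the value.

#include <stdio.h>

#define HZ		250
#define LOOPS_PER_JIFFY	49932UL		/* arbitrary example value */

int main(void)
{
	unsigned long nsecs = 500;
	unsigned long ns_per_jiffy = 1000000000UL / HZ;	/* 4000000 */

	/* Dividing first rounds the per-ns loop count down to zero,
	 * so the resulting delay is no delay at all. */
	unsigned long bad = (LOOPS_PER_JIFFY / ns_per_jiffy) * nsecs;

	/* Multiplying first with a 64-bit intermediate keeps the value. */
	unsigned long good = (unsigned long long)nsecs * LOOPS_PER_JIFFY
				/ ns_per_jiffy;

	printf("bad=%lu good=%lu\n", bad, good);	/* bad=0 good=6 */
	return 0;
}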
Signed-off-by: kogiidena <kogiidena@eggplant.ddo.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Previously we've been handling udivdi3 references and wrapping
them into div64_32() automatically. This doesn't get a lot of
use, however, and as akpm noted in the recent thread on l-k:
http://lkml.org/lkml/2007/2/27/241
we're better off simply ripping it out and going the do_div()
route if there happen to be any places that need it.
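For reference, a sketch of the do_div() idiom that callers are expected
to use instead; do_div() divides the 64-bit dividend in place by a
32-bit divisor and returns the remainder (the helper function here is
only an example):

#include <linux/types.h>
#include <asm/div64.h>

/* Example only: convert a 64-bit tick count without pulling in the
 * 64-bit division helpers from libgcc. */
static u32 ticks_to_units(u64 ticks, u32 rate)
{
	u32 rem;

	rem = do_div(ticks, rate);	/* ticks now holds the quotient */
	(void)rem;			/* remainder is available if needed */

	return (u32)ticks;
}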
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's a bug in the Hitachi SuperH csum_partial_copy_generic()
implementation. If the supplied length is 1 (and several alignment
conditions are met), the function immediately branches to label 4.
However, the assembly at label 4 expects the length to be stored in
register r2. Since this has not occurred, subsequent behavior is
undefined.
This can cause bad payload checksums in TCP connections.
I've fixed the problem by initializing register r2 prior to the branch
instruction.
Signed-off-by: Ollie Wild <aaw@rincewind.tv>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Minor sign-extension bug in the SH-specific memset().
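For illustration, the class of bug looks like this in C (the real fix is
in the SH assembly; this is not the actual code): building a word-wide
fill pattern from a sign-extended fill byte gives the wrong pattern for
values >= 0x80.

#include <stdio.h>

int main(void)
{
	int c = (signed char)0xa5;	/* sign-extended to 0xffffffa5 */

	unsigned int v = (unsigned int)c;
	unsigned int bad = v | (v << 8) | (v << 16) | (v << 24);
	/* bad == 0xffffffa5, not the intended fill pattern */

	unsigned int b = (unsigned int)c & 0xff;	/* mask the byte first */
	unsigned int good = b | (b << 8) | (b << 16) | (b << 24);
	/* good == 0xa5a5a5a5 */

	printf("bad=0x%08x good=0x%08x\n", bad, good);
	return 0;
}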
Signed-off-by: Toshinobu Sugioka <sugioka@itonet.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch implements a number of smp_processor_id() cleanup ideas that
Arjan van de Ven and I came up with.
The previous __smp_processor_id/_smp_processor_id/smp_processor_id API
spaghetti was hard to follow both on the implementation side and on the
usage side.
Some of the complexity arose from picking wrong names, and some from
the fact that not all architectures defined __smp_processor_id.
In the new code, there are two externally visible symbols:
- smp_processor_id(): debug variant.
- raw_smp_processor_id(): nondebug variant. Replaces all existing
uses of _smp_processor_id() and __smp_processor_id(). Defined
by every SMP architecture in include/asm-*/smp.h.
There is one new internal symbol, dependent on DEBUG_PREEMPT:
- debug_smp_processor_id(): internal debug variant, mapped to
smp_processor_id().
Also, I moved debug_smp_processor_id() from lib/kernel_lock.c into a new
lib/smp_processor_id.c file. All related comments got updated and/or
clarified.
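In outline, the resulting mapping looks roughly like this (simplified,
not the literal definitions):

#ifdef CONFIG_DEBUG_PREEMPT
  /* debug variant: complains if used in a preemption-unsafe context */
# define smp_processor_id()	debug_smp_processor_id()
#else
# define smp_processor_id()	raw_smp_processor_id()
#endif

/* raw_smp_processor_id() itself comes from each SMP architecture's
 * smp.h and is simply 0 on UP builds. */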
I have build/boot tested the following 8 .config combinations on x86:
{SMP,UP} x {PREEMPT,!PREEMPT} x {DEBUG_PREEMPT,!DEBUG_PREEMPT}
I have also build/boot tested x64 on UP/PREEMPT/DEBUG_PREEMPT. (Other
architectures are untested, but should work just fine.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!