Commit graph

23 commits

Author SHA1 Message Date
Paul Burton
5fac4f7ac0 MIPS: Select CONFIG_ARCH_USE_CMPXCHG_LOCKREF for MIPS64
On MIPS64 we have spinlocks that are 32 bits in size and an efficient
cmpxchg64 implementation, so we qualify to make use of cmpxchg-backed
lockrefs. Select the ARCH_USE_CMPXCHG_LOCKREF Kconfig symbol and provide
a trivial implementation of arch_spin_value_unlocked to satisfy the
lockref code.
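
For reference, a minimal sketch of what the trivial implementation can
look like, assuming the ticket-lock layout the MIPS arch_spinlock_t had
at the time:

    static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
    {
            /* a ticket lock is free when the ticket being served
             * equals the next ticket to be handed out */
            return lock.h.serving_now == lock.h.ticket;
    }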

Using Linus' simple testcase from
http://article.gmane.org/gmane.linux.file-systems/77466 on a dual-core
system with an in-development MIPS64 CPU running on FPGA, I see around
an 8% gain:

Pre-patch:
    Total loops: 252698
    Total loops: 251482
    Total loops: 250806
    Total loops: 252885
    Total loops: 251666

Post-patch:
    Total loops: 273728
    Total loops: 269932
    Total loops: 269341
    Total loops: 275004
    Total loops: 270208

[ralf@linux-mips.org: Fixed conflict.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: Maciej W. Rozycki <macro@codesourcery.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/10810/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-03 12:08:01 +02:00
Markos Chandras
9ff897c4e8 MIPS: spinlock: Adjust arch_spin_lock back-off time
Make it similar to the trylock and R10000_LLSC_WAR cases.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9789/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-24 14:57:46 +02:00
Leonid Yegoshin
6f6ed48265 MIPS: Replace smp_mb with release barrier function in unlocks.
Replace the smp_mb() in arch_write_unlock() and __clear_bit_unlock()
with a smp_mb__before_llsc() call, which provides "release" barrier
functionality.

It seems this was missed in commit f252ffd50c during the introduction
of "acquire" and "release" semantics.

[ralf@linux-mips.org: The original patch submission was labelled a fix
but actually it replaces a barrier with another, less restrictive type
of barrier, so it doesn't fix any ill behaviour but rather squeezes out
a tad better performance.  Further improvements will be possible once
smp_release() has been merged.]
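
A sketch of the resulting unlock path, simplified from the kernel
source of the time (only the barrier changes; the store is as before):

    static inline void arch_write_unlock(arch_rwlock_t *rw)
    {
            /* release barrier: orders the critical section before the
             * store below, cheaper than the full smp_mb() it replaces */
            smp_mb__before_llsc();
            __asm__ __volatile__(
            "       sw      $0, %0  \n"     /* clear the lock word */
            : "=m" (rw->lock)
            : "m" (rw->lock)
            : "memory");
    }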

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: benh@kernel.crashing.org
Cc: will.deacon@arm.com
Cc: linux-kernel@vger.kernel.org
Cc: markos.chandras@imgtec.com
Cc: macro@linux-mips.org
Cc: Steven.Hill@imgtec.com
Cc: alexander.h.duyck@redhat.com
Cc: davem@davemloft.net
Patchwork: https://patchwork.linux-mips.org/patch/10507/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:54:30 +02:00
Markos Chandras
518222161d MIPS: asm: spinlock: Fix addiu instruction for R10000_LLSC_WAR case
Commit 5753762cbd1c ("MIPS: asm: spinlock: Replace "sub" instruction
with "addiu"") replaced the "sub" instruction with "addiu", but it did
not update the immediate value in the R10000_LLSC_WAR case.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: 5753762cbd1c ("MIPS: asm: spinlock: Replace "sub" instruction with "addiu"")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9385/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-04-10 15:41:46 +02:00
Markos Chandras
5753762cbd MIPS: asm: spinlock: Replace "sub" instruction with "addiu"
"sub $reg, imm" is not a real MIPS instruction. The assembler can
replace that with "addi $reg, -imm". However, addi has been removed
from R6, so we replace the "sub" instruction with the "addiu" one.
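
A minimal sketch of the substitution, with illustrative operands rather
than the exact spinlock asm:

    /* pre-R6 assemblers expand the "sub" macro to addi, which R6
     * removed; addiu with a negative immediate works on both */
    static inline int dec_ticket(int x)
    {
            int y;
            __asm__("addiu  %0, %1, -1" : "=r" (y) : "r" (x));
            /* previously: "sub   %0, %1, 1" */
            return y;
    }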

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2015-02-17 15:37:23 +00:00
Markos Chandras
94bfb75ace MIPS: asm: Rename GCC_OFF12_ASM to GCC_OFF_SMALL_ASM
The GCC_OFF12_ASM macro is used for 12-bit immediate constraints,
but we will also use it for 9-bit constraints on MIPS R6, so we
rename it to something more appropriate.

Cc: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2015-02-17 15:37:21 +00:00
Maciej W. Rozycki
b0984c4370 MIPS: Fix microMIPS LL/SC immediate offsets
In the microMIPS encoding some memory access instructions have their
immediate offset reduced to 12 bits only.  That does not match the GCC
`R' constraint we use in some places to satisfy the requirement,
resulting in build failures like this:

{standard input}: Assembler messages:
{standard input}:720: Error: macro used $at after ".set noat"
{standard input}:720: Warning: macro instruction expanded into multiple instructions

Fix the problem by defining a macro, `GCC_OFF12_ASM', that expands to
the right constraint depending on whether microMIPS or standard MIPS
code is produced.  Also apply the fix where `m' is used: in the worst
case this change does nothing, e.g. where the pointer was already in a
register such as a function argument and no further offset was
requested, and in the best case it avoids an extraneous sequence of up
to two instructions to load the high 20 bits of the address in the
LL/SC loop.  This reduces the risk of lock contention, which grows with
the number of instructions in the critical section between LL and SC.

Strictly speaking we could just bulk-replace `R' with `ZC' as the latter
constraint adjusts automatically depending on the ISA selected.
However it was only introduced with GCC 4.9 and we keep supporting older
compilers for the standard MIPS configuration, hence the slightly more
complicated approach I chose.

The choice of a zero-argument function-like rather than an object-like
macro was made so that it does not look like a function call taking the
C expression used for the constraint as an argument.  This is so as not
to confuse the reader or formatting checkers like `checkpatch.pl' and
follows previous practice.
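
A sketch of the definition as described, assuming its placement in
asm/compiler.h (this is the macro later renamed to GCC_OFF_SMALL_ASM,
see above):

    #ifndef CONFIG_CPU_MICROMIPS
    /* standard MIPS: 16-bit offsets, `R' works with older compilers */
    #define GCC_OFF12_ASM() "R"
    #else
    /* microMIPS: `ZC' matches the reduced 12-bit offset range,
     * but requires GCC 4.9 or newer */
    #define GCC_OFF12_ASM() "ZC"
    #endif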

Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8482/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-11-24 07:45:36 +01:00
David Daney
3d39019a16 MIPS: Remove redundant instructions from arch_spin_{,try}lock.
We were doing:
   SRL  $r,$?,16
   ANDI $r,$r,0xffff

The logical right shift by 16 leaves the upper 16 bits clear, so the
subsequent masking out of those bits is redundant, and can safely be
removed.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2013-04-26 17:18:24 +02:00
Ralf Baechle
e01961ceea MIPS: Remove further use of .subsection
Commit 7837314d14 [MIPS: Get rid of branches to .subsections] removed
most uses of .subsection in inline assembler code.

It left the instances in spinlock.h alone because we knew their use was
in fairly small files where .subsection use was fine, but of course this
was a fragile assumption.  LTO breaks this assumption, resulting in build
errors due to exceeded branch range, so remove further instances of
.subsection.

The two functions that still use .macro don't currently cause issues;
however, this use is still fragile.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2013-04-11 15:39:52 +02:00
Ralf Baechle
7034228792 MIPS: Whitespace cleanup.
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with patches of this kind trickling
in forever.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2013-02-01 10:00:22 +01:00
Ralf Baechle
756cca61a7 MIPS: Microoptimize arch_{read,write}_lock
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2011-03-25 18:45:13 +01:00
David Daney
500c2e1fdb MIPS: Optimize spinlocks.
The current locking mechanism uses an ll/sc sequence to release a
spinlock.  This is slower than a wmb() followed by a store to unlock.

The branching forward to .subsection 2 on sc failure slows down the
contended case.  So we get rid of that part too.

Since we are now working on naturally aligned u16 values, we can get
rid of a masking operation as the LHU already does the right thing.
The ANDI operations are reversed for better scheduling on multi-issue CPUs.

On a 12 CPU 750MHz Octeon cn5750 this patch improves ipv4 UDP packet
forwarding rates from 3.58*10^6 PPS to 3.99*10^6 PPS, or about 11%.
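
A sketch of the resulting unlock fast path; the field and helper names
assume the ticket-lock layout and helpers of the kernel source at the
time:

    static inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
            unsigned int serving_now = lock->h.serving_now + 1;

            /* order the critical section before the releasing store */
            wmb();
            /* a plain halfword store hands the lock to the next ticket
             * holder; no ll/sc retry loop is needed here */
            lock->h.serving_now = (u16)serving_now;
            nudge_writes();
    }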

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/937/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-02-27 12:53:42 +01:00
David Daney
f252ffd50c MIPS: New macro smp_mb__before_llsc.
Replace some instances of smp_llsc_mb() with a new macro
smp_mb__before_llsc().  It is used before ll/sc sequences that are
documented as needing write barrier semantics.

The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
so there are no changes in semantics.

Also simplify definition of smp_mb(), smp_rmb(), and smp_wmb() to be just
barrier() in the non-SMP case.
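
A sketch of the definitions this implies:

    /* default: same strength as the old code, so no semantic change */
    #define smp_mb__before_llsc()   smp_llsc_mb()

    #ifndef CONFIG_SMP
    /* uniprocessor: a compiler barrier is all that is required */
    #define smp_mb()        barrier()
    #define smp_rmb()       barrier()
    #define smp_wmb()       barrier()
    #endif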

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-02-27 12:53:06 +01:00
Thomas Gleixner
e5931943d0 locking: Convert raw_rwlock functions to arch_rwlock
Name space cleanup for rwlock functions. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
2009-12-14 23:55:32 +01:00
Thomas Gleixner
fb3a6bbc91 locking: Convert raw_rwlock to arch_rwlock
Not strictly necessary for -rt, as -rt does not have non-sleeping
rwlocks, but it's odd not to have a consistent naming convention.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
2009-12-14 23:55:32 +01:00
Thomas Gleixner
0199c4e68d locking: Convert __raw_spin* functions to arch_spin*
Name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
2009-12-14 23:55:32 +01:00
Thomas Gleixner
445c89514b locking: Convert raw_spinlock to arch_spinlock
The raw_spin* namespace was taken by lockdep for the
architecture-specific implementations. raw_spin_* would be the ideal
namespace for the spinlocks which are not converted to sleeping locks
in preempt-rt.

Linus suggested converting the raw_ locks to arch_ locks and cleaning
up the namespace instead of using an artificial name like core_spin,
atomic_spin or whatever.

No functional change.
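
Together with the function conversion above, the pattern looks like
this (illustrative signatures):

    /* before: lockdep already owned the raw_spin* namespace */
    void __raw_spin_lock(raw_spinlock_t *lock);

    /* after: architecture code moves to the arch_ namespace */
    void arch_spin_lock(arch_spinlock_t *lock);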

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
2009-12-14 23:55:32 +01:00
Robin Holt
f5f7eac41d Allow rwlocks to re-enable interrupts
Pass the original flags to rwlock arch-code, so that it can re-enable
interrupts if implemented for that architecture.

Initially, make __raw_read_lock_flags and __raw_write_lock_flags stubs
which just do the same thing as the non-flags variants.
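
A sketch of the initial stubs; as described, they simply ignore the
flags argument:

    /* behave exactly like the non-flags variants and leave interrupt
     * state untouched until an architecture implements better */
    #define __raw_read_lock_flags(lock, flags)  __raw_read_lock(lock)
    #define __raw_write_lock_flags(lock, flags) __raw_write_lock(lock)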

Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Signed-off-by: Robin Holt <holt@sgi.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <linux-arch@vger.kernel.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-02 19:05:11 -07:00
David Daney
0e6826c73c MIPS: __raw_spin_lock() may spin forever on ticket wrap.
If the lock is not acquired and has to spin *and* the second attempt
to acquire the lock fails, the delay time is not masked by the ticket
range mask.  If the ticket number wraps around to zero, the result is
that the lock sampling delay is essentially infinite (due to casting
-1 to an unsigned int).

The fix: Always mask the difference between my_ticket and the current
ticket value before calculating the delay.
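
In C terms the fix amounts to something like the following; the names
are illustrative, as the real code is LL/SC assembly:

    /* masking first keeps the difference inside the ticket range even
     * after the ticket counter wraps, so the delay stays bounded */
    static inline unsigned int sample_delay(unsigned int my_ticket,
                                            unsigned int now_serving,
                                            unsigned int ticket_mask)
    {
            return (my_ticket - now_serving) & ticket_mask;
    }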

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-03-30 14:49:39 +02:00
Kyle McMartin
a5ef7ca0e2 x86: spinlocks: define dummy __raw_spin_is_contended
Architectures other than mips and x86 are not using ticket spinlocks.
Therefore, the contention on the lock is meaningless, since there is
nobody known to be waiting on it (arguably /fairly/ unfair locks).

Dummy it out to return 0 on other architectures.
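
A sketch of the dummy definition; it evaluates its argument once and
always yields 0:

    /* no ticket locks, so there is nothing meaningful to report */
    #define __raw_spin_is_contended(lock)   (((void)(lock), 0))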

Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-09 08:15:39 -08:00
Johannes Dickgreber
9b8f3863d9 MIPS: Fix wrong branch target in new spin_lock code.
Signed-off-by: Johannes Dickgreber <tanzy@gmx.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-15 12:46:50 +01:00
Ralf Baechle
2a31b03335 MIPS: Rewrite spinlocks to ticket locks.
Based on patch by Chad Reese of Cavium Networks.
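
For context, a conceptual ticket lock in plain C11; the kernel's MIPS
version implements the same idea with LL/SC assembly, and all names
here are illustrative:

    #include <stdatomic.h>

    struct ticket_lock {
            atomic_ushort ticket;       /* next ticket to hand out */
            atomic_ushort serving_now;  /* ticket currently allowed in */
    };

    static void ticket_lock(struct ticket_lock *l)
    {
            /* take a number... */
            unsigned short me = atomic_fetch_add(&l->ticket, 1);
            /* ...and spin until it is called: FIFO fairness for free */
            while (atomic_load(&l->serving_now) != me)
                    ;
    }

    static void ticket_unlock(struct ticket_lock *l)
    {
            atomic_fetch_add(&l->serving_now, 1);
    }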

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-11 16:18:54 +01:00
Ralf Baechle
384740dc49 MIPS: Move header files to new location below arch/mips/include
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-11 16:18:52 +01:00
Renamed from include/asm-mips/spinlock.h