Commit graph

6 commits

Glauber de Oliveira Costa
92b2dc79c3 x86: remove STR() macros
This patch removes the __STR() and STR() macros from x86_64 header files.
They seem to be legacy and have no remaining users. Even if there were users,
they should use __stringify() instead.

In fact, there was a third place in which these macros were defined
(ia32_binfmt.c) and used just below. In that file, the usage was properly
converted to __stringify().
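
For illustration, a minimal sketch of the preferred pattern; __stringify()
comes from <linux/stringify.h>, and MSR_EXAMPLE is a hypothetical constant
used only for this sketch:

    /* __stringify() turns the constant into the text "0x1b" inside the
     * asm template, which is what ad-hoc __STR()/STR() macros used to do. */
    #include <linux/stringify.h>

    #define MSR_EXAMPLE 0x1b

    static inline void load_example(void)
    {
            asm volatile("movl $" __stringify(MSR_EXAMPLE) ", %%ecx" ::: "ecx");
    }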

[ tglx: arch/x86 adaptation ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2007-10-17 20:16:25 +02:00
H. Peter Anvin
6619a8fb59 x86: Create clflush() inline, remove hardcoded wbinvd
Create an inline function for clflush(), with the proper arguments,
and use it instead of hard-coding the instruction.

This also removes one instance of hard-coded wbinvd, based on a patch
by Glauber de Oliveira Costa.
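
As a rough sketch (not the verbatim patch), such an inline might look like
the following; the parameter name is illustrative:

    /* Flush the cache line containing *p. Taking the address as an
     * argument lets call sites write clflush(addr) instead of
     * open-coding the instruction. */
    static inline void clflush(volatile void *p)
    {
            asm volatile("clflush %0" : "+m" (*(volatile char *)p));
    }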

[ tglx: arch/x86 adaptation ]

Cc: Andi Kleen <andi@firstfloor.org>
Cc: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2007-10-17 20:16:12 +02:00
Kirill Korotaev
c1217a75ea x86: mark read_crX() asm code as volatile
Some gcc versions (at least 4.1.1 from RHEL5 and 4.1.2 from Gentoo were
checked) can generate incorrect code when read_crX()/write_crX() calls are
mixed, because gcc caches and reuses the result of read_crX().

The small test app for x86_64 below, compiled with -O2, demonstrates this
(i686 behaves the same way):
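
(The demo app itself is not reproduced in this listing.)  A hedged sketch of
the accessor side of the fix, with names mirroring the kernel's
read_cr3()/write_cr3():

    /* Marking the asm as volatile tells gcc it must not cache or elide
     * the access, so a read_cr3() issued after write_cr3() really
     * re-reads the register instead of reusing a stale value. */
    static inline unsigned long read_cr3(void)
    {
            unsigned long val;
            asm volatile("mov %%cr3, %0" : "=r" (val));
            return val;
    }

    static inline void write_cr3(unsigned long val)
    {
            asm volatile("mov %0, %%cr3" : : "r" (val) : "memory");
    }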
2007-10-17 20:15:31 +02:00
Nick Piggin
b6c7347fff x86: optimise barriers
According to latest memory ordering specification documents from Intel
and AMD, both manufacturers are committed to in-order loads from
cacheable memory for the x86 architecture.  Hence, smp_rmb() may be a
simple barrier.

Also according to those documents, and according to existing practice in
Linux (e.g. spin_unlock doesn't enforce ordering), stores to cacheable
memory are visible in program order too.  Special string stores are safe
-- their constituent stores may be out of order, but they must complete
in order WRT surrounding stores.  Nontemporal stores to WB memory can go
out of order, and so they should be fenced explicitly to make them
appear in-order WRT other stores.  Hence, smp_wmb() may be a simple
barrier.

    http://developer.intel.com/products/processor/manuals/318147.pdf
    http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf

In userspace microbenchmarks on a Core 2 system, fence instructions range
anywhere from around 15 to 50 cycles, which may not be totally
insignificant in performance-critical paths (code size will go down
too).

However, the primary motivation for this is to have a canonical barrier
implementation for the x86 architecture.

smp_rmb on buggy Pentium Pros remains a locked op, which is apparently
required.
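
As a minimal sketch of what "may be a simple barrier" means in practice
(illustrative definitions, not the exact hunks of the patch):

    /* On x86 SMP, cacheable loads and stores are already ordered by the
     * hardware, so the read and write barriers only need to stop the
     * compiler from reordering (the buggy-Pentium-Pro case noted above
     * keeps a locked op); a full barrier keeps a real fence. */
    #define barrier()   asm volatile("" ::: "memory")
    #define smp_rmb()   barrier()
    #define smp_wmb()   barrier()
    #define smp_mb()    asm volatile("mfence" ::: "memory")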

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-12 18:41:21 -07:00
Nick Piggin
4071c71855 x86: fix IO write barrier
wmb() on x86 must always include a barrier, because stores can go out of
order in many cases when dealing with devices (e.g. WC memory).
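
A hedged sketch of the resulting definition (illustrative, not the exact
patch text):

    /* wmb() keeps an unconditional store fence: writes to WC or device
     * memory can leave the CPU out of order even though ordinary
     * cacheable stores do not, so a compiler barrier is not enough. */
    #define wmb()   asm volatile("sfence" ::: "memory")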

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-12 18:41:21 -07:00
Thomas Gleixner
96a388de5d i386/x86_64: move headers to include/asm-x86
Move the headers to include/asm-x86 and fix up the
header install make rules.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-10-11 11:20:03 +02:00
Renamed from include/asm-x86_64/system.h