kernel-fxtec-pro1x/arch/x86/lib
Fenghua Yu 4307bec934 x86, mem: copy_user_64.S: Support copy_to/from_user by enhanced REP MOVSB/STOSB
Support copy_to_user()/copy_from_user() with enhanced REP MOVSB/STOSB.
On processors that support enhanced REP MOVSB/STOSB (ERMS), the alternative
copy_user_enhanced_fast_string function, which uses enhanced REP MOVSB,
overrides both the original function and the fast-string function.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-7-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2011-05-17 15:40:28 -07:00
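
The commit above hooks the new routine in through the x86 alternatives mechanism rather than a runtime branch. As a rough, unofficial illustration of the same dispatch idea, the user-space C sketch below probes CPUID leaf 7, subleaf 0 (EBX bit 9 is the ERMS flag) and uses a rep movsb copy when the flag is set, falling back to memcpy() otherwise. It is not the kernel patch itself: the helper names erms_memcpy() and copy_dispatch() are made up for this example, and it assumes an x86-64 GCC/Clang toolchain whose cpuid.h provides __get_cpuid_count().

    /*
     * Illustrative sketch only: user-space dispatch on the ERMS CPUID flag.
     * The kernel patches the call site via alternatives instead of branching.
     * Build (assumed): gcc -O2 erms_copy.c -o erms_copy   (x86-64)
     */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    /* Copy n bytes with "rep movsb"; fast on ERMS-capable CPUs. */
    static void *erms_memcpy(void *dst, const void *src, size_t n)
    {
            void *d = dst;

            asm volatile("rep movsb"
                         : "+D" (d), "+S" (src), "+c" (n)
                         : : "memory");
            return dst;
    }

    /* CPUID leaf 7, subleaf 0: EBX bit 9 is the ERMS feature flag. */
    static int cpu_has_erms(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                    return 0;
            return (ebx >> 9) & 1;
    }

    /* Pick the copy routine once, standing in for the alternatives patching. */
    static void *copy_dispatch(void *dst, const void *src, size_t n)
    {
            static int erms = -1;

            if (erms < 0)
                    erms = cpu_has_erms();
            return erms ? erms_memcpy(dst, src, n) : memcpy(dst, src, n);
    }

    int main(void)
    {
            char src[64] = "enhanced REP MOVSB/STOSB demo";
            char dst[64];

            copy_dispatch(dst, src, sizeof(src));
            printf("ERMS: %s, copied: \"%s\"\n",
                   cpu_has_erms() ? "yes" : "no", dst);
            return 0;
    }

The kernel avoids the per-call branch shown here by rewriting the call site at boot based on the detected CPU features, which is why the commit can simply say the ERMS routine "overrides" the older ones.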
.gitignore
atomic64_32.c x86-32: Rewrite 32-bit atomic64 functions in assembly 2010-02-25 20:47:30 -08:00
atomic64_386_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
atomic64_cx8_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
cache-smp.c x86, lib: Add wbinvd smp helpers 2010-01-22 16:05:42 -08:00
checksum_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
clear_page_64.S x86, mem: clear_page_64.S: Support clear_page() with enhanced REP MOVSB/STOSB 2011-05-17 15:40:27 -07:00
cmpxchg.c x86, asm: Merge cmpxchg_486_u64() and cmpxchg8b_emu() 2010-07-28 17:05:11 -07:00
cmpxchg8b_emu.S
cmpxchg16b_emu.S percpu: Omit segment prefix in the UP case for cmpxchg_double 2011-03-27 19:25:36 -07:00
copy_page_64.S x86, alternatives: Use 16-bit numbers for cpufeature index 2010-07-07 10:36:28 -07:00
copy_user_64.S x86, mem: copy_user_64.S: Support copy_to/from_user by enhanced REP MOVSB/STOSB 2011-05-17 15:40:28 -07:00
copy_user_nocache_64.S
csum-copy_64.S x86: Clean up csum-copy_64.S a bit 2011-03-18 10:44:26 +01:00
csum-partial_64.c x86: Fix common misspellings 2011-03-18 10:39:30 +01:00
csum-wrappers_64.c
delay.c x86: udelay: Use this_cpu_read to avoid address calculation 2011-01-04 06:08:55 +01:00
getuser.S
inat.c
insn.c
iomap_copy_64.S
Makefile percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support 2011-02-28 11:20:49 +01:00
memcpy_32.c x86, mem: Optimize memmove for small size and unaligned cases 2010-09-24 18:57:11 -07:00
memcpy_64.S x86, mem: Optimize memcpy by avoiding memory false dependece 2010-08-23 14:56:41 -07:00
memmove_64.S x86-64, mem: Convert memmove() to assembly file and fix return value bug 2011-01-25 16:58:39 -08:00
memset_64.S x86, alternatives: Use 16-bit numbers for cpufeature index 2010-07-07 10:36:28 -07:00
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock_64.S
rwsem_64.S x86-64: Add CFI annotations to lib/rwsem_64.S 2011-02-28 18:06:21 +01:00
semaphore_32.S x86: Fix a bogus unwind annotation in lib/semaphore_32.S 2011-03-02 08:16:44 +01:00
string_32.c
strstr_32.c
thunk_32.S x86: Remove unused bits from lib/thunk_*.S 2011-02-28 18:06:22 +01:00
thunk_64.S x86: Remove unused bits from lib/thunk_*.S 2011-02-28 18:06:22 +01:00
usercopy_32.c
usercopy_64.c
x86-opcode-map.txt