kernel-fxtec-pro1x/arch/x86/lib
Linus Torvalds 92ae03f2ef x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up
This merges the 32- and 64-bit versions of the x86 strncpy_from_user()
by just rewriting it in C rather than the ancient inline asm versions
that used lodsb/stosb and had been duplicated for (trivial) differences
between the 32-bit and 64-bit versions.

While doing that, it also speeds them up by doing the accesses a word at
a time.  Finally, the new routines also properly handle the case of
hitting the end of the address space, which we have never done correctly
before (fs/namei.c has a hack around it for that reason).

Despite all these improvements, it actually removes more lines than it
adds, due to the de-duplication.  Also, we no longer export (or define)
the legacy __strncpy_from_user() function (that was defined to not do
the user permission checks), since it's not actually used anywhere, and
the user address space checks are built in to the new code.

Other architecture maintainers have been notified that the old hack in
fs/namei.c will be going away in the 3.5 merge window, in case they
copied the x86 approach of being a bit cavalier about the end of the
address space.

Cc: linux-arch@vger.kernel.org
Cc: Ingo Molnar <mingo@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-04-11 09:41:28 -07:00
.gitignore
atomic64_32.c x86: Adjust asm constraints in atomic64 wrappers 2012-01-20 17:29:31 -08:00
atomic64_386_32.S x86: atomic64 assembly improvements 2012-01-20 17:29:49 -08:00
atomic64_cx8_32.S x86: atomic64 assembly improvements 2012-01-20 17:29:49 -08:00
cache-smp.c
checksum_32.S
clear_page_64.S
cmpxchg.c
cmpxchg8b_emu.S
cmpxchg16b_emu.S
copy_page_64.S
copy_user_64.S
copy_user_nocache_64.S
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c x86: Derandom delay_tsc for 64 bit 2012-03-09 12:43:27 -08:00
getuser.S
inat.c x86: Fix to decode grouped AVX with VEX pp bits 2012-02-11 15:11:35 +01:00
insn.c x86: Fix to decode grouped AVX with VEX pp bits 2012-02-11 15:11:35 +01:00
iomap_copy_64.S
Makefile
memcpy_32.c
memcpy_64.S x86-64: Handle byte-wise tail copying in memcpy() without a loop 2012-01-26 21:19:20 +01:00
memmove_64.S
memset_64.S x86-64: Fix memset() to support sizes of 4Gb and above 2012-01-26 11:50:04 +01:00
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock.S
rwsem.S
string_32.c
strstr_32.c
thunk_32.S
thunk_64.S
usercopy.c x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up 2012-04-11 09:41:28 -07:00
usercopy_32.c x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up 2012-04-11 09:41:28 -07:00
usercopy_64.c x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up 2012-04-11 09:41:28 -07:00
x86-opcode-map.txt Merge branches 'sched-urgent-for-linus', 'perf-urgent-for-linus' and 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2012-01-19 14:53:06 -08:00