92ae03f2ef
This merges the 32- and 64-bit versions of the x86 strncpy_from_user() by just rewriting it in C rather than the ancient inline asm versions that used lodsb/stosb and had been duplicated for (trivial) differences between the 32-bit and 64-bit versions.

While doing that, it also speeds them up by doing the accesses a word at a time. Finally, the new routines also properly handle the case of hitting the end of the address space, which we have never done correctly before (fs/namei.c has a hack around it for that reason).

Despite all these improvements, it actually removes more lines than it adds, due to the de-duplication.

Also, we no longer export (or define) the legacy __strncpy_from_user() function (which was defined to not do the user permission checks), since it's not actually used anywhere and the user address space checks are built into the new code.

Other architecture maintainers have been notified that the old hack in fs/namei.c will be going away in the 3.5 merge window, in case they copied the x86 approach of being a bit cavalier about the end of the address space.

Cc: linux-arch@vger.kernel.org
Cc: Ingo Molnar <mingo@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
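The speedup comes from the standard word-at-a-time NUL-detection idea: load a full word, check whether any byte in it is zero with a couple of arithmetic operations, and only drop to byte-at-a-time work for the last word. The sketch below is a minimal userspace illustration of that idea; the helper names, constants, and structure are illustrative only, not the kernel's code, and it deliberately ignores the user-access checks, fault handling, and page-boundary care that the real strncpy_from_user() has to get right.

```c
/*
 * Minimal userspace sketch of word-at-a-time string copying.
 * Names and structure are illustrative; this is NOT the kernel's code.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ONE_BYTES  0x0101010101010101ULL	/* 0x01 in every byte lane */
#define HIGH_BITS  0x8080808080808080ULL	/* 0x80 in every byte lane */

/* Nonzero iff some byte of v is zero (classic "has zero byte" trick). */
static inline uint64_t has_zero_byte(uint64_t v)
{
	return (v - ONE_BYTES) & ~v & HIGH_BITS;
}

/*
 * Copy at most max bytes of the NUL-terminated string src into dst, a
 * word at a time, and return the number of bytes copied before the NUL.
 */
static long copy_string_wordwise(char *dst, const char *src, long max)
{
	long res = 0;

	while (max >= (long)sizeof(uint64_t)) {
		uint64_t c;

		memcpy(&c, src + res, sizeof(c));	/* one word load */
		if (has_zero_byte(c))
			break;				/* NUL lies inside this word */
		memcpy(dst + res, &c, sizeof(c));	/* one word store */
		res += sizeof(c);
		max -= sizeof(c);
	}

	/* Tail: at most one word's worth of byte-at-a-time work. */
	while (max > 0) {
		char ch = src[res];

		dst[res] = ch;
		if (ch == '\0')
			return res;
		res++;
		max--;
	}
	return res;	/* destination filled before a NUL was seen */
}

int main(void)
{
	/*
	 * The source is padded to a whole number of words so the word
	 * loads above never run past the allocation; the kernel instead
	 * has to stay within the user mapping, which is exactly the
	 * end-of-address-space handling the commit message talks about.
	 */
	char src[32] = "hello, word-at-a-time world";
	char dst[32];
	long n = copy_string_wordwise(dst, src, sizeof(dst));

	printf("copied %ld bytes: %s\n", n, dst);
	return 0;
}
```

The bit trick `(v - 0x01...) & ~v & 0x80...` sets a high bit only in byte lanes that can be zero, so one test per word replaces eight byte compares, which is where the bulk of the speedup over the old lodsb/stosb loops comes from.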
.gitignore
atomic64_32.c
atomic64_386_32.S
atomic64_cx8_32.S
cache-smp.c
checksum_32.S
clear_page_64.S
cmpxchg.c
cmpxchg8b_emu.S
cmpxchg16b_emu.S
copy_page_64.S
copy_user_64.S
copy_user_nocache_64.S
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c
getuser.S
inat.c
insn.c
iomap_copy_64.S
Makefile
memcpy_32.c
memcpy_64.S
memmove_64.S
memset_64.S
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock.S
rwsem.S
string_32.c
strstr_32.c
thunk_32.S
thunk_64.S
usercopy.c
usercopy_32.c
usercopy_64.c
x86-opcode-map.txt