Commit graph

243 commits

Author SHA1 Message Date
Chen Gang
4782f7f9ae arc: remove '__init' for get_hw_config_num_irq()
get_hw_config_num_irq() may be called by iss_model_init_smp(), a normal
(non-__init) function assigned to the 'init_smp' function pointer, which
in turn may be called by first_lines_of_secondary(), which also needs to
be a normal function.

The related warning (with allmodconfig):

    MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x5814): Section mismatch in reference from the function iss_model_init_smp() to the function .init.text:get_hw_config_num_irq()
  The function iss_model_init_smp() references
  the function __init get_hw_config_num_irq().
  This is often because iss_model_init_smp lacks a __init
  annotation or the annotation of get_hw_config_num_irq is wrong.
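
For illustration, the fix amounts to dropping the section annotation from
the definition, along these lines (the exact prototype is assumed here,
not quoted from the tree):

	-int __init get_hw_config_num_irq(void)
	+int get_hw_config_num_irq(void)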

Signed-off-by: Chen Gang <gang.chen@asianux.com>
2013-11-06 10:41:43 +05:30
Chen Gang
8f5d221b06 arc: remove '__init' for first_lines_of_secondary()
first_lines_of_secondary() is an '__init' function, but it may be called
by __cpu_up(), in turn called from _cpu_up() and cpu_up(), which is a
normal exported symbol. So remove '__init'.

The related warning (with allmodconfig):

    MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x315c): Section mismatch in reference from the function __cpu_up() to the function .init.text:first_lines_of_secondary()
  The function __cpu_up() references
  the function __init first_lines_of_secondary().
  This is often because __cpu_up lacks a __init
  annotation or the annotation of first_lines_of_secondary is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
2013-11-06 10:41:43 +05:30
Chen Gang
ef3a661af6 arc: remove '__init' for setup_processor() and arc_init_IRQ()
They don't have '__init' in their definitions, but have '__init' in their
declarations. Also, the normal function start_kernel_secondary() may call
setup_processor(), which will call arc_init_IRQ().

So remove '__init' from both of them. The related warning (with
allmodconfig):

    MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x3084): Section mismatch in reference from the function start_kernel_secondary() to the function .init.text:setup_processor()
  The function start_kernel_secondary() references
  the function __init setup_processor().
  This is often because start_kernel_secondary lacks a __init
  annotation or the annotation of setup_processor is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
2013-11-06 10:41:42 +05:30
Chen Gang
3d01c1ce41 arc: kgdb: add default implementation for kgdb_roundup_cpus()
arc supports kgdb, but is missing kgdb_roundup_cpus(), so the build
fails. Add a simple generic implementation, just like other
architectures (e.g. tile, mips ...).

The related error (with allmodconfig):

  kernel/built-in.o: In function `kgdb_cpu_enter':
  kernel/debug/debug_core.c:580: undefined reference to `kgdb_roundup_cpus'
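
A minimal sketch of such a generic roundup, modelled on what other
architectures carry at this point (illustrative, not the verbatim ARC
code):

	static void kgdb_call_nmi_hook(void *ignored)
	{
		/* report this CPU into the kgdb core */
		kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
	}

	void kgdb_roundup_cpus(unsigned long flags)
	{
		/* ask the other CPUs to stop and enter the debugger */
		local_irq_enable();
		smp_call_function(kgdb_call_nmi_hook, NULL, 0);
		local_irq_disable();
	}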

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:41 +05:30
Vineet Gupta
0a4c40a3b7 ARC: Fix bogus gcc warning and micro-optimise TLB iteration loop
------------------>8----------------------
arch/arc/mm/tlb.c: In function ‘do_tlb_overlap_fault’:
arch/arc/mm/tlb.c:688:13: warning: array subscript is above array bounds
[-Warray-bounds]
         (pd0[n] & PAGE_MASK)) {
             ^
------------------>8----------------------

While at it, remove the useless last iteration of the outer loop when
reading a TLB SET for duplicate entries.

Suggested-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:41 +05:30
Vineet Gupta
0dafafc3ef ARC: Add support for irqflags tracing and lockdep
Lockdep required a small fix to stacktrace API which was incorrectly
unwinding out of __switch_to for the current call frame.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:40 +05:30
Vineet Gupta
54c8bff14d ARC: Reset the value of Interrupt Priority Register
In case the bootloader has changed the priority of one/more IRQ lines

Reported-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:40 +05:30
Vineet Gupta
07ba69a46c ARC: Reduce #ifdef'ery for unaligned access emulation
When emulation is not enabled, the access is treated as if the fixup
failed, so there is no need for special #ifdef checks.
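
A sketch of the resulting stub for the disabled case (function and config
names assumed for illustration):

	#ifndef CONFIG_ARC_MISALIGN_ACCESS
	/* emulation not configured: behave exactly as if the fixup failed */
	static inline int misaligned_fixup(unsigned long address,
					   struct pt_regs *regs,
					   struct callee_regs *cregs)
	{
		return 1;
	}
	#endif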

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:39 +05:30
Vineet Gupta
21a63b5604 ARC: Change calling convention of do_page_fault()
Switch the args to the (address, pt_regs) ordering to match all the other
"C" exception handlers.

This removes the awkwardness in EV_ProtV for page fault vs. unaligned
access.
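
Roughly, the prototype changes as below (a sketch, not the verbatim diff):

	/* before */
	void do_page_fault(struct pt_regs *regs, unsigned long address);

	/* after: (address, regs) ordering, like the other "C" handlers */
	void do_page_fault(unsigned long address, struct pt_regs *regs);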

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:39 +05:30
Vineet Gupta
d4599baf5c ARC: cacheflush optim - PTAG can be loop invariant if V-P is const
Line op needs vaddr (indexing) and paddr (tag match). For page sized
flushes (V-P const), each line op will need a different index, but the
tag bits will remain constant, hence paddr can be set up once outside the
loop.
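
A rough sketch of the resulting loop shape (full_page_op / aux_tag /
aux_cmd are illustrative names, not the exact ARC identifiers):

	if (full_page_op)		/* V-P const: tag is loop invariant */
		write_aux_reg(aux_tag, paddr);

	while (num_lines-- > 0) {
		if (!full_page_op) {
			write_aux_reg(aux_tag, paddr);
			paddr += L1_CACHE_BYTES;
		}
		write_aux_reg(aux_cmd, vaddr);	/* index + issue the line op */
		vaddr += L1_CACHE_BYTES;
	}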

This improves select LMBench numbers for Aliasing dcache where we have
more "preventive" cache flushing.

Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                             call  I/O stat clos TCP  inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
3.11-rc7- Linux 3.11.0-   80 4.66 8.88 69.7 112. 268. 8.60 28.0 3489 13.K 27.K	# Non alias ARC700
3.11-rc7- Linux 3.11.0-   80 4.64 8.51 68.6 98.5 271. 8.58 28.1 4160 15.K 32.K	# Aliasing
3.11-rc7- Linux 3.11.0-   80 4.64 8.51 69.8 99.4 270. 8.73 27.5 3880 15.K 31.K	# PTAG loop Inv

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:38 +05:30
Vineet Gupta
bd12976c36 ARC: cacheflush refactor #3: Unify the {d,i}cache flush leaf helpers
With Line length being constant now, we can fold the 2 helpers into 1.
This allows applying any optimizations (forthcoming) to a single place.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:38 +05:30
Vineet Gupta
63d2dfdbf4 ARC: cacheflush refactor #2: I and D caches lines to have same size
Having them be different seems an obscure configuration.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:37 +05:30
Vineet Gupta
f3e4de3274 ARC: cacheflush refactor #1: push aux reg ascertaining into leaf routine
ARC dcache supports 3 ops - Inv, Flush, Flush-n-Inv.
The programming model however provides 2 commands FLUSH, INV.
INV will either discard or flush-n-discard (based on DT_CTRL bit)

The leaf helper __dc_line_loop() used to take the AUX register
(corresponding to the 2 commands) as an argument. Now that determination
is pushed into the helper itself, paving the way for the code
consolidation to follow.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:29 +05:30
Vineet Gupta
064a626924 ARC: use __weak instead of __attribute__((weak))
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:40:37 +05:30
Vineet Gupta
8e457d6a75 ARC: Annotate some functions as static
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:40:37 +05:30
Christoph Lameter
6855e95ce3 arc: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of them is
address calculation via the form &__get_cpu_var(x). This calculates the address for
the instance of the percpu variable of the current processor based on an offset.

Other use cases are for storing and retrieving data from the current processor's percpu area.
__get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as :

#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations
could use a segment prefix (or global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use
optimized assembly code to read and write per cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr()
or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided
and fewer registers are used when code is generated.

At the end of the patchset all uses of __get_cpu_var have been removed so the macro is removed too.

The patchset includes passes over all arches as well. Once these operations are used throughout,
specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access,
e.g. by using a global register that may be set to the per cpu base.

Transformations done to __get_cpu_var()

1. Determine the address of the percpu instance of the current processor.

	DEFINE_PER_CPU(int, y);
	int *x = &__get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processors instance of a per cpu variable.

	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);

   Converts to

	int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);

   Converts to

	memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable

	DEFINE_PER_CPU(int, y)
	__get_cpu_var(y) = x;

   Converts to

	this_cpu_write(y, x);

6. Increment/Decrement etc of a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++

   Converts to

	this_cpu_inc(y)

Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
2013-11-06 10:40:37 +05:30
Vineet Gupta
9c41f4eeb9 ARC: Incorrect mm reference used in vmalloc fault handler
A vmalloc fault needs to sync up PGD/PTE entry from init_mm to current
task's "active_mm".  ARC vmalloc fault handler however was using mm.

A vmalloc fault for non user task context (actually pre-userland, from
init thread's open of /dev/console) caused the handler to deref a NULL mm
(for mm->pgd).

The reason it worked so far is amazing:

1. By default (!SMP), vmalloc fault handler uses a cached value of PGD.
   In SMP that MMU register is repurposed hence need for mm pointer deref.

2. In pre-3.12 SMP kernel, the problem triggering vmalloc didn't exist in
   pre-userland code path - it was introduced with commit 20bafb3d23
   "n_tty: Move buffers into n_tty_data"

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: stable@vger.kernel.org    #3.10 and 3.11
Cc: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-11-02 10:27:04 -07:00
Vineet Gupta
5b24282846 ARC: Ignore ptrace SETREGSET request for synthetic register "stop_pc"
The ARCompact TRAP_S insn, used for breakpoints, commits before the
exception is taken (updating the architectural PC). So ptregs->ret
contains the next-PC and not the breakpoint PC itself. This is different
from other restartable exceptions such as TLB Miss where ptregs->ret has
the exact faulting PC. gdb needs to know the exact PC, hence ARC ptrace
GETREGSET provides @stop_pc, which returns ptregs->ret vs. EFA depending
on the situation.

However, writing stop_pc (SETREGSET request), which updates ptregs->ret,
doesn't make sense since stop_pc doesn't always correspond to that reg,
as described above.

This was not an issue so far since user_regs->ret / user_regs->stop_pc
had the same value, and writing either to ptregs->ret was OK - needless,
but NOT broken, hence not observed.

With gdb "jump", they diverge, and the ptregs->ret value written from
user_regs->ret is immediately overwritten with stop_pc, which this patch
fixes.

Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-10-12 12:00:36 +05:30
Christian Ruppert
10469350e3 ARC: Fix signal frame management for SA_SIGINFO
Previously, when a signal was registered with SA_SIGINFO, parameters 2
and 3 of the signal handler were written to registers r1 and r2 before
the register set was saved. This led to corruption of these two
registers after returning from the signal handler (the wrong values were
restored).
With this patch, registers are now saved before any parameters are
passed, thus maintaining the processor state from before signal entry.

Signed-off-by: Christian Ruppert <christian.ruppert@abilis.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-10-03 09:43:56 +05:30
Uwe Kleine-König
55c2e26204 ARC: Use clockevents_config_and_register over clockevents_register_device
clockevents_config_and_register is more clever and correct than doing it
by hand; so use it.

[vgupta: fixed build failure due to missing ; in patch]
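
For reference, the consolidated call configures and registers in one shot,
roughly as below (the min/max delta values shown are placeholders):

	clockevents_config_and_register(clk, arc_get_core_freq(),
					0, ARC_TIMER_MAX);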

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-27 16:28:48 +05:30
Vineet Gupta
6c00350b57 ARC: Workaround spinlock livelock in SMP SystemC simulation
Some ARC SMP systems lack native atomic R-M-W (LLOCK/SCOND) insns and
can only use atomic EX insn (reg with mem) to build higher level R-M-W
primitives. This includes a SystemC based SMP simulation model.

So rwlocks need to use a protecting spinlock for atomic cmp-n-exchange
operation to update reader(s)/writer count.

The spinlock operation itself looks as follows:

	mov reg, 1		; 1=locked, 0=unlocked
retry:
	EX reg, [lock]		; load existing, store 1, atomically
	BREQ reg, 1, retry	; if already locked, retry

In single-threaded simulation, SystemC alternates between the 2 cores,
scheduling "N" insns on each. Additionally, for an insn with a global
side effect, such as EX writing to shared mem, a core switch is
enforced too.

Given that, with 2 cores doing repeated EX on the same location, Linux
often got into a livelock, e.g. when both cores were fiddling with the
tasklist lock (gdbserver / hackbench) for read/write respectively, as
the sequence diagram below shows:

           core1                                   core2
         --------                                --------
1. spin lock [EX r=0, w=1] - LOCKED
2. rwlock(Read)            - LOCKED
3. spin unlock  [ST 0]     - UNLOCKED
                                         spin lock [EX r=0,w=1] - LOCKED
                      -- resched core 1----

5. spin lock [EX r=1] - ALREADY-LOCKED

                      -- resched core 2----
6.                                       rwlock(Write) - READER-LOCKED
7.                                       spin unlock [ST 0]
8.                                       rwlock failed, retry again

9.                                       spin lock  [EX r=0, w=1]
                      -- resched core 1----

10  spinlock locked in #9, retry #5
11. spin lock [EX gets 1]
                      -- resched core 2----
...
...

The fix was to unlock using the EX insn too (step 7), to trigger another
SystemC scheduling pass which would let core1 proceed, eliding the
livelock.
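
i.e. the spin unlock becomes something along these lines (sketch):

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		unsigned int val = __ARCH_SPIN_LOCK_UNLOCKED__;

		/* EX (not a plain ST) so the simulator sees a global side
		 * effect and forces a core switch
		 */
		__asm__ __volatile__(
		"	ex  %0, [%1]	\n"
		: "+r" (val)
		: "r" (&(lock->slock))
		: "memory");

		smp_mb();
	}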

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-27 16:28:48 +05:30
Vineet Gupta
0752adfda1 ARC: Fix 32-bit wrap around in access_ok()
Anton reported

 | LTP tests syscalls/process_vm_readv01 and process_vm_writev01 fail
 | similarly in one testcase test_iov_invalid -> lvec->iov_base.
 | Testcase expects errno EFAULT and return code -1,
 | but it gets return code 1 and ERRNO is 0 what means success.

Essentially the test case was passing a pointer of -1, which access_ok()
was not catching. It was doing [@addr + @sz <= TASK_SIZE], which would
pass for @addr == -1.

Fixed that by rewriting it as [@addr <= TASK_SIZE - @sz].
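
i.e. something along these lines (a sketch of the check, not the verbatim
macro):

	/* @addr + @sz can wrap past zero for addr == -1, so keep the
	 * arithmetic on the known-good TASK_SIZE side instead
	 */
	#define __user_ok(addr, sz)	(((sz) <= TASK_SIZE) && \
					 ((addr) <= (TASK_SIZE - (sz))))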

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-27 16:28:47 +05:30
Mischa Jonker
c11eb222fd ARC: Handle zero-overhead-loop in unaligned access handler
If a load or store is the last instruction in a zero-overhead-loop, and
it's misaligned, the loop would execute only once.

This fixes that problem.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-27 16:28:23 +05:30
Martin Schwidefsky
0244ad004a Remove GENERIC_HARDIRQ config option
After the last architecture switched to generic hard irqs the config
options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code
for !CONFIG_GENERIC_HARDIRQS can be removed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-09-13 15:09:52 +02:00
Linus Torvalds
ac4de9543a Merge branch 'akpm' (patches from Andrew Morton)
Merge more patches from Andrew Morton:
 "The rest of MM.  Plus one misc cleanup"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (35 commits)
  mm/Kconfig: add MMU dependency for MIGRATION.
  kernel: replace strict_strto*() with kstrto*()
  mm, thp: count thp_fault_fallback anytime thp fault fails
  thp: consolidate code between handle_mm_fault() and do_huge_pmd_anonymous_page()
  thp: do_huge_pmd_anonymous_page() cleanup
  thp: move maybe_pmd_mkwrite() out of mk_huge_pmd()
  mm: cleanup add_to_page_cache_locked()
  thp: account anon transparent huge pages into NR_ANON_PAGES
  truncate: drop 'oldsize' truncate_pagecache() parameter
  mm: make lru_add_drain_all() selective
  memcg: document cgroup dirty/writeback memory statistics
  memcg: add per cgroup writeback pages accounting
  memcg: check for proper lock held in mem_cgroup_update_page_stat
  memcg: remove MEMCG_NR_FILE_MAPPED
  memcg: reduce function dereference
  memcg: avoid overflow caused by PAGE_ALIGN
  memcg: rename RESOURCE_MAX to RES_COUNTER_MAX
  memcg: correct RESOURCE_MAX to ULLONG_MAX
  mm: memcg: do not trap chargers with full callstack on OOM
  mm: memcg: rework and document OOM waiting and wakeup
  ...
2013-09-12 15:44:27 -07:00
Johannes Weiner
759496ba64 arch: mm: pass userspace fault flag to generic fault handler
Unlike global OOM handling, memory cgroup code will invoke the OOM killer
in any OOM situation because it has no way of telling faults occurring in
kernel context - which could be handled more gracefully - from
user-triggered faults.

Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to generic code so that memcg OOM
handling can be improved.
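
In each arch fault handler this boils down to something like (sketch):

	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

	/* tell generic code this fault originated in user space */
	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;

	int fault = handle_mm_fault(mm, vma, address, flags);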

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-12 15:38:01 -07:00
Johannes Weiner
94bce453c7 arch: mm: remove obsolete init OOM protection
The memcg code can trap tasks in the context of the failing allocation
until an OOM situation is resolved.  They can hold all kinds of locks
(fs, mm) at this point, which makes it prone to deadlocking.

This series converts memcg OOM handling into a two step process that is
started in the charge context, but any waiting is done after the fault
stack is fully unwound.

Patches 1-4 prepare architecture handlers to support the new memcg
requirements, but in doing so they also remove old cruft and unify
out-of-memory behavior across architectures.

Patch 5 disables the memcg OOM handling for syscalls, readahead, kernel
faults, because they can gracefully unwind the stack with -ENOMEM.  OOM
handling is restricted to user triggered faults that have no other
option.

Patch 6 reworks memcg's hierarchical OOM locking to make it a little
more obvious what is going on in there: reduce locked regions, rename
locking functions, reorder and document.

Patch 7 implements the two-part OOM handling such that tasks are never
trapped with the full charge stack in an OOM situation.

This patch:

Back before smart OOM killing, when faulting tasks were killed directly on
allocation failures, the arch-specific fault handlers needed special
protection for the init process.

Now that all fault handlers call into the generic OOM killer (see commit
609838cfed: "mm: invoke oom-killer from remaining unconverted page
fault handlers"), which already provides init protection, the
arch-specific leftovers can be removed.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Acked-by: Vineet Gupta <vgupta@synopsys.com>	[arch/arc bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-12 15:38:01 -07:00
Noam Camus
c3567f8a35 ARC: SMP failed to boot due to missing IVT setup
Commit 05b016ecf5 "ARC: Setup Vector Table Base in early boot" moved
the Interrupt vector Table setup out of arc_init_IRQ() which is called
for all CPUs, to entry point of boot cpu only, breaking booting of others.

Fix by adding the same to entry point of non-boot CPUs too.

read_arc_build_cfg_regs() printing the IVT Base Register didn't help the
cause since it prints a synthetic value if zero, which is totally bogus,
so fix that to print the exact register.

[vgupta: Remove the now stale comment from header of arc_init_IRQ and
also added the commentary for halt-on-reset]

Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: <stable@vger.kernel.org> #3.11
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-12 07:40:08 -07:00
Linus Torvalds
31f7c3a688 Device tree core updates for v3.12
Generally minor changes. A bunch of bug fixes, particularly for
 initialization and some refactoring. Most notable change is feeding the
 entire flattened tree into the random pool at boot. May not be
 significant, but shouldn't hurt either.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQIcBAABAgAGBQJSL12LAAoJEEFnBt12D9kB64gP/RBipnYbo3RPanHg+lE/J1V7
 KSVFNGKWJHxTg47VVC1YJGIG21jqxAilpdS2MQL5FP7iyd+IzvtHpQiJgp+2G+pq
 di06yrdyrYErxRgZgGQi8IpR538ZzOEVLCKJGdb09YelkRzPT5au7CC1MAsX3qco
 yba7PHk0/Nc4hZE4aGbgR1DlRmn86ob7mM0KFE/LORaSN2BueMgWcwKhQXYNGyoh
 assX4yNhAbUG6Bgw7paBLDGqHh8c5Ei5AppU8yPb+N094jgYHBJryUoDlzzUHD23
 qqiEqHhUKT0TpgHNs8KH0WZFugcmjKvYEbzdzadBxqfXnJN4fKSEcdfF3iz4T14j
 U6EZks89GoHwA523OghUZkKNOqlsUdWfdKz+8/grQqKisYwDcf3fCxEYk/4weDCQ
 b6fFlOv6+AI3btjXp6F511ZKxyT4ZZzkHjp/ZSrhBygyamNZfax0ma0j+ZS9AZql
 kPxQS0nOve6NKaP7vXxMmW5sGMnL19ER/Hm31wthGcWI43GVebUdklnzfGaEeSjs
 pmP8oiCNemceqVpiPKxcOxiguf/eyIjP1SFXbguASygUmQeTDbbJ8n1FYznCitue
 xJgWttKWsEf/aMR3eJtQ3aBmHR3rijAV4E28Wlq8XMkocwvpQm2zMocS2Z5BJ80S
 hi1kQVy8+RxNX96tOSp1
 =GSWl
 -----END PGP SIGNATURE-----

Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux

Pull device tree core updates from Grant Likely:
 "Generally minor changes.  A bunch of bug fixes, particularly for
  initialization and some refactoring.  Most notable change is feeding
  the entire flattened tree into the random pool at boot.  May not be
  significant, but shouldn't hurt either"

Tim Bird questions whether the boot time cost of the random feeding may
be noticeable.  And "add_device_randomness()" is definitely not some
speed demon of a function.

* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
  of/platform: add error reporting to of_amba_device_create()
  irq/of: Fix comment typo for irq_of_parse_and_map
  of: Feed entire flattened device tree into the random pool
  of/fdt: Clean up casting in unflattening path
  of/fdt: Remove duplicate memory clearing on FDT unflattening
  gpio: implement gpio-ranges binding document fix
  of: call __of_parse_phandle_with_args from of_parse_phandle
  of: introduce of_parse_phandle_with_fixed_args
  of: move of_parse_phandle()
  of: move documentation of of_parse_phandle_with_args
  of: Fix missing memory initialization on FDT unflattening
  of: consolidate definition of early_init_dt_alloc_memory_arch()
  of: Make of_get_phy_mode() return int i.s.o. const int
  include: dt-binding: input: create a DT header defining key codes.
  of/platform: Staticize of_platform_device_create_pdata()
  of: Specify initrd location using 64-bit
  dt: Typo fix
  OF: make of_property_for_each_{u32|string}() use parameters if OF is not enabled
2013-09-10 13:53:52 -07:00
Vineet Gupta
07b9b65147 ARC: fix new Section mismatches in build (post __cpuinit cleanup)
--------------->8--------------------
WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_cache_bcr()

WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_mmu_bcr()
--------------->8--------------------

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 19:19:06 +05:30
Mischa Jonker
7efd0da2d1 ARC: Fix __udelay calculation
Cast usecs to u64, to ensure that the (usecs * 4295 * HZ)
multiplication is 64 bit.

Initially, the (usecs * 4295 * HZ) part was done as a 32 bit
multiplication, with the result casted to 64 bit. This led to some bits
falling off, causing a "DMA initialization error" in the stmmac Ethernet
driver, due to a premature timeout.
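
i.e. roughly (a sketch of the intent):

	/* the (u64) cast forces a 64-bit multiply, so the high bits of
	 * usecs * 4295 * HZ are not silently dropped
	 */
	loops = ((u64) usecs * 4295 * HZ * loops_per_jiffy) >> 32;
	__delay(loops);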

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 10:31:12 +05:30
Mischa Jonker
8508d5653f ARC: remove console_verbose() from setup_arch()
It prevents kernel parameters such as 'loglevel' from doing their job.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 10:31:12 +05:30
Mischa Jonker
6532b02fe5 ARC: Add read*_relaxed to asm/io.h
Some drivers require these, and ARC didn't have them yet.
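
On ARC these can simply alias the regular accessors, roughly:

	/* no weaker ordering available; map the _relaxed forms 1:1 */
	#define readb_relaxed(c)	readb(c)
	#define readw_relaxed(c)	readw(c)
	#define readl_relaxed(c)	readl(c)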

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 10:31:11 +05:30
Noam Camus
7d669a193b ARC: Handle un-aligned user space access in BE.
Add endian awareness to the unaligned access exception handling.

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-05 10:31:11 +05:30
Vineet Gupta
947bf103fc ARC: [ASID] Track ASID allocation cycles/generations
This helps remove asid-to-mm reverse map

While mm->context.id contains the ASID assigned to a process, our ASID
allocator also used asid_mm_map[] reverse map. In a new allocation
cycle (mm->ASID >= @asid_cache), the Round Robin ASID allocator used this
to check if new @asid_cache belonged to some mm2 (from prev cycle).
If so, it could locate that mm using the ASID reverse map, and mark that
mm as unallocated ASID, to force it to refresh at the time of switch_mm()

However, for SMP, the reverse map has to be maintained per CPU, so it
becomes 2 dimensional; hence it was removed.

With reverse map gone, it is NOT possible to reach out to current
assignee. So we track the ASID allocation generation/cycle and
on every switch_mm(), check if the current generation of the CPU ASID is
the same as the mm's ASID; if not, it is refreshed.

(Based loosely on arch/sh implementation)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:19 +05:30
Vineet Gupta
c60115537c ARC: [ASID] activate_mm() == switch_mm()
ASID allocation changes/2

Use the fact that switch_mm() and activate_mm() are exactly the same code
now, while acknowledging the semantic difference in a comment
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:19 +05:30
Vineet Gupta
3daa48d1d9 ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID
ASID allocation changes/1

This patch does 2 things:

(1) get_new_mmu_context() NOW moves mm->ASID to a new value ONLY if it
    was from a prev allocation cycle/generation OR if mm had no ASID
    allocated (vs. before, which would unconditionally move to a new ASID)

    Callers desiring an unconditional update of ASID, e.g. local_flush_tlb_mm()
    (for parent's address space invalidation at fork) need to first force
    the parent to an unallocated ASID.

(2) get_new_mmu_context() always sets the MMU PID reg with unchanged/new
    ASID value.

The gains are:
- consolidation of all asid alloc logic into get_new_mmu_context()
- avoiding code duplication in switch_mm() for PID reg setting
- Enables future change to fold activate_mm() into switch_mm()

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:18 +05:30
Vineet Gupta
5bd87adf9b ARC: [ASID] Refactor the TLB paranoid debug code
-Asm code already has the SW and HW ASID values, so they can be
 passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:18 +05:30
Vineet Gupta
ade922f8e2 ARC: [ASID] Remove legacy/unused debug code
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:17 +05:30
Vineet Gupta
c0857f5d0e ARC: No need to flush the TLB in early boot
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 10:48:14 +05:30
Vineet Gupta
483e9bcb01 ARC: MMUv4 preps/3 - Abstract out TLB Insert/Delete
This reorganizes the current TLB operations into pseudo-ops to better
pair with MMUv4's native Insert/Delete operations

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 10:22:48 +05:30
Vineet Gupta
d091fcb97f ARC: MMUv4 preps/2 - Reshuffle PTE bits
With previous commit freeing up PTE bits, reassign them so as to:

- Match the bit to H/w counterpart where possible
  (e.g. MMUv2 GLOBAL/PRESENT, this avoids a shift in create_tlb())
- Avoid holes in _PAGE_xxx definitions

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 10:19:12 +05:30
Vineet Gupta
64b703ef27 ARC: MMUv4 preps/1 - Fold PTE K/U access flags
The current ARC VM code has 13 flags in the Page Table entry: some software
(accessed/dirty/non-linear-maps) and the rest hardware specific. With an 8k
MMU page, we need 19 bits for addressing the page frame, so the remaining 13
bits are just about enough to accommodate the current flags.

In MMUv4 there are 2 additional flags, SZ (normal or super page) and WT
(cache access mode write-thru) - and additionally PFN is 20 bits (vs. 19
before for 8k). Thus these can't be held in current PTE w/o making each
entry 64bit wide.

It seems there is some scope of compressing the current PTE flags (and
freeing up a few bits). Currently PTE contains fully orthogonal distinct
access permissions for kernel and user mode (Kr, Kw, Kx; Ur, Uw, Ux)
which can be folded into one set (R, W, X). The translation of 3 PTE
bits into 6 TLB bits (when programming the MMU) can be done based on the
following prerequisites/assumptions:

1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to
   0x7FFF_FFFF), PTE additionally has PAGE_GLOBAL flag set (and user
   space entries can never be global). Thus such a PTE can translate
   to Kr, Kw, Kx (as appropriate) and zero for User mode counterparts.

2. For non global entries, the PTE flags can be used to create mirrored
   K and U TLB bits. This is true after commit a950549c67
   "ARC: copy_(to|from)_user() to honor usermode-access permissions"
   which ensured that user-space translations _MUST_ have same access
   permissions for both U/K mode accesses so that  copy_{to,from}_user()
   play fair with fault based CoW break and such...

There is no such thing as a free lunch - the cost is slightly inflated
TLB-Miss Handlers.
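
Illustratively, the TLB programming expands the folded bits along these
lines (mkperms_k/mkperms_u are made-up helper names for the example):

	/* global (kernel-only) mappings get only K-mode permissions;
	 * everything else mirrors the same R/W/X into K and U bits
	 */
	unsigned int kperms = mkperms_k(pte);	/* Kr/Kw/Kx from R/W/X */

	if (pte_val(pte) & _PAGE_GLOBAL)
		tlb_perms = kperms;
	else
		tlb_perms = kperms | mkperms_u(pte);	/* plus Ur/Uw/Ux */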

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-29 17:51:36 +05:30
Vineet Gupta
4b06ff35fb ARC: Code cosmetics (Nothing semantical)
* reduce editor lines taken by pt_regs
* ARCompact ISA specific part of TLB Miss handlers clubbed together
* cleanup some comments

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-29 17:51:15 +05:30
Grant Likely
a1727da599 of: consolidate definition of early_init_dt_alloc_memory_arch()
Most architectures use the same implementation. Collapse the common ones
into a single weak function that can be overridden.
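
The common definition ends up as a weak, bootmem-backed allocator, roughly:

	void * __init __weak early_init_dt_alloc_memory_arch(u64 size, u64 align)
	{
		return __alloc_bootmem(size, align, __pa(MAX_DMA_ADDRESS));
	}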

Signed-off-by: Grant Likely <grant.likely@linaro.org>
2013-08-28 21:18:32 +01:00
Grant Likely
8be137f266 Linux 3.11-rc7
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.19 (GNU/Linux)
 
 iQEcBAABAgAGBQJSGqS5AAoJEHm+PkMAQRiGFxEH/3VrqF6WAkcviNiW/0DCdO8k
 v6Wi7Sp5LxVkwzmOCHCV1tTHwLRlH3cB9YmJlGQ0kHCREaAuEQAB0xJXIW7dnyYj
 Qq7KoRZEMe3wizmjEsj8qsrhfMLzHjBw67hBz2znwW/4P7YdgzwD7KRiEat+yRC9
 ON3nNL2zIqpfk92RXvVrSVl4KMEM+WNbOfiffgBiEP24Ja1MJMFH1d4i6hNOaB0x
 9Pb3Lw8let92x+8Ao5jnjKdKMgVsoZWbN/TgQR8zZOHM38AGGiDgk18vMz+L+hpS
 jqfjckxj1m30jGq0qZ9ZbMZx3IGif4KccVr30MqNHJpwi6Q24qXvT3YfA3HkstM=
 =nAab
 -----END PGP SIGNATURE-----

Merge tag 'v3.11-rc7' into devicetree/next

Linux 3.11-rc7
2013-08-28 20:18:13 +01:00
Vineet Gupta
fce16bc35a ARC: Entry Handler tweaks: Optimize away redundant IRQ_DISABLE_SAVE
In the exception return path, for both U/K cases, intr are already
disabled (for various existing reasons). So when we drop down to
@restore_regs, we need not redo that.

There was a subtle issue - intr were NOT being disabled for the
ret-to-kernel-but-no-preemption case - now fixed by moving the
IRQ_DISABLE further up in @resume_kernel_mode.

So what do we gain:

* Shaves off a few insn in return path.

* Eliminates the need for IRQ_DISABLE_SAVE assembler macro for ARCv2
  hence allows for entry code sharing.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-26 09:40:25 +05:30
Vineet Gupta
37f3ac498c ARC: Exception Handlers Code consolidation
After the recent cleanups, all the exception handlers now have the same
boilerplate prologue code. Move that into a common macro.

This reduces readability but helps greatly with sharing / duplicating
entry code with ARCv2 ISA where the handlers are pretty much the same,
just the entry prologue is different (due to hardware assist).

Also while at it, add the missing FAKE_RET_FROM_EXCPN calls in a couple of
places to drop down to pure kernel mode (from exception mode) before
jumping off into "C" code.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-26 09:40:25 +05:30
Vineet Gupta
fe240f11cd ARC: Add some .gitignore entries
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-26 09:40:24 +05:30
Joern Rennecke
b0f55f2a1a ARC: [lib] strchr breakage in Big-endian configuration
For a 2-byte-aligned search buffer, strchr() was returning a pointer
outside of the buffer (buf - 1).

------------->8----------------
    // Input buffer (default 4 byte aligned)
    char *buffer = "1AA_";

    // actual search start (to mimic 2 byte alignment)
    char *current_line = &(buffer[2]);

    // Character to search for
    char c = 'A';

    char *c_pos = strchr(current_line, c);

    printf("%s\n", c_pos) --> 'AA_' as oppose to 'A_'
------------->8----------------

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Debugged-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Cc: <stable@vger.kernel.org> # [3.9 and 3.10]
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Joern Rennecke  <joern.rennecke@embecosm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-08-24 11:24:53 -07:00