mm/tlb: Remove tlb_remove_table() non-concurrent condition
Will noted that only checking mm_users is incorrect; we should also
check mm_count in order to cover CPUs that have a lazy reference to
this mm (and could do speculative TLB operations).
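For illustration only (not part of this change): a minimal sketch of the fuller condition being described, assuming the mm_struct counters from the kernel tree; the helper name below is hypothetical.

/*
 * Hypothetical sketch -- not in this commit.  A check that also covers
 * lazy-TLB CPUs would have to look at mm_count in addition to mm_users:
 */
static inline bool mm_cannot_be_walked_concurrently(struct mm_struct *mm)
{
	return atomic_read(&mm->mm_users) < 2 &&	/* no other userspace threads */
	       atomic_read(&mm->mm_count) < 2;		/* no lazy/kernel references */
}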
If removing this turns out to be a performance issue, we can
re-instate a more complete check, but in tlb_table_flush() eliding the
call_rcu_sched().
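A sketch of that alternative, purely illustrative and assuming the tlb_table_flush()/tlb_remove_table_rcu() shapes of this kernel generation; this is not what the patch does:

void tlb_table_flush(struct mmu_gather *tlb)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch) {
		if (atomic_read(&tlb->mm->mm_users) < 2 &&
		    atomic_read(&tlb->mm->mm_count) < 2) {
			/* nobody else can walk these tables: free immediately */
			tlb_remove_table_rcu(&(*batch)->rcu);
		} else {
			/* otherwise wait for a grace period as before */
			call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
		}
		*batch = NULL;
	}
}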
Fixes: 2672391169 ("mm, powerpc: move the RCU page-table freeing into generic code")
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@surriel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent db7ddef301
commit a6f572084f

1 changed file with 0 additions and 9 deletions
@@ -375,15 +375,6 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
 
-	/*
-	 * When there's less then two users of this mm there cannot be a
-	 * concurrent page-table walk.
-	 */
-	if (atomic_read(&tlb->mm->mm_users) < 2) {
-		__tlb_remove_table(table);
-		return;
-	}
-
 	if (*batch == NULL) {
 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
 		if (*batch == NULL) {