rhashtable: rhashtable_remove() must unlink in both tbl and future_tbl

Since removals can occur while a resize is in progress, an entry may be
reachable from both tbl and future_tbl at the time the removal is
requested. rhashtable_remove() must therefore unlink the entry from both
tables in that case. The existing code searched both tables but stopped
at the first match, leaving the link in the other table behind.

Failing to unlink the entry from both tables resulted in a use-after-free.
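The failure mode can be modelled in plain userspace C. This is a hedged sketch, not the kernel code: the 'node' type, the two-chain layout, and the helper names are invented for illustration. The idea it models is that, during a resize, a 'future_tbl' bucket head can point into the middle of an old 'tbl' chain, so a remove that stops at the first match leaves the other table holding a dangling pointer to the freed entry.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model, not the kernel structures: singly linked bucket chains,
 * where a resize has left the 'future_tbl' bucket head pointing into
 * the middle of the old 'tbl' chain. */
struct node {
	int key;
	struct node *next;
};

/* Unlink 'obj' from one chain; returns true if it was linked there. */
static bool unlink_one(struct node **head, struct node *obj)
{
	for (struct node **pprev = head; *pprev; pprev = &(*pprev)->next) {
		if (*pprev == obj) {
			*pprev = obj->next;
			return true;
		}
	}
	return false;
}

/* The fixed pattern: record success but keep searching, so the entry
 * is unlinked from both tables before the caller may free it. The
 * buggy version returned after the first successful unlink. */
static bool remove_both(struct node **tbl, struct node **future_tbl,
			struct node *obj)
{
	bool ret = unlink_one(tbl, obj);

	if (tbl != future_tbl)
		ret |= unlink_one(future_tbl, obj);
	return ret;
}
```

With a chain a -> b -> c in 'tbl' and the 'future_tbl' bucket head pointing at b, an early-return remove of b would fix up 'tbl' but leave 'future_tbl' pointing at b after it is freed; remove_both() leaves both heads consistent and reports success exactly once.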

Fixes: 97defe1ecf ("rhashtable: Per bucket locks & deferred expansion/shrinking")
Reported-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit fe6a043c53
parent 1dc7b90f7c
Author: Thomas Graf, 2015-01-21 11:54:01 +00:00
Committed by: David S. Miller

--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -585,6 +585,7 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 	struct rhash_head *he;
 	spinlock_t *lock;
 	unsigned int hash;
+	bool ret = false;
 
 	rcu_read_lock();
 	tbl = rht_dereference_rcu(ht->tbl, ht);
@@ -602,17 +603,16 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 		}
 
 		rcu_assign_pointer(*pprev, obj->next);
-		atomic_dec(&ht->nelems);
 
-		spin_unlock_bh(lock);
-
-		rhashtable_wakeup_worker(ht);
-
-		rcu_read_unlock();
-
-		return true;
+		ret = true;
+		break;
 	}
 
+	/* The entry may be linked in either 'tbl', 'future_tbl', or both.
+	 * 'future_tbl' only exists for a short period of time during
+	 * resizing. Thus traversing both is fine and the added cost is
+	 * very rare.
+	 */
 	if (tbl != rht_dereference_rcu(ht->future_tbl, ht)) {
 		spin_unlock_bh(lock);
 
@@ -625,9 +625,15 @@ bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
 	}
 
 	spin_unlock_bh(lock);
+
+	if (ret) {
+		atomic_dec(&ht->nelems);
+		rhashtable_wakeup_worker(ht);
+	}
+
 	rcu_read_unlock();
 
-	return false;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(rhashtable_remove);