slub: add missed accounting
With the per-cpu partial list, a slab is added to the partial list first and then moved to the node list. The __slab_free() code path for add/remove_partial is almost deprecated (except for slub debug). But we forgot to account add/remove_partial events when moving per-cpu partial pages to the node list, so the statistics for those events are always 0. Add the corresponding accounting.

This is against the patch "slub: use correct parameter to add a page to partial list tail".

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
parent 8f1e33daed
commit b13683d1cc
1 changed file with 5 additions and 2 deletions
mm/slub.c
@@ -1901,11 +1901,14 @@ static void unfreeze_partials(struct kmem_cache *s)
 			}
 
 			if (l != m) {
-				if (l == M_PARTIAL)
+				if (l == M_PARTIAL) {
 					remove_partial(n, page);
-				else
+					stat(s, FREE_REMOVE_PARTIAL);
+				} else {
 					add_partial(n, page,
 						DEACTIVATE_TO_TAIL);
+					stat(s, FREE_ADD_PARTIAL);
+				}
 
 				l = m;
 			}
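For context, the two stat() calls added above are essentially free. A minimal sketch of SLUB's per-cpu event accounting helper, written from memory of mm/slub.c around this era (illustrative, not authoritative):

/*
 * Sketch of SLUB's event accounting (based on mm/slub.c circa v3.2;
 * exact details may differ). With CONFIG_SLUB_STATS enabled, each call
 * increments a per-cpu counter for the named event; without it, the
 * helper compiles to nothing, so the FREE_ADD_PARTIAL and
 * FREE_REMOVE_PARTIAL bumps added by this patch cost nothing in
 * production builds.
 */
static inline void stat(const struct kmem_cache *s, enum stat_item si)
{
#ifdef CONFIG_SLUB_STATS
	__this_cpu_inc(s->cpu_slab->stat[si]);
#endif
}

When CONFIG_SLUB_STATS is enabled, these counters are exported through sysfs (per-cache files such as /sys/kernel/slab/&lt;cache&gt;/free_add_partial and free_remove_partial), which is where the always-zero values this patch fixes would have been observed.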