Commit e9217ca77dc3 for kernel

commit e9217ca77dc35b4978db0fe901685ddb3f1e223a
Author: Harry Yoo <harry.yoo@oracle.com>
Date:   Mon Feb 23 16:58:09 2026 +0900

    mm/slab: initialize slab->stride early to avoid memory ordering issues

    When alloc_slab_obj_exts() is called later (instead of during slab
    allocation and initialization), slab->stride and slab->obj_exts are
    updated after the slab is already accessible by multiple CPUs.

    The current implementation does not enforce memory ordering between the
    writes to slab->stride and slab->obj_exts. For correctness, the update
    to slab->stride must be visible before slab->obj_exts. Otherwise,
    concurrent readers may observe slab->obj_exts as non-zero while
    slab->stride is still stale.

    With stale slab->stride, slab_obj_ext() could return the wrong obj_ext.
    This could cause two problems:

      - obj_cgroup_put() is called on the wrong objcg, leading to a
        use-after-free [1]: the reference count is decremented more times
        than it was incremented.

      - refill_obj_stock() is called on the wrong objcg, leading to a
        page_counter overflow [2]: more memory is uncharged than was
        charged.

    Fix this by unconditionally initializing slab->stride in
    alloc_slab_obj_exts_early(), before the need_slab_obj_exts() check.
    In the SLAB_OBJ_EXT_IN_OBJ case, the stride is overwritten later in
    the same function.

    This ensures updates to slab->stride become visible before the slab
    can be accessed by other CPUs via the per-node partial slab list
    (protected by a spinlock with acquire/release semantics).

    Thanks to Shakeel Butt for pointing out this issue [3].

    [vbabka@kernel.org: the bug reports [1] and [2] are not yet fully fixed,
     with investigation ongoing, but it is nevertheless a step in the right
     direction to only set stride once after allocating the slab and not
     change it later ]

    Fixes: 7a8e71bc619d ("mm/slab: use stride to access slabobj_ext")
    Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
    Link: https://lore.kernel.org/lkml/ca241daa-e7e7-4604-a48d-de91ec9184a5@linux.ibm.com [1]
    Link: https://lore.kernel.org/all/ddff7c7d-c0c3-4780-808f-9a83268bbf0c@linux.ibm.com [2]
    Link: https://lore.kernel.org/linux-mm/aZu9G9mVIVzSm6Ft@hyeyoo [3]
    Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
    Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

diff --git a/mm/slub.c b/mm/slub.c
index 52f021711744..0c906fefc31b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2196,7 +2196,6 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 retry:
 	old_exts = READ_ONCE(slab->obj_exts);
 	handle_failed_objexts_alloc(old_exts, vec, objects);
-	slab_set_stride(slab, sizeof(struct slabobj_ext));

 	if (new_slab) {
 		/*
@@ -2272,6 +2271,9 @@ static void alloc_slab_obj_exts_early(struct kmem_cache *s, struct slab *slab)
 	void *addr;
 	unsigned long obj_exts;

+	/* Initialize stride early to avoid memory ordering issues */
+	slab_set_stride(slab, sizeof(struct slabobj_ext));
+
 	if (!need_slab_obj_exts(s))
 		return;

@@ -2288,7 +2290,6 @@ static void alloc_slab_obj_exts_early(struct kmem_cache *s, struct slab *slab)
 		obj_exts |= MEMCG_DATA_OBJEXTS;
 #endif
 		slab->obj_exts = obj_exts;
-		slab_set_stride(slab, sizeof(struct slabobj_ext));
 	} else if (s->flags & SLAB_OBJ_EXT_IN_OBJ) {
 		unsigned int offset = obj_exts_offset_in_object(s);