Commit 148f95f75c51 for kernel

commit 148f95f75c513936d466bcc7e6bf73298da2212b
Merge: 41f1a08645ab 815c8e35511d
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed Feb 11 14:12:50 2026 -0800

    Merge tag 'slab-for-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab

    Pull slab updates from Vlastimil Babka:

     - The percpu sheaves caching layer was introduced as opt-in in 6.18;
       now we enable it for all caches and remove the previous cpu
       (partial) slab caching mechanism.

       Besides the lower locking overhead and a much more likely fastpath
       when freeing, this removes the rather complicated code related to
       the cpu slab lockless fastpaths (using this_cpu_try_cmpxchg128/64)
       and all its complications for PREEMPT_RT or kmalloc_nolock().

       The lockless slab freelist+counters update operation using
       try_cmpxchg128/64 remains; it is crucial for freeing remote NUMA
       objects and for flushing objects from sheaves to slabs mostly
       without taking the node list_lock (Vlastimil Babka). A conceptual
       sketch of this double-word update follows the list below.

     - Eliminate slabobj_ext metadata overhead when possible. Instead of
       using kmalloc() to allocate the array for memcg and/or allocation
       profiling tag pointers, use leftover space in a slab or per-object
       padding due to alignment (Harry Yoo). A sketch of the leftover-space
       check also follows the list.

     - Various followup improvements to the above (Hao Li)
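
       A minimal userspace sketch of the double-word freelist+counters
       update mentioned in the first item above. The struct layout, names
       and memory orders here are illustrative only, not the mm/slub.c
       code; it assumes x86-64 and builds with e.g.
       "gcc -O2 -mcx16 sketch.c -latomic":

       #include <stdbool.h>
       #include <stdio.h>

       /* Hypothetical stand-in for the freelist+counters pair that the
        * kernel updates with a single try_cmpxchg128/64. */
       struct freelist_counters {
               void *freelist;          /* first free object in the slab */
               unsigned long counters;  /* packed inuse/objects/frozen   */
       } __attribute__((aligned(16)));

       /* One 16-byte compare-and-swap covering both words: the freelist
        * head and the counters either update together or not at all. */
       static bool update_freelist(struct freelist_counters *s,
                                   struct freelist_counters old,
                                   struct freelist_counters new)
       {
               return __atomic_compare_exchange(s, &old, &new, false,
                                                __ATOMIC_ACQ_REL,
                                                __ATOMIC_RELAXED);
       }

       int main(void)
       {
               void *obj;  /* stands in for a freed object's free pointer */
               struct freelist_counters slab = { NULL, 8 }; /* 8 in use   */
               struct freelist_counters old, new;

               do {
                       old = slab;          /* snapshot both words        */
                       obj = old.freelist;  /* link object to old head    */
                       new.freelist = &obj;
                       new.counters = old.counters - 1; /* one fewer used */
               } while (!update_freelist(&slab, old, new));

               printf("freelist=%p inuse=%lu\n", slab.freelist, slab.counters);
               return 0;
       }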

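       A similarly hypothetical illustration of the leftover-space idea
       from the second item above: check whether the unused tail of a
       slab can hold the slabobj_ext array instead of kmalloc()ing it.
       The helper name, struct layout and sizes are made up for the
       example, not the actual implementation:

       #include <stddef.h>
       #include <stdio.h>

       struct slabobj_ext {     /* per-object memcg / alloc tag pointer */
               void *ref;
       };

       /* If the space left after laying out 'objects' objects of 'size'
        * bytes in a 'slab_size' slab can hold the slabobj_ext array,
        * return the array's offset; otherwise return 0 and the caller
        * would fall back to a separate allocation. */
       static size_t obj_exts_offset_in_leftover(size_t slab_size,
                                                 size_t size,
                                                 unsigned int objects)
       {
               size_t used = (size_t)objects * size;
               size_t need = (size_t)objects * sizeof(struct slabobj_ext);

               if (slab_size - used >= need)
                       return used;    /* array fits in the unused tail */
               return 0;
       }

       int main(void)
       {
               /* A 4 KiB slab of 640-byte objects: 6 objects use 3840
                * bytes, leaving 256 bytes, enough for 6 ext pointers. */
               size_t off = obj_exts_offset_in_leftover(4096, 640, 6);

               if (off)
                       printf("slabobj_ext array at offset %zu\n", off);
               else
                       printf("no room, allocate the array separately\n");
               return 0;
       }
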
    * tag 'slab-for-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab: (39 commits)
      slub: let need_slab_obj_exts() return false if SLAB_NO_OBJ_EXT is set
      mm/slab: only allow SLAB_OBJ_EXT_IN_OBJ for unmergeable caches
      mm/slab: place slabobj_ext metadata in unused space within s->size
      mm/slab: move [__]ksize and slab_ksize() to mm/slub.c
      mm/slab: save memory by allocating slabobj_ext array from leftover
      mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
      mm/slab: use stride to access slabobj_ext
      mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
      ext4: specify the free pointer offset for ext4_inode_cache
      mm/slab: allow specifying free pointer offset when using constructor
      mm/slab: use unsigned long for orig_size to ensure proper metadata align
      slub: clarify object field layout comments
      mm/slab: avoid allocating slabobj_ext array from its own slab
      slub: avoid list_lock contention from __refill_objects_any()
      mm/slub: cleanup and repurpose some stat items
      mm/slub: remove DEACTIVATE_TO_* stat items
      slab: remove frozen slab checks from __slab_free()
      slab: update overview comments
      slab: refill sheaves from all nodes
      slab: remove unused PREEMPT_RT specific macros
      ...

diff --cc mm/slub.c
index cdc1e652ec52,11a99bd06ac7..18899017512c
--- a/mm/slub.c
+++ b/mm/slub.c
@@@ -6689,12 -6097,8 +6097,12 @@@ void slab_free(struct kmem_cache *s, st
  static noinline
  void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
  {
 +	struct slab *slab = virt_to_slab(object);
 +
 +	alloc_tagging_slab_free_hook(s, slab, &object, 1);
 +
  	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
- 		do_slab_free(s, slab, object, object, 1, _RET_IP_);
 -		__slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
++		__slab_free(s, slab, object, object, 1, _RET_IP_);
  }
  #endif