Commit aa918db707fb for kernel

commit aa918db707fba507e85217961643281ee8dfb2ed
Merge: 494e7fe591bf f90b474a3574
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Mar 30 13:45:28 2025 -0700

    Merge tag 'bpf_try_alloc_pages' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

    Pull bpf try_alloc_pages() support from Alexei Starovoitov:
     "The pull includes work from Sebastian, Vlastimil and myself with a lot
      of help from Michal and Shakeel.

      This is a first step towards making kmalloc reentrant to get rid of
      slab wrappers: bpf_mem_alloc, kretprobe's objpool, etc. These patches
      make the page allocator safe to call from any context.

      Vlastimil kicked off this effort at LSFMM 2024:

        https://lwn.net/Articles/974138/

      and we continued at LSFMM 2025:

        https://lore.kernel.org/all/CAADnVQKfkGxudNUkcPJgwe3nTZ=xohnRshx9kLZBTmR_E1DFEg@mail.gmail.com/

      Why:

      SLAB wrappers bind memory to a particular subsystem, making it
      unavailable to the rest of the kernel. Some BPF maps in production
      consume Gbytes of preallocated memory. The top 5 at Meta: 1.5G, 1.2G,
      1.1G, 300M, 200M. Once we have a kmalloc that works in any context,
      BPF map preallocation won't be necessary.

      How:

      The synchronous kmalloc/page alloc stack has multiple stages, going
      from fast to slow: cmpxchg16 -> slab_alloc -> new_slab -> alloc_pages
      -> rmqueue_pcplist -> __rmqueue, where rmqueue_pcplist was already
      relying on trylock.

      This set changes rmqueue_bulk/rmqueue_buddy to attempt a trylock and
      return ENOMEM if alloc_flags & ALLOC_TRYLOCK. It then wraps this
      functionality into the try_alloc_pages() helper. We make sure that
      the logic is sane under PREEMPT_RT.

      End result: try_alloc_pages()/free_pages_nolock() are safe to call
      from any context.

      A try_kmalloc() for any context, using a similar trylock approach,
      will follow. It will use try_alloc_pages() when slab needs a new page.
      Though such a try_kmalloc/page_alloc() is an opportunistic allocator,
      this design ensures that the probability of successful allocation of
      small objects (up to one page in size) is high.

      Even before we have try_kmalloc(), we already use try_alloc_pages() in
      the BPF arena implementation, and it's going to be used more
      extensively in BPF"

    * tag 'bpf_try_alloc_pages' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
      mm: Fix the flipped condition in gfpflags_allow_spinning()
      bpf: Use try_alloc_pages() to allocate pages for bpf needs.
      mm, bpf: Use memcg in try_alloc_pages().
      memcg: Use trylock to access memcg stock_lock.
      mm, bpf: Introduce free_pages_nolock()
      mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
      locking/local_lock: Introduce localtry_lock_t
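
    For a rough usage sketch of the two helpers named in the shortlog above,
    assume the try_alloc_pages(nid, order) and free_pages_nolock(page, order)
    signatures introduced by this series; the wrapper functions below are
    hypothetical:

      #include <linux/gfp.h>
      #include <linux/numa.h>

      /* Hypothetical wrapper: grab one page without sleeping or spinning. */
      static struct page *grab_scratch_page(void)
      {
              /*
               * Opportunistic: may return NULL even when memory is available,
               * e.g. if the relevant lock is contended. Callers must cope.
               */
              return try_alloc_pages(NUMA_NO_NODE, 0);
      }

      /* Hypothetical wrapper: the matching free, also safe from any context. */
      static void release_scratch_page(struct page *page)
      {
              if (page)
                      free_pages_nolock(page, 0);
      }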

diff --cc mm/memcontrol.c
index a037ec92881d,092cab99dec7..83c2df73e4b6
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@@ -1921,18 -1938,9 +1938,18 @@@ void drain_all_stock(struct mem_cgroup
  static int memcg_hotplug_cpu_dead(unsigned int cpu)
  {
  	struct memcg_stock_pcp *stock;
 +	struct obj_cgroup *old;
 +	unsigned long flags;

  	stock = &per_cpu(memcg_stock, cpu);
 +
 +	/* drain_obj_stock requires stock_lock */
- 	local_lock_irqsave(&memcg_stock.stock_lock, flags);
++	localtry_lock_irqsave(&memcg_stock.stock_lock, flags);
 +	old = drain_obj_stock(stock);
- 	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
++	localtry_unlock_irqrestore(&memcg_stock.stock_lock, flags);
 +
  	drain_stock(stock);
 +	obj_cgroup_put(old);

  	return 0;
  }
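
The memcontrol.c hunk above reflects that memcg_stock.stock_lock is now a
localtry_lock_t, so its lock/unlock sites use the localtry_* variants. A
minimal sketch of that pattern, assuming the localtry_lock_irqsave(),
localtry_trylock_irqsave() and localtry_unlock_irqrestore() primitives
introduced by the locking/local_lock commit in this series; the per-CPU
structure and functions below are illustrative:

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct demo_pcp {
        localtry_lock_t lock;   /* takes the place of a plain local_lock_t */
        unsigned long   cached;
};

static DEFINE_PER_CPU(struct demo_pcp, demo_stock) = {
        .lock = INIT_LOCALTRY_LOCK(lock),
};

/* Process/softirq context: acquisition cannot fail, as with local_lock. */
static void demo_update(unsigned long val)
{
        unsigned long flags;

        localtry_lock_irqsave(&demo_stock.lock, flags);
        __this_cpu_write(demo_stock.cached, val);
        localtry_unlock_irqrestore(&demo_stock.lock, flags);
}

/* Any context (e.g. NMI): only a trylock is attempted, so no deadlock. */
static bool demo_try_update(unsigned long val)
{
        unsigned long flags;

        if (!localtry_trylock_irqsave(&demo_stock.lock, flags))
                return false;   /* already held on this CPU; caller backs off */

        __this_cpu_write(demo_stock.cached, val);
        localtry_unlock_irqrestore(&demo_stock.lock, flags);
        return true;
}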