Commit 015e7b0b0e8e for kernel

commit 015e7b0b0e8e51f7321ec2aafc1d7fc0a8a5536f
Merge: b6d993310a65 ff34657aa72a
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed Dec 3 16:54:54 2025 -0800

    Merge tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

    Pull bpf updates from Alexei Starovoitov:

     - Convert selftests/bpf/test_tc_edt and test_tc_tunnel from .sh scripts
       to the test_progs runner (Alexis Lothoré)

     - Convert selftests/bpf/test_xsk to the test_progs runner (Bastien
       Curutchet)

     - Replace bpf memory allocator with kmalloc_nolock() in
       bpf_local_storage (Amery Hung), and in bpf streams and range tree
       (Puranjay Mohan)

     - Introduce support for indirect jumps in BPF verifier and x86 JIT
       (Anton Protopopov) and arm64 JIT (Puranjay Mohan)

     - Remove runqslower bpf tool (Hoyeon Lee)

     - Fix corner cases in the verifier to close several syzbot reports
       (Eduard Zingerman, KaFai Wan)

     - Several improvements in deadlock detection in rqspinlock (Kumar
       Kartikeya Dwivedi)

     - Implement "jmp" mode for BPF trampoline and corresponding
       DYNAMIC_FTRACE_WITH_JMP. It improves "fexit" program type performance
       from 80 M/s to 136 M/s. With Steven's Ack. (Menglong Dong)

     - Add ability to test non-linear skbs in BPF_PROG_TEST_RUN (Paul
       Chaignon)

     - Do not let BPF_PROG_TEST_RUN emit invalid GSO types to stack (Daniel
       Borkmann)

     - Generalize buildid reader into bpf_dynptr (Mykyta Yatsenko)

     - Optimize bpf_map_update_elem() for map-in-map types (Ritesh
       Oedayrajsingh Varma); see the usage sketch after this list

     - Introduce overwrite mode for BPF ring buffer (Xu Kuohai)

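    As context for the map-in-map bpf_map_update_elem() item above, here is a
    minimal userspace sketch of the operation that series optimizes. It uses
    only long-standing libbpf calls (bpf_map_create(), bpf_map_update_elem());
    the map names, sizes and omitted error handling are illustrative, not
    taken from the series itself:

        #include <bpf/bpf.h>

        int main(void)
        {
                __u32 key = 0;
                LIBBPF_OPTS(bpf_map_create_opts, opts);

                /* Template inner map: fixes the shape of maps the outer map may hold. */
                int tmpl_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "inner_tmpl",
                                             sizeof(__u32), sizeof(__u64), 1, NULL);

                /* Outer array-of-maps: on update, its values are inner map fds. */
                opts.inner_map_fd = tmpl_fd;
                int outer_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY_OF_MAPS, "outer",
                                              sizeof(__u32), sizeof(__u32), 4, &opts);

                /* A concrete inner map, then the update path being optimized. */
                int inner_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "inner0",
                                              sizeof(__u32), sizeof(__u64), 1, NULL);
                return bpf_map_update_elem(outer_fd, &key, &inner_fd, BPF_ANY);
        }
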
    * tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (169 commits)
      bpf: optimize bpf_map_update_elem() for map-in-map types
      bpf: make kprobe_multi_link_prog_run always_inline
      selftests/bpf: do not hardcode target rate in test_tc_edt BPF program
      selftests/bpf: remove test_tc_edt.sh
      selftests/bpf: integrate test_tc_edt into test_progs
      selftests/bpf: rename test_tc_edt.bpf.c section to expose program type
      selftests/bpf: Add success stats to rqspinlock stress test
      rqspinlock: Precede non-head waiter queueing with AA check
      rqspinlock: Disable spinning for trylock fallback
      rqspinlock: Use trylock fallback when per-CPU rqnode is busy
      rqspinlock: Perform AA checks immediately
      rqspinlock: Enclose lock/unlock within lock entry acquisitions
      bpf: Remove runqslower tool
      selftests/bpf: Remove usage of lsm/file_alloc_security in selftest
      bpf: Disable file_alloc_security hook
      bpf: check for insn arrays in check_ptr_alignment
      bpf: force BPF_F_RDONLY_PROG on insn array creation
      bpf: Fix exclusive map memory leak
      selftests/bpf: Make CS length configurable for rqspinlock stress test
      selftests/bpf: Add lock wait time stats to rqspinlock stress test
      ...

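The conflict resolution below in kernel/bpf/stackmap.c combines two lines of
development: one side replaces the open-coded stack-depth clamp with a
stack_map_calculate_max_depth() helper, the other adds a trailing argument to
get_perf_callchain(). As a hedged sketch only, reconstructed from the inline
logic the hunk removes (the actual helper in bpf-next may differ), such a
helper would look roughly like:

    /* Sketch reconstructed from the removed lines; not the real helper body. */
    static u32 stack_map_calculate_max_depth(u32 size, u32 elem_size, u64 flags)
    {
            u32 skip = flags & BPF_F_SKIP_FIELD_MASK; /* caller-requested skipped frames */
            u32 max_depth = size / elem_size;         /* entries fitting the map value (assumed) */

            max_depth += skip;
            if (max_depth > sysctl_perf_event_max_stack)
                    max_depth = sysctl_perf_event_max_stack;

            return max_depth;
    }
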
diff --cc kernel/bpf/stackmap.c
index 8f1dacaf01fe,2365541c81dd..da3d328f5c15
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@@ -310,12 -333,9 +333,9 @@@ BPF_CALL_3(bpf_get_stackid, struct pt_r
  			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
  		return -EINVAL;

- 	max_depth += skip;
- 	if (max_depth > sysctl_perf_event_max_stack)
- 		max_depth = sysctl_perf_event_max_stack;
-
+ 	max_depth = stack_map_calculate_max_depth(map->value_size, elem_size, flags);
  	trace = get_perf_callchain(regs, kernel, user, max_depth,
 -				   false, false);
 +				   false, false, 0);

  	if (unlikely(!trace))
  		/* couldn't fetch the stack trace */
@@@ -446,13 -463,15 +463,15 @@@ static long __bpf_get_stack(struct pt_r
  	if (may_fault)
  		rcu_read_lock(); /* need RCU for perf's callchain below */

- 	if (trace_in)
+ 	if (trace_in) {
  		trace = trace_in;
- 	else if (kernel && task)
+ 		trace->nr = min_t(u32, trace->nr, max_depth);
+ 	} else if (kernel && task) {
  		trace = get_callchain_entry_for_task(task, max_depth);
- 	else
+ 	} else {
  		trace = get_perf_callchain(regs, kernel, user, max_depth,
 -					   crosstask, false);
 +					   crosstask, false, 0);
+ 	}

  	if (unlikely(!trace) || trace->nr < skip) {
  		if (may_fault)