Commit 4cff5c05e076 for kernel

commit 4cff5c05e076d2ee4e34122aa956b84a2eaac587
Merge: 541c43310e85 fb4ddf208511
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Thu Feb 12 11:32:37 2026 -0800

    Merge tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

    Pull MM updates from Andrew Morton:

     - "powerpc/64s: do not re-activate batched TLB flush" makes
       arch_{enter|leave}_lazy_mmu_mode() nest properly (Alexander Gordeev)

       It adds a generic enter/leave layer and switches architectures to use
       it. Various hacks were removed in the process.

     - "zram: introduce compressed data writeback" implements data
       compression for zram writeback (Richard Chang and Sergey Senozhatsky)

     - "mm: folio_zero_user: clear page ranges" adds clearing of contiguous
       page ranges for hugepages. Large improvements during demand faulting
       are demonstrated (David Hildenbrand)

     - "memcg cleanups" tidies up some memcg code (Chen Ridong)

 - "mm/damon: introduce {,max_}nr_snapshots and tracepoint for damos
   stats" improves the information provided by DAMOS stats, along with
   their deterministic control and readability (SeongJae Park)

     - "selftests/mm: hugetlb cgroup charging: robustness fixes" fixes a few
       issues in the hugetlb cgroup charging selftests (Li Wang)

     - "Fix va_high_addr_switch.sh test failure - again" addresses several
       issues in the va_high_addr_switch test (Chunyu Hu)

     - "mm/damon/tests/core-kunit: extend existing test scenarios" improves
       the KUnit test coverage for DAMON (Shu Anzai)

     - "mm/khugepaged: fix dirty page handling for MADV_COLLAPSE" fixes a
       glitch in khugepaged which was causing madvise(MADV_COLLAPSE) to
       transiently return -EAGAIN (Shivank Garg)

     - "arch, mm: consolidate hugetlb early reservation" reworks and
       consolidates a pile of straggly code related to reservation of
       hugetlb memory from bootmem and creation of CMA areas for hugetlb
       (Mike Rapoport)

     - "mm: clean up anon_vma implementation" cleans up the anon_vma
       implementation in various ways (Lorenzo Stoakes)

     - "tweaks for __alloc_pages_slowpath()" does a little streamlining of
       the page allocator's slowpath code (Vlastimil Babka)

     - "memcg: separate private and public ID namespaces" cleans up the
       memcg ID code and prevents the internal-only private IDs from being
       exposed to userspace (Shakeel Butt)

     - "mm: hugetlb: allocate frozen gigantic folio" cleans up the
       allocation of frozen folios and avoids some atomic refcount
       operations (Kefeng Wang)

 - "mm/damon: advance DAMOS-based LRU sorting" improves DAMOS's movement
   of memory between the active and inactive LRUs and adds auto-tuning
   of the ratio-based quotas and of monitoring intervals (SeongJae Park)

     - "Support page table check on PowerPC" makes
       CONFIG_PAGE_TABLE_CHECK_ENFORCED work on powerpc (Andrew Donnellan)

     - "nodemask: align nodes_and{,not} with underlying bitmap ops" makes
       nodes_and() and nodes_andnot() propagate the return values from the
       underlying bit operations, enabling some cleanup in calling code
       (Yury Norov)

     - "mm/damon: hide kdamond and kdamond_lock from API callers" cleans up
       some DAMON internal interfaces (SeongJae Park)

 - "mm/khugepaged: cleanups and scan limit fix" does some cleanup work
   in khugepaged and fixes a scan limit accounting issue (Shivank Garg)

     - "mm: balloon infrastructure cleanups" goes to town on the balloon
       infrastructure and its page migration function. Mainly cleanups, also
       some locking simplification (David Hildenbrand)

     - "mm/vmscan: add tracepoint and reason for kswapd_failures reset" adds
       additional tracepoints to the page reclaim code (Jiayuan Chen)

     - "Replace wq users and add WQ_PERCPU to alloc_workqueue() users" is
       part of Marco's kernel-wide migration from the legacy workqueue APIs
       over to the preferred unbound workqueues (Marco Crivellari)

     - "Various mm kselftests improvements/fixes" provides various unrelated
       improvements/fixes for the mm kselftests (Kevin Brodsky)

     - "mm: accelerate gigantic folio allocation" greatly speeds up gigantic
       folio allocation, mainly by avoiding unnecessary work in
       pfn_range_valid_contig() (Kefeng Wang)

     - "selftests/damon: improve leak detection and wss estimation
       reliability" improves the reliability of two of the DAMON selftests
       (SeongJae Park)

     - "mm/damon: cleanup kdamond, damon_call(), damos filter and
       DAMON_MIN_REGION" does some cleanup work in the core DAMON code
       (SeongJae Park)

     - "Docs/mm/damon: update intro, modules, maintainer profile, and misc"
       performs maintenance work on the DAMON documentation (SeongJae Park)

     - "mm: add and use vma_assert_stabilised() helper" refactors and cleans
       up the core VMA code. The main aim here is to be able to use the mmap
       write lock's lockdep state to perform various assertions regarding
       the locking which the VMA code requires (Lorenzo Stoakes)

     - "mm, swap: swap table phase II: unify swapin use" removes some old
       swap code (swap cache bypassing and swap synchronization) which
       wasn't working very well. Various other cleanups and simplifications
       were made. The end result is a 20% speedup in one benchmark (Kairui
       Song)

     - "enable PT_RECLAIM on more 64-bit architectures" makes PT_RECLAIM
       available on 64-bit alpha, loongarch, mips, parisc, and um. Various
       cleanups were performed along the way (Qi Zheng)

    * tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (325 commits)
      mm/memory: handle non-split locks correctly in zap_empty_pte_table()
      mm: move pte table reclaim code to memory.c
  mm: make PT_RECLAIM depend on MMU_GATHER_RCU_TABLE_FREE
      mm: convert __HAVE_ARCH_TLB_REMOVE_TABLE to CONFIG_HAVE_ARCH_TLB_REMOVE_TABLE config
      um: mm: enable MMU_GATHER_RCU_TABLE_FREE
      parisc: mm: enable MMU_GATHER_RCU_TABLE_FREE
      mips: mm: enable MMU_GATHER_RCU_TABLE_FREE
      LoongArch: mm: enable MMU_GATHER_RCU_TABLE_FREE
      alpha: mm: enable MMU_GATHER_RCU_TABLE_FREE
      mm: change mm/pt_reclaim.c to use asm/tlb.h instead of asm-generic/tlb.h
      mm/damon/stat: remove __read_mostly from memory_idle_ms_percentiles
      zsmalloc: make common caches global
      mm: add SPDX id lines to some mm source files
      mm/zswap: use %pe to print error pointers
      mm/vmscan: use %pe to print error pointers
      mm/readahead: fix typo in comment
      mm: khugepaged: fix NR_FILE_PAGES and NR_SHMEM in collapse_file()
      mm: refactor vma_map_pages to use vm_insert_pages
      mm/damon: unify address range representation with damon_addr_range
      mm/cma: replace snprintf with strscpy in cma_new_area
      ...

diff --cc arch/x86/Kconfig
index 66446220afe8,a18b4263151d..e2df1b147184
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@@ -823,7 -807,15 +822,8 @@@ config PARAVIR
  config PARAVIRT_XXL
  	bool
  	depends on X86_64
+ 	select ARCH_HAS_LAZY_MMU_MODE

 -config PARAVIRT_DEBUG
 -	bool "paravirt-ops debugging"
 -	depends on PARAVIRT && DEBUG_KERNEL
 -	help
 -	  Enable to debug paravirt_ops internals.  Specifically, BUG if
 -	  a paravirt_op is missing when it is called.
 -
  config PARAVIRT_SPINLOCKS
  	bool "Paravirtualization layer for spinlocks"
  	depends on PARAVIRT && SMP
diff --cc arch/x86/include/asm/paravirt.h
index 3d0b92a8a557,13f9cd31c8f8..fcf8ab50948a
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@@ -489,13 -523,12 +489,12 @@@ static inline void arch_start_context_s

  static inline void arch_end_context_switch(struct task_struct *next)
  {
 -	PVOP_VCALL1(cpu.end_context_switch, next);
 +	PVOP_VCALL1(pv_ops, cpu.end_context_switch, next);
  }

- #define  __HAVE_ARCH_ENTER_LAZY_MMU_MODE
  static inline void arch_enter_lazy_mmu_mode(void)
  {
 -	PVOP_VCALL0(mmu.lazy_mode.enter);
 +	PVOP_VCALL0(pv_ops, mmu.lazy_mode.enter);
  }

  static inline void arch_leave_lazy_mmu_mode(void)
diff --cc include/linux/cma.h
index 2e6931735880,e2a690f7e77e..d0793eaaadaa
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@@ -57,33 -62,4 +62,13 @@@ extern bool cma_intersects(struct cma *

  extern void cma_reserve_pages_on_error(struct cma *cma);

 +#ifdef CONFIG_DMA_CMA
 +extern bool cma_skip_dt_default_reserved_mem(void);
 +#else
 +static inline bool cma_skip_dt_default_reserved_mem(void)
 +{
 +	return false;
 +}
 +#endif
 +
- #ifdef CONFIG_CMA
- struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp);
- bool cma_free_folio(struct cma *cma, const struct folio *folio);
- bool cma_validate_zones(struct cma *cma);
- #else
- static inline struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
- {
- 	return NULL;
- }
-
- static inline bool cma_free_folio(struct cma *cma, const struct folio *folio)
- {
- 	return false;
- }
- static inline bool cma_validate_zones(struct cma *cma)
- {
- 	return false;
- }
- #endif
-
  #endif
diff --cc mm/memcontrol-v1.h
index a304ad418cdf,49933925b4ba..eb3c3c105657
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@@ -25,12 -25,11 +25,10 @@@
  void drain_all_stock(struct mem_cgroup *root_memcg);

  unsigned long memcg_events(struct mem_cgroup *memcg, int event);
 -unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
  int memory_stat_show(struct seq_file *m, void *v);

- void mem_cgroup_id_get_many(struct mem_cgroup *memcg, unsigned int n);
- struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg);
+ void mem_cgroup_private_id_get_many(struct mem_cgroup *memcg, unsigned int n);
+ struct mem_cgroup *mem_cgroup_private_id_get_online(struct mem_cgroup *memcg);

  /* Cgroup v1-specific declarations */
  #ifdef CONFIG_MEMCG_V1