Commit 4c5e7f0fcd59 for kernel

commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
Author: Jinjiang Tu <tujinjiang@huawei.com>
Date:   Thu Mar 19 09:25:41 2026 +0800

    mm/huge_memory: fix folio isn't locked in softleaf_to_folio()

    On an arm64 server, we found that the folio obtained from a migration
    entry isn't locked in softleaf_to_folio().  This issue triggers when mTHP
    splitting races with zap_nonpresent_ptes(), and the root cause is a
    missing memory barrier in softleaf_to_folio().  The race is as follows:

            CPU0                                             CPU1

    deferred_split_scan()                              zap_nonpresent_ptes()
      lock folio
      split_folio()
        unmap_folio()
          change ptes to migration entries
        __split_folio_to_order()                         softleaf_to_folio()
          set flags(including PG_locked) for tail pages    folio = pfn_folio(softleaf_to_pfn(entry))
          smp_wmb()                                        VM_WARN_ON_ONCE(!folio_test_locked(folio))
          prep_compound_page() for tail pages

    In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
    tail pages are visible before a tail page becomes non-compound.  That
    smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(),
    which is missing.  As a result, if zap_nonpresent_ptes() accesses a
    migration entry that stores a tail pfn, softleaf_to_folio() may see the
    updated compound_head of the tail page before its page->flags.

    This issue triggers the VM_WARN_ON_ONCE() in pfn_swap_entry_folio():
    the race between folio split and zap_nonpresent_ptes() makes the folio
    appear to be modified without the folio lock held.

    This was a BUG_ON() before commit 93976a20345b ("mm: eliminate further
    swapops predicates"), which was merged in v6.19-rc1.

    To fix it, add the missing smp_rmb() when the softleaf entry is a
    migration entry, in both softleaf_to_folio() and softleaf_to_page().

    [tujinjiang@huawei.com: update function name and comments]
      Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
    Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
    Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
    Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
    Acked-by: David Hildenbrand (Arm) <david@kernel.org>
    Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
    Cc: Barry Song <baohua@kernel.org>
    Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
    Cc: Liam Howlett <liam.howlett@oracle.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Mike Rapoport <rppt@kernel.org>
    Cc: Nanyong Sun <sunnanyong@huawei.com>
    Cc: Ryan Roberts <ryan.roberts@arm.com>
    Cc: Suren Baghdasaryan <surenb@google.com>
    Cc: Vlastimil Babka <vbabka@kernel.org>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/include/linux/leafops.h b/include/linux/leafops.h
index a9ff94b744f2..05673d3529e7 100644
--- a/include/linux/leafops.h
+++ b/include/linux/leafops.h
@@ -363,6 +363,23 @@ static inline unsigned long softleaf_to_pfn(softleaf_t entry)
 	return swp_offset(entry) & SWP_PFN_MASK;
 }

+static inline void softleaf_migration_sync(softleaf_t entry,
+		struct folio *folio)
+{
+	/*
+	 * Ensure we do not race with split, which might alter tail pages into new
+	 * folios and thus result in observing an unlocked folio.
+	 * This matches the write barrier in __split_folio_to_order().
+	 */
+	smp_rmb();
+
+	/*
+	 * Any use of migration entries may only occur while the
+	 * corresponding page is locked
+	 */
+	VM_WARN_ON_ONCE(!folio_test_locked(folio));
+}
+
 /**
  * softleaf_to_page() - Obtains struct page for PFN encoded within leaf entry.
  * @entry: Leaf entry, softleaf_has_pfn(@entry) must return true.
@@ -374,11 +391,8 @@ static inline struct page *softleaf_to_page(softleaf_t entry)
 	struct page *page = pfn_to_page(softleaf_to_pfn(entry));

 	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
-	/*
-	 * Any use of migration entries may only occur while the
-	 * corresponding page is locked
-	 */
-	VM_WARN_ON_ONCE(softleaf_is_migration(entry) && !PageLocked(page));
+	if (softleaf_is_migration(entry))
+		softleaf_migration_sync(entry, page_folio(page));

 	return page;
 }
@@ -394,12 +408,8 @@ static inline struct folio *softleaf_to_folio(softleaf_t entry)
 	struct folio *folio = pfn_folio(softleaf_to_pfn(entry));

 	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
-	/*
-	 * Any use of migration entries may only occur while the
-	 * corresponding folio is locked.
-	 */
-	VM_WARN_ON_ONCE(softleaf_is_migration(entry) &&
-			!folio_test_locked(folio));
+	if (softleaf_is_migration(entry))
+		softleaf_migration_sync(entry, folio);

 	return folio;
 }