Commit ffef67b93aa3 for kernel

commit ffef67b93aa352b34e6aeba3d52c19a63885409a
Author: David Hildenbrand (Arm) <david@kernel.org>
Date:   Mon Mar 23 21:20:18 2026 +0100

    mm/memory: fix PMD/PUD checks in follow_pfnmap_start()

    follow_pfnmap_start() suffers from two problems:

    (1) We are not re-fetching the pmd/pud after taking the PTL

    Therefore, we are not properly stabilizing what the lock actually
    protects.  If there is concurrent zapping, we could indicate to the
    caller that we found an entry even though that entry might already
    have been invalidated, or might contain a different PFN, by the time
    we took the lock.

    Properly use pmdp_get() / pudp_get() after taking the lock.

    (2) pmd_leaf() / pud_leaf() are not well defined on non-present entries

    pmd_leaf()/pud_leaf() could wrongly trigger on non-present entries.

    There is no real guarantee that pmd_leaf()/pud_leaf() returns something
    reasonable on non-present entries.  Most architectures indeed either
    perform a present check or make it work by smart use of flags.

    However, loongarch, for example, checks the _PAGE_HUGE flag in
    pmd_leaf() and always sets the _PAGE_HUGE flag in
    __swp_entry_to_pmd().  While pmd_trans_huge() explicitly checks
    pmd_present(), pmd_leaf() does not.

    Let's check pmd_present()/pud_present() before assuming "this is a
    present PMD leaf" when spotting pmd_leaf()/pud_leaf(), like other page
    table handling code that traverses user page tables does.

    Given that non-present PMD entries are likely rare in VM_IO|VM_PFNMAP
    mappings, (1) is likely more relevant than (2).  It is questionable how
    often (1) would actually trigger, but let's CC stable to be sure.

    This was found by code inspection.

    Link: https://lkml.kernel.org/r/20260323-follow_pfnmap_fix-v1-1-5b0ec10872b3@kernel.org
    Fixes: 6da8e9634bb7 ("mm: new follow_pfnmap API")
    Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
    Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
    Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
    Cc: Liam Howlett <liam.howlett@oracle.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Suren Baghdasaryan <surenb@google.com>
    Cc: Vlastimil Babka <vbabka@kernel.org>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/mm/memory.c b/mm/memory.c
index 2f815a34d924..c65e82c86fed 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6815,11 +6815,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)

 	pudp = pud_offset(p4dp, address);
 	pud = pudp_get(pudp);
-	if (pud_none(pud))
+	if (!pud_present(pud))
 		goto out;
 	if (pud_leaf(pud)) {
 		lock = pud_lock(mm, pudp);
-		if (!unlikely(pud_leaf(pud))) {
+		pud = pudp_get(pudp);
+
+		if (unlikely(!pud_present(pud))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pud_leaf(pud))) {
 			spin_unlock(lock);
 			goto retry;
 		}
@@ -6831,9 +6836,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)

 	pmdp = pmd_offset(pudp, address);
 	pmd = pmdp_get_lockless(pmdp);
+	if (!pmd_present(pmd))
+		goto out;
 	if (pmd_leaf(pmd)) {
 		lock = pmd_lock(mm, pmdp);
-		if (!unlikely(pmd_leaf(pmd))) {
+		pmd = pmdp_get(pmdp);
+
+		if (unlikely(!pmd_present(pmd))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pmd_leaf(pmd))) {
 			spin_unlock(lock);
 			goto retry;
 		}