Commit 1c2a936edd71 for kernel

commit 1c2a936edd71e133f2806e68324ec81a4eb07588
Author: Kairui Song <kasong@tencent.com>
Date:   Tue Nov 11 21:36:08 2025 +0800

    mm, swap: fix potential UAF issue for VMA readahead

    Since commit 78524b05f1a3 ("mm, swap: avoid redundant swap device
    pinning"), the common helper for allocating and preparing a folio in
    the swap cache layer, __read_swap_cache_async(), no longer tries to
    get a swap device reference internally: every caller already holds a
    reference to the swap entry it passes in, so pinning the same swap
    device again inside the helper would be redundant.

    The caller of VMA readahead also holds a reference to the target
    entry's swap device, but VMA readahead walks the page table, so it
    may encounter swap entries from other devices and end up calling
    __read_swap_cache_async() on a device it holds no reference to.

    This makes a use-after-free (UAF) possible when swapoff of device A
    races with swapin on device B and VMA readahead tries to read swap
    entries from device A.  It's not easy to trigger, but in theory it
    could cause real issues.

    Make VMA readahead try to get a reference to the device first when
    the entry's swap device differs from the target entry's.

    Link: https://lkml.kernel.org/r/20251111-swap-fix-vma-uaf-v1-1-41c660e58562@tencent.com
    Fixes: 78524b05f1a3 ("mm, swap: avoid redundant swap device pinning")
    Suggested-by: Huang Ying <ying.huang@linux.alibaba.com>
    Signed-off-by: Kairui Song <kasong@tencent.com>
    Acked-by: Chris Li <chrisl@kernel.org>
    Cc: Baoquan He <bhe@redhat.com>
    Cc: Barry Song <baohua@kernel.org>
    Cc: Kemeng Shi <shikemeng@huaweicloud.com>
    Cc: Nhat Pham <nphamcs@gmail.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
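
The contract described in the message above can be sketched in isolation. The helper below is purely illustrative: the name read_unpinned_entry and its parameter list are invented for this sketch, the variable names mirror the surrounding swap_state.c code, and error handling is trimmed. It shows how a caller that does not already hold a reference to the entry's swap device is expected to pin it around __read_swap_cache_async(), using the real get_swap_device()/put_swap_device() helpers:

/*
 * Illustrative only: read one swap entry whose device the caller has
 * NOT already pinned.  Mirrors the pattern the fix below applies inline.
 */
static struct folio *read_unpinned_entry(swp_entry_t entry, gfp_t gfp_mask,
					 struct mempolicy *mpol, pgoff_t ilx)
{
	struct swap_info_struct *si;
	struct folio *folio;
	bool page_allocated;

	/*
	 * Since 78524b05f1a3, __read_swap_cache_async() no longer pins the
	 * swap device itself, so the device backing 'entry' must be kept
	 * alive by the caller.  get_swap_device() returns NULL once swapoff
	 * has started tearing the device down.
	 */
	si = get_swap_device(entry);
	if (!si)
		return NULL;

	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
					&page_allocated, false);
	put_swap_device(si);
	return folio;
}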

diff --git a/mm/swap_state.c b/mm/swap_state.c
index b13e9c4baa90..f4980dde5394 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -748,6 +748,8 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,

 	blk_start_plug(&plug);
 	for (addr = start; addr < end; ilx++, addr += PAGE_SIZE) {
+		struct swap_info_struct *si = NULL;
+
 		if (!pte++) {
 			pte = pte_offset_map(vmf->pmd, addr);
 			if (!pte)
@@ -761,8 +763,19 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 			continue;
 		pte_unmap(pte);
 		pte = NULL;
+		/*
+		 * Readahead entry may come from a device that we are not
+		 * holding a reference to, try to grab a reference, or skip.
+		 */
+		if (swp_type(entry) != swp_type(targ_entry)) {
+			si = get_swap_device(entry);
+			if (!si)
+				continue;
+		}
 		folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
 						&page_allocated, false);
+		if (si)
+			put_swap_device(si);
 		if (!folio)
 			continue;
 		if (page_allocated) {
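
Two aspects of the change are worth noting.  The extra reference is only taken when swp_type(entry) differs from swp_type(targ_entry): the target entry's device is already pinned by the readahead caller, so pinning it again would reintroduce the redundant pinning that 78524b05f1a3 removed.  And when get_swap_device() returns NULL, the device is already being swapped off, so the readahead loop simply skips that entry instead of reading from a device that may be freed.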