Commit 631c1111501f for kernel

commit 631c1111501f34980649242751e93cfdadfd1f1c
Author: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Date:   Mon Mar 16 14:01:22 2026 +0000

    mm/zswap: add missing kunmap_local()

    Commit e2c3b6b21c77 ("mm: zswap: use SG list decompression APIs from
    zsmalloc") updated zswap_decompress() to use the scatterwalk API to copy
    data for uncompressed pages.

    In doing so, it mapped kernel memory locally for 32-bit kernels using
    kmap_local_folio(); however, it never unmapped this memory.

    This resulted in the linked syzbot report where a BUG_ON() is triggered
    due to leaking the kmap slot.

    This patch fixes the issue by explicitly unmapping the established kmap.

    Also, add flush_dcache_folio() after the kunmap_local() call.

    I had assumed that a new folio here, combined with the flush done at the
    point of setting the PTE, would suffice, but that doesn't seem to be the
    case, as on many architectures update_mmu_cache() will only flush entries
    for which a dcache flush was previously done on the range.

    I had also wondered whether kunmap_local() might suffice, but that
    doesn't seem to be the case.

    Some arches do flush the dcache on unmap: parisc does so if
    CONFIG_HIGHMEM is not set, by setting ARCH_HAS_FLUSH_ON_KUNMAP and
    calling kunmap_flush_on_unmap() from __kunmap_local(); otherwise,
    non-CONFIG_HIGHMEM callers do nothing here.

    Otherwise arch_kmap_local_pre_unmap() is called which does:

    * sparc - flush_cache_all()
    * arm - if VIVT, __cpuc_flush_dcache_area()
    * otherwise - nothing

    Also arch_kmap_local_post_unmap() is called which does:

    * arm - local_flush_tlb_kernel_page()
    * csky - kmap_flush_tlb()
    * microblaze, ppc - local_flush_tlb_page()
    * mips - local_flush_tlb_one()
    * sparc - flush_tlb_all() (again)
    * x86 - arch_flush_lazy_mmu_mode()
    * otherwise - nothing

    But this only happens for high memory, and doesn't cover all
    architectures, so it is presumably intended to handle other cache
    consistency concerns.

    In any case, VIPT is problematic here whether low or high memory (in spite
    of what the documentation claims, see [0] - 'the kernel did write to a page
    that is in the page cache page and / or in high memory'), because dirty
    cache lines may exist at the set indexed by the kernel direct mapping,
    which won't exist in the set indexed by any subsequent userland mapping,
    meaning userland might read stale data from L2 cache.

    Even if the documentation is correct and low memory is fine not to be
    flushed here, we can't be sure whether the memory is low or high
    (kmap_local_folio() will be a no-op if low), and this call should be
    harmless if it is low.

    VIVT would require more work if the memory were shared and already mapped,
    but this isn't the case here, and would anyway be handled by the dcache
    flush call.

    In any case, we definitely need this flush as far as I can tell.

    We should probably also consider updating the documentation, unless it
    turns out that dcache synchronisation for low memory/64-bit kernels
    somehow happens elsewhere.

    [ljs@kernel.org: add flush_dcache_folio() after the kunmap_local() call]
      Link: https://lkml.kernel.org/r/13e09a99-181f-45ac-a18d-057faf94bccb@lucifer.local
    Link: https://lkml.kernel.org/r/20260316140122.339697-1-ljs@kernel.org
    Link: https://docs.kernel.org/core-api/cachetlb.html [0]
    Fixes: e2c3b6b21c77 ("mm: zswap: use SG list decompression APIs from zsmalloc")
    Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
    Reported-by: syzbot+fe426bef95363177631d@syzkaller.appspotmail.com
    Closes: https://lore.kernel.org/all/69b75e2c.050a0220.12d28.015a.GAE@google.com
    Acked-by: Yosry Ahmed <yosry@kernel.org>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Reviewed-by: SeongJae Park <sj@kernel.org>
    Acked-by: Nhat Pham <nphamcs@gmail.com>
    Cc: Chengming Zhou <chengming.zhou@linux.dev>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/mm/zswap.c b/mm/zswap.c
index e6ec3295bdb0..16b2ef7223e1 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -942,9 +942,15 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)

 	/* zswap entries of length PAGE_SIZE are not compressed. */
 	if (entry->length == PAGE_SIZE) {
+		void *dst;
+
 		WARN_ON_ONCE(input->length != PAGE_SIZE);
-		memcpy_from_sglist(kmap_local_folio(folio, 0), input, 0, PAGE_SIZE);
+
+		dst = kmap_local_folio(folio, 0);
+		memcpy_from_sglist(dst, input, 0, PAGE_SIZE);
 		dlen = PAGE_SIZE;
+		kunmap_local(dst);
+		flush_dcache_folio(folio);
 	} else {
 		sg_init_table(&output, 1);
 		sg_set_folio(&output, folio, PAGE_SIZE, 0);