Commit 4d8e74ad4585 for kernel

commit 4d8e74ad4585672489da6145b3328d415f50db82
Author: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Date:   Thu Apr 30 16:58:08 2026 +0800

    arm64: Reserve an extra page for early kernel mapping

    The final part of the [data, end) segment may overflow into the page
    following init_pg_end[1], which is the gap page before early_init_stack[2]:

    [1]
    crash_arm64_v9.0.1> vtop ffffffed00601000
    VIRTUAL           PHYSICAL
    ffffffed00601000  83401000

    PAGE DIRECTORY: ffffffecffd62000
       PGD: ffffffecffd62da0 => 10000000833fb003
       PMD: ffffff80033fb018 => 10000000833fe003
       PTE: ffffff80033fe008 => 68000083401f03
      PAGE: 83401000

         PTE        PHYSICAL  FLAGS
    68000083401f03  83401000  (VALID|SHARED|AF|NG|PXN|UXN)

          PAGE       PHYSICAL      MAPPING       INDEX CNT FLAGS
    fffffffec00d0040 83401000                0        0  1 4000 reserved

    [2]
    ffffffed002c8000 (r) __pi__data
    ffffffed0054e000 (d) __pi___bss_start
    ffffffed005f5000 (b) __pi_init_pg_dir
    ffffffed005fe000 (b) __pi_init_pg_end
    ffffffed005ff000 (B) early_init_stack
    ffffffed00608000 (b) __pi__end

    For 4K pages, the early kernel mapping may use 2MB block entries but the
    kernel segments are only 64KB aligned. Segment boundaries that fall
    within a 2MB block therefore require a PTE table so that different
    attributes can be applied on either side of the boundary.

    KERNEL_SEGMENT_COUNT still correctly counts the five permanent kernel
    VMAs registered by declare_kernel_vmas(). However, since commit
    5973a62efa34 ("arm64: map [_text, _stext) virtual address range
    non-executable+read-only"), the early mapper also maps [_text, _stext)
    separately from [_stext, _etext). This adds one more early-only split
    and can require one more page-table page than the existing
    EARLY_SEGMENT_EXTRA_PAGES allowance reserves.

    Increase the 4K-page early mapping allowance by one page to cover that
    additional split.

    Fixes: 5973a62efa34 ("arm64: map [_text, _stext) virtual address range non-executable+read-only")
    Assisted-by: TRAE:GLM-5.1
    Suggested-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
    [catalin.marinas@arm.com: rewrote part of the commit log]
    [catalin.marinas@arm.com: expanded the code comment]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 74a4f738c5f5..229ee7976f69 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -68,7 +68,12 @@
 #define KERNEL_SEGMENT_COUNT	5

 #if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
-#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
+/*
+ * KERNEL_SEGMENT_COUNT counts the permanent kernel VMAs. The early mapping
+ * has one additional split, [_text, _stext). Reserve one more page for the
+ * SWAPPER_BLOCK_SIZE-unaligned boundaries.
+ */
+#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 2)
 /*
  * The initial ID map consists of the kernel image, mapped as two separate
  * segments, and may appear misaligned wrt the swapper block size. This means