Commit ca125231dd29 for kernel

commit ca125231dd29fc0678dd3622e9cdea80a51dffe4
Author: xupengbo <xupengbo@oppo.com>
Date:   Wed Aug 27 10:22:07 2025 +0800

    sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out

    When a task is migrated out, tg->load_avg may be left with a stale,
    inflated value. The reason is as follows:

    1. Because update_tg_load_avg() ratelimits updates to at most one per 1ms,
       the reduced load_avg may not yet have been folded into tg->load_avg when
       a task migrates out.

    2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
       calls update_tg_load_avg() for cfs_rqs that are not yet fully decayed,
       the key function cfs_rq_is_decayed() does not check whether
       cfs_rq->tg_load_avg_contrib is zero. Consequently, in some cases
       __update_blocked_fair() removes from the list cfs_rqs whose avg.load_avg
       has not yet been folded into tg->load_avg.

    Add a check of cfs_rq->tg_load_avg_contrib to cfs_rq_is_decayed(), which
    fixes case (2.) above.

    Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
    Signed-off-by: xupengbo <xupengbo@oppo.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Aaron Lu <ziqianlu@bytedance.com>
    Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
    Tested-by: Aaron Lu <ziqianlu@bytedance.com>
    Link: https://patch.msgid.link/20250827022208.14487-1-xupengbo@oppo.com
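
For reference, the 1ms ratelimit described in (1.) can be modeled with the
following simplified userspace sketch. The identifiers mirror the kernel names
used in the changelog, but this is not the kernel implementation: PELT decay,
the delta threshold and the scheduler clock are omitted, and "now" is passed
in by hand, so treat it only as an illustration of how a migration-time
reduction can be skipped.

/*
 * Simplified userspace model of the 1ms ratelimit described in (1.).
 * Not the kernel implementation: PELT decay, the delta threshold and
 * the scheduler clock are omitted; "now" is passed in by hand.
 */
#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

struct tg { long load_avg; };

struct cfs_rq {
        struct tg *tg;
        long avg_load_avg;                      /* stands in for cfs_rq->avg.load_avg */
        long tg_load_avg_contrib;
        unsigned long long last_update_tg_load_avg;
};

/* Fold this cfs_rq's delta into tg->load_avg, at most once per millisecond. */
static void update_tg_load_avg(struct cfs_rq *cfs_rq, unsigned long long now)
{
        long delta = cfs_rq->avg_load_avg - cfs_rq->tg_load_avg_contrib;

        if (now - cfs_rq->last_update_tg_load_avg < NSEC_PER_MSEC)
                return;                         /* ratelimited: delta not folded in yet */

        cfs_rq->tg->load_avg += delta;
        cfs_rq->tg_load_avg_contrib = cfs_rq->avg_load_avg;
        cfs_rq->last_update_tg_load_avg = now;
}

int main(void)
{
        struct tg tg = { .load_avg = 0 };
        struct cfs_rq rq = { .tg = &tg };

        /* t = 5ms: a task runs on this cfs_rq; its load is folded into the tg. */
        rq.avg_load_avg = 1024;
        update_tg_load_avg(&rq, 5 * NSEC_PER_MSEC);

        /*
         * t = 5.5ms: the last task migrates out. avg.load_avg drops to zero,
         * but the fold is ratelimited, so tg->load_avg and tg_load_avg_contrib
         * keep the stale 1024.
         */
        rq.avg_load_avg = 0;
        update_tg_load_avg(&rq, 5 * NSEC_PER_MSEC + NSEC_PER_MSEC / 2);

        printf("tg->load_avg=%ld contrib=%ld avg.load_avg=%ld\n",
               tg.load_avg, rq.tg_load_avg_contrib, rq.avg_load_avg);
        return 0;
}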

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 769d7b7990df..da46c3164537 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4034,6 +4034,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 	if (child_cfs_rq_on_list(cfs_rq))
 		return false;

+	if (cfs_rq->tg_load_avg_contrib)
+		return false;
+
 	return true;
 }
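
Continuing the model above, here is a minimal sketch of the delisting decision
from (2.) with the new check in place. The real cfs_rq_is_decayed() also tests
the PELT sums and child cfs_rqs on the list; only the part relevant to this fix
is modeled.

/*
 * Continuation of the model above: the delisting decision from (2.),
 * with the new check applied.
 */
static int cfs_rq_is_decayed(const struct cfs_rq *cfs_rq)
{
        if (cfs_rq->avg_load_avg)               /* stands in for load_avg_is_decayed() */
                return 0;

        /*
         * The added check: a non-zero tg_load_avg_contrib means a reduction
         * is still waiting to be folded into tg->load_avg, so keep the cfs_rq
         * on leaf_cfs_rq_list. A later __update_blocked_fair() pass, once the
         * 1ms window has expired, can then call update_tg_load_avg() before
         * the cfs_rq is removed.
         */
        if (cfs_rq->tg_load_avg_contrib)
                return 0;

        return 1;
}

In the state printed by the model (contrib=1024, avg.load_avg=0), the old
version would report the cfs_rq as decayed, __update_blocked_fair() would drop
it from leaf_cfs_rq_list, and tg->load_avg would stay inflated by 1024. With
the added check, the cfs_rq stays on the list until a later, no longer
ratelimited, update_tg_load_avg() call folds the remaining delta out.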