Commit 01907da6 authored by Daniel Lezcano, committed by Angus Ainslie (Purism)

thermal/drivers/cpu_cooling: Introduce the cpu idle cooling driver

The CPU idle cooling driver performs synchronized idle injection across all
CPUs belonging to the same cluster and offers a new method to cool down a SoC.

Each cluster has its own idle cooling device, each core has its own idle
injection thread, and each idle injection thread uses play_idle() to enter
idle. In order to reach the deepest idle state, each cooling device keeps its
idle injection threads synchronized.
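
To make that structure concrete, here is a minimal, illustrative sketch of one
per-core injection thread under the assumptions described above: every thread
of a cluster calls play_idle() for the fixed idle duration, and the last one
back from idle arms the timer for the next injection. This is not the driver's
actual code; struct cluster_cooling, its fields and the function name are
invented for the example, while play_idle(), the kthread park helpers and the
hrtimer API are existing kernel interfaces.

#include <linux/cpu.h>          /* play_idle() */
#include <linux/kthread.h>
#include <linux/hrtimer.h>
#include <linux/atomic.h>
#include <linux/sched.h>

/* Hypothetical per-cluster state, one instance per idle cooling device. */
struct cluster_cooling {
        atomic_t nr_running;            /* threads still in the current cycle */
        unsigned int nr_cpus;           /* number of CPUs in the cluster */
        unsigned int idle_ms;           /* fixed idle injection duration */
        unsigned int run_ms;            /* variable, set by set_cur_state() */
        struct hrtimer timer;           /* next idle injection deadline */
        struct task_struct *tsk[NR_CPUS];
};

/* One such thread is bound to each CPU of the cluster. */
static int idle_injection_fn(void *data)
{
        struct cluster_cooling *cc = data;

        while (!kthread_should_stop()) {

                if (kthread_should_park())
                        kthread_parkme();       /* no mitigation in progress */

                play_idle(cc->idle_ms);         /* synchronized idle period */

                set_current_state(TASK_INTERRUPTIBLE);

                /* The last thread out of idle sets the next deadline. */
                if (atomic_dec_and_test(&cc->nr_running))
                        hrtimer_start(&cc->timer, ms_to_ktime(cc->run_ms),
                                      HRTIMER_MODE_REL);

                schedule();                     /* sleep until the timer fires */
        }

        return 0;
}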

It has some similarity with the Intel powerclamp driver, but it is designed to
work on the ARM architecture via the DT, with a mathematical justification
based on the power model that comes with the Documentation.

The idle injection cycle is fixed while the running cycle is variable. That
gives control over the device's reactivity for the user experience. At the
mitigation point the idle threads are unparked, they play idle for the
specified amount of time and they schedule themselves. The last thread sets
the next idle injection deadline, and when the timer expires it wakes up all
the threads, which in turn play idle again. Meanwhile, the running cycle is
changed by set_cur_state. When the mitigation ends, the threads are parked.
The algorithm is self-adaptive, so there is no need to handle hotplugging.
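
Continuing the same illustrative sketch (same hypothetical struct
cluster_cooling as above, still not the real driver code): the timer handler
starts a new cycle by waking every thread, and the cooling device's
set_cur_state() callback parks or unparks the threads and adjusts the running
time. The mapping from cooling state to running time below is an arbitrary
choice for the example; the commit only says that set_cur_state changes the
running cycle.

#include <linux/thermal.h>

#define MAX_RUN_MS      100     /* arbitrary cap, for the example only */

static enum hrtimer_restart cooling_timer_fn(struct hrtimer *timer)
{
        struct cluster_cooling *cc =
                container_of(timer, struct cluster_cooling, timer);
        unsigned int cpu;

        /* Start a new cycle: every thread will call play_idle() again. */
        atomic_set(&cc->nr_running, cc->nr_cpus);

        for (cpu = 0; cpu < cc->nr_cpus; cpu++)
                wake_up_process(cc->tsk[cpu]);

        return HRTIMER_NORESTART;
}

static int cooling_set_cur_state(struct thermal_cooling_device *cdev,
                                 unsigned long state)
{
        struct cluster_cooling *cc = cdev->devdata;
        unsigned int cpu;

        if (state && !cc->run_ms) {
                /* Mitigation starts: arm the first cycle, unpark the threads. */
                atomic_set(&cc->nr_running, cc->nr_cpus);
                for (cpu = 0; cpu < cc->nr_cpus; cpu++)
                        kthread_unpark(cc->tsk[cpu]);
        } else if (!state && cc->run_ms) {
                /* Mitigation ends: park the threads again. */
                for (cpu = 0; cpu < cc->nr_cpus; cpu++)
                        kthread_park(cc->tsk[cpu]);
        }

        /* A higher cooling state means a shorter running cycle. */
        cc->run_ms = state ? MAX_RUN_MS / state : 0;

        return 0;
}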

As an example of finding the balanced point, we can use the DT for the hi6220.

The sustainable power for the SoC is 3326mW to mitigate at 75°C. Eight cores
running at full blast at the maximum OPP consume 5280mW. The first value is
given in the DT, the second is calculated from the OPP with the formula:

   Pdyn = Cdyn x Voltage^2 x Frequency

As the SoC vendors don't want to share the static leakage values, we assume
the static leakage is zero, so Prun = Pdyn + Pstatic = Pdyn + 0 = Pdyn.
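
The commit does not give the hi6220's Cdyn or OPP voltage/frequency values, so
here is only a generic helper showing the unit bookkeeping for the formula
above (an illustrative sketch, not part of the patch): with the capacitance in
nF, the voltage in volts and the frequency in MHz, the result comes out
directly in mW.

/* Pdyn = Cdyn x Voltage^2 x Frequency; nF * V^2 * MHz = 1e-9 * 1e6 W = mW */
static double pdyn_mw(double cdyn_nf, double volt, double freq_mhz)
{
        return cdyn_nf * volt * volt * freq_mhz;
}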

In order to reduce the power to 3326mW, we have to apply a ratio to the
running time.

   ratio = (Prun - Ptarget) / Ptarget = (5280 - 3326) / 3326 = 0.5874

We know the idle cycle, which is fixed; let's assume 10ms. However, from this
duration we have to subtract the wake-up latency of the cluster idle state. In
our case, it is 1.5ms. So for a 10ms idle injection, we are really idle for
8.5ms.

As we know the idle duration and the ratio, we can compute the running cycle.

   running_cycle = 8.5 / 0.5874 = 14.47ms

So for 8.5ms of idle, we have a 14.47ms running cycle, and that brings the
SoC to the balanced trip point of 75°C.
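
The same arithmetic, as a tiny standalone program; all the numbers (3326mW
target, 5280mW at full load, 10ms injection, 1.5ms exit latency) come straight
from the description above.

#include <stdio.h>

int main(void)
{
        double p_run = 5280.0;          /* mW, all cores at the maximum OPP */
        double p_target = 3326.0;       /* mW, sustainable power from the DT */
        double idle_ms = 10.0;          /* fixed idle injection duration */
        double exit_ms = 1.5;           /* cluster idle state wake-up latency */

        double ratio = (p_run - p_target) / p_target;   /* ~0.5874 */
        double real_idle = idle_ms - exit_ms;           /* 8.5ms */
        double run_ms = real_idle / ratio;              /* ~14.47ms */

        printf("ratio = %.4f\n", ratio);
        printf("idle = %.1fms, running cycle = %.2fms\n", real_idle, run_ms);

        return 0;
}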

The driver has been tested on the hi6220, and it appears that the temperature
stabilizes at 75°C with an idle injection time of 10ms (8.5ms real) and a
running cycle of 14ms, as predicted by the theory above.

Signed-off-by: Kevin Wangtao <kevin.wangtao@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
parent a60e6a57

@@ -173,6 +173,16 @@ config CPU_FREQ_THERMAL
 	  This will be useful for platforms using the generic thermal interface
 	  and not the ACPI interface.
 
+config CPU_IDLE_THERMAL
+	bool "CPU idle cooling strategy"
+	depends on CPU_IDLE
+	help
+	  This implements the generic CPU cooling mechanism through
+	  idle injection. This will throttle the CPU by injecting
+	  fixed idle cycle. All CPUs belonging to the same cluster
+	  will enter idle synchronously to reach the deepest idle
+	  state.
+
 endchoice
 
 config CLOCK_THERMAL

[One file's diff is collapsed and not shown here.]

@@ -47,6 +47,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
 }
 #endif /* CONFIG_CPU_FREQ_THERMAL */
 
 #if defined(CONFIG_THERMAL_OF) && defined(CONFIG_CPU_THERMAL)
 /**
  * of_cpufreq_cooling_register - create cpufreq cooling device based on DT.
@@ -62,4 +63,11 @@ of_cpufreq_cooling_register(struct cpufreq_policy *policy)
 }
 #endif /* defined(CONFIG_THERMAL_OF) && defined(CONFIG_CPU_THERMAL) */
 
+#ifdef CONFIG_CPU_IDLE_THERMAL
+extern void __init cpuidle_cooling_register(void);
+#else /* CONFIG_CPU_IDLE_THERMAL */
+static inline void __init cpuidle_cooling_register(void) { }
+#endif /* CONFIG_CPU_IDLE_THERMAL */
+
 #endif /* __CPU_COOLING_H__ */
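
As a usage note on the stub above: the empty inline version lets init code call
the function unconditionally when CONFIG_CPU_IDLE_THERMAL is disabled, without
an #ifdef at the call site. The caller below is hypothetical; where the real
driver invokes cpuidle_cooling_register() is not visible in the hunks shown
here.

#include <linux/cpu_cooling.h>
#include <linux/init.h>

static int __init example_thermal_init(void)
{
        /* Compiles to a no-op when CPU_IDLE_THERMAL is not selected. */
        cpuidle_cooling_register();

        return 0;
}
late_initcall(example_thermal_init);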