|
After resuming from S4, all CPUs except the boot CPU have the wrong EPP
hint programmed. This is because when the CPUs were offlined the EPP value
was reset to 0.
This is a similar problem to the one fixed by
commit ba3319e590571 ("cpufreq/amd-pstate: Fix a regression leading to EPP
0 after resume"), and the solution is also similar: when offlining, rather
than resetting the values to zero, reset them to match those chosen by the
policy. When the CPUs are onlined again, these values will be restored.
Closes: https://community.frame.work/t/increased-power-usage-after-resuming-from-suspend-on-ryzen-7040-kernel-6-15-regression/74531/20?u=mario_limonciello
Fixes: 608a76b65288 ("cpufreq/amd-pstate: Add support for the "Requested CPU Min frequency" BIOS option")
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>
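As an illustration of the offline-path change described above, here is a minimal sketch; the per-CPU structure, the epp_from_policy() helper and the callback name are hypothetical stand-ins rather than amd-pstate's actual internals:

  /* Hypothetical per-CPU data; the real amd-pstate structure differs. */
  struct sketch_cpudata {
          u8 epp_cached;
  };

  /* Hypothetical mapping from the cpufreq policy to an EPP hint. */
  static u8 epp_from_policy(struct cpufreq_policy *policy)
  {
          return policy->policy == CPUFREQ_POLICY_PERFORMANCE ? 0 : 0xff;
  }

  static int sketch_epp_cpu_offline(struct cpufreq_policy *policy)
  {
          struct sketch_cpudata *cpudata = policy->driver_data;

          /*
           * Instead of wiping the cached hint to 0, record the EPP implied
           * by the current policy so that onlining the CPU (or resuming
           * from S4) restores the value the policy had chosen.
           */
          cpudata->epp_cached = epp_from_policy(policy);
          return 0;
  }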
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm
Merge CPUFreq fixes for 6.18 from Viresh Kumar:
"- Update frequency for all tegra CPUs (Aaron Kling).
- Fix device leak in mediatek driver (Johan Hovold).
- Rust cpufreq helper cleanup (Thorsten Blum)."
* tag 'cpufreq-arm-updates-6.18-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm:
cpufreq: tegra186: Initialize all cores to max frequencies
cpufreq: tegra186: Set target frequency for all cpus in policy
rust: cpufreq: streamline find_supply_names
cpufreq: mediatek: fix device leak on probe failure
|
|
Instead of using CPUFREQ_ETERNAL for signaling an error condition
in cppc_get_transition_latency(), change the return value type of
that function to int and make it return a proper negative error
code on failures.
No intentional functional impact.
Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
Reviewed-by: Jie Zhan <zhanjie9@hisilicon.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Qais Yousef <qyousef@layalina.io>
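The shape of the interface change, as a hedged before/after sketch (the parameter name is approximate, not copied from the actual CPPC header):

  /* before: failure signaled in-band with a magic value */
  unsigned int cppc_get_transition_latency(int cpu);  /* CPUFREQ_ETERNAL on failure */

  /* after: failure signaled with a proper negative error code */
  int cppc_get_transition_latency(int cpu);           /* e.g. -ENODEV on failure */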
|
|
If cppc_get_transition_latency() returns CPUFREQ_ETERNAL to indicate a
failure to retrieve the transition latency value from the platform
firmware, the CPPC cpufreq driver will use that value (converted to
microseconds) as the policy transition delay, but it is way too large
for any practical use.
Address this by making the driver use the cpufreq core's default
transition latency value (in microseconds) as the transition delay
if CPUFREQ_ETERNAL is returned by cppc_get_transition_latency().
Fixes: d4f3388afd48 ("cpufreq / CPPC: Set platform specific transition_delay_us")
Cc: 5.19+ <stable@vger.kernel.org> # 5.19+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
Reviewed-by: Jie Zhan <zhanjie9@hisilicon.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Qais Yousef <qyousef@layalina.io>
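A minimal sketch of the fallback, assuming the pre-conversion prototype that still returns CPUFREQ_ETERNAL on failure; the helper name is illustrative and the default constant comes from the related patch below:

  static unsigned int sketch_cppc_transition_delay_us(unsigned int cpu)
  {
          unsigned int latency_ns = cppc_get_transition_latency(cpu);

          /* Firmware did not report a usable latency: use the default. */
          if (latency_ns == CPUFREQ_ETERNAL)
                  return CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS / NSEC_PER_USEC;

          return latency_ns / NSEC_PER_USEC;
  }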
|
|
Commit a755d0e2d41b ("cpufreq: Honour transition_latency over
transition_delay_us") caused platforms where cpuinfo.transition_latency
is CPUFREQ_ETERNAL to get a very large transition latency whereas
previously it had been capped at 10 ms (and later at 2 ms).
This led to a user-observable regression between 6.6 and 6.12 as
described by Shawn:
"The dbs sampling_rate was 10000 us on 6.6 and suddently becomes
6442450 us (4294967295 / 1000 * 1.5) on 6.12 for these platforms
because the default transition delay was dropped [...].
It slows down dbs governor's reacting to CPU loading change
dramatically. Also, as transition_delay_us is used by schedutil
governor as rate_limit_us, it shows a negative impact on device
idle power consumption, because the device gets slightly less time
in the lowest OPP."
Evidently, the expectation of the drivers using CPUFREQ_ETERNAL as
cpuinfo.transition_latency was that it would be capped by the core,
but they may as well return a default transition latency value instead
of CPUFREQ_ETERNAL and the core need not do anything with it.
Accordingly, introduce CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS and make
all of the drivers in question use it instead of CPUFREQ_ETERNAL. Also
update the related Rust binding.
Fixes: a755d0e2d41b ("cpufreq: Honour transition_latency over transition_delay_us")
Closes: https://lore.kernel.org/linux-pm/20250922125929.453444-1-shawnguo2@yeah.net/
Reported-by: Shawn Guo <shawnguo@kernel.org>
Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
Reviewed-by: Jie Zhan <zhanjie9@hisilicon.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 6.6+ <stable@vger.kernel.org> # 6.6+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2264949.irdbgypaU6@rafael.j.wysocki
[ rjw: Fix typo in new symbol name, drop redundant type cast from Rust binding ]
Tested-by: Shawn Guo <shawnguo@kernel.org> # with cpufreq-dt driver
Reviewed-by: Qais Yousef <qyousef@layalina.io>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
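For a driver that used to report CPUFREQ_ETERNAL, the conversion amounts to something like the following sketch (the ->init() body shown is illustrative):

  static int sketch_driver_init(struct cpufreq_policy *policy)
  {
          /* was: policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; */
          policy->cpuinfo.transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
          return 0;
  }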
|
|
During initialization, the EDVD_COREx_VOLT_FREQ registers for some cores
are still at reset values and not reflecting the actual frequency. This
causes get calls to fail. Set all cores to their respective max
frequency during probe to initialize the registers to working values.
Suggested-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
The original commit set all cores in a cluster to a shared policy, but
did not update set_target to apply a frequency change to all cores for
the policy. This caused most cores to remain stuck at their boot
frequency.
Fixes: be4ae8c19492 ("cpufreq: tegra186: Share policy per cluster")
Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
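A hedged sketch of the fix's shape: apply the selected frequency to every CPU in the shared policy instead of only policy->cpu. The EDVD write helper is a made-up stand-in for the driver's register programming:

  static void sketch_write_edvd_freq(unsigned int cpu, unsigned int freq_khz)
  {
          /* Stand-in for programming the per-core EDVD_COREx_VOLT_FREQ register. */
  }

  static int sketch_tegra186_set_target(struct cpufreq_policy *policy,
                                        unsigned int index)
  {
          unsigned int freq = policy->freq_table[index].frequency;
          unsigned int cpu;

          /* Program every core covered by the policy, not just policy->cpu. */
          for_each_cpu(cpu, policy->cpus)
                  sketch_write_edvd_freq(cpu, freq);

          return 0;
  }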
|
|
Remove local variables from find_supply_names() and use .and_then() with
the more concise kernel::kvec![] macro, instead of KVec::with_capacity()
followed by .push() and Some().
No functional changes intended.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Make sure to drop the reference to the cci device taken by
of_find_device_by_node() on probe failure (e.g. probe deferral).
Fixes: 0daa47325bae ("cpufreq: mediatek: Link CCI device to CPU")
Cc: Jia-Wei Chang <jia-wei.chang@mediatek.com>
Cc: Rex-BC Chen <rex-bc.chen@mediatek.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Reviewed-by: Chen-Yu Tsai <wenst@chromium.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
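A minimal sketch of the error-path pattern being fixed; the follow-up probe step and function names are illustrative, not the actual mediatek-cpufreq code:

  static int sketch_link_cci(struct platform_device *cci_pdev)
  {
          return -EPROBE_DEFER;   /* e.g. a required resource is not ready yet */
  }

  static int sketch_probe_cci(struct device_node *cci_np)
  {
          struct platform_device *cci_pdev;
          int ret;

          cci_pdev = of_find_device_by_node(cci_np);
          if (!cci_pdev)
                  return -ENODEV;

          ret = sketch_link_cci(cci_pdev);
          if (ret) {
                  /* The fix: balance the reference of_find_device_by_node() took. */
                  put_device(&cci_pdev->dev);
                  return ret;
          }

          return 0;
  }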
|
|
|
|
The cpufreq documentation suggests avoiding direct pointer subtraction
when working with entries in driver_freq_table, as it is relatively
costly. Instead, the recommended approach is to use the provided
iteration macros, like cpufreq_for_each_valid_entry_idx().
Use cpufreq_for_each_entry_idx() instead of pointer subtraction in
cpufreq_frequency_table_cpuinfo() which improves code clarity and
follows the established cpufreq coding style.
While at it, remove redundant local variable initialization from
cpufreq_table_index_unsorted().
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/20250923075553.45532-1-zhangzihuan@kylinos.cn
[ rjw: Subject tweak, changelog edits, local variable definition tweak ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
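A small sketch of the recommended iteration pattern (an illustrative helper, not the actual cpufreq_frequency_table_cpuinfo() code):

  static int sketch_first_valid_index(struct cpufreq_policy *policy)
  {
          struct cpufreq_frequency_table *pos;
          int idx;

          /* The macro tracks the index itself; no "pos - table" pointer math. */
          cpufreq_for_each_valid_entry_idx(pos, policy->freq_table, idx)
                  return idx;

          return -EINVAL;
  }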
|
|
Commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based
frequency-invariance later") postponed the frequency-invariance
initialization to avoid disabling it in the error case.
However, this is not safe with respect to locking. Instead, move the
initialization up before the subsys interface is registered (which will
rebuild the sched_domains) and add the corresponding disable on the
error path.
Lockdep splat observed without this patch:
[ 0.989686] ======================================================
[ 0.989688] WARNING: possible circular locking dependency detected
[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S
[ 0.989691] ------------------------------------------------------
[ 0.989692] swapper/0/1 is trying to acquire lock:
[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58
[ 0.989705]
but task is already holding lock:
[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0
[ 0.989713]
which lock already depends on the new lock.
Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later")
Signed-off-by: Christian Loehle <christian.loehle@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The comment above the condition `if (cpu->last_sample_time)` clearly
indicates that the branch is taken for the vast majority of invocations
after the first sample in a cycle. The first sample is a one-time
initialization case.
Add likely() hint to the condition to improve branch prediction for
this performance-critical path in intel_pstate_sample().
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
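The pattern, as a minimal sketch rather than the actual intel_pstate_sample() code:

  struct sketch_cpu {
          u64 last_sample_time;
  };

  static void sketch_compute_deltas(struct sketch_cpu *cpu, u64 time)
  {
          /* Stand-in for the real delta/average computations. */
  }

  static void sketch_sample(struct sketch_cpu *cpu, u64 time)
  {
          if (likely(cpu->last_sample_time)) {
                  /* Common case: every sample after the first one in a cycle. */
                  sketch_compute_deltas(cpu, time);
          } else {
                  /* One-time initialization on the first sample. */
                  cpu->last_sample_time = time;
          }
  }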
|
|
Currently, cpufreq allows drivers to implement both ->target() and
->target_index() callbacks, but that can lead to ambiguous or incorrect
behavior.
For this reason, prevent cpufreq drivers implementing both ->target()
and ->target_index() at the same time from registering.
This check can help to catch driver implementation mistakes early and
improve overall robustness, without affecting existing valid drivers.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/20250908085738.31602-1-zhangzihuan@kylinos.cn
[ rjw: Subject adjustment and changelog rewrite ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
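A hedged sketch of where such a check would sit; the exact placement and wording in cpufreq_register_driver() may differ:

  static int sketch_validate_driver(struct cpufreq_driver *drv)
  {
          /* A driver must provide one of the two callbacks, never both. */
          if (drv->target && drv->target_index) {
                  pr_err("%s: both ->target() and ->target_index() implemented\n",
                         drv->name);
                  return -EINVAL;
          }

          return 0;
  }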
|
|
So far, HWP has never been enabled without EPP (Energy-Performance
Preference) interface support, since the lack of the latter indicates an
incomplete implementation of HWP, which was the case on early development
vehicle platforms. However, HWP can be expected to work if DEC (Dynamic
Efficiency Control) is enabled as indicated by setting bit 27 in
MSR_IA32_POWER_CTL (DEC enable bit).
Accordingly, allow HWP to be enabled if the EPP interface is not
supported so long as DEC is enabled in the processor.
Still, the EPP control sysfs interface is useless when EPP is not
supported, so do not expose it in that case.
Link: https://lore.kernel.org/linux-pm/20250904000608.260817-2-srinivas.pandruvada@linux.intel.com/
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Co-developed-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
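A minimal sketch of the DEC test, using the classic MSR read helper; the function name is illustrative and bit 27 is taken from the changelog above:

  static bool sketch_dec_enabled(void)
  {
          u64 power_ctl;

          rdmsrl(MSR_IA32_POWER_CTL, power_ctl);
          return power_ctl & BIT(27);     /* DEC enable bit */
  }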
|
|
Make drv_write() call on_each_cpu_mask() instead of using an open-coded
equivalent of the latter.
Also remove a comment mentioning the smp_call_function_many() usage
which is not particularly useful anyway.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
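The resulting shape, as a hedged sketch (the per-CPU write body and the names are illustrative, not the actual drv_write() code):

  static void sketch_do_drv_write(void *cmd)
  {
          /* Per-CPU register/MSR write would go here. */
  }

  static void sketch_drv_write(const struct cpumask *mask, void *cmd)
  {
          /* Replaces the open-coded smp_call_function_many() + local call. */
          on_each_cpu_mask(mask, sketch_do_drv_write, cmd, true);
  }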
|
|
|
|
Merge a hibernation regression fix and a fix related to energy model
management for 6.17-rc6
* pm-sleep:
PM: hibernate: Restrict GFP mask in hibernation_snapshot()
* pm-em:
PM: EM: Add function for registering a PD without capacity update
|
|
IO time is considered busy by default for modern Intel processors. The
current check covers recent Family 6 models but excludes the brand new
Families 18 and 19.
According to Arjan van de Ven, the model check was mainly due to a lack
of testing on systems before INTEL_CORE2_MEROM. He suggests considering
all Intel processors as having an efficient idle.
Extend the IO busy classification to all Intel processors starting with
Family 6, including Family 15 (Pentium 4s) and upcoming Families 18/19.
Use an x86 VFM check and move the function to the header file to avoid
using arch-specific #ifdefs in the C file.
Signed-off-by: Sohil Mehta <sohil.mehta@intel.com>
Link: https://patch.msgid.link/20250908230655.2562440-1-sohil.mehta@intel.com
[ rjw: Added empty line after #include ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
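A hedged sketch of the broadened predicate; the helper name is illustrative and the real patch uses the VFM helpers rather than the raw family field:

  static inline bool sketch_io_is_busy(void)
  {
          /* Family >= 6 numerically covers Family 15 and Families 18/19 too. */
          return boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
                 boot_cpu_data.x86 >= 6;
  }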
|
|
Replace sscanf() with kstrtouint() in all sysfs store functions to improve
input validation and security. The kstrtouint() function provides better
error detection, overflow protection, and consistent error handling
compared to sscanf().
This maintains existing functionality while improving input validation
robustness and following kernel coding best practices for string parsing.
Signed-off-by: Kaushlendra Kumar <kaushlendra.kumar@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250906115316.3010384-1-kaushlendra.kumar@intel.com
[ rjw: Dropped duplicate paragraph from the changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
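A minimal sketch of the conversion in one store function; the attribute plumbing and names here are generic placeholders, not the actual cpufreq sysfs code:

  static ssize_t sketch_store(struct kobject *kobj, struct kobj_attribute *attr,
                              const char *buf, size_t count)
  {
          unsigned int val;
          int ret;

          /* was: if (sscanf(buf, "%u", &val) != 1) return -EINVAL; */
          ret = kstrtouint(buf, 10, &val);
          if (ret)
                  return ret;     /* -EINVAL or -ERANGE, with overflow detection */

          /* ... apply "val" to the attribute being written ... */
          return count;
  }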
|
|
The intel_pstate driver manages CPU capacity changes itself and it does
not need an update of the capacity of all CPUs in the system to be
carried out after registering a PD.
Moreover, in some configurations (for instance, an SMT-capable
hybrid x86 system booted with nosmt in the kernel command line) the
em_check_capacity_update() call at the end of em_dev_register_perf_domain()
always fails and reschedules itself to run once again in 1 s, so
effectively it runs in vain every 1 s forever.
To address this, introduce a new variant of em_dev_register_perf_domain(),
called em_dev_register_pd_no_update(), that does not invoke
em_check_capacity_update(), and make intel_pstate use it instead of the
original.
Fixes: 7b010f9b9061 ("cpufreq: intel_pstate: EAS support for hybrid platforms")
Closes: https://lore.kernel.org/linux-pm/40212796-734c-4140-8a85-854f72b8144d@panix.com/
Reported-by: Kenneth R. Crudup <kenny@panix.com>
Tested-by: Kenneth R. Crudup <kenny@panix.com>
Cc: 6.16+ <stable@vger.kernel.org> # 6.16+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Adjust frequency percentage computations in update_cpu_qos_request() to
avoid going above the exact numerical percentage in the FREQ_QOS_MAX
case.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/3395556.44csPzL39Z@rafael.j.wysocki
[ rjw: Rename "cpu" to "cpudata" and "cpunum" to "cpu" ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
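The arithmetic in question, as a hedged sketch (names are illustrative, not the actual update_cpu_qos_request() code):

  static unsigned int sketch_max_freq_from_pct(unsigned int max_freq_khz,
                                               unsigned int pct)
  {
          /* Round down so the FREQ_QOS_MAX cap never exceeds exactly pct%. */
          return (unsigned int)div_u64((u64)max_freq_khz * pct, 100);
  }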
|
|
Move the code from the for_each_possible_cpu() loop in update_qos_request()
to a separate function and use __free() for cpufreq policy reference
counting in it to avoid having to call cpufreq_cpu_put() repeatedly (or
using goto).
While at it, rename update_qos_request() to update_qos_requests()
because it updates multiple requests in one go.
No intentional functional impact.
Link: https://lore.kernel.org/linux-pm/CAJZ5v0gN1T5woSF0tO=TbAh+2-sWzxFjWyDbB7wG2TFCOU01iQ@mail.gmail.com/
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/3026597.e9J7NaK4W3@rafael.j.wysocki
[ rjw: Rename "cpu" to "cpudata" and "cpunum" to "cpu" in new code ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The cpufreq_cpu_put() call in update_qos_request() takes place too early
because the latter subsequently calls freq_qos_update_request() that
indirectly accesses the policy object in question through the QoS request
object passed to it.
Fortunately, update_qos_request() is called under intel_pstate_driver_lock,
so this issue does not matter for changing the intel_pstate operation
mode, but it theoretically can cause a crash to occur on CPU device hot
removal (which currently can only happen in virt, but it is formally
supported nevertheless).
Address this issue by modifying update_qos_request() to drop the
reference to the policy later.
Fixes: da5c504c7aae ("cpufreq: intel_pstate: Implement QoS supported freq constraints")
Cc: 5.4+ <stable@vger.kernel.org> # 5.4+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/2255671.irdbgypaU6@rafael.j.wysocki
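A minimal sketch of the ordering being fixed; the surrounding intel_pstate code does more, but the point is that the policy reference must outlive the freq_qos_update_request() call that dereferences the policy indirectly:

  static void sketch_update_one_request(unsigned int cpu,
                                        struct freq_qos_request *req, s32 value)
  {
          struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

          if (!policy)
                  return;

          freq_qos_update_request(req, value);

          /* Drop the reference only after the request has been updated. */
          cpufreq_cpu_put(policy);
  }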
|
|
The intel_pstate driver does not enable HWP mode when CPUID.06H:EAX[10]
is not set, indicating that EPP (Energy Performance Preference) is not
supported by the hardware.
When EPP is unavailable, the system falls back to using EPB (Energy
Performance Bias) if the feature is supported. However, since the
intel_pstate driver will not enable HWP in this scenario, any EPB-related
code becomes unreachable and irrelevant. Remove the EPB handling code
paths, simplifying the driver logic and reducing code size.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Link: https://patch.msgid.link/20250904000608.260817-1-srinivas.pandruvada@linux.intel.com
[ rjw: Subject adjustment ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Follow cleanup.h recommendations and define and assign a variable
in one statement when __free() is used.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Link: https://patch.msgid.link/2251447.irdbgypaU6@rafael.j.wysocki
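The cleanup.h pattern referred to above, as a small sketch with an illustrative helper:

  static unsigned int sketch_read_cur_freq(unsigned int cpu)
  {
          /*
           * Define and assign in one statement, instead of declaring the
           * __free() variable first and assigning it later.
           */
          struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);

          if (!policy)
                  return 0;

          return policy->cur;     /* reference dropped automatically on return */
  }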
|
|
Follow cleanup.h recommendations and always define and assign variables
in one statement when __free() is used.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/4691667.LvFx2qVVIh@rafael.j.wysocki
|
|
Change the return type of the speedstep_get_freqs() function from unsigned
int to int because it may return negative error codes. For the same
reason, change the 'ret' variables to int type as well.
No effect on runtime.
Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250902114545.651661-4-rongqianfeng@vivo.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Change the 'ret' variable in store_scaling_setspeed() from unsigned int to
int, as it needs to store either negative error codes or zero returned
by kstrtouint().
No effect on runtime.
Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250902114545.651661-2-rongqianfeng@vivo.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Since commit e0b3165ba521 ("cpufreq: add 'freq_table' in struct
cpufreq_policy"), freq_table has been stored in struct cpufreq_policy
instead of being maintained separately.
However, several helpers in freq_table.c still take both policy and
freq_table as parameters, even though policy->freq_table can always be
used. This leads to redundant function arguments and increases the
chance of inconsistencies.
This patch removes the unnecessary freq_table argument from these
functions and updates their callers to only pass policy. This makes
the code simpler, more consistent, and avoids duplication.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250902073323.48330-1-zhangzihuan@kylinos.cn
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm
Merge CPUFreq updates for 6.18 from Viresh Kumar:
"- Minor improvements to Rust Cpumask APIs (Alice Ryhl, Baptiste Lepers,
and Shankari Anand).
- Minor cleanups and optimizations to various cpufreq drivers (Akhilesh
Patil, BowenYu, Dennis Beier, Liao Yuanhong, Zihuan Zhang, Florian
Fainelli, Taniya Das, Md Sadre Alam, and Christian Marangi).
- Enhancements for TI cpufreq driver (Judith Mendez, and Paresh Bhagat).
- Enhancements for mediatek cpufreq driver (Nicolas Frattaroli).
- Remove outdated cpufreq-dt.txt (Frank Li).
- Update MAINTAINERS for virtual-cpufreq maintainer (Saravana Kannan)."
* tag 'cpufreq-arm-updates-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm: (28 commits)
cpufreq: mediatek: avoid redundant conditions
cpufreq/longhaul: handle NULL policy in longhaul_exit
cpufreq: tegra186: Use scope-based cleanup helper
cpufreq: mediatek: Use scope-based cleanup helper
cpufreq: s5pv210: Use scope-based cleanup helper
cpufreq: CPPC: Use scope-based cleanup helper
cpufreq: brcmstb-avs: Use scope-based cleanup helper
dt-bindings: Remove outdated cpufreq-dt.txt
arm64: dts: ti: k3-am62p: Fix supported hardware for 1GHz OPP
cpufreq: ti: Allow all silicon revisions to support OPPs
cpufreq: ti: Support more speed grades on AM62Px SoC
cpufreq: ti: Add support for AM62D2
cpufreq: dt-platdev: Blacklist ti,am62d2 SoC
rust: opp: update ARef and AlwaysRefCounted imports from sync::aref
cpufreq: mediatek-hw: don't use error path on NULL fdvfs
cpufreq: scmi: Account for malformed DT in scmi_dev_used_by_cpus()
rust: cpumask: Mark CpumaskVar as transparent
rust: cpumask: rename CpumaskVar::as[_mut]_ref to from_raw[_mut]
dt-bindings: cpufreq: cpufreq-qcom-hw: Add QCS615 compatible
MAINTAINERS: Add myself as virtual-cpufreq maintainer
...
|
|
While 'if (i <= 0) ... else if (i > 0) ...' is technically equivalent to
'if (i <= 0) ... else ...', the latter is vastly easier to read because
it avoids writing out a condition that is unnecessary. Let's drop such
unnecessary conditions.
Signed-off-by: Liao Yuanhong <liaoyuanhong@vivo.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
During the suspend sequence the cached CPPC request is destroyed
with the expectation that it's restored during resume. This assumption
broke when the separate cached EPP variable was removed, and then it was
broken again by commit 608a76b65288 ("cpufreq/amd-pstate: Add support
for the "Requested CPU Min frequency" BIOS option") which explicitly
set it to zero during suspend.
Remove the invalidation and instead set the value during the suspend-time
call that updates the limits, so that the cached variable can be used to
restore the state on resume.
Fixes: 608a76b65288 ("cpufreq/amd-pstate: Add support for the "Requested CPU Min frequency" BIOS option")
Fixes: b7a41156588a ("cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend")
Reported-by: goldens <goldenspinach.rhbugzilla@gmail.com>
Closes: https://community.frame.work/t/increased-power-usage-after-resuming-from-suspend-on-ryzen-7040-kernel-6-15-regression/
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2391221
Tested-by: goldens <goldenspinach.rhbugzilla@gmail.com>
Tested-by: Willian Wang <kernel@willian.wang>
Reported-by: Vincent Maurin <vincent.maurin.fr@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219981
Tested-by: Alex De Lorenzo <kernel@alexdelorenzo.dev>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Link: https://lore.kernel.org/r/20250826052747.2240670-1-superm1@kernel.org
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>
|
|
longhaul_exit() was calling cpufreq_cpu_get(0) without checking
for a NULL policy pointer. On some systems, this could lead to a
NULL dereference and a kernel warning or panic.
This patch adds a check using unlikely() and returns early if the
policy is NULL.
Bugzilla: #219962
Signed-off-by: Dennis Beier <nanovim@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
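A minimal sketch matching the description above; the actual longhaul exit path does more work after the check:

  static void sketch_longhaul_exit(void)
  {
          struct cpufreq_policy *policy = cpufreq_cpu_get(0);

          if (unlikely(!policy))
                  return;         /* no policy for CPU0: nothing to restore */

          /* ... restore the lowest frequency state ... */
          cpufreq_cpu_put(policy);
  }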
|
|
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
[ Viresh: Minor changes ]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
[ Viresh: Minor changes ]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
[ Viresh: Minor changes ]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
[ Viresh: Minor changes ]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
[ Viresh: Minor changes to commit log ]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
cpufreq drivers are supposed to use either ->setpolicy() or
->target()/->target_index().
Simplify the existing check by collapsing it into a single boolean
expression:
(!!driver->setpolicy == (driver->target_index || driver->target))
This is a readability/maintainability cleanup and keeps the semantics
unchanged.
No functional change intended.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250822070424.166795-3-zhangzihuan@kylinos.cn
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Most kernel code using strncasecmp()/strncmp() passes strlen("xxx")
as the length argument. cpufreq_parse_policy() previously used
CPUFREQ_NAME_LEN (16), which is longer than the actual strings
("performance" is 11 chars, "powersave" is 9 chars).
This patch switches to strlen() for the comparison, making the
matching slightly more permissive (e.g., "powersavexxx" will now
also match "powersave"). While this is unlikely to cause functional
issues, it aligns cpufreq with common kernel style and makes the
behavior more intuitive.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://patch.msgid.link/20250822070424.166795-2-zhangzihuan@kylinos.cn
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
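A hedged sketch of the comparison change; the real cpufreq_parse_policy() also falls back to a governor lookup:

  static int sketch_parse_policy(const char *str_governor, unsigned int *pol)
  {
          /* was: strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN) */
          if (!strncasecmp(str_governor, "performance", strlen("performance"))) {
                  *pol = CPUFREQ_POLICY_PERFORMANCE;
                  return 0;
          }

          if (!strncasecmp(str_governor, "powersave", strlen("powersave"))) {
                  *pol = CPUFREQ_POLICY_POWERSAVE;
                  return 0;
          }

          return -EINVAL;
  }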
|
|
More silicon revisions are being defined for the AM62x, AM62Px, and AM62ax
SoCs. These silicon revisions may also support the currently established
OPPs, so remove the revision limitation in ti-cpufreq and instead determine
whether an OPP applies from speed grade efuse parsing.
Signed-off-by: Judith Mendez <jm@ti.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
As the AM62Px SoC family matures, more speed grades are being defined.
Add support for speed grades U and T which both support all currently
established OPPs.
Signed-off-by: Judith Mendez <jm@ti.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
In the "active" mode of the amd-pstate driver with performance
governor, the CPPC.min_perf is expected to be the nominal_perf.
However after commit a9b9b4c2a4cd ("cpufreq/amd-pstate: Drop min and
max cached frequencies"), this is not the case when the governor is
switched from performance to powersave and back to performance, and
the CPPC.min_perf will be equal to the scaling_min_freq that was set
for the powersave governor.
This is because prior to commit a9b9b4c2a4cd ("cpufreq/amd-pstate:
Drop min and max cached frequencies"), amd_pstate_epp_update_limit()
would unconditionally call amd_pstate_update_min_max_limit() and the
latter function would enforce the CPPC.min_perf constraint when the
governor is performance.
However, after the aforementioned commit,
amd_pstate_update_min_max_limit() is called by
amd_pstate_epp_update_limit() only when either scaling_min_freq or
scaling_max_freq differs from the cached value in
cpudata->{min/max}_limit_freq, and neither would have changed on a
governor transition from powersave to performance, so the CPPC.min_perf
constraint is not enforced for the performance governor.
Fix this by invoking amd_pstate_epp_update_limit() not only when the
{min/max} limits have changed from the cached values, but also when
the policy itself has changed.
Fixes: a9b9b4c2a4cd ("cpufreq/amd-pstate: Drop min and max cached frequencies")
Signed-off-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Link: https://lore.kernel.org/r/20250821042638.356-1-gautham.shenoy@amd.com
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>
|
|
Add support for the TI K3 AM62D2 SoC: read the speed and revision values
from hardware and pass them to the OPP layer. AM62D shares the same
configurations as AM62A, so use the existing am62a7_soc_data.
Signed-off-by: Paresh Bhagat <p-bhagat@ti.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Add the ti,am62d2 SoC to the blacklist, as the ti-cpufreq driver handles
creating the cpufreq-dt platform device after it completes probing; this
ensures the device is not created twice.
Signed-off-by: Paresh Bhagat <p-bhagat@ti.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
The IS_ERR_OR_NULL check for priv->fdvfs is inappropriate and should be
an IS_ERR check instead, as a NULL value here would be passed on to
PTR_ERR().
In practice, there is no problem here, as devm_of_iomap cannot return
NULL in any circumstance. However, it causes a Smatch static checker
warning.
Fix the warning by changing the check from IS_ERR_OR_NULL to IS_ERR.
Fixes: 32e0d669f3ac ("cpufreq: mediatek-hw: Add support for MT8196")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/linux-pm/aKQubSEXH1TXQpnR@stanley.mountain/
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
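A minimal sketch of the corrected check; only priv->fdvfs and devm_of_iomap() come from the changelog above, the rest is illustrative:

  static int sketch_map_fdvfs(struct device *dev, struct device_node *np,
                              void __iomem **fdvfs)
  {
          *fdvfs = devm_of_iomap(dev, np, 0, NULL);

          /*
           * was: IS_ERR_OR_NULL(*fdvfs) -- a NULL pointer would be fed to
           * PTR_ERR() and silently turn into 0. devm_of_iomap() never
           * returns NULL, so IS_ERR() is the right check.
           */
          if (IS_ERR(*fdvfs))
                  return PTR_ERR(*fdvfs);

          return 0;
  }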
|
|
When a cpufreq driver registers the first policy, it may attempt to
initialize the policy governor from `last_governor`. However, this is
meaningless for the first policy instance, because `last_governor` is
only updated when policies are removed (e.g. during CPU offline).
The `last_governor` mechanism is intended to restore the previously
used governor across CPU hotplug events. For the very first policy,
there is no "previous governor" to restore, so calling
get_governor(last_governor) is unnecessary and potentially confusing.
Skip looking up `last_governor` when registering the first policy.
Instead, directly use the default governor after all governors
have been registered and are available.
This avoids meaningless lookups, reduces unnecessary module reference
handling, and simplifies the initial policy path.
Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Lifeng Zheng <zhenglifeng1@huawei.com>
Link: https://patch.msgid.link/20250725041450.68754-1-zhangzihuan@kylinos.cn
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Broadcom STB platforms were early adopters (2017) of the SCMI framework
and, as a result, not all deployed systems have a Device Tree entry where
SCMI protocol 0x13 (PERFORMANCE) is declared as a clock provider, nor do
the CPU Device Tree node(s) reference protocol 0x13 as their clock
provider. This was clarified in commit e11c480b6df1 ("dt-bindings:
firmware: arm,scmi: Extend bindings for protocol@13") in 2023.
For those platforms, allow the checks done by scmi_dev_used_by_cpus()
to continue and, if no early return has been taken, key off the
documented compatible string to give them a pass so they can continue
to use scmi-cpufreq.
Fixes: 6c9bb8692272 ("cpufreq: scmi: Skip SCMI devices that aren't used by the CPUs")
Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
Prevent intel_pstate from loading when OOB (Out Of Band) P-states mode is
enabled.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Link: https://patch.msgid.link/20250808145122.4057208-1-srinivas.pandruvada@linux.intel.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|