mirror of
https://github.com/armbian/linux-rockchip.git
synced 2026-01-06 11:08:10 -08:00
Merge tag 'pm-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"These add support for 'artificial' Energy Models in which power
numbers for different entities may be in different scales, add support
for some new hardware, fix bugs and clean up code in multiple places.
Specifics:
- Update the Energy Model support code to allow the Energy Model to
be artificial, which means that the power values may not be on a
uniform scale with other devices providing power information, and
update the cpufreq_cooling and devfreq_cooling thermal drivers to
support artificial Energy Models (Lukasz Luba).
- Make DTPM check the Energy Model type (Lukasz Luba).
- Fix policy counter decrementation in cpufreq if Energy Model is in
use (Pierre Gondois).
- Add CPU-based scaling support to passive devfreq governor (Saravana
Kannan, Chanwoo Choi).
- Update the rk3399_dmc devfreq driver (Brian Norris).
- Export dev_pm_ops instead of suspend() and resume() in the IIO
chemical scd30 driver (Jonathan Cameron).
- Add namespace variants of EXPORT[_GPL]_SIMPLE_DEV_PM_OPS and
PM-runtime counterparts (Jonathan Cameron).
- Move symbol exports in the IIO chemical scd30 driver into the
IIO_SCD30 namespace (Jonathan Cameron).
- Avoid device PM-runtime usage count underflows (Rafael Wysocki).
- Allow dynamic debug to control printing of PM messages (David
Cohen).
- Fix some kernel-doc comments in hibernation code (Yang Li, Haowen
Bai).
- Preserve ACPI-table override during hibernation (Amadeusz
Sławiński).
- Improve support for suspend-to-RAM for PSCI OSI mode (Ulf Hansson).
- Make Intel RAPL power capping driver support the RaptorLake and
AlderLake N processors (Zhang Rui, Sumeet Pawnikar).
- Remove redundant store to value after multiply in the RAPL power
capping driver (Colin Ian King).
- Add AlderLake processor support to the intel_idle driver (Zhang
Rui).
- Fix regression leading to no genpd governor in the PSCI cpuidle
driver and fix the riscv-sbi cpuidle driver to allow a genpd
governor to be used (Ulf Hansson).
- Fix cpufreq governor clean up code to avoid using kfree() directly
to free kobject-based items (Kevin Hao).
- Prepare cpufreq for powerpc's asm/prom.h cleanup (Christophe
Leroy).
- Make intel_pstate notify frequency invariance code when no_turbo is
turned on and off (Chen Yu).
- Add Sapphire Rapids OOB mode support to intel_pstate (Srinivas
Pandruvada).
- Make cpufreq avoid unnecessary frequency updates due to mismatch
between hardware and the frequency table (Viresh Kumar).
- Make remove_cpu_dev_symlink() clear the real_cpus mask to simplify
code (Viresh Kumar).
- Rearrange cpufreq_offline() and cpufreq_remove_dev() to make the
calling convention for some driver callbacks consistent (Rafael
Wysocki).
- Avoid accessing half-initialized cpufreq policies from the show()
and store() sysfs functions (Schspa Shi).
- Rearrange cpufreq_offline() to make the calling convention for some
driver callbacks consistent (Schspa Shi).
- Update CPPC handling in cpufreq (Pierre Gondois).
- Extend dev_pm_domain_detach() doc (Krzysztof Kozlowski).
- Move genpd's time-accounting to ktime_get_mono_fast_ns() (Ulf
Hansson).
- Improve the way genpd deals with its governors (Ulf Hansson).
- Update the turbostat utility to version 2022.04.16 (Len Brown, Dan
Merillat, Sumeet Pawnikar, Zephaniah E. Loss-Cutler-Hull, Chen Yu)"
* tag 'pm-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (94 commits)
PM: domains: Trust domain-idle-states from DT to be correct by genpd
PM: domains: Measure power-on/off latencies in genpd based on a governor
PM: domains: Allocate governor data dynamically based on a genpd governor
PM: domains: Clean up some code in pm_genpd_init() and genpd_remove()
PM: domains: Fix initialization of genpd's next_wakeup
PM: domains: Fixup QoS latency measurements for IRQ safe devices in genpd
PM: domains: Measure suspend/resume latencies in genpd based on governor
PM: domains: Move the next_wakeup variable into the struct gpd_timing_data
PM: domains: Allocate gpd_timing_data dynamically based on governor
PM: domains: Skip another warning in irq_safe_dev_in_sleep_domain()
PM: domains: Rename irq_safe_dev_in_no_sleep_domain() in genpd
PM: domains: Don't check PM_QOS_FLAG_NO_POWER_OFF in genpd
PM: domains: Drop redundant code for genpd always-on governor
PM: domains: Add GENPD_FLAG_RPM_ALWAYS_ON for the always-on governor
powercap: intel_rapl: remove redundant store to value after multiply
cpufreq: CPPC: Enable dvfs_possible_from_any_cpu
cpufreq: CPPC: Enable fast_switch
ACPI: CPPC: Assume no transition latency if no PCCT
ACPI: bus: Set CPPC _OSC bits for all and when CPPC_LIB is supported
ACPI: CPPC: Check _OSC for flexible address space
...
@@ -1,212 +0,0 @@
* Rockchip rk3399 DMC (Dynamic Memory Controller) device

Required properties:
- compatible: Must be "rockchip,rk3399-dmc".
- devfreq-events: Node to get DDR loading. Refer to
  Documentation/devicetree/bindings/devfreq/event/rockchip-dfi.txt.
- clocks: Phandles for the clocks specified in the "clock-names" property.
- clock-names: The name of the clock used by the DFI; must be
  "pclk_ddr_mon".
- operating-points-v2: Refer to Documentation/devicetree/bindings/opp/opp-v2.yaml
  for details.
- center-supply: DMC supply node.
- status: Marks the node enabled/disabled.
- rockchip,pmu: Phandle to the syscon managing the "PMU general register
  files".

Optional properties:
- interrupts: The CPU interrupt number. The interrupt specifier format
  depends on the interrupt controller. It should be a DCF interrupt;
  when DDR DVFS finishes, a DCF interrupt is triggered.

The following properties relate to DDR timing:

- rockchip,dram_speed_bin: For values, reference
  include/dt-bindings/clock/rk3399-ddr.h. Selects the DDR3 cl-trp-trcd
  type. It must be set according to "Speed Bin" in the DDR3 datasheet;
  DO NOT use a smaller "Speed Bin" than specified for the DDR3 being used.

- rockchip,pd_idle: Configure the PD_IDLE value. Defines the power-down
  idle period in which memories are placed into power-down mode if the bus
  is idle for PD_IDLE DFI clock cycles.

- rockchip,sr_idle: Configure the SR_IDLE value. Defines the self-refresh
  idle period in which memories are placed into self-refresh mode if the
  bus is idle for SR_IDLE * 1024 DFI clock cycles (the DFI clock frequency
  is half the DRAM clock). Default value is "0".

- rockchip,sr_mc_gate_idle: Defines the memory self-refresh and controller
  clock gating idle period. Memories are placed into self-refresh mode and
  memory controller clock gating is started if the bus is idle for
  sr_mc_gate_idle * 1024 DFI clock cycles.

- rockchip,srpd_lite_idle: Defines the self-refresh power-down idle period
  in which memories are placed into self-refresh power-down mode if the bus
  is idle for srpd_lite_idle * 1024 DFI clock cycles. This parameter is for
  LPDDR4 only.

- rockchip,standby_idle: Defines the standby idle period in which memories
  are placed into self-refresh mode. The controller, pi, PHY and DRAM clock
  will be gated if the bus is idle for standby_idle * DFI clock cycles.

- rockchip,dram_dll_dis_freq: Defines the DDR3 DLL bypass frequency in MHz.
  When the DDR frequency is less than DRAM_DLL_DISB_FREQ, the DDR3 DLL will
  be bypassed. Note: if the DLL is bypassed, ODT also stops working.

- rockchip,phy_dll_dis_freq: Defines the PHY DLL bypass frequency in MHz.
  When the DDR frequency is less than DRAM_DLL_DISB_FREQ, the PHY DLL will
  be bypassed. Note: PHY DLL and PHY ODT are independent.

- rockchip,ddr3_odt_dis_freq: When the DRAM type is DDR3, this parameter
  defines the ODT disable frequency in MHz. When the DDR frequency is less
  than ddr3_odt_dis_freq, the ODT on the DRAM side and controller side are
  both disabled.

- rockchip,ddr3_drv: When the DRAM type is DDR3, this parameter defines the
  DRAM side drive strength in ohms. Default value is 40.

- rockchip,ddr3_odt: When the DRAM type is DDR3, this parameter defines the
  DRAM side ODT strength in ohms. Default value is 120.

- rockchip,phy_ddr3_ca_drv: When the DRAM type is DDR3, this parameter
  defines the PHY side CA line (including command line, address line and
  clock line) drive strength. Default value is 40.

- rockchip,phy_ddr3_dq_drv: When the DRAM type is DDR3, this parameter
  defines the PHY side DQ line (including DQS/DQ/DM line) drive strength.
  Default value is 40.

- rockchip,phy_ddr3_odt: When the DRAM type is DDR3, this parameter defines
  the PHY side ODT strength. Default value is 240.

- rockchip,lpddr3_odt_dis_freq: When the DRAM type is LPDDR3, this parameter
  defines the ODT disable frequency in MHz. When the DDR frequency is less
  than lpddr3_odt_dis_freq, the ODT on the DRAM side and controller side are
  both disabled.

- rockchip,lpddr3_drv: When the DRAM type is LPDDR3, this parameter defines
  the DRAM side drive strength in ohms. Default value is 34.

- rockchip,lpddr3_odt: When the DRAM type is LPDDR3, this parameter defines
  the DRAM side ODT strength in ohms. Default value is 240.

- rockchip,phy_lpddr3_ca_drv: When the DRAM type is LPDDR3, this parameter
  defines the PHY side CA line (including command line, address line and
  clock line) drive strength. Default value is 40.

- rockchip,phy_lpddr3_dq_drv: When the DRAM type is LPDDR3, this parameter
  defines the PHY side DQ line (including DQS/DQ/DM line) drive strength.
  Default value is 40.

- rockchip,phy_lpddr3_odt: When the DRAM type is LPDDR3, this parameter
  defines the PHY side ODT strength. Default value is 240.

- rockchip,lpddr4_odt_dis_freq: When the DRAM type is LPDDR4, this parameter
  defines the ODT disable frequency in MHz. When the DDR frequency is less
  than lpddr4_odt_dis_freq, the ODT on the DRAM side and controller side are
  both disabled.

- rockchip,lpddr4_drv: When the DRAM type is LPDDR4, this parameter defines
  the DRAM side drive strength in ohms. Default value is 60.

- rockchip,lpddr4_dq_odt: When the DRAM type is LPDDR4, this parameter
  defines the DRAM side ODT on DQS/DQ line strength in ohms. Default value
  is 40.

- rockchip,lpddr4_ca_odt: When the DRAM type is LPDDR4, this parameter
  defines the DRAM side ODT on CA line strength in ohms. Default value
  is 40.

- rockchip,phy_lpddr4_ca_drv: When the DRAM type is LPDDR4, this parameter
  defines the PHY side CA line (including command address line) drive
  strength. Default value is 40.

- rockchip,phy_lpddr4_ck_cs_drv: When the DRAM type is LPDDR4, this
  parameter defines the PHY side clock line and CS line drive strength.
  Default value is 80.

- rockchip,phy_lpddr4_dq_drv: When the DRAM type is LPDDR4, this parameter
  defines the PHY side DQ line (including DQS/DQ/DM line) drive strength.
  Default value is 80.

- rockchip,phy_lpddr4_odt: When the DRAM type is LPDDR4, this parameter
  defines the PHY side ODT strength. Default value is 60.

Example:
	dmc_opp_table: dmc_opp_table {
		compatible = "operating-points-v2";

		opp00 {
			opp-hz = /bits/ 64 <300000000>;
			opp-microvolt = <900000>;
		};
		opp01 {
			opp-hz = /bits/ 64 <666000000>;
			opp-microvolt = <900000>;
		};
	};

	dmc: dmc {
		compatible = "rockchip,rk3399-dmc";
		devfreq-events = <&dfi>;
		interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>;
		clocks = <&cru SCLK_DDRC>;
		clock-names = "dmc_clk";
		operating-points-v2 = <&dmc_opp_table>;
		center-supply = <&ppvar_centerlogic>;
		upthreshold = <15>;
		downdifferential = <10>;
		rockchip,ddr3_speed_bin = <21>;
		rockchip,pd_idle = <0x40>;
		rockchip,sr_idle = <0x2>;
		rockchip,sr_mc_gate_idle = <0x3>;
		rockchip,srpd_lite_idle = <0x4>;
		rockchip,standby_idle = <0x2000>;
		rockchip,dram_dll_dis_freq = <300>;
		rockchip,phy_dll_dis_freq = <125>;
		rockchip,auto_pd_dis_freq = <666>;
		rockchip,ddr3_odt_dis_freq = <333>;
		rockchip,ddr3_drv = <40>;
		rockchip,ddr3_odt = <120>;
		rockchip,phy_ddr3_ca_drv = <40>;
		rockchip,phy_ddr3_dq_drv = <40>;
		rockchip,phy_ddr3_odt = <240>;
		rockchip,lpddr3_odt_dis_freq = <333>;
		rockchip,lpddr3_drv = <34>;
		rockchip,lpddr3_odt = <240>;
		rockchip,phy_lpddr3_ca_drv = <40>;
		rockchip,phy_lpddr3_dq_drv = <40>;
		rockchip,phy_lpddr3_odt = <240>;
		rockchip,lpddr4_odt_dis_freq = <333>;
		rockchip,lpddr4_drv = <60>;
		rockchip,lpddr4_dq_odt = <40>;
		rockchip,lpddr4_ca_odt = <40>;
		rockchip,phy_lpddr4_ca_drv = <40>;
		rockchip,phy_lpddr4_ck_cs_drv = <80>;
		rockchip,phy_lpddr4_dq_drv = <80>;
		rockchip,phy_lpddr4_odt = <60>;
	};
@@ -0,0 +1,384 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/memory-controllers/rockchip,rk3399-dmc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Rockchip rk3399 DMC (Dynamic Memory Controller) device

maintainers:
  - Brian Norris <briannorris@chromium.org>

properties:
  compatible:
    enum:
      - rockchip,rk3399-dmc

  devfreq-events:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      Node to get DDR loading. Refer to
      Documentation/devicetree/bindings/devfreq/event/rockchip-dfi.txt.

  clocks:
    maxItems: 1

  clock-names:
    items:
      - const: dmc_clk

  operating-points-v2: true

  center-supply:
    description:
      DMC regulator supply.

  rockchip,pmu:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      Phandle to the syscon managing the "PMU general register files".

  interrupts:
    maxItems: 1
    description:
      The CPU interrupt number. It should be a DCF interrupt. When DDR DVFS
      finishes, a DCF interrupt is triggered.

  rockchip,ddr3_speed_bin:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      For values, reference include/dt-bindings/clock/rk3399-ddr.h. Selects
      the DDR3 cl-trp-trcd type. It must be set according to "Speed Bin" in
      the DDR3 datasheet; DO NOT use a smaller "Speed Bin" than specified
      for the DDR3 being used.

  rockchip,pd_idle:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Configure the PD_IDLE value. Defines the power-down idle period in
      which memories are placed into power-down mode if the bus is idle for
      PD_IDLE DFI clock cycles.
      See also rockchip,pd-idle-ns.

  rockchip,sr_idle:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Configure the SR_IDLE value. Defines the self-refresh idle period in
      which memories are placed into self-refresh mode if the bus is idle
      for SR_IDLE * 1024 DFI clock cycles (the DFI clock frequency is half
      the DRAM clock).
      See also rockchip,sr-idle-ns.
    default: 0

  rockchip,sr_mc_gate_idle:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Defines the memory self-refresh and controller clock gating idle
      period. Memories are placed into self-refresh mode and memory
      controller clock gating is started if the bus is idle for
      sr_mc_gate_idle * 1024 DFI clock cycles.
      See also rockchip,sr-mc-gate-idle-ns.

  rockchip,srpd_lite_idle:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Defines the self-refresh power-down idle period in which memories are
      placed into self-refresh power-down mode if the bus is idle for
      srpd_lite_idle * 1024 DFI clock cycles. This parameter is for LPDDR4
      only.
      See also rockchip,srpd-lite-idle-ns.

  rockchip,standby_idle:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Defines the standby idle period in which memories are placed into
      self-refresh mode. The controller, pi, PHY and DRAM clock will be
      gated if the bus is idle for standby_idle * DFI clock cycles.
      See also rockchip,standby-idle-ns.

  rockchip,dram_dll_dis_freq:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description: |
      Defines the DDR3 DLL bypass frequency in MHz. When the DDR frequency
      is less than DRAM_DLL_DISB_FREQ, the DDR3 DLL will be bypassed.
      Note: if the DLL is bypassed, ODT also stops working.

  rockchip,phy_dll_dis_freq:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description: |
      Defines the PHY DLL bypass frequency in MHz. When the DDR frequency
      is less than DRAM_DLL_DISB_FREQ, the PHY DLL will be bypassed.
      Note: PHY DLL and PHY ODT are independent.

  rockchip,auto_pd_dis_freq:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      Defines the auto PD disable frequency in MHz.

  rockchip,ddr3_odt_dis_freq:
    $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1000000  # In case anyone thought this was MHz.
    description:
      When the DRAM type is DDR3, this parameter defines the ODT disable
      frequency in Hz. When the DDR frequency is less than
      ddr3_odt_dis_freq, the ODT on the DRAM side and controller side are
      both disabled.

  rockchip,ddr3_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is DDR3, this parameter defines the DRAM side
      drive strength in ohms.
    default: 40

  rockchip,ddr3_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is DDR3, this parameter defines the DRAM side ODT
      strength in ohms.
    default: 120

  rockchip,phy_ddr3_ca_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is DDR3, this parameter defines the PHY side CA
      line (including command line, address line and clock line) drive
      strength.
    default: 40

  rockchip,phy_ddr3_dq_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is DDR3, this parameter defines the PHY side DQ
      line (including DQS/DQ/DM line) drive strength.
    default: 40

  rockchip,phy_ddr3_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is DDR3, this parameter defines the PHY side ODT
      strength.
    default: 240

  rockchip,lpddr3_odt_dis_freq:
    $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1000000  # In case anyone thought this was MHz.
    description:
      When the DRAM type is LPDDR3, this parameter defines the ODT disable
      frequency in Hz. When the DDR frequency is less than
      lpddr3_odt_dis_freq, the ODT on the DRAM side and controller side are
      both disabled.

  rockchip,lpddr3_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR3, this parameter defines the DRAM side
      drive strength in ohms.
    default: 34

  rockchip,lpddr3_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR3, this parameter defines the DRAM side
      ODT strength in ohms.
    default: 240

  rockchip,phy_lpddr3_ca_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR3, this parameter defines the PHY side CA
      line (including command line, address line and clock line) drive
      strength.
    default: 40

  rockchip,phy_lpddr3_dq_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR3, this parameter defines the PHY side DQ
      line (including DQS/DQ/DM line) drive strength.
    default: 40

  rockchip,phy_lpddr3_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR3, this parameter defines the PHY side ODT
      strength.
    default: 240

  rockchip,lpddr4_odt_dis_freq:
    $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1000000  # In case anyone thought this was MHz.
    description:
      When the DRAM type is LPDDR4, this parameter defines the ODT disable
      frequency in Hz. When the DDR frequency is less than
      lpddr4_odt_dis_freq, the ODT on the DRAM side and controller side are
      both disabled.

  rockchip,lpddr4_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the DRAM side
      drive strength in ohms.
    default: 60

  rockchip,lpddr4_dq_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the DRAM side
      ODT on DQS/DQ line strength in ohms.
    default: 40

  rockchip,lpddr4_ca_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the DRAM side
      ODT on CA line strength in ohms.
    default: 40

  rockchip,phy_lpddr4_ca_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the PHY side CA
      line (including command address line) drive strength.
    default: 40

  rockchip,phy_lpddr4_ck_cs_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the PHY side
      clock line and CS line drive strength.
    default: 80

  rockchip,phy_lpddr4_dq_drv:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the PHY side DQ
      line (including DQS/DQ/DM line) drive strength.
    default: 80

  rockchip,phy_lpddr4_odt:
    deprecated: true
    $ref: /schemas/types.yaml#/definitions/uint32
    description:
      When the DRAM type is LPDDR4, this parameter defines the PHY side ODT
      strength.
    default: 60

  rockchip,pd-idle-ns:
    description:
      Configure the PD_IDLE value in nanoseconds. Defines the power-down
      idle period in which memories are placed into power-down mode if the
      bus is idle for PD_IDLE nanoseconds.

  rockchip,sr-idle-ns:
    description:
      Configure the SR_IDLE value in nanoseconds. Defines the self-refresh
      idle period in which memories are placed into self-refresh mode if
      the bus is idle for SR_IDLE nanoseconds.
    default: 0

  rockchip,sr-mc-gate-idle-ns:
    description:
      Defines the memory self-refresh and controller clock gating idle
      period in nanoseconds. Memories are placed into self-refresh mode and
      memory controller clock gating is started if the bus is idle for
      sr_mc_gate_idle nanoseconds.

  rockchip,srpd-lite-idle-ns:
    description:
      Defines the self-refresh power-down idle period in which memories are
      placed into self-refresh power-down mode if the bus is idle for
      srpd_lite_idle nanoseconds. This parameter is for LPDDR4 only.

  rockchip,standby-idle-ns:
    description:
      Defines the standby idle period in which memories are placed into
      self-refresh mode. The controller, pi, PHY and DRAM clock will be
      gated if the bus is idle for standby_idle nanoseconds.

  rockchip,pd-idle-dis-freq-hz:
    description:
      Defines the power-down idle disable frequency in Hz. When the DDR
      frequency is greater than pd-idle-dis-freq, power-down idle is
      disabled. See also rockchip,pd-idle-ns.

  rockchip,sr-idle-dis-freq-hz:
    description:
      Defines the self-refresh idle disable frequency in Hz. When the DDR
      frequency is greater than sr-idle-dis-freq, self-refresh idle is
      disabled. See also rockchip,sr-idle-ns.

  rockchip,sr-mc-gate-idle-dis-freq-hz:
    description:
      Defines the self-refresh and memory-controller clock gating disable
      frequency in Hz. When the DDR frequency is greater than
      sr-mc-gate-idle-dis-freq, the clock will not be gated when idle. See
      also rockchip,sr-mc-gate-idle-ns.

  rockchip,srpd-lite-idle-dis-freq-hz:
    description:
      Defines the self-refresh power-down idle disable frequency in Hz.
      When the DDR frequency is greater than srpd-lite-idle-dis-freq,
      memory will not be placed into self-refresh power-down mode when
      idle. See also rockchip,srpd-lite-idle-ns.

  rockchip,standby-idle-dis-freq-hz:
    description:
      Defines the standby idle disable frequency in Hz. When the DDR
      frequency is greater than standby-idle-dis-freq, standby idle is
      disabled. See also rockchip,standby-idle-ns.

required:
  - compatible
  - devfreq-events
  - clocks
  - clock-names
  - operating-points-v2
  - center-supply

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/rk3399-cru.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>

    memory-controller {
        compatible = "rockchip,rk3399-dmc";
        devfreq-events = <&dfi>;
        rockchip,pmu = <&pmu>;
        interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>;
        clocks = <&cru SCLK_DDRC>;
        clock-names = "dmc_clk";
        operating-points-v2 = <&dmc_opp_table>;
        center-supply = <&ppvar_centerlogic>;
        rockchip,pd-idle-ns = <160>;
        rockchip,sr-idle-ns = <10240>;
        rockchip,sr-mc-gate-idle-ns = <40960>;
        rockchip,srpd-lite-idle-ns = <61440>;
        rockchip,standby-idle-ns = <81920>;
        rockchip,ddr3_odt_dis_freq = <333000000>;
        rockchip,lpddr3_odt_dis_freq = <333000000>;
        rockchip,lpddr4_odt_dis_freq = <333000000>;
        rockchip,pd-idle-dis-freq-hz = <1000000000>;
        rockchip,sr-idle-dis-freq-hz = <1000000000>;
        rockchip,sr-mc-gate-idle-dis-freq-hz = <1000000000>;
        rockchip,srpd-lite-idle-dis-freq-hz = <0>;
        rockchip,standby-idle-dis-freq-hz = <928000000>;
    };
@@ -123,6 +123,26 @@ allows a platform to register EM power values which are reflecting total power
(static + dynamic). These power values might be coming directly from
experiments and measurements.

Registration of 'artificial' EM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is an option to provide a custom callback for drivers missing detailed
knowledge about the power value for each performance state. The callback
.get_cost() is optional and provides the 'cost' values used by the EAS.
This is useful for platforms that only provide information on relative
efficiency between CPU types, where one could use the information to
create an abstract power model. But even an abstract power model can
sometimes be hard to fit in, given the input power value size restrictions.
The .get_cost() callback allows the driver to provide 'cost' values which
reflect the efficiency of the CPUs. This makes it possible to provide EAS
information with a different relation than what would be forced by the EM
internal formulas calculating 'cost' values. To register an EM for such a
platform, the driver must set the flag 'milliwatts' to 0 and provide both a
.get_power() callback and a .get_cost() callback. The EM framework handles
such a platform properly during registration, and the flag
EM_PERF_DOMAIN_ARTIFICIAL is set for it. Other frameworks using the EM must
take special care to test and treat this flag properly.

Registration of 'simple' EM
~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -181,8 +201,8 @@ EM framework::

  -> drivers/cpufreq/foo_cpufreq.c

- 01	static int est_power(unsigned long *mW, unsigned long *KHz,
- 02			struct device *dev)
+ 01	static int est_power(struct device *dev, unsigned long *mW,
+ 02			unsigned long *KHz)
  03	{
  04		long freq, power;
  05

@@ -512,6 +512,7 @@ struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
 {
 	return &cpu_madt_gicc[cpu];
 }
+EXPORT_SYMBOL_GPL(acpi_cpu_get_madt_gicc);

 /*
  * acpi_map_gic_cpu_interface - parse processor MADT entry
@@ -319,6 +319,7 @@

 /* Run Time Average Power Limiting (RAPL) Interface */

+#define MSR_VR_CURRENT_CONFIG		0x00000601
 #define MSR_RAPL_POWER_UNIT		0x00000606

 #define MSR_PKG_POWER_LIMIT		0x00000610
@@ -1862,7 +1862,7 @@ int __acpi_release_global_lock(unsigned int *lock)

 void __init arch_reserve_mem_area(acpi_physical_address addr, size_t size)
 {
-	e820__range_add(addr, size, E820_TYPE_ACPI);
+	e820__range_add(addr, size, E820_TYPE_NVS);
 	e820__update_table_print();
 }

@@ -278,6 +278,20 @@ bool osc_sb_apei_support_acked;
bool osc_pc_lpi_support_confirmed;
EXPORT_SYMBOL_GPL(osc_pc_lpi_support_confirmed);

/*
 * ACPI 6.2 Section 6.2.11.2 'Platform-Wide OSPM Capabilities':
 * Starting with ACPI Specification 6.2, all _CPC registers can be in
 * PCC, System Memory, System IO, or Functional Fixed Hardware address
 * spaces. OSPM support for this more flexible register space scheme is
 * indicated by the "Flexible Address Space for CPPC Registers" _OSC bit.
 *
 * Otherwise (cf ACPI 6.1, s8.4.7.1.1.X), _CPC registers must be in:
 * - PCC or Functional Fixed Hardware address space if defined
 * - SystemMemory address space (NULL register) if not defined
 */
bool osc_cpc_flexible_adr_space_confirmed;
EXPORT_SYMBOL_GPL(osc_cpc_flexible_adr_space_confirmed);

/*
 * ACPI 6.4 Operating System Capabilities for USB.
 */
@@ -315,12 +329,15 @@ static void acpi_bus_osc_negotiate_platform_control(void)
#endif
#ifdef CONFIG_X86
	capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_GENERIC_INITIATOR_SUPPORT;
	if (boot_cpu_has(X86_FEATURE_HWP)) {
		capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPC_SUPPORT;
		capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPCV2_SUPPORT;
	}
#endif

#ifdef CONFIG_ACPI_CPPC_LIB
	capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPC_SUPPORT;
	capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPCV2_SUPPORT;
#endif

	capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPC_FLEXIBLE_ADR_SPACE;

	if (IS_ENABLED(CONFIG_SCHED_MC_PRIO))
		capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_CPC_DIVERSE_HIGH_SUPPORT;

@@ -341,10 +358,9 @@ static void acpi_bus_osc_negotiate_platform_control(void)
		return;
	}

#ifdef CONFIG_X86
	if (boot_cpu_has(X86_FEATURE_HWP))
		osc_sb_cppc_not_supported = !(capbuf_ret[OSC_SUPPORT_DWORD] &
				(OSC_SB_CPC_SUPPORT | OSC_SB_CPCV2_SUPPORT));
#ifdef CONFIG_ACPI_CPPC_LIB
	osc_sb_cppc_not_supported = !(capbuf_ret[OSC_SUPPORT_DWORD] &
			(OSC_SB_CPC_SUPPORT | OSC_SB_CPCV2_SUPPORT));
#endif

	/*
@@ -366,6 +382,8 @@ static void acpi_bus_osc_negotiate_platform_control(void)
			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
	osc_sb_native_usb4_support_confirmed =
			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
	osc_cpc_flexible_adr_space_confirmed =
			capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_CPC_FLEXIBLE_ADR_SPACE;
}

	kfree(context.ret.pointer);
@@ -100,6 +100,16 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
				(cpc)->cpc_entry.reg.space_id ==	\
				ACPI_ADR_SPACE_PLATFORM_COMM)

/* Check if a CPC register is in SystemMemory */
#define CPC_IN_SYSTEM_MEMORY(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&	\
				(cpc)->cpc_entry.reg.space_id ==	\
				ACPI_ADR_SPACE_SYSTEM_MEMORY)

/* Check if a CPC register is in SystemIo */
#define CPC_IN_SYSTEM_IO(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&	\
				(cpc)->cpc_entry.reg.space_id ==	\
				ACPI_ADR_SPACE_SYSTEM_IO)

/* Evaluates to True if reg is a NULL register descriptor */
#define IS_NULL_REG(reg) ((reg)->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY && \
				(reg)->address == 0 &&			\
@@ -424,6 +434,24 @@ bool acpi_cpc_valid(void)
}
EXPORT_SYMBOL_GPL(acpi_cpc_valid);

bool cppc_allow_fast_switch(void)
{
	struct cpc_register_resource *desired_reg;
	struct cpc_desc *cpc_ptr;
	int cpu;

	for_each_possible_cpu(cpu) {
		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
		    !CPC_IN_SYSTEM_IO(desired_reg))
			return false;
	}

	return true;
}
EXPORT_SYMBOL_GPL(cppc_allow_fast_switch);

/**
 * acpi_get_psd_map - Map the CPUs in the freq domain of a given cpu
 * @cpu: Find all CPUs that share a domain with cpu.
@@ -736,6 +764,11 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
		if (gas_t->address) {
			void __iomem *addr;

			if (!osc_cpc_flexible_adr_space_confirmed) {
				pr_debug("Flexible address space capability not supported\n");
				goto out_free;
			}

			addr = ioremap(gas_t->address, gas_t->bit_width/8);
			if (!addr)
				goto out_free;
@@ -758,6 +791,10 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
					gas_t->address);
				goto out_free;
			}
			if (!osc_cpc_flexible_adr_space_confirmed) {
				pr_debug("Flexible address space capability not supported\n");
				goto out_free;
			}
		} else {
			if (gas_t->space_id != ACPI_ADR_SPACE_FIXED_HARDWARE || !cpc_ffh_supported()) {
				/* Support only PCC, SystemMemory, SystemIO, and FFH type regs. */
@@ -1447,6 +1484,9 @@ EXPORT_SYMBOL_GPL(cppc_set_perf);
 * transition latency for performance change requests. The closest we have
 * is the timing information from the PCCT tables which provides the info
 * on the number and frequency of PCC commands the platform can handle.
 *
 * If desired_reg is in the SystemMemory or SystemIo ACPI address space,
 * then assume there is no latency.
 */
unsigned int cppc_get_transition_latency(int cpu_num)
{
@@ -1472,7 +1512,9 @@ unsigned int cppc_get_transition_latency(int cpu_num)
		return CPUFREQ_ETERNAL;

	desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
	if (!CPC_IN_PCC(desired_reg))
	if (CPC_IN_SYSTEM_MEMORY(desired_reg) || CPC_IN_SYSTEM_IO(desired_reg))
		return 0;
	else if (!CPC_IN_PCC(desired_reg))
		return CPUFREQ_ETERNAL;

	if (pcc_ss_id < 0)
@@ -172,10 +172,10 @@ EXPORT_SYMBOL_GPL(dev_pm_domain_attach_by_name);
 * @dev: Device to detach.
 * @power_off: Used to indicate whether we should power off the device.
 *
 * This function will reverse the actions from dev_pm_domain_attach() and
 * dev_pm_domain_attach_by_id(), thus it detaches @dev from its PM domain.
 * Typically it should be invoked during the remove phase, either from
 * subsystem level code or from drivers.
 * This function will reverse the actions from dev_pm_domain_attach(),
 * dev_pm_domain_attach_by_id() and dev_pm_domain_attach_by_name(), thus it
 * detaches @dev from its PM domain. Typically it should be invoked during the
 * remove phase, either from subsystem level code or from drivers.
 *
 * Callers must ensure proper synchronization of this function with power
 * management callbacks.

File diff suppressed because it is too large
@@ -18,6 +18,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
	s64 constraint_ns;

	if (dev->power.subsys_data && dev->power.subsys_data->domain_data) {
		struct gpd_timing_data *td = dev_gpd_data(dev)->td;

		/*
		 * Only take suspend-time QoS constraints of devices into
		 * account, because constraints updated after the device has
@@ -25,7 +27,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
		 * anyway. In order for them to take effect, the device has to
		 * be resumed and suspended again.
		 */
		constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns;
		constraint_ns = td ? td->effective_constraint_ns :
				PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
	} else {
		/*
		 * The child is not in a domain and there's no info on its
@@ -49,7 +52,7 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
 */
static bool default_suspend_ok(struct device *dev)
{
	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
	struct gpd_timing_data *td = dev_gpd_data(dev)->td;
	unsigned long flags;
	s64 constraint_ns;

@@ -136,26 +139,28 @@ static void update_domain_next_wakeup(struct generic_pm_domain *genpd, ktime_t n
	 * is able to enter its optimal idle state.
	 */
	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
		next_wakeup = to_gpd_data(pdd)->next_wakeup;
		next_wakeup = to_gpd_data(pdd)->td->next_wakeup;
		if (next_wakeup != KTIME_MAX && !ktime_before(next_wakeup, now))
			if (ktime_before(next_wakeup, domain_wakeup))
				domain_wakeup = next_wakeup;
	}

	list_for_each_entry(link, &genpd->parent_links, parent_node) {
		next_wakeup = link->child->next_wakeup;
		struct genpd_governor_data *cgd = link->child->gd;

		next_wakeup = cgd ? cgd->next_wakeup : KTIME_MAX;
		if (next_wakeup != KTIME_MAX && !ktime_before(next_wakeup, now))
			if (ktime_before(next_wakeup, domain_wakeup))
				domain_wakeup = next_wakeup;
	}

	genpd->next_wakeup = domain_wakeup;
	genpd->gd->next_wakeup = domain_wakeup;
}

static bool next_wakeup_allows_state(struct generic_pm_domain *genpd,
				     unsigned int state, ktime_t now)
{
	ktime_t domain_wakeup = genpd->next_wakeup;
	ktime_t domain_wakeup = genpd->gd->next_wakeup;
	s64 idle_time_ns, min_sleep_ns;

	min_sleep_ns = genpd->states[state].power_off_latency_ns +
@@ -185,8 +190,9 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
	 * All subdomains have been powered off already at this point.
	 */
	list_for_each_entry(link, &genpd->parent_links, parent_node) {
		struct generic_pm_domain *sd = link->child;
		s64 sd_max_off_ns = sd->max_off_time_ns;
		struct genpd_governor_data *cgd = link->child->gd;

		s64 sd_max_off_ns = cgd ? cgd->max_off_time_ns : -1;

		if (sd_max_off_ns < 0)
			continue;
@@ -215,7 +221,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
		 * domain to turn off and on (that's how much time it will
		 * have to wait worst case).
		 */
		td = &to_gpd_data(pdd)->td;
		td = to_gpd_data(pdd)->td;
		constraint_ns = td->effective_constraint_ns;
		/*
		 * Zero means "no suspend at all" and this runs only when all
@@ -244,7 +250,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
	 * time and the time needed to turn the domain on is the maximum
	 * theoretical time this domain can spend in the "off" state.
	 */
	genpd->max_off_time_ns = min_off_time_ns -
	genpd->gd->max_off_time_ns = min_off_time_ns -
		genpd->states[state].power_on_latency_ns;
	return true;
}
@@ -259,6 +265,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)
{
	struct generic_pm_domain *genpd = pd_to_genpd(pd);
	struct genpd_governor_data *gd = genpd->gd;
	int state_idx = genpd->state_count - 1;
	struct gpd_link *link;

@@ -269,11 +276,11 @@ static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)
	 * cannot be met.
	 */
	update_domain_next_wakeup(genpd, now);
	if ((genpd->flags & GENPD_FLAG_MIN_RESIDENCY) && (genpd->next_wakeup != KTIME_MAX)) {
	if ((genpd->flags & GENPD_FLAG_MIN_RESIDENCY) && (gd->next_wakeup != KTIME_MAX)) {
		/* Let's find out the deepest domain idle state, the devices prefer */
		while (state_idx >= 0) {
			if (next_wakeup_allows_state(genpd, state_idx, now)) {
				genpd->max_off_time_changed = true;
				gd->max_off_time_changed = true;
				break;
			}
			state_idx--;
@@ -281,14 +288,14 @@ static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)

		if (state_idx < 0) {
			state_idx = 0;
			genpd->cached_power_down_ok = false;
			gd->cached_power_down_ok = false;
			goto done;
		}
	}

	if (!genpd->max_off_time_changed) {
		genpd->state_idx = genpd->cached_power_down_state_idx;
		return genpd->cached_power_down_ok;
	if (!gd->max_off_time_changed) {
		genpd->state_idx = gd->cached_power_down_state_idx;
		return gd->cached_power_down_ok;
	}

	/*
@@ -297,12 +304,16 @@ static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)
	 * going to be called for any parent until this instance
	 * returns.
	 */
	list_for_each_entry(link, &genpd->child_links, child_node)
		link->parent->max_off_time_changed = true;
	list_for_each_entry(link, &genpd->child_links, child_node) {
		struct genpd_governor_data *pgd = link->parent->gd;

	genpd->max_off_time_ns = -1;
	genpd->max_off_time_changed = false;
	genpd->cached_power_down_ok = true;
		if (pgd)
			pgd->max_off_time_changed = true;
	}

	gd->max_off_time_ns = -1;
	gd->max_off_time_changed = false;
	gd->cached_power_down_ok = true;

	/*
	 * Find a state to power down to, starting from the state
@@ -310,7 +321,7 @@ static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)
	 */
	while (!__default_power_down_ok(pd, state_idx)) {
		if (state_idx == 0) {
			genpd->cached_power_down_ok = false;
			gd->cached_power_down_ok = false;
			break;
		}
		state_idx--;
@@ -318,8 +329,8 @@ static bool _default_power_down_ok(struct dev_pm_domain *pd, ktime_t now)

done:
	genpd->state_idx = state_idx;
	genpd->cached_power_down_state_idx = genpd->state_idx;
	return genpd->cached_power_down_ok;
	gd->cached_power_down_state_idx = genpd->state_idx;
	return gd->cached_power_down_ok;
}

static bool default_power_down_ok(struct dev_pm_domain *pd)
@@ -327,11 +338,6 @@ static bool default_power_down_ok(struct dev_pm_domain *pd)
	return _default_power_down_ok(pd, ktime_get());
}

static bool always_on_power_down_ok(struct dev_pm_domain *domain)
{
	return false;
}

#ifdef CONFIG_CPU_IDLE
static bool cpu_power_down_ok(struct dev_pm_domain *pd)
{
@@ -401,6 +407,5 @@ struct dev_power_governor simple_qos_governor = {
 * pm_genpd_gov_always_on - A governor implementing an always-on policy
 */
struct dev_power_governor pm_domain_always_on_gov = {
	.power_down_ok = always_on_power_down_ok,
	.suspend_ok = default_suspend_ok,
};
@@ -263,7 +263,7 @@ static int rpm_check_suspend_allowed(struct device *dev)
		retval = -EINVAL;
	else if (dev->power.disable_depth > 0)
		retval = -EACCES;
	else if (atomic_read(&dev->power.usage_count) > 0)
	else if (atomic_read(&dev->power.usage_count))
		retval = -EAGAIN;
	else if (!dev->power.ignore_children &&
			atomic_read(&dev->power.child_count))
@@ -1039,13 +1039,33 @@ int pm_schedule_suspend(struct device *dev, unsigned int delay)
}
EXPORT_SYMBOL_GPL(pm_schedule_suspend);

static int rpm_drop_usage_count(struct device *dev)
{
	int ret;

	ret = atomic_sub_return(1, &dev->power.usage_count);
	if (ret >= 0)
		return ret;

	/*
	 * Because rpm_resume() does not check the usage counter, it will resume
	 * the device even if the usage counter is 0 or negative, so it is
	 * sufficient to increment the usage counter here to reverse the change
	 * made above.
	 */
	atomic_inc(&dev->power.usage_count);
	dev_warn(dev, "Runtime PM usage count underflow!\n");
	return -EINVAL;
}

/**
 * __pm_runtime_idle - Entry point for runtime idle operations.
 * @dev: Device to send idle notification for.
 * @rpmflags: Flag bits.
 *
 * If the RPM_GET_PUT flag is set, decrement the device's usage count and
 * return immediately if it is larger than zero. Then carry out an idle
 * return immediately if it is larger than zero (if it becomes negative, log a
 * warning, increment it, and return an error). Then carry out an idle
 * notification, either synchronous or asynchronous.
 *
 * This routine may be called in atomic context if the RPM_ASYNC flag is set,
@@ -1057,7 +1077,10 @@ int __pm_runtime_idle(struct device *dev, int rpmflags)
	int retval;

	if (rpmflags & RPM_GET_PUT) {
		if (!atomic_dec_and_test(&dev->power.usage_count)) {
		retval = rpm_drop_usage_count(dev);
		if (retval < 0) {
			return retval;
		} else if (retval > 0) {
			trace_rpm_usage_rcuidle(dev, rpmflags);
			return 0;
		}
@@ -1079,7 +1102,8 @@ EXPORT_SYMBOL_GPL(__pm_runtime_idle);
 * @rpmflags: Flag bits.
 *
 * If the RPM_GET_PUT flag is set, decrement the device's usage count and
 * return immediately if it is larger than zero. Then carry out a suspend,
 * return immediately if it is larger than zero (if it becomes negative, log a
 * warning, increment it, and return an error). Then carry out a suspend,
 * either synchronous or asynchronous.
 *
 * This routine may be called in atomic context if the RPM_ASYNC flag is set,
@@ -1091,7 +1115,10 @@ int __pm_runtime_suspend(struct device *dev, int rpmflags)
	int retval;

	if (rpmflags & RPM_GET_PUT) {
		if (!atomic_dec_and_test(&dev->power.usage_count)) {
		retval = rpm_drop_usage_count(dev);
		if (retval < 0) {
			return retval;
		} else if (retval > 0) {
			trace_rpm_usage_rcuidle(dev, rpmflags);
			return 0;
		}
@@ -1210,12 +1237,13 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
{
	struct device *parent = dev->parent;
	bool notify_parent = false;
	unsigned long flags;
	int error = 0;

	if (status != RPM_ACTIVE && status != RPM_SUSPENDED)
		return -EINVAL;

	spin_lock_irq(&dev->power.lock);
	spin_lock_irqsave(&dev->power.lock, flags);

	/*
	 * Prevent PM-runtime from being enabled for the device or return an
@@ -1226,7 +1254,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
	else
		error = -EAGAIN;

	spin_unlock_irq(&dev->power.lock);
	spin_unlock_irqrestore(&dev->power.lock, flags);

	if (error)
		return error;
@@ -1247,7 +1275,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
		device_links_read_unlock(idx);
	}

	spin_lock_irq(&dev->power.lock);
	spin_lock_irqsave(&dev->power.lock, flags);

	if (dev->power.runtime_status == status || !parent)
		goto out_set;
@@ -1288,7 +1316,7 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
	dev->power.runtime_error = 0;

 out:
	spin_unlock_irq(&dev->power.lock);
	spin_unlock_irqrestore(&dev->power.lock, flags);

	if (notify_parent)
		pm_request_idle(parent);
@@ -1527,14 +1555,17 @@ EXPORT_SYMBOL_GPL(pm_runtime_forbid);
 */
void pm_runtime_allow(struct device *dev)
{
	int ret;

	spin_lock_irq(&dev->power.lock);
	if (dev->power.runtime_auto)
		goto out;

	dev->power.runtime_auto = true;
	if (atomic_dec_and_test(&dev->power.usage_count))
	ret = rpm_drop_usage_count(dev);
	if (ret == 0)
		rpm_idle(dev, RPM_AUTO | RPM_ASYNC);
	else
	else if (ret > 0)
		trace_rpm_usage_rcuidle(dev, RPM_AUTO | RPM_ASYNC);

 out:
@@ -389,6 +389,27 @@ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
	return ret;
}

static unsigned int cppc_cpufreq_fast_switch(struct cpufreq_policy *policy,
					     unsigned int target_freq)
{
	struct cppc_cpudata *cpu_data = policy->driver_data;
	unsigned int cpu = policy->cpu;
	u32 desired_perf;
	int ret;

	desired_perf = cppc_cpufreq_khz_to_perf(cpu_data, target_freq);
	cpu_data->perf_ctrls.desired_perf = desired_perf;
	ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);

	if (ret) {
		pr_debug("Failed to set target on CPU:%d. ret:%d\n",
			 cpu, ret);
		return 0;
	}

	return target_freq;
}

static int cppc_verify_policy(struct cpufreq_policy_data *policy)
{
	cpufreq_verify_within_cpu_limits(policy);
@@ -420,12 +441,197 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
	return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
}

static DEFINE_PER_CPU(unsigned int, efficiency_class);
static void cppc_cpufreq_register_em(struct cpufreq_policy *policy);

/* Create an artificial performance state every CPPC_EM_CAP_STEP capacity unit. */
#define CPPC_EM_CAP_STEP	(20)
/* Increase the cost value by CPPC_EM_COST_STEP every performance state. */
#define CPPC_EM_COST_STEP	(1)
/* Add a cost gap corresponding to the energy of 4 CPUs. */
#define CPPC_EM_COST_GAP	(4 * SCHED_CAPACITY_SCALE * CPPC_EM_COST_STEP \
				/ CPPC_EM_CAP_STEP)
static unsigned int get_perf_level_count(struct cpufreq_policy *policy)
{
	struct cppc_perf_caps *perf_caps;
	unsigned int min_cap, max_cap;
	struct cppc_cpudata *cpu_data;
	int cpu = policy->cpu;

	cpu_data = policy->driver_data;
	perf_caps = &cpu_data->perf_caps;
	max_cap = arch_scale_cpu_capacity(cpu);
	min_cap = div_u64(max_cap * perf_caps->lowest_perf, perf_caps->highest_perf);
	if ((min_cap == 0) || (max_cap < min_cap))
		return 0;
	return 1 + max_cap / CPPC_EM_CAP_STEP - min_cap / CPPC_EM_CAP_STEP;
}

/*
 * The cost is defined as:
 *   cost = power * max_frequency / frequency
 */
static inline unsigned long compute_cost(int cpu, int step)
{
	return CPPC_EM_COST_GAP * per_cpu(efficiency_class, cpu) +
			step * CPPC_EM_COST_STEP;
}
static int cppc_get_cpu_power(struct device *cpu_dev,
		unsigned long *power, unsigned long *KHz)
{
	unsigned long perf_step, perf_prev, perf, perf_check;
	unsigned int min_step, max_step, step, step_check;
	unsigned long prev_freq = *KHz;
	unsigned int min_cap, max_cap;
	struct cpufreq_policy *policy;

	struct cppc_perf_caps *perf_caps;
	struct cppc_cpudata *cpu_data;

	policy = cpufreq_cpu_get_raw(cpu_dev->id);
	cpu_data = policy->driver_data;
	perf_caps = &cpu_data->perf_caps;
	max_cap = arch_scale_cpu_capacity(cpu_dev->id);
	min_cap = div_u64(max_cap * perf_caps->lowest_perf,
			perf_caps->highest_perf);

	perf_step = CPPC_EM_CAP_STEP * perf_caps->highest_perf / max_cap;
	min_step = min_cap / CPPC_EM_CAP_STEP;
	max_step = max_cap / CPPC_EM_CAP_STEP;

	perf_prev = cppc_cpufreq_khz_to_perf(cpu_data, *KHz);
	step = perf_prev / perf_step;

	if (step > max_step)
		return -EINVAL;

	if (min_step == max_step) {
		step = max_step;
		perf = perf_caps->highest_perf;
	} else if (step < min_step) {
		step = min_step;
		perf = perf_caps->lowest_perf;
	} else {
		step++;
		if (step == max_step)
			perf = perf_caps->highest_perf;
		else
			perf = step * perf_step;
	}

	*KHz = cppc_cpufreq_perf_to_khz(cpu_data, perf);
	perf_check = cppc_cpufreq_khz_to_perf(cpu_data, *KHz);
	step_check = perf_check / perf_step;

	/*
	 * To avoid bad integer approximation, check that new frequency value
	 * increased and that the new frequency will be converted to the
	 * desired step value.
	 */
	while ((*KHz == prev_freq) || (step_check != step)) {
		perf++;
		*KHz = cppc_cpufreq_perf_to_khz(cpu_data, perf);
		perf_check = cppc_cpufreq_khz_to_perf(cpu_data, *KHz);
		step_check = perf_check / perf_step;
	}

	/*
	 * With an artificial EM, only the cost value is used. Still the power
	 * is populated such as 0 < power < EM_MAX_POWER. This allows to add
	 * more sense to the artificial performance states.
	 */
	*power = compute_cost(cpu_dev->id, step);

	return 0;
}
static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz,
		unsigned long *cost)
{
	unsigned long perf_step, perf_prev;
	struct cppc_perf_caps *perf_caps;
	struct cpufreq_policy *policy;
	struct cppc_cpudata *cpu_data;
	unsigned int max_cap;
	int step;

	policy = cpufreq_cpu_get_raw(cpu_dev->id);
	cpu_data = policy->driver_data;
	perf_caps = &cpu_data->perf_caps;
	max_cap = arch_scale_cpu_capacity(cpu_dev->id);

	perf_prev = cppc_cpufreq_khz_to_perf(cpu_data, KHz);
	perf_step = CPPC_EM_CAP_STEP * perf_caps->highest_perf / max_cap;
	step = perf_prev / perf_step;

	*cost = compute_cost(cpu_dev->id, step);

	return 0;
}
static int populate_efficiency_class(void)
{
	struct acpi_madt_generic_interrupt *gicc;
	DECLARE_BITMAP(used_classes, 256) = {};
	int class, cpu, index;

	for_each_possible_cpu(cpu) {
		gicc = acpi_cpu_get_madt_gicc(cpu);
		class = gicc->efficiency_class;
		bitmap_set(used_classes, class, 1);
	}

	if (bitmap_weight(used_classes, 256) <= 1) {
		pr_debug("Efficiency classes are all equal (=%d). "
			"No EM registered", class);
		return -EINVAL;
	}

	/*
	 * Squeeze efficiency class values on [0:#efficiency_class-1].
	 * Values are per spec in [0:255].
	 */
	index = 0;
	for_each_set_bit(class, used_classes, 256) {
		for_each_possible_cpu(cpu) {
			gicc = acpi_cpu_get_madt_gicc(cpu);
			if (gicc->efficiency_class == class)
				per_cpu(efficiency_class, cpu) = index;
		}
		index++;
	}
	cppc_cpufreq_driver.register_em = cppc_cpufreq_register_em;

	return 0;
}
static void cppc_cpufreq_register_em(struct cpufreq_policy *policy)
{
	struct cppc_cpudata *cpu_data;
	struct em_data_callback em_cb =
		EM_ADV_DATA_CB(cppc_get_cpu_power, cppc_get_cpu_cost);

	cpu_data = policy->driver_data;
	em_dev_register_perf_domain(get_cpu_device(policy->cpu),
			get_perf_level_count(policy), &em_cb,
			cpu_data->shared_cpu_map, 0);
}

#else

static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
	return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
}
static int populate_efficiency_class(void)
{
	return 0;
}
static void cppc_cpufreq_register_em(struct cpufreq_policy *policy)
{
}
#endif

@@ -536,6 +742,9 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
		goto out;
	}

	policy->fast_switch_possible = cppc_allow_fast_switch();
	policy->dvfs_possible_from_any_cpu = true;

	/*
	 * If 'highest_perf' is greater than 'nominal_perf', we assume CPU Boost
	 * is supported.
@@ -681,6 +890,7 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
	.verify = cppc_verify_policy,
	.target = cppc_cpufreq_set_target,
	.get = cppc_cpufreq_get_rate,
	.fast_switch = cppc_cpufreq_fast_switch,
	.init = cppc_cpufreq_cpu_init,
	.exit = cppc_cpufreq_cpu_exit,
	.set_boost = cppc_cpufreq_set_boost,
@@ -742,6 +952,7 @@ static int __init cppc_cpufreq_init(void)

	cppc_check_hisi_workaround();
	cppc_freq_invariance_init();
	populate_efficiency_class();

	ret = cpufreq_register_driver(&cppc_cpufreq_driver);
	if (ret)
@@ -28,6 +28,7 @@
#include <linux/suspend.h>
#include <linux/syscore_ops.h>
#include <linux/tick.h>
#include <linux/units.h>
#include <trace/events/power.h>

static LIST_HEAD(cpufreq_policy_list);
@@ -947,13 +948,14 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
{
	struct cpufreq_policy *policy = to_policy(kobj);
	struct freq_attr *fattr = to_attr(attr);
	ssize_t ret;
	ssize_t ret = -EBUSY;

	if (!fattr->show)
		return -EIO;

	down_read(&policy->rwsem);
	ret = fattr->show(policy, buf);
	if (likely(!policy_is_inactive(policy)))
		ret = fattr->show(policy, buf);
	up_read(&policy->rwsem);

	return ret;
@@ -964,7 +966,7 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
{
	struct cpufreq_policy *policy = to_policy(kobj);
	struct freq_attr *fattr = to_attr(attr);
	ssize_t ret = -EINVAL;
	ssize_t ret = -EBUSY;

	if (!fattr->store)
		return -EIO;
@@ -978,7 +980,8 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,

	if (cpu_online(policy->cpu)) {
		down_write(&policy->rwsem);
		ret = fattr->store(policy, buf, count);
		if (likely(!policy_is_inactive(policy)))
			ret = fattr->store(policy, buf, count);
		up_write(&policy->rwsem);
	}

@@ -1019,11 +1022,12 @@ static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu,
		dev_err(dev, "cpufreq symlink creation failed\n");
}

static void remove_cpu_dev_symlink(struct cpufreq_policy *policy,
static void remove_cpu_dev_symlink(struct cpufreq_policy *policy, int cpu,
				   struct device *dev)
{
	dev_dbg(dev, "%s: Removing symlink\n", __func__);
	sysfs_remove_link(&dev->kobj, "cpufreq");
	cpumask_clear_cpu(cpu, policy->real_cpus);
}

static int cpufreq_add_dev_interface(struct cpufreq_policy *policy)
@@ -1337,12 +1341,12 @@ static int cpufreq_online(unsigned int cpu)
		down_write(&policy->rwsem);
		policy->cpu = cpu;
		policy->governor = NULL;
		up_write(&policy->rwsem);
	} else {
		new_policy = true;
		policy = cpufreq_policy_alloc(cpu);
		if (!policy)
			return -ENOMEM;
		down_write(&policy->rwsem);
	}

	if (!new_policy && cpufreq_driver->online) {
@@ -1382,7 +1386,6 @@ static int cpufreq_online(unsigned int cpu)
			cpumask_copy(policy->related_cpus, policy->cpus);
		}

		down_write(&policy->rwsem);
		/*
		 * affected cpus must always be the one, which are online. We aren't
		 * managing offline cpus here.
@@ -1531,9 +1534,9 @@ static int cpufreq_online(unsigned int cpu)

out_destroy_policy:
	for_each_cpu(j, policy->real_cpus)
		remove_cpu_dev_symlink(policy, get_cpu_device(j));
		remove_cpu_dev_symlink(policy, j, get_cpu_device(j));

	up_write(&policy->rwsem);
	cpumask_clear(policy->cpus);

out_offline_policy:
	if (cpufreq_driver->offline)
@@ -1544,6 +1547,8 @@ out_exit_policy:
		cpufreq_driver->exit(policy);

out_free_policy:
	up_write(&policy->rwsem);

	cpufreq_policy_free(policy);
	return ret;
}
@@ -1575,47 +1580,36 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
	return 0;
}

static int cpufreq_offline(unsigned int cpu)
static void __cpufreq_offline(unsigned int cpu, struct cpufreq_policy *policy)
{
	struct cpufreq_policy *policy;
	int ret;

	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);

	policy = cpufreq_cpu_get_raw(cpu);
	if (!policy) {
		pr_debug("%s: No cpu_data found\n", __func__);
		return 0;
	}

	down_write(&policy->rwsem);
	if (has_target())
		cpufreq_stop_governor(policy);

	cpumask_clear_cpu(cpu, policy->cpus);

	if (policy_is_inactive(policy)) {
		if (has_target())
			strncpy(policy->last_governor, policy->governor->name,
				CPUFREQ_NAME_LEN);
		else
			policy->last_policy = policy->policy;
	} else if (cpu == policy->cpu) {
		/* Nominate new CPU */
		policy->cpu = cpumask_any(policy->cpus);
	}

	/* Start governor again for active policy */
	if (!policy_is_inactive(policy)) {
		/* Nominate a new CPU if necessary. */
		if (cpu == policy->cpu)
			policy->cpu = cpumask_any(policy->cpus);

		/* Start the governor again for the active policy. */
		if (has_target()) {
			ret = cpufreq_start_governor(policy);
			if (ret)
				pr_err("%s: Failed to start governor\n", __func__);
		}

		goto unlock;
		return;
	}

	if (has_target())
		strncpy(policy->last_governor, policy->governor->name,
			CPUFREQ_NAME_LEN);
	else
		policy->last_policy = policy->policy;

	if (cpufreq_thermal_control_enabled(cpufreq_driver)) {
		cpufreq_cooling_unregister(policy->cdev);
		policy->cdev = NULL;
@@ -1634,8 +1628,24 @@ static int cpufreq_offline(unsigned int cpu)
		cpufreq_driver->exit(policy);
		policy->freq_table = NULL;
	}
}

static int cpufreq_offline(unsigned int cpu)
{
	struct cpufreq_policy *policy;

	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);

	policy = cpufreq_cpu_get_raw(cpu);
	if (!policy) {
		pr_debug("%s: No cpu_data found\n", __func__);
		return 0;
	}

	down_write(&policy->rwsem);
|
||||
__cpufreq_offline(cpu, policy);
|
||||
|
||||
unlock:
|
||||
up_write(&policy->rwsem);
|
||||
return 0;
|
||||
}
|
||||
@@ -1653,19 +1663,25 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
|
||||
if (!policy)
|
||||
return;
|
||||
|
||||
down_write(&policy->rwsem);
|
||||
|
||||
if (cpu_online(cpu))
|
||||
cpufreq_offline(cpu);
|
||||
__cpufreq_offline(cpu, policy);
|
||||
|
||||
cpumask_clear_cpu(cpu, policy->real_cpus);
|
||||
remove_cpu_dev_symlink(policy, dev);
|
||||
remove_cpu_dev_symlink(policy, cpu, dev);
|
||||
|
||||
if (cpumask_empty(policy->real_cpus)) {
|
||||
/* We did light-weight exit earlier, do full tear down now */
|
||||
if (cpufreq_driver->offline)
|
||||
cpufreq_driver->exit(policy);
|
||||
|
||||
cpufreq_policy_free(policy);
|
||||
if (!cpumask_empty(policy->real_cpus)) {
|
||||
up_write(&policy->rwsem);
|
||||
return;
|
||||
}
|
||||
|
||||
/* We did light-weight exit earlier, do full tear down now */
|
||||
if (cpufreq_driver->offline)
|
||||
cpufreq_driver->exit(policy);
|
||||
|
||||
up_write(&policy->rwsem);
|
||||
|
||||
cpufreq_policy_free(policy);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -1707,6 +1723,16 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
|
||||
return new_freq;
|
||||
|
||||
if (policy->cur != new_freq) {
|
||||
/*
|
||||
* For some platforms, the frequency returned by hardware may be
|
||||
* slightly different from what is provided in the frequency
|
||||
* table, for example hardware may return 499 MHz instead of 500
|
||||
* MHz. In such cases it is better to avoid getting into
|
||||
* unnecessary frequency updates.
|
||||
*/
|
||||
if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
|
||||
return policy->cur;
|
||||
|
||||
cpufreq_out_of_sync(policy, new_freq);
|
||||
if (update)
|
||||
schedule_work(&policy->update);
|
||||
|
||||
@@ -388,6 +388,15 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
|
||||
gov->free(policy_dbs);
|
||||
}
|
||||
|
||||
static void cpufreq_dbs_data_release(struct kobject *kobj)
|
||||
{
|
||||
struct dbs_data *dbs_data = to_dbs_data(to_gov_attr_set(kobj));
|
||||
struct dbs_governor *gov = dbs_data->gov;
|
||||
|
||||
gov->exit(dbs_data);
|
||||
kfree(dbs_data);
|
||||
}
|
||||
|
||||
int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct dbs_governor *gov = dbs_governor_of(policy);
|
||||
@@ -425,6 +434,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
|
||||
goto free_policy_dbs_info;
|
||||
}
|
||||
|
||||
dbs_data->gov = gov;
|
||||
gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);
|
||||
|
||||
ret = gov->init(dbs_data);
|
||||
@@ -447,6 +457,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
|
||||
policy->governor_data = policy_dbs;
|
||||
|
||||
gov->kobj_type.sysfs_ops = &governor_sysfs_ops;
|
||||
gov->kobj_type.release = cpufreq_dbs_data_release;
|
||||
ret = kobject_init_and_add(&dbs_data->attr_set.kobj, &gov->kobj_type,
|
||||
get_governor_parent_kobj(policy),
|
||||
"%s", gov->gov.name);
|
||||
@@ -488,13 +499,8 @@ void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy)
|
||||
|
||||
policy->governor_data = NULL;
|
||||
|
||||
if (!count) {
|
||||
if (!have_governor_per_policy())
|
||||
gov->gdbs_data = NULL;
|
||||
|
||||
gov->exit(dbs_data);
|
||||
kfree(dbs_data);
|
||||
}
|
||||
if (!count && !have_governor_per_policy())
|
||||
gov->gdbs_data = NULL;
|
||||
|
||||
free_policy_dbs_info(policy_dbs, gov);
|
||||
|
||||
|
||||
@@ -37,6 +37,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
|
||||
/* Governor demand based switching data (per-policy or global). */
|
||||
struct dbs_data {
|
||||
struct gov_attr_set attr_set;
|
||||
struct dbs_governor *gov;
|
||||
void *tuners;
|
||||
unsigned int ignore_nice_load;
|
||||
unsigned int sampling_rate;
|
||||
|
||||
@@ -1322,6 +1322,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
|
||||
mutex_unlock(&intel_pstate_limits_lock);
|
||||
|
||||
intel_pstate_update_policies();
|
||||
arch_set_max_freq_ratio(global.no_turbo);
|
||||
|
||||
mutex_unlock(&intel_pstate_driver_lock);
|
||||
|
||||
@@ -2424,6 +2425,7 @@ static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
|
||||
X86_MATCH(BROADWELL_X, core_funcs),
|
||||
X86_MATCH(SKYLAKE_X, core_funcs),
|
||||
X86_MATCH(ICELAKE_X, core_funcs),
|
||||
X86_MATCH(SAPPHIRERAPIDS_X, core_funcs),
|
||||
{}
|
||||
};
|
||||
|
||||
|
||||
@@ -51,8 +51,8 @@ static const u16 cpufreq_mtk_offsets[REG_ARRAY_SIZE] = {
|
||||
};
|
||||
|
||||
static int __maybe_unused
|
||||
mtk_cpufreq_get_cpu_power(unsigned long *mW,
|
||||
unsigned long *KHz, struct device *cpu_dev)
|
||||
mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *mW,
|
||||
unsigned long *KHz)
|
||||
{
|
||||
struct mtk_cpufreq_data *data;
|
||||
struct cpufreq_policy *policy;
|
||||
|
||||
@@ -18,7 +18,6 @@
|
||||
|
||||
#include <asm/hw_irq.h>
|
||||
#include <asm/io.h>
|
||||
#include <asm/prom.h>
|
||||
#include <asm/time.h>
|
||||
#include <asm/smp.h>
|
||||
|
||||
|
||||
@@ -24,7 +24,7 @@
|
||||
#include <linux/device.h>
|
||||
#include <linux/hardirq.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <asm/prom.h>
|
||||
|
||||
#include <asm/machdep.h>
|
||||
#include <asm/irq.h>
|
||||
#include <asm/pmac_feature.h>
|
||||
|
||||
Some files were not shown because too many files have changed in this diff.