[0/3] PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY

Message ID: cover.1572025364.git.leonard.crestez@nxp.com

Message

Leonard Crestez Oct. 25, 2019, 6 p.m. UTC
Support for frequency limits in dev_pm_qos was removed when cpufreq
switched to freq_qos; this series attempts to restore it by
reimplementing it on top of freq_qos.

Previous discussion here:

https://lore.kernel.org/linux-pm/VI1PR04MB7023DF47D046AEADB4E051EBEE680@VI1PR04MB7023.eurprd04.prod.outlook.com/T/#u

The cpufreq core switched away because it needs constraints at the level
of a "cpufreq_policy", which covers multiple CPUs, so the dev_pm_qos
coupling to struct device was not useful (and was handled incorrectly).
Cpufreq could only use dev_pm_qos by implementing an additional layer of
aggregation from CPU to policy.

In the devfreq subsystem, however, scaling is always performed per
device, so dev_pm_qos is a very good match. Support for dev_pm_qos
inside devfreq is implemented by this series:

	https://patchwork.kernel.org/cover/11171807/
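
For illustration, here is a minimal consumer-side sketch of what this
enables once the frequency constraints are restored; "my_dev", the
200000/400000 kHz values and my_driver_set_floor() are made-up examples,
and error handling is trimmed:

#include <linux/device.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request min_freq_req;

/* Ask whoever scales this device (e.g. devfreq) for a frequency floor. */
static int my_driver_set_floor(struct device *my_dev)
{
	int ret;

	/* Values for DEV_PM_QOS_MIN/MAX_FREQUENCY are expressed in kHz. */
	ret = dev_pm_qos_add_request(my_dev, &min_freq_req,
				     DEV_PM_QOS_MIN_FREQUENCY, 200000);
	if (ret < 0)
		return ret;

	/* Later the floor can be raised, lowered or removed again. */
	return dev_pm_qos_update_request(&min_freq_req, 400000);
}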

Rafael: if this makes sense to you, I could incorporate the restoration
of DEV_PM_QOS_MIN/MAX_FREQUENCY into v10 of the devfreq qos series.

In theory, if freq_qos were extended to handle conflicting min/max values,
this sharing would be useful. Right now freq_qos just ties together two
unrelated pm_qos aggregations for min and max frequency.
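
For context, the current freq_qos layout ties the two aggregations
together roughly like this (simplified from include/linux/pm_qos.h):

struct freq_constraints {
	struct pm_qos_constraints min_freq;	/* aggregation of min requests */
	struct blocking_notifier_head min_freq_notifiers;
	struct pm_qos_constraints max_freq;	/* aggregation of max requests */
	struct blocking_notifier_head max_freq_notifiers;
};

The min and max sides are aggregated independently of each other, which
is what "unrelated" refers to above.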

---
This is implemented by embedding a freq_qos_request inside dev_pm_qos_request:
the data field was already a union in order to deal with flag requests.
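
Concretely, the proposed layout is roughly the following (a sketch, not
necessarily the final field names or order):

struct dev_pm_qos_request {
	enum dev_pm_qos_req_type type;
	union {
		struct plist_node pnode;		/* latency requests */
		struct pm_qos_flags_request flr;	/* flag requests */
		struct freq_qos_request freq;		/* min/max frequency requests */
	} data;
	struct device *dev;
};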

The internal _freq_qos_apply is exported so that it can be called from
apply_constraint() in dev_pm_qos.
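
The resulting dispatch would look roughly like the fragment below; the
existing cases are omitted and the helper name follows patch 2 of this
series:

/* drivers/base/power/qos.c, sketch only */
static int apply_constraint(struct dev_pm_qos_request *req,
			    enum pm_qos_req_action action, s32 value)
{
	switch (req->type) {
	/* ... existing RESUME_LATENCY / LATENCY_TOLERANCE / FLAGS cases ... */
	case DEV_PM_QOS_MIN_FREQUENCY:
	case DEV_PM_QOS_MAX_FREQUENCY:
		/*
		 * The embedded freq_qos_request already knows whether it is
		 * a min or a max request, so both types share one call.
		 */
		return _freq_qos_apply(&req->data.freq, action, value);
	default:
		return -EINVAL;
	}
}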

The dev_pm_qos_constraints_destroy function has no obvious equivalent in
freq_qos, but the whole approach of "removing requests" is somewhat dubious:
request objects should be owned by consumers, and the list of qos requests
should already be empty when the target device is deleted. Clearing the
request list would likely result in a WARN the next time "update_request"
is called by the requestor.

Leonard Crestez (3):
  PM: QoS: Reorder pm_qos/freq_qos/dev_pm_qos structs
  PM: QoS: Export _freq_qos_apply
  PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY

 drivers/base/power/qos.c | 69 +++++++++++++++++++++++++++++---
 include/linux/pm_qos.h   | 86 +++++++++++++++++++++++-----------------
 kernel/power/qos.c       | 11 ++---
 3 files changed, 119 insertions(+), 47 deletions(-)

Comments

Leonard Crestez Nov. 11, 2019, 7:40 p.m. UTC | #1
On 25.10.2019 21:00, Leonard Crestez wrote:
> Support for frequency limits in dev_pm_qos was removed when cpufreq
> switched to freq_qos; this series attempts to restore it by
> reimplementing it on top of freq_qos.
>
> [...]

Any feedback?

The DEV_PM_QOS_MIN/MAX_FREQUENCY constraints are very useful for devfreq 
but devfreq maintainers seem unwilling to take such core changes. See:

     https://patchwork.kernel.org/patch/11221877/#22974093

This makes a lot of sense: PM QoS core changes should be reviewed by the
core PM maintainers.

What would it take to restore DEV_PM_QOS_MIN/MAX_FREQUENCY?

Cpufreq itself could still use DEV_PM_QOS if it performed two-stage
aggregation. This would require a bit of additional effort in the
cpufreq_policy code but would simplify consumers and expose fewer
internals. This sounds like a worthwhile tradeoff; a rough sketch of the
idea follows.
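
To make this concrete, a purely hypothetical sketch of the second stage
(not part of this series): policy_refresh_min_freq() is an invented name,
and it assumes dev_pm_qos_read_value() would accept the frequency request
types.

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/pm_qos.h>

/*
 * Stage 1 (per-CPU aggregation) is handled by dev_pm_qos itself; this
 * folds the per-CPU results into the policy-wide freq_qos constraint.
 */
static void policy_refresh_min_freq(struct cpufreq_policy *policy,
				    struct freq_qos_request *policy_min_req)
{
	s32 min_khz = FREQ_QOS_MIN_DEFAULT_VALUE;
	unsigned int cpu;

	for_each_cpu(cpu, policy->cpus)
		min_khz = max(min_khz,
			      dev_pm_qos_read_value(get_cpu_device(cpu),
						    DEV_PM_QOS_MIN_FREQUENCY));

	/* Stage 2: the policy honours the largest per-CPU minimum. */
	freq_qos_update_request(policy_min_req, min_khz);
}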

--
Regards,
Leonard