Message ID | 1572979786-20361-1-git-send-email-thara.gopinath@linaro.org |
Series | Introduce Thermal Pressure
Hi Thara,

I am going to try your patch set on a different board. To do that I
need more information regarding your setup. Please find my comments
below. I probably need one hack which I do not fully understand.

On 11/5/19 6:49 PM, Thara Gopinath wrote:
> Thermal governors can respond to an overheat event of a cpu by
> capping the cpu's maximum possible frequency. This in turn
> means that the maximum available compute capacity of the
> cpu is restricted. But today in the kernel, the task scheduler is
> not notified of capping of the maximum frequency of a cpu.
> In other words, the scheduler is unaware of maximum capacity
> restrictions placed on a cpu due to thermal activity.
> This patch series attempts to address this issue.
> The benefits identified are better task placement among available
> cpus in the event of overheating, which in turn leads to better
> performance numbers.
>
> The reduction in the maximum possible capacity of a cpu due to a
> thermal event can be considered as thermal pressure. Instantaneous
> thermal pressure is hard to record and can sometimes be erroneous,
> as there can be a mismatch between the actual capping of capacity
> and the scheduler recording it. Thus the solution is to have a
> weighted average per-cpu value for thermal pressure over time.
> The weight reflects the amount of time the cpu has spent at a
> capped maximum frequency. Since thermal pressure is recorded as
> an average, it must be decayed periodically. The existing algorithm
> in the kernel scheduler PELT framework is re-used to calculate
> the weighted average. This patch series also defines a sysctl
> interface to allow for a configurable decay period.
>
> Regarding testing, basic build, boot and sanity testing have been
> performed on a db845c platform with a debian file system.
> Further, dhrystone and hackbench tests have been
> run with the thermal pressure algorithm. During testing, due to
> constraints of the step wise governor in dealing with big little
> systems,

I don't understand this modification. Could you explain what the issue
was, and whether this modification broke the original thermal solution
upfront? You are then comparing against this modified version and
treating it as the 'origin', am I right?

> trip point 0 temperature was made asymmetric between cpus in the
> little cluster and the big cluster; the idea being that the
> big cores will heat up and the cpu cooling device will throttle the
> frequency of the big cores faster, thereby limiting the maximum
> available capacity, and the scheduler will spread out tasks to the
> little cores as well.
>
> Test Results
>
> Hackbench: 1 group, 30000 loops, 10 runs
>                                                  Result   SD
>                                                  (Secs)   (% of mean)
> No Thermal Pressure                              14.03    2.69%
> Thermal Pressure PELT Algo. Decay : 32 ms        13.29    0.56%
> Thermal Pressure PELT Algo. Decay : 64 ms        12.57    1.56%
> Thermal Pressure PELT Algo. Decay : 128 ms       12.71    1.04%
> Thermal Pressure PELT Algo. Decay : 256 ms       12.29    1.42%
> Thermal Pressure PELT Algo. Decay : 512 ms       12.42    1.15%
>
> Dhrystone Run Time: 20 threads, 3000 MLOOPS
>                                                  Result   SD
>                                                  (Secs)   (% of mean)
> No Thermal Pressure                              9.452    4.49%
> Thermal Pressure PELT Algo. Decay : 32 ms        8.793    5.30%
> Thermal Pressure PELT Algo. Decay : 64 ms        8.981    5.29%
> Thermal Pressure PELT Algo. Decay : 128 ms       8.647    6.62%
> Thermal Pressure PELT Algo. Decay : 256 ms       8.774    6.45%
> Thermal Pressure PELT Algo. Decay : 512 ms       8.603    5.41%

What I would also like to see for these performance results is the
average temperature of the chip. Is it higher than in the 'origin'?
Regards,
Lukasz Luba
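To make the capping-to-pressure mapping described in the quoted cover
letter concrete, here is a minimal C sketch of how a thermal frequency
cap could translate into lost capacity. It is an illustration, not code
from the series; the helper name is hypothetical, and in the kernel the
maximum capacity would come from arch_scale_cpu_capacity(), while here
it is just a parameter.

/*
 * Illustrative sketch only: a thermal cap on frequency reduces the
 * usable capacity linearly, and "thermal pressure" is the capacity
 * lost to that cap.
 */
static unsigned long freq_cap_to_thermal_pressure(unsigned long max_cap,
						  unsigned long capped_freq,
						  unsigned long max_freq)
{
	/* Capacity still deliverable under the capped frequency. */
	unsigned long capped_cap = (max_cap * capped_freq) / max_freq;

	/* The pressure is whatever the cap takes away. */
	return max_cap - capped_cap;
}

For example (illustrative numbers), a big core with max_cap = 1024
capped from 2.8 GHz down to 1.4 GHz would report a pressure of 512,
i.e. half of its compute capacity is unavailable to the scheduler.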
On Wed, Nov 6, 2019 at 12:20 AM Thara Gopinath <thara.gopinath@linaro.org> wrote:
>
> Thermal governors can respond to an overheat event of a cpu by
> capping the cpu's maximum possible frequency. This in turn
> means that the maximum available compute capacity of the
> cpu is restricted. But today in the kernel, the task scheduler is
> not notified of capping of the maximum frequency of a cpu.
> In other words, the scheduler is unaware of maximum capacity
> restrictions placed on a cpu due to thermal activity.
> This patch series attempts to address this issue.
> The benefits identified are better task placement among available
> cpus in the event of overheating, which in turn leads to better
> performance numbers.
>
> The reduction in the maximum possible capacity of a cpu due to a
> thermal event can be considered as thermal pressure. Instantaneous
> thermal pressure is hard to record and can sometimes be erroneous,
> as there can be a mismatch between the actual capping of capacity
> and the scheduler recording it. Thus the solution is to have a
> weighted average per-cpu value for thermal pressure over time.
> The weight reflects the amount of time the cpu has spent at a
> capped maximum frequency. Since thermal pressure is recorded as
> an average, it must be decayed periodically. The existing algorithm
> in the kernel scheduler PELT framework is re-used to calculate
> the weighted average. This patch series also defines a sysctl
> interface to allow for a configurable decay period.
>
> Regarding testing, basic build, boot and sanity testing have been
> performed on a db845c platform with a debian file system.
> Further, dhrystone and hackbench tests have been
> run with the thermal pressure algorithm. During testing, due to
> constraints of the step wise governor in dealing with big little
> systems,

What constraints?

> trip point 0 temperature was made asymmetric between cpus in the
> little cluster and the big cluster; the idea being that the
> big cores will heat up and the cpu cooling device will throttle the
> frequency of the big cores faster, thereby limiting the maximum
> available capacity, and the scheduler will spread out tasks to the
> little cores as well.

Can you share the hack to get this behaviour as well, so I can try to
reproduce it on 845c?

> Test Results
>
> Hackbench: 1 group, 30000 loops, 10 runs
>                                                  Result   SD
>                                                  (Secs)   (% of mean)
> No Thermal Pressure                              14.03    2.69%
> Thermal Pressure PELT Algo. Decay : 32 ms        13.29    0.56%
> Thermal Pressure PELT Algo. Decay : 64 ms        12.57    1.56%
> Thermal Pressure PELT Algo. Decay : 128 ms       12.71    1.04%
> Thermal Pressure PELT Algo. Decay : 256 ms       12.29    1.42%
> Thermal Pressure PELT Algo. Decay : 512 ms       12.42    1.15%
>
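To give a feel for the decay periods benchmarked in the tables quoted
above (32 to 512 ms), here is a small self-contained userspace model.
It is not the kernel's fixed-point PELT code, just the same
geometric-decay idea expressed with a configurable half-life; all
values are illustrative.

#include <math.h>
#include <stdio.h>

/* Per-millisecond decay factor y chosen so that y^half_life == 0.5. */
static double decay_factor(double half_life_ms)
{
	return pow(0.5, 1.0 / half_life_ms);
}

int main(void)
{
	static const double half_lives[] = { 32, 64, 128, 256, 512 };
	const double initial_pressure = 100.0;	/* arbitrary units */

	for (int i = 0; i < 5; i++) {
		double y = decay_factor(half_lives[i]);

		/* Pressure left 100 ms after the cap is lifted. */
		printf("decay %3.0f ms: %5.1f left after 100 ms\n",
		       half_lives[i], initial_pressure * pow(y, 100.0));
	}
	return 0;
}

Build with something like "gcc demo.c -lm". With a 32 ms decay period,
only about 11% of the pressure remains 100 ms after throttling ends,
whereas with 512 ms about 87% remains, so a longer decay period makes
the scheduler remember past throttling for longer.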
On 11/12/19 11:21 AM, Lukasz Luba wrote:
> Hi Thara,
>
> I am going to try your patch set on a different board.
> To do that I need more information regarding your setup.
> Please find my comments below. I probably need one hack
> which I do not fully understand.
>
> On 11/5/19 6:49 PM, Thara Gopinath wrote:

[...]

>> Regarding testing, basic build, boot and sanity testing have been
>> performed on a db845c platform with a debian file system.
>> Further, dhrystone and hackbench tests have been
>> run with the thermal pressure algorithm. During testing, due to
>> constraints of the step wise governor in dealing with big little
>> systems,
> I don't understand this modification. Could you explain what the
> issue was, and whether this modification broke the original thermal
> solution upfront? You are then comparing against this modified
> version and treating it as the 'origin', am I right?

With Ionela's help I understood the reason for doing this hack. For
those who follow: she created a 'capacity inversion' between the big
and little cores to test whether the patches really work. How: she
starts throttling the big cores at a lower temperature, so earlier in
time; thus the power is shifted towards the little cores (which are
more energy efficient and can run at a higher frequency). The big
cores run at the minimum frequency and the little cores (hopefully) at
the maximum frequency. This 'capacity inversion' is a use case which
might occur in the real world. It is hard to trigger in normal
benchmarks, though.

I don't know how often this 'capacity inversion' occurs, or for how
long it stays, in real workloads. Based on the tests run with the
default thermal solution, where the results are almost the same, I
would say it is not often (maybe 3% of the test period; otherwise I
would get better results, because this patch set solves this issue).
I have run a few different kernels and benchmarks without this
'capacity inversion' and I don't see a regression (or the benefits of
this solution), which is also a big plus in case of mainlining it.
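A hypothetical sketch of the inversion condition described above:
after thermal capping, a big CPU may offer less usable capacity than
an uncapped little CPU. The function name and all values are
illustrative, not from the patch set.

#include <stdbool.h>

/*
 * Illustrative only: 'capacity inversion' holds when a throttled big
 * CPU's remaining capacity drops below a little CPU's full capacity.
 */
static bool capacity_inverted(unsigned long big_max_cap,
			      unsigned long big_thermal_pressure,
			      unsigned long little_max_cap)
{
	unsigned long big_usable = big_max_cap - big_thermal_pressure;

	return big_usable < little_max_cap;
}

For example, with a big core at big_max_cap = 1024 and a little core
at little_max_cap = 460 (illustrative numbers), any cap that removes
more than 564 units of big-core capacity inverts the ordering, and
tasks are then better placed on the little cores.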
In the case where the 'capacity inversion' is artificially introduced
into the system for 100% of the time, the stress tests show a huge
difference. Please refer to Ionela's test results [1] (~30% better).

Regards,
Lukasz Luba

[1] https://docs.google.com/spreadsheets/d/1ibxDSSSLTodLzihNAw6jM36eVZABuPMMnjvV-Xh4NEo/edit#gid=0