WALT vs PELT : Redux
Pavankumar Kondeti
Engineer, Staff
Qualcomm Technologies, Inc.
09/27/2017
pkondeti@qti.qualcomm.com
2
Agenda
• Why do we need load tracking?
• A quick recap of PELT and WALT
• Results
3
Why do we need load tracking?
• Load balancer/group scheduling
• Task placement
◦ Task load and CPU utilization tracking are essential for energy-aware scheduling.
◦ Heavy tasks can be placed on higher capacity CPUs. Small tasks can be packed on a busy CPU without waking up an idle CPU.
• Frequency guidance
◦ CPU frequency governors like ondemand and interactive use a timer and compute the CPU busy time by subtracting the CPU idle time from the timer period.
◦ Task migrations are not accounted for. If a task's execution time is split across two CPUs, the governor fails to ramp up the frequency (see the sketch below).
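As a rough illustration of the migration problem above (a minimal sketch with made-up numbers, not actual governor code): a timer-based governor on each CPU estimates load as (period - idle) / period, so once a task's execution is split across two CPUs, neither CPU observes enough busy time to ramp up.

#include <stdio.h>

/* Load estimate of a timer-based governor: busy fraction of the window. */
static double busy_pct(double period_ms, double idle_ms)
{
    return 100.0 * (period_ms - idle_ms) / period_ms;
}

int main(void)
{
    double period = 20.0;   /* hypothetical 20 msec sampling window */

    /* The task runs the whole window on one CPU: that CPU sees 100% busy. */
    printf("no migration: cpu0 %.0f%% busy\n", busy_pct(period, 0.0));

    /* The same task migrates at 10 msec: each CPU sees only 50% busy,
     * so neither governor instance requests a higher frequency. */
    printf("migration   : cpu0 %.0f%% busy, cpu1 %.0f%% busy\n",
           busy_pct(period, 10.0), busy_pct(period, 10.0));
    return 0;
}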
4
PELT recap
• PELT (Per-Entity Load Tracking) was introduced in the 3.8 kernel. Load is tracked at the sched entity and cfs_rq/rq level.
• Load is accumulated as a decaying geometric series, with the runnable time in each 1 msec period as the coefficients. The decay constant is chosen so that a contribution from 32 msec in the past is weighted half as strongly as the current contribution (see the sketch below).
• Blocked tasks also contribute to the rq load/utilization. The task load is decayed during sleep.
• Currently, load tracking is done only for the CFS class. Patches are available to extend it to the RT/DL classes.
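For intuition, a minimal sketch of the decaying accumulation in plain floating-point C (the kernel uses a fixed-point implementation; this is only a model): every 1 msec period the running sum is decayed by y and the period's runnable fraction is added, with y chosen so that y^32 = 0.5.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double y = pow(0.5, 1.0 / 32.0); /* per-millisecond decay: y^32 = 0.5 */
    double sum = 0.0;
    double max_sum = 1.0 / (1.0 - y);      /* sum of the infinite series */

    for (int ms = 1; ms <= 200; ms++) {
        sum = sum * y + 1.0;               /* task was runnable for the whole 1 msec period */
        if (ms % 50 == 0)
            printf("after %3d msec of running: util ~ %.0f%%\n",
                   ms, 100.0 * sum / max_sum);
    }
    return 0;
}

Running continuously, this normalized signal reaches roughly 66% after 50 msec and about 89% after 100 msec, which is why PELT ramps relatively slowly compared with a windowed scheme.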
5
PELT signals
• se->avg.load_avg - The load average contribution of an entity. The entity weight and frequency scaling are factored into the runnable time.
• cfs_rq->runnable_load_avg - Sum of the load average contributions from all runnable tasks. Used by the load balancer.
• cfs_rq->avg.load_avg - Represents the load average contributions from all runnable and blocked entities. Used in CFS group scheduling.
• se->avg.util_avg - The utilization of an entity. The CPU efficiency and frequency scaling are factored into the running time. Used in EAS task placement.
• cfs_rq->avg.util_avg - Represents the utilization from all runnable and blocked entities. Input to the schedutil governor for frequency selection (a rough sketch follows).
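Since cfs_rq->avg.util_avg feeds the schedutil governor, a rough sketch of the utilization-to-frequency mapping may help. This mirrors the idea behind schedutil's get_next_freq() (roughly 25% headroom on top of the utilization), not its exact fixed-point code; the Fmax value is hypothetical, and clamping to the available OPPs is omitted.

#include <stdio.h>

/* next_freq ~= 1.25 * max_freq * util / max_capacity
 * (a real governor also clamps the result to Fmax and the OPP table) */
static unsigned int next_freq(unsigned int max_freq_khz,
                              unsigned long util, unsigned long max_cap)
{
    return (unsigned int)((unsigned long)(max_freq_khz + max_freq_khz / 4) *
                          util / max_cap);
}

int main(void)
{
    unsigned int fmax = 2457600;   /* hypothetical Fmax in kHz */
    unsigned long max_cap = 1024;  /* SCHED_CAPACITY_SCALE */

    for (unsigned long util = 256; util <= 1024; util += 256)
        printf("util %4lu/1024 -> request ~%u kHz\n",
               util, next_freq(fmax, util, max_cap));
    return 0;
}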
6
WALT Introduction
• WALT keeps track of the recent N windows of execution for every task. Windows where a task had no activity are ignored and not recorded.
• Task demand is derived from these N samples. Different policies such as max(), avg() and max(recent, avg) are available (see the sketch below).
• The wait time of a task on the CPU rq is also accounted towards its demand.
• Task utilization is tracked separately from the demand. The utilization of a task is its execution time in the recently completed window.
• The CPU rq's utilization is the sum of the utilization of all tasks that ran in the recently completed window.
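To make the demand policies concrete, a minimal sketch follows. The window history values are hypothetical, and the real per-task bookkeeping (window rollover, frequency/capacity scaling, kernel data structures) is omitted.

#include <stdio.h>

#define NUM_WINDOWS 5   /* "N" recent windows with task activity */

enum demand_policy { POLICY_MAX, POLICY_AVG, POLICY_MAX_RECENT_AVG };

/* hist[0] is the most recently completed window, in usec of execution. */
static unsigned int walt_demand(const unsigned int hist[NUM_WINDOWS],
                                enum demand_policy policy)
{
    unsigned int max = 0, sum = 0, avg;

    for (int i = 0; i < NUM_WINDOWS; i++) {
        sum += hist[i];
        if (hist[i] > max)
            max = hist[i];
    }
    avg = sum / NUM_WINDOWS;

    switch (policy) {
    case POLICY_MAX:
        return max;
    case POLICY_AVG:
        return avg;
    case POLICY_MAX_RECENT_AVG:
    default:
        return hist[0] > avg ? hist[0] : avg;
    }
}

int main(void)
{
    /* Hypothetical per-window execution times (usec), newest first. */
    unsigned int hist[NUM_WINDOWS] = { 9000, 2000, 8500, 3000, 2500 };

    printf("max             : %u usec\n", walt_demand(hist, POLICY_MAX));
    printf("avg             : %u usec\n", walt_demand(hist, POLICY_AVG));
    printf("max(recent, avg): %u usec\n", walt_demand(hist, POLICY_MAX_RECENT_AVG));
    return 0;
}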
7
WALT signals
• p->ravg.demand - Task demand. Used in EAS task placement.
• rq->cumulative_runnable_avg - Sum of the demand of all runnable tasks on this CPU. Represents the instantaneous load. Used in EAS task placement.
• rq->prev_runnable_sum - CPU utilization in the recently completed window. Input to the schedutil governor.
• rq->cum_window_demand - Sum of the demand of tasks that ran in the current window. This signal is used to estimate the frequency. Used in EAS task placement for evaluating the energy difference.
8
PELT vs WALT
• WALT provides a more dynamic demand/utilization signal than PELT, whose signal is smoothed by the geometric series.
• PELT takes longer to detect a heavy task or to re-classify a light task as heavy, so migration to a higher capacity CPU is delayed. As an example, it takes ~138 msec for a task to go from 0% to 95% utilization.
• PELT decays the utilization of a task when it goes to sleep. As an example, a 100% utilization task drops to about 10% utilization after just 100 msec of sleep (both figures are checked in the sketch below).
• Similar to task utilization, PELT takes longer to build up the CPU utilization, which delays frequency ramp-up.
• PELT's blocked utilization decay implies that the underlying cpufreq governor has to delay dropping the frequency for longer than necessary.
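Both figures quoted above follow from the decay constant alone. A quick check with the same simplified per-millisecond model (y^32 = 0.5) used earlier reproduces ~138 msec for the ramp-up and roughly 11% remaining after a 100 msec sleep.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double y = pow(0.5, 1.0 / 32.0);

    /* Ramp-up: util after t msec of continuous running is ~ 1 - y^t.
     * Solving 1 - y^t = 0.95 gives t = ln(0.05) / ln(y). */
    printf("0%% -> 95%% utilization takes ~%.0f msec\n", log(0.05) / log(y));

    /* Decay: a 100% utilization task sleeping for t msec decays to y^t. */
    printf("100%% task after 100 msec of sleep: ~%.0f%% utilization\n",
           100.0 * pow(y, 100.0));
    return 0;
}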
9
Frequency ramp up
[Plots: task execution and CPU frequency over time, WALT vs PELT]
• The frequency ramp-up is 4 times faster with WALT.
• The CPU utilization is limited by the current capacity. If the frequency is not ramped up quickly, the utilization accumulation is delayed, which in turn delays ramping up to Fmax (illustrated by the sketch below).
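A small extension of the earlier decay sketch illustrates the capacity limit: with frequency-invariant accounting each period's contribution is scaled by cur_freq / max_freq, so a CPU held at half frequency builds utilization at half the rate and crosses any given ramp-up threshold much later (the 40% threshold below is just an example, not a schedutil tunable).

#include <math.h>
#include <stdio.h>

/* Milliseconds of continuous running needed for the (frequency-scaled)
 * utilization to reach 'target', or -1 if it never gets there. */
static int ms_to_reach(double target, double freq_scale, double y)
{
    double sum = 0.0, max_sum = 1.0 / (1.0 - y);

    for (int ms = 1; ms <= 1000; ms++) {
        sum = sum * y + freq_scale;    /* busy all period, scaled by cur/max freq */
        if (sum / max_sum >= target)
            return ms;
    }
    return -1;
}

int main(void)
{
    const double y = pow(0.5, 1.0 / 32.0);

    printf("time to 40%% util at Fmax    : %d msec\n", ms_to_reach(0.40, 1.0, y));
    printf("time to 40%% util at Fmax / 2: %d msec\n", ms_to_reach(0.40, 0.5, y));
    return 0;
}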
10
Frequency ramp down
[Plots: task execution and CPU frequency over time, WALT vs PELT]
• The frequency ramp-down is 8 times faster with WALT.
• Blocked tasks also contribute to the CPU utilization in PELT, which delays frequency ramp-down. This hurts power but may boost performance in some use cases.
11
Results
• Benchmarks
• JankBench
• UX (Twitter, Facebook, YouTube)
• Games
12
Testing platform
• The tests are run on the SDM835 platform.
• The EAS + schedutil setup is based on the AOSP common kernel. A few additional patches that are available in AOSP Gerrit review are included.
• HZ = 300 and the WALT window size is 10 msec.
• The schedtune boost for top-app is 10% by default. It is boosted to 50% for 3 seconds upon a touch event.
13
Benchmark         Sub-test          Result
GeekBench (v4)    ST                same
                  MT                same
PCMark (v1.2)     Total             WALT is 23% better
                  Web browsing      WALT is 38% better
                  Video Playback    PELT is 4% better
                  Writing           WALT is 33% better
                  Photo Editing     WALT is 32% better
Antutu (v6)       -                 same
AndroBench (v5)   Sequential Read   same
                  Sequential Write  same
                  Random Read       WALT is 51% better
                  Random Write      WALT is 11% better

Note: WALT accounts task wait time towards demand. This results in task spreading, which improved the random read/write performance. With schedtune.prefer_idle = 1, both PELT and WALT give the same result.
14
Frequency ramp up in PCMark
[Plots: CPU frequency ramp-up during PCMark, PELT vs WALT]
• The faster frequency ramp-up is helping WALT in PCMark.
• Better results are observed with Patrick's util_est series for PELT. The gap is reduced from 23% to 10%.
15
List View Fling        PELT (msec)   WALT (msec)
25%                    5.541621      5.046939
50%                    6.235489      5.364740
75%                    8.211237      5.858221
90%                    9.225127      6.928594
99%                    11.848338     10.306270

Image List View Fling  PELT (msec)   WALT (msec)
25%                    5.170975      4.950513
50%                    5.715942      5.348545
75%                    6.817395      5.850904
90%                    8.662801      6.625814
99%                    11.443108     9.915054

Test                   Power
List View Fling        PELT is 2% better
Image List View Fling  PELT is 4% better
16
Shadow Grid Fling         PELT (msec)   WALT (msec)
25%                       6.500720      6.219871
50%                       7.525394      7.932580
75%                       9.514976      9.256707
90%                       10.416724     10.443205
99%                       12.531525     13.334770

High hitrate text render  PELT (msec)   WALT (msec)
25%                       5.663946      5.161756
50%                       6.003140      5.444418
75%                       6.726914      6.053425
90%                       8.065195      7.008964
99%                       12.561744     11.277686

Test                      Power
Shadow Grid Fling         PELT is 7% better
High hitrate text render  PELT is 3% better
17
Low hitrate text render  PELT (msec)   WALT (msec)
25%                      5.596932      5.106558
50%                      5.945192      5.388022
75%                      6.695878      5.898033
90%                      8.143912      6.888017
99%                      12.639909     10.600410

Edit text input          PELT (msec)   WALT (msec)
25%                      5.566935      4.477202
50%                      6.977525      5.445546
75%                      9.407826      6.955690
90%                      14.892524     9.764885
99%                      21.150100     16.515769

Test                     Power
Low hitrate text render  PELT is 3% better
Edit text input          PELT is 6% better
18
BitMap upload (overdraw)  PELT (msec)   WALT (msec)
25%                       15.136069     14.489167
50%                       15.908737     15.844335
75%                       16.836846     18.725650
90%                       18.500224     21.474493
99%                       22.062164     28.835235

Test                      Power
BitMap upload (overdraw)  PELT is 1% better
19
WALT has 5% better power with similar performance (number of janks). Frame rendering times are better with PELT.
20
WALT has 5% better power with similar performance (number of janks). Frame rendering times are better with PELT.
21
Both PELT and WALT played the video at 30 FPS. WALT has 4% better power.
22
Games
Game                   Result
Subway Surfers         same. With touchboost disabled, power is 10% better for WALT; PELT has better performance (~2 FPS).
Thunder Cross          Power is 3% better for WALT for the same performance measured in FPS.
Candy Crush Soda Saga  same
23
Conclusion
• There is no clear winner in UX use cases like JankBench. The performance (especially at the 90th and 99th percentiles) is better with WALT. The touchboost settings can be tuned down to save power. Note that for UX use cases, rendering a frame within 16 msec is critical; power optimizations are not possible if we can't meet this deadline.
• Clear power savings with WALT for YouTube streaming.
• WALT performance is better in benchmarks like PCMark.
• We will do this comparison again with Patrick's util_est patches, which are meant to address the load decay and slower ramp-up problems of PELT.
Follow us on:
For more information, visit us at:
www.qualcomm.com & www.qualcomm.com/blog
Thank you
Nothing in these materials is an offer to sell any of the components or devices referenced herein.
©2017 Qualcomm Technologies, Inc. and/or its affiliated companies. All Rights Reserved.
Qualcomm is a trademark of Qualcomm Incorporated, registered in the United States and other countries. Other products and brand names
may be trademarks or registered trademarks of their respective owners.
References in this presentation to “Qualcomm” may mean Qualcomm Incorporated, Qualcomm Technologies, Inc., and/or other subsidiaries
or business units within the Qualcomm corporate structure, as applicable. Qualcomm Incorporated includes Qualcomm’s licensing business,
QTL, and the vast majority of its patent portfolio. Qualcomm Technologies, Inc., a wholly-owned subsidiary of Qualcomm Incorporated,
operates, along with its subsidiaries, substantially all of Qualcomm’s engineering, research and development functions, and substantially all
of its product and services businesses, including its semiconductor business, QCT.
