Parsl:
Pervasive Parallel Programming in Python
18 October 2019
Daniel S. Katz
(d.katz@ieee.org, http://danielskatz.org, @danielskatz)
Assistant Director for Scientific
Software & Applications, NCSA
Research Associate Professor,
CS, ECE, iSchool
Parsl Team: Y. Babuji, A. Woodard,
Z. Li, D. S. Katz, B. Clifford, R. Kumar,
L. Lacinski, R. Chard, J. M. Wozniak,
I. Foster, M. Wilde, K. Chard
Supporting composition and parallelism in Python
Software is increasingly assembled rather than written
• High-level language (e.g., Python) to integrate and wrap components from
many sources
Parallel and distributed computing is no longer a niche area
• Increasing data sizes combined with plateauing sequential processing power
• Parallel hardware (e.g., accelerators) and distributed computing systems
Parsl lets programs naturally express opportunities for parallelism;
those opportunities can then be realized, at execution time, using
different execution models on different parallel platforms
Traditional workflow
• A set of tasks and dependencies between them
• Perhaps expressed as data structure, e.g. graph (DAG or cyclic)
• How is this different from a procedural computer program?
• At the level of a set of instructions:
• Dependencies are often explicit, perhaps like a compiled intermediate
representation
• Tasks are much longer (running time O(sec) – O(hr))
• At level of set of functions:
• Tasks are more well-defined (inputs, outputs)
• Tasks are often longer (running time O(sec) – O(hr))
https://danielskatzblog.wordpress.com/2018/01/08/expressing-workflows-as-code-vs-data/
https://danielskatzblog.wordpress.com/2019/02/05/using-workflows-expressed-as-code-and-workflows-expressed-as-data-together/
Traditional workflow expression
• Why express a workflow differently than a program?
• Program (script) is a natural way of expressing a workflow
• Easy to understand, easy to change
• Examples: shell scripts, programs in Parsl
• Parsl: functions used to identify components
• Expressing it as data corresponds to the compiled (assembly)
version of the workflow
• Maybe easier to execute?
• Perhaps easier to reproduce?
https://danielskatzblog.wordpress.com/2018/01/08/expressing-workflows-as-code-vs-data/
https://danielskatzblog.wordpress.com/2019/02/05/using-workflows-expressed-as-code-and-workflows-expressed-as-data-together/
Workflow as code vs workflow as data
• Goes back to workflow lifecycle concept
• Workflows follow a cycle:
• Experimentation/exploration phase (scientific hacking): workflow is extension of
thought processes of workflow maker
• Productization/dissemination phase: the developer (or someone else) prepares the
workflow for wider and repeated use through documentation and optimization, then
disseminates it
• This use by others can be simple reuse, or it can be further development
• Different types of users have different needs
• Experts want to be able to do as much as possible
• Other users trade away complex features for simpler user interface
C. Wroe, C. Goble, et al., “Recycling workflows and services through discovery and reuse,”
CCPE v19, pp. 181-194, 2007. doi:10.1002/cpe.1050
Representative Parsl Use Cases
[Figure: input/output task patterns for three representative applications: DLHub, SwiftSeq, and LSST-DESC]
DESC image simulation
[Figure: pipeline in which a Catalog Simulator feeds an Image Simulator (modeling atmosphere, telescope, camera...), whose output flows through the LSST Data Management Stack to produce fake observations, and science. Image credits: NASA/JPL-Caltech/ESO/R. Hurt; HSC Project/NAOJ; LSST Project. Slide credit: Antonio Villarreal]
ImSim workflow
[Figure: a Bundler groups work from 189 sensors x ~10,000s of instance catalogs into node-sized bundles (64 tasks each), described in JSON. Each catalog yields 189 tasks; the Parsl Extreme Scale Executor ran these across up to 4000 nodes (256K cores for 3 days) and 2000 nodes (128K cores for 3.5 days)]
Representative Parsl use cases

                  DLHub                  SwiftSeq           LSST-DESC
                  Machine Learning       DNA Sequence       Simulated Sky
                  Inference              Analysis           Survey
O(Tasks)          Thousands              Thousands          Millions
O(Nodes)          Tens                   Hundreds           Thousands
O(Duration)       Milliseconds-Seconds   Hours-Days         Hours-Days
Pattern           Bag-of-tasks           Dataflow           Dataflow
Requirements      Low latency bounds     High throughput    Extreme scale
Parsl Basics
Parsl: Interactive parallel programming in Python
Apps define opportunities for parallelism
• Python apps call Python functions
• Bash apps call external applications
Apps return “futures”: a proxy for a result
that might not yet be available
Apps run concurrently, respecting data
dependencies. Natural parallel
programming!
Parsl scripts are independent of where
they run. Write once, run anywhere!
pip install parsl
Try parsl via binder at bottom left of http://parsl-project.org
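A minimal sketch of these ideas, using the local-threads configuration from the Parsl docs (the add function itself is illustrative):

import parsl
from parsl import python_app
from parsl.configs.local_threads import config  # run apps in local threads

parsl.load(config)  # choose the execution environment at runtime

@python_app
def add(a, b):
    return a + b

future = add(1, 2)       # returns an AppFuture immediately
print(future.result())   # blocks until the app finishes: prints 3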
Expressing a many task workflow in Parsl
1) Wrap the science applications as Parsl Apps:
@bash_app
def simulate(outputs=[]):
    # bash apps return a command-line template to execute
    return './simulation_app.exe {outputs[0]}'

@bash_app
def merge(inputs=[], outputs=[]):
    i = inputs; o = outputs
    return './merge {1} {0}'.format(' '.join(i), o[0])

@python_app
def analyze(inputs=[]):
    # python apps call Python functions
    return analysis_package(inputs)
Expressing a many task workflow in Parsl
2) Execute the parallel workflow by calling Apps:

sims = []
for i in range(nsims):
    sims.append(simulate(outputs=['sim-%s.txt' % i]))

all = merge(inputs=[i.outputs[0] for i in sims],
            outputs=['all.txt'])

result = analyze(inputs=[all.outputs[0]])

[Figure: task graph in which N simulate tasks write sim-1.txt ... sim-N.txt, a merge task combines them into all.txt, and an analyze task consumes all.txt]
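As a usage note (a sketch; the variables come from the code above): every call returns a future immediately, and execution is driven by waiting on results:

# Nothing has necessarily run yet; futures stand in for results
print(sims[0].done())             # may be False

# all.outputs[0] is a DataFuture for the file 'all.txt'
merged = all.outputs[0].result()  # blocks until merge completes

print(result.result())            # blocks until analyze completes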
Decomposing dynamic parallel execution into a task-dependency graph
[Figure: a Parsl script decomposed into a dynamic task-dependency graph]
Parsl scripts are execution provider independent
The same script can be run locally, on grids, clouds,
or supercomputers
Growing support for various schedulers and cloud
vendors
[Figure: supported providers and executors, from the Parsl docs]
Separation of code and execution
Choose execution environment
at runtime. Parsl will direct
tasks to the configured
execution environment(s).
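A hedged sketch of this separation (the partition name and block counts are illustrative assumptions, not recommendations): the same app code runs under either configuration, and only the loaded Config changes:

import parsl
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import LocalProvider, SlurmProvider

# Local execution: pilot workers on the current machine
local_config = Config(executors=[
    HighThroughputExecutor(label='local_htex', provider=LocalProvider()),
])

# Cluster execution: the provider submits pilot jobs to Slurm
cluster_config = Config(executors=[
    HighThroughputExecutor(
        label='cluster_htex',
        provider=SlurmProvider(
            'normal',            # assumed partition name
            nodes_per_block=2,   # nodes per pilot job
            init_blocks=1,
            max_blocks=4,        # allow elastic growth to 4 pilot jobs
        ),
    ),
])

parsl.load(cluster_config)   # or local_config; the apps are unchanged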
Authentication and authorization
Authn/z is hard…
• 2FA, X509, GSISSH, etc.
Integration with Globus Auth
to support native app
integration for accessing
Globus (and other) services
Using scoped access tokens,
refresh tokens, delegation
support
Transparent (wide area) data management
Implicit data movement to/from
repositories, laptops,
supercomputers
Globus for third-party, high
performance and reliable data
transfer
• Support for site-specific DTNs
HTTP/FTP direct data staging
parsl_file = File('globus://EP/path/file')
www.globus.org
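A brief sketch of implicit staging (the endpoint and path are placeholders, and Globus transfers also require endpoint details in the executor configuration, not shown here):

from parsl import python_app
from parsl.data_provider.files import File

@python_app
def count_lines(inputs=[]):
    # Parsl stages the remote file to a local path before the app runs
    with open(inputs[0].filepath) as f:
        return sum(1 for _ in f)

remote = File('globus://EP/path/file')        # placeholder endpoint/path
print(count_lines(inputs=[remote]).result())  # staged in, then counted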
Parsl Performance
Different types of scientific workloads
High-throughput workloads
• Protein docking, image processing, materials reconstructions
• Requirements: 1000s of tasks, 100s of nodes, reliability, usability,
monitoring, elasticity, etc.
Extreme-scale workloads
• Cosmology simulations, imaging the Arctic, genomics analysis
• Requirements: millions of tasks, 1000s of nodes (100,000s of cores)
Interactive and real-time workloads
• Materials science, cosmic ray shower analysis, machine learning inference
• Requirements: 10s of nodes, rapid response, pipelining
Different types of execution
High-throughput executor (HTEX)
• Pilot job-based model with multi-threaded manager deployed on workers
• Designed for ease of use, fault-tolerance, etc.
• <2000 nodes (~60K workers), ms tasks, task duration/nodes > 0.01
Extreme-scale executor (EXEX)
• Distributed MPI job manages execution. Manager rank communicates
workload to other worker ranks directly
• Designed for extreme scale execution on supercomputers
• >1000 nodes (>30K workers), ms tasks, >1 min task duration
Low-latency Executor (LLEX)
• Direct socket communication to workers, fixed resource pool, limited features
• 10s of nodes, <1M tasks, <1 min tasks
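An illustrative configuration sketch (values are assumptions, not recommendations): the executor is just another Config choice, here an HTEX sized for many short tasks on one machine:

from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import LocalProvider

config = Config(executors=[
    HighThroughputExecutor(
        label='htex',
        max_workers=8,             # workers per node; illustrative
        provider=LocalProvider(),  # swap in a scheduler provider at scale
    ),
])
# The extreme-scale and low-latency executors are selected the same way,
# trading generality for scale or latency as described above.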
Short tasks scale to thousands of workers
Strong scaling: 50,000 tasks submitted
with increasing number of workers
* FireWorks: only 5,000 tasks
HTEX and EXEX outperform other
Python-based approaches when >256
workers
Other approaches are limited to fewer
than 128 nodes; HTEX and EXEX
continue to scale
[Plots: task completion time vs number of workers, for 0s tasks and 1s tasks]
Executors scale to 2M tasks/256K workers
[Plots: weak-scaling results for 0s tasks and 1s tasks]
Weak scaling: 10 tasks per worker
HTEX and EXEX again outperform
other Python-based approaches up
to ~2M tasks
HTEX and EXEX scale to 2K nodes
(~65k workers) and 8K nodes
(~262K workers), respectively, with
>1K tasks/s
Parsl executors can provide low latency
• LLEX achieves low (3.47 ms) and consistent latency
• HTEX (6.87 ms) and EXEX (9.83 ms) are less consistent
Scalability summary
• EXEX scales to over 250,000 workers across 8,000 nodes
• Both EXEX and HTEX deliver ~1200 tasks/s
• LLEX achieves an average latency of 3.47 ms with tight bounds
Framework          Max. workers   Max. nodes   Max tasks/sec
Parsl-IPP                 2,048           64             330
Parsl-HTEX               65,536        2,048           1,181
Parsl-EXEX              262,144        8,192           1,176
FireWorks                 1,024           32               4
Dask distributed          4,096          128           2,617
More Parsl Functionality
Interactive supercomputing in Jupyter notebooks
Monitoring and visualization
[Screenshots: workflow view and task view from Parsl's monitoring and visualization interface]
DOE Distributed Computing & Data Ecosystem
(DCDE)
• A DOE group is identifying best practices and research challenges to create
and operate a DOE/SC wide federated Distributed Computing & Data
Ecosystem (DCDE)
• Future Lab Computing Working Group (FLC-WG)
• Initially working towards a pilot
• Using OAuth, working with Globus
• Test deployment at BNL
• Parsl is part of this effort, via initial work in linking ORNL and BNL
• We’ve added support for an OAuthSSHChannel
• Now being tested on the test deployment
Multi-site execution
1. Loading the Parsl configuration triggers:
   a) Creation of SSH channels
   b) Deployment of an interchange process onto login nodes
   c) Submission of pilot jobs that will connect to the interchange
2. Parsl submits tasks directly to the interchange
3. Parsl uses Globus to stage data
[Figure: a Parsl script communicating with interchanges on two remote sites]
Multi-site execution
The code is too long to show on a slide;
see the demo instead
https://bit.ly/2Wsjlep
(code in https://github.com/Parsl/demo_multifacility)
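A hedged multi-site sketch (both executors use a local provider here purely for illustration; a real deployment would use SSH channels and per-site providers as in the figure): apps can be pinned to executors by label, and futures flow between them:

import parsl
from parsl import python_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import LocalProvider

config = Config(executors=[
    HighThroughputExecutor(label='site_a', provider=LocalProvider()),
    HighThroughputExecutor(label='site_b', provider=LocalProvider()),
])
parsl.load(config)

@python_app(executors=['site_a'])   # runs only on site_a resources
def preprocess(x):
    return 2 * x

@python_app(executors=['site_b'])   # runs only on site_b resources
def analyze(y):
    return y + 1

# The future from preprocess is passed to analyze; Parsl handles the dependency
print(analyze(preprocess(20)).result())   # prints 41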
Other functionality provided by Parsl
Globus. Delegated authentication and wide area data management
Fault tolerance. Support for retries, checkpointing, and memoization (see the sketch after this list)
Containers. Sandboxed execution environments for workers and tasks
Data management. Automated staging with HTTP, FTP, and Globus
Multi-site. Combining executors/providers for execution across different resources
Elasticity. Automated resource expansion/retraction based on workload
Monitoring. Workflow and resource monitoring and visualization
Reproducibility. Capture of workflow provenance in the task graph
Jupyter integration. Seamless description and management of workflows
Resource abstraction. Block-based model overlaying different providers and resources
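A sketch of the fault-tolerance and memoization knobs (values are illustrative): retries and checkpointing are set on the Config, while caching is enabled per app:

import parsl
from parsl import python_app
from parsl.config import Config
from parsl.executors import ThreadPoolExecutor

config = Config(
    executors=[ThreadPoolExecutor()],
    retries=2,                     # re-run a failing task up to two times
    checkpoint_mode='task_exit',   # checkpoint each task's result as it exits
)
parsl.load(config)

@python_app(cache=True)            # memoize: identical calls reuse prior results
def square(x):
    return x * x

print(square(4).result())          # computed once
print(square(4).result())          # served from the cache/checkpoint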
Summary
Parsl’s parallelism in Python
• Simple: minimal new constructs
• Safe: deterministic parallel programs through immutable
input/output objects, dependency task graph, etc.
• Scalable: efficient execution from laptops to the largest
supercomputers
• Flexible: programs composed from existing components and
then applied to different resources/workloads
Open source
https://github.com/Parsl/parsl
Questions?
http://parsl-project.org
U.S. Department of Energy