Magellan Experiences with OpenStack




Narayan Desai
desai@mcs.anl.gov
Argonne National Lab
The Challenge of High Performance Computing


    Scientific progress is predicated on the use of computational models, simulation,
     or large scale data analysis
      – Conceptually similar to (or enabling of) traditional experiments
    Progress is also limited by the computational capacity usable by applications
    Applications often use large quantities of resources
      –   100s to 100,000s of processors in concert
      –   High bandwidth network links
      –   Low latency communication between processors
      –   Massive data sets
    Largest problems often ride the ragged edge of available resources
      – Inefficiency reduces the scope and efficacy of computational approaches to particular
        large scale problems
    Historically driven by applications, not services
The Technical Computing Bottleneck
DOE Magellan Project (2009-2011)


   Joint project between Argonne and Berkeley Labs
   ARRA Funded
   Goal: To assess “cloud” approaches for mid-range technical computing
     –   Comparison of private/public clouds to HPC systems
     –   Evaluation of Hadoop for scientific computing
     –   Application performance comparison
     –   User productivity assessment
   Approach: Build a system with an HPC configuration, but operate as a private
    cloud
      –   504 IBM iDataPlex compute nodes
      –   200 IBM x3650 storage nodes (8 disks, 4 SSDs each)
      –   12 HP 1 TB memory nodes
      –   133 NVIDIA Fermi GPU nodes
      –   QDR InfiniBand
     –   Connected to the ESNet Research Network
Initial Approach


    Set up Magellan as a testbed
      – Several hardware types, many software configurations
    Chose Eucalyptus 1.6 as the cloud software stack
      – Mindshare leader in 2009
      – Had previous deployment experience
      – Supported the widest range of EC2 APIs at the time
    Planned to deploy 500 nodes into the private cloud portion of the system
      – Bare metal provisioning for the rest, due to lack of virtualization support for GPUs, etc.
Initial Results
Detailed Initial Experiences (2009-2010)


    Had serious stability and scalability problems once we hit 84 nodes
    Eucalyptus showed its research project heritage
      – Implemented in multiple languages
      – Questionable architecture decisions
    Managed to get the system into a usable state, but barely
    Began evaluating potential replacements (11/2010)
      – Eucalyptus 2.0
      – Nimbus
      – OpenStack (Bexar+)
Evaluation Results


    Eucalyptus 2.0 was better, but more of the same
    OpenStack fared much better
      –   Poor documentation
      –   Solid architecture
      –   Good scalability
      –   High quality code
           • Good enough to function as a documentation surrogate in many cases
      – Amazing community
           • (Thanks Vish!)
    Decided to deploy OpenStack Nova in 1/2011
      –   Started with the Cactus beta codebase and tracked changes through release
      –   By February, we had deployed 168 nodes and began moving users over
      –   Turned off the old system by 3/2011
      –   Scaled to 336, then 420 nodes over the following few months
Early OpenStack Compute Operational Experiences

   Cactus
     – Our configuration was unusual, due to scale
           • Multiple network servers
           • Splitting services out to individual service nodes
      – Once things were set up, the system mainly ran
     – Little administrative intervention required to keep the system running
   User productivity
     –   Most scientific users aren’t used to managing systems
     –   Typical usage model is application, not service centric
     –   Private cloud model has a higher barrier to entry
     –   Model also enabled aggressive disintermediation, which users liked
     –   It also turned out there was a substantial unmet demand for services in scientific
         computing
   Due to the user productivity benefits, we decided to transition the system to
    production at the end of the testbed project, in support of the DOE Systems
    Biology Knowledgebase project
Enable DOE Mission Science Communities
   (Diagram: science communities served, including plants and microbes research)
Transitioning into Production (11/2011)
   Production meant new priorities
     – Stability
     – Serviceability
     – Performance
   And a new operations team
   Initial build based on Diablo
     –   Nova
     –   Glance
     –   Keystone*
     –   Horizon*
   Started to develop operational processes
     – Maintenance
     – Troubleshooting
     – Appropriate monitoring
   Performed a full software stack shakedown
     – Scaled rack by rack up to 504 compute nodes
   Vanilla system ready by late 12/2011
Building Towards HPC Efficiency


   HPC platforms target peak performance
     – Virtualization is not a natural choice
   How close can we get to HPC performance while maintaining cloud feature
    benefits?
   Several major areas of concern
     –   Storage I/O
     –   Network Bandwidth
     –   Network latency
     –   Driver support for accelerators/GPUs
   Goal is to build multi-tenant, on demand high performance computational
    infrastructure
     – Support wide area data movement
     – Large scale computations
      – Scalable services hosting bioinformatics data integration
Network Performance Expedition


   Goal: To determine the limits of Openstack infrastructure for wide area network
    transfers
      – Want small numbers of large flows as opposed to large numbers of small flows
   Built a new Essex test deployment
     –   15 compute nodes, with 1x10GE link each
     –   Had 15 more in reserve
     –   Expected to need 20 nodes
     –   KVM hypervisor
   Used FlatManager network setup
     – Multi-host configuration
      – Each hypervisor ran Ethernet bridging and IP firewalling for its guest(s)
   Nodes connected to the DOE ESNet Advanced Networking Initiative
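The multi-host FlatManager setup above can be sketched as a nova.conf fragment. This is a minimal illustration using Essex-era flag names; the interface and bridge names are assumptions, not taken from the Magellan deployment:

```ini
# Flat networking, with a nova-network service on every hypervisor (multi-host)
network_manager=nova.network.manager.FlatManager
multi_host=True
# Bridge guest NICs onto the node's 10GE link (names are illustrative)
flat_network_bridge=br100
flat_interface=eth1
public_interface=eth0
```

In this arrangement there is no central network node to bottleneck the 10GE flows; each hypervisor bridges and firewalls its own guests' traffic.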
ESNet Advanced Networking Infrastructure
Setup and Tuning


   Standard instance type
      – 8 vCPUs
      – 4 vNICs bridged to the same 10GE Ethernet
      – virtio network drivers
   Standard tuning for wide area high bandwidth transfers
      –   Jumbo frames (9K MTU)
      –   Increased TX queue length on the hypervisor
      –   Increased buffer sizes on the guest
      –   32-64 MB TCP window size on the guest
      –   Fasterdata.es.net rocks!
   Remote data sinks
     – 3 nodes with 4x10GE
     – No virtualization
   Settled on 10 VMs for testing
     – 4 TCP flows each (ANL -> LBL)
     – Memory to memory
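The 32-64 MB window sizes follow from the bandwidth-delay product: to keep a long pipe full, a TCP flow needs a window of at least rate × RTT. A quick check of the numbers, using the ~50 ms ANL-to-LBL latency cited in the results:

```python
def bdp_mb(rate_gbps, rtt_ms):
    """Bandwidth-delay product in MB: the window needed to fill the pipe."""
    bits_in_flight = rate_gbps * 1e9 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1e6

# Filling a full 10GE link over the ~50 ms wide-area path:
full_link = bdp_mb(10, 50)      # 62.5 MB
# The observed single-stream ceiling of ~4 Gb/s corresponds to:
single_stream = bdp_mb(4, 50)   # 25.0 MB
print(full_link, single_stream)
```

So a 32-64 MB window is enough for one flow to approach line rate over this path, which is why multiple flows per VM comfortably saturate the NIC.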
Network Performance Results
Results and comments


   95 gigabits/sec consistently
      – 98 Gb/s peak!
      – ~12 GB/s across 50 ms of latency!
   Single node performance was far higher than we expected
      – CPU utilization even suggests we could handle more bandwidth (5-10 more?)
      – Might be able to improve further with EoIB or SR-IOV
   Single stream performance was worse than native
      – Topped out at 3.5-4 gigabits/sec
   Exotic tuning wasn't really required
   OpenStack performed beautifully
      – Was able to cleanly configure this networking setup
      – All of the APIs are usable in their intended ways
      – No duct tape involved!
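As a sanity check on the headline figure, the 95 Gb/s aggregate across 10 VMs with 4 flows each breaks down as follows (simple unit conversion, not from the talk):

```python
aggregate_gbps = 95
vms, flows_per_vm = 10, 4

# Gigabits/sec to gigabytes/sec: divide by 8 bits per byte
gbytes_per_sec = aggregate_gbps / 8                     # ~11.9 GB/s, the "~12 GB/s" slide figure
per_vm_gbps = aggregate_gbps / vms                      # 9.5 Gb/s per VM, near 10GE line rate
per_flow_gbps = aggregate_gbps / (vms * flows_per_vm)   # ~2.4 Gb/s per TCP flow
print(gbytes_per_sec, per_vm_gbps, per_flow_gbps)
```

Per-VM throughput of ~9.5 Gb/s on a 10GE NIC shows the KVM/virtio path was running close to line rate.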
Conclusions


   OpenStack has been a key enabler of on demand computing for us
      – Even in technical computing, where these techniques are less common
   OpenStack is definitely ready for prime time
      – Even supports crazy experimentation
   Experimental results show that on demand high bandwidth data transfers are
    feasible
      – Our next step is to build OpenStack storage that can source/sink that data rate
   Eventually, multi-tenant data transfer infrastructure will be possible
   This is just one example of the potential of mixed cloud/HPC systems
Acknowledgements


   Argonne Team
      – Jason Hedden
      – Linda Winkler
   ESNet
      – Jon Dugan
      – Brian Tierney
      – Patrick Dorn
      – Chris Tracy
   Original Magellan Team
      – Susan Coghlan
      – Adam Scovel
      – Piotr Zbiegel
      – Rick Bradshaw
      – Anping Liu
      – Ed Holohan

 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FMESafe Software
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 

Último (20)

Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 

DOE Magellan OpenStack user story

     – Bare metal provisioning for the rest, due to lack of virtualization support for GPUs, etc.
Detailed Initial Experiences (2009-2010)


   Had serious stability and scalability problems once we hit 84 nodes
   Eucalyptus showed its research-project heritage
     – Implemented in multiple languages
     – Questionable architecture decisions
   Managed to get the system into a usable state, but barely
   Began evaluating potential replacements (11/2010)
     – Eucalyptus 2.0
     – Nimbus
     – OpenStack (Bexar+)
Evaluation Results


   Eucalyptus 2.0 was better, but more of the same
   OpenStack fared much better
     – Poor documentation
     – Solid architecture
     – Good scalability
     – High quality code
        • Good enough to function as a documentation surrogate in many cases
     – Amazing community
        • (Thanks Vish!)
   Decided to deploy OpenStack Nova in 1/2011
     – Started with the Cactus beta codebase and tracked changes through release
     – By February, we had deployed 168 nodes and began moving users over
     – Turned off the old system by 3/2011
     – Scaled to 336, then 420 nodes over the following few months
Early OpenStack Compute Operational Experiences


   Cactus
     – Our configuration was unusual, due to scale
        • Multiple network servers
        • Splitting services out to individual service nodes
     – Once things were set up, the system mainly ran
     – Little administrative intervention required to keep the system running
   User productivity
     – Most scientific users aren't used to managing systems
     – Typical usage model is application-centric, not service-centric
     – The private cloud model has a higher barrier to entry
     – The model also enabled aggressive disintermediation, which users liked
     – It also turned out there was substantial unmet demand for services in scientific computing
   Due to the user productivity benefits, we decided to transition the system to production at the end of the testbed project, in support of the DOE Systems Biology Knowledgebase project
Enable DOE Mission Science Communities

   Plants
   Microbes
Transitioning into Production (11/2011)


   Production meant new priorities
     – Stability
     – Serviceability
     – Performance
   And a new operations team
   Initial build based on Diablo
     – Nova
     – Glance
     – Keystone*
     – Horizon*
   Started to develop operational processes
     – Maintenance
     – Troubleshooting
     – Appropriate monitoring
   Performed a full software stack shakedown
     – Scaled rack by rack up to 504 compute nodes
   Vanilla system ready by late 12/2011
Building Towards HPC Efficiency


   HPC platforms target peak performance
     – Virtualization is not a natural choice
   How close can we get to HPC performance while maintaining cloud feature benefits?
   Several major areas of concern
     – Storage I/O
     – Network bandwidth
     – Network latency
     – Driver support for accelerators/GPUs
   Goal is to build multi-tenant, on-demand, high performance computational infrastructure
     – Support wide area data movement
     – Large scale computations
     – Scalable services hosting bioinformatics data integrations
Network Performance Expedition


   Goal: determine the limits of OpenStack infrastructure for wide area network transfers
     – Want small numbers of large flows as opposed to large numbers of slow flows
   Built a new Essex test deployment
     – 15 compute nodes, with 1x10GE link each
     – Had 15 more in reserve
     – Expected to need 20 nodes
     – KVM hypervisor
   Used a FlatManager network setup
     – Multi-host configuration
     – Each hypervisor ran ethernet bridging and IP firewalling for its guest(s)
   Nodes connected to the DOE ESNet Advanced Networking Initiative
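For reference, a flat, multi-host networking setup of this kind is expressed in Essex-era nova.conf options roughly as follows. This is a hedged sketch, not the Magellan configuration: the interface and bridge names are illustrative placeholders.

```ini
[DEFAULT]
# Flat networking: guests are bridged directly onto a hypervisor interface,
# with no per-tenant VLANs in the way of large flows
network_manager=nova.network.manager.FlatManager
flat_network_bridge=br100
flat_interface=eth1
# Multi-host: each compute node runs bridging/firewalling for its own guests,
# avoiding a single network-node bottleneck
multi_host=True
```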
ESNet Advanced Networking Infrastructure
Setup and Tuning


   Standard instance type
     – 8 vCPUs
     – 4 vNICs bridged to the same 10GE ethernet
     – virtio
   Standard tuning for wide area high bandwidth transfers
     – Jumbo frames (9K MTU)
     – Increased TX queue length on the hypervisor
     – Increased buffer sizes on the guest
     – 32-64 MB window size on the guest
     – Fasterdata.es.net rocks!
   Remote data sinks
     – 3 nodes with 4x10GE
     – No virtualization
   Settled on 10 VMs for testing
     – 4 TCP flows each (ANL -> LBL)
     – Memory to memory
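The 32-64 MB window figure follows from the bandwidth-delay product of the path: the sender must keep a full BDP of unacknowledged data in flight to saturate the link. A quick sanity check, using the link speed and latency quoted on these slides:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to keep a path full."""
    return bandwidth_bps * rtt_s / 8

# A full 10GE link across the ~50 ms ANL -> LBL path needs ~62.5 MB in flight,
# which is why a single flow needs such a large TCP window.
full_link = bdp_bytes(10e9, 0.05)

# Each VM ran 4 parallel TCP flows, so any one flow only has to cover a
# quarter of the link -- comfortably within a 32 MB window.
per_flow = bdp_bytes(10e9 / 4, 0.05)

print(f"full link BDP: {full_link / 1e6:.1f} MB")
print(f"per-flow BDP:  {per_flow / 1e6:.1f} MB")
```

This also previews the single-stream result on the next slide: one flow with default buffers cannot cover a 62.5 MB BDP, so per-stream throughput falls well short of line rate.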
Results and Comments


   95 gigabits consistently
     – 98 peak!
     – ~12 GB/s across 50 ms latency!
   Single node performance was far higher than we expected
     – CPU utilization even suggests we could handle more bandwidth (5-10 more?)
     – Might be able to improve further with EoIB or SR-IOV
   Single stream performance was worse than native
     – Topped out at 3.5-4 gigabits
   Exotic tuning wasn't really required
   OpenStack performed beautifully
     – Was able to cleanly configure this networking setup
     – All of the APIs are usable in their intended ways
     – No duct tape involved!
Conclusions


   OpenStack has been a key enabler of on-demand computing for us
     – Even in technical computing, where these techniques are less common
   OpenStack is definitely ready for prime time
     – Even supports crazy experimentation
   Experimental results show that on-demand high bandwidth data transfers are feasible
     – Our next step is to build OpenStack storage that can source/sink that data rate
   Eventually, multi-tenant data transfer infrastructure will be possible
   This is just one example of the potential of mixed cloud/HPC systems
Acknowledgements


   Argonne Team
     – Jason Hedden
     – Linda Winkler
   ESNet
     – Jon Dugan
     – Brian Tierney
     – Patrick Dorn
     – Chris Tracy
   Original Magellan Team
     – Susan Coghlan
     – Adam Scovel
     – Piotr Zbiegel
     – Rick Bradshaw
     – Anping Liu
     – Ed Holohan