
Building Cloud - Where SDN Could Help

A talk from the inaugural workshop of the Swiss SDN working group in October 2013. In the context of a green-field "cloud" infrastructure build, it looks at three areas where "SDN" techniques seem useful: scaling the internal network fabric across many racks, realizing a cost-effective high-bandwidth uplink to the Internet, and supporting "virtual private cloud" services using tunneling.

  1. Building Cloud - Where SDN Could Help. SDN Workshop, Zurich, 30 October 2013. Simon Leinen, simon.leinen@switch.ch
  2. SWITCH “Cloud” Experience (so far)
     • Built ~10-node Ceph+OpenStack cluster BCC – “building cloud competence”
     • Services:
       – VMs for various researchers and internal testers
       – File synchronization server for ~500 end users in “Cloud Shared Storage” usability tests – ownCloud vs. PowerFolder
     • Networking:
       – 2*10GE per server
       – 6*10GE on front-end servers, which route
       – Two Brocade “ToR” switches with TRILL-based multi-chassis multipath, L2+VLANs
       – 2*10GE towards backbone
  3. Next Step: ~2 * 2 racks with room to scale
     Goals:
     • Offer “Dropbox-like” service to entire community
     • Offer “IaaS” services (VM/storage) to researchers
     • A first example of “scientific SaaS”
     • Stable and efficient operations
     • Scalability, both architectural and economical
  4. Growing the Cloud: Internal fabric
     • Beyond a few racks, we need some sort of “aggregation layer” beyond the ToR. There are multiple approaches:
       – Traditional, with a large aggregation switch (doubled for redundancy)
       – Modern, with a leaf/spine design <- cost-effective “commodity” kit
     • How can servers make use of parallelism in the fabric? (see the hashing sketch after the transcript)
       – Smart L2 switches (TRILL, multi-chassis LAG etc.) – vendor lock-in?
       – L3 switches with hypervisor-based overlay à la Nicira NVP
  5. Never underestimate the power of Xeon
  6. chur.snabb.co
  7. Performance results
  8. Data Center/Backbone Interface
     • Traditionally, you have an access router at each site.
     • At >>10 Gb/s, this gets expensive.
     • Can we leverage the many cheap 10GEs we have on our Intel servers? (see the prefix-filter sketch after the transcript)
       – Basic (BGP) routing/filtering functionality needed
       – Could peer directly with backbone routers in neighboring PoPs
  9. Virtual Private Cloud (VPC)
     • Offer customer institutions (universities) VMs with IP addresses from the university’s range
     • Somehow bridge/tunnel these VMs’ interfaces into the university’s campus network… so that they appear on the “right” side of the firewall
       – What are suitable mechanisms/interfaces on the campus side?
     • Also, allow customers to build their own private networks within our cloud, i.e. between cloud-hosted VMs (see the Neutron sketch after the transcript)
       – This is now standard functionality in OpenStack/Neutron
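
The fabric question on slide 4, how servers and switches can exploit the parallel paths of a leaf/spine design, comes down to spreading flows across equal-cost uplinks while keeping each flow on a single path so packets stay in order. The sketch below is not from the talk; it is a minimal Python illustration of ECMP-style 5-tuple hashing, with made-up addresses and an assumed uplink count of four.

```python
# Illustrative sketch (not from the talk): ECMP-style load spreading that
# keeps per-flow packet order while using all leaf-to-spine uplinks.
# Addresses and the uplink count are made up for illustration.
import hashlib
from typing import Tuple

Flow = Tuple[str, str, int, int, int]  # src IP, dst IP, src port, dst port, protocol

def pick_uplink(flow: Flow, num_uplinks: int) -> int:
    """Hash the 5-tuple so every packet of a flow uses the same uplink."""
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

if __name__ == "__main__":
    flows = [
        ("10.0.0.1", "10.0.1.1", 40000, 443, 6),
        ("10.0.0.1", "10.0.1.2", 40001, 443, 6),
        ("10.0.0.2", "10.0.1.1", 40002, 80, 6),
    ]
    for f in flows:
        print(f, "-> uplink", pick_uplink(f, num_uplinks=4))
```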
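Slide 8 notes that a server acting as the data-center/backbone interface needs only basic BGP routing and filtering. The following sketch is not from the talk; it shows, in plain Python, the kind of import filter such a setup would apply to prefixes received from a backbone peer. The allowed ranges are documentation prefixes rather than SWITCH's, and the maximum prefix lengths are arbitrary assumptions.

```python
# Hedged sketch (not from the talk): a simple prefix import filter of the
# kind a server-based access router would apply to routes from a peer.
# The allowed ranges are documentation prefixes, not SWITCH address space.
from ipaddress import ip_network

ALLOWED = [ip_network("192.0.2.0/24"), ip_network("2001:db8::/32")]
MAX_PREFIX_LEN = {4: 24, 6: 48}  # reject overly specific routes

def accept_prefix(prefix: str) -> bool:
    """Return True if a received prefix passes the import filter."""
    net = ip_network(prefix, strict=True)
    if net.prefixlen > MAX_PREFIX_LEN[net.version]:
        return False
    return any(net.subnet_of(allowed)
               for allowed in ALLOWED if allowed.version == net.version)

if __name__ == "__main__":
    for p in ("192.0.2.128/25", "192.0.2.0/24", "198.51.100.0/24"):
        print(p, "->", "accept" if accept_prefix(p) else "reject")
```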
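Slide 9 points out that tenant-private networks between cloud-hosted VMs are standard OpenStack/Neutron functionality. As a hedged illustration, the sketch below uses the openstacksdk Python client to create such a network; the cloud name "mycloud", the resource names, and the CIDR are placeholders, and connecting the router onward to a campus network would additionally require a provider network or tunnel that is not shown here.

```python
# Hedged sketch (not from the talk): creating a tenant-private network in
# OpenStack Neutron via the openstacksdk client. "mycloud", the resource
# names and the CIDR are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Private L2 network for the tenant's VMs inside the cloud.
net = conn.network.create_network(name="vpc-demo-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="vpc-demo-subnet",
    ip_version=4,
    cidr="192.168.100.0/24",
)

# A router interface then attaches the subnet; reaching the campus network
# would need an external/provider network or a tunnel on top of this.
router = conn.network.create_router(name="vpc-demo-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```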
