NetApp presentation at the Executive Breakfast event held on 16 March 2012 at the Sheraton Lisboa.
The event focused on the merits of NetApp and VMware solutions for supporting unavoidable trends in IT Security and Management: desktop and application virtualisation, disaster recovery and business continuity plans built on virtualisation and storage with intelligent replication, and the use of technically superior solutions as a route to cost reduction.
Let's look at some of the major trends in the market currently…

1) Massive unstructured data growth. This isn't new, we've seen it for many years now, but its impact is bigger than ever before. According to Gartner, most companies are seeing data growth of 50-100%, yet with storage utilisation of less than 40%. We talk about PBs not MBs, billions not millions.

2) Then there's the green agenda. Although this has dropped down the list of priorities in the private sector, it remains a priority in the public sector. According to a report released by Ovum after the recent G20 summit in London: 'In the midst of the world financial crisis, green IT still remains a top priority on the IT agenda of governments and is growing in importance for the public sector.'

3) Then there's the continuing mass adoption of server virtualisation, which itself drives a requirement for shared/virtualised storage. According to Gartner at their recent Symposium, server virtualisation is expected to increase to 50 percent by the end of 2012, and use of VMs will grow most quickly among small and midsize businesses.

4) There is then the clear drive toward cloud, with companies looking to procure IT as a service. There continues to be huge growth among the major players in this sector: Googlemail, Amazon EC2, Microsoft Azure, etc. According to Gartner: 'The five-year growth outlook remains strong, with a five-year annual growth rate of 26% – over six times the rate of traditional IT offerings.' However, there are still a number of challenges here around security and legislation: what location does the data actually reside in when it's in a cloud provider's infrastructure? Is it secure?

5) And then flash in servers and storage. We have been using SSD for read acceleration in our Flash Cache technology for a while now, and other vendors are beginning to follow suit.
Combine this with the commoditisation that is beginning as SSD becomes more commonplace in laptop and desktop systems, and the price curve drops dramatically, which enables further adoption. What becomes important, though, is how you use it: flash has some interesting characteristics, performance for example, but also some drawbacks regarding enterprise levels of reliability.

Next slide: Leading to new storage requirements (7 text items)
Enormous opportunity for optimisation. The external benchmark is the cloud, which is starting to become a viable alternative. So the choice is to transform and optimise, or embrace the aaS model.

The data explosion and budget pressures due to the economy are putting pressure on CIOs to come up with new answers. Given that customers are still purchasing legacy architectures and continuing old ways of doing things, they find themselves repeatedly investing in architectures with an enormous amount of waste – meaning they have 10x the storage they actually need.

That's kind of a big claim, so let me quickly explain my thinking. The current state of most systems is that they are:
- Running at 30% utilisation
- Using minimal (if any) data reduction
- Storing roughly 20 copies of their data

These systems could quite reasonably be:
- Running at 70-75% utilisation (a 2x increase)
- Deploying data reduction technologies like dedupe (a 2x increase)
- Using cloning to take those 20 copies down to 3-4 (a 4-5x increase)

Multiply those together and you've got about 10x slack in the system.

The result of this is that spending remains very high, and there is never enough money to serve the strategic needs of the business. (Which, for most CIOs, is the primary metric their performance is measured by!)

Next slide: and Data Centres are evolving
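The multiplication behind the slack claim can be sketched in a few lines. The factors are the slide's own; note that multiplied out they actually exceed the quoted figure, which the talk rounds down conservatively to "about 10x":

```python
# Back-of-the-envelope sketch of the "10x slack" claim, using the
# slide's own factors. These are illustrative planning numbers, not
# measurements from any specific environment.

utilisation_gain = 2.0   # 30% utilisation raised to ~70-75%
dedupe_gain = 2.0        # data reduction (dedupe) roughly halves stored data
clone_gain = 4.5         # 20 full copies reduced to 3-4 via cloning (4-5x)

combined = utilisation_gain * dedupe_gain * clone_gain
print(combined)  # 18.0 -> the talk quotes this conservatively as ~10x
```

Even taking the low end of each range, the compounding of independent gains is what makes the headline number plausible.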
It's very clear there's a move towards a shared infrastructure. This could be the start of a multi-decade change (the movement to cloud): an optimisation opportunity, a new business model becoming relevant, a move to virtualisation. It happened in telecommunications and manufacturing. Whether internal or external, our objectives and requirements are the same.

Initial – IT architectures will co-exist in this current era: siloed environments working alongside new virtualised environments. Legacy applications, organisational change and uptime concerns may keep some customers from being too aggressive with cloud adoption. However, some companies' survival requires massive transformation, and they will be the pioneers in adopting new ways to run their data centres.

Click transition – Our goal is to be the platform of choice for both, allowing the customer to manage their infrastructure in line with the concerns of the business.

Next slide: Our Storage Strategy
A recipe book for how to transform IT infrastructure. Customers are at different stages of this, but they need to do all of it to get the results. We must evolve our environments to include advancements in technology and changes in how we process and store information, and adapt to possible conditions in the future.

Centralise – We can no longer manage our systems separately; we have to work closely in a common technical language.
Rationalise – We are finding out how to support the application demands of the business while still innovating.
Standardise – We need to simplify our IT landscape; as the number of devices on a network increases, we need a consistent approach to them.
Consolidate – Combining resources to increase efficiency is the beginning; cloud and ITaaS is the continuation of this trend.
Virtualise – Allow further mobility and resiliency of data and compute infrastructure by decoupling it from physical devices.
Optimise – No longer will you need to compromise performance for efficiency; technology innovation allows for both.
Outsource – Focus on your core business responsibilities; allow some infrastructure management to be handled externally so you can spend time improving your SLAs.

Next slide: The right storage enables these (in the middle)
The first two are business or architectural decisions.

The right storage enables these – yes: by taking advantage of a single unified platform designed with these objectives in mind, you can achieve all of these things much more easily.

Next slide: Highlighting the Unified Architecture
Before, storage admins wouldn’t talk to
Standardise – Fewer architectures make us more flexible. Environments are changing, and standardising architectures helps us become more capable of adapting to change.
You see, with our unified architecture, it's a single operating system that gives you control over your storage, regardless of how you access it in your data centre. You have many applications in your data centre, but in the past you have often had to use different types of storage to satisfy the requirements of each application. With Data ONTAP, you can run the same operating system across a variety of storage controllers, taking advantage of a feature set that gives you the flexibility to virtualise storage on other vendors' disk arrays, accelerate data sets with SSDs, and operate it all from a single pane of glass. This simplified management allows you to focus on performance, growth, and efficiency.

Next slide: One Architecture for many Workloads
Why are we winning in the virtualisation space? Because we have built the power of abstraction into how we deliver storage. We have technologies that allow us to clone, dedupe, etc. with full VM awareness. We have developed many features that not only enable speed and efficiency, but also significantly lower the cost of test-and-dev, tightly integrate backups, and support non-stop operation.

Next slide: FlexPod and SMT
Purpose of the slide: Introduce Secure Multi-Tenancy and Advanced Secure Multi-Tenancy
Key points:
- vShield Zones 2.0
- Cisco SAFE architecture

Next slide: Highlight Optimise
As companies look to automate more and more tasks and standardise the management tools across the infrastructure, it is critical for us to expose the full suite of NetApp capabilities through APIs to enable this integration.

Click 1: Highlighting the in-house management tools (use the APIs to create an orchestration layer of your own)
Click 2: Highlighting the virtualisation management products (integrate into vCenter for a single-pane infrastructure)
Click 3: Highlighting the IT service management platforms (orchestrate the data centre with BMC or CA offerings)

We have many customers that have created their own web portals for the automation of end-to-end application provisioning. BT, as an example, has a web front end where customers can request the virtual data centre they require; this automatically provisions virtual servers, applications and storage inside BT's shared infrastructure.

Most companies are now using VMware and want to be able to perform most day-to-day management activities through the vCenter management console. Using our VSC plugin, we enable the administrator to perform storage tasks all within the same console: virtual machines can be provisioned, cloned, and managed, with backup, restore and snapshotting all initiated from a single environment. For larger, more complex environments, organisations are using advanced orchestration tools such as the offerings from BMC or CA, and we have built tight integration into these tools as well.

Next slide: Orchestration example with an application customer requiring a certain level of service
We need to consolidate to save costs. When consolidating, two things dramatically change:
1 – Scale goes up (so the architecture needs to be able to handle scale)
2 – Risk goes up (so the architecture needs to be able to handle data protection and resiliency)

Consolidating your data centres doesn't necessarily mean that you lose functionality or have to compromise on security features. Let me show you the three dimensions of storage scaling.

Next slide: Scaling: Required on Three Dimensions
Most of our thinking is driven by a workload "view".
Top right – tier 1.
Bottom right – cheap and deep (archiving, repositories, etc.), with different performance requirements.
Top left – high-performance compute: high bandwidth and scalability.
Bottom left – smaller, and simplicity.

Saying that NetApp is midrange is completely bogus, as is saying that tier 1 needs a different architecture. The same architecture should be able to extend across any workload. That is unified architecture. Some examples:
- Tier 1 – we have the largest SAP instance in the world at Shell, and a 3PB SAN at the ministry of finance in Singapore.
- Citi and Verizon – company-wide virtualization.
- Engineering – e.g. ClearCase for Boeing.
All met by the same architecture.

FOR COUNTRY-SPECIFIC USE, PLEASE ADD YOUR OWN REFERENCES TO THIS SLIDE!

And while we are focused on the green zone of virtualised infrastructures, we are winning across the primary workloads with the compelling power of our technology. Are we tier 1? Absolutely. We are running the ERP for GSK, a Fortune 100 company. We are running the largest SAP instance in the world for Shell. We replaced a 3PB SAN fabric consisting of CLARiiONs and DMXs at the ministry of defense in Singapore. The virtualised infrastructure is the sweet spot and doesn't need more discussion. We are becoming the standard in large-scale VDI implementations, enabled by our Flash Cache technology. CDW is a tremendous success story in the MSE space: they went from nowhere to $65M in a year; when we tweaked our technology with the appropriate packaging, their sales skyrocketed.
Winning large-scale data repositories – a top online music and managed backup company running on Data ONTAP 8, and check imaging for most of the US financial institutions at Viewpointe. And continuing to win in engineering/tech apps – Boeing, CA, Airbus.

Next slide: One Architecture for Many Workloads – a focus on secondary storage
- Pearson – north-east quadrant of the Virtualised Infrastructure segment; a traditional tier 2 app.
- Verizon – can't tell if it is a tier 1 or tier 2 application.
- Yahoo and Freudenberg IT – DSS; due north quadrant of the Virtualised Infrastructure segment.
- Examworks – centre of tier 1. A small company, but they put every application on VMware on NetApp. Their business is providing medical professionals on demand for insurance, peer review, and other 'second opinion' type work. It is a fragmented industry, and they are buying up small services companies. Having all of the apps in VMware makes integrating the regional companies' IT (with some specific regional processing requirements) very easy.
- Shell on T-Systems – high-end (north-east) tier 1; the 5th-largest SAP system, a 40TB instance, in production since January.
- BT – centre of the Virtualised Infrastructure segment; built on the SMT architecture; internal plus external cloud.
- On MSE – need a data point about growth of 2000s YoY, especially in EMEA.
- INA – France; 1.2 usable PB in an archive app; south-east corner of the chart.
- Viewpointe – 5PB deal for a check-archiving service for multiple banks.
- Airbus – France; $8m deal for engineering apps; north-west part of the VI segment, but not HPC (apps were PTC and Dassault CATIA).
Voiceover at end of slide: Scale-out.
This is the foundation for the entire future of ONTAP (i.e. this one is big, so we had better do a good job of it).
From the user side, it provides a single virtualised pool of all storage.
From a system point of view, it provides unlimited performance and capacity scalability by adding controllers (performance) and storage (capacity).
Storage is accessed via an abstraction (e.g. by policy or SLA), and the cluster takes care of delivering the right storage, with all the complexity handled behind the scenes.

1 – Scaling for performance is a given. It starts at the bottom with an appropriately designed block store (WAFL), then moves up to supporting the latest, fastest media types (flash, flash as cache, SAS, etc.) and dealing with multiple faster cores. We have a fully integrated technology agenda to drive more performance – all the more important with consolidation.
2 – The sheer amount of storage we have to deal with. Consolidated data centres have more TBs, and it is no longer enough just to have large systems. So we need a single logical pool that can be provisioned across lots of arrays. This is the basis of our next-generation Data ONTAP 8.
3 – How do we make sure that the storage can operationally scale? Storage admins can no longer spend time on manual activities (provisioning, data protection, tuning, etc.). This is all about automation.

In the early days, the only way to upgrade was to scale up: get the bigger system, the better controller. With Ethernet networks and the emergence of the Internet, many environments scale out with more systems. But with a flexible platform built with these new workloads in mind, you can now scale for capacity, allowing applications to get the performance and quality of storage necessary to run.

Next slide: Scale out
And of course we’re pushing towards efficient use of storage and automating tasks for better storage management. Next slide: Leading Efficiency
Storage efficiency is not a technology, it's a way of thinking – a strategy. It starts with how to get more performance from technologies like SATA while still providing enterprise reliability; then how to pack more data on a disk, with technologies like dedupe and compression; and then intelligent cloning to help reduce the number of copies, etc. As we move into the future, we're building technologies to increase the scale of storage efficiency and add more use cases for these technologies.

All of these enable processes to move faster. Other vendors have some of the features but severely limit their use cases. Wins come down to making processes faster and more efficient. At Microsoft, 1800% storage utilisation is achieved on NetApp.

Then there's the idea of reducing the number of full-time admins needed to manage storage, by integrating with service automation tools from vendors like BMC, VMware vSphere, Microsoft data centre management, etc.

Next slide: Service Automation
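The 1800% figure reads as a logical-to-physical ratio: the logical data served can far exceed the physical capacity consumed once dedupe and cloning share blocks. A minimal sketch of how such a ratio arises – the dedupe ratio and clone count below are hypothetical round numbers for illustration, not Microsoft's actual figures:

```python
# Illustrative only: "utilisation" here means logical data served per
# unit of physical capacity consumed. The ratios are made up to show
# how block sharing compounds, not taken from any real deployment.

physical_tb = 100                 # raw capacity actually consumed
dedupe_ratio = 2.0                # each physical block backs ~2 logical blocks
clones_per_dataset = 9            # writable clones sharing the same blocks

logical_tb = physical_tb * dedupe_ratio * clones_per_dataset
utilisation_pct = 100 * logical_tb / physical_tb
print(utilisation_pct)  # 1800.0
```

The point of the sketch is that no single technology gets you there; it is the compounding of block-sharing techniques that pushes the ratio well past 100%.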
Goal of slide: IT infrastructure is evolving
Key points:
In a traditional infrastructure, servers, networking, and storage are dedicated to an application and its users. To roll out a new application, you must purchase and deploy new hardware and infrastructure. It can take months to get a new application up and running. And once the application is rolled out, it can be hard to share resources; it is almost impossible to reallocate stranded excess capacity and horsepower to different uses.

Server virtualization allows you to share a single server resource across multiple applications and clients. Most servers are underutilized, so if you can run multiple apps on the same server, you can reduce your server footprint, drive up utilization, and save a lot of money and manpower. By decoupling applications from the hardware, virtualization allows you to move applications from server to server for load balancing, move them from data center to data center for disaster recovery, and move them into and out of the cloud for increased capacity and flexibility while lowering cost. Server virtualization shortens the time it takes to deploy new applications.

However, without a storage infrastructure that provides the same level of flexibility and efficiency, the savings you achieve by virtualizing servers can be consumed by the additional complexity that virtualized servers create for the storage infrastructure. NetApp is a proven leader in storage virtualization. In fact, NetApp provides virtualization capabilities at every level of the storage hierarchy in all NetApp storage platforms. The NetApp architecture, its partnerships, and its dedication to customer success make NetApp a leader in shared storage infrastructures.

Transition: You can start anywhere, and feel confident you're laying a future-ready foundation.
Goal of slide: The NetApp systems portfolio is truly unified
Key points: Unified storage architecture is much more than support for multiple protocols on a single storage array. In most environments of scale, it is uncommon to run multiple protocols on the same box. The real benefits of unified storage are at an architecture level, not at a box level. The big question is how to achieve the lowest cost profile while meeting the SLAs for a particular workload or mix of workloads. For example:

Why buy more than you need? The ability to grow and scale from low-end to high-end systems on the same architecture means that you don't have to take a "rip-and-replace" approach to one of the most costly parts of your IT operations – the processes and skill sets required to deliver IT services to your users.

How can NetApp help you benefit from our IT efficiencies if you already have an investment in a different storage infrastructure? Our ability to virtualize existing SAN systems with V-Series enables you to achieve the benefits of standardization, data protection, and storage efficiency even if you are currently running EMC, HDS, or HP storage systems.

How can you achieve different cost-performance profiles in the same architecture? NetApp enables what some people refer to as "tierless storage" through the use of flash-assist technologies and caching techniques to achieve high performance with low-cost drives. A unified architecture means that you don't need to take a rip-and-replace approach when you need more I/O or, more likely, a mix of I/O and cost profiles for different applications and storage needs.

Standardization helps to drive cost reduction: if you have fewer architectures, you can be more efficient and more flexible. You can increase storage utilization by using a single architecture rather than a multi-array approach that requires you to break it up into smaller pieces.
The ability to handle multiple workloads and deploy multiple technology options across a single architecture provides you with the flexibility to deal with change. Whatever storage requirements you may have today, they will likely change again in the next 12 - 18 months.These are all varying aspects of unified architecture. If you can deliver a unified set of tools, a unified set of processes, the same way of doing disaster recovery and backup and provisioning and management and maintenance, then you begin to see massive benefits in terms of complexity reduction. Complexity reduction quickly translates into cost reduction.
NetApp offers a market-leading systems platform with true flexibility and range for our customers, whatever their size or individual business requirements.

The FAS 6200 Series offers three new models and extends NetApp's unified architecture, providing double the performance, increased enterprise-class availability, and more scalability and flexibility for the most demanding workloads. Customers can meet their mission-critical business requirements while responding to the rapid growth and pace of their business.
For technical press:
- Latest multi-core processors and PCIe-based hardware
- Future-ready scalability and flexibility
- Up to 2.9PB capacity, plus 2x more PCIe connectivity
- Up to 8TB of Flash Cache flash memory
- Built-in 10Gb Ethernet, 8Gb FC, and 6Gb SAS
- Enterprise-class availability

In addition to the high-end systems, we're also introducing a family of three new midrange storage systems. The FAS 3200 Series also comes in three new models and delivers the industry's most cost-effective platform for greater flexibility and efficiency, higher performance, and enterprise-class availability, supporting today's midrange storage requirements while adapting to future changes.
For technical press:
- The best value for mixed workloads
- Spans enterprise-class and MSE deployments
- Future-ready flexibility and scalability
- 50% more PCIe connectivity
- Up to 1.9PB of storage capacity
- Unified architecture + Data ONTAP® 8 = leading storage efficiency
- More software value included at no additional cost

[Transition to next slide]: All of these systems leverage one unified, scale-up architecture to provide optimum flexibility and efficiency for our customers.
Purpose of the slide: To Summarise the 8 Criteria for Shared IT infrastructure