The document outlines a company's plan to transition its infrastructure to a cloud computing model over the next few years. The transition is expected to lower costs, increase efficiency and flexibility, and allow the company to focus on its core business. The plan involves defining requirements, making strategic investments, increasing bandwidth, moving to a rental model, and leveraging cloud computing services from providers. Together these steps should reduce costs and risks while improving scalability compared to the company's existing on-premise infrastructure.
As an organization, it is time to define potentially new strategies in light of new opportunities. We find ourselves in a constantly changing media world with constantly changing consumer behavior. Our offerings need to change with the changing needs of our clients. At the same time, the cause of those changes, computing and communications technology, is morphing at a breathtaking rate. The result is a dramatic change in behavior. [Image credit: http://www.baekdal.com]
The explosion of wireless devices is truly changing the media consumption landscape. Ten years ago there were around 600-700 million wireless subscribers worldwide. This year the world will cross the 5 billion mark, if it hasn't already, and will move toward 10 billion devices in 2-4 years. At the same time, user interfaces are changing due to multi-touch gesture screens on handheld, tablet and table-surface systems, and to active wireless motion-control devices for gaming, augmented reality and collaborative design systems.
So Why The Face!? So what is Cloud Computing and what is in it for us? Wikipedia defines Cloud Computing as "Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid." In essence, instead of local area network server resources being used to satisfy local client systems' needs, the server resources are housed in facilities elsewhere, often by other organizations, and connected to via the Internet. [Image credit: Wikipedia Commons http://en.wikipedia.org/wiki/File:Cloud_computing.svg ]
We will do this by the process of:

- Define the problems and opportunities based on an evaluation of our current state vs. what we would build if we started from scratch
- Design a new desired state by evaluating what is available to produce business value
- Deploy changes based on planned obsolescence in a 1-2-3-4 year refresh schedule

Specifically:
IT infrastructure items have a useful lifetime. This is often different than tax depreciation schedules. To ensure the best possible return, and that the tools are working up to expectations, a periodic Tek refresh is necessary. The plan allocates different software and hardware into groups with a 1, 2, 3 or 4 year expected useful life. The Total Cost of Ownership (TCO) can be allocated across those time frames regardless of whether an item is bought, leased or rented. A replacement value for a similar or new item can also be used for everything from insurance, to disaster recovery, to cost analysis. By forcing a Tek refresh at the end of useful life, adoption of new capabilities is easier to accomplish.
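The bucket-and-allocate idea above can be sketched in a few lines. This is a minimal illustration, not our actual asset ledger: the item names and dollar figures are hypothetical placeholders.

```python
# Sketch: allocating Total Cost of Ownership (TCO) across useful-life buckets.
# Item names and dollar amounts are hypothetical placeholders.

refresh_buckets = {
    # years of useful life -> [(item, total TCO over that life)]
    1: [("smartphones", 6000)],
    2: [("laptops", 24000)],
    3: [("servers", 36000), ("SAN", 45000)],
    4: [("switches/routers", 9600)],
}

for years, items in sorted(refresh_buckets.items()):
    for name, tco in items:
        monthly = tco / (years * 12)  # spread TCO evenly over the useful life
        print(f"{name}: {years}-year life, ${monthly:,.2f}/month")
```

The monthly figure is the same whether the item is bought, leased or rented, which is what makes the buckets useful for insurance, disaster recovery and cost-analysis comparisons.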
Internet broadband speed has generally followed the same Moore’s law price-performance curve as other digital technology. Every three years we can expect offerings of twice the speed at half the cost as before. Increased bandwidth with multiple redundant vendors provides the superhighway for more global trade – in this case compute outsourcing options.
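The "twice the speed at half the cost every three years" rule of thumb can be projected forward with simple doubling arithmetic. A minimal sketch, with hypothetical starting figures:

```python
# Sketch: projecting the broadband price-performance rule of thumb.
# Starting speed and cost are hypothetical examples.

def project(speed_mbps, monthly_cost, years):
    """Double speed and halve cost once per completed 3-year cycle."""
    cycles = years // 3
    return speed_mbps * 2 ** cycles, monthly_cost / 2 ** cycles

speed, cost = project(3.0, 400.0, 6)  # two cycles out
print(f"In 6 years: {speed:.0f} Mbps at ${cost:.0f}/month")
```

At two cycles out, a 3 Mbps circuit at $400/month projects to 12 Mbps at $100/month, which is why waiting a planning cycle can change which outsourcing options are viable.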
Moving from capital investment dollars to annual or monthly rental cash models provides the ability to scale Tek resources up and down more quickly. This allows costs to be easily allocated over units of people, locations, production capacity, demand and revenue. Costs can also be reallocated more quickly, new resources added more quickly, and changes deployed to support new innovations more quickly. Embedded or sunk costs are less of a concern in decision making going forward.
Some items are more than 4 years old and still have useful life. Items that can be easily and cost-effectively upgraded should be, and the old equipment scrapped or auctioned off. Those that age gracefully and are not obsolete should be granted extended life. Some equipment may not have all of the outsourcing options currently available, or may require more bandwidth than we have. Knowing that in a few years those options will be available, nursing some current equipment along for a year or two more may make sense.
Over 80% of large organizations have one or more enterprise cloud services. Over a third of all internal application servers are virtualized, with another third soon to follow. Virtualization of applications and servers makes it much easier to physically move them from one location to another, and from one provider to another. Most of our internal servers are virtualized and we are increasingly provisioning virtual servers from cloud providers. Over 60% of IT organizations are looking at and/or deploying applications on IaaS and PaaS services. As we build new software for our clients we will increasingly base it on PaaS platforms like Force.com, Amazon AWS, Google App Engine and Microsoft Azure. The software we procure will likewise increasingly leverage these dynamic and agile infrastructures. Virtualized servers often exceed 80% utilization compared to 15% to 20% for most non-virtualized servers. That is a four-fold increase in cost efficiency. World-class data center providers like Google operate their resources at 1/2 to 1/5 of industry norms. Is it any wonder that cloud computing, public and/or private, will dominate, and soon?
The main reason cloud computing architecture is becoming popular and will soon dominate is that you can do more for less. Cloud vendors excel at scale. Large data centers that house computing equipment can use economies of scale to create shared environments for equipment at low cost. Computing infrastructure needs uninterruptible power and cooling, and data centers require security and 24/7 monitoring and staffing to deliver the best-practices data integrity required by many organizations.

One measure of efficiency used in the data center industry is simply the ratio of computing cycles used for useful work vs. the energy entering the entire data center. World-class performers like Google can deliver 2-10 times the efficiency of average large-scale operators. This gives them a substantial cost advantage over most.

One of the factors driving innovation in efficiency is server virtualization. In traditional environments, CPU utilization is often in the 10-20% range and the system is otherwise idle. Even when idle, most systems still consume 60-70% of the electricity they use when fully loaded. This means that at least 70-80% of server capacity goes unused while still wasting energy. Virtualization software such as VMware lets you share those wasted cycles across multiple operating systems running on the same hardware. By consolidating servers onto shared hardware, data centers can boost utilization up toward 80-90%. VM software can also manage servers across multiple pieces of hardware and automatically move running software dynamically in response to changes in load. Cloud computing vendors can deliver full servers in a much more cost-effective way with such scaling.

Sharing computing resources remotely, rather than storing software and data on a local server or PCs, can slash IT costs by 25% or more. As competition heats up we are seeing cloud computing vendor costs going only one way... down. Cost is but one factor.
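The consolidation arithmetic behind those utilization figures can be made concrete with a back-of-envelope sketch. The host counts and utilization numbers below are illustrative examples in the ranges cited above, not measurements:

```python
import math

# Sketch: consolidation math from typical utilization figures.
# All numbers are illustrative, not measured.

hosts_before = 10    # lightly loaded physical servers
util_before = 0.15   # ~15% average CPU utilization
target_util = 0.80   # achievable on shared, virtualized hardware

# Total useful work stays constant; pack it onto fewer hosts.
work = hosts_before * util_before            # 1.5 "server-equivalents" of work
hosts_after = math.ceil(work / target_util)  # hosts needed at higher utilization

power_saved = 1 - hosts_after / hosts_before
print(f"Consolidate {hosts_before} hosts onto {hosts_after}; "
      f"roughly {power_saved:.0%} fewer machines to power and cool")
```

Ten hosts at 15% utilization carry the same workload as two hosts at 75-80%, which is where the "four-fold increase in cost efficiency" claim comes from, before even counting the power drawn by idle machines.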
As internal demands increase, we need to plan for, finance, acquire, install and manage new server equipment. When utilizing a cloud vendor, new servers and increased capability can be deployed with a click of a mouse. The ability to quickly scale, and the flexibility to change computing resources on demand, leads to new ways to offer services. By using cloud computing servers, facilities and applications, everything by definition is remote. Our office connectivity, home connectivity and wireless connectivity can all be used to do business regardless of location. The office is not us. Our people are us. This trend is helping organizations redefine what work means: work is not a place that you go but rather something you do. That gives us flexibility in planning and opportunities for new and more refined business models and future offerings for our clients.
Some organizations dismiss or downplay cloud computing as an option for three basic reasons: loss of control, concerns over security, and reliability. From my perspective, once you decide to give up total control (and responsibility) and partner with cloud-based suppliers, the other two are excuses that are easily rationalized.

The issue of reliability revolves around the availability of broadband to and from your facilities and the uptime of the supplier's infrastructure. During the early days of our country's electrification and the rollout of telephone networks, reliability was a large issue. But in our lifetime, the phone is on all of the time, and loss of electricity, while possible, is a rare occurrence. That kind of commonplace dependability is close to being a reality for broadband service providers. We should still plan on a secondary or dual set of copper, coax, fiber and wireless options to reach near 100% availability. The second part of the reliability equation is the uptime of service providers. This becomes an important evaluation criterion when selecting cloud vendors. Major providers such as Rackspace, Amazon, Google, Salesforce and Microsoft invest tremendous capital in redundancy, scalability and reliability. Because of their size, they can invest at a level that most organizations cannot or will not for themselves.

The issue of security is resolved the same way: vendors' reputations are based on secure and reliable operations. Any breach of service is a major black eye to their reputations and their ability to attract and retain customers. They not only implement, but in many cases determine, industry best practices. If you make an evaluation and business decision that a chosen cloud vendor delivers compute resources with the same or better security and reliability, then the last issue, loss of control, comes down to concerns over functionality, flexibility or just fear.
Fear of losing your job is real, but economies (businesses) grow by improving productivity and doing more with less.
Hardware and software systems can be organized into these four functional business areas. All organizations have these groups of systems but deliver them in differing ways. The infrastructure needed depends on the systems chosen for the organization. We have grown from one workstation per user to a significant set of servers and network devices to deliver functionality. Over the last several years there has been a deliberate trend to outsource these systems to colocation facilities and software-as-a-service providers. But whether we use individual software on each workstation, client-server software with network-server-based functionality, or remote software-as-a-service or platform-as-a-service functionality, we use software to support these four functions.
Communications are vital for almost every operational area. This is an area that is also most easily outsourced to cloud service providers. In fact most already are. With the transition to Google Apps, most workers are truly virtual, or can be.
By using the communication transport system, groups of clients, staff, and project participants can collaborate online in real time. This is perhaps the hottest area of innovation today. Keeping options open and costs down is key to deploying new things.
Many innovative offerings are cropping up as Software as a Service. Traditional vendors are recasting their systems to run as web-based, easily provisioned services. New entrants are gaining share primarily because of the very low cost of market entry. One of the big attractions of this model is shedding the need and responsibility for enhancement installations and regression testing. This area is the next big wave of cloud migration, toward more standard, yet configurable, software.
The area of control for different businesses can vary substantially. For a manufacturing operation, factory floor automation, supply chain management, distribution and warehousing are obvious areas. For professional service organizations it is about process, procedure and people management. These systems tend to be the most specialized, the most core, and generate the most concern about security and availability. They will be in the last wave of cloud migration. But whether they are destined to be migrated to public cloud platforms or kept internal on private clouds, the transition will happen, and soon.
As a recap…
How do we deliver the 4 C's today? Our tradition has been to internally manage a fairly sophisticated infrastructure for our communications, production, design and development tool sets. We became a Microsoft-dominated shop early on, since their networking infrastructure and development tool price point fit a small business such as ours. The infrastructure continued to grow until we began to see options for outsourcing some of it.

Our current infrastructure is getting old and has had diminished investment over the last several years. We have chosen to keep what we have running until economic conditions improve, and in anticipation of new and less expensive alternatives on the horizon. For instance, our Storage Area Network (SAN), which provides several terabytes of managed storage, just turned three years old. All of our development and staging servers are older than that, with some over six years old. Hardware such as this is considered to have a useful life of two to five years. In the past we have had a rotating schedule of retirements and upgrades. Due to limited investment recently, it is time to consider new equipment in conjunction with the new options that are available.

[Old network diagram]

In 2007 we began outsourcing much of our customer-facing and hosted systems to colocation vendor Rackspace. This allows us to rely on Rackspace's tier-one facility for critical system management and support. We also retired our Voice over IP (VoIP) phone system and transitioned to mobile phones for each employee and a hosted virtual PBX vendor. This allows us to manage only one phone per person, provides full mobile support for voice communications, and eliminates capital investment in system hardware. We continued this trend toward outsourcing infrastructure last year with the retirement of our old Microsoft Exchange 2003 server and transition to Google Apps for primary electronic communications.
This allowed us to reduce the cost of email from over $20 per month per user to around $5 per month per user. From the beginning, we have made significant investments in IT infrastructure and leading-edge software solutions. In many cases this was necessary to facilitate the application services we wanted to provide to our clients. Today there are many options to turn to service providers who concentrate on, and excel at, system management at levels and costs that we cannot begin to approach internally. Going forward, the overall strategy should be to continue to outsource systems and infrastructure to specialty providers and for us to concentrate on our core competencies.
What do we do today?

1-2-3-4 Planning

Moving forward we should identify cycle times and replacement and upgrade schedules for equipment in a more realistic manner. If we do not, we end up in a position like the present: too much of our equipment is too old and in need of wholesale replacement. The good news for our business model is that much of our cost structure is scalable with demand. By moving to more outsourcing, with the ability to pay per user rather than pay for fixed assets, the cost structure will scale up and down as we adjust our number one cost, people resources.

Each category of technical infrastructure has a useful lifetime. In the past we have assumed about a three-year life span for all equipment when calculating depreciation and upgrade cycles. In the current environment, this is too coarse an estimate. The recommendation going forward is to adopt 1-2, 2-3 and 3-4 year planning cycles, with software and hardware in buckets that are closer to their true useful life spans. For instance, the smartphone and feature phone service we use is leased over a 2-year contract agreement. This puts mobile phone devices in the 1-2 year planning bucket. Alternatively, some of our equipment, such as switches and routers, has a longer than 3-year useful life span and should be put into the 3-4 year bucket. We can look at all of our assets, from equipment to furniture and building, and identify the useful life and rental, lease or depreciation costs in terms of monthly cash flow. We can apportion these costs on an individual staff basis, showing cost per person and a useful-life category for each area.

Bandwidth Migration

Start migration to new bandwidth providers, keep the older router for another year, and install load balancing equipment to facilitate new additions. The more bandwidth we have, the more options we have. A load balancer allows us to add and change providers and share aggregate bandwidth.
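The per-person monthly cash-flow view described above can be sketched as a small allocation table. The categories, bucket assignments and dollar amounts here are hypothetical placeholders, not our actual figures:

```python
# Sketch: per-person monthly costs grouped by useful-life bucket.
# Categories, buckets and dollar amounts are hypothetical placeholders.

cost_items = [
    # (category,             useful-life bucket, monthly $ per person)
    ("mobile phone + plan",   "1-2 yr", 75.00),
    ("software & SaaS seats", "1-2 yr", 45.00),
    ("laptop/workstation",    "2-3 yr", 60.00),
    ("network gear share",    "3-4 yr", 15.00),
]

total = sum(monthly for _, _, monthly in cost_items)
for category, bucket, monthly in cost_items:
    print(f"{category:<24}{bucket:>8}  ${monthly:>6.2f}")
print(f"{'total per person':<24}{'':>8}  ${total:>6.2f}")
```

Because each line is already a monthly cash figure, the totals scale directly with headcount, which is exactly the pay-per-user behavior the plan is aiming for.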
We can substitute our T-1 pairs with 7 bonded T-1s for the same price, going from 3 Mbps to 12 Mbps service. We can also replace one of the T-1 pairs with cable bandwidth, which gives us very cost-effective download speeds but minimal upload speed. Cable broadband has the appeal of high download speed and low cost.

Fly the Friendly Clouds

Don't buy, rent, to get a linear cost structure and scalability. 'Nuff said.

Nurse Equipment

Even though some equipment is older and arguably at or near the end of its useful life span, we do not want to expend financial resources needlessly. We want to keep certain pieces of equipment around a little longer and accept the risk that some might fail unexpectedly, until the cloud catches up. The largest expense internally is, and will be, large-scale storage. We should plan on keeping our current SAN and disk-to-disk-to-tape solution online for another year. By then we are likely to see options at prices not available today. We should keep our eye on internal vs. external virtual server options. If we run across a good deal on additional blades for our Dell enclosure we should make a purchase, both for spare parts and for additional capacity.
Our current cost structure for Tek has us spending upwards of $xxx,xxx per year on production capacity, personal and shared hardware, software and services. In terms of annual revenue this ends up being less than 8%. Industries range considerably on IT spend, from 2-3% to more than 20%. Our organization has a business model that relies on technical leadership and investment, but at the same time we strive to do as much as possible with as little as possible. If our business model were based on return on capital investments in leading-edge technology all of the time, we would be spending twice what we do. But our business model thrives on leadership in cost containment, and we now have an opportunity to plan anew and exploit the new opportunities in cloud computing facilities.

Over the next two years we will see new and lower-cost options for all areas of infrastructure need. Broadband costs will continue to go down and options will continue to grow. A new investment in flexible link balancing hardware will allow us to easily mix and match ISP connections. After the two-year horizon we will likely see complete outsourcing options that will lower our costs and make location even more irrelevant.

We can look at personal Tek as a personal choice: we budget a monthly figure for phones, handhelds and/or laptops and allow individuals to manage their own equipment. With the near merging of personal and professional lives, this makes more and more sense. For those systems that are desktop systems, we should look at a station approach as opposed to a personal workstation approach. We already do this for the video workstation, and we can look at certain designer/developer needs the same way.

For all software and hardware expenditures we should slot each item into the 1-2-3-4 year time frames. A rolling quarter-by-quarter investment schedule will allow us to budget strategically and react to changes in the business model. This will be updated once per quarter going forward.