1. Government Cloud Computing Applications Roger Smith [email_address] http://www.modelbenders.com/cloud.html HPTi Technology Forum March 19, 2010, Reston, VA
6. Drivers for Cloud Computing Adoption
Scalability – Users have access to a large pool of resources that scales with user demand.
Elasticity – The environment transparently manages a user's resource utilization based on dynamically changing needs.
Virtualization – Each user has a single view of the available resources, independent of how they are arranged across physical devices.
Cost – The pay-per-usage model allows an organization to pay only for the resources it needs, with essentially no investment in the physical resources of the cloud and no infrastructure maintenance or upgrade costs.
Mobility – Users can access data and applications from around the globe.
Collaboration – Users are starting to see the cloud as a way to work simultaneously on common data and information.
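The elasticity driver above — capacity that grows and shrinks transparently with demand — can be sketched as a simple threshold rule. The function name, thresholds, and scaling step below are hypothetical illustrations, not taken from any specific cloud API:

```python
def desired_instances(current, cpu_utilization, low=0.30, high=0.75):
    """Toy autoscaling rule: grow on high load, shrink on low load.

    Thresholds and the one-instance step are illustrative only; real
    cloud platforms apply similar rules on the user's behalf so that
    capacity tracks dynamically changing needs.
    """
    if cpu_utilization > high:                  # demand spike: scale out
        return current + 1
    if cpu_utilization < low and current > 1:   # idle capacity: scale in
        return current - 1
    return current                              # within band: hold steady

# A demand spike adds capacity; a quiet period releases it.
print(desired_instances(2, 0.90))  # 3
print(desired_instances(3, 0.10))  # 2
```

The pay-per-usage cost driver follows directly: when the instance count falls, so does the bill.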
7. Barriers to Cloud Computing Adoption
Security – The key concern is data privacy: users do not control, or even know, where their data is being stored.
Interoperability – A universal set of standards and/or interfaces has not yet been defined, resulting in a significant risk of vendor lock-in.
Control – The amount of control the user has over the cloud environment varies greatly between vendors.
Performance – All access to the cloud is via the internet, introducing latency into every communication between the user and the environment.
Reliability – Many existing cloud infrastructures are built on commodity hardware that is known to fail unexpectedly.
17. Army G2: Military Cloud
Build systems without unnecessary barriers between customers, applications, and data (e.g., location, hardware, O/S, networks). This does not solve issues with data formats, incompatible APIs, or classification.
Note: This slide is intentionally vague because of the applications and users involved.
31. Government Cloud Computing Framework (diagram)
Software as a Service (SaaS) / Applications: Citizen Engagement (Wikis / Blogs, Social Networking, Agency Website Hosting); Gov Productivity (Email / IM, Virtual Desktop, Office Automation, Business Svcs Apps); Gov Enterprise Apps (Core Mission Apps, Legacy Apps (Mainframes)); Analytic Tools; Data Mgmt; Reporting; Knowledge Mgmt
Platform as a Service (PaaS): Database, Testing Tools, Developer Tools, DBMS, Directory Services
Infrastructure as a Service (IaaS): Storage, Virtual Machines, Web Servers, Server Hosting; Network (Routers / Firewalls, LAN/WAN, Internet Access); Data Center Facilities (Hosting Centers)
Cloud Service Delivery Capabilities: User/Admin Portal; Reporting & Analytics; Service Mgmt & Provisioning (Service Provisioning, SLA Mgmt, Performance Monitoring, DR / Backup, Operations Mgmt); Security & Data Privacy (Data/Network Security, Data Privacy, Certification & Compliance, Authentication & Authorization, Auditing & Accounting)
Cloud User Tools: Application Integration (APIs, Workflow Engine, EAI, Mobile Device Integration, Data Migration Tools, ETL); User Profile Mgmt; Trouble Mgmt; Product Catalog; Order Mgmt; Billing / Invoice Tracking; Customer / Account Mgmt
32. GSA Deployment Model
PRIVATE CLOUD – Operated solely for an organization.
COMMUNITY CLOUD – Shared by several organizations and supports a specific community that has shared concerns.
PUBLIC CLOUD – Made available to the general public or a large industry group; owned by an organization selling cloud services.
HYBRID CLOUD – A composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
33. Cloud Definition Framework
Essential Characteristics: On-Demand Self-Service, Broad Network Access, Resource Pooling, Rapid Elasticity, Measured Service
Service Models: Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS)
Deployment Models: Private Cloud, Community Cloud, Public Cloud, Hybrid Clouds
Common Characteristics: Massive Scale, Homogeneity, Virtualization, Resilient Computing, Geographic Distribution, Low Cost Software, Service Orientation, Advanced Security
38. DISA Vision of Services
SIPRNet; Enterprise; Tactical; Content Delivery Network; Fixed Geo-redundant Data Centers; Deployable Data Center; Device Clients; Plug-n-Fight
Do for computing what IP did for networks: cloud = default background resource.
47. OneSAF vs. World of Warcraft
World of Warcraft – Visual Detail: 100X; Algorithm Detail: 1X; Heavy Client Demand
OneSAF – Visual Detail: 1X; Algorithm Detail: 100X; Heavy Server Demand
Editor's notes
Presentation (Full Color) Author, Date 03/21/10
Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure, accessible from various client devices through a thin-client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .NET). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but has control over the deployed applications and possibly the application hosting environment configuration.

Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, including operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).
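The key distinction across the three definitions above is where the management boundary sits. A small sketch encoding who controls what (layer names are paraphrased from the definitions; the dictionary itself is an illustrative construct, not part of the NIST model):

```python
# Layers the *consumer* manages under each service model, paraphrased
# from the definitions above. Everything below these layers -- the
# underlying cloud infrastructure -- stays with the provider in all three.
CONSUMER_CONTROLS = {
    "SaaS": ["limited user-specific application settings (possibly)"],
    "PaaS": ["deployed applications",
             "hosting environment configuration (possibly)"],
    "IaaS": ["operating systems", "storage", "deployed applications",
             "select networking components, e.g. firewalls (possibly)"],
}

# Moving down the stack from SaaS to IaaS, the consumer's span of
# control widens while the provider's shrinks.
for model, layers in CONSUMER_CONTROLS.items():
    print(f"{model}: consumer controls {', '.join(layers)}")
```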
Amazon EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use, and it provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios. Instances are based on an Amazon Machine Image, or AMI (like a VM), that gets loaded into the environment; multiple OSs and configurations are available. http://aws.amazon.com/ec2/

Amazon S3: Amazon S3 is storage for the internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers. Sample uses: back up files, host static website content, securely share files with external business partners, or store scientific, financial, or website data for processing via Amazon EC2. http://aws.amazon.com/s3/

IBM Computing on Demand (CoD): The leading cloud computing enterprise solution. Provides flexible computing power by the hour, week, or year; global access to CoD centers; and security you can depend on. By off-loading transactions to CoD, you can scale your infrastructure without further capital investment, helping to reduce costs and improve your competitive advantage. Includes processing power and storage, with multiple configurations and options. Customers purchase an annual base membership to the Computing on Demand center, which includes a "home" management node in the IBM CoD center and a software VPN connection (SSL, "Meet-Me", and hardware upgrades available). http://www-03.ibm.com/systems/deepcomputing/cod/index.html

Microsoft Live Mesh: With Live Mesh, you can synchronize files with all of your devices, so you always have the latest versions handy. Access your files from any device or from the web, easily share them with others, and get notified whenever someone changes a file. Seems to be marketed solely to individuals: there is a 5 GB cap on the available resources with no apparent way of extending it, even at a cost, which goes against the scalability claim of most cloud computing environments. http://www.mesh.com/

Microsoft Azure Services Platform: The Azure™ Services Platform is an internet-scale cloud computing and services platform hosted in Microsoft data centers. The Azure Services Platform provides a range of functionality to build applications that span from consumer web to enterprise scenarios, and it includes a cloud operating system and a set of developer services. Fully interoperable through the support of industry standards and web protocols such as REST and SOAP, the Azure services can be used individually or together, either to build new applications or to extend existing ones. http://www.microsoft.com/azure/
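The "pay only for capacity you actually use" economics noted for EC2 above can be made concrete with a toy comparison. The rates and prices below are hypothetical round numbers for illustration, not actual Amazon or vendor pricing:

```python
def on_demand_cost(hours_used, hourly_rate):
    """Pay-per-use: cost scales directly with actual usage."""
    return hours_used * hourly_rate

def owned_server_cost(purchase_price, months, monthly_upkeep):
    """Capital purchase: cost is fixed regardless of utilization."""
    return purchase_price + months * monthly_upkeep

# Hypothetical burst workload: 200 compute-hours in a month.
cloud = on_demand_cost(hours_used=200, hourly_rate=0.10)      # 20.0
owned = owned_server_cost(2000, months=1, monthly_upkeep=50)  # 2050
print(cloud, owned)
```

For bursty or uncertain demand the pay-per-use curve wins; the break-even point shifts back toward owned hardware only as utilization approaches 24/7.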
Google App Engine: App Engine offers a complete development stack that uses familiar technologies (currently Java and Python) to build and host web applications. Starting out will always be free, and if you need additional computing resources, they're available at competitive market pricing. With App Engine you write your application code, test it on your local machine, and upload it to Google with a simple click of a button or command-line script. Once your application is uploaded, Google hosts and scales it for you. You no longer need to worry about system administration, bringing up new instances of your application, sharding your database, or buying machines. Google takes care of all the maintenance so you can focus on features for your users. http://code.google.com/appengine/

Yahoo! Open Strategy (Y!OS): At the heart of Y!OS is a set of complementary platforms that allow developers to rapidly access Yahoo! network data and develop applications, with access controlled using an open authentication standard. Yahoo! lets developers leverage its traffic and user base on top of simply its hardware/software infrastructure. Provides a set of languages, platforms, and APIs to maximize developers' ability to tap into the resources provided by Yahoo!. Focuses more on the value added in terms of application visibility and existing functionality than on the scalability of the infrastructure. Tailored to facilitate social-aware applications. Uses a large number of open standards, including OAuth and OpenSocial, and allows developers to use a wide range of languages and technologies. Appears to cater to individuals due to free use, open nature, and limited-to-no support. http://developer.yahoo.com/yos/intro/

Force.com: Force.com (from salesforce.com) is cloud computing for the enterprise. It's billed as the fastest and easiest platform to build, buy, and run business applications. It's delivered over the internet, so you can say goodbye to complicated servers and software that eat up your valuable IT resources; what used to take months or years can now be done in days or weeks at a fraction of the cost. Focuses on enterprise users. Attempts to let users develop applications with "no programming" through point-and-click wizards, but the platform only supports applications written in Apex. Does not present scalability as its primary source of value: it is used for creating applications for the developing organization's own use, not global web applications. Advocates a "multitenant architecture" that provides all users with the same level of service, and addresses the need for users to integrate with existing applications and, more importantly, other clouds. http://www.salesforce.com/platform/

Zoho: Zoho provides a significant set of shrink-wrapped applications that are ready to run on its platform out of the box. On top of this, it provides a set of APIs for integrating or interacting with these Zoho applications in a variety of ways; part of the pricing model includes charging users to access more advanced APIs. A marketplace is also provided for users to leverage applications developed by other users. Focuses on users extending the set of standard applications rather than building applications from scratch, and specifies not only the platform for development but also the platform for use. Adds value for both developers and users of the applications: developers have a captive audience in many senses, and users get a wide range of applications in a single location (i.e., one-stop shopping). http://www.zoho.com/

Akamai EdgePlatform: Places significant focus on performance issues and resource locality. Supports only J2EE web-based applications. Couples a large set of consulting services with its technologies to provide customers with end-to-end support for leveraging the environment, and emphasizes the large amount of feedback and analysis provided for the sites it hosts. http://www.akamai.com/html/technology/edgeplatform.html
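The PaaS model surveyed above (App Engine in particular) centers on uploading a small web handler and letting the provider run, administer, and scale it. Below is a minimal handler in that spirit; plain WSGI is used so the sketch is self-contained, rather than the actual App Engine webapp framework of the era, and the helper `run_once` is a hypothetical local smoke test, not a platform API:

```python
def application(environ, start_response):
    """Minimal WSGI app: the kind of small handler a PaaS hosts and
    scales on the developer's behalf."""
    body = b"Hello from the cloud\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def run_once(app, path="/"):
    """Invoke a WSGI app directly, without a server, and capture the
    status line and response body -- handy for local testing before
    uploading to a hosting platform."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    chunks = app({"PATH_INFO": path, "REQUEST_METHOD": "GET"}, start_response)
    return captured["status"], b"".join(chunks)

status, body = run_once(application)
print(status, body)  # 200 OK b'Hello from the cloud\n'
```

The developer's whole deliverable is the handler; provisioning, instance management, and load distribution stay on the provider's side of the SaaS/PaaS/IaaS boundary.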
Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting).
Cloud diagram idea inspired by Maria Spinola 8-31-09
Futures:
FDCE – Federated Development and Certification Environment
SQL – Structured Query Language (Microsoft database product)
Oracle – Database
SOE – Standard Operating Environment – proposed standards for application software
ESM – Enterprise System Management – standard system control software for remote operations and reporting
LAMP – Linux, Apache, MySQL, PHP (or Perl) – standard web design suite for Linux (Red Hat or SUSE)
A simulation is actually a service provided to a customer – it is an experience in learning. It is not a facility that houses hardware, software, and expert controllers. Those are all necessary, but they are not the simulation. With 21st-century technology it is possible to separate the simulation experience from the equipment that makes it happen. This can enable simulation to escape the bonds of unique facilities and travel over global networks to customers anywhere, at any time. We have the potential to deliver training events to people via the computers, networks, and infrastructure that already exist. We can break the 1-to-1 relationship between events and equipment and reach out to train the entire force from a very powerful and remote computer center.

One of the key components in reaching these soldiers is providing a user interface application that can reside on their own computers. Heavy-weight, highly specialized applications require highly specialized equipment to run them. We need to isolate the complexity on the server side, where the professionals reside, and present simplicity on the client side, where the soldiers reside.

Simulation has a history very similar to that of IT services. Both were originally very technology- and provider-centric: the customers' needs have been subordinate to the providers' needs and to a continual fascination with new capabilities. Just as IT is shifting its focus toward providing the services its customers need, the simulation community needs to shift focus away from ivory towers of technology and toward customer-accessible services.
Providing large volumes of executable simulation to customers around the world will significantly stress the capacity of traditional networks of computers. The high fidelity of military simulation (as compared to IT applications and multi-player games) calls for more power on the server side, and HPC is one way to provide the necessary computing power. Also, all future desktop computers will have multiple compute cores; Intel and AMD claim they will double the number of cores every 18 months, just as they used to double clock speed. Future simulation applications are going to have to be structured to operate on these parallel desktop machines, and HPCs are a great environment in which to learn those programming skills right now.

Wikipedia article: The term high-performance computing (HPC) refers to the use of (parallel) supercomputers and computer clusters, that is, computing systems composed of multiple (usually mass-produced) processors linked together in a single system with commercially available interconnects. This is in contrast to mainframe computers, which are generally monolithic in nature. While a high level of technical skill is undeniably needed to assemble and use such systems, they can be created from off-the-shelf components. Because of their flexibility, power, and relatively low cost, HPC systems increasingly dominate the world of supercomputing. Usually, computer systems in or above the teraflop region are counted as HPC computers. The term is most commonly associated with computing used for scientific research. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). Recently, HPC has come to be applied to business uses of cluster-based supercomputers, such as data warehouses, line-of-business (LOB) applications, and transaction processing.
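The multi-core point above — that future simulation code must be structured for parallel hardware — can be sketched as a simple data-parallel decomposition. The "entity update" workload here is a hypothetical stand-in for a real per-entity simulation step, not OneSAF code:

```python
import multiprocessing as mp

def update_entity(entity_id):
    """Hypothetical per-entity simulation step: independent work that
    can be farmed out across cores (or HPC nodes)."""
    return entity_id * entity_id  # stand-in for real physics/behavior

def simulation_step(entity_ids):
    """Run one step for all entities across the available cores.
    Pool() defaults to one worker process per CPU core."""
    with mp.Pool() as pool:
        return pool.map(update_entity, entity_ids)

if __name__ == "__main__":
    results = simulation_step(range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same decomposition scales from a multi-core desktop to an HPC cluster: the per-entity function stays the same and only the pool of workers grows.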