OIT552 Cloud Computing
Course Material
Prepared by Kaviya P
Assistant Professor / Information Technology
Kamaraj College of Engineering & Technology (Autonomous)
15-11-2021
IBM Power Systems
Cloud computing is an umbrella term used to refer to Internet-based development and services.
Introduction to Cloud Computing
The Next Revolution in IT
The Big Switch in IT
• Classical Computing
– Buy & own hardware, system software, and applications, often sized to meet peak needs
– Install, configure, test, verify, evaluate
– Manage
– ...
– Finally, use it
– $$$$....$ (high CapEx)
• Cloud Computing
– Subscribe
– Use
– $: pay for what you use, based on QoS
WHAT IS CLOUD COMPUTING?
What do they say?
What is Cloud Computing?
• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the Service Provider
Cloud Definitions
• A model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services)
• Resources can be rapidly provisioned and released with minimal management effort or service provider interaction
• Promotes availability
• Provides a high-level abstraction of the computation and storage model
• It has essential characteristics, service models, and deployment models
Cloud Definitions
• Definition from Wikipedia
– Cloud computing is Internet-based computing, whereby
shared resources, software, and information are provided
to computers and other devices on demand like the
electricity grid.
– Cloud computing - A style of computing in which
dynamically scalable and often virtualized resources are
provided as a service over the Internet.
Cloud Definitions
• Definition from Whatis.com
– The name cloud computing was inspired by the cloud
symbol that's often used to represent the Internet in
flowcharts and diagrams.
– Cloud computing is a general term for anything that
involves delivering hosted services over the Internet.
Cloud Definitions
• Definition from Berkeley
– Cloud Computing refers to both the applications
delivered as services over the Internet and the
hardware and systems software in the datacenters
that provide those services.
Cloud Definitions
• Definition from Buyya
A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers.
They are dynamically provisioned and presented as one or more
unified computing resources based on service-level agreements
established through negotiation between the service provider and
consumers.
Cloud Applications
• Scientific / Technical Applications
• Business Applications
• Consumer / Social Applications
Cloud Computing
• Provides the facility to provision virtual hardware, runtime environments, and services to people as an on-demand service
• End users consume these facilities for as long as they need them
• Long term Vision of cloud computing
o IT services are traded as utilities on an open market
without technological and legal barriers
ROOTS OF CLOUD COMPUTING
• Hardware (virtualization, multi-core chips)
• Internet technologies (Web services, service-oriented
architectures, Web 2.0),
• Distributed computing (clusters, grids)
• Systems management (autonomic computing, data center automation)
From Mainframes to Clouds
• The switch in the IT world: from in-house generated computing power to utility-supplied computing resources delivered over the Internet as Web services.
• Computing delivered as a utility can be defined as the "on demand delivery of infrastructure, applications, and business processes in a security-rich, shared, scalable, and standards-based computer environment over the Internet for a fee".
10. 15-11-2021
3
In the 1970s,
• Common data processing tasks (e.g., payroll automation) were run on time-shared mainframes operated as utilities
• Mainframes had to operate at very high utilization rates, since they were very expensive
Disadvantages
• With the advent of fast and inexpensive microprocessors, workloads were isolated onto dedicated servers
• Incompatibilities between software stacks and operating systems
• The unavailability of efficient computer networks
SOA, Web Services, Web 2.0 and Mashups
• Web services
• glue together applications running on different messaging
product platforms
• enabling information from one application to be made
available to others
• enabling internal applications to be made available over the
Internet.
SOA, Web Services, Web 2.0 and Mashups
• Describe, compose, and orchestrate services, package and
transport messages between services, publish and discover
services, represent quality of service (QoS) parameters, and
ensure security in service access.
• Web services are created on top of HTTP and XML, providing a common mechanism for delivering services and making them ideal for implementing a service-oriented architecture (SOA).
• Purpose of a SOA
• to address requirements of loosely coupled, standards-based,
and protocol-independent distributed computing.
• Software resources are packaged as "services"
• They are well-defined, self contained modules that provide
standard business functionality
• They are independent of the state or context of other services.
• Service Mashups: information and services may be programmatically aggregated, acting as building blocks of complex compositions
Distributed Computing
• Use of distributed systems to solve computational problems
• The processors communicate with one another through communication lines such as high-speed buses or telephone lines
• Each processor has its own local memory
• Examples: ATM, Internet, Intranet / Workgroups
Properties of Distributed Computing
• Fault Tolerance
• When one or more nodes fail, the whole system still works, apart from some loss of performance
• Need to check the status of each node
• Resource Sharing
• Each user can share the computing power and storage resources in the system with other users
• Load Sharing
• Dispatching tasks to several nodes helps spread the load across the whole system
• Easy to expand
• Adding nodes to the system should take little time and effort
• Performance
• Parallel computing can be considered a subset of distributed computing
Why Distributed Computing?
• Nature of application
• Performance
• Computing intensive
• Tasks consume a lot of time on computation
• Ex: computation of the value of pi using Monte Carlo simulation
• Data intensive
• Tasks deal with a large amount of data or large files
• Ex: Facebook, experimental data processing
• Robustness
• No SPOF ( Single Point Of Failure)
• Other nodes can execute the same task executed on failed node
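The pi example above lends itself to a short sketch: a compute-intensive task split into independent subtasks that run on separate workers. This is a minimal Python illustration with invented function names; threads stand in for the remote nodes of a real distributed system.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(seed, samples):
    """Independent subtask: count random points inside the unit quarter-circle."""
    rng = random.Random(seed)  # per-task seed keeps the run reproducible
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total_samples, workers=4):
    """Split the sampling into independent subtasks, run them on separate
    workers, and combine the partial results."""
    chunk = total_samples // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_hits, seed, chunk) for seed in range(workers)]
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (chunk * workers)

print(round(estimate_pi(100_000), 2))
```

Because each subtask is independent, a failed worker's chunk could simply be resubmitted to another node, which is the robustness property noted above.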
• Grid
• Users (client applications) gain access to computing resources (processors, storage, data, applications) as needed, with little knowledge of where those resources are located or what the underlying technologies, hardware, and operating systems are
• "The Grid" links computing resources (PCs, workstations, servers, storage elements) together and provides the mechanism needed to access them
• Grid Computing
• is a computing infrastructure that provides dependable,
consistent, pervasive and inexpensive access to computational
capabilities
Grid Computing
• Grid Computing
• Share more than information
• Data, computing power, applications in dynamic
environment, multi-institutional, virtual organizations
• Effective use of resources at many institutes. People from
many institutions working to solve a common problem (
virtual organization)
• Join local communities
• Interactions with the underlying layers must be transparent and seamless to the users
• Open Grid Services Architecture (OGSA)
• defining a set of core capabilities and behaviors that address
key concerns in grid systems
• Globus Toolkit is a middleware that implements several standard
Grid services
• Grid brokers, which facilitate user interaction with multiple
middleware and implement policies to meet QoS needs.
• Types of Grid
• Computational Grid
• provide secure access to large pool of shared processing
power suitable for high throughput applications
• Data Grid
• provide an infrastructure to support data storage, data discovery, data handling, data publication, and data manipulation of large volumes of data stored in heterogeneous databases and file systems
• Disadvantages
• Ensuring QoS in grids is difficult
• availability of resources with diverse software configurations
• Eg: Disparate operating systems, libraries, compilers,
runtime environments but user applications would often run
only on specially customized environments
Cluster Computing
• Cluster
• A type of parallel or distributed computer system that consists of a collection of interconnected stand-alone computers working together as a single integrated computing resource
• Key components
• Multiple standalone computers, operating systems, high-performance interconnects, middleware, parallel computing environments, and applications
• Clusters are usually deployed to improve speed
• Types of Clusters
• High Availability or Failover clusters
• Load Balancing Clusters
• Parallel / Distributed Processing Clusters
• Benefits of clustering
• System availability
• Offer inherent high system availability due to redundancy
of hardware, OS and applications
• Hardware fault tolerance
• Redundancy for most system components (hardware and
software)
• OS and applications reliability
• Run multiple copies of OS, applications
• Scalability
• Adding servers to the cluster
• High Performance
• Running cluster-enabled programs
• Utility
• E.g., electrical power: utilities seek to meet fluctuating needs and charge for the resources based on usage rather than on a flat basis
• Utility Computing
• A service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate
• Advantage
• Low or no initial cost to acquire compute resource –
Computational resource are essentially rented
Utility Computing
• Utility Computing?
• Pay-for-use Pricing Model
• Data Center Virtualization and provisioning
• Solves Resource utilization problem
• Outsourcing
• Web Services Delivery
• Automation
Hardware Virtualization
• Hardware virtualization allows running multiple operating systems and
software stacks on a single physical platform
• A software layer, the virtual machine monitor (VMM) or hypervisor, mediates access to the physical hardware, presenting to each guest operating system a virtual machine (VM), which is a set of virtual platform interfaces
Technologies that increased adoption of virtualization:
• Multi-core chips,
• Paravirtualization,
• Hardware-assisted virtualization, and
• Live migration of VMs
Benefits
• Improvements on sharing and utilization
• Better manageability
• Higher reliability.
Capabilities regarding management of workload in a virtualized
system
• Isolation
• Consolidation
• Migration
• Workload Isolation
• Execution of one VM should not affect the performance of
another VM
• Consolidation
• Consolidation of several individual and heterogeneous
workloads onto a single physical platform leads to better
system utilization.
• Workload Migration
• It is done by encapsulating a guest OS state within a VM
and allowing it to be suspended, fully serialized,
migrated to a different platform, and resumed
immediately or preserved to be restored at a later date
Virtual Appliances
• An application combined with the environment needed to run it
Environment - operating system, libraries, compilers, databases,
application containers, and so forth.
• It eases software customization, configuration, and patching and
improves portability.
Example: the AMI (Amazon Machine Image) format for the Amazon EC2 public cloud
Open Virtualization Format
• Consists of a file or Set of files –
• Describing the VM hardware characteristics (e.g.,
memory, network cards, and disks)
• Operating system details, startup, and shutdown
actions
• Virtual disks themselves
• Other metadata containing product and licensing
information.
Autonomic Computing
• Systems should manage themselves, with high-level guidance
from humans
• Autonomic (self-managing) systems rely on
• Monitoring probes and gauges (sensors),
• On an adaptation engine (autonomic manager) for computing
optimizations based on monitoring data, and
• On effectors to carry out changes on the system.
• 4 properties of autonomic systems (by IBM):
• self-configuration,
• self-optimization,
• self-healing, and
• self-protection.
• IBM - Reference model for autonomic control loops of
autonomic managers
MAPE-K (Monitor Analyze Plan Execute—Knowledge)
• Autonomic computing has inspired software technologies for data center automation
• Its Tasks are
• Management of service levels of running applications
• Management of data centre capacity
• Proactive disaster recovery and
• Automation of VM provisioning
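A minimal sketch of one MAPE-K pass may help make the loop concrete. The sensor probes, effectors, and the threshold-based analysis below are invented for illustration; they are not IBM's reference implementation.

```python
def mape_k_step(sensors, knowledge, effectors):
    """One pass of the MAPE-K loop: Monitor, Analyze, Plan, Execute,
    driven by shared Knowledge (here, a simple CPU threshold)."""
    metrics = {name: probe() for name, probe in sensors.items()}  # Monitor
    overloaded = metrics["cpu"] > knowledge["cpu_limit"]          # Analyze
    plan = ["add_vm"] if overloaded else []                       # Plan
    for action in plan:                                           # Execute
        effectors[action]()
    return plan

# Hypothetical sensors and effectors, for illustration only.
state = {"vms": 2}
sensors = {"cpu": lambda: 0.95}  # probe reports 95% CPU load
effectors = {"add_vm": lambda: state.update(vms=state["vms"] + 1)}
plan = mape_k_step(sensors, {"cpu_limit": 0.8}, effectors)
print(plan, state)  # ['add_vm'] {'vms': 3}
```

Repeating this step periodically gives a self-optimizing loop in the sense described above: monitoring data drives the autonomic manager, which acts through effectors.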
Desired Features of Cloud
To satisfy the expectations of consumers, a cloud must provide:
• Self-Service
• Per-Usage Metering and Billing
• Elasticity
• Customization
Desired Features of Cloud
Self Service
• On-demand instant access to resources
• Must allow self-service access, so customers can request, customize, pay, and use services without intervention
Desired Features of Cloud
Per-Usage Metering and Billing
• Services must be priced on a short-term basis
• Allow users to release resources as soon as they are not needed
• Must offer efficient trading services such as pricing, accounting, and billing
• Metering should be done accordingly for different types of services
• Usage must be promptly reported
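The metering and billing ideas above can be sketched in a few lines; the service names and hourly rates below are hypothetical.

```python
class UsageMeter:
    """Minimal pay-per-use meter: records metered usage per service and
    bills on a short-term basis (rates and services are made up)."""
    def __init__(self, rates_per_hour):
        self.rates = rates_per_hour  # e.g. {"vm.small": 0.05}
        self.usage = {}              # service -> hours consumed

    def record(self, service, hours):
        """Meter each type of service independently, as usage is reported."""
        self.usage[service] = self.usage.get(service, 0.0) + hours

    def bill(self):
        """Charge only for what was actually used, per service."""
        return {s: round(h * self.rates[s], 4) for s, h in self.usage.items()}

meter = UsageMeter({"vm.small": 0.05, "storage.gb_hour": 0.002})
meter.record("vm.small", 10)          # VM ran 10 hours, then was released
meter.record("storage.gb_hour", 100)  # 100 GB-hours of storage
print(meter.bill())                   # {'vm.small': 0.5, 'storage.gb_hour': 0.2}
```

Releasing a resource simply stops further `record` calls for it, so the customer pays nothing beyond the hours actually consumed.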
Desired Features of Cloud
Elasticity
• Infinite computing resources available on demand.
• Rapidly provide resources in any quantity and at any time.
• Additional resources can be provided when application load
increases
• Release when load decreases.
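A toy scaling rule illustrates elasticity: resources are added when application load increases and released when it decreases. The target-utilization heuristic and the numbers are assumptions for illustration.

```python
import math

def desired_servers(load, capacity_per_server, target_util=0.7, min_servers=1):
    """Elasticity rule of thumb: provision just enough servers that each
    runs below the target utilization; release the rest when load drops."""
    needed = math.ceil(load / (capacity_per_server * target_util))
    return max(needed, min_servers)

print(desired_servers(load=900, capacity_per_server=100))  # 13 servers under heavy load
print(desired_servers(load=50, capacity_per_server=100))   # back down to 1 when load falls
```

From the user's point of view the pool appears unbounded: any quantity of servers can be requested at any time, and the formula simply tracks the current load.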
Desired Features of Cloud
Customization
• Resources rented from cloud must be customizable.
• In IaaS – allow users to deploy specialised virtual appliances
and give privileged access to servers.
CHALLENGES AND RISKS OF CLOUD COMPUTING
Despite the initial success and popularity of the cloud computing paradigm and the extensive
availability of providers and tools, a significant number of challenges and risks are inherent to
this new model of computing.
Issues faced in cloud computing are:
Security, Privacy, and Trust
Data Lock-In and Standardization
Availability, Fault-Tolerance, and Disaster Recovery
Resource Management and Energy Efficiency
Security, Privacy and Trust
Current cloud offerings are essentially public, exposing the system to more attacks. For
this reason there are potentially additional challenges to make cloud computing
environments as secure as in-house IT systems.
Security and privacy affect the entire cloud computing stack, since there is a massive use
of third-party services and infrastructures that are used to host important data or to
perform critical operations.
In this scenario, the trust toward providers is fundamental to ensure the desired level of
privacy for applications hosted in the cloud.
Legal and regulatory issues also need attention. When data are moved into the Cloud,
providers may choose to locate them anywhere on the planet.
The physical location of data centers determines the set of laws that can be applied to the
management of data.
For example, specific cryptography techniques cannot be used because they are not allowed in some countries.
Similarly, country laws can impose that sensitive data, such as patient health records, be stored within national borders.
Data Lock-In and Standardization
A major concern of cloud computing users is about having their data locked-in by a
certain provider.
Users may want to move data and applications out from a provider that does not meet
their requirements.
However, in their current form, cloud computing infrastructures and platforms do not
employ standard methods of storing user data and applications.
The answer to this concern is standardization. In this direction, there are efforts to create
open standards for cloud computing.
The Cloud Computing Interoperability Forum (CCIF) was formed by organizations such
as Intel, Sun, and Cisco in order to “enable a global cloud computing ecosystem whereby
organizations are able to seamlessly work together for the purposes for wider industry
adoption of cloud computing technology”.
The development of the Unified Cloud Interface (UCI) by CCIF aims at creating a
standard programmatic point of access to an entire cloud infrastructure.
In the hardware virtualization sphere, the Open Virtualization Format (OVF) aims at facilitating the packing and distribution of software to be run on VMs, so that virtual appliances can be made portable, that is, run seamlessly on hypervisors of different vendors.
Availability, Fault-Tolerance and Disaster Recovery
Availability of the service, its overall performance, and the measures to be taken when something goes wrong in the system or its components are essential concerns in the cloud.
Users seek a warranty before they can comfortably move their business to the cloud.
SLAs, which include QoS requirements, must ideally be set up between customers and cloud computing providers to act as this warranty.
An SLA specifies the details of the service to be provided, including availability and
performance guarantees.
Additionally, metrics must be agreed upon by all parties, and penalties for violating the
expectations must also be approved.
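As a sketch of how an agreed metric and penalty might be checked programmatically: the uptime metric and the 10%-per-percentage-point penalty scheme below are invented for illustration, not taken from any real SLA.

```python
def sla_penalty(promised_uptime, measured_uptime, monthly_fee, rate=0.1):
    """Apply an agreed penalty when measured availability misses the SLA:
    here, 10% of the monthly fee per percentage point of shortfall.
    Both the metric and the penalty scheme are illustrative assumptions."""
    shortfall_points = max(promised_uptime - measured_uptime, 0.0) * 100
    return round(monthly_fee * rate * shortfall_points, 2)

print(sla_penalty(0.999, 0.950, monthly_fee=100))  # provider missed the guarantee
print(sla_penalty(0.999, 1.000, monthly_fee=100))  # guarantee met: no penalty
```

The key point is the one made above: the metric (`measured_uptime`), the target, and the penalty formula must all be agreed upon by both parties before such a check is meaningful.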
Resource Management and Energy-Efficiency
An important challenge faced by providers of cloud computing services is the efficient management of virtualized resource pools.
Physical resources such as CPU cores, disk space, and network bandwidth must be sliced
and shared among virtual machines running potentially heterogeneous workloads.
The multi-dimensional nature of virtual machines complicates the activity of finding a
good mapping of VMs onto available physical hosts while maximizing user utility.
Dimensions to be considered include: number of CPUs, amount of memory, size of
virtual disks, and network bandwidth.
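A first-fit heuristic is one simple way to approach the multi-dimensional VM-to-host mapping described above; the hosts, VMs, and dimensions here are hypothetical, and real placement policies weigh many more factors.

```python
def first_fit(vms, hosts):
    """First-fit heuristic: place each VM on the first host with enough
    spare capacity along every dimension (here CPU cores and memory)."""
    placement = {}
    free = {h: dict(cap) for h, cap in hosts.items()}  # remaining capacity
    for vm, need in vms.items():
        for h, cap in free.items():
            if all(cap[d] >= need[d] for d in need):
                for d in need:
                    cap[d] -= need[d]  # reserve the capacity on this host
                placement[vm] = h
                break
        else:
            placement[vm] = None  # no host fits; would trigger migration or scale-out
    return placement

hosts = {"h1": {"cpu": 4, "mem": 8}, "h2": {"cpu": 8, "mem": 16}}
vms = {"vm1": {"cpu": 2, "mem": 4},
       "vm2": {"cpu": 4, "mem": 8},
       "vm3": {"cpu": 8, "mem": 16}}
print(first_fit(vms, hosts))  # {'vm1': 'h1', 'vm2': 'h2', 'vm3': None}
```

An unplaceable VM (the `None` case) is where the suspend/migrate/resume mechanisms discussed next come into play.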
Dynamic VM mapping policies may leverage the ability to suspend, migrate, and resume
VMs as an easy way of preempting low-priority allocations in favor of higher-priority
ones.
Migration of VMs also brings additional challenges such as detecting when to initiate a
migration, which VM to migrate, and where to migrate.
In addition, policies may take advantage of live migration of virtual machines to relocate
data center load without significantly disrupting running services.
Data centers consume large amounts of electricity. According to data published by HP, 100 server racks can consume 1.3 MW of power and another 1.3 MW is required by the cooling system, costing USD 2.6 million per year.
Besides the monetary cost, data centers significantly impact the environment in terms of
CO2 emissions from the cooling systems.
BENEFITS OF CLOUD COMPUTING
No upfront commitment
IT assets, namely software and infrastructure, are turned into utility costs, which are
paid for as long as they are used, not paid for upfront.
Capital costs are costs associated with assets that need to be paid in advance to start a
business activity.
Before cloud computing, IT infrastructure and software generated capital costs,
since they were paid upfront so that business start-ups could afford a computing
infrastructure, enabling the business activities of the organization.
Cost efficiency
The most evident benefit from the use of cloud computing systems and
technologies is the increased economical return due to the reduced maintenance
costs and operational costs related to IT software and infrastructure.
The biggest reason behind shifting to cloud computing is that it costs considerably less than an on-premise technology.
Now the companies need not store the data in disks anymore as the Cloud offers
enormous storage space, saving money and resources of the companies.
It helps you to save substantial capital cost as it does not need any physical hardware
investments.
Also, you do not need trained personnel to maintain the hardware. The buying and
managing of equipment is done by the cloud service provider.
On Demand
Services can be accessed on demand and only when required.
Cloud users can access the required services only when they need them and pay only for their usage.
Any subscriber of a cloud service can access the services from anywhere and at any time.
Disaster Recovery:
It is highly recommended that businesses have an emergency backup plan ready in the
case of an emergency. Cloud storage can be used as a back‐up plan by businesses by
providing a second copy of important files.
These files are stored at a remote location and can be accessed through an internet
connection.
Excellent accessibility
Storing the information in cloud allows you to access it anywhere and anytime
regardless of the machine making it highly accessible and flexible technology of
present times.
Information and services stored in the cloud are exposed to users by Web-based
interfaces that make them accessible from portable devices as well as desktops at
home.
Scalability
If you are anticipating a huge upswing in computing need (or even if you are surprised by a sudden demand), cloud computing can help you manage. Rather than having to buy, install, and configure new equipment, you can buy additional CPU cycles or storage from a third party.
For example, organizations can add more servers to process workload spikes and
dismiss them when they are no longer needed.
Flexibility
Increased agility in defining and structuring software systems is another significant benefit of cloud computing.
Since organizations rent IT services, they can more dynamically and flexibly
compose their software systems, without being constrained by capital costs for IT
assets.
There is a reduced need for capacity planning, since cloud computing allows
organizations to react to unplanned surges in demand quite rapidly.
DISADVANTAGES OF CLOUD COMPUTING
Downtime
With massive overload on the servers from various clients, the service provider might come up against technical outages. Due to this unavoidable situation, your business could be temporarily disrupted.
And in case your internet connection is down, you will not be able to access the data, software, or applications on the cloud. So you are essentially depending on the quality of the internet connection to access the tools and software, as they are not installed in-house.
Security
There is an inherent risk to your data, even though cloud service providers abide by strict confidentiality terms, are industry certified, and implement the best security standards.
When you seek to use cloud-based technology, you are extending your access controls to a third-party agent to import critical confidential data from your company onto the cloud.
With high levels of security and confidentiality involved, the cloud service
providers are often faced with security challenges.
The presence of data on the cloud opens up a greater risk of data theft as hackers
could find loopholes in the framework. Basically your data on the cloud is at a higher
risk, than if it was managed in-house.
Hackers could find ways to gain access to data, scan, exploit a loophole and look
for vulnerabilities on the cloud server to gain access to the data.
For instance, when you are dealing with a multi-tenant cloud server, the chances of a
hacker breaking into your data are quite high, as the server has data stored by multiple
users.
But cloud-based servers take enough precautions to prevent data theft, and the likelihood of being hacked is quite low.
Vendor Lock-In
Companies might find it a bit of a hassle to change the vendors.
Although cloud service providers assure that it is a breeze to use the cloud and integrate your business needs with them, disengaging and moving to the next vendor is a process that has not yet completely evolved.
Applications that work fine with one platform may not be compatible with another.
The transition might pose a risk and the change could be inflexible due to
synchronization and support issues.
Limited Control
Organizations could have limited access control over the data, tools, and apps, as the cloud is controlled by the service provider.
It hands over minimal control to the customer, as access is limited to the applications, tools, and data loaded on the server, with no access to the infrastructure itself.
The customer may not have access to the key administrative services.
Legal Issues
Legal issues may also arise. These are specifically tied to the ubiquitous nature of
cloud computing, which spreads computing infrastructure across diverse geographical
locations.
Different legislation about privacy in different countries may potentially create disputes as to the rights that third parties (including government agencies) have to your data.
U.S. legislation is known to give extreme powers to government agencies to acquire
confidential data when there is the suspicion of operations leading to a threat to
national security.
European countries are more restrictive and protect the right of privacy.
Basics of Virtualization, Types of
Virtualization, Implementation
Levels of Virtualization
BASICS OF VIRTUALIZATION
• Virtualization is a computer architecture technology by
which multiple virtual machines (VMs) are multiplexed in
the same hardware machine.
• The purpose of a VM is to enhance resource sharing by
many users and improve computer performance in terms
of resource utilization and application flexibility.
• Hardware resources such as CPU, memory, I/O devices or
software resources such as OS, software libraries can be
virtualized
Levels of Virtualization Implementation
• A traditional computer runs with a host OS specially tailored for its hardware architecture
• After virtualization, different user applications managed by their own OSes can run on the same hardware, independent of the host OS
• This is done by adding a virtualization layer, called the hypervisor or virtual machine monitor (VMM)
• The main function of this software layer is to virtualize the physical hardware of the host machine into virtual resources to be used by the VMs
Levels of Virtualization Implementation
• Virtualization software creates the abstraction of VMs by
interposing a virtualization layer at various levels of a
computer system.
• Common virtualization layers are:
1. Instruction Set Architecture (ISA) level
2. Hardware level
3. Operating System level
4. Library support level
5. Application level
Virtualization ranging from hardware to applications in five abstraction levels
Instruction Set Architecture Level
• Virtualization is performed by emulating a given ISA by
the ISA of the host machine.
• For example, MIPS binary code can run on an x86-based
host machine with the help of ISA emulation.
• It is possible to run a large amount of legacy binary code
written for various processors on any given new
hardware host machine.
• Instruction set emulation leads to virtual ISAs created on
any hardware machine.
Instruction Set Architecture Level
• Basic emulation method is through code interpretation.
• An interpreter program interprets the source instructions
to target instructions one by one.
• This process is relatively slow.
• For better performance, dynamic binary translation is
desired. This approach translates basic blocks of dynamic
source instructions to target instructions.
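The one-by-one interpretation baseline can be sketched with a toy ISA. The three opcodes are invented for illustration; a real emulator would also handle memory, branches, and would translate hot basic blocks dynamically rather than interpreting them.

```python
def interpret(program, regs):
    """Interpret a toy source ISA one instruction at a time -- the slow
    baseline that dynamic binary translation is designed to speed up."""
    for op, dst, src in program:
        if op == "LI":        # load immediate value into register dst
            regs[dst] = src
        elif op == "ADD":     # regs[dst] += regs[src]
            regs[dst] += regs[src]
        elif op == "MUL":     # regs[dst] *= regs[src]
            regs[dst] *= regs[src]
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

regs = interpret([("LI", "r1", 6), ("LI", "r2", 7), ("MUL", "r1", "r2")],
                 {"r1": 0, "r2": 0})
print(regs["r1"])  # 42
```

Each source instruction costs several host operations here, which is why translating whole basic blocks to native code gives much better performance.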
Instruction Set Architecture Level
• Instruction set emulation requires binary translation and
optimization.
• A virtual instruction set architecture (V-ISA) thus requires
adding a processor-specific software translation layer to
the compiler.
Hardware Abstraction Level
• Hardware-level virtualization is performed right on top of the
bare hardware.
• This approach generates a virtual hardware environment for a
VM.
• The intention is to upgrade the hardware utilization rate by
multiple users concurrently.
• The idea was implemented in the IBM VM/370 in the 1960s.
• More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications.
Operating System Level
• Refers to an abstraction layer between traditional OS and
user applications.
• OS-level virtualization creates isolated containers on a single
physical server and the OS instances to utilize the hardware
and software in data centers.
• The containers behave like real servers.
Operating System Level
• OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware
resources among a large number of mutually distrusting
users.
Library Support Level
• Most applications use APIs exported by user-level libraries
rather than using lengthy system calls by the OS.
• Virtualization with library interfaces is possible by
controlling the communication link between applications
and the rest of a system through API hooks.
• The software tool WINE has implemented this approach to
support Windows applications on top of UNIX hosts.
• Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware acceleration.
User-Application Level
• Virtualizes an application as a VM.
• Application-level virtualization is also known as process-level
virtualization.
• The application seems to be running on a local machine while in fact it is running on a virtual machine (such as a server) in another location
• The most popular approach is to deploy high-level language (HLL) VMs
• The virtualization layer sits as an application program on top of
the operating system.
• The Microsoft .NET CLR and Java Virtual Machine (JVM) are two
good examples of this class of VM.
Application-level virtualization
• Application-level virtualization are known as
– application isolation,
– Application sandboxing, or
– Application streaming.
• The process involves wrapping the application in a layer that
is isolated from the host OS and other applications.
• An example is the LANDesk application virtualization platform, which deploys applications as self-contained, executable files in an isolated environment, without requiring installation, system modifications, or elevated security privileges.
Relative Merits of Different Approaches
VMM Design Requirements and Providers
• Hardware-level virtualization inserts a layer between real
hardware and traditional operating systems.
• This layer is commonly called the Virtual Machine Monitor
(VMM) and it manages the hardware resources of a
computing system.
• Each time programs access the hardware the VMM captures
the process
• One hardware component, such as the CPU, can be
virtualized as several virtual copies.
Three requirements for a VMM
1. VMM should provide an environment identical to the
original machine.
2. Programs run in this environment should show only minor decreases in speed.
3. VMM should be in complete control of the system
resources.
Virtual Machine Monitor
• VMM should exhibit a function identical to that which it
runs on the original machine directly.
• Two possible exceptions permitted:
– Differences caused by the availability of system resources:
arises when more than one VM runs on the same machine
– Differences caused by timing dependencies.
• These two differences pertain to performance, while
the function a VMM provides stays the same as that of
a real machine
Virtual Machine Monitor
• Compared with a physical machine, no one prefers a VMM if its
efficiency is too low.
• Traditional emulators and complete software interpreters emulate
each instruction by means of functions or macros, which provides
the most flexible solution for VMMs.
• However, emulators or simulators are too slow to be used as real
machines.
• To guarantee the efficiency of a VMM, a statistically dominant
subset of the virtual processor’s instructions needs to be
executed directly by the real processor, with no software
intervention by the VMM
35.
• Complete control of these resources by a
VMM includes the following aspects:
(1) The VMM is responsible for allocating
hardware resources for programs;
(2) it is not possible for a program to access any
resource not explicitly allocated to it; and
(3) it is possible under certain circumstances for
a VMM to regain control of resources already
allocated
Comparison of Four VMM and Hypervisor Software Packages
36. LOAD BALANCING
With the explosive growth of the Internet and its increasingly important role in our lives,
traffic on the Internet is increasing dramatically, and has been growing at over
100% annually.
The workload on servers is increasing rapidly, so servers may easily be overloaded,
especially servers for a popular web site.
There are two basic solutions to the problem of overloaded servers,
One is a single-server solution,
i.e., upgrade the server to a higher performance server. However, the new server may also
soon be overloaded, requiring another upgrade.
Further, the upgrading process is complex and the cost is high.
The second solution is a multiple-server solution,
i.e., build a scalable network service system on a cluster of servers. When load increases,
you can simply add one or more new servers to the cluster, and commodity servers have
the highest performance/cost ratio.
Therefore, it is more scalable and more cost-effective to build a server cluster system for
network services.
Cloud Load Balancing
Cloud load balancing is the process of distributing workloads across multiple computing
resources.
Cloud load balancing is defined as the method of splitting workloads and computing
resources in a cloud computing environment.
It enables enterprises to manage workload demands or application demands by distributing
resources among numerous computers, networks or servers.
Load Balancer
A load balancer is a device that distributes network or application traffic across a cluster
of servers.
Load balancing improves responsiveness and increases availability of applications.
A load balancer sits between the client and the server farm accepting incoming network
and application traffic and distributing the traffic across multiple backend servers using
various methods.
Load Balancer
Cloud-based server farms can achieve high scalability and availability using server load
balancing. This technique makes the server farm appear to clients as a single server.
37. Load balancing distributes service requests from clients across a bank of servers and
makes those servers appear as if they were a single powerful server responding to client
requests.
Load balancing solutions can be divided into software-based load balancers and
hardware-based load balancers.
Hardware-based load balancers are specialized boxes that include Application Specific
Integrated Circuits (ASICs) customized for a specific use.
Software-based load balancers run on standard operating systems and standard
hardware components such as desktop PCs.
Load balancing Algorithms
Round Robin
Weighted Round Robin
Least Connection
Source IP Hash
Global Server Load Balancing
Round Robin:
This load balancing technique involves a pool of servers that have been identically
configured to deliver the same service as each other.
Each will have a unique IP address but will be linked to the same domain name, and
incoming requests are distributed to the servers in rotation.
Weighted Round Robin
Weighted Round Robin builds on the simple Round Robin load balancing method.
In the weighted version, each server in the pool is given a static numerical weighting.
Servers with higher ratings get more requests sent to them.
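As a toy illustration, the Round Robin and Weighted Round Robin policies can be sketched in a few lines of Python. The server addresses and weights below are invented for the example; a real load balancer works on live network traffic, not an in-memory list.

```python
import itertools

def weighted_round_robin(servers):
    """Yield servers in proportion to their static weights.

    `servers` maps a server address to its integer weight; a server
    with weight 3 receives three times as many requests as one with
    weight 1. (Addresses and weights are illustrative.)
    """
    # Expand each server into `weight` slots, then cycle forever.
    pool = [s for s, w in servers.items() for _ in range(w)]
    return itertools.cycle(pool)

# Plain Round Robin is the special case where every weight is 1.
rr = weighted_round_robin({"10.0.0.1": 1, "10.0.0.2": 1})
wrr = weighted_round_robin({"10.0.0.1": 3, "10.0.0.2": 1})

print([next(rr) for _ in range(4)])   # alternates between the two servers
print([next(wrr) for _ in range(4)])  # 10.0.0.1 three times, then 10.0.0.2
```

Note that both policies are static: they never look at what the servers are actually doing, which is the gap the Least Connection method addresses.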
Least Connection
Neither Round Robin nor Weighted Round Robin takes the current server load into
consideration when distributing requests.
The Least Connection method does take the current server load into consideration.
The current request goes to the server that is servicing the least number of active sessions
at the current time.
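The Least Connection selection step can be sketched as follows; the session counts are invented, and in a real balancer they would be tracked as connections open and close.

```python
def least_connection(active_sessions):
    """Pick the server currently servicing the fewest active sessions.

    `active_sessions` maps each server to its current session count
    (the counts shown are illustrative).
    """
    return min(active_sessions, key=active_sessions.get)

counts = {"10.0.0.1": 12, "10.0.0.2": 4, "10.0.0.3": 9}
server = least_connection(counts)
print(server)          # 10.0.0.2, the least-loaded server
counts[server] += 1    # the balancer then accounts for the new session
```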
Source IP Hash
This algorithm combines source and destination IP addresses of the client and server to
generate a unique hash key.
The key is used to allocate the client to a particular server. As the key can be regenerated
if the session is broken, the client request is directed to the same server it was using
previously.
This is useful if it’s important that a client should connect to a session that is still active
after a disconnection.
For example, to retain items in a shopping cart between sessions.
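A minimal sketch of Source IP Hash, assuming MD5 as the hash function (real balancers use various hash functions; the addresses are invented). Because the key depends only on the address pair, a reconnecting client deterministically lands on the same backend as long as the pool is unchanged.

```python
import hashlib

def source_ip_hash(client_ip, server_ip, servers):
    """Map a client/server address pair onto one backend server.

    The hash key is derived only from the two addresses, so it can be
    regenerated after a disconnection and the client is directed back
    to the backend it was using previously.
    """
    key = hashlib.md5(f"{client_ip}-{server_ip}".encode()).hexdigest()
    return servers[int(key, 16) % len(servers)]

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
first = source_ip_hash("203.0.113.7", "198.51.100.1", backends)
again = source_ip_hash("203.0.113.7", "198.51.100.1", backends)
assert first == again  # deterministic: reconnects reach the same backend
```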
Global Server Load Balancing (GSLB)
GSLB load balances DNS requests, not traffic.
It uses algorithms such as round robin, weighted round robin, fixed weighting, real server
load, location-based, proximity and all available. It offers High Availability through
multiple data centers.
If a primary site is down, traffic is diverted to a disaster recovery site. Clients connect to
their fastest performing, geographically closest data center.
Application health checking ensures unavailable services or data centers are not visible to
clients.
38.
Virtualization Structures, Tools and Mechanisms
Virtualization Structures, Tools & Mechanisms
• Before virtualization, the operating system manages the
hardware.
• After virtualization, a virtualization layer is inserted
between the hardware and the OS.
• The virtualization layer is responsible for converting
portions of the real hardware into virtual hardware.
Virtualization Structures, Tools & Mechanisms
• Depending on the position of the virtualization layer,
there are several classes of VM architectures, namely
– Hypervisor architecture
– Paravirtualization
– Host-based virtualization
Hypervisor and Xen Architecture
• A hypervisor supports hardware-level virtualization on
bare-metal devices such as the CPU, memory, disk and
network interfaces
• The hypervisor sits directly between the physical hardware and
its OS
• Depending on the functionality, a hypervisor can assume
a micro-kernel architecture or a monolithic hypervisor
architecture
39.
Hypervisor and Xen Architecture
• A micro-kernel hypervisor includes only the basic and
unchanging functions (such as physical memory
management and processor scheduling)
• Device drivers and other changeable components are
outside the hypervisor
• A monolithic hypervisor implements all the
aforementioned functions, including those of the device
drivers
• The size of the hypervisor code of a micro-kernel
hypervisor is smaller than that of a monolithic hypervisor
Xen Architecture
• Xen is an open source hypervisor program developed by
Cambridge University
• Xen is a microkernel hypervisor, which separates the policy
from the mechanism
• It implements all the mechanisms, leaving the policy to be
handled by Domain 0
• Xen does not include any device drivers natively
• It just provides a mechanism by which a guest OS can have
direct access to the physical devices
Xen Architecture
• Like other virtualization systems, many guest OSes can run
on top of the hypervisor
• Not all guest OSes are created equal; one in particular
controls the others
• The guest OS that has control ability (the privileged guest OS)
is called Domain 0, and the others are called Domain U
• Domain 0 is loaded first when Xen boots
• Domain 0 is designed to access hardware directly and
manage devices.
40.
Xen Architecture
• The privileged VM is named Domain 0; it has the privilege to
manage other VMs implemented on the same host
• If Domain 0 is compromised, the hacker can control the
entire system
Binary Translation with Full Virtualization
• Depending on implementation technologies, hardware
virtualization can be classified into two categories:
full virtualization
host-based virtualization
Full virtualization:
– Does not need to modify the host OS
– Relies on binary translation to trap and to virtualize the
execution of certain sensitive, nonvirtualizable
instructions
41.
Full Virtualization
• With full virtualization, noncritical instructions run on the
hardware directly
• Critical instructions are discovered and replaced with traps
into the VMM to be emulated by software.
• Noncritical instructions do not control hardware or threaten
the security of the system, but critical instructions do
• Running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security
Binary Translation of Guest OS Requests Using
a VMM
• VMware puts the VMM at Ring 0 and the guest OS at
Ring 1
• VMM scans the instruction stream and identifies the
privileged, control and behaviour-sensitive instructions
• Once these instructions are identified, they are trapped into
the VMM, which emulates the behaviour of these
instructions
Binary Translation of Guest OS Requests Using
a VMM
• The method used is binary translation
• Full virtualization combines binary translation and
direct execution
• The guest OS is completely decoupled from the
underlying hardware
Binary Translation of Guest OS Requests Using
a VMM
• The performance of full virtualization may not be ideal,
since it involves binary translation
• A code cache stores translated hot instructions to
improve performance, but it increases the cost of
memory usage
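The idea of trapping critical instructions and caching translated blocks can be illustrated with a toy sketch. The instruction names are invented; a real VMM translates binary machine code, not strings.

```python
# Toy model of binary translation with a code cache.
translation_cache = {}

def translate(block):
    """Translate a block of guest instructions into safe host actions,
    caching the result so hot blocks are translated only once."""
    if block not in translation_cache:
        translated = []
        for insn in block:
            if insn.startswith("PRIV_"):
                # Critical instruction: replace with a trap into the VMM.
                translated.append(("vmm_emulate", insn))
            else:
                # Noncritical instruction: run directly on the hardware.
                translated.append(("direct", insn))
        translation_cache[block] = translated
    return translation_cache[block]

hot_block = ("ADD", "PRIV_SET_CR3", "SUB")
translate(hot_block)
translate(hot_block)               # the second call hits the cache
assert len(translation_cache) == 1
```

The cache is exactly the memory-for-speed trade-off described above: repeated executions of a hot block skip the translation step, at the cost of keeping the translated copy resident.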
42.
Host Based Virtualization
• An alternative is to install a virtualization layer on top of
the host OS
• Host OS is still responsible for managing the hardware
• Guest OSes are installed and run on top of the
virtualization layer
• Dedicated applications may run on the VMs
• Some other applications can also run with the host OS
directly
Host based virtualization
• First, the user can install this VM architecture without
modifying the host OS
• Second, the host-based approach appeals to many host
machine configurations
• Performance is low compared with the
hypervisor/VMM architecture
• An application requesting hardware access involves four
layers of mapping
43.
Para-virtualization
• The guest OS recognizes the presence of the VMM
• The guest OS communicates directly with the hypervisor
• Para-virtualization needs to modify the guest operating
systems
• Para-virtualized VMs provide special APIs requiring OS
modifications
• The API exchanges hypercalls with the hypervisor
• A compiler assists by replacing nonvirtualizable OS
instructions with hypercalls
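A toy sketch of the hypercall idea: the guest kernel is modified so that a privileged operation becomes an explicit call into the hypervisor rather than a privileged instruction that would have to trap at runtime. The class and operation names here are invented for illustration.

```python
class Hypervisor:
    """Minimal stand-in for a hypervisor exposing a hypercall interface."""
    def __init__(self):
        self.page_tables = {}

    def hypercall(self, op, *args):
        # The hypervisor validates and performs the privileged work.
        if op == "update_page_table":
            guest, entry = args
            self.page_tables.setdefault(guest, []).append(entry)
        else:
            raise ValueError(f"unknown hypercall: {op}")

class ParavirtGuestOS:
    """A guest kernel modified to cooperate with the hypervisor."""
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor

    def map_page(self, entry):
        # A native kernel would execute a privileged instruction here;
        # the paravirtualized kernel issues a hypercall instead.
        self.hv.hypercall("update_page_table", self.name, entry)

hv = Hypervisor()
guest = ParavirtGuestOS("domU", hv)
guest.map_page(0x1000)
assert hv.page_tables["domU"] == [0x1000]
```

This is why para-virtualization requires guest kernel source modifications: every privileged code path must be rewritten to call the hypervisor's API.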
Para-virtualization
• x86 offers four instruction execution rings: Ring 0, 1, 2 and 3
• The lower the ring number, the higher the privilege of the
instructions that can run there
• The OS is responsible for managing the hardware, and its
privileged instructions execute at Ring 0
• User-level applications run at Ring 3
Para-Virtualization Architecture
44.
Problems with Para-virtualization
• First, it must support the unmodified OS as well.
• Second, the cost of maintaining paravirtualized OSes is
high, because they may require deep OS kernel
modifications
• Finally, the performance advantage of para-virtualization
varies greatly due to workload variations
• Main problem in full virtualization is its low performance
in binary translation
Problems with Para-virtualization
• Speeding up binary translation is difficult
• Many virtualization products therefore employ the para-
virtualization architecture
• e.g., Xen, VMware ESX
KVM (Kernel-based VM)
• KVM is a Linux para-virtualization system
• Memory management and scheduling activities are
carried out by the existing Linux kernel; KVM does the rest
• KVM is a hardware-assisted para-virtualization tool, which
improves performance and supports unmodified guest
OSes such as Windows, Linux, Solaris, and other UNIX
variants
45.
Para-Virtualization with Compiler Support
• The full virtualization architecture intercepts and
emulates privileged and sensitive instructions at runtime
• Para-virtualization handles these instructions at compile
time
• The guest OS kernel is modified to replace the privileged
and sensitive instructions with hypercalls to the hypervisor
or VMM
Para-Virtualization with Compiler Support
• The guest OS running in a guest domain may run at Ring 1
instead of at Ring 0
• This implies that the guest OS may not be able to execute
some privileged and sensitive instructions
• Privileged instructions are implemented by hypercalls to
the hypervisor
• After replacing the instructions with hypercalls, the
modified guest OS emulates the behavior of the original
guest OS
VMware ESX Server for Para-Virtualization
46. SERVER VIRTUALIZATION
Server virtualization is the process of using software on a physical server to create
multiple partitions or "virtual instances" each capable of running independently.
Whereas on a single dedicated server the entire machine has only one instance of an
operating system, on a virtual server the same machine can be used to run multiple
server instances each with independent operating system configurations.
Server virtualization is a virtualization technique that involves partitioning a physical
server into a number of small, virtual servers with the help of virtualization software.
In server virtualization, each virtual server runs its own operating system instance,
so multiple OS instances run on the same physical machine at the same time.
The primary uses of server virtualization are:
To centralize the server administration
Improve the availability of server
Helps in disaster recovery
Ease in development & testing
Make efficient use of server resources.
Types of Server Virtualization and Approaches to Server Virtualization
There are 3 types of server virtualization in cloud computing:
Hypervisor
A Hypervisor is a layer between the operating system and hardware. The hypervisor is
the reason behind the successful running of multiple operating systems.
It can also perform tasks such as handling queues, dispatching and returning
hardware requests. The host operating system works on top of the hypervisor; we use it
to administer and manage the virtual machines.
Para-Virtualization
In the para-virtualization model, the guest is modified to avoid the trap-and-emulate
overhead of software virtualization.
It is based on the hypervisor, and the guest operating system is modified and
recompiled before being installed in a virtual machine.
After the modification, the overall performance is increased as the guest operating
system communicates directly with the hypervisor.
47. Full Virtualization
Full virtualization can emulate the underlying hardware. It is quite similar to para-
virtualization. Here, machine operations used by the operating system to perform
input/output or to modify the system status are trapped.
The unmodified operating system can run on top of the hypervisor. This is
possible because the trapped operations are emulated in software and the status
codes returned match what the real hardware would deliver.
Why Server Virtualization?
Server Virtualization allows us to use resources efficiently. With the help of server
virtualization, you can eliminate the major cost of hardware.
This virtualization in cloud computing can divide the workload across multiple
servers, and all these virtual servers are capable of performing a dedicated task.
One of the reasons for choosing server virtualization is that workloads can be moved
between virtual machines according to the load.
Application server virtualization
Application server virtualization abstracts a collection of application servers that
provide the same services as a single virtual application server by using
load-balancing strategies and providing a high-availability infrastructure for the
services hosted in the application server.
This is a particular form of virtualization and serves the same purpose of storage
virtualization: providing a better quality of service rather than emulating a different
environment
Advantages of Server Virtualization
Cost Reduction: Server virtualization reduces cost because less hardware is required.
Independent Restart: Each server can be rebooted independently, and that reboot
won't affect the working of other virtual servers.
48. DESKTOP VIRTUALIZATION
Desktop virtualization abstracts the desktop environment available on a personal
computer in order to provide access to it using a client / server approach.
Desktop virtualization provides the same outcome as hardware virtualization but
serves a different purpose. Similarly to hardware virtualization, desktop virtualization
makes accessible a different system as though it were natively installed on the host,
but this system is remotely stored on a different host and accessed through a network
connection.
Moreover, desktop virtualization addresses the problem of making the same desktop
environment accessible from everywhere.
Although the term desktop virtualization strictly refers to the ability to remotely
access a desktop environment, generally the desktop environment is stored in a
remote server or a datacenter that provides a high-availability infrastructure and
ensures the accessibility and persistence of the data.
In this scenario, an infrastructure supporting hardware virtualization is fundamental to
provide access to multiple desktop environments hosted on the same server; a specific
desktop environment is stored in a virtual machine image that is loaded and started on
demand when a client connects to the desktop environment.
This is a typical cloud computing scenario in which the user leverages the virtual
infrastructure for performing the daily tasks on his computer. The advantages of
desktop virtualization are high availability, persistence, accessibility, and ease of
management
The basic services for remotely accessing a desktop environment are implemented in
software components such as Windows Remote Services, VNC, and X Server.
Infrastructures for desktop virtualization based on cloud computing solutions include
Sun Virtual Desktop Infrastructure (VDI), Parallels Virtual Desktop Infrastructure
(VDI), Citrix XenDesktop, and others
49. APPLICATION VIRTUALIZATION
Application-level virtualization is a technique allowing applications to be run in
runtime environments that do not natively support all the features required by such
applications.
In this scenario, applications are not installed in the expected runtime environment but
are run as though they were.
In general, these techniques are mostly concerned with partial file system, library,
and operating system component emulation. Such emulation is performed by a thin
layer (a program or an operating system component) that is in charge of executing
the application.
Emulation can also be used to execute program binaries compiled for different
hardware architectures. In this case, one of the following strategies can be
implemented.
Interpretation. In this technique every source instruction is interpreted by an
emulator that executes equivalent native ISA instructions, leading to poor performance.
Interpretation has a minimal startup cost but a huge runtime overhead, since each
instruction is emulated.
Binary translation. In this technique every source instruction is converted to native
instructions with equivalent functions. After a block of instructions is translated, it is
cached and reused.
Binary translation has a large initial overhead cost, but over time it is subject to better
performance, since previously translated instruction blocks are directly executed.
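The trade-off between the two strategies can be sketched with a toy "source ISA" of (opcode, operand) pairs; the instruction set and callables are invented for illustration. Interpretation decodes every instruction on every run, while binary translation decodes a block once, caches the result, and reuses it.

```python
def apply(op, arg, state):
    """Emulate one source instruction against an integer machine state."""
    return state + arg if op == "ADD" else state - arg

def interpret(program, state):
    """Interpretation: decode and emulate every instruction, every run."""
    for op, arg in program:
        state = apply(op, arg, state)
    return state

cache = {}

def translate_and_run(program, state):
    """Binary translation: pay a one-time translation cost per block,
    then reuse the cached native form on subsequent runs."""
    key = tuple(program)
    if key not in cache:
        # "Translate" each source instruction into a native callable.
        ops = {"ADD": lambda s, a: s + a, "SUB": lambda s, a: s - a}
        cache[key] = [(ops[op], arg) for op, arg in program]
    for fn, arg in cache[key]:   # later runs skip decoding entirely
        state = fn(state, arg)
    return state

block = [("ADD", 5), ("SUB", 2)]
assert interpret(block, 0) == translate_and_run(block, 0) == 3
```

Both paths compute the same result; they differ only in when the decoding work is done, which is exactly the startup-cost versus steady-state-performance contrast described above.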
50. Emulation, as described, is different from hardware-level virtualization. The former
simply allows the execution of a program compiled against a different hardware,
whereas the latter emulates a complete hardware environment where an entire
operating system can be installed
Application virtualization is a good solution in the case of missing libraries in the host
operating system; in this case a replacement library can be linked with the application,
or library calls can be remapped to existing functions available in the host system.
Another advantage is that in this case the virtual machine manager is much lighter,
since it provides only a partial emulation of the runtime environment compared to
hardware virtualization.
Moreover, this technique allows incompatible applications to run together. Compared
to programming-level virtualization, which works across all the applications
developed for that virtual machine, application-level virtualization works for a
specific environment: It supports all the applications that run on top of a specific
environment.
One of the most popular solutions implementing application virtualization is Wine,
which is a software application allowing Unix-like operating systems to execute
programs written for the Microsoft Windows platform.
Wine takes its inspiration from a similar product from Sun, Windows Application
Binary Interface (WABI), which implements the Win16 API specifications on Solaris.
VMware ThinApp, another product in this area, allows capturing the setup of an
installed application and packaging it into an executable image isolated from the
hosting operating system.
51. UNIT III CLOUD ARCHITECTURE, SERVICES AND STORAGE
NIST CLOUD REFERENCE ARCHITECTURE
The Conceptual Reference Model
The NIST cloud computing reference architecture identifies the major actors, their
activities and functions in cloud computing.
The diagram depicts a generic high-level architecture and is intended to facilitate the
understanding of the requirements, uses, characteristics and standards of cloud computing.
The NIST cloud computing reference architecture defines five major actors:
cloud consumer,
cloud provider,
cloud carrier,
cloud auditor and
cloud broker.
Each actor is an entity (a person or an organization) that participates in a transaction or process
and/or performs tasks in cloud computing.
52. Cloud Consumer
The cloud consumer is the principal stakeholder for the cloud computing service. A cloud
consumer represents a person or organization that maintains a business relationship with,
and uses the service from a cloud provider.
A cloud consumer browses the service catalog from a cloud provider, requests the
appropriate service, sets up service contracts with the cloud provider, and uses the service.
The cloud consumer may be billed for the service provisioned, and needs to arrange
payments accordingly.
A cloud consumer can freely choose a cloud provider with better pricing and more
favorable terms.
Typically a cloud provider's pricing policy and SLAs are non-negotiable, unless the
customer expects heavy usage and might be able to negotiate for better contracts.
SaaS applications are deployed in the cloud and made accessible via a network to the SaaS consumers.
SaaS consumers can be billed based on the number of end users, the time of use, the
network bandwidth consumed, the amount of data stored or duration of stored data.
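A minimal metering sketch built from the billing measures just listed (end users, time of use, bandwidth, stored data); all rates and usage figures are invented for the example.

```python
# Illustrative per-unit rates; real SaaS pricing varies by provider.
RATES = {
    "end_users": 2.00,     # $ per end user per month
    "hours": 0.05,         # $ per hour of use
    "bandwidth_gb": 0.10,  # $ per GB of network bandwidth consumed
    "storage_gb": 0.02,    # $ per GB-month of data stored
}

def monthly_bill(usage):
    """Sum metered usage times the per-unit rate for each measure."""
    return sum(RATES[measure] * amount for measure, amount in usage.items())

usage = {"end_users": 40, "hours": 600, "bandwidth_gb": 120, "storage_gb": 500}
print(f"${monthly_bill(usage):.2f}")  # 80 + 30 + 12 + 10 = $132.00
```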
Cloud consumers of PaaS can employ the tools and execution resources provided by cloud
providers to develop, test, deploy and manage the applications hosted in a cloud
environment
Consumers of IaaS have access to virtual computers, network-accessible storage, network
infrastructure components, and other fundamental computing resources on which they can
deploy and run arbitrary software.
Cloud Provider
A cloud provider is a person or an organization: the entity responsible for making a
service available to interested parties.
A cloud provider acquires and manages the computing infrastructure required for
providing the services, runs the cloud software that provides the services, and makes
arrangements to deliver the cloud services to cloud consumers through network access.
A cloud provider conducts its activities in the areas of service deployment, service
orchestration, cloud service management, security, and privacy.
Service Orchestration
Service orchestration refers to the composition of system components to support the Cloud
Provider's activities in the arrangement, coordination and management of computing resources in
order to provide cloud services to Cloud Consumers.
53. A three-layered model is used in this representation, grouping the
three types of system components that Cloud Providers need to compose to deliver their
services.
The top layer is the service layer: this is where Cloud Providers define the interfaces for Cloud
Consumers to access the computing services. The access interfaces of each of the three service
models are provided in this layer.
The optional dependency relationships among SaaS, PaaS, and IaaS components are
represented graphically as components stacking on each other;
The middle layer in the model is the resource abstraction and control layer. This layer
contains the system components that Cloud Providers use to provide and manage access to
the physical computing resources through software abstraction.
Examples of resource abstraction components include software elements such as
hypervisors, virtual machines, virtual data storage, and other computing resource
abstractions.
The lowest layer in the stack is the physical resource layer, which includes all the
physical computing resources. This layer includes hardware resources, such as computers
(CPU and memory), networks (routers, firewalls, switches, network links and interfaces),
storage components (hard disks) and other physical computing infrastructure elements. It
also includes facility resources, such as heating, ventilation and air conditioning (HVAC),
power, communications, and other aspects of the physical plant.
Cloud Service Management
Cloud Service Management includes all of the service-related functions that are necessary for the
management and operation of those services required by or proposed to cloud consumers. Cloud
service management can be described from the perspective of business support, provisioning and
configuration, and from the perspective of portability and interoperability requirements.
Business Support
Business Support entails the set of business-related services dealing with clients and supporting
processes. It includes the components used to run business operations that are client-facing.
Customer management: Manage customer accounts, open/close/terminate accounts,
manage user profiles, manage customer relationships by providing points-of-contact and
resolving customer issues and problems, etc.
Contract management: Manage service contracts; set up, negotiate, close or terminate
contracts, etc.
54. Inventory Management: Set up and manage service catalogs, etc.
Accounting and Billing: Manage customer billing information, send billing statements,
process received payments, track invoices, etc.
Reporting and Auditing: Monitor user operations, generate reports, etc.
Pricing and Rating: Evaluate cloud services and determine prices, handle promotions and
pricing rules based on a user's profile, etc.
Provisioning and Configuration
Rapid provisioning: Automatically deploying cloud systems based on the requested
service/resources/capabilities.
Resource changing: Adjusting configuration/resource assignment for repairs, upgrades and
joining new nodes into the cloud.
Monitoring and Reporting: Discovering and monitoring virtual resources, monitoring
cloud operations and events and generating performance reports.
Metering: Providing a metering capability at some level of abstraction appropriate to the
type of service (e.g., storage, processing, bandwidth, and active user accounts).
SLA management: Encompassing the SLA contract definition (basic schema with the QoS
parameters), SLA monitoring and SLA enforcement according to defined policies.
Portability and Interoperability
The proliferation of cloud computing promises cost savings in technology infrastructure
and faster software upgrades.
Cloud providers should provide mechanisms to support data portability, service
interoperability, and system portability
Data portability is the ability of cloud consumers to copy data objects into or out of a
cloud or to use a disk for bulk data transfer.
Service interoperability is the ability of cloud consumers to use their data and services
across multiple cloud providers with a unified management interface.
Cloud Auditor
A cloud auditor is a party that can perform an independent examination of cloud
service controls with the intent to express an opinion thereon.
Audits are performed to verify conformance to standards through review of objective
evidence.
A cloud auditor can evaluate the services provided by a cloud provider in terms of
security controls, privacy impact, performance, etc.
A privacy impact audit can help federal agencies comply with applicable privacy laws and
regulations governing an individual's privacy, and ensure the confidentiality, integrity, and
availability of an individual's personal information at every stage of development and
operation.
Security
It is critical to recognize that security is a cross-cutting aspect of the architecture that
spans across all layers of the reference model, ranging from physical security to
application security.
Therefore, security concerns in cloud computing architecture are not solely under the
purview of Cloud Providers, but also of Cloud Consumers and other relevant actors.
Cloud-based systems still need to address security requirements such as authentication,
authorization, availability, confidentiality, identity management, integrity, audit, security
monitoring, incident response, and security policy management.
55. While these security requirements are not new, we discuss cloud-specific perspectives to
help analyze and implement security in a cloud system.
Cloud Broker
As cloud computing evolves, the integration of cloud services can be too complex for
cloud consumers to manage.
A cloud consumer may request cloud services from a cloud broker, instead of contacting a
cloud provider directly.
A cloud broker is an entity that manages the use, performance and delivery of cloud
services and negotiates relationships between cloud providers and cloud consumers.
In general, a cloud broker can provide services in three categories [9]:
Service Intermediation: A cloud broker enhances a given service by improving some
specific capability and providing value-added services to cloud consumers. The
improvement can be managing access to cloud services, identity management,
performance reporting, enhanced security, etc.
Service Aggregation: A cloud broker combines and integrates multiple services into one
or more new services. The broker provides data integration and ensures the secure data
movement between the cloud consumer and multiple cloud providers.
Service Arbitrage: Service arbitrage is similar to service aggregation except that the
services being aggregated are not fixed. Service arbitrage means a broker has the
flexibility to choose services from multiple agencies. The cloud broker, for example, can
use a credit-scoring service to measure and select an agency with the best score.
Cloud Carrier
A cloud carrier acts as an intermediary that provides connectivity and transport of cloud
services between cloud consumers and cloud providers.
Cloud carriers provide access to consumers through network, telecommunication and
other access devices.
The distribution of cloud services is normally provided by network and
telecommunication carriers or a transport agent, where a transport agent refers to a
business organization that provides physical transport of storage media such as
high-capacity hard drives.
Note that a cloud provider will set up SLAs with a cloud carrier to provide services
consistent with the level of SLAs offered to cloud consumers, and may require the cloud
carrier to provide dedicated and secure connections between cloud consumers and cloud
providers.
56. CLOUD DEPLOYMENT MODELS
A cloud infrastructure may be operated in one of the following deployment models:
Public cloud,
Private cloud,
Community cloud, or
Hybrid cloud.
The differences are based on how exclusively the computing resources are made
available to a Cloud Consumer.
Public Cloud
A public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network. A public cloud is
owned by an organization selling cloud services, and serves a diverse pool of clients.
A public cloud is built over the Internet and can be accessed by any user who has
paid for the service.
In a public cloud, the services offered are made available to anyone, from
anywhere, and at any time through the Internet.
From a structural point of view, a public cloud is a distributed system, most likely
composed of one or more datacenters connected together, on top of which the
specific services offered by the cloud are implemented.
Any customer can easily sign in with the cloud provider, enter their credentials and
billing details, and use the services offered.
Public clouds offer solutions for minimizing IT infrastructure costs and serve as a
viable option for handling peak loads on the local infrastructure.
They have become an interesting option for small enterprises, which are able to start
their businesses without large up-front investments by completely relying on public
infrastructure for their IT needs.
A fundamental characteristic of public clouds is multi-tenancy. A public cloud is
meant to serve a multitude of users, not a single customer. Each customer requires a
virtual computing environment that is separated, and most likely isolated, from that
of other users.
A public cloud can offer any kind of service: infrastructure, platform, or
applications.
From an architectural point of view there is no restriction concerning the type of
distributed system implemented to support public clouds.
Public clouds can be composed of geographically dispersed data centers to share the
load of users and better serve them according to their locations.
Public clouds are better suited for business requirements that involve managing
variable load.
Benefit of Public Cloud
Public clouds promote standardization, preserve capital investment, and offer
application flexibility.
Example of Public Cloud
Amazon EC2 is a public cloud that provides infrastructure as a service;
57. Google AppEngine is a public cloud that provides an application development
platform as a service;
SalesForce.com is a public cloud that provides software as a service.
Drawbacks
In the case of public clouds, the provider is in control of the infrastructure and,
eventually, of the customers’ core logic and sensitive data.
The risk of a breach in the security infrastructure of the provider could expose
sensitive information to others.
A public cloud service offering gives the customer a low degree of control over the
physical and security aspects of the cloud.
Private Cloud
A private cloud gives a single Cloud Consumer organization exclusive access
to and usage of the infrastructure and computational resources.
In private cloud, the cloud infrastructure is operated solely for an organization.
It may be managed either by the Cloud Consumer organization or by a third party,
and may be hosted on the organization's premises.
Private clouds give local users a flexible and agile private infrastructure to run
service workloads within their administrative domains.
A private cloud is supposed to deliver more efficient and convenient cloud services.
It may limit cloud standardization, while retaining greater customization and
organizational control.
In a private cloud, security management and day-to-day operations are delegated to
internal IT or a third-party vendor under contractual SLAs.
Hence a customer of a private cloud service offering has a high degree of control
over the physical and security aspects of the cloud.
Security concerns are less critical, since sensitive information does not flow out of
the private infrastructure.
Businesses with dynamic or unforeseen needs, mission-critical assignments, strict
security, management, and uptime requirements are better suited to a private
cloud.
Private clouds have the advantage of keeping core business operations in-house
by relying on the existing IT infrastructure and reducing the burden of
maintaining it once the cloud has been set up.
Moreover, existing IT resources can be better utilized because the private cloud
can provide services to a different range of users.
Contrary to popular belief, a private cloud may exist off premises and can be
managed by a third party. Thus two private cloud scenarios exist, as follows:
On premises or On site Private Cloud
Applies to a private cloud implemented on the customer's premises.
Outsourced Private Cloud
Applies to private clouds where the server side is outsourced to a hosting company.
58. Key advantages of using a private cloud computing infrastructure
Customer information protection: in-house security is easier to maintain and rely
on.
Infrastructure ensuring SLAs.
Compliance with standard procedures and operations.
Private clouds attempt to achieve customization and offer higher efficiency,
resiliency, security, and privacy.
Drawback
From an architectural point of view, private clouds are often implemented on more
heterogeneous hardware: they generally rely on the existing IT infrastructure
already deployed on the private premises.
Private clouds can provide in-house solutions for cloud computing, but if compared
to public clouds they exhibit more limited capability to scale elastically on demand.
Example
VMware vSphere
OpenStack
Amazon VPC (Virtual Private Cloud)
Microsoft ECI data center
Hybrid and Community Cloud
A hybrid cloud is a composition of two or more clouds (on-site private, on-site
community, off-site private, off-site community or public) that remain as distinct
entities but are bound together by standardized or proprietary technology that
enables data and application portability.
Hybrid clouds allow enterprises to exploit existing IT infrastructures, maintain
sensitive information within the premises, and naturally grow and shrink by
provisioning external resources and releasing them when they’re no longer needed.
Hybrid clouds address scalability issues by leveraging external resources for
exceeding capacity demand.
These resources or services are temporarily leased for the time required and then
released. This practice is also known as cloud bursting.
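Cloud bursting can be sketched as a simple placement decision: demand that exceeds the private capacity is temporarily served by leased public resources, and released when the peak passes. The capacity numbers below are invented for illustration.

```python
# Minimal cloud-bursting sketch: requests beyond the private cloud's
# capacity "burst" to a public cloud; leased capacity is implicitly
# released when demand falls back below the private capacity.

def place_workload(demand, private_capacity):
    """Split demand between private capacity and temporarily leased public capacity."""
    private_used = min(demand, private_capacity)
    burst = max(0, demand - private_capacity)  # leased from the public cloud
    return {"private": private_used, "public_burst": burst}

# Normal load fits in-house; a peak bursts to the public cloud.
normal = place_workload(80, 100)
peak = place_workload(130, 100)
```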
A hybrid cloud provides access to clients, the partner network, and third parties.
In summary, public clouds promote standardization, preserve capital investment,
and offer application flexibility.
Private clouds attempt to achieve customization and offer higher efficiency,
resiliency, security, and privacy.
Hybrid clouds operate in the middle, with many compromises in terms of resource
sharing.
In hybrid cloud the resources are managed and provided either in-house or by
external providers.
It is an arrangement between two platforms in which workloads move between
the private cloud and the public cloud as need and demand dictate.
For example, organizations can use the hybrid cloud model for processing big data.
59. On a private cloud they can retain sales, business, and other data that needs security
and privacy.
Hybrid cloud hosting offers features such as scalability, flexibility, and
security.
Example
Microsoft Azure
VMware – vSphere for private and vCloud Air for public
Rackspace RackConnect
Community Cloud
A community cloud serves a group of Cloud Consumers which have shared concerns
such as mission objectives, security, privacy and compliance policy, rather than
serving a single organization as does a private cloud.
A community cloud is “shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements, policy,
and compliance considerations).”
Similar to private clouds, a community cloud may be managed by the organizations
or by a third party, and may be implemented on customer premises (i.e., on-site
community cloud) or outsourced to a hosting company (i.e., outsourced community
cloud).
From an architectural point of view, a community cloud is most likely implemented
over multiple administrative domains. This means that different organizations,
such as government bodies, private enterprises, research organizations, and even
public virtual infrastructure providers, contribute their resources to build the
cloud infrastructure.
Candidate sectors for community clouds are as follows:
Media industry
Healthcare industry
Energy and other core industries
Public sector
Scientific research
The benefits of these community clouds are the following:
Openness - By removing the dependency on cloud vendors, community clouds are
open systems in which fair competition between different solutions can happen.
Community - Being based on a collective that provides resources and services, the
infrastructure turns out to be more scalable because the system can grow simply by
expanding its user base.
Graceful failures - Since there is no single provider or vendor in control of the
infrastructure, there is no single point of failure.
Convenience and control - Within a community cloud there is no conflict between
convenience and control because the cloud is shared and owned by the community,
which makes all the decisions through a collective democratic process.
Environmental sustainability - The community cloud is supposed to have a smaller
carbon footprint because it harnesses underutilized resources.
60. CLOUD SERVICE MODELS
Infrastructure as a Service (IaaS)
In cloud computing, offering virtualized resources (computation, storage, and communication) on
demand is known as Infrastructure as a Service (IaaS).
This model allows users to use virtualized IT resources for computing, storage, and networking.
In short, the service is performed by rented cloud infrastructure. The user can deploy and run their
applications over their chosen OS environment.
They deliver customizable infrastructure on demand.
IaaS (Infrastructure as a Service) provides the computing infrastructure: physical or (quite often)
virtual machines and other resources such as virtual-machine disk image libraries, block and file-based
storage, firewalls, load balancers, IP addresses, virtual local area networks, etc.
A cloud infrastructure enables on-demand provisioning of servers running several choices of operating
systems and a customized software stack. Infrastructure services are considered to be the bottom layer
of cloud computing systems.
Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.
The main technology used to deliver and implement these solutions is hardware virtualization: one or
more virtual machines, opportunely configured and interconnected, define the distributed system on
top of which applications are installed and deployed.
IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload partitioning,
application isolation, sandboxing, and hardware tuning.
From the perspective of the service provider, IaaS/HaaS allows better exploitation of the IT
infrastructure and provides a more secure environment for executing third-party applications.
From the perspective of the customer, it reduces administration and maintenance costs as well as the
capital costs allocated to purchasing hardware.
61. It is possible to distinguish three principal layers:
the physical infrastructure,
the software management infrastructure, and
the user interface.
At the top layer the user interface provides access to the services exposed by the software management
infrastructure. Such an interface is generally based on Web 2.0 technologies: Web services, RESTful
APIs, and mash-ups.
The core features of an IaaS solution are implemented in the infrastructure management software
layer. In particular, management of the virtual machines is the most important function performed
by this layer. A central role is played by the scheduler, which is in charge of allocating the execution
of virtual machine instances.
The bottom layer is composed of the physical infrastructure, on top of which the management layer
operates.
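The scheduler's role in the management layer can be illustrated with a minimal first-fit placement sketch. The host names and core counts below are hypothetical, and real IaaS schedulers weigh many more factors (memory, affinity, load balancing) than this toy does.

```python
# First-fit VM scheduler sketch: allocate each requested VM instance to
# the first physical host with enough free cores, mutating the hosts'
# free-core counts as placements are made.

def schedule(vm_requests, hosts):
    """Assign each VM (name, cores) to the first host with free capacity.

    `hosts` maps host name -> free cores (mutated in place);
    returns a dict mapping vm name -> host name, or None if unplaceable.
    """
    placement = {}
    for vm, cores in vm_requests:
        placement[vm] = None
        for host, free in hosts.items():
            if free >= cores:
                hosts[host] = free - cores
                placement[vm] = host
                break
    return placement

hosts = {"host-1": 4, "host-2": 8}
plan = schedule([("vm-a", 4), ("vm-b", 6), ("vm-c", 4)], hosts)
```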
62. In the case of complete IaaS solutions, all three levels are offered as a service. This is generally the case
with public cloud vendors such as Amazon, GoGrid, Joyent, Rightscale, Terremark, Rackspace,
ElasticHosts, and Flexiscale.
Platform as a Service (PaaS)
Platform-as-a-Service (PaaS) solutions provide a development and deployment platform for running
applications in the cloud. They constitute the middleware on top of which applications are built.
In PaaS, users can develop, deploy, and manage the execution of applications using provisioned
resources; this demands a cloud platform with the proper software environment.
Such a platform includes operating system and runtime library support.
PaaS (Platform as a Service) provides computing platforms, which typically include an operating
system, programming language execution environment, database, web server, etc.
Examples:
Google AppEngine, an example of Platform as a Service
AWS Elastic Beanstalk,
Windows Azure, Heroku,
Force.com,
Apache Stratos
Application management is the core functionality of the middleware. PaaS implementations provide
applications with a runtime environment and do not expose any service for managing the underlying
infrastructure.
Developers design their systems in terms of applications and are not concerned with hardware (physical
or virtual), operating systems, and other low-level services.
The core middleware is in charge of managing the resources and scaling applications on demand or
automatically, according to the commitments made with users.
Developers generally have the full power of programming languages such as Java, .NET, Python,
or Ruby, with some restrictions to provide better scalability and security.
In this case the traditional development environments can be used to design and develop applications,
which are then deployed on the cloud by using the APIs exposed by the PaaS provider.
PaaS solutions can offer middleware for developing applications together with the infrastructure or
simply provide users with the software that is installed on the user premises.
It is possible to organize the various solutions into three wide categories: PaaS-I, PaaS-II, and PaaS-III.
The first category identifies PaaS implementations that completely follow the cloud computing style for
application development and deployment.
Example - Force.com and Longjump. Both deliver as platforms the combination of middleware and
infrastructure.
In the second class we can list all those solutions that are focused on providing a scalable infrastructure
63. for Web applications, mostly websites. In this case, developers generally use the providers’ APIs, which
are built on top of industrial runtimes, to develop applications.
Example - Google AppEngine is the most popular product in this category.
The third category consists of all those solutions that provide a cloud programming platform for any
kind of application, not only Web applications
Example - Microsoft Windows Azure, which provides a comprehensive framework for building
service-oriented cloud applications on top of the .NET technology, hosted on Microsoft’s
datacenters
Manjrasoft Aneka, Apprenda SaaSGrid, Appistry Cloud IQ Platform, DataSynapse, and GigaSpaces
DataGrid provide only middleware with different services
Some essential characteristics that identify a PaaS solution:
Runtime framework
Abstraction
Automation
Cloud services
Another essential component for a PaaS-based approach is the ability to integrate third-party cloud
services offered from other vendors by leveraging service-oriented architecture.
One of the major concerns of leveraging PaaS solutions for implementing applications is vendor
lock-in.
Differently from IaaS solutions, which deliver bare virtual servers that can be fully customized in
terms of the software stack installed, PaaS environments deliver a platform for developing
applications, which exposes a well-defined set of APIs and, in most cases, binds the application to
the specific runtime of the PaaS provider.
Finally, from a financial standpoint, although IaaS solutions allow shifting the capital cost into
operational costs through outsourcing, PaaS solutions can cut the cost across development, deployment,
and management of applications.
It helps management reduce the risk of ever-changing technologies by offloading the cost of upgrading
the technology to the PaaS provider.
Software as a Service (SaaS)
Software-as-a-Service (SaaS) is a software delivery model that provides access to applications through
the Internet as a Web-based service.
It provides a means to free users from complex hardware and software management by offloading
such tasks to third parties, which build applications accessible to multiple users through a Web
browser.
In the SaaS (Software as a Service) model, you are provided with access to application software, often referred
64. to as "on-demand software".
There is no need to worry about the installation, setup, and running of the application; the service
provider will do that for you. You just have to pay and use it through some client.
On the provider side, the specific details and features of each customer’s application are maintained
in the infrastructure and made available on demand.
The SaaS model is appealing for applications serving a wide range of users and that can be adapted to
specific needs with little further customization. This requirement characterizes SaaS as a “one-to-many”
software delivery model, whereby an application is shared across multiple users.
The SaaS model provides software applications as a service. As a result, on the customer side, there is
no upfront investment in servers or software licensing.
On the provider side, costs are kept rather low, compared with conventional hosting of user
applications. Customer data is stored in the cloud that is either vendor proprietary or publicly hosted to
support PaaS and IaaS.
Examples: Google Apps, Microsoft Office 365.
SaaS applications are naturally multitenant.
Multitenancy, which is a feature of SaaS compared to traditional packaged software, allows providers to
centralize and sustain the effort of managing large hardware infrastructures, maintaining and upgrading
applications transparently to the users, and optimizing resources by sharing the costs among the large
user base.
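The multitenancy described above can be sketched as a single shared application and data store in which every query is scoped by a tenant identifier, so tenants stay isolated from one another. The tenant names and records below are invented for the example.

```python
# Multitenancy sketch: one shared table serves many customers (tenants);
# each row carries a tenant ID, and every query filters on it so a
# tenant can only ever see its own data.

records = [
    {"tenant": "acme", "contact": "alice@example.com"},
    {"tenant": "acme", "contact": "bob@example.com"},
    {"tenant": "globex", "contact": "carol@example.com"},
]

def contacts_for(tenant_id):
    """A tenant sees only its own rows of the shared table."""
    return [r["contact"] for r in records if r["tenant"] == tenant_id]
```

Because all tenants share one deployment, the provider can upgrade the application once for everyone and amortize the infrastructure cost across the whole user base.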
Benefits of SaaS
Software cost reduction and lower total cost of ownership (TCO)
Service-level improvements
Rapid implementation
Standalone and configurable applications
Rudimentary application and data integration
Subscription and pay-as-you-go (PAYG) pricing
Software-as-a-Service applications can serve different needs. CRM, ERP, and social networking
applications are definitely the most popular ones.
SalesForce.com is probably the most successful and popular example of a CRM service
Another important class of popular SaaS applications comprises social networking applications such as
Facebook and professional networking sites such as LinkedIn.
Office automation applications are also an important representative for SaaS applications: Google
Documents and Zoho Office are examples of Web-based applications that aim to address all user needs
for documents, spreadsheets, and presentation management.
It is important to note the role of SaaS solution enablers, which provide an environment in which to
integrate third-party services and share information with others.
69. CLOUD COMPUTING DESIGN CHALLENGES
Cloud computing presents many challenges for industry and academia. These include the
interoperation between different clouds, the creation of standards, security, scalability, fault tolerance,
and organizational aspects.
Cloud interoperability and standards
Cloud computing is a service-based model for delivering IT infrastructure and applications like utilities such
as power, water, and electricity.
To fully realize this goal, introducing standards and allowing interoperability between solutions offered by
different vendors are objectives of fundamental importance.
Vendor lock-in constitutes one of the major strategic barriers against the seamless adoption of cloud
computing at all stages.
Vendor lock-in can prevent a customer from switching to another competitor’s solution.
The presence of standards that are actually implemented and adopted in the cloud computing community
could give room for interoperability and then lessen the risks resulting from vendor lock-in.
The standardization efforts are mostly concerned with the lower level of the cloud computing architecture,
which is the most popular and developed.
The Open Virtualization Format (OVF) [51] is an attempt to provide a common format for storing the
information and metadata describing a virtual machine image.
Another direction in which standards try to move is devising general reference architecture for cloud
computing systems and providing a standard interface through which one can interact with them.
Scalability and fault tolerance
The ability to scale on demand constitutes one of the most attractive features of cloud computing. Clouds
allow scaling beyond the limits of the existing in-house IT resources, whether they are infrastructure
(compute and storage) or applications services.
To implement such a capability, the cloud middleware has to be designed with the principle of scalability
along different dimensions in mind—for example, performance, size, and load.
At scale, the ability to tolerate failure becomes fundamental, sometimes even more important than
providing an extremely efficient and optimized system.
Hence, the challenge in this case is designing highly scalable and fault-tolerant systems that are easy to
manage and at the same time provide competitive performance.
Security, trust, and privacy
Security, trust, and privacy issues are major obstacles for massive adoption of cloud computing.
The traditional cryptographic technologies are used to prevent data tampering and access to sensitive
information.
The massive use of virtualization technologies exposes the existing system to new threats, which previously
were not considered applicable.
70. It then happens that a new way of using existing technologies creates new opportunities for additional
threats to the security of applications.
The lack of control over their own data and processes also poses severe problems for the trust we give to
the cloud service provider and the level of privacy we want to have for our data.
On one side we need to decide whether to trust the provider itself; on the other side, specific regulations can
simply prevail over the agreement the provider is willing to establish with us concerning the privacy of the
information managed on our behalf.
The challenges in this area are, then, mostly concerned with devising secure and trustable systems from
different perspectives: technical, social, and legal.
Organizational aspects
Cloud computing introduces a significant change in the way IT services are consumed and managed.
More precisely, storage, compute power, network infrastructure, and applications are delivered as
metered services over the Internet.
This introduces a billing model that is new within typical enterprise IT departments, which requires a certain
level of cultural and organizational process maturity. In particular, a wide acceptance of cloud computing will
require a significant change to business processes and organizational boundaries.
From an organizational point of view, the lack of control over the management of data and processes poses
not only security threats but also new problems that previously did not exist.
Traditionally, when there was a problem with computer systems, organizations developed strategies and
solutions to cope with them, often by relying on local expertise and knowledge.
72. What is cloud storage?
History
J.C.R. Licklider – one of the fathers of the
cloud-based computing idea.
He envisioned a global network that allows
access from anywhere at any time.
The idea was ahead of the technological limits of the 1960s.
What is cloud storage?
Cloud storage is a service model in which data is
maintained, managed and backed up remotely and
made available to users over a network (typically the
Internet).
How does cloud storage work?
• Redundancy – the core of cloud storage
• Equipment – data servers, power supplies
• Data files – replication
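The redundancy and replication listed above can be sketched as placing several copies of each file on distinct servers, so the data survives a server failure. The server names and the replica count are illustrative.

```python
# Replication sketch: each file is copied to several distinct servers;
# the file remains available as long as at least one replica sits on a
# server that has not failed.
import random

def replicate(filename, servers, copies=3):
    """Place `copies` replicas of a file on distinct servers."""
    return random.sample(servers, k=copies)

def still_available(replicas, failed):
    """The file survives as long as one replica is outside the failed set."""
    return any(s not in failed for s in replicas)

servers = ["dc1-s1", "dc1-s2", "dc2-s1", "dc2-s2"]
replicas = replicate("report.doc", servers)
```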
73. 15-11-2021
3
9
Provider failures
Amazon S3 systems failure downs Web 2.0 sites
Twitterers lose their faces, others just want their data back
Computer World, July 21, 2008
Customers Shrug Off S3 Service Failure
At about 7:30 EST this morning, S3, Amazon.com's
online storage service, went down. The 2-hour
service failure affected customers worldwide.
Wired, Feb. 15, 2008
Loss of customer data spurs closure of
online storage service 'The Linkup'
Network World, Nov 8, 2008
Spectacular Data Loss
Drowns Sidekick Users
October 10, 2009
Temporary
unavailability
Permanent
data loss
How do we increase users’ confidence in the cloud?
Cloud Storage
iCloud
• iCloud is a service provided by
Apple
• 5 GB of storage space is free of cost
• Once iCloud is used, you can
share your stored data on any of
your different Apple devices
• Access to all files, music, calendar,
email
• Only iOS 5 has iCloud installed
First 1 TB / month $0.140 per GB
Next 49 TB / month $0.125 per GB
Next 450 TB / month $0.110 per GB
Next 500 TB / month $0.095 per GB
Next 4000 TB / month $0.080 per GB
Over 5000 TB / month $0.055 per GB
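Assuming the tiers above apply cumulatively (each tier's rate charged only on the gigabytes falling inside that tier, with 1 TB taken as 1,024 GB for illustration), a monthly bill can be sketched as:

```python
# Cumulative tiered-pricing calculator for the per-GB monthly rates
# listed above. Tier boundaries and prices are copied from the table.

TIERS = [  # (tier size in TB, price per GB per month)
    (1, 0.140),
    (49, 0.125),
    (450, 0.110),
    (500, 0.095),
    (4000, 0.080),
    (float("inf"), 0.055),  # everything over 5,000 TB
]

def monthly_cost(tb_stored):
    """Monthly storage cost in dollars for `tb_stored` terabytes."""
    cost, remaining = 0.0, tb_stored
    for size_tb, price_per_gb in TIERS:
        in_tier = min(remaining, size_tb)
        cost += in_tier * 1024 * price_per_gb
        remaining -= in_tier
        if remaining <= 0:
            break
    return round(cost, 2)
```

For example, 1 TB costs 1,024 GB at $0.140, while 50 TB pays the first terabyte at $0.140 and the next 49 at $0.125.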
                Home:             Business:
Packages:       3                 2
Price Range:    $7.95 - $24.95    $49.95 - $159.95
Storage Space:  2TB - 5TB         2TB - 10TB+
Users:          1                 3 - 10+
74. Free Options
Data Storage Saving:
• By storing your data online you reduce the burden on your hard disk,
which means you eventually save disk space
World Wide Accessibility
• You can access your data anywhere in the world. You don’t have to carry your
hard disk, pen drive, or any other storage device
Data Safety
• You cannot trust your HDD and storage devices all the time, because they can
crash at any moment
• To keep your data safe from such hazards, you can keep it online
Advantages
75. Security
• Most online storage sites provide strong security
• Only the user can access the account
Easy sharing
• You can share data in a faster, easier, and more secure manner
Data Recovery
• Online data storage sites provide quick recovery of your files and
folders
• This makes them safer and more secure
Automatic backup
• Users can even schedule automatic backups of their personal
computers to avoid manually backing up files
Advantages
Improper handling can cause trouble
• You must keep your user ID and password safe to protect your
data
• If someone knows or even guesses your credentials, it may result in
loss of data
• Use complex passwords, and avoid storing them on personal
storage devices such as pen drives and HDDs
Disadvantages
Choose a trustworthy source to avoid any hazard
• There are many online storage sites out there, but you have to
choose one that you can trust
Dependence on an Internet connection
• To access your files everywhere, the only thing you need is an Internet
connection
• If you cannot get an Internet connection somewhere, you will end up
with no access to your data, even though it is safely stored online
Disadvantages
76. Cloud Storage
• Several large Web companies are now exploiting the fact that they
have data storage capacity that can be hired out to others.
– allows data stored remotely to be temporarily cached on desktop
computers, mobile phones or other Internet-linked devices.
• Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service
(S3) are well-known examples
Amazon Simple Storage Service (S3)
• Amazon S3 provides a simple web services interface that can be used
to store and retrieve any amount of data, at any time, from anywhere
on the web.
• S3 provides the object-oriented storage service for users.
• Users can access their objects through Simple Object Access Protocol
(SOAP) with either browsers or other client programs which support
SOAP.
• Amazon SQS (Simple Queue Service) is responsible for ensuring a reliable
message service between two processes, even if the receiver processes are not running.
• The fundamental operation unit of S3 is called an object.
• Each object is stored in a bucket and retrieved via a unique,
developer-assigned key; the object has other attributes such as values,
metadata, and access control information.
• The storage provided by S3 can be viewed as a very coarse-grained
key-value pair.
• Through the key-value programming interface, users can write, read,
and delete objects containing from 1 byte to 5 gigabytes of data each.
• There are two types of web service interface for the user to access the
data stored in Amazon clouds.
• REST (web 2.0) interface,
• SOAP interface.
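The bucket/key object model described above can be sketched as a toy in-memory store. This is an illustration of the data model only, not the real S3 API; the bucket and key names are invented.

```python
# Toy model of the S3 data model: objects live in buckets, addressed by
# a developer-assigned key, with a size limit of 1 byte to 5 GB each.

FIVE_GB = 5 * 1024 ** 3
buckets = {}  # bucket name -> {key -> object data}

def put_object(bucket, key, data: bytes):
    """Write an object, enforcing the 1 byte to 5 GB size limit."""
    if not (1 <= len(data) <= FIVE_GB):
        raise ValueError("object must be 1 byte to 5 GB")
    buckets.setdefault(bucket, {})[key] = data

def get_object(bucket, key):
    return buckets[bucket][key]

def delete_object(bucket, key):
    del buckets[bucket][key]
```

A real client would issue the same write/read/delete operations over the REST or SOAP interfaces mentioned above instead of touching a local dictionary.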
Amazon Simple Storage Service (S3)
77.
• Redundant through geographic dispersion.
• Designed to provide 99.999999999 percent durability and 99.99
percent availability of objects over a given year; cheaper reduced
redundancy storage (RRS) trades some durability for a lower price.
• Authentication mechanisms to ensure that data is kept secure
from unauthorized access.
• Objects can be made private or public, and rights can be granted
to specific users.
• Per-object URLs and ACLs (access control lists).
Key features of S3:
• Default download protocol of HTTP.
• A BitTorrent protocol interface is provided to lower costs for
high-scale distribution.
• $0.055 (more than 5,000 TB) to $0.15 per GB per month for storage
(depending on the total amount stored).
• First 1 GB per month of input or output is free, and then $0.08 to $0.15 per
GB for transfers outside an S3 region.
• There is no data transfer charge for data transferred between
Amazon EC2 and Amazon S3 within the same region or for data
transferred between the Amazon EC2 Northern Virginia region and
the Amazon S3 U.S. Standard region (as of October 6, 2010).
Key features of S3:
Amazon Elastic Block Store (EBS) and SimpleDB
• The Elastic Block Store (EBS) provides the volume block interface for
saving and restoring the virtual images of EC2 instances.
• The status of an EC2 instance is saved in the EBS system after the machine is
shut down.
• Users can use EBS to save persistent data and mount it to running
EC2 instances.
• EBS is analogous to a distributed file system accessed by traditional OS
disk access mechanisms.
• EBS allows you to create storage volumes from 1 GB to 1 TB that can
be mounted by EC2 instances.
• Multiple volumes can be mounted to the same instance.
• These storage volumes behave like raw, unformatted block devices,
with user-supplied device names and a block device interface.
• You can create a file system on top of Amazon EBS volumes, or use
them in any other way you would use a block device (like a hard
drive).
• Snapshots are provided so that the data can be saved incrementally.
• EBS also charges $0.10 per 1 million I/O requests made to the storage
(as of October 6, 2010).
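The incremental snapshot idea mentioned above can be sketched as storing, for each snapshot, only the blocks that changed since the previous one. The block numbers and volume contents are invented for illustration.

```python
# Incremental snapshot sketch for a block volume: a snapshot records
# only the blocks that differ from the state reconstructed from all
# earlier snapshots, so unchanged data is never stored twice.

volume = {}     # block number -> contents
snapshots = []  # each snapshot: dict of changed blocks only

def write_block(n, data):
    volume[n] = data

def take_snapshot():
    base = {}
    for snap in snapshots:  # replay snapshots to get the last saved state
        base.update(snap)
    changed = {n: d for n, d in volume.items() if base.get(n) != d}
    snapshots.append(changed)

write_block(0, "boot"); write_block(1, "data-v1")
take_snapshot()          # first snapshot: full copy of blocks 0 and 1
write_block(1, "data-v2")
take_snapshot()          # incremental: only the rewritten block 1
```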
Amazon Elastic Block Store(EBS) and SimpleDB
78. Amazon SimpleDB Service
• SimpleDB provides a simplified data model based on the relational
database data model.
• Structured data from users must be organized into domains.
• Each domain can be considered a table.
• The items are the rows in the table.
• A cell in the table is recognized as the value for a specific attribute
(column name) of the corresponding row.
• This is similar to a table in a relational database, except that it is possible to
assign multiple values to a single cell in the table.
• This is not permitted in a traditional relational database, which seeks to
maintain data consistency.
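The domain/item/attribute model, including the multi-valued cells that a relational table would forbid, can be sketched as nested dictionaries. The item and attribute names below are invented.

```python
# SimpleDB-style data model sketch: a domain is like a table, items are
# rows, and each attribute ("cell") may hold a SET of values rather
# than the single value a relational column would allow.

domain = {}  # item name -> {attribute name -> set of values}

def put_attributes(item, attrs):
    """Add (attribute, value) pairs to an item; values accumulate per cell."""
    row = domain.setdefault(item, {})
    for name, value in attrs:
        row.setdefault(name, set()).add(value)

put_attributes("song-1", [("title", "Imagine"), ("genre", "rock")])
put_attributes("song-1", [("genre", "pop")])  # second value in the same cell
```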
• Many developers simply want to quickly store, access, and query the
stored data.
• SimpleDB removes the requirement to maintain database schemas with
strong consistency.
• SimpleDB is priced at $0.140 per Amazon SimpleDB Machine Hour
consumed with the first 25 Amazon SimpleDB Machine Hours
consumed per month free (as of October 6, 2010).
Hence SimpleDB is sometimes called a “LittleTable”