Cloud Computing
The term “cloud” appears to have its origins in network diagrams that represented the internet, or various parts of it, as schematic clouds. “Cloud computing” was coined for what happens when applications and services are moved into the internet “cloud.” Cloud computing is not something that suddenly appeared overnight; in some form, it may trace back to a time when computer systems remotely time-shared computing resources and applications. More currently, though, cloud computing refers to the many different types of services and applications being delivered in the internet cloud, and to the fact that, in many cases, the devices used to access these services and applications do not require any special applications.
Cloud computing: “It is a techno-business disruptive model of using distributed large-scale data centers, either private or public or hybrid, offering customers a scalable virtualized infrastructure or an abstracted set of services qualified by service-level agreements (SLAs) and charged only by the abstracted IT resources consumed.”
Many companies are delivering services from the cloud. Some notable examples include the
following:
• Google — Has a private cloud that it uses for delivering Google Docs and many other services
to its users, including email access, document applications, text translations, maps, web
analytics, and much more.
• Microsoft — Has Microsoft® Office 365® online service that allows for content and business
intelligence tools to be moved into the cloud, and Microsoft currently makes its office
applications available in a cloud.
• Salesforce.com — Runs its application set for its customers in a cloud, and its Force.com and
Vmforce.com products provide developers with platforms to build customized cloud services.
Characteristics
Cloud computing has a variety of characteristics, with the main ones being:
• Shared Infrastructure — Uses a virtualized software model, enabling the sharing of physical
services, storage, and networking capabilities. The cloud infrastructure, regardless of deployment
model, seeks to make the most of the available infrastructure across a number of users.
• Dynamic Provisioning — Allows for the provision of services based on current demand
requirements. This is done automatically using software automation, enabling the expansion and
contraction of service capability, as needed. This dynamic scaling needs to be done while
maintaining high levels of reliability and security.
• Network Access — Needs to be accessed across the internet from a broad range of devices such
as PCs, laptops, and mobile devices, using standards-based APIs (for example, ones based on
HTTP). Deployments of services in the cloud include everything from using business
applications to the latest application on the newest smartphones.
• Managed Metering — Uses metering for managing and optimizing the service and to provide
reporting and billing information. In this way, consumers are billed for services according to how
much they have actually used during the billing period.
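The pay-per-use billing behind managed metering can be sketched as a short routine; the meter names and rates below are invented for illustration, not any real provider's pricing:

```python
# Hypothetical metered-billing sketch; meters and rates are made up.
RATES = {
    "compute_hours": 0.10,      # price per instance-hour
    "storage_gb_month": 0.02,   # price per GB-month of storage
    "egress_gb": 0.05,          # price per GB transferred out
}

def bill(usage):
    """Sum metered usage for one billing period into a single charge."""
    return round(sum(RATES[meter] * amount for meter, amount in usage.items()), 2)

# The consumer is billed only for what was actually consumed:
period_usage = {"compute_hours": 720, "storage_gb_month": 50, "egress_gb": 10}
print(bill(period_usage))  # 720*0.10 + 50*0.02 + 10*0.05 = 73.5
```

The same metering records also feed the reporting side: the provider can show the consumer exactly which meters produced the charge.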
Service Models
Once a cloud is established, how its cloud computing services are deployed in terms of business
models can differ depending on requirements. The primary service models being deployed are
commonly known as:
• Software as a Service (SaaS) — Consumers purchase the ability to access and use an
application or service that is hosted in the cloud. A benchmark example of this is Salesforce.com,
as discussed previously, where necessary information for the interaction between the consumer
and the service is hosted as part of the service in the cloud. Also, Microsoft has made a significant investment in this area, and as part of the cloud computing option for Microsoft® Office 365, its Office suite is available as a subscription through its cloud-based Online Services.
• Platform as a Service (PaaS) — Consumers purchase access to the platforms, enabling them
to deploy their own software and applications in the cloud. The operating systems and network
access are not managed by the consumer, and there might be constraints as to which applications
can be deployed. Examples include Amazon Web Services (AWS), Rackspace and Microsoft
Azure.
• Infrastructure as a Service (IaaS) — Consumers control and manage the systems in terms of
the operating systems, applications, storage, and network connectivity, but do not themselves
control the cloud infrastructure.
Also known are the various subsets of these models that may be related to a particular industry or
market.
Communications as a Service (CaaS) is one such subset model used to describe hosted IP
telephony services. Along with the move to CaaS is a shift to more IP-centric communications
and more SIP trunking deployments. With IP and SIP in place, it can be as easy to have the PBX
in the cloud as it is to have it on the premises. In this context, CaaS could be seen as a subset of
SaaS.
Difference between IaaS, PaaS, and SaaS
The table below shows the difference between IaaS, PaaS, and SaaS:

IaaS | PaaS | SaaS
Provides a virtual data center to store information and create platforms for app development, testing, and deployment. | Provides virtual platforms and tools to create, test, and deploy apps. | Provides web software and apps to complete business tasks.
Provides access to resources such as virtual machines, virtual storage, etc. | Provides runtime environments and deployment tools for applications. | Provides software as a service to the end users.
Used by network architects. | Used by developers. | Used by end users.
Provides only infrastructure. | Provides infrastructure + platform. | Provides infrastructure + platform + software.
Deployment Models
Deploying cloud computing can differ depending on requirements, and the following four
deployment models have been identified, each with specific characteristics that support the needs
of the services and users of the clouds in particular ways.
• Private Cloud — The cloud infrastructure has been deployed, and is maintained and operated
for a specific organization. The operation may be in-house or with a third party on the premises.
• Community Cloud — The cloud infrastructure is shared among a number of organizations
with similar interests and requirements. This may help limit the capital expenditure costs for its
establishment as the costs are shared among the organizations. The operation may be in-house or
with a third party on the premises.
• Public Cloud — The cloud infrastructure is made available to the public on a commercial basis by a cloud service provider. This enables a consumer to develop and deploy a service in the cloud
with very little financial outlay compared to the capital expenditure requirements normally
associated with other deployment options.
• Hybrid Cloud — The cloud infrastructure consists of a number of clouds of any type, but the
clouds have the ability through their interfaces to allow data and/or applications to be moved
from one cloud to another. This can be a combination of private and public clouds that support
the requirement to retain some data in an organization, and also the need to offer services in the
cloud.
Components of Cloud Computing Architecture
The following are the components of cloud computing architecture:
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
The service component manages which type of service the client accesses, according to the client's requirements.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS applications run directly in the web browser, meaning we do not need to download and install them. Some important examples of SaaS are given below:
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite
similar to SaaS, but the difference is that PaaS provides a platform for software creation, but
using SaaS, we can access software over the internet without the need of any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. Here the customer manages applications, data, middleware, and runtime environments, while the provider manages the underlying infrastructure.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud
computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and the back end interact and communicate with each other.
Benefits
The following are some of the possible benefits for those who offer cloud computing-based
services and applications:
• Cost Savings — Companies can reduce their capital expenditures and use operational
expenditures for increasing their computing capabilities. This is a lower barrier to entry and also
requires fewer in-house IT resources to provide system support.
• Scalability/Flexibility — Companies can start with a small deployment and grow to a large
deployment fairly rapidly, and then scale back if necessary. Also, the flexibility of cloud
computing allows companies to use extra resources at peak times, enabling them to satisfy
consumer demands.
• Reliability — Services using multiple redundant sites can support business continuity and
disaster recovery.
• Maintenance — Cloud service providers do the system maintenance, and access is through
APIs that do not require application installations onto PCs, thus further reducing maintenance
requirements.
• Mobile Accessible — Mobile workers have increased productivity because systems are accessible from anywhere through the cloud infrastructure.
Challenges
The following are some of the notable challenges associated with cloud computing, and although
some of these may cause a slowdown when delivering more services in the cloud, most also can
provide opportunities, if resolved with due care and attention in the planning stages.
• Security and Privacy — Perhaps two of the more “hot button” issues surrounding cloud
computing relate to storing and securing data, and monitoring the use of the cloud by the service
providers. These issues are generally attributed to slowing the deployment of cloud services.
These challenges can be addressed, for example, by storing the information internal to the organization but allowing it to be used in the cloud. For this to occur, though, the security mechanisms between the organization and the cloud need to be robust; a hybrid cloud could support such a deployment.
• Lack of Standards — Clouds have documented interfaces; however, no standards are
associated with these, and thus it is unlikely that most clouds will be interoperable. The Open
Grid Forum is developing an Open Cloud Computing Interface to resolve this issue and the Open
Cloud Consortium is working on cloud computing standards and practices. The findings of these
groups will need to mature, but it is not known whether they will address the needs of the people
deploying the services and the specific interfaces these services need. However, keeping up to
date on the latest standards as they evolve will allow them to be leveraged, if applicable.
• Continuously Evolving — User requirements are continuously evolving, as are the
requirements for interfaces, networking, and storage. This means that a “cloud,” especially a
public one, does not remain static and is also continuously evolving.
• Compliance Concerns — The Sarbanes-Oxley Act (SOX) in the US and Data Protection
directives in the EU are just two among many compliance issues affecting cloud computing,
based on the type of data and application for which the cloud is being used. The EU has a
legislative backing for data protection across all member states, but in the US data protection is
different and can vary from state to state. As with security and privacy mentioned previously,
these typically result in Hybrid cloud deployment with one cloud storing the data internal to the
organization.
Migrating into a Cloud
Most enterprises today are powered by captive data centers. In most large or small enterprises today, IT is the backbone of their operations. Invariably for these large enterprises, their data centers are distributed across various geographies. They comprise systems and software that span several generations of products sold by a variety of IT vendors. In order to meet varying loads, most of these data centers are provisioned with capacity beyond the peak loads experienced. If the enterprise is in a seasonal or cyclical business, then the load variation would be significant. Thus what is observed generally is that the provisioned capacity of IT resources is several times the average demand, which is indicative of a significant degree of idle capacity. Many data center management teams have been continuously innovating their management practices and the technologies deployed, to squeeze out the last possible usable computing resource cycle through appropriate programming, systems configurations, SLAs, and systems management.
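The idle-capacity observation above is simple arithmetic: if capacity is provisioned for the peak, average utilization is the ratio of average demand to provisioned capacity. A sketch with hypothetical numbers:

```python
def utilization(avg_demand, provisioned_capacity):
    """Fraction of provisioned capacity actually used on average."""
    return avg_demand / provisioned_capacity

# A seasonal business sized for a peak of 500 servers but averaging 100:
peak, avg = 500, 100
print(utilization(avg, peak))  # 0.2 -> 80% of the capacity sits idle
```

This idle 80% is exactly the cost that pay-per-use cloud capacity lets an enterprise avoid.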
Cloud computing turned attractive to them because they could pass on the additional demand
from their IT setups onto the cloud while paying only for the usage and being unencumbered by
the load of operations and management.
Why Migrate
• Business Reasons
• Technological Reasons
What can be Migrated
• Application
• Code
• Design
• Architecture
• Usage
Obstacles for cloud computing Technology:
Obstacle 1: Business Continuity and Service Availability
Organizations often worry about the availability of the service provided by cloud providers. Even popular service providers like Amazon, Google, and Microsoft experience outages. Keeping the technical issues of availability aside, a cloud provider could suffer outages for non-technical reasons, such as going out of business or regulatory action.
Obstacle 2: Data Lock-In
Data lock-in refers to the tight dependency of an organization's business on the software or hardware infrastructure of a cloud provider. Even though software stacks have improved interoperability among platforms, storage APIs are still essentially proprietary, or at least have not been the subject of active standardization. As a result, customers cannot easily extract their data and programs from one site to run on another, as in hybrid cloud computing or surge computing.
Obstacle 3: Data Confidentiality/Auditability
Security of sensitive information in the cloud is one of the most often cited objections to cloud
computing. Analysts and skeptical companies ask “who would trust their essential data out there
somewhere?”. Cloud users face security threats both from outside and inside the cloud.
The cloud user is responsible for application-level security. The cloud provider is responsible for
physical security, and likely for enforcing external firewall policies. Security for intermediate
layers is shared between the user and the operator.
Although cloud makes external security easier, it does pose new problems related to internal
security. Cloud providers must guard against theft or denial-of-service attacks by users. Users
need to be protected from one another.
Obstacle 4: Data Transfer Bottlenecks
Nowadays, cloud applications are becoming data-intensive. The data storage capacity of enterprise applications or academic scientific programs might range from a few terabytes to a few petabytes or even more.
Transferring such high volumes of data between two clouds might take from a few days to even months, even on networks with high data rates.
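The scale of this bottleneck can be estimated with back-of-the-envelope arithmetic; the volumes and link speed below are hypothetical:

```python
def transfer_days(terabytes, megabits_per_second):
    """Days needed to move `terabytes` over a link of the given sustained speed."""
    bits = terabytes * 8 * 10**12              # decimal terabytes to bits
    seconds = bits / (megabits_per_second * 10**6)
    return seconds / 86400

# Moving 100 TB over a sustained 1 Gbps link:
print(round(transfer_days(100, 1000), 1))  # ~9.3 days
```

At lower sustained rates, or for petabyte-scale datasets, the same formula quickly reaches weeks or months, which is why providers sometimes accept physically shipped disks instead.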
Obstacle 5: Performance Unpredictability
In the cloud virtual machines can share CPUs and main memory effectively but network and I/O
sharing is more problematic. As a result, different Amazon EC2 instances vary more in their I/O
performance than in main memory performance.
The obstacle to attracting HPC workloads is that HPC applications need to ensure that all the threads of a program are running simultaneously, and today's virtual machines and operating systems do not provide a programmer-visible way to ensure this.
Obstacle 6: Scalable Storage
The problem with storage is its rigid behaviour towards scalability. There have been many
attempts to answer this, varying in the richness of the query and storage APIs, the performance
guarantees offered, and the resulting consistency semantics.
Obstacle 7: Bugs in Large-Scale Distributed Systems
One of the difficult challenges in cloud computing is removing errors in large-scale distributed
systems. A common caveat is that these bugs cannot be reproduced in smaller configurations, so
the debugging must occur at scale in the production data centers.
Obstacle 8: Scaling Quickly
Pay-as-you-go model is well applied for storage and network bandwidth, as they can be
measured in terms of bytes transferred. Computation is slightly different, depending on the
virtualization level. For example, Google App Engine automatically scales in response to load
increases and decreases, and users are charged by the cycles used. AWS charges by the hour for
the number of instances that are alive (even when they are inactive).
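The contrast between the two charging models above can be sketched numerically; the rates and the 20% utilization figure are invented for illustration:

```python
# Hypothetical comparison of charging by consumed cycles versus by
# instance-hours that are merely alive (rates are made up).

def charge_per_usage(cpu_hours_used, rate=0.12):
    """App Engine-style: pay only for cycles actually consumed."""
    return cpu_hours_used * rate

def charge_per_instance(instances_alive, hours, rate=0.10):
    """EC2-style: pay for every hour an instance is alive, active or idle."""
    return instances_alive * hours * rate

# Ten instances alive for 24 h but busy only 20% of the time:
print(round(charge_per_usage(10 * 24 * 0.2), 2))  # 5.76
print(round(charge_per_instance(10, 24), 2))      # 24.0
```

With mostly idle instances, the per-usage model is far cheaper, which is why automatic scale-down matters under per-instance billing.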
Obstacle 9: Reputation Fate Sharing
One customer's bad behaviour can affect the reputation of other customers using the same cloud. For example, in March 2009 the FBI raided a Dallas data center because a company whose services were hosted there was being investigated for possible criminal activity; this affected a number of other innocent customers who were hosted in the same data center.
Obstacle 10: Software Licensing
Current software licensing bills consumers on the basis of how many physical machines the software will be installed on. The problem with the cloud is that the computational units are VMs instead of physical machines, and a physical machine might have tens of VMs running on it. So how should software vendors license their software?
The following table gives a summary of the above-mentioned obstacles for cloud computing, along with possible opportunities for each obstacle:
THE SEVEN-STEP MODEL OF MIGRATION INTO A CLOUD
Migration initiatives into the cloud are implemented in phases or stages:
1. ASSESSMENT
Migration starts with an assessment of the issues relating to migration, at the application, code,
design, and architecture levels. Moreover, assessments are also required for tools being used,
functionality, test cases, and configuration of the application. The proof of concepts for
migration and the corresponding pricing details will help to assess these issues properly.
2. ISOLATE
The second step is the isolation of all the environmental and systemic dependencies of the
enterprise application within the captive data centre. These include library, application, and
architectural dependencies. This step results in a better understanding of the complexity of the
migration.
3. MAP
A mapping construct is generated to separate the components that should reside in the captive
data center from the ones that will go into the cloud.
4. RE-ARCHITECT
It is likely that a substantial part of the application has to be re-architected and implemented in
the cloud. This can affect the functionalities of the application and some of these might be lost. It
is possible to approximate lost functionality using cloud runtime support API.
5. AUGMENT
The features of cloud computing service are used to augment the application.
6. TEST
Once the augmentation is done, the application needs to be validated and tested. This is to be
done using a test suite for the applications on the cloud. New test cases due to augmentation and
proof-of-concepts are also tested at this stage.
7. OPTIMISE
The test results from the last step can be mixed, and so require iteration and optimization. It may take several optimizing iterations for the migration to be successful. It is best to iterate through this seven-step model, as this will ensure that the migration is robust and comprehensive.
Migration risks
Migration risks for migrating into the cloud fall under two broad categories:
The general migration risks:
• performance monitoring and tuning;
• compliance with standards and governance issues; IP and licensing issues;
• the quality of service (QoS) parameters as well as the corresponding SLAs committed to;
• the ownership, transfer, and storage of data in the application;
• the portability and interoperability issues, which could help mitigate potential vendor lock-ins.
The security-related migration risks:
• obtaining the right execution logs as well as retaining the rights to all audit trails at a detailed level;
• matters of multi-tenancy and the impact of IT data leakage in cloud computing environments.
CLOUD INTEGRATION
Cloud integration is a system of tools and technologies that connects various applications,
systems, repositories, and IT environments for the real-time exchange of data and processes.
Once combined, the data and integrated cloud services can then be accessed by multiple devices
over a network or via the internet.
THE PURPOSE OF CLOUD INTEGRATION
Cloud integration was created to break down data silos, improve connectivity and
visibility, and ultimately optimize business processes.
It is a response to the need to share data among cloud based applications and to unify
information components.
Cloud integration has grown in popularity as the use of Software as a Service (SaaS)
solutions continues to increase.
IDC predicts this growth will continue and that nearly one third of the worldwide enterprise application market will be SaaS-based by 2018.
Additionally, more businesses are operating with a hybrid mix of SaaS and on-premises applications, creating a greater need for progressive integration methods.
THE BENEFITS OF CLOUD INTEGRATION
Companies who use cloud integration have synchronized data and applications,
improving their ability to operate effectively and nimbly.
Other benefits include:
Improved operational efficiency
Increased flexibility and scalability
Faster time-to-market
Better internal communication
Improved customer service, support, and retention
Increased competitive edge
Reduced operational costs and increased revenue
CLOUD INTEGRATION TYPES AND METHODS
Integration in the cloud can involve cloud-to-cloud integration, cloud-to-on-premises integration, or a combination of both. Integrations can address different business components, including data and applications.
• Data integration - The synchronization of data between repositories. Data can be processed,
transported and/or transformed during data integration. This is a strictly data-related connection.
• Application integration - Connects various applications and arranges continued functionality and interoperability. This is more than data sharing; it involves issuing requests and commands to trigger business events or processes.
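A minimal sketch of the data-integration case, assuming keyed records carrying a version number (the field names and the last-writer-wins policy are invented for illustration):

```python
def sync(repo_a, repo_b):
    """Merge two keyed repositories, keeping the record with the
    higher version number for every key (last-writer-wins)."""
    merged = dict(repo_a)
    for key, record in repo_b.items():
        if key not in merged or record["version"] > merged[key]["version"]:
            merged[key] = record
    return merged

crm   = {"c1": {"version": 2, "email": "a@example.com"}}
cloud = {"c1": {"version": 1, "email": "old@example.com"},
         "c2": {"version": 1, "email": "b@example.com"}}
print(sync(crm, cloud))
# c1 keeps the newer CRM record; c2 is copied over from the cloud app
```

Real data-integration products add transformation and transport on top of this, but the core of the synchronization step is this kind of keyed merge.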
There are three types of cloud integration:
• Traditional Enterprise Integration Tools can be empowered with special connectors to access Cloud-located Applications:
With a persistent rise in the necessity of accessing and integrating cloud applications, special drivers, connectors, and adapters are being built and incorporated into the existing integration platforms to enable bidirectional connectivity with the participating cloud services.
• Traditional Enterprise Integration Tools are hosted in the Cloud:
This approach is similar to the first option, except that the integration software suite is now hosted in a third-party cloud infrastructure, so that the enterprise does not have to worry about procuring and managing the hardware or installing the integration software.
• Integration-as-a-Service (IaaS) or On-Demand Integration Offerings:
These are SaaS applications designed to deliver the integration service securely over the Internet, able to integrate cloud applications with on-premise systems as well as cloud-to-cloud applications.
How is Integration Done?
• Integration as a Service (IaaS) is the budding and distinctive capability of clouds in fulfilling business integration requirements.
• IaaS overcomes the integration challenges by smartly utilizing the time-tested business-to-business (B2B) integration technology as the value-added bridge between SaaS solutions and in-house business applications.
SaaS INTEGRATION
• Cloud-centric integration solutions are being developed and demonstrated to showcase their capabilities for integrating enterprise and cloud applications.
• Now, with the arrival and adoption of the transformative and disruptive paradigm of cloud computing, every ICT product is being converted into a collection of services to be delivered via the open Internet.
• In that line, standards-compliant integration suites are being transitioned into services so that any integration need of anyone, from any part of the world, can be easily, cheaply, and rapidly met.
Integration as a Service (IaaS): the migration of the functionality of a typical enterprise application integration (EAI) hub / enterprise service bus (ESB) into the cloud, providing for smooth data transport between any enterprise and SaaS applications.
• Users subscribe to IaaS as they would for any other SaaS application.
• Cloud middleware will be made available as a service.
For service integration, it is enterprise service bus (ESB) and for data integration, it is enterprise
data bus (EDB).
There are message-oriented middleware (MOM) products and message brokers for integrating decoupled applications through message passing and pick-up.
Events are coming up fast, and there are complex event processing (CEP) engines that receive a stream of diverse events from diverse sources, process them in real time to extract and figure out the encapsulated knowledge, and accordingly select and activate one or more target applications.
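The CEP idea just described — receive diverse events, extract what matters, and activate target applications — can be sketched as a toy loop; the event shape, threshold, and handlers are invented for illustration:

```python
def process_events(events, handlers, threshold=100):
    """Toy complex-event-processing loop: route each event that
    crosses a threshold to the handler registered for its source."""
    activated = []
    for event in events:
        if event["value"] > threshold and event["source"] in handlers:
            handlers[event["source"]](event)   # activate the target application
            activated.append(event["source"])
    return activated

fired = []
handlers = {"sensor": lambda e: fired.append(("alarm", e["value"])),
            "billing": lambda e: fired.append(("audit", e["value"]))}

stream = [{"source": "sensor", "value": 150},
          {"source": "billing", "value": 30},
          {"source": "billing", "value": 500}]
print(process_events(stream, handlers))  # ['sensor', 'billing']
print(fired)                             # [('alarm', 150), ('audit', 500)]
```

A production CEP engine adds windowing, correlation across streams, and pattern matching, but the receive-filter-activate shape is the same.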
• Cloud infrastructure is not very useful without SaaS applications that run on top of it, and SaaS applications are not very valuable without access to the critical corporate data that is typically locked away in various corporate systems.
• So, for cloud applications to offer maximum value to their users, they need to provide a simple mechanism to import or load external data, export or replicate their data for reporting or analysis purposes, and finally keep their data synchronized with on-premise applications.
Why is SaaS Integration Hard?
Reasons:
Limited Access: Access to cloud resources (SaaS, PaaS, and the infrastructures) is more limited than to local applications. Once applications move to the cloud, custom applications must be designed to support integration, because there is no longer that low level of access. Enterprises putting their applications in the cloud, or subscribers of cloud-based business services, are dependent on the vendor to provide the integration hooks and APIs.
Dynamic Resources: Cloud resources are virtualized and service-oriented; that is, everything is expressed and exposed as a service. Because of this dynamism, the underlying infrastructure is liable to change dynamically, which clearly impacts the integration model.
Performance: Clouds support application scalability and resource elasticity. However, the network distances between elements in the cloud are no longer under our control. Because of the round-trip latency, cloud integration performance is bound to slow down.
Introduction to System Testing
System testing is the process of testing the entire, fully integrated system in order to ensure that it conforms to all the requirements provided by the client in the form of the functional specification or system specification documentation. In most cases it is done after integration testing, since it should cover the end-to-end behaviour of the system. This type of testing requires a dedicated test plan and other test documentation derived from the system specification document, covering both software and hardware requirements. System testing includes both functional and non-functional tests. All of this is done to build confidence that the system is free of defects and bugs.
Types of System Testing
Below are the different types of testing which are as follows:
1. Functionality Testing
This testing makes sure that the functionality of a product is working as per the
requirements specification, within the capabilities of the system.
Functional testing is done manually or with automated tools.
2. Recoverability Testing
This testing determines whether operations can be continued after a disaster or after the
integrity of the system has been lost.
The best example of this: suppose we are downloading a file and the connection suddenly goes off. After the connection resumes, the download continues from where it left off; it does not start from the beginning again.
This is used where continuity of operations is essential.
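The resume-from-where-we-left-off behaviour can be simulated with a byte offset, the way an HTTP client would send a "Range: bytes=<offset>-" header rather than re-downloading; this is a local simulation, no real network is involved:

```python
def resume_download(remote, local, offset):
    """Continue a download from `offset`, as a client would by sending
    an HTTP 'Range: bytes=<offset>-' header, instead of restarting."""
    return local + remote[offset:]

remote_file = b"0123456789"
partial = remote_file[:4]          # connection dropped after 4 bytes
restored = resume_download(remote_file, partial, len(partial))
print(restored == remote_file)     # True
```

A recoverability test would deliberately interrupt the transfer and then assert exactly this: the reassembled file matches the original.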
3. Performance Testing
This testing verifies the system's performance under various conditions, in terms of its performance characteristics.
This testing is also called compliance testing with respect to performance.
This testing ensures that the system meets its performance requirements.
It checks how the system responds when multiple users use the same app at the same time.
Performance testing can be categorized into three main areas: speed, scalability, and stability.
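One performance characteristic, response time, can be measured by timing repeated calls and reporting a percentile; the workload below is a stand-in for the real system under test:

```python
import time

def measure(fn, runs=50):
    """Time repeated calls to `fn` and report an approximate
    95th-percentile latency in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]

# Stand-in workload; a real test would call the system under test.
p95 = measure(lambda: sum(range(10_000)))
print(f"p95 latency: {p95:.6f} s")
```

Percentiles are preferred over averages here because a few slow outliers, invisible in the mean, are exactly what users experience as poor performance.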
4. Scalability Testing
This testing makes sure the system’s scaling abilities in various terms like user scaling,
geographic scaling, and resource scaling.
5. Reliability Testing
Reliability testing makes sure that the system is bug-free.
This testing makes sure the system can be operated for a long duration without developing failures.
6. Documentation Testing
This testing makes sure that the system's user guide and other help documents are correct and usable.
7. Security Testing
Testing which confirms that only authorized personnel can access the program, and that each person can access only the functions available to their security level.
This testing makes sure that the system does not allow unauthorized access to data and resources.
The purpose of security testing is to determine how well a system protects against unauthorized internal or external access or willful damage.
There is the following area where we generally can check for security:
1. Authentication
2. Authorization
3. Data validation
4. Transport security
5. Data protection
6. Session management
8. Usability Testing
To make sure that the system is easy to use, learn and operate
9. Requirements Testing
Every requirement of the system is tested.
Direct observations of people using the system are made.
Usability surveys are done under this testing.
User tests under this testing are also called beta testing.
This testing checks the system as a real user would work with it in the real environment.
Usability testing is mainly used for the design of the application.
In a usability test, actual users try to get typical goals and tasks with a product under
controlled conditions.
This system is used to determine:
1. How simple it is to understand application usage.
2. How easy it is to execute an application process.
10. Load Testing
This testing determines how the application behaves when multiple users access it
simultaneously across multiple locations.
It is done to determine whether system performance is acceptable at a pre-
determined load level.
Load testing evaluates system performance at predefined load levels.
It checks the application under normal, predefined conditions.
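A load test at a predetermined load level can be sketched with concurrent workers. The Python sketch below simulates 50 concurrent "users" issuing 200 requests and reports latency; the request handler is a stand-in for a real application call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Stand-in for one application request; returns its latency in seconds."""
    start = time.perf_counter()
    sum(range(10_000))  # simulated work
    return time.perf_counter() - start

# Drive a predetermined load level: 50 concurrent "users", 200 requests total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(handle_request, range(200)))

avg = sum(latencies) / len(latencies)
print(f"requests={len(latencies)} avg_latency={avg:.6f}s max={max(latencies):.6f}s")
```

In a real load test the average and maximum latency would be compared against the acceptable-performance threshold agreed for that load level.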
11. Stress Testing
This testing checks whether the system continues to function when subjected to larger
volumes of data than expected.
Stress testing may target input transactions, internal tables, communication channels,
disk space, etc.
Stress testing checks that the system runs as it would in a production environment, but
examines it under extreme conditions.
Stress Testing is also known as Endurance Testing.
12. Configuration Testing
Configuration testing checks the application against multiple combinations of hardware and
software.
This testing checks for compatibility issues.
It determines the minimal and optimal hardware and software configurations.
It also determines the effects of adding or modifying resources such as memory, disk
space, CPU, or network cards.
13. Compatibility Testing
Compatibility testing checks whether the application is capable of running on
different hardware, operating systems, applications, network environments, mobile devices, etc.
It is similar to multi-platform testing.
Compatibility testing is especially useful for web-based applications, where we can check
that the application is accessible from every browser.
Scaling Applications in the Cloud
Scalability is an important consideration when architecting a web app. There are multiple
options for scaling the web app tier and the database tier. Those options are explained below
with examples from Microsoft Azure services. If you are a beginner and want to understand
the fundamentals of scalability and resiliency, this section is for you.
Scaling the web app
Consider a business web app hosted on a VM. At first, your website gets some ten requests
per second. But now, after you launched a cool new product or service, it is getting multiple
thousands of requests per second. The VM receives all the load; at a certain point it will
become slow, reject requests, or even go down. That is bad news for your growing business!
How do you solve it? You might say: I need a more powerful VM! That is called vertical
scaling.
Scale Up (Vertical scaling)
The 8 GB RAM, i3 processor, and HDD disk are not enough anymore, so you spin up
another VM. The new one has 512 GB RAM, a Xeon processor, and the latest SSD disk.
This is scaling up. It is the easiest and fastest way to scale a web app: it requires only
moving the web app content to the bigger new VM, without changing the source code.
Azure provides VMs with up to 448 GB of dedicated RAM.
Scale Out (Horizontal scaling)
Where scaling up makes a single machine bigger, scaling out creates multiple machines.
This way we can have much more RAM and CPU, not on one single VM, but on a
cluster of VMs. It is the same solution used to get more computation power from
processors when moving from a single processor to multiple processors/threads.
Scaling out requires rethinking the architecture of the app and, in some scenarios,
changing the source code.
Scale Out to multiple instances
This approach needs a way to decide which VM instance a given user or HTTP request
should be sent to. That is the job of the Load Balancer or the Traffic Manager.
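The choice the Load Balancer makes can be as simple as round-robin: each new request goes to the next instance in turn. A minimal Python sketch (the instance names are hypothetical):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across VM instances."""
    def __init__(self, instances):
        self._ring = cycle(instances)  # endless rotation over the instances

    def route(self, request):
        return next(self._ring)  # pick the next instance in turn

lb = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
routed = [lb.route(f"req-{i}") for i in range(6)]
print(routed)  # ['vm-1', 'vm-2', 'vm-3', 'vm-1', 'vm-2', 'vm-3']
```

Real load balancers such as Azure Load Balancer also consider instance health and, in some modes, session affinity, but the distribution idea is the same.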
Content Delivery Network (CDN)
A CDN is used to reduce the latency of getting static content from the server to the user's
location. This latency has two main causes. The first is the physical distance between the
user and the server: CDNs are located in multiple locations around the world, called
Points of Presence (POPs), so it is usually possible to find one closer to the user's
location than your servers are. The second is the cost of accessing the file on disk: a
CDN may use a combination of HDD, SSD, or even RAM to cache the data, depending
on how frequently the data is accessed. A time-to-live (TTL) can be applied to the
cache so that entries expire after a certain time.
Cache
When many SQL requests to the database return the same result, it is better to cache
this data in memory to ensure faster data access and reduce the load on the database. A
typical case is the top 10 products displayed on the home page to all users. Because a
cache uses RAM rather than disks, it can hold only as much data as the RAM allows. Data
is stored as key-value pairs, and the cache can be distributed across multiple regions.
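The key-value-with-TTL behavior described for both the CDN and the cache can be sketched in a few lines of Python; this is an in-process toy, not a distributed cache like Azure Cache for Redis:

```python
import time

class TTLCache:
    """In-memory key-value cache where entries expire after ttl seconds."""
    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:   # stale entry: evict and report a miss
            del self._store[key]
            return default
        return value

cache = TTLCache(ttl=0.05)
cache.set("top10", ["p1", "p2", "p3"])
assert cache.get("top10") == ["p1", "p2", "p3"]  # hit while fresh
time.sleep(0.06)
assert cache.get("top10") is None                # expired after the TTL
```

On a miss (expired or absent key), the application falls back to the database and repopulates the cache, which is how the load reduction is achieved.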
Azure Functions (Serverless app)
A serverless app is a small piece of the app hosted on its own instance. This instance is
managed for you, so you don't need to take care of any container or VM. It can scale out
automatically depending on the load. Typically, you can use it for resizing or processing
images, starting a job on a database, etc.; anything that is largely independent of the main
business logic.
Azure API Management
Just as a Load Balancer distributes load across VMs, API Management can distribute load
across different API endpoints or microservices. The distribution mechanism can take into
account the load on each endpoint.
Azure Queue Storage
Queues provide an asynchronous solution for communication between software
components. When using REST web services for communication, the requested server
must be available at that moment, or the app will fail. With queues, if the server is not
available, it doesn't matter: the request waits in the queue and is processed when the
server becomes available later. This approach helps decouple the different components
and makes them easily scalable and resilient.
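The decoupling pattern can be sketched with an in-process queue standing in for Azure Queue Storage (the real service has a different, REST-based API; this only illustrates the pattern):

```python
import queue
import threading

work = queue.Queue()   # buffer between producer and consumer
results = []

def consumer():
    while True:
        msg = work.get()
        if msg is None:        # sentinel value: shut the consumer down
            break
        results.append(f"processed:{msg}")

# The producer enqueues even though the consumer has not started yet:
# the messages simply wait in the queue (decoupling in action).
for i in range(3):
    work.put(f"order-{i}")

t = threading.Thread(target=consumer)
t.start()
work.put(None)          # tell the consumer there is no more work
t.join()
print(results)          # all three orders were processed after the fact
```

Because neither side calls the other directly, the producer and consumer can be scaled, restarted, or replaced independently.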
Cloud Contracting models.
Selecting a cloud service: Choosing the appropriate cloud service and deployment
model is the critical first step in procuring cloud services.
Cloud service provider and end-user agreements: Terms of service and all
CSP/customer-required agreements need to be integrated fully into cloud contracts.
Service-level agreements: SLAs need to define performance with clear terms and
definitions, demonstrate how performance is being measured, and specify what
enforcement mechanisms are in place to ensure that SLAs are met.
CSP, agency, and integrator roles and responsibilities: Careful delineation of the
responsibilities and relationships among the federal agency, integrators, and the CSP
is needed in order to effectively manage cloud services.
Standards: The use of the National Institute of Standards and Technology’s Cloud
Computing Reference Architecture and agency involvement in standards are necessary
for cloud procurements.
Security: Agencies must clearly detail the requirements for CSPs to maintain the security
and integrity of data existing in a cloud environment.
Privacy: If cloud services host “privacy data,” agencies must adequately identify
potential privacy risks and responsibilities and address those needs in the contract.
E-discovery: Federal agencies must ensure that all data stored in a CSP environment is
available for legal discovery by allowing all data to be located, preserved, collected,
processed, reviewed, and produced.
Freedom of Information Act: Federal agencies must ensure that all data stored in a CSP
environment is available for appropriate handling under FOIA.
E-records: Agencies must ensure that CSPs understand and assist federal agencies in
compliance with the Federal Records Act and obligations under that law.
TECHNOLOGIES FOR DATA SECURITY IN CLOUD COMPUTING
Unique issues of the cloud data storage platform, from a few different perspectives:
Database Outsourcing and Query Integrity Assurance
Storing data into, and fetching data from, devices and machines behind a cloud is essentially a
novel form of database outsourcing.
Data Integrity in Untrustworthy Storage
• The fear of losing data or of data corruption
• Relieve users' fear by providing technologies that enable them to check the integrity of
their data
Web-Application-Based Security
• Once the data set is stored remotely, a web browser is one of the most convenient tools
that end users can use to access their data on remote services
• Web security therefore plays an even more important role in cloud computing
Multimedia Data Security
• With the development of high-speed network technologies and large-bandwidth connections,
more and more multimedia data are being stored and shared in cyberspace
• The security requirements for video, audio, pictures, or images are different from those of
other applications
Database Outsourcing and Query Integrity Assurance
Database outsourcing has become an important component of cloud computing because:
– The cost of transmitting a terabyte of data over long distances has decreased significantly
– The total cost of data management is five to ten times higher than the initial acquisition costs
– There is growing interest in outsourcing database management tasks to third parties, which
can provide these tasks at a much lower cost due to economies of scale
– Reducing the costs of running Database Management Systems (DBMSs) independently
enables enterprises to concentrate on their main businesses
• The general architecture of a database outsourcing environment involves clients, the data
owner, and a third-party service provider
• The outsourcing of databases to a third-party service provider raises two security concerns:
• Data privacy and
• Query integrity
Data Privacy Protection
– A method to execute SQL queries over encrypted databases:
• Process as much of a query as possible at the service provider, without having to decrypt
the data
• Decryption and the remainder of the query processing are performed at the client side
– An order-preserving encryption scheme for numeric values
Query Integrity Assurance
– Query integrity examines the trustworthiness of the hosting environment
– When a client receives a query result from the service provider, it must be
assured that the result is both correct and complete
• Correct means that the result originates in the owner's data and has not been tampered with
• Complete means that the result includes all records satisfying the query
– A solution named dual encryption
• Ensures query integrity without requiring the database engine to perform any special function
beyond query processing
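As a toy illustration of the correctness requirement only (this is not the dual-encryption scheme named above, and it does not address completeness), the data owner can attach a keyed MAC to every outsourced record, so the client can detect tampering by the provider. All names and keys below are hypothetical:

```python
import hashlib
import hmac

OWNER_KEY = b"hypothetical-owner-secret"  # known to the owner/client, not the provider

def sign(record: str) -> str:
    """MAC over a record; the provider cannot forge this without OWNER_KEY."""
    return hmac.new(OWNER_KEY, record.encode(), hashlib.sha256).hexdigest()

# The owner uploads records together with their MACs.
outsourced = [(r, sign(r)) for r in ["alice,42", "bob,17"]]

# The client checks correctness: every returned record must carry a valid MAC.
def result_is_correct(rows) -> bool:
    return all(hmac.compare_digest(tag, sign(rec)) for rec, tag in rows)

assert result_is_correct(outsourced)                      # untampered result
tampered = [("alice,99", outsourced[0][1])] + outsourced[1:]
assert not result_is_correct(tampered)                    # modified value detected
```

Detecting an *incomplete* result (a silently dropped record) requires additional machinery, such as authenticated data structures or the dual-encryption approach the text mentions.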
Data Integrity in Untrustworthy Storage
The fear of losing control of their data is one of the major concerns that prevent end users from
migrating to cloud storage services.
There are different motivations for a storage service provider to become untrustworthy:
• To cover up the consequences of an operational mistake, or to deny a vulnerability in the
system after the data have been stolen by an adversary
BASIC PRINCIPLES OF CLOUD COMPUTING
• Federation: All cloud computing providers, regardless of how big they are, have a finite
capacity. To grow beyond this capacity, cloud computing providers should be able to form
federations of providers so that they can collaborate and share their resources. Any federation
of cloud computing providers should allow virtual applications to be deployed across federated
sites. Furthermore, virtual applications need to be completely location-free and allowed to
migrate in part or as a whole between sites.
• Independence: Users should be able to use the services of the cloud without relying on any
provider-specific tool, and cloud computing providers should be able to manage their
infrastructure without exposing internal details to their customers or partners.
• Isolation: Cloud computing services are, by definition, hosted by a provider that will
simultaneously host applications from many different users. Users must be assured that their
resources cannot be accessed by others sharing the same cloud, and that adequate performance
isolation is in place to ensure that no other user can directly affect the
service granted to their application.
• Elasticity: One of the main advantages of cloud computing is the capability to provide, or
release, resources on demand. These "elasticity" capabilities should be enacted automatically by
cloud computing providers to meet demand variations.
• Business Orientation: Before enterprises move their mission-critical applications to the cloud,
cloud computing providers will need to develop the mechanisms to ensure quality of service
(QoS) and proper support for service-level agreements (SLAs).
• Trust: Mechanisms to build and maintain trust between cloud computing consumers and cloud
computing providers, as well as among cloud computing providers themselves, are
essential for the success of any cloud computing offering.
Features of Federation Types
• Framework agreement support: Framework agreements may or may not be supported by the
architecture. If framework agreements are not supported, federation may
only be carried out in a more ad hoc, opportunistic manner.
• Opportunistic placement support: If framework agreements are not supported by the
architecture, or if there is not enough spare capacity even including the framework agreements,
a site may choose to perform opportunistic placement. This is a process where remote sites are
queried on demand as the need for additional resources arises, and the local site requests a
certain SLA-governed capacity for a given cost from the remote sites.
• Advance resource reservation support: This feature may be used both when there is an
existing framework agreement and when opportunistic placement has been performed. Both
types of advance reservations are only valid for a certain time, since they impact the utilization
of resources at a site. Because of this impact, they should be billed as actual usage during the
active time interval.
• Federated migration support: The ability to migrate machines across sites defines
federated migration support. There are two types of migration: cold and hot (or live).
In cold migration, the VEE is suspended and experiences a certain amount of downtime while
it is being transferred.
Hot or live migration does not allow for system downtime; it works by transferring
the runtime state while the VEE is still running.
SLA Management in Cloud Computing
• Capacity planning: The activity of determining the number of servers, and their capacity, that
can satisfactorily serve the application's end-user requests at peak load. An example scenario
in which two web applications, application A and application B, are hosted on separate sets of
dedicated servers within the enterprise-owned server rooms is shown in Figure 16.1.
The planned capacity for each of the applications to run successfully is three servers. As the
number of web applications grew, the server rooms in organizations became large, and such
server rooms became known as data centers. These data centers were owned and managed by
the enterprises themselves.
As the number and complexity of web applications grew, enterprises realized that it was more
economical to outsource the application hosting activity to third-party infrastructure providers.
• These providers procure the required hardware and make it available for application hosting.
This necessitated that enterprises enter into a legal agreement with the infrastructure service
providers to guarantee a minimum quality of service (QoS).
• The QoS parameters relate to the availability of system CPU, data storage, and
network for efficient execution of the application at peak loads.
• This legal agreement is known as the service-level agreement (SLA).
TYPES OF SLA
• A service-level agreement provides a framework within which both seller and buyer of a
service can pursue a profitable service business relationship.
• It outlines the broad understanding between the service provider and the service consumer for
conducting business, and forms the basis for maintaining a mutually beneficial relationship.
• From a legal perspective, the necessary terms and conditions that bind the service provider to
provide services continually to the service consumer are formally defined in the SLA.
• An SLA can be modeled using the Web Service-Level Agreement (WSLA) language
specification. Service-level parameter, metric, function, measurement directive, service-level
objective, and penalty are some of the important components of WSLA.
Key Components of a Service-Level Agreement
• Service-level parameter: Describes an observable property of a service whose value is
measurable
• Metric: Metrics are the key instrument for describing exactly what SLA parameters mean, by
specifying how to measure or compute the parameter values
• Function: A function specifies how to compute a metric's value from the values of other
metrics and constants
• Measurement directive: Measurement directives specify how to measure a metric
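The relationship among parameter, metric, function, and service-level objective can be sketched in a few lines of Python. The uptime figures and the 99.9% objective below are assumptions for illustration, not values from the text:

```python
# Sketch of WSLA-style components: two measured metrics (uptime and total
# minutes), a function that computes the availability parameter from them,
# and a service-level objective (SLO) checked against the result.
uptime_minutes = 43_150   # output of a measurement directive (assumed)
total_minutes = 43_200    # one 30-day month

def availability(uptime: float, total: float) -> float:
    """Function: computes the availability metric from two other metrics."""
    return uptime / total

slo_target = 0.999        # service-level objective: 99.9% availability
value = availability(uptime_minutes, total_minutes)
print(f"availability={value:.5f} slo_met={value >= slo_target}")
```

With these assumed numbers the availability works out just below the objective, which in a real SLA would trigger the penalty component.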
LIFE CYCLE OF SLA
• Each SLA goes through a sequence of steps, starting from identification of terms and
conditions, through activation and monitoring of the stated terms and conditions, to eventual
termination of the contract once the hosting relationship ceases to exist.
This sequence of steps is called the SLA life cycle, and it consists of the following five phases:
1. Contract definition
2. Publishing and discovery
3. Negotiation
4. Operationalization
5. De-commissioning
• Contract Definition: Service providers define a set of service offerings and corresponding
SLAs using standard templates. These service offerings form a catalog. Individual SLAs for
enterprises can be derived by customizing these base SLA templates.
• Publication and Discovery: The service provider advertises these base service offerings
through standard publication media, and customers should be able to locate the service provider
by searching the catalog. Customers can search different competitive offerings and shortlist a
few that fulfill their requirements for further negotiation.
• Negotiation: Once the customer has discovered a service provider who can meet their
application hosting need, the SLA terms and conditions need to be mutually agreed upon before
signing the agreement for hosting the application.
For a standard packaged application offered as a service, this phase can be automated.
For customized applications hosted on cloud platforms, this phase is manual. The
service provider needs to analyze the application's behavior with respect to scalability and
performance before agreeing on the specification of the SLA. At the end of this phase, the SLA
is mutually agreed by both customer and provider and is eventually signed off.
• Operationalization: SLA operation consists of:
1. SLA monitoring, which involves measuring parameter values, calculating the metrics defined
as part of the SLA, and determining deviations. On identifying deviations, the concerned parties
are notified.
2. SLA accounting, which involves capturing and archiving SLA adherence for compliance. As
part of accounting, the application's actual performance is reported against the performance
guaranteed in the SLA. Apart from the frequency and duration of SLA breaches, it should also
report the penalties paid for each SLA violation.
3. SLA enforcement, which involves taking appropriate action when runtime monitoring detects
an SLA violation. Such actions could include notifying the concerned parties and charging the
penalties, among other things.
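The three operationalization activities can be sketched together in a few lines; the response-time samples, the 200 ms limit, and the penalty amount are hypothetical:

```python
# Sketch of the three operationalization activities: monitor, account, enforce.
samples = [120, 180, 95, 300, 250]   # measured response times in ms (assumed)
sla_limit_ms = 200                   # guaranteed response time in the SLA
penalty_per_breach = 50.0            # penalty clause (assumed)

# 1. Monitoring: compare measured values against the SLA limit.
violations = [s for s in samples if s > sla_limit_ms]

# 2. Accounting: record adherence and penalties due for compliance reporting.
report = {
    "breaches": len(violations),
    "penalty_due": len(violations) * penalty_per_breach,
}

# 3. Enforcement: act on detected violations (here, just notify).
if violations:
    print(f"notify parties: {report['breaches']} breaches, "
          f"penalty {report['penalty_due']:.2f}")
```

In a production system, monitoring runs continuously, accounting archives every reporting period, and enforcement may escalate beyond notification to automatic crediting of penalties.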
DATA SECURITY IN THE CLOUD
Information in a cloud environment has much more dynamism and fluidity than information that
is static on a desktop or in a network folder. The nature of cloud computing dictates that data are
fluid objects, accessible from a multitude of nodes and geographic locations and, as such, must
have a data security methodology that takes this into account while ensuring that this fluidity is
not compromised. The idea of content-centric or information-centric protection, as an inherent
part of a data object, grew out of the idea of the "de-perimeterization" of the
enterprise. This idea was put forward by a group of Chief Information Officers (CIOs) who
formed an organization called the Jericho Forum.
The Jericho Forum was founded in 2004 because of the increasing need for data exchange
between companies and external parties, for example: employees using remote computers,
partner companies, and customers.
The idea of creating an essentially de-centralized perimeter, where the perimeter is created by
the data object itself, allows the security to move with the data, as opposed to retaining the data
within a secured and static wall.
CLOUD COMPUTING AND DATA SECURITY RISK
Cloud computing is a development that is meant to allow more open accessibility and easier and
improved data sharing. Data are uploaded into a cloud and stored in a data center, for access by
users from that data center; or, in a more fully cloud-based model, the data themselves are
created in the cloud and stored and accessed from the cloud (again, via a data center).
The most obvious risk in this scenario is the one associated with the storage of that data. Data
uploaded or created by a user in the cloud are stored and maintained by a
third-party cloud provider such as Google, Amazon, Microsoft, and so on.
This action has several risks associated with it:
• First, it is necessary to protect the data during upload into the data center, to ensure that the
data do not get hijacked on the way into the database.
• Second, it is necessary to store the data in the data center in such a way that they are
encrypted at all times.
• Third, and perhaps less obviously, access to those data needs to be controlled; this control
should also be applied to the hosting company, including the administrators of the data center.
• In addition, an area often forgotten in the application of security to a data resource is the
protection of that resource during its use.
Data security risks are compounded by the open nature of cloud computing. Access control
becomes a much more fundamental issue in cloud-based systems because of the accessibility of
the data. Information-centric access control (as opposed to access control lists) can help to
balance improved accessibility against risk, by associating access rules with different data
objects within an open and accessible platform, without losing the inherent usability of that
platform.
A further area of risk, associated not only with cloud computing but also with traditional
network computing, is the use of content after access. The risk is potentially higher in a cloud
network, for the simple reason that the information is outside of your corporate walls.
CLOUD COMPUTING AND IDENTITY
Digital identity holds the key to flexible data security within a cloud environment. A digital
identity represents who we are and how we interact with others online.
Access, identity, and risk are three variables that become inherently connected when
applied to the security of data, because access and risk are directly proportional: as access
increases, so does the risk to the security of the data. Controlling access by identifying
the actor attempting the access is the most logical way of performing this operation.
Ultimately, digital identity holds the key to securing data, if that digital identity can be
programmatically linked to security policies controlling the post-access usage of data.
Identity, Reputation, and Trust
Reputation is a real-world commodity and a basic requirement of human-to-human
relationships: our basic societal communication structure is built upon the ideas of reputation
and trust. Reputation, and its counter-value, trust, is easily transferable to the digital realm.
Legal Issues in Cloud Computing
Significant issues regarding privacy of data and data security exist, specifically as they relate to
protecting the personally identifiable information of individuals, but also as they relate to the
protection of sensitive and potentially confidential business information accessible either
directly through, or indirectly from, cloud systems. Complex jurisdictional issues may arise due
to the potential for data to reside in disparate or multiple geographies.
This geographical diversity is inherent in cloud service offerings. It means that both the
virtualized and the physical locations of the servers storing and processing data may potentially
affect which country's law governs in the event of a data breach or intrusion into cloud
systems. Jurisdictional matters also determine which country's law is applicable to data and
information that may be moved geographically among data centers around the world at any
given point in time.