J.Jeysree
What is Virtualization 
Virtualization is a technique for abstracting physical resources into a logical view 
Increases utilization and capability of IT resources 
Simplifies resource management by pooling and sharing resources 
Significantly reduces downtime, both planned and unplanned 
Improves performance of IT resources
What is a Virtual Machine?
Why Virtualize? 
Consolidate resources 
Server consolidation 
Client consolidation 
Improve system management 
For both hardware and software 
From the desktop to the data center 
Improve the software lifecycle 
Develop, debug, deploy and maintain applications in virtual 
machines 
Increase application availability 
Fast, automated recovery
Consolidate resources 
Server consolidation 
reduce number of servers 
reduce space, power and cooling 
70-80% reduction numbers cited in industry 
Client consolidation 
developers: test multiple OS versions, distributed 
application configurations on a single machine 
end user: Windows on Linux, Windows on Mac 
reduce physical desktop space, avoid managing multiple 
physical computers
Improve system management 
Data center management 
VM portability and live migration a key enabler 
automate resource scheduling across a pool of servers 
optimize for performance and/or power consumption 
allocate resources for new applications on the fly 
add/remove servers without application downtime 
Desktop management 
centralize management of desktop VM images 
automate deployment and patching of desktop VMs 
run desktop VMs on servers or on client machines 
Industry‐cited 10x increase in sys admin efficiency
Improve the software lifecycle 
Develop, debug, deploy and maintain applications in 
virtual machines 
Power tool for software developers 
record/replay application execution deterministically 
trace application behavior online and offline 
model distributed hardware for multi‐tier applications 
Application and OS flexibility 
run any application or operating system 
Virtual appliances 
a complete, portable application execution environment
Increase application availability 
Fast, automated recovery 
automated failover/restart within a cluster 
disaster recovery across sites 
VM portability enables this to work reliably across 
potentially different hardware configurations 
Fault tolerance 
hypervisor‐based fault tolerance against hardware 
failures [Bressoud and Schneider, SOSP 1995] 
run two identical VMs on two different machines, 
backup VM takes over if primary VM’s hardware crashes 
commercial prototypes beginning to emerge (2008)
Virtualization Comes in Many 
Forms 
Virtual Memory: each application sees its own logical 
memory, independent of physical memory 
Virtual Networks: each application sees its own logical 
network, independent of physical network 
Virtual Servers: each application sees its own logical 
server, independent of physical servers 
Virtual Storage: each application sees its own logical 
storage, independent of physical storage
Memory Virtualization 
Each application sees its own logical 
memory, independent of physical memory 
Benefits of Virtual Memory 
• Remove physical-memory limits 
• Run multiple applications at once 
[Diagram: several applications whose logical memory is backed by physical memory plus swap space]
Network Virtualization 
Each application sees its own logical 
network, independent of physical network 
Benefits of Virtual Networks 
• Common network links with access-control 
properties of separate links 
• Manage logical networks instead of 
physical networks 
• Virtual SANs provide similar benefits 
for storage-area networks 
[Diagram: VLANs A, B, and C sharing two switches connected by a VLAN trunk]
Server Virtualization 
Before server virtualization: 
 Single operating system image per 
machine 
 Software and hardware tightly coupled 
 Running multiple applications on the same 
machine often creates conflicts 
 Underutilized resources 
After server virtualization: 
 Virtual Machines (VMs) break 
dependencies between operating system 
and hardware 
 Manage operating system and 
application as a single unit by 
encapsulating them into VMs 
 Strong fault and security isolation 
 Hardware-independent 
[Diagram: one application stack per OS per machine before; multiple OS/application stacks sharing one machine through a virtualization layer after]
Storage Virtualization 
Process of presenting a logical view 
of physical storage resources to 
hosts 
Logical storage appears and 
behaves as physical storage directly 
connected to host 
Examples of storage virtualization 
are: 
Host-based volume management 
LUN creation 
Tape virtualization 
Benefits of storage virtualization: 
Increased storage utilization 
Adding or deleting storage without 
affecting application’s availability 
Non-disruptive data migration 
[Diagram: servers accessing heterogeneous physical storage through a virtualization layer]
Definitions 
Virtualization 
A layer mapping its visible interface and resources onto the 
interface and resources of the underlying layer or system on which it 
is implemented 
Purposes 
Abstraction – to simplify the use of the underlying resource (e.g., by 
removing details of the resource’s structure) 
Replication – to create multiple instances of the resource (e.g., to 
simplify management or allocation) 
Isolation – to separate the uses which clients make of the underlying 
resources (e.g., to improve security) 
Virtual Machine Monitor (VMM) 
A virtualization system that partitions a single physical “machine” into multiple 
virtual machines. 
Terminology 
Host – the machine and/or software on which the VMM is implemented 
Guest – the OS which executes under the control of the VMM
Properties of Classical 
Virtualization 
Equivalence = Fidelity 
Program running under a VMM should exhibit a 
behavior identical to that of running on the equivalent 
machine 
Efficiency = Performance 
A statistically dominant fraction of machine 
instructions may be executed without VMM 
intervention 
Resource Control = Safety 
VMM is in full control of virtualized resources 
Executed programs may not affect the system resources
Evolution of Software Solutions 
• 1st Generation: Full virtualization (binary rewriting) 
– Software based 
– VMware and Microsoft 
• 2nd Generation: Paravirtualization 
– Cooperative virtualization 
– Modified guest 
– VMware, Xen 
• 3rd Generation: Silicon-based (hardware-assisted) virtualization 
– Unmodified guest 
– VMware and Xen on virtualization-aware hardware platforms 
[Diagram: over time — dynamic translation on top of a host operating system; a hypervisor between hardware and VMs; a hypervisor on hardware with built-in virtualization logic]
Server virtualization approaches
Full Virtualization 
• 1st Generation offering of x86/x64 server 
virtualization 
• Dynamic binary translation 
– The emulation layer talks to an operating 
system which talks to the computer 
hardware 
– The guest OS doesn't see that it is running in an 
emulated environment 
• All of the hardware is emulated including the CPU 
• Two popular open source emulators are QEMU and 
Bochs 
[Diagram: applications on a guest OS with its own device drivers, running on emulated hardware provided through the host OS's device drivers]
Para-Virtualization 
• The guest OS is modified so that it runs 
kernel-level operations at Ring 1 (or 3) 
– the guest is fully aware of how to process 
privileged instructions 
– thus, privileged instruction translation by the 
VMM is no longer necessary 
– the guest operating system uses a specialized 
API to talk to the VMM and, in this way, 
executes the privileged instructions 
• The VMM is responsible for handling the 
virtualization requests and passing them to 
the hardware 
[Diagram: applications on a modified guest OS that calls the Virtual Machine Monitor through a specialized API; the hypervisor's device drivers talk to the hardware]
Server virtualization approaches 
Hardware-assisted virtualization 
• The guest OS runs at ring 0 
• The VMM uses processor extensions (such as 
Intel®-VT or AMD-V) to intercept and emulate 
privileged operations in the guest 
• Hardware-assisted virtualization removes many 
of the problems that make writing a VMM a 
challenge 
• The VMM runs in a ring more privileged than ring 0: 
a virtual ring -1 is created 
[Diagram: applications on an unmodified guest OS; the Virtual Machine Monitor uses hardware extensions to intercept privileged operations, with the hypervisor's device drivers talking to the hardware]
System-level Design Approaches 
Full virtualization (direct execution) 
Exact hardware exposed to OS 
Efficient execution 
OS runs unchanged 
Requires a “virtualizable” architecture 
Example: VMware 
 Paravirtualization 
 OS modified to execute under VMM 
 Requires porting OS code 
 Execution overhead 
 Necessary for some (popular) 
architectures (e.g., x86) 
 Examples: Xen, Denali 
CS5204 – Operating Systems
CPU Background 
Virtualization Techniques 
System ISA Virtualization 
Instruction Interpretation 
Trap and Emulate 
Binary Translation
Computer System Organization 
[Diagram: CPU with MMU, memory controller, and local bus interface; a high-speed I/O bus carrying the NIC (to the LAN), a bridge, and the frame buffer; a low-speed I/O bus carrying CD-ROM and USB]
CPU Organization 
Instruction Set Architecture (ISA) 
Defines: 
the state visible to the programmer 
registers and memory 
the instructions that operate on the state 
ISA typically divided into 2 parts 
User ISA 
Primarily for computation 
System ISA 
Primarily for system resource management
User ISA - State 
[Diagram: user-visible state — user virtual memory; special-purpose registers (program counter, condition codes); general-purpose registers Reg 0 … Reg n-1; floating-point registers FP 0 … FP n-1]
User ISA – Instructions 
Instruction groupings: 
Integer: Add, Sub, And, Compare, … 
Memory: Load byte, Load word, Store multiple, Push, … 
Control flow: Jump, Jump equal, Call, Return, … 
Floating point: Add single, Mult. double, Sqrt double, … 
Typical instruction pipeline: Fetch → Decode → Registers → Issue → execution units (integer, integer, memory, FP)
System ISA 
Privilege Levels 
Control Registers 
Traps and Interrupts 
Hardcoded Vectors 
Dispatch Table 
System Clock 
MMU 
Page Tables 
TLB 
I/O Device Access 
[Diagram: privilege rings — kernel at level 0, user extension at level 1, user at level 2]
Outline 
CPU Background 
Virtualization Techniques 
System ISA Virtualization 
Instruction Interpretation 
Trap and Emulate 
Binary Translation
Isomorphism 
Formally, virtualization involves the construction of an 
isomorphism from guest state to host state. 
[Diagram: guest transition e(Si) takes state Si to Sj; the mapping V sends Si to Si′ and Sj to Sj′; host transition e′(Si′) takes Si′ to Sj′]
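This requirement can be written as a commuting condition: for every guest state $S_i$ and every guest instruction sequence $e$, there must exist a host instruction sequence $e'$ such that

```latex
e'(V(S_i)) \;=\; V(e(S_i))
```

that is, mapping the guest state to the host and executing the host's emulation $e'$ lands in the same host state as first executing the guest transition $e$ and then mapping the result.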
Virtualizing the System ISA 
Hardware needed by monitor 
Ex: monitor must control real hardware interrupts 
Access to hardware would allow VM to compromise 
isolation boundaries 
Ex: access to MMU would allow VM to write any page 
So… 
All access to the virtual System ISA by the guest must be 
emulated by the monitor in software. 
System state kept in memory. 
System instructions are implemented as functions in the 
monitor.
Example: CPUState 

static struct { 
    uint32 GPR[16];   /* general-purpose registers */ 
    uint32 LR;        /* link register */ 
    uint32 PC;        /* program counter */ 
    int IE;           /* interrupt-enable flag */ 
    int IRQ;          /* pending-interrupt flag */ 
} CPUState; 

void CPU_CLI(void) 
{ 
    CPUState.IE = 0; 
} 

void CPU_STI(void) 
{ 
    CPUState.IE = 1; 
} 

Goal for CPU virtualization techniques 
Process normal instructions as fast as possible 
Forward privileged instructions to emulation routines
Instruction Interpretation 
Emulate Fetch/Decode/Execute pipeline in software 
Positives 
Easy to implement 
Minimal complexity 
Negatives 
Slow!
Trap and Emulate 
[Diagram: the guest OS and applications run unprivileged; page faults, undefined instructions, and virtual IRQs trap into the privileged Virtual Machine Monitor, which contains MMU emulation, CPU emulation, and I/O emulation]
CPU Architecture 
What is a trap? 
When the CPU is running in user mode, internal or external 
events that must be handled in kernel mode can occur. 
The CPU then jumps to the hardware exception handler vector and 
executes system operations in kernel mode. 
Trap types: 
System call 
Invoked by an application in user mode. 
For example, an application asks the OS for system I/O. 
Hardware interrupt 
Invoked by a hardware event in any mode. 
For example, the hardware clock timer triggers an event. 
Exception 
Invoked when an unexpected error or system malfunction occurs. 
For example, executing a privileged instruction in user mode.
Trap and Emulate Model 
If we want CPU virtualization to be efficient, how should 
we implement the VMM? 
We should make guest binaries run on the CPU as fast as 
possible. 
Theoretically, if we could run all guest binaries 
natively, there would be NO overhead at all. 
But we cannot let the guest OS handle everything; the VMM must 
be able to control all hardware resources. 
Solution: 
Ring compression 
Shift the traditional OS from kernel mode (Ring 0) to user mode (Ring 1), 
and run the VMM in kernel mode. 
The VMM will then be able to intercept all trapping events.
Trap and Emulate Model 
VMM virtualization paradigm (trap and emulate) : 
1. Let normal instructions of guest OS run directly on 
processor in user mode. 
2. When executing privileged instructions, hardware will 
make processor trap into the VMM. 
3. The VMM emulates the effect of the privileged instruction 
for the guest OS and returns to the guest.
Trap and Emulate Model 
Traditional OS: 
When an application invokes a system call: 
The CPU traps to the interrupt handler vector in the OS. 
The CPU switches to kernel mode (Ring 0) 
and executes OS instructions. 
On a hardware event: 
The hardware interrupts CPU execution and 
jumps to the interrupt handler in the OS.
Trap and Emulate Model 
VMM and guest OS: 
System call 
The CPU traps to the interrupt 
handler vector of the VMM. 
The VMM jumps back into the guest OS. 
Hardware interrupt 
Hardware makes the CPU trap to the 
interrupt handler of the VMM. 
The VMM jumps to the corresponding 
interrupt handler of the guest OS. 
Privileged instruction 
Privileged instructions run in the guest OS 
trap to the VMM for instruction emulation. 
After emulation, the VMM jumps 
back to the guest OS.
De-privileging 
VMM emulates the effect on 
system/hardware resources of 
privileged instructions whose execution 
traps into the VMM 
aka trap-and-emulate 
Typically achieved by running GuestOS 
at a lower hardware priority level than 
the VMM 
Problematic on some architectures 
where privileged instructions do not 
trap when executed at deprivileged 
priority 
[Diagram: a privileged instruction in the GuestOS traps into the VMM; the VMM emulates the resource change and applies it on the GuestOS's behalf]
Issues with Trap and Emulate 
Not all architectures support it 
Trap costs may be high 
Monitor uses a privilege level 
Need to virtualize the protection levels
Binary Translator 
[Diagram: guest code flows through a translator into a translation cache (TC); a TC index locates already-translated blocks, and callouts invoke CPU emulation routines]
Storage Virtualization 
Process of presenting a logical view of physical storage 
resources to hosts 
Logical storage appears and behaves as physical storage 
directly connected to host 
Examples of storage virtualization are: 
Host‐based volume management 
LUN creation 
Tape virtualization 
Benefits of storage virtualization: 
Increased storage utilization 
Adding or deleting storage without affecting application’s 
availability 
Non‐disruptive data migration
SNIA Storage Virtualization Taxonomy 
What is created: block virtualization; disk virtualization; file system / file-record virtualization; tape, tape drive, and tape library virtualization; other device virtualization 
Where it is done: host-based, network-based, or storage device / storage subsystem virtualization 
How it is implemented: in-band or out-of-band virtualization
Storage Virtualization Requires a 
Multi-Level Approach 
Server: path management, volume management, replication 
Storage network: path redirection, load balancing (ISL trunking), access control (zoning) 
Storage: volume management (LUNs), access control, replication, RAID
Server 
With traditional storage hardware devices that connected 
directly to servers, the actual magnetic disk was presented 
to servers and their operating systems as LUNs, where the 
disk was arranged into sectors composed of a number of 
fixed-size blocks. 
To allow applications not only to store but also to find 
information easily, the operating system arranged these 
blocks into a "filesystem". 
Much like a paper-based filing system, a file system is 
simply a logical way of referencing these blocks as a 
series of unique files, each with a meaningful name and 
type so they can be easily accessed.
Storage Network 
Network-based storage virtualization embeds the 
intelligence for managing storage resources in the 
network layer, 
abstracting the view of real storage resources between 
the server and the storage array, either in-band or 
out-of-band.
Storage Virtualization 
Configuration 
[Diagram: (a) out-of-band — a virtualization appliance attached to the storage network outside the path between servers and storage arrays; (b) in-band — the virtualization appliance placed in the data path between servers and storage arrays] 
(a) In an out-of-band implementation, the virtualized environment configuration is stored external to the data path 
(b) The in-band implementation places the virtualization function in the data path
In-Band Approach 
The in-band approach is sometimes referred to as 
symmetric. 
It embeds the virtualization functionality in the I/O 
(input/output) path between the server and the storage 
array. 
It can be implemented in the SAN switches 
themselves.
In-Band Approach 
All I/O requests, along with the data, pass through the 
device; the server interacts with the 
virtualization device, never directly with the storage 
device. 
The virtualization device analyzes the request, 
consults its mapping tables, and, in turn, performs I/O 
to the storage device. 
These devices not only translate storage requests but 
can also cache data in their on-board 
memory.
It can also provide 
metrics on data usage, 
manage replication services, 
orchestrate data migration, and 
implement thin provisioning.
Out-of-Band Approach 
The out-of-band approach is sometimes referred to as 
asymmetric. 
It does not strictly reside in the I/O path like the 
in-band approach. 
The servers maintain direct interaction with the 
storage array through the intelligent switch. 
The out-of-band appliance maintains a map (often 
referred to as metadata) of all the storage resources 
connected in the SAN and instructs the server where 
to find the data.
Out-of-Band Approach 
It uses special software or an agent, as instructions 
need to be sent through the SAN to make it work. 
Functions such as caching of data are not possible; 
as a result, only the in-band approach can improve 
performance.
Pros and Cons 
Both in-band and out-of-band approaches provide 
virtualization with the ability to: 
1. Pool heterogeneous vendor storage products into a 
seamlessly accessible pool. 
2. Perform replication between non-like devices. 
3. Provide a single management interface.
Pros and Cons 
Implementation can be very complex because the 
pooling of storage requires the storage extents to be 
remapped into virtual extents. 
Clustering is needed to protect the mapping tables 
and maintain cache consistency which can be very 
risky. 
The I/O can suffer from latency, impacting 
performance and scalability due to the multiple steps 
required to complete the request
Pros and Cons 
Decoupling the virtualization from the storage once it 
has been implemented is impossible because all the 
meta‐data resides in the appliance. 
Solutions on the market exist only for Fibre Channel 
(FC) based SANs. These devices are not suitable for 
Internet Protocol (IP) based SANs. 
Since both approaches are dependent on the SAN, they 
require additional switch ports, which involves 
additional zoning complexity
Pros and Cons 
When migrating data between storage systems, the 
virtualization appliance must read and write the data 
through the SAN, check the status coming back, and 
maintain a log of any changes during the move, all of 
which impacts performance. 
Specialized software needs to be installed on all 
servers, making it difficult to maintain.
Storage controller 
Enterprise‐class storage arrays, which have features 
and capability suitable for large organizations, have 
always featured virtualization capabilities (some more 
than others) to enhance the physical storage resource. 
One example of this is RAID, for providing data 
protection from disk failures
Storage controller 
Many enterprise-class devices incorporate 
sophisticated switching architectures, 
with multiple physical connections to disk drives. 
The external storage assets presented to it are then 
“discovered” and managed in the same way as internal 
disks
Storage controller 
This approach has a number of benefits, including not 
requiring a remapping of LUNs and increasing extents. 
Once virtualized in this manner, the sophisticated 
microcode software that resides on the storage 
controller presents the external storage.
Controller‐based storage virtualization allows external storage to 
appear as if it’s internal.
Storage controller 
Leveraging mature enterprise-class features, data can 
be migrated non-disruptively from one pool to 
another, and replication can take place between non-like 
and like storage. 
Partitioning can be implemented to allocate resources 
such as ports, cache, and disk pools to particular 
workloads
Advantages 
Capabilities such as replication, partitioning, 
migration, and thin provisioning are extended to 
legacy storage arrays. 
Heterogeneous data replication between non-like 
vendors or different storage classes reduces data 
protection costs. 
Interoperability issues are reduced as the virtualized 
controller mimics a server connection to external 
storage
Block-Level Storage Virtualization 
Ties together multiple 
independent storage arrays 
Presented to host as a single 
storage device 
Mapping used to redirect 
I/O on this device to 
underlying physical arrays 
Deployed in a SAN 
environment 
Non-disruptive data mobility 
and data migration 
Enable significant cost and 
resource optimization 
[Diagram: servers accessing heterogeneous storage arrays through virtualization applied at the SAN level]
File-Level Virtualization 
Before file-level virtualization: 
 Every NAS device is an independent 
entity, physically and logically 
 Underutilized storage resources 
 Downtime caused by data migrations 
After file-level virtualization: 
 Breaks dependencies between end-user 
access and data location 
 Storage utilization is optimized 
 Nondisruptive migrations 
[Diagram: clients on an IP network reaching file servers, NAS devices/platforms, and a storage array directly before; after, a virtualization appliance sits between the clients and the NAS devices/platforms]
Storage Virtualization Challenges 
Scalability 
Ensure storage devices meet performance requirements 
Functionality 
Virtualized environment must provide same or better 
functionality 
Must continue to leverage existing functionality on arrays 
Manageability 
Virtualization device breaks end-to-end view of storage 
infrastructure 
Must integrate existing management tools 
Support 
Interoperability in multivendor environment
Network Virtualization for Dummies 
Making a physical network appear as multiple logical 
ones
Why Virtualize? 
The Internet is almost ossified 
Lots of band-aids and makeshift solutions (e.g., overlays) 
A new architecture (aka clean slate) is needed 
Hard to come up with a one-size-fits-all architecture 
Almost impossible to predict what the future might unleash 
Why not create an all-sizes-fit-into-one instead! 
Open and expandable architecture 
Testbed for future networking architectures and 
protocols
Related Concepts 
Virtual Private Networks (VPN) 
Virtual network connecting distributed sites 
Not customizable enough 
Active and Programmable Networks 
Customized network functionalities 
Programmable interfaces and active codes 
Overlay Networks 
Application layer virtual networks 
Not flexible enough
Network Virtualization Model 
Business Model 
Architecture 
Design Principles 
Design Goals
Business Model
Architecture
Design Principles
Design Goals 
Flexibility 
Service providers can choose 
arbitrary network topology, 
routing and forwarding functionalities, 
customized control and data planes 
No need for co-ordination with others 
IPv6 fiasco should never happen again 
Manageability 
Clear separation of policy from mechanism 
Defined accountability of infrastructure and service providers 
Modular management
Design Goals 
Scalability 
Maximize the number of co-existing virtual networks 
Increase resource utilization and amortize CAPEX and 
OPEX 
Security, Privacy, and Isolation 
Complete isolation between virtual networks, 
both logical and resource-level 
Isolate faults, bugs, and misconfigurations 
Secured and private
Design Goals 
Programmability 
Of network elements e.g. routers 
Answer “how much” and “how” 
Easy and effective without being vulnerable to threats 
Heterogeneity 
Networking technologies 
Optical, sensor, wireless etc. 
Virtual networks
Design Goals 
Experimental and Deployment Facility 
PlanetLab, GENI, VINI 
Directly deploy services in real world from the testing 
phase 
Legacy Support 
Consider the existing Internet as a member of the 
collection of multiple virtual Internets 
Very important to keep all concerned parties satisfied
Existing Projects 
Four general categories: 
1. Networking technology 
IP (X-Bone), ATM (Tempest) 
2. Layer of virtualization 
Physical layer (UCLP), Application layer (VIOLIN) 
3. Architectural domain 
Network resource management (VNRMS), Spawning 
networks (Genesis) 
4. Level of virtualization 
Node virtualization (PlanetLab), Full virtualization (Cabo)

 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
 
Unit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfUnit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdf
 
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced LoadsFEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
 
22-prompt engineering noted slide shown.pdf
22-prompt engineering noted slide shown.pdf22-prompt engineering noted slide shown.pdf
22-prompt engineering noted slide shown.pdf
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
Hazard Identification (HAZID) vs. Hazard and Operability (HAZOP): A Comparati...
Hazard Identification (HAZID) vs. Hazard and Operability (HAZOP): A Comparati...Hazard Identification (HAZID) vs. Hazard and Operability (HAZOP): A Comparati...
Hazard Identification (HAZID) vs. Hazard and Operability (HAZOP): A Comparati...
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equation
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 

cloud basics.

Virtual appliances 
a complete, portable application execution environment
Increase application availability 
Fast, automated recovery 
automated failover/restart within a cluster 
disaster recovery across sites 
VM portability enables this to work reliably across potentially different hardware configurations 
Fault tolerance 
hypervisor-based fault tolerance against hardware failures [Bressoud and Schneider, SOSP 1995] 
run two identical VMs on two different machines; the backup VM takes over if the primary VM's hardware crashes 
commercial prototypes beginning to emerge (2008)

Virtualization Comes in Many Forms 
Virtual Memory: each application sees its own logical memory, independent of physical memory 
Virtual Networks: each application sees its own logical network, independent of physical network 
Virtual Servers: each application sees its own logical server, independent of physical servers 
Virtual Storage: each application sees its own logical storage, independent of physical storage

Memory Virtualization 
Each application sees its own logical memory, independent of physical memory 
Benefits of virtual memory: 
Remove physical-memory limits 
Run multiple applications at once 
Virtual pages are backed either by physical memory or by swap space
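The per-application view described above can be sketched as a toy page-table model — a simulation for illustration only (class and variable names are invented, and swap handling is omitted for brevity):

```python
# Toy model of memory virtualization: each process has its own page table,
# so the same virtual address in two processes maps to different frames.

PAGE_SIZE = 4096

class Process:
    def __init__(self):
        self.page_table = {}  # virtual page number -> physical frame number

class Machine:
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))

    def map_page(self, proc, vpn):
        # Allocate a physical frame for the virtual page (no swap in this toy).
        frame = self.free_frames.pop(0)
        proc.page_table[vpn] = frame
        return frame

    def translate(self, proc, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        return proc.page_table[vpn] * PAGE_SIZE + offset

m = Machine(num_frames=8)
a, b = Process(), Process()
m.map_page(a, 0)   # gets frame 0
m.map_page(b, 0)   # same virtual page, different frame (1)
print(m.translate(a, 100), m.translate(b, 100))  # 100 4196
```

Both processes address virtual page 0, yet their accesses land in different physical frames — the isolation property the slide describes.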
Network Virtualization 
Each application sees its own logical network, independent of physical network 
Benefits of virtual networks: 
Common network links with access-control properties of separate links 
Manage logical networks instead of physical networks 
Virtual SANs provide similar benefits for storage-area networks 
Example: VLANs A, B, and C carried across two switches over a single VLAN trunk
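The VLAN isolation described above can be sketched as a toy VLAN-aware switch, keying its forwarding table by (VLAN, MAC) — a simulation for illustration; the class and the "flood-within-vlan" placeholder are invented:

```python
# Toy VLAN-aware switch: the forwarding table is keyed by (vlan, mac),
# so hosts on different VLANs are isolated even on shared hardware.

class VlanSwitch:
    def __init__(self):
        self.table = {}  # (vlan, mac) -> port

    def learn(self, vlan, mac, port):
        self.table[(vlan, mac)] = port

    def forward(self, vlan, dst_mac):
        # A frame only reaches entries learned on the same VLAN.
        return self.table.get((vlan, dst_mac), "flood-within-vlan")

sw = VlanSwitch()
sw.learn("A", "aa:aa", port=1)
sw.learn("B", "aa:aa", port=2)   # same MAC, different logical network
print(sw.forward("A", "aa:aa"))  # 1
print(sw.forward("B", "aa:aa"))  # 2
print(sw.forward("C", "aa:aa"))  # flood-within-vlan
```

The same destination MAC resolves to different ports depending on the VLAN, and an unknown VLAN never leaks into another logical network.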
Server Virtualization 
Before server virtualization: 
Single operating system image per machine 
Software and hardware tightly coupled 
Running multiple applications on the same machine often creates conflicts 
Underutilized resources 
After server virtualization: 
Virtual machines (VMs) break dependencies between operating system and hardware 
Manage operating system and application as a single unit by encapsulating them into VMs 
Strong fault and security isolation 
Hardware-independent

Storage Virtualization 
Process of presenting a logical view of physical storage resources to hosts 
Logical storage appears and behaves as physical storage directly connected to a host 
Examples of storage virtualization: 
Host-based volume management 
LUN creation 
Tape virtualization 
Benefits of storage virtualization: 
Increased storage utilization 
Adding or deleting storage without affecting an application's availability 
Non-disruptive data migration

Definitions 
Virtualization 
A layer mapping its visible interface and resources onto the interface and resources of the underlying layer or system on which it is implemented 
Purposes 
Abstraction: to simplify the use of the underlying resource (e.g., by removing details of the resource's structure) 
Replication: to create multiple instances of the resource (e.g., to simplify management or allocation) 
Isolation: to separate the uses which clients make of the underlying resources (e.g., to improve security) 
Virtual Machine Monitor (VMM) 
A virtualization system that partitions a single physical "machine" into multiple virtual machines 
Terminology 
Host: the machine and/or software on which the VMM is implemented 
Guest: the OS which executes under the control of the VMM

Properties of Classical Virtualization 
Equivalence (fidelity): a program running under a VMM should exhibit behavior identical to running on the equivalent machine 
Efficiency (performance): a statistically dominant fraction of machine instructions may be executed without VMM intervention 
Resource control (safety): the VMM is in full control of virtualized resources; executed programs may not affect the system resources
Evolution of Software Solutions (server virtualization approaches) 
1st generation: full virtualization (binary rewriting) 
software-based; VMware and Microsoft 
2nd generation: paravirtualization 
cooperative virtualization; modified guest; VMware, Xen 
3rd generation: silicon-based (hardware-assisted) virtualization 
unmodified guest; VMware and Xen on virtualization-aware hardware platforms

Full Virtualization (server virtualization approaches) 
1st-generation offering of x86/x64 server virtualization 
Dynamic binary translation 
The emulation layer talks to an operating system, which talks to the computer hardware 
The guest OS doesn't see that it is running in an emulated environment 
All of the hardware is emulated, including the CPU 
Two popular open source emulators are QEMU and Bochs
Para-Virtualization (server virtualization approaches) 
The guest OS is modified and thus runs kernel-level operations at Ring 1 (or 3) 
the guest is fully aware of how to process privileged instructions 
thus, privileged instruction translation by the VMM is no longer necessary 
The guest operating system uses a specialized API to talk to the VMM and, in this way, executes the privileged instructions 
The VMM is responsible for handling the virtualization requests and putting them to the hardware
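The specialized API described above can be sketched as a toy "hypercall" interface — a simulation for illustration only; the class and operation names are invented, not any real hypervisor's ABI:

```python
# Toy paravirtualization: instead of executing a privileged instruction and
# trapping, the modified guest calls the hypervisor's API (a "hypercall").

class Hypervisor:
    def __init__(self):
        self.interrupts_enabled = True

    def hypercall(self, op):
        # The hypervisor validates and applies the request to the hardware.
        if op == "disable_interrupts":
            self.interrupts_enabled = False
        elif op == "enable_interrupts":
            self.interrupts_enabled = True
        else:
            raise ValueError(f"unknown hypercall {op}")

class ParavirtGuestOS:
    def __init__(self, hv):
        self.hv = hv

    def cli(self):
        # A native kernel would execute the privileged CLI instruction;
        # the paravirtualized kernel asks the hypervisor instead.
        self.hv.hypercall("disable_interrupts")

hv = Hypervisor()
guest = ParavirtGuestOS(hv)
guest.cli()
print(hv.interrupts_enabled)  # False
```

The point of the design is visible in `cli()`: no trap and no binary translation are needed, because the guest kernel was modified to cooperate with the VMM.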
Hardware-Assisted Virtualization (server virtualization approaches) 
The guest OS runs at Ring 0 
The VMM uses processor extensions (such as Intel VT or AMD-V) to intercept and emulate privileged operations in the guest 
Hardware-assisted virtualization removes many of the problems that make writing a VMM a challenge 
The VMM runs in a more privileged ring than 0: a virtual "Ring -1" is created

System-level Design Approaches 
Full virtualization (direct execution) 
Exact hardware exposed to the OS 
Efficient execution; OS runs unchanged 
Requires a "virtualizable" architecture 
Example: VMware 
Paravirtualization 
OS modified to execute under the VMM 
Requires porting OS code; execution overhead 
Necessary for some (popular) architectures (e.g., x86) 
Examples: Xen, Denali 
CS5204 – Operating Systems
Outline 
CPU Background 
Virtualization Techniques 
System ISA Virtualization 
Instruction Interpretation 
Trap and Emulate 
Binary Translation

Computer System Organization 
[diagram: CPU, MMU, memory controller, and local bus interface; a high-speed I/O bus with NIC, frame buffer, and controller; a bridge to a low-speed I/O bus with CD-ROM and USB; LAN]

CPU Organization 
The Instruction Set Architecture (ISA) defines: 
the state visible to the programmer (registers and memory) 
the instructions that operate on that state 
The ISA is typically divided into two parts: 
User ISA: primarily for computation 
System ISA: primarily for system resource management

User ISA – State 
[diagram: special-purpose registers (program counter, condition codes); general-purpose registers (Reg 0 … Reg n-1); floating-point registers (FP 0 … FP n-1); user virtual memory]

User ISA – Instructions 
Instruction groupings: 
Integer: Add, Sub, And, Compare, … 
Memory: Load byte, Load word, Store multiple, Push, … 
Control flow: Jump, Jump equal, Call, Return, … 
Floating point: Add single, Mult. double, Sqrt double, … 
Typical instruction pipeline: Fetch, Decode, Fetch Registers, Issue, then Integer, Memory, and FP execution units

System ISA 
Privilege levels (user/system; e.g., Level 0, Level 1, Level 2) 
Control registers 
Traps and interrupts (hardcoded vectors, dispatch table) 
System clock 
MMU (page tables, TLB) 
I/O device access
Outline 
CPU Background 
Virtualization Techniques 
System ISA Virtualization 
Instruction Interpretation 
Trap and Emulate 
Binary Translation

Isomorphism 
Formally, virtualization involves the construction of an isomorphism from guest state to host state: if a guest operation e takes guest state Si to Sj, then the corresponding host operation e' takes host state Si' = V(Si) to Sj' = V(Sj), where V maps each guest state to its host representation.

Virtualizing the System ISA 
Hardware needed by the monitor 
Example: the monitor must control real hardware interrupts 
Access to hardware would allow a VM to compromise isolation boundaries 
Example: access to the MMU would allow a VM to write any page 
So all access to the virtual System ISA by the guest must be emulated by the monitor in software: 
System state is kept in memory 
System instructions are implemented as functions in the monitor

Example: CPUState 
static struct { 
    uint32 GPR[16]; 
    uint32 LR; 
    uint32 PC; 
    int IE; 
    int IRQ; 
} CPUState; 

void CPU_CLI(void) { CPUState.IE = 0; } 
void CPU_STI(void) { CPUState.IE = 1; } 

Goal for CPU virtualization techniques: 
Process normal instructions as fast as possible 
Forward privileged instructions to emulation routines
Instruction Interpretation 
Emulate the fetch/decode/execute pipeline in software 
Positives: easy to implement, minimal complexity 
Negatives: slow!
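The software fetch/decode/execute pipeline can be sketched as a toy interpreter — a simulation for illustration; the instruction set and state layout are invented:

```python
# Toy instruction interpreter: the VMM fetches, decodes, and executes every
# guest instruction in software, including privileged ones like CLI.

def interpret(program, state):
    while state["PC"] < len(program):
        op, *args = program[state["PC"]]   # fetch + decode
        state["PC"] += 1
        if op == "MOV":                    # execute: normal instruction
            reg, value = args
            state[reg] = value
        elif op == "ADD":
            dst, src = args
            state[dst] += state[src]
        elif op == "CLI":                  # execute: privileged, emulated
            state["IE"] = 0
        elif op == "HALT":
            break
    return state

state = interpret(
    [("MOV", "R0", 2), ("MOV", "R1", 3), ("ADD", "R0", "R1"),
     ("CLI",), ("HALT",)],
    {"PC": 0, "IE": 1},
)
print(state["R0"], state["IE"])  # 5 0
```

Every single instruction goes through this software loop, which is why interpretation is simple but slow compared with the techniques on the following slides.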
Trap and Emulate 
[diagram: the guest OS and applications run unprivileged; page faults, undefined instructions, and virtual IRQs trap into the privileged Virtual Machine Monitor, which contains MMU emulation, CPU emulation, and I/O emulation]

CPU Architecture 
What is a trap? 
When the CPU is running in user mode, some internal or external events that need to be handled in kernel mode take place. The CPU then jumps to the hardware exception handler vector and executes system operations in kernel mode. 
Trap types: 
System call: invoked by an application in user mode, for example when an application asks the OS for system I/O 
Hardware interrupt: invoked by some hardware event in any mode, for example when the hardware clock timer triggers an event 
Exception: invoked when an unexpected error or system malfunction occurs, for example executing privileged instructions in user mode

Trap and Emulate Model 
If we want CPU virtualization to be efficient, how should we implement the VMM? 
We should make guest binaries run on the CPU as fast as possible; theoretically, if we could run all guest binaries natively, there would be no overhead at all. 
But we cannot let the guest OS handle everything; the VMM must be able to control all hardware resources. 
Solution: ring compression 
Shift the traditional OS from kernel mode (Ring 0) to user mode (Ring 1), and run the VMM in kernel mode. The VMM can then intercept all trapping events.
Trap and Emulate Model 
The VMM virtualization paradigm (trap and emulate): 
1. Let normal instructions of the guest OS run directly on the processor in user mode. 
2. When a privileged instruction executes, the hardware makes the processor trap into the VMM. 
3. The VMM emulates the effect of the privileged instruction for the guest OS and returns to the guest.
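The three steps above can be sketched as a toy dispatch loop — a simulation for illustration; the instruction names and virtual-CPU layout are invented:

```python
# Toy trap-and-emulate: normal guest instructions "run natively"; privileged
# ones trap into the VMM, which emulates them against the virtual CPU state.

PRIVILEGED = {"CLI", "STI"}

def vmm_emulate(op, vcpu):
    # Step 3: emulate the privileged instruction's effect, then resume guest.
    if op == "CLI":
        vcpu["IE"] = 0
    elif op == "STI":
        vcpu["IE"] = 1

def run_guest(program, vcpu):
    traps = 0
    for op in program:
        if op in PRIVILEGED:      # step 2: hardware traps into the VMM
            traps += 1
            vmm_emulate(op, vcpu)
        else:                     # step 1: run directly in user mode
            pass
    return traps

vcpu = {"IE": 1}
traps = run_guest(["ADD", "LOAD", "CLI", "STORE", "STI"], vcpu)
print(traps, vcpu["IE"])  # 2 1
```

Only the two privileged instructions involve the VMM; the rest execute at full speed, which is the efficiency argument of the previous slide.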
Trap and Emulate Model 
Traditional OS: 
When an application invokes a system call, the CPU traps to the interrupt handler vector in the OS, switches to kernel mode (Ring 0), and executes OS instructions. 
On a hardware event, the hardware interrupts CPU execution and jumps to the interrupt handler in the OS.

Trap and Emulate Model 
VMM and guest OS: 
System call: the CPU traps to the interrupt handler vector of the VMM; the VMM jumps back into the guest OS. 
Hardware interrupt: the hardware makes the CPU trap to the interrupt handler of the VMM; the VMM jumps to the corresponding interrupt handler of the guest OS. 
Privileged instruction: running privileged instructions in the guest OS traps to the VMM for instruction emulation; after emulation, the VMM jumps back to the guest OS.

De-privileging 
The VMM emulates the effect on system/hardware resources of privileged instructions whose execution traps into the VMM (aka trap-and-emulate) 
Typically achieved by running the guest OS at a lower hardware priority level than the VMM 
Problematic on some architectures where privileged instructions do not trap when executed at deprivileged priority

Issues with Trap and Emulate 
Not all architectures support it 
Trap costs may be high 
The monitor uses up a privilege level 
Need to virtualize the protection levels
Binary Translator 
Guest code is fed through a translator into a translation cache (TC); an index keyed by guest address locates previously translated code, and callouts from translated code invoke CPU emulation routines.
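The translator-plus-cache structure can be sketched as a toy model — a simulation for illustration; the "translation" here merely rewrites privileged ops into callouts, and all names are invented:

```python
# Toy binary translator: translate each guest block once, cache the result
# in a translation cache (TC), and reuse it on later executions.

class Translator:
    def __init__(self):
        self.tc = {}           # translation cache: guest address -> code
        self.translations = 0  # how many times we actually translated

    def translate(self, block):
        self.translations += 1
        # Rewrite privileged ops as callouts to CPU emulation routines;
        # everything else passes through unchanged.
        return [("CALLOUT", op) if op in ("CLI", "STI") else ("NATIVE", op)
                for op in block]

    def run_block(self, addr, block):
        if addr not in self.tc:        # index lookup misses: translate once
            self.tc[addr] = self.translate(block)
        return self.tc[addr]           # hits reuse the cached translation

t = Translator()
block = ["ADD", "CLI", "LOAD"]
t.run_block(0x1000, block)
t.run_block(0x1000, block)             # second run: cache hit, no retranslation
print(t.translations)                  # 1
print(t.tc[0x1000][1])                 # ('CALLOUT', 'CLI')
```

The cache is what makes binary translation fast in the steady state: translation cost is paid once per block, while repeated executions run from the TC.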
Storage Virtualization 
Process of presenting a logical view of physical storage resources to hosts 
Logical storage appears and behaves as physical storage directly connected to a host 
Examples of storage virtualization: host-based volume management, LUN creation, tape virtualization 
Benefits of storage virtualization: increased storage utilization, adding or deleting storage without affecting an application's availability, non-disruptive data migration

SNIA Storage Virtualization Taxonomy 
What is created: block virtualization; disk virtualization; file system and file/record virtualization; other device virtualization (tape, tape drive, tape library) 
Where it is done: host-based, network-based, or storage device/storage subsystem virtualization 
How it is implemented: in-band or out-of-band virtualization

Storage Virtualization Requires a Multi-Level Approach 
Server: path management, volume management, replication 
Storage network: path redirection, load balancing (ISL trunking), access control (zoning), volume management (LUNs) 
Storage: access control, replication, RAID

With traditional storage hardware devices that connected directly to servers, the actual magnetic disk was presented to servers and their operating systems as LUNs, where the disk was arranged into sectors comprised of a number of fixed-size blocks. 
To allow applications to not only store but also find information easily, the operating system arranged these blocks into a "filesystem". 
Much like a paper-based filing system, a file system is simply a logical way of referencing these blocks as a series of unique files, each with a meaningful name and type so they can be easily accessed.

Storage Network 
Network-based storage virtualization embeds the intelligence that manages the storage resources in the network layer, abstracting the view of real storage resources between the server and the storage array, either in-band or out-of-band.

Storage Virtualization Configurations 
(a) In the out-of-band implementation, the virtualization appliance sits outside the path between servers and storage arrays; the virtualized environment configuration is stored external to the data path. 
(b) The in-band implementation places the virtualization function directly in the data path.

In-Band Approach 
The in-band approach, sometimes referred to as symmetric, embeds the virtualization functionality in the I/O (input/output) path between the server and the storage array. 
It can be implemented in the SAN switches themselves.
In-Band Approach 
All I/O requests, along with the data, pass through the device; the server interacts with the virtualization device, never directly with the storage device. 
The virtualization device analyzes the request, consults its mapping tables, and, in turn, performs I/O to the storage device. 
These devices not only translate storage requests but are also able to cache data in their on-board memory.

It also provides: 
Metrics on data usage 
Managed replication services 
Orchestrated data migration 
Thin provisioning

Out-of-Band Approach 
The out-of-band approach, sometimes referred to as asymmetric, does not strictly reside in the I/O path like the in-band approach. 
The servers maintain direct interaction with the storage array through the intelligent switch. 
The out-of-band appliance maintains a map (often referred to as metadata) of all the storage resources connected in the SAN and instructs the server where to find them.
Out-of-Band Approach 
It requires special software or an agent, as instructions need to be sent through the SAN to make it work. 
Functions such as caching of data are not possible; only the in-band approach can improve performance in this way.

Pros and Cons 
Both in-band and out-of-band approaches provide virtualization with the ability to: 
1. Pool heterogeneous vendor storage products into a seamless, accessible pool 
2. Perform replication between non-like devices 
3. Provide a single management interface

Pros and Cons 
Implementation can be very complex because pooling storage requires the storage extents to be remapped into virtual extents. 
Clustering is needed to protect the mapping tables and maintain cache consistency, which can be very risky. 
I/O can suffer from latency, impacting performance and scalability, due to the multiple steps required to complete a request.

Pros and Cons 
Decoupling the virtualization from the storage once it has been implemented is impossible, because all the metadata resides in the appliance. 
Solutions on the market only exist for Fibre Channel (FC) based SANs; these devices are not suitable for Internet Protocol (IP) based SANs. 
Since both approaches depend on the SAN, they require additional switch ports, which involves additional zoning complexity.

Pros and Cons 
When migrating data between storage systems, the virtualization appliance must read and write the data through the SAN, check the status coming back, and maintain a log of any changes during the move, which impacts performance. 
Specialized software needs to be installed on all servers, making it difficult to maintain.
Storage Controller 
Enterprise-class storage arrays, which have features and capability suitable for large organizations, have always featured virtualization capabilities (some more than others) to enhance the physical storage resource. 
One example of this is RAID, which provides data protection from disk failures.

Storage Controller 
Many enterprise-class devices incorporate sophisticated switching architectures with multiple physical connections to disk drives. 
The external storage assets presented to such a controller are then "discovered" and managed in the same way as internal disks.

Storage Controller 
This approach has a number of benefits, including not requiring a remapping of LUNs and increasing extents. 
Once virtualized in this manner, the sophisticated microcode software that resides on the storage controller presents the external storage.

Controller-based storage virtualization allows external storage to appear as if it's internal.

Storage Controller 
Leveraging mature enterprise-class features, data can be migrated non-disruptively from one pool to another, and replication can take place between non-like and like storage. 
Partitioning can be implemented to allocate resources such as ports, cache, and disk pools to particular workloads.

Advantages 
Capabilities such as replication, partitioning, migration, and thin provisioning are extended to legacy storage arrays. 
Heterogeneous data replication between non-like vendors or different storage classes reduces data protection costs. 
Interoperability issues are reduced, as the virtualized controller mimics a server connection to external storage.
Block-Level Storage Virtualization 
Ties together multiple independent storage arrays 
Presented to the host as a single storage device 
Mapping is used to redirect I/O on this device to the underlying physical arrays 
Deployed in a SAN environment (virtualization applied at the SAN level, across heterogeneous storage arrays) 
Non-disruptive data mobility and data migration 
Enables significant cost and resource optimization
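The I/O redirection described above can be sketched as a toy extent map for a virtual LUN — a simulation for illustration; the extent size, array names, and class are invented:

```python
# Toy block-level virtualization: a virtual LUN's logical block addresses
# are remapped to extents spread across independent physical arrays.

EXTENT_BLOCKS = 1000

class VirtualLun:
    def __init__(self, extent_map):
        # extent index -> (array name, base block on that array)
        self.extent_map = extent_map

    def resolve(self, lba):
        # Redirect a host I/O on the virtual device to a physical location.
        extent, offset = divmod(lba, EXTENT_BLOCKS)
        array, base = self.extent_map[extent]
        return array, base + offset

    def migrate_extent(self, extent, array, base):
        # Non-disruptive data mobility: repoint an extent to another array;
        # the host still sees the same device and the same LBAs.
        self.extent_map[extent] = (array, base)

lun = VirtualLun({0: ("array-A", 0), 1: ("array-B", 5000)})
print(lun.resolve(1500))         # ('array-B', 5500)
lun.migrate_extent(1, "array-C", 0)
print(lun.resolve(1500))         # ('array-C', 500)
```

After `migrate_extent`, the host-visible address 1500 is unchanged but now lands on a different array — the essence of non-disruptive migration.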
File-Level Virtualization 
Before file-level virtualization: 
Every NAS device is an independent entity, physically and logically 
Underutilized storage resources 
Downtime caused by data migrations 
After file-level virtualization (a virtualization appliance between clients and the file servers/NAS devices): 
Breaks dependencies between end-user access and data location 
Storage utilization is optimized 
Nondisruptive migrations

Storage Virtualization Challenges 
Scalability: ensure storage devices perform to the appropriate requirements 
Functionality: the virtualized environment must provide the same or better functionality, and must continue to leverage existing functionality on the arrays 
Manageability: the virtualization device breaks the end-to-end view of the storage infrastructure and must integrate with existing management tools 
Support: interoperability in a multivendor environment
Network Virtualization for Dummies 
Making a physical network appear as multiple logical ones

Why Virtualize? 
The Internet is almost ossified 
Lots of band-aids and makeshift solutions (e.g., overlays) 
A new architecture (aka clean-slate) is needed 
It is hard to come up with a one-size-fits-all architecture 
It is almost impossible to predict what the future might unleash 
Why not create an all-sizes-fit-into-one instead! 
An open and expandable architecture 
A testbed for future networking architectures and protocols

Related Concepts 
Virtual Private Networks (VPN) 
Virtual networks connecting distributed sites; not customizable enough 
Active and programmable networks 
Customized network functionalities; programmable interfaces and active code 
Overlay networks 
Application-layer virtual networks; not flexible enough

Network Virtualization Model 
Business model 
Architecture 
Design principles 
Design goals
Design Goals 
Flexibility 
Service providers can choose arbitrary network topology, routing and forwarding functionalities, and customized control and data planes 
No need for coordination with others; the IPv6 fiasco should never happen again 
Manageability 
Clear separation of policy from mechanism 
Defined accountability of infrastructure and service providers 
Modular management

Design Goals 
Scalability 
Maximize the number of co-existing virtual networks 
Increase resource utilization and amortize CAPEX and OPEX 
Security, privacy, and isolation 
Complete isolation between virtual networks, both logical and resource-level 
Isolate faults, bugs, and misconfigurations 
Secured and private

Design Goals 
Programmability 
Of network elements, e.g., routers 
Answers "how much" and "how" 
Easy and effective without being vulnerable to threats 
Heterogeneity 
Of networking technologies (optical, sensor, wireless, etc.) and of virtual networks

Design Goals 
Experimental and deployment facility 
PlanetLab, GENI, VINI 
Directly deploy services in the real world from the testing phase 
Legacy support 
Consider the existing Internet as one member of the collection of multiple virtual Internets 
Very important to keep all concerned parties satisfied

Existing Projects 
Four general categories: 
1. Networking technology: IP (X-Bone), ATM (Tempest) 
2. Layer of virtualization: physical layer (UCLP), application layer (VIOLIN) 
3. Architectural domain: network resource management (VNRMS), spawning networks (Genesis) 
4. Level of virtualization: node virtualization (PlanetLab), full virtualization (Cabo)