1. Overview of the System z Server Hardware and Linux on z – Concurso Mainframe
© IBM Corporation, 2014
Overview of the System z server hardware and Linux on z
Anderson Bassani
abassani@br.ibm.com
Pre-sales technical specialist – System z
2. Presentation delivered on September 4, 2014, during the awards event of the 2014 Concurso Mainframe.
Venue: IBM Tutóia, São Paulo.
3. The IBM Mainframe server – System z
Linux on z
What does System z do that other platforms cannot?
A real-world case from an Independent Software Vendor (ISV)
6. zEC12 New Build, radiator-based air cooled – under the covers (Models H89 and HA1), front view
Components shown:
• Overhead power cables (option)
• Internal batteries (option)
• Power supplies
• 2 x Support Elements
• PCIe I/O drawers (maximum 5 for zEC12)
• Processor books with Flexible Support Processors (FSPs), PCIe and HCA I/O fanouts
• PCIe I/O interconnect cables and Ethernet cables, FSP cage controller cards
• Radiator with N+1 pumps, blowers and motors
Notes: the overhead I/O feature is a co-req for the overhead power option; optional FICON LX Fiber Quick Connect (FQC) not shown.
7. IBM System z – Virtual Tour
http://ibmtvdemo.edgesuite.net/servers/z/demos/zEnterprise_Radiator_Product_Tour/index.html
8. zBC12 Model H13 – under the covers (front and rear views)
Components shown:
• Internal batteries (optional)
• Power supplies
• 2 x CPC drawers, memory & HCAs
• I/O drawer
• Ethernet cables for the internal system LAN connecting Flexible Service Processor (FSP) cage controller cards (not shown)
• 2 x Support Elements
• FQC for FICON LX only
• PCIe I/O drawers
9. zEC12 Continues the CMOS Mainframe Heritage (begun in 1994)

Year | Machine | Technology  | Cores | Clock   | Highlights
2000 | z900    | 180 nm SOI  | 16    | 770 MHz | Full 64-bit z/Architecture
2003 | z990    | 130 nm SOI  | 32    | 1.2 GHz | Superscalar, modular SMP
2005 | z9 EC   | 90 nm SOI   | 54    | 1.7 GHz | System-level scaling
2008 | z10 EC  | 65 nm SOI   | 64    | 4.4 GHz | High-frequency core, 3-level cache
2010 | z196    | 45 nm SOI   | 80    | 5.2 GHz | OOO core, eDRAM cache, RAIM memory, zBX integration
2012 | zEC12   | 32 nm SOI   | 101   | 5.5 GHz | OOO and eDRAM cache improvements, PCIe Flash, arch extensions for scaling
10. zEnterprise EC12 Book and Frame
Diagram labels: MCM and memory (EC12 book); 4-book EC12 system.
11. zEC12 Book Layout
Labels shown (front and rear views):
• MCM @ 1800 W, water cooled
• Memory: 16 DIMMs (100 mm high) and 14 DIMMs (100 mm high)
• 3 DCA power supplies
• I/O fanout cards
• Cooling connector
12. zEC12 PU chip, SC chip and MCM
Diagram labels:
• Book: side view, front view, fanouts
• zEC12 hexa-core PU chip: Core0–Core5, L3C 0 and L3C 1, GX, MCU
• MCM: PU 0–PU 5, SC 0 and SC 1, L4 cache arrays (L4B, L4C, L4Q), V00, V01, V10, V11
13. Cores Can be Configured for Different Needs
14. Architecture – Specialized Processors
System z has many processors, but each one plays its own role. It is a "datacenter in a box".
Operating system and applications: 120 PUs (cores) in total, with up to 101 configurable processors.
Specialized processors:
• CP (IBM System z Central Processor) – z/OS, z/TPF and z/VSE
• zAAP (IBM System z Application Assist Processor) – Java
• zIIP (IBM System z Integrated Information Processor) – XML and DB2 calls
• IFL (IBM System z Integrated Facility for Linux) – Linux
• Up to 2 additional "spare" processors
• Integrated Firmware Processor
I/O:
• Up to 16 SAPs (System Assist Processors) – send/receive I/O requests (disks and tapes)
• I/O cards (FICON/FCP) or OSA, with up to 320 RISC/Power processors
  – FICON: z/OS, z/VSE, and z/VM / Linux
  – FCP: z/VM and Linux
Cryptography:
• Up to 16 CPUs for cryptography – high scalability for SSL transactions
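On Linux on z, the processor and machine inventory this slide describes can be inspected from inside a guest. A minimal sketch, assuming the s390x-specific `/proc/sysinfo` interface; the sample content below is illustrative, not output from a real machine:

```python
# Sketch: reading the machine inventory as Linux on System z exposes it.
# /proc/sysinfo exists only on s390x Linux; SAMPLE_SYSINFO is a made-up
# stand-in so the parsing can be shown without real hardware.

SAMPLE_SYSINFO = """\
Manufacturer:         IBM
Type:                 2827
Model Capacity:       707
CPUs Total:           120
CPUs Configured:      8
CPUs Standby:         0
CPUs Reserved:        112
"""

def parse_sysinfo(text):
    """Turn 'Key:   value' lines into a dict of stripped strings."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

info = parse_sysinfo(SAMPLE_SYSINFO)
print(info["Type"])             # machine type: 2827 identifies a zEC12
print(info["CPUs Configured"])  # cores visible to this Linux image
```

On a real system the same parse would be fed `open("/proc/sysinfo").read()` instead of the sample string.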
15. Architecture – other hardware platforms
Compare this design with RISC/Unix or x86 servers: all of a computer's functions are performed in software on the microprocessor.
• Application code
• I/O device drivers
• Cryptography, etc.
• OS and resource management
* Single-task and single-user heritage
* Software licensing
16. IBM System z Redbooks
http://www.redbooks.ibm.com/portals/systemz
18. World-Class Server Virtualization: System z LPAR and z/VM
Helping clients reduce costs and improve control of their IT infrastructure
Virtualization, consolidation, workload management, automation
• Logical Partitioning (LPAR) and z/VM are complementary technologies
  – Both employ hardware and firmware (PR/SM) innovations developed over the years
  – Virtualization is part of the basic componentry of the System z platform
• LPAR
  – Hosts a relatively small number of very high-performance virtual servers
  – Very low overhead, hardware-based virtualization through partitioning
• z/VM
  – Hosts large numbers of high-performance virtual servers
  – Low overhead, hardware-based, true virtualization with extreme levels of software augmentation
Together, System z LPAR and z/VM technology provide:
  – High-performance "on the metal" virtual servers for larger, performance-critical workloads
  – The ability to provision thousands of additional virtual servers flexibly and on demand
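Provisioning thousands of virtual servers rests on overcommitting real resources, since most guests are idle most of the time. A back-of-the-envelope sketch; the memory sizes and the 3:1 overcommit ratio are illustrative assumptions, not z/VM or zEC12 figures:

```python
# Sketch: rough guest-count estimate under memory overcommit.
# Hypervisors such as z/VM can present more virtual memory than the
# real memory installed, paging out inactive guests.

def max_guests(real_memory_gb, guest_size_gb, overcommit_ratio):
    """Virtual servers that fit if virtual:real memory may reach the ratio."""
    return int(real_memory_gb * overcommit_ratio // guest_size_gb)

# e.g. 768 GB real memory, 2 GB Linux guests, 3:1 virtual-to-real
print(max_guests(768, 2, 3))  # -> 1152
```

In practice the sustainable ratio depends on how active the guests really are; the point is that capacity scales with aggregate demand, not with the guest count.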
19.
• I/O
• Processor design
• Workload management
• Architecture
• On/Off Capacity on Demand
• Security
• Server provisioning
• Partitioning and virtualization
• Software licensing
• Systems management
20. Anatomy of a Linux System
O'Reilly, "Charting the Linux Anatomy" by Ed Stephenson
http://www.oreillynet.com/pub/a/oreilly/linux/news/linuxanatomy_0101.html
21. Linux Structure on the System z Server
Many Linux software packages require no code changes at all to run on Linux for System z:
• 0.28% platform-specific code in GCC 4.1
• 0.55% platform-specific code in Glibc 2.5
• 1.81% platform-specific code in Linux kernel 2.6.25
Stack (architecture-independent unless noted): Linux applications; GNU runtime environment; GNU C compiler and GNU binutils, each with a platform backend; Linux kernel with network protocols, file systems, memory management, process management, generic drivers, hardware-dependent drivers and System z dependent arch code; virtualization layer; System z instruction set and I/O hardware.
Note: every supported Linux platform requires platform-specific code in GCC, Glibc and the Linux kernel.
22. z, zBX, x86
The Linuxes all look the same on different architectures and share the same Linux kernel source, but they have different personalities, qualities, features and options derived from the architectures.
23. Linux versions currently supported on System z
25. SHARE – www.share.org
Who We Are
SHARE Inc. is an independent, volunteer-run association providing enterprise technology professionals with continuous education and training, valuable professional networking and effective industry influence.
Our Mission
SHARE is an independent volunteer-run information technology association that provides education, professional networking and industry influence.
Presentation link:
https://share.confex.com/share/121/webprogram/Session13557.html
26. What Is Different about the Enterprise Linux Server
Virtualization enables mixing of high- and low-priority workloads without penalty.
Enterprise Linux Server (z/VM, 10 VMs, 32 cores):
• Priority workload: no throughput reduction, no response time increase
• Low-priority workload: soaks up remaining processor minutes
• Unused processor minutes: 1.9%
Leading x86 hypervisor:
• Priority workload: 31% throughput reduction, 45% response time increase
• Low-priority workload: soaks up more CPU minutes
• Unused CPU minutes: 21.9%
[Chart: % CPU usage over time (mins). On the x86 hypervisor, too much resource is given to the low-priority donor workload and the high-priority workload gets less resource than needed.]
27. Priority Workload Running Standalone on System z PR/SM (z/VM, 10 VMs, 32 cores, varying demand)
Priority workload metrics: total throughput 9.125M; average response time 140 ms.
Capacity used: high priority, 72.2% of CPU minutes; unused (wasted), 27.8% of CPU minutes.
[Chart: % CPU usage vs time (mins), high-priority workload demand curve against FB standalone usage.]
28. Priority Workload on System z Does Not Degrade When a Low-Priority Donor Workload Is Added (z/VM, 10 VMs, 32 cores)
Running high-priority and low-priority workloads together: NO throughput leakage, NO response time increase.
Priority workload metrics: total throughput 9.125M; average response time 140 ms.
Capacity used: high priority, 74.2% of CPU minutes; low priority, 23.9%; wasted, 1.9%.
[Chart: % CPU usage vs time (mins), donor workload stacked on the priority workload.]
29. Priority Workload Running Standalone on an x86 Hypervisor (ESX, varying demand)
Priority workload metrics: total throughput 6.47M; average response time 153 ms.
Capacity used: high priority, 57.5% of CPU minutes; unused (wasted), 42.5% of CPU minutes.
[Chart: % CPU usage vs time (mins), high-priority guest CPU demand against FB standalone usage.]
30. Priority Workload on an x86 Hypervisor Degrades Severely When a Low-Priority Workload Is Added (ESX, shared CPU usage)
Running high-priority and low-priority workloads together: 30.7% throughput leakage; 45.1% response time increase; 21.9% wasted CPU minutes.
Priority workload metrics: total throughput 4.48M; average response time 220 ms.
Capacity used: high priority, 42.3% of CPU minutes; low priority, 35.8%; wasted, 21.9%.
[Chart: % CPU usage vs time (mins), donor workload against the priority workload.]
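The degradation figures follow from the standalone and shared measurements reported on the two ESX slides. A quick recomputation from the rounded averages shown (it lands near, but not exactly on, the slides' 30.7% and 45.1%, which presumably came from the underlying raw measurements):

```python
# Recomputing the x86-hypervisor degradation figures from the slide data:
# standalone run (throughput 6.47M, 153 ms) vs the shared run with a
# low-priority donor workload added (throughput 4.48M, 220 ms).

def pct_change(before, after):
    """Signed percentage change from 'before' to 'after'."""
    return 100.0 * (after - before) / before

standalone_tput, shared_tput = 6.47, 4.48   # millions of transactions
standalone_rt, shared_rt = 153, 220         # average response time, ms

print(f"throughput leakage:     {-pct_change(standalone_tput, shared_tput):.1f}%")
print(f"response time increase: {pct_change(standalone_rt, shared_rt):.1f}%")
```

This gives roughly 30.8% throughput leakage and 43.8% response time increase from the rounded figures, consistent in magnitude with the slide's claims.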
31. System z Virtualization Enables Mixing of High- and Low-Priority Workloads Without Penalty
System z (z/VM, 10 VMs, 32 cores):
• Priority workload: no throughput reduction, no response time increase
• Low-priority workload: soaks up remaining CPU minutes
• Unused CPU minutes: 1.9%
x86 with common hypervisor:
• Priority workload: 31% throughput reduction, 45% response time increase
• Low-priority workload: soaks up more CPU minutes
• Unused CPU minutes: 21.9%
[Chart: % CPU usage vs time (mins). On x86, too much resource is given to the low-priority workload and the high-priority workload gets less resource than needed.]
32. System z Virtualization Enables Mixing of High- and Low-Priority Workloads Without Penalty
System z: perfect workload management. Workloads of different priorities are consolidated on the same platform, with full use of the available processing resource (high utilization).
x86 with common hypervisor: imperfect workload management. Workloads are forced to be segregated on different servers, so more servers are required (low utilization).
[Chart: z/VM 10-VM 32-core CPU usage over time (mins), donor workload stacked on the priority workload. On x86, too much resource is given to the low-priority workload and the high-priority workload gets less resource than needed.]
33. A summary of the 5 main differentiators
35. Benchmark – MATERA Systems
The IBM and MATERA partnership delivers an unprecedented number of banking transactions. See more at:
http://www.matera.com/br/2014/06/02/parceria-entre-ibm-e-matera-apresenta-numero-inedito-de-transacoes-
Editor's notes
CP: Central Processor; zAAP: zSeries Application Assist Processor; zIIP: zSeries Integrated Information Processor; IFL: Integrated Facility for Linux; CF: Coupling Facility; SAP: System Assist Processor.
A side-by-side comparison that shows the dramatic difference in System z's ability to maintain high-priority workload SLAs. ESX is unable to handle both high-priority and low-priority workloads without a huge adverse effect on the high-priority work. The net result: wasted CPU. The only ESX solution that maintains the high-priority SLAs requires segregating the high-priority work into servers that run high-priority workloads only.
Intel/VMware is not able to maintain the SLA of the high-priority workload when a lower-priority workload is added: severe throughput and response time degradation. With Intel/VMware, the only practical solution to maintain SLAs of high-priority workloads is to deploy workloads into separate environments; for example, most Intel/VMware deployments separate Dev environments from Production environments. The need to maintain multiple environments directly affects the total cost of the solution. Fixed-size Intel boxes can force additional boxes to support "spill over" high-priority work: there may be spare capacity on the additional machine, but nothing else can run on it without impacting the primary workload's SLA. An additional environment is needed to deliver lower-priority workloads.