2. 2
Let's recall the 4 pillars of Smarter Planet

New Intelligence ("I need insights"): tap the wealth of information available from multiple sources to make smarter decisions in real time. Pain point: data keeps multiplying and sits in isolated silos.

Smart Work ("I need to work smarter"): flexible, dynamic processes designed for the new ways people buy, work, and live. Pain point: new business and process needs.

Green & Beyond ("I need efficiency"): greater efficiency, competing more effectively and responding more nimbly by taking action on energy, the environment, and sustainability. Pain point: our resources are limited.

Dynamic Infrastructure ("I need to respond quickly"): an infrastructure that reduces costs and is intelligent, secure, and as dynamic as today's business environment. Pain point: my infrastructure is inflexible and costly.
3. 3
"I think there is a world market for maybe five computers."
Thomas Watson, chairman of IBM, 1943
"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
"There is no reason anyone would want a computer in their home."
Ken Olsen, founder of DEC, 1977
"640K per person ought to be enough."
Bill Gates, 1981
"Prediction is difficult, especially about the future."
Yogi Berra
Technology... in perspective
4. 4
In six years, the power consumption of a server has grown from 8 to 100 watts per US$1,000 of technology.
On average, of every 100 units of energy entering a data center, only 3 units are used for actual computing. More than half is lost cooling servers.
How do we use IT resources?
In distributed environments, 85% of computing capacity sits idle.
5. 5
Of every 100 units of energy, only 3 are actually used for computing power

Data center: 55% cooling, 45% hardware
Server/storage hardware: 70% power supplies, memory, fans, boards, devices...; 30% processor
Processor: 80% idle, 20% utilization rate
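The 3-in-100 figure follows from multiplying the three losses in the breakdown above; a quick arithmetic sketch (using the slide's rounded percentages, so the result is approximate):

```python
# Chain the three loss stages from the breakdown above.
total_energy = 100.0                # units entering the data center
hardware = total_energy * 0.45      # 55% goes to cooling
processor = hardware * 0.30         # 70% feeds PSUs, memory, fans, boards
useful = processor * 0.20           # the processor is 80% idle
print(round(useful, 1))             # -> 2.7, i.e. roughly 3 units in 100
```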
6. 6
This is what happens when we don't virtualize
[Diagram: one application/data pair per machine (A1/D1, A2/D2, A3/D3, ..., An/Dn), i.e. many isolated, dedicated servers]
7. 7
[Chart, source: IDC 2007. Worldwide installed base (M units) and spend (US$B), 1996-2010: spending on new servers stays roughly flat, while management cost grows 4x and power and cooling cost grows 8x.]
In 2007, for every dollar spent on hardware, half a dollar went to electricity for the data center and more than two dollars went to administration and management.
Energy spending will rise 54% over the next four years.
The cost of administering, powering, and cooling servers is rising year after year.
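The 54% four-year increase implies a double-digit annual growth rate; a back-of-the-envelope derivation (my arithmetic, not a figure from the deck):

```python
# Convert "up 54% in four years" into an implied compound annual rate.
growth_4yr = 1.54
annual = growth_4yr ** 0.25 - 1     # fourth root, minus 1
print(f"{annual:.1%}")              # -> 11.4% per year
```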
8. 8
Virtualization is the ability to see several resources as if they were one, or a single resource as if it were several.
[Diagram: the many application/data pairs (A1/D1, ..., An/Dn) from the previous slide consolidated onto three servers, S1-S3, with shared data]
Higher resource utilization, lower costs: consolidation and virtualization let us optimize the use of resources and reduce costs.
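The utilization gain can be sketched numerically. Assuming the 85%-idle figure from earlier and a hypothetical 70% target utilization for a virtualized host (illustrative numbers only, not a sizing method):

```python
import math

# How many hosts do 100 standalone servers collapse to?
servers = 100
standalone_util = 0.15        # each server is ~85% idle
target_host_util = 0.70       # assumed achievable on a virtualized host
total_load = servers * standalone_util            # in server equivalents
hosts_needed = math.ceil(total_load / target_host_util)
print(hosts_needed)           # -> 22 hosts instead of 100
```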
9. 9
By virtualizing, we optimize the use of resources
We reduce energy consumption
We reduce the number of software and operating-system licenses
We simplify administration, since there are fewer servers to manage
We can "move" partitions from one machine to another
We can create additional environments easily
In other words, we not only reduce costs, we also gain flexibility
And furthermore, by reducing the number of servers and processors...
10. 10
This is how the virtualization layer works
Without virtualization: applications A1-A3 run on an operating system, which runs directly on the hardware.
With virtualization: applications A1-A3 run on an operating system, which runs on a virtualization layer on top of the hardware.
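The layering can be modeled as a toy data structure; all names here are illustrative, and a real hypervisor does far more (scheduling, memory mapping, I/O virtualization):

```python
# Minimal sketch: guest OSes get CPU shares from one physical machine
# through the virtualization layer, which polices total capacity.
class Hardware:
    def __init__(self, cpus):
        self.cpus = cpus

class VirtualizationLayer:
    def __init__(self, hw):
        self.hw = hw
        self.guests = {}          # partition name -> CPU share

    def create_partition(self, name, cpu_share):
        # This simple model refuses to over-commit physical CPUs.
        if sum(self.guests.values()) + cpu_share > self.hw.cpus:
            raise ValueError("not enough physical CPUs")
        self.guests[name] = cpu_share

vl = VirtualizationLayer(Hardware(cpus=16))
vl.create_partition("OS1", 6)
vl.create_partition("OS2", 4)
print(sorted(vl.guests))          # -> ['OS1', 'OS2']
```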
13. 13
The IBM BladeCenter S combines servers, storage, switches, power, cooling, and cabling in a self-contained, easily managed solution.
Does this installation look familiar?
14. 14
The IBM BladeCenter S "Office Kit" delivers efficiency and keeps the unit's noise down to the level of a conversation.
More on the IBM BladeCenter S
15. 15
2002 2003 2004 2005 2006 2007 2008 2009
BladeCenter E (2002-2010), BladeCenter T, BladeCenter H (2006-2010+), BladeCenter HT, BladeCenter S
A customer who bought an E chassis in November 2002 can upgrade its 14 blades to HS22 blades with the Nehalem processors announced at the end of June 2009.
That is investment protection: one chassis for each need, with compatible components.
16. 16
LS22
Two Socket
LS41
Four Socket
HS21
Two Socket
HS22
Two Socket
HS12
Single Socket
JS22
Two Socket
JS12
Single Socket
Cell Broadband
Engine
QS22
Two Socket
PN41 / T2BC
Specialized
JS23
Two Socket
JS43
Four Socket
The same chassis for different types of blades
17. 17
By consolidating and virtualizing with blades, we get improvements in all three categories
And with IBM blades, in addition...
We reduce energy consumption, since a blade consumes less power than its equivalent in a traditional form factor
We reduce data-center floor space
We can mix blade technologies in a single chassis, just as we mix traditional servers in a single rack
That way we take advantage of the chassis's shared space and components
We can avoid the use of external switches
A blade is cheaper than its rack-mounted equivalent.
21. 21
PowerVM Live Partition Mobility
PowerVM can move running AIX and Linux partitions between POWER6 machines live, with no downtime
Virtualized LAN and SAN infrastructure
The first system to support both UNIX and Linux on the same platform
Designed to move mission-critical workloads without stopping them
Another example of IBM leadership, with more than 4 decades of experience
-- Not available on competing UNIX systems
Availability: eliminates planned outages
Energy savings: takes advantage of off-peak hours
Load balancing: during peaks, to respond to workload demands
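The energy-saving use case above (moving work off underutilized machines during off-peak hours) can be sketched as a selection rule. The host names and threshold are made up; on real systems the move itself is done by PowerVM management tooling, not by code like this:

```python
# Pick hosts idle enough that their partitions could be migrated away,
# letting the emptied machines be powered down off-peak.
hosts = {"p570-a": 0.12, "p570-b": 0.55}   # host -> CPU utilization

def hosts_to_evacuate(hosts, threshold=0.20):
    return [name for name, util in hosts.items() if util < threshold]

print(hosts_to_evacuate(hosts))            # -> ['p570-a']
```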
22. 22
Server virtualization
[Diagram: applications A1-A3 on OS1, and A4-A6 on OS2 and OS3, all running above a virtualization layer on the hardware. On x86 the layer (VMware / Xen) runs as software on top of the hardware; on Power the hypervisor is embedded in the hardware.]
23. 23
By consolidating and virtualizing on Power we achieve greater efficiency
With both blades and rack-mounted servers
The Power processor has the best tpmC/watt ratio
With Power we can reach higher utilization levels than with x86
Power blades can share a chassis with x86 blades
The hypervisor runs embedded in the hardware
Its cost is also included; it is not an add-on as on x86
And it does not consume the processors!
The Power Systems hypervisor draws on more than 40 years of IBM virtualization experience.
The blade form factor and the p520 are the most economical ways to acquire a Power server
25. 25
DS3000 Series
Entry level, SAS connectivity: shared DAS for workgroups and SMBs; consolidation, simplicity, scalability, and reliability (DS3200, DS3300, DS3400)
Low-cost networked, iSCSI connectivity: an IP network for SMBs; low-cost disk with network access and simple management; scalability and reliability
Entry-level FC SAN, FC connectivity: FC for adding an economical disk system to a SAN, or as the first system of a SAN; excellent price/performance
DS4000 & DS5000 Series (DS4700, DS4800, DS5000)
Midrange, FC connectivity: a full-featured system designed for midrange environments; high-end functionality and scalability; excellent value for the performance and scalability offered
Datacenter, FC connectivity: flagship of the midrange; compute-intensive applications and large consolidations; high performance, flexibility, and scalability; the first midrange disk with hardware encryption!
Entry and midrange disk systems
26. 26
Requirements | DS5000 | DS4800 | DS4700 | DS3000
IOPS | Linear scalability up to 448 drives | Linear scalability up to 224 drives | Linear scalability up to 80 drives | Linear scalability up to 48 drives
MB/s | Best throughput | Excellent option | Good up to 990 MB/s | Good up to 900 MB/s
Scalability / Consolidation | Best for large consolidations | Very good for consolidations | Excellent for departments | Excellent for SMBs and remote sites
Replication | Best for intensive replication | Adequate for intensive replication | Best for limited replication | Adequate for limited replication
Positioning the disk systems
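The positioning above can be read as a rough lookup. This sketch paraphrases the table and the speaker notes into simple rules; the function name and thresholds are illustrative, and real sizing needs many more inputs:

```python
# Suggest a DS family from coarse requirements (illustrative only).
def suggest_ds(drives_needed, heavy_replication, big_block_mbps_critical):
    if drives_needed <= 48 and not heavy_replication:
        return "DS3000"   # SMBs, remote sites, limited replication
    if drives_needed <= 80:
        return "DS4700"   # departments, good up to 990 MB/s
    if drives_needed <= 224 and not big_block_mbps_critical:
        return "DS4800"   # good for consolidations, intensive replication
    return "DS5000"       # large consolidations, best throughput

print(suggest_ds(60, False, False))    # -> DS4700
print(suggest_ds(150, True, False))    # -> DS4800
print(suggest_ds(300, True, True))     # -> DS5000
```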
27. 27
Virtualizing disks also reduces costs and protects the investment
Even in entry and midrange systems
Compared with internal server disks, a centralized disk system:
Requires fewer administration resources and less effort
Has better performance and reliability
Is optimized to reduce energy consumption
At similar redundancy levels and utilization rates, a centralized system has fewer spinning disks than internal disks spread across servers
The DS3200 offers an excellent cost/benefit ratio for entry-level centralized disk systems
28. 28
So far we have seen...
How consolidation and virtualization relate to Smarter Planet
The advantages of consolidation and virtualization, especially cost reduction in:
Energy
Administration
Software
x86 virtualization with blades
Virtualization on Power
Entry and midrange disk systems
Every human being, company, organization, city, nation, natural system, and man-made system is becoming interconnected, instrumented, and, in a literal sense, more intelligent. In this new world, we believe there are four questions to consider:
How can we take advantage of the wealth of information available from our new smarter things to make more intelligent choices? – New Intelligence
How can we work smarter, supported by flexible and dynamic processes modeled on the new ways people buy, live, and work? – Smart Work
How do we align our goals and behaviors with our new responsibilities, so that caring for our planet and its people is no longer perceived as generosity or sacrifice? - Green & Beyond
How do we create an infrastructure that drives down cost, is intelligent and secure, and is just as dynamic as today’s business climate? – Dynamic Infrastructure
Let’s discuss each of these in more detail. Why are they important? What are the pressures we face? Where can we look for solutions? How have enterprises already been working on smart solutions to their problems?
New Intelligence
Volume of Digital Data: The data explosion, of course, but also shifts in the nature of data. Once virtually all the information available to be "processed" was authored by someone. Now that kind of data is being overwhelmed by machine-generated data – spewing out of sensors, RFID, meters, microphones, surveillance systems, GPS systems and all manner of animate and inanimate objects.
By 2010, the amount of digital information will grow to 988 Exabytes (equivalent to a stack of books from the sun to Pluto and back)
Every day, 15 Petabytes of new information are generated. This is 8 times more than the information held in all U.S. libraries
The number of emails sent every day is estimated to be over 200 billion
By 2010, the codified information base of the world is expected to double every 11 hours
Variety of Information (diversity and heterogeneity): With this expansion of the sources of information comes large variance in the complexion of the available data -- very noisy, lots of errors -- and no time to cleanse it in a world of real-time decision making.
80% of new data growth is unstructured content, generated largely by email, with increasing contributions from documents, images, video, and audio
38% of email archiving decisions receive input from a C-level executive, and 23% from a legal/compliance professional
The average car will have 100 million lines of code by 2010; the Airbus A380 alone contains over 1 billion lines of code
Velocity of Decision Making: This is about optimizing the speed at which insight is generated, as well as confidence that the decisions and actions taken will yield the best outcomes, based on more proactive planning around the management and use of information sources and far more advanced predictive capabilities:
Every week, the average information worker spends 14.5 hours reading & answering e-mail, 13.3 hours creating documents, 9.6 hours searching for information, 9.5 hours analyzing information
For every 1,000 knowledge workers, $5.7 million is lost annually in time wasted reformatting information between applications.
Not finding the right information costs an additional $5.3 million per year
An Institute for Business Value Agile CFO Study in 2007 indicated that only 9% of senior finance executives believe they excel at gathering, interpreting & conveying information to senior management
42% of managers say they inadvertently use the wrong information at least once per week
Finally, shifts in the nature of what we can analyze: traditionally, that has been the analysis of a standalone business process or sub-process, or activities like airline crew scheduling. More and more, enterprises and governments, as well as biologists, life scientists, and environmentalists, will have to take a broader, systems-based approach to what they examine and attempt to optimize: crew scheduling combined with weather patterns, fuel prices, marketing promotions, and the status of labor negotiations, as one example; or the ability to tap into the collective intelligence of people across the value chain through social media and associated Web 2.0/3.0 technologies.
Business intelligence is rated as the top IT spending category, with 80% of C-level executive respondents rating it a high or medium priority
Stream Computing and Event Processing capabilities are enabling the consumption and analysis of extreme volumes, speeds, and complexity of event scenarios in real time (events generated from water streams, applications, news feeds, and services, with technology able to analyze 5 million events per second).
Smart Work
A whole host of rapidly accelerating changes is unfolding: mergers of hundred-year-old companies, the creation of new industries and the demise of others, the emergence of new economies, the opening of long-isolated markets, the imposition of new government regulations and the relaxation of others, and so on. Organizations are driven to change and become more dynamic by these evolving forces:
Economic Pressures: The emergence of a global economy is applying pressure on businesses to reduce costs and build better visibility into their business processes to mitigate risk and optimize profit.
Global Competition: The emergence of a global economy is moving businesses to create more responsive processes to achieve improved agility within a worldwide competitive marketplace.
The Demanding Consumer: The expectations of customers and employees have never been higher, requiring businesses to supply a personalized and responsive environment. Such expectations for a personalized, custom, user experience are driving requirements back to the business and service provider to deliver innovative new services anytime, anywhere.
Emergence of New Technology: New technologies like Cloud, Web 2.0 and pervasive digitally connected objects are empowering the business user and driving the convergence of business and IT and blurring the lines between companies, business partners and customers!
In this fast paced, opportunistic and at times volatile environment, organizations need to be dynamic, resilient and efficient in how they build, assemble, reassemble, loosely couple, and link resources in the organization. Static, rigid, monolithic, and fragile will be the descriptors of the organizations that get left behind.
Dynamic Infrastructure
Multiple forces are driving a transformation of businesses and governments of all sizes:
Business innovation can drive competitive advantage, but wreak havoc with existing IT infrastructures.
98% of CEOs expect their business models to change, while a rapidly growing percentage recognizes they lack the ability to handle that change effectively
New technologies are emerging, like Web 2.0, petaflop supercomputers, and cloud computing
Enterprises report that IT operational overhead consumes up to 70% of the IT budget, and it is growing
The thrust of the discussion around dynamic infrastructure will center on three benefits: reducing costs, improving service and managing risk.
Reducing cost
Dramatically improve the total cost of the underlying IT infrastructure and the associated management costs, while speeding delivery of IT services, managing the growing convergence of intelligent, instrumented business and IT assets, and addressing mounting economic pressures, shifting consumer demands, service-delivery expectations, and the emergence of new technology.
Improving Service
Respond quickly and flexibly to business opportunities and customer demands with a superior business-driven service model that provides visibility, control and automation of the underlying business and IT infrastructure; align physical and IT assets to the business to enable rapid, agile response to changing business circumstances.
Managing Risk
Instill trust with key constituents and experience improved service reliability, respond effectively to regulatory and compliance requirements, and adapt quickly to changing conditions with the peace of mind that the business and IT infrastructure is secure and resilient, including the extended infrastructure created by “smart” and mobile devices, external networks, supply chains and explosion of data.
Green & Beyond
Businesses have been at the center of converging pressures to go green. The public is demanding greener practices and is pressing government to introduce tougher regulations. Customers are resisting business attempts to pass on energy-induced price hikes and are demanding greener products and policies. The environment is resonating as a critical issue affecting shareholder value across geographies. And governments are creating tough new regulatory standards to control energy use and carbon emissions, or at least threatening to do so if businesses do not act first.
Private enterprises, public organizations, communities, regions, and entire industries are faced with how to develop strategies and solutions for becoming more energy- and environmentally responsible that also generate new revenue opportunities and lower costs and risk.
Intelligent energy and carbon management improvements are about adding intelligence to passive or "dumb" systems to create "smart systems" that are dramatically more efficient and reliable, and can therefore save energy and resources. Intelligent utility networks, transportation systems, and oilfields all become more efficient. These are real solutions, available today, that harness the power of built-in intelligence to:
•improve energy management
•make our energy supply more reliable and less harmful to the environment
•reduce traffic congestion and associated greenhouse gas emissions
•reduce energy demand
Energy demand is doubling; IT is unable to keep pace
99% of the installed base of volume servers is inefficient today
On average, only 3 out of 100 units of energy are used for productive computing
Only 28% of respondents to the IBM benchmarking tool report that they know the energy consumption of their IT
E-waste cannot be ignored: 1 billion computers will be potential scrap by 2010
Consider how much energy we waste.
According to published reports, the losses of electrical energy because grid systems are not “smart” range as high as 40 to 70 percent around the world.
In distributed computing environments 85% of computing capacity sits idle. In six years, the power consumption of a server has risen from 8 watts to more than 100 watts per $1,000 worth of technology. On average, for every 100 units of energy piped into a data center, only 3 units are used for actual computing. More than half goes to cooling the servers.
And, of course, consider the crisis in our financial markets.
The current crisis will be analyzed for decades, but one thing is already clear: Financial institutions spread risk but weren't able to track risk – and that uncertainty, that lack of knowing with precision, undermined confidence.
Green represents unannounced products
Auto HA solutions are Virtualization Engine-based offerings for rapid provisioning and re-provisioning of server resources. The expected VE announcement is May 2004.
Investment protection is the name of the game for BladeCenter.
IBM will offer a full portfolio of BladeCenter-based offerings. 64-bit blades, NEBS-compliant platforms, InfiniBand, and iSCSI are all on the technology roadmaps.
This is not a disclosure chart and does not speak to individual offerings; it is merely a guideline to what IBM thinks are appropriate technologies for the blade marketplace.
First to support both UNIX and Linux on the same system
Designed for live movement of mission-critical workloads
Expanding IBM virtualization leadership with over 4 decades of experience
-- Not available on Sun Solaris on SPARC or T1/T2
-- Not available on HP-UX on Itanium or PA-RISC
The first and only UNIX to deliver excellent continuous availability when moving compute-, transaction-, or data-intensive mission-critical workloads
Significantly more scalable than non-UNIX solutions
4 times the cores (16 vs 4)**
** Statement refers to the maximum size of a logical partition or virtual machine in terms of CPUs. VMware Infrastructure 3 Enterprise supports a maximum of 4 virtual CPUs per virtual machine (source: VMware Infrastructure 3 Online Library, section "Virtual Machine Maximums" at http://pubs.vmware.com/vi301/config_max/config_max.1.2.html). The IBM System p 570 supports up to 16 CPUs per logical partition.
12 times the memory per core (48 GB vs 4 GB)***
*** Statement refers to the maximum amount of memory supported per virtual machine. VMware supports a maximum of 4 CPUs and 16 GB of RAM per virtual machine (source: http://pubs.vmware.com/vi301/config_max/config_max.1.2.html). This translates to 4 GB of RAM per CPU. The IBM System p 570 supports up to the full memory and CPU configuration of the system for LPARs, i.e. 16 CPUs and 768 GB of RAM. Thus, the p570 supports up to 48 GB of RAM per core.
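The footnote arithmetic checks out with the figures as stated:

```python
# Per-VM / per-LPAR maximums quoted in the footnotes above.
vmware_max_vcpus, vmware_max_ram_gb = 4, 16
p570_max_cpus, p570_max_ram_gb = 16, 768

print(p570_max_cpus // vmware_max_vcpus)      # -> 4   (4x the cores)
print(p570_max_ram_gb // p570_max_cpus)       # -> 48  (GB per core on p570)
print(vmware_max_ram_gb // vmware_max_vcpus)  # -> 4   (GB per core on VMware)
```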
Reduce impact of planned outages
Relocate workloads to enable growth
Provision new technology with no disruption to service
Save energy by moving workloads off underutilized servers
This is a good overview slide showing how each of the DS storage systems is positioned.
Starting with the DS3000 series: this is an affordable entry-level storage platform designed for small to medium businesses, as well as departmental and remote-office customers seeking cost-effective, reliable, and simple storage. The DS3000 supports SAS, iSCSI, or FC host connectivity, with support for up to 48 SAS and/or SATA drives.
We then scale upward with the DS4000 and DS5000 series of storage systems. These are fully featured, performance-driven midrange and enterprise storage designed for wide-ranging open-systems environments. Ideally suited for compute-intensive applications and consolidation, the DS4000 and DS5000 series include robust management software designed for the advanced storage administrator. This series also supports FC host connectivity, with support for up to 256 FC and/or SATA drives.
We will quickly gloss over this slide; I suggest printing it out and keeping it at hand when you are discussing and recommending the DS series to your customers.
Some of the key items your customer is looking for are listed in the left column; they include IOPS and MB/s performance, scalability, consolidation size, and the replication features offered.
As an example, if my customer is looking for good scalability in a midrange environment, but large-block performance (MB/s) is not critical, we may suggest the DS4800. However, for a smaller company where replication features such as mirroring are not important but cost is, we would suggest the DS3000 series of arrays.