This document discusses techniques for storing container files more densely using shared templates and deduplication. It introduces PFCache, a caching mechanism that sits on top of PLoop devices and deduplicates page cache and IO across container templates: identical file contents are stored once in a cache area and referenced via cache links from the container image files. Evaluation results show that PFCache improves storage density. Future work includes upstreaming PLoop for containers and pursuing IO deduplication in the Linux kernel for additional benefits.
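The deduplication idea in the summary above can be sketched as a content-addressed cache: files with identical bytes hash to the same key and are stored once, with per-container "cache links" pointing at the shared copy. The following is a hypothetical Python illustration of the concept, not PFCache's actual implementation (all names are invented):

```python
import hashlib

class DedupCache:
    """Toy content-addressed cache: identical file contents are stored once."""
    def __init__(self):
        self.blobs = {}   # sha256 hex digest -> file bytes
        self.links = {}   # file path -> digest (the "cache link")

    def add(self, path, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # stored only on first sight
        self.links[path] = digest
        return digest

    def read(self, path):
        return self.blobs[self.links[path]]

cache = DedupCache()
cache.add("/ct1/usr/bin/python", b"...same binary...")
cache.add("/ct2/usr/bin/python", b"...same binary...")   # deduplicated
assert len(cache.blobs) == 1   # one copy backs both containers
```

The same single-copy principle is what lets shared template files occupy page cache only once.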
BeyondFS is a distributed file system, similar to HDFS, GlusterFS, and Ceph. It is written in C/C++ on Linux, is POSIX-API compatible, and supports partial writes and reads. It consists of a center (metadata), stores (data), a FUSE client, and a CLI application. Applications known to work with it include Microsoft Office, LibreOffice, Hancom Office, AutoCAD, Photoshop, WinRAR, 7-Zip, tar, gzip, and xz.
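The partial read/write support mentioned above corresponds to POSIX offset-based IO. A minimal Python illustration using the standard library's `os.pread`/`os.pwrite` wrappers (this is generic POSIX behavior, not BeyondFS code):

```python
import os
import tempfile

# pread/pwrite operate at an explicit offset, without disturbing the
# file offset that plain read()/write() share.
fd, path = tempfile.mkstemp()
try:
    os.pwrite(fd, b"hello world", 0)
    os.pwrite(fd, b"WORLD", 6)        # partial overwrite at offset 6
    middle = os.pread(fd, 5, 6)       # read 5 bytes starting at offset 6
    whole = os.pread(fd, 11, 0)
finally:
    os.close(fd)
    os.unlink(path)

print(middle, whole)   # b'WORLD' b'hello WORLD'
```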
The document describes VeloxDFS, a decentralized distributed file system that manages file metadata using distributed hash tables. It stores file blocks with replication for fault tolerance. VeloxDFS distributes blocks based on hashes and supports clients via shell commands as well as C++ and Java APIs. It aims to improve upon HDFS and Cassandra file systems.
This document provides an overview of disk filesystems and network filesystems from the perspective of GlusterFS. It discusses the basic data structures of files and directories, including inodes, data blocks, and representation of different file types. It also outlines the main Linux system calls used to manipulate filesystem metadata and data, such as read, write, truncate, and directory operations. These calls can operate on files via paths, file descriptors, or directory file descriptors.
The document provides an overview of log structured file systems. It discusses how log structured file systems work by writing all data and metadata sequentially to a circular buffer called a log to improve write performance. It also describes how log structured file systems address issues like limited disk space through garbage collection and provide simpler crash recovery without requiring a file system check.
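The append-and-collect cycle described above can be sketched in a few lines: every update appends a record to the log, an index tracks each key's latest position, and garbage collection rewrites only the live records to reclaim the space held by stale versions. A hypothetical in-memory sketch (all names invented):

```python
class LogFS:
    """Toy log-structured store: writes only ever append; GC copies
    live records forward and discards stale ones."""
    def __init__(self):
        self.log = []     # append-only list of (key, value) records
        self.index = {}   # key -> position of the latest record

    def write(self, key, value):
        self.index[key] = len(self.log)
        self.log.append((key, value))

    def read(self, key):
        return self.log[self.index[key]][1]

    def gc(self):
        live = [(k, self.log[pos][1]) for k, pos in self.index.items()]
        self.log, self.index = [], {}
        for k, v in live:   # rewrite only live data, reclaiming stale space
            self.write(k, v)

fs = LogFS()
fs.write("a", 1); fs.write("a", 2); fs.write("b", 3)
assert len(fs.log) == 3   # the stale version of "a" still occupies the log
fs.gc()
assert len(fs.log) == 2 and fs.read("a") == 2
```

Crash recovery is simpler for the same reason: replaying the log from the last checkpoint rebuilds the index without a full filesystem check.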
The document discusses module management and the InterPlanetary File System (IPFS). It covers topics like how modules are currently managed through tools like npm, and some of the limitations of existing systems. It then introduces IPFS as a new protocol that could be used to upgrade the web by allowing modules and other content to be permanently stored, distributed and accessed in a decentralized manner. Key components of IPFS discussed include its use of content addressing, distributed hash tables, the merkle dag data structure, IPNS for mutable naming, and how these pieces could provide benefits like discovery, integrity, transport and updating of modules in a more robust way compared to existing systems.
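Content addressing, the foundation of the IPFS design sketched above, is easy to demonstrate: the address of a piece of data is the hash of its bytes, and a Merkle-DAG node is just stored data whose links are the hashes of its children. A minimal Python sketch of the idea (simplified; real IPFS uses multihash-encoded CIDs, not bare SHA-256 digests):

```python
import hashlib
import json

def cid(data: bytes) -> str:
    """Content identifier: the address *is* the hash of the content."""
    return hashlib.sha256(data).hexdigest()

store = {}

def put(data: bytes) -> str:
    addr = cid(data)
    store[addr] = data
    return addr

# A tiny Merkle-DAG node: a directory whose links are child content hashes.
leaf = put(b"module source v1")
node = put(json.dumps({"links": [leaf]}).encode())

# Integrity comes for free: the address verifies the bytes it names.
assert cid(store[leaf]) == leaf
# Changing content changes the address, so old links keep pointing at old data.
assert put(b"module source v2") != leaf
```

This is why content addressing gives modules immutability and integrity checking by construction, while mutable naming has to be layered on top (IPNS).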
Redis is an in-memory data structure store that can be used as a database, cache, or message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. Data can be persisted to disk for durability and replicated across multiple servers for high availability. Redis also implements features like expiration of keys, master-slave replication, clustering, and bloom filters.
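Key expiration, one of the features listed above, can be illustrated with a toy in-process store that checks a deadline lazily on access. This is a hypothetical sketch of the semantics only, not the real Redis client or its actual expiration machinery (Redis combines lazy checks with active sampling):

```python
import time

class ExpiringStore:
    """Toy sketch of key-expiration semantics."""
    def __init__(self):
        self.data = {}   # key -> (value, deadline or None)

    def set(self, key, value, ttl=None):
        deadline = time.monotonic() + ttl if ttl is not None else None
        self.data[key] = (value, deadline)

    def get(self, key):
        if key not in self.data:
            return None
        value, deadline = self.data[key]
        if deadline is not None and time.monotonic() >= deadline:
            del self.data[key]   # lazy expiration on access
            return None
        return value

s = ExpiringStore()
s.set("session", "abc123", ttl=0.05)
assert s.get("session") == "abc123"
time.sleep(0.06)
assert s.get("session") is None
```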
This document compares the Oracle Cluster File System (OCFS2) and Global File System version 2 (GFS2). It provides brief descriptions of each file system and discusses their current community support. Key features of each are outlined and compared. Available administration tools for each are listed. Examples are given of formatting disks with each file system. Finally, simple performance tests in local and multi-node configurations are described.
The document discusses the GlusterFS APIs and libgfapi basics. It describes how libgfapi allows manually creating a context, loading a volume file, and making individual calls like glfs_open and glfs_write. It also provides a Python example of using libgfapi to create a file. The document outlines the basics of the GlusterFS translator including adding functionality from storage bricks to the user, and the translator environment of stacking requests and unwinding responses.
This document provides an introduction to file systems and the OCFS2 file system. It begins with basic concepts of how data can be stored using block devices, databases, and file systems. It then discusses file system interfaces, I/O models, and classifications. It provides an overview of the virtual file system (VFS) layer and its key data structures. It describes the EXT3 and OCFS2 file systems in detail, covering their layouts, journaling, mounting, and space management.
Hadoop uses large 64MB blocks by default to store file data in HDFS for improved performance. The namenode manages file metadata and knows which datanodes store each block. Datanodes store and retrieve blocks as requested by clients. The secondary namenode helps manage the namenode metadata but cannot replace it in case of failure. Writing files involves breaking them into blocks and storing replicas across datanodes, while reading locates blocks and retrieves their data.
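The write path described above, splitting a file into fixed-size blocks and assigning each block's replicas to datanodes, can be sketched as follows. This is a hypothetical illustration with a scaled-down block size and a simple round-robin placement; real HDFS placement is rack-aware:

```python
import itertools

def split_into_blocks(data: bytes, block_size: int):
    """Break file data into fixed-size blocks (the last may be short)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks, datanodes, replication=3):
    """Round-robin replica placement sketch (HDFS is actually rack-aware)."""
    ring = itertools.cycle(range(len(datanodes)))
    return {b: [datanodes[next(ring)] for _ in range(replication)]
            for b in range(num_blocks)}

# 64-byte blocks stand in for HDFS's 64 MB default.
blocks = split_into_blocks(b"x" * 200, block_size=64)
assert [len(b) for b in blocks] == [64, 64, 64, 8]
plan = place_replicas(len(blocks), ["dn1", "dn2", "dn3", "dn4"])
assert all(len(nodes) == 3 for nodes in plan.values())
```

The namenode's job maps onto the `plan` dictionary here: it remembers which datanodes hold each block so reads can be routed to them.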
GlusterFS session #2: the layer above disk filesystems, by Pranith Karampuri
This presentation contains the slides used for the second dev session about Gluster, which covers the layer above the disk filesystem, i.e. the POSIX layer in GlusterFS.
This document summarizes Redis, including where to get it, how it compares to Memcached, common Redis commands, Redis data types, and simple Redis applications. It discusses using Redis for cohort analysis using bitmaps, offloading logic and computing using Lua scripts, and publishing notifications using Pub/Sub. The document provides an overview of Redis capabilities and use cases.
Learn about NVIDIA's detailed process for extracting and migrating intact/complete Perforce metadata and content from an existing instance to a new one using perfsplit without perfsnap.
What is a container? Is it really a “lightweight VM” or is it more like a Linux process? In this talk you'll see exactly what a container is, as Liz builds one from scratch in a few lines of Go code. You'll learn how namespaces, control groups and chroot are used to construct containers, and how they are isolated from each other and from the host machine they run on.
OSBConf 2015 | Scale-out backups with Bareos and Gluster, by Niels de Vos (NETWAYS)
During this talk, Niels will explain the basics of Gluster and show how Bareos integrates with it. Gluster provides a Software Defined Storage environment that can scale-out when the backup storage needs to grow. With a live demonstration Niels shows how simple it is to setup a small Gluster environment and configure Bareos to use the native Gluster protocol.
This document discusses strategies for splitting a large Perforce depot into multiple depots to address issues with large database tables, table locking, and user frustration. It outlines using the Perfsplit tool but notes its limitations, then describes a workaround process that involves harvesting integration records, bypassing the archive copy phase, replaying changes on a new instance, and renaming the depot. The overall goal is to minimize downtime and disk space usage when splitting a large Perforce depot.
Level 101 for Presto: What is PrestoDB? by Ali LeClerc
Presto is a widely adopted SQL engine for federated querying across multiple data sources. With Presto, you can perform ad hoc querying of data in place. For today's "data hacker", Presto helps solve challenges around time to discovery and the amount of time it takes to do ad hoc analysis.
In Level 101, you’ll get an overview of Presto, including:
A high level overview of Presto & most common use cases
The problems it solves and why you should use it
A live, hands-on demo on getting Presto running on Docker
Real world example: How Twitter uses Presto at scale
Document-oriented databases store data in collections of documents rather than tables with a predefined schema. MongoDB is an open-source, document-oriented database that is easy to install and use. It supports many programming languages and operating systems. Documents in MongoDB can have unique field sets and storing non-normalized data can provide faster speeds than relational databases for heavy use cases.
The document compares the performance of NFS, GFS2, and OCFS2 filesystems on a high-performance computing cluster with nodes split across two datacenters. Generic load testing showed that NFS performance declined significantly with more than 6 nodes, while GFS2 maintained higher throughput. Further testing of GFS2 and OCFS2 using workload simulations modeling researcher usage found that OCFS2 outperformed GFS2 on small file operations and maintained high performance across nodes, making it the best choice for the shared filesystem needs of the project.
This document provides a 5-minute guide to getting started with Redis, including:
- Installing Redis on Linux, Mac, and Windows
- Starting the Redis server and client
- Performing basic operations like getting, setting, incrementing keys and working with Redis data structures like lists and hashes
- Links to an online Redis command line interface and examples of using Redis in Java applications.
The Care + Feeding of a MongoDB Cluster, by Chris Henry
This document summarizes best practices for scaling MongoDB deployments. It discusses Behance's use of MongoDB for their activity feed, including moving from 40 nodes with 250M documents on ext3 to 60 nodes with 400M documents on ext4. It covers topics like sharding, replica sets, indexing, maintenance, and hardware considerations for large MongoDB clusters.
This document provides an overview of UNIX file systems and disks. It discusses the structure of hard disks and different file system types including FAT, NTFS, UFS, EXT2/3, and ReiserFS. It also covers disk devices in Linux, FreeBSD and Solaris. Additional topics include creating and mounting file systems, the /etc/fstab file, the NFS network file sharing protocol, and different RAID configurations including RAID 0, 1, 5 and the use of parity disks.
This document provides an overview of HDFS (Hadoop Distributed File System), including its design goals, architecture, key components, and some limitations. The main points are:
HDFS is a distributed file system designed for large files and streaming data access across commodity hardware. It uses a master-slave architecture with a NameNode managing the file system metadata and DataNodes storing file data in blocks. Files are replicated across multiple DataNodes for fault tolerance. The NameNode controls permissions, file-block mappings, and DataNode locations and balances the cluster as needed.
Redis is an advanced key-value store that is similar to memcached but supports different value types like strings, lists, sets, and sorted sets. It has master-slave replication, expiration of keys, and can be accessed from Ruby through libraries like redis-rb. The Redis server was written in C and supports semi and fully persistent modes.
Backup / Restore to Cloud Storage with esXpress and CloudArray softwareTwinStrata
The following presentation provides a functional overview on:
How PHD esXpress backup software together with CloudArray work seamlessly to protect VMware ESX and ESXi environments
This solution provides an economic and highly reliable storage tier for offsite data backup and replication without the logistics and cost of tape transport.
FUSE (Filesystem in Userspace) allows non-privileged users to create their own file systems. It works by mounting the file system within the userspace virtual file system. Python has a FUSE library called fusepy that provides a simple interface for implementing FUSE file systems in Python. PEPFS is an example of a FUSE file system implemented in Python that makes Python Enhancement Proposals (PEPs) available as read-only files organized in a file system structure. It uses fusepy and lazily downloads specific PEP files on demand when read.
Oracle Open World 2014: Lies, Damned Lies, and I/O Statistics [CON3671], by Kyle Hailey
The document discusses analyzing I/O performance and summarizing lessons learned. It describes common tools used to measure I/O like moats.sh, strace, and ioh.sh. It also summarizes the top 10 anomalies encountered like caching effects, shared drives, connection limits, I/O request consolidation and fragmentation over NFS, and tiered storage migration. Solutions provided focus on avoiding caching, isolating workloads, proper sizing of NFS parameters, and direct I/O.
Linux treats all devices as files. There are three main types of devices in Linux - block devices which deal with blocks of data like hard disks, character devices which transfer data as a stream of bytes like keyboards, and network devices which transmit data packets over a network. The Linux kernel includes device drivers that provide a standard interface to access and interact with devices, making them accessible to applications as special files.
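The block/character distinction described above is visible directly from a device file's mode bits, which the standard library exposes via `os.stat` and the `stat` flag helpers. A small sketch (assumes a Linux host, where `/dev/null` is the classic character device):

```python
import os
import stat

def device_kind(path):
    """Classify a path as a block device, character device, or neither."""
    mode = os.stat(path).st_mode
    if stat.S_ISBLK(mode):
        return "block"
    if stat.S_ISCHR(mode):
        return "character"
    return "not a device"

# /dev/null transfers data as a byte stream, so it is a character device.
print(device_kind("/dev/null"))   # character
```

Block devices such as `/dev/sda` would report "block" by the same check, though which device nodes exist varies by machine.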
ZFS provides several advantages over traditional block-based filesystems when used with PostgreSQL, including preventing bitrot, improved compression ratios, and write locality. ZFS uses copy-on-write and transactional semantics to ensure data integrity and allow for snapshots and clones. Proper configuration such as enabling compression and using ZFS features like intent logging can optimize performance when used with PostgreSQL's workloads.
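The copy-on-write behavior mentioned above can be illustrated with a toy block store where a write never modifies data in place; it produces a new block map, so any snapshot taken earlier keeps seeing the old data. This is a drastically simplified hypothetical sketch (real ZFS snapshots are constant-time references into an on-disk tree, not dictionary copies):

```python
class CowStore:
    """Toy copy-on-write block store: writes replace the block map
    rather than overwriting blocks in place."""
    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}

    def snapshot(self):
        return dict(self.blocks)   # cheap here; O(1) in real ZFS

    def write(self, i, data):
        # Build a new map sharing all unchanged entries; never mutate in place.
        self.blocks = {**self.blocks, i: data}

store = CowStore(4)
snap = store.snapshot()
store.write(2, b"new")
assert store.blocks[2] == b"new"
assert snap[2] == b"\x00"   # the snapshot still sees the old data
```

The same never-overwrite property is what gives ZFS its transactional integrity: a crash mid-write leaves the old, consistent map intact.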
002-Storage Basics and Application Environments V1.0.pptx, by DrewMe1
Storage Basics and Application Environments is a document that discusses storage concepts, hardware, protocols, and data protection basics. It begins by defining storage and describing different types including block storage, file storage, and object storage. It then covers basic concepts of storage hardware such as disks, disk arrays, controllers, enclosures, and I/O modules. Storage protocols like SCSI, NVMe, iSCSI, and Fibre Channel are also introduced. Additional concepts like RAID, LUNs, multipathing, and file systems are explained. The document provides a high-level overview of fundamental storage topics.
Case study of BtrFS: a fault-tolerant file system, by Kumar Amit Mehta
A case study of Fault Tolerance features of BTRFS. These slides were prepared for the coursework for a Masters level program at Tallinn University of Technology, Estonia. A lot of materials in the slides are taken from the materials in the public domain. Many thanks to the people on BTRFS IRC Channel.
UKOUG: Lies, Damn Lies and I/O Statistics, by Kyle Hailey
1. Many factors can cause storage performance anomalies that make benchmarking difficult. Caching, shared infrastructure, I/O consolidation and fragmentation, and tiered storage are some of the top issues.
2. It is important to use real workloads, capture latency histograms rather than just averages, ensure results are reproducible, and run tests long enough to reach steady state.
3. Proper testing methodology is required to accurately characterize storage performance and avoid anomalies. Tools like FIO can help simulate real workloads.
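The point about latency histograms versus averages is worth making concrete: two workloads can share an average while having completely different tails, and only a percentile view reveals it. A small self-contained Python illustration:

```python
# Two workloads with the same average latency but very different tails:
fast_with_spikes = [1.0] * 98 + [100.0] * 2    # mostly fast, rare stalls
uniformly_slow = [2.98] * 100                  # same average, no stalls

def percentile(samples, p):
    """Nearest-rank percentile: the value at position p% of the sorted data."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

avg = lambda xs: sum(xs) / len(xs)
assert abs(avg(fast_with_spikes) - avg(uniformly_slow)) < 0.01

# The averages match, but the 99th percentile tells the real story:
assert percentile(fast_with_spikes, 99) == 100.0
assert percentile(uniformly_slow, 99) == 2.98
```

This is why averaged benchmark numbers can hide exactly the anomalies (caching effects, shared drives, tiered storage migration) the talk warns about.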
This document discusses network attached storage (NAS) and file systems for network storage. It defines NAS as storage units that are on the network and accessed as files from file systems supported by NAS servers. The document outlines different types of storage arrays including SAN, NAS, and unified storage. It also describes NAS architectures like the NAS server and NAS gateway configurations and how file systems manage blocks of disk to provide files to applications. The document discusses scaling NAS by deploying multiple NAS arrays or using a scale-out NAS cluster with a single global namespace.
It's the End of Data Storage As We Know It (And I Feel Fine), by Stephen Foskett
Technological change is finally coming to storage, and it will wipe away the architecture we've come to know over the last few decades. Say goodbye to the "do it all" Fibre Channel SAN storage array and get ready for converged infrastructure, distributed storage, alternative attachments like PCIe, and top-of-rack flash! In this session, Stephen Foskett will explain why this change is inevitable and how it will shake out. You won't recognize what's coming, but it will be faster, cheaper, and more integrated than ever! Delivered at
KFS aka Kosmos FS is a distributed file system written in C++ that is modeled after HDFS. It was originally developed by Kosmix, which was later acquired by Walmart. Some key points:
- KFS uses a master/chunkserver architecture where the metadata is stored on a master node and file data is stored in chunks on chunkservers.
- It supports features like replication, data integrity checks, and rebalancing of chunks.
- While still in early stages, it provides alternatives to the Hadoop ecosystem through its C++ implementation and bindings for other languages like Java and Python.
- The documentation provides instructions for building, deploying, and accessing KFS, though some functionality
Containers are typically managed by having each container chroot to its own subdirectory of the host filesystem. This leads to problems like journal bottlenecks and inefficient small file I/O. The proposed solution is to manage each container's filesystem within a virtual block device represented by a file (container-in-a-file). This avoids journal bottlenecks and allows efficient operations like backup, migration and snapshots through copy-on-write images. It provides flexibility in filesystem choice and management while solving storage and I/O issues. Future work includes optimizing the design and integrating it into the Linux kernel.
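The "container-in-a-file" idea above relies on a property most Linux filesystems provide: a sparse file can advertise a large virtual size while allocating only the blocks actually written, so an image file for a mostly-empty container filesystem stays small on disk. A minimal Python sketch (assumes a filesystem with sparse-file support, e.g. ext4 or tmpfs):

```python
import os
import tempfile

# A sparse file can back a container's virtual block device: apparent size
# is large, but only the written blocks consume real disk space.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 1 << 30)                # 1 GiB apparent size
    os.pwrite(fd, b"filesystem data", 4096)  # touch a single block
    st = os.stat(path)
    apparent = st.st_size
    allocated = st.st_blocks * 512           # st_blocks counts 512-byte units
finally:
    os.close(fd)
    os.unlink(path)

assert apparent == 1 << 30
assert allocated < 1 << 20   # far below the apparent size
```

Copy-on-write image formats like PLoop's build on the same principle, sharing unwritten regions between a template and its clones.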
This document provides an overview of file system topics. It begins with an introduction to file systems and their relationship to operating system architecture. It then discusses the Virtual File System (VFS) interface and key metadata components like super blocks, inodes, and directory entries. The document reviews common file system optimizations based on memory hierarchy and storage characteristics. Examples of specific file systems are given, including Ext4, NTFS, ZFS, NFS, and Google File System. The document concludes by soliciting any questions.
Learning from ZFS to Scale Storage on and under Containers (insideBigData.com)
Evan Powell presented this deck at the MSST 2017 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io
and
http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
An overview of Hadoop Storage Format and different codecs available. It explains which are available and how they are different and which to use where.
Vancouver BUG: Enterprise Storage and ZFS (Rami Jebara)
This document provides an overview of enterprise storage needs and introduces the ZFS file system. It discusses typical enterprise storage requirements, components, tiers, and concerns. It then provides a brief history of ZFS, introduces key concepts like storage pools, datasets, snapshots, replication, and more. It offers tips for preparing ZFS for production and highlights emerging trends, resources for learning more, and examples of tools that use ZFS.
An updated talk about how to use Solr for logs and other time-series data, like metrics and social media. In 2016, Solr, its ecosystem, and the operating systems it runs on have evolved quite a lot, so we can now show new techniques to scale and new knobs to tune.
We'll start by looking at how to scale SolrCloud through a hybrid approach using a combination of time- and size-based indices, and also how to divide the cluster in tiers in order to handle the potentially spiky load in real-time. Then, we'll look at tuning individual nodes. We'll cover everything from commits, buffers, merge policies and doc values to OS settings like disk scheduler, SSD caching, and huge pages.
Finally, we'll take a look at the pipeline of getting the logs to Solr and how to make it fast and reliable: where should buffers live, which protocols to use, where should the heavy processing be done (like parsing unstructured data), and which tools from the ecosystem can help.
Tuning Solr and its Pipeline for Logs: Presented by Rafał Kuć & Radu Gheorghe (Lucidworks)
The document summarizes key points from a presentation on optimizing Solr and log pipelines for time-series data. The presentation covered using time-based Solr collections that rotate based on size, tiering hot and cold clusters, tuning OS and Solr settings, parsing logs, buffering pipelines, and shipping logs using protocols like UDP, TCP, and Kafka. The overall conclusions were that tuning segments per tier and max merged segment size improved indexing throughput, and that simple, reliable pipelines like Filebeat to Kafka or rsyslog over UNIX sockets generally work best.
LAS16-400: Mini Conference 3 AOSP (Session 1) - Linaro
Speakers: Thomas Gall, Bernhard Rosenkränzer
Date: September 29, 2016
★ Session Description ★
The Android Open Source Project is one community which is strategic to Linaro and its members. The purpose of this mini conference is to gather fellow Android engineers together from the community, member companies, and Linaro to discuss engineering activities and improve collaboration across different groups.
Within this mini conference we encourage discussion and presentations to advance engineering topics, forge consensus and educate each other.
The tentative agenda for this mini conference includes :
- Quick introduction
- Filesystems - Between requirements for encryption and standing concerns about degrading performance as an Android file system ages, let's have some discussion involving current data, known issues and improvements in this area for Android.
- HAL consolidation - Review current status and discuss next steps to work on.
One build for many devices: device/build configuration. Next features and platforms to add. Gaps in HiKey support vs. AOSP build.
- Graphics - YUV support in mesa and hwc.
- WiFi and sensor HAL status and next steps
- New developments with AOSP + the Kernel - With regards to the Google Common Kernel tree and upstream Linux kernel activities related to Android, there are a few topics up for discussion:
- - Updates on HiKey in AOSP
- - EAS in common.git & integration with AOSP userspace
- - New Sync API in 4.6+ kernels, and how it will affect graphics drivers
- AOSP transition to clang - As everyone knows, GCC in AOSP has been deprecated. Let's cover current status, issues and next steps. Let's also discuss the elephant in the room: building the kernel with clang.
- Out of tree AOSP User space Patches - This is a discussion with the goal of organized action to see forward progress on AOSP user space patches that aren’t in AOSP for whatever reason.
- Android is used in some environments where booting can be frequent and affect the product experience. Do you want to wait for a minute while your car boots? We’ll spend time brainstorming on improving Android boot time.
★ Resources ★
Etherpad: pad.linaro.org/p/las16-400
Presentations & Videos: http://connect.linaro.org/resource/las16/las16-400/
★ Event Details ★
Linaro Connect Las Vegas 2016 – #LAS16
September 26-30, 2016
http://www.linaro.org
http://connect.linaro.org
Thin provisioning allows storage arrays to provision more capacity than is physically available by only allocating space as it is used. This improves efficiency but can lead to issues if overprovisioned storage runs out. There are challenges to thin provisioning across different layers including file systems, virtualization, and storage arrays. For thin provisioning to be effective, all layers must work together to monitor capacity usage and free space accurately at a fine granularity.
Tachyon is a memory-centric distributed storage system that provides reliable data sharing at memory speed across various cluster computing frameworks. It addresses issues with current storage systems like slow data sharing due to disk writes, cache loss when processes crash, and in-memory data duplication. Tachyon keeps only one copy of data in memory, tracks data lineage for fault tolerance, and enables fast sharing of data within and across frameworks and jobs. It provides a simple API and allows frameworks like Spark and MapReduce to access data reliably from memory without code changes.
Similar to Denser containers with PFCache - Pavel Emelyanov
This document discusses using PFCache to improve storage density for container files by deduplicating identical data. PFCache uses a cache area to store deduplicated file contents, referenced via cache links in container image files. Evaluation showed PFCache improved storage density. Future work includes upstreaming PLoop for containers and pursuing IO deduplication in the Linux kernel for additional benefits.
This document proposes a new interface called task_diag to provide process information in a more efficient way compared to the current /proc interface. Task_diag would use a netlink message format to allow querying process attributes in groups and splitting the response across multiple messages. Performance tests show task_diag can provide process information 2-8 times faster than parsing /proc. The goal is to speed up tools like ps and top.
Live migration: pros, cons and gotchas -- Pavel Emelyanov (OpenVZ)
Live migrating containers has pros like load balancing and updating hardware, but also cons. It is complex due to needing to save a container's state, transfer it, and restore it on another host while tasks are frozen. This can be done with memory pre-copy, where memory is copied iteratively while tasks run, or post-copy, where memory is transferred after tasks resume. P.Haul uses CRIU to handle the critical state save and restore functions needed for live migrating containers between hosts.
Live migrating a container: pros, cons and gotchas -- Pavel Emelyanov (OpenVZ)
Live migrating a container: pros, cons and gotchas
Monday, November 16 • 17:20 - 18:05
Pavel Emelyanov
Principal Engineer, Odin
Principal engineer on the Odin Server Virtualization team, creator and maintainer of the CRIU project. Joined Parallels in 2004 as a junior Linux kernel developer, later became kernel team leader. Now works on the architecture of the Odin Server products. Pavel tweets at @xemulp.
http://dockerconeu2015.sched.org/event/62e6d2ea7380442a48fafaeee26c9842
In this presentation, using a Linux distribution as an example, we describe our experience organizing the testing process for a product in which a substantial share of the code (over 90%) is written by developers independent of the company.
https://www.youtube.com/watch?v=AstgrnE7_dI
LibCT: one lib to rule them all -- Andrey Vagin (OpenVZ)
LibCT is a C library that allows building containerized applications by configuring namespaces and cgroups to provide isolation. It aims to simplify the complex low-level container APIs and support different container types like Linux containers, OpenVZ, Solaris Zones, and BSD jails. LibCT hides low-level API changes and provides bindings for other languages. It also serves as an alternative to Libcontainer which is written in Go. The presentation covered the history of Linux containers, namespaces, cgroups, LibCT's API, examples of use, and future integration plans.
What's missing from upstream kernel containers? - Kir Kolyshkin, Sergey Bronn... (OpenVZ)
This document discusses containers in the upstream Linux kernel compared to the Virtuozzo (VZ) kernel. It notes that OpenVZ has contributed many features to the Linux kernel over time, with about 60% of the VZ kernel now upstreamed. While much progress has been made, there are still areas needing to be added to the upstream kernel to fully replace the VZ kernel, including memory management and accounting features, network and I/O optimizations, and legacy containerization tools. The presenters from OpenVZ are available to discuss any questions.
Not so brief history of Linux Containers - Kir Kolyshkin (OpenVZ)
Linux containers have evolved from initial ideas in 1999 to widespread use today. Early container technologies included Virtuozzo in 1999-2000 and the introduction of namespaces in 2000-2001. The Linux-VServer project in 2001 helped advance containers. Checkpointing and live migration were implemented in 2002-2003. OpenVZ in 2005 helped popularize containers. Developments from 2006-2010 included new namespaces and use of cgroups. Docker in 2013 and projects like CoreOS and LXC further advanced containers. CRIU, introduced in 2011, helped enable checkpoint/restore in userspace. Containers are now widely used in projects like OpenStack.
This document compares virtual machines (VMs) to containers and discusses their differences. It notes that VMs have higher overhead than containers and can support more instances per server. Containers offer near-native performance and scale better. The document also outlines ongoing and future work to further integrate container technologies into the Linux kernel to provide capabilities like checkpoint/restore and live migration of containers across servers.
Resource management in Linux and OpenVZ - Kirill Kolyshkin kir@openvz.org http://openvz.org/
Report: http://yourcmc.ru/wiki/RootConf_2009:_%D0%9E%D1%82%D1%87%D1%91%D1%82_%D0%92%D0%B8%D1%82%D0%B0%D0%BB%D0%B8%D1%8F_%D0%A4%D0%B8%D0%BB%D0%B8%D0%BF%D0%BF%D0%BE%D0%B2%D0%B0#.D0.A3.D0.BF.D1.80.D0.B0.D0.B2.D0.BB.D0.B5.D0.BD.D0.B8.D0.B5_.D1.80.D0.B5.D1.81.D1.83.D1.80.D1.81.D0.B0.D0.BC.D0.B8_.D0.B2_Linux_.D0.B8_OpenVZ_.28.D0.B7.D0.B0.D1.87.D1.91.D1.82.21.29
8 Best Automated Android App Testing Tools and Frameworks in 2024
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
When it is all about ERP solutions, companies typically meet their needs with common ERP solutions like SAP, Oracle, and Microsoft Dynamics. These big players have demonstrated that ERP systems can be either simple or highly comprehensive. This remains true today, but there are new factors to consider, including a promising new contender in the market that’s Odoo. This blog compares Odoo ERP with traditional ERP systems and explains why many companies now see Odoo ERP as the best choice.
What are ERP Systems?
An ERP, or Enterprise Resource Planning, system provides your company with valuable information to help you make better decisions and boost your ROI. You should choose an ERP system based on your company’s specific needs. For instance, if you run a manufacturing or retail business, you will need an ERP system that efficiently manages inventory. A consulting firm, on the other hand, would benefit from an ERP system that enhances daily operations. Similarly, eCommerce stores would select an ERP system tailored to their needs.
Because different businesses have different requirements, ERP system functionalities can vary. Among the various ERP systems available, Odoo ERP is considered one of the best in the ERP market, with more than 12 million global users today.
Odoo is an open-source ERP system initially designed for small to medium-sized businesses but now suitable for a wide range of companies. Odoo offers a scalable and configurable point-of-sale management solution and allows you to create customised modules for specific industries. Odoo is gaining more popularity because it is built in a way that allows easy customisation, has a user-friendly interface, and is affordable. Here, you will cover the main differences and get to know why Odoo is gaining attention despite the many other ERP systems available in the market.
Transform Your Communication with Cloud-Based IVR Solutions (TheSMSPoint)
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Measures in SQL (SIGMOD 2024, Santiago, Chile) - Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Top 9 Trends in Cybersecurity for 2024
Security and risk management (SRM) leaders face disruptions on technological, organizational, and human fronts. Preparation and pragmatic execution are key for dealing with these disruptions and providing the right cybersecurity program.
E-commerce Development Services - Hornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management (VALiNTRY360)
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
Mobile App Development Company In Noida | Drona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian Companies (Quickdice ERP)
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
SMS API Integration in Saudi Arabia | Best SMS API Service (Yara Milbes)
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
1. Denser containers with PFCache
Pavel Emelyanov
ContainerCon, Seattle, 2015
2. Agenda
• How to store container files
• Why shared template matters
• What can be deduplicated and what should be
• PFCache
• Q&A
2
3. How to store container files
3
Filesystem
Container
processes
4. How to store container files
4
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
5. How to store container files (1)
5
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
Chroot()
Union FS
6. How to store container files (2)
6
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
Loop device
ZFS ZVol
BTRFS subvolume
PLoop
7. What's PLoop
• Loop device plus
– AIO for better performance
– Snapshots
– QCOW2-like format for thin provisioning
– Thin provisioning itself
• Upstreaming work in progress
7
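The allocate-on-first-write idea behind the QCOW2-like format can be sketched in a few lines. This is an illustrative model, not PLoop's actual on-disk layout: the `ThinImage` class, its single-level block map, and the 4 KB cluster size are all assumptions made for the example.

```python
import io

CLUSTER = 4096  # cluster size in bytes (assumed for illustration)

class ThinImage:
    """Virtual block device backed by a growable file: clusters are
    allocated in the backing store only when first written."""
    def __init__(self, backing: io.BytesIO):
        self.backing = backing
        self.l1 = {}    # virtual cluster index -> offset in backing file
        self.end = 0    # next free offset in the backing file

    def write(self, offset: int, data: bytes) -> None:
        idx, off = divmod(offset, CLUSTER)
        assert off + len(data) <= CLUSTER, "single-cluster writes only"
        if idx not in self.l1:              # first touch: allocate a cluster
            self.l1[idx] = self.end
            self.end += CLUSTER
            self.backing.seek(self.l1[idx])
            self.backing.write(b"\0" * CLUSTER)
        self.backing.seek(self.l1[idx] + off)
        self.backing.write(data)

    def read(self, offset: int, size: int) -> bytes:
        idx, off = divmod(offset, CLUSTER)
        if idx not in self.l1:              # unallocated clusters read as zeroes
            return b"\0" * size
        self.backing.seek(self.l1[idx] + off)
        return self.backing.read(size)

img = ThinImage(io.BytesIO())
img.write(10 * CLUSTER, b"hello")   # sparse write far into the virtual device
print(img.read(10 * CLUSTER, 5))    # b'hello'
print(len(img.backing.getvalue()))  # 4096: only the touched cluster is stored
```

Thin provisioning falls out of the same mechanism: the virtual device can be far larger than the backing file, which only grows as clusters are touched.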
8. How to store container files (3)
8
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
LVM
DM-thin
9. How to store container files (4)
9
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
NBD
Ceph RBD
iSCSI
10. How to store container files (5)
10
Filesystem
Container
processes
Block device
Network
Host filesystem
Host block device
Hardware
NFS
GFS2
OCFS
Ceph
11. Containers vs Templates
• Containers ...
– are massively cloned from pre-created “templates”
– do not have direct access to the underlying (block) storage
• Identical data can be effectively deduplicated
– Higher density
– Lower IO and/or memory consumption
11
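Why cloning from templates deduplicates so well can be shown with a toy estimate: identical file contents hash to the same digest, so shared bytes only need to be stored once. The helper below and the file names in it (`dedup_stats`, `libc.so`, `hostname`) are invented for the example.

```python
import hashlib
import os
import tempfile
from pathlib import Path

def dedup_stats(roots):
    """Return (total bytes stored, bytes left after content dedup)."""
    seen, total, unique = set(), 0, 0
    for root in roots:
        for dirpath, _, names in os.walk(root):
            for name in names:
                data = Path(dirpath, name).read_bytes()
                digest = hashlib.sha1(data).hexdigest()
                total += len(data)
                if digest not in seen:   # first copy of this content
                    seen.add(digest)
                    unique += len(data)
    return total, unique

# Two "containers" cloned from one template; only one small file diverges.
with tempfile.TemporaryDirectory() as top:
    for ct in ("ct1", "ct2"):
        root = Path(top, ct)
        root.mkdir()
        (root / "libc.so").write_bytes(b"X" * 1000)   # identical in both
        (root / "hostname").write_bytes(ct.encode())  # differs per container
    total, unique = dedup_stats([Path(top, c) for c in ("ct1", "ct2")])
    print(total, unique)   # 2006 1006: nearly half the bytes are shared
```

With many containers cloned from the same template the shared fraction grows accordingly, which is where the density and memory wins come from.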
12. Who can do shared templates
12
Storage OpenVZ Docker LXC
Union FSs + + +
Btrfs +
DM-thin +
PLoop +
Ceph
ZFS +
13. What can be de-duplicated
13
Filesystem
Container
processes
Block device Network
14. What can be de-duplicated
14
Filesystem
Container
processes
Block device Network
Page cache
Cached pages
15. What can be de-duplicated
15
Filesystem
Container
processes
Block device Network
Page cache
Cached pages
IO flow
16. What is deduplicated
16
Storage Memory IO
Union FSs + +
Btrfs +/-
DM-thin
PLoop + +
Ceph
ZFS
17. Additional OpenVZ constraints
• Container disks are independent image files
– Can be easily copied across nodes
– No single (shared) point of failure
• Deduplicated data is volatile
– “Templates” can be lost (e.g. while migrating)
– A pool of shared data that grows too big can be easily shrunk
17
20. Cache and cache link behavior
• Cache area
– target file name is sha1 sum of the contents
– files are created by user-space daemon
– cache size is limited by ploop
• Cache link
– created automatically upon file creation
– dropped when file is opened for writing
– Is kept during metadata update (chown/chmod)
20
22. Future work
• PLoop is available in OpenVZ & Virtuozzo
– Upstream WIP
• IO deduplication in the upstream
– Issue raised at the 2013 LSFMM summit
– DM-thin/btrfs IO dedup for containers
– KSM++ for VMs
22