Part IV I/O System
Chapter 12: Mass Storage Structure
Disk Structure
• Three elements: cylinder, track, and sector/block.
• Three types of latency (i.e., delay):
  - Positional or seek delay – mechanical and slowest
  - Rotational delay
  - Transfer delay
Computing Disk Latency
• Track size: 32K = 32,768 bytes
• Rotation time: 16.67 msec (milliseconds)
• Average seek time: 30 msec
• What is the average time to transfer k bytes?

    Average read time = 30 + 16.67/2 + (k/32K)×16.67

The three terms are the average seek time (time to move the head between tracks), the average rotational delay (on average, the head waits half a turn), and the transfer time (the fraction of a track the head must pass over to transfer k bytes).
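As a quick check, here is a minimal Python sketch of the formula above (the constants are the slide's example values; the function name is illustrative):

    TRACK_SIZE = 32 * 1024      # bytes per track (32K)
    ROTATION_MS = 16.67         # time for one full revolution, in milliseconds
    AVG_SEEK_MS = 30.0          # average seek time, in milliseconds

    def average_read_time_ms(k: int) -> float:
        """Average time to transfer k bytes: seek + half a rotation + transfer."""
        transfer_ms = (k / TRACK_SIZE) * ROTATION_MS
        return AVG_SEEK_MS + ROTATION_MS / 2 + transfer_ms

    # Example: reading a 512-byte sector is dominated by seek and rotational delay.
    print(round(average_read_time_ms(512), 2))    # ~38.6 ms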
Disk Block Interleaving

(Figure: track layouts with no interleaving, single interleaving, and double interleaving.)

Cylinder Skew
The position of sector/block 0 on each track is offset from the previous one, providing sufficient time for moving the disk head from track to track.
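To make the interleaving idea concrete, here is a small Python sketch that lays out one track for a given interleave factor (the slot numbering and function name are illustrative, not from the slide):

    def interleaved_layout(sectors_per_track: int, interleave: int) -> list[int]:
        """Return the logical block number stored in each physical slot of one track."""
        layout = [-1] * sectors_per_track
        slot = 0
        for block in range(sectors_per_track):
            while layout[slot] != -1:                        # skip slots already filled
                slot = (slot + 1) % sectors_per_track
            layout[slot] = block
            slot = (slot + 1 + interleave) % sectors_per_track
        return layout

    print(interleaved_layout(8, 0))   # [0, 1, 2, 3, 4, 5, 6, 7]  no interleaving
    print(interleaved_layout(8, 1))   # [0, 4, 1, 5, 2, 6, 3, 7]  single interleaving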
Disk Scheduling
• Since seeking is time consuming, disk scheduling algorithms try to minimize this latency.
• We shall discuss the following algorithms:
  - First-come, first-served (FCFS)
  - Shortest-seek-time-first (SSTF)
  - SCAN and C-SCAN
  - LOOK and C-LOOK
• Since seeking only involves cylinders, the input to these algorithms is a sequence of cylinder numbers.
First-Come, First-Served

        Requests: 11, 1, 36, 16, 34, 9, 12
        Service Order: 11, 1, 36, 16, 34, 9, 12
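The examples that follow all use the same request list and a starting head position of 11. A small Python helper like the following (the name is illustrative) totals the head movement for any service order; for FCFS the order is simply the arrival order:

    def total_head_movement(start: int, order: list[int]) -> int:
        """Sum of cylinder-to-cylinder distances for a given service order."""
        total, pos = 0, start
        for cyl in order:
            total += abs(cyl - pos)
            pos = cyl
        return total

    # FCFS services requests strictly in arrival order.
    print(total_head_movement(11, [11, 1, 36, 16, 34, 9, 12]))   # 111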
Shortest-Seek-Time-First




          Requests: 11, 1, 36, 16, 34, 9, 12
          Service Order: 11, 12, 9, 16, 1, 34, 36
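A minimal Python sketch of SSTF, which always services the pending request closest to the current head position, reproduces the slide's service order:

    def sstf(start: int, requests: list[int]) -> list[int]:
        """Always service the pending request closest to the current head position."""
        pending, order, pos = list(requests), [], start
        while pending:
            nearest = min(pending, key=lambda c: abs(c - pos))
            pending.remove(nearest)
            order.append(nearest)
            pos = nearest
        return order

    print(sstf(11, [11, 1, 36, 16, 34, 9, 12]))   # [11, 12, 9, 16, 1, 34, 36]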
SCAN Scheduling: 1/2
• This algorithm requires one more piece of information: the disk head movement direction, inward or outward.
• The disk head starts at one end and moves toward the other end, servicing requests in the current direction.
• At the other end, the direction is reversed and service continues.
• Some authors refer to the SCAN algorithm as the elevator algorithm; to others, however, the elevator algorithm means the LOOK algorithm.
SCAN Scheduling: 2/2




Requests: 11, 1, 36, 16, 34, 9, 12
Total tracks: 50 (0 to 49)
Current direction: inward
Current head position: 11
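A Python sketch of SCAN follows, assuming that "inward" here means moving toward higher cylinder numbers (the slide's figure is not reproduced, so this direction is an assumption):

    def scan(start: int, requests: list[int], max_cyl: int = 49):
        """Return (service order, total head movement) for SCAN, moving up first."""
        up = sorted(r for r in requests if r >= start)
        down = sorted((r for r in requests if r < start), reverse=True)
        order = up + down
        if down:
            # The head sweeps to the last cylinder, then back down to the lowest request.
            movement = (max_cyl - start) + (max_cyl - min(down))
        else:
            movement = max(up) - start
        return order, movement

    print(scan(11, [11, 1, 36, 16, 34, 9, 12]))
    # ([11, 12, 16, 34, 36, 9, 1], 86)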
C-SCAN Scheduling: 1/2
• C-SCAN is a variation of SCAN.
• When the disk head reaches one end, it is moved back to the other end without servicing requests on the return trip. Thus, this is simply a wrap-around.
• The C in C-SCAN means circular.
C-SCAN Scheduling: 2/2




Requests: 11, 1, 36, 16, 34, 9, 12
Total number of tracks: 50 (0 to 49)
Current direction: inward
Current head position: 11
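Under the same direction assumption, a C-SCAN sketch differs only in how requests below the starting position are handled: they are served after the wrap-around, in increasing order:

    def c_scan(start: int, requests: list[int]) -> list[int]:
        """C-SCAN service order: sweep up, wrap to cylinder 0, continue upward."""
        up = sorted(r for r in requests if r >= start)
        wrapped = sorted(r for r in requests if r < start)   # served after the wrap-around
        return up + wrapped

    print(c_scan(11, [11, 1, 36, 16, 34, 9, 12]))   # [11, 12, 16, 34, 36, 1, 9]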
LOOK Scheduling: 1/2
• With SCAN and C-SCAN, the disk head moves across the full width of the disk.
• This is very time consuming. In practice, SCAN and C-SCAN are not implemented this way.
• LOOK: a variation of SCAN. The disk head goes only as far as the last request in each direction, then reverses its direction.
• C-LOOK: similar to C-SCAN. The disk head goes only as far as the last request in each direction, then jumps back to the furthest request at the other end rather than to the end of the disk.
LOOK Scheduling: 2/2



Requests: 11, 1, 36, 16, 34, 9, 12
Current direction: inward
Current head position: 11
C-LOOK Scheduling

Requests: 11, 1, 36, 16, 34, 9, 12
Current direction: inward
Current head position: 11
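Sketches of LOOK and C-LOOK under the same "inward = higher cylinders" assumption. With this request list the service orders match SCAN and C-SCAN; only the total head travel differs, because the head never moves past the outermost pending request:

    def look(start: int, requests: list[int]) -> list[int]:
        """LOOK: like SCAN, but reverse at the last pending request, not the disk edge."""
        up = sorted(r for r in requests if r >= start)
        down = sorted((r for r in requests if r < start), reverse=True)
        return up + down

    def c_look(start: int, requests: list[int]) -> list[int]:
        """C-LOOK: like C-SCAN, but jump back only as far as the lowest pending request."""
        up = sorted(r for r in requests if r >= start)
        wrapped = sorted(r for r in requests if r < start)
        return up + wrapped

    reqs = [11, 1, 36, 16, 34, 9, 12]
    print(look(11, reqs))     # [11, 12, 16, 34, 36, 9, 1]
    print(c_look(11, reqs))   # [11, 12, 16, 34, 36, 1, 9]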
RAID Structure: 1/2
• RAID: Redundant Arrays of Inexpensive Disks.
• RAID is a set of physical drives viewed by the operating system as a single logical drive.
• Data are distributed across the physical drives of an array.
• Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of disk failure.
• RAID has six levels (0 through 5), and each level is not necessarily an extension of the previous one.
RAID Structure: 2/2




(figures on this and subsequent pages taken from William Stallings, Operating Systems, 4th ed)
RAID Level 0
• The virtual single disk simulated by RAID is divided into strips of k sectors each.
• Consecutive strips are written to the drives in a round-robin fashion (see the mapping sketch below). There is no redundancy.
• If a single I/O request consists of multiple contiguous strips, the strips can be handled in parallel.
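A minimal sketch of the round-robin mapping (the drive count and helper name are illustrative):

    def raid0_location(logical_strip: int, num_drives: int) -> tuple[int, int]:
        """Map a logical strip number to (drive index, strip index on that drive)."""
        return logical_strip % num_drives, logical_strip // num_drives

    # With 4 drives, logical strips 0..7 land on drives 0, 1, 2, 3, 0, 1, 2, 3.
    for s in range(8):
        print(s, raid0_location(s, 4))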
RAID Level 1: Mirror
• Each logical strip is mapped to two separate physical drives, so every drive in the array has a mirror drive that contains the same data.
• Recovery from a disk failure is simple due to redundancy.
• A write request involves two parallel disk writes.
• Problem: cost is high (doubled)!
Parallel Access

• RAID levels 2 and 3 use a parallel access technique. In a parallel access array, all member disks participate in the execution of every I/O request, and the spindles of the individual drives are synchronized so that each disk head is in the same position on each disk at any given time.
• Data strips are very small, usually a single byte or word.
RAID Level 2: 1/2
• An error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on separate disks.
• Example: an 8-bit byte is divided into two 4-bit nibbles. A 3-bit Hamming code is added to each nibble to form a 7-bit word, of which bits 1, 2, and 4 are parity bits. This 7-bit Hamming-coded word is written to seven disks, one bit per disk (see the sketch below).

(Figure: the seven bits spread across seven disks; positions 1, 2, and 4 hold the parity bits.)
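For illustration, a small Python sketch of the Hamming(7,4) encoding described above; each bit of the resulting 7-bit word would be written to a different disk:

    def hamming74_encode(d: list[int]) -> list[int]:
        """d = 4 data bits; returns the 7-bit word at positions 1..7 (index 0 unused)."""
        w = [0] * 8
        w[3], w[5], w[6], w[7] = d          # data bits go to positions 3, 5, 6, 7
        w[1] = w[3] ^ w[5] ^ w[7]           # even parity over positions 1, 3, 5, 7
        w[2] = w[3] ^ w[6] ^ w[7]           # even parity over positions 2, 3, 6, 7
        w[4] = w[5] ^ w[6] ^ w[7]           # even parity over positions 4, 5, 6, 7
        return w[1:]                        # in RAID 2, each bit goes to its own disk

    print(hamming74_encode([1, 0, 1, 1]))   # [0, 1, 1, 0, 0, 1, 1]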
RAID Level 2: 2/2
• Cost is high, although the number of redundant bits needed is less than that of RAID 1 (mirror).
• The number of redundant disks is O(log2 n), where n is the number of data disks.
• On a single read, all disks are accessed at the same time. The requested data and the associated error-correcting code are delivered to the controller. If there is an error, the controller reconstructs the data bytes using the error-correcting code. Thus, read access is not slowed.
• RAID 2 would only be an effective choice in an environment in which many disk errors occur.
RAID Level 3: 1/2
• RAID 3 is a simplified version of RAID 2. It needs only one redundant drive.
• A single parity bit is computed for each data word and written to a parity drive.
• Example: the parity bit of bits 1-4 is written to the same position on the parity drive.
RAID Level 3: 2/2
• If the failing drive is known, the parity bit can be used to reconstruct the data word.
• Because data are striped in very small strips, RAID 3 can achieve very high data transfer rates.
• Any I/O request involves the parallel transfer of data from all of the data disks. However, only one I/O request can be executed at a time.
RAID Level 4: 1/2
• RAID 4 and RAID 5 work with strips rather than individual data words, and do not require synchronized drives.
• The parity of all strips in the same “row” is written on a parity drive.
• Example: strips 0, 1, 2, and 3 are exclusive-ORed, resulting in a parity strip (see the sketch below).
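A minimal sketch of the parity computation, also showing how one lost strip can be rebuilt from the survivors and the parity (the example byte values are arbitrary):

    def xor_strips(strips: list[bytes]) -> bytes:
        """Byte-wise exclusive OR of equally sized strips."""
        result = bytearray(len(strips[0]))
        for strip in strips:
            for i, b in enumerate(strip):
                result[i] ^= b
        return bytes(result)

    row = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0", b"\x55\xaa"]   # strips 0..3
    parity = xor_strips(row)

    # If strip 2 is lost, XOR the surviving strips with the parity to recover it.
    recovered = xor_strips([row[0], row[1], row[3], parity])
    print(recovered == row[2])   # True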
RAID Level 4: 2/2
• If a drive fails, the lost data can be reconstructed from the remaining drives and the parity drive.
• If a sector fails, it is necessary to read all drives, including the parity drive, to recover it.
• The load on the parity drive is very heavy: every write must also update the parity.
RAID Level 5
• To avoid the parity-drive bottleneck of RAID 4, the parity strips can be distributed uniformly over all drives in a round-robin fashion (as sketched below).
• However, data recovery from a disk crash is more complex.
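For illustration, a small sketch of one common RAID 5 placement, with the parity strip rotating across the drives row by row (the exact rotation scheme varies by implementation; this one is an assumption for the example):

    def raid5_row(row: int, num_drives: int) -> list[str]:
        """Label what each drive holds in a given row: data strip numbers or 'P' for parity."""
        parity_drive = (num_drives - 1 - row) % num_drives
        labels, strip = [], row * (num_drives - 1)
        for d in range(num_drives):
            if d == parity_drive:
                labels.append("P")
            else:
                labels.append(f"D{strip}")
                strip += 1
        return labels

    for r in range(4):
        print(raid5_row(r, 4))
    # ['D0', 'D1', 'D2', 'P']
    # ['D3', 'D4', 'P', 'D5']
    # ['D6', 'P', 'D7', 'D8']
    # ['P', 'D9', 'D10', 'D11']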
