Stratified B-trees and Versioned Dictionaries.

    Andy Twigg∗, Andrew Byde∗, Grzegorz Miłoś∗, Tim Moreton∗, John Wilkes†∗ and Tom Wilkie∗
                                 ∗Acunu, †Google
                              firstname@acunu.com


Abstract

External-memory versioned dictionaries are fundamental to file systems, databases and many other algorithms. The ubiquitous data structure is the copy-on-write (CoW) B-tree. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. We describe the ‘stratified B-tree’, which is the first versioned dictionary offering fast updates and an optimal tradeoff between space, query and update costs.

1 Introduction

The (external-memory) dictionary is at the heart of any file system or database, and many other algorithms. A dictionary stores a mapping from keys to values. A versioned dictionary is a dictionary with an associated version tree, supporting the following operations (a code sketch follows below):

  • update(k,v,x): associate value x to key k in leaf version v;

  • range query(k1,k2,v): return all keys (and values) in range [k1,k2] in version v;

  • clone(v): return a new child of version v that inherits all its keys and values.

Note that only leaf versions can be modified. If clone only works on leaf versions, we say the structure is partially-versioned; otherwise it is fully-versioned.
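To make the interface concrete, here is a minimal sketch of it as an OCaml module signature (OCaml being the language of the prototype in Section 7). The module and type names are illustrative only; they do not correspond to any released library.

    (* Sketch of the versioned dictionary interface described above.
       Type and function names are illustrative only. *)
    module type VERSIONED_DICT = sig
      type t                       (* the dictionary *)
      type version                 (* a node of the associated version tree *)
      type key = string
      type value = string

      (* update(k,v,x): associate value x to key k in leaf version v *)
      val update : t -> key -> version -> value -> unit

      (* range query(k1,k2,v): all (key, value) pairs in [k1,k2] live at v *)
      val range_query : t -> key -> key -> version -> (key * value) list

      (* clone(v): a new child of v inheriting all its keys and values;
         in a partially-versioned structure v must be a leaf version *)
      val clone : t -> version -> version
    end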
                                                                            quential IO. In addition, it is cache-oblivious [10] and
                                                                            can be implemented without locking. This means it can
2     Related work                                                          take advantage of many-core architectures and SSDs.
                                                                               The downside is that point queries are slightly slower,
The B-tree was presented in 1972 [1], and it survives be-                   around 3x in our implementation. However, many appli-
cause it has many desirable properties; in particular, it                   cations, particularly for analytics and so-called ‘big data’
uses optimal space, and offers point queries in optimal                     problems, require high ingest rates and range queries,
O(logB N ) IOs1 . More details can be found in [7].                         rather than point queries. For these applications, we
    1 We use the standard notation B to denote the block size, and N the
                                                                            believe the stratified B-tree is a better choice than the
total number of elements inserted. For the analysis, we assume entries
                                                                            CoW B-tree, and all other known versioned dictionaries.
(including pointers) are of equal size, so B is the number of entries per   Acunu is developing a commercial open-source imple-
block.                                                                      mentation of stratified B-trees in the Linux kernel.
Structure                        Versioning   Update                        Range query (size Z)             Space
B-tree [1]                       None         O(log_B N) random             O(log_B N + Z/B) random          O(N)
CoW B-tree [ZFS]                 Full         O(log_B Nv) random            O(log_B Nv + Z/B) random         O(N B log_B N)
MVBT [2]                         Partial      O(log_B Nv) random            O(log_B Nv + Z/B) random         O(N + V)
Lanka et al. (fat field) [12]    Full         O(log_B Nv) random            O((log_B Nv)(1 + Z/B)) random    O(N + V)
Stratified B-tree [this paper]   Full         O*((log Nv)/B) sequential     O*(log Nv + Z/B) sequential      O(N + V)

Table 1: Comparing the cost of basic operations on versioned dictionaries. Bounds marked O*(·) are amortized over operations on a given version.


Table 1 summarises our results and known data structures. We use N for the total number of keys written over all versions, Nv for the number of keys live (accessible) in version v, and V for the total number of versions (see Preliminaries in Section 5).

3.1 CoW B-tree

The CoW B-tree is the classic versioned dictionary in file systems, storage and other external-memory algorithms. The basic idea is to have a B-tree with many roots, one for each version. Nodes are versioned and updates can only be done to a node of the same version as the update. To perform an update in v2, one ensures there is a suitable root for v2 (by duplicating the root for v1, the parent of v2, if necessary), then follows pointers as in a normal B-tree; every time a node is encountered with a version other than v2, it is copied to make a new node with version v2, and the parent's pointer is updated before the update continues down the tree. A lookup proceeds as in a B-tree, starting from the appropriate root. More details can be found in [13].

It has three major problems. First, slow updates: each update may cause a new path to be written – updating a 16-byte key in a tree of depth 3 with 256K block size requires 3 × 256K of random reads and writing 768K of data. Second, since these nodes cannot be garbage collected unless a version is deleted, they represent a large space blowup. Finally, it relies on random IO, for both updates and range queries. Over time, the leaves tend to be scattered randomly, which causes problems for range queries, garbage collection and prefetching.
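To illustrate the path-copying idea, the fragment below is a toy OCaml sketch of a CoW update on a purely functional B-tree-like node type: only the nodes on the search path are rewritten, old roots remain valid, and subtrees off the path are shared. The node layout is deliberately simplified (no node splits, no block IO) and is an assumption for illustration, not the layout used by any of the systems cited above.

    (* Toy path-copying update: returns a fresh root without mutating any
       existing node, so previous roots (previous versions) stay usable. *)
    type node =
      | Leaf of (string * string) list          (* (key, value) pairs *)
      | Internal of (string * node) list        (* (separator key, child) *)

    let rec cow_update node key value =
      match node with
      | Leaf kvs ->
          (* rewrite the leaf with the new binding *)
          Leaf ((key, value) :: List.remove_assoc key kvs)
      | Internal entries ->
          (* copy only the child on the search path; other subtrees are shared *)
          let rec repoint = function
            | [] -> []
            | [ (sep, child) ] -> [ (sep, cow_update child key value) ]
            | (sep, child) :: rest when key <= sep ->
                (sep, cow_update child key value) :: rest
            | entry :: rest -> entry :: repoint rest
          in
          Internal (repoint entries)

Each such update rewrites an entire root-to-leaf path in new blocks, which is the source of the random-IO and space costs discussed above.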
4 Multiversion B-trees

The multi-version B-tree (MVBT) [2] offers the same query/update bounds as the CoW B-tree, but with asymptotically optimal O(N) space. However, it is only partially-versioned. The basic idea is to use versioned pointers inside nodes – each pointer stores the range of (totally-ordered) versions for which the pointer is live. A query for version v starts by finding the root node for v, then at every node, extracting the set of pointers live at v and treating these as an unversioned B-tree node. Updates are more complex and require an additional ‘version split’ operation in addition to the standard key split.

Soules et al. [14] compare the metadata efficiency of a versioning file system using both CoW B-trees and a structure (CVFS) based on the MVBT. They find that, in many cases, the size of the CoW metadata index exceeds the dataset size. In one trace, the versioned data occupies 123GB, yet the CoW metadata requires 152GB while the CVFS metadata requires 4GB, a saving of 97%.

5 Stratified B-trees

Structure. The high-level structure is vaguely similar to the logarithmic method (e.g. as employed by the COLA of Bender et al. [3]). We store a collection of arrays of sorted (key, version, value) elements, arranged into levels, with ‘forward pointers’ between arrays to facilitate searches. Each element is written (k, v, x), where k is a key, v is a version id, and x is either a value or a pointer to a value. Each array A is tagged with some subtree W of the version tree, so we write (A, W). Arrays in the same level have disjoint version sets, hence ‘stratified’ in version space. Arrays in level l have size roughly 2^(l+1).

Preliminaries. Elements within an array (A, W) are ordered lexicographically by (k, v). We assume keys come equipped with a natural ordering (e.g. byte-order), and versions are ordered by decreasing DFS (depth-first-search) number in the version tree. This has the property that, for any v, all descendants of v appear in a contiguous region to its left. For a version v, we write W[v] for the subtree of W descendant from v. We also write, for simplicity, V to represent both the global version tree and the number of versions in it (its use will be clear from the context).

An element (k, v, x) is said to be live at version w iff w is a descendant of v (i.e. v is an ancestor of w) and k has not been rewritten between v and w. An element (k, v, x) is lead at version v, the version in which it was written. For an array (A, W), we write live(A, W) to count all the elements live at some w ∈ W, and similarly for lead(A, W). We use N to count the total number of elements inserted, and Nv to count the number of elements live at version v.
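The ordering and liveness rules above can be made concrete in a few lines of OCaml. This is an illustrative sketch, assuming each version carries the entry and exit numbers of a DFS over the version tree; the record fields and helper names are illustrative only.

    (* Sketch: versions carry DFS entry/exit numbers, so ancestor tests and
       the 'decreasing DFS number' ordering are O(1).  Illustrative only. *)
    type version = { dfs : int; dfs_exit : int }     (* dfs < dfs_exit *)

    (* v is an ancestor of w (or v = w) iff w's DFS interval nests inside v's *)
    let is_ancestor v w = v.dfs <= w.dfs && w.dfs_exit <= v.dfs_exit

    type element = { key : string; ver : version; value : string }

    (* lexicographic by (key, version), versions by decreasing DFS number,
       so the descendants of any version form a contiguous run to its left *)
    let compare_elem a b =
      match compare a.key b.key with
      | 0 -> compare b.ver.dfs a.ver.dfs
      | c -> c

    (* the entry live at [at] for [key] (if any): the one written at the
       nearest ancestor of [at], i.e. the ancestor with the largest DFS number *)
    let live_entry_for ~key ~at elems =
      let written_at_ancestor e = e.key = key && is_ancestor e.ver at in
      match List.filter written_at_ancestor elems with
      | [] -> None
      | e :: rest ->
          Some (List.fold_left
                  (fun best e' -> if e'.ver.dfs > best.ver.dfs then e' else best)
                  e rest)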
Figure 1: A versioned array, a version tree and its layout on disk. Versions v1, v2, v3 are tagged, so dark entries are lead entries. The entry (k0, v0, x) is written in v0, so it is not a lead entry, but it is live at v1, v2 and v3. Similarly, (k1, v0, x) is live at v1 and v3 (since it was not overwritten at v1) but not at v2. The live counts are as follows: live(v1) = 4, live(v2) = 4, live(v3) = 4, and the array has density 4/8. In practice, the on-disk layout can be compressed by writing the key once for all the versions, and other well-known techniques.

We say an array (A, W) has density δ if, for all w ∈ W, at least a fraction δ of the elements in A are live at w. We call an array dense if it has density ≥ 1/6. The importance of density is the following: if density is too low, then scanning an array to answer a range query at version v will involve skipping over many elements not live at v. On the other hand, if we insist on density being too high, then many elements must be duplicated (in the limit, each array will simply contain all the live elements for a single version), leading to a large space blowup. The central part of our construction is a merging and splitting process that guarantees that every array has both density and lead fraction Ω(1). This is enough to guarantee only a constant O(1) space and query blowup.

The notion of a split is important for the purpose of breaking large arrays into smaller ones. For an array (A, W) and version set X ⊆ W, we say split((A, W), X) ⊆ A contains all the elements in A live at any x ∈ X.

Notation in figures. Figures 2 and 3 give examples of arrays and various split procedures. We draw a versioned array as a matrix of (key, version) pairs, where the shaded entries are present in the array. For each array (A, W), versions in W are written in bold font (note that arrays may have entries in versions not in W – these entries are duplicates; they are live in some version in W, but are not lead). Figure 1 gives an example of a versioned array.

Queries. In the simplest case, a point query can be answered by considering each level independently. Unlike the COLA, each level may have many arrays. We guarantee that a query for version v examines at most one array per level: examine the array (A, W) where W contains the closest ancestor to v. In the simplest construction, we maintain a per-array B-tree index for the keys in that array, and a Bloom filter on its key set. A query at v involves querying the Bloom filter for each array selected as above, then examining the B-tree in those arrays matching the filter. If multiple results are found, return the one written in the nearest ancestor to v – or in the lowest level if there is more than one such entry. In practice, the internal nodes of each B-tree can be held in memory so that this involves O(1) IOs per array.
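The following OCaml fragment sketches this per-level lookup rule. The helpers closest_ancestor_array, bloom_maybe_contains, btree_lookup and dfs_of stand in for the per-array B-tree indexes, Bloom filters and version numbering described above; they are placeholders for illustration, not an actual API.

    (* Sketch of a point query: at most one array per level is examined,
       and the winning result is the one written at the nearest ancestor,
       with ties broken towards the lowest level. *)
    let point_query ~levels ~key ~version
          ~closest_ancestor_array ~bloom_maybe_contains ~btree_lookup ~dfs_of =
      let candidates =
        List.concat_map
          (fun (lvl, arrays) ->
             match closest_ancestor_array arrays version with
             | None -> []                                   (* no array for v at this level *)
             | Some arr ->
               if not (bloom_maybe_contains arr key) then []
               else
                 match btree_lookup arr key version with    (* ~O(1) IOs per array *)
                 | None -> []
                 | Some (write_ver, value) -> [ (lvl, dfs_of write_ver, value) ])
          levels
      in
      match candidates with
      | [] -> None
      | c :: cs ->
          (* nearest ancestor = largest DFS number; then prefer lower levels *)
          let better (l1, d1, _) (l2, d2, _) = d1 > d2 || (d1 = d2 && l1 < l2) in
          let (_, _, value) =
            List.fold_left (fun best c' -> if better c' best then c' else best) c cs in
          Some value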
For a range query, we find the starting point in each array, and merge together the results of iterating forwards in each array. The density property implies that, for large range queries, simply scanning the arrays with the appropriate version tags will use an asymptotically optimal number of IOs. For small range queries, we resort to a more involved ‘local density’ argument to show that, over a large enough number of range queries, the average range query cost will also be optimal. Indeed, doing better with O(N) space is known to be impossible. More details can be found in [5].

Insertions, promotions and merging. An insert of (k, v, x) consists of promoting the singleton array ({(k, v, x)}, {v}) into level 0. In general, an array S = (A, W) promoted to level l always has a unique root w in W, so we merge it with the unique array (A′, W′) at level l that is tagged with the closest ancestor of w (if such an array exists), to give a merged array (A′′, W′′) = (A ∪ A′, W ∪ W′). If no such array exists, we leave S in place.

After merging, (A′′, W′′) may be too large to exist at level l. We employ two procedures to remedy this. First, we call find-promotable in order to extract any possible subarrays that are large and dense enough to survive at level l + 1. Secondly, on the remainder (which may be the entire array if no promotable subarray was found), we apply density amplification, which produces a collection of dense subarrays that all have the required size.

Find-promotable. This proceeds by searching for the highest (closest to root) version v ∈ W′′ such that X = split(A′′, W′′[v]) has size ≥ 2^(l+1) (thus being too large for level l) and has a sufficiently high live and lead fraction. If such a version is found, it is extracted and promoted to level l + 1, leaving the remainder split(A′′, W′′ \ W′′[v]) at level l. Figure 2 gives an example of find-promotable.
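A sketch of this search in OCaml: split_size v (standing for |split(A′′, W′′[v])|), dense_enough v (the live/lead-fraction test) and children v (the children of v inside W′′) are placeholder helpers, and capacity stands for 2^(l+1).

    (* Sketch of find-promotable: breadth-first from the root of W'', so the
       first version that qualifies is as close to the root as possible. *)
    let find_promotable ~capacity ~split_size ~dense_enough ~children root =
      let rec bfs = function
        | [] -> None
        | v :: rest ->
            if split_size v >= capacity && dense_enough v
            then Some v              (* promote split(A'', W''[v]) to level l+1 *)
            else bfs (rest @ children v)
      in
      bfs [root]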
Figure 2: Example of find-promotable. The merged array is too large to exist at the current level (it has size 12, whereas level l = 3 requires size < 8, say). We promote to the next level the subtree under version v1, which gives the promoted array of size 8 and density 4/8 = 1/2. Note that the entries (k0, v0), (k1, v0) are duplicated, since they are also live at v1, v2, v3. The remainder array has size 6, so it can remain at the original level.

Figure 3: Example of density amplification. The merged array has density 2/11 < 1/6, so it is not dense. We find a split into two parts: the first split (A1, {v0, v5}) has size 4 and density 1/2. The second split (A2, {v4, v1, v2, v3}) has size 7 and density 2/7. Both splits have size < 8 and density ≥ 1/6, so they can remain at the current level.
Density amplification. Now we consider the remainder after calling find-promotable. There may be a version w ∈ W′′ that was originally dense but is no longer dense in the larger (A′′, W′′). To restore density we subdivide the array into several smaller arrays (Ai, Wi), each of which is dense. Crucially, by carefully choosing which sets of versions to include in which array, we are able to bound the amount of work done during density amplification, and to guarantee that not too many additional duplicate elements are created.

We start at the root version and greedily search for a version v and some subset of its children whose split arrays can be merged into one dense array at level l. More precisely, letting U = ∪_i W′′[v_i], we search for a subset of v's children {v_i} such that |split(A′′, U)| < 2^(l+1). If no such set exists at v, we recurse into the child v_i maximizing |split(A′′, W′′[v_i])|. It is possible to show that this always finds a dense split. Once such a set U is identified, the corresponding array is written out, and we recurse on the remainder split(A′′, W′′ \ U). Figure 3 gives an example of density amplification.
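One simple way to realise this greedy search is sketched below: children of v are considered in an arbitrary order and their subtrees are accumulated while the combined split stays below 2^(l+1) (the capacity argument). The helpers size_of_split and children are placeholders, and the density check on the chosen subset is omitted, so this is an illustration of the recursion skeleton, not the full procedure.

    (* Greedy skeleton of density amplification's search for a version v and
       a subset of its children whose combined split fits below [capacity]. *)
    let rec find_dense_subset ~capacity ~size_of_split ~children v =
      let chosen, _ =
        List.fold_left
          (fun (acc, size) child ->
             let size' = size_of_split (child :: acc) in
             if size' < capacity then (child :: acc, size') else (acc, size))
          ([], 0) (children v)
      in
      match chosen with
      | _ :: _ ->
          chosen                (* write split(A'', U) out as one dense array,
                                   then continue on the remainder *)
      | [] ->
          (* no subset fits at v: recurse into the child with the largest split *)
          (match children v with
           | [] -> []
           | c :: cs ->
               let largest =
                 List.fold_left
                   (fun best c' ->
                      if size_of_split [c'] > size_of_split [best] then c' else best)
                   c cs
               in
               find_dense_subset ~capacity ~size_of_split ~children largest)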
6 Practicalities

Consistency. We would like the data structure to be consistent – at any time, the on-disk representation is in a well-defined and valid state. In particular, we'd like it to always be the case that, after a crash or power loss, the system can continue without any additional work (or very little) to recover from the crash. Both the CoW and Stratified B-trees implement consistency in a similar way: the difference between the new and old data structure is assembled in such a way that it can be abandoned at any time, up until a small in-memory change is made at the end. In the case of CoW it is the root for version v: until the table mapping v to its root node is updated, the newly copied nodes are not visible in searches, and can be recycled in the event of a crash. Likewise, in the Stratified B-tree, arrays are not destroyed until they are no longer in the search path, thus ensuring consistency.

Concurrency. With many-core architectures, exploiting concurrency is crucial. Implementing a good concurrent B-tree is difficult. The Stratified B-tree can naturally be implemented with minimal locking. Each per-array B-tree is built bottom-up (a node is only written once all its children are written), and once a node is written, it is immutable. If the B-tree T is being constructed as a merge of T1 and T2, then there is a partition key k such that all nodes with key less than k are immutable; based on k, queries can be sent either to T, or to T1 and T2. Each merge can thus be handled with a single writer without locks. In the Stratified B-tree, there are many merges ongoing concurrently; each has a separate writer.
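The routing implied by the partition key can be sketched in a few lines of OCaml; the record fields, lookup functions and the combine rule below are illustrative assumptions rather than the actual in-kernel interfaces.

    (* While T is built bottom-up from T1 and T2, everything below the
       current partition key is already complete and immutable in T. *)
    type ('t, 'k, 'r) merge_state = {
      t_new : 't;           (* output tree, complete for keys < partition *)
      t1 : 't;
      t2 : 't;
      partition : 'k;       (* advances monotonically as the merge proceeds *)
      lookup : 't -> 'k -> 'r option;
      combine : 'r option -> 'r option -> 'r option;   (* e.g. prefer the newer entry *)
    }

    let query_during_merge st key =
      if key < st.partition
      then st.lookup st.t_new key                          (* served by the new tree *)
      else st.combine (st.lookup st.t1 key) (st.lookup st.t2 key)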
Allocation and low free space. When a new array of size k needs to be written, we try to allocate it in the first free region of size ≥ k. If this fails, we use a ‘chunking strategy’: we divide each array into contiguous chunks of size c >> B (currently 10MB). During a merge, we read chunks from each input array until we have a chunk formed in memory to output. When a chunk is no longer needed from the input arrays, it can be deallocated and the output chunk written there. This doesn't guarantee that the entire array is sequential on disk, but it is sequential to within the chunk size, which is sufficient in practice to extract good performance, and only uses O(c) extra space during merges – thus the system degrades gracefully under low free space conditions.
Figure 4: Insert performance (inserts per second vs. keys inserted, in millions) with 1000 versions.

Figure 5: Range query performance (reads per second vs. keys inserted, in millions) with 1000 versions.
7 Experimental results

We implemented prototypes of the Stratified B-tree and the CoW B-tree (using in-place updates) in OCaml. The machine had 1GB memory available, a 2GHz Athlon 64 processor (although our implementation was only single-threaded) and a 500GB SATA disk. We used a block size of 32KB; the disk can perform about 100 such IOs/s. We started with a single root version and inserted random 100-byte key-value pairs to random leaf versions, and periodically performed range queries of size 1000 at a random version. Every 100,000 insertions, we create a new version as follows: with probability 1/3 we clone a random leaf version, and with probability 2/3 we clone a random internal node of the version tree. The aim is to keep the version tree ‘balanced’ in the sense that there are roughly twice as many internal nodes as leaves.

Figures 4 and 5 show update and range query performance results for the CoW B-tree and the Stratified B-tree. The CoW B-tree performance degrades rapidly when the index exceeds the internal memory available. Figure 5 shows range query performance (elements/s extracted using range queries of size 1000); the Stratified B-tree beats the CoW B-tree by a factor of more than 10. The CoW B-tree is limited by random IO here², but the Stratified B-tree is CPU-bound (OCaml is single-threaded). Preliminary performance results from a highly-concurrent in-kernel implementation suggest that well over 500k updates/s are possible with 16 cores.

²(100 IOs/s × 32KB) / (200 bytes/key) = 16,384 keys/s.

References

[1] R. Bayer and E. McCreight. Organization and maintenance of large ordered indexes. Acta Informatica, 1(3):173–189, 1972.

[2] Bruno Becker and Stephan Gschwind. An asymptotically optimal multiversion B-tree. The VLDB Journal, 5(4):264–275, 1996.

[3] Michael A. Bender and Martin Farach-Colton. Cache-oblivious streaming B-trees. In SPAA '07, pages 81–92, New York, NY, USA, 2007. ACM.

[4] Jeff Bonwick and Matt Ahrens. The zettabyte file system, 2008.

[5] A. Byde and A. Twigg. Optimal query/update tradeoffs in versioned dictionaries. ArXiv e-prints, March 2011. http://arxiv.org/abs/1103.2566.

[6] S. Chutani and O. T. Anderson. The Episode file system. In USENIX Annual Technical Conference, pages 43–60, 1992.

[7] Thomas H. Cormen, Clifford Stein, Ronald L. Rivest, and Charles E. Leiserson. Introduction to Algorithms. McGraw-Hill Higher Education, 2nd edition, 2001.

[8] J. R. Driscoll and N. Sarnak. Making data structures persistent. In STOC '86, pages 109–121, New York, NY, USA, 1986. ACM.

[9] Btrfs file system. http://en.wikipedia.org/wiki/Btrfs.

[10] Matteo Frigo and Charles E. Leiserson. Cache-oblivious algorithms. In FOCS '99, pages 285–, Washington, DC, USA, 1999. IEEE Computer Society.

[11] Dave Hitz and James Lau. File system design for an NFS file server appliance, 1994.

[12] Sitaram Lanka and Eric Mays. Fully persistent B+-trees. SIGMOD Rec., 20(2):426–435, 1991.

[13] Ohad Rodeh. B-trees, shadowing, and clones. Trans. Storage, 3:2:1–2:27, February 2008.

[14] Craig A. N. Soules, Garth R. Goodson, John D. Strunk, and Gregory R. Ganger. Metadata efficiency in versioning file systems. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies, pages 43–58, Berkeley, CA, USA, 2003. USENIX Association.

 

Más de Acunu (20)

Acunu and Hailo: a realtime analytics case study on Cassandra
Acunu and Hailo: a realtime analytics case study on CassandraAcunu and Hailo: a realtime analytics case study on Cassandra
Acunu and Hailo: a realtime analytics case study on Cassandra
 
Virtual nodes: Operational Aspirin
Virtual nodes: Operational AspirinVirtual nodes: Operational Aspirin
Virtual nodes: Operational Aspirin
 
Acunu Analytics and Cassandra at Hailo All Your Base 2013
Acunu Analytics and Cassandra at Hailo All Your Base 2013 Acunu Analytics and Cassandra at Hailo All Your Base 2013
Acunu Analytics and Cassandra at Hailo All Your Base 2013
 
Understanding Cassandra internals to solve real-world problems
Understanding Cassandra internals to solve real-world problemsUnderstanding Cassandra internals to solve real-world problems
Understanding Cassandra internals to solve real-world problems
 
Acunu Analytics: Simpler Real-Time Cassandra Apps
Acunu Analytics: Simpler Real-Time Cassandra AppsAcunu Analytics: Simpler Real-Time Cassandra Apps
Acunu Analytics: Simpler Real-Time Cassandra Apps
 
All Your Base
All Your BaseAll Your Base
All Your Base
 
Realtime Analytics with Apache Cassandra
Realtime Analytics with Apache CassandraRealtime Analytics with Apache Cassandra
Realtime Analytics with Apache Cassandra
 
Realtime Analytics with Apache Cassandra - JAX London
Realtime Analytics with Apache Cassandra - JAX LondonRealtime Analytics with Apache Cassandra - JAX London
Realtime Analytics with Apache Cassandra - JAX London
 
Real-time Cassandra
Real-time CassandraReal-time Cassandra
Real-time Cassandra
 
Realtime Analytics on the Twitter Firehose with Apache Cassandra - Denormaliz...
Realtime Analytics on the Twitter Firehose with Apache Cassandra - Denormaliz...Realtime Analytics on the Twitter Firehose with Apache Cassandra - Denormaliz...
Realtime Analytics on the Twitter Firehose with Apache Cassandra - Denormaliz...
 
Realtime Analytics with Cassandra
Realtime Analytics with CassandraRealtime Analytics with Cassandra
Realtime Analytics with Cassandra
 
Acunu Analytics @ Cassandra London
Acunu Analytics @ Cassandra LondonAcunu Analytics @ Cassandra London
Acunu Analytics @ Cassandra London
 
Exploring Big Data value for your business
Exploring Big Data value for your businessExploring Big Data value for your business
Exploring Big Data value for your business
 
Realtime Analytics on the Twitter Firehose with Cassandra
Realtime Analytics on the Twitter Firehose with CassandraRealtime Analytics on the Twitter Firehose with Cassandra
Realtime Analytics on the Twitter Firehose with Cassandra
 
Progressive NOSQL: Cassandra
Progressive NOSQL: CassandraProgressive NOSQL: Cassandra
Progressive NOSQL: Cassandra
 
Cassandra EU 2012 - Overview of Case Studies and State of the Market by 451 R...
Cassandra EU 2012 - Overview of Case Studies and State of the Market by 451 R...Cassandra EU 2012 - Overview of Case Studies and State of the Market by 451 R...
Cassandra EU 2012 - Overview of Case Studies and State of the Market by 451 R...
 
Cassandra EU 2012 - Putting the X Factor into Cassandra
Cassandra EU 2012 - Putting the X Factor into CassandraCassandra EU 2012 - Putting the X Factor into Cassandra
Cassandra EU 2012 - Putting the X Factor into Cassandra
 
Cassandra EU 2012 - Netflix's Cassandra Architecture and Open Source Efforts
Cassandra EU 2012 - Netflix's Cassandra Architecture and Open Source EffortsCassandra EU 2012 - Netflix's Cassandra Architecture and Open Source Efforts
Cassandra EU 2012 - Netflix's Cassandra Architecture and Open Source Efforts
 
Next Generation Cassandra
Next Generation CassandraNext Generation Cassandra
Next Generation Cassandra
 
Cassandra EU 2012 - CQL: Then, Now and When by Eric Evans
Cassandra EU 2012 - CQL: Then, Now and When by Eric Evans Cassandra EU 2012 - CQL: Then, Now and When by Eric Evans
Cassandra EU 2012 - CQL: Then, Now and When by Eric Evans
 

Último

My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 

Último (20)

My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 

Stratified B-trees - HotStorage11

• clone(v): return a new child of version v that inherits all its keys and values.

Note that only leaf versions can be modified. If clone only works on leaf versions, we say the structure is partially-versioned; otherwise it is fully-versioned.

2 Related work

The B-tree was presented in 1972 [1], and it survives because it has many desirable properties; in particular, it uses optimal space, and offers point queries in optimal O(log_B N) IOs¹. More details can be found in [7].

3 This paper

In practice, the improvement is about two orders of magnitude, for around 3x slower point queries [3]. This paper presents a recent construction, the Stratified B-tree, which offers an analogous query/update tradeoff for fully-versioned data. It offers fully-versioned updates around two orders of magnitude faster than the CoW B-tree, and performs around one order of magnitude faster for range queries, thanks to heavy use of sequential IO. In addition, it is cache-oblivious [10] and can be implemented without locking, so it can take advantage of many-core architectures and SSDs. The downside is that point queries are slightly slower, around 3x in our implementation. However, many applications, particularly for analytics and so-called 'big data' problems, require high ingest rates and range queries rather than point queries.
For these applications, we believe the stratified B-tree is a better choice than the CoW B-tree and all other known versioned dictionaries. Acunu is developing a commercial open-source implementation of stratified B-trees in the Linux kernel.

¹ We use the standard notation B to denote the block size, and N the total number of elements inserted. For the analysis, we assume entries (including pointers) are of equal size, so B is the number of entries per block.
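To make the operations above concrete, here is a minimal OCaml sketch of a versioned-dictionary interface; the module and type names are illustrative assumptions rather than the paper's API.

```ocaml
(* Minimal sketch of a versioned dictionary interface, assuming illustrative
   names; only the operations described in the introduction are captured. *)
module type VERSIONED_DICT = sig
  type 'a t      (* a dictionary holding values of type 'a *)
  type version   (* a node in the associated version tree *)
  type key = string

  (* update k v x: associate value x to key k in leaf version v. *)
  val update : 'a t -> key -> version -> 'a -> unit

  (* range_query k1 k2 v: all keys and values in [k1, k2] live in version v. *)
  val range_query : 'a t -> key -> key -> version -> (key * 'a) list

  (* clone v: a new child of version v that inherits all its keys and values;
     in a partially-versioned structure this is only allowed on leaf versions. *)
  val clone : 'a t -> version -> version
end
```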
Structure                        Versioning   Update                        Range query (size Z)              Space
B-tree [1]                       None         O(log_B N) random             O(log_B N + Z/B) random           O(N)
CoW B-tree [ZFS]                 Full         O(log_B N_v) random           O(log_B N_v + Z/B) random         O(N B log_B N)
MVBT [2]                         Partial      O(log_B N_v) random           O(log_B N_v + Z/B) random         O(N + V)
Lanka et al. (fat field) [12]    Full         O(log_B N_v) random           O((log_B N_v)(1 + Z/B)) random    O(N + V)
Stratified B-tree [this paper]   Full         O*((log N_v)/B) sequential    O*(log N_v + Z/B) sequential      O(N + V)

Table 1: Comparing the cost of basic operations on versioned dictionaries. Bounds marked O*(·) are amortized over operations on a given version.

Table 1 summarises our results and known data structures. We use N for the total number of keys written over all versions, N_v for the number of keys live (accessible) in version v, and V for the total number of versions (see Preliminaries in Section 5).

3.1 CoW B-tree

The CoW B-tree is the classic versioned dictionary in file systems, storage and other external-memory algorithms. The basic idea is to have a B-tree with many roots, one for each version. Nodes are versioned, and updates can only be applied to a node of the same version as the update. To perform an update in v2, one ensures there is a suitable root for v2 (by duplicating the root for v1, the parent of v2, if necessary), then follows pointers as in a normal B-tree; every time a node with a version other than v2 is encountered, it is copied to make a new node with version v2, and the parent's pointer is updated before the update continues down the tree. A lookup proceeds as in a B-tree, starting from the appropriate root. More details can be found in [13].

The CoW B-tree has three major problems. First, updates are slow, since each update may cause a new path to be written: updating a 16-byte key in a tree of depth 3 with a 256K block size requires 3 x 256K of random reads and writes 768K of data. Second, since the copied nodes cannot be garbage collected unless a version is deleted, they represent a large space blowup. Finally, it relies on random IO for both updates and range queries; over time, the leaves tend to be scattered randomly, which causes problems for range queries, garbage collection and prefetching.
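As a toy illustration of the path copying behind those costs, the sketch below applies copy-on-write to an in-memory binary search tree rather than an on-disk B-tree; the point is that every node on the root-to-leaf search path is copied, which is exactly why a CoW B-tree rewrites a whole path of blocks per update.

```ocaml
(* Toy copy-on-write update on a binary search tree: the returned root is a
   new version of the tree, every node on the search path has been copied,
   and all untouched subtrees are shared with the old version. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * string * 'a * 'a tree

let rec cow_update t k x =
  match t with
  | Leaf -> Node (Leaf, k, x, Leaf)                      (* fresh leaf *)
  | Node (l, k', x', r) ->
      if k < k' then Node (cow_update l k x, k', x', r)  (* copy this node *)
      else if k > k' then Node (l, k', x', cow_update r k x)
      else Node (l, k, x, r)                             (* overwrite value; still a copy *)

(* The old root v1 remains a valid, fully readable version after producing v2. *)
let v1 = cow_update Leaf "a" 1
let v2 = cow_update v1 "b" 2
```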
This has the property be scattered randomly, which causes problems for range that for any v, all descendants of v appear in a contiguous queries, garbage collection and prefetching. region to its left. For a version v, we write W [v] for the subtree of W descendant from v. We also write, for sim- plicity, V to represent both the global version tree and 4 Multiversion B-trees the number of versions in it (its use will be clear from the context). The multi-version B-tree (MVBT) [2] offers the same An element (k, v, x) is said to live at version w iff v query/update bounds as the CoW B-tree, but with asymp- is a descendant of w, and k has not been rewritten be- totically optimal O(N ) space. However, it is only tween v and w. An element (k, v, x) is lead at version partially-versioned. The basic idea is to use versioned v. For an array (A, W ), we write live(A, W ) to count pointers inside nodes – each pointer stores the range of all the elements live at some w ∈ W , and similarly for (totally-ordered) versions for which the pointer is live. A lead(A, W ). We use N to count the total number of ele- query for version v starts by finding the root node for v, ments inserted, and Nv to count the number of elements then at every node, extracting the set of pointers live at live at version v. 2
[Figure 1: a versioned array over keys k0..k3 and versions v0..v5, its version tree, and its on-disk layout; the array is tagged with W = {v1, v2, v3} and is laid out as (k0,v0,x) (k1,v0,x) (k1,v2,x) (k2,v1,x) (k2,v2,x) (k2,v3,x) (k3,v1,x) (k3,v2,x).]

Figure 1: A versioned array, a version tree and its layout on disk. Versions v1, v2, v3 are tagged, so dark entries are lead entries. The entry (k0, v0, x) is written in v0, so it is not a lead entry, but it is live at v1, v2 and v3. Similarly, (k1, v0, x) is live at v1 and v3 (since it was not overwritten at v1) but not at v2. The live counts are: live(v1) = 4, live(v2) = 4, live(v3) = 4, and the array has density 4/8. In practice, the on-disk layout can be compressed by writing the key once for all the versions, and by other well-known techniques.

We say an array (A, W) has density δ if, for all w in W, at least a fraction δ of the elements in A are live at w. We call an array dense if it has density ≥ 1/6. The importance of density is the following: if density is too low, then scanning an array to answer a range query at version v will involve skipping over many elements not live at v. On the other hand, if we insist on density being too high, then many elements must be duplicated (in the limit, each array will simply contain all the live elements for a single version), leading to a large space blowup. The central part of our construction is a merging and splitting process that guarantees that every array has both density and lead fraction Ω(1). This is enough to guarantee only a constant O(1) space and query blowup.

The notion of a split is important for the purpose of breaking large arrays into smaller ones. For an array (A, W) and version set X ⊆ W, we say split((A, W), X) ⊆ A contains all the elements in A live at any x in X.

Notation in figures. Figures 2 and 3 give examples of arrays and various split procedures. We draw a versioned array as a matrix of (key, version) pairs, where the shaded entries are present in the array. For each array (A, W), versions in W are written in bold font (note that arrays may have entries in versions not in W; these entries are duplicates: they are live in some version in W, but are not lead). Figure 1 gives an example of a versioned array.

Queries. In the simplest case, a point query can be answered by considering each level independently. Unlike the COLA, each level may have many arrays. We guarantee that a query for version v examines at most one array per level: examine the array (A, W) where W contains the closest ancestor to v. In the simplest construction, we maintain a per-array B-tree index for the keys in that array, and a Bloom filter on its key set. A query at v involves querying the Bloom filter for each array selected as above, then examining the B-tree in those arrays matching the filter. If multiple results are found, return the one written in the nearest ancestor to v, or the one in the lowest level if there is more than one such entry. In practice, the internal nodes of each B-tree can be held in memory, so this involves O(1) IOs per array.

For a range query, we find the starting point in each array, and merge together the results of iterating forwards in each array. The density property implies that, for large range queries, simply scanning the arrays with the appropriate version tags will use an asymptotically optimal number of IOs. For small range queries, we resort to a more involved 'local density' argument to show that, over a large enough number of range queries, the average range query cost will also be optimal. Indeed, doing better with O(N) space is known to be impossible. More details can be found in [5].
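The per-level point query just described can be sketched as follows; the record fields and the version-id representation are assumptions (each array is taken to carry its Bloom filter and a per-array B-tree lookup function), not the paper's code.

```ocaml
(* Schematic point query over the levelled arrays. Levels are listed from the
   smallest (most recently written) upwards, so the first hit is the entry
   from the lowest level; versions are plain ids here for simplicity. *)
type 'a varray = {
  contains_closest_ancestor : int -> bool;   (* does W contain v's closest ancestor? *)
  bloom_may_contain : string -> bool;        (* Bloom filter over the array's key set *)
  btree_lookup : string -> int -> 'a option; (* per-array B-tree, O(1) IOs once cached *)
}

let point_query (levels : 'a varray list list) (k : string) (v : int) : 'a option =
  List.fold_left
    (fun acc arrays ->
      match acc with
      | Some _ -> acc                         (* already answered at a lower level *)
      | None ->
          (* at most one array per level is examined *)
          (match List.find_opt (fun a -> a.contains_closest_ancestor v) arrays with
           | Some a when a.bloom_may_contain k -> a.btree_lookup k v
           | _ -> None))
    None levels
```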
Insertions, promotions and merging. An insert of (k, v, x) consists of promoting the singleton array ({(k, v, x)}, {v}) into level 0. In general, an array S = (A, W) promoted to level l always has a unique root w in W, so we merge it with the unique array (A', W') at level l that is tagged with the closest ancestor of w (if such an array exists), giving the array (A ∪ A', W ∪ W'). If no such array exists, we leave S in place.

After merging, the resulting array may be too large to exist at level l. We employ two procedures to remedy this. First, we call find-promotable in order to extract any subarrays that are large and dense enough to survive at level l + 1. Secondly, on the remainder (which may be the entire array if no promotable subarray was found), we apply density amplification, which produces a collection of dense subarrays that all have the required size.

Find-promotable. This proceeds by searching for the highest (closest to the root) version v in W such that X = split(A, W[v]) has size ≥ 2^(l+1) (thus being too large for level l) and has a sufficiently high live and lead fraction. If such a version is found, it is extracted and promoted to level l + 1, leaving the remainder split(A, W \ W[v]). Figure 2 gives an example of find-promotable.
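The control flow of an insert, under the assumption that merging, find-promotable and density amplification are available as opaque helpers (they are described in the surrounding text, not implemented here), might look like this sketch:

```ocaml
(* Sketch of the promotion loop. [merge] merges an incoming array with the
   closest-ancestor array at the level (or returns it unchanged), and arrays
   at level l may hold at most roughly 2^(l+1) elements. All four helper
   functions are assumptions passed in as parameters. *)
let max_size level = 1 lsl (level + 1)

let rec promote ~merge ~find_promotable ~density_amplify ~insert_at level arr =
  let merged = merge level arr in
  if Array.length merged <= max_size level then insert_at level merged
  else
    match find_promotable level merged with
    | Some (promoted, remainder) ->
        (* a large, dense subarray survives at the next level up ... *)
        promote ~merge ~find_promotable ~density_amplify ~insert_at (level + 1) promoted;
        (* ... and the remainder is re-cut into dense arrays at this level *)
        List.iter (insert_at level) (density_amplify level remainder)
    | None ->
        List.iter (insert_at level) (density_amplify level merged)

(* An insert of (k, v, x) promotes the singleton array into level 0. *)
```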
[Figures 2 and 3: worked examples on arrays over keys k0..k3 and versions v0..v5, showing live counts and densities for each promoted, split and remainder array.]

Figure 2: Example of find-promotable. The merged array is too large to exist at the current level (it has size 12, whereas level l = 3 requires size < 8, say). We promote to the next level the subtree under version v1, which gives a promoted array of size 8 and density 4/8 = 1/2. Note that the entries (k0, v0), (k1, v0) are duplicated, since they are also live at v1, v2, v3. The remainder array has size 6, so it can remain at the original level.

Figure 3: Example of density amplification. The merged array has density 2/11 < 1/6, so it is not dense. We find a split into two parts: the first split (A1, {v0, v5}) has size 4 and density 1/2. The second split (A2, {v4, v1, v2, v3}) has size 7 and density 2/7. Both splits have size < 8 and density ≥ 1/6, so they can remain at the current level.

Density amplification. Now we consider the remainder after calling find-promotable. There may be a version w in W that was originally dense but is no longer dense in the larger (A', W'). To restore density, we subdivide the array into several smaller arrays (A_i, W_i), each of which is dense. Crucially, by carefully choosing which sets of versions to include in which array, we are able to bound the amount of work done during density amplification, and to guarantee that not too many additional duplicate elements are created.

We start at the root version and greedily search for a version v and some subset of its children whose split arrays can be merged into one dense array at level l. More precisely, letting U = ∪_i W[v_i], we search for a subset {v_i} of v's children such that |split(A', U)| < 2^(l+1). If no such set exists at v, we recurse into the child v_i maximizing |split(A', W[v_i])|. It is possible to show that this always finds a dense split. Once such a set U is identified, the corresponding array is written out, and we recurse on the remainder split(A', W \ U). Figure 3 gives an example of density amplification.
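As a small check on the arithmetic in the captions above, the density of an array can be computed as the minimum, over its tagged versions, of the live fraction; the live_at test is the one sketched earlier, and the 1/6 threshold is the paper's cutoff for 'dense'.

```ocaml
(* Density of a versioned array: the smallest, over the tagged versions w, of
   the fraction of elements live at w; an array is dense if this is >= 1/6.
   [live_at] is an assumed parameter (see the earlier liveness sketch). *)
let density ~live_at (elements : 'e array) (tagged : 'v list) : float =
  let n = float_of_int (Array.length elements) in
  let live_fraction w =
    let c =
      Array.fold_left (fun acc e -> if live_at e w then acc + 1 else acc) 0 elements
    in
    float_of_int c /. n
  in
  List.fold_left (fun acc w -> min acc (live_fraction w)) 1.0 tagged

let is_dense ~live_at elements tagged =
  density ~live_at elements tagged >= 1.0 /. 6.0
```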
6 Practicalities

Consistency. We would like the data structure to be consistent: at any time, the on-disk representation is in a well-defined and valid state. In particular, we would like it to always be the case that, after a crash or power loss, the system can continue without any additional work (or very little) to recover from the crash. Both the CoW and Stratified B-trees implement consistency in a similar way: the difference between the new and old data structure is assembled in such a way that it can be abandoned at any time, up until a small in-memory change is made at the end. In the case of CoW it is the root for version v: until the table mapping v to its root node is updated, the newly copied nodes are not visible in searches, and can be recycled in the event of a crash. Likewise, Stratified B-tree arrays are not destroyed until they are no longer in the search path, thus ensuring consistency.

Concurrency. With many-core architectures, exploiting concurrency is crucial, and implementing a good concurrent B-tree is difficult. The Stratified B-tree can naturally be implemented with minimal locking. Each per-array B-tree is built bottom-up (a node is only written once all its children are written), and once a node is written, it is immutable. If the B-tree T is being constructed as a merge of T1 and T2, then there is a partition key k such that all nodes with keys less than k are immutable; based on k, queries can be sent either to T, or to T1 and T2. Each merge can thus be handled by a single writer without locks. In the Stratified B-tree, there are many merges ongoing concurrently; each has a separate writer.

Allocation and low free space. When a new array of size k needs to be written, we try to allocate it in the first free region of size ≥ k. If this fails, we use a 'chunking strategy': we divide each array into contiguous chunks of size c >> B (currently 10MB). During a merge, we read chunks from each input array until we have a chunk formed in memory to output. When a chunk is no longer needed from the input arrays, it can be deallocated and the output chunk written there. This doesn't guarantee that the entire array is sequential on disk, but it is sequential to within the chunk size, which is sufficient in practice to extract good performance, and it uses only O(c) extra space during merges, so the system degrades gracefully under low free space conditions.
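The partition-key routing described under Concurrency can be sketched in a few lines; the lookup functions are assumptions, and combining results from T1 and T2 is simplified to a first-match lookup.

```ocaml
(* Routing a point lookup while T1 and T2 are being merged into T: keys below
   the merge's current partition key are already immutable in T and are served
   from it; other keys are served from the two inputs. The three lookup
   functions are assumed helpers returning Some value or None. *)
let route_lookup ~partition_key ~lookup_t ~lookup_t1 ~lookup_t2 key =
  if key < partition_key then lookup_t key
  else
    match lookup_t1 key with
    | Some _ as hit -> hit
    | None -> lookup_t2 key
```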
[Figure 4 plots inserts per second against dictionary size (keys, millions) for the Stratified B-tree and the CoW B-tree; Figure 5 plots range query reads per second against dictionary size for the same two structures.]

Figure 4: Insert performance with 1000 versions.

Figure 5: Range query performance with 1000 versions.

7 Experimental results

We implemented prototypes of the Stratified B-tree and the CoW B-tree (using in-place updates) in OCaml. The machine had 1GB of memory available, a 2GHz Athlon 64 processor (although our implementation was only single-threaded) and a 500GB SATA disk. We used a block size of 32KB; the disk can perform about 100 such IOs/s. We started with a single root version, inserted random 100-byte key-value pairs to random leaf versions, and periodically performed range queries of size 1000 at a random version. Every 100,000 insertions, we create a new version as follows: with probability 1/3 we clone a random leaf version, and with probability 2/3 we clone a random internal node of the version tree. The aim is to keep the version tree 'balanced' in the sense that there are roughly twice as many internal nodes as leaves.

Figures 4 and 5 show update and range query performance results for the CoW B-tree and the Stratified B-tree. The CoW B-tree's performance degrades rapidly when the index exceeds the internal memory available. The right plot shows range query performance (elements/s extracted using range queries of size 1000); the Stratified B-tree beats the CoW B-tree by a factor of more than 10. The CoW B-tree is limited by random IO here², but the Stratified B-tree is CPU-bound (OCaml is single-threaded). Preliminary performance results from a highly-concurrent in-kernel implementation suggest that well over 500k updates/s are possible with 16 cores.

² (100 IOs/s × 32KB) / (200 bytes/key) = 16384 keys/s.
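The version-creation policy in that workload is easy to restate in code; the clone and sampling helpers are assumptions for illustration.

```ocaml
(* Every 100,000 insertions, clone a random leaf version with probability 1/3,
   otherwise clone a random internal node of the version tree, aiming for
   roughly twice as many internal nodes as leaves. The helpers are assumed. *)
let maybe_new_version ~inserts ~clone ~random_leaf ~random_internal =
  if inserts mod 100_000 = 0 then begin
    if Random.float 1.0 < 1.0 /. 3.0
    then clone (random_leaf ())
    else clone (random_internal ())
  end
```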
References

[1] R. Bayer and E. McCreight. Organization and maintenance of large ordered indexes. Acta Informatica, 1(3):173-189, 1972.

[2] Bruno Becker and Stephan Gschwind. An asymptotically optimal multiversion B-tree. The VLDB Journal, 5(4):264-275, 1996.

[3] Michael A. Bender and Martin Farach-Colton. Cache-oblivious streaming B-trees. In SPAA '07, pages 81-92, New York, NY, USA, 2007. ACM.

[4] Jeff Bonwick and Matt Ahrens. The zettabyte file system, 2008.

[5] A. Byde and A. Twigg. Optimal query/update tradeoffs in versioned dictionaries. http://arxiv.org/abs/1103.2566. ArXiv e-prints, March 2011.

[6] S. Chutani and O. T. Anderson. The Episode file system. In USENIX Annual Technical Conference, pages 43-60, 1992.

[7] Thomas H. Cormen, Clifford Stein, Ronald L. Rivest, and Charles E. Leiserson. Introduction to Algorithms. McGraw-Hill Higher Education, 2nd edition, 2001.

[8] J. R. Driscoll and N. Sarnak. Making data structures persistent. In STOC '86, pages 109-121, New York, NY, USA, 1986. ACM.

[9] Btrfs file system. http://en.wikipedia.org/wiki/Btrfs.

[10] Matteo Frigo and Charles E. Leiserson. Cache-oblivious algorithms. In FOCS '99, pages 285-, Washington, DC, USA, 1999. IEEE Computer Society.

[11] Dave Hitz and James Lau. File system design for an NFS file server appliance, 1994.

[12] Sitaram Lanka and Eric Mays. Fully persistent B+-trees. SIGMOD Rec., 20(2):426-435, 1991.

[13] Ohad Rodeh. B-trees, shadowing, and clones. Trans. Storage, 3:2:1-2:27, February 2008.

[14] Craig A. N. Soules, Garth R. Goodson, John D. Strunk, and Gregory R. Ganger. Metadata efficiency in versioning file systems. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies, pages 43-58, Berkeley, CA, USA, 2003. USENIX Association.