
bup backup system (2011-04)



1. bup: the git-based backup system
   Avery Pennarun, 2011-04-30
2. The Challenge
   - Back up entire filesystems (>1TB)
   - Including huge VM disk images (files >100GB)
   - Lots of separate files (500k or more)
   - Calculate/store incrementals efficiently
   - Create backups in O(n), where n = number of changed bytes
   - Incremental backup direct to a remote computer (no local copy)
   - ...and don't forget metadata
3. The Result
   - bup is extremely fast: ~80 MB/sec, even written in Python
   - Sub-file incrementals are very space efficient
     - >5x better than rsnapshot in real-life use
   - VM images compress smaller and faster than with gzip
   - Dedupe between different client machines
   - O(log n) seek time to any part of any file
   - You can mount your backup history as a filesystem, or browse it as a web page
4. The Design
5. Why git?
   - Easily handles file renames
   - Easily deduplicates between identical files and trees
   - Debug my code using ordinary git commands
   - Someone already thought hard about the repo format (packs, idxes)
   - Three-level model: “work tree” vs. “index” vs. “commit”
6. Problem 1: Large files
   - 'git gc' explodes badly on large files; totally unusable
   - The git-bigfiles fork “solves” the problem by simply never deltifying large objects: lame
   - zlib's window size is very small, so compression on VM images is lousy
7. Digression: zlib window size
   - gzip only does two things:
     - backref: copy some bytes from the preceding 32k window
     - Huffman code: a dictionary of common words
   - That 32k window is a serious problem!
   - Duplicated data more than 32k apart can't be compressed
   - cat file.tar file.tar | gzip -c | wc -c
     - surprisingly, twice the size of a single .tar.gz (demo below)
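The window limit is easy to demonstrate with Python's zlib (the buffer sizes below are arbitrary demo choices): random data is incompressible, so a duplicate that falls outside the window costs full price, while one inside the window collapses into backrefs.

    import os, zlib

    big = os.urandom(100 * 1024)     # 100 KiB of incompressible bytes
    ratio = len(zlib.compress(big + big)) / len(zlib.compress(big))
    print(ratio)                     # ~2.0: the repeat is outside the 32k window

    small = os.urandom(16 * 1024)    # 16 KiB fits inside the 32k window
    ratio = len(zlib.compress(small + small)) / len(zlib.compress(small))
    print(ratio)                     # ~1.0: the second copy is just backrefs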
8. bupsplit
   - Uses a rolling checksum to split files into chunks; but first, a couple of digressions...
9. Digression: rolling checksums
   - Popularized by rsync (Andrew Tridgell, the Samba guy)
   - He wrote a (readable) Ph.D. thesis about it
   - bup uses a variant of the rsync rolling-checksum algorithm to find its chunk boundaries
10. Double digression: the rsync algorithm
   - First player:
     - Divide the file into fixed-size chunks
     - Send the list of all chunk checksums
   - Second player:
     - Look through existing files for any blocks that have those checksums
     - But any n-byte subsequence might be a match
     - Searching naively is about O(n^2)... ouch
     - So we use a rolling checksum instead
11. Digression: rolling checksums (cont'd)
   - Calculate the checksum of bytes 0..n
   - Remove byte 0 and add byte n+1 to get the checksum of bytes 1..n+1
     - And so on
   - Searching is now more like O(n): vastly faster
   - Requires a special “rollable” checksum such as adler32 (sketched below)
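A minimal sketch of that O(1) update in Python, using adler32's two running sums (bup's actual rollsum, adapted from the rsync/librsync family, uses different constants but rolls the same way):

    import os

    MOD = 65521  # adler32 works modulo the largest prime below 2**16

    def adler_parts(window):
        # Compute both adler32 running sums over `window` from scratch.
        a, b = 1, 0
        for byte in window:
            a = (a + byte) % MOD
            b = (b + a) % MOD
        return a, b

    def roll(a, b, out_byte, in_byte, n):
        # Slide an n-byte window right by one byte in O(1):
        # drop out_byte off the left edge, add in_byte on the right.
        a = (a - out_byte + in_byte) % MOD
        b = (b - n * out_byte - 1 + a) % MOD
        return a, b

    # Sanity check: rolling across random data matches recomputing from scratch.
    data = os.urandom(4096)
    n = 64
    a, b = adler_parts(data[:n])
    for i in range(n, len(data)):
        a, b = roll(a, b, data[i - n], data[i], n)
        assert (a, b) == adler_parts(data[i - n + 1:i + 1])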
12. Digression: gzip --rsyncable
   - You can't rsync gzipped files efficiently
   - Changing a byte early in the file changes the compression dictionary, so the rest of the file comes out different
   - --rsyncable resets the compression dictionary whenever the low bits of the adler32 == 0
   - Costs only a fraction of a percent of overhead on file size
   - But now your gzip files are rsyncable!
13. bupsplit
   - Based on the gzip --rsyncable trick
   - Instead of resetting a compression dictionary, we break the file into blocks on adler32 boundaries
     - If the low 13 bits of the checksum are all 1, end the chunk
     - Average chunk size: 2^13 = 8192 bytes
   - Now we have a list of chunks and their checksums
   - Inserting/deleting bytes changes at most two chunks! (see the sketch below)
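A minimal content-defined chunker in that spirit (the 64-byte window and this simplified rolling sum are assumptions standing in for bup's real rollsum, but the boundary rule is the one on this slide):

    import os

    WINDOW = 64          # rolling window, in bytes
    CHUNK_BITS = 13      # low 13 bits all 1 => boundary => ~2**13 byte chunks
    MASK = (1 << CHUNK_BITS) - 1

    def bupsplit(data):
        # Yield chunks of `data`, ending a chunk whenever the low 13 bits
        # of a rolling rsync-style checksum over the last WINDOW bytes
        # are all 1. Inserting or deleting a byte only moves boundaries
        # near the edit; all other chunks come out byte-identical.
        a = b = 0        # the two running sums, updated in O(1) per byte
        start = 0
        for i, byte in enumerate(data):
            if i >= WINDOW:
                out = data[i - WINDOW]
                a -= out
                b -= WINDOW * out
            a += byte
            b += a
            if (b & MASK) == MASK:
                yield data[start:i + 1]
                start = i + 1
        if start < len(data):
            yield data[start:]   # trailing partial chunk

    chunks = list(bupsplit(os.urandom(1 << 20)))
    # Roughly 1 MiB / 8 KiB = ~128 chunks, averaging about 8192 bytes each.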
14. bupsplit trees
   - Inspired by the “Tiger tree” hashing used in some P2P systems
   - Arrange the chunk list into a tree (boundary rule sketched below):
     - If the low 17 checksum bits are all 1, end a superchunk
     - If the low 21 bits are all 1, end a superduperchunk
     - and so on
   - A superchunk boundary is always also a chunk boundary
   - Inserting/deleting bytes changes at most 2*log(n) chunks!
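A sketch of that boundary-level rule (4 extra bits per level matches the 13/17/21 progression on this slide; bup's real fanout constant is an assumption here):

    def boundary_level(checksum, chunk_bits=13, bits_per_level=4):
        # A chunk boundary needs 13 low 1-bits; each further group of 4
        # one-bits closes one more tree level above it, so low 17 bits
        # end a superchunk, low 21 a superduperchunk, and so on.
        mask = (1 << bits_per_level) - 1
        level = 0
        checksum >>= chunk_bits
        while checksum & mask == mask:
            level += 1
            checksum >>= bits_per_level
        return level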
15. Advantages of bupsplit
   - Never loads the whole file into RAM
   - Compresses most VM images smaller (and faster) than gzip
   - Works well on both binary and text files
   - No need to teach it about file formats
   - Diff huge files in about O(log n) time
   - Seek to any offset in a file in O(log n) time
16. Problem 2: Millions of objects
   - Plain git format:
     - 1TB of data / 8k per chunk: 122 million chunks
     - x 20 bytes per SHA-1: 2.4GB of hashes
     - Divided into 2GB packs: 500 .idx files of ~5MB each
     - Each .idx has an 8-bit prefix lookup table
   - Adding a new chunk means searching 500 * (log(5MB) - 8) hops = 500 * 14 hops = 500 * 7 pages = 3500 pages (arithmetic checked below)
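The back-of-the-envelope math behind those numbers (the two-hops-per-page step assumes 4k pages, which is what the slide's page counts imply):

    import math

    chunks = 10**12 // 8192              # 1TB / 8k chunks = ~122 million
    sha_bytes = 20 * chunks              # ~2.4GB of SHA-1s
    idx_files = 500                      # split across 500 .idx files
    idx_size = sha_bytes // idx_files    # ~5MB each

    hops = int(math.log2(idx_size)) - 8  # 8-bit prefix table skips 8 hops: 14
    pages = idx_files * hops // 2        # ~2 binary-search hops per page: 3500
    print(chunks, sha_bytes, hops, pages)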
17. Millions of objects (cont'd)
   - bup .midx files: merge all the .idx files into a single .midx
   - Use a larger initial lookup table, to immediately narrow the search to the last 7 bits (lookup shape sketched below)
     - log(2.4GB) = 31 bits
     - 24-bit lookup table x 4 bytes per entry: 64 MB
   - Adding a new chunk now always touches only two pages
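A hedged sketch of that lookup shape (illustrative only, not bup's on-disk .midx layout): the 24-bit prefix table points straight into the sorted hash list, so one table entry plus a 7-bit binary search resolves any chunk.

    import bisect

    TABLE_BITS = 24

    def build_table(hashes):
        # hashes: a sorted list of 20-byte SHA-1s.
        # After the fill pass, table[p] = number of hashes whose 24-bit
        # prefix is < p, so table[p]..table[p+1] brackets prefix p.
        table = [0] * ((1 << TABLE_BITS) + 1)
        for i, h in enumerate(hashes):
            table[int.from_bytes(h[:3], 'big') + 1] = i + 1
        for p in range(1, len(table)):
            table[p] = max(table[p], table[p - 1])
        return table

    def contains(table, hashes, sha):
        p = int.from_bytes(sha[:3], 'big')             # top 24 bits
        lo, hi = table[p], table[p + 1]                # ~7 bits remain
        i = bisect.bisect_left(hashes, sha, lo, hi)
        return i < hi and hashes[i] == sha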
18. Bloom Filters
   - Problems with .midx:
     - Have to rewrite the whole file to merge in a new .idx
     - Storing 20 bytes per hash is wasteful; even 48 bits would be unique
     - Paging in two pages per chunk is maybe too much
   - Solution: Bloom filters (sketched below)
     - Idea borrowed from the Plan 9 WORM filesystem
     - Create a big hash table
     - Hash each block several ways
     - Gives false positives, never false negatives
     - Can update incrementally
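A minimal Bloom filter sketch; like bup, it gets its k probe positions by slicing up the chunk's SHA-1 (the table size and k here are illustrative choices):

    import hashlib

    class BloomFilter:
        # A bit array probed at k positions per key: a key that was added
        # always hits all k bits (never a false negative); an absent key
        # hits all k only rarely (occasional false positives).
        def __init__(self, size_bits=1 << 20, k=4):
            self.size = size_bits
            self.k = k
            self.bits = bytearray(size_bits // 8)

        def _positions(self, sha):
            # Slice the 20-byte SHA-1 into k 4-byte probe positions.
            for i in range(self.k):
                yield int.from_bytes(sha[4 * i:4 * i + 4], 'big') % self.size

        def add(self, sha):
            for p in self._positions(sha):
                self.bits[p >> 3] |= 1 << (p & 7)

        def __contains__(self, sha):
            return all(self.bits[p >> 3] & (1 << (p & 7))
                       for p in self._positions(sha))

    bf = BloomFilter()
    bf.add(hashlib.sha1(b'some chunk').digest())
    assert hashlib.sha1(b'some chunk').digest() in bf   # no false negatives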
19. bup command line: indexed mode
   - Saving:
     - bup index -vux /
     - bup save -vn mybackup /
   - Restoring:
     - bup restore (extract multiple files)
     - bup ftp (command-line browsing)
     - bup web
     - bup fuse (mount your backups as a filesystem)
20. Not implemented yet
   - Pruning old backups
     - 'git gc' is far too simple-minded to handle a 1TB backup set
   - Metadata
     - Almost done: 'bup meta' and the .bupmeta files
21. Crazy Ideas
   - Encryption
   - A 'bup restore' that updates the index, like 'git checkout' does
   - Distributed filesystem: bittorrent with a smarter data structure
22. Questions?
