7. Partition data based on ranges
• User defines shard key
• Shard key defines range of data
• Key space is like points on a line
• Range is a segment of that line
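The idea of the key space as a line cut into segments can be sketched in plain JavaScript (not MongoDB code; the ranges and shard names below are invented for illustration):

```javascript
// Sketch: locating the chunk that owns a shard-key value.
// Chunk ranges are half-open intervals [min, max) over the key space,
// the same way the cluster's chunk metadata describes them.
const chunks = [
  { min: -Infinity, max: 100, shard: "shard0" },
  { min: 100, max: 500, shard: "shard1" },
  { min: 500, max: Infinity, shard: "shard2" },
];

// Every possible key value falls into exactly one segment of the line.
function chunkFor(keyValue) {
  return chunks.find((c) => keyValue >= c.min && keyValue < c.max);
}
```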
8. Distribute data in chunks across nodes
• Initially 1 chunk
• Default max chunk size: 64 MB
• MongoDB automatically splits & migrates chunks when the max is reached
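The default chunk size can be changed cluster-wide through the config database. A mongo-shell sketch for the era this deck covers (run against a mongos; the value is in MB):

```javascript
// Mongo shell, connected to a mongos. Sets the cluster-wide max
// chunk size; 64 is the default mentioned above.
use config
db.settings.save({ _id: "chunksize", value: 64 })
```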
9. MongoDB manages data access
• Queries are routed to the specific shards
• MongoDB balances the cluster
• MongoDB migrates data to new nodes
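A rough sketch, in plain JavaScript, of how a mongos-style router decides between a targeted query and a scatter-gather (the `userId` shard key, ranges, and shard names are invented for illustration, not MongoDB internals):

```javascript
// Chunk metadata: which shard owns which shard-key range.
const chunks = [
  { min: 0, max: 100, shard: "shard0" },
  { min: 100, max: 200, shard: "shard1" },
];

// A query that includes the shard key can be sent to one shard;
// a query without it must be broadcast to every shard.
function targetShards(query) {
  if ("userId" in query) {
    const c = chunks.find((c) => query.userId >= c.min && query.userId < c.max);
    return [c.shard];                               // targeted
  }
  return [...new Set(chunks.map((c) => c.shard))];  // scatter-gather
}
```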
11. Data stored in shard
• Shard is a node of the cluster
• Shard can be a single mongod or a replica set
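Both kinds of shard are registered the same way from a mongos. A mongo-shell sketch (hostnames and the set name are placeholders):

```javascript
// Mongo shell, run against a mongos.
sh.addShard("node1.example.com:27017")        // shard backed by a single mongod
sh.addShard("rs0/node2.example.com:27017")    // shard backed by replica set "rs0"
```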
12. Config server stores metadata
• Config Server
– Stores cluster chunk ranges and locations
– Can have only 1 or 3 (production must have 3)
– Uses two-phase commit (not a replica set)
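Starting one of the three mirrored config servers looks like this in the pre-replica-set config era the slide describes (path and port are placeholders):

```shell
# One of three mirrored config servers; writes to them are
# coordinated with two-phase commit, not replication.
mongod --configsvr --dbpath /data/configdb --port 27019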
13. MongoS manages the data routing
• Mongos
– Acts as a router / balancer
– No local data (cluster metadata persists in the config database)
– Can have 1 or many
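A mongos of this era is pointed at all three config servers at startup (hostnames are placeholders); any number of mongos processes can be run this way:

```shell
# Router process: no local data, reads/writes metadata in the config DB.
mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019
```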
18. Chunk splitting
• A chunk is split once it exceeds the maximum size
• There is no split point if all documents have the same shard key
• A chunk split is a logical operation (no data is moved)
• If a split creates too large a discrepancy in chunk counts across the cluster, a balancing round starts
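The logical nature of a split can be sketched in plain JavaScript: only the metadata describing the range changes, and both halves stay on the same shard (all names invented for illustration, not MongoDB internals):

```javascript
// Splitting rewrites metadata only: one range becomes two; no documents move.
function splitChunk(chunk, splitPoint) {
  if (splitPoint <= chunk.min || splitPoint >= chunk.max) {
    return [chunk];  // no usable split point (e.g. all docs share one key value)
  }
  return [
    { min: chunk.min, max: splitPoint, shard: chunk.shard },
    { min: splitPoint, max: chunk.max, shard: chunk.shard },
  ];
}

const parts = splitChunk({ min: 0, max: 1000, shard: "shard0" }, 500);
```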
19. Balancing
• The balancer runs on a mongos
• Once the difference in chunks between the most dense shard and the least dense shard reaches the migration threshold, a balancing round starts
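A sketch of the threshold check, using the migration thresholds documented in the MongoDB manual of this era (fewer than 20 chunks: 2; 20-79: 4; 80 or more: 8). This is illustrative JavaScript, not the balancer's actual code:

```javascript
// Threshold grows with cluster size so small imbalances are tolerated.
function migrationThreshold(totalChunks) {
  if (totalChunks < 20) return 2;
  if (totalChunks < 80) return 4;
  return 8;
}

// chunkCounts: number of chunks on each shard.
function needsBalancing(chunkCounts) {
  const total = chunkCounts.reduce((a, b) => a + b, 0);
  const spread = Math.max(...chunkCounts) - Math.min(...chunkCounts);
  return spread >= migrationThreshold(total);
}
```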
20. Acquiring the Balancer Lock
• The balancer on mongos takes out a "balancer lock"
• To see the status of this lock:
- use config
- db.locks.find({ _id: "balancer" })
21. Moving the chunk
• The mongos sends a "moveChunk" command to the source shard
• The source shard then notifies the destination shard
• The destination claims the chunk's shard-key range
• Destination shard starts pulling documents from source shard
22. Committing Migration
• When complete, destination shard updates config server
- Provides new locations of the chunks
23. Cleanup
• Source shard deletes moved data
- Must wait for open cursors to either close or time out
- NoTimeout cursors may prevent the release of the lock
• Mongos releases the balancer lock after the old chunks are deleted
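The lifecycle in slides 21-23 can be sketched end to end in plain JavaScript (purely illustrative; this is not MongoDB's internal protocol, and all names are invented):

```javascript
// A tiny cluster: one chunk "c1" with two documents, living on shard0.
const cluster = {
  chunks: { c1: { shard: "shard0" } },
  shards: {
    shard0: { docs: [{ _id: 1, chunk: "c1" }, { _id: 2, chunk: "c1" }] },
    shard1: { docs: [] },
  },
};

function migrateChunk(cluster, chunkId, from, to) {
  // 1. moveChunk: the destination pulls the chunk's documents from the source.
  const moving = cluster.shards[from].docs.filter((d) => d.chunk === chunkId);
  cluster.shards[to].docs.push(...moving);
  // 2. Commit: the config metadata now maps the range to the destination.
  cluster.chunks[chunkId].shard = to;
  // 3. Cleanup: the source deletes its copy (after open cursors finish).
  cluster.shards[from].docs = cluster.shards[from].docs.filter(
    (d) => d.chunk !== chunkId
  );
}

migrateChunk(cluster, "c1", "shard0", "shard1");
```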
44. Shard Key
• Choose a field commonly used in queries
• The shard key is immutable
• Shard key values are immutable
• The shard key requires an index on the fields contained in the key
• Uniqueness of the `_id` field is only guaranteed within an individual shard
• Shard key limited to 512 bytes in size
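Putting the index requirement together with sharding a collection, a mongo-shell sketch (database, collection, and field names are placeholders):

```javascript
// Mongo shell, run against a mongos. The shard key fields must be
// indexed before the collection can be sharded.
db.users.ensureIndex({ userId: 1 })
sh.enableSharding("mydb")
sh.shardCollection("mydb.users", { userId: 1 })
```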
47. Hashed Shard Keys
• New in MongoDB 2.4
• Uses Hashed indexes
• The mongos will route all equality queries to a specific shard or set of shards; however, the mongos must route range queries to all shards.
• When using a hashed shard key on an empty collection, MongoDB automatically pre-splits the range of 64-bit hash values into chunks
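A toy JavaScript illustration of why hashing spreads monotonically increasing keys: sequential values land on different shards, which is also why a range query must be sent to every shard. The multiplier below is only a stand-in for MongoDB's 64-bit hash (creating a real hashed key uses `sh.shardCollection("mydb.users", { userId: "hashed" })`):

```javascript
// 32-bit multiplicative mix (Knuth-style constant); illustrative only.
function toyHash(n) {
  return (n * 2654435761) % 4294967296;
}

const numShards = 4;
// Sequential _id-style values scatter across shards after hashing...
const placements = [];
for (let id = 1; id <= 8; id++) placements.push(toyHash(id) % numShards);
// ...so a range of ids spans every shard, forcing scatter-gather.
```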
48. Use Cases for Hashed Shard Keys
• Write heavy applications
• Need good write distribution
• Tolerant of slower scatter gather queries
• Rarely perform range queries (every lookup is an equality match directed to a single document)
From mainframes to RAC Oracle servers, people solved problems by adding more resources to a single machine.
Google was the first to demonstrate that a large-scale operation could achieve high performance on commodity hardware.
Build - a document-oriented database maps cleanly to object-oriented languages.
Scale - MongoDB presents a clear path to scalability that isn't ops intensive, and provides the same interface for a sharded cluster as for a single instance.
Indexes should be contained in working set.
Add arrows for, or mention, the communication between shards (migrations).
Quick review from earlier.
Once the chunk size is reached, mongos asks mongod to split the chunk (internal function called splitVector()). mongod counts the number of documents on each side of the split, based on the average document size (`db.stats()`). A chunk split is a **logical** operation (no data has moved). Max on first chunk should be 14.
Balancer lock actually held on config server.
Moved chunk on shard2 should be gray
How do the other mongoses know that their configuration is out of date? When does the chunk version on the shard itself get updated?
The mongos does not have to load the whole set into memory since each shard sorts locally. The mongos can just getMore from the shards as needed and incrementally return the results to the client.
_id could be unique across shards if used as the shard key. We can only guarantee uniqueness of (any) attributes if those keys are used as shard keys with the unique attribute set to true.