32. Architecture
• in-memory database
- append-only log on disk
- virtual memory
• single instance
- master-slave replication
- clustering is on roadmap
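These properties map onto a handful of redis.conf directives. A sketch based on the 2.x-era sample config — the vm-* directives existed only in Redis 2.0–2.2 and were removed later; paths and values are illustrative:

```
# append-only log on disk (AOF persistence)
appendonly yes
appendfsync everysec

# virtual memory: swap cold values to disk (Redis 2.0–2.2 only)
vm-enabled yes
vm-swap-file /var/lib/redis/redis.swap
vm-max-memory 0   # 0 = swap values as aggressively as needed
```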
33. “Memory is the new disk, disk is the new tape.”
— Jim Gray
39. Features
• “Memcached with persistence”
- extremely fast
- throughput scales linearly
• automatic data placement
- memory, ssd, disk
• configurable replica count
41. Architecture
• cluster
- all nodes are alike
- one elected as “coordinator”
• each node is master for part of the key space
- handles all reads & writes
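A toy sketch of how a fixed key space can be partitioned across alike nodes: hash each key into one of a fixed number of buckets, and let the coordinator map each bucket to the node that masters it. Node names and bucket count are illustrative, not from the talk.

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # all nodes are alike
NUM_BUCKETS = 1024                      # fixed partitions of the key space

def bucket_for(key: str) -> int:
    """Hash the key into one of NUM_BUCKETS partitions."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

# the elected coordinator assigns each bucket a master node
bucket_map = {b: NODES[b % len(NODES)] for b in range(NUM_BUCKETS)}

def master_for(key: str) -> str:
    """The node that handles all reads & writes for this key."""
    return bucket_map[bucket_for(key)]
```

Because the mapping is deterministic, every client routes a given key to the same master without asking the coordinator on each request.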
54. Design
• two machines (+ load balancer)
- Redis master handles all reads / writes
- Redis slave as hot standby
- both machines used as app servers
• dedicated hardware
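Assuming plain Redis master–slave replication (host addresses below are placeholders), the hot standby needs a single line of configuration, and promoting it on master failure is one command:

```
# redis.conf on the standby machine: follow the master
slaveof 10.0.0.1 6379

# manual failover: promote the standby to master
# (run against the standby machine)
#   redis-cli -h 10.0.0.2 slaveof no one
```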
55. Data model
• one Redis hash per user
- key: facebook id
• store data as serialized JSON
- booleans, strings, numbers, timestamps ...
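A minimal sketch of this data model. A plain dict stands in for the Redis server so the snippet runs anywhere; in production these would be HSET / HGET calls against a hash named after the Facebook id. All field names and values are made up for illustration.

```python
import json

# stand-in for a Redis instance; one hash per user
redis_hashes = {}

def save_user(facebook_id: int, fields: dict) -> None:
    """One Redis hash per user, keyed by Facebook id;
    every field value is stored as serialized JSON."""
    key = f"user:{facebook_id}"
    redis_hashes[key] = {f: json.dumps(v) for f, v in fields.items()}

def load_field(facebook_id: int, field: str):
    """Read a single field back and deserialize it."""
    return json.loads(redis_hashes[f"user:{facebook_id}"][field])

save_user(123456789, {
    "name": "alice",          # string
    "premium": True,          # boolean
    "coins": 420,             # number
    "last_seen": 1291161600,  # timestamp
})
```

Keeping each user in one hash is what makes it cheap to swap a whole user in or out, while individual fields can still be read or updated atomically.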
56. Advantages
• turns Redis into “document db”
- efficient to swap user data in / out
- atomic ops on parts
• easy to dump / restore user data
59. Capacity
• 4 GB memory for 20 million integer keys
- keys always stay in memory!
• 2 GB memory for 10,000 user hashes
- others can be swapped out
• 3.6 million ops / minute
- sufficient for 200,000 requests
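These figures reduce to a quick back-of-envelope check (assuming the 200,000 requests are per minute, as the ops figure suggests):

```python
# memory per key: 4 GB for 20 million integer keys
bytes_per_key = 4 * 1024**3 / 20_000_000   # ~215 bytes each

# memory per resident user hash: 2 GB for 10,000 hashes
bytes_per_hash = 2 * 1024**3 / 10_000      # ~215,000 bytes each

# throughput: 3.6 million ops / minute
ops_per_second = 3_600_000 / 60            # 60,000 ops/s
ops_per_request = 3_600_000 / 200_000      # 18 ops per request
```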
62. Status
• game was launched in August
- currently still in beta
• expect to reach 1 million daily active users in Q1/2011
• will try to stick to 2 or 3 machines
- possibly bigger / faster ones
66. Conclusions
• use the right tool for the job
• keep it simple
- avoid sharding, if possible
• don’t scale out too early
- but have a viable “plan b”
• use dedicated hardware