3. Storage Trend
> Data size and capacity
– Multimedia content
– Large demo binaries, detailed graphics/photos, audio, video, etc.
> Data functional needs
– Different business requirements
– More data-driven processes
– More applications built on data
– More e-commerce
> Data backup kept for longer periods
– Legislation and compliance
– Business analysis
5. Software-Defined Storage
> High scalability:
– Distributed over multiple nodes in a cluster
> High availability:
– No single point of failure
> High flexibility:
– API, block device and cloud-supported architecture
> Pure software-defined architecture
> Self-monitoring and self-repairing
7. Why use Cloud Storage?
> Very high ROI compared to traditional hardware storage vendors
> Cloud-ready and S3 supported
> Thin provisioning
> Remote replication
> Cache tiering
> Erasure coding (see the example below)
> Self-managing and self-repairing with continuous monitoring
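For erasure coding, a minimal sketch using the standard Ceph CLI; the profile name, pool name and k/m values are illustrative, not from this demo:
> # define an erasure-code profile: k data chunks plus m coding chunks
> ceph osd erasure-code-profile set demo_profile k=2 m=1
> # create an erasure-coded pool that uses the profile
> ceph osd pool create ec_demo_pool 64 64 erasure demo_profile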
8. Other Key Features
> Supports clients on multiple operating systems
> Data encryption on the physical disks (needs more CPU)
> On-the-fly data compression
> Practically unlimited extensibility
> Copy-on-write clones and snapshots (see the example below)
> iSCSI support (VMs, thin clients, etc.)
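For copy-on-write clones and snapshots, a minimal rbd sketch; the pool and image names are hypothetical:
> # snapshot an image, protect the snapshot, then clone it copy-on-write
> rbd snap create rbd/base_image@snap1
> rbd snap protect rbd/base_image@snap1
> rbd clone rbd/base_image@snap1 rbd/cloned_image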
14. HTPC AMD (A8-5545M)
Form factor:
– 29.9 mm x 107.6 mm x 114.4 mm
CPU:
– AMD A8-5545M (up to 2.7 GHz, 4 MB cache, 4 cores)
RAM:
– 8 GB DDR3-1600 Kingston (up to 16 GB SO-DIMM)
Storage:
– mS200 120 GB mSATA SSD (read: 550 MB/s, write: 520 MB/s)
LAN:
– Gigabit LAN (Realtek RTL8111G)
Connectivity:
– USB 3.0 x 4
Price:
– $6,980 (NTD)
15. Enclosure
Form factor:
– 215 (D) x 126 (W) x 166 (H) mm
Storage:
– Supports any brand of 3.5" SATA I/II/III hard disk drive; 4 x 8 TB = 32 TB
Connectivity:
– USB 3.0 or eSATA interface
Price:
– $3,000 (NTD)
16. AMD (A8-5545M)
> Node = 6,980
> 512 GB SSD + 4 TB + 6 TB + enclosure = 5,000 + 4,000 + 7,000 = 16,000
> 30 TB total = 16,000 x 3 = 58,000 (NTD)
> About half the cost of 30 TB of Amazon cloud storage over one year
18. QUICK 3-NODE SETUP
Demo: basic setup of a small cluster
19. Ceph Cluster Requirements
> At least 3 MONs
> At least 3 OSDs
– At least 15 GB per OSD
– Journals are better placed on SSD (see the ceph.conf sketch below)
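A minimal ceph.conf sketch matching these requirements; the fsid, node names, addresses and journal size are placeholders, not values from the demo:
[global]
fsid = <cluster-uuid>
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.11, 192.168.1.12, 192.168.1.13
osd_pool_default_size = 3
[osd]
# keep the journal on a dedicated SSD partition; 5 GB is an illustrative size
osd_journal_size = 5120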
20. ceph-deploy
> Passwordless SSH keys need to be pushed to all cluster nodes
> Each node's ceph user needs sudo (root) permission
> ceph-deploy new <node1> <node2> <node3>
– Defines the initial MONs for the new cluster
> A ceph.conf file will be created in the current directory for you to build your cluster configuration
> Every cluster node should have an identical ceph.conf file (see the workflow sketch below)
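A hedged sketch of a typical ceph-deploy workflow on three nodes; the node names and deploy user are placeholders, and exact steps may differ on SES2:
> # push the passwordless SSH key to every node (run as the deploy user)
> ssh-copy-id ceph@node1 ; ssh-copy-id ceph@node2 ; ssh-copy-id ceph@node3
> # define the new cluster and its initial monitors (writes ceph.conf in the current directory)
> ceph-deploy new node1 node2 node3
> # deploy the monitors and gather the keyrings
> ceph-deploy mon create-initial
> # push ceph.conf and the admin keyring to every node
> ceph-deploy admin node1 node2 node3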
25. RBD Management
> rbd --pool ssd create --size 10000 ssd_block
– Creates a ~10 GB RBD in the ssd pool (the size is given in MB)
> rbd map ssd/ssd_block (on the client)
– It should show up as /dev/rbd/<pool-name>/<block-name>
> Then you can use it like any other block device (see the example below)
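For example, a minimal sketch of using the mapped image as an ordinary block device; the mount point is hypothetical:
> # put a filesystem on the mapped RBD and mount it
> mkfs.ext4 /dev/rbd/ssd/ssd_block
> mkdir -p /mnt/ssd_block
> mount /dev/rbd/ssd/ssd_block /mnt/ssd_block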
27. Files prepared for this demo
Kiwi image: SLE12 + SES2
> https://files.secureserver.net/0fCLysbi0hb8cr
Git Salt Stack repo
> https://github.com/AvengerMoJo/Ceph-Saltstack
28. USB Install, then Prepare the Salt Minions
> # accept all node* keys from the minions
> salt-key -a 'node*'
> # copy all the modules and _systemd files into /srv/salt/, then sync
> sudo salt 'node*' saltutil.sync_all
> # benchmark (get baseline disk I/O numbers)
> sudo salt "node*" ceph_sles.bench_disk /dev/sda /dev/sdb /dev/sdc /dev/sdd
> # get all the disk information
> sudo salt "node*" ceph_sles.disk_info
> # get all the networking information
> sudo salt -L "salt-master node1 node2 node3 node4 node5" ceph_sles.bench_network salt-master node1 node2 node3 node4 node5
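As a sanity check that every minion responds before running the modules, a standard Salt command (not part of the original demo):
> # all nodes should answer True
> sudo salt 'node*' test.ping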
29. Prepare and Create the Cluster MONs
> # create the salt-master's SSH key
> sudo salt "salt-master" ceph_sles.keygen
> # send the key over to the nodes
> sudo salt "salt-master" ceph_sles.send_key node1 node2 node3
> # create a new cluster with the new MONs
> sudo salt "salt-master" ceph_sles.new_mon node1 node2 node3
> # send the cluster conf and keys over to the nodes
> sudo salt "salt-master" ceph_sles.push_conf salt-master node1 node2 node3
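The monitors should now form a quorum; a hedged way to verify from the salt-master, assuming the admin keyring is present there:
> # check cluster health and monitor quorum
> sudo salt "salt-master" cmd.run "ceph -s"
> sudo salt "salt-master" cmd.run "ceph quorum_status --format json-pretty"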
30. Create the Journals and OSDs
> # create the OSD journal partitions
> # (this could be combined with disk_info to auto-assign the SSD)
> sudo salt -L "node1 node2 node3" ceph_sles.prep_osd_journal /dev/sda 40G
> # clean all the OSD disk partitions first
> sudo salt 'salt-master' ceph_sles.clean_disk_partition "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
> # prepare the list of OSDs for the cluster
> sudo salt "salt-master" ceph_sles.prep_osd "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
31. Update the Crushmap and Run a rados Benchmark
> # crushmap update for the benchmark
> sudo salt "salt-master" ceph_sles.crushmap_update_disktype_ssd_hdd node1 node2 node3
> # rados bench
> sudo salt "salt-master" ceph_sles.bench_rados
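Under the hood this drives the standard rados bench tool; a minimal direct sketch, assuming the default rbd pool exists on this cluster:
> # 60-second write benchmark, keeping the objects for the read test
> rados bench -p rbd 60 write --no-cleanup
> # sequential read benchmark against the objects written above
> rados bench -p rbd 60 seq
> # remove the benchmark objects afterwards
> rados -p rbd cleanup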
32. Cache Tier Setup
> sudo salt "salt-master" ceph_sles.create_pool samba_ssd_pool 100 2 ssd_replicated
> sudo salt "salt-master" ceph_sles.create_pool samba_hdd_pool 100 3 hdd_replicated
> ceph osd tier add samba_hdd_pool samba_ssd_pool
> ceph osd tier cache-mode samba_ssd_pool writeback
> ceph osd tier set-overlay samba_hdd_pool samba_ssd_pool
> ceph osd pool set samba_ssd_pool hit_set_type bloom
> ceph osd pool set samba_ssd_pool hit_set_count 2
> ceph osd pool set samba_ssd_pool hit_set_period 300
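A writeback cache tier also needs flush/evict targets so it does not fill up; a hedged sketch with illustrative values (the byte cap and ratios are assumptions, not from the demo):
> # cap the cache pool and set dirty/full targets so objects get flushed to the HDD tier
> ceph osd pool set samba_ssd_pool target_max_bytes 100000000000
> ceph osd pool set samba_ssd_pool cache_target_dirty_ratio 0.4
> ceph osd pool set samba_ssd_pool cache_target_full_ratio 0.8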