Build our own affordable cloud storage
openSUSE Asia Summit Dec, 2015
CLOUD STORAGE INTRO
Software Defined Storage
Storage Trends
> Data size and capacity
– Multimedia content
– Large demo binaries, detailed graphics / photos, audio and video, etc.
> Data functional needs
– Different business requirements
– More data-driven processes
– More applications working with data
– More e-commerce
> Data backup for longer periods
– Legislation and compliance
– Business analysis
Storage Usage
> Tier 0 – Ultra High Performance: 1-3%
> Tier 1 – High-value, OLTP, Revenue Generating: 15-20%
> Tier 2 – Backup/Recovery, Reference Data, Bulk Data: 20-25%
> Tier 3 – Object, Archive, Compliance Archive, Long-term Retention: 50-60%
Software Defined Storage
> High extensibility:
– Distributed over multiple nodes in a cluster
> High availability:
– No single point of failure
> High flexibility:
– API, block device, and cloud-supported architecture
> Pure software-defined architecture
> Self-monitoring and self-repairing
Sample Cluster
Why use Cloud Storage?
> Very high ROI compared to traditional hardware storage vendors
> Cloud-ready and S3 supported
> Thin provisioning
> Remote replication
> Cache tiering
> Erasure coding
> Self-managing and self-repairing with continuous monitoring
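The storage-overhead advantage of erasure coding over plain replication can be sketched with a small calculation (an illustration only; the 4+2 profile below is a hypothetical example, not a recommendation from these slides):

```python
def usable_fraction_replicated(replicas: int) -> float:
    """Fraction of raw capacity that is usable with n-way replication."""
    return 1 / replicas

def usable_fraction_erasure(k: int, m: int) -> float:
    """Fraction of raw capacity usable with a k+m erasure-coded pool:
    k data chunks plus m coding chunks are stored for every object."""
    return k / (k + m)

# 3-way replication keeps only a third of raw capacity usable,
# while a 4+2 erasure-coded profile keeps two thirds,
# at the cost of extra CPU for encode/decode on every write.
print(usable_fraction_replicated(3))  # ~0.33
print(usable_fraction_erasure(4, 2))  # ~0.67
```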
Other Key Features
> Supports clients on multiple OSes
> Data encryption on the physical disk (more CPU needed)
> On-the-fly data compression
> Essentially unlimited extensibility
> Copy-on-write (clones and snapshots)
> iSCSI support (VMs, thin clients, etc.)
WHO IS USING IT?
Show Cases of Cloud Storage
EMC, Hitachi, HP, IBM
NetApp, Dell, Pure Storage, Nexsan
Promise, Synology, QNAP, Infortrend, Proware, Sans Digital
Who is doing Software Defined Storage?
Who is using Software Defined Storage?
HOW MUCH?
What if we use Software Defined Storage?
HTPC AMD (A8-5545M)
Form factor:
– 29.9 mm x 107.6 mm x 114.4 mm
CPU:
– AMD A8-5545M (up to 2.7 GHz, 4 MB cache, 4 cores)
RAM:
– 8 GB DDR3-1600 Kingston (up to 16 GB SO-DIMM)
Storage:
– mS200 120 GB mSATA, read: 550 MB/s, write: 520 MB/s
LAN:
– Gigabit LAN (Realtek RTL8111G)
Connectivity:
– USB 3.0 x 4
Price:
– $6,980 (NTD)
Enclosure
Form factor:
– 215 (D) x 126 (W) x 166 (H) mm
Storage:
– Supports all brands of 3.5" SATA I/II/III hard disk drives; 4 x 8 TB = 32 TB
Connectivity:
– USB 3.0 or eSATA interface
Price:
– $3,000 (NTD)
AMD (A8-5545M)
> Node = 6,980
> 512 GB SSD + 4 TB + 6 TB + enclosure = 5,000 + 4,000 + 7,000 = 16,000
> 30 TB total: 16,000 x 3 = 48,000 for the disks, roughly 69,000 NTD including the three nodes
> That is about half of what 30 TB on the Amazon cloud costs over one year
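The cost arithmetic above can be reproduced in a few lines (all prices in NTD, taken from the slides; the Amazon comparison figure is not computed here):

```python
# Prices in NTD, from the slides above.
node_price = 6_980                          # HTPC AMD A8-5545M node
storage_set_price = 5_000 + 4_000 + 7_000   # 512 GB SSD + 4 TB + 6 TB (+ enclosure)
nodes = 3

disks_total = storage_set_price * nodes     # three storage sets, 30 TB of HDD in total
cluster_total = disks_total + node_price * nodes

print(storage_set_price)  # 16000 per storage set
print(disks_total)        # 48000 for the three sets
print(cluster_total)      # 68940 for the whole 3-node cluster
```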
QUICK 3 NODE SETUP
Demo: basic setup of a small cluster
CEPH Cluster Requirements
> At least 3 MONs
> At least 3 OSDs
– At least 15 GB per OSD
– Journals perform better on SSD
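The "at least 3 MONs" requirement comes from the monitors' majority quorum: the cluster stays available only while more than half of the monitors are up, so an odd count of at least three is the smallest useful HA setup. A quick sketch of the failure tolerance:

```python
def tolerable_mon_failures(n_mons: int) -> int:
    """Ceph monitors form a majority (Paxos) quorum, so at most
    floor((n - 1) / 2) of them can fail before the cluster stalls."""
    return (n_mons - 1) // 2

print(tolerable_mon_failures(1))  # 0 -> any MON failure stops the cluster
print(tolerable_mon_failures(3))  # 1 -> the minimum useful HA setup
print(tolerable_mon_failures(5))  # 2
```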
ceph-deploy
> A passwordless SSH key needs to be distributed to all cluster nodes
> Each node's ceph user needs sudo (root permission)
> ceph-deploy new <node1> <node2> <node3>
– Creates all the new MONs
> A ceph.conf file is created in the current directory for you to build your cluster configuration
> Each cluster node should have an identical ceph.conf file
OSD Prepare and Activate
> ceph-deploy osd prepare <node1>:/dev/sda5:/var/lib/ceph/osd/journal/osd-0
> ceph-deploy osd activate <node1>:/dev/sda5
Cluster Status
> ceph status
> ceph osd stat
> ceph osd dump
> ceph osd tree
> ceph mon stat
> ceph mon dump
> ceph quorum_status
> ceph osd lspools
Pool Management
> ceph osd lspools
> ceph osd pool create <pool-name> <pg-num> <pgp-num> <pool-type> <crush-ruleset-name>
> ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
> ceph osd pool set <pool-name> <key> <value>
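For choosing <pg-num>, a common rule of thumb from the Ceph placement-group guidance is to target roughly 100 PGs per OSD, divide by the replica count, and round up to a power of two. A sketch (the helper function is illustrative, not part of the Ceph CLI):

```python
def suggested_pg_num(osd_count: int, replicas: int, target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = osd_count * target_pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# 9 OSDs (3 nodes x 3 data disks, as in this demo) with 3-way replication:
print(suggested_pg_num(9, 3))  # 512
```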
CRUSH Map Management
> ceph osd getcrushmap -o crushmap.out
> crushtool -d crushmap.out -o decom_crushmap.txt
> cp decom_crushmap.txt update_decom_crushmap.txt
> crushtool -c update_decom_crushmap.txt -o update_crushmap.out
> ceph osd setcrushmap -i update_crushmap.out
> crushtool --test -i update_crushmap.out --show-choose-tries --rule 2 --num-rep=2
> crushtool --test -i update_crushmap.out --show-utilization --num-rep=2
> ceph osd crush show-tunables
RBD Management
> rbd --pool ssd create --size 10000 ssd_block
– Creates a ~10 GB RBD image in the ssd pool (--size is in MB)
> rbd map ssd/ssd_block (on the client)
– It should show up as /dev/rbd/<pool-name>/<block-name>
> Then you can use it like any other block device
SALT STACK + SES
Install, configure and benchmark
Files to prepare for this demo
Kiwi image: SLE12 + SES2
> https://files.secureserver.net/0fCLysbi0hb8cr
Salt Stack Git repo
> https://github.com/AvengerMoJo/Ceph-Saltstack
USB install, then prepare the Salt minions
> # accept all node* keys from the minions
> salt-key -a
> # copy all the modules and _systemd to /srv/salt/
> sudo salt 'node*' saltutil.sync_all
> # benchmark (get baseline disk I/O numbers)
> sudo salt "node*" ceph_sles.bench_disk /dev/sda /dev/sdb /dev/sdc /dev/sdd
> # get all the disk information
> sudo salt "node*" ceph_sles.disk_info
> # get all the networking information
> sudo salt -L "salt-master node1 node2 node3 node4 node5" ceph_sles.bench_network salt-master node1 node2 node3 node4 node5
Prepare and Create the Cluster MONs
> # create the salt-master SSH key
> sudo salt "salt-master" ceph_sles.keygen
> # send the key over to the nodes
> sudo salt "salt-master" ceph_sles.send_key node1 node2 node3
> # create a new cluster with the new MONs
> sudo salt "salt-master" ceph_sles.new_mon node1 node2 node3
> # send the cluster conf and keys over to the nodes
> sudo salt "salt-master" ceph_sles.push_conf salt-master node1 node2 node3
Create Journal and OSDs
> # create the OSD journal partition
> # we can combine the get_disk_info output for SSD auto-assignment
> sudo salt -L "node1 node2 node3" ceph_sles.prep_osd_journal /dev/sda 40G
> # clean all the OSD disk partitions first
> sudo salt 'salt-master' ceph_sles.clean_disk_partition "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
> # prep the list of OSDs for the cluster
> sudo salt "salt-master" ceph_sles.prep_osd "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
Update the CRUSH Map and run a rados benchmark
> # CRUSH map update for the benchmark
> sudo salt "salt-master" ceph_sles.crushmap_update_disktype_ssd_hdd node1 node2 node3
> # rados bench
> sudo salt "salt-master" ceph_sles.bench_rados
Cache Tier Setup
> sudo salt "salt-master" ceph_sles.create_pool samba_ssd_pool 100 2 ssd_replicated
> sudo salt "salt-master" ceph_sles.create_pool samba_hdd_pool 100 3 hdd_replicated
> ceph osd tier add samba_hdd_pool samba_ssd_pool
> ceph osd tier cache-mode samba_ssd_pool writeback
> ceph osd tier set-overlay samba_hdd_pool samba_ssd_pool
> ceph osd pool set samba_ssd_pool hit_set_type bloom
> ceph osd pool set samba_ssd_pool hit_set_count 2
> ceph osd pool set samba_ssd_pool hit_set_period 300
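With the hit-set settings above, the cache tier tracks object accesses in rotating bloom-filter hit sets: hit_set_count sets are retained, each covering hit_set_period seconds, so their product is roughly the window of access history the tiering agent can consult. A small sketch (the helper is illustrative, not a Ceph API):

```python
def hit_set_history_seconds(hit_set_count: int, hit_set_period: int) -> int:
    """Approximate access-history window kept by a cache tier:
    hit_set_count rotating hit sets, each covering hit_set_period seconds."""
    return hit_set_count * hit_set_period

# The settings above (2 sets x 300 s) keep about 10 minutes of history.
print(hit_set_history_seconds(2, 300))  # 600
```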
Block device demo
> rbd --pool samba_hdd_pool create --size 10000 samba_test
> sudo rbd --pool samba_ssd_pool ls
> sudo rbd --pool samba_ssd_pool map samba_test
> sudo mkfs.xfs /dev/rbd0
> sudo mount /dev/rbd0 /mnt/samba
> sudo systemctl restart smb.service
WHAT NEXT?
Email me alau@suse.com
Let me know what you want to hear next

Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 

Build affordable cloud storage with openSUSE and Ceph

  • 1. Build our own affordable cloud storage (openSUSE Asia Summit, Dec 2015)
  • 3. Storage Trend > Data Size and Capacity – Multimedia contents – Large demo binaries, detailed graphics/photos, audio and video, etc. > Functional data needs – Different business requirements – More data-driven processes – More applications with data – More e-commerce > Data backup for a longer period – Legislation and compliance – Business analysis
  • 4. Storage Usage Tier 0 Ultra High Performance Tier 1 High-value, OLTP, Revenue Generating Tier 2 Backup/Recovery, Reference Data, Bulk Data Tier 3 Object, Archive, Compliance Archive, Long-term Retention 1-3% 15-20% 20-25% 50-60%
  • 5. Software Defined Storage > High Extensibility: – Distributed over multiple nodes in a cluster > High Availability: – No single point of failure > High Flexibility: – API, block device and cloud-supported architecture > Pure software-defined architecture > Self-monitoring and self-repairing
  • 7. Why use Cloud Storage? > Very high ROI compared to traditional hardware storage vendors > Cloud-ready and S3 supported > Thin provisioning > Remote replication > Cache tiering > Erasure coding > Self-managing and self-repairing with continuous monitoring
  • 8. Other Key Features > Supports clients on multiple OSes > Data encryption on the physical disk (more CPU needed) > On-the-fly data compression > Essentially unlimited extensibility > Copy-on-write (clones and snapshots) > iSCSI support (VMs, thin clients, etc.)
  • 9. WHO IS USING IT? Showcases of Cloud Storage
  • 10. EMC, Hitachi, HP, IBM NetApp, Dell, Pure Storage, Nexsan Promise, Synology, QNAP, Infortrend, ProWare, Sans Digital
  • 11. Who is doing Software Defined Storage
  • 12. Who is using Software Defined Storage?
  • 13. HOW MUCH? What if we use Software Defined Storage?
  • 14. HTPC AMD (A8-5545M) Form factor: – 29.9 mm x 107.6 mm x 114.4 mm CPU: – AMD A8-5545M (up to 2.7 GHz / 4M cache, 4 cores) RAM: – 8G DDR3-1600 Kingston (up to 16G SO-DIMM) Storage: – mS200 120G m-SATA (read: 550 MB/s, write: 520 MB/s) LAN: – Gigabit LAN (RealTek RTL8111G) Connectivity: – USB 3.0 x 4 Price: – $6980 (NTD)
  • 15. Enclosure Form factor: – 215(D) x 126(W) x 166(H) mm Storage: – Supports all brands of 3.5" SATA I/II/III hard disk drives, 4 x 8TB = 32TB Connectivity: – USB 3.0 or eSATA interface Price: – $3000 (NTD)
  • 16. AMD (A8-5545M) > Node = 6980 > 512G SSD + 4TB + 6TB + enclosure = 5000 + 4000 + 7000 = 16000 > 30TB total = 16000 x 3 = 48000 > That is roughly half the cost of 30TB on Amazon's cloud over one year
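As a quick sanity check of the cost arithmetic above (all prices in NTD, taken straight from the slides; the 16,000 per-node storage figure is used as given):

```python
# Sanity-check the cluster cost arithmetic from the slides (prices in NTD).
node_price = 6980          # one HTPC AMD A8-5545M node, per slide 14
storage_per_node = 16000   # 512G SSD + 4TB + 6TB HDD + enclosure, per slide 16
nodes = 3

storage_total = storage_per_node * nodes          # storage for the 3-node cluster
cluster_total = storage_total + node_price * nodes  # including the compute nodes

print(f"storage for {nodes} nodes: {storage_total} NTD")
print(f"full cluster (incl. nodes): {cluster_total} NTD")
```

This is only the hardware side of the comparison; the "half of Amazon" claim also depends on the cloud pricing the speaker assumed at the time.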
  • 18. QUICK 3 NODE SETUP Demo basic setup of a small cluster
  • 19. CEPH Cluster Requirements > At least 3 MONs > At least 3 OSDs – At least 15GB per OSD – Journal is better placed on an SSD
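The three-MON minimum comes from the monitors' majority-quorum requirement: the cluster only stays up while a strict majority of MONs is alive, so an even or single-MON setup buys no fault tolerance. A small illustration (plain arithmetic, not Ceph code):

```python
def mon_quorum(n_mons: int) -> int:
    """Smallest number of monitors that forms a strict majority."""
    return n_mons // 2 + 1

def tolerable_failures(n_mons: int) -> int:
    """How many monitors may fail while the cluster keeps quorum."""
    return n_mons - mon_quorum(n_mons)

for n in (1, 3, 5):
    print(f"{n} MONs -> quorum {mon_quorum(n)}, tolerates {tolerable_failures(n)} failure(s)")
```

Note that 2 MONs tolerate no more failures than 1, which is why odd counts (3, 5, ...) are the usual recommendation.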
  • 20. ceph-deploy > Passwordless SSH keys need to be distributed to all cluster nodes > Each node's ceph user needs sudo for root permission > ceph-deploy new <node1> <node2> <node3> – Creates all the new MONs > A ceph.conf file will be created in the current directory for you to build your cluster configuration > Each cluster node should have an identical ceph.conf file
  • 21. OSD Prepare and Activate > ceph-deploy osd prepare <node1>:</dev/sda5>:</var/lib/ceph/osd/journal/osd-0> > ceph-deploy osd activate <node1>:</dev/sda5>
  • 22. Cluster Status > ceph status > ceph osd stat > ceph osd dump > ceph osd tree > ceph mon stat > ceph mon dump > ceph quorum_status > ceph osd lspools
  • 23. Pool Management > ceph osd lspools > ceph osd pool create <pool-name> <pg-num> <pgp-num> <pool-type> <crush-ruleset-name> > ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it > ceph osd pool set <pool-name> <key> <value>
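Choosing <pg-num> is the error-prone part of pool creation. A common rule of thumb (a starting point, not an official formula for every release) is (OSDs x 100) / replica count, rounded up to the next power of two:

```python
def suggest_pg_num(num_osds: int, replicas: int, target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count: (OSDs * target) / replicas, rounded up to a power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# The 3-node demo cluster: 3 nodes x 3 data disks = 9 OSDs, 3 replicas.
print(suggest_pg_num(9, 3))   # 300 raw -> 512
```

<pgp-num> is normally set to the same value as <pg-num>.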
  • 24. CRUSH Map Management > ceph osd getcrushmap -o crushmap.out > crushtool -d crushmap.out -o decom_crushmap.txt > cp decom_crushmap.txt update_decom_crushmap.txt > crushtool -c update_decom_crushmap.txt -o update_crushmap.out > ceph osd setcrushmap -i update_crushmap.out > crushtool --test -i update_crushmap.out --show-choose-tries --rule 2 --num-rep=2 > crushtool --test -i update_crushmap.out --show-utilization --num-rep=2 > ceph osd crush show-tunables
  • 25. RBD Management > rbd --pool ssd create --size 10000 ssd_block – Create a ~10GB RBD image in the ssd pool (--size is in MB) > rbd map ssd/ssd_block (on the client) – It should show up as /dev/rbd/<pool-name>/<block-name> > Then you can use it like any other block device
  • 26. SALT STACK + SES Install, configure and benchmark
  • 27. Files to prepare for this demo: Kiwi image SLE12 + SES2 > https://files.secureserver.net/0fCLysbi0hb8cr Git Salt Stack repo > https://github.com/AvengerMoJo/Ceph-Saltstack
  • 28. USB install, then prepare the Salt minions > #accept all node* keys from the minions > salt-key -a > #copy all the modules and _systemd to /srv/salt/ > sudo salt 'node*' saltutil.sync_all > #benchmark (get all the basic disk I/O numbers) > sudo salt "node*" ceph_sles.bench_disk /dev/sda /dev/sdb /dev/sdc /dev/sdd > #get all the disk information > sudo salt "node*" ceph_sles.disk_info > #get all the networking information > sudo salt -L "salt-master node1 node2 node3 node4 node5" ceph_sles.bench_network salt-master node1 node2 node3 node4 node5
  • 29. Prepare and Create the Cluster's MONs > #create the salt-master ssh key > sudo salt "salt-master" ceph_sles.keygen > #send the key over to the nodes > sudo salt "salt-master" ceph_sles.send_key node1 node2 node3 > #create a new cluster with the new MONs > sudo salt "salt-master" ceph_sles.new_mon node1 node2 node3 > #send the cluster conf and key over to the nodes > sudo salt "salt-master" ceph_sles.push_conf salt-master node1 node2 node3
  • 30. Create Journal and OSDs > #create the OSD journal partition > #we can combine this with get_disk_info for SSD auto-assignment > sudo salt -L "node1 node2 node3" ceph_sles.prep_osd_journal /dev/sda 40G > #clean all the OSD disk partitions first > sudo salt 'salt-master' ceph_sles.clean_disk_partition "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd" > #prep the list of OSDs for the cluster > sudo salt "salt-master" ceph_sles.prep_osd "node1,node2,node3" "/dev/sdb,/dev/sdc,/dev/sdd"
  • 31. Update the CRUSH Map and Run a rados Benchmark > #CRUSH map update for the benchmark > sudo salt "salt-master" ceph_sles.crushmap_update_disktype_ssd_hdd node1 node2 node3 > #rados bench > sudo salt "salt-master" ceph_sles.bench_rados
  • 32. Cache Tier setup > sudo salt "salt-master" ceph_sles.create_pool samba_ssd_pool 100 2 ssd_replicated > sudo salt "salt-master" ceph_sles.create_pool samba_hdd_pool 100 3 hdd_replicated > ceph osd tier add samba_hdd_pool samba_ssd_pool > ceph osd tier cache-mode samba_ssd_pool writeback > ceph osd tier set-overlay samba_hdd_pool samba_ssd_pool > ceph osd pool set samba_ssd_pool hit_set_type bloom > ceph osd pool set samba_ssd_pool hit_set_count 2 > ceph osd pool set samba_ssd_pool hit_set_period 300
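The cache-tier commands above always follow the same fixed pattern (attach, set mode, set overlay, configure the hit set), so they are easy to script. A small sketch that just generates the CLI sequence for a given cold/hot pool pair, using the pool names from the slide:

```python
def cache_tier_cmds(cold_pool: str, hot_pool: str,
                    hit_set_count: int = 2, hit_set_period: int = 300) -> list:
    """Generate the ceph CLI sequence for a writeback cache tier."""
    return [
        f"ceph osd tier add {cold_pool} {hot_pool}",
        f"ceph osd tier cache-mode {hot_pool} writeback",
        f"ceph osd tier set-overlay {cold_pool} {hot_pool}",
        f"ceph osd pool set {hot_pool} hit_set_type bloom",
        f"ceph osd pool set {hot_pool} hit_set_count {hit_set_count}",
        f"ceph osd pool set {hot_pool} hit_set_period {hit_set_period}",
    ]

for cmd in cache_tier_cmds("samba_hdd_pool", "samba_ssd_pool"):
    print(cmd)
```

The hit-set settings (bloom filter, count 2, period 300s) mirror the slide's values; tune them to your workload.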
  • 33. Block device demo > rbd --pool samba_hdd_pool create --size 10000 samba_test > sudo rbd --pool samba_ssd_pool ls > sudo rbd --pool samba_ssd_pool map samba_test > sudo mkfs.xfs /dev/rbd0 > sudo mount /dev/rbd0 /mnt/samba > sudo systemctl restart smb.service
  • 34. WHAT NEXT? Email me at alau@suse.com and let me know what you want to hear next