EFFICIENT SOURCE SELECTION FOR SPARQL ENDPOINT QUERY FEDERATION
Muhammad Saleem
Faculty of Mathematics and Computer Science, University of Leipzig
PhD Defense, May 13th, 2016

1. SUPERVISORS
Prof. Dr.-Ing. habil. Klaus-Peter Fähnrich, University of Leipzig
Dr. Axel-Cyrille Ngonga Ngomo, University of Leipzig
4. INTRODUCTION: EXAMPLE
Return the party membership and news pages about all US presidents.
[Figure: two data sources — one holding US presidents and their party memberships, the other holding US presidents and news pages]
Computing the results requires data from both sources.
12. PROBLEM STATEMENT
Overestimation of sources is expensive:
- Extra intermediate results
- Extra network traffic
- Increased overall runtime

Research questions:
1. How to perform join-aware source selection with ensured result-set completeness?
2. How to test the efficiency of source selection? Comprehensive benchmarks should answer: Which system is better, and why? What are the limitations of a given system? How can one improve a given system?
3. How to design a comprehensive federated SPARQL benchmark as well as a triple stores benchmark?
14. PROBLEM STATEMENT AND CONTRIBUTIONS
[Figure: federation engine architecture (parsing/rewriting, source selection, federator optimizer, integrator) over RDF sources S1-S4, annotated with the contributions: HiBISCuS, DAW, SAFE, and TopFed for source selection; QUETSAL; and LargeRDFBench plus a state-of-the-art evaluation for benchmarking]

Research Questions
1. How to perform join-aware source selection with ensured result-set completeness?
2. How to perform duplicate-aware source selection?
3. How to perform policy-aware source selection?
4. How to perform data distribution-aware source selection?
5. How to design a comprehensive federated SPARQL benchmark as well as a triple stores benchmark?
17. HIBISCUS: HYPERGRAPH-BASED SOURCE SELECTION
- Models SPARQL queries as hypergraphs
- Makes use of URI authorities in its index
- Performs join-aware, triple pattern-wise source selection
- Can be combined with any existing SPARQL endpoint federation system

Muhammad Saleem and Axel-Cyrille Ngonga Ngomo: HiBISCuS: Hypergraph-Based Source Selection for SPARQL Endpoint Federation (ESWC, 2014)
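The pruning idea can be illustrated with a much-simplified sketch (Python; the source names, summaries, and matching rule below are illustrative stand-ins, not the actual HiBISCuS index): each source is summarised by the authorities of the subject and object URIs it contains, and a source is kept for a triple pattern only if every bound URI's authority occurs in that summary.

```python
from urllib.parse import urlparse

# Hypothetical per-source capability summaries: the authorities of subject
# and object URIs each endpoint contains (illustrative, not the real index).
SUMMARIES = {
    "dbpedia": {"subj": {"dbpedia.org"}, "obj": {"dbpedia.org"}},
    "nytimes": {"subj": {"data.nytimes.com"}, "obj": {"dbpedia.org"}},
}

def auth(term: str):
    """Authority of a URI term; None for variables like '?president'."""
    return urlparse(term).netloc if term.startswith("http") else None

def select_sources(subject: str, obj: str):
    """Keep a source only if each bound URI's authority is in its summary."""
    sa, oa = auth(subject), auth(obj)
    picked = []
    for name, summary in SUMMARIES.items():
        if sa is not None and sa not in summary["subj"]:
            continue  # source cannot hold triples with this subject authority
        if oa is not None and oa not in summary["obj"]:
            continue
        picked.append(name)
    return picked

# A pattern with a New York Times subject is routed to that source only:
print(select_sources("http://data.nytimes.com/abc", "?news"))  # ['nytimes']
# A fully unbound pattern cannot be pruned this way:
print(select_sources("?s", "?o"))  # ['dbpedia', 'nytimes']
```

Join-awareness then goes further: authorities of terms shared between joined triple patterns are intersected, discarding sources that cannot contribute to the join result.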
18. HIBISCUS: HYPERGRAPH-BASED SOURCE SELECTION
Makes use of URI authorities: for example, http://dbpedia.org/ontology/party decomposes into scheme (http), authority (dbpedia.org), and path (/ontology/party).
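The decomposition can be reproduced with Python's standard library:

```python
from urllib.parse import urlparse

def authority(uri: str) -> str:
    """Return the authority component of a URI (the part HiBISCuS indexes)."""
    return urlparse(uri).netloc

# The slide's example URI decomposes into scheme / authority / path:
uri = "http://dbpedia.org/ontology/party"
print(urlparse(uri).scheme)  # http
print(authority(uri))        # dbpedia.org
print(urlparse(uri).path)    # /ontology/party
```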
33. DAW: DUPLICATE-AWARE SOURCE SELECTION
Triple pattern-wise source selection and skipping, with a minimum number of new triples (threshold) of 20:
- Retrieved results for TP1 (?uri <p1> ?v1): relevant sources S1, S2, S3
- Retrieved results for TP2 (?uri <p2> ?v2): relevant sources S1, S2, S4
- Total triple pattern-wise selected sources = 4
- Total triple pattern-wise skipped sources = 2 (they would contribute fewer than 20 new triples)
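The skipping step above can be sketched as follows (Python; the source names and triple-count estimates are hypothetical): for each triple pattern, sources whose estimated number of new, non-duplicate triples falls below the threshold are skipped.

```python
def daw_select(estimates, threshold):
    """Triple pattern-wise skipping: given (source, estimated_new_triples)
    pairs ranked by contribution, skip every source that is expected to add
    fewer than `threshold` new (non-duplicate) triples."""
    selected, skipped = [], []
    for source, new_triples in estimates:
        (selected if new_triples >= threshold else skipped).append(source)
    return selected, skipped

# Hypothetical estimates for one triple pattern, threshold 20 as on the slide:
selected, skipped = daw_select([("S1", 500), ("S2", 120), ("S3", 5)], 20)
print(selected, skipped)  # ['S1', 'S2'] ['S3']
```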
34. DAW: DUPLICATE-AWARE SOURCE SELECTION
- Combines min-wise independent permutations (MIPs) with compact data summaries
- Uses average selectivity values for bound subjects and objects
- Can be combined with any existing SPARQL endpoint federation system
- Can be used for partial result retrieval

Saleem et al.: DAW: Duplicate-AWare Federated Query Processing over the Web of Data (ISWC, 2013)
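A toy illustration of the MIPs idea (a minimal sketch, not DAW's actual summaries): k random hash permutations are applied to each source's triple set, and only the minimum value under each permutation is kept; the fraction of positions where two sources' minima agree estimates their overlap (Jaccard similarity), from which the number of duplicate triples can be derived.

```python
import random

def mip_vector(items, k=100, seed=42):
    """Min-wise independent permutations: for k random linear hash
    permutations, keep only the minimum hashed value of the set. Two sets'
    vectors agree in a position with probability = their Jaccard similarity."""
    rnd = random.Random(seed)
    prime = 2**31 - 1
    params = [(rnd.randrange(1, prime), rnd.randrange(prime)) for _ in range(k)]
    return [min((a * hash(x) + b) % prime for x in items) for a, b in params]

def estimate_overlap(v1, v2):
    """Fraction of agreeing minima estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(v1, v2)) / len(v1)

source_a = {f"triple-{i}" for i in range(100)}
source_b = {f"triple-{i}" for i in range(50, 150)}  # Jaccard(a, b) = 1/3
sim = estimate_overlap(mip_vector(source_a), mip_vector(source_b))
print(round(sim, 2))  # close to 0.33 (exact value varies with string hashing)
```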
36. FEDX EXTENSION WITH DAW
[Chart: per-query execution time (sec), FedX vs. FedX+DAW, over the Diseasome, Publication, Geo Data, and Movie query sets]

Overall performance evaluation (average query execution time in seconds; gain is the improvement of the DAW extension over plain FedX):

              Diseasome   Publication   Geo Data   Movie   Overall
FedX          2.44        1.48          4.60       1.74    2.44
FedX+DAW      1.98        1.67          3.92       1.61    2.20
Gain (%)      18.79       -12.38        14.71      7.59    9.76
37. SPLENDID EXTENSION WITH DAW
[Chart: per-query execution time (sec), SPLENDID vs. SPLENDID+DAW, over the Diseasome, Publication, Geo Data, and Movie query sets]

Overall performance evaluation (average query execution time in seconds; gain is the improvement of the DAW extension over plain SPLENDID):

              Diseasome   Publication   Geo Data   Movie   Overall
SPLENDID      3.78        2.18          7.27       1.90    3.71
SPLENDID+DAW  3.04        2.37          6.22       1.688   3.30
Gain (%)      19.48       -8.94         14.40      11.16   11.11
38. DARQ EXTENSION WITH DAW
[Chart: per-query execution time (sec), DARQ vs. DARQ+DAW, over the Diseasome, Publication, Geo Data, and Movie query sets]

Overall performance evaluation (average query execution time in seconds; gain is the improvement of the DAW extension over plain DARQ):

              Diseasome   Publication   Geo Data   Movie   Overall
DARQ          8.27        5.26          23.44      1.96    9.59
DARQ+DAW      6.34        4.94          19.62      1.688   8.01
Gain (%)      23.34       6.14          16.31      13.88   16.46
40. SAFE: POLICY-AWARE SOURCE SELECTION
Example: return the number of patients that have been administered the drug Insulin and exhibit BMI > 25, with Hypertension and Diabetes as adverse events — federated over endpoints in Switzerland, Cyprus, and Greece.

Yasar et al.: SAFE: Policy Aware SPARQL Query Federation Over RDF Data Cubes
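A minimal sketch of what policy-aware selection means (the policy model and endpoint names below are invented for illustration and are not SAFE's actual mechanism): endpoints whose access policy does not grant the requester the graphs the query needs are dropped before federation, so restricted data is never even asked for.

```python
# Invented per-endpoint policies: the named graphs a given requester may
# access at each endpoint (illustrative only).
POLICIES = {
    "switzerland": {"public", "clinical"},
    "cyprus": {"public"},
    "greece": set(),  # grants this requester nothing
}

def policy_aware_select(required_graphs):
    """Keep only endpoints whose policy grants all graphs the query needs."""
    return [endpoint for endpoint, allowed in POLICIES.items()
            if required_graphs <= allowed]

# A query over clinical data may only be sent to the Swiss endpoint:
print(policy_aware_select({"clinical"}))  # ['switzerland']
print(policy_aware_select({"public"}))    # ['switzerland', 'cyprus']
```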
46. TOPFED: DATA DISTRIBUTION-AWARE SOURCE SELECTION
- Intelligent data distribution combined with efficient source selection to handle federation over Big Data
- Federation over 20.4 billion triples of Linked TCGA data

Saleem et al.: TopFed: TCGA Tailored Federated Query Processing and Linking to LOD
47. TOPFED
[Figure: Linked TCGA tumour data distributed over a hierarchy of SPARQL endpoints; tumours 1-33 are partitioned across colour-coded leaf endpoints, one colour per data category below]

Property groups:
A = {chromosome, result, bcr_patient_barcode}   G = {start, stop}
C = {CNV, SNP, E-Gene, E-Protein, miRNA, Clinical}   D = {seg_mean, rpmmm, scaled_est, p_exp_val}
F = {Expression-Exon}   E = {RPKM}
B = {DNA-Methylation}   M = {beta_value, position}

Source selection conditions per query triple pattern t(s, p, o) ∈ T:
C-1 (CNV, SNP, E-Gene, miRNA, E-Protein, Clinical; blue) = {{p ∈ {D ∪ A ∪ G} ∨ {p = rdf:type ∧ o ∈ C}} ∧ {{S-Join(p, D ∪ C) ∨ P-Join(p, D ∪ C)} ∨ {!S-Join(p, M ∪ B ∪ E ∪ F) ∧ !P-Join(p, M ∪ B ∪ E ∪ F)}}}
C-2 (Exon-Expression; pink) = {{p ∈ {E ∪ A ∪ G} ∨ {p = rdf:type ∧ o ∈ F}} ∧ {{S-Join(p, E ∪ F) ∨ P-Join(p, E ∪ F)} ∨ {!S-Join(p, M ∪ B ∪ D ∪ C) ∧ !P-Join(p, M ∪ B ∪ D ∪ C)}}}
C-3 (Methylation; green) = {{p ∈ {M ∪ A} ∨ {p = rdf:type ∧ o ∈ B}} ∧ {{S-Join(p, M ∪ B) ∨ P-Join(p, M ∪ B)} ∨ {!S-Join(p, E ∪ F ∪ D ∪ C) ∧ !P-Join(p, E ∪ F ∪ D ∪ C)}}}

Routing, for each query triple t(s, p, o) ∈ T:
IF the tumour lookup is successful, forward to the corresponding leaf;
ELSE broadcast to every endpoint.
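The routing pseudocode above can be sketched as follows (Python; the lookup table, subject URIs, and endpoint names are hypothetical placeholders, not TopFed's real index):

```python
# Hypothetical locality index: each tumour's data lives at one leaf endpoint.
TUMOUR_LOCATION = {"TCGA-AB-1234": "endpoint-3", "TCGA-CD-5678": "endpoint-7"}
ALL_ENDPOINTS = ["endpoint-1", "endpoint-3", "endpoint-7"]

def route(triple_subject):
    """IF the tumour lookup is successful, forward the triple pattern to the
    corresponding leaf; ELSE broadcast to every endpoint (slide pseudocode)."""
    for tumour, endpoint in TUMOUR_LOCATION.items():
        if tumour in triple_subject:
            return [endpoint]
    return list(ALL_ENDPOINTS)

# A subject URI containing a known tumour id is routed to a single leaf:
print(route("http://tcga.example.org/TCGA-AB-1234/cnv/1"))  # ['endpoint-3']
# An unbound subject must be broadcast:
print(route("?patient"))  # ['endpoint-1', 'endpoint-3', 'endpoint-7']
```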
49. TOPFED VS. FEDX
- TopFed outperforms FedX significantly on 90% of the queries
- On average, TopFed's query runtime is about one third of FedX's

[Chart: query execution time (ms, log scale) for FedX (cached) vs. TopFed on queries 1-10 and on average]
51. SPARQL BENCHMARKS
Non-federated benchmarks:
- Centralized repositories
- Queries span a single dataset
- Real or synthetic
- Examples: LUBM, SP2Bench, BSBM, WatDiv, DBPSB, FEASIBLE
Federated benchmarks:
- Multiple interlinked datasets
- Queries span multiple datasets
- Real or synthetic
- Examples: FedBench, LargeRDFBench
52. FEASIBLE: BENCHMARK GENERATION FRAMEWORK
- Dataset cleaning
- Feature vectors and normalization
- Selection of exemplars
- Selection of benchmark queries

Saleem et al.: FEASIBLE: A Feature-Based SPARQL Benchmark Generation Framework
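The pipeline can be illustrated with a simplified sketch (Python; the feature set and the greedy farthest-point selection rule are illustrative stand-ins for FEASIBLE's exemplar-based method): each logged query becomes a feature vector, features are min-max normalized, and a small set of mutually distant queries is chosen as the benchmark.

```python
def normalize(vectors):
    """Min-max normalize each feature dimension to [0, 1]."""
    dims = list(zip(*vectors))
    lo, hi = [min(d) for d in dims], [max(d) for d in dims]
    return [tuple((v - l) / (h - l) if h > l else 0.0
                  for v, l, h in zip(vec, lo, hi)) for vec in vectors]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_benchmark(vectors, k):
    """Greedily pick k mutually distant queries (farthest-point heuristic)."""
    picked = [0]  # seed with the first query
    while len(picked) < k:
        rest = [i for i in range(len(vectors)) if i not in picked]
        picked.append(max(rest, key=lambda i: min(dist(vectors[i], vectors[j])
                                                  for j in picked)))
    return sorted(picked)

# Hypothetical (triple patterns, result size, runtime ms) for 5 logged queries:
feats = normalize([(2, 10, 5), (3, 12, 6), (12, 9000, 800),
                   (2, 11, 5), (8, 400, 90)])
print(select_benchmark(feats, 3))  # [0, 2, 4]: the extremes plus a mid query
```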
75. RANK-WISE RANKING OF TRIPLE STORES
(All values are percentages.)
- No system is the sole winner or loser at any particular rank
- Virtuoso mostly occupies the higher ranks, i.e., ranks 1 and 2 (68.29%)
- Fuseki mostly occupies the middle ranks, i.e., ranks 2 and 3 (65.14%)
- OWLIM-SE is usually on the slower side, i.e., ranks 3 and 4 (60.86%)
- Sesame is either fast or slow: rank 1 (31.71% of the queries) or rank 4 (23.14%)
77. LARGERDFBENCH
32 queries:
- 14 simple
- 10 complex
- 8 large data
14 interlinked datasets

[Figure: the benchmark datasets — LinkedMDB, DBpedia, New York Times, Linked TCGA-M, Linked TCGA-E, Linked TCGA-A, Affymetrix, SW Dog Food, KEGG, Drugbank, Jamendo, ChEBI, and GeoNames — grouped into Life Sciences, Cross Domain, and Large Data, interlinked via predicates such as owl:sameAs, basedNear, x-geneid, keggCompoundId, bcr_patient_barcode, and country/ethnicity/race; link counts per link set range from 1.3k to 251.3k]

Saleem et al.: LargeRDFBench: A Billion Triples Benchmark for SPARQL Endpoint Federation
80. LARGERDFBENCH QUERY PROPERTIES
14 simple queries:
- 2-7 triple patterns
- Subset of SPARQL clauses
- Query execution time around 2 seconds on average
10 complex queries:
- 8-13 triple patterns
- Use more SPARQL clauses
- Query execution time up to 10 minutes
8 large-data queries:
- Minimum 80,459 results
- Large intermediate results
- Query execution time in hours
88. CONCLUSIONS
[Figure: federation engine architecture (parsing/rewriting, source selection, federator optimizer, integrator) over RDF sources S1-S4]

Better source selection leads to an overall improvement of runtime performance:
- HiBISCuS: 24.61% - 92.22%
- DAW: 9.79% - 16.46%
- SAFE: 84%
- TopFed: 68%

Better benchmarking allows for informed selection of RDF stores:
- 55% less error than DBPSB
- Column stores (Virtuoso) are not always best
89. CONCLUSIONS
LargeRDFBench addresses drawbacks of current federated benchmarks:
- SPARQL features
- Size of intermediate results
- Total runtime of queries
90. CONCLUSIONS
Taken together, the contributions allow for:
- Informed selection of triple stores and of federation engines
- Better source selection
- Efficient query planning
- Reduction of intermediate results
- Time-efficient query execution
91. FUTURE DIRECTIONS
- Top-k relevant source selection
- Cost-based query planning
- Caching of intermediate results
- Intelligent data distribution
- Provenance and runtime estimation
- Federated benchmarks generated from query logs
- Making synthetic benchmarks more like real ones
92. AWARDS
1. Best Paper Award at the Conference on Semantics in Healthcare and Life Sciences (CSHALS 2014) for the paper "GenomeSnip: Fragmenting the Genomic Wheel to Augment Discovery in Cancer Research"
2. Semantic Web Challenge (Big Data Track) winner at ISWC 2013 with the paper "Fostering Serendipity through Big Linked Data"
3. I-CHALLENGE (Linked Data Cup) winner at I-Semantics 2013 with the paper "Linked Cancer Genome Atlas Database"
93. PUBLICATIONS AND CITATIONS
Total publications: 25
- 5 journal articles (impact factors 2.55, 2.55, 2.26, 0.44)
- 10 conference papers (5 CORE A-ranked)
- 4 workshop papers
- 2 tutorials (CORE A-ranked)
- 1 technical report
- 3 demos (CORE A-ranked)
95. PUBLICATIONS
2016
1. Muhammad Saleem, Ricardo Usbeck, Michael Röder, and Axel-Cyrille Ngonga Ngomo: SPARQL Querying Benchmarks. Tutorial at the International Semantic Web Conference (ISWC), 2016.
2. Ethem Cem Ozkan, Muhammad Saleem, Erdogan Dogdu, and Axel-Cyrille Ngonga Ngomo: UPSP: Unique Predicate-based Source Selection for SPARQL Endpoint Federation. PROFILES Workshop at the Extended Semantic Web Conference (ESWC), 2016.
96. PUBLICATIONS
2015
1. Muhammad Saleem, Yasar Khan, Ali Hasnain, Ivan Ermilov, and Axel-Cyrille Ngonga Ngomo: A Fine-Grained Evaluation of SPARQL Endpoint Federation Systems. Semantic Web Journal, 2015.
2. Muhammad Saleem, Qaiser Mehmood, and Axel-Cyrille Ngonga Ngomo: FEASIBLE: A Feature-Based SPARQL Benchmark Generation Framework. International Semantic Web Conference (ISWC), 2015.
3. Muhammad Saleem, Muhammad Intizar Ali, Ruben Verborgh, Qaiser Mehmood, and Axel-Cyrille Ngonga Ngomo: LSQ: The Linked SPARQL Queries Dataset. International Semantic Web Conference (ISWC), 2015.
4. Muhammad Saleem, Muhammad Intizar Ali, Ruben Verborgh, and Axel-Cyrille Ngonga Ngomo: Federated Query Processing over Linked Data. Tutorial at the International Semantic Web Conference (ISWC), 2015.
5. Muhammad Saleem, Intizar Ali, Aidan Hogan, Qaiser Mehmood, and Axel-Cyrille Ngonga Ngomo: LSQ: The Linked SPARQL Queries Dataset. LSQ Technical Report, 2015.
6. Muhammad Saleem, Qaiser Mehmood, and Axel-Cyrille Ngonga Ngomo: Automatic SPARQL Benchmark Generation Using FEASIBLE. Demo at the International Semantic Web Conference (ISWC), 2015.
7. Muhammad Saleem, Muhammad Intizar Ali, Aidan Hogan, Qaiser Mehmood, and Axel-Cyrille Ngonga Ngomo: The LSQ Dataset: Querying for Queries. Demo at the International Semantic Web Conference (ISWC), 2015.
8. Syeda Sana e Zainab, Ali Hasnain, Muhammad Saleem, Qaiser Mehmood, Durre Zehra, and Stefan Decker: SPARQL Query Formulation and Execution using FedViz. Demo at the International Semantic Web Conference (ISWC), 2015.
9. Syeda Sana e Zainab, Ali Hasnain, Muhammad Saleem, Qaiser Mehmood, Durre Zehra, and Stefan Decker: FedViz: A Visual Interface for SPARQL Queries Formulation and Execution. VOILA Workshop, 2015.
97. PUBLICATIONS
2014
1. Yasar Khan, Muhammad Saleem, Aftab Iqbal, Muntazir Mehdi, Aidan Hogan, Panagiotis Hasapis, Axel-Cyrille Ngonga Ngomo, Stefan Decker, and Ratnesh Sahay: SAFE: Policy Aware SPARQL Query Federation Over RDF Data Cubes. Semantic Web Applications and Tools for Life Sciences (SWAT4LS), 2014.
2. Nur Aini Rakhmawati, Muhammad Saleem, Sarasi Lalithsena, and Stefan Decker: QFed: Query Set For Federated SPARQL Query Benchmark. 16th International Conference on Information Integration and Web-based Applications & Services (iiWAS), 2014.
3. Lorenz Bühmann, Ricardo Usbeck, Axel-Cyrille Ngonga Ngomo, Muhammad Saleem, Andreas Both, Valter Crescenzi, Paolo Merialdo, and Disheng Qiu: Web-Scale Extension of RDF Knowledge Bases from Templated Websites. International Semantic Web Conference (ISWC), 2014.
4. Muhammad Saleem and Axel-Cyrille Ngonga Ngomo: HiBISCuS: Hypergraph-Based Source Selection for SPARQL Endpoint Federation. Extended Semantic Web Conference (ESWC), 2014.
5. Maulik R. Kamdar, Aftab Iqbal, Muhammad Saleem, Helena F. Deus, and Stefan Decker: GenomeSnip: Fragmenting the Genomic Wheel to Augment Discovery in Cancer Research. CSHALS, 2014. (Best Paper Award)
6. Muhammad Saleem, Shanmukha Sampath, Axel-Cyrille Ngonga Ngomo, Aftab Iqbal, Jonas Almeida, and Helena Deus: TopFed: TCGA Tailored Federated Query Processing and Linking to LOD. Journal of Biomedical Semantics, 2014.
7. Muhammad Saleem, Maulik R. Kamdar, Aftab Iqbal, Shanmukha Sampath, Helena F. Deus, and Axel-Cyrille Ngonga Ngomo: Big Linked Cancer Data: Integrating Linked TCGA and PubMed. Journal of Web Semantics, 2014.
98. PUBLICATIONS
2009-2013
1. Muhammad Saleem, Maulik R. Kamdar, Aftab Iqbal, Shanmukha Sampath, Helena F. Deus, and Axel-Cyrille Ngonga Ngomo: Fostering Serendipity through Big Linked Data. Semantic Web Challenge at the International Semantic Web Conference (ISWC), 2013. (Semantic Web Challenge, Big Data Track, Winner)
2. Muhammad Saleem, Shanmukha S. Padmanabhuni, Axel-Cyrille Ngonga Ngomo, Jonas S. Almeida, Stefan Decker, and Helena Deus: Linked Cancer Genome Atlas Database. Linked Data Cup, I-Semantics, 2013. (I-CHALLENGE / Linked Data Cup Winner)
3. Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, Josiane Xavier Parreira, Helena F. Deus, and Manfred Hauswirth: DAW: Duplicate-AWare Federated Query Processing over the Web of Data. International Semantic Web Conference (ISWC), 2013.
4. Muhammad Saleem, Ali Zahir, Yasir Ismail, and Bilal Saeed: Enhanced Generic Information Services Using Mobile Messaging. Grid and Pervasive Computing (GPC), 2010.
5. Muhammad Saleem and Kyung-Goo Doh: Generic Information System Using SMS Gateway. The Fourth International Conference on Computer Sciences and Convergence Information Technology (ICCIT), 2009.
6. Muhammad Saleem, Rasheed Hussain, Yasir Ismail, and Shaikh Mohsin: Cost Effective Software Engineering using Program Slicing Techniques. The 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human (ICIS), 2009.
108. TRIPLE STORE BENCHMARKS
Synthetic benchmarks:
- Use synthetic queries and/or data
- Suitable for testing scalability
- Often fail to reflect real datasets
- Examples: LUBM, SP2Bench, BSBM, WatDiv
Query-log benchmarks:
- Use real queries from query logs
- Can be closer to reality
- Scalability can be tested
- Examples: DBPSB, FEASIBLE