S-Cube Learning Package


Using Data Properties in Quality Prediction

    Universidad Politécnica de Madrid (UPM)
Learning Package Categorization


                           S-Cube



          WP-JRA-1.3: End-to-End Quality Provision
                 and SLA Conformance



           Quality Assurance and Quality Prediction



          Using Data Properties in Quality Prediction
Service Compositions and QoS

   Service compositions are an essential element of the
   Service-Oriented-Architecture (SOA):
     •   Putting together several “lower level” (specialized) services
     •   Leveraging low coupling and platform independence
     •   Achieving a more complex goal, e.g., a business process
     •   Often cross-organizational, i.e., using services from different providers


   Quality of Service (QoS) is often critically important for compositions:
     • Relates to composition level running time, computational cost,
         bandwidth, etc.
     • Depends on QoS of component services + composition internals +
         environment factors (such as system and network loads/failures)
     • Can affect business-level KPIs (key performance indicators)
     • Influences applicability and usability in a particular context
     • Constrained by a Service-Level Agreement (SLA)
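As a minimal illustration of how composition-level QoS depends on the QoS of component services plus composition structure, consider a hypothetical latency aggregation (the service names and numbers below are invented for illustration):

```python
# Hypothetical sketch: composition-level latency aggregated from component
# latencies, depending on how the composition invokes them.

def sequence_time(latencies):
    """Latency of services invoked in sequence adds up."""
    return sum(latencies)

def parallel_time(latencies):
    """Latency of services invoked in parallel is dominated by the slowest."""
    return max(latencies)

# An invented composition: two services in sequence, then two in parallel.
component_latency = {"check": 40, "reserve": 120, "bill": 80, "notify": 30}  # ms

total = (sequence_time([component_latency["check"], component_latency["reserve"]])
         + parallel_time([component_latency["bill"], component_latency["notify"]]))
print(total)  # 40 + 120 + max(80, 30) = 240
```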
Learning Package Overview



1   Problem Description


2   Using Data Properties in Quality Prediction


3   Discussion


4   Conclusions
1   Problem Description
Components Impacting Orchestration QoS

   Two groups of factors are usually encountered when analyzing QoS
   of a service composition:
     • External variations:
            Bandwidth, current load and throughput, network status
            Behavior of component services (e.g., meeting deadlines?)
            Usually not under designer’s control, they change dynamically
     • Composition structure:
            What does it do with incoming requests?
            Which other services are invoked and how?
            Partially under designer control, known in advance.

   Focusing on the latter: what kind of knowledge about composition behavior
   can we extract to predict composition QoS?
   Besides, can we make the prediction more precise by taking into account
   characteristics of the data fed to the composition?
Automotive Scenario Example
   Suppose you are a car part provider hired by a factory to purchase
   a series of parts for its assembly line.
     • You are given a list of parts and their quantities
     • The parts must come from the same maker (be mutually compatible)
     • You contact a number of part makers and reserve each of the parts in
       the required quantity.
     • If a maker cannot provide all parts, you cancel all reserved parts from
       that maker and move to another maker.
                                                          Maker 1
                        Factory         Provider            ...
                                                          Maker K



     • Time is of the essence: you want the process to take the least amount
       of time and to involve the smallest number of cancellations.
Automotive Scenario Example (contd.)
   In the service world, you publish to the Client (the factory) your
   Provider service that uses a series of Maker services.

                                                     part req.
                               Request              OK / not OK
                      Client             Provider                 Maker 1 ... Maker K
                                                      Cancel




     • The protocol requires reserving one car part type at a time. If a maker
        answers with “not OK,” the provider sends “Cancel” messages for all
        reserved parts and starts reserving from another maker.

   The total time is linked to the computation cost of serving the client.
      • It depends heavily (among other things) on the number of parts (in the
         input message) and the characteristics of individual makers.
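The dependence on the number of parts and on maker behavior can be sketched with a toy simulation of the reservation protocol (the maker stocks and part names below are invented for illustration):

```python
# Illustrative simulation of the provider's protocol: reserve parts one at a
# time; on a "not OK" answer, cancel everything reserved at that maker and
# move to the next one.

def serve_request(parts, makers):
    """makers: list of sets, each the parts one maker can supply.
    Returns (reservation messages sent, cancellation messages sent)."""
    reservations = cancellations = 0
    for stock in makers:
        reserved = []
        for part in parts:
            if part in stock:
                reservations += 1
                reserved.append(part)
            else:  # "not OK": cancel all parts reserved at this maker
                cancellations += len(reserved)
                reserved = []
                break
        else:
            return reservations, cancellations  # all parts reserved here

    return reservations, cancellations  # no maker could supply everything

parts = ["wheel", "brake", "seat"]
makers = [{"wheel", "brake"}, {"wheel", "brake", "seat"}]
print(serve_request(parts, makers))  # (5, 2): 2 reserved then cancelled, then 3
```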
Computation Cost of Service Networks
Computation Cost Example
                     TA(n)  = 2n + 3 + n·S(n)      (provider A)
                     TB1(n) = n + 1                (maker B1)
                     TB2(n) = 0.1n + 7             (maker B2)

   The input message is abstracted as the number of parts n.

   The time TA for the provider (A) depends on n and on the time S(n) of the
   chosen maker (B1 or B2).

   The structural part 2n + 3 in TA does not depend on the choice of maker.

   [Graph: QoS / computational cost vs. input data size n (4..10) for the two
   possible bindings.]

   The graph shows the QoS / computation cost for the two possible bindings:

      TA with B1:  TA(n) = 2n + 3 + n(n + 1)    = n² + 3n + 3
      TA with B2:  TA(n) = 2n + 3 + n(0.1n + 7) = 0.1n² + 9n + 3


Ivanović et al. (UPM, IMDEA)        Data-Aware QoS-Driven Adaptation        2010-07-07
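The cost functions above can be evaluated directly; a small sketch using the slide's TA, TB1, and TB2 (function names are mine):

```python
# Cost functions from the slide: T_A(n) = 2n + 3 + n*S(n), where S(n) is the
# computation cost of the maker chosen at binding time.

def t_b1(n):
    return n + 1              # maker B1

def t_b2(n):
    return 0.1 * n + 7        # maker B2

def t_a(n, maker):
    return 2 * n + 3 + n * maker(n)   # provider A bound to the given maker

# The two bindings cross over: B1 is cheaper for small n, B2 for large n.
print(t_a(4, t_b1), t_a(4, t_b2))    # B1 cheaper here
print(t_a(10, t_b1), t_a(10, t_b2))  # B2 cheaper here
```

Evaluating the closed forms from the slide (n² + 3n + 3 and 0.1n² + 9n + 3) gives the same values, which is a quick check that the composition step was done correctly.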
Computation Cost of Service Networks

    Computation cost information for B1 and B2 can be made available together
    with other service-related information (e.g., WSDL extensions):
      • Computation cost expressed as a function of some metrics of the input data.
      • Relationships between the size of input data and size of the output
         data (when they exist).
    A should in turn publish synthesized information (for reuse in other
    compositions involving A).
    Such abstract descriptions of computation cost do not compromise privacy
    of implementation details.
      • They act as higher-level contracts on composition behavior.

Problem
Inferring, representing and using the computation cost information
for service compositions for QoS prediction.
2   Using Data Properties in Quality Prediction
Overview of the Approach
                                                Feedback




                                                          Translation




                                                                                  Analysis
                 WSDL    Trans
                               la   tion
                                           Intermediate                 Logic                Analysis
                               la   tion   language                     program              results
                         Trans
                 BPEL


                                                Feedback



 1   Service / orchestration descriptions represented in intermediate language.
        • Provides independence from the source language (BPEL, Windows
           Workflow, etc.)
 2   Intermediate representation translated into (annotated) logic program.
        • Can capture just the relevant characteristics of the orchestration.
 3   Logic program analyzed for computation cost bounds.
 4   Analysis results useful for design-time quality prediction, predictive
     monitoring, matchmaking, etc.
Background: Alternatives in S-Cube

Other S-Cube Approaches Include:

   Detecting Possible SLA Violations Using Data Mining
      • Extracting information from event logs of successful and failed
        executions of a composition in combination with event monitoring to
        identify critical points and influential factors that are used as predictors
        of possible SLA violations.


   Using Online Testing to Predict Fault Points in Compositions
      • Using model checking-based techniques on post-mortem traces of
        failed composition executions to identify activities that are likely to fail,
        both on the level of composition definition, and in particular cases of
        executing instances.
Benefits of the Computation Cost Approach
   Statistical approaches: structure and environmental factors contribute to
   QoS variability:
                            Environment factors      Structural factors

                                               QoS

     •   Hard to separate structural & environmental variations.
     •   Whole range of input data may not be represented / sampled.
     •   Runs may not be representative.
     •   Results reflect historic variations in the environment.

   Structural approaches with data information: safe approximations of
   structural contributions.
                         Environment factors      Structural factor bounds

                                               QoS

     •   Structural and environmental factors separately composed into QoS.
     •   Entire input data range accounted for.
     •   Results are safe and hold for all possible runs.
     •   Results reflect current variations in the environment.
Computation Cost Analysis and SOA

   The computation cost approach relies on applying static cost analysis
   to service orchestrations:
      • Traditionally concerned with running time: number of execution steps,
        worst-case execution time (WCET)
      • Generalized to counting and measuring events: number of iterations,
        number of partner service invocations, number of exchanged
        messages, network traffic (number of bytes sent/received).

   Data Awareness: bounds expressed as functions of input data.
     • Magnitude of scalars: floating-point, ordinal and cardinal values
     • Measures of data structures: number of items in a list, depth of a tree,
       size of a collection

   Leveraging existing analysis tools.
     • In this case, for logic programs
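To make "data awareness" concrete, here is a hypothetical sketch of the kinds of input-data metrics such bounds range over (the function names are invented for illustration):

```python
# Invented examples of input-data metrics: scalar magnitude, list length,
# and depth of a nested structure (a tuple standing in for a tree/XML term).

def scalar_magnitude(x):
    """Magnitude of a scalar value."""
    return abs(x)

def list_length(xs):
    """Number of items in a list."""
    return len(xs)

def term_depth(t):
    """Depth of a nested tuple 'tree'; leaves have depth 0."""
    if not isinstance(t, tuple) or not t:
        return 0
    return 1 + max(term_depth(c) for c in t)

# A cost bound is then a function of such metrics, e.g. 5n + 2 where
# n = list_length(parts_in_request).
parts = ["wheel", "brake", "seat"]
print(5 * list_length(parts) + 2)  # 17
```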
Approximating Actual Behavior With Upper and Lower Bounds

   Cost analysis (either automatic or manual) often can only determine safe
   upper and lower bounds of computation costs. The exact cost function lies
   somewhere in between.

   [Graph: upper and lower bounds of the QoS / computational cost vs. input
   data size for the bindings A+B1 and A+B2.]

   Assumption: different instances of the same event type contribute equally
   to the overall computation cost.

   Safe computation cost bounds are combined with current environment
   parameters from monitoring (e.g., network speed) to produce QoS bounds.

   QoS bounds approximated by combining cost bounds and environment factors
   (QoS ≈ cost ⊗ environment) are not strictly safe, but:
      • More informed than data-unaware, single point predictions, static
        bounds, or averages.
      • Can be used to predict future behavior of a composition.
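A minimal sketch of the QoS ≈ cost ⊗ environment combination, for running time as the metric (the event names, count bounds, and monitored times below are invented; the actual combination operator depends on the QoS metric):

```python
# Combine safe per-event occurrence bounds (from static cost analysis) with
# currently monitored per-event durations (environment) into QoS bounds.

def qos_bounds(event_bounds, event_times_ms):
    """event_bounds: {event: (lo, hi)} occurrence counts for a given input;
    event_times_ms: {event: monitored average duration in ms}."""
    lo = sum(c_lo * event_times_ms[e] for e, (c_lo, _) in event_bounds.items())
    hi = sum(c_hi * event_times_ms[e] for e, (_, c_hi) in event_bounds.items())
    return lo, hi

n = 6  # number of parts in the request
bounds = {"simple_activity": (2, 7 * n),   # invented example bounds
          "reservation": (0, n)}
times = {"simple_activity": 10, "reservation": 200}  # ms, from monitoring

print(qos_bounds(bounds, times))  # (20, 1620)
```

When the monitored times change (e.g., the network slows down), only `times` needs updating; the structural `bounds` stay valid for all runs.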
Benefits of Upper/Lower Bounds Approach

   [Four plots of QoS vs. input data measure: insensitive to input data (left)
   vs. sensitive to input data (right), for each of the two focuses.]

   Focus: average case
     • Good for aggregate measures.
     • Usually simpler to calculate.
     • Not very informative for individual running instances.

   Focus: upper/lower bounds
     • Can be combined with the average case approach.
     • More difficult to calculate.
     • Useful for monitoring / adapting individual running instances.

                        General idea: More information ⇒ more precision
Orchestration Intermediate Language


Intermediate language (partly) inspired by common BPEL constructs:

Data Types:   XML-style data structures with basic (string, Boolean, number) and
              complex types (structures, lists, optionality).
Expression language: XPath restricted to child/attribute navigation that can be
              resolved statically. Basic arithmetic/logical/string operations.
Basic constructs: assignment, sequence, branching, and looping.
Partner invocation: invoke follows the synchronous pattern. The moment of
               reply reception is not accounted for.
Scopes and fault handlers: usual lexical scoping and exception processing.
Parallel flows: using logical link dependencies.
Translation into Logic Program
     Service: Translated into a logic predicate expressing a mapping from the
              input message to a reply or a fault.
  Invocation: Translated into a predicate call. Returns a reply or a fault.
 Assignment: Passes the expression value to subsequent predicate calls.
  Branching: Mutually exclusive clauses for the then and else parts.
    Looping: Recursive predicate with the base case that corresponds to the
             loop exit condition.
     Scopes: Sub-predicates for scope body and each defined fault handler.
      Flows: Statically serialized according to logical link dependencies.

Concrete Semantics and Resource Consumption
    The resulting logic program does not aim to mimic the operational semantics
    of, e.g., BPEL processes.
    It reflects just the semantics necessary for resource analyzers to infer
    computation costs with minimal precision loss.
Obtaining Computation Cost Functions

   Example analysis of a simple scenario (one provider - one maker):


                                                    part req.
                              Request              OK / not OK
                     Client             Provider                 Maker
                                                     Cancel




     • not OK is treated as a fault by the provider.
     • two analysis variants: without fault handling (ideal case) and with fault
        handling (general case).

   As a generalized resource that is analyzed, here we take the number of
   Provider→Maker invocations for different n.
     • Can be related to the Key Performance Indicators (KPIs)
             Some events are related to business value for the provider and/or maker.
             E.g., minimizing cancellations (undesirable in general).
Example of Analysis Results

   Computation cost analysis results returned as upper and lower bound
   functions of n (number of parts to reserve).
      • These functions express the number of events:
                 executions of simple activities in the orchestration
                 reservations of a single part type
                 cancellations of previously reserved part types

      • In the case without fault handling, we assume that each invocation is
         successful (i.e. the optimistic case).

                                   With fault handling         Without fault handling
                 Resource      lower bound     upper bound   lower bound    upper bound
    No. of simple activities        2             7n           5n + 2         5n + 2
       Single reservations          0             n               n              n
             Cancellations          0            n−1              0              0
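A quick sanity check of the table, under the assumption that the optimistic (no-fault-handling) case must always lie within the general fault-handling bounds:

```python
# Bound functions from the table: n -> (lower, upper), per resource.
with_fh = {
    "simple_activities":   lambda n: (2, 7 * n),
    "single_reservations": lambda n: (0, n),
    "cancellations":       lambda n: (0, n - 1),
}
without_fh = {
    "simple_activities":   lambda n: (5 * n + 2, 5 * n + 2),
    "single_reservations": lambda n: (n, n),
    "cancellations":       lambda n: (0, 0),
}

# Check: lower <= upper, and the optimistic interval nests inside the
# fault-handling interval, for a range of input sizes.
for n in range(1, 50):
    for resource in with_fh:
        flo, fhi = with_fh[resource](n)
        lo, hi = without_fh[resource](n)
        assert flo <= lo <= hi <= fhi, (resource, n)

print("optimistic case lies within the fault-handling bounds for n in 1..49")
```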
3   Discussion
Application to Predictive Monitoring
   [Plot: a QoS metric over time, with a "Max" limit line. The actual profile
   deviates from the initially expected behavior; predictions are updated
   after observations at points A, B, and C toward completion at D.]

   Notion of pending QoS – remaining metric until composition finishes.
   At point B, a deviation from the initial prediction is detected ⇒ it must
   come from the environment. The updated prediction (densely dotted) for D is
   still within range.
   At point C, a further deviation is detected. The updated prediction (loosely
   dotted) can fall out of the range ⇒ a violation of QoS concerns can be
   predicted ahead of time.
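One simple way to picture the pending-QoS update is to rescale the remaining bounds by the slowdown observed so far (the numbers and the rescaling rule below are invented for illustration, not the method's actual update):

```python
# Sketch: at an observation point, rescale the pending (remaining) time
# bounds by the slowdown observed so far, then check against the limit.

def predict_finish(elapsed, expected_so_far, pending_lo, pending_hi):
    """Return predicted (lo, hi) completion times, scaling the pending
    bounds by the observed slowdown factor."""
    slowdown = elapsed / expected_so_far
    return elapsed + slowdown * pending_lo, elapsed + slowdown * pending_hi

t_max = 100.0
# At point B: 30s of work was expected, 36s was observed; 40..55s pending.
lo, hi = predict_finish(36.0, 30.0, 40.0, 55.0)
print(lo, hi, hi <= t_max)  # upper bound exceeds Tmax: violation possible
```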
Experiment in Predictive Monitoring
   Simulation of a service-to-service call with time constraint Tmax :
     • Service A invoked with input message of size n in range 1..50
     • A invokes service B between 50 and 100 times for n = 1, and between
       250 and 500 times for n = 50 (the bounds are linear)
     • B performs between 8 and 16 steps on each invocation.
     • Each iteration of A and each step of B take some time between known
       bounds. Message and reply transfer times are environment factors.
   During execution of an orchestration instance for given n, the system
   takes into account:
     • known computation cost bounds (iterations, steps above)
     • the current environment factors
   and gives the following signals:
     • OK: time limit compliance guaranteed
     • Warn: time limit violation possible
     • Alarm: time limit violation certain

   The actual results are: OK for the time limit compliance and ¬OK for
   violation.
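The three signals follow directly from comparing predicted completion-time bounds against the limit; a minimal sketch of that decision rule (the thresholds mirror the description above, the function itself is mine):

```python
# OK / Warn / Alarm from predicted completion-time bounds vs. the limit Tmax.

def signal(pred_lo, pred_hi, t_max):
    if pred_hi <= t_max:
        return "OK"     # even the worst predicted case meets the limit
    if pred_lo > t_max:
        return "Alarm"  # even the best predicted case violates the limit
    return "Warn"       # violation possible but not certain

print(signal(40, 90, 100))    # OK
print(signal(80, 130, 100))   # Warn
print(signal(110, 150, 100))  # Alarm
```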
Experiment in Predictive Monitoring (Cont.)

   Scenario 1: Environment factors suddenly double (on average) at time
   Tmax/3 into execution of a composition instance.

   [Fig. 6: Ratio of true and false positives (OK, Warn/OK, Warn/¬OK,
   Alarm/¬OK) plotted against input size n for this regime.]

     • For small n, violations are not predicted and do not happen (OK)
     • For slightly larger n, some false warnings arise (Warn/OK)
     • For large n, false warnings yield to true violation warnings
       (Warn/¬OK) and true alarms (Alarm/¬OK)
     • There are no false alarms (Alarm/OK).

   Conclusion: very good prediction accuracy, with some false warnings in the
   lower mid-range of n.
,23

                                                            +23

     Experiment in Predictive Monitoring (Cont.)
                                  Alarm/¬OK                 *23

                                                             23
                                                                    + % . 0 *2 *+ *% *. *0 +2 ++ +% +. +0 ,2 ,+ ,% ,. ,0 %2 %+ %% %. %0 -2
                    Scenario 2: Environment factors gradually deteriorate (quadrupling
                                                                   * , - / 1 ** *, *- */ *1 +* +, +- +/ +1 ,* ,, ,- ,/ ,1 %* %, %- %/ %1

                                                                       45'6789:                    45'678;9:    '6=89:    '6=8;9:     ;'6=89:    ;'6=8;9:

                    on average) during the period Tmax from the start of execution.
                            !##$%                            !##$%


                                                            *223

                                                            123

                                                            023                                    Warn/¬OK
               Warn/¬OK                                     /23

                                                            .23

                                                            -23        OK                                                          Alarm/¬OK
                                                            ,23
rn/OK                                                                                      Warn
                                                            +23
                                                                                           /OK
                                                            %23
                          Alarm/¬OK                         *23

                                                             23
*% *. *0 +2 ++ +% +. +0 ,2 ,+ ,% ,. ,0 %2 %+ %% %. %0 -2               %       ,       .       0 *2 *% *, *. *0 %2 %% %, %. %0 +2 +% +, +. +0 ,2 ,% ,, ,. ,0 -2
, *- */ *1 +* +, +- +/ +1 ,* ,, ,- ,/ ,1 %* %, %- %/ %1            *       +       -       /    1 ** *+ *- */ *1 %* %+ %- %/ %1 +* ++ +- +/ +1 ,* ,+ ,- ,/ ,1

78;9:     '6=89:    '6=8;9:     ;'6=89:    ;'6=8;9:                    45'6789:                45'678;9:    '6=89:    '6=8;9:     ;'6=89:    ;'6=8;9:

6. Ratio of true and!##$% positives for two environmental regimes.
                     false
                         • For small n, do not happen (OK), but there are some false warnings
                (Warn/OK)
 first regime, composition executions for small values of n take little
             • For larger n, false warnings yield to true violation warnings
ete, so they comply with the time limit (marked by OK) and no alerts
 arn/¬OK

 slightly larger(Warn/sizes (e.g. ntrue alarms (Alarm/¬comply with the
                  input ¬OK) and = 9), executions still OK)
             • Alarm/¬OK are again no false alarms (Alarm/OK)
                There (Warn/OK), since the monitor’s estimate of the
  warnings are raised
unning time exceeds Twhen conditions gradually deteriorate, the prediction
          Conclusion: max . As n increases, the number of false warning
eases in favor of the true warning positives (Warn/¬OK), because the
          tends to become more accurate on average.
ng time increases and thus the possibility of execution being affected
 *, *. *0 %2 %% %, %. %0 +2 +% +, +. +0 ,2 ,% ,, ,. ,0 -2
                                                           '(#)*
erioration of the environment factors. In the same region (around n =
*+ *- */ *1 %* %+ %- %/ %1 +* ++ +- +/ +1 ,* ,+ ,- ,/ ,1
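The OK / Warn / Alarm classification above can be illustrated with a minimal decision rule. This is a sketch under assumed names and toy bound functions, not the monitor's actual implementation: `lb` and `ub` stand for data-aware lower/upper bounds on the remaining computation cost, and `env_factor` for the current estimate of time per unit of cost.

```python
def classify(n, lb, ub, env_factor, elapsed, t_max):
    """Classify an in-flight execution with respect to the deadline t_max.

    lb, ub:     hypothetical lower/upper bounds on remaining computation
                cost, as functions of the input size n.
    env_factor: estimated time per unit of computation cost.
    elapsed:    time already spent in the execution.
    """
    est_min = elapsed + lb(n) * env_factor  # best-case completion time
    est_max = elapsed + ub(n) * env_factor  # worst-case completion time
    if est_min > t_max:
        return "Alarm"  # violation certain: even the best case misses t_max
    if est_max > t_max:
        return "Warn"   # violation possible: the worst case misses t_max
    return "OK"         # even the worst case meets the deadline

# Toy bounds: cost grows with input size n.
lb = lambda n: 2 * n
ub = lambda n: 2 * n + 0.1 * n * n

print(classify(5, lb, ub, env_factor=1.0, elapsed=0.0, t_max=60))   # -> OK
print(classify(20, lb, ub, env_factor=1.0, elapsed=0.0, t_max=60))  # -> Warn
print(classify(60, lb, ub, env_factor=1.0, elapsed=0.0, t_max=60))  # -> Alarm
```

As the experiment shows, the Warn band between the two thresholds shrinks in relative terms as n grows, which is why false warnings concentrate in the lower mid-range of n.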
Experiment in Proactive Adaptation

   Two-tier setup: a first tier of providers P1..PN with upper bounds
   UB1(m)..UBN(m), and a second tier of part makers S1..SN with upper
   bounds ub1(n)..ubN(n).

     • Client chooses a provider Pj from the first tier of services, passing the
       input argument m = 0..50.
     • The chosen provider chooses M = 5 times a part maker (the second tier)
       with the input n = m.

   [Figure: family of upper-bound functions ub_1(x)..ub_12(x) and their least
   upper bound lub(x), for input sizes 0..50]

     • The plot depicts the family of upper-bound functions for structural
       computation cost for the first and the second tier.
     • Structural computation cost models the number of messages exchanged
       (without messages between the tiers).
     • A fault rate is used to model service unavailability.
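How the lub(x) curve in the plot relates to the per-service bounds, and how bounds compose across the two tiers, can be sketched as follows. This is illustrative Python: `ub_funcs`, the message costs, and `provider_upper_bound` are assumptions for the example, not the experiment's actual functions or values.

```python
# Hypothetical per-service upper-bound cost functions for the second tier
# (illustrative shapes; each is cheapest in a different range of n).
ub_funcs = [
    lambda n: 10 + 4 * n,
    lambda n: 50 + 1 * n,
    lambda n: 80 + 0.5 * n,
]

def lub(funcs):
    """Pointwise least upper bound of a family of bound functions:
    a safe bound whichever service ends up being chosen."""
    return lambda n: max(f(n) for f in funcs)

M = 5  # the provider invokes a second-tier part maker M times with n = m

def provider_upper_bound(m, own_cost=3):
    """Structural upper bound for a first-tier provider: its own message
    cost plus M second-tier calls, each bounded by lub over the family."""
    return own_cost + M * lub(ub_funcs)(m)

print(provider_upper_bound(12))  # -> 433  (3 + 5 * max(58, 62, 86))
```

The lub curve is what a data-agnostic caller has to assume; the data-aware policies below exploit the individual ub_i(n) instead.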
Experiment in Proactive Adaptation (Cont.)

   Selection of the first-/second-tier service is done using:
     • random choice;
     • fixed preference (lowest computation cost for n = 12); and
     • data-aware computation cost minimization.

   Message passing times for the services are simulated using the
   following two regimes:
    (A) Random Gaussian choice with an average of 5 ms for all services.
    (B) Varying averages of 4–8 ms.

   Effectiveness of the policies is compared w.r.t. total simulated time.
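The three selection policies can be contrasted in a small sketch (hypothetical `ub` functions, not the experiment's code): the fixed policy commits once to the service that is cheapest at the reference size n = 12, while the data-aware policy re-minimizes the upper bound for each actual input size.

```python
import random

# Hypothetical upper-bound cost functions ub_i(n) for three second-tier
# services; each is cheapest in a different range of input sizes.
ub = [
    lambda n: 50 + 1 * n,    # moderate start, linear growth
    lambda n: 10 + 4 * n,    # cheapest for small n
    lambda n: 80 + 0.5 * n,  # expensive start, slow growth
]

def select_random(n):
    # Policy 1: ignore the bounds entirely.
    return random.randrange(len(ub))

def select_fixed(n, ref=12):
    # Policy 2: preference fixed once, using the cost at a reference size.
    return min(range(len(ub)), key=lambda i: ub[i](ref))

def select_data_aware(n):
    # Policy 3: re-evaluate the bounds for the actual input size n.
    return min(range(len(ub)), key=lambda i: ub[i](n))

print(select_fixed(40))       # -> 1: still the service cheapest at n = 12
print(select_data_aware(40))  # -> 0: the service cheapest at n = 40
```

This mirrors why the data-aware policy wins especially at very small and very large inputs: those are the regions where the cheapest service differs most from the one chosen at the fixed reference size.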
A Simulation Experiment (Cont.)
         Simulation results indicate that for both cases (A and B) of service
         running-time variations, the data-aware policy outperforms both the
         random choice and the fixed preference policies.

                  • x-axis gives input data size in the range 0-50
                  • y-axis gives total simulated running time
                  • The fault rate is pf = 0.001

                                  Time [ms]                                                                   Time [ms]
6000                                                                         6000
       random                                                                       random
          fixed                                                                        fixed
           data                                                                         data
5000                                                                         5000



4000                                                                         4000



3000                                                                         3000



2000                                                                         2000



1000                                                                         1000
                                                   sim_s1_pf001.data                                                           sim_s2_pf001.data

  0                                                                            0
            5      10   15   20     25        30      35     40    45   50               5     10   15   20     25        30      35     40    45   50
Experiment in Proactive Adaptation (4)
         Another set of simulation results, for pf = 0.1 (below), indicates that the
         advantages of the data-aware service selection policy persist even
         under very high noise / failure / unavailability rates.

           • included both cases (A and B) of service running time variations
           • overall, data awareness gives the best results for very small and
             very big input data sizes
   [Figure: total simulated running time (0–6000 ms) vs. input data size for the
   random, fixed, and data-aware policies at pf = 0.1; one panel per regime
   (sim_s1_pf100.data, sim_s2_pf100.data)]
Current Restrictions on Orchestrations
   Currently, we are looking at “common” orchestrations that respect
   some restrictions w.r.t. behavior.
     • Overcoming these limitations is a goal for future work.

   Orchestrations must follow the receive-reply interaction pattern:
     • All processing happens between the reception of the initiating message
       and the dispatching of the (final) response.
     • Applicable to processes that accept one among several possible input
       messages.
     • Future work: relax this restriction by using fragmentation to
       identify/separate receive-reply sections of a service.

   Orchestrations must have no stateful callbacks:
     • I.e., no correlation sets / WS-Addressing.
     • Practical problem: current analyzers lose precision when passing
       opaque objects containing state.
     • Future work: improve translation and analysis itself.
4   Conclusions
Conclusions


   Data-aware computation cost functions can be used to predict QoS and thus
   drive QoS-aware adaptation or signal certain or possible QoS violations.
   The approach is based on a translation scheme in which, from an orchestration
   represented in an intermediate language, a logic program is generated and then
   analyzed by existing tools.

     • Analysis derives computation cost functions which are safe upper and
        lower bounds of the orchestration’s computation cost.
     • The computation cost functions are expressed as functions of the size
        of input data, measured in some appropriate data metrics.
     • Computation cost functions are combined with environment factors
        to build more precise QoS bound estimations as a function of
        input data.
Conclusions (Cont.)

   In predictive monitoring, simulation results suggest high accuracy of
   predictions ahead of time, including situations when environmental
   conditions gradually deteriorate.
     • The time between the detection and the occurrence of a violation may be
        used for preparing and triggering the appropriate adaptive action.
   Simulation results indicate the usefulness of the approach in improving the
   efficiency of dynamic, run-time adaptation based on QoS-aware service
   selection.
     • In general, data-aware adaptation gives better results than other
        service selection policies, even with very large variability in service
        availability.
   The idea is to integrate the presented approach into service composition
   provision systems, collect empirical data, and compare and combine it with
   statistical / data-mining approaches.
References




   This presentation is based on [ICH10a, ICH10b].
   Some pointers on QoS analysis and prediction for Web service
   compositions: [Car05, Car07, LWR+ 09, HKMP08, DMK10]
   Some pointers on automatic complexity analysis / computational cost
   / resource consumption analysis:
   [HBC+ 12, HPBLG05, NMLH09, NMLGH08, ABG+ 11]
Bibliography I
[ABG+ 11]   E. Albert, R. Bubel, S. Genaim, R. Hähnle, G. Puebla, and G. Román-Díez.
            Verified resource guarantees using COSTA and KeY.
            In Siau-Cheng Khoo and Jeremy G. Siek, editors, PEPM, pages 73–76.
            ACM, 2011.

[Car05]     J. Cardoso.
            About the Data-Flow Complexity of Web Processes.
            In 6th International Workshop on Business Process Modeling, Development,
            and Support: Business Processes and Support Systems: Design for
            Flexibility, pages 67–74, 2005.

[Car07]     J. Cardoso.
            Complexity analysis of BPEL web processes.
            Software Process: Improvement and Practice, 12(1):35–49, 2007.

[DMK10]     Dimitris Dranidis, Andreas Metzger, and Dimitrios Kourtesis.
            Enabling proactive adaptation through just-in-time testing of conversational
            services.
            In Elisabetta Di Nitto and Ramin Yahyapour, editors, ServiceWave, volume
            6481 of Lecture Notes in Computer Science, pages 63–75. Springer, 2010.
Bibliography II
[HBC+ 12]   M. V. Hermenegildo, F. Bueno, M. Carro, P. López, E. Mera, J.F. Morales, and
            G. Puebla.
            An Overview of Ciao and its Design Philosophy.
            Theory and Practice of Logic Programming, 12(1–2):219–252, January 2012.

            http://arxiv.org/abs/1102.5497.

[HKMP08]    Julia Hielscher, Raman Kazhamiakin, Andreas Metzger, and Marco Pistore.
            A framework for proactive self-adaptation of service-based applications
            based on online testing.
            In Petri Mähönen, Klaus Pohl, and Thierry Priol, editors, Towards a
            Service-Based Internet, volume 5377 of Lecture Notes in Computer Science,
            pages 122–133. Springer Berlin / Heidelberg, 2008.

[HPBLG05] M. Hermenegildo, G. Puebla, F. Bueno, and P. López-García.
            Integrated Program Debugging, Verification, and Optimization Using Abstract
            Interpretation (and The Ciao System Preprocessor).
            Science of Computer Programming, 58(1–2):115–140, 2005.
Bibliography III
[ICH10a]    D. Ivanović, M. Carro, and M. Hermenegildo.
            An Initial Proposal for Data-Aware Resource Analysis of Orchestrations with
            Applications to Predictive Monitoring.
            In Asit Dan, Frédéric Gittler, and Farouk Toumani, editors, International
            Workshops, ICSOC/ServiceWave 2009, Revised Selected Papers, number
            6275 in LNCS. Springer, September 2010.

[ICH10b]    D. Ivanović, M. Carro, and M. Hermenegildo.
            Towards Data-Aware QoS-Driven Adaptation for Service Orchestrations.
            In Proceedings of the 2010 IEEE International Conference on Web Services
            (ICWS 2010), Miami, FL, USA, 5-10 July 2010, pages 107–114. IEEE, 2010.

[LWR+ 09]   Philipp Leitner, Branimir Wetzstein, Florian Rosenberg, Anton Michlmayr,
            Schahram Dustdar, and Frank Leymann.
            Runtime prediction of service level agreement violations for composite
            services.
            In Asit Dan, Frédéric Gittler, and Farouk Toumani, editors,
            ICSOC/ServiceWave Workshops, volume 6275 of Lecture Notes in Computer
            Science, pages 176–186, 2009.
Bibliography IV


[NMLGH08] J. Navas, E. Mera, P. López-García, and M. Hermenegildo.
            Inference of User-Definable Resource Bounds Usage for Logic Programs and
            its Applications.
            Technical Report CLIP5/2008.0, Technical University of Madrid (UPM),
            School of Computer Science, UPM, July 2008.

[NMLH09]    J. Navas, M. Méndez-Lojo, and M. Hermenegildo.
            User-Definable Resource Usage Bounds Analysis for Java Bytecode.
            In Proceedings of the Workshop on Bytecode Semantics, Verification,
            Analysis and Transformation (BYTECODE’09), volume 253 of Electronic
            Notes in Theoretical Computer Science, pages 6–86. Elsevier - North
            Holland, March 2009.
Acknowledgments




   The research leading to these results has received funding
   from the European Community’s Seventh Framework
   Programme [FP7/2007-2013] under grant agreement 215483
   (S-Cube).

Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 

Último (20)

Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 

S-CUBE LP: Using Data Properties in Quality Prediction

  • 1. S-Cube Learning Package: Using Data Properties in Quality Prediction. Universidad Politécnica de Madrid (UPM)
  • 2. Learning Package Categorization
    S-Cube
    WP-JRA-1.3: End-to-End Quality Provision and SLA Conformance
    Quality Assurance and Quality Prediction
    Using Data Properties in Quality Prediction
  • 3. Service Compositions and QoS
    Service compositions are an essential element of the Service-Oriented Architecture (SOA):
    • Putting together several “lower level” (specialized) services
    • Leveraging low coupling and platform independence
    • Achieving a more complex goal, e.g. a business process
    • Often cross-organizational, i.e. using services from different providers
    Quality of Service (QoS) for compositions is often critically important:
    • Relates to composition-level running time, computational cost, bandwidth, etc.
    • Depends on QoS of component services + composition internals + environment factors (such as system and network loads/failures)
    • Can affect business-level KPIs (key performance indicators)
    • Influences applicability and usability in a particular context
    • Constrained by a Service-Level Agreement (SLA)
  • 4. Learning Package Overview
    1 Problem Description
    2 Using Data Properties in Quality Prediction
    3 Discussion
    4 Conclusions
  • 5. 1 Problem Description
  • 6. Components Impacting Orchestration QoS
    Two groups of factors are usually encountered when analyzing the QoS of a service composition:
    • External variations: bandwidth, current load and throughput, network status; behavior of component services (e.g., meeting deadlines?). Usually not under the designer’s control; they change dynamically.
    • Composition structure: what does it do with incoming requests? Which other services are invoked, and how? Partially under the designer’s control, and known in advance.
    Focusing on the latter: what kind of knowledge about composition behavior can we extract to predict composition QoS? Besides, can we make the prediction more precise by taking into account characteristics of the data fed to the composition?
  • 7. Automotive Scenario Example
    Suppose you are a car part provider hired by a factory to purchase a series of parts for its assembly line.
    • You are given a list of parts and their quantities.
    • The parts must come from the same maker (be mutually compatible).
    • You contact a number of part makers and reserve each of the parts in the required quantity.
    • If a maker cannot provide all parts, you cancel all reserved parts from that maker and move to another maker.
    (Diagram: the Factory contacts the Provider, which contacts Maker 1 … Maker K.)
    • Time is essential: you want the process to take the least amount of time and to include the smallest number of cancellations.
  • 8. Automotive Scenario Example (contd.)
    In the service world, you publish to the Client (the factory) your Provider service, which uses a series of Maker services.
    (Diagram: the Client sends a Request to the Provider; the Provider exchanges “Part K req.”, “OK / not OK” and “Cancel” messages with Maker 1 … Maker K.)
    • The protocol requires reserving one car part type at a time. If a maker answers with “not OK,” the provider sends “Cancel” messages for all reserved parts and starts reserving from another maker. The total time is linked to the computation cost of serving the client.
    • It depends heavily (among other things) on the number of parts (in the input message) and on the characteristics of the individual makers.
  • 9. Computation Cost of Service Networks
    Computation cost example: the input message is abstracted as the number of parts n.
    • T_B1(n) = n + 1 (maker B1)
    • T_B2(n) = 0.1n + 7 (maker B2)
    • T_A(n) = 2n + 3 + n·S(n), where S(n) is the cost of the chosen maker (B1 or B2)
    The time T_A for the provider (A) depends on n and on the time S(n) of the chosen maker. The structural part 2n + 3 in T_A does not depend on the choice of maker.
    The graph shows the QoS / computation cost for the two possible bindings:
    • T_A with B1: T(n) = 2n + 3 + n(n + 1) = n² + 3n + 3
    • T_A with B2: T(n) = 2n + 3 + n(0.1n + 7) = 0.1n² + 9n + 3
    (Plot: QoS / computational cost vs. input data size for A+B1 and A+B2.)
    Ivanović et al. (UPM, IMDEA), Data-Aware QoS-Driven Adaptation, 2010-07-07.
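The slide’s worked example can be checked with a short sketch (the function names are illustrative, not from the source):

```python
# Per-invocation maker costs and provider cost from the slide:
#   T_B1(n) = n + 1,  T_B2(n) = 0.1n + 7,  T_A(n) = 2n + 3 + n*S(n)

def t_b1(n):
    return n + 1          # maker B1

def t_b2(n):
    return 0.1 * n + 7    # maker B2

def t_a(n, maker_cost):
    # the structural part 2n + 3 does not depend on the maker choice
    return 2 * n + 3 + n * maker_cost(n)

# Expanding the bindings gives the slide's closed forms:
#   A with B1: n^2 + 3n + 3;  A with B2: 0.1n^2 + 9n + 3
```

For small n the B1 binding is cheaper, but its quadratic term dominates as n grows, which is what the plot illustrates.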
  • 10. Computation Cost of Service Networks
    Computation cost information for B1 and B2 can be made available together with other service-related information (e.g., WSDL extensions):
    • Computation cost expressed as a function of some metrics of the input data.
    • Relationships between the size of the input data and the size of the output data (when they exist).
    A should in turn publish synthesized information (for reuse in other compositions involving A). Such abstract descriptions of computation cost do not compromise the privacy of implementation details.
    • They act as higher-level contracts on composition behavior.
    Problem: inferring, representing, and using the computation cost information of service compositions for QoS prediction.
  • 11. 2 Using Data Properties in Quality Prediction
  • 12. Overview of the Approach
    (Pipeline: WSDL / BPEL → translation → intermediate language → translation → logic program → analysis → results, with feedback back to the sources.)
    1 Service / orchestration descriptions are represented in an intermediate language.
    • Provides independence from the source language (BPEL, Windows Workflow, etc.)
    2 The intermediate representation is translated into an (annotated) logic program.
    • Can capture just the relevant characteristics of the orchestration.
    3 The logic program is analyzed for computation cost bounds.
    4 Analysis results are useful for design-time quality prediction, predictive monitoring, matchmaking, etc.
  • 13. Background: Alternatives in S-Cube
    Other S-Cube approaches include:
    • Detecting possible SLA violations using data mining: extracting information from event logs of successful and failed executions of a composition, in combination with event monitoring, to identify critical points and influential factors that are used as predictors of possible SLA violations.
    • Using online testing to predict fault points in compositions: using model-checking-based techniques on post-mortem traces of failed composition executions to identify activities that are likely to fail, both at the level of the composition definition and for particular executing instances.
  • 14. Benefits of the Computation Cost Approach
    Statistical approaches: structural and environmental factors jointly contribute to QoS variability (environment factors + structural factors → QoS).
    • Hard to separate structural and environmental variations.
    • The whole range of input data may not be represented / sampled.
    • Runs may not be representative.
    • Results reflect historic variations in the environment.
    Structural approaches with data information: safe approximations of structural contributions (environment factors + structural factor bounds → QoS).
    • Structural and environmental factors are separately composed into QoS.
    • The entire input data range is accounted for.
    • Results are safe and hold for all possible runs.
    • Results reflect current variations in the environment.
  • 15. Computation Cost Analysis and SOA
    The computation cost approach relies on applying static cost analysis to service orchestrations:
    • Traditionally concerned with running time: number of execution steps, worst-case execution time (WCET).
    • Generalized to counting and measuring events: number of iterations, number of partner service invocations, number of exchanged messages, network traffic (number of bytes sent/received).
    Data awareness: bounds expressed as functions of the input data.
    • Magnitude of scalars: floating-point, ordinal, and cardinal values.
    • Measures of data structures: number of items in a list, depth of a tree, size of a collection.
    Leveraging existing analysis tools (in this case, for logic programs).
  • 16. Approximating Actual Behavior With Upper and Lower Bounds
    Cost analysis (either automatic or manual) often can only determine safe upper and lower bounds of computation costs. The exact cost function lies somewhere in between.
    (Plot: upper and lower QoS / computation cost bounds for A+B1 and A+B2 vs. input data size.)
    • Assumption: different instances of the same event type contribute equally to the overall computation cost.
    • Safe cost bounds are combined with current environment parameters from monitoring (e.g., network speed) to produce QoS bounds.
    QoS bounds approximated by combining cost bounds and environment factors are not strictly safe, but:
    • More informed than data-unaware, single-point predictions, static bounds, or averages.
    • Can be used to predict the future behavior of a composition.
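A minimal sketch of this combination step (the function name and the per-event scaling model are assumptions for illustration): safe event-count bounds from the analysis are scaled by the currently monitored per-event cost to approximate QoS bounds.

```python
def qos_bounds(n, lb_events, ub_events, env_cost_per_event):
    """Approximate QoS bounds for input size n.

    lb_events / ub_events: safe bounds on the number of cost-relevant
    events, as functions of n (from static analysis).
    env_cost_per_event: currently monitored cost of one event (e.g. ms);
    an environment factor, so the resulting bounds are not strictly safe.
    """
    return (lb_events(n) * env_cost_per_event,
            ub_events(n) * env_cost_per_event)

# e.g., between 5n + 2 and 7n simple activities, 2.0 ms each right now:
lo, hi = qos_bounds(10, lambda n: 5 * n + 2, lambda n: 7 * n, 2.0)
```

When the monitored environment parameters change, the same structural bounds are simply re-scaled, which is what keeps structural and environmental factors separate.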
  • 17. Benefits of the Upper/Lower Bounds Approach
    Focus on the average case:
    • Good for aggregate measures.
    • Usually simpler to calculate.
    • Not very informative for individual running instances.
    Focus on upper/lower bounds:
    • Can be combined with the average-case approach.
    • More difficult to calculate.
    • Useful for monitoring / adapting individual running instances.
    (The figure contrasts both focuses when insensitive vs. sensitive to input data.)
    General idea: more information ⇒ more precision.
  • 18. Orchestration Intermediate Language
    The intermediate language is (partly) inspired by common BPEL constructs:
    • Data types: XML-style data structures with basic (string, Boolean, number) and complex types (structures, lists, optionality).
    • Expression language: XPath restricted to child/attribute navigation that can be resolved statically; basic arithmetic/logical/string operations.
    • Basic constructs: assignment, sequence, branching, and looping.
    • Partner invocation: invoke follows the synchronous pattern; the moment of reply reception is not accounted for.
    • Scopes and fault handlers: usual lexical scoping and exception processing.
    • Parallel flows: using logical link dependencies.
  • 19. Translation into Logic Program
    • Service: translated into a logic predicate expressing a mapping from the input message to a reply or a fault.
    • Invocation: translated into a predicate call; returns a reply or a fault.
    • Assignment: passes the expression value to subsequent predicate calls.
    • Branching: mutually exclusive clauses for the then and else parts.
    • Looping: recursive predicate with a base case that corresponds to the loop exit condition.
    • Scopes: sub-predicates for the scope body and each defined fault handler.
    • Flows: statically serialized according to logical link dependencies.
    Concrete semantics and resource consumption: the resulting logic program does not aim to mimic the operational semantics of, e.g., BPEL processes. It reflects just the semantics necessary for resource analyzers to infer computation costs with minimal precision loss.
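As an illustration of the looping rule: a loop over the remaining part list becomes a recursive definition whose base case is the loop exit condition. This sketch renders the idea as a recursive Python function rather than an actual logic predicate, and the names are invented:

```python
def reserve_loop(parts, invocations=0):
    # base case == loop exit condition: no parts left to reserve
    if not parts:
        return invocations
    # one loop iteration == one recursive call; here we only count the
    # partner invocation the iteration would perform, which is the kind
    # of event a resource analyzer measures
    return reserve_loop(parts[1:], invocations + 1)
```

From a definition of this shape, a cost analyzer can infer that the number of partner invocations is bounded by the length of the input list.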
  • 20. Obtaining Computation Cost Functions
    Example analysis of a simple scenario (one provider, one maker):
    (Diagram: the Client sends a Request to the Provider; the Provider exchanges “part req.”, “OK / not OK” and “Cancel” messages with the Maker.)
    • “not OK” is treated as a fault by the provider.
    • Two analysis variants: without fault handling (ideal case) and with fault handling (general case).
    As the generalized resource to be analyzed, here we take the number of Provider→Maker invocations for different n.
    • This can be related to Key Performance Indicators (KPIs): some events are related to business value for the provider and/or maker, e.g., minimizing cancellations (undesirable in general).
  • 21. Example of Analysis Results
    Computation cost analysis results are returned as upper and lower bound functions of n (the number of parts to reserve).
    • These functions express the number of events: executions of simple activities in the orchestration, reservations of a single part type, and cancellations of previously reserved types.
    • In the case without fault handling, we assume that each invocation is successful (i.e., the optimistic case).

                                 With fault handling         Without fault handling
    Resource                     lower bound   upper bound   lower bound   upper bound
    No. of simple activities     2             7n            5n + 2        5n + 2
    Single reservations          0             n             n             n
    Cancellations                0             n − 1         0             0
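The analysis results above can be read directly as bound functions of n; a small sketch (the function name and dictionary layout are ours, the figures are the slide’s):

```python
def event_bounds(n, fault_handling=True):
    """(lower, upper) bounds on event counts for n part types,
    mirroring the analysis-results table."""
    if fault_handling:
        return {"simple_activities": (2, 7 * n),
                "single_reservations": (0, n),
                "cancellations": (0, n - 1)}
    # optimistic case: every invocation succeeds, so both bounds coincide
    return {"simple_activities": (5 * n + 2, 5 * n + 2),
            "single_reservations": (n, n),
            "cancellations": (0, 0)}
```

Note how fault handling widens every interval: the lower bounds drop to the earliest possible failure, while the upper bounds grow to cover retries and cancellations.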
  • 22. 3 Discussion
  • 23. Application to Predictive Monitoring
    (Plot: a QoS metric over the execution history, showing the initially expected behavior, the actual profile, the maximum allowed value, and updated predictions after observation points B and C, along points A, B, C, D.)
    • Notion of pending QoS: the remaining metric until the composition finishes.
    • At point B, a deviation from the initial prediction is detected ⇒ it must come from the environment. The updated prediction for D is still within range.
    • At point C, a further deviation is detected. The updated prediction can fall out of range ⇒ a violation of QoS concerns can be predicted ahead of time.
  • 24. Experiment in Predictive Monitoring
    Simulation of a service-to-service call with time constraint Tmax:
    • Service A is invoked with an input message of size n in the range 1..50.
    • A invokes service B between 50 and 100 times for n = 1, and between 250 and 500 times for n = 50 (the bounds are linear).
    • B performs between 8 and 16 steps on each invocation.
    • Each iteration of A and each step of B take some time between known bounds. Message and reply transfer times are environment factors.
    During the execution of an orchestration instance for a given n, the system takes into account the known computation cost bounds (iterations and steps above) and the current environment factors, and gives the following signals:
    • OK: time limit compliance guaranteed
    • Warn: time limit violation possible
    • Alarm: time limit violation certain
    The actual outcomes are: OK for time limit compliance and ¬OK for violation.
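The three signals follow from comparing the elapsed time plus the (environment-scaled) bounds on the remaining time against Tmax. A sketch of this decision rule, with the signal names from the slide but the function shape assumed:

```python
def monitor_signal(elapsed, remaining_lb, remaining_ub, t_max):
    """Classify a running instance given safe bounds on its remaining time."""
    if elapsed + remaining_ub <= t_max:
        return "OK"     # even the worst case finishes within the limit
    if elapsed + remaining_lb > t_max:
        return "Alarm"  # even the best case misses the limit
    return "Warn"       # the limit may or may not be missed
```

As the instance progresses, the remaining-time bounds shrink, so the signal can only move from Warn toward OK or Alarm, never back to an unwarranted OK.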
  • 25. Experiment in Predictive Monitoring (Cont.)
    Scenario 1: environment factors suddenly double (on average) at time Tmax/3 into the execution of a composition instance.
    (Plot: ratio of true and false positives per input size n.)
    • For small n, violations are not predicted and do not happen (OK): executions take little time and comply with the time limit.
    • For slightly larger n (e.g., n = 9), some false warnings arise (Warn/OK): executions still comply with the time limit, but warnings are raised because the monitor’s upper-bound estimate of the running time exceeds Tmax.
    • As n increases, false warnings yield to true violation warnings (Warn/¬OK) and true alarms (Alarm/¬OK), because the average running time grows and executions can be affected by the sudden deterioration of the environment factors.
    • There are no false alarms (Alarm/OK).
    Conclusion: very good prediction accuracy, with some false warnings in the lower mid-range of n.
  • 26. Experiment in Predictive Monitoring (Cont.)
    Scenario 2: environment factors gradually deteriorate (quadrupling on average) during the period Tmax from the start of the execution.
    (Plot: ratio of true and false positives per input size n.)
    • For small n, violations do not happen (OK), but there are some false warnings (Warn/OK).
    • For larger n, false warnings yield to true violation warnings (Warn/¬OK) and true alarms (Alarm/¬OK).
    • There are again no false alarms (Alarm/OK).
    Conclusion: when conditions gradually deteriorate, the prediction tends to become more accurate on average.
  • 27. Experiment in Proactive Adaptation
    Two-tier setting:
    • The Client chooses a provider Pj from the first tier of services, passing the input argument m = 0..50.
    • The chosen provider chooses a part maker (the second tier) M = 5 times, with the input n = m.
    (Plot: family of upper-bound functions ub_1(x) .. ub_12(x), and their least upper bound lub(x), for the structural computation cost of the first and second tiers.)
    • The structural computation cost models the number of messages exchanged (without the messages between the tiers).
    • A fault rate is used to model service unavailability.
  • 28. Experiment in Proactive Adaptation (Cont.)
    Selection of the first/second tier service is done using:
    • random choice;
    • fixed preference (lowest computation cost for n = 12); and
    • data-aware computation cost minimization.
    Message passing times for the services are simulated using two regimes:
    • (A) Random Gaussian choice with an average of 5 ms for all services.
    • (B) Varying averages of 4-8 ms.
    The effectiveness of the policies is compared w.r.t. the total simulated time.
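The three selection policies can be sketched as follows (`ubs` stands for the list of published upper-bound cost functions; the names are illustrative, not from the source):

```python
import random

def select_random(ubs, n):
    # ignore both the cost functions and the input size
    return random.randrange(len(ubs))

def select_fixed(ubs, n, n_ref=12):
    # preference fixed in advance: the service cheapest at the
    # reference size n_ref, regardless of the actual input
    return min(range(len(ubs)), key=lambda i: ubs[i](n_ref))

def select_data_aware(ubs, n):
    # minimize the upper-bound cost for the actual input size n
    return min(range(len(ubs)), key=lambda i: ubs[i](n))
```

A fixed preference can be badly wrong when cost curves cross, which is exactly the situation in the plotted family of upper bounds; the data-aware policy picks a different service on each side of the crossing point.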
  • 29. A Simulation Experiment (Cont.)
    Simulation results indicate that for both cases (A and B) of service running time variation, the data-aware policy outperforms both the random choice and the fixed preference policies.
    • The x-axis gives the input data size in the range 0-50.
    • The y-axis gives the total simulated running time.
    • The fault rate is pf = 0.001.
    (Plots: total simulated time [ms] vs. input size for the random, fixed, and data-aware policies under both regimes.)
  • 30. Experiment in Proactive Adaptation (4)
    Another set of simulation results, for pf = 0.1, indicates that the advantages of the data-aware service selection policy persist even under very high noise / failure / unavailability rates.
    • Both cases (A and B) of service running time variation are included.
    • Overall, data awareness gives the best results for very small and very big input data sizes.
    (Plots: total simulated time [ms] vs. input size for the random, fixed, and data-aware policies under both regimes.)
  • 31. Current Restrictions on Orchestrations
    Currently, we consider “common” orchestrations that respect some restrictions on their behavior. Overcoming these limitations is a goal for future work.
    Orchestrations must follow the receive-reply interaction pattern:
    • All processing happens between the reception of the initiating message and the dispatching of the (final) response.
    • Applicable to processes that accept one among several possible input messages.
    • Future work: relax this restriction by using fragmentation to identify/separate receive-reply service sections.
    Orchestrations must have no stateful callbacks:
    • I.e., no correlation sets / WS-Addressing.
    • Practical problem: current analyzers lose precision when passing opaque objects containing state.
    • Future work: improve the translation and the analysis itself.
4 Conclusions
Conclusions

Data-aware computation cost functions can be used to predict QoS and thus drive QoS-aware adaptation or signal certain or possible QoS violations.

The approach is based on a translation scheme that generates, from an orchestration represented in an intermediate language, a logic program which is then analyzed by existing tools.
  • The analysis derives computation cost functions which are safe upper and lower bounds of the orchestration’s computation cost.
  • The computation cost functions are expressed as functions of the size of the input data, measured in some appropriate data metrics.
  • The computation cost functions are combined with environment factors to build more precise QoS bound estimations as a function of the input data.
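A minimal sketch of that combination follows. The cost-function shape and the environment figures (`ms_per_unit`, `network_ms`) are illustrative assumptions, not output of the actual analysis tools:

```python
def ub_cost(n):
    """Assumed analysis-derived upper bound on computation cost,
    in abstract cost units, as a function of input data size n."""
    return 3 * n + 20

def predicted_time_ub(n, ms_per_unit, network_ms):
    """Combine the cost bound with monitored environment factors
    (time per cost unit, network overhead) into a running-time bound."""
    return ub_cost(n) * ms_per_unit + network_ms

def may_violate_sla(n, ms_per_unit, network_ms, deadline_ms):
    """Signal a possible SLA violation before executing the request."""
    return predicted_time_ub(n, ms_per_unit, network_ms) > deadline_ms
```

For example, with 0.5 ms per cost unit and 40 ms of network overhead, a 150 ms deadline is predicted safe for n = 10 but possibly violated for n = 100.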
Conclusions (Cont.)

In predictive monitoring, simulation results suggest high accuracy of predictions ahead of time, including situations where environmental conditions gradually deteriorate.
  • The time between the detection and the occurrence of a violation may be used for preparing and triggering the appropriate adaptive action.

Simulation results indicate the usefulness of the approach in improving the efficiency of dynamic, run-time adaptation based on QoS-aware service selection.
  • In general, data-aware adaptation gives better results than the other service selection policies, even with very large variability in service availability.

The next step is to integrate the presented approach into service composition provision systems, collect empirical data, and compare and combine it with statistical / data mining approaches.
References

This presentation is based on [ICH10a, ICH10b].

Some pointers on QoS analysis and prediction for Web service compositions: [Car05, Car07, LWR+09, HKMP08, DMK10]

Some pointers on automatic complexity analysis / computational cost / resource consumption analysis: [HBC+12, HPBLG05, NMLH09, NMLGH08, ABG+11]
Bibliography I

[ABG+11] E. Albert, R. Bubel, S. Genaim, R. Hähnle, G. Puebla, and G. Román-Díez. Verified resource guarantees using COSTA and KeY. In Siau-Cheng Khoo and Jeremy G. Siek, editors, PEPM, pages 73–76. ACM, 2011.

[Car05] J. Cardoso. About the Data-Flow Complexity of Web Processes. In 6th International Workshop on Business Process Modeling, Development, and Support: Business Processes and Support Systems: Design for Flexibility, pages 67–74, 2005.

[Car07] J. Cardoso. Complexity analysis of BPEL web processes. Software Process: Improvement and Practice, 12(1):35–49, 2007.

[DMK10] Dimitris Dranidis, Andreas Metzger, and Dimitrios Kourtesis. Enabling proactive adaptation through just-in-time testing of conversational services. In Elisabetta Di Nitto and Ramin Yahyapour, editors, ServiceWave, volume 6481 of Lecture Notes in Computer Science, pages 63–75. Springer, 2010.
Bibliography II

[HBC+12] M. V. Hermenegildo, F. Bueno, M. Carro, P. López, E. Mera, J. F. Morales, and G. Puebla. An Overview of Ciao and its Design Philosophy. Theory and Practice of Logic Programming, 12(1–2):219–252, January 2012. http://arxiv.org/abs/1102.5497.

[HKMP08] Julia Hielscher, Raman Kazhamiakin, Andreas Metzger, and Marco Pistore. A framework for proactive self-adaptation of service-based applications based on online testing. In Petri Mähönen, Klaus Pohl, and Thierry Priol, editors, Towards a Service-Based Internet, volume 5377 of Lecture Notes in Computer Science, pages 122–133. Springer Berlin / Heidelberg, 2008.

[HPBLG05] M. Hermenegildo, G. Puebla, F. Bueno, and P. López-García. Integrated Program Debugging, Verification, and Optimization Using Abstract Interpretation (and The Ciao System Preprocessor). Science of Computer Programming, 58(1–2):115–140, 2005.
Bibliography III

[ICH10a] D. Ivanović, M. Carro, and M. Hermenegildo. An Initial Proposal for Data-Aware Resource Analysis of Orchestrations with Applications to Predictive Monitoring. In Asit Dan, Frédéric Gittler, and Farouk Toumani, editors, International Workshops, ICSOC/ServiceWave 2009, Revised Selected Papers, number 6275 in LNCS. Springer, September 2010.

[ICH10b] D. Ivanović, M. Carro, and M. Hermenegildo. Towards Data-Aware QoS-Driven Adaptation for Service Orchestrations. In Proceedings of the 2010 IEEE International Conference on Web Services (ICWS 2010), Miami, FL, USA, 5–10 July 2010, pages 107–114. IEEE, 2010.

[LWR+09] Philipp Leitner, Branimir Wetzstein, Florian Rosenberg, Anton Michlmayr, Schahram Dustdar, and Frank Leymann. Runtime prediction of service level agreement violations for composite services. In Asit Dan, Frédéric Gittler, and Farouk Toumani, editors, ICSOC/ServiceWave Workshops, volume 6275 of Lecture Notes in Computer Science, pages 176–186, 2009.
Bibliography IV

[NMLGH08] J. Navas, E. Mera, P. López-García, and M. Hermenegildo. Inference of User-Definable Resource Bounds Usage for Logic Programs and its Applications. Technical Report CLIP5/2008.0, Technical University of Madrid (UPM), School of Computer Science, UPM, July 2008.

[NMLH09] J. Navas, M. Méndez-Lojo, and M. Hermenegildo. User-Definable Resource Usage Bounds Analysis for Java Bytecode. In Proceedings of the Workshop on Bytecode Semantics, Verification, Analysis and Transformation (BYTECODE’09), volume 253 of Electronic Notes in Theoretical Computer Science, pages 6–86. Elsevier – North Holland, March 2009.
Acknowledgments

The research leading to these results has received funding from the European Community’s Seventh Framework Programme [FP7/2007-2013] under grant agreement 215483 (S-Cube).