APPLICATION PERFORMANCE MANAGEMENT/QUALITY MANAGEMENT SYSTEM (APM/QMS) in PSHRC Eng. Mohammad Al-Nofaie Network Performance Engineer Center of Computer & Info. Systems (CCIS), PSHRC
APPLICATION PERFORMANCE MANAGEMENT/QUALITY MANAGEMENT SYSTEM (APM/QMS) in PSHRC
VISION To improve the quality of network performance through advanced communication services, giving authorized users equal access to state-of-the-art technology MISSION To provide authorized users the highest quality and most technologically advanced end-user services
APM/QMS Project Plan
Stage 1: Initiating and Preparation Description
Stage 1: Initiating and Preparation Stakeholders Return on Investment of this Application Performance Management Project
Stage 1  :   Initiating and Preparation Application Architecture The goal of successful application architecture is to explore the entire business and define an application and infrastructure framework that has the potential of delivering workable solutions for the foreseeable future. The key is to identify the business aspects that are core and the others that might change significantly. This frames the risk when looking at the specific areas to support. With a solid business perspective, current technologies and future science can be assessed. Although new technologies might stimulate new business, technologies are the tools, not the goal - business is the key. The results should support business growth or shrinkage, and replacement of application and technology components over time. Change is a constant - the architecture's aim is not just to withstand it but also to enable it. The exact structure is not important but the focus must be correct and the framework must be appropriately flexible to evolve. The enterprise architecture will also provide the framing and guidance for the next levels of architecture and design.
Stage 1  :   Initiating and Preparation Application Multi-tier Multi-tier applications enable enterprises to share information with, and permit collaboration among, employees, customers, and business partners. A typical multi-tier application has three tiers: a front end that performs authentication and serves as an interface to the user, a middle tier that handles authorization and business logic, and a back end that acts as a store for information.
Stage 1: Initiating and Preparation Service Level Agreement An SLA sets the expectations between the consumer and provider and helps define the relationship between the two parties. It is the cornerstone of how the service provider sets and maintains commitments to the service consumer. A good SLA addresses five key aspects. In the definition of an SLA, realistic and measurable commitments are important. Performing as promised is important, but swift and well-communicated resolution of issues is even more important. The challenge for a new service and its associated SLA is that there is a direct relationship between the architecture and the maximum achievable levels of availability. Thus, an SLA cannot be created in a vacuum; it must be defined with the infrastructure in mind. An exponential relationship exists between the levels of availability and the related cost. Some customers need higher levels of availability and are willing to pay more. Therefore, having different SLAs with different associated costs is a common approach.
Stage 1: Initiating and Preparation Service Level Objective Service Level Objectives (SLOs) are a key element of a Service Level Agreement between a Service Provider and a customer. SLOs are agreed as a means of measuring the performance of the Service Provider and are outlined as a way of avoiding disputes between the two parties based on misunderstanding. The SLO may be composed of one or more quality-of-service (QoS) measurements that are combined to produce the SLO achievement value. As an example, an availability SLO may depend on multiple components, each of which may have a QoS availability measurement. The combination of QoS measures into an SLO achievement value will depend on the nature and architecture of the service. An SLO must be: Attainable, Measurable, Understandable, Meaningful, Controllable, Affordable, Mutually acceptable. Service Level Commitment 0 to 0.2% application and network error rate; response procedures for system failures within 5 minutes of failure notification; a Disaster Recovery Plan (DRP), automated monitoring of server availability and daily back-ups of critical data; confidentiality and security of data.
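The idea of combining per-component QoS measurements into a single SLO achievement value can be sketched as follows. This is a minimal illustration assuming serially dependent tiers (overall availability is the product of the individual availabilities); the tier names and figures are assumptions, not PSHRC measurements.

```python
# Hypothetical sketch: combine per-component QoS availability measurements
# into one SLO achievement value for a serially dependent service.

def slo_achievement(component_availabilities):
    """For components in series, overall availability is the product
    of the individual availabilities (any one tier failing fails the service)."""
    result = 1.0
    for availability in component_availabilities:
        result *= availability
    return result

# Illustrative figures: front end, middle tier and back end measured separately.
tiers = [0.999, 0.998, 0.999]
overall = slo_achievement(tiers)
print(f"SLO achievement: {overall:.4%}")
```

Note the design consequence: the combined value is always lower than the weakest single tier, which is why an SLO cannot be set without knowing the architecture behind it.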
Stage 1  :   Initiating and Preparation External Issues Affecting Performance
Cisco Service-Oriented Network Architecture (SONA) Framework Stage 1  :   Initiating and Preparation Framework
Cisco Service-Oriented Network Architecture (SONA) Framework Application Layer This layer contains the business applications and collaborative applications that use interactive services to operate more efficiently, or that can be deployed more quickly and with lower integration costs. Stage 1: Initiating and Preparation Framework
This layer is a full architecture of several network technologies working together to create functionality that can be used by multiple applications across the network. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1   :   Initiating and Preparation Framework
Security Services Ensures that all aspects of the network work together so that it is secured pervasively from the edge to the core, addressing multiple aspects from passive attacks such as viruses to active attacks, as well as segmentation of data types. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1: Initiating and Preparation Framework
Mobility Services Allows users to access network resources regardless of their physical location but includes more than simple wireless devices. It is also the interaction through the network to allow for seamless layer three mobility and rapid re-association and forwarding of voice and video content. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1  :   Initiating and Preparation Framework
Storage Services Provides distributed and virtual storage across the infrastructure, enabling additional services such as backup and translation functionality that would usually require separately maintained media servers. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1: Initiating and Preparation Framework
Voice and Collaboration Services Delivers the foundation by which voice and video streaming can be carried across the network with a high degree of quality while interacting with different data systems all working together as a full service. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1  :   Initiating and Preparation Framework
Compute Services Connects and virtualizes compute resources based on the application, helping to provide cost-effective business continuity as well as a decoupling of specific applications from specific servers. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1: Initiating and Preparation Framework
Identity Services Maps resources and policies to the user and device for use both by security services and to be used to create preferences for users for collaborative services. Identity service is also utilized by multiple applications to provide single sign-on capabilities. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1  :   Initiating and Preparation Framework
This layer is where all the IT resources are interconnected across a converged network foundation designed as a complete architecture to interoperate with all advanced services, across all places in the network, without requiring re-architecture or forklift upgrades. Cisco Service-Oriented Network Architecture (SONA) Framework Networked Infrastructure Layer Stage 1  :   Initiating and Preparation Framework
The network group is responsible for analyzing and generating reports about the current network infrastructure. After the analysis and report generation, both teams will discuss every aspect of the problems and provide solutions. Business / Collaboration Application Team Network Interactive Service Team Stage 1: Initiating and Preparation Team Structure
Stage 1: Initiating and Preparation Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Existing and future applications must be declared. Current network status is identified. Remaining network resources must be elaborated and proposed network upgrade solutions are formulated. These objectives must be met in order to proceed to the next stage. Stage 1: Initiating and Preparation Deliverables
To identify and understand the current network environment and the possible impact during application deployment. To understand and recommend information with regard to cost justification, project initiation and execution. Stage 2: Planning Objectives
Business / Collaboration Application Team Network Interactive Service Team Stage 2  :   Planning Team Structure
Stage 2: Planning Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
The current load of the network and all resource consumption of the application must be declared. All applications currently running on the network that might be affected by the application deployment must be identified. A contingency plan must be prepared for system and application failures. Cost analyses for system upgrades are identified. These objectives must be met in order to proceed to the next stage. Stage 2: Planning Deliverables
To identify the major requirements of the application to be tested, both hardware and software. To identify the need for hardware changes. To identify end users' proficiency in using the application. To ensure the integrity of the application during runtime. Stage 3: Testing Environment Objectives
To identify the application's performance. To identify the network resource consumption. To identify the integrity of the contingency plan during software and hardware failure events. To provide information from the tools which measure the application and network performance. Stage 3: Implementation Objectives
Stage 3: Implementation 4 Stages of Implementation IT Guru IT Guru does the following: 1) Diagnose – Visualize the network, traffic flows and application transactions. Quickly determine the root cause of performance problems (server, network or client). Audit compliance with network security policies. 2) Validate Changes Prior to Implementation – Test network configurations before implementation, right-size capacity upgrades, and analyze system upgrades, consolidations and relocations. 3) Plan Ahead for Growth and High Availability – Establish budgets with quantitative justification, plan upgrades for growth or new facilities, and optimize the deployment of new technologies and mission-critical applications. We will evaluate three (3) products under IT Guru, namely: CISCO Works, HP Manager, Sniffer Pro. 1 Provides a Virtual Network Environment that models the behavior of your entire network, including its routers, switches, protocols, systems and individual applications. By working in the Virtual Network Environment, IT managers, network and system planners and operations staff are empowered to more effectively diagnose difficult problems, validate changes before they are implemented, and plan for future scenarios including growth and failure.
Stage 3: Implementation 1) Capture (Application Traces) – Capture a 'fingerprint' of the application transaction as it traverses the infrastructure. 2) Visualize (Transactions) – Visualize application transactions at both the application level and the network packet level. Understand the interactions and dependencies among clients, the network, application servers and database servers. 3) Diagnose (Performance Problems) – Identify and diagnose performance bottlenecks. Decode captured applications that cause unacceptable processing delays. 4) Validate (Solutions) – Quickly evaluate the impact of changes in growth, bandwidth, protocol settings, application behavior, server speed and network congestion on end-to-end response times. 2 The performance of networked applications depends on complex interactions among applications, servers and networks. IT organizations need a detailed, quantitative understanding of these interactions to efficiently and cost-effectively troubleshoot and deploy applications. ACE directly addresses these challenges. 4 Stages of Implementation ACE (Application Characterization Environment)
Stage 3   :   Implementation 3 Provides real-time performance analysis of complex applications by monitoring system and application metrics within each server across all tiers.  Panorama automatically spots abnormal vs. normal behavior with advanced deviation tracking and correlation technologies.  It automates the otherwise tedious analysis of thousands of application and system metrics across multiple tiers to identify sources of performance problems or potential choke-points. 4 Stages of Implementation Panorama (Real-time Application Analytics)
Stage 3   :   Implementation 4 An application service level monitoring solution that provides visibility into interdependent application and infrastructure components, and quantifies SLA compliance. SLA Commander™ employs synthetic transactions to monitor the response time and availability of web applications as seen by end-users, proactively alerting IT operations teams when performance thresholds are exceeded. SLA Commander integrates with OPNET's ACE™ to enable the in-depth analysis of problems that are intermittent or cannot easily be reproduced.  4 Stages of Implementation SLA Commander  Key Features •  Automated, around-the-clock application monitoring with threshold-based alarms  •  Convenient web-based dashboard that displays application service levels, enabling at-a-glance identification of problem areas  •  Comprehensive service model that maps infrastructure and application components to a business service  •  Early warning alerts to advise support teams of performance degradation  •  Drill-down analysis into poorly performing services to isolate faults to specific components  •  Intuitive authoring environment to create test scripts without programming, by recording a user's browser activity  •  High-fidelity browser playback of scripted transactions  •  Integration with OPNET's free ACE™ Capture Agents to automatically capture and archive packet traces of problematic transactions for subsequent analysis in ACE
Stage 3   :   Implementation Assessing Application Networkability Workflow
Stage 3: Implementation Methodology 1 The process of capturing application data that accurately reflects the behavior of the application. Capture and Import Application Packet Traces 2 Analyzing the Application
3 The network impact can be studied by changing network parameters (bandwidth, latency, packet loss, link utilization, TCP window size, etc.) and observing their effect on the application response time. For example, plot the application response time against one parameter while keeping the others fixed. In general, the application response time should decrease as you increase bandwidth and/or reduce packet loss, link utilization and latency. Study Network Impact Stage 3: Implementation Methodology 4 Changes in the application behavior will cause changes in the underlying network data exchange. Modifying the number of application turns, application bytes, and the processing times on relevant tiers will produce a data exchange pattern that reflects the application behavior. Modify Application Characteristics
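The kind of one-parameter sweep described above can be illustrated with a toy response-time model (application turns times round-trip time, plus serialization delay, plus processing time). This model and every figure in it are assumptions for demonstration; the simulator's internal calculation is more detailed.

```python
# Hedged sketch: estimate application response time from a few network
# parameters, then sweep bandwidth while holding the others fixed.

def estimated_response_time(app_turns, app_bytes, rtt_s, bandwidth_bps,
                            processing_s=0.0):
    network_cost = app_turns * rtt_s                  # chattiness cost per turn
    transfer_cost = (app_bytes * 8) / bandwidth_bps   # serialization delay
    return network_cost + transfer_cost + processing_s

# Plot-style sweep: response time vs bandwidth, other parameters fixed.
for mbps in (1, 2, 5, 10, 50):
    t = estimated_response_time(app_turns=200, app_bytes=500_000,
                                rtt_s=0.040, bandwidth_bps=mbps * 1_000_000)
    print(f"{mbps:>3} Mb/s -> {t:.2f} s")
```

The sweep also shows the diminishing-returns effect: once the turn cost dominates, adding bandwidth barely moves the response time, which is exactly why chattiness is diagnosed separately below.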
5 The following are the steps in simulating the application: Auto-Create the Basic Topology The model and configuration of the topology are based on the number of tiers. Specifying the LAN segments helps to specify other parameters such as loss and latency, in addition to the WAN technology (IP, ATM or Frame Relay) and the bandwidth. Selecting the appropriate device models enables you to capture application packet traces in the simulation in the same way as capturing protocol traces in the real world. Determine Propagation Delay and Latency The discrete event simulation's default method of determining the propagation delay using a 'line-of-sight' geographic distance may often give a propagation delay that is too low because, for example, the actual network links may not follow a true line of sight. Therefore, it is often important to explicitly set latency/propagation attribute values when simulating application traffic, especially when doing application response time studies over TCP. Simulate the Application Stage 3: Implementation Methodology
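The gap between line-of-sight and actual-route propagation delay can be shown with a quick calculation. The signal speed (roughly 2/3 of the speed of light, typical for fiber) and the two distances are assumed figures for illustration only.

```python
# Sketch of why the line-of-sight default can understate propagation delay.

SIGNAL_SPEED_M_PER_S = 2.0e8   # ~2/3 c, a common rule of thumb for fiber

def propagation_delay_ms(distance_km):
    """One-way propagation delay in milliseconds over the given distance."""
    return distance_km * 1000 / SIGNAL_SPEED_M_PER_S * 1000

line_of_sight_km = 800         # straight-line distance between sites (assumed)
actual_route_km = 1400         # real fiber path with detours (assumed)

print(f"line of sight: {propagation_delay_ms(line_of_sight_km):.1f} ms")
print(f"actual route:  {propagation_delay_ms(actual_route_km):.1f} ms")
```

With figures like these, the default underestimates one-way delay by several milliseconds, which compounds over every TCP round trip in a chatty transaction.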
Tune Protocols and Set Parameters The parameters of the client and servers are typically the most important. In general, depending on the protocols and devices you have chosen, there may be many parameters. Advanced versions of the device models give access to the broadest range of parameters. Parameters for TCP are often the most influential when working with applications that use this protocol. The advanced versions of the client and server models provide a full complement of TCP parameters that can be controlled. Understand the Important TCP Parameters "TCP Delayed Acknowledgement Mechanism" controls how delayed "dataless" acknowledgements are sent by the TCP connection process. Note that TCP does not send an ACK the instant that it receives data. Instead, it delays the ACK, hoping that it will have data to send with it (called "ACK piggybacking"). "TCP Maximum Acknowledgement Delay" is the longest time that a TCP connection process waits to send an ACK after receiving data. "TCP Receive Buffer Usage Threshold" affects the window size of the TCP connection. The window size is the amount of space available in the receive buffer. The usage threshold determines when data should be transferred from TCP's receive buffer to the application, thereby allowing the receive window to open further. Stage 3: Implementation Methodology: Simulate the Application
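These simulator attributes do not map one-to-one onto host settings, but a couple of analogous knobs do exist on real endpoints via standard socket options. The sketch below is illustrative only: the values are arbitrary, and actual buffer sizing behavior is OS-dependent.

```python
import socket

# Illustrative analogues of the TCP tuning knobs discussed above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm so small writes are sent immediately instead
# of being coalesced while waiting for outstanding ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Request a larger receive buffer; a bigger buffer lets the advertised
# TCP receive window open further (the OS may round or double the value).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

print("TCP_NODELAY:", sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```

Delayed-ACK timing itself is generally not tunable through portable socket options, which is one reason the simulator's explicit "Maximum Acknowledgement Delay" attribute is valuable for what-if studies.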
Run the Initial Simulation and Get Results Choosing a simulation duration and selecting statistics obtains the measurement of the response time, allowing you to confirm that each element is behaving as expected. These should include application response time, server load and task-processing-related statistics, link utilization, and sent and received data throughputs for the application. Running the Initial Simulation and Validating the Application Response Time The overall simulated response time for the application's transactions may not match what you observe on your actual network because of several factors, such as the network not being fully modeled or the protocol parameters not yet being tuned. The packet analyzer captures a trace of the application task that is being simulated. Importing the application packet trace allows you to compare the statistics and diagnoses to those originally imported from the live network. In most cases, the results will match closely. The results will not match if the protocol parameters are not configured appropriately or if the effect of other users has not been taken into account. Represent the Server Servers are highly complex devices composed of numerous subsystems that perform tasks with varying degrees of concurrency. Furthermore, the behavior of various applications and operating systems varies greatly from vendor to vendor, and even from revision to revision, due to patches and upgrades. As a result, creating models of server performance can be difficult, but it becomes easier if the models built are kept simple. Stage 3: Implementation Methodology: Simulate the Application
6 To model the effect of other users and traffic sources, be sure to create appropriate load on the various components in the path. While it exceeds the scope of this methodology, obtaining load information is often done using network performance management tools that monitor statistics gathered by agents in the network. Run Simulations and Get Results Once the topology is built, the effect of other users is included, and all the relevant protocol parameters are tuned, run the simulation and obtain results. Troubleshoot Application Response Times Load and possible congestion in the network can be the source of delay when the simulated response time does not match the one observed on the actual network. Congestion is indicated by repeated sequence numbers. Retransmissions of packets that are being dropped can be a significant contributor to lagging application response times. Client Relocation Approach Moving the client to various locations is an effective approach to locating the source of additional delay in the path between client and server. By 'plugging' the client into different locations along the path and taking response time measurements, you can obtain an estimate of the contribution of each segment of the path to the overall response time. Model the Effect of Other Users and Traffic Sources Stage 3: Implementation Methodology
Ping Approach Instead of physically moving the client to different locations, the ping command can determine the round-trip times from the client to other components in the path, provided that those components also use the IP protocol. The round-trip times give an idea of where the latency is in the path. Stage 3: Implementation Methodology: Model the Effect of Other Users and Traffic Sources 7 Results can be analyzed by viewing the output of the simulation in the form of graphs and statistics. These results allow you to iteratively construct what-if scenarios and study the impact of the changes on the application. Analyze Results 8 Reports are used to demonstrate the applications' performance test results and to help collaborators understand them more fully. Visual displays and graphs are essential report design elements for demonstrating the key findings effectively. Generate Reports
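A minimal sketch of the ping approach, assuming a Linux-style iputils `ping` whose summary line looks like `rtt min/avg/max/mdev = a/b/c/d ms`; the hop names are placeholders, and other platforms print a different format.

```python
import re
import subprocess

def ping_rtt_ms(host, count=3):
    """Average round-trip time to host in ms, or None if unreachable.
    Parses the iputils summary line 'rtt min/avg/max/mdev = a/b/c/d ms'."""
    try:
        proc = subprocess.run(["ping", "-c", str(count), host],
                              capture_output=True, text=True, timeout=30)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None
    match = re.search(r"= [\d.]+/([\d.]+)/", proc.stdout)
    return float(match.group(1)) if match else None

# Hypothetical hops along the client-to-server path.
for hop in ("gateway.example", "core-switch.example", "app-server.example"):
    rtt = ping_rtt_ms(hop)
    print(hop, "->", f"{rtt:.1f} ms" if rtt is not None else "unreachable")
```

Comparing the averages hop by hop localizes where latency accumulates, which is the same information the client relocation approach yields, without moving any hardware.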
Stage 3   :   Implementation Team Structure Collaboration Application Group Business Application Group
Stage 3: Implementation Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Network and application performance is measured. Failure events are recorded. Contingency plans are performed. End-users are well trained. These objectives must be met in order to proceed to the next stage. Stage 3: Implementation Deliverables
To identify the major requirements of the application to be tested, both hardware and software. To identify the need for hardware changes. To identify end users' proficiency in using the application. To ensure the integrity of the application during runtime. Stage 4: Testing Environment Objectives
Business / Collaboration Application Team Network Interactive Service Team Stage 4   :   Testing Environment Team Structure
Stage 4: Testing Environment Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
All software, hardware and network performance is identified. Application integrity and connectivity are measured. Connectivity issues of all tiers are tested and recorded. Enhancements and module revisions are identified. Hardware requirements are identified. End-users' application usage skills are evaluated. These objectives must be met in order to proceed to the next stage. Stage 4: Testing Environment Deliverables
To determine the impact of the application on the live network infrastructure. To verify the end result of the application simulation. To evaluate the reporting performance. To identify the enhancements needed in the application based on the implementation results. Stage 5: Analyzing Baseline Scenario Objectives
Stage 5: Analyzing Baseline Scenario Assessing Application Impact
Stage 5: Analyzing Baseline Scenario Methodology 1 The process of capturing application data that accurately reflects the behavior of the application. Capture and Import Application Packet Traces 2 Analyzing the Application
Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
Provide Diagnoses and Statistics The diagnoses and statistics include the delays on each tier, the packet sizes, protocol delays, network transmission delays, propagation delays and so on. The diagnosis is based on different interpretations of the statistical data. If the value in a diagnosis exceeds its threshold, it is considered a "Bottleneck". If it is close to the threshold, it is considered a "Potential Bottleneck". If it is below the potential bottleneck range, it is considered to be "No Bottleneck". Processing delay bottleneck is the processing time expressed as a percentage of the total response time. This delay represents the time taken by operations within the machine, such as file I/O, CPU time, disk time, or memory access. Protocol overhead bottleneck is the total protocol overhead expressed as a percentage of the total amount of data transferred. Each protocol adds overhead to an application message in the form of headers. Protocols also send packets that do not contain application data, such as ACKs. These packets are also counted as protocol overhead. Chattiness bottleneck is the number of application bytes per application turn. If an application is "chatty", the data sent in each application turn is small. This may cause significant network delays and also processing delays at each tier, since each tier now has to handle many little messages. Network cost of chattiness bottleneck is the total network delay incurred due to application turns represented as a percentage of the total application response time. Applications that send many small packets back and forth incur a network delay. This delay becomes significant if there is a high-latency link. Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
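The threshold logic described above might be sketched as follows. The 90% threshold and the "close to threshold" margin are illustrative assumptions; the tool's actual thresholds are not given here.

```python
# Minimal sketch of the three-way bottleneck classification described above.
# The margin defining "close to the threshold" is an assumed value.

def classify(value, threshold, margin=0.9):
    """Classify a measured value against its diagnostic threshold."""
    if value >= threshold:
        return "Bottleneck"
    if value >= threshold * margin:        # within the assumed margin
        return "Potential Bottleneck"
    return "No Bottleneck"

# Example: processing delay as a percentage of total response time,
# with an assumed 90% threshold.
for pct in (95, 85, 40):
    print(pct, "->", classify(pct, threshold=90))
```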
Provide Diagnoses and Statistics (continued…) Propagation delay bottleneck is the time taken by the packets to propagate across the network, represented as a percentage of the total application response time. Propagation delay is a function of the distance traveled and the speed of light. Device latencies can also add to this bottleneck. Transmission delay bottleneck is the transmission delay caused by line speeds, expressed as a percentage of the total application response time. The transmission delay is a function of the total bytes transmitted and the line speed. Protocol delay bottleneck is the total delay due to protocol effects, represented as a percentage of the total application response time. Examples of protocol effects are TCP flow control, congestion control, delay due to retransmissions, and collisions. Retransmissions bottleneck is the total percentage of packets that were retransmitted. Protocols such as TCP retransmit a packet if they detect a long latency or a packet loss. Retransmission causes delays and additional protocol overhead. TCP also reduces the rate at which applications can send traffic when a retransmission occurs, as a means of congestion control. This causes additional throttling of application traffic. Packet loss or unusual delays that trigger retransmissions can occur as a result of "bursty" application traffic, overflowing queues, misbehaving devices and link or node failures. Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
Provide Diagnoses and Statistics (continued…) TCP windowing bottleneck relates to the bandwidth-delay product of the TCP connection. When an application sends bulk data over a TCP connection, the TCP window size should be large enough to permit TCP to send many packets in a row without having to wait for TCP ACKs. TCP frozen window bottleneck occurs when the advertised TCP receive window has dropped to a value smaller than the Maximum Segment Size (MSS). When this occurs, the sender cannot send any data until the receive window is one MSS or larger. To determine whether the receive window has become larger, the sending side periodically sends one-byte probe packets. The contents of these probe packets depend on the particular implementation, but they are usually sent with an exponential backoff. The common reason for the frozen window is that the application on the receiving side is not taking data from the TCP receive buffer quickly enough. TCP Nagle's algorithm bottleneck indicates that Nagle's algorithm is present and is slowing application response times. Nagle's algorithm is a sending-side algorithm that reduces the number of small packets on the network, thereby increasing router efficiency. In combination with delayed ACKs, however, it can introduce excessive waiting and slow down the application. Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
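The bandwidth-delay product check behind the TCP windowing bottleneck can be illustrated with a quick calculation; the link speed, round-trip time and window size below are assumed figures.

```python
# Sketch of the bandwidth-delay product (BDP) check: the TCP window should
# be at least bandwidth * round-trip time, or the sender stalls waiting
# for ACKs before it can fill the pipe.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes (bits in flight / 8)."""
    return bandwidth_bps * rtt_s / 8

link_bps = 10_000_000      # assumed 10 Mb/s WAN link
rtt = 0.080                # assumed 80 ms round trip
window = 65_535            # classic TCP window without window scaling

needed = bdp_bytes(link_bps, rtt)
print(f"BDP: {needed:,.0f} bytes; window {window:,} bytes ->",
      "windowing bottleneck" if window < needed else "window is sufficient")
```

With these figures the classic 64 KiB window falls short of the pipe, which is why the recommendations below suggest larger windows and TCP options such as window scaling or SACK.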
Recommendations The implications of each diagnosis and our recommendations for correcting the problem are described below: Processing delay – Improve the overall speed of the machine by adding faster processors, faster disks and more memory. Consider revamping the application so it uses machine resources more efficiently. For example, a database application can benefit from indexing, transferring large records at once, and redesigning database queries. Protocol overhead – Consider sending larger application packets. This reduces the amount of header information that the protocol has to add, as there will be fewer application messages. Protocols such as TCP will also reduce the number of ACKs that have to be transmitted. Chattiness – Send fewer, larger application messages. Modify the application logic so that more data is sent in parallel. If a database is fetching one record at a time, try modifying it so that it obtains all the requested records, stores them in a structure, and sends the structure all at once. Network cost of chattiness – If the application is incurring significant network delay due to chattiness, try to eliminate the "chattiness" bottleneck. Consider reducing the transmission and propagation delay between tiers. Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
Recommendations (continued…) Propagation delay – Move the affected tiers closer together. Use intermediate devices that are faster, that is, ones that have a smaller latency. Use a utility program to examine actual network conditions. Transmission delay – Increase the line speed and reduce the number of hops that the messages have to traverse. Use a utility program to examine actual network conditions. Protocol delay – Retransmissions or unusual latencies are the causes of protocol delay. If the protocol is TCP and the application is sending small packets, check whether the application has enabled Nagle's algorithm. This algorithm causes small messages to wait until larger segments are formed for efficient transmission. However, this adversely affects interactive applications that send many little messages back and forth. Connection resets – A reset implies that a connection could not be completed, or that the connection was dropped because the peers could not contact each other. A small number of resets is fairly common for applications such as HTTP, but if there is a large number of resets, check whether there is loss of connectivity among the tier pairs. Retransmissions – These are caused by loss or long delays. Eliminate the cause of the packet loss or the long delay. There are some networks that you have no control over, such as the Internet. Try to use different technologies such as VPN or IP tunneling, or attempt to obtain a higher Quality of Service (QoS) from the ISP. Methodology: Analyzing the Application Stage 5: Analyzing Baseline Scenario
Recommendations (continued…) TCP windowing  – Use larger TCP send and receive windows. These windows should be greater than the bandwidth-delay product for the connection. Use newer versions of TCP that have options such as SACK. Most operating systems allow modification of a select set of TCP parameters. TCP frozen window  – Try to send less data, or have the receiving application retrieve the data more quickly. If the application cannot process all the data at once, consider storing the data in another buffer. Upgrade the receiving computer. TCP Nagle’s algorithm  – Disable Nagle’s algorithm for this application. Rewrite the application so that it sends fewer, larger packets, or does not encounter a TCP delayed ACK. Configure TCP on the receiving host so that it acknowledges every packet it receives. Methodology: Analyzing the Application Stage 5  :   Analyzing Baseline Scenario Succeeding methodologies are explained in detail in Stage 5
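Disabling Nagle's algorithm, as recommended above, is a one-line socket option on most platforms. A minimal sketch using the standard `TCP_NODELAY` option (the socket here is created but not connected, purely to show the option):

```python
import socket

# Minimal sketch: turn off Nagle's algorithm on a TCP socket via
# TCP_NODELAY, so small writes are sent immediately instead of being
# coalesced while waiting for outstanding ACKs.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect (nonzero = disabled
# Nagle). In a real application you would connect and use the socket.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

This trades slightly higher protocol overhead (more small packets) for lower latency, which is the right trade for chatty interactive traffic but not for bulk transfers.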
Stage 5  :   Analyzing Baseline Scenario Team Structure Collaboration Application Group Business Application Group
[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],Stage 5  :   Analyzing Baseline Scenario Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Network resource allocations are identified Application integrity, data connectivity and reporting performance are measured Future application enhancements are identified All plans are created in preparation for the Go Live Stage Stage 5  :   Analyzing Baseline Scenario Deliverables
Stage 5  :   Analyzing Baseline Scenario Methodology 1 The process of capturing application data that accurately reflects the behavior of the application. Capture and Import Application Packet Traces 2 ,[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],Analyzing the Application
[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],Methodology: Analyzing the Application Stage 5  :   Analyzing Baseline Scenario
Provide Diagnoses and Statistics The diagnoses and statistics include the delays on each tier, the packet sizes, protocol delays, network transmission delays, propagation delays, and so on. The diagnosis is based on different interpretations of the statistical data. If the value in a diagnosis exceeds its threshold, it is considered a “Bottleneck”. If it is close to the threshold, it is considered a “Potential Bottleneck”. If it is below the potential-bottleneck range, it is considered “No Bottleneck”. Processing delay bottleneck  is the processing time expressed as a percentage of the total response time. This delay represents the time taken by operations within the machine, such as file I/O, CPU time, disk time, or memory access. Protocol overhead bottleneck  is the total protocol overhead expressed as a percentage of the total amount of data transferred. Each protocol adds overhead to an application message in the form of headers. Protocols also send packets that contain no application data, such as ACKs; these packets are counted as protocol overhead as well. Chattiness bottleneck  is the number of application bytes per application turn. If an application is “chatty”, the data sent in each application turn is small. This may cause significant network delays and also processing delays at each tier, since each tier now has to handle many little messages. Network cost of chattiness bottleneck  is the total network delay incurred due to application turns, represented as a percentage of the total application response time. Applications that send many small packets back and forth incur a network delay. This delay becomes significant if there is a high-latency link. Methodology: Analyzing the Application Stage 5  :   Analyzing Baseline Scenario
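The three-level classification described above can be sketched as a simple rule. The threshold values and the "close to threshold" margin below are illustrative assumptions, not defaults from any particular APM product:

```python
# Sketch of the Bottleneck / Potential Bottleneck / No Bottleneck rule.
# The 0.8 warning margin and the 30% example threshold are assumed
# values chosen only to illustrate the classification.

def diagnose(value, threshold, margin=0.8):
    """Classify a measured statistic against its bottleneck threshold."""
    if value >= threshold:
        return "Bottleneck"
    if value >= threshold * margin:  # within the warning band
        return "Potential Bottleneck"
    return "No Bottleneck"

# Example: chattiness threshold of 30% of response time spent in turns.
high = diagnose(0.35, 0.30)  # "Bottleneck"
near = diagnose(0.25, 0.30)  # "Potential Bottleneck"
low = diagnose(0.10, 0.30)   # "No Bottleneck"
```

In practice each diagnosis (processing delay, protocol overhead, chattiness, and so on) gets its own threshold, but the classification logic is the same across all of them.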
Provide Diagnoses and Statistics (continued…) Propagation delay bottleneck  is the time taken by packets to propagate across the network, represented as a percentage of the total application response time. Propagation delay is a function of the distance traveled and the speed of light. Device latencies can also add to this bottleneck. Transmission delay bottleneck  is the transmission delay caused by line speeds, expressed as a percentage of the total application response time. The transmission delay is a function of the total bytes transmitted and the line speed. Protocol delay bottleneck  is the total delay due to protocol effects, represented as a percentage of the total application response time. Examples of protocol effects are TCP flow control, congestion control, delay due to retransmissions, and collisions. Retransmissions bottleneck  is the total percentage of packets that were retransmitted. Protocols such as TCP retransmit a packet if they detect a long latency or a packet loss. Retransmission causes delays and additional protocol overhead. TCP also reduces the rate at which applications can send traffic when a retransmission occurs, as a means of congestion control; this causes additional throttling of application traffic. Packet loss or unusual delays that trigger retransmissions can occur as a result of “bursty” application traffic, overflowing queues, misbehaving devices, and link or node failures. Methodology: Analyzing the Application Stage 5  :   Analyzing Baseline Scenario
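The two delay definitions above reduce to simple formulas: transmission delay is bits divided by line speed, propagation delay is distance divided by signal speed. A worked example, assuming a hypothetical 1 MB transfer over a 2 Mbit/s link spanning 3000 km of fiber:

```python
# Transmission vs. propagation delay, computed from their definitions.
# The 1 MB payload, 2 Mbit/s line speed, and 3000 km distance are
# hypothetical example figures.

SPEED_IN_FIBER = 2.0e8  # ~2/3 the speed of light, in metres per second

def transmission_delay(num_bytes, line_speed_bps):
    """Time to clock all the bits onto the wire."""
    return (num_bytes * 8) / line_speed_bps

def propagation_delay(distance_m, signal_speed=SPEED_IN_FIBER):
    """Time for a single bit to travel the physical distance."""
    return distance_m / signal_speed

t_tx = transmission_delay(1_000_000, 2_000_000)  # 4.0 s
t_prop = propagation_delay(3_000_000)            # 0.015 s
```

Note the asymmetry in the remedies: a faster line shrinks `t_tx` but leaves `t_prop` untouched, which is why moving tiers closer together is the only fix for a propagation-dominated bottleneck.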
Provide Diagnoses and Statistics (continued…) TCP windowing bottleneck  is the bandwidth-delay product used by the TCP connection. When an application sends bulk data over a TCP connection, the TCP window size should be large enough to permit TCP to send many packets in a row without having to wait for TCP ACKs. TCP frozen window bottleneck  occurs when the advertised TCP receive window has dropped to a value smaller than the Maximum Segment Size (MSS). When this happens, the sender cannot send any data until the receive window is one MSS or larger. To determine whether the receive window has become larger, the sending side periodically sends one-byte probe packets. The contents of these probe packets depend on the particular implementation, but they are usually sent with an exponential backoff. The most common reason for a frozen window is that the application on the receiving side is not taking data from the TCP receive buffer quickly enough. TCP Nagle’s algorithm bottleneck  indicates that Nagle’s algorithm is present and is slowing application response times. Nagle’s algorithm is a sending-side algorithm that reduces the number of small packets on the network, thereby improving network efficiency. However, when combined with TCP delayed ACKs, it can cause excessive waits and slow down the application. Methodology: Analyzing the Application Stage 5  :   Analyzing Baseline Scenario
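The bandwidth-delay product mentioned above is directly computable, and it tells you the minimum window needed to keep the pipe full. A sketch using hypothetical link figures (10 Mbit/s, 80 ms round-trip time):

```python
# Sizing the TCP window from the bandwidth-delay product (BDP).
# Link speed and RTT below are hypothetical example values.

def bandwidth_delay_product(bandwidth_bps, rtt_s):
    """Bytes that must be 'in flight' to keep the link fully utilized."""
    return int(bandwidth_bps * rtt_s / 8)

def window_is_sufficient(window_bytes, bandwidth_bps, rtt_s):
    """True if the TCP window can cover the BDP of this path."""
    return window_bytes >= bandwidth_delay_product(bandwidth_bps, rtt_s)

# 10 Mbit/s with an 80 ms RTT needs a 100,000-byte window; the classic
# 64 KB maximum (without TCP window scaling) cannot fill that pipe.
bdp = bandwidth_delay_product(10_000_000, 0.080)     # 100000 bytes
ok = window_is_sufficient(65535, 10_000_000, 0.080)  # False
```

When the configured window falls short of the BDP, throughput is capped at roughly window/RTT regardless of line speed, which is exactly the "TCP windowing bottleneck" the diagnosis flags.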
Stage 5  :   Analyzing Baseline Scenario Team Structure Collaboration Application Group Business Application Group
[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],Stage 5  :   Analyzing Baseline Scenario Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Network resource allocations are identified Application integrity, data connectivity and reporting performance are measured Future application enhancements are identified All plans are created in preparation for the Go Live Stage These objectives must be met in order to proceed to the next stage Stage 5  :   Analyzing Baseline Scenario Deliverables
To identify the deployment process of the application to the live servers To identify the actual impact of the application deployment to other applications currently running on the network To verify accuracy and credibility of data exchange between client and servers Stage 6  :   Go Live Scenario Objectives
Stage 6  :   Go Live Scenario Team Structure Collaboration Application Group Business Application Group
[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],Stage 6   :   Go Live Scenario Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Recorded result of application and network performance upon deployment Result analysis of the hardware performance Identification of the weak parts of the network Stage 6  :   Go Live Scenario Deliverables
To finalize end result and present the output to  Top Management To document the projects related issues including software documentation and summarization Stage 7  :   Project Closing Objectives
Stage 7  :   Project Closing Team Structure Collaboration Application Group Business Application Group
[object Object],[object Object],[object Object],[object Object],Stage 7  :   Project Closing Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Documentation of the project must be present Project Review Project turnover to CCIS from vendor Stage 7  :   Project Closing Deliverables Establishes action plans for identified additional needs

Más contenido relacionado

La actualidad más candente

From Components To Services
From Components To ServicesFrom Components To Services
From Components To ServicesJames Phillips
 
LinkState Company Profile V3
LinkState Company Profile V3LinkState Company Profile V3
LinkState Company Profile V3Chris Hendrikz
 
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio Analysis
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio AnalysisIntroduction to CAST HIGHLIGHT - Rapid Application Portfolio Analysis
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio AnalysisCAST
 
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...ijseajournal
 
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...RapidValue
 
Re engineering for SaaS & cloud enablement
Re engineering for SaaS & cloud enablementRe engineering for SaaS & cloud enablement
Re engineering for SaaS & cloud enablementEkartha Inc
 
MetaSolv Implementation Services
MetaSolv Implementation ServicesMetaSolv Implementation Services
MetaSolv Implementation ServicesProdapt Solutions
 
Managed It Services
Managed It ServicesManaged It Services
Managed It ServicesGss America
 
Computerized Maintenance Management
Computerized Maintenance Management Computerized Maintenance Management
Computerized Maintenance Management leansavant
 
Rapid Portfolio Analysis powered by CAST Highlight
Rapid Portfolio Analysis powered by CAST HighlightRapid Portfolio Analysis powered by CAST Highlight
Rapid Portfolio Analysis powered by CAST HighlightCAST
 
Application Performance Management: Intelligence for an Optimized WAN
Application Performance Management: Intelligence for an Optimized WANApplication Performance Management: Intelligence for an Optimized WAN
Application Performance Management: Intelligence for an Optimized WANXO Communications
 
T3 Consortium's Performance Center of Excellence
T3 Consortium's Performance Center of ExcellenceT3 Consortium's Performance Center of Excellence
T3 Consortium's Performance Center of Excellenceveehikle
 
Beagle research moving your on-premise contact center to the cloud
Beagle research  moving your on-premise contact center to the cloudBeagle research  moving your on-premise contact center to the cloud
Beagle research moving your on-premise contact center to the clouddebm_madronasg
 

La actualidad más candente (20)

Jon shende fbcs citp q&a
Jon shende fbcs citp q&aJon shende fbcs citp q&a
Jon shende fbcs citp q&a
 
From Components To Services
From Components To ServicesFrom Components To Services
From Components To Services
 
LinkState Company Profile V3
LinkState Company Profile V3LinkState Company Profile V3
LinkState Company Profile V3
 
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio Analysis
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio AnalysisIntroduction to CAST HIGHLIGHT - Rapid Application Portfolio Analysis
Introduction to CAST HIGHLIGHT - Rapid Application Portfolio Analysis
 
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...
COMBINING REUSABLE TEST CASES AND CONTINUOUS SECURITY TESTING FOR REDUCING WE...
 
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...
Point-to-Point vs. MEAP - The Right Approach for an Integrated Mobility Solut...
 
Re engineering for SaaS & cloud enablement
Re engineering for SaaS & cloud enablementRe engineering for SaaS & cloud enablement
Re engineering for SaaS & cloud enablement
 
MetaSolv Implementation Services
MetaSolv Implementation ServicesMetaSolv Implementation Services
MetaSolv Implementation Services
 
Managed It Services
Managed It ServicesManaged It Services
Managed It Services
 
Computerized Maintenance Management
Computerized Maintenance Management Computerized Maintenance Management
Computerized Maintenance Management
 
Pavankumar Kakarla
Pavankumar KakarlaPavankumar Kakarla
Pavankumar Kakarla
 
CAST HIGHLIGHT - Overview & Demos
CAST HIGHLIGHT - Overview & DemosCAST HIGHLIGHT - Overview & Demos
CAST HIGHLIGHT - Overview & Demos
 
Gnanaguru
GnanaguruGnanaguru
Gnanaguru
 
Rapid Portfolio Analysis powered by CAST Highlight
Rapid Portfolio Analysis powered by CAST HighlightRapid Portfolio Analysis powered by CAST Highlight
Rapid Portfolio Analysis powered by CAST Highlight
 
Application Performance Management: Intelligence for an Optimized WAN
Application Performance Management: Intelligence for an Optimized WANApplication Performance Management: Intelligence for an Optimized WAN
Application Performance Management: Intelligence for an Optimized WAN
 
soc
socsoc
soc
 
T3 Consortium's Performance Center of Excellence
T3 Consortium's Performance Center of ExcellenceT3 Consortium's Performance Center of Excellence
T3 Consortium's Performance Center of Excellence
 
Gurpreet_Resume_BillRAFM
Gurpreet_Resume_BillRAFMGurpreet_Resume_BillRAFM
Gurpreet_Resume_BillRAFM
 
Gurpreet_Resume_BillRAFM
Gurpreet_Resume_BillRAFMGurpreet_Resume_BillRAFM
Gurpreet_Resume_BillRAFM
 
Beagle research moving your on-premise contact center to the cloud
Beagle research  moving your on-premise contact center to the cloudBeagle research  moving your on-premise contact center to the cloud
Beagle research moving your on-premise contact center to the cloud
 

Similar a Apq Qms Project Plan

Top 8 Trends in Performance Engineering
Top 8 Trends in Performance EngineeringTop 8 Trends in Performance Engineering
Top 8 Trends in Performance EngineeringConvetit
 
unit 5 cloud.pptx
unit 5 cloud.pptxunit 5 cloud.pptx
unit 5 cloud.pptxMrPrathapG
 
The F5 Networks Application Services Reference Architecture (White Paper)
The F5 Networks Application Services Reference Architecture (White Paper)The F5 Networks Application Services Reference Architecture (White Paper)
The F5 Networks Application Services Reference Architecture (White Paper)F5 Networks
 
Connectivity And Topology Of Wireless Networks
Connectivity And Topology Of Wireless NetworksConnectivity And Topology Of Wireless Networks
Connectivity And Topology Of Wireless NetworksTrina Simmons
 
Title Software Re-Engineering: 3 Strategies for Building Better Applications
Title Software Re-Engineering: 3 Strategies for Building Better ApplicationsTitle Software Re-Engineering: 3 Strategies for Building Better Applications
Title Software Re-Engineering: 3 Strategies for Building Better ApplicationsLucy Zeniffer
 
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docx
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docxRUNNING HEAD Intersession 6 Final Project Projection1Interse.docx
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docxjeanettehully
 
New Model to Achieve Software Quality Assurance (SQA) in Web Application
New Model to Achieve Software Quality Assurance (SQA) in Web ApplicationNew Model to Achieve Software Quality Assurance (SQA) in Web Application
New Model to Achieve Software Quality Assurance (SQA) in Web Applicationijsrd.com
 
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docx
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docxCLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docx
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docxmonicafrancis71118
 
whitepaper_workday_technology_platform_devt_process
whitepaper_workday_technology_platform_devt_processwhitepaper_workday_technology_platform_devt_process
whitepaper_workday_technology_platform_devt_processEric Saraceno
 
The Need for Unified Performance Management
The Need for Unified Performance ManagementThe Need for Unified Performance Management
The Need for Unified Performance ManagementRiverbed Technology
 
Root Cause Detection in a Service-Oriented Architecture
Root Cause Detection in a Service-Oriented ArchitectureRoot Cause Detection in a Service-Oriented Architecture
Root Cause Detection in a Service-Oriented ArchitectureSam Shah
 
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...Maximizing Efficiency and Productivity: Transform Your Business with Applicat...
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...basilmph
 
A research on- Sales force Project- documentation
A research on- Sales force Project- documentationA research on- Sales force Project- documentation
A research on- Sales force Project- documentationPasupathi Ganesan
 
Impact of cloud services on software development life
Impact of cloud services on software development life Impact of cloud services on software development life
Impact of cloud services on software development life Mohamed M. Yazji
 
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...Cognizant
 
M.S. Dissertation in Salesforce on Force.com
M.S. Dissertation in Salesforce on Force.comM.S. Dissertation in Salesforce on Force.com
M.S. Dissertation in Salesforce on Force.comArun Somu Panneerselvam
 

Similar a Apq Qms Project Plan (20)

Top 8 Trends in Performance Engineering
Top 8 Trends in Performance EngineeringTop 8 Trends in Performance Engineering
Top 8 Trends in Performance Engineering
 
unit 5 cloud.pptx
unit 5 cloud.pptxunit 5 cloud.pptx
unit 5 cloud.pptx
 
The F5 Networks Application Services Reference Architecture (White Paper)
The F5 Networks Application Services Reference Architecture (White Paper)The F5 Networks Application Services Reference Architecture (White Paper)
The F5 Networks Application Services Reference Architecture (White Paper)
 
Connectivity And Topology Of Wireless Networks
Connectivity And Topology Of Wireless NetworksConnectivity And Topology Of Wireless Networks
Connectivity And Topology Of Wireless Networks
 
Title Software Re-Engineering: 3 Strategies for Building Better Applications
Title Software Re-Engineering: 3 Strategies for Building Better ApplicationsTitle Software Re-Engineering: 3 Strategies for Building Better Applications
Title Software Re-Engineering: 3 Strategies for Building Better Applications
 
internship paper
internship paperinternship paper
internship paper
 
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docx
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docxRUNNING HEAD Intersession 6 Final Project Projection1Interse.docx
RUNNING HEAD Intersession 6 Final Project Projection1Interse.docx
 
New Model to Achieve Software Quality Assurance (SQA) in Web Application
New Model to Achieve Software Quality Assurance (SQA) in Web ApplicationNew Model to Achieve Software Quality Assurance (SQA) in Web Application
New Model to Achieve Software Quality Assurance (SQA) in Web Application
 
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docx
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docxCLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docx
CLASS NAMEMIS600PROFESSORS NAME STUDENTS NAME PRO.docx
 
whitepaper_workday_technology_platform_devt_process
whitepaper_workday_technology_platform_devt_processwhitepaper_workday_technology_platform_devt_process
whitepaper_workday_technology_platform_devt_process
 
The Need for Unified Performance Management
The Need for Unified Performance ManagementThe Need for Unified Performance Management
The Need for Unified Performance Management
 
Root Cause Detection in a Service-Oriented Architecture
Root Cause Detection in a Service-Oriented ArchitectureRoot Cause Detection in a Service-Oriented Architecture
Root Cause Detection in a Service-Oriented Architecture
 
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...Maximizing Efficiency and Productivity: Transform Your Business with Applicat...
Maximizing Efficiency and Productivity: Transform Your Business with Applicat...
 
KrishnaThorati
KrishnaThoratiKrishnaThorati
KrishnaThorati
 
Application Rationalization | Torry Harris Whitepaper
Application Rationalization | Torry Harris WhitepaperApplication Rationalization | Torry Harris Whitepaper
Application Rationalization | Torry Harris Whitepaper
 
A research on- Sales force Project- documentation
A research on- Sales force Project- documentationA research on- Sales force Project- documentation
A research on- Sales force Project- documentation
 
Impact of cloud services on software development life
Impact of cloud services on software development life Impact of cloud services on software development life
Impact of cloud services on software development life
 
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...
Applying a Comprehensive, Automated Assurance Framework to Validate Cloud Rea...
 
C0371019027
C0371019027C0371019027
C0371019027
 
M.S. Dissertation in Salesforce on Force.com
M.S. Dissertation in Salesforce on Force.comM.S. Dissertation in Salesforce on Force.com
M.S. Dissertation in Salesforce on Force.com
 

Último

IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfDaniel Santiago Silva Capera
 
Introduction to Quantum Computing
Introduction to Quantum ComputingIntroduction to Quantum Computing
Introduction to Quantum ComputingGDSC PJATK
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaborationbruanjhuli
 
UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7DianaGray10
 
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IES VE
 
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostKubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostMatt Ray
 
GenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation IncGenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation IncObject Automation
 
Babel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptxBabel Compiler - Transforming JavaScript for All Browsers.pptx
Babel Compiler - Transforming JavaScript for All Browsers.pptxYounusS2
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Commit University
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1DianaGray10
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UbiTrack UK
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxGDSC PJATK
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemAsko Soukka
 
Designing A Time bound resource download URL
Designing A Time bound resource download URLDesigning A Time bound resource download URL
Designing A Time bound resource download URLRuncy Oommen
 
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdf
20200723_insight_release_plan_v6.pdf20200723_insight_release_plan_v6.pdfJamie (Taka) Wang
 
Digital magic. A small project for controlling smart light bulbs.
Digital magic. A small project for controlling smart light bulbs.Digital magic. A small project for controlling smart light bulbs.
Digital magic. A small project for controlling smart light bulbs.francesco barbera
 
Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1DianaGray10
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...Aggregage
 

Último (20)

IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
 
Introduction to Quantum Computing
Introduction to Quantum ComputingIntroduction to Quantum Computing
Introduction to Quantum Computing
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
 
UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7
 
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
 
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCostKubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
KubeConEU24-Monitoring Kubernetes and Cloud Spend with OpenCost
 
GenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation IncGenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation Inc
 
Babel Compiler - Transforming JavaScript for All Browsers.pptx
Apq Qms Project Plan
  • 1. APPLICATION PERFORMANCE MANAGEMENT/QUALITY MANAGEMENT SYSTEM (APM/QMS) in PSHRC Eng. Mohammad Al-Nofaie Network Performance Engineer Center of Computer & Info. Systems (CCIS), PSHRC
  • 2.
  • 3. VISION To improve the quality of network performance through advanced communication services, giving authorized users equal access to state-of-the-art technology MISSION To provide authorized users the highest quality and technologically advanced end-user services
  • 4.
  • 6.
  • 7.
  • 8. Stage 1 : Initiating and Preparation Application Architecture The goal of successful application architecture is to explore the entire business and define an application and infrastructure framework that has the potential of delivering workable solutions for the foreseeable future. The key is to identify the business aspects that are core and the others that might change significantly. This frames the risk when looking at the specific areas to support. With a solid business perspective, current technologies and future science can be assessed. Although new technologies might stimulate new business, technologies are the tools, not the goal - business is the key. The results should support business growth or shrinkage, and replacement of application and technology components over time. Change is a constant - the architecture's aim is not just to withstand it but also to enable it. The exact structure is not important but the focus must be correct and the framework must be appropriately flexible to evolve. The enterprise architecture will also provide the framing and guidance for the next levels of architecture and design.
  • 9. Stage 1 : Initiating and Preparation Application Multi-tier Multi-tier applications enable enterprises to share information with, and permit collaboration among, employees, customers, and business partners. A typical multi-tier application has three tiers: a front end that performs authentication and serves as an interface to the user, a middle tier that handles authorization and business logic, and a back end that acts as a store for information.
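The three tiers described above can be sketched as a minimal Python program; the function names and the token check are illustrative assumptions, not part of any actual PSHRC application:

```python
# Minimal sketch of a three-tier application: a front end that
# authenticates, a middle tier that authorizes and applies business
# logic, and a back end that stores information.

BACK_END = {}  # back end: the information store


def back_end_put(key, value):
    """Back-end tier: persist a record."""
    BACK_END[key] = value
    return "stored"


def middle_tier(action, key, value=None):
    """Middle tier: authorization and business logic."""
    if action != "put":
        raise ValueError("unsupported action")
    return back_end_put(key, value)


def front_end(credentials, action, key, value=None):
    """Front end: authentication and user interface."""
    if credentials != "valid-token":  # hypothetical credential check
        return "denied"
    return middle_tier(action, key, value)


print(front_end("valid-token", "put", "record1", "data"))  # stored
print(front_end("wrong-token", "put", "record1", "data"))  # denied
```

Each tier only talks to its neighbor, which mirrors how the tiers would be separated across servers in the deployed application.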
  • 10.
  • 11. Stage 1 : Initiating and Preparation Service Level Objective Service Level Objectives (SLOs) are a key element of a Service Level Agreement between a Service Provider and a customer. SLOs are agreed upon as a means of measuring the performance of the Service Provider and are outlined as a way of avoiding disputes between the two parties based on misunderstanding. An SLO may be composed of one or more quality-of-service (QoS) measurements that are combined to produce the SLO achievement value. As an example, an availability SLO may depend on multiple components, each of which may have a QoS availability measurement. The combination of QoS measures into an SLO achievement value will depend on the nature and architecture of the service. SLOs must be: Attainable, Measurable, Understandable, Meaningful, Controllable, Affordable, Mutually acceptable Service Level Commitment No more than 0.2% application and network errors (at least 99.8% error free), response procedures to system failures within 5 minutes of failure notification, a Disaster Recovery Plan (DRP), automated monitoring of server availability, daily back-ups of critical data, and confidentiality and security of data.
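The availability example above (an SLO achievement value combined from per-component QoS measurements) can be sketched in Python; the component values and the serial-dependency model are illustrative assumptions, not figures from this project:

```python
def combined_availability(component_availabilities):
    """Availability of a service whose components are all required
    (serial dependency): the product of the individual component
    availabilities."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result


# Hypothetical three-tier service: front end, middle tier, back end.
tiers = [0.999, 0.9995, 0.998]
slo_achievement = combined_availability(tiers)
print(f"Combined availability: {slo_achievement:.4%}")
# Compare against the SLO target, e.g. the 99.8% error-free commitment.
print("SLO met" if slo_achievement >= 0.998 else "SLO missed")
```

Note that three components which each meet 99.8% individually still fail a 99.8% combined target, which is why the combination rule must be agreed upon up front.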
  • 12. Stage 1 : Initiating and Preparation External Issues Affecting Performance
  • 13. Cisco Service-Oriented Network Architecture (SONA) Framework Stage 1 : Initiating and Preparation Framework
  • 14. Cisco Service-Oriented Network Architecture (SONA) Framework Application Layer This layer contains the business applications and collaborative applications that use interactive services to operate more efficiently or can be deployed quicker and with lower integration costs. Stage 1 : Initiating and Preparation Framework
  • 15. This layer is a full architecture of several network technologies working together to create functionality that can be used by multiple applications across the network. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 16. Security Services Ensures that all aspects of the network work together so that it is secured pervasively from the edge to the core, addressing multiple threats, from passive attacks such as viruses to active attacks, and providing segmentation of data types. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 17. Mobility Services Allows users to access network resources regardless of their physical location but includes more than simple wireless devices. It is also the interaction through the network to allow for seamless layer three mobility and rapid re-association and forwarding of voice and video content. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 18. Storage Services Provides distributed and virtual storage across the infrastructure enabling additional services such as backup and translational functionality usually requiring additional media servers that need to be separately maintained. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 19. Voice and Collaboration Services Delivers the foundation by which voice and video streaming can be carried across the network with a high degree of quality while interacting with different data systems all working together as a full service. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 20. Compute Services Connects and virtualizes compute resources based on the application helping to provide cost effective business continuity as well as a dislocation of specific applications to specific servers. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 21. Identity Services Maps resources and policies to the user and device for use both by security services and to be used to create preferences for users for collaborative services. Identity service is also utilized by multiple applications to provide single sign-on capabilities. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
  • 22. This layer is where all the IT resources are interconnected across a converged network foundation designed as a complete architecture to interoperate with all advanced services, across all places in the network, without requiring re-architecture or forklift upgrades. Cisco Service-Oriented Network Architecture (SONA) Framework Networked Infrastructure Layer Stage 1 : Initiating and Preparation Framework
  • 23. The network group is responsible for analyzing and generating reports about the current network infrastructure. After the analysis and report generation, both teams will discuss every aspect of the problems and provide solutions Business / Collaboration Application Team Network Interactive Service Team Stage 1 : Initiating and Preparation Team Structure
  • 24.
  • 25. Existing and future applications must be declared Current network status is identified Remaining network resources must be elaborated and proposed network upgrade solutions are formulated The objectives must be met in order to proceed to the next stage Stage 1 : Initiating and Preparation Deliverables
  • 26. To identify and understand the current network environment and possible impact during application deployment To understand and recommend information with regard to cost justification, project initiation and execution Stage 2 : Planning Objectives
  • 27. Business / Collaboration Application Team Network Interactive Service Team Stage 2 : Planning Team Structure
  • 28.
  • 29. Current load of the network and all resource consumption of the application must be declared Must identify all applications currently running in the network that might be affected by the application deployment A contingency plan must be prepared for system and application failure Cost analysis for system upgrades is prepared The objectives must be met in order to proceed to the next stage Stage 2 : Planning Deliverables
  • 30. To identify the major requirements of the application to be tested, both hardware and software To identify the need for hardware changes To identify end users' proficiency in using the application To ensure the integrity of the application during runtime Stage 3 : Testing Environment Objectives
  • 31. To identify the application's performance To identify the network resource consumption To identify the integrity of the contingency plan during software and hardware failure events To provide information from the tools which measure the application and network performance. Stage 3 : Implementation Objectives
  • 32. Stage 3 : Implementation 4 Stages of Implementation IT Guru IT GURU does the following: 1) Diagnose – Visualize the network, traffic flows and application transactions. Quickly determine the root cause of performance problems (server, network or client). Audit compliance with network security policies. 2) Validate Changes Prior to Implementation – Test network configurations before implementation, right-size capacity upgrades, analyze system upgrades, consolidations and relocations. 3) Plan Ahead for Growth and High Availability – Establish budgets with quantitative justification, plan upgrades for growth or new facilities, optimize the deployment of new technologies and mission-critical applications. We will evaluate three (3) products under IT Guru, namely: CISCO Works, HP Manager and Sniffer Pro. 1 Provides a Virtual Network Environment that models the behavior of your entire network, including its routers, switches, protocols, systems and individual applications. By working in the Virtual Network Environment, IT Managers, network and system planners and operations staff are empowered to more effectively diagnose difficult problems, validate changes before they are implemented and plan for future scenarios including growth and failure.
  • 33. Stage 3 : Implementation 1) Capture (Application Traces) – Capture a “fingerprint” of the application transaction as it traverses the infrastructure. 2) Visualize (Transactions) – Visualize application transactions at both the application level and the network packet level. Understand the interactions and dependencies among clients, the network, application servers and database servers. 3) Diagnose (Performance Problems) – Identify and diagnose performance bottlenecks. Decode captured applications that cause unacceptable processing delays. 4) Validate (Solutions) – Quickly evaluate the impact of changing bandwidth, protocol settings, application behavior, server speed and network congestion on end-to-end response times. 2 Performance of networked applications depends on complex interactions among applications, servers and networks. IT organizations need a detailed, quantitative understanding of these interactions to efficiently and cost-effectively troubleshoot and deploy applications. ACE directly addresses these challenges. 4 Stages of Implementation ACE (Application Characterization Environment)
  • 34. Stage 3 : Implementation 3 Provides real-time performance analysis of complex applications by monitoring system and application metrics within each server across all tiers. Panorama automatically spots abnormal vs. normal behavior with advanced deviation tracking and correlation technologies. It automates the otherwise tedious analysis of thousands of application and system metrics across multiple tiers to identify sources of performance problems or potential choke-points. 4 Stages of Implementation Panorama (Real-time Application Analytics)
  • 35. Stage 3 : Implementation 4 An application service level monitoring solution that provides visibility into interdependent application and infrastructure components, and quantifies SLA compliance. SLA Commander™ employs synthetic transactions to monitor the response time and availability of web applications as seen by end-users, proactively alerting IT operations teams when performance thresholds are exceeded. SLA Commander integrates with OPNET's ACE™ to enable the in-depth analysis of problems that are intermittent or cannot easily be reproduced. 4 Stages of Implementation SLA Commander Key Features • Automated, around-the-clock application monitoring with threshold-based alarms • Convenient web-based dashboard that displays application service levels, enabling at-a-glance identification of problem areas • Comprehensive service model that maps infrastructure and application components to a business service • Early warning alerts to advise support teams of performance degradation • Drill-down analysis into poorly performing services to isolate faults to specific components • Intuitive authoring environment to create test scripts without programming, by recording a user's browser activity • High-fidelity browser playback of scripted transactions • Integration with OPNET's free ACE™ Capture Agents to automatically capture and archive packet traces of problematic transactions for subsequent analysis in ACE
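The synthetic-transaction monitoring described above (timing a scripted transaction and alerting when a threshold is exceeded) can be sketched as follows; the function names are illustrative and this is not OPNET's actual API:

```python
import time


def probe(transaction, threshold_s):
    """Run one synthetic transaction, measure its response time, and set
    an alert flag when availability fails or the response-time threshold
    is exceeded -- the core loop of threshold-based SLA monitoring."""
    start = time.monotonic()
    try:
        transaction()
        ok = True
    except Exception:
        ok = False  # availability failure
    elapsed = time.monotonic() - start
    alert = (not ok) or elapsed > threshold_s
    return {"available": ok, "response_s": elapsed, "alert": alert}


# Example: a stand-in transaction that simulates a 50 ms web request.
result = probe(lambda: time.sleep(0.05), threshold_s=0.2)
print(result["available"], result["alert"])  # True False
```

A real monitor would run this on a schedule against recorded browser transactions and feed alerts to the operations dashboard.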
  • 36. Stage 3 : Implementation Assessing Application Networkability Workflow
  • 37.
  • 38. 3 The network impact can be studied by observing the effect of changing network parameters (bandwidth, latency, packet loss, link utilization, TCP window size, etc.) on the application response time. For example, plot the application response time against any one parameter while keeping the others fixed. In general, the application response time should decrease if you increase bandwidth and/or reduce packet loss, link utilization and latency. Study Network Impact Stage 3 : Implementation Methodology 4 Changes in the application behavior will cause changes in the underlying network data exchange. Modifying the number of application turns, application bytes, and the processing times on relevant tiers will produce a data exchange pattern that reflects the application behavior. Modify Application Characteristics
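The parameter study above can be illustrated with a toy first-order response-time model; this is a simplification for intuition only, not what the simulator actually computes (packet loss and congestion effects are omitted, and all numbers are made up):

```python
def response_time(app_bytes, turns, bandwidth_bps, latency_s, processing_s):
    """First-order estimate of application response time:
    transmission delay + per-turn round-trip latency + processing time."""
    transmission = app_bytes * 8 / bandwidth_bps
    network = turns * 2 * latency_s  # each application turn costs a round trip
    return transmission + network + processing_s


# Vary bandwidth while keeping the other parameters fixed, as suggested.
for mbps in (1, 10, 100):
    t = response_time(500_000, turns=40, bandwidth_bps=mbps * 1_000_000,
                      latency_s=0.02, processing_s=0.5)
    print(f"{mbps:>3} Mb/s -> {t:.2f} s")
```

Even this toy model shows diminishing returns: beyond 10 Mb/s the turn-driven latency term dominates, which is exactly the chattiness effect analyzed in Stage 5.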
  • 39. 5 The following are the steps in simulating the application: Auto-Create the Basic Topology The model and configuration of the topology are based on the number of tiers. Specifying the LAN segments will help to specify other parameters such as loss and latency, in addition to the WAN technology (IP, ATM or Frame Relay) and the bandwidth. Selecting the appropriate device models enables you to capture application packet traces in the simulation in the same way you capture protocol traces in the real world. Determine Propagation Delay and Latency The discrete event simulation's default method of determining the propagation delay, using a “line-of-sight” geographic distance, may often give a propagation delay that is too low because, for example, the actual network links may not follow a true line of sight. Therefore, it is often important to explicitly set latency/propagation attribute values when simulating application traffic, especially when doing application response time studies over TCP. Simulate the Application Stage 3 : Implementation Methodology
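The line-of-sight default mentioned above amounts to a simple distance-over-propagation-speed calculation; a sketch, assuming a typical ~2×10⁸ m/s signal speed in fiber (an assumption, not a value from the simulator):

```python
SPEED_IN_FIBER = 2.0e8  # m/s, roughly 2/3 the speed of light (assumed)


def line_of_sight_propagation(distance_km):
    """One-way propagation delay from geographic distance -- the
    simulator's default, which the text warns is often an underestimate
    because real links rarely follow a true line of sight."""
    return distance_km * 1000 / SPEED_IN_FIBER


d = line_of_sight_propagation(1200)  # e.g. a 1200 km WAN link
print(f"{d * 1000:.1f} ms one way")  # 6.0 ms
```

If measured latency is noticeably higher than this estimate, that gap is what the explicit latency/propagation attributes should capture.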
  • 40. Tune Protocols and Set Parameters The parameters of the client and servers are typically the most important. In general, depending on the protocols and devices you have chosen, there may be many parameters. Advanced versions of the device models give access to the broadest range of parameters. Parameters for TCP are often the most influential when working with applications that make use of this protocol. The advanced versions of the client and server models provide a full complement of TCP parameters that can be controlled. Understand the Important TCP Parameters “TCP Delayed Acknowledgement Mechanism” controls how delayed “dataless” acknowledgements are sent by the TCP connection process. Note that TCP does not send an ACK the instant that it receives data. Instead, it delays the ACK, hoping that it will have data to send with it (called “ACK piggybacking”). “TCP Maximum Acknowledgement Delay” is the longest time that a TCP connection process waits to send an ACK after receiving data. “TCP Receive Buffer Usage Threshold” affects the window size of the TCP connection. The window size is the amount of space available in the receive buffer. The usage threshold determines when data should be transferred from TCP's receive buffer to the application, thereby allowing the receive window to open further. Stage 3 : Implementation Methodology: Simulate the Application
  • 41. Run the Initial Simulation and Get Results Choosing a Simulation Duration and Selecting Statistics obtains the measurement of the response time, allowing you to confirm that each element is behaving as expected. These should include application response time, server load and task-processing-related statistics, link utilization, and sent and received data throughputs for the application. Running the Initial Simulation and Validating the Application Response Time The overall simulated response time for the application's transactions may not match what you observe on your actual network because of several factors, such as the network not being fully modeled or the protocol parameters not yet being tuned. The packet analyzer captures a trace of the application task that is being simulated. Importing the application packet trace allows you to compare the statistics and diagnoses to those that were originally imported from a live network. In most cases, the results will match closely. The results will not match if the protocol parameters are not configured appropriately or do not take into account the effect of other users. Represent the Server Servers are highly complex devices composed of numerous subsystems that perform tasks with varying degrees of concurrency. Furthermore, the behavior of various applications and operating systems varies greatly from vendor to vendor, and even from revision to revision, due to patches and upgrades. As a result, creating models of server performance can be difficult, but it is easier if the models built are not overly complex. Stage 3 : Implementation Methodology: Simulate the Application
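The receive-buffer parameter discussed above has a direct counterpart on a real host: the socket receive buffer bounds the window the receiver can advertise. A minimal sketch using Python's standard `socket` module (exact semantics, such as the kernel doubling the requested size, vary by operating system):

```python
import socket

# Request a larger TCP receive buffer on a socket; the buffer size bounds
# the advertised TCP receive window.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
print("SO_RCVBUF now:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```

In the simulation these values are model attributes; on the deployed servers they would be set per socket or via OS-level tuning.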
  • 42. 6 To model the effect of other users and traffic sources, be sure to create an appropriate load on the various components in the path. While it exceeds the scope of this methodology, obtaining load information is often done using network performance management tools that monitor statistics gathered by agents in the network. Run Simulations and Get Results Once the topology is built, the effect of other users is included, and all the relevant protocol parameters are tuned, run the simulation and obtain results. Troubleshoot Application Response Times Load and possible congestion in the network can be the source of delay when the simulated response time does not match that of the actual network. Congestion is indicated by repeated sequence numbers: the retransmission of packets that are being dropped can be a significant contributor to lagging application response times. Client Relocation Approach Moving the client to various locations is an effective approach to locating the source of additional delay in the path between client and server. By “plugging” the client into different locations along the path and taking response time measurements, you can obtain an estimate of the contribution of each segment of the path to the overall response time. Model the Effect of Other Users and Traffic Sources Stage 3 : Implementation Methodology
  • 43. Ping Approach Instead of physically moving the client to different locations, the “Ping” command can determine the round-trip times from the client to other components in the path, provided that those components also use the IP protocol. The round-trip times give an idea of where latency lies in the path. Stage 3 : Implementation Methodology: Model the Effect of Other Users and Traffic Sources 7 Results can be analyzed by viewing the output of the simulation in the form of graphs and statistics. These results allow you to iteratively construct what-if scenarios and study the impact of the changes on the application. Analyze Results 8 Reports are used to demonstrate the applications' performance test results and allow collaborators to understand more about them. Visual displays and graphs are the essential report design elements for demonstrating the key findings effectively. Generate Reports
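Where ICMP ping is blocked, a similar round-trip estimate can be taken by timing a TCP handshake, since the SYN/SYN-ACK exchange is itself one round trip. A sketch (the host names in the commented loop are hypothetical placeholders):

```python
import socket
import time


def tcp_rtt(host, port, timeout=2.0):
    """Approximate the round-trip time to a component by timing a TCP
    connection handshake -- an alternative to ICMP ping."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start


# Probe each component along the client-server path (hosts are examples):
# for host in ("gateway", "middle-tier", "db-server"):
#     print(host, f"{tcp_rtt(host, 80) * 1000:.1f} ms")
```

As with ping, plotting these per-hop times shows which path segment contributes the most latency.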
  • 44. Stage 3 : Implementation Team Structure Collaboration Application Group Business Application Group
  • 45.
  • 46. Network and application performance are measured Failure events are recorded Contingency plans are performed End-users are well trained The objectives must be met in order to proceed to the next stage Stage 3 : Implementation Deliverables
  • 47. To identify the major requirements of the application to be tested, both hardware and software To identify the need for hardware changes To identify end users' proficiency in using the application To ensure the integrity of the application during runtime Stage 4 : Testing Environment Objectives
  • 48. Business / Collaboration Application Team Network Interactive Service Team Stage 4 : Testing Environment Team Structure
  • 49.
  • 50. All software, hardware and network performance are identified Application integrity and connectivity are measured Connectivity issues of all tiers are tested and recorded Enhancements and module revisions are identified Hardware requirements are identified End-users' application usage skills are evaluated The objectives must be met in order to proceed to the next stage Stage 4 : Testing Environment Deliverables
  • 51. To determine the impact of the application into the live network infrastructure To verify the end result of the application simulation To evaluate the reporting performance To identify the enhancements needed into the application based on the implementation result Stage 5 : Analyzing Baseline Scenario Objectives
  • 52. Stage 5 : Analyzing Baseline Scenario Accessing Application Impact
  • 53.
  • 54.
  • 55. Provide Diagnoses and Statistics The diagnoses and statistics include the delays on each tier, the packet sizes, protocol delays, network transmission delays, propagation delays and so on. The diagnosis is based on different interpretations of the statistical data. If the value in a diagnosis exceeds its threshold, it is considered a “Bottleneck”. If it is close to the threshold, it is considered a “Potential Bottleneck”. If it is below the potential bottleneck range, it is considered to be “No Bottleneck”. Processing delay bottleneck is the processing time expressed as a percentage of the total response time. This delay represents the time taken by operations within the machine, such as file I/O, CPU time, disk time, or memory access. Protocol overhead bottleneck is the total protocol overhead expressed as a percentage of the total amount of data transferred. Each protocol adds overhead to an application message in the form of headers. Protocols also send packets that do not contain application data, such as ACKs. These packets are also counted as protocol overhead. Chattiness bottleneck is the number of application bytes per application turn. If an application is “chatty”, the data sent in each application turn is small. This may cause significant network delays and also processing delays at each tier, since each tier now has to handle many little messages. Network cost of chattiness bottleneck is the total network delay incurred due to application turns, represented as a percentage of the total application response time. Applications that send many small packets back and forth incur a network delay. This delay becomes significant if there is a high-latency link. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
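The three-way classification above (Bottleneck / Potential Bottleneck / No Bottleneck) can be sketched as a small function; the 10% margin below the threshold is an assumption for illustration, since the tool's actual margins are not specified here:

```python
def classify(value, threshold, margin=0.10):
    """Map a measured statistic to the diagnosis categories described in
    the text: 'Bottleneck' above the threshold, 'Potential Bottleneck'
    within a margin below it, otherwise 'No Bottleneck'."""
    if value > threshold:
        return "Bottleneck"
    if value > threshold * (1 - margin):
        return "Potential Bottleneck"
    return "No Bottleneck"


# Example: processing delay as a percentage of total response time,
# against a hypothetical 80% threshold.
for pct in (82, 74, 30):
    print(pct, "->", classify(pct, threshold=80))
```

Each of the bottleneck statistics that follow would be run through a rule of this shape, with its own threshold.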
  • 56. Provide Diagnoses and Statistics (continued…) Propagation delay bottleneck is the time taken by the packets to propagate across the network, represented as a percentage of the total application response time. Propagation delay is a function of the distance traveled and the speed of light. Device latencies can also add to this bottleneck. Transmission delay bottleneck is the transmission delay caused by line speeds, expressed as a percentage of the total application response time. The transmission delay is a function of the total bytes transmitted and the line speed. Protocol delay bottleneck is the total delay due to protocol effects, represented as a percentage of the total application response time. Examples of protocol effects are TCP flow control, congestion control, delay due to retransmissions, and collisions. Retransmissions bottleneck is the total percentage of packets that were retransmitted. Protocols such as TCP retransmit a packet if they detect a long latency or a packet loss. Retransmission causes delays and additional protocol overhead. TCP also reduces the rate at which applications can send traffic when a retransmission occurs, as a means of congestion control. This causes additional throttling of application traffic. Packet loss or unusual delays that trigger retransmissions can occur as a result of “bursty” application traffic, overflowing queues, misbehaving devices and link or node failures. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  • 57. Provide Diagnoses and Statistics (continued…) TCP windowing bottleneck relates to the bandwidth-delay product of the TCP connection. When an application sends bulk data over a TCP connection, the TCP window size should be large enough to permit TCP to send many packets in a row without having to wait for TCP ACKs. TCP frozen window bottleneck occurs when the advertised TCP Receive Window has dropped to a value smaller than the Maximum Segment Size (MSS). When this occurs, the sender cannot send any data until the receive window is one MSS or larger. To determine if the receive window has become larger, the sending side periodically sends one-byte probe packets. The contents of these probe packets depend on the particular implementation, but they are usually sent with an exponential backoff. The common reason for a frozen window is that the application on the receiving side is not taking data from the TCP receive buffer quickly enough. TCP Nagle's algorithm bottleneck indicates that Nagle's algorithm is present and is slowing application response times. Nagle's algorithm is a sending-side algorithm that reduces the number of small packets on the network, thereby increasing router efficiency. However, its interaction with delayed ACKs can stall small writes and slow down the application. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
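The bandwidth-delay product behind the TCP windowing bottleneck is simple arithmetic; a sketch with an assumed example link (the 10 Mb/s and 80 ms figures are illustrative, not measurements from this project):

```python
def min_tcp_window(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the minimum TCP window (in bytes) needed
    to keep the link full without stalling to wait for ACKs."""
    return bandwidth_bps * rtt_s / 8


# Example: a 10 Mb/s WAN link with an 80 ms round-trip time.
bdp = min_tcp_window(10_000_000, 0.080)
print(f"Required window: {bdp:.0f} bytes")  # 100000 bytes
# A classic 64 KB window (65535 bytes) would cap throughput below line rate.
```

This is the calculation behind the Stage 5 recommendation to make send and receive windows larger than the connection's bandwidth-delay product.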
  • 58. Recommendations The implications of each diagnosis and our recommendations for correcting the problem are described below: Processing delay – Improve overall speed of the machine by adding faster processors, faster disks and more memory. Consider revamping the application so it uses machine resources more efficiently; e.g., a database application can benefit from indexing, transferring large records at once, and redesigning database queries. Protocol overhead – Consider sending larger application packets. This reduces the amount of header information that the protocol has to add, as there will be fewer application messages. Protocols such as TCP will also reduce the number of ACKs that have to be transmitted. Chattiness – Send fewer small application messages. Modify the application logic so that more data is sent in parallel. If a database is fetching one record at a time, try modifying it so that it obtains all the requested records, stores them in a structure, and sends the structure all at once. Network cost of chattiness – If the application is incurring significant network delay due to chattiness, try to eliminate the “chattiness” bottleneck. Consider reducing the transmission and propagation delay between tiers. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  • 59. Recommendations (continued…) Propagation delay – Move the affected tiers closer together. Use intermediate devices that are faster, that is, ones that have a smaller latency. Use a utility program to examine actual network conditions. Transmission delay – Increase the line speed and reduce the number of hops that the messages have to traverse. Use a utility program to examine actual network conditions. Protocol delay – Retransmissions or unusual latencies are the causes of protocol delay. If the protocol is TCP and the application is sending small packets, check to see if the application has enabled Nagle's algorithm. This algorithm causes small messages to wait until larger segments are formed for efficient transmission. However, this adversely affects interactive applications that send many little messages back and forth. Connection resets – A reset implies that a connection could not be completed, or the connection was disconnected because the peers could not contact each other. A small number of resets is fairly common for applications such as HTTP, but if there is a large number of resets, check if there is a loss of connectivity among the tier pairs. Retransmissions – These are caused by loss or long delays. Eliminate the cause of the packet loss or the long delay. There are some networks that you have no control over, such as the Internet. Try to use different technologies such as VPN or IP tunneling, or attempt to obtain a higher Quality of Service (QoS) from the ISP. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  • 60. Recommendations (continued…) TCP windowing – Use larger TCP send and receive windows. These windows should be greater than the bandwidth-delay product for the connection. Use newer versions of TCP that have options such as SACK. Most operating systems allow modification of a select set of TCP parameters. TCP frozen window – Try to send less data, and have the receiving application retrieve the data quickly. If the application cannot process all the data at once, consider storing the data in another buffer. Upgrade the receiving computer. TCP Nagle's algorithm – Disable Nagle's algorithm for this application. Rewrite the application such that it sends fewer, larger packets, or does not encounter a TCP delayed ACK. Configure TCP on the receiving host so that TCP acknowledges every packet it receives. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario Succeeding methodologies are explained in detail in Stage 5
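The "disable Nagle's algorithm" recommendation has a standard per-socket form: setting `TCP_NODELAY` so small writes are sent immediately instead of being coalesced. A minimal sketch using Python's standard `socket` module:

```python
import socket

# Disable Nagle's algorithm on a socket: small writes go out immediately
# rather than waiting to be coalesced into larger segments.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print("Nagle disabled:",
      bool(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)))
s.close()
```

This trades some extra small packets on the wire for lower latency, which is the right trade for the interactive, chatty applications diagnosed above.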
  • 61. Stage 5 : Analyzing Baseline Scenario
Team Structure
Collaboration: Application Group, Business Application Group
  • 62.
  • 63. Stage 5 : Analyzing Baseline Scenario
Deliverables
Network resource allocations are identified
The application’s integrity, data connectivity, and reporting performance are measured
Future application enhancements are identified
All plans are created in preparation for the Go Live Stage
  • 64.
  • 65.
  • 66. Provide Diagnoses and Statistics
The diagnoses and statistics include the delays on each tier, the packet sizes, protocol delays, network transmission delays, propagation delays, and so on. The diagnosis is based on different interpretations of the statistical data. If the value in a diagnosis exceeds its threshold, it is considered a “Bottleneck”. If it is close to the threshold, it is considered a “Potential Bottleneck”. If it is below the potential bottleneck range, it is considered “No Bottleneck”.
Processing delay bottleneck is the processing time expressed as a percentage of the total response time. This delay represents the time taken by operations within the machine, such as file I/O, CPU time, disk time, or memory access.
Protocol overhead bottleneck is the total protocol overhead expressed as a percentage of the total amount of data transferred. Each protocol adds overhead to an application message in the form of headers. Protocols also send packets that contain no application data, such as ACKs; these packets are counted as protocol overhead as well.
Chattiness bottleneck is the number of application bytes per application turn. If an application is “chatty”, the data sent in each turn is small. This may cause significant network delays and also processing delays at each tier, since each tier now has to handle many little messages.
Network cost of chattiness bottleneck is the total network delay incurred due to application turns, represented as a percentage of the total application response time. Applications that send many small packets back and forth incur a network delay. This delay becomes significant if there is a high-latency link.
Methodology: Analyzing the Application
Stage 5 : Analyzing Baseline Scenario
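The three-way classification described above can be expressed directly. A minimal sketch; the thresholds and the "close to" margin of 10% are illustrative assumptions, since the slide does not state the tool's actual values:

```python
def classify(value, threshold, margin=0.10):
    """Classify a diagnosis value against its threshold, as described above.

    A value over the threshold is a bottleneck; within `margin` (an assumed
    10%) below it, a potential bottleneck; otherwise no bottleneck.
    """
    if value >= threshold:
        return "Bottleneck"
    if value >= threshold * (1 - margin):
        return "Potential Bottleneck"
    return "No Bottleneck"

# Example: processing time as a percentage of total response time,
# against an assumed 60% threshold.
print(classify(62.0, threshold=60.0))  # -> Bottleneck
print(classify(55.0, threshold=60.0))  # -> Potential Bottleneck
print(classify(20.0, threshold=60.0))  # -> No Bottleneck
```

For a metric like chattiness, where a *small* bytes-per-turn figure is the problem, the comparison would be inverted against a minimum rather than a maximum.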
  • 67. Provide Diagnoses and Statistics (continued…)
Propagation delay bottleneck is the time taken by the packets to propagate across the network, represented as a percentage of the total application response time. Propagation delay is a function of the distance traveled and the speed of light. Device latencies can also add to this bottleneck.
Transmission delay bottleneck is the transmission delay caused by line speeds, expressed as a percentage of the total application response time. The transmission delay is a function of the total bytes transmitted and the line speed.
Protocol delay bottleneck is the total delay due to protocol effects, represented as a percentage of the total application response time. Examples of protocol effects are TCP flow control, congestion control, delay due to retransmissions, and collisions.
Retransmissions bottleneck is the total percentage of packets that were retransmitted. Protocols such as TCP retransmit a packet if they detect a long latency or a packet loss. Retransmission causes delays and additional protocol overhead. TCP also reduces the rate at which applications can send traffic when a retransmission occurs, as a means of congestion control; this causes additional throttling of application traffic. Packet loss or unusual delays that trigger retransmissions can occur as a result of “bursty” application traffic, overflowing queues, misbehaving devices, and link or node failures.
Methodology: Analyzing the Application
Stage 5 : Analyzing Baseline Scenario
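Each of these diagnoses is a simple ratio expressed as a percentage. A minimal sketch of how they would be computed from capture totals; all the input figures below are illustrative, not real measurements:

```python
# Each diagnosis above is a part/whole ratio expressed as a percentage.

def pct(part, whole):
    """Express `part` as a percentage of `whole`."""
    return 100.0 * part / whole

# Illustrative capture figures (not real measurements).
total_response_s = 4.0      # total application response time
propagation_s    = 0.6
transmission_s   = 1.8
protocol_s       = 0.4

total_bytes      = 2_000_000
overhead_bytes   = 140_000  # headers plus data-less packets such as ACKs

packets_sent     = 1_500
retransmitted    = 45

print(f"propagation delay:  {pct(propagation_s, total_response_s):.1f}% of response time")
print(f"transmission delay: {pct(transmission_s, total_response_s):.1f}% of response time")
print(f"protocol delay:     {pct(protocol_s, total_response_s):.1f}% of response time")
print(f"protocol overhead:  {pct(overhead_bytes, total_bytes):.1f}% of bytes transferred")
print(f"retransmissions:    {pct(retransmitted, packets_sent):.1f}% of packets")
```

Each percentage would then be compared against its threshold to decide whether it is a bottleneck, a potential bottleneck, or neither.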
  • 68. Provide Diagnoses and Statistics (continued…)
TCP windowing bottleneck occurs when the TCP window is smaller than the bandwidth-delay product of the connection. When an application sends bulk data over a TCP connection, the TCP window size should be large enough to permit TCP to send many packets in a row without having to wait for TCP ACKs.
TCP frozen window bottleneck occurs when the advertised TCP receive window drops to a value smaller than the Maximum Segment Size (MSS). When this happens, the sender cannot send any data until the receive window is one MSS or larger. To determine whether the receive window has become larger, the sending side periodically sends one-byte probe packets. The contents of these probe packets depend on the particular implementation, but they are usually sent with an exponential backoff. The common reason for a frozen window is that the application on the receiving side is not taking data from the TCP receive buffer quickly enough.
TCP Nagle’s algorithm bottleneck indicates that Nagle’s algorithm is present and is slowing application response times. Nagle’s algorithm is a sending-side algorithm that reduces the number of small packets on the network, thereby increasing router efficiency. However, when combined with TCP delayed ACKs it can stall small messages and slow down the application.
Methodology: Analyzing the Application
Stage 5 : Analyzing Baseline Scenario
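The two window diagnoses above reduce to simple comparisons: window size against the bandwidth-delay product, and advertised receive window against the MSS. A minimal sketch under assumed path figures (the 100 Mbit/s bandwidth, 50 ms RTT, and 1460-byte MSS are illustrative):

```python
# Sketch of the two TCP window checks described above; all figures illustrative.

def window_limited(window_bytes, bandwidth_bps, rtt_s):
    """True if the TCP window is smaller than the bandwidth-delay product,
    i.e. the sender must stall waiting for ACKs during bulk transfer."""
    bdp_bytes = bandwidth_bps / 8 * rtt_s
    return window_bytes < bdp_bytes

def window_frozen(advertised_window_bytes, mss_bytes=1460):
    """True if the advertised receive window has dropped below one MSS,
    so the sender must fall back to one-byte probe packets."""
    return advertised_window_bytes < mss_bytes

# A classic 64 KB window on a 100 Mbit/s, 50 ms RTT path (BDP = 625,000 bytes)
# is a windowing bottleneck; a 512-byte advertised window is frozen.
print(window_limited(65_535, 100_000_000, 0.05))  # -> True
print(window_frozen(512))                         # -> True
```

The first check explains why the earlier recommendation sizes send and receive windows above the bandwidth-delay product.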
  • 69. Stage 5 : Analyzing Baseline Scenario
Team Structure
Collaboration: Application Group, Business Application Group
  • 70.
  • 71. Stage 5 : Analyzing Baseline Scenario
Deliverables
Network resource allocations are identified
The application’s integrity, data connectivity, and reporting performance are measured
Future application enhancements are identified
All plans are created in preparation for the Go Live Stage
These objectives should be met in order to proceed to the next stage
  • 72. Stage 6 : Go Live Scenario
Objectives
To identify the deployment process of the application to the live servers
To identify the actual impact of the application deployment on other applications currently running on the network
To verify the accuracy and integrity of data exchange between clients and servers
  • 73. Stage 6 : Go Live Scenario
Team Structure
Collaboration: Application Group, Business Application Group
  • 74.
  • 75. Stage 6 : Go Live Scenario
Deliverables
Recorded results of application and network performance upon deployment
Analysis of hardware performance results
Identification of the weak parts of the network
  • 76. Stage 7 : Project Closing
Objectives
To finalize the end results and present the output to Top Management
To document project-related issues, including software documentation and summarization
  • 77. Stage 7 : Project Closing
Team Structure
Collaboration: Application Group, Business Application Group
  • 78.
  • 79. Stage 7 : Project Closing
Deliverables
Documentation of the project must be present
Project review
Project turnover from vendor to CCIS
Action plans established for identified additional needs