Adaptive bitrate (ABR) algorithms have become essential to delivering quality video on every device and across varying network conditions. This presentation examines the design goals and inner workings of ABR logic, how it is implemented in the open-source players hls.js and dash.js, and what broadcasters can do to optimize their own stacks.
ABR Algorithms Explained (from Streaming Media East 2016)
1. ADAPTIVE BITRATE ALGORITHMS: HOW THEY WORK AND HOW TO OPTIMIZE YOUR STACK
Streaming Media East – Track D
Tuesday, May 10, 2016, 1:45 to 2:30 pm
CLIENT-ACCELERATED STREAMING
2. Streamroot: Who are we?
PARTNERS
INFINITE POSSIBILITIES, LIMITLESS DELIVERY
Streamroot combines the best of a controlled, centralized network with the resilience and scalability of a widely distributed delivery architecture.
3. Presentation Outline
I. Introduction: What are we trying to accomplish? Why does this matter?
II. The Basics of how ABR algorithms work: constraints & parameters, process
Example: hls.js
III. Possible improvements to basic ABR algorithms: smoothing, quantizing, scheduling
Example: dash.js
IV. Going further
Another Approach: buffer levels
The key to improving: testing and iterating
4. I. Why ABR?
Multiplicity of network conditions and devices => need to dynamically select the resolution
HTTP / TCP stack => congestion logic lives in the transport protocol => estimation & decisions move to the client level
Source: FESTIVE diagram of HTTP streaming
5. I. Design Goals
1. Maximize efficiency – stream at the highest bitrate possible
2. Minimize rebuffering – avoid underrun and playback stalls
3. Encourage stability – switch only when necessary
(4. Promote fairness across network bottlenecks)
6. I. Why this Matters
Views are 24 min longer when the buffer ratio stays below 0.2% for live content
View time drops 40% once the buffer ratio passes the 0.4% mark
Figure: buffer ratio vs. play time. Source: NPAW aggregated data for a set of European live broadcasters
7. II. The Basics: Constraints and Parameters
CONSTRAINTS:
- Screen size / player size
- CPU & dropped frame threshold
- Startup time / rebuffering recovery
TRADEOFF PARAMETERS:
- Buffer size
- Bandwidth & possible bitrate
- (Bonus: P2P bandwidth)
8. II. The Basics: Constraints
1. Screen & Player Size
Bitrate should never be larger than the actual size of the video player
2. CPU & Dropped frame rate
Downgrade when too many dropped frames per second
3. Startup time
Always fetch the lowest quality first whenever the buffer is empty
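Taken together, the three constraints can gate level selection before any bandwidth-based logic runs. A minimal sketch with illustrative names (this is not hls.js's actual code), assuming levels are sorted by ascending bitrate:
// Minimal sketch: apply the three hard constraints before any bandwidth logic.
function applyConstraints(levels, playerWidth, droppedFpsRatio, bufferEmpty) {
  // 3. Startup: with an empty buffer, always fetch the lowest quality first.
  if (bufferEmpty) return 0;

  let maxIndex = levels.length - 1;

  // 1. Screen & player size: never pick a rendition wider than the player.
  while (maxIndex > 0 && levels[maxIndex].width > playerWidth) maxIndex--;

  // 2. CPU: downgrade when too many frames are dropped per second.
  if (droppedFpsRatio > 0.2 && maxIndex > 0) maxIndex--;

  return maxIndex; // highest level index still allowed by the constraints
}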
9. II. The Basics: Tradeoff parameters
1. Maximize bitrate => available bandwidth estimation
Estimate the available bandwidth based on prior segment(s):
available bandwidth = size of chunk / time taken to download
2. Minimize rebuffering ratio => buffer size
Buffer ratio = buffering time / (buffering time + playback time)
Abandon strategy
Source: BOLA
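In code, the two metrics above are one-liners (names are illustrative):
// Sketch of the two tradeoff metrics defined on this slide.
function estimateBandwidthBps(segmentBytes, downloadMs) {
  // bits per second = (bytes * 8) / seconds
  return (segmentBytes * 8) / (downloadMs / 1000);
}

function bufferRatio(bufferingMs, playbackMs) {
  // share of total watch time spent stalled
  return bufferingMs / (bufferingMs + playbackMs);
}

// e.g. a 2 MB segment fetched in 800 ms => 20,971,520 bps (~21 Mbps)
console.log(estimateBandwidthBps(2 * 1024 * 1024, 800));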
10. Example: HLS.js
HTML5 (MSE-based) media engine open-sourced by Dailymotion
https://github.com/dailymotion/hls.js
Very modular, so you can change the rules without even forking the media engine!
11. Example: HLS.js player size level capping
https://github.com/dailymotion/hls.js/blob/master/src/controller/cap-level-controller.js#L68
Checks the max CapLevel corresponding to the current player size
Frequency: every 1000 ms
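The idea behind that controller, as a sketch (illustrative, not the actual cap-level-controller.js code):
// Find the highest level whose dimensions still fit the player.
function maxCapLevel(levels, playerWidth, playerHeight) {
  let cap = 0;
  for (let i = 0; i < levels.length; i++) {
    if (levels[i].width <= playerWidth && levels[i].height <= playerHeight) {
      cap = i; // this rendition still fits the player; remember it
    }
  }
  return cap;
}

// Re-evaluated on a 1000 ms timer, per the slide; autoLevelCapping is a real hls.js field.
// setInterval(() => { hls.autoLevelCapping = maxCapLevel(hls.levels, w, h); }, 1000);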
12. Example: HLS.js dropped frame rule
https://github.com/dailymotion/hls.js/blob/master/src/controller/fps-controller.js#L33
Calculates the dropped frames per second ratio.
If > 0.2, bans the level forever => it goes into the restricted capping levels
Parameters: fpsDroppedMonitoringThreshold, fpsDroppedMonitoringPeriod
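A sketch of the same rule (illustrative, not the actual fps-controller.js code), using the standard getVideoPlaybackQuality() counters on the video element, sampled once per monitoring period:
let lastDropped = 0;
let lastDecoded = 0;

function checkDroppedFrames(video, currentLevel, restrictedLevels) {
  const q = video.getVideoPlaybackQuality();
  const dropped = q.droppedVideoFrames - lastDropped;
  const decoded = q.totalVideoFrames - lastDecoded;
  lastDropped = q.droppedVideoFrames;
  lastDecoded = q.totalVideoFrames;

  // ratio of frames dropped during this window; 0.2 is the slide's threshold
  if (decoded > 0 && dropped / decoded > 0.2) {
    restrictedLevels.add(currentLevel); // ban this level going forward
  }
}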
13. Example: HLS.js startup strategy
https://github.com/dailymotion/hls.js/blob/master/src/controller/stream-controller.js#L131
First segment is loaded from the first level in the playlist, then continues with the normal ABR rule.
14. Example: HLS.js bandwidth-based ABR controller
https://github.com/dailymotion/hls.js/blob/master/src/controller/abr-controller.js
Simple algorithm, inspired by Android’s AVController’s ABR algo
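The core decision reduces to something like the following sketch (the real abr-controller.js adds more safeguards), assuming levels sorted by ascending bitrate:
// Pick the highest level whose bitrate fits within a safety fraction
// of the last bandwidth estimate.
function pickLevel(levels, estimatedBps, safetyFactor = 0.8) {
  let next = 0;
  for (let i = 0; i < levels.length; i++) {
    if (levels[i].bitrate <= estimatedBps * safetyFactor) next = i;
  }
  return next;
}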
17. Example: HLS.js sum-up
STRONG POINTS:
- Very simple and understandable
- Handles CPU & player size constraints
- Conservative BW adjustment to avoid oscillation
- Sound emergency abort mechanism
COULD BE IMPROVED:
- Add a history parameter to BW estimation and adjustment
- The startup time constraint could be improved to get the lowest level first
Simple algorithm with better performance in practice compared to native implementations.
18. Example: HLS.js how to improve
1. Tweak the parameters
https://github.com/dailymotion/hls.js/blob/master/API.md#fine-tuning
Dropped FPS:
capLevelOnFPSDrop: false,
fpsDroppedMonitoringPeriod: 5000,
fpsDroppedMonitoringThreshold: 0.2
Player size:
capLevelToPlayerSize: false
2. Write your own rules!
abrController: AbrController,
capLevelController: CapLevelController,
fpsController: FPSController
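Putting it together, a sketch of wiring these options into a player (option names are the ones above; exact defaults and availability depend on the hls.js version you ship, and the manifest URL is hypothetical):
var hls = new Hls({
  capLevelToPlayerSize: true,         // enforce the player-size cap
  capLevelOnFPSDrop: true,            // cap levels when too many frames drop
  fpsDroppedMonitoringPeriod: 5000,   // ms between FPS checks
  fpsDroppedMonitoringThreshold: 0.2  // dropped/decoded ratio that triggers capping
  // abrController: MyAbrController,  // ...or swap in your own rules entirely
});
hls.loadSource('https://example.com/stream.m3u8'); // hypothetical manifest URL
hls.attachMedia(document.querySelector('video'));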
19. III. Improvements: the pitfalls of bandwidth estimation
• Not resilient to sudden network fluctuations
• Often leads to bitrate oscillations
• Biased by HTTP/TCP calls on the same device/network
20. III. Improvements: better bandwidth estimation
A new 4-step approach:
1. Estimation
2. Smoothing
3. Quantizing
4. Scheduling
Source: Block diagram for PANDA
21. III. Improvements: estimation & smoothing
Estimation: take history into account!
Smoothing: apply a smoothing function to the range of values obtained.
Possible functions: average, median, EWMA, harmonic mean
How many segments? 3? 10? 20?
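Two of those smoothing choices, sketched over the last N per-segment estimates (N = 3, 10, 20... is exactly the tuning question above):
function ewma(samples, alpha = 0.3) {
  // exponentially weighted moving average: recent samples count more
  return samples.reduce((acc, s) => alpha * s + (1 - alpha) * acc);
}

function harmonicMean(samples) {
  // harmonic mean penalizes dips, making it conservative for throughput
  return samples.length / samples.reduce((acc, s) => acc + 1 / s, 0);
}

const recentBps = [4e6, 5e6, 1e6, 4.5e6]; // last 4 segment estimates
console.log(ewma(recentBps), harmonicMean(recentBps));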
22. III. Improvements: quantizing
Quantizing: quantize the smoothed bandwidth to a discrete bitrate.
Additive increase, multiplicative decrease => conservative when switching up, more aggressive when switching down.
Source: FESTIVE
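A sketch of that asymmetry (illustrative names and margin, not FESTIVE's exact rule): climb one level at a time, and only with headroom, but drop immediately to whatever the estimate can sustain:
function quantize(levels, currentIndex, smoothedBps, upMargin = 1.2) {
  const sustainable = (i) => levels[i].bitrate * upMargin <= smoothedBps;
  if (currentIndex + 1 < levels.length && sustainable(currentIndex + 1)) {
    return currentIndex + 1; // additive increase: one rung up
  }
  let i = currentIndex;
  while (i > 0 && !sustainable(i)) i--; // fast drop when the estimate falls
  return i;
}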
23. III. Improvements: scheduling (bonus)
Continuous & periodic download scheduling => oscillation, over- or under-used resources
Randomize the target buffer level to avoid startup bias and increase stability.
Also extremely useful for promoting fairness!
Source: FESTIVE
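A sketch of randomized scheduling (the 30 s target and 4 s jitter are illustrative): give each session a jittered target buffer so that clients sharing a bottleneck don't request segments in lockstep:
const targetBufferSec = 30 + (Math.random() * 2 - 1) * 4; // e.g. 26 to 34 s

function nextRequestDelayMs(currentBufferSec) {
  if (currentBufferSec < targetBufferSec) return 0;    // behind target: fetch now
  return (currentBufferSec - targetBufferSec) * 1000;  // ahead: wait it out
}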
24. Example 2: DASH.JS
Dash.js is the reference DASH player developed by DASH-IF.
https://github.com/Dash-Industry-Forum/dash.js/wiki
4 different rules:
2 main:
ThroughputRule
AbandonRequestsRule
2 secondary:
BufferOccupancyRule
InsufficientBufferRule
26. Example 2: DASH.JS sum-up
STRONG POINTS:
- Smoothes bandwidth
- Segment abort mechanism to avoid buffering during network drops
- Rich buffer threshold to avoid BW oscillations
COULD BE IMPROVED:
- No quantization of bitrates
- Doesn’t handle CPU & player size constraints
27. Example 2: DASH.JS how to improve
1. Tweak the parameters
ThroughputRule:
AVERAGE_THROUGHPUT_SAMPLE_AMOUNT_LIVE = 2;
AVERAGE_THROUGHPUT_SAMPLE_AMOUNT_VOD = 3;
AbandonRequestsRule:
GRACE_TIME_THRESHOLD = 500;
ABANDON_MULTIPLIER = 1.5;
BufferOccupancyRule:
RICH_BUFFER_THRESHOLD = 20;
2. Write your own rules
https://github.com/Dash-Industry-Forum/dash.js/wiki/Migration-2.0#extending-dashjs
https://github.com/Dash-Industry-Forum/dash.js/blob/development/src/streaming/rules/abr/ABRRulesCollection.js
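As a starting point, here is a bare-bones custom-rule skeleton in the spirit of the Migration-2.0 wiki page linked above. The getMaxIndex entry point and the return shape are assumptions that vary across dash.js versions, so verify them against the wiki before use:
// Hypothetical skeleton of a dash.js custom ABR rule (API shape assumed
// from the 2.x-era extension docs; check against your dash.js version).
function PlayerSizeRuleFactory() {
  return {
    getMaxIndex: function (rulesContext) {
      // Borrow the hls.js idea that dash.js lacks: cap quality by player size.
      var video = document.querySelector('video');
      var maxQuality = video.clientWidth >= 1280 ? 4 : 2; // illustrative mapping
      return { quality: maxQuality, priority: 0.5 };      // assumed return shape
    }
  };
}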
28. IV. Going further: DASH.js BOLA, another approach
Buffer-size based ONLY => no more bandwidth estimation
Uses utility theory to make decisions: a configurable tradeoff between rebuffering potential and bitrate maximization:
Maximize Vn + γ·Sn
where:
Vn is the bitrate utility
Sn is the playback smoothness
γ is the tradeoff weight parameter
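A much-simplified sketch of that utility idea (not the full BOLA algorithm; the 5 s-of-buffer-per-level heuristic and log utility are illustrative): score each candidate as Vn + γ·Sn and pick the best, driven by the buffer level alone:
function pickByUtility(levels, currentIndex, bufferSec, gamma = 2) {
  // Only consider levels the current buffer can plausibly sustain.
  const maxIdx = Math.min(levels.length - 1, Math.floor(bufferSec / 5));
  let best = 0;
  let bestScore = -Infinity;
  for (let i = 0; i <= maxIdx; i++) {
    const v = Math.log(levels[i].bitrate / levels[0].bitrate); // Vn: concave bitrate utility
    const s = -Math.abs(i - currentIndex);                     // Sn: penalize switching
    const score = v + gamma * s;
    if (score > bestScore) { bestScore = score; best = i; }
  }
  return best;
}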
29. IV. Going further: test and iterate!
Tweaking algorithms is easy, and so is creating your own fork. You’ve got the power!
- Know what is important to you (buffering, max bitrate, bandwidth savings…)
- Compare and cross-reference with QoS analytics to understand your audiences
- Test and iterate: A/B testing allows you to compare changes in real time
Significant improvements without even changing your workflow!
31. Further Reading / Contact Us
Probe and Adapt: Rate Adaptation for HTTP Video Streaming At Scale. Zhi Li, Xiaoqing Zhu, Josh Gahm, Rong Pan, Hao Hu, Ali C. Begen, Dave Oran. Cisco Systems, 7 Jul 2013.
Improving Fairness, Efficiency, and Stability in HTTP-based Adaptive Video Streaming with FESTIVE. Junchen Jiang (Carnegie Mellon University), Vyas Sekar (Stony Brook University), Hui Zhang (Carnegie Mellon University / Conviva Inc.), 2012.
ELASTIC: a Client-side Controller for Dynamic Adaptive Streaming over HTTP (DASH). Luca De Cicco, Vito Caldaralo, Vittorio Palmisano, and Saverio Mascolo.
BOLA: Near-Optimal Bitrate Adaptation for Online Videos. Kevin Spiteri, Rahul Urgaonkar, Ramesh K. Sitaraman. University of Massachusetts Amherst, Amazon Inc., Akamai Technologies Inc.
Contact us at:
Nikolay Rodionov, Co-Founder and CPO, nikolay@streamroot.io
Erica Beavers, Head of Partnerships, erica@streamroot.io
Editor’s notes
Explain what HLS.js is. Also say it’s quite simple to extend, as the different controllers are actually option parameters, and so can be easily replaced.
Checks the max CapLevel corresponding to current player size
Every 1000ms.
You can also add manual level caps on initialization.
If the cap level is bigger than the last one (which means the player size has grown, like in fullscreen for example), then flush the current buffer and ask for a new quality right away (force the buffer)
Calculates the dropped frames per second ratio.
If it is > 0.2, bans the level forever => goes into restricted levels
Not activated in production!
fpsDroppedMonitoringThreshold
fpsDroppedMonitoringPeriod
First segment always from the lowest quality, then it continues with the normal rule (a very simple rule in practice!)
Another optimization is to load just this level (and its playlist), without waiting for the other levels to be loaded
Simple algorithm.
Here talk about Streamroot, and the fact that having sources from different buffers is even more difficult!
Code from us?
Basically an onProgress & bandwidth estimation too (coming from the CDN & P2P network!)
Request.onProgress
Request.onLoad => classic estimation
With P2P estimation! We don’t want an infinite speed estimate, and thus include a P2P bandwidth metric.
It is not the same for different peers, so it is averaged and smoothed
Diagram => a P2P cache and a CDN buffer => and time = 0
One of the most important ones here
What happens if you start a request and then the BW drops? Especially important when you have long fragments; this can very easily lead to a buffer underrun!
After half of the needed time, compare the estimated time of arrival to the time of buffer underrun, and then see if there is another level that could solve the issue
Pros:
Simple implementation, taking into account a lot of different params
Works as well as the other implementations at Dailymotion (flashls, Android, iPhone… etc.)!
Cons:
Still a naive bandwidth estimation => possible overestimation, and possible oscillation around bitrates?
We can make a lot of improvements to bandwidth estimation! It is difficult to correlate a single segment’s download time to the device’s real available bandwidth, for several reasons:
You can have very quick bandwidth changes, especially on a mobile network, as well as unexpected bandwidth drops
The requests can be living in parallel with other TCP requests (HTTP or any other on the user’s device)
This can lead to frequent estimation oscillations!
Are the different static constants right for your use case?
You can play with them
You can also easily build your own rule!
Here is an example on GitHub?
First explain how to do that?
Good to minimize the oscillations!
Can have a different switch when UP or DOWN:
Conservative when UP, less conservative when DOWN
You can also scale taking into account the bitrate (and its utility)
DASH.js has 4 different rules
ThroughputRule calculates bandwidth with some smoothing!
No real quantizing (it keeps the real estimate and no other discrete values)
AbandonRequestsRule cancels a request if it takes more than 1.5x the expected download time
BufferOccupancyRule: do not go down if the buffer is large enough (RICH_BUFFER_THRESHOLD)
InsufficientBufferRule
You can easily take the best of hls.js here! Write a player-size rule, an FPS-drop rule… change the AbandonRequestsRule!
It’s all very easy to do!
BOLA stuff? The approach is quite difficult to explain… based on utility theory, and supposed to be a lot more efficient because there is no need to estimate the bandwidth.
BUT
Not fully implemented in dash.js, and there are some optimization constants that depend a lot on the use case (target buffer, live, VOD…)
Today it does not work great for small segment sizes AND small buffer sizes (but good for 1+ min apparently?)
Still a work in progress, but an interesting approach!
We can give a lot of tips, but most of the use cases are specific (segment size, playlist size, latency… and also which parameter is most important to you: buffer ratio? Best bitrate? Best bitrate is not so useful if you KNOW that most of your users have better bandwidth anyway. Number of switches?)
So what’s important is to have a way to iterate and improve.
The best is to have A/B testing on a 50/50 split of the population, to be able to quickly see results and compare them! What happens if you just tweak one parameter?
The results can be quite stunning!