Streaming protocols break down video content into small chunks that are delivered sequentially to viewers for reassembly and playback. This overcomes limitations of standard video formats for storage and playback. Common streaming protocols include HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH), and Microsoft Smooth Streaming (MSS). These protocols support features like adaptive bitrate streaming and digital rights management (DRM). DRM uses encryption and licenses to restrict playback of protected content and is implemented through standards like Encrypted Media Extensions (EME) and content decryption modules (CDMs).
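The chunked-delivery model behind HLS can be illustrated with a minimal media playlist. This is a sketch only: the segment file names and the fixed 6-second duration are illustrative placeholders, not a production configuration.

```python
# Sketch: build a minimal HLS media playlist (M3U8) for a stream that has
# been split into fixed-duration chunks. Segment names are hypothetical.

def make_hls_playlist(segment_names, segment_duration=6.0):
    """Return an M3U8 media playlist listing the given segments in order."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{int(segment_duration + 0.5)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in segment_names:
        # Each chunk is announced with its duration, then its URI.
        lines.append(f"#EXTINF:{segment_duration:.1f},")
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")  # marks a VOD (non-live) playlist
    return "\n".join(lines)

playlist = make_hls_playlist(["seg000.ts", "seg001.ts", "seg002.ts"])
print(playlist)
```

A player fetches this playlist over plain HTTP, then requests each listed segment in turn; adaptive bitrate streaming adds a master playlist that points at several such media playlists at different bitrates.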
P9 PNOR and OpenBMC Overview - Yutaka Kawai
This document provides an overview of P9 PNOR (BIOS) and OpenBMC. It discusses firmware components like SBE, Hostboot, OCC, OPAL, and Petitboot that make up the PNOR. It explains how to build your own PNOR image and view boot logs from the OpenBMC console. It also covers topics like OpenBMC overview, roadmap, and a demonstration of the latest web UI.
Linux is an open-source operating system based on the Unix model. It can run on a variety of hardware and has thousands of available programs. The document discusses the history and development of Linux from its origins in the 1960s through its creation by Linus Torvalds in 1991. It also covers key Linux concepts like kernels, processes, threads, file systems, and boot processes. Community links are provided for learning more about the Linux kernel, drivers, boot loader, and file systems.
There is a surge in the number of sensors and devices being connected under the umbrella of the Internet of Things (IoT). These devices need to be integrated into the Android system and accessed via applications, which is covered in the course. Our weekend Android system development course, with hands-on practicals, ensures you learn all the critical components to get started.
Micro XRCE-DDS and micro-ROS enable ROS 2 functionality on embedded devices. Micro XRCE-DDS is a middleware that provides embedded devices access to the ROS 2 data space using a client-server architecture. It has low memory usage and supports various transports and real-time capabilities. Micro-ROS builds on Micro XRCE-DDS to mirror the ROS 2 API and ecosystem, allowing developers to create ROS 2 nodes that run on embedded devices. Together they help close the gap between embedded devices and ROS 2 by bringing ROS 2 capabilities to microcontrollers and supporting a wide range of hardware and operating systems.
RTSP is used for controlling streaming media over the web. It allows for audio and video-on-demand streaming to large groups. RTSP uses directives like OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, and TEARDOWN to control the stream. SDP is used to describe the metadata of the stream, including information like the session name, connection details, media formats, and attributes. Common RTSP operations include requesting information with OPTIONS, retrieving the SDP description with DESCRIBE, setting up transports with SETUP, starting and pausing playback with PLAY and PAUSE, and terminating the session with TEARDOWN.
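The directives above travel as plain-text request messages, much like HTTP. A minimal sketch of formatting them follows; the server URL, CSeq values, and transport parameters are illustrative, and a real client would also track session IDs and parse the responses.

```python
# Sketch: format the RTSP requests used in a typical session. The URL and
# CSeq values are illustrative; real clients track CSeq per session.

def rtsp_request(method, url, cseq, extra_headers=None):
    """Return an RTSP/1.0 request as the text sent on the wire."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra_headers or {}).items():
        lines.append(f"{name}: {value}")
    # RTSP, like HTTP, ends the header block with a blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

url = "rtsp://example.com/media/stream1"
print(rtsp_request("OPTIONS", url, 1))
print(rtsp_request("DESCRIBE", url, 2, {"Accept": "application/sdp"}))
print(rtsp_request("SETUP", url + "/track1", 3,
                   {"Transport": "RTP/AVP;unicast;client_port=8000-8001"}))
```

The DESCRIBE response would carry the SDP body describing the session; SETUP, PLAY, PAUSE, and TEARDOWN then follow the same request shape with incrementing CSeq values.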
This document provides an introduction and overview of Gstreamer, including its concepts and examples of its use. Gstreamer is a media framework that allows building media handling applications and facilitating tasks like accessing hardware, building plugins, and using scriptable command line tools. It discusses key Gstreamer concepts and provides examples of using it to analyze media files, transcode video and audio to different formats, and stream video. The document encourages questions and provides credits for resources used.
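The transcoding use case mentioned above is usually expressed as a gst-launch-1.0 pipeline description. As a sketch, the snippet below assembles such a description from standard GStreamer element names (filesrc, decodebin, videoconvert, x264enc, mp4mux, filesink); the file names are placeholders, and a complete pipeline would also handle the audio branch.

```python
# Sketch: compose a gst-launch-1.0 style transcode pipeline as a string.
# Element names are standard GStreamer elements; file names are placeholders.

def transcode_pipeline(src, dst):
    """Describe a video-only transcode from a decodable input to H.264/MP4."""
    elements = [
        f"filesrc location={src}",   # read the container from disk
        "decodebin",                 # pick a demuxer/decoder automatically
        "videoconvert",              # negotiate a raw format x264enc accepts
        "x264enc",                   # encode to H.264
        "mp4mux",                    # wrap in an MP4 container
        f"filesink location={dst}",  # write the result
    ]
    # gst-launch links elements left to right with "!"
    return "gst-launch-1.0 " + " ! ".join(elements)

cmd = transcode_pipeline("input.avi", "output.mp4")
print(cmd)
```

The same pipeline description syntax is what GStreamer's scriptable command-line tools accept directly, which is one reason they are convenient for quick analysis and transcoding experiments.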
Android Audio HAL – Audio Architecture – Audio HAL interface – Audio Policy – Audio HAL compilation & verification – Overview of Tinyalsa
Android Video HAL – Camera Architecture – Overview of camera HAL interface – Overview of V4L2 – Enabling V4l2 in kernel – Camera HAL compilation and verification
Kernel Recipes 2017 - An introduction to the Linux DRM subsystem - Maxime Ripard (Anne Nicolas)
Modern multimedia-oriented ARM SoCs usually have a number of display controllers, to drive a screen or an LCD panel, and a GPU, to provide 3D acceleration. The Linux kernel framework of choice for supporting these controllers is the DRM subsystem.
This talk will walk through the DRM stack, the architecture of a DRM/KMS driver, and the interaction between the display and GPU drivers. The presentation is based on the work we have done to develop a DRM driver for the Allwinner SoCs' display controller with multiple outputs, such as parallel display interfaces, HDMI, or MIPI-DSI. The work done to make the ARM Mali OpenGL driver work on top of a mainline DRM/KMS driver will also be detailed, as well as the more traditional Mesa-based solution used on a variety of other platforms.
Maxime Ripard, Free Electrons
The document provides an overview of the Linux kernel, including its architecture, startup process, functionality, configuration, and compilation. It discusses the differences between micro and monolithic kernels. It also explains the Linux kernel architecture with user space and kernel space separated by a system call interface. Key aspects covered include process management, memory management, device management, and the kernel build system.
Embedded Android System Development - Part II covers the Hardware Abstraction Layer (HAL). The HAL is an interfacing layer through which an Android service can place a request to a device. It uses functions provided by the Linux system to service requests from the Android framework. It is a C/C++ layer with a purely vendor-specific implementation, packaged into modules (.so files) and loaded by the Android system at the appropriate time.
Using open source software to build an industrial grade embedded linux platfo... - SZ Lin
Building an embedded Linux platform is like a puzzle; placing the suitable software components in the right positions will constitute an optimal platform. However, selecting suitable components is difficult since it depends on different application scenarios. The essential components of an embedded Linux platform include the bootloader, Linux kernel, toolchain, root filesystem; it also needs the tools for image generation, upgrades, and testing. There are abundant resources in the Linux ecosystem with these components and tools; however, selecting the suitable modules and tools is still a key challenge for system designers.
This document discusses the AUTOSAR application layer. It explains that the application layer provides the system functionality through software components (SWCs) that contain software. The document outlines different types of SWCs and their elements like ports, runnable entities, and events. It also discusses how SWCs communicate internally and across ECUs using the virtual functional bus. The mapping of runnable entities to operating system tasks is mentioned as the topic for the next session.
It describes the MMC storage device driver functionality in the Linux kernel and its role. It explains the different types of storage devices available and how they are handled from the MMC driver's point of view. It describes eMMC (internal storage) and SD (external storage) devices in detail, along with the SD protocol used to communicate with these devices in Linux.
This document summarizes BlueStore, a new storage backend for Ceph that provides faster performance compared to the existing FileStore backend. BlueStore manages metadata and data separately, with metadata stored in a key-value database (RocksDB) and data written directly to block devices. This avoids issues with POSIX filesystem transactions and enables more efficient features like checksumming, compression, and cloning. BlueStore addresses consistency and performance problems that arose with previous approaches like FileStore and NewStore.
LCU13: Deep Dive into ARM Trusted Firmware
Resource: LCU13
Name: Deep Dive into ARM Trusted Firmware
Date: 31-10-2013
Speaker: Dan Handley / Charles Garcia-Tobin
XPDS13: Xen in OSS based In-Vehicle Infotainment Systems - Artem Mygaiev, Glo... - The Linux Foundation
The talk covers Xen's role, implementation details, and problems in a sample solution based on OSS (Android, Linux, and Xen) that addresses automotive requirements such as ultra-fast RVC boot time, quick IVI system boot time, cloud connectivity and multimedia capabilities, and reliability and security through hardware virtualization. Secure CAN/LIN/MOST bus integration is handled by Linux in Dom0, while Android runs a customizable QML-based HMI in a DomU sandbox. The case studies include, but are not limited to: computing power requirements, memory requirements, virtualization, stability, boot-time sequence and optimization, and video clips showing results of the work done. The case study is built on a Texas Instruments OMAP5 SoC.
File systems provide an organized way to store and access data on storage devices like hard drives. The Linux file system hierarchy standard defines a common structure across Linux distributions with directories like /bin, /etc, /home, /usr, and /var. Common Linux file system types include ext2, ext3, ext4 for disks, initramfs for RAM, and JFFS2 for flash storage. File systems can also be distributed across a network using NFS or optimized for specific purposes like squashfs for read-only files. Partitions divide available storage space to better manage files, users, and data security.
The document discusses the Android audio system architecture. It comprises an Audio Framework layer, which includes the AudioTrack, AudioRecord, and AudioPolicy classes that handle routing audio between apps and the hardware. Below this is the Audio HAL interface, which provides read/write functions to the underlying Linux audio driver and hardware. The AudioFlinger manages multiple threads to read and write audio data to attached hardware devices without blocking. This layered design provides flexibility and handles the real-time audio needs across different Android devices and usage scenarios.
Converging CAS and DRM, David Bouteruche from Nagra - Justindwah
This document discusses the convergence of CAS (conditional access systems) and DRM (digital rights management) technologies. It notes that viewing habits are shifting to non-linear, multi-screen consumption. While CAS was designed for pay TV and DRM for PC/music, the lines are blurring as DRM takes on functions of CAS for OTT/live streaming. The document advocates an advanced and flexible DRM approach that can serve both traditional broadcast and multi-screen OTT delivery across multiple networks and devices.
LAS16-307: Benchmarking Schedutil in Android - Linaro
This document summarizes benchmarking results that compare the performance and power efficiency of Android's schedutil CPU scheduler against the existing ondemand and interactive schedulers. Tests were conducted on a Hikey development board using various workloads before and after applying the Energy Aware Scheduling patches. While schedutil showed competitive performance in many tests, some regressions were observed in user experience metrics like recent app switching and gallery scrolling, as well as higher energy usage when combined with the EAS patches, indicating areas for further optimization.
The presentation will cover Xen Automotive. We will elaborate technical solutions for the identified gaps:
1. ARM architecture - support HW virtualization extensions for embedded systems
2. Stability requirements
3. RT Scheduler
4. Rich virtualized peripheral support (WiFi, Gfx, MM, USB, etc.)
5. Performance benchmarking
6. Security
For new-age touch-based embedded devices, Android is becoming a popular OS, going beyond mobile phones. With its roots in Embedded Linux, the Android framework offers benefits in terms of rich libraries, open source, and multi-device support. Emertxe's hands-on Embedded Android Training Course is designed around customizing, building, and deploying a custom embedded OS on an ARM target. A rich set of projects will make your learning complete.
This presentation provides information about the Linux root file system and its hierarchy, so anyone who wants to learn about the root files of Linux can easily follow it. It is particularly suited to embedded system design students pursuing diploma courses in various CDAC centers.
The document discusses the Android emulator and provides instructions on how to build and run the emulator from source. It describes:
1) How to get the emulator source code from Google's source repository, and how to build the emulator using the provided build scripts.
2) The different emulation engines (classic vs qemu2), virtual platforms (Goldfish vs Ranchu), and UI backends (SDL2 vs Qt) that are supported.
3) The various ways to launch the emulator, including by configuring an Android Virtual Device (AVD), setting environment variables, or directly invoking the emulation binaries.
4) Key components of the emulator, like the Goldfish virtual platform, and how devices are registered.
Android booting sequence, setup, and debugging - Utkarsh Mankad
The document summarizes key Android SDK components and concepts in 3 sentences or less:
Android SDK components are organized by functionality and include Activities, Services, BroadcastReceivers, Views, Intents, Adapters, AlertDialogs, Notifications, ContentProviders, and data storage methods. Common data storage options include SharedPreferences, internal storage, external storage, and SQLite databases. The Android booting process involves 6 stages: power on and ROM code execution, boot loader loading, starting the Linux kernel, initiating the init process, launching the Zygote and Dalvik virtual machine, and system server initiation.
The document provides an introduction and overview of transcoding including:
- Transcoding converts media formats to facilitate distribution across different platforms and ecosystems.
- Codecs, profiles, containers, and platforms are key terminology. H.264 is a widely used and patented codec.
- Formats combine containers and codecs with parameters for playback.
- Transcoding allows content to be optimized and customized for different destinations and viewer requirements.
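As a sketch of how a format (container + codec + parameters) maps onto a transcoding command, the snippet below assembles an ffmpeg-style command line. The preset table, profile names, and file names are illustrative assumptions, not a definitive recipe.

```python
# Sketch: assemble an ffmpeg command that transcodes one source into a
# destination-specific format. The preset table is hypothetical.

PRESETS = {
    # destination: (video codec, video bitrate, audio codec, container ext)
    "web":    ("libx264", "2500k", "aac", "mp4"),
    "mobile": ("libx264", "800k",  "aac", "mp4"),
}

def ffmpeg_command(src, dest_profile):
    """Build the argument list for a single transcode job."""
    vcodec, vbitrate, acodec, ext = PRESETS[dest_profile]
    out = f"{dest_profile}.{ext}"
    return ["ffmpeg", "-i", src,
            "-c:v", vcodec, "-b:v", vbitrate,  # video codec and bitrate
            "-c:a", acodec,                    # audio codec
            out]

print(" ".join(ffmpeg_command("master.mov", "mobile")))
```

Keeping the destination-specific choices in a table like this is what lets one mezzanine source fan out to many viewer-specific formats.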
Explore the world of Digital Rights Management (DRM) in websites with this informative presentation. Gain insights into the challenges of implementing DRM, the evolution of video playback on the web, and the role of HTML5 in modern video streaming. Discover the basics of using static keys and Apple HLS for content protection, as well as the issues associated with static key DRM. Finally, learn about advanced DRM solutions that address these issues, ensuring secure and efficient content delivery. Dive into this comprehensive guide to DRM in HTML5 websites and enhance your understanding of this crucial aspect of online video streaming.
The document provides an agenda and overview of key technologies for internet delivered media, including:
- Adaptive streaming standards like DASH, HLS, and CMAF for encoding and delivering video over HTTP.
- Encryption technologies like CENC for multi-DRM encryption and EME for decrypting encrypted media in browsers.
- Web media APIs that enable advanced media playback like MSE for adaptive streaming in HTML5 video and controlling media streams with JavaScript.
Unique information about video streaming compression in iOS from our iOS specialist Vladimir Predko. He's ready to answer all your questions. Go ahead!
Video combines pictures and sounds displayed over time by breaking a continuous event into individual frames. Video formats are made up of a container that specifies the file structure and codecs for compressing and encoding the audio and video data. Common video formats include AVI, QuickTime, and WMV, while codecs like MPEG and DivX are used to compress the files. Larger video file sizes are needed for higher quality video with more frames per second, and file size affects the hardware requirements for storing and playing back video.
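The relationship between frame size, frame rate, and file size can be made concrete with a quick estimate of uncompressed video size. The sketch below assumes 24-bit color (3 bytes per pixel); real pixel formats and subsampling schemes vary.

```python
# Sketch: estimate raw (uncompressed) video size to show why codecs matter.
# 24-bit color (3 bytes/pixel) is assumed; real pixel formats vary.

def raw_video_bytes(width, height, fps, seconds, bytes_per_pixel=3):
    """Bytes needed to store video as independent uncompressed frames."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * fps * seconds

# One minute of 1080p at 30 frames per second:
size = raw_video_bytes(1920, 1080, 30, 60)
print(f"{size / 1e9:.1f} GB")  # roughly 11.2 GB before any compression
```

Numbers like this are why codecs such as MPEG achieve compression ratios of well over 100:1 in practice, and why higher frame rates and resolutions push up the hardware requirements for storage and playback.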
Windows 7 provides improved video support and optimizations. It can playback more formats efficiently using optimized decoders and hardware acceleration. Developers can access these features through Media Foundation and DirectX interfaces to build robust applications that meet performance needs. Windows 7 also supports transferring video to portable devices through automatic transcoding and enables new camera formats.
The document provides information about a multimedia streaming module, including:
- The module code, title, level, and credit value
- Assessment requirements including creating a live/on-demand streaming media station and accompanying website
- An overview of topics covered in the module like media encoding, streaming servers, planning live broadcasts, and streaming non-audio/video content
- Considerations for streaming like file sizes, frame rates, formats, and computer hardware requirements
Among web experiences, video is currently one of the most popular. This session explores the possibilities for supporting these video experiences with Flash Media Server 3.5.
Codec stands for enCOder/DECoder or COmpressor/DECompressor. A codec is software or hardware that compresses and decompresses audio and video data streams.
At castLabs, we aim to be a trusted and reliable partner in the world of video streaming. Our goal is to simplify a range of complex technologies enabling you to distribute content online.
Check our presentation to learn more about the company and what we do exactly.
The document discusses video streaming, including its objectives, advantages, architecture, compression techniques, and standards. It provides details on video capture, content management, formats, frame rates, codecs, content compression using MPEG, and protocols for real-time transmission like RTP, UDP, and TCP. It also compares major streaming products from Microsoft and RealNetworks.
CommTech Talks: Challenges for Video on Demand (VoD) servicesAntonio Capone
Chili is an Italian premium video on demand service that has expanded to several European countries. It faces challenges in designing a scalable architecture to support multiple devices, delivering content through content delivery networks, and protecting content with digital rights management while offering various formats. Chili addresses these by using a microservices architecture, adaptive streaming technologies, a multi-CDN approach, and common encryption standards to manage digital rights across platforms.
Premium content protection is key to a successful content monetization strategy and with the recent evolution of streaming formats and standards, it is now easier than ever to create DRM-protected streaming systems. The ability to support all of today’s DRMs - including Widevine, Fairplay and PlayReady – in an efficient and easy-to-manage workflow is crucial for operators who want to enable richer feature sets, such as offline viewing and TVE.
Join Irdeto and Bitmovin for a live webinar as we explore
+ Common approaches for Digital Rights Management in 2018
+ Changes coming to common workflows with CMAF
+ Real-world implementations of simple and complex systems
Watch the webinar! >> https://buff.ly/2ILcSp3
MIPS Technologies is a leading provider of processors for connected digital home devices. The document discusses market trends driving increased connectivity and capabilities in digital home devices. It recommends hardware specifications for MIPS processors to support 1080p video playback, 3D graphics, and future platforms like Android. Key partnerships are highlighted to help customers integrate complementary technologies and accelerate development.
This document provides an overview of Azure Media Services, Microsoft's cloud platform for on-demand and live video streaming solutions. It describes key features such as encoding, packaging, content protection, and scalable delivery. It also outlines common usage scenarios like live events, video portals, and digital distribution. Azure Media Services accounts require an associated storage account to access media content and processing jobs. The document discusses types of media services for on-demand and live streaming workflows.
Encoding refers to converting media files between formats, usually to compress them for storage and transmission. It involves removing redundant data to reduce file sizes while maintaining quality. Lossy encoding is used for audio and video and discards some data, resulting in degraded quality compared to the original. Lossless encoding maintains the original quality by identifying and removing only purely repetitive data. The goals of encoding are compression to smaller file sizes and formatting for compatibility with devices and software.
The document discusses video compression techniques. It describes video compression as removing repetitive images, sounds, and scenes to reduce file size. There are two types: lossy compression which removes unnecessary data, and lossless compression which compresses without data loss. Common techniques involve predicting frames, exploiting temporal and spatial redundancies, and standards like MPEG. Applications include cable TV, video conferencing, storage media. Advantages are reduced file sizes and faster transfer, while disadvantages are recompilation needs and potential transmission errors.
Azure Media Services Step-by-Step Tutorial Docs Series - Part 6Shige Fukushima
This document discusses applying content protection for video on demand (VOD) streaming using Azure Media Services. It describes using PlayReady or Widevine for digital rights management (DRM) with common encryption to encrypt Smooth Streaming content and package it into HLS or DASH formats. It also describes using AES clear key encryption for trusted content without DRM. The document provides an overview of content protection options and scenarios for when each would be appropriate.
This white paper discusses the H.264 video compression standard and its applications in video surveillance. It provides an introduction to H.264 and how it offers significantly higher compression rates than previous standards like MPEG-4 Part 2, reducing bandwidth and storage needs. It then explains how video compression works, the development of the H.264 standard, and how it supports different profiles and levels to optimize various applications and formats. The paper concludes that H.264 will be widely adopted and help enable higher resolution surveillance applications.
H.264 is a new video compression standard that provides much more efficient compression than previous standards like MPEG-4 and Motion JPEG. It can reduce file sizes by 50-80% while maintaining the same quality. H.264 supports applications with different bandwidth and latency requirements. It uses various frame types and motion compensation techniques to reduce redundant data between frames. These techniques, along with an improved intra-frame prediction method, allow H.264 to compress video much more efficiently than prior standards.
What is React-Native?
Why React-Native?
How React-Native works in detail?
- Metro bundler
- Main Thread
- Shadow Thread
- Javascript Thread
Yoga Engine
Threads Communication in React-Native
Comparison with Flutter and Native
React-Native Components
Introduction to Clean Code in Turkish
Temiz Kod Nedir?
Neden Temiz Kod Yazmalıyız?
Temiz Kod Nasıl Yazılır?
Temiz Kod Yazmaya Giriş
- İsimlendirme Kuralları
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
2. WHAT IS A STREAMING PROTOCOL?
A Streaming Protocol is:
A standardized delivery method for breaking video into chunks
Sending them to the viewer
Reassembling them on the viewer's device
3. WHY NEED STREAMING PROTOCOLS?
Most digital video is designed for:
Storage (small file sizes)
Playback (universal playback)
Most standard video formats are not designed for streaming
In order to stream a video:
The video must be converted to a streamable file
A streamable file consists of chunks
These chunks arrive sequentially and play back as received
4. STREAMING PROTOCOLS ADVANTAGES
Streaming protocols can get much more complex
Many are “adaptive bitrate” protocols
Deliver the best quality that a viewer can support at any given time
Some protocols focus on “reducing latency”
Latency is the delay between an event and the viewer seeing it
Some protocols focus on “DRM”
Some protocols work only on certain systems
5. PROTOCOL – CODEC – CONTAINER FORMAT
Codec refers to “Video Compression Technology”
Different codecs are used for different purposes
For example:
Apple ProRes is often used for video editing
H.264 is widely used for online video
6. PROTOCOL – CODEC – CONTAINER FORMAT
Format simply refers to the container format of a video file
.mp4, .m4v, .avi, .mkv
A container format is like a “box” that contains:
A video file
An audio file
Metadata
Container format isn’t a central concept for live streamers
7. STREAMING IN REAL-LIFE
Imagine that you’re a merchant, and you’re transporting clothing in bulk
The clothing represents the video content
The streaming codec is the machine that compresses the clothing into a bundle to save space
The container format is the boxcar that these bundles are packed inside
The streaming protocol is analogous to the railroad tracks, signals, and drivers who deliver it to the destination
8. STREAMING IN OTT
Generate multiple versions of the same content (e.g. different bitrates, spatial resolutions)
Chop these versions into segments (e.g. two seconds)
The segments are stored on a web server and can be downloaded with HTTP GET requests
The relationships between the different versions are described by a manifest file
The manifest file is provided to the client prior to the streaming session
The manifest describes the available qualities of the media content
For each quality, the manifest lists the individual segments with their URLs
This structure binds attributes such as bitrate, start time, and duration to each segment
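A simplified HLS-style manifest illustrates this structure. The master playlist lists the available qualities, and each media playlist lists that quality's segments (the renditions, bitrates, and URLs below are hypothetical):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
```

Each media playlist (e.g. mid/index.m3u8) then enumerates the two-second segments in playback order:

```
#EXTM3U
#EXT-X-TARGETDURATION:2
#EXTINF:2.0,
segment0.ts
#EXTINF:2.0,
segment1.ts
```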
9. STREAMING PROTOCOLS
HTTP LIVE STREAMING (HLS)
DYNAMIC ADAPTIVE STREAMING over HTTP (MPEG-DASH)
MICROSOFT SMOOTH STREAMING (MSS)
REAL-TIME MESSAGING PROTOCOL (RTMP)
WEB-RTC
SECURE RELIABLE TRANSPORT (SRT)
REAL-TIME STREAMING PROTOCOL (RTSP)
10. HTTP LIVE STREAMING
HLS
Apple created it in 2009
Built to replace Flash on iPhones
Supported by:
Desktop browsers
Smart TVs
Android and iOS mobile devices
HTML5 players also support it natively
11. HTTP LIVE STREAMING
HLS
HLS supports:
Adaptive-bitrate streaming (High Quality)
Supports the common H.264 codec
Supports the newer H.265 codec
Secure streaming
The major downside is high latency
12. DYNAMIC ADAPTIVE STREAMING OVER HTTP
MPEG-DASH
The only internationally standardized solution
Created in 2012
Currently adopted by YouTube, Netflix, and others
Most big companies have contributed to standardization
13. DYNAMIC ADAPTIVE STREAMING OVER HTTP
MPEG-DASH
MPEG-DASH supports:
Adaptive-bitrate streaming (High Quality)
Codec agnostic (can be used with almost any video codec)
It supports standards-based browser APIs:
Encrypted Media Extensions (EME) for DRM
Media Source Extensions (MSE) for adaptive playback
The major downside is no compatibility with Apple Devices/iOS
14. MICROSOFT SMOOTH STREAMING
MSS
Microsoft created it in 2008
Targets smooth delivery of HD content over IIS (Internet Information Services)
Based on fragmented MP4 files
15. MICROSOFT SMOOTH STREAMING
MSS
MSS supports:
Adaptive-bitrate streaming (High Quality)
Factors CPU utilization into its adaptive-bitrate switching decisions
Supports the common H.264 codec
The major downside is that Smooth Streaming is limited to CDNs running Microsoft products
19. DIGITAL RIGHTS MANAGEMENT
DRM
DRM refers to a set of algorithms and processes for protecting content
DRM enforces copyright compliance when consuming video content
Without DRM, content can be easily copied
DRM is not visible to the consumers
DRM is also used offline to provide copyright protection for CDs, DVDs, and Blu-rays
20. DRM TECHNOLOGIES
FairPlay: uses Cipher Block Chaining (CBC) encryption
The only option for Safari; used only on Apple devices
Widevine: Developed by Widevine Technologies, bought by Google
Used on Android Devices natively, in Chrome, Edge (soon), Roku, Smart TVs
PlayReady: developed and maintained by Microsoft
Supported on Windows, most set-top boxes and TVs
27. DRM ENCRYPTION KEYWORDS
COMMON MEDIA APPLICATION FORMAT (CMAF)
There are primarily two protocols in use today – MPEG-DASH and HLS
MPEG-DASH uses the mp4 container and HLS uses the MPEG-TS (ts) container for its video files
The result: duplicate content (doubled storage size)
When DRM is added on top:
With 3 hypothetical DRM providers using 3 different encryption standards, we need 2 × 3 = 6 copies of the video
The CMAF specification was created to solve this
Store files in the fragmented mp4 container format (fmp4)
With support from both MPEG-DASH and HLS, we can create only one set of videos and store it in the fmp4 format
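The storage saving can be sketched with a quick calculation (the counts follow the slide's hypothetical 2-container, 3-DRM scenario):

```python
def copies_required(containers: int, encryption_standards: int) -> int:
    """Each container format must be stored once per distinct encryption standard."""
    return containers * encryption_standards

# Without CMAF: DASH (mp4) + HLS (ts), 3 DRM providers with 3 different standards
print(copies_required(2, 3))  # 6 copies of every video

# With CMAF (single fmp4 container) + CENC (single encryption standard)
print(copies_required(1, 1))  # 1 copy
```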
28. DRM ENCRYPTION KEYWORDS
COMMON ENCRYPTION SPECIFICATION (CENC)
If different DRM technologies use different encryption standards
We still need to store multiple copies of each file
For this purpose, MPEG developed the Common Encryption specification (CENC)
Videos can be encrypted using either the cenc scheme (AES-128 Counter mode, CTR) or the cbcs scheme (AES-128 Cipher Block Chaining, CBC)
The implication of CENC:
A content provider needs to encrypt each video only once, and any conforming decryption module can decrypt it
Note: Exposing the encryption algorithm is not a problem as long as the keys are strongly protected.
29. DRM ENCRYPTION KEYWORDS
ADVANCED ENCRYPTION STANDARD (AES)
AES is a symmetric-key algorithm: encryption and decryption are performed using the same key
It has three variants based on the key-length:
128, 192, and 256 bits. The longer the key, the harder it is to crack.
Cracking AES-128 without the key would take a supercomputer on the order of a “billion times a billion years”
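The symmetric-key property (the same key is used in both directions) can be illustrated with a toy XOR stream cipher. This is purely a sketch of the concept; it is not AES and is not cryptographically secure:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key restores the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"16-byte-demo-key"  # hypothetical demo key, NOT a real AES key
plaintext = b"protected video segment"
ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext  # same key decrypts
```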
30. HOW DRM WORKS?
ENCRYPTION
Communications between the requesting playback software and the license server are encrypted
Each segment is encrypted according to the MPEG Common Encryption (CENC) specification
The MPEG-CENC standard comprises XML-style formats
The MPEG-CENC standard requires a minimum of a key and key id to run
Standard content encryption is done according to the Advanced Encryption Standard (AES)
Using 128-bit keys and a block cipher mode
The mode is either Counter Mode (CTR) or Cipher Block Chaining (CBC)
Only the audio and video data within a segment is encrypted
32. DRM DECRYPTION KEYWORDS
ENCRYPTED MEDIA EXTENSIONS (EME)
Encrypted Media Extensions (EME) is a JavaScript API
EME is an extension to the HTMLMediaElement specification
EME provides an API that enables web applications to interact with content protection systems
EME allows playback of encrypted audio and video
EME is designed to enable the same app and encrypted files to be used in any browser, regardless of the underlying protection system
33. DRM DECRYPTION KEYWORDS
CONTENT DECRYPTION MODULE (CDM)
Content Decryption Module (CDM) is software that decrypts and, optionally, decodes and displays the video.
Every DRM provider provides its own:
Mechanism to create a license request (using the KeyID, device identifier, signing the request, etc.)
Mechanism to understand the license response received from the DRM License Server (the response is encrypted too) and extract the decryption key
Rules around storing the license locally on the client, license renewal, expiry, etc.
CDMs are built into browsers such as Chrome, Firefox, Microsoft Edge, and Safari
34. DRM FLOW
Obtain the movie & its manifest from the CDN
Extract the KeyID from the manifest
Create the license request
Send the license request to the license server
Wait, listen, and receive the response from the license server.
Use the decryption key from the server to decrypt the content
Decode the decrypted content
Display the decoded movie
37. HOW DRM WORKS?
DECRYPTION
When a web player identifies protected content:
It calls on processes and interfaces defined by Encrypted Media Extensions (EME)
Browsers will initiate a license request process
License requests are generated by the Content Decryption Module (CDM); all of the decryption is done by the CDM
They are passed to the player through the EME (the EME is simply an interface)
The player calls the appropriate function on the EME interface
Then the sessions are updated by the CDM
The EME interfaces with the CDM, which handles the decryption of the segments at the browser or OS level
38. HOW DRM WORKS?
CLIENT-SIDE
The license acquisition using the EME starts from the playback client
Creating a key session unique to the client, device, and the metadata found in the segments
The CDM then generates a signed key message.
The client then sends the secured message to the license server
The license server returns the requested license
With the resulting decision of whether or not the client is granted playback rights to the requested content
If not, playback is halted and an error is shown.
If the request succeeds, the client updates the session data with the returned license
The content decryption is handled fully by the CDM
In some circumstances, the license is cached for a set time and can be used to playback protected content offline
The license and the decrypted data must not be accessible to clients other than the licensed content requester
Therefore, the private keys and decrypted data are kept in a secure environment within the browser, operating system, and hardware (if supported), such as a Trusted Execution Environment.
Basically, protocols are technical processes that facilitate the transfer of data from one program to another.
In streaming, this means the transfer of your video files to and from your encoder, streaming host, and eventually, the video player where your audience views your stream.
As a consequence, each client will first request the manifest that contains the temporal and structural information for the media content,
and based on that information it will request the individual segments that fit best for its requirements.
The adaptation to the bitrate or spatial resolution is done on the client-side for each segment, e.g., the client can switch to a higher bitrate – if bandwidth permits – on a per-segment basis, or to a lower bitrate – if bandwidth decreases.
This has several advantages because the client knows its capabilities such as the received throughput, delay, device capabilities (e.g., screen resolution), etc. best.
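This client-side rendition selection can be sketched as follows (the bitrate ladder is hypothetical; real players also smooth throughput estimates and account for buffer levels):

```python
def pick_rendition(measured_kbps: float, ladder_kbps: list) -> int:
    """Pick the highest bitrate the measured throughput can sustain,
    falling back to the lowest rendition if none fits."""
    affordable = [b for b in sorted(ladder_kbps) if b <= measured_kbps]
    return affordable[-1] if affordable else min(ladder_kbps)

ladder = [800, 2500, 5000]  # kbps, hypothetical bitrate ladder
print(pick_rendition(3200, ladder))  # 2500: highest rendition under 3200 kbps
print(pick_rendition(6000, ladder))  # 5000: bandwidth permits the top rendition
print(pick_rendition(500, ladder))   # 800: lowest rendition, used as a floor
```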
The H.265 codec delivers twice the video quality at the same file size as H.264.
Microsoft also includes the CPU utilization as an indicator for the stream switching decision which is especially valuable for mobile devices such as smartphones and tablets. This means that if the CPU utilization is high, the client reduces the stream quality and resolution which furthermore reduces the CPU performance needs of the decoding process and guarantees a continuous decoding without stalls.
Adaptive bitrate technology on DRM
Encryption is a technique used to keep data confidential and prevent unauthorized people from reading it.
Encryption uses a “key” to convert input data (plaintext) into an alternate form called ciphertext.
It is almost impossible to convert the ciphertext back to plaintext without the key.
However, practically speaking, decryption without the key is possible, and encryption algorithms are designed to make reverse-engineering extremely expensive – in terms of time, money, and computing resources needed.
Apple FairPlay supports only the AES-CBC cbcs mode.
HLS supports only the AES-CBC cbcs mode (irrespective of CMAF).
Widevine and PlayReady support both the AES-128 CTR cenc and AES-128 CBC cbcs modes.
MPEG-DASH with CMAF supports both the AES-128 CTR cenc and AES-128 CBC cbcs modes.
MPEG-DASH without CMAF supports only the AES-128 CTR cenc mode.
Similarly, when we encrypt a movie with a particular key, we need to create that association and provide that to the DRM license server (our receptionist, if you will).
In DRM, a “KeyID” provides the association between an encryption key and a movie. It is a unique string of characters generated at the time of creating an encryption key for a particular movie.
The Encryption Key and the KeyID are stored in a secure server (Key Store) that works alongside a DRM license server.
When a client needs to play an encrypted movie, it requests the DRM license server for the decryption key by providing that particular movie’s KeyID. If the DRM license server is happy with the request (authentic request), it will ask the Key Store to provide the decryption key associated with that KeyID.
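The KeyID lookup described above can be sketched as a toy simulation. The Key Store, naming scheme, and authenticity check below are all hypothetical; real license servers also evaluate device security levels, policies, and so on:

```python
import secrets

# Hypothetical in-memory Key Store mapping KeyID -> encryption key
key_store = {}

def create_key(movie_id):
    """Generate a 128-bit key for a movie and register it under a fresh KeyID."""
    key_id = "kid-" + movie_id + "-" + secrets.token_hex(4)
    key_store[key_id] = secrets.token_bytes(16)
    return key_id

def license_server(key_id, request_is_authentic):
    """Return the decryption key only for authentic requests with a known KeyID."""
    if request_is_authentic and key_id in key_store:
        return key_store[key_id]
    return None  # request denied; the player halts playback

key_id = create_key("movie-001")
assert license_server(key_id, True) is not None   # authentic request gets the key
assert license_server(key_id, False) is None      # unauthenticated request denied
```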
DRM vendors test and certify these CDMs to ensure that:
the license requests are formed correctly and as per specifications
they do not leak the decryption keys
they do not leak the decrypted and decoded movies
they securely store the decryption keys based on the license specifications (store the key for X days, for example)
safely transport the video to the screen without leaking it
For the above reasons, CDMs in browsers are closed-source, and this is a source of contention in the industry and among the public. They are not trusted by some because the public cannot see what is inside the CDM's source code.
The player takes care of obtaining the movie, parsing the manifest, extracting the KeyID, making the requests to the DRM License Server, etc.
A separate module (called the CDM or Content Decryption Module) takes care of creating the license request, decrypting & decoding the content.
The video player is a JavaScript program that uses the EME APIs to transmit messages between the CDM and the License Server.
From the perspective of the content requester …