
The challenges of generating 2110 streams on Standard IT Hardware

  1. The challenges involved in generating ST 2110 streams on Standard IT Hardware
     Kieran Kunhya – kierank@obe.tv
  2. Company Introduction
     • Company specialising in software-based encoders and decoders for Sport, News and Channel contribution (B2B)
     • Based in Central London
     • Build everything in house – hardware, firmware, software
     • Not to be confused with:
  3. What is ST 2110
     “If you can't explain it to a six year old, you don't understand it yourself.”
     • Never seen any webinars/papers etc. get close to a half-decent explanation
     • Way too detailed or way too handwavy
     • Often very full of jargon
     • No explanation of why things are being done
     • Let’s simplify years of work into 10 minutes!
     • Then we will go into a deep dive on the implementation challenges in software
  4. Where things are
     • Live production (sports, news etc.) uses Serial Digital Interface (SDI) cables
     • An old-fashioned electrical signal carrying what was a very large amount of data when it was invented (a gigabit/s). Nowadays not so much.
     • One TV signal (pictures + sound) in one cable
     • And other historical signals
     • Expensive equipment to switch between signals
     • Specific to the television industry, huge mess of cabling
  5. Where things are going
     • Don’t want to use old-fashioned cabling
     • Internet growing very quickly
     • YouTube, Netflix, Social Media, Big Data
     • We can use Internet technology (but not the Internet itself) to send pictures and sound around a facility. Known as Internet Protocol (“IP”) technology.
     • Chop up a television picture and audio into thousands of little pieces (“packets”) and send them over a wired network
     • Hundreds of television signals in one cable. Cheaper equipment from a much bigger industry.
  6. About Time (1)
     • Set the time of two clocks and check them after a day
     • For example a microwave and an oven
     • They won’t be the same
     • Clocks have their own time source inside, each with small differences
     • Could be dependent on heat, location (e.g. altitude), manufacturer of the clock
     • Nowadays, a phone connects to the Internet to set its time
  7. About Time (2)
     • Television pictures and sound are generated from a clock on the camera or microphone. In Europe, 25 still pictures a second.
     • Hard to cut between two camera angles if the clocks do not match
  8. About Time (3)
     • An old-fashioned signal keeps them together, yet another cable
     • Note: on the web, Zoom, webcasts etc. don’t do this
  9. About Time (4)
     • During a leap year an extra day is added because the Earth doesn’t go round the Sun in a whole number of days (365.2422 days)
     • Also small adjustments known as “leap seconds” at certain times when the clock strikes New Year’s
     • Need to make sure time doesn’t jump, otherwise pictures will glitch when Big Ben strikes midnight
     • One of the reasons we can’t just use the time-sync mechanism a phone uses
     • Use GPS (satellite navigation) as a time source: very high quality, and it has no leap seconds
     • Transport this as IP packets over the network (Precision Time Protocol – PTP), but with much better quality than a normal Internet-connected clock
  10. Packets and time (1)
     • Split the pictures and sound up into thousands of little packets
     • Imagine all television signals started at the same time, and each picture was a “tick”
     • Work out how many pictures there should have been by now and wait to start at the right time (sketched below)
     [Timeline: 1st January 1970 (PTP epoch) → 42 years later… → Now]
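A minimal sketch of that calculation, assuming the sender already has the current PTP time in nanoseconds since the 1970 epoch and the frame rate expressed as a rational (25/1 in Europe); the helper names are illustrative, not from the original:

```c
#include <stdint.h>
#include <stdio.h>

#define NS_PER_SEC 1000000000ULL

/* Whole frames that should have been emitted by PTP time t_ns. */
static uint64_t frames_since_epoch(uint64_t t_ns, uint64_t rate_num, uint64_t rate_den)
{
    return (uint64_t)(((__int128)t_ns * rate_num) / ((__int128)rate_den * NS_PER_SEC));
}

/* PTP time (ns since the epoch) at which frame number n begins. */
static uint64_t frame_start_ns(uint64_t n, uint64_t rate_num, uint64_t rate_den)
{
    return (uint64_t)(((__int128)n * rate_den * NS_PER_SEC) / rate_num);
}

int main(void)
{
    uint64_t now_ns = 1600000000ULL * NS_PER_SEC;      /* example PTP time    */
    uint64_t n = frames_since_epoch(now_ns, 25, 1);    /* 25 fps in Europe    */
    uint64_t next = frame_start_ns(n + 1, 25, 1);      /* when to start       */
    printf("frame #%llu starts %llu ns after the epoch\n",
           (unsigned long long)(n + 1), (unsigned long long)next);
    return 0;
}
```

The 128-bit intermediate keeps the multiplication exact, which matters once the frame count runs into the tens of billions.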
  11. Packets and time (2)
     • Lots of pictures and sounds, split into packets and sent over the network
     • Packets are given an address (can send to more than one place) – millions per second
     • Not everyone needs to receive the pictures if they are only using the sound
     • When I receive these packets, how do I know when they are from?
     • Need to know this so I can make sure they are in sync (when he kicks the ball you get a kicking sound at the right time)
     • Attach the time (a “timestamp”) to each packet so we can put the puzzle back together
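A sketch of how that timestamp is derived: in ST 2110 the RTP timestamp is taken from the same PTP time, with video flows using a 90 kHz media clock (audio uses its sample rate instead). The function name is illustrative:

```c
#include <stdint.h>

/* 32-bit RTP timestamp for a video packet, from PTP time in nanoseconds
 * since the 1970 epoch: the 90 kHz media clock, truncated to 32 bits as
 * RTP requires. The receiver uses this to re-align packets, not their
 * arrival times. */
static uint32_t rtp_timestamp_video(uint64_t ptp_ns)
{
    return (uint32_t)(((__int128)ptp_ns * 90000) / 1000000000ULL);
}
```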
  12. Key points
     • We want to use modern IP technology to make live television
     • We have agreed on a common time across all the clocks in the broadcast
     • We split pictures and sound up into thousands of packets, make sure we send them at the right time, and add a timestamp to each packet
     • Anyone who wants to receive these packets uses the timestamps to put them all back together
     • The rest is just jargon and implementation details
  13. Commercial Drivers
     • Innovation in live production is limited by historical technology like SDI
     • Inflexible, costly, space-consuming
     • Difficult to scale operations
     • In contrast to modern streaming services that operate entirely in the cloud and can scale very easily
     • Also challenging to use IT hardware in an SDI environment; conversion devices required
  14. “This is easy, we’ve done this before”
     • Many have just put SDI/IP converters at both ends of the product
     • This is like an electric car manufacturer taking a petrol car and swapping the engine out
     • Doesn’t take advantage of the benefits of the new technology (density etc.)
     • Constrains product development (technical debt)
     • Challenging product redesigns are required: build the product around IP (analogous to the electric battery/engine)
  15. Using Standard IT Hardware
     • Large cost/operational efficiencies from using IT hardware
     • Cloud-like operations for live production
     • But what are the challenges?
     • Very high data rates (of UDP traffic) – see the back-of-envelope sketch below
     • Very tight timing requirements (hardware-centric)
     • Software-unfriendly data structures for pixel packing
     • Multidisciplinary engineering challenge
     • The technical paper goes into a lot more detail
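To put a number on “very high data rates”, a back-of-envelope calculation for a single uncompressed flow, assuming 1080p50 4:2:2 10-bit video and ignoring RTP/UDP/IP overhead:

```c
#include <stdio.h>

int main(void)
{
    const double width = 1920, height = 1080, fps = 50;
    const double bits_per_pixel = 20;   /* 4:2:2 sampling, 10 bits per sample */
    double bps = width * height * bits_per_pixel * fps;
    printf("%.2f Gbit/s per flow\n", bps / 1e9);   /* ~2.07 Gbit/s */
    return 0;
}
```

That is per flow, roughly a thousand times the tens of megabits per second a traditional compressed media application handles.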
  16. Very High Data Rates
     • UDP traffic requires costly system calls and memory copies by default in the Operating System (OS)
     • Kernel bypass techniques are used to reduce overheads
     • Bypass the Operating System and read/write directly to the Network Interface Card (NIC)
     • Many out there: DPDK, netmap, Registered I/O
     • Send packets in batches (we use ~4000)
     • Approximately 10-20x speed improvement
     • Rendering video to/from the NIC, like a video player does to the graphics card
     • Implement the IP stack yourself: Ethernet, VLAN, IP, UDP (sketched below)
     • Lose features of the OS such as firewalls and routing
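With kernel bypass the NIC hands the application raw frames, so the headers the OS would normally fill in have to be written by hand. A minimal sketch, assuming IPv4 with no VLAN tag and with MACs/IPs passed as raw bytes; the function names are ours for illustration, not from the paper:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* RFC 1071 ones'-complement checksum, computed byte-wise so alignment and
 * host byte order do not matter. */
static uint16_t ip_checksum(const uint8_t *hdr, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]);
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Write Ethernet II + IPv4 + UDP headers into buf and return the offset at
 * which the RTP payload should be copied (42 bytes for an untagged frame). */
static size_t write_headers(uint8_t *buf,
                            const uint8_t dst_mac[6], const uint8_t src_mac[6],
                            const uint8_t src_ip[4], const uint8_t dst_ip[4],
                            uint16_t src_port, uint16_t dst_port,
                            uint16_t payload_len)
{
    uint8_t *ip = buf + 14, *udp = buf + 34;
    uint16_t ip_len = 20 + 8 + payload_len, udp_len = 8 + payload_len, csum;

    memcpy(buf, dst_mac, 6);                 /* Ethernet: dst, src, EtherType */
    memcpy(buf + 6, src_mac, 6);
    buf[12] = 0x08; buf[13] = 0x00;          /* 0x0800 = IPv4                 */

    memset(ip, 0, 28);                       /* zero the IPv4 + UDP headers   */
    ip[0] = 0x45;                            /* version 4, header length 5    */
    ip[2] = ip_len >> 8; ip[3] = ip_len & 0xff;
    ip[8] = 64;                              /* TTL                           */
    ip[9] = 17;                              /* protocol = UDP                */
    memcpy(ip + 12, src_ip, 4);
    memcpy(ip + 16, dst_ip, 4);
    csum = ip_checksum(ip, 20);              /* checksum field is still zero  */
    ip[10] = csum >> 8; ip[11] = csum & 0xff;

    udp[0] = src_port >> 8; udp[1] = src_port & 0xff;
    udp[2] = dst_port >> 8; udp[3] = dst_port & 0xff;
    udp[4] = udp_len >> 8;  udp[5] = udp_len & 0xff;
    /* udp[6..7] stay zero: the UDP checksum is optional over IPv4 */

    return 42;
}
```

In a real sender this runs once per packet, millions of times per second, which is why it has to be this mechanical.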
  17. Pixel Formats
     • The 2110 pixel format (pgroup) is very software-unfriendly
     • Computers work in bytes (8 bits) but no pixel lies on a byte boundary (alignment)
     • Many possible incoming formats, all need conversion to/from pgroup
     • Even in “fast” languages like C this is a slow operation
     • We use handwritten special instructions on the CPU (SIMD) to accelerate
     • Easily 20x faster than plain C
     • The school of thought that says C is fast enough is incredibly wrong
     • Kernel bypass + SIMD – 100-200x faster
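A scalar reference for the conversion the slide describes: in the YCbCr 4:2:2 10-bit pgroup, five bytes carry four 10-bit samples (Cb, Y0, Cr, Y1) packed MSB-first, so none of them lands on a byte boundary. A loop over something like this is what the hand-written SIMD replaces:

```c
#include <stdint.h>

/* Unpack one 5-byte pgroup (two pixels) into four 10-bit samples. */
static void unpack_pgroup_422_10(const uint8_t src[5], uint16_t out[4])
{
    /* Assemble the 40 bits in network (big-endian) order... */
    uint64_t v = ((uint64_t)src[0] << 32) | ((uint64_t)src[1] << 24) |
                 ((uint64_t)src[2] << 16) | ((uint64_t)src[3] <<  8) |
                  (uint64_t)src[4];
    /* ...then peel off 10 bits at a time. */
    out[0] = (v >> 30) & 0x3ff;   /* Cb */
    out[1] = (v >> 20) & 0x3ff;   /* Y0 */
    out[2] = (v >> 10) & 0x3ff;   /* Cr */
    out[3] =  v        & 0x3ff;   /* Y1 */
}
```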
  18. Timing: Frequency
     • Difficult to send thousands of packets from a server with microsecond accuracy. Software works on a ~100 µs timescale.
     • High-frequency traders in finance do it, but with carefully tuned hardware
     • Very tight busy loop required
     • Use NIC (Intel + Mellanox) rate limiters to control the output packet rate
     • Designed to hard-cap a virtual machine's rate
     • Numerous challenges associated with rate limiters
     • Need to send fractional bits per second for NTSC
     • Large inaccuracy in rate limiters (i.e. ask for 1 Gbps and get 1.001 Gbps)
     • Adaptively measure and compensate, otherwise buffers underflow (sketched below)
     • Also drift relative to the PTP clock. Have to measure both!
     • Likely an internal oscillator on the NIC
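A sketch of the adaptive compensation, with invented names and a made-up gain; the real controller also has to track drift against the PTP clock at the same time. The idea is simply to measure what the limiter actually delivers and correct the rate it is asked for:

```c
#include <stdint.h>

struct rate_ctl {
    double target_bps;      /* the rate the stream must average            */
    double requested_bps;   /* the rate currently programmed into the NIC  */
};

/* Call once per measurement window (e.g. one second of PTP time) with the
 * number of bits that actually left the NIC in that window. Returns the new
 * rate to request from the NIC's rate limiter. */
static double rate_ctl_update(struct rate_ctl *rc,
                              uint64_t bits_sent, uint64_t window_ns)
{
    double actual_bps = (double)bits_sent * 1e9 / (double)window_ns;
    double error = actual_bps / rc->target_bps;   /* 1.001 => limiter runs fast */
    /* small proportional correction so the loop does not oscillate */
    rc->requested_bps *= 1.0 - 0.1 * (error - 1.0);
    return rc->requested_bps;
}
```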
  19. Timing: Phase
     • Agreed that a frame must be produced N frame periods after 1-Jan-1970
     • Very difficult to start the video frame precisely at the correct time
     • Don’t know how long the gap is between sending the packet in software and it going on the wire
     • We took the approach of starting, measuring how far out we are, and drifting forward
     • Very careful design needed not to swing too rapidly or too slowly
     • Three coupled variables: packet gaps, frequency and phase
     • Change one and all of them move. Drift compensation is hard.
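A sketch of the “measure and drift forward” idea, with a made-up gain and clamp: the phase error is never corrected in one jump, only slewed a little per frame, because the same adjustment also disturbs the packet gaps and the frequency loop.

```c
#include <stdint.h>

/* Per-frame adjustment (ns) to apply to the packet schedule, given where the
 * frame actually started on the wire versus where PTP says it should have. */
static int64_t phase_slew(uint64_t ideal_start_ns, uint64_t measured_start_ns)
{
    int64_t error = (int64_t)(measured_start_ns - ideal_start_ns);
    int64_t step = error / 64;                /* correct ~1.5% per frame     */
    const int64_t max_step_ns = 1000;         /* never slew by more than 1 µs */
    if (step >  max_step_ns) step =  max_step_ns;
    if (step < -max_step_ns) step = -max_step_ns;
    return -step;   /* move the schedule back toward the ideal start */
}
```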
  20. Other Practicalities
     • 2110 requires a gap (no packets) where the historical blanking data used to go
     • Hard to make a clean gap in software; can’t just stop then start
     • Skew between 2022-7 links is a big problem
     • Can’t send two packets on the two links at exactly the same time
     • Have to send on both, measure and compensate (sketched below)
     • Handle a switch port going down
     • The buffer doesn’t empty, contains stale data and needs to be resynced with the other link
     • Could have to send flows at different resolutions or framerates at the same time
     • Numerous drift compensation algorithms could collide
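A sketch of the 2022-7 skew compensation, assuming the sender can obtain per-link transmit timestamps (e.g. from NIC hardware timestamping); whichever link transmitted earlier is held back by the measured skew so the two copies leave aligned:

```c
#include <stdint.h>

/* Given the measured transmit times of the same packet on links A and B,
 * return how long (ns) each link should be delayed on the next iteration. */
static void skew_compensate(int64_t tx_a_ns, int64_t tx_b_ns,
                            int64_t *delay_a_ns, int64_t *delay_b_ns)
{
    int64_t skew = tx_a_ns - tx_b_ns;    /* > 0: link A transmitted later */
    *delay_a_ns = skew < 0 ? -skew : 0;  /* hold back whichever link led  */
    *delay_b_ns = skew > 0 ?  skew : 0;
}
```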
  21. Audio
     • Audio seemed like a simple problem
     • Low bitrate: 10 Mbit/s doesn’t need kernel bypass
     • BUT, huge CPU load at the 125 µs profile
     • The bitrate is low but the packet rate is high! (see below)
     • Per-packet system call overhead
     • After several months, conceded and used kernel bypass instead of a busy loop through the OS
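The arithmetic behind that, assuming the 125 µs packet time with an example flow of eight channels of 24-bit, 48 kHz audio:

```c
#include <stdio.h>

int main(void)
{
    const double packet_time = 125e-6;                  /* 125 µs profile       */
    const double fs = 48000, channels = 8, bytes = 3;   /* example 8ch, 24-bit  */
    double pkts_per_sec = 1.0 / packet_time;            /* 8000 packets/s/flow  */
    double payload = fs * packet_time * channels * bytes;
    printf("%.0f packets/s, only %.0f audio bytes each\n", pkts_per_sec, payload);
    return 0;
}
```

So the socket path pays a full system call for roughly every 144 bytes of payload, which is why the busy loop through the OS eventually lost out to kernel bypass.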
  22. Interoperability
     • Huge problems with third-party interoperability
     • Some receivers require the source port to match the destination port (why?)
     • A lot of equipment ignores RTP timestamps and uses packet arrival times
     • Set them to zero when sending and the equipment receives and processes them without issue
     • Likewise, some equipment sends timestamps far in the past; some equipment doesn’t mind
     • Very problematic behaviour for software receivers. Software receivers generally trust timestamps, as the NIC or OS may not pass packet arrival times to software. In a switched network packet arrival times should not be used; timestamps should be used.
     • Simple things like IP header checksums not being verified
     • Audio had limitations: a lot of single-flow devices and channel-count limitations
  23. Software Comments
     • Challenging to maintain a modular codebase with uncompressed IP
     • Technical debt accumulation
     • A number of large rewrites
     • Hard to use standard paradigms such as reference counting (especially with tiny 125 µs buffers)
     • Very tight integration with the application
     • Can’t use garbage collection; it takes too long
  24. Conclusions
     • A 2110 software sender is a massive technical challenge lasting several years
     • This is not just normal software. It’s not a road car, it’s a very high performance Formula 1 car.
     • Can’t compare with traditional media processing applications that process tens of megabits per second
     • Thousands of times more data per second
     • In the end this is too complex to become mainstream; live broadcast continues to be held back by such complexity. Not plug-and-play.
  25. The future
     • The pandemic has shifted the spotlight to cloud production rather than on-premise
     • Shared environment, so tight timing is not possible/necessary
     • A new protocol is needed, such as Amazon CDI
     • Software-friendly pixel format – Cloud Exchange Format: https://github.com/vsf-tv/cef
     • Work is ongoing!
  26. Thank You!
     • Thank you to my team who worked very hard on this!
     • Thanks also to Chris and Martin from Sky, and Tim and Fabio from Amazon and IMG, who supported us commercially and technically during this development
     • Watched this in action during quarantine!

Speaker notes

  • I especially enjoy the way many presenters handwave away literally several years of work.
  • My phone can process many gigabits of data
  • Every 5 years speeds double
