Supermicro and Intel® product and solution experts will discuss, in an informal session, the benefits of the solutions in the areas of Cloud Gaming, Media Delivery, Transcoding, and AI Inferencing using the recently announced Intel Flex Series GPUs. The webinar will explain the advantages of the Supermicro solutions, the ideal servers and the benefits of using the Intel® Data Center GPU Flex Series (codenamed Arctic Sound-M).
7. Optimized for TCO and Density
PCIe Gen 4 cards
Half height, half length, simple passive cooling
2 GPUs per card
Launching with 3rd Gen Intel® Xeon® Scalable processors
See the Vision section of intel.com/performanceindex for workloads and configurations. Results may vary.
FLEX SERIES 140
- 8 Xe cores
- 4 media engines
- 8 ray tracing units
- 75W power envelope
- Half-height PCIe architecture
8. FLEX SERIES 170
- 32 Xe cores
- 2 media engines
- 32 ray tracing units
- 150W power envelope
- Full-height PCIe architecture
Optimized for Peak Performance
PCIe Gen 4 cards
Full height, ¾ length, single-wide, passive cooling
1 GPU per card
Launching with 3rd Gen Intel® Xeon® Scalable processors
18. Next Gen Video Experiences: AV1 on Intel® Data Center GPU Flex Series
Ushering in a new era of video with groundbreaking AV1 codec technology. Intel is leading the industry in bringing a suite of AV1 solutions to the market:
- SVT-AV1 CPU software encoder, optimized for ultimate quality and performance, for Intel® Xeon® and Core™ processors
- Intel® Data Center GPU Flex Series, bringing professional-quality, high-performance AV1 hardware encode to serve the data center
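As a hedged illustration of how the AV1 hardware encoder can be driven in practice, the sketch below composes an FFmpeg command line using the `av1_qsv` encoder (available in FFmpeg builds with Quick Sync/oneVPL support). The file names and bitrate are assumptions, and actually running the command requires a Flex Series GPU and a suitably built FFmpeg.

```python
# Sketch: compose an FFmpeg command for hardware AV1 encode via
# Intel's Quick Sync / oneVPL path (encoder name: av1_qsv).
# File names and bitrate here are illustrative assumptions.

def av1_hw_encode_cmd(src: str, dst: str, bitrate: str = "4M") -> list:
    """Build (but do not run) an ffmpeg argv list for AV1 HW encode."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",      # decode on the GPU where possible
        "-i", src,
        "-c:v", "av1_qsv",      # Flex Series AV1 hardware encoder
        "-b:v", bitrate,
        dst,
    ]

cmd = av1_hw_encode_cmd("input.mp4", "output.mp4")
# To actually encode: subprocess.run(cmd, check=True)
```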
19. Targeted Usages: Visual Inference and Media Analytics on Intel® Data Center GPU Flex Series
Use cases: Object Classification, Object Detection, Image Segmentation
Frameworks
Media: AVC/HEVC/AV1, JPEG/MJPEG, up to 4K
Xe Matrix Extensions: dedicated built-in AI functionality, accelerating most AI data types
- 256 INT8 ops/clock
- 128 FP16/BF16 ops/clock
- 512 INT4/INT2 ops/clock
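The per-clock figures above can be turned into a rough peak-throughput estimate. The sketch below assumes the ops/clock numbers are per Xe core, uses the 32 Xe cores of the Flex Series 170, and picks an illustrative 2.0 GHz clock; the core-count pairing and the clock are assumptions, not slide data, so treat the result as back-of-envelope only.

```python
# Back-of-envelope peak throughput from the XMX per-clock figures.
# Assumptions (not from the slide): ops/clock are per Xe core,
# 32 Xe cores (Flex Series 170), illustrative 2.0 GHz clock.

OPS_PER_CLOCK = {"INT8": 256, "FP16/BF16": 128, "INT4/INT2": 512}
XE_CORES = 32
CLOCK_HZ = 2.0e9  # assumed, for illustration only

def peak_tops(dtype: str) -> float:
    """Peak tera-ops per second for a given data type."""
    return OPS_PER_CLOCK[dtype] * XE_CORES * CLOCK_HZ / 1e12

print(f"INT8 peak: {peak_tops('INT8'):.1f} TOPS")
```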
Let’s first talk about the ATS-M150 optimized for peak performance.
As I said, great hardware needs great software.
Intel’s extensible and open software architecture focuses on optimizing end-to-end workloads: starting with low-level GPU drivers and firmware, through Intel’s oneAPI foundational layer, to use-case-specific open-source enablement of video, inference, and gaming frameworks and APIs.
This results in reference solutions for our partners.
Video streaming currently accounts for over 80% of global internet traffic which drives significant demand for video processing in the datacenter and increases cost for service providers.
Providers are looking to simultaneously improve bitrate savings to keep costs low, while maintaining the visual quality needed to satisfy their customers.
Arctic Sound-M is well positioned to help solve this challenge, bringing its industry leading AV1 HW encoder paired with Intel’s oneVPL library to the market later this year.
ATS-M with AV1 will deliver close to 30% distribution cost savings, which in today’s market could translate to about $20M per year in savings for 100K users.
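To make that claim concrete, a quick back-of-envelope check: if a 30% saving is worth $20M per year for 100K users, the implied baseline distribution cost is about $66.7M per year, or roughly $667 per user. Only the 30% and $20M figures come from the claim above; everything derived from them here is arithmetic.

```python
# Back-of-envelope check of the distribution-cost claim:
# a 30% saving worth $20M/year for 100K users.

savings_fraction = 0.30
savings_usd = 20e6
users = 100_000

baseline_usd = savings_usd / savings_fraction  # implied annual cost before AV1
per_user_usd = baseline_usd / users            # implied annual cost per user

print(f"Implied baseline: ${baseline_usd / 1e6:.1f}M/year")
print(f"Per user: ${per_user_usd:.0f}/year")
```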
Currently, inference solutions run on CPUs and GPUs in closed ecosystems.
We have been working with customers so they can seamlessly move their workloads to an all-Intel, open-source ecosystem.
There are three major buckets of use cases within the media analytics segment that we’re targeting with ATS-M.
First: IoT and video analytics, where video comes directly from a camera into the network and is processed to gather information. These are typically object detection, object classification, and segmentation algorithms.
Next: the same algorithms are used in the “Library Indexing and Compliance” category to really understand what is going on in the video and extract more information.
Lastly is “AI Guided Enhancement,” which is used to help improve video quality or drive down the cost of transmitting the video.
Here is what we are doing on ATS-M:
One: Intel is optimizing the most popular AI frameworks (TensorFlow, PyTorch, and OpenVINO) to work efficiently with ATS-M on visual inference models.
Two: You can use the same tools in the OpenVINO toolkit to enable hybrid workflows between CPUs and GPUs.
Three: In addition to AV1, ATS-M will continue to support multiple video codecs.
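The hybrid CPU/GPU workflow mentioned in point two can be sketched with OpenVINO’s documented MULTI device plugin. Only the device-string helper below runs standalone; `run_inference` shows the surrounding OpenVINO calls but needs the `openvino` package and a supported GPU, and the model path is a placeholder assumption.

```python
# Sketch: hybrid CPU/GPU inference with OpenVINO's MULTI device plugin.
# Only hybrid_device() runs standalone; run_inference() needs the
# `openvino` package and real hardware, and the model path is a placeholder.

def hybrid_device(prefer_gpu: bool = True) -> str:
    """Build an OpenVINO MULTI device string (GPU first when preferred)."""
    order = ["GPU", "CPU"] if prefer_gpu else ["CPU", "GPU"]
    return "MULTI:" + ",".join(order)

def run_inference(model_xml: str = "model.xml"):
    # Deferred import so the sketch stays runnable without OpenVINO installed.
    from openvino.runtime import Core
    core = Core()
    model = core.read_model(model_xml)
    # Inference requests are load-balanced across the listed devices.
    return core.compile_model(model, hybrid_device())
```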