MPEG Systems receives two Emmy Awards

The 138th MPEG meeting was held online, 25–29 April 2022

At its 73rd annual ceremony on April 26, 2022, the National Academy of Television Arts & Sciences (NATAS) presented the ISO/IEC MPEG Systems Working Group (WG 3) with two Technology & Engineering Emmy® Awards for its MPEG-DASH and Open Font Format standards.

Since 1948, the Technology & Engineering Emmy® Awards have honored development and innovation in TV-related technology, recognizing companies, organizations, and individuals for breakthroughs in this field.

This year, MPEG Systems received a Technology & Engineering Emmy® for its work on “Standardization of HTTP Encapsulated Protocols”. 3GPP, which partnered with MPEG on the collaborative development of the MPEG-DASH standard, was also recognized with an Emmy.

MPEG Systems started the project on HTTP streaming in 2009. MPEG published the first edition of the Dynamic Adaptive Streaming over HTTP (DASH) standard (ISO/IEC 23009) in 2012, followed by three more editions since then. The 5th edition of the MPEG-DASH standard is expected to be published in 2022.

The MPEG-DASH standard defines a delivery format for streaming multimedia content with the highest quality possible over networks with variable bandwidth using CDNs. Its features support many services and applications such as on-demand video, live streaming, low latency streaming, and targeted ad insertion. MPEG-DASH is the first true open international standard for video streaming over the internet that enabled multi-vendor interoperable solutions and has been widely adopted by industry and various consortia.
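To illustrate the adaptive-streaming idea behind MPEG-DASH, here is a minimal sketch of the throughput-based adaptation step a DASH player might perform when picking a Representation from a manifest. The bitrate ladder, the `choose_representation` helper, and the safety factor are illustrative assumptions, not part of the standard, which leaves adaptation logic to the client.

```python
# Illustrative sketch only: a simplified throughput-based adaptation step,
# loosely modeled on how a DASH client picks a Representation advertised in
# an MPD. Names and values below are hypothetical, not defined by MPEG-DASH.

from dataclasses import dataclass
from typing import List


@dataclass
class Representation:
    rep_id: str
    bandwidth_bps: int  # corresponds to the @bandwidth attribute in the MPD


def choose_representation(reps: List[Representation],
                          measured_throughput_bps: float,
                          safety_factor: float = 0.8) -> Representation:
    """Pick the highest-bitrate representation that fits the measured throughput."""
    budget = measured_throughput_bps * safety_factor
    candidates = [r for r in reps if r.bandwidth_bps <= budget]
    if not candidates:
        # Nothing fits the budget: fall back to the lowest bitrate.
        return min(reps, key=lambda r: r.bandwidth_bps)
    return max(candidates, key=lambda r: r.bandwidth_bps)


if __name__ == "__main__":
    ladder = [Representation("240p", 400_000),
              Representation("720p", 2_500_000),
              Representation("1080p", 6_000_000)]
    # With ~3.2 Mbps of measured throughput, the 720p representation is chosen.
    print(choose_representation(ladder, measured_throughput_bps=3_200_000).rep_id)
```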

The standard was developed with the strong participation of experts from around the world. For more than a decade, more than 90 experts from over 60 companies contributed to the state-of-the-art technology defined by the standard. The following experts, in particular, are recognized by MPEG for contributing to the development of the standard in various ways: Ali C. Begen, Romain Bouqueau, Imed Bouazizi, Zachary Cava, Mary-Luc Champel, Cyril Concolato, Igor Curcio, Franck Denoual, Mike Dolan, Wang Fang, Gerard Fernando, Per Fröjdh, Alexander Giladi, Jeff Goldberg, Miska Hannuksela, Mitsuhiro Hirabayashi, Paul Higgs, Ingo Hoffmann, Kilroy Hughes, Will Law, Jin Young Lee, Jean Le Feuvre, Brendan Long, Sylvain Kervadec, Yongliang Liu, Shivakumar Mahadevappa, Frederic Maze, Nhut Nguyen, Harry Pile (late), Yuriy Reznik, Sungryeul Rhyu, Yago Sanchez, Thomas Schierl, Iraj Sodagar, Thomas Stockhammer, Kevin Streeter, Yasser Syed, Viswanathan (Vishy) Swaminathan, Emmanuel Thomas, Christian Timmerer, Ye Xiaoyang, Ye-Kui Wang, Mark Watson, Yang Yanzi, Shaobo Zhang, and Waqar Zia.

Many companies and institutions participated by providing experts and resources for the development of the DASH standard, including Adobe, Akamai, Bitmovin, Brightcove, Bytedance, CableLabs, Canon, Cisco, Comcast, Ericsson, ETRI, Fraunhofer, Google, Huawei, Hulu, InterDigital Communications, LG Electronics, KPN, Microsoft, Netflix, Orange, Nokia, Ozyegin University, Panasonic, Qualcomm, Samsung, Sony, Telecom Paris, Tencent America, Xiaomi, University of Klagenfurt, Vubiquity, and ZTE Corporation.

The chair of the MPEG-DASH subgroup, Iraj Sodagar, said, “The success of the DASH standard shows that when the industry as a whole participates in a standardization project under a reputable SDO such as ISO/IEC JTC 1, the resulting standard can make a direct impact on the entire industry by enabling interoperability between different parts of the ecosystem.”

The second Technology & Engineering Emmy® Award presented to MPEG Systems this year is for the development of ISO/IEC 14496-22 “Open Font Format”, whose 4th edition was published in 2019 and whose 5th edition is currently under development. The Emmy was presented for “Standardization of Font Technology for Custom Downloadable Fonts and Typography for Web and TV Devices”.

Font technology standardization work as part of the MPEG-4 family of standards was initiated in 2004 with the publication of ISO/IEC 14496-18 “Font compression and streaming”. This work continued with the development of the new Part 22 (ISO/IEC 14496-22), “Open Font Format”, which was based on the contribution of OpenType technology by Microsoft and Adobe in 2004. Since then, the standard has been revised and extended many times, with the participation of a wide community of experts who offer combined expertise in typography, linguistics, font design, and computer science. Contributors to this work include representatives of a number of large companies, among them Adobe, Apple, Google, Microsoft, and Monotype, as well as many individual subject-matter experts, including John Hudson, Laurence Penney, and Adam Twardoch (among many others), who made valuable contributions. Vladimir Levantovsky (Type Standards LLC) has served as chair of the Fonts subgroup and as Project Editor for both standards.

“Fonts are the critical components of any written communication. Text carries a meaning, but it’s a font that makes text readable – fonts give the written word a voice! Standardization of the Open Font Format technology by ISO/IEC JTC 1’s MPEG Systems Working Group (SC 29/WG 3) significantly influenced the capabilities of all classes of consumer electronic devices bringing advanced font technology for digital TV, streaming media environments, and the Web. It also inspired many open-source projects that enabled mass adoption of high-quality font rendering and advanced text support, making it easy and cost-effective for OEMs, service providers, and content authors to deploy new features and applications supporting all world languages and writing systems,” the chair of the Fonts subgroup, Vladimir Levantovsky, said.

MPEG selects Technology for MPEG-I Video-based Dynamic Mesh Compression

MPEG had previously issued a Call for Proposals (CfP) for dynamic mesh compression technology in October 2021. At the 138th MPEG meeting, MPEG Coding of 3D Graphics (WG 7) reviewed submissions to this CfP and selected technology for the subsequent standardization process. Five submissions covering full codecs were evaluated based on (i) objective measurements for geometry and attributes and (ii) subjective visualization tests conducted at two test sites.

The technology selected in the CfP already enables the efficient representation of realistic dynamic objects (including humans), and MPEG Coding of 3D Graphics (WG 7) will conduct additional experiments to further improve this baseline technology and to produce a standard specification document and test model for the full codec (encoder and decoder). The final standard will make it possible to include realistic 3D dynamic objects in games and VR or AR experiences, transmit them between virtual worlds, and create immersive experiences for artistic performances and sports events.
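For readers unfamiliar with the content type, the sketch below shows a plain data model for one frame of a dynamic mesh sequence, the kind of asset the planned standard would compress. It is purely illustrative and does not reflect the codec's actual bitstream syntax or test-model structures.

```python
# Illustrative sketch only: a simple data model for one frame of a dynamic mesh
# sequence. This is NOT the standard's syntax; it just names the components a
# dynamic mesh codec has to represent efficiently frame after frame.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MeshFrame:
    positions: List[Tuple[float, float, float]]  # 3D vertex coordinates (geometry)
    faces: List[Tuple[int, int, int]]            # triangle connectivity (vertex indices)
    uvs: List[Tuple[float, float]]               # texture coordinates (attributes)
    texture_png: bytes                           # per-frame texture map


# A "dynamic" mesh is a time-ordered sequence of such frames, in which geometry,
# connectivity, and textures may all change from frame to frame.
DynamicMesh = List[MeshFrame]
```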

It is expected that the standard will progress to the first formal stage of its approval process with a Committee Draft (CD) in April 2023, followed by a Draft International Standard (DIS) in January 2024 and a completed International Standard (IS) by October 2024.

MPEG selects Technologies for Encoder and Packager Synchronization and Asset Storage

At the previous (137th) MPEG meeting in January 2022, MPEG Technical Requirements (WG 2) had issued a Call for Proposals (CfP) on technologies for encoder and packager synchronization and asset storage. At the 138th meeting, MPEG concluded the evaluation of the responses to this CfP. The responses fulfilled, and in part went beyond, the CfP requirements, with one proposal addressing encoder/packager synchronization and a second addressing asset storage and recording. Together, the responses already cover 85 percent of the expressed synchronization requirements and 75 percent of the storage requirements. Furthermore, the proposals are well aligned with existing MPEG specifications, in particular MPEG-DASH and CMAF. As a result, MPEG produced a Working Draft of the upcoming specification at this meeting. The final standard is expected to be completed at the beginning of 2023.

This standard will greatly assist the industry in achieving effective interoperability for the production and storage of 24×7 live content. It meets specific requirements for 24×7 live media production and distribution in cloud-based workflows, such as the use of object-based cloud storage. The proposal addresses the use cases of redundant encoder synchronization for failover handling, distributed encoding of (multi-codec) bit-rate ladders, and A/B watermarking. As the proposals are based on MPEG-DASH and CMAF, this standard also introduces best practices for using these MPEG technologies for the key use cases targeted by broadcasters and content owners.
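As a rough illustration of the redundant-encoder synchronization use case, the sketch below shows one common approach: deriving segment numbers and boundaries from a shared epoch and a fixed segment duration, so independently running encoders produce identically aligned CMAF segments without coordinating directly. The constants and function names are assumptions for illustration and are not taken from the MPEG specification under development.

```python
# Illustrative sketch only: epoch-based segment numbering as one way redundant
# live encoders can stay in sync for failover. The specific values and helper
# names are hypothetical, not part of the MPEG specification.

EPOCH_UTC = 0.0          # agreed reference point (e.g., Unix epoch), in seconds
SEGMENT_DURATION = 2.0   # agreed segment duration, in seconds


def segment_number(media_time_utc: float) -> int:
    """Map a UTC capture timestamp to a deterministic segment index."""
    return int((media_time_utc - EPOCH_UTC) // SEGMENT_DURATION)


def segment_bounds(seg_num: int) -> tuple[float, float]:
    """Return the [start, end) presentation-time window of a segment."""
    start = EPOCH_UTC + seg_num * SEGMENT_DURATION
    return start, start + SEGMENT_DURATION


# Two encoders fed the same source clock agree on segment boundaries even if
# they sample the clock at slightly different instants within a segment:
assert segment_number(1_650_000_001.3) == segment_number(1_650_000_000.9)
```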

The MPEG Technical Requirements WG thanks the proponents who submitted responses to the CfP. MPEG will continue to collect and solicit feedback to improve the solution in the upcoming meetings.

MPEG completes CMAF support of EVC and VVC

As part of its efforts to integrate recently developed video coding standards into storage and delivery standards, MPEG Systems (WG 3) completed, at the 138th MPEG meeting, the standardization process for the carriage of Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF).

The third edition of ISO/IEC 23000-19 Common media application format (CMAF) for segmented media has reached the final milestone of standard development, Final Draft International Standard (FDIS), adding support for these new video coding technologies. The specification defines CMAF track and media profiles for VVC and EVC. For both video codecs, it specifies constraints on elementary streams for use in CMAF and constraints on the permitted values of some parameter-set fields. Restrictions on the usage of Supplemental Enhancement Information (SEI) and Video Usability Information (VUI) are also specified, and the specification defines how to indicate the conformance point of the video codecs, such as profile, level, and toolset indications. For VVC, the specification supports the carriage of both single-layer and multilayer bitstreams.
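For orientation, the codec-specific signaling that CMAF constrains for VVC and EVC lives inside the ISO Base Media File Format container (in the sample entries of the track's sample description). The sketch below simply walks the top-level boxes of such a file; the box layout (32-bit big-endian size plus a 4-character type) comes from the ISO BMFF base specification, while the file name and everything else is an assumption for illustration.

```python
# Illustrative sketch only: enumerate the top-level boxes of an ISO BMFF / CMAF
# file. The codec-specific sample entries that CMAF constrains for VVC and EVC
# sit deeper, under moov/trak/mdia/minf/stbl/stsd. Extended (64-bit) and
# to-end-of-file box sizes are not handled in this sketch.

import struct


def top_level_boxes(path: str):
    """Yield (box_type, box_size) pairs for the top-level boxes of a file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            yield box_type.decode("ascii", errors="replace"), size
            if size < 8:
                break  # size 0 or 1 (to-end / 64-bit) not handled here
            f.seek(size - 8, 1)  # skip the box payload


# Example usage with a hypothetical file name:
# for btype, bsize in top_level_boxes("init_segment.cmfv"):
#     print(btype, bsize)
```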

CMAF has already been adopted by other Standards Development Organizations (SDOs), many of which have been anticipating the integration of new video codecs into CMAF under various application requirements and constraints, so the development process was conducted in active collaboration with them. Through liaison communications and public GitHub discussions, many valuable inputs were received, carefully studied, and reflected in the specification as much as possible.

MPEG issues Call for Proposals for Video Coding for Machines

At the 138th MPEG meeting, MPEG Technical Requirements (WG 2) issued a Call for Proposals (CfP) for technologies and solutions enabling efficient video coding for machine vision tasks.

This work on “Video Coding for Machines” (VCM) aims at compressing input videos and images, or feature maps extracted from them, for machine tasks. Because machines consume and understand visual data differently from humans, coding technologies and solutions may differ from conventional ones, even for coding videos and images, in order to achieve optimized performance for machine usage. With the rise of machine learning technologies and machine vision applications, the amount of video and images consumed by machines has been growing rapidly. Typical use cases include intelligent transportation, smart cities, and intelligent content management, which incorporate machine vision tasks such as object detection, instance segmentation, and object tracking. Due to the large volume of video data, it is essential to compress video for efficient transmission and storage. Besides the much-needed compression benefit, VCM can also be helpful in other regards, such as computational offloading and privacy protection.

Over the last three years, MPEG has investigated potential technologies for efficient compression of visual data for machine vision tasks and has established an evaluation mechanism that includes common test conditions (CTC), rate-distortion-based metrics, and evaluation pipelines. In addition to existing image datasets, new datasets, in particular video datasets, have been built and donated to MPEG; these are valuable for future research and standardization.
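As one example of a rate-distortion-based metric of the kind used in such evaluations, the sketch below computes a Bjøntegaard-delta-style average bitrate difference between two rate-quality curves. The actual VCM common test conditions define their own metrics and pipelines (for example, bitrate versus machine-task accuracy), which may differ from this generic illustration.

```python
# Illustrative sketch only: a Bjontegaard-delta-style average bitrate difference
# between two rate-quality curves (test vs. reference), as a generic example of
# rate-distortion-based evaluation. Expects at least four (rate, quality) points
# per curve. This is not the exact metric defined in the VCM CTC.

import numpy as np


def bd_rate(rates_ref, quality_ref, rates_test, quality_test) -> float:
    """Average bitrate difference (%) of 'test' vs. 'ref' over the shared quality range."""
    log_r_ref, log_r_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit cubic polynomials of log-rate as a function of quality.
    p_ref = np.polyfit(quality_ref, log_r_ref, 3)
    p_test = np.polyfit(quality_test, log_r_test, 3)
    # Integrate over the overlapping quality interval.
    lo = max(min(quality_ref), min(quality_test))
    hi = min(max(quality_ref), max(quality_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_log_diff = ((np.polyval(int_test, hi) - np.polyval(int_test, lo)) -
                    (np.polyval(int_ref, hi) - np.polyval(int_ref, lo))) / (hi - lo)
    # Convert the average log-rate difference to a percentage; negative means savings.
    return float((10 ** avg_log_diff - 1) * 100)
```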

This CfP welcomes submissions of proposals from companies and other organizations. Registration is required by 6 July 2022; the submission of bitstream files, results, and decoders is required by 30 September 2022; and the submission of proponent documentation is due by 12 October 2022. Evaluation of the submissions in response to the CfP will be performed at the 140th MPEG meeting in October 2022.

Companies and organizations that have developed VCM technologies are kindly invited to submit responses to this CfP by contacting Dr. Igor Curcio, MPEG Technical Requirements Convenor, at [email protected].

MPEG White Paper on MPEG Smart Contracts for Media

At the 138th MPEG meeting, MPEG Liaison and Communication (AG 03) approved a white paper on MPEG Smart Contracts for Media.

In the last few years, MPEG has developed a set of standardized RDF ontologies and XML schemas for the codification of intellectual property (IP) rights information related to music and media. ISO/IEC 21000-19 Media Value Chain Ontology (MVCO) facilitates rights tracking for fair, timely, and transparent transactions by capturing user roles and their permissible actions on a particular IP entity. ISO/IEC 21000-19/AMD1 Audio Value Chain Ontology (AVCO) extends MVCO functionality related to the description of IP entities in the audio domain, e.g., multitrack audio and time segments. ISO/IEC 21000-21 (2nd Ed) Media Contract Ontology (MCO) facilitates the conversion of narrative contracts to digital ones related to the exploitation of IP rights, payments, and notifications. With respect to the latter, XML schemas have been developed as ISO/IEC 21000-20 (2nd Ed) Contract Expression Language (CEL).

Furthermore, the axioms in these XML schemas and RDF ontologies can drive the execution of rights-related workflows in controlled environments, e.g., Distributed Ledger Technologies (DLTs), where transparency and interoperability are favoured toward fair trade of music and media. Thus, the aim of ISO/IEC 21000-23 Smart Contracts for Media is to provide the means (e.g., application programming interfaces) for converting these XML and RDF media contracts to smart contracts that can be executed on existing DLT environments. By performing this conversion in a standard way for several smart contract languages, MPEG-21 CEL/MCO schemas and ontologies will prevail as the interlingua for transferring verified contractual data from one DLT to another.
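To make the idea of machine-readable contract data concrete, the sketch below expresses a simple rights statement as RDF triples using the rdflib library. The namespace URI and the class and property names are placeholders for illustration only; the actual MVCO/MCO vocabularies are defined by the MPEG-21 standards themselves, and ISO/IEC 21000-23 addresses how such data would be mapped onto smart contracts.

```python
# Illustrative sketch only: a toy rights statement as RDF triples with rdflib.
# The namespace and term names below are placeholders, NOT the real MVCO/MCO
# vocabulary defined by ISO/IEC 21000-19/-21.

from rdflib import Graph, Literal, Namespace, RDF

MVCO = Namespace("http://example.org/mvco#")  # placeholder namespace

g = Graph()
g.bind("mvco", MVCO)

# "alice" is a user who holds a permission over the IP entity "track42".
g.add((MVCO.track42, RDF.type, MVCO.IPEntity))
g.add((MVCO.alice, RDF.type, MVCO.User))
g.add((MVCO.alice, MVCO.hasPermissionOver, MVCO.track42))
g.add((MVCO.track42, MVCO.title, Literal("Example Song")))

# Serialize to Turtle; a converter per ISO/IEC 21000-23 would translate such
# statements into clauses of a smart contract on a target DLT.
print(g.serialize(format="turtle"))
```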

Another important feature of this standard is that it offers the possibility to bind the clauses of a smart contract with those of a narrative contract and vice versa. In this way, each party signing a smart contract knows exactly what the clauses stored in the smart contract express.
