Inter BEE 2010 Report: Improved Picture Quality with Next-Generation Technology -- New-Type Encoders Appear.
It has been 10 years since the commencement of BS digital broadcasting, and 7 years since the commencement of terrestrial digital broadcasting (in Japan). Both systems use the MPEG-2 standard to encode images. MPEG-2 was chosen as viable long-term broadcasting infrastructure because its flexibility could accommodate anticipated advances in encoder technology and picture quality.
Advanced encoders capable of transmitting digital terrestrial broadcasts at 1920 x 1080 pixels, instead of the conventional 1440 x 1080 pixels, were on exhibit at Inter BEE, held in November 2010. Applications that convert archived video to higher resolutions using super-resolution enhancement technology were also on display, pointing the way toward content reuse. (Koji Suginuma)
Advancing Picture Quality
The standards from MPEG-1 onward strictly stipulate the decoding method while allowing wide discretion in encoding. Taken to its logical conclusion, this means "anything goes as long as a compliant decoder can always play it back". Since decoding is strictly defined, encoding methods naturally tend to follow certain patterns; even so, the freedom allowed in encoder development produces significant differences from older encoders.
This freedom has driven advances in picture quality and efficiency, enabling higher picture quality at the same bit rate, or similar picture quality at a lower bit rate. Lower bit rates matter little to individual terrestrial broadcast stations, each of which has its own allotted bandwidth. With satellite broadcasting (BS, CS), however, where multiple stations must share a single transponder, efficiency is critical from an economic point of view.
The capacity of a transponder cannot be changed, so the only way to increase the number of channels is more efficient compression that does not impinge on picture quality. In the West, development of MPEG-2 encoders for satellite broadcasters is thriving: major encoder manufacturers bring out new models every two years, and encoder efficiency has risen steadily since 2000.
Achieving 1920 horizontal pixels
NEC exhibited its new VC-5350 MPEG-2 encoder, which uses adaptive frame/field coding. Because digital terrestrial broadcasting is interlaced, video is fundamentally composed of fields. There are two ways to encode such video: combine two fields and encode them as a single frame, or encode while retaining the field structure.
The former is not a simple mixing of two fields: the lines of the top and bottom fields must be properly interleaved into one frame, which is then encoded. Because there is a time difference between the two fields, this method works best for still images or video with minimal movement.
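The field-combining step amounts to a line-by-line interleave. The toy sketch below is illustrative only; the assignment of the top field to the even-numbered lines is an assumption, not the VC-5350's actual implementation:

```python
import numpy as np

def weave_fields(top_field, bottom_field):
    """Interleave two fields line-by-line into one frame for
    frame-structure encoding. Which field supplies the even lines
    is an assumption here; real encoders follow the stream's
    top/bottom-field-first signalling."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field      # top field -> even-numbered lines
    frame[1::2] = bottom_field   # bottom field -> odd-numbered lines
    return frame
```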
In the field-structure method, each field is encoded separately; two fields are never merged into a single frame. A newly encoded field references previously encoded I-picture and P-picture fields and is optimized against them. Because no time slippage occurs within a single field, this method is better for video with movement.
The VC-5350 switches between the two encoding methods according to the video, optimizing encoding for motion. Until now, terrestrial digital broadcasting has mainly used a resolution of 1440 x 1080 pixels, defined as the "H14L" (High-1440) level in MPEG-2. Using 1920 horizontal pixels requires the "HL" (High) level.
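The adaptive switch hinges on detecting motion between the two fields. This is a minimal sketch of the idea; the mean-absolute-difference metric and the threshold are illustrative assumptions, not the VC-5350's actual decision criterion:

```python
import numpy as np

def choose_coding_structure(top_field, bottom_field, threshold=8.0):
    """Toy frame/field decision: a large difference between the two
    fields suggests motion between them, favouring field coding;
    a small difference favours weaving them into one frame."""
    mad = np.mean(np.abs(top_field.astype(np.int32) -
                         bottom_field.astype(np.int32)))
    return "field" if mad > threshold else "frame"
```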
According to NEC, this encoder can achieve 1920 pixels (HL) without raising the bit rate; conversely, the bit rate can be lowered at 1440 horizontal pixels. The new encoder thus gives broadcasters more options.
Mitsubishi Electric presented next-generation encoding technology at its booth. HEVC has been proposed for joint standardization by the ISO-based MPEG group and the ITU-based H.26x group, and Mitsubishi Electric is engaged in joint research with NHK Science & Technology Research Laboratories.
HEVC, expected to carry the designation H.265, aims to double the efficiency of MPEG-4 AVC/H.264 (hereafter AVC/H.264), halving the bit rate with no loss of picture quality. It also aims to make processing complexity easier to control, reducing the load on hardware and software development. A draft standard is slated for 2012.
Mitsubishi Electric also exhibited the same technology at CEATEC in October, but since Inter BEE is aimed at industry professionals, the company revised its exhibit this time around. At CEATEC, the proposed methods were compared with AVC/H.264; at Inter BEE they were compared with the original images. The demonstration compressed 8K x 4K video to 51.3 Mbps (about 460 to 1).
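The quoted 460-to-1 ratio is consistent with uncompressed 8K video. A back-of-envelope check, assuming a 7680 x 4320 source at 60 frames/s with 8-bit 4:2:0 sampling (the article states only the compressed bit rate, so the source format here is an assumption):

```python
# Back-of-envelope check of the ~460:1 compression ratio.
# The 60 fps, 8-bit 4:2:0 source format is an assumption.
width, height, fps = 7680, 4320, 60
bits_per_pixel = 8 * 1.5            # 4:2:0 carries 1.5 samples per pixel
raw_bps = width * height * bits_per_pixel * fps
ratio = raw_bps / 51.3e6            # compressed to 51.3 Mbps
print(round(ratio))                 # about 466, close to the quoted 460:1
```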
If 8K x 4K video can be kept to around 50 Mbps, terrestrial and satellite broadcasting of it comes closer to reality. HEVC holds promise as a method for achieving Super Hi-Vision (UHDTV) broadcasting.
HD through Super-Resolution
The NEC booth also featured a reference exhibit of super-resolution technology aimed at archive applications. Super-resolution uses a variety of processes to reproduce images at resolutions greater than the original. Generally speaking, it takes multiple input images and converts them to high definition using the subtle positional differences between them.
Recently, something also called super-resolution has appeared in televisions. That technology works from the data of a single frame, however, and differs somewhat from classic super-resolution processing.
When super-resolution is done by the book, sections where motion occurs cannot be processed: the process outputs high resolution only for the still sections and leaves the moving sections at their original resolution.
NEC has made motion sections processable by implementing motion compensation. Super-resolution processing draws on the three frames before and after the frame in focus (seven frames in total) and can be applied even to fast-moving objects.
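A minimal sketch of the underlying idea, motion-compensated multi-frame fusion, is shown below. This is not NEC's algorithm: the whole-pixel motion offsets, naive pixel-replication upsampling, and plain averaging are placeholder assumptions standing in for sub-pixel motion estimation and proper reconstruction filters.

```python
import numpy as np

def multi_frame_sr(frames, offsets, scale=2):
    """Toy motion-compensated multi-frame fusion: upsample each
    frame, shift it onto the centre frame's grid using its
    (dy, dx) motion offset, then average all contributions."""
    h, w = frames[0].shape
    accum = np.zeros((h * scale, w * scale))
    for frame, (dy, dx) in zip(frames, offsets):
        up = np.kron(frame, np.ones((scale, scale)))   # naive upsample
        accum += np.roll(up, (dy, dx), axis=(0, 1))    # motion compensation
    return accum / len(frames)
```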
At present, a peculiar distortion appears in parts where motion compensation is insufficient, but it is not bothersome. The long-hoped-for ability to convert standard-definition video into high definition appears to be within reach.
At present, the processing runs on a PC and takes about ten times the content's running time to convert SD to HD (1920 x 1080 pixels); in other words, a one-hour movie takes ten hours to process. With the older methods, however, converting one hour of SD to HD took about a month, meaning NEC has improved processing speed by a factor of about 70. And because the current software requires no special hardware, there is room for further speed gains through the addition of extra graphics processors and so on.
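The 70-fold figure follows from the quoted times; taking "about a month" as 30 days (an assumption, since the article does not give an exact duration):

```python
# Sanity check on the quoted ~70x speedup for one hour of SD-to-HD work
old_hours = 30 * 24          # "about a month", taken here as 30 days
new_hours = 10               # 10x real time for a one-hour programme
print(old_hours / new_hours) # 72.0, in line with the quoted factor of 70
```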
NEC is developing this technology with the aim of enabling broadcasters and others to broadcast their SD assets as HD. It will take a few more years, but this technology promises further advancement of content usage.
Super-Resolution 4K Debut
Super-resolution 4K for theatres and amusement parks was also presented: Zaxel exhibited real-time HD-to-4K conversion technology that uses graphics processing units (GPUs).
From 1920 x 1080/60i input, Zaxel's system outputs 3840 x 2160/60p in real time. The system uses two GPUs, an NVIDIA Quadro FX5800 and a GeForce GTX275 (480 cores in total).
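The figures above imply a substantial per-core pixel rate. A rough calculation from the numbers given in the article (ignoring any work the CPUs may share, which the article does not describe):

```python
# Rough output throughput implied by the demo's figures
pixels_per_frame = 3840 * 2160
fps, cores = 60, 480
pixels_per_second = pixels_per_frame * fps        # ~498 million pixels/s
per_core = pixels_per_second / cores
print(f"{per_core / 1e6:.2f} Mpixel/s per core")  # about 1.04
```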
Zaxel President Norihisa Suzuki is a highly experienced computer-architecture specialist, having served as a University of Tokyo professor, as head of IBM's Tokyo research laboratory, and at Sony Computer Science Laboratories. After establishing a company in the US in 1998 and forming its subsidiaries, Mr. Suzuki founded Zaxel in 2004. The company has focused on developing high-performance encoding technologies and has branched into the super-resolution field.
Mr. Suzuki says the entertainment industry is eyeing this technology because there is not much 4K content around; it would let them show their existing HD assets, upconverted, on large screens.
1) Zaxel President Norihisa Suzuki, and the company's GPU-boosted real-time super-resolution technology.
2) NEC's VC-5350 encoder handles field structures and can encode 1920 horizontal pixels at conventional bit rates.
3) NEC's super-resolution reference exhibit works with motion video and offers a 70-fold increase in processing speed.