Network Working Group                                           L. Berc
Request for Comments: 2035                 Digital Equipment Corporation
Category: Standards Track                                      W. Fenner
                                                              Xerox PARC
                                                            R. Frederick
                                                              Xerox PARC
                                                              S. McCanne
                                            Lawrence Berkeley Laboratory
                                                            October 1996
             RTP Payload Format for JPEG-compressed Video
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Abstract
This memo describes the RTP payload format for JPEG video streams.
The packet format is optimized for real-time video streams where
codec parameters change rarely from frame to frame.
This document is a product of the Audio-Video Transport working group
within the Internet Engineering Task Force. Comments are solicited
and should be addressed to the working group's mailing list at
rem-conf@es.net and/or the author(s).
The Joint Photographic Experts Group (JPEG) standard [1,2,3] defines
a family of compression algorithms for continuous-tone, still images.
This still image compression standard can be applied to video by
compressing each frame of video as an independent still image and
transmitting them in series. Video coded in this fashion is often
called Motion-JPEG.
We first give an overview of JPEG and then describe the specific
subset of JPEG that is supported in RTP and the mechanism by which
JPEG frames are carried as RTP payloads.
The JPEG standard defines four modes of operation: the sequential DCT
mode, the progressive DCT mode, the lossless mode, and the
hierarchical mode. Depending on the mode, the image is represented
in one or more passes. Each pass (called a frame in the JPEG
standard) is further broken down into one or more scans. Within each
scan, there are one to four components, which represent the three
components of a color signal (e.g., "red, green, and blue", or a
luminance signal and two chrominance signals). These components can
be encoded as separate scans or interleaved into a single scan.
Each frame and scan is preceded by a header containing optional
definitions for compression parameters like quantization tables and
Huffman coding tables. The headers and optional parameters are
identified with "markers" and comprise a marker segment; each scan
appears as an entropy-coded bit stream within two marker segments.
Markers are aligned to byte boundaries and (in general) cannot appear
in the entropy-coded segment, allowing scan boundaries to be
determined without parsing the bit stream.
Compressed data is represented in one of three formats: the
interchange format, the abbreviated format, or the table-
specification format. The interchange format contains definitions
for all the tables used by the entropy-coded segments, while the
abbreviated format may omit some of them, assuming they were defined
out-of-band or by a "previous" image.
The JPEG standard does not define the meaning or format of the
components that comprise the image. Attributes like the color space
and pixel aspect ratio must be specified out-of-band with respect to
the JPEG bit stream. The JPEG File Interchange Format (JFIF) [4] is
a de facto standard that provides this extra information using an
application marker segment (APP0). Note that a JFIF file is simply a
JPEG interchange format image along with the APP0 segment. In the
case of video, additional parameters must be defined out-of-band
(e.g., frame rate, interlaced vs. non-interlaced, etc.).
While the JPEG standard provides a rich set of algorithms for
flexible compression, cost-effective hardware implementations of the
full standard have not appeared. Instead, most hardware JPEG video
codecs implement only a subset of the sequential DCT mode of
operation. Typically, marker segments are interpreted in software
(which "re-programs" the hardware) and the hardware is presented with
a single, interleaved entropy-coded scan represented in the YUV color
space.
To maximize interoperability among hardware-based codecs, we assume
the sequential DCT operating mode [1,Annex F] and restrict the set of
predefined RTP/JPEG "type codes" (defined below) to single-scan,
interleaved images. While this is more restrictive than even
baseline JPEG, many hardware implementations fall short of the
baseline specification (e.g., most hardware cannot decode non-
interleaved scans).
In practice, most of the table-specification data rarely changes from
frame to frame within a single video stream. Therefore, RTP/JPEG
data is represented in abbreviated format, with all of the tables
omitted from the bit stream. Each image begins immediately with the
(single) entropy-coded scan. The information that would otherwise be
in both the frame and scan headers is represented entirely within a
64-bit RTP/JPEG header (defined below) that lies between the RTP
header and the JPEG scan and is present in every packet.
While parameters like Huffman tables and color space are likely to
remain fixed for the lifetime of the video stream, other parameters
should be allowed to vary, notably the quantization tables and image
size (e.g., to implement rate-adaptive transmission or allow a user
to adjust the "quality level" or resolution manually). Thus explicit
fields in the RTP/JPEG header are allocated to represent this
information. Since only a small set of quantization tables are
typically used, we encode the entire set of quantization tables in a
small integer field. The image width and height are encoded
explicitly.
Because JPEG frames are typically larger than the underlying
network's maximum packet size, frames must often be fragmented into
several packets. One approach is to allow the network layer below
RTP (e.g., IP) to perform the fragmentation. However, this precludes
rate-controlling the resulting packet stream or partial delivery in
the presence of loss. For example, IP will not deliver a fragmented
datagram to the application if one or more fragments is lost, or IP
might fragment an 8000 byte frame into a burst of 8 back-to-back
packets. Instead, RTP/JPEG defines a simple fragmentation and
reassembly scheme at the RTP level.
The RTP timestamp is in units of a 90000 Hz clock. The same timestamp must
appear across all fragments of a single frame. The RTP marker bit is
set in the last packet of a frame.
The type field specifies the information that would otherwise be
present in a JPEG abbreviated table-specification as well as the
additional JFIF-style parameters not defined by JPEG. Types 0-127
are reserved as fixed, well-known mappings to be defined by this
document and future revisions of this document. Types 128-255 are
free to be dynamically defined by a session setup protocol (which is
beyond the scope of this document).
The height field encodes the height of the image in 8-pixel multiples
(e.g., a height of 30 denotes an image 240 pixels tall).
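As an informal illustration only, the 64-bit RTP/JPEG header described
above could be declared and packed in C roughly as follows. The field
widths shown (an 8-bit type-specific field, a 24-bit fragment offset,
and 8-bit type, Q, width, and height fields), and the assumption that
the width field is expressed in 8-pixel multiples like the height
field, are assumptions consistent with the field descriptions in this
memo rather than additional specification.
   /*
    * Informal sketch of the 64-bit RTP/JPEG header (network byte
    * order).  Field widths are assumptions consistent with the
    * descriptions in this memo.
    */
   #include <stdint.h>

   struct rtp_jpeg_hdr {               /* layout, for reference only  */
       uint8_t tspec;                  /* type-specific field (TSPEC) */
       uint8_t off[3];                 /* fragment offset, 24 bits    */
       uint8_t type;                   /* type code (0-127 fixed)     */
       uint8_t q;                      /* quantization table selector */
       uint8_t width;                  /* image width  / 8 (assumed)  */
       uint8_t height;                 /* image height / 8            */
   };

   /* Pack the header into a buffer in network byte order. */
   static void jpeg_hdr_pack(uint8_t *p, uint8_t tspec, uint32_t off,
                             uint8_t type, uint8_t q,
                             unsigned width_pixels, unsigned height_pixels)
   {
       p[0] = tspec;
       p[1] = (off >> 16) & 0xff;
       p[2] = (off >>  8) & 0xff;
       p[3] =  off        & 0xff;
       p[4] = type;
       p[5] = q;
       p[6] = (uint8_t)(width_pixels  / 8);
       p[7] = (uint8_t)(height_pixels / 8);   /* e.g., 240 -> 30      */
   }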
The data following the RTP/JPEG header is an entropy-coded segment
consisting of a single scan. The scan header is not present and is
inferred from the RTP/JPEG header. The scan is terminated either
implicitly (i.e., the point at which the image is fully parsed), or
explicitly with an EOI marker. The scan may be padded to arbitrary
length with undefined bytes. (Existing hardware codecs generate
extra lines at the bottom of a video frame and removal of these lines
would require a Huffman-decoding pass over the data.)
As defined by JPEG, restart markers are the only type of marker that
may appear embedded in the entropy-coded segment. The "type code"
determines whether a restart interval is defined, and therefore
whether restart markers may be present. It also determines if the
restart intervals will be aligned with RTP packets, allowing for
partial decode of frames, thus increasing resilience to packet drop.
If restart markers are present, the 6-byte DRI (define restart
interval) marker segment [1, Sec. B.2.4.4] precedes the scan.
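As a small illustration of the segment layout (a sketch, assuming the
standard DRI marker syntax of [1, Sec. B.2.4.4]), the 6-byte DRI
segment consists of the DRI marker, a two-byte length field with
value 4, and the 16-bit restart interval:
   /*
    * Sketch: write the 6-byte DRI (define restart interval) marker
    * segment that precedes the entropy-coded scan for types 2-5.
    * "interval" is the restart interval in MCUs.
    */
   #include <stdint.h>

   static void put_dri(uint8_t *p, uint16_t interval)
   {
       p[0] = 0xff;                     /* DRI marker                 */
       p[1] = 0xdd;
       p[2] = 0x00;                     /* segment length = 4         */
       p[3] = 0x04;
       p[4] = (interval >> 8) & 0xff;   /* restart interval, MSB      */
       p[5] = interval & 0xff;          /* restart interval, LSB      */
   }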
JPEG markers appear explicitly on byte aligned boundaries beginning
with an 0xFF. A "stuffed" 0x00 byte follows any 0xFF byte generated
by the entropy coder [1, Sec. B.1.1.5].
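This property can be used to locate restart markers without Huffman
decoding. The following sketch (assuming the standard RSTm marker
codes 0xFFD0 through 0xFFD7 from [1]) scans an entropy-coded segment
for the next restart marker while skipping stuffed bytes:
   /*
    * Sketch: return the offset of the next restart marker (0xFFD0 -
    * 0xFFD7) in an entropy-coded segment, or len if none is found.
    * A 0xFF followed by a stuffed 0x00 is entropy-coded data, not a
    * marker, and is skipped.
    */
   #include <stddef.h>
   #include <stdint.h>

   static size_t next_restart_marker(const uint8_t *data, size_t len)
   {
       size_t i;

       for (i = 0; i + 1 < len; i++) {
           if (data[i] != 0xff)
               continue;
           if (data[i + 1] == 0x00)     /* stuffed byte: skip it      */
               i++;
           else if (data[i + 1] >= 0xd0 && data[i + 1] <= 0xd7)
               return i;                /* RSTm marker found          */
       }
       return len;
   }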
The Type field defines the abbreviated table-specification and
additional JFIF-style parameters not defined by JPEG, since they are
not present in the body of the transmitted JPEG data. The Type field
must remain constant for the duration of a session.
Six type codes are currently defined. They correspond to an
abbreviated table-specification indicating the "Baseline DCT
sequential" mode, 8-bit samples, square pixels, three components in
the YUV color space, standard Huffman tables as defined in [1, Annex
K.3], and a single interleaved scan with a scan component selector
indicating components 0, 1, and 2 in that order. The Y, U, and V
color planes correspond to component numbers 0, 1, and 2,
respectively. Component 0 (i.e., the luminance plane) uses Huffman
table number 0 and quantization table number 0 (defined below) and
components 1 and 2 (i.e., the chrominance planes) use Huffman table
number 1 and quantization table number 1 (defined below).
Additionally, video is non-interlaced and unscaled (i.e., the aspect
ratio is determined by the image width and height). The frame rate
is variable and explicit via the RTP timestamp.
Six RTP/JPEG types are currently defined that assume all of the
above. The odd types have different JPEG sampling factors from the
even ones:
                              horizontal     vertical
           types    comp     samp. fact.   samp. fact.
          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
          | 0/2/4  |   0   |      2      |      1      |
          | 0/2/4  |   1   |      1      |      1      |
          | 0/2/4  |   2   |      1      |      1      |
          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
          | 1/3/5  |   0   |      2      |      2      |
          | 1/3/5  |   1   |      1      |      1      |
          | 1/3/5  |   2   |      1      |      1      |
          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
These sampling factors indicate that the chrominance components of
type 0/2/4 video are downsampled horizontally by 2 (often called
4:2:2) while the chrominance components of type 1/3/5 video are
downsampled both horizontally and vertically by 2 (often called
4:2:0).
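To make these parameters concrete, the following sketch (assuming the
standard SOF0 marker syntax of [1, Sec. B.2.2]; the function name
make_sof0 is illustrative) shows how the baseline frame header implied
by the type code, width, and height could be reconstructed, using the
component identifiers, sampling factors, and quantization table
assignments given above:
   /*
    * Sketch: emit the SOF0 (baseline DCT) frame header implied by
    * the RTP/JPEG type code and the width/height fields.  Component
    * ids 0, 1, 2 use quantization tables 0, 1, 1; types 0/2/4 use
    * 2x1 (4:2:2) and types 1/3/5 use 2x2 (4:2:0) luminance sampling.
    */
   #include <stdint.h>

   static int make_sof0(uint8_t *p, int type, unsigned w, unsigned h)
   {
       int i;
       int v0 = (type & 1) ? 2 : 1;     /* comp 0 vertical factor     */

       *p++ = 0xff; *p++ = 0xc0;        /* SOF0 marker                */
       *p++ = 0x00; *p++ = 17;          /* length = 8 + 3 * 3         */
       *p++ = 8;                        /* 8-bit sample precision     */
       *p++ = (h >> 8) & 0xff; *p++ = h & 0xff;   /* height (pixels)  */
       *p++ = (w >> 8) & 0xff; *p++ = w & 0xff;   /* width  (pixels)  */
       *p++ = 3;                        /* three components (Y, U, V) */
       for (i = 0; i < 3; i++) {
           *p++ = (uint8_t)i;           /* component id 0, 1, 2       */
           *p++ = (i == 0) ? (uint8_t)((2 << 4) | v0)  /* Y: 2x1/2x2  */
                           : (uint8_t)((1 << 4) | 1);  /* U, V: 1x1   */
           *p++ = (i == 0) ? 0 : 1;     /* quantization table number  */
       }
       return 19;                       /* marker + segment length    */
   }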
The three pairs of types (0/1), (2/3) and (4/5) differ from each
other as follows:
0/1 : No restart markers are present in the entropy data.
No restriction is placed on the fragmentation of the stream
into RTP packets.
The type specific field is unused and must be zero.
2/3 : Restart markers are present in the entropy data.
The entropy data is preceded by a DRI marker segment, defining
the restart interval.
No restriction is placed on the fragmentation of the stream
into RTP packets.
The type specific field is unused and must be zero.
4/5 : Restart markers are present in the entropy data.
The entropy data is preceded by a DRI marker segment, defining
the restart interval.
Restart intervals are sent as separate (possibly multiple)
RTP packets.
The type specific field (TSPEC) is used as follows:
A restart interval count (RCOUNT) is defined, which
starts at zero, and is incremented for each restart
interval in the frame.
The first packet of a restart interval gets TSPEC = RCOUNT.
Subsequent packets of the restart interval get TSPEC = 254,
except the final packet, which gets TSPEC = 255.
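A minimal sketch of this TSPEC assignment follows. The arrays
intervals[] and ilen[] hold the start and length of each restart
interval within the frame's entropy-coded data, and send_piece() is a
hypothetical routine standing in for building and sending one RTP
packet; only the TSPEC logic is intended to be illustrative.
   /*
    * Sketch of TSPEC assignment when packetizing a type 4/5 frame.
    */
   #include <stddef.h>
   #include <stdint.h>

   /* hypothetical: emit one RTP/JPEG packet carrying [data, data+len) */
   extern void send_piece(uint8_t tspec, const uint8_t *data,
                          size_t len, int last_packet_of_frame);

   static void send_frame(const uint8_t *const *intervals,
                          const size_t *ilen, size_t nintervals,
                          size_t max_payload)
   {
       size_t rcount, off, n;

       for (rcount = 0; rcount < nintervals; rcount++) {
           for (off = 0; off < ilen[rcount]; off += n) {
               uint8_t tspec;

               n = ilen[rcount] - off;
               if (n > max_payload)
                   n = max_payload;

               if (off == 0)
                   tspec = (uint8_t)rcount;    /* first packet        */
               else if (off + n < ilen[rcount])
                   tspec = 254;                /* middle packet       */
               else
                   tspec = 255;                /* final packet        */

               send_piece(tspec, intervals[rcount] + off, n,
                          rcount == nintervals - 1 &&
                          off + n == ilen[rcount]);
           }
       }
   }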
Additional types in the range 128-255 may be defined by external
means, such as a session protocol.
Appendix B contains C source code for transforming the RTP/JPEG
header parameters into the JPEG frame and scan headers that are
absent from the data payload.
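As a rough indication of the kind of reconstruction Appendix B
performs (a sketch assuming the standard SOS marker syntax of [1,
Sec. B.2.3], not a copy of the appendix), the scan header for the
single interleaved scan could be produced as follows, using the
Huffman table assignments given earlier:
   /*
    * Sketch: emit the SOS (start of scan) header for the single
    * interleaved scan.  Component 0 uses DC/AC Huffman tables 0;
    * components 1 and 2 use DC/AC Huffman tables 1.
    */
   #include <stdint.h>

   static int make_sos(uint8_t *p)
   {
       int i;

       *p++ = 0xff; *p++ = 0xda;        /* SOS marker                 */
       *p++ = 0x00; *p++ = 12;          /* length = 6 + 2 * 3         */
       *p++ = 3;                        /* three components in scan   */
       for (i = 0; i < 3; i++) {
           *p++ = (uint8_t)i;           /* scan component selector    */
           *p++ = (i == 0) ? 0x00 : 0x11;  /* DC/AC Huffman tables    */
       }
       *p++ = 0;                        /* spectral selection start   */
       *p++ = 63;                       /* spectral selection end     */
       *p++ = 0;                        /* successive approximation   */
       return 14;                       /* marker + segment length    */
   }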
The quantization tables used in the decoding process are
algorithmically derived from the Q field. The algorithm used depends
on the type field, but only one algorithm is currently defined for
the six predefined types.
All six predefined types assume two quantization tables. These
tables are chosen as follows. For 1 <= Q <= 99, the Independent JPEG
Group's formula [5] is used to produce a scale factor S as:
S = 5000 / Q for 1 <= Q <= 50
= 200 - 2 * Q for 51 <= Q <= 99
This value is then used to scale Tables K.1 and K.2 from [1]
(saturating each value to 8-bits) to give quantization table numbers
0 and 1, respectively. C source code is provided in Appendix A to
compute these tables.
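A minimal sketch of this scaling follows; the rounding and the lower
clamp of 1 are assumptions taken from the Independent JPEG Group's
code [5], and Appendix A remains the authoritative version. The base
table is Table K.1 or K.2 from [1].
   /*
    * Sketch: scale a base quantization table (Table K.1 or K.2 of
    * [1]) by the Q factor (1 <= q <= 99) to produce quantization
    * table 0 or 1, saturating each value to 8 bits.
    */
   static void make_qtable(int q, const unsigned char base[64],
                           unsigned char out[64])
   {
       int i, s, v;

       s = (q <= 50) ? (5000 / q) : (200 - 2 * q);
       for (i = 0; i < 64; i++) {
           v = (base[i] * s + 50) / 100;
           if (v < 1)
               v = 1;                   /* lower clamp (IJG behavior) */
           if (v > 255)
               v = 255;                 /* saturate to 8 bits         */
           out[i] = (unsigned char)v;
       }
   }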
For Q >= 100, a dynamically defined quantization table is used, which
might be specified by a session setup protocol. (This session
protocol is beyond the scope of this document). It is expected that
the standard quantization tables will handle most cases in practice,
and dynamic tables will be used rarely. Q = 0 is reserved.
Since JPEG frames are large, they must often be fragmented. Frames
should be fragmented into packets in a manner avoiding fragmentation
at a lower level. When using restart markers, frames should be
fragmented such that each packet starts with a restart interval (see
below).
Each packet that makes up a single frame has the same timestamp. The
fragment offset field is set to the byte offset of this packet within
the original frame. The RTP marker bit is set on the last packet in
a frame.
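An informal sketch of this fragmentation for types 0/1 follows;
send_packet() is a hypothetical routine that prepends the RTP and
RTP/JPEG headers and transmits one packet.
   /*
    * Sketch: fragment one JPEG frame (types 0/1) into RTP packets.
    * Every packet carries the same 90 kHz timestamp; the fragment
    * offset is the byte offset of the piece within the frame; the
    * RTP marker bit is set only on the last packet.
    */
   #include <stddef.h>
   #include <stdint.h>

   extern void send_packet(uint32_t timestamp, int marker,
                           uint32_t frag_offset,
                           const uint8_t *data, size_t len);

   static void fragment_frame(const uint8_t *scan, size_t scan_len,
                              uint32_t timestamp, size_t max_payload)
   {
       size_t off = 0;

       while (off < scan_len) {
           size_t n = scan_len - off;
           if (n > max_payload)
               n = max_payload;
           send_packet(timestamp,
                       off + n == scan_len,  /* marker on last packet */
                       (uint32_t)off, scan + off, n);
           off += n;
       }
   }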
An entire frame can be identified as a sequence of packets beginning
with a packet having a zero fragment offset and ending with a packet
having the RTP marker bit set. Missing packets can be detected
either with RTP sequence numbers or with the fragment offset and
lengths of each packet. Reassembly could be carried out without the
offset field (i.e., using only the RTP marker bit and sequence
numbers), but an efficient single-copy implementation would not
otherwise be possible in the presence of misordered packets.
Moreover, if the last packet of the previous frame (containing the
marker bit) were dropped, then a receiver could not detect that the
current frame is entirely intact.
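As a sketch of the completeness check described above (assuming the
received fragments have already been ordered by fragment offset;
frame_complete() is an illustrative name):
   /*
    * Sketch: verify that a set of fragments forms a complete frame.
    * "off" and "len" are the fragment offset and payload length of
    * each packet; the last packet must carry the RTP marker bit.
    */
   #include <stddef.h>
   #include <stdint.h>

   static int frame_complete(const uint32_t *off, const size_t *len,
                             size_t npkts, int last_has_marker)
   {
       size_t i;
       uint32_t expect = 0;

       if (npkts == 0 || !last_has_marker)
           return 0;
       for (i = 0; i < npkts; i++) {
           if (off[i] != expect)        /* gap or overlap detected    */
               return 0;
           expect += (uint32_t)len[i];
       }
       return 1;                        /* contiguous, ends w/ marker */
   }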
Restart markers indicate a point in the JPEG stream at which the
Huffman codec and DC predictors are reset, allowing partial decoding
starting at that point. The use of restart markers allows for
robustness in the face of packet loss.
RTP/JPEG Types 4/5 allow for partial decode of frames, due to the
alignment of restart intervals with RTP packets. The decoder knows it
has a whole restart interval when it receives a sequence of packets
with contiguous RTP sequence numbers, starting with TSPEC < 254
(i.e., TSPEC = RCOUNT) and either ending with TSPEC == 255, or ending
with a packet whose TSPEC < 255 when the next packet's TSPEC < 254
(or the frame ends).
It can then decompress the restart interval and paint it. The X and Y
tile offsets of the first MCU in the interval are given by:
tile_offset = RCOUNT * restart_interval * 2
x_offset = tile_offset % frame_width_in_tiles
y_offset = tile_offset / frame_width_in_tiles
The MCUs in a restart interval may span multiple tile rows.
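A small sketch of this computation follows; it computes the tile
offsets exactly as above, and pixel coordinates could then be
obtained by multiplying by the tile size (assumed here to be 8
pixels).
   /*
    * Sketch: compute where to paint a decoded restart interval.
    * Offsets are in tiles, following the formulas above.
    */
   static void interval_offset(unsigned rcount,
                               unsigned restart_interval,
                               unsigned frame_width_in_tiles,
                               unsigned *x_offset, unsigned *y_offset)
   {
       unsigned tile_offset = rcount * restart_interval * 2;

       *x_offset = tile_offset % frame_width_in_tiles;
       *y_offset = tile_offset / frame_width_in_tiles;
   }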
Decoders can, however, treat types 4/5 as types 2/3, simply
reassembling the entire frame and then decoding.
Lance M. Berc
Systems Research Center
Digital Equipment Corporation
130 Lytton Ave
Palo Alto CA 94301
Phone: +1 415 853 2100
EMail: berc@pa.dec.com
William C. Fenner
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
Phone: +1 415 812 4816
EMail: fenner@cmf.nrl.navy.mil
Ron Frederick
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
Phone: +1 415 812 4459
EMail: frederick@parc.xerox.com
Steven McCanne
Lawrence Berkeley Laboratory
M/S 46A-1123
One Cyclotron Road
Berkeley, CA 94720
Phone: +1 510 486 7520
EMail: mccanne@ee.lbl.gov