Remember way back when, or well just a couple years ago really, when we only had that one standard video connector? To most people in the US, it was the yellow RCA plug that matched up to a yellow jack on our television. Video professionals had it pretty easy too. Just hook up a single BNC cable from the Video Out port to the Video In port on a monitor. Oh, those were the good old days. Now we have a whole collection of different connectors and video formats, not to mention about a dozen variations of each. So, in what promises to be a lengthy article, I will try to break down all the confusion and describe many of the video connectors that are in use today.
Composite Video – The Yellow Plug
A composite video signal is still the most common output and input on both consumer and professional video devices. On the consumer side, the composite signal is usually carried over a standard coaxial cable and connects to a television or VCR with an RCA (Radio Corporation of America) connector. These cables are commonly called RCA cables because of the connector type. RCA cables are used for a wide variety of applications including video, sound, and even digital S/PDIF surround sound. They also used to be called “phono” connectors, because that was how one connected a phonograph to a radio! Because of the wide variety of signals, the connectors are often color coded for specific uses. In a three-connector RCA cable, the video connector is yellow and the sound channels are red and white. All three are the same cable, so don’t let anyone at Best Buy tell you otherwise.
In the professional world, the same video signal is used but with a different video connector. The signal is still sent over a well-insulated coaxial cable and connects with a BNC type connector. The BNC is a locking connector, which is very well suited for professional video. These cables are usually called BNC cables. Because both RCA and BNC cables use a standard coaxial wire, they can be easily converted to each other with a BNC to RCA type adaptor.
So what exactly is a composite video signal? Well, it is called “composite” because the chroma (color) and luma (brightness) information are mixed together onto a single carrier signal by means of frequency multiplexing. This is the same way that video is transmitted over antenna or on a cable network. On a broadcast signal, the carrier frequency varies to create different channels. On a composite video signal, the frequency is fixed and only one signal is sent. Because broadcast carries so many different channels (each occupying its own slice of the frequency spectrum), the cable that runs from your cable jack to your television is thick and well insulated to maintain a good signal-to-noise ratio.
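If you’re curious where that fixed frequency sits, here’s a quick back-of-envelope sketch. The figures come from the standard NTSC-M spec (not from this article), and the arithmetic shows how the familiar 3.58 MHz chroma subcarrier is derived from the line rate:

```python
# Rough arithmetic behind NTSC's fixed chroma subcarrier.
# Figures are the standard NTSC-M values, stated here as assumptions.
line_rate = 15750 * 1000 / 1001      # Hz; NTSC-M horizontal line rate (~15734.27)
subcarrier = 455 / 2 * line_rate     # chroma subcarrier = 227.5 x line rate

print(round(line_rate, 2))   # ~15734.27 Hz
print(round(subcarrier))     # ~3579545 Hz, the familiar "3.58 MHz"
```

That half-integer multiple (227.5) is what lets the chroma information interleave with luma on a single wire with minimal visible interference.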
Composite video is a strictly analog Standard Definition (SD) format in the professional video world. In the US, this means NTSC video (PAL in much of the rest of the world). NTSC SD video is generally defined as 720 pixels wide by 480 pixels tall with a 4:3 aspect ratio. All SD video you see on broadcast or on a DVD in the US will be in this format. Now, you may be wondering about your 16:9 (widescreen) DVDs. Well, they are 720×480 also, but the pixels are rectangular instead of square (referred to in some literature as non-square pixels). This gives you that widescreen look while maintaining the same NTSC specifications. On an SD camera, this mode is often called Anamorphic, Squeeze, or Wide, but they all refer to the same rectangular-pixel SD format.
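To see how the rectangular-pixel trick works, here’s a rough sketch of the pixel aspect ratio math. These are simplified numbers for illustration; the official broadcast values (10/11 for 4:3, 40/33 for 16:9) are based on a slightly narrower clean aperture:

```python
# Back-of-envelope pixel aspect ratio (PAR) for 720x480 NTSC-style video.
# Simplified arithmetic, not the official ITU figures.
width, height = 720, 480

def par(display_aspect):
    """PAR needed for a 720x480 frame to fill a given display aspect."""
    return display_aspect * height / width

print(round(par(4 / 3), 3))    # 0.889 -> slightly "tall" pixels for 4:3
print(round(par(16 / 9), 3))   # 1.185 -> stretched pixels for anamorphic 16:9
```

Same 720×480 frame in both cases; only the shape of each pixel changes on playback.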
S-Video – Separated Video NOT Super Video
S-Video is very similar to analog composite video, but with the chroma and luma signals transmitted separately over a multi-conductor cable instead of a single coaxial cable. The most common S-Video cable uses a 4-pin mini-DIN connector. There are other variations on some devices, but these are hard to find today. The main advantage of S-Video over composite video is that, with separate chrominance and luminance channels, the overall image noise is reduced.
S-Video uses the same format as composite video for both SD NTSC and PAL. You’ll often find it as an output option on compact cameras. The S-Video connector became popular with the S-VHS consumer video format in 1987, which kept the chroma and luma signals separate to improve picture quality.
Component Video – RGB and YPbPr and What?
Now things get a bit more complicated. Up to now our video was on just one cable, but now it’s on three. The component video cable consists of three separate cables, which are color coded green, blue, and red. These are three coaxial cables, with either RCA connectors in the consumer world or BNC connectors in the pro world. They are essentially the same cable, but in the pro world the cables are usually a bit beefier.
Component video is similar to S-Video in that it separates analog video into separate chroma and luma channels. The difference is that S-Video separates the signal into one luma cable and one chroma cable, whereas component splits the signal into three parts. This increases image quality and reduces noise even more. The trickier part is that there are two methods for doing this: RGB and YPbPr. Let me explain the difference.
An RGB signal transmitted over a component cable is split into three equal parts – Red, Green, and Blue, which I’m sure you’ve already deduced from the color coding. Each of these signals contains both chroma and luma information from the original image. So each cable has color information and a black & white image on it. A monitor will recombine these three channels to produce an image. A VGA computer monitor connector works the same way, with different pins carrying different colors. The RGB signal is not very efficient, however, because it is effectively transmitting the brightness information three times (once per cable). That’s where YPbPr comes in.
A YPbPr signal uses the same component cable but separates the image differently. It puts the luma information on the Y cable, the difference between blue and luma (B-Y) on the Pb cable, and the difference between red and luma (R-Y) on the Pr cable. In terms of cables, Pb is on the blue cable, Pr is on the red, and Y is on the green cable. Sending an individual green channel would be redundant because it can be derived from the red, blue, and luma information. This method is more efficient because only one luma image is transmitted. It also allows for chroma subsampling when bandwidth needs to be conserved.
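To make the split concrete, here’s a small sketch using the standard BT.601 luma weights. It shows that green never travels down the cable, yet a pure green pixel survives the round trip:

```python
# Sketch of the analog component split using BT.601 luma weights.
# Inputs are normalized 0.0-1.0 RGB values.
def rgb_to_ypbpr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted mix of R, G, B
    pb = (b - y) / 1.772                    # scaled blue-difference (B - Y)
    pr = (r - y) / 1.402                    # scaled red-difference (R - Y)
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    b = pb * 1.772 + y
    r = pr * 1.402 + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green is derived, never sent
    return r, g, b

# Round-trip a pure green pixel: no green channel on the wire,
# yet green comes back intact at the display end.
y, pb, pr = rgb_to_ypbpr(0.0, 1.0, 0.0)
r, g, b = ypbpr_to_rgb(y, pb, pr)
print(round(r, 6), round(g, 6), round(b, 6))  # ~0.0 1.0 0.0
```

The scaling factors (1.772 and 1.402) just keep the difference signals within the same voltage range as luma.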
When it comes to cameras, an analog component cable is always in YPbPr format. This is also true for home video components. RGB, on the other hand, is most commonly found on computer video cards with analog VGA outputs. A common mistake in component video (particularly on consumer products) is labeling outputs YCbCr. Component video is analog, and YPbPr is an analog signal; YCbCr is a digital representation of the same information and would not be transmitted over an analog component cable. Calling it YUV is also incorrect, as that is something completely different.
Component video supports all the SD resolutions as well as HD resolutions, including NTSC, PAL, 720p, 1080i, and 1080p. Component video was the first interface available for HD; the original Sony F900 camcorder had only component video outputs for its HD signal. Many consumer televisions, DVD players, and Blu-ray players offer these connections for HD as well.
FireWire, i.LINK, IEEE 1394, Lynx, and You
These are all names for the same thing (except the “you” part). This is the interface cable that is often used to move digital signals between a camera, deck, hard drive, and computer. There are numerous signals that can be sent across a FireWire connection, most commonly DV, DVCAM, HDV, and the various flavors of DVCPRO, but it is simply a computer data protocol, so in theory any signal can be sent across its transport stream. There are two common connector types for FireWire 400: a 4-pin and a 6-pin (the 6-pin adds bus power). There are also two signal variants of FireWire: the original, also known as FireWire 400 (IEEE 1394-1995), which can carry up to 400 Mbit/s, and the newer FireWire 800 (IEEE 1394b-2002), which can carry up to 800 Mbit/s. FireWire 800 uses its own 9-pin connector.
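For a sense of scale, here’s some rough arithmetic using the commonly quoted DV stream rate of about 28.8 Mbit/s (roughly 3.6 MB/s including audio). The exact rate varies by flavor, so treat these as ballpark figures:

```python
# Rough headroom check: the commonly quoted DV stream rate (~28.8 Mbit/s,
# an assumed ballpark figure) vs. FireWire's nominal bus speeds.
dv_rate_mbit = 28.8

for bus, speed in [("FireWire 400", 400), ("FireWire 800", 800)]:
    print(f"{bus}: ~{speed / dv_rate_mbit:.0f}x the DV data rate")

# One hour of DV footage, in decimal gigabytes:
hour_gb = dv_rate_mbit / 8 * 3600 / 1000
print(f"~{hour_gb:.1f} GB per hour of DV")  # ~13 GB/hour
```

In other words, even the original FireWire 400 has more than an order of magnitude of headroom over a single DV stream, which is why it handled camera capture so comfortably.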
SDI – Serial Digital Interface
When it comes to professional video interfaces, SDI is king. SDI (Serial Digital Interface) is a family of digital video interfaces for broadcast-quality video. SDI uses BNC type connectors and coaxial cable. These cables are essentially the same as those used in the analog video world, but are required to have a nominal impedance of 75 ohms. Many older analog video cables did not meet this specification, but it’s hard to find BNC cable that doesn’t these days. However, don’t go trying to run an SDI signal into the analog video input on your monitor; you won’t see anything but noise.
So why is SDI preferred for professional video? Well, it’s an uncompressed digital video signal. Digital has advantages over an analog source in many ways. First, it’s already digital, so it works well with modern digital video recorders. Second, the digital signal will not become noisy due to cable interference like an analog source, though it does have a maximum run length of approximately 300 meters for SD (less for HD). Finally, the signal can be in a variety of color spaces and supports optional embedded information. Sixteen audio tracks, timecode, closed captions, and other metadata can all be included on the same single cable. These features make a world of difference over analog, especially for broadcast applications.
SDI signals are standardized by SMPTE with different specifications. These different standards include SD-SDI (for standard definition), HD-SDI (for high definition), Dual Link HD-SDI, and the new 3G-SDI interface. SD-SDI and HD-SDI are commonly found on modern broadcast cameras, whereas Dual Link and 3G are newer technologies that are harder to come by.
SD-SDI (SMPTE 259M) supports up to 360 Mbit/s (megabits per second) and can transmit all SD formats, including NTSC and PAL. HD-SDI (SMPTE 292M) has a bit rate of 1.485 Gbit/s and supports 720 progressive as well as 1080 interlaced. It also supports 1080 23.98 PsF, 24p, and 48i signals. Both HD-SDI and SD-SDI video are most commonly encoded as 10-bit linearly sampled 4:2:2 YCbCr (the digital version of YPbPr). The 4:2:2 refers to chroma subsampling, which is really a topic for another blog post, but a simple description is that the two chroma components are sampled at half the sample rate of luma (the horizontal chroma resolution is halved). This type of subsampling is generally considered to be visually lossless, but you are still losing some color information. This is where Dual Link and 3G come in.
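To make 4:2:2 a little more concrete, here’s a toy sketch of the idea: luma keeps full resolution while each horizontal pair of chroma samples is averaged into one. (Real equipment uses proper filtering, and there are two chroma components rather than one, but the principle is the same.)

```python
# Toy 4:2:2 subsampling: keep every luma sample, but average each horizontal
# pair of chroma samples, halving horizontal chroma resolution.
def subsample_422(luma, chroma):
    """luma and chroma are lists of per-pixel samples for one scanline."""
    half_chroma = [
        (chroma[i] + chroma[i + 1]) / 2     # one chroma sample per 2 pixels
        for i in range(0, len(chroma) - 1, 2)
    ]
    return luma, half_chroma

luma = [16, 32, 48, 64]
chroma = [100, 110, 200, 210]
y, c = subsample_422(luma, chroma)
print(y)   # [16, 32, 48, 64]  (full resolution)
print(c)   # [105.0, 205.0]    (half resolution)
```

Because the eye is far less sensitive to color detail than to brightness detail, throwing away half the horizontal chroma this way is usually invisible.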
Dual Link HD-SDI (SMPTE 372M) is simply a pair of single HD-SDI cables. The two cables are used to allow a higher quality image to be transmitted. A single HD-SDI can transmit 1.485 Gbit/s, and two together can transmit 2.970 Gbit/s. Of course, the next logical improvement is to combine those cables into one, which they did, and it’s called 3G-SDI (SMPTE 424M). Why 3G? Well, it’s not an iPhone; it’s just a single cable that transmits 2.97 Gbit/s. All that extra bandwidth can be used to carry a 1080p signal at 60 frames per second in 4:2:2, or it can be used for 4:4:4 chroma sampling. For example, a Sony HDCAM SR deck supports dual link input and can record in 4:4:4 RGB color or in 1080 progressive at 60 frames per second. In a few years we might see 3G-SDI becoming the standard for professional cameras, but then again we might see something even bigger, better, and faster too.
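If you want to see where those bit rates come from, the arithmetic falls right out of the 1080-line raster. The full raster (including blanking) is 2200 samples by 1125 lines, and 10-bit 4:2:2 carries 20 bits per pixel clock (10 bits luma plus 10 bits multiplexed chroma):

```python
# Where the SDI bit rates come from, using the full 1080-line raster
# (2200 samples x 1125 lines, including blanking) at 10-bit 4:2:2.
samples, lines, bits = 2200, 1125, 20

hd_sdi = samples * lines * 30 * bits      # 1080i at 30 frames/s
tri_g  = samples * lines * 60 * bits      # 1080p at 60 frames/s

print(hd_sdi / 1e9)   # 1.485 Gbit/s -> single-link HD-SDI
print(tri_g / 1e9)    # 2.97 Gbit/s  -> Dual Link or 3G-SDI
```

Doubling the frame rate from 30 to 60 is exactly what doubles the bandwidth, which is why 1080p60 needs either two HD-SDI links or one 3G-SDI link.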