This chapter includes information about:
- Technical terms for understanding video formats
- Recommended settings for YouTube and Vimeo
- Additional resources about compressing video
Think about what you value about a full-length movie projected on a theater screen: maybe high-quality sound and visuals, the giant size, the lack of distractions. Compare that to what you value about a video clip you play on your smartphone: decent quality but quick to load, short and to the point.
Digital video exists in many file formats, which can be affected by how it’s shot, edited and exported. Many formats are appropriate for the web, but the following terms help explain how they differ.
Technical Video Terms
Aspect ratio: The ratio of a video's width to its height. The standard aspect ratio used on television through the 20th century is 4:3, but the newer widescreen standard for HD video is 16:9. Differences in aspect ratios have long been a problem for movie studios, which must either crop out part of the frame or use black bars when converting films to show on TV.
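An aspect ratio is just the pixel dimensions reduced to their smallest whole-number terms. A minimal sketch of that arithmetic (the function name is ours, not from any video library):

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce pixel dimensions to their simplest whole-number ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # HD widescreen -> 16:9
print(aspect_ratio(640, 480))    # standard-definition TV -> 4:3
```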
Frame rate: The number of frames that display per second. Digital video still uses the flip-book concept, where flipping pages creates the illusion of motion. The number of individual frames that flip by changes the way we perceive the video. Early cinema used a frame rate of 16 frames per second, which is slow enough to appear slightly choppy to the human eye.
The standard frame rate in film is 24 FPS (or the very similar 25 FPS in Europe and some other parts of the world). The other standard is 30 FPS, which has long been the standard for television shows and broadcast news. Because the two standards have been used differently, 24 FPS typically looks more “cinematic” to our eyes, whereas 30 FPS is associated with sitcoms and TV news.
Both frame rates are common on the web, and higher frame rates such as 48, 50 and 60 are also used. Frame rates as high as 300 FPS are being researched for use in virtual reality and sports broadcasts.
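Frame rate also determines how many individual images a clip contains, which is one reason higher frame rates mean larger files. A quick sketch of that relationship:

```python
def total_frames(fps, seconds):
    """Number of individual frames in a clip of the given duration."""
    return round(fps * seconds)

# The same 10-second clip at common frame rates:
for fps in (16, 24, 30, 60):
    print(f"{fps} FPS -> {total_frames(fps, 10)} frames")
```

At 24 FPS a 10-second clip holds 240 frames; at 60 FPS the same clip holds 600, so there is simply more image data to store.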
Codec: A piece of code or a program that compresses or decompresses data files for storage or playback. (The word is a combination of compress and decompress.) In video, there are an overwhelming number of codecs for converting video projects into playable files. Codecs can be lossless, meaning they preserve all data, or lossy, meaning they strategically toss away data for the sake of smaller file sizes. Common codecs include MPEG-4 and WMV, and a go-to codec for many videographers is H.264.
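The lossless-versus-lossy distinction can be shown with ordinary bytes. This toy sketch uses Python's built-in zlib for lossless compression and a crude quantization step to stand in for a lossy codec; real video codecs like H.264 are vastly more sophisticated, but the trade-off is the same:

```python
import math
import zlib

# A smooth 8-bit "signal", standing in for a strip of video samples.
signal = bytes(int(127 + 100 * math.sin(i / 8)) for i in range(10000))

# Lossless compression: every original byte survives the round trip.
packed = zlib.compress(signal)
assert zlib.decompress(packed) == signal

# A toy lossy step: quantize to 16 levels, discarding fine detail
# that can never be recovered, in exchange for better compression.
lossy = bytes((b // 16) * 16 for b in signal)
print(len(signal), len(packed), len(zlib.compress(lossy)))
```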
Container: The file format of an exported video, which holds the compressed data and information about how to render it so the video will play. Common containers include MP4, AVI and QuickTime (.mov).
Transcoding: Conversion of one digital file format to another, often to optimize the file for a particular use. When video hosting sites such as YouTube or Vimeo are “processing” a video you’ve uploaded, they are transcoding your video so it will display properly on their sites.
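You can also transcode locally, and a common tool for that is the ffmpeg command line. As a sketch, this assembles (but does not run) an ffmpeg command that moves a QuickTime file into H.264 video and AAC audio in an MP4 container; the filenames are hypothetical, and the flags shown are standard ffmpeg options:

```python
def transcode_command(src, dst, crf=23):
    """Build an ffmpeg argument list for a QuickTime-to-MP4 transcode."""
    return [
        "ffmpeg",
        "-i", src,          # input file
        "-c:v", "libx264",  # video codec: H.264
        "-crf", str(crf),   # quality level (lower = better quality, bigger file)
        "-c:a", "aac",      # audio codec
        dst,                # output container is inferred from the extension
    ]

print(" ".join(transcode_command("raw_footage.mov", "upload.mp4")))
```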
How to choose formats
When editing, you want to use a codec that matches the highest quality settings of your raw video footage. It is best to use the same resolution and frame rate as your highest quality recording.
Then, when compressing and exporting your video for YouTube or Vimeo, it is best to use the preset that matches the highest quality video (e.g., YouTube Widescreen HD if your raw footage is in HD) that remains compatible with that site’s encoding requirements.
This brief Videomaker.com video describes the process of compressing, exporting and sharing videos. Some of the information is slightly out of date because Internet speeds and web video have advanced so quickly in the past decade, but the processes remain the same.