Television Production Handbook
©1980-2009 Roger Inman & Greg Smith. All rights reserved.
Editing and Program Continuity
Editing a television program is much more than just putting shots together on a single piece of tape so they can be viewed as a whole. It is in the editing phase that a production is finally made to conform to the producer's vision of it. Editing is both the last and one of the most powerful opportunities the producer has to influence whether a program will successfully communicate the information it was meant to convey, and whether it will affect the emotions and moods of the audience as the producer wished. The editing process is also a difficult and demanding craft, and presents many opportunities for the unprepared to go astray. Indeed, the failure of many beginners' television programs to communicate effectively is much more often a failure of the editor to develop a coherent conception of the flow of the program than a problem with other technical aspects of the art of television, such as camera work or sound recording.
Conversely, it is possible for a skilled editor to take even seriously flawed original footage and produce a program which still manages to say what the writer and director had in mind - but we certainly hope you will never be forced to work that way.
In commercial television and film production, the "shooting ratio," or amount of film or tape shot divided by the length of the finished program, averages about 10:1. In other words, for every minute of edited program you see, roughly ten minutes of tape were recorded by the camera crew. In some specialized and difficult programs the ratio routinely climbs above 100:1. On the other hand, in live television production in the relatively comfortable atmosphere of a studio, it is often possible to use every second of the original recording with a shooting ratio of 1:1.
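The arithmetic behind the ratio is simple, and a quick sketch can help when estimating how much footage a project will generate. (The function name and the 45-minute example are illustrative, not from the handbook.)

```python
def shooting_ratio(minutes_shot, minutes_finished):
    """Return the shooting ratio, e.g. 10.0 for a 10:1 ratio."""
    return minutes_shot / minutes_finished

# A hypothetical news package: 45 minutes of tape for a 4.5-minute story.
print(shooting_ratio(45, 4.5))      # 10.0  (a typical 10:1 ratio)

# A live studio program that uses every second it records:
print(shooting_ratio(28.5, 28.5))   # 1.0  (a 1:1 ratio)
```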
Actually both of these represent extremes which you will not ordinarily encounter. In general, though, the more tape you shoot, the more time you will spend reviewing and editing it. Ways to reduce editing time include keeping the overall shooting ratio as low as practical; grouping scenes shot in the same place or time together on the original tape; and, when possible, "slating" shots by holding up a card in front of the camera showing the place, time, date, crew members present, and any other information you think may be helpful in identifying material later.
You may have inferred from the above that the process of editing in a sense begins long before any tape is recorded. In those productions which are scripted in detail before shooting, the script may in fact dictate almost exactly how the finished program will be edited. So a knowledge of some of the rules of visual and aural continuity is as essential for the script writer, director, and camera operator as for the editor. Again, the more planning and preparation before entering the editing room, the less time and money spent there. Even if your program doesn't lend itself well to detailed scripting in advance, it is usually possible to write up an outline, or perhaps even an exact "editing script," based on several viewings of the original footage, which can guide the editor in putting things together properly in the minimum amount of time. This is not to imply that many decisions won't have to be made during the process of editing - especially the specifics of exactly on which frame to make cuts - but good planning can make the actual work with the editing machines much quicker and more pleasant.
There are basically three purposes in editing a program. They are:
1. To eliminate material which is unusable because it is technically flawed;
2. To remove footage which is irrelevant to the information to be presented in the program;
3. To assemble what is left in a way which communicates the important information in the program, so that, at worst, the editing isn't distracting to viewers and, at best, the program is both interesting and entertaining.
There is almost never a justifiable reason for making a program longer than the absolute minimum necessary to cover the topic adequately. If in doubt about the relevance of a particular shot or sequence, it is almost always better to leave it out. Keep cutting away at what you have available (while watching the shooting ratio soar) until every frame of the remaining tape has something valuable to say. Your audience will be grateful for it, and may reward you by staying awake through the whole program.
What follows is a selection of various rules and concepts of editing which have been developed through experience over the last century of editing motion pictures and, more recently, television programs. The rules are, to some degree, flexible, but they should be violated blatantly only with great trepidation and after carefully considering all of the alternate ways of editing the sequence. However, if you have something important to say, and only unconventional editing will communicate it effectively, then the content should take precedence over any editing rules. Remember, your final consideration is your audience and their reactions.
1. WS - MS - CU sequences; using close-ups
Beginning television news camera people are always taught to take three shots of each subject, using different distances, angles, or lens settings to yield a long shot, medium shot, and close-up of the subject. Then these three pieces of tape are edited together in that order to give the viewer an impression of moving in on the action from the outside, finally becoming involved (through the close-ups) in the action itself. Watch television news coverage of a fire or similar event the next time you can. Typically, you'll see something like:
LONG SHOT, the burning building;
MEDIUM SHOT, firefighters pulling hoses off the truck and carrying them toward the fire;
CLOSE-UP, firefighter's face or hands as he wrestles with the hose.
Additional shots would tend to be more close-ups: equipment on the truck, hands holding the hose, the faces of spectators, etc. At some time, determined by the pacing of the editing and, in this case, the severity of the fire, there might be a return to the long shot to reestablish the overall layout of the scene in the viewer's mind.
Notice the emphasis on close-ups. Television is a relatively low-definition medium, and subjects seen in medium shots or wider just don't come across powerfully on a television screen. It is details, sometimes shot at very close distances, which are most effective in adding visual interest to a story. Use close shots - details of objects, a person's surroundings, or especially of the human face - whenever it is possible and meaningful to do so. They do more than anything else to add excitement and interest to your program.
As the amount of information on the screen decreases, so does the viewer's ability to watch it for a long time. This implies that close-ups, which contain relatively little visual information but spread it over the entire area of the screen, shouldn't be held on the screen for a very long time. When presented with a single detail of something, an audience can look at it for no more than three or four seconds before it becomes bored and turns away. Exceptions to this rule occur when there is something going on to maintain interest. For example, moving objects can be held longer than totally inanimate ones. A narrator heard in a voice over may point out interesting details in what is shown so that the audience is continually discovering new aspects of the picture. In these cases, even extreme close-ups can remain on the screen for a relatively long time.
Close-ups of faces, on the other hand, can be held for a long time, as we seem to find endless fascination in that particular subject. Close-ups are very powerful and revealing for interviews and tend to give at least the illusion of great insight into the speaker.
By contrast, long and medium shots are used less in television, although they certainly have a very important place in most programs. The audience can become disoriented if they are not occasionally reminded of where they are, the overall arrangement of objects and people in the location, and the relationships between them. This is the purpose of the long shot. Medium shots reveal more about a single subject, without the emotional commitment of a close-up, and they are also most often used for showing the progress of action, as in our fire example above. Of course, there are an infinite number of possible variations. It is the subject matter and overall mood and structure of your program which determine in the end how sequences of different angles and views will fit together.
2. Jump cuts and the thirty degree rule
One of the most distracting mistakes it is possible to make in editing is called a "jump cut." If you've watched commercial television all your life, you may in fact never have seen a jump cut. They are so visually distracting, and such pains are taken to avoid them, that this particular error almost never makes it to your home screen.
A jump cut happens when you attempt to edit together two shots which are too similar in visual content. The most obvious example might occur if you remove a single sentence from an interview shot while the camera (which recorded the interview originally as a single shot) remained static. What would happen is that the background would remain still, while the subject's head might "jump" a bit to one side, or lips which were closed might appear instantly to open.
The result is a very jarring interruption in the otherwise smooth flow of action. There are several solutions to this problem. In the case of the interview, an appropriate response would be a cutaway shot - about which more will be said later. In other situations application of the "rule of thirty degrees" can help.
The Thirty-degree Rule
The thirty degree rule states that if you make sure that the angle of view of the camera to the subject changes by thirty degrees or more between two shots cut together, the background will move enough that the shots will cut together well without any apparent visual discontinuity. The diagram below illustrates the rule. Failure to move the camera at least thirty degrees between shots almost invariably leads to a jump cut. There are exceptions, of course. Cutting from a medium shot to a close-up, for example, can be done even if the camera is not moved an inch. The point is to make each shot as visually different from the one preceding it as possible, so that the viewer's point of view of the scene appears to change in a natural manner. Notice that if you invariably follow the long shot - medium shot - close-up sequence suggested earlier, the problem of jump cuts caused by violations of the thirty degree rule never develops. And cutting from one subject to an entirely different one, or from one location to another, rarely presents any problems.
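When blocking camera positions on paper, the rule can be checked with simple vector geometry: measure the angle at the subject between the two camera positions. A minimal sketch (the coordinates and function name are illustrative, not from the handbook):

```python
import math

def view_angle_change(cam1, cam2, subject=(0.0, 0.0)):
    """Angle in degrees, measured at the subject, between two camera setups."""
    v1 = (cam1[0] - subject[0], cam1[1] - subject[1])
    v2 = (cam2[0] - subject[0], cam2[1] - subject[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Two setups at right angles to the subject easily satisfy the rule:
print(view_angle_change((10, 0), (0, 10)) >= 30)   # True (90 degrees)

# Sliding the camera only slightly sideways does not:
print(view_angle_change((10, 0), (10, 2)) >= 30)   # False (about 11 degrees)
```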
There are other types of visual discontinuities you must guard against. Most of these are essentially the responsibility of the camera crew at the time of shooting, and the editor's only involvement is to try to cover them up when they occur.
Errors of this sort usually creep in when shooting is spread out over several hours, days, or months. A typical problem might be shooting part of an interview one day, then coming back a few days later to get additional material and finding the subject dressed in different clothing. (Invariably the clothes the subject wore the first time will be in the laundry and totally unavailable.) You can't cut together parts of the two interviews without making it look as though the subject instantly changed clothes. Errors like these can only be avoided if you are aware of them and plan around their occurrence. It is also among the editor's responsibilities to point out continuity problems when they occur, so that a way can be found to avoid them or minimize their effects. (For example, in this case, it might be possible to group the footage in such a way that the clothing appears to change during a long interlude while other subjects are being shown. When the action returns to the interview the change won't be obvious.)
3. Direction of motion and the 180 degree rule
By this time you may think that editors and directors have to be experts in geometry. Not so! There is really only one other "angle rule," which is designed to keep people facing and moving in the same direction on the screen. One of the more distressing visual discontinuities is reversal in direction of motion, or in screen position of people. An example:
Someone walks out of one room and into another. In the first shot, you set up the camera in the middle of the first room and the subject walks from the left side of the screen and exits through a door on the right. Now on entering the second room, into which the subject is to walk, you find a large expanse of windows on one wall, with the door on the left as you face them. You don't want to shoot into those bright windows, as the camera will give you a poor picture, so you set up the camera on the window side of the room looking toward the middle. Now the subject walks in, but this time WILL BE ENTERING FROM THE RIGHT SIDE OF THE SCREEN. When this is edited together with the previous shot, it will look like the subject changed direction in mid-stride.
The solution is to keep the camera always on the same side of the moving subject. If something is traveling from left to right, it should continue to go in the same direction in every shot you think might be edited into the same sequence. Sometimes, as in our window example, this takes considerable pre-planning and can be something of a headache if there are many locations involved. But it is necessary if the audience is to be able to follow the action without confusion. Again, if you just can't think of any other way, a cutaway may be used between two shots where the action reverses direction. But this technique is all too obvious to the viewer much of the time.
A somewhat related situation arises when you are dealing with multiple subjects, perhaps in a discussion situation. It is necessary to arrange your shots, and their editing, so that subjects don't appear to be moving from one side of the screen to the other, or looking in different directions at different times.
Study the diagram below, which represents a simple situation with two people seated at a table. If the camera always stays in the area indicated by the number "1" no matter what two shots you cut together, subject "A" will be looking toward the right side of the screen at subject "B," who faces left. This will result in proper visual continuity.
The 180 Degree Rule
If, however, the camera crosses over to the other side of the table, the apparent positions of the two subjects will be reversed. If you cut together two successive shots of "B," one taken from each side of the table, "B" will suddenly appear to be facing in opposite directions, as though he were looking at himself.
This is called the 180 degree rule. When dealing with two or more subjects, visualize a straight line drawn through both of them. As long as the camera always stays on one side or the other of this line, apparent visual continuity can be maintained. The name derives from the fact that the camera can move through an arc of 180 degrees relative to the center point between the subjects.
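In planning terms the rule reduces to a sign test: draw the line through the two subjects, and every camera setup must fall on the same side of it. A minimal sketch using a 2D cross product (the coordinates and function name are illustrative, not from the handbook):

```python
def same_side_of_line(a, b, cam1, cam2):
    """True if both camera positions lie on the same side of the line
    through subjects a and b (the "line" of the 180 degree rule)."""
    def side(p):
        # Sign of the 2D cross product of (b - a) and (p - a).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return side(cam1) * side(cam2) > 0

# Subjects "A" and "B" face each other across a table:
A, B = (0, 0), (4, 0)

print(same_side_of_line(A, B, (1, -3), (3, -2)))  # True: continuity holds
print(same_side_of_line(A, B, (1, -3), (3, 2)))   # False: camera crossed the line
```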
Now remember the thirty degree rule. We have set up a situation in which, in order to maintain effective visual continuity, the camera has to move at least thirty degrees between shots but has an overall field of only 180 degrees to work in. It can be frustrating, to say the least, to the editor who has to work with material shot without regard for these two rules. Yet the editor, too, has the responsibility of putting shots together in such a sequence that these rules are observed, while still trying to make sense out of the overall meaning of the program. Good luck!
4. Cutaways and inserts
Often the availability of proper cutaway or insert shots is the only thing which saves the editor from, at best, profound frustration, or possibly even insanity.
You see cutaways frequently in television news interviews. These are the shots of the interviewer looking at his notes, or nodding at what the subject is saying. Usually the cutaway shot has been used to cover up an edit in a continuous shot of the interview subject when removing a few words or sentences would otherwise have produced an unacceptable jump cut. As such, the cutaway is a valuable device, but it should be used with great discretion, as it looks rather contrived. (Often these shots are done after the interview is completed, and the interviewer isn't really reacting to anything - which often shows.) Over-the-shoulder shots are common too - and also often don't work because the sound is noticeably out of sync with the subject's body or facial movements.
It is also possible to cut away to other subjects, like the crowd reacting to a speaker's words, or a close-up of the subject's hands. These sometimes work well and can cover many otherwise embarrassing gaps in visual continuity.
Insert shots have another function in that they actually contribute to the meaning of the program. Inserts are shots, or sequences, which usually show something that a speaker is talking about. While an interview subject describes a process, an insert sequence can be designed to show that process actually taking place. While these segments can be used to cover continuity difficulties they also tend to make a program more interesting and meaningful.
Achieving the proper balance between "people" footage (interviews and speakers) and "process" material is difficult, and of course depends on the nature of the specific program you are making. In general, though, it is better to show something than to talk about it; television is a visual medium and only by making use of its ability to show things as they actually happen can it truly be distinguished from radio or audio tape. Of course, there are programs where the emphasis is on the people involved as much as the things they are doing, and in these cases there is nothing more beautiful or interesting than the human face. So keep your purpose in mind when deciding on an overall plan for the editing of your programs.
Insert and cutaway footage is so important in editing that it is essential to be aware of the need to produce this type of material at the time of shooting. Many discontinuities are not the result of ignorance of the rules on the part of the editor, but happen simply because the required quantity or quality of insert material was never shot. Much of a good camera person's time is spent looking for and recording reaction shots and various close-ups and cutaways which can be used by the editor to cover any difficulties later. It is definitely something to keep in mind when you start to shoot tape.
5. Shot timing and pacing
The need to keep certain shots fairly short was discussed earlier. In general, close-ups do not hold interest as long as medium shots and it is uncommon to see any shot that lasts longer than about ten seconds, but certain types of action can be held longer if it seems appropriate. It is a good idea to vary the length of shots, particularly if many of your shots are fairly short, unless the building of a definite and predictable rhythm is what you have in mind.
One final note. Changes in shots that are too frequent and done for no apparent reason can be worse than long static shots. Editing should never be allowed to interfere with or distract from program content.
6. Cutting sound
In professional film editing, it is not considered much of a problem at all to restructure sentences or even words by precise editing to change the meaning of what someone says. Digital audio files and the audio portion of digital video files can be processed using computers to change not only the sequence of sounds, but volume, pitch, and other characteristics. Even so, most audio editing for video is restricted to making cuts between phrases or sentences, trying to fit together the sometimes random-sounding ramblings of your subjects into a smooth whole.
It is beyond the scope of this chapter to try to dictate how your subjects' thoughts should be fitted together, so we will concentrate on a few rules and suggestions as to how the continuity of sound fits into the overall production of a program.
Most programs of the informational or educational genre have their continuity dictated almost entirely by the content of the sound track. In fact, the spoken word probably conveys most of the actual information in most of the television programs you have seen.
This is a difficult problem, because errors in visual continuity (which is what most of this chapter has been about) are usually much more obvious and distracting to the viewer than a more abstract lack of logic or coherence in what the interview subjects or narrator say. So in many cases the content will have to be adapted to the needs of maintaining a smooth VISUAL flow. Cutting within interview footage, which is often necessary from a content perspective, almost always generates a visually offensive jump cut which requires a cutaway or something similar to reduce the distraction.
The use of sound other than voices should be considered. The natural sounds of many settings - chirping crickets or the cacophony of a factory - can be used to make some kinds of points more effectively than any narration. The use of music in setting moods is fairly obvious; indeed it is possible, and often very powerful, to edit the visual portion of a program to fit, in both rhythm and content, a prerecorded piece of music.
One technical detail about editing sound which seems relevant here involves the timing of different spoken segments to be edited together. In average conversation, most people pause about half a second between sentences. If you are trying to edit dialogue together so that it still sounds natural and flowing, you should try to maintain the time between utterances at about this figure. Most people do not pause noticeably between individual words, although a gap of a tenth of a second or so will go unnoticed if it doesn't occur too often. Naturally, these times have to be adjusted somewhat to fit the specific speech patterns of the individuals involved, so they are only a guide.
A second consideration in sound editing might be thought of as an audio jump cut. Every recording location has a characteristic sound, or presence, which may even change slightly during different times of the day. People, too, vary the quality of their voices according to many conditions, from stuffy noses to fatigue or emotion. Very few narrators can duplicate the sound of their own voices from one day to the next. Even though two sequences might be recorded using the same equipment in the same location, the quality of the audio may well be so different that a noticeable and objectionable change in sound occurs at an edit point. In editing narrative sequences, the speaking pace must also be fairly constant if the edit is to be "believable."
This barely begins to scratch the surface of the field of editing, yet the rules and ideas presented here are basic ones you will have to keep in mind while shooting and editing videotape. Watching television critically, with an eye toward the contribution of the editor to the finished program, then editing your own work, will teach you more about the craft than any stack of books could. Don't be afraid of all those buttons!
Video editing is the selective copying of material from one videotape (or computer file) to another. The process is entirely electronic. Nothing is cut, glued, or pasted. The original is not altered in any way by the editing process. Successful and efficient editing requires some specialized equipment, some knowledge of how the equipment works, and a great deal of planning and preparation both in shooting original footage and in editing itself.
The necessary editing equipment includes two videotape recorders, two television monitors, and an edit controller. The original tape is played back on the source recorder, which is sometimes called the master recorder. This recorder must be designed to be run by remote control. The audio and video outputs of the source recorder are connected to the inputs on the editing recorder, sometimes called the slave recorder. The editing recorder, in addition to being operated by remote control, also needs some features not found on most videotape recorders. First, it must operate in sync with the playback recorder. That is, its internal timing circuits have to lock to the sync portion of the incoming composite video signal. Second, to make clean edits between old and new video, it must be able to go from the playback mode to the record mode and back only in the vertical interval, or the brief time between pictures. Finally, to accomplish this, it must have special erase heads, called "flying" erase heads, actually mounted on the video head assembly. Most videotape recorders have erase heads that are fixed and erase the entire width of the tape. Because the video signal is laid down on tape in long diagonal passes across the tape, the conventional erase head would erase portions of many frames of video. The erase heads mounted on the video drum can erase video one field at a time, allowing very clean transitions between old and new video.
The audio and video signals from each recorder are also connected to television monitors. This allows the operator to see and hear what is on either tape at any time. Finally, both recorders are connected to a compatible edit controller. The controller includes the basic transport controls for both recorders, such as fast forward, rewind, play, pause, and stop, plus special editing functions.
In the ASSEMBLE mode it is assumed that there is nothing recorded on the edited videotape after the selected edit point. Each new sequence is edited onto the end of the previous sequence until the tape is completed. No picture or sound which might already have been on the tape is used.
In the INSERT mode, it is assumed that material already on the tape is to be retained. New material is inserted into old. Not all of the signals during the edit need to be replaced. The operator sets the editing machine to change the picture or either of the sound channels or any combination of the three. At the end of the edit, the recorder will return from the record mode to the playback mode.
The Control Track:
Almost all videotape recorders record and play back a special sixty hertz pulse called the control track. This track is used in playback to make sure the video heads are positioned correctly to read video information recorded on the tape. Any break in the control track, or sudden shift in phase, or loss of signal level will cause a videotape recorder to vary its speed until it returns to lock. This in turn usually causes the picture to break up or disappear entirely. The essential difference between the assemble and insert edit modes is that in the assemble mode new control track is recorded from the edit point on, while in the insert mode prerecorded control track is used and no new control track is generated. Therefore, the picture will always break up at the end of an assemble edit and, conversely, there must always be good continuous control track already on the tape throughout an insert edit or the picture will break up on later playback wherever the control track was flawed, even though no trouble was observed during the actual insert edit.
Many editors commonly in use are called control track editors because they use the control track as a reference for all of the editing functions. It is critical to know and understand this. Without a good and continuous control track from at least six seconds prior to an edit point to at least two or three seconds after an edit point it may be impossible to make an edit at all. Actual requirements vary from machine to machine, so it is advisable to make sure there is always at least ten seconds of control track in front of and behind every shot recorded.
The Society of Motion Picture and Television Engineers devised a special audio signal that can be recorded on tape and used to identify the location of each frame of video precisely. This SMPTE code is used in many edit suites for three reasons. First, edits using SMPTE code are frame-accurate and repeatable. Second, the code can be used to trigger events in other equipment, such as special effects generators, computers, and audio recorders. Third, preview copies of raw tapes can be made with the frame numbers showing on the screen, so you can make editing decisions "off-line."
Most SMPTE code is recorded on a linear audio channel. That means you have to have two audio channels to use it. Three are better. Most one inch and many 3/4 inch VTR's have three audio channels, leaving two for program content.
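Frame-accuracy follows from the fact that each timecode value maps to a unique frame number. As a sketch of that mapping (assuming non-drop-frame code at 30 frames per second; actual NTSC production normally uses 29.97 fps drop-frame code, which skips certain frame numbers and needs extra logic):

```python
def timecode_to_frames(tc, fps=30):
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count.

    Assumes non-drop-frame code at an integer frame rate (an
    illustrative simplification, not full SMPTE behavior).
    """
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return (hours * 3600 + minutes * 60 + seconds) * fps + frames

print(timecode_to_frames("01:02:03:15"))  # 111705
print(timecode_to_frames("00:00:01:00"))  # 30
```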
SMPTE is not the only time code in use today. A number of companies have their own proprietary codes. They all serve the same purpose - allowing precise control over editing equipment.
Computer Editing
With the exception of very high-end systems, nonlinear editing systems use AVI or QuickTime files that are compressed from about 20 megabytes per second down to around 4 MB/sec. One CD-R will hold about three minutes of such video. One DVD-R will hold about 16 minutes per layer.
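Those capacity figures follow from simple arithmetic: divide the disc capacity by the data rate. A rough sketch (assuming a nominal 700 MB CD-R and the 4 MB/sec rate above; real-world capacities vary with disc format and filesystem overhead):

```python
def minutes_of_video(capacity_mb, rate_mb_per_sec):
    """Minutes of video that fit in a given storage capacity."""
    return capacity_mb / rate_mb_per_sec / 60

# A ~700 MB CD-R at 4 MB/sec: roughly three minutes, as stated above.
print(round(minutes_of_video(700, 4), 1))   # 2.9
```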
See the editing forum at http://tv-handbook.com/discussion/.