I am a documentary filmmaker and director. My team and I recently finished editing a 40-minute documentary that features only protagonists from a very remote area of West Africa, speaking a language with relatively few speakers.
To be able to edit at all, we ingested the footage into DaVinci Resolve, one timeline per day of shooting, 37 hours of footage in total. Within these timelines we then had a local translator record a translation, so each timeline had an extra track with the translation under picture and sound. And because our editor at home unfortunately couldn't follow the translator's local slang very well, the translator track had to be transcribed as well.
Our editor then recreated the timelines of the shooting days in Avid Media Composer and imported the translator tracks and subtitles.
After a far too long edit and the track chaos of Avid, I'm considering Final Cut Pro for the next project, which takes us to an even more remote place on the planet, where the language is spoken by only about 200 people. For that I would like to overdub the raw material, i.e. every single clip, with the translator recording.
On our last project we tried outputting the camera's TC while the recordings were played back, so we could record the translator's voice on an audio recorder locked to the Source TC of the material. But that did not work.
Exporting proxies from DaVinci Resolve via "individual clips" didn't work either: only the sound of the video clip was exported, not the translator track.
Does anyone have an idea how to record translator tracks with the Source TC of the raw material, and then create synchronized clips in FCPX from each video file together with the matching original sound and the translator recording?
It would be great to have the translator recordings directly connected to the raw material, with the option to log them like regular clips in the browser, or to create transcripts via the Simon Says plugin, which then end up in the notes with the logged range.
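One fallback I'm considering outside the NLE (a minimal sketch; the function names, timecodes and frame rates below are all hypothetical): if the translator's recorder stamps each file with a start timecode, the offset against each clip's Source TC can be computed and the WAV trimmed or padded with silence before syncing in FCPX.

```python
# Hypothetical sketch: align a translator WAV to a camera clip by Source TC.
# Assumes both files carry a readable start timecode (e.g. from BWF metadata).

def tc_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop SMPTE timecode 'HH:MM:SS:FF' to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def offset_seconds(clip_start_tc: str, wav_start_tc: str, fps: int) -> float:
    """Seconds to shift the translator WAV so it lines up with the clip.
    Positive: trim that much off the head of the WAV;
    negative: pad the head of the WAV with that much silence."""
    delta = tc_to_frames(clip_start_tc, fps) - tc_to_frames(wav_start_tc, fps)
    return delta / fps

# Clip starts at 14:32:10:00, translator recording at 14:32:05:00, 25 fps:
print(offset_seconds("14:32:10:00", "14:32:05:00", 25))  # -> 5.0
```

This ignores drop-frame timecode and recorder clock drift, so it is only a starting point, not a finished tool.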
Why not ingest into FCPX from the start? Why use Resolve at all? Create one project called Dailies Assembly, have your translator record along with the dailies string-out in that FCPX project timeline, then make that a Compound Clip titled with the day and date. Done. Stay in one NLE; there's no need to switch from one to the other.
docsound wrote: ...Does anyone have an idea how to record translator tracks with the Source-TC of the raw material and then create synchronized clips in FCPX from one video file each with the matching original sound and the translator recording?
It would be great to have the translator recordings directly connected to the raw material and have the possibility to log them like regular clips in the browser or create transcripts via the Simonsays plugin, which are then in the notes with the logged range...
In past projects my team has created low-res copies of the interviews with burned-in timecode, delivered those to translators, and then either (1) used written translated transcripts with a timecode per line, or (2) used synchronized audio translations, kept within about +/- 1 sec. Obviously the latter is more difficult: typically the translator has to first make their own transcript, then read from it while listening to the material on headphones and narrating the translated audio. Without that they cannot hold sync.
In case #1, the editor does a provisional edit based on the transcript and timecode. It will obviously be rough and will require a final check by a bilingual person. In case #2, the English and foreign-language tracks are roughly synchronized within +/- 1 sec, and an English-speaking editor can cut from the English track, making sure every select or cut includes both audio tracks. I don't think most doc teams do this; it is too labor-intensive.
In FCPX, you want each interview to begin as a multicam clip with synchronized external audio, even if there's only one camera and one audio source. You can then open and edit the multicam clip and add the timecode effect. You don't necessarily put it on a timeline for this; in FCPX you do as much as possible in the Event Browser.
A key point is not mixing up source and project timecode: use one or the other for the burn-in. If you use project timecode (which starts at 0) and later trim the timeline to remove junk before the take begins, then *that* trimmed state must thereafter be the reference point for all subsequent editing. If the pre-take header is trimmed again after the translator has delivered a transcript with project timecode per line, the transcript will no longer match the burn-in timecode on the trimmed timeline. The FCPX timecode effect does not allow an odometer-like "reset to 0" for project timecode partway through the timeline.
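The arithmetic behind that mismatch can be sketched quickly (illustrative numbers only; nothing here is FCPX-specific):

```python
# Sketch of the project-TC reference-point problem.
# Project timecode starts at 0; trimming the head shifts every
# subsequent project-TC reference, while source TC stays fixed.

FPS = 25  # assumed frame rate

def tc_to_frames(tc: str) -> int:
    """'HH:MM:SS:FF' (non-drop) -> frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(frames: int) -> str:
    """Frame count -> 'HH:MM:SS:FF' (non-drop)."""
    ff = frames % FPS
    ss = frames // FPS
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

# A transcript line was delivered referencing project TC 00:05:00:00.
line_tc = tc_to_frames("00:05:00:00")

# The editor then trims 10 s of pre-take junk off the head of the timeline.
trim = 10 * FPS

# The old reference now points 10 s late; every transcript timecode
# would have to be shifted by the trim amount to stay valid:
print(frames_to_tc(line_tc - trim))  # -> 00:04:50:00
```

Burning in source timecode instead avoids the problem entirely, since source TC is unaffected by timeline trims.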
For #1 (transcript method) or #2 (synchronized audio translation method) you have the option of finishing with translated subtitles or a translated voiceover. In general, voiceover is considered the better presentation, but it is more difficult: ideally it requires a voice actor whose gender, age and emotional inflection match the original subject.
For method #1, the editor makes a "paper edit" using the transcript, then edits the foreign-language interview based on the timecode per line on the transcript, then maybe adds provisional subtitles, then a bi-lingual person has to check it.
For method #2, the provisional voice translation is useful for editing with synchronized English/foreign-language dialog, but its inflection and gender/age match are often not attuned to the subject. So after the final edit is locked, the translated dialog on the final timeline needs to be re-voiced. Whichever method is used, a final check by a fluent bilingual speaker is required to ensure there are no mistakes or sync issues.
A major issue with voiceover translation is grammatical density. E.g., for Spanish-to-English, English is more concise per unit of time, so a translator or editor will rarely "run out of room"; for English-to-Spanish, it can be difficult to fit the spoken words into the available clip time. This should be studied for the languages involved, or the editor can find themselves boxed in.
In theory it's possible to use Lumberjack Builder to do text-based editing of an interview in English while bringing along the synchronized foreign-language audio in the multicam. I have not tried this but intend to study it for future projects.