I am about to make some guitar music videos. I am recording audio in a separate room from where I am shooting the video. These are solo performances, so it is just guitar. This means I will have two audio files that are similar. Since they are solo performances, I speed up and slow down a bit between recordings. So what I need to do is warp my separately recorded audio to match the video's audio.
Is there a way to do this automatically in FCP? This does not amount to simply shifting one audio track by some constant time. It amounts to adjusting one file to match the contour of the video's audio, which amounts to time warping (while preserving pitch).
I can do this manually, but I am hoping there is automatic method, and since the two audio files are of same performed music, I figure there has to be some cross-correlation based method that can automatically stretch one audio file in an adaptive way such that the result is an audio file that is almost totally in sync with the video's audio.
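To make the idea concrete, here is a toy sketch (not an FCP feature, and not any particular plugin) of the kind of alignment being described: classic dynamic time warping on two loudness envelopes of the same piece played at slightly different tempos. The envelope arrays below are invented for illustration; real tools would extract them from the audio and use far more robust features.

```python
def dtw_path(a, b):
    """Dynamic time warping on two 1-D feature sequences (e.g. loudness
    envelopes). Returns a list of (i, j) index pairs mapping frames of a
    onto frames of b -- effectively an adaptive time-warp map."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = cheapest way to align a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a held longer
                                 cost[i][j - 1],      # b held longer
                                 cost[i - 1][j - 1])  # both advance
    # Backtrack from the corner to recover the warp path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = [cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1]]
        step = moves.index(min(moves))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: the "video" envelope and a slightly slower "studio" take,
# where a couple of notes were held longer (repeated values).
video_env = [0, 1, 3, 7, 3, 1, 0, 2, 5, 2, 0]
studio_env = [0, 1, 3, 7, 7, 3, 1, 0, 2, 5, 5, 2, 0]

path = dtw_path(video_env, studio_env)
```

The resulting path says which studio frame corresponds to which video frame; a stretch tool would then use that map to warp the studio audio (pitch-preserved) onto the video's timeline.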
There is, but you need to record both of them with the same clap.
Otherwise you are probably condemned to sync by eye and ear.
Or you need to clap each one at the exact same place...
I haven't done any synching from totally different files so maybe someone will have a suggestion.
Also, similar-but-different files will have speed discrepancies.
Each audio track will have a clear starting point that coincides, and yes, there are timing discrepancies. That is why I am trying to figure out a way to adaptively warp/stretch one file so that it matches the performance captured in the other file.
bhuether wrote: ...I am about to make some guitar music videos. I am recording audio in a separate room, from where I am doing the video....I will have two audio files that are similar. Since it is solo performances, I speed up, slow down a bit between recordings. So what I need to do is warp my separately recorded audio to the video's audio....
Not sure this has anything to do with FCPX video editing. It is more likely a DAW (Digital Audio Workstation) issue. You might need to discuss it on a DAW forum such as Apple's Logic or ProTools. They frequently deal with "tempo matching" issues, including warping or stretching one audio track to match another fixed track (which in your case is the audio track for the video).
There are people here more experienced with audio issues. Maybe they could comment.
Metronomes have their place for sure, but not here. When I play solo classical pieces I want the performance to breathe; a metronome would result in a very mechanical, uninspired performance. Picture a piano player playing the Moonlight Sonata to a metronome, or a violin player playing a Paganini caprice to a metronome.
In my case, there will still be some manual work, but what I will do is use the plugin in Reaper with the video audio as the reference track. If it doesn't work well detecting markers automatically, I will just create markers for the video audio, for instance at every bar or maybe even every beat, and at places where a rubato phrase starts/ends, and make the same markers in the desired audio. At that point the Reaper plugin will stretch the target audio so that the markers in both tracks line up.
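The arithmetic behind that marker workflow is simple piecewise-linear stretching. Here is a hypothetical sketch: given marker times (in seconds) placed at the same musical moments in the video audio and in the studio take, each segment between markers gets its own stretch ratio, and any studio timestamp can be remapped onto the video's timeline. The marker values are invented for illustration; Reaper's stretch markers do the actual pitch-preserving audio work.

```python
# Marker times (seconds) at the same musical moments in each recording.
video_marks = [0.0, 2.0, 4.5, 8.0]   # e.g. the start of each bar in the video audio
studio_marks = [0.0, 2.2, 4.4, 8.3]  # the same moments in the studio take

def studio_to_video(t):
    """Piecewise-linear map from studio-take time to video time.
    Each inter-marker segment is stretched by its own constant ratio."""
    for k in range(len(studio_marks) - 1):
        s0, s1 = studio_marks[k], studio_marks[k + 1]
        if s0 <= t <= s1:
            v0, v1 = video_marks[k], video_marks[k + 1]
            ratio = (v1 - v0) / (s1 - s0)  # stretch factor for this segment
            return v0 + (t - s0) * ratio
    raise ValueError("time outside the marked region")
```

The denser the markers (every beat rather than every bar), the smaller each segment's ratio deviates from 1.0, and the less audible the stretching should be.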
That won't result in bad audio artifacts since in both performances I play somewhat similarly. Will definitely be an interesting experience!
I am no stranger to playing to a metronome, but here we are talking about unaccompanied solo guitar performances. I don't need to keep time with other musicians; I need to play the piece with a certain freedom, where some moments have a rubato feel, and where at other times I choose to hold a note a bit longer simply because I so desire. There is just no way I would ever choose to play these pieces to a metronome for recording purposes, even if 1000 people told me otherwise. When I first learn a piece, maybe I will practice many, many hours to a metronome, but by the time I am actually ready to record, the metronome has long since become history and would only be an impediment.
I actually don't think this will be as hard as I am imagining, and if the sort of audio processing I am suggesting creates artifacts, then I will back off on the processing and accept not-exactly-synced audio/video.
My current plan is to do the processing in Reaper, then see how things look synced up in FCP.
bhuether wrote: ...I don't need to keep time with other musicians, I need to play the piece with a certain freedom....
Understood, yet now you are in a situation where you want to match that tempo with a different take. This is really a DAW issue. Those and related DAW plugins are designed to handle tempo matching of multiple takes.
It is a DAW issue currently, but I think it is a logical evolution in video editing. FCP already has auto sync for multiple camera takes (though I suspect that performs just a shift in time of one file). But it would be pretty useful in FCP to have some video clip, and select "replace audio with specified take" and voila, it handles ADR, etc, etc.
Just a quick question that might be clear to others already, but my production brain is wondering why you don't just record the audio at the same time. Also, if it's a matter of the sound recording not being an option due to loud conditions in the space or whatever, most people would record the audio first, then play back the audio while filming and play along to that so everything is in sync. That's how music videos have been done forever. Just a thought; it might save you a bunch of hassle in the edit.
If I record my desired audio track, it is the result of a certain performance. How I slow down or speed up, how I decide to apply vibrato, etc., are all things very specific to a given performance.
Were I to record the video and just play along to the already recorded audio, it would be impossible to play in time with rubato-style phrasing. I would always be a bit out of time, as I am listening to pick up on the nuances. Similar to the issue with motion tracking radar...
I have just learned from playing these pieces hundreds or even thousands of times, that I play best when I am totally immersed in the performance. Listening through earbuds (where I am not truly hearing my tone) or playing along in some other way just has me too focused on things other than the performance of the moment. And in the video, even though I want to use separate audio, I want to play with same conviction and in same immersed mindset, as opposed to just going through motions of playing in time for sync purposes.
My variations from performance to performance are not massive. I think the sorts of slight time warping needed to get things aligned won't produce atrocious results, but who knows. I'll give it a shot and go from there. And who knows - if I come up with a good processing methodology with good results, it might be useful in a wide range of other realms.
Forgot to mention - the acoustics in the room where I want to do the video are horrible, hence the desire to record in an acoustically desirable space.