Absolutely, and here are the specific reasons why:
Tentacle Sync Studio Version 1.16
1) Tentacle Sync Studio (as I mentioned above) cannot generate multiple multicam clips with one XML export. It can only generate multiple synced clips and one multicam clip sequence containing all of the angles. This doesn't help me much, since I need individual multicams in the browser as if I had synced them in FCP X.
2) Tentacle Sync Studio, it turns out, can do what I got LTC Convert for: restriping the source video timecode with the audio timecode the Tentacles put on the audio channel. ONE CAVEAT, and this is a deal breaker for me: it exports new QuickTime files with a new creation date. This is a non-starter for me. I prefer to keep my creation timestamps from production through archiving for searching, verification, pipeline checking, etc. So if you shoot something on Jan. 1 and don't get around to processing those files until 3 months later, your new creation date will be the day you processed them. No thanks. So this brings me back to LTC Convert. Although it's not the prettiest app to use, it restripes the camera timecode in my video files with the audio timecode from the Tentacle Sync devices while keeping the original creation dates.
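(As an aside, if you're ever stuck with a tool that rewrites timestamps, you can at least capture a file's times before processing and restore them afterward. A minimal Python sketch; the function name and the copy step are placeholders for whatever tool rewrites the file, and note that os.utime restores only access/modification times, not the macOS birth time that the Finder shows as "creation date":)

```python
import os
import shutil

def process_preserving_mtime(src, dst, process=shutil.copyfile):
    """Run a processing step, then restore the source's timestamps on the output.

    `process` is a stand-in for the real conversion (restriping, rewrapping...).
    Caveat: this restores access/modification times only; the macOS birth time
    (Finder creation date) cannot be set via os.utime.
    """
    st = os.stat(src)                           # capture times before processing
    process(src, dst)                           # stand-in for the real tool
    os.utime(dst, (st.st_atime, st.st_mtime))   # restore original times
    return dst
```

It's a band-aid, not a fix for the birth-time problem the post describes, but it keeps modification-time-based sorting and searching intact.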
My thinking is essentially this: the tools and know-how are out there to batch generate synced and multicam clips relatively easily these days, and it amazes me there aren't many successful ways of doing it. I need LTC Convert to prep my files so that I can ultimately give Sync-N-Link what it needs (SMPTE, not LTC, timecode for audio and video) to quickly batch process my files using an XML from FCP X. It's the only tool I have found that can do this. DaVinci Resolve batch syncing is just not reliable, and using Adobe Premiere is inefficient as well. I wish Apple would build batch syncing into FCP X, since I still prefer its syncing capabilities and results over those of any other app I've used.
If Tentacle Sync Studio could batch make multicam clips from one XML export, I would not need LTC Convert or Sync-N-Link, at least for this specific workflow. For now we're using LTC Convert and Sync-N-Link, and it's saving us massive amounts of time even though we have to bounce between several apps. On average I can now sync a full day's shoot and have it keyworded and organized by type and character dialogue in about 2-3 minutes.
The same goes for KYNO, by the way. I want to use this app so badly, but like Tentacle Sync Studio, there are just those niggling little details that prevent me from jumping in: saving creation dates being an option rather than the default, getting properly rewrapped clips requiring an audio pass-through workaround, and a few other issues, so we're sticking with EditReady. Tentacle Sync Studio makes sense for having one multicam sequence (a la PluralEyes) for a live event, but not for episodic narrative productions where I need several individual multicam clips to choose from.
Hope this all makes sense.
Last edit: by tangierc.
A little late to this fascinating thread. So far I have been using PluralEyes to sync audio and have been getting mixed results. Typically I am working with:
2-3 Canon C200s (halfway-decent camera audio)
5-ish GoPros (pretty awful audio)
2 Zoom F8s with radio mics plugged into them (more often than not, decent audio)
Plan 1) We have made 4 videos relying exclusively on a PluralEyes workflow.
Plan 2) We have made 4 videos with a mix of:
Setting time of day (roughly)
Naming the audio tracks in the Zoom F8 to match roles in FCPX.
As luck would have it the first of these videos that I am editing using “Plan 2” was filmed next to the main TV Tower in the city I live.
There were problems with the radio mikes!!!
Often they did not work or produced noise.
In my case we are producing one of these videos a week and editing in FCPX and/or Premiere.
Next time round we are going to throw in some Boom mikes as well.
Anddddddd now one of the guys in the office has suggested getting Tentacle devices.
And using their software to sync everything up using what I guess is called slaved or generated sync timecode.
Reading this thread and doing some research online it seems to me that this plan might just work.
Points that do not bother me are:
If I am understanding point 1 correctly…? I believe this is more or less what I am getting from PluralEyes: one multicam clip with all of the clips in it and no event with individual clips. That would not be the end of the world for me.
and also from tangierc
In our case losing the recording time would not be the end of the world, as we do one of these videos a week, so we have a clear line in the sand as to when they were made. What would be very, very, very sad is if we lost, e.g., the names of the tracks we are recording to in the Zoom F8!?! This is the part of Plan 2 that I am anxious not to have to throw out! Does anyone know if rewriting these files with the Tentacle software would also mean losing other metadata, i.e., track names? Do I even have to restripe the audio files? I guess so, as this is the point of the whole exercise, aka using timecode to sync audio.
Maybe someone in this thread can comment in general on my above thoughts or has made some further progress with respect to this challenge that they would like to share?
thanks in advance
Last edit: by paurray.
You shouldn't be having major problems with quality wireless lavs. What kind and brand are they? We use mostly Sennheiser G3 and we rarely have problems: www.bhphotovideo.com/c/product/877198-RE...r_ew_100_ENG_G3.html
We have a few Saramonic RX9/TX9. They basically work OK but I don't like them as well -- battery life is shorter, audio response is bass-heavy, and we've had a few infrequent dropouts. We almost never have a problem with the Sennheisers: www.bhphotovideo.com/c/product/1331757-R...lier_microphone.html
We use the Zoom F4 and F8 and so far haven't had a major problem. The F4 has optional modular add-on channels and those have not been reliable, thus the F8.
We've talked about getting some Lectrosonic lavs but they are just too expensive for us and the Sennheisers work very well: www.bhphotovideo.com/c/product/1152843-R...lavalier_system.html
We usually have redundant audio coverage from both wireless lavs and a hand-operated shotgun mic, usually a Sennheiser MKE600: www.bhphotovideo.com/c/product/878340-RE...MKE_600_Shotgun.html
The shotgun has saved us a few times when a Saramonic lav failed. But even with Sennheiser or Lectrosonics, it's possible an operator error or level mismatch could cause clipping which isn't noticed until too late.
Re Plural Eyes, I use that only when absolutely necessary, as it's really not designed for the FCPX workflow which is oriented around the Event Browser.
Using normal audio-only sync via FCPX usually works very well provided a few things are done:
- All cameras and recorders are set to the proper time of day
- All cameras have decent scratch audio
- Monitor all audio channels
- All cameras are set to not produce segmented files. Some split files every few minutes, which makes syncing more complex. E.g., the Panasonic GH5 may do this but can be configured to use monolithic files. The G7 cannot.
- Record a polyphonic WAV file using iXML to annotate the channels, not a separate file per channel. The F8 does this, as do the Sound Devices recorders.
- In FCPX, batch label all files with a camera name or camera angle. Without this the audio sync won't be reliable. Doing this is easy provided you keep track of which files came from which camera: store them in named folders, then import to FCPX using the camera folder name as an auto-generated keyword.
- Some audio recorders can capture at two different levels to give headroom in case of clipping. I personally don't like that because it's rarely needed if the mics are set up OK and it creates a lot more files. The Sound Devices recorders have high performance analog limiters which give the same protection but without the extra files.
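On the polyphonic-WAV point above: the track names live in an iXML chunk inside the broadcast WAV file, so you can inspect them yourself. A rough Python sketch of pulling that chunk out by walking the RIFF chunk list (the `read_ixml` name is mine, and this is a minimal reader, not a full BWF parser):

```python
import struct

def read_ixml(path):
    """Return the iXML metadata chunk from a broadcast WAV as text, or None.

    Polyphonic recorders like the Zoom F8 embed track names and other
    production metadata in an 'iXML' RIFF chunk alongside the audio data.
    """
    with open(path, "rb") as f:
        riff, _, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None                      # no iXML chunk found
            chunk_id, size = struct.unpack("<4sI", header)
            data = f.read(size)
            if size % 2:                         # chunks are padded to even length
                f.read(1)
            if chunk_id == b"iXML":
                return data.decode("utf-8", errors="replace")
```

Handy for verifying, before committing to a restriping tool, whether the chunk survives whatever rewrapping step you put the files through.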
We've talked about using Tentacle, but for our field documentary work it's extra complexity. If the cameras and recorders are properly configured and operated, audio sync works OK. If somebody forgets to set the camera clock or turn on the hot shoe mic, that complicates things. But that kind of operator could just as easily mess up a Tentacle device.
Last edit: by joema.
thanks for your fast and detailed reply...
We do not have those exact same mics, but apparently very similar mics from Sennheiser.
I think the last time round was hard core as I say they were filming next to a TV tower.
But it is not unusual for them to record, e.g., in a recording studio with musicians recording simultaneously/the same piece of music over and over again in different rooms, e.g. 5 fixed cameras and 3 cameramen going walkabout!!!
I guess that the boom microphones are pretty similar to your hand held mics and they will be a welcome addition.
Think of it as a guerrilla-style doc about mice in a maze and you have the picture, roughly!
Yeah, it is weird not being able to get to individual clips easily in the Event Browser, but I have got used to it. I have a shortcut to open my multicam clip in the original multicam sequence and can match-frame into the browser, so a master clip is created in the browser. I can repeat this process for an audio file and have a second run at making a multicam clip. I have also found Wave Agent very useful, as I can organize my clips with respect to the time they were shot.
The thing is that where I am working they are kind of leaning towards Premiere, and anything that leans towards Premiere they like. Anything that goes in the FCPX direction is met first with skepticism.
I guess that you are using Sync-N-Link
and then doing some kind of clever batch renaming in FCPX?
Last edit: by paurray.
You're correct in your response to some of my comments. In our tests the metadata carried over while using LTC Convert all the way to Sync-N-Link, including iXML data from the Sound Devices units we use. The only difference I saw was that metadata showing video content as H.264 will sometimes show as AVC. Mind you, we're working with Canon C100s, which have no timecode syncing capabilities (meaning I/O ports). When we upgrade cameras we'll have to reevaluate our workflow. I've been particularly looking at the UltraSync One and its corresponding family of products, though I really like the Tentacles, their ease of use, and the software. The software is what makes it extra special, except for that one shortcoming of not being able to generate multiple multicam clips with one XML export.
PluralEyes has its place, but for reasons I've detailed on this forum and Creative COW, it just doesn't suit my needs, and I am plainly amazed that what it does do is so acceptable to many people. We also put out narrative content every week in addition to other content. I guess I am a purist when it comes to managing data: keep the original date, avoid acronymed file names, keep the original metadata, etc. All of these things are very important for our ingest-to-archive workflow, and like "the Google," much of what we do in managing our media hinges on searchability on the desktop or in a MAM environment. The less we abstract files from their originals, the easier they are to find, particularly when searching typically depends on our human memory first.
Premiere Pro definitely seems to be popular in large part because it's mostly familiar. Oddly, since 2011 even to this day, I find that people stray away from FCP X without spending much time with it. Though I also find that people who do take the time to explore it end up loving its speed, flexibility, and how it promotes the creative process despite its comparative shortcomings versus its competitors. I side with Michael Cioni at Light Iron: when it comes down to the price per frame (not exclusively dollars, but even sleep), FCP X makes a very strong economic case to use it over other NLEs.
Now on to this DaVinci 15 beta testing.
Thank you for your reply.
If I am reading it correctly, you are saying that in your experience going from multicam/audio to FCPX via Sync-N-Link means:
- You lose the time the file was created/recorded, as the file gets rewrapped/re-encoded.
+ You keep your track names, aka iXML.
+ You have things synced all the time.
There are two things I still do not get:
1) to clarify when you say
When referring to LTC Convert, you are talking about doing this in the Tentacle software?
2) The Tentacles software should sync stuff for me any way.
So I could, at least in theory (according to your tests):
Skip: Sync N Link & PluralEyes
And go directly from the Tentacle software to FCPX.
Everything synced reliably.
Have my iXML track names.
I guess throwing Sync-N-Link into the equation gives you more/extra control over your source media in Libraries/Events at the FCPX end?
Just double checking that I have understood 100%
Last edit: by paurray.
No, I wouldn't support a workflow that loses that information. To clarify:
- The Tentacle Sync devices to EditReady (for offloading/rewrapping) to LTC Convert to Sync-N-Link workflow maintained the timestamps and metadata, including iXML.
- I keep my track names from the iXML (unless I end up changing them in the roles in FCP X).
- Yes, things are in excellent sync, and it saves me a ton of time; I can sync a day or several days of footage within a few minutes.
- When I mention LTC Convert I am talking about the actual application LTC Convert. I do not use the Tentacle software at all for this type of conversion, because it exports a new QuickTime with new timestamps. LTC Convert does not.
Sorry, I did make a mistake about mixing iXML info while discussing video. That is not the case, as iXML is not touched by LTC Convert, only by Sync-N-Link, which actually relies only on an XML exported from FCP X to do its magic. Essentially my custom-named and rewrapped media from EditReady goes into FCP X along with the corresponding audio recordings. An XML of the event is exported and sent to Sync-N-Link, and Sync-N-Link gives me back multiple individual multicam and synced clips completely organized with keywords, as well as a keyword collection of media that did not sync. No metadata is lost. The only downside is that it also makes synced clips of clips that are already in multicam clips. For some this can still be useful.
Yes, you could use PluralEyes or the Tentacle software and skip Sync-N-Link. However, one of the reasons I don't is that there's no inherent benefit over me just syncing in FCP X. If you're doing live events where you can have one timeline with all angles and you're just cutting between angles, then Tentacle Sync Studio and PluralEyes will work great, though you may lose timestamps and other info with PluralEyes. I cut scripted narratives mostly, so I need individual synced clips for each take in a scene, whether it's one camera or multiple cameras. The workflow I put together above gives me everything I want. Ideally it could all be rolled into one app, but for now I haven't found another solution using C100s with narrative. There are so many small shortcomings:
- If FCP X could rename files in the Finder like FCP 7 did, I may not need EditReady.
- If Tentacle Sync Studio could generate multiple individual synced clips using one XML, I may not need Sync-N-Link. Otherwise, in Tentacle Sync Studio I'd have to identify and select each group of syncable clips and export an XML for each multicam. Why bother? I can do that in FCP X and maintain all of my data.
- If I had different cameras that create unique names that never repeat per card (non-AVCHD recording), I wouldn't need EditReady and/or File Renamer for custom naming, nor would I need Tentacles, because I could just jam the Sound Devices (or Zoom F8) to the cameras.
Here are my findings on PluralEyes 4 a while ago:
Here's the short of it (confirmed with a phone call to RedGiant Support)
-Multicam option gives one long multicam sequence (storyline) of all events rather than individual multicam clips
-Replaced audio limits the picture to the length of the synced audio (you may want the video after that)
-The only way to get individual synced clips is the replace-audio feature, which creates them as compound clips rather than FCP X synchronized clips
-Confirmed with a phone call: PluralEyes 4 can only return synchronized individual clips if you choose the replace-audio feature.
Here are my tests with Tentacle Sync Studio a couple of months ago:
- Audio TC was maintained after rewrap through EditReady
- Metadata maintained in clip with XML sent from Tentacle Sync Studio to FCP X (for single non-synced clips; couldn't sync clips at the time due to no existing audio TC on the video from one camera).
- LTC Convert does a good job of replacing a QuickTime's timecode with the audio timecode, as evidenced in FCP X and Tentacle Sync Studio.
- Tentacle Sync Studio 1.16 (2966) does not maintain the creation date when using its Export Media feature. This is crucial in that we like to maintain the creation date for searching our content in the OS, NLE, and archive software.
- Tentacle Sync Studio 1.16 (2966) will export the clips with the ability to replace the timecode with the audio TC.
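For anyone unclear on what restriping buys you: once the video and audio files carry the same SMPTE timecode, sync is just frame arithmetic. A small Python sketch of non-drop-frame timecode math (drop-frame 29.97 needs extra handling and is ignored here; the function names are mine):

```python
def tc_to_frames(tc, fps=25):
    """Convert a non-drop-frame SMPTE timecode string (HH:MM:SS:FF)
    to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=25):
    """Inverse of tc_to_frames: absolute frame count back to HH:MM:SS:FF."""
    ff = frames % fps
    total_seconds = frames // fps
    return (f"{total_seconds // 3600:02d}:"
            f"{total_seconds // 60 % 60:02d}:"
            f"{total_seconds % 60:02d}:{ff:02d}")
```

With both files restriped to the same clock, the sync offset between a video clip and an audio clip is simply `tc_to_frames(audio_start) - tc_to_frames(video_start)`, which is why tools like Sync-N-Link can batch the whole day in one pass.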
Hope this helps answer some questions.
Yes, this is a huge help, thanks.
I think they will go for buying Tentacle Sync devices, as we are currently losing so much time in post it is not funny.
With respect to EditReady:
They already bought the Red Giant Shooter Suite:
Which includes “OFFLOAD 1.0” so I can imagine if they can they will want to use this:
Personally I am a huge fan of Kyno but that is another story:
As I understand it, as of now we are just talking about encoding from native camera formats to ProRes.
Compressor, or whatever Adobe's encoder is called, could do the exact same job. Correct?
The nice thing I see about the LTC Convert software is that there is a free demo:
So we could test this part of the workflow before taking the plunge.
I think (although it would pay for itself in a couple of days) that they might well be alarmed when they see the price of Sync-N-Link. *EDIT: It looks as if it is cheaper than I thought. For some reason I thought it was $500, but it looks more like $200. END EDIT.*
The thing is that I am part time and the Premiere guy is full time so this is one of the reasons why they lean towards Premiere....
On the one hand, maybe I could do the sync in FCPX and just skip an external sync program. But no, wait, this is where I am still unclear: does LTC Convert replace the existing standard timecode with the Tentacle timecode, or is it added as a sidecar of sorts? If the latter, I guess syncing in FCPX would not work, which would make Sync-N-Link mission critical.
With respect to the format, if I stick to the mice analogy, the mice have to accomplish different tasks in different rooms in the maze. So there is in general an intro explaining today's task, a walk in the maze, accomplishing tasks, and a summary of how the mice did with respect to their tasks. In its simplest form it is pretty linear, and not having events would not be a deal breaker for me.
On the other hand in a weird twist of fate if we were to throw in
That just might help them to get to Premiere via FCPX!
Or maybe the Premiere guy on the other side of my table has some ideas?
Digging deeper into the intelligence assistants site there is also this:
Which is essentially the essence of your recommended workflow.
This is going to be our 3rd round of 4 videos, and to the best of my knowledge there are going to be at least 2 more rounds of 4 videos on top of that. I am currently working on video 6 and the Premiere guy is working on video 7. I had PluralEyes sync 107 of 178 clips this time around, and it created a hell of a mess in the process. I know that the Premiere guy got a lot more synced in his 7th video than I did, so I guess he got lucky.
To be continued…
Last edit: by paurray.
I am not familiar with Red Giant Offload. I have used KYNO and like its features and frankly would love to use it, but have decided not to. Here's why (and I have emailed the company about this since version 1.2):
1.3 Trial Tested 1-2-2018
Scenario: Canon C100 clips (AVCHD) and FCP X 10.4, macOS HighSierra - using KYNO to rewrap.
- Still getting lots of digital artifacts in our rewrapped AVCHD footage on any computer. Tested against EditReady on multiple clips. It took us many weeks of camera testing, SD card checking, card brand changes, etc. to figure out this issue, and the artifacting stays through transcode/export out of FCP X to another ProRes flavor.
- KYNO doesn't automatically recognize the AVCHD "private" folder structure (unless there's a preference feature I haven't figured out). I have to drill down to the stream folder where the .mts files are.
- Even when I have the option to get timecode from source selected, the timecode in the rewrapped clip starts from 0 (shown in FCP X).
- Device manufacturer and ID metadata is missing.
- Color profile metadata is different from EditReady's (KYNO says 2-2-2, EditReady says HD 1-1-1). Not sure yet why there's a difference.
- Codec metadata in KYNO says AVC1, EditReady says H.264. Perhaps they're the same. Have to investigate further.
- Rewrap conversions still yield 0 KB files unless I set the audio option to re-encode.
- Maintaining creation timestamps is still optional rather than the default.
I don't use EditReady to do any encoding to ProRes; I let FCP X handle that. I simply use EditReady to rewrap our AVCHD into .mov files and to create custom naming for all of our video clips, so that all the way through archiving there is a unique name that includes the episode or project name, card number, camera angle, and an auto-generated index number.
XtoCC is also a viable option. I've used it with success.
In 2016 I was testing out Woowave Dreamsync as alternative to PluralEyes, but it ultimately didn't work out. Perhaps it's been updated.
Interesting checking out your link to Sync-N-Link. The workflow they're talking about on their site is the one I created. I had been working with Gregory Clarke (who is fantastic at explaining the app and communicating) on my proposed workflow with their product, LTC Convert being part of that equation. As I mentioned in another post, I only wish LTC Convert were a more modern app. It looks and feels like it was created with older OS X technologies and perhaps not AVFoundation (aka the new QuickTime), but I don't know for sure. It could use a UI refresh. Either way, it gets the job done and does other great things too.
Sync-N-Link is $199. Trust me. Well worth it.
Here's an old test I did below. It's pretty lengthy, so I am warning you now. There may be a couple of typos; it's copied from notes I kept. There may be some valuable info in it.
Jan. 10th, 2016 Sync Tests
PluralEyes (PE) 3.5
FCP X 10.2 on El Capitan.
One video clip / one multichannel audio file for each video clip.
Multichannel (interleaved) audio files
PE to FCP X XML no options checked
Synced clips exist only as a project; no browser clips in sync.
Audio clips in the synced project are dead - not audible.
Audio clips in the synced project are missing channels from the original audio.
Audio does point back to the original audio in the browser - defeats the point.
Useless without synced browser (bin) clips.
PE to FCP XML Event option checked
This replaces camera audio with 2nd system audio
Clips show in browser
No audio configuration available in FCP X inspector
No audio from synced clip can be heard
Audio shows synced audio inside the clip (reveal clip in timeline), but it is missing channels, has no waveform, and can't be heard
Audio is not trimmed to match video according to PE manual
FCP X XML to PE then Back
This seems to work (sort of)
Synced clips are compound clips in FCP X*
Camera audio was replaced (not sure I like that)
Audio in synced clips has two mono channels; the original had three
Audio in synced clips points back to the compound clip, not the original audio (problem)
* The primary editing objects in FCP X are clips, synchronized clips, and multicam clips, not compound clips as regular editing clips. I wish third-party apps would create an object the way FCP X would instead of a compound clip. Compound clips create an extra layer of obstruction between you and the source media with regard to start/end time, duration, creation date, and revealing in the Finder.
APPCC (Premiere Pro)
Batch processing creates sequences instead of clips (I prefer clips so I can work with them as editorial objects)
Every synced object is called multicam (which may not be the case, even if there's only one video synced to audio)
APPCC to FCP X
APPCC XML to FCP X (via 7toX): APPCC multicam sequences become projects in FCP X. Not good for editorial. Need them to be synchronized or multicam clips.
Multichannel audio separates to tracks.
APPCC to FCP 7 via XML
Multicam sequences become projects in FCP X. Not good for editorial. Need them to be synchronized or multicam clips.
Just discovered (1/10/2016)
A third-party app, Woowave, produces compound clips for FCP X as the editorial items. The upside is that you now have a synced object. The downside is that it's a compound clip. I don't think many FCP X users use compound clips as their main editorial item. Additionally, if you add a missed audio track to the compound clip (such as the right channel of audio), the inspector will not register this. It still only sees the audio tied to the video and the first connected audio item. This is the same behavior as a synchronized clip created inside FCP X; hence why multicam objects are used for synchronization inside FCP X even if there's only one video and one audio to sync - it allows flexibility and registers all channel visibility and toggling in the inspector.
Create Event with Replaced Audio (for apps that have this feature)
The question here is: is the clip created for the NLE a new clip in the Finder, or just a reference object in the NLE pointing to the original clips in the Finder? If it's a new clip in the Finder with a new timestamp, then your originally shot and recorded content becomes useless and you now have (essentially) new source content. This can present a problem with archiving and cross-referencing production dates, file creation references, and such. I am a bit of a purist and prefer to maintain the originals at all times. You can of course keep your original content, but you would do so strictly for posterity, and it would come at an additional cost in drive space and the money to support that space.
It looks like the clips created in FCP X when all PluralEyes options are enabled before exporting do in fact refer back to the original clips in the Finder as originally shot, rather than generating new QuickTime movies. Though in FCP X, the start, end, and durations have changed compared to syncing in FCP X alone, simply because PluralEyes has trimmed the audio to match the video.
* There are times when the camera-original audio is still useful, better, or absolutely necessary. In this regard I still prefer the way FCP X syncs, maintaining the camera original with the 2nd-system sound at all times. This way I don't have to go hunting for the original clip at the precise sync point if and/or when I need it. I am amazed that third-party apps don't accommodate this method. In this regard I think FCP X's only shortcoming for syncing is that it does not do batch processing and cannot quickly identify associated clips the way Dreamsync can. I have worked on many shows where it is imperative that I have the camera-original audio available. I seldom ever want an application to replace the audio of my video clip.
In FCP X, you still have to manually manage camera associations in order to build a proper multicam storyline to export as an XML for PluralEyes to handle, attempt to sync, and bring back into FCP X.
Some pertinent questions when using 3rd-party apps for batch syncing:
How much do you value having your camera-original audio connected to the second-system audio at all times? I like this feature of FCP X's built-in syncing capabilities.
With respect to Kyno, I just love the way I can get to numerous files on a drive with little hassle or effort. Most of my experience was with P2 cards last year.
To be honest, it did not work all the time either, depending on the flavor of P2.
And yes, I also had the timecode being reset to 0 as well.
The latter is/was obviously a deal breaker in some cases.
I have the latest version, 1.4-ish, so I can check over the weekend.
I do not even know where Red Giant Offload is installed, if at all. Probably on a laptop. To date I have just been dragging folders into PluralEyes, clicking all the boxes, and crossing my fingers. The thing is, when things go wrong they seem to go horribly wrong. A case in point being our filming next to a huge TV tower!
Thanks for clarifying with respect to EditReady.
As of now there is no long-term plan to archive or reuse material. We are in a pilot phase for 6 months and no one knows what comes after that. So I am going to save my breath with respect to sensible naming etc.
Yes, basically anything from Intelligent Assistance is amazing.
I also used it, and it works great if the media was managed halfway OK from day 1!
I hope to test it again soon…
The guys I am working for at the moment are slow about getting it.
They just do not get the fact that FCPX does not do this stuff out of the box.
And they miss the fact that Premiere does not do this kind of stuff out of the box either.
In a nutshell, if they do not buy it, I will.
And connect to the server with my own private laptop and do what needs to be done.
I will need it for other projects sooner or later, and it will pay for itself in the blink of an eye.
I will first get the free trial version of LTC Convert and probably also buy it after the trial is over.
If I can get timecode in sync and know where all my audio tracks are I will be the happiest editor on the planet.
And thanks for sharing your findings about PluralEyes.
Last edit: by paurray.
Sure thing. Glad I could provide some insight. KYNO was actually a big reason why we changed our file-naming structure, because we were having lots of digital artifacts in our content. We even took our cameras in to Canon for service, and then we needed a way to track down the cards (we label them) that were used on any shoot. As I said, we did a lot of things before finding that it was KYNO causing the problem.
Our video clips get renamed to Project-Episode-Shoot Number[Card Letter]-Camera Angle-Autoincrement Index before going into FCP X.
So for instance:
This means Hello Cupid Season 2, Ep. 2, on the 2nd shoot day, with card V as Angle/Camera A, and it's the first clip from the card. Metadata stays, timestamps stay for Finder or NLE sorting, and one of the most important things is that we try to minimize using folders, because things can be moved around a lot, but these file names stay and will always be unique. When we archive, we package up the FCP X library, delete unused (render, proxy, optimized) files (Arctic Whiteness Final Cut Library Manager is very useful), empty the package contents of duplicate principal photography because we keep our source clips in a folder next to the library (essentially not importing them into the FCP X library), and export an XML of the library for good measure. That, along with other pertinent assets, gets stored onto two raw (as in internal SATA II desktop drives) cloned hard drives that are scanned by NeoFinder so we can always find things offline.
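A naming convention like that is straightforward to script if you ever want to batch-generate the names yourself instead of going through EditReady or a renamer app. A hypothetical Python sketch; the field values and the exact separator pattern are my guesses at the convention described above:

```python
def build_names(files, project, episode, shoot, card, angle, ext=".mov"):
    """Generate unique names in the pattern
    Project-Episode-ShootNumber[CardLetter]-Angle-Index for one card's clips.

    Example output element: "HC-S2E2-2V-A-0001.mov". All field values here
    are illustrative; adapt the pattern to your own show's convention.
    """
    names = []
    # sort so the autoincrement index follows the card's clip order
    for index, _ in enumerate(sorted(files), start=1):
        names.append(f"{project}-{episode}-{shoot}{card}-{angle}-{index:04d}{ext}")
    return names
```

The zero-padded index keeps Finder and NLE sorting stable, and because every field (project, episode, shoot, card, angle) is baked into the name, the files stay findable even when they're moved out of their folders, which is exactly the point made above.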
We're soon to integrate KeyFlow Pro, but that's another endeavor.
Last edit: by tangierc.