I’ve got the following scenario: I’ve got footage (5D, C100, FS7) that adds up to 195.92 GB. Now I’ve imported it all into FCPX with both transcoding options ticked: "Create optimized media" and "Create proxy media". First question: does it even make SENSE to use both, or, if I use proxies, should I untick "Create optimized media"?
Now my FCPX library ends up being 571.23 GB with all the footage imported. I’m quite shocked that 195.92 GB blows up to this size, but I guess it’s simply math: I’m creating two extra sets of files, proxies and optimized media, on top of the originals, so the total more or less makes sense.
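The arithmetic above can be sanity-checked with a quick sketch. The size ratios used here for optimized and proxy media are rough assumptions for illustration, not measured values; actual ratios depend heavily on the camera codecs:

```python
# Rough sanity check of the library size (all sizes in GB).
original = 195.92

# ProRes 422 "optimized" media is often LARGER than camera-native
# footage; a ~1.4x factor is just an assumption for this sketch.
optimized = original * 1.4

# ProRes 422 Proxy is smaller; ~0.5x of the original is assumed here.
proxy = original * 0.5

total = original + optimized + proxy
print(f"Estimated library size: {total:.2f} GB")
```

With those assumed ratios the estimate lands in the same ballpark as the 571 GB library, which supports the "it's simply math" reading: the bundle holds three copies of every clip at three different data rates.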
Still, I worry: how big can a .fcpbundle file be, and how much can FCPX handle? OF COURSE I know it depends on what machine you run and what your machine can handle. My fear is this: for a future project I will probably end up with 40 TB of FOOTAGE alone. If I transcode it using proxies AND optimized media, the .fcpbundle will end up MASSIVE. Can FCPX handle something like this, or will it be overkill, and what are the minimum specs to handle it?
I’m on a 2012 iMac with a 3.4 GHz Intel Core i7, 32 GB RAM and an NVIDIA GeForce GTX 680MX. I fear this machine will struggle a lot under such a heavy load.
Tom's right. Those cameras handle natively just fine in FCPX. If you do make Optimized, you probably don't need proxy or native. If you make proxy, you're probably going to work with native, not Optimized. Having all three is total waste...
I've never used proxies in my life. I don't understand why they exist. Maybe for people with G3 computers? Anyway, if FCP7 can handle projects clocking in at nearly 3 TB, I'm sure FCPX can do the same if not more.
You guys do know that there is a story right next door about Finnish editor Ben Mercer whose starting timeline on the feature "The Unknown Soldier" is 5 hours. The crew shot nearly 500 hours of footage and they amassed over 80 terabytes of info. So yeah, I don't think FCPX is too limited by size of projects: www.fcp.co/final-cut-pro/articles/1895-o...eature-films-of-2017
If you're working with 4K and 6K files, it is much easier to work with proxies. Doing 4K multicam? Depending on the number of cameras, proxy is your friend. There are LOTS of reasons to use proxies, and I can only think of one reason why you wouldn't: if you were tight on space. Of course, to finish a project you'll eventually want to work from the originals.
Also, I'm sure Ben Mercer isn't storing 80TB in his Library bundle.
Well, what can I say: I will work with 4K footage, and I sometimes have trouble playing back 4K just fine, so I need to toggle to proxies or I'm not able to work at all. There will also be RED EPIC footage.
"That's a given as I don't see many 80TB drives out there, not even RAID arrays."
That was a problem and why we’ve installed a 110 TB NAS.
So please let me analyze this situation and tell me if I'm right or wrong: I have trouble playing back 4K natively, so I will just create proxies. Transcoding to optimized footage AS WELL would be a waste?
I am aware that I can store my media files outside the library bundle but my question is: Why? What advantage would I gain?
Converting to optimized media can be useful if you have to deal with long clips that have been heavily compressed, such as MP4 and the likes. Converting this footage to an all i-frame codec such as ProRes prior to editing will improve skimming and cut down render times.
Converting to proxy media can be useful when you deal with ultra high resolution media that are hard to decode on the fly such as RED clips, or with very heavy multicam projects stored on slow media drives. Converting such footage to proxy media will release the strain on your processor and your media drives for editing, and you can seamlessly switch back to Original media for exporting.
So both methods have their use, but you never need to create proxy media and optimized media at the same time; that's a waste of drive space.
It's very common to use large drive arrays today. The little JellyFish Mobile from LumaForge can be configured to up to 150TB of working (100% useable) storage, and the bigger LumaForge systems can store up to 1.2 Petabyte of footage.
Storing your media outside of your Library is only mandatory if you need to collaborate with other editors over a shared storage network. In other workflows, you can decide what you prefer to do.
First of all, thank you very much for these very informative replies.
I would like to explain the entire situation in DETAIL so that you know EXACTLY what is going on and might be able to give me some advice. That would be great.
The company that I work for (I'm the only editor) is shooting a big project over several days. Something like 30 days!
30 one-minute films will be made out of this content. Now of course they don't film chronologically, so e.g. filming day 20 might contain footage for film A and for film X.
Now here is my task:
1) Copy all the footage onto a NAS. Our technician installed a 110 TB NAS for this job. I've got a SanLink 2 box hooked up to my FireWire port, which is connected to a fiber-optic cable. I'm not a technician, just an editor, so I don't know all the technical details, but it's a gigabit connection and should work really fast. About backups: two hard drives of that NAS can "die" before we run into serious trouble, so yes, I just park it all on the NAS because I don't have anywhere else to duplicate the files.
2) Import all the footage into FCPX. It will ALL BE 4K, filmed with a RED EPIC, FS5 and FS7.
Now, my late 2012 iMac clearly has playback issues with 4K footage, SO should I just transcode to proxies and forget about optimized media? I don't know exactly what "optimized" means; if I understand it correctly, it just converts the footage to Apple ProRes 422. But 4K ProRes 422 is still too much for my machine to handle, so I think proxies would be the best solution.
3) Worst part: since it's TOO MUCH for me to edit alone, I will have to hand out the footage to two freelance editors who aren't even located here.
The idea is: Editor 1 gets to edit film A,B,C… Editor 2 gets to edit film X,Y,Z and I also get to edit films.
My idea was: Create ONE master library that contains ALL the footage because I’ve got the storage for it.
Create Events for each day and import the footage by day. Once I've got events like Day 1, Day 2 with all the footage inside FCPX, I will look at the shoot list, see which films have to be made (e.g. a film "Horse Riding"), and create events ALL NAMED after the films that need to be done. A good idea would be to give each film a NUMBER, so that I've got an event like "01 Horse Riding".
4) Get an external hard drive and create a library on it. Go into my master library and copy the events for films 1, 2, 3 onto the external hard drive.
After the freelance editor is done he gives me an XML and I relink my media.
OH, one more thing: normally my footage is always structured like this: folder "Day 1", with subfolders "Card 1", "Card 2", "Card 3".
I like the fact that keywords get their information from Finder folders (FCPX doesn't just import folders the way they are, or am I doing something wrong?). Now, and I hadn't thought about this yet, there will be a conflict: I will end up with MANY "Card 1", "Card 2", "Card 3" folders from all the different days, and FCPX (which makes sense) will merge all the footage from those different days together into the same keyword collections. Using different events won't save me from that problem. Of course I want event Day 1 with its three subfolders as keyword collections, so I think the best solution would be just to rename the folders in the Finder to "Card 1 Day 1". That way I can import a whole bunch of folders and files without having to worry that files won't be where I expect them to be.
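The renaming step described above could be scripted instead of done by hand in the Finder. This is just a sketch: the folder layout (`Day N/Card M`) and the target names are assumptions based on the structure described, so adjust the patterns to the real footage tree before running it:

```python
# Sketch: rename "Day N/Card M" subfolders to unique names like
# "Card M Day N" before import, so the keyword collections FCPX
# creates from Finder folders don't collide across shooting days.
from pathlib import Path


def rename_card_folders(root: Path) -> None:
    """Walk every 'Day *' folder under root and give each 'Card *'
    subfolder a name that includes the day (e.g. 'Card 1 Day 1')."""
    for day_dir in sorted(root.glob("Day *")):
        # sorted() materializes the list, so renaming while looping is safe
        for card_dir in sorted(day_dir.glob("Card *")):
            new_name = f"{card_dir.name} {day_dir.name}"
            card_dir.rename(card_dir.with_name(new_name))


# Hypothetical usage -- the NAS path is an example, not a real mount point:
# rename_card_folders(Path("/Volumes/NAS/Footage"))
```

Run on a copy first; once a folder is renamed, any previously imported clips that referenced the old path would need relinking.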
archhill wrote: You guys do know that there is a story right next door about Finnish editor Ben Mercer whose starting timeline on the feature "The Unknown Soldier" is 5 hours. The crew shot nearly 500 hours of footage and they amassed over 80 terabytes of info...
That was a superb article (and two videos). However it was oriented from the standpoint of an editor who was provided a functioning, debugged infrastructure, not the standpoint of the 1st Asst Editor and IT people who had to investigate, spec, and test that system beforehand.
I did not see specific information about how many terabytes, how much was actually imported and on line, what type of disk subsystem, what configurations were evaluated, tested and discarded, how the library was organized, or whether proxy was used -- questions the OP is asking.
1st, if you depend on RAID to be your full safety net, you're begging for trouble. RAID redundancy is NOT a backup, and should not be an excuse to not have a backup. I've seen RAIDs lose more than 2 drives a few times. Mostly, when the 1st one dies, the second dies before a new one is inserted to replace it. The RAID has to go through a lot of work when a drive dies: one dies in your RAID 6 and the system slows down so badly you can't work; two die and you're waiting a couple of days before you have a system speedy enough to edit with. And that's if the RAID enclosure itself (controller board, communication board) doesn't fry out, in which case you lose everything totally. You're really asking for major trouble there, friendly warning.
Had a reality TV show where cards were flying into the DIT room fast and furious. We made an Event for each day (Day 1, Day 2, Day 3) and keyword collections for each card. Since each card was used multiple times, it was Card 1.1, Card 1.2, Card 1.3, etc. We didn't have time to edit or do much else but copy and erase cards. Worked out just fine. Something like 12 TB of footage (not counting the other 8 TB shot outside of those two weeks of production we ended up using).
We didn't make folders in the Finder, didn't have time, don't know what that does to help. And we didn't use any card copy software, just plugged in a card, copied into FCPX, erased it, done. We had up to 15 cards ingesting at a time. Micro-SD, SD, I think there were 3 types of cards being used. A mix of DJI drones, GoPro Hero4 and C300 cards. Nightly backups to an identical RAID with Carbon Copy Cloner.
No proxies, no optimized, just native.
Later, when it came time to edit: 3 editors, 3 different cities, 3 identical RAIDs. We exchanged XML files, the system is working just fine, and we're about to shoot a season 2 soon using this exact same setup and workflow.
Thanks to FCPXWorks for a two hour pre-production consult that avoided a ton of possible issues.
I think you're on the right track, but make an Event for each day and a keyword collection for each card. Make a Library for each editor with only what they need, and copy those Events to that editor's Library. Keep it as simple as possible.
FCPman wrote: OH one more thing: Normally my footage is always like this: Folder: Day 1 - Sub folders: Card 1, Card 2, Card 3.
I like the fact that keywords get their information from finder folders. FCPX doesn’t just import folders the way they are (or am I doing something wrong). Now of course- and I hadn’t thought about it yet - there will be a conflict because I will end up with MANY Card 1, Card 2, Card 3 folders from all the different days and FCPX (which makes sense) will merge all the footage from all those different days together into keyword collection Day 1, Day 2…using different events won’t save me from that problem ...
First of all, I have no experience with shared storage nor did I ever have to manage anything bigger than 1 TB. I just contribute thoughts, because I find media organization interesting, and I am here to learn something, so please correct me.
I think the card numbers have no purpose as keywords (wrong?). Each day (for each project, if I understood correctly) should have its own event. Each event should have its own keyword collection and its own smart collection(s). Within the events, you can further narrow your choices with favoriting, rejecting and tagging. To me, that seems to be enough splitting and sorting of footage to keep the whole task manageable. As a consequence, translating Finder folder names to keywords wouldn't help. You would create new events every day. There would be no need to edit on the library level at all.
BTW: if you click the browser's magnifier icon, the toggle-filter-HUD icon appears below it. It works for the smart collection HUDs as well. It stays in place when you change the workspace; it's just gone after a restart. An elegant way to quickly activate Simon Ubsdell's library-level smart collection. Otherwise, of course, you'd have to double-click every smart collection to open it.
FCPman wrote: ...my task:...1) Copy all the footage onto a NAS. Our technician installed a 110 TB NAS for this job. I’ve got a SanLink 2 box...connected to a glass fiber cable. I’m not a technician just an editor so I do not know all the technical details but it’s a gigabyte connection...
Organizationally, this can be problematic. Normally the 1st Asst Editor is the technical guy who interfaces with IT, helps spec the configuration, does performance and reliability testing and validates it can handle the load -- from both system layer and application layer.
Bolting FCPX directly to an infrastructure solely created by IT (who generally don't know FCPX) can often produce situations that must be rescued by Sam Mestman, Lumaforge, etc.
I'm not saying this won't work, but high end database systems -- and that is what FCPX is in this situation -- don't just automatically work because your IT guys bought big servers. It requires a technical guy knowledgeable about the unique system-layer characteristics of FCPX, how to optimally structure the library and workflow at the application layer, and what kind of network and SAN configuration are needed to support that.
FCPman wrote: ...my...2012 iMac clearly has playback issues with 4K footage SO should I just transcode to proxies and forget about optimized media? Because I don’t know exactly what „optimized“ means.
Yes use proxy not optimized. However even proxy can almost double the size of camera-native H264 content. OTOH if your camera content is ProRes or less compressed than H264, it might only increase storage size by 25%. Determining the needed space will require investigation and testing -- for all camera codecs in use.
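As a rough illustration of that investigation, here is a back-of-envelope estimator. The bitrate figures are placeholder assumptions, not official codec specs; substitute the real data rates of your own camera footage and proxy settings before committing 40 TB of storage:

```python
# Back-of-envelope proxy storage estimate.
# Bitrates (Mbit/s) below are ASSUMPTIONS for illustration only.
ASSUMED_BITRATES_MBPS = {
    "H.264 4K camera-native (assumed)": 100,
    "ProRes 422 Proxy, half-res 4K (assumed)": 45,
}


def size_gb(bitrate_mbps: float, hours: float) -> float:
    """Convert a stream bitrate and duration to gigabytes.
    Mbit/s -> MB/s (/8), -> MB/hour (*3600), -> GB (/1000)."""
    return bitrate_mbps / 8 * 3600 * hours / 1000


hours_of_footage = 500  # hypothetical shoot size
for label, mbps in ASSUMED_BITRATES_MBPS.items():
    print(f"{label}: {size_gb(mbps, hours_of_footage):,.0f} GB")
```

The point of the exercise: at these assumed rates, proxies for a 500-hour shoot still run into the tens of terabytes, which is why testing with each actual camera codec matters.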
As already stated, you don't want to blindly create proxies for 40TB of data. You really need to be on a top-spec Mac Pro for this class of work, have an FCPX-optimized storage system like Lumaforge, and then only create proxies in the few cases where it's needed. You need someone (either in your organization or an outside consultant) with prior extensive experience integrating, testing and debugging large FCPX installations. I fully realize those items may not be under your control, but at this scale just bolting together a few things will often not produce success.
FCPman wrote: ...My idea was: Create ONE master library that contains ALL the footage because I’ve got the storage for it.
To my knowledge this is uncharted territory. I have never read any detailed article of someone putting 40 TB of *footage* in a single FCPX library. Even if all the content was "un-managed", proxy files, render files, etc. will take much additional space (and I/O bandwidth).
It would be interesting if someone did FCPX scalability studies which evaluated how different library configurations, hardware configurations and workflows shaped the achievable upper limit. These studies are frequently done in the database world, but I don't recall seeing similar things for FCPX.
To us, the card names were VITAL as keyword collections. We had production sheets that listed what shots were done on which cards, who was in those shots, etc. Like "Lumber Jack" but on paper. Yeah, our producer was too out of touch to purchase Lumber Jack when I suggested it. But those shot sheets were the basis of editing later on. We had to know what came from which card. Even on our smaller educational shows, I get reports about who had which camera, which cards, and what was shot on each card by each camera, and the script has notes about all of that. On larger productions, all of that metadata is vital.
"1st, if you depend on RAID to be your full safety net, you're begging for trouble. RAID redundancy is NOT a backup, and should not be an excuse to not have a backup... You're really asking for major trouble there, friendly warning."
I fully agree with you, but what can I do? I was hired not too long ago, and all they had was saving files on external hard drives they would swap around. I said: your 2 TB hard drives won't do the trick for such a massive project; I would estimate it will take up 50-70 TB of storage space. We had an IT guy, and I showed him the JellyFish. He contacted the JellyFish people, who couldn't give him a proper answer as to why it would be the number 1 solution and what makes it so special, so he came up with his own system, which cost the company I work for A LOT of money, and you all know that people hate spending money. So what should I suggest now? Install a second NAS where we mirror all the files? I don't think they will spend any more money on this, which leaves me in a tricky situation, because you know, "two hard drives can die" is convincing enough for people...
If you had to make one film from 30 days of shooting, then a Day and/or Scene organization would make sense. But that's not the case here.
You are going to make 30 films from footage that will be recorded randomly over 30 days, and each day they can record footage for several films you have to produce. So in this case the films that need to be delivered are the core of your organizational structure. The cards on which the footage gets recorded, or the days on which the footage was recorded, are rather irrelevant. How can you proceed from here?
1. Create a Master Library, and in that Master Library create one Event for every film you need to produce. Set the Library settings so that Media gets stored OUTSIDE of the Library, in a dedicated folder ("Master Library Media") on the NAS.
2. When cards come in, import the media from every card into the Event (film) where these media will be used. (If cards are going to be re-used: first make a backup from the cards onto a separate drive and then import your media from the backups.) So the key is that you import the media from your cards directly into the Event(s) where these media belong.
If you want, you can further organize the clips in each event using Keyword Collections. At the end you will have 30 organized Events with only the media that are relevant to each Event (film).
3. When you need to hand over the media for a certain film to an external off-site editor (for the sake of clarity, let's say that you want to send "Film 1" to the external editor):
- Attach a travel drive (RAID) to any computer that is connected to your NAS.
- Create a folder "Film 1 Media" on the external drive.
- Create a new Library on the external RAID and name it "FILM 1". Set the Library settings so that Media gets stored OUTSIDE of the Library, in the "Film 1 Media" folder on the external RAID.
- Open the Master Library on your NAS. In the Master Library, select the "Film 1" Event and choose File > Copy Event to Library > FILM 1
This will copy the Film 1 Event with all the media to the FILM 1 Library and the Film 1 Media folder on your external drive. Send the drive to the off-site editor.
- When the editor has finished his edit, he just needs to export a Project XML to you. Open that XML in your Film 1 Event and you will see his edited Project. You will have to relink the media, but that's a piece of cake.
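Part of step 3 can be scripted as a pre-flight check. This sketch only prepares the media folder and verifies free space on the travel drive; the drive path, film name and the 800 GB figure are hypothetical examples, and the Copy Event itself still happens inside FCPX:

```python
# Sketch: prepare a travel drive for a per-film handover library.
# Creates the "<Film> Media" folder and fails early if the drive
# doesn't have enough free space for the Event's media.
import shutil
from pathlib import Path


def prepare_travel_drive(drive: Path, film: str, needed_gb: float) -> Path:
    """Create '<film> Media' on the drive after a free-space check."""
    free_gb = shutil.disk_usage(drive).free / 1e9
    if free_gb < needed_gb:
        raise RuntimeError(
            f"Only {free_gb:.0f} GB free on {drive}, need {needed_gb:.0f} GB"
        )
    media_dir = drive / f"{film} Media"
    media_dir.mkdir(exist_ok=True)
    return media_dir


# Hypothetical usage -- mount point and size are examples:
# prepare_travel_drive(Path("/Volumes/TravelRAID"), "Film 1", 800)
```

After this, you would point the new FILM 1 Library's media location at the returned folder and do File > Copy Event to Library as described above.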
B. NAS and backup drives
You say: "He contacted the jelly fish people who couldn’t give him a proper answer why it would be the number 1 solution and what makes it so special so he came up with his own system which costed the company I work for A LOT of money and you all know that people hate spending money."
That is SO typical for an IT guy.
But okay, you are stuck with this system and you will have to deal with it. I entirely agree with Ben that RAID 6 does NOT offer you any kind of backup security. RAID 6 is redundancy, NOT security. Either you take the risk of losing all your data, or you talk with management to provide some kind of real backup solution. Period.
Lots of good advice has been given here, read it carefully and use it as a basis for creating your workflows for this project. If you have any further questions, we will be happy to assist you. And when the project is done, fire that IT guy (-:
Michael, besides the excellent advice from Ronny, you should understand a few things:
You are the lead editor -- often that person is not a technical guru. There is nothing wrong with that; you have other responsibilities. That is why feature film projects have tech-savvy 1st Asst Editors like Mike Matzdorff and/or consultants like Sam Mestman.
If you don't have that on a project of this scope, that is a problem.
I seriously doubt your IT guys know anything about the unique I/O and performance characteristics of FCPX and how this impacts their server. They have probably never examined an I/O histogram or queue depth chart under the loads imposed by FCPX in various workflows.
Re backup, it is possible their servers are being backed up to LTO tape, but you're just unaware of that. If a major IT system is not being backed up, that is irresponsible.
On a small project you can basically bolt together some stuff and it will often work. On a large project that stresses numerous limits, it takes significant expertise over a technical range that spans from the upper system layer to the lower application layer. You work at the application layer and the IT guys work at the low system layer but there is nobody in between. That is a recipe for poor outcome on a project of this scope.
These situations usually start out small with a little data; then, as more data is loaded and stress increases on the front end and back end, it starts to bog down or become unstable. By that time you're deep into the project, deadlines are being missed, and frantic hardware fixes are being tried, e.g., "upgrade the server!"
Typically what happens (politically) is the most visible product and company gets blamed, or the in-house proponent of the project or software gets blamed. They rarely blame the lack of planning or consulting expertise to integrate and test it. IOW "Oracle just kept crashing" "SQL Server was too slow", "You promised this would work", etc.
I'm not a consultant and I'm not pushing anyone's services or products but Lumaforge is well known for their expertise in this area. If you think a consultant is expensive, consider how expensive failure would be.
That's the problem here: our IT guy certainly knows a lot, and he is very skilled when it comes to servers and all that technological stuff. BUT I have worked for TV stations and in other broadcasting fields, and those IT people know about Avid, ISIS... and all the software, WHICH IS important, because those are the tools I use. Our IT guy (a freelancer who only shows up if we call him) doesn't know anything about Final Cut, post production or video editing and their technical backgrounds. So I am asking here because I see a lot of problems with this upcoming project, and I want to address those problems. I mean, I already convinced my superiors that we DO need a NAS and that we DO need space to save all that data, because swapping hard drives certainly won't do the trick. But YES, I do lack the technical knowledge, and I cannot imagine all the problems I might run into because of that. So of course I am trying to find and seek help online, because my superiors will tell me: "you are the editor, you need to handle all this, we don't care about that nerd sh#t". It's a hard situation, and I don't know exactly how to tackle it, because I also don't know what is needed and required in order to have no problems.