
TOPIC: Agonizingly Slow Media Management

Agonizingly Slow Media Management 06 Sep 2020 18:27 #109887

  • nate.haustein (Topic Author)
  • Fresh Boarder
  • Posts: 19
  • Thank you received: 1
I've been struggling with this for a while now. Importing footage into a library or consolidating footage into a library takes FOREVER for me. This can be straight from a camera card or copied from somewhere already on the server. It happens with camera codecs and also with ProRes files. This issue takes place on a QNAP SMB server that consistently achieves 800MB/s via speed test.

Here's the odd part: regular Finder file transfers are perfectly fast and consistent, as expected. Also, playback within Final Cut seems fine, i.e. if I layer a bunch of files and play them back, I get good throughput for 3 or 4 streams of 4K ProRes. I've been watching the network tab in the Mac Activity Monitor, and during a Finder copy or timeline playback, the "Data Received" and "Data Sent" figures hit 500MB/s regularly. During import, I'm lucky if it hits 40MB/s.

Taking out the networking aspect, an import from an external drive to internal Mac SSD achieves the expected 400MB/s speeds that the SSD is capable of providing, so FCPX is capable of ingesting the footage at those speeds.

This has occurred on all my machines (2016 MBP, 2019 MBP, 2017 iMac), on FCPX 10.4.8 and 10.4.9. It has happened over SMB and NFS. It happens when using the Import function of Final Cut or when just dragging content into the event. My Mac was recently wiped, with fresh reinstalls of the OS and all software.

This issue even happens with creating a library on the desktop and importing media files from the server.

Any ideas of what's going on here? Why can FCPX access the media at high speeds during playback, but not during the initial copy to library?


Agonizingly Slow Media Management 07 Sep 2020 10:43 #109900

  • joema
  • Platinum Boarder
  • Posts: 1633
  • Karma: 27
  • Thank you received: 352

nate.haustein wrote: ...Importing footage into a library or consolidating footage into a library takes FOREVER for me. This can be straight from a camera card or copied from somewhere already on the server. It happens with camera codecs and also with ProRes files. This issue takes place on a QNAP SMB server that consistently achieves 800MB/s via speed test.

Here's the odd part: regular Finder file transfers are perfectly fast and consistent as expected. Also, playback within Final Cut seems fine...Taking out the networking aspect, an import from an external drive to internal Mac SSD achieves the expected 400MB/s speeds that the SSD is capable of providing, so FCPX is capable of ingesting the footage at those speeds.... has occurred on all my machines (2016 MBP, 2019MBP, 2017 iMac), on FCPX 10.4.8 and 10.4.9. It has happened over SMB and NFS...


Thanks for the detailed description. Reaching 500 MB/sec requires 10-gigabit Ethernet, which none of the listed machines have built in. I assume you have Thunderbolt adapters for those? Which adapter?

In general FCPX uses one set of I/O routines -- it doesn't have special code for doing I/O to NAS vs a local drive. Thus there is likely nothing within FCPX to fix.

During playback, FCPX is mainly doing large sequential *reads*. It is not generally writing, although if background render is enabled and if those files are on the NAS there will be some writing. But those render files are large sequential writes. That type of I/O is easy for a NAS or a local mechanical RAID.

When FCPX imports data it's doing a lot of thumbnail and waveform generation, which is characterized by small random I/Os. On a local drive you'll see the data rate in MB/sec go down, but the I/O issuance rate remain very high. In Activity Monitor it's helpful to compare "Reads in/sec" to "Data read/sec", and "Writes out/sec" to "Data written/sec". This gives a very rough idea of the I/O profile. Each drive system (whether HDD, SDD or NAS) has a limit of I/Os per sec, not just a max rate limit when doing large sequential reads/writes.
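As a rough illustration of that comparison, dividing the data rate by the IO issuance rate gives the average IO size, which hints at whether the workload is large/sequential or small/random. This is a minimal sketch with hypothetical counter values (not numbers from this thread):

```shell
# Sketch: estimate average IO size from Activity Monitor-style counters.
# Inputs: MB transferred per sec, and IOs issued per sec (both hypothetical).
classify() {
  mb_per_sec=$1
  ios_per_sec=$2
  kb_per_io=$(awk -v m="$mb_per_sec" -v i="$ios_per_sec" \
    'BEGIN { printf "%.0f", m * 1024 / i }')
  if [ "$kb_per_io" -ge 128 ]; then
    echo "${kb_per_io} KB per IO: large, likely sequential"
  else
    echo "${kb_per_io} KB per IO: small, likely random"
  fi
}

classify 500 1000   # playback-style throughput -> 512 KB per IO
classify 40 2500    # import-style throughput   -> 16 KB per IO
```

The 128 KB cutoff is an arbitrary illustration threshold, not a figure from the post.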

The item about "happens with...library on the desktop" is interesting. If a default library, then cache, waveforms and thumbnails are contained within the library, and those I/O streams are separate from the NAS -- yet you report import is still slow in that case. I don't understand why.

Lots of people use a NAS on 10-gigabit Ethernet -- apparently without these issues. That implies there might be something unique to your installation or configuration which contributes to this.

The below post has a lot of info about optimizing a NAS for FCPX. Look through that and see if anything might apply:

www.fcp.co/forum/4-final-cut-pro-x-fcpx/...work?start=40#109131

There are also ways to study more closely the I/O profile FCPX uses during import, but it requires disabling System Integrity Protection and using the command-line DTrace tools.

DTrace was disabled on prior macOS versions but apparently has been re-enabled on Catalina, although it still requires disabling SIP. How to disable/re-enable SIP: support.studionetworksolutions.com/hc/en...tection-SIP-in-macOS

See attached results of DTrace monitoring FCPX I/O on Catalina. Important items are queue depth and the I/O size histogram.

Examples (# means comments follow on that line):

First get FCPX PID:

sudo ps -ax | grep Final

sudo iosnoop # snoop all IO

sudo iosnoop -p PID # snoop only specified PID's IO

sudo bitesize.d # print IO histograms of active processes

sudo bitesize.d -p PID # print IO histograms of specified PID

sudo iopending # periodically print IO queue histogram

To use iopending on a specified disk device:

(1) Get device name: diskutil list
(2) sudo iopending -d disk0s2 # where disk0s2 is the desired disk from above cmd

Show only writes or reads, including the IO size and block address of each:

sudo iosnoop | grep W # or R for reads

More info: dtrace.org/blogs/brendan/2011/10/10/top-...cripts-for-mac-os-x/
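The PID lookup and the per-process snoop above can be combined in a small wrapper. This is a hypothetical sketch, assuming SIP has been disabled so the DTrace-based tools can run:

```shell
#!/bin/sh
# Hypothetical wrapper: look up Final Cut Pro's PID, then snoop only its IO.
# pgrep -x matches the exact process name; it prints nothing if not running.
fcpx_pid() {
  pgrep -x "Final Cut Pro" | head -n 1
}

pid=$(fcpx_pid)
if [ -n "$pid" ]; then
  sudo iosnoop -p "$pid"   # requires SIP disabled (see link above)
else
  echo "Final Cut Pro is not running"
fi
```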

Below: FCPX I/O histograms for I/O size and disk queue depth (don't remember what FCPX task)


Agonizingly Slow Media Management 12 Sep 2020 02:12 #109982

  • nate.haustein (Topic Author)
  • Fresh Boarder
  • Posts: 19
  • Thank you received: 1
Joema, thank you VERY much for taking the time to write such a thoughtful, detailed response. It’s so very helpful to have your research compiled in one place as a jumping-off point for further troubleshooting.

I tried a number of the recommendations, and the thing that seemed to make the biggest difference was the port 445 NetBIOS change. I went from ~40MB/s during imports to ~80MB/s. There’s one more oddity I found...

If I have a library with multiple events and try to consolidate the entire library at one time, as a single “task” in the FCPX activity window, the process achieves the aforementioned ~80MB/s. However, if I consolidate event by event, and 2-3 events are consolidating as separate concurrent processes in the FCPX activity window, the speed jumps to 200MB/s!!

This is mind-boggling. It’s almost as if the network is restricting each individual process to a smaller slice of bandwidth, but it can handle several processes simultaneously.

So now, I suppose, what’s the networking equivalent of that concept?


Agonizingly Slow Media Management 12 Sep 2020 12:36 #109983

  • joema
  • Platinum Boarder
  • Posts: 1633
  • Karma: 27
  • Thank you received: 352

nate.haustein wrote: If I have a library with multiple events, and try to consolidate the entire library at one time, as a single “task” in the FCPX activity monitor, the process achieves the said ~80MB/s. However, if I consolidate event by event, and 2-3 events are consolidating as separate concurrent processes in the FCPX activity window, the speed jumps to 200MB/s!!...


When I copy an event to another library within FCPX, the performance varies based on the drive or network speed. Going from my iMac Pro internal SSD to a Thunderbolt 3 SSD RAID array, it does 1,350 megabytes/sec read and write, with about 2,800 read IOs/sec and 1,380 write IOs/sec. From that we can gather the read size is about 500 kbytes and the write size is about 1 megabyte.

Doing the same operation on a 4-drive 48TB OWC Thunderbay 4 in RAID-0, the reading & writing rate is about 800 MB/sec, with read IOs/sec about 1,700 and write IOs/sec about 980, so the IO size remains the same.

Doing the same operation to a network drive on a 2017 iMac 27 connected by 1 gigabit ethernet and using SMB/AFP sharing, the rates are much slower: about 50 MB/sec read and 140 MB/sec write. The rates are mis-matched because it's apparently buffering up chunks of data before it tries writing.

When doing any operation, you can observe in Activity Monitor "reads in/sec" vs "data read/sec", and "writes out/sec" vs "data written/sec". Dividing the average data read per sec by "reads in per sec" gives the approximate IO size. E.g., 701 MB data read per sec divided by 1418 "reads in per sec" = 0.494 MB IO size, or about 500 KB per IO. Likewise for writes.
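That division can be sanity-checked in one line, using the numbers above:

```shell
# 701 MB read per sec / 1418 reads per sec = ~0.494 MB per IO (~500 KB)
awk 'BEGIN { printf "%.3f MB per IO\n", 701 / 1418 }'
```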

This is only a rough average. If FCPX is also doing IOs to update the library, thumbnails, or waveform files, those are smaller and random and will be swamped by the larger data IOs. Therefore without other utilities it's difficult to see what % of IOs are what size.

Various drive types (especially NAS) will exhibit widely varying performance on different IO profiles. Unfortunately, a simple sequential test like the Blackmagic speed test doesn't reveal much. You could try the ATTO benchmark on your NAS drive; it uses different IO sizes and draws a graph to characterize performance at each size: www.atto.com/disk-benchmark-macOS/

If possible try a test using NFS instead of SMB. Lumaforge sometimes recommends NFS for best performance.

Whatever FCPX does, it uses the same IO profile on local drives, a Lumaforge server, a network-connected iMac drive, your QNAP server, and the QNAP and Synology NAS devices everyone else has. There is no IO tuning adjustment in FCPX, so any tuning must be done at the network or NAS layer. For this reason you'll probably get more informed advice from someone with QNAP configuration and tuning experience.

