Since yesterday I have run DiskTester in both the 4-drive configuration (with the default 128KB stripe size as well as the 1MB Video Streaming option) and the 6-drive configuration. Between tests the RAID is factory-reset before being reconfigured. Again I have had these results:
I don't have a good explanation for your results. I just checked my 8TB R4 and it is configured for a 512KB stripe size. It's currently about 60% full and BlackMagic shows about 450MB/sec read and 450MB/sec write, although there's a lot of fluctuation.
I would probably not use 128KB stripe size.
I just did a few quick tests in FCPX and when exporting to H264 1080p it was using around a 440KB I/O size. However, when scrolling through thumbnails in the Event Browser of a large 2.2TB project, it was doing about 50KB I/Os. You never know what I/O size it will be using without actual testing, and even then it only matters if the specific task is I/O limited.
E.g., H264 export is mostly CPU-limited, not I/O limited, so further increasing I/O bandwidth won't help.
In general I'd probably use a stripe size of around 256KB or 512KB.
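If you want to see for yourself how much the transfer size changes the numbers, a rough probe like the sketch below can help. This is only a sketch, not a calibrated benchmark: the mount path is hypothetical, and since it doesn't bypass the page cache the results will still be somewhat flattered (Python):

    import os, time

    PATH = "/Volumes/Pegasus/io_probe.bin"   # hypothetical path -- point it at your array
    TOTAL = 1 << 30                          # write 1 GiB per pass

    for bs in (64 * 1024, 256 * 1024, 1024 * 1024):
        buf = os.urandom(bs)
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.monotonic()
        for _ in range(TOTAL // bs):
            os.write(fd, buf)
        os.fsync(fd)                         # flush to disk before stopping the clock
        elapsed = time.monotonic() - start
        os.close(fd)
        print(f"{bs // 1024:5d} KB I/O: {TOTAL / elapsed / 1e6:7.1f} MB/s")

    os.remove(PATH)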
If you are using 6 x 6TB drives, that is pretty big. There is an argument that RAID-5 might not be the best fit for an array that size from a reliability standpoint. In the event of a drive failure, the entire array must be re-read during the rebuild, and (according to one theory) the total I/O involved may approach the drives' unrecoverable read error rate. If that triggers a 2nd failure during the rebuild, the entire array is lost.
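For what it's worth, the back-of-envelope arithmetic behind that theory goes like this, assuming the often-quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read -- that figure is an assumption, so check your drives' datasheet:

    bits_read = 5 * 6e12 * 8                # rebuild re-reads the 5 surviving 6TB drives
    ure_rate = 1e-14                        # assumed spec: one URE per 1e14 bits read
    expected_ures = bits_read * ure_rate
    p_clean = (1 - ure_rate) ** bits_read   # chance the whole rebuild reads back clean
    print(f"expected UREs during rebuild: {expected_ures:.1f}")   # ~2.4
    print(f"probability of a clean rebuild: {p_clean:.0%}")       # ~9%

Whether the spec number reflects real-world drive behavior is exactly what the theory's critics dispute.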
I think that reasoning is flawed and based on a misinterpretation of HDD error rates. However, it probably does have some validity at higher data volumes, and 36TB is getting up there.
You might want to consider RAID-6 for this, which can sustain two drive failures before losing the array. I haven't tested the performance implications of that.
RAID 6 and RAID 5 will give about the same I/O rates.
I've installed and maintained dozens of 32TB+ RAID 5 systems, and no, they are not susceptible to any I/O errors when rebuilding from a dead drive. There's some system performance hit during the rebuild, but it isn't debilitating, and nowhere near fatally dangerous.
Pegasus RAID units at their factory defaults should be perfectly fine for video editing. We have a few that work great for 2.5K and 4K work.
Thanks all for the feedback and help. The R6 Thunderbolt 1 is up and running properly now with two partitions (10TB + 20TB) on a 6x 6TB RAID5 config using Promise's default stripe and block sizes. Good performance across the board, with speeds verified using DiskTester.
The results are (average):
Partition 1 (10TB): 479 MB/s write / 795 MB/s read - 48% full
Partition 2 (20TB): 478 MB/s write / 595 MB/s read - 82% full
I found this topic yesterday while researching low speeds on the Pegasus R6. I have had it for almost six years, and on Monday one of the disks died (a 1TB Hitachi). I have everything backed up, so no big deal. Got 6 new Toshiba 3TB drives at a nice price, swapped all of them in, followed the Pegasus Utility Wizard for setup, and left it to do its synchronization. Then I was shocked by the write speed - only 250+ MB/s. Reading speed was good, around 800 MB/s. Got a bit worried while reading forums on the net...
During the setup, I just left everything at the default settings and didn't bother much with anything. My work is mainly design (Adobe CC), photography, 3D, and some video. I didn't feel like deleting the - still empty - logical drive and wasting another night syncing a new one, so I went playing around with the settings. The catch was the ReadAhead/WriteBack settings. Both were shown as the selected options. So I changed ReadAhead to NoCache and tested the speeds again. Reading speed was the same. Then I switched it back to ReadAhead and tested again, and it made a big difference! Now I'm getting around 650+ MB/s read speed. Looks like the setting shows as active, but it actually isn't until you toggle it.
Just my five cents on the topic, if it helps anyone.
vojko.plevel, yes this is pretty much my finding as well. I tried shuffling the settings around, especially ReadAhead/WriteBack, but found that the defaults gave me the best performance overall. There might be reasons why you would change them (depending on the tasks you use the storage for), but I have found that with a lot of mixed media and various file formats (CinemaDNG file sequences, wrapped codecs, standalone files...) all on the same storage, the default settings are good enough.
I suggest you contact Promise directly. I now use mostly OWC arrays and SoftRAID. Both OWC and SoftRAID have been very responsive to support issues -- especially SoftRAID.
If your RAID set is supposed to contain 8 drives and 1 is offline, it could be functioning in degraded mode, which might hurt write performance. That is just a guess. If you intentionally created a 7-drive array, then maybe not. But 100 MB/sec write speed is abnormal.
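To illustrate why a degraded array does extra work: in RAID-5, serving a block from the dead drive means reading every surviving block in the stripe and XORing them together, and writes pay a similar parity penalty. A minimal toy sketch of the XOR reconstruction (Python, made-up 4-byte blocks):

    import functools

    stripes = [b"AAAA", b"BBBB", b"CCCC"]                    # data blocks on 3 drives
    parity = bytes(a ^ b ^ c for a, b, c in zip(*stripes))   # 4th drive holds XOR parity

    # drive 1 dies: rebuild its block from everything that's left
    survivors = [stripes[0], stripes[2], parity]
    rebuilt = functools.reduce(
        lambda x, y: bytes(p ^ q for p, q in zip(x, y)), survivors)
    assert rebuilt == stripes[1]
    print("recovered block:", rebuilt)   # three reads to serve what was one read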
I would first verify all your important data is backed up elsewhere. A performance issue of this type could portend a data integrity issue. No matter how "redundant", RAID is not a backup and any RAID device can fail in a non-recoverable manner for various reasons.
You said that starting around 23 Sept you suddenly began getting poor write performance. That indicates something changed. It now appears something or someone changed your Promise controller from "write back" caching to "write through" caching. That could account for slow write performance.
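For anyone unclear on the distinction: write-through doesn't acknowledge a write until it is on the platters, while write-back acknowledges as soon as the data hits the controller cache and destages it later. A toy model of the two policies (the 5 ms disk latency is made up for illustration):

    import time

    DISK_LATENCY = 0.005                 # pretend every physical write costs 5 ms

    def write_through(n_blocks):
        # ack only after each block is on disk: the caller waits out every write
        for _ in range(n_blocks):
            time.sleep(DISK_LATENCY)

    def write_back(n_blocks):
        # ack immediately; the controller destages the dirty blocks later,
        # off the caller's clock (and at some risk if the cache loses power)
        return n_blocks

    for policy in (write_through, write_back):
        t0 = time.monotonic()
        policy(100)
        print(f"{policy.__name__}: {time.monotonic() - t0:.3f} s for 100 writes")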
Different benchmarks will often show different performance numbers. Under the covers they may use a different # of threads, different buffer systems, a different # of async overlapped I/Os, different I/O sizes, etc. One is not right and the other wrong. There is no enforcement committee or criterion specifying that I/O tests must use one method.
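As a concrete example, here are two "benchmarks" of the same disk that will disagree wildly, simply because one measures page-cache writes and the other forces every write out to disk. Neither number is wrong; they measure different things (the path is arbitrary):

    import os, time

    PATH = "/tmp/bench_demo.bin"                  # arbitrary test location
    buf = os.urandom(1 << 20)                     # 1 MiB per write

    for label, sync_each in (("buffered", False), ("fsync per write", True)):
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        t0 = time.monotonic()
        for _ in range(256):                      # 256 MiB total
            os.write(fd, buf)
            if sync_each:
                os.fsync(fd)
        os.close(fd)
        dt = time.monotonic() - t0
        print(f"{label:15s}: {256 / dt:7.1f} MiB/s")

    os.remove(PATH)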