correlation: created 2021/01/07 05:12, last edited 2021/01/07 05:14 by Jamie McCallum
**Disk Speed**
The HDDs in use on the flexbuffs have a basic speed of ~1 Gbps each. Accessing a file that spans multiple drives (such as a VBS recording) can be slowed to this rate unless the read-ahead option is enabled at runtime (e.g. ''vbs_fs -n 4 -I test* /mnt/vbs/'' enables a 4-chunk read-ahead buffer). Note that a VBS recording will naturally tend to scatter across disks in something like a round-robin fashion, so access speeds can in principle approach the recording rate (~32 Gbps if using the whole set of drives).
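A quick way to see what read rate a mount actually delivers is ''dd''. The sketch below uses a temporary file so the commands run anywhere; on a flexbuff you would instead point ''SRC'' at a recording under the vbs_fs mountpoint and compare the reported rate with and without the ''-n'' read-ahead option (the path and the 64 MiB size here are placeholders, not real data).

```shell
# Sequential-read sanity check.  On a flexbuff, set SRC to a file under the
# vbs_fs mount (e.g. a scan in /mnt/vbs/) and compare the rate dd reports
# with and without 'vbs_fs -n <chunks>' read-ahead enabled.
SRC=$(mktemp)                                         # stand-in for a recording
dd if=/dev/zero of="$SRC" bs=1M count=64 status=none  # 64 MiB of dummy data
dd if="$SRC" of=/dev/null bs=1M && READ_OK=yes        # dd prints MB/s on stderr
rm -f "$SRC"
```

Reading the same file twice will mostly hit the page cache, so for a meaningful number either use a file larger than RAM or drop caches between runs.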
+ | |||
**VBS**
The vbs_fs utility is an easy way to access the scattered recordings, treating them as unified files. One open question is how to handle multiple datastreams: is it better to mount them all under one vbs_fs mountpoint, or to run separate instances? How does this interact with the read-ahead setting?
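One way to compare the two layouts is simply to try both. The commands below are illustrative only: the ''ev001_*'' scan patterns and the mountpoints are made up, and the block is guarded so the mounts are attempted only where vbs_fs is actually installed.

```shell
# Option A: one vbs_fs instance exposing every datastream under one mountpoint,
#           sharing a single set of read-ahead buffers.
# Option B: one instance per datastream, each with its own read-ahead buffers,
#           at the cost of an extra FUSE process per stream.
# (In practice you would pick one layout, not run both at once.)
if command -v vbs_fs >/dev/null 2>&1; then
    vbs_fs -n 4 -I 'ev001_*'    /mnt/vbs/    # A: both streams via one glob
    vbs_fs -n 4 -I 'ev001_st1*' /mnt/vbs1/   # B: stream 1 only
    vbs_fs -n 4 -I 'ev001_st2*' /mnt/vbs2/   # B: stream 2 only
    NOTE="mounted"
else
    NOTE="vbs_fs not installed; commands above are illustrative only"
fi
echo "$NOTE"
```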
+ | |||
**Network interfaces**
At present, the vcs and flexbuff are each connected via a single 10 GbE interface to a fibre switch. Would it be sensible to create a "bonded" interface? Is there any kernel tuning required? (NB: the standard flexbuff optimisation has already been applied.)
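If bonding is pursued, a minimal sketch of a Debian-style ''/etc/network/interfaces'' bond (using the ifenslave package) might look like the fragment below. The interface names and address are placeholders for this site, and 802.3ad (LACP) mode requires matching configuration on the switch ports.

```
# /etc/network/interfaces fragment: bond two 10 GbE ports into bond0.
# enp5s0f0/enp5s0f1 and the address are site-specific placeholders.
auto bond0
iface bond0 inet static
    address 192.168.10.5/24
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad              # LACP; switch must be configured to match
    bond-miimon 100                # link-monitoring interval, ms
    bond-xmit-hash-policy layer3+4
```

One caveat: LACP balances per flow, so a single TCP or UDP stream is still capped at one link's 10 Gbps; bonding helps only when the traffic splits into multiple flows (e.g. one per datastream).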
v2d parameters to consider: dataBufferFactor, nDataSegments, sendLength, sendSize, visBufferLength, strideLength, xmacLength, numBufferedFFTs
NB - all of these may interact and be media-dependent!
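For reference, these parameters sit in two places in a .v2d file: buffering parameters at the top level, and per-pass tuning inside a SETUP block. The values below are common starting points for experimentation, not recommendations for this correlator.

```
# Top-level (global) buffering parameters -- values illustrative only
dataBufferFactor = 32     # size of each datastream's input buffer
nDataSegments    = 8      # number of segments that buffer is divided into
visBufferLength  = 80     # visibility buffers held in mpifxcorr

# (sendLength / sendSize would also go at this level)

SETUP default
{
  tInt            = 2     # integration time, seconds
  numBufferedFFTs = 10    # FFTs processed per core loop
  xmacLength      = 128   # cross-multiply-accumulate stride, channels
  strideLength    = 128   # fringe-rotation stride, channels
}
```

Since the note above warns that these interact, it is worth changing one at a time and re-benchmarking against each media type (vbs_fs mount, raw file, network stream) separately.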