===== Flexbuff and Transfer machines =====
<WRAP center round box 60%>
zpool create -m /mnt/raid0 -o ashift=12 raid0 raidz ata-ST4000DM000-1F2168_Z30280A0 ata-ST4000DM000-1F2168_Z302H4QT ata-ST4000DM000-1F2168_Z3027CXY ata-ST4000DM000-1F2168_Z3026TDT ata-ST4000DM000-1F2168_Z302H4V4 ata-ST4000DM000-1F2168_Z3027BJA
zpool list
zpool status raid0
</WRAP>
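Once the pool exists, it is worth confirming that the requested sector alignment actually took effect. These are standard OpenZFS commands, not part of the original page; a sketch assuming the pool name ''raid0'' from above:

```shell
# Confirm the pool was created with 4 KiB sectors (ashift=12)
zdb -C raid0 | grep ashift

# Quick sanity check of capacity and health
zpool get size,health raid0
```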
Transferring data from flexbuflke to flexbuflyg. I used the first 6 disks in vbs mode on flexbuflyg.

> m5copy vbs://flexbuflke/mv042* vbs://flexbuflyg/ -n {1,2,3,4,5,6}
^ mode ^ n (parallel) ^ data rate ^
| tcp | 1 | 3.0 Gbps |
| tcp | 2 | 4.5 Gbps |
| tcp | 3 | 6.0 Gbps |
| tcp | 4 | 7.5 Gbps |
| tcp | 5 | 9.0 Gbps |
| tcp | 6 | 9.0 Gbps |
+ | |||
+ | After using 6 recordings in parallel and achieving 9 Gbps writing rate, the bottleneck is the 10 Gbps line. |
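The scaling in the table fits a simple model: the first stream moves about 3.0 Gbps and each additional stream adds roughly 1.5 Gbps, until throughput saturates near 9 Gbps on the 10 Gbps link. A minimal sketch of that model (the ''rate'' helper is purely illustrative, not part of m5copy or jive5ab):

```shell
# rate N -> predicted aggregate throughput in Gbps, fitted to the
# measurements above: 1.5 * (N + 1), capped at the ~9 Gbps the
# 10 Gbps line sustains in practice
rate() {
  awk -v n="$1" 'BEGIN { r = 1.5 * (n + 1); if (r > 9.0) r = 9.0; printf "%.1f", r }'
}

for n in 1 2 3 4 5 6; do
  printf 'n=%d predicted=%s Gbps\n' "$n" "$(rate "$n")"
done
```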