===== Flexbuff and Transfer machines =====
Creating a raid-z1 pool from 6 disks with ZFS (ashift=12 aligns the pool to 4 KiB physical sectors).
<WRAP center round box 60%>
zpool create -o ashift=12 -m /mnt/raid0 raid0 raidz ata-ST4000DM000-1F2168_Z30280A0 ata-ST4000DM000-1F2168_Z302H4QT ata-ST4000DM000-1F2168_Z3027CXY ata-ST4000DM000-1F2168_Z3026TDT ata-ST4000DM000-1F2168_Z302H4V4 ata-ST4000DM000-1F2168_Z3027BJA \\
zpool list \\
zpool status raid0
</WRAP>
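To check that the alignment took effect, the ashift of each vdev can be read back; a minimal sketch, assuming the zdb tool that ships with the zfs utilities is available:

> zdb -C raid0 | grep ashift   # each vdev should report ashift: 12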
+ | |||
+ | Transferring data from flexbuflke to flexbuflyg. I used the first 6 disks in vbs mode in flexbuflyg. | ||
+ | |||
+ | > m5copy vbs://flexbuflke/mv042* vbs://flexbuflyg/ -n {1,2,3,4,5,6} | ||
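To verify the copy, the recordings can be listed on the destination side; a minimal sketch, assuming the vbs_ls helper from the jive5ab tools is installed on flexbuflyg and accepts a name pattern:

> vbs_ls mv042*   # assumption: vbs_ls is available and matches recordings by pattern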
^ mode ^ n (parallel streams) ^ data rate (Gbps) ^
After using 6 recordings in parallel and achieving a 9 Gbps write rate, the bottleneck is the 10 Gbps line.

Steps:
  * Create the pool of disks with zpool.
  * Move the data onto the disks.
  * Test #1: unmount the disks, reshuffle them and mount them again on the same machine. The pool of disks was successfully detected; see the sketch after this list.
  * Test #2: mount the disks in another machine and detect the raid there. The pool of disks was detected OK, but in read-only mode, possibly due to different zfs versions on the two machines.
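Both tests boil down to zpool export/import; a minimal sketch of the commands, assuming the pool name raid0 from above (zpool identifies member disks by their on-disk labels, so the physical slot order does not matter):

<WRAP center round box 60%>
# Test #1: release the pool, reshuffle the disks, then re-import on the same machine \\
zpool export raid0 \\
zpool import raid0 \\
# Test #2: on the other machine, scan the by-id device paths and import read-only \\
zpool import -d /dev/disk/by-id -o readonly=on raid0
</WRAP>

A read-only import on the second machine is consistent with that machine running an older zfs that lacks some feature flags used by the pool; such pools can often still be imported read-only, and matching the zfs versions on both machines should allow a normal read-write import.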