This wiki is not maintained! Do not use this when setting up AuScope experiments!


Flexbuff and Transfer machines

Creating a raidz1 pool from 6 disks with ZFS.

zpool create -o ashift=12 -m /mnt/raid0 raid0 raidz ata-ST4000DM000-1F2168_Z30280A0 ata-ST4000DM000-1F2168_Z302H4QT ata-ST4000DM000-1F2168_Z3027CXY ata-ST4000DM000-1F2168_Z3026TDT ata-ST4000DM000-1F2168_Z302H4V4 ata-ST4000DM000-1F2168_Z3027BJA
zpool list
zpool status raid0
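As a sanity check on the pool size: raidz1 spends one disk's worth of space on parity, so six 4 TB drives (the ST4000DM000 drives listed above) give roughly five drives of usable capacity. A minimal sketch of that arithmetic:

```shell
# raidz1 keeps one disk of parity: usable = (disks - 1) * disk size
disks=6
disk_tb=4
usable_tb=$(( (disks - 1) * disk_tb ))
echo "raidz1 usable capacity: ~${usable_tb} TB (raw, before ZFS overhead)"
```

`zpool list` will report slightly less than this because of metadata and allocation overhead.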

Transferring data from flexbuflke to flexbuflyg. I used the first 6 disks in vbs mode in flexbuflyg.

m5copy vbs:flexbuflke/mv042* vbs:flexbuflyg/ -n {1,2,3,4,5,6}
Throughput vs. number of parallel TCP streams:

tcp 1 3.0 Gbps
tcp 2 4.5 Gbps
tcp 3 6.0 Gbps
tcp 4 7.5 Gbps
tcp 5 9.0 Gbps
tcp 6 9.0 Gbps
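The rates above fit a simple model: 3.0 Gbps for a single stream, plus 1.5 Gbps per additional stream, capped at the 9 Gbps ceiling of the link. A sketch of that fit (the per-stream figure and the ceiling are read off the table above, not independently measured):

```shell
# model: rate = 3.0 Gbps + 1.5 Gbps per extra stream, capped at 9.0 Gbps
# (work in tenths of Gbps to stay in integer shell arithmetic)
for n in 1 2 3 4 5 6; do
  tenths=$(( 30 + (n - 1) * 15 ))
  [ "$tenths" -gt 90 ] && tenths=90   # observed ceiling on the 10 GbE link
  echo "tcp $n $(( tenths / 10 )).$(( tenths % 10 )) Gbps"
done
```

The loop reproduces the measured table line for line, including the flattening at 5 and 6 streams.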

After transferring 6 recordings in parallel and reaching a 9 Gbps write rate, the bottleneck is the 10 Gbps line.

Steps:

- Create the pool of disks with zpool.
- Move data onto the disks.
- Test #1: unmount the disks, reshuffle them and mount them again. Pool of disks successfully detected.
- Test #2: mount the disks in another machine and detect the raid there. Pool of disks detected OK, but in read-only mode. Possibly an error due to different ZFS versions.
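A sketch of the export/import sequence implied by test #2. The pool name raid0 comes from the create command above; zpool export/import are standard ZFS commands, but the exact invocation used on these machines was not recorded, so treat this as an outline rather than the procedure that was run:

```shell
# On the original machine: cleanly export the pool before pulling the disks
zpool export raid0

# On the second machine: import the pool from the attached disks.
# A pool written by a newer ZFS than the importing host supports may only
# be importable read-only:
zpool import raid0 || zpool import -o readonly=on raid0
```

Exporting first avoids the "pool was in use on another system" warning on import; without it, `zpool import -f` would be needed.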

/home/www/auscope/opswiki/data/attic/flexbuff.1585257134.txt.gz · Last modified: 2020/03/26 21:12 by Guifre