===== Flexbuff and Transfer machines =====
| + | |||
| + | Creating raid-z1 with 6 disks and zfs mode. | ||
<WRAP center round box 60%>
zpool create -o ashift=12 -m /mnt/raid0 raid0 raidz ata-ST4000DM000-1F2168_Z30280A0 ata-ST4000DM000-1F2168_Z302H4QT ata-ST4000DM000-1F2168_Z3027CXY ata-ST4000DM000-1F2168_Z3026TDT ata-ST4000DM000-1F2168_Z302H4V4 ata-ST4000DM000-1F2168_Z3027BJA \\
zpool list \\
zpool status raid0
</WRAP>
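
To verify that the pool was created with the intended 4K sector alignment (ashift=12) and mount point, something like the following should work (a sketch; zdb output formatting can vary between ZFS versions):
<WRAP center round box 60%>
zdb -C raid0 | grep ashift \\
zfs get mountpoint raid0
</WRAP>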
Running 6 recordings in parallel achieves a 9 Gbps write rate, so the bottleneck is the 10 Gbps line.
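
A comparable parallel-write load can be approximated directly on the pool with several concurrent dd streams (a rough sketch only; the file names and sizes below are arbitrary, and writing zeros with dd is not equivalent to the real recording path):
<WRAP center round box 60%>
for i in 1 2 3 4 5 6; do dd if=/dev/zero of=/mnt/raid0/ddtest$i bs=1M count=10240 & done; wait \\
zpool iostat -v raid0 1
</WRAP>
Running zpool iostat in a second terminal while the dd streams are active shows the per-disk write bandwidth.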
| + | |||
| + | Steps: | ||
| + | * Create the pool of disks with zpool. | ||
| + | * Move data in the disks. | ||
| + | * First test #1: unmount the disks, reshuffle them and mount them again. Pool of disks succesfully detected. | ||
| + | * First test #2: mount the disks in another machine and detect the raid there. Pool of disks detected OK, but in read-mode only. Possible error due to different zfs versions. | ||
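
The move-to-another-machine test corresponds to exporting the pool on the first host and importing it on the second. A minimal sketch, assuming the disks appear under /dev/disk/by-id on the second machine:
<WRAP center round box 60%>
zpool export raid0 \\
zpool import -d /dev/disk/by-id raid0 \\
zpool status raid0
</WRAP>
If the second machine runs an older ZFS release that does not support all of the pool's feature flags, the import may only succeed read-only (zpool import -o readonly=on raid0), which matches the behaviour seen in test #2.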
| + | |||