====== Transferring Data to Bonn ======
**Update: Transfers directly from mounted diskpacks have significantly higher rates of failure. Staging data to RAID is strongly recommended!**
**Update: The Mark5B upgrade to SDK9.2 has adversely affected fuseMk5 performance. It is not uncommon for the program to crash, sometimes with a kernel panic during copy operations. A workaround script using DIMino has been put together and is in ~oper/recovery/, but it is still in development.**
* Use a VNC session to run the transfers - the hobart.phys.utas.edu.au:1 session is normally a good option.
* Open four terminal windows on the Mark5 unit the data will be sent from
* In the first terminal, make sure that neither fuseMk5 nor DIMino is running:
<code>
ps -ef | grep DIM
ps -ef | grep fuse
</code>
and that another transfer isn't already running:
<code>
ps -ef | grep tsuna
</code>
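A quick way to run all three checks at once (a minimal sketch; pgrep -fl matches against the full command line, and no output means nothing is running):
<code>
for name in DIM fuse tsuna; do
    pgrep -fl "$name"
done
</code>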
* If fuseMk5A is running and the wrong module is mounted, dismount it with:
<code>
fusermount -u /mnt/diskpack
</code>
* Then use fuseMk5 to mount the module:
<code>
fuseMk5 --bank=0 --verbose --cachesize=128000000 /mnt/diskpack
</code>
This will mount Bank A (set --bank=1 for Bank B) at /mnt/diskpack. Wait for a message saying ''.... Registering fuseMk5A to FUSE with: ....''; it can take a while. Background the process with Ctrl-z followed by ''bg''.
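If the transfer is being scripted, the mount can be started in the background and polled until it actually appears. A sketch, assuming a standard /proc/mounts (the 5-second poll interval is arbitrary):
<code>
fuseMk5 --bank=0 --verbose --cachesize=128000000 /mnt/diskpack &
# wait until the FUSE mountpoint shows up before touching it
until grep -qs ' /mnt/diskpack ' /proc/mounts; do
    sleep 5
done
</code>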
* In a second terminal, change to the mounted directory and make sure the data are there:
<code>
cd /mnt/diskpack
ls
</code>
* Start the tsunami daemon to send the data:
<code>
tsunamid --port=52100 *
</code>
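If the module also holds data from other experiments, the file list given to tsunamid is just a shell glob, so a narrower pattern can be used to serve only the current experiment (r1485 here is an example name):
<code>
tsunamid --port=52100 r1485*
</code>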
* In the third terminal window, log on to the computer at Bonn that will receive the data and change to the directory where the data will be sent:
<code>
ssh evlbi@
cd /data3/r1/hobart12/r1485/
</code>
The [[http://www3.mpifr-bonn.mpg.de/cgi-bin/showtransfers.cgi|Bonn web page]] shows the available servers and space (194.94.199.163 = io10, 194.94.199.164 = io3, 194.94.199.166 = sneezy2).
* Start tsunami and set up the transfer:
<code>
tsunami
set rate 300m
connect 131.217.63.175 52100
get *
</code>
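For unattended transfers it may be possible to drive the client non-interactively. A sketch, assuming your tsunami build reads its commands from standard input (verify this on your build before relying on it):
<code>
tsunami <<EOF
set rate 300m
connect 131.217.63.175 52100
get *
quit
EOF
</code>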
* In the fourth terminal window, create an empty file with a name describing the start time of the transfer, rate, etc.:
<code>
touch 20110525033000_r1485_Hb_Bonn_300m_52100_sneezy2_start
</code>
and send it to the web server:
<code>
ncftpput ftp.mpifr-bonn.mpg.de /incoming/geodesy/transfers 20110525033000_r1485_Hb_Bonn_300m_52100_sneezy2_start
</code>
This puts an entry on the [[http://www3.mpifr-bonn.mpg.de/cgi-bin/showtransfers.cgi|Bonn web page]] to say there's a transfer under way.
* When the transfer finishes, send a stop message:
<code>
touch 20110525033000_Hb_stop
ncftpput ftp.mpifr-bonn.mpg.de /incoming/geodesy/transfers 20110525033000_Hb_stop
</code>
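To avoid typing the timestamp by hand, the marker name can be generated from the current UT time. A sketch (the experiment, station, rate and server fields are examples and must match your transfer):
<code>
STAMP=$(date -u +%Y%m%d%H%M%S)
START=${STAMP}_r1485_Hb_Bonn_300m_52100_sneezy2_start
touch "$START"
ncftpput ftp.mpifr-bonn.mpg.de /incoming/geodesy/transfers "$START"
</code>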
It is also possible (and preferable) to export the data onto a RAID first. Currently there are five RAIDs which can be used for temporary storage at Hobart: Vortex, Cornucopia, Jupiter (which contains two separate RAIDs of 6 and 12 TB) and Magilla. These can be NFS-mounted on the Mark5 machine (as root) with:
<code>
mount vortex.phys.utas.edu.au:/exports/vortex_internal/ /mnt/vortex
</code>
Once mounted, the data can be copied with:
<code>
cp /mnt/diskpack/r1485* /mnt/vortex/r1485hb/
</code>
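Given the failure rates noted above, a resumable copy may be safer than plain cp. A sketch, assuming rsync is installed on the Mark5 unit (--partial keeps partially transferred files so an interrupted copy can be resumed):
<code>
mkdir -p /mnt/vortex/r1485hb
rsync -av --partial /mnt/diskpack/r1485* /mnt/vortex/r1485hb/
</code>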
Once fully copied, please check that the data have not been corrupted in the transfer. ssh into the NFS host as observer and run:
<code>
./directory2list /exports/vortex_internal/r1485hb Mark5B-256-16-1
</code>
The output to the screen will be a list of all the scans with their start MJD/time and a summary indicating whether problems were found. If bad scans are found, a summary at the end will warn you and a list of the bad scans will be produced.
After a transfer to Bonn completes, you should run
<code>
directory2filelist /data3/r1/hobart12/r1485/ Mark5B-256-16-1
</code>
on the receiving machine at Bonn to check for errors in transmission.
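It is worth keeping a record of this check alongside the data; the output can be sent to the screen and a log file at the same time (the log filename is an example):
<code>
directory2filelist /data3/r1/hobart12/r1485/ Mark5B-256-16-1 | tee r1485_check.log
</code>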