10 gbit i/o rates with usrp tpsd processing
A test was done to measure the i/o rates
when reading data across a 10 gbit link from a remote disc.
The setup was:
The following links show the iostat output (run on gpuserv3)
and the nfsiostat output (run on gpuserv1).
- clpGpu program was run on gpuserv1.
- The raw clp data was located on gpuserv3.
- [phil@gpuserv3 GPUSERV3]$ df -kh .
    Size  Used Avail Use% Mounted on
     40T   39T  1.4T  97% /export/GPUSERV3
- The filesystem was mounted on gpuserv1 as:
    Size  Used Avail Use% Mounted on
     40T   39T  1.4T  97%
- gpunet3, the 10 gbit interface on gpuserv3, has the ip address 10.10.10.3
- The ethernet links available between gpuserv1 and gpuserv3:
  - 10 gbit link (10.10.10.3)
  - 1 gbit link (10.10.11.5)
- The topside processing was run on gpuserv1 using the
t1193 01feb19 usrp dataset on gpuserv3.
- For t1193 01feb19, 50 seconds of clp data were followed
  by 10 seconds of tpsd data.
- tpsd spectra were computed and integrated for 10 seconds.
- 60 seconds of data were read to get the 10 secs of topside data.
- Each 1 second file contained 100007168 bytes of data.
- A 10 second tpsd integration required reading 6e9 bytes of data.
- There were 39 Tbytes of raw usrp data for this experiment.
When starting the program, portions of the data set were used
that guaranteed that the data was not in gpuserv3's
memory cache. The program was left running long enough to
process 1 hour's worth of data (360 Gbytes). This far exceeded
the 64 Gbytes of memory in gpuserv1 and gpuserv3, so there was no
chance that all of the data came out of the disc memory cache.
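As a quick cross-check (my arithmetic, not part of the original test), the data volumes above are consistent with the 1 second file size:

```python
# Data-volume arithmetic for the test setup (figures from the notes above).
BYTES_PER_1SEC_FILE = 100_007_168      # each 1 second usrp file

# A 10 second tpsd integration reads 60 seconds of raw data (50 s clp + 10 s tpsd):
bytes_per_integration = 60 * BYTES_PER_1SEC_FILE
print(f"bytes per 10 s integration: {bytes_per_integration:.2e}")  # ~6e9 bytes

# 1 hour's worth of data, used to defeat the 64 Gbyte memory caches:
bytes_per_hour = 3600 * BYTES_PER_1SEC_FILE
print(f"1 hour of data: {bytes_per_hour / 1e9:.0f} Gbytes")        # ~360 Gbytes
```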
- After reading in the data, it took 4.5 seconds to process it
and then write it out.
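For scale (again my arithmetic, using the i/o rate reported by iostat below), reading one integration's worth of data takes longer than processing it, so the test is i/o bound:

```python
# Compare the time to read one integration's worth of data with the time
# to process and write it (sizes and rates are the figures in these notes).
bytes_per_integration = 6e9     # 60 s of raw data per 10 s tpsd integration
io_rate = 500e6                 # ~500 Mbytes/sec observed over the 10 gbit link
process_time = 4.5              # seconds to process and write one integration

read_time = bytes_per_integration / io_rate
print(f"read: {read_time:.0f} s, process+write: {process_time} s")
```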
- iostat was run on gpuserv3 (the location of the disc) while
the processing was performed to monitor the disc i/o rate
- iostat -ytmhd sda1 30 1000000 > sda1.log
- nfsiostat was run on gpuserv1 for the remotely mounted
filesystem to see the network throughput.
- iostat sda1 run on gpuserv3
- This shows the disc i/o rate we were reading from. They are
30 second averages.
- The processing started around 10:54.
- In 1 or 2 minutes it built up to 450-500 Mbytes/second.
- The processing stopped around 11:23
- When the program stopped, the io/rate went to 0.
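For reference (my arithmetic, not a measurement), the observed rate can be compared with the raw 10 gbit line rate:

```python
# Fraction of the raw 10 gbit line rate achieved by the observed disc i/o rate.
line_rate = 10e9 / 8            # 10 gbit/sec ~= 1.25 Gbytes/sec, ignoring overhead
for observed in (450e6, 500e6): # bytes/sec, from the 30 second iostat averages
    print(f"{observed / 1e6:.0f} Mbytes/sec = {observed / line_rate:.0%} of line rate")
```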
- nfsiostat 10 1000000
/net/gpunet3/export/GPUSERV3/usrpdata run on gpuserv1
- This has the network statistics for the accesses to the
  remotely mounted filesystem.
- On the read lines, the kB/sec column shows the i/o rate.
- It had values of 440000-500000 kB/sec, i.e. 440-500 Mbytes/sec
  (these were 10 second averages).
- The kB/op column shows that the average transfer size was 1 Mbyte.
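Since nfsiostat reports both kB/sec and kB/op, the implied nfs read-op rate is just their ratio (a sketch using the round numbers above):

```python
# nfsiostat reports kB/sec and kB/op on the read line; their ratio is ops/sec.
kb_per_sec = 500_000    # ~500 Mbytes/sec at peak (10 second averages)
kb_per_op = 1_000       # average transfer size ~1 Mbyte per read op
print(f"read ops/sec: {kb_per_sec / kb_per_op:.0f}")
```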
- Data can be read from a remote disc at up to 500 Mbytes/sec
  over a 10 gbit link.
- The transfer size was typically 1 Mbyte.
- I also tried the test using 10.10.11.5 .. the 1 gbit
  address on gpuserv3.
- I got the same 450-500 Mbytes/sec as with the 10 gbit link.
- When I did df -kh on the mounted filesystem I saw that it
  was still using the 10 gbit link (10.10.10.3).