[Beowulf] 10G and rsync
Dernat Rémy
remy.dernat at umontpellier.fr
Sat Feb 22 03:02:56 PST 2020
Hi,
Did you solve your problem? Did you try copying without any encryption
layer, with something like netcat/tnc (tnc uses nc + tar + a bit of
perl)? I would test that first, while reading from /dev/zero and
writing to /dev/null to rule out any HDD issue. If the results are
good, then read from the real source and write to /dev/null, then write
to the destination disk, and finally read from the intended source and
write to the destination. Then add encryption: try a basic scp with
AES only, or any cipher mechanism supported directly by the processor
(grep aes /proc/cpuinfo).
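A rough sketch of that unencrypted baseline (shown against localhost for a
one-machine dry run; replace 127.0.0.1 with the real destination host, and
note that nc option syntax varies between netcat flavors):

```shell
# Receiver (run on the destination): listen and discard everything,
# so no disk is involved on the receiving end.
nc -l 9000 > /dev/null &

# Sender (run on the source): stream zeros over the wire, timed by dd.
dd if=/dev/zero bs=1M count=1000 | nc 127.0.0.1 9000

# Once the raw wire speed looks sane, add encryption back, e.g. scp
# with an AES cipher (check AES-NI support: grep aes /proc/cpuinfo).
# Filenames and host are illustrative:
# scp -c aes128-ctr bigfile user@destination:/dest/path/
```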
A limit of 1 Gb/s could also mean that you have multiple interfaces and
routes, but then iperf should show the same speed, unless something is
misconfigured on the source or target. I would double-check the routes
on both machines.
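For the route check, something like this on each machine (the IP and
interface name are placeholders):

```shell
# Which route and interface will carry traffic to the peer?
# (192.0.2.10 is a documentation address; substitute the peer's IP.)
ip route get 192.0.2.10

# Quick overview of all interfaces, their state and addresses:
ip -br addr

# Negotiated link speed of the interface actually in use:
ethtool eth0 | grep -i speed
```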
Otherwise, maybe I missed the solution, and that would interest me :)
Best,
Le 03/01/2020 à 01:24, David Mathog a écrit :
> On Thu, 2 Jan 2020 13:32:17 Michael Di Domenico wrote:
>> On Thu, Jan 2, 2020 at 12:44 PM David Mathog <mathog at caltech.edu> wrote:
>>> 1. Is a single large file transfer rate reasonable?
>>> 2. Ditto for several large files?
>>
>> yes, if i transfer files outside of rsync performance is reasonable
>>
>>> Are you sure there is not a patrol read ongoing on one system or the
>>> other? That can cause this sort of disk head issue.
>>
>> yes, i control both sides. the client side is totally idle and the
>> lustre system is quiet.
>
> Double checking - you queried the RAID card (if present) to see that
> it was not doing a patrol read or SMART analysis? In my experience
> SMART commands do not light the disk activity lights, so physically
> looking at the array may show no or little activity when in fact the
> disks are working quite hard.
>
>>
>>> Also it might be this "hugepage" issue:
>>> https://www.beowulf.org/pipermail/beowulf/2015-July/033282.html
>>
>> ah forgot about that one. tried it, no change
>
> Hmm. Let's see if you can take the file systems more or less out of
> the equation. Something along these lines:
>
> 1. Create 100 FIFOs with matching names on each end in a similarly
> named directory.
> 2. On the receiving machine spin out 100 processes doing:
>
> dd if=/PATH/FIFOname12 of=/dev/null &
>
> 3. On the sending side, spin out similar processes to write to the FIFOs:
>
> dd if=/dev/zero of=/path/FIFOname12 bs=8196 count=10000 &
>
> 4. Start up rsync on the directory holding the FIFOs.
>
> I never tried coercing rsync into working like that, but if it can be
> done then it emulates a storage system to storage system transfer
> without ever actually reading or writing to any file systems.
>
> Regards,
>
> David Mathog
> mathog at caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
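For reference, the FIFO procedure David describes above could be scripted
roughly like this (a local, single-machine sketch with 4 FIFOs instead of
100; the directory, block size, and counts are illustrative — on two
machines the drain loop runs on the receiver and the feed loop on the
sender, against matching FIFO names):

```shell
#!/bin/sh
# Local sketch of the FIFO test: create FIFOs, drain them into /dev/null,
# and feed them from /dev/zero, so no real file system I/O takes place.
DIR=$(mktemp -d)

for i in 1 2 3 4; do
    mkfifo "$DIR/fifo$i"
done

# "Receiving" side: read each FIFO and discard the data.
for i in 1 2 3 4; do
    dd if="$DIR/fifo$i" of=/dev/null bs=8192 2>/dev/null &
done

# "Sending" side: write a fixed amount of zeros into each FIFO.
for i in 1 2 3 4; do
    dd if=/dev/zero of="$DIR/fifo$i" bs=8192 count=1000 2>/dev/null &
done

wait          # let all dd processes finish
rm -r "$DIR"  # clean up
echo "done"
```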
--
Dernat Rémy
IT Infrastructure Engineer, CNRS
MBB Platform - ISEM Montpellier