[Beowulf] NFS over RDMA performance confusion
Vincent Diepeveen
diep at xs4all.nl
Thu Sep 13 05:52:31 PDT 2012
On Sep 13, 2012, at 2:34 PM, Joe Landman wrote:
> On 09/13/2012 07:52 AM, holway at th.physik.uni-frankfurt.de wrote:
>
> [...]
>
>> If I set up a single machine to hammer the fileserver with IOzone I
>> see something like 50,000 IOPS, but if all four machines are hammering
>> the filesystem concurrently we get it up to 180,000 IOPS.
>
> I wouldn't recommend IOzone for this sort of testing. It's not a very
> good load generator, and it has a tendency to report things which are
> not actually seen at the hardware level. I noticed this some years ago
> when running some of our benchmark testing on these units: an entire
> IOzone benchmark completed with very few activity lights going on the
> disks, which suggested the test was running entirely in cache.
>
> Use fio.
>
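For random-read IOPS over NFS, something along these lines keeps the
client page cache out of the picture (a minimal sketch; the mount
point, file size and queue depths are placeholders to adjust for your
setup):

  fio --name=nfs-randread --filename=/mnt/nfs/testfile \
      --ioengine=libaio --direct=1 --rw=randread --bs=16k \
      --size=8g --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting

With --direct=1 the reads bypass the page cache, so the reported IOPS
should be much closer to what the server and the wire actually deliver.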
> Second, are the disks behind the NFS/ZFS server solid state, ram disk,
> or spinning rust?
>
>> Can anyone tell me what might be the bottleneck on the single
>> machines? Why can I not get 180,000 IOPS when running on a single
>> machine?
>
> 50k IOPs x 16k/IOP = 819 MB/s
10 Gbit TCP limit, I'd guess.
>
> 180k IOPs x 16k/IOP = 2949 MB/s (close to pragmatic limits on QDR)
>
40 Gbit QDR, yeah.
The actual data goes through RDMA here, and maybe through TCP when
using MySQL.
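Writing the arithmetic out (a throwaway Python sketch; the 16 KiB block
size and the ballpark link limits are assumptions, not measurements):

  # IOPS at a 16 KiB block size -> bandwidth, vs. rough wire limits
  block = 16 * 1024                      # bytes per I/O
  for iops in (50000, 180000):
      print(iops, "IOPS ->", round(iops * block / 1e6), "MB/s")
  # 50000 IOPS  ->  819 MB/s, roughly where NFS over a 10 Gbit TCP
  #                 path tends to top out
  # 180000 IOPS -> 2949 MB/s, close to the practical ceiling of QDR IB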
> Some observations ... these don't sound like disk subsystems. A 15k
> RPM drive will give you ~300 IOPs. To get 50k IOPs, you would need 167
> disk drives per machine, operating in a best-case performance scenario
> (RAID0 or JBOD). To get 180k IOPs, you'd need 600x 15k RPM disks. I am
> guessing you don't have that.
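Same arithmetic for the spindle counts (again just a sketch, assuming
~300 IOPs per 15k RPM drive as above):

  # spindles needed at ~300 IOPS per 15k RPM drive
  per_disk = 300
  for iops in (50000, 180000):
      print(iops, "IOPS needs ~", (iops + per_disk - 1) // per_disk, "disks")
  # -> 167 and 600 disks, so these rates are almost certainly coming
  #    from cache or SSD rather than spinning rust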
>
> Are you asking why a single machine cannot fill your QDR bandwidth?
>
> I'd recommend running traces on the individual machines to see where
> things are getting lost. Once you have the traces, post them, and see
> if people can help.
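To see where the requests actually land, something as simple as

  iostat -x 1      # per-device utilisation on server and clients
  nfsstat -c       # NFS client op counts, before and after a run

captured while the benchmark runs would already show whether the I/O
ever reaches the disks or stays in cache (just the obvious candidates;
blktrace gives more detail if you need it).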
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web : http://scalableinformatics.com
> http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin
> Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf