[ovirt-users] Performance issue for GlusterFS as the block storage for VMs

Yue, Cong Cong_Yue at alliedtelesis.com
Thu Jan 15 19:30:20 UTC 2015


It is just the default configuration. What would you advise for tuning the I/O performance?
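Are volume options like the following the kind of tuning you have in mind? (Just an untested sketch; "myvol" is a placeholder for my volume name and the values are only illustrative, not recommendations.)

    # untested sketch - "myvol" and the values are placeholders, not recommendations
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.write-behind-window-size 4MB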

As for GlusterFS, I am actually using XenServer at the moment. I just mounted the Gluster volume over NFS on the Xen host and use it as an SR. Is there any way to use GlusterFS as an iSCSI target?
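Would an approach along these lines make sense? This is only a rough, untested sketch with placeholder names: keep a file-backed LUN on a FUSE mount of the Gluster volume and export it with a standard Linux iSCSI target (LIO/targetcli).

    # mount the Gluster volume on the node that will act as the iSCSI target (placeholder names)
    mount -t glusterfs node1:/myvol /mnt/gluster

    # create a file-backed LUN on the Gluster mount and export it via LIO/targetcli
    targetcli /backstores/fileio create name=lun0 file_or_dev=/mnt/gluster/lun0.img size=100G
    targetcli /iscsi create iqn.2015-01.me.example:gluster-target
    targetcli /iscsi/iqn.2015-01.me.example:gluster-target/tpg1/luns create /backstores/fileio/lun0
    # (portal and ACL setup omitted here)
    targetcli saveconfig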

Thanks,
Cong


From: Donny Davis [mailto:donny at cloudspin.me]
Sent: Thursday, January 15, 2015 11:24 AM
To: Yue, Cong; users at ovirt.org
Subject: RE: [ovirt-users] Performance issue for GlusterFS as the block storage for VMs

I see.

So, have you done any tuning for I/O performance, or are the configs straight out of the box? You also said you mounted the volume to a VM. Did you mount it with the native gluster client or use the built-in NFS?
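In other words, which of these two mount types did you use? (Hostname and volume name below are just placeholders.)

    # native Gluster (FUSE) client
    mount -t glusterfs node1:/myvol /mnt/vol

    # Gluster's built-in NFS server (NFSv3)
    mount -t nfs -o vers=3 node1:/myvol /mnt/vol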

Donny

From: Yue, Cong [mailto:Cong_Yue at alliedtelesis.com]
Sent: Thursday, January 15, 2015 11:40 AM
To: Donny Davis; users at ovirt.org
Subject: RE: [ovirt-users] Performance issue for GlusterFS as the block storage for VMs

I am using Iometer (http://www.iometer.org/) to test IOPS from one of the VMs.
The IOPS tests are meant to measure block-level performance rather than real file transfers.

In my environment, the two Gluster nodes replicate over 10GbE, and I mount the volume into the VM. I tested the performance with both SAS and SSD backing disks.
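For reference, the two-node replicated volume was created roughly along these lines (hostnames, volume name, and brick paths below are placeholders rather than my actual setup):

    gluster peer probe node2
    gluster volume create myvol replica 2 node1:/bricks/brick1 node2:/bricks/brick1
    gluster volume start myvol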
These are my current results with Iometer:

All values below are IOPS.

Test | Application                             | Block Size | R/W %   | Rand/Seq % | iSCSI HDD | iSCSI SSD | Gluster HDD | Gluster SSD
-----+-----------------------------------------+------------+---------+------------+-----------+-----------+-------------+------------
   1 | Web File Server                         | 4K         | 95%/5%  | 75%/25%    |    195.28 |   2400.63 |      208.50 |      646.75
   2 | Web File Server                         | 8K         | 95%/5%  | 75%/25%    |    193.97 |   2225.73 |      179.89 |      649.93
   3 | Web File Server                         | 64K        | 95%/5%  | 75%/25%    |    180.50 |   1055.05 |      158.28 |      402.15
   4 | Database Online Transaction Processing  | 8K         | 70%/30% | 100%/0%    |    163.60 |   2415.90 |      132.13 |      308.46
   5 | Exchange Email                          | 4K         | 67%/33% | 100%/0%    |    167.03 |   2685.33 |      145.92 |      294.56
   6 | OS Drive                                | 8K         | 70%/30% | 100%/0%    |    163.60 |   2407.02 |      146.11 |      310.82
   7 | Decision Support System                 | 1M         | 100%/0% | 100%/0%    |     74.24 |    207.42 |       81.14 |      112.20
   8 | File Server                             | 8K         | 90%/10% | 75%/25%    |    191.20 |   2102.32 |      359.54 |      526.86
   9 | Video on Demand                         | 512K       | 100%/0% | 100%/0%    |    100.32 |    327.19 |      136.66 |      162.48
  10 | Traffic Simulation                      | 8K         | 50%/50% | 75%/25%    |    238.28 |   1923.91 |      301.05 |      297.70
  11 | Web Server Logging                      | 8K         | 0%/100% | 0%/100%    |   3488.47 |   3644.33 |      290.33 |      282.04
  12 | SQL Server Logging                      | 64K        | 0%/100% | 0%/100%    |   1423.29 |   1375.33 |      182.67 |      168.64
  13 | OS Paging                               | 64K        | 90%/10% | 0%/100%    |   1215.74 |   1211.01 |      381.17 |      355.99
  14 | Media Streaming                         | 64K        | 98%/2%  | 0%/100%    |   1350.96 |   1365.22 |      457.28 |      455.49


The issues I found with GlusterFS are:

-          It does not preserve the characteristics of the underlying devices: SAS is strong for sequential access and SSD is strong for random access, but those differences largely disappear through Gluster.

-          In some cases, especially with SSD, the performance is quite poor.

Thanks,
Cong


From: Donny Davis [mailto:donny at cloudspin.me]
Sent: Thursday, January 15, 2015 10:27 AM
To: Yue, Cong; users at ovirt.org
Subject: RE: [ovirt-users] Performance issue for GlusterFS as the block storage for VMs

Do you have any metrics to give an idea of the difference? I am using NFS right now and am migrating to Gluster. I have the Gluster system up, and it seems to provision disks faster than my NFS, but I haven't used any real measurement tools to get actual numbers; this is all perceived.

Do you have an operational Gluster setup?
And what are you using right now?

Donny D
cloudspin.me

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Yue, Cong
Sent: Thursday, January 15, 2015 10:57 AM
To: users at ovirt.org
Subject: [ovirt-users] Performance issue for GlusterFS as the block storage for VMs

Hi

I have a question about whether GlusterFS is a suitable solution for VM block storage.
GlusterFS offers good fault tolerance and scalability, but in my tests the IOPS seem quite poor. Some blog posts even report worse performance than plain NFS.
Should I use iSCSI + DRBD for VM block storage instead?
Can somebody give me some advice on this?
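To be concrete, by "block storage for VMs" I mean carving VM disk images out of the shared storage, e.g. something like this (placeholder names and sizes, only to illustrate):

    # a VM disk image placed on a FUSE mount of the Gluster volume
    qemu-img create -f qcow2 /mnt/gluster/vm1.qcow2 40G

    # or, with a QEMU build that has GlusterFS support, accessed via libgfapi instead of FUSE
    qemu-img create -f qcow2 gluster://node1/myvol/vm1.qcow2 40G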

Thanks,
Cong

________________________________
This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's e-mail System Administrator.
