[ovirt-users] oVirt 4.2.1 rc1 and upload iso to data domain test

Gianluca Cecchi gianluca.cecchi at gmail.com
Wed Jan 17 10:56:09 UTC 2018


On Wed, Jan 17, 2018 at 9:28 AM, Fred Rolland <frolland at redhat.com> wrote:

> Hi,
>
> I tested uploading an ISO to Gluster on my setup and it worked fine.
>
Ok.


> Are you seeing any other issues with your Gluster setup?
> Creating regular disks, copy/move disks to this SD?
>
> Thanks,
> Fred
>

Nothing in particular.
It is a nested environment and performance is not great, but it doesn't
have any particular problems.
At the moment I have 3 VMs defined on this SD and one of them is powered
on: I just created another 3 GB disk on it and then formatted a filesystem
on it without problems.
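
For reference, that in-guest check was roughly along these lines (just a
sketch; the /dev/vdb device name and the mount point are placeholders, not
taken from the actual session):

# inside the CentOS 6 VM, after attaching the new 3 GB disk
mkfs.ext4 /dev/vdb                  # create a filesystem on the new disk
mkdir -p /mnt/newdisk
mount /dev/vdb /mnt/newdisk
dd if=/dev/zero of=/mnt/newdisk/io_test bs=1024k count=512 conv=fdatasync
umount /mnt/newdisk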

Then I tried copying a file from the engine VM (whose disk is on the engine
gluster domain) to this VM (whose disk is on the data gluster domain).

First I create a 2 GB file on the hosted engine VM, so the only I/O is on
the engine gluster volume:

[root@ovengine ~]# time dd if=/dev/zero bs=1024k count=2048 of=/testfile
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 98.1768 s, 21.9 MB/s

real 1m38.188s
user 0m0.023s
sys 0m14.687s
[root@ovengine ~]#
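
Note that dd from /dev/zero without a sync flag partly measures the page
cache rather than the gluster volume; a more conservative variant (just a
sketch, not what I ran above) would force a flush before reporting:

time dd if=/dev/zero bs=1024k count=2048 of=/testfile conv=fdatasync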

Then I copy this file from the engine VM to the CentOS 6 VM, whose disk is
on the data gluster volume.

[root@ovengine ~]# time dd if=/testfile bs=1024k count=2048 | gzip | ssh 192.168.150.111 "gunzip | dd of=/testfile bs=1024k"
root@192.168.150.111's password:
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 41.9347 s, 51.2 MB/s
0+62205 records in
0+62205 records out
2147483648 bytes (2.1 GB) copied, 39.138 s, 54.9 MB/s

real 0m42.634s
user 0m29.727s
sys 0m5.421s
[root@ovengine ~]#
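
One caveat on that 51.2 MB/s figure: the test file is all zeros, so gzip
compresses it to almost nothing and very little data actually crosses the
network; the number mostly reflects read/write speed on the two
gluster-backed disks. A variant without compression, to also exercise the
network path, would be something like (sketch only):

time dd if=/testfile bs=1024k count=2048 | ssh 192.168.150.111 "dd of=/testfile bs=1024k"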

[root@centos6 ~]# ll /testfile
-rw-r--r--. 1 root root 2147483648 Jan 17 11:47 /testfile
[root@centos6 ~]#
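
Just to be sure the copy is intact on both sides, a quick checksum
comparison (assuming md5sum is available on both VMs) can be used:

md5sum /testfile                          # on ovengine
ssh 192.168.150.111 "md5sum /testfile"    # on the CentOS 6 VM; the two hashes should match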


And right after the command finishes, everything looks consistent from the
gluster point of view as well (I also see replication running at about
50 MB/s):

[root@ovirt01 ovirt-imageio-daemon]# gluster volume heal data info
Brick ovirt01.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

[root@ovirt01 ovirt-imageio-daemon]# gluster volume heal engine info
Brick ovirt01.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

[root@ovirt01 ovirt-imageio-daemon]#
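
For a quicker overview than the full heal listing, the per-brick status and
any split-brain entries can also be checked with something like (exact
options depend on the gluster version):

gluster volume status data                    # brick and self-heal daemon status
gluster volume heal data info split-brain     # should report no entries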

Could it be in any way related to the fact that this environment was
created on 4.0.5 and then gradually updated to 4.2.1rc1?
The detailed history:

Nov 2016 installed 4.0.5 with ansible and gdeploy
Jun 2017 upgrade to 4.1.2
Jul 2017 upgrade to 4.1.3
Nov 2017 upgrade to 4.1.7
Dec 2017 upgrade to 4.2.0
Jan 2018 upgrade to 4.2.1rc1

I had a problem enabling libgfapi, due to the connection to the gluster
volumes being of type host:volume instead of the current default
node:/volume; see here:
https://bugzilla.redhat.com/show_bug.cgi?id=1530261

just a guess...
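
As a side note on the libgfapi point, a rough way to check whether a
running VM actually uses gfapi rather than the FUSE mount is to look at its
qemu command line, and on the engine the related config key can be queried
(both lines are only illustrative; the LibgfApiSupported key name is an
assumption on my side):

ps -ef | grep qemu-kvm | grep -o 'gluster://[^ ]*'   # gfapi disks appear as gluster:// URLs
engine-config -g LibgfApiSupported                   # run on the engine VM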

Thanks,
Gianluca

