how to put or conditions in filters in web admin gui
by Gianluca Cecchi
Hello,
environment is 4.3.6.
Suppose I'm in the Web Admin GUI in Storage --> Disks and I want to display
only the disks matching "pattern1" together with the disks matching
"string2" (an "or" condition), limiting output to these two conditions. How
can I do it?
I tried some combinations without success
BTW: the "and" condition also seems not to work.
Thanks,
Gianluca
4 years, 5 months
oVirt Hyperconverged disaster recovery questions
by wodel youchi
Hi,
oVirt Hyperconverged disaster recovery uses georeplication to replicate the
volume containing the VMs.
What I know about georeplication is that it is an asynchronous replication.
My questions are :
- How is the replication of the VMs done? Are only the changes synchronized?
- What is the interval of this replication? Can this interval be configured,
taking into consideration the bandwidth of the replication link?
- How can the RPO be measured in the case of a georeplication?
Regards.
4 years, 5 months
Re: Cannot enable maintenance mode
by Bruno Martins
Hello guys,
No ideas for this issue?
Thanks for your cooperation!
Kind regards,
-----Original Message-----
From: Bruno Martins <bruno.o.martins(a)gfi.world>
Sent: 29 de setembro de 2019 16:16
To: users(a)ovirt.org
Subject: [ovirt-users] Cannot enable maintenance mode
Hello guys,
I am unable to put a host from a two-node cluster into maintenance mode in order to remove it from the cluster afterwards.
This is what I see in engine.log:
2019-09-27 16:20:58,364 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9, Job ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom Event ID: -1, Message: Host CentOS-H1 cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: Non interactive user (User: admin).
The host has been rebooted multiple times. vdsClient shows no VMs running.
What else can I do?
Kind regards,
Bruno Martins
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5FJWFW7GXN...
4 years, 5 months
Incremental backup using ovirt api
by smidhunraj@gmail.com
Hi,
I tried to take an incremental backup of a VM using this script:
public function downloadDiskIncremental() {
    $data = array();
    $xmlStr = "<disk_attachment>
                 <disk id='1371afdd-91d4-4bc7-9792-0efbf6bbd1c9'>
                   <backup>incremental</backup>
                 </disk>
               </disk_attachment>";
    $curlParam = array(
        "url"    => "vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments",
        "method" => "POST",
        "data"   => $xmlStr,
    );
    // $curlParam is presumably handed to a local curl wrapper that
    // performs the actual request against the engine API.
}
But it is throwing this error:
Array ( [status] => error [message] => For correct usage, see: https://ovirt.bobcares.com/ovirt-engine/api/v4/model#services/disk-attach...
Please help me with this issue....
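One possible cause: a POST to .../diskattachments creates a new attachment, while enabling incremental backup on an already-attached disk is an update of the existing attachment. This is an assumption based on the v4 API model the error points at; verify against your engine's /api/v4/model. A hedged sketch of such a request, with placeholder IDs:

```xml
<!-- PUT /ovirt-engine/api/vms/{vm_id}/diskattachments/{disk_id}
     (hypothetical IDs; note the backup element is nested inside disk) -->
<disk_attachment>
  <disk>
    <backup>incremental</backup>
  </disk>
</disk_attachment>
```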
4 years, 5 months
Re: Super Low VM disk IO via Shared Storage
by Amit Bawer
On Tue, Oct 1, 2019 at 12:49 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
wrote:
> Thank you very much Amit,
>
>
>
> I hope the results of the suggested tests allow us to improve the speed for
> the specific IO test case as well.
>
>
>
> Apologies for not being more clear, but I was referring to changing mount
> options for storage where SHE also runs. It cannot be put in Maintenance
> mode since the engine is running on it.
> What to do in this case? It's clear that I need to power it down, but where
> can I then change the settings?
>
You can see a similar question about changing the mnt_options of the hosted
engine, and its answer, here [1]:
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
>
>
> Kindly awaiting your reply.
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
>
>
>
>
>
>
> *From: *Amit Bawer <abawer(a)redhat.com>
> *Date: *Saturday, 28 September 2019 at 20:25
> *To: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
> *Cc: *Tony Brian Albers <tba(a)kb.dk>, "hunter86_bg(a)yahoo.com" <
> hunter86_bg(a)yahoo.com>, "users(a)ovirt.org" <users(a)ovirt.org>
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
>
>
>
>
> On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
> wrote:
>
> Hi oVirt gurus,
>
>
>
> Thanks to Tony, who pointed me to the discovery process; the performance of
> the IO seems greatly dependent on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
> *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> The dsync flag tells dd to ignore all buffers and caches (except certain
> kernel buffers) and write the data physically to the disk before writing
> further. According to a number of sites I looked at, this is the way to test
> server latency with regard to IO operations. The difference in performance is
> huge, as you can see (below I have added results from tests with 4k and 8k blocks)
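The dsync runs above can also be read as per-write commit latency; a small Python sketch (using only the figures quoted in this thread; the helper name is made up for illustration) makes the conversion:

```python
# Convert dd's dsync results into per-write commit latency: with
# oflag=dsync every block must reach stable storage before the next
# write starts, so elapsed time / number of writes is the latency.
def sync_write_latency_ms(bytes_copied, seconds, block_size):
    writes = bytes_copied / block_size
    return seconds / writes * 1000.0

# Figures from the bs=512 runs quoted above:
#   cached run: 51200000 bytes in 0.108962 s (470 MB/s)
#   dsync run:  51200000 bytes in 322.314 s  (159 kB/s)
print(sync_write_latency_ms(51_200_000, 322.314, 512))   # ~3.22 ms per write
print(sync_write_latency_ms(51_200_000, 0.108962, 512))  # ~0.001 ms per write
```

Roughly 3 ms per synchronous 512-byte write is plausible for an NFS round trip plus a stable-storage commit, which is why the dsync numbers diverge so sharply from the cached ones.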
>
>
>
> Still, a certain software component we run tests with writes data in this
> or a similar way, which is why I got this complaint in the first place.
>
>
>
> Here is my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
> *Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>
>
>
> Taking into account your network's configured MTU [1] and Linux version [2],
> you can tune the wsize and rsize mount options.
>
> Editing mount options can be done from Storage->Domains->Manage Domain
> menu.
>
>
>
> [1] https://access.redhat.com/solutions/2440411
>
> [2] https://access.redhat.com/solutions/753853
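As a hedged illustration only (safe values depend on the NetApp export, client kernel, and network; see [1] and [2] above), larger windows in the Manage Domain mount options field might look like:

```
rw,vers=4.0,rsize=1048576,wsize=1048576,soft,timeo=600,retrans=6
```

The 1 MiB rsize/wsize values here are an assumption mirroring the NFSv3 mount shown later in this thread, not a recommendation for this specific setup.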
>
>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer <abawer(a)redhat.com>
> *Cc: *Tony Brian Albers <tba(a)kb.dk>, "hunter86_bg(a)yahoo.com" <
> hunter86_bg(a)yahoo.com>, "users(a)ovirt.org" <users(a)ovirt.org>
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised by the results from Tony's test.
>
> We also have one setup with Gluster based NFS, and I will run tests on
> those as well.
>
> Sent from my iPhone
>
>
>
> On 25 Sep 2019, at 14:18, Amit Bawer <abawer(a)redhat.com> wrote:
>
>
>
>
>
> On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers <tba(a)kb.dk> wrote:
>
> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --------------------------snip----------------------
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 4096000000 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real 0m18.171s
> user 0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --------------------------snip----------------------
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.16.216.41)
>
>
>
> It's worth comparing these mount options with the slow shared NFSv4 mount.
>
>
>
> Window size tuning can be found at the bottom of [1]; although it relates to
> NFSv3, it could be relevant to v4 as well.
>
> [1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
>
>
>
> connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
> SATA disks in RAID10. NFS server is running CentOS 7.6.
>
> Maybe you can get some inspiration from this.
>
> /tony
>
>
>
> On Wed, 2019-09-25 at 09:59 +0000, Vrgotic, Marko wrote:
> > Dear Strahil, Amit,
> >
> > Thank you for the suggestion.
> > Test result with block size 4096:
> > Network storage:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
> >
> > Local storage:
> >
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> > 10:38
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
> >
> > As Amit suggested, I am also going to execute same tests on the
> > BareMetals and between BareMetal and NFS to compare results.
> >
> >
> > — — —
> > Met vriendelijke groet / Kind regards,
> >
> > Marko Vrgotic
> >
> >
> >
> >
> > From: Strahil <hunter86_bg(a)yahoo.com>
> > Date: Tuesday, 24 September 2019 at 19:10
> > To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>, Amit <abawer(a)redhat.com>
> > Cc: users <users(a)ovirt.org>
> > Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> > Storage
> >
> > Why don't you try with 4096 ?
> > Most block devices have a block size of 4096, and anything below that is
> > slowing them down.
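The block sizes a device actually reports can be checked before picking a dd block size; a command sketch (the device name /dev/sda is a placeholder):

```
# Logical and physical block sizes as seen by the kernel
cat /sys/block/sda/queue/logical_block_size   # commonly 512
cat /sys/block/sda/queue/physical_block_size  # commonly 4096 on modern disks
# Or with util-linux:
blockdev --getss --getpbsz /dev/sda
```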
> > Best Regards,
> > Strahil Nikolov
> > On Sep 24, 2019 17:40, Amit Bawer <abawer(a)redhat.com> wrote:
> > Have you reproduced the performance issue when checking this directly
> > with the shared storage mount, outside the VMs?
> >
> > On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com> wrote:
> > Dear oVirt,
> >
> > I have executed some tests regarding IO disk speed on the VMs,
> > running on shared storage and local storage in oVirt.
> >
> > Results of the tests on local storage domains:
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
> >
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
> >
> > Results of the test on shared storage domain:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
> >
> > Why is it so low? Is there anything I can do to tune, configure VDSM
> > or other service to speed this up?
> > Any advice is appreciated.
> >
> > Shared storage is based on Netapp with 20Gbps LACP path from
> > Hypervisor to Netapp volume, and set to MTU 9000. Used protocol is
> > NFS4.0.
> > oVirt is 4.3.4.3 SHE.
> >
> >
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XYSFEGAHCWXIY2JILDE24EVAC5ZVKWU/
> --
> Tony Albers - Systems Architect - IT Development
> Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
> Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
>
>
4 years, 5 months
Re: [ANN] oVirt 4.3.6 is now generally available
by Strahil
You can go with 512 emulation, and later you can recreate the brick without that emulation (if there are benefits to doing so).
After all, your gluster is either a replica 2 arbiter 1 or a replica 3 volume.
Best Regards,
Strahil Nikolov
On Oct 1, 2019 09:26, Satheesaran Sundaramoorthi <sasundar(a)redhat.com> wrote:
>
> On Tue, Oct 1, 2019 at 11:27 AM Guillaume Pavese <guillaume.pavese(a)interactiv-group.com> wrote:
>>
>> Hi all,
>>
>> Sorry for asking again :/
>>
>> Is there any consensus on not using --emulate512 anymore while creating VDO volumes on Gluster?
>> Since this parameter cannot be changed once the volume is created, and we are nearing our production setup, I would really like to have official advice on this.
>>
>> Best,
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
> Hello Guillaume Pavese,
> If you are not using --emulate512 for the VDO volume, then the VDO volume will be
> created as a 4K native volume (with a 4K block size).
>
> There are a couple of things of concern here:
> 1. 4K native device support requires fixes in QEMU that will be part of
> CentOS 7.7.2 (not yet available).
> 2. 4K native support with VDO volumes on Gluster is not yet validated
> thoroughly.
>
> Based on the above items, it would be better to have emulate512=on, or delay
> your production setup (if possible, until both of the above items are addressed)
> to make use of 4K VDO volumes.
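For reference, the emulation is chosen when the volume is created with the vdo manager and, as noted above, cannot be changed afterwards. A command sketch (volume name and backing device are placeholders):

```
vdo create --name=vdo_gluster --device=/dev/sdb --emulate512=enabled
```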
>
> @Sahina Bose Do you have any other suggestions ?
>
> -- Satheesaran S (sas)
>>
>>
>> On Fri, Sep 27, 2019 at 3:19 PM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>>
>>>
>>>
4 years, 5 months
Re: Fwd: Unable to Upgrade
by Gobinda Das
Hi Anton,
There is nothing special about upgrading an HCI env with Gluster. The steps below will be enough:
1- Stop the glusterd service on the node which needs to be upgraded.
2- Upgrade that node and start the glusterd service.
3- Wait for the heal to complete.
4- Continue with the same procedure for the other nodes.
In your case, can you please restart glusterd on all nodes and check? If it's still not working, please share the glusterd log.
Also please share the "gluster volume status" output.
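On each node in turn, the steps above might look like the following; this is a sketch only, and the package manager invocation and volume name are placeholders:

```
systemctl stop glusterd
yum update glusterfs-server        # or the distribution's equivalent
systemctl start glusterd
gluster volume heal VOLNAME info   # repeat until no unhealed entries remain
```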
________________________________
From: Sandro Bonazzola <sbonazzo(a)redhat.com>
Sent: Tuesday, October 1, 2019, 5:01 PM
To: Anton Marchukov; Gobinda Das; Sundaramoorthi, Satheesaran; Sahina Bose
Cc: users; Akshita Jain
Subject: Re: [ovirt-users] Fwd: Unable to Upgrade
Il giorno mar 1 ott 2019 alle ore 11:27 Anton Marchukov <amarchuk(a)redhat.com<mailto:amarchuk@redhat.com>> ha scritto:
Forwarding to the users list.
Begin forwarded message:
From: "Akshita Jain" <akshita(a)councilinfosec.org<mailto:akshita@councilinfosec.org>>
Subject: Unable to Upgrade
Date: 1 October 2019 at 11:12:58 CEST
To: infra(a)ovirt.org<mailto:infra@ovirt.org>
After upgrading oVirt 4.3.4 to 4.3.6, gluster is also upgraded from 5.6 to 6.5. But as soon as it upgrades, "gluster peer status" shows disconnected.
What is the correct method to upgrade oVirt with gluster HCI environment?
+Gobinda Das<mailto:godas@redhat.com> , +Sundaramoorthi, Satheesaran<mailto:sasundar@redhat.com> , +Sahina Bose<mailto:sabose@redhat.com> can you please follow up on this question?
_______________________________________________
Infra mailing list -- infra(a)ovirt.org<mailto:infra@ovirt.org>
To unsubscribe send an email to infra-leave(a)ovirt.org<mailto:infra-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/infra@ovirt.org/message/24D6NKVLMYQ...
--
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat
_______________________________________________
Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
To unsubscribe send an email to users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YO2RYKFDCSP...
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA<https://www.redhat.com/>
sbonazzo(a)redhat.com<mailto:sbonazzo@redhat.com>
Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.<https://mojo.redhat.com/docs/DOC-1199578>
4 years, 6 months
change connection string in db
by olaf.buitelaar@gmail.com
Dear oVirt users,
I'm currently migrating our gluster setup, so I've done a gluster replace-brick to the new machines.
Now I'm trying to update the connection strings of the related storage domains, including the one hosting the ovirt-engine (which I believe cannot be brought down for maintenance). At the same time I'm trying to disable the "Use managed gluster volume" feature.
I had tested this in a lab setup, but somehow I'm running into issues on the actual setup.
On the lab setup it was enough to run a query like this:
UPDATE public.storage_server_connections
SET "connection"='10.201.0.6:/ovirt-kube',gluster_volume_id=NULL,mount_options='backup-volfile-servers=10.201.0.1:10.201.0.2:10.201.0.3:10.201.0.5:10.201.0.4:10.201.0.7:10.201.0.8:10.201.0.9'
WHERE id='29aae3ce-61e4-4fcd-a8f2-ab0a0c07fa48';
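Before and after such an update, the current values can be checked with a query like this (a sketch, reusing the connection id from above):

```sql
SELECT id, connection, mount_options
FROM storage_server_connections
WHERE id = '29aae3ce-61e4-4fcd-a8f2-ab0a0c07fa48';
```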
On the live setup I apparently also need to run a query like this:
UPDATE public.gluster_volumes
SET task_id=NULL
WHERE id='9a552d7a-8a0d-4bae-b5a2-1cb8a7edf5c9';
I couldn't really find what this task_id relates to, but setting it to NULL does make the "Use managed gluster volume" checkbox unchecked in the web interface.
In the lab setup it was enough to run, within the hosted engine:
- service ovirt-engine restart
and then bring an ovirt-host machine into maintenance and activate it again; the changed connection string was then mounted under the /rhev/data-center/mnt/glusterSD/ directory.
Also, the VMs, after being shut down and brought up again, started using the new connection string.
But now, on the production instance, when I restart the engine, the connection string is restored to the original values in the storage_server_connections table. I don't really understand where the engine gathers this information from.
Any advice on how to actually change the connection strings would be highly appreciated.
Thanks Olaf
4 years, 6 months