Hosted Engine Restore Issues
by Alan G
Hi,
Trying to re-deploy the Hosted Engine into a new storage domain. "hosted-engine --deploy --noansible" has completed and the engine is up, but I cannot remove the existing hosted_storage domain to allow the new one to be imported.
I cannot remove the domain until the old HostedEngine VM is removed, but I cannot remove that VM because it has Delete Protection enabled. Every attempt to disable Delete Protection fails with "There was an attempt to change Hosted Engine VM values that are locked". How is this process supposed to work?
Thanks,
Alan
5 years, 6 months
OvfUpdateIntervalInMinutes restored to original value too early?
by Andreas Elvers
I'm having a problem with restore: it fails at the "Wait for OVF_STORE disk content" task. Looking at the Ansible output and at the bug report https://bugzilla.redhat.com/show_bug.cgi?id=1644748, isn't the original value of 60 minutes for OvfUpdateIntervalInMinutes restored too early?
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch host SPM_ID]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Parse host SPM_ID]
[ INFO ] ok: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore original DisableFenceAtStartupInSec]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove DisableFenceAtStartupInSec temporary file]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore original OvfUpdateIntervalInMinutes]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove OvfUpdateIntervalInMinutes temporary file]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Removing temporary value]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restoring original value]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary directory for ansible as postgres user]
[ INFO ] changed: [localhost -> engine.infra.solutions.work]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Prepare getent key]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Trigger hosted engine OVF update and enable the serial console]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait until OVF update finishes]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Parse OVF_STORE disk list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check OVF_STORE volume status]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]
[ ERROR ] Task failed after 12 attempts (rc=2) on item OVF_STORE (id=98bd5cfc-809d-4d40-83e0-98e379af5f97, image_id=59d16f28-64ed-4c3f-aaad-7d98e2718f53):
  cmd: vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=8f115ea6-91ec-4e10-9240-bc8eda47b7ff imageID=98bd5cfc-809d-4d40-83e0-98e379af5f97 volumeID=59d16f28-64ed-4c3f-aaad-7d98e2718f53 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 1e74609b-51e1-45c8-9106-2596ee59ba3a.ovf
  stderr:
    20+0 records in
    20+0 records out
    10240 bytes (10 kB) copied, 0.000164261 s, 62.3 MB/s
    tar: 1e74609b-51e1-45c8-9106-2596ee59ba3a.ovf: Not found in archive
    tar: Exiting with failure status due to previous errors
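For anyone hitting this: the failing check greps the OVF_STORE tar for one specific OVF id. A quick way to see which OVF files the store actually contains is to list the whole archive instead. A sketch reusing the ids from the error above; run it as root on the host, where vdsm-client is available:

```shell
# List *all* members of the OVF_STORE tar instead of testing for one OVF id.
# Pool/domain/image/volume ids are copied from the error output above.
vdsm-client Image prepare \
    storagepoolID=597f329c-0296-03af-0369-000000000139 \
    storagedomainID=8f115ea6-91ec-4e10-9240-bc8eda47b7ff \
    imageID=98bd5cfc-809d-4d40-83e0-98e379af5f97 \
    volumeID=59d16f28-64ed-4c3f-aaad-7d98e2718f53 \
  | grep path | awk '{ print $2 }' \
  | xargs -I{} sudo -u vdsm dd if={} 2>/dev/null \
  | tar -tvf -
```

If the engine VM's OVF id is missing from that listing, the OVF update presumably has not landed on this OVF_STORE disk yet (or went to the other one), which would fit the "restored too early" theory.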
5 years, 6 months
Re: oVirt Open Source Backup solution?
by Derek Atkins
Hi,
Michael Blanchard <michael(a)wanderingmad.com> writes:
> If you haven't seen my other posts, I'm not a very experienced Linux admin, so
> I'm trying to make it as easy as possible to run and maintain. It's hard
> enough for me to not break ovirt in crazy ways
This has nothing to do with oVirt.
You could use rdiff-backup on any running machine, be it virtual or bare
metal. It's just a way to use a combination of diff and rsync to back up
machines. Indeed, I was using it with my VMware-based systems and, when
I migrated them to oVirt, the backups just kept working.
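For the record, a minimal rdiff-backup run looks like this (hostnames and paths are made-up examples, not from this thread; rdiff-backup must be installed on both ends):

```shell
# Back up /etc from inside the guest to a backup host; the target keeps a
# current mirror plus reverse increments, so older states stay restorable.
rdiff-backup /etc backuphost::/backups/vm1/etc

# Show which increments exist on the target.
rdiff-backup --list-increments backuphost::/backups/vm1/etc

# Restore the tree as it was three days ago into /tmp/etc-restore.
rdiff-backup -r 3D backuphost::/backups/vm1/etc /tmp/etc-restore
```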
-derek
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
5 years, 6 months
Re: oVirt and NetApp NFS storage
by Strahil
I know of two approaches.
1. Use the NFS "hard" mount option - the client will never return an error to sanlock and will keep retrying until the NFS server recovers (never tried this one, but in theory it might work).
2. Change the default sanlock timeout (last time I tried that, it didn't work). You might need help from Sandro or Sahina for that option.
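To illustrate option 1: in oVirt the options go into the storage domain's "Additional mount options" field; a manual test mount might look like the sketch below. Server name, export and values are examples, and note that hard mounts carry their own risk of hung processes - vdsm mounts NFS "soft" by default, as far as I remember, so "hard" has to be set explicitly.

```shell
# Manual test of a hard NFS mount (example values; tune timeo/retrans to
# cover the NetApp takeover window). "hard" makes the client retry
# indefinitely instead of returning I/O errors to sanlock.
mount -t nfs -o hard,timeo=600,retrans=6,vers=4.1 \
    netapp.example.com:/vol/ovirt /mnt/nfs-test
```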
Best Regards,
Strahil Nikolov

On Apr 18, 2019 11:45, klaasdemter(a)gmail.com wrote:
>
> Hi,
>
> I got a question regarding oVirt and the support of NetApp NFS storage.
> We have a MetroCluster for our virtual machine disks but a HA-Failover
> of that (active IP gets assigned to another node) seems to produce
> outages too long for sanlock to handle - that affects all VMs that have
> storage leases. NetApp says a "worst case" takeover time is 120 seconds.
> That would mean sanlock has already killed all VMs. Is anyone familiar
> with how we could setup oVirt to allow such storage outages? Do I need
> to use another type of storage for my oVirt VMs because that NFS
> implementation is unsuitable for oVirt?
>
>
> Greetings
>
> Klaas
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57...
5 years, 6 months
Ansible oVirt.image-template role
by Jeremy Tourville
I am trying to run an Ansible playbook that doesn't appear to run correctly. I have followed the example from this blog - https://evaryont.me/blog/2018/09/getting-started-with-vagrant-and-ovirt-f...
The playbook finishes with an ok status, but the template never gets built in oVirt.
I have taken logs from three locations hoping to spot the error:
* [root@ansible ansible]#ansible-playbook -vvvv runsetup.yml
* [root@ansible ansible]# less /var/log/ansible.log
* [root@engine ~]# tail -f /var/log/messages (while the playbook is running)
The server Ansible is my control node and Engine is my managed host.
Can anyone help me interpret the attached logs in an effort to further troubleshoot? Thanks!
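One thing that helps with "finishes ok but nothing happens" playbooks is filtering the output for skipped tasks and zero-change recaps. A tiny self-contained sketch of the filter (the printf lines just stand in for real log content; the task name is an example):

```shell
# Stand-in log lines; on a real run, feed /var/log/ansible.log (or the
# -vvvv output) into the same grep instead of this printf.
printf '%s\n' \
  'TASK [ovirt-image-template : Create template]' \
  'skipping: [localhost]' \
  'PLAY RECAP: localhost : ok=12 changed=0 failed=0 skipped=7' \
  | grep -E 'skipping:|changed=0|failed=[1-9]'
```

A recap with changed=0, or a skipped template task, usually means a `when:` condition was never satisfied - which matches a playbook that "succeeds" without creating anything.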
5 years, 6 months
Re: Ovirt nodeNG RDMA support?
by Strahil
I saw you are using Gluster v5.3. Any reason not to update to 5.5?
It is considerably more stable than 5.3, and some enhancements were made.
Best Regards,
Strahil Nikolov

On May 10, 2019 00:13, michael(a)wanderingmad.com wrote:
>
> Tried that, get this error:
>
>
> [root@Icarus ~]# gluster volume stop storage_ssd
> Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
> volume stop: storage_ssd: success
> [root@Icarus ~]# gluster volume set storage_ssd config.transport tcp,rdma
> volume set: failed: Commit failed on localhost. Please check the log file for more details.
> and here is the glusterd.log
>
> [2019-05-09 21:10:45.945855] W [MSGID: 101095] [xlator.c:456:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/5.3/xlator/nfs/server.so: cannot open shared object file: No such file or directory
> [2019-05-09 21:10:46.041341] I [MSGID: 106521] [glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing transport-type for volume storage_ssd to tcp,rdma
> [2019-05-09 21:10:46.041965] W [MSGID: 101095] [xlator.c:180:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/5.3/xlator/nfs/server.so: cannot open shared object file: No such file or di
> rectory
> [2019-05-09 21:10:46.059356] E [MSGID: 106068] [glusterd-volgen.c:1025:volgen_write_volfile] 0-management: failed to create volfile
> [2019-05-09 21:10:46.059394] E [glusterd-volgen.c:6556:glusterd_create_volfiles] 0-management: Could not generate gfproxy client volfiles
> [2019-05-09 21:10:46.059406] E [MSGID: 106068] [glusterd-op-sm.c:3062:glusterd_op_set_volume] 0-management: Unable to create volfile for 'volume set'
> [2019-05-09 21:10:46.059420] E [MSGID: 106122] [glusterd-syncop.c:1434:gd_commit_op_phase] 0-management: Commit of operation 'Volume Set' failed on localhost
> [2019-05-09 21:10:46.062304] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
> [root@prometheus ~]#
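The first warning in that log (the nfs/server.so xlator cannot be opened) is followed directly by the volfile-generation failure, so it is worth checking whether that xlator is installed at all. A sketch - the package split varies by distribution, and glusterfs-gnfs as the provider is an assumption to verify locally:

```shell
# Is the xlator glusterd complains about actually on disk?
ls -l /usr/lib64/glusterfs/5.3/xlator/nfs/server.so

# Which gluster packages are installed? On some CentOS builds the gNFS
# xlator ships in a separate package (e.g. glusterfs-gnfs).
rpm -qa 'glusterfs*'
```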
5 years, 6 months
Re: Migrating Domain Storage Gluster
by Strahil
If this is a replica 2 arbiter 1 volume (formerly called replica 3 arbiter 1) or a plain replica 3 volume, you don't need the downtime.
Just stop the bricks on one of the nodes.
Mount (and verify) the SSDs at the same location and use Gluster's reset-brick option.
Wait for the heal to complete, then repeat with the other node(s).
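A sketch of that per-node cycle with Gluster's reset-brick (volume name, node and brick path are examples, not from this thread):

```shell
# Take the brick offline on one node.
gluster volume reset-brick vmstore node1:/gluster/vmstore/brick start

# ...swap the HDDs for SSDs and mount the new filesystem at the same path...

# Bring the (now empty) brick back; "commit force" triggers a full heal
# from the remaining replicas.
gluster volume reset-brick vmstore node1:/gluster/vmstore/brick \
    node1:/gluster/vmstore/brick commit force

# Wait until self-heal reports nothing pending before doing the next node.
gluster volume heal vmstore info summary
```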
Best Regards,
Strahil Nikolov

On May 9, 2019 21:00, Alex McWhirter <alex(a)triadic.us> wrote:
>
> Basically i want to take out all of the HDD's in the main gluster pool,
> and replace with SSD's.
>
> My thought was to put everything in maintenance, copy the data manually
> over to a transient storage server. Destroy the gluster volume, swap in
> all the new drives, build a new gluster volume with the same name /
> settings, move data back, and be done.
>
> Any thoughts on this?
5 years, 6 months
Re: HostedEngine cleaned up
by Strahil
This is due to sharding.
I joined oVirt 6 months ago and this option was still there.
Best Regards,
Strahil Nikolov

On May 9, 2019 18:04, Dmitry Filonov <filonov(a)hkl.hms.harvard.edu> wrote:
>
> The data chunks are under .glusterfs folder on bricks now. Not a single huge file you can easily access from a brick.
> Not sure when that change was introduced though.
>
> Fil
>
>
> On Thu, May 9, 2019, 10:43 AM <olaf.buitelaar(a)gmail.com> wrote:
>>
>> It looks like i've got the exact same issue;
>> drwxr-xr-x. 2 vdsm kvm 4.0K Mar 29 16:01 .
>> drwxr-xr-x. 22 vdsm kvm 4.0K Mar 29 18:34 ..
>> -rw-rw----. 1 vdsm kvm 64M Feb 4 01:32 44781cef-173a-4d84-88c5-18f7310037b4
>> -rw-rw----. 1 vdsm kvm 1.0M Oct 16 2018 44781cef-173a-4d84-88c5-18f7310037b4.lease
>> -rw-r--r--. 1 vdsm kvm 311 Mar 29 16:00 44781cef-173a-4d84-88c5-18f7310037b4.meta
>> Within the meta file the image is marked legal and reports a size of SIZE=41943040; interestingly, the format is marked RAW, while it was created as a thin-provisioned volume.
>> My suspicion is that something went wrong while the volume was being live-migrated, and somehow the merging of the images broke the volume.
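One way to check whether the metadata's FORMAT=RAW matches what is actually on disk is qemu-img (the path is a placeholder for wherever that volume file lives on your brick or mount; run it read-only, with the VM down):

```shell
# Compare the real on-disk format with what the .meta file claims.
qemu-img info /path/to/44781cef-173a-4d84-88c5-18f7310037b4

# If it turns out to be qcow2, a consistency check may show the damage.
qemu-img check /path/to/44781cef-173a-4d84-88c5-18f7310037b4
```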
5 years, 6 months
Ovirt nodeNG RDMA support?
by michael@wanderingmad.com
When I was able to load CentOS as the host OS, I was able to use RDMA, but it seems the 4.3.x branch of nodeNG is missing RDMA support? I enabled rdma and started the service, but Gluster refuses to recognize that RDMA is available: it always reports the RDMA port as 0, and when I try to make a new volume with the tcp,rdma transport options, it always fails.
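For reference, the checks worth running on the node to see whether RDMA is usable at all (a sketch; ibv_devices comes from libibverbs-utils, and storage_ssd is the volume name used elsewhere in this thread):

```shell
# Did the rdma service load the kernel modules?
systemctl status rdma

# Are any RDMA-capable devices visible to userspace?
ibv_devices

# Gluster's view: the RDMA port column stays 0 when RDMA is not in use.
gluster volume status storage_ssd
```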
5 years, 6 months
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in a VM and then converted the VM to a template. We deploy the template with the initial run parameters Hostname, IP Address, Gateway and DNS.
But when we power the VM on normally, the initial run parameters are not pushed into the VM. It does work when we power on the VM using the Run Once option in the oVirt portal.
I believe we need to power on the VM using the Run Once API, but we are not able to find this API.
Can someone help with this?
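In the v4 REST API, "run once" maps to a POST on the VM's start action; a sketch with curl (engine hostname, credentials and VM id are placeholders - check the use_cloud_init and initialization details against your engine's API documentation):

```shell
# Start a VM "run once"-style so cloud-init pushes the initial-run
# parameters (hostname, IP, gateway, DNS) configured on the VM.
curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -H 'Accept: application/xml' \
  -d '<action><use_cloud_init>true</use_cloud_init></action>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'
```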
I got a reply to this query last time, but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
5 years, 6 months