Disaster recovery active-active question
by wodel youchi
Hi,
Regarding the active-active DR setup, the documentation says:
"You require replicated storage that is writeable on both sites to allow
virtual machines to migrate between sites and continue running on the
site’s storage."
I know that it's not a direct oVirt question, but if someone has
implemented this ...
Which type of storage can offer both real-time synchronization and read/write access on both ends of the replicated volumes?
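Not an authoritative answer, but one layout that matches that requirement is a GlusterFS volume stretched across the two sites, with an arbiter brick at a third location to keep quorum. A minimal sketch, where the host names and brick paths are placeholders of mine rather than anything from the documentation:

# Sketch only: a Gluster volume writable from both sites (replica 3 arbiter 1)
gluster peer probe site2-host1
gluster peer probe arbiter-host
gluster volume create dr_data replica 3 arbiter 1 \
    site1-host1:/gluster-bricks/dr_data \
    site2-host1:/gluster-bricks/dr_data \
    arbiter-host:/gluster-bricks/dr_data
gluster volume start dr_data

The usual caveat is that this is synchronous replication, so the inter-site latency is added to every write and the design only makes sense with a fast, low-latency link between the sites.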
Regards.
5 years, 2 months
Re: the virtual machine crashed and I can't shut down the VM successfully
by Strahil
Have you checked the hypervisor's logs?
I would start with the libvirt and vdsm logs (on the host) and then the logs on your engine.
What OS is your VM running, and what version is the VM itself (you can check in the UI)?
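For reference, these are the usual places to look; the paths are the defaults for a standard oVirt node and engine install, so adjust if yours differ:

# On the host that runs the VM
tail -f /var/log/vdsm/vdsm.log
journalctl -u libvirtd --since "1 hour ago"
less /var/log/libvirt/qemu/<vm-name>.log
# On the engine
tail -f /var/log/ovirt-engine/engine.log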
Best Regards,
Strahil Nikolov
On Sep 17, 2019 10:56, zhouhao(a)vip.friendtimes.net wrote:
>
> There is no useful information on Google. I can't solve this problem. I can only restart the ovirt-node
>
> ________________________________
> zhouhao(a)vip.friendtimes.net
>>
>>
>> From: tedhima@yahoo.co.uk
>> Date: 2019-09-17 15:30
>> To: users; devel; zhouhao(a)vip.friendtimes.net
>> Subject: Re: [ovirt-users] the virtual machine crashed and I can't shut down the VM successfully
>> Probably related to XFS defragmentation. Have you tried to google the error message?
>>
>> https://bugzilla.kernel.org/show_bug.cgi?id=73831
>> https://centos.org/forums/viewtopic.php?t=52412
>> https://blog.codecentric.de/en/2017/04/xfs-possible-memory-allocation-dea...
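The mitigation usually suggested in those threads is to raise the kernel's free-memory reserve and, as a stop-gap, drop reclaimable caches; the value below is only an example and should be sized to the host's RAM:

sysctl vm.min_free_kbytes                  # check the current reserve
sysctl -w vm.min_free_kbytes=1048576       # example value (1 GiB); tune for your host
sync && echo 2 > /proc/sys/vm/drop_caches  # reclaim dentry/inode caches as temporary relief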
>> On Tuesday, 17 September 2019, 13:55:45 GMT+7, zhouhao(a)vip.friendtimes.net <zhouhao(a)vip.friendtimes.net> wrote:
>>
>>
>>>
>>> The trouble has bothered me for a long time; sometimes my VM crashes and I can't shut it down, so I have to reboot the ovirt-node to work around it.
>>> The VM's error is below:
>>>
>>> The ovirt-node's error is below:
>>>
>>> The VM's threads on the ovirt-node show an I/O ratio of 100%:
>>>
>>>
>>> and the VM's process changed to defunct;
>>> I cannot kill it, and every time I have had to shut down the ovirt-node.
>>>
>>> On the engine web UI, the VM's status always shows as shutting down, even if I wait for hours.
>>> <
5 years, 2 months
Gluster: Bricks remove failed
by toslavik@yandex.ru
Hi.
There is an oVirt hosted engine on the Gluster volume 'engine' (type Replicate, replica count 3).
I want to migrate it to other drives.
I do:
gluster volume add-brick engine clemens:/gluster-bricks/engine tiberius:/gluster-bricks/engine octavius:/gluster-bricks/engine force
volume add-brick: success
gluster volume remove-brick engine tiberius:/engine/datastore clemens:/engine/datastore octavius:/engine/datastore start
volume remove-brick start: success
ID: dd9453d3-b688-4ed8-ad37-ba901615046c
gluster volume remove-brick engine octavius:/engine/datastore status
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 7 50.0GB 34 1 0 completed 0:00:02
clemens 11 21.0MB 31 0 0 completed 0:00:02
tiberius 12 25.0MB 36 0 0 completed 0:00:02
gluster volume status engine
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick octavius:/engine/datastore 49156 0 Y 15669
Brick tiberius:/engine/datastore 49156 0 Y 15930
Brick clemens:/engine/datastore 49156 0 Y 16193
Brick clemens:/gluster-bricks/engine 49159 0 Y 6168
Brick tiberius:/gluster-bricks/engine 49163 0 Y 29524
Brick octavius:/gluster-bricks/engine 49159 0 Y 50056
Self-heal Daemon on localhost N/A N/A Y 50087
Self-heal Daemon on clemens N/A N/A Y 6263
Self-heal Daemon on tiberius N/A N/A Y 29583
Task Status of Volume engine
------------------------------------------------------------------------------
Task : Remove brick
ID : dd9453d3-b688-4ed8-ad37-ba901615046c
Removed bricks:
tiberius:/engine/datastore
clemens:/engine/datastore
octavius:/engine/datastore
Status : completed
But the data did not migrate
du -hs /gluster-bricks/engine/ /engine/datastore/
49M /gluster-bricks/engine/
20G /engine/datastore/
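For anyone hitting the same thing, a few read-only checks that may help narrow down why the old bricks still hold data; the rebalance log path below is the usual default and may differ on your install:

gluster volume remove-brick engine tiberius:/engine/datastore clemens:/engine/datastore octavius:/engine/datastore status
grep -iE 'error|failed|migrate' /var/log/glusterfs/engine-rebalance.log | tail -n 20
find /engine/datastore -type f | head -n 20    # sample of files still sitting on an old brick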
Can you give some advice?
5 years, 2 months
oVirt 4.3 gluster hooks
by Staniforth, Paul
Hello,
In an oVirt 4.3 cluster there are a number of Gluster hooks out of sync. When I try to resolve any of the conflicts from the engine, the operation is cancelled with the error message "Error while executing action Update Gluster Hook: Internal Engine Error".
In the engine log there are a number of errors, including:
2019-09-16 10:38:28,584+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.UpdateGlusterHookVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2210463) [62ee29dd-5f80-47ad-8e5a-c97e7c3b7acb] Command 'UpdateGlusterHookVDSCommand(HostName = hostname@domainname, GlusterHookVDSParameters:{hostId='86571cea-a20c-4087-8bbf-103d26b3c4eb'})' execution failed: VDSGenericException: VDSErrorException: Failed to UpdateGlusterHookVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method GlusterHook.update of <vdsm.gluster.apiwrapper.GlusterHook object at 0x7fb34421f510>> with arguments: (u'add-brick', u'PRE', u'28Quota-enable-root-xattr-heal.sh', u'IyEvYmluL3NoCgojIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCiMjIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyMgVGhlIHNjcmlwdHMKIyMgSS4gICBhZGQtYnJpY2svcHJlL1MyOFF1b3RhLXJvb3QteGF0dHItaGVhbC5zaCAoaXRzZWxmKQojIyBJSS4gIGFkZC1icmljay9wb3N0L2Rpc2FibGVkLXJvb3QteGF0dHItaGVhbC5zaCBBTkQKIyMgY29sbGVjdGl2ZWx5IGFyY2hpZXZlcyB0aGUgam9iIG9mIGhlYWxpbmcgdGhlICdsaW1pdC1zZXQnIHhhdHRyIHVwb24KIyMgYWRkLWJyaWNrIHRvIHRoZSBnbHVzdGVyIHZvbHVtZS4KIyMKIyMgVGhpcyBzY3JpcHQgaXMgdGhlICdjb250cm9sbGluZycgc2NyaXB0LiBVcG9uIGFkZC1icmljayB0aGlzIHNjcmlwdCBlbmFibGVzCiMjIHRoZSBjb3JyZXNwb25kaW5nIHNjcmlwdCBiYXNlZCBvbiB0aGUgc3RhdHVzIG9mIHRoZSB2b2x1bWUuCiMjIElmIHZvbHVtZSBpcyBzdGFydGVkIC0gZW5hYmxlIGFkZC1icmljay9wb3N0IHNjcmlwdAojIyBlbHNlICAgICAgICAgICAgICAgICAtIGVuYWJsZSBzdGFydC9wb3N0IHNjcmlwdC4KIyMKIyMgVGhlIGVuYWJsaW5nIGFuZCBkaXNhYmxpbmcgb2YgYSBzY3JpcHQgaXMgYmFzZWQgb24gdGhlIGdsdXN0ZXJkJ3MgbG9naWMsCiMjIHRoYXQgaXQgb25seSBydW5zIHRoZSBzY3JpcHRzIHdoaWNoIHN0YXJ0cyBpdHMgbmFtZSB3aXRoICdTJy4gU28sCiMjIEVuYWJsZSAtIHN5bWxpbmsgdGhlIGZpbGUgdG8gJ1MnKi4KIyMgRGlzYWJsZS0gdW5saW5rIHN5bWxpbmsKIyMgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCgpPUFRTUEVDPSJ2b2xuYW1lOix2ZXJzaW9uOixnZC13b3JrZGlyOix2b2x1bWUtb3A6IgpQUk9HTkFNRT0iUXVvdGEteGF0dHItaGVhbC1hZGQtYnJpY2stcHJlIgpWT0xfTkFNRT0KR0xVU1RFUkRfV09SS0RJUj0KVk9MVU1FX09QPQpWRVJTSU9OPQpFTkFCTEVEX05BTUU9IlMyOFF1b3RhLXJvb3QteGF0dHItaGVhbC5zaCIKRElTQUJMRURfTkFNRT0iZGlzYWJsZWQtcXVvdGEtcm9vdC14YXR0ci1oZWFsLnNoIgoKZW5hYmxlICgpCnsKICAgICAgICBsbiAtc2YgJERJU0FCTEVEX1NUQVRFICQxOwp9CgojIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojIyBQYXJzZSB0aGUgYXJndW1lbnRzCiMjLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCkFSR1M9JChnZXRvcHQgLWwgJE9QVFNQRUMgIC1uYW1lICRQUk9HTkFNRSAkQCkKZXZhbCBzZXQgLS0gIiRBUkdTIgoKd2hpbGUgdHJ1ZTsKZG8KICAgIGNhc2UgJDEgaW4KICAgICAgICAtLXZvbG5hbWUpCiAgICAgICAgICAgIHNoaWZ0CiAgICAgICAgICAgIFZPTF9OQU1FPSQxCiAgICAgICAgICAgIDs7CiAgICAgICAgLS1nZC13b3JrZGlyKQogICAgICAgICAgICBzaGlmdAogICAgICAgICAgICBHTFVTVEVSRF9XT1JLRElSPSQxCiAgICAgICAgICAgIDs7CiAgICAgICAgLS12b2x1bWUtb3ApCiAgICAgICAgICAgIHNoaWZ0CiAgICAgICAgICAgIFZPTFVNRV9PUD0kMQogICAgICAgICAgICA7OwogICAgICAgIC0tdmVyc2lvbikKICAgICAgICAgICAgc2hpZnQKICAgICAgICAgICAgVkVSU0lPTj0kMQogICAgICAgICAgICA7OwogICAgICAgICopCiAgICAgICAgICAgIHNoaWZ0CiAgICAgICAgICAgIGJyZWFrCiAgICAgICAgICAgIDs7CiAgICBlc2FjCiAgICBzaGlmdApkb25lCiMjLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQoKRElTQUJMRURfU1RBVEU9IiRHTFVTVEVSRF9XT1JLRElSL2hvb2tzLyRWRVJTSU9OL2FkZC1icmljay9wb3N0LyRESVNBQkxFRF9OQU1FIgpFTkFCTEVEX1NUQVRFX1NUQVJUPSIkR0xVU1RFUkRfV09SS0RJUi9ob29rcy8kVkVSU0lPTi9zdGFydC9wb3N0LyRFTkFCTEVEX05BTUUiCkVOQUJMRURfU1RBVEVfQUREX0JSSUNLP
SIkR0xVU1RFUkRfV09SS0RJUi9ob29rcy8kVkVSU0lPTi9hZGQtYnJpY2svcG9zdC8kRU5BQkxFRF9OQU1FIjsKCiMjIFdoeSB0byBwcm9jZWVkIGlmIHRoZSByZXF1aXJlZCBzY3JpcHQgaXRzZWxmIGlzIG5vdCBwcmVzZW50PwpscyAkRElTQUJMRURfU1RBVEU7CmlmIFsgMCAtbmUgJD8gXQp0aGVuCiAgICAgICAgZXhpdCAkPzsKZmkKCiMjIElzIHF1b3RhIGVuYWJsZWQ/CkZMQUc9YGNhdCAkR0xVU1RFUkRfV09SS0RJUi92b2xzLyRWT0xfTkFNRS9pbmZvIHwgZ3JlcCAiXmZlYXR1cmVzLnF1b3RhPSIgXAogICAgICB8IGF3ayAtRic9JyAne3ByaW50ICRORn0nYDsKaWYgWyAiJEZMQUciICE9ICJvbiIgXQp0aGVuCiAgICAgICAgZXhpdCAkRVhJVF9TVUNDRVNTOwpmaQoKIyMgSXMgdm9sdW1lIHN0YXJ0ZWQ/CkZMQUc9YGNhdCAkR0xVU1RFUkRfV09SS0RJUi92b2xzLyRWT0xfTkFNRS9pbmZvIHwgZ3JlcCAiXnN0YXR1cz0iIFwKICAgICAgfCBhd2sgLUYnPScgJ3twcmludCAkTkZ9J2A7CmlmIFsgIiRGTEFHIiAhPSAiMSIgXQp0aGVuCiAgICAgICAgZW5hYmxlICRFTkFCTEVEX1NUQVRFX1NUQVJUOwogICAgICAgIGV4aXQgJD8KZmkKCmVuYWJsZSAkRU5BQkxFRF9TVEFURV9BRERfQlJJQ0s7CmV4aXQgJD8K') error: update() takes exactly 6 arguments (5 given)"}, code = -32603
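The traceback points at the VDSM-side hook update call itself (update() being handed 5 arguments instead of 6), so a manual sync of the conflicting hook between the peers, outside the UI, is sometimes used as a workaround. A rough sketch, assuming the default glusterd hook directory and the hook name reconstructed from the error; verify the exact file name on disk first:

HOOK=add-brick/pre/S28Quota-enable-root-xattr-heal.sh
# copy the version you want to keep from a known-good node to the out-of-sync ones
scp /var/lib/glusterd/hooks/1/$HOOK other-node:/var/lib/glusterd/hooks/1/$HOOK
ssh other-node chmod +x /var/lib/glusterd/hooks/1/$HOOK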
5 years, 2 months
user role
by kim.kargaard@noroff.no
Hi,
We are a higher education institution and have set up oVirt 4.3 as a virtual lab environment for both on-campus and distance-learning students. We have integrated AD, and users can log in with AD credentials. However, I am struggling to find the right user role for normal student users. Basically, they need to be able to do the following:
1. Create a new VM and select from either a template or select an uploaded ISO
2. Start the VM
3. Restart the VM
4. Shutdown the VM
5. Snapshot the VM
6. Connect to the VM using either spice or vnc/novnc
7. Create a disk
8. Connect to the student network only (preferably), as we have management networks that we don't want the students to be able to see or select (if possible)
I have tested using the PowerUserRole, but that does not allow me to select VNC/noVNC or SPICE and does not allow installing from an ISO, only from a template. Any suggestions on a role that I can give these users?
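For comparing what the built-in roles actually permit, the permits are visible through the REST API; a quick sketch, with the engine FQDN and credentials as placeholders:

# list the built-in and custom roles
curl -sk -u admin@internal:PASSWORD -H 'Accept: application/xml' \
     https://engine.example.com/ovirt-engine/api/roles | grep '<name>'
# show the permits attached to one role (take the role id from the listing above)
curl -sk -u admin@internal:PASSWORD -H 'Accept: application/xml' \
     https://engine.example.com/ovirt-engine/api/roles/ROLE_ID/permits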
Thanks.
Kind regards
Kim
5 years, 2 months
Re: Does cluster upgrade wait for heal before proceeding to next host?
by Strahil
As far as I know,
The UI allows the administrator to override the gluster status, although I always use that protection.
In my opinion gluster health (especially in my setup - replica2 arbiter1) is very important.
Best Regards,
Strahil Nikolov
On Sep 17, 2019 00:47, Jayme <jaymef(a)gmail.com> wrote:
>
> Strahil, yes this is similar to the approach I have used to upgrade my hci cluster. This thread is in regards to the ui cluster upgrade procedure. It fails to update after the first host is rebooted because the gluster volume is still in a healing state and the attempt to set the second host to maintenance fails because quorum is not met.
>
> On Mon, Sep 16, 2019 at 1:41 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> I keep reading this chain and I still don't get what/who should wait for the cluster to heal...
>> Is there some kind of built-in autopatching feature?
>>
>> Here is my approach:
>> 1. Set global maintenance
>> 2. Power off the engine
>> 3. Create a gluster snapshot of the engine's volume
>> 4. Power on engine manually
>> 5. Check engine status
>> 6. Upgrade engine
>> 7. Upgrade engine's OS
>> 8. Reboot engine and check health
>> 9. Remove global maintenance
>> 10. Set a host into local maintenance (evacuate all VMs)
>> 11. Use UI to patch the host (enable autoreboot)
>> 12. When host is up - login and check gluster volumes' heal status
>> 13. Remove maintenance for the host and repeat for the rest of the cluster.
>>
>> I realize that for large clusters this approach is tedious and an automatic approach can be scripted.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Sep 16, 2019 11:02, Kaustav Majumder <kmajumde(a)redhat.com> wrote:
>>>
>>> Hi Jayme,
>>> It would be great if you could raise a bug regarding the same.
>>>
>>> On Wed, Sep 11, 2019 at 5:05 PM Jayme <jaymef(a)gmail.com> wrote:
>>>>
>>>> This sounds similar to the issue I hit with the cluster upgrade process in my environment. I have large 2 TB SSDs and most of my VMs are several hundred GB in size. The heal process after host reboot can take 5-10 minutes to complete. I may be able to address this with better gluster tuning.
>>>>
>>>> Either way the upgrade process should be aware of the heal status and wait for it to complete before attempting to move on to the next host.
>>>>
>>>>
>>>> On Wed, Sep 11, 2019 at 3:53 AM Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Aug 9, 2019 at 3:41 PM Martin Perina <mperina(a)redhat.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Il giorno mar 6 ago 2019 alle ore 23:17 Jayme <jaymef(a)gmail.com> ha scritto:
>>>>>>>>
>>>>>>>> I’m aware of the heal process but it’s unclear to me if the update continues to run while the volumes are healing and resumes when they are done. There doesn’t seem to be any indication in the ui (unless I’m mistaken)
>>>>>>>
>>>>>>>
>>>>>>> Adding @Martin Perina , @Sahina Bose and
5 years, 2 months
Re: Does cluster upgrade wait for heal before proceeding to next host?
by Strahil
I keep reading this chain and I still don't get what/who should wait for the cluster to heal...
Is there some kind of built-in autopatching feature?
Here is my approach:
1. Set global maintenance
2. Power off the engine
3. Create a gluster snapshot of the engine's volume
4. Power on engine manually
5. Check engine status
6. Upgrade engine
7. Upgrade engine's OS
8. Reboot engine and check health
9. Remove global maintenance
10. Set a host into local maintenance (evacuate all VMs)
11. Use UI to patch the host (enable autoreboot)
12. When host is up - login and check gluster volumes' heal status
13. Remove maintenance for the host and repeat for the rest of the cluster.
I realize that for large clusters this approach is tedious and an automatic approach can be scripted.
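A rough sketch of that kind of heal-wait script, run on the freshly patched host before removing maintenance; the volume names come from 'gluster volume list' and the 60-second poll interval is arbitrary:

#!/bin/bash
# Wait until every Gluster volume reports zero pending heal entries.
for vol in $(gluster volume list); do
    while :; do
        pending=$(gluster volume heal "$vol" info | awk '/Number of entries:/ {s += $NF} END {print s+0}')
        [ "$pending" -eq 0 ] && { echo "$vol: healed"; break; }
        echo "$vol: $pending entries still healing, waiting..."
        sleep 60
    done
done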
Best Regards,
Strahil Nikolov
On Sep 16, 2019 11:02, Kaustav Majumder <kmajumde(a)redhat.com> wrote:
>
> Hi Jayme,
> It would be great if you could raise a bug regarding the same.
>
> On Wed, Sep 11, 2019 at 5:05 PM Jayme <jaymef(a)gmail.com> wrote:
>>
>> This sounds similar to the issue I hit with the cluster upgrade process in my environment. I have large 2 TB SSDs and most of my VMs are several hundred GB in size. The heal process after host reboot can take 5-10 minutes to complete. I may be able to address this with better gluster tuning.
>>
>> Either way the upgrade process should be aware of the heal status and wait for it to complete before attempting to move on to the next host.
>>
>>
>> On Wed, Sep 11, 2019 at 3:53 AM Sahina Bose <sabose(a)redhat.com> wrote:
>>>
>>>
>>>
>>> On Fri, Aug 9, 2019 at 3:41 PM Martin Perina <mperina(a)redhat.com> wrote:
>>>>
>>>>
>>>>
>>>> On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> Il giorno mar 6 ago 2019 alle ore 23:17 Jayme <jaymef(a)gmail.com> ha scritto:
>>>>>>
>>>>>> I’m aware of the heal process but it’s unclear to me if the update continues to run while the volumes are healing and resumes when they are done. There doesn’t seem to be any indication in the ui (unless I’m mistaken)
>>>>>
>>>>>
>>>>> Adding @Martin Perina , @Sahina Bose and @Laura Wright on this, hyperconverged deployments using cluster upgrade command would probably need some improvement.
>>>>
>>>>
>>>> The cluster upgrade process continues to the 2nd host after the 1st host becomes Up. If 2nd host then fails to switch to maintenance, we stop the upgrade process to prevent breakage.
>>>> Sahina, is gluster healing process status exposed in RESTAPI? If so, does it makes sense to wait for healing to be finished before trying to move next host to maintenance? Or any other ideas how to improve?
>>>
>>>
>>> I need to cross-check whether we expose the heal count on the gluster bricks. Moving a host to maintenance does check whether there are pending heal entries or a possibility of quorum loss, and this would prevent the additional hosts from upgrading.
>>> +Gobinda Das +Sachidananda URS
>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane <okane(a)khm.de> wrote:
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> Often(?), updates to a hypervisor that also has (provides) a Gluster
>>>>>>> brick take the hypervisor offline (updates often require a reboot).
>>>>>>>
>>>>>>> This reboot then makes the brick "out of sync" and it has to be resync'd.
>>>>>>>
>>>>>>> I find it a "feature" that another host that is also part of a gluster
>>>>>>> domain cannot be updated (rebooted) before all the bricks are updated,
>>>>>>> in order to guarantee there is no data loss. It is called quorum, right?
>>>>>>>
>>>>>>> Always let the heal process end. Then the next update can start.
>>>>>>> For me there is ALWAYS a healing time before Gluster is happy again.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> Robert O'Kane
>>>>>>>
>>>>>>>
>>>>>>> On 06.08.2019 at 16:38, Shani Leviim wrote:
>>>>>>> > Hi Jayme,
>>>>>>> > I can't recall such a healing time.
>>>>>>> > Can you please retry and attach the engine & vdsm logs so we'll be smarter?
>>>>>>> >
>>>>>>> > Regards,
>>>>>>> > Shani Leviim
>>>>>>> >
>>>>>>> >
>>>>>>> > On Tue, Aug 6, 2019 at 5:24 PM Jayme <jaymef(a)gmail.com
>>>>>>> > <mailto:jaymef@gmail.com>> wrote:
>>>>>>> >
>>>>>>> > I've yet to have cluster upgrade finish updating my three host HCI
>>>>>>> > cluster. The most recent try was today moving from oVirt 4.3.3 to
>>>>>>> > 4.3.5.5. The first host updates normally, but when it moves on to
>>>>>>> > the second host it fails to put it in maintenance and the cluster
>>>>>>> > upgrade stops.
>>>>>>> >
>>>>>>> > I suspect this is due to that fact that after my hosts are updated
>>>>>>> > it takes 10 minutes or more for all volumes to sync/heal. I have
>>>>>>> > 2Tb SSDs.
>>>>>>> >
>>>>>>> > Does the cluster upgrade process take heal time in to account before
>>>>>>> > attempting to place the next host in maintenance to upgrade it? Or
>>>>>>> > is there something else that may be at fault here, or perhaps a
>>>>>>> > reason why the heal process takes 10 minutes after reboot to complete?
>>>>>>> > _______________________________________________
>>>>>>> > Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
>>>>>>> > To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>> > <mailto:users-leave@ovirt.org>
>>>>>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> > oVirt Code of Conduct:
>>>>>>> > https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> > List Archives:
>>>>>>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
>>>>>>> >
>>>>>>> >
>>>>>>> > _______________________________________________
>>>>>>> > Users mailing list -- users(a)ovirt.org
>>>>>>> > To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGM...
>>>>>>> >
>>>>>>>
>>>>>>> --
>>>>>>> Systems Administrator
>>>>>>> Kunsthochschule für Medien Köln
>>>>>>> Peter-Welter-Platz 2
>>>>>>> 50676 Köln
>>>>>>> _______________________________________________
>>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBAHFFFTDOI...
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/T27ROHWZPJL...
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Sandro Bonazzola
>>>>>
>>>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>>>
>>>>> Red Hat EMEA
>>>>>
>>>>> sbonazzo(a)redhat.com
>>>>>
>>>>> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
>>>>
>>>>
>>>>
>>>> --
>>>> Martin Perina
>>>> Manager, Software Engineering
>>>> Red Hat Czech s.r.o.
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4KLDBPYBCQ...
>
>
>
> --
>
> Thanks,
>
> Kaustav Majumder
5 years, 2 months
Cannot Activate iSCSI Storage Domain After Errors
by Clint Boggio
Good Day To all;
oVirt 4.3.4.3-1.el7
6 Dell R710 oVirt Nodes
10 gig Infiniband iSCSI
Standalone Engine
After an error during a scheduled automated VM backup, the iSCSI datastore that the VMs being backed up lived on suddenly went inactive and refuses to activate.
The backup job did not finish, as the source datastore went inactive, but the task did clear from the console after I restarted the ovirt-engine service.
I have tried:
1. Restarted the ovirt-engine service.
2. Restarted the engine physical server.
3. Tried to migrate the VM disks to a different datastore (failed).
The virtual machines that are currently living on the inactive datastore are still running and accessible.
Trying to reboot one of those VMs as a test fails; the VM will not come up because the system believes the datastore is inactive.
From engine.log:
2019-09-15 17:09:08,787-05 INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [fbdd9b7] Removing 'Wildix4-PBX_snapshot_metadata' disk
2019-09-15 17:09:13,399-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [fbdd9b7] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: General Storage Exception: ('5 [\' Logical volume d1c5f4da-9024-4599-9135-4dabafe86cf8/c5f64501-142b-4a38-9b9c-faf283a64b06 changed.\'] [\' WARNING: This metadata update is NOT backed up.\', \' WARNING: Combining activation change with other commands is not advised.\', \' /dev/mapper/36001405f448f60ecea7479f9d920ac51: Checksum error at offset 19198423553536\', " Couldn\'t read volume group metadata from /dev/mapper/36001405f448f60ecea7479f9d920ac51.", \' Metadata location on /dev/mapper/36001405f448f60ecea7479f9d920ac51 at 19198423553536 has invalid summary for VG.\', \' Failed to read metadata summary from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Failed to scan VG from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Volume group "d1c5f4da-9024-4599-9135-4dabafe86cf8" not found\', \' Cannot process volume group d1c5f4da-9024-4599-9135-4dabafe86cf8\']\nd1c5f4da-9024-4599-9135-4dabafe86cf8/{\'c5f64501-142b-4a38-9b9c-faf283a64b06\': ImgsPar(imgs=[\'aadbfed3-af54-4400-af79-65d09b74c687\'], parent=\'00000000-0000-0000-0000-000000000000\')}',)
2019-09-15 17:09:13,399-05 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [fbdd9b7] IrsBroker::Failed::DeleteImageGroupVDS: IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = General Storage Exception: ('5 [\' Logical volume d1c5f4da-9024-4599-9135-4dabafe86cf8/c5f64501-142b-4a38-9b9c-faf283a64b06 changed.\'] [\' WARNING: This metadata update is NOT backed up.\', \' WARNING: Combining activation change with other commands is not advised.\', \' /dev/mapper/36001405f448f60ecea7479f9d920ac51: Checksum error at offset 19198423553536\', " Couldn\'t read volume group metadata from /dev/mapper/36001405f448f60ecea7479f9d920ac51.", \' Metadata location on /dev/mapper/36001405f448f60ecea7479f9d920ac51 at 19198423553536 has invalid summary for VG.\', \' Failed to read metadata summary from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Failed to scan VG from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Volume group "d1c5f4da-9024-4599-9135-4dabafe86cf8" not found\', \' Cannot process volume group d1c5f4da-9024-4599-9135-4dabafe86cf8\']\nd1c5f4da-9024-4599-9135-4dabafe86cf8/{\'c5f64501-142b-4a38-9b9c-faf283a64b06\': ImgsPar(imgs=[\'aadbfed3-af54-4400-af79-65d09b74c687\'], parent=\'00000000-0000-0000-0000-000000000000\')}',), code = 200
2019-09-15 17:09:13,463-05 ERROR [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-8) [fbdd9b7] Command 'org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = General Storage Exception: ('5 [\' Logical volume d1c5f4da-9024-4599-9135-4dabafe86cf8/c5f64501-142b-4a38-9b9c-faf283a64b06 changed.\'] [\' WARNING: This metadata update is NOT backed up.\', \' WARNING: Combining activation change with other commands is not advised.\', \' /dev/mapper/36001405f448f60ecea7479f9d920ac51: Checksum error at offset 19198423553536\', " Couldn\'t read volume group metadata from /dev/mapper/36001405f448f60ecea7479f9d920ac51.", \' Metadata location on /dev/mapper/36001405f448f60ecea7479f9d920ac51 at 19198423553536 has invalid summary for VG.\', \' Failed to read metadata summary from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Failed to scan VG from /dev/mapper/36001405f448f60ecea7479f9d920ac51\', \' Volume group "d1c5f4da-9024-4599-9135-4dabafe86cf8" not found\', \' Cannot process volume group d1c5f4da-9024-4599-9135-4dabafe86cf8\']\nd1c5f4da-9024-4599-9135-4dabafe86cf8/{\'c5f64501-142b-4a38-9b9c-faf283a64b06\': ImgsPar(imgs=[\'aadbfed3-af54-4400-af79-65d09b74c687\'], parent=\'00000000-0000-0000-0000-000000000000\')}',), code = 200 (Failed with error StorageException and code 200)
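The log points at unreadable VG metadata on that LUN, so a few read-only LVM checks may help confirm where the corruption sits; the device path and VG name are taken from the log above:

pvck /dev/mapper/36001405f448f60ecea7479f9d920ac51        # verify the PV label/metadata area
vgck d1c5f4da-9024-4599-9135-4dabafe86cf8                 # VG consistency check
vgcfgbackup -f /root/engine-domain-vg-meta.txt d1c5f4da-9024-4599-9135-4dabafe86cf8   # dump the metadata while it is still readable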
Any help or input would be greatly appreciated.
5 years, 2 months
Re: Virt-Viewer issue
by Staniforth, Paul
Hello,
you should use remote-viewer; it's part of the virt-viewer package.
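For example, with the console.vv file that the portal's Console button downloads (path assumed):

sudo yum install virt-viewer            # or dnf, depending on the distribution; provides the remote-viewer binary
remote-viewer ~/Downloads/console.vv    # open the downloaded console file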
Paul Staniforth
School of Built Environment Engineering and Computing.
Leeds Beckett University
Networked Systems Analyst, Research and Engineering Support.
tel: +44 (0)113 28123754
email: p.staniforth(a)leedsbeckett.ac.uk
________________________________
From: Lozada, Agustin T <Agustin.Lozada(a)centerpointenergy.com>
Sent: 13 September 2019 15:38
To: users(a)ovirt.org
Subject: [ovirt-users] Virt-Viewer issue
I'm building my first VM in oVirt with Gluster, and I'm getting this issue when I open a console session using virt-viewer. Has anyone encountered this issue?
[inline screenshot of the error; not included in the archive]
5 years, 2 months
Does cluster upgrade wait for heal before proceeding to next host?
by Jayme
I've yet to have a cluster upgrade finish updating my three-host HCI
cluster. The most recent try was today moving from oVirt 4.3.3 to
4.3.5.5. The first host updates normally, but when it moves on to the
second host it fails to put it in maintenance and the cluster upgrade
stops.
I suspect this is due to the fact that after my hosts are updated it takes
10 minutes or more for all volumes to sync/heal. I have 2 TB SSDs.
Does the cluster upgrade process take heal time in to account before
attempting to place the next host in maintenance to upgrade it? Or is there
something else that may be at fault here, or perhaps a reason why the heal
process takes 10 minutes after reboot to complete?
5 years, 2 months