Q: Node host becomes non-operational after upgrade
by Andrei Verovski
Hi,
Today I upgraded oVirt Engine to 4.4.7.6, and then one of the nodes
(running CentOS Stream).
After the upgrade the node (node14) became non-operational. The same happens after “Reinstall”.
Additionally, there are MANY error messages:
Host node14 moved to Non-Operational state as host CPU type is not
supported in this cluster compatibility version or is not supported at all
Quite strange; before the upgrade this problem with the host CPU type didn't exist.
vdsm-networking service is running fine on node14.
vdsmd is running but logs this error message:
Jul 06 21:19:23 node14.xxx sudo[3565]: pam_systemd(sudo:session): Failed
to create session: Start job for unit user-0.slice failed with 'canceled'
I suspect that there are unnecessary repos enabled on my CentOS Stream
node, which leads to this kind of error.
Please can anyone check? Thanks in advance.
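One quick sketch to see what vdsm is actually reporting to the engine for the CPU check (assumption: `vdsm-client` is installed on the node and the capabilities JSON includes the `cpuModel`/`cpuFlags` keys):

```shell
# Show the CPU model and flags vdsm reports; the cluster CPU type is
# matched against these, so a changed/missing flag after the upgrade
# would explain the Non-Operational state.
vdsm-client Host getCapabilities | grep -E '"cpu(Model|Flags)"' | cut -c1-120
```

Comparing this output with a still-working node in the same cluster should show whether the flags really changed after the upgrade.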
------------
[root@node14 ~]# yum repolist enabled
repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
epel-next Extra Packages for Enterprise Linux 8 - Next - x86_64
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-ceph-pacific Ceph packages for x86_64
ovirt-4.4-centos-gluster8 CentOS-8 - Gluster 8
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-stream-advanced-virtualization Advanced Virtualization CentOS Stream packages for x86_64
ovirt-4.4-centos-stream-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-stream-ovirt44 CentOS-8 Stream - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:mdbarroso:ovsdbapp Copr repo for ovsdbapp owned by mdbarroso
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr repo for EL8_collection owned by sbonazzo
ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-openstack-train OpenStack Train Repository
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what will be shipped in upcoming RHEL
powertools CentOS Stream 8 - PowerTools
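To surface repos outside the usual set, a hedged filter sketch (the "expected" list below is an assumption; the `ovirt-4.4*` family normally comes from the ovirt-release44 package, and anything else — e.g. a bare `epel-next` — is worth a second look):

```shell
# Print enabled repo ids that are not the distro repos or the
# ovirt-4.4* family; empty output means nothing unexpected.
yum repolist enabled \
  | awk 'NR > 1 {print $1}' \
  | grep -vE '^(appstream|baseos|extras|powertools|ovirt-4\.4)' \
  || echo "no unexpected repos"
```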
Re: Strange Issue with imageio
by Gianluca Cecchi
On Sat, Apr 17, 2021 at 6:27 AM Nur Imam Febrianto <nur_imam(a)outlook.com>
wrote:
> Hi,
>
> Already submitted *Bug 1950593*
> <https://bugzilla.redhat.com/show_bug.cgi?id=1950593> for this issue.
>
> Thanks in advance.
>
> Regards,
>
> Nur Imam Febrianto
>
It seems I have the same problem with my 4.4.5.
Any info on whether it is fixed in the latest 4.4.6? There seems to be no
update on the bug page..
Gianluca
Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
by Nur Imam Febrianto
Already tried it on one 4.4.7 host, and it solves the issue.
Maybe this issue should be marked as critical, because the host is completely unusable if upgraded to 4.4.7.
😊
Regards,
Nur Imam Febrianto
From: Nur Imam Febrianto <nur_imam(a)outlook.com>
Sent: 07 July 2021 9:02
To: Klaas Demter <klaasdemter(a)gmail.com>; users(a)ovirt.org
Subject: [ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
Where should I do this? On the host, or on the HE?
Thanks.
Regards,
Nur Imam Febrianto
From: Klaas Demter <klaasdemter(a)gmail.com>
Sent: 07 July 2021 3:31
To: users(a)ovirt.org
Subject: [ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
https://bugzilla.redhat.com/show_bug.cgi?id=1979624
run: semodule -B; touch /.autorelabel; reboot
report back if it fixes everything
On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:
I’m having a similar problem. 15 hosts, 7 of them already upgraded to 4.4.7, and I can’t migrate any VM or the HE from a 4.4.6 host to a 4.4.7 one.
Regards,
Nur Imam Febrianto
From: Sandro Bonazzola <sbonazzo(a)redhat.com>
Sent: 06 July 2021 19:37
To: oVirt Users <users(a)ovirt.org>; Arik Hadas <ahadas(a)redhat.com>
Subject: [ovirt-users] Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
Hi,
I updated the hosted engine to 4.4.7, and one of the 2 nodes where the engine is running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7
Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to Node 1 but while trying to migrate hosted engine I get on Node 0 vdsm.log:
[vdsm.log excerpt snipped; it is identical to the one in the original message of this thread]
On node 0:
# ls -lZ /run/libvirt/common/system.token
ls: cannot access '/run/libvirt/common/system.token': No such file or directory
On node 1:
# ls -lZ /run/libvirt/common/system.token
-rw-------. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul 6 09:29 /run/libvirt/common/system.token
any clue?
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JG4HZW6RDT...
Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
by Klaas Demter
https://bugzilla.redhat.com/show_bug.cgi?id=1979624
run: semodule -B; touch /.autorelabel; reboot
report back if it fixes everything
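After the relabel and reboot, one hedged way to confirm the fix took effect (the expected SELinux type `virt_var_run_t` is taken from the working node's `ls -lZ` output later in this thread):

```shell
# Extract the SELinux type from the token's context and compare it with
# the type seen on the working node.
token=/run/libvirt/common/system.token
ctx=$(ls -Z "$token" | awk '{print $1}')    # e.g. system_u:object_r:virt_var_run_t:s0
seltype=$(echo "$ctx" | awk -F: '{print $3}')
[ "$seltype" = "virt_var_run_t" ] && echo "label OK" || echo "label is: $seltype"
```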
On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:
>
> I’m having similar problem like this. 15 host, 7 of them already
> upgraded to 4.4.7 and I can’t migrate any VM or HE from 4.4.6 host to
> 4.4.7.
>
> Regards,
>
> Nur Imam Febrianto
>
> *From: *Sandro Bonazzola <mailto:sbonazzo@redhat.com>
> *Sent: *06 July 2021 19:37
> *To: *oVirt Users <mailto:users@ovirt.org>; Arik Hadas
> <mailto:ahadas@redhat.com>
> *Subject: *[ovirt-users] Failing to migrate hosted engine from 4.4.6
> host to 4.4.7 host
>
> Hi,
>
> I update the hosted engine to 4.4.7 and one of the 2 nodes where the
> engine is running.
>
> Current status is:
>
> - Hosted engine at 4.4.7 running on Node 0
>
> - Node 0 at 4.4.6
>
> - Node 1 at 4.4.7
>
> Now, moving Node 0 to maintenance successfully moved the SPM from Node
> 0 to Node 1 but while trying to migrate hosted engine I get on Node 0
> vdsm.log:
>
> [vdsm.log excerpt snipped; it is identical to the one in the original message of this thread]
>
> On node 0:
>
> # ls -lZ /run/libvirt/common/system.token
> ls: cannot access '/run/libvirt/common/system.token': No such file or
> directory
>
> On node 1:
>
> # ls -lZ /run/libvirt/common/system.token
> -rw-------. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul 6
> 09:29 /run/libvirt/common/system.token
>
> any clue?
>
> --
>
> *Sandro Bonazzola*
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com <mailto:sbonazzo@redhat.com>
>
>
>
>
>
> *Red Hat respects your work life balance. Therefore there is no need
> to answer this email out of your office hours.*
>
>
Updates Failing
by Gary Pedretty
Getting errors trying to run dnf/yum update due to a vdsm issue.
yum update
Last metadata expiration check: 0:17:33 ago on Tue 06 Jul 2021 11:17:05 AM AKDT.
Error: Running QEMU processes found, cannot upgrade Vdsm.
Current running version of vdsm is
vdsm-4.40.60.7-1.el8
CentOS Stream
OS version: RHEL - 8.5 - 3.el8
kernel: 4.18.0-310.el8.x86_64
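That error is vdsm's upgrade guard: it refuses to update while qemu processes are running on the host. A hedged sketch to see what is blocking (assumption: VM processes show up as `qemu-kvm` with a `-name guest=...` argument, and the host is engine-managed so Maintenance mode will migrate the VMs away):

```shell
# List running qemu-kvm processes with their VM names; empty output
# means nothing should block the vdsm update.
ps -eo args | grep '[q]emu-kvm' | grep -o 'guest=[^,]*' || echo "no running VMs"
# Usual flow: put the host into Maintenance from the engine (VMs
# migrate off), then re-run 'yum update'.
```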
_______________________________
Gary Pedretty
IT Manager
Ravn Alaska
Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedretty(a)ravnalaska.com
Any way to terminate stuck export task
by Gianluca Cecchi
Hello,
in oVirt 4.3.10 an export job to the export domain is taking too long,
probably because the NFS server is slow.
How can I stop the task in a clean way?
The exported file stays at 4.5 GB.
vmstat on the host running the qemu-img process shows no throughput but
blocked processes:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd      free   buff    cache  si  so   bi  bo   in   cs us sy id wa st
 1  2      0 170208752 474412 16985752   0   0  719  72 2948 5677  0  0 96  4  0
 0  2      0 170207184 474412 16985780   0   0 3580  99 5043 6790  0  0 96  4  0
 0  2      0 170208800 474412 16985804   0   0 1379  41 2332 5527  0  0 96  4  0
and the generated file's timestamp keeps refreshing but its size does not change:
# ll -a /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
total 4675651
drwxr-xr-x.  2 vdsm kvm       1024 Jul  3 14:10 .
drwxr-xr-x. 12 vdsm kvm       1024 Jul  3 14:10 ..
-rw-rw----.  1 vdsm kvm 4787863552 Jul  3 14:33 bb94ae66-e574-432b-bf68-7497bb3ca9e6
-rw-r--r--.  1 vdsm kvm        268 Jul  3 14:10 bb94ae66-e574-432b-bf68-7497bb3ca9e6.meta
# du -sh /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
4.5G /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
The VM has two disks, 35 GB and 300 GB, not full but fairly heavily used.
Can I simply kill the qemu-img processes on the chosen hypervisor (I
suppose the SPM one)?
Any way to track down why it is so slow?
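One hedged way to tell "slow but progressing" from "completely stuck" is to watch the qemu-img process's per-process I/O counters (Linux `/proc/[pid]/io`; assumes the export runs as a `qemu-img convert` process on this host):

```shell
# Find the export's qemu-img process and sample its I/O counters twice;
# if read_bytes/write_bytes grow between samples, it is just slow, not stuck.
pid=$(pgrep -f 'qemu-img convert' | head -n 1)
grep -E '^(read|write)_bytes' "/proc/$pid/io"
sleep 10
grep -E '^(read|write)_bytes' "/proc/$pid/io"
```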
Thanks,
Gianluca
Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
by Sandro Bonazzola
Hi,
I updated the hosted engine to 4.4.7, and one of the 2 nodes where the engine
is running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7
Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to
Node 1 but while trying to migrate hosted engine I get on Node 0 vdsm.log:
2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] START
repoStats(domains=()) from=::ffff:10.46.8.133,35048,
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)
2021-07-06 12:25:07,882+0000 INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 'delay': '0.00114065', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=::ffff:10.46.8.133,35048, task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)
2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] START
multipath_health() from=::ffff:10.46.8.133,35048,
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)
2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] FINISH
multipath_health return={} from=::ffff:10.46.8.133,35048,
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)
2021-07-06 12:25:07,883+0000 ERROR (migsrc/b2072331) [virt.vm] (vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable to open system token /run/libvirt/common/system.token: Permission denied (migration:294)
2021-07-06 12:25:07,888+0000 INFO (jsonrpc/5) [api.host] FINISH
getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
(suppressed)} from=::ffff:10.46.8.133,35048 (api:54)
2021-07-06 12:25:08,166+0000 ERROR (migsrc/b2072331) [virt.vm]
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate
(migration:467)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in _regular_run
    time.time(), machineParams
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in _perform_migration
    self._migration_flags)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in call
    return getattr(self._vm._dom, name)(*a, **kw)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in migrateToURI3
    raise libvirtError('virDomainMigrateToURI3() failed')
libvirt.libvirtError: can't connect to virtlogd: Unable to open system token /run/libvirt/common/system.token: Permission denied
2021-07-06 12:25:08,197+0000 INFO (jsonrpc/6) [api.virt] START
getMigrationStatus() from=::ffff:10.46.8.133,35048, flow_id=4e86b85d,
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)
2021-07-06 12:25:08,197+0000 INFO  (jsonrpc/6) [api.virt] FINISH getMigrationStatus return={'status': {'code': 0, 'message': 'Done'}, 'migrationStats': {'status': {'code': 12, 'message': 'Fatal error during migration'}, 'progress': 0}} from=::ffff:10.46.8.133,35048, flow_id=4e86b85d, vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)
On node 0:
# ls -lZ /run/libvirt/common/system.token
ls: cannot access '/run/libvirt/common/system.token': No such file or
directory
On node 1:
# ls -lZ /run/libvirt/common/system.token
-rw-------. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul 6 09:29
/run/libvirt/common/system.token
any clue?
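A hedged diagnostic sketch to narrow this down on node 0 (the token being missing plus a permission error on the other side smells like SELinux or a libvirt version skew; the audit-log grep assumes auditd is logging to the default path):

```shell
# Compare libvirt and SELinux state between the two nodes.
rpm -q libvirt-daemon                          # version skew vs node 1?
ls -Z /run/libvirt/common/ 2>/dev/null         # token present, and labeled how?
grep 'type=AVC' /var/log/audit/audit.log | tail -n 5   # recent SELinux denials?
```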
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Re: Issue With HE HA after upgrading to 4.4.7
by Yedidyah Bar David
On Tue, Jul 6, 2021 at 5:08 PM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> Hi. Recently I upgraded our server cluster from 4.4.6 to 4.4.7. After upgrading the HE and several hosts, every host that was upgraded and activated has an issue with its HA score: it always shows HA score 0, and rebooting the host doesn't help. Any idea how to check this issue?
Calculating the score takes time, and spreading this around the
cluster also takes time.
You should find more information in the ovirt-hosted-engine-ha logs
(both agent and broker).
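A sketch for checking this from any HE host (assumes the standard ovirt-hosted-engine-ha layout, with logs under /var/log/ovirt-hosted-engine-ha/):

```shell
# See the score each host currently reports, and whether the HA
# services that compute it are healthy.
hosted-engine --vm-status | grep -i score
systemctl --no-pager --full status ovirt-ha-agent ovirt-ha-broker
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log
```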
Good luck and best regards,
--
Didi