ovirt-guest-agent failure on latest Debian 9 Stretch
by Andrei Verovski
Hi,
I just installed the latest Debian 9 Stretch under oVirt 4.2 and got this error:
# tail -n 1000 ovirt-guest-agent.log
MainThread::INFO::2018-03-26 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent
MainThread::ERROR::2018-03-26 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in <module>
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'
I followed this:
https://bugzilla.redhat.com/show_bug.cgi?id=1472293
and ran
touch /etc/udev/rules.d/55-ovirt-guest-agent.rules
# edit /etc/udev/rules.d/55-ovirt-guest-agent.rules
SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", GROUP="ovirtagent"
udevadm trigger --subsystem-match="virtio-ports"
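For reference, a quick way to check which virtio serial port names the guest actually exposes (names and output will differ per VM; this is only what to look for):

# ls -l /dev/virtio-ports/
# cat /sys/class/virtio-ports/*/name

The udev rule and the agent config both have to match one of the names shown there (the traceback above is looking for com.redhat.rhevm.vdsm, while the rule refers to ovirt-guest-agent.0).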
and this:
http://lists.ovirt.org/pipermail/users/2018-January/086101.html
touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf # <- it didn't exist, so I created it.
# edit /etc/ovirt-guest-agent.conf
[virtio]
device = /dev/virtio-ports/ovirt-guest-agent.0
reboot
Yet I still have the same problem and error message.
How can I solve it?
Thanks in advance
Andrei
VDI over WAN optimization
by Andreas Huser
Hi, I have a question about VDI over WAN. The traffic is very high when I watch videos or online streams; 100% of my Internet bandwidth capacity is used.
Does anyone have an idea how I can optimize SPICE for WAN?
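For context, SPICE itself exposes WAN-oriented knobs at the QEMU level (image compression, JPEG/zlib WAN compression, video stream detection). Purely as an illustration of the kind of options involved, not an oVirt-specific setting, a -spice argument can look like this:

# illustrative qemu -spice suboptions for constrained links; port/addr are placeholders
-spice port=5900,addr=0.0.0.0,image-compression=auto_glz,jpeg-wan-compression=always,zlib-glz-wan-compression=always,streaming-video=filter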
Thanks a lot
Andreas
Host non-responsive after engine failure
by spfma.tech@e.mail.fr
Hi,

I had an electrical failure on the server hosting the engine.
After the reboot I was able to gain access to it again and log into the GUI, but the currently online node is not leaving "not responsive" status.
Of course, the network storage paths are still mounted and the VMs are running, but I can't gain control again.

In vdsmd.log, I have a lot of messages like this one:
2018-03-27 12:03:11,281+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674)
2018-03-27 12:03:16,286+0200 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=b90f550e-ee68-4a91-a7c6-3b60f11c3978 (api:46)
2018-03-27 12:03:16,286+0200 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=b90f550e-ee68-4a91-a7c6-3b60f11c3978 (api:52)
2018-03-27 12:03:16,287+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674)
2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] START repoStats(domains=()) from=internal, task_id=067714b4-8172-4eec-92bb-6ac16586a657 (api:46)
2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] FINISH repoStats return={} from=internal, task_id=067714b4-8172-4eec-92bb-6ac16586a657 (api:52)
2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] START multipath_health() from=internal, task_id=e97421fb-5d5a-4291-9231-94bc1961cc49 (api:46)
2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=e97421fb-5d5a-4291-9231-94bc1961cc49 (api:52)
2018-03-27 12:03:20,458+0200 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,57576 (api:46)
2018-03-27 12:03:20,462+0200 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,57576 (api:52)
2018-03-27 12:03:20,464+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573)
2018-03-27 12:03:20,474+0200 INFO (jsonrpc/7) [api.host] START getAllVmIoTunePolicies() from=::1,57576 (api:46)
2018-03-27 12:03:20,475+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'c33a30ba-7fe8-4ff4-aeac-80cb396b9670': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/593f6f61-cb7f-4c53-b6e7-617964c222e9/329b2e8b-6cf9-4b39-9190-14a32697ce44', 'name': 'sda'}]}, 'e8a90739-7737-413e-8edc-a373192f4476': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/97e078f7-69c6-46c2-b620-26474cd65929/bbb4a1fb-5594-4750-be71-c6b55dca3257', 'name': 'vda'}]}, '3aec5ce4-691f-487c-a916-aa7f7a664d8c': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/46a65a1b-d00a-452d-ab9b-70862bb5c053/a4d2ad44-5577-4412-9a8c-819d1f12647a', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/0c3a13ce-8f7a-4034-a8cc-12f795b8aa17/c48e0e37-e54b-4ca3-b3ed-b66ead9fad44', 'name': 'sdb'}]}, '5de1de8f-ac01-459f-b4b8-6d1ed05c8ca3': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/320ac81c-7db7-4ec0-a271-755e91442b6a/8bfc95c5-318c-43dd-817f-6c7a8a7a5b43', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/e7ad86bb-3c63-466b-82cf-687164c46f7b/613ea0ce-ed14-4185-b3fd-36490441f889', 'name': 'sdb'}]}, '5d548a09-a397-4aac-8b1f-39002e014f5f': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/c7421014-7c5f-45ad-a948-caa83b8ce3e7/ae0ba893-69af-4b67-a262-b739596d5c95', 'name': 'sda'}]}, '168b01b1-5ec8-41dd-808e-fa9f66cea718': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/b9b7902a-7a62-4826-bfda-dff260b9fcd1/d05db17c-9908-4bfb-a74b-4aa944510a56', 'name': 'vda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/564b3848-b6d5-4deb-910f-5b6f2fdbccc5/4f89ff25-2d3b-40b9-9bbc-9a6b6995346c', 'name': 'vdb'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/738e0704-8484-483b-ae67-091715496152/2f811423-6bab-4966-9c00-9d3b72429328', 'name': 'vdc'}]}}} from=::1,57576 (api:52)
2018-03-27 12:03:20,475+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:573)
2018-03-27 12:03:21,292+0200 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=a35602b2-7d5c-4e87-86cd-ede17c62488f (api:46)
2018-03-27 12:03:21,292+0200 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=a35602b2-7d5c-4e87-86cd-ede17c62488f (api:52)
2018-03-27 12:03:21,293+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674)

So I see no error.

But in /var/log/messages:
Mar 27 12:01:43 pfm-srv-virt-2 libvirtd: 2018-03-27 10:01:43.569+0000: 71793: error : qemuDomainAgentAvailable:6030 : Guest agent is not responding: QEMU guest agent is not connected

I have restarted the libvirtd and vdsmd services.

Is there something else to do?

Regards
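For anyone comparing notes, a couple of commands on the host give a quick picture of what vdsm itself reports (vdsm-client ships with vdsm 4.20+; shown only as an example of what to check, not output from this host):

# systemctl status vdsmd libvirtd
# vdsm-client Host getCapabilities
# vdsm-client Host getStats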
Workflow after restoring engine from backup
by Sven Achtelik
Hi All,
I had an issue with the storage that hosted my engine VM. The disk got corrupted and I needed to restore the engine from a backup. That worked as expected; I just haven't started the engine yet. I know that after the backup was taken, some machines were migrated around before the engine disks failed. My question is: what will happen once I start the engine service with the restored backup on it? Will it query the hosts for the running VMs, or will it assume that the VMs are still on the hosts where they resided at the point of the backup? Would I need to change the DB manually to let the engine know where the VMs are running at this point? What will happen to HA VMs? I feel that it might try to start them a second time. My biggest issue is that I can't get a service window to shut down all VMs and then let them be restarted by the engine.
Is there a known workflow for that?
Thank you,
Sven
[graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 'management'): "cluster.server-quorum-type:", allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
by TomK
Hey All,
Wondering if anyone has seen this happen and can provide some hints.
After numerous failed attempts to add a physical host to an oVirt engine VM
that already had a Gluster volume, I get these errors and I'm
unable to start up Gluster anymore:
[2018-03-27 07:01:37.511304] E [MSGID: 101021]
[graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume
'management'): "cluster.server-quorum-type:"
allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:01:37.511597] E [MSGID: 100026]
[glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the
graph
[2018-03-27 07:01:37.511791] E [graph.c:1102:glusterfs_graph_destroy]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd]
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55f06827cf60]
-->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84)
[0x7f519a816c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:01:37.511839] W [glusterfsd.c:1393:cleanup_and_exit]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd]
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55f06827cf73]
-->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55f06827c49b] ) 0-:
received signum (-1), shutting down
[2018-03-27 07:02:52.223358] I [MSGID: 100030] [glusterfsd.c:2556:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-03-27 07:02:52.229816] E [MSGID: 101021]
[graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume
'management'): "cluster.server-quorum-type:"
allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:02:52.230125] E [MSGID: 100026]
[glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the
graph
[2018-03-27 07:02:52.230320] E [graph.c:1102:glusterfs_graph_destroy]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd]
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55832612af60]
-->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84)
[0x7f9a1ded4c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:02:52.230369] W [glusterfsd.c:1393:cleanup_and_exit]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd]
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55832612af73]
-->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55832612a49b] ) 0-:
received signum (-1), shutting down
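For what it's worth, that parser error points at line 19 of the glusterd volfile (most likely /etc/glusterfs/glusterd.vol, which is where the 'management' volume is defined). Options in a volfile have to use the 'option <key> <value>' form, and the message suggests the file ended up with a bare 'cluster.server-quorum-type:' line instead. A stripped-down illustration of valid syntax (not a complete glusterd.vol, and whether that key belongs in this file at all is a separate question):

volume management
    type mgmt/glusterd
    option cluster.server-quorum-type server
end-volume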
--
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
Network interface persistence
by Sverker Abrahamsson
I have a number of VMs running under oVirt 4.2, most CentOS 7 and
one Debian. The issue is that when multiple network interfaces are assigned
they don't persist; it varies at boot which interface will be which.
I've tried various methods, setting UUID and HWADDRESS in the ifcfg file
and an /etc/udev/rules.d/70-persistent-net.rules file like the one below:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1a:4a:16:01:63", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1a:4a:16:01:6d", KERNEL=="eth*", NAME="eth1"
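For comparison, the same MAC-to-name pinning can also be expressed as systemd .link files, which systemd-udevd applies on both CentOS 7 and Debian (a sketch only, reusing the MAC addresses from the rules above; file names are arbitrary):

# /etc/systemd/network/70-persistent-eth0.link
[Match]
MACAddress=00:1a:4a:16:01:63
[Link]
Name=eth0

# /etc/systemd/network/71-persistent-eth1.link
[Match]
MACAddress=00:1a:4a:16:01:6d
[Link]
Name=eth1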
I've also tried, in the GUI, changing the network profile each virtual
card is attached to, but even then the interfaces in the VM end up the opposite of
what I had intended.
How can I achieve network interface persistence with oVirt? The
Debian VM is an appliance, so I can't log in to it to change the network
interfaces.
/Sverker
novnc_websocket_proxy_fqdn
by 董青龙
Hi all,
I am using noVNC to connect to VMs in my oVirt 4.1 environment. My engine FQDN was set to "engine1.test.org" before I executed "engine-setup". I could connect to VMs using "engine1.test.org" on client1, where "engine1.test.org" can be resolved. Now I want to connect to VMs using "engine2.test.org" on client2, where only "engine2.test.org" can be resolved. I have set "SSO_ALTERNATE_ENGINE_FQDNS="engine2.test.org"" in /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf, but it failed: it said "can't connect to websocket proxy server wss://engine1.test.org:6100". So where can I modify this websocket proxy parameter?
Can anyone help? Thanks!
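As a pointer for anyone searching later: the host:port that clients are sent to for the websocket proxy is the engine configuration value WebSocketProxy, so engine-config is the usual place to look. A sketch of what that looks like (not verified against this exact setup, and note it is a single value, so changing it also affects client1):

# engine-config -g WebSocketProxy
# engine-config -s WebSocketProxy=engine2.test.org:6100
# systemctl restart ovirt-engine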
Re: [ovirt-users] VDSM SSL validity
by Punaatua PAINT-KOUI
I just tried it, and it works! Thanks for your help.
Here are the steps that I followed:
- connect to the engine database using psql
- run the query as you gave it: select fn_db_update_config_value('VdsCertificateValidityInYears','2','general');
- verify the option by running: select * from vdc_options where option_name like '%VdsCer%';
- restart ovirt-engine
New hosts will have certificates with a validity under 2 years. I
tested it with an existing host by putting it in maintenance and then reinstalling it.
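Put together, the sequence on the engine machine looks roughly like this (the psql invocation is just one way to reach the engine database; adjust the user and database name to your installation):

# su - postgres -c 'psql engine'
engine=# select fn_db_update_config_value('VdsCertificateValidityInYears','2','general');
engine=# select * from vdc_options where option_name like '%VdsCer%';
engine=# \q
# systemctl restart ovirt-engine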
Thanks !
These links also helped me:
https://www.ovirt.org/develop/developer-guide/db-issues/dbupgrade/
https://www.ovirt.org/documentation/internal/database-upgrade-procedure/
>
> 2018-03-22 0:49 GMT-10:00 Yedidyah Bar David <didi(a)redhat.com>:
>
>> On Thu, Mar 22, 2018 at 11:58 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>> > Didi, Sandro - Do you know if this option VdsCertificateValidityInYears
>> is
>> > present in 4.2?
>>
>> I do not think it ever was exposed to engine-config - I think it's a
>> bug in that page.
>>
>> You should be able to update it with psql, if needed - something like
>> this:
>>
>> select fn_db_update_config_value('VdsCertificateValidityInYears','
>> 2','general');
>>
>> I didn't try this myself.
>>
>> To get an sql prompt, you can use engine-psql, which should be
>> available in 4.2.2,
>> or simply copy the script from the patch page:
>>
>> https://gerrit.ovirt.org/#/q/I4d9737ea72df0d7e654776a1085901284a523b7f
>>
>> Also, some people claim that the use of certificates for communication
>> between
>> the engine and the hosts is an internal implementation detail, which
>> should not
>> be relevant to PCI DSS requirements. See e.g.:
>>
>> https://ovirt.org/develop/release-management/features/infra/pkireduce/
>>
>> >
>> > On Mon, Mar 19, 2018 at 4:43 AM, Punaatua PAINT-KOUI <
>> punaatua.pk(a)gmail.com>
>> > wrote:
>> >>
>> >> Up
>> >>
>> >> 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI <punaatua.pk(a)gmail.com>:
>> >>>
>> >>> Any idea someone ?
>> >>>
>> >>> Le 14 févr. 2018 23:19, "Punaatua PAINT-KOUI" <punaatua.pk(a)gmail.com>
>> a
>> >>> écrit :
>> >>>>
>> >>>> Hi,
>> >>>>
>> >>>> I setup an hyperconverged solution with 3 nodes, hosted engine on
>> >>>> glusterfs.
>> >>>> We run this setup in a PCI-DSS environment. According to PCI-DSS
>> >>>> requirements, we are required to reduce the validity of any
>> certificate
>> >>>> under 39 months.
>> >>>>
>> >>>> I saw in this link
>> >>>> https://www.ovirt.org/develop/release-management/features/infra/pki/
>> that i
>> >>>> can use the option VdsCertificateValidityInYears at engine-config.
>> >>>>
>> >>>> I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to
>> >>>> edit the option with engine-config --all and engine-config --list
>> but the
>> >>>> option is not listed
>> >>>>
>> >>>> Am i missing something ?
>> >>>>
>> >>>> I thing i can regenerate a VDSM certificate with openssl and the CA
>> conf
>> >>>> in /etc/pki/ovirt-engine on the hosted-engine but i would rather
>> modifiy the
>> >>>> option for future host that I will add.
>> >>>>
>> >>>> --
>> >>>> -------------------------------------
>> >>>> PAINT-KOUI Punaatua
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> -------------------------------------
>> >> PAINT-KOUI Punaatua
>> >> Licence Pro Réseaux et Télecom IAR
>> >> Université du Sud Toulon Var
>> >> La Garde France
>> >>
>> >> _______________________________________________
>> >> Users mailing list
>> >> Users(a)ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/users
>> >>
>> >
>>
>>
>>
>> --
>> Didi
>>
>
>
>
> --
> -------------------------------------
> PAINT-KOUI Punaatua
> Licence Pro Réseaux et Télecom IAR
> Université du Sud Toulon Var
> La Garde France
>
--
-------------------------------------
PAINT-KOUI Punaatua
Licence Pro Réseaux et Télecom IAR
Université du Sud Toulon Var
La Garde France
Re: [ovirt-users] Problem to upgrade level cluster to 4.1
by Arik Hadas
On Sun, Mar 25, 2018 at 3:06 PM, Marcelo Leandro <marceloltmm(a)gmail.com>
wrote:
> Good morning,
> follow the log:
>
> The real name of vms :
> VPS-Jarauto
> VPS-Jarauto-Firewall
> VPS-Varejo-Mais
>
> Thanks,
>
Thanks for sharing the log.
Two of these VMs are configured with the custom-property 'macspoof'. That
property is not supposed to be used since 4.0 (IIRC).
Nevertheless, defining this property for version 4.1 should solve this
problem, see [1].
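(For reference, defining such a custom property for the 4.1 cluster level is typically done with engine-config, roughly as below; take the exact key and regexp from the linked bug, and if you already use other custom properties, include them in the same value, since -s replaces it:)

# engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$" --cver=4.1
# systemctl restart ovirt-engine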
The third VM is in a weird state - it cannot be updated because the host it
runs on doesn't support the number of CPUs defined for this VM.
You can either shut this VM down during the cluster upgrade or try to
migrate it to another host with more CPUs available.
Unfortunately, I can't correlate those issues with the VMs you mentioned,
but if you start with the first issue you'll find which one
of them is that 'third VM'.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373573
>
> 2018-03-25 3:29 GMT-03:00 Arik Hadas <ahadas(a)redhat.com>:
>
>>
>>
>> On Fri, Mar 23, 2018 at 9:09 PM, Marcelo Leandro <marceloltmm(a)gmail.com>
>> wrote:
>>
>>> Hello,
>>> I am try update the cluster level but i had this menssage erro:
>>>
>>> Error while executing action: Update of cluster compatibility version
>>> failed because there are VMs/Templates [VPS-NAME01, VPS-NAME02,
>>> VPS-NAME03] with incorrect configuration. To fix the issue, please go
>>> to each of them, edit and press OK. If the save does not pass, fix the
>>> dialog validation.
>>>
>>>
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME01],
>>> Message: [No Message]
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME02],
>>> Message: [No Message]
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME03],
>>> Message: [No Message]
>>>
>>>
>>> I am already open the edit box vm and close with ok button how show in
>>> the erro menssagem:
>>> To fix the issue, please go to each of them, edit and press OK. If the
>>> save does not pass, fix the dialog validation.
>>>
>>> But not return error when save.
>>>
>>> Anyone can help me ?
>>>
>>
>> Can you please share the engine.log?
>>
>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>