Failed to complete VM name creation fills the log every minute
by acavasiljevic@gmail.com
Hi,
Could you please help?
Every 30 seconds I get the error "Failed to complete VM name creation".
The task was started but stopped after a while.
I have killed all tasks:
$ su - postgres
$ psql -d engine -U postgres
> select * from job order by start_time desc;
> select DeleteJob('702e9f6a-e2a3-4113-bd7d-3757ba6bc4ef');
I am running:
oVirt Engine Version: 4.1.9.1-1.el7.centos
Thank you very much.
Best regards,
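
For reference, a sketch of the same cleanup that first lists every job still stuck in STARTED; the job table and the DeleteJob() function are part of the engine schema, but verify the column names against your own 4.1 database before running anything:

  $ su - postgres
  $ psql -d engine -U postgres
  -- list jobs that never completed
  > select job_id, action_type, status, start_time from job where status = 'STARTED' order by start_time;
  -- '<job_id>' is a placeholder for an id returned by the query above
  > select DeleteJob('<job_id>');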
Re: External ceph storage
by Luca 'remix_tj' Lorenzetto
Remember always to reply to the list; your feedback will get back to
the devs!
On Sun, 27 May 2018 at 19:01, Leo David <leoalex(a)gmail.com> wrote:
> Indeed, you're right. I will set up the gateways while waiting for a possible
> future native Ceph connection, which in my opinion would bring a lot of power
> to oVirt.
>
> On Sun, May 27, 2018, 19:48 Luca 'remix_tj' Lorenzetto <
> lorenzetto.luca(a)gmail.com> wrote:
>
>> I think it is the best option. Setting up a small OpenStack deployment only
>> for this is not worth the hassle.
>>
>> Luca
>>
>> On Sun, 27 May 2018 at 18:44, Leo David <leoalex(a)gmail.com> wrote:
>>
>>> OK, so since I don't have OpenStack installed (which would put Cinder
>>> between Ceph and oVirt), the only remaining option is the iSCSI gateway...
>>> Thank you very much!
>>>
>>> On Sun, May 27, 2018, 18:14 Luca 'remix_tj' Lorenzetto <
>>> lorenzetto.luca(a)gmail.com> wrote:
>>>
>>>> No,
>>>>
>>>> The only ways you have are to configure Cinder to manage the Ceph pool or,
>>>> alternatively, to deploy an iSCSI gateway; no other ways are
>>>> available at the moment.
>>>>
>>>> So you can't use RBD directly.
>>>>
>>>> Luca
>>>>
>>>> On Sun, 27 May 2018 at 16:54, Leo David <leoalex(a)gmail.com> wrote:
>>>>
>>>>> Thank you Luca,
>>>>> At the moment I would try the Cinder storage provider, since we
>>>>> already have a Proxmox cluster directly connecting to Ceph. The problem is
>>>>> that I just could not find a straightforward way to do this,
>>>>> i.e. specify the Ceph monitors and Ceph pool to connect to. Can oVirt
>>>>> directly connect to Ceph monitors? If so, how should the configuration
>>>>> be done?
>>>>> Thank you very much!
>>>>>
>>>>>
>>>>> On Sun, May 27, 2018, 17:20 Luca 'remix_tj' Lorenzetto <
>>>>> lorenzetto.luca(a)gmail.com> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Yes, using Cinder or through an iSCSI gateway.
>>>>>>
>>>>>> For a simpler setup I suggest the second option.
>>>>>>
>>>>>> Luca
>>>>>>
>>>>>> On Sun, 27 May 2018 at 16:08, Leo David <leoalex(a)gmail.com> wrote:
>>>>>>
>>>>>>> Hello everyone,
>>>>>>> I am new to ovirt and very impressed of its features. I would like
>>>>>>> to levereage on our existing ceph cluster to provide rbd images for vm
>>>>>>> hdds, is this possible to achieve ?
>>>>>>> Thank you very much !
>>>>>>> Regards,
>>>>>>> Leo
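
For readers hitting the same question: the iSCSI gateway route described above is typically built with ceph-iscsi's gwcli tool. A minimal sketch follows; every name, IQN and address in it is hypothetical, and the exact gwcli paths and CHAP syntax vary between ceph-iscsi releases, so check the documentation for yours:

  # on a node where the ceph-iscsi gateway packages are configured
  gwcli
  /> cd /iscsi-targets
  /> create iqn.2018-05.com.example:ovirt-gw
  /> cd /iscsi-targets/iqn.2018-05.com.example:ovirt-gw/gateways
  /> create gw1.example.com 192.168.1.11
  /> create gw2.example.com 192.168.1.12
  /> cd /disks
  /> create pool=rbd image=ovirt-lun0 size=500G      # the RBD image to expose
  /> cd /iscsi-targets/iqn.2018-05.com.example:ovirt-gw/hosts
  /> create iqn.1994-05.com.redhat:ovirt-host1       # an oVirt host's initiator IQN
  /> auth chap=ovirt/secret12chars                   # CHAP syntax differs by version
  /> disk add rbd/ovirt-lun0

The resulting target can then be attached in oVirt as an ordinary iSCSI storage domain.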
Re: Failing to install self hosted engine - 4.2.4
by Simone Tiraboschi
On Sun, May 27, 2018 at 11:32 AM, Maton, Brett <matonb(a)ltresources.co.uk>
wrote:
> Thanks Simone,
>
> I think the problem is that /tmp is mounted noexec by default on my
> servers.
> Remounting and enabling exec seems to fix the problem.
>
> I think that allowing users to specify a working temp space would be
> better than assuming that /tmp is mounted exec and also in my case big
> enough to store the hosted engine vm (which the installer doesn't check, it
> just fails late in the install process).
>
>
If you execute hosted-engine-setup from the CLI it will honor the TMPDIR env
variable (with /var/tmp as the default), but I don't think your issue is
there; indeed, your bootstrap engine VM was running.
Your issue was related to the execution of the host-deploy code: host-deploy
is the code used to configure the host to be used as a hypervisor. The engine
copies it over ssh to the host under /tmp and executes it there, hence your
issue with noexec on /tmp.
I'm not aware of any way to customize the host-deploy location.
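
A practical workaround along those lines, assuming /tmp is a separate mount on the host; the remount is a standard mount option change, and only the TMPDIR part affects hosted-engine-setup itself:

  # temporarily allow exec on /tmp so host-deploy can run there
  mount -o remount,exec /tmp
  # keep hosted-engine-setup's own temporary files on a roomier filesystem
  TMPDIR=/var/tmp hosted-engine --deploy
  # restore the hardened mount afterwards
  mount -o remount,noexec /tmp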
> Regards,
> Brett
>
> On 26 May 2018 at 21:25, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>
>>
>>
>> On Sat, May 26, 2018 at 11:10 AM, Maton, Brett <matonb(a)ltresources.co.uk>
>> wrote:
>>
>>> Hi,
>>>
>>> I can't see in the log what the problem is, I expect it's something
>>> simple though...
>>> Hoping that someone can see what the problem is.
>>>
>>> engine setup log attached.
>>>
>>
>> 2018-05-26 09:56:09,676+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils
>> ansible_utils._process_output:98 fatal: [localhost]: FAILED! =>
>> {"ansible_facts": {"ovirt_hosts": [{"address": "node1.example.com",
>> "affinity_labels": [], "auto_numa_status": "unknown", "certificate":
>> {"organization": "testlab.lan", "subject": "O=testlab.lan,CN=node1.exampl
>> e.com"}, "cluster": {"href": "/ovirt-engine/api/clusters/dd
>> a509fe-60c0-11e8-ac4e-00163e2285d7", "id": "dda509fe-60c0-11e8-ac4e-00163e2285d7"},
>> "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough":
>> {"enabled": false}, "devices": [], "external_network_provider_configurations":
>> [], "external_status": "ok", "hardware_information":
>> {"supported_rng_sources": []}, "hooks": [], "href":
>> "/ovirt-engine/api/hosts/b0976daa-e4b5-4881-a59b-49f9eb51f7c5", "id":
>> "b0976daa-e4b5-4881-a59b-49f9eb51f7c5", "katello_errata": [],
>> "kdump_status": "unknown", "ksm": {"enabled": false},
>> "max_scheduling_memory": 0, "memory": 0, "name": "node1.example.com",
>> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported":
>> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port":
>> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false,
>> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
>> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
>> {"fingerprint": "SHA256:xh1kvHMx+qp3q2L4lxf4fwra9lCAVxlfAGiVBVa7I7M",
>> "port": 22}, "statistics": [], "status": "install_failed",
>> "storage_connection_extensions": [], "summary": {"total": 0}, "tags":
>> [], "transparent_huge_pages": {"enabled": false}, "type": "rhel",
>> "unmanaged_networks": [], "update_available": false}]}, "attempts": 120,
>> "changed": false}
>>
>> The engine failed to deploy your host, and indeed we see "status":
>> "install_failed", but to understand why it failed you have to consult the
>> host-deploy logs.
>> You can find them under /var/log/ovirt-engine/host-deploy on the engine
>> VM, or on your host in a directory inside the hosted-engine-setup logs dir
>> if hosted-engine-setup correctly managed to fetch them from the engine VM.
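
A quick way to pull up the newest of those logs on the engine VM (the directory is as given above; the file-name pattern is an assumption, adjust to what you find there):

  ls -lt /var/log/ovirt-engine/host-deploy/ | head
  less "/var/log/ovirt-engine/host-deploy/$(ls -t /var/log/ovirt-engine/host-deploy/ | head -1)"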
>>
>>
>>
>>>
>>> Regards,
>>> Brett
>>>
Re: hosted engine setup error
by dhy336@sina.com
Hi Yanir, this is my vdsm.log. Did vdsm running prepareForShutdown lead to the engine VM shutdown? Why is the engine VM shut down when deploying the hosted engine? Thanks.
2018-05-27 18:24:16,020+0800 INFO (MainThread) [vds] Received signal 15, shutting down (vdsmd:67)
2018-05-27 18:24:21,028+0800 INFO (MainThread) [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:703)
2018-05-27 18:24:21,030+0800 INFO (MainThread) [vds] Stopping http server (http:79)
2018-05-27 18:24:21,030+0800 INFO (http) [vds] Server stopped (http:69)
2018-05-27 18:24:21,031+0800 INFO (MainThread) [root] Unregistering all secrets (secret:91)
2018-05-27 18:24:21,031+0800 INFO (MainThread) [vds] Stopping QEMU-GA poller (qemuguestagent:116)
2018-05-27 18:24:21,032+0800 INFO (MainThread) [vdsm.api] START prepareForShutdown(options=None) from=internal, task_id=d3137135-e4f8-44ef-ac4c-d1687606fb02 (api:46)
2018-05-27 18:24:21,094+0800 INFO (MainThread) [storage.Monitor] Shutting down domain monitors (monitor:222)
2018-05-27 18:24:21,094+0800 INFO (MainThread) [storage.check] Stopping check service (check:104)
2018-05-27 18:24:21,094+0800 INFO (check/loop) [storage.asyncevent] Stopping <EventLoop running=True closed=False at 0x43126736> (asyncevent:220)
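
One way to narrow down who requested that shutdown is to grep the host-side logs around the same timestamps; the log locations are the oVirt defaults, and the patterns are only a starting point:

  # on the host running the deploy
  grep -iE 'destroy|shutdown' /var/log/vdsm/vdsm.log | tail
  grep -iE 'shutdown|poweroff' /var/log/ovirt-hosted-engine-setup/*.log | tail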
----- Original Message -----
From: <dhy336(a)sina.com>
To: "Yanir Quinn" <yquinn(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: [ovirt-users] Re: hosted engine setup error
Date: 2018-05-27 18:51
Thanks. I find that when hosted-engine --deploy runs [Get ovirtmgmt route table id], my engine VM is shut down, and /var/log/messages shows my engine VM was powered down by guest-shutdown:
[ INFO ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.004175", "end": "2018-05-26 17:41:11.892218", "rc": 0, "start": "2018-05-26 17:41:11.888043", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [Gathering Facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Remove local vm dir]
/var/log/messages
May 26 09:32:23 engine systemd: Reloading.
May 26 09:32:24 engine python: ansible-file Invoked with directory_mode=None force=False remote_src=None path=/root/heanswers.conf owner=None follow=False group=None unsafe_writes=None state=absent content=NOT_LOGGING_PARAMETER serole=None diff_peek=None setype=None selevel=None original_basename=None regexp=None validate=None src=None seuser=None recurse=False delimiter=None mode=None attributes=None backup=None
May 26 09:32:38 engine systemd-logind: Removed session 3.
May 26 09:32:48 engine qemu-ga: info: guest-shutdown called, mode: powerdown
May 26 09:32:48 engine systemd: Started Delayed Shutdown Service.
May 26 09:32:48 engine systemd: Starting Delayed Shutdown Service...
May 26 09:32:48 engine systemd-shutdownd: Shutting down at Sat 2018-05-26 17:32:48 CST (poweroff)...
May 26 09:32:48 engine systemd-shutdownd: Creating /run/nologin, blocking further logins...
May 26 09:32:48 engine systemd: Stopping QEMU Guest Agent...
May 26 09:32:48 engine systemd: Stopping Session 1 of user root.
May 26 09:32:48 engine systemd: Stopped target Multi-User System.
May 26 09:32:48 engine systemd: Stopping Multi-User System.
May 26 09:32:48 engine systemd: Stopped Execute cloud user/final scripts.
May 26 09:32:48 engine systemd: Stopping Execute cloud user/final scripts...
May 26 09:32:48 engine systemd: Stopping oVirt Engine...
May 26 09:32:48 engine sshd[24523]: Received signal 15; terminating.
May 26 09:32:48 engine systemd: Stopping OpenSSH server daemon...
May 26 09:32:48 engine systemd: Stopping The Apache HTTP Server...
----- Original Message -----
From: Yanir Quinn <yquinn(a)redhat.com>
To: dhy336(a)sina.com
Cc: users <users(a)ovirt.org>
Subject: Re: Re: [ovirt-users] hosted engine setup error
Date: 2018-05-27 16:46
According to BZ https://bugzilla.redhat.com/show_bug.cgi?id=1540451#c19 you should retry with a newer kernel (>= -851) if possible. If you still get the error, please provide full setup and vdsm logs.
On Fri, May 25, 2018 at 11:54 AM, <dhy336(a)sina.com> wrote:
Hi, thanks for your response. I tried following your guide, but it does not work. My kernel version is 3.10.0-693.11.1.el7.x86_64. Besides, hosted-engine --deploy works in my virtual machine, but it does not work on the physical machine.
----- Original Message -----
From: Yanir Quinn <yquinn(a)redhat.com>
To: dhy336(a)sina.com
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] hosted engine setup error
Date: 2018-05-24 18:35
Hi
There is a BZ with a similar issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1546839
To sum things up, try stopping NetworkManager and restarting your network if you are using a RH distribution, e.g.:
systemctl stop NetworkManager
systemctl restart network
And check what your kernel version is.
On Wed, May 23, 2018 at 5:03 PM, <dhy336(a)sina.com> wrote:
Hi, I deploy the ovirt-engine-4.2.2 hosted engine with # hosted-engine --deploy, but I face an error: the bridge ovirtmgmt is not configured. What should I do? Thanks.
[ INFO ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.010899", "end": "2018-05-23 20:03:21.222559", "rc": 0, "start": "2018-05-23 20:03:21.211660", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
vdsm.log
2018-05-23 19:55:15,305+0800 INFO (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') VM wrapper has started (vm:2619)
2018-05-23 19:55:15,454+0800 INFO (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') Starting connection (guestagent:245)
2018-05-23 19:55:15,458+0800 ERROR (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') Failed to connect to guest agent channel (vm:2403)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2401, in _vmDependentInit
    self.guestAgent.start()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 246, in start
    self._prepare_socket()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 288, in _prepare_socket
    supervdsm.getProxy().prepareVmChannel(self._socketName)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in prepareVmChannel
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: '/var/lib/libvirt/qemu/channels/bfc6f7cf-3e8d-4368-97f8-78a5c74a5175.com.redhat.rhevm.vdsm'
2018-05-23 19:55:15,480+0800 INFO (vm/bfc6f7cf) [virt.vm] (vmId='bfc6f7cf-3e8d-4368-97f8-78a5c74a5175') CPU running: domain initialization (vm:5908)
[root@hosted-engine-test1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:6c:ee:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.217/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 2936sec preferred_lft 2936sec
    inet6 fe80::834a:9cc1:df2:83f/64 scope link
       valid_lft forever preferred_lft forever
18: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 9a:d1:bc:96:cc:c0 brd ff:ff:ff:ff:ff:ff
19: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:55:06:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:55:06:26 brd ff:ff:ff:ff:ff:ff
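
Note that the ip a output above shows no ovirtmgmt bridge at all, which is exactly what the failing ansible task probes for. A quick manual check on the host uses only plain iproute2, nothing oVirt-specific beyond the bridge name:

  ip link show ovirtmgmt              # does the bridge exist at all?
  ip rule list | grep ovirtmgmt       # the rule the deploy task waits for
  ip route show table all | grep -i ovirtmgmt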
Re: Need advice using pacemaker on VMs for application HA
by wodel youchi
Hi,
Thank you both for your responses,
In conclusion:
- Don't mix up several solutions that provide HA:
- either choose oVirt HA,
- or use software HA inside a VM.
And since I need to monitor a service inside the VM for HA purposes, the
implementation will be:
- Create two VMs (nodes) without oVirt HA enabled.
- Create the shared storage.
- Configure Pacemaker with stonith using fence_rhevm (see the sketch below).
- Test the configuration, and maybe try a hypervisor crash to see
the behavior of the solution.
Regards.
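
A minimal sketch of the stonith step in that plan, using fence_rhevm against the oVirt engine API; host names, VM names and credentials are hypothetical, and option names should be checked with pcs stonith describe fence_rhevm on your fence-agents version:

  # map each cluster node to the oVirt VM that fence_rhevm must power-cycle
  pcs stonith create fence-ovirt fence_rhevm \
      ipaddr=engine.example.com login=admin@internal passwd=secret \
      ssl=1 ssl_insecure=1 \
      pcmk_host_map="node1:vm-node1;node2:vm-node2"
  # verify the agent is configured as expected
  pcs stonith show fence-ovirt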
2018-05-24 17:12 GMT+01:00 Gianluca Cecchi <gianluca.cecchi(a)gmail.com>:
> On Thu, May 24, 2018 at 6:05 PM, Gianluca Cecchi <
> gianluca.cecchi(a)gmail.com> wrote:
>
>>
>>
>>
>>>
>>>> - What about the shared storage? We will use a shared disk on oVirt,
>>>> which does not support snapshots.
>>>>
>>>
>>> What is the question?
>>>
>>
>>
>> In the past I had to configure a virtual CentOS 6 cluster because I
>> needed to replicate a problem I had in a physical production cluster and to
>> verify if some actions/updates would have solved the problem.
>> I had no more spare hw to configure, and so, using the poor man's method (dd +
>> reconfigure), I had the cluster up and running with two twin nodes identical
>> to the physical ones.
>> I also opened this bugzilla to backport the el7 package to el6:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1446474
>>
>> The intracluster network has been put on OVN btw
>>
>> But honestly I doubt I will use a virtual-cluster software stack to
>> provide high availability to production services inside a VM. Too many
>> inter-relations
>>
>>
> I didn't complete the answer about storage.
> Indeed, if the cluster services also need storage (e.g. a filesystem), you
> should create dedicated virtual disks that you mark as shared and assign to
> all the nodes of the virtual cluster.
> This compromises the snapshot functionality (only for the shared disks; you
> can still snapshot the boot disk and the non-shared disks).
> HTH,
> Gianluca
>
>
Ovirt 4.2.4 upgrade
by Maton, Brett
The 4.2.4 upgrade appears to have a database issue; I'm seeing these errors
in the PostgreSQL logs:
2018-05-24 16:27:08.292 UTC ERROR: relation "provider_binding_host_id" does not exist at character 15
2018-05-24 16:27:08.292 UTC QUERY: SELECT 1 FROM provider_binding_host_id WHERE vds_id = v_vds_id FOR UPDATE
2018-05-24 16:27:08.292 UTC CONTEXT: PL/pgSQL function updatehostproviderbinding(uuid,character varying[],character varying[]) line 3 at PERFORM
Any suggestions?
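
Before anything else it is worth confirming whether the relation is really missing and how far the schema upgrade got; a sketch, where the schema_version table is part of the engine DB but its column names should be verified on your version:

  su - postgres -c "psql -d engine -c '\dt provider_binding_host_id'"
  su - postgres -c "psql -d engine -c 'select version, script, state from schema_version order by id desc limit 5;'"

If the table is absent while later migration scripts show as installed, re-running engine-setup is usually the next step to let it replay the missed upgrade script.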