
Hi,

I am getting the below error while configuring the hosted engine:

[root@he ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
         Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
         Are you sure you want to continue? (Yes, No)[Yes]: yes
         Configuration files: []
         Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
         Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
         It has been detected that this program is executed through an SSH connection without using screen.
         Continuing with the installation may lead to broken installation if the network connection fails.
         It is highly recommended to abort the installation and run it inside a screen session using command "screen".
         Do you want to continue anyway? (Yes, No)[No]: yes
[WARNING] Cannot detect if hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] The following VMs has been found: 2b8d6d91-d838-44f6-ae3b-c92cda014280
[ ERROR ] Failed to execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs running
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[root@he ~]#
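The error above means the setup found another VM already running on the host. A quick way to see whether any qemu processes are still alive there, as a minimal sketch (plain POSIX shell, nothing oVirt-specific assumed):

```shell
# Count qemu processes still running on this host; the "[q]" trick keeps
# the grep itself out of the match. A count of 0 means no VMs are running.
qemu_count=$(ps -ef | grep -c '[q]emu-kvm')
echo "qemu processes: ${qemu_count}"
```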

Hi,

It seems you have existing VMs running on the host (you can check that by looking for qemu processes on the host). Is this a clean deployment, or was the host used before for running VMs? Perhaps you already ran the hosted-engine setup and the VM was left there?

CC-ing Sandro, who is more familiar with that than I am.

Thanks,
Oved

On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

It's a fresh setup. I have deleted all the VMs, but I am still facing the same issue.

On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali <oourfali@redhat.com> wrote:

On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Can you please paste the output of "vdsClient -s 0 list"? Thanks.
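For reference, a sketch of running that check and, if the stale hosted-engine VM from the earlier error is still tracked, tearing it down. The list/destroy verbs are assumed from the VDSM CLI of that generation; double-check with "vdsClient --help" before running, and note the UUID below is simply the one from the error message.

```shell
# Stale VM UUID reported by hosted-engine --deploy in the error above.
vm_id="2b8d6d91-d838-44f6-ae3b-c92cda014280"

# Only attempt this where the VDSM CLI is actually installed (i.e. on the host).
if command -v vdsClient >/dev/null 2>&1; then
    vdsClient -s 0 list table          # show which VMs VDSM still tracks
    vdsClient -s 0 destroy "${vm_id}"  # tear down the stale VM
else
    echo "vdsClient not installed; run this on the oVirt host itself"
fi
```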

I have done a fresh installation and now I am getting the below error:

[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
         The following network ports should be opened:
             tcp:5900
             tcp:5901
             udp:5900
             udp:5901
         An example of the required configuration for iptables can be found at:
             /etc/ovirt-hosted-engine/iptables.example
         In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands:
             firewall-cmd -service hosted-console
[ INFO ] Creating VM
[ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination

[root@he ovirt]# tail -f /var/log/vdsm/
backup/   connectivity.log   mom.log   supervdsm.log   vdsm.log
[root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741
Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741)
Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742)
Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)

On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
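As an aside, the port list in the setup output above can be translated into iptables rules along these lines. This is only a sketch built from the listed ports; the authoritative rules ship in /etc/ovirt-hosted-engine/iptables.example mentioned by the installer. The commands are echoed rather than applied so they can be reviewed before being run as root.

```shell
# Build the iptables commands for the VNC console ports the setup listed
# (tcp and udp 5900-5901); echoed for review rather than applied directly.
rules=""
for port in 5900 5901; do
    for proto in tcp udp; do
        rules="${rules}iptables -I INPUT -p ${proto} --dport ${port} -j ACCEPT
"
    done
done
printf '%s' "${rules}"
```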

On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
It failed before that; can you please attach the whole VDSM logs?
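A sketch of bundling the whole log directory for attaching, assuming the standard /var/log/vdsm location on the host:

```shell
# Bundle the full VDSM logs into a timestamped archive for attaching to the
# thread; /var/log/vdsm is the standard VDSM log directory.
out="/tmp/vdsm-logs-$(date +%Y%m%d%H%M%S).tar.gz"
tar czf "${out}" /var/log/vdsm/ 2>/dev/null || echo "no /var/log/vdsm on this machine"
echo "archive: ${out}"
```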

Below are the logs:

[root@he ~]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)

On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:

Below are the entire logs.

[root@he ~]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)

[root@he ~]# tail -f /var/log/vdsm/supervdsm.log
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None

[root@he ~]# tail -f /var/log/vdsm/connectivity.log
2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEBUG:recent_client:False
2015-11-26 15:07:23,311:DEBUG:recent_client:True
2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:08:55,845:DEBUG:recent_client:False
2015-11-26 15:08:59,859:DEBUG:recent_client:True
2015-11-26 15:09:29,929:DEBUG:recent_client:False
2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:14:02,363:DEBUG:recent_client:False

[root@he ~]# tail -f /var/log/vdsm/mom.log
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune'
2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting
2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled
2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0
2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()

On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
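The Detector-thread DEBUG lines in these tails are normal protocol-detection noise; the actual failure is usually on an error-level or traceback line earlier in the file. A generic sketch for pulling those out, assuming the standard VDSM log location:

```shell
# Filter the VDSM log for error-level lines and tracebacks instead of
# scrolling through DEBUG noise; path is the standard VDSM log location.
log=/var/log/vdsm/vdsm.log
if [ -r "${log}" ]; then
    grep -nE 'ERROR|WARN|Traceback' "${log}" | tail -n 20
else
    echo "${log} not readable on this machine"
fi
```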
Below are the logs,
[root@he ~]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have done a fresh installation and now am getting the below error,
[ INFO ] Updating hosted-engine configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up The following network ports should be opened: tcp:5900 tcp:5901 udp:5900 udp:5901 An example of the required configuration for iptables can be found at: /etc/ovirt-hosted-engine/iptables.example In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands: firewall-cmd -service hosted-console [ INFO ] Creating VM [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs [ INFO ] Stage: Clean up [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination
[root@he ovirt]# tail -f /var/log/vdsm/ backup/ connectivity.log mom.log supervdsm.log vdsm.log [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741 Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741) Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742) Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)
It failed before, can you please attach the whole VDSM logs?
On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Its a fresh setup ,I have deleted all the vms ,still am facing same issues .
Can you please paste the output of vdsClient -s 0 list ? thanks
On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali <oourfali@redhat.com> wrote:
Hi
Seems like you have existing VMs running on the host (you can check that by looking for qemu processes on your host). Is that a clean deployment, or was the host used before for running VMs? Perhaps you already ran the hosted engine setup, and the VM was left there?
CC-ing Sandro who is more familiar in that than me.
Thanks, Oved
On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> HI > > Getting below error while configuring Hosted engine, > > root@he ~]# hosted-engine --deploy > [ INFO ] Stage: Initializing > [ INFO ] Generating a temporary VNC password. > [ INFO ] Stage: Environment setup > Continuing will configure this host for serving as > hypervisor and create a VM where you have to install oVirt Engine > afterwards. > Are you sure you want to continue? (Yes, No)[Yes]: yes > Configuration files: [] > Log file: > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log > Version: otopi-1.3.2 (otopi-1.3.2-1.el6) > It has been detected that this program is executed through > an SSH connection without using screen. > Continuing with the installation may lead to broken > installation if the network connection fails. > It is highly recommended to abort the installation and run > it inside a screen session using command "screen". > Do you want to continue anyway? (Yes, No)[No]: yes > [WARNING] Cannot detect if hardware supports virtualization > [ INFO ] Bridge ovirtmgmt already created > [ INFO ] Stage: Environment packages setup > [ INFO ] Stage: Programs detection > [ INFO ] Stage: Environment setup > > *[ ERROR ] The following VMs has been found: > 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage > 'Environment setup': Cannot setup Hosted Engine with other VMs running* > [ INFO ] Stage: Clean up > [ INFO ] Generating answer file > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [root@he ~]# > > > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > >
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
*Below are the entire logs*
Sorry, with the entire log I mean if you can attach or share somewhere the whole /var/log/vdsm/vdsm.log cause the latest ten lines are not enough to point out the issue.
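One low-friction way to honor that request is to compress the full log and attach it rather than pasting tail output. A minimal sketch, using a throwaway sample file in place of the real /var/log/vdsm/vdsm.log (which is assumed here and not actually read):

```shell
# Create a small stand-in for /var/log/vdsm/vdsm.log; on the real host,
# skip this step and point gzip at the actual log file instead.
printf 'Thread-1::DEBUG::sample line 1\nThread-1::DEBUG::sample line 2\n' > /tmp/vdsm-sample.log

# Compress a copy so the full log can be attached to a mail without truncation.
gzip -c /tmp/vdsm-sample.log > /tmp/vdsm-sample.log.gz

# Confirm the archive round-trips to the original content (prints the line count).
gzip -dc /tmp/vdsm-sample.log.gz | wc -l
```

On the real host the same two commands against /var/log/vdsm/vdsm.log yield an attachable vdsm.log.gz covering the whole deployment attempt, not just the last ten lines.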
*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {} MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}'] MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {} MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'} MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} 
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {} MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *
2015-11-26 15:02:02,632:DEBUG:recent_client:False 2015-11-26 15:04:44,975:DEBUG:recent_client:True 2015-11-26 15:05:15,039:DEBUG:recent_client:False 2015-11-26 15:07:23,311:DEBUG:recent_client:True 2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:08:55,845:DEBUG:recent_client:False 2015-11-26 15:08:59,859:DEBUG:recent_client:True 2015-11-26 15:09:29,929:DEBUG:recent_client:False 2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:14:02,363:DEBUG:recent_client:False
*[root@he ~]# tail -f /var/log/vdsm/mom.log *
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune' 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()
On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Below are the logs,
[root@he ~]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have done a fresh installation and now I am getting the below error:
[ INFO ] Updating hosted-engine configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up The following network ports should be opened: tcp:5900 tcp:5901 udp:5900 udp:5901 An example of the required configuration for iptables can be found at: /etc/ovirt-hosted-engine/iptables.example In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands: firewall-cmd -service hosted-console [ INFO ] Creating VM [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs [ INFO ] Stage: Clean up [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination
[root@he ovirt]# tail -f /var/log/vdsm/ backup/ connectivity.log mom.log supervdsm.log vdsm.log [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741 Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741) Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742) Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)
It failed before, can you please attach the whole VDSM logs?
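Having the whole log also makes it possible to search for the actual failure instead of watching `tail -f`. A sketch, using two invented sample lines in place of /var/log/vdsm/vdsm.log:

```shell
# Invented sample lines standing in for /var/log/vdsm/vdsm.log.
cat <<'EOF' > /tmp/vdsm-grep-sample.log
Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected
Thread-42::ERROR::2015-11-26 14:57:09,120::vm::Traceback (most recent call last):
EOF

# Print only ERROR/WARNING lines with their line numbers; on a real host,
# run the same grep against /var/log/vdsm/vdsm.log.
grep -nE 'ERROR|WARNING' /tmp/vdsm-grep-sample.log
```

This surfaces the VM-creation traceback directly, which is what the "please check VDSM logs" hint in the installer output is pointing at.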
On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
It's a fresh setup; I have deleted all the VMs and am still facing the same issues.
Can you please paste the output of vdsClient -s 0 list? Thanks
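For reference, the UUID printed by vdsClient -s 0 list table is what a later vdsClient -s 0 destroy <uuid> would take as its argument. A sketch of pulling it out of the output; the sample line below is hypothetical (modeled on the UUID from the original error), and vdsClient itself is not invoked here:

```shell
# Hypothetical one-line sample of `vdsClient -s 0 list table` output; on a
# real host, substitute the actual command: vdsClient -s 0 list table
sample='2b8d6d91-d838-44f6-ae3b-c92cda014280  27182  HostedEngine  Up'

# The VM UUID is the first whitespace-separated column; it is the argument
# a subsequent `vdsClient -s 0 destroy <uuid>` call would need.
vmid=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$vmid"
```

If a leftover VM shows up in the real listing, destroying it (or confirming no qemu process remains) should clear the "Cannot setup Hosted Engine with other VMs running" check.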
On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali <oourfali@redhat.com> wrote:
> Hi > > Seems like you have existing VMs running on the host (you can check > that by looking for qemu processes on your host). > Is that a clean deployment, or was the host used before for running > VMs? > Perhaps you already ran the hosted engine setup, and the VM was left > there? > > CC-ing Sandro who is more familiar in that than me. > > Thanks, > Oved > > On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju <nbudoor@gmail.com> > wrote: > >> HI >> >> Getting below error while configuring Hosted engine, >> >> root@he ~]# hosted-engine --deploy >> [ INFO ] Stage: Initializing >> [ INFO ] Generating a temporary VNC password. >> [ INFO ] Stage: Environment setup >> Continuing will configure this host for serving as >> hypervisor and create a VM where you have to install oVirt Engine >> afterwards. >> Are you sure you want to continue? (Yes, No)[Yes]: yes >> Configuration files: [] >> Log file: >> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log >> Version: otopi-1.3.2 (otopi-1.3.2-1.el6) >> It has been detected that this program is executed >> through an SSH connection without using screen. >> Continuing with the installation may lead to broken >> installation if the network connection fails. >> It is highly recommended to abort the installation and >> run it inside a screen session using command "screen". >> Do you want to continue anyway? 
(Yes, No)[No]: yes >> [WARNING] Cannot detect if hardware supports virtualization >> [ INFO ] Bridge ovirtmgmt already created >> [ INFO ] Stage: Environment packages setup >> [ INFO ] Stage: Programs detection >> [ INFO ] Stage: Environment setup >> >> *[ ERROR ] The following VMs has been found: >> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >> [ INFO ] Stage: Clean up >> [ INFO ] Generating answer file >> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >> [ INFO ] Stage: Pre-termination >> [ INFO ] Stage: Termination >> [root@he ~]# >> >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

I got only 10 lines in the vdsm logs; they are below:

[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False

On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
*Below are the entire logs*
Sorry, with the entire log I mean if you can attach or share somewhere the whole /var/log/vdsm/vdsm.log cause the latest ten lines are not enough to point out the issue.
*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {} MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}'] MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {} MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'} MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} 
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {} MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *
2015-11-26 15:02:02,632:DEBUG:recent_client:False 2015-11-26 15:04:44,975:DEBUG:recent_client:True 2015-11-26 15:05:15,039:DEBUG:recent_client:False 2015-11-26 15:07:23,311:DEBUG:recent_client:True 2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:08:55,845:DEBUG:recent_client:False 2015-11-26 15:08:59,859:DEBUG:recent_client:True 2015-11-26 15:09:29,929:DEBUG:recent_client:False 2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:14:02,363:DEBUG:recent_client:False
*[root@he ~]# tail -f /var/log/vdsm/mom.log *
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune' 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()
On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Below are the logs,
[root@he ~]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have done a fresh installation and now am getting the below error,
[ INFO ] Updating hosted-engine configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up The following network ports should be opened: tcp:5900 tcp:5901 udp:5900 udp:5901 An example of the required configuration for iptables can be found at: /etc/ovirt-hosted-engine/iptables.example In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands: firewall-cmd -service hosted-console [ INFO ] Creating VM [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs [ INFO ] Stage: Clean up [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination
[root@he ovirt]# tail -f /var/log/vdsm/
backup/            connectivity.log   mom.log            supervdsm.log      vdsm.log
[root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741
Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741)
Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742)
Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)
It failed before, can you please attach the whole VDSM logs?
On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> It's a fresh setup; I have deleted all the VMs and am still facing the
> same issues.

Can you please paste the output of "vdsClient -s 0 list"? Thanks.
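[Editor's note] A sketch of the check being discussed here, i.e. looking for leftover VMs before re-running the deployment. The process name and commands are the usual defaults for this vintage of oVirt and may differ per host:

```shell
# Look for leftover qemu processes that would block hosted-engine setup.
# The process is typically named qemu-kvm (sometimes qemu-system-x86_64).
if pgrep -fl qemu-kvm >/dev/null 2>&1; then
    echo "qemu processes still running:"
    pgrep -fl qemu-kvm
else
    echo "no qemu processes found"
fi
# VDSM's own view of defined VMs, and removing a stale one by UUID:
#   vdsClient -s 0 list table
#   vdsClient -s 0 destroy <vmId>
```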

On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I got only ten lines in the vdsm logs; they are below.
Can you please provide a full sos report?
[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
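[Editor's note] The sos report requested above can be generated roughly as follows. This is a sketch for RHEL 6/7-era tooling; the package name and flags may differ on other releases:

```shell
# Generate a full diagnostic archive for a support thread.
# --batch runs sosreport non-interactively; the archive lands under
# /var/tmp (or /tmp on older releases).
if command -v sosreport >/dev/null 2>&1; then
    sosreport --batch
else
    echo "sosreport not installed; try: yum install -y sos"
fi
```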
On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
*Below are the entire logs*
Sorry, by "the entire log" I mean attaching, or sharing somewhere, the whole /var/log/vdsm/vdsm.log, because the last ten lines are not enough to point out the issue.
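[Editor's note] A minimal sketch of capturing the whole log rather than a ten-line `tail` (the path is the stock VDSM location; adjust if the host differs):

```shell
# Compress the full VDSM log for attaching to a mail or uploading.
LOG=/var/log/vdsm/vdsm.log
if [ -f "$LOG" ]; then
    gzip -c "$LOG" > /tmp/vdsm.log.gz
    echo "wrote /tmp/vdsm.log.gz"
else
    echo "no $LOG on this host"
fi
```

Note that `tail -f` prints only the last ten lines and then follows the file, which is why repeated pastes of it all look alike.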
*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *
2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEBUG:recent_client:False
2015-11-26 15:07:23,311:DEBUG:recent_client:True
2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:08:55,845:DEBUG:recent_client:False
2015-11-26 15:08:59,859:DEBUG:recent_client:True
2015-11-26 15:09:29,929:DEBUG:recent_client:False
2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:14:02,363:DEBUG:recent_client:False
*[root@he ~]# tail -f /var/log/vdsm/mom.log *
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune'
2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting
2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled
2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0
2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

I do not know what logs you are expecting. The logs I got are pasted in the mail; if you want them on a pastebin, let me know and I will upload them there.

On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Can you please provide a full sos report?
[root@he /]# tail -f /var/log/vdsm/vdsm.log Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock' Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users) Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it. Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records. Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
*Below are the entire logs*
Sorry, with the entire log I mean if you can attach or share somewhere the whole /var/log/vdsm/vdsm.log cause the latest ten lines are not enough to point out the issue.
*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {} MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}'] MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {} MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'} MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} 
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {} MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *
2015-11-26 15:02:02,632:DEBUG:recent_client:False 2015-11-26 15:04:44,975:DEBUG:recent_client:True 2015-11-26 15:05:15,039:DEBUG:recent_client:False 2015-11-26 15:07:23,311:DEBUG:recent_client:True 2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:08:55,845:DEBUG:recent_client:False 2015-11-26 15:08:59,859:DEBUG:recent_client:True 2015-11-26 15:09:29,929:DEBUG:recent_client:False 2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full) 2015-11-26 15:14:02,363:DEBUG:recent_client:False
*[root@he ~]# tail -f /var/log/vdsm/mom.log *
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune' 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics() 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()
On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Below are the logs,
[root@he ~]# tail -f /var/log/vdsm/vdsm.log Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944) Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945) Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi < stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> I have done a fresh installation and now am getting the below error:
>
> [ INFO  ] Updating hosted-engine configuration
> [ INFO  ] Stage: Transaction commit
> [ INFO  ] Stage: Closing up
>           The following network ports should be opened:
>               tcp:5900
>               tcp:5901
>               udp:5900
>               udp:5901
>           An example of the required configuration for iptables can be found at:
>               /etc/ovirt-hosted-engine/iptables.example
>           In order to configure firewalld, copy the files from
>           /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
>           and execute the following commands:
>               firewall-cmd -service hosted-console
> [ INFO  ] Creating VM
> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password
>           for console connection. The VM may not have been created: please
>           check VDSM logs
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
>           '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> [root@he ovirt]# tail -f /var/log/vdsm/
> backup/           connectivity.log  mom.log  supervdsm.log  vdsm.log
> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
> Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741
> Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741)
> Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742)
> Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)
It failed before, can you please attach the whole VDSM logs?
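For reference, the firewall steps printed by the setup output quoted above can be sketched as follows. This is a hedged sketch, not verbatim from the thread: the `hosted-console` service name and the file paths come from the setup output, while the `firewall-cmd` long-option spellings and the iptables lines are the usual equivalents for opening tcp/udp 5900-5901.

```shell
# Sketch: open the console ports listed by hosted-engine --deploy.

# firewalld route (service definition shipped under /etc/ovirt-hosted-engine):
cp /etc/ovirt-hosted-engine/firewalld/*.xml /etc/firewalld/services/
firewall-cmd --reload
firewall-cmd --permanent --add-service=hosted-console
firewall-cmd --reload

# plain-iptables route (ports taken from the setup output):
for port in 5900 5901; do
    iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
    iptables -I INPUT -p udp --dport "$port" -j ACCEPT
done
service iptables save
```

Only one of the two routes is needed, depending on whether the host runs firewalld or the legacy iptables service.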
> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
>
>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
>>
>>> It's a fresh setup; I have deleted all the VMs and am still facing the same issue.
>>
>> Can you please paste the output of
>>     vdsClient -s 0 list
>> ?
>> Thanks.
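The two checks suggested in the thread (leftover qemu processes, and VMs still registered with VDSM) can be run like this; `vdsClient -s 0` assumes VDSM is running in secure (SSL) mode, as it is on a default deployment:

```shell
# Sketch: look for leftover VMs that would block hosted-engine --deploy.
vdsClient -s 0 list        # VMs VDSM still knows about (-s = secure/SSL)
ps aux | grep [q]emu       # any qemu processes still running on the host
```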
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I do not know what logs you are expecting; the logs I have are pasted in this mail. If you would rather have them on Pastebin, let me know and I will upload them there.
Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
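A minimal non-interactive invocation, as a sketch; it assumes the sos package is available from the distribution repositories and that the archive lands in the usual /tmp or /var/tmp location:

```shell
# Sketch: generate a sosreport to share on the thread / attach to a bug.
yum install -y sos
sosreport --batch       # non-interactive; prints the archive path when done
ls -1 /tmp/sosreport-* /var/tmp/sosreport-* 2>/dev/null
```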
On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I got only ten lines in the vdsm log; they are below:
Can you please provide a full sos report?
[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
*Below are the entire logs*
Sorry, by "the entire log" I mean: can you attach, or share somewhere, the whole /var/log/vdsm/vdsm.log? The latest ten lines are not enough to point out the issue.
*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)
*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *
2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEBUG:recent_client:False
2015-11-26 15:07:23,311:DEBUG:recent_client:True
2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:08:55,845:DEBUG:recent_client:False
2015-11-26 15:08:59,859:DEBUG:recent_client:True
2015-11-26 15:09:29,929:DEBUG:recent_client:False
2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:14:02,363:DEBUG:recent_client:False
*[root@he ~]# tail -f /var/log/vdsm/mom.log *
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune'
2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting
2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled
2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0
2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()
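If sharing the whole log by mail is impractical, one way to package everything that was asked for, rather than a ten-line tail, is sketched below; it assumes the default VDSM log directory:

```shell
# Sketch: archive the full VDSM logs instead of pasting a short tail.
tar czf /tmp/vdsm-logs-$(date +%Y%m%d).tar.gz /var/log/vdsm/
# then upload the /tmp/vdsm-logs-*.tar.gz archive somewhere and post the link
```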

Maybe it even makes sense to open a Bugzilla ticket already. Better safe than sorry.

On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

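As a side note for anyone hitting the same "other VMs running" failure: the setup error prints the UUIDs of the blocking VMs, so they can be pulled out and checked against `vdsClient -s 0 list` (or destroyed) before re-running the deploy. A minimal sketch, using the error line from this thread as sample input:

```shell
# Hypothetical helper: extract VM UUIDs from the hosted-engine setup error.
# The message text below is copied from the failed run earlier in the thread.
msg='[ ERROR ] The following VMs has been found: 2b8d6d91-d838-44f6-ae3b-c92cda014280'
echo "$msg" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
# prints: 2b8d6d91-d838-44f6-ae3b-c92cda014280
```

The same pipe works against saved setup output, e.g. `grep -oE … ovirt-hosted-engine-setup-*.log`.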
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> wrote:
Maybe even makes sense to open a bugzilla ticket already. Better safe than sorry.
We still need at least one log file to understand what happened.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I do not know which logs you are expecting; the logs I got are pasted in the mail. If you require them in a pastebin, let me know and I will upload them there.
Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
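If sosreport is not available on the host, a plain tarball of the relevant logs is usually enough to attach. A self-contained sketch (it builds a sample log directory so it can run anywhere; on the real host LOGDIR would be /var/log and you would also include the ovirt-hosted-engine-setup directory):

```shell
# Sketch: bundle logs into one archive to share on the list.
# LOGDIR is a throwaway sample here, standing in for /var/log on the host.
LOGDIR=$(mktemp -d)
mkdir -p "$LOGDIR/vdsm"
echo 'sample vdsm entry' > "$LOGDIR/vdsm/vdsm.log"
tar czf /tmp/he-logs.tar.gz -C "$LOGDIR" vdsm/vdsm.log
tar tzf /tmp/he-logs.tar.gz   # lists the archived paths
```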
On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I got only 10 lines in the vdsm logs; they are below,
Can you please provide full sos report?
[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
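Since `tail -f` mostly shows DEBUG noise, filtering the log by level is a quicker way to surface the actual failure. A small sketch against a sample file in vdsm's `name::LEVEL::timestamp::…` line format (the ERROR entry below is invented for illustration, not taken from this host):

```shell
# Build a two-line sample in vdsm's log format, then keep only
# ERROR/WARNING-level entries. On the real host you would run the
# grep against /var/log/vdsm/vdsm.log instead.
cat > /tmp/vdsm-sample.log <<'EOF'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-101::ERROR::2015-11-27 12:58:58,001::vm::765::vm.Vm::(_startUnderlyingVm) The vm start process failed
EOF
grep -E '::(ERROR|WARNING)::' /tmp/vdsm-sample.log
# prints only the Thread-101 ERROR line
```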
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> Below are the entire logs.
>
> [root@he ~]# tail -f /var/log/vdsm/supervdsm.log
> MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {}
> MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', 'polling_interval 5', 'getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', 'no_path_retry fail', 'user_friendly_names no', 'flush_on_last_del yes', 'fast_io_fail_tmo 5', 'dev_loss_tmo 30', 'max_fds 4096', '}', '', 'devices {', 'device {', 'vendor "HITACHI"', 'product "DF.*"', 'getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', 'vendor "COMPELNT"', 'product "Compellent Vol"', 'no_path_retry fail', '}', 'device {', '# multipath.conf.default', 'vendor "DGC"', 'product ".*"', 'product_blacklist "LUNZ"', 'path_grouping_policy "group_by_prio"', 'path_checker "emc_clariion"', 'hardware_handler "1 emc"', 'prio "emc"', 'failback immediate', 'rr_weight "uniform"', '# vdsm required configuration', 'getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', 'features "0"', 'no_path_retry fail', '}', '}']
> MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
> MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'}
> MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
> MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
> MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {}
> MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
>
> [root@he ~]# tail -f /var/log/vdsm/connectivity.log
> 2015-11-26 15:02:02,632:DEBUG:recent_client:False
> 2015-11-26 15:04:44,975:DEBUG:recent_client:True
> 2015-11-26 15:05:15,039:DEBUG:recent_client:False
> 2015-11-26 15:07:23,311:DEBUG:recent_client:True
> 2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
> 2015-11-26 15:08:55,845:DEBUG:recent_client:False
> 2015-11-26 15:08:59,859:DEBUG:recent_client:True
> 2015-11-26 15:09:29,929:DEBUG:recent_client:False
> 2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
> 2015-11-26 15:14:02,363:DEBUG:recent_client:False
>
> [root@he ~]# tail -f /var/log/vdsm/mom.log
> 2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune'
> 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting
> 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled
> 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0
> 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics()
> 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics()
> 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics()
> 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics()
> 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics()
> 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()

On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
> Sorry, with "the entire log" I mean: can you attach or share somewhere the whole /var/log/vdsm/vdsm.log? The latest ten lines are not enough to point out the issue.

Please find the logs at the below-mentioned URL:
http://pastebin.com/ZeKyyFbN
The VM may not have been >>>>> created: please check VDSM logs >>>>> [ INFO ] Stage: Clean up >>>>> [ INFO ] Generating answer file >>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' >>>>> [ INFO ] Stage: Pre-termination >>>>> [ INFO ] Stage: Termination >>>>> >>>>> >>>>> >>>>> [root@he ovirt]# tail -f /var/log/vdsm/ >>>>> backup/ connectivity.log mom.log >>>>> supervdsm.log vdsm.log >>>>> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:42741 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 42741) >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>> Adding connection from 127.0.0.1:42742 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>> Connection removed from 127.0.0.1:42742 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:42742 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 42742) >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>> Adding connection from 127.0.0.1:42743 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>> Connection removed from 127.0.0.1:42743 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 
14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:42743 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 42743) >>>>> >>>>> >>>> >>>> It failed before, can you please attach the whole VDSM logs? >>>> >>>> >>>>> >>>>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi < >>>>> stirabos@redhat.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju < >>>>>> nbudoor@gmail.com> wrote: >>>>>> >>>>>>> Its a fresh setup ,I have deleted all the vms ,still am facing >>>>>>> same issues . >>>>>>> >>>>>>> >>>>>> Can you please paste the output of >>>>>> vdsClient -s 0 list >>>>>> ? >>>>>> thanks >>>>>> >>>>>> >>>>>>> >>>>>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali < >>>>>>> oourfali@redhat.com> wrote: >>>>>>> >>>>>>>> Hi >>>>>>>> >>>>>>>> Seems like you have existing VMs running on the host (you can >>>>>>>> check that by looking for qemu processes on your host). >>>>>>>> Is that a clean deployment, or was the host used before for >>>>>>>> running VMs? >>>>>>>> Perhaps you already ran the hosted engine setup, and the VM >>>>>>>> was left there? >>>>>>>> >>>>>>>> CC-ing Sandro who is more familiar in that than me. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Oved >>>>>>>> >>>>>>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju < >>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>> >>>>>>>>> HI >>>>>>>>> >>>>>>>>> Getting below error while configuring Hosted engine, >>>>>>>>> >>>>>>>>> root@he ~]# hosted-engine --deploy >>>>>>>>> [ INFO ] Stage: Initializing >>>>>>>>> [ INFO ] Generating a temporary VNC password. >>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>> Continuing will configure this host for serving as >>>>>>>>> hypervisor and create a VM where you have to install oVirt Engine >>>>>>>>> afterwards. >>>>>>>>> Are you sure you want to continue? 
(Yes, No)[Yes]: >>>>>>>>> yes >>>>>>>>> Configuration files: [] >>>>>>>>> Log file: >>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log >>>>>>>>> Version: otopi-1.3.2 (otopi-1.3.2-1.el6) >>>>>>>>> It has been detected that this program is executed >>>>>>>>> through an SSH connection without using screen. >>>>>>>>> Continuing with the installation may lead to >>>>>>>>> broken installation if the network connection fails. >>>>>>>>> It is highly recommended to abort the installation >>>>>>>>> and run it inside a screen session using command "screen". >>>>>>>>> Do you want to continue anyway? (Yes, No)[No]: yes >>>>>>>>> [WARNING] Cannot detect if hardware supports virtualization >>>>>>>>> [ INFO ] Bridge ovirtmgmt already created >>>>>>>>> [ INFO ] Stage: Environment packages setup >>>>>>>>> [ INFO ] Stage: Programs detection >>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>> >>>>>>>>> *[ ERROR ] The following VMs has been found: >>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >>>>>>>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>> [ INFO ] Generating answer file >>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>> [root@he ~]# >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users@ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users@ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
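The "Cannot setup Hosted Engine with other VMs running" failure quoted above names the offending VM by UUID. A minimal, hedged sketch of how one might confirm what was left behind before re-running the deploy — on the host the thread suggests `vdsClient -s 0 list` (and checking for qemu processes with `pgrep -af qemu`); here the UUID is extracted from an embedded sample of the setup output so the snippet is self-contained:

```shell
# Hedged sketch: extract the VM UUID reported by hosted-engine-setup.
# On the real host you would instead pipe the output of
# `vdsClient -s 0 list` (or `pgrep -af qemu`) through the same grep.
sample="[ ERROR ] The following VMs has been found: 2b8d6d91-d838-44f6-ae3b-c92cda014280"
printf '%s\n' "$sample" \
  | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
```

If the UUID corresponds to a leftover hosted-engine VM from a previous attempt, destroying it and cleaning up the earlier deployment before re-running `hosted-engine --deploy` is presumably the next step.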

On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the below-mentioned URL.
I'm sorry but without a full sos report we can't help you.
On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> wrote:
Maybe it even makes sense to open a Bugzilla ticket already. Better safe than sorry.
We still need at least one log file to understand what happened.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I do not know which logs you are expecting. The logs I got are pasted in the mail; if you want them on pastebin, let me know and I will upload them there.
Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
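For reference, a hedged sketch of collecting the logs. On the affected host, running plain `sosreport` as root and sharing the archive it writes (typically under /var/tmp) should be enough; if sosreport is not installed, a rough substitute is to tar up the log directories discussed in this thread by hand. The demo below works against a throwaway directory so it can run anywhere:

```shell
# Hedged sketch: manual fallback for sosreport. In real use you would
# tar /var/log/vdsm and /var/log/ovirt-hosted-engine-setup from /; here a
# temporary directory stands in for the root filesystem.
demo=$(mktemp -d)
mkdir -p "$demo/var/log/vdsm"
echo 'sample vdsm log line' > "$demo/var/log/vdsm/vdsm.log"
tar -czf "$demo/he-logs.tar.gz" -C "$demo" var/log/vdsm
tar -tzf "$demo/he-logs.tar.gz"   # list the archive contents
```

The resulting he-logs.tar.gz is what you would attach to the thread or a pastebin-like service.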
On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> I got only 10 lines in the vdsm logs; they are below.

Can you please provide a full sos report?
> > [root@he /]# tail -f /var/log/vdsm/vdsm.log > Thread-100::DEBUG::2015-11-27 > 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) > Trying to release resource 'Storage.HsmDomainMonitorLock' > Thread-100::DEBUG::2015-11-27 > 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) > Released resource 'Storage.HsmDomainMonitorLock' (0 active users) > Thread-100::DEBUG::2015-11-27 > 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) > Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is > waiting for it. > Thread-100::DEBUG::2015-11-27 > 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) > No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing > records. > Thread-100::INFO::2015-11-27 > 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: > stopMonitoringDomain, Return response: None > Thread-100::DEBUG::2015-11-27 > 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) > Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None > Thread-100::DEBUG::2015-11-27 > 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) > Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> > state finished > Thread-100::DEBUG::2015-11-27 > 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > Owner.releaseAll requests {} resources {} > Thread-100::DEBUG::2015-11-27 > 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > Thread-100::DEBUG::2015-11-27 > 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) > Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False > > > > On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi < > stirabos@redhat.com> wrote: > >> >> >> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com >> > wrote: >> >>> >>> >>> >>> *Below are the entire logs* 
>> Sorry, with the entire log I mean if you can attach or share somewhere the whole /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to point out the issue.
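A side note on the "only 10 lines" problem quoted above: `tail -f` prints just the last 10 lines of a file before it starts following, which is why only ten lines showed up. A sketch of grabbing a larger, shareable slice with `tail -n`, demonstrated on a throwaway stand-in for /var/log/vdsm/vdsm.log:

```shell
# "tail -f" defaults to the last 10 lines; "tail -n N" lets you choose.
# A temporary file stands in for the real vdsm.log here.
log=$(mktemp)
printf 'line%d\n' 1 2 3 4 5 > "$log"   # fake log with five lines
tail -n 2 "$log"                        # last two lines only
```

On the real host, `tail -n 2000 /var/log/vdsm/vdsm.log > vdsm-tail.log` (or simply attaching the whole file) would give enough context.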

On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the below-mentioned URL.
OK, the issue is here:

Thread-88::ERROR::2015-12-02 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)

But it's pretty strange, because hosted-engine-setup already explicitly checks for virtualization support and exits with a clear error if it is missing. Did you play with the kvm module while hosted-engine-setup was running? Can you please attach the hosted-engine-setup logs?
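The check the libvirt error asks for can be sketched as below. Note that the getHardwareInfo output earlier in the thread reports systemProductName 'KVM', which suggests this host is itself a KVM guest — in that case nested virtualization would have to be enabled on the outer hypervisor. The snippet runs against a sample cpuinfo line so it is reproducible; in real use you would feed it `/proc/cpuinfo` and also confirm that `/dev/kvm` exists and that the kvm_intel/kvm_amd module is loaded (`lsmod | grep kvm`):

```shell
# Hedged sketch: the vmx/svm flag check behind "Domain requires KVM".
# Sample string used here; replace with "$(cat /proc/cpuinfo)" on a real host.
cpuinfo='flags : fpu vme de pse tsc msr pae mce cx8 vmx'
if printf '%s\n' "$cpuinfo" | grep -qE '\b(vmx|svm)\b'; then
  echo 'virtualization flags present'
else
  echo 'no vmx/svm flags: enable VT-x/AMD-V, or nested virt on the outer hypervisor'
fi
```

If the flags are present but /dev/kvm is missing, loading the module (`modprobe kvm_intel` or `modprobe kvm_amd`, as root) is the usual fix.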
On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> wrote:
Maybe even makes sense to open a bugzilla ticket already. Better safe than sorry.
We still need at least one log file to understand what happened.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I do not know what logs you are expecting ? the logs which I got is pasted in the mail if you require in pastebin let me know I will upload there .
Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
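(Editorial aside, not from the thread.) If sosreport cannot be installed, the specific logs this thread keeps asking for can also be bundled by hand; a minimal sketch, assuming the default oVirt/VDSM log paths:

```shell
# Collect the hosted-engine-setup and VDSM logs into one archive for sharing.
# /var/log/ovirt-hosted-engine-setup and /var/log/vdsm are the default log
# locations used by ovirt-hosted-engine-setup and vdsm; adjust if relocated.
mkdir -p /tmp/he-logs
cp -a /var/log/ovirt-hosted-engine-setup/*.log /tmp/he-logs/ 2>/dev/null
cp -a /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log /tmp/he-logs/ 2>/dev/null
tar czf /tmp/he-logs.tar.gz -C /tmp he-logs
ls -lh /tmp/he-logs.tar.gz
```

The resulting tarball can then be attached to the list or uploaded anywhere convenient.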
On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Can you please provide a full sos report?
On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I got only 10 lines in the vdsm logs; they are below.

[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> state finished
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False

On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Sorry, by "the entire log" I mean attaching or sharing somewhere the whole /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to point out the issue.
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Below are the entire logs:

[root@he ~]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 50946)

[root@he ~]# tail -f /var/log/vdsm/supervdsm.log
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', ' polling_interval 5', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry fail', ' user_friendly_names no', ' flush_on_last_del yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30', ' max_fds 4096', '}', '', 'devices {', 'device {', ' vendor "HITACHI"', ' product "DF.*"', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor "COMPELNT"', ' product "Compellent Vol"', ' no_path_retry fail', '}', 'device {', ' # multipath.conf.default', ' vendor "DGC"', ' product ".*"', ' product_blacklist "LUNZ"', ' path_grouping_policy "group_by_prio"', ' path_checker "emc_clariion"', ' hardware_handler "1 emc"', ' prio "emc"', ' failback immediate', ' rr_weight "uniform"', ' # vdsm required configuration', ' getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', ' features "0"', ' no_path_retry fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', 'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None

[root@he ~]# tail -f /var/log/vdsm/connectivity.log
2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEBUG:recent_client:False
2015-11-26 15:07:23,311:DEBUG:recent_client:True
2015-11-26 15:08:25,774:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:08:55,845:DEBUG:recent_client:False
2015-11-26 15:08:59,859:DEBUG:recent_client:True
2015-11-26 15:09:29,929:DEBUG:recent_client:False
2015-11-26 15:13:32,292:DEBUG:recent_client:True, ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 duplex:full)
2015-11-26 15:14:02,363:DEBUG:recent_client:False

[root@he ~]# tail -f /var/log/vdsm/mom.log
2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy '04-cputune'
2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy Engine starting
2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is disabled
2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 sleep_millisecs:0
2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics()
2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics()

On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
It failed before; can you please attach the whole VDSM logs?
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have done a fresh installation and now am getting the below error:

[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
         The following network ports should be opened:
             tcp:5900
             tcp:5901
             udp:5900
             udp:5901
         An example of the required configuration for iptables can be found at:
             /etc/ovirt-hosted-engine/iptables.example
         In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands:
             firewall-cmd -service hosted-console
[ INFO ] Creating VM
[ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination

[root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42741
Detector thread::DEBUG::2015-11-26 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42741)
Detector thread::DEBUG::2015-11-26 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42742)
Detector thread::DEBUG::2015-11-26 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 42743)

On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Can you please paste the output of
    vdsClient -s 0 list
? Thanks.
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
It's a fresh setup; I have deleted all the VMs and am still facing the same issues.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
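(Editorial aside, not from the thread.) Regarding the earlier "Cannot setup Hosted Engine with other VMs running" error: Oved's suggestion to look for qemu processes on the host can be done with a one-liner; this is a sketch:

```shell
# List any running qemu/qemu-kvm processes with their full command lines.
# The [q] bracket trick keeps the grep command itself out of the matches.
ps -ef | grep '[q]emu' || echo "no qemu processes running"
```

If this prints leftover qemu processes on a host that is supposed to be clean, they are what hosted-engine-setup is refusing to deploy next to.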

I have installed KVM in a nested environment on ESXi 6.x; is that recommended? Apart from the hosted engine, is there any other way to configure an Engine HA cluster?

-Nagaraju

On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the below-mentioned URL.
>>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Oved >>>>>>>>>> >>>>>>>>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju < >>>>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> HI >>>>>>>>>>> >>>>>>>>>>> Getting below error while configuring Hosted engine, >>>>>>>>>>> >>>>>>>>>>> root@he ~]# hosted-engine --deploy >>>>>>>>>>> [ INFO ] Stage: Initializing >>>>>>>>>>> [ INFO ] Generating a temporary VNC password. >>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>> Continuing will configure this host for serving >>>>>>>>>>> as hypervisor and create a VM where you have to install oVirt Engine >>>>>>>>>>> afterwards. >>>>>>>>>>> Are you sure you want to continue? (Yes, >>>>>>>>>>> No)[Yes]: yes >>>>>>>>>>> Configuration files: [] >>>>>>>>>>> Log file: >>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log >>>>>>>>>>> Version: otopi-1.3.2 (otopi-1.3.2-1.el6) >>>>>>>>>>> It has been detected that this program is >>>>>>>>>>> executed through an SSH connection without using screen. >>>>>>>>>>> Continuing with the installation may lead to >>>>>>>>>>> broken installation if the network connection fails. >>>>>>>>>>> It is highly recommended to abort the >>>>>>>>>>> installation and run it inside a screen session using command "screen". >>>>>>>>>>> Do you want to continue anyway? 
(Yes, No)[No]: >>>>>>>>>>> yes >>>>>>>>>>> [WARNING] Cannot detect if hardware supports virtualization >>>>>>>>>>> [ INFO ] Bridge ovirtmgmt already created >>>>>>>>>>> [ INFO ] Stage: Environment packages setup >>>>>>>>>>> [ INFO ] Stage: Programs detection >>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>> >>>>>>>>>>> *[ ERROR ] The following VMs has been found: >>>>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >>>>>>>>>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>> [root@he ~]# >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Users mailing list >>>>>>>>>>> Users@ovirt.org >>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users@ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Sandro Bonazzola > Better technology. Faster innovation. Powered by community > collaboration. > See how it works at redhat.com >

On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have installed KVM in a nested environment on ESXi 6.x; is that recommended?
I often use KVM over KVM in a nested environment, but honestly I have never tried to run KVM over ESXi, and I suspect that all of your issues come from there.
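A quick way to verify Simone's suspicion from inside the nested guest is to check whether the CPU exposes hardware virtualization at all; a sketch (standard Linux paths, nothing oVirt-specific):

```shell
# Check whether the guest CPU exposes hardware virtualization.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means KVM cannot work here.
grep -E -c '(vmx|svm)' /proc/cpuinfo || echo "no hardware virtualization flags exposed"

# KVM also needs its kernel modules loaded and the /dev/kvm device node:
lsmod | grep '^kvm' || echo "kvm modules not loaded"
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm missing"
```

On ESXi the vmx/svm flags only appear in the guest if nested virtualization is enabled for that VM in the hypervisor's settings.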
Apart from hosted engine, is there any other way to configure an Engine HA cluster?
Nothing else from the project. You can run the engine on two external VMs clustered with Pacemaker, but that is completely up to you.
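As a rough illustration only of the Pacemaker route Simone mentions — the host names (engine1/engine2) and the VIP are made-up placeholders, and the exact pcs syntax varies with the pcs version:

```shell
# Sketch: a two-node cluster keeping a floating IP (and, behind it, a
# standalone engine pair) highly available. Adapt names and addresses.
pcs cluster auth engine1 engine2
pcs cluster setup --name engine-ha engine1 engine2
pcs cluster start --all
pcs resource create engine-vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s
```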
-Nagaraju
On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the below-mentioned URL,
OK, the issue is here:
Thread-88::ERROR::2015-12-02 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)
But it's pretty strange, because hosted-engine-setup already explicitly checks for virtualization support and simply exits with a clear error if it is missing. Did you play with the kvm module while hosted-engine-setup was running?
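If the modules did get unloaded mid-run, a minimal sketch to restore and verify them (run as root; each step falls back to a message rather than failing):

```shell
# Try to (re)load the KVM modules; use kvm_amd instead of kvm_intel on AMD CPUs.
modprobe kvm       2>/dev/null || echo "kvm module not available here"
modprobe kvm_intel 2>/dev/null || echo "kvm_intel not available (try kvm_amd)"

# libvirt needs the /dev/kvm character device to start a KVM domain:
[ -c /dev/kvm ] && echo "/dev/kvm present" || echo "/dev/kvm missing"
```

If /dev/kvm is still missing after this, the problem is below the OS: virtualization disabled in the BIOS, or (as suspected here) nested virtualization not passed through by the outer hypervisor.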
Can you please attach the hosted-engine-setup logs?
On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> wrote:
Maybe it even makes sense to open a Bugzilla ticket already. Better safe than sorry.
We still need at least one log file to understand what happened.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
> I do not know what logs you are expecting. The logs I got are pasted in the mail; if you require them in pastebin, let me know and I will upload them there.
Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
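For reference, generating the report is typically just the following (a sketch; the package name and options can differ between distro releases):

```shell
yum install -y sos    # package providing the sosreport tool on EL6/EL7
sosreport --batch     # non-interactive; writes a tarball under /tmp or /var/tmp
```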
> > > On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola < > sbonazzo@redhat.com> wrote: > >> >> >> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju <nbudoor@gmail.com> >> wrote: >> >>> I got only 10lines to in the vdsm logs and are below , >>> >>> >> Can you please provide full sos report? >> >> >> >>> >>> [root@he /]# tail -f /var/log/vdsm/vdsm.log >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) >>> Trying to release resource 'Storage.HsmDomainMonitorLock' >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) >>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users) >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) >>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is >>> waiting for it. >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) >>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing >>> records. 
>>> Thread-100::INFO::2015-11-27 >>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: >>> stopMonitoringDomain, Return response: None >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) >>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) >>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> >>> state finished >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>> Owner.releaseAll requests {} resources {} >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> Owner.cancelAll requests {} >>> Thread-100::DEBUG::2015-11-27 >>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) >>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False >>> >>> >>> >>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi < >>> stirabos@redhat.com> wrote: >>> >>>> >>>> >>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju < >>>> nbudoor@gmail.com> wrote: >>>> >>>>> >>>>> >>>>> >>>>> *Below are the entire logs* >>>>> >>>>> >>>> Sorry, with the entire log I mean if you can attach or share >>>> somewhere the whole /var/log/vdsm/vdsm.log cause the latest ten lines are >>>> not enough to point out the issue. 
>>>> >>>> >>>>> >>>>> >>>>> >>>>> >>>>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log * >>>>> >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:50944 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 50944) >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>> Adding connection from 127.0.0.1:50945 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>> Connection removed from 127.0.0.1:50945 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:50945 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 50945) >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>> Adding connection from 127.0.0.1:50946 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>> Connection removed from 127.0.0.1:50946 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>> Detected protocol xml from 127.0.0.1:50946 >>>>> Detector thread::DEBUG::2015-11-26 >>>>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>> http detected from ('127.0.0.1', 50946) >>>>> >>>>> >>>>> >>>>> >>>>> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log * >>>>> >>>>> MainProcess::DEBUG::2015-11-26 >>>>> 
15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) >>>>> call readMultipathConf with () {} >>>>> MainProcess::DEBUG::2015-11-26 >>>>> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) >>>>> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {', >>>>> ' polling_interval 5', ' getuid_callout >>>>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', >>>>> ' no_path_retry fail', ' user_friendly_names no', ' >>>>> flush_on_last_del yes', ' fast_io_fail_tmo 5', ' >>>>> dev_loss_tmo 30', ' max_fds 4096', '}', '', >>>>> 'devices {', 'device {', ' vendor "HITACHI"', ' >>>>> product "DF.*"', ' getuid_callout >>>>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"', >>>>> '}', 'device {', ' vendor "COMPELNT"', ' >>>>> product "Compellent Vol"', ' no_path_retry >>>>> fail', '}', 'device {', ' # multipath.conf.default', ' >>>>> vendor "DGC"', ' product ".*"', ' >>>>> product_blacklist "LUNZ"', ' path_grouping_policy >>>>> "group_by_prio"', ' path_checker "emc_clariion"', ' >>>>> hardware_handler "1 emc"', ' prio "emc"', ' >>>>> failback immediate', ' rr_weight >>>>> "uniform"', ' # vdsm required configuration', ' >>>>> getuid_callout "/lib/udev/scsi_id --whitelisted >>>>> --replace-whitespace --device=/dev/%n"', ' features "0"', >>>>> ' no_path_retry fail', '}', '}'] >>>>> MainProcess|Thread-13::DEBUG::2015-11-26 >>>>> 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) >>>>> call getHardwareInfo with () {} >>>>> MainProcess|Thread-13::DEBUG::2015-11-26 >>>>> 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) >>>>> return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID': >>>>> 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise >>>>> Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)', >>>>> 'systemManufacturer': 'Red Hat'} >>>>> MainProcess|Thread-21::DEBUG::2015-11-26 >>>>> 
15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) >>>>> call validateAccess with ('qemu', ('qemu', 'kvm'), >>>>> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} >>>>> MainProcess|Thread-21::DEBUG::2015-11-26 >>>>> 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) >>>>> return validateAccess with None >>>>> MainProcess|Thread-22::DEBUG::2015-11-26 >>>>> 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) >>>>> call validateAccess with ('qemu', ('qemu', 'kvm'), >>>>> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {} >>>>> MainProcess|Thread-22::DEBUG::2015-11-26 >>>>> 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) >>>>> return validateAccess with None >>>>> MainProcess|PolicyEngine::DEBUG::2015-11-26 >>>>> 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) >>>>> call ksmTune with ({'run': 0},) {} >>>>> MainProcess|PolicyEngine::DEBUG::2015-11-26 >>>>> 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) >>>>> return ksmTune with None >>>>> >>>>> >>>>> >>>>> *[root@he ~]# tail -f /var/log/vdsm/connectivity.log * >>>>> >>>>> >>>>> 2015-11-26 15:02:02,632:DEBUG:recent_client:False >>>>> 2015-11-26 15:04:44,975:DEBUG:recent_client:True >>>>> 2015-11-26 15:05:15,039:DEBUG:recent_client:False >>>>> 2015-11-26 15:07:23,311:DEBUG:recent_client:True >>>>> 2015-11-26 15:08:25,774:DEBUG:recent_client:True, >>>>> ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 >>>>> duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), >>>>> bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 >>>>> duplex:full) >>>>> 2015-11-26 15:08:55,845:DEBUG:recent_client:False >>>>> 2015-11-26 15:08:59,859:DEBUG:recent_client:True >>>>> 2015-11-26 15:09:29,929:DEBUG:recent_client:False >>>>> 2015-11-26 15:13:32,292:DEBUG:recent_client:True, >>>>> ovirtmgmt:(operstate:up speed:0 duplex:unknown), 
lo:(operstate:up speed:0 >>>>> duplex:unknown), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), >>>>> bond0:(operstate:down speed:0 duplex:unknown), eth0:(operstate:up speed:100 >>>>> duplex:full) >>>>> 2015-11-26 15:14:02,363:DEBUG:recent_client:False >>>>> >>>>> >>>>> >>>>> >>>>> *[root@he ~]# tail -f /var/log/vdsm/mom.log * >>>>> >>>>> >>>>> 2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy >>>>> '04-cputune' >>>>> 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy >>>>> Engine starting >>>>> 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is >>>>> disabled >>>>> 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - Updating >>>>> KSM configuration: pages_to_scan:0 merge_across_nodes:8 run:0 >>>>> sleep_millisecs:0 >>>>> 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - getStatistics() >>>>> 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - getStatistics() >>>>> 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - getStatistics() >>>>> 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - getStatistics() >>>>> 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - getStatistics() >>>>> 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - getStatistics() >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju < >>>>> nbudoor@gmail.com> wrote: >>>>> >>>>>> Below are the logs, >>>>>> >>>>>> >>>>>> [root@he ~]# tail -f /var/log/vdsm/vdsm.log >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>> Detected protocol xml from 127.0.0.1:50944 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>> http detected from ('127.0.0.1', 50944) >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>> Adding connection from 127.0.0.1:50945 >>>>>> Detector 
thread::DEBUG::2015-11-26 >>>>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>> Connection removed from 127.0.0.1:50945 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>> Detected protocol xml from 127.0.0.1:50945 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>> http detected from ('127.0.0.1', 50945) >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>> Adding connection from 127.0.0.1:50946 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>> Connection removed from 127.0.0.1:50946 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>> Detected protocol xml from 127.0.0.1:50946 >>>>>> Detector thread::DEBUG::2015-11-26 >>>>>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>> http detected from ('127.0.0.1', 50946) >>>>>> >>>>>> >>>>>> >>>>>> On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi < >>>>>> stirabos@redhat.com> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju < >>>>>>> nbudoor@gmail.com> wrote: >>>>>>> >>>>>>>> I have done a fresh installation and now am getting the below >>>>>>>> error, >>>>>>>> >>>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>>> [ INFO ] Stage: Transaction commit >>>>>>>> [ INFO ] Stage: Closing up >>>>>>>> The following network ports should be opened: >>>>>>>> tcp:5900 >>>>>>>> tcp:5901 >>>>>>>> udp:5900 >>>>>>>> udp:5901 >>>>>>>> An example of the required configuration for >>>>>>>> iptables can be found at: >>>>>>>> /etc/ovirt-hosted-engine/iptables.example >>>>>>>> In order to 
configure firewalld, copy the files from >>>>>>>> /etc/ovirt-hosted-engine/firewalld to >>>>>>>> /etc/firewalld/services >>>>>>>> and execute the following commands: >>>>>>>> firewall-cmd -service hosted-console >>>>>>>> [ INFO ] Creating VM >>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Cannot set >>>>>>>> temporary password for console connection. The VM may not have been >>>>>>>> created: please check VDSM logs >>>>>>>> [ INFO ] Stage: Clean up >>>>>>>> [ INFO ] Generating answer file >>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' >>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>> [ INFO ] Stage: Termination >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [root@he ovirt]# tail -f /var/log/vdsm/ >>>>>>>> backup/ connectivity.log mom.log >>>>>>>> supervdsm.log vdsm.log >>>>>>>> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>> Detected protocol xml from 127.0.0.1:42741 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>> http detected from ('127.0.0.1', 42741) >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>>>> Adding connection from 127.0.0.1:42742 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>>>> Connection removed from 127.0.0.1:42742 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>> Detected protocol xml from 127.0.0.1:42742 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>> http detected from ('127.0.0.1', 42742) >>>>>>>> Detector 
thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>>>> Adding connection from 127.0.0.1:42743 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>>>> Connection removed from 127.0.0.1:42743 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>> Detected protocol xml from 127.0.0.1:42743 >>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>> http detected from ('127.0.0.1', 42743) >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> It failed before, can you please attach the whole VDSM logs? >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi < >>>>>>>> stirabos@redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju < >>>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Its a fresh setup ,I have deleted all the vms ,still am >>>>>>>>>> facing same issues . >>>>>>>>>> >>>>>>>>>> >>>>>>>>> Can you please paste the output of >>>>>>>>> vdsClient -s 0 list >>>>>>>>> ? >>>>>>>>> thanks >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali < >>>>>>>>>> oourfali@redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi >>>>>>>>>>> >>>>>>>>>>> Seems like you have existing VMs running on the host (you >>>>>>>>>>> can check that by looking for qemu processes on your host). >>>>>>>>>>> Is that a clean deployment, or was the host used before >>>>>>>>>>> for running VMs? >>>>>>>>>>> Perhaps you already ran the hosted engine setup, and the >>>>>>>>>>> VM was left there? >>>>>>>>>>> >>>>>>>>>>> CC-ing Sandro who is more familiar in that than me. 
>>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Oved >>>>>>>>>>> >>>>>>>>>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju < >>>>>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> HI >>>>>>>>>>>> >>>>>>>>>>>> Getting below error while configuring Hosted engine, >>>>>>>>>>>> >>>>>>>>>>>> root@he ~]# hosted-engine --deploy >>>>>>>>>>>> [ INFO ] Stage: Initializing >>>>>>>>>>>> [ INFO ] Generating a temporary VNC password. >>>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>>> Continuing will configure this host for serving >>>>>>>>>>>> as hypervisor and create a VM where you have to install oVirt Engine >>>>>>>>>>>> afterwards. >>>>>>>>>>>> Are you sure you want to continue? (Yes, >>>>>>>>>>>> No)[Yes]: yes >>>>>>>>>>>> Configuration files: [] >>>>>>>>>>>> Log file: >>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log >>>>>>>>>>>> Version: otopi-1.3.2 (otopi-1.3.2-1.el6) >>>>>>>>>>>> It has been detected that this program is >>>>>>>>>>>> executed through an SSH connection without using screen. >>>>>>>>>>>> Continuing with the installation may lead to >>>>>>>>>>>> broken installation if the network connection fails. >>>>>>>>>>>> It is highly recommended to abort the >>>>>>>>>>>> installation and run it inside a screen session using command "screen". >>>>>>>>>>>> Do you want to continue anyway? 
(Yes, No)[No]: >>>>>>>>>>>> yes >>>>>>>>>>>> [WARNING] Cannot detect if hardware supports >>>>>>>>>>>> virtualization >>>>>>>>>>>> [ INFO ] Bridge ovirtmgmt already created >>>>>>>>>>>> [ INFO ] Stage: Environment packages setup >>>>>>>>>>>> [ INFO ] Stage: Programs detection >>>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>>> >>>>>>>>>>>> *[ ERROR ] The following VMs has been found: >>>>>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >>>>>>>>>>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >>>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >>>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>>> [root@he ~]# >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Users mailing list >>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list >>>>>>>>>> Users@ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Sandro Bonazzola >> Better technology. Faster innovation. Powered by community >> collaboration. >> See how it works at redhat.com >> > >

Hi Simone

I have installed a KVM server on the physical machine, installed a CentOS 6.7 VM on that server, and tried to deploy hosted engine in the VM; I am getting the same error. Here are the logs: http://pastebin.com/pg6k8irV

Can you please help me?

Thanks,
Nagaraju

On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have installed KVM in the nested environment in ESXi6.x version is that recommended ?
I often use KVM over KVM in nested environment but honestly I never tried to run KVM over ESXi but I suspect that all of your issues comes from there.
apart from Hosted engine is there any other alternate way to configure Engine HA cluster ?
Nothing else from the project. You can use two external VMs in cluster with pacemaker but it's completely up to you.
-Nagaraju
On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
pls fine the logs from the below mentioned URL,
OK, the issue is here:
Thread-88::ERROR::2015-12-02 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed Traceback (most recent call last): File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm self._run() File "/usr/share/vdsm/virt/vm.py", line 3363, in _run self._connection.createXML(domxml, flags), File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)
but it's pretty strange cause hosted-engine-setup already explicitly check for visualization support and just exits with a clear error if not. Did you played with the kvm module while hosted-engine-setup was running?
Can you please hosted-engine-setup logs?
On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <stirabos@redhat.com
wrote:
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> wrote:
Maybe even makes sense to open a bugzilla ticket already. Better safe than sorry.
We still need at least one log file to understand what happened.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> wrote:

> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
>> I do not know what logs you are expecting. The logs I got are pasted in the mail; if you require them in pastebin, let me know and I will upload them there.
>
> Please run the sosreport utility and share the resulting archive wherever you prefer. You can follow this guide:
> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

Earlier in the quoted history:

On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju wrote:
>> I got only 10 lines in the vdsm logs; they are below.
>> [tail of /var/log/vdsm/vdsm.log: Storage.ResourceManager DEBUG lines releasing 'Storage.HsmDomainMonitorLock', then Task `0128b179-fdb3-474b-a196-8cc81a72a837` moving from state preparing to state finished -- trimmed]
> Can you please provide a full sos report?

On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi wrote:
> Sorry, with the entire log I mean if you can attach or share somewhere the whole /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to point out the issue.

[Nagaraju's earlier replies quoted tails of /var/log/vdsm/vdsm.log, connectivity.log and mom.log (protocol-detector and getStatistics DEBUG lines), plus /var/log/vdsm/supervdsm.log -- notably getHardwareInfo returning 'systemProductName': 'KVM', i.e. the host is itself a KVM guest -- trimmed]

On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju wrote:
>> I have done a fresh installation and now am getting the below error:
>> [ INFO  ] Creating VM
>> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary password for console connection. The VM may not have been created: please check VDSM logs
>
> (Simone Tiraboschi) It failed before; can you please attach the whole VDSM logs?

On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju wrote:
>> It's a fresh setup; I have deleted all the VMs and still am facing the same issues.
>
> (Simone Tiraboschi) Can you please paste the output of `vdsClient -s 0 list`? Thanks.

[The remainder of the quoted history repeats the original "Cannot setup Hosted Engine with other VMs running" report from Nov 26 and Oved Ourfali's reply, already shown at the top of this thread -- trimmed]

On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Hi Simone
I have installed a KVM server on the physical machine, created a CentOS 6.7 VM on it, and tried to deploy the hosted engine inside that VM. I am getting the same error; the logs are below.
Can you please help me?
The issue is here:

Thread-84::ERROR::2016-01-04 19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.

libvirt refuses to start the engine VM because KVM is not available. Can you please check it?
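Since in this setup the failing host is itself a VM running on a KVM server, the usual cause is that nested virtualization is not enabled on the physical host. A hedged sketch for checking this on the KVM host (the module names are assumptions; kvm_intel applies to Intel hardware, kvm_amd to AMD):

```shell
#!/bin/sh
# Hedged sketch: report whether the loaded kvm modules expose nested virtualization.
check_nested() {
    for m in kvm_intel kvm_amd; do
        p="/sys/module/$m/parameters/nested"
        if [ -r "$p" ]; then
            # "Y" or "1" means nested virtualization is enabled for this module
            echo "$m nested: $(cat "$p")"
        else
            echo "$m: not loaded on this host"
        fi
    done
}
check_nested
```

If nested is off, it can typically be enabled by reloading the module with `nested=1` (e.g. `modprobe -r kvm_intel && modprobe kvm_intel nested=1`); the guest VM then also needs a CPU mode that passes the vmx/svm flag through (e.g. host-passthrough).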
Thanks, Nagaraju
On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have installed KVM in a nested environment on ESXi 6.x. Is that recommended?
I often run KVM over KVM in nested environments, but honestly I have never tried to run KVM over ESXi, and I suspect that all of your issues come from there.
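For reference, when the KVM host is itself a VM on ESXi, hardware-assisted virtualization has to be exposed to that VM before KVM can work inside it. On ESXi 5.1 and later this is the "Expose hardware assisted virtualization to the guest OS" option, which corresponds to a per-VM .vmx setting like the following (an assumption about this setup, not something confirmed in the thread):

```
vhv.enable = "TRUE"
```

Without it, the guest CPU does not advertise vmx/svm and libvirt will refuse to start any domain that requires KVM.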
Apart from hosted engine, is there any other alternate way to configure an engine HA cluster?
>>>>>>>>>> >>>>>>>>>> [ INFO ] Updating hosted-engine configuration >>>>>>>>>> [ INFO ] Stage: Transaction commit >>>>>>>>>> [ INFO ] Stage: Closing up >>>>>>>>>> The following network ports should be opened: >>>>>>>>>> tcp:5900 >>>>>>>>>> tcp:5901 >>>>>>>>>> udp:5900 >>>>>>>>>> udp:5901 >>>>>>>>>> An example of the required configuration for >>>>>>>>>> iptables can be found at: >>>>>>>>>> /etc/ovirt-hosted-engine/iptables.example >>>>>>>>>> In order to configure firewalld, copy the files >>>>>>>>>> from >>>>>>>>>> /etc/ovirt-hosted-engine/firewalld to >>>>>>>>>> /etc/firewalld/services >>>>>>>>>> and execute the following commands: >>>>>>>>>> firewall-cmd -service hosted-console >>>>>>>>>> [ INFO ] Creating VM >>>>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Cannot set >>>>>>>>>> temporary password for console connection. The VM may not have been >>>>>>>>>> created: please check VDSM logs >>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf' >>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> [root@he ovirt]# tail -f /var/log/vdsm/ >>>>>>>>>> backup/ connectivity.log mom.log >>>>>>>>>> supervdsm.log vdsm.log >>>>>>>>>> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>>>> Detected protocol xml from 127.0.0.1:42741 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>>>> http detected from ('127.0.0.1', 42741) >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>>>>>> Adding connection from 127.0.0.1:42742 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 
>>>>>>>>>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>>>>>> Connection removed from 127.0.0.1:42742 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>>>> Detected protocol xml from 127.0.0.1:42742 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>>>> http detected from ('127.0.0.1', 42742) >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) >>>>>>>>>> Adding connection from 127.0.0.1:42743 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) >>>>>>>>>> Connection removed from 127.0.0.1:42743 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) >>>>>>>>>> Detected protocol xml from 127.0.0.1:42743 >>>>>>>>>> Detector thread::DEBUG::2015-11-26 >>>>>>>>>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over >>>>>>>>>> http detected from ('127.0.0.1', 42743) >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> It failed before, can you please attach the whole VDSM logs? >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi < >>>>>>>>>> stirabos@redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju < >>>>>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Its a fresh setup ,I have deleted all the vms ,still am >>>>>>>>>>>> facing same issues . >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> Can you please paste the output of >>>>>>>>>>> vdsClient -s 0 list >>>>>>>>>>> ? 
>>>>>>>>>>> thanks >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali < >>>>>>>>>>>> oourfali@redhat.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi >>>>>>>>>>>>> >>>>>>>>>>>>> Seems like you have existing VMs running on the host >>>>>>>>>>>>> (you can check that by looking for qemu processes on your host). >>>>>>>>>>>>> Is that a clean deployment, or was the host used before >>>>>>>>>>>>> for running VMs? >>>>>>>>>>>>> Perhaps you already ran the hosted engine setup, and the >>>>>>>>>>>>> VM was left there? >>>>>>>>>>>>> >>>>>>>>>>>>> CC-ing Sandro who is more familiar in that than me. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Oved >>>>>>>>>>>>> >>>>>>>>>>>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju < >>>>>>>>>>>>> nbudoor@gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> HI >>>>>>>>>>>>>> >>>>>>>>>>>>>> Getting below error while configuring Hosted engine, >>>>>>>>>>>>>> >>>>>>>>>>>>>> root@he ~]# hosted-engine --deploy >>>>>>>>>>>>>> [ INFO ] Stage: Initializing >>>>>>>>>>>>>> [ INFO ] Generating a temporary VNC password. >>>>>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>>>>> Continuing will configure this host for >>>>>>>>>>>>>> serving as hypervisor and create a VM where you have to install oVirt >>>>>>>>>>>>>> Engine afterwards. >>>>>>>>>>>>>> Are you sure you want to continue? (Yes, >>>>>>>>>>>>>> No)[Yes]: yes >>>>>>>>>>>>>> Configuration files: [] >>>>>>>>>>>>>> Log file: >>>>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log >>>>>>>>>>>>>> Version: otopi-1.3.2 (otopi-1.3.2-1.el6) >>>>>>>>>>>>>> It has been detected that this program is >>>>>>>>>>>>>> executed through an SSH connection without using screen. >>>>>>>>>>>>>> Continuing with the installation may lead to >>>>>>>>>>>>>> broken installation if the network connection fails. 
>>>>>>>>>>>>>> It is highly recommended to abort the >>>>>>>>>>>>>> installation and run it inside a screen session using command "screen". >>>>>>>>>>>>>> Do you want to continue anyway? (Yes, >>>>>>>>>>>>>> No)[No]: yes >>>>>>>>>>>>>> [WARNING] Cannot detect if hardware supports >>>>>>>>>>>>>> virtualization >>>>>>>>>>>>>> [ INFO ] Bridge ovirtmgmt already created >>>>>>>>>>>>>> [ INFO ] Stage: Environment packages setup >>>>>>>>>>>>>> [ INFO ] Stage: Programs detection >>>>>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>>>>> >>>>>>>>>>>>>> *[ ERROR ] The following VMs has been found: >>>>>>>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >>>>>>>>>>>>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >>>>>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >>>>>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>>>>> [root@he ~]# >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Users mailing list >>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users@ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>>> >>>> -- >>>> Sandro Bonazzola >>>> Better technology. Faster innovation. Powered by community >>>> collaboration. 
>>>> See how it works at redhat.com >>>> >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >>

Is there a command to check whether KVM is available or not? Below is the output when I run the rpm command:

[root@he /]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64

On Mon, Jan 4, 2016 at 8:24 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Hi Simone
I have installed a KVM server on the physical machine, created a CentOS 6.7 VM on it, and tried to deploy the hosted engine inside that VM. I am getting the same error; the logs are below.
Can you please help me?
The issue is here:
Thread-84::ERROR::2016-01-04 19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
libvirt refuses to start the engine VM because KVM is not available. Can you please check it?
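One quick way to check is to look for the CPU virtualization flags and the /dev/kvm device node. This is a generic sketch (the `check_kvm` helper is made up for illustration, not an oVirt tool):

```shell
# Hypothetical helper: report whether this host can run KVM guests.
check_kvm() {
  # vmx = Intel VT-x, svm = AMD-V; at least one must appear in the CPU flags
  if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization extensions: present"
  else
    echo "CPU virtualization extensions: MISSING"
  fi
  # libvirt can only use KVM when the /dev/kvm device node exists
  if [ -e /dev/kvm ]; then
    echo "/dev/kvm: present"
  else
    echo "/dev/kvm: MISSING (load the kvm and kvm_intel/kvm_amd modules)"
  fi
}

check_kvm
```

`lsmod | grep kvm` and, on hosts where libvirt-client ships it, `virt-host-validate` give similar information.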
Thanks, Nagaraju
On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have installed KVM in a nested environment on ESXi 6.x; is that recommended?
I often use KVM over KVM in a nested environment, but honestly I have never tried to run KVM over ESXi, and I suspect that all of your issues come from there.
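For what it's worth, whether the virtualization extensions actually reach a nested guest is visible in its CPU flags. A sketch (the `nested_ext_visible` helper is hypothetical; on ESXi the extensions are forwarded only if "Expose hardware assisted virtualization to the guest OS" is enabled in the VM's CPU settings):

```shell
# Report whether the cpuinfo file passed as $1 contains hardware
# virtualization flags (vmx = Intel VT-x, svm = AMD-V).
nested_ext_visible() {
  grep -q -E 'vmx|svm' "$1" && echo "visible" || echo "not visible"
}

# On a KVM L0 host, nested support is a module parameter
# (kvm_amd on AMD hardware):
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || true

# Inside the nested guest:
nested_ext_visible /proc/cpuinfo
```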
Apart from the hosted engine, is there any alternative way to configure an engine HA cluster?
Nothing else from the project. You can use two external VMs in a cluster with Pacemaker, but that is completely up to you.
-Nagaraju
On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the URL mentioned below.
OK, the issue is here:
Thread-88::ERROR::2015-12-02 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)
But it's pretty strange, because hosted-engine-setup already explicitly checks for virtualization support and simply exits with a clear error if it is missing. Did you play with the kvm module while hosted-engine-setup was running?
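One way to see whether the kvm modules disappeared mid-run is to list the loaded ones. A sketch (the `kvm_modules` filter is made up for illustration; reloading requires root):

```shell
# Filter `lsmod` output down to modules whose name starts with "kvm",
# skipping the header line; expect kvm plus kvm_intel or kvm_amd.
kvm_modules() {
  awk 'NR > 1 && $1 ~ /^kvm/ {print $1}'
}

lsmod 2>/dev/null | kvm_modules
# If the list is empty, reload as root:
#   modprobe kvm && modprobe kvm_intel   # or kvm_amd on AMD hosts
```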
Can you please share the hosted-engine-setup logs?
On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi < stirabos@redhat.com> wrote:
> > > On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm@gmail.com> > wrote: > >> Maybe even makes sense to open a bugzilla ticket already. Better >> safe than sorry. >> > > We still need at least one log file to understand what happened. > > >> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos@redhat.com> >> wrote: >> >>> >>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju < >>> nbudoor@gmail.com> wrote: >>> >>>> I do not know what logs you are expecting ? the logs which I got >>>> is pasted in the mail if you require in pastebin let me know I will upload >>>> there . >>>> >>> >>> >>> Please run sosreport utility and share the resulting archive where >>> you prefer. >>> You can follow this guide: >>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/ >>> >>>> >>>> >>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola < >>>> sbonazzo@redhat.com> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju < >>>>> nbudoor@gmail.com> wrote: >>>>> >>>>>> I got only 10lines to in the vdsm logs and are below , >>>>>> >>>>>> >>>>> Can you please provide full sos report? >>>>> >>>>> >>>>> >>>>>> >>>>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) >>>>>> Trying to release resource 'Storage.HsmDomainMonitorLock' >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) >>>>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users) >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) >>>>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is >>>>>> waiting for it. 
>>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) >>>>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing >>>>>> records. >>>>>> Thread-100::INFO::2015-11-27 >>>>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: >>>>>> stopMonitoringDomain, Return response: None >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare) >>>>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing -> >>>>>> state finished >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>>> Owner.releaseAll requests {} resources {} >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> Owner.cancelAll requests {} >>>>>> Thread-100::DEBUG::2015-11-27 >>>>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref) >>>>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False >>>>>> >>>>>> >>>>>> >>>>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi < >>>>>> stirabos@redhat.com> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju < >>>>>>> nbudoor@gmail.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Below are the entire logs* >>>>>>>> >>>>>>>> >>>>>>> Sorry, with the entire log I mean if you can attach or share >>>>>>> somewhere the whole /var/log/vdsm/vdsm.log cause the latest ten lines are >>>>>>> not enough to point out the issue. 
(Yes, >>>>>>>>>>>>>>> No)[No]: yes >>>>>>>>>>>>>>> [WARNING] Cannot detect if hardware supports >>>>>>>>>>>>>>> virtualization >>>>>>>>>>>>>>> [ INFO ] Bridge ovirtmgmt already created >>>>>>>>>>>>>>> [ INFO ] Stage: Environment packages setup >>>>>>>>>>>>>>> [ INFO ] Stage: Programs detection >>>>>>>>>>>>>>> [ INFO ] Stage: Environment setup >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> *[ ERROR ] The following VMs has been found: >>>>>>>>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage >>>>>>>>>>>>>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running* >>>>>>>>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>>>>>>>> [ INFO ] Generating answer file >>>>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf' >>>>>>>>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>>>>>>>> [root@he ~]# >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>> Users@ovirt.org >>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users@ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Sandro Bonazzola >>>>> Better technology. Faster innovation. Powered by community >>>>> collaboration. >>>>> See how it works at redhat.com >>>>> >>>> >>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >

I get the below output:

[root@he ~]# lsmod | grep kvm
kvm_intel              55624  0
kvm                   345460  1 kvm_intel

On Tue, Jan 5, 2016 at 12:06 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Is there any command to check whether KVM is available or not?
Below is the output when I run the rpm command.
[root@he /]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
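To answer the question above, a quick sketch of the usual checks (standard Linux paths; the helper function and its name are my own illustration, not an oVirt or RHEL tool):

```shell
# hw_virt_flags reads /proc/cpuinfo-style text on stdin and reports whether
# the CPU advertises hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
hw_virt_flags() {
    if grep -qE '(vmx|svm)'; then
        echo "hw-virt: yes"
    else
        echo "hw-virt: no"
    fi
}

# Live checks -- these only print, they change nothing:
hw_virt_flags < /proc/cpuinfo
[ -e /dev/kvm ] && echo "/dev/kvm: present" \
                || echo "/dev/kvm: missing (try: modprobe kvm_intel   # or kvm_amd)"
lsmod | grep -q '^kvm' && echo "kvm module: loaded" || echo "kvm module: not loaded"
```

If `/proc/cpuinfo` shows no vmx/svm flag inside a VM, the hypervisor is not exposing hardware virtualization to the guest, and loading the kvm modules will not help.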
On Mon, Jan 4, 2016 at 8:24 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Hi Simone
I have installed a KVM server on the physical machine, installed a CentOS 6.7 VM on that server, and tried to deploy the hosted engine inside the VM, but I am getting the same error. Below are the logs.

Can you please help me?
The issue is here:
Thread-84::ERROR::2016-01-04 19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
libvirt refuses to start the engine VM because KVM is not available. Can you please check it?
Thanks, Nagaraju
On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju <nbudoor@gmail.com> wrote:
I have installed KVM in a nested environment on ESXi 6.x; is that recommended?
I often use KVM over KVM in nested environments, but honestly I have never tried to run KVM over ESXi, and I suspect that all of your issues come from there.
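(A side note from general ESXi experience, not something verified in this thread: ESXi does not pass hardware virtualization through to a guest unless it is explicitly exposed, either via the vSphere option "Expose hardware assisted virtualization to the guest OS" or, on ESXi 5.1 and later, by adding a line to the guest's .vmx configuration file while the VM is powered off.)

```
vhv.enable = "TRUE"
```

After powering the guest back on, the vmx/svm flag should appear in its /proc/cpuinfo.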
Apart from the hosted engine, is there any other alternate way to configure an engine HA cluster?
Nothing else from the project. You can use two external VMs in a cluster with Pacemaker, but that is completely up to you.
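For completeness, a pseudocode-level sketch of that Pacemaker alternative (hostnames, the floating IP, and the init-script name are assumptions for illustration; commands follow the pcs 0.9-era syntax and are untested here):

```
# On both engine VMs, with pacemaker/corosync/pcs installed:
pcs cluster auth engine1 engine2 -u hacluster
pcs cluster setup --name engine-ha engine1 engine2
pcs cluster start --all

# A floating IP plus the engine service, kept together and started in order:
pcs resource create engine-ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
pcs resource create engine-svc lsb:ovirt-engine
pcs constraint colocation add engine-svc with engine-ip
pcs constraint order engine-ip then engine-svc
```

The engine database would also need to be shared or replicated between the two nodes; that part is out of scope for this sketch.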
-Nagaraju
On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor@gmail.com> wrote:
Please find the logs at the below-mentioned URL:

http://pastebin.com/ZeKyyFbN
OK, the issue is here:
Thread-88::ERROR::2015-12-02 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus) vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)
but it's pretty strange, because hosted-engine-setup already explicitly checks for virtualization support and just exits with a clear error if it is missing. Did you play with the kvm module while hosted-engine-setup was running?
Can you please share the hosted-engine-setup logs?
Earlier in the thread (Fri, Nov 27, 2015):

Budur Nagaraju wrote:

I got only 10 lines in the vdsm logs, and they are below:

[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-100::DEBUG::2015-11-27 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-100::INFO::2015-11-27 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect: stopMonitoringDomain, Return response: None
(followed by state-transition DEBUG lines for Task `0128b179-fdb3-474b-a196-8cc81a72a837`: preparing -> finished, releaseAll, cancelAll, ref 0 aborting False)

Sandro Bonazzola wrote:

Can you please provide a full sos report?

Budur Nagaraju wrote:

I do not know what logs you are expecting; the logs which I got are pasted in the mail. If you require them in pastebin, let me know and I will upload them there.

Simone Tiraboschi wrote:

Please run the sosreport utility and share the resulting archive where you prefer. You can follow this guide: http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

Maxim Kovgan wrote:

Maybe it even makes sense to open a bugzilla ticket already. Better safe than sorry.

Simone Tiraboschi wrote:

We still need at least one log file to understand what happened.

Still earlier (Thu, Nov 26, 2015), Budur had posted the "entire logs" -- the last ten lines of each of /var/log/vdsm/vdsm.log (the protocol-detector DEBUG lines quoted above), /var/log/vdsm/supervdsm.log (readMultipathConf, a getHardwareInfo call reporting 'systemProductName': 'KVM', validateAccess on /rhev/data-center/mnt/10.204.207.152:_home_vms, and ksmTune), /var/log/vdsm/connectivity.log (recent_client state changes for ovirtmgmt, lo, ;vdsmdummy;, bond0 and eth0), and /var/log/vdsm/mom.log (policy load, a KSM configuration update, and getStatistics() calls) -- and Simone had clarified:

Sorry, with the entire log I mean: can you attach or share somewhere the whole /var/log/vdsm/vdsm.log? The latest ten lines are not enough to point out the issue.
participants (5)
- Budur Nagaraju
- Maxim Kovgan
- Oved Ourfali
- Sandro Bonazzola
- Simone Tiraboschi