[Users] Sanlock issue when trying to start vm

Hi, after getting the 3.1 beta engine and a host set up I now get an error when trying to start a VM. The engine reports this:

VM myvm is down. Exit message: internal error Failed to open socket to sanlock daemon: No such file or directory.

On the node no sanlock daemon is running. When I try to start the service I get this:

Jun 19 20:22:31 node systemd-sanlock[13607]: Starting sanlock: [ OK ]
Jun 19 20:22:31 node sanlock[13621]: 2012-06-19 20:22:31+0200 910 [13621]: sanlock daemon started 2.3 aio 1 10 renew 20 80 host 93bea910-1d9d-4203-8333-beb3d7b92c10.node.local time 1340130151
Jun 19 20:22:31 node sanlock[13621]: 2012-06-19 20:22:31+0200 910 [13621]: set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted
Jun 19 20:22:31 node sanlock[13621]: 2012-06-19 20:22:31+0200 910 [13621]: wdmd connect failed for watchdog handling
Jun 19 20:22:31 node systemd[1]: sanlock.service: main process exited, code=exited, status=255
Jun 19 20:22:31 node systemd[1]: Unit sanlock.service entered failed state.

I followed the SELinux and sanlock threads, but I have disabled SELinux on the node, so that cannot be the reason for this failure.

Regards,
Dennis
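A quick way to confirm what the daemons are doing on a systemd host is sketched below; the sanlock.service unit name is taken from the log above, while wdmd.service is an assumption based on the watchdog daemon usually shipping its own unit:

  # Check both daemons; the "wdmd connect failed" line suggests wdmd is not running either.
  systemctl status wdmd.service sanlock.service
  # Confirm SELinux really is out of the picture (expect "Disabled" or "Permissive").
  getenforce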

What is your node? Fedora 16/17 or just ovirt-node?

On 2012-6-20 2:36, Dennis Jacobfeuerborn wrote:
Hi, after getting the 3.1 beta engine and a host set up I now get an error when trying to start a vm.
Engine reports this: VM myvm is down. Exit message: internal error Failed to open socket to sanlock daemon: No such file or directory.
[...]
--
Shu Ming <shuming@linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory

On 06/19/2012 09:36 PM, Dennis Jacobfeuerborn wrote:
Hi, after getting the 3.1 beta engine and a host set up I now get an error when trying to start a vm.
Engine reports this: VM myvm is down. Exit message: internal error Failed to open socket to sanlock daemon: No such file or directory.

Please take a look at the following bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=832935
https://bugzilla.redhat.com/show_bug.cgi?id=832056
[...]
--
Thanks,
Rami Vaknin, QE @ Red Hat, TLV, IL.

On 06/20/2012 08:17 AM, Rami Vaknin wrote:
[...]

Please take a look at the following bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=832935
https://bugzilla.redhat.com/show_bug.cgi?id=832056
Thanks, after a "modprobe softdog" and restarting wdmd and sanlock I was able to start the VM.

Unfortunately my experiment with using nested VMs as fake nodes didn't pan out (the "guest-in-guest" booted for a bit and then froze). Is there a way to use pure qemu guests for testing (like in devstack)? While I do have two systems that support hardware virtualization, I cannot reinstall either of them to use as a host.

Regards,
Dennis
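For reference, the recovery steps described above amount to roughly the following; the unit names are an assumption based on the log earlier in the thread:

  # Load the software watchdog module so wdmd has a /dev/watchdog to open
  # when the host has no hardware watchdog.
  modprobe softdog
  # Restart the watchdog multiplexing daemon first, then sanlock, so the
  # "wdmd connect failed for watchdog handling" error goes away.
  systemctl restart wdmd.service
  systemctl restart sanlock.service
  systemctl status sanlock.service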

On 06/20/2012 08:33 AM, Dennis Jacobfeuerborn wrote:
[...]
Is there a way to use pure qemu guests for testing (like in devstack)? While I do have two systems that support hardware virtualization, I cannot reinstall either of them to use as a host.

The CentOS builds include a plugin called vdsm-hook-simpleqemu. I think I saw the hook sitting in Git as well, so that plug-in might do what you are looking for?
Thanks,
Robert

On 06/21/2012 12:04 AM, Robert Middleswarth wrote:
[...]
The CentOS builds include a plugin called vdsm-hook-simpleqemu. I think I saw the hook sitting in Git as well, so that plug-in might do what you are looking for?
Hm, it looks like it, but that doesn't seem to be available for the regular builds. There is a vdsm-hook-faqemu package though which, comparing the code, seems to do a similar thing (it also uses the "fake_kvm_support" setting from vdsm.conf, like simpleqemu). Is the latter maybe a replacement for the former?

Regards,
Dennis
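If faqemu behaves the way the code suggests, enabling it would presumably look something like the sketch below; the fake_kvm_support key and the vdsm.conf file come from the message above, while the section name and the need to restart vdsmd are assumptions:

  # /etc/vdsm/vdsm.conf (sketch; the [vars] section name is an assumption)
  [vars]
  fake_kvm_support = true

followed by a restart of the vdsmd service on the node so the new setting is picked up.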

On 06/22/2012 02:02 AM, Dennis Jacobfeuerborn wrote:
[...]
Hm, it looks like it, but that doesn't seem to be available for the regular builds. There is a vdsm-hook-faqemu package though which, comparing the code, seems to do a similar thing (it also uses the "fake_kvm_support" setting from vdsm.conf, like simpleqemu). Is the latter maybe a replacement for the former?
Hm, I tried this but I still get the message "Domain requires KVM, but it is not available."

Regards,
Dennis
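One way to see whether the fake-KVM path is being picked up at all is to check what vdsm reports for the host capabilities; a sketch, assuming vdsClient is installed on the node and that the capability key is named kvmEnabled:

  # Expect kvmEnabled = 'true' once the fake KVM support is active.
  vdsClient -s 0 getVdsCaps | grep -i kvm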

On 06/22/2012 04:07 AM, Dennis Jacobfeuerborn wrote:
[...]
Hm, I tried this but I still get the message "Domain requires KVM, but it is not available."
That's because the bootstrap script needs to be hacked as well. But you couldn't have picked a better time to hit this, as Federico just posted a patch for it which you can help verify :)

http://gerrit.ovirt.org/#/c/5611/
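To try the patch, the usual Gerrit workflow is to fetch the change ref into a local clone; the repository (vdsm) and the patch set number below are assumptions, so check the change page for the exact ref it lists:

  git clone http://gerrit.ovirt.org/p/vdsm.git
  cd vdsm
  # Hypothetical ref for change 5611, patch set 1 -- adjust to match the Gerrit page.
  git fetch origin refs/changes/11/5611/1
  git checkout FETCH_HEAD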
Participants (5):
- Dennis Jacobfeuerborn
- Itamar Heim
- Rami Vaknin
- Robert Middleswarth
- Shu Ming