Backport KVM bug fix for nested KVM in ESXi and Hyper-V

Hello, I'm sending a message with a dedicated subject on this topic (second attempt, because the first one doesn't seem to be present in the oVirt archive). Also, the reply to my other related message is not visible on the list archive page for some reason.

It seems this nasty problem with nested virtualization using the pc-i440fx-rhel7.X.0 machine type (X >= 3) impacts not only vSphere as the L0 hypervisor for nested KVM, but other hypervisors too (Hyper-V) and other machine types as well, and could be due to a bug in KVM, i.e. in the kernel, if I understood correctly. According to the link below and its comment by Roman Kagan in June of this year:

https://bugs.launchpad.net/qemu/+bug/1636217

"This is a KVM bug. It has been fixed in mainstream Linux in

    commit d391f1207067268261add0485f0f34503539c5b0
    Author: Vitaly Kuznetsov <email address hidden>
    Date:   Thu Jan 25 16:37:07 2018 +0100

        x86/kvm/vmx: do not use vm-exit instruction length for fast MMIO when running nested

        I was investigating an issue with seabios >= 1.10 which stopped working for
        nested KVM on Hyper-V. The problem appears to be in handle_ept_violation()
        function: when we do fast mmio we need to skip the instruction so we do
        kvm_skip_emulated_instruction(). This, however, depends on
        VM_EXIT_INSTRUCTION_LEN field being set correctly in VMCS. However, this
        is not the case.

        Intel's manual doesn't mandate VM_EXIT_INSTRUCTION_LEN to be set when
        EPT MISCONFIG occurs. While on real hardware it was observed to be set,
        some hypervisors follow the spec and don't set it; we end up advancing
        IP with some random value.

        I checked with Microsoft and they confirmed they don't fill
        VM_EXIT_INSTRUCTION_LEN on EPT MISCONFIG.

        Fix the issue by doing instruction skip through emulator when running
        nested.

        Fixes: 68c3b4d1676d870f0453c31d5a52e7e65c7448ae
        Suggested-by: Radim Krčmář <email address hidden>
        Suggested-by: Paolo Bonzini <email address hidden>
        Signed-off-by: Vitaly Kuznetsov <email address hidden>
        Acked-by: Michael S. Tsirkin <email address hidden>
        Signed-off-by: Radim Krčmář <email address hidden>

Although the commit mentions Hyper-V as L0 hypervisor, the same problem pertains to ESXi. The commit is included in v4.16."

Is it possible to backport the fix to the kernel provided by plain RHEL/CentOS hosts and/or RHVH/ovirt-node-ng nodes?

Thanks, Gianluca
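As a quick sanity check for whether a given host kernel could already carry that fix: just a sketch on my part, since upstream the fix landed in v4.16, while RHEL/CentOS 7 kernels are 3.10.x with backports, so for those only the RPM changelog is conclusive (and the grep assumes Red Hat kept the upstream subject line):

```shell
#!/bin/bash
# Sketch: decide whether the running kernel could already contain the
# fast-MMIO nested-virt fix. Upstream it is in v4.16 and later; RHEL
# backports patches into 3.10.x kernels, so there the version string
# alone proves nothing and the RPM changelog must be checked instead.
have_fix() {
    # succeeds if version "$1" sorts greater than or equal to 4.16
    [ "$(printf '%s\n' "$1" 4.16 | sort -V | head -n1)" = "4.16" ]
}

kver="$(uname -r | cut -d- -f1)"
if have_fix "$kver"; then
    echo "kernel $kver is >= 4.16, fix should be upstream"
else
    echo "kernel $kver predates v4.16, check the package changelog:"
    echo "  rpm -q --changelog kernel | grep -i 'fast MMIO'"
fi
```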

On Tue, Oct 16, 2018 at 3:23 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm sending a message with a dedicated subject on this topic (second attempt, because the first one doesn't seem to be present in the oVirt archive). Also, the reply to my other related message is not visible on the list archive page for some reason.
It seems this nasty problem with nested virtualization using the pc-i440fx-rhel7.X.0 machine type (X >= 3) impacts not only vSphere as the L0 hypervisor for nested KVM, but other hypervisors too (Hyper-V) and other machine types as well, and could be due to a bug in KVM, i.e. in the kernel, if I understood correctly.
In the meantime I'm trying a workaround to set up a 3-host HCI environment using 3 VMs inside ESXi. My approach:

- cp -p /usr/libexec/qemu-kvm /usr/libexec/qemu-kvm.orig
- rm /usr/libexec/qemu-kvm
- create a new /usr/libexec/qemu-kvm that is a wrapper:

#!/bin/bash
i=0
while [ $# -gt 0 ]; do
    case "$1" in
        -machine)
            shift 2
            ;;
        *)
            args[i]="$1"
            (( i++ ))
            shift
            ;;
    esac
done
exec /usr/libexec/qemu-kvm.orig -machine pc-i440fx-rhel7.2.0 "${args[@]}"

- chmod 755 /usr/libexec/qemu-kvm
- chcon system_u:object_r:qemu_exec_t:s0 qemu-kvm
- chcon system_u:object_r:qemu_exec_t:s0 qemu-kvm.orig

Then I proceed with my setup from Cockpit. All goes well: the local hosted-engine VM is created from the appliance, engine-setup completes, the host addition completes, the storage domain for the engine is created, but then comes a step where guestfish enters the picture and I get the error below. Running ps just before guestfish fails, I see:

[root@ovirt01 ~]# ps -ef | grep guestf
root 28812 28807 5 16:55 pts/1 00:00:00 guestfish -a /var/tmp/localvmxmSf0U/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2 --rw -i copy-in /var/tmp/localvmxmSf0U/ifcfg-eth0 /etc/sysconfig/network-scripts : selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /etc/sysconfig/network-scripts/ifcfg-eth0 force:true
root 28833 28812 33 16:55 pts/1 00:00:00 /usr/libexec/qemu-kvm.orig -machine pc-i440fx-rhel7.2.0 -global virtio-blk-pci.scsi=off -nodefconfig -enable-fips -nodefaults -display none -cpu host -m 500 -no-reboot -rtc driftfix=slew -no-hpet -global kvm-pit.lost_tick_policy=discard -kernel /var/tmp/.guestfs-0/appliance.d/kernel -initrd /var/tmp/.guestfs-0/appliance.d/initrd -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -device virtio-scsi-pci,id=scsi -drive file=/var/tmp/localvmxmSf0U/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2,cache=writeback,id=hd0,if=none -device scsi-hd,drive=hd0 -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw -device scsi-hd,drive=appliance -device virtio-serial-pci -serial stdio -chardev socket,path=/tmp/libguestfsAdBLA9/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -append panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 quiet TERM=xterm-256color
root 28834 28812 0 16:55 pts/1 00:00:00 guestfish -a /var/tmp/localvmxmSf0U/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2 --rw -i copy-in /var/tmp/localvmxmSf0U/ifcfg-eth0 /etc/sysconfig/network-scripts : selinux-relabel /etc/selinux/targeted/contexts/files/file_contexts /etc/sysconfig/network-scripts/ifcfg-eth0 force:true

But then I get this in the GUI:

libguestfs: error: appliance closed the connection unexpectedly.
This usually means the libguestfs appliance crashed

Complete output:

[ INFO ] TASK [Copy configuration files to the right location on host]
[ INFO ] TASK [Copy configuration archive to storage]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Initialize metadata volume]
[ INFO ] changed: [localhost]
[ INFO ] TASK [include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Generate DHCP network configuration for the engine VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Generate static network configuration for the engine VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Inject network configuration with guestfish]
[ ERROR ] fatal: [localhost]: FAILED!
=> {"changed": true, "cmd": ["guestfish", "-a", "/var/tmp/localvmxmSf0U/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2", "--rw", "-i", "copy-in", "/var/tmp/localvmxmSf0U/ifcfg-eth0", "/etc/sysconfig/network-scripts", ":", "selinux-relabel", "/etc/selinux/targeted/contexts/files/file_contexts", "/etc/sysconfig/network-scripts/ifcfg-eth0", "force:true"], "delta": "0:00:01.821590", "end": "2018-10-16 16:55:12.044900", "msg": "non-zero return code", "rc": 1, "start": "2018-10-16 16:55:10.223310", "stderr": "libguestfs: error: appliance closed the connection unexpectedly.\nThis usually means the libguestfs appliance crashed.\nDo:\n export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\nand run the command again. For further information, read:\n http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can also run 'libguestfs-test-tool' and post the *complete* output\ninto a bug report or message to the libguestfs mailing list.\nlibguestfs: error: /usr/libexec/qemu-kvm killed by signal 6 (Aborted).\nTo see full error messages you may need to enable debugging.\nDo:\n export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\nand run the command again. For further information, read:\n http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can also run 'libguestfs-test-tool' and post the *complete* output\ninto a bug report or message to the libguestfs mailing list.\nlibguestfs: error: guestfs_launch failed.\nThis usually means the libguestfs appliance failed to start or crashed.\nDo:\n export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\nand run the command again. 
For further information, read:\n http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can also run 'libguestfs-test-tool' and post the *complete* output\ninto a bug report or message to the libguestfs mailing list.", "stderr_lines": ["libguestfs: error: appliance closed the connection unexpectedly.", "This usually means the libguestfs appliance crashed.", "Do:", " export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1", "and run the command again. For further information, read:", " http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs", "You can also run 'libguestfs-test-tool' and post the *complete* output", "into a bug report or message to the libguestfs mailing list.", "libguestfs: error: /usr/libexec/qemu-kvm killed by signal 6 (Aborted).", "To see full error messages you may need to enable debugging.", "Do:", " export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1", "and run the command again. For further information, read:", " http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs", "You can also run 'libguestfs-test-tool' and post the *complete* output", "into a bug report or message to the libguestfs mailing list.", "libguestfs: error: guestfs_launch failed.", "This usually means the libguestfs appliance failed to start or crashed.", "Do:", " export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1", "and run the command again. For further information, read:", " http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs", "You can also run 'libguestfs-test-tool' and post the *complete* output", "into a bug report or message to the libguestfs mailing list."], "stdout": "", "stdout_lines": []}

Any hint on how to debug the guestfish problem, i.e. where to put the suggested debug environment variables so that Cockpit adopts them, or how to understand whether it is related to being nested inside ESXi? The nodes are oVirt NG nodes based on ovirt-node-ng-4.2.6.1-0.20180913.0.

Thanks, Gianluca
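On where to put LIBGUESTFS_DEBUG/LIBGUESTFS_TRACE so that a guestfish spawned by the Cockpit deploy picks them up: the same wrapper trick used for qemu-kvm should work. A sketch, under the assumption that the real binary has first been moved aside with `mv /usr/bin/guestfish /usr/bin/guestfish.orig` (the wrapper path and log file name here are my own choices, not anything from the deploy):

```shell
#!/bin/bash
# Sketch: generate a debug wrapper for guestfish so that any caller
# (including the Cockpit/ansible hosted-engine deploy) inherits the
# libguestfs debug environment. Assumes the real binary was moved to
# /usr/bin/guestfish.orig beforehand; the install target defaults to a
# local file so the generation step itself can be tried out safely.
target="${1:-./guestfish-wrapper}"   # e.g. /usr/bin/guestfish on a real host

cat > "$target" <<'EOF'
#!/bin/bash
export LIBGUESTFS_DEBUG=1
export LIBGUESTFS_TRACE=1
# stderr also goes to a log file so the output survives even when the
# caller swallows it
exec /usr/bin/guestfish.orig -v "$@" 2> >(tee -a /var/tmp/guestfish-debug.log >&2)
EOF
chmod 755 "$target"
```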

On Wed, Oct 17, 2018 at 12:04 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Oct 16, 2018 at 3:23 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm sending a message with a dedicated subject on this topic (second attempt, because the first one doesn't seem to be present in the oVirt archive). Also, the reply to my other related message is not visible on the list archive page for some reason.
It seems this nasty problem with nested virtualization using the pc-i440fx-rhel7.X.0 machine type (X >= 3) impacts not only vSphere as the L0 hypervisor for nested KVM, but other hypervisors too (Hyper-V) and other machine types as well, and could be due to a bug in KVM, i.e. in the kernel, if I understood correctly.
In the meantime I'm trying a workaround to set up a 3-host HCI environment using 3 VMs inside ESXi.
[snip]
But then I get this in the GUI:
libguestfs: error: appliance closed the connection unexpectedly.
This usually means the libguestfs appliance crashed.
[snip complete output, quoted above]
I was able to run guestfish with the "-v" option by using a wrapper for guestfish itself as well:

#!/bin/bash
exec /usr/bin/guestfish.orig -v "$@"

The reason for the failure seems to be this:

qemu-kvm.orig: error: failed to set MSR 0x38d to 0x0
qemu-kvm.orig: /builddir/build/BUILD/qemu-2.10.0/target/i386/kvm.c:1809: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x5614c38e4580: child process died
libguestfs: sending SIGTERM to process 29457
libguestfs: error: /usr/libexec/qemu-kvm killed by signal 6 (Aborted), see debug messages above

Any hint? Based on the full output inside the GUI (see below), it seems that the step "libguestfs: finished testing qemu features" yields a result of the kind:

-machine accel=kvm:tcg \
-cpu host \
-m 500 \
-no-reboot \
-rtc driftfix=slew \
-no-hpet \
-global kvm-pit.lost_tick_policy=discard

and so possibly makes my wrapper for guestfish useless??? Here is the full output in the GUI:

[ INFO ] TASK [Inject network configuration with guestfish]
[ ERROR ] fatal: [localhost]: FAILED!
=> {"changed": true, "cmd": ["guestfish", "-a", "/var/tmp/localvmBnuX85/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2", "--rw", "-i", "copy-in", "/var/tmp/localvmBnuX85/ifcfg-eth0", "/etc/sysconfig/network-scripts", ":", "selinux-relabel", "/etc/selinux/targeted/contexts/files/file_contexts", "/etc/sysconfig/network-scripts/ifcfg-eth0", "force:true"], "delta": "0:00:01.248846", "end": "2018-10-18 00:43:14.708976", "msg": "non-zero return code", "rc": 1, "start": "2018-10-18 00:43:13.460130", "stderr": "libguestfs: launch: program=guestfish.orig\nlibguestfs: launch: version=1.36.10rhel=7,release=6.el7_5.2,libvirt\nlibguestfs: launch: backend registered: unix\nlibguestfs: launch: backend registered: uml\nlibguestfs: launch: backend registered: libvirt\nlibguestfs: launch: backend registered: direct\nlibguestfs: launch: backend=direct\nlibguestfs: launch: tmpdir=/tmp/libguestfsdJP9Xf\nlibguestfs: launch: umask=0022\nlibguestfs: launch: euid=0\nlibguestfs: begin building supermin appliance\nlibguestfs: run supermin\nlibguestfs: command: run: /usr/bin/supermin5\nlibguestfs: command: run: \\ --build\nlibguestfs: command: run: \\ --verbose\nlibguestfs: command: run: \\ --if-newer\nlibguestfs: command: run: \\ --lock /var/tmp/.guestfs-0/lock\nlibguestfs: command: run: \\ --copy-kernel\nlibguestfs: command: run: \\ -f ext2\nlibguestfs: command: run: \\ --host-cpu x86_64\nlibguestfs: command: run: \\ /usr/lib64/guestfs/supermin.d\nlibguestfs: command: run: \\ -o /var/tmp/.guestfs-0/appliance.d\nsupermin: version: 5.1.19\nsupermin: rpm: detected RPM version 4.11\nsupermin: package handler: fedora/rpm\nsupermin: acquiring lock on /var/tmp/.guestfs-0/lock\nsupermin: if-newer: output does not need rebuilding\nlibguestfs: finished building supermin appliance\nlibguestfs: begin testing qemu features\nlibguestfs: checking for previously cached test results of /usr/libexec/qemu-kvm, in /var/tmp/.guestfs-0\nlibguestfs: loading previously cached test 
results\nlibguestfs: qemu version: 2.10\nlibguestfs: qemu mandatory locking: yes\nlibguestfs: finished testing qemu features\n[00074ms] /usr/libexec/qemu-kvm \\\n -global virtio-blk-pci.scsi=off \\\n -nodefconfig \\\n -enable-fips \\\n -nodefaults \\\n -display none \\\n -machine accel=kvm:tcg \\\n -cpu host \\\n -m 500 \\\n -no-reboot \\\n -rtc driftfix=slew \\\n -no-hpet \\\n -global kvm-pit.lost_tick_policy=discard \\\n -kernel /var/tmp/.guestfs-0/appliance.d/kernel \\\n -initrd /var/tmp/.guestfs-0/appliance.d/initrd \\\n -object rng-random,filename=/dev/urandom,id=rng0 \\\n -device virtio-rng-pci,rng=rng0 \\\n -device virtio-scsi-pci,id=scsi \\\n -drive file=/var/tmp/localvmBnuX85/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2,cache=writeback,id=hd0,if=none \\\n -device scsi-hd,drive=hd0 \\\n -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \\\n -device scsi-hd,drive=appliance \\\n -device virtio-serial-pci \\\n -serial stdio \\\n -device sga \\\n -chardev socket,path=/tmp/libguestfsbhn4kG/guestfsd.sock,id=channel0 \\\n -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \\\n -append 'panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color'\nqemu-kvm.orig: error: failed to set MSR 0x38d to 0x0\nqemu-kvm.orig: /builddir/build/BUILD/qemu-2.10.0/target/i386/kvm.c:1809: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.\nlibguestfs: error: appliance closed the connection unexpectedly, see earlier error messages\nlibguestfs: child_cleanup: 0x5614c38e4580: child process died\nlibguestfs: sending SIGTERM to process 29457\nlibguestfs: error: /usr/libexec/qemu-kvm killed by signal 6 (Aborted), see debug messages above\nlibguestfs: error: guestfs_launch failed, see 
earlier error messages\nlibguestfs: closing guestfs handle 0x5614c38e4580 (state 0)\nlibguestfs: command: run: rm\nlibguestfs: command: run: \\ -rf /tmp/libguestfsdJP9Xf\nlibguestfs: command: run: rm\nlibguestfs: command: run: \\ -rf /tmp/libguestfsbhn4kG", "stderr_lines": ["libguestfs: launch: program=guestfish.orig", "libguestfs: launch: version=1.36.10rhel=7,release=6.el7_5.2,libvirt", "libguestfs: launch: backend registered: unix", "libguestfs: launch: backend registered: uml", "libguestfs: launch: backend registered: libvirt", "libguestfs: launch: backend registered: direct", "libguestfs: launch: backend=direct", "libguestfs: launch: tmpdir=/tmp/libguestfsdJP9Xf", "libguestfs: launch: umask=0022", "libguestfs: launch: euid=0", "libguestfs: begin building supermin appliance", "libguestfs: run supermin", "libguestfs: command: run: /usr/bin/supermin5", "libguestfs: command: run: \\ --build", "libguestfs: command: run: \\ --verbose", "libguestfs: command: run: \\ --if-newer", "libguestfs: command: run: \\ --lock /var/tmp/.guestfs-0/lock", "libguestfs: command: run: \\ --copy-kernel", "libguestfs: command: run: \\ -f ext2", "libguestfs: command: run: \\ --host-cpu x86_64", "libguestfs: command: run: \\ /usr/lib64/guestfs/supermin.d", "libguestfs: command: run: \\ -o /var/tmp/.guestfs-0/appliance.d", "supermin: version: 5.1.19", "supermin: rpm: detected RPM version 4.11", "supermin: package handler: fedora/rpm", "supermin: acquiring lock on /var/tmp/.guestfs-0/lock", "supermin: if-newer: output does not need rebuilding", "libguestfs: finished building supermin appliance", "libguestfs: begin testing qemu features", "libguestfs: checking for previously cached test results of /usr/libexec/qemu-kvm, in /var/tmp/.guestfs-0", "libguestfs: loading previously cached test results", "libguestfs: qemu version: 2.10", "libguestfs: qemu mandatory locking: yes", "libguestfs: finished testing qemu features", "[00074ms] /usr/libexec/qemu-kvm \\", " -global virtio-blk-pci.scsi=off 
\\", " -nodefconfig \\", " -enable-fips \\", " -nodefaults \\", " -display none \\", " -machine accel=kvm:tcg \\", " -cpu host \\", " -m 500 \\", " -no-reboot \\", " -rtc driftfix=slew \\", " -no-hpet \\", " -global kvm-pit.lost_tick_policy=discard \\", " -kernel /var/tmp/.guestfs-0/appliance.d/kernel \\", " -initrd /var/tmp/.guestfs-0/appliance.d/initrd \\", " -object rng-random,filename=/dev/urandom,id=rng0 \\", " -device virtio-rng-pci,rng=rng0 \\", " -device virtio-scsi-pci,id=scsi \\", " -drive file=/var/tmp/localvmBnuX85/images/65f7f081-4d9e-43ae-926f-25807f075f1d/a0a00e73-d3ea-4b9b-bd26-06fe189931f2,cache=writeback,id=hd0,if=none \\", " -device scsi-hd,drive=hd0 \\", " -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \\", " -device scsi-hd,drive=appliance \\", " -device virtio-serial-pci \\", " -serial stdio \\", " -device sga \\", " -chardev socket,path=/tmp/libguestfsbhn4kG/guestfsd.sock,id=channel0 \\", " -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \\", " -append 'panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color'", "qemu-kvm.orig: error: failed to set MSR 0x38d to 0x0", "qemu-kvm.orig: /builddir/build/BUILD/qemu-2.10.0/target/i386/kvm.c:1809: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.", "libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages", "libguestfs: child_cleanup: 0x5614c38e4580: child process died", "libguestfs: sending SIGTERM to process 29457", "libguestfs: error: /usr/libexec/qemu-kvm killed by signal 6 (Aborted), see debug messages above", "libguestfs: error: guestfs_launch failed, see earlier error messages", "libguestfs: closing guestfs handle 0x5614c38e4580 (state 0)", "libguestfs: command: run: rm", 
"libguestfs: command: run: \\ -rf /tmp/libguestfsdJP9Xf", "libguestfs: command: run: rm", "libguestfs: command: run: \\ -rf /tmp/libguestfsbhn4kG"], "stdout": "", "stdout_lines": []} Thanks for your time, Gianluca