Moving Combined Engine & Node to new network.
by Rulas Mur
Hi,
I set up the host + engine on CentOS 7 on my home network and everything
worked perfectly. However, when I connected it to my work network, networking
failed completely.
hostname -I returns nothing.
lspci does list the hardware.
nmcli d is empty.
nmcli con show is empty.
nmcli device status is empty.
There is a device in /sys/class/net/.
Is there a way to fix this, or do I have to reinstall?
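For completeness, here is roughly what I was planning to try before reinstalling (just a sketch; "eno1" is a placeholder for whatever device name shows up under /sys/class/net/):
# systemctl restart NetworkManager
# nmcli device status
# nmcli connection add type ethernet ifname eno1 con-name eno1
# nmcli connection up eno1
I'm not sure whether that is enough when nmcli shows no devices at all, hence the question.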
On another note, oVirt is amazing!
Thanks for the quality product,
Rulasmur
Q: Upgrade 4.2 -> 4.2.1 Dependency Problem
by Andrei V
Hi !
I ran into an unexpected problem upgrading an oVirt node (installed manually on CentOS).
This problem has to be fixed manually, otherwise the host upgrade command from the engine also fails.
-> glusterfs-rdma = 3.12.5-2.el7
was installed manually to satisfy a dependency of ovirt-host-4.2.1-1.el7.centos.x86_64.
Q: How do I get around this problem? Thanks in advance.
Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
Requires: glusterfs-rdma
Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 (@ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.5-2.el7
Obsoleted By: mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
Not found
Available: glusterfs-rdma-3.8.4-18.4.el7.centos.x86_64 (base)
glusterfs-rdma = 3.8.4-18.4.el7.centos
Available: glusterfs-rdma-3.12.0-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.0-1.el7
Available: glusterfs-rdma-3.12.1-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-1.el7
Available: glusterfs-rdma-3.12.1-2.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-2.el7
Available: glusterfs-rdma-3.12.3-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.3-1.el7
Available: glusterfs-rdma-3.12.4-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.4-1.el7
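The workaround I am considering (just a sketch, assuming the offending repository id really is HP-spp as yum reports, and that mlnx-ofa_kernel is not actually needed on this host) is either to run the update with that repo disabled:
# yum --disablerepo=HP-spp update ovirt-host
or to add an exclude line to the HP-spp repo definition so it can no longer obsolete the gluster packages:
exclude=mlnx-ofa_kernel*
I'd appreciate confirmation before doing either.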
Ovirt 3.6 to 4.2 upgrade
by Gary Lloyd
Hi
Is it possible/supported to upgrade from oVirt 3.6 straight to oVirt 4.2?
Does live migration still function between the older vdsm nodes and vdsm
nodes with software built against oVirt 4.2?
We changed a couple of the vdsm Python files to enable iSCSI multipath on
direct LUNs (it's a fairly simple change to a couple of the Python files).
We've been running it this way since 2012 (oVirt 3.2).
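If a direct jump is not supported, I assume we would have to step through each major version on the engine, roughly like this (sketch only, release RPM names from memory, please correct me):
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
# yum update "ovirt-*-setup*" && engine-setup
and then repeat the same with ovirt-release41.rpm and ovirt-release42.rpm before upgrading the hosts.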
Many Thanks
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063
________________________________________________
ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf
by Reznikov Alexei
Hi all!
After upgrading from oVirt 4.0 to 4.1, I have trouble adding the next
HostedEngine host to my cluster via the web UI... the host is added successfully and
comes up, but HE is not active on this host.
Logs from the troubled host:
# cat agent.log
> KeyError: 'Configuration value not found:
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'
# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
host_id=2
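The file looks incomplete compared to the one on the first HE host. As a workaround I was thinking of copying the full hosted-engine.conf from the working host (adjusting host_id), or at least adding the missing key and restarting the HA services, e.g. (the gateway address here is just an example from my network):
# echo "gateway=192.168.1.254" >> /etc/ovirt-hosted-engine/hosted-engine.conf
# systemctl restart ovirt-ha-broker ovirt-ha-agent
Is that safe, or is there a proper way to redeploy the configuration?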
The deploy log from the engine is attached.
Troubled host:
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
vdsm-4.19.45-1.el7.centos.x86_64
CentOS Linux release 7.4.1708 (Core)
engine host:
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-engine-4.1.9.1-1.el7.centos.noarch
CentOS Linux release 7.4.1708 (Core)
Please help me fix it.
Thanx, Alex.
Slow conversion from VMware in 4.1
by Luca 'remix_tj' Lorenzetto
Hello,
I've started my migrations from VMware today. I had successfully
migrated over 200 VMs from VMware to another cluster based on 4.0 using
our home-made scripts interacting with the APIs. All the migrated VMs
are running RHEL 6 or 7, with no SELinux.
We learned a lot about the requirements and also recorded some
metrics about migration times. In July, with 4.0 as the destination, we
were migrating a ~30 GB VM in ~40 minutes.
That was an acceptable time, considering that about 50% of our VMs are
around that size.
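As a back-of-the-envelope check, ~30 GB in ~40 minutes is roughly 12-13 MB/s, so at the same rate a ~50 GB VM should take on the order of 65-70 minutes.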
Today we started migrating to the production cluster, which is
running 4.1.8 instead. With the same scripts, the same API calls, and
a VM of about 50 GB, we were expecting to have the VM running in the
new cluster after 70 minutes, more or less.
Instead, the migration is taking more than 2 hours, and not because of
slow conversion by qemu-img, given that we're transferring an entire
disk via HTTP.
Looking at the log, it seems that the activities executed before qemu-img
took more than 2000 seconds. For example, dracut appears to have taken
more than 14 minutes, which is in my opinion a bit long.
Is there any option to get a quicker conversion? We can also run some
tasks in the guests before the conversion, if that helps.
We have to migrate ~300 VMs in 2.5 months, and we're only at 11 after
7 hours (and today we had an exception that allowed us to start 4 hours
in advance, but usually our maintenance window is significantly shorter).
This is a filtered log reporting only the rows where we can see how
much time has passed:
[ 0.0] Opening the source -i libvirt -ic
vpx://vmwareuser%40domain@vcenter/DC/Cluster/Host?no_verify=1
vmtoconvert
[ 6.1] Creating an overlay to protect the source from being modified
[ 7.4] Initializing the target -o vdsm -os
/rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1
--vdsm-image-uuid 1a93e503-ce57-4631-8dd2-eeeae45866ca --vdsm-vol-uuid
88d92582-0f53-43b0-89ff-af1c17ea8618 --vdsm-vm-uuid
1434e14f-e228-41c1-b769-dcf48b258b12 --vdsm-ovf-output
/var/run/vdsm/v2v
[ 7.4] Opening the overlay
[00034ms] /usr/libexec/qemu-kvm \
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-693.11.1.el7.x86_64
(mockbuild(a)x86-041.build.eng.bos.redhat.com) (gcc version 4.8.5
20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Fri Oct 27 05:39:05 EDT
2017
[ 0.000000] Command line: panic=1 console=ttyS0 edd=off
udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
guestfs_network=1 TERM=linux guestfs_identifier=v2v
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007cfddfff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007cfde000-0x000000007cffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x7cfde max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] found SMP MP-table at [mem 0x000f72f0-0x000f72ff]
mapped at [ffff8800000f72f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0x7ccb2000-0x7cfcffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f70d0 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 000000007cfe14d5 0002C (v01 BOCHS BXPCRSDT
00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 000000007cfe13e9 00074 (v01 BOCHS BXPCFACP
00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 000000007cfe0040 013A9 (v01 BOCHS BXPCDSDT
00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 000000007cfe0000 00040
[ 0.000000] ACPI: APIC 000000007cfe145d 00078 (v01 BOCHS BXPCAPIC
00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000007cfddfff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x7cc8b000-0x7ccb1fff]
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 0:7cc3b001, primary cpu clock
[ 0.000000] kvm-clock: using sched offset of 1030608733 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal empty
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0x7cfddfff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x7cfddfff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] e820: [mem 0x7d000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:1
nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 33 pages/cpu @ffff88007ca00000 s97048
r8192 d29928 u2097152
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 7ca0f440
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on.
Total pages: 503847
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: panic=1 console=ttyS0 edd=off
udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
guestfs_network=1 TERM=linux guestfs_identifier=v2v
[ 0.000000] Disabling memory control group subsystem
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using
standard form
[ 0.000000] Memory: 1994224k/2047864k available (6886k kernel code,
392k absent, 53248k reserved, 4545k data, 1764k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000]   RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=1.
[ 0.000000] NR_IRQS:327936 nr_irqs:256 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS0] enabled
[ 0.000000] tsc: Detected 2099.998 MHz processor
[ 0.065500] Calibrating delay loop (skipped) preset value.. 4199.99
BogoMIPS (lpj=2099998)
[ 0.066153] pid_max: default: 32768 minimum: 301
[ 0.066548] Security Framework initialized
[ 0.066872] SELinux: Disabled at boot.
[ 0.067181] Yama: becoming mindful.
[ 0.067622] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.068574] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[ 0.069290] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.069813] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.070525] Initializing cgroup subsys memory
[ 0.070877] Initializing cgroup subsys devices
[ 0.071237] Initializing cgroup subsys freezer
[ 0.071589] Initializing cgroup subsys net_cls
[ 0.071932] Initializing cgroup subsys blkio
[ 0.072275] Initializing cgroup subsys perf_event
[ 0.072644] Initializing cgroup subsys hugetlb
[ 0.072984] Initializing cgroup subsys pids
[ 0.073316] Initializing cgroup subsys net_prio
[ 0.073721] CPU: Physical Processor ID: 0
[ 0.074810] mce: CPU supports 10 MCE banks
[ 0.075185] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.075621] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.076030] tlb_flushall_shift: 6
[ 0.085827] Freeing SMP alternatives: 24k freed
[ 0.091125] ACPI: Core revision 20130517
[ 0.091976] ACPI: All ACPI Tables successfully acquired
[ 0.092448] ftrace: allocating 26586 entries in 104 pages
[ 0.116144] smpboot: Max logical packages: 1
[ 0.116640] Enabling x2apic
[ 0.116863] Enabled x2apic
[ 0.117290] Switched APIC routing to physical x2apic.
[ 0.118588] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.119054] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2683 v4 @
2.10GHz (fam: 06, model: 4f, stepping: 01)
[ 0.119813] Performance Events: 16-deep LBR, Broadwell events,
Intel PMU driver.
[ 0.121545] ... version: 2
[ 0.121847] ... bit width: 48
[ 0.122161] ... generic registers: 4
[ 0.122472] ... value mask: 0000ffffffffffff
[ 0.122874] ... max period: 000000007fffffff
[ 0.123276] ... fixed-purpose events: 3
[ 0.123584] ... event mask: 000000070000000f
[ 0.124004] KVM setup paravirtual spinlock
[ 0.125379] Brought up 1 CPUs
[ 0.125616] smpboot: Total of 1 processors activated (4199.99 BogoMIPS)
[ 0.126464] devtmpfs: initialized
[ 0.128347] EVM: security.selinux
[ 0.128608] EVM: security.ima
[ 0.128835] EVM: security.capability
[ 0.129796] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.130333] pinctrl core: initialized pinctrl subsystem
[ 0.130805] RTC time: 20:26:38, date: 01/24/18
[ 0.131217] NET: Registered protocol family 16
[ 0.131774] ACPI: bus type PCI registered
[ 0.132096] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.132660] PCI: Using configuration type 1 for base access
[ 0.133830] ACPI: Added _OSI(Module Device)
[ 0.134170] ACPI: Added _OSI(Processor Device)
[ 0.134514] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.134872] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.137001] ACPI: Interpreter enabled
[ 0.137303] ACPI: (supports S0 S5)
[ 0.137573] ACPI: Using IOAPIC for interrupt routing
[ 0.137971] PCI: Using host bridge windows from ACPI; if necessary,
use "pci=nocrs" and report a bug
[ 0.140442] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.140917] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.141446] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.141961] acpi PNP0A03:00: fail to add MMCONFIG information,
can't access extended PCI configuration space under this bridge.
[ 0.142997] acpiphp: Slot [2] registered
[ 0.143309] acpiphp: Slot [3] registered
[ 0.143625] acpiphp: Slot [4] registered
[ 0.143949] acpiphp: Slot [5] registered
[ 0.144260] acpiphp: Slot [6] registered
[ 0.144575] acpiphp: Slot [7] registered
[ 0.144887] acpiphp: Slot [8] registered
[ 0.145205] acpiphp: Slot [9] registered
[ 0.145523] acpiphp: Slot [10] registered
[ 0.145841] acpiphp: Slot [11] registered
[ 0.146161] acpiphp: Slot [12] registered
[ 0.146642] acpiphp: Slot [13] registered
[ 0.146960] acpiphp: Slot [14] registered
[ 0.147279] acpiphp: Slot [15] registered
[ 0.147602] acpiphp: Slot [16] registered
[ 0.147934] acpiphp: Slot [17] registered
[ 0.148255] acpiphp: Slot [18] registered
[ 0.148579] acpiphp: Slot [19] registered
[ 0.148896] acpiphp: Slot [20] registered
[ 0.149219] acpiphp: Slot [21] registered
[ 0.149546] acpiphp: Slot [22] registered
[ 0.149863] acpiphp: Slot [23] registered
[ 0.150178] acpiphp: Slot [24] registered
[ 0.150505] acpiphp: Slot [25] registered
[ 0.150824] acpiphp: Slot [26] registered
[ 0.151139] acpiphp: Slot [27] registered
[ 0.151461] acpiphp: Slot [28] registered
[ 0.151786] acpiphp: Slot [29] registered
[ 0.152104] acpiphp: Slot [30] registered
[ 0.152426] acpiphp: Slot [31] registered
[ 0.152741] PCI host bridge to bus 0000:00
[ 0.153059] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.153478] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.153991] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.154508] pci_bus 0000:00: root bus resource [mem
0x000a0000-0x000bffff window]
[ 0.155072] pci_bus 0000:00: root bus resource [mem
0x7d000000-0xfebfffff window]
[ 0.162550] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.163097] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.163590] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.164129] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.165004] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by
PIIX4 ACPI
[ 0.165564] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.223140] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.223712] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.224245] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.224789] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.225296] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.225817] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.226262] vgaarb: loaded
[ 0.227000] SCSI subsystem initialized
[ 0.227314] ACPI: bus type USB registered
[ 0.227640] usbcore: registered new interface driver usbfs
[ 0.228068] usbcore: registered new interface driver hub
[ 0.228487] usbcore: registered new device driver usb
[ 0.228936] PCI: Using ACPI for IRQ routing
[ 0.229436] NetLabel: Initializing
[ 0.230112] NetLabel: domain hash size = 128
[ 0.230455] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.230843] NetLabel: unlabeled traffic allowed by default
[ 0.231317] amd_nb: Cannot enumerate AMD northbridges
[ 0.231722] Switched to clocksource kvm-clock
[ 0.235503] pnp: PnP ACPI init
[ 0.235767] ACPI: bus type PNP registered
[ 0.236396] pnp: PnP ACPI: found 5 devices
[ 0.236716] ACPI: bus type PNP unregistered
[ 0.242333] NET: Registered protocol family 2
[ 0.242806] TCP established hash table entries: 16384 (order: 5,
131072 bytes)
[ 0.243384] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.243907] TCP: Hash tables configured (established 16384 bind 16384)
[ 0.244414] TCP: reno registered
[ 0.244668] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[ 0.245130] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[ 0.245656] NET: Registered protocol family 1
[ 0.246013] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.246473] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.246924] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.247457] Unpacking initramfs...
[ 0.249930] Freeing initrd memory: 3192k freed
[ 0.251174] sha1_ssse3: Using AVX optimized SHA-1 implementation
[ 0.251706] sha256_ssse3: Using AVX2 optimized SHA-256 implementation
[ 0.252355] futex hash table entries: 256 (order: 2, 16384 bytes)
[ 0.252836] Initialise system trusted keyring
[ 0.253187] audit: initializing netlink socket (disabled)
[ 0.253610] type=2000 audit(1516825598.479:1): initialized
[ 0.275426] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 0.275927] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.277129] zpool: loaded
[ 0.277350] zbud: loaded
[ 0.277669] VFS: Disk quotas dquot_6.5.2
[ 0.277998] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.278609] msgmni has been set to 3901
[ 0.278956] Key type big_key registered
[ 0.279450] NET: Registered protocol family 38
[ 0.279810] Key type asymmetric registered
[ 0.280125] Asymmetric key parser 'x509' registered
[ 0.280523] Block layer SCSI generic (bsg) driver version 0.4
loaded (major 250)
[ 0.281107] io scheduler noop registered
[ 0.281416] io scheduler deadline registered (default)
[ 0.281839] io scheduler cfq registered
[ 0.282216] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.282648] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.283250] input: Power Button as
/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 0.283835] ACPI: Power Button [PWRF]
[ 0.284207] GHES: HEST is not enabled!
[ 0.284534] Serial: 8250/16550 driver, 1 ports, IRQ sharing enabled
[ 0.307889] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.308457] Non-volatile memory driver v1.3
[ 0.308809] Linux agpgart interface v0.103
[ 0.309200] crash memory driver: version 1.1
[ 0.309568] rdac: device handler registered
[ 0.309913] hp_sw: device handler registered
[ 0.310247] emc: device handler registered
[ 0.310565] alua: device handler registered
[ 0.310922] libphy: Fixed MDIO Bus: probed
[ 0.311267] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 0.311780] ehci-pci: EHCI PCI platform driver
[ 0.312129] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 0.312609] ohci-pci: OHCI PCI platform driver
[ 0.312958] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.313474] usbcore: registered new interface driver usbserial
[ 0.313926] usbcore: registered new interface driver usbserial_generic
[ 0.314428] usbserial: USB Serial support registered for generic
[ 0.314911] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU]
at 0x60,0x64 irq 1,12
[ 0.316032] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 0.316418] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 0.316857] mousedev: PS/2 mouse device common for all mice
[ 0.317468] input: AT Translated Set 2 keyboard as
/devices/platform/i8042/serio0/input/input1
[ 0.318561] input: VirtualPS/2 VMware VMMouse as
/devices/platform/i8042/serio1/input/input2
[ 0.319363] input: VirtualPS/2 VMware VMMouse as
/devices/platform/i8042/serio1/input/input3
[ 0.320042] rtc_cmos 00:00: RTC can wake from S4
[ 0.320573] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
[ 0.321099] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram
[ 0.321632] cpuidle: using governor menu
[ 0.321989] hidraw: raw HID events driver (C) Jiri Kosina
[ 0.322467] usbcore: registered new interface driver usbhid
[ 0.322894] usbhid: USB HID core driver
[ 0.323734] drop_monitor: Initializing network drop monitor service
[ 0.324272] TCP: cubic registered
[ 0.324537] Initializing XFRM netlink socket
[ 0.324936] NET: Registered protocol family 10
[ 0.325410] NET: Registered protocol family 17
[ 0.325872] microcode: CPU0 sig=0x406f1, pf=0x1, revision=0x1
[ 0.326331] microcode: Microcode Update Driver: v2.01
<tigran(a)aivazian.fsnet.co.uk>, Peter Oruba
[ 0.327060] Loading compiled-in X.509 certificates
[ 0.327855] Loaded X.509 cert 'Red Hat Enterprise Linux Driver
Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
[ 0.329151] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch
signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
[ 0.330379] Loaded X.509 cert 'Red Hat Enterprise Linux kernel
signing key: 34fc3b85a61b8fead6e9e905e7e602a1f7fa049a'
[ 0.331196] registered taskstats version 1
[ 0.331639] Key type trusted registered
[ 0.332056] Key type encrypted registered
[ 0.332920] IMA: No TPM chip found, activating TPM-bypass!
[ 0.333605] Magic number: 2:270:448
[ 0.333970] rtc_cmos 00:00: setting system clock to 2018-01-24
20:26:38 UTC (1516825598)
[ 0.335302] Freeing unused kernel memory: 1764k freed
[ 0.339427] alg: No test for crc32 (crc32-pclmul)
[ 0.342995] alg: No test for crc32 (crc32-generic)
[ 0.352535] scsi host0: ata_piix
[ 0.352853] scsi host1: ata_piix
[ 0.353127] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
[ 0.353645] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
[ 0.541003] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 0.545766] random: fast init done
[ 0.548737] random: crng init done
[ 0.565923] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 0.567592] scsi host2: Virtio SCSI HBA
[ 0.569801] scsi 2:0:0:0: Direct-Access QEMU QEMU HARDDISK
2.5+ PQ: 0 ANSI: 5
[ 0.570526] scsi 2:0:1:0: Direct-Access QEMU QEMU HARDDISK
2.5+ PQ: 0 ANSI: 5
[ 0.580538] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks:
(53.6 GB/50.0 GiB)
[ 0.581264] sd 2:0:1:0: [sdb] 8388608 512-byte logical blocks:
(4.29 GB/4.00 GiB)
[ 0.581894] sd 2:0:0:0: [sda] Write Protect is off
[ 0.582312] sd 2:0:0:0: [sda] Write cache: enabled, read cache:
enabled, doesn't support DPO or FUA
[ 0.583032] sd 2:0:1:0: [sdb] Write Protect is off
[ 0.583444] sd 2:0:1:0: [sdb] Write cache: enabled, read cache:
enabled, doesn't support DPO or FUA
[ 0.586373] sd 2:0:1:0: [sdb] Attached SCSI disk
[ 0.602190] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.636655] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 1.253809] tsc: Refined TSC clocksource calibration: 2099.994 MHz
[ 1.510203] sda: sda1 sda2 sda3
[ 1.511710] sd 2:0:0:0: [sda] Attached SCSI disk
[ 1.528245] EXT4-fs (sdb): mounting ext2 file system using the ext4 subsystem
[ 1.530353] EXT4-fs (sdb): mounted filesystem without journal. Opts:
[/usr/lib/tmpfiles.d/systemd.conf:11] Unknown group 'utmp'.
[/usr/lib/tmpfiles.d/systemd.conf:19] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:20] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:21] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:25] Unknown group 'systemd-journal'.
[/usr/lib/tmpfiles.d/systemd.conf:26] Unknown group 'systemd-journal'.
[ 1.650422] input: PC Speaker as /devices/platform/pcspkr/input/input4
[ 1.655216] piix4_smbus 0000:00:01.3: SMBus Host Controller at
0x700, revision 0
[ 1.694118] sd 2:0:0:0: Attached scsi generic sg0 type 0
[ 1.696802] sd 2:0:1:0: Attached scsi generic sg1 type 0
[ 1.698009] FDC 0 is a S82078B
[ 1.710807] AES CTR mode by8 optimization enabled
[ 1.724293] ppdev: user-space parallel port driver
[ 1.732252] Error: Driver 'pcspkr' is already registered, aborting...
[ 1.734673] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 1.746232] EDAC MC: Ver: 3.0.0
[ 1.749324] EDAC sbridge: Ver: 1.1.1
[ 25.658309] device-mapper: uevent: version 1.0.3
[ 25.659092] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23)
initialised: dm-devel(a)redhat.com
[ 57.8] Inspecting the overlay
[ 51.302190] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: (null)
[ 58.667082] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: (null)
[ 61.147593] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: (null)
[ 63.977572] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: (null)
[ 75.614795] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: (null)
[ 80.782266] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: (null)
[ 98.734329] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: (null)
[ 102.090148] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)
[ 105.057661] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: (null)
[ 108.085788] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: (null)
[ 111.328257] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.201934] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.212101] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.221689] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.233016] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.971075] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.788961] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.799156] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.811402] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.823347] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: (null)
[ 115.345857] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)
[ 115.356280] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: (null)
[ 476.5] Checking for sufficient free disk space in the guest
[ 476.5] Estimating space required on target for each disk
[ 476.5] Converting Red Hat Enterprise Linux Server 7.4 (Maipo) to run on KVM
[ 1072.265252] dracut[1565] No '/dev/log' or 'logger' included for
syslog logging
[ 1076.444899] dracut[1565] Executing: /sbin/dracut --verbose
--add-drivers "virtio virtio_ring virtio_blk virtio_scsi virtio_net
virtio_pci" /boot/initramfs-3.10.0-693.el7.x86_64.img
3.10.0-693.el7.x86_64
[ 1104.118050] dracut[1565] dracut module 'busybox' will not be
installed, because command 'busybox' could not be found!
[ 1111.893587] dracut[1565] dracut module 'crypt' will not be
installed, because command 'cryptsetup' could not be found!
[ 1112.694542] dracut[1565] dracut module 'dmraid' will not be
installed, because command 'dmraid' could not be found!
[ 1117.763735] dracut[1565] dracut module 'mdraid' will not be
installed, because command 'mdadm' could not be found!
[ 1117.769004] dracut[1565] dracut module 'multipath' will not be
installed, because command 'multipath' could not be found!
[ 1122.366992] dracut[1565] dracut module 'cifs' will not be
installed, because command 'mount.cifs' could not be found!
[ 1122.387968] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsistart' could not be found!
[ 1122.390569] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsi-iname' could not be found!
[ 1140.889553] dracut[1565] dracut module 'busybox' will not be
installed, because command 'busybox' could not be found!
[ 1140.910458] dracut[1565] dracut module 'crypt' will not be
installed, because command 'cryptsetup' could not be found!
[ 1140.915646] dracut[1565] dracut module 'dmraid' will not be
installed, because command 'dmraid' could not be found!
[ 1140.924489] dracut[1565] dracut module 'mdraid' will not be
installed, because command 'mdadm' could not be found!
[ 1140.928995] dracut[1565] dracut module 'multipath' will not be
installed, because command 'multipath' could not be found!
[ 1140.939832] dracut[1565] dracut module 'cifs' will not be
installed, because command 'mount.cifs' could not be found!
[ 1140.954810] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsistart' could not be found!
[ 1140.957229] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsi-iname' could not be found!
[ 1142.066303] dracut[1565] *** Including module: bash ***
[ 1142.073837] dracut[1565] *** Including module: nss-softokn ***
[ 1143.838047] dracut[1565] *** Including module: i18n ***
[ 1230.935044] dracut[1565] *** Including module: network ***
[ 1323.749409] dracut[1565] *** Including module: ifcfg ***
[ 1323.755682] dracut[1565] *** Including module: drm ***
[ 1340.716219] dracut[1565] *** Including module: plymouth ***
[ 1359.941093] dracut[1565] *** Including module: dm ***
[ 1366.392221] dracut[1565] Skipping udev rule: 64-device-mapper.rules
[ 1366.394670] dracut[1565] Skipping udev rule: 60-persistent-storage-dm.rules
[ 1366.397021] dracut[1565] Skipping udev rule: 55-dm.rules
[ 1375.796931] dracut[1565] *** Including module: kernel-modules ***
[ 1627.998656] dracut[1565] *** Including module: lvm ***
[ 1631.138460] dracut[1565] Skipping udev rule: 64-device-mapper.rules
[ 1631.141015] dracut[1565] Skipping udev rule: 56-lvm.rules
[ 1631.143409] dracut[1565] Skipping udev rule: 60-persistent-storage-lvm.rules
[ 1635.270706] dracut[1565] *** Including module: qemu ***
[ 1635.277842] dracut[1565] *** Including module: rootfs-block ***
[ 1636.845616] dracut[1565] *** Including module: terminfo ***
[ 1639.189294] dracut[1565] *** Including module: udev-rules ***
[ 1640.076624] dracut[1565] Skipping udev rule: 40-redhat-cpu-hotplug.rules
[ 1649.962889] dracut[1565] Skipping udev rule: 91-permissions.rules
[ 1651.008527] dracut[1565] *** Including module: biosdevname ***
[ 1651.921630] dracut[1565] *** Including module: systemd ***
[ 1685.124521] dracut[1565] *** Including module: usrmount ***
[ 1685.128532] dracut[1565] *** Including module: base ***
[ 1694.743507] dracut[1565] *** Including module: fs-lib ***
[ 1696.295216] dracut[1565] *** Including module: shutdown ***
[ 1698.578228] dracut[1565] *** Including modules done ***
[ 1699.586287] dracut[1565] *** Installing kernel module dependencies
and firmware ***
[ 1717.505952] dracut[1565] *** Installing kernel module dependencies
and firmware done ***
[ 1724.539224] dracut[1565] *** Resolving executable dependencies ***
[ 1844.709874] dracut[1565] *** Resolving executable dependencies done***
[ 1844.723313] dracut[1565] *** Hardlinking files ***
[ 1847.281611] dracut[1565] *** Hardlinking files done ***
[ 1847.284119] dracut[1565] *** Stripping files ***
[ 1908.635888] dracut[1565] *** Stripping files done ***
[ 1908.638262] dracut[1565] *** Generating early-microcode cpio image
contents ***
[ 1908.645054] dracut[1565] *** Constructing GenuineIntel.bin ****
[ 1909.567397] dracut[1565] *** Store current command line parameters ***
[ 1909.571686] dracut[1565] *** Creating image file ***
[ 1909.574239] dracut[1565] *** Creating microcode section ***
[ 1911.789907] dracut[1565] *** Created microcode section ***
[ 1921.680575] dracut[1565] *** Creating image file done ***
[ 1926.764407] dracut[1565] *** Creating initramfs image file
'/boot/initramfs-3.10.0-693.el7.x86_64.img' done ***
[1994.1] Mapping filesystem data to avoid copying unused and blank areas
[ 1984.841231] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: discard
[ 1987.252106] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: discard
[ 1990.531305] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: discard
[ 1992.903109] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: discard
[ 1995.876230] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: discard
[ 1995.986384] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.748087] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.785914] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.824003] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: discard
[ 2000.172658] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: discard
[ 2001.214202] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: discard
[2010.7] Closing the overlay
[2010.7] Checking if the guest needs BIOS or UEFI to boot
[2010.7] Assigning disks to buses
[2010.7] Copying disk 1/1 to
/rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1/images/1a93e503-ce57-4631-8dd2-eeeae45866ca/88d92582-0f53-43b0-89ff-af1c17ea8618
(raw)
[7000.4] Creating output metadata
[7000.4] Finishing off
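(If useful: the filtered view above can be roughly reproduced from the full virt-v2v output with something like
# grep -E '^\[ *[0-9]+\.' v2v.log
where v2v.log is wherever the complete conversion output was saved.)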
Any help is appreciated.
Luca
--
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
"The Internet is the world's largest library.
The problem is that all the books are scattered on the floor"
John Allen Paulos, Mathematician (b. 1945)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
VM with multiple vdisks can't migrate
by fsoyer
Hi all,
Yesterday I discovered a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs
needed for a test from a template with a 20G vdisk. To these VMs I added a
100G vdisk (for these tests I didn't want to waste time extending the existing
vdisks... but I lost time in the end anyway...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host and tried to put it into
maintenance... but it got stuck on the two VMs. They were marked "migrating"
but were no longer accessible. Other (small) VMs with only 1 vdisk were
migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND
destination host, but after tens of minutes the migration and the VMs were
still frozen. I tried to cancel the migration for the VMs: it failed. The only
way to stop it was to power off the VMs: the kvm processes died on the 2 hosts
and the GUI reported a failed migration.
Just in case, I tried to delete the second vdisk on one of these VMs: it then
migrates without error, and with no access problem.
I tried to extend the first vdisk of the second VM and then delete the second
vdisk: it now migrates without problem!
So after another test with a VM with 2 vdisks, I can say that this is what
blocked the migration process :(
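While the migration was stuck I did not think to check the migration job at the libvirt level; next time I will look on the source host with something like (read-only queries, with the VM name as reported by the first command):
# virsh -r list
# virsh -r domjobinfo <vm-name>
to see whether any data was actually moving. If there is something better to capture, let me know.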
In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Re=
ceived a vnc Device without an address when processing VM 3f57e669-5e4c=
-4d10-85cc-d573004a099d devices, skipping device: {device=3Dvnc, specPa=
rams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr, displayIp=3D192.168.0.=
5}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2c442d, p=
ort=3D5901}
2018-02-12 16:46:46,268+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Re=
ceived a lease Device without an address when processing VM 3f57e669-5e=
4c-4d10-85cc-d573004a099d devices, skipping device: {lease=5Fid=3D3f57e=
669-5e4c-4d10-85cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47d0-b185-920f=
dc7afa16, deviceId=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, off=
set=3D6291456, device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/1=
92.168.0.6:=5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom=5Fmd/xlea=
ses, type=3Dlease}
=C2=A0
For the VM with 2 vdisks we see :
2018-02-12 16:49:06,112+01 INFO =C2=A0[org.ovirt.engine.core.bll.Migrat=
eVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7=
458e] Lock Acquired to object 'EngineLock:{exclusiveLocks=3D'[f7d4ec12-=
627a-4b83-b59e-886400d55474=3DVM]', sharedLocks=3D''}'
2018-02-12 16:49:06,407+01 INFO =C2=A0[org.ovirt.engine.core.bll.Migrat=
eVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-=
4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand inter=
nal: false. Entities affected : =C2=A0ID: f7d4ec12-627a-4b83-b59e-88640=
0d55474 Type: VMAction group MIGRATE=5FVM with role type USER
2018-02-12 16:49:06,712+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4=
142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParam=
eters:{runAsync=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf6=
9', vmId=3D'f7d4ec12-627a-4b83-b59e-886400d55474', srcHost=3D'192.168.0=
.5', dstVdsId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'192.=
168.0.6:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', =
migrationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'fa=
lse', consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D=
'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', conve=
rgenceSchedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stall=
ing=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {lim=
it=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, act=
ion=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=
=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDownt=
ime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]=
}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) =
[92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(H=
ostName =3D ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsy=
nc=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', vmId=3D'f=
7d4ec12-627a-4b83-b59e-886400d55474', srcHost=3D'192.168.0.5', dstVdsId=
=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'192.168.0.6:54321=
', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', migrationDown=
time=3D'0', autoConverge=3D'true', migrateCompressed=3D'false', console=
Address=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D'true', max=
IncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', convergenceSched=
ule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stalling=3D[{lim=
it=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2, act=
ion=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, action=3D{name=
=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=3DsetDownt=
ime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime, params=
=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]}}]]'}), lo=
g id: 1840069c
2018-02-12 16:49:06,724+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) =
[92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand,=
log id: 1840069c
2018-02-12 16:49:06,732+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4=
142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom=
, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO =C2=A0[org.ovirt.engine.core.dal.dbbrok=
er.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-4=
9) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT=5FID: VM=5FMIGRATION=5F=
START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID=
: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: nu=
ll, Custom Event ID: -1, Message: Migration started (VM: Oracle=5FPRIMA=
RY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr=
, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] F=
etched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec1=
2-627a-4b83-b59e-886400d55474'(Oracle=5FPRIMARY) was unexpectedly detec=
ted as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(vict=
or.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69'=
)
2018-02-12 16:49:16,455+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec1=
2-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-=
840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh u=
ntil migration is done
...
2018-02-12 16:49:31,484+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec1=
2-627a-4b83-b59e-886400d55474'(Oracle=5FPRIMARY) was unexpectedly detec=
ted as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(vict=
or.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69'=
)
2018-02-12 16:49:31,484+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec1=
2-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-=
840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh u=
ntil migration is done
=C2=A0
and so on, with the last lines repeated indefinitely for hours since we powered off
the VM...
Is this something known ? Any idea about that ?
Thanks
--
Cordialement,
Frank Soyer
7 years, 2 months
Re: [ovirt-users] Network configuration validation error
by spfma.tech@e.mail.fr
I did not see I had to enable another repo to get this update, so I was
sure I had the latest version available !
After adding it, things went a lot better and I was able to update the engine
and all the nodes flawlessly to version 4.2.1.6-1.el7.centos
Thanks a lot for your help !
The "no default route error" has disappeared indeed.
But I still couldn't validate network setup modifications on one node as I
still had the following error in the GUI :

	* must match "^b((25[0-5]|2[0-4]d|[01]dd|d?d)_){3}(25[0-5]|2[0-4]d|[01]dd|d?d)"
	* Attribute: ipConfiguration.iPv4Addresses[0].gateway

So I tried a dummy thing : I put a value in the gateway field for the NIC
which doesn't need one (NFS) and was able to validate. Then I edited it again,
removed the value and was able to validate again !
Regards

On 12-Feb-2018 10:42:30 +0100, mburman(a)redhat.com wrote:
"no default route" bug was fixed only on 4.2.1
Your current version doesn't have the fix

On Mon, Feb 12, 2018 at 11:09 AM, spfma.tech(a)e.mail.fr wrote:

On 12-Feb-2018 08:06:43 +0100, jbelka(a)redhat.com wrote:
> This option is relevant only for the upgrade from 3.6 to 4.0 (engine had
> different OS major versions), in all other cases the upgrade flow is very
> similar to the upgrade flow of a standard engine environment.
>
> 1. Put the hosted-engine environment into GlobalMaintenance (you can do it via
> UI)
> 2. Update engine packages (# yum update -y)
> 3. Run engine-setup
> 4. Disable GlobalMaintenance

So I followed these steps connected in the engine VM and didn't get any
error message. But the version shown in the GUI is still 4.2.0.2-1.el7.centos.
Yum had no newer packages to install. And I still have the "no default route"
and network validation problems.
Regards

> Could someone explain me at least what "Cluster PROD is at version 4.2 which
> is not supported by this upgrade flow. Please fix it before upgrading."
> means ? As far as I know 4.2 is the most recent branch available, isn't it ?

I have no idea where you got

"Cluster PROD is at version 4.2 which is not supported by this upgrade flow.
Please fix it before upgrading."

Please do not cut output and provide the exact one.

IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup'
and only after it finishes successfully should you do 'yum -y update'.
Maybe that's your problem?

Jiri

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman(a)redhat.com M: 0545355725 IM: mburman
7 years, 2 months
Import Domain and snapshot issue ... please help !!!
by Enrico Becchetti
Dear All,
I have been using oVirt for a long time with three hypervisors and an
external engine running in a CentOS VM.
These three hypervisors have HBAs and access to Fibre Channel storage.
Until recently I used version 3.5, then I reinstalled everything from
scratch and now I have 4.2.
Before formatting everything, I detached the storage data domain (FC) with
the virtual machines and reimported it into the new 4.2 setup, and all went
well. In this domain there were virtual machines with and without snapshots.
Now I have two problems. The first is that if I try to delete a snapshot,
the process never ends successfully and remains hanging; the second problem
is that in one case I lost the virtual machine !!!
So I need your help to kill the three running zombie tasks, because with
taskcleaner.sh I can't do anything, and then I need to know how I can
delete the old snapshots made with 3.5 without losing other data and
without spawning new processes that never terminate correctly.
If you want some log files please let me know.
Thank you so much.
Best Regards
Enrico
7 years, 2 months
Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1
by Stefano Danzi
Hello!
In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
Hosted engine starts regularly.
I have a single host with Hosted Engine.
The host CPU is an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz.
When I start any VM I get this error: "The CPU type of the cluster is
unknown. Its possible to change the cluster cpu or set a different one
per VM."
All VMs have " Guest CPU Type: N/D"
Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu
type before the upgrade), my CPU should be Ivy Bridge but it isn't in
the dropdown list.
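In case it is useful, this is what I can read on the host itself; treat the
commands below as a rough sketch rather than a verified procedure (the
vdsm-client invocation and the field names are my assumption for 4.2):

  grep -m1 'model name' /proc/cpuinfo
  # vdsm-client replaced vdsClient in 4.2, capabilities should include the CPU model/flags
  vdsm-client Host getCapabilities | grep -i -E 'cpuModel|cpuFlags'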
If I try to select a similar CPU (SandyBridge IBRS) I get an error: I
can't change the cluster CPU type while I have running hosts with a lower CPU
type.
I can't put the host into maintenance because the hosted engine is running on it.
How can I solve this?
7 years, 2 months
leftover of disk moving operation
by Gianluca Cecchi
Hello,
I had a problem during a disk migration from one storage to another in a
4.1.7 environment connected to SAN storage.
Now, after deleting the live storage migration snapshot, I want to retry
(with the VM powered off) but at destination the logical volume still
exists and was not pruned after the initial failure.
I get
HSMGetAllTasksStatusesVDS failed: Cannot create Logical Volume:
('c0097b1a-a387-4ffa-a62b-f9e6972197ef',
u'a20bb16e-7c7c-4ed4-85c0-cbf297048a8e')
I was able to move the other 4 disks that were part of this VM.
Can I simply lvremove the target LV at host side (I have only one host
running at this moment) and try the move again, or do I have to execute
anything more, eg at engine rdbms level?
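For clarity, this is what I would run on the host if removing the leftover LV
by hand is acceptable; the VG name below is the destination storage domain
UUID and the LV name is the volume UUID from the error above, so please take
it as a sketch to be confirmed, not a tested procedure:

  # list the LVs of the destination storage domain (VG name = domain UUID)
  lvs c0097b1a-a387-4ffa-a62b-f9e6972197ef
  # remove only the leftover volume reported in the error
  lvremove c0097b1a-a387-4ffa-a62b-f9e6972197ef/a20bb16e-7c7c-4ed4-85c0-cbf297048a8e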
Thanks,
Gianluca
7 years, 2 months
Defining custom network filter or editing existing
by Tim Thompson
All,
I was wondering if someone can point me in the direction of the
documentation related to defining custom network filters (nwfilter) in
4.2. I found the docs on assigning a network filter to a vNIC profile,
but I cannot find any mention of how you can create your own. Normally
you'd use 'virsh nwfilter-define', but that is locked out since vdsm
manages everything. I need to expand clean-traffic's scope to include
ipv6, since it doesn't handle ipv6 at all by default, it seems.
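For context, on a plain libvirt host (outside vdsm's control) I would expect
the definition to look roughly like the sketch below; the filter name and the
catch-all ipv6 rule are placeholders I made up, not something I have verified
against oVirt 4.2. Contents of clean-traffic-ipv6.xml:

  <filter name='clean-traffic-ipv6' chain='root'>
    <!-- reuse the stock filter, then add explicit ipv6 handling on top -->
    <filterref filter='clean-traffic'/>
    <rule action='accept' direction='inout' priority='500'>
      <ipv6/>
    </rule>
  </filter>

and then something like:

  virsh nwfilter-define clean-traffic-ipv6.xml

What I'm really after is whether there is a supported way to feed a definition
like this through oVirt/vdsm instead of virsh.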
Thanks,
-Tim
7 years, 2 months
VM is down with error: Bad volume specification
by Chris Boot
Hi all,
I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far
it has been working fine, until this morning. Once of my VMs seems to
have had a snapshot created that I can't delete.
I noticed when the VM failed to migrate to my other hosts, so I just
shut it down to allow the host to go into maintenance. Now I can't start
the VM with the snapshot nor can I delete the snapshot.
Please let me know what further information you need to help me diagnose
the issue and recover the VM.
Best regards,
Chris
-------- Forwarded Message --------
Subject: alertMessage (ovirt.boo.tc), [VM morse is down with error. Exit
message: Bad volume specification {'address': {'bus': '0', 'controller':
'0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
'apparentsize': '12386304', 'cache': 'none', 'imageID':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
'0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311',
'device': 'disk', 'path':
'/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
'specParams': {}, 'discard': True}.]
Date: Tue, 23 Jan 2018 11:32:21 +0000 (GMT)
From: engine(a)ovirt.boo.tc
To: bootc(a)bootc.net
Time:2018-01-23 11:30:39.677
Message:VM morse is down with error. Exit message: Bad volume
specification {'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target': '0', 'unit': '0'}, 'serial':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
'apparentsize': '12386304', 'cache': 'none', 'imageID':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
'0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311',
'device': 'disk', 'path':
'/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
'specParams': {}, 'discard': True}.
Severity:ERROR
VM Name: morse
Host Name: ovirt2.boo.tc
Template Name: Blank
--
Chris Boot
bootc(a)boo.tc
7 years, 2 months
effectiveness of "discard=unmap"
by Matthias Leopold
Hi,
i'm sorry to bother you again with my ignorance of the DISCARD feature
for block devices in general.
after finding several ways to enable "discard=unmap" for oVirt disks
(via standard GUI option for iSCSI disks or via "diskunmap" custom
property for Cinder disks) i wanted to check in the guest for the
effectiveness of this feature. to my surprise i couldn't find a
difference between Linux guests with and without "discard=unmap" enabled
in the VM. "lsblk -D" reports the same in both cases and also
fstrim/blkdiscard commands appear to work with no difference. Why is
this? Do i have to look at the underlying storage to find out what
really happens? Shouldn't this be visible in the guest OS?
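For what it's worth, the guest-side checks I know of only show whether the virtual disk advertises discard at all, e.g. (a sketch):
  # inside the guest: non-zero DISC-GRAN / DISC-MAX means the disk accepts UNMAP
  lsblk -D /dev/sda
  # trim a mounted filesystem and report how much was discarded
  fstrim -v /
Whether those discards actually free space is, as far as I understand, only visible on the backing storage (the array or the Cinder volume), not inside the guest.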
thx
matthias
7 years, 2 months
Info about windows guest performance
by Gianluca Cecchi
Hello,
while in my activities to accomplish migration of a Windows 2008 R2 VM
(with an Oracle RDBMS inside) from vSphere to oVirt, I'm going to check
performance related things.
Up to now I only ran Windows guests inside my laptops and not inside an
oVirt infrastructure.
Now I successfully migrated this kind of VM to oVirt 4.1.9.
The guest had an LSI logic sas controller. Inside the oVirt host that I
used as proxy (for VMware virt-v2v) I initially didn't have the virtio-win
rpm.
I presume it is for this reason that the oVirt guest has been
configured with IDE disks...
Can you confirm?
For this test I started with ide, then added a virtio-scsi disk and then
changed also the boot disk to virtio-scsi and all now goes well, with also
ovirt-guest-tools-iso-4.1-3 provided iso used to install qxl and so on...
So far so good.
I found this bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1277353
where it seems that
"
For optimum I/O performance it's critical to make sure that Windows
guests use the Hyper-V reference counter feature. QEMU command line
should include
-cpu ...,hv_time
and
-no-hpet
"
Analyzing my command line I see the "-no-hpet" but I don't see the "hv_time".
See the full command below.
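(For reference, in libvirt domain XML those two settings usually correspond to something like the following; a sketch, not necessarily what oVirt generates for this guest:)
  <clock offset='utc'>
    <timer name='hypervclock' present='yes'/>  <!-- maps to -cpu ...,hv_time -->
    <timer name='hpet' present='no'/>          <!-- maps to -no-hpet -->
  </clock>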
Any hints?
Thanks,
Gianluca
/usr/libexec/qemu-kvm
-name guest=testmig,debug-threads=on
-S
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-testmig/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
-cpu Westmere,vmx=on
-m size=4194304k,slots=16,maxmem=16777216k
-realtime mlock=off
-smp 2,maxcpus=16,sockets=16,cores=1,threads=1
-numa node,nodeid=0,cpus=0-1,mem=4096
-uuid x-y-z-x-y
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=xx,uuid=yy
-no-user-config
-nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-12-testmig/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control
-rtc base=2018-02-09T12:41:41,driftfix=slew
-global kvm-pit.lost_tick_policy=delay
-no-hpet
-no-shutdown
-boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
-drive if=none,id=drive-ide0-1-0,readonly=on
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
-drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=30,id=hostnet0
-device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent
-device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-msg timestamp=on
7 years, 2 months
Network configuration validation error
by spfma.tech@e.mail.fr
Hi,
I am experiencing a new problem: when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating:
  must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
  Attribut : ipConfiguration.iPv4Addresses[0].gateway
Moreover, on the general status of the server, I have a "Host has no default route" alert.
The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both servers have the same setup, with different addresses of course :-)
I have not been able to find anything useful in the logs.
Is this a bug or am I doing something wrong?
Regards
FreeMail powered by mail.fr
7 years, 2 months
Using network assigned to VM on CentOS host?
by Wesley Stewart
This might be a stupid question. But I am testing out a 10Gb network
directly connected to my Freenas box using a Cat6 crossover cable.
I setup the connection (on device eno4) and called the network "Crossover"
in oVirt.
I don't have DHCP on this, but I can easily assign VMs a NIC on the
"Crossover" network, give them an IP address (10.10.10.x), and everything
works fine. But I was curious about doing this for the CentOS host as
well. I want to test out hosting VM's on the NFS share over the 10Gb
network but I wasn't quite sure how to do this without breaking other
connections and I did not want to do anything incorrectly.
I appreciate your feedback! I apologize if this is a stupid question.
Running oVirt 4.1.8 on CentOS 7.4
7 years, 2 months
Hosted-Engine mount .iso file CLI
by Russell Wecker
I have a hosted engine setup; when I ran the system updates for it, it now
will not boot. I would like to have it boot from a rescue CD image so I
can fix it. I have copied /var/run/ovirt-hosted-engine-ha/vm.conf to
/root and modified it, however I cannot seem to find the exact options to
configure the file for an .iso. My current settings are:
devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
How do I change it to boot from a local .iso?
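For illustration, the same devices line with only the path filled in would look roughly like this (the ISO path is just a placeholder):
  devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:/var/tmp/rescue.iso,type:disk}
The VM would then be started with the edited file, e.g. hosted-engine --vm-start --vm-conf=/root/vm.conf, if that option is available in this version.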
Thanks
Any help would be most appreciated.
7 years, 2 months
Issue with 4.2.1 RC and SSL
by ~Stack~
Greetings,
I was having a lot of issues with 4.2 and 95% of them are in the change
logs for 4.2.1. Since this is a new build, I just blew everything away
and started from scratch with the RC release.
The very first thing that I did after the engine-config was to set up my
SSL cert. I followed the directions from here:
https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/
Logged in the first time to the web interface and everything worked! Great.
Install my hosts (also completely fresh installs - Scientific Linux 7
fully updated) and none would finish the install...
I can send the full host debug log if you want, however, I'm pretty sure
that the problem is because of the SSL somewhere. I've cut/pasted the
relevant part.
Any advice/help, please?
Thanks!
~Stack~
2018-02-07 16:56:21,697-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.tune.tuned.Plugin._misc (None)
2018-02-07 16:56:21,698-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id
2018-02-07 16:56:21,698-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None)
2018-02-07 16:56:21,699-0600 DEBUG otopi.transaction
transaction._prepare:61 preparing 'File transaction for '/etc/vdsm/vdsm.id''
2018-02-07 16:56:21,699-0600 DEBUG otopi.filetransaction
filetransaction.prepare:183 file '/etc/vdsm/vdsm.id' missing
2018-02-07 16:56:21,705-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None)
2018-02-07 16:56:21,706-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks
2018-02-07 16:56:21,706-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None)
2018-02-07 16:56:21,707-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None)
2018-02-07 16:56:21,707-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc
2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc (None)
2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Setting up PKI
2018-02-07 16:56:21,709-0600 DEBUG
otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:813 execute:
('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes',
'-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), executable='None',
cwd='None', env=None
2018-02-07 16:56:21,756-0600 DEBUG
otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:863
execute-result: ('/usr/bin/openssl', 'req', '-new', '-newkey',
'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), rc=0
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Please issue VDSM
certificate based on this certificate request
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ***D:MULTI-STRING
VDSM_CERTIFICATE_REQUEST --=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND      -----BEGIN CERTIFICATE REQUEST-----
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMZm
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
eYTWbHKkN+GlQnZ8C6fdk++htyFE+IHSzkhTyTSZdM0bPTdvhomTeCwzNlWBWdU+
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
PrVB7j/1iksSt6RXDQUWlPDPBNfAa6NtZijEaGuxAe0RpI71G5feZmgVRmtIfrkE
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
5BjhnCMJW46y9Y7dc2TaXzQqeVj0nkWkHt0v6AVdRWP3OHfOCvqoABny1urStvFT
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
TeAhSBVBUWTaNczBrZBpMXhXrSAe/hhLXMF3VfBV1odOOwb7AeccYkGePMxUOg8+
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
XMAKdDCn7N0ZC4gSyEAP9mSobvOvNObcfw02NyYdny32/edgPrXKR+ISf4IwVd0d
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
mDonT4W2ROTE/A3M/mkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpAKAMv/Vh
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
0ByC02R3fxtA6b/OZyys+xyIAfAGxo2NSDJDQsw9Gy1QWVtJX5BGsbzuhnNJjhRm
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
5yx0wrS/k34oEv8Wh+po1fwpI5gG1W9L96Sx+vF/+UXBenJbhEVfir/cOzjmP1Hg
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
TtK5nYnBM7Py5JdnnAPww6jPt6uRypDZqqM8YOct1OEsBr8gPvmQvt5hDGJKqW37
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
xFbad6ILwYIE0DXAu2h9y20Pl3fy4Kb2LQDjltiaQ2IBiHFRUB/H2DOxq0NpH4z7
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
wqU/ai7sXWT/Vq4R6jD+c0V0WP4+VgSkgqPvnSYHwqQUbc9Kh7RwRnVyzLupbWdM
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND Pr+MZ2D1jg27
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND      -----END CERTIFICATE REQUEST-----
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND      **%QStart: VDSM_CERTIFICATE_CHAIN
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Please input VDSM
certificate chain that matches certificate request, top is issuer
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### type
'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' in own line to mark end, '--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--' aborts
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ***Q:MULTI-STRING
VDSM_CERTIFICATE_CHAIN --=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%QEnd: VDSM_CERTIFICATE_CHAIN
2018-02-07 16:56:22,765-0600 DEBUG otopi.context
context._executeMethod:143 method exception
Traceback (most recent call last):
File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
File
"/tmp/ovirt-h7XmTvEqc3/otopi-plugins/ovirt-host-common/vdsm/pki.py",
line 241, in _misc
'\n\nPlease input VDSM certificate chain that '
File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/otopi/dialog/machine.py",
line 327, in queryMultiString
v = self._readline()
File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/dialog.py", line 248, in
_readline
raise IOError(_('End of file'))
IOError: End of file
2018-02-07 16:56:22,766-0600 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Misc configuration':
End of file
7 years, 2 months
Few Questions on New Install
by Talk Jesus
Greetings,
Just installed Ovirt:
Software Version:4.2.0.2-1.el7.centos
How Do I:
- add a subnet of IPv4 to assign to VMs
- download (or import) basic Linux templates like Centos 7, Ubuntu 16 even
if using minimal iso
- import from SolusVM based KVM nodes
Does oVirt support bulk IPv4 assignment to VMs? If I wish to assign say a
full /26 subnet of IPv4 to VM #1, is this a one click option?
Thank you. I read the docs, but everything is a bit confusing for me.
7 years, 2 months
Network Topologies
by aeR7Re
Hello,
I'm looking for some advice on or even just some examples of how other oVirt users have configured networking inside their clusters.
Currently we're running a cluster with hosts spread across multiple racks in our DC, with layer 2 spanned between them for VM networks. While this is functional, it's 100% not ideal as there's multiple single points of failure and at some point someone is going to accidentally loop it :)
What we're after is a method of providing a VM network across multiple racks where there are no single points of failure. We've got layer 3 switches in racks capable of running an IGP/EGP.
Current ideas:
- Run a routing daemon on each VM and have it advertise a /32 to the distribution switch
- OVN for layer 2 between hosts + potentially VRRP or similar on the distribution switch
So as per my original paragraph, any advice on the most appropriate network topology for an oVirt cluster? or how have you set up your networks?
Thank you
Sent with ProtonMail (https://protonmail.com) Secure Email.
7 years, 2 months
Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by Luca 'remix_tj' Lorenzetto
What you're looking at is called fault tolerance in other hypervisors.
As far as I know, oVirt doesn't implement such a solution.
But if your system doesn't support failure recovery through the high
availability options, you should consider revising your application
architecture if you want to keep running on oVirt.
Luca
On 10 Feb 2018 8:31 AM, "Ranjith P" <ranjithspr13(a)yahoo.com> wrote:
Hi,
>>Who's shutting down the hypervisor? (Or perhaps it is shutdown
externally, due to overheating or otherwise?)
We need a continuous availability of VM's in our production setup. If the
hypervisor goes down due to any hardware failure or workload, then the VMs
on that hypervisor will reboot and be started on the available hypervisors.
This happens as expected, but it disrupts the VMs. Can you suggest a solution in
this case? Can we achieve this challenge using glusterfs?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
<https://overview.mail.yahoo.com/mobile/?.src=Android>
On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul
<ykaul(a)redhat.com> wrote:
On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13(a)yahoo.com <
ranjithspr13(a)yahoo.com> wrote:
Hi,
Anyone can suggest how to setup VM Live migration (without restart vm)
while Hypervisor goes down in ovirt?
I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy
to write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down, and would be fast enough
not to be terminated by systemd.
This is a bit more challenging.
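A minimal sketch of the first part with the Python SDK (engine URL, credentials and host name below are placeholders) could look like this:
  import ovirtsdk4 as sdk
  connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                              username='admin@internal', password='...',
                              ca_file='/etc/pki/ovirt-engine/ca.pem')
  vms_service = connection.system_service().vms_service()
  # running VMs on the host that is about to go down
  for vm in vms_service.list(search='host=myhost1 and status=up'):
      # no destination given: let the scheduler pick another host
      vms_service.vm_service(vm.id).migrate()
  connection.close()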
Who's shutting down the hypervisor? (Or perhaps it is shutdown externally,
due to overheating or otherwise?)
Y.
Using glusterfs is it possible? Then how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
<https://overview.mail.yahoo.com/mobile/?.src=Android>
7 years, 2 months
VM backups - Bacchus
by Niyazi Elvan
Dear Friends,
It has been a while since I have had time to work on Bacchus. This weekend
I created an ansible playbook to replace the installation procedure.
You simply download installer.yml and settings.yml files from git repo and
run the installer as "ansible-playbook installer.yml" Please check it at
https://github.com/openbacchus/bacchus . I recommend you to run the
installer on a fresh VM, which has no MySQL DB or previous installation.
Hope this helps to more people and please let me know about your ideas.
ps. Regarding oVirt 4.2, I had a chance to look at it and tried the new
domain type "Backup Domain". This is a really cool feature and I am planning
to implement the support in Bacchus. Hopefully, CBT will show up soon and
we will have a better world :)
Kind Regards,
--
Niyazi Elvan
7 years, 2 months
Maximum time node can be offline.
by Thomas Letherby
Hello all,
Is there a maximum length of time an Ovirt Node 4.2 based host can be
offline in a cluster before it would have issues when powered back on?
The reason I ask is in my lab I currently have a three node cluster that
works really well, however a lot of the time I only actually need the
resources of one host, so to save power I'd like to keep the other two
offline until needed.
I can always script them to boot once a week or so if I need to.
Thanks,
Thomas
7 years, 2 months
Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by ranjithspr13@yahoo.com
Hi,
Anyone can suggest how to setup VM Live migration (without restart vm) while Hypervisor goes down in ovirt?
Using glusterfs is it possible? Then how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
7 years, 2 months
Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider
by maoz zadok
Hello there,
I'm following
https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt/
guide
in order to import VMs from Libvirt to oVirt using ssh.
URL: "qemu+ssh://host1.example.org/system"
and get the following error:
Failed to communicate with the external provider, see log for additional
details.
*oVirt agent log:*
*- Failed to retrieve VMs information from external server
qemu+ssh://XXX.XXX.XXX.XXX/system*
*- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot recv
data: Host key verification failed.: Connection reset by peer*
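The "Host key verification failed" part suggests the vdsm user on the proxy host has never accepted the remote host's SSH key; one way to check (a sketch, assuming the usual vdsm home directory) is:
  # on the oVirt host used as proxy for the import
  sudo -u vdsm ssh root@host1.example.org
  # answering "yes" records the key in /var/lib/vdsm/.ssh/known_hosts,
  # which is what the qemu+ssh import relies on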
*remote host sshd DEBUG log:*
*Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port
48148 on XXX.XXX.XXX.123 port 22*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0;
client software version OpenSSH_7.4*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat OpenSSH*
compat 0x04000000*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string
SSH-2.0-OpenSSH_7.4*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode for
protocol 2.0*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types:
ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm:
curve25519-sha256 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm:
ecdsa-sha2-nistp256 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS
[preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147
port 48148 [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup*
*Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006*
*Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: Set /proc/self/oom_score_adj to
0*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 newsock 5
pipe 7 sock 8*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after dupping: 3,
3*
*Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 port
48150 on XXX.XXX.XXX.123 port 22*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version 2.0;
client software version OpenSSH_7.4*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat OpenSSH*
compat 0x04000000*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string
SSH-2.0-OpenSSH_7.4*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility mode for
protocol 2.0*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types:
ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT sent [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: algorithm:
curve25519-sha256 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: host key algorithm:
ecdsa-sha2-nistp256 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 blocks
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS
[preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147
port 48150 [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup*
*Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child 110008*
*Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: Set /proc/self/oom_score_adj to
0*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 newsock 5
pipe 7 sock 8*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: inetd sockets after dupping: 3,
3*
*Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 port
48152 on XXX.XXX.XXX.123 port 22*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version 2.0;
client software version OpenSSH_7.4*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat OpenSSH*
compat 0x04000000*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: Local version string
SSH-2.0-OpenSSH_7.4*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode for
protocol 2.0*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types:
ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm:
curve25519-sha256 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm:
ecdsa-sha2-nistp256 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 blocks
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS
[preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: Connection closed by XXX.XXX.XXX.147
port 48152 [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup*
*Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child 110010*
*Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110011.*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: Set /proc/self/oom_score_adj to
0*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 out 5 newsock 5
pipe 7 sock 8*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after dupping: 3,
3*
*Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 port
48154 on XXX.XXX.XXX.123 port 22*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version 2.0;
client software version OpenSSH_7.4*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat OpenSSH*
compat 0x04000000*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string
SSH-2.0-OpenSSH_7.4*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode for
protocol 2.0*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types:
ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm:
curve25519-sha256 [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm:
ecdsa-sha2-nistp256 [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher:
chacha20-poly1305(a)openssh.com <chacha20-poly1305(a)openssh.com> MAC:
<implicit> compression: none [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64
dh_need=64 [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent [preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS
[preauth]*
*Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147
port 48154 [preauth]*
Thank you!
7 years, 2 months
Virt-viewer not working over VPN
by Vincent Royer
Hi, I asked this on the virt-viewer list, but it appears to be dead, so my
apologies if this isn't the right place for this question.
When I access my vm's locally using virt-viewer on windows clients,
everything works fine, spice or vnc.
When I access the same vm's remotely over a site-to-site VPN (setup between
the two firewalls), it fails with an error: unable to connect to libvirt
with uri: [none]. Similarly I cannot connect in a browser-based vnc
session (cannot connect to host).
I can resolve the DNS of the server from my remote client (domain override
in the firewall pointing to the DNS server locally) and everything else I
do seems completely unaware of the vpn link (SSH, RDP, etc). For example
connecting to https://ovirt-enginr.mydomain.com works as expected. The
only function not working remotely is virt-viewer.
Any clues would be appreciated!
7 years, 2 months
Re: [ovirt-users] Ovirt backups lead to unresponsive VM
by Alex K
Ok. I will reproduce and collect logs.
Thanx,
Alex
On Jan 29, 2018 20:21, "Mahdi Adnan" <mahdi.adnan(a)outlook.com> wrote:
I have Windows VMs, both client and server.
if you provide the engine.log file we might have a look at it.
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* Alex K <rightkicktech(a)gmail.com>
*Sent:* Monday, January 29, 2018 5:40 PM
*To:* Mahdi Adnan
*Cc:* users
*Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM
Hi,
I have observed this logged at host when the issue occurs:
VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer
or
VDSM host.domain command GetStatsVDS failed: Connection reset by peer
At engine logs have not been able to correlate.
Are you hosting Windows 2016 server and Windows 10 VMs?
The weird is that I have same setup on other clusters with no issues.
Thanx,
Alex
On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan <mahdi.adnan(a)outlook.com>
wrote:
Hi,
We have a cluster of 17 nodes, backed by GlusterFS storage, and using this
same script for backup.
we have no issues with it so far.
have you checked engine log file ?
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Alex
K <rightkicktech(a)gmail.com>
*Sent:* Wednesday, January 24, 2018 4:18 PM
*To:* users
*Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM
Hi all,
I have a cluster with 3 nodes, using oVirt 4.1 in a self-hosted setup on
top of glusterfs.
On some VMs (especially one Windows Server 2016 64-bit with 500 GB of disk),
with guest agents installed, I almost always observe that during the
backup of the VM the VM is rendered unresponsive (the dashboard shows a
question mark at the VM status and VM does not respond to ping or to
anything).
For scheduled backups I use:
https://github.com/wefixit-AT/oVirtBackup
The script does the following:
1. snapshot VM (this is done ok without any failure)
2. Clone snapshot (this steps renders the VM unresponsive)
3. Export Clone
4. Delete clone
5. Delete snapshot
Do you have any similar experience? Any suggestions to address this?
I have never seen such issue with hosted Linux VMs.
The cluster has enough storage to accommodate the clone.
Thanx,
Alex
7 years, 2 months
Cannot Remove Disk
by Donny Davis
Ovirt 4.2 has been humming away quite nicely for me in the last few months,
and now I am hitting an issue when I try to make any API call that has to do
with a specific disk. This disk resides on a hyperconverged DC, and none of
the other disks seem to be affected. Here is the error thrown.
2018-02-08 10:13:20,005-05 ERROR
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default
task-22) [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during
ValidateFailure.:
org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota
6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool
5a497956-0380-021e-0025-00000000035e
Any ideas what can be done to fix this?
7 years, 2 months
Info about exporting from vSphere
by Gianluca Cecchi
Hello,
I have this kind of situation.
Source env:
It is vSphere 6.5 (both vCenter Server appliance and ESXi hosts) where I
have an admin account to connect to, but currently only to vCenter and not
to the ESXi hosts
The VM to be migrated is Windows 2008 R2 SP1 with virtual hw version 8
(ESXi 5.0 and later) and has one boot disk 35Gb and one data disk 250Gb.
The SCSI controller is LSI logic sas and network vmxnet3
It has no snapshots at the moment
I see in my oVirt 4.1.9 that I can import from:
1) VMware
2) VMware Virtual Appliance
and found also related documentations in RHEV 4.1 Virtual Machine
Management pdf
Some doubts:
- what is the best between the 2 methods if I can choose? Their pros & cons?
- Does 1) imply that I also need the ESXi account? Currently my windows
domain account that gives me access to vcenter doesn't work connecting to
ESXi hosts
- also it seems that 1) is more intrusive, while for 2) I only need to put
the ova file into some nfs share...
Thanks in advance,
Gianluca
7 years, 2 months
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it's not available to add new VMs to? Right now it's the default storage domain when adding a VM. At the least, I'd like to make another storage domain the default.
Is there a way to do this?
Thanks
7 years, 2 months
oVirt CLI Question
by Andrei V
Hi,
How can I force power off, and then launch (after a timeout, e.g. 20 sec),
a particular VM from a bash or Python script?
Is 20 sec enough to get the oVirt engine updated after a forced power off?
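For what it's worth, a minimal sketch of this with the Python SDK (ovirtsdk4; the engine URL, credentials and VM id below are placeholders) would be:
  import time
  import ovirtsdk4 as sdk
  connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                              username='admin@internal', password='...',
                              ca_file='/etc/pki/ovirt-engine/ca.pem')
  vm_service = connection.system_service().vms_service().vm_service('VM-UUID')
  vm_service.stop()    # force power off, like "Power Off" in the UI
  time.sleep(20)       # give the engine time to mark the VM as Down
  vm_service.start()
  connection.close()
A more robust script would poll vm_service.get().status for the Down state instead of sleeping a fixed 20 seconds.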
What happened with this wiki? Seems like it is deleted or moved.
http://wiki.ovirt.org/wiki/CLI#Usage
Is this project part of the oVirt distro? It looks like it is in active
development, with the last updates 2 months ago.
https://github.com/fbacchella/ovirtcmd
Thanks !
7 years, 2 months
IndexError python-sdk
by David David
Hi all.
python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64
The issue is that I can't upload a snapshot; I get an IndexError when running
upload_disk_snapshots.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
Output:
Traceback (most recent call last):
File "snapshot_upload.py", line 298, in <module>
images_chain = get_images_chain(disk_path)
File "snapshot_upload.py", line 263, in get_images_chain
base_volume = [v for v in volumes_info.values() if
'full-backing-filename' not in v ][0]
IndexError: list index out of range
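For what it's worth, the IndexError means the list comprehension found no volume without a 'full-backing-filename' key, i.e. no base volume was detected in the qemu-img output for that path; a small guard (a sketch) makes the failure clearer:
  bases = [v for v in volumes_info.values() if 'full-backing-filename' not in v]
  if not bases:
      raise RuntimeError('no base volume found for %s; check the disk path and backing chain' % disk_path)
  base_volume = bases[0]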
7 years, 2 months
Re: [ovirt-users] vdsmd fails after upgrade 4.1 -> 4.2
by Frank Rothenstein
Thanks Thomas,
it seems you were right. I followed the instructions to enable hugepages via
kernel command line and after reboot vdsmd starts correctly.
(I went back to 4.1.9 in between, added the kernel command line and upgraded to 4.2)
The docs/release notes should mention it - or did I miss it?
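(For anyone hitting the same thing, one way to make the kernel arguments quoted below persistent on CentOS 7, assuming grubby is available, is roughly:)
  grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024"
  reboot
  # verify afterwards
  cat /proc/cmdline
  hugeadm --pool-list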
On Tuesday, 06.02.2018, 17:17 -0800, Thomas Davis wrote:
> sorry, make that:
>
> hugeadm --pool-list
>       Size  Minimum  Current  Maximum  Default
>    2097152     1024     1024     1024        *
> 1073741824        4        4        4
>
> On Tue, Feb 6, 2018 at 5:16 PM, Thomas Davis <tadavis(a)lbl.gov> wrote:
> > I found that you now need hugepage1g support. The error messages
> > are wrong - it's not truly a libvirt problem, it's hugepages1g are
> > missing for libvirt.
> >
> > add something like:
> >
> > default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M
> > hugepages=1024 to the kernel command line.
> >
> > You can also do a 'yum install libhugetlbfs-utils', then do:
> >
> > hugeadm --list
> > Mount Point       Options
> > /dev/hugepages    rw,seclabel,relatime
> > /dev/hugepages1G  rw,seclabel,relatime,pagesize=1G
> >
> > if you do not see the /dev/hugepages1G listed, then vdsmd/libvirt
> > will not start.
> >
> > On Mon, Feb 5, 2018 at 5:49 AM, Frank Rothenstein <f.rothenstein(a)bodden-kliniken.de> wrote:
> > > Hi,
> > >
> > > I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start the
> > > host-processes.
> > > systemctl start vdsmd fails with following lines in journalctl:
> > >
> > > <snip>
> > >
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: vdsm: Running wait_for_network
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: vdsm: Running run_init_hooks
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: vdsm: Running check_is_configured
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net
> > > sasldblistusers2[10440]: DIGEST-MD5 common mech free
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: Error:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: One of the modules is not configured to
> > > work with VDSM.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: To configure the module use the following:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module module-name]'.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: If all modules are not configured try to use:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force'
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: (The force flag will stop the module's
> > > service and start it
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: afterwards automatically to load the new
> > > configuration.)
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: abrt is already configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: lvm is configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: Current revision of multipath.conf
> > > detected, preserving
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: Modules libvirt are not configured
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net
> > > vdsmd_init_common.sh[10414]: vdsm: stopped during execute
> > > check_is_configured task (task returned with error code 1).
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]:
> > > vdsmd.service: control process exited, code=exited status=1
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]:
> > > Failed to start Virtual Desktop Server Manager.
> > > -- Subject: Unit vdsmd.service has failed
> > > -- Defined-By: systemd
> > > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > > --
> > > -- Unit vdsmd.service has failed.
Frank Rothenstein
Systemadministrator
Fon: +49 3821 700 125
Fax: +49 3821 700 190
Internet: www.bodden-kliniken.de
E-Mail: f.rothenstein(a)bodden-kliniken.de
________________________________________________
BODDEN-KLINIKEN Ribnitz-Damgarten GmbH
Sandhufe 2
18311 Ribnitz-Damgarten
Telefon: 03821-700-0
Telefax: 03821-700-240
E-Mail: info(a)bodden-kliniken.de
Internet: http://www.bodden-kliniken.de
Sitz: Ribnitz-Damgarten, Amtsgericht: Stralsund, HRB 2919, Steuer-Nr.: 079/133/40188
Aufsichtsratsvorsitzende: Carmen Schröter, Geschäftsführer: Dr. Falko Milski, MBA
7 years, 2 months
Re: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details.
by George Sitov
Thank you! It was a certificate problem. I returned it to the PKI and everything works now.
On Feb 8, 2018, at 4:44 PM, "Marcin Mirecki" <mmirecki(a)redhat.com> wrote:
Hello George,
Probably your engine and provider certs do not match.
The engine pki should be in:
/etc/pki/ovirt-engine/certs/
The provider keys are defined in the SSL section of the config file
(/etc/ovirt-provider-ovn/conf.d/...):
[SSL]
https-enabled=true
ssl-key-file=...
ssl-cert-file=...
ssl-cacert-file=...
You can compare the keys/certs using openssl.
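For example (a rough sketch - substitute the actual files named by ssl-cert-file
and ssl-cacert-file above; the engine CA path below is the default location):

  # engine CA on the engine host (default path)
  openssl x509 -noout -subject -issuer -fingerprint -in /etc/pki/ovirt-engine/ca.pem

  # certificate the provider presents (the file from ssl-cert-file)
  openssl x509 -noout -subject -issuer -fingerprint -in /path/from/ssl-cert-file

  # CA configured for the provider (the file from ssl-cacert-file)
  openssl x509 -noout -subject -issuer -fingerprint -in /path/from/ssl-cacert-file

The issuer of the provider certificate should match the engine CA; if the
fingerprints point to a different CA, that would explain the verification failure.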
Was the provider created using engine-setup?
For testing purposes you can change the "https-enabled" to false and try
connecting using http.
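A minimal sketch (hostname as in your provider URL; not meant as a permanent
configuration):

  # set https-enabled=false in /etc/ovirt-provider-ovn/conf.d/..., then:
  systemctl restart ovirt-provider-ovn
  curl -v http://ovirt.mydomain.com:9696/

and point the provider URL in the admin portal at http://ovirt.mydomain.com:9696
while testing.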
Thanks,
Marcin
On Thu, Feb 8, 2018 at 12:58 PM, Ilya Fedotov <kosha79(a)gmail.com> wrote:
> Hello, Georgy
>
> Maybe, the problem have the different domain name and name your node
> name(local domain), and certificate note valid.
>
>
>
> with br, Ilya
>
> 2018-02-05 22:36 GMT+03:00 George Sitov <usual.man(a)gmail.com>:
>
>> Hello!
>>
>> I have a problem wiith configure external provider.
>>
>> Edit config file - ovirt-provider-ovn.conf, set ssl parameters.
>> systemctl start ovirt-provider-ovn start without problem.
>> In external proveder in web gui i set:
>> Provider URL: https://ovirt.mydomain.com:9696
>> Username: admin@internal
>> Authentication URL: https://ovirt.mydomain.com:35357/v2.0/
>> But after i press test button i see error - Failed to communicate with
>> the external provider, see log for additional details.
>>
>> /var/log/ovirt-engine/engine.log:
>> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro
>> vider.network.openstack.BaseNetworkProviderProxy] (default task-29)
>> [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response
>> error code: 502)
>> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro
>> vider.TestProviderConnectivityCommand] (default task-29)
>> [69fa312e-6e2e-4925-b081-385beba18a6a] Command '
>> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)
>>
>> In /var/log/ovirt-provider-ovn.log:
>>
>> 2018-02-05 21:33:55,510 Starting new HTTPS connection (1):
>> ovirt.astrecdata.com
>> 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate
>> verify failed (_ssl.c:579)
>> Traceback (most recent call last):
>> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line
>> 126, in _handle_request
>> method, path_parts, content)
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 176, in handle_request
>> return self.call_response_handler(handler, content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler
>> return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 60, in post_tokens
>> user_password=user_password)
>> File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26,
>> in create_token
>> return auth.core.plugin.create_token(user_at_domain, user_password)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py",
>> line 48, in create_token
>> timeout=self._timeout())
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 62, in create_token
>> username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 53, in wrapper
>> response = func(*args, **kwargs)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 46, in wrapper
>> raise BadGateway(e)
>> BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
>> (_ssl.c:579)
>>
>> Whan i do wrong ?
>> Please help.
>>
>> ----
>> With best regards Georgii.
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
7 years, 2 months
ovn problem - Failed to communicate with the external provider, see log for additional details.
by George Sitov
Hello!
I have a problem with configuring an external provider.
I edited the config file - ovirt-provider-ovn.conf - and set the SSL parameters.
systemctl start ovirt-provider-ovn starts without problems.
In the external provider form in the web GUI I set:
Provider URL: https://ovirt.mydomain.com:9696
Username: admin@internal
Authentication URL: https://ovirt.mydomain.com:35357/v2.0/
But after I press the test button I see the error - Failed to communicate with
the external provider, see log for additional details.
/var/log/ovirt-engine/engine.log:
2018-02-05 21:33:55,517+02 ERROR
[org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy]
(default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway
(OpenStack response error code: 502)
2018-02-05 21:33:55,517+02 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Command
'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)
In /var/log/ovirt-provider-ovn.log:
2018-02-05 21:33:55,510 Starting new HTTPS connection (1):
ovirt.astrecdata.com
2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate
verify failed (_ssl.c:579)
Traceback (most recent call last):
File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 126,
in _handle_request
method, path_parts, content)
File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
176, in handle_request
return self.call_response_handler(handler, content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
call_response_handler
return response_handler(content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", line
60, in post_tokens
user_password=user_password)
File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
create_token
return auth.core.plugin.create_token(user_at_domain, user_password)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
48, in create_token
timeout=self._timeout())
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 62,
in create_token
username, password, engine_url, ca_file, timeout)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 53,
in wrapper
response = func(*args, **kwargs)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 46,
in wrapper
raise BadGateway(e)
BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
(_ssl.c:579)
What am I doing wrong?
Please help.
----
With best regards Georgii.
7 years, 2 months
Engine AAA LDAP startTLS Protocol Issue
by Alan Griffiths
Hi,
Trying to configure Engine to authenticate against OpenLDAP and I seem
to be hitting a protocol bug.
Attempts to test the login during the setup fail with
2018-02-07 12:27:37,872Z WARNING Exception: The connection reader was
unable to successfully complete TLS negotiation:
SSLException(message='Received fatal alert: protocol_version',
trace='getSSLException(Alerts.java:208) /
getSSLException(Alerts.java:154) / recvAlert(SSLSocketImpl.java:2033)
/ readRecord(SSLSocketImpl.java:1135) /
performInitialHandshake(SSLSocketImpl.java:1385) /
startHandshake(SSLSocketImpl.java:1413) /
startHandshake(SSLSocketImpl.java:1397) /
run(LDAPConnectionReader.java:301)', revision=0)
Running a packet trace I see that it's trying to negotiate with TLS
1.0, but my LDAP server only supports TLS 1.2.
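A quick way to confirm that from the engine host (assuming your openssl build
supports StartTLS for LDAP; replace ldap.example.com with your server):

  # should be rejected if the server really refuses TLS 1.0
  openssl s_client -connect ldap.example.com:389 -starttls ldap -tls1

  # should complete the handshake
  openssl s_client -connect ldap.example.com:389 -starttls ldap -tls1_2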
This looks like a regression as it works fine in 4.0.
I see the issue in both 4.1 and 4.2
4.1.9.1
4.2.0.2
Should I submit a bug?
Thanks,
Alan
7 years, 2 months
oVirt DR: ansible with 4.1, only a subset of storage domain replicated
by Luca 'remix_tj' Lorenzetto
Hello,
i'm starting the implementation of our disaster recovery site with RHV
4.1.latest for our production environment.
Our production setup is very easy, with self hosted engine on dc
KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our
setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
and EMC VNX8000. Both storage arrays supports replication via their
own replication protocols (SRDF, MirrorView), so we'd like to delegate
to them the replication of data to the remote site, which is located
on another remote datacenter.
In the KVMPD DC we have some storage domains that contain non-critical
VMs, which we don't want to replicate to the remote site (in case of
failure they have a low priority and will be restored from a backup).
In our setup we won't replicate them, so they will not be available for
attachment on the remote site. Can this be an issue? Do we need to
replicate everything?
What about the master domain? Do I need the master storage domain
to stay on a replicated volume, or can it be any of the available ones?
I've seen that since 4.1 there's an API for updating OVF_STORE disks.
Do we need to invoke it with a frequency compatible with the replication
frequency on the storage side? At the moment we set the RPO to 1hr (even if
the planned RPO requires 2hrs). Does OVF_STORE get updated with the required
frequency?
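For reference, this is roughly how I was planning to trigger the update per
storage domain (the updateovfstore action name is my assumption from the 4.1
API docs, and the engine host, credentials and SD UUID are placeholders - I
still have to verify the exact call against /ovirt-engine/api):

  # hypothetical sketch - verify the action name against your engine's API reference
  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<action/>' \
       'https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/updateovfstore'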
I've seen a recent presentation by Maor Lipchuk that is showing the
automagic ansible role for disaster recovery:
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
7 years, 2 months
Spice Newb
by Marshall Mitchell
I'm attempting to get my first install of oVirt going in full swing. I have
all the hosts installed and an engine running. All is smooth. I'm now trying
to connect to the SPICE console with my Remote Viewer and I have no clue how
to figure out what port I should be connecting to. I've been all over the web
via Google looking for a process to install / configure / verify SPICE is
operational, but I've not been lucky. How do I go about connecting / finding
the port numbers for my VMs? I did open the firewall range required.
I appreciate the help.
-Marshall
7 years, 2 months
Clear name_server table entries
by Carlos Rodrigues
Hi,
I'm getting the following problem:
https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3
and after fixing the DNS entries in /etc/resolv.conf on the host, I have too many
entries in the name_server table:
engine=# select count(*) from name_server;
count
-------
31401
(1 row)
I would like to know whether I may delete these entries.
Best regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
7 years, 2 months
Re: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration.
by Petr Kotas
Hi Pym,
the fix is now in testing. I am not sure when it will be released, but
I hope it will be soon.
Petr
On Tue, Feb 6, 2018 at 12:36 PM, Pym <pym0914(a)163.com> wrote:
> Thank you very much for your help, so is this patch released now? Where
> can I get this patch?
>
>
>
>
>
>
> At 2018-02-05 20:52:04, "Petr Kotas" <pkotas(a)redhat.com> wrote:
>
> Hi,
>
> I have experimented on the issue and figured out the reason for the
> original issue.
>
> You are right, that the vm1 is not properly stopped. This is due to the
> known issue in the graceful shutdown introduced in the ovirt 4.2.
> The vm on the host in shutdown are killed, but are not marked as stopped.
> This results in the behavior you have observed.
>
> Luckily, the patch is already done and present in the latest ovirt.
> However, be ware that gracefully shutting down the host, will result in
> graceful shutdown of
> the VMs. This result in engine not migrating them, since they have been
> terminated gracefully.
>
> Hope this helps.
>
> Best,
> Petr
>
>
> On Fri, Feb 2, 2018 at 6:00 PM, Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Thu, Feb 1, 2018 at 1:06 PM, Pym <pym0914(a)163.com> wrote:
>>
>>> The environment on my side may be different from the link. My VM1 can be
>>> used normally after it is started on host2, but there is still information
>>> left on host1 that is not cleaned up.
>>>
>>> Only the interface and background can still get the information of vm1
>>> on host1, but the vm2 has been successfully started on host2, with the HA
>>> function.
>>>
>>> I would like to ask a question, whether the UUID of the virtual machine
>>> is stored in the database or where is it maintained? Is it not successfully
>>> deleted after using the HA function?
>>>
>>>
>> I just encounter a similar behavior:
>> after a reboot of the host 'vdsm-client Host getVMFullList' is still
>> reporting an old VM that is not visible with 'virsh -r list --all'.
>>
>> I filed a bug to track it:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1541479
>>
>>
>>
>>>
>>>
>>>
>>>
>>> 2018-02-01 16:12:16,"Simone Tiraboschi" <stirabos(a)redhat.com> :
>>>
>>>
>>>
>>> On Thu, Feb 1, 2018 at 2:21 AM, Pym <pym0914(a)163.com> wrote:
>>>
>>>>
>>>> I checked the vm1, he is keep up state, and can be used, but on host1
>>>> has after shutdown is a suspended vm1, this cannot be used, this is the
>>>> problem now.
>>>>
>>>> In host1, you can get the information of vm1 using the "vdsm-client
>>>> Host getVMList", but you can't get the vm1 information using the "virsh
>>>> list".
>>>>
>>>>
>>> Maybe a side effect of https://bugzilla.redhat.com
>>> /show_bug.cgi?id=1505399
>>>
>>> Arik?
>>>
>>>
>>>
>>>>
>>>>
>>>>
>>>> 2018-02-01 07:16:37,"Simone Tiraboschi" <stirabos(a)redhat.com> :
>>>>
>>>>
>>>>
>>>> On Wed, Jan 31, 2018 at 12:46 PM, Pym <pym0914(a)163.com> wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>> The current environment is as follows:
>>>>>
>>>>> Ovirt-engine version 4.2.0 is the source code compilation and
>>>>> installation. Add two hosts, host1 and host2, respectively. At host1, a
>>>>> virtual machine is created on vm1, and a vm2 is created on host2 and HA is
>>>>> configured.
>>>>>
>>>>> Operation steps:
>>>>>
>>>>> Use the shutdown -r command on host1. Vm1 successfully migrated to
>>>>> host2.
>>>>> When host1 is restarted, the following situation occurs:
>>>>>
>>>>> The state of the vm2 will be shown in two images, switching from up
>>>>> and pause.
>>>>>
>>>>> When I perform the "vdsm-client Host getVMList" in host1, I will get
>>>>> the information of vm1. When I execute the "vdsm-client Host getVMList" in
>>>>> host2, I will get the information of vm1 and vm2.
>>>>> When I do "virsh list" in host1, there is no virtual machine
>>>>> information. When I execute "virsh list" at host2, I will get information
>>>>> of vm1 and vm2.
>>>>>
>>>>> How to solve this problem?
>>>>>
>>>>> Is it the case that vm1 did not remove the information on host1 during
>>>>> the migration, or any other reason?
>>>>>
>>>>
>>>> Did you also check if your vms always remained up?
>>>> In 4.2 we have libvirt-guests service on the hosts which tries to
>>>> properly shutdown the running VMs on host shutdown.
>>>>
>>>>
>>>>>
>>>>> Thank you.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
>
>
7 years, 2 months
qcow2 images corruption
by Nicolas Ecarnot
Hello,
TL; DR : qcow2 images keep getting corrupted. Any workaround?
Long version:
I have already launched this discussion on the oVirt and qemu-block mailing
lists, under similar circumstances, but I have learned more in the months
since, and here is some information:
- We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS
7.{2,3} hosts
- Hosts :
- CentOS 7.2 1511 :
- Kernel = 3.10.0 327
- KVM : 2.3.0-31
- libvirt : 1.2.17
- vdsm : 4.17.32-1
- CentOS 7.3 1611 :
- Kernel 3.10.0 514
- KVM : 2.3.0-31
- libvirt 2.0.0-10
- vdsm : 4.17.32-1
- Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated
network
- Depends on weeks, but all in all, there are around 32 hosts, 8 storage
domains and for various reasons, very few VMs (less than 200).
- One peculiar point is that most of our VMs are provided an additional
dedicated network interface that is iSCSI-connected to some volumes of
our SAN - these volumes not being part of the oVirt setup. That could
lead to a lot of additional iSCSI traffic.
From time to time, a random VM appears paused by oVirt.
Digging into the oVirt engine logs, then into the host vdsm logs, it
appears that the host considers the qcow2 image as corrupted.
Along what I consider as a conservative behavior, vdsm stops any
interaction with this image and marks it as paused.
Any try to unpause it leads to the same conservative pause.
After having found (https://access.redhat.com/solutions/1173623) the
right logical volume hosting the qcow2 image, I can run qemu-img check
on it.
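For reference, the sequence I use is roughly this (the VG/LV names are
placeholders for the ones found with the method above, and the LV must not be
in use by a running VM while checking):

  # activate the logical volume backing the image (placeholder VG/LV names)
  lvchange -ay storage_domain_vg/image_lv

  # read-only consistency check
  qemu-img check /dev/storage_domain_vg/image_lv

  # attempt a repair of leaked clusters ("-r all", as mentioned below)
  qemu-img check -r all /dev/storage_domain_vg/image_lv

  # deactivate the logical volume again
  lvchange -an storage_domain_vg/image_lv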
- On 80% of my VMs, I find no errors.
- On 15% of them, I find Leaked cluster errors that I can correct using
"qemu-img check -r all"
- On 5% of them, I find Leaked clusters errors and further fatal errors,
which can not be corrected with qemu-img.
In rare cases, qemu-img can correct them, but it destroys large parts of
the image (which becomes unusable); in other cases it cannot correct them
at all.
Months ago, I already sent a similar message but the error message was
about No space left on device
(https://www.mail-archive.com/qemu-block@gnu.org/msg00110.html).
This time, I don't have this message about space, but only corruption.
I kept reading and found a similar discussion in the Proxmox group :
https://lists.ovirt.org/pipermail/users/2018-February/086750.html
https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heav...
What I read that is similar to my case is:
- usage of qcow2
- heavy disk I/O
- using the virtio-blk driver
In the Proxmox thread, they tend to say that using virtio-scsi is the
solution. I have asked this question to the oVirt experts
(https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but
it's not clear whether the driver is to blame.
I agree with the answer Yaniv Kaul gave me, saying I have to properly
report the issue, so I'm keen to know what specific information I
can give you now.
As you can imagine, all this setup is in production, and for most of the
VMs, I cannot "play" with them. Moreover, we launched a campaign of
nightly stopping every VM, running qemu-img check on each one, then booting
it again. So it might take some time before I find another corrupted image
(which I'll preciously store for debugging).
Other information: we very rarely do snapshots, but I can imagine that
automated migrations of VMs could trigger similar behavior on qcow2 images.
Last point about the versions we use : yes that's old, yes we're
planning to upgrade, but we don't know when.
Regards,
--
Nicolas ECARNOT
7 years, 2 months
Re: [ovirt-users] GUI trouble when adding FC datadomain
by Yaniv Kaul
On Feb 2, 2018 1:09 PM, "Roberto Nunin" <robnunin(a)gmail.com> wrote:
Hi Yaniv
Currently Engine is 4.2.0.2-1 on CentOS7.4
I've been using the oVirt Node image 4.2-2017122007.iso
The LUN I need is certainly empty (the second one in the list).
Please file a bug with logs, so we can understand the issue better.
Y.
2018-02-02 13:01 GMT+01:00 Yaniv Kaul <ykaul(a)redhat.com>:
> Which version are you using? Are you sure the LUNs are empty?
> Y.
>
>
> On Feb 2, 2018 11:19 AM, "Roberto Nunin" <robnunin(a)gmail.com> wrote:
>
>> Hi all
>>
>> I'm trying to setup ad HE cluster, with FC domain.
>> HE is also on FC.
>>
>> When I try to add the first domain in the datacenter, I've this form:
>>
>> [image: Immagine incorporata 1]
>>
>> So I'm not able to choose any of the three volumes currently masked
>> towards the chosen host.
>> I've tried all browser I've: Firefox 58, Chrome 63, IE 11, MS Edge, with
>> no changes.
>>
>> Tried to click in the rows, scrolling etc. with no success.
>>
>> Someone has found the same issue ?
>> Thanks in advance
>>
>> --
>> Roberto Nunin
>>
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
--
Roberto Nunin
7 years, 2 months
Migration of a VM from ovirt 3.6 to ovirt 4.2
by eee ffff
Dear ovirt-users,
I would like to copy the VMs that I have now on a running oVirt 3.6 data center to a new oVirt 4.2 data center, located in a different building. An export domain is not an option:
I would need to upgrade the oVirt 3.6 host to 4.2, and since this is an operation I would have to repeat many times, constantly upgrading and downgrading a host just to keep it compatible with the oVirt environment does not make sense.
Do you have other suggestions?
Cheers,
Eli
7 years, 2 months
4.1 slow user portal VM creation
by Staniforth, Paul
Hello,
We are experiencing slow response when trying to create a new VM in the
user portal (for some users the New Virtual Machine page doesn't get
created). Also, the Templates page of the user portal doesn't list the
templates; it just shows the three waiting-to-load icons flashing.
In the admin portal the templates are listed with no problem.
We are running 4.1.9 on the engine and nodes.
Any help appreciated.
Thanks,
Paul S.
7 years, 2 months
Add a disk and set the console for a VM in the user portal
by nicolas@devels.es
Hi,
We recently upgraded to oVirt 4.2.0 and we're testing things so we can
determine whether our production system can also be upgraded. We make
extensive use of the User Portal. I've granted the VmCreator and
DiskProfileUser privileges to a user (the user has a quota as well),
logged in to the user portal, and I can successfully create a VM,
setting its memory and CPUs, but:
1) I can't see a way to change the console type. By default, when the
machine is created, SPICE is chosen as the mechanism, and I'd like to
change it to VNC, but I can't find a way.
2) I can't see a way to add a disk to the VM.
I'm attaching a screenshot of what I see in the panel.
Are some new privileges needed to add a disk or change the console type?
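For reference, this is roughly what I'm trying to achieve, expressed
against the REST API. Treat it only as a sketch: it assumes admin-level
credentials, and the engine URL, VM id and all names below are
illustrative, not taken from our setup:

    URL=https://engine.example.com/ovirt-engine/api   # illustrative engine URL
    AUTH='admin@internal:secret'                      # illustrative credentials
    VMID=123e4567-e89b-12d3-a456-426614174000         # illustrative VM id

    # 1) Switch the VM console from SPICE to VNC
    curl -ks -u "$AUTH" -X PUT -H 'Content-Type: application/xml' \
         -d '<vm><display><type>vnc</type></display></vm>' \
         "$URL/vms/$VMID"

    # 2) Attach a new 10 GiB cow-format disk from the "data" storage domain
    curl -ks -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
         -d '<disk_attachment>
               <bootable>false</bootable>
               <interface>virtio</interface>
               <active>true</active>
               <disk>
                 <name>mydisk</name>
                 <format>cow</format>
                 <provisioned_size>10737418240</provisioned_size>
                 <storage_domains>
                   <storage_domain><name>data</name></storage_domain>
                 </storage_domains>
               </disk>
             </disk_attachment>' \
         "$URL/vms/$VMID/diskattachments"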
Thanks
[Attached screenshot: "Captura de pantalla de 2018-02-06 11-43-35.png" (not reproduced here)]
7 years, 2 months
A possible bug on Fedora 27
by Valentin Bajrami
Hi Community,
Recently we discovered that our VMs became unstable after upgrading
from Fedora 26 to Fedora 27. The journal shows the following:
Jan 29 20:03:28 host1.project.local libvirtd[2741]: 2018-01-29
19:03:28.789+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
Jan 29 20:09:14 host1.project.local libvirtd[2741]: 2018-01-29
19:09:14.111+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
Jan 29 20:10:29 host1.project.local libvirtd[2741]: 2018-01-29
19:10:29.584+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
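For reference, the frequency is easy to check from the journal, and
libvirt's per-VM qemu log usually records why the monitor connection
dropped (the VM name below is illustrative):

    # count monitor EOF errors reported by libvirtd since the current boot
    journalctl -u libvirtd -b | grep -c 'End of file from qemu monitor'

    # the per-VM qemu log usually shows how/why the qemu process went away
    tail -n 50 /var/log/libvirt/qemu/myvm.log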
A similar bug report already exists here:
https://bugzilla.redhat.com/show_bug.cgi?id=1523314 but it doesn't reflect
our problem entirely. That bug seems to be triggered only when a VM is
shut down gracefully. In our case it is triggered without anyone
attempting to shut down a VM. Again, this is making the VMs unstable,
and eventually they shut down by themselves.
Do you have any clue what could be causing this?
--
Kind regards,
Valentin Bajrami
7 years, 2 months